A Mobile App for Wound Localization using Deep Learning

We present an automated wound localizer for 2D wound and ulcer images based on a deep neural network, as a first step towards a complete automated wound diagnostic system. The localizer is built on the YOLOv3 model and deployed as an iOS mobile application. It detects the wound and its surrounding tissue and isolates the localized wound region from the image, which simplifies downstream processing such as wound segmentation and classification by removing irrelevant regions. For mobile app development with video processing, the lighter tiny-YOLOv3 variant is used. The models are trained and tested on our own image dataset, collected in collaboration with the AZH Wound and Vascular Center, Milwaukee, Wisconsin. Compared with an SSD model, YOLOv3 achieves a mAP of 93.9%, substantially better than the SSD model's 86.4%. The robustness and reliability of both models are also evaluated on the publicly available Medetec dataset, where they again perform well.
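As a concrete illustration of the localization step described above, the sketch below runs a Darknet-format YOLOv3 detector on a wound photograph and crops the detected regions for later segmentation or classification. It is a minimal sketch using OpenCV's DNN module; the config/weights file names, the single "wound" class assumption, and the thresholds are placeholders, not the authors' released artifacts.

```python
# Minimal sketch: YOLOv3 wound localization and cropping with OpenCV's DNN module.
# File names below are hypothetical placeholders, not the paper's released model.
import cv2
import numpy as np

CFG, WEIGHTS = "yolov3-wound.cfg", "yolov3-wound.weights"   # hypothetical paths
CONF_THRESH, NMS_THRESH, INPUT_SIZE = 0.5, 0.4, (416, 416)

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)

def localize_wounds(image_bgr):
    """Return (box, crop) pairs for wound regions detected in a BGR image."""
    h, w = image_bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(image_bgr, 1 / 255.0, INPUT_SIZE,
                                 swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, scores = [], []
    for out in outputs:                 # one output array per YOLO detection scale
        for det in out:                 # det = [cx, cy, bw, bh, objectness, class scores...]
            confidence = float(det[4] * det[5:].max())
            if confidence < CONF_THRESH:
                continue
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            scores.append(confidence)

    # Non-maximum suppression to drop overlapping duplicate boxes
    keep = cv2.dnn.NMSBoxes(boxes, scores, CONF_THRESH, NMS_THRESH)
    crops = []
    for i in np.array(keep).flatten():
        x, y, bw, bh = boxes[i]
        x, y = max(x, 0), max(y, 0)
        crops.append((boxes[i], image_bgr[y:y + bh, x:x + bw]))
    return crops
```

For the iOS application, the tiny-YOLOv3 variant mentioned above would presumably be converted to a mobile-friendly format (for example Core ML) rather than run through OpenCV as shown here; that conversion step is an assumption and is not covered by this sketch.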
