Imaging and Simulation Laboratory

Worcester Polytechnic Institute

Smartphone-based System for Wound Assessment

Chronic wound care is costly, time-consuming, and inconvenient for patients. Assessment is done manually, using standardized scales and indices based on visual examination. Such evaluation is somewhat subjective and is not always recorded in a consistent format. Travel to wound clinics is often a hardship, especially for mobility-impaired patients.


We have developed a smartphone app that tracks the area and healing status of chronic wounds, along with other relevant health parameters. The app first identifies the wound boundary and determines the wound area without human intervention, using either mean-shift segmentation (for faster processing) or machine learning based on statistical analysis of wound and non-wound tissue texture (for wound recognition under more challenging circumstances). The app then performs a color analysis of the wound tissue, computes a healing score from the result, and displays the trends in wound area and healing score.

The smartphone app can be used in the wound clinic for any type of chronic wound at any location. Alternatively, for patients with chronic wounds on the soles of their feet, the app can be used in the patients’ home in connection with an image capture box, allowing them to take a more active role in daily wound care and assessment, with the potential to accelerate wound healing, save travel costs and reduce healthcare expenses.


For the app to operate on the smartphone alone, the computational requirements must match the capabilities of the smartphone. A well-suited algorithm is the mean-shift segmentation method. We have shown this approach to be efficient to implement, independent of prior knowledge, and reasonably accurate under controlled lighting and range conditions, as achieved with the image capture box, given a set of well-tuned parameters.


The mean-shift algorithm takes the spatial continuity of the image into account by expanding the original 3D color range space to a 5D feature space that includes two spatial components, since direct classification of the pixel colors alone proved inefficient. The quality of the segmentation is easily controlled by the spatial and color range resolution parameters, and the segmentation can be adapted to different degrees of skin-color smoothness by changing these parameters. Finally, the mean-shift filtering algorithm is well suited to parallel implementation, since the basic processing unit is the pixel; in this case, the high computational efficiency of GPUs can be exploited.
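As a minimal illustration of the idea, the following pure-Python sketch filters a tiny RGB image in the joint 5D spatial-range domain using a flat kernel. The radius parameters hs and hr, the convergence tolerance, and the loop structure are illustrative simplifications, not the actual on-phone implementation.

```python
# Minimal sketch of mean-shift filtering in the joint 5D
# spatial-range domain (x, y, R, G, B). Parameter names (hs, hr)
# and the flat-kernel choice are illustrative assumptions.

def mean_shift_filter(image, hs=2.0, hr=40.0, max_iter=20, tol=1e-3):
    """Return a filtered copy of `image` (list of rows of (r,g,b) tuples)."""
    h, w = len(image), len(image[0])
    # Build the 5D feature points: two spatial + three color components.
    points = [(x, y) + image[y][x] for y in range(h) for x in range(w)]
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            cx, cy = float(x), float(y)
            cr, cg, cb = (float(c) for c in image[y][x])
            for _ in range(max_iter):
                # Flat kernel: average all points within the spatial
                # radius hs AND the color-range radius hr.
                sx = sy = sr = sg = sb = 0.0
                n = 0
                for (px, py, pr, pg, pb) in points:
                    if (px - cx) ** 2 + (py - cy) ** 2 > hs ** 2:
                        continue
                    if (pr - cr) ** 2 + (pg - cg) ** 2 + (pb - cb) ** 2 > hr ** 2:
                        continue
                    sx += px; sy += py; sr += pr; sg += pg; sb += pb
                    n += 1
                if n == 0:
                    break
                nx, ny = sx / n, sy / n
                nr, ng, nb = sr / n, sg / n, sb / n
                shift = abs(nr - cr) + abs(ng - cg) + abs(nb - cb)
                cx, cy, cr, cg, cb = nx, ny, nr, ng, nb
                if shift < tol:
                    break
            # Assign the converged mode's color to the pixel.
            out[y][x] = (cr, cg, cb)
    return out
```

On a toy image with a red region and a dark region, the color-range radius keeps the two modes apart, so each region converges to its own color mode rather than blurring across the boundary.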


Machine learning algorithms are computationally more demanding, but can be trained when a large data set of segmented wounds is available. To improve performance, we have developed a two-stage wound recognition approach based on support vector machine (SVM) binary classifiers. This approach consists of three major steps: (i) unsupervised super-pixel segmentation, (ii) feature descriptor extraction for each super-pixel, and (iii) supervised classifier-based wound boundary determination. The experimental results show that this approach performs well on foot ulcer images captured with our image capture box.


Our methodology uses the simple linear iterative clustering (SLIC) method to segment the images into a number of super-pixels, extracts significant color and texture features from these super-pixels, and applies Principal Component Analysis (PCA) to reduce the dimensionality of the feature space. The first stage trains k SVM binary classifiers on partially different training images, while the second stage trains one SVM binary classifier on the incorrectly classified test instances from the first stage.
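The two-stage training logic can be sketched schematically as follows. To keep the example self-contained, a trivial nearest-centroid classifier stands in for the SVMs, and raw feature vectors stand in for the PCA-reduced super-pixel descriptors; the subset-rotation scheme and helper names are illustrative assumptions, not the actual training protocol.

```python
# Schematic sketch of the two-stage classifier scheme: stage 1 trains
# k classifiers on partially different training subsets, stage 2
# trains one classifier on the instances stage 1 misclassifies.
# A nearest-centroid classifier is a stand-in for the SVMs here.

def fit_centroid(X, y):
    """Per-class mean feature vectors; assumes both labels 0 and 1 occur."""
    def mean(rows):
        n = len(rows)
        return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]
    return (mean([x for x, lab in zip(X, y) if lab == 0]),
            mean([x for x, lab in zip(X, y) if lab == 1]))

def predict_centroid(model, x):
    c0, c1 = model
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 <= d1 else 1

def train_two_stage(X, y, k=3):
    # Stage 1: k classifiers, each trained on a partially different
    # subset (here: leave out every k-th sample, rotating the offset).
    stage1 = []
    for j in range(k):
        Xj = [x for i, x in enumerate(X) if i % k != j]
        yj = [lab for i, lab in enumerate(y) if i % k != j]
        stage1.append(fit_centroid(Xj, yj))

    def vote(x):
        votes = sum(predict_centroid(m, x) for m in stage1)
        return 1 if 2 * votes > len(stage1) else 0

    # Stage 2: one classifier trained only on the "hard" instances
    # that the stage-1 majority vote gets wrong.
    hard = [(x, lab) for x, lab in zip(X, y) if vote(x) != lab]
    if hard and len({lab for _, lab in hard}) == 2:
        Xh, yh = zip(*hard)
        stage2 = fit_centroid(list(Xh), list(yh))
    else:
        stage2 = None  # stage 1 alone suffices (or stage 2 untrainable)
    return stage1, stage2
```

The design point this illustrates is that the second stage sees a different, harder data distribution than the first, which is what lets it correct the first stage's systematic errors.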


To generalize the application of our system, a conditional random field (CRF) based model is applied to the wound boundary determination. The key modules in this approach are TextonBoost-based potential learning at different scales and efficient CRF model inference to find the optimal labeling.


We use a non-generative (discriminative) approach to directly model the conditional probability of the labels given the image, so that fewer labeled images are required and the model resources are directly relevant to the task of inferring labels. This is the key idea underlying the conditional random field (CRF). The conditional probability model can depend on arbitrary, non-independent characteristics of the observation, unlike a generative image model, which is forced to account for dependencies in the image and therefore requires strict independence assumptions to make inference tractable.
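As a toy illustration of CRF-style labeling, the sketch below combines a simple intensity-based unary term (standing in for the learned TextonBoost potentials) with a pairwise Potts smoothness term, and minimizes the resulting energy with iterated conditional modes (ICM) rather than the efficient inference used in the actual system. All parameters are illustrative.

```python
# Toy CRF-style binary labeling on a pixel grid: unary data term +
# pairwise Potts smoothness term, minimized greedily with ICM.
# The intensity-based unary term and ICM are simplified stand-ins
# for the TextonBoost potentials and the actual inference method.

def icm_labeling(image, threshold=128, beta=0.3, sweeps=5):
    """Binary wound/non-wound labeling of a 2D intensity image."""
    h, w = len(image), len(image[0])

    def unary(y, x, label):
        # Data term: cost of `label` given the normalized pixel value.
        v = image[y][x] / 255.0
        return (1.0 - v) if label == 1 else v

    # Initialize from a plain threshold, then smooth with ICM sweeps.
    labels = [[1 if image[y][x] >= threshold else 0 for x in range(w)]
              for y in range(h)]
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                best_lab, best_cost = labels[y][x], float("inf")
                for lab in (0, 1):
                    cost = unary(y, x, lab)
                    # Potts pairwise term: penalty beta for each
                    # 4-connected neighbor with a different label.
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and labels[ny][nx] != lab:
                            cost += beta
                    if cost < best_cost:
                        best_lab, best_cost = lab, cost
                labels[y][x] = best_lab
    return labels
```

The pairwise term is what a per-pixel classifier lacks: an isolated noisy pixel inside an otherwise uniform region gets relabeled to agree with its neighbors, because the smoothness penalty outweighs its slightly ambiguous data term.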


The RYB (red-yellow-black) wound classification model is a consistent, simple assessment model for evaluating wounds that represents the different phases on the continuum of the wound healing process: red tissue is viewed as inflammatory; yellow tissue implies infection or slough that is not ready to heal; and black tissue indicates a necrotic state. The pixels within the wound boundary are clustered into these three categories using the k-means clustering algorithm.
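The clustering step can be illustrated with a small pure-Python k-means, seeded at nominal red, yellow and black RGB values; the seed colors and iteration count are illustrative choices, not clinical definitions or the app's actual settings.

```python
# Sketch of the RYB color clustering step: k-means with k = 3,
# seeded at nominal red, yellow, and black RGB values (illustrative
# choices, not the app's actual parameters).

def ryb_cluster(pixels, iters=10):
    """Cluster wound pixels (r, g, b) into red/yellow/black fractions."""
    centers = [(200.0, 30.0, 30.0),   # red: inflammatory tissue
               (200.0, 180.0, 60.0),  # yellow: slough / infection
               (20.0, 20.0, 20.0)]    # black: necrotic tissue
    names = ("red", "yellow", "black")

    def nearest(p):
        return min(range(3), key=lambda i: sum(
            (a - b) ** 2 for a, b in zip(p, centers[i])))

    for _ in range(iters):
        # Assignment step: each pixel joins its nearest center.
        groups = [[], [], []]
        for p in pixels:
            groups[nearest(p)].append(p)
        # Update step: move each center to its cluster mean.
        for i, g in enumerate(groups):
            if g:  # keep the seed if a cluster is empty
                centers[i] = tuple(sum(c[j] for c in g) / len(g)
                                   for j in range(3))
    return {names[i]: len(groups[i]) / len(pixels) for i in range(3)}
```

The returned per-category fractions are exactly the quantities a color-based healing assessment needs: the share of the wound bed that is red, yellow, or black.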


We have developed a simple, empirical algorithm that emulates the healing evaluation of three expert wound specialists, utilizing the wound area and the color segmentation results.
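One plausible shape for such a score is sketched below; the weights, the 0-10 scale, and the clamping are hypothetical placeholders, not the values actually fitted to the specialists' evaluations.

```python
# Hypothetical form of an empirical healing score combining the wound
# area trend with the RYB color composition. All weights and the 0-10
# scale are illustrative placeholders, not the fitted values.

def healing_score(area_now, area_prev, red_frac, yellow_frac, black_frac,
                  w_area=5.0, w_color=5.0):
    """Higher is better: a shrinking wound with red tissue scores high."""
    # Area term: relative shrinkage since the previous assessment,
    # clamped to [0, 1] (a growing wound contributes 0).
    shrink = max(0.0, min(1.0, (area_prev - area_now) / area_prev))
    # Color term: red tissue is favorable, yellow mildly unfavorable,
    # black strongly unfavorable; again clamped to [0, 1].
    color = max(0.0, min(1.0, red_frac - 0.5 * yellow_frac - black_frac))
    return w_area * shrink + w_color * color
```

A shrinking, predominantly red wound thus scores near the top of the scale, while a growing wound with substantial black tissue scores near zero.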



For type 2 diabetic patients with a chronic wound on the sole of the foot, the smartphone app can be used at home, either by the patients themselves or by a caregiver. An image capture box allows easy capture of an image of a wound on the sole of the foot, and guarantees uniform lighting and a fixed distance between the smartphone camera and the wound. The wound area and healing score data can be set for automatic upload to the patient’s wound clinic.

Our image capture box is specifically designed to aid patients with type 2 diabetes in photographing ulcers occurring on the soles of their feet. To this end, we designed and built an image capture box with an optical system containing a dual set of front-surface mirrors, integrated LED lighting and a comfortable, slanted surface on which patients place their foot. The design ensures consistent illumination and a fixed optical path length between the sole of the foot and the camera, so that pictures captured at different times are taken from the same camera angle and under the same lighting conditions.