Analyzing psoriasis skin disease in HSA KIT

Eight million patients in Germany (800 million worldwide) suffer from chronic skin diseases. Approximately 2% of these patients suffer from psoriasis and 2% from neurodermatitis. At the dermatology clinics of the University Medical Centre Mannheim (UMM) and the University Hospital Würzburg (UKM), this leads to at least 10,000 patient contacts per year. Patients wait an average of more than six months for a university treatment appointment. Access to the health care system, and thus to optimal and fast therapy, is therefore difficult.

The HSA KIT deep learning software is able to detect skin affected by psoriasis. Psoriasis is a chronic autoimmune disease that primarily affects the skin, causing it to develop red, scaly patches. Specialists at HS Analysis use advanced deep learning software to support the diagnosis of this disease in clinics, institutions, and health care facilities.

Knowing your psoriasis type helps your healthcare provider create a treatment plan. Most people experience one type at a time, but it is possible to have more than one type of psoriasis.

Ground Truth Data (GTD)

Ground Truth Data (GTD) refers to data that is manually annotated or labeled and used to train, validate, or test machine learning models. For a 2D image, Ground Truth Data consists of precise annotations or labels that describe the objects, patterns, or features found in the image.

For instance, in object detection scenarios the Ground Truth Data for a 2D image would involve bounding boxes or segmentation masks around each object of interest, together with a corresponding class label for each object. In image classification tasks, the Ground Truth Data would consist of the class labels assigned to the image.
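As a minimal sketch, such Ground Truth Data for one image can be stored as a structured record, loosely following the widely used COCO convention. All file names, class IDs, and coordinates below are hypothetical examples, not the actual HSA KIT format.

```python
# Sketch of Ground Truth Data for one 2D image (COCO-like layout).
# File name, IDs, and coordinates are illustrative assumptions.
ground_truth = {
    "image": {"file_name": "patient_001_img_042.jpg", "width": 1024, "height": 768},
    "categories": [
        {"id": 1, "name": "skin"},
        {"id": 2, "name": "hyperpigmentation"},
        {"id": 3, "name": "inflamed"},
    ],
    "annotations": [
        {   # one annotated region: class label + bounding box + polygon mask
            "category_id": 3,
            "bbox": [120, 80, 200, 150],          # x, y, width, height
            "segmentation": [[120, 80, 320, 80, 320, 230, 120, 230]],
        },
    ],
}

def class_counts(gt):
    """Count annotations per class name."""
    id_to_name = {c["id"]: c["name"] for c in gt["categories"]}
    counts = {}
    for ann in gt["annotations"]:
        name = id_to_name[ann["category_id"]]
        counts[name] = counts.get(name, 0) + 1
    return counts

print(class_counts(ground_truth))  # {'inflamed': 1}
```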

The following table lists the annotation classes per file, together with the total annotation counts:

| File | Annotated images (Base ROI) | Skin annotations | % of total | Hyperpigmentation annotations | % of total | Inflamed annotations | % of total |
|-------|------|------|--------|-----|-------|------|--------|
| 001   | 88   | 124  | 1.45%  | 118 | 1.38% | 401  | 4.70%  |
| 002   | 94   | 134  | 1.57%  | 53  | 0.62% | 435  | 5.10%  |
| 003   | 97   | 104  | 1.22%  | 43  | 0.50% | 315  | 3.69%  |
| 004   | 94   | 152  | 1.78%  | 99  | 1.16% | 330  | 3.87%  |
| 006   | 92   | 130  | 1.52%  | 52  | 0.61% | 426  | 4.99%  |
| 007   | 96   | 125  | 1.47%  | 23  | 0.27% | 211  | 2.47%  |
| 009   | 88   | 124  | 1.45%  | 0   | 0.00% | 302  | 3.54%  |
| 010   | 88   | 101  | 1.18%  | 15  | 0.18% | 304  | 3.56%  |
| 011   | 91   | 106  | 1.24%  | 38  | 0.45% | 238  | 2.79%  |
| 013   | 89   | 119  | 1.40%  | 8   | 0.09% | 516  | 6.05%  |
| 014   | 92   | 115  | 1.35%  | 8   | 0.09% | 246  | 2.89%  |
| 015   | 84   | 105  | 1.23%  | 18  | 0.21% | 392  | 4.59%  |
| 016   | 90   | 107  | 1.25%  | 2   | 0.02% | 232  | 2.72%  |
| 017   | 98   | 115  | 1.35%  | 24  | 0.28% | 175  | 2.05%  |
| 018   | 86   | 109  | 1.28%  | 1   | 0.01% | 328  | 3.85%  |
| 020   | 90   | 116  | 1.36%  | 0   | 0.00% | 193  | 2.26%  |
| 021   | 75   | 95   | 1.11%  | 18  | 0.21% | 237  | 2.78%  |
| 022   | 94   | 118  | 1.38%  | 4   | 0.05% | 176  | 2.06%  |
| 023   | 89   | 108  | 1.27%  | 33  | 0.39% | 308  | 3.61%  |
| Total | 1715 | 2207 | 25.87% | 557 | 6.53% | 5765 | 67.58% |

The total number of all annotations is 8529.

The first column lists the file numbers; each file consists of 100 images. The second column gives the number of annotated images (the base region of interest); some of the 100 images were excluded for specific reasons. The third and fourth columns give the skin annotations and their percentage of the total annotations; the fifth and sixth columns give the hyperpigmentation annotations and their percentage; and the seventh and eighth columns give the inflamed (plaque) annotations and their percentage. The second-to-last row gives the totals for each class, and the last row gives the total number of all annotations.
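The percentage columns can be recomputed directly from the counts. As a sketch, using the counts of file 001 and the grand total of 8529 annotations:

```python
# Sketch: recomputing the percentage columns of the annotation table.
# The counts below are taken from file 001; the grand total is 8529.
TOTAL_ANNOTATIONS = 8529

def pct(count, total=TOTAL_ANNOTATIONS):
    """Share of the grand total, rounded to two decimals as in the table."""
    return round(100.0 * count / total, 2)

print(pct(124))  # skin annotations of file 001 -> 1.45
print(pct(118))  # hyperpigmentation of file 001 -> 1.38
print(pct(401))  # inflamed annotations of file 001 -> 4.7
```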

Dataset Selection

After creating the GTD, the settings in this table were used to train a model with three different architectures.

| Model type   | Dataset approach | Epochs | Learning rate | Batch size |
|--------------|------------------|--------|---------------|------------|
| Segmentation | Full image       | 100    | 0.0001        | 2          |
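The table's settings can be expressed as a configuration object. This is only a sketch: the table fixes epochs, learning rate, and batch size; anything beyond that (such as the optimizer) is an assumption.

```python
# Sketch of the training settings from the table above as a config dict.
# Only epochs, learning rate, and batch size come from the table;
# everything else would be an assumption.
config = {
    "model_type": "segmentation",
    "dataset_approach": "full_image",
    "epochs": 100,
    "learning_rate": 1e-4,
    "batch_size": 2,
}

def total_steps(num_images, cfg=config):
    """Number of optimizer steps for a dataset of num_images images."""
    steps_per_epoch = -(-num_images // cfg["batch_size"])  # ceiling division
    return steps_per_epoch * cfg["epochs"]

# For the 1715 annotated images above: ceil(1715 / 2) = 858 steps per epoch.
print(total_steps(1715))  # 85800
```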

Artificial Intelligence

HSA KIT is built on the development of AI machine learning and deep learning methods. Artificial intelligence refers to the simulation of human intelligence in machines that are programmed to perform tasks that typically require human intelligence: AI systems are designed to perceive their environment, reason about it, and take appropriate actions to achieve specific goals. Deep learning can be used to identify skin diseases by leveraging its ability to learn intricate patterns and features from large datasets.

The HS Analysis Touch

One key technology for the automatic interpretation of tissue samples in the HS Analysis software is the latest artificial intelligence. We are developing deep learning in the cloud to analyze smartphone images in 2D, as well as surface features (heatmaps, and thus 3D) with CNNs. We are able to create ground truth data and train models for both the detection of skin and the detection of plaques.
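At the core of every CNN mentioned above is the 2D convolution. As a minimal illustration (real models stack many learned layers; this shows only the basic operation), a small hand-written kernel responds strongly at intensity boundaries, such as the edge between skin and a plaque:

```python
import numpy as np

# Minimal sketch of the basic CNN building block: a 2D convolution
# (implemented as cross-correlation, as deep learning frameworks do).
def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation of a grayscale image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A horizontal-difference kernel fires at the vertical boundary
# between a dark region (0) and a bright region (1).
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
kernel = np.array([[-1.0, 1.0]])
print(conv2d(img, kernel))  # nonzero only in the column at the boundary
```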

How we develop our AI: first, we collect real image data and annotate it by color. For skin we use yellow, and we annotate all visible skin areas in the picture to obtain the best possible AI model with a minimum of mistakes.

Annotation Examples

In the next step we annotate the inflamed areas on the skin in red, and then we train the model to be able to automatically detect the plaques.
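A color-coded annotation image like this can be converted into per-class binary masks for training. The sketch below assumes the color convention described above (yellow for skin, red for inflamed areas); the exact RGB values are assumptions.

```python
import numpy as np

# Sketch: turning a color-coded annotation image into per-class binary
# masks. The color-to-class mapping follows the convention in the text;
# the exact RGB values are assumptions.
COLOR_TO_CLASS = {
    (255, 255, 0): "skin",      # yellow
    (255, 0, 0): "inflamed",    # red
}

def masks_from_annotation(rgb):
    """Return {class_name: boolean mask} from an H x W x 3 annotation image."""
    masks = {}
    for color, name in COLOR_TO_CLASS.items():
        masks[name] = np.all(rgb == np.array(color, dtype=np.uint8), axis=-1)
    return masks

# Tiny 2 x 2 example: one skin pixel, one inflamed pixel, two background.
ann = np.zeros((2, 2, 3), dtype=np.uint8)
ann[0, 0] = (255, 255, 0)
ann[1, 1] = (255, 0, 0)
m = masks_from_annotation(ann)
print(m["skin"].sum(), m["inflamed"].sum())  # 1 1
```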

Results

This is the result of the skin and plaques models after training, applied to other skin areas with different opacity:

Here we see a segmentation of the skin in orange, detected by the first deep learning model, and in red the plaques detected by the second deep learning model. Both trained models can detect all skin and inflamed areas.
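Such a result image can be produced by alpha-blending the predicted masks over the photo. The colors and the opacity value below are illustrative assumptions:

```python
import numpy as np

# Sketch: alpha-blending a predicted mask over the photo, as in the
# result image described above. Color and opacity are assumptions.
def overlay(image, mask, color, alpha=0.4):
    """Blend `color` into `image` wherever `mask` is True."""
    out = image.astype(float).copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * np.array(color, dtype=float)
    return out.astype(np.uint8)

img = np.full((2, 2, 3), 200, dtype=np.uint8)   # plain gray "photo"
skin = np.array([[True, True], [True, False]])  # predicted skin mask
result = overlay(img, skin, color=(255, 165, 0), alpha=0.5)  # orange
print(result[0, 0], result[1, 1])  # blended pixel vs. untouched pixel
```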

Model Training

We train each model individually. Then we test and optimize the models: we use our testing dataset to evaluate how well each AI model performs at distinguishing psoriasis from other skin conditions and from normal skin.
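A standard way to score a segmentation model on a test set is an overlap metric between predicted and ground-truth masks. The text does not state which metric HSA KIT uses, so the Dice coefficient below is an illustrative choice:

```python
import numpy as np

# Sketch of the Dice coefficient, a common segmentation evaluation
# metric (1.0 = perfect agreement between prediction and ground truth).
def dice(pred, target, eps=1e-7):
    """Dice overlap between two boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

gt   = np.array([[1, 1, 0], [0, 1, 0]])   # ground-truth mask
pred = np.array([[1, 0, 0], [0, 1, 0]])   # model prediction
print(round(dice(pred, gt), 3))  # 0.8
```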

This image shows the process of developing the AI. It is only one example: we apply the model to various images and skin areas, and this image merely illustrates the development process.

During training we use datasets consisting of various images; each image is processed 100 times.

Types of Training include:

  • Classification: assign objects to different classes
  • Object Detection: detect object and draw bounding box around it
  • Segmentation: detect object and draw exact border around object Instance
  • Instance Segmentation: segmentation + differentiate between touching objects

Augmentations

  • Horizontal Flip
    Suitable for: Natural scenes, animals, objects without a specific orientation.
    Not suitable for: Text, scenes with a clear left-to-right or right-to-left context, images with directional signs, etc.
  • Vertical Flip
    Suitable for: Reflections in water, some abstract art.
    Not suitable for: Most real-world images, as a vertical flip can make them look unnatural. For example, flipping a person upside down.
  • Rotation
    Small rotations (e.g., ±10°) can be suitable for most images to simulate the effect of tilting a camera.
    Large rotations (e.g., 90°, 180°) can change the context and might not be suitable for all images. For instance, rotating a portrait of a person by 90° or 180° would look odd.
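The three augmentations listed above can be sketched with plain array operations. In practice a library such as albumentations or torchvision is typically used; this only shows the underlying transforms:

```python
import numpy as np

# Sketch of the three augmentations listed above, applied with NumPy.
def horizontal_flip(img):
    """Mirror the image left-to-right."""
    return img[:, ::-1]

def vertical_flip(img):
    """Mirror the image top-to-bottom."""
    return img[::-1, :]

def rotate_90(img, times=1):
    """Rotate counterclockwise in 90-degree steps; small-angle
    rotations would additionally require interpolation."""
    return np.rot90(img, k=times)

img = np.array([[1, 2],
                [3, 4]])
print(horizontal_flip(img))  # [[2, 1], [4, 3]]
print(vertical_flip(img))    # [[3, 4], [1, 2]]
print(rotate_90(img))        # [[2, 4], [1, 3]]
```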

We selected the Instance Segmentation model type, which detects objects and draws an exact border around each one while also differentiating between touching objects. The structures depend on which AI we want to train, skin or inflamed areas. We use augmentations (horizontal flip, vertical flip, and rotation) to prevent overfitting, and we train each new model version based on the existing versions.

The Future Of HSA KIT's AI

A very important aspect of this project is improving the AI model in the future. There are many ideas we can integrate and develop to improve the quality and versatility of the model, so that it can detect skin and plaques in many different photos with different types of noise and artefacts, and still produce accurate results. Looking at these challenges, we find that the artefacts come in several types; the task for the AI is to detect the correct targets even when they are present in the image.

Overcoming these challenges in the future will make the AI model extremely accurate, with state-of-the-art annotations.

Here are some of the types of noise and artefacts that we want to improve on:

Image blurring

Blurring is one of the most common problems that hinder correct detection; solving it will greatly help to separate skin and plaques from other background objects.
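A common heuristic for flagging blurred images is the variance of the Laplacian: it drops as an image gets blurrier. The sketch below implements it with plain NumPy; any threshold for "too blurry" is an assumption that would need tuning on real smartphone photos.

```python
import numpy as np

# Sketch of the variance-of-Laplacian blur heuristic: sharp images have
# strong local intensity changes, so the Laplacian response varies a lot;
# blurred or flat images give a low variance.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def laplacian_variance(gray):
    """Variance of the Laplacian response of a 2D grayscale image."""
    h, w = gray.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(gray[i:i + 3, j:j + 3] * LAPLACIAN)
    return out.var()

sharp = np.tile([0.0, 1.0], (8, 4))   # high-frequency stripe pattern
blurry = np.full((8, 8), 0.5)         # perfectly flat image
print(laplacian_variance(sharp) > laplacian_variance(blurry))  # True
```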

Lighting and Shadows

Another very common artefact is the presence of light and shadow, and the resulting contrast differences in the image, which distort the detection.

Out of focus images

Almost all of the images used to train the AI model come from smartphones, and sometimes a patient sends images that are out of focus.

Edges and Borders Accuracy

For an excellent and highly accurate model, annotating the edges and borders accurately is very important, so that no unwanted objects or pixels are included that would affect the model's accuracy.

Unwanted Images

The model should be able to automatically exclude images that are not usable for annotation, whether because they contain no skin, show the identity of the patient, or are inappropriate or private images.

Explainable Artificial Intelligence (xAI)

The primary goal of xAI is to achieve understandable AI decisions. This is realized through methods such as Feature Visualization, Feature Attribution, and the use of Surrogate Models. These techniques aim to visually show which parts of the data the model deems important, assign scores to individual data features based on their impact on the output, and approximate complex model decisions using simpler, interpretable models. The importance of xAI cannot be overstated; it fosters trust, aids in model validation, and ensures compliance with regulations that mandate transparency in automated decisions.

Activation Matrices

These are multidimensional arrays representing neuron output values in neural network layers, often seen in convolutional layers of a CNN. They provide insights into how input data is processed and transformed within the network. Visualizing these as heatmaps can aid in understanding the network’s feature detection and is useful for debugging and optimization.
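Visualizing an activation matrix as a heatmap usually just means normalizing the neuron outputs of one channel into a fixed range before mapping them to colors. A minimal sketch:

```python
import numpy as np

# Sketch: turning one activation matrix (the output of a convolutional
# layer for a single channel) into a normalized heatmap for visualization.
def to_heatmap(activation, eps=1e-8):
    """Min-max normalize an activation matrix to the [0, 1] range."""
    a = activation.astype(float)
    return (a - a.min()) / (a.max() - a.min() + eps)

act = np.array([[0.0, 2.0],
                [4.0, 8.0]])
heat = to_heatmap(act)
print(heat)  # values scaled into [0, 1]; the strongest response maps to ~1
```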

Class Activation Mapping (CAM)

CAM’s chief objective is to pinpoint which regions in an image play a pivotal role in determining its classification by a CNN. This is accomplished by leveraging the weights from the global average pooling layer in a CNN to generate a heatmap of the image, emphasizing the crucial regions. By identifying the regions in an image that significantly influence its classification, CAM serves as a powerful tool for the visual interpretation of CNN decisions, ensuring the model’s focus on the correct image features.
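The CAM computation itself is a weighted sum: the heatmap for a class is the sum of the last convolutional feature maps, each weighted by the connection from its global-average-pooled value to that class. A toy sketch with hypothetical feature maps and weights:

```python
import numpy as np

# Sketch of the CAM computation: heatmap = sum over channels of
# (class weight for channel c) * (feature map of channel c).
def class_activation_map(feature_maps, class_weights):
    """feature_maps: (C, H, W); class_weights: (C,) -> heatmap (H, W)."""
    return np.tensordot(class_weights, feature_maps, axes=([0], [0]))

# Toy example: 2 feature maps of size 2 x 2 for a single class.
fmaps = np.array([[[1.0, 0.0],
                   [0.0, 0.0]],
                  [[0.0, 0.0],
                   [0.0, 1.0]]])
weights = np.array([0.9, 0.1])   # this class relies mostly on map 0
cam = class_activation_map(fmaps, weights)
print(cam)  # highest value where the heavily weighted map is active
```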

This video shows how we annotated the skin and the inflamed areas. After the annotation process, we trained both the skin and plaques models, tested them, and applied them to different skin areas.
