Ophthalmic Tumor detection in HSA KIT

HS Analysis is a market-leading company in ophthalmology focused on human-machine interaction (HMI). Its deep learning-based assistant, the HSA KIT, includes the module “nAmdClassifier” for nAMD research. This module is designed to automatically assess the malignancy of choroidal melanocytic lesions using color fundus photographs (CFP).

Fundus imaging is a widely accessible tool in outpatient settings, whereas multimodal imaging is often reserved for specialized clinics. Therefore, utilizing the HSA KIT software with the nAmdClassifier module, which is based on CFP, offers a resource- and cost-effective approach for pre-stratification.

Our deep learning models, HyperTumorEyeNet and HyperNAmdNet, based on the single imaging technique CFP, have achieved an accuracy similar to the MOLES score, which relies on complex multimodal imaging that is not available everywhere. Early detection of malignant disease is critical, as treatment of metastatic disease is rarely effective. On the other hand, immediate treatment of indeterminate lesions could result in potential vision loss.

By using the nAmdClassifier module, there is potential to improve the accuracy of non-invasive diagnostics, while minimizing the risks associated with pathological confirmation, including tumor cell seeding, iatrogenic retinal detachment, and vitreous hemorrhage.

Data and Implementation

The OphthalTumorClassifier module was developed using data from individuals diagnosed with a choroidal melanocytic lesion at a tertiary clinic between 2011 and 2023. The cohort included 762 eligible cases.

The deep learning-based assistant, integrated into the software, was trained on a dataset of 762 color fundus photographs (CFP) of choroidal lesions captured by various fundus cameras. The dataset was augmented using several augmentation methods and classified into the following categories (see the sketch after this list):

  • Benign nevus
  • Untreated choroidal melanoma
  • Irradiated choroidal melanoma
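
The exact augmentation pipeline used in the HSA KIT is not published. As a hedged illustration, the snippet below sketches how such a CFP dataset could be loaded and augmented with standard torchvision transforms, assuming the three categories are stored in class-named folders; all paths and parameters are hypothetical.

```python
# Minimal sketch (not the HSA KIT implementation): load CFPs organized in
# class-named folders and apply standard augmentations with torchvision.
import torch
from torchvision import datasets, transforms

# Hypothetical folder layout:
#   cfp_dataset/benign_nevus/*.png
#   cfp_dataset/untreated_melanoma/*.png
#   cfp_dataset/irradiated_melanoma/*.png
train_transforms = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=15),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

train_set = datasets.ImageFolder("cfp_dataset", transform=train_transforms)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)
print(train_set.classes)  # class names derived from the folder names
```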

The reference standard for evaluation was established by retinal specialists using multimodal imaging.

Imaging Technology and Model Variability

The nAmdClassifier module in the HSA KIT was implemented for two different fundus imaging technologies and includes two different DL models, depending on the location of the lesions. Unlike the pseudocolor imaging used by Optos devices, the Clarus device offers true-color imaging, which increases variability among the images. A sketch of such device-dependent model selection follows below.
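
How the module selects between its two models is not described in detail here; the following is an illustrative sketch only, with placeholder models and an assumed metadata field, of routing each image to the classifier trained on the matching imaging technology.

```python
# Illustrative sketch (names and wiring are assumptions, not the HSA KIT API):
# one classifier per imaging technology, chosen from the image metadata so that
# true-color (Clarus) and pseudocolor (Optos) photographs are handled by the
# model trained on matching data.
import torch
import torch.nn as nn

class PlaceholderClassifier(nn.Module):
    """Stand-in for a trained lesion classifier with three output classes."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.head = nn.LazyLinear(n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x.flatten(1))

# Hypothetical registry, one entry per fundus imaging technology.
MODELS = {"clarus": PlaceholderClassifier(), "optos": PlaceholderClassifier()}

def classify(image_tensor: torch.Tensor, device_name: str) -> torch.Tensor:
    """Route the image to the device-specific model and return class probabilities."""
    model = MODELS[device_name.lower()]   # raises KeyError for unsupported devices
    return model(image_tensor).softmax(dim=1)
```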

Performance and Conclusion

The HSA KIT with the nAmdClassifier module demonstrates excellent performance in discerning the malignancy of choroidal lesions. The software shows promise for resource-efficient and cost-effective pre-stratification for nAMD patients.

In conclusion, the nAmdClassifier module assesses the malignancy of choroidal lesions with satisfactory discriminative performance using only CFP, comparable to experienced retinal specialists using multimodal imaging, in a fully automated, reproducible, and objective manner.

Modalities used for diagnosis

The world of intraocular imaging is exploding with new technology. For years, fundus photography, fluorescein angiography, and ocular ultrasonography prevailed, but now there are even more choices, including microimaging modalities such as optical coherence tomography (OCT), OCT angiography (OCTA), fundus autofluorescence (FAF), and indocyanine green angiography (ICGA). There are also the macroimaging technologies computed tomography (CT) and magnetic resonance imaging (MRI). Yet to be explored fully for use with intraocular tumors are multispectral imaging, dark adaptation, and adaptive optics.

The most effective tool for assessing intraocular tumors is indirect ophthalmoscopy in the hands of an experienced ocular oncologist. Numerous factors go into the equation of tumor diagnosis, such as lesion configuration, surface contour, associated tumor seeding, presence and extent of subretinal fluid, shades of tumor coloration, intrinsic vascularity, and others that hint at the diagnosis and direct our thoughts on management. For example, an orange-yellow mass deep to the retinal pigment epithelium (RPE) or in the choroid with surrounding hemorrhage and/or exudation would be suspicious for peripheral exudative hemorrhagic chorioretinopathy (versus choroidal metastasis from renal cell carcinoma or cutaneous melanoma). The combination of features leads to pattern recognition.


Detection and Analysis

HS Analysis’s software is a useful tool to aid in the early detection and analysis of choroidal tumors. With its assistance, primary care providers and ophthalmologists can potentially improve their ability to differentiate between tumors and non-tumor conditions, ultimately leading to earlier diagnoses and more effective treatment options.

The software uses various techniques, such as image recognition algorithms and machine learning models, to identify and classify tumors based on characteristics such as size, shape, and location. It also helps to compare images of a patient’s tumor over time to detect any changes that may indicate tumor growth or progression.

Overall, the development of HS Analysis’s software represents a promising step towards improving the early detection and analysis of choroidal tumors, which can ultimately lead to better patient outcomes and a reduction in morbidity and mortality.


Importance of Ophthalmic tumor detection in the HSA software

  • The technician can classify a few tumor cells so they can be used as ground truth data (GTD) for training a Deep Learning model.
  • AI algorithms incorporated into the program quickly locate and categorize tumors using Deep Learning models.
  • This makes early detection of tumors faster and more accurate.

How the Ophthalmic tumor model functions

  • We simply categorize which images include tumors before and after radiation and which do not, as seen in the image above.
  • The program then develops a Deep Learning model together with an ophthalmologist, which can automatically recognize tumors in other images.
  • Non-tumor images and tumor images before and after radiation are automatically distinguished using HyperNAmdNet (see the sketch below).
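
As a hedged illustration of that last step (this is not the HSA KIT’s internal code; class labels, checkpoint path, and preprocessing are assumptions), a trained classifier could be applied to a new fundus photograph like this:

```python
# Minimal inference sketch: apply a fine-tuned classifier to a single fundus
# photograph and return one of the three categories. All names are hypothetical.
import torch
from PIL import Image
from torchvision import models, transforms

CLASSES = ["non_tumor", "tumor_before_radiation", "tumor_after_radiation"]

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

model = models.efficientnet_b0(weights=None)
model.classifier[1] = torch.nn.Linear(model.classifier[1].in_features, len(CLASSES))
# model.load_state_dict(torch.load("hyper_namd_net.pt"))  # hypothetical checkpoint
model.eval()

def predict(image_path: str) -> str:
    """Return the predicted category for a single fundus photograph."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probabilities = model(x).softmax(dim=1).squeeze(0)
    return CLASSES[int(probabilities.argmax())]
```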

Optimization of deep learning: HyperOphalTumorNet as a state-of-the-art network in ophthalmology

HyperTumorEyeNet (Type 1) and HyperNAmdNet were implemented based on EfficientNet, a convolutional neural network (CNN) architecture that uniformly scales depth, width, and resolution using a fixed set of scaling coefficients, rather than scaling them arbitrarily as is commonly done. HyperNAmdNet was implemented using features built into the HSA AI Cockpit.
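
The exact configuration of these networks is not published. The snippet below is a minimal sketch, assuming a torchvision EfficientNet backbone and hypothetical hyperparameters, of how such a compound-scaled classifier could be set up and fine-tuned for the three lesion categories.

```python
# Sketch only: an EfficientNet backbone (compound-scaled CNN) fine-tuned for
# 3-class fundus classification. Variant choice and hyperparameters are assumed.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # benign nevus / untreated melanoma / irradiated melanoma

# EfficientNet variants (b0..b7) differ only in the compound scaling
# coefficients applied to network depth, width, and input resolution.
model = models.efficientnet_b3(weights=models.EfficientNet_B3_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_one_epoch(loader):
    """One pass over the training DataLoader (e.g. the one sketched above)."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```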

The results show that the model achieved an accuracy of 96%, a precision of 92.59%, a recall of 99.9%, an F1 score of 96.15%, and a Cohen’s kappa of 92%.
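
For reference, all of these metrics can be computed directly with scikit-learn; the snippet below uses toy labels purely to illustrate the calls, not to reproduce the reported numbers.

```python
# Compute the reported evaluation metrics from predicted vs. true labels.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, cohen_kappa_score)

y_true = [0, 1, 1, 0, 1, 0]   # toy ground-truth labels (0 = benign, 1 = malignant)
y_pred = [0, 1, 1, 0, 1, 1]   # toy model predictions

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("kappa    :", cohen_kappa_score(y_true, y_pred))
```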


xAI in the HSA KIT

An Explainable AI (xAI) technique in the HSA KIT software was used to locate class-specific image regions.

The steps taken to implement xAI with HSA KIT Software technique include: loading an image, pre-processing the image, loading the pre-trained model, passing the processed image through the model to generate predictions, defining the target class, computing gradients, applying global average pooling, generating the heatmap, and finally, overlaying the heatmap on the original image to visualize the areas contributing to the target class prediction.
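
These steps match the general Grad-CAM recipe. The sketch below follows the same sequence under assumptions of its own (a torchvision EfficientNet with ImageNet weights and hooks on its last convolutional block); it is not the HSA KIT’s internal implementation.

```python
# Grad-CAM-style sketch: compute a heatmap of the image regions that drive the
# prediction for a chosen target class, then normalize it for overlaying.
import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import models, transforms

model = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.IMAGENET1K_V1)
model.eval()

# Hooks capture the activations and gradients of the last convolutional block.
activations, gradients = {}, {}
layer = model.features[-1]
layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

def grad_cam(image_path: str, target_class: int) -> np.ndarray:
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    scores = model(x)                         # forward pass / predictions
    model.zero_grad()
    scores[0, target_class].backward()        # gradients of the target class
    weights = gradients["g"].mean(dim=(2, 3), keepdim=True)  # global average pooling
    cam = F.relu((weights * activations["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = cam.squeeze().detach().numpy()
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized heatmap

# The normalized heatmap can then be colorized and blended over the original
# image to visualize the areas contributing to the target class prediction.
```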

As an ophthalmologist, you can understand the decisions of an already trained deep learning model, as well as which direction should be optimized to obtain the best-quality model during the training process. In this iterative process you can see exactly which features should be activated, or how much more data of a specific class should be used, to obtain the best-quality results. The figure below shows tumor images as identified by the model; warm colors (yellow, red) mark regions within the tumor tissue sample that exhibit higher pixel activation as detected by the model.

The xAI results showed that the method accurately identifies regions of high pixel activity, represented by warm colors. They also indicate that HyperNAmdNet yields accurate and precise results, making it an excellent option for medical diagnosis.


HS Analysis interoperability with hardware/software devices

HSA KIT matches perfectly with the following devices for ophthalmic solutions:

Medical experts working with this device perform retinal diagnostic procedures that achieve a unique level of diagnostic accuracy and deliver images with the clearest details, thus providing a dependable basis for first-class treatment outcomes.

Combined with HSA’s deep learning and the AI algorithms incorporated into the software kit, it allows medical professionals to work more efficiently on diagnostic procedures and reach end results faster and more accurately.

This device from ZEISS was developed as a comprehensive ultra-wide-angle fundus camera for ophthalmologists. It records ultra-wide-angle images in true color and with first-class image quality, and offers the full range of imaging modalities, including fluorescence angiography.

Output from this device can be analyzed using the HSA KIT to annotate and highlight tumor cells and abnormal tissue faster using deep learning algorithms.

This confocal scanning laser ophthalmoscope is a widefield digital imaging device that can capture images of the retina from the central pole to the far periphery. The retinal images are captured automatically and in a patient-friendly manner, with no scleral depression or contact with the cornea.

The images captured by these devices (new and old) will be scanned and analyzed using the HSA KIT to generate a highlighted and annotated output image of tumor cells.

The SPECTRALIS® is an ophthalmic imaging platform with an upgradable, modular design. This platform allows clinicians to configure each SPECTRALIS to the specific diagnostic workflow in the practice or clinic.

Multimodal imaging options include: OCT, multiple scanning laser fundus imaging modalities, widefield and ultra-widefield, scanning laser angiography and OCT angiography.

These images can then be used in the HSA software to highlight tumor cells accurately and in a short time.

OptosAdvance is a comprehensive image management solution for eyecare. It enables clinicians to review, annotate, securely refer and archive images from many eyecare diagnostic devices in their practices using a single, industry-standard DICOM solution.
This, along with the AI technology incorporated within the HSA software, will allow healthcare professionals to use both software solutions swiftly and obtain accurate results efficiently.


Publications

Charité 2024: Using Deep Learning to Distinguish Highly Malignant Uveal Melanoma from Benign Choroidal Nevi.

Abstract

Background: This study aimed to evaluate the potential of human–machine interaction (HMI) in a deep learning software for discerning the malignancy of choroidal melanocytic lesions based on fundus photographs.

Methods: The study enrolled individuals diagnosed with a choroidal melanocytic lesion at a tertiary clinic between 2011 and 2023, resulting in a cohort of 762 eligible cases. A deep learning-based assistant integrated into the software underwent training using a dataset comprising 762 color fundus photographs (CFPs) of choroidal lesions captured by various fundus cameras. The dataset was categorized into benign nevi, untreated choroidal melanomas, and irradiated choroidal melanomas. The reference standard for evaluation was established by retinal specialists using multimodal imaging. Trinary and binary models were trained, and their classification performance was evaluated on a test set consisting of 100 independent images. The discriminative performance of deep learning models was evaluated based on accuracy, recall, and specificity.

Results: The final accuracy rates on the independent test set for multi-class and binary (benign vs. malignant) classification were 84.8% and 90.9%, respectively. Recall and specificity ranged from 0.85 to 0.90 and 0.91 to 0.92, respectively. The mean area under the curve (AUC) values were 0.96 and 0.99, respectively. Optimal discriminative performance was observed in binary classification with the incorporation of a single imaging modality, achieving an accuracy of 95.8%.

Conclusions: The deep learning models demonstrated commendable performance in distinguishing the malignancy of choroidal lesions. The software exhibits promise for resource-efficient and cost-effective pre-stratification.

https://www.mdpi.com/2077-0383/13/14/4141
https://doi.org/10.3390/jcm13144141

Modules Used

Ophthalmic Tumor detection in HSA KIT
