Young woman showing tongue (Roman Samborskyi/Shutterstock)

AI program shows nearly 97% accuracy in identifying ailments from tongue scans

ADELAIDE, Australia — An ancient medical technique could be making a high-tech comeback. Researchers from Australia and Iraq have developed a cutting-edge system that can diagnose diseases just by looking at your tongue. This isn’t your grandmother’s “stick out your tongue and say ‘ah’” — it’s artificial intelligence meets traditional Chinese medicine, and it could revolutionize the way we detect illnesses.

For over two thousand years, practitioners of traditional Chinese medicine have been examining patients’ tongues as a window into their overall health. The color, texture, and coating of the tongue can reveal a wealth of information about what’s going on inside the body. Now, a team led by Ali Al-Naji, an adjunct associate professor from Middle Technical University and the University of South Australia, has brought this ancient practice into the 21st century with a system that uses machine learning algorithms to analyze tongue images and predict potential health issues with astonishing accuracy.

The system, described in a recent paper published in the journal Technologies, combines a simple webcam with sophisticated image processing and artificial intelligence to examine tongue characteristics in real time. It’s like having a highly trained traditional Chinese medicine practitioner available 24/7 but with the added power of computer vision and big data analysis.

How does it work?

Imagine you’re feeling under the weather and wondering if you should see a doctor. Instead of scheduling an appointment or heading to urgent care, you could simply sit in front of your computer, stick out your tongue, and let the AI do its work. The system captures an image of your tongue, processes it through various color models, and compares it to a vast database of tongue images associated with different health conditions.

Within seconds, the system can tell you if your tongue looks healthy or if it’s showing signs of potential issues like diabetes, respiratory problems, or even early stages of certain cancers. It’s not meant to replace proper medical diagnosis, but it could serve as an early warning system, prompting you to seek professional medical attention when needed.

A researcher demonstrates how a camera captures images of the tongue and analyzes them for disease. (Credit: Middle Technical University)

The researchers trained their system on over 5,000 tongue images, categorizing them into seven different colors: red, yellow, green, blue, gray, white, and pink (the color of a healthy tongue). Each color can indicate different health conditions. For example, a yellow tongue might suggest diabetes or liver issues, while a purple tongue could be a sign of poor circulation or even cancer.

“The color, shape and thickness of the tongue can reveal a litany of health conditions,” Ali Al-Naji says in a statement. “Typically, people with diabetes have a yellow tongue; cancer patients a purple tongue with a thick greasy coating; and acute stroke patients present with an unusually shaped red tongue. A white tongue can indicate anemia; people with severe cases of COVID-19 are likely to have a deep red tongue; and an indigo or violet colored tongue indicates vascular and gastrointestinal issues or asthma.”

What sets this system apart from previous attempts at computerized tongue diagnosis is its ability to account for different lighting conditions. Anyone who’s tried to take a good selfie knows that lighting can dramatically affect how colors appear in photos. The same is true for tongue images. By training their algorithms on images taken under various light intensities, the team created a system that can accurately assess tongue color regardless of ambient lighting.
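The paper's solution is to train on images captured under several light intensities rather than to correct the images themselves. For contrast, a classic algorithmic alternative is gray-world white balance, sketched below; this is illustrative only and is not the method the authors used:

```python
import numpy as np

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Gray-world white balance: scale each channel so all channel means match.

    img: H x W x 3 float RGB array with values in [0, 1].
    Illustrative only -- the study instead trains its classifiers on
    images taken under several different light intensities.
    """
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean brightness
    scale = means.mean() / means              # pull each channel toward gray
    return np.clip(img * scale, 0.0, 1.0)

# Example: an image with a reddish color cast
img = np.full((4, 4, 3), (0.8, 0.4, 0.4))
balanced = gray_world_balance(img)
```

After balancing, the three channel means are equal, which removes a uniform color cast but can distort scenes that are legitimately dominated by one color (one reason a learning-based approach may be preferable for tongue images).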

The researchers tested several machine-learning algorithms to find the most accurate method for classifying tongue colors. The winner? A technique called Extreme Gradient Boosting (XGBoost), which achieved an impressive 98.71% accuracy in identifying tongue colors.

But the real test came when the team put their system to work in real-world hospital settings. They examined 60 patients with various conditions at two hospitals in Iraq. The results were remarkable — the system correctly diagnosed 58 out of 60 cases for a real-world accuracy rate of 96.6%.

Imagine the possibilities of such a system. In regions with limited access to healthcare, this could serve as a quick, non-invasive screening tool to help identify people who need further medical attention. It could also be a valuable tool for monitoring chronic conditions or tracking the progression of diseases over time.

Of course, as with any new technology, there are limitations and ethical considerations to address. The researchers are quick to point out that their system is not meant to replace trained medical professionals. Rather, it’s intended as a complementary tool to aid in early detection and monitoring of health issues.

There’s also the question of privacy and data security. The system requires capturing and storing images of people’s tongues, which could be considered sensitive medical information. Ensuring that this data is protected and used ethically will be crucial as this technology develops further.

Despite these challenges, the potential benefits of this AI-powered tongue diagnosis system are enormous. It combines the wisdom of ancient medical practices with the power of modern technology, potentially offering a quick, non-invasive, and accessible method for early disease detection.

As we continue to search for ways to make healthcare more efficient and accessible, innovations like this remind us that sometimes the answers we seek have been right under our noses – or, in this case, right on the tip of our tongues – all along.

Paper Summary

Methodology

The researchers used a simple webcam to capture images of patients’ tongues. They then used computer software to process these images, focusing on the center part of the tongue. The images were analyzed using five different color models (RGB, YCbCr, HSV, LAB, and YIQ) to get detailed color information.
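A minimal sketch of this kind of color-feature extraction is shown below: take a center crop, average the color, and express it in several color spaces. It uses only numpy and the standard-library colorsys module; the crop fraction, feature layout, and restriction to RGB/HSV/YCbCr are assumptions for illustration, not the authors' code (the paper also uses LAB and YIQ):

```python
import colorsys
import numpy as np

def tongue_color_features(img: np.ndarray, crop_frac: float = 0.5) -> np.ndarray:
    """Build a simple color feature vector from the center of a tongue image.

    img: H x W x 3 uint8 RGB image.
    Returns the mean color of the center crop expressed in RGB, HSV,
    and YCbCr (ITU-R BT.601) -- 9 values in total.
    """
    h, w, _ = img.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw].astype(np.float64) / 255.0

    # Mean color of the central region in RGB
    r, g, b = crop[..., 0].mean(), crop[..., 1].mean(), crop[..., 2].mean()

    # HSV from the mean RGB (colorsys operates on floats in [0, 1])
    hsv = colorsys.rgb_to_hsv(r, g, b)

    # YCbCr per ITU-R BT.601 (full-range, on [0, 1] inputs)
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.5 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 0.5 + 0.5 * r - 0.418688 * g - 0.081312 * b

    return np.array([r, g, b, *hsv, y, cb, cr])

# Example: a synthetic pinkish "tongue" image
img = np.full((120, 160, 3), (230, 150, 160), dtype=np.uint8)
feats = tongue_color_features(img)
```

Representing the same region in multiple color spaces gives the downstream classifier complementary views of the same pixels, e.g. hue and saturation separated from brightness.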

This data was then fed into six different machine-learning algorithms, which were trained on a large dataset of tongue images associated with various health conditions. The system was designed to work under different lighting conditions to ensure accuracy in real-world settings.

Key Results

Of the six machine learning algorithms tested, the Extreme Gradient Boosting (XGBoost) method performed the best, with an accuracy of 98.71% in correctly identifying tongue colors. When tested on 60 real patients in hospitals, the system correctly diagnosed 58 cases, achieving a 96.6% accuracy rate in real-world conditions. The system was able to identify various conditions, including diabetes, mycotic infections, asthma, and COVID-19, based on tongue color analysis.
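The classification step can be sketched as a standard supervised-learning pipeline. The toy example below uses scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost (the xgboost library's XGBClassifier exposes a similar fit/predict interface), with synthetic color features rather than the study's data:

```python
# Toy sketch of the tongue-color classification step. The paper uses XGBoost;
# scikit-learn's GradientBoostingClassifier serves as a stand-in here. The
# features and labels below are synthetic, not the study's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
COLORS = ["red", "yellow", "green", "blue", "gray", "white", "pink"]

# Synthetic per-class mean-RGB features: one random color center per class,
# plus small noise (a stand-in for features from real tongue images).
centers = rng.uniform(0.0, 1.0, size=(len(COLORS), 3))
X = np.vstack([c + rng.normal(0.0, 0.02, size=(200, 3)) for c in centers])
y = np.repeat(np.arange(len(COLORS)), 200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = GradientBoostingClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # fraction of held-out colors correct
```

Held-out accuracy on a test split, as computed here, is the same kind of metric behind the paper's reported 98.71% figure, though the study's features, data, and tuning are of course far richer than this sketch.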

Study Limitations

The study faced several challenges. Some patients were reluctant to participate, limiting the amount of data that could be collected. Camera reflections sometimes affected the detected colors, which could impact diagnosis accuracy. The system also hasn’t been tested on a large, diverse population, which would be necessary to ensure its effectiveness across different ethnicities and age groups.

Discussion & Takeaways

This study demonstrates the potential of combining traditional diagnostic methods with modern AI technology. The high accuracy rates, both in controlled settings and real-world applications, suggest that this could be a valuable tool for early disease detection and monitoring.

However, the researchers emphasize that this system is not meant to replace professional medical diagnosis but rather to complement existing healthcare practices. Future research could focus on refining the system to account for camera reflections and testing it on larger, more diverse populations.

Funding & Disclosures

The paper states that this research received no external funding. The study was conducted in accordance with the Declaration of Helsinki and approved by the Human Research Ethics Committee at the Ministry of Health and Environment, Training and Human Development Centre, Iraq. Written informed consent was obtained from all participants. The authors declared no conflicts of interest.

About StudyFinds Analysis

Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better than) field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.

Our Editorial Process

StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We neither agree nor disagree with any of the studies we post; rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.

Our Editorial Team

Steve Fink

Editor-in-Chief

John Anderer

Associate Editor
