With its Health Acoustic Representations (HeAR) project, Google has unveiled an AI technology that could revolutionize audio-based illness identification. The system analyzes subtle acoustic cues, such as coughs, sniffles, and breathing patterns, to find early indicators of disease. According to a recent Bloomberg report, the AI model was trained on a large dataset of 300 million audio samples, making it highly adept at identifying patterns linked to different ailments.
The early diagnosis of tuberculosis (TB), a disease that kills 4,500 people every day globally and accounts for a quarter of a million deaths annually in India, is the main objective of this technology. Because the HeAR model has been trained on 100 million cough sounds, it shows particular promise for TB detection. By examining minute variations in cough patterns, the AI may be able to detect the disease at an early stage, which is critical for effective treatment and for halting its transmission.
To bring this technology to smartphones, Google has partnered with Salcit Technologies, an Indian company that specializes in AI for respiratory healthcare. The partnership aims to make the AI system more accessible to high-risk populations, particularly in areas with scarce healthcare resources. Incorporating the technology into mobile devices could greatly improve screening capacity in underserved and remote places, where conventional diagnostic methods are frequently unavailable or scarce.
Salcit Technologies is using Google's AI advances to improve its own machine learning system, Swaasa, which has already been approved by India's medical device regulator. The Swaasa app lets users upload a 10-second cough sample for screening and claims a 94% accuracy rate in disease detection. This integration shows how Google's AI could be applied in the real world to enhance existing diagnostic tools.
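To make the reported workflow concrete, the sketch below shows the general pattern such a screening pipeline could follow: a short cough recording is converted into a fixed-length acoustic feature vector, and a small classifier trained on labeled examples turns that vector into a risk score. This is an illustrative assumption only, not Google's HeAR model or Salcit's Swaasa implementation; the MFCC features (standing in for a learned embedding), the logistic-regression classifier, and all file names and labels are hypothetical.

```python
# Illustrative sketch only -- NOT the actual HeAR or Swaasa pipeline.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def cough_features(path: str, sr: int = 16000, clip_seconds: float = 10.0) -> np.ndarray:
    """Load a ~10-second cough clip and summarize it as a fixed-length vector."""
    audio, _ = librosa.load(path, sr=sr, duration=clip_seconds)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)  # shape: (20, frames)
    # Mean and std over time give one 40-dimensional vector per clip;
    # a foundation model like HeAR would supply a richer learned embedding here.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Hypothetical training data: cough recordings with clinician-confirmed labels
# (1 = TB-positive, 0 = TB-negative). These files are placeholders.
train_paths = ["cough_001.wav", "cough_002.wav", "cough_003.wav", "cough_004.wav"]
train_labels = np.array([1, 0, 1, 0])

X_train = np.stack([cough_features(p) for p in train_paths])
classifier = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

# Screening: score a newly uploaded 10-second cough sample.
new_clip = cough_features("uploaded_cough.wav").reshape(1, -1)
risk = classifier.predict_proba(new_clip)[0, 1]
print(f"Estimated TB risk score: {risk:.2f}")
```

In practice, the accuracy figures cited in the article would depend on far larger labeled datasets and clinically validated models; the sketch only illustrates why a 10-second clip is enough input for this kind of screening.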
The new technology still faces a number of challenges, including gaining acceptance in clinical settings, ensuring that audio samples are of sufficient quality, and building user familiarity, especially in remote locations with limited access to technology. These obstacles must be addressed before the technology can be widely adopted.
Google is also exploring other uses for its bioacoustic AI technology. In partnership with Chang Gung Memorial Hospital in Taiwan, for example, it is investigating similar techniques for early ultrasound-based breast cancer screening.
In conclusion, by harnessing AI and audio analysis, Google's HeAR project marks a major step forward in healthcare technology, with the potential to transform disease detection, especially in underserved areas.
SOURCE: TIMES OF INDIA