Google has trained AI to identify sounds associated with respiratory sickness

Key Takeaways

  • Google’s AI innovation now aids in clinical diagnoses, thanks to training on sound variations.
  • Google’s HeAR model is trained to detect signs of tuberculosis and other pulmonary sickness by analyzing patient coughing sounds.
  • A partner firm is using Google’s HeAR to improve lung assessments and TB diagnosis, showing 94% accuracy.




Google’s AI efforts have expanded dramatically in the couple of years since consumer-facing AI applications gained popularity. Today, Gemini is more than a chatbot on the web: it is tightly integrated into various features and apps on the new Google Pixel 9 series. But the company’s ambitions extend beyond consumer products. A recent report reveals Google is now working with multiple international partners to aid clinical diagnoses with AI trained to recognize ailments by subtle variations in the sound of symptoms like coughing and sneezing.


Besides the marvel of AI itself, Google has achieved a key milestone with Gemini, and more specifically, Gemini Nano. It is a smaller, scaled-down version of the generative AI model that can run on-device on most modern flagship Android phones, making it independent of cellular network instability and other variables associated with cloud processing of AI queries. Bloomberg reports the tech titan has partnered with an Indian start-up, Salcit Technologies, which specializes in enhancing respiratory healthcare with AI, to create a similar solution that will hopefully run on-device eventually.

It’s easy to see where this is going: Google’s on-device AI models can help accelerate respiratory ailment diagnosis in remote areas where primary healthcare and access to expensive medical equipment remain a concern. This partnership has already yielded a result, which Google calls the HeAR model, short for Health Acoustic Representations.


Generative AI to the rescue



HeAR is essentially a foundation AI model from Google trained on 300 million audio clips of coughs, sniffles, sneezes, and breathing, sourced from publicly available content around the world. Although the differences are imperceptible to the untrained ear, recordings from sick patients sound different from a healthy person’s respiration. Google’s training data for HeAR also included 100 million cough sounds to help quickly screen people for diseases like tuberculosis. Salcit Technologies is using HeAR to improve the lung assessments and TB diagnoses delivered by its in-house AI, called Swaasa.

While AI-assisted diagnosis isn’t a replacement for proper clinical assessment and treatment, Swaasa has been approved for use by the Indian medical device regulator. Running as an app on a mobile device, Swaasa needs only a 10-second audio clip of the patient coughing to diagnose ailments with a claimed 94% accuracy. While the method isn’t foolproof and has its own requirements, such as clear recordings, it is already cheaper than the spirometry testing typically used to diagnose TB and other ailments.


That said, Swaasa still relies on cloud processing, and there is ample room for improvement before HeAR can be implemented on-device. Meanwhile, Google is betting on similar AI tech to train foundation models that detect autism based on the sounds an infant makes. Exciting times.


