Machine Vision & Health
Together with LLMs (large language models) like ChatGPT, machine vision is one of the most important recent developments in AI. The various methods used to achieve it, such as neural networks and deep learning, are now allowing computers to truly see their environment.
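At the heart of these vision systems is the convolution operation, in which a small kernel is slid across an image to detect local patterns such as edges. Below is a minimal, illustrative pure-Python sketch (strictly speaking a cross-correlation, as in most deep-learning libraries); the tiny hand-made "image" and the classic Sobel kernel are just for demonstration:

```python
# Illustrative sketch: the convolution at the heart of machine vision.
# A Sobel kernel responds strongly at vertical edges, which is roughly how
# the earliest layers of a vision network begin to "see" structure.

SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def convolve2d(image, kernel):
    """Valid-mode 2D convolution (no padding), pure Python."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A tiny 5x5 "image": dark on the left, bright on the right.
image = [[0, 0, 10, 10, 10]] * 5

edges = convolve2d(image, SOBEL_X)
# The response peaks exactly where the dark/bright boundary sits.
print(edges[0])  # -> [40, 40, 0]
```

A deep network stacks many such filters, learning their weights from data instead of using hand-designed kernels like Sobel.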
One application of this idea is letting computers take over human tasks like driving, or giving robots full autonomy in the real world instead of confining them to factory floors.
These applications are among the largest drivers of machine vision adoption, a quickly growing market expected to reach $9.2B by 2029.
But another equally important application is assessing our health. Many medical diagnoses rely on doctors’ visual assessments, from direct examination to expert-level analysis of radiographs, scans, and other medical images.
In most cases, these visual inspections are somewhat subjective. A doctor needs many years of experience to become truly confident in their assessments. And until now, the slight anatomical differences between patients made automated measurement impractical.
This is finally changing, and many companies are now exploring deploying AI machine vision to create superior analyses of medical images.
Direct Visual Examination
We recently explored the case of using AI to detect ear infections better than human doctors in our article “AI Poised to Become Invaluable Medical Diagnosis Tool”.
This can be expanded to many other pathologies; for example, Google’s AI imaging can be used to diagnose 26 different skin diseases (80% of cases seen in primary care), detect diabetic retinopathy, or even predict the risk of developing diabetic retinopathy in the future.
More surprisingly, AI also appears able to detect health issues from visual cues that humans cannot use; for example, detecting anemia (which normally requires a blood test) from a look at the eye’s retina. The same method could also be used to predict heart attack risk.
It can also detect lesions that a human eye would miss; for example, the greater accuracy of Iterative Health’s AI can help detect gastric and colorectal cancers.
Medical Image Analysis
The more refined medical imagery from X-rays, CT scans, echography, and other modalities becomes, the more complex the image analysis.
Advanced AI can now help in improving image quality, data handling, and data analysis.
For example, Cleerly is using AI to analyze Coronary Computed Tomography Angiography (CCTA) data to “generate a 3D model of the patient’s coronary arteries, identify their lumen and vessel walls, locate and quantify stenoses, as well as identify, quantify and categorize plaque.”
Butterfly Network is changing how echography is done, thanks to semiconductor chips replacing piezoelectric sensors and AI creating mobile 3D models of the patient’s internal organs.
GE HealthCare’s Verisound AI gives practitioners real-time guidance, automated image capture, and dedicated specialized AIs for general imaging, cardiology, breast ultrasounds, women’s health, nerves, kidneys, liver, etc.
Google’s AIs are also used to help detect breast cancer and save oncologists’ time, helping alleviate labor shortages.
ChestEye and ChestLink from Oxipit double-check radiology images for potentially missed lung tumors.
Surgery
AI can also help surgeons in many ways.
One is surgery guidance, especially in robot-assisted surgery, where the AI can provide extra information or advice to the surgeon in real time.
“Based on its review of millions of surgical videos, AI has the ability to anticipate the next 15 to 30 seconds of an operation and provide additional oversight during the surgery.”
Dr. Eckhoff – artificial intelligence and innovation fellow at the Surgical Artificial Intelligence and Innovation Laboratory at Massachusetts General Hospital
An example is France’s Pixee Medical, whose system allows 3D tracking with a smartphone or smart glasses for orthopedic surgery.
AI can also automate the crucial task of documenting the surgery, especially its repetitive and error-prone recording steps. Surgical instruments are left inside patients in about 1,500 US surgeries per year; computer vision could help eliminate this issue by automatically keeping track of everything used during the operation.
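The bookkeeping behind such a system can be sketched simply. In this hypothetical example, a vision model would emit "entered" and "removed" events for each detected instrument; here the events are hard-coded for illustration:

```python
# Hypothetical sketch of the bookkeeping layer behind instrument tracking.
# In a real system, each event would come from a vision model detecting an
# instrument entering or leaving the surgical field.

class InstrumentTracker:
    def __init__(self):
        self.in_use = set()  # instruments currently inside the surgical field

    def entered(self, instrument):
        self.in_use.add(instrument)

    def removed(self, instrument):
        self.in_use.discard(instrument)

    def closing_check(self):
        """Return instruments still unaccounted for before closing."""
        return sorted(self.in_use)

tracker = InstrumentTracker()
for event, item in [("in", "sponge-1"), ("in", "forceps-2"),
                    ("out", "forceps-2"), ("in", "sponge-3"),
                    ("out", "sponge-3")]:
    (tracker.entered if event == "in" else tracker.removed)(item)

print(tracker.closing_check())  # -> ['sponge-1'] : never seen leaving
```

The hard part in practice is the detection itself, not the bookkeeping; occluded or partially visible instruments are what make this a machine vision problem.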
Data Management
Patient Data
Although it may be less spectacular than spotting a tumor, performing a diagnosis without human intervention, or assisting surgeons, managing patient data can be equally important for patients’ recovery.
The first task AI can help with is facial recognition, linking a patient’s face to their ID and medical files. There is always a risk of mismatched identities in the high-speed, high-stress work environment of most hospitals. This technology is more general-purpose and can be deployed by vendors of facial recognition software, like Facia, for example.
The second step is integrating all the possible data for a patient into one system. For example, it might have images from the radiology department (and its associated software), general patient files, documents from previous treatments in another hospital, pharmacy prescriptions, etc.
Integrating all these data together can be very complex, and AI tools like CloudMedxHealth could help realize all the potential of medical digital data.
Medical data can also be aggregated at a higher level, such as the way Komodo Health creates an overview of the whole U.S. healthcare system.
Monitoring
Another segment where AI can help is monitoring the hospital environment. This can include compliance with hygiene practices (cleaning, masks, etc.) or any other procedure that needs automatic monitoring.
Similar monitoring can be provided to patients at home to check on their recovery. This can make things like physical therapy possible at home, making it not only more practical for patients and doctors but also cheaper for the healthcare system at large.
Home monitoring can also include fall detection, with the automatic calling of emergency services if needed.
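A common approach to fall detection runs pose estimation on the camera feed and watches for a sudden, sustained drop in a body keypoint. The sketch below assumes a hypothetical upstream model supplying the hip keypoint’s height above the floor each frame; the threshold and frame counts are illustrative, not tuned values:

```python
# Hypothetical fall-detection heuristic: a pose-estimation model would supply
# the hip keypoint's height (metres above floor) per frame; a sudden,
# sustained drop below a threshold is flagged as a possible fall.

def detect_fall(hip_heights, threshold=0.3, sustained_frames=3):
    """Flag a fall when the hip stays below `threshold` for several frames."""
    low_streak = 0
    for height in hip_heights:
        low_streak = low_streak + 1 if height < threshold else 0
        if low_streak >= sustained_frames:
            return True
    return False

walking = [0.95, 0.94, 0.96, 0.95, 0.93]
sitting = [0.95, 0.55, 0.50, 0.52, 0.51]        # lowered, but above threshold
fall    = [0.95, 0.90, 0.40, 0.15, 0.12, 0.10]  # sudden sustained drop

print(detect_fall(walking), detect_fall(sitting), detect_fall(fall))
# -> False False True
```

Requiring the drop to be sustained over several frames is what separates a fall from sitting down or from a single noisy keypoint estimate.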
Telehealth in general can benefit from AI, such as the solution from Corti.AI. We also explored this topic in our article “Heal From Home: Top 5 Telemedicine Stocks”.
Improving Medical Research
Biology is often described as the “softest” of the scientific fields because medical and biological data tend to be a lot “messier” than data in chemistry or physics.
In part, this is unavoidable, as biological samples are extremely complex and vary from one to another.
It is also because the field still often relies on manual counting, for example counting cells under a microscope. The same is true for medical analyses like blood counts. Naturally, two different people will count a complex sample in slightly different ways.
This can be solved with AI machine vision, with products like Shonit from Sigtuple now able to provide reliable and standardized counts of all types of blood cells.
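The basic idea behind automated counting can be illustrated in a few lines: threshold the image so cells stand out from the background, then count connected bright regions. This is a deliberately simplified pure-Python sketch on a tiny hand-made intensity grid; production tools like Shonit use far more sophisticated models to handle overlapping and irregular cells:

```python
# Illustrative sketch of automated cell counting: threshold a grayscale image,
# then count connected bright regions with a flood fill.

def count_cells(image, threshold=128):
    """Count 4-connected regions of pixels brighter than `threshold`."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] > threshold and not seen[y][x]:
                count += 1
                stack = [(y, x)]          # flood-fill this region
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w \
                            and image[cy][cx] > threshold and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return count

image = [
    [ 10, 200, 200,  10,  10],
    [ 10, 200,  10,  10, 180],
    [ 10,  10,  10,  10, 180],
    [150,  10,  10,  10,  10],
]
print(count_cells(image))  # -> 3 separate bright regions
```

Unlike two human technicians, this procedure returns exactly the same count every time it is run on the same image, which is the standardization benefit the article describes.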
Medical images are also receiving standardized classification and annotation through tools like Enlitic, allowing researchers to access higher-quality datasets.
Meanwhile, medical research is struggling to deal with a flood of new data coming from progress in genomics, transcriptomics, proteomics, and the overall “multiomics” revolution we described in “Multiomics Are The Next Step In Biotechnology”.
This also includes gene editing, with open-source AI now helping researchers design new gene-editing tools, as discussed in “AI-Enabled Gene-Editing Made Possible with ‘OpenCRISPR-1’”.
Replacing Failing Senses
Another way machine vision can change healthcare is by seeing things for us. Blind or visually impaired people can use AI to identify items in their environment.
AI can also read aloud text that would otherwise be inaccessible, converting it to speech.
Lastly, machine vision can help people with disabilities to control devices through facial expressions or gestures.
Machine Vision Companies
1. Alphabet
Google is a leading tech company in many sectors, with Search and its Android smartphone OS as prominent examples.
It is equally a leader in AI, with a strong presence in machine vision for healthcare, ranging from cancer and disease detection to prevention and improving treatment.
AI is also leveraged in fields other than machine vision. For example, Google’s AlphaFold AI is able to predict proteins’ 3D structures, key data for creating new drugs and for predicting new molecules’ efficacy and safety profiles. We discussed how AlphaFold’s predictions have proven reliable and greatly helpful for drug development in “Prospective Modelling Points to Bright Future for AI-based Drug Discovery”.
Google’s Health AI includes Med-PaLM, the first large language model to reach expert performance on medical licensing exam-style questions, the open-source Open Health Stack for developers, DeepVariant for genomic analysis, and deep learning for electronic health records.
Of course, Google’s Vision AI has many other applications beyond healthcare, including transportation, content generation (text, image, and video), document interpretation, etc.
So overall, Alphabet/Google is a giant not only of tech but also of AI, including in healthcare and the healthcare applications of machine vision.
2. Butterfly Network
Butterfly is the developer of both an advanced ultra-portable ultrasound diagnostic tool and integrated AI software supporting diagnosis, called “Compass”.
The company is now on the 3rd generation of its ultrasound probe, having released the iQ3 in 2024, with a higher data transfer rate and twice the processing power of the previous version. Like all previous Butterfly ultrasound probes, it relies on “ultrasound-on-chip” semiconductor technology instead of classical piezoelectric sensors.
The iQ3 offers a superior user experience, including the ability to visualize 3D and multiple planes simultaneously, integrated cloud software, and quick start-up, all at a lower price.
The company uses AI to improve the images, generate diagnosis-relevant measurements automatically, and provide training/teaching practice.
Butterfly is quickly expanding into new markets in Asia (Singapore, Indonesia, the Philippines, etc.) and into the veterinary market, for example checking feedlot cattle health by leveraging the ultra-portability of its ultrasound tool.