- Gulf investing in healthcare AI
- Integration challenging
- Training and support needed
Large language models such as the AI chatbot ChatGPT are gaining traction in healthcare, and Gulf states are investing heavily in the technology, but integrating LLMs into clinical practice presents challenges.
A study led by Dr Ethan Goh, published last month in JAMA Network Open, tested whether ChatGPT could improve diagnostic reasoning among physicians.
While LLMs performed well independently, they did not improve diagnostic reasoning when used by doctors, the study found.
The study included 50 practitioners across family, internal, and emergency medicine, and took place in late 2023. Participants reviewed six clinical cases using either conventional diagnostic tools alone or ChatGPT alongside these tools.
Physicians using ChatGPT scored a median of 76 percent, against 74 percent for those relying on traditional methods. However, when ChatGPT operated independently, it achieved a score of 92 percent.
“These tools are powerful in healthcare, but the real takeaway is that we need to figure out how doctors, patients, and health systems can fully benefit from them,” Goh said.
He emphasised that the study’s open-ended approach better reflects the complexity of real-world practice: “It’s about understanding how clinicians assess complex cases.”
Saudi Arabia and the UAE are heavily investing in artificial intelligence in the healthcare sector.
The UAE healthcare company M42 leads initiatives such as the Emirati Genome Programme, using AI to develop personalised treatments based on one of the world’s largest DNA databases. In Saudi Arabia, investments such as the $100 billion “Project Transcendence” aim to embed AI across sectors, including healthcare, as part of Vision 2030.
While these investments are significant, Goh’s study underscored a critical challenge: technology alone is not enough. If doctors are not trained to use AI tools effectively, these systems can fail to deliver on their potential.
“Doctors often don’t get much onboarding when it comes to tools like LLMs or electronic medical records. Without proper guidance, these tools can add to their workload instead of easing it,” Goh said.
Dr Maaz Shaikh, vice-president of product management at M42, said an AI talent gap in healthcare exists on two fronts: a lack of medical expertise among AI specialists and limited knowledge among clinicians on how to use AI tools.
“This mismatch leads to poorly integrated systems that disrupt workflows and fail to meet their potential,” Shaikh said. He emphasised the need for training programmes, change management and safeguards to ensure smooth adoption.
“Strategies must focus on people, process, and technology. Interdisciplinary teams of AI experts and healthcare specialists are essential for development,” he said.
The Goh study also highlighted the dangers of over-reliance on AI without understanding its limitations. Goh and Shaikh both said that tools such as ChatGPT are valuable for assisting clinicians in data interpretation and decision-making but should not replace expertise.
“AI should be used to facilitate rapid access to data and evidence, not to make decisions for clinicians,” Shaikh said.
While generative AI has shown promise in automating documentation and improving efficiency, its benefits are only likely to be realised if clinicians are equipped to integrate it into workflows.
Shaikh pointed to examples of M42’s early successes, such as AI-powered systems for colorectal cancer screenings and personalised medicine, which demonstrate how AI can enhance care quality and patient outcomes when used effectively.
“These tools have great potential, but their benefits will only be realised with deliberate efforts to align technology with clinical practice,” Goh said.