
What do you make of Deep Research for general academic use?
Hyde: Using tools not specifically built for healthcare raises important concerns. Healthcare data is nuanced, and the training data matters. For general academic use, AI has proven to be a useful tool, with human oversight, for rapid research and summarization of publicly available information.
Where do you see general-purpose AI platforms like Deep Research offering value in healthcare?
Hyde: I see general-purpose AI platforms offering significant value in search functions and summarization from publicly available resources on the internet. However, the limitation is that these platforms often lack specific training in healthcare. Their recommendations are often generalized and not specific to patient care, which may lead to errors or misinterpretations when applied to specific patient populations or clinical settings.
Can you speak to the risk of hallucinations in medical questions with DeepSeek?
Hyde: Hallucinations from AI in a healthcare setting can directly impact patient safety. It’s important that AI-generated answers cite their sources, and that users verify those sources, since datasets can include conflicting, outdated, or unverified information. At Atropos Health, we train our AI models on 300M+ de-identified patient records. We score each dataset using our published methodology, RWDS, to ensure that the real-world data (RWD) is high quality. We also subject our solutions to independent evaluation. In an independent study, our healthcare-focused LLM, ChatRWD, outperformed other LLMs when evaluated by independent clinicians, answering 94% of the questions and delivering the best answers 87% of the time. In contrast, competing LLMs such as ChatGPT-4 and Gemini Pro 1.5 offered relevant, evidence-based responses less than 10% of the time.
How do you ensure reliable and secure use of your data, and what concerns do you have about general AI tools meeting these standards?
Hyde: We offer two options for data. The Atropos Evidence Network is a federated model. For customers who want to use their own data, we install GENEVA OS within the customer’s firewall – reducing the need to ship data and the risks that come with it. Some customers choose both – the Atropos Evidence Network and their own data. We recently announced the ability for AI developers to build, test, and train AI models on the Atropos Evidence Network. The goal: the more models trained on high-quality RWD, the faster healthcare can advance.
Can you speak to the importance of domain expertise and custom tools versus off-the-shelf, more generalized applications for agentic research, particularly in healthcare?
Hyde: Healthcare expertise is critical because medical research and patient care are complex and highly specialized. Off-the-shelf, generalized applications may work well in certain contexts, but when applied to healthcare, they may not have all the data needed to ensure accuracy when it comes to medical treatments.
How do you see the AI agent space evolving in healthcare in the coming 12 months or so?
Hyde: In the next 12 months, I expect rapid advancements in healthcare AI agents, with a growing focus on precision medicine. We will likely see AI tools become more integrated with electronic health record (EHR) systems, enabling real-time data analysis and decision support at the point of care.