
In this Q&A, Dr. Mitesh Rao, CEO of OMNY Health, offers a grounded perspective on OpenAI’s Deep Research tool in healthcare contexts. He acknowledges its potential for tasks like synthesizing research and handling complex queries, but cautions that data limitations keep it from being ready for “primetime” use in healthcare. We are still in the “first inning,” he says. Dr. Rao calls data availability the “Achilles heel” of AI in healthcare and highlights the critical risk of “hallucinations” in medical contexts, where patient harm is unacceptable. He also makes the case that domain-specific data and training are what make such tools truly effective in specialized fields, touching on data security and privacy concerns along the way.
What do you make of Deep Research for general academic use?

Mitesh Rao, MD
Rao: The model still needs more time and training, especially given its limited exposure to healthcare data. The fact that it can function within secure environments, inside an organization’s firewall, is a plus, but that doesn’t mean it’s ready for widespread, “primetime” use in healthcare.
Where do you see general-purpose AI platforms like Deep Research offering value in healthcare?
Rao: The critical limitation is data. This is truly the Achilles heel of AI across the healthcare industry. Deep Research could become more domain-specific with time and access to the right healthcare information, but that will take a concerted effort.
Can you speak to the risk of hallucinations in medical questions with Deep Research?
Rao: Hallucinations are a major risk factor, primarily because we have a zero-tolerance policy for causing harm to patients. This is a huge issue for AI, and the only way around it is to move beyond the black-box approach and ensure the model has access to source data that is verifiable and traceable. In that scenario, any output can be validated against its sources.
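Rao’s prescription of verifiable, traceable source data maps onto what is often called retrieval-grounded generation: every claim in an output carries a pointer back to a document that a reviewer can check. The Python sketch below is purely illustrative, not OMNY Health’s implementation; the SourcePassage type, the function names, and the keyword-matching “retrieval” are all hypothetical stand-ins for real components.

```python
from dataclasses import dataclass

@dataclass
class SourcePassage:
    doc_id: str  # identifier of a verified source document
    text: str    # passage content available for retrieval

def answer_with_citations(query: str, corpus: list[SourcePassage]):
    """Retrieve passages and pair every claim with the doc_id it came from."""
    # Toy retrieval: keep passages sharing any keyword with the query.
    hits = [p for p in corpus
            if any(w in p.text.lower() for w in query.lower().split())]
    # Each claim carries its doc_id, so a reviewer can trace the
    # statement back to its source.
    return [(p.text, p.doc_id) for p in hits]

def validate(claims: list[tuple[str, str]],
             corpus: list[SourcePassage]) -> bool:
    """Reject any claim whose cited doc_id is absent from the corpus."""
    known = {p.doc_id for p in corpus}
    return all(doc_id in known for _, doc_id in claims)
```

The point is the audit trail: an answer that cites an unknown doc_id fails validation before it ever reaches a clinician.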
How do you ensure the reliable and secure use of your data, and what concerns do you have about general AI tools meeting these data security and privacy standards?
Rao: De-identification and expert determination/certification of de-identification are mandatory for security and privacy when using data in AI. Any AI model will likely “learn” from and potentially “carry” the information it is exposed to. This means there is a risk if the AI is exposed to Protected Health Information (PHI). The only way to prevent this data leakage risk is by ensuring the data is appropriately de-identified and safe before it is used by the AI.
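As a toy illustration of the scrubbing step Rao describes, a de-identification pass might replace recognizable identifiers before any text reaches a model. The patterns and the scrub helper below are hypothetical, and a handful of regexes is nowhere near sufficient for real compliance, which, as he notes, requires expert determination.

```python
import re

# Illustrative patterns only; real HIPAA de-identification relies on the
# Safe Harbor method or expert determination, not a few regexes.
PHI_PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "DATE":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(record: str) -> str:
    """Replace recognizable identifiers with typed placeholders
    before the text is ever passed to a model."""
    for label, pattern in PHI_PATTERNS.items():
        record = pattern.sub(f"[{label}]", record)
    return record

print(scrub("Pt called from 555-867-5309 on 3/14/2024, SSN 123-45-6789."))
# -> "Pt called from [PHONE] on [DATE], SSN [SSN]."
```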
Can you speak to the importance of domain expertise and custom tools versus off-the-shelf, more generalized applications for agentic research, particularly in healthcare?
Rao: To me, the effectiveness ultimately comes down to data and training. With sufficient, relevant data and targeted training, you can transform any “off-the-shelf” AI solution into something highly domain-specific and effective for healthcare.
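One common way to make that concrete is continued training of an open, off-the-shelf model on de-identified domain text. The sketch below uses the Hugging Face transformers library under assumed conditions: “gpt2” stands in for any general-purpose base model, and “clinical_notes.txt” is a hypothetical file of de-identified notes, one per line. It is a minimal outline of the idea, not a production recipe.

```python
# Minimal domain-adaptation sketch: continue training an off-the-shelf
# causal language model on de-identified clinical text.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import load_dataset

base = "gpt2"  # placeholder for any general-purpose base model
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# "clinical_notes.txt" is hypothetical: one de-identified note per line.
data = load_dataset("text", data_files={"train": "clinical_notes.txt"})
train = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-adapted", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train,
    # mlm=False: labels are the input ids, shifted for next-token prediction
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```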
How do you see the AI agent space evolving in healthcare in the coming 12 months or so?
Rao: There are many potential applications for AI agents in healthcare that should reduce the time burden of administrative tasks and speed up various processes. However, we are still in the very early stages (the “first inning”). Currently, there’s a tendency to apply AI to everything simply because it’s a new tool (“when you’re holding a hammer, everything is a nail”). We have yet to identify and focus on the most effective use cases in healthcare to target with AI agents.
Filed Under: machine learning and AI