Today’s large language models can be as unreliable as they are eloquent. Their tendency to fabricate facts and lose the thread of a conversation makes them risky tools for scientific research, especially in highly regulated industries such as pharmaceuticals and chemistry. They also struggle to provide sources, inventing bogus academic citations without batting an eye. Speaking…
Raising the efficiency floor and innovation ceiling with generative AI in drug discovery
Large language models (LLMs) such as ChatGPT promise advances that extend well beyond capturing public interest. Because transformer models like GPT are built to interpret language in context, they open up an array of novel possibilities for scientific research. “The transformer architecture is critical,” according to Michael Connell, the chief operating officer…