AWS aims to provide a broad and deep range of AI services and takes an end-to-end approach to AI that includes infrastructure, software, hardware and services. In an increasingly competitive AI landscape, AWS recently launched a $100 million initiative, the AWS Generative AI Innovation Center, to accelerate enterprise generative AI adoption. As Syed put it, “AWS has a perspective on generative AI as kind of like we’re just starting out on this journey.” This comes as major tech companies like OpenAI, Microsoft, Google and Meta invest heavily in generative AI.
Pharmaceutical companies are increasingly focused on both cloud computing and artificial intelligence. The convergence of AI and pharma aligns with our previous coverage on how cloud technology could act as a catalyst for change in this industry.
AWS’s multilayered architecture paves a path for ML and generative AI in pharma
AWS has built a multi-layered architecture for its AI and ML services. Syed detailed this three-tier structure, with each layer catering to a different level of user expertise and need:
- Foundation layer: This base layer is aimed at users with deep ML expertise. “The foundation layer, or as we term it, the infrastructure layer, caters to deep experts in machine learning who wish to tweak the hardware and train really large models,” Syed said. It uses purpose-built Amazon Machine Images (AMIs) running on specialized hardware, with frameworks such as PyTorch, TensorFlow and Apache MXNet.
- ML services layer: This middle layer houses Amazon SageMaker. It is designed for ML practitioners who want a managed service for the entire ML lifecycle, from model building and training through deployment. SageMaker goes beyond creating and deploying models: it also enables continuous monitoring to detect issues such as bias or drift in live models. If a model’s behavior deviates from its original evaluation because of new data or contexts, SageMaker tools help users spot the change and retrain or fine-tune the model accordingly.
- AI services layer: The topmost layer consists of purpose-built services with AI built in. These services eliminate the need for users to build their own models and can be accessed via APIs. Applications range from computer vision to tasks such as information extraction from documents. “The practitioner doesn’t have to build anything,” Syed said. “From an ML perspective, they’re just using the service capabilities.” As Syed noted, users can “just call an API and say, here’s a PDF document, I need to extract the information from it.”
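To make that API-driven pattern concrete, here is a minimal sketch using boto3 and Amazon Textract, one of the document-analysis services in this top layer. The bucket and file names are placeholders, and multi-page PDFs would go through the asynchronous start_document_text_detection call instead.

```python
import boto3

# Amazon Textract sits in the AI services layer: no model to build or train,
# just an API call against a document stored in S3.
textract = boto3.client("textract", region_name="us-east-1")

# Placeholder S3 location for a single-page scanned document.
response = textract.detect_document_text(
    Document={"S3Object": {"Bucket": "my-example-bucket", "Name": "trial-protocol.png"}}
)

# Pull the extracted lines of text out of the returned blocks.
lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]
print("\n".join(lines))
```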
An expanding focus on customizing models
While generative AI is still emerging, it has already shown immense promise. OpenAI’s ChatGPT sparked widespread interest in the space, but foundation models and generative AI have been developing for some time, Syed said.
Amazon Bedrock, AWS’s managed generative AI service, provides fully managed access to both third-party foundation models and AWS’s own Titan models via an API. This enables users to choose the model best suited for their needs. With Bedrock, AWS is making AI more accessible to businesses by allowing them to privately customize models with their own data and seamlessly deploy them into applications using familiar AWS tools without infrastructure management.
“We’re going to offer first-party AWS models, the Titan models, which are AWS built foundation models. But we’re also going to include a wide range of third-party models. As more foundation models prove their usefulness, we’ll incorporate them, enabling users to have a managed, customizable experience,” Syed said. The Titan models are built for general-purpose use but can also be tailored to specific tasks with a customer’s own data.
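As an illustration of that managed, API-first access, a brief sketch of a Bedrock call with boto3 follows. The model ID and request-body schema shown are assumptions based on the Titan text model family; other providers’ models on Bedrock use their own schemas.

```python
import boto3
import json

# Bedrock exposes foundation models behind a single runtime API.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

# Request body format assumed for the Titan text models.
body = {
    "inputText": "Summarize the mechanism of action of metformin in two sentences.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
}

response = bedrock.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps(body),
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream; parse it and print the generated text.
result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```

Swapping in a third-party model hosted on Bedrock would mean changing only the model ID and the body schema, which is the point of the managed-access design Syed describes.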
Bedrock: A platform for generative AI
Through Bedrock, AWS is fostering an ecosystem of AI tools by offering a mix of AWS’s own models and third-party models. AWS sees opportunities for generative AI across industries, including in knowledge management, clinical trials, and digital twin creation. The Allen Institute for Brain Science is using Bedrock as part of a broader initiative to map the human brain in high resolution.
Beyond base models, AWS provides not only the tools but also the secure environment needed to fine-tune them. Users can adapt models to their specific needs through AWS’s secure APIs. According to Syed, “So you have a base model. And then with an API call, and you know, maybe just like a handful of your own data, you can kind of tune that model and make it much more specific to your use cases.”
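A rough sketch of what that API-level customization could look like with boto3’s Bedrock control-plane client appears below. The job name, model names, IAM role, S3 locations and hyperparameters are all placeholders, and the exact parameters accepted may vary by model provider.

```python
import boto3

# Control-plane Bedrock client (distinct from the "bedrock-runtime" client used for inference).
bedrock = boto3.client("bedrock", region_name="us-east-1")

# All names, ARNs and S3 URIs below are placeholders for illustration only.
bedrock.create_model_customization_job(
    jobName="titan-finetune-demo",
    customModelName="titan-text-pharma-tuned",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-example-bucket/fine-tune/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-example-bucket/fine-tune/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
```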
Regulating language model outputs: One of the challenges in deploying generative AI models at scale is ensuring their outputs align with factual information. AWS addresses this through techniques such as retrieval-augmented generation (RAG). When users interact with a large language model, the responses it generates are refined and aligned with the data in their enterprise repository, Syed noted. “If you’re a life sciences company, and you want to use a large language model to help enable your researchers to find answers in your proprietary knowledge bases, you can use RAG to tune that model so the answers come back based on what’s in your knowledge repository,” he added.
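The RAG pattern Syed describes can be sketched in a few lines: retrieve relevant passages from the enterprise repository, then pass them to the model as grounding context. The retrieve() helper below is hypothetical, standing in for whatever search or vector index an organization actually uses, and the Bedrock call reuses the Titan request format assumed earlier.

```python
import boto3
import json

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")


def retrieve(question: str, top_k: int = 3) -> list[str]:
    # Hypothetical stand-in for a lookup against an enterprise knowledge
    # repository (e.g. a vector index over internal research documents).
    knowledge_base = [
        "Placeholder passage 1 from the internal knowledge repository.",
        "Placeholder passage 2 from the internal knowledge repository.",
    ]
    return knowledge_base[:top_k]


def answer_with_rag(question: str) -> str:
    # 1. Retrieve grounding passages from the proprietary knowledge base.
    context = "\n\n".join(retrieve(question))

    # 2. Ask the model to answer using only the retrieved context.
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    body = {"inputText": prompt, "textGenerationConfig": {"maxTokenCount": 512}}
    response = bedrock.invoke_model(
        modelId="amazon.titan-text-express-v1",
        body=json.dumps(body),
        contentType="application/json",
        accept="application/json",
    )
    return json.loads(response["body"].read())["results"][0]["outputText"]
```

Because the grounding passages come from the customer’s own repository, the answers stay anchored to proprietary knowledge rather than the model’s general training data.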
Clinical trial possibilities
As Syed noted, fusing AI and life sciences opens new research avenues. Foundation models could improve patient identification for clinical trials by comparing protocols against databases of patient attributes. Traditionally labor-intensive, this process could be streamlined by evaluating expansive data pools and quickly surfacing optimal matches. AI models also introduce the possibility of digital twins: simulating patients with generative AI to conduct synthetic trials. While the approach is nascent and raises regulatory questions, Syed said “the FDA is more and more open to those kinds of capabilities.”