But that image is outdated, especially in risk-focused industries such as financial services, where CISOs are integral to digital transformation projects and to broader risk management.
From CIS-‘no’ to risk maestro
“Drug development is a risk-focused industry as well,” said Daniel Ayala, chief security and trust officer for Dotmatics. “There is a huge amount of risk.” Consequently, CISOs in pharma are increasingly evolving from technical experts into risk-aware business leaders who happen to have deep technical expertise. “Risk-finding goes into everything,” Ayala said.
“The CISOs have to transform ourselves from being propellerheads — technologists — to being business-aligned,” Ayala emphasized. “You are seeing CISOs come out from under the realm of IT. Ninety percent of my day has nothing to do with IT or technology. It has everything to do with product privacy, responding to compliance.”
Staying ahead in an AI-powered cyber game of cat and mouse
Traditionally, highly regulated sectors, including drug development, have been cautious in embracing new technology. That dynamic is quickly shifting as senior leaders in the sector come to see emerging technologies like AI as potential competitive advantages when deployed strategically. Roughly two-thirds of pharma companies are increasing their IT budgets in 2024, and life science companies are actively integrating AI into their cybersecurity defenses, enhancing drug safety operations and streamlining compliance processes.

But attackers are also tapping AI for increasingly sophisticated threats. “There are LLMs (large language models) that are focused on finding good attack methods, such as more targeted phishing emails, so that you are going to be more likely to click on a malicious link,” Ayala said. “So that’s the game we are playing.”
On AI and bias mitigation in patient recruitment for clinical trials, Ayala noted that while demographic information is important for selecting suitable patients, the AI and machine learning models involved in trial selection and design need to be carefully developed and scrutinized so that their computations do not inadvertently introduce bias or discriminate against certain patient populations. Given the dynamic nature of the clinical trial and AI landscapes, proactive measures are essential for ethical research practices. “It’s a moving target,” he said.
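One common form of that scrutiny is comparing a model’s selection rates across demographic groups. The sketch below is purely illustrative and not a description of Ayala’s or Dotmatics’ methods; the column names, sample data, and 80% rule-of-thumb threshold are assumptions for the example.

```python
# Illustrative sketch only: a simple demographic-parity check on the output
# of a hypothetical trial-eligibility model. Column names and the 0.8
# threshold are assumptions for illustration, not a regulatory standard.
import pandas as pd

def selection_rate_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Return each group's selection rate divided by the overall rate.

    Ratios near 1.0 suggest parity; low ratios flag potentially
    under-selected groups for human review.
    """
    overall_rate = df[selected_col].mean()
    per_group_rate = df.groupby(group_col)[selected_col].mean()
    return per_group_rate / overall_rate

# Hypothetical model output: one row per candidate, with a demographic
# group label and whether the model selected them for the trial.
candidates = pd.DataFrame({
    "group": ["A", "A", "B", "B", "B", "C"],
    "selected": [1, 0, 1, 1, 0, 0],
})

ratios = selection_rate_ratios(candidates, "group", "selected")
flagged = ratios[ratios < 0.8]  # groups selected well below the overall rate
print(flagged)
```

In practice such a check would run on far larger cohorts and alongside other fairness measures, but the point it illustrates is the one Ayala raises: the selection computation itself has to be inspected, not just the demographic inputs.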