Unlocking the Potential of AI in Life Sciences: Real-World AI Adoption and Navigating AI's Ethical Landscape
As artificial intelligence (AI) drives transformation across the life sciences, balancing its vast potential with the ethical and operational challenges it introduces is more critical than ever. Elsevier’s 2024 report, “Insights 2024: Attitudes toward AI,” based on a survey of 3,000 corporate researchers, captures this tension, revealing an industry eager to explore AI’s possibilities while remaining vigilant about its risks. In an earlier Pharma IQ story, Mirit Eldor, Managing Director of Life Sciences Solutions at Elsevier, shared insights from this report, emphasising the industry’s enthusiasm for AI’s potential to accelerate knowledge discovery and reduce costs, tempered by concerns over data integrity, transparency, and regulatory compliance.
In this exclusive follow-up interview, Mirit delves deeper, offering fresh insights on trust-building measures, robust data, and the collaborative path forward. Mirit also shares her reflections from Pharma IQ's recent AI Pharma & Healthcare conference, where discussions centred around real-world integration challenges and the ethical frameworks needed to guide AI innovation responsibly.
AI in Pharma: Insights on Industry Attitudes
Pharma IQ: "Your report paints a complex picture of the current attitudes toward AI in pharma and healthcare. Were there any findings that particularly surprised you or that you found especially revealing about the industry’s current relationship with AI?"
Mirit Eldor: "Yes and no. We launched the report soon after ChatGPT’s big wave, and we were already seeing both enthusiasm and a lot of unknowns around AI. So, when I say yes and no, I wasn’t totally surprised by the themes, but I was surprised by the extent of them. The enthusiasm was incredibly high—96% felt AI would accelerate knowledge discovery, and there was strong sentiment around cost savings and improving work quality. But while the excitement was palpable, only about a third were actually using GenAI at work, and even then, most were still in the experimentation phase.
I was also struck by the extent of the concerns, especially around misinformation. Just as many respondents, 96%, expressed worries about AI’s potential to spread misinformation. So you have this interesting contrast: almost every respondent was excited about AI’s opportunities, yet nearly everyone was deeply concerned about the risks."
Ensuring Data Integrity in AI Tools for Life Sciences
Pharma IQ: "With AI positioned to transform knowledge discovery, your report emphasises the importance of high-quality, trusted data. What steps do you see as critical for organisations to maintain data integrity and quality as they integrate AI into research and development?"
Mirit Eldor: "There are several critical steps, and they fall into three areas: the data, the AI tool, and the overall approach to combining the two. Let’s start with data. Organisations need to ensure that AI recommendations are based on high-quality, scientifically robust data that is peer-reviewed and accurate. In a regulated industry like life sciences, it’s essential to know exactly what data goes into the AI models and to verify that it is reliable and scientifically valid.
Second, it’s crucial to select the right tool. Public, open-access AI tools can compromise the integrity of internal data if they are not designed for a specific industry: an AI tool trained on general data, for example, may not suit the unique contexts of life sciences. Finally, in combining these elements, organisations should adopt a “human-in-the-loop” approach so that the AI’s results are explainable and bias is minimised. Historical data can carry bias, which AI models can amplify, so it is essential to manage this actively."
Building Trust in AI: Strategies for Verified, Reliable Data
Pharma IQ: "What do you see as the key strategies for building trust in AI tools, especially when 91% of respondents expect them to draw from verified sources?"
Mirit Eldor: "It starts with choosing the right AI tools. Organisations should ensure they’re using AI that’s trained on high-quality, verified data and configured in private instances so that any internal data input is protected. This requires setting up the appropriate infrastructure, one that is secure, reliable, and trustworthy, for the AI’s operations. Once that infrastructure is in place, we can embrace the incredible opportunities AI presents. With the right protections, AI tools can become essential allies in accelerating knowledge and improving life sciences work."
Mitigating AI Misinformation Risks in Life Sciences
Pharma IQ: "How can life sciences companies guard against the risks of misinformation while still leveraging AI to streamline processes and uphold scientific standards?"
Mirit Eldor: "At Elsevier, we developed responsible AI principles early on, which I find very useful. The first principle is considering the real-world impact of our solutions—whether they’re being used for drug safety, patient outcomes, or other high-stakes areas. We need to keep that impact front of mind when building AI tools.
Another key principle is bias prevention. Every AI algorithm we develop goes through a bias check to prevent reinforcing any unfair biases. Explainability is another pillar: all our solutions must be transparent, and we ensure that users can understand what any recommendation is based on. We also maintain accountability through human oversight; there’s always a person in the loop and someone accountable for each AI tool. Finally, we strictly adhere to privacy and data governance, ensuring that we have the necessary rights and protections for any data we use in our models."
Collaboration in Life Sciences AI: Bridging Stakeholders for Responsible Innovation
Pharma IQ: "Based on the insights from your report, how crucial is collaboration between different industry stakeholders, such as regulators, tech providers, and pharma companies, to create a responsible AI framework that addresses both innovation and ethical concerns?"
Mirit Eldor: "I’d say it’s critical. The life sciences have always operated as an ecosystem; no single company can do it all. And especially now, with new technologies coming in, we need the best technology, content, infrastructure, and legislation to keep pace. The regulatory side has always lagged a little behind new technologies, which is all the more reason for stakeholders to learn from each other.
I was part of discussions with policymakers and industry bodies, including the U.S. Chamber of Commerce, which ran interviews with companies already working with AI. We shared insights into our approach at Elsevier and discussed the common obstacles we face, such as bias and keeping data current. These discussions showed the power of collaboration across stakeholders. To drive the best outcomes, we need to ensure that our technologies connect well. The productivity gap in life sciences is real: despite technological advancements, we still see declining success rates in clinical trials and rising development costs. Collaboration can help us address these challenges and make meaningful progress in efficiency and patient care."
Achieving Interoperability in AI for Pharma and Healthcare
Pharma IQ: "At the AI Pharma & Healthcare conference, attendees discussed the need for interoperable AI ecosystems to bridge isolated applications in clinical trials, diagnostics, and patient care. How do you see this shift impacting the industry, and what are some challenges that might arise?"
Mirit Eldor: "Interoperable AI ecosystems represent a significant leap forward. But as usual, it starts with data: ensuring that we have access to the right data and that it’s robust enough to support interoperable systems. Bringing data together is particularly challenging because it often sits across various silos and devices, each with its own context and framework. At Elsevier, we face this challenge even with our own extensive data sources, from journals to specialised databases. We want all of this information to be searchable in one place, because it is often a small, overlooked detail that leads to a major breakthrough.
But the technical, operational, and IP challenges are enormous. My approach is to break down these big challenges into manageable pieces. For example, we might start by integrating data on a specific disease or a specific use case, then expand from there. Building these systems incrementally is key, because if we try to solve everything at once, we might never start."
Conclusion: Collaborative AI Development and Data Integrity in Life Sciences
Mirit Eldor’s insights illustrate a crucial moment for the life sciences sector as it seeks to harness AI’s full potential. From trust-building measures to responsible AI principles, Mirit highlights the steps needed to foster ethical AI implementation while pushing innovation forward. Reflecting on the recent AI Pharma & Healthcare conference, she underscores the importance of collaboration, incremental growth, and a shared commitment to data integrity and transparency.
As life sciences and AI evolve together, responsible AI frameworks and open industry dialogue provide valuable guidance. For further insights, read Elsevier’s “Insights 2024: Attitudes toward AI” report and follow Pharma IQ for more on AI-driven transformation in life sciences.