
AI Data Risks for Biopharma, AI and Health Equity, and more

From synthetic data threats to strategic breakthroughs, here's what's shaping the future of AI and life sciences.

Dear BioUpdates.AI Subscribers,

In this issue, we spotlight the potential risks of AI in healthcare data analytics, share a thought-provoking fireside chat on AI and health equity, and round up the latest headlines shaping AI in pharma and biotech.

Read about...

AI in Real-World Data Analytics: The Quiet Threats of Re-identification and Synthetic Data

AI and machine learning (ML) are rapidly transforming how biopharma companies leverage real-world data (RWD) for Health Economics and Outcomes Research (HEOR) as well as for early-stage discovery and general research. AI/ML algorithms can now parse through vast datasets with incredible speed, uncovering insights previously hidden beneath the sheer volume of data.

At the same time, this enhanced analytical capability introduces two critical threats: AI-driven re-identification of anonymized or de-identified patient data and the potential infiltration of AI-generated synthetic data into datasets that were thought to be “real-world”.

Biopharma executives must recognize and proactively manage these emerging risks. The foundational assumption of patient data privacy rests on effective anonymization. However, advances in AI have begun unraveling the protective layers traditionally used to de-identify or anonymize patient data. A 2019 study published in Nature Communications estimated that a staggering 99.98% of Americans could be re-identified using just 15 demographic attributes.
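To see why so few attributes suffice, here is a minimal sketch of the mechanism behind that statistic: counting how often a combination of quasi-identifiers is unique within a dataset. All records, attribute names, and values below are hypothetical toy data; real linkage attacks join these combinations against outside sources such as voter rolls or social media profiles.

```python
# Toy illustration (hypothetical data): even a few demographic
# attributes can make most "anonymized" records unique, which is
# what enables linkage-style re-identification.
from collections import Counter

# A de-identified dataset: names removed, quasi-identifiers retained.
records = [
    {"zip": "02139", "birth_year": 1984, "sex": "F", "diagnosis": "T2D"},
    {"zip": "02139", "birth_year": 1984, "sex": "M", "diagnosis": "HTN"},
    {"zip": "02142", "birth_year": 1971, "sex": "F", "diagnosis": "RA"},
    {"zip": "02142", "birth_year": 1971, "sex": "F", "diagnosis": "T2D"},
    {"zip": "94105", "birth_year": 1990, "sex": "M", "diagnosis": "CAD"},
]

def uniqueness(records, quasi_identifiers):
    """Fraction of records whose quasi-identifier combination is unique."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    unique = sum(
        1 for r in records
        if combos[tuple(r[q] for q in quasi_identifiers)] == 1
    )
    return unique / len(records)

# The more attributes an attacker can link, the closer this gets to 100%.
print(uniqueness(records, ["zip"]))                       # 0.2 (coarse attribute)
print(uniqueness(records, ["zip", "birth_year", "sex"]))  # 0.6 (combined attributes)
```

Scale that logic to 15 attributes over a national population and near-total uniqueness follows, which is exactly what the Nature Communications authors quantified.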

Even digitized medical images pose concerns. Deep learning models trained on chest X-rays have demonstrated the ability to accurately re-identify patients, translating anatomical details into biometric identifiers. Genomic data may present an even greater risk. As AI grows more adept at finding subtle identifiers within seemingly anonymized datasets, anonymization that was sufficient even a few years ago is obsolete in today’s AI-driven environment, putting all data handlers at risk, including biopharma companies licensing datasets from typical sources. Read more…
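For the imaging case, a hedged sketch of how such matching works: assume some model has already mapped each scan to an embedding vector (the model itself is abstracted away here, and all patient names and numbers are invented for illustration), then match a supposedly de-identified scan against a gallery of identified scans by cosine similarity.

```python
# Hypothetical sketch of embedding-based re-identification:
# a model maps each X-ray to an embedding vector, and an attacker
# matches a "de-identified" scan against a gallery of identified ones.
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for model outputs: one embedding per identified patient scan.
gallery_ids = ["patient_A", "patient_B", "patient_C"]
gallery = rng.normal(size=(3, 128))

# A de-identified scan from the same patient yields a nearby embedding.
query = gallery[1] + rng.normal(scale=0.05, size=128)

def cosine_match(query, gallery, ids):
    """Return the gallery identity whose embedding is most similar."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q
    best = int(np.argmax(scores))
    return ids[best], float(scores[best])

print(cosine_match(query, gallery, gallery_ids))  # ('patient_B', ~0.99)
```

The attacker never needs pixel-level access to the original identified image, only a feature space in which the same anatomy lands in the same place, which is why stripping DICOM headers alone no longer guarantees anonymity.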

Caught between datasets and dilemmas: Navigating AI's tricky landscape

Peat & Perspectives:
Smoky Takes on Hot Topics

In each newsletter, one of our editors conducts a short interview with an expert on a topic related to our news roundup. To keep the conversation candid, interviewer and guest each sip a glass of Scotch – or their beverage of choice – while they talk. We hope you enjoy our inaugural edition, in which Dr. Taylor Hirschberg discusses health equity as it applies to AI.

Neat Ideas, Served on the Rocks

AI in Pharma - Balancing Innovation with Equity

Editor Cory Kidd, PhD interviews Taylor Hirschberg, DrPH

As pharmaceutical companies increasingly invest in artificial intelligence, questions about ethics, equity, and implementation become paramount. I recently sat down with Taylor Hirschberg to discuss how pharma can responsibly harness AI while ensuring health equity remains central to innovation.

Cory: Taylor, what is our Scotch of choice this evening?

Taylor: We’re going with Laphroaig 14, one of my favorites!

Cory: Nice choice! I’m joining you with an Oban 14 tonight. Cheers, and let’s dive in. So, pharma companies are investing heavily in AI, particularly for drug discovery. What guardrails are necessary to prevent perpetuating biases in clinical research?

Taylor: Right, well—the fundamental principle must be “do no harm,” and I mean that at individual, community, family, institutional, and even global levels. Honestly, we really need to ensure equity and access for all populations, which, as you can imagine, is especially challenging in the pharmaceutical context.

Cory: Totally makes sense. But, practically speaking, how do we translate these ethical theories into concrete steps that executives can implement?

Taylor: Yeah, that's a big question. Data governance is paramount. There's significant difficulty in properly de-identifying data while still capturing the necessary characteristics for modeling, and ensuring population diversity—it's tricky. While I haven't yet seen concrete examples of best practices in the industry, there are some promising research and policy frameworks emerging, particularly out of Europe, actually.

Cory: Interesting. What specific questions should pharma buyers ask AI vendors regarding health equity and representation in their training data?

Taylor: Well, [biopharma] companies should definitely inquire about the vendor's established business ethos and their framework for health equity. Honestly, vendors should understand they're responsible for the client's reputation, and a lack of an established health equity framework—well, that should really be a deterrent. Our team brings collective health equity experience, but we still need iterative and measurable approaches for actually incorporating these principles into product development.

Cory: Good point. Many pharma companies already have existing health equity bodies. So, how can these groups effectively collaborate with AI working groups?

Taylor: Yeah, that's a bit complicated—there's often hesitation around measuring the impact of health equity initiatives compared to, say, environmental measures. I've had executives express genuine confusion—real confusion—about how to measure health equity outcomes, which makes practical implementation pretty challenging.

Cory: I see. What metrics would you recommend for measuring progress in health equity?

Taylor: I'd start with SMART objectives and really focus on causal pathways to define inputs, activities, and the outcomes you're after. Basically, keep your eyes on what can actually be counted at each step. Current measures in pharma often revolve around money spent and market access rather than, you know, more nuanced impact metrics.

Cory: Got it. Some argue AI could help address historical inequities in clinical trial participation through better design and recruitment. What's your perspective?

Taylor: Hmm. I'm cautious about premature implementation of AI in recruitment. We should fix existing recruitment standards first, as AI could exacerbate inequities, especially for vulnerable populations with lower technology uptake. AI used in recruitment should be considered a medical device with similar governance structures.

Cory: But couldn't AI be leveraged behind the scenes to optimize recruitment processes and reduce costs, maybe enabling more outreach to diverse populations?

Taylor: That's a valid perspective I hadn't fully considered. AI could indeed help identify missed populations through data analysis, potentially overcoming limitations in trial recruitment that often prioritize cost over diversity.

Cory: Great point. What's your final thought on AI implementation in pharma?

Taylor: We have a genuine opportunity to leverage AI to address historical inequities, but only if we implement it thoughtfully and strategically. Creating project charters to identify inherent flaws and leveraging the experience of true experts for unbiased implementation will be crucial moving forward.

Bottom line: The path to equitable AI implementation starts with candid dialogue, careful scrutiny of data practices, and measurable, iterative goals. As Dr. Hirschberg reminds us, leveraging AI responsibly means ensuring it reduces inequity rather than perpetuating it—a conversation we'll keep revisiting over another dram.

Loving our newsletter? Stay tuned—there’s plenty more ahead.

If you’re enjoying the newsletter, don’t forget to subscribe here—and forward to friends or colleagues who would find it valuable.

YOUR ESSENTIAL NEWS ROUNDUP

From AI-assisted automation to drug target identification and drug safety, here’s a roundup of the biotech and AI news making headlines.

1. Tempus AI in line for $200M from AstraZeneca, Pathos deal to develop cancer model (via Fierce Biotech)

  • Global pharma leader AstraZeneca, genomic sequencing and precision analytics company Tempus, and AI biotech Pathos have formed a three-way collaboration to build a multimodal AI foundation model for oncology, with Tempus contributing its vast repository of patient data in exchange for $200 million in licensing and development fees. The completed model will be shared by all three parties to identify novel drug targets and advance cancer therapies.

Our View: This alliance signals big pharma’s appetite for AI at scale. Yet, from a regulatory-readiness standpoint, any AI-derived insights must still clear FDA validation, and Tempus’s model will need transparent, credible outputs to inform trials. With the technical maturity of multimodal models unproven, AstraZeneca is hedging its bets via partnership, effectively sharing IP and risk across collaborators rather than overcommitting to a solo platform. Sigla’s experts are also skeptical of Tempus’s data-consent and privacy policies and wonder what exposure the deal carries on the patient and consumer data sovereignty front. We find it interesting that the fees here are in the same ballpark as Regeneron’s acquisition of the 23andMe data (reportedly $256M).

Executive Takeaways: Pharma leaders should ask: do we join AI data consortia or risk going it alone? Collaborations like this can accelerate learning, but insist on validation plans up front so “AI-driven” doesn’t become mere hype on the critical path to new drugs.

2. Scientific ‘superintelligence’ firm Lila Sciences launches with $200M seed financing for AI-powered labs (via Pharmaphorum)

  • Flagship Pioneering unveiled Lila Sciences, which emerged from stealth with $200 million in seed financing to develop a “scientific superintelligence”. Lila’s AI-driven, autonomous labs are intended to go beyond analyzing existing data – the platform will generate and test its own hypotheses via automated experiments, essentially letting the AI “experiment, play, and learn” at unprecedented scale.

Our View: Big funding doesn’t equal proven science – technical maturity will be the crux here. Lila is selling a moonshot vision of an AI-run research engine, which today is more concept than reality. Our IP risk radar is blinking too: if an AI automates discovery, who owns the inventions it creates? Sigla advises healthy skepticism until Lila produces real-world results to back its lofty claims. Flagship’s patience (and deep pockets) may nurture this moonshot, but pharma partners will rightly demand evidence that this “AI scientist” can actually deliver breakthroughs, not just press releases.

Executive Takeaways: Lila’s launch poses a provocative question for R&D executives: how aggressively should we invest in automating science? It’s wise to monitor pilot data (or even run a small joint project) with these AI lab systems, but avoid overcommitting resources until an “AI researcher” demonstrates it can hit drug discovery milestones better, faster or cheaper than traditional labs.

3. BigHat Biosciences and Lilly Collaborate to Advance AI-driven Antibody Therapeutics (via Business Wire)

  • Eli Lilly inked a strategic collaboration with BigHat Biosciences, a startup known for its machine learning-guided antibody engineering platform, to co-design up to two novel antibodies. Lilly will also make an equity investment as part of its Catalyze360 program, providing support while BigHat deploys its ML-powered Milliner wet lab platform to engineer antibodies with improved properties.

Our View: Lilly adding another AI-driven biotech to its roster underscores that ML-enhanced biologics are quickly becoming mainstream. The deal’s scope, limited to two programs plus an equity stake, shows a disciplined portfolio fit: Lilly secures a foothold in BigHat’s tech without overcommitting, and BigHat gains capital while retaining control of its broader pipeline (a savvy move to mitigate IP risk). Technically, BigHat’s integration of rapid wet-lab iteration with AI has impressed a who’s-who of pharma partners; now it must actually deliver a standout clinical candidate to validate the hype.

Executive Takeaways: In the rush to embrace AI, follow Lilly’s playbook of “optionality” deals: partner on a targeted pilot and invest just enough to secure access. This lets you test-drive an AI platform’s real impact on R&D. Biopharma execs should line up similar focused collaborations, creating some exposure to cutting-edge AI tools, but they should not bet the whole pipeline until the data justifies scaling up.

4. Axiom Bio Launches with $15M to Replace Animal Testing with AI for Drug Toxicity Prediction (via BioPharmaTrend)

  • Axiom Bio, a San Francisco startup led by Brandon White (ex-Uber AI), launched with $15 million in seed funding to develop AI models that predict drug toxicity and reduce the need for animal testing. The company is initially focused on flagging liver-toxic compounds via computational models, positioning AI as a faster, more ethical alternative to traditional preclinical animal studies in drug development.

Our View: Axiom’s vision aligns with the FDA’s own direction, as regulators begin to warm to AI-based toxicity assays (a positive sign for regulatory readiness in this space). The big question is technical maturity: early reports claim ~75% sensitivity in predicting hepatotoxic compounds, meaning roughly one in four truly toxic compounds would still slip through, and pharma will demand near certainty before ditching animal models. If Axiom’s models prove as reliable as in vivo tests, it could dramatically cut early-stage failure rates… yet there’s a conceptual IP risk if everyone trains on the same public datasets. Axiom’s real moat will need to be proprietary data or algorithms that competitors can’t easily replicate.

Executive Takeaways: This is a wake-up call for R&D chiefs. Traditional animal tox screens may become obsolete sooner than expected. Start piloting AI-driven safety checks on a few drug candidates now. It’s low cost compared to a failed IND, and it positions your pipeline for a future where regulators (and investors) will ask why you’re still relying on animal models when smarter, faster AI predictors are available.

5. FutureHouse Launches Superintelligent AI Agents for Scientific Discovery (via FutureHouse) 

  • FutureHouse has unveiled its groundbreaking platform offering four superintelligent scientific AI agents—Crow, Falcon, Owl, and Phoenix—designed to accelerate research through superhuman literature synthesis and experimental planning using open-access papers and scientific tools. These agents, accessible via web and API, aim to clear bottlenecks in scientific discovery by automating complex tasks with explainable reasoning.

Our View: With state-of-the-art natural language understanding, FutureHouse’s agents promise to collapse weeks of literature review into minutes, enabling a new speed of science. This comes at a critical moment, as the world races to integrate technically mature AI into everyday work. From Sigla’s point of view, the platform is a strategic fit for the gap between raw data and actionable knowledge at superhuman speed. However, the real question is whether the AI can be trusted for research if it cannot parse high-quality closed-access papers. Our experts advise healthy skepticism, as the limited database poses reproducibility concerns. The platform may represent the first viable AI co-pilot for scientists, but whether it can truly become your next research partner remains to be seen.

Executive Takeaways: Without integration of paywalled literature, research pipelines using these tools risk incomplete insights and potential reproducibility failures—especially in regulated discovery environments. It’s wise to adopt a “trust-but-verify” approach: initiate a side-by-side evaluation of FutureHouse alongside alternative scientific AI agents like Elicit, Consensus, and Paperguide. Focus on core workflows and assess each tool’s coverage, explainability, and reproducibility. 

This was your newsletter for May 2025. Thanks for being part of our readership. If you found this issue insightful, don’t forget to share it and invite colleagues to subscribe here. We’d love to hear your thoughts in the comments section below!

Have comments or news? Share with us by contacting [email protected] 
