As GenAI gains traction across life sciences, the conversation is rapidly shifting from potential to practicality. Yet, many organizations remain stuck in pilot purgatory, unable to scale solutions enterprise-wide. In this interview, Ram Yeleswarapu, SVP of Enterprise Clinical Solutions at Indegene, shares hard-earned insights from deploying GenAI across over 70 use cases in clinical and medical domains. He discusses the real barriers to adoption, why human oversight still matters, and what companies must do to build trust with regulators and teams alike.

What’s stopping GenAI from scaling beyond pilots in life sciences?

In the life sciences sector, we see many companies using GenAI to transform processes across the value chain, from drug discovery to marketing and patient administration. However, widespread challenges such as biases, misconceptions, and inaccuracies hinder progress, emphasizing the need to integrate technology with deep domain expertise. Without a structure that effectively blends these elements, scaling GenAI becomes difficult. The common organizational, technical, and regulatory barriers include:

  • Organizational Barriers: Lack of alignment between leaders and stakeholders, absence of top-down direction, and a missing enterprise roadmap led by a center of excellence impede meaningful, long-term value.
  • Technical Barriers: Disparities between existing technical capabilities and GenAI requirements, loss of subject matter expertise, and inadequate human-in-the-loop interventions complicate integration into workflows.
  • Regulatory Barriers: Challenges include data security, privacy, legal issues, ethical concerns, undefined GenAI regulations, and the absence of responsible AI teams to enforce frameworks for safe AI development.

Addressing these barriers is critical to scaling GenAI and realizing its potential in life sciences.

How can AI-generated content stay compliant and accurate at scale?

Our digital content creation approach is built on three pillars: strategy, production, and measurement, with modular content development at its core to boost efficiency and effectiveness. Successful implementation requires orchestrating these elements cohesively: while pushing content out is vital, capturing data feedback is equally crucial for tailoring future experiences and making informed decisions. Over time, we have refined this methodology by tying each pillar to key performance indicators (KPIs).

Validating AI-driven content is critical, and organizations must integrate robust compliance tools to automatically screen for issues related to data privacy, intellectual property, and industry-specific guidelines. Continuous refinement of algorithms is equally important to ensure AI models maintain their efficacy in processing vast, complex datasets. Beyond technology, investing in team training and fostering cross-departmental collaboration are essential steps for unlocking AI’s full potential in content generation.
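The automated screening described above can be illustrated with a minimal sketch. This is a hypothetical rule-based pre-screen, not an actual compliance product: the pattern names, flagged terms, and thresholds are illustrative stand-ins for the far richer rulesets (privacy, IP, industry guidelines) a real deployment would use before content reaches human reviewers.

```python
import re

# Illustrative rulesets only -- a real system would cover many more
# privacy, IP, and industry-specific checks.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}
FLAGGED_CLAIMS = ["cures", "guaranteed", "risk-free"]

def screen_content(text: str) -> list[str]:
    """Return a list of compliance flags found in draft content."""
    flags = []
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            flags.append(f"pii:{name}")
    lowered = text.lower()
    for word in FLAGGED_CLAIMS:
        if word in lowered:
            flags.append(f"claim:{word}")
    return flags

draft = "This therapy cures migraines. Contact jane.doe@example.com."
print(screen_content(draft))  # → ['pii:email', 'claim:cures']
```

In practice, flagged drafts would be routed to human reviewers rather than blocked outright, keeping the oversight loop described below intact.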

While AI’s advanced capabilities streamline foundational content creation, human oversight remains indispensable. The most effective approach combines AI for content generation with human oversight to uphold accuracy, compliance, and ethical integrity. This collaborative model maximizes the strengths of both AI and human expertise, ensuring impactful and credible digital content creation.

Will regulators accept AI-generated clinical content anytime soon?

Recent advancements by regulatory agencies highlight their commitment to establishing frameworks and methodologies that enhance AI’s reliability and transparency, ultimately driving efficiency in clinical research. For instance, in January 2025, the U.S. Food and Drug Administration (FDA) released draft guidance titled “Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products.” This guidance emphasizes the importance of transparency, data quality, and model validation to ensure AI outputs comply with regulatory standards. To refine these recommendations, the FDA has proactively engaged stakeholders for feedback.

Similarly, the European Medicines Agency (EMA) has introduced an AI work plan focusing on responsible and effective AI integration.

Despite these efforts, immediate and widespread regulatory acceptance of AI-generated clinical and medical content is unlikely in the short term. However, these initiatives underscore a gradual yet rigorous approach by regulatory agencies to promote transparency, accountability, and patient protection in the deployment of AI technologies.

What are the top challenges in training staff for GenAI?

We leverage GenAI across over 70 use cases, spanning medical and regulatory affairs, medical writing, pharmacovigilance, MLR review, sales, marketing, and clinical trials. Through our GenAI-powered platforms, practitioner expertise, consulting capabilities, and a dedicated GAI Innovation Lab, we prototype and scale solutions aimed at accelerating go-to-market strategies, delivering personalized customer experiences at scale, reducing costs, and enhancing efficiency and effectiveness.

In general, implementing GenAI training across large organizations with diverse technical proficiencies requires tailored programs catering to different skill levels and learning styles. To measure ROI, organizations should track KPIs like operational efficiency, innovation outcomes, and employee productivity. Pilot programs within specific departments can offer insights to refine training approaches and validate their effectiveness.

GenAI adoption also necessitates process re-engineering, integrating AI only where it adds measurable value. Human-in-the-loop mechanisms, such as reinforcement learning and curated review workflows, ensure oversight and control. Equally critical is the data infrastructure – GenAI’s output quality depends on well-curated, validated, and trustworthy data sources. Mechanisms like retrieval augmentation further enhance reliability and accuracy. Establishing robust systems to manage and validate these data sources is vital for achieving dependable and actionable GenAI outcomes.
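The retrieval-augmentation idea above can be sketched briefly. This is a toy illustration under stated assumptions: the term-overlap scoring and in-memory list stand in for a real embedding model and vector database, and the curated passages are invented examples of a validated document store.

```python
# Minimal retrieval-augmentation sketch: the model is constrained to
# curated, validated passages instead of relying on its memory alone.
def retrieve(query: str, store: list[str], k: int = 2) -> list[str]:
    """Rank curated passages by naive term overlap with the query
    (a stand-in for embedding similarity search)."""
    q_terms = set(query.lower().split())
    scored = sorted(
        store,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, store: list[str]) -> str:
    """Ground the generation step in retrieved, validated context;
    the output would still pass through human-in-the-loop review."""
    context = "\n".join(retrieve(query, store))
    return (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

# Hypothetical validated document store.
curated_store = [
    "Protocol v2 requires adverse events to be reported within 24 hours.",
    "Site monitoring visits occur quarterly under the current SOP.",
    "The primary endpoint is change in HbA1c at week 26.",
]
prompt = build_prompt("When must adverse events be reported?", curated_store)
```

The design point is the one made above: output quality is bounded by the quality of the curated sources the retrieval step draws from, which is why governance of those sources matters as much as the model itself.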

What have companies learned from failed AI implementations?

Progressive pharmaceutical companies are shifting from fragmented point solutions to a platform-based approach, focusing on long-term AI-first strategies. Agent-based AI is increasingly seen as capable of managing mission-critical application logic, making point solutions redundant. Platforms enable seamless data federation, centralized governance, and rapid, scalable solution development, ensuring reliable and efficient integration across enterprises.

Effective AI integration hinges on robust data infrastructure. Many AI projects fail due to fragmented systems unable to meet the demands of advanced models, large-scale data processing, model retraining, and continuous monitoring. Further, GenAI requires a balanced approach that intersects technology with domain expertise. An organizational framework that leverages this intersection is crucial for realizing its full potential. Companies can start with low-risk pilot initiatives aligned with broader enterprise goals, categorizing use cases into clusters and refining measurable ROI frameworks for assessing impact.

The industry is also evolving by merging pharmaceutical expertise with data science capabilities, prompting organizational restructuring. Key roles like Chief Digital Officer and Chief Technology Officer are being introduced alongside SWAT teams to evaluate GenAI use cases. To foster collaboration, operational frameworks now blend traditional roles in R&D, medical, and commercial functions with new positions integrating epidemiology, statistics, IT, and data science. Finally, incorporating Human-in-the-Loop (HITL) mechanisms ensures ethical compliance and adds context to critical projects. This collaborative model enhances trust in AI systems, while human expertise remains central to decision-making processes for maintaining accuracy, compliance, and nuance.


Moe Alsumidaie is Chief Editor of The Clinical Trial Vanguard. Moe has decades of experience in the clinical trials industry and also serves as Head of Research at CliniBiz and Chief Data Scientist at Annex Clinical Corporation.