Artificial Intelligence (AI) is playing an increasingly significant role in scientific research and development (R&D), particularly within the biopharmaceutical and healthcare sectors. By enabling rapid data analysis and providing sophisticated predictive models, AI technologies are reshaping the way new treatments are discovered, developed, and delivered. However, this transformative potential also introduces complex regulatory considerations, particularly in the context of the European Union’s Artificial Intelligence Act (EU AI Act). This article examines the application of AI in scientific R&D, the exclusion set out in Article 2(6) of the EU AI Act, and the stance of the European Federation of Pharmaceutical Industries and Associations (EFPIA) on these regulatory measures. It also addresses how AI systems that process personal data remain subject to the General Data Protection Regulation (GDPR).
The Impact of AI in Scientific Research and Development
AI technologies are revolutionizing the medicinal product lifecycle, from drug discovery to clinical trials and post-market surveillance. Machine learning algorithms can analyze extensive biomedical data to identify potential drug candidates, predict their efficacy, and optimize clinical trial designs, which accelerates drug development and reduces associated costs. Additionally, AI tools are increasingly used for real-time patient monitoring and data management, providing valuable insights into treatment safety and effectiveness.
Despite these advantages, integrating AI into scientific R&D also presents several challenges. AI systems must comply with data privacy regulations, uphold patient safety, and maintain clinical standards. Moreover, ethical considerations, such as transparency, accountability, and fairness, are crucial to fostering trust in AI-driven innovations.
Article 2(6) of the EU AI Act: Exclusion of AI in Scientific Research
The EU AI Act aims to establish a comprehensive regulatory framework for AI, designed to protect safety and fundamental rights while promoting innovation. A key component of the Act is its risk-based approach, which classifies AI applications into various risk levels with corresponding regulatory requirements.
Article 2(6) of the EU AI Act excludes from the Act’s scope AI systems and models, including their output, that are specifically developed and put into service for the sole purpose of scientific research and development. This exclusion acknowledges the unique nature of scientific research, which often involves experimentation, hypothesis testing, and innovative activities under conditions that may not align with traditional regulatory frameworks. It is intended to encourage scientific freedom and innovation while ensuring responsible AI development.
However, this exclusion has sparked debate. While some argue that it provides necessary flexibility for researchers, others raise concerns about potential regulatory gaps, especially if AI tools initially developed for research are later deployed in clinical or commercial settings without sufficient oversight.
EFPIA’s Position on the AI Act and Article 2(6)
The European Federation of Pharmaceutical Industries and Associations (EFPIA) supports the EU AI Act’s intent to foster innovation and uphold scientific freedom without undermining R&D activities in the pharmaceutical sector. EFPIA endorses the exclusions outlined in Recital 25 and Articles 2(6) and 2(8) of the Act, under which AI systems and models developed and put into service solely for scientific research and development, as well as pre-market research, testing, and development activities, fall outside the Act’s regulatory scope.
EFPIA specifically argues that this exemption applies to AI-based drug development tools used exclusively in the R&D of medicines. The organization emphasizes that these tools are designed solely to support the development of new medicinal products and should therefore be excluded from the Act’s broader regulatory requirements. EFPIA believes such exclusions are vital to ensuring that regulatory measures do not stifle innovation or impede the development of new therapies.
Moreover, EFPIA stresses the importance of clear definitions and boundaries for these exemptions to prevent ambiguity and to ensure that AI tools are appropriately regulated when they transition from research to clinical or commercial use. The organization points out that even if the exemption did not apply, the majority of AI uses in medicine R&D would not qualify as high-risk AI under the current EU AI Act: most AI-enabled tools used in this context neither fall under the Union legislation referenced in Annex I (such as the medical devices framework) nor appear among the high-risk uses listed in Annex III, meaning they would not require CE marking.
GDPR Compliance for AI Systems Using Personal Data
While AI systems and models may be excluded from the scope of the EU AI Act under Article 2(6), they remain subject to other regulatory frameworks, in particular the General Data Protection Regulation (GDPR) whenever personal data is involved. The GDPR sets out strict requirements for how personal data is collected, processed, stored, and shared within the EU, emphasizing lawfulness, transparency, purpose limitation, and data protection.
In the context of scientific research and R&D, AI systems often handle substantial amounts of personal data, especially in healthcare and pharmaceutical settings. Therefore, even if these systems are not governed by the AI Act, they must comply with GDPR standards. This includes ensuring lawful, transparent, and purpose-limited data processing, along with robust safeguards to protect individuals’ rights and privacy.
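To make these obligations concrete, the sketch below illustrates two common technical safeguards applied before research data reaches an AI model: data minimization (keeping only the attributes needed for the stated purpose) and pseudonymization of direct identifiers. It is a minimal sketch under stated assumptions rather than a compliance recipe: the field names, the keyed-hash (HMAC) approach, and the prepare_research_records helper are hypothetical, and a real deployment would be shaped by the controller’s own data protection impact assessment.

```python
# Minimal sketch: data minimization + pseudonymization before model training.
# Field names and helpers are hypothetical; this is not legal advice.
import hmac
import hashlib

# Under GDPR Article 4(5), the "additional information" needed to re-identify
# subjects (here, the key) must be kept separately under technical safeguards.
PSEUDONYMIZATION_KEY = b"replace-with-key-from-a-secrets-manager"

# Purpose limitation: only attributes needed for the stated research purpose.
FIELDS_NEEDED_FOR_STUDY = {"age_band", "diagnosis_code", "treatment_arm", "outcome"}


def pseudonymize_id(patient_id: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same patient always maps to the same token, so records can still be
    linked for longitudinal analysis without exposing the raw identifier.
    """
    return hmac.new(PSEUDONYMIZATION_KEY, patient_id.encode(), hashlib.sha256).hexdigest()


def prepare_research_records(raw_records: list[dict]) -> list[dict]:
    """Minimize and pseudonymize records before they reach the AI pipeline."""
    prepared = []
    for record in raw_records:
        minimized = {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_STUDY}
        minimized["subject_token"] = pseudonymize_id(record["patient_id"])
        prepared.append(minimized)
    return prepared


if __name__ == "__main__":
    raw = [{
        "patient_id": "NL-000123",
        "name": "J. Doe",  # direct identifier, dropped by minimization
        "age_band": "40-49",
        "diagnosis_code": "C50.9",
        "treatment_arm": "A",
        "outcome": "responder",
    }]
    print(prepare_research_records(raw))
```

Note that pseudonymized data of this kind remains personal data under the GDPR, because the key holder can re-identify subjects; the technique reduces risk but does not remove the data from the Regulation’s scope.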
Navigating GDPR obligations alongside the exclusions under the EU AI Act creates a complex regulatory landscape for researchers and developers of AI systems. It necessitates a careful balance between fostering innovation and ensuring adherence to ethical standards and legal obligations.
Balancing Innovation and Regulation in AI
The regulatory challenges highlighted by the EU AI Act and GDPR underscore the complexities of integrating AI into sectors like healthcare and pharmaceuticals. While the exclusion for AI systems used solely for scientific research under Article 2(6) of the EU AI Act provides some flexibility, concerns remain about potential regulatory gaps and the effective application of these exclusions.
A balanced approach to AI regulation is essential: one that promotes innovation while ensuring safety, efficacy, and ethical standards. Tailored guidance and a flexible regulatory framework are required to address the unique challenges the healthcare and life sciences sectors face when implementing AI technologies.
EFPIA advocates for a dynamic, risk-based approach to AI governance, recognizing that traditional policy instruments may struggle to keep pace with rapid technological advancements. The organization calls for nuanced guidance that considers the specifics of AI use and the context in which it is deployed, ensuring appropriate human oversight and distinguishing between different roles and impacts of AI in medicine development.
Conclusion
AI has the potential to significantly advance scientific research and development, particularly in the pharmaceutical and healthcare sectors. However, integrating AI technologies also presents substantial regulatory challenges, especially with respect to the EU AI Act and the GDPR. While the exclusion under Article 2(6) of the AI Act provides needed flexibility for researchers, EFPIA and other stakeholders stress the importance of a clear, nuanced regulatory framework that balances innovation with safety, ethical considerations, and data privacy.
Looking ahead, collaboration among policymakers, industry leaders, and researchers is vital to developing a regulatory environment that fosters innovation while safeguarding patient safety, rights, and privacy. By achieving this balance, the full potential of AI can be harnessed to drive scientific breakthroughs and improve healthcare outcomes.
Diana is the Founder & Managing Director at RD Privacy and a contributing columnist, specializing in privacy for the pharmaceutical and life sciences sectors, particularly small biopharma companies, with extensive experience as a European qualified privacy attorney and Data Protection Officer (DPO).