Shaping a human-centred European strategy for AI in science
05 June 2025
Coimbra Group’s input to the European Commission’s Call for Evidence for the ‘European Strategy for AI in science – paving the way for a European AI research council’

The Coimbra Group (CG) has submitted its feedback to the European Commission’s call for evidence on the ‘European Strategy for AI in science – paving the way for a European AI research council’. The CG welcomes the opportunity to contribute to the development of the European Strategy for AI in science. As key actors in scientific research and education, universities play a central role in shaping responsible, innovative, and inclusive AI development and application across disciplines.
Their voice is essential in ensuring that AI policies reflect academic values and societal needs. Building bridges for dialogue and cooperation is vital, as universities are uniquely positioned not only to advance AI technologies but also to apply them meaningfully in scientific research and training.
The first section presents a set of considerations and recommendations, followed by concrete feedback on the main challenges, needs, and priorities identified by our member universities to support the uptake of AI in science.
Considerations and Recommendations:
- Prioritise long-term, stable public investment in foundational AI research and infrastructures, while ensuring alignment with core European values such as openness, inclusivity, transparency, and human dignity.
- Promote the development and adoption of open-source AI tools and models to reduce dependence on proprietary systems and strengthen European digital sovereignty.
- Support FAIR-aligned, secure, and inclusive research data ecosystems, including federated infrastructures that allow for cross-border collaboration without compromising data protection or national autonomy.
- Place greater emphasis on hybrid intelligence in the European AI in science strategy, rather than relying solely on data-driven or fully automated approaches.
- Ensure robust funding for interdisciplinary AI education and training, with specific emphasis on cross-disciplinary collaboration between STEM, social sciences, humanities, law, and the arts.
- Recognise the critical role of the social sciences and humanities (SSH) in evaluating societal impacts and shaping ethical, legal, and democratic frameworks for AI in science.
- Embed ethics, transparency, and explainability into AI governance, with clear mechanisms to ensure accountability and public trust across all scientific domains.
- Foster distributed, collaborative governance models that respect institutional autonomy and national diversity while pooling expertise and resources.
- Ensure policy and regulatory clarity, including on the further implementation of the AI Act in research contexts, to support compliance without suppressing innovation.
- Continue and expand initiatives like the ‘Living Guidelines on the Use of Generative AI in Research’, keeping them adaptive, community-driven, and broadly disseminated across disciplines.