13 Implications of the use of AI in research
The question of epistemology and transparency in the use of AI in research processes is a decisive factor in the very legitimacy of contemporary science.
The adoption of Generative Systems is not merely an instrumental update, but has a profound impact on the logic of knowledge production, altering the criteria by which knowledge is constructed, validated and shared.
GAI, through its ability to select, organise and synthesise sources, directly intervenes in the definition of “scientific truth”, expanding access to information while at the same time introducing new areas of opacity and fragility in terms of verifiability.
The tools developed in the field of Explainable AI (XAI) are a significant attempt to restore transparency to otherwise uninterpretable processes, but they fail to bridge the gap between the epistemic needs of the scientific community and the opaque complexity of deep architectures.
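To make the XAI attempt mentioned above concrete, the following is a minimal sketch of one common post-hoc explanation technique, permutation importance: the accuracy of an otherwise opaque model is measured before and after shuffling a single input feature, and the drop indicates how much the model relies on that feature. The model, data and function names here are hypothetical illustrations, not part of any real research system.

```python
import random

def black_box_model(row):
    # Stand-in for an opaque model: it predicts 1 when feature 0
    # exceeds a threshold, and silently ignores feature 1.
    return 1 if row[0] > 0.5 else 0

def accuracy(model, X, y):
    # Fraction of examples the model classifies correctly.
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    # Shuffle one feature column and report the resulting accuracy drop.
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    shuffled = [row[:] for row in X]
    column = [row[feature] for row in shuffled]
    rng.shuffle(column)
    for row, value in zip(shuffled, column):
        row[feature] = value
    return baseline - accuracy(model, shuffled, y)

# Synthetic data whose labels depend only on feature 0.
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [1 if row[0] > 0.5 else 0 for row in X]

drop0 = permutation_importance(black_box_model, X, y, feature=0)
drop1 = permutation_importance(black_box_model, X, y, feature=1)
print(drop0, drop1)  # the decisive feature shows a larger accuracy drop
```

Techniques of this kind explain behaviour from the outside without opening the model itself, which is precisely why, as noted above, they narrow but do not close the gap between epistemic needs and deep architectures.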
This transformation requires the Academic Community to reflect on the soundness of the founding principles of modern science:
- the verifiability of results
- the methodological transparency
- the neutrality of sources
- the individual and collective responsibility in the evaluation phases.
Introducing AI Systems, particularly their generative applications, tests these assumptions along four main lines:
- the Epistemological Dimension of Transparency
- the issue of Bias and Source quality
- the redefinition of Accountability in peer review processes
- towards a renewed Operational Ethic.
The use of GAI in research cannot be regarded as mere technical support, but must be treated as an epistemic object in its own right, capable of influencing the entire knowledge ecosystem.
👉 On the one hand, AI significantly expands the analytical and synthetic capabilities of researchers, offering tools capable of processing huge amounts of data and generating hypotheses.
👉 On the other hand, it introduces concrete risks, such as algorithmic opacity, the propagation of errors or biases, the weakening of traditional accountability mechanisms and the potential erosion of public trust in science.
Analysing these implications in depth means maintaining a constant tension between technological innovation and scientific integrity as a guiding principle, so that AI can be integrated without compromising the principles that guarantee the credibility of research.
13.1 Key References
UNESCO Recommendation on the Ethics of Artificial Intelligence
First global ethical framework on the use of AI, with principles on transparency, accountability and inclusion. (2021, international recommendation)

EU Artificial Intelligence Act (AI Act)
Binding regulation governing AI systems, including those used in research, with documentation and monitoring requirements. (2024, legislation)

EUI - European University Institute - Guidelines for the Responsible Use of Artificial Intelligence for Research
Document specific to the research context: emphasises disclosure, traceability and scientific integrity. (2024, academic guidelines)

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?
Critical analysis of the risks of bias, opacity and epistemic impact of large language models. (2021, conference paper, FAccT)

OECD Principles on Artificial Intelligence
Policy principles for the responsible use of AI, also adopted by OECD member countries. (2019, policy principles)