13  Implications of the use of AI in research

The question of epistemology and transparency in the use of AI in research processes is a decisive factor in the very legitimacy of contemporary science.

The adoption of Generative Systems is not merely an instrumental update: it profoundly reshapes the logic of knowledge production, altering the criteria by which knowledge is constructed, validated and shared.

GAI, through its capacity to select, organise and synthesise sources, intervenes directly in the definition of “scientific truth”, expanding access to information while at the same time introducing new areas of opacity and fragility in terms of verifiability.

The tools developed in the field of Explainable AI (XAI) represent a significant attempt to restore transparency to otherwise uninterpretable processes, but they have yet to bridge the gap between the epistemic needs of the scientific community and the opaque complexity of deep architectures.
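What XAI tools attempt to recover can be made concrete with a minimal, self-contained sketch of one common post-hoc technique, permutation importance: an auditor who cannot inspect a model's internals shuffles one input feature at a time and measures how much the predictions change. The "model", its hidden weights and the data below are invented purely for illustration.

```python
import random

# A stand-in "black box": the auditor can only query predictions,
# not inspect the logic (here a hidden weighted sum, for illustration).
def black_box(features):
    hidden_weights = [0.8, 0.05, 0.15]  # unknown to the auditor
    return sum(w * x for w, x in zip(hidden_weights, features))

def permutation_importance(model, dataset, feature_idx, seed=0):
    """Shuffle one feature across the dataset and measure the mean
    absolute change in the model's output. A large change means the
    model relies heavily on that feature (a post-hoc XAI estimate)."""
    rng = random.Random(seed)
    baseline = [model(row) for row in dataset]
    column = [row[feature_idx] for row in dataset]
    rng.shuffle(column)
    shift = 0.0
    for row, value, base in zip(dataset, column, baseline):
        perturbed = list(row)
        perturbed[feature_idx] = value
        shift += abs(model(perturbed) - base)
    return shift / len(dataset)

# Synthetic audit dataset of 200 rows with 3 features.
rng = random.Random(42)
data = [[rng.uniform(0, 1) for _ in range(3)] for _ in range(200)]
scores = {i: permutation_importance(black_box, data, i) for i in range(3)}
print(scores)  # feature 0 should dominate, matching the hidden weights
```

The sketch also shows the limit the text describes: the technique reports *that* a feature matters, not *why*, so the epistemic gap between attribution and explanation remains.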

Important: The relationship between the researcher and AI is one of co-agency, in which the human retains the role of guarantor of cognitive and ethical processes, but must contend with a technological agent that introduces new possibilities and, at the same time, new opacity.

This transformation requires the Academic Community to reflect on the soundness of the founding principles of modern science:
- verifiability of results
- methodological transparency
- neutrality of sources
- individual and collective responsibility in the evaluation phases.

Introducing AI Systems, particularly their generative applications, tests these assumptions along four main lines:

  1. the Epistemological Dimension of Transparency
  2. the issue of Bias and Source quality
  3. the redefinition of Accountability in peer review processes
  4. the move towards a renewed Operational Ethic.

The use of GAI in research cannot be considered mere technical support; it must be treated as an epistemic object in its own right, capable of influencing the entire knowledge ecosystem.

👉 On the one hand, AI significantly expands the analytical and synthetic capabilities of researchers, offering tools capable of processing huge amounts of data and generating hypotheses.

👉 On the other hand, it introduces concrete risks, such as algorithmic opacity, the propagation of errors or biases, the weakening of traditional accountability mechanisms and the potential erosion of public trust in science.

Analysing these implications in depth means holding the tension between technological innovation and scientific integrity as a guiding principle, so that AI can be integrated without compromising the principles that guarantee the credibility of research.

13.1 Key References