12  Epistemology and Transparency

Note: Contents
1. AI and the construction of scientific "truth"
2. Interpretability and eXplainable Artificial Intelligence (XAI)
3. The limits of opaque architectures
4. Transparency as an epistemic condition

Scientific research has historically been based on shared epistemological principles: verifiability, falsifiability and methodological transparency.

These criteria, developed within the framework of logical positivism and subsequently consolidated through Karl Popper’s falsificationism, have ensured that scientific results are cumulative, subject to critical scrutiny and open to intersubjective control.

The introduction of AI, and in particular generative systems based on LLMs, now forces us to rethink these assumptions.
AI does not merely provide advanced computational tools, but intervenes directly in the production of knowledge by selecting, synthesising and organising sources.

This function reorients the processes of constructing “scientific truth”, redefining the relationship between data, theories and research communities.

12.1 AI and the construction of “scientific truth”

One of the most significant transformations concerns the way in which AI participates in the selection and synthesis of sources.

Generative systems can draw on large text corpora, identify correlations and propose coherent syntheses, presenting them as information with scientific value.

👉 However, the probabilistic logic that governs these models does not coincide with traditional epistemic criteria: what is produced is not the result of deductive or inductive reasoning, but the projection of a statistical distribution learned from the data.
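
To make this concrete, the following minimal Python sketch (with an invented four-word vocabulary and invented scores) shows generation as sampling from a learned next-token distribution rather than as deductive or inductive reasoning over evidence.

```python
import numpy as np

# Toy illustration: text generation as sampling from a learned
# next-token probability distribution, not as reasoning over evidence.
# The vocabulary and the logits below are invented for the example.
vocab = ["confirms", "suggests", "refutes", "proves"]
logits = np.array([2.1, 1.4, 0.3, 1.9])   # hypothetical model scores

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

probs = softmax(logits)
rng = np.random.default_rng(seed=0)

for token, p in zip(vocab, probs):
    print(f"{token:10s} p = {p:.3f}")
print("sampled continuation:", rng.choice(vocab, p=probs))
```

The model emits whichever continuation the distribution favours; nothing in this procedure checks the resulting statement against evidence.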

The notion of “scientific truth” risks being progressively replaced by a form of “linguistic plausibility”, a discourse which, while presenting itself with syntactic coherence and rhetorical force, does not necessarily guarantee the verifiability of its content.

This dynamic requires a clear distinction between scientifically validated knowledge and output generated by AI systems, to prevent the rhetorical power of AI-generated language from being mistaken for scientific evidence.
From this perspective, AI does not merely produce texts, but acts as a cognitive filtering device that implicitly guides the hierarchy of sources, the salience of concepts and the interpretative trajectories considered relevant.

The result is a redefinition of the criteria of epistemic relevance, capable on the one hand of broadening access to knowledge and on the other of excluding minority perspectives or sources not represented in the training datasets.

12.2 Interpretability and eXplainable Artificial Intelligence (XAI)

The problem of the opacity of deep learning algorithms has given rise to a specific field of study known as eXplainable Artificial Intelligence (XAI).

The aim of this approach is to provide methodological and technical tools that make the decision-making processes of models “interpretable”, allowing researchers to understand why a particular inference or synthesis has been produced.

👉 Interpretability is not only about the readability of the model, but also has epistemological significance: without the ability to explain the reasons for an output, the traceability necessary to recognise a result as scientific is lost.

Key strategies include feature attribution techniques (such as LIME and SHAP), which identify which variables have contributed most to a prediction, and intrinsically interpretable models, designed to prioritise readability over computational complexity.

Although none of these approaches completely eliminates opacity, they provide sufficient levels of transparency to reintroduce accountability and verifiability criteria into research processes.
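
As an illustration of feature attribution, the sketch below uses the shap library on a standard scikit-learn dataset to attribute a tree ensemble’s predictions to individual input features; the dataset, model and parameters are placeholders chosen only for the example, not a prescribed workflow.

```python
# Minimal feature-attribution sketch using the shap library
# (assumes `pip install shap scikit-learn`).
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
X, y = data.data, data.target

# An opaque ensemble model whose predictions we want to explain.
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer estimates Shapley values: each feature's contribution
# to pushing a prediction away from the average prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# Rank features by mean absolute attribution across the sample.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance),
                          key=lambda pair: -pair[1]):
    print(f"{name:6s} {score:.3f}")
```

An attribution of this kind does not open the model itself, but it restores a minimal form of traceability: researchers can state which inputs drove a given output and subject that claim to scrutiny.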

12.3 The limits of opaque architectures

Despite advances in XAI, the most advanced architectures remain largely “opaque”.
Deep learning models, especially those based on billions of parameters, cannot be fully interpreted by either developers or users.

This black box condition introduces an epistemological divide. Science, traditionally anchored in the reconstructibility of processes, finds itself dependent on systems that produce inferences whose internal mechanisms cannot be explained.

The problem is not only technical but also “conceptual”.
AI does not operate through logical reasoning but through the reproduction of statistical patterns.
As a result, it does not distinguish between what is epistemically grounded and what is only probabilistically plausible.
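
A rough illustration of this point, assuming the Hugging Face transformers library and the publicly available GPT-2 checkpoint: the average log-likelihood a language model assigns to a sentence measures how typical the word sequence is, not whether its content is true.

```python
# Rough sketch (assumes `pip install torch transformers` and the
# small GPT-2 checkpoint): the score below reflects linguistic
# plausibility, not the truth of the claim being expressed.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def avg_log_likelihood(text: str) -> float:
    """Mean per-token log-probability the model assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    return -out.loss.item()   # loss is the mean negative log-likelihood

true_claim = "Water boils at 100 degrees Celsius at sea level."
false_claim = "Water boils at 70 degrees Celsius at sea level."

print("true claim :", avg_log_likelihood(true_claim))
print("false claim:", avg_log_likelihood(false_claim))
# A fluent but false sentence can still receive a high score;
# nothing in the metric encodes epistemic grounding.
```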

Warning: This characteristic opens up the possibility of systematic errors, textual hallucinations and bias amplification, phenomena that undermine scientific credibility if not accompanied by rigorous control and validation practices.

12.4 Transparency as an epistemic condition

The issue of transparency is not limited to the technical readability of algorithms, but concerns the entire cycle of scientific production.
It becomes necessary to make explicit the criteria for data selection, processing methods and circumstances of AI use, so that the results can be subjected to intersubjective control.
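
One possible form such documentation could take is a structured, machine-readable record accompanying a publication. The sketch below is a hypothetical example; its field names do not correspond to any established standard.

```python
# Minimal sketch of a machine-readable AI-use disclosure record;
# the field names are hypothetical, not taken from any official standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseDisclosure:
    model_name: str               # generative system used
    model_version: str
    provider: str
    tasks: list                   # what the system was used for
    data_selection_criteria: str  # how input sources were chosen
    prompts_archived: bool        # whether prompts and outputs are retained
    human_verification: str       # how outputs were checked

record = AIUseDisclosure(
    model_name="example-llm",
    model_version="2024-01",
    provider="example-provider",
    tasks=["literature synthesis", "language editing"],
    data_selection_criteria="peer-reviewed articles from domain databases",
    prompts_archived=True,
    human_verification="all claims checked against cited primary sources",
)

# Serialise for supplementary material or an institutional audit log.
print(json.dumps(asdict(record), indent=2))
```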

Transparency assumes an eminently epistemic function, since only by ensuring the possibility of collective reconstruction can AI outputs be prevented from turning into opaque products, shielded from critical scrutiny.

Alongside the individual responsibility of researchers, who must accurately document the use of AI in their work, there is also institutional responsibility.

International organisations, funding agencies and academic communities have begun to define standards and guidelines aimed at preserving epistemic integrity.

👉 These directives require systematic disclosure of AI use, together with auditing and reporting protocols.
Publishing houses are also moving in this direction, requiring precise declarations on the use of generative systems in writing and editing processes.

12.5 Further Reading

See How the machine ‘thinks’: Understanding opacity in machine learning algorithms

See Towards A Rigorous Science of Interpretable Machine Learning

See Connecting ethics and epistemology of AI

See Digital epistemology: evaluating the credibility of knowledge generated by AI

See Towards a Manifesto for Cyber Humanities: Paradigms, Ethics, and Prospects

See The mythos of model interpretability

See Explanation in Artificial Intelligence: Insights from the Social Sciences