Upcoming Session
Thursday, March 26, 2026
12:15
Presented by
François Acquatella (Dauphine - DRM)

Designing Auditable AI for IT General Controls: Evidence from a Governed Small Language Model

Abstract

Auditing increasingly relies on artificial intelligence to analyze large volumes of textual evidence, yet its adoption in assurance environments remains constrained by strict requirements for traceability and reproducibility. This study examines whether a governable Small Language Model (SLM) can support Phase 1 IT General Controls (ITGC) auditing under such constraints. Using a comparative research design, three configurations are evaluated: a deterministically trained SLM, a zero-shot Large Language Model (LLM), and a retrieval-augmented LLM (RAG). The experimental protocol combines frozen inference environments, cross-validation, and local explainability techniques. Results show that the SLM produces fully reproducible predictions and stable explanations, while LLM-based approaches exhibit stochastic variability across runs. The findings highlight a trade-off between contextual flexibility and auditability, and demonstrate that lightweight, controlled architectures can provide reliable decision support in regulated auditing contexts.

About this workshop

The aim of this workshop is to promote technical and practical exchanges between researchers who use NLP methods. Participants are encouraged to walk through code in detail (R/Python), share tips, and discover new methods and models.

Periodicity: Thursdays from 12:15 to 13:30, by videoconference.

To attend, please fill in the form.