Artificial integrity: social performativity of honesty in the age of generative AI
1 Theology/Philosophy Department, De La Salle-College of Saint Benilde, Manila, Philippines
2 Philosophy Department, Providence College, Providence, USA
Abstract

This article analyzes the paradoxical phenomenon in which students extensively use generative AI for academic work while sincerely maintaining that their submissions are honest and original. Beyond simple confusion or concealment, it introduces artificial integrity: a techno-ethical dilemma arising from technologically scaffolded knowing self-deception. Drawing on dramaturgical analysis, narrative identity theory, and recent empirical research, a framework is developed that reveals how integrity is socially performed and stabilized within ambiguous institutional ecologies. The analysis demonstrates that students, while remaining aware of AI's core intellectual labor, sustain credible honesty claims through epistemic layering, which manifests in strategic disclosure, resistance to transparency, and persistent anxiety. This condition is co-produced by institutional designs that prioritize polished outputs over visible process, creating a rationalization space in which traditional legal-ethical frameworks for authorship and accountability break down. Rather than advocating the policing of AI use, this article argues that institutions must develop clear, legally sound AI-use policies and redesign assessment to mandate transparency, through methods such as process portfolios, reflective annotations, and structured disclosure protocols, thereby resetting the academic stage to reward visible cognition over performative authorship.

Keywords

artificial integrity; academic integrity; generative AI; performativity; dramaturgy; higher education; knowing self-deception; epistemic layering
