The use of artificial intelligence (AI) in high-risk domains such as healthcare and human subject research raises critical ethical tensions, particularly between the ‘technological ought’, which prioritizes efficiency, and the ‘ethical ought’, which focuses on autonomy, informed consent, and the principle of Respect for Persons. This article applies the Social Construction of Technology (SCOT) theory to analyze how these tensions are negotiated within legal instruments such as Articles 22 and 25 of the GDPR and Recital 27 of the EU AI Act. We explore how consent and autonomy, core expressions of the Kantian Principle of Respect for Persons (PRP), are socially constructed through interactions among regulators, developers, and civil society. SCOT reveals that legal protections are not static but reflect competing visions of accountability, transparency, and moral agency. Recital 27 of the EU AI Act, by exempting research and development applications, illustrates how anticipatory governance can be selectively applied, privileging innovation while potentially sidelining early-stage ethical safeguards in opaque domains such as genomic diagnostics. We argue that meaningful consent, sustained human oversight, and an ethical commitment to respecting persons are essential to upholding the ethical ought in AI system development. This article asks: how do legal norms around autonomy and consent become contested, negotiated, and stabilized through socio-technical processes in AI regulation, and how might this reshape our understanding of moral agency in law?
SCOT theory in AI governance; GDPR Articles 22 and 25; EU AI Act Recital 27; ethical and technological ‘ought’; moral agency in algorithmic regulation