European AI Standards: What CEN-CLC/JTC 21 Is Doing and Why It Matters
Executive Summary
CEN-CLC/JTC 21 is a behind-the-scenes EU standards body that plays a key role in how artificial intelligence is governed in practice.
As the EU advances the AI Act, technical standards are needed to translate legal rules into real-world AI design and use.
JTC 21 bridges law and implementation by developing those technical standards.
The committee is expected to publish key standards by the end of 2026, which will directly support the AI Act.
As a result, JTC 21’s work will become central to how EU AI regulation actually functions.
Context and Recent Developments
CEN-CLC/JTC 21 is a joint technical committee set up by Europe’s main standardization bodies, CEN and CENELEC. Its job is to develop standards related to AI, covering things like shared terminology, risk management, trustworthiness, system robustness, and human oversight.
Its importance has grown alongside the EU’s broader push to regulate AI across industries. Recent policy moves (including the European Commission’s proposed Digital Omnibus in November 2025) have reinforced the role of harmonized standards as a key part of EU digital regulation.
Under the EU’s “New Legislative Framework,” following harmonized standards can give companies a presumption that they’re complying with the law. That makes JTC 21 a crucial link between high-level legal principles and concrete technical requirements.
At the same time, AI is evolving fast. Foundation models, automated decision-making systems, and data-heavy applications are moving much quicker than traditional standard-setting processes can follow. This creates real tension between the speed of the technology and the slower pace of consensus-based standardization.
Standardization in a Regulatory Context
Unlike purely voluntary technical standards, the work of JTC 21 is increasingly tied to binding EU law. When the European Commission mandates a standard and later references it in the EU’s Official Journal, that standard can start to function like a regulatory tool. This blurring of lines raises some important questions. Who holds responsibility when standards shape legal compliance? How transparent is the process? And who has the most influence when technical decisions start to carry legal weight?
Europe vs. Global Standards Landscape
JTC 21 doesn’t operate in isolation. It works alongside international bodies like ISO/IEC JTC 1/SC 42, which also develops AI standards. While there is coordination, there are clear differences in approach. The EU places much stronger emphasis on fundamental rights and risk-based governance. These differences reflect broader global dynamics, where standards are not just technical tools but also ways of exporting regulatory values and influence.
Who Gets a Seat at the Table?
JTC 21 brings together a wide mix of participants: national standards bodies, industry representatives, technical experts, and to a lesser degree, civil society groups. In theory, participation is open. In practice, it often depends on resources, time, and technical expertise.
This tends to favor large companies and well-funded organizations, while smaller actors and public interest groups may struggle to engage meaningfully. The consensus-based process encourages compromise, but it can also smooth over difficult issues, especially when ethical or societal concerns clash with engineering-focused perspectives. Because the work is driven largely by technical experts, certain ways of thinking, such as risk management frameworks, can dominate, while legal, social, or rights-based perspectives may receive less attention.
Institutionally, JTC 21 sits within a complex, multi-level system involving EU institutions, national authorities, and international standard-setters. It’s not fully independent, but it’s also not directly controlled. Its influence depends heavily on political mandates and how regulators and companies choose to use its standards.
Implications and Open Questions
The growing role of CEN-CLC/JTC 21 shows just how central technical standards have become to AI governance in Europe. Even though they’re developed outside traditional legislative processes, standards can strongly shape how legal obligations are understood and applied.
One example is prEN 18286, a standard JTC 21 is currently developing that may end up shaping how the EU AI Act is implemented in practice. However, it addresses AI quality management mainly during the early stages of a system's lifecycle (design, dataset collection and governance, development and validation). It says little about what happens after deployment, such as training-serving skew, drift (data, model, or concept), and edge cases that only emerge in production. These real-world issues remain largely unaddressed in the current EU standards framework.
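To make the post-deployment gap concrete, the sketch below shows one common heuristic for detecting data drift: comparing the distribution of a feature at training time against its distribution in production using the Population Stability Index (PSI). This is a minimal, illustrative example only; the function, thresholds, and synthetic data are assumptions of this sketch, not anything prescribed by prEN 18286 or the AI Act.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    A rough rule of thumb (an industry convention, not a standard):
    PSI < 0.1 suggests stability, PSI > 0.25 suggests significant drift.
    """
    lo, hi = min(expected), max(expected)
    # Equal-width bin edges derived from the training-time (expected) sample.
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def bucket_fracs(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(x > e for e in edges)] += 1
        n = len(sample)
        # Floor at a tiny value so the log term below is always defined.
        return [max(c / n, 1e-6) for c in counts]

    e_frac = bucket_fracs(expected)
    a_frac = bucket_fracs(actual)
    return sum((a - e) * math.log(a / e) for e, a in zip(e_frac, a_frac))

random.seed(0)
train = [random.gauss(0.0, 1.0) for _ in range(5000)]      # training-time data
prod_same = [random.gauss(0.0, 1.0) for _ in range(5000)]  # stable production data
prod_shift = [random.gauss(0.8, 1.0) for _ in range(5000)] # drifted production data

print(psi(train, prod_same))   # small value: distribution is stable
print(psi(train, prod_shift))  # large value: drift a QMS should catch
```

The point of the sketch is that such a check is cheap to run continuously after deployment, which is precisely the lifecycle stage the current draft standard leaves largely out of scope.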
There are still open questions about whether European AI standards can keep up with rapid technological change, and whether fundamental rights can truly be embedded into technical specifications. The balance between inclusive participation and efficient standard-setting also remains unresolved.
More broadly, JTC 21 raises bigger questions about legitimacy and authority in AI governance. How much power should technical committees have in shaping the real-world meaning of regulatory principles? And how open and contestable should these processes be as they increasingly shape how AI is governed in practice?