AI regulation is no longer a future concern — it’s here. The EU AI Act is the most comprehensive example, now in force with penalties up to €35 million or 7% of global turnover. But it’s part of a global trend: Canada’s AIDA, China’s AI regulations, sector-specific rules from US federal agencies, and a growing patchwork of state-level legislation in the US are all moving in the same direction. For enterprises deploying AI agent systems, the question is not whether regulation will affect you, but which regulations apply and how to prepare.
The good news: the emerging regulatory frameworks share common principles. The EU AI Act’s risk classification system is the most mature model, and understanding it gives you a head start on compliance globally.
Risk classification: where do your agents fall?
The EU AI Act — the most detailed regulatory framework to date — classifies AI systems into four risk tiers. This classification approach is influential and likely to be adopted or adapted by other jurisdictions. Most enterprise agent systems will fall into one of two categories.
Limited risk. This covers the majority of enterprise agents — systems that assist with internal workflows, generate content, summarise documents, or interact with customers in clearly AI-mediated contexts. The main obligation here is transparency: users must be informed they’re interacting with an AI system. If your agent generates text or images, the output should be identifiable as AI-generated.
High risk. Agents that operate in specific regulated domains fall into this category. If your agent makes or materially influences decisions about employment (screening CVs, evaluating performance), creditworthiness (assessing loan applications), insurance (calculating premiums or processing claims), education (grading, admissions), or law enforcement, it’s high risk. High-risk systems face substantially more demanding requirements.
Unacceptable risk. Some applications are prohibited outright — social scoring, real-time biometric surveillance in public spaces, and manipulation of vulnerable groups, among others. These are unlikely to apply to typical enterprise agent deployments, but know the boundaries.
If you’re unsure where your system falls, start by mapping each agent to its specific use case and the decisions it influences. The classification is about application, not technology.
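One way to start that mapping is a simple agent register: an inventory that records each agent, the use case it serves, and the risk tier that follows from it. The sketch below is illustrative — the agent names, use cases, and register structure are hypothetical, and the tier assignments would need legal review in practice.

```python
from enum import Enum

class RiskTier(Enum):
    """EU AI Act risk tiers relevant to enterprise agents."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical inventory: classification follows the application
# (the decisions the agent influences), not the underlying model.
AGENT_REGISTER = {
    "cv-screening-agent": {"use_case": "employment screening", "tier": RiskTier.HIGH},
    "loan-triage-agent": {"use_case": "creditworthiness assessment", "tier": RiskTier.HIGH},
    "docs-summary-agent": {"use_case": "internal document summarisation", "tier": RiskTier.LIMITED},
    "support-chatbot": {"use_case": "customer support, clearly AI-mediated", "tier": RiskTier.LIMITED},
}

def high_risk_agents(register: dict) -> list[str]:
    """Return the agents that carry the full high-risk compliance workload."""
    return [name for name, entry in register.items() if entry["tier"] is RiskTier.HIGH]
```

Even a register this minimal forces the right question for each agent: what decision does it influence, and in what domain?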
Transparency requirements
Transparency is a common thread across AI regulations worldwide. Under the EU AI Act, AI systems that interact with people or generate content must meet transparency obligations regardless of their risk tier — and similar requirements are emerging in other frameworks.
Disclosure. People interacting with your agent must know they’re interacting with AI. This applies to chatbots, voice agents, and any system where a person might reasonably think they’re communicating with a human.
AI-generated content labelling. If your agent generates or manipulates text, images, audio, or video, the output should be marked as AI-generated. The technical implementation is still being standardised, but the principle is established.
For enterprise agents, this usually means clear UI labelling: “This response was generated by an AI assistant.” It’s straightforward to implement and shouldn’t be controversial. Transparency builds trust.
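In code, this can be as simple as wrapping every agent response with both a visible disclosure and machine-readable metadata — the latter helps downstream systems preserve the label. A minimal sketch (the response structure and field names are assumptions, not a standard):

```python
AI_DISCLOSURE = "This response was generated by an AI assistant."

def make_response(text: str) -> dict:
    """Wrap agent output with a visible label and machine-readable metadata.

    The visible label satisfies the user-facing disclosure requirement;
    the metadata lets downstream systems keep the AI-generated marking.
    """
    return {
        "content": f"{text}\n\n[{AI_DISCLOSURE}]",
        "metadata": {"ai_generated": True, "generator": "agent"},
    }
```

The key design choice is to attach the label at the response boundary, so no code path can return unlabelled output.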
Human oversight obligations
Regulations increasingly require that high-risk AI systems be designed to allow effective human oversight. The EU AI Act codifies this explicitly, and similar principles appear in emerging frameworks elsewhere. For agent systems, this means specific engineering decisions.
Human-in-the-loop for consequential decisions. If your agent makes decisions that significantly affect individuals, a human must be able to review and override those decisions. This doesn’t mean a human must approve every action — it means the system must be designed so that meaningful human review is possible.
Override mechanisms. Operators must be able to intervene in or halt the AI system’s operation. For agents, this means kill switches, the ability to pause execution mid-workflow, and clear escalation paths when the agent encounters situations outside its defined scope.
Interpretability. The humans overseeing the system need to understand what it’s doing. This requires that agent systems produce outputs and logs that a trained operator can interpret — not just raw model outputs, but structured reasoning traces that explain why the agent took a particular action.
Documentation and logging
Across regulatory frameworks, high-risk systems must maintain technical documentation and logs that demonstrate compliance. For agent systems, this translates to several concrete requirements.
System documentation. Document the agent’s purpose, capabilities, limitations, and intended use cases. Document the model(s) it uses, how it was trained or fine-tuned, and the data it has access to. This is not optional paperwork — it’s the basis for your conformity assessment.
Decision logging. Every decision the agent makes that falls under the high-risk classification must be logged with sufficient detail to enable post-hoc review. Record inputs, reasoning steps, tool calls, and outputs. Retain these logs for the period specified by your applicable regulations.
Risk management. High-risk systems require an ongoing risk management process — identify risks, implement mitigations, monitor effectiveness, iterate. For agents, this means tracking failure modes, measuring accuracy and fairness, and having a process for addressing issues when they’re discovered.
Data governance. Document the data your agent processes, how it’s sourced, and how quality is maintained. If you fine-tuned the model, document the training data and its provenance.
This is a competitive advantage
The instinct in some quarters is to view AI regulation as a burden. We think that’s wrong.
Companies that build compliant agent systems now gain several advantages. First, market access — regulated markets represent billions of potential customers, and non-compliant AI systems will be excluded. The EU alone is a 450-million-person market. Second, customer trust — enterprises buying AI solutions increasingly ask about compliance as part of procurement. Having a clear compliance story is a sales advantage. Third, operational quality — the requirements around documentation, logging, human oversight, and risk management are just good engineering practices. Systems built to these standards are more reliable, more debuggable, and more maintainable.
Regulation sets a floor for quality and safety. Companies that were already building agent systems responsibly will find that most of the requirements align with what they’re already doing. Companies that weren’t — this is the push to start.
What to do now
Start with classification. Map every agent system you operate or plan to deploy to a risk category. For high-risk systems, begin the documentation and logging work immediately — retrofitting these capabilities is far more expensive than building them in.
For all systems, implement transparency measures. Label AI interactions. Build audit trails. Design for human oversight.
The enterprises that treat AI regulation as a framework for building better AI systems — rather than a compliance obstacle to route around — will be the ones that lead in any market.