The EU AI Act entered full application earlier this year, and the first enforcement actions from national supervisory authorities are already landing. Fines for prohibited-practice violations can reach €35 million or 7% of worldwide annual turnover, whichever is higher. For most enterprises, that's not a line item — it's an existential risk.
What the Act actually requires
The Act takes a risk-tiered approach. Most productivity tools in common use — ChatGPT, Claude, Gemini — fall under the "limited risk" or "general-purpose AI" categories. That sounds benign, but the obligations attached to those tiers are non-trivial:
- Transparency: users must know when they are interacting with an AI system and when content is AI-generated.
- Data governance: the provenance, quality, and lawful basis of input data must be documented and traceable.
- Human oversight: decisions informed by AI must remain auditable and contestable.
The twist: if your employees paste personal data, client contracts, or patient information into a third-party LLM, you are the controller under the GDPR for that processing. The AI Act doesn't replace GDPR — it compounds it.
Why existing AI tools fall short
Enterprise deals from the major AI providers all include some version of "we don't train on your data." That promise addresses one risk, but not the ones that matter most under EU law:
- Data still crosses into third countries for inference.
- Logs, even short-retention ones, still exist — and are still discoverable.
- The supply chain of sub-processors is opaque and shifts without notice.
"Trust us" is not a lawful basis under Article 6. And Article 9 data — health, biometric, political, sexual — cannot be processed on trust at all.
A three-step playbook
1. Classify, don't ban
Bans don't work. They push AI usage into shadow IT, where you have no visibility and no control. Instead, inventory which AI tools are in use, what categories of data they touch, and which of those categories create compliance exposure.
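
As a rough illustration of what that inventory can look like (the tool names, category labels, and triage rules below are placeholders, not a prescribed taxonomy), a few lines of structured data are enough to start:

```python
from dataclasses import dataclass, field

# Hypothetical category labels; align these with your own records of processing.
SPECIAL_CATEGORIES = {"health", "biometric", "political_opinions", "sexual_orientation"}

@dataclass
class AIToolEntry:
    name: str                 # e.g. "ChatGPT", "internal summarizer"
    inference_region: str     # where the vendor runs inference, if known
    data_categories: set = field(default_factory=set)

    @property
    def exposure(self) -> str:
        """Rough triage: Article 9 data means high, any personal data means medium."""
        if self.data_categories & SPECIAL_CATEGORIES:
            return "high"
        if self.data_categories:
            return "medium"
        return "low"

inventory = [
    AIToolEntry("ChatGPT", "US", {"client_contracts", "health"}),
    AIToolEntry("internal summarizer", "EU", {"meeting_notes"}),
]

for tool in inventory:
    print(f"{tool.name}: {tool.exposure} exposure via {', '.join(sorted(tool.data_categories))}")
```

Even a toy triage like this makes the next step concrete: the high-exposure entries are the ones that need a boundary control first.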
2. Intercept at the boundary
Once you know where the risk lives, put anonymization between the user and the model. The key is that this has to happen before data leaves the user's device — otherwise you've just moved the problem to a new vendor.
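
A minimal sketch of that interception point follows. The endpoint URL and response shape are assumptions, and the single regex stands in for the proper on-device recognizers a real proxy would run; the point is where the sanitization happens, not how.

```python
import json
import re
import urllib.request

# Toy detector: one regex standing in for local NER / pattern models.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def sanitize(prompt: str) -> str:
    """Runs on the user's device, before any bytes cross the network boundary."""
    return EMAIL.sub("[EMAIL]", prompt)

def ask_model(prompt: str, endpoint: str = "https://llm.example.com/v1/complete") -> str:
    safe_prompt = sanitize(prompt)                      # interception point
    body = json.dumps({"prompt": safe_prompt}).encode()
    req = urllib.request.Request(endpoint, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:           # only the sanitized prompt leaves
        return json.loads(resp.read())["completion"]
```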
3. Prove it
Regulators no longer accept "we have a policy" as evidence. They want logs of what was protected, attestations of where inference happened, and a demonstrable kill-switch. Your architecture itself has to be the evidence.
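
One way to make the architecture self-evidencing, sketched here with illustrative file paths and record fields rather than any particular product's log format, is an append-only, hash-chained audit log plus a file-based kill-switch:

```python
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_gateway_audit.jsonl")      # illustrative path
KILL_SWITCH = Path("ai_gateway_disabled")       # ops creates this file to halt forwarding

def gateway_allowed() -> bool:
    """Demonstrable kill-switch: if the flag file exists, nothing is forwarded."""
    return not KILL_SWITCH.exists()

def record_event(tool: str, redacted_categories: list, inference_region: str) -> None:
    """Append a tamper-evident record: each entry hashes the previous line."""
    prev_hash = ""
    if AUDIT_LOG.exists() and AUDIT_LOG.read_text().strip():
        last_line = AUDIT_LOG.read_text().splitlines()[-1]
        prev_hash = hashlib.sha256(last_line.encode()).hexdigest()
    entry = {
        "ts": time.time(),
        "tool": tool,
        "redacted": redacted_categories,        # categories only, never the raw values
        "inference_region": inference_region,
        "prev": prev_hash,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")
```

Because every entry commits to the one before it, an auditor can spot gaps or after-the-fact edits without having to trust the operator.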
Where local anonymization fits
This is the category SOWA Privacy operates in: a local-first proxy that identifies protected categories on-device, replaces them with context-preserving placeholders, and sends only the sanitized prompt to the model. The original data never leaves the endpoint. That property isn't just convenient — under the AI Act it's the difference between a compliance program and compliance theatre.
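
To make the placeholder idea concrete, here is a minimal sketch of the general technique, not SOWA Privacy's implementation: the patterns and placeholder format are illustrative. Protected values are swapped for stable tokens before the prompt leaves the device, and the mapping stays local so the model's answer can be restored on the endpoint.

```python
import re

# Two toy patterns; a real product would use trained recognizers for many more categories.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def pseudonymize(text: str) -> tuple:
    """Replace protected values with stable placeholders; the mapping stays on-device."""
    mapping = {}
    for label, pattern in PATTERNS.items():
        for i, value in enumerate(pattern.findall(text), start=1):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = value
            text = text.replace(value, placeholder)
    return text, mapping

def restore(response: str, mapping: dict) -> str:
    """Re-insert the original values into the model's answer, locally."""
    for placeholder, value in mapping.items():
        response = response.replace(placeholder, value)
    return response

safe_prompt, mapping = pseudonymize("Refund anna@example.com to DE89370400440532013000")
# safe_prompt goes to the cloud; mapping never leaves the endpoint.
```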
The question is not whether AI belongs in regulated industries. It's whether you own the boundary between your data and the cloud — or whether someone else does.
Compliance is no longer a legal exercise. It's an architectural one.