AI Coding Is Changing Everything Right Now
And Breaking a Lot, Including the Law

I started coding extensively with AI over the past few months. With different tools. From Cursor and Claude Code to Anti-Gravity, Gemini CLI, and Codex.
Coding with AI is fun.
We’re in this weird moment where AI can write really good code. It makes everything so much faster. And I’m not even talking about the hip platforms like Lovable or Bolt that make AI coding even simpler for non-techy users.
But all those tools are getting something fundamental wrong. And it’s not the UI, the feature set, bug fixing, or coding style. It’s much worse.
Background
I’ve been web designing since 2008. For fun on the side. I’ve never worked as a web developer professionally, apart from a short internship a decade ago.
But I’ve done a lot of things. Mainly websites, CMS work with WordPress, and PHP development. And more recently, I’ve jumped headfirst into app development with Flutter and React Native.
It’s fun.
And with AI coding tools, it’s even more fun. Already said that.
But I’ve noticed a pattern in all the different AI coding tools.
The Good Part First
AI coding tools are legitimately helpful.
Some do a few things better than others. I like Claude Code at the moment. But that’s not the point. Many models do cool stuff.
I’ve cut development time in half or more on many projects. They’re great at:
Generating boilerplate code
Debugging weird errors
Suggesting better patterns
Writing documentation nobody wants to write
and much more
ChatGPT, Claude, Cursor, Anti-Gravity, Codex, GitHub Copilot — they all work. They speed things up. No question.
But Here’s the Problem
AI models usually don’t think about GDPR. They don’t care about CCPA. Unless you tell them to.
They have no real concept of data minimization or legitimate interest or any of the legal frameworks we’re supposed to follow. Unless we tell them to.
They don’t know what to do to actually make the app we’re vibe coding legal… unless we tell them to. And that means we actually need to know.
But most of us don’t.
So when AI writes your authentication system, it might store way more user data than legally necessary (or allowed). I am in Germany. No country is stricter when it comes to this stuff than Germany. Well, maybe Switzerland. But that’s it.
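To make that concrete, here’s a minimal TypeScript sketch of the difference. The field names and the hash helper are illustrative, not taken from any particular tool:

```typescript
// What AI-generated signup code often looks like: grab everything.
interface GreedySignup {
  email: string;
  password: string;
  fullName: string;
  phone: string;
  birthday: string;
  ipAddress: string; // stored "for security", usually with no retention policy
  deviceId: string;
}

// Data minimization (GDPR Art. 5(1)(c)): store only what the feature needs.
interface MinimalUser {
  email: string;
  passwordHash: string; // never the plaintext password
  createdAt: Date;
}

async function signUp(
  email: string,
  password: string,
  hash: (s: string) => Promise<string> // bring your own hashing, e.g. bcrypt
): Promise<MinimalUser> {
  return { email, passwordHash: await hash(password), createdAt: new Date() };
}
```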
When AI builds your analytics, it probably won’t include proper consent mechanisms for the EU or other jurisdictions that require them.
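The missing piece is usually a consent gate before anything loads. A rough sketch, assuming the consent banner stores choices in localStorage under a key I made up (consent-choices), with a placeholder analytics URL:

```typescript
type ConsentCategory = 'necessary' | 'analytics' | 'marketing';

// Read the stored consent choice; default to "no consent" on any doubt.
function hasConsent(category: ConsentCategory): boolean {
  if (category === 'necessary') return true; // strictly necessary needs no consent
  try {
    const stored = localStorage.getItem('consent-choices'); // key is a placeholder
    return stored ? JSON.parse(stored)[category] === true : false;
  } catch {
    return false;
  }
}

// Only inject the analytics script after an explicit opt-in.
function loadAnalytics(): void {
  if (!hasConsent('analytics')) return; // default: do nothing
  const script = document.createElement('script');
  script.src = 'https://analytics.example.com/script.js'; // placeholder URL
  script.async = true;
  document.head.appendChild(script);
}
```

The point is the default: no stored opt-in, no script. Most generated code does it the other way around.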
When it creates your contact form, getting a privacy-compliant setup without manually fixing everything is tough.
I’m not exaggerating. Most AI-generated code I see completely ignores:
Cookie consent requirements
Data processing agreements
User deletion workflows
Audit logging for GDPR requests (a sketch covering both follows this list)
Proper encryption standards
Legitimate interest assessments
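Here’s roughly what a deletion workflow with an audit trail looks like, as a sketch. The Db interface is hypothetical and stands in for your actual storage layer:

```typescript
// Hypothetical storage interface; swap in your real database layer.
interface Db {
  deleteUserRecords(userId: string): Promise<void>;
  notifyProcessors(userId: string): Promise<void>; // third-party processors must erase too
  appendAuditLog(entry: object): Promise<void>;
}

// GDPR Art. 17 (right to erasure) takes more than DELETE FROM users:
// downstream processors need to be told, and the request itself logged.
async function handleDeletionRequest(db: Db, userId: string): Promise<void> {
  await db.deleteUserRecords(userId);
  await db.notifyProcessors(userId);
  await db.appendAuditLog({
    action: 'erasure_request_fulfilled',
    userId, // or a pseudonymous reference, depending on your setup
    timestamp: new Date().toISOString(),
  });
}
```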
And that’s important…
Most New Tools Are Probably Illegal
Look at the average new SaaS or AI tool launching right now. Many of them are vibe coded by individuals on the side, at home.
They don’t want to do harm, but I’d bet money most of the tools they create (and publish quickly) are violating multiple privacy regulations. They’re practically illegal.
They’re missing:
Required documentation: Where’s your data processing agreement? Your GDPR compliance statement? Your privacy impact assessment?
Consent mechanisms: If you’re using cookies for anything beyond strictly necessary functions, you need explicit consent. Most tools either skip this or implement it wrong.
User rights: Can users actually export their data? Delete their account properly? See what you’re storing? Most tools have maybe one of these, badly implemented. (A minimal export sketch follows this list.)
Data minimization: AI doesn’t naturally think “collect the minimum data needed.” It thinks “let me grab everything that might be useful.”
Vendor compliance: If you’re using third-party services (which you probably are), you need data processing agreements with all of them. Who’s checking that?
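On the user-rights point above: even a bare export function beats nothing. A minimal sketch, assuming a hypothetical fetchAll that gathers everything you store about one user:

```typescript
// Hypothetical shape of what you store per user; replace with your own.
interface UserData {
  profile: object;
  posts: object[];
  settings: object;
}

// GDPR Art. 20 (data portability): hand users their data in a
// "structured, commonly used and machine-readable format". JSON qualifies.
async function exportUserData(
  fetchAll: (userId: string) => Promise<UserData>, // your data-gathering function
  userId: string
): Promise<string> {
  const data = await fetchAll(userId);
  return JSON.stringify({ exportedAt: new Date().toISOString(), ...data }, null, 2);
}
```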
What Actually Needs to Happen
I’m not saying stop using AI for coding. I’m definitely not stopping. But we need to be way more critical about what it produces.
We can’t just watch a YouTube video called “I vibe coded this app over the weekend and now make 15K a month”. Maybe they did. Likely, they did it wrong. Perhaps it’s even illegal.
Some basic steps:
Manual privacy review: Every AI-generated feature needs a human looking at privacy implications. Not optional.
Better prompting: We should be explicitly prompting for privacy-compliant implementations. “Write a user authentication system that complies with GDPR” gets you closer than just “write user authentication.”
Privacy-focused training: AI models need better training data that includes proper privacy implementations. That means the community needs to write and share more compliant code.
Automated scanning: We need tools that scan AI-generated code specifically for privacy violations, as part of the development workflow. Think skills or plugins built for this. (A toy version follows this list.)
Legal consultation: If you’re launching anything that handles user data, talk to an actual lawyer. AI can’t replace that.
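I don’t know of a standard tool for that scanning step yet, but even a crude check catches the obvious cases. A toy sketch that flags suspicious lines in generated code; the pattern list is mine and nowhere near exhaustive:

```typescript
// Toy privacy linter: flags lines that deserve a human look.
// These patterns are illustrative, not a complete ruleset.
const suspiciousPatterns: { pattern: RegExp; reason: string }[] = [
  { pattern: /googletagmanager\.com|google-analytics\.com/, reason: 'third-party analytics: needs consent gating' },
  { pattern: /document\.cookie\s*=/, reason: 'cookie write: check consent requirements' },
  { pattern: /navigator\.geolocation/, reason: 'location access: needs explicit consent' },
  { pattern: /\b(ip_address|ipAddress)\b/, reason: 'IP storage: personal data under GDPR' },
];

function scanSource(source: string): string[] {
  const findings: string[] = [];
  source.split('\n').forEach((line, i) => {
    for (const { pattern, reason } of suspiciousPatterns) {
      if (pattern.test(line)) findings.push(`line ${i + 1}: ${reason}`);
    }
  });
  return findings;
}
```

Run something like this over AI output before committing. Treat hits as prompts for human review, not verdicts.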
We’re Building on Shaky Ground
The speed of AI development means we’re launching products faster than ever. That’s cool for innovation. We’ll get some great new apps and services (and a gazillion bad ones or straight-up copies).
But it also means we’re deploying privacy violations at scale before anyone notices.
I’ve seen quite a few AI-built tools that are genuinely useful but would get absolutely destroyed in a GDPR audit. And these aren’t always small startups who don’t know better. Sometimes it’s companies with actual funding and huge profits already.
The problem is that AI makes everything feel easy. Including the hard parts that SHOULD NOT be easy.
Legal is not easy.
The Bottom Line
AI coding is powerful. It’s going to keep getting better. But right now, it’s generating a ton of code that’s functionally correct but legally questionable.
If you’re using AI to build anything that touches user data, assume the AI got the privacy stuff wrong. Check it manually. Consult the latest regulations. Prompt AI to check this.
Because the speed advantage of AI development becomes a massive liability when you’re moving fast in the wrong direction.
And most of us are moving fast right now.


