Bits of Flutter
Engineering · 5 min read

AI Coding Got Better and It Requires Us to Be Even Better Engineers

This weekend I spent some time reviewing the backend code of a project generated with AI. I wanted to test how it feels to build a full project from scratch purely by vibe coding with Claude Code, so I waited until the project was mature before reviewing the code to see what we had.

I found security issues everywhere. Tokens that never expire, endpoints with no proper validation, things that, if deployed to production as they were, could let anyone inject malicious behavior, steal data, or break things. The code worked and ran fine, but the quality underneath was not what you’d want anywhere near production. When I flagged the token expiration issue, the AI told me it wasn’t a problem because the project didn’t handle sensitive data. I pushed back. The project didn’t deal with sensitive data directly, but it had an open channel that, depending on the user’s setup, could allow personal data to be controlled through AI. Once I explained the situation, the agent gave me the classic: “You are totally right.”

And I think this is happening a lot more than we imagine right now. People are shipping AI-generated code to production without truly understanding what’s inside. Vibe coding works for a lot of things, but it’s also putting apps with real vulnerabilities out there, and most people don’t even realize it.

What I take from this

We actually need to be better engineers than before, not worse. AI writes code fast, but someone still needs to read that code, understand it and decide if it’s good enough. That requires the boring stuff that doesn’t trend on Twitter: architecture, security, patterns, performance.

Our role is shifting towards orchestration

We decide which entities exist in the system, what role each one plays, and how the pieces connect. AI can fill in a lot of the details, but the big picture is still on us. And to do that well, reading engineering books matters more than ever. If you want to guide an AI, you need to know what good looks like. You need to tell it which authors, which patterns, which standards to follow. You need to build context that actually makes the AI produce quality code, and you can’t do that if you don’t understand the craft yourself.

Parallel work is the new superpower

The other thing that changed a lot is parallel work. The engineers who will get the most out of this are the ones who can manage multiple AI agents at the same time, review one PR while another agent writes the next feature, and orchestrate instead of just executing. That’s a skill that didn’t matter much before, and now it’s becoming essential.

The window is now

I think we are in a sweet spot right now. AI tools are incredibly powerful and relatively cheap. But this window won’t last forever. My rough estimate is 1-2 years before pricing goes up significantly, and maybe 3-5 years before non-engineers can deploy moderately complex products on their own. So if you are an engineer, the time to level up is now. Not by writing more code, but by understanding it better.