Sarah ran a small design studio. Four employees, a dozen active clients, the kind of business where every relationship is personal. She’d been looking for a better way to manage client projects. Something where clients could check timelines, approve designs, and see invoices without her having to send a dozen emails a week.
When she found one that fit, she was excited. The interface was clean. Setup took an hour. Her clients picked it up immediately. She uploaded project files, contact info, billing details, everything she needed in one place. For six months, it worked exactly the way she’d hoped. The tool felt like it had been built for someone running exactly her kind of business.
Then she read an article. A vulnerability in the platform her tool was built on had been exploited. A weakness in how the application handled data access meant that anyone with a basic understanding of web requests could pull user data from any app on the platform. No password needed. No special tools.
Her clients’ names, email addresses, project details, invoice amounts. All of it had been sitting in an unprotected database for the entire time she’d been using the tool. She never received a notification. The app never went down. Nothing looked wrong from the inside.
The person who built it wasn’t a scammer. They were someone who’d used AI to write the code, shipped something that worked, and never realized the database had no access controls. Nobody, not the builder and not the AI that wrote the code, had thought about what happens when someone requests data they shouldn’t be able to see.
This story is a composite. Sarah isn’t a real person. But every detail is drawn from security incidents documented in the last twelve months, across platforms whose names you’d recognize. Hundreds of apps. Thousands of users. Exposed data ranging from email addresses to personal debt amounts to home addresses to medical records. I’m not calling out specific products because the point isn’t that one platform failed. This is happening everywhere, and most of the people affected will never know.
Two Kinds of Builders
AI-assisted coding has produced two kinds of builders, and from the outside they look identical.
The first kind uses AI as an amplifier. They already understand architecture, security, edge cases, what happens at scale. They’ve seen systems fail. They know where the landmines are buried. AI makes them faster. Dramatically faster. The software they produce is sound because the person directing the AI knows what sound software looks like.
The second kind uses AI as a substitute for that knowledge. They describe what they want, the AI builds it, it works in the demo, and they ship it. They’ve never seen a system fail at scale. They’ve never thought about what happens when someone tries to access data that isn’t theirs. The AI handled it, so it must be handled.
I’m the second kind of builder. I use AI to write code for tools I want to use, and I’m aware enough of what I don’t know that I never release them to other people without running them past a real developer to battle-test first. When I build something for myself, the only person taking the risk is me. But not everyone who builds with AI has that self-awareness, and the tools certainly don’t encourage it.
Here’s the dynamic that makes this dangerous: the AI doesn’t flag its own blind spots. It writes confident code whether or not the right questions were asked. It doesn’t say “you should think about what happens when someone tries to access another user’s data” or “this endpoint doesn’t actually verify permissions.” The tool is confident, so the builder is confident. The confidence is inherited, but the understanding isn’t.
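The missing permission check is usually a one-line omission, which is exactly why neither the builder nor the AI notices it. Here’s a minimal sketch of the difference, in plain Python with hypothetical names rather than any particular framework or real product:

```python
# A toy in-memory "database" of invoices, keyed by invoice id.
INVOICES = {
    1: {"owner": "sarah", "amount": 1200},
    2: {"owner": "alex", "amount": 800},
}

def get_invoice_working(requester: str, invoice_id: int) -> dict:
    """Works in the demo: returns the invoice if it exists.
    Never asks whether the requester actually owns it."""
    return INVOICES[invoice_id]

def get_invoice_sound(requester: str, invoice_id: int) -> dict:
    """Same lookup, plus the question someone has to think to ask:
    does this record belong to the person requesting it?"""
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != requester:
        raise PermissionError("not your invoice")
    return invoice
```

Both versions pass a happy-path demo. Only the second survives someone incrementing the invoice id in a web request.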
Both of these builders produce working software. Only one of them produces sound software.
The Invisible Layers
The things that separate sound software from working software are invisible to the person using it.
Security. Architecture. Error handling. Data integrity. How the app behaves under load, under attack, under conditions the builder never imagined. These are the layers that matter most, and they’re the layers you can’t evaluate by using the product.
This has always been true. What’s changed is the volume.
When building software was expensive and difficult, fewer people shipped. More of the people who did ship had earned the knowledge to do it well. The barrier to entry was also a quality filter. Not a perfect one, but a real one. Plenty of bad software shipped in the old world too. But the economics selected for people who’d spent enough time in the craft to learn where things break.
That selection pressure is gone.
A December 2025 study by Tenzai tested five major AI coding platforms by running identical prompts through each one. Across the fifteen applications that resulted, the researchers found 69 vulnerabilities, six of them critical. But here’s the part that matters: the AI tools were actually good at avoiding the traditional security flaws that have plagued human-coded software for decades. No SQL injection. No cross-site scripting. The code was syntactically clean.
What failed was judgment.
The vulnerabilities were business logic failures. E-commerce apps that let users set negative prices. Endpoints that didn’t check whether the person requesting data actually owned that data. Admin-only functions that never verified the user was an admin. The AI wrote code that ran. It just couldn’t think about how the software should behave when someone used it in ways the builder didn’t anticipate.
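The negative-price failure is worth seeing concretely, because nothing about it is syntactically wrong. A hedged sketch of the shape of the bug, with a hypothetical checkout function not drawn from any of the audited apps:

```python
def checkout_total_working(price: float, discount: float) -> float:
    """Runs fine, and lets a crafted request drive the total
    below zero (i.e., the store pays the customer)."""
    return price - discount

def checkout_total_sound(price: float, discount: float) -> float:
    """Same arithmetic, plus the business-logic constraints
    nobody wrote into the prompt: inputs must be non-negative,
    and a discount can't take the total below zero."""
    if price < 0 or discount < 0:
        raise ValueError("price and discount must be non-negative")
    return max(price - discount, 0.0)
```

The first version is the one that ships by default, because it satisfies every test the builder thought to run.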
That’s the pattern worth paying attention to. The code works. The thinking behind it is what determines whether it holds.
More Software, Less Certainty
Here’s what I think most people are missing about what’s happening right now.
AI has made it dramatically cheaper and faster to build software. That’s genuinely a good thing. It means more specificity, better fit, less forcing yourself into someone else’s idea of how you should work. Things that were never worth building are suddenly viable. Tools for audiences that were too small to justify the investment a few years ago. Software that fits your specific needs, not the needs some other company decided were universal.
But cheaper to build doesn’t mean safer to use.
When the cost to build drops far enough, both the expert and the novice ship. The person with twenty years of architecture experience and the person who described what they wanted to an AI both end up in the same marketplace. One of them thought carefully about how their application handles data under adversarial conditions. The other never considered it. And the market has no reliable way to tell which is which.
It’s going to get harder before it gets easier, because the tools that make building easy are improving faster than the tools that make evaluating quality easy.
What I keep coming back to is this: the explosion of AI-built software doesn’t distinguish between amplified expertise and amplified ignorance. Both produce applications that work. Both look credible. The difference only shows up later, when the software encounters conditions the builder never thought about. And by then, you’ve already given it your data.
Developing New Instincts
You don’t stop experimenting. There’s real value in having more options. But you develop some new instincts for navigating a world where the supply of software has gotten ahead of the ability to evaluate it.
Let time do its work. Software that has survived real usage, real attacks, real edge cases is different from software that launched last week. Time is a quality signal. Not because old is inherently better, but because software that’s been beaten on has had the chance to prove what it’s made of. Every month an app operates, it encounters conditions its builder didn’t predict. How it handles those conditions tells you something a first impression never could. Give new tools some runway before you go all in.
Look for builder transparency. The builders who actually know what they’re doing can talk about it. They can articulate their security practices, their architecture decisions, why they made the choices they made. They can explain what happens to your data, how it’s stored, who has access, what happens if you leave. That ability to explain the thinking behind the code is becoming a genuine differentiator. Not because the explanation itself protects you, but because the person who can explain the thinking is more likely to have actually done the thinking. If a builder can’t tell you how they handle your data, that’s telling you something.
Protect yourself by default. Use a dedicated email address for new apps. Never reuse passwords. Don’t connect a tool you’re experimenting with to systems that hold sensitive information. None of this is new advice. Good security hygiene has always mattered. But in an era where the number of new applications is growing faster than anyone’s ability to vet them, these are practices worth being intentional about. Try everything. Just do it with guardrails.
The Real Scarcity
The real scarcity in this new landscape isn’t coding ability. AI handles that. The scarcity is judgment. Knowing what to build, why to build it, how it should be structured, what’s actually secure versus what just looks secure, what happens when the app scales from ten users to ten thousand. The thinking that happens before and around the code.
Every new application you try looks like it was built by someone who knows what they’re doing. That’s the entire point of this piece. You can’t tell from the outside whether the thinking underneath is sound.
The question worth learning to ask isn’t “does this work?” It’s “who built this, and do they know what they don’t know?”