AI does not push back. It does not ask questions. It fills the gaps silently, with the most plausible interpretation of what you probably meant, and then it gets to work.
The wish problem
There is an old idea in folklore about wishes. Ask for something without being precise enough, and you get exactly what you asked for, and exactly not what you wanted. The genie is not malicious. It just fills the gaps.
AI behaves the same way. Hand it a vague user story, “add password reset functionality”, and it will implement password reset functionality. It will also, in some cases, quietly remove multi-factor authentication in the process, because that was not mentioned, and it was in the way.
Not because it is careless. Because the gaps in your prompt are not left blank. They are filled with assumptions. And those assumptions are not always the ones you would have made.
What vibe code drift looks like
When teams adopt AI, something predictable happens downstream. Code gets written faster. A lot faster. Pull requests get larger and more frequent. Reviewers, under pressure to keep up, start approving changes they have not truly scrutinised. The code looks right: AI knows all the patterns, follows all the conventions, produces clean and idiomatic output. That surface-level quality creates a false sense of confidence.
Over time, ownership blurs. Dark code accumulates: code that functions, but that nobody truly understands. Nobody can say with confidence whether the system is actually sound, or whether it just appears to be. We call this vibe code drift.
The dangerous thing about vibe code drift is that it is invisible until it is not. Everything looks fine. Tests pass. Deployments succeed. And then something goes wrong: a security flaw, a compliance gap, an architectural decision that nobody made but everybody is now living with. And the question of who owns it has no clean answer.
Why the Vinext audit matters
When an engineering manager at Cloudflare rewrote most of Next.js using AI in a week, it was widely covered as an impressive feat of AI-assisted development. Over a thousand automated tests. Running production deployments. Delivered in days.
When security researchers audited the code, they found cache poisoning vulnerabilities, authentication bypasses, and a file exposure bug that allowed users to access any file on the server. Every one of those issues was a design choice made by AI. Every one of them passed the test suite.
The tests checked what they were designed to check. Nobody had taken ownership of the internals: the security assumptions, the trust boundaries, the things a professional review would have caught. The AI filled the gaps. The gaps happened to be where the vulnerabilities lived.
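The file exposure class of bug is worth seeing in miniature. The sketch below is hypothetical, not the audited Vinext code: a static file handler that joins user input into a path, next to a safer variant that checks the resolved path first. The names and paths are invented for illustration.

```typescript
import path from "node:path";
import fs from "node:fs/promises";

const PUBLIC_DIR = "/srv/app/public";

// Vulnerable version: user input is joined into the path unchecked,
// so a request for "../../../etc/passwd" resolves outside PUBLIC_DIR.
export async function serveFile(requested: string): Promise<Buffer> {
  return fs.readFile(path.join(PUBLIC_DIR, requested));
}

// A happy-path test suite never notices, because it only ever asks
// for files that are supposed to be served:
//   expect(await serveFile("logo.png")).toBeDefined();

// Safer version: resolve first, then refuse anything that escapes
// the public directory.
export async function serveFileSafe(requested: string): Promise<Buffer> {
  const resolved = path.resolve(PUBLIC_DIR, requested);
  if (!resolved.startsWith(PUBLIC_DIR + path.sep)) {
    throw new Error("path escapes public directory");
  }
  return fs.readFile(resolved);
}
```

The vulnerable version is the kind of design choice that looks clean, reads idiomatically, and sails through a test suite that only exercises legitimate requests.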
Capturing the negative space
The fix is not to slow down. It is to be more deliberate about what you give AI to work with before it starts.
The most important shift is learning to capture the negative space, not just what should change, but what must not. What are the constraints? What are the boundaries? What would a failure look like even if the tests pass?
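One way to make that last question concrete is a guard test that pins down what must not change. Here is a hypothetical sketch using Vitest; `loginWithPassword`, the `./auth` module and the session shape are invented for illustration, not taken from any real codebase.

```typescript
// Hypothetical guard test: it encodes the negative space around the
// "add password reset" story. If the change quietly removes the MFA
// step, this fails even while every password-reset test stays green.
import { test, expect } from "vitest";
import { loginWithPassword } from "./auth"; // invented module

test("password login still requires a second factor", async () => {
  const session = await loginWithPassword("user@example.com", "correct-password");
  // The login must not be fully authenticated until MFA completes.
  expect(session.status).toBe("awaiting_mfa");
});
```

A test like this turns a silent assumption into a loud failure.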
Teams that do this well use intent documents written before any code is generated. These docs make the requirements, the scope and the non-negotiables explicit. They treat the contract as the primary artifact and the code as something derived from it. Less back and forth, fewer unintended consequences, and a much clearer basis for reviewing what AI produces.
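As an illustration, a hypothetical intent document for the password reset story above might read like this (the format is an example, not a prescribed template):

```
Intent: add password reset via emailed one-time link

In scope:
- Reset request form, token generation, token expiry (30 minutes)
- New password form with existing strength rules

Must not change:
- Multi-factor authentication flow stays exactly as it is
- Session handling and cookie settings
- Existing login rate limits

Failure looks like:
- A reset link that works after expiry or reuse
- Any login path that skips the second factor, even with tests green
```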
The gaps do not go away. But they stop being silent.
This post is the second in a four-part series based on our recent webinar on Agentic Software Development.
Watch the full 49-minute session free
Want to discuss how this applies to your team? Eman is making time for a limited number of 30-minute conversations with engineering leaders working through these challenges.
Register your interest here