Technically, it solved the problem. The connection issue would disappear. The code would work. And in the process, we would have quietly dismantled the security control that was there for a very good reason.
I did not run the plan. But I understood immediately what had happened — and why it matters for any team using AI in a professional environment.
AI optimises for resolution, not integrity
The AI was not being careless. It was doing exactly what it was designed to do: find a solution to the stated problem. The stated problem was a failing connection. Moving the secret fixed the failing connection. Job done.
What it did not account for was everything surrounding that problem: the security architecture, the compliance requirements, the reason the vault existed in the first place. That context was not in the prompt. So the AI did not factor it in.
This is what we mean by the path of least resistance. AI will consistently find the most direct route to resolving the immediate issue. It will not pause to ask whether that route undermines something else. It will not flag that the fix it is proposing removes a control that took your security team six months to implement. It will just solve the problem you gave it.
The genie problem
There is an old idea in folklore about wishes: if you ask for something without being precise enough, you get exactly what you asked for, and exactly not what you wanted. You wish for a million dollars and it arrives as an insurance payout after a disaster. The genie is not malicious. It just fills the gaps.
AI behaves the same way. The gaps in your prompt are not left blank. They are filled with assumptions, with defaults, with the most plausible interpretation of what you probably meant. In most cases that is fine. In cases involving security, compliance, or system integrity, it is a risk that needs managing.
The vault story is a relatively benign example because I caught it before anything was run. But the same dynamic plays out in subtler ways every day in teams using AI to write production code. A vague user story gets interpreted generously. A security constraint that was not mentioned gets omitted. A design decision gets made silently that nobody notices until much later.
What this means for how you work with AI
The fix is not to stop using AI. It is to change what you give it to work with.
Before AI writes a single line of code, the intent needs to be captured explicitly: not just what should change, but what must not. What are the boundaries? What constraints apply? What does success look like, and what would a failure look like even if the tests pass?
We call these intent manifests: documents that define the negative space as clearly as the positive. They give AI the context it needs to stay within scope, and they give your team a reference point for reviewing what comes back.
The vault situation would have been caught by a well-written intent manifest. The constraint, "do not move secrets out of the vault under any circumstances", would have been explicit. The AI would have been forced to find a different route, or flag that it could not resolve the issue within those constraints. Either outcome is better than a plan that silently removes a security control.
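To make this concrete, here is a minimal sketch of what such a manifest might look like for the vault scenario. The structure, field names, and the IntentManifest class are illustrative assumptions, not a prescribed format; the point is simply that the constraints and failure conditions get written down before any code is generated.

```python
from dataclasses import dataclass


@dataclass
class IntentManifest:
    """Illustrative structure for an intent manifest (hypothetical format).

    Captures not just the goal, but the negative space: what must not
    change, and what would count as failure even if the tests pass.
    """
    goal: str                    # what should change
    must_not: list[str]          # hard constraints a fix may not violate
    success_criteria: list[str]  # what done actually looks like
    failure_modes: list[str]     # failures that passing tests would not catch


# The vault scenario from this post, written down before asking AI for a fix.
vault_fix = IntentManifest(
    goal="Resolve the failing connection to the service",
    must_not=[
        "Move secrets out of the vault under any circumstances",
        "Disable or bypass existing security controls",
    ],
    success_criteria=[
        "Connection succeeds using credentials read from the vault",
    ],
    failure_modes=[
        "Connection works but a secret now lives outside the vault",
    ],
)
```

With the must_not list explicit, the plan that moved the secret would have been ruled out on sight, and the reviewer would have a written reference to check the AI's proposal against.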
The broader point
AI is a remarkably capable tool. It is also a tool that does not know what it does not know, and more importantly, does not know what you have not told it. The teams that use it most effectively are the ones that have learned to be precise about intent, explicit about constraints, and rigorous about reviewing the plan before they run it.
Not because AI cannot be trusted. Because professionals do not outsource their judgment.
This post is the third in a four-part series based on our recent webinar on Agentic Software Development.
Watch the full 49-minute session free
Want to discuss how this applies to your team? Eman is making time for a limited number of 30-minute conversations with engineering leaders working through these challenges.
Register your interest here