AGENTIC SOFTWARE DEVELOPMENT WEBINAR
Moving fast… without breaking things
Live CTO Briefing | 17 April 2026 | 1500 CET
AI is accelerating coding, but it’s also quietly changing how risk enters production.
This session explores Vibe Code Drift: the slow, normal-looking failure mode where AI-generated code is shipped because it looks plausible, not because it has been verified.
If your teams are using AI coding assistants (and they are), this webinar will help you ship fast without shipping blind.
Why this matters now
AI-generated code often looks clean, idiomatic, and correct. That’s the problem.
When speed becomes the KPI and review feels like friction, teams drift into a state where:
- Code is merged with shallow scrutiny
- Security assumptions are implicit
- Domain logic edge cases are missed
- Architecture coherence erodes
- Nobody can confidently explain what runs in production
This is not a developer problem. It’s a systems and incentives problem. And it requires a new operating model.
What you'll learn
Everyone wants to use AI, and most development teams already are. What remains far less clear is where the real risks sit, what practical controls are needed, and how teams can move faster without weakening reliability, security, auditability, or delivery clarity.
In this webinar, we’ll discuss what changes when code generation becomes cheap but verification, governance, and operational discipline remain the real bottlenecks, and what that means for organisations trying to adopt AI without losing control. The session will offer a sharper way to think about the challenge, the trade-offs, and the path forward.
- How AI amplifies risk across:
- Security vulnerabilities
- Domain-critical logic
- Architecture drift
- Operational risk
- Software supply chain
- Why banning AI doesn’t work, and what to do instead.
- Why threat modelling must now happen at two levels: the system being built and the system doing the building.
You’ll learn how to extend traditional approaches like STRIDE to cover agent-specific risks, including prompt injection, memory poisoning, tool misuse, goal hijacking and cascading agent failures, as well as how to apply frameworks such as the OWASP Agentic Top 10 and CSA MAESTRO to secure the AI development workflow itself.
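As a flavour of what extending STRIDE can look like in practice, here is a minimal, hypothetical sketch. The capability flags, function name and threat labels are illustrative assumptions drawn from the list above, not an official OWASP or CSA schema:

```python
# Hypothetical sketch: extending a STRIDE-style checklist with the
# agent-specific threats named above. Flag and threat names are
# illustrative; adapt them to your own threat-modelling templates.

STRIDE = [
    "Spoofing", "Tampering", "Repudiation",
    "Information disclosure", "Denial of service",
    "Elevation of privilege",
]

# Which agent capabilities expose which additional threats.
AGENTIC_THREATS = {
    "accepts_untrusted_input": "Prompt injection",
    "has_persistent_memory": "Memory poisoning",
    "has_tool_access": "Tool misuse",
    "plans_autonomously": "Goal hijacking",
    "calls_other_agents": "Cascading agent failures",
}

def threats_for(component: dict) -> list[str]:
    """Classic STRIDE plus the agent-specific threats that apply
    to this component's declared capabilities."""
    extra = [threat for flag, threat in AGENTIC_THREATS.items()
             if component.get(flag)]
    return STRIDE + extra

# Example: a coding agent with tool access and persistent memory.
print(threats_for({
    "accepts_untrusted_input": True,
    "has_persistent_memory": True,
    "has_tool_access": True,
}))
```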
- How to let developers generate freely with AI while enforcing structured, risk-based verification before anything reaches production.
- Concrete, implementable mechanisms (sketched in code after this list), including:
- Intent manifests
- Small PR orchestration
- Risk-based CI gating
- Non-bypassable QA enforcement
- Scrutiny scoring and review-quality measurement
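To make these mechanisms concrete, here is a minimal sketch of an intent manifest feeding a risk-based CI gate. Every name, path and threshold below is an illustrative assumption, not a standard format; real policies would live in your CI configuration:

```python
# Hypothetical sketch: an intent manifest declares what a change is
# meant to do; the gate escalates required checks as declared risk
# and blast radius grow. Names and thresholds are illustrative.

from dataclasses import dataclass

@dataclass
class IntentManifest:
    summary: str                      # what the change is meant to do
    touched_paths: list[str]          # files the PR claims to modify
    ai_generated: bool = False        # was an assistant involved?
    security_sensitive: bool = False  # auth, crypto, data handling, ...

HIGH_RISK_PREFIXES = ("auth/", "payments/", "infra/")  # assumed layout

def required_checks(m: IntentManifest, lines_changed: int) -> list[str]:
    """Map a manifest plus diff size to non-bypassable merge checks."""
    checks = ["unit-tests", "lint"]
    if m.ai_generated:
        # AI-assisted changes always get evidence-backed human review.
        checks.append("human-review-with-evidence")
    if m.security_sensitive or any(
        p.startswith(HIGH_RISK_PREFIXES) for p in m.touched_paths
    ):
        checks += ["security-review", "threat-model-delta"]
    if lines_changed > 400:
        # Oversized PRs get split or escalated, not rubber-stamped.
        checks.append("architecture-review")
    return checks

manifest = IntentManifest(
    summary="Add retry logic to the payment webhook handler",
    touched_paths=["payments/webhooks.py"],
    ai_generated=True,
    security_sensitive=True,
)
print(required_checks(manifest, lines_changed=120))
```

The design point: the gate reads declared intent rather than trusting the diff alone, so review depth scales with risk instead of with reviewer patience.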
- How to translate engineering discipline into audit-ready artefacts (illustrated after this list):
- Evidence-driven reviews
- Security-aware prompting
- SBOM and provenance alignment
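To show what an audit-ready artefact can look like, here is a small, hypothetical sketch that packages a completed review into an evidence record referencing the build’s SBOM. The field names are assumptions, not a standard schema; align them with what your auditors and SBOM tooling (CycloneDX, SPDX) expect:

```python
# Hypothetical sketch: turn a merged change into a machine-readable
# evidence record. Field names are illustrative, not a standard.

import hashlib
import json
from datetime import datetime, timezone

def review_evidence(pr_id: str, reviewer: str, diff: str,
                    checks_passed: list[str], sbom_ref: str) -> dict:
    """Build an evidence record for one merged change."""
    return {
        "pr": pr_id,
        "reviewer": reviewer,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
        "diff_sha256": hashlib.sha256(diff.encode()).hexdigest(),
        "checks_passed": checks_passed,  # gates that actually ran
        "sbom": sbom_ref,                # provenance link for auditors
    }

record = review_evidence(
    pr_id="PR-1234",
    reviewer="alice",
    diff="example diff content",
    checks_passed=["unit-tests", "security-review"],
    sbom_ref="sboms/service-a/2026-04-17.cdx.json",
)
print(json.dumps(record, indent=2))
```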
Who should attend
This briefing is designed for:
- CTOs and VPs of Engineering
- Engineering Managers and Staff+ Engineers
- CISOs and Security Leaders
- DevSecOps and Platform Teams
- Risk, Compliance and Governance leaders
If you are responsible for shipping software safely in an AI-assisted environment, this session is for you.