Can it be trusted?
In his published article, Simon argues that we are entering a new cycle in the technology industry: one defined not by speed or scale alone, but by trust.
From Hype Cycles to the Trust Cycle
The technology sector has historically moved through waves of hype: e-commerce, social media, short-form video. Each cycle has prioritised growth and user acquisition, often at the expense of governance and long-term responsibility.
AI, however, changes the stakes.
When algorithms influence financial decisions, healthcare outcomes, or educational pathways, “move fast and break things” is no longer a viable philosophy. Trust becomes foundational.
Simon highlights that while large technology firms have traditionally resisted self-regulation, public scrutiny around data usage, children’s safety and mental health has intensified. As AI becomes more deeply integrated into daily life, society expects higher standards.
This is where Europe’s approach stands apart.
Europe’s Regulatory Advantage
While EU regulation is often criticised for slowing innovation, Simon suggests that it may now represent a strategic advantage.
Following the 2008 financial crisis, European financial institutions were subjected to stricter governance and capital requirements. Though sometimes seen as conservative, these measures ultimately strengthened trust in European banks relative to their US counterparts.
The same philosophy is now being applied to data and AI.
European initiatives, from data governance frameworks to sector-specific standards such as the European Health Data Space, reflect a growing view that companies handling personal data and AI systems should carry responsibilities akin to fiduciary duties in financial services.
In short: safety and accountability are not obstacles to innovation; they are enablers of long-term resilience.
The Double-Edged Sword
At the same time, Simon acknowledges a core challenge:
Safe can be slow.
Europe’s fragmented markets and risk-averse culture have historically hindered rapid scaling in technology. If the EU wants to lead the AI trust cycle, regulation alone will not be enough.
He argues that Europe must move from policing to partnering: creating regulatory sandboxes, strengthening public procurement support for EU tech firms, and helping investors navigate risk thresholds.
The opportunity is not just to regulate global technology players, but to empower European innovators to build trusted AI systems from the ground up.
An Opportunity for European Tech
For many years, Europe was perceived as unlikely to shape the global technology agenda.
The trust cycle changes that dynamic.
As AI becomes embedded in critical infrastructure, trust becomes a competitive advantage. Europe’s long-standing focus on governance, consumer protection, and social responsibility may position it as “the adult in the room”: not merely a referee, but a standard-setter.
Cleverbit Perspective
At Cleverbit, we believe that innovation and responsibility must evolve together.
As AI systems become integral to enterprise environments, governance, transparency and safety are no longer optional add-ons; they are core design principles. Building technology that organisations and end-users can trust is foundational to sustainable innovation.
Simon’s contribution to this broader discussion reflects our ongoing commitment to advancing responsible, secure, and forward-looking digital systems within Europe and beyond.
Read the full opinion piece:
The EU’s Moment: Leading the AI ‘Trust Cycle’ Where Safety Meets Innovation