From Compliance to Culture:
Rethinking Cybersecurity in the
Age of AI
09/07/2025
16 MIN READ /
We are living in a paradox. Artificial intelligence is often hailed as the great enabler of digital transformation. However, it is also proving to be a potent weapon in the hands of cyber attackers. Yet on the defensive side, organizations often struggle to extract similar value. Why is it that AI empowers offense more effectively than defense?
The answer isn’t just about algorithms or tools—it’s about alignment, governance, and culture. The hard truth is that cybersecurity can no longer be addressed solely through compliance checklists or silver-bullet technologies. In an age where AI accelerates both opportunity and risk, organizations must rethink their cybersecurity strategy from the ground up.
AI in Cybersecurity: Offense Outpaces Defense
Threat actors are rapidly leveraging generative AI to scale and sophisticate their attacks. Phishing campaigns are now hyper-personalized and grammatically flawless. Social engineering tactics exploit publicly available data to mimic human behavior with eerie accuracy. Malicious code generation—once the domain of seasoned hackers—is now within reach for amateurs equipped with open-source AI tools.
Meanwhile, AI systems themselves are becoming attack vectors. Prompt injection, data poisoning, and model manipulation expose critical vulnerabilities in both commercial and open-source models. As enterprises race to integrate AI into products and services, they’re inadvertently increasing their attack surface.
And yet, defensive AI is lagging. Many security tools lack the contextual understanding to adapt to evolving threats. Detection algorithms often miss subtle signals hidden in noisy environments. Most importantly, defensive AI systems are only as good as the data and human expertise behind them, which are often lacking.
The danger here is misplaced confidence. Organizations may assume that simply deploying AI-powered security solutions equates to resilience. In reality, defensive AI without contextual awareness and governance can lull teams into a false sense of security.
The Organizational Reality: Partial Security
This imbalance isn’t just technological; it’s deeply rooted in organizational culture.
AI development often happens in silos. That was one of the key highlights of the recent panel discussion during the Machines Can See conference held in Dubai. Machine learning engineers and data scientists rarely train in secure software development or compliance practices. Product teams prioritize performance and innovation over hardening code or adhering to data protection principles. And security teams, already stretched thin, are often brought in too late—or not at all.
The result is a culture of “partial security” where critical gaps go unnoticed. Shadow AI—unsanctioned use of AI tools and platforms—proliferates across departments. Experimentation outpaces policy, and security controls fail to keep up with the velocity of innovation.
Security by design is still more of a buzzword than a practice in most organizations, as Michael Bletsas, Governor of the National Cybersecurity Authority of Greece, noted in a recent fireside chat organized by Route Lab in Patras. Without a conscious effort to bridge these divides, AI systems will remain vulnerable, regardless of how advanced the tooling may be.
Why Cross-Skilled Teams Are Essential
To move forward, we must break down the artificial barriers between disciplines. The complexity of modern AI systems demands cross-functional teams with blended expertise in AI/ML, software engineering, and cybersecurity.
These are not separate domains—they are interdependent pillars of responsible innovation.
Embedding security into the development lifecycle requires more than secure coding checklists and software quality best practices underpinned by various ISOs. It calls for teams that can understand threat modeling for ML pipelines, implement privacy-preserving data strategies, and continuously monitor deployed models for anomalous behavior.
Cross-skilled squads foster shared accountability. When cybersecurity is integrated into day-to-day development work—rather than bolted on at the end—security becomes a quality attribute, not a constraint.
Passing the Vision to the Teams
However, teams can only execute what leadership articulates. Organizations need more than frameworks—they need a vision and a purpose.
Leaders must define and communicate the “why” behind secure innovation. It’s not about compliance for its own sake—it’s about building products and systems that users can trust, regulators can verify, and stakeholders can rely on.
A culture of security begins with clarity of purpose. Employees need training, tools, and incentives that align with organizational values, not just regulatory demands. Security champions within development teams. Continuous education on secure AI practices. Recognition for catching vulnerabilities before release.
These are cultural investments that will help businesses innovate responsibly and securely.
How NIS2 and the EU AI Act Address Cybersecurity Weaknesses
The regulatory landscape is also evolving to reflect these realities. The NIS2 Directive and the EU AI Act are more than just compliance burdens. They are strategic frameworks for secure digital transformation.
NIS2 broadens the scope of cybersecurity regulation across critical infrastructure sectors, emphasizing risk management, incident reporting, and supply chain resilience. Meanwhile, the EU AI Act targets systems that pose high risks to safety, privacy, and fundamental rights, mandating transparency, human oversight, and security-by-design practices.
Together, they offer a coherent model: manage AI risks like any other operational risk within a broader cybersecurity governance structure.
This convergence is an opportunity. Organizations can use the AI Act to assess and mitigate AI development risks while leveraging NIS2 to ensure end-to-end resilience across their digital infrastructure. Embedding AI governance within existing cybersecurity frameworks creates unified controls and clear lines of accountability.
Conclusion: AI Security Begins with Culture
AI won’t solve cybersecurity on its own. It’s only as secure as the culture, teams, and controls surrounding it. In the age of AI, attackers think creatively, move quickly, and adapt constantly. To defend against them, organizations must do the same, not just with tools, but with a mindset.
Regulations like NIS2 and the EU AI Act can help structure that journey, but they are just starting points. The real transformation lies in shifting from a compliance-oriented mentality to a culture of secure innovation.
Security is no longer a department. It’s a responsibility shared across teams, fueled by leadership, and embedded in every decision—from model design to product deployment.
It’s time to stop asking, “Are we compliant?” and start asking, “Are we trustworthy?”
If you want to dive into the topics discussed in this article, you may watch on demand:
Yiannis Kanellopoulos fireside chat with Michael Bletsas during the recent Cyber Security Social Hub by Route Lab (in Greek).
The panel discussion “Defending Intelligence: Navigating the Maze of Adversarial Machine Learning” with Yiannis Kanellopoulos’ participation, moderated by Rob Van Der Veer (skip to 8:04:29).
Contact code4thought to find out how we can help you navigate safely the many challenges of the AI era.