
Preparing for ISO 42001:
Insights from the Field

11/12/2025
6 MIN READ
Authors: Yiannis Kanellopoulos, CEO and Founder | code4thought, and Petros Stavroulakis
As AI adoption accelerates across sectors, executives are increasingly looking for ways to demonstrate that their organisation builds and deploys AI systems responsibly. ISO 42001, the world’s first certifiable standard for AI management systems, is quickly becoming the reference point.
But what does it really take to prepare for ISO 42001?
At code4thought, we’ve participated in multiple readiness assessments and have supported certification audits as AI governance experts. Across industries, maturity levels, and AI use cases, the same pattern consistently emerges:
ISO 42001 is far more demanding than most organisations expect, especially those already familiar with and certified under ISO 27001.
This article distils the most important lessons we’ve learned from field engagements, highlighting what auditors actually ask for, where organisations struggle, and what leaders should prioritise if they want to reach certification with confidence.

ISO 42001 is Not a 27001 Add-On. It’s a Stand-Alone Standard

One of the most common misconceptions we encounter in early discussions is the assumption that ISO 42001 is simply a “natural extension” of ISO 27001. The logic seems sound: both standards emphasise risk management, and both deal with information-driven systems.
But in practice, ISO 42001 introduces an entirely different scope and set of expectations. Being compliant with ISO 27001 helps — but only partially.
Where there is overlap
We routinely see 27001-certified organisations benefit from existing controls around:
  • Information security
  • Data governance and data quality
  • Access management
  • Secure software development
  • Incident response structures
These foundations are undoubtedly helpful, and auditors will recognise them as positive indicators.
However, this is where the overlap ends
ISO 42001 goes far beyond IT security. It demands:
  • A definition of what “AI system” means for the organisation
  • A governance model that covers the entire AI lifecycle
  • Evidence of trustworthy AI principles in practice
  • Ongoing model monitoring, human oversight, and decision accountability
  • Explicit risk treatment for model behaviour, fairness, bias, drift, explainability, and misuse
These areas are typically not covered in 27001. As a result, leaders who assume minimal adjustments will suffice often experience the steepest learning curve.

The First Barrier: Defining “AI System” and “Trustworthy AI”

Before any gap assessment or control mapping can begin, organisations need to answer two deceptively simple questions:
  1. What exactly qualifies as an AI system in your organisation?
  2. Which principles of trustworthy AI are you committing to uphold, monitor, and prove?
This is where we see the majority of early readiness efforts fall apart.
The definition of “AI system” is crucial because it determines:
  • What falls within the audit scope
  • Which teams need to be involved
  • Which risks must be assessed
The definition will also help you meet auditors’ expectations: they will typically ask you to determine the risk level of your systems, which trustworthy AI principles you apply, and how these are documented.
But many organisations struggle to distinguish between rule-based automation, classical machine learning, generative AI use cases, and vendor-supplied AI functionality embedded into enterprise platforms.
Here’s the issue at stake:
  • A vague or overly narrow definition leads to under-scoping.
  • An overly broad definition leads to audits you cannot operationally support.
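One lightweight way to make the definition operational is an explicit AI system inventory that records the scoping decision for each system. The sketch below is illustrative only: the record fields, category names, and scoping logic are our assumptions, not terminology mandated by ISO 42001.

```python
from dataclasses import dataclass, field
from enum import Enum

class AICategory(Enum):
    RULE_BASED = "rule-based automation"        # often debated as out of scope
    CLASSICAL_ML = "classical machine learning"
    GENERATIVE = "generative AI"
    VENDOR_EMBEDDED = "vendor-supplied AI feature"

@dataclass
class AISystemRecord:
    name: str
    category: AICategory
    owner: str                        # accountable business owner
    in_audit_scope: bool              # per the organisation's own definition
    trustworthy_ai_principles: list[str] = field(default_factory=list)

# Hypothetical inventory: the scoping decision is captured explicitly,
# not left implicit in a team's assumptions.
inventory = [
    AISystemRecord("invoice-routing", AICategory.RULE_BASED, "Finance Ops", False),
    AISystemRecord("churn-predictor", AICategory.CLASSICAL_ML, "CRM Team", True,
                   ["fairness", "explainability"]),
]
in_scope = [s.name for s in inventory if s.in_audit_scope]
print(in_scope)  # ['churn-predictor']
```

Keeping the inventory as structured data (rather than a narrative document) makes under- and over-scoping visible early: every system gets a recorded category, owner, and an explicit in/out decision that auditors can trace.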

Risk Management: The Backbone of ISO 42001

Risk-related process documentation is the backbone of ISO 42001. However, most organisations underestimate what “risk management” means in the context of AI.
Most organisations, influenced by ISO 27001, typically maintain a static risk register, conduct security-focused risk assessments, and provide basic documentation on data processing risks.
However, ISO 42001 auditors actually look for:
  • A structured, traceable link between AI risks, mitigation measures, responsible owners, monitoring processes, and triggers for escalation
  • Model-specific risks, not generic security risks
  • Lifecycle-aware risks — from data collection to model retirement
  • Evidence that risk mitigations are active, not theoretical
In other words, ISO 42001 expects organisations to treat AI risk as a continuous operational process.
Many clients rely on tools like model cards, data sheets, AI risk logs, or custom governance templates. Others use software solutions to manage AI risks. Both are acceptable, as long as the process is demonstrably systematic.
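To illustrate the “structured, traceable link” auditors look for, here is a minimal sketch of a single risk log entry that ties a model-specific risk to its mitigation, owner, monitoring process, and escalation trigger. All field names, thresholds, and the PSI metric choice are illustrative assumptions, not requirements of the standard.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AIRiskEntry:
    risk_id: str
    description: str               # model-specific, not a generic security risk
    lifecycle_stage: str           # from data collection to model retirement
    mitigation: str
    owner: str                     # responsible owner, named explicitly
    monitoring_process: str
    escalation_trigger: Callable[[float], bool]  # fires on a monitored metric

# Hypothetical entry for a drift risk on a credit-scoring model
drift_risk = AIRiskEntry(
    risk_id="R-007",
    description="Feature drift degrades credit-scoring accuracy",
    lifecycle_stage="operation",
    mitigation="Monthly retraining with drift-gated deployment",
    owner="ML Engineering Lead",
    monitoring_process="Weekly PSI check on top features",
    escalation_trigger=lambda psi: psi > 0.2,   # illustrative threshold
)

# A monitoring run: the trigger turns escalation into a recorded,
# testable event rather than a judgement call made after the fact.
observed_psi = 0.27
if drift_risk.escalation_trigger(observed_psi):
    print(f"Escalate {drift_risk.risk_id} to {drift_risk.owner}")
```

Whether this lives in a spreadsheet, a model card, or a governance platform matters less than the property shown here: each risk carries its own mitigation, owner, monitoring routine, and an unambiguous condition under which escalation happens.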

Operationalisation: Where Readiness Fails or Succeeds

Documentation alone will not guarantee certification. ISO 42001 is explicitly designed to detect “compliance theatre.”
This is where we observe the sharpest divide between organisations that succeed and those that fail.
The ISO 42001 standard expects real evidence. Auditors typically request artefacts such as:
  • Model testing and validation reports
  • Bias and fairness assessment logs
  • Monitoring dashboards showing drift or performance degradation
  • Human-in-the-loop decision logs
  • Audit trails for critical decisions
  • Change management tickets for model updates
  • Meeting minutes of AI governance committees
  • Records showing how governance decisions were operationalised
If the processes exist only on paper, auditors will notice immediately.
This is also where organisations underestimate the work. Many teams assume that because their data science team “tests models,” that will be enough.
In reality, ISO 42001 requires evidence of testing, oversight, monitoring, and governance that is structured, consistent, and traceable. This means:
  • Linking testing reports to specific risks
  • Recording who reviewed them and when
  • Demonstrating decisions taken as a result
  • Showing periodic monitoring actions
  • Proving governance cycles actually run
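The traceability requirements above can be sketched as a simple evidence log that links each artefact back to a risk, records who reviewed it and when, and captures the resulting decision. The record shape and the gap check are our illustrative assumptions, not a format prescribed by the standard.

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class EvidenceRecord:
    artefact: str         # e.g. a testing or bias assessment report
    linked_risk_id: str   # traceability back to the risk register
    reviewed_by: str      # who reviewed it
    reviewed_on: date     # and when
    decision: str         # what was decided as a result

# Hypothetical log entry
log = [
    EvidenceRecord("fairness_report_2025Q3.pdf", "R-012",
                   "AI Governance Committee", date(2025, 10, 2),
                   "Approved with retraining condition"),
]

# A gap check an internal audit might run before certification:
# which risks actually have reviewed evidence behind them?
risks_with_evidence = {e.linked_risk_id for e in log}
print("R-012" in risks_with_evidence)  # True
```

The point is not the tooling but the linkage: every testing report, review, and governance decision is addressable from the risk it treats, which is exactly what makes the evidence "structured, consistent, and traceable" rather than a folder of disconnected documents.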

Quick Wins for Organisations Starting Their ISO 42001 Journey

While achieving ISO 42001 requires substantial effort, organisations can make rapid progress by focusing on several strategic priorities.
1. Engage early with all relevant stakeholders and build a cross-functional, multi-disciplinary team to accelerate every moving part of the readiness process. Your team may include (but is not limited to):
  • Legal & Compliance
  • CISO or Information Security experts
  • Data Science & ML Engineering
  • Risk & Internal Audit
  • Business owners of AI systems
  • DPOs
2. Map and document the AI lifecycle. Mapping all processes and data flows enables organisations to identify gaps, determine the required evidence, assign ownership, and develop a governance model tailored to their specific realities.
3. Document governance “in motion.” Instead of backfilling evidence before audits, organisations should start capturing it immediately. Such evidence may include:
  • Governance committee minutes
  • Testing records for existing AI use cases
  • Review and approval workflows
  • Risk treatment decisions
  • Monitoring records
This “living governance” approach gives organisations a head start and the confidence that they know where they are and what they are doing.
4. If your company is about to start a new AI project, it is the perfect opportunity to use it as a testbed to:
  • Familiarise yourself with the gaps between the standard’s requirements and your current processes across the AI lifecycle
  • Generate objective evidence
  • Test risk controls in practice, not only on paper
  • Set up an AI management system (AIMS) framework tailored to your company’s needs

ISO 42001 Is a Shift to Accountable AI

One of the most important takeaways from our field experience is that ISO 42001 is not about passing a one-off audit. It is about embedding responsible, trustworthy, and well-governed AI across the organisation.
In that sense, ISO 42001 acts as both a compliance framework and an organisational transformation enabler:
  • It drives clarity on where AI is used.
  • It strengthens cross-functional collaboration.
  • It forces organisations to adopt trustworthy AI principles in practice.
  • It creates visibility and accountability for AI decisions.
  • It aligns internal governance with regulatory demands.
For organisations serious about AI maturity, ISO 42001 is a strategic asset.

How code4thought Helps Organisations Prepare

Based on the challenges we’ve encountered and the lessons captured in this article, we’ve developed specialised services to help organisations prepare efficiently and confidently.

These include:

  • Performing AI quality testing and audits, leveraging our proprietary platform iQ4AI.
  • Helping you adhere to the regulatory requirements set forth by the EU AI Act in a timely and cost-efficient manner.
  • Conducting a 360-degree assessment to ensure the high-quality implementation, data processing integrity, and modern capabilities of your AI systems.
  • Exercising AI due diligence, using our iQ4AI platform to analyse any AI-based system and deliver a thorough risk analysis and a practical improvement roadmap.
  • Advising on best practices for setting up the processes and infrastructure that will ensure your AI-based systems are Responsible, Reliable, and Trusted.
  • Evaluating your readiness level for ISO 42001, both in principle and in practice, using our ISO 42001 readiness test.
Each engagement is tailored to the organisation’s AI maturity, industry, risk profile, and regulatory obligations.

Final Thoughts

ISO 42001 represents a significant step in the evolution of AI governance. Where many frameworks focus on principles, ISO 42001 demands evidence: the real operational heartbeat of AI systems.
From our experience supporting organisations and audit bodies, success comes when organisations move beyond compliance checklists and embrace a broader shift:
  • from ad hoc to structured AI processes
  • from siloed ownership to cross-functional governance
  • from static documentation to living evidence
  • from security-driven perspectives to holistic trustworthy AI
Achieving ISO 42001 is challenging. But it is also transformative. For organisations that invest the time, clarity, and operational discipline it requires, certification becomes both attainable and strategically valuable.