EU AI Act Assurance

Our EU AI Act Assurance service provides technical & compliance assessments of existing AI systems, processes, and algorithms to identify and mitigate the legal, ethical, and reputational risks of non-compliance. It also serves as a comprehensive technical guide for responsible AI deployment and use, promoting quality, transparency, accountability, and human-centric AI practices within the organization while maximizing the business value of AI systems.

Service Pillars

In-Depth Comprehension of Regulatory Requirements

Our advisory methodology focuses on helping organisations understand the specific provisions, requirements, and obligations of the EU AI Act, interpret the legislation, and anticipate its business and systemic impact so they can act proactively.

Comprehensive Technical & Compliance Assessment

We conduct extensive technical & compliance assessments of existing AI systems, processes, and algorithms to identify gaps or areas of non-compliance. Our proprietary assessment platform, combined with the expertise of our management advisors and legal partners, ensures thorough evaluations and effective solutions.

Risk Management & Mitigation

The service helps organisations accurately identify and mitigate legal, ethical, and reputational risks associated with non-compliance. We provide practical recommendations for remediation to ensure adherence to the EU AI Act and related legislation.

Continuous Monitoring & Improvement

Our advisory services and tooling capabilities allow fast, cost-effective integration of continuous monitoring and improvement for AI systems & processes. This supports and promotes innovation while maintaining compliance at minimal, pragmatic adaptation cost.

Why us?

Client-specific approach: code4thought offers structured advisory services tailored to the unique requirements and circumstances of each organisation, developing comprehensive compliance strategies, risk-mitigation plans, and implementation plans aligned with the client’s business objectives, industry sector, and the requirements of the EU AI Act.
Cross-Disciplinary Expertise: code4thought’s team combines legal, technical, and ethical perspectives on AI governance. By assembling a team of experts with diverse backgrounds and skill sets, code4thought provides solutions that address the multifaceted challenges of AI regulation and compliance.
Unique combination of expert advisory and proprietary audit tooling platform: our combined capabilities support accuracy, speed, consistency, and continuity when AI risk management frameworks are implemented at scale.
Proven Track Record: code4thought has an extensive track record in assessing risks associated with large-scale software systems across various industries and sectors. Our experts are highly capable of identifying, analysing, and mitigating the complex risks inherent in software development and deployment. This experience offers valuable insights and best practices that are equally applicable to AI systems operating in diverse contexts.

Features

The EU AI Act Assurance service provides guidance, consultation, and support to help organizations ensure compliance with the regulations outlined in the EU AI Act. Our solution includes:
AI Inventory Mapping: Create a centralized registry that maintains detailed, up-to-date records of the projects, models, and databases associated with AI technology across an organization, ensuring clear visibility of and strong control over all AI assets.
AI Technical Assessment: Evaluate AI systems, algorithms, and models for compliance with technical standards and guidelines, and assess the performance, transparency, fairness, accountability, and robustness of AI systems. Conduct technical testing and analysis using our audit tooling platform to identify vulnerabilities and areas for improvement.
Regulatory Compliance Assessment: Evaluate the organization’s identified AI systems and processes to assess compliance with the specific requirements and obligations outlined in the EU AI Act.
AI Risk Assessment and Mitigation: Identify potential risks associated with AI deployment and operations in the context of the EU AI Act. These risks may relate to data vulnerabilities; to the performance, robustness, security, explainability, and transparency of AI assets; or to legal exposure. Each risk is evaluated and assigned an appropriate “importance level” by a proprietary, intuitive scoring algorithm (a simplified sketch follows this list), and strategies and frameworks are then developed to mitigate these risks effectively.
Documentation and Reporting: Assist in the preparation of documentation and reports required for regulatory compliance, including impact assessments, documentation of AI systems, and regulatory filings as mandated by the EU AI Act.
Advisory Services: Offer advisory support and expert consultation throughout the compliance and risk-analysis process. This can be an ongoing commitment to address emerging issues, interpret regulatory changes, and adapt compliance strategies accordingly. Our delivery methods follow a phased approach, which allows a detailed understanding of challenges and solutions and informs the client’s decision-making on efficient, cost-effective planning and use of resources.
Continuous Monitoring and Auditing (optional): Use our proprietary tooling platform to perform independent, recurring audits combined with ongoing advisory support. We continuously refine the established monitoring, auditing, and reporting mechanisms to maintain EU AI Act compliance as regulations evolve and as the organization’s AI landscape changes.
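
The scoring algorithm itself is proprietary, so the sketch below is purely illustrative: one plausible way an “importance level” could be derived, assuming hypothetical risk dimensions, weights, and thresholds (none of these names or numbers come from our platform).

    from dataclasses import dataclass

    # Hypothetical dimension weights; the actual proprietary algorithm differs.
    WEIGHTS = {
        "performance": 1.0,
        "robustness": 1.2,
        "security": 1.5,
        "explainability": 1.0,
        "transparency": 1.0,
        "legal": 2.0,
    }

    @dataclass
    class RiskFinding:
        dimension: str   # one of the WEIGHTS keys
        likelihood: int  # 1 (rare) .. 5 (almost certain)
        severity: int    # 1 (negligible) .. 5 (critical)

        def score(self) -> float:
            return WEIGHTS[self.dimension] * self.likelihood * self.severity

    def importance_level(findings: list[RiskFinding]) -> str:
        """Bucket the highest weighted likelihood-times-severity score into a level."""
        top = max(f.score() for f in findings)
        if top >= 30:
            return "critical"
        if top >= 15:
            return "high"
        if top >= 6:
            return "medium"
        return "low"

    findings = [
        RiskFinding("legal", likelihood=2, severity=4),       # possible GDPR overlap
        RiskFinding("robustness", likelihood=3, severity=3),  # drift on new data
    ]
    print(importance_level(findings))  # -> "high" (legal: 2 * 4 * 2.0 = 16)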

Benefits

Compliance Assurance: Gain assurance that your organization meets the requirements and obligations outlined in the EU AI Act, mitigating the risk of non-compliance penalties and legal issues.
Risk Mitigation: Identify and address legal, ethical, and reputational risks associated with AI deployment, safeguarding your organisation.
Continuous Improvement: Implement mechanisms for ongoing monitoring, auditing, and evaluation of compliance efforts, adapting to evolving regulatory requirements and industry standards.
Minimise Adaptation Costs: A proven methodology and an audit platform deliver a comprehensive solution that reduces both the initial cost of timely compliance and subsequent costs, by minimising the risk of hasty, costly implementations on the client side that add technical debt through inadequate analysis and planning.
Maximize Business Value:
  • Improve ROI
    • AI Inventory Mapping helps organizations gain visibility and control over their AI assets, enabling them to effectively manage and leverage AI technologies.
    • Technical assessment ensures that AI systems perform as intended under various conditions, leading to improved efficiency and productivity. By optimizing performance and reliability, organizations can achieve better outcomes and maximize the value derived from their AI investments.
    • Continuous monitoring and auditing ensures that each AI asset continues to address a current business problem, and facilitates timely corrections and enhancements when it does not. A well-designed monitoring and continuous-learning process is key to maximizing the business value any AI asset can add over time.
Enhance Governance & Build Trust
  • The processes and mechanisms defined within the EU AI Act assurance context can serve as a stepping stone for the further development and implementation of robust policies, procedures, and governance frameworks that promote transparency, accountability, and responsible AI practices across your organization.
  • Present tangible evidence of compliance using various metrics, in an intuitive manner that makes it easily comprehensible to a broad audience.
  • Demonstrate your commitment to compliance with the EU AI Act, enhancing confidence and trust among regulatory authorities, customers, investors, and other stakeholders.

Frequently Asked Questions

What is the EU AI Act?
The EU AI Act is Europe’s attempt to lead AI regulation and set a global standard. The Act aims to prioritize human rights in the development and deployment of AI, categorizing systems according to the impact they can have on people’s lives. It embraces a risk-based approach: it prohibits certain AI systems and requires high-risk AI systems to comply with strict requirements and to be assessed before they are placed on the market as well as throughout their lifecycle. The Act imposes heavy fines for non-compliance, seeking to build trustworthy technology that fosters innovation, growth, and competitiveness in the EU’s internal market for AI.
Why do we need to regulate the use of Artificial Intelligence?
AI is a rapidly developing technology that can bring significant benefits to society and the economy, but it also poses new challenges and risks that must be addressed to avoid undesirable outcomes. For example, some AI systems may be opaque, biased, inaccurate, or harmful to users or third parties. The EU has therefore decided to regulate the use of AI in a human-centric and proportionate manner, grounded in its democratic values and fundamental human rights.
Why does the EU AI Act exist?

The AI Act seeks to balance the following goals:

  • Promotion of AI Innovation: Encourage the development and uptake of trustworthy AI within the EU.
  • Protection of Fundamental Rights: Ensure AI systems respect fundamental human rights (e.g., privacy, non-discrimination) and EU values.
  • User Safety: Guarantee that AI systems placed on the market or used within the EU are safe and mitigate potential risks.
What are the key elements of the EU AI Act?
The EU AI Act introduces a comprehensive regulatory framework addressing various aspects of AI development and use. Here’s a breakdown of its key elements:
  • Regulation of AI Systems: The Act establishes rules for the placing on the market, putting into service, and use of artificial intelligence systems within the EU. This extends to general-purpose AI models that can be adapted for various tasks.
  • Prohibition of Harmful Practices: Certain AI practices that present unacceptable risks are outright banned. These include government social scoring and the use of AI to exploit the vulnerabilities of specific groups.
  • High-Risk AI Requirements: AI systems categorized as “high-risk” have strict requirements including:
      • Risk management systems
      • Use of high-quality training datasets
      • Logging and record-keeping
      • Human oversight mechanisms
      • Transparency and clear information provided to users
      • Accuracy, robustness and cybersecurity
  • Transparency Rules: Specific AI systems, such as chatbots or deepfakes, must disclose that they are AI-powered to ensure users aren’t misled.
  • Innovation Support: The Act aims to foster a trustworthy AI ecosystem within the EU, particularly by supporting the development and uptake of AI by small and medium-sized enterprises (SMEs) and start-ups. This includes initiatives like regulatory sandboxes for testing.
  • Hefty fines: Non-compliance with the prohibition of certain AI systems carries fines of up to EUR 35 000 000 or 7% of the offender’s global annual turnover, whichever is higher (see the sketch below).
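
To make the scale of those fines concrete, here is a minimal sketch of the penalty ceiling for prohibited-practice violations under the “whichever is higher” rule; the turnover figure is a made-up example.

    def max_fine_prohibited_practice(global_annual_turnover_eur: float) -> float:
        """Upper bound on fines for prohibited AI practices under the EU AI Act:
        EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
        return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

    # A company with EUR 2 billion in turnover faces a cap of EUR 140 million.
    print(max_fine_prohibited_practice(2_000_000_000))  # 140000000.0
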
Who needs to comply with the EU AI Act?
The EU AI Act applies to both public and private actors – providers, importers, distributors, and users of AI systems – inside and outside the EU, as long as the AI system is placed on the EU market or its use affects people located in the EU. Importers of AI systems must also ensure that the foreign provider has already carried out the appropriate conformity assessment procedure, that the system bears a European Conformity (CE) marking, and that it is accompanied by the required documentation and instructions for use.

Providers of free and open-source models are exempt from most of these obligations. However, this exemption does not cover obligations for providers of general-purpose AI models with systemic risks.

Obligations also do not apply to research, development, and prototyping activities preceding the release on the market, and, furthermore, the regulation does not apply to AI systems that are exclusively for military, defense, or national security purposes, regardless of the type of entity carrying out those activities.
What are the obligations of general-purpose AI models?
Certain obligations are foreseen for providers of general-purpose AI models, including large generative AI models. Providers must mainly:
  • make technical documentation accessible to competent authorities,
  • comply with copyright and related rights,
  • provide information on training data,
  • make necessary information and documentation available, where needed, to integrate GPAI into another AI system.
When will the EU AI Act be enforced?
Following its adoption by the European Parliament and the Council, the AI Act shall enter into force on the twentieth day following its publication in the Official Journal. It will become fully applicable 24 months after entry into force, with a staggered approach as follows:
  • 6 months: member states shall phase out any prohibited systems.
  • 12 months: obligations for general-purpose AI and governance provisions shall become applicable.
  • 24 months: all rules of the AI Act become applicable, including obligations for high-risk systems defined in Annex III (list of high-risk use cases).
  • 36 months: obligations for high-risk systems defined in Annex I (list of EU harmonization legislation) apply.
How will the AI Act be enforced?
Each member state should designate one or more competent national authorities to supervise the application and implementation of the AI Act as well as carry out market surveillance activities.

To increase efficiency and provide an official point of contact for the public and other counterparts, each member state should designate one national supervisory authority within three months of the Act’s entry into force; this authority will also represent the country on the European Artificial Intelligence Board. Additional technical expertise will be provided by an advisory forum representing a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society, and academia.

In addition, the Commission will establish a new European AI Office within the Commission, which will oversee codes of practice, supervise general-purpose AI models, cooperate with the European Artificial Intelligence Board, and be supported by a scientific panel of independent experts.
How does the EU AI Act categorize AI systems?
The AI Act uses a risk-based categorization system (a toy triage sketch follows this answer):
  • Unacceptable Risk: AI systems deemed to have an unacceptable level of risk are prohibited entirely (for example, social scoring systems).
  • High-Risk: AI systems with significant potential for harm (such as those used in critical infrastructure, HR recruitment, or law enforcement) are subject to strict compliance requirements. Annexes I and III to the Act list the high-risk AI systems and can be revised to keep pace with evolving AI use cases.
  • Limited Risk: AI systems with specific transparency obligations (like chatbots or deepfakes, where users need to be aware they’re interacting with AI).
  • Minimal Risk: The vast majority of AI systems (like spam filters or AI in video games) fall into this category with no specific regulations beyond existing laws.
Specific transparency requirements are imposed for certain AI systems, for example, where there is a clear risk of manipulation (e.g., via the use of chatbots). Users should be aware that they are interacting with a machine.

In addition, the AI Act considers systemic risks that could arise from general-purpose AI models, including large generative AI models, if they are highly capable or widely used. For example, powerful models could cause serious accidents or be misused for far-reaching cyberattacks. Many individuals could be affected or discriminated against if a model propagates harmful biases across many applications.
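
As a toy illustration only, the triage logic behind these four tiers can be sketched as a simple decision ladder; real classification follows the Act’s Annexes and legal analysis, and every flag name below is hypothetical, not a classification tool.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited outright"
        HIGH = "strict compliance obligations"
        LIMITED = "transparency obligations"
        MINIMAL = "no AI-Act-specific obligations"

    # Hypothetical flags; determining them is itself a legal exercise.
    def toy_triage(prohibited_practice: bool, annex_i_or_iii: bool,
                   user_facing_ai: bool) -> RiskTier:
        if prohibited_practice:   # e.g. social scoring
            return RiskTier.UNACCEPTABLE
        if annex_i_or_iii:        # e.g. recruitment, critical infrastructure
            return RiskTier.HIGH
        if user_facing_ai:        # e.g. chatbots, deepfakes
            return RiskTier.LIMITED
        return RiskTier.MINIMAL   # e.g. spam filters, AI in video games

    print(toy_triage(False, True, True))  # RiskTier.HIGH
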
What are the key requirements for high-risk AI systems?

If your AI system is classified as high-risk, you’ll need to adhere to requirements including:

  • Robust Risk Management: Providers of high-risk AI systems must establish a comprehensive risk management system. This involves identifying, analyzing, mitigating, and monitoring risks associated with the AI system throughout its entire lifecycle.
  • High-Quality Datasets: High-risk AI systems need to be trained on datasets that are:
    • Relevant, representative, free of errors, and complete.
    • Collected or created in ways that respect privacy and data protection laws.
    • Designed to mitigate the risk of bias and discrimination in the AI system’s outputs (a minimal bias-check sketch follows this list).
    Special categories of personal data may be processed for bias detection and correction only under strict access, retention, documentation, and misuse controls.
  • Technical Documentation & Record-keeping: Detailed documentation of the AI system’s design, general logic, algorithms, capabilities, limitations, and intended purpose must be maintained. Records of the system’s operation must also be kept; logging capabilities should enable the identification of potentially high-risk situations, ensuring traceability and allowing the AI system’s decisions to be reviewed when needed.
  • Human Oversight: Human oversight should be embedded to prevent or minimize the risks to fundamental rights. This includes mechanisms for humans to:
    • Monitor the system’s output for any unexpected performance.
    • Understand how decisions are made and remain aware of automation bias.
    • Intervene, override, or stop the AI system when necessary.
  • Transparency & User Information: Users must be clearly informed that they are interacting with an AI system and must be provided with instructions containing information about:
    • The system’s capabilities and limitations.
    • How to correctly use and interpret the system’s outputs, including potential risks to health and safety arising from reasonably foreseeable misuse.
    • How to contact the system provider.
  • Accuracy, Robustness, & Cybersecurity: High-risk AI systems must meet high standards in these areas:
    • Accuracy: Outputs should be reliable and accurate within intended limits.
    • Robustness: Systems must be resilient to errors, inconsistencies, and attacks (model evasion, confidentiality attacks).
    • Cybersecurity: Strong safeguards against security vulnerabilities are required.
  • Conformity Assessment: Before being placed on the market and after every substantial modification, high-risk AI systems must undergo a conformity assessment to demonstrate compliance with the AI Act. For some systems, this may involve third-party certification.
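
As a minimal illustration of the kind of quantitative check that the dataset bias requirement can translate into, the sketch below computes a demographic parity difference between two groups; the metric choice, the data, and the 0.10 cutoff mentioned in the comment are all hypothetical, and real conformity testing is considerably broader.

    def demographic_parity_difference(outcomes: list[int], groups: list[str],
                                      group_a: str, group_b: str) -> float:
        """Difference in positive-outcome rates between two groups (0 = parity)."""
        def rate(g: str) -> float:
            members = [o for o, grp in zip(outcomes, groups) if grp == g]
            return sum(members) / len(members)
        return rate(group_a) - rate(group_b)

    # Hypothetical screening outcomes (1 = positive decision) for two groups.
    outcomes = [1, 0, 1, 1, 0, 1, 0, 0]
    groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
    gap = demographic_parity_difference(outcomes, groups, "A", "B")
    print(f"parity gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50, well above a 0.10 cutoff
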
Which uses of AI systems are categorized as ‘unacceptable risk’?

The Act bans a very limited set of particularly harmful uses of AI that violate EU values and fundamental rights:

  • Social scoring by public or private actors.
  • Exploitation of vulnerabilities of persons, and the use of subliminal techniques that materially distort a person’s behavior.
  • Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions.
  • Biometric categorization of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation. The filtering of lawfully acquired biometric datasets by law enforcement is exempt.
  • Individual predictive policing.
  • Emotion recognition in the workplace and education institutions, unless for medical or safety reasons (e.g., monitoring a pilot’s tiredness levels).
  • Untargeted scraping of the Internet or CCTV for facial images to build up or expand databases.
What are the obligations of providers of high-risk AI systems?

Providers of high-risk AI systems have significant responsibilities to ensure the safety, transparency, and ethical use of their systems. Key obligations under the EU AI Act include:

  • Compliance with AI Act Requirements: Ensure their high-risk AI systems fulfill the requirements set out in the Act, including those for data quality, risk management, human oversight, accuracy, robustness, cybersecurity, and transparency.
  • Quality Management System: Implement and maintain a quality management system covering design and quality control, testing and validation, technical specifications, data, and risk management of the high-risk AI system to ensure ongoing compliance with requirements.
  • Technical Documentation & Record-keeping: Prepare and maintain extensive technical documentation detailing the system’s design, function, risks, mitigation strategies, and relevant changes. Additionally, keep accurate records of the AI system’s operation for traceability and review.
  • Conformity Assessment: Subject the high-risk AI system to a conformity assessment procedure, which can be performed as an internal control or by a notified third-party body depending on the risk level and the specific use of the AI system.
  • CE Marking: Upon successful conformity assessment, affix the indelible CE marking to the AI system, indicating its compliance with the EU AI Act.
  • Post-Market Monitoring: Implement a system for actively monitoring the high-risk AI system’s compliance and, where relevant, its interaction with other AI systems in use.
  • Corrective Actions & Reporting: Take prompt corrective actions if risks or incidents are identified or disable the system as appropriate. Providers must also notify the distributors of high-risk AI systems and report serious incidents and malfunctions to national authorities.
  • Registration: Register certain high-risk AI systems in a publicly accessible EU database.
What is a Fundamental Rights Impact Assessment (FRIA)?
A Fundamental Rights Impact Assessment (FRIA) is a proactive process that evaluates the potential impact of an AI system on fundamental human rights. It helps identify potential risks and allows deployers and users of AI systems to take steps to mitigate those risks and ensure their AI system operates in a way that respects fundamental rights.

Who Conducts a FRIA?

The responsibility for conducting an FRIA falls on several parties depending on the context:
  • Deployers of High-Risk AI Systems: Under the EU AI Act, deployers of high-risk AI systems are primarily responsible for conducting an FRIA. This assessment should take place before the deployment of the high-risk AI system. According to the Act, the deployers concerned are bodies governed by public law or private entities providing public services, as well as deployers of the high-risk AI systems referred to in points 5(b) and (c) of Annex III.
  • Users of High-Risk AI Systems: Organizations that intend to use high-risk AI systems within the EU may also need to conduct an FRIA. This is particularly relevant if the system is being adapted for a new use case or context.
When is an FRIA Conducted?

The ideal time to conduct an FRIA is during the early stages of development of a high-risk AI system, allowing potential risks to be identified and mitigated early. However, an FRIA can be conducted at any point in the development or use of a high-risk AI system, particularly if concerns arise about its potential impact on fundamental rights. Such concerns include changes to the system’s intended purpose, changes to the groups likely to be affected by it, or an extension of its designated period of use.
What are the reasons to invest in an EU AI Act Assurance Service?
  • Compliance Expertise: EU AI Act Assurance services bring specialized expertise in understanding the complex regulatory requirements of the Act. These experts can help organizations navigate the risk classification process, develop appropriate compliance strategies, and prepare for conformity assessments.
  • Risk Reduction: Assurance services aid in identifying and mitigating risks inherent in the development and deployment of AI systems. This includes risks related to fundamental rights, data bias, and system safety. Proactive risk management can prevent costly non-compliance penalties and reputational damage.
  • Streamlining Conformity Assessments: Experienced service providers can guide organizations through the conformity assessment processes, ensuring the necessary technical documentation, testing, and risk management systems are in place. This helps streamline the path toward achieving CE marking and market access.
  • Enhancing Governance & Building Trust: Engaging with an assurance service demonstrates a commitment to responsible AI development and can enhance trust with users, investors, and regulators. It signals that the organization prioritizes ethical principles and compliance.
  • Competitive Advantage: In the increasingly regulated AI landscape, companies that proactively demonstrate compliance with the EU AI Act will have a competitive edge. Early adopters of assurance services are seen as leaders in trustworthy and responsible AI.
Who benefits most from Assurance Services?

  • Providers of high-risk AI systems: Companies developing or deploying AI systems classified as high-risk are prime candidates, as compliance is mandatory.
  • Organizations aiming for a leadership position: Companies looking to establish themselves as frontrunners in ethical and responsible AI development can benefit from demonstrating their adherence to the AI Act.
  • Companies with complex AI systems: Organizations using AI systems with multi-faceted use cases or those processing sensitive data will find value in a comprehensive compliance and risk assessment.

Get ready for the EU AI Act with us!
