EU AI Act Assurance
Service Pillars
Regulatory Requirements
In-Depth Comprehension
Comprehensive Technical & Compliance Assessment
Risk Management & Mitigation
Continuous Monitoring & Improvement
Why us?
Features
Benefits
- Improve ROI
- AI Inventory Mapping gives organizations visibility into and control over their AI assets, enabling them to manage and leverage AI technologies effectively (a minimal sketch of such an inventory record follows this list).
- Technical assessment verifies that AI systems perform as intended under varied conditions, leading to improved efficiency and productivity. By optimizing performance and reliability, organizations can achieve better outcomes and maximize the value of their AI investments.
- Continuous monitoring and auditing ensure that each AI asset still addresses a current business problem and, where it does not, facilitate timely corrections and enhancements. A well-designed monitoring and continuous-learning process is key to maximizing the business value an AI asset adds over time.
- The processes and mechanisms defined for EU AI Act assurance can serve as a stepping stone toward robust policies, procedures, and governance frameworks that promote transparency, accountability, and responsible AI practices across your organization.
- Present tangible evidence of compliance through intuitive metrics that a broad audience can readily understand.
- Demonstrate your commitment to compliance with the EU AI Act, enhancing confidence and trust among regulatory authorities, customers, investors, and other stakeholders.
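The inventory-mapping benefit above is easier to picture with a concrete record per asset. Below is a minimal, illustrative Python sketch of one inventory entry; the field names and registry structure are our assumptions, not anything the EU AI Act prescribes:

```python
# A minimal sketch of an AI inventory record, using an in-memory registry.
# Field names are illustrative assumptions, not mandated by the EU AI Act.
from dataclasses import dataclass, field


@dataclass
class AIAssetRecord:
    """One entry in an organization's AI inventory."""
    name: str                      # e.g., "cv-ranker" (hypothetical asset name)
    owner: str                     # accountable business owner
    purpose: str                   # the business problem the asset addresses
    risk_tier: str                 # "unacceptable" | "high" | "limited" | "minimal"
    training_data_sources: list[str] = field(default_factory=list)
    last_reviewed: str = ""        # ISO date of the last compliance review


# Example usage: register one asset, then filter for high-risk systems.
inventory = [
    AIAssetRecord(
        name="cv-ranker",
        owner="hr-analytics",
        purpose="Rank applicants for recruiter review",
        risk_tier="high",
        training_data_sources=["internal-ats-2019-2023"],
        last_reviewed="2024-05-01",
    )
]
high_risk = [asset for asset in inventory if asset.risk_tier == "high"]
print(f"{len(high_risk)} high-risk asset(s) require conformity assessment")
```

In practice such records would live in a governed catalog rather than in memory, but even this shape makes it straightforward to filter for the systems that carry mandatory obligations under the Act.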
Frequently Asked Questions
The AI Act seeks to balance the following goals:
- Promotion of AI Innovation: Encourage the development and uptake of trustworthy AI within the EU.
- Protection of Fundamental Rights: Ensure AI systems respect fundamental human rights (e.g., privacy, non-discrimination) and EU values.
- User Safety: Guarantee that AI systems placed on the market or used within the EU are safe and mitigate potential risks.
- Regulation of AI Systems: The Act establishes rules for the placing on the market, putting into service, and use of artificial intelligence systems within the EU. This extends to general-purpose AI models that can be adapted to various tasks.
- Prohibition of Harmful Practices: Certain AI practices that present unacceptable risks are outright banned. These include government social scoring and the use of AI to exploit the vulnerabilities of specific groups.
- High-Risk AI Requirements: AI systems categorized as “high-risk” must meet strict requirements, including:
- Risk management systems
- Use of high-quality training datasets
- Logging and record-keeping
- Human oversight mechanisms
- Transparency and clear information provided to users
- Accuracy, robustness, and cybersecurity
Providers of free and open-source models are exempt from most of the obligations listed below. However, this exemption does not cover the obligations of providers of general-purpose AI models with systemic risks.
Obligations also do not apply to research, development, and prototyping activities that precede release on the market. Furthermore, the regulation does not apply to AI systems used exclusively for military, defense, or national security purposes, regardless of the type of entity carrying out those activities.
Providers of general-purpose AI (GPAI) models must:
- make technical documentation accessible to competent authorities,
- comply with copyright and related rights,
- provide information on training data,
- make necessary information and documentation available, where needed, to integrate GPAI into another AI system.
The Act's obligations phase in after its entry into force:
- 6 months: member states shall phase out any prohibited systems.
- 12 months: obligations for general-purpose AI and governance provisions shall become applicable.
- 24 months: all rules of the AI Act become applicable, including obligations for high-risk systems defined in Annex III (list of high-risk use cases).
- 36 months: obligations for high-risk systems defined in Annex I (list of EU harmonization legislation) apply.
To increase efficiency and to establish an official point of contact for the public and other counterparts, each member state must designate one national supervisory authority within three months of the Act’s entry into force; this authority will also represent the country on the European Artificial Intelligence Board. Additional technical expertise will be provided by an advisory forum representing a balanced selection of stakeholders, including industry, start-ups, SMEs, civil society, and academia.
In addition, the Commission will establish a new European AI Office within the Commission to oversee codes of practice and supervise general-purpose AI models, cooperating with the European Artificial Intelligence Board and supported by a scientific panel of independent experts.
The Act classifies AI systems into four risk tiers:
- Unacceptable Risk: AI systems deemed to pose an unacceptable level of risk are prohibited entirely (for example, social scoring systems).
- High-Risk: AI systems with significant potential for harm (such as those used in critical infrastructure, HR recruitment, or law enforcement) are subject to strict compliance requirements. Annexes I and III to the Act list the high-risk AI systems and can be revised to keep pace with evolving AI use cases.
- Limited Risk: AI systems with specific transparency obligations (like chatbots or deepfakes, where users need to be aware they’re interacting with AI).
- Minimal Risk: The vast majority of AI systems (like spam filters or AI in video games) fall into this category with no specific regulations beyond existing laws.
In addition, the AI Act considers systemic risks that could arise from general-purpose AI models, including large generative AI models, if they are highly capable or widely used. For example, powerful models could cause serious accidents or be misused for far-reaching cyberattacks. Many individuals could be affected or discriminated against if a model propagates harmful biases across many applications.
If your AI system is classified as high-risk, you’ll need to adhere to requirements including:
- Robust Risk Management: Providers of high-risk AI systems must establish a comprehensive risk management system. This involves identifying, analyzing, mitigating, and monitoring risks associated with the AI system throughout its entire lifecycle.
- High-Quality Datasets: High-risk AI systems need to be trained on datasets that are:
- Relevant, representative, free of errors, and complete.
- Collected or created in ways that respect privacy and data protection laws.
- Designed to mitigate the risk of bias and discrimination in the AI system’s outputs (a minimal bias-check sketch follows this requirements list).
- Special categories of personal data may be processed for bias detection and correction only under strict access, retention, documentation, and misuse controls.
- Technical Documentation & Record-keeping: Detailed documentation of the AI system’s design, general logic, algorithms, capabilities, limitations, and intended purpose must be maintained. Records of the system’s operation must also be kept; logging capabilities should enable the identification of potentially high-risk situations and allow the AI system’s decisions to be traced and reviewed when needed.
- Human Oversight: Human oversight should be embedded to prevent or minimize the risks to fundamental rights. This includes mechanisms for humans to:
- Monitor the system’s output for any unexpected performance.
- Understand how decisions are made and remain aware of automation bias.
- Intervene, override, or stop the AI system when necessary.
- Transparency & User Information: Users must be clearly informed that they are interacting with an AI system and must be provided with instructions containing information about:
- The system’s capabilities and limitations.
- How to correctly use and interpret the system’s outputs, and the potential risks to health and safety arising from reasonably foreseeable misuse.
- How to contact the system provider.
- Accuracy, Robustness, & Cybersecurity: High-risk AI systems must meet high standards in these areas:
- Accuracy: Outputs should be reliable and accurate within intended limits.
- Robustness: Systems must be resilient to errors, inconsistencies, and attacks (e.g., model evasion and confidentiality attacks).
- Cybersecurity: Strong safeguards against security vulnerabilities are required.
- Conformity Assessment: Before being placed on the market and after every substantial modification, high-risk AI systems must undergo a conformity assessment to demonstrate compliance with the AI Act. For some systems, this may involve third-party certification.
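As referenced in the dataset requirements above, here is a minimal, illustrative bias check in Python. The metric (disparate impact ratio), the binary-output assumption, and the single protected attribute are all our simplifications; the Act prescribes no specific metric or threshold:

```python
# A minimal, illustrative bias check: compare positive-outcome rates across
# protected groups. Metric and data are assumptions, not EU AI Act mandates.
from collections import defaultdict


def selection_rates(predictions, groups):
    """Return the positive-outcome rate per protected group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}


def disparate_impact_ratio(predictions, groups):
    """Ratio of the lowest to the highest group selection rate (1.0 = parity)."""
    rates = selection_rates(predictions, groups)
    return min(rates.values()) / max(rates.values())


# Example: binary outputs of a hypothetical hiring screen across two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(f"disparate impact ratio: {disparate_impact_ratio(preds, grps):.2f}")
```

A ratio well below 1.0 would flag the system for closer review; a real assessment would apply several metrics across many slices of the data rather than a single number.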
The Act bans a very limited set of particularly harmful uses of AI that violate EU values and fundamental rights:
- Social scoring by public or private actors.
- Exploitation of the vulnerabilities of persons, and the use of subliminal techniques that materially distort a person’s behavior.
- Real-time remote biometric identification in publicly accessible spaces by law enforcement, subject to narrow exceptions.
- Biometric categorization of natural persons based on biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, or sexual orientation; the filtering of lawfully acquired biometric datasets by law enforcement is exempt.
- Individual predictive policing.
- Emotion recognition in the workplace and educational institutions, unless for medical or safety reasons (e.g., monitoring the tiredness levels of a pilot).
- Untargeted scraping of the Internet or CCTV for facial images to build up or expand databases.
Providers of high-risk AI systems have significant responsibilities to ensure the safety, transparency, and ethical use of their systems. Key obligations under the EU AI Act include:
- Compliance with AI Act Requirements: Ensure their high-risk AI systems fulfill the requirements set out in the Act, including those for data quality, risk management, human oversight, accuracy, robustness, cybersecurity, and transparency.
- Quality Management System: Implement and maintain a quality management system covering design and quality control, testing and validation, technical specifications, data, and risk management of the high-risk AI system to ensure ongoing compliance with requirements.
- Technical Documentation & Record-keeping: Prepare and maintain extensive technical documentation detailing the system’s design, function, risks, mitigation strategies, and relevant changes. Additionally, keep accurate records of the AI system’s operation for traceability and review (a minimal logging sketch follows this list).
- Conformity Assessment: Subject the high-risk AI system to a conformity assessment procedure, which can be performed as an internal control or by a notified third-party body depending on the risk level and the specific use of the AI system.
- CE Marking: Upon successful conformity assessment, affix the indelible CE marking to the AI system, indicating its compliance with the EU AI Act.
- Post-Market Monitoring: Implement a system for actively monitoring the high-risk AI system’s compliance and, where relevant, its interaction with other AI systems in use.
- Corrective Actions & Reporting: Take prompt corrective action if risks or incidents are identified, or disable the system as appropriate. Providers must also notify distributors of the high-risk AI systems concerned and report serious incidents and malfunctions to national authorities.
- Registration: Register certain high-risk AI systems in a publicly accessible EU database.
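The record-keeping and traceability obligations above can be grounded with a minimal logging sketch. The JSON schema and field names below are assumptions for illustration; the Act requires traceable records but does not mandate any particular format:

```python
# A minimal sketch of structured decision logging for traceability, using
# only the standard library. The log schema is an illustrative assumption.
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_decision_log")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def log_decision(system_id, input_ref, output, model_version, operator=None):
    """Emit one append-only, machine-readable record per AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,      # reference to inputs, not raw personal data
        "output": output,
        "model_version": model_version,
        "human_override": operator,  # set when a human intervenes
    }
    logger.info(json.dumps(record))
    return record


# Example: record an automated decision, then a later human override.
log_decision("cv-ranker", "application-8841", "shortlist", "v2.3.1")
log_decision("cv-ranker", "application-8841", "reject", "v2.3.1",
             operator="reviewer-17")
```

Storing a reference to the inputs rather than the raw personal data keeps the log useful for audits while limiting data-protection exposure.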
Who Conducts a Fundamental Rights Impact Assessment (FRIA)?
The responsibility for conducting a FRIA falls on different parties depending on the context:
- Deployers of High-Risk AI Systems: Under the EU AI Act, deployers of high-risk AI systems are primarily responsible for conducting a FRIA. This assessment should take place before the deployment of the high-risk AI system. According to the Act, deployers are either bodies governed by public law or private entities providing public services. The definition also includes deployers of high-risk AI systems referred to in points 5(b) and (c) of Annex III.
- Users of High-Risk AI Systems: Organizations that intend to use high-risk AI systems within the EU may also need to conduct a FRIA. This is particularly relevant if the system is being adapted to a new use case or context.
Engaging an EU AI Act assurance service offers several advantages:
- Compliance Expertise: EU AI Act Assurance services bring specialized expertise in understanding the complex regulatory requirements of the Act. These experts can help organizations navigate the risk classification process, develop appropriate compliance strategies, and prepare for conformity assessments.
- Risk Reduction: Assurance services aid in identifying and mitigating risks inherent in the development and deployment of AI systems. This includes risks related to fundamental rights, data bias, and system safety. Proactive risk management can prevent costly non-compliance penalties and reputational damage.
- Streamlining Conformity Assessments: Experienced service providers can guide organizations through the conformity assessment processes, ensuring the necessary technical documentation, testing, and risk management systems are in place. This helps streamline the path toward achieving CE marking and market access.
- Enhancing Governance & Building Trust: Engaging with an assurance service demonstrates a commitment to responsible AI development and can enhance trust with users, investors, and regulators. It signals that the organization prioritizes ethical principles and compliance.
- Competitive Advantage: In the increasingly regulated AI landscape, companies that proactively demonstrate compliance with the EU AI Act will have a competitive edge. Early adopters of assurance services are seen as leaders in trustworthy and responsible AI.
These services are particularly relevant for:
- Providers of high-risk AI systems: Companies developing or deploying AI systems classified as high-risk are prime candidates, as compliance is mandatory.
- Organizations aiming for a leadership position: Companies looking to establish themselves as frontrunners in ethical and responsible AI development can benefit from demonstrating their adherence to the AI Act.
- Companies with complex AI systems: Organizations using AI systems with multi-faceted use cases or those processing sensitive data will find value in a comprehensive compliance and risk assessment.
WE'D LOVE TO HELP YOU
Get ready for the EU AI Act with us!
Let's get started with your AI Bias Audit!
FURTHER READING
OECD AI Principles: Guardrails to Responsible AI Adoption
Amid the transformative wave of artificial intelligence (AI) sweeping the digital ecosystem, the Organisation for Economic Co-operation and Development...
A Technical Report Providing Guidelines for Testing AI-Based Systems
Today, software permeates every aspect of life, and artificial intelligence (AI) is becoming increasingly integral to business operations. As AI...
OWASP Top 10 Vulnerabilities for Large Language Models (LLM): Impact and Mitigation
Large Language Models (LLMs) have garnered significant attention since the mass-market introduction of pre-trained chatbots in late 2022. Companies keen...