AI Regulation Questionnaire

Into which risk category does your AI system fall?

Answer the following questions for a specific AI system in your organisation and quickly determine whether it falls into the high-risk, limited-risk, or minimal-risk category, as outlined in the EU AI Act.

This tool provides an initial estimate of your system’s likely regulatory status and flags where further attention may be needed, so that you can explore compliance measures and align with the EU AI Act’s requirements for safety, transparency, and accountability.


Actors involved in AI

[Article 3(3-7) – EU AI Act]

Question 1/11

Does any of the following apply to you?

Mark only one box:

  • Provider placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the EU

    ‘Provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model, or that has an AI system or a general-purpose AI model developed, and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge.

    ‘AI system’ means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
    [Article 3(1) – EU AI Act]

    ISO/IEC 22989 defines an AI system as an “engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives”; in other words, a system whose outputs require a form of intelligence to produce.

  • Importer and/or distributor of AI systems

    ‘Importer’ means a natural or legal person located or established in the Union that places on the market an AI system that bears the name or trademark of a natural or legal person established in a third country.

    ‘Distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.

  • Authorised representative of providers not established in the EU

    ‘Authorised representative’ means a natural or legal person located or established in the Union who has received and accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform and carry out on its behalf the obligations and procedures established by this Regulation.

  • Deployer of AI systems that is located within the EU or has its place of establishment there

    ‘Deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal, non-professional activity.


Product manufacturers

Question 2/11

Does your system fall into any of the following categories?

Mark only one box:
  • Equipment and protective systems intended for use in potentially explosive atmospheres
  • Recreational craft and personal watercraft
  • In vitro diagnostic medical devices
  • Lifts and safety components for lifts
  • Appliances burning gaseous fuels
  • Personal protective equipment
  • Cableway equipment
  • Pressure equipment
  • Radio equipment
  • Medical devices
  • Safety of toys
  • Machinery

Placing of the AI system on the market/Putting the AI system into service

Question 3/11

Does the AI system in your product meet any of these conditions?

Mark only one box:

  • Placed on the market together with the product under my name or trademark

    ‘Placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union market. ‘Making available on the market’ means the supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.

  • Put into service under my name or trademark after the product has been placed on the market

    ‘Putting into service’ means the supply of an AI system for first use directly to the deployer, or for own use, in the Union for its intended purpose.



Excluded systems

Question 4/11

Does your system fall within any of the following categories?

Mark only one box:
  • Systems placed on the market, put into service or used exclusively for military, defence or national security purposes
  • Systems used by public authorities in a third country, or by international organisations, within the framework of international cooperation or agreements with the Union or Member States
  • Systems or models, including their output, specifically developed and put into service for the sole purpose of scientific research and development
  • Research, testing or development activity regarding AI systems or models prior to their being placed on the market or put into service (excluding testing in real-world conditions)
  • AI systems released under free and open-source licences, unless they fall under the prohibited practices, high-risk categories or specific transparency obligations of the Act
  • Systems used by natural persons in the course of a purely personal, non-professional activity


Prohibited AI Practices (1/2)

Question 5/11

Does your AI system perform any of the following actions?

Mark only one box:
  • Exploits any of the vulnerabilities of a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with the objective, or the effect, of materially distorting their behaviour
  • Categorises natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation
  • Deploys subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective, or the effect, of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not otherwise have taken
  • Performs social scoring of natural persons or groups of persons that leads to the detrimental or unfavourable treatment of certain natural persons or groups of persons (a) in social contexts that are unrelated to the contexts in which the data was originally generated or collected, and/or (b) that is unjustified or disproportionate to their social behaviour or its gravity
  • Creates or expands facial recognition databases using untargeted image scraping
  • Assesses or predicts the risk of a natural person committing a criminal offence, based solely on the profiling of a natural person or on assessing their personality traits and characteristics (this does not apply to systems used to support the human assessment of a person’s involvement in a criminal activity, where that assessment is already based on objective and verifiable facts directly linked to the criminal activity)
  • Infers the emotions of a natural person in the areas of the workplace and education institutions


Prohibited AI Practices (2/2)

Question 6/11

Is your system a ‘real-time’ remote biometric identification system used in publicly accessible spaces for the purpose of law enforcement?

Mark only one box:

‘Remote biometric identification system’ means an AI system for the purpose of identifying natural persons, without their active involvement, typically at a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference database.

‘Real-time remote biometric identification system’ means a remote biometric identification system whereby the capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising not only instant identification but also limited short delays in order to avoid circumvention.

‘Law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including safeguarding against and preventing threats to public security.



Real-time remote biometric identification system

Question 7/11

Is your system used for one of the following purposes?

Mark only one box:
  • The targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of human beings as well as search for missing persons;
  • The prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or a genuine and present or genuine and foreseeable threat of a terrorist attack;
  • The localisation or identification of a person suspected of having committed a criminal offence, for the purposes of conducting a criminal investigation, prosecution or executing a criminal penalty for offences referred to in Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least four years.



High-risk AI systems (Annex I)

Question 8/11

Does your system fall into one of the following high-risk categories?

Mark only one box:
  • Machinery
  • Safety of toys
  • Recreational craft and personal watercraft
  • Lifts and safety components for lifts
  • Equipment and protective systems intended for use in potentially explosive atmospheres
  • Radio equipment
  • Pressure equipment
  • Cableway installations
  • Personal protective equipment
  • Appliances burning gaseous fuels
  • Medical devices
  • In vitro diagnostic medical devices
  • Interoperability of the rail system
  • Motor vehicles and their trailers
  • 2 or 3 wheel vehicles and quadricycles
  • Agricultural and forestry vehicles
  • Civil aviation security
  • Marine equipment
  • Civil aviation



High-risk AI systems (Annex III)

Question 9/11

Does your system fall into one of the following high-risk categories?

Mark only one box:
  • Biometrics: remote biometric identification, biometric categorisation based on sensitive or protected attributes, and emotion recognition
  • Critical infrastructure: safety components in the management and operation of critical digital infrastructure, road traffic, or the supply of water, gas, heating or electricity
  • Education and vocational training
  • Employment, workers’ management and access to self-employment
  • Access to and enjoyment of essential private services and essential public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Administration of justice and democratic processes


Profiling of natural persons

[Article 4(4) – Regulation (EU) 2016/679 (GDPR)]

Question 10/11

Does your system perform profiling of natural persons?

Mark only one box:

‘Profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.
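To make this definition concrete, the sketch below shows the kind of automated processing it covers: a model evaluating a personal aspect of a natural person (predicted performance at work) from personal data. This is a minimal, hypothetical Python example; the data fields, labels and model choice are our own illustrative assumptions, not drawn from the Act or from any real system.

    # Hypothetical illustration of profiling: automated processing of personal
    # data to evaluate a personal aspect of a natural person (here, predicted
    # performance at work). All fields, data and labels are invented.

    from sklearn.linear_model import LogisticRegression

    # Each row describes one person:
    # [years_of_experience, sick_days_last_year, commute_minutes]
    X_train = [[2, 5, 30], [10, 1, 15], [4, 12, 60], [7, 3, 20]]
    y_train = [0, 1, 0, 1]  # 1 = rated a "high performer" in past reviews

    model = LogisticRegression().fit(X_train, y_train)

    # The automated evaluation of a new person's performance at work:
    # this prediction step is what the definition above calls profiling.
    print(model.predict([[5, 4, 25]]))

If your system performs a step like the final prediction above on personal data, Question 10 would be answered “yes”, which keeps an Annex III system in the high-risk category regardless of the answer to Question 11.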



Systems that do not pose a significant risk of harm to the health, safety or fundamental rights of natural persons

Question 11/11

Please select which of the following applies to your system.

Mark only one box:
  • The AI system is intended to perform a narrow procedural task
  • The AI system is intended to improve the result of a previously completed human activity
  • The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns, and is not meant to replace or influence a previously completed human assessment without proper human review
  • The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III

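Before the results, it can help to see the questionnaire’s routing in one place. The Python sketch below is our reading of the decision flow implied by Questions 1–11 together with Articles 2, 5 and 6 of the Act; the function and field names are our own assumptions, and the routing of the interactive tool may differ in detail.

    # A minimal sketch of the decision flow this questionnaire appears to
    # follow. Routing and names are assumptions; the EU AI Act (Articles 2,
    # 5 and 6, and Annexes I-III) remains the authoritative source.

    from dataclasses import dataclass

    @dataclass
    class Answers:
        covered_actor: bool               # Q1: provider, importer, deployer, etc.
        annex_i_product: bool             # Q2/Q8: product regulated in Annex I
        placed_or_put_into_service: bool  # Q3: under your own name or trademark
        excluded: bool                    # Q4: outside the Act's scope (Article 2)
        prohibited_practice: bool         # Q5: listed prohibited practice (Article 5)
        realtime_biometric_id: bool       # Q6: real-time remote biometric ID for law enforcement
        biometric_exception: bool         # Q7: one of the narrow permitted purposes
        annex_iii_use_case: bool          # Q9: high-risk use case in Annex III
        profiling: bool                   # Q10: profiling of natural persons
        no_significant_risk: bool         # Q11: an Article 6(3) condition applies

    def classify(a: Answers) -> str:
        if not a.covered_actor or a.excluded:
            return "Excluded from Regulation"
        if a.prohibited_practice:
            return "Prohibited"
        if a.realtime_biometric_id and not a.biometric_exception:
            return "Prohibited"
        if a.annex_i_product and a.placed_or_put_into_service:
            return "High-risk"
        if a.annex_iii_use_case:
            # Profiling keeps a system high-risk; otherwise an Article 6(3)
            # condition can move it out of the high-risk category.
            if a.profiling or not a.no_significant_risk:
                return "High-risk"
        return "Limited or minimal risk"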

Your results:

Excluded from Regulation

Your system is likely excluded from the scope of the EU AI Act.

However, if you want to make your AI system more efficient & trustworthy, our AI Testing & Audit solution is for you.


Your results:

High-risk systems

Your system probably falls into the high-risk category under the EU AI Act.

Our EU AI Act Assurance service can help you comply with the requirements for high-risk AI systems in a timely and cost-efficient manner. It also serves as a comprehensive technical guide for fostering responsible AI deployment and usage, while maximizing the business value of your AI systems.


Your results:

Limited or minimal risk systems

Your system likely falls into the limited or minimal risk category under the EU AI Act, though some obligations may still apply.

Our EU AI Act Assurance service can help you identify and mitigate any non-compliance issues and serves as a comprehensive technical guide for fostering responsible AI deployment and usage, while maximizing the business value of your AI systems.


Your results:

Prohibited

Your system is likely prohibited under the EU AI Act.

What’s Next?

Explore More: Have a look at our solutions on Trustworthy AI that extend from AI Testing & Audit to AI Technology Due Diligence and make your AI technology trusted.

Stay informed: Dive into our Knowledge Hub, where you can find our latest articles, guides and useful material on Trustworthy AI and Software Quality & Security.

Join the Community: Connect with us on LinkedIn, X and YouTube and never miss our latest blog posts, upcoming webinars or conference talks.

Contact Us: Have any questions or need further assistance? Reach out to us at contact@code4thought.eu. We’re here to help.