On the road to ensuring good AI

Yiannis Kanellopoulos
CEO and Founder | code4thought
Attending an event such as the AI Quality Summit 2022 in Frankfurt proved to be an exciting mental exercise. Contributing to a dialogue with a whole spectrum of experts and decision makers (e.g., regulators and policy makers, academics, corporations and startups) on what we need to do in order to build AI technology that is of high quality and can be trusted is a thought-provoking experience.
In the aftermath of this event, the first thing I realized is that terms such as AI Quality, AI Assurance, Ethical AI, Trustworthy (or Trusted) AI, Responsible AI and Dependable AI are being used extensively by various stakeholders, carrying similar, though not identical, semantics. I am not going to argue which one is the most appropriate, but all of them share the common set of properties below, prescribing what a good (or trustworthy) AI system needs to exhibit (do note that the provided definitions reflect the author’s interpretation):
  • Performance: The system’s ability to deliver its promised solution to a given problem. One may say that accuracy is a good indicator of that, but it is not the only one if you ask me.
  • Fairness: In other words, how we ensure that an AI system has no prejudice toward, or preference for, a certain characteristic or feature in its dataset.
  • Transparency: Which, to us, is the degree to which we can understand how an AI system came up with a given result.
  • Safety & Security (aka Robustness): Which refers to the system’s ability to withstand malicious attacks while retaining its safety and security to a high degree.
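To make the first two properties a little more concrete, here is a minimal, purely illustrative sketch of how one might quantify them for a binary classifier. The toy data, the choice of accuracy as the performance metric, and demographic parity as the fairness metric are all assumptions of mine; they are just two of many possible measures, not a prescribed audit method.

```python
# Illustrative sketch: quantifying Performance (accuracy) and Fairness
# (demographic parity gap) on hypothetical binary-classifier output.

def accuracy(y_true, y_pred):
    """Performance: fraction of predictions that match the ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Fairness (one definition of many): largest difference in the
    positive-prediction rate between any two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy labels, predictions, and a protected attribute ("A"/"B")
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(accuracy(y_true, y_pred))             # 0.625
print(demographic_parity_gap(y_pred, groups))  # 0.25
```

A gap of 0 would mean both groups receive positive predictions at the same rate; in practice an auditor would pick fairness definitions appropriate to the system’s context, since different definitions can conflict.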
Following that, as regulations take shape and policies and standards are formulated on top of them, it is becoming apparent that we need technology to operationalise those standards and facilitate the auditing of AI systems. What still remains an open question is how to conduct such an AI audit, what should be in its scope and how to measure it. That may deserve a separate blog post, but for now it is important to note that a good AI audit needs to:
  • Be relevant both to the engineering team building the system and to the management of the organization using it
  • Provide insights that will actually help the teams improve their system(s), not just tick the box (of an enforced policy)
  • Translate any technical insights into a language that can be understood by the management of the organization whose system is being audited.
To conclude, I’d say that it is crucial for all of us to realize that:
  • Regulators and policy makers should harmonize policies and standards across EU countries (incl. the UK)
  • Citizens, and individuals in general, must understand their rights and how to protect them from the use of AI technology
  • Startups like ours should focus on the operationalisation of the regulations and standards currently being discussed
  • We should keep the lines of communication open and continue the conversation.