
Trustworthy AI: Solutions

One thing we know best at code4thought is evaluating the quality of large-scale traditional software systems. With our AI Testing & Audit solution we bring all of this expertise (more than 45 years combined, to be honest) to the domain of Artificial Intelligence (AI), helping organizations ensure their AI-based systems can be trusted.
Our proprietary platform, PyThia, enables the analysis of any type of data and AI model, based on the ISO 29119-11 international standard for testing AI-based systems.
No due diligence of AI companies and systems can be complete and reliable without an AI Technology Due Diligence. A deep, detailed understanding of the AI model or algorithm is necessary to identify the hidden risks and opportunities that stem from the AI technology itself, and this is exactly what our AI Technology Due Diligence solution delivers. Using our AI-testing platform PyThia, we can analyse any type of AI-based system and deliver a thorough risk analysis and a practical improvement roadmap.
The New York City Bias Audit Law (Local Law 144) is new legislation clarifying the requirements for the use of automated employment decision tools (AEDTs) within New York City. Our structured auditing processes for bias testing have been adapted to these legal requirements. Combined with our expertise in comprehensive testing and auditing of AI systems, they are your ideal solution not only for compliance, but also as a first step towards a reliable and trustworthy AI system.
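
At its core, a bias audit of this kind compares selection rates across demographic categories. The snippet below is a minimal sketch of an impact-ratio calculation, assuming a simple tabular dataset with hypothetical column names (category, selected); it illustrates the general idea only and is not the PyThia audit methodology.

```python
# Minimal sketch of an impact-ratio calculation for an automated employment
# decision tool (AEDT). Column names and data are hypothetical placeholders.
import pandas as pd

# Hypothetical audit data: one row per candidate scored by the tool.
data = pd.DataFrame({
    "category": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "selected": [1, 0, 1, 1, 1, 1, 0, 0, 1],
})

# Selection rate per category: share of candidates the tool selected.
selection_rates = data.groupby("category")["selected"].mean()

# Impact ratio: each category's selection rate divided by the rate of the
# most-selected category; values well below 1.0 flag potential bias.
impact_ratios = selection_rates / selection_rates.max()

print(impact_ratios.round(2))
```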
Despite rising investments in artificial intelligence (AI) by today’s organizations, trust in the insights delivered by AI can be a blocking factor for its further adoption, especially among C-level executives. Several studies indicate that more than 60% of executives express discomfort with, and mistrust of, the results produced by AI-based systems. We are ready to help and advise on the best practices for setting up the processes and infrastructure that will ensure your AI-based systems are Responsible, Reliable and can be Trusted.

FURTHER READING

Tips on how to secure your ML model

Figuring out why your ML model might be consistently less accurate on certain classes than on others can help you increase not only its overall accuracy but also its adversarial robustness.
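
As a quick illustration of the kind of per-class analysis this article refers to, here is a minimal sketch that computes class-wise accuracy from a confusion matrix; the labels and predictions are hypothetical placeholders, and the code is not taken from the article itself.

```python
# Hedged sketch: per-class recall (class-wise accuracy) to spot classes the
# model handles consistently worse than others. Data is a placeholder.
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 0])
y_pred = np.array([0, 1, 1, 1, 0, 2, 2, 1, 2, 0])

cm = confusion_matrix(y_true, y_pred)
# Diagonal = correct predictions per class; row sums = true instances per class.
per_class_accuracy = cm.diagonal() / cm.sum(axis=1)

for cls, acc in enumerate(per_class_accuracy):
    print(f"class {cls}: accuracy {acc:.2f}")
```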

In Algorithms We Need To Trust? Not There Yet

Artificial Intelligence is impacting our lives for good, so we need to take a closer look

Fix your recommender system to fix your profitability

How bias in Recommender Systems affects e-commerce, society and eventually your profits