
AI Bias Audit for Kepler’s
Night Nurse Solution

Complexity of the project
Code4Thought had to tailor PyThia's deployment to Kepler's needs. Due to strict data-privacy regulations, Kepler's data could not leave the company's premises, so PyThia was deployed on-site.
The Bias Testing evaluation was more straightforward: the data annotations were not subject to the same restrictions, so that part of the work could run on Code4Thought's premises.
Code4Thought's proprietary technology, PyThia, was used to evaluate the trustworthiness of Kepler's Night Nurse system by discovering possible model biases and explaining the model's predictions.
To this end, PyThia was deployed containing:
  • A Bias Testing mechanism, whose purpose was to evaluate data for unwanted patterns using the Disparate Impact Ratio and Conditional Demographic Disparity metrics. By applying those metrics to the AI system's data (input and output), PyThia found that the model's decisions were free from significant bias for the given demographic groups.
  • An Explainability mechanism, using the M.A.SHAP. algorithm, which provides an understanding of how an AI system produced a given result in a model-agnostic way. By examining the data (input and output), PyThia identified the features that contributed most to the model's decisions, giving Kepler's developers insight into how their model operates. This helped foster trust and reassure users about the safety and equity of the AI system.
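The two bias metrics named above have standard closed forms. A minimal sketch for a binary favorable outcome and a single protected group (function names, defaults, and the 0.8 threshold are common conventions used here for illustration, not PyThia's actual API):

```python
# Illustrative implementations of the two bias metrics; not PyThia code.

def disparate_impact_ratio(outcomes, groups, favorable=1, protected="B"):
    """Favorable-outcome rate of the protected group divided by that of
    everyone else. The 'four-fifths' rule of thumb flags ratios below 0.8."""
    prot = [o for o, g in zip(outcomes, groups) if g == protected]
    rest = [o for o, g in zip(outcomes, groups) if g != protected]
    rate = lambda xs: sum(1 for o in xs if o == favorable) / len(xs)
    return rate(prot) / rate(rest)

def conditional_demographic_disparity(outcomes, groups, strata,
                                      favorable=1, protected="B"):
    """Demographic disparity (the protected group's share of unfavorable
    outcomes minus its share of favorable ones), averaged over the strata
    of a conditioning feature, weighted by stratum size."""
    total = len(outcomes)
    cdd = 0.0
    for s in set(strata):
        idx = [i for i, st in enumerate(strata) if st == s]
        acc = [i for i in idx if outcomes[i] == favorable]
        rej = [i for i in idx if outcomes[i] != favorable]
        if not acc or not rej:
            continue  # disparity is undefined in one-sided strata
        share = lambda ids: sum(1 for i in ids if groups[i] == protected) / len(ids)
        cdd += (len(idx) / total) * (share(rej) - share(acc))
    return cdd

outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact_ratio(outcomes, groups))  # ≈ 0.33, well below the 0.8 rule of thumb
```

Conditioning on a stratifying feature is what distinguishes the second metric: a disparity that looks alarming in aggregate can vanish (or reverse) once a legitimate explanatory variable is held fixed.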
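M.A.SHAP. itself is proprietary, but the model-agnostic idea behind SHAP-style attributions can be sketched with an exact Shapley-value computation, where "absent" features are replaced by a baseline value. This brute-force version (a hypothetical helper, not Kepler's or Code4Thought's code) is exponential in the number of features and only practical for a handful of them; practical tools approximate the same quantity by sampling:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for a black-box model f at instance x,
    treating 'absent' features as replaced by a baseline value."""
    n = len(x)

    def v(S):
        # Model output with features in coalition S taken from x,
        # the rest taken from the baseline.
        return f([x[i] if i in S else baseline[i] for i in range(n)])

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (v(set(S) | {i}) - v(set(S)))
        phis.append(phi)
    return phis

# For an additive model, each feature's attribution is its own contribution:
print(shapley_values(sum, [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))  # ≈ [1.0, 2.0, 3.0]
```

A useful sanity check is the efficiency property: the attributions always sum to `f(x) - f(baseline)`, so ranking features by attribution tells developers which inputs drove a particular prediction.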
This exercise produced the following outcome:
  • Code4Thought's Bias Testing and explainability mechanisms were validated and operationalized as a means of providing bias audits of, and transparency into, the Kepler Night Nurse (KNN) model's decisions.
Based on these actions, Kepler is now able to ensure that the KNN system has built-in safeguards and oversight, and to assess its risks in a structured way.
“As the gravity of decisions made by AI systems increases, so does our need to ensure they operate fairly and transparently. Nowhere is this needed more than in the medical device space, where the judgments of AI powered tools can literally be a matter of life and death. The EU Commission’s proposal for AI systems Regulation makes it clear that more can be done by companies using Deep Learning algorithms with high complexity and opacity to build confidence in AI systems. By working with Code4Thought, Kepler Vision is confirming its dedication to improving the lives of all patients its technology is applied to, regardless of individual differences.”
– Dr. Harro Stokman, CEO of Kepler Vision Technologies

FURTHER READING

Tips on how to secure your ML model

Figuring out why your ML model is consistently less accurate on certain classes than on others can help you increase not only its overall accuracy but also its adversarial robustness.

In Algorithms We Need To Trust? Not There Yet

Artificial Intelligence is impacting our lives for good, so we need to take a closer look

Fix your recommender system to fix your profitability

How bias in Recommender Systems affects e-commerce, society and eventually your profits