
Kepler Vision Technologies –
AI Bias Audit for Kepler’s Night Nurse Solution

Complexity of the project
code4thought had to deploy PyThia in a way tailored to Kepler’s needs. Due to strict data privacy regulations, Kepler’s data could not leave their premises, so PyThia was deployed locally, on-site at Kepler.
The Bias Testing evaluation was more straightforward: the data annotations were not subject to the same restrictions, so that part of the work could run on code4thought’s premises.
code4thought’s proprietary technology, PyThia, was used to evaluate the trustworthiness of Kepler’s Night Nurse system by discovering possible model biases and explaining the model’s predictions.
To this end, PyThia was deployed with:
  • A Bias Testing mechanism, whose purpose was to evaluate data for unwanted patterns using the Disparate Impact Ratio and Conditional Demographic Disparity metrics (sketched after this list). By applying those metrics to the input and output data of the AI system, PyThia verified that the model’s decisions were free from significant bias across the demographic groups examined.
  • An Explainability mechanism, based on the M.A.SHAP algorithm, which provides a model-agnostic understanding of how an AI system produced a given result (see the example after this list). By examining the input and output data, PyThia identified the features that contributed most to the model’s decisions, giving Kepler’s developers insight into how their model operates. This helped foster trust and reassure users about the safety and equity of the AI system.
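For context, the two fairness metrics named above can be computed directly from a model’s inputs and outputs. Below is a minimal, illustrative sketch in Python using pandas; the column names, group labels, and the 0.8 “four-fifths” threshold mentioned in the comments are assumptions made for the example, not details of how PyThia implements these metrics.

    # Minimal sketch of the two fairness metrics, for illustration only.
    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                               unprivileged, privileged) -> float:
        """Ratio of positive-outcome rates: unprivileged group / privileged group.
        Values near 1.0 indicate parity; values below ~0.8 are commonly treated
        as a warning sign (the "four-fifths rule")."""
        rate_unpriv = df.loc[df[group_col] == unprivileged, outcome_col].mean()
        rate_priv = df.loc[df[group_col] == privileged, outcome_col].mean()
        return rate_unpriv / rate_priv

    def conditional_demographic_disparity(df: pd.DataFrame, group_col: str,
                                          outcome_col: str, stratum_col: str,
                                          disadvantaged) -> float:
        """Demographic disparity for the disadvantaged group, averaged over strata.
        Within each stratum, DD = (share of negative outcomes belonging to the
        group) - (share of positive outcomes belonging to the group); the result
        is the stratum-size-weighted average, with values near 0 indicating parity."""
        disparities, weights = [], []
        for _, stratum in df.groupby(stratum_col):
            pos = stratum[stratum[outcome_col] == 1]
            neg = stratum[stratum[outcome_col] == 0]
            if len(pos) == 0 or len(neg) == 0:
                continue  # disparity is undefined if a stratum lacks one outcome
            dd = ((neg[group_col] == disadvantaged).mean()
                  - (pos[group_col] == disadvantaged).mean())
            disparities.append(dd)
            weights.append(len(stratum))
        return sum(d * w for d, w in zip(disparities, weights)) / sum(weights)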
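Similarly, the general idea of model-agnostic explanation, attributing each prediction to the input features that drove it, can be illustrated with the open-source shap package. This is only a generic example of the technique; it is not M.A.SHAP, and the toy model and data are placeholders.

    # Generic model-agnostic SHAP example (not M.A.SHAP).
    import shap
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    # Stand-in black-box model: any callable returning prediction scores works.
    X, y = make_classification(n_samples=500, n_features=6, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # KernelExplainer treats the model as a black box and estimates Shapley
    # values by perturbing inputs against a small background sample.
    background = shap.sample(X, 50, random_state=0)
    explainer = shap.KernelExplainer(model.predict_proba, background)
    shap_values = explainer.shap_values(X[:10])

    # Each attribution shows how much a feature pushed a prediction up or down,
    # which is the kind of per-decision insight described above.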
This exercise led to the following outcomes:
  • code4thought’s Bias Testing and Explainability mechanisms were validated and operationalized as a means of providing bias audits of, and transparency into, the KNN model’s decisions, respectively.
Based on the above-mentioned actions, Kepler is now able to ensure that the KNN system has built-in safeguards and oversight, and to assess its risks in a structured way.
“As the gravity of decisions made by AI systems increases, so does our need to ensure they operate fairly and transparently. Nowhere is this needed more than in the medical device space, where the judgments of AI powered tools can literally be a matter of life and death. The EU Commission’s proposal for AI systems Regulation makes it clear that more can be done by companies using Deep Learning algorithms with high complexity and opacity to build confidence in AI systems. By working with code4thought, Kepler Vision is confirming its dedication to improving the lives of all patients its technology is applied to, regardless of individual differences.”
– Dr. Harro Stokman, CEO of Kepler Vision Technologies
