Code4Thought


Trustworthy AI:
case studies

High-Tech Company - Accountability Evaluation and Explainability Analysis for Client’s Network Monitoring Model

The Client wanted to evaluate and understand the decisions made by their newly built AI model, which supported the Network Monitoring component of their Analytics platform. Code4Thought was asked to test and explain the Client's AI model in order to fulfill the following goals:
  • Determine whether the Client follows best practices regarding:
    • the fairness and transparency of the AI model behind their service, and
    • their ability to hold the model accountable.
  • Deploy Code4Thought's AI explanation mechanism for the decisions made by the Client's AI model (i.e., AI for AI).
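The details of Code4Thought's explanation mechanism are not described here, but the general idea of explaining a black-box model can be sketched with a standard model-agnostic technique, permutation importance. The toy model and data below are stand-ins invented for illustration, not the Client's actual system:

```python
import numpy as np

# Illustrative sketch only: permutation importance, a common model-agnostic
# explanation technique. The "network monitoring" model and data below are
# synthetic stand-ins, not the Client's system or Code4Thought's mechanism.

rng = np.random.default_rng(0)

def black_box_model(X):
    # Toy anomaly detector: flags traffic when a weighted sum of
    # features exceeds a threshold. We only query it, never inspect it.
    return (0.8 * X[:, 0] + 0.2 * X[:, 1] > 1.0).astype(int)

X = rng.normal(1.0, 0.5, size=(1000, 3))  # 3 features; feature 2 is unused
y = black_box_model(X)

def accuracy(pred, truth):
    return (pred == truth).mean()

base = accuracy(black_box_model(X), y)  # 1.0 by construction

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-output link
    importances.append(base - accuracy(black_box_model(Xp), y))

# The accuracy drop per feature reveals what the black box relies on:
# feature 0 should dominate, and the unused feature 2 should score 0.
print([round(v, 3) for v in importances])
```

Because the technique only queries the model's inputs and outputs, it applies even when the model's internals are unavailable, which is the typical situation in a third-party audit.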

AI Bias Audit for Kepler’s Night Nurse Solution

Built on machine-learning-powered human activity recognition technology, the Kepler Night Nurse (KNN) solution immediately alerts staff whenever it detects that a patient in a care facility has fallen or is in physical distress, so patients receive attention the moment they need it. This reduces the need to check on patients constantly and eliminates the time wasted on the false alarms that other monitoring solutions produce.
As the world's first computer-vision-based fall detector to be registered as a medical device, the Kepler Night Nurse's deep learning algorithm requires close scrutiny to ensure that it produces fair and accurate results for every patient it monitors. Code4Thought's bias testing analysis is based on the ISO/IEC 29119-11 standard, guidelines for testing black-box AI-based systems to ensure accuracy and precision.
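The methodology of the audit itself is not detailed here, but the core of a bias test on a black-box detector can be sketched as comparing error rates across patient subgroups. The data, group definitions, and tolerance below are synthetic assumptions for illustration only:

```python
import numpy as np

# Illustrative bias check, not Code4Thought's actual methodology:
# compare a fall detector's miss rate (false-negative rate, FNR)
# across patient subgroups. All data below is synthetic.

rng = np.random.default_rng(1)

def false_negative_rate(y_true, y_pred):
    falls = y_true == 1
    return ((y_pred == 0) & falls).sum() / falls.sum()

n = 5000
group = rng.integers(0, 2, n)        # two hypothetical patient subgroups
y_true = rng.binomial(1, 0.1, n)     # 10% of monitored frames contain a fall
# Simulated detector: misses 5% of falls in group 0 but 15% in group 1.
miss_p = np.where(group == 0, 0.05, 0.15)
y_pred = np.where((y_true == 1) & (rng.random(n) < miss_p), 0, y_true)

fnr = {g: false_negative_rate(y_true[group == g], y_pred[group == g])
       for g in (0, 1)}
gap = abs(fnr[0] - fnr[1])
print(f"FNR by group: {fnr}, gap: {gap:.3f}")

# A gap above a chosen tolerance (e.g., 0.05) would flag the detector
# for closer inspection: one subgroup's falls are missed more often.
```

For a safety-critical device, the false-negative rate is the natural metric to compare, since a missed fall is the failure mode that harms the patient; an audit in practice would examine several such metrics and subgroup definitions.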