Client wanted to evaluate and understand the decisions behind their newly built AI model that was supporting the Network Monitoring service component of their Analytics platform. Code4Thought was asked to test and explain Client’s AI model in order to fulfill the following goals:
- Determine whether Client follows best practices regarding:
  - The fairness and transparency of the AI model behind their service, and
  - Their ability to hold their model accountable.
- Deploy Code4Thought’s AI explanation mechanism for the decisions made by Client’s AI model (i.e., AI for AI).
Built on machine-learning-powered human activity recognition technology, the Kepler Night Nurse (KNN) solution immediately alerts staff whenever it detects that patients in care facilities have fallen or are experiencing physical distress, so they receive the attention they need the moment they need it. This reduces the need to constantly check on patients and removes the wasted time caused by the false alarms that other monitoring solutions produce.
As the world’s first computer vision-based fall detector to be registered as a medical device, Kepler Night Nurse’s deep learning algorithm requires close scrutiny to ensure that it produces fair and accurate results for any patient it monitors. Code4Thought’s Bias Testing analysis is based on the ISO/IEC TR 29119-11 standard: guidelines for testing black-box AI-based systems to ensure accuracy and precision.
Tips on how to secure your ML model
Figuring out why your ML model is consistently less accurate on certain classes than on others can help you increase not only its overall accuracy but also its adversarial robustness.
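A first step in spotting such weak classes is simply breaking accuracy down per class rather than looking at a single aggregate number. The sketch below illustrates this with a hypothetical helper and made-up labels (neither comes from the original text); it assumes labels are plain hashable values.

```python
from collections import defaultdict

def per_class_accuracy(y_true, y_pred):
    """Return accuracy per true class, to reveal classes where the model underperforms."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred in zip(y_true, y_pred):
        total[truth] += 1
        if truth == pred:
            correct[truth] += 1
    return {cls: correct[cls] / total[cls] for cls in total}

# Hypothetical example: class "b" is noticeably weaker than class "a".
y_true = ["a", "a", "a", "a", "b", "b", "b", "b"]
y_pred = ["a", "a", "a", "b", "b", "a", "a", "a"]
print(per_class_accuracy(y_true, y_pred))  # {'a': 0.75, 'b': 0.25}
```

A large gap between classes, as in this toy output, is the kind of signal worth investigating before worrying about aggregate metrics.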
In Algorithms We Need To Trust? Not There Yet
Artificial Intelligence is impacting our lives for good, so we need to take a closer look
Fix your recommender system to fix your profitability
How bias in recommender systems affects e-commerce, society and, ultimately, your profits