
IT Company – AI audit for
Network Monitoring Model

The project’s complexity was rated as medium.
Code4Thought was asked to evaluate Client’s AI model, which allows a corporation’s network administrators to monitor and identify security threats by flagging inconsistent or suspicious activity from employees active on the corporate network.
The complexity of the engagement was largely due to the complexity of the AI model used by Client. We had to build our Explainability Algorithm on top of a system that:
  • Was a complex behavioral analytics system using multivariate anomaly detection indicators (approximately 33 features describing a user’s behavior);
  • Was based on unsupervised AI algorithms that identify potential, previously unknown threats without labeled examples; and
  • Used input data that was filtered and labeled automatically by telemetry rather than by a human operator.
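For illustration, the sketch below shows what such an unsupervised, multivariate anomaly detection setup can look like. The synthetic data and the choice of IsolationForest are assumptions made for this sketch; they are not details of Client’s proprietary model.

    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)

    # Stand-in for ~33 telemetry-derived behavioral features per user
    # (e.g., login frequency, bytes transferred, off-hours activity).
    X = rng.normal(size=(1000, 33))

    # Unsupervised anomaly detection: no labels are provided; the model
    # learns what "normal" behavior looks like and flags deviations as
    # potential, previously unknown threats.
    X_scaled = StandardScaler().fit_transform(X)
    detector = IsolationForest(contamination=0.01, random_state=0).fit(X_scaled)

    # predict() returns -1 for flagged (suspicious) activity, 1 for normal.
    flags = detector.predict(X_scaled)
    print(f"Flagged {(flags == -1).sum()} of {len(flags)} users as suspicious")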
Code4Thought’s proprietary technology, PyThia, was used to help Client understand where it stands with respect to controlling the AI model behind its Network Monitoring service, and to establish a mechanism for continuously explaining the model’s decisions.
To this end, we developed:
An Accountability Evaluation checklist, whose goal is to assess how mature an organization is, from both a technical and a managerial perspective, in governing an AI system. By rating the answers in the checklist, we identified a series of gaps and improvement points both for the AI system itself and for Client.
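As a purely hypothetical illustration of how rated answers surface gaps, consider the short sketch below; the questions, maturity scale, and threshold are invented for this example and do not reflect the actual checklist contents.

    # Hypothetical example: the questions, 0-2 maturity scale, and
    # threshold are invented, not taken from the actual checklist.
    answers = {
        "Is there a documented owner for the AI system?": 2,
        "Are model decisions logged and reviewable?": 1,
        "Is there a process for handling contested decisions?": 0,
    }  # 0 = absent, 1 = partial, 2 = mature

    gaps = [question for question, score in answers.items() if score < 2]
    print("Improvement points:")
    for question in gaps:
        print(f"  - {question}")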
A Bias Testing mechanism, whose purpose is to evaluate data for unwanted patterns using the Disparate Impact Ratio and Conditional Demographic Disparity metrics. By applying those metrics to the input and output data of the given AI system, PyThia established that the model’s decisions were free from bias.
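Both metrics are standard fairness measures, so their logic can be sketched directly; the function signatures and column framing below are assumptions made for illustration, not PyThia’s actual implementation.

    import pandas as pd

    def disparate_impact_ratio(df, group_col, outcome_col, protected, favorable):
        """P(favorable outcome | protected group) / P(favorable | everyone else)."""
        prot = df[df[group_col] == protected]
        rest = df[df[group_col] != protected]
        return (prot[outcome_col].eq(favorable).mean()
                / rest[outcome_col].eq(favorable).mean())

    def conditional_demographic_disparity(df, group_col, outcome_col,
                                          strata_col, protected, unfavorable):
        """Demographic disparity per stratum, weighted by stratum size.

        Within each stratum: DD = P(protected | unfavorable outcomes)
                                - P(protected | favorable outcomes).
        """
        cdd, total = 0.0, len(df)
        for _, stratum in df.groupby(strata_col):
            neg = stratum[stratum[outcome_col] == unfavorable]
            pos = stratum[stratum[outcome_col] != unfavorable]
            if len(neg) == 0 or len(pos) == 0:
                continue
            dd = (neg[group_col].eq(protected).mean()
                  - pos[group_col].eq(protected).mean())
            cdd += len(stratum) / total * dd
        return cdd

A Disparate Impact Ratio close to 1 and a Conditional Demographic Disparity close to 0 indicate that no group is disproportionately affected by the model’s decisions.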
An Explanation Algorithm, named M.A.SHAP, that provides a level of understanding of how an AI system produced a given result. By analyzing the data (input and output), PyThia identified the features that contributed most to the model’s decisions, enabling Client to give its users more insight into how the model operates. This helped foster trust and reassured users about the safety and equity of the AI system.
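M.A.SHAP itself is proprietary, but the underlying idea of SHAP-style feature attribution can be illustrated with the open-source shap package; the synthetic data and model below are stand-ins, not Client’s system.

    import numpy as np
    import shap
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 33))  # ~33 behavioral features, as above

    detector = IsolationForest(contamination=0.01, random_state=0).fit(X)

    # Attribute the anomaly score (lower = more anomalous) to the inputs.
    explainer = shap.KernelExplainer(detector.decision_function, shap.sample(X, 50))
    shap_values = explainer.shap_values(X[:3])

    # The largest-magnitude SHAP values point to the features that
    # contributed most to each decision.
    for i, sv in enumerate(shap_values):
        top = np.abs(sv).argsort()[::-1][:3]
        print(f"sample {i}: top contributing features -> {top.tolist()}")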
The exercise led to the following outcomes:
  • A process was established for mitigating risks and identifying proper roles and responsibilities (i.e., from the Product Manager upwards in the organizational hierarchy);
  • Clear responsibilities were established for the data scientists and machine learning engineers involved in the AI system’s implementation; and
  • Code4Thought’s AI explanation mechanism was validated and operationalized as a means for providing transparency to the Network Monitoring model’s decisions.
Based on the above-mentioned actions, Client is now able to ensure the model’s accuracy, has safeguards and oversight built in, and can assess its risks in a structured way:
  • Previously there was no ground truth for the model’s decisions; Client now gives its end users the ability to annotate decisions, so that ground truth can be accumulated over time (a sketch of this feedback loop follows the list);
  • The model is now supervised rather than unsupervised, as it was at the beginning; and
  • A risk management process is in place to mitigate any issues that might be caused by the model’s decisions.
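The annotation loop mentioned in the first bullet can be sketched as follows; the synthetic annotations and the choice of a random forest classifier are assumptions for this sketch, not details of Client’s pipeline.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Features of decisions that were flagged and shown to end users,
    # together with their annotations (1 = confirmed threat, 0 = benign).
    X_flagged = rng.normal(size=(200, 33))
    labels = rng.integers(0, 2, size=200)  # stand-in for real annotations

    # Once enough annotated ground truth accumulates, a supervised model
    # can replace (or complement) the original unsupervised detector.
    clf = RandomForestClassifier(random_state=0).fit(X_flagged, labels)

    new_activity = rng.normal(size=(5, 33))
    print(clf.predict_proba(new_activity)[:, 1])  # threat probability per event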
“Analyzing our cloud-based, AI-infused analytics service, as well as our data science practices, with Code4Thought was a thought-provoking experience. The improvement areas we have identified, through the concise questionnaire and illuminating visualizations of the internals of our algorithms, increased our confidence in the robustness of our product and maturity of our organization and processes. Indispensable!”
– Distinguished Engineer at a US company specializing in secure digital workspaces
