
Don’t be wrong because you might be fooled: Tips on how to secure your ML model

May 17, 2022

Figuring out why your ML model is consistently less accurate on certain classes than on others can help you increase not only its overall accuracy but also its adversarial robustness. Introduction: Machine Learning (ML) models, and especially Deep Learning (DL) ones, can achieve impressive results, particularly on unstructured data like images and […]


Fairness, Accountability & Transparency (F.Acc.T) under GDPR

November 9, 2020

Why the need for Regulation? Algorithmic decisions already crucially affect our lives. In the last few years, news stories like the ones listed below have become more and more common: – “There’s software used across the country to predict future criminals. And it’s biased against blacks”, 2016 [source: ProPublica]. – “Amazon reportedly scraps internal AI […]


Is Twitter biased against BIPOC? Maybe it’s not what you think it is.

October 2, 2020

The controversy: Twitter was in the headlines recently for apparent racial bias in the photo previews of some tweets. More specifically, Twitter’s machine learning algorithm that selects which part of an image to show in a photo preview favors showing the faces of white people over those of black people. For example, the following tweet contains an […]


Using explanations for finding bias in black-box models

October 3, 2019

The need to shed light on black-box models: There is no doubt that machine learning (ML) models are being used to solve several business and even social problems. Every year, ML algorithms become more accurate, more innovative and, consequently, applicable to a wider range of applications. From detecting cancer to banking and […]

