Fairness, Accountability & Transparency
- “There’s software used across the country to predict future criminals. And it’s biased against blacks”, 2016 [source: ProPublica].
- “Amazon reportedly scraps internal AI recruiting tool that was biased against women”, 2018 [source: The Verge]
- “Apple Card is being investigated over claims it gives women lower credit limits”, 2019 [source: MIT Technology Review]
- “Is an Algorithm Less Racist Than a Loan Officer?”, 2020 [source: NY Times]
- Other scholars believe that, for the information to be meaningful, it must allow data subjects to exercise their rights under Article 22(3) of the GDPR, namely the right to “express his or her point of view and to contest the decision”. Since explanations provide this ability, it is argued that they must be presented.
At Code4Thought, we hope that the legal gaps will be filled and the related conflicts resolved soon, resulting in clearer and more explicit guidelines and regulations for AI Ethics. Until then, we urge organisations to proactively care about the “F.Acc.T-ness” of their algorithmic systems: we strongly believe that doing so can boost their trustworthiness, yield more stable and responsible algorithmic systems, and perhaps contribute to a fairer society.
- Personal Information Protection and Electronic Documents Act (PIPEDA) by the Parliament of Canada in 2001
- California Consumer Privacy Act (CCPA) by California State Legislature in June, 2018
- Brazilian General Data Protection Law (Lei Geral de Proteção de Dados or LGPD) passed in 2018
- White paper on Artificial Intelligence — A European approach to excellence and trust by the European Union in February, 2020
- Denmark’s legislation on AI & Data Ethics in May, 2020, making it the first country in Europe to implement such laws