
Explaining decisions made with AI
Origin: Europe
Language: English
Type: Framework
Creator: Information Commissioner's Office (ICO)
Mainly focused on explaining decisions made with AI, but it also covers fairness issues in models.

Ethics Canvas
Language: English
Type: Tech tool
Creator: ADAPT Centre
Helps you structure, visualize, and resolve ideas about the ethical implications of the projects you are working on. Produced mainly for managers.

Fairness Indicators: TensorFlow's Fairness Evaluation and Visualization Toolkit
Language: English
Type: Tech tool
Designed to support teams in evaluating, improving, and comparing models for fairness concerns, in partnership with the broader TensorFlow toolkit. The Perspective API is provided as a content moderation case study.
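The kind of slice-based evaluation this toolkit automates can be sketched in plain Python. This is an illustrative example only, not the Fairness Indicators API; all function names here are hypothetical:

```python
# Illustrative sketch: comparing a model's false-positive rate across
# groups, the sort of per-slice metric Fairness Indicators computes
# and visualizes (this does not use the actual library).

def rate(pairs, predicted, actual):
    """Fraction of examples with the given actual label that the
    model labeled as `predicted`."""
    relevant = [p for p, a in pairs if a == actual]
    if not relevant:
        return 0.0
    return sum(1 for p in relevant if p == predicted) / len(relevant)

def false_positive_rate_by_group(examples):
    """examples: list of (group, prediction, label) triples."""
    groups = {}
    for g, pred, label in examples:
        groups.setdefault(g, []).append((pred, label))
    return {g: rate(pairs, predicted=1, actual=0)
            for g, pairs in groups.items()}

data = [
    ("A", 1, 0), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 1, 0), ("B", 1, 0), ("B", 0, 1), ("B", 1, 0),
]
fpr = false_positive_rate_by_group(data)  # per-group false-positive rates
```

A large gap between groups in a metric like this is the signal the toolkit surfaces for further investigation.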

CERTIFAI
Language: English
Type: Tech tool
Creator: Cognitive Scale - Cortex
A tool developed by Cognitive Scale for data scientists to evaluate their AI models for robustness, fairness, and explainability; it allows users to compare different models or model versions on these qualities.
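A core idea behind counterfactual-based tools like this one can be sketched briefly. This is a hypothetical toy, not CERTIFAI's implementation; the model and function names are invented for illustration:

```python
# Hypothetical sketch of a counterfactual probe: find the smallest
# change to one feature that flips a model's decision. Tools in this
# space use such counterfactuals to assess robustness and fairness.

def model(income, debt):
    # Toy approval rule standing in for a trained model.
    return 1 if income - debt >= 50 else 0

def minimal_income_counterfactual(income, debt, step=1, max_delta=100):
    """Smallest income increase (up to max_delta) that flips the
    model's decision, or None if no flip is found."""
    base = model(income, debt)
    for delta in range(step, max_delta + 1, step):
        if model(income + delta, debt) != base:
            return delta
    return None

delta = minimal_income_counterfactual(income=40, debt=10)
```

If the minimal change needed to flip a decision differs systematically between groups, that asymmetry is itself a fairness signal.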

Guidelines for Quality Assurance of Machine Learning-based Artificial Intelligence
Origin: Asia
Language: Japanese
Type: Guide or manual
Creator: QA4AI
The Guidelines for the Quality Assurance of AI Systems offer a comprehensive technical assessment of quality measures for AI systems, but they are not, strictly speaking, a document on AI fairness. They are updated periodically in the original Japanese version, and an informal English translation is also available.

From Principles to Practice – An interdisciplinary framework to operationalize AI ethics
Language: English
Type: Guide
Creator: AIEI Group
The paper offers concrete guidance to decision-makers in organizations developing and using AI on how to incorporate values into algorithmic decision-making, and how to measure the fulfillment of those values using criteria, observables, and indicators, combined with a context-dependent risk assessment.

Review into bias in algorithmic decision-making
Origin: Europe
Language: English
Type: Guide
Creator: Centre for Data Ethics and Innovation
As the name suggests, this is more an educational publication than a tool; however, it also provides some high-level recommendations for governments and regulators.

Ethically Aligned Design
Language: English
Type: Guide or manual
Creator: IEEE Global A/IS Ethics Initiative
Identifies specific verticals and areas of interest, providing highly granular and pragmatic papers and insights as a natural evolution of the initiative's work. Produced mainly for tech teams.

ML Fairness Gym
Language: English
Type: Tech tool
An open-source development tool for building simple simulations that explore the potential long-run impacts of deploying machine learning-based decision systems.
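The long-run dynamics such simulations explore can be illustrated with a toy loop. This is a minimal sketch in the spirit of ML Fairness Gym, not its API; the scenario and parameters are invented:

```python
# Hypothetical toy simulation: a fixed lending threshold applied to
# two groups whose scores evolve in response to the decisions.
# Approved applicants repay and gain score; denied ones drift down,
# so a one-shot "fair" threshold can diverge over time.

def simulate(scores, threshold, repay_gain=5, deny_drift=-1, steps=10):
    """Return scores after `steps` rounds of threshold lending."""
    scores = list(scores)
    for _ in range(steps):
        scores = [s + repay_gain if s >= threshold else s + deny_drift
                  for s in scores]
    return scores

group_a = simulate([650, 660], threshold=640)  # starts above the bar
group_b = simulate([630, 635], threshold=640)  # starts below the bar
```

Even this crude model shows the feedback effect the tool is built to study: the group starting above the threshold keeps gaining while the other keeps losing ground.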

Fairness feature testing
Language: English
Type: Tech tool
Creator: DataRobot
Allows you to flag protected features in your dataset and then actively guides you through selecting the fairness metric best suited to your use case. Produced mainly for tech teams.
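One of the common metrics a fairness-metric selection workflow might surface is demographic parity. The sketch below is illustrative only and does not use DataRobot's API:

```python
# Illustrative sketch: demographic parity difference, the gap in
# positive-prediction rates between groups defined by a protected
# feature. A value near 0 means similar treatment across groups.

def positive_rate(preds):
    """Fraction of predictions that are positive (1)."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds_by_group):
    """Max minus min positive-prediction rate across groups."""
    rates = [positive_rate(p) for p in preds_by_group.values()]
    return max(rates) - min(rates)

gap = demographic_parity_difference({"A": [1, 0, 1, 1],
                                     "B": [0, 0, 1, 0]})
```

Which metric is appropriate depends on the use case, which is exactly the choice such tools guide users through.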

RCModel, a Risk Chain Model for Risk Reduction in AI Services
Origin: Asia
Language: English
Type: Guide or manual
Creator: The University of Tokyo
The Risk Chain Model (RCModel) supports AI service providers in proper risk assessment and control, and offers policy recommendations.

Algorithmic Accountability Policy Toolkit
Language: English
Type: Guide or manual
Creator: AI Now Institute
This toolkit includes resources for advocates interested in, or currently engaged in, work to uncover where algorithms are being used and to create transparency and accountability mechanisms. Produced mainly for tech teams.
Do you want to contribute?
This is a live Global Library.
This publication was last updated in August 2022. If you have a resource on AI fairness that has not been published in this Global Library and you would like it to be considered, or if you are the creator of a resource published here and would like to edit its information, please email us at info@cminds.co
Disclosures:
The material included in this site is not necessarily endorsed by the World Economic Forum, the Global Future Council on AI for Humanity, C Minds and/or other collaborators.
Readers and/or users of each resource must evaluate each tool for their specific intended purpose. This first iteration includes only free and publicly available resources.
The intellectual property of each resource is owned by its creator.
This material may be shared, provided that it is clearly attributed to its creators. This material may not be used for commercial purposes.
Global Future Council on AI for Humanity, WEF, with the support of C Minds