AI Fairness 360, an LF AI incubation project, is an extensible open source toolkit that can help users examine, report, and mitigate discrimination and bias in machine learning models throughout the AI application lifecycle. It is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. The toolkit is available in both Python and R. IBM moved AI Fairness 360 to LF AI in July 2020.
Features
10 State-of-the-Art Bias Mitigation Algorithms
Developed by the research community, these include optimized preprocessing, reweighing, adversarial debiasing, reject option classification, disparate impact remover, learning fair representations, equalized odds post-processing, the meta-fair classifier, and more.
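To make this concrete, here is a minimal sketch of applying one of these algorithms, Reweighing, as a pre-processing step. It assumes the aif360 Python package is installed (pip install aif360) and uses the German credit dataset that ships with the toolkit; the protected attribute, group encodings, and dataset choice are illustrative assumptions, and the raw data files must be downloaded separately per the project's instructions.

```python
from aif360.datasets import GermanDataset
from aif360.algorithms.preprocessing import Reweighing

# Load the German credit dataset with 'sex' as the protected attribute
# (illustrative choice; raw data files must be downloaded first).
dataset = GermanDataset(protected_attribute_names=['sex'],
                        privileged_classes=[['male']],
                        features_to_drop=['personal_status'])

# AIF360 encodes the privileged value of a protected attribute as 1.0.
privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

# Reweighing assigns instance weights so the training data is balanced
# across groups before any model is trained.
rw = Reweighing(unprivileged_groups=unprivileged,
                privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)
```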
70 Fairness Metrics
Over seventy metrics that help quantify aspects of individual and group fairness, including statistical parity difference, equal opportunity difference, average odds difference, disparate impact, the Theil index, and distance-based measures such as Euclidean, Manhattan, and Mahalanobis distance.
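As a sketch of how dataset-level metrics are computed, two of the most common group metrics are available through BinaryLabelDatasetMetric. The example below reuses the dataset and group definitions assumed in the previous sketch.

```python
from aif360.metrics import BinaryLabelDatasetMetric

# Compute group fairness metrics on the (untransformed) dataset,
# reusing `dataset`, `privileged`, and `unprivileged` from above.
metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)

# Difference in favorable-outcome rates between groups (ideal: 0).
print('Statistical parity difference:',
      metric.statistical_parity_difference())

# Ratio of favorable-outcome rates between groups (ideal: 1).
print('Disparate impact:', metric.disparate_impact())
```

Classifier-dependent metrics such as equal opportunity difference and average odds difference are computed with ClassificationMetric, which additionally takes a dataset of model predictions.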
Industrial Applications
Packed with tutorials that demonstrate industrial use cases of the toolkit and offer a deeper, data-scientist-oriented introduction. Examples include Credit Scoring and Medical Expenditures.
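The sketch below is not the Credit Scoring tutorial itself, only a compact illustration of its before/after workflow, reusing the objects assumed in the sketches above: measure bias on the training split, mitigate with Reweighing, and measure again.

```python
# Split into train/test, measure bias, mitigate, re-measure.
dataset_train, dataset_test = dataset.split([0.7], shuffle=True)

metric_before = BinaryLabelDatasetMetric(dataset_train,
                                         unprivileged_groups=unprivileged,
                                         privileged_groups=privileged)
print('Mean difference before reweighing:',
      metric_before.mean_difference())

# Reweighing adjusts instance weights; the weighted mean difference
# on the transformed data should move close to 0.
dataset_train_transf = rw.fit_transform(dataset_train)
metric_after = BinaryLabelDatasetMetric(dataset_train_transf,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print('Mean difference after reweighing:',
      metric_after.mean_difference())
```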
Getting Started
Read more, try a demo, watch videos, read the paper, work through tutorials, ask questions, view notebooks, and more here.
GitHub
Please visit us on GitHub, where our development happens. We invite you to join our community, both as a user of AI Fairness 360 and as a contributor to its development. We look forward to your contributions!
Join the Conversation
AI Fairness 360 maintains three mailing lists. You are invited to join the one that best matches your interests:
trusted-ai-360-announce: Top-level milestone messages and announcements
trusted-ai-360-technical-discuss: Technical discussions
trusted-ai-360-tsc: Technical governance discussions