In 2022, Mozilla welcomed our inaugural Mozilla Technology Fund cohort, which focused on reducing bias in and increasing the transparency of artificial intelligence (AI) systems. Building on our learnings from that cohort, we’ve decided to focus our resources in 2023 on an emerging area of tooling that is under-resourced and where we see a real opportunity for impact: open source auditing tools. While AI systems are increasingly being used to make determinations that impact people’s employment, health, finances and legal status, there are few mechanisms in place to hold these systems accountable for harms, bias and discrimination. In 2023, the Mozilla Technology Fund will fund and convene eight projects that will grow, support, and better coordinate the community of AI auditors through tooling and resources. Funded projects received awards of up to $50,000 USD each.
The awardees selected for the 2023 cohort were as follows:
| Project Name | Awardee | Location |
|---|---|---|
| AI Forensics | Tracking Exposed | France |
| AI Risk Checklists | Responsible AI Collaborative | U.S. |
| Countering Tenant Screening | Wonyoung So | U.S. |
| Crossover | Check First | Finland |
| Evaluation Harness | Big Science | U.S. |
| Gigbox | The Workers’ Algorithm Observatory (WAO) | U.S. |
| Reward Reports | GEESE | U.S. |
| Zeno | Carnegie Mellon University | U.S. |
The selection committee for the 2023 awards consisted of the following experts:
| Name | Affiliation |
|---|---|
| Abeba Birhane | Trinity College Dublin, Ireland |
| Bogdana Rakova | Senior Fellow, Mozilla Foundation |
| Deb Raji | University of California, Berkeley |
| E-M Lewis-Jong | Product Lead, Common Voice |
The staff who ran the 2023 Mozilla Technology Fund cohort consisted of the following Mozilla Foundation employees:
| Name | Role |
|---|---|
| Mehan Jayasuriya | Senior Program Officer |
| Jaselle Edward-Gill | Program Associate |
Further reading on the 2023 cohort and its projects:
- Auditing AI: Announcing the 2023 Mozilla Technology Fund Cohort
- Reward Reports: An audit tool to better document AI
- Giving Gig Workers the Transparency They Deserve
- Evaluation Harness Is Setting the Benchmark for Auditing Large Language Models
- How to Hold AI Landlords Accountable
- Dope Puffer for Pope: How AI Risk Database is Building a Case for Proper AI Auditing
- Zeno: An Interactive Tool For AI Model Evaluation
- AI Forensics: The Detectives Researching AI Harms