JADE/Intro blog
This page is obsolete. It is being retained for archival purposes. It may document extensions or features that are obsolete and/or no longer supported. Do not rely on the information here being up-to-date.
At Wikimedia, we develop AIs that are trained to emulate human judgments. These AIs help us support quality control work, task routing, and other critical infrastructure for maintaining Wikipedia and other Wikimedia wikis. Recently, the media and the research literature have been discussing the insidious ways in which bias creeps into AIs like ours. To address these biases, we're trying a novel strategy, but one that will be familiar to Wikipedians. In this blog post, we'll discuss JADE, an AI auditing support system that we're building to let Wikipedians collaboratively track and discuss the biases in our AIs.
Context & purpose
We're designing a companion project to ORES, and there is little awareness of the project outside of our immediate circle. The goal of this blog post is to highlight the new project and to bring in collaborators. We have a vision for JADE and for the importance of auditing support for users of advanced AIs. In this blog post, we'd like to explain that vision and argue for its inherent Wikipedian-ness.
- Audiences
- Tool authors who will integrate with JADE.
- Wiki editors interested in new initiatives, many of whom are already invested in ORES.
- Technical contributors to MediaWiki, both WMF staff and volunteers.
- Peripheral readers of Wikipedia who are interested in tech announcements.
- Researchers in human-computer interaction and machine learning.
Outline
- Problem statement: what is lacking?
- Patrollers are working in an ad-hoc space.
- Current toolbox mostly applies to content:
- flags that patrolling happened, but not the outcome;
- freeform discussion;
- patrol tags;
- category tags;
- fixing content.
- Outside of content,
- users can be blocked.
- revisions can be suppressed.
- ORES must be audited.
- Systemic biases cannot be self-diagnosed.
- Auditing should be diverse.
- Auditing should be transparent.
- How are we fixing this?
- JADE gives a structured space in which to do patrolling work and have discussions.
- Rich patroller flag to prevent duplicate work.
- Patrolling artifacts become wiki entities, and are available for meta-moderation.
- Patrolling can be suppressed, discussed, and analyzed by ORES.
- Patrollers can work with revisions directly, rather than only article content.
- JADE will democratize auditing by making it accessible and transparent.
- Mock-up of a judgment, maybe of the UI (see the sketch after this list).
- JADE closes the loop between patrollers and ORES by providing structured feedback that can be used to tune our algorithms. This is the audit cycle, and it will help us identify biases (a sketch of this loop follows the list).
- Output from a human consensus discussion is the gold standard for ground truth: multi-rater reliability, plus rational discourse that can be reviewed and that improves the reviewers. Is there some possibility, however, that consensus formation can lead to shared biases?
- This is a generalization of Wiki Labels.
- Examples from JADE can be fed back into ORES with minimal technician intervention.
- External observers are able to point out our biases.
- Give examples where we've discovered bias thanks to feedback.
- User feedback can bring dramatic improvements to classifier accuracy, and structured feedback is better still. Asking for comments has been shown to improve user judgments.
- Simple diagram of the ecosystem.
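Since the outline above calls for a mock-up of a judgment, here is a hedged sketch, in Python, of what a JADE judgment entity might look like as structured wiki data. Every field name below is an illustrative assumption on our part, not JADE's final schema.

```python
# A hypothetical JADE judgment as structured data.  Field names and
# values are illustrative assumptions, not the final schema.
judgment = {
    # The wiki entity being judged -- here, a single revision.
    "entity": {"type": "revision", "wiki": "enwiki", "rev_id": 123456789},
    # Which ORES model this judgment audits.
    "schema": "damaging",
    # One or more human labels, each attributable and commentable.
    "judgments": [
        {
            "user": "ExamplePatroller",   # hypothetical user name
            "value": False,               # the human's label: not damaging
            "comment": "Good-faith copyedit; likely flagged because the "
                       "editor is anonymous.",
        },
    ],
    # Judgments are wiki entities, so they can be discussed, suppressed,
    # and meta-moderated like anything else on the wiki.
    "discussion": "Talk:JADE/Judgment/123456789",
    # The consensus label, once the discussion settles.
    "preferred": False,
}
```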
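And here is a minimal sketch of one turn of the audit cycle: compare ORES's prediction for a revision against the human consensus recorded in JADE, and flag disagreements as candidate bias cases. The `judgment` structure is the hypothetical one sketched above; the scores endpoint and response shape follow ORES's v3 API as we understand it.

```python
import requests

# One turn of the audit cycle: does ORES agree with the humans?
ORES_URL = "https://ores.wikimedia.org/v3/scores/{wiki}/"

def ores_prediction(wiki, model, rev_id):
    """Fetch ORES's prediction for a single revision."""
    response = requests.get(
        ORES_URL.format(wiki=wiki),
        params={"models": model, "revids": rev_id},
    )
    response.raise_for_status()
    score = response.json()[wiki]["scores"][str(rev_id)][model]["score"]
    return score["prediction"]

def disagrees(judgment):
    """True when the human consensus and ORES disagree -- a candidate bias case."""
    machine_label = ores_prediction(
        judgment["entity"]["wiki"],
        judgment["schema"],
        judgment["entity"]["rev_id"],
    )
    return judgment["preferred"] != machine_label

# Disagreements become labeled training examples that can be fed back
# into the next model build with minimal technician intervention.
```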
- Conclusion: we're building JADE because we have a rare opportunity to create a widely used, transparent AI algorithm, governed by democratic practices in which our users participate.
- What are normal feedback mechanisms in this industry?
- Google Jigsaw's Conversation AI also provides feedback mechanisms and values the data it receives. Theirs is write-only: scalar judgments and optional comments.
- Humans are treated as data-entry bots
- Mediation is done entirely by technicians
- What are the innovative and interesting mechanisms?
- Rationales: pick out keywords or features that explain the judgment.
- Learn by example: a new class defined by hand-picked samples.
- Active learning: we choose what will be judged (see the sketch after this list).
- Learn from adversarial data: meta-vandalism.
- User co-training.
- Immediate algorithmic feedback: retraining or modulating the AI to incorporate feedback in real time.
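To make "active learning" concrete, here is a minimal sketch of uncertainty sampling, one common way to choose what gets judged: route the revisions the model is least sure about to human judges first. The `scores` mapping is an assumed input (rev_id to damaging probability); this is our illustration, not a built JADE feature.

```python
# Uncertainty sampling: judge first the revisions the model is least
# sure about.  `scores` maps rev_id -> P(damaging), an assumed input.
def most_uncertain(scores, n=10):
    """Pick the n revisions whose damaging probability is closest to 0.5."""
    return sorted(scores, key=lambda rev_id: abs(scores[rev_id] - 0.5))[:n]

# With these model outputs, revision 222 (p=0.48) is routed to judges first.
queue = most_uncertain({111: 0.97, 222: 0.48, 333: 0.08}, n=1)
assert queue == [222]
```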
- What's so democratic about JADE?
- We allow discussion about judgments. Other systems only have individual judgments, and the reviewers usually can't even read each other's replies.
- We can pair democratized auditing with transparency about the pipeline itself.