To help communities test and evaluate Automoderator's accuracy, we are making a test spreadsheet available with data on past edits and whether Automoderator would have reverted them.
Automoderator's decisions result from a mix of a machine learning model score and internal settings. While the model will get better over time through re-training, we're also looking to improve its accuracy by defining additional internal rules. For instance, we've observed Automoderator occasionally misidentifying users reverting their own edits as vandalism. We're looking for similar examples, and would appreciate your help in identifying them.
Note that this test does not necessarily reflect Automoderator's final form - we will be using the results of this test to make it better!
How to test Automoderator
- If you have a Google account:
- Use the Google Sheet link below and make a copy of it
- You can do this by clicking File > Make a copy after opening the link.
- After your copy has loaded, click Share in the top corner, then give any level of access to swalton@wikimedia.org (leaving 'Notify' checked), so that we can aggregate your responses to collect data on Automoderator's accuracy.
- Alternatively, you can change 'General access' to 'Anyone with the link' and share a link with us directly or on-wiki.
- Alternatively, use the .ods file link to download the file to your computer.
- After adding your decisions, please send the sheet back to us at swalton@wikimedia.org, so that we can aggregate your responses to collect data on Automoderator's accuracy.
After accessing the spreadsheet...
- Follow the instructions in the sheet to select a random dataset, review 30 edits, and then uncover what decisions Automoderator would make for each edit.
- Feel free to explore the full data in the 'Edit data & scores' tab.
- If you want to review another dataset, please make a new copy of the sheet to avoid conflicting data.
- Join the discussion on the talk page.
Alternatively, you can simply dive into the individual project tabs and start investigating the data directly.
We welcome translations of this sheet - if you would like to submit a translation please make a copy, translate the strings on the 'String translations' tab, and send it back to us at swalton@wikimedia.org.
If you would like us to provide data from another Wikipedia, let us know and we will create it.
About Automoderator
Automoderator's model is trained exclusively on Wikipedia's main namespace pages, limiting its dataset to edits made to Wikipedia articles. Further details can be found below.
Internal configuration
In the current version of the spreadsheet, in addition to considering the model score, Automoderator will not take action on:
- Edits made by administrators
- Edits made by bots
- Edits which are self-reverts
- New page creations
The datasets contain edits which meet these criteria, but Automoderator should never indicate that it would revert them. This behaviour, and the list above, will be updated as testing progresses if we add new exclusions or configuration options.
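To make the exclusion logic concrete, here is a minimal sketch of how such a pre-filter might work. It is illustrative only: the function and field names are invented for this example and do not reflect Automoderator's actual implementation.

```javascript
// Hypothetical sketch of Automoderator's pre-filtering, for illustration only.
// Field names (userGroups, isSelfRevert, isPageCreation) are invented and do
// not reflect the real data model.
function isExcluded( edit ) {
	if ( edit.userGroups.includes( 'sysop' ) ) {
		return true; // edits made by administrators are never reverted
	}
	if ( edit.userGroups.includes( 'bot' ) ) {
		return true; // edits made by bots are never reverted
	}
	if ( edit.isSelfRevert ) {
		return true; // users undoing their own edits are not treated as vandals
	}
	if ( edit.isPageCreation ) {
		return true; // new page creations are out of scope
	}
	return false; // otherwise, fall through to the model score check
}
```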
Caution levels
In this test Automoderator has five 'caution' levels, each defining the revert likelihood threshold above which Automoderator will revert an edit (see the sketch after this list).
- At high caution, Automoderator will need to be very confident to revert an edit. This means it will revert fewer edits overall, but with higher accuracy.
- At low caution, Automoderator will be less strict about its confidence level. It will revert more edits, but be less accurate.
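Concretely, each caution level is just a score threshold: an edit is a candidate for reverting only when its revert-risk score exceeds the threshold for the configured level. A minimal sketch, using the thresholds from the table below (the function name is illustrative):

```javascript
// Revert-risk thresholds for each caution level, as used in this test.
const CAUTION_THRESHOLDS = {
	'very cautious': 0.99,
	'cautious': 0.985,
	'somewhat cautious': 0.98,
	'low caution': 0.975,
	'not cautious': 0.97
};

// An edit is reverted only when its model score exceeds the chosen threshold
// (and it is not excluded by the internal configuration described above).
function wouldRevert( score, cautionLevel ) {
	return score > CAUTION_THRESHOLDS[ cautionLevel ];
}
```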
The caution levels in this test have been set by the Moderator Tools team based on our observations of the model's accuracy and coverage. The table below shows daily edit volumes and the average number of daily reverts Automoderator would make at each caution level (with the corresponding score threshold in parentheses):
Project | Daily edits | Daily edit reverts | Very cautious (>0.99) | Cautious (>0.985) | Somewhat cautious (>0.98) | Low caution (>0.975) | Not cautious (>0.97) |
---|---|---|---|---|---|---|---|
English Wikipedia | 140,000 | 14,600 | 152 | 350 | 680 | 1,077 | 1,509 |
French Wikipedia | 23,200 | 1,400 | 24 | 40 | 66 | 98 | 136 |
German Wikipedia | 23,000 | 1,670 | 14 | 25 | 43 | 65 | 89 |
Spanish Wikipedia | 18,500 | 3,100 | 57 | 118 | 215 | 327 | 445 |
Russian Wikipedia | 16,500 | 2,000 | 34 | 57 | 88 | 128 | 175 |
Japanese Wikipedia | 14,500 | 1,000 | 27 | 37 | 48 | 61 | 79 |
Chinese Wikipedia | 13,600 | 890 | 9 | 16 | 25 | 37 | 53 |
Italian Wikipedia | 13,400 | 1,600 | 40 | 61 | 99 | 151 | 211 |
Polish Wikipedia | 5,900 | 530 | 10 | 16 | 25 | 35 | 45 |
Portuguese Wikipedia | 5,700 | 440 | 2 | 7 | 14 | 21 | 30 |
Hebrew Wikipedia | 5,400 | 710 | 16 | 22 | 30 | 38 | 48 |
Persian Wikipedia | 5,200 | 900 | 13 | 26 | 44 | 67 | 92 |
Korean Wikipedia | 4,300 | 430 | 12 | 17 | 23 | 30 | 39 |
Indonesian Wikipedia | 3,900 | 340 | 7 | 11 | 18 | 29 | 42 |
Turkish Wikipedia | 3,800 | 510 | 4 | 7 | 12 | 17 | 24 |
Arabic Wikipedia | 3,600 | 670 | 8 | 12 | 18 | 24 | 31 |
Czech Wikipedia | 2,800 | 250 | 5 | 8 | 11 | 15 | 20 |
Romanian Wikipedia | 1,300 | 110 | 2 | 2 | 4 | 6 | 9 |
Croatian Wikipedia | 500 | 50 | 1 | 2 | 2 | 3 | 4 |
... | ... | ... | ... | ... | ... | ... | ... |
All Wikipedia projects | | | 538 | 984 | 1,683 | 2,533 | 3,483 |
This data can be viewed for other Wikimedia projects here.
Score an individual edit
We have created a simple user script to retrieve a Revert Risk score for an individual edit.
Simply import User:JSherman (WMF)/revertrisk.js into your common.js by adding:

mw.loader.load( 'https://en.wikipedia.org/wiki/User:JSherman_(WMF)/revertrisk.js?action=raw&ctype=text/javascript' );
You should then find a 'Get revert risk score' link in the Tools menu in your sidebar. Note that this will only display the model score, and does not take into account Automoderator's internal configuration as detailed above. See the table above for the score thresholds at which we are investigating Automoderator's false positive rate.
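If you would rather query the model directly, the language-agnostic Revert Risk model is also served via the Wikimedia Lift Wing API. The sketch below assumes the publicly documented endpoint and response shape at the time of writing; please verify against the current Lift Wing documentation before relying on it.

```javascript
// Fetch a Revert Risk score for a single revision via Lift Wing.
// revId identifies the revision; lang is the wiki's language code.
// The 'true' probability is the model's estimate that the edit will be reverted.
async function getRevertRiskScore( revId, lang ) {
	const response = await fetch(
		'https://api.wikimedia.org/service/lw/inference/v1/models/revertrisk-language-agnostic:predict',
		{
			method: 'POST',
			headers: { 'Content-Type': 'application/json' },
			body: JSON.stringify( { rev_id: revId, lang: lang } )
		}
	);
	const data = await response.json();
	// Response shape assumed from the Lift Wing documentation.
	return data.output.probabilities[ 'true' ];
}

// Example usage:
// getRevertRiskScore( 1234567890, 'en' ).then( console.log );
```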
Initial results
Quantitative
22 testing spreadsheets were shared back with us, totalling more than 600 reviewed edits from 6 Wikimedia projects. We have aggregated the data to analyse how accurate Automoderator would be at different caution levels:
Not cautious (0.97) | Low caution (0.975) | Somewhat cautious (0.98) | Cautious (0.985) | Very cautious (0.99) |
---|---|---|---|---|
75% | 82% | 93% | 95% | 100% |
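For reference, the accuracy figures above are the share of would-be reverts at each threshold that reviewers judged correct. A minimal sketch of that aggregation (the data shape is illustrative, not the spreadsheet's actual format):

```javascript
// Each reviewed edit carries a model score and a reviewer judgement of
// whether reverting it would have been correct. Accuracy at a threshold is
// the share of edits above that threshold which reviewers agreed were bad.
function accuracyAtThreshold( reviewedEdits, threshold ) {
	const reverted = reviewedEdits.filter( ( e ) => e.score > threshold );
	if ( reverted.length === 0 ) {
		return null; // no would-be reverts at this threshold
	}
	const correct = reverted.filter( ( e ) => e.reviewerSaysBad ).length;
	return correct / reverted.length;
}
```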
In our Moderator Tools/Automoderator/Measurement plan we said that we wanted the most permissive option available in Automoderator to have an accuracy of 90%. The 'Not cautious' and 'Low caution' levels are clearly below this, which isn't surprising as we didn't have clear data from which to select these initial thresholds. We will be removing the 'Not cautious' threshold, as a 25% error rate is clearly too high for any community. We will retain 'Low caution' for now, and monitor how its accuracy changes as model and Automoderator improvements occur in the run-up to deployment. We want to err on the side of Automoderator leaving some bad edits in place rather than reverting good ones, so this is a priority for us to continue reviewing.
When we have real-world accuracy data from Automoderator's pilot deployment, we can investigate further and consider adjusting the available thresholds.
Qualitative
On the testing talk page and elsewhere we also received qualitative thoughts from patrollers.
Overall feedback about Automoderator's accuracy was positive, with editors feeling comfortable at various thresholds, including some on the lower end of the scale.
Some editors raised concerns about the volume of edits Automoderator would actually revert being relatively low. This is something that we'll continue to discuss with communities. From our analysis (T341857#9054727) we found that Automoderator would be operating at a somewhat similar capacity to existing anti-vandalism bots developed by volunteers, but we'll continue to investigate ways to increase Automoderator's coverage while minimising false positives.
Next steps
Based on the results above, we feel confident in the model's accuracy and plan to continue our work on Automoderator. We will now start technical work on the software, while exploring designs for the user interface. We expect that the next update we share will contain configuration wireframes for feedback.
In the meantime please feel free to continue testing Automoderator via the process above - more data and insights will continue to have a positive impact on this project.