Reading/Trust
This page is currently a draft.
Intro and Problem Statement
This is just a holding place to jot down ideas for how we could give readers of Wikipedia a better gauge of how much they can trust Wikipedia content. The current problem statement is:
Shorter: Wikipedia's value to the world is limited by the distrust caused by the potential for misinformation, coupled with a lack of information about content quality.
Longer: Most readers believe that all Wikipedia content is equally reliable/unreliable, when some pages or revisions are much more likely to be neutral, grounded in reality, or comprehensive. The lack of signal around validity, and the seemingly random nature of vandalism and misinformation, means that students, teachers, journalists and other professionals are reluctant to use anything from Wikipedia in their professional work, even though much of Wikipedia is of higher quality than the professionally written sources they would use for verification. It is unknown how much this doubt impacts usage and, given the pervasiveness of Wikipedia, how large a lost opportunity for global education it represents. The lack of signal also represents a lost opportunity to provide greater research literacy to our users by exposing the means of knowledge creation.
Impact
While trust is often cited as a concern for educators and professionals, it is unknown how greater trust would impact behavior and application. Is trust something readers only think about when asked? Would surfacing more information make people more or less 'trusting' in general? One study (summary here) indicated that users who saw excerpts from talk pages rated the quality as lower, but reported the opposite impact:
"Though the between-subjects data clearly shows that those who saw any discussion rated quality below those who didn’t see a discussion, people told us that they thought reading the discussion raised their perception of the article’s quality, and of Wikipedia in general."
Finally, is this a reader issue or an editing issue? I think someone pointed out that tools for readers would potentially be useful for editors as well. Building for both audiences at once is dangerous, but simply repairing and promoting an existing tool for editors might be a way to start.
Suggested Approaches/Signals
- Use ORES to generate a score/signal based on various inputs (a rough sketch follows this list):
- # of editors
- Last edit
- Revision scores
- Level of discussion
- Link talk page elements to their revisions/sections
- Blame display
- Edit history visualizations
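A rough sketch of how the ORES-based signal in the first item could be assembled. The endpoint and model names below (the public ORES v3 API, and the enwiki articlequality and damaging models) are real at the time of writing; how the scores would combine into a single reader-facing signal is purely illustrative.

# Sketch: fetch per-revision ORES scores that could feed a reader-facing
# trust signal. Combining them into one number is deliberately left open.
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki"

def ores_scores(rev_id):
    """Fetch articlequality and damaging scores for one revision."""
    resp = requests.get(
        ORES_URL,
        params={"models": "articlequality|damaging", "revids": rev_id},
        timeout=10,
    )
    resp.raise_for_status()
    scores = resp.json()["enwiki"]["scores"][str(rev_id)]
    return {
        "quality": scores["articlequality"]["score"]["prediction"],
        "p_damaging": scores["damaging"]["score"]["probability"]["true"],
    }

print(ores_scores(871234567))  # e.g. {'quality': 'B', 'p_damaging': 0.03}

The other inputs listed above (# of editors, last edit, level of discussion) would come from the ordinary MediaWiki API rather than from ORES itself.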
Existing Work
Discussions
Quality assessment tools for Wikipedia readers (2010)
Email thread: https://lists.wikimedia.org/pipermail/wiki-research-l/2015-August/004703.html
In Journalism:
- "In workshops, journalism leaders built a list of indicators of trustworthy news based on user views collected through in-depth, ethnographic interviews." [1]
- A project to facilitate collaboration between Wikimedians and The International Fact-Checking Community.
- A prototype for aligning trust indicators with the schema.org standards in order to implement a simple, best-practices-based system for including trust indicators on news platforms (a minimal example follows this list).
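For concreteness: one widely deployed example of schema.org-based credibility markup is the ClaimReview type used by fact-checkers, which is adjacent to (though not identical with) the trust-indicator work above. A minimal sketch of such a record, here generated in Python; all values are invented for illustration.

# Sketch: a minimal schema.org ClaimReview record, the kind of
# machine-readable trust markup the prototypes above build on.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "claimReviewed": "Example claim being fact-checked",
    "author": {"@type": "Organization", "name": "Example Fact Checkers"},
    "datePublished": "2018-01-01",
    "url": "https://example.org/fact-checks/1",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 4,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "Mostly true",
    },
}

print(json.dumps(claim_review, indent=2))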
Tools
- Edit history visualizations compiled by a German Wikipedia (de) user: https://de.wikipedia.org/wiki/Benutzer:Atlasowa/edit_history_visualization
- WikiTrust. It coloured all text (not just links) according to various criteria, including (IIUC) who had originally written the text, the trustworthiness of that editor, and how long the text had survived in its current form. The less trustworthy the text, the darker the shade.
* Screenshot: https://upload.wikimedia.org/wikipedia/commons/0/05/Screenshot_WikiTrust%2C_dewp_mit_Artikel_Porcupine_Tree%2C_2009-12-25.png
* Article: w:en:WikiTrust (note the "Criticism" section)
* Details at: https://de.wikipedia.org/wiki/Benutzer:Atlasowa/edit_history_visualization#WikiTrust_.2F_WikiPraise
- WikiWho (2 tools: whoCOLOR and whoVIS). Again, see lots of links at https://de.wikipedia.org/wiki/Benutzer:Atlasowa/edit_history_visualization#WikiWho.2C_whoCOLOR_and_whoVIS
* whoCOLOR seems to have just basic features, but they plan(ned?) to add various things like in WikiTrust. (See the "coming features (for sure!)" section at http://f-squared.org/whovisual/ )
* whoVIS I don't understand at all. http://f-squared.org/whovisual/#vis
- Wikiblame[1]
- Wikihistory - linked from every article's "Page information" page on English Wikipedia. Example: https://tools.wmflabs.org/xtools/wikihistory/wh.php?page_title=Mission_Dolores_Park
- Article info - https://tools.wmflabs.org/xtools-articleinfo/?article=Mission_Dolores_Park&project=en.wikipedia.org
- Contropedia - http://www.contropedia.net/ (demo and screenshots: http://www.contropedia.net/#graphics; more info: https://de.wikipedia.org/wiki/Benutzer:Atlasowa/edit_history_visualization#Contropedia). Note: controversiality and reliability are not the same - it has long been a truism that some of the most controversial articles are also of the highest quality.
- A few more:
- Still existing/functional, there's at least http://en.wikichecker.com/article/?a=Mission_Dolores_Park
- http://vs.aka-online.de/cgi-bin/wppagehiststat.pl
- the not-updated-past-2014 http://wikisearch.net/history/Dolores_Park (or http://wikisearch.net/editors/Dolores_Park )
Incomplete set of relevant papers:
[edit]"Can You Ever Trust a Wiki? Impacting Perceived Trustworthiness in Wikipedia " http://kittur.org/files/Kittur_2008_CSCW_TrustWiki.pdf
Email comments from members of the internal-wmf reading list:
- Maybe something we could do would be to survey community members about the characteristics of articles they use to determine quality, and find ways to include these in some new ORES models? Maybe something like this already exists?
- One thing that I've always thought would be interesting/useful is to get a sense of the distribution of edits across editors (e.g. the standard deviation of edits per editor) - who is the top editor of a page? How are edits spread across all the editors? It can be alarming, when you know how to read a page history, to realise an article is mostly edited by one person. I've seen this a lot in building the reading trending service: https://www.mediawiki.org/wiki/Reading/Web/Projects/Edit_based_trending_service - and I'd be interested in exploring this at the article level rather than over the duration of an hour. The work we did in mobile back in 2014 was supposed to kick off this kind of work by making readers more aware of editors. (A rough sketch of this signal follows at the end of this list.)
- [The above] suggestion matches something I often do for controversial topics, on the assumption that brigading and sock-puppeting are possible but less problematic than single-author articles. I do this by eyeballing the change list, but I could see a lot of potential ways to make that more digestible, as a "blame"-like graphic display of the history.
- Potentially, many talk page discussions could be re-framed as "comment" threads discoverable via highlights from the content, very similar to document comments in Google docs. This could help to highlight controversial content, and might also invite more contributions in the discussion. We did consider use cases like this during brainstorming for Parsoid content structure requirements, especially related to stable IDs. I think the latest thinking on this by cscott & others has trended more towards a hypothesis-like annotation model.
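A minimal sketch of the edit-concentration signal suggested in the list above, using only the public MediaWiki API. The "top editor's share of edits" metric is a placeholder; a real signal would probably want text-level authorship, as in WikiWho/WikiTrust.

# Sketch of the "who dominates this article's history?" signal discussed
# above: counts revisions per editor via the public MediaWiki API and
# reports the top editor's share of recent edits.
from collections import Counter
import requests

API_URL = "https://en.wikipedia.org/w/api.php"

def edit_concentration(title, limit=500):
    """Return (top_editor, share_of_edits) over the last `limit` revisions."""
    resp = requests.get(
        API_URL,
        params={
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "user",
            "rvlimit": limit,
            "format": "json",
        },
        timeout=10,
    )
    resp.raise_for_status()
    page = next(iter(resp.json()["query"]["pages"].values()))
    counts = Counter(rev.get("user", "(hidden)") for rev in page.get("revisions", []))
    editor, n = counts.most_common(1)[0]
    return editor, n / sum(counts.values())

editor, share = edit_concentration("Mission Dolores Park")
print(f"{editor} made {share:.0%} of the last 500 edits")

The same per-editor counts could also drive the "blame"-like display mentioned above, e.g. as a stacked bar showing each editor's share of the history.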