Reading/Trust


Intro and Problem Statement

This is just a holding place to jot down ideas about how we could give readers of Wikipedia a better gauge of how much they can trust Wikipedia content. The current problem statement is:

Shorter: Wikipedia's value to the world is limited by the distrust caused by the potential for misinformation, coupled with a lack of information about content quality.

Longer: Most readers believe that all Wikipedia content is equally reliable (or unreliable), when in fact some pages or revisions are much more likely to be neutral, grounded in fact, or comprehensive. The lack of any signal about validity, together with the seemingly random nature of vandalism and misinformation, means that students, teachers, journalists and other professionals are reluctant to use anything from Wikipedia in their professional work, even though much of Wikipedia is of higher quality than the professionally written sources they would use for verification. How much this doubt affects usage is unknown, as is how large a lost opportunity for global education it represents, given the pervasiveness of Wikipedia. The lack of signal is also a missed opportunity to build research literacy in our users by exposing the means of knowledge creation.

Impact

While trust is often cited as a concern for educators and professionals, it is unknown how greater trust would change behavior and application. Is trust something readers only think about when asked? Would surfacing more information make people more or less 'trusting' in general? One study (summary here) found that users who saw excerpts from talk pages rated article quality lower, but reported the opposite effect:

"Though the between-subjects data clearly shows that those who saw any discussion rated quality below those who didn’t see a discussion, people told us that they thought reading the discussion raised their perception of the article’s quality, and of Wikipedia in general."

Finally, is this a reader issue or an editing issue? Someone pointed out that tools built for readers would also be potentially useful for editors. Building for both audiences at once is dangerous, but simply repairing and promoting an existing tool for editors might be a way to start.

Suggested Approaches/Signals

  • Use ORES to generate a score/signal based on various inputs (see the sketch after this list):
    • number of editors
    • recency of the last edit
    • revision scores
    • level of discussion
  • Link talk page elements to their revisions/sections
  • Blame display (a word-level sketch also follows this list)
  • Edit history visualizations
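
As a rough illustration of the ORES item, the sketch below queries the public ORES scoring service (as it existed; the endpoints may since have moved) for an article-quality prediction on a single revision. The enwiki context, the articlequality model, and the placeholder revision ID are assumptions for illustration; the other signals listed above (editor count, discussion level) would need their own sources.

    import requests

    ORES_URL = "https://ores.wikimedia.org/v3/scores/{context}/"

    def article_quality(rev_id, context="enwiki"):
        """Fetch the ORES articlequality prediction for one revision.

        Returns the predicted class (e.g. "B") and the per-class
        probabilities, which could feed a reader-facing signal.
        """
        resp = requests.get(
            ORES_URL.format(context=context),
            params={"models": "articlequality", "revids": rev_id},
            timeout=10,
        )
        resp.raise_for_status()
        score = resp.json()[context]["scores"][str(rev_id)]["articlequality"]["score"]
        return score["prediction"], score["probability"]

    # Example (placeholder revision ID):
    # prediction, probabilities = article_quality(123456789)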
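For the blame display, a minimal word-level sketch: given revision texts in chronological order (fetched by some other means), attribute each word of the latest text to the revision that introduced it, using Python's standard difflib. A real display would work at a finer grain and handle reverts, but this shows the shape of the computation.

    import difflib

    def blame(revisions):
        """revisions: list of (author, text) pairs, oldest first.
        Returns a list of (word, author) pairs attributing each word
        of the newest text to the revision that introduced it."""
        attributed = []
        for author, text in revisions:
            new_words = text.split()
            old_words = [w for w, _ in attributed]
            matcher = difflib.SequenceMatcher(a=old_words, b=new_words, autojunk=False)
            next_attr = []
            for op, i1, i2, j1, j2 in matcher.get_opcodes():
                if op == "equal":
                    next_attr.extend(attributed[i1:i2])  # carry attribution forward
                else:
                    next_attr.extend((w, author) for w in new_words[j1:j2])
            attributed = next_attr
        return attributed

    # blame([("Alice", "the cat sat"), ("Bob", "the big cat sat down")])
    # -> [("the", "Alice"), ("big", "Bob"), ("cat", "Alice"),
    #     ("sat", "Alice"), ("down", "Bob")]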

Existing Work

Discussions

Quality assessment tools for Wikipedia readers (2010)

Email thread: https://lists.wikimedia.org/pipermail/wiki-research-l/2015-August/004703.html

In Journalism:

  • A project to facilitate collaboration between Wikimedians and The International Fact-Checking Community.
  • A prototype for aligning trust indicators with the schema.org standards, in order to implement a simple, best-practices-based system for including trust indicators on news platforms.[1]


[1] https://scu.edu/ethics/focus-areas/journalism-ethics/programs/the-trust-project/indicators-of-trust-in-the-news/

Tools

Incomplete set of relevant papers:

"Can You Ever Trust a Wiki? Impacting Perceived Trustworthiness in Wikipedia " http://kittur.org/files/Kittur_2008_CSCW_TrustWiki.pdf

Email comments from members of the internal-wmf reading list:

  • Maybe something we could do would be to survey community members about the characteristics of articles they use to determine quality, and find ways to include these in some new ORES models? Maybe something like this already exists?
  • One thing that I've always thought would be interesting/useful is a sense of how edits are distributed across a page's editors - who is the top editor, and how are edits spread across all the editors? When you know what you are doing, it can be alarming to view a history and realise an article is mostly edited by one person. I've seen this a lot in building the reading trending service (https://www.mediawiki.org/wiki/Reading/Web/Projects/Edit_based_trending_service), and I'd be interested in exploring this at the article level rather than over a duration of an hour (see the sketch after this list). The work we did in mobile way back in 2014 was supposed to kick off this kind of work by making readers more aware of editors.
  • [The above] suggestion matches something I often do for controversial topics, on the assumption that brigading and sock-puppeting are possible but less problematic than single-author articles. I do this by eyeballing the change list, but I could see a lot of potential ways to make that more digestible, such as a graphical, blame-like display for history.
  • Potentially, many talk page discussions could be re-framed as "comment" threads discoverable via highlights in the content, very similar to document comments in Google Docs. This could help to highlight controversial content, and might also invite more contributions to the discussion. We did consider use cases like this during brainstorming for Parsoid content structure requirements, especially related to stable IDs. I think the latest thinking on this by cscott and others has trended towards a Hypothesis-like annotation model.
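
A hedged sketch of the editor-concentration idea above: pull the authors of a page's recent revisions from the public MediaWiki API and report the top editor's share of edits. The page title, the 500-revision window, and the "mostly one person" threshold are illustrative assumptions, not a worked-out metric.

    from collections import Counter
    import requests

    API_URL = "https://en.wikipedia.org/w/api.php"

    def editor_shares(title, limit=500):
        """Return (top_editor, top_share, per-editor counts) over a
        page's most recent `limit` revisions."""
        resp = requests.get(API_URL, params={
            "action": "query",
            "prop": "revisions",
            "titles": title,
            "rvprop": "user",
            "rvlimit": limit,
            "format": "json",
            "formatversion": 2,
        }, timeout=10)
        resp.raise_for_status()
        revisions = resp.json()["query"]["pages"][0]["revisions"]
        counts = Counter(rev.get("user", "(hidden)") for rev in revisions)
        top_editor, top_count = counts.most_common(1)[0]
        return top_editor, top_count / sum(counts.values()), counts

    # editor, share, counts = editor_shares("Example")
    # if share > 0.5:  # illustrative threshold for "mostly one editor"
    #     print(f"{editor} made {share:.0%} of the recent edits")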