
Topic on Talk:Reading/Web/Projects/Mobile Page Issues/Flow

WhatamIdoing (talkcontribs)

One question that has never been answered: does it actually matter if readers see these maintenance templates? The hope has always been that readers will see a {{Copyedit}} template and be inspired to click the edit button. I'm not sure that it actually matters for typical readers. There are a few things that might matter under unusual circumstances (e.g., you might want to see the copyvio warning if you're planning to re-use the content), but overall, displaying these notices to logged-out readers might be a complete waste of time. I don't think that asking readers whether they were glad to see the notice will get at the real question: does seeing it actually change their behavior, rather than just what they say their behavior might be while someone asks them leading questions about it?

OVasileva (WMF) (talkcontribs)

@WhatamIdoing - we will be A/B testing these changes to see their effect on user behavior. We will be looking at clickthrough to the issue modal to see if it increases with the changes, and also to see whether people tend to view extra details on more severe issues. We will also look at whether the changes have any impact on mobile edits. Additional details can be found in phab:T191532, but I'll add a note about this on the project page as well.
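
For concreteness, here is a minimal sketch of how such a clickthrough comparison might be evaluated, assuming per-variant counts of banner impressions and issue-modal clicks. The function, counts, and variant labels below are illustrative assumptions, not the actual experiment's data or code:

```python
# Minimal sketch of a two-proportion z-test for modal clickthrough,
# assuming per-variant counts of banner impressions and clicks.
# All numbers below are illustrative placeholders, not real experiment data.
from math import sqrt, erf

def two_proportion_z_test(clicks_a, views_a, clicks_b, views_b):
    """Return (z, two-sided p-value) for H0: equal clickthrough rates."""
    p_pool = (clicks_a + clicks_b) / (views_a + views_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / views_a + 1 / views_b))
    z = (clicks_b / views_b - clicks_a / views_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: control vs. the new, more visible treatment.
z, p = two_proportion_z_test(clicks_a=1_200, views_a=400_000,
                             clicks_b=4_800, views_b=400_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A pooled z-test like this is only a first pass; the real analysis would also have to account for how users are bucketed into variants.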

WhatamIdoing (talkcontribs)

Thanks. I hope that you get actionable results.

Eloquenzministerium (talkcontribs)

I do believe that alerts about a lack of references and other quality or maintenance issues with an article should not be hidden from IP users.

They need that information to help them assess the reliability of the article, and they might want to contribute towards improving its quality.

Hiding from users of one platform what we display to desktop users as a matter of course feels completely arbitrary and lacks any reasonable justification.

WhatamIdoing (talkcontribs)

I believe that most readers are able to identify the presence or absence of references for themselves.

On the real question: if you believe that the purpose of the maintenance template is to encourage maintenance (i.e., it's not a disclaimer like de:Vorlage:Gesundheitshinweis, but a message asking the reader to please improve the article), and if those messages don't work in identifiable circumstances, then it's logical to stop sending them in those circumstances.

WhatamIdoing (talkcontribs)

@Tbayer (WMF), I just read your comment at phab:T200794. It looks like making these banners more visible increased the number of people reading them during the four-week study, but it decreased the number of people editing the article by ~5% at the English Wikipedia.[1]

Will you be watching this long-term? It is not unusual with a UI change for everyone to click on it once or twice to figure out what it is, and then to ignore it afterwards. Consider, e.g., when Facebook added links to Wikipedia articles about newspapers to its news feed. Everyone expected a spike in page views, but my spot check indicates that the October 2018 page views for those articles are about the same as (or even slightly lower than) the October 2017 page views. The spike came – and went.

So what we've got now is a ~4x increase in reading the banner while it was newly visible, but will that increased rate be sustained, or was it a temporary spike that appeared and then faded?

This matters. The communities could decide that 5% fewer edits is okay, if 1% of readers are getting more information about how Wikipedia works. That could be a lose-win combination that we decide to accept as a tradeoff.

But if that 1% drops back down to the previous minuscule level, then we're left with (1) fewer editors, (2) no additional information, *and* (3) a more cluttered interface. That sounds like a lose-lose-lose result.

[1] It is not hard to hypothesize a mechanism for this: "Oh, look, they already know about this problem. I don't need to fix it, then."

Tbayer (WMF) (talkcontribs)

@WhatamIdoing As noted in the comments you linked to, those were not the final results for the entire "four-week study", but preliminary data quickly queried before the end of the experiment. We'll publish a report soon with the fully vetted results for the entire timespan, including an assessment of which of the changes were statistically significant.

"It is not unusual with a UI change for everyone to click on it once or twice to figure out what it is, and then to ignore it afterwards" - yes, that's called a novelty effect, and we anticipated that to occur (see e.g. the task description of T200792). However, we also usually assume that for such a fairly simple feature, they don't last longer than a day or two (for the individual user). This is supported by results from the page previews A/B tests. Still, this is a good point and I'll be plotting the time series of the clickthrough rates for the 4+ weeks we have, to see if they show a decay over more than just the initial few days.

BTW, are the details of your Facebook pageviews analysis published somewhere? I'm curious how you controlled for other factors and trends that may have influenced the traffic to those pages in October 2017 and October 2018.

WhatamIdoing (talkcontribs)

You are being very generous when you call my spot check an "analysis".  :-) I didn't spend more than 10 minutes checking, the pages I checked were non-random (i.e., newspapers whose names I could remember offhand), and I controlled for nothing. But here's another spot check, with 10 newspapers and the same results (you'll have to switch the dates to the other year to compare them; I was looking at the total average page views for all articles combined). Here's another, this time with all the papers being "The Times" from different cities. It just seems to be very steady.
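
For anyone who wants to repeat this kind of spot check programmatically, here is a sketch against the public Wikimedia Pageviews API; the article list is an illustrative assumption, not the exact set of newspapers checked above.

```python
# Sketch of the same spot check via the public Wikimedia Pageviews API,
# comparing October 2017 vs. October 2018 monthly views for a hand-picked
# (non-random) list of newspaper articles.
import requests

API = ("https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
       "en.wikipedia/all-access/user/{title}/monthly/{start}/{end}")
HEADERS = {"User-Agent": "pageviews-spot-check (example script)"}

def monthly_views(title, start, end):
    r = requests.get(API.format(title=title, start=start, end=end),
                     headers=HEADERS, timeout=30)
    r.raise_for_status()
    return sum(item["views"] for item in r.json()["items"])

for title in ["The_New_York_Times", "The_Washington_Post", "The_Guardian"]:
    v2017 = monthly_views(title, "2017100100", "2017103100")
    v2018 = monthly_views(title, "2018100100", "2018103100")
    print(f"{title}: Oct 2017 = {v2017:,}, Oct 2018 = {v2018:,}")
```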

Tbayer (WMF) (talkcontribs)

Well, using the public data available in the pageviews tool is reasonable for a spot check to exclude the possibility that the Facebook feature increased views to those pages by, say, several hundred percent. But it's hard to reliably detect smaller effects that way, for the reasons mentioned. In phab:T191429 we used geolocation and referrer data to narrow down the initial impact further. If there is interest, @MNeisler (WMF) might be able to re-run her queries from back then, to see whether the Facebook-referred US views to the six articles from F16923024 have returned to pre-rollout levels. Anyway, this is off-topic here ;) but feel free to follow up on the linked task.
