This is nice, but I am a rather experienced translator. How could I benefit from the structured tasks (for example, to add a link) to improve any article I see that has poor links? In short, how do I activate a structured task on an article I have chosen myself? This would be very useful for any user. Thanks. --Christian 🇫🇷 FR ⛹🏽 Paris 2024🗼 (talk) 16:13, 12 July 2024 (UTC)
Talk:Growth/Personalized first day/Structured tasks
Hello, @Wladek92, thanks for reaching out, and what perfect timing! Currently the only way to access structured tasks is via the Newcomer homepage (Special:Homepage), however the Growth team is actually exploring ways to surface structured tasks (if they exist) on any article. You can read about the initial project idea here: Growth/Constructive activation experimentation.
The Growth team wants to ensure these structured task suggestions aren't distracting for established editors, while keeping them accessible enough to support new account holders who need the additional guidance (or any editor interested in accessing the task).
If we were to consider one structured task, like "add a link", where would you expect to see or access the suggestion?
- An advantage is also that it separates the several 'main' pages (the site one, the user one, the suggested-tasks one...). Today it appears when I click on my user name, in a tab beside the user page and user talk page, but it is becoming rather a common tool, like the translations. --Christian 🇫🇷 FR ⛹🏽 Paris 2024🗼 (talk) 07:28, 15 July 2024 (UTC)
- Hi, and thanks. I just entered a wiki where the functionality exists -> https://cs.wikipedia.org/w/index.php?title=Speci%C3%A1ln%C3%AD:Domovsk%C3%A1_str%C3%A1nka&source=personaltoolslink&namespace=-1 and activated the three options in my preferences. A bit hard since it is not in French, but fr.wikipedia.org also has the functionality, so I can explore it there. As for the link, I think an entry 'Structured tasks' just under 'Translate content' or 'Learn to edit' (on Wikipedia sites) in the sidebar would be good, since it is related to editing work. It could then lead to a page with a tabbed toolbar for the operations: add links, add images, ... --Christian 🇫🇷 FR ⛹🏽 Paris 2024🗼 (talk) 09:48, 13 July 2024 (UTC)
Thanks for the feedback, @Wladek92!
For older accounts and auto-created accounts (when you navigate to another language wiki), you will have to enable the Homepage in your Preferences. The good news is that all newly created accounts on Wikipedia have those preferences enabled by default, so new account holders are able to access the Homepage and Structured tasks without adjusting preferences.
I'm not sure I totally follow your suggestion, are you thinking there should be a portal to explore Structured tasks that is linked from the "Main menu"? Or are you suggesting that we should add a way to access Structured tasks for a particular article once you click on a link from one of the menus? Can you explain a little further? (Feel free to respond in French if you prefer, we are lucky enough to have a native French speaker on our team).
I assume you are considering how this might work from a desktop device, which is certainly important, and how I do the majority of my editing. But just so you know, the Growth team is trying to first think about how this will work on mobile. In other words, we are starting our design process with mobile users in mind first, which may influence how we approach this problem. For example, the sidebar "Main menu" present on the desktop version is tucked under a hamburger menu on mobile. The mobile "Main menu" offers a more limited set of options. Do you think a "Structured task" or "Suggested edit" link makes sense on that menu on mobile?
Thanks again for spending the time to offer feedback while we are at this critical, early-thinking stage of this project!
Hi, yes, I usually work from a desktop, and now from my tablet, but in desktop format and with the legacy Vector skin. I think for technical people it is richer and very functional. What I wanted to say is that we could add a 'Suggested tasks' link in the sidebar, for example under the last item 'Download QR code', leading to the complete sequence of what the suggested tasks do. Today that would be the contents of the Homepage. This allows exploring the different suggested options on random articles proposed by the system.
Now, when displaying a specific page, we keep the tabs | Page | Discussion | and add | Suggested tasks | before | Read | Edit | History |. Thus we have everything we can do with the current page. It is the same as the 'Suggested tasks' of the sidebar, but restricted to the currently displayed page only (add links, add an image...).
I cannot comment on the mobile version, which I don't use enough.
Of course these are only ideas.
Thanks, I appreciate the feedback! I'll discuss these ideas with our designer as we work on the associated tasks (T368188 / T370539).
In T110 we say people use Wikidata, but we do not explain which information they retrieve. Could someone please complete the sentence to make it more precise? Thanks. --Christian 🇫🇷 FR ⛹🏽 Paris 2024🗼 (talk) 06:42, 12 July 2024 (UTC)
We have been discussing some of the challenges of patrolling "add a link" and "add an image" structured tasks with patrollers on Arabic Wikipedia, Spanish Wikipedia, and Russian Wikipedia.
We are considering several improvements suggested by community members, and I want to open up a discussion here as well:
- How should we make these structured tasks better to ensure high-quality edits?
- How should we make these structured tasks easier / faster to review for patrollers?
Hello! I found this thread quite by accident ;)
The current implementation of structured tasks has several problems:
1. They are very easy to make, which allows users not to think about what they are doing.
2. You can make a lot of them. This is partially solved by limits, but the limits cannot be lowered forever.
3. Multiple links can be added in one edit, which makes it hard to undo just the wrong link.
4. There is no automatic/semi-automatic notification to a newbie that their edit has been reverted for such and such a reason.
To deal with these issues, you can do the following:
1. Create a similar module, but for experienced editors (500+ edits).
2. Instead of (or alongside) "add a link", they will have "check link".
3. When checking, the editor gets articles that have already been edited, with links added.
4. They check each link and, if it is incorrect, they mark it and leave a comment.
5. The comment is posted on the contributor's talk page.
6. If one contributor has had many (60%) such reverts, then access to the tool is blocked for them.
It seems to me that this will solve problems with new editors as well and will attract experienced editors to review contributions.
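The revert-rate rule in point 6 could be as simple as a threshold check. Here is a minimal, purely illustrative sketch; the function name, the minimum-sample value, and the exact semantics are assumptions for the sake of the example, not anything specified in this thread beyond the 60% figure:

```python
# Hypothetical gating rule for the proposed "check link" workflow:
# a contributor whose suggested links are judged incorrect too often
# loses access to the tool.

REVERT_THRESHOLD = 0.60  # the 60% figure proposed above
MIN_REVIEWS = 5          # assumed: don't judge on a tiny sample

def should_block(reviews):
    """reviews: list of booleans, True = link judged incorrect (reverted)."""
    if len(reviews) < MIN_REVIEWS:
        return False
    return sum(reviews) / len(reviews) >= REVERT_THRESHOLD
```

In practice one would also want a time window or decay, so a contributor who improves is not blocked forever on old mistakes.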
@Iniquity: "3. There can be multiple added links in one edit, which does not make it easy to undo the wrong link.": You can use Diffedit for this.
Oh, thanks for this script, never seen it before! :)
Thank you very much for sharing this very interesting gadget, @NGC 54!! We have so many gadgets available, it is difficult to know all of them.
I will test it with my volunteer account.
Thank you @Iniquity for this feedback! Our team is planning to conduct some further outreach to gather ideas this week, and then team members who aren't active patrollers will take part in a demo from a patroller so we can better understand frustrations related to "add a link" and "add an image" tasks. After this, we will review ideas like yours so we can start to evaluate the effort and impact of each one. I will be sure that your ideas are represented in our discussion.
I really like this idea of creating an easier "check link" workflow for experienced editors. Do you have any thoughts as to how this would fit into the current patroller workflow? Would this be an "advanced" task available on the newcomer homepage, but only to users with 500+ edits? Or would this be a tool somehow integrated into the Recent changes feed? A special editing gadget that is opt-in only?
Thanks again for the feedback, I appreciate your insight!
Thanks for the detailed answer :) I'm glad that you listen to our opinion!
It seems to me that initially it should be integrated into the newcomer homepage. Then, if everything goes well, perhaps we can create a full-fledged patrolling tool in the future.
I was also thinking about how we can speed up the checking and removal of maintenance templates. See phab:T274909. That is, the priority would be to display pages with a template, and then check them.
Hi @MMiller (WMF) is there a 'Newcomer task' for removing redlinks? See for example https://en.wikipedia.org/w/index.php?title=Violette_Impellizzeri&diff=1067612170&oldid=1067612034 - where the redlink is removed with the tag of a newcomer task within visual editor. In general this kind of edit is *really bad* since it removes links to articles that could exist in the future, and newbies definitely shouldn't be encouraged to do this.
Hello Mike
No, there is no such task.
Most tasks we offer are edit suggestions, where we guide newcomers to work on something while they retain full editing capacity. Here, I guess from the banner that the newcomer was asked to fix the tone of the article. To do so, newcomers can edit the entirety of the article.
Our guidance doesn't tell them to remove red links, so I think this user thought it was a good idea. I see that you left this user a message explaining why it wasn't, which is the best thing to do.
We have some other tasks where users are much more guided, and edit Wikipedia through precise tasks. See our Add a link project for more information.
Thanks for pointing this out, @Mike Peel. Putting myself in the shoes of the newcomer, I think perhaps they might have interpreted the red link as a "broken link", i.e. "links are usually blue, and this one is red, so something must be wrong". Do you know if removing redlinks is something we see less experienced editors do erroneously? Maybe @Sdkb has seen something along these lines?
Thanks for the ping. This hasn't been something I've seen, although I'm not sure I see enough newcomer edits that I would've noticed it if it was happening.
To give some context, redlinks are a tricky area, because they often indicate a problem (e.g. adding a non-notable person to a list), but not always. The circumstance in which they're warranted is when there's a notable topic that should have an article but just doesn't yet. So the decision of whether a redlink should exist or not requires an understanding of notability, which is obviously a fairly advanced skill. I see even many more experienced editors removing them overzealously, sometimes citing w:WP:Write the article first. There's just this natural pressure to take something that's normally bad and easy to identify and overgeneralize to it being always bad and try to eradicate it at scale.
In terms of the beginner experience, I think it'd be a good idea for the VisualEditor to do a better job explaining what redlinks are when you click on them. For instance, in the screenshot at right, there's nothing telling an editor what the redlink means. I think it'd be good to put something just below the title, where the short description would go for a bluelink, saying perhaps Unwritten article (learn more) ("unwritten" hopefully connotes both that the article does not currently exist and that someone thinks it should exist).
One last thing to note about redlinks is that quite often, they reflect articles that do exist in other languages, and the best way to handle them is with w:Template:Interlanguage link. For instance, at the instance Mike came across, there's an article in Dutch, and I've added an interlanguage link to it. If we wanted to get fancy, we could add a "search for this in other languages" tool that'd assist with the creation of interlanguage links. But I think that's much farther off/lower priority than just helping folks understand what they are.
Thanks - good to know that this wasn't a specific newbie task, I agree with Sdkb that this is a tricky area that's best to be avoided by newbies. I also agree that VisualEditor should explain these better.
I shared this feedback with the Editing team.
There's an AbuseFilter at enwiki that tags edits by IPs and newcomers who are removing links from Wikipedia articles. I looked at some of the recent results, and they were not bad. In most cases, it looked like the link should have been removed. In a few, it would have been ideal to replace the link rather than removing it ("Chimney Stack" vs w:en:chimney stack), but overall I don't think these were generally bad edits. Looking more statistically, about a quarter of such edits are quickly reverted.
Also, less than 10% of these edits used the visual editor. This suggests that even if the visual editor handled this better, it might not make a significant difference.
> less than 10% of these edits used the visual editor.
At en.wp, you mean?
Yes. (Mike and Sdkb are both enwiki editors.)
Problems shared here can be universal. :) There are certainly wikis with a higher use of VE, and all edits made using Growth features use VE. Users there would certainly benefit from some changes regarding red links.
Hello @John Broughton @Sdkb @NickK @Nick Moyes @Galendalia @Barkeep49 @Pelagic @Czar @LittlePuppers @HLHJ and everyone else who is following along!
First results
Thank you all for helping us design and build the "add a link" feature (which now has its own separate project page). We deployed the first version of it about ten weeks ago in four wikis, and we've since expanded it to ten wikis. It's going well so far! We've recently posted the data from the first two weeks of the feature, which we used to get an initial read on whether users seem to be engaged, and whether it is resulting in valuable edits. I invite you all to take a look, and reply with any of your reactions and questions, or ideas of what to look further into.
Basically, we see newcomers doing a high volume of these edits (more than from the conventional unstructured tasks), and that these edits have a low revert rate (lower than the conventional unstructured tasks). Some users do dozens or even hundreds of these edits, with one user on Arabic Wikipedia having done over a thousand. We're not seeing anyone abuse the feature or cause runaway vandalism by, say, clicking "yes" on everything.
We're now assembling a larger dataset with more than the first two weeks of data, and we'll be posting a more in-depth analysis in the future. I want to refrain from drawing any big conclusions before then, but from this initial data, I am optimistic that the "add a link" task is valuable for getting newcomers engaged. We'll still want to look into important questions, like whether these newcomers move on to other kinds of edits, whether this task is more engaging on mobile or desktop, and where in the flow users get stuck. Those questions will help us decide how to improve "add a link", and also how to build our next structured task, "add an image" (please check that out if you haven't yet!)
Wikimania
Wikimania starts tomorrow, and I hope that anyone who is registered for it can attend our session about "add a link". We'll be going into detail on the algorithm, how we built it, and the results so far. There will also be some time for Q&A. The session will be on Sunday, August 15, at 15:45 UTC, and the details on the session are here. Since this Wikimania is virtual, I hope many of you will be attending part of it!
- Are you tracking how often users who use this "Add a link" feature return for another "Add a link" session?
- Or how often those who "Add a link" as their first edit return for another editing session at all (say at least 24 hours later) vs. standard new accounts? I.e., as an intro to Wikipedia, how does this tool fare for retaining potential editors?
@Czar -- yes, we're keeping track of data that will allow us to answer all those questions. That analysis is another level deeper than what we've done so far, and we're going to be getting to the sorts of questions you're asking in September, when we'll have more bandwidth from our team's data scientist. I'm definitely looking forward to digging into those numbers, and I'll post the results so we can discuss!
Thanks for the update! I'll take a look at the data and join the session tomorrow, but glad to hear that the initial results seem promising!
One question that comes to mind from whether these newcomers move on to other kinds of edits—is there any particular pathway for that? It'd be neat if, after a user has done a bunch of suggested edits, they're invited to check out their homepage, the task center, or even the community portal.
@Sdkb -- we haven't yet built an explicit pathway for what we're calling "leveling up", in which we might say to newcomers, "You've done 20 of these link edits, with no reverts! You may be ready to try a more difficult task, like adding an image." It's something that we're planning to do later this year as part of our broader "positive reinforcement" work (this project page is still quite bare, but we'll be expanding it).
One other thing we did recently that's in this vein is to let the user switch out of "suggestions mode" into the visual editor. This means that if they're working on link suggestions, but they notice a copyedit that needs to be made, they can switch over and make that copyedit -- it provides them a door to discover other kinds of editing. We don't yet encourage them to make that switch -- we still need to think about the right way to do that, but the opportunity is there for them. Please let us know any thoughts you have on how we might do this well, and see you at the session on Sunday!
For anyone who wasn't able to attend the Wikimania session, the video is available to view here: https://www.youtube.com/watch?v=ar034Gha24o. Thank you to @Sdkb, who joined the discussion after and weighed in with good perspective!
We can think of several editing workflows that could be structured, along with the help of algorithms. Here are some examples. Which of these workflows do you think have the most potential to be structured? Which ones would be useful for the wiki and which ones not useful? Are there others you can think of?
- Add a link: algorithm recommends words or phrases that should be blue links, on articles that don't have many blue links. Newcomer decides whether the link really should be added and adds it.
- Add an image: algorithm recommends images from Commons that might belong in the article. Newcomer decides if it is a good fit for the article and adds it.
- Add a reference: algorithm recommends sentences or sections that need references. Newcomer goes out to find references and adds them in.
- Add a section: algorithm recommends section headers that could be used to expand a short article. Newcomer finds sources and adds content.
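To make the "add a link" idea concrete: the production recommender is a trained model, but the simplest dictionary-based variant of the same idea can be sketched in a few lines. Everything here (function names, inputs) is illustrative, not the actual algorithm:

```python
import re

def link_candidates(text, titles, existing_links):
    """Return phrases in `text` that match a known article title
    and are not already linked in the article."""
    candidates = []
    for title in titles:
        if title in existing_links:
            continue
        # whole-word match of the title in the article's plain text
        if re.search(r"\b" + re.escape(title) + r"\b", text):
            candidates.append(title)
    return candidates
```

A real system would also rank candidates by how valuable the link is and skip overlinked common terms; the newcomer's yes/no judgment on each suggestion stays the same either way.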
I think adding a table, and the aspects that go with it, should be an advanced task, as a lot of articles have tables, some basic, some advanced.
@Galendalia -- interesting. Do you know of some way to identify articles that need tables but don't have them?
Adding wikilinks is not particularly useful. (Also, a "link" can be either a wikilink - internal - or an external [http] link; the latter are generally undesirable, at least on the English Wikipedia, and it is helpful to distinguish between the two.) Adding maintenance templates is also generally not useful.
Every task that is listed consists of two things - (a) changing an article, and (b) finishing the edit by publishing it (ideally, adding an edit summary). Starting out (as an editor) by making a minor change, such as fixing a typo, is a good way for editors to learn that second thing, which they will be using every single time that they edit. By contrast, adding a section involves (1) adding content (sentences), (2) adding citations, and (3) finishing by publishing.
In other words, "fix a typo" or "make a minor change" should, ideally, be the first structured task that an editor learns, because it incorporates the "finishing the edit by publishing it" micro-task. And once the editor has learned to do that micro-task, other tasks will be easier.
@John Broughton -- I think this is a good point, that every task teaches wiki skills (e.g. adding an edit summary) that are not part of the core task itself (e.g. adding wikilinks). We should keep in mind that as we structure the experience of editing, we may also be teaching other universal wiki skills and concepts. Other examples might be teaching users that their edit is immediately public (except in wikis with flagged revisions), or that they can see their edit on the history page.
I thought this was what the Wikipedia Adventure was for? It shows the basics of using WP; however, there is no obligation to go through it. If there were, 3/4 of our Teahouse questions would stop coming in. Galendalia (talk) 06:50, 20 May 2020 (UTC)
@Galendalia -- good question! Our team looked at the Wikipedia Adventure (and many other attempts at onboarding newcomers), and we've learned a lot. In summary, our current theory is that a good way to help newcomers stick around Wikipedia is to help them quickly have a positive editing experience. We think that if they can make a good contribution within minutes and understand its value, they will be excited and want to keep going. Whereas if they have to go through a long tutorial, they might lose patience and not stick around. So this idea, "structured tasks", is about how we can give newcomers a real editing experience, but with guardrails so that the experience is positive for them and for the wiki.
More background information: In a study on the Wikipedia Adventure, while a lot of users claimed to enjoy the experience, it unfortunately didn't statistically increase their retention, or any other important metrics. But in a study about the Teahouse, it was shown that being invited to the Teahouse does statistically increase retention. So our team took this all to mean that there is something valuable in the personal connection that happens with getting a question answered (although we know it takes a lot of time from experienced editors). That's why we decided to build the mentorship module for the newcomer homepage. And, to your point, as we deploy the mentorship module on more wikis, we are continually trying to strike the balance of giving newcomers a personal connection, while not overburdening the mentors who answer the questions.
I think the not-sticking-around part comes from bullying by admins and failure to follow the "don't bite the newcomers" rule. Many times, from my start until today, I have had admins telling me what to do and what not to do, as well as adding their own POV about why I should or should not be doing something.
Two recent examples: last night I asked a question on IRC about BLP, for clarification from someone who I thought would have the answer, and their response was "You should find something else to do as you have bitten off more than you can chew as a newcomer." The second was today: an editor pinged me about removing the gnome and fairy tags from indefinitely blocked users' pages to clean up the active user lists, as they contained some 50 or so blocked users from years back to the present. That editor opened an ANI against me because he/she didn't get the answer they wanted.
I think if admins and other people were to stay out of the way of new members, dropping their in-your-face routines (this does not apply to all, but to some), and let normal editors be mentors, this would work great. There are definitely cliques in the admin and sysop teams that seem out to get newbies, and instead of being helpful they are rude. When I first joined, I went into the en-help channel on IRC and got chastised because I did not have a cloak and had not been a Wikipedian for 3 months. When I asked about these, I was pointed to two links, neither of which was helpful. I watched this same user in the IRC channel, and they are rude to everyone in the tone of their messages; I even PM'd them to let them know I felt they were being hostile, not only towards me but towards others as well, and the response I got was "Deal with it", then I got kicked from the room. I requested a courtesy vanishing last Friday. Before I knew it, those I have worked with on various things were posting messages asking me to come back and continue my contributions. So I decided to come back, and again, the same hostility towards me.
So in short, I would recommend that the mentors not be admins, sysops, clerks, ArbCom members, etc. Just normal everyday Wikipedians who volunteer to take someone on. How we would define who is an experienced editor, I guess, would be my next question.
I was the person Galendalia asked "about BLP for clarification". They had asked for help in private message to me with a dispute resolution case they were mediating for on en-wiki. It was a particularly complex case and they had already pinged two others on-wiki for assistance with it. The "quote" that Galendalia is posting here is not an accurate quote. My response to them was actually: "It's a pretty involved situation you're asking for advice on, you may have bitten off more than you can chew right now." and "I see that you've pinged Robert McClenon and Nightenbelle, I would await their responses." As you can see, the tone of my reply is quite a bit different than the "quote" they are offering here.
They are also complaining about us asking them to not idle in the help channel until they meet the requirements for idling in the channel as specified at en:Wikipedia:IRC/wikipedia-en-help. They were repeatedly pestering numerous people about getting a WM cloak and were pretty upset that they were not getting a cloak despite not meeting the minimal criteria specified at m:IRC/Cloaks. They kept obtaining various different cloaks, trying to get past the channel rules regarding idling in -help without meeting the criteria for idling or helping. Honestly, I think I was pretty patient and polite given the level of intensity from them regarding this.
This rudeness to helpees they speak of, and this quote of "Deal with it", I do not know what they are referring to. If this is referring to me, it is entirely inaccurate, and they never PMed me with anything of the sort. I'm actually very patient and polite with helpees, even ones who are difficult and/or UPE.
Frankly, I'm not appreciative of this blatant mischaracterization of my actions.
Thanks for sharing that perspective, @Galendalia. We know for a fact from research that hostility toward newcomers drives them away. Here is one of the most important papers about it, and here is another influential research project. I think it's definitely hard to improve the culture of a wiki, and I think it's great that you're trying to be a force for positivity in your work. So far, the mentors that we've recruited seem to be generally encouraging to newcomers, and I think you have a good idea that we should make sure it's clear that many people can be a mentor -- it doesn't only have to be the most experienced and involved editors on the wiki.
I can feel Galendalia's pain. Shortly after becoming an administrator earlier this year, I thought I'd go and try out IRC chat, as I'd never used it and thought I ought to get a feel for the place. I not only found it incomprehensible, but I was permanently blocked by a so-called 'helper' whose manner towards me was appallingly unwelcoming. There is no accountability or complaints system at IRC, so I will never recommend any newcomer on en-wiki to go there unless major changes happen, or unpleasant/unhelpful editors are kicked out. The person I encountered wasn't an admin, so unpleasant attitudes towards newcomers aren't something unique to those with extended rights. Finding mentors/helpers with the right interpersonal skills to deal with inexperienced users is critically important.
I'm sorry that Nick Moyes had a bad experience, although I must say that it was somewhat self-inflicted on their part. There is accountability on IRC, and there is a process for complaints and appeals. For a more complete and accurate explanation of what actually happened, please read the thread at en:User_talk:Waggie#Your_attitude_on_IRC, where I go into great detail about why this happened. I am also willing, with Nick Moyes' and Jeske's (as the other involved person here) permission, to publicly release the logs of the encounter. There was no "permanent block"; bans in -help are for 24 hours by default. Secondly, as soon as they were identified as a known "good" user, I lifted the ban immediately.
Looking through the list of tasks at https://en.wikipedia.org/wiki/Wikipedia:Task_Center...
As I've mentioned at a previous stage, I still think anti-vandalism has a ton of potential to be a structured task for newcomers (it somewhat already is with WikiLoop Battlefield). Categories and copy editing both sound good. There are also some more niche tasks that could be easily structured, such as fixing links to disambiguation pages that pop up in mainspace.
@Sdkb -- I remember when you mentioned that, and @Zoozaz1 brought up WikiLoop Battlefield as an example of how reverting vandalism is like a structured task. I guess my open question is still whether newcomers would do a good job of judging vandalism, given their low wiki experience. You recommended that we check in with some Wikipedians who do a lot of edit patrolling. I can go seek some out -- is there anyone in particular who you would recommend or tag?
I'm not too sure about specific users, but I'd recommend posting at w:Wikipedia:Counter-Vandalism Unit. @Kudpung might know a better person to reach out to or have thoughts.
My issues with the rollback that everyone gets are:
1. Inexperienced
2. Not trained
3. Causes edit wars
I recommend one or all of the following:
A. IP users are not allowed to use the rollback feature.
B. Only people who have graduated from the CVUA should have rollback rights (I see a lot of new users getting the right without any type of training).
C. To use the built-in rollback, the user must be registered with 3 months of experience.
Hi @Galendalia -- thanks for thinking about this. We've been talking a lot about easy editing tasks for newcomers to do, and we wanted to hear from someone in CVU because of the idea that maybe reverting simple vandalism is something newcomers could help with. It seems like an interesting idea, because on the one hand, some vandalism is really obvious, but on the other hand, newcomers know little about Wikipedia or vandalism, and might not have the judgment required. What's your take? Could you imagine newcomers being given something like a very simple version of Huggle, and asked to revert obvious vandalism? If I'm reading your previous comment correctly, it sounds like maybe you would say it's not a good idea.
Hi @MMiller (WMF): Even though I have been on WP just over a month, I feel the inexperience would be a major hindrance. As I stated above, they need to complete the CVUA and be on WP for at least 3 months. This will allow new editors time to process the policies and learn from their mistakes, rather than reverting a valid entry. There are sometimes subtle entries which would probably not be noticed unless you are looking for them, like no source listed in the diffs. Wait, what is a diff? That is a question I see users asking a lot.
Thanks, @Galendalia. It sounds like your general advice is that reverting vandalism takes some experience and knowledge. Got it. But it also sounds like you have an interesting story, if I may ask -- how did you find your way to reverting vandalism so soon after joining Wikipedia? What caused you to try that type of editing in the first place? What were the very first edits you did?
Honestly, it seemed like the only thing I can do without having someone revert everything I did, or go off on a tangent about questions I asked without even answering the question I posed in the first place. I pretty much do two things: CVU and dispute resolution. I am also in the process of rebooting Spoken Wikipedia, as there is plenty of interest in it; that will be the third thing. I've been trying to maintain the active user lists, and I'm getting a lot of flak for that, because in one instance it requires removing the tag or userbox from someone's user page, and I only did this for those who are permanently blocked. However, as soon as I did it, people were all over me and reported me to ANI, and I'm getting nothing but crap for housekeeping.
Also pinging @Revi (WMF), who has a perspective on this from Korean Wikipedia, which doesn't have any sort of bots for reverting simple vandalism.
I would very much like to have one more: correcting typos / improving language. Wikipedias have a lot of articles that are labelled as needing proofreading. If we can use some spellchecker or dictionary (e.g. for identifying words that are very similar to the dictionary ones but possibly misspelled) or flag some style problems (e.g. common puffery words like 'outstanding' or editorializing like 'interestingly'), that would give us a good task for a simple first edit. Beyond that, Ukrainian Wikipedia also has a good list of problems at uk:Вікіпедія:Проект:Якість.
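The "words very similar to dictionary ones" idea can be sketched with nothing more than Python's standard-library difflib. This is a toy illustration, not any existing wiki tool; the dictionary and sample text below are invented:

```python
import difflib
import re

def find_likely_typos(text, dictionary, cutoff=0.8):
    """Flag words that are not in the dictionary but are very close
    to a dictionary word -- likely misspellings rather than names."""
    suggestions = {}
    for word in set(re.findall(r"[a-z]+", text.lower())):
        if word in dictionary:
            continue
        # get_close_matches returns the best fuzzy match above `cutoff`.
        close = difflib.get_close_matches(word, dictionary, n=1, cutoff=cutoff)
        if close:
            suggestions[word] = close[0]
    return suggestions

# Invented toy data, for illustration only.
dictionary = ["interesting", "outstanding", "article", "encyclopedia"]
text = "This artcle is outstanding and intresting."
print(find_likely_typos(text, dictionary))
```

A real task feed would of course need per-language dictionaries (e.g. from aspell or hunspell) and much more care around names, but the core "close but not exact" filter is this simple.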
NickK, yeah, that could potentially be part of copy editing. Developers, you'd want to coordinate with the folks at https://en.wikipedia.org/wiki/Wikipedia:Typo_Team for that.
@NickK -- I agree that would be a perfect task for newcomers. And I think you've hit on the main problem: how to automatically generate lists of potential spelling and grammar corrections across dozens of languages? @John Broughton pointed me towards the Typo Team's "moss" tool, which does this for English. Also, engineers on the Growth team pointed out the aspell and hunspell libraries, which have many languages. Do you know if Ukrainian Wikipedia already does anything like that? Where do the problems listed at uk:Вікіпедія:Проект:Якість come from? Are they from maintenance templates placed by users, or from some automation?
@MMiller (WMF): We had multiple discussions about libraries; we have several bot owners who maintain their own lists. There are some lists at uk:Вікіпедія:Список найтиповіших мовних помилок internally, or Неправильно — правильно ("Incorrect — correct") externally (it cannot be copied wholesale, as some entries might still be acceptable in some contexts, so a human check will be needed). If this is the only issue, I think we can come up with some solution.
Regarding uk:Вікіпедія:Проект:Якість, yes, they are maintenance templates placed by users.
I know AutoWikiBrowser had this feature and I was going to start in on some of them; however, I was informed today that the feature has long been gone. I know there is a database source somewhere that contains dictionary words, though that does not necessarily resolve synonyms or other word choices. It would be great to have a bot that could make those changes based on the article's language tag, and also fix dates based on the article's date-format tag.
I also agree with adding categories and typos as potential tasks. Bigger picture, I'm wondering if individual communities could input something into a template to generate these tasks, rather than everything having to be done uniformly on the backend; perhaps through categories, which this tool could render in nice forms.
@Barkeep49 -- thanks for weighing in. I think that the way we have started to build newcomer tasks is in-line with how you're thinking about it. Right now, the feed that newcomers get runs off of maintenance templates, like these. Most wikis have big backlogs of these templates, but maybe one day in the future, newcomers (or others using this feature) could churn through the backlogs, and communities would be incentivized to keep tagging articles with them. That said, the idea we're talking about here, "structured tasks", is about these tasks coming from an algorithm, as opposed to from maintenance templates. Perhaps both sources could continue to be options, and communities could regulate which ones of the pipes (so to speak) they turn on and off into these tasks feeds.
To go off of this, it would also depend on the user's grasp of the language. There are small differences between British English and American English, and the same goes for Spanish, of which I believe there are three major varieties. I know of a few editors from other countries who try to correct what they assume are typos but which, in the context of the sentence, are not. That may pose a problem for automating or templating this.
I've just posted my support for typo-fixing in the General Thoughts section above, but I'd like to reiterate it as a preferred first task, and to try to understand why fixing typos as a structured task is seen as so difficult to implement across different language sites.
English Wikipedia already has Wikipedia:Lists of common misspellings; Wikipedia:AutoWikiBrowser/Typos and even an article on Commonly misspelled English words, plus a list of variations of acceptable spellings that should NOT be corrected like colour>color and vice versa (Wikipedia:List of spelling variants).
Even if other language Wikipedias don't currently have any similar internal lists, surely such spell-check lists are available from elsewhere? It could even be an ideal opportunity to engage wider editing communities in building up their own lists of common errors, which could then be incorporated into this task.
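A community-maintained corrections list like the ones mentioned above could drive a very simple scanner. The sketch below is hypothetical (the three-entry misspelling table is invented, not an excerpt of any real list); it deliberately matches only fully lower-case words, so capitalised proper nouns and sentence-initial words are left for a human to judge:

```python
import re

# Hypothetical excerpt of a community-maintained corrections list.
COMMON_MISSPELLINGS = {
    "recieve": "receive",
    "seperate": "separate",
    "occured": "occurred",
}

def suggest_corrections(text):
    """Return (position, wrong, right) triples for known misspellings.
    Only whole lower-case words are checked, so 'Recieve' at the start
    of a sentence, or inside a proper noun, would not be flagged."""
    out = []
    for m in re.finditer(r"\b[a-z]+\b", text):
        fix = COMMON_MISSPELLINGS.get(m.group())
        if fix:
            out.append((m.start(), m.group(), fix))
    return out

print(suggest_corrections("They recieve mail at two seperate addresses."))
```

The list-based approach (as opposed to fuzzy dictionary matching) has the virtue that every entry was vetted by editors, so false positives are rare, which matters when the suggestions go to newcomers.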
I do tend to feel that anti-vandalism might not be an ideal structured task as it does require some understanding of what is and isn't bad faith editing, and is possibly also prone to being abused if bad edits are let through. English Wikipedia already has edit filters and Cluebot for removing the worst of the worst - but what about other languages? Does manual input here have a role to play?
Hey Nick, I just wanted to point out, as I stated earlier, that they removed that function from AWB. Also, you have to have a really good reason to gain access to the application at all; I got denied a couple of times before they accepted my reasoning. I think part of the difficulty may be the differing symbols/characters each language uses, which would require every language to have its own version of spell check.
I'm sure that there are spell checkers already in existence that cover the majority of Wikipedia languages - see https://webspellchecker.com/ , for example.
Grammar and punctuation fixing came to my mind. Most educated native speakers have an intuitive sense of wrongness when they see an ungrammatical sentence. Having a feel for encyclopaedic tone is a less common skill, but the improvements don't have to be perfect.
Acquiring the software to identify problem sentences for non-major languages would be harder for grammar than spelling, I imagine.
@John Broughton @Sdkb @Nick Moyes @NickK @Pelagic @Barkeep49 -- since we were all talking about how it would be valuable to have copyediting as a structured task, @Tgr (WMF) and I did some research to look into it. We talked to @Beland, the creator of "moss", a typo-detection script on English Wikipedia. We learned how the tool works, and talked about prospects for doing similar things in other languages. You can see our notes here (@Beland, please add to or correct them!) We're going to keep thinking, learning, and posting about the possibilities around copyediting.
Sounds great; thanks for the update!
@MMiller (WMF): I would make spelling correction a separate task from copyediting and label it as such; I personally think of copyediting as more of a grammar/structure/clarity thing than spelling correction. That's not to say that fixing typos is unimportant or something we shouldn't do, but it might be more clear for other editors (and you should probably deal with categories differently for each). LittlePuppers (talk) 01:37, 4 June 2020 (UTC)
Hi @LittlePuppers -- thanks for weighing in. That distinction is not something I had thought about. And I think you're right -- the more we've thought about how we might build a structured task that would recommend spelling corrections, the more we think that such a task would only recommend spelling corrections, and not other kinds of grammar edits, which would require totally different algorithms to identify. Where would you say that the phrase "typos" fits in? Do you think typos are more about spelling, or more about punctuation or something else?
Thanks MMiller (WMF). I'd say that spelling is solidly within the realm of typos, and something like phrasing is solidly within the realm of grammar, while punctuation is somewhere in between. It's a bit harder to say, but I think that punctuation would fit into the category of typos if it's an obvious and entirely unambiguous error (for example, putting two periods instead of one at the end of a sentence), but more in the category of grammar when it's something less clear-cut (such as over or underuse of commas, or a period vs. a semicolon).
To generalize a bit more, typos are unambiguous mistakes based on basic rules (be it a misspelled word, or some other typographical error), while copyediting or grammar (whatever you decide to call it) focuses on improving language (be it sentence or article structure, phrasing, punctuation, etc.) in a way that makes it more clear or easier to understand, even if it wasn't strictly "wrong" before. To link to two projects on en.wp I think you're familiar with, typos are in the realm of the MOSS project and grammar/copyediting is in the realm of the Guild of Copyeditors. LittlePuppers (talk) 02:08, 24 June 2020 (UTC)
Thank you, @LittlePuppers. This actually helps a lot, especially where you said "typos are unambiguous mistakes". This has implications for our prioritization and design of different structured tasks. For "unambiguous mistakes", we can probably create a very confident algorithm that can feed easy edits to newcomers, which they could accept or reject. Copyediting or grammar is a more advanced task, requiring the newcomer to create/produce the change on their own. It's like the difference between a true/false question ("This word should actually be spelled this way. True or false?") and an open-ended question ("What is a better way to phrase this sentence?").
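That accept/reject framing maps naturally onto a confidence threshold: only suggestions the algorithm scores highly become yes/no structured tasks, and everything below the bar stays open-ended. A hypothetical sketch (the scores, field names, and threshold are invented, not the Growth team's actual pipeline):

```python
def triage_suggestions(suggestions, threshold=0.9):
    """Split machine suggestions into easy yes/no tasks (high confidence)
    and items left for open-ended human copyediting (low confidence)."""
    easy, open_ended = [], []
    for item in suggestions:
        (easy if item["score"] >= threshold else open_ended).append(item)
    return easy, open_ended

# Invented example scores, for illustration only.
suggestions = [
    {"text": "teh -> the", "score": 0.99},
    {"text": "reword this sentence", "score": 0.40},
]
easy, open_ended = triage_suggestions(suggestions)
```

The design consequence is that the threshold, not the model alone, controls how "unambiguous" the tasks shown to newcomers feel: raising it trades task volume for newcomer trust.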
Hello @جار الله -- I'm the product manager for the WMF Growth team, and I work with @Dyolf77 (WMF). He said it would be okay if I ping you here, where we are having a conversation about "structured tasks". In this conversation, we have been talking about automated ways to identify spelling errors in the wikis, so that we can point them out to newcomers to fix. We've talked about the moss tool in English Wikipedia, and I've learned that you built something similar in Arabic Wikipedia with JarBot. We're trying to figure out how possible it would be to build similar things in many wikis. I'm hoping you can answer some questions about your work. Thank you!
- Which dictionaries/spellcheckers does JarBot use, and which one is best?
- Does JarBot scan every revision when it is made? Or does it follow its own path through the articles?
- Approximately how many spelling corrections does it make per day?
- How does JarBot avoid making changes to peoples' names or names of locations, or other words that cannot be found in a dictionary?
- Does it assign a score for how likely something is to be a misspelling, with some having higher scores and some lower? Or does it simply decide that a word is either misspelled or not?
- Does JarBot automatically make the corrections? How accurate is it? In other words, how often are its corrections reverted?
- How easily do you think something like this could be made for another language?
Hello @MMiller (WMF)
Which dictionaries/spellcheckers does JarBot use, and which one is best? I use list of the most common mistakes in Arabic, the list is made by arwiki editors.
Does JarBot scan every revision when it is made? Or does it follow its own path through the articles? It depends on the task: sometimes by new revisions, and sometimes by scanning the database.
Approximately how many spelling corrections does it make per day? I don't know, maybe 50-100.
How does JarBot avoid making changes to peoples' names or names of locations, or other words that cannot be found in a dictionary? There is a list of words that the bot must avoid, but our common-mistakes list doesn't include names and locations.
Does it assign a score for how likely something is to be a misspelling, with some having higher scores and some lower? Or does it simply decide that a word is either misspelled or not? The script doesn't use AI to make decisions (maybe in the future).
Does JarBot automatically make the corrections? How accurate is it? In other words, how often are its corrections reverted? Yes, the bot makes the corrections automatically, and 99.99% of them are correct.
How easily do you think something like this could be made for another language? I don't know about other languages, but in Arabic, and maybe the languages of the Middle East, the start would be from scratch, and the work would be difficult because there are no valid word lists or comprehensive dictionaries.
Best regards.
Thank you for the quick reply, @جار الله. These answers are helpful for now, and I will get back in touch if we decide to work on a project around spelling.
Typo-fixing seems like a task that would fit well in a mobile interface. Subtitling movies on Commons and translating subtitles also spring to mind. Adding "lang" templates would also be very useful and make the Typo Team's life easier (flagging that these words are Latin, these are Japanese, and so on).
More creatively, the WikiProject Guild of Copy Editors is always looking for volunteers to read through selected articles and review and fix them. That is not as readily done on a small interface.
You are building this into a reader app. Maybe link it to what the reader is doing? If they are confused, help them add a "clarify" inline tag. If it needs a citation, have them add that inline tag (everyone knows that tag, even if they never edit). If it is US-centric, let them add "globalize" inline tags.
A good simple interface for this might be OpenStreetMap-style comments to articles; "I got lost here, because you did not define this mathematical term" and suchlike. Scan the text and suggest some inline tags in which the comment could fit as a "reason=" parameter, in this example "clarify". There's a related project for doing something similar in Huggle.
And then let them resolve tags. If a section is templated as needing expansion, ask them to submit a comment suggesting sources that could be used to expand the section, as plain URLs. If they spend time on a "citation needed", the app could tell them to click the tag for guidance on adding a reference (a few times only). Or a banner saying: "This article has a photo request. If you have or could take a photo to donate to this article, please [add it]".
Why are there no individual tags (in history, recent changes) for certain types of suggested edits? For example, the tag for adding links should be different from the tag for copyediting, so that the activity of newcomers can be known exactly.
There's some discussion about that in task T266474 and task T236885
The interests should be improved. You cannot select only astronomy, only geometry, or only algebra... You cannot select a certain country. You cannot select only zoology, only botany, only mycology, only genetics, or any other branch of biology. I do not see psychology. What happens if a user wants zoology, mycology, geometry, psychology, Belgium, France and Pakistan? Should that user select General science, Biology and...?
I tested the tools on Romanian Wikipedia, and I think the suggested edits could be improved (following templates is not enough, because there are articles with issues but without templates for those issues). Copyediting could also search for misspelled words and so on. Linking could also search for articles that have a large number of words compared with the number of links. Reference finding could also search for articles that have no references, or that have sentences without references, rather than only displaying articles with the templates from MediaWiki:NewcomerTasks.json. Updating articles could also search for articles that were last edited a long time ago on the local wiki but have been edited intensively in other languages (taking into account Wikidata, the number of bytes added, and reverted edits; I think it would require a powerful AI). Expanding articles could also search for articles that have a small number of words on the local wiki but a larger number of words in other languages (taking into account Wikidata; I think that would require a powerful AI, too).
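The "large number of words compared with the number of links" idea is easy to prototype as a link-density heuristic over raw wikitext. This is a toy sketch with an invented threshold, not how the Growth team's link-recommendation algorithm actually works:

```python
import re

def link_density(wikitext):
    """Ratio of internal [[...]] links to words in a piece of wikitext.
    Articles with many words but few links are 'add a link' candidates."""
    links = len(re.findall(r"\[\[[^\]]+\]\]", wikitext))
    words = len(re.findall(r"\w+", wikitext))
    return links / words if words else 0.0

def needs_links(wikitext, threshold=0.02):
    # Invented threshold: fewer than ~1 link per 50 words flags the page.
    return link_density(wikitext) < threshold

sample = "The [[Rosette Nebula]] is a large nebula located near one end of a giant molecular cloud."
print(link_density(sample))  # 1 link / 16 words = 0.0625
```

A production version would need to ignore links inside templates, references, and infoboxes, and to normalise by section rather than whole article, but the candidate-selection idea is just this ratio.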
ro:Special:PageHistory/MediaWiki:NewcomerTasks.json: The problem with Expanding articles is that there are a lot of stub-templates, and MediaWiki:NewcomerTasks.json can only use a limited number of stub-templates; if you use ~800, Expanding articles no longer works.
Interests: I select Physics, but the first suggested edits I receive are ro:Nebuloasa Rozeta, ro:Ganymede (satelit), ro:Progress (navă spațială), ro:Lista expedițiilor pe ISS, ro:Cassiopeia (constelație), ro:Descoperirea și explorarea sistemului solar and ro:New Horizons?! It is annoying, because I would like to add Astronomy as an interest.
Today, I received ro:Alexie al III-lea Angelos on "Reference finding" ("Computers and internet")...
And ro:Dryocalamus on "Expanding articles" ("Computers and internet")?!
Sorry for my late reply.
The topic-matching system we use can triage articles by topic, with more or less success. We took the 39 topics with the best matching results from a list of 64 topics. This leads to some limitations, but the team in charge of the algorithm is still working on improving it.
Even if you can find some false positives, topic matching overall provides accurate results, and newcomers work on them. I suggest that we deploy suggested edits on your wiki for newcomers, and after a few weeks we check the data to see whether they are actually making some edits.
Wanted to toss out another idea for research/consideration. meta:wikidata-game contains multiple gamified tasks that I'd consider friendlier for both new and mobile users than some of the suggestions I've seen for structured tasks. They have a few benefits: (1) having already been built/designed for short-term interaction, (2) any improvement to this tool is extended to the GLAM workers and others using this tool with new users, (3) the benefit of contributing to Wikidata instead of a single Wikipedia is that the benefits pass downstream to all language Wikipedias whose infoboxes are automatically pulling from the centralized Wikidata metadata. At the very least, worth checking out. I would be curious whether user research confirms that these types of games/interactions are interesting for new users (and thus could just use extra loops to make continued use more compelling) or whether these types of efforts do not impact user growth (which I think its maintainers and other GLAM workers would want to know).
Hi @Czar -- thanks for bringing this up. We are familiar with the Wikidata games, and they helped inspire these ideas around structured tasks. One of the issues with the Wikidata games is that newcomers don't understand about Wikidata, and are therefore not motivated to edit it -- but they do want to edit Wikipedia. I suppose, though, that edits to Wikidata could be wrapped in an experience that makes it clear that the user's edits will ultimately affect Wikipedias -- it's just that they themselves will not necessarily be able to go to a Wikipedia article and see their own handiwork. What do you think about this?
Regarding GLAMs, I think we're thinking along the same lines with structured tasks and campaigns. If the Growth features provide a feed of articles to work on, we could imagine communities setting up campaigns around specific topic areas, assembling lists of articles to work on, and then using Growth's suggested edits feed to make them available to campaign participants.
About the magnified benefits of contributing to Wikidata: I'm thinking about the other side of the coin. Would you be concerned that it would give newcomers too much power to allow them to essentially edit many Wikipedias at once through Wikidata?
newcomers don't understand about Wikidata, and are therefore not motivated to edit it
fwiw, if it were built into the WP app, I doubt newcomers working through a random queue would care whether it's technically WD or WP (or care about the difference). It's also pretty cool to see that you're impacting X times the number of Wikipedias with your edits. In your testing, have users shown an interest in going back to admire their handiwork? The novelty wears off after maybe the first check, if even that first check is necessary. Once you're into processing a random queue, it's just the thought of knowing that it's helping that keeps you going, in my experience. I would expect a nice visualization of one's contributions to be more impactful than seeing a parameter added to a collapsed infobox in the app, for instance.
re: GLAMs, what follows is definitely a strong opinion loosely held, but I've participated in and organized a number of edit-a-thons, and while the dashboards show X amount of edits (usually reflecting regulars who continue editing in the defined timeframe rather than actual newcomer contributions), they often miss the trees for the forest. It's far better to hook someone into fixing typos on Wikipedia and gain their lifetime contributions than to teach them a fairly intensive editing process in an hour that they are unlikely ever to revisit.
re: the other side of the coin, this may be surprising, but I'd say the opposite! Wikidata (and Commons, for that matter) has a higher tolerance for mistakes because they lack either the people power or the tools to review the volume of edits coming through the site. (Compare ENWP, which has a bevy of tools and editors dedicated to patrolling even the most random of edits.) Either way, their communities are not going to want junk edits, of course, and if they view a tool as a vector for abuse, they'll oppose it, but I would anticipate both WD and the other-language WPs that use WD seeing such an accessible tool as a boon to their basic work. mix-n-match is highly important for matching metadata properties against trusted authority sources—anything that improves their own tools while bringing in and onboarding new users? Dreams come true!
In your testing, have users shown an interest in wanting to go back and admire their handiwork?
There are two main things that make me think users want to visually confirm/admire their Wikipedia edits, but I acknowledge that neither of them are conclusive -- it's more that they inform a theory we have. The first is from looking at the data on the newcomer homepage's "impact module", which lists the articles recently edited by the newcomer, along with how many pageviews the article has received since the newcomer's edit. In this image, you can see the impact module (lower right) for a user who has edited exactly one article (Diana Rossová). We see lots of newcomers clicking on the titles of the articles they've edited in the past. This may be because they want to continue editing the article, or it may be because they want to confirm that their edit is still there. More analysis would shed more light, but we haven't been able to prioritize this.
The other thing is anecdotal: our sense from events and editing workshops is that when newcomers make their first edit to Wikipedia, a lightbulb goes on for them when they realize their edit is live and has actually changed Wikipedia. We want to make that moment happen for newcomers even when they're not at an event. In your experience at edit-a-thons, is this a real effect that we should try to cause?
Taking this all together, I think you're making good points that (a) a newcomer doesn't necessarily need to realize the difference between Wikipedia and Wikidata, and (b) there would certainly be good ways to help a newcomer see that their edit to Wikidata has impacted Wikipedia. We should think about this for future structured tasks -- it's more that I've wanted to shy away from the types of edits that only affect Wikidata, and don't reverberate into the Wikipedias.
It's far better to hook someone into fixing typos on Wikipedia and gain their lifetime contributions than to teach them a fairly intensive editing process in an hour that they are unlikely ever to revisit.
This totally makes sense, and it's something that's come up as we've planned features that can help edit-a-thons. The question has been: what are edit-a-thons for? Is it more important to generate a bunch of articles, or to get as many newcomers as we can off to a start on their Wikipedia journey, even if that means less content from the event? In your experience, have you seen longtime Wikipedians come from these events?
re: the other side of the coin, this may be surprising but I'd say the opposite!
I guess that the higher tolerance for mistakes on Wikidata is what I'm worried about. Since we're targeting newcomers with these features, I think we should expect a fair amount of bad edits -- I'm thinking, like, someone who indiscriminately taps "Yes" on all the image suggestions they get. If those images get added directly to the Wikipedia they're on, then we can have high confidence that the edits will be patrolled, and someone will realize that user isn't using discretion, and then maybe warn/block them. But if the edits are going to Wikidata, and Wikidata doesn't have bandwidth to patrol them closely, then the bad edits would be making their way on to potentially dozens of Wikipedias without those wikis having a good way to patrol them.
On the other hand, though, it's possible to filter one's Wikipedia Watchlist (or Recent Changes) to include Wikidata edits. Do you know if that's commonly used?
This is a thread to talk about the newest set of mockups, shown in a presentation linked from the project page. There are four essential questions that the Growth team is thinking about as we work with these mockups, listed below. We hope community members weigh in on any of these questions. You are also welcome to just say what you do and don't like about the designs, ask questions, or give ideas of your own.
- Should the edit happen at the article (more context)? Or in a dedicated experience for this type of edit (more focus, but bigger jump to go use the editor)?
- What if someone wants to edit the link target or text? Should we prevent it or let them go to a standard editor? Is this the opportunity to teach them about the visual editor?
- We know it’s essential for us to support newcomers discovering traditional editing tools. But when do we do that? Do we do it during the structured task experience with reminders that the user can go to the editor? Or periodically at completion milestones, like after they finish a certain number of structured tasks?
- Is "bot" the right term here? What are some other options? "Algorithm", "Computer", "Auto-", "Machine", etc.? What might better help convey that machine recommendations are fallible and the importance of human input?
One thing I think could enhance the user experience is integrating the topics into the categories system. I would suggest adding an unobtrusive search bar where users can search for other topics, populated with all the categories on Wikipedia (or those containing a certain number of articles), so that if, say, a linguistics expert, or any expert on a more obscure topic, comes along, they can more easily participate, be engaged, and be more likely to edit.
I generally like concept A better, as it has the simplicity of both but also has the ability for complexity and introduction to broader editing. Some of the concepts in B, though, can be integrated into A like the summary screen. I also think it could be clearer for the user that the pencil icon is meant to edit the link suggestions.
Regarding question 5, I think that is the ultimate goal of the experience: making editors comfortable and confident enough to edit on their own, so editing the article in ways not considered by the program should be easy and encouraged. With 7, I don't have specific suggestions, but generally it would be a good idea to introduce it slowly and give them more tools and traditional ways of doing things, so that when they eventually reach an experience without the tasks/bot, it feels easy and natural. Of course, that would be optional for the user, but easing it in would be a great way to get newcomers to understand Wikipedia.
Just as another suggestion: with copyediting there is a chance a user will change the page from, for example, British to American English, and it should be specified somewhere that you shouldn't change it from one to the other.
Thanks for checking out the designs, @Zoozaz1. I like your idea about topics. We've considered having a free-text field, just as you say. One model for that is the way that Citation Hunt works, which searches categories.
I also agree that both design concepts can be made to seem more like the other one, which gives us some flexibility.
@Zoozaz1 Personally, I am not a big fan of the Category system for newcomer workflows like this -- it's very hit-or-miss in what it covers, and what we discovered really quickly with Citation Hunt is that newcomers work much better from larger, more inclusive categories (i.e. WikiProjects or custom sets). I have been involved in #1lib1ref in my professional capacity, and organizing in general with my volunteer hat on -- and my impression is that category navigation tooling is rarely "inclusive" enough for the kinds of "relatable topics" that folks are looking for. Some topics need Wikidata, or WikiProjects, or whole category trees, or broader undefined sets that only machine learning can create.
That's a fair point. I was thinking more of just giving newcomers the option to use it (and maybe collapsing it at the start) just if they are deeply interested in a specific category or aren't interested in the other listed category so it would sort of be an option of last resort.
@John Broughton @Sdkb @NickK @Nick Moyes @Galendalia @Barkeep49 @Pelagic @Czar @LittlePuppers @HLHJ -- thank you all for participating so helpfully in our previous discussion about structured tasks (the summary from that conversation is here). We took your thoughts seriously in making the next set of designs, and I wanted to call you all back to this page to check out our progress and let us know your reactions. We'll be making some engineering decisions in about three weeks, and hope to have as much community input as we can get! The new materials are in this section, and include static mockups, interactive prototypes, and questions that we're thinking about. Thank you!
Thanks @MMiller (WMF): for the ping! I am strongly in favour of the concept A. I can list at least three problems of the concept B:
- creating yet another editing mode would make the experience more confusing (vs. the same editing mode but with hints, as in concept A)
- step B-08 is very un-wiki: while any mistake on a wiki can be easily fixed, correcting this one becomes difficult
- in addition, B-08 means a newbie will likely start by having an edit conflict with themselves. Edit conflicts are already frustrating, creating favourable conditions to start with one (an AI edit in parallel with a regular visual edit) is really, really bad.
Hi @NickK -- it's been a long time since you posted this comment, but we've made some progress and I wanted to get back to you. We ran user tests on both Concept A and B to decide which to build. The summary of the findings is here, and we decided to build Concept A (while incorporating a couple of the good parts of Concept B). About your ideas on edit conflicts: I think that's a good point. When the user switches out of "suggestions mode", we will probably want to prompt them to either publish what they've done so far or explicitly discard the work, before switching to the full editor.
Next, we'll be finalizing mobile designs and testing desktop designs. I'll be posting those things as we have them, and I'll ping you to take a look if you have time.
Thanks for the ping, MMiller! My initial thought is that context can be pretty important; it's much harder to tell whether a link is appropriate or not when e.g. you can't see if it or something similar has been linked above.
Thanks for the quick response, @Sdkb. Having the context of the whole article probably enriches the experience for the newcomer, and maybe helps them understand, "I am editing Wikipedia right now." For the specific concern about seeing that the link has been made above, we're able to program that into the algorithm: only suggest the link if it's the first occurrence. But yes, I think the broader point about context makes sense.
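The first-occurrence rule described above could be sketched roughly like this. This is purely an illustrative sketch, not the Growth team's actual implementation; the function name and the way candidate phrases are supplied are assumptions:

```python
import re

def first_occurrence_suggestions(article_text, candidate_phrases):
    """Keep only link suggestions that target the first occurrence
    of each candidate phrase in the article (illustrative sketch)."""
    suggestions = []
    seen = set()
    for phrase in candidate_phrases:
        key = phrase.lower()
        if key in seen:
            # Phrase already handled: suggest it at most once per article.
            continue
        match = re.search(re.escape(phrase), article_text, re.IGNORECASE)
        if match:
            seen.add(key)
            # Record the phrase and the offset of its first appearance.
            suggestions.append((phrase, match.start()))
    return suggestions

text = "The palm tree grows tall. A palm tree needs sun. The sun is bright."
print(first_occurrence_suggestions(text, ["palm tree", "sun", "palm tree"]))
```

A real implementation would work on parsed wikitext and skip text that is already inside an existing link, but the core idea is the same: deduplicate suggestions so only the first occurrence is offered.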
My gut tells me the sooner we can get them into a real experience the better. So that would be my answer to Q1 & 3 but I would think some A/B testing is really the right answer to that.
Thanks, @Barkeep49. We are actually running user tests of both design concepts this week, which means we'll have videos of people new to editing using both of them. That may help us figure out which design concept to engineer with first. The option I think you're talking about, though, is building both designs, and giving each to half the users. I'll talk with our engineers about how easy it would be to do that. Perhaps some large portion of the work is shared between the two of them.
I think you need something in between the two options you present. So, a couple of principles: (1) you want to isolate the user from the full editing experience (for example, they shouldn't have to select the text to link, then go to a menu to tell VE that they want to create a link), (2) you want to provide explanatory material - which could well include what they would do in the "real world", and (3) you want what the user actually does to resemble what they would do in the real world.
Specifics:
(1) If you want to allow the user to go into full VE edit mode to fix something [why not?], then after the user clicks the (general edit) icon, and you confirm that he/she wants to do copyediting or whatever, save the edits that the user has done [do the full "Publish" process], then let the user do whatever he/she wants, and then provide a way for the user to continue with linking. Don't build a separate navigation system for jumping from linking to general editing and back to linking. (So A-13 leads directly to A-16.)
(2) "B" has some explanatory material (B-02); "A" is lacking. But neither explains that in the "real world", you select the text to link, then go to a menu to tell VE that you want to create a link. A brief screencast would be ideal, but even just showing a couple of screenshots would do. And, of course, it's critical not to force the user to go through all of this when he/she comes back for another editing session.
(3) "A" isn't good, and "B" is worse, at mimicking the real linking process. The real process looks much more like A-10 than A-07.
I can tell you really looked closely and thought about these designs, @John Broughton. Thank you!
For (1), I think your idea to let the user publish their suggested edits before switching to the full editor makes a lot of sense. Especially when designing for mobile, the priority is to only ask users to do one thing at a time -- and your idea is in the spirit of reducing how many things the user is juggling (their "cognitive load"). I will definitely bring that up with our team's designer.
For (2) and (3), this is sort of core to our challenge here. Like you said, we want to isolate the user from the full editing experience, but we also want them to somehow be able to learn about the full editing experience and how to add a link the traditional way. I'm worried that if we were to explain the traditional method before sending users through the streamlined method, it would be confusing ("Why show this to me if I'm not about to use it?"). Perhaps a better way is to conclude the workflow with the option to learn the traditional method ("Learn how to do this task on your own with the Visual Editor!")
What do you think?
I don't think it's that confusing to tell a new user "Normally, you'd start the linking process by selecting some text, then going to the menu and selecting the link icon (small screenshot). However, for this structured task, we've already selected the text and told the editing software that you're looking at creating an internal link."
If you're really concerned about throwing too much at the user, then make this optional (click on "How are links normally created?").
Also, as an aside, I disagree with calling this "streamlined". I think this should be thought of as "truncated" or "shortened". Streamlined implies something that is better in most aspects. (Who objects to "streamlining" a process?) But in this case, there are tradeoffs.
As another aside, if you really wanted to provide the user with something closer to the full experience, while providing guidance, then the user clicking to start the process, or the user clicking to go to the next suggestion, would result in the user seeing the software (a) select text, and then (b) mark it as a possible internal link. Then and only then would control of the screen be yielded to the user in order to do the next steps.
I also have several relatively minor points:
(1) The mockup keeps using the term "AI suggestions", but why not just "Suggestions"?
(2) The sample edit summary is way more detailed than any human would provide - something like "Added N internal links, using computer-generated suggestions".
(3) Regarding "Linked article is of poor quality" [reason not to link], the implication is that the user will check the quality of each suggested link before linking (how?). More importantly, that reason is directly contradicted by this, from https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Linking: "Do not be afraid to create links to potential articles that do not yet exist."
(4) Why doesn’t software (“AI”) handle “Already linked earlier in the article” [and how does the user know about such prior linking if he/she is only seeing part of the article]?
@John Broughton -- thanks for these thoughts! Here are my responses and questions back to you:
(1) Another WMF product manager who I showed these to actually asked the same thing, and I think it's a good point. Most software and apps don't tell you that suggestions come from AI -- they're just "suggestions". For instance, the Facebook or Instagram feeds aren't the "AI feeds"; they're just "feeds". But on the other hand, we talked about how transparency is a core value in our movement, and so we want users to know where information comes from and which work is being done by machines. Therefore, we're trying to figure out a way to convey that these suggestions come from machine learning, but without being cumbersome.
(2) Do you think that having the more detailed edit summary is a bad thing? We wouldn't want to do something that would end up being a burden for patrollers. Or maybe it conveys to newcomers an unrealistically detailed idea of what they should be putting into their own edit summaries later on? One thing that I'm noticing from our initial user tests is that users seem to like seeing a review of all the edits they've done before they publish -- it helps them confirm that they like what they edited. But that doesn't necessarily have to happen in the edit summary.
(3) We've actually iterated on that list of "rejection reasons" a bit since making that mockup. Here are the ones that we're working with now. What do you think of these?
- Everyday concept that does not need explanation (e.g. "sky")
- Already linked earlier in article
- Selected text too narrow (e.g. "palm" instead of "palm tree")
- Selected text too wide (e.g. "tall palm tree" instead of "palm tree")
- Incorrect link destination (e.g. linking "sun" to "star")
(4) Yes, we intend to program the algorithm so that it only recommends a link for the first occurrence of the word in the article. I suppose then we should not need to include that as one of the "rejection reasons". On the other hand, maybe we should keep it in as a check to make sure the algorithm is in fact behaving as expected. I will bring it up with the researcher who works on the algorithm.
(1) If you simply disclose, somewhere in the background/contextual information (and you may already do this) that suggestions are AI-generated (though I prefer "machine-generated"), I think you've satisfied any need for disclosure. There is real value in simplification (avoiding distractions).
(2) It's not a question of a burden for patrollers, it's that you're showing new users something that (a) they should not do [the community absolutely does not want this level of detail in edit summaries], and (b) could be intimidating. By (b), I mean that a user could easily say "Wow, I have to do a lot of bureaucratic work - describing in detail ALL my changes - if I want to change something. I'm going to find something else to do that has less busywork involved."
(For patrollers, a tag that says "Structured task" or similar would be helpful, if not already in place; but tags are invisible to users until an edit is published.)
(3) I don't understand the last of these bullet points - if the user thinks the link is wrong, he/she can pick another one. Maybe you mean "Cannot find a good link destination"?
(4) Thank you for checking with the researcher working on the algorithm. However, I'm not sure that this addresses the point of whether the user can easily see/scan the prior parts of the article - or, perhaps more to the point, whether you're going to imply, to users who want to do everything right, that they always need to check the prior part of an article before creating a link, even when doing so is time-consuming.
As far as this being feedback on the algorithm, I would hope that you'd be tracking reverts on edits made by new users (again, tagged edits), particularly for reverted edits because more experienced editors think the new user has overlinked.
Hi @John Broughton -- I'm sorry it's taken me a while to get back to you. I wanted to have the results from our actual user tests of the two design concepts so that I could give you some good responses. We posted the summary of the user test findings here, and we've decided to go with Concept A (plus some of the best parts of Concept B). Here are some responses:
(1) I think it's a good idea to make it clear at the outset that the suggestions come from AI, and then save space by not including the phrase "AI" for the rest of the workflow. That does sound simpler. In our user tests, every user clearly understood what we meant by "AI suggestions", and so I'm less worried that the concept will be confusing.
(2) This makes sense. Perhaps the edit summary should say something like, "Added 3 wikilinks". How about that?
(3) These various "rejection reasons" actually were somewhat confusing in the user tests, and we have some re-wording to do. We will be prompting the user to choose a better link destination if they think it's wrong. Perhaps a plainer way to phrase that reason would be, "Link goes to wrong article".
(4) Yes, I think we will definitely make sure the user knows they should make sure to only link the phrase the first time they see it. In Concept A, the user will be able to scan the whole article, but it's a good reminder that we should tell them it is appropriate to read the whole article through as they make edits to it.
Regarding reverts, we'll definitely be tracking that. It's already something we track for the "classic" suggested edits feature that already exists (which encourages users to copyedit and add links on articles that have the corresponding maintenance templates). Right now, the revert rate on suggested edits is about equal to the revert rate on the edits newcomers make on their own without suggested edits. We think that is a good sign that suggested edits is not encouraging shoddy edits, and we'll need to make sure that "add a link" does not increase the revert rate.
In terms of next steps, we'll be posting and testing designs for the desktop version of this feature (the designs you saw before were only for mobile), and I'll tag you for your thoughts on those, if you have time.
(2) Anything less than 30 or 40 characters is fine, including what you suggested.
(3) I think the point I was trying to make was that the list, as I understand it, gives reasons why the user didn't create a wikilink when a link was suggested. If that's in fact what the list is for, then the final reason for not making a link is that the user couldn't find a good choice (and yes, the suggested link wasn't useful, but the point is that the user, searching for a good link, couldn't find one).
Overall, I'm a firm believer that actual user experience is "ground zero" for making good decisions about a UI and online-process, so I'm glad to hear that you're learning so much from user testing.
Answers to the listed "essential questions" and then general thoughts
- Guided editor experience (teaching-centered structure) definitely appears to be the way to go if the goal is to integrate new editors into making bigger edits (is it? I expanded on this below), i.e., yes, the first edit, whether editing a suggestion or of one's own initiative, is the opportunity to learn VE
- If the point is not to make bigger edits later, but just to recruit mobile editors into mobile-friendly tasks, is adding a link the way to go? Per the Android structured tasks, that wouldn't be about recruiting desktop editors but a different category of low-effort maintenance tasks.
- I would need to see the relative benefit of the latter to know what opportunity is there. When we talk about editor decline, we're mainly referring to content editors (whether or not the implication is correct that the glut of editors from a decade ago were mainly productive content editors) and not simply those making corrections. (And is there evidence that those who partake in the Android tasks are any more likely to adopt full-featured editing?) These are two drastically different perspectives on how to grow the editor pool. If that decision remains to be made, I have some ideas on how to resolve it with community input.
- For the question of when to introduce the VE, would this be solved with user testing? Recruit from a pool of unregistered readers who are interested in making first edits and offer them this "AI suggestion" flow vs. VE with guardrails.
- I wouldn't count on new users recognizing the bot icon or knowing what bot or AI are. For me, they're just "Suggested edits" or "Recommended edits"—the user doesn't need to know how they were generated except that they're coming from the software. To the larger point of the recs being fallible, I think this would need to be pretty high confidence of being a worthwhile edit before editing communities would want to implement it. At that point, the caveat wouldn't be needed.
- For future mock-ups, it would be more realistic to pull from en:w:Category:Articles with too few wikilinks, as most random articles are messier, and I imagine such a tool would be most effective when paired with a cleanup category (if one is active on that language's Wikipedia) vs. adding links to articles where they're not needed. I.e., the suggestions are going to be more like the ones you've listed for "croissant" than for "dutch baby pancake"
- "Our first foray into newcomer task recommendations has shown new users will attempt suggested edits from maintenance templates." (from May design brief) This is a much more interesting entry point in my opinion. If I'm reading an article on mobile, do we even show the maintenance templates right now? If instead it gave entry into a guided method of resolving the maintenance template, there's much more mutual benefit than receiving a random article based on my viewing history or otherwise.
- The dot progression in B encourages skipping between options, which I think is good here.
- I'd wager I'm more likely to say "I don't know" to an edit than to give a firm "yes" or "no" as a new editor. Might be useful to have that as a skip button.
- The contextual highlighting in the text doesn't feel strong/striking in A or B.
- In B (excerpt view), the reader would not be able to answer whether the link is already in use elsewhere in the article (we typically only link the first usage of a word in an article too, which they wouldn't know). For that reason alone, I think the whole article is needed for context, though I do like how B lets me focus on just the sentence at hand, when approaching a task on mobile. Feels unrealistic to have the user click out to another tab to view the full article on mobile.
- I totally would have missed that I had to click the blue button arrow to actually commit my edits. I would have thought that clicking "Yes" on an edit was sufficient for submitting the edit without any other visual indication.
General thoughts
- I'm skeptical that, at scale, new users want to go through a series of repetitive tasks, like first do X, then Y. That gets into tutorial territory (like en:w:WP:TWA) vs. aid with first edits.
- "Only about 25% of the newcomers who click on a suggestion actually edit it." How do you know whether this is because users only clicked in because they were curious and are not actually interested in the task vs. because users were turned off by the interface and need a better intro? I imagine it's more the former than the latter but would be interested in what the data says there
- This feature necessitates close integration with existing editors beyond the mentor/welcome committee.
- In general, if this tool is to link simple words on articles that have already been mostly linked, it is likely bound to clash with editorial practices on overlinking. For anything that generates load on existing editors, I recommend getting broad community input before implementing. I know this is designed for smaller Wikipedias, but I can picture, for instance, English WP maintainers going berserk at the flurry of cleaning up wikilinks like "oven" or "stove top", which would be seen as overlinking. They'd sooner throw out the whole feature. In general, that community would have to have interest in semi-automated edits. Some communities have rejected this sort of aid outright as creating more clutter/work than benefit.
- To the open question "should workflows be more aimed toward teaching newcomers to use the traditional tools, or be more aimed toward newcomers being able to do easy edits at higher volume?", I'd be interested in an analysis of established editors to this effect. If an editor does more "gnome" edits, did they get started by making easy edits at high volume? If an editor is interested more in writing, ostensibly this won't be as helpful. Do you have survey data on what new editors were trying to do on their first edit? I'd wager that most new editors are coming to make a correction, in which case this interface should be aiding them in accomplishing that rather than entering them into a high-volume workflow in the absence of any indication that they're looking to do those types of edits. My hunch is that, if most editors are coming for a single correction, our best chance is to show them how easy it was to edit, which would increase their likelihood of a second edit. That would be a vastly different intent than the high-volume workflow.
- The main difference between this project/feature and Android's structured tasks, it seems, is that the former is about introducing the act of editing whereas the latter is about adding structured metadata. Android benefits from not having to teach/learn the editor at all, making it simple to do the one targeted, mobile-friendly task.
@Czar -- thank you for these detailed and helpful thoughts. I'm going to re-read through and respond tomorrow.
Sounds good and no reply needed! Only if you feel the need to follow-up or want to discuss. Otherwise just passing along my feedback.
Hi @Czar -- I read through all your notes, and I have some reactions and follow-up questions. Thank you for thinking about our work in detail!
- Regarding whether the objective of this work is to recruit editors into higher-value content edits or whether to help them do many small tasks:
- I think this work has the potential to do both. For instance, with Growth's existing suggested edits (in which we point users to articles with maintenance templates), we see users on both paths. There are some users who have done hundreds of copyedits or link additions, and who keep going day after day. There are also many users who do a few suggested edits, and then move on to Content Translation or creating new articles. In general, we want users to be able to find their way to the best Wikipedian they can be, giving them opportunities to either ascend or to stay comfortable where they are. We think this is the route to finding and nurturing the subset of newcomers who can be prolific content creators.
- I also think structured tasks could open up another possible route to content generation. If we are able to create many different types of structured tasks -- like adding links, images, references, infoboxes -- it's possible that we may have enough of them to string together into the construction of full articles, making article creation a lot easier (this is more needed in small and growing wikis than in English Wikipedia).
- But you mentioned that if a decision needs to be made between pursuing many small scale editors or pursuing "content editors" that you have some ideas. I'm curious what they are.
- Regarding the question of whether to use the term "AI" or "bot": this is something a few people have brought up. @John Broughton said something similar above: why not just call them "suggestions"? I agree that it would be simpler, but we are also trying to increase transparency and make sure users know where information is coming from. We've been thinking that it's important for humans to know how artificial intelligence is affecting their experience. What do you think?
- About using a cleanup category: yes, in our mockups, we are just showing a toy example that has lots of links to begin with. We've thought about pairing this with a maintenance template in production, but part of the motivation for building this task is that some wikis don't have a maintenance template for adding links (e.g. Korean and Czech Wikipedias). I'm thinking that we'll want to do something like use the link recommendation algorithm to identify articles that lack links, and then recommend those to the user. I will check with the research scientist to make sure the algorithm could do that.
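The idea above (using the link recommendation algorithm itself to find articles that lack links, where no maintenance template exists) could be approximated by scoring articles on wikilink density. The threshold and the naive wikitext parsing below are purely illustrative assumptions, not the actual Growth team algorithm:

```python
import re

# Hypothetical threshold: flag articles with fewer than one wikilink
# per 100 words. The real cutoff would need to be tuned per wiki.
LINKS_PER_100_WORDS_MIN = 1.0

def needs_links(wikitext):
    """Return True if an article's wikilink density falls below the
    (assumed) threshold, marking it as a candidate for 'add a link'."""
    # Count [[...]] wikilinks with a deliberately simple pattern;
    # real wikitext parsing is considerably more involved.
    links = re.findall(r"\[\[[^\]]+\]\]", wikitext)
    words = len(wikitext.split())
    if words == 0:
        return False
    density = 100.0 * len(links) / words
    return density < LINKS_PER_100_WORDS_MIN
```

Run externally, the same check could populate a public "too few wikilinks" maintenance category; run internally, it could feed the suggested-edits queue directly, which is exactly the tradeoff raised further down in this thread.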
- About entry points: we definitely want to try making suggested edits available from reading mode. Let's say you're a newcomer and you've already done a couple suggested edits from the homepage. Then the next day, you're browsing and reading Wikipedia, and it's an article that could be found in the suggested edits feed (either because it has a maintenance template or has link recommendations). We could then say to the newcomer, "This article has suggested edits!" This has the added benefit that (as you say) the newcomer is already interested in that topic, which we know because they went to the article to read it. Does that sound like what you're thinking of? Do you think that would work well?
- Regarding "Only about 25% of the newcomers who click on a suggestion actually edit it." This number has actually changed a lot in the past months, and that change is instructive. That number has now doubled to about 50%! We attribute this increase to the topic matching and guidance capabilities that we added. Topic matching made it so that newcomers would land on articles more interesting to them, and guidance gives the newcomers instructions on how to complete the edit. The fact that these increased the proportion of newcomers saving an edit makes me believe that many newcomers would have wanted to save an edit, but were turned off either by the content or the task once they arrived on the article. And this makes me believe that there is room for further increases in the future.
- You said "this feature necessitates close integration with existing editors beyond the mentor/welcome committee." What kind of integration are you thinking of?
- About newcomer intentions, you asked if we have data on what newcomers intend to do with their first edit. We do have this data, from the welcome survey. This report shows responses from Czech and Korean newcomers on why they created their account. It turns out that a lot of newcomers intend to create an article or add information to an article. One of our challenges has been to steer them toward simpler edits where they can pick up some wiki skills, before they try the more challenging edits and potentially fail.
- If there's a place you're tracking the top-level outcomes you mentioned, I'd be interested in following along and I imagine many others would too. Stuff like return rates for those who do not make a suggested edit vs. those who do (and what timeframe); new user rates of other desired actions (adding a citation, CXT, etc.) after engaging in the suggested edit flow vs. those who skip it and start editing directly; if you're tracking self-reported willingness to edit after any of these interactions. In general, this work (and editor growth in general) is more relevant to WP editor communities than a lot of other WMF work, with no disrespect to that work. Sharing the wins from this work is good for both the WMF and the editor community who would be administering the tools.
- re: encouraging small/corrective edits vs. cultivating content writers as a prioritization decision, how closely do you currently work with communities? And is it mainly focus grouping with mentors and new editors, or do you have community discussions on your target wikis? I.e., I know you had seen wikifying text as one of the actions new users commonly take, but is it what the community needs? The Growth team is essentially building out a wikifying AI that has to reach a level of accuracy at which it could be run semi-automatically by users with no experience. At that point, it wouldn't be far off to open the same AI to experienced users who will write tools that run the AI as part of doing general fixes on an article. On one hand, great, but I'm curious if that's what the community would say was among its top problems in need of fixes. I can't think of a place where, for example, the enwiki community takes stock of its biggest problems. Those discussions usually happen about specific problems as patchwork rather than as a community ranking (apart from maybe the annual WMF community tools survey?). In my experience, enwiki usually depends on creating backlogs until someone announces how the backlog is causing a problem or that they could do more if given some script/bot tools. I think there's an overlap between that and this work. If you were to ask specific communities what their biggest editing problems are (taking enwiki as an example): if we could magically juice our volunteer count, I don't know whether we'd say we need "more content" or even necessarily more watchlist-watching activity to catch things being missed. I'd hazard you'd hear we need help with, for example, reducing promotional tone and removing dead external links. Lack of wikilinks is a far smaller problem. Anyway, there's an opportunity for overlap in the choice of the new user "suggested action".
Not to mention the benefits that come from this sort of community–WMF common understanding of where a lack of active users is actually a problem. There's a lot more to say on effectively setting that up, but yeah, my wall of text over here.
- ""this feature necessitates close integration with existing editors beyond the mentor/welcome committee" Going out of order but a similar note: Any tool that semi-automates edits will require community consensus to implement. I don't have full knowledge of how each language WP works but I imagine the tiny ones are happy to receive any/all aid and are grateful, while established communities have to justify whether adding the feature is worth the cost. If it creates more work for users, they will quickly vote to kill it (CXT is a great example) and then the benefit is null. (Side note: This also compounds the idea that the WMF is not listening to the community and is working on features that do not benefit it.) If an established community is only informed of a tool near its completion, they have no time to guide the development to make the tool most useful to their community. So I know these growth features are built for smaller wikis, but if eventually you'd like to see them applied to larger communities, I'd recommend collecting their requirements/feedback early in the process, both so that they can see progress and also have some investment in the success of the shared project.
- The AI-based wikilinking in particular would need to have an extremely low error rate to be turned on by one of the established Wikipedias, let alone to be applied by new users. Otherwise established editors will hate that they have more work to revert, and new users won't make progress if their edits are reverted and they don't know why it was suggested in the first place.
- "Suggestions" is sufficient copy to my eyes. Absolutely, I think it's important to distinguish between whose suggestion it is (AI's vs. community's) but the difference is not material to most users. I think that could be solved with design treatment, e.g., (i) info icon and overlay, so it doesn't clutter the field.
- re: the Korean and Czech Wikipedias, my question would be whether they need to add a "too few wikilinks" maintenance tag in order for this intervention to be successful, or if their wikilink issues are spread generally evenly across their articles. This goes back to whether wikilinking is among their top activity concerns, i.e., would your help be to identify articles for wikilinking internally, within the tool, or would it be more beneficial to the community to run that algorithm externally, populating the public maintenance tag both for anyone in the community and for the new user suggested edit tool?
- re: entry points, that makes sense to have an entry point on an article you're reading—I'd even say it's nicer to have it within the article itself as a CTA instead of distracting at the top of the article, e.g., that paragraph you just read has two suggested edits, would you like to review? I was thinking of the dashboard/start page though and why it'd list kouign-amann vs. something else. For my first edit, instead of using a random article within the broad topics I just selected, I imagine I'd be more interested in topics related to the last article I just read (by related I mean linked from that article, not in the same topic area). Being interested in "food" doesn't mean I'm interested in editing kouign-amann, but having a curiosity about Beyonce would likely mean I'm interested in the wikilinks other viewers visit when reading her article. A thought.
- re: 25% to 50%—that sounds great! I imagine that's this green line going from 2.5% to 5%, so then it's the green line divided by the red line? I was asking more about page views vs. the blue line: How many new users care about making a suggested edit or, specifically, a wikitext edit as their first edit.
- Fascinating that "Fix a typo or error in an article" was among the lowest motivations of the surveyed Czech/Korean new users. I wonder whether those users, when given this suggested edit tool, are retained better than those who answered otherwise because they're being given what they want. I'd also be curious if any other part of the editor flow actually tells new users that writing an article without making prior edits is not recommended and why. Sounds like it would be smart to set expectations appropriately.