Quality Assurance/Strategy
This page is obsolete. It is being retained for archival purposes. It may document extensions or features that are obsolete and/or no longer supported. Do not rely on the information here being up-to-date.
Manual testing
Goals
- Improve Wikimedia software products:
- User-perceived quality.
- Areas difficult to cover with automated testing.
- Grow the Wikimedia technical community.
- Accessible entry point for Wikimedia users and editors. No technical skills required.
- Good motivation for experienced and professional testers.
We still need a central "these are our QA priorities" page.
Volunteer profiles
- Wikipedia/Wikimedia editors.
- Motivated users willing to try what's coming next.
- Experienced / professional testers willing to contribute.
- Companies developing products where Wikipedia/Wikimedia software needs to run well.
- ... and of course other regular contributors at https://bugzilla.wikimedia.org/ willing to get involved in a more structured way.
Consolidating a testing team
We need to identify and empower those experienced in testing and QA, and those experienced in Wikimedia software and community, and let them take the lead.
We must build a healthy meritocratic structure, with a dose of fun and incentives for those making great progress and helping others progress as well.
Based on this, a set of profiles:
- Senior testers - can help and teach others to test.
- Organizers - can increase the quantity and quality of QA activities.
- Connectors - can bridge QA volunteers' efforts with development teams.
- Promoters - can help reach out to new volunteers.
Activities
In theory, almost all combinations apply across:
- Testing vs. bug triaging.
- Online vs. on-site.
- DIY vs. team sprint.
- Synchronous vs. asynchronous.
However, not all combinations are equally productive in every context or toward every goal. For instance, a face-to-face team sprint requires well-defined scope and goals, and heavy involvement from the development team. Discrete (individual) tasks can deliver great results, as long as they don't block urgent deliveries or critical paths.
We need good documentation so that we can replicate at least these three procedures efficiently:
- Online testing sprint: how to organize, announce, perform, and evaluate it.
- Proposed: right after deployment of new MediaWiki versions to non-Wikipedias.
- Proposed: right after feature deployments.
- Note: this requires availability of effective announcements and release notes.
- For examples, see Mozilla Test Days, Fedora Test Days, and our own Weekend Testing Americas held on 2012-05-05 and Article Feedback Testing on 2012-06-09.
- See also session-based testing.
- Individual testing: tasks that a person can perform and report on, anytime and anywhere.
- Individual bug triaging: bug reports to look at, and instructions to improve their status (one way to assemble such a task list is sketched below).
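For illustration, here is a minimal sketch of how an individual triaging task list could be pulled automatically. It assumes a Bugzilla instance with the REST API enabled (available since Bugzilla 5.0); the instance URL, product name, and field selection are assumptions for the example, not a description of any existing Wikimedia service.

```python
# Sketch: fetch a personal triage list of unconfirmed bug reports.
# The instance URL, product, and fields below are illustrative.
import requests

BUGZILLA_URL = "https://bugzilla.example.org/rest/bug"  # hypothetical instance

def fetch_triage_list(product="MediaWiki", limit=20):
    """Return up to `limit` unconfirmed bug reports for one product."""
    params = {
        "product": product,
        "status": "UNCONFIRMED",  # reports nobody has looked at yet
        "include_fields": "id,summary,creation_time",
        "limit": limit,
    }
    response = requests.get(BUGZILLA_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("bugs", [])

if __name__ == "__main__":
    for bug in fetch_triage_list():
        print(f"#{bug['id']}: {bug['summary']} (filed {bug['creation_time']})")
```

Each participant can then work through their slice of the list and record the triage outcome back in the tracker.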
Reaching out
We need to go beyond sporadic, isolated efforts and build a continuous, incremental flow of activities. The success of each activity must contribute to future successes.
We need to let people know about ongoing / DIY opportunities as well as events. We need to reach out to the current MediaWiki / Wikimedia communities as well as to external groups and potential new contributors.
- Calendar: a central place where activities are announced. It should be possible to subscribe and receive notifications of new activities (a subscribable feed is sketched after this list).
- QA communities: reaching out and having processes in place to promote our activities.
- Work with promoters to spread the news.
- Contact companies testing Wikipedia in their products, e.g. browser developers.
- Organize on-site activities engaging local groups.
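As a sketch of the calendar item above: publishing activities as an iCalendar (.ics) feed lets people subscribe from any standard calendar client. The code below uses only the Python standard library; the activity records, UID domain, and file name are made up for the example.

```python
# Sketch: generate a subscribable iCalendar (.ics) feed of QA activities.
# Event data, UID domain, and file name are illustrative.
from datetime import datetime, timedelta, timezone

activities = [  # hypothetical activity records, times treated as UTC
    {"title": "Online testing sprint: new MediaWiki deployment",
     "start": datetime(2012, 7, 14, 17, 0), "hours": 3},
    {"title": "Bug triaging sprint",
     "start": datetime(2012, 7, 28, 17, 0), "hours": 2},
]

def ical_ts(dt):
    """Format a datetime in the basic iCalendar UTC form."""
    return dt.strftime("%Y%m%dT%H%M%SZ")

def build_feed(events):
    now = datetime.now(timezone.utc)
    lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//QA//Activities//EN"]
    for i, ev in enumerate(events):
        end = ev["start"] + timedelta(hours=ev["hours"])
        lines += [
            "BEGIN:VEVENT",
            f"UID:qa-activity-{i}@example.org",  # hypothetical domain
            f"DTSTAMP:{ical_ts(now)}",
            f"DTSTART:{ical_ts(ev['start'])}",
            f"DTEND:{ical_ts(end)}",
            f"SUMMARY:{ev['title']}",
            "END:VEVENT",
        ]
    lines.append("END:VCALENDAR")
    return "\r\n".join(lines) + "\r\n"  # RFC 5545 requires CRLF line endings

with open("qa-activities.ics", "w", newline="") as f:
    f.write(build_feed(activities))
```

Serving that file from a stable URL is enough for subscription; notification of new activities is then handled by the subscribers' own calendar clients.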
Follow-up activities
Testing events require a follow-up to:
- Evaluate and announce the outcome.
- Triage and process the feedback received into the regular development flow.
- Keep the contributors engaged.
- Warm up for the next event.
For instance, it is a good idea to organize an online bug triaging sprint after a testing event.
We need a way to keep in touch with participants in events and get back to them.
Community incentives
To be defined. Some ideas:
- Tester barnstar.
- "I test Wikipedia" shirt.
- Sponsored training, e.g. AST courses.
- Sponsored travel to Wikimania.
Test automation
Future
Emphasis on good unit tests
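To make the emphasis concrete: a good unit test exercises one small behavior in isolation, has a descriptive name, and needs no external services. MediaWiki's own tests are written in PHPUnit; the sketch below uses Python's unittest only to illustrate the shape, and the title-normalization function under test is hypothetical.

```python
# Sketch: the shape of focused unit tests. The function under test is a
# hypothetical stand-in, not real MediaWiki code.
import unittest

def normalize_title(title):
    """Collapse underscores/whitespace and capitalize the first letter."""
    cleaned = " ".join(title.replace("_", " ").split())
    return cleaned[:1].upper() + cleaned[1:]

class NormalizeTitleTest(unittest.TestCase):
    def test_underscores_become_spaces(self):
        self.assertEqual(normalize_title("main_page"), "Main page")

    def test_surrounding_whitespace_is_trimmed(self):
        self.assertEqual(normalize_title("  sandbox  "), "Sandbox")

    def test_empty_title_stays_empty(self):
        self.assertEqual(normalize_title(""), "")

if __name__ == "__main__":
    unittest.main()
```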
Measuring success
Activities
- One Features testing activity per month.
- One Browser testing activity per month.
- One Bug management activity every other week.
- Co-organized with different teams, focusing on different areas.
- In sync with WMF and other community priorities.
Results
- Tangible progress after each activity: bugs reported or triaged, scenarios written, improved end-user perceived quality.
- Improvements to documentation and processes, enabling the QA volunteer effort to scale.
- The development team values the exercise and is willing to repeat it.
Participation
- Pool of at least 25 regular non-WMF contributors in each QA category.
- At least 10 non-WMF contributors in synchronous activities.
- Mix of newcomers and returning participants in each activity.
- Involvement of Wikimedia editors.
- Involvement of contributors active in other OSS projects, new to Wikimedia.
- Involvement of existing groups.
Organization
- Topic and dates are agreed 2 weeks before the activity starts.
- Goals are defined before an activity and evaluated right after.
- Preparing a new activity takes less effort each time.
- The stakeholders related to an activity are aware and engaged.
- Participants can start learning / contributing right away.
- Non-WMF contributors involved in the organization.
Evaluation checklist
- For an example, see QA/Browser testing/Search features#Evaluation.
- Summary:
- Results:
- Participation: WMF? Wikimedia? Other?
- New:
- Repeating:
- Mood:
- Documentation:
- Promotion:
- Developer team:
- Organizers:
- Lessons learned:
Individual activities
- How easy it is, and how long it takes, to make a first contribution.
- Positive / negative feedback, complaints, bugs.
- Individual contributors showing up in community channels and team activities.
- Statistics on individual contributors (to be defined; one candidate metric is sketched after this list).
- QA contributor retention.
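The retention statistic is left undefined above; one candidate definition is the share of one period's contributors who contribute again in the next period. A minimal sketch, assuming contributions are available as (contributor, month) records; the data here is made up.

```python
# Sketch: month-over-month QA contributor retention.
# Input format and records are illustrative, not real data.
from collections import defaultdict

contributions = [  # hypothetical (contributor, "YYYY-MM") records
    ("alice", "2012-05"), ("bob", "2012-05"), ("carol", "2012-05"),
    ("alice", "2012-06"), ("carol", "2012-06"), ("dave", "2012-06"),
]

def retention_by_month(records):
    """For each month, the fraction of the previous month's contributors who returned."""
    by_month = defaultdict(set)
    for who, month in records:
        by_month[month].add(who)
    months = sorted(by_month)
    return {
        cur: len(by_month[prev] & by_month[cur]) / len(by_month[prev])
        for prev, cur in zip(months, months[1:])
    }

print(retention_by_month(contributions))  # {'2012-06': 0.666...}
```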