I agree with the sentiment :) yet Sumana is right when she insists that we also need parameters to measure how we are doing, and therefore what we can improve. Let's check against QA/Strategy#Measuring_success:
Measuring success
Activities
- One Bug management activity every other week.
- Co-organized with different teams, focusing on different areas.
- In sync with WMF and other community priorities.
These are easy to evaluate and you are doing well so far.
Results
- Tangible progress after each activity: bugs reported / triaged, scenarios written, good perceived quality among end users.
This points toward selecting manageable topics: if we start with 200 reports, we probably won't even have a clear picture of them by the end of the week. If we focus on a more specific goal (implying, say, <50 bugs), it is easier to evaluate what we had before and after. We might even get a sense of 'we went through all of them and now that corner is tidy'.
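To make that before/after comparison concrete, the count could be scripted. Below is a minimal sketch, assuming a Bugzilla instance with the REST API enabled (Bugzilla 5.0+); the base URL, product and component names are hypothetical placeholders, not our actual setup. Running it once when the activity starts and once when it ends gives the simple before/after number suggested above.

```python
"""Minimal sketch: count the open bugs in one Bugzilla component.

Assumes a Bugzilla instance with the REST API enabled (5.0+).
BASE_URL, PRODUCT and COMPONENT are hypothetical placeholders.
"""
import requests

BASE_URL = "https://bugzilla.example.org/rest"  # hypothetical instance
PRODUCT = "MediaWiki"
COMPONENT = "Search"  # the 'corner' the activity focuses on


def count_open_bugs(product, component):
    params = {
        "product": product,
        "component": component,
        "resolution": "---",   # an empty resolution selects bugs still open
        "include_fields": "id",  # keep the response small
        "limit": "0",          # 0 = no limit, if the server allows it
    }
    response = requests.get(f"{BASE_URL}/bug", params=params)
    response.raise_for_status()
    return len(response.json()["bugs"])


if __name__ == "__main__":
    print(f"Open bugs in {PRODUCT}/{COMPONENT}: "
          f"{count_open_bugs(PRODUCT, COMPONENT)}")
```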
- Improvements in documentation and processes, enabling scalability of the QA volunteer effort.
Ideally the generic docs related to bug triaging would be fine-tuned based on feedback and lessons learned. The pages of each bug triage, with their evaluations, could still be useful to volunteers interested in those areas who join on their own in the future.
- Developer team values the exercise and is willing to repeat.
Simple to measure.
Participation
- Pool of at least 25 regular non-WMF contributors in each QA category.
There are 9 listed at Project:WikiProject_Bug_Squad/Participants. That list needs an update, and it should be checked against reality from time to time.
- At least 10 non-WMF contributors in synchronous activities.
This is also relatively simple to measure. Like the point before, it's a goal: not reaching those numbers doesn't mean you failed, but we should see a trend in that direction over time.
- Mix of newcomers and repeaters in each activity.
- Involvement of Wikimedia editors.
- Involvement of contributors active in other OSS projects, new to Wikimedia.
This implies knowing more about the active participants. Currently this is not easy, but at least we have traces in Bugzilla, IRC and perhaps MediaWiki if people want to sign up or leave a trace here.
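At least the Bugzilla traces could be harvested mechanically. Here is a minimal sketch, again assuming a Bugzilla instance with the REST API enabled; the base URL, bug IDs and start date are hypothetical placeholders. It lists everyone who commented on an activity's bugs after the activity started — a starting point for telling newcomers from repeaters, not a full answer.

```python
"""Minimal sketch: list who left comments on an activity's bugs.

Assumes a Bugzilla instance with the REST API enabled; all the
constants below are hypothetical placeholders.
"""
import requests
from datetime import datetime

BASE_URL = "https://bugzilla.example.org/rest"  # hypothetical instance
ACTIVITY_BUGS = [101, 102, 103]                 # placeholder bug IDs
ACTIVITY_START = datetime(2013, 2, 4)           # placeholder start date


def commenters_since(bug_id, since):
    """Return the set of accounts that commented on a bug after `since`."""
    response = requests.get(f"{BASE_URL}/bug/{bug_id}/comment")
    response.raise_for_status()
    comments = response.json()["bugs"][str(bug_id)]["comments"]
    return {
        c["creator"]
        for c in comments
        if datetime.strptime(c["creation_time"], "%Y-%m-%dT%H:%M:%SZ") >= since
    }


participants = set()
for bug_id in ACTIVITY_BUGS:
    participants |= commenters_since(bug_id, ACTIVITY_START)

for person in sorted(participants):
    print(person)
```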
- Involvement of existing groups.
This implies periodically looking for areas that affect a clear group of stakeholders (e.g. Commons bugs) and succeeding in involving them in the activity.
Organization
- Topic and dates are agreed 2 weeks before the activity starts.
- Goals are defined before an activity and evaluated right after.
- Preparing a new activity takes less effort.
- The stakeholders related to an activity are aware and engaged.
- Participants can start learning / contributing right away.
- Non-WMF contributors involved in the organization.
All this can be measured.
Evaluation checklist
- For an example, see QA/Browser testing/Search features#Evaluation.
★★★☆☆
- Summary:
- Results:
- Participation: WMF? Wikimedia? Other?
- Documentation:
- Promotion:
- Developer team:
- Organizers:
- Lessons learned:
And this checklist can be applied.
All this might look like extra work, but it might make the difference between proper community engagement and "Andre, Valerie and Quim going through bugs publicly with some friends". :)