Team Practices Group/Light engagement survey reports
This page is obsolete. It is being retained for archival purposes. It may document extensions or features that are obsolete and/or no longer supported. Do not rely on the information here being up-to-date.
Pilot Round 1 results
The first pilot round was conducted from 2016-08-17 through 2016-08-31, and included 2 recent engagements.
What we learned FROM the survey
Summary
The one-off New Readers engagement generated an NPS of 50, with no detractors. Comments called out the value of retrospectives facilitated by TPG, and a regret that TPG had not been involved earlier.
The ongoing Scrum of Scrums engagement generated an NPS of 14, which is still positive. The comments were mixed: some participants saw benefits in TPG’s facilitation, while others questioned the value of the current format. Comments from current and former attendees who saw room for improvement prompted the scheduling of a full-group retrospective, with the hope of improving overall productivity.
Context
In August 2016, TPG (the Team Practices Group) launched a pilot to start surveying recipients of its “Light Engagements”. This first iteration of the survey ran from August 17-31, 2016, and was sent to 2 separate groups, both served by Grace Gellerman:
- New Readers
- Scrum of Scrums (“SoS”)
Both groups were emailed the survey separately. The survey itself was generic, including questions about a variety of skills, not just those applicable to the engagement in question.
Skills (Likert)
The New Readers project requested facilitation and process design, while the Scrum of Scrums engagement focused on facilitation.
Expected skill ratings
[edit]Facilitation, common to both engagements, scored:
Of the 9 total responses across SoS and New Readers, 2 agree and 7 strongly agree with the statement: “The facilitation provided improved the overall quality of the discussion”
(Note: 1 SoS participant, confused by the survey form, emailed me to say "Scrum of Scrums and what I think about you leading it every week. Which I think is great and appreciate very much.")
The New Readers project also received process design support.
Of the 2 responses to the Likert question “The process design provided improved my team’s productivity”, 1 agrees and 1 strongly agrees.
Unexpected skill ratings
Interestingly, many recipients answered Likert questions about skills beyond those TPG thought it was bringing to the table:
Likert question | SoS | New Readers
---|---|---
The Agile Coaching improved my team's confidence in executing Agile roles and processes on our own | |
The process design provided improved my team’s productivity | |
The coaching on team and organizational dynamics resulted in improved interactions | |
The Project Management support made the project run more smoothly | |
(The per-question response distributions were not preserved in this archived copy.)
Net Promoter Score (NPS)
New Readers: 50
SoS: 14
Of the 7 SoS respondents, there was 1 detractor and 2 promoters. The other 4 were passives: two 7s and two 8s.
Overall across New Readers and SoS: 22
According to Wikipedia, “An NPS that is positive (i.e., higher than zero) is felt to be good, and an NPS of +50 is excellent.”
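The NPS figures above follow from the standard formula: the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6), with 7s and 8s counted as passives. A minimal sketch, using illustrative score lists consistent with the counts reported above (the exact promoter and detractor scores were not reported, so the 9s and the 6 are assumptions):

```python
def nps(scores):
    """Net Promoter Score from 0-10 ratings: % promoters (9-10)
    minus % detractors (0-6); 7s and 8s are passives."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# SoS: 2 promoters, 1 detractor, 4 passives (two 7s, two 8s) as reported;
# the individual promoter/detractor scores are assumed here.
sos = [9, 9, 7, 7, 8, 8, 6]
new_readers = [9, 7]  # assumed: 1 promoter, 1 passive, no detractors

print(nps(sos))                # -> 14
print(nps(new_readers))        # -> 50
print(nps(sos + new_readers))  # -> 22
```

This also shows why the combined score (22) is not the average of the two group scores: NPS is computed over the pooled respondents, and SoS contributed 7 of the 9 responses.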
Open-ended questions
Was there anything you were expecting to get out of this engagement that you didn't?
- 3 said no; their expectations were met
- “I'm a fan of TPG work and I'm always just expecting more TPG resources. I think, up to a reasonable point, TPG folks make the organization better. If you compare where we are now with cross-team collaboration vs. 4 years ago, the progress is obvious.”
- “I expected to get solid support from other teams in SoS, but didn't really receive answers to about half of the questions/requests I posed in the meetings.”
Was there anything you got that you weren't expecting?
- New Readers:
- Retros
- “Help with enabling a neutral, balanced conversation in retro”
- “Help understanding the function of a retro (ie, not to react, to feel empowered to offer differing opinions)”
- SoS:
- “I'm impressed by the efficiency of this meeting”
Do you think improvements could be made to how this engagement was structured? If yes, please explain.
- New Readers:
- Both respondents wished that they’d gotten Grace involved sooner
- SoS:
- “Yes. If SoS is to succeed, representatives from every team should be present, and cross-team dependencies should be more accurately mapped.”
- “It seems that we have diverged from the SoS starting notion of blockers/blocking into a more generic format of ‘Here's some updates and please take a look into this.’ It might very well serve us better in the end but it'd be nice if we actually had agreed on the new format”
Do you have suggestions for how TPG could improve its performance in engagements like this?
- 2 responses of “no”
- “Not really, I think the facilitation of SoS is fantastic, but the meeting itself is basically useless to my team in its current form.”
- “I wonder if something similar for product managers would make sense.”
Next steps
- The responses from SoS participants opened a conversation about convening a retrospective on that meeting’s format, tentatively scheduled for 2016-09-28.
What we learned ABOUT the survey
Executive Summary
This small-scale pilot allowed us to experiment in a low-risk setting, and provided a relatively tight feedback loop so we could make iterative improvements. We discovered some flaws in the survey/email design, and learned about participation rates and some effects of deadlines. We found the actual responses interesting and valuable, providing information to help us improve our engagements. However, there appears to have been some confusion on the part of respondents, so the results probably shouldn’t be considered statistically valid.
For the second pilot, we decided to greatly streamline the survey and to shift some of the customization into the emails. We decided to leave the survey open for only 1 week, not 2, since the vast majority of responses came in the first few days. We will stagger the second pilot surveys into 2 waves, to experiment with the non-batched approach we expect to take post-pilot. Batches of 2-3 groups per wave will reduce the effort required to analyze the results.
Selecting participants
- We had to open the form to non-wikimedia.org email addresses, to allow WMDE staff to participate
- We quickly realized that some individuals might be part of multiple engagements at the same time, so the survey somehow needs to handle that (for the pilot, we dropped engagements that would have resulted in duplication)
- With hindsight, the survey was sent to a couple people who probably weren’t involved enough to be worth including
Survey design
- Several recipients of a facilitation engagement answered questions about skills that appeared not to be relevant to that engagement
- Perhaps being clearer that the survey relates to only this engagement would help?
- Perhaps skills were at work that the TPGer wasn’t aware of, but the recipients noticed
- Perhaps respondents felt obligated to answer several of the questions, even if they weren’t relevant?
- The open-ended questions provided some valuable insights
- The “other” skills Likert scale didn’t have an associated prompt stating what was being agreed to. In the second pilot, we are completely revising that part to avoid this problem.
Response rates
- The response rate for the one-off, high-touch engagement was 100% (2/2)
- The response rate for the recurring large-meeting facilitation was 43% (9/21)
- Researchers have told us that it is reasonable to expect a 20-30% response rate for an internal survey
- Despite a 2-week window, 66% of total responses were submitted within 2 days; 90% within a week
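The reported rates follow directly from the response and invitation counts; a quick check of the arithmetic (the helper name `response_rate` is ours, not from the survey tooling):

```python
def response_rate(responses, invited):
    """Response rate as a whole-number percentage."""
    return round(100 * responses / invited)

print(response_rate(2, 2))   # one-off, high-touch engagement -> 100
print(response_rate(9, 21))  # recurring large-meeting facilitation -> 43
```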