Team Practices Group/Light engagement survey pilot 2 results
The Team Practices Group (TPG) was dissolved in 2017.
This page is obsolete. It is being retained for archival purposes. It may document extensions or features that are obsolete and/or no longer supported. Do not rely on the information here being up-to-date.
Summary
In September 2016, the Team Practices Group ran a second pilot of a new Light engagement survey. This document presents what we learned from the pilot.
Round 2 of the pilot was very successful. The changes based on the first pilot worked well and need only minimal additional revisions. We experimented with a non-batch approach and have made that a standard part of the process. We encountered challenges with Google Forms, and will work around those with documentation. We discovered that a separate form for “ongoing” engagements would be helpful, so we created and tested a variant of the survey.
Details
Changes compared to the first pilot round
- Switched from sending surveys out in a batch to sending them individually (see below).
- The first pilot included specific rating questions for each of the possible skills used. With the second pilot, we just had a checkbox for each possible skill, indicating whether it had been used.
- Minor wording tweaks.
Non-batch approach
- Rather than distributing these surveys in batches (as we do with our embedded CSAT), Grace strongly advocated sending individual surveys at times that make sense for each engagement. For engagements that have an endpoint, that would ideally be right after the engagement ends.
- One pilot survey was sent out after an event, but before some follow-up meetings. With hindsight, it would have been better to have waited until after the follow-up meetings.
- A consequence of eliminating batches is that we need a separate form for each engagement. This simplifies the form by avoiding the “Which engagement” dropdown, but makes the back end slightly more complicated. We should revisit this after several months.
- For each engagement, the involved TPGer(s) will be responsible for generating an instance of the survey, sending out the email with the link, and analyzing the results. The process will be entirely decentralized.
Survey design
- The questions in the second pilot were modified and streamlined substantially, based on what we learned in the first pilot round.
- We considered adding a question about how the respondent viewed the cost/benefit of this engagement. We ended up deciding that the participant wouldn’t have enough knowledge to judge the cost, and we don’t yet have a way to quantify the value.
- One-off and short-term engagements vs. ongoing/long-term
- The initial form was oriented toward engagements with an endpoint.
- When we were about to send a survey to customers in an ongoing engagement, we (Arthur) ended up forking the form (in consultation with Design Research folks) to create a separate variant.
- Both versions will be available to TPGers, so they can select whichever applies.
- We need to evaluate whether any of the wording changes from the fork should be brought back into the original survey (T146822).
- In addition to surveying our customers, we set up a simple process to solicit feedback from the TPGer(s) about how they felt the engagement went. To avoid bias, TPGers are encouraged to write down their thoughts before viewing any results from the corresponding survey, or reading the thoughts of other TPGers who were involved with that engagement.
Frustrations with Google Forms
- The Google Forms UI is often awkward or confusing, and documentation is sometimes inadequate. We will need to compensate for this by providing clear step-by-step instructions.
- We sent out some surveys using the URL provided by the “get pre-filled link” option. That sounded reasonable, but turned out to be a counter-intuitive feature: we didn’t test the link enough, and it didn’t actually work. We had to provide updated links to some recipients.