CSAT-FY1516-Q4-retrospective

From mediawiki.org

Note: this retrospective was performed asynchronously in a document

Timeline:

  • 4/1 Quarter started
  • 6/9 started CSAT discussion at staff meeting
  • 6/16 Convened team meeting to discuss
  • 6/17 Arthur and Grace paired
  • 6/17 Team voted asynch on themes
  • Around 6/22 Grace worked with JMo on writing professional questions
  • 6/23 asked the team for a list of recipients by EOD 6/27
  • 6/27 survey sent (was live until 7/6)
  • 6/30 Quarter ended
  • 7/7-8 Grace and Arthur processed, with help from Kevin
  • 7/12 TPG Quarterly review

Other important events during this time period:

  • Wikimania 6/20 - 6/27 (and a lot of people traveled/took vacations right after)
  • DarTar vaca 6/27 - 7/1, then working remotely where Grace could not easily nudge him
  • KL on vaca 7/5-7/8

What did not go well:

  • Grace:
    • Timeline was compressed due to late start and fixed end date
    • Survey live time coincided with post-Wikimania fatigue and summer vacations, both of which may have contributed to the low response rate
  • Arthur
    • Low response rate
  • Kevin
    • +1 to the late start and unfortunate conflict with Wikimania
  • Joel
  • Max
    • +1 to what Grace said
  • Kristen (optional)
    • timeline was pinched to meet quarterly review reporting needs
    • Wikimania/vacationing timing was tough
    • Basically, +1 to others’ entries above :-)

What went well:

  • Grace
    • Within the tight timeline we moved quickly
    • Grace pairing with Arthur on questions and again on processing results was great :) Kevin was involved in results processing, too, which was also great
    • Adding v-heads to recipient list
    • Halfak’s willingness to share his expert input on the response rate
  • Arthur
  • Kevin
    • It was fun and educational for me to help process the results
    • Improved the documentation to make future processing easier (+1 GG)
    • Improved the spreadsheet format and formulas to make future processing easier (+1 GG)
    • Grace took the lead
    • Grace had access to great help from researchers
    • We met the deadline
  • Joel
    • Executed without requiring the whole team to be heavily involved
  • Max
    • +1 to what Grace said
  • Kristen (optional)
    • Knowledge sharing about CSAT processing was good
    • Good results! (up from last quarter)
    • Sanity check response rate w/halfaker was good
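
The response-rate sanity check above can be made concrete with a quick margin-of-error calculation. A minimal sketch, using entirely hypothetical counts (the actual recipient and response numbers are not in these notes) and the worst-case proportion p = 0.5:

```python
import math

def margin_of_error(n_respondents, population, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated from a
    simple random sample, with a finite-population correction applied."""
    se = math.sqrt(p * (1 - p) / n_respondents)
    fpc = math.sqrt((population - n_respondents) / (population - 1))
    return z * se * fpc

# Hypothetical numbers for illustration: 20 responses out of 60 recipients.
moe = margin_of_error(20, 60)
print(f"Response rate: {20/60:.0%}, margin of error: +/-{moe:.0%}")
```

Even with a third of recipients responding, the margin of error on any single satisfaction proportion stays wide, which is roughly the caveat an expert review of the response rate would surface.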

What we could do differently:

  • Grace
    • Reconsider end-of-quarter timing! In a way, our work is continual. Sure, maybe we help teams achieve their quarterly goals, and maybe they also appreciate our help even if they don’t. I think that at any time our customers could be in a position to indicate their satisfaction with our work.
      • Consider this an opportunity to model a shift away from strict quarterly, at times arbitrary, boundaries
    • Allow more time for iterating questions with DR experts
  • Arthur
  • Kevin
    • Start earlier in the quarter (assuming we stick to quarters). I still think survey fatigue is real, and would like to see us only interview half or a third of our customers each cycle
    • Standardize on how much nagging we want to do. Currently, different teams receive different amounts of reminders. Should we poke and prod, or let things take their course? Are we hiding survey fatigue with too many reminders? To what degree are reminders (or lack thereof) skewing who responds, either by team, or by how strongly they love/donā€™t love our work?
    • Do we clearly disclose in the survey how quotes might be used?
    • I would really like per-TPGer breakdowns about areas of strength and improvement. Primarily for continual improvement; secondarily for annual review fodder.
  • Joel
    • Maybe piggyback with another survey so that it’s longer but not a new interruption? Or schedule a ten-minute meeting with each team for them to fill it out?
  • Max
    • Try to avoid sending out with other surveys from other sources
    • Leave the survey up long enough to collect significant data (response rate)
    • Investigate whether or not we want at least some response from each team
    • Investigate how to engage the “meh”-opinionated people (who are less likely to take the survey)
  • Kristen (optional)
    • Earlier planning start and more lead time for respondents.
    • NLP algorithms for pulling out themes? http://blog.algorithmia.com/2016/04/natural-language-processing-algorithms-web-developers/
    • I like Grace’s notion: “I think that at any time our customers could be in a position to indicate their satisfaction with our work.” I don’t know how that would work in practice - maybe staggering sending out the survey, and increasing the response interval (I think we floated the idea of every 6 months)? In quarterly reviews, we could then report on the most recent results.
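
Kristen’s NLP suggestion could be prototyped before investing in an external service. A minimal keyword-frequency sketch (stdlib only; the sample responses below are invented, not real survey data) shows the basic shape of automated theme extraction:

```python
from collections import Counter
import re

# A small, illustrative stopword list; a real pipeline would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "is", "was", "were",
             "we", "our", "with", "for", "in", "on", "it", "very", "though"}

def top_themes(responses, n=3):
    """Count non-stopword tokens across free-text responses and return
    the n most frequent as candidate themes."""
    words = []
    for text in responses:
        words += [w for w in re.findall(r"[a-z']+", text.lower())
                  if w not in STOPWORDS]
    return Counter(words).most_common(n)

# Invented sample responses, for illustration only.
sample = [
    "The facilitation was very helpful and the retrospectives were great",
    "Helpful facilitation, though scheduling was hard",
    "Great help with scheduling our retrospectives",
]
print(top_themes(sample))
```

This only surfaces frequent words, not true themes; the linked algorithmia-style approaches (topic modeling, keyword extraction) would group related terms, but even this crude version can flag recurring vocabulary across open-ended answers.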