Development process improvement/7-8 July 2011 Meeting
This page is currently a draft.
Purpose of the meeting: Review the code review/deployment/release process, understand pain points, and prioritize fixes.
Regarding these notes: These are draft notes; errors or misunderstandings are likely. Photos will be added shortly and are necessary to make sense of this document. Most importantly, the purpose of this meeting was to have a quick, high-throughput conversation about some of these issues -- key changes, e.g. regarding the release process, will still be discussed and iterated further, especially with key folks who couldn't be physically present (e.g. Tim, Chad).
Attendees:
- ThoughtWorks: Jonny Leroy, Hai Ton
- WMF: Roan Kattouw (Developer/Code reviewer/Deployment engineer/...), Alolita Sharma (Director, Features Engineering), Howie Fung (Sr. Product Manager), Tomasz (Director, Mobile + Special Projects), Erik Moeller (VP Engineering/Product), Brion Vibber (Lead Software Architect), Rob Lanphier (Director, Platform Engineering), CT Woo (Director, Technical Operations)
Day 1: July 7, 2011
Visual documentation
The following "value stream" illustration was created during the meeting with sticky notes, and then translated into an illustration by ThoughtWorks, to capture the current development/deployment/release process. Some errors probably occurred during the transcription process -- if so, they can be corrected in File:Wikimedia Engineering Process Flow.svg.
The red boxes below identify problems with the current process/flow. The orange boxes describe improvement strategies.
Quickstart
- Timeboxed
- Iterative, feedback driven
- Business value focused
What does success look like?
- ID Pain points
- Agree on areas to improve on
- We think a better future is possible
- Shared view of what's painful
- Don't burn out dev teams
- How does this apply when we're 2x in size? 4x? (Scaling agile)
- Useful + original output out of process
- Path to developing protective and productive team culture
- We can move faster than the world around us
- Newcomers can see the what and why
- Understand the goals of this quickstart
- Have team thinking, not top down
- Clear understanding of bottlenecks and how to address
- Avoidable mistakes <10% of product delays
- mix and match open source and agile processes to suit Wikimedia
- processes that are acceptable to the community
What does failure look like?
- Agile is a failure
- No one comes back tomorrow
- Descending into chaos
- this meeting doesn't have clear goals
- stuck in methodology dogmas
- no plan
- no one owns next steps
- community can't understand where to participate
- denial of problem
- don't have a path to org/community buy-in
- devs here think this is a colossal waste of time
- not doing anything differently on Monday
- paralysis
- no identifying/acknowledging constraints
Hopes/Concerns
- less about agile, more about doing the right thing
- short feedback loop with volunteer developers
- unique and open development process that scales
- Come up with a process that makes us less open
- Alignment of org structure and community
- balance of consensus and need for speed
What is the purpose/goal of the meeting?
- Broadly: to get an understanding of the current state of the system.
- Also include thinking about what a better future state would look like
- may or may not include some thinking about how to get there (this seems unclear still as to how in-scope it is, and may depend on time!)
Value Stream Mapping
Most of this was a visual exercise of mapping out the development process; photos TBD
Considering a process as a system with inputs, outputs, stakeholders, and flow of value between various parts of the system.
Distinct beneficiaries (roughly):
- developers
- wiki*edia editors & admins
- wiki*edia readers
- third-party mediawiki users
Different flows affect those parties differently:
- WMF deployments for sites we run (push updates to *our* servers: bug fixes, feature updates)
- Releases for third party use (in larger chunks; sometimes tarball releases lag behind Wikimedia site updates by several months, harming third-party users; some developers have more interest in third-party use than Wikimedia sites)
- There's interest in having many more small deployments than we have releases, but also in keeping releases more frequent than they are now.
- Different release schedules for extensions sometimes happen, especially for Wikimedia-specific ones.
MediaWiki Release
(Working backwards)
- Release MediaWiki Tarball (4-11 months)
- RC defects fixed (release blocker issues are marked in bugzilla for easy finding)
- Community reviews RC
- Release candidates/Betas
- MW/Install-specific stabilization (fix things that didn't matter for the WMF deployment)
- WMF production fixes
- (from WMF Deployments)
WMF Deployments
(Working backwards)
- Profiling, production testing
- WMF production fixes: backporting fixes from trunk to deployment, release branches
- test.wikipedia -> canary releases -> en-wp release -> release to rest (see the sketch after this list)
- test2.wp.com (future)
- cut production branch
- Extension selection
- Prototype environment for testing
- Backport fixes
- Local testing
- Bug fixes (mainly on trunk)
- Code review on Rel branch
- Branch cut
- Code review on trunk
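For illustration, here is a minimal sketch of the staged rollout order noted above (test wiki, then canary, then en-wp, then the rest). The deploy_to() and looks_healthy() helpers are placeholders, not the real sync/monitoring tooling:
<source lang="python">
# Hypothetical sketch of the staged rollout order; deploy_to() and
# looks_healthy() are placeholders, not real WMF deployment tooling.

STAGES = [
    ["test.wikipedia.org"],      # test wiki first
    ["canary wikis"],            # small canary set
    ["en.wikipedia.org"],        # largest single project
    ["all remaining wikis"],     # everything else
]

def deploy_to(target, revision):
    """Placeholder: push `revision` to `target` (e.g. via the sync scripts)."""
    print("deploying %s to %s" % (revision, target))

def looks_healthy(target):
    """Placeholder: check error rates / profiling data after the push."""
    return True

def staged_rollout(revision):
    for stage in STAGES:
        for target in stage:
            deploy_to(target, revision)
        if not all(looks_healthy(t) for t in stage):
            print("problems at stage %r -- stop here and fix/revert" % (stage,))
            return False
    return True

staged_rollout("r12345")  # made-up revision number
</source>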
Extension Development
(e.g., central auth, abuse filter)
- Extension deployment to WMF
- Branches are not good
- Extensions for every feature ideal (Brion)
- Roan usually reviews everything to trunk stage before deploying "WMF Blessed" extensions to production
Requirements Gathering
- WMF desired (ideal process, in reverse)
- defects -> loop back to development as needed
- data collection --> loop back into priorities into sprints
- wmf "blessed" extension deployment to wp
- review on trunk
- push to prototype
- development
- chunk out tasks & estimate
- priorities into sprints <-- feedback from production
- identify user stories
- functional design (high-level specs)
- community working group created / ..... (something)
- mockups created
- roadmap priorities identified
- product roadmap exercise
- product research
- strategic planning (abstract)
- Community desired
- Community bug comes in
- Open loop for triage
- No process for community review of features
- Consensus
- Don't capture metrics
- Prototype - deployment wonky
- Bad at reaching out to people to get feedback
- Feedback only given when something concrete is delivered
- Not enough people doing reviews
- Some revisions take a long time to review
- Release event poorly defined
- No formal bug-finding for release
- Code check-in may break trunk
- Release assumptions may not be all valid
- Not good about making data publicly available
- Hard to verify a bug was fixed because trunk has moved on
- Person expert in review not tasked with review
- some environments don't match production environment
- refusal to merge something that was tagged
- community involvement in WMF initiatives
- shoehorning things into extensions
- some bugs deferred until later
- no final criteria to deploy to wikipedia
- work on installer bugs
- release frequency vs. deployment frequency
- Community may not want frequent tarball releases
[mucho discussion on deployment cadence, how making it significantly faster could influence problems/solutions and what it would need -- flesh this out!]
Day 1 takeaways
Erik's takeaways from day 1: We've got some pain points and solution paths which are fairly well understood (keep the backlog small! chunk things! push out code regularly! increase test coverage!) and some which are a little more complex. In the latter category we have questions such as: When does something get pushed into an extension, and how/when do extensions get reviewed/deployed? Pushing towards a cadence of even weekly deployments from trunk seems doable -- but I see the risk that we just end up shifting part of the problem to complex extension reviews. Are there measures we should use -- even as simple as LOC -- to say: This extension will go through the regularly scheduled review window, vs. needing additional resourcing? (A rough sketch of such a size-based check follows below.) Should we be more aggressive about getting extensions at least into labs/prototype staging wikis?
And there's the "invisible" problem of improvements to the process in areas where we're currently doing very little: iterative interaction testing, design reviews, application of a style guide, QA testers following test plans... On the one hand, we want to get to a point of keeping the backlog of existing process burden (i.e. code review) small, but on the other hand, we may have to create new process burden in order to ensure higher quality -- which in turn could slow us down a little again.
Branching models, code review tooling improvements, and even decisions around the type of VCS we want to use will flow naturally from the process end goals we define. So, if we have a good vision of where we're trying to go, I think the tooling can be improved incrementally.
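Purely as an illustration of the "even as simple as LOC" idea above (not a decision from the meeting), a minimal sketch that counts changed lines in a unified diff and flags anything over a made-up threshold for extra review resourcing:
<source lang="python">
# Illustrative only: the threshold and the routing rule are assumptions.
REVIEW_THRESHOLD = 500  # changed lines; made-up number

def changed_lines(diff_text):
    """Count added/removed lines in a unified diff, ignoring file headers."""
    count = 0
    for line in diff_text.splitlines():
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            count += 1
    return count

def review_route(diff_text):
    n = changed_lines(diff_text)
    if n > REVIEW_THRESHOLD:
        return "needs additional review resourcing (%d changed lines)" % n
    return "fits the regularly scheduled review window (%d changed lines)" % n
</source>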
Day 2: July 8, 2011
A little re-hashing of yesterday -- what was good:
- [Tomasz] having the visual flow layout is helpful in getting a big picture; something we can show people to help explain what's going on
- [CT] Didn't realize there was a process shift two years ago until it was mentioned yesterday
- [Erik] Visual model captures what we know well, but doesn't capture what we don't know or don't do. Talked about some concrete solutions yesterday, even though some things have been said before; concerned that we end up with weekly deployments but still have extension stoppage (for non-WMF extensions). CR problems are kind of close to home, but the other problems we know much less about
- [Alolita] Will be interesting to see which areas you guys (TW) think we didn't cover well
What was bad:
- [CT] Didn't talk much about which constraints we have
- [Alolita] Exceptions, edge cases not discussed
- [Erik] We won't know what we don't know
- [Erik] Other depts funnel things to engineering, not always anticipated (need survey in 2 weeks). Don't have a good process for assigning resources to these things, means disruption
- [CT] This is basically capacity management, supply and demand
- [Howie] Fundraising yanked some resources away completely
- [RobLa] We should parking lot things more aggressively
Agenda Friday
- Retrospective (above)
- TW's impression
- What did we miss yesterday
- Flow
- Path in problems
- Path out problems
- lunch (ordered into the conference room, might talk through lunch)
- backlog
- parking lot
- prioritization
- backlog deep dive
- retrospective
- wrap-up
CT says we need to address risk management re changes. Parking lotted, will probably be covered in path out. --> Parking lot
TW's impression
- Cost of change is not linear but exponential. Defects discovered late. Should make changes before they become expensive to fix, as Erik said. Community involvement, buy-in into early mock-ups will be discussed.
- [Howie] Constituencies change over time too. Later stages reach different people. Problem is how to reach those people earlier and get feedback sooner. Problem is not entirely straightforward because the audience is not a consistent group of people and, [RobLa] adds, is self-selected
- Capacity planning. Seems to be a lack of clarity/visibility
- [Alolita] Mismatch between requirements and capacity planning, exist on different layers, out of sync. Not just engineer optimism leading to lower estimates, but we also have lots of things to do
- [Erik] "We aspire to do a shitload of things, but we're understaffed"
- Getting visibility into real throughput, likely time for completing something, track, improve
- QA! Big hole needs plugging. Goes back onto cost-of-change curve. Get QA right after a feature is finished by the dev
- [RobLa] We're hiring one QA person. That's just one person
- [Jonny] "Not having a person doesn't mean you shouldn't do that activity"
- [Hai] Multiplex devs into QA and plan capacity accordingly
- [Howie] Can we map out QA process? I've never seen one work the way it's supposed to. Docs for QA (functional spec) typically take a lot of time to develop
- [Alolita] Usually done by the same dev writing the unit test
- PARKING LOT!
- Suspect user stories are way too big. Bulges through pipeline rather than small chunks. Does not allow rapid feedback, everything has to be done first. Divide into smaller chunks
- [Erik] Definitions of "done", "rollout", "moving on" are not clear --> parking lot "definition of done"
- [Hai] We'll talk about this later -- introduce a binary notion of "done"
[Jonny takes over from Hai]
- 17 months (commit to release) is a big number, good place to start from
- Lack of QA. Not very much between local testing and deploying to production. Most places have too many steps instead of too few. Natural issue of growth, higher scale needs more steps, more formalities.
- [RobLa] Not just a function of growth, also of culture. "Wikipedia is the canonical example of a system that does not work in theory but works in practice." Fact that we can pull this off without QA is sort of a badge of honor.
- [CT] This is chaos, if you're not managing it it's not scalable
- [Jonny] Reliance on hero culture? 17 months number indicates this is breaking down
- [Erik] Should explore unconventional models of testing. Example: UploadWizard. Instead of multiple staging grounds, run a development version parallel to normal version, in production, as opposed to pointing them to a prototype-like system.
- [Jonny] How good is A/B testing infrastructure?
- [Brion] Depends on what you're A/Bing. For JS and usability things it was fine (usually needs the feature to have been built with A/Bing or at least per-user enabling/disabling in mind) -- see the sketch after this list
- [Jonny] Need functional QA. Some things need to be done in production or production-like environments (performance, user adoption). For performance, roll things out on select nodes only? [Roan] Nice when it's possible
- [Erik] Non-interactive Prototypes? How can we use these/are they effective?
- [Hai] Prototypes range from paper to HTML+JS
- [Jonny] If there's people who won't give feedback until there's something real, make it real-ish for them
- [Howie] Parking lot: how do we create urgency and deadlines? 17 months wouldn't have happened in a for-profit company, would've had financial consequences. We don't have external consequences, usually, when we don't meet our deadlines
- [Jonny] If the community can see "my commit is here, likely wait time is N days" ...
- [Hai] We'll talk about flow, queues and running dry later
- [RobLa] We've been overambitious about what we're trying to do, but we have lots of basic things to get right
- [Alolita] There's constant reprioritization
- [Brion] But should there be? Releases should constantly happen as part of the normal process -- we should ensure that there's enough capacity to do ongoing releases without affecting feature groups. [Howie note: I think Brion's question is *very* important -- I think we should make the tradeoff decision explicitly, but with an understanding of the true costs on both sides]
- [Howie] In other orgs, there's a cost of not meeting your goals
- [Jonny] Predictability, comes from visibility
- [Howie] Accountability --> parking lot
- [Jonny] Lots of different paths to production -- perhaps narrow down and optimize it
- [Erik] Standardized toolkit for dark launches etc. would help
- [Jonny] Force multiplier of how you judge quality
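As a side note on the A/B discussion above (see Brion's point about per-user enabling), here is a minimal, hypothetical sketch of deterministic per-user bucketing -- the experiment name and split are made up:
<source lang="python">
import hashlib

def ab_bucket(user_id, experiment, variants=("control", "treatment")):
    """Deterministically assign a user to a variant.

    The same (user, experiment) pair always lands in the same bucket,
    which is what makes per-user enabling/disabling workable.
    """
    digest = hashlib.md5(("%s:%s" % (experiment, user_id)).encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# e.g. serve the experimental UI only to users in the "treatment" bucket
# ("upload-wizard-v2" is a made-up experiment name)
if ab_bucket(12345, "upload-wizard-v2") == "treatment":
    pass  # enable the experimental feature for this user
</source>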
What did we miss?
- [RobLa] We have talked about the canary deployment piece, but tools for actual deployment are pretty crude. scap is a series of involved shell scripts that evolved rather than were designed. The first thing new deployment engineers are told is to "ignore the warnings!" (bad signal to noise in the tools & error reporting complicates recurring processes)
- [Roan] Work is going on / perennial plans are being made ;-) regarding improvement of sync scripts
- [RobLa] General technical debt, e.g. globals
- [Roan] There's been major refactoring, maybe sometimes actually freeze refactoring because it causes integration issues
- [Hai]: "interest" on technical debt is compound
- [Jonny] Pet issue: Branches diverge, and cost of backporting increases as divergence increases. Dislike long-lived feature/project branches for that reason.
- [Brion] The only branches I like are the ones that are regularly kept in sync with a master branch. A lot easier to do in git.
- [Hai] In daily standups, have the story wall available for everyone to see; get this in people's faces
- [Roan] SSL project example of cross-functional cooperation: ops (Ryan), devs (Roan etc) needing to poke at different sides of it to make it happen...
- [CT] Dependencies: to have more view into what's happening in the front end of the process
- [RobLa] Hidden dependencies -- a seemingly simple feature has tentacles into remote areas of the code
- [Erik] Many things can be manipulated by the community, that creates issues and dependencies. Mostly JS (Gadgets, user/site scripts), configurable features (giant title blacklists with complex regexes conflicting with UploadWizard that expected simpler settings).
- [Roan] content-specific issues such as particular pages with certain history patterns can cause problems that weren't foreseen, leading to extra processes to try to avoid hitting them. Was limited to admins (~1000), now to a smaller group*
- [Tomasz] very few people that fully understand enough about what the dependencies are
- [Roan] many people can write code, but few people understand scalability of code
- [RobLa] it's important to find ways to do this that still fit with our community culture -- simply restricting everything to a few people loses some of our ability to connect with, bring on new devs
- Pushing features to production earlier in smaller pieces, doing more iterative design & testing from there can be advantageous. (ex UploadWizard: could have gone out a lot sooner, but once we did get it out we had it in limited use and iterated). Design may need to take this sort of parameter into account.
- [Hai] Typical way to look at the problem: What's the minimal viable feature?
- [Jonny] You guys still need to do a lot of testing in production -- how to manage risk by keeping those in small chunks?
- [RobLa] some parts of community (e.g. enwikinews) are more amenable to running latest code than others
- [Roan] incubator deployment -- example of a community-created feature extension that got ignored for about 2 years; once it finally got some dev attention, Roan was able to review it in a few hours and get issues fixed within a few days, then finally deploy it. We need to shorten that delay time before the actual work gets done on things
- [Hai] one thing that works for others is a "member advisory committee"
- [Roan] sounds kinda scary and controversial
- [Brion] is that worse than the status quo? (unaccountable cabal of employees! omg)
- [Erik] we have been pretty good about distinguishing between things that we need to address versus things that are just resistance to change [...] don't institutionalize things too early, start with incremental improvements to process
- [Howie] compares w/ the working groups we tried for features -- how would it differ? [standing vs ad-hoc committees? selection?]
...
- [Erik] Review committees exert a conservative influence; we don't need to be more conservative on feature development than we already are!
- [Tomasz] good thing for fundraising team was publishing the backlog, who's assigned to what, and *what isn't assigned* in a visible place
... visibility ...
- [Robla] lots of places to put this info, maintaining it is a big job. Don't have good visibility into who's *looking* at what info, so we don't know where to concentrate effort. E.g., based on feedback, monthly reports are appreciated, but what about project pages -- who's reading these?
... various ways of reporting... guillaume's monthly tech reports...
- [Tomasz] fundraising team pages -> main place to put stuff during management of the project, so it's not an extra step to report it.
- [Erik] don't underestimate how much work it takes to translate to human-speak
- [Hai] Summary of issues discussed:
- Single prioritized backlog
- Product definition team
- Member advisory committee
- Prototyping
- JIT requirements - defer some decisions to the last *responsible* moment so you can gather info, but don't wait until the *last moment*!
- driven by both variability in the resourcing as well as variability in requirements definition
- Visible aggregated backlog
- Velocity: relative estimation (relative to requirement a, requirement b will take n times longer to implement than req a)
- [Erik]: How to translate relative into absolute? (a worked example follows after this list)
- [Erik]: Integrate estimation with bug tracking process?
- [Jonny]: Two types of bugs
- Bugs that occur during development: don't estimate, just fix
- Bugs coming out of bugzilla -- new work coming in, should be prioritized alongside other stories
- [Jonny]
- Begin with groups/teams (features, special projects, general engineering, ops)
- For a given week, a team is assigned a list of things
- [Erik]: e.g., Visual Editor team. There's a set group of people. Every week, they pick something off the backlog. For this backlog item, they go through design --> development
- [Erik]: is the 20% model one we can build off of? Or do we need to do something different?
- [Jonny]: hope is that 20% is enough to keep backlog at a steady, low level
- [Erik]: Fast track -- need a process for judging when something in extension review queue will happen
- [Robla]: Core code review vs. extension review
- [Jonny]: everything but very small chunks of work need to be prioritized
- [Robla]: What's the threshold?
- [Alolita]: 30-45 min? That's what Roan was doing
- [Jonny]: Extension review -- there should be product input to help determine desirability
- [Robla]: Mark H. has been hungry for this type of feedback
- [Erik]: Need some mechanism for review, but reviewer may be different for different types of extensions (e.g., extension that touches UI should have Brandon review)
- [Robla]: There will be things where the code reviewer doesn't know to ask (e.g., how does he know to consult brandon?)
- [Brion]: training for code reviewers to make sure they know what to look for and how to escalate should help with this
- [Robla]: Parking lot issue: who gets commit access?
- [Jonny]: Miniproject: Code review process -- need to drill down. Won't fix this today.
- [Roan]: Need to develop with community -- post half of plan to wikitech-l
- Get to 0
- What do we do when we're at 0 (steady state)
- What to do about extensions?
- Committer access?
- If we move towards patch review, do we need to improve that process?
- [brion] patch/pre-commit review is informally done on bugzilla, irc, mailing lists for some things -- but probably needs to be more regular and reliable! like non-wmf extension reviews, these can be very ad-hoc. But catching things before commit is usually less painful than catching them after.
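One common way to answer Erik's "relative into absolute" question above (an illustration, not something decided in the meeting): estimate stories in relative points, measure how many points the team actually finishes per week (velocity), and divide. All numbers below are invented:
<source lang="python">
# Invented numbers, purely to show the arithmetic.
backlog_points = {"story A": 1, "story B": 3, "story C": 5}   # relative sizes
completed_points_per_week = [6, 4, 5]                          # measured velocity samples

velocity = sum(completed_points_per_week) / float(len(completed_points_per_week))
remaining = sum(backlog_points.values())

print("velocity: %.1f points/week" % velocity)                 # 5.0
print("estimated time: %.1f weeks" % (remaining / velocity))   # 1.8
</source>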
Lunch break
Food was consumed.
Recap
Goal: Get big vision & prioritized list of doable stuff + possible owners.
Continuous integration: Instead of having big pain towards the end of the development process (e.g. complex branch reviews + merges), deal with integration issues on an ongoing basis. When stuff hits version control, first run basic build tests + quality checks. (Re: quality checks, WMF has CheckVars - get that into CruiseControl!) A rough sketch of such a commit-triggered check follows below.
[broadly this is covered by the existing automated testing tools & notifications of breakages -- to the extent that they're done and used. not all things are yet covered]
- unit tests, parser tests: cruisecontrol, pings irc
- test swarm/qunit: http://toolserver.org/~krinkle/testswarm/user/MediaWiki/
- detailed integration work is still needed to make sure everything pings everything consistently and runs on each commit (unit tests currently a little funky), but mostly this is working already.
- more functional, integration, installation, UI, UX tests will be nice :)
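A rough sketch of the commit-triggered checks described above, assuming a hypothetical hook that runs the existing test suites for each new revision and reports failures. The commands/paths and the report() helper are assumptions; the actual CruiseControl/IRC wiring isn't reproduced here:
<source lang="python">
import subprocess

# Commands are assumptions about the checkout layout, not the actual
# CruiseControl configuration.
CHECKS = [
    ["php", "tests/phpunit/phpunit.php"],   # unit tests
    ["php", "tests/parserTests.php"],       # parser tests
]

def report(message):
    """Placeholder for the IRC ping / mail notification."""
    print(message)

def run_checks_for_revision(rev):
    """Run the basic build/quality checks for one committed revision."""
    failures = [" ".join(cmd) for cmd in CHECKS if subprocess.call(cmd) != 0]
    if failures:
        report("r%s broke: %s" % (rev, ", ".join(failures)))
    return not failures
</source>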
Functional deployment tests (using selenium etc to test an actual web instance) are not conceptually much different from the unit tests & parser tests we do now -- set up a private instance and run the code. Should be able to extend to support more of those with similar infrastructure.
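A minimal sketch of such a functional test, using the Selenium WebDriver Python bindings against a private test instance; the URL and the assertions are placeholders:
<source lang="python">
from selenium import webdriver

BASE_URL = "http://test-instance.example.org"  # placeholder private instance

driver = webdriver.Firefox()
try:
    # Load the main page of the freshly deployed instance and do a trivial
    # end-to-end check -- same idea as a unit/parser test, but through the UI.
    driver.get(BASE_URL + "/wiki/Main_Page")
    assert "Main Page" in driver.title

    # Exercise search as a simple functional path.
    box = driver.find_element_by_name("search")
    box.send_keys("Sandbox")
    box.submit()
    assert "Sandbox" in driver.title or "Search results" in driver.title
finally:
    driver.quit()
</source>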
We need more tests & more test coverage -- we know this will make the overall tests run *slower* which is bad. Parallelization/distribution can usually help to scale these with the automated tests (but having some key tests very fast for individual devs is very useful as well)
UI testing -- selenium vs whatever is still up for debate, but we should be able to figure something out.
- [Erik] there's still some question on utility of the UI click-style tests overall
- [various] some discussion of fragility, ways to make them less fragile
- [Jonny] Usually it's useful to plan a 'testing pyramid' -- more & faster unit tests, fewer & slower UI tests. But both are needed.
- [Robla] Consider putting off things where we're still hiring -- QA lead isn't here yet, we may be able to move faster on it once they're in.
- [Brion] we have been improving testing a bunch without a QA lead though, we should be able to make some more improvements in the meantime -> [question about balance/priorities/availability of labor, probably we'll end up doing something in the middle]
Process improvement backlog
This was again a post-it exercise, summarizing/recapping and prioritizing the backlog of improvements. Photos TBD
Specific (actionable) <----------------------> Fluffy (needs definition) continuum
<below list may be incomplete -- fill in with photo of the chart>
Specific:
- code review: get to 0, test integration, etc
- increase visibility into ...
- more production-like testing
- Improve continuous integration
Specific-ish:
- protect the teams
- formalize & sandbox canary instance
- info flow: specs -> code -> documentation -> release notes: have these prepped for releases, but also ongoing for deployments. users want to know what will affect them!
Middling:
- Capacity planning -- our ability for throughput, planning for that (velocity, ideal engineering days, etc)
- [erik] hairy complex extension reviews, etc tend to be the biggest surprises on our time usage
- UI and functional testing [need communication flow from project specs]
Fluffy:
- Faster feedback for various community .... ?
- Design changes in small thin slices
- Get limited set of community involved
[priorities being ranked]
1. Manage Code Review Process (12 votes) - RobLa
- get to Zero backlog!
- steady state for Patches and Extensions
- change review process
- [Robla] risk: we may not be able to reach 0 as fast as we hope. Realistic numbers will take a while to reach?
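A rough sanity check on the "realistic numbers" caveat above, with entirely invented figures: if reviews land faster than new unreviewed revisions arrive, the time to reach zero is roughly the backlog divided by the net drain rate.
<source lang="python">
# All numbers invented, purely to illustrate the shape of the estimate.
backlog = 1200            # unreviewed revisions today
reviews_per_week = 250    # reviewer throughput
new_revs_per_week = 180   # fresh commits needing review

net_drain = reviews_per_week - new_revs_per_week
if net_drain <= 0:
    print("the backlog never reaches zero at this rate")
else:
    print("roughly %.0f weeks to reach zero" % (backlog / float(net_drain)))  # ~17 here
</source>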
2. Protect the Team (8 votes) - Erik
This will need to become the general way we do things: ensure that teams don't get distracted when they tackle a big problem like improving CI tooling, migrating to git, or whatever.
Improve Continuous Integration (CI) Process (8 votes) - RobLa
- Build/Test/Deploy Framework
- [Robla] risk: we have related pieces that are already super high priority (eg het deploy) that need to keep going in the meantime
- [Erik] do we have the right people & priorities assigned to these? do we need to do more to make sure some of those pieces get done first to free up these new things?
Increase Deploy Frequency (7 votes) - Robla
- (but not for Release)
- [Roan] if we get 1.18 out August-ish that leaves us with 3-4 month backlog ... we could then concentrate on catching up to 1.19 and get it done and out in a much shorter period. Then consider deploying straight from trunk... (or similar scary fast stuff)
- [Erik] confirming general agreement that after the 1.18 release is the time to change the release process (robla, brion agree \o/)
- Separate release and deployment processes
- [Roan] doing the 1.19 as branch-and-release-almost-right-from-trunk probably good -- then consistent cycle from there?
- [Robla] will still be some funkiness as we have to fix installer issues etc for a release -- but in this model it'll interfere less with our deployments, making the work more localized to the release process.
- [Jonny, Robla, Brion etc] -- looking at the CR stats and how we've had good slopes down to 0 and then lost momentum; making sure we stay *at* 0 going forward is a new problem! Continuous integration & tools improvements -> should probably help to keep us going. "Deploying every week keeps you honest!" Some of the CI infrastructure's already there; continuing to improve it will also help us reach 0 while we're still working on it.
- Separate "get to 0" and "stay at 0"?
- [Robla] changing revert culture is one of our trickiest bits (we're _starting_ on this, but people still tend to freak out when they get reverted! make sure we can do what's needed but more smoothly)
- [Erik] CR improvements & CI improvements should be doable in parallel... QA will tend to pop up as next thing?
Enhance/Identify & Use more Profiling Tools (6 votes)
-> some in ops land, needs developer work as well (built-in tools etc)
- [Roan] per domas we know there's not enough info in the profiling to identify the root cause of performance problems that are based on content markup (eg scary templates)
- [Erik] there's the _measuring_ piece (verify what is problems) and improvements. find out what kinds of things we need to measure that will be useful to devs
Perform more UI & Functional Testing (6 votes) -
- include test plans & specifications
Have a Focused Release on Performance Improvement (4 votes)
- [Erik] Important, but not yet clear when we can make that push.
Establish Big Rule - Design changes in small releasable portions (4 votes)
Improve Process for moving bugs from triage to backlog
Increase visibility into all work queues (3 votes)
Formalize and Sandbox Canary Releasing (3 votes)
Knowledge Sharing to avoid Truck Factor (& SPOF) (3 votes)
Use more automated testing (3 votes)
Visualize and Pay Down technical debt (3 votes)
Get focused group of Community involved in direct strategy discussion (2 votes)
Estimate and Prioritise Extension & Hairy code reviews (2 votes)
Most folks felt this was pretty important, but more a "phase 2" kind of improvement once we have the core code review and deployment process stabilized.
Improve Deployment Tools (2 votes)
Faster feedback from various Community groups earlier in the process (2 votes)
The key folks here (features/product) weren't as strongly represented in this particular meeting, but it's still something we have to improve.
Product iteration (post deployment) (1 vote)
Release notes for deployments (1 vote)
Brion argued for a more informative release notes process as part of the continuous integration process.
Capacity Planning (1 vote)
Mostly this was seen as an outflow/precondition for other activities, e.g. getting the CR backlog down to zero.
Parking lot issues
[edit]Process changes & scaring volunteers
- [RobLa] changing how we handle reverts will be a pain point. Deciding that some people won't have SVN access may be a big pain point...
- [Jonny, Brion] change to git would be a natural point when processes, permission bits and expectations change -- especially if we're already at a good deployment cadence at that point
- [Brion] for the most part these changes should give vol devs more & faster feedback, which is good and will be welcomed
- [RobLa] sumana (volunteer dev coordinator) will be instrumental in the process of communicating process changes :D
Definition of done
- [Hai] example from behavior-driven development: binary "it does this / it doesn't do this" checklists with user stories -- until they're all done you ain't done! (a tiny sketch follows below)
- [Erik] Let's get better at expanding "doneness" in the context of e.g. code review when a revision has UI implications and fails a design review
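A tiny illustration of the binary "done" notion Hai describes: a user story carries explicit acceptance criteria, and it counts as done only when every one of them passes. The story and criteria below are made up:
<source lang="python">
# Made-up story and criteria, just to show the binary checklist idea.
story = {
    "title": "Reader can upload a file from the article page",
    "criteria": {
        "upload form reachable from the article page": True,
        "file appears on its description page": True,
        "design review passed for the new UI": False,
    },
}

def is_done(story):
    """Binary: either every acceptance criterion passes, or the story isn't done."""
    return all(story["criteria"].values())

print(is_done(story))  # False -- one unchecked box means not done
</source>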
Accountability
(howie's question)
urgency, commitments vs deadlines
- [Roan] some projects *aren't* urgent -- a 2-week delay on UploadWizard usually isn't a problem and is worth taking the time to fix things. [so we should distinguish between urgent and non-urgent things when time needs to be adjusted?]
- [Erik] 20% approach for CR etc doesn't do a great job at telling what's getting done; some risk there compared to more explicit assignments. consider having some assigned overall process ownership for the 20% work.