Visiting two teams last week and attending their Sprint Review, Retrospective, and Planning meetings, I saw a common thread of discontent. The dread was associated with a relentless, constant focus on continuous improvement.

One of the engineers put it bluntly: “continuous improvement feels like we are constantly failing.”

Two lunches and several coffees later, an anti-pattern emerged.

Product development with Scrum contains two significant loops. One is the loop that produces the desired potentially shippable increment. The other is the loop where we examine how we’re working and devise an experiment to pull into the next production loop, improving how we work as a team towards delivering to and delighting our customers. This is the loop on the right side of the diagram:


Last week, this is what I saw instead: the team talked about what worked, restated the stuff that didn’t work (perhaps already feeling like they were constantly failing), nodded to one another, sighed long sighs, and then one of the engineers (already late for another meeting) finally summed up the meeting, saying, “OK, let’s try not to submit all of the code on the last day of the sprint.” In effect, here’s what the retrospective felt like:


The anti-pattern is when Retrospectives become dreaded sessions where we look back at the last Sprint, make two columns of what worked and what didn’t, and quickly jump to some solution for the next Sprint. There is no scientific method involved: no data gathering or research, no hypothesis forming, and very little deep thought. The result is that you don’t get an experiment to pull into the next Sprint, and you miss the opportunity to “Scrum the Scrum” (pulling the improvement from the Retrospective directly into the next Sprint).

Retro(Pro)spective alternative

Instead of looking backwards, try asking: “With what we know and can remember from the last two weeks, what is an experiment we can run to help make the next Sprint more satisfying (to us, and hopefully, in the end, to our customers)?”

A practical example of “Scrumming the Scrum” involved a team struggling to increase pairing among its members. The team did a significant amount of data gathering and discovered that the two team members who paired throughout the sprint produced high-quality work, and lots of it. As a result, everyone on the team wanted to pair more. Had they jumped to a solution, they’d have ended the meeting with “OK, let’s pair more,” and likely would have seen the same result the following week: everyone wanting to pair more but not doing it.

Instead, during the retrospective this team split into two groups: one ran a 5 Whys exercise on “why do we pair,” while the other did a force field analysis of the forces that enabled pairing and those that impeded it. They learned that the physical space (a pairing table near a well-lit area) was what they wanted to experiment with next.

Their hypothesis: two pairing tables in well-lit, quiet areas would double the amount of pairing the team does.

Their experiment: build two pairing stations in well-lit, quiet areas.

This experiment was pulled into Sprint Planning and ended up at the top of their Sprint backlog. During Sprint Planning they discussed how best to build two tables in well-lit areas with the resources they had, at work and even in their storage areas at home. In the following Sprint, in addition to building the product, they spent half a day building pairing tables. At the end of the Sprint they examined and communicated the results: not quite double the amount of pairing, but close.

Some hints for better Retro(pro)spectives

  • Don’t jump to a solution. Leaving something unsolved and thinking about it deeply may be the better option.
  • If the Retrospective doesn’t make you excited to try the experiment, maybe you shouldn’t be trying it.
  • If you’re not doing any analysis of how to improve (5 Whys, force field analysis, impact mapping, fishbone diagrams...), you may be jumping to solutions too soon.
  • Vary your methods. If every Retrospective asks “what worked, what didn’t work” and then votes on the top item from each column, you’ll have a bored team very quickly.
  • End each Retrospective with feedback on the retro itself. It might seem a bit meta, but it works: continually improving the retrospective is recursively getting better at improving as a team.
  • Ask how, as the Product Owner, you are enabling the team’s search for improvement, and be prepared to act on any feedback the team provides.
  • There are no Scrum Police. Take breaks as needed. Deriving hypotheses from analysis and coming up with experiments involves creativity, and it can be taxing. Perhaps, every once in a while, go out together as a team and have a nice retrospective lunch instead.