What do points mean?


www.flickr.com/photos/vpickering/14669824798/

I’d like to think that my previous posts have provided informative, well-balanced, ego-free commentary on pertinent topics such as learning, change and even improv in agile.  Well, this one is different; it’s a good old-fashioned, vitriolic rant.  It’s about velocity, or at least what people do to velocity, a well-meaning, innocent and largely defenseless concept.  This post also isn’t a comparison of, or comment on, the value of velocity or estimating in general.  You’ll find plenty of good #NoEstimates conversation elsewhere.

Time and time again I see velocity abuse:
– Equating velocity to volume of output (more points equates to more productivity)
– Using velocity as a target, linked to incentives (The Scrum master shouts: “what do points mean? – prizes” )
– Assuming that velocity is a real number, double the engineers, double the velocity, right?

There are plenty of good velocity explanations around.  The way I see it, velocity is part of a system for:
a) Improving group estimation ability (mostly by encouraging exploration of work, and an appreciation of others’ roles)
b) Forecasting the rate at which work will be carried out by the team.

As such, velocity needs to be honest and free from interference, otherwise neither the estimation nor the forecasting outputs can be trusted.

Increase your velocity; Go big or go home!
It seems there is an obsession with velocity as volume: if a team knocks over 20 points in one iteration, it should better itself in the next.  25 points next time, anyone?

Consider, however, a situation where a team doubles its velocity compared to the previous two iterations; does that really indicate double productivity?  There are numerous factors which could contribute: perhaps they are not working in a sustainable fashion, or corners are being cut.  Was a large amount of unfinished work rolled over from the previous iteration?  In these circumstances we often have what looks like high-volume output, but is it to the detriment of other (typically operational) concerns?  What if a team halved its velocity in the last iteration?  Is the team failing, or is the organization not providing what they need to succeed?  Did they change the value of points to align with demand?  Is the team hitting its predicted points every single sprint?  Suspicious to say the least.

The key point is that a team’s velocity fluctuates over time, and there is often considerable variation as a team forms, and starts to understand its work, constraints and processes.
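Because velocity fluctuates, any forecast built on it is better expressed as a range than as a single “real” number.  A minimal sketch of what that might look like; the backlog size and velocities here are entirely invented for illustration:

```python
# Sketch: forecast a range of iterations remaining from recent velocities.
# All numbers are made up; the point is the range, not the values.

def forecast_iterations(backlog_points, velocities):
    """Estimate best/worst-case iterations remaining from recent velocity."""
    optimistic = max(velocities)
    pessimistic = min(velocities)
    return (
        -(-backlog_points // optimistic),   # ceiling division: best case
        -(-backlog_points // pessimistic),  # ceiling division: worst case
    )

recent = [20, 14, 25, 18]  # hypothetical last four iterations
best, worst = forecast_iterations(90, recent)
print(f"Roughly {best} to {worst} iterations remaining")
```

Presenting the forecast as “roughly 4 to 7 iterations” invites a conversation about uncertainty; presenting it as “5.8 iterations” invites a deadline.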

Velocity with benefits
To some extent velocity is a target for the team, but one that is most effective when set and owned by the group, rather than linked to incentives.  Pushing velocity onto a team, demanding or targeting an increase, is a dangerous, counterproductive practice.  Strongly linking velocity to incentives compromises its primary value as an estimation tool.  Organisations generally hire the smartest people available; do they somehow think those people won’t be smart enough to game velocity given even the smallest incentive?

“When a measure becomes a target, it ceases to be a good measure” – Goodhart’s Law

A further reason not to link velocity to incentives is the frame of mind it encourages during an iteration.  It seems preferable to focus on an iteration goal as opposed to churning through point-earning stories.  Strong focus on an iteration goal invites creative thinking and awareness of user goals – if we find a new way to achieve an iteration goal with the side effect of throwing away the remaining stories, we should get on with it, and not mourn lost points and old stories.

Once the rough quantity of ‘stuff’ to be delivered is agreed, it is all about execution; focus should shift away from what was estimated and towards what should be achieved.  Velocity is like your last order for team pizza: use it to inform quantity, but once the mountain of pizza arrives you and your colleagues just have to deal with it.  Constant reference to the original estimate serves little value.

Comparing Velocity
I’ve observed something of an obsession with velocity as a quantity, the cause of much sniggering and derision during Scrum of Scrums meetings.  A team with a velocity of 100 must be better than a team with a velocity of 20, right?  This reminds me of futile attempts to compare different people’s number of steps walked in all but the most sophisticated fitness apps. My Fitbit tells me I average 10,000 steps each day.  I happen to know that some of those ‘steps’ are activities like chopping firewood, lifting heavy coffee cups, and gesturing wildly at whiteboards. I can use this figure to compare how active I’ve been across different days, but to compare my number of ‘steps’ to another person’s would be fruitless. Just like an agile team sizing and executing its own stories, my context is unique.

I also see many attempts to compare, or harmonise, different teams’ velocities.  We can certainly compare estimating ability with some confidence: the standard deviation of predicted velocity against actual velocity can be useful both for forecasting and for prompting improvement conversations.  For instance, a team with a standard deviation of +/-5% on its estimates is more predictable than one with a deviation of +/-20%.  This of course says nothing about the team’s actual achievements against their potential, but predictability is a solid foundation for improvement.
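To make the comparison concrete, here is a hedged sketch (team names and all figures are invented): the spread of per-iteration percentage error between predicted and actual velocity gives a unit-free measure of predictability, regardless of how large either team’s point values happen to be.

```python
# Sketch: compare predictability, not raw velocity. All figures invented.
from statistics import pstdev

def estimate_spread(predicted, actual):
    """Standard deviation of percentage estimation error per iteration."""
    errors = [(a - p) / p for p, a in zip(predicted, actual)]
    return pstdev(errors)

# A 20-point team with small misses vs a 100-point team with big swings.
team_a = estimate_spread([20, 20, 22, 21], [21, 19, 22, 20])
team_b = estimate_spread([100, 100, 110, 105], [80, 125, 95, 130])
print(f"Team A spread: {team_a:.1%}  Team B spread: {team_b:.1%}")
```

Despite its much larger raw velocity, the hypothetical Team B is the less predictable of the two – which is the only comparison the numbers can honestly support.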

Often though the conversation is about stack ranking teams, and proving who does the most.  A flaw in considering only velocity is that it is solely an indicator of a team’s output of stories, and stories are by no means the only kind of value that teams add to an organisation.  A ‘slow’ team might be the one which fixes more defects, handles more support calls, or is more active in recruitment and in assisting other teams.  Velocity gives no guarantee of completeness, and could simply be the rate at which bug-riddled code is being unleashed on unsuspecting downstream teams.  This tends to indicate the need for a more varied and balanced set of measures.

…and furthermore
A big problem here is that first impressions last: velocity is such a convenient, accessible term that people intuitively grasp it; incorrectly.  This first impression may be very difficult to unlearn, and corrupted understanding spreads, particularly when carried by people with influence and a penchant for direction.  Thanks to these unstructors, before you know it you’re being asked why your velocity isn’t 400 like Team Over-achievers in the corner.  Analogous to technical debt, this is a kind of methodology debt: as unproductive habits calcify and become baked into culture, it is just as hard to pay down.

Ultimately I have a simple plea: think before you use velocity, consider the side effects and take time to educate stakeholders.  Ideally bring other perspectives on progress and discourage the comfort often brought by over-simplification.  This is especially true if you publicise raw velocity values outside the team; velocity is a fragile concept, and if misused it could leave the team with compromised estimates and a painful legacy.


INVEST a little more in your stories


https://www.flickr.com/photos/jakerust/16207445813/

I’m a big fan of the INVEST mnemonic, which encourages agile story authors to make their stories Independent, Negotiable, Valuable, Estimable, Small and Testable. There are plenty of great sources for more detail, so I won’t rehash them here. Having introduced a number of teams to INVEST I often find I need to follow up with three more items.  Leading to the notion of INVESTpul, and yes, before you say it, I really should figure out a better backronym:

Provoking – A good story provokes conversation; it is as much an expression of a wish as a challenge: how can this goal best be achieved given the constraints the team operates within?  All too often this element is lost, and work becomes about churning stories, regardless of their value, with the dangerous assumption that all the quality thinking has been done already.  Opportunities present themselves in different ways throughout an iteration, and we should remain alert and ready to seize them, to pivot based on new information.

Ubiquitous – The language of stories should be readily understood by the team, its stakeholders and drive-by observers.  Drive-by observers are often influential, and may even be the ultimate sponsors of the team; packing up late, I often see the leadership team walking the boards in the evening.  Their support of the team, and of agile, may be affected by how well they understand stories, and the impression they get from the board carrying them.  Ubiquitous language discourages technical terms, the presence of which often indicates that a solution to the story has already been agreed, diminishing connection to the user and narrowing the team’s potential to find other ways of achieving the same aims.

Legible – It seems incredible that this needs saying, but it does.  It should be possible to read stories easily.  This doesn’t just apply to handwriting; it means using sensible fonts when printing cards, and avoiding cramming information into small spaces.  The ideal story card passes a three-foot test: it can be read by anyone participating in a board-based discussion, like a stand up, which necessitates some members standing about 3ft away.

The Six Foot Test
While we’re on the subject, another distance themed test I like relates to the overall board. The six foot test is simple; from standing that far away what could someone learn about the team’s work?  That distance is deliberately chosen because generally you can’t read individual story cards, putting focus firmly on flow and the system of work. It is particularly useful to forget what you know and don a stakeholder hat for this exercise.  At a minimum I would hope to be able to determine the following:

  • The team name
  • Their high level goal
  • Amount of work in progress
  • Phases or stages work passes through
  • Who is doing what
  • Which work is blocked, or needs help

From the appointed distance I’d also look around the board: the presence of artifacts like a definition of done, column policies, burn down charts and metrics is a positive indicator.  Another interesting aspect is the presence, or otherwise, of playful elements; often these are reflections of trust and safety within the team.

So that’s how to write INVESTpul stories, along with some bonus musings on good board practice.  The author Antoine de Saint-Exupéry once said: “Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.”  So tell me, what would you add to, or remove from, INVEST?

Why is a Spike called a Spike?

http://www.flickr.com/photos/tulanesally/5244470514/

Unusually for an agile practice, it appears practitioners largely agree on what the term ‘spike’ or Spike Story describes:  a brief, focused effort to answer a question, explore a concept or investigate an issue.  Fortunately the unsettling feeling of general agreement is quickly displaced when we try to agree the origin of the term.

Kent Beck is generally recognized for introducing ‘spike’ to software development parlance, as part of the XP movement.  However, he has not been forthcoming on why that particular word was chosen, at one stage even avoiding the term because it was too nuanced:

Because people variously associate “spike” with volleyball, railroads, or dogs, I have begun using “architectural prototype” to describe this implementation.
– Kent Beck, Guide to Better Smalltalk

It is interesting too that Cynefin, much of whose focus is on experimentation, chooses not to borrow the term spike.  Understanding the source of the term is useful, and potentially interesting, given how often the concept is illustrated by real-world examples.  Listed below are some of the definitions I frequently hear; do let me know if you’ve heard others!

The Mountain Climber – A spike is another term for piton, used to anchor oneself to a rock face.  If the piton is secure it is safe to take that route and hang more gear (and actually yourself) from it; if not, it’s time to find another path.

The Railroader – Even within this single domain there are at least two derivations.  A spike is a huge nail used to secure track to sleepers, hence ‘spiking a track’ so that a train can move safely over newly laid rail.  Secondly, a spike, or similar shaped wedge, may be used to hold open a point (switch) and set the direction of subsequent traffic.

The Geologist – A spike is a kind of probe used to assess the layers beneath by inserting a hollow core, which fills with material, withdrawing the core, and inspecting the resulting sample.  From this a geologist may determine if there is firm foundation to build upon, or if there is anything of value to extract.

The Sci-Fi Fan – Named for the Buffy The Vampire Slayer character, a Spike disregards team norms and provides an excuse to go it alone with a focus on what is cool, often leaving behind a trail of destruction for others to deal with.  Actually, I just made this definition up, but all too often I see these behaviors.

The Builder – This definition, from Ron Jeffries, appeared during a debate on Stack Overflow: “Spike” is an Extreme Programming term meaning “experiment”.  We use the word because we think of a spike as a quick, almost brute-force experiment aimed at learning just one thing.  Think of driving a big nail through a board.

The Electrical Engineer – This one I remember from my brief time as an Electrical Engineer, in the days when it was far more necessary to tinker with hardware to support our software habit.  We used to spike relays to hold them on or off, isolating part of the circuit as a temporary measure to assess behaviour and assist problem solving.

The Statistician – I’m sure you’re familiar with this one; when charting values over time, a spike is a sudden sharp increase in those values, often of short duration.  When monitoring production systems this sharp rise frequently correlates with increased shouting and coffee consumption.  In terms of a spike story then, this is a burst of effort towards a goal, a sprint within a sprint.

Of course, there is another option, one that perhaps we’d rather not consider.  Thanks to a cognitive bias we all have (yes, even you) called the Halo Effect, we are prone to liking or believing things from certain sources.  Simply because we believe the term ‘spike’ was coined by a person we admire and respect, we believe it must have a respectable and admirable definition.  I’m afraid it is entirely possible that little thought was given to the term, the wrong word was used, it is an in-joke, or that the reference is something only the author would understand.  Luckily for agile and XP practitioners everywhere, we’ll probably never know.


The Witch Hunt Retrospective

It’s Halloween night… what better way to celebrate than with a good old-fashioned witch hunt?  I’m well aware that agile retrospectives are intended to be collaborative exercises in continuous improvement, or Kaizen if you will.  There are many formats and styles to suit different teams, situations and facilitators.  A well run retro can lead to insights, learning opportunities, and a greater sense of team.  Let’s face it though, sometimes you just want to get in there and blame someone.  So here it is, my guide to singling out that person or team in the guise of a retrospective.

1. Choose a facilitator with a vested, or emotional, interest.  Ideal for cross-team retros, and particularly important when you need to place blame.  Make sure the facilitator, whose main role is to create an environment that encourages openness, collaboration and thought, wants a particular outcome, or has something significant to gain, or lose.

2. Prepare a timeline upfront.  There is nothing more annoying than discovering that other people have a different perception of events.  Presenting your own timeline, and not inviting comment, is a sure path to the outcome you want.  Keep a look out for techniques like Future Backwards; these could easily undermine you by revealing things that don’t support your bias.

3. Ambush with data.  Data is a powerful tool, used right you won’t even need to point the finger, you can make it obvious which team, or person is the source of the problem.  Carefully prepare a graph, or visualization, don’t warn anyone, and keep the source data to yourself.

4. Exclude key people.  If you include everyone who was involved, or affected, you might find unexpected insights, different opinions, or worse, someone might challenge the way you think about things.  Some people articulate well, and may challenge you.  Exclude these meddlers to ensure a quick conclusion; your conclusion.

5. Ignore the system.  A lot of great thinkers will tell you that often the system, or the environment someone is working in, has a significant impact on behaviour and productivity.  Avoid these blame dilution techniques by keeping the focus firmly on people’s performance and what happened in the moment.

6. Steer Contributions.  As a facilitator it’s wise to appear impartial, so if you really must get ideas from the group, keep things heading towards the inevitable by judging input.  Praise ideas that mirror your own; be scornful of anything different or new.  Pro tip: if you feel threatened, pretend the glue on a post-it has failed, then conceal the fallen note under your shoe.

7. Publish incomplete or unreviewed conclusions.  If you haven’t got the outcome you wanted from the retro, there’s another opportunity.  Wash away everything that happened by writing up a summary.  Describe things as you want people to see them; make sure you are first to publish, and publish widely.  On no account get the summary reviewed, in case other people’s ideas or actions creep in.

So that’s how you run a Halloween retrospective witch hunt; it’s a sure way to find someone to blame, and cut out all those awkward learning opportunities.  Of course, all of the above is written in jest, but underneath are what seem to be fairly common retro anti-patterns.  I’ve heard about these, seen them, and probably done them.  Point one, around knowing when to get out of the way, is the clincher.  It can be hard to recognize when we’re sleep deprived, over caffeinated, stressed or under inspired.  It is similarly hard to tell, or admit, if we’re too close to a subject.  Luckily there is a simple way to find out: ask.  Just don’t forget to listen.