There’s no script for agile – what can we learn from Improv theatre?

It’s always good to hear a new perspective on a seemingly familiar subject: it often provokes thought, inspires action, and encourages change.  Paul Goddard has introduced one such perspective on coaching teams in his book ‘Improv-ing Agile Teams’.  There are plenty of books with similar titles, but the clue here is ‘Improv’ – the book shows readers how improv theatre techniques and games can benefit agile teams, both aspiring and experienced.

I’m confident improv needs no introduction; the likes of Paul Merton, Josie Lawrence and, more recently, The Mighty Boosh have brought the art to the mainstream. Something you quickly realise reading the book is just how spontaneous improvisers are.  There are subtle cues and openings the players use to support, promote, and challenge each other.  Take a look at the expressions on the faces of Ryan Stiles and Jeff B. Davis in the first minutes of this Improv-a-ganza clip: you’ll see reactions to unexpected events, and rapid adaptation to changing circumstances.

The idea of using improv techniques to improve team collaboration is an appealing one, and the principles of agile and improv seem closely aligned: success in entertaining an audience, like success on complex, creative projects, requires trust, collaboration, respect and a sense of play.

The book draws out five principles:
Safety – concerned with creating an environment in which teams (players in improv speak) can trust and rely on each other, supporting, encouraging, accepting failure and learning together.
Spontaneity – the business of developing an open mind, to increase creativity and receptiveness to others’ ideas.
Storytelling – as a method of engaging, inspiring and developing empathy, not just applicable to conversation, but also in user stories.
Status – identifying status, and manipulating it in a safe way to identify issues and further creativity and collaboration.
Sensitivity – the ability to sense, or listen and respond appropriately to others, in order to work effectively with them.

These are novel choices for what must, by extension, be agile principles, but the book explains the thinking behind them, and grounds them in current theory – for instance, pointing out under safety that absence of trust is one of the five dysfunctions of a team.  It’s nice to see Mihaly Csikszentmihalyi’s flow model get a mention too.

What I really like is that these principles are used to introduce improv games and tips designed to further the maturity of teams.  These range from simple, everyday exercises that most teams would be comfortable with, to techniques that may challenge both team and coach.  It’s interesting to run through the techniques and ask “Could I run this with my team?”  If the answer is ‘no’ then it’s likely attention to one of the five principles is warranted.

There are fifty or so techniques in the book, some quick and simple and great for livening up stale meetings (like conducting a stand-up without using the letter ‘S’), while others require more time and set out to explore a particular area with a team.

One technique is ‘The Psychic Stand-Up’, where team members aim to give someone else’s daily report.  I’ve seen this in action, and it’s fascinating, especially when vociferous people give reports on behalf of more timid people: there’s a strange mix of relief (that’s what I’ve been wanting to say) and shock (did they really just say that?).

I found the status techniques particularly interesting, especially as status is an area less explored by agile.  The Dinner Party game is aimed at understanding and recognising status, and the responses it elicits.  In brief, players are invited to an imaginary dinner party, split into two groups and asked to show either high or low status behaviours.  The groups then switch for the same amount of time before retrospecting on the experience.

Sounds simple?  I’d suggest reserving judgement until you’ve tried some of the techniques, either in a safe environment or on the ‘stage’ of the shop floor.  There are great benefits to be found; experimenting with improv is both challenging and thrilling, and the learning is definitely in the doing.

P.S.
In a similar vein, I’d highly recommend the book Artful Making, an unlikely collaboration between a Harvard professor and a director/playwright.  It draws agile and leadership lessons from the world of makers, particularly experiences running a busy theatre and working with groups of actors.

An Agile Cambridge Reading List

Welcome to my not-entirely-complete Agile Cambridge 2015 book list.  Why isn’t it entirely complete?  For starters, many books were mentioned in passing, Kevlin Henney in particular throwing out quotes and titles at a rate so ferocious my free conference biro melted into wispy nothingness.  I’ve listed books which either cropped up in multiple talks or seemed interesting and novel (groan).  Secondly, the list currently captures recommendations from talks I attended – a fraction of the content across three tracks and around fifty talks, tutorials and workshops.  If you think I’ve missed any crucial reads do let me know; I’d be happy to include them.

In all cases I’ve aimed to link to the home site, rather than anyone’s favourite online re-seller.

Mindset by Carol Dweck, on building a growth mindset to reach our potential.

Drive by Dan Pink, influential book on motivation, summarized at TED.

Thinking Fast and Slow by Daniel Kahneman, the source of the System 1 and System 2 thinking styles.

Eight Behaviors For Smarter Teams, and Smart Leaders, Smarter Teams by Roger Schwarz.  A PDF is available here.

Minimum Viable Book by Emily Webber & Amy Wagner, a growing collection of stories from people who create things.

Lateral Thinking by Edward de Bono, in which the inventor of the technique describes its value and approaches.

Being Agile In Business by Belinda Waldock, primarily aimed at non-software businesses, but relevant to software organisations, introduces agile with practical methods.

A Practical Approach to Large-Scale Agile Development: How HP Transformed LaserJet FutureSmart Firmware by Gary Gruver, Mike Young, Pat Fulghum.  This book has solid agile and continuous delivery content.

Build Quality In by Steve Smith, Matthew Skelton et al (I’m one of the al!) – experience reports from a diverse range of continuous delivery practitioners.

More Agile Testing by Janet Gregory and Lisa Crispin, who have a neat mind map of the book.  In their words: “two world-renowned agile test experts ask tough questions about agile testing – and provide definitive answers”.

Five Gators Of The Apocalypse?

Wacky gators arcade machine

I generally dislike war and military metaphors for teams and making activities.  Admittedly IT has a lot to learn from the military in terms of teams and scale, but in the wrong hands these metaphors seem to encourage unproductive conflict and counter-collaborative behaviours. This strikes me as odd, because although prepared for conflict, the military spend much of their time avoiding or minimising it.  However, I do need to call upon a slightly violent metaphor to describe the relationship between the constraints encountered when building a continuous delivery capability in an organisation.

The process of change reminds me of the nineties arcade game Wacky Gators, where a number of cheeky gators poke their heads out of caves, and you biff them with a large hammer, hands, or other appendage depending on personal preference. You never know which gator will appear, or when, and more than one might show up at once.

When encouraging continuous delivery (and by extension DevOps) those gators might be named: Culture, Tools, Organisation, Process, and Architecture.

These five are interdependent constraints, each affecting the others.  However, while inside Wacky Gators is a fairly simple randomiser determining which gator will surface, behind the scenes our organisations look more like a hideous H.R. Giger meets Heath Robinson mash-up.  We can’t readily inspect them to determine what to change.

My theory is that when one constraint is eased it will reveal a new constraint in a different area. This is a tenet of most agile and learning methods – surface a significant issue, deal with it, see what surfaces next.  Often a method, and our expertise, focuses on just a couple of areas: we’re well versed in solving problems with technical solutions, or in improving our own team’s capability in isolation.

A good continuous delivery capability involves the whole engineering organisation (a great one involves the entire organisation). This means it is crucial to consider all five constraints, and when there is a problem, be ready to shift focus and look for the solution in one of the other areas.  In fact, this simple shifting may lead to the root of a problem.  Do reams of process indicate a risk-averse culture?  The solution may not be more process, but a different attitude.  Are those tools covering up or compensating for some thorny, unspoken issue no one dared to face?  When trying to improve delivery capability there may be a temptation to replace an old tool with an improved version, but maybe the need for that tool (and its associated overheads) can disappear with an ounce of process and a healthy dollop of collaboration?

Returning to our Wacky Gators metaphor, the big question is: how are you playing?  Do you simply wait for that same familiar gator to return – the one you’re most comfortable dealing with?  Do you hover where you are comfortable while other opportunities pass by, or are you nimble, and brave enough to tackle each constraint as it appears?

Footnote:
While I was looking up Wacky Gators, I couldn’t resist a peek in the machine service manual, where I found this uncanny quote on success, as applicable in the game as it is in change:
“The player does not score extra points by hitting harder; a light touch will activate the circuits and will lead to higher scores.”

 

Change, Disparity and Despair

Models? Meh.  We all know Box’s quote – “essentially, all models are wrong, but some are useful” – a quote generally heard seconds before someone presents an alternative model to the one you’ve just put forward.  However, there’s an undeniable utility to models and diagrams: they convey concepts in a fashion that people can quickly understand and start to explore together.

One of my favourite diagrams, and the theme of this post, is The J Curve of Change by David Viney (Creative Commons 4.0).  Change in this instance is just about any planned change that impacts an organisation.  This is a distinct, but close, relation to the Kübler-Ross individual change curve.

This J curve aims to show that for any desired improvement in capability (or fitness to achieve some purpose) there will be a decline in capability before there is an improvement.  This is virtually impossible to capture and graph accurately, but we can talk in general terms about transitions, duration in a state and trending.

Annotated J

The Danger Zone
When introducing Kanban, David Anderson points out that the time and depth an organisation is comfortable spending in the trough reflects its appetite for risk.  Push change deeper or for longer, and the organisation’s appetite for risk is exceeded. End result: change agent gets fired.  Of course this danger zone applies to just about any substantial change, and practitioners of Scrum, DevOps, DSDM and so forth should be just as wary.  A further observation is that if a change is halted once in the trough, things don’t magically return to the start state, and a second iteration through the curve is required – starting from a point of reduced capability.

The swan song
In a recent talk I added a hump near the start of the transition.  That’s the point where people hear about an incoming change.  Sometimes, if the instinct is to resist, this inspires efforts to prove the original methods can and will work.  Established practices are applied with renewed diligence, energy and fervour, often leading to a short-term uptick in capability.  This reinforces arguments that change is unnecessary, but will not yield the desired improvements in the long term.

Change Disparity?
While the J curve represents an organisation’s progress during a period of change, it assumes that people are moving – or adopting – at more or less the same pace.  In fact this is seldom true, and different adoption rates lead to significant gaps in understanding and approach.

This ‘change disparity’ hampers collaboration and can be as damaging as any silo or clique.

For simplicity let’s consider early and late adopters.  The reasons for being in those groups may vary greatly: work allocation, meddling by people with influence, environment, personality, good or bad luck.  Unaware of each other’s circumstances, both groups get frustrated. There’s a temptation to say the late adopters are at fault and should hurry up, but running too far ahead and expecting people to keep up, or ‘just get it’ regardless of circumstance, seems no better.  This is common with technology zealots, characterised by a disparaging attitude towards people not using their tool of choice, or voicing concerns about it.  Of course this reaction actually discourages adoption, and serves to hinder the change they would like to bring about.

The awfulness of the situation reminds me of the Inuit game ‘Ear Pull’ in which two players face each other, linked by a string around their ears, and pull.  In opposite directions. You can almost feel the pain in this clip.

Ear Pull Leroy

Note the string does not cause pain by itself – the pain comes from pulling in opposing directions, forcing the other player to follow at a rate they are not comfortable with.  If both players agreed a distance, or a form of feedback, they could move together without discomfort.

This is something to consider when introducing change, new tools or ways of working.  This adoption gap, or change disparity, is easily overlooked but potentially damaging.  There are numerous solutions, but it all starts with recognising the problem – spotting when rates of change are outside productive limits – and a willingness to do something about it.

Measure for Measure – exploring DevOps adoption metrics.

Confession: I find measuring stuff a fascinating challenge.  Sometimes measuring is straightforward, like the fuel gauge on your car, but often it’s more complex.  The voltmeter in your car quietly drains the battery while measuring its health.  The motivation survey in your inbox will quietly change your motivation.

It’s termed the observer effect: the act of measuring affects the thing you’re trying to measure.  Measuring, or even just assessing, the output of groups is similarly taxing; even the act of posing a question can project your own biases.  Last year I got interested in measuring the progress of steps towards a DevOps culture. At Nokia Entertainment’s MixRadio development emporium we’d had good continuous delivery tools in place for months, but weren’t certain our culture continued to improve.  Complacency was a risk, but we couldn’t tell how large.  It seems one of the hardest parts of change is keeping things going in the period between that initial burst of enthusiasm and when practices truly take root as habit.

So, I shared my thoughts at DevOps Days London, and received some really useful feedback from the crowd there.  I’ll let you in on a secret though: I wasn’t happy, it just wasn’t rigorous enough. Paul Swartout and I created metrics focused on adoption; we wanted simple, no-cost methods that anyone could use, without needing a big budget or corporate sponsorship. We called them ‘Vital Signs‘ and they comprised: Cycle Time, Shared Purpose, Motivation, Collaboration and Effectiveness.

The main aim was to benchmark, ready to see which way our desired ways of working were trending. However, I also wanted to capture elusive things: just how ready was the team to ignore organisational set-up and work together?  We also didn’t want to bias towards DevOps, Scrum, Kanban or any of our other preferred methods; if someone found a better way, we wanted to learn.

The art was to find metrics in which these desirable behaviours surface, and of them only cycle time was measurable with any consistency.  We learnt an awful lot from the other metrics, particularly the free-form comments.  The problem was that all that prose was impossible to graph, impossible to track.

Frustratingly, the things that are hard to measure are amongst the most critical. They often indicate how long you can sustain a pace or practice.  It is very easy to focus exclusively on productivity, but you might be slowly killing your workforce, as Amazon recently discovered.  In general, engineering teams aren’t a temporary construct; they need to be looked after for longer than the holiday season.  Engagement and well-being over time are going to drive quality and productivity as much as anything else. (Pseudo science here).

So why raise this now?  Well, I was enjoying a coffee at the DevOps Café, and was interested to hear a side remark about metrics by the ever eloquent Damon Edwards and John Willis.  They described the following as their ideal set of metrics:

  • Cycle Time – From customer report to change in production.
  • Mean Time To Detect (an issue)
  • Mean Time To Repair (or make a change)
  • Quality at source – or escape distance, how far do errors get before they are noticed?  Worst case: customer.
  • Repetition Rate – Does the same issue keep happening, or are we learning?
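
To make these concrete, here’s a rough sketch of how the metrics might be computed from issue data.  The record fields, timestamps and the repeat-detection “fingerprint” are my own invention for illustration, not anything from the podcast:

```python
from datetime import datetime, timedelta

# Hypothetical issue records: when the customer reported it, when we
# detected it, when the fix reached production, plus a fingerprint
# used to spot repeat issues. All field names and values are made up.
issues = [
    {"reported": datetime(2015, 3, 1, 9, 0),
     "detected": datetime(2015, 3, 1, 9, 30),
     "fixed":    datetime(2015, 3, 1, 12, 0),
     "fingerprint": "timeout-on-login"},
    {"reported": datetime(2015, 3, 2, 14, 0),
     "detected": datetime(2015, 3, 2, 16, 0),
     "fixed":    datetime(2015, 3, 3, 10, 0),
     "fingerprint": "timeout-on-login"},
]

def mean(deltas):
    # Average a list of timedeltas.
    return sum(deltas, timedelta()) / len(deltas)

# Cycle time: customer report to change in production.
cycle_time = mean([i["fixed"] - i["reported"] for i in issues])

# Mean time to detect, and mean time to repair.
mttd = mean([i["detected"] - i["reported"] for i in issues])
mttr = mean([i["fixed"] - i["detected"] for i in issues])

# Repetition rate: share of issues whose fingerprint was seen before.
seen, repeats = set(), 0
for i in issues:
    if i["fingerprint"] in seen:
        repeats += 1
    seen.add(i["fingerprint"])
repetition_rate = repeats / len(issues)

print(cycle_time, mttd, mttr, repetition_rate)
```

The trend matters more than the absolute numbers: if repetition rate isn’t falling, the team probably isn’t learning.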

Used together, these are just genius, because it’s very hard to achieve good results without a healthy, productive relationship between teams.  Furthermore, it doesn’t matter how you describe what you’re doing – DevOps, OpsDev, Agile, Scrum or one great big group hug – those metrics don’t test adherence to a methodology.  I guess mavens like these will often drive common practice, and these metrics are very much evident in Puppet Labs’ excellent surveys (some words on the magic here) (for DevOps archaeologists* early John Allspaw thoughts here).

But there is one metric which went unmentioned, my old measurement nemesis: engagement.  I suspect that you could be proudly watching all the above metrics trending positive, and be rudely awoken by burnout or a rash of exit interviews.  To avoid surprises, shouldn’t the impact of change on key people be monitored too?  Retention is a favourite for this – a good indicator, but people actually leave for a lot of other reasons.  If someone departs for a role closer to the best trails, it should not be seen as the first sign of DevOps culture crumbling into nothingness.

So while it seems DevOps operational metrics are mature, there is more work to be done to understand if we’re getting results and simultaneously creating a healthy, sustainable, culture.  That suggests three dimensions to measure for DevOps, and other flavours of adoption.

  • Efficiency – Our key measures like cycle time, and mean time to XYZ, are they improving?
  • Effectiveness – Is the right kind of work being done, and steering the team towards success in their organization’s field?
  • Engagement – Have we created an environment for people to be at their best?  Are we making the most of our talent?

Of course, these three need to be balanced – focus on one could easily be to the detriment of the others.  Measuring engagement, culture change and people things will always be hard, and methods flawed, but we should avoid measurement inversion, and strive to measure things not just because they are simple, but because they are valuable.

* Note to recruitment agents:  DevOps Archaeologist is not a real role, don’t go there.

Before we learn, must we first unlearn?

Sometimes I read something and think, this is awesome, how did I miss this one? Sometimes, I even get carried away and write more than 140 characters about it…

One such article explores the concept of ‘unlearning’, as a precursor, or catalyst, to learning. Learning feels like a common denominator across agile methods. But agile is not just about learning how to get better at building stuff, it’s about learning how to introduce and encourage change.

The article is ‘Unlearning Ineffective or Obsolete Technologies’ by William H Starbuck, currently a visiting professor at Cambridge. The article is an absolute goldmine, but Starbuck is also remarkable for having a CV that has to be one of the most simultaneously impregnable and impressive of all time.

The abstract grabbed me straight off: “Often before [people] can learn something new, people have to unlearn what they think they already know.”

We’re familiar (and often lazy) with concepts like keeping an open mind, and perhaps techniques like de Bono’s thinking hats, which invite other perspectives. Deliberate unlearning, though, seems counter-intuitive and somewhat destructive, especially if the ultimate aim is to learn more.

The article is packed with great anecdotes to reinforce the messages, from how a navy spent weeks bombing aquatic mammals they believed were submarines, to exploding steamboats and Sony Walkman development.

Frankly I’d recommend you stop reading this now and read the full paper, but if you don’t have the time available, allow me to offer a summary:

Starbuck suggests that there are three key points to recognize in order to encourage learning.

1. Learning cannot happen without unlearning – current beliefs are blinkers; something is required to demonstrate that people should no longer rely on their current beliefs and methods – “Expertise creates perceptual filters that keep experts from noticing technical and social changes”.
2. Organizations make it difficult to learn without first unlearning. Policies and practice are often created from individuals’ beliefs, and these policies mesh together to form a kind of structure in which it is difficult to change a small part. This creates a self-perpetuating situation discouraging change, where it becomes hard to change anything without dismantling the whole system.
3. Unlearning by people in organizations may depend on political changes. I think the key point here is that unlearning may need to be enabled by people changes. The motivation may be political or something more mundane; the change in influencer is the significant part. This is because information is interpreted by people, and influential people create ways of working, culture and policies. Any modification to these may be seen as a threat to the individual and suppressed, rather than explored as a suggested change. Starbuck suggests this is why senior managers are prone to overlook, and misinterpret, bad news.

I hear things that support these views time and time again, phrases like “our agile culture was going nowhere until so-and-so joined, or so-and-so finally left”. Other disruptions seem to foster unlearning – particularly stronger collaboration and a better appreciation for the challenges of other teams, something very visible in the DevOps movement.

Starbuck goes on to identify methods, or viewpoints, to encourage unlearning.

Dissatisfaction – A common reason for doubting, and reconsidering, current approaches. Starbuck observes that this can take a long time, presumably requiring a high level of discontent before people are motivated to seek change.

“It’s only an experiment” – There is a mind trick that goes on when we are in experimental mode: we take calculated risks, and we are more observant; we want to evaluate outcomes, rather than preferring a particular one. Often there is less to lose if the results aren’t as predicted. As Starbuck puts it: “[Experiments] create opportunities to surprise”. As a side note, Cynefin recognises the value of this, and promotes safe-to-fail experiments – nice post here.

“Surprises should be question marks” – In other words, when something surprises us we should not dismiss it, or categorise it as an interesting anomaly, but look to see if it challenges any of our beliefs or assumptions.

“All dissents and warnings have some validity” – Starbuck admits that this is a little overzealous, and there are sources of dissent that don’t provide value; nevertheless, in many cases there is something to gain. Often these comments are attempts to warn or inform, and merit attention.

“Collaborators who disagree are both right” – or rather, there are elements of truth in both arguments. In these situations the art is discovering how the seemingly contradictory elements can both exist. This doesn’t mean creating a compromised win-win situation. It means challenging assumptions and seeking new models until there is understanding.

“What does a stranger think strange?” – Strangers haven’t been exposed to, or adopted, your ways of working, and therefore are more likely to challenge and make valuable observations. In my opinion this is yet another reason to pay close attention to new hires, especially if they are new to the industry or fresh from college.

“All causal arrows have two heads” – If I’ve interpreted it correctly, this indicates that we should change the way we consider flow, by recognising that there are two directions for each path, and we should seek out overlooked feedback routes. Starbuck illustrates with a great example: mass vehicle manufacturing was once based on accumulating inventory. Materials were shaped into components, components into cars, and customers selected cars from the vehicle inventory. That’s one direction for a causal path. Taiichi Ohno saw the opposite direction and created Toyota’s just-in-time system, where the absence of inventory to serve customer demand stimulated flow.

“The converse of every proposition is equally valid.” – This pithy phrase is almost immediately caveated to indicate that not all propositions have a valid converse. I guess the aim is to train ourselves to explore the converse, a neat method of flipping our perspectives. Are leaders really leading their people, or just servants to them?

Summing up then, Starbuck puts forward a set of useful techniques to help us overcome our inherent biases and our tendency to filter out what we consider threatening or bothersome. Even if you don’t agree with the techniques, it’s a useful reminder, and the goals are worthwhile. These techniques may prevent some more catastrophic event – like being fired, or going out of business – being the trigger for unlearning. The term unlearning is convenient but perhaps a misnomer; nothing is discarded. It is more a recognition that current beliefs, ways of working or processes no longer serve us, and that it is time to seek alternatives.

The DevOps Ball Point

I can still remember the first time I played the ball point game, five years ago in Angela Druckman’s Scrum Master class.  Much of the theory has evaporated, but I can still recall the buzz from flow and the significance of the do – retrospect – do cycle.  Feelings, it seems, last longer than facts.

So when I was looking for a way to articulate concepts for the Experience DevOps workshop, I started to wonder if something based on the ball point would be useful.  In fact, such a game could have relevance beyond DevOps and software development; ultimately we’re dealing with common problems with systems and the teams that work with(in) them.

There are a number of variations on the ball point; back at Nokia Entertainment’s Bristol hub we’re indebted to Karl Scotland for his Kanban-oriented Ball Flow game, particularly for the tools to generate Cumulative Flow, Throughput and Lead Time diagrams for the game.  Rapid iterations mean it’s a great way to see how those statistics change, and to experiment with them in a safe environment.
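
For anyone curious how those statistics fall out of the game, here’s a minimal sketch computing average lead time and throughput from per-ball start and end times.  The timestamps and iteration length are invented for illustration; Karl’s actual tools will differ:

```python
from datetime import datetime, timedelta

# Hypothetical records from one iteration of the game: when each ball
# entered the system and when it emerged at the other end.
balls = [
    {"start": datetime(2015, 6, 1, 10, 0, 0),  "end": datetime(2015, 6, 1, 10, 0, 20)},
    {"start": datetime(2015, 6, 1, 10, 0, 5),  "end": datetime(2015, 6, 1, 10, 0, 40)},
    {"start": datetime(2015, 6, 1, 10, 0, 12), "end": datetime(2015, 6, 1, 10, 1, 5)},
]

iteration_length = timedelta(minutes=2)

# Lead time: how long each ball spent in the system.
lead_times = [b["end"] - b["start"] for b in balls]
avg_lead = sum(lead_times, timedelta()) / len(lead_times)

# Throughput: balls completed per minute of play.
throughput = len(balls) / (iteration_length.total_seconds() / 60)

print(avg_lead, throughput)
```

Plot the count of balls started versus balls finished over time and you have a cumulative flow diagram; the vertical gap between the lines is work in progress, the horizontal gap approximates lead time.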

When thinking about a DevOps variation there were a few things I set out to achieve: I wanted players and observers to feel the effects of interrupted and uninterrupted flow, and I wanted constraints on throughput.  The process reminded me of covering a classic song – the ball point is so simple and elegant, should I mess with perfection?

If you are new to it, the classic game is described brilliantly by Declan Whelan.  In summary, it is played with ball pit balls.  Players aim to add magic (I prefer thinking of it as value) to as many balls as possible.  Magic is added when a ball is thrown; everyone in the team must touch the ball, and the ball cannot be passed between neighbours.  The game is played in iterations, with a chance to retrospect between each.

Here are the modifications for DevOps:

Team Growth – the game shows how flow can be interrupted as more people join an organisation.  For this we start with two or three players (you might think of them as founders of a startup), then add a couple more.  At this point flow is good and the game is fun.  Then, as business is booming, we add more people…

Silos – We show the disruption that can be introduced by unchecked growth.  In DevOps parlance this is a physical version of the infamous wall of confusion – a crude way to represent organisational, cultural or geographical separation.  In the workshop, a projection screen and a couple of flip charts served us well, forcing only one touch point between teams, and limiting visibility.

Incentives – As if the odds weren’t already stacked against our players we give teams on either side of the wall of confusion separate incentives, and managers.

Constrained Throughput – We add a constraint to the throughput of the system, meaning that players need to consider downstream flow.  This is analogous to planning and provisioning systems.  It also applies in other situations; for example, it’s the need to synchronise with marketing or device programs.  For the game we simply use paper cups to receive balls; each has a capacity, and each has a cost: 10 seconds’ notice to spin up.  If a cup isn’t ready the team either waits, and gets the point, or drops the ball.
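
The cup constraint is simple enough to sketch as a toy model (all numbers made up, not part of any published game materials): the only thing that matters is whether the team’s notice beats the spin-up time.

```python
SPIN_UP = 10  # seconds a cup needs to "spin up" once requested

def total_wait(notice, interval=8, balls=10):
    """Total seconds spent waiting when the team requests cups `notice`
    seconds before each ball reaches the end of the line."""
    wait = 0
    for n in range(balls):
        arrival = n * interval                   # when ball n reaches the end
        cup_ready = arrival - notice + SPIN_UP   # cup requested `notice` s early
        wait += max(0, cup_ready - arrival)      # shortfall becomes queueing
    return wait

print(total_wait(notice=12))  # early involvement: cups always ready, no waiting
print(total_wait(notice=4))   # late requests: 6 s wait per ball, 60 s in total
```

Trivial on paper, but exactly the lesson players feel in the room: early involvement with the downstream constraint costs nothing, while late requests turn directly into queues or dropped balls.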

Start person is not the end person – The added complication of the end person placing balls in cups, and potentially requesting more, means it isn’t practical to have the same person as the start and end point.

All of this sounds complicated, but it takes moments to explain and understand.  Constrained throughput probably has the greatest impact on the players; successful teams will practise early involvement to ensure that a cup (capacity) is available when they need it.  If they don’t, they slow down, queues emerge, or the ball gets dropped…

At some point I’ll publish detailed instructions.  Creating games and materials that can be shared was one of the key aims Matthew Skelton, James Betteley and I shared when we created the workshop.  As a taster, the game breaks down into a number of iterations:

Iteration 1 – Founders
Two or three people play the game, this is partly a demonstration for everyone in the room.

Iteration 2 – Startup
Add a few more people, play the game

Iteration 3 – Growing Pains
Form two teams, keep founders in same team. Add wall of confusion.
Play the game
Hold separate retrospectives.

Iteration 4 – Incentives
Flow may not have improved enough – add incentives, hint at a reward.  The downstream team should aim to pass as many balls as possible.  Upstream should aim not to drop a single ball.
Play the game
Hold separate retrospectives. (To reinforce the point, discourage cross team communications)

Iteration 5 – Joined up incentives.
The last iteration may well have been a mess, so join up incentives for both teams.
Encourage both teams to retrospect together.

Iteration 6 – Flow
Remove the wall. Final round to remind people of the feeling of uninterrupted flow.  No retro, end on a high.

We had fun with the DevOps Ball Point, and the attitude of the participants was fantastic.  There was even palpable relief when the wall of confusion was broken down; if only it were so easy in a real organisation!

The Witch Hunt Retrospective

It’s Halloween night… what better way to celebrate than with a good old-fashioned witch hunt?  I’m well aware that agile retrospectives are intended to be collaborative exercises in continuous improvement, or Kaizen if you will.  There are many formats and styles to suit different teams, situations and facilitators.  A well-run retro can lead to insights, learning opportunities, and a greater sense of team.  Let’s face it though, sometimes you just want to get in there and blame someone.  So here it is: my guide to singling out that person or team in the guise of a retrospective.

1. Choose a facilitator with a vested, or emotional, interest.  Ideal for cross-team retros, and particularly important when you need to place blame.  Make sure the facilitator, whose main role is to create an environment that encourages openness, collaboration and thought, wants a particular outcome, or has something significant to gain or lose.

2. Prepare a timeline upfront. There is nothing more annoying than discovering that other people have a different perception of events.  Presenting your own timeline, and not inviting comment, is a sure path to the outcome you want.  Keep a lookout for techniques like Future Backwards; these could easily undermine you by revealing things that don’t support your bias.

3. Ambush with data.  Data is a powerful tool; used right, you won’t even need to point the finger, you can make it obvious which team or person is the source of the problem.  Carefully prepare a graph or visualization, don’t warn anyone, and keep the source data to yourself.

4. Exclude key people. If you include everyone who was involved, or affected, you might find unexpected insights, different opinions, or worse, someone might challenge the way you think about things.  Some people articulate well, and may challenge you.  Exclude these meddlers to ensure a quick conclusion; your conclusion.

5. Ignore the system.  A lot of great thinkers will tell you that often the system, or the environment someone is working in, has a significant impact on behaviour and productivity.  Avoid these blame dilution techniques by keeping the focus firmly on people’s performance and what happened in the moment.

6. Steer contributions. As a facilitator it’s wise to be impartial, so if you really must get ideas from the group, keep things heading towards the inevitable by judging input.  Praise ideas that mirror your own; be scornful of anything different or new.  Pro tip: if you feel threatened, pretend the glue on a post-it has failed, then conceal the fallen note under your shoe.

7. Publish incomplete or unreviewed conclusions.  If you haven’t got the outcome you wanted from the retro, there’s another opportunity.  Wash away everything that happened by writing up a summary.  Describe things as you want people to see them, make sure you are first to publish, and publish widely.  On no account get the summary reviewed, in case other people’s ideas or actions creep in.

So that’s how you run a Halloween retrospective witch hunt; it’s a sure way to find someone to blame, and to cut out all those awkward learning opportunities.  Of course, all of the above is written in jest, but underneath are what seem to be fairly common retro anti-patterns.  I’ve heard about these, seen them, and probably done them.  Point one, around knowing when to get out of the way, is the clincher.  It can be hard to recognize when we’re sleep deprived, over caffeinated, stressed or uninspired.  It is similarly hard to tell, or admit, if we’re too close to a subject.  Luckily there is a simple way to find out: ask.  Just don’t forget to listen.

Ok, come in…

So I’ve decided to try blogging again. Much to my own surprise, I started by deleting all my previous posts.  They were generally concerned with the epic battle to keep my rusty VW camper from the scrap heap, photography, and cameras.  Deleting the posts was quite a cathartic experience, given that these days we tend not to tidy up after our digital selves.  Like space junk, this string of debris can collide with us when we least expect it to.

I’ve left just one post, my first on WordPress, partly for nostalgic reasons, and partly to remind me to see other perspectives.  In the post I wrote: “why should I write down things which I already know, for an audience I don’t?”

It was a light-hearted comment, but since then I’ve begun to really understand the value of writing things down.  It helps consolidate thoughts, and learn through feedback. I’ve also been incredibly grateful for all the things I’ve learnt from blogs and talks, and glad that those authors did take the time to share their thoughts.

I still don’t know what this blog is about, but it might just touch on building things, Agile and DevOps.

Hello World

So here it is, my first blog entry. Blogging is not something which really
appeals to me, why should I write down things which I already know, for an
audience I don’t? It seems rather like standing behind a fence near the
high street and shouting random details of your day at work. Actually, I
don’t know that from experience, but maybe I’ll try it next, if blogging
doesn’t work out for me. Something I do know, however, is that a number of
clever people do rate blogging. Clever people have a frustrating habit of being
right about things, so it seems only fair that I should give it a try.
This first entry also calls to mind some wise words, said by one of those clever people in fact: it’s never too late to start.