Retrospective Experience – Happiness Index using a Niko Niko Calendar

In the last month or so, I’ve been trying to change tactics for how I approach increasing velocity. I think this all started with a tweet from Luis Gonçalves:

As I’m a coder by trade, I’d been challenging the team to improve our engineering processes, as that was what I was most comfortable with. The tweet made me realise I should invest a lot more effort in the team themselves.

I’d previously read about the Happiness Index technique but was reluctant to try it, as I was worried the team would say “I don’t remember what I was feeling”. As so often happens, this all slipped my mind for a while, until I came across something called a Niko Niko Calendar. I won’t go into the details of how these two techniques work, as the above two links are great, but the following quote leapt out:

Feelings are the fastest feedback I know

I don’t think all of the team were particularly comfortable with analysing their feelings, but I believe it was a very worthwhile exercise to do. At the end of the sprint we ended up with the following chart:

Niko Niko Calendar

As a team, we then talked through all the events that caused big changes in happiness and plotted the “average” happiness on a graph.

Happiness Index Graph

This really helped the team focus on the major events and led to some really great discussions. One of our team even said afterwards “that may have been the best [retrospective] yet”.

Summary

Using a Niko Niko Calendar to capture the team’s emotional state at the end of each day is brilliant. The information is clear for everyone to see, which allows other team members to offer help if someone is clearly struggling. As a fellow Scrum Master commented:

A fantastic information radiator

I would highly recommend any Scrum Master to try it out. If you do, I’d love to hear from you in the comments below or catch me on twitter.

Personal Retrospective

What went well

  • Changing the focus from engineering to people was fantastic

What could I have done better?

  • Considered the team as well as engineering a lot earlier, but at least I’ve started!

What should I not do again?

  • I tried to restrict the team to just :) , :| and :( in the hope that it would make analysis easier. That was a mistake: the team ignored the restriction anyway, and it made no difference to the analysis!

Add Grunt and ESLint to an MVC Project

This is part two of getting started with ESLint using Grunt, where I will show you how to configure ESLint to analyse an ASP.NET MVC project. In part one I set up our environment with node.js, Grunt-cli and finally Grunt for our project, but you couldn’t do much with it.

In this post, I’ll install ESLint, disable all the default ESLint rules, enable one specific rule and exclude some files from analysis.

To recap, I have a new ASP.NET MVC project in c:\myproject\WebApplication1 that also contains a package.json and Gruntfile.js:

C:\myproject\WebApplication1> dir -name
node_modules
packages
WebApplication1
Gruntfile.js
package.json
WebApplication1.sln
C:\myproject\WebApplication1>

Let’s get started.

Step 1 – Install ESLint and Grunt-ESLint

Like last time, npm makes installing things trivial. First, install ESLint:

npm install --save-dev eslint

Once that’s completed, install the ESLint grunt integration:

npm install --save-dev grunt-eslint

And finally install load-grunt-tasks, which saves a bit of typing in a minute:

npm install --save-dev load-grunt-tasks

Step 2 – Configure ESLint

eslint.json

To make it easier to change ESLint’s configuration, we’re going to use an eslint.json file. As you can probably tell from the name, it’s a text file containing some JSON that ESLint parses. The ESLint documentation is pretty good at explaining what all the options are, so I won’t do that here, but for now just create one containing the following:

{
    "env": {
        "browser": true
    },
    "globals": {
        "$": true
    },
    "rules": {
        "no-undef": 1
    }
}

This ensures the browser globals and the jQuery ($) variable are recognised by ESLint so they don’t throw false positives. It also enables a single rule: “no-undef – disallow use of undeclared variables unless mentioned in a /*global */ block”.
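
To illustrate the kind of mistake no-undef guards against, here’s a small sketch (the file and variable names are my own, not part of the project):

```javascript
// myapp-basket.js (illustrative) – a running total kept in one declared variable
var total = 0;

function addItem(price) {
    // Had this line been mistyped as "totl = total + price;", the no-undef
    // rule would flag "totl" at lint time, instead of the typo silently
    // creating a global at runtime.
    total = total + price;
    return total;
}
```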

As you will see in a minute, I personally like to disable **all** the rules, only enabling the ones I explicitly want to use. That’s personal preference; on legacy systems, enabling everything can leave you with an overwhelming number of issues to address.
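
For example, if you later decide to switch on a couple more checks, you just add them alongside no-undef. The eqeqeq and no-unused-vars rules below are standard ESLint rules, but picking these two here is purely my illustration:

```json
{
    "env": {
        "browser": true
    },
    "globals": {
        "$": true
    },
    "rules": {
        "no-undef": 1,
        "eqeqeq": 1,
        "no-unused-vars": 1
    }
}
```

Because we turn all the default rules off, only the rules listed here will run.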

.eslintignore

The next file that we need to create is .eslintignore. As the name suggests, this is an easy way of telling ESLint to ignore certain files and directories. Again I refer you to the documentation for more details, but for now, create an .eslintignore file containing:

# ignore everything in the packages folders
**/packages

# ignore everything in Scripts except files beginning with "myapp"
**/Scripts
!**/Scripts/myapp*

This tells ESLint to ignore all files inside the packages directory, i.e. anything you’ve got from NuGet. The last two lines ensure all files except those following your application’s naming convention – you have a naming convention, right? – are also ignored, i.e. jquery-1.10.2.min.js etc.
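
To make the patterns concrete, here’s a sketch of how they apply to some typical files (myapp-home.js is a hypothetical file following the naming convention; the others come from the default project template):

```
packages/jQuery.1.10.2/Content/Scripts/jquery-1.10.2.js   ignored by **/packages
WebApplication1/Scripts/jquery-1.10.2.min.js              ignored by **/Scripts
WebApplication1/Scripts/modernizr-2.6.2.js                ignored by **/Scripts
WebApplication1/Scripts/myapp-home.js                     linted (matches !**/Scripts/myapp*)
```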

Finally, all that’s left is to configure Grunt to run ESLint.

Step 3 – Configure Grunt to use ESLint

Before explaining the syntax, please edit your Gruntfile.js file to contain:

module.exports = function(grunt) {
	// section 1 - require modules
	require('load-grunt-tasks')(grunt);

	// section 2 - configure grunt
	grunt.initConfig({
		eslint: {
			options: {
				config: 'eslint.json',
				reset: true
			},
			target: ['WebApplication1/**/*.js']
		}
	});

	// section 3 - register grunt tasks
	grunt.registerTask('default', ['eslint']);
};

The more you play with Grunt, the more familiar this will become, but it’s basically made up of three sections: section 1 lists any requirements (“require” calls), section 2 is where you initialise Grunt, and section 3 is where you register tasks.

In this instance, I’m configuring the “eslint” task, telling it to use the eslint.json file, to turn off all the default rules (reset: true) and to lint every JavaScript file matched by the “target” pattern.

Finally, I register “eslint” as the default task. This simply means I can execute “grunt” instead of “grunt eslint”.

And if I do that, I get:

C:\myproject\WebApplication1> grunt
Running "eslint:target" (eslint) task

WebApplication1/Scripts/_references.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/bootstrap.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/bootstrap.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery-1.10.2.intellisense.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery-1.10.2.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery-1.10.2.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate-vsdoc.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate.unobtrusive.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/jquery.validate.unobtrusive.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/modernizr-2.6.2.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/respond.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

WebApplication1/Scripts/respond.min.js
  0:0  warning  File ignored because of your .eslintignore file. Use --no-ignore to override

✖ 14 problems (0 errors, 14 warnings)

And that's it! ESLint is now analysing the JavaScript files in my MVC project.

Step 4 – Next Steps

If you’ve got this far, you’re set to go. You will definitely want to edit the rules you’re using, but I’ll leave that up to you.

Please leave a comment below or catch me on twitter if you’re having any problems.

Getting Started with ESLint using Grunt

I recently had the chance to add ESLint to our workflow. I considered using it standalone, but as Grunt is becoming a first class citizen in Visual Studio 2015, I wanted to get more familiar with it now.

This is the first of a two-part guide to getting started with ESLint using Grunt. I’ll also help you understand how to enable other ESLint rules, as well as how to include/exclude files from analysis. I’m going to assume you have little to no experience of node.js, Grunt or ESLint and are running Windows. In this post I’ll cover setting up your environment, and next time we’ll install and configure ESLint to work on an ASP.NET MVC project. This guide could also be used to add ESLint to any project that contains JavaScript, or even on a different operating system, but I haven’t tested that, so won’t make any promises.

Once we’ve finished, you should be able to type “grunt” from a command prompt and ESLint will analyse your JavaScript, so let’s get started.

Step 1 – Install Node.js

Installing Node.js, which includes npm (node package manager – think nuget for node), is as simple as going to the Node.js homepage, clicking the “Install” button and executing the .msi file that downloads.

Once the install has finished, open a command prompt and type “npm”. If you’re presented with something like this, you’re good to go. If you have any problems, let me know in the comments.

Usage: npm <command>

where <command> is one of:
    add-user, adduser, apihelp, author, bin, bugs, c, cache,
    completion, config, ddp, dedupe, deprecate, docs, edit,
    explore, faq, find, find-dupes, get, help, help-search,
    home, i, info, init, install, isntall, issues, la, link,
    list, ll, ln, login, ls, outdated, owner, pack, prefix,
    prune, publish, r, rb, rebuild, remove, repo, restart, rm,
    root, run-script, s, se, search, set, show, shrinkwrap,
    star, stars, start, stop, submodule, t, tag, test, tst, un,
    uninstall, unlink, unpublish, unstar, up, update, v,
    version, view, whoami

npm <cmd> -h     quick help on <cmd>
npm -l           display full usage info
npm faq          commonly asked questions
npm help <term>  search for help on <term>
npm help npm     involved overview

Specify configs in the ini-formatted file:
    C:\Users\Matthew\.npmrc
or on the command line via: npm <command> --key value
Config info can be viewed via: npm help config

Step 2 – Install the Grunt Command Line tools (grunt-cli) globally

The next step is to install the Grunt Command Line tools globally using npm. Thankfully that’s as simple as typing the following in a command prompt:

npm install -g grunt-cli

Step 3 – Prepare your project for Grunt

For Grunt to run on our project, you need a “package.json” file and a “Gruntfile.js” file.

package.json

You could create this file by hand, but npm can talk you through the process, so in the root of your project type:

PS C:\myproject> npm init

You will be asked a series of questions; if you don’t know an answer, just hit enter to skip it. The file is editable, so it can be changed later if need be. Depending on your answers, you’ll end up with a package.json file containing something like:

{
  "name": "npminit",
  "version": "1.0.0",
  "description": "Description",
  "main": "index.js",
  "scripts": {
    "test": "test"
  },
  "author": "Matt Dufeu",
  "license": "ISC"
}

This file will also be automatically updated by npm when we start installing other packages, as we’ll see in step 4.

Gruntfile.js

Gruntfile.js is similar to a makefile (showing my age) and is used by Grunt to determine what to do when you issue grunt commands. To get started, create a file that contains:

module.exports = function(grunt) {

  // Project configuration.
  grunt.initConfig({
    pkg: grunt.file.readJSON('package.json')
  });

};

Don’t worry about what this means at the moment, we’ll be modifying this later.

Step 4 – Install Grunt locally

The final setup step is to install Grunt locally. Again, npm comes to the rescue, but this time we specify “--save-dev”.

PS C:\myproject> npm install grunt --save-dev

This will do two things. Firstly, it will create a folder called “node_modules” in your project. This is where npm stores all the packages for this project, so think of it as a libraries folder.

Secondly, it will install Grunt as a dependency of your project. Take a look at your package.json now and you will see a new section like this:

...
"devDependencies": {
  "grunt": "^0.4.5"
}
...

“devDependencies” simply lists the packages the project depends upon and their version numbers. It means you can move the project to a different location without the node_modules folder and just type “npm install” to get npm to download the required dependencies.
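
That ability to recreate node_modules on demand is also why it’s normally kept out of source control; if you’re using git, a one-line .gitignore entry is enough (a sketch, assuming git):

```
node_modules/
```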

Step 5 – Verify it’s working

To check everything’s working as expected, simply execute grunt, and you should see the output below. It’s basically saying “there’s nothing to do”, but at this stage that’s expected.

PS C:\myproject> grunt
Warning: Task "default" not found. Use --force to continue.

Aborted due to warnings.
PS C:\myproject>

Summary

We’ve installed node.js, npm and the Grunt command line tools, and set up our package.json and Gruntfile.js files. We’re now ready to start adding ESLint to our project, so next time I’ll install ESLint, show you how to enable/disable ESLint rules and, finally, how to exclude certain files from analysis.

Please leave a comment below or catch me on twitter if you’re having any problems.

[Update: part 2 is now available.]

Sprint Retrospective Experience – Essentialism

This post is all about the sprint retrospective experience I had using the technique I invented and wrote about last time in “Scrum Retrospective Idea – Essentialism”. I won’t go through the 5 stages again here, so I recommend you give that a read first.

I was more nervous about using this technique than the ones I’ve used previously as this was the first technique I had actually created, as opposed to read about. Previously I’d been using tried and tested methods. Although I felt comfortable with my ability to cope with problems, I had my pride on the line as I really wanted this to be a success!

Result

I’m pleased to say that, overall, the technique worked and I would happily recommend it to other teams. As with everything, there were positives and negatives, but with some further tweaking (see below) I’m confident it could be even better. Importantly, I believe it would work for both struggling and already successful teams, as it really focuses on the slowest tasks that can be improved.

Positives

  • The team ended up with a couple of actions that have since been implemented and have led to positive results.

Negatives

Personal Retrospective

What went well?

  • Focusing on slow tasks that can be improved led to great actions and resulting improvements;
  • Some painful tasks were highlighted that I never considered before;

What could I have done better?

  • Perhaps limit the groups to their five slowest tasks. We ended up with a lot of post-it notes that we clearly didn’t need.
  • Combine the lists differently. I’m not convinced the process was clear until halfway through, so it either needs more explanation or an example.

What should I not do again?

  • Assume everyone on the team knows what I’m talking about!

Thanks to @GregoryMcKeown for his book, and thanks to my team for letting me try something new.

Sprint Retrospective Idea – Essentialism

Until now, I’ve always used someone else’s ideas for retrospectives. That’s fine, but I’m always on the lookout to push myself, so wanted to see if I could come up with something of my own design.

Over the Christmas break, I read the book “Essentialism: The Disciplined Pursuit of Less” by Greg McKeown. There was one bit that stood out for me from a sprint retrospective idea standpoint. I think it aligns well with continual improvement and is hopefully a good way to increase team efficiency and velocity.

As with all the techniques I use, this technique is made up of the 5 stages of a retrospective.

Stage 1 – Set the Stage

This is paraphrased, but tell the following story of the “Herbie” metaphor:

A scout group is out walking and need to get to their destination before sunset. It's clear they're not going to make it as one of them is too slow, let's call him Herbie.  Herbie constantly falls behind forcing the whole group to wait for him to catch up. The scout leader tries several things and then an idea hits him. 

By putting Herbie at the front of the group and the others behind him, in order of speed, so the quickest walker is at the back, the leader can guarantee the group stays together as everyone can keep up with the person in front of her/him.

To speed the group up, the leader just has to concentrate on Herbie. The first thing he does is share out Herbie’s bag amongst the other hikers.  Herbie can now speed up and the whole group moves much more quickly.

Explain that the aim for the retrospective is to find our slowest task(s) and come up with ideas of how to make them quicker.

Stage 2 – Gather Data

To encourage everyone in the team to contribute, break the team up into groups of 3. Ask each group to create a list of tasks that we do regularly and sort them based purely on speed, slowest first. Give the teams some time and then with everyone gathered around, combine these on a white board or table.

Ask for the slowest from each team, group similar activities, order them as a team and then move on to the next slowest working your way down their lists.

To get team consensus, ask for clarification from the whole team as to the order they think the tasks should go.

[N.B. I’m expecting some issues in this step, but can’t think of a way of making it smoother beyond getting the team to do it which seems lazy!]

Stage 3 – Generate Insights

Now that the list is agreed, for the top 5 tasks, as a team answer the following questions:

  1. What makes the slow tasks slow?
  2. What makes the fast tasks fast?
  3. Are they essential?
  4. Yes/No answer to “Can we improve them”?
    1. If the answer is no, ask “is there an alternative way of doing it”. Again yes/no?

Stage 4 – Decide What to Do

Pick the top 1 or 2 tasks that answered yes to number 3 and 4 above and come up with some suggestions to try for future iterations.

Stage 5 – Close the Retrospective

Close the retrospective with a nice game, something like AHA! from Retr-O-Mat.

Conclusion

If you’ve got this far, I’m hoping you can see the potential in the idea. I’m going to give it a go tomorrow, and will report back with how it went.

Retrospective Experience – Appreciative Inquiry

It’s been over 6 months since I started using different retrospective techniques with my current team, and although we are a long way from repeating the same thing over and over, I was beginning to worry about stagnation. A great many of the techniques I’ve been using have essentially asked “what are we doing wrong?”; I wanted something that asked “what are we doing right?”.

A quick bit of googling led me to a post titled Appreciative Inquiry Retrospectives by Doug Bradbury. The following quote really stood out:

“Are you discouraged and pessimistic about all the problems that your team has?”

We had our fair share of problems in the lead up to Christmas and the more I read, the better it sounded. A bit more digging and one of the search results was from one of my favourite tools, Retr-O-Mat. Turns out Remember the Future is also appreciative inquiry so I followed the link to Diana Larsen’s post on how to use the technique, and the rest is history.

Result

We tried something different and I’m very pleased with how this technique played out. Looking forward in order to look back seems strange, but it resulted in different topics to those we normally see. In particular, I don’t think the managers in the room would’ve ever predicted the most voted-for topic.

There also seemed to be a confidence boost among the team. I’m putting this down to the fact that none of us discarded any of the points raised with “we’d have never done that”.

Summary

Appreciative Inquiry offers a change of pace from most of the other techniques out there. I would highly recommend it if you’re just about to start a new release or have been hit with a load of problems, as it’s a great way of finding out what the team think should happen soon. One of the team afterwards also said (which I think is fantastic):

“[it was] a great technique to celebrate our qualities”

Thanks to @dougbradbury for his post which started my research, @findingmarbles for Retr-O-Mat and @dianaofportland for the details of how to perform the technique.

Personal Retrospective

What went well?

  • Flipping the direction of the retrospective from backwards looking to forward looking was great;
  • Discovering an unexpected desire from the team;
  • We got the chance to highlight both our individual and team successes;
  • The team confidence boost

What could I have done better?

  • I should’ve provided more examples of team and individual qualities as we didn’t get many of those.

What should I not do again?

  • Forget over the Christmas holiday that the first Tuesday back is retrospective day :)

Retrospective Experience – Halloween Special

The end of the sprint was approaching and I was researching which technique to try next, when I saw this tweet from @aaronjmckenna which I had to try:

Thankfully the team are used to the 5 stages technique so after getting some speakers from IT, queuing up a few spooky songs and introducing the theme, we plowed head first into it.

Stage 1 – Setting the Scene – Halloween Treats

Each team member should think of a Halloween treat to describe how the last sprint went

As with all first stage techniques, this was great at getting every single member of the team to talk at least once and generally wake people up. We had some nice friendly banter at weird definitions of treats, so the team were set for the next stages.

Stage 2 – Gathering Data – Ghost Stories

Each member of the team had to come up with a scary story (which could have a happy ending) describing a key event in the last sprint

This was probably the least successful part of the retrospective, as only a small number of the team told an actual story. That’s a problem with either my explanation or the team, not the technique, and we still ended up with a decent set of views.

Stage 3 – Generate Insights – Trick or Treat

Each member of the team had to come up with 1 treat (something that will help improve the team) and 1 trick (something that is or could hold us back)

This worked really well as it limited everyone to just one idea. As you can see below, I also decorated the board:

Trick Or Treat Board

Stage 4 – Decide what to do – Pick a Door

The team had to decide which “doors” (topics) to visit and talk about – aka dot voting

Stage 5 – Closing the Retrospective – Spook of the Sprint

Each member of the team had to thank one other member for something they did in the last sprint.

I must admit, we don’t normally do much for this part of the retrospective and we certainly don’t do anything as gushy as thanking each other, so I was a little skeptical!

I could tell by the audible groans from the team that they thought the same thing when I introduced the idea, but I’m pleased to say it was quite a positive thing. What’s more, we now have a wonderful mascot for the next sprint.

Sprint Mascot

Summary

All in all, this was one of our better sprint retrospectives and I really like using a theme. Aaron’s idea of timing each thinking section with a song worked fantastically, so I’ll be looking to do that again.

I’ll also definitely be keeping an eye on his blog for any future themes to try.

Personal Retrospective

What went well?

  • The limit of 1 good thing and 1 bad thing helped focus the team. I’m not sure I would use this for every retrospective as it’s nice to let the team vent, but it’s a good way of getting to the most important matters.

What could I have done better?

  • Given more time (and courage) it would have been nice to decorate the room a bit more;
  • I’m not convinced I summarized the stories from the second stage clearly on the board, or even guided the team to use these as a basis for the next step. I think I’ll try and get the team to come up with a summary statement next time, rather than something only I understand.

What should I not do again?

  • Talk about writing a blog post over a week before having the time to do it!

Sprint Planning Technique – Group Sorting

Do your sprint planning meetings contain a lot of disagreements and arguments? Are you spending more time trying to decide how many points a PBI is worth than it would take to develop it? Don’t worry, you’re not alone.

To combat this, my current team tried a technique which worked really well. Not only did the meeting take less time than normal, but more of the team were engaged and every team member was happy afterwards.

I’m sure this has an official name, but I’m going to call it “group sorting” until I find out what that is. Thankfully it’s really simple.

Step 0 – Prepare

Hopefully you’re performing some sort of backlog grooming, so this is relatively quick. If you’re not, this planning technique might still help, but I strongly encourage you to schedule a regular grooming session. We do one hour every two-week sprint and it seems to be enough. YMMV.

Print out (or create on Post-It notes) the PBIs that you think the team will be able to commit to. I’d suggest a little more than your velocity after taking into account capacity.

From your planning poker decks, lay out in a line the cards that match the sizes of PBI your team can normally deliver in a sprint. For us, this was 1, 2, 3, 5 and 8.

Step 1 – Introduce the PBIs

This is simply refreshing the minds of all the team members so they know what all the PBIs are.

Step 2 – Split the team into groups

We split into two groups, but I guess you could do more. Basically, each group is invited to come up to the table, one group at a time, and sort the PBIs under the poker card they think each PBI is worth.

Step 3 – Invite the other groups

Once the first group has finished, invite the next group up. Second and subsequent groups should check the previous grouping; if they disagree with the position of a card, they should move it to where they think it should be and mark it with a dot to show that it’s changed.

Repeat this step for each group and – excuse the hideous use of Paint – you’ll end up with something like this:

Fig 1 – Table after grouping PBIs into story point buckets

Step 4 – Discuss any significant changes

Once all the groups have finished, invite everyone back to discuss any cards that have moved twice, i.e. have two dots. It doesn’t matter if they’ve moved from 3 to 5 and back to 3; just discuss them so the group can reach a consensus.

Step 5 – Write the story points on each card

Once that’s finished, there should be some sort of agreement on the size of the PBIs, so write the story points on the cards. This step is really important, as we’re about to move them all around!

Step 6 – Order by dependencies

Ask the whole team (or a subset if you’re a really large team) to order the PBIs by dependencies. We used a column for each chain of dependencies and staggered them if there was some overlap, i.e. shared parents.

Fig 2 – PBIs ordered by dependency

As you can see in the above image:

  • PBIs 1, 4, 6 and 7 were marked, and the team discussed 1 and 7 further;
  • PBIs 1, 6, 4 and 5 have no dependencies so can be started at any time;
  • PBI 7 has a dependency on PBI 4;
  • PBI 3 has a dependency on PBI 2;
  • PBI 2 has a dependency on both PBI 1 and PBI 6.

Step 7 – Mark Dependencies

By now, the PBIs should be sorted not only by size, but it should also be clear which are dependent on another PBI and which are standalone. Simply write this information on each card ready to transfer to your tool of choice.

Final Step – Task Breakdown

Perform your task breakdown as normal and you’re done.

It’s important to note that the team can change their mind about something halfway through. Just because a PBI started step 5 at 8 story points and dependent on another one doesn’t mean it needs to end up in that state.

Conclusion

Once you’ve completed this, you should have a sized and sorted set of PBIs ready to be committed to and placed in the sprint backlog. If it’s anything like us, you’ll see the following benefits:

  • You score PBIs much quicker
  • The PBIs are scored relative to each other
  • It’s easy to work out dependencies
  • MUCH less arguing
  • MUCH more team engagement

The only downside I can see is that it’s possible for the business priorities to get lost in the re-shuffle. As this is limited to just the sprint and you’re going to deliver them all, it’s not a real problem.

Retrospective Experience – High Performance Tree

For this retrospective, I was struggling to pick an appropriate technique. The team’s velocity has been creeping up and the actions from each retrospective were mostly minor tweaks. So I hit the books and saw this under the “When you would use this exercise” heading of the “High Performance Tree” chapter in Luis Gonçalves and Ben Linders’ book Getting Value out of Agile Retrospectives:

“A good team that is looking for the next step to become a high-performing team”

Clearly that sounded perfect, and after a little more digging I found a fantastic video by Lyssa Adkins called The High Performance Tree which demonstrates the technique perfectly.

Gather Data, Generate Insights and Decide What To Do

I was a little nervous about using the metaphor as it’s not something we’ve done before and I’d read that the team can get a little uncomfortable. Thankfully that didn’t happen at all.

I won’t re-iterate the steps, or embarrass myself by including a photo of the “tree” I drew, but I will say that each root, leaf and fruit led to really interesting discussions. Having a visual representation of our target worked really well. It helped guide the conversation towards the parts we are missing, but also towards the parts we are doing well.

Summary

I can’t list all the outcomes, but I’m particularly pleased we realised we don’t currently have a “can do anything” attitude but perhaps should. I’m really hoping we can turn that around in coming sprints.

Overall, this was a great technique and we had an excellent retrospective. If you’re a Scrum Master of a mature team looking for inspiration to become high performing, I’d definitely recommend this technique.

Personal Retrospective

What went well?

  • My drawing of the tree!
  • The team responded well and it was good to go back to first principles

What could I have done better?

  • When writing up the meeting, I noticed a few points that we said we’d get back to, but didn’t. I need to come up with a better way of tracking these points.

What should I not do again?

I don’t think anything went so badly that I can put it in this category.

Retrospective Experience – 6 Thinking Hats

To make the team think a different way during the retrospective I decided to try 6 Thinking Hats. We’re used to saying what went well and what didn’t, so I thought this would be a good way to get some different discussions going.

I was a little nervous beforehand as the description warns about something I’ve been struggling with in previous sprints, namely facilitating rather than controlling. So at least the technique forced me to deal with that head on!

“Tip: The facilitator should try to stay out of the circle and try to avoid the participants talking directly to them”

I was also a little worried that I was flooding the team with lots of new techniques, so we used ESVP as a check-in exercise like we did a few sprints ago. This not only did the job of getting everyone off their feet, but we had a much better split of positive over negative categories this time, so it gave me a nice boost too.

Gather Data, Generate Insights and Decide What To Do

Things started slowly as we didn’t really know what we were doing (see Personal Retrospective below) and we’re not used to Blue hat thinking, but once we got onto topics we felt comfortable with the meeting progressed well.

I found facilitation to be quite hard to begin with, as it seemed like the team were stopping to see what I wrote on the board, but again, that seemed to improve as time went on.

Finally, I won’t go into the actual details, but the meeting ended on a really positive note and there seemed to be a real air of optimism for the next sprint. It will be interesting to see if that’s something this technique brings, or we just had a good retrospective. It was certainly more noticeable than previous retrospectives, but that could be coincidence.

Summary

The exercise started slowly with the team talking directly to me, but as things progressed and we got onto topics we’re more comfortable with – what went well, what went badly – time started to fly.

I’m lucky as I have several strong characters who are more than willing to lead such a debate (which could be a bad thing if they dominate), but if I didn’t, I’m sure there are ways around that.

Overall, I think it’s a good exercise to conduct and one I’ll certainly use again.

Personal Retrospective

What went well?

  • The team produced some good insights, which is a good sign we can become more self-organising;

What could I have done better?

  • A team member mentioned that the strictness of the hats meant he couldn’t/didn’t say something when he thought of it and then forgot it. I could provide pen and paper, or perhaps tell everyone that anything can be mentioned and, if it doesn’t fit the current hat, we’ll note it for later;
  • I don’t think the time slots of 10 minutes need to be a strict rule. Next time, we’ll treat it as a time-box, rather than “we must talk for 10 minutes”.

What should I not do again?

  • I’m not convinced I explained the hats particularly well, so next time I’ll try to give examples of each before starting the exercise.
  • Related to that, I didn’t use the results from the previous hats to guide the conversation for subsequent hats particularly well. During the write up I noticed a few facts we didn’t discuss.