Wednesday, June 1, 2011

Blog has moved

If this post is showing up in your blog reader, you need to update your feed. Subscribe through feedburner at: http://feeds.feedburner.com/timwingfield/JaFt

This particular version of the blog will live on at http://family-truckster.blogspot.com; however, it won't be updated after this post.

The domain hasn't changed; my blog still lives at http://blog.timwingfield.com, but in the move from Blogger to Jekyll, the feed moved.

Tim

Wednesday, May 11, 2011

Estimation Rant

Estimation rant

Estimation is a bad word on dev teams. We have been through so many painful estimation debacles that we cringe when we hear the word "estimate." In many software endeavors estimating is a necessary practice, at least on some level, to get the thumbs up to start writing software in the first place. But just because we have done estimating activities often does not mean we are good at it.

Disclaimer: I am discussing estimation in the context of delivery, not sales. Clients everywhere want to know when something will be done before they commit money to it, which is why many talented sales people drive fancy cars.

Failing Into the Hours Trap

The very first failure of many estimating exercises is estimating your tasks, stories, and features in hours - or days or weeks or years. Any unit of time measurement is likely going to get you in trouble, but we will stick with hours for our example. Yes, hours fit neatly into a column in Microsoft Project, but for estimating a development effort you're setting yourself up for failure as soon as you choose hours as your unit of measure.

Dages estimation

The first trap in using hours is we are all going to base that guesstimate on the 8 hour work day. If you commit to 8 hours, you are essentially saying that you will have that feature done in a day. Your manager is definitely hearing "one day" when you say 8 hours. If you could sit and code for 8 hours straight, maybe that estimate would be worthwhile. But how often do you get to code 8 straight hours? All kinds of things happen that derail that 8 hour day of uninterrupted coding. The stand up, other meetings, lunch, nature will inevitably call, and there's a better than average chance a Nerf war will break out in many team rooms.

The second trap of hours is they really only get used as a measurement when you go over your estimate. For example...

Let's take a developer on our team, we'll call him Jeff. In our estimation meeting, the team has determined that feature #1337 will take 16 hours to complete. Based on our previous trap we all just heard, "Two days." Jeff pulls feature #1337 off the board Monday morning and gets to work. Wednesday afternoon, he moves it to dev complete. Whoa, that feature just took 24 hours to complete! Come on, Jeff, you're 8 hours over the estimate? How will we ever make that time up?? We're on a deadline here!!

Sorry...all accusations are purely hypothetical.

The next Monday morning we're back at work, and our trusty developer Jeff looks at the board and sees feature #1355 estimated at 16 hours. He pulls the card into the development queue and gets to work on it. At the end of the day he's done. 8 hours into his 16 hour feature he moves it to dev complete. Way to go Jeff!! You rock! Best developer ever!!

But hold on a second. Our rock star dev has missed his estimate by 50% in both cases. In one case he's the villain, in the other the hero? He goofed by 50% both times, but thanks to "beating the hours" on the second feature that gets lost. In reality we should be asking Jeff why he missed each estimate. That will help us learn where we as a team missed and how we can apply that to future guesstimation exercises.

The Cost of Estimation

During a lunch conversation at a past conference I listened to a gentleman tell us how he was contracted to do a 6 month estimation on what it would cost to build a certain piece of software. Two weeks into the contract, one that was paying him quite well, he went to his stakeholders and said, "The amount of money you are sinking into the estimation contract will never be returned to you. You will never get $1 back from it, how about I just start building the software and we'll see where we are in 5 and a half months?" They went for it, and he won the development contract 5 and a half months later.

The estimate means nothing to the actual delivery of the software. One of my former managers used to say, "The effort is the effort is the effort." The iron triangle of software is scope, timeline, and quality. We're allowed to fix any two of them, but the third has to be flexible. (It's scary how many times quality is the one that gets forced to flex.) Our estimate won't change any of the three points on the iron triangle.

All Is Not Lost

Fear not, there are ways to get your software written without falling into estimation traps while still giving those outside the team a reasonable idea of when something will be delivered.

First up is story points. Story points are a representation of the level of effort the team thinks it will take to complete a story. Story points can be anything. I have seen teams use numbers such as 1, 3, 5, and 8 as their points, and other teams will use t-shirt sizes such as S, M, L as points. One team I saw REALLY wanted to drive home the fact that story points are relative, so they went with duck, unicorn, and elephant as their sizes. Using any of those units of measurement illustrates that story points are used relative to each other rather than to a fixed value such as hours. Humans are very good at doing comparisons, so we don't know exactly how big a unicorn story is, but we know an elephant story is bigger.

Another way to step away from estimations is to move to a continuous flow system. Get yourself a Kanban board, get rid of iterations and iteration commitments, and start tracking the cycle time on the completed side of things. There are a few challenges with going this way, the first being that you will not have a reasonable idea of your cycle time until you have pulled a few stories through the system. Additionally, sizing may still come into play because stories have this habit of never being exactly the same size. But get through a few cycles, get some actual times on features from concept to completion, and you'll be handing your managers actual times from which they can plan future work. It may take a little time to get the data, but there isn't a manager in an IT department anywhere who wouldn't love to hear, "This story is sized as a small for our team and small features take us about 2 days to complete. Come back Wednesday, we should have it for you."
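As a toy illustration of that cycle-time bookkeeping, here's a minimal Ruby sketch (Ruby only because it's the tooling that shows up later in this blog). The `Story` struct, the sizes, and the dates are all invented for illustration, not real team data.

```ruby
# A minimal sketch of cycle-time tracking, assuming all we record is when
# each story was pulled and when it hit dev complete. Sizes and dates are
# made-up illustration data.
require 'date'

Story = Struct.new(:size, :started, :finished) do
  def cycle_time
    (finished - started).to_i # days from pull to dev complete
  end
end

completed = [
  Story.new(:small, Date.new(2011, 4, 4),  Date.new(2011, 4, 6)),
  Story.new(:small, Date.new(2011, 4, 7),  Date.new(2011, 4, 9)),
  Story.new(:large, Date.new(2011, 4, 4),  Date.new(2011, 4, 11)),
]

# Average cycle time per size -- the number we hand to managers:
# "small features take us about 2 days to complete."
averages = completed.group_by(&:size).transform_values do |stories|
  stories.sum(&:cycle_time) / stories.size.to_f
end
```

With a few real cycles in place of the made-up dates, `averages[:small]` is exactly the "come back Wednesday" number described above.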

The last way to get rid of estimation is just to get rid of estimation. Like the story where the gentleman stopped researching on the estimation contract and just started writing code, just write the freakin' code! That could make for some difficult conversations at some point, but if your 4 person dev team is in a 2 hour estimation meeting every iteration, that's 8 hours you could have spent writing code towards a deliverable.

</rant>

Thursday, April 21, 2011

How We Do a Retrospective

Hard hats required

There is no one right way to do a retrospective, there are many good techniques...and a few bad ones. Utilizing more than one technique is a good way to get different conversations going with your team, and expose different areas that could use a little fixin'. That said, I do have a "tried and true" technique that we've used on a number of teams over the years. It's usually pretty good at getting the conversation going, and can be done in 30 minutes or less to keep you from being stuck in "yet another meeting."

First things first, at the start of any retrospective you should review the action items or results from the previous iteration's retrospective. Have the people assigned to each item report how it went and what got accomplished. Nothing gets a retro off to a good start like saying, "Remember that problem we had last week? It's fixed!"

A Few Supplies and a Little Preparation

We're going to need:

  1. Three colors of sticky notes. Regular square ones will work fine. For this example we're going to use purple, yellow, and green.
  2. A box of pens, preferably all the same kind. I like to use sharpies because big pens on small paper ensures we get short items on each card rather than essays. Using the same pen also keeps things a little more anonymous.
  3. A whiteboard. In the absence of a whiteboard the big easel sheets would do the trick.
  4. A timer of some kind. I picked a timer up for about $200...it also makes phone calls, surfs the web, plays Angry Birds and sends text messages and email. It's high end for a timer, but it does the trick.

For the prep work, take our three colors of sticky notes and split them up so each member of the team has a few of each color. The number really doesn't matter, but we usually end up with 6 to 8 of each color for each person. Give each member of the team their own little pile of sticky notes and a sharpie.

Gather The Data

The colors signify good, bad, and confusing items from the previous iteration. Since we went with purple, yellow, and green we'll say that purple = pain, yellow = confusing, and green = good. Set the timer for 5 minutes and have the team write as many items as they can think of for each color from the previous iteration. They should do this on their own, writing their own thoughts down; we'll collaborate and discuss later. If your team is smaller and iterations are pretty short, put less than 5 minutes on the clock.

When the time is up, have each team member walk to the white board and stick their notes on the board. No order to them, just get them on the wall. Once all the stickies are on the wall, take a couple of volunteers to group them by subject, not by color. We're not looking for all the bad things that happened in one cluster, we're after what was good and bad about a given subject to the team. Aim for 4 to 8 categories, and try not to get them too broad. Once you have your categories, circle them and put a one or two word title above the cluster. It should look something like this...

Retro board

Discuss

We've got things broken down, so next up we have to decide what we're going to tackle. The best way to do this is to dot vote. Dot voting is an old standby in agile. Each team member gets 2 or 3 votes and they place a dot next to the category they want to discuss. If they think a category is very important they can put all of their dots next to that category. We do want each team member to use their votes though; abstaining isn't allowed.

When everybody has voted we'll discuss the top two vote getters and try to pull an action item or two out of each category. Do a quick run through of each sticky, set the timer at 10 to 15 minutes, and get to discussing. Try to involve everybody in the discussion as well. Since everybody contributed to the stickies on the wall it should be easy to get the quieter folks to speak up as they likely added a note to the category.

What Are the Goals Here?

Goal numero uno in this set up is to get everybody to put stickies on the wall with what they thought went well, went badly, or confused them during the last iteration. By doing this we have everybody involved in the retrospective right away, which increases the likelihood of them speaking up during the discussion.

The second thing we're after is discussing categories of issues rather than smaller issues raised by one person. The goal of any retrospective is to look back and see what we can do to make the whole team better, not just what the loudest, type A personality, Alpha-Dev wants fixed to make his life better. By categorizing everybody's issues we can compare what everybody thinks, discuss the broader issue, and derive a good actionable, assignable action item from that.

Some Issues: Learn From My Mistakes

Using this, or any, retrospective style for 6 or 8 or 10 retrospectives in a row will stop yielding good results. That happened with one of my teams and we kind of got in a rut. Once in that rut the retrospective became an "Airing of Grievances" (minus the festivus pole) and we weren't getting good action items. In reality, we were getting bad attitudes towards the retrospective, and incremental change was not coming our way.

One particular retrospective we had done the categorizing and completed our voting, and a sticky note from one member of the team had not made it into a category we were going to discuss. In order to get it discussed, the owner of the sticky moved it into a category we were going to talk about. The team was asked whether we should leave it, and the consensus was that since it wasn't there for the voting, it shouldn't go in now. The offender wasn't too happy with this decision and slapped it back in to be discussed...at which time one of us removed it, tore it up, and threw it away. Team over individual in the retro. Always.

As noted earlier, and solidified with my first issue above, this isn't the only way to do a retrospective and it shouldn't be used iteration after iteration. However, it does get the conversation going and it will usually yield an item or two that your team wants out of its way. Give it a shot the next time you have a retrospective.

Tuesday, April 19, 2011

Action Items Round Two

In my last post I discussed keeping your action items assignable and actionable. In the days since that post went out, I've seen a few more things to add to the subject of action items. We're agile, we're all about continuous improvement.

Action Items Are To Improve Team Workflow

Film chalkboard

Action items are things the team agrees to try in their next iteration to improve a process which is giving them trouble. In other words, the team has identified a problem in how they're getting their work done, and the action item they've decided on should help fix that problem. Some examples of action items I've seen on team boards in the past...

  1. Add Work In Progress limits to our scrum board.
  2. Make our Product Owner the single point of contact for our field engineers to limit the context switching of developers.
  3. Add a large, red light outside our team area to let the rest of the office know that the team is in heads down "quiet time" and shan't be interrupted.
  4. Try planning poker at our next planning meeting.

If the item would go straight to the value stream of your project, meaning it's something you are going to work on that will apply directly to your deliverable, then I usually don't classify it as an action item. Things that go straight to your deliverable are typically specific features, setting up hardware, and technical spikes. (More on those in a sec.)

Not every retrospective will yield action items and that's OK. If you're doing short iterations and are humming right along, you may have a very short retrospective and be pretty happy with how everything went. There's nothing wrong with NOT having a problem on your team.

Technical Spikes

Spikes are meant to answer a question and should be time boxed. This stems from a couple things we as developers are very good at: Taking too long to come up with an answer and going off in the weeds, down a number of tangents and returning with what we think is the absolute perfect solution. Enter the technical spike. Limit our off in the weeds time, give us a definite end time, and force us to come up with A solution prior to coding it into the PERFECT solution.

Why don't I think spikes are action items? Because in the long run they should be answering a question about something that will likely go straight to the value stream of the project. Usually we're answering a question about some portion of code that's new to us, a framework we haven't used before, or something along those lines. These questions need to be answered for us to deliver our product, not necessarily to improve the team, its process and its flow. (Though not delivering our product will most decidedly have a negative impact on the team.)

As an example of what I mean, I have a spike assigned to me right now to look into a GUI testing framework for our project. This is clearly a question for us, and not something that I need to do to improve the next iteration and beyond. It's something that will provide direct value to our client.

So, let's say I finish my spike and settle on Selenium as our GUI testing framework of choice. After three or four iterations we see that our Selenium tests are getting quite large and out of hand; at that point the team would probably identify an action item to clean those tests up. (And we would of course assign a steward to oversee that clean up.)

The Working Agreement

In my last post I got a very good comment from Fanlan:

At my company, we distinguish between tasks (Action items) and working agreements. "We will do a peer (code) review when we think we have finished a user story" is an example of a working agreement.

On many of the teams I've been on we haven't been too explicit with our working agreements, though I think that's a mistake on our part, as I do like being more explicit about how the team works. Leaving things open for interpretation rarely goes the way you wanted it interpreted, so I do like Fanlan's suggestion here.

In my example our team was going to try out code reviews and added them as an action item for the next iteration. In that context we don't know if code reviews will work for us or not, so we're going to run with incremental change and see how it goes. In another example we may want to add WIP limits to our team board, but after a couple of iterations they may not change our flow. If all goes well with our action items then I'd add the new practices explicitly to that working agreement for the team, but we should go through the "try before we buy" period before full adoption by the team.

Incremental change is a big deal in making your teams and your deliverables go better. Work at those retrospectives and their resulting action items, you'll see those improvements start to take hold.

Thursday, April 14, 2011

Action Items: The Results of the Retrospective

One of the principles of agile is incremental change. Looking inward on what your team is doing that could be better is one way to get that incremental change, and team retrospectives are a great way to do that looking inward. If you're not familiar with team retrospectives, essentially they're a fairly informal gathering where teams bring up issues and kudos over recent work and decide on a few things that they want to improve. The things they decide to improve are usually called Action Items.

Having been on a few agile teams and taken part in many a retrospective, there are a couple of things that come to mind in regards to action items.

Make Your Action Items Actionable

Ok, that sounds a little on the obvious side, and it can be. But it's a very real problem with many action items.

A team gathers and they find an issue with their code. For example our team is working on a large, old code base. There are many pieces of it that are just scary to change. So the team brings this up at the retrospective. Many people have issues on the board such as, "code is scary to change." Or something like "our code is difficult to work with." And since it's a dev team somebody has probably taken a little initiative and just put, "Code SUCKS!" up on the board. After some discussion the team has decided on an action item of: Improve the Code.

The whole team is there, managers, product owners, QA folks, everybody, and they settle on "Improve the Code" with some gusto. This is an item we KNOW we need to do and the whole team is behind it. If we're going to succeed, we need to IMPROVE THE CODE!

But our (somewhat) hypothetical team has missed the important part of this action item: HOW do we improve the code? Just saying "improve the code" is the dev team equivalent of saying, "Well, duh." We're always striving to improve the code, but what's that first step? Make sure your action items are indeed items and not overall team goals.

Drag the discussion a bit further. Start asking questions when you settle on a broad action item such as this. If you start digging, you'll arrive at that first step to your bigger goal. For example, our team notices that they have a small seam in their code where they could get their data layer wrapped in an adapter and add some more testability to both sides of that data layer. Now they have an actionable action item: Wrap Data Layer in an Adapter.

Make Your Action Items Assignable

The next retrospective comes around, and we're working hard at getting actionable action items. After some discussion, we realize that we could use a quick code review before we move our features to "development complete." Performing a code review is pretty actionable, and everybody on the team can participate. Our team is happy with this action item and they decide to tackle it the next iteration. They've also decided to assign it to "Everybody."

So they just assigned it to nobody.

It's human nature, if everybody is assigned to a team task to complete of their own free will, nobody will take care of it. Code reviews will likely not happen, and we'll arrive at the next retrospective with everybody looking at each other thinking, "I thought somebody else would take the lead on that."

Even if your action item probably does need to be done by everybody on the team, as would be the case with our code review example here, assign it to one person to oversee that it actually happens in the iteration. Make that person a steward of the action item. This person likely needs to do nothing more than remind everyone at the first stand up to do the code reviews, then later volunteer to do the first one, and the ball will be rolling. Additionally, at the next retrospective the team has a person who is accountable to report back on the results of that action item from the previous iteration.

Retrospectives are great tools, possibly the greatest tool for incremental change. Keep an eye on those action items and keep them actionable and assignable.

Tuesday, April 12, 2011

pik, IronRuby, MRI, and a .Net Project

On a recent ASP.Net MVC project I was leveraging IronRuby and Cucumber to get some BDD specs in place to drive development. Though this post isn't about the benefits of BDD, it was very easy to get the specs worked out with my Product Owner, and the demos went really quickly. For the most part, I was using IronRuby to run functional tests at the controller level.

Then we decided to add on some GUI tests. With Ruby and Cucumber already in place, we decided to give watir a try. Except once we started using watir, IronRuby was no longer needed to test the website. We could instead use MRI 1.9.2 as our interpreter, get a little more speed out of our watir tests, and leverage the latest version of Cucumber.

Since we're doing our development in Windows we don't have the luxury of RVM, but pik is a great solution for switching Ruby interpreters on Windows. During development, a few pik switch statements keep all our cucumber tests in sync with either our IronRuby testing or our watir testing. However, if we wanted to run them all at once I had to write a little batch file to handle it. (I'm a dev, I'm lazy, I just want to type one line in the command prompt and have it all kick off...)

After knocking the rust off my batch file fu, here are the contents of that batch file...

@echo off
@call pik sw 100
@call rake
@call pik sw 192
@call rake watir

Here's what's happening in our little five line helper file. The first line just doesn't echo your commands back out to the command prompt. After that we call pik sw 100, which is pik switch to IronRuby 1.0.0. Then our default rake task is called, which builds the project, runs the unit test suite (in nUnit in this case), then runs the IronRuby cucumber tests. pik sw 192 switches to Ruby 1.9.2, then calling rake watir just runs our watir tests against the already built website. Pretty straightforward.
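For anyone curious what those rake tasks might look like, here's a minimal Rakefile sketch that would line up with the batch file above. Only the task names default and watir come from this post; the solution name, test assembly, and feature paths are invented for illustration, and which interpreter runs cucumber is simply whatever pik last switched to.

```ruby
# Hypothetical Rakefile sketch (loaded by `rake`, which provides the task
# DSL). Solution name, assembly name, and paths are made up; only the
# "default" and "watir" task names match the post.

task :default => [:build, :unit, :cucumber]

task :build do
  sh 'msbuild OurSite.sln /p:Configuration=Debug' # assumed solution name
end

task :unit do
  sh 'nunit-console OurSite.Tests.dll'            # assumed test assembly
end

# Runs under whichever interpreter pik has active (IronRuby after
# `pik sw 100`): the controller-level functional specs.
task :cucumber do
  sh 'cucumber features/functional'
end

# Runs under MRI 1.9.2 after `pik sw 192`: the watir GUI tests.
task :watir do
  sh 'cucumber features/ui'
end
```

Because pik swaps the interpreter before each rake call, the same `cucumber` command ends up running under IronRuby for the functional specs and under MRI for the watir tests.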

Now that was only for our dev machines, so we could kick off the build and all the tests with one line during development. Our CI server was much easier to configure.

We were using TeamCity as our CI environment when we added watir to the mix, but the batch file wasn't needed there, as TC allows different build tasks to use rake and specify which interpreter to use. So it was as easy as adding an IronRuby build task that calls the default rake task, then creating another build task that calls the watir task in the rake file. We have since switched from TeamCity to Jenkins, but the setup with a build task per interpreter is identical.

(Don't have pik installed and you're a Windows using, Ruby loving programmer? Ben Hall's post on getting pik installed and running is the best reference I've found.)

Monday, April 11, 2011

Upcoming Speaking Events

Trying to slow down the speaking and associated travel schedule this year, but not giving up on it entirely. So, here's what I've got coming around the corner...

Stir Trek

On May 6th, Stir Trek happens again. This has quickly grown into one of the region's more popular conferences, and it's only a one day thing. I'll be presenting "Executable Requirements: Testing in the Language of the Business" in the Testing Track there this year. There's something cool about seeing your slides on a movie screen. There's also something daunting about following Jim Weirich on the schedule...

Columbus Ruby Brigade

On May 16th I'll be presenting at the Columbus Ruby Brigade. It's not directly a Ruby topic, but the good folks at CRB have been kind enough to give me an hour to present "The What's, Why's, and How's of Kanban." So, it's not specifically for the Rubyists, but Kanban is good for everybody! If you're thinking, "Really? The Kanban talk AGAIN??" Yes, again, because I need the practice for...

Agile Dev Practices

June 9th I have the honor to be on stage at Agile Dev Practices West, again presenting "The What's, Why's, and How's of Kanban." Yes, more of Tim yammering on and on (and on and on) about Kanban and how it will cure all that ails you, but this time I get to do that yammering in Vegas! I am really looking forward to this conference and the crowd that will be different from the normal developer type crowd at most of the conferences I attend.