Wednesday, July 9, 2008
Back in late March, I blogged about my issues getting going with TDD. At the time I was nearing the end of a project where I had attempted some unit testing, but hadn't done it up front. I was looking forward to a clean slate and the chance to do more work TDDing up my code.
That opportunity somewhat presented itself, though I was moving on to an established team, which presented its own challenges. However, this team, or a good portion of it, was practicing TDD. So, plenty of example code out there. Not only that, I ended up sitting about 3 feet from world-renowned TDD zealot Steve Harman. Steve is a great teacher, and only yelled at me a few times. (And I only cried the one time...)
That helped on the professional side of things, but even in my own hobby coding I started writing more and more tests before the code. I've been presenting on ASP.NET MVC at a few places recently, and the sample code I use there was written all TDD style. (And is presented test-first.) Well, almost all of it... somebody wrote his repository wrong, and the updates didn't work so well. I have some refactoring to do.
So, here I am 3 months later, where do I stand on the points I brought up back in March?
1. I tend to wrestle a lot with when to lay out some of the framework and when to just break down and write the tests.
This one became pretty easy when Steve introduced some BDD (Behavior Driven Design/Development) style grammar to our testing. This one shift in thinking opened up my ability to get the requirements into unit tests, and get them green quicker. In a nutshell, thinking along the lines of, "When X condition is present, then Y should happen," and naming the classes and test methods with that type of grammar took me well beyond wondering when to write the tests.
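To make that concrete, here's a minimal sketch of what one of those fixtures looks like in that style. It assumes NUnit, and the OrderProcessor bits are made-up names for illustration, not code from the project.

    using NUnit.Framework;

    // Hypothetical domain types, just enough to make the sketch compile.
    public class Order { public int ItemCount; }
    public class OrderResult { public bool Accepted; public decimal AmountCharged; }

    public class OrderProcessor
    {
        public OrderResult Submit(Order order)
        {
            // Reject empty orders and charge nothing for them.
            if (order.ItemCount == 0)
                return new OrderResult { Accepted = false, AmountCharged = 0m };
            return new OrderResult { Accepted = true, AmountCharged = 10m };
        }
    }

    // The fixture name states the condition; each test states one "should".
    [TestFixture]
    public class When_an_order_is_submitted_with_no_items
    {
        private OrderResult _result;

        [SetUp]
        public void Context()
        {
            _result = new OrderProcessor().Submit(new Order { ItemCount = 0 });
        }

        [Test]
        public void Should_reject_the_order()
        {
            Assert.IsFalse(_result.Accepted);
        }

        [Test]
        public void Should_not_charge_the_customer()
        {
            Assert.AreEqual(0m, _result.AmountCharged);
        }
    }

Reading the runner output as "When an order is submitted with no items, should reject the order" is exactly what got the requirements flowing into tests for me.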
2. A recent project found our team working with a pile of generated tests.
Generated tests are a thing of the past. This is a non-issue at this point.
3. When to mock?
Again, a healthy dose of Steve helped here a lot. Also, Rhino Mocks 3.5 was released with the Arrange, Act, Assert syntax, and that cleared up a lot in the mocking area. No more record-and-playback blocks, and lots of lambdas to stub out whatever you want doing the dirty work.
The answer to the original question finally became clear: Get the hard stuff out of the way with mocks. Beyond that, get the stuff that you're not testing out of the way with mocks. The AAA syntax made that pretty easy.
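For what it's worth, here's a rough sketch of how that reads with the Rhino Mocks 3.5 AAA syntax. The repository and service are hypothetical; the point is just the stub-with-a-lambda shape of Arrange, Act, Assert.

    using NUnit.Framework;
    using Rhino.Mocks;

    public class Customer { public int Id; public bool IsPreferred; }

    // The "hard stuff" to get out of the way: a repository that would hit the database.
    public interface ICustomerRepository
    {
        Customer GetById(int id);
    }

    public class DiscountService
    {
        private readonly ICustomerRepository _repository;
        public DiscountService(ICustomerRepository repository) { _repository = repository; }

        public decimal DiscountFor(int customerId)
        {
            return _repository.GetById(customerId).IsPreferred ? 0.10m : 0m;
        }
    }

    [TestFixture]
    public class When_calculating_a_discount_for_a_preferred_customer
    {
        [Test]
        public void Should_give_ten_percent()
        {
            // Arrange: stub the repository with a lambda, no record/playback block in sight.
            var repository = MockRepository.GenerateStub<ICustomerRepository>();
            repository.Stub(r => r.GetById(42)).Return(new Customer { Id = 42, IsPreferred = true });

            // Act
            var discount = new DiscountService(repository).DiscountFor(42);

            // Assert
            Assert.AreEqual(0.10m, discount);
        }
    }

The database-touching repository is the thing I don't want in the test, so it gets stubbed; the discount logic is the thing I do want tested, so it runs for real.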
4. Is this test trivial, or needed?
OK, I still have a bit of trouble here. Not as much as before, but occasionally I find myself writing a test, looking at it, and going, "Gee, I'm testing the setter of that property...that had BETTER work!"
5. I still have large holes in what I test.
Again, still having trouble here. Even with the good TDD practices in place at the most recent project, I found myself frustrated and writing the code and screen testing it rather than writing my unit tests up front. I did have a deadline that I was up against, but cutting the unit tests is never the right solution. I fell back to an old habit, one I've been working pretty diligently on breaking.
In the March blog entry, this referred to unit testing my JavaScript. Well, ending up on a WebForms project pretty much ended my worries about getting my JavaScript tested.
So, with those steps behind me, what do I want to focus on now? Well, I can still improve on numbers 4 and 5 above, but I think those two will be continuous improvements. I'm going to continue down the BDD-ish path. It's not pure BDD, but it got me over the hump, and I like the syntax. But overall, I'm just going to keep going at getting the tests written up front, and focus on red, green, refactor as much as I can. I'm seeing the benefits; I just need to kick those old habits.
Sunday, March 30, 2008
TDD growing pains
I've understood and attempted to follow TDD practices for a while. Getting my head around it initially was pretty tough, and I put it into practice very, VERY sporadically. After all, it's a pretty big shift in thinking when you first encounter it. "Write tests? For code that's not there?? Wow, that's fantastic!!"
So, over time I've gotten much better at the whole TDD thing. Or so I had thought.
Wednesday, mid-presentation, my lack of patience with the system outed me. I had written my tests, passed my tests, changed my code, changed it a little more, and then went to present it. Wow did I get smacked in the head by not running the tests after each change. (Continuous integration for a small demo app didn't really enter my mind. I have a hard time firing up CI at home since there's not much to I, but maybe I should join Mr. Donaldson in doing so.) So, big time lesson learned there.
The flip side to that bit of a stumble was the amount of time TDD saved me on my current project at work. I had written a number of unit tests to cover some business logic, and a few integration tests. A couple of defects came in showing that I needed to change a number of my dates to nullable objects. Not a difficult change, but not totally trivial, either. Made the changes, ran the tests. Oops... more changes, ran the tests. The whole changeover was done, tested, and passing within 15 minutes.
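For the curious, the change was along these lines (names made up and simplified, obviously): a date becomes nullable, and the existing tests immediately point out every spot that assumed it always had a value.

    using System;
    using NUnit.Framework;

    public class Shipment
    {
        public DateTime? ShippedDate;   // was a plain DateTime before the defects came in

        public bool IsOverdue(DateTime asOf)
        {
            // An unshipped order isn't overdue; only compare when a date actually exists.
            return ShippedDate.HasValue && ShippedDate.Value.AddDays(5) < asOf;
        }
    }

    [TestFixture]
    public class ShipmentTests
    {
        [Test]
        public void UnshippedShipmentIsNotOverdue()
        {
            var shipment = new Shipment { ShippedDate = null };
            Assert.IsFalse(shipment.IsOverdue(DateTime.Today));
        }
    }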
A few pitfalls I've fallen into while traveling down the path towards TDD enlightenment:
1. I tend to wrestle a lot with when to lay out some of the framework and when to just break down and write the tests. The main reason I wrestle with this one is that the good ol' crutch of IntelliSense sure comes in handy for test writing if you've framed your code out some. (When writing Ruby code in e, this doesn't seem to be as much of an issue for me.)
2. A recent project found our team working with a pile of generated tests. With the ever-present deadline looming larger and larger, we decided to go with the generated tests as they appeared to be "good enough." They were good at the start, but as the code base grew, and our tests with it, we started looking at our CI build taking up to an hour. There's a big red flag. Dug into the code: very few unit tests were generated, but a pile of integration tests were. Takes a while for NHibernate to do its thing 4,316 times. (We've since started cleaning that up; a sketch of how we're splitting those tests out follows this list. The build is back to a more reasonable, though still outrageous, half hour now...)
3. When to mock? I'm hip to the whole mocking thing, but identifying the right time to do so is still giving me trouble. I'm sure I'll get there with practice, just need some more practice.
4. Is this test trivial, or needed? In a previous post I mentioned viewing code coverage the other way around, seeing what's uncovered rather than covered. I don't see 100% coverage, er I mean, 0% uncovered as attainable in the web projects I usually work on, but where do you draw the line? If you end up falling over on a piece of untested code, I guess it wasn't trivial.
5. I still have large holes in what I test. Just about all of my JavaScript code is completely untested. I know there are frameworks available to help me with that, but I haven't gotten down and dug into them.
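Since item 2 above deserves more than a parenthetical, here's a rough sketch of the clean-up direction (hypothetical names, and the exact console switch may vary by NUnit version): tag the slow NHibernate-backed tests with a category so the frequent CI build can leave them out.

    using NUnit.Framework;

    [TestFixture]
    [Category("Integration")]   // the slow, database-hitting tests get flagged
    public class CustomerRepositoryIntegrationTests
    {
        [Test]
        public void CanRoundTripACustomerThroughTheDatabase()
        {
            // ...talks to the real database through NHibernate, so it's slow...
        }
    }

    // The frequent CI build then excludes that category, something like:
    //   nunit-console.exe MyApp.Tests.dll /exclude:Integration
    // and a separate, less frequent build runs the Integration category on its own.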
So, short term I'm going to keep my ears pinned back and keep moving forward with the testing first. May as well get a home build server set up, and keep digging on mocking as much as I can. Gonna have to investigate Moq, as well. Also, time to get that JavaScript tested. With as much AJAX as we're cranking out, this is quickly becoming a priority.