Tuesday, March 2, 2010

Craftsman Swap – Day 2 at Relevance

Day 2 began as I expected. We gathered at 8:45 in Relevance's large conference room to discuss yesterday's happenings across the company. I said that I had been working on Rob's project and expected more of the same today. We weren't giving story-level progress in this meeting, so it wasn't drawn out with the minutiae of everyone's day. Ten minutes later I was back at my desk with Rob, continuing work on our story.

Yesterday we had worked through a few model changes that were necessary to support the behavioral work we'd be tackling today. The old model had a one-to-one association that we changed to a one-to-many. We needed to alter the algorithm that populated this relationship.

The existing algorithm was embedded in the model's lifecycle hooks. These hooks fired on save and would populate the association. The algorithm was already fairly complex and it was going to get worse. Rob suggested we extract it into its own module. Rob wanted to TDD this new module from scratch, whereas I thought it would be best to refactor toward the module we wanted with the support of the existing tests. After some more discussion and thought on my part, Rob convinced me the greenfield TDD approach was best.

Our next debate was: how would the module communicate with the model we were extracting it from? Rob wanted something we would mix in. You'd execute it and it'd do whatever it needed to the model to populate the association. I was worried this approach would create coupling between the model and the module that wasn't immediately obvious. It seemed a bit magical. I suggested that we could reduce the coupling between the two and make their interaction obvious by calling out to the module explicitly from the model, then letting the model handle hooking up the associations. Rob agreed to my plan.
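A minimal sketch of what that explicit-call shape looks like, with hypothetical names (this is not the actual project code, just the pattern): the module computes what should be associated, and the model wires it up itself.

```ruby
# Hypothetical sketch of the explicit-call approach. The module is a
# pure function over the record; it never reaches into the model.
module AddressAllocator
  # Given a record, return the list of addresses that should be
  # associated with it.
  def self.allocate_for(record)
    (1..record.address_count).map { |n| "10.0.0.#{n}" }
  end
end

class Device
  attr_reader :address_count, :addresses

  def initialize(address_count)
    @address_count = address_count
    @addresses = []
  end

  # The model calls out to the module explicitly and handles hooking up
  # the association itself -- no hidden mixin behavior on save.
  def save
    @addresses = AddressAllocator.allocate_for(self)
    self
  end
end
```

Compared to the mixin version, the call site in `save` makes the interaction obvious: a reader of the model can see exactly where the association comes from.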

After breaking for another excellent lunch provided by Relevance, Rob and I began TDDing our new address allocation module. Rob ran out for a phone call, so in the interim I put together a set of pending specs for all the scenarios that I thought we'd want to test. I knew I'd missed some things, but it was a good place to start the conversation when Rob came back.

The rest of the afternoon was spent building our module, test-first. Many times we would write a failing spec, then realize the behavior we wanted should live in another model. We'd leave that test failing, hop over to the other model's spec and write a second failing test there for the new method we wanted. If everything went well we'd go from two failing tests to a green bar in one fell swoop. Nice.

This is different from my usual method of inlining the new behavior to get the outer test passing, then refactoring into a new method after I've seen the green bar. Rob's approach has the advantage of making you write that unit test for the new function first. It is easy to rationalize not writing it after you've seen the green bar; I know I've done it. We used the 'fit' (f means focused) feature in RSpec 2 a lot for this style of work. It really came together well.
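A rough illustration of that workflow as a spec file (hypothetical example names; RSpec DSL, so it runs under `rspec` with focus filtering enabled, not bare Ruby):

```ruby
# Sketch only: hypothetical names, RSpec 2 DSL.
describe AddressAllocator do
  # `fit` marks this example as focused; while we drive out the new
  # method, only focused examples run.
  fit "populates the association from the allocation results" do
    # ... assertions for the behavior under development ...
  end

  # A body-less example is reported as pending -- a placeholder for
  # scenarios we know we still need to cover.
  it "skips records that already have addresses"
end
```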

The evening had four of us geeking out with a Magic: The Gathering draft and tournament, which is the reason I am so late in delivering this post. Although I didn't fare well (1-6), it was a lot of fun.

Two days down, three to go. I'm having a lot of fun. There are rumblings that I may move on to a Clojure project tomorrow. That'd be fantastic.

2 comments:

Fred Lee said...

@Tyler:

I found this post interesting because I am in the middle of a design discussion/argument regarding whether to move some business logic out of an AR Model into a Module.

I made the argument that this was needed because:

* It just made sense from a domain perspective.
* It would be easier to maintain.
* It would be easier to test.

To my surprise, this individual shot my arguments down. His basic argument was that none of what I was proposing would actually gain as much as I thought. My first reaction was to . . . well, react. But then I listened and thought about it. It occurred to me that he might be correct.

In other words . . .

* Does simply moving things around make it easier to test? Unless the overall coupling issues are dealt with, all we have accomplished is moving things around.
* And, even if we agreed to move it around, how can you TDD a set of methods that are in use already?

Anyway, at some point, it would be interesting to see how you specifically handled your situation.

Tyler Jennings said...

> Does simply moving things around make it easier to test? Unless the overall coupling
> issues are dealt with, all we have accomplished is moving things around.

Extracting the code without dealing with the underlying coupling issues is just sweeping dirt under the rug. It can actually make things worse. You're just as coupled as you were before, but now you've got code hidden in a place another developer may not think to look. That is what we like to call a land mine. I usually prefer interaction between cohesive units of code to be explicit.

This is a good first refactoring step on the road to two cohesive units; just don't leave the code that way.

> And, even if we agreed to move it around, how can you TDD a set of methods that are in use already?

This is a hard one to answer.

If the code has good tests then you're refactoring. Extract the module with the support of the existing tests, then move the relevant tests down from the parent to a new test suite for the extracted module. How easy this is depends a lot on how you've constructed your tests for the parent.

If the parent module has no tests, or it has crappy tests, you have two options. First, you can clean up the test suite so that it will properly support the refactoring you'd like to do. Second, you can test-drive the new module using the old code as a reference. When the new module is complete, you write tests in the parent code that assert the new behavior, plug the module in, then delete the old code.

The latter is the approach we took here. Not because the code had poor/missing tests, but because the behavior of the module we wanted to include was mostly new. It would have been more work to refactor the existing behavior into place than it was to write it from scratch test-first.
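In plain Ruby, that second option might look something like this (hypothetical names and behavior, purely for illustration): the new module is test-driven on its own, then the parent delegates to it and the old inline code is deleted.

```ruby
# New module, test-driven from scratch with the old code as a reference.
module SlugBuilder
  def self.build(title)
    title.downcase.gsub(/[^a-z0-9]+/, "-").gsub(/\A-+|-+\z/, "")
  end
end

# Parent class after the swap: the old inline slug logic is gone. It now
# delegates, and the parent's own tests assert the behavior still holds.
class Article
  attr_reader :title

  def initialize(title)
    @title = title
  end

  def slug
    SlugBuilder.build(title)
  end
end
```

Usage: `Article.new("Hello, World!").slug # => "hello-world"`. Once the parent's tests pass against the delegating version, the old implementation can be deleted with confidence.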

In really well-written systems I find myself writing new code a lot more often than refactoring what already exists. I think this is mostly due to good compliance with the SOLID principles, and SRP in particular.