Monday, November 1, 2010
Sometimes you need to make a mess
What do we mean when we say some code is a mess? We're usually referring to its structure, or lack thereof. Interactions between components are awkward and poorly named. There's often logic leaking between the layers, causing odd couplings and generally making our lives a pain. Software like that is certainly a mess. That sort of mess is usually traceable to poor discipline on the part of the programmers or poor management of the team. We know these messes well.
However, that's not the kind of mess I want to talk about. I want to talk about the kind of mess that happens while you're learning about a problem space by programming in it. These are flaws that you intend to clean up once you've wrapped your head fully around the problem, but you're quite incapable of doing so until you *do* understand what's going on.
Would you say code in that state is messy? I would, but I think these messes are actually natural and good. Trying to avoid all kinds of messes early can be a mistake.
It is premature optimization of the design to force structure into the system when your understanding of the problem is too weak to support it.
Here's an example. I wanted to build a pure-ruby png spriting library to use in our web applications. It would take multiple png images, smash them together into one "sprite" png, and generate the css you need to access each of the images contained within. Now, I had never done any of this before. I didn't know how to parse a png or splice them together. I had a lot to learn to solve this problem.
What did I do? First I went and found the official png documentation, then I found a png library in ruby I could refer to. Finally, I made a huge mess trying to get it all to work. This mess was a little like laying all the puzzle pieces out on the floor before you start to assemble them. I had some functional tests, and I had some code that could parse a png - but it was far from an ideal form. Once I had something basic working, even though the code was a disaster, I could start to refactor. More importantly, I had allowed myself to explore the problem space by not getting hung up on the optimal design. I could worry about that later. Once I understood enough of the solution to feel good about introducing new concepts, I would do so and refactor the code into its new home.
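To give a feel for that exploratory phase, here's a minimal sketch of walking a png's chunk structure, the first thing you have to do before you can splice images together. The chunk layout (length, type, data, CRC) comes from the official png documentation; the `PngReader` class name is my own invention for illustration, not code from the actual library.

```ruby
require 'zlib'

class PngReader
  # Every png starts with this fixed 8-byte signature.
  SIGNATURE = [137, 80, 78, 71, 13, 10, 26, 10].pack('C*')

  # Returns an array of [type, data] pairs for every chunk in the file.
  def self.chunks(bytes)
    raise ArgumentError, 'not a png' unless bytes.start_with?(SIGNATURE)
    chunks = []
    offset = SIGNATURE.bytesize
    while offset < bytes.bytesize
      length = bytes[offset, 4].unpack1('N')  # 4-byte big-endian length
      type   = bytes[offset + 4, 4]           # e.g. "IHDR", "IDAT", "IEND"
      data   = bytes[offset + 8, length]
      crc    = bytes[offset + 8 + length, 4].unpack1('N')
      raise 'bad CRC' unless crc == Zlib.crc32(type + data)
      chunks << [type, data]
      offset += 12 + length                   # length + type + data + crc
    end
    chunks
  end
end
```

Even a crude reader like this is enough to start poking at real files and learning what the IHDR and IDAT chunks actually contain.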
The end result was a nice little spriting library you can check out here. How does this technique work? How can you make a big mess without tossing all the code in the end? I've certainly done that before. Well, most importantly, you need to write the right kinds of tests. Whenever I'm working in a problem space that I don't feel comfortable with, I always start my testing at the highest possible level; most of the time this means integration tests. I do this because integration tests give me more freedom to refactor.
A system that is easy to refactor doesn't punish me as much for making mistakes, and I certainly make a lot of them. A good suite of integration tests shouldn't break unless you've truly broken the application, which is very useful. In the case of the spriting library, I started by writing a test that expressed how I wanted to open a png and inspect its data. Once I had that working I would refactor what I thought I understood well enough and move on to the next test. When I got stuck, I tried to think up an easier test that didn't require me to take on so much of the problem. So foremost: when I have a really awesome set of integration tests I'm never afraid to attack a mess. I know the test suite has my back.
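Here's the shape of the kind of high-level test I mean. Everything goes through the public entry point, so the internals underneath can be torn apart and rebuilt freely. The `Sprite` class and its API are invented here for illustration; they're not the actual library's interface.

```ruby
# A tiny stand-in for a spriting library, exercised only through its
# public API - the way an integration test would drive it.
class Sprite
  Image = Struct.new(:name, :width, :height)

  def initialize
    @images = []
  end

  def add(name, width, height)
    @images << Image.new(name, width, height)
  end

  # Images stack vertically: width is the widest image, height is the sum.
  def dimensions
    [@images.map(&:width).max || 0, @images.sum(&:height)]
  end

  # One css rule per image, offsetting the background by each image's
  # vertical position in the sprite.
  def css
    y = 0
    @images.map do |img|
      rule = ".#{img.name} { background-position: 0 -#{y}px; }"
      y += img.height
      rule
    end.join("\n")
  end
end

sprite = Sprite.new
sprite.add('icon_home', 16, 16)
sprite.add('icon_user', 16, 32)
sprite.dimensions  # => [16, 48]
```

A test written against `add`, `dimensions`, and `css` only breaks when the observable behavior breaks, which is exactly the freedom you want while the internals are still a mess.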
Second, I need to be really good at refactoring. As I understand more of the problem I need to be able to take all these mistakes and turn them into solid code. Another important aspect of refactoring is knowing when the mess is getting too big. I need to understand how to refactor as much as I need to understand when a refactoring is necessary. I try to do this by always keeping the current state of the solution in my brain in sync with the solution in the code. As I learn and as code starts making sense I change it to match that new knowledge. That way when I come back to a problem I can pretty clearly tell where I need to focus my learning - wherever the code is the messiest.
Therefore I endeavor to only allow a mess to live as long as my ignorance of the problem space.
Friday, August 6, 2010
Groupon Launches Personalized Deals
Last week Groupon's founder Andrew Mason announced, as he put it, "the biggest thing we've done since launching Groupon". That announcement was personalized deals, and I'm very proud to say that Obtiva was intimately involved in the development of this new initiative.
Collaborating as a team of Obtivians, Groupon folks, and other consultants, we were able to take this project from inception to prototype to launch in just a few months. I believe Groupon and Obtiva have shown once again that taking the time to do things right, to be craftsmen, is the most efficient way to build software.
I'm confident that the growing Obtiva team at Groupon will continue contributing to the great software that has helped make Groupon one of the fastest growing internet companies of all time.
Tuesday, March 9, 2010
Craftsman Swap – Day 5 at Relevance
It was the last day of an amazing week and I wanted to end it with a bang. I poked around after standup to see what people were doing and it was apparent that it was going to be a light day in the Relevance office. I think it was a coincidence, but this Friday found most of the Relevance staff traveling.
Fridays are special at Relevance, as they give everyone the opportunity to work on his or her own R&D projects, with the stipulation that these projects are open source. Not only is this an awesome perk for their staff, it's great for their customers too. A lot of the tools they build on Fridays help them deliver better software faster and more reliably for their clients. If you've used any of the projects on this page, then you owe a nod to Open Source Friday at Relevance. Personally, I find their commitment to the community inspiring. I wish more companies found a way to give back like this (nudge, nudge, Obtiva).
This Friday began as every other day this week had, with the company stand-up. Nothing new to report there. After standup everyone broke to do their own thing, either as a pair or solo. I think it was split evenly between those working alone and those working as pairs.
I was attempting to sell Larry on my Clojure QuickCheck port when Brian Marick found his way into the conversation. For those who are unfamiliar, QuickCheck is a Haskell testing library where test data is automatically generated and assertions made against “properties” of a function under test. What I wanted to attempt was a BDD / QuickCheck Hybrid that supported both styles of testing.
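The property-testing idea is easy to sketch outside of Haskell or Clojure. Here's a minimal version of it in plain Ruby (my own toy sketch, not code from the port): generate random inputs, and assert that a property holds for every one of them.

```ruby
# A bare-bones QuickCheck-style property check: run the block against
# many randomly generated integer arrays and fail loudly on the first
# counterexample.
def check_property(trials: 100)
  trials.times do
    input = Array.new(rand(0..20)) { rand(-1000..1000) }
    raise "property failed for #{input.inspect}" unless yield(input)
  end
  true
end

# Property: reversing a list twice yields the original list.
check_property { |xs| xs.reverse.reverse == xs }

# Property: sorting is idempotent.
check_property { |xs| xs.sort == xs.sort.sort }
```

Real QuickCheck also shrinks failing inputs down to a minimal counterexample, which is where most of the interesting implementation work lives.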
Eventually it was just Brian and I working to bootstrap the framework. We're both new-ish to Clojure and we ended up bumping our heads on a rough edge or two. For instance, using a binding to override the behavior of the "=" function in Clojure is a really bad idea. And it is bad in a way that isn't immediately obvious. I think we killed a good hour figuring that out. We didn't make a ton of progress, but it was really great to finally work with Brian. He's an exceptional guy.
My Open Source Friday ended too soon, as did my time at Relevance. The people are top notch, the culture is wonderful and the software they write is exceptional. Without a doubt, a most wonderful place to spend a week or a career. So, a huge Thank You to all the Relevance folks for having me out. Let’s do another one soon!
Craftsman Swap – Day 4 at Relevance
The fair weather abandoned Durham sometime last night. I awoke to a brisk, rainy day. I had treated myself to an early bedtime and I was feeling especially good even though the weather wasn't spectacular. I made it in early thanks to Chad and spent about 40 minutes working on the blog post from the day before. I’m still behind from Tuesday.
Standup went with a now expected cadence. I filled everyone in with the delightfully busy day I had yesterday. Today I would continue working with Larry on the Compojure web application. Excellent.
Larry Karnowski and I began our pairing session by reviewing the card. Our review turned up a missing behavior: we had yet to allow the caller to provide the recipient's identifier in the request. Our next step was to look at what was currently supported in the model. Someone had already wired up the model so that a message could have multiple recipients, but it wasn't entirely finished. Multiple recipients was also clearly beyond the scope of our card. I thought we might as well go through with it, but Larry was adamant that we rip it out. It was not necessary for the delivery of this card - or any of the cards in this iteration.
This intrigued me. We spoke about it a bit and it became clear to me that this is part of what makes Relevance guys different. If the code is dead or if the implementation isn't ideal it gets ripped out right away.
I realized that this could be why I found it so easy to integrate myself with their projects. I didn't waste any cycles trying to sift out what was relevant from a year's worth of dust and debris. Great code doesn't stay that way long if you're not disciplined about taking out the garbage. Relevance employs some tidy fellows. Larry had to run to a meeting and left me to extract the unnecessary bits. I managed to muddle through and finished the job just before lunch.
After lunch Larry and I ran into another little issue. This story we were working on wasn't exactly user facing. We were building a programmatic API for creating messages. In order for the customer to sign off that the story was complete we needed a way for them to create messages through the API. It didn't need to be perfect, just functional. Given this particular API is for a web application we were able to construct a simple form that the customers could use to see the system work on their own.
Our afternoon ended early when we broke to do an estimation session. This estimation was to be done in preparation for the next iteration of the project Larry and I were working on. This project was also different. The outcome was to be a proof of concept that would be used to demonstrate some of the application's core features, not an application they'd be putting into production.
Before we talk about what happened let me roughly explain what I learned about how Relevance estimates cards. There are two estimates taken, one at a high level and another at a low level. The high level estimate is used to size stories into an iteration. An iteration is “full” when the sum of the high level estimates meets the expected velocity of the team.
These estimates are taken using a relative scale derived from completed stories in the project. Before each estimation session begins they baseline themselves for this project by looking at completed stories from the previous iteration.
The low level estimate is done by the pair picking up the card for development. This low level estimate is the one that the development pair is expected to perform to. It makes a lot of sense, then, that they get to choose what this number is.
Now, back to this estimation session. Muness, Chad, Larry and I began by reviewing the available cards in Mingle. We were debating priority from the customer's perspective as well as what sequence made the most sense for development. This gave us a subset of the available cards to plan for the upcoming iteration as well as a rough sense of priority.
At this point we began estimating cards. I abstained, as I was unfamiliar with the project and I didn't want my poor estimates to cause someone trouble down the road. Each estimate for a card was collected by counting down 3-2-1, then everyone showing a point value simultaneously. If the variance was small the higher estimate was taken; if the variance was large the card was debated again and another estimate taken.
We continued this process until the iteration was full. And that marked the end of my 4th day at Relevance.
Craftsman Swap – Day 3 at Relevance
Tuesday, March 2, 2010
Craftsman Swap – Day 2 at Relevance
Tuesday, January 19, 2010
Craftsman Swap – Day 1 at Relevance
My first day at Relevance actually began Sunday night, when Chad Humphries and Kris Singleton were kind enough to entertain me for the evening. I had dinner with Chad at Piedmont, an excellent local Italian place. The food was great and the conversation interesting. Chad talked to me about his inspiration for Micronaut, which is now RSpec 2, and went over some of its features. He's on the hook for giving me a 10-minute overview of the framework and some of its innovative features. Pretty excited about that. He also offered to meet me for breakfast the following morning, and then we walked to the Relevance office for a late-night tour.
Kris met Chad and me at the office, as he'd kindly offered to shuttle me to and from his place for a night of gaming. We played two rounds of Dominion (the Seaside expansion) and a train game I cannot remember the name of, but that was after a glass of bourbon. We played until about ten. It was a great way to finish out the evening.
I met Chad the next morning at Dos Perros, a little Mexican place next door to Relevance. About halfway through the meal I looked up to see a familiar face outside the window. Holy crap! It's Brian Marick!
He’s in town doing his own craftsman swap – so to speak. I don’t know if he’d call it that, but his plan seems similar to mine – do some work with the top-notch Relevance crew. It’s always a good day when Brian walks through your door. How exciting.
We all made it upstairs and hung around a bit before the 8:45 all-hands standup. As stand-ups go it was pretty much what I expected from an Agile shop: What did you do yesterday? What's the plan for today? What's blocking you? Brian and I were set up with our pairing partners as well, which looked to have been discussed beforehand. No problem.
Rob Sanheim and I (my pair for the day) started off with an architectural overview of the project I was joining. The project itself is really interesting, but I can’t say much about it. NDAs and all. We had the usual sort of arrowed diagrams describing the major components of the system and their interactions. I was interested in how their testing strategy played out across the tiers of the application, so we drew some lines to show the boundaries between the test suites and where they overlapped. It was a useful experiment. It turned out that in a past life I had worked quite a lot in the domain, so most of it made sense to me at a high level. Good enough to get started.
The work environment at Relevance is really nice. Everyone sits together in one large room. The desks are huge with ample room for whatever you think you’ll need for the day. Each pairing station has two monitors, two keyboards, and two mice. Most of the shops I’ve visited do it that way and it works. The office itself had a really warm, relaxing atmosphere. Exposed wood beams, pine plank flooring and lots of windows.
We settled in and began our pairing session by reviewing the story in Mingle. It was larger than I usually like a story to be and Rob agreed that it was a bigger card than they usually played. We discussed breaking the story down. A significant refactoring was necessary, so I pitched doing that first as an “internal” dev-facing card. Rob seemed to like the idea initially, but thought it would be a hard sell for the customer – no value. The second pitch was to break the story by its two major functional components – configuration and execution. That didn’t make sense from a value standpoint either – the configuration part had no value to the customer without being able to execute it. All that discussion took about 5 minutes and in the end we decided to take on the whole card.
The card itself broke down like this. It had the usual title and a paragraph or so describing the story. That was followed up by four cucumber features that loosely described the acceptance criteria. These were written by a developer beforehand. The bottom of the card contained a Q/A section that listed responses to developer questions and anything outstanding. The whole card was sent to the customer for approval before we began work. I liked that a lot.
The part of the system we’d be changing is a Rails webapp. First things first, we made sure the test suite passed. I noticed Rob was using something like Autospec, but it wasn’t. It is called Watchr and it looks pretty awesome. My first big knowledge win for the week. It works like Autospec in that it automatically re-runs the test suite whenever something changes, but it gives you a lot more control over what's getting executed. Going to be a big win for me on my client project back home.
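A Watchr script is just Ruby: you map file patterns to actions, and the matching block runs whenever a watched file changes. Here's the shape of one; the patterns, paths, and `spec` command are illustrative, not copied from the project.

```ruby
# specs.watchr - run with `watchr specs.watchr`.
# When a spec file changes, re-run just that spec.
watch('spec/.*_spec\.rb') { |match| system("spec #{match[0]}") }

# When a lib file changes, run its corresponding spec.
# match[1] captures the path under lib/, e.g. lib/foo.rb -> spec/foo_spec.rb.
watch('lib/(.*)\.rb')     { |match| system("spec spec/#{match[1]}_spec.rb") }
```

That per-pattern control over what gets executed is exactly what Autospec doesn't give you.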
We had spent about an hour on the card when we were pulled into the (I think daily) project standup meeting with the customer. It was efficient and well run. We discussed all of the outstanding issues from last week and how they were being addressed, then ran through the story work for the day. It was pretty much your standard stand-up meeting.
15 minutes later we’re back to work. The communal stereo offered an eclectic mix of tunes to set the mood – pretty much any genre you can imagine. By lunch we’d managed to translate most of the first acceptance test from the story into something we could execute.
Relevance brings in lunch every day for its employees. Well that’s nice, but what’s the value? Is catering lunch every day just a perk or does it add something to Relevance’s culture? I think it does. First, everyone eats together every day. A lot of interesting discussions went on over lunch and everyone was there to participate. Second, we all got back to work at the same time. That is a fantastic feature if you’re pairing a lot – and Relevance certainly is.
The rest of the afternoon was standard Rails fare - a few migrations, some refactoring, fixing specs. We did get a little hung up in our acceptance test where we wanted to assert the presence of a few fields in the view. We got something working with rspec's include matcher, but the failure output was terrible: it dumped the entire response body. We blew a good 20 minutes trying to clean it up to no avail. We were using Micronaut (RSpec 2), so have_tag wasn't available, and webrat's selector doesn't work with assertions. We benched the cleanup when it became apparent it was going to be harder than we thought. It wasn't that important.
Our day ended with a 1.5-hour company retrospective. I was told these happen every two weeks at Relevance. We don't do these at Obtiva and I really liked the idea. We do a lot more on-site work than Relevance does, at least it seems that way. An all-hands company retrospective is a great idea for us, and it's definitely something I'm going to take back to Obtiva with me.
Overall, it was a pretty awesome first day. The way they put together their projects really resonates with my own personal style. All the code we worked with is exactly what I expected to see. Very high quality, easy to comprehend and simple to test. I had enough confidence to take the driver’s seat after a few hours and I’m happy to say I was able to contribute a couple useful ideas on day 1. That speaks not to my own ability but to the quality of my pairing partner and the expressiveness of the system he was helping me to learn. Well done.
Looking forward to day 2.