Lee:
I didn't want to start derailing the thread, but I wrote two gists that night that I wanted to get your personal opinion on before deciding whether or not to post them.
Without DI: https://gist.github.com/richmolj/5811237
Does this accurately display the additional complexity added by DI, or is the example unfair? Not particularly trying to make a point here, more trying to better orient myself to the opposing mindset.
Lou:
Yeah, I think your examples are pretty fair. There's certainly some overhead to going the DI route, and I definitely wouldn't dispute it, but to me the extra complexity in the constructor pays off more times than not.
When you want to reuse the post somewhere else but have the stats collected differently, you just pass in a different collaborator. In other words, I think this helps with the open/closed principle, which is a bigger deal than the enhanced testability. (There, I agree with your previous assessment: it's a minor plus, but it wouldn't be the main reason I'd want the DI code, though I'm sure others would disagree with me.) If we're getting into a world where we're aiming for more code reuse, we want to minimize the number of places where we actually have to change a class's implementation, because that carries a much greater risk of introducing regressions than building up the behavior you want from small existing pieces.
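Something like this is what I'm picturing; it's a rough sketch rather than your actual gist code, and the SimpleStatsCollector/AnalyticsStatsCollector names are made up:

```ruby
# Made-up example of swapping the stats collaborator without touching Post.
class Post
  def initialize(title, stats_collector)
    @title = title
    @stats_collector = stats_collector
  end

  def publish
    # imagine persistence happening here
    @stats_collector.record(self)
  end
end

class SimpleStatsCollector
  def record(post)
    puts "recorded a publish for #{post.class}"
  end
end

class AnalyticsStatsCollector
  def record(post)
    # e.g. push an event to an external analytics service instead
  end
end

# Same Post class, different stats behavior, no implementation changes:
Post.new("Hello", SimpleStatsCollector.new).publish
Post.new("Hello", AnalyticsStatsCollector.new).publish
```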
Also, you could minimize the pain of that constructor with either parameter defaults or a factory, etc. In the example I'd probably just put a factory method on the post class, like Post.simple_post, that wraps the last three lines. In general, I'd prefer the more flexible interface and then work with it via a facade of some sort. This keeps us from monkey patching or changing the internals of classes that are used in many places in production code.
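Again just a sketch with made-up collaborator names, not a mirror of the gist's last three lines, but the factory idea would look something like:

```ruby
# Made-up example of hiding the DI wiring behind a factory method.
class SimpleStatsCollector
  def record(post); end
end

class PlainFormatter
  def format(body)
    body
  end
end

class Post
  def initialize(title, stats_collector, formatter)
    @title = title
    @stats_collector = stats_collector
    @formatter = formatter
  end

  # Most call sites stay one line; anyone who needs different
  # collaborators can still use the full constructor.
  def self.simple_post(title)
    new(title, SimpleStatsCollector.new, PlainFormatter.new)
  end
end

Post.simple_post("Hello")                                       # common case
Post.new("Hello", SimpleStatsCollector.new, PlainFormatter.new) # full control
```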
So, yeah, I would definitely add it to the discussion and see how folks feel about it. I was thinking about this a lot last night, and I think you're right that folks have different tolerances for the number of abstractions they want to deal with. I think I might have a preference for more generalized stuff, just because I tend to look at code more from an abstract math perspective than an engineering one. YMMV. Would be interesting to see how everybody else feels.
Though in either case I think we should take a look at how we do (or don't do) documentation. That helps a lot in nailing down these abstractions with concrete details about the contracts.
@richmolj Sure, YAGNI can be useful, but I think it has some serious problems when applied too much. Mostly because it's "You ain't gonna need it," not "You might not need it. Stop. Think." Something like this needs to be applied as a heuristic, not an absolute rule to follow blindly. (Not saying that you're saying that, btw. But I think that DHH most certainly is in the post that you referenced.)
Think of it this way: it's analogous to premature optimization, right? When was the last time you knew from experience that there was a well-known logarithmic-time solution to the problem you were trying to solve, but, not wanting to pre-optimize, you put in a quadratic solution instead? Nobody starts with a bubble sort and refactors to merge sort. We just jump to the right solution directly because our intuition and experience guide us to do so.
Except that the warning about premature optimization isn't really about those kinds of changes. It's not about going from O(n^3) to O(n). It's about going from something like 10n to 5n. Messing with the coefficients should only be done when you can demonstrate the necessity, but when it comes to orders of magnitude, it's much more clear cut.
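To put the coefficient-vs-order distinction in code (a toy example, nothing to do with either gist):

```ruby
# O(n^2): tweaking this inner loop only changes the coefficients.
def bubble_sort(array)
  list = array.dup
  loop do
    swapped = false
    (list.length - 1).times do |i|
      if list[i] > list[i + 1]
        list[i], list[i + 1] = list[i + 1], list[i]
        swapped = true
      end
    end
    break unless swapped
  end
  list
end

numbers = [5, 3, 8, 1, 9, 2]
bubble_sort(numbers) # quadratic no matter how you tune it
numbers.sort         # roughly O(n log n); the order-of-magnitude choice you make up front
```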
I think these little mini-refactorings can make improvements, but a lot of the time you're just about to embark on a yak shaving expedition. I think those are the pain points you feel in legacy code: it's not more complicated because it's more abstract, it's more complicated because you're seeing the outcome of somebody groping around in the dark because they didn't understand the problem.
I think a good example of this is one of the epic code battles of history, Jeffries vs. Norvig. The blog post, which is a good read but lengthy, is talking about TDD, but I think there's a larger point here. Norvig's solution worked because he had a formal model of the problem that fit the solution better. No amount of refactoring from something simple was going to get Jeffries over the goal line.
So, yeah, I guess to sum up: I think it's easy for DHH to say "always do X, this is what design means," because he's selling something. Design isn't about blindly following a rule, it's about taste and experience. If it weren't, we'd all be Monet after taking a few semesters of art school.