

@richmolj
Last active December 18, 2015 16:59
Back and forth with Lou

Lee:

I didn't want to derail the thread, but I wrote two gists that night that I wanted your personal opinion on before deciding whether to post them.

Without DI: https://gist.github.com/richmolj/5811237

With DI https://gist.github.com/richmolj/5811358

Does this accurately display the additional complexity added by DI, or is the example unfair? Not particularly trying to make a point here, more trying to better orient myself to the opposing mindset.

Lou:

Yeah, I think your examples are pretty fair. There certainly is some overhead to going the DI route (I definitely wouldn't dispute it), but to me the extra complexity in the constructor pays off more often than not.

When you want to reuse the post somewhere else, but have the stats collected differently, you just pass in a different collaborator. In other words, I think this helps with the open/closed principle, which is a bigger deal than the enhanced testability (there I agree with your previous assessment: it's a minor plus, but it wouldn't be the main reason I'd want the DI code--though I'm sure others would disagree with me). If we're aiming for more code reuse, we want to minimize the number of instances where we actually have to change a class's implementation, because that carries a much greater risk of introducing regressions than building up the behavior you want from small existing pieces.
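
To make that concrete, something like this minimal Ruby sketch (the class and collaborator names are invented for illustration, not taken from either gist):

    # Hypothetical sketch: the stats collaborator is injected with a sensible
    # default, so Post stays closed to modification when stats need to be
    # collected differently.
    class StatsCollector
      def record(event, subject)
        puts "recorded #{event} for #{subject}"
      end
    end

    class NullStats
      def record(*); end  # silently ignore, e.g. for a bulk import
    end

    class Post
      def initialize(title, stats: StatsCollector.new)
        @title = title
        @stats = stats
      end

      def publish!
        @stats.record(:publish, @title)
      end
    end

    Post.new("Hello").publish!                        # default behavior
    Post.new("Hello", stats: NullStats.new).publish!  # swapped collaborator, Post unchanged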

Also, you could minimize the pain of that constructor with parameter defaults, a factory, etc. In the example I'd probably just put a factory method on the post class, like Post.simple_post, that wraps the last three lines. In general, I'd prefer the more flexible interface and then work with it via a facade of some sort. That keeps us from monkey patching or changing the internals of classes that are used in many places in production code.
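
So the common call sites stay one-liners while the flexible constructor remains available; roughly (a hypothetical sketch with stand-in collaborators; only the simple_post name is from above):

    # Hypothetical sketch of the factory idea: the constructor keeps its
    # injectable collaborators, and a factory method wraps the usual wiring.
    class SearchIndex
      def index(post); end
    end

    class AlarmSystem
      def arm!; end
    end

    class Post
      def initialize(title, search: SearchIndex.new, alarms: AlarmSystem.new)
        @title, @search, @alarms = title, search, alarms
      end

      # Wraps the usual setup so most callers never see the wiring.
      def self.simple_post(title)
        new(title)
      end
    end

    Post.simple_post("Hello")  # the common case; pass collaborators only when needed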

So, yeah, I would definitely add it to the discussion and see how folks feel about it. I was thinking about this a lot last night, and I think you're right that folks have different tolerances for the number of abstractions they want to deal with. I think I might have a preference for more generalized stuff, just because I tend to look at code more from an abstract math perspective than an engineering one. YMMV. Would be interesting to see how everybody else feels.

Though in either case I think we should take a look at how we do (or don't do) documentation. That helps a lot in nailing down these abstractions with concrete details about the contracts.

@richmolj
Author

I don't think the direction you are heading is wrong. In fact, I think it's pretty smart...when applied as a last step. When I see that gist I think:

The road to programming hell is paved with “best practices” applied too early. Very few abstractions, extractions, or architectures are inherently bad (although I’ll contend that a few are). But most of them are downright evil if applied before they’re called for. Link

I feel like 12 lines of code that could be understood in 12 seconds went to 92 lines of code that could be understood in 12 minutes. Granted, I am not that smart.

You could certainly argue the code is more extensible and flexible. My experience has been that the worst code situations stem not from tight coupling or mixed responsibilities, but from developers trying to do something too slick, too soon. YMMV.

@ljsc

ljsc commented Jun 20, 2013

@richmolj Sure, YAGNI can be useful, but I think it has some serious problems when applied too much. Mostly because it's "You ain't gonna need it," not "You might not need it. Stop. Think." Something like this needs to be applied as a heuristic, not an absolute rule to follow blindly. (Not saying that you're saying that, btw. But I think that DHH most certainly is, in the post that you referenced.)

Think of it this way: it's analogous to premature optimization, right? When was the last time you knew from experience that there was a well-known logarithmic-time solution to the problem you were trying to solve, but put in a quadratic solution instead because you didn't want to pre-optimize? Nobody starts with a bubble sort and refactors to merge sort. We just jump to the right solution directly, because our intuition and experience guide us to do so.

Except that the warning about premature optimization isn't really about those kinds of changes. It's not about going from O(n^3) to O(n). It's about going from 2n to n. Messing with the coefficients should only be done when you can demonstrate the necessity, but when it comes to orders of magnitude, it's much more clear cut.

I think these little mini-refactorings can make improvements, but a lot of the time you're just about to embark on a yak-shaving expedition. I think that's the pain point you feel in legacy code: it's not more complicated because it's more abstract, it's more complicated because you're seeing the outcome of somebody groping around in the dark because they don't understand the problem.

I think a good example of this is one of the epic code battles of history, Jeffries vs. Norvig. The blog post, which is a good read but lengthy, is talking about TDD, but I think there's a larger point here. Norvig's solution worked because he had a formal model of the problem that fit the solution better. No amount of refactoring from something simple was going to get Jeffries over the goal line.

I suspect, having done a small amount of TDD myself, that this is actually a pattern that arises when a programmer tries to apply TDD to a problem they just don’t know how to solve. If I was a high-priced consultant/trainer like Jeffries, I’d probably give this pattern a pithy name like “Going in Circles Means You Don’t Know What You’re Doing”. Because he had no idea how to tackle the real problem, the only kinds of tests he could think of were either the very high-level “the program works” kind which were obviously too much of a leap or low-level tests of nitty-gritty code that is necessary but not at all sufficient for a working solver.

So, yeah, I guess to sum up: I think it's easy for DHH to say "always do X, this is what design means," because he's selling something. Design isn't about always blindly following a rule; it's about taste and experience. If it weren't, we'd all be Monet after taking a few semesters in art school.

@richmolj
Author

These are all fair points. But you say:

Design isn't about always blindly following a rule, it's about taste and experience.

What I am hearing from the other side of this debate is:

We should always blindly follow the Dependency Injection rule

For the record, here's an example of how I would progress to DI using the published_at example, but only once my situation called for it:

https://gist.github.com/richmolj/5822934

(I'd do the same with the example above, but it would get a lot more verbose and prone to tangents at each step.)
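
Roughly, the progression has this shape (a sketch of the idea, not the gist's exact code; only published_at, Time.now, and the injected clock come from this thread):

    # Step 1: no DI. Time.now is fine until something actually needs to vary.
    class Post
      def publish!
        @published_at = Time.now
      end
    end

    # Step 2, only once a real need shows up: default the clock, inject when required.
    # (Reopening Post here just to show the second step in one file.)
    class Post
      def initialize(clock: Time)
        @clock = clock
      end

      def publish!
        @published_at = @clock.now
      end
    end

    frozen_clock = Struct.new(:now).new(Time.at(0))  # stand-in clock for tests/backdating
    Post.new.publish!                       # production: the real clock
    Post.new(clock: frozen_clock).publish!  # only when a second clock is actually needed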

@ljsc

ljsc commented Jun 20, 2013

We should always blindly follow the Dependency Injection rule

I didn't know that was anyone's view. It's certainly not mine. I think we should use whatever patterns are appropriate for the problem at hand.

I just don't think patterns--and to me DI is just a pattern--necessarily have to lead to a lot of complexity. If you pick the right one, you can make your life a lot simpler down the road.

@karlhigley

When I wrote my gist, I intended to point out that the original DI example is a bad example, both of why DI is useful and of problems with DI. The example is pointing at a problem (too many collaborators) that's not a problem with DI -- it's a design problem.

I assumed we were talking about a large code base, and on that basis, I think the criticisms of my gist fall flat. Complaining that I've expanded the number of lines misses the point; what I bought with the extra lines were named concepts that serve as attractors for behavior. When those concepts are applied across a large number of instances, the net result should be more reuse and therefore fewer total lines.

Given the context of a large code base, I also don't think I'm prematurely optimizing. If I had interpreted the example as a single instance in a code base with no other cases that involved searching, statistics, or alarms, I might be okay with either of the original with/without DI examples. I just don't think that's the case we're actually looking at in practice.

And, FWIW, I have yet to hear anyone suggest that DI should always be used. What I have heard, from many people, is that DI is a useful tool that should be applied when relevant. The "always" in this conversation consistently comes from the drive to establish standards. Given the standardization push, it's been pretty hard to escape the false opposition between DI and app/gem configuration built into the premise of the conversation.

@richmolj
Author

I've tried to be clear I don't personally see DI as an all-or-nothing scenario. I'll refer to the Post @backend example in my original gist, and the 'progression' gist in my last comment.

I do think "always DI" has been suggested. I've been told in this situation:

https://gist.github.com/richmolj/5823903

We should be injecting Encrypter, even if there is no other encryption class. And the consensus for this:

https://gist.github.com/richmolj/5824046

Seems to be the DI approach, even if Comment and Photo are the only possible repositories. (There's a slight caveat here: we could say 'there is a separate repository when the "database" is down', but for the moment I'd like to treat that as a tangent, since I've heard the same thing regardless of this situation.)
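
To be clear about what I mean by "the DI approach," the shape of the disagreement looks roughly like this (a hypothetical sketch; only the Encrypter name comes from the example above, the rest is made up):

    # Hypothetical sketch contrasting the two shapes being argued about.
    class Encrypter
      def encrypt(body)
        body.reverse  # stand-in for real encryption
      end
    end

    # Without DI: the dependency is hard-coded, which is simpler while
    # Encrypter is the only encryption class that exists.
    class HardcodedMessage
      def deliver(body)
        Encrypter.new.encrypt(body)
      end
    end

    # With DI: the collaborator is injected "just in case" a second
    # implementation ever appears.
    class InjectedMessage
      def initialize(encrypter: Encrypter.new)
        @encrypter = encrypter
      end

      def deliver(body)
        @encrypter.encrypt(body)
      end
    end

    HardcodedMessage.new.deliver("hi")
    InjectedMessage.new.deliver("hi")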

In my 'progression' example (https://gist.github.com/richmolj/5822934), I've been told I should not be progressing; I should immediately be injecting the clock, in case at some point in the future I don't want Time.now.

These all seem 'Full-DI' to me, since I never have an actual need for it. The logic is, 'at some point you may have a need for it, so start with it.'

The example at the top was an attempt to distill this into one quickly readable example of my central points:

A) There is a tradeoff between abstraction and simplicity here.
B) There are alternatives before going full-DI (like Service Locator; a rough sketch follows this list).
C) Still, go DI when you need it.
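
For B), the Service Locator alternative I have in mind is roughly this (a generic sketch, names invented):

    # Hypothetical sketch of the Service Locator alternative: dependencies are
    # looked up from a single registry instead of being threaded through every
    # constructor.
    class Services
      @registry = {}

      class << self
        def register(name, service)
          @registry[name] = service
        end

        def fetch(name)
          @registry.fetch(name)
        end
      end
    end

    class Clock
      def now
        Time.now
      end
    end

    Services.register(:clock, Clock.new)

    class Post
      def publish!
        @published_at = Services.fetch(:clock).now  # looked up, not injected
      end
    end

    Post.new.publish!
    # A test or special environment swaps the registration in one place:
    # Services.register(:clock, some_fake_clock)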

If you believe the example doesn't make this point clearly, let's pair to create a better example (or disagree with the fundamentals of my assertion).
