When access to an object's internals is truly necessary, composition-based techniques aren't practical. For example, consider the following mixin-based code, which implements a memoization routine for caching method return values:
module Cached
  def cache(*method_names)
    method_names.each do |m|
      original = instance_method(m)
      results  = {}

      define_method(m) do |*a|
        results[[m, a]] ||= original.bind(self).call(*a)
      end
    end
  end
end
## EXAMPLE USAGE:

class Numbers
  extend Cached

  def fib(n)
    raise ArgumentError if n < 0
    return n if n < 2

    fib(n - 1) + fib(n - 2)
  end

  cache :fib
end

n = Numbers.new
(0..100).each { |e| p [e, n.fib(e)] }
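To see that the mixin really does short-circuit the recursion, we can instrument the method body with a call counter. The CountingNumbers class below is our own addition for illustration, not part of the original example; the Cached module is repeated verbatim so the snippet is self-contained.

```ruby
# Same Cached module as above, repeated for self-containment.
module Cached
  def cache(*method_names)
    method_names.each do |m|
      original = instance_method(m)
      results  = {}

      define_method(m) do |*a|
        results[[m, a]] ||= original.bind(self).call(*a)
      end
    end
  end
end

# A hypothetical variant of Numbers that counts how many times
# the underlying fib body actually runs.
class CountingNumbers
  extend Cached

  attr_reader :calls

  def initialize
    @calls = 0
  end

  def fib(n)
    @calls += 1
    return n if n < 2

    fib(n - 1) + fib(n - 2)
  end

  cache :fib
end

n = CountingNumbers.new
p n.fib(30)  # => 832040
p n.calls    # => 31 -- the body runs once per distinct argument
```

Because the memoized wrapper intercepts even the internal recursive calls, fib(30) runs the body only 31 times (once for each value from 0 to 30) rather than over a million times.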
A naive attempt to refactor the Cached module into a ComposedCache class might look something like this:
class ComposedCache
  def initialize(target)
    @target = target
  end

  def cache(*method_names)
    method_names.each do |m|
      results = {}

      define_singleton_method(m) do |*a|
        results[[m, a]] ||= @target.send(m, *a)
      end
    end
  end
end

n = ComposedCache.new(Numbers.new)
n.cache(:fib)

(0..100).each { |e| p [e, n.fib(e)] }
Unfortunately, this code has a critical flaw that makes it unsuitable for general use: it caches calls made through the ComposedCache proxy, but it does not cache internal calls made within the objects it wraps. In practice, this makes it absolutely useless for optimizing the performance of recursive functions such as the fib() method we're working with here.
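The same call-counting trick makes the flaw concrete. Here the CountingNumbers class is again our own instrumented stand-in for Numbers, and the ComposedCache class is repeated from above for self-containment:

```ruby
# The flawed proxy, repeated from above.
class ComposedCache
  def initialize(target)
    @target = target
  end

  def cache(*method_names)
    method_names.each do |m|
      results = {}

      define_singleton_method(m) do |*a|
        results[[m, a]] ||= @target.send(m, *a)
      end
    end
  end
end

# A hypothetical stand-in for Numbers that counts body invocations.
class CountingNumbers
  attr_reader :calls

  def initialize
    @calls = 0
  end

  def fib(n)
    @calls += 1
    return n if n < 2

    fib(n - 1) + fib(n - 2)
  end
end

numbers = CountingNumbers.new
n = ComposedCache.new(numbers)
n.cache(:fib)

n.fib(20)
p numbers.calls # => 21891 -- internal recursion bypasses the proxy

n.fib(20)
p numbers.calls # => 21891 -- only the outermost call was cached
```

The recursive calls to fib(n - 1) and fib(n - 2) dispatch directly on the wrapped object, so each top-level call still does an exponential amount of work; only repeated calls through the proxy with identical arguments get any benefit.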
There is no way around this problem without modifying the wrapped object. In order to stick with composition-based modeling and still get proper caching behavior, here's what we'd need to do:
class ComposedCache
  def initialize(target)
    @target = target
  end

  def cache(*method_names)
    method_names.each do |m|
      original = @target.method(m)
      results  = {}

      @target.define_singleton_method(m) do |*a|
        results[[m, a]] ||= original.call(*a)
      end

      define_singleton_method(m) do |*a|
        @target.send(m, *a)
      end
    end
  end
end

n = ComposedCache.new(Numbers.new)
n.cache(:fib)

(0..100).each { |e| p [e, n.fib(e)] }
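Running the revised proxy against an instrumented target confirms that internal recursion now hits the cache too. As before, CountingNumbers is our own illustrative stand-in, and the revised ComposedCache is repeated so the snippet runs on its own:

```ruby
# The revised proxy, repeated from above: it redefines the method
# on the wrapped object itself, so internal calls are cached too.
class ComposedCache
  def initialize(target)
    @target = target
  end

  def cache(*method_names)
    method_names.each do |m|
      original = @target.method(m)
      results  = {}

      @target.define_singleton_method(m) do |*a|
        results[[m, a]] ||= original.call(*a)
      end

      define_singleton_method(m) do |*a|
        @target.send(m, *a)
      end
    end
  end
end

# A hypothetical stand-in for Numbers that counts body invocations.
class CountingNumbers
  attr_reader :calls

  def initialize
    @calls = 0
  end

  def fib(n)
    @calls += 1
    return n if n < 2

    fib(n - 1) + fib(n - 2)
  end
end

numbers = CountingNumbers.new
n = ComposedCache.new(numbers)
n.cache(:fib)

p n.fib(30)     # => 832040
p numbers.calls # => 31 -- each distinct argument computed only once
```

Because the singleton method defined on the target shadows its original fib, the recursive calls inside the method body now go through the memoized version, just as they did with the mixin.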
Such a design would prevent a new ancestor from being introduced into the Numbers object's lookup path, and it would externalize the code that actually understands how to handle the caching. However, because ComposedCache still directly modifies the behavior of the Numbers objects it wraps, it loses the benefit of encapsulation that typically comes along with composition-based modeling.
We also end up with an interface that feels awkward: defining what methods ought to be cached via an instance method call does not feel nearly as natural as using a class-level macro, and might be cumbersome to integrate within a real project. There are ways this interface can be improved, but they all bring with them a few new hoops to jump through.
Because the ComposedCache expects all cached methods to be explicitly declared and does not support automatic delegation to the underlying object, it can be cumbersome to work with: it would either need to be modified to forward all uncached method calls to the object it wraps (losing the benefits of a narrow surface), or the caller would need to keep both a reference to the original object and the composed cache object around (which is very awkward and confusing!).
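To make the first option concrete, here is one way (our own sketch, not from the text) to bolt automatic forwarding onto the proxy using method_missing and respond_to_missing?. For brevity it builds on the simpler flawed proxy, and the PlainNumbers class is a hypothetical target with one cached and one uncached method:

```ruby
class ComposedCache
  def initialize(target)
    @target = target
  end

  def cache(*method_names)
    method_names.each do |m|
      results = {}

      define_singleton_method(m) do |*a|
        results[[m, a]] ||= @target.send(m, *a)
      end
    end
  end

  # Forward any uncached method calls to the wrapped object,
  # widening the surface in exchange for convenience.
  def method_missing(name, *args, &block)
    if @target.respond_to?(name)
      @target.send(name, *args, &block)
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    @target.respond_to?(name, include_private) || super
  end
end

# A hypothetical target with a cached and an uncached method.
class PlainNumbers
  def fib(n)
    return n if n < 2

    fib(n - 1) + fib(n - 2)
  end

  def double(n)
    n * 2
  end
end

n = ComposedCache.new(PlainNumbers.new)
n.cache(:fib)

p n.fib(10)              # => 55, cached through the proxy
p n.double(5)            # => 10, forwarded via method_missing
p n.respond_to?(:double) # => true
```

This keeps a single object for callers to hold, but it also means the proxy now responds to everything the target does, which is exactly the wide surface the previous paragraph warned about.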
Good composition-based modeling produces code that is simpler than the sum of its parts, as a direct result of strong encapsulation and well-defined interactions between collaborators. Unfortunately, our implementation of the ComposedCache class has none of those benefits, and so it serves as a useful (if pathological) example of the downsides of composition-based modeling.