In case you were wondering, Python’s built-in LRU cache function decorator doesn’t cache function calls with the same keyword arguments written in a different order. That is, if you cache a function and then call it twice, with the same keyword arguments but written in a different order, on the second call you won’t get the cached result and the function will be re-run.
Let’s construct an example. We’re interested in keyword arguments, so we’ll make our function take only keyword arguments (1). We want some side effect to tell us whether it was a cache hit or miss, so we’ll print each keyword as we iterate over it (2). We want to cache a result, so we’ll return the sum of the values of the keyword arguments (3). Finally, we apply the lru_cache decorator to the function with maxsize=None to let the cache grow unbounded. Putting all that together, we get:
from functools import lru_cache

@lru_cache(maxsize=None)
def f(**kwargs):  # 1
    vs = []
    for k in kwargs:
        print("keyword:", k)  # 2
        vs.append(kwargs[k])
    return sum(vs)  # 3
Let’s run it in the interpreter. The first time we run it, we expect f to be called, so we expect it to print the two keywords we pass, x and y, as well as the interpreter printing the result:
>>> f(x=1, y=2)
keyword: x
keyword: y
3
Great! Now let’s run it again with the same arguments, so that we get a cache hit. In this invocation f should never run, so we expect not to see the keywords printed:
>>> f(x=1, y=2)
3
Again, this matches our expectations. So now, if we call f with the same arguments but written in the reverse order, will it be a cache hit or a cache miss?
>>> f(y=2, x=1)
keyword: y
keyword: x
3
It was a cache miss :( Our function f was re-run, which we know because it printed the keywords to the console.
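You can also confirm this without relying on the print side effect: the wrapper that lru_cache returns exposes a cache_info() method, and after the three calls above it should report one hit and two misses, something like:

>>> f.cache_info()
CacheInfo(hits=1, misses=2, maxsize=None, currsize=2)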
Does this matter? Probably not in a small code base, because a function is likely to be called with its keywords in the same order everywhere, and Python keeps dictionary ordering consistent in practice. But in a codebase with multiple contributors, this could lead to unexpected behavior. Historically the spec treated Python dictionaries as unordered, and most tutorials tell you not to rely on dictionary ordering; either way, the order in which a caller writes keyword arguments isn’t supposed to carry any meaning. A caching decorator that is sensitive to that ordering feels incorrect.
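If you do want memoization that ignores keyword order, one workaround is to canonicalize the keyword arguments yourself before they reach lru_cache. Here is a minimal sketch (the decorator name kwarg_order_insensitive_cache is just illustrative, not anything from functools) that sorts the keyword items by name so that f(x=1, y=2) and f(y=2, x=1) share a cache entry:

from functools import lru_cache, wraps

def kwarg_order_insensitive_cache(func):
    # Cache on a canonical key: the positional arguments plus the keyword
    # items sorted by name, so the call-site keyword order doesn't matter.
    @lru_cache(maxsize=None)
    def cached(args, sorted_kwarg_items):
        return func(*args, **dict(sorted_kwarg_items))

    @wraps(func)
    def wrapper(*args, **kwargs):
        return cached(args, tuple(sorted(kwargs.items())))

    return wrapper

The usual lru_cache caveat still applies: all positional arguments and keyword values have to be hashable.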
I looked into this because I was trying to memoize a function with keyword arguments, and was wondering how lru_cache handles the unordered-ness of keyword arguments. It turns out it doesn’t.
A simpler example shows that the same issue also applies to default keyword arguments.
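Roughly (a sketch, with a hypothetical function g that has a single default keyword x=0 and a print to expose re-runs):

@lru_cache(maxsize=None)
def g(*, x=0):
    print("computing")
    return x

Calling it once without the keyword and once with the default spelled out passes the same value either way, yet the second call is still a miss:

>>> g()
computing
0
>>> g(x=0)
computing
0

The second call re-ran g, because lru_cache builds its key from the arguments as they were actually passed; the default value only gets filled in when the function body runs, after the cache lookup has already happened.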