@jeremyjbowers
Last active December 20, 2015 15:09
Basic Varnish configuration for Inspections project.
# Okay, so, let's start by setting up a backend.
# Varnish needs to know where to send requests
# that fail to find an object in the cache.
# We'll send requests back to Nginx on port 8001.
backend default {
    .host = "127.0.0.1";
    .port = "8001";
}
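# Two quick sanity checks before going further (assuming
# Nginx really is on 127.0.0.1:8001 and this VCL lives at
# the conventional path on your box):
#   curl -sI http://127.0.0.1:8001/          # does the backend answer?
#   varnishd -C -f /etc/varnish/default.vcl  # does this VCL compile?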
# Varnish defines a set of subroutines that start with
# "sub" and have special names. This one is called
# "vcl_recv," which is the first step in a long process.
# There will be other steps, but this is the first one.
# In vcl_recv, we handle a couple of edge cases.
sub vcl_recv {
    # First, if this is an /admin/ URL, DON'T CACHE IT.
    if (req.url ~ "^/admin") { return(pass); }
    # Second, let's make friends with the jQuery people.
    # jQuery appends a cache-busting parameter called _
    # to URLs requested with $.ajax(). If the URL has
    # something like _=123845885 in it, strip it out here.
    # We don't want that to nuke our cache, right?
    set req.url = regsuball(req.url, "[?&]_=[^&]{1,25}", "");
    # Set up our backend as "default", the one above.
    set req.backend = default;
    # Grace mode is wonderful. We'll allow a 2-hour grace
    # period: if our backend is overloaded or down, grace
    # mode will continue to serve a stale page for up to
    # 2 hours until a fresh page is available. This should
    # only take a few minutes in practice.
    set req.grace = 2h;
    # Okay, now, advance forward to the "lookup" step.
    return(lookup);
}
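# To see what that regsuball does, take a hypothetical
# jQuery request:
#   /inspections/api/?q=bar&_=123845885
# After the substitution, Varnish looks up:
#   /inspections/api/?q=bar
# so every poll of the same endpoint hits one cache object.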
# Okay, did our lookup "miss" the cache, i.e., fail to
# find a matching object? If so, send us through to
# vcl_fetch.
sub vcl_miss {
    return(fetch);
}
# Otherwise, did we get a hit? If yes, send us through
# to vcl_deliver.
sub vcl_hit {
    return(deliver);
}
# You'd think that vcl_fetch is going to check the cache
# for this URL, but you'd be wrong. Fetch actually just
# gets the response from our backend (e.g., Django) and
# then stores it in the cache.
sub vcl_fetch {
    # Set the backend response to a 2-hour time-to-live.
    # This means our pages will serve from cache for two
    # hours before they expire.
    set beresp.ttl = 2h;
    # Grace mode again. Remember it?
    set beresp.grace = 2h;
    # This response is cacheable; say so in a debug header.
    set beresp.http.X-Cacheable = "YES";
    # Don't let there be a Vary header. There are many
    # reasons to avoid one here. But trust me: We don't
    # want it.
    unset beresp.http.Vary;
    # Finally, return us to vcl_deliver.
    return(deliver);
}
# Alright! So we're either delivering a cached response
# or a newly fetched response which has just been cached.
sub vcl_deliver {
    # Okay, if there are hits for this object, send something
    # back in the response headers.
    if (obj.hits > 0) {
        # Say we got a cache hit. Helps you to debug later.
        set resp.http.X-Cache = "HIT";
        # Say how many hits on this cached item. The more, the better.
        set resp.http.X-Cache-Hits = obj.hits;
        # Say what our backend was. This is more important if you have
        # many backends, like a fleet of servers.
        set resp.http.X-Cache-Backend = req.backend;
    } else {
        # Otherwise, just say it was a miss.
        set resp.http.X-Cache = "MISS";
    }
    # Have a little fun. Not everything should be serious.
    set resp.http.X-Django-Pony = "Neiiiiiiigh!";
    # Deliver the response!
    return(deliver);
}
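# Once this is running, you can watch the debug headers set
# in vcl_deliver by requesting a page twice (hypothetical URL):
#   curl -sI http://localhost/inspections/ | grep X-Cache
# The first request should say "X-Cache: MISS"; repeat it and
# you should see "X-Cache: HIT" with X-Cache-Hits counting up.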