wget spider cache warmer
wget --spider -o wget.log -e robots=off -r -l 5 -p -S --header="X-Bypass-Cache: 1" live-mysite.gotpantheon.com
# Options explained
# --spider: Crawl the site without saving the downloaded files locally
# -o wget.log: Write the log to wget.log
# -e robots=off: Ignore robots.txt
# -r: Download recursively (follow links)
# -l 5: Recursion depth. E.g. 1 means 'crawl the homepage'; 2 means 'crawl the homepage and every page it links to', and so on.
# -p: Get all images, CSS, etc. needed to display each HTML page
# -S: Print server response headers (captured in the log)
# --header="X-Bypass-Cache: 1": Set a header (this one bypasses Varnish cache)
# live-mysite.gotpantheon.com: URL to start crawling
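A minimal wrapper sketch around the same command (the SITE and LOG values are assumptions; adjust them for your setup) that runs the crawl and then scans the log for 4xx/5xx responses, so a failed warm-up is easy to spot:

#!/usr/bin/env bash
# Cache-warming wrapper: run the spider crawl, then check the log for errors.
SITE="live-mysite.gotpantheon.com"   # assumed start URL, same as above
LOG="wget.log"                       # assumed log path, same as above

wget --spider -o "$LOG" -e robots=off -r -l 5 -p -S \
  --header="X-Bypass-Cache: 1" "$SITE"

# -S writes response headers into the log, so error status lines show up there.
if grep -E "HTTP/[0-9.]+ [45][0-9]{2}" "$LOG" > /dev/null; then
  echo "Warning: some requests returned 4xx/5xx during cache warming." >&2
else
  echo "Cache warm-up finished with no 4xx/5xx responses."
fi

Scheduling the script (e.g. via cron) can keep the cache warm after it expires; the right interval depends on your cache TTL.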