wget spider cache warmer
wget --spider -o wget.log -e robots=off -r -l 5 -p -S --header="X-Bypass-Cache: 1" --limit-rate=124k www.example.com
# Options explained
# --spider: Crawl the site without saving anything locally (behave as a web spider)
# -o wget.log: Keep the log in wget.log
# -e robots=off: Ignore robots.txt
# -r: Recursive download
# -l 5: Depth to search, i.e. 1 means 'crawl the homepage', 2 means 'crawl the homepage and all pages it links to', and so on
# -p: Get all images, etc. needed to display each HTML page
# -S: Print the server response headers
# --limit-rate=124k: Make sure we're crawling and not DoS'ing the site
# www.example.com: URL to start crawling
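For repeated warm-ups it can help to check the crawl afterwards. Below is a minimal wrapper sketch, assuming the same command and log file as above; the grep step that pulls non-200 responses out of the log is an illustration, not part of the original gist.

#!/bin/sh
# Warm the cache, then report any non-200 responses found in the log.
wget --spider -o wget.log -e robots=off -r -l 5 -p -S \
     --header="X-Bypass-Cache: 1" --limit-rate=124k www.example.com
# -S writes each server response's status line to wget.log, so anything
# other than 200 shows up there; count how often each status appears.
grep "HTTP/" wget.log | grep -v " 200 " | sort | uniq -c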
Perfect, thanks! I added --limit-rate=124k just so the server wouldn't get too hot.