wget crawler
Open 3 crawlers recursively, 4 levels deep, and only crawl pages on the thesite.com domain:
wget -r -l4 --spider -D thesite.com http://www.thesite.com &
wget -r -l4 --spider -D thesite.com http://www.thesite.com &
wget -r -l4 --spider -D thesite.com http://www.thesite.com
Alternative to wget: http://aria2.sourceforge.net/
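Note that aria2 is a downloader rather than a recursive crawler, so it wants a prepared list of URLs instead of a start page. A minimal sketch of the aria2 route (urls.txt is an assumed filename with one URL per line):
# fetch every URL in urls.txt, up to 8 downloads at once,
# with up to 4 connections per server
aria2c --input-file=urls.txt --max-concurrent-downloads=8 --max-connection-per-server=4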
Got curious about this...
echo $URL_LIST | xargs -n 1 -P 8 wget -q
seems to be the best way to parallelize without an aria2 dependency...
https://stackoverflow.com/questions/7577615/parallel-wget-in-bash
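A self-contained sketch of that approach (URL_LIST and the URLs in it are placeholders):
# space-separated list of URLs to fetch
URL_LIST="http://www.thesite.com/a http://www.thesite.com/b http://www.thesite.com/c"
# -n 1: pass one URL per wget invocation; -P 8: run up to 8 wget processes in parallel
echo $URL_LIST | xargs -n 1 -P 8 wget -q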
Doesn't this crawl the same pages three times in a row? Or is wget smart enough to split the resources on its own?
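As far as I know, separate wget processes don't coordinate, so the three backgrounded jobs would each re-crawl the same pages. A sketch of splitting the work instead: one spider pass to harvest unique URLs, then a parallel fetch (the grep pattern for pulling URLs out of the spider log is an assumption and may need adjusting for your wget version):
# spider once to discover URLs, dedupe them into urls.txt
wget -r -l4 --spider -D thesite.com http://www.thesite.com 2>&1 \
  | grep -o 'http://[^ ]*' | sort -u > urls.txt
# fetch the deduplicated list with up to 8 parallel wget processes
xargs -n 1 -P 8 wget -q < urls.txt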