$ wget -e robots=off -r -np -w1 --content-on-error 'http://example.com/folder/'
- -e robots=off executes the setting as if it were in .wgetrc, telling wget to ignore robots.txt (and rel=nofollow) for the whole run
- -r turns on recursive retrieval, so wget follows links on each page it downloads (to a default depth of 5)
- -np (--no-parent) stops it from ascending to the parent directory, so only links at or below /folder/ are followed
- -w1 waits 1 second between downloads, to go easy on the server
- --content-on-error saves the body of the server's response even when it comes back as an HTTP error (e.g. the text of a 404 page) instead of skipping it
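
If the transfer gets interrupted, or you only want certain file types, the same command extends naturally. The flags below are standard wget options, though the 'pdf,zip' accept list is just an illustration:

$ wget -e robots=off -r -np -w1 -c --random-wait -A 'pdf,zip' 'http://example.com/folder/'
- -c (--continue) resumes partially downloaded files instead of re-fetching them from scratch
- --random-wait varies the pause between 0.5x and 1.5x the -w value, so the requests look less mechanical
- -A 'pdf,zip' keeps only files whose names end in one of those suffixes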