
@UnforeseenOcean
Forked from protrolium/wget.md
Created September 1, 2019 06:39

Revisions

  1. Gavin Gamboa revised this gist Jan 15, 2016. 1 changed file with 3 additions and 0 deletions.
    @@ -75,4 +75,7 @@ To literally get all files except .html etc:<br>
    - -p: page requisites (includes resources like images on each page)
    - -e robots=off: execute the command robots=off as if it were part of the .wgetrc file. This turns off robot exclusion, which means wget ignores robots.txt and robot meta tags (be aware of the implications before using this).

    ### Example 4
    Sometimes you just have to be nice to the server (flags: `-e robots=off --user-agent=Mozilla`)

    `wget -r -A pdf -nd -e robots=off --user-agent=Mozilla site-url`
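Beyond spoofing the user agent, wget also has built-in throttling flags (all documented in the GNU wget manual) for an even gentler crawl; a sketch of Example 4 with them added, keeping site-url as a placeholder:

```shell
# Gentler variant of Example 4. --wait, --random-wait and --limit-rate are
# standard wget flags; "site-url" is still a placeholder, so the command is
# only assembled and printed here, not executed.
cmd='wget -r -A pdf -nd -e robots=off --user-agent=Mozilla --wait=2 --random-wait --limit-rate=200k site-url'
echo "$cmd"
```

--wait=2 pauses two seconds between retrievals, --random-wait varies that pause so the requests look less mechanical, and --limit-rate=200k caps bandwidth at about 200 KB/s.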
  2. Gavin Gamboa revised this gist May 29, 2015. 1 changed file with 2 additions and 0 deletions.
    @@ -61,6 +61,8 @@ To literally get all files except .html etc:<br>

    - - -

    ### Example 3

    `wget -nd -r -l 2 -A jpg,jpeg,png,gif http://t.co`

    - -nd: no directories (save all files to the current directory; -P directory changes the target directory)
  3. Gavin Gamboa revised this gist May 29, 2015. 1 changed file with 13 additions and 0 deletions.
    @@ -59,5 +59,18 @@ All the answers with `-k, -K, -E` etc options probably haven't really understood
    To literally get all files except .html etc:<br>
    `wget -R html,htm,php,asp,jsp,js,py,css -r -l 1 -nd http://yoursite.com`

    - - -

    `wget -nd -r -l 2 -A jpg,jpeg,png,gif http://t.co`

    - -nd: no directories (save all files to the current directory; -P directory changes the target directory)
    - -r -l 2: recursive level 2
    - -A: accepted extensions

    `wget -nd -H -p -A jpg,jpeg,png,gif -e robots=off example.tumblr.com/page/{1..2}`

    - -H: span hosts (wget doesn't download files from different domains or subdomains by default)
    - -p: page requisites (includes resources like images on each page)
    - -e robots=off: execute the command robots=off as if it were part of the .wgetrc file. This turns off robot exclusion, which means wget ignores robots.txt and robot meta tags (be aware of the implications before using this).
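The `{1..2}` in the tumblr command above is handled by the shell, not by wget: bash brace expansion turns it into one URL argument per page before wget ever runs. A quick way to preview what wget will receive:

```shell
# Bash expands page/{1..3} into three separate arguments:
printf '%s\n' example.tumblr.com/page/{1..3}
# → example.tumblr.com/page/1
#   example.tumblr.com/page/2
#   example.tumblr.com/page/3
```

Note that brace expansion is a bash/zsh feature; in plain POSIX sh the braces are passed through literally.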


  4. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 2 additions and 1 deletion.
    @@ -33,6 +33,7 @@ Next, give the download-file-list.txt as argument to wget using -i option as sho

    - - -

    ### Example 1

    This downloaded the entire website for me:<br>

    @@ -42,7 +43,7 @@ This downloaded the entire website for me:<br>

    - - -


    ### Example 2

    I was trying to download zip files linked from Omeka's themes page - pretty similar task. This worked for me:<br>

  5. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 2 additions and 0 deletions.
    @@ -38,6 +38,8 @@ This downloaded the entire website for me:<br>

    `wget --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://site/path/`

    - If the files are off-limits to robots (e.g. to search engines), you also have to add: `-e robots=off`

    - - -


  6. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 3 additions and 1 deletion.
    @@ -34,14 +34,16 @@ Next, give the download-file-list.txt as argument to wget using -i option as sho
    - - -


    This downloaded the entire website for me:
    This downloaded the entire website for me:<br>

    `wget --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://site/path/`

    - - -



    I was trying to download zip files linked from Omeka's themes page - pretty similar task. This worked for me:<br>

    `wget -A zip -r -l 1 -nd http://omeka.org/add-ons/themes/`

    - -A: only accept zip files
  7. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 1 addition and 2 deletions.
    @@ -41,8 +41,7 @@ This downloaded the entire website for me:



    I was trying to download zip files linked from Omeka's themes page - pretty similar task. This worked for me:

    I was trying to download zip files linked from Omeka's themes page - pretty similar task. This worked for me:<br>
    `wget -A zip -r -l 1 -nd http://omeka.org/add-ons/themes/`

    - -A: only accept zip files
  8. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 1 addition and 1 deletion.
    @@ -52,7 +52,7 @@ I was trying to download zip files linked from Omeka's themes page - pretty simi

    All the answers with `-k, -K, -E` etc options probably haven't really understood the question, as those are for rewriting HTML pages to make a local structure, renaming .php files and so on. Not relevant.

    To literally get all files except .html etc:
    To literally get all files except .html etc:<br>
    `wget -R html,htm,php,asp,jsp,js,py,css -r -l 1 -nd http://yoursite.com`


  9. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 21 additions and 3 deletions.
    @@ -28,14 +28,32 @@ URL3
    URL4</pre>

    Next, pass download-file-list.txt to wget via the -i option, as shown below.

    `$ wget -i download-file-list.txt`

    - - -

    - - -


    This downloaded the entire website for me:

    `wget --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://site/path/`

    - - -



    I was trying to download zip files linked from Omeka's themes page - pretty similar task. This worked for me:

    `wget -A zip -r -l 1 -nd http://omeka.org/add-ons/themes/`

    - -A: only accept zip files
    - -r: recurse
    - -l 1: one level deep (ie, only files directly linked from this page)
    - -nd: don't create a directory structure, just download all the files into this directory.

    All the answers with `-k, -K, -E` etc options probably haven't really understood the question, as those are for rewriting HTML pages to make a local structure, renaming .php files and so on. Not relevant.

    To literally get all files except .html etc:
    `wget -R html,htm,php,asp,jsp,js,py,css -r -l 1 -nd http://yoursite.com`



  10. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 9 additions and 0 deletions.
    @@ -30,3 +30,12 @@ URL4</pre>
    Next, pass download-file-list.txt to wget via the -i option, as shown below.

    `$ wget -i download-file-list.txt`

    - - -



    This downloaded the entire website for me:

    `wget --no-clobber --convert-links --random-wait -r -p -E -e robots=off -U mozilla http://site/path/`

  11. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 5 additions and 5 deletions.
    @@ -21,11 +21,11 @@ Following is the command line which you want to execute when you want to downloa
    ### Download Multiple Files / URLs Using Wget -i
    First, store all the download URLs in a text file:

    `$ cat > download-file-list.txt`
    `URL1`
    `URL2`
    `URL3`
    `URL4`
    <pre>$ cat > download-file-list.txt
    URL1
    URL2
    URL3
    URL4</pre>

    Next, pass download-file-list.txt to wget via the -i option, as shown below.

  12. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 13 additions and 0 deletions.
    @@ -17,3 +17,16 @@ Following is the command line which you want to execute when you want to downloa
    - -p : download all files that are necessary to properly display a given HTML page.
    - -convert-links : after the download, convert the links in document for local viewing.
    - -P ./LOCAL-DIR : save all the files and directories to the specified directory.

    ### Download Multiple Files / URLs Using Wget -i
    First, store all the download URLs in a text file:

    `$ cat > download-file-list.txt`
    `URL1`
    `URL2`
    `URL3`
    `URL4`

    Next, pass download-file-list.txt to wget via the -i option, as shown below.

    `$ wget -i download-file-list.txt`
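The two steps above can be sketched end to end. The URLs below are hypothetical placeholders, and the final wget call is left commented out because it needs network access:

```shell
# Step 1: store the URLs, one per line, in a text file.
tmp=$(mktemp -d)
cat > "$tmp/download-file-list.txt" <<'EOF'
http://example.com/file1.pdf
http://example.com/file2.pdf
EOF

# Step 2: hand the list to wget with -i (commented out; requires network access):
# wget -i "$tmp/download-file-list.txt"

wc -l < "$tmp/download-file-list.txt"   # 2 URLs queued
```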
  13. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 10 additions and 1 deletion.
    @@ -1,5 +1,14 @@
    ### Download a Full Website Using wget –mirror
    ### Download Only Certain File Types Using wget -r -A
    You can use this in the following situations:

    - Download all images from a website
    - Download all videos from a website
    - Download all PDF files from a website

    `$ wget -r -A.pdf http://url-to-webpage-with-pdfs/`


    ### Download a Full Website Using wget –mirror
    Following is the command to execute when you want to download a full website and make it available for local viewing.

    ` $ wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL `
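For reference, the GNU wget manual documents `--mirror` as shorthand for four options, so the command above can also be spelled out explicitly; a sketch with the placeholders left as-is:

```shell
# Per the wget manual, --mirror is currently equivalent to:
#   -r                    recurse
#   -N                    time-stamping: skip files not newer than the local copy
#   -l inf                unlimited recursion depth
#   --no-remove-listing   keep .listing files from FTP retrievals
expanded='-r -N -l inf --no-remove-listing'
echo "wget $expanded -p --convert-links -P ./LOCAL-DIR WEBSITE-URL"
```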
  14. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 4 additions and 4 deletions.
    @@ -4,7 +4,7 @@ Following is the command line which you want to execute when you want to downloa

    ` $ wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL `

    -mirror : turn on options suitable for mirroring.
    -p : download all files that are necessary to properly display a given HTML page.
    -convert-links : after the download, convert the links in document for local viewing.
    -P ./LOCAL-DIR : save all the files and directories to the specified directory.
    - -mirror : turn on options suitable for mirroring.
    - -p : download all files that are necessary to properly display a given HTML page.
    - -convert-links : after the download, convert the links in document for local viewing.
    - -P ./LOCAL-DIR : save all the files and directories to the specified directory.
  15. Gavin Gamboa revised this gist May 28, 2015. 1 changed file with 4 additions and 4 deletions.
    @@ -4,7 +4,7 @@ Following is the command line which you want to execute when you want to downloa

    ` $ wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL `

    mirror : turn on options suitable for mirroring.
    -p : download all files that are necessary to properly display a given HTML page.
    convert-links : after the download, convert the links in document for local viewing.
    -P ./LOCAL-DIR : save all the files and directories to the specified directory.
    -mirror : turn on options suitable for mirroring.
    -p : download all files that are necessary to properly display a given HTML page.
    -convert-links : after the download, convert the links in document for local viewing.
    -P ./LOCAL-DIR : save all the files and directories to the specified directory.
  16. Gavin Gamboa created this gist May 28, 2015.
    @@ -0,0 +1,10 @@
    ### Download a Full Website Using wget –mirror

    Following is the command to execute when you want to download a full website and make it available for local viewing.

    ` $ wget --mirror -p --convert-links -P ./LOCAL-DIR WEBSITE-URL `

    –mirror : turn on options suitable for mirroring.
    -p : download all files that are necessary to properly display a given HTML page.
    –convert-links : after the download, convert the links in document for local viewing.
    -P ./LOCAL-DIR : save all the files and directories to the specified directory.