Last active December 10, 2024 16:29
Scrape An Entire Website with wget
This worked very nicely for a single-page site:

```
wget \
  --recursive \
  --page-requisites \
  --convert-links \
  [website]
```
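A minimal sketch of wrapping the command above so the `[website]` placeholder must be filled in before anything runs; the `mirror_page` name and the argument check are my own additions, not part of the gist or of wget itself:

```shell
# Hypothetical convenience wrapper around the wget command above.
mirror_page() {
  if [ -z "$1" ]; then
    # Refuse to run without a target URL.
    echo "usage: mirror_page <url>" >&2
    return 1
  fi
  wget --recursive --page-requisites --convert-links "$1"
}

# Example (downloads into a directory named after the host):
# mirror_page https://example.com
```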
wget options

```
wget \
  --recursive \
  --no-clobber \
  --page-requisites \
  --html-extension \
  --convert-links \
  --restrict-file-names=windows \
  --domains website.org \
  --no-parent \
  www.website.com
```

- `--recursive`: download the entire website.
- `--domains website.org`: don't follow links outside website.org.
- `--no-parent`: don't ascend to the parent directory when retrieving recursively.
- `--page-requisites`: get all the elements that compose the page (images, CSS, and so on).
- `--html-extension`: save files with the `.html` extension.
- `--convert-links`: convert links so that they work locally, offline.
- `--restrict-file-names=windows`: modify filenames so that they also work on Windows.
- `--no-clobber`: don't overwrite existing files (useful when an interrupted download is resumed).
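As an aside (not in the original notes): the wget manual defines `--mirror` as shorthand for `-r -N -l inf --no-remove-listing`, so much of the option list above can be compressed. A sketch that assembles the equivalent command as a string for review before running; the helper name and example host are placeholders of my own:

```shell
# Build an equivalent command using wget's --mirror shorthand
# (--mirror = -r -N -l inf --no-remove-listing per the wget manual).
mirror_site_cmd() {
  echo "wget --mirror --page-requisites --convert-links --no-parent $1"
}

mirror_site_cmd "www.website.com"
```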
There is also [node-wget](https://github.com/wuchengwei/node-wget).
These look like just initial notes on wget options, not a script.
Can you scrape this one?
https://roadmap.sh/frontend?r=frontend-beginner
It comes back empty every time I try to scrape it.
Hi,
what would be the right syntax for using this script?