Created July 22, 2019 20:38
Code to Extract Links of all the Pages in a Website
import re
import urllib.request as ur
from bs4 import BeautifulSoup

# Fetch the page and parse it, passing an explicit parser so
# BeautifulSoup does not emit a "no parser specified" warning.
html_page = ur.urlopen("http://upscanthro.com/questions/")
soup = BeautifulSoup(html_page, "html.parser")

# Collect every <a> tag whose href is an absolute http:// link
# and print each matching URL.
links = []
for link in soup.find_all('a', attrs={'href': re.compile("^http://")}):
    links.append(link)
    print(link.get('href'))
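The same href filter can be reproduced with only the standard library, which is handy when BeautifulSoup is not installed. The sketch below is a minimal stand-in, not the gist's method: it parses a made-up `sample_html` string (instead of fetching the gist's URL over the network) with `html.parser.HTMLParser` and keeps only anchors whose href starts with `http://`, mirroring the regex used above.

```python
import re
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects href values of <a> tags whose URL starts with http://."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; value may be None.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and re.match("^http://", value or ""):
                    self.links.append(value)

# Hypothetical page content standing in for a fetched document.
sample_html = """
<html><body>
<a href="http://example.com/page1">Page 1</a>
<a href="http://example.com/page2">Page 2</a>
<a href="/relative/link">Relative link</a>
<a href="https://secure.example.com/">Secure link</a>
</body></html>
"""

parser = LinkExtractor()
parser.feed(sample_html)
print(parser.links)  # only the absolute http:// links match
```

Note that the `^http://` pattern deliberately skips relative links and, as a side effect, `https://` links as well; widen the regex (e.g. `^https?://`) if secure URLs should also be collected.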
It helps extract all the page links of a website.