# Standard Library
import smtplib
import webbrowser
from time import sleep
from datetime import datetime as dt
from urllib.request import urlopen
from email.mime.text import MIMEText

# Third-party
import pyautogui
from bs4 import BeautifulSoup
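Only the imports of this script survive in the embed. As an illustration of how the smtplib and MIMEText imports above are typically combined, here is a minimal sketch of sending a plain-text alert; the SMTP host, credentials, and addresses are placeholders, not values from the original script.

def send_alert(subject, body, to_addr="you@example.com"):
    # All SMTP details here are placeholders -- swap in your own provider and credentials.
    msg = MIMEText(body)
    msg["Subject"] = subject
    msg["From"] = "bot@example.com"
    msg["To"] = to_addr
    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("bot@example.com", "app-password")
        server.send_message(msg)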
# Python 3.6.4
import random
from time import sleep

import requests
from bs4 import BeautifulSoup

html = requests.get("https://www.python.org/dev/peps/pep-0020/")
soup = BeautifulSoup(html.text, "html.parser")
zen_of_python = soup.find("pre").get_text()
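The gist is cut off after the Zen of Python text is scraped, but the unused random and sleep imports suggest the script goes on to print random aphorisms with a pause between them. A short sketch under that assumption:

lines = [line for line in zen_of_python.splitlines() if line.strip()]
for _ in range(5):
    print(random.choice(lines))   # print a random aphorism
    sleep(3)                      # pause before showing the next one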
# Adapted from example in "Web Scraping with Python, 2nd Edition" by Ryan Mitchell.
import csv
from urllib.request import urlopen
from bs4 import BeautifulSoup

html = urlopen("http://en.wikipedia.org/wiki/"
               "Comparison_of_text_editors")
soup = BeautifulSoup(html, "html.parser")
table = soup.find_all("table", {"class": "wikitable"})[0]
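The embed stops after the first wikitable is located. Since csv is imported, the remainder presumably walks the table rows and writes each cell's text out to a CSV file; a sketch under that assumption (the output filename is made up):

rows = table.find_all("tr")
with open("text_editors.csv", "w", newline="", encoding="utf-8") as csv_file:
    writer = csv.writer(csv_file)
    for row in rows:
        cells = row.find_all(["th", "td"])
        writer.writerow([cell.get_text(strip=True) for cell in cells])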
# Get unlimited access to the smartest writers and biggest ideas by becoming
# a Medium member for just $5 / month. Visit https://medium.com/membership
import sys
import requests
from bs4 import BeautifulSoup

# Build elements of url to get
url = "https://medium.com/search?q="  # replace 'python' with the search keyword of your choice
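The snippet ends right after the base search URL. The sys import suggests the keyword comes from the command line and is appended to the query string before the page is fetched; a sketch under that assumption (Medium's markup changes often, so the anchor-tag scan below is only a guess at what the real script extracts):

keyword = sys.argv[1] if len(sys.argv) > 1 else "python"
response = requests.get(url + keyword)
soup = BeautifulSoup(response.text, "html.parser")
# Print the text of every link that survived into the static HTML.
for link in soup.find_all("a"):
    text = link.get_text(strip=True)
    if text:
        print(text)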
# Adapted from example in Ch.3 of "Web Scraping With Python, Second Edition" by Ryan Mitchell
import re
import requests
from bs4 import BeautifulSoup

pages = set()

def get_links(page_url):
    global pages
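    # The embed is truncated here. In the Chapter 3 example this snippet adapts,
    # the crawler collects every internal /wiki/ link it has not yet visited and
    # recurses into it; the base URL and href pattern below are assumptions, not
    # the author's exact code.
    html = requests.get("http://en.wikipedia.org" + page_url).text
    soup = BeautifulSoup(html, "html.parser")
    for link in soup.find_all("a", href=re.compile("^(/wiki/)")):
        href = link.attrs["href"]
        if href not in pages:
            print(href)
            pages.add(href)    # remember the page so it is crawled only once
            get_links(href)    # recurse into the newly discovered page

get_links("")  # start crawling from the Wikipedia front page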
# Standard Library
import smtplib
import webbrowser
from time import sleep
from datetime import datetime as dt
from urllib.request import urlopen

# Third-party
import pyautogui
from bs4 import BeautifulSoup
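As with the earlier snippet, only the imports are shown here. Purely as an illustration of the kind of browser automation webbrowser and pyautogui allow, here is a minimal sketch; the URL, timing, and filename are placeholders rather than anything from the original script.

url = "https://example.com"           # placeholder target page
webbrowser.open(url)                  # open it in the default browser
sleep(5)                              # give the page time to render
shot = pyautogui.screenshot()         # grab the screen with pyautogui
shot.save("page_" + dt.now().strftime("%Y%m%d_%H%M%S") + ".png")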
# Adapted from example in Ch.3 of "Web Scraping With Python, Second Edition" by Ryan Mitchell
# Make a tax-deductible donation to the Wikimedia Foundation at https://wikimediafoundation.org/wiki/Ways_to_Give
# Takeaway from this program: recursion is at the heart of web crawling. Crawlers retrieve page contents for a URL,
# examine that page for another URL, and retrieve that page, ad infinitum.
import re
import random
import requests
from bs4 import BeautifulSoup
from datetime import datetime as dt
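The code itself is cut off after the imports. Given that random and datetime are imported, this is presumably the "random walk" variant of the Chapter 3 crawler: seed the random number generator, fetch a page, and hop to one randomly chosen internal link at a time. A sketch along those lines (the start URL, base URL, and link pattern are assumptions):

random.seed(dt.now().timestamp())      # different random walk on every run

def get_links(article_url):
    html = requests.get("http://en.wikipedia.org" + article_url).text
    soup = BeautifulSoup(html, "html.parser")
    # Internal article links live under /wiki/ and contain no colon.
    return soup.find("div", {"id": "bodyContent"}).find_all(
        "a", href=re.compile("^(/wiki/)((?!:).)*$"))

links = get_links("/wiki/Python_(programming_language)")
while len(links) > 0:                  # runs until interrupted or a dead end
    next_page = random.choice(links).attrs["href"]
    print(next_page)
    links = get_links(next_page)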
# From Python's Standard Library
from time import sleep
import datetime
import csv

# Third-party
import requests
import bs4
import matplotlib.pyplot as plt

times = []
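The embed stops right after the empty times list. The imports suggest a polling loop that records a timestamp and a scraped value on each pass, logs the samples to CSV, and plots the series with matplotlib. A sketch under that assumption; the URL, selector, sample count, and interval are placeholders.

values = []
for _ in range(10):                                        # ten samples for the sketch
    response = requests.get("https://example.com/metric")  # placeholder URL
    soup = bs4.BeautifulSoup(response.text, "html.parser")
    # Hypothetical selector -- the real page structure is not shown in the gist.
    value = float(soup.find("span", {"class": "value"}).get_text())
    times.append(datetime.datetime.now())
    values.append(value)
    sleep(60)                                              # one sample per minute

with open("samples.csv", "w", newline="") as f:
    csv.writer(f).writerows(zip(times, values))

plt.plot(times, values)
plt.xlabel("time")
plt.ylabel("scraped value")
plt.show()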
import requests
import csv
import re
from bs4 import BeautifulSoup
from datetime import datetime as dt

dow_jones_page = "https://www.bloomberg.com/quote/INDU:IND"
html = requests.get(dow_jones_page).text
soup = BeautifulSoup(html, "html.parser")
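The scraping logic itself is cut off. Given the csv and datetime imports, the script presumably pulls the index value out of the page and logs it with a timestamp; the selector below is only a guess, since Bloomberg's markup has changed repeatedly and is not shown in the gist, so treat this as an illustration rather than the author's code.

# Hypothetical selector: grab the first element whose class name mentions "price".
price_tag = soup.find(class_=re.compile("price", re.IGNORECASE))
if price_tag:
    price = price_tag.get_text(strip=True)
    with open("dow_jones.csv", "a", newline="") as f:
        csv.writer(f).writerow([dt.now().isoformat(), price])
else:
    print("Could not locate the index value on the page.")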
import string
from time import sleep

alphabet = string.ascii_lowercase  # "abcdefghijklmnopqrstuvwxyz"

def decrypt():
    print("Welcome to Caesar Cipher Decryption.\n")
    encrypted_message = input("Enter the message you would like to decrypt: ").strip()
    print()
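    # The gist is truncated here. A plausible continuation, given that no key is
    # requested from the user: brute-force all 26 shifts and print each candidate
    # with a short pause, so the one readable message can be picked out by eye.
    for shift in range(26):
        candidate = ""
        for char in encrypted_message.lower():
            if char in alphabet:
                # Shift the letter back by `shift` positions, wrapping around at "z".
                candidate += alphabet[(alphabet.index(char) - shift) % 26]
            else:
                candidate += char   # leave spaces and punctuation untouched
        print(f"Shift {shift:2}: {candidate}")
        sleep(0.2)

decrypt()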