@AO8
Last active May 22, 2025 05:18
Convert an HTML table into a CSV file with Python and BeautifulSoup.
# Adapted from an example in "Web Scraping with Python, 2nd Edition" by Ryan Mitchell.
import csv
from urllib.request import urlopen

from bs4 import BeautifulSoup

html = urlopen("http://en.wikipedia.org/wiki/"
               "Comparison_of_text_editors")
soup = BeautifulSoup(html, "html.parser")

# Grab the first table on the page with class "wikitable"
table = soup.findAll("table", {"class": "wikitable"})[0]
rows = table.findAll("tr")

with open("editors.csv", "wt+", newline="") as f:
    writer = csv.writer(f)
    for row in rows:
        csv_row = []
        for cell in row.findAll(["td", "th"]):
            csv_row.append(cell.get_text())
        writer.writerow(csv_row)
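Since the Wikipedia page can change or be unreachable, the same table-to-CSV logic can be exercised offline on a small inline HTML snippet (the table contents below are invented for illustration, and an in-memory buffer stands in for the file):

```python
import csv
import io

from bs4 import BeautifulSoup

# A tiny stand-in for the Wikipedia page, so the logic runs without a network call.
SAMPLE_HTML = """
<table class="wikitable">
  <tr><th>Editor</th><th>License</th></tr>
  <tr><td>Vim</td><td>Vim License</td></tr>
  <tr><td>Emacs</td><td>GPL</td></tr>
</table>
"""

soup = BeautifulSoup(SAMPLE_HTML, "html.parser")
table = soup.findAll("table", {"class": "wikitable"})[0]

# Write to an in-memory buffer instead of a file for easy inspection.
buffer = io.StringIO()
writer = csv.writer(buffer)
for row in table.findAll("tr"):
    writer.writerow([cell.get_text() for cell in row.findAll(["td", "th"])])

print(buffer.getvalue())
```

Because header cells (`th`) and data cells (`td`) are matched in one `findAll`, the column headings land in the first CSV row with no special casing.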
@ChainSwordCS

Alternatively, starting at Line 10,

    writer = csv.writer(f)
    tables = soup.findAll("table", {"class":"wikitable"})
    for table in tables:
        rows = table.findAll("tr")
        for row in rows:
            csv_row = []
            for cell in row.findAll(["td", "th"]):
                csv_row.append(cell.get_text())
            writer.writerow(csv_row)
I basically just changed Line 10 to iterate over all tables found on the page instead of just the first one, and moved the file-open code outside of the new loop.
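The multi-table variant above can be sketched end-to-end on an inline snippet containing two wikitable tables (sample data invented for illustration):

```python
import csv
import io

from bs4 import BeautifulSoup

# Two matching tables on one page, to exercise the outer loop.
SAMPLE_HTML = """
<table class="wikitable">
  <tr><th>Editor</th></tr>
  <tr><td>Vim</td></tr>
</table>
<table class="wikitable">
  <tr><th>Editor</th></tr>
  <tr><td>Emacs</td></tr>
</table>
"""

soup = BeautifulSoup(SAMPLE_HTML, "html.parser")
buffer = io.StringIO()
writer = csv.writer(buffer)

# One writer, shared across every matching table, so all rows land in one CSV.
for table in soup.findAll("table", {"class": "wikitable"}):
    for row in table.findAll("tr"):
        writer.writerow([cell.get_text() for cell in row.findAll(["td", "th"])])

print(buffer.getvalue())
```

Note that each table contributes its own header row to the output; if the tables share a schema, you may want to skip the header `tr` on every table after the first.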
