#Actual data: http://py4e-data.dr-chuck.net/comments_24964.html (Sum ends with 73)
from urllib import request
from bs4 import BeautifulSoup

html = request.urlopen('http://python-data.dr-chuck.net/comments_24964.html').read()
soup = BeautifulSoup(html, 'html.parser')  # specify the parser explicitly
tags = soup('span')
total = 0  # avoid shadowing the built-in sum()
for tag in tags:
    total = total + int(tag.contents[0])
print(total)
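Since the assignment page may change or go offline, the span-summing logic above can be checked against an inline HTML snippet. The sample markup below is made up for illustration; it only mimics the shape of the assignment page:

```python
from bs4 import BeautifulSoup

# Hypothetical sample markup resembling the assignment page
html = """
<tr><td>Romina</td><td><span class="comments">97</span></td></tr>
<tr><td>Laurie</td><td><span class="comments">2</span></td></tr>
<tr><td>Bayli</td><td><span class="comments">1</span></td></tr>
"""

soup = BeautifulSoup(html, 'html.parser')
# soup('span') is shorthand for soup.find_all('span');
# tag.contents[0] is the text node inside each span
total = sum(int(tag.contents[0]) for tag in soup('span'))
print(total)  # 97 + 2 + 1 = 100
```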
For Windows users: if the code above does not work for you, follow the instructions given by the instructor in the discussion forum: https://www.coursera.org/learn/python-network-data/discussions/forums/G0TMJ6G0EeqqMhL7huUnrQ/threads/Fi07MzG0EeymZRIVts3h3w
Thank you for your help; it works.
I tried it in VS Code and got a traceback; now in Jupyter the result is always 0.
Uninstall the zip folder and the extracted bs4 folder, then reinstall it from your command prompt by typing:
pip install beautifulsoup4
Here is how you can solve this. Working code below 👍 READ ME: copy the actual data URL, run the file from cmd/terminal, and then paste the URL into the terminal when prompted, like so:
#!/bin/python3
from urllib.request import urlopen
from bs4 import BeautifulSoup
import ssl

# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

# Leave the input empty for now; paste the URL after running the file in cmd or terminal.
url = input("")
html = urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, "html.parser")

spans = soup('span')
numbers = []
for span in spans:
    numbers.append(int(span.string))

print(sum(numbers))
this is the best way to answer. run it in terminal. thank you
import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
import ssl
import re
# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
url = input('ENTER URL:') #http://py4e-data.dr-chuck.net/comments_1692181.html
fhand = urllib.request.urlopen(url,context=ctx).read()
soup = BeautifulSoup(fhand,'html.parser')
#print(soup)
# Retrieve all of the span tags
tags = soup('span')
lst = list()
for tag in tags:
    tag = str(tag)
    #print(tag)
    tag2 = re.findall('[0-9]+', tag)
    tag3 = int(tag2[0])
    lst.append(tag3)
#print(lst)
total = sum(lst)
print(total)
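To see what the re.findall step above actually returns, it can be tried on a single rendered span tag. The tag text here is an assumed example, not taken from the real page:

```python
import re

# Assumed example of one span tag as rendered by str(tag)
tag = '<span class="comments">97</span>'
nums = re.findall('[0-9]+', tag)
print(nums)          # ['97'] -- findall returns a list of strings
print(int(nums[0]))  # 97
```

Because findall returns strings, the int() conversion is still needed before summing.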