#Actual data: http://py4e-data.dr-chuck.net/comments_24964.html (Sum ends with 73)
from urllib import request
from bs4 import BeautifulSoup

html = request.urlopen('http://python-data.dr-chuck.net/comments_24964.html').read()
soup = BeautifulSoup(html, 'html.parser')

tags = soup('span')
total = 0
for tag in tags:
    total = total + int(tag.contents[0])
print(total)
Enter - http://py4e-data.dr-chuck.net/comments_1585556.html
Traceback (most recent call last):
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\urllink2.py", line 17, in <module>
    soup = BeautifulSoup(html, "html.parser")
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\__init__.py", line 215, in __init__
    self.feed()
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\__init__.py", line 239, in feed
    self.builder.feed(self.markup)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\builder\_htmlparser.py", line 164, in feed
    parser.feed(markup)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1520.0_x64__qbz5n2kfra8p0\lib\html\parser.py", line 110, in feed
    self.goahead(0)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1520.0_x64__qbz5n2kfra8p0\lib\html\parser.py", line 170, in goahead
    k = self.parse_starttag(i)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1520.0_x64__qbz5n2kfra8p0\lib\html\parser.py", line 344, in parse_starttag
    self.handle_starttag(tag, attrs)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\builder\_htmlparser.py", line 62, in handle_starttag
    self.soup.handle_starttag(name, None, None, attr_dict)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\__init__.py", line 404, in handle_starttag
    self.currentTag, self._most_recent_element)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\element.py", line 1001, in __getattr__
    return self.find(tag)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\element.py", line 1238, in find
    l = self.find_all(name, attrs, recursive, text, 1, **kwargs)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\element.py", line 1259, in find_all
    return self._find_all(name, attrs, text, limit, generator, **kwargs)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\element.py", line 516, in _find_all
    strainer = SoupStrainer(name, attrs, text, **kwargs)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\element.py", line 1560, in __init__
    self.text = self._normalize_search_value(text)
  File "C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e\bs4\element.py", line 1565, in _normalize_search_value
    if (isinstance(value, str) or isinstance(value, collections.callable) or hasattr(value, 'match')
AttributeError: module 'collections' has no attribute 'callable'

C:\Users\EmadHamdouna\Desktop\Python for Everybody\py4e>
This is the traceback I get when I try to run the following program:
# To run this, download the BeautifulSoup zip file
# http://www.py4e.com/code3/bs4.zip
# and unzip it in the same directory as this file

#from urllib.request import urlopen
#import re
import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
import ssl

# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

url = input('Enter - ')
html = urllib.request.urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, "html.parser")

# Retrieve all of the span tags
sum = 0
tags = soup('span')
for tag in tags:
    # Look at the parts of a tag
    #print('TAG:', tag)
    #print('URL:', tag.get('href', None))
    #print('Contents:', tag.contents[0])
    #print('Attrs:', tag.attrs)
    sum = sum + int(tag.contents[0])
print(sum)
Can anyone help?
Same question. Anyone can help?
I used the link below to install Beautiful Soup for Python 3. Mine is a Mac.
https://www.geeksforgeeks.org/how-to-install-beautifulsoup-in-python-on-macos/
@Mmcqueary Update: finally, it worked via pynative.com. Thanks a ton for your help.
Hi, can you please tell me how you fixed the error? I've been getting the same one, even though the bs4 folder is in the same directory as the Python file.
For those who have been getting `AttributeError: module 'collections' has no attribute 'Callable'`: add these two lines after your last import to solve the issue. This was the solution I found after exploring several forums for this problem.
import collections
collections.Callable = collections.abc.Callable
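The reason this works: Python 3.3 moved the abstract base classes into `collections.abc`, and Python 3.10 removed the old `collections.Callable` alias that the bundled BeautifulSoup 3 code still references. A minimal sketch of the shim, which must run before the parsing code executes:

```python
import collections
import collections.abc

# Python 3.10+ removed the collections.Callable alias; restore it so
# the old bundled bs4 code can still reach it.
if not hasattr(collections, 'Callable'):
    collections.Callable = collections.abc.Callable

# The isinstance check the old bs4 code performs now succeeds.
print(isinstance(len, collections.Callable))  # True
```

The `hasattr` guard makes the shim a no-op on Python 3.9 and earlier, where the alias still exists.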
Scraping Numbers from HTML using BeautifulSoup

In this assignment you will write a Python program similar to http://www.py4e.com/code3/urllink2.py. The program will use urllib to read the HTML from the data files below, parse the data, extract the numbers, and compute the sum of the numbers in the file.
We provide two files for this assignment. One is a sample file where we give you the sum for your testing and the other is the actual data you need to process for the assignment.
Sample data: http://py4e-data.dr-chuck.net/comments_42.html (Sum=2553)
Actual data: http://py4e-data.dr-chuck.net/comments_1688329.html (Sum ends with 69)
You do not need to save these files to your folder since your program will read the data directly from the URL. Note: Each student will have a distinct data url for the assignment - so only use your own data url for analysis.
Data Format
The file is a table of names and comment counts. You can ignore most of the data in the file except for lines like the following:
...
# Retrieve all of the anchor tags
tags = soup('a')
for tag in tags:
    # Look at the parts of a tag
    print('TAG:', tag)
    print('URL:', tag.get('href', None))
    print('Contents:', tag.contents[0])
    print('Attrs:', tag.attrs)
You need to adjust this code to look for span tags instead, pull out the text content of each span tag, convert it to an integer, and add the integers up to complete the assignment.
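As a sketch of that adjustment, with an inline HTML snippet standing in for your data URL (the row layout below is an assumption about the comments file, not its exact markup):

```python
from bs4 import BeautifulSoup

# Hypothetical stand-in for the HTML your data URL returns
html = '''<table>
<tr><td>Modu</td><td><span class="comments">90</span></td></tr>
<tr><td>Kenzie</td><td><span class="comments">88</span></td></tr>
</table>'''

soup = BeautifulSoup(html, 'html.parser')
tags = soup('span')                           # span tags instead of anchor tags
numbers = [int(tag.contents[0]) for tag in tags]

print('Count', len(numbers))  # Count 2
print('Sum', sum(numbers))    # Sum 178
```

For the real assignment, replace the inline string with the bytes read from your own data URL.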
Sample Execution
$ python3 solution.py
Enter - http://py4e-data.dr-chuck.net/comments_42.html
Count 50
Sum 2...
Turning in the Assignment
Enter the sum from the actual data and your Python code below:
Sum:
(ends with 69)
Can anyone solve this problem for me? I am so confused.
I work with PyCharm using Python 3.11 and encountered a similar issue after installing bs4.
I implemented @Nikowos solution and it works! Thanks.
Here is how you can solve this. Working code below 👍

READ ME: copy your actual data URL, run the file from cmd/terminal, and then paste the URL into the terminal, like so:
#! /bin/python3
from urllib.request import urlopen
from bs4 import BeautifulSoup
import ssl
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
# Leave the url empty for now. Paste the url after running the file in cmd or terminal.
url = input("")
html = urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, "html.parser")
spans = soup('span')
numbers = []
for span in spans:
    numbers.append(int(span.string))
print(sum(numbers))
This is the error I am getting; can anybody help?
Notes Regarding the Use of BeautifulSoup
The sample code for this course and textbook examples use BeautifulSoup to parse HTML.
Using BeautifulSoup 4 with Python 3.10 or Python 3.11
Instructions for Windows 10:
- pip install beautifulsoup4 (run this command)
- if the bs4.zip file was downloaded, delete it

Instructions for MacOS:
- pip3 install beautifulsoup4 (run this command)
- if the bs4.zip file was downloaded or you have a bs4 folder, delete it
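After installing, it can help to confirm that Python now resolves bs4 from site-packages rather than a leftover local bs4 folder. A quick sanity check (assumes `pip3`/`python3` on your PATH; on Windows use `pip`/`python`):

```shell
# Install (or confirm) the package from PyPI
pip3 install beautifulsoup4

# Show where the module is imported from; this should point into
# site-packages, not a bs4 folder next to your script
python3 -c "import bs4; print(bs4.__file__)"
```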
Using BeautifulSoup 3 (only for Python 3.8 or Python 3.9)
If you want to use our samples "as is", download our Python 3 version of BeautifulSoup 3 from
http://www.py4e.com/code3/bs4.zip
You must unzip this into a "bs4" folder and have that folder as a sub-folder of the folder where you put our sample code like:
Hello, I tried this for the same question:
#Scraping Numbers from HTML using BeautifulSoup
from urllib.request import urlopen
from bs4 import BeautifulSoup
import ssl
import re
# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
url = input('Enter - ')
html = urlopen(url, context=ctx).read()
soup = BeautifulSoup(html, "html.parser")
# Retrieve all of the span tags
counts = dict()
my_list = list()
tags = soup('span')
for tag in tags:
    # Look at the parts of a tag
    num = str(tag)
    number = re.findall('[0-9]+', num)
    if len(number) != 1:
        continue
    for integer in number:
        integer = int(integer)
        my_list.append(integer)
        counts[integer] = counts.get(integer, 0) + 1
print('Count ', counts)
# or you can say
# print('Count ', len(my_list))
print('Sum ', sum(my_list))
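The `re.findall` call in the code above works because `str(tag)` renders each span back into markup text, and the pattern `'[0-9]+'` pulls out the digit run inside it:

```python
import re

# What str(tag) produces for one row of the comments table
tag_text = '<span class="comments">97</span>'

digits = re.findall('[0-9]+', tag_text)
print(digits)          # ['97']
print(int(digits[0]))  # 97
```

The `if len(number) != 1: continue` guard above skips any tag whose string form does not contain exactly one number.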
Windows users: follow the instructions given by the instructor in the discussion forum; then the code above will work for you.
https://www.coursera.org/learn/python-network-data/discussions/forums/G0TMJ6G0EeqqMhL7huUnrQ/threads/Fi07MzG0EeymZRIVts3h3w
import urllib.request
from bs4 import BeautifulSoup
# Prompt user for URL
url = input('Enter URL: ')
# Read HTML from URL
html = urllib.request.urlopen(url).read()
#Parse the HTML using BeautifulSoup
soup = BeautifulSoup(html, 'html.parser')
# Find all span tags
tags = soup('span')
# Sum up the numbers
total = 0
for tag in tags:
    total += int(tag.contents[0])

# Print the sum
print(total)
import urllib.request, urllib.parse, urllib.error
from bs4 import BeautifulSoup
import ssl
import re
# Ignore SSL certificate errors
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
url = input('ENTER URL:') #http://py4e-data.dr-chuck.net/comments_1692181.html
fhand = urllib.request.urlopen(url,context=ctx).read()
soup = BeautifulSoup(fhand,'html.parser')
#print(soup)
# Retrieve all of the span tags
tags = soup('span')
lst=list()
for tag in tags:
    tag = str(tag)
    #print(tag)
    tag2 = re.findall('[0-9]+', tag)
    tag3 = int(tag2[0])
    lst.append(tag3)
#print(lst)
total = sum(lst)
print(total)
Thank you for your help, it works.
I tried it in VS Code and got a traceback problem, and now in Jupyter the result is always 0.
Delete the bs4 zip file and the extracted bs4 folder, then install it from your command prompt by typing:
pip install beautifulsoup4
This is the best way to answer: run it in the terminal. Thank you.