import urllib2

def file_exists(location):
    request = urllib2.Request(location)
    request.get_method = lambda: 'HEAD'
    try:
        response = urllib2.urlopen(request)
        return True
    except urllib2.HTTPError:
        return False
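For example (the URL below is just a placeholder):

    exists = file_exists('http://example.com/some/file.pdf')  # True or False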
Updated version I wrote:

import urllib.request
import urllib.error

def is_urlfile(url):
    # Check if an online file exists
    try:
        r = urllib.request.urlopen(url)  # response
        return r.getcode() == 200
    except urllib.error.HTTPError:  # HTTPError lives in urllib.error
        return False
Hello,
I want to fetch a file from a server that requires authentication. I tried this:

    values = {"username": "user", "password": "password"}
    try:
        r = urllib.request.urlopen(url, values)  # response
        return r.getcode() == 200
    except urllib.request.HTTPError:
        return False

but that doesn't work, can you help me please?
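In case it helps: passing the dict straight to urlopen won't work, because the data argument has to be URL-encoded bytes, and most servers expect credentials in an auth header rather than in the request body. Assuming the server uses HTTP Basic Auth (the question doesn't say which scheme it uses), a minimal sketch with urllib might look like this; the function name and the 5-second timeout are just placeholders:

    import urllib.request
    import urllib.error

    def url_exists_with_auth(url, username, password):
        # Hypothetical helper: register the credentials for Basic Auth and
        # let an opener send them when the server asks for them.
        password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        password_mgr.add_password(None, url, username, password)
        opener = urllib.request.build_opener(
            urllib.request.HTTPBasicAuthHandler(password_mgr))
        try:
            return opener.open(url, timeout=5).getcode() == 200
        except (urllib.error.HTTPError, urllib.error.URLError):
            return False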
For all the people saying "doesn't return anything": make sure to include a timeout in the function. I don't really know how to do it with urllib, but I know it is possible with Requests:
import requests

def is_online(url: str) -> bool:
    """
    Checks if the document at a provided URL is online.
    """
    try:
        return requests.head(url, timeout=3).status_code // 100 == 2
    except Exception:
        return False
If it still doesn't work, it might be the web server: some servers simply reject HEAD requests because they are mostly used by bots. In that case use a GET request instead (replace requests.head with requests.get in the code above), but note that it will also download the response body. That shouldn't be a problem in most cases, but if the URL points to a large file, Python will have to load that file into memory, which can take a lot of space (that's exactly why HEAD requests exist, but they're sometimes rejected as I said).
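As a follow-up sketch (not part of the original comment): requests can also send the GET with stream=True, which fetches only the headers until the body is actually read, so a large file never has to be loaded into memory. The function name and the 3-second timeout are illustrative choices:

    import requests

    def is_online_get(url: str) -> bool:
        # Hypothetical GET-based fallback: with stream=True only the headers
        # are fetched here; the body is never read, so it stays on the wire.
        try:
            with requests.get(url, stream=True, timeout=3) as r:
                return r.status_code // 100 == 2
        except Exception:
            return False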
Hi guys, I rewrote this function:
def url_is_alive(url):
    """
    Checks that a given URL is reachable.
    :param url: A URL
    :rtype: bool
    """
    request = urllib.request.Request(url)
    request.get_method = lambda: 'HEAD'  # issue a HEAD request instead of GET
    try:
        urllib.request.urlopen(request)
        return True
    except urllib.error.HTTPError:
        return False
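For example (the URL is just a placeholder, and urllib.request / urllib.error are assumed to be imported as in the snippets above):

    if url_is_alive("https://example.com/robots.txt"):
        print("reachable")

Note that a host that cannot be reached at all raises urllib.error.URLError rather than HTTPError, which the except clause above doesn't catch, so you may want to handle that case too.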