from bs4 import BeautifulSoup
import requests

def getPlaylistLinks(url):
    # Fetch the playlist page and parse it
    sourceCode = requests.get(url).text
    soup = BeautifulSoup(sourceCode, 'html.parser')
    domain = 'https://www.youtube.com'
    # Video titles are anchors with dir="ltr"; keep only the /watch? links
    for link in soup.find_all("a", {"dir": "ltr"}):
        href = link.get('href')
        if href and href.startswith('/watch?'):
            print(link.string.strip())
            print(domain + href + '\n')

getPlaylistLinks('Playlist url')
Can the code be modified to select more than 100 videos? The playlist I'm working with has about 300 and is constantly growing.
It can't scrape more than 100 links at a time because of YouTube's infinite scroll. Notice that if you go to the playlist link and scroll down, it stops and loads the next page. You'll have to use BeautifulSoup in conjunction with Selenium to get more than 100 links.
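A minimal sketch of the Selenium approach described above: drive a real browser, keep scrolling until the page height stops growing (i.e. the infinite scroll has loaded everything), then hand the rendered page source to BeautifulSoup. This assumes Selenium and a local ChromeDriver are installed; the function names are mine, and the scroll heuristic may need tweaking as YouTube's markup changes.

```python
import time
from bs4 import BeautifulSoup

def extract_watch_links(html, domain="https://www.youtube.com"):
    """Pull (title, url) pairs for every /watch? anchor in the page source."""
    soup = BeautifulSoup(html, "html.parser")
    results = []
    for a in soup.find_all("a"):
        href = a.get("href") or ""
        if href.startswith("/watch?"):
            results.append((a.get_text().strip(), domain + href))
    return results

def get_all_playlist_links(url, pause=2.0, max_rounds=60):
    """Scroll the playlist page in a real browser until no more videos load,
    then parse the fully rendered source with BeautifulSoup."""
    # Lazy import: requires `pip install selenium` plus a ChromeDriver binary.
    from selenium import webdriver
    from selenium.webdriver.common.by import By
    from selenium.webdriver.common.keys import Keys

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        last_height = driver.execute_script(
            "return document.documentElement.scrollHeight")
        for _ in range(max_rounds):
            # Jump to the bottom; the page then fetches the next batch.
            driver.find_element(By.TAG_NAME, "body").send_keys(Keys.END)
            time.sleep(pause)  # give the next batch time to load
            new_height = driver.execute_script(
                "return document.documentElement.scrollHeight")
            if new_height == last_height:  # height stopped growing: done
                break
            last_height = new_height
        return extract_watch_links(driver.page_source)
    finally:
        driver.quit()
```

Usage would mirror the original script: call `get_all_playlist_links('Playlist url')` with your playlist URL and print each title/link pair.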
What do you mean by "in conjunction with Selenium"? How exactly should you use Selenium to solve the infinite scroll problem?
I'm working on a playlist with more than 100 videos, and I need to solve this so that I can order the downloaded videos according to the playlist by adding numbers.
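On the numbering point: once the links are collected in playlist order (however you obtain them), prefixing each title with its 1-based position is enough to make downloaded files sort correctly. A minimal sketch with made-up titles:

```python
# Hypothetical playlist, as (title, url) pairs scraped top to bottom.
links = [
    ("First Video", "https://www.youtube.com/watch?v=aaa"),
    ("Second Video", "https://www.youtube.com/watch?v=bbb"),
    ("Third Video", "https://www.youtube.com/watch?v=ccc"),
]

# Zero-padded prefixes so filenames sort in playlist order.
numbered = [f"{i:03d} - {title}" for i, (title, _) in enumerate(links, start=1)]
for name in numbered:
    print(name)
```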
Made a quick fork: https://gist.github.com/Axeltherabbit/5b147d508faf1b5cd735a52bd916b1e4 (No, it doesn't solve the 100-video problem.)
Note that it can only gather 100 videos for large playlists. Also, you need to rename the file before running it, because it raises an import error otherwise.