# Original gist script (Python 2): download a Vimeo video from its master.json
import requests
import base64
from tqdm import tqdm

master_json_url = 'https://178skyfiregce-a.akamaihd.net/exp=1474107106~acl=%2F142089577%2F%2A~hmac=0d9becc441fc5385462d53bf59cf019c0184690862f49b414e9a2f1c5bafbe0d/142089577/video/426274424,426274425,426274423,426274422/master.json?base64_init=1'

# cut the URL back to the last '/' before '/master.json?base64_init=1' (its final
# 26 characters); the segment paths in the JSON are relative to this base
base_url = master_json_url[:master_json_url.rfind('/', 0, -26) + 1]

resp = requests.get(master_json_url)
content = resp.json()

# pick the stream with the largest height (best available quality)
heights = [(i, d['height']) for (i, d) in enumerate(content['video'])]
idx, _ = max(heights, key=lambda (_, h): h)
video = content['video'][idx]
video_base_url = base_url + video['base_url']
print 'base url:', video_base_url

filename = 'video_%d.mp4' % video['id']
print 'saving to %s' % filename

video_file = open(filename, 'wb')

# the init segment is shipped base64-encoded inside the JSON; write it first
init_segment = base64.b64decode(video['init_segment'])
video_file.write(init_segment)

for segment in tqdm(video['segments']):
    segment_url = video_base_url + segment['url']
    resp = requests.get(segment_url, stream=True)
    if resp.status_code != 200:
        print 'not 200!'
        print resp
        print segment_url
        break
    for chunk in resp:
        video_file.write(chunk)

video_file.flush()
video_file.close()
But how do I download when I have only a playlist.m3u8 and no playlist.json (or master.json, as it used to be)?
UPDATE: Got a solution with the following ffmpeg command:

ffmpeg -protocol_whitelist file,http,https,tcp,tls,crypto -i "https://vod-adaptive-ak.vimeocdn.com/exp=1724447363~acl=%2Feb09e983-499f-48e7-9987-a83d1cd1f393%2F%2A~hmac=43f6b6fac6de2ed822b2c4435256879c85f77b6a045cbb86b3fb8dd42e98342c8c/eb09e983-499f-48e7-9987-a83d1cd1f393/v2/playlist/av/primary/playlist.m3u8?locale=pl&omit=av1-opus&pathsig=8c953e4f~5vyN96md7Ianyk-S0EheVDFA6_b8dDQHweA-OGZNMig&qsr=1&rh=1CCyfI&sf=fmp4" -c copy video.mp4
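Since signed links like the one above stop working after a while, here is the same invocation in a reusable form driven from Python. A minimal sketch, not part of the original post: PLAYLIST_URL is a placeholder for whatever playlist.m3u8 link you copy from the browser's Network tab.

import subprocess

# placeholder: paste the playlist.m3u8 URL copied from the Network tab
PLAYLIST_URL = 'https://vod-adaptive-ak.vimeocdn.com/.../playlist.m3u8?...'

# same idea as the ffmpeg one-liner above: stream-copy all segments into an mp4
subprocess.run([
    'ffmpeg',
    '-protocol_whitelist', 'file,http,https,tcp,tls,crypto',
    '-i', PLAYLIST_URL,
    '-c', 'copy',
    'video.mp4',
], check=True)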
That's how I was able to download the videos I wanted.
Got an issue on some videos. I've been downloading protected content using the method described in the quoted post for months, and it is still working today. However, I've recently encountered an issue with a few videos where the link structure, retrieved from the Network tab in the browser, differs and appears to be a playlist link instead, such as:
https://vod-adaptive-ak.vimeocdn.com/exp=1718799796~acl=%2F2f9924f1-1fc4-4376-af68-5d2a3ffeedc7%2F%2A~hmac=9a978d8566fa0647e5f167d11f174cb84cc50f19399d7777179b8719617e7376/2f9924f1-1fc4-4376-af68-5d2a3ffeedc7/v2/playlist/av/primary/playlist.json?omit=av1-hevc&pathsig=8c953e4f~mocADbIwYJxqL3msRkLnDH_vlRmkehQ1vWwYBLrsVn0&qsr=1&rh=4cRiZu
How am I supposed to download this? Or how can I retrieve the real link?
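One quick sanity check before anything heavier: playlist.json can apparently be fetched and parsed the same way as master.json. A minimal sketch, assuming it exposes the same video/height structure that the Python 3 script further down relies on (the URL is a placeholder):

import requests

# placeholder: the playlist.json URL copied from the Network tab
url = 'https://vod-adaptive-ak.vimeocdn.com/.../playlist.json?...'

content = requests.get(url).json()
# if this prints a list of heights, the playlist.json script below should handle it
print([d['height'] for d in content['video']])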
Got an issue: I'd been downloading a bunch of videos for days and then suddenly the one that was downloading ended with this error:

F:\ffmpeg>youtube-dl.exe -o 22-10-23PERF.mp4 https://91vod-adaptive.akamaized.net/exp=1699318792~acl=%2F222d9b05-2ac9-4bf1-a98e-c821e509c8fa%2F%2A~hmac=5924a2ed3f6704c9aa9892d413bbbac8da8b7428ebafad976496229f310d7d8a/222d9b05-2ac9-4bf1-a98e-c821e509c8fa/sep/video/2ed9800f,b18fccd0,cbccf898,e88838cd,effa0196/audio/2deec4db,8c92b620,fd7a357a/master.mpd?query_string_ranges=1
[generic] master: Requesting header
WARNING: Falling back on generic information extractor.
[generic] master: Downloading webpage
[generic] master: Extracting information
[dashsegments] Total fragments: 1069
[download] Destination: 22-10-23PERF.fvideo-2ed9800f.mp4
[download] 8.9% of ~571.81MiB at 549.60KiB/s ETA 23:44
ERROR: unable to download video data: <urlopen error [SSL: TLSV1_ALERT_INTERNAL_ERROR] tlsv1 alert internal error (_ssl.c:600)>
Now when I try ANY video, even ones that downloaded successfully in the past, I end up with this error:
F:\ffmpeg>youtube-dl.exe -o 24-10-23.mp4 https://194vod-adaptive.akamaized.net/exp=1699310038~acl=%2Fb852eef9-3848-48a9-81af-a6bb0c3bb101%2F%2A~hmac=64987b83dd191c8ee0833181540fe12c44f6937f244256cc9fe42a15c4a4ede4/b852eef9-3848-48a9-81af-a6bb0c3bb101/sep/video/4ea580af,58650cf4,9886125d,a4f6fd04,ad9e4799/audio/3de08f2b,a69e0c39,c8ff288c/master.mpd?query_string_ranges=1
[generic] Extracting URL: https://194vod-adaptive.akamaized.net/exp=1699310038~acl=%2Fb852eef9-3848-48a9-81af-a6bb0c3bb101%...uery_string_ranges=1
[generic] master: Downloading webpage
ERROR: [generic] Unable to download webpage: HTTP Error 410: Gone (caused by <HTTPError 410: Gone>); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
And yes, it is up to date:
F:\ffmpeg>youtube-dl.exe -U
Available version: [email protected], Current version: [email protected]
yt-dlp is up to date ([email protected])
Help please?

Solved: this command was an attempt to resume a download that had failed overnight (while I was sleeping), so I retyped the exact same command to resume it. But when I went back to the video to compare the source, the URL had changed. Typing the same command with the fresh URL worked as usual.
F:\ffmpeg>youtube-dl.exe -o 24-10-23.mp4 https://194vod-adaptive.akamaized.net/exp=1699353551~acl=%2Fb852eef9-3848-48a9-81af-a6bb0c3bb101%2F%2A~hmac=75ad5d633886e2d01ef04535372fa4342bd0aa2da02b5d160c9702ea04ebea50/b852eef9-3848-48a9-81af-a6bb0c3bb101/sep/video/4ea580af,58650cf4,9886125d,a4f6fd04,ad9e4799/audio/3de08f2b,a69e0c39,c8ff288c/master.mpd?query_string_ranges=1
[generic] Extracting URL: https://194vod-adaptive.akamaized.net/exp=1699353551~acl=%2Fb852eef9-3848-48a9-81af-a6bb0c3bb101%...uery_string_ranges=1
[generic] master: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] master: Extracting information
[info] master: Downloading 1 format(s): video-9886125d+audio-3de08f2b
[dashsegments] Total fragments: 658
[download] Destination: 24-10-23.fvideo-9886125d.mp4
[download] 37.7% of ~ 458.35MiB at 434.71KiB/s ETA 09:58 (frag 248/658)
My guess is that Vimeo rotates its URL sources every x hours (these bastards).
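The exp= value in these signed links looks like a Unix timestamp (an assumption on my part, but it lines up with the 410 Gone errors above). A quick sketch to check when a copied URL stops being valid:

from datetime import datetime, timezone

# any of the signed URLs from this thread
url = 'https://194vod-adaptive.akamaized.net/exp=1699353551~acl=.../master.mpd?query_string_ranges=1'

# assumption: exp= is the expiry time as a Unix timestamp
exp = int(url.split('exp=')[1].split('~')[0])
print('link expires at', datetime.fromtimestamp(exp, tz=timezone.utc))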
Here is a modified (Python 3) script that handles both playlist.json and master.json link types:

import os
import sys
import base64
import requests
import subprocess
from tqdm import tqdm
from moviepy.editor import *

url = input('enter [master|playlist].json url: ')
name = input('enter output name: ')

# master.json links can simply be handed to youtube-dl as an MPD manifest
if 'master.json' in url:
    url = url[:url.find('?')] + '?query_string_ranges=1'
    url = url.replace('master.json', 'master.mpd')
    print(url)
    subprocess.run(['youtube-dl', url, '-o', name])
    sys.exit(0)

def download(what, to, base):
    print('saving', what['mime_type'], 'to', to)
    with open(to, 'wb') as file:
        init_segment = base64.b64decode(what['init_segment'])
        file.write(init_segment)
        for segment in tqdm(what['segments']):
            segment_url = base + segment['url']
            resp = requests.get(segment_url, stream=True)
            if resp.status_code != 200:
                print('not 200!')
                print(segment_url)
                break
            for chunk in resp:
                file.write(chunk)
    print('done')

name += '.mp4'
base_url = url[:url.rfind('/', 0, -26) + 1]
content = requests.get(url).json()

# pick the highest video resolution and the highest audio bitrate
vid_heights = [(i, d['height']) for (i, d) in enumerate(content['video'])]
vid_idx, _ = max(vid_heights, key=lambda _h: _h[1])
audio_quality = [(i, d['bitrate']) for (i, d) in enumerate(content['audio'])]
audio_idx, _ = max(audio_quality, key=lambda _h: _h[1])

video = content['video'][vid_idx]
audio = content['audio'][audio_idx]

base_url = base_url + content['base_url']

video_tmp_file = 'video.mp4'
audio_tmp_file = 'audio.mp4'

download(video, video_tmp_file, base_url + video['base_url'])
download(audio, audio_tmp_file, base_url + audio['base_url'])

# merge the two streams with moviepy (note: this re-encodes the video)
video_clip = VideoFileClip(video_tmp_file)
audio_clip = AudioFileClip(audio_tmp_file)
video_clip_with_audio = video_clip.set_audio(audio_clip)
video_clip_with_audio.write_videofile(name)

os.remove(video_tmp_file)
os.remove(audio_tmp_file)
This re-encodes the video - is there a way to avoid doing that? I thought ffmpeg could just combine the video and audio files - wouldn't that be easier/more efficient?
Also, how would I change it so that I can set the quality I want to download, i.e. instead of max quality?
This re-encodes the video - is there a way to avoid doing that?
see my previous comment
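For the second question, picking a specific quality instead of the maximum, here is a minimal sketch. It assumes the same `content` dict the script above parses; `target_height` is just an illustrative value:

# sketch: choose a specific quality instead of max quality
# assumes `content` was parsed from playlist.json/master.json as in the script above
target_height = 720  # e.g. 720p

heights = [(i, d['height']) for (i, d) in enumerate(content['video'])]
# take the exact height if available, otherwise the closest one
vid_idx, _ = min(heights, key=lambda ih: abs(ih[1] - target_height))
video = content['video'][vid_idx]
print(f"selected {video['height']}p stream")

Avoiding the re-encode is handled by the script further down, which muxes the two files with ffmpeg stream copy instead of moviepy.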
So I borrowed the modifications from the comments above and added a bit of multithreading. It downloads both video and audio, and accepts playlist.json as well as master.json links (mainly intended for playlist.json).
import os
import sys
import base64
import requests
import subprocess
from concurrent.futures import ThreadPoolExecutor
from tqdm import tqdm
import ffmpeg  # ffmpeg-python, used to mux video and audio without re-encoding

url = input('enter [master|playlist].json url: ')
name = input('enter output name: ')

# master.json links are handed straight to youtube-dl as an MPD manifest
if 'master.json' in url:
    url = url[:url.find('?')] + '?query_string_ranges=1'
    url = url.replace('master.json', 'master.mpd')
    print(url)
    subprocess.run(['youtube-dl', url, '-o', name])
    sys.exit(0)

def download_segment(segment_url, segment_path):
    resp = requests.get(segment_url, stream=True)
    if resp.status_code != 200:
        print('not 200!')
        print(segment_url)
        return
    with open(segment_path, 'wb') as segment_file:
        for chunk in resp:
            segment_file.write(chunk)

def download(what, to, base):
    print('saving', what['mime_type'], 'to', to)
    init_segment = base64.b64decode(what['init_segment'])
    segment_urls = [base + segment['url'] for segment in what['segments']]
    segment_paths = [f"segment_{i}.tmp" for i in range(len(segment_urls))]
    # executor.map preserves input order, so segments can be fetched in parallel
    # and still concatenated in the right order afterwards
    with ThreadPoolExecutor(max_workers=15) as executor:
        list(tqdm(executor.map(download_segment, segment_urls, segment_paths), total=len(segment_urls)))
    with open(to, 'wb') as file:
        file.write(init_segment)
        for segment_path in segment_paths:
            with open(segment_path, 'rb') as segment_file:
                file.write(segment_file.read())
            os.remove(segment_path)
    print('done')

name += '.mp4'
base_url = url[:url.rfind('/', 0, -26) + 1]
content = requests.get(url).json()

# pick the highest resolution video stream and the highest bitrate audio stream
vid_heights = [(i, d['height']) for (i, d) in enumerate(content['video'])]
vid_idx, _ = max(vid_heights, key=lambda _h: _h[1])
audio_quality = [(i, d['bitrate']) for (i, d) in enumerate(content['audio'])]
audio_idx, _ = max(audio_quality, key=lambda _h: _h[1])

video = content['video'][vid_idx]
audio = content['audio'][audio_idx]

base_url = base_url + content['base_url']

video_tmp_file = 'video.mp4'
audio_tmp_file = 'audio.mp4'

download(video, video_tmp_file, base_url + video['base_url'])
download(audio, audio_tmp_file, base_url + audio['base_url'])

def combine_video_audio(video_file, audio_file, output_file):
    # mux the two streams with stream copy (no re-encoding)
    try:
        video_stream = ffmpeg.input(video_file)
        audio_stream = ffmpeg.input(audio_file)
        ffmpeg.output(video_stream, audio_stream, output_file, vcodec='copy', acodec='copy').run(overwrite_output=True)
        print(f"Combined file saved as {output_file}")
    except ffmpeg.Error as e:
        print(f"Error combining files: {e.stderr.decode()}")

combine_video_audio(video_tmp_file, audio_tmp_file, name)

os.remove(video_tmp_file)
os.remove(audio_tmp_file)
To whoever made this:
Thank you so much for fixing all my problems, you are an absolute legend!! Can't stress enough how great the script is. Thanks!!