ffmpeg -i input.mp4 -c:v dnxhd -profile:v dnxhr_hq output.mxf

The -profile:v output option is required to select the DNxHR profile, such as -profile:v dnxhr_hq.
Accepted values for -profile:v are: dnxhd, dnxhr_444, dnxhr_hqx, dnxhr_hq, dnxhr_sq, dnxhr_lb.
- DNxHR LB: dnxhr_lb - Low Bandwidth. 8-bit 4:2:2 (yuv422p). Offline quality.
- DNxHR SQ: dnxhr_sq - Standard Quality. 8-bit 4:2:2 (yuv422p). Suitable as a delivery format.
- DNxHR HQ: dnxhr_hq - High Quality. 8-bit 4:2:2 (yuv422p).
- DNxHR HQX: dnxhr_hqx - High Quality. 10-bit 4:2:2 (yuv422p10le). UHD/4K broadcast-quality delivery.
- DNxHR 444: dnxhr_444 - Finishing Quality. 10-bit 4:4:4 (yuv444p10le). Cinema-quality delivery.
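The 10-bit profiles need a matching pixel format, so it can help to set -pix_fmt explicitly. A minimal sketch for DNxHR HQX (file names are placeholders):

ffmpeg -i input.mp4 -c:v dnxhd -profile:v dnxhr_hqx -pix_fmt yuv422p10le output.mxf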
ffmpeg -i input.mp4 -c:v prores_ks -profile:v 2 output.mov

The -profile switch takes an integer from -1 to 5 to match the ProRes profiles:
- -1: auto (default)
- 0: proxy ≈ 45 Mbps YUV 4:2:2
- 1: lt ≈ 102 Mbps YUV 4:2:2
- 2: standard ≈ 147 Mbps YUV 4:2:2
- 3: hq ≈ 220 Mbps YUV 4:2:2
- 4: 4444 ≈ 330 Mbps YUVA 4:4:4:4
- 5: 4444xq ≈ 500 Mbps YUVA 4:4:4:4
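If you need to keep an alpha channel, profile 4 (4444) together with an explicit pixel format should work; a sketch (file names are placeholders):

ffmpeg -i input.mov -c:v prores_ks -profile:v 4 -pix_fmt yuva444p10le output.mov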
ffmpeg -i input.mp4 -c:v cfhd -quality 8 output.mov

The -quality option accepts the following values:
-quality <int> set quality (from 0 to 12) (default film3+)
film3+ 0
film3 1
film2+ 2
film2 3
film1.5 4
film1+ 5
film1 6
high+ 7
high 8
medium+ 9
medium 10
low+ 11
low 12

ffmpeg -i input.mov -pix_fmt yuv420p -c:v libx264 -c:a copy -crf 22 -preset veryslow output.mp4

- Choose a CRF value
- The range of the CRF scale is 0–51, where 0 is lossless, 23 is the default and 51 is worst quality possible.
- A lower value generally leads to higher quality, and a subjectively sane range is 17–28.
- The range is exponential, so increasing the CRF value +6 results in roughly half the bitrate/file size, while -6 leads to roughly twice the bitrate.
- Choose the highest CRF value that still provides an acceptable quality. If the output looks good, then try a higher value. If it looks bad, choose a lower value.
- Choose a preset
- A preset is a collection of options that will provide a certain encoding speed to compression ratio. A slower preset will provide better compression (compression is quality per filesize).
- Use the slowest preset that you have patience for. The available presets in descending order of speed are:
ultrafast, superfast, veryfast, faster, fast, medium (default preset), slow, slower, veryslow, placebo – ignore this as it is not useful (see FAQ).
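A practical way to settle on a CRF value and preset is to encode only a short excerpt and inspect it instead of the whole file. A sketch, assuming a representative section starts at two minutes (file names and values are only examples):

ffmpeg -ss 00:02:00 -i input.mov -t 30 -pix_fmt yuv420p -c:v libx264 -crf 24 -preset slow -an test_crf24.mp4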
FFmpeg Wiki: H.264 Video Encoding Guide
ffmpeg -i input.mp4 -pix_fmt yuv420p10le -c:v libx265 -c:a copy -crf 26 -preset slow output.mov

- Choose a CRF
CRF affects the quality. The default is 28, and it should visually correspond to libx264 video at CRF 23, but result in about half the file size. CRF works just like in x264, so choose the highest value that provides an acceptable quality.
- Choose a preset
The default is medium. The preset determines compression efficiency and therefore affects encoding speed. Valid presets are ultrafast, superfast, veryfast, faster, fast, medium, slow, slower, veryslow, and placebo. Use the slowest preset you have patience for. Ignore placebo as it provides insignificant returns for a significant increase in encoding time.
FFmpeg Wiki: H.265/HEVC Video Encoding Guide
ffmpeg -i input.mkv -c:v libvpx-vp9 -pass 1 -pix_fmt yuv420p10le -profile:v 2 -lag-in-frames 25 -crf 25 -b:v 0 -g 240 -cpu-used 4 -tile-rows 0 -tile-columns 1 -row-mt 1 -f null NUL
ffmpeg -i input.mkv -c:v libvpx-vp9 -pass 2 -pix_fmt yuv420p10le -profile:v 2 -lag-in-frames 25 -crf 25 -b:v 0 -g 240 -cpu-used 4 -tile-rows 0 -tile-columns 1 -row-mt 1 output.webm

- Two-pass is the recommended encoding method for libvpx-vp9, as some quality-enhancing encoder features are only available in 2-pass mode.
- For two-pass, you need to run ffmpeg twice, with almost the same settings, except for:
  - In pass 1 and 2, use the -pass 1 and -pass 2 options, respectively.
  - In pass 1, output to a null file descriptor, not an actual file. (This will generate a logfile that ffmpeg needs for the second pass.)
  - In pass 1, you can leave audio out by specifying -an.
- Constant quality 2-pass is invoked by setting -b:v to zero and specifying a quality level using the -crf switch:

ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 1 -an -f null NUL && ^
ffmpeg -i input.mp4 -c:v libvpx-vp9 -b:v 0 -crf 30 -pass 2 -c:a libopus output.webm
FFmpeg Wiki: VP9 Encoding Guide
Tuning libvpx-vp9 to be more efficient - r/AV1
# libaom
ffmpeg -i input.mp4 -c:v libaom-av1 -pass 1 -pix_fmt yuv420p10le -cpu-used 6 -crf 20 -b:v 0 -g 300 -lag-in-frames 35 -f null NUL
ffmpeg -i input.mp4 -c:v libaom-av1 -pass 2 -pix_fmt yuv420p10le -cpu-used 6 -crf 20 -b:v 0 -g 300 -lag-in-frames 35 output.mkv
# rav1e
ffmpeg -i input.mp4 -c:v librav1e -pix_fmt yuv420p10le -speed 6 -qp 60 output.mkv
# SVT-AV1
ffmpeg -i input.mp4 -c:v libsvtav1 -pix_fmt yuv420p10le -preset 6 -rc 0 -qp 25 output.mkv

(This only applies to libaom.) As with libvpx-vp9, using 2-pass mode is recommended as it enables some fancy options, like better adaptive keyframe placement and better ARNR frame decisions, alongside better rate control.
-cpu-used sets how efficient the compression will be. Default is 1. Lower values mean slower encoding with better quality, and vice versa. Valid values are from 0 to 8 inclusive. To enable fast decoding performance, also add tiles (e.g. -tiles 4x1 or -tiles 2x2 for 4 tiles).
By default, libaom's maximum keyframe interval is 9999 frames. This can lead to slow seeking, especially with content that has few or infrequent scene changes. The -g option can be used to set the maximum keyframe interval. Anything up to 10 seconds is considered reasonable for most content, so for 30 frames per second content one would use -g 300, for 60 fps content -g 600, etc.
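Putting the above together, a sketch of a single-pass constant quality libaom encode for 60 fps content with tiles enabled (the CRF and cpu-used values are only illustrative):

ffmpeg -i input.mp4 -c:v libaom-av1 -crf 30 -b:v 0 -cpu-used 6 -g 600 -tiles 2x2 -row-mt 1 output.mkv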
FFmpeg Wiki: libaom AV1 Encoding Guide
Making aomenc-AV1/libaom-AV1 the best it can be in a sea of uncertainty - r/AV1
ffmpeg -i input.mp4 -c:v ffv1 -coder 1 -context 1 -g 24 -slices 24 output.mkv

The relevant options used:
| Name | FFmpeg argument | Valid values | Comments |
|---|---|---|---|
| Coder | -coder | 0, 1, 2 | 0=Golomb-Rice, 1=Range Coder, 2=Range Coder (with custom state transition table) |
| Context | -context | 0, 1 | 0=small, 1=large |
| GOP size | -g | integer >= 1 | For archival use, GOP-size should be "1" |
| Slices | -slices | 4, 6, 9, 12, 16, 24, 30 | Each frame is split into this number of slices. This affects multithreading performance, as well as filesize: Increasing the number of slices might speed up performance, but also increases the filesize. |
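For archival use as described in the table (GOP size of 1), a command might look like the sketch below; -level 3 selects FFV1 version 3 and -slicecrc 1 adds per-slice checksums, both of which are extra options not shown in the example above:

ffmpeg -i input.mov -c:v ffv1 -level 3 -coder 1 -context 1 -g 1 -slices 24 -slicecrc 1 output.mkv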
FFmpeg Wiki: FFV1 encoding cheatsheet
You may consider using the fps filter. It won't change the video playback speed.
Example to reduce fps from 59.6 to 30:
ffmpeg -i input.mkv -vf fps=fps=30 output.mkv

Cut from 00:01:00 to 00:03:00 (in the original), using the faster seek.
ffmpeg -ss 00:01:00 -i video.mp4 -t 00:02:00 -c copy cut.mp4

Cut from 00:01:00 to 00:02:00, as intended, using the faster seek.
ffmpeg -ss 00:01:00 -i video.mp4 -to 00:02:00 -c copy -copyts cut.mp4

Cut from 00:01:00 to 00:02:00, as intended, using the slower seek.
ffmpeg -i video.mp4 -ss 00:01:00 -to 00:02:00 -c copy cut.mp4

To extract only a small segment in the middle of a movie, it can be used in combination with -t which specifies the duration, like -ss 60 -t 10 to capture from second 60 to 70. Or you can use the -to option to specify an out point, like -ss 60 -to 70 to capture from second 60 to 70. -t and -to are mutually exclusive. If you use both, -t will be used.
Note: If you specify -ss before -i only, the timestamps will be reset to zero, so -t and -to will have the same effect. If you want to keep the original timestamps, add the -copyts option.
ffmpeg -i in.mp4 -vf "crop=out_w:out_h:x:y" out.mp4

Where the options are as follows:
- out_w is the width of the output rectangle
- out_h is the height of the output rectangle
- x and y specify the top left corner of the output rectangle
- You can refer to the input image size with in_w and in_h, as in the ffplay preview example below. The output width and height can also be used with out_w and out_h.
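If x and y are omitted, the crop filter centers the cropped area on the input, so a simple centered crop can be written as in this sketch (the 640x360 size is just an example):

ffmpeg -i in.mp4 -vf "crop=640:360" out.mp4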
You can take a crop and preview it live with ffplay:
ffplay -i in.mp4 -vf "crop=in_w:in_h-40"

$ cat concat.txt
file '/path/to/file1'
file '/path/to/file2'
file '/path/to/file3'
$ ffmpeg -f concat -safe 0 -i concat.txt -c copy output.mkv

Use this method when you want to avoid a re-encode and your format does not support file-level concatenation (most files used by general users do not support file-level concatenation).
(echo file 'first file.mp4' & echo file 'second file.mp4' ) > list.txt
ffmpeg -safe 0 -f concat -i list.txt -c copy output.mp4

This however requires your clips to have the same codec, resolution, framerate etc. – so it doesn't work with all kinds of heterogeneous sources.
Use this method with formats that support file level concatenation (MPEG-1, MPEG-2 PS, DV). Do not use with MP4.
ffmpeg -i "concat:input1|input2" -c copy output.mkv

This method does not work for many formats, including MP4, due to the nature of these formats and the simplistic concatenation performed by this method.
FFmpeg FAQ: How can I join video files?
FFmpeg Wiki: Concatenating media files
By default, FFmpeg will only take one audio and one video stream. In your case that's taken from the first file only.
You need to map the streams correctly:
ffmpeg -i input.mp4 -i input.mp3 -c copy -map 0:v:0 -map 1:a:0 output.mp4

- The -map options determine which streams from which input are mapped to the output. 0:v:0 is the first video stream of the first file and 1:a:0 is the first audio stream of the second file. The v/a are not strictly necessary, but in case your input files contain multiple streams, they help to disambiguate.
- If your audio stream is longer than the video file, or vice versa, you can use the -shortest option to have ffmpeg stop the conversion when the shorter of the two ends.
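For example, the mapping command above with -shortest added might look like the following sketch; note that with stream copy the cut can only happen on packet boundaries, so the result may overshoot slightly:

ffmpeg -i input.mp4 -i input.mp3 -c copy -map 0:v:0 -map 1:a:0 -shortest output.mp4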
FFmpeg Wiki: Selecting streams with the -map option
ffprobe -v error -select_streams v:0 -show_entries stream=nb_frames -of default=nokey=1:noprint_wrappers=1 input.mp4

- This is a fast method.
- Not all formats (such as Matroska) will report the number of frames, resulting in the output of N/A. See the other methods listed below.
What the ffprobe options mean:
- -v error: This hides "info" output (version info, etc) which makes parsing easier.
- -count_frames: Count the number of frames per stream and report it in the corresponding stream section.
- -select_streams v:0: Select only the video stream.
- -show_entries stream=nb_frames or -show_entries stream=nb_read_frames: Show only the entry for nb_frames or nb_read_frames.
- -of default=nokey=1:noprint_wrappers=1: Set the output format (aka the "writer") to default, do not print the key of each field (nokey=1), and do not print the section header and footer (noprint_wrappers=1). There are shorter alternatives such as -of csv=p=0.
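If nb_frames comes back as N/A (e.g. for Matroska), the slower but more reliable variant mentioned above decodes the stream and reads nb_read_frames instead; a sketch:

ffprobe -v error -count_frames -select_streams v:0 -show_entries stream=nb_read_frames -of default=nokey=1:noprint_wrappers=1 input.mkv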
If you do not have ffprobe you can use ffmpeg instead:
ffmpeg -i input.mkv -map 0:v:0 -c copy -f null -

- This is a somewhat fast method.
- Refer to frame= near the end of the console output.
- Add the -discard nokey input option (before -i) to only count key frames.
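A sketch of the keyframe-only variant, combining the command above with the -discard nokey input option:

ffmpeg -discard nokey -i input.mkv -map 0:v:0 -c copy -f null -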
In this example the input images are sequentially named img001.png, img002.png, img003.png, etc.
ffmpeg -framerate 24 -i img%03d.png output.mp4

If the -framerate option is omitted, the default will input and output 25 frames per second. See Framerates for more info.
For example if you want to start with img126.png then use the -start_number option:
ffmpeg -start_number 126 -i img%03d.png -pix_fmt yuv420p out.mp4

ffmpeg -i "input/Clap.gif" -vsync 0 "temp/frames%d.png"

Output one image every second:
ffmpeg -i input.mp4 -vf fps=1 out%d.png

Output images between 2-6 seconds:
ffmpeg -i in.mp4 -vf select='between(t\,2\,6)' -vsync 0 out%d.png

- Note: Use a backslash (\) to escape the commas to prevent ffmpeg from interpreting the commas as separators for the filters.
- Why -vsync 0 is needed: The image2 muxer, which is used to output image sequences, defaults to CFR. So, when given frames with timestamp differences greater than 1/fps, ffmpeg will duplicate frames to keep CFR. That will happen here between the selection of the t=6 frame and the t=15 frame. -vsync 0 prevents that.
FFmpeg Wiki: Create a thumbnail image every X seconds of the video
If you need to delay video by 3.84 seconds, use a command like this:
ffmpeg.exe -i "movie.mp4" -itsoffset 3.84 -i "movie.mp4" -map 1:v -map 0:a -c copy "movie-video-delayed.mp4"

If you need to delay audio by 3.84 seconds, use a command like this:
ffmpeg.exe -i "movie.mp4" -itsoffset 3.84 -i "movie.mp4" -map 0:v -map 1:a -c copy "movie-audio-delayed.mp4"

- -itsoffset 3.84 -i "movie.mp4" offsets the timestamps of all streams by 3.84 seconds in the input file that follows the option (movie.mp4).
- -map 1:v -map 0:a takes the video stream from the second (delayed) input and the audio stream from the first input - both inputs may of course be the same file.
A more verbose explanation can be found here: http://alien.slackbook.org/blog/fixing-audio-sync-with-ffmpeg/
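-itsoffset also accepts negative values, so a sketch of the opposite correction (playing the audio 3.84 seconds earlier) could look like this, though the exact behaviour with negative offsets can depend on the container:

ffmpeg.exe -i "movie.mp4" -itsoffset -3.84 -i "movie.mp4" -map 0:v -map 1:a -c copy "movie-audio-advanced.mp4"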
- Install Debugmode FrameServer, Vapoursynth and ffmpeg.
- Create a Vapoursynth script named frameserver.vpy by opening a text editor and adding the following (assuming your working directory is D:\Videos\Editing):

import vapoursynth as vs
core = vs.get_core()
input = r'D:\Videos\Editing\output.avi'
video = core.avisource.AVISource(input)
video.set_output()

- Open Premiere, select your sequence, and choose "File > Export > Media" (or Ctrl+M).
- Under "Export Settings" choose Format: DebugMode FrameServer. Give your file the same name and path based on your Vapoursynth script (output.avi in D:\Videos\Editing for this example).
- The FrameServer setup window will appear. Set the "Format" in "Video Output" to YUV2. Choose "Next".
- Now you can pipe your Vapoursynth script into ffmpeg. Type this in cmd:

vspipe frameserver.vpy - --y4m | ffmpeg -i - -c:v libx264 -qp 0 -preset veryfast lossless.mp4
Note: VapourSynth doesn't support audio output, so you would have to export the audio separately.
This slows down one part of a video and keeps the rest as is.
ffmpeg -i input.mp4 `
-filter_complex `
"[0:v]trim=0:2,setpts=PTS-STARTPTS[v1];
[0:v]trim=2:5,setpts=2*(PTS-STARTPTS)[v2];
[0:v]trim=5,setpts=PTS-STARTPTS[v3];
[0:a]atrim=0:2,asetpts=PTS-STARTPTS[a1];
[0:a]atrim=2:5,asetpts=PTS-STARTPTS,atempo=0.5[a2];
[0:a]atrim=5,asetpts=PTS-STARTPTS[a3];
[v1][a1][v2][a2][v3][a3]concat=n=3:v=1:a=1" `
-c:v libx264 -preset slow output.mp4

How does it work?
- The trim and atrim filters cut the video into different parts, from 0–2 seconds, from 2–5, and from 5 to the end.
- In each part a setpts/asetpts filter is applied, which, when using the option PTS-STARTPTS, "resets" the presentation timestamps of each frame in each part, so that they can later be concatenated easily.
- Each part is given an output label (e.g. [v1] through [v3]).
- For the video parts that need to be slowed down, every presentation timestamp is doubled (2*(…)), which effectively halves the speed of the video. If you wanted to speed up the video, use a multiplier lower than 1.
- For audio parts to be sped up / slowed down, use the atempo filter, whose parameter is the speedup factor (e.g., 0.5 = half speed).
- These parts are finally concatenated using the concat filter.
Notes about concat:
- n: Set the number of segments. Default is 2.
- v: Set the number of output video streams, that is also the number of video streams in each segment. Default is 1.
- a: Set the number of output audio streams, that is also the number of audio streams in each segment. Default is 0.
There are a variety of methods to do this. Here are three:
"trim=start='00\:00\:01.23':end='00\:00\:04.56'"
"trim=start=00\\\:00\\\:01.23:end=00\\\:00\\\:04.56"
trim=start=00\\\\:00\\\\:01.23:end=00\\\\:00\\\\:04.56
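As a sketch of the first variant in context (the whole filter in shell double quotes), with setpts=PTS-STARTPTS appended to reset the timestamps after trimming and -an added to drop the untrimmed audio:

ffmpeg -i input.mp4 -vf "trim=start='00\:00\:01.23':end='00\:00\:04.56',setpts=PTS-STARTPTS" -an output.mp4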
