- Installation
- Conversion
- Slicing Video
- Resize/Scale the resolution
- Speed-related
- Video to Gif
- Getting Information like FPS
- Annotation
- Audio-related
- Image-related
- Joining Two Videos
- Keyframes
- Video Cropping
- Rotation
- References
Table of contents generated with markdown-toc
## Installation

Getting ffmpeg with all the options is a little more complicated these days in Homebrew because the Homebrew team removed all options from the core formula:
```shell
brew update
# if you already have ffmpeg installed
brew uninstall --force --ignore-dependencies ffmpeg
brew install chromaprint amiaopensource/amiaos/decklinksdk
brew tap homebrew-ffmpeg/ffmpeg
# chromaprint depends on ffmpeg also
brew uninstall --force --ignore-dependencies ffmpeg
brew upgrade homebrew-ffmpeg/ffmpeg/ffmpeg $(brew options homebrew-ffmpeg/ffmpeg/ffmpeg | grep -vE '\s' | grep -- '--with-' | grep -vi libzvbi | grep -vi libflite | grep -vi openvino | tr '\n' ' ')
```
A few options are skipped here; see this other gist as to why.
On Ubuntu/Debian:

```shell
sudo apt update
sudo apt install ffmpeg
```
To validate that it's installed properly: `ffmpeg -version`
## Conversion

```shell
ffmpeg -i input.mov -vcodec libx264 -crf 20 output.mp4
```
The CRF (Constant Rate Factor), 20 in this case, typically ranges from 18 to 24; a higher number compresses the output to a smaller size at the cost of quality.
The video codec used here is x264. To check what encoding `input.mov` uses, run:

```shell
ffprobe -v error -select_streams v:0 -show_entries stream=codec_name -of default=nokey=1:noprint_wrappers=1 input.mov
```
Reference from here
```shell
ffmpeg -i file.mov -c copy out.mp4
```

You can remove audio by using the `-an` flag.
- `MJPEG` (motion-jpeg): an example here; if you are making videos from JPEG images, try `-c:v copy` as recommended here
- `x264`: the unofficial cheatsheet on stackoverflow
- `x265` (aka High Efficiency Video Coding): full tutorial here on OTTVerse
## Slicing Video

Also commonly referred to as trimming.
Use the `-ss` option to specify a start timestamp, and the `-t` option to specify the encoding duration. The timestamps need to be in HH:MM:SS.xxx format or in seconds.
Example: clip 10 seconds starting at 30 seconds in:

```shell
ffmpeg -ss 00:00:30.0 -i input.wmv -c copy -t 00:00:10.0 output.wmv
ffmpeg -ss 30 -i input.wmv -c copy -t 10 output.wmv
```
Note that `-t` is an output option and always needs to be specified after `-i`. Use the `-to` arg to specify the target end time instead. Also note that if you specify `-ss` before `-i`, `-to` will have the same effect as `-t`, i.e. it will act as a duration.
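If you script these trims, a small helper for building the HH:MM:SS.xxx timestamps can be handy. This is just a sketch; the `to_timestamp` function is made up for illustration, not an ffmpeg feature:

```shell
# Hypothetical helper: turn a (possibly fractional) number of seconds
# into the HH:MM:SS.xxx form accepted by -ss/-t/-to.
to_timestamp() {
  awk -v s="$1" 'BEGIN {
    h = int(s / 3600)
    m = int((s - h * 3600) / 60)
    printf "%02d:%02d:%06.3f\n", h, m, s - h * 3600 - m * 60
  }'
}

to_timestamp 90.5   # prints 00:01:30.500
```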
## Resize/Scale the resolution

```shell
ffmpeg -i input.mp4 -vf scale=$w:$h output.mp4
```

where `$w` is width and `$h` is height (e.g. `-vf scale=640:480` will resize the video to 480p).
To get the width and height of a video, you could (ref):

```shell
ffprobe -v error -select_streams v -show_entries stream=width,height -of csv=p=0:s=x input.mp4
```
For small resolutions, check here or here.
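When picking `$w`/`$h` by hand, note that libx264 requires even dimensions. A small sketch (the `even_height` helper is hypothetical) that derives an even height for a target width from the `ffprobe` output above:

```shell
# Compute a height for a given target width that preserves the source
# aspect ratio, rounded to the nearest even integer.
even_height() {
  # $1 = source width, $2 = source height, $3 = target width
  awk -v w="$1" -v h="$2" -v tw="$3" 'BEGIN {
    printf "%d\n", int((tw * h / w + 1) / 2) * 2
  }'
}

even_height 1920 1080 640   # prints 360
```

ffmpeg can also do this for you: `-vf scale=640:-2` derives a height from the aspect ratio and rounds it to be divisible by 2.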
## Speed-related

To change the frame rate:

```shell
ffmpeg -i <input> -filter:v fps=fps=30 <output>
```
Using the setpts filter, to slow down the video to half speed:

```shell
ffmpeg -i input.mkv -filter:v "setpts=2.0*PTS" output.mkv
```
To double the speed (frames might be dropped, however, which can be avoided by increasing the FPS: see here):

```shell
ffmpeg -i input.mkv -filter:v "setpts=0.5*PTS" output.mkv
```
To take every 15th frame, do the following as recommended in this blog:

```shell
ffmpeg -i in.mp4 -vf select='not(mod(n,15))',setpts=N/FRAME_RATE/TB out.mp4
```
You might also want to add `-an`: for a time lapse there's really no need to keep the audio; in fact, if you don't remove the audio, the output video will be the same length as the input.
Another method is to use the framestep filter:

```shell
ffmpeg -i in.mp4 -vf framestep=15,setpts=N/FRAME_RATE/TB out.mp4
```
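To sanity-check which frames `select='not(mod(n,15))'` keeps, you can mimic the expression in plain shell; frames are numbered from 0, so it keeps multiples of 15:

```shell
# not(mod(n,15)) is true exactly when the frame number n is a
# multiple of 15, i.e. frames 0, 15, 30, ...
kept=$(seq 0 40 | awk '$1 % 15 == 0' | xargs)
echo "$kept"   # prints: 0 15 30
```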
## Video to Gif

See this useful post or even this one that combines it with gifsicle compression:

```shell
ffmpeg -i input.mp4 -s 1400x800 -pix_fmt rgb24 -r 30 -f gif output.gif
gifsicle output.gif --optimize=3 --delay=3 --colors 64 -o output.gif
```
- the `-s` param is optional; it sets the output video width and height
- the `-r` param controls the FPS
- `--optimize` is the compression level in gifsicle, 3 being the highest
- `--colors` sets how many colors are in the gif (fewer being more compressed, see the GIF Compression section below)
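A note on `--delay`: gifsicle measures it in hundredths of a second per frame, so for a target playback FPS you'd pick roughly `100 / fps`:

```shell
# --delay is in 1/100ths of a second per frame, so --delay=3 above
# corresponds to roughly 33 FPS.
fps=30
delay=$((100 / fps))
echo "$delay"   # prints: 3
```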
But to do the Boomerang effect, you might want something like this, where `input.mp4` is the segment that you want to loop and rewind (source):

```shell
ffmpeg -i input.mp4 -filter_complex "[0]reverse[r];[0][r]concat=n=2:v=1:a=0" output.gif
```
### GIF Compression

It's best handled by gifsicle (`brew install gifsicle`); see this solution for reference.
## Getting Information like FPS

```shell
ffmpeg -i filename
```

For more see this post.
To count the number of frames (see here for details):

```shell
ffprobe -v error -select_streams v:0 -count_frames -show_entries stream=nb_read_frames input.mp4
```
But you probably want to count packets instead, which is significantly faster in my experience:

```shell
ffprobe -v error -select_streams v:0 -count_packets -show_entries stream=nb_read_packets -of csv=p=0 input.mp4
```
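Given the frame/packet count plus the duration (from `ffprobe -v error -show_entries format=duration -of csv=p=0 input.mp4`), the average FPS is just frames divided by duration. The numbers below are made-up examples, not from a real file:

```shell
# Example: 300 frames over a 10-second file gives an average of 30 FPS.
frames=300
duration=10.0
awk -v f="$frames" -v d="$duration" 'BEGIN { printf "%.2f\n", f / d }'   # prints: 30.00
```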
## Annotation

The font file path you need is probably going to be at `/usr/share/fonts` on Linux or `/Library/Fonts/` on macOS:

```shell
ffmpeg -i input.mp4 -vf "drawtext=fontfile=Arial.ttf: text='Frame\: %{frame_num}': start_number=0: x=(w-tw)/2: y=h-(2*lh): fontcolor=black: fontsize=20: box=1: boxcolor=white: boxborderw=5" -c:a copy output.mp4
```
This creates a video that looks like this:
This could also be done using the `font` option (i.e. instead of the `fontfile` option above) if your ffmpeg is compiled with `--enable-libfontconfig`, per this stackoverflow answer:

```shell
ffmpeg -i input.mp4 -vf "drawtext=text='hello-world!':font='Arial':x=(main_w-text_w-10):y=(main_h-text_h-10):fontsize=32:fontcolor=black:box=1:[email protected]:boxborderw=5" output.mp4
```
This will draw a box with 50% transparency containing the words "hello-world!" in black. For alternatives and other solutions, check this other stackoverflow answer.
## Audio-related

To extract audio from a video:

```shell
ffmpeg -i input.mp4 -q:a 0 -map a output_file.m4a
```

- `-q:a 0`: variable bitrate with 0 being the highest quality
- `-map a`: select the audio stream only
- file extension: in general `m4a` is a good idea because ffmpeg should be smart enough to guess your intent, figure out the right codec for your output audio file, and encapsulate the raw AAC into the output M4A container
To concatenate audio tracks:

```shell
ffmpeg -i /input/track1.m4a -i ~/input/track2.m4a -filter_complex '[0:0][1:0]concat=n=2:v=0:a=1[out]' -map '[out]' output/track.m4a
```

For adding a third track, simply replace `[0:0][1:0]concat=n=2` with `[0:0][1:0][2:0]concat=n=3`.
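If the number of tracks varies, the filter string can be generated instead of edited by hand. A sketch, assuming one audio stream per input (the `[i:0]` pads):

```shell
# Build "[0:0][1:0]...[N-1:0]concat=n=N:v=0:a=1[out]" for N inputs.
n=3
filter=""
i=0
while [ "$i" -lt "$n" ]; do
  filter="$filter[$i:0]"
  i=$((i + 1))
done
filter="${filter}concat=n=$n:v=0:a=1[out]"
echo "$filter"   # prints: [0:0][1:0][2:0]concat=n=3:v=0:a=1[out]
```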
## Image-related

To extract a single frame at a given timestamp:

```shell
ffmpeg -ss 01:23:45 -i input -frames:v 1 -q:v 2 output.jpg
```

- the `-ss` flag indicates the timestamp of the frame in hh:mm:ss
- for details see this stackoverflow post
To make a video from a sequence of images at 30 fps (for details see this stackoverflow post):

```shell
ffmpeg -framerate 30 -pattern_type glob -i '*.jpg' -c:v libx264 -pix_fmt yuv420p out.mp4
```

The command expects that in your working directory there is a list of JPEGs nicely named by frame number and ideally zero-padded. In case you want to make them zero-padded, this stackoverflow answer might help.
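A quick zero-padding sketch (the `frameN.jpg` names are hypothetical): rename `frame1.jpg`, `frame2.jpg`, ..., `frame10.jpg` so that glob order matches frame order:

```shell
# Work in a scratch directory with some sample files.
tmp=$(mktemp -d)
cd "$tmp"
touch frame1.jpg frame2.jpg frame10.jpg

# Strip the prefix/suffix to get the number, then re-pad it to 4 digits.
for f in frame*.jpg; do
  n="${f#frame}"
  n="${n%.jpg}"
  mv "$f" "$(printf 'frame%04d.jpg' "$n")"
done
ls   # now: frame0001.jpg frame0002.jpg frame0010.jpg
```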
If you bump into the `height not divisible by 2` error, you might want to add a pad filter: `-vf "pad=ceil(iw/2)*2:ceil(ih/2)*2"`, as explained here.
Finally, if you want to do this with the `ffmpeg-python` package, this example might help.
To go the other way and extract images from a video:

```shell
ffmpeg -i video.mp4 out_%04d.png
```

Note that `%04d` means the frame number is zero-padded to 4 digits; the syntax in general is `%0xd`. For more options, like an image every X seconds or custom ranges, see this stackoverflow post.
To overlay a sequence of transparency PNGs on a video, assuming they are in the directory `saliency/`:

```shell
ffmpeg -i video.mp4 -framerate 15 -pattern_type glob -i 'saliency/*.png' -filter_complex "[1:v][0:v]scale2ref=iw:ih[ovr][base];[ovr][base]blend=all_mode='overlay':all_opacity=0.7[v]" -map [v] result.mp4
```
The answer is originally from StackExchange; note that the `-framerate` must match your input video's FPS, otherwise your output video will look off. The opacity level can be controlled by changing the float in `all_opacity=0.7`.
However, you might bump into an error message like this:

```
[Parsed_blend_1 @ 0x557711613420] First input link top parameters (size 1280x720, SAR 0:1) do not match the corresponding second input link bottom parameters (1280x720, SAR 1:1)
```
That might be due to a "custom" Display Aspect Ratio (DAR), so you might want to add something like this to your `-filter_complex` after `scale2ref`:

```
[ovr]setdar=16:9[ovr];[base]setdar=16:9[base];
```
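The `16:9` here is just the reduced form of the frame size (1280x720 in the error above); for other resolutions you can derive the ratio with a gcd:

```shell
# Reduce width:height to the aspect-ratio string setdar expects.
gcd() {
  a=$1; b=$2
  while [ "$b" -ne 0 ]; do
    t=$((a % b)); a=$b; b=$t
  done
  echo "$a"
}

w=1280; h=720
g=$(gcd "$w" "$h")
echo "$((w / g)):$((h / g))"   # prints: 16:9
```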
Alternatively, there is a different way to do this:

```shell
ffmpeg -i video.mp4 -framerate 15 -pattern_type glob -i 'saliency/*.png' -filter_complex "[1:v]format=argb,geq=r='r(X,Y)':a='1.0*alpha(X,Y)'[zork];[0:v][zork]overlay" result.mp4
```

where the opacity can be changed by altering the `1.0` in `1.0*alpha(X,Y)`; note that this solution is noticeably slower!
## Joining Two Videos

See this very detailed guide or borrow from this solution:

```shell
ffmpeg \
  -i input1.mp4 \
  -i input2.mp4 \
  -filter_complex vstack=inputs=2 \
  output.mp4
```
To merge videos end-to-end into one video:

```shell
ffmpeg -i video1.mp4 -i video2.mp4 -i video3.mp4 -filter_complex "[0:v][0:a][1:v][1:a][2:v][2:a]concat=n=3:v=1:a=1" -vsync vfr output.mp4
```
You might need to remove the `:a` specifier in the filter graph if your video has no audio; for a more detailed explanation see here.
## Keyframes

This explains video keyframes in the context of compression in detail. To see how keyframes are extracted, this post is helpful. And to see how many keyframes are in your video, check this.
## Video Cropping

For cropping video in the same way one would on images, see this post.
## Rotation

Rotation is set in the metadata of the video. To add rotation data, use ffmpeg:

```shell
ffmpeg -i input.mp4 -metadata:s:v rotate=180 -vcodec copy -acodec copy output.mp4
```

This should be lossless because of `-vcodec copy`; for more details see this stackoverflow post.
To update rotation data, use exiftool (not tested; install with `sudo apt install libimage-exiftool-perl`):

```shell
exiftool -rotation=180 input.mp4
```