These are a few quick, easy ffmpeg command lines for producing oft-used video formats. I use them a lot, so I wrote them down in a txt file, then converted it to markdown to upload here and access on all my PCs.
Feel free to use 'em. I've gathered them through Super User posts, wiki trawls, and personal experience.
- Add `-movflags faststart` to make mp4 files put their headers at the beginning of the file, allowing them to be streamed (i.e. played even if only part of the file is downloaded).
- The mp4 container supports mp3 audio, so if `libfdk_aac` isn't available (it's the only good AAC encoder), use `libmp3lame`.
- For mp4 files, use `-preset X` to pick an x264 encoder preset, like `slow` or `superfast` (`veryfast` or `fast` is OK).
- `-c:v` refers to the video codec used (codec: video). Likewise, `-c:a` is audio. If you're using `-map` or something, this can be extended (`-c:a:0` = codec: audio: stream 0).
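Here's the faststart tip in action as a runnable sketch. The first command just fabricates a tiny test input with ffmpeg's built-in `testsrc`/`sine` generators so the example is self-contained, and `libmp3lame` stands in for `libfdk_aac` since many builds lack it; all filenames are placeholders.

```shell
# fabricate a tiny synthetic input (2s of test pattern + tone)
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=320x240:rate=24 \
  -f lavfi -i sine=frequency=440:duration=2 \
  -c:v libx264 -pix_fmt yuv420p -c:a libmp3lame input.mkv

# transcode to a streamable mp4: -movflags faststart puts the headers up front
ffmpeg -y -loglevel error -i input.mkv \
  -c:v libx264 -c:a libmp3lame -movflags faststart output.mp4
```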
```
ffmpeg -i file.mkv -c:v libx264 -c:a libfdk_aac output.mp4
```

```
ffmpeg -i file.mkv -c:v libvpx-vp9 -b:v 1M -c:a libvorbis output.webm
```
Change the bitrate to change the quality. Alternatively, replace `-b:v 1M` with `-crf 10 -b:v 0` to target a constant quality instead of a bitrate; the file size is less predictable, but the overall quality is better.
Use `-an` instead of a separate audio codec to remove audio (for 4chan and gfycat and such).
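Putting both of those tips together, a self-contained sketch of a silent, constant-quality webm (the input is synthetic so it runs anywhere; a higher `-crf` than 10 is used just to keep the demo quick):

```shell
# fabricate a short synthetic input
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=320x240:rate=24 \
  -pix_fmt yuv420p -c:v libx264 input.mkv

# constant-quality VP9 needs -b:v 0 alongside -crf; -an drops the audio
ffmpeg -y -loglevel error -i input.mkv \
  -c:v libvpx-vp9 -crf 30 -b:v 0 -an output.webm
```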
```
ffmpeg -i file.mkv -vn -c:a libmp3lame -qscale:a # output.mp3
```

Where `#` is the audio quality, from `0` (best) to `9` (worst).
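For instance, with a synthetic input and `-qscale:a 2` (roughly LAME's commonly recommended `-V2` setting):

```shell
# fabricate an input that has an audio track
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=320x240:rate=24 \
  -f lavfi -i sine=frequency=440:duration=2 \
  -c:v libx264 -pix_fmt yuv420p -c:a libmp3lame input.mkv

# -vn drops the video stream; only the audio is re-encoded to mp3
ffmpeg -y -loglevel error -i input.mkv -vn -c:a libmp3lame -qscale:a 2 output.mp3
```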
```
ffmpeg -r [Framerate (24)] \
  -f image2 -s [Image size (1280x720)] -i image%04d.png \
  -i audio.wav \
  -c:v libx264 -crf 25 -pix_fmt yuv420p \
  -c:a libfdk_aac output.mp4
```

(The encoder options go after both inputs; options placed before an `-i` are applied to that input instead.)

`%04d` is the number of digits in the zero-padded counter at the end of your filenames, before the extension; i.e. filenames that look like `houston000004.png` would be `houston%06d.png`.
`-crf` controls quality; if you're having trouble getting a good picture, try fiddling with it.

If things are coming out really wrong, double-check that your pixel format is `yuv420p`; if it isn't, look up the proper `pix_fmt` for your images, or just batch convert them with ImageMagick or smth idk.

If your directory of images doesn't start at 0, you need to include `-start_number [index]` so image2 knows which image to start at.
```
ffmpeg -r 24 -f image2 -s 1280x720 -i image%04d.png \
  -i audio.wav \
  -c:v libx264 -crf 25 -pix_fmt yuv420p \
  -c:a libfdk_aac output.mp4
```
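A fully self-contained version of the same recipe, generating its own numbered frames and audio first (all names are placeholders, and `libmp3lame` stands in for `libfdk_aac` since many builds lack it):

```shell
# generate 48 numbered frames and a short wav so the sketch is runnable
mkdir -p frames
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=1280x720:rate=24 \
  frames/image%04d.png
ffmpeg -y -loglevel error -f lavfi -i sine=frequency=440:duration=2 audio.wav

# stitch the frames and audio into an mp4
ffmpeg -y -loglevel error -r 24 -f image2 -s 1280x720 -i frames/image%04d.png \
  -i audio.wav \
  -c:v libx264 -crf 25 -pix_fmt yuv420p \
  -c:a libmp3lame output.mp4
```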
```
ffmpeg -i video.mp4 -i audio.mp3 -c:v libx264 -c:a libmp3lame out.mp4
```

Though that's a little overkill. If you're using an mp4 and an mp3, you can just use the `copy` codec, which is faster, since the mp4 container supports mp3.
```
ffmpeg -i video.mp4 -i audio.mp3 -c:v copy -c:a copy output.mp4
```

Containers like MKV can handle a lot of codecs, so `copy` is pretty versatile there.
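For instance, remuxing an mp4 into an mkv without touching the streams at all (synthetic input so the sketch is runnable as-is):

```shell
# fabricate an input
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=320x240:rate=24 \
  -pix_fmt yuv420p -c:v libx264 input.mp4

# change containers only; no re-encoding happens, so this is nearly instant
ffmpeg -y -loglevel error -i input.mp4 -c copy output.mkv
```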
Use `-map`.

```
ffmpeg -i video.mp4 -i audio.mp3 \
  -c:v libx264 -c:a libmp3lame \
  -map 0:v:0 -map 1:a:0 out.mp4
```
```
ffmpeg -loop 1 -i image.png -i audio.wav \
  -c:a libfdk_aac -c:v libx264 -tune stillimage \
  -shortest out.mp4
```

`-shortest` makes sure it stops when the audio stream ends and doesn't just encode until it runs out of space or crashes. You may need `-s`, `-pix_fmt`, etc. depending on your image.
```
ffmpeg -i file.mp3 -i artwork.png -map 0:0 -map 1:0 \
  -c copy -id3v2_version 3 -metadata:s:v title="Album cover" \
  -metadata:s:v comment="Cover (Front)" out.mp3
```

There's a lot more metadata that can be added to an mp3 file, but most of it is obscure and weird; Cover is basically the only one you'll ever see used besides Lyrics (though I think lyrics live in mp4, not mp3).
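Plain text tags work the same way with `-metadata`; a self-contained sketch using the `title` and `artist` keys (the tag values here are obviously placeholders):

```shell
# fabricate a short mp3
ffmpeg -y -loglevel error -f lavfi -i sine=frequency=440:duration=2 \
  -c:a libmp3lame in.mp3

# set ID3 text tags without re-encoding the audio
ffmpeg -y -loglevel error -i in.mp3 -c copy \
  -metadata title="Song Title" -metadata artist="Some Artist" out.mp3
```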
If you're actually gonna do extensive video editing, you should probably use the Blender VSE or something. But if you only need to do a little bit of work on a file, these should help.
To start encoding a file part of the way through, use `-ss` to seek to it.
```
ffmpeg -ss 00:00:15 -i input.mp4 out.mp4
```
You can use `-sseof` to seek from the end of the file; it takes a negative position.

```
ffmpeg -sseof -00:00:15 -i input.mp4 out.mp4
```
Note that if you use `-ss`/`-sseof` before you specify a file, ffmpeg won't try to "read" those first X seconds of video first, which makes encoding start faster. By contrast, doing `ffmpeg -i input.mp4 -ss 00:00:15 out.mp4` will read the first 15 seconds, then start encoding.
If your input stream has timestamps already set right (it should) then seeking should work fine, but I have noticed some strange recording software (like OBS) will sometimes output FLV files with broken or missing timestamps, in which case you'll get a radically different time if you try to seek it. In these cases, reading should be used.
To stop encoding after a set amount of time, use `-t` to specify a duration.

```
ffmpeg -i input.mp4 -t 00:00:30 out.mp4
```

This is similar to setting an egg timer to stop after 30 seconds.
To stop encoding after a set time in the video has passed, use `-to`.

```
ffmpeg -i input.mp4 -to 00:08:37 out.mp4
```

This is like scheduling a DVR to record from 6:00 PM to 6:30 PM.
Note that without `-ss`, `-t` and `-to` are functionally similar.

When you seek with `-ss` instead of reading (i.e. `-ss 0:13 -i file` vs `-i file -ss 0:13`), the timestamp will be reset to 00:00:00, meaning that `-t` and `-to` will, once again, be functionally similar. You can combine two `-ss` options to both seek and read, allowing you to quickly reach precise sections of a video. Example:

```
ffmpeg -ss 2:14 -i input.mp4 -ss 0.4 -t 5 out.mp4
```

will record a 5 second clip starting at 02:14.4.
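The same seek-then-read trick as a runnable sketch, with the timings scaled down to fit a short synthetic clip:

```shell
# 10-second synthetic input
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=10:size=320x240:rate=24 \
  -pix_fmt yuv420p -c:v libx264 input.mp4

# fast-seek to 0:04, then decode a further 0.5s in, and keep 2 seconds
ffmpeg -y -loglevel error -ss 0:04 -i input.mp4 -ss 0.5 -t 2 out.mp4
```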
If you want to limit your recording/encoding/transcoding to a file size instead, use `-fs`.

```
ffmpeg -i input.mp4 -fs 5M out.mp4
```

This would stop copying `input.mp4` when the file reached 5 megabytes.
You can combine these commands to get clips out of videos.

```
ffmpeg -i input.mp4 -t 00:02:00 -fs 1M out.mp4
```

This would trim your video to 2 minutes or 1 megabyte, whichever happens first.
```
ffmpeg -ss 00:30:00 -i input.mp4 -to 02:00:00 out.mp4
```

This would make you a new version of `input.mp4` that starts 30 minutes in. Note that since `-ss` before the input resets the timestamps, `-to 02:00:00` here behaves like `-t 02:00:00`: you get 2 hours of video, ending 2:30:00 into the original.

`-fs` is extremely useful if your hosting service has a hard limit on filesize and/or duration, but you want to press your luck and see how good a quality video you can get under those limits.
```
ffmpeg -ss 00:02:52 -i input.mp4 -preset fast -fs 2M f.mp4
ffmpeg -ss 00:02:52 -i input.mp4 -preset veryfast -fs 2M vf.mp4
ffmpeg -ss 00:02:52 -i input.mp4 -preset superfast -fs 2M sf.mp4
ffmpeg -ss 00:02:52 -i input.mp4 -preset ultrafast -fs 2M uf.mp4
```
Other cool use cases for `-fs`:

- You're on a storage budget and don't want to eat all your drive space with video files.
- You're capturing some kind of indefinite stream (like recording your desktop) and want to upload the file to a hosting service that doesn't care about duration (like 4chan's webms or something).
- Your input stream has sections of "heavy traffic" that make the file size too unpredictable to control by simply using a preset or lowering the duration.
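The preset comparison above can also be scripted as a loop; this sketch fabricates its own long input (at a low frame rate so it runs quickly), then encodes the same clip at the same 2 MB cap with each preset:

```shell
# 3-minute synthetic input at a low frame rate so this runs quickly
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=180:size=320x240:rate=5 \
  -pix_fmt yuv420p -c:v libx264 -preset ultrafast input.mp4

# same clip, same size cap, different presets; compare quality by eye
for p in fast veryfast superfast ultrafast; do
  ffmpeg -y -loglevel error -ss 00:02:52 -i input.mp4 -preset "$p" -fs 2M "$p.mp4"
done
```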
Scaling (resizing) with ffmpeg, because most of what's written about it online is woefully outdated.

Scaling is a video filter, so the syntax is `-vf`. This is an older syntax (ffmpeg used to use `-vcodec` for "video codec", and will still recognize what you meant if you use it), so pardon the confusion with `-c`. Part of this is because `-f` is already used for "format" and can't be used for "filter". Enough semantics.
If you know precisely what size you want the output to be, you can just specify it.

```
ffmpeg -i input.mp4 -vf scale=1280:720 out.mp4
```

(Some shells choke on bare colons and other special characters in filter arguments. If yours does, quote it: `-vf "scale=1280:720"`.)
If you want to retain the aspect ratio, use `-1` to make ffmpeg calculate the proper height or width respectively.

```
ffmpeg -i input.mp4 -vf scale=1280:-1 out.mp4
ffmpeg -i input.mp4 -vf scale=-1:720 out.mp4
```

(libx264 needs even dimensions; if `-1` lands on an odd number, use `-2` instead and ffmpeg will round to an even one.)
The width and height are evaluated as expressions, and the scale filter exposes a set of constants you can use. Basically this means you can write simple expressions using the original image's width and height (among other things) to do specific operations.

Doubling or halving the size of an image using `iw` and `ih`, the "input width" and "height":

```
ffmpeg -i input.mp4 -vf scale=iw*2:ih*2 out.mp4
ffmpeg -i input.mp4 -vf scale=iw*.5:ih*.5 out.mp4
```
A list of the constants and options for `scale` is available here.
The default is a bicubic scaler. You can make the encoding faster with a lower quality scaler, via the scale filter's `flags` option:

```
ffmpeg -i input.mp4 -vf "scale=1280:-1:flags=neighbor" out.mp4
```

or more beautiful with a higher quality one:

```
ffmpeg -i input.mp4 -vf "scale=1280:-1:flags=lanczos" out.mp4
```

A list of all the flags is available here.
If you're scaling to anything that isn't the same aspect ratio, you will need to change the video's SAR and DAR. This is easy enough: just add the `setsar=1` filter to your chain.

```
ffmpeg -i input.mp4 -vf "scale=1280:-1,setsar=1" out.mp4
```
Some distortion can be expected since, you know, you're discarding, substituting, or duplicating pixels to achieve the effect, but it generally looks fine if you're scaling down. (Example: NVIDIA Share refuses to acknowledge one of my monitors is portrait-rotated, so it always records videos at 16:10 instead of 10:16. I can squish the width to get a 656x1050 video which has just enough clarity to be visible.)
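A runnable sketch of the squish-and-fix-SAR idea, scaled down to a synthetic 16:9 input forced into 4:3 (the sizes here are arbitrary placeholders):

```shell
# fabricate a 16:9 input
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=2:size=1280x720:rate=24 \
  -pix_fmt yuv420p -c:v libx264 input.mp4

# squish to 4:3 and reset the sample aspect ratio so players don't "correct" it
ffmpeg -y -loglevel error -i input.mp4 -vf "scale=640:480,setsar=1" out.mp4
```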
And of course, all of this can be put together for some spectacular commands.

```
ffmpeg -ss 00:02:52 \
  -i video.mp4 -i audio.mp3 \
  -c:v libx264 -c:a libmp3lame \
  -map 0:v:0 -map 1:a:0 \
  -fs 2M -to 00:08:10 \
  -shortest out.mp4
```
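The same grand-finale shape as a self-contained sketch: synthetic inputs, timings scaled down to fit them, and `-to` dropped since the test clip is only 10 seconds long.

```shell
# fabricate both inputs
ffmpeg -y -loglevel error -f lavfi -i testsrc=duration=10:size=640x360:rate=24 \
  -pix_fmt yuv420p -c:v libx264 video.mp4
ffmpeg -y -loglevel error -f lavfi -i sine=frequency=440:duration=10 \
  -c:a libmp3lame audio.mp3

# seek, pick streams explicitly, re-encode, cap the size, stop with the shorter stream
ffmpeg -y -loglevel error -ss 00:00:02 \
  -i video.mp4 -i audio.mp3 \
  -c:v libx264 -c:a libmp3lame \
  -map 0:v:0 -map 1:a:0 \
  -fs 2M -shortest out.mp4
```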