@yohhoy, last active July 24, 2024 14:05
Read video frame with FFmpeg and convert to OpenCV image
/*
 * Read video frame with FFmpeg and convert to OpenCV image
 *
 * Copyright (c) 2016 yohhoy
 */
#include <iostream>
#include <vector>
// FFmpeg
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/avutil.h>
#include <libavutil/pixdesc.h>
#include <libswscale/swscale.h>
}
// OpenCV
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
int main(int argc, char* argv[])
{
    if (argc < 2) {
        std::cout << "Usage: ff2cv <infile>" << std::endl;
        return 1;
    }
    const char* infile = argv[1];

    // initialize FFmpeg library
    av_register_all();
    // av_log_set_level(AV_LOG_DEBUG);
    int ret;

    // open input file context
    AVFormatContext* inctx = nullptr;
    ret = avformat_open_input(&inctx, infile, nullptr, nullptr);
    if (ret < 0) {
        std::cerr << "fail to avformat_open_input(\"" << infile << "\"): ret=" << ret;
        return 2;
    }
    // retrieve input stream information
    ret = avformat_find_stream_info(inctx, nullptr);
    if (ret < 0) {
        std::cerr << "fail to avformat_find_stream_info: ret=" << ret;
        return 2;
    }

    // find primary video stream
    AVCodec* vcodec = nullptr;
    ret = av_find_best_stream(inctx, AVMEDIA_TYPE_VIDEO, -1, -1, &vcodec, 0);
    if (ret < 0) {
        std::cerr << "fail to av_find_best_stream: ret=" << ret;
        return 2;
    }
    const int vstrm_idx = ret;
    AVStream* vstrm = inctx->streams[vstrm_idx];

    // open video decoder context
    ret = avcodec_open2(vstrm->codec, vcodec, nullptr);
    if (ret < 0) {
        std::cerr << "fail to avcodec_open2: ret=" << ret;
        return 2;
    }

    // print input video stream information
    std::cout
        << "infile: " << infile << "\n"
        << "format: " << inctx->iformat->name << "\n"
        << "vcodec: " << vcodec->name << "\n"
        << "size: " << vstrm->codec->width << 'x' << vstrm->codec->height << "\n"
        << "fps: " << av_q2d(vstrm->codec->framerate) << " [fps]\n"
        << "length: " << av_rescale_q(vstrm->duration, vstrm->time_base, {1, 1000}) / 1000. << " [sec]\n"
        << "pixfmt: " << av_get_pix_fmt_name(vstrm->codec->pix_fmt) << "\n"
        << "frame: " << vstrm->nb_frames << "\n"
        << std::flush;

    // initialize sample scaler
    const int dst_width = vstrm->codec->width;
    const int dst_height = vstrm->codec->height;
    const AVPixelFormat dst_pix_fmt = AV_PIX_FMT_BGR24;
    SwsContext* swsctx = sws_getCachedContext(
        nullptr, vstrm->codec->width, vstrm->codec->height, vstrm->codec->pix_fmt,
        dst_width, dst_height, dst_pix_fmt, SWS_BICUBIC, nullptr, nullptr, nullptr);
    if (!swsctx) {
        std::cerr << "fail to sws_getCachedContext";
        return 2;
    }
    std::cout << "output: " << dst_width << 'x' << dst_height << ',' << av_get_pix_fmt_name(dst_pix_fmt) << std::endl;

    // allocate frame buffer for output
    AVFrame* frame = av_frame_alloc();
    std::vector<uint8_t> framebuf(avpicture_get_size(dst_pix_fmt, dst_width, dst_height));
    avpicture_fill(reinterpret_cast<AVPicture*>(frame), framebuf.data(), dst_pix_fmt, dst_width, dst_height);

    // decoding loop
    AVFrame* decframe = av_frame_alloc();
    unsigned nb_frames = 0;
    bool end_of_stream = false;
    int got_pic = 0;
    AVPacket pkt;
    do {
        if (!end_of_stream) {
            // read packet from input file
            ret = av_read_frame(inctx, &pkt);
            if (ret < 0 && ret != AVERROR_EOF) {
                std::cerr << "fail to av_read_frame: ret=" << ret;
                return 2;
            }
            if (ret == 0 && pkt.stream_index != vstrm_idx)
                goto next_packet;
            end_of_stream = (ret == AVERROR_EOF);
        }
        if (end_of_stream) {
            // null packet to flush (drain) buffered frames from the decoder
            av_init_packet(&pkt);
            pkt.data = nullptr;
            pkt.size = 0;
        }
        // decode video frame
        avcodec_decode_video2(vstrm->codec, decframe, &got_pic, &pkt);
        if (!got_pic)
            goto next_packet;
        // convert frame to OpenCV matrix
        sws_scale(swsctx, decframe->data, decframe->linesize, 0, decframe->height, frame->data, frame->linesize);
        {
            cv::Mat image(dst_height, dst_width, CV_8UC3, framebuf.data(), frame->linesize[0]);
            cv::imshow("press ESC to exit", image);
            if (cv::waitKey(1) == 0x1b)
                break;
        }
        std::cout << nb_frames << '\r' << std::flush;  // dump progress
        ++nb_frames;
next_packet:
        av_free_packet(&pkt);
    } while (!end_of_stream || got_pic);
    std::cout << nb_frames << " frames decoded" << std::endl;

    av_frame_free(&decframe);
    av_frame_free(&frame);
    avcodec_close(vstrm->codec);
    avformat_close_input(&inctx);
    return 0;
}

@ndtreviv commented:

This is great - thanks! However, I only want to target certain frames. How do I get the frame->qscale_table populated? It always seems to be NULL...

@yohhoy (Author) commented Apr 1, 2020:

You mean decframe (the decoded frame) rather than frame (the converted BGR24 frame)?

FFmpeg's APIs are quite fragile, and behavior depends on the library version in your project. For instance, FFmpeg 4.0 deprecates the AVFrame::qscale_table member.
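For what it's worth, on FFmpeg releases from before that deprecation (3.x and early 4.x) a rough check would look like the sketch below. It inspects the decoded frame, not the converted BGR24 one, and it is only a sketch: whether the table is populated at all depends on the decoder, and its exact layout and stride are decoder dependent.

// Sketch only, for FFmpeg versions where AVFrame::qscale_table still exists.
// The table lives on the *decoded* frame and is filled only by decoders that
// export per-macroblock QP data; it may legitimately remain NULL otherwise.
if (decframe->qscale_table != nullptr) {
    std::cout << "QP of first macroblock: "
              << static_cast<int>(decframe->qscale_table[0]) << std::endl;
}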

@ndtreviv commented Apr 1, 2020:

You're right, it does, but I still need access to the macroblock-level data. Any idea how I would do that with the new APIs?

@yohhoy (Author) commented Apr 1, 2020:

I haven't tried to access macroblock-level data via FFmpeg. I think you'd be better off asking at https://stackoverflow.com/

@vuljormp commented:

Thanks for your kind sharing.

I use this code with FFmpeg-n4.2.4 and it works well.

However, there is a memory leak at line 89 (the sws_getCachedContext call) when I check this code with Valgrind.
Description of sws_getCachedContext from the official website:

    Check if context can be reused, otherwise reallocate a new one.
    If context is NULL, just calls sws_getContext() to get a new context.
    Otherwise, checks if the parameters are the ones already saved in context.
    If that is the case, returns the current context. Otherwise, frees context
    and gets a new context with the new parameters.
    Be warned that srcFilter and dstFilter are not checked, they are assumed
    to remain the same.

It seems that we need to release swsctx ourselves.

Adding sws_freeContext(swsctx); to the cleanup at the end of main (the commenter's line 139) fixes the leak.
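For reference, a minimal sketch of the end-of-main cleanup with that fix applied (the exact position among the other teardown calls is a judgement call, not something the original code prescribes):

av_frame_free(&decframe);
av_frame_free(&frame);
sws_freeContext(swsctx);  // release the scaler context returned by sws_getCachedContext
avcodec_close(vstrm->codec);
avformat_close_input(&inctx);
return 0;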

@dtsmith2001 commented:

Thank you for this!

@OverStruck commented:

This line:
ret = avcodec_open2(vstrm->codec, vcodec, nullptr);
won't work anymore because vstrm->codec is now deprecated.

For anyone else running into this, you can do the following:

AVCodecContext *ctx = avcodec_alloc_context3(vcodec);
ret = avcodec_open2(ctx, vcodec, nullptr);

@stephanrotolante commented:

Wow, this is awesome. Thanks, everyone!

@stephanrotolante commented:

@OverStruck you then have to use avcodec_parameters_to_context to copy the stream's codec parameters into the new context. Check this video out around the 35-minute mark.
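Putting the last two comments together, a rough sketch of the modern (FFmpeg 4.x+) decoder setup might look like the following. It reuses ret, vstrm, and vcodec from av_find_best_stream as in the gist; with this context the decode loop would use avcodec_send_packet / avcodec_receive_frame instead of the deprecated avcodec_decode_video2. This is only a sketch, not a drop-in patch for the whole gist.

// allocate a standalone codec context instead of using the deprecated vstrm->codec
AVCodecContext* ctx = avcodec_alloc_context3(vcodec);
if (!ctx) {
    std::cerr << "fail to avcodec_alloc_context3";
    return 2;
}
// copy width/height/pix_fmt/extradata etc. from the stream into the new context
ret = avcodec_parameters_to_context(ctx, vstrm->codecpar);
if (ret < 0) {
    std::cerr << "fail to avcodec_parameters_to_context: ret=" << ret;
    return 2;
}
ret = avcodec_open2(ctx, vcodec, nullptr);
if (ret < 0) {
    std::cerr << "fail to avcodec_open2: ret=" << ret;
    return 2;
}
// ... decode with avcodec_send_packet(ctx, &pkt) / avcodec_receive_frame(ctx, decframe) ...
// and release the context during cleanup:
avcodec_free_context(&ctx);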

@stephanrotolante commented Mar 10, 2022:

I then used this guy's code to convert the image for OpenCV.
