Understanding FFMpegVideoCursor and manipulating it


I’ve been working in Panda3D for about a week, so I am in no way a guru or anything; forgive my naivety.

I am attempting to create a video player using multiple videos that you can cut between using the number keys. I pretty much have this working (with minimal overhead when switching), but I want to manipulate the data between the “FFMpeg Video Decoding” and “FFMpeg Convert Video to BGR” steps (shown in pink and purple in PStats).

I have drilled down into the C++ source code (I am currently doing this in Python), but I can’t really see where FFMpeg is called to do its magic, and the fetch_into_buffer function that fetch_into_texture() etc. use just seems to allocate memory rather than do any decoding. Is this true?

I just need a way to intercept the data between the decode and the conversion. At the moment the decode takes 10ms and the conversion 20ms, so I want to reduce the amount of data (to be precise, grab only a quarter of the incoming frame) before the conversion. Doing it after the conversion means I have already lost that 20ms, which is a massive performance overhead when working with multiple videos.

If anyone could answer any/all of these questions it would be a great help:

  1. Where is FFMpeg called when executing the fetch_into_texture()/fetchIntoTexture() method?

  2. Is it possible to grab the decode data before it is converted into BGR and manipulate it?

  3a. If the answer to 2. is yes, how is the data stored? If it’s saved as a 2D array, say, can I do something like take only a quarter of the frame, or can it be converted into a 2D array to do so?

  3b. If the answer to 2. is no, is there any other way I can grab the decoded data and do the quartering? Maybe by decoding the data a different way, or by implementing an FFMpeg/DirectShow wrapper myself?

  4. Can you give me an outline/flowchart-style description of what happens at a low level, from the video being loaded, to grabbing a frame, to the decode, to the BGR conversion, to applying it to a texture, etc.?

Thanks everyone for your time,

I know this is a big ask, but I’ve been working on this for about a week, only slowly improving performance, and have now hit this roadblock, so any help is greatly appreciated.

Thanks again,



No ideas anyone?


Sorry for the delayed reply.

The conversion from FFMpeg happens in FfmpegVideoCursor::export_frame(), which is called at the end of fetch_into_texture(). Specifically, the call to sws_scale() is responsible for extracting the frame data from FFMpeg and converting it into Panda’s required form, which is consecutive bytes of either BGR or BGRA data (depending on whether the video includes an alpha channel). The time spent in this function is the time that shows in the “FFMpeg Convert Video to BGR” slot, and it consists almost entirely of calls to the SWScale API.

You can certainly add additional processing of the data in this function, between the time it is copied out of FFMpeg and before it is copied into Panda’s data structure. But if your goal is to reduce the amount of time spent processing the video, you’ll have to look deeper into the FFMpeg and SWScale APIs to see if there’s a way to ask them to extract only a portion of the frame. Otherwise, your best bet is probably to use a lighter-weight codec, such as MPEG-2 instead of H.264; or to use a color encoding that is already BGR or BGRA instead of YUV (though this also depends on what the codec supports; I don’t think it’s possible with H.264).
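To illustrate the layout described above (consecutive BGR bytes, rows stored one after another), here is a minimal, self-contained sketch of grabbing only the top-left quarter of a frame from such a buffer. The function name and signature are my own invention, not part of Panda3D or FFMpeg; this only shows the arithmetic, not where to hook it in.

```cpp
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical helper: given a flat buffer of consecutive BGR bytes
// (3 bytes per pixel, rows stored consecutively), copy out the
// top-left quarter (half width, half height) into a new buffer.
std::vector<unsigned char> crop_top_left_quarter(
    const std::vector<unsigned char> &src, int width, int height) {
  const int bpp = 3;  // bytes per pixel for BGR
  const int half_w = width / 2;
  const int half_h = height / 2;
  std::vector<unsigned char> dst(static_cast<size_t>(half_w) * half_h * bpp);
  for (int y = 0; y < half_h; ++y) {
    // Copy the first half of each of the first half_h rows.
    std::memcpy(&dst[static_cast<size_t>(y) * half_w * bpp],
                &src[static_cast<size_t>(y) * width * bpp],
                static_cast<size_t>(half_w) * bpp);
  }
  return dst;
}
```

Note that a crop like this only saves the sws_scale() time if it happens before the conversion, as discussed above; done afterwards it merely reduces the amount of data passed downstream.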


Thanks very much for the reply, that’s helped me understand a lot!

I’m guessing I’ll have to compile Panda3D from source then, to get down into the FfmpegVideoCursor code?

Sorry, I haven’t really done much Python, but I’m guessing I can’t modify this stuff at the Python level, can I?

Thanks so much for your reply,


Right, this is C++-level stuff. There is no way to change it at the Python level.



I’ve been messing around in C++ a bit yesterday and today to get my head around how it does OO, and have re-implemented my videoPlayer from Python in C++.

Problem is, I can’t seem to inherit from FfmpegVideoCursor so that I can write my own version of export_frame().

I have a header called FfmpegVideoCursorMine.h and a cpp file called FfmpegVideoCursorMine.cpp, but I’m getting a compile error in my header file:

If I try to inherit from another Panda class, say MovieTexture or PStatClient, it works just fine with no error.

Does anyone know what the problem is? I read that this error is commonly about circular inheritance, but I don’t see how that can be the case here. (Sorry if it’s a very simple C++ error.)

Here is FfmpegVideoCursorMine.h:

#include "ffmpegVideoCursor.h"

class FfmpegVideoCursorMine : public FfmpegVideoCursor //<-- Error here

And here is FfmpegVideoCursorMine.cpp:

#include "ffmpegVideoCursorMine.h"


Thanks for your time,


Ah sorry, just realised this thread is still in scripting issues, should it be moved to the C++ area now?

Mod edit: moved as per your request.

Try #including pandabase.h before including ffmpegVideoCursor.h. It looks like ffmpegVideoCursor.h is incorrectly testing #ifdef HAVE_FFMPEG before it includes pandabase.h, which is where that symbol would normally be defined.


Ahh I see, it is including FfmpegVideoCursor now, but now it doesn’t seem to think MovieVideoCursor exists:

Indeed, it appears that ffmpegVideoCursor.h also fails to #include "movieVideoCursor.h", so you should do that first too.
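Putting the two include fixes from this thread together, the top of the header might look like the following sketch (FfmpegVideoCursorMine is the class name used in this thread; the class body shown is only a placeholder):

```cpp
// FfmpegVideoCursorMine.h
// Work around ffmpegVideoCursor.h testing HAVE_FFMPEG before it has
// been defined: include pandabase.h (which defines that symbol) and
// movieVideoCursor.h (which ffmpegVideoCursor.h also fails to
// include) before ffmpegVideoCursor.h itself.
#include "pandabase.h"
#include "movieVideoCursor.h"
#include "ffmpegVideoCursor.h"

class FfmpegVideoCursorMine : public FfmpegVideoCursor {
public:
  FfmpegVideoCursorMine(FfmpegVideo *src);
};
```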


Ah okay, I’ve read up a bit more now on #includes and how they work, so there will be less tiny questions like that from now on.

Thanks for the help,



I’ve got my own version of FfmpegVideoCursor inheriting from FfmpegVideoCursor and have successfully overridden fetch_into_texture(), but when I try to override export_frame() I get a load of errors about filename, _format_ctx, _video_index, _video_ctx, _frame, _frame_out, _packet, etc.

So I copied the constructor from FfmpegVideoCursor, which looks as follows:

FfmpegVideoCursor(FfmpegVideo *src) :

But then I get compile errors about illegal member initialization, saying that _format_ctx etc. are not a base or member.

They are actually defined in ffmpegVideoCursor.h, right? If so, shouldn’t their definitions be available in my class if I’m inheriting from FfmpegVideoCursor?

I’ve googled around a bit, but I guess my C++ knowledge is too small for this one.

Thanks for any help and all the help so far,


These data members are protected members of the base class, so yes, you do have them available to you in your inherited class. You can assign to them or read them in normal code. You can’t use the parenthesis syntax to initialize them in your own class constructor, though; that’s the responsibility of the parent constructor (the FfmpegVideoCursor constructor will be called automatically, and it is responsible for initializing all of the members).

What sort of error messages are you getting when you access them in export_frame()? This should be legal.

Note that export_frame() is not declared virtual in the base class, so you can’t actually override just that one method. (You can declare your own export_frame() method, but it won’t be called.) You’ll have to edit the class definition in ffmpegVideoCursor.h and add the keyword “virtual” to the declaration of export_frame() there.

Edit: actually, as long as you override fetch_into_texture() and fetch_into_buffer() from the base class, and call your export_frame() from those methods, then you don’t need to make export_frame() virtual. fetch_into_texture() and fetch_into_buffer() are already virtual, so you can replace those methods with your own; and when your version of those methods calls export_frame() it will call your version of the method whether or not it has been declared virtual.