What is the most efficient way to display decoded video frames in Qt?

Asked by 南笙 on 2020-11-30 19:59 · 3 answers · 1904 views

What is the fastest way to display images in a Qt widget? I have decoded the video using libavformat and libavcodec, so I already have raw RGB or YCbCr 4:2:0 frames.

3 Answers
  • 2020-11-30 20:08

    Depending on your OpenGL/shading skills you could try to copy the video frames to a texture, map the texture to a rectangle (or anything else.. fun!) and display it in an OpenGL scene. Not the most straightforward approach, but fast, because you're writing directly into graphics memory (like SDL). I would also recommend using YCbCr only, since that format is subsampled (Y at full resolution; Cb and Cr each cover a quarter of the frame), so less memory and less copying is needed to display a frame. I'm not using Qt's GL directly but indirectly, using GL in Qt (via OSG), and can display about 7-11 full-HD (1440 x 1080) videos in realtime.
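
    A minimal sketch of that texture approach, with everything in it (the three plane textures, the uploadFrame helper, the shader) assumed for illustration rather than taken from the answer: a fragment shader samples the three 4:2:0 planes as separate single-channel textures and converts BT.601 YCbCr to RGB, and the CPU side re-uploads the planes once per frame.

    // Fragment shader (GLSL 1.20). One single-channel texture per plane;
    // the vertex shader is assumed to pass the texture coordinate in 'uv'.
    uniform sampler2D texY;   // full resolution
    uniform sampler2D texCb;  // half resolution in each dimension
    uniform sampler2D texCr;
    varying vec2 uv;
    
    void main()
    {
        float y  = texture2D(texY,  uv).r;
        float cb = texture2D(texCb, uv).r - 0.5;
        float cr = texture2D(texCr, uv).r - 0.5;
        // Standard full-range BT.601 YCbCr -> RGB coefficients.
        gl_FragColor = vec4(y + 1.402 * cr,
                            y - 0.344 * cb - 0.714 * cr,
                            y + 1.772 * cb,
                            1.0);
    }

    // Hypothetical C++ side: re-upload the decoded planes every frame.
    // 'frm' holds a YUV420P AVFrame; tex[0..2] are pre-created GL_LUMINANCE
    // textures of size w*h, (w/2)*(h/2) and (w/2)*(h/2).
    extern "C" {
    #include <libavutil/frame.h>
    }
    #include <GL/gl.h>
    
    void uploadFrame(const AVFrame* frm, const GLuint tex[3])
    {
        const int w = frm->width, h = frm->height;
        const int pw[3] = { w, w / 2, w / 2 };
        const int ph[3] = { h, h / 2, h / 2 };
        for (int i = 0; i < 3; ++i) {
            glBindTexture(GL_TEXTURE_2D, tex[i]);
            // linesize may include padding; since these planes are 1 byte per
            // pixel, the stride in bytes equals the row length in pixels.
            glPixelStorei(GL_UNPACK_ROW_LENGTH, frm->linesize[i]);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, pw[i], ph[i],
                            GL_LUMINANCE, GL_UNSIGNED_BYTE, frm->data[i]);
        }
        glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    }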

  • 2020-11-30 20:18

    I have the same problem with gtkmm (the GTK+ C++ wrapper). The best solution, besides using an SDL overlay, was to update the widget's image buffer directly and then request a redraw. But I don't know whether that is feasible with Qt... a rough Qt translation of the idea is sketched below.
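
    In Qt, the same "write into the buffer, then redraw" pattern might look like the following sketch: let the decoder write straight into the QImage that backs the widget, then call update() to schedule a repaint. The VideoCanvas class and its helpers are hypothetical illustrations, not the poster's gtkmm code.

    #include <QWidget>
    #include <QPainter>
    #include <QImage>
    
    class VideoCanvas : public QWidget
    {
    public:
        VideoCanvas(int w, int h, QWidget* parent = NULL)
            : QWidget(parent), img(w, h, QImage::Format_RGB32) {}
    
        // The decoder writes RGB32 pixels directly into this buffer...
        uchar* frameBuffer() { return img.bits(); }
        // ...and then asks for a redraw with the new contents.
        void frameReady() { update(); }
    
    protected:
        void paintEvent(QPaintEvent*)
        {
            QPainter p(this);
            p.drawImage(rect(), img);   // scaled to the widget size
        }
    
    private:
        QImage img;
    };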

    my 2 cents

  • 2020-11-30 20:29

    Thanks for the answers, but I finally revisited this problem and came up with a rather simple solution that gives good performance. It involves deriving from QGLWidget and overriding the paintEvent() function. Inside paintEvent() you can call QPainter::drawImage(...), which scales the image to a specified rectangle for you, using hardware if available. So it looks something like this:

    #include <QGLWidget>
    #include <QPainter>
    #include <QImage>
    
    // QGLWidget-backed canvas: running QPainter on a GL paint device lets
    // drawImage() do the scaling on the GPU where the driver supports it.
    class QGLCanvas : public QGLWidget
    {
    public:
        QGLCanvas(QWidget* parent = NULL);
        void setImage(const QImage& image);
    protected:
        void paintEvent(QPaintEvent*);
    private:
        QImage img;
    };
    
    QGLCanvas::QGLCanvas(QWidget* parent)
        : QGLWidget(parent)
    {
    }
    
    void QGLCanvas::setImage(const QImage& image)
    {
        img = image;
        update();   // schedule a repaint so the new frame is actually drawn
    }
    
    void QGLCanvas::paintEvent(QPaintEvent*)
    {
        QPainter p(this);
    
        // Set the painter to use a smooth scaling algorithm.
        p.setRenderHint(QPainter::SmoothPixmapTransform, true);
    
        // Scale the frame to fill the widget.
        p.drawImage(this->rect(), img);
    }
    

    With this, I still have to convert the YUV 4:2:0 data to RGB32, but ffmpeg has a very fast implementation of that conversion in libswscale (sketched after the list below). The major gains come from two things:

    • No need for software scaling: scaling is done on the video card (if available).
    • The QImage-to-QPixmap conversion that happens inside QPainter::drawImage() is performed at the original image resolution, as opposed to the upscaled fullscreen resolution.
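
    For reference, a minimal libswscale sketch of that YUV420P-to-RGB32 step might look like the following; the decoded AVFrame frm and the toRgb32 helper are assumptions, and AV_PIX_FMT_RGB32 is the FFmpeg format whose byte order matches QImage::Format_RGB32.

    extern "C" {
    #include <libswscale/swscale.h>
    #include <libavutil/frame.h>
    }
    #include <QImage>
    
    // Assumes 'frm' is a decoded AV_PIX_FMT_YUV420P frame.
    QImage toRgb32(const AVFrame* frm)
    {
        QImage img(frm->width, frm->height, QImage::Format_RGB32);
    
        SwsContext* sws = sws_getContext(
            frm->width, frm->height, AV_PIX_FMT_YUV420P,
            frm->width, frm->height, AV_PIX_FMT_RGB32,  // no scaling, just conversion
            SWS_BILINEAR, NULL, NULL, NULL);
    
        // Write the converted pixels straight into the QImage's buffer.
        uint8_t* dst[4]       = { img.bits(), NULL, NULL, NULL };
        int      dstStride[4] = { static_cast<int>(img.bytesPerLine()), 0, 0, 0 };
        sws_scale(sws, frm->data, frm->linesize, 0, frm->height, dst, dstStride);
    
        sws_freeContext(sws);   // in real code, cache the context across frames
        return img;
    }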

    With my previous method I was pegging a processor core on the display alone (decoding was done in another thread). Now my display thread only uses about 8-9% of a core for fullscreen 1920x1200 30 fps playback. I'm sure it could get even better if I could send the YUV data straight to the video card, but this is plenty good enough for now. A sketch of handing frames from the decode thread to the canvas follows.
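
    Since QWidget painting must happen on the GUI thread, frames decoded on a worker thread have to be handed over safely. A hedged sketch using a queued signal connection; the Decoder class, its frameDecoded signal, and the wire() helper are hypothetical, with QGLCanvas being the widget from above.

    #include <QObject>
    #include <QThread>
    #include <QImage>
    
    // Hypothetical decoder living in a worker thread; emits one QImage per frame.
    class Decoder : public QObject
    {
        Q_OBJECT
    signals:
        void frameDecoded(const QImage& frame);
    };
    
    // GUI-side wiring (Qt 5 connect syntax). Because the decoder lives in
    // another thread, the connection is automatically queued, so setImage()
    // and the repaint it schedules run on the GUI thread.
    void wire(Decoder* decoder, QGLCanvas* canvas, QThread* worker)
    {
        decoder->moveToThread(worker);
        QObject::connect(decoder, &Decoder::frameDecoded,
                         canvas,  &QGLCanvas::setImage);
        worker->start();
    }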
