ffmpeg: RGB to YUV conversion loses color and scale

Submitted by 冷暖自知 on 2019-11-28 02:13:57

Question


I am trying to convert RGB frames to YUV420P format in ffmpeg/libav. Following is the code for the conversion, along with the images before and after conversion. The converted image loses all color information and the scale also changes significantly. Does anybody have an idea how to handle this? I am completely new to ffmpeg/libav!

// Did we get a video frame?
   if(frameFinished)
   {
       i++;
       sws_scale(img_convert_ctx, (const uint8_t * const *)pFrame->data,
                 pFrame->linesize, 0, pCodecCtx->height,
                 pFrameRGB->data, pFrameRGB->linesize);                   

       //==============================================================
       AVFrame *pFrameYUV = avcodec_alloc_frame();
       // Determine required buffer size and allocate buffer
       int numBytes2 = avpicture_get_size(PIX_FMT_RGB24, pCodecCtx->width,                                 
                                          pCodecCtx->height);
       uint8_t *buffer = (uint8_t *)av_malloc(numBytes2*sizeof(uint8_t));

       avpicture_fill((AVPicture *)pFrameYUV, buffer, PIX_FMT_RGB24,
                       pCodecCtx->width, pCodecCtx->height);


       rgb_to_yuv_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,  
                                       PIX_FMT_RGB24,
                                       pCodecCtx->width,pCodecCtx->height, 
                                       PIX_FMT_RGB24,
                                       SWS_BICUBIC, NULL,NULL,NULL);

       sws_scale(rgb_to_yuv_ctx, pFrameRGB->data, pFrameRGB->linesize, 0, 
                 pCodecCtx->height, pFrameYUV->data, pFrameYUV->linesize);

       sws_freeContext(rgb_to_yuv_ctx);

       SaveFrame(pFrameYUV, pCodecCtx->width, pCodecCtx->height, i);

       av_free(buffer);
       av_free(pFrameYUV);
   }


Answer 1:


Well, for starters, I will assume that where you have:

rgb_to_yuv_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,  
                                   PIX_FMT_RGB24,
                                   pCodecCtx->width,pCodecCtx->height, 
                                   PIX_FMT_RGB24,
                                   SWS_BICUBIC, NULL,NULL,NULL);

You really intended:

rgb_to_yuv_ctx = sws_getContext(pCodecCtx->width, pCodecCtx->height,  
                                   PIX_FMT_RGB24,
                                   pCodecCtx->width,pCodecCtx->height, 
                                   PIX_FMT_YUV420P,
                                   SWS_BICUBIC, NULL,NULL,NULL);

I'm also not sure why you are calling swscale twice!

YUV is a planar format. This means all three channels are stored independently, whereas RGB is stored interleaved: RGBRGBRGB...

YUV420P is stored like: YYYYYYYYYYYYYYYY..UUUUUUUUUU..VVVVVVVV

So swscale requires you to give it three plane pointers (and three line strides).

Next, you want your line stride to be a multiple of 16 or 32 so the processor's vector units can be used. And finally, the dimensions of the Y plane need to be divisible by two (because the U and V planes are each a quarter of the size of the Y plane).
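
To make those numbers concrete, here is a small worked example (illustration only; it assumes a hypothetical 854x480 source, and the rounding macros below do the same thing generically):

/* Illustration: plane sizes for an assumed 854x480 source.
   Dimensions are forced even, strides rounded up to a multiple of 32. */
int width    = 854 & ~1;                  /* 854 (already even)             */
int height   = 480 & ~1;                  /* 480                            */
int ystride  = (854 + 31) & ~31;          /* 864: Y stride                  */
int uvstride = (854 / 2 + 31) & ~31;      /* 448: U and V stride            */
int ysize    = ystride * height;          /* 414720 bytes of Y              */
int uvsize   = uvstride * (height / 2);   /* 107520 bytes each of U and V   */
int total    = ysize + 2 * uvsize;        /* 629760-byte YUV420P buffer     */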

So, let's rewrite this:

#define RNDTO2(X)  ( (X) & 0xFFFFFFFE )
#define RNDTO32(X) ( ( (X) % 32 ) ? ( ( (X) + 32 ) & 0xFFFFFFE0 ) : (X) )




if(frameFinished)
{
    static struct SwsContext *swsCtx = NULL;

    /* Round the output dimensions down to even values and the strides up
       to a multiple of 32 so the planes are SIMD-friendly. */
    int width    = RNDTO2 ( pCodecCtx->width );
    int height   = RNDTO2 ( pCodecCtx->height );
    int ystride  = RNDTO32 ( width );
    int uvstride = RNDTO32 ( width / 2 );
    int ysize    = ystride * height;
    int uvsize   = uvstride * ( height / 2 );
    int size     = ysize + ( 2 * uvsize );

    uint8_t *pFrameYUV = (uint8_t *)malloc( size );
    uint8_t *plane[]   = { pFrameYUV, pFrameYUV + ysize, pFrameYUV + ysize + uvsize, NULL };
    int      stride[]  = { ystride, uvstride, uvstride, 0 };

    swsCtx = sws_getCachedContext ( swsCtx, pCodecCtx->width, pCodecCtx->height,
                                    pCodecCtx->pix_fmt, width, height, AV_PIX_FMT_YUV420P,
                                    SWS_LANCZOS | SWS_ACCURATE_RND, NULL, NULL, NULL );

    /* Convert straight from the decoded frame -- no intermediate RGB pass. */
    sws_scale ( swsCtx, (const uint8_t * const *)pFrame->data, pFrame->linesize, 0,
                pCodecCtx->height, plane, stride );

    /* ... use the YUV420P data in pFrameYUV here ... */
    free( pFrameYUV );
}

I also switched your algorithm to SWS_LANCZOS | SWS_ACCURATE_RND, which will give you better-looking images; change it back if it is too slow. And I used the pixel format from the source frame instead of assuming it is RGB all the time.
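
If you want to keep something like the SaveFrame call from the question, one option (a sketch only, not part of the original answer) is to append each converted frame to a raw .yuv file, writing width bytes per Y row and width/2 bytes per U/V row so the stride padding is stripped. dump_yuv420p below is a hypothetical helper name; it only needs <stdio.h> and <stdint.h>:

/* Hypothetical helper: append one YUV420P frame as raw planar data,
   dropping the stride padding so each frame occupies exactly w*h*3/2 bytes. */
static void dump_yuv420p(const char *path, uint8_t *planes[], int strides[],
                         int w, int h)
{
    FILE *f = fopen(path, "ab");         /* append so frames form a raw stream */
    if (!f) return;
    for (int y = 0; y < h; y++)          /* Y plane: w bytes per row    */
        fwrite(planes[0] + y * strides[0], 1, w, f);
    for (int y = 0; y < h / 2; y++)      /* U plane: w/2 bytes per row  */
        fwrite(planes[1] + y * strides[1], 1, w / 2, f);
    for (int y = 0; y < h / 2; y++)      /* V plane: w/2 bytes per row  */
        fwrite(planes[2] + y * strides[2], 1, w / 2, f);
    fclose(f);
}

Called as dump_yuv420p("out.yuv", plane, stride, width, height) just before the free(pFrameYUV), the result can be played back with something like ffplay -f rawvideo -pixel_format yuv420p -video_size 854x480 out.yuv (substitute your actual dimensions).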



Source: https://stackoverflow.com/questions/21938674/ffmpeg-rgb-to-yuv-conversion-loses-color-and-scale
