Convert YUVj420p pixel format to RGB888 using gstreamer


Question


I'm using GStreamer 1.2 to feed frames from my IP camera to an OpenCV program.

The stream is 640×368 in YUVj420p format, and I want to convert it to RGB888 so I can use it in my OpenCV program.

So is there a way to use GStreamer to do that conversion?

Or do I have to do it myself? If so, please give me the equation that does the conversion.


Answer 1:


After some trials with GStreamer I decided to do the conversion myself, and it worked.

First we have to understand the YUVj420p pixel format.

In Y'UV420, the Y', U and V components are encoded separately in sequential blocks: a Y' value is stored for every pixel, followed by a U value for each 2×2 square block of pixels, and finally a V value for each 2×2 block. Read line-by-line as a byte stream, the Y' block is found at position 0, the U block at position x×y (6×4 = 24 in Wikipedia's example diagram), and the V block at position x×y + (x×y)/4 (here, 6×4 + (6×4)/4 = 30). (quoted from Wikipedia's Y'UV article)
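As a quick sanity check of those offsets for the 640×368 stream in question (a worked example added here, not part of the original answer):

w, h = 640, 368
size = w * h                  # 235520 bytes of Y, one per pixel
u_offset = size               # U plane starts at byte 235520
v_offset = size + size // 4   # V plane starts at byte 294400
frame_size = size * 3 // 2    # 353280 bytes per full I420 frame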

Here is the Python code to do it. It shows how to feed frames into OpenCV from GStreamer and perform the conversion:

import gi
gi.require_version('Gst', '1.0')
from gi.repository import GLib, GObject, Gst
import numpy as np
import cv2

GObject.threads_init()  # a no-op on modern PyGObject, required on older versions
Gst.init(None)

def YUV_stream2RGB_frame(data):

    w = 640
    h = 368
    size = w * h

    stream = np.frombuffer(data, np.uint8)  # convert the raw bytes to a numpy array

    # Y bytes start at 0 and end at size-1
    y = stream[0:size].reshape(h, w)  # the Y plane is the same size as the image

    # U bytes start at size and end at size + size//4 (the U plane is a quarter of the frame)
    u = stream[size:size + size // 4].reshape(h // 2, w // 2)

    # up-sample the U plane to the same size as the Y plane using cv2.pyrUp
    u_upsize = cv2.pyrUp(u)

    # do the same for the V plane
    v = stream[size + size // 4:].reshape(h // 2, w // 2)
    v_upsize = cv2.pyrUp(v)

    # build the 3-channel frame with cv2.merge -- watch the order:
    # OpenCV's YCrCb layout is (Y, Cr, Cb), i.e. (Y, V, U)
    yuv = cv2.merge((y, v_upsize, u_upsize))

    # convert to RGB
    rgb = cv2.cvtColor(yuv, cv2.COLOR_YCrCb2RGB)

    # show the frame
    cv2.imshow("show", rgb)
    cv2.waitKey(5)

def on_new_buffer(appsink):

    sample = appsink.emit('pull-sample')
    # get the buffer
    buf = sample.get_buffer()
    # extract the raw frame data as bytes
    data = buf.extract_dup(0, buf.get_size())
    YUV_stream2RGB_frame(data)
    return Gst.FlowReturn.OK

def Init():

    CLI = "rtspsrc name=src location=rtsp://192.168.1.20:554/live/ch01_0 latency=10 ! decodebin ! appsink name=sink"

    # simplest way to create a pipeline
    pipeline = Gst.parse_launch(CLI)

    # get the sink by the name set in CLI
    appsink = pipeline.get_by_name("sink")

    # set some important properties of appsink
    appsink.set_property("max-buffers", 20)       # prevent the app from consuming huge amounts of memory
    appsink.set_property('emit-signals', True)    # tell the sink to emit signals
    appsink.set_property('sync', False)           # no sync, to make decoding as fast as possible

    appsink.connect('new-sample', on_new_buffer)  # connect the signal to the callback

    return pipeline

def run(pipeline):
    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()


pipeline = Init()
run(pipeline)
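For reference, since the question also asks for the equation: cv2.COLOR_YCrCb2RGB applies (approximately) the full-range BT.601 conversion, which is the one that matches JPEG-range "yuvj" streams. Per pixel, with Cb = U and Cr = V:

R = Y + 1.402 * (Cr - 128)
G = Y - 0.344 * (Cb - 128) - 0.714 * (Cr - 128)
B = Y + 1.772 * (Cb - 128)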



Answer 2:


How exactly are you getting the frames from your camera, and how do you inject them into your OpenCV application?

Supposing you get your frames outside of gstreamer you should use a pipeline like:

appsrc caps="video/x-raw, format=I420, width=640, height=368" ! videoconvert ! capsfilter caps="video/x-raw, format=RGB" ! appsink

Then use appsrc to inject the data and appsink to receive it back. If you are getting your data from the camera over HTTP or V4L2, you can replace appsrc with souphttpsrc or v4l2src.
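A minimal sketch of driving that pipeline from Python with PyGObject (the caps string mirrors the pipeline above; push_frame and pull_rgb_frame are illustrative names, and real code would also set appsrc's stream format and buffer timestamps):

import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    'appsrc name=src caps="video/x-raw, format=I420, width=640, height=368" '
    '! videoconvert ! capsfilter caps="video/x-raw, format=RGB" ! appsink name=sink')

appsrc = pipeline.get_by_name("src")
appsink = pipeline.get_by_name("sink")
pipeline.set_state(Gst.State.PLAYING)

def push_frame(i420_bytes):
    # wrap the raw I420 bytes in a Gst.Buffer and hand it to appsrc
    buf = Gst.Buffer.new_wrapped(i420_bytes)
    appsrc.emit('push-buffer', buf)

def pull_rgb_frame():
    # blocks until the converted RGB frame is available at appsink
    sample = appsink.emit('pull-sample')
    buf = sample.get_buffer()
    return buf.extract_dup(0, buf.get_size())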



Source: https://stackoverflow.com/questions/24574655/convert-yuvj420p-pixel-format-to-rgb888-using-gstreamer
