Question
I am trying to build a GStreamer pipeline which interleaves images from multiple cameras into a single data flow that can be passed through a neural network and then split into separate branches for sinking. I am successfully creating the interleaved feed using the appsrc plugin and the Basler Pylon 5 - USB 3.0 API. However, before I do the work of writing the neural-network GStreamer element, I want to get the splitting working.
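For reference, a minimal sketch of what the push side of such an appsrc feed can look like, assuming the Pylon grab results are already available as raw byte arrays (the pipeline setup and the Pylon calls themselves are elided; the caller alternates cameras to interleave the feed):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Hypothetical helper: wrap one grabbed frame and push it into appsrc. */
static GstFlowReturn
push_frame (GstAppSrc *appsrc, const guint8 *data, gsize size,
            GstClockTime pts, GstClockTime duration)
{
  GstBuffer *buf = gst_buffer_new_allocate (NULL, size, NULL);

  gst_buffer_fill (buf, 0, data, size);   /* copy the frame bytes */
  GST_BUFFER_PTS (buf) = pts;
  GST_BUFFER_DURATION (buf) = duration;

  /* appsrc takes ownership of the buffer */
  return gst_app_src_push_buffer (appsrc, buf);
}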
Currently, I am thinking of tagging each image with an ID indicating which camera it came from, and then splitting the data flow based on that tag. However, I have not been able to find anything dealing with this exact problem. I have seen that the tee plugin can be used to branch a pipeline, but I haven't seen it used to split based on tags. Is it possible to use tee to do this?
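One way to carry such an ID is GStreamer's custom-meta API (gst_meta_register_custom / gst_buffer_add_custom_meta, available since GStreamer 1.20). A minimal sketch; the meta name "camera-id-meta" and the field name "camera-id" are made-up examples:

#include <gst/gst.h>

/* Register the custom meta once at startup (GStreamer >= 1.20). */
static void
register_camera_meta (void)
{
  static const gchar *tags[] = { NULL };
  gst_meta_register_custom ("camera-id-meta", tags, NULL, NULL, NULL);
}

/* Attach the originating camera's index to a buffer before pushing it. */
static void
tag_buffer (GstBuffer *buf, gint camera_id)
{
  GstCustomMeta *meta = gst_buffer_add_custom_meta (buf, "camera-id-meta");
  GstStructure *s = gst_custom_meta_get_structure (meta);

  gst_structure_set (s, "camera-id", G_TYPE_INT, camera_id, NULL);
}

On versions older than 1.20, a lighter-weight (if hacky) alternative is to carry the index in the buffer's GST_BUFFER_OFFSET field.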
I have seen people use tee to split a feed based on its source, with something like this:
gst-launch-1.0 -vvv \
tee name=splitter \
$VSOURCE \
! $VIDEO_DECODE \
! $VIDEO_SINK splitter. \
$VSOURCE1 \
! $VIDEO_DECODE \
! $VIDEO_SINK splitter.
However, this does not allow me to have a single path through the neural network element.
If it helps, here is a diagram of the pipeline I envision:
cam1 ---\                                 /---> udpsink/appsink
         \                               /
          appsrc-->neural_network-->tee---
         /                               \
cam2 ---/                                 \---> udpsink/appsink
Answer 1:
The tee element just forwards the same data to both branches. You should write another element which takes the input and only outputs the data of the stream you are interested in. You should also place a queue element at the start of each branch so that each branch runs in its own thread. Calling that splitting element camfilter, with an id property, the pipeline looks like this (a pad-probe sketch of the filtering follows the diagram):
cam1 ---\                                 /---> queue --> camfilter id=1 --> udpsink/appsink
         \                               /
          appsrc-->neural_network-->tee---
         /                               \
cam2 ---/                                 \---> queue --> camfilter id=2 --> udpsink/appsink
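One way to get the camfilter behaviour without authoring a full GstElement is a buffer probe on each branch that drops buffers whose ID does not match. A minimal sketch, reusing the hypothetical "camera-id-meta" tagging from the question (GStreamer >= 1.20):

#include <gst/gst.h>

typedef struct { gint wanted_id; } CamFilterCtx;

/* Drop every buffer whose "camera-id" meta does not match wanted_id. */
static GstPadProbeReturn
cam_filter_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  CamFilterCtx *ctx = user_data;
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstCustomMeta *meta = gst_buffer_get_custom_meta (buf, "camera-id-meta");
  gint id = -1;

  if (meta != NULL)
    gst_structure_get_int (gst_custom_meta_get_structure (meta),
                           "camera-id", &id);

  return (id == ctx->wanted_id) ? GST_PAD_PROBE_OK : GST_PAD_PROBE_DROP;
}

/* Installed on the source pad of each branch's queue, e.g.:
 *   gst_pad_add_probe (pad, GST_PAD_PROBE_TYPE_BUFFER,
 *                      cam_filter_probe, ctx, NULL);
 */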
Answer 2:
This was not available when the question was asked. Since summer 2018, however, there is a set of neural-network GStreamer plugins, nnstreamer (https://github.com/nnsuite/nnstreamer), which lets you do this without an appsrc in the middle: it spares you the work of implementing your own merging code and camera-frame handling, and it also makes it easier to swap in a different neural network:
cam1 (gst src) ---> videoconvert,scale,... --> tensor_converter --\
                                                                   \
                                                                    tensor_merge (or tensor_mux, depending on the input dimensions) --> tensor_filter (framework=tf_lite, model=abc.tflite) --> tee --> (same from here)
                                                                   /
cam2 (gst src) ---> videoconvert,scale,... --> tensor_converter --/
Note that it also supports PyTorch, Caffe2, TensorFlow, and more.
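A rough sketch of such a pipeline built from C via gst_parse_launch; the element names (tensor_converter, tensor_mux, tensor_filter) come from the answer above, while the devices, resolution, and model path are made-up placeholders:

#include <gst/gst.h>

int
main (int argc, char **argv)
{
  GError *err = NULL;
  GstElement *pipeline;

  gst_init (&argc, &argv);

  /* Two camera sources are converted to tensors and combined by
   * tensor_mux, so a single tensor_filter instance runs the network. */
  pipeline = gst_parse_launch (
      "v4l2src device=/dev/video0 ! videoconvert ! videoscale "
      "  ! video/x-raw,width=224,height=224,format=RGB "
      "  ! tensor_converter ! mux.sink_0 "
      "v4l2src device=/dev/video1 ! videoconvert ! videoscale "
      "  ! video/x-raw,width=224,height=224,format=RGB "
      "  ! tensor_converter ! mux.sink_1 "
      "tensor_mux name=mux "
      "  ! tensor_filter framework=tf_lite model=abc.tflite "
      "  ! tee name=t "
      "t. ! queue ! fakesink "
      "t. ! queue ! fakesink",
      &err);

  if (pipeline == NULL) {
    g_printerr ("parse error: %s\n", err->message);
    g_clear_error (&err);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);
  /* ... run a GMainLoop here, then shut down ... */
  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (pipeline);
  return 0;
}

The fakesinks stand in for the real per-branch sinks (queue plus udpsink/appsink in the original diagram).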
Source: https://stackoverflow.com/questions/49288404/gstreamer-with-multiple-cameras-how-can-i-split-the-pipeline-based-on-the-camer