OpenCV Python, reading video from named pipe


Question


I am trying to achieve the results shown in this video (Method 3, using netcat): https://www.youtube.com/watch?v=sYGdge3T30o

The point is to stream video from a Raspberry Pi to an Ubuntu PC and process it with OpenCV and Python.

I use the command

raspivid -vf -n -w 640 -h 480 -o - -t 0 -b 2000000 | nc 192.168.0.20 5777

to stream the video to my PC. On the PC I created a named pipe 'fifo' and redirected the netcat output to it:

 nc -l -p 5777 -v > fifo
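
For reference, the named pipe itself is created beforehand with mkfifo:

mkfifo fifo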

Then I try to read the pipe and display the result in the following Python script:

import cv2
import sys

video_capture = cv2.VideoCapture(r'fifo')
video_capture.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
video_capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    # Capture frame-by-frame
    ret, frame = video_capture.read()
    if not ret:
        continue  # nothing was read; do not pass a None frame to imshow

    cv2.imshow('Video', frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

# When everything is done, release the capture
video_capture.release()
cv2.destroyAllWindows()

However, I just end up with an error:

[mp3 @ 0x18b2940] Header missing

The error is produced by the line video_capture = cv2.VideoCapture(r'fifo').

When I redirect the output of netcat on the PC to a file and then read that file in Python, the video works, but it plays back roughly 10 times too fast.

I know the problem is in the Python script, because the nc transmission works (to a file), but I am unable to find any clues.

How can I achieve the results shown in the provided video (Method 3)?


Answer 1:


I too wanted to achieve the result shown in that video. Initially I tried a similar approach to yours, but it seems cv2.VideoCapture() fails to read from named pipes; some more pre-processing is required.

ffmpeg is the way to go! You can compile and install ffmpeg by following the instructions at this link: https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu
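
On recent Ubuntu releases the packaged build may also be sufficient, assuming the repository version is new enough for your needs:

sudo apt-get install ffmpeg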

Once it is installed, you can change your code like so:

import cv2
import subprocess as sp
import numpy

FFMPEG_BIN = "ffmpeg"
command = [ FFMPEG_BIN,
        '-i', 'fifo',             # fifo is the named pipe
        '-pix_fmt', 'bgr24',      # OpenCV expects the bgr24 pixel format
        '-vcodec', 'rawvideo',
        '-an', '-sn',             # disable audio and subtitle streams (there is no audio anyway)
        '-f', 'image2pipe', '-']    
pipe = sp.Popen(command, stdout = sp.PIPE, bufsize=10**8)

while True:
    # Capture frame-by-frame
    raw_image = pipe.stdout.read(640*480*3)    # one bgr24 frame is width*height*3 bytes
    if len(raw_image) != 640*480*3:
        break                                  # incomplete frame: the stream has ended
    # transform the bytes read into a numpy array
    image = numpy.frombuffer(raw_image, dtype='uint8')
    image = image.reshape((480, 640, 3))       # notice how height is specified first and then width
    cv2.imshow('Video', image)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()

There is no need to change anything else in the Raspberry Pi side script.

This worked like a charm for me. The video lag was negligible. Hope it helps.
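
As a small extension, the same ffmpeg invocation can be wrapped in a generator so the read loop is reusable. This is only a sketch; the function name raw_frames and its defaults are mine, not part of the original answer:

import subprocess as sp
import numpy as np

def raw_frames(path, width=640, height=480):
    # Yield bgr24 frames that ffmpeg decodes from `path` (a file or a named pipe)
    frame_bytes = width * height * 3
    command = ['ffmpeg', '-i', path,
               '-pix_fmt', 'bgr24', '-vcodec', 'rawvideo',
               '-an', '-sn', '-f', 'image2pipe', '-']
    pipe = sp.Popen(command, stdout=sp.PIPE, bufsize=10**8)
    try:
        while True:
            buf = pipe.stdout.read(frame_bytes)
            if len(buf) != frame_bytes:
                return                         # the stream ended mid-frame
            yield np.frombuffer(buf, dtype=np.uint8).reshape((height, width, 3))
    finally:
        pipe.terminate()

The display loop then reduces to iterating for image in raw_frames('fifo') and calling cv2.imshow plus the usual cv2.waitKey handling.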




Answer 2:


I was working on a similar problem, and with a little more research I eventually stumbled upon the following:

Skip to the solution: https://stackoverflow.com/a/48675107/2355051

I ended up adapting this picamera Python recipe.

On the Raspberry Pi: (createStream.py)

import io
import socket
import struct
import time
import picamera

# Connect a client socket to the processing machine on port 777 (change
# 10.0.0.3 to the address of your server)
client_socket = socket.socket()
client_socket.connect(('10.0.0.3', 777))

# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)

        # Note the start time and construct a stream to hold image data
        # temporarily (we could write it directly to connection but in this
        # case we want to find out the size of each capture first to keep
        # our protocol simple)
        start = time.time()
        stream = io.BytesIO()
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            # Write the length of the capture to the stream and flush to
            # ensure it actually gets sent
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()

            # Rewind the stream and send the image data over the wire
            stream.seek(0)
            connection.write(stream.read())

            # Reset the stream for the next capture
            stream.seek(0)
            stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
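
The scripts use a simple length-prefixed protocol: each JPEG capture is preceded by its size packed as a 4-byte little-endian unsigned integer ('<L'), and a zero length signals the end of the stream. A minimal sketch of just the framing (the payload bytes are a placeholder):

import struct

payload = b'...jpeg bytes...'                # stands in for one JPEG capture
header = struct.pack('<L', len(payload))     # 4-byte little-endian unsigned length
frame = header + payload                     # what actually travels over the socket

(length,) = struct.unpack('<L', frame[:4])   # the receiver reads the 4-byte header first
assert frame[4:4 + length] == payload        # then exactly `length` payload bytes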

On the machine that is processing the stream: (processStream.py)

import io
import socket
import struct
import cv2
import numpy as np

# Start a socket listening for connections on 0.0.0.0:777 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 777))
server_socket.listen(0)

# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Decode the JPEG data with OpenCV
        data = np.frombuffer(image_stream.getvalue(), dtype=np.uint8)
        imagedisp = cv2.imdecode(data, 1)

        cv2.imshow("Frame", imagedisp)
        cv2.waitKey(1)  # imshow will not display the image if you do not call waitKey
finally:
    connection.close()
    server_socket.close()
    cv2.destroyAllWindows()  # clean up windows once the stream ends

This solution has similar results to the video I referenced in my original question. Larger resolution frames increase the latency of the feed, but this is tolerable for the purposes of my application.

First you need to run processStream.py on the processing machine, and then execute createStream.py on the Raspberry Pi.
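
For example (invocations are illustrative, assuming python points at an interpreter with the required packages):

python processStream.py   # on the processing machine, start the listener first
python createStream.py    # then on the Raspberry Pi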



Source: https://stackoverflow.com/questions/35166111/opencv-python-reading-video-from-named-pipe
