I'm attempting to stream video over a WiFi network using a Raspberry Pi Model 3 and the camera module, but have come across some inconsistencies in the other examples I have found.
Ultimately this was not a result of the pipe, the platform or permissions. The video that was being generated on the Raspberry Pi and piped to the python script was not being handled properly.
I ended up adapting this picamera Python recipe.
On the Raspberry Pi: (createStream.py)
import io
import socket
import struct
import time
import picamera
# Connect a client socket to my_server:777 (change the address to the
# hostname or IP of your server)
client_socket = socket.socket()
client_socket.connect(('10.0.0.3', 777))
# Make a file-like object out of the connection
connection = client_socket.makefile('wb')
try:
    with picamera.PiCamera() as camera:
        camera.resolution = (1024, 768)
        # Start a preview and let the camera warm up for 2 seconds
        camera.start_preview()
        time.sleep(2)
        # Note the start time and construct a stream to hold image data
        # temporarily (we could write it directly to connection but in this
        # case we want to find out the size of each capture first to keep
        # our protocol simple)
        start = time.time()
        stream = io.BytesIO()
        for foo in camera.capture_continuous(stream, 'jpeg', use_video_port=True):
            # Write the length of the capture to the stream and flush to
            # ensure it actually gets sent
            connection.write(struct.pack('<L', stream.tell()))
            connection.flush()
            # Rewind the stream and send the image data over the wire
            stream.seek(0)
            connection.write(stream.read())
            # Reset the stream for the next capture
            stream.seek(0)
            stream.truncate()
    # Write a length of zero to the stream to signal we're done
    connection.write(struct.pack('<L', 0))
finally:
    connection.close()
    client_socket.close()
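For clarity, the wire protocol used above is just a 4-byte little-endian unsigned integer length prefix before each JPEG frame, with a zero length signalling end of stream. Here is a minimal self-contained sketch of that framing (no camera or network involved, using a hypothetical frame_bytes payload):

import io
import struct

def write_frame(fileobj, frame_bytes):
    # Prefix each frame with its length as a 4-byte little-endian unsigned int
    fileobj.write(struct.pack('<L', len(frame_bytes)))
    fileobj.write(frame_bytes)

def read_frame(fileobj):
    # Returns None when the zero-length end-of-stream marker is read
    header = fileobj.read(struct.calcsize('<L'))
    (length,) = struct.unpack('<L', header)
    if length == 0:
        return None
    return fileobj.read(length)

# Round-trip through an in-memory buffer to illustrate the protocol
buf = io.BytesIO()
write_frame(buf, b'fake jpeg data')
buf.write(struct.pack('<L', 0))  # end-of-stream marker
buf.seek(0)
assert read_frame(buf) == b'fake jpeg data'
assert read_frame(buf) is None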
On the machine that is processing the stream: (processStream.py)
import io
import socket
import struct
import cv2
import numpy as np
# Start a socket listening for connections on 0.0.0.0:777 (0.0.0.0 means
# all interfaces)
server_socket = socket.socket()
server_socket.bind(('0.0.0.0', 777))
server_socket.listen(0)
# Accept a single connection and make a file-like object out of it
connection = server_socket.accept()[0].makefile('rb')
try:
    while True:
        # Read the length of the image as a 32-bit unsigned int. If the
        # length is zero, quit the loop
        image_len = struct.unpack('<L', connection.read(struct.calcsize('<L')))[0]
        if not image_len:
            break
        # Construct a stream to hold the image data and read the image
        # data from the connection
        image_stream = io.BytesIO()
        image_stream.write(connection.read(image_len))
        # Decode the JPEG data with OpenCV and do some processing on it
        data = np.frombuffer(image_stream.getvalue(), dtype=np.uint8)
        imagedisp = cv2.imdecode(data, 1)
        cv2.imshow("Frame", imagedisp)
        cv2.waitKey(1)  # imshow will not output an image if you do not use waitKey
finally:
    connection.close()
    server_socket.close()
    cv2.destroyAllWindows()  # clean up windows once the stream ends
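The imshow call above is just a placeholder for "do some processing"; imdecode returns an ordinary NumPy array in BGR order, so any OpenCV operation can be applied per frame. As an illustrative example (grayscale conversion plus Canny edge detection, chosen arbitrarily), the display step could be replaced with something like:

import cv2

def process_frame(imagedisp):
    # Illustrative per-frame processing: grayscale then edge detection
    gray = cv2.cvtColor(imagedisp, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    cv2.imshow("Edges", edges)
    cv2.waitKey(1)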
This solution produces results similar to the video I referenced in my original question. Larger-resolution frames increase the latency of the feed, but this is tolerable for the purposes of my application.
Note that processStream.py must be running on the receiving machine before you execute createStream.py on the Raspberry Pi, since the client will otherwise fail to connect.
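If latency at higher resolutions becomes a problem, the two obvious knobs on the sending side are the capture resolution and the JPEG quality; picamera's capture methods accept a quality option for the JPEG encoder. The standalone sketch below (with arbitrary values, measuring frame sizes rather than streaming) shows the idea:

import io
import time
import picamera

# Sketch: lower resolution and JPEG quality shrink each frame, which is
# the main lever on feed latency (the values here are illustrative)
with picamera.PiCamera() as camera:
    camera.resolution = (640, 480)  # smaller frames than (1024, 768)
    camera.start_preview()
    time.sleep(2)
    stream = io.BytesIO()
    sizes = []
    for _ in camera.capture_continuous(stream, 'jpeg',
                                       use_video_port=True, quality=20):
        sizes.append(stream.tell())  # bytes per frame at these settings
        stream.seek(0)
        stream.truncate()
        if len(sizes) >= 30:
            break
    print('average frame size: %d bytes' % (sum(sizes) // len(sizes)))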