Question
I have a simple Python script using OpenCV that takes in a video and runs YOLO object detection on it. My question is: how can I display the output on my website as a live stream?
Here is the Python code, which saves the output to output.avi.
import cv2
from darkflow.net.build import TFNet
import numpy as np
import time
import pafy

options = {
    'model': 'cfg/tiny-yolo.cfg',
    'load': 'bin/yolov2-tiny.weights',
    'threshold': 0.2,
    'gpu': 0.75
}
tfnet = TFNet(options)
colors = [tuple(255 * np.random.rand(3)) for _ in range(10)]

capture = cv2.VideoCapture()
capture.open("rtmp://888888888888888")
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480))
#capture = cv2.VideoCapture(url)
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)

while True:
    stime = time.time()
    ret, frame = capture.read()
    if ret:
        results = tfnet.return_predict(frame)
        for color, result in zip(colors, results):
            if result['label'] == 'person':
                tl = (result['topleft']['x'], result['topleft']['y'])
                br = (result['bottomright']['x'], result['bottomright']['y'])
                label = result['label']
                confidence = result['confidence']
                text = '{}: {:.0f}%'.format(label, confidence * 100)
                frame = cv2.rectangle(frame, tl, br, color, 5)
                frame = cv2.putText(
                    frame, text, tl, cv2.FONT_HERSHEY_COMPLEX, 0.8, (0, 0, 0), 2)
        out.write(frame)
        cv2.imshow('frame', frame)
        print('FPS {:.1f}'.format(1 / (time.time() - stime)))
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

capture.release()
out.release()
cv2.destroyAllWindows()
Answer 1:
Instead of writing to a file, you can stream the images over your local network using ffmpeg or GStreamer and view the stream in a player. Alternatively, you can serve it from a simple Flask server with an HTML page; see here: https://blog.miguelgrinberg.com/post/video-streaming-with-flask
Answer 2:
I'm a bit late, but you can use my VidGear Python library's WebGear, a powerful ASGI video-streamer API built upon Starlette - a lightweight ASGI framework/toolkit. This API is available on the testing branch only, so install it with the following commands:
Requirement: works with Python 3.6+ only.
git clone https://github.com/abhiTronix/vidgear.git
cd vidgear
git checkout testing
sudo pip3 install .
sudo pip3 install uvicorn  # additional dependency
cd
Then you can use this complete Python example, which runs a video server at http://0.0.0.0:8000/ that any browser on the network can open:
# import libs
import uvicorn
from vidgear.gears import WebGear

# various performance tweaks
options = {
    "frame_size_reduction": 40,
    "frame_jpeg_quality": 80,
    "frame_jpeg_optimize": True,
    "frame_jpeg_progressive": False
}

# initialize WebGear app with a suitable video file (e.g. `foo.mp4`)
web = WebGear(source="foo.mp4", logging=True, **options)

# run this app on a Uvicorn server at address http://0.0.0.0:8000/
uvicorn.run(web(), host='0.0.0.0', port=8000)

# close app safely
web.shutdown()
Documentation
If you still get an error, raise an issue in its GitHub repo.
Answer 3:
Encode the frames in JPEG format using cv2.imencode and pass them to an <img> tag in the browser. Remove cv2.imshow and waitKey: they don't help display anything in the browser, they keep your server from starting, and after you press 'q' the program stops, so there is no output left to display.
Instead, make a Flask app with an API endpoint that returns the frames. Don't forget to encode in JPEG format, otherwise it won't work.
Source: https://stackoverflow.com/questions/49698258/how-to-send-opencv-output-to-browser-with-python