Question
I have set up a pipeline in which I live-stream video to Kinesis Video Streams (KVS), which sends the frames to Amazon Rekognition for face recognition, which in turn publishes the results to a Kinesis Data Stream (KDS). Finally, KDS sends the results to a Lambda.
For a frame on which face recognition has been conducted, I get the JSON of the following format: https://docs.aws.amazon.com/rekognition/latest/dg/streaming-video-kinesis-output-reference.html
My aim: using this JSON, I want to get an image representation of the frame that was recorded by KVS.
What I have tried:
This JSON provides me with the fragment number.
I use this fragment number to make a call to get_media_for_fragment_list.
The above call returns a key called Payload in the response.
I have been trying to render this payload into an image.
However, I fail every time, as I do not know how to make sense of this payload or decode it.
Following is the code snippet:

import boto3

def getFrameFromFragment(fragment):
    # data_endpoint_for_kvs is the endpoint returned by a prior get_data_endpoint call
    client = boto3.client('kinesis-video-archived-media',
                          endpoint_url=data_endpoint_for_kvs)
    response = client.get_media_for_fragment_list(
        StreamName='kvs1',
        Fragments=[
            fragment,
        ]
    )
    payload = response['Payload']
    print(payload.read())
How do I use this payload to get an image?
I know of parsers that exist in Java: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/examples-renderer.html
However, I wanted to know of a solution in Python.
In case my question statement is wrong or doesn't make sense, feel free to ask me more about this issue.
Thanks for the help. :)
Answer 1:
After receiving the payload using the following code,
kvs_stream = kvs_video_client.get_media(
    StreamARN="ARN",
    StartSelector={
        'StartSelectorType': 'FRAGMENT_NUMBER',
        'AfterFragmentNumber': decoded_json_from_stream['InputInformation']['KinesisVideo']['FragmentNumber']
    }
)
you can use

frame = kvs_stream['Payload'].read()

to get the media bytes out of the payload. Now you can write these bytes to a video file and then extract a particular frame from that file using OpenCV.
import cv2

with open('/tmp/stream.avi', 'wb') as f:
    f.write(frame)

cap = cv2.VideoCapture('/tmp/stream.avi')
# use cap to read individual frames for further processing
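A minimal sketch of that OpenCV step might look like the following; whether OpenCV can demux the raw GetMedia payload directly depends on the FFmpeg support in your OpenCV build, and the output path is arbitrary:

import cv2

cap = cv2.VideoCapture('/tmp/stream.avi')
# Grab frames one by one; here we simply save the first frame that decodes.
success, image = cap.read()   # image is a BGR numpy array
if success:
    cv2.imwrite('/tmp/frame0.jpg', image)
cap.release()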
Answer 2:
The response from GetMedia is a stream in the MKV packaging format. First, you would need a Python library that extracts the frames from the MKV container (https://github.com/vi/mkvparse or similar). Next, your stream is likely to be encoded, for example with H.264, so you will also need to decode the frame in order to get the actual bitmap of the image, if that is what you need. A few software-based decoders appear to be available for Python: https://github.com/DaWelter/h264decoder
I am not familiar with these projects, though.
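One library not mentioned above that handles both steps (MKV demuxing and H.264 decoding) is PyAV, which wraps FFmpeg; treat this as a sketch under the assumption that installing PyAV and Pillow is acceptable:

import av  # PyAV: pip install av

# Demux and decode the MKV payload that was written to disk,
# saving the first decoded video frame as a JPEG.
container = av.open('/tmp/stream.mkv')
for frame in container.decode(video=0):
    frame.to_image().save('/tmp/frame0.jpg')  # to_image() requires Pillow
    break
container.close()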
Answer 3:
The payload that you are getting is in MKV format: https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/API_reader_GetMediaForFragmentList.html#API_reader_GetMediaForFragmentList_ResponseSyntax. To get an image, you just need to get a key frame in that fragment and convert it to an image.
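One way to do exactly that, assuming ffmpeg is available on the machine (the answer does not prescribe a tool, so this is just one option), is to write the payload to a file and ask ffmpeg for the first key frame:

import subprocess

# Save the GetMediaForFragmentList payload to disk first.
with open('/tmp/fragment.mkv', 'wb') as f:
    f.write(payload.read())

# -skip_frame nokey makes the decoder consider only key frames;
# -frames:v 1 stops after the first one.
subprocess.run([
    'ffmpeg', '-y',
    '-skip_frame', 'nokey',
    '-i', '/tmp/fragment.mkv',
    '-frames:v', '1',
    '/tmp/keyframe.jpg',
], check=True)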
Answer 4:
The following code snippet can be used. You can further tune how much data is fetched per round by changing the size passed to the .read() method. You can then use the resulting video file fname as per your requirement.
import boto3

# Wrapper function is assumed here; the original snippet showed only the body.
def saveFragmentToFile(dataEndPoint, streamARN, fragmentID, serverTimestamp):
    client = boto3.client('kinesis-video-media', endpoint_url=dataEndPoint)
    response = client.get_media(
        StreamARN=streamARN,
        StartSelector={
            'StartSelectorType': 'FRAGMENT_NUMBER',
            'AfterFragmentNumber': fragmentID}
    )
    # Stream the payload to disk in 8 KB chunks.
    fname = '/tmp/' + fragmentID + '-' + serverTimestamp + '.webm'
    with open(fname, 'wb+') as f:
        chunk = response['Payload'].read(1024 * 8)
        while chunk:
            f.write(chunk)
            chunk = response['Payload'].read(1024 * 8)
    return fname
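To turn the saved file into a single image, one possible follow-up (not part of the original answer) is to seek to the offset that Rekognition reports for the analysed frame, assuming the record carries a field such as InputInformation.KinesisVideo.FrameOffsetInSeconds, and grab that frame with OpenCV:

import cv2

# fname comes from the snippet above; frame_offset_seconds is assumed to be read
# from the Rekognition record (e.g. InputInformation.KinesisVideo.FrameOffsetInSeconds).
def extractFrameAtOffset(fname, frame_offset_seconds, out_path='/tmp/match.jpg'):
    cap = cv2.VideoCapture(fname)
    # Seek (in milliseconds) to the offset of the frame Rekognition analysed.
    cap.set(cv2.CAP_PROP_POS_MSEC, frame_offset_seconds * 1000)
    success, image = cap.read()
    cap.release()
    if success:
        cv2.imwrite(out_path, image)
    return success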
Source: https://stackoverflow.com/questions/60912980/using-python-to-parse-and-render-kinesis-video-streams-and-get-an-image-represen