I've built some code that uses the MediaRecorder API to capture audio and video, then sends the resulting WebM blobs from the `ondataavailable` event handler up to a server via WebSockets. The server relays those blobs over WebSockets to a client, which reassembles the video in a buffer using the Media Source Extensions API.
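For context, the capture side looks roughly like this (simplified; the `wss://example.com/ingest` URL, timeslice, and codec string are placeholders, not my actual values):

```javascript
// Capture audio+video and ship each WebM chunk over a WebSocket.
const ws = new WebSocket("wss://example.com/ingest");

navigator.mediaDevices.getUserMedia({ audio: true, video: true })
  .then((stream) => {
    const recorder = new MediaRecorder(stream, {
      mimeType: 'video/webm; codecs="vp8,opus"',
    });
    // ondataavailable fires with a blob roughly every timeslice ms.
    recorder.ondataavailable = (event) => {
      if (event.data.size > 0 && ws.readyState === WebSocket.OPEN) {
        ws.send(event.data); // each blob is a slice of one continuous WebM file
      }
    };
    recorder.start(1000); // timeslice in milliseconds
  });
```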
This works well, except that if I want to start the stream partway through, I can't just send the latest blob, because a blob by itself is unplayable. Also, if I send the blobs out of order, the browser usually complains that the audio encoding doesn't match up.
I really don't know as much about video containers, codecs, etc. as I should to pull this off, but my question is: how can I play those blobs as standalone videos? Can I somehow use code to add the information from the first blob (which is playable on its own) onto the other blobs? What would be a good approach to getting the stream playing partway through? I would transcode, but it seems to take too long, and I want streaming to be real-time (or close to it).
Thanks!
With MSE, you can load the first chunk, which contains the WebM initialization segment (track info and so on), and then start appending a cluster from later in the stream. The browser will figure it out.
WebM clusters begin with timestamps, which enable this to work.
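A rough sketch of that on the playback side (the endpoint and codec string are placeholders, mirroring the question's setup):

```javascript
// Append the stream-start blob (init segment) once, then append later
// clusters as they arrive; their timestamps tell the browser where they go.
const video = document.querySelector("video");
const mediaSource = new MediaSource();
video.src = URL.createObjectURL(mediaSource);

mediaSource.addEventListener("sourceopen", () => {
  const sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vp8,opus"');
  const queue = [];

  // SourceBuffer only accepts one append at a time, so drain a queue.
  sourceBuffer.addEventListener("updateend", () => {
    if (queue.length > 0 && !sourceBuffer.updating) {
      sourceBuffer.appendBuffer(queue.shift());
    }
  });

  const ws = new WebSocket("wss://example.com/watch");
  ws.binaryType = "arraybuffer";
  ws.onmessage = (event) => {
    // The first message must be the initialization segment; anything after
    // that can be a cluster from partway through the stream.
    if (sourceBuffer.updating || queue.length > 0) {
      queue.push(event.data);
    } else {
      sourceBuffer.appendBuffer(event.data);
    }
  };
});
```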
Only the first blob received from the MediaRecorder API contains the header. So you will need to extract it and prepend it to your other blobs to make them playable as standalone WebM videos. I recommend verifying that this works with a tool like a hex editor, and you can then automate the process on your server.
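A sketch of that prepend step, shown in the browser for brevity (the same byte concatenation works server-side; `recorder` is the MediaRecorder from the question's setup, and `sendToServer` is a placeholder for your WebSocket send):

```javascript
// Keep the first blob (EBML header + track info) and prepend it to each
// later chunk so every chunk becomes a self-contained WebM file.
let initSegment = null;

recorder.ondataavailable = (event) => {
  if (!initSegment) {
    initSegment = event.data; // first blob: the only one with the header
    sendToServer(initSegment);
  } else {
    const standalone = new Blob([initSegment, event.data], { type: "video/webm" });
    sendToServer(standalone); // playable on its own in a <video> element
  }
};
```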
Source: https://stackoverflow.com/questions/47820722/playing-webm-chunks-as-standalone-video