Question
Currently I have a problem generating a fragmented MP4 file from code using libavformat. My file can be played with VLC, but it can't be streamed (via WebSocket) and played (via MediaSource) in the browser (Chrome). (I used this to test streaming a fragmented MP4 file to the browser via WebSocket.)
Note: The files below are encoded with the Baseline profile, level 4, so you should change the MIME type (in index.html) to const mimeCodec = 'video/mp4; codecs="avc1.42C028"'; to be able to play them.
I checked and found that my generated MP4 file is a bit different from the file generated by the ffmpeg tool.
Here's what I've done:
I have a .h264 file
In the first approach, I use ffmpeg to generate the fragmented MP4 file:

    ffmpeg -i temp.h264 -vcodec copy -f mp4 -movflags empty_moov+default_base_moof+frag_keyframe ffmpeg.mp4

The generated file can be played by both QuickTime Player and VLC.
In the second approach, I programmatically generate the fragmented MP4 file using libavformat.
First, I initialize the context (codec in the code below is the AVCodecContext* of the input stream):

    av_register_all();
    avcodec_register_all();

    int ret;
    AVOutputFormat* fmt = av_guess_format("mp4", 0, 0);
    if (!fmt) {
        return;
    }
    AVFormatContext* ctx = avformat_alloc_context();

    // Create AVIO context to capture the generated MP4 content
    uint8_t *avio_ctx_buffer = NULL;
    size_t avio_ctx_buffer_size = 50000;
    IOOutput buffer = {};
    const size_t bd_buf_size = 50000;
    avio_ctx_buffer = (uint8_t*)av_malloc(avio_ctx_buffer_size);
    buffer.buff = (uint8_t*)av_malloc(bd_buf_size);
    buffer.size = bd_buf_size;
    AVIOContext* ioCtx = avio_alloc_context(avio_ctx_buffer, (int)avio_ctx_buffer_size, 1, &buffer, NULL, MP4Formatter::onWritePacket, NULL);

    ctx->oformat = fmt;
    if (ctx->oformat->flags & AVFMT_GLOBALHEADER)
        ctx->flags |= CODEC_FLAG_GLOBAL_HEADER;
    ctx->pb = ioCtx;
    av_opt_set(ctx, "movflags", "frag_keyframe+empty_moov+default_base_moof", 0);

    AVStream* st = avformat_new_stream(ctx, codec->codec);
    if (!st) {
        return;
    }
    st->id = (ctx->nb_streams - 1);
    avcodec_parameters_from_context(st->codecpar, codec);
    st->time_base = codec->time_base;
    ioCtx->seekable = false;
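The definition of IOOutput isn't included above; a minimal definition consistent with how it is used here (the types are inferred from the usage above, so treat this as a sketch) would be:

    // Hypothetical definition of IOOutput, matching buffer.buff / buffer.size above
    struct IOOutput {
        uint8_t* buff;  // output scratch buffer allocated with av_malloc above
        size_t   size;  // capacity of buff
    };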
Second, I implement the onWritePacket callback:
    int MP4Formatter::onWritePacket(void *opaque, uint8_t* buf, int buf_size) {
        file.write((char*)buf, buf_size);
    }
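For reference, as far as I know the AVIO write callback is expected to return the number of bytes it handled (or a negative AVERROR on failure); a variant of the callback following that convention (still assuming file is the same stream member used above) would be:

    // Sketch: same callback, but reporting how many bytes were consumed.
    int MP4Formatter::onWritePacket(void *opaque, uint8_t* buf, int buf_size) {
        file.write((char*)buf, buf_size);  // write the muxer output to the same sink as above
        return buf_size;                   // tell libavformat the whole buffer was handled
    }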
Third, for every packet from the .h264 file, I write it using av_interleaved_write_frame:

    if (firstFrame) {
        AVDictionary *opts = NULL;
        av_dict_set(&opts, "movflags", "frag_keyframe+empty_moov+default_base_moof", 0);

        if (!parseSPSPPS(data, length)) {
            return;
        }
        cout << "spslen " << spslen << " ppslen " << ppslen << endl;

        auto c = st->codecpar;

        // Extradata contains PPS & SPS for AVCC format
        int extradata_len = 8 + spslen + 1 + 2 + ppslen;
        c->extradata = (uint8_t*)av_mallocz(extradata_len);
        c->extradata_size = extradata_len;
        c->extradata[0] = 0x01;
        c->extradata[1] = sps[1];
        c->extradata[2] = sps[2];
        c->extradata[3] = sps[3];
        c->extradata[4] = 0xFC | 3;
        c->extradata[5] = 0xE0 | 1;
        int tmp = spslen;
        c->extradata[6] = (tmp >> 8) & 0x00ff;
        c->extradata[7] = tmp & 0x00ff;
        int i = 0;
        for (i = 0; i < tmp; i++) {
            c->extradata[8 + i] = sps[i];
        }
        c->extradata[8 + tmp] = 0x01;
        int tmp2 = ppslen;
        c->extradata[8 + tmp + 1] = (tmp2 >> 8) & 0x00ff;
        c->extradata[8 + tmp + 2] = tmp2 & 0x00ff;
        for (i = 0; i < tmp2; i++) {
            c->extradata[8 + tmp + 3 + i] = pps[i];
        }

        int ret = avformat_write_header(ctx, &opts);
        if (ret < 0) {
            return;
        }
        firstFrame = false;
    }

    AVPacket pkt;
    av_init_packet(&pkt);
    pkt.buf = av_buffer_alloc(length);
    memcpy(pkt.buf->data, data, length);
    pkt.buf->size = length;
    pkt.data = pkt.buf->data;
    pkt.size = pkt.buf->size;
    pkt.pts = ts;
    pkt.dts = ts;
    if (keyFrame) {
        pkt.flags |= AV_PKT_FLAG_KEY;
    } else {
        pkt.flags = 0;
    }
    pkt.stream_index = st->id;
    av_interleaved_write_frame(ctx, &pkt);
    av_buffer_unref(&pkt.buf);
    av_packet_unref(&pkt);
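Not shown above: when the input ends, the muxer still has to be closed out. A teardown roughly along these lines would normally follow (a sketch assuming ctx, ioCtx, and buffer are the objects created in the initialization step; exact cleanup calls can vary between FFmpeg versions):

    // Flush buffered packets and write the final fragment / trailer.
    av_write_trailer(ctx);

    // libavformat may have replaced the buffer given to avio_alloc_context,
    // so free ioCtx->buffer rather than the original avio_ctx_buffer pointer.
    av_freep(&ioCtx->buffer);
    avio_context_free(&ioCtx);   // on older libavformat versions: av_free(ioCtx)
    av_freep(&buffer.buff);      // the IOOutput scratch buffer allocated with av_malloc
    avformat_free_context(ctx);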
Can you guys take a look at my file to see what's wrong?
Source: https://stackoverflow.com/questions/42430809/different-between-fragmented-mp4-files-generated-by-ffmpeg-and-by-code