Good day, everyone!
I am writing a Windows application that captures the screen and sends the stream to a Wowza server over RTMP (for broadcasting). My application uses ffmpeg and Qt. I capture the screen with the WinAPI, convert the buffer to YUV444 (because it's the simplest) and encode frames as described in the file decoding_encoding.c (from the FFmpeg examples):
///////////////////////////
//Encoder initialization
///////////////////////////
AVCodec *codec;
AVCodecContext *c = NULL;
AVFrame *frame;
AVPacket pkt;
int got_output, ret;

avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
c = avcodec_alloc_context3(codec);
c->width = scr_width;
c->height = scr_height;
c->bit_rate = 400000;
c->time_base = (AVRational){1, 1}; // one frame per second
c->gop_size = 10;
c->max_b_frames = 1;
c->pix_fmt = AV_PIX_FMT_YUV444P;
av_opt_set(c->priv_data, "preset", "slow", 0);
avcodec_open2(c, codec, NULL); // was missing: the encoder must be opened before use
frame = avcodec_alloc_frame();
frame->format = c->pix_fmt;
frame->width = c->width;
frame->height = c->height;
// was missing: allocate the frame buffers that RGBtoYUV fills (freed later via av_freep)
av_image_alloc(frame->data, frame->linesize, c->width, c->height, c->pix_fmt, 32);
for (int counter = 0; counter < 10; counter++)
{
    ///////////////////////////
    // Capturing Screen
    ///////////////////////////
    GetCapScr(shotbuf, scr_width, scr_height); // result: shotbuf is filled with screen data from the HBITMAP
    ///////////////////////////
    // Convert buffer to YUV444 (standard formula)
    // Handmade function, because of problems preparing the HBITMAP buffer for swscale
    ///////////////////////////
    RGBtoYUV(shotbuf, frame->linesize, frame->data, scr_width, scr_height); // result in frame->data
    ///////////////////////////
    // Encode Screenshot
    ///////////////////////////
    av_init_packet(&pkt);
    pkt.data = NULL; // packet data will be allocated by the encoder
    pkt.size = 0;
    frame->pts = counter;
    avcodec_encode_video2(c, &pkt, frame, &got_output);
    if (got_output)
    {
        // I think that sending the packet over RTMP must happen here!
        av_free_packet(&pkt);
    }
}
// Get the delayed frames
for (got_output = 1; got_output; )
{
    ret = avcodec_encode_video2(c, &pkt, NULL, &got_output);
    if (ret < 0)
    {
        fprintf(stderr, "Error encoding frame\n");
        exit(1);
    }
    if (got_output)
    {
        // I think that sending the packet over RTMP must happen here!
        av_free_packet(&pkt);
    }
}
///////////////////////////
//Deinitialize encoder
///////////////////////////
avcodec_close(c);
av_free(c);
av_freep(&frame->data[0]);
avcodec_free_frame(&frame);
I need to send the video stream generated by this code to an RTMP server. In other words, I need a C/C++ analog of this command:
ffmpeg -re -i "sample.h264" -f flv rtmp://sample.url.com/screen/test_stream
That command is useful, but I don't want to save the stream to a file; I want to use the ffmpeg libraries inside my own application for real-time encoding of the screen capture and sending the encoded frames to the RTMP server. Please give me a small example of how to initialize an AVFormatContext properly and send my encoded video AVPackets to the server.
Thanks.
My problem can be solved by using an example from the FFmpeg sources: the file muxing.c
is what's needed. It's located in the folder doc/examples
of the FFmpeg source tree. It contains all the source code needed to write a sample stream to an RTMP server or a file. I only have to understand those sources and feed them my own stream data instead of the sample stream.
There could be unexpected problems, but in general there is a solution.
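To sketch what that looks like, here is a condensed outline of the muxing.c approach adapted to the RTMP case, using the same FFmpeg 1.x API generation as the question (codec, c, pkt and got_output are the variables from the code above; the URL is the placeholder from the question, and error handling is trimmed, so treat this as a sketch rather than a drop-in implementation):

```c
#include <libavformat/avformat.h>
#include <libavutil/mathematics.h>

AVFormatContext *ofmt_ctx = NULL;
AVStream *video_st = NULL;

av_register_all();
avformat_network_init();

/* "flv" is the container format RTMP expects */
avformat_alloc_output_context2(&ofmt_ctx, NULL, "flv",
                               "rtmp://sample.url.com/screen/test_stream");

video_st = avformat_new_stream(ofmt_ctx, codec);
avcodec_copy_context(video_st->codec, c); /* copy encoder parameters into the stream */
video_st->time_base = c->time_base;
if (ofmt_ctx->oformat->flags & AVFMT_GLOBALHEADER)
    c->flags |= CODEC_FLAG_GLOBAL_HEADER; /* set this before avcodec_open2() */

/* open the network connection and write the FLV header */
avio_open(&ofmt_ctx->pb, ofmt_ctx->filename, AVIO_FLAG_WRITE);
avformat_write_header(ofmt_ctx, NULL);

/* ... inside the capture loop, at the point marked
 * "sending the packet over RTMP must happen here": */
if (got_output) {
    pkt.stream_index = video_st->index;
    /* rescale pts/dts from the codec time base to the stream time base */
    pkt.pts = av_rescale_q(pkt.pts, c->time_base, video_st->time_base);
    pkt.dts = av_rescale_q(pkt.dts, c->time_base, video_st->time_base);
    av_interleaved_write_frame(ofmt_ctx, &pkt);
}

/* after the flush loop */
av_write_trailer(ofmt_ctx);
avio_close(ofmt_ctx->pb);
avformat_free_context(ofmt_ctx);
```

Note that av_interleaved_write_frame() takes ownership of the packet, so it replaces the av_free_packet() calls in the loops above, and the timestamp rescaling matters because the FLV muxer uses a different time base (milliseconds) than the 1/1 codec time base in the question.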
Source: https://stackoverflow.com/questions/15143918/how-to-publish-selfmade-stream-with-ffmpeg-and-c-to-rtmp-server