How to create multi-bitrate DASH content using the ffmpeg dash muxer

Asked by 挽巷 on 2021-02-04 12:49

The ffmpeg documentation says that we can use the dash muxer to create DASH segments and a manifest file with just a single command, like:

ffmpeg -re -i 

        
3 Answers
  • 2021-02-04 13:29

    Ok, so this is how I resolved my problem. The following commands are useful for implementing pseudo-live DASH content (that is, when you want to stream an existing video file as if it were live), but the same approach also works for on-demand video. First, we transform an input video file (sample.divx) into another file that is well prepared for DASH streaming - sample_dash.mp4:

    ffmpeg -y -i sample.divx ^
      -c:v libx264 -x264opts "keyint=24:min-keyint=24:no-scenecut" -r 24 ^
      -c:a aac -b:a 128k ^
      -bf 1 -b_strategy 0 -sc_threshold 0 -pix_fmt yuv420p ^
      -map 0:v:0 -map 0:a:0 -map 0:v:0 -map 0:a:0 -map 0:v:0 -map 0:a:0 ^
      -b:v:0 250k  -filter:v:0 "scale=-2:240" -profile:v:0 baseline ^
      -b:v:1 750k  -filter:v:1 "scale=-2:480" -profile:v:1 main ^
      -b:v:2 1500k -filter:v:2 "scale=-2:720" -profile:v:2 high ^
      sample_dash.mp4
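    A side note on the `scale=-2:240` filters (my addition, not part of the original answer): libx264 with `yuv420p` needs even frame dimensions, and `-2` asks ffmpeg to pick an aspect-preserving width rounded to an even number. A rough sketch of that arithmetic, assuming a hypothetical 1920x1080 source (exact ffmpeg rounding may differ slightly):

```shell
# Approximate the width that scale=-2:240 would produce for a 1920x1080 input:
# compute the aspect-preserving width, then round it up to an even number.
src_w=1920; src_h=1080; target_h=240
w=$(( src_w * target_h / src_h ))   # truncated aspect-preserving width
w=$(( (w + 1) / 2 * 2 ))            # round up to the nearest even number
echo "$w"
```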
    

    I'm saying sample_dash.mp4 is well prepared because it is encoded in a DASH-friendly format - H.264/AAC - and it contains multiple (3) video streams at different qualities (baseline, main, and high profiles). The ffmpeg dash muxer will translate these 3 video streams into the corresponding alternative-quality DASH segment files. Here is how:

    ffmpeg -y -re -i sample_dash.mp4 ^
      -map 0 ^
      -use_timeline 1 -use_template 1 -window_size 5 -adaptation_sets "id=0,streams=v id=1,streams=a" ^
      -f dash sample.mpd
    

    The -re flag tells ffmpeg to process the input video at its native frame rate (i.e. in realtime), which is useful for pseudo-live streaming.
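    A quick way to verify that the intermediate file really contains three video and three audio streams (not part of the original answer; ffprobe ships with ffmpeg):

```shell
# List index, type, codec, resolution and bitrate of every stream
# in sample_dash.mp4; expect 3 video and 3 audio entries.
ffprobe -v error \
  -show_entries stream=index,codec_type,codec_name,width,height,bit_rate \
  -of csv=p=0 sample_dash.mp4
```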

  • 2021-02-04 13:37

    With the help of this answer and the documentation, here is how to do it in a single command:

    ffmpeg -i $inputFile \
      -map 0:v:0 -map 0:a:0\? -map 0:v:0 -map 0:a:0\? -map 0:v:0 -map 0:a:0\? \
      -map 0:v:0 -map 0:a:0\? -map 0:v:0 -map 0:a:0\? -map 0:v:0 -map 0:a:0\? \
      -b:v:0 350k  -c:v:0 libx264 -filter:v:0 "scale=320:-1"  \
      -b:v:1 1000k -c:v:1 libx264 -filter:v:1 "scale=640:-1"  \
      -b:v:2 3000k -c:v:2 libx264 -filter:v:2 "scale=1280:-1" \
      -b:v:3 245k  -c:v:3 libvpx-vp9 -filter:v:3 "scale=320:-1"  \
      -b:v:4 700k  -c:v:4 libvpx-vp9 -filter:v:4 "scale=640:-1"  \
      -b:v:5 2100k -c:v:5 libvpx-vp9 -filter:v:5 "scale=1280:-1"  \
      -use_timeline 1 -use_template 1 -window_size 6 -adaptation_sets "id=0,streams=v  id=1,streams=a" \
      -hls_playlist true -f dash output/output.mpd
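    To try the result in a browser player such as dash.js, the output directory has to be served over HTTP - players cannot load an .mpd straight from the local filesystem. A minimal local-server sketch (my addition), assuming the files landed in output/:

```shell
# Serve the DASH output on http://localhost:8000/output.mpd
# (python3's built-in static server; any static HTTP server works)
python3 -m http.server 8000 --directory output
```

    Note that `-hls_playlist true` additionally makes the dash muxer write HLS playlists (master.m3u8 and per-stream media playlists) next to the DASH manifest, so the same segments can be consumed by HLS clients.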
    
  • 2021-02-04 13:41

    The problem is where you think the filter is applied. In the ffmpeg pipeline, video filters are applied "after" the streams are decoded and "before" they are encoded (no matter where you put them on the command line).

    As a consequence, they cannot be used the way you are using them.

    Probably the best way in your case is to use a filter complex that, immediately after the input is decoded, first splits the video into several intermediate streams, then applies a different scaling to each of them, and finally passes their outputs to the encoders.

    Something like this (I'm reducing it to two variants for brevity; I'm sure you can readapt it for 6):

    ffmpeg -i $inputFile -filter_complex \
      "[0]split=2[mid0][mid1];[mid0]scale=320:-1[out0];[mid1]scale=640:-1[out1]" \
      -map "[out0]" -map 0:a -map "[out1]" -map 0:a \
      -c:a aac -c:v:0 libx264 -c:v:1 libvpx-vp9 \
      -use_timeline 1 -use_template 1 -window_size 6 \
      -adaptation_sets "id=0,streams=v id=1,streams=a" \
      -hls_playlist true -f dash output/output.mpd
    

    It's just an example; I hope it puts you on the right track :)
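    For completeness, here is one way this split approach might be readapted to the six variants from the earlier answer. This is my extrapolation, not from the answer itself, and untested; the bitrate/scale pairs simply mirror the earlier ladder, and `scale=W:-2` is used so the heights stay even as yuv420p encoders require:

```shell
# Decode once, split into six branches, scale each branch, then encode
# three H.264 and three VP9 renditions into one DASH output (sketch).
ffmpeg -i "$inputFile" -filter_complex \
  "[0]split=6[m0][m1][m2][m3][m4][m5];[m0]scale=320:-2[v0];[m1]scale=640:-2[v1];[m2]scale=1280:-2[v2];[m3]scale=320:-2[v3];[m4]scale=640:-2[v4];[m5]scale=1280:-2[v5]" \
  -map "[v0]" -map "[v1]" -map "[v2]" -map "[v3]" -map "[v4]" -map "[v5]" -map 0:a \
  -c:v:0 libx264 -b:v:0 350k -c:v:1 libx264 -b:v:1 1000k -c:v:2 libx264 -b:v:2 3000k \
  -c:v:3 libvpx-vp9 -b:v:3 245k -c:v:4 libvpx-vp9 -b:v:4 700k -c:v:5 libvpx-vp9 -b:v:5 2100k \
  -c:a aac \
  -use_timeline 1 -use_template 1 -window_size 6 \
  -adaptation_sets "id=0,streams=v id=1,streams=a" \
  -f dash output/output.mpd
```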
