Trying to stream video through the following chain: h264/mp4 file on local instance storage (AWS) -> ffmpeg -> RTP -> Janus on the same instance -> WebRTC playback (Chrome on macOS). The resulting video is choppy.
ffmpeg is optimized for outputting frames in chunks, not for outputting individual coded frames: the muxer, in your case the RTP muxer, normally buffers data before flushing it to the output. So out of the box, ffmpeg is not suited to real-time streaming that requires more or less frame-by-frame output. WebRTC, however, really needs frames to arrive in real time; when frames arrive in bursts, it may discard the "late" ones, hence the choppiness.
However, ffmpeg has an option to set the muxer's buffering delay to 0, and it works nicely:
-max_delay 0
Also, for WebRTC, you want to disable B-frames and prepend SPS/PPS to every keyframe:
-bf 0 -flags +global_header -bsf:v "dump_extra=freq=keyframe"
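Putting the pieces together, a full command might look like the sketch below. The input filename, the destination port, and the extra encoder options (`-tune zerolatency`, the baseline profile) are my assumptions for illustration, not from the original setup; adjust them to your Janus configuration.

```shell
# Sketch of a low-latency H.264 -> RTP pipeline for a WebRTC gateway.
# input.mp4 and the rtp:// destination are placeholders.
ffmpeg -re -i input.mp4 \
  -an -c:v libx264 \
  -profile:v baseline \
  -tune zerolatency \
  -bf 0 \
  -flags +global_header \
  -bsf:v "dump_extra=freq=keyframe" \
  -max_delay 0 \
  -f rtp rtp://127.0.0.1:8004
```

`-re` makes ffmpeg read the input at its native frame rate, so frames are produced in real time rather than as fast as the disk allows; `-tune zerolatency` additionally disables x264's internal lookahead buffering, which would otherwise delay frames on the encoder side before they even reach the muxer.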
The solution proved to be beautiful in its obviousness. ffmpeg sent the stream to Janus as RTP; Janus forwarded it to viewers as SRTP, because this is WebRTC and it is always encrypted. The encryption added a handful of bytes of overhead to each packet. In some cases that pushed packets over the MTU and they were dropped, and each drop produced a visible jerk in the video.
Simply appending ?pkt_size=1300 to ffmpeg's output RTP URL removed the problem.
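To see why 1300 bytes is a safe cap, consider a typical 1500-byte Ethernet MTU and the usual default header sizes (these numbers are the common defaults, not measured from this setup): 1300 bytes of payload + 12-byte RTP header + roughly 10 bytes of SRTP authentication tag + 20-byte IPv4 header + 8-byte UDP header is about 1350 bytes, leaving comfortable headroom even after Janus re-encrypts the stream. The output URL then looks like this (encoding flags elided, destination port is a placeholder):

```shell
# pkt_size caps the RTP payload so the packet survives SRTP overhead
# without exceeding the path MTU.
ffmpeg -re -i input.mp4 ... -f rtp "rtp://127.0.0.1:8004?pkt_size=1300"
```

Quoting the URL matters in most shells, since `?` is a glob character.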
Thanks to Lorenzo Miniero of Meetecho for figuring this out.