Question
I'd like to pipe ffmpeg segments to s3 without writing them to disk.
ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 20 output_%04d.mkv
Is it possible to modify this command so that ffmpeg writes segments to an s3 bucket? Something like this perhaps?
ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 20 pipe:1 \
| aws s3 cp - s3://bucket/output_%04d.mkv
When I run the command above, I receive this error:
Could not write header for output file #0
(incorrect codec parameters ?): Muxer not found
This command works, but the entire video is uploaded rather than the individual segments:
ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 20 pipe:output_%04d.mkv \
| aws s3 cp - s3://bucket/test.mkv
Answer 1:
It works with s3fs. Tested on Ubuntu 18.04.4 LTS.
s3fs version:
root@ip-172-31-69-62:~# s3fs --version
Amazon Simple Storage Service File System V1.86 (commit:unknown) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
root@ip-172-31-69-62:~#
I compiled it from source; I could never get it to work with the version installed via 'apt install s3fs'. You need ~/.aws/credentials properly configured, and then you just mount a folder:
root@ip-172-31-69-62:~# s3fs sm-alfa-beta /mnt/s5
Don't pipe anything; treat the mount as a regular folder and the files land in the S3 bucket.
root@ip-172-31-69-62:~# ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 5 /mnt/s5/output_%04d.mkv
ffmpeg version 3.4.6-0ubuntu0.18.04.1 Copyright (c) 2000-2019 the FFmpeg developers
built with gcc 7 (Ubuntu 7.3.0-16ubuntu3)
configuration: --prefix=/usr --extra-version=0ubuntu0.18.04.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --enable-gpl --disable-stripping --enable-avresample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librubberband --enable-librsvg --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-omx --enable-openal --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-chromaprint --enable-frei0r --enable-libopencv --enable-libx264 --enable-shared
libavutil 55. 78.100 / 55. 78.100
libavcodec 57.107.100 / 57.107.100
libavformat 57. 83.100 / 57. 83.100
libavdevice 57. 10.100 / 57. 10.100
libavfilter 6.107.100 / 6.107.100
libavresample 3. 7. 0 / 3. 7. 0
libswscale 4. 8.100 / 4. 8.100
libswresample 2. 9.100 / 2. 9.100
libpostproc 54. 7.100 / 54. 7.100
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 't2.mp4':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
creation_time : 2014-07-18T06:00:15.000000Z
Duration: 00:00:21.29, start: 0.000000, bitrate: 14904 kb/s
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], 14517 kb/s, 25 fps, 25 tbr, 25k tbn, 50 tbc (default)
Metadata:
creation_time : 2014-07-18T06:00:15.000000Z
handler_name : ?Mainconcept Video Media Handler
encoder : AVC Coding
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s (default)
Metadata:
creation_time : 2014-07-18T06:00:15.000000Z
handler_name : #Mainconcept MP4 Sound Media Handler
[segment @ 0x55e4b1d6d660] Opening '/mnt/s5/output_0000.mkv' for writing
Output #0, segment, to '/mnt/s5/output_%04d.mkv':
Metadata:
major_brand : mp42
minor_version : 0
compatible_brands: mp42mp41
encoder : Lavf57.83.100
Stream #0:0(eng): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709), 1280x720 [SAR 1:1 DAR 16:9], q=2-31, 14517 kb/s, 25 fps, 25 tbr, 1k tbn, 25 tbc (default)
Metadata:
creation_time : 2014-07-18T06:00:15.000000Z
handler_name : ?Mainconcept Video Media Handler
encoder : AVC Coding
Stream #0:1(eng): Audio: aac (LC) (mp4a / 0x6134706D), 48000 Hz, stereo, fltp, 189 kb/s (default)
Metadata:
creation_time : 2014-07-18T06:00:15.000000Z
handler_name : #Mainconcept MP4 Sound Media Handler
Stream mapping:
Stream #0:0 -> #0:0 (copy)
Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[segment @ 0x55e4b1d6d660] Opening '/mnt/s5/output_0001.mkv' for writing
[segment @ 0x55e4b1d6d660] Opening '/mnt/s5/output_0002.mkv' for writing
[segment @ 0x55e4b1d6d660] Opening '/mnt/s5/output_0003.mkv' for writing
[segment @ 0x55e4b1d6d660] Opening '/mnt/s5/output_0004.mkv' for writing
frame= 531 fps=284 q=-1.0 Lsize=N/A time=00:00:21.22 bitrate=N/A speed=11.4x
video:37640kB audio:491kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Here are the segments:
root@ip-172-31-69-62:~# ls -l /mnt/s5
total 38150
-rw-r--r-- 1 root root 9542771 Jul 7 20:01 output_0000.mkv
-rw-r--r-- 1 root root 9464801 Jul 7 20:01 output_0001.mkv
-rw-r--r-- 1 root root 10072341 Jul 7 20:01 output_0002.mkv
-rw-r--r-- 1 root root 8269715 Jul 7 20:01 output_0003.mkv
-rw-r--r-- 1 root root 1714287 Jul 7 20:01 output_0004.mkv
root@ip-172-31-69-62:~#
Instructions to compile s3fs on Ubuntu 18.04.4:
sudo apt-get install build-essential libcurl4-openssl-dev libxml2-dev pkg-config libssl-dev libfuse-dev automake
cd /tmp && \
git clone https://github.com/s3fs-fuse/s3fs-fuse.git && \
cd s3fs-fuse && \
./autogen.sh && \
./configure && \
make
sudo make install
Answer 2:
I don't believe aws s3 supports piping multiple files from stdin, whether from ffmpeg or any other command. Looking at the CLI docs, I see no mention of a protocol over stdin that would support that. Even if such a scheme existed, it would be fiddly to work with: the stream would presumably have to include the length of each file, or use some framing scheme to encode the separate file contents within a single stream of data, and there's no reason to believe ffmpeg would be compatible with it.
If your goal is to avoid writing to physical disks, I'd suggest creating the files you need in memory, using a memory-backed filesystem like tmpfs. The benefit of this approach is that you don't need to do anything special with the individual programs (ffmpeg and aws s3); they interact with the filesystem as normal, but the data is only ever written to RAM.
If that's not an option, I'd step back once more and consider how problematic these disk writes really are. The filesystem is, by design, how files are represented, so if you're uploading several files to AWS, the filesystem may well be the best interface. Are you sure your disks are really the bottleneck you need to address? If so, you may need an alternative to the ffmpeg command-line tool that can generate the segments in memory and stream them directly to S3; you may need to build such a utility yourself.
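A lightweight way to approximate such a utility with shell tools is to watch the segment directory and upload each file the moment ffmpeg closes it, so at most one finished segment sits on disk at a time. This is a hypothetical sketch: inotifywait comes from the inotify-tools package, and the directory and bucket names are made up.

```shell
# Hypothetical watcher: upload each segment as soon as ffmpeg
# closes it, then delete the local copy. Requires inotify-tools.
SEGDIR=/tmp/seg
mkdir -p "$SEGDIR"

# -m keeps inotifywait running; close_write fires when a segment is complete.
inotifywait -m -e close_write --format '%f' "$SEGDIR" |
while read -r f; do
    aws s3 cp "$SEGDIR/$f" "s3://bucket/$f" && rm "$SEGDIR/$f"
done &

ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 20 "$SEGDIR/output_%04d.mkv"
```

Because -m runs the watcher forever, you would kill the background job once ffmpeg exits and the last segment has uploaded.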
Answer 3:
Try s3fs, which lets you work with S3 as if it were an ordinary filesystem.
Answer 4:
aws s3 cp does not (yet) support piping multiple files. So you will have to save these files locally first, and then cp them, either as a whole folder with --recursive (as you've mentioned in your question) or one by one.
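The two-step approach can be sketched like this; the local directory and bucket name are placeholders:

```shell
# Step 1: write segments to a local directory (path is a placeholder).
mkdir -p /tmp/segments
ffmpeg -i t2.mp4 -map 0 -c copy -f segment -segment_time 20 /tmp/segments/output_%04d.mkv

# Step 2: copy the folder; the filter pair restricts the upload to segments.
aws s3 cp /tmp/segments s3://bucket/ --recursive --exclude "*" --include "output_*.mkv"
```

With --exclude "*" followed by --include "output_*.mkv", the later filter takes precedence, so only the segment files are uploaded.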
Source: https://stackoverflow.com/questions/60311166/ffmpeg-pipe-segments-to-s3