Using FFMPEG on Docker

Watch on youtube.com

For example, the asr filter can only be used with an ffmpeg built with the --enable-pocketsphinx flag. If the “officially provided ffmpeg binary” you have doesn’t support it, do you just give up on using it?

If you have some UNIX knowledge, consider using Docker as well. It is a bit esoteric if you have no UNIX knowledge at all, but conversely, even if you have only briefly touched a UNIX shell, you will get used to it relatively quickly.

By the nature of Docker, if you write a Dockerfile and run “docker build”, you can basically do almost everything you can do on UNIX. That is, you can build FFMPEG yourself from the source code and get exactly the features you want.
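As a rough sketch only (the base image, package names, release version, and configure flags below are illustrative assumptions, not a tested recipe), such a build might look like:

```dockerfile
# Hypothetical sketch: build ffmpeg from source with an extra feature enabled.
# Base image, version, and flags are illustrative assumptions.
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y \
    build-essential yasm pkg-config wget xz-utils libpocketsphinx-dev
# Fetch and unpack an ffmpeg source release (version is just an example).
RUN wget https://ffmpeg.org/releases/ffmpeg-6.1.tar.xz && tar xf ffmpeg-6.1.tar.xz
WORKDIR /ffmpeg-6.1
# Enable the feature the official binary lacks, then build and install.
RUN ./configure --enable-pocketsphinx && make -j"$(nproc)" && make install
ENTRYPOINT ["ffmpeg"]
```

Real images typically do something similar, with many more dependencies and flags.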

But let’s start by simply using images made by others. On Docker Hub, many people publish ffmpeg images. For example:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk krickwix/ffmpeg \
ffmpeg -f lavfi -i flite=textfile=shesha.txt:voice=slt shesha.mka

Note

Note that with --rm the result of this usage is volatile: nothing is left behind unless a volume mapping writes the result onto the host. Try the following:

[me@host: ~]$ docker run --rm -it -w //wk krickwix/ffmpeg \
ffmpeg -f lavfi -i flite=textfile=shesha.txt:voice=slt shesha.mka

In this case, you can’t get shesha.mka. (shesha.mka is written to the container’s filesystem, but that filesystem is discarded when the “run” ends.)

see also

flite

I made one too. It is aimed more at helping you build your own FFMPEG than at just running it, but still, if you just want to use it, for example:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i INPUT.mp3 -af rubberband=tempo=0.4:pitch=1.5 OUTPUT.mka
see also

atempo, asetrate, aresample

Note

My image was created mainly to show the examples on this page; it is not the best one available. If you search seriously, you will probably find an image that suits your purpose better than mine. For example, collelog/ffmpeg also aims for a full build, so you might get what you want with it as well. (Note that it, too, is a modified build of ffmpeg itself.)

filters#zscale:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i INPUT.mp4 -vf zscale=1920:-1:filter=bicubic OUTPUT.mkv

SVG rasterization via librsvg:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i INPUT.svg OUTPUT.png

VP9 encoding via libvpx (codecs#libvpx):

[me@host: ~]$ docker run -it --rm -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp4 -c:v libvpx-vp9 OUTPUT.mkv

H.264 encoding via libx264 (codecs#libx264), OpenH264 (codecs#libopenh264):

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i INPUT.mkv -c:v libx264 -c:a copy OUTPUT1.mkv
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i INPUT.mkv -c:v libopenh264 -c:a copy OUTPUT2.mkv

H.265 (HEVC) encoding via libx265 (codecs#libx265), libkvazaar (codecs#libkvazaar):

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i INPUT.mkv -c:v libx265 -c:a copy OUTPUT1.mkv
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i INPUT.mkv -c:v libkvazaar -c:a copy OUTPUT2.mkv

“Internet Low Bitrate Codec” via libilbc:

[me@host: ~]$ docker run -it --rm -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i vid_stereo.mp4 -vn -af "pan=1|c0=c0+c1,aresample=8000" mono_internetlowbitrate.lbc

protocols#tcp:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-p 8888:8888 \
hhsprings/ffmpeg-yours \
ffmpeg -re -i INPUT.mkv -f flv tcp://0.0.0.0:8888?listen=1

In this case, you can play this stream with ffplay on your host PC (maybe Windows):

[me@host: otrterm]$ ffplay tcp://localhost:8888

protocols#http:

[me@host: ~]$ docker run -p 8080:80 -it --rm -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -re -i your_video.mkv -listen 1 -f matroska http://0.0.0.0:80
[me@host: otrterm]$ ffplay http://localhost:8080

protocols#zmq:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-p 8888:8888 \
hhsprings/ffmpeg-yours \
ffmpeg -re -i INPUT.mkv -f mpegts zmq:tcp://0.0.0.0:8888?listen=1
[me@host: otrterm]$ ffplay zmq:tcp://localhost:8888

filters#asr:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp3 -af 'asr,ametadata=mode=print' -f null - 2>&1 | tee asr_result.txt

filters#ocr:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mkv -vf 'ocr,metadata=mode=print' -f null - 2>&1 | tee ocr_result.txt

filters#ocv:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mkv -vf 'ocv=filter_name=erode:filter_params=5x5+2x2/cross|2' OUTPUT.mkv
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -f lavfi -i "mandelbrot=s=800x600:maxiter=136:rate=25" \
-vf "ocv=filter_name=dilate:filter_params=5x5+2x2/cross|2" -t 3 OUTPUT.mkv
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mkv -vf 'ocv=smooth:median' OUTPUT.mkv

filters#lensfun:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mov \
-vf lensfun=make=Canon:model="Canon EOS 100D":lens_model="Canon EF-S 18-55mm f/3.5-5.6 IS STM":focal_length=18:aperture=8 \
OUTPUT.mkv

filters#ladspa:

Watch on youtube.com
[me@host: ~]$ # check available LADSPA libraries
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ls -l //usr/lib/ladspa
total 80
-rw-r--r-- 1 root root 14696 Oct  4  2021 amp.so
-rw-r--r-- 1 root root 14704 Oct  4  2021 delay.so
-rw-r--r-- 1 root root 14712 Oct  4  2021 filter.so
-rw-r--r-- 1 root root 14704 Oct  4  2021 noise.so
-rw-r--r-- 1 root root 14408 Oct  4  2021 sine.so
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-e LADSPA_PATH=//usr/lib/ladspa \
hhsprings/ffmpeg-yours \
listplugins
//usr/lib/ladspa/filter.so:
        Simple Low Pass Filter (1041/lpf)
        Simple High Pass Filter (1042/hpf)
//usr/lib/ladspa/noise.so:
        White Noise Source (1050/noise_white)
//usr/lib/ladspa/sine.so:
        Sine Oscillator (Freq:audio, Amp:audio) (1044/sine_faaa)
        Sine Oscillator (Freq:audio, Amp:control) (1045/sine_faac)
        Sine Oscillator (Freq:control, Amp:audio) (1046/sine_fcaa)
        Sine Oscillator (Freq:control, Amp:control) (1047/sine_fcac)
//usr/lib/ladspa/delay.so:
        Simple Delay Line (1043/delay_5s)
//usr/lib/ladspa/amp.so:
        Mono Amplifier (1048/amp_mono)
        Stereo Amplifier (1049/amp_stereo)
[me@host: ~]$ # let's try to use "delay"
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp3 -af 'ladspa=delay' -f null -
...
[Parsed_ladspa_0 @ 0x55ec0e0882c0] The 'delay' library contains the following plugins:
[Parsed_ladspa_0 @ 0x55ec0e0882c0] I = Input Channels
[Parsed_ladspa_0 @ 0x55ec0e0882c0] O = Output Channels
[Parsed_ladspa_0 @ 0x55ec0e0882c0] I:O Plugin                    Description
[Parsed_ladspa_0 @ 0x55ec0e0882c0]
[Parsed_ladspa_0 @ 0x55ec0e0882c0] 1:1 delay_5s                  Simple Delay Line
[AVFilterGraph @ 0x55ec0e0a10c0] Error initializing filter 'ladspa' with args 'delay'
Error reinitializing filters!
Failed to inject frame into filter network: Immediate exit requested
Error while processing the decoded data for stream #0:0
Conversion failed!
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp3 -af 'ladspa=delay:delay_5s:c=help' -f null -
...
[Parsed_ladspa_0 @ 0x5649dd8ce480] The 'delay_5s' plugin has the following input controls:
[Parsed_ladspa_0 @ 0x5649dd8ce480] c0: Delay (Seconds) [<float>, min: 0.000000, max: 5.000000 (default 1.000000)]
[Parsed_ladspa_0 @ 0x5649dd8ce480] c1: Dry/Wet Balance [<float>, min: 0.000000, max: 1.000000 (default 0.500000)]
[AVFilterGraph @ 0x5649dd8e1c40] Error initializing filter 'ladspa' with args 'delay:delay_5s:c=help'
...
[me@host: ~]$
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp3 -af 'ladspa=delay:delay_5s' OUTPUT_d1.mka
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp3 -af 'ladspa=delay:delay_5s:c=c0=2.0' OUTPUT_d2.mka
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp3 -af 'ladspa=delay:delay_5s:c=c0=5.0' OUTPUT_d5.mka

filters#sofalizer:

[me@host: ~]$ # check available SOFA files
[me@host: ~]$ docker run --rm -it hhsprings/ffmpeg-yours find //usr/share -name '*.sofa'
//usr/share/libmysofa/MIT_KEMAR_normal_pinna.sofa
//usr/share/libmysofa/default.sofa
[me@host: ~]$ #
[me@host: ~]$ # Let's use default.sofa, and MIT_KEMAR_normal_pinna.sofa
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp3 \
-af sofalizer=sofa=//usr/share/libmysofa/default.sofa:gain=-4 OUTPUT1.wav
    ...
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -y -i INPUT.mp3 \
-af sofalizer=sofa=//usr/share/libmysofa/MIT_KEMAR_normal_pinna.sofa:type=freq:gain=-4 OUTPUT2.wav
    ...

devices#caca:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours \
ffmpeg -i INPUT.mkv -vf 'format=pix_fmts=rgb24' -f caca -

After all, a Docker container is “like a remote machine”, so in some cases it may be easier to use it interactively, as if you had “logged in”:

[me@host: ~]$ # Think of this as analogous to logging in with ssh or rsh.
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours bash
root@1f39a5c0e49:/wk# mkdir _imgs
root@1f39a5c0e49:/wk# ffmpeg -y -i myvideo.mkv -r 1/5 _imgs/"%03d.png"
   ...
root@1f39a5c0e49:/wk# ffmpeg -y -r 2 -i "_imgs/%03d.png" -c:v libx265 myvideo_thumb_265.mkv
   ...
root@1f39a5c0e49:/wk# ffmpeg -y -r 2 -i "_imgs/%03d.png" -c:v libvpx-vp9 myvideo_thumb_vp9.webm
   ...
root@1f39a5c0e49:/wk# ffmpeg -y -r 2 -i "_imgs/%03d.png" -c:v libaom-av1 myvideo_thumb_av1.mp4
   ...
root@1f39a5c0e49:/wk# ffmpeg -y -r 2 -i "_imgs/%03d.png" -c:v librav1e myvideo_thumb_av1_2.mp4
   ...
root@1f39a5c0e49:/wk# ffmpeg -y -r 2 -i "_imgs/%03d.png" -c:v libsvtav1 -preset 10 -crf 35 myvideo_thumb_av1_3.mp4
   ...
root@1f39a5c0e49:/wk# ffmpeg -y -r 2 -i "_imgs/%03d.png" -c:v hap -compressor snappy myvideo_thumb_hap.mov
   ...
root@1f39a5c0e49:/wk# ffmpeg -y -r 2 -i "_imgs/%03d.png" myvideo_thumb.gif
   ...
root@1f39a5c0e49:/wk# ls -l
   ...
root@1f39a5c0e49:/wk# exit
[me@host: ~]$
[me@host: ~]$ # Think of this as analogous to logging in with ssh or rsh.
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours bash
root@edb63e60e37:/wk# export LADSPA_PATH=${LADSPA_PATH}:/usr/lib/ladspa
root@edb63e60e37:/wk# listplugins
/usr/lib/ladspa/filter.so:
        Simple Low Pass Filter (1041/lpf)
        Simple High Pass Filter (1042/hpf)
/usr/lib/ladspa/noise.so:
        White Noise Source (1050/noise_white)
/usr/lib/ladspa/sine.so:
        Sine Oscillator (Freq:audio, Amp:audio) (1044/sine_faaa)
        Sine Oscillator (Freq:audio, Amp:control) (1045/sine_faac)
        Sine Oscillator (Freq:control, Amp:audio) (1046/sine_fcaa)
        Sine Oscillator (Freq:control, Amp:control) (1047/sine_fcac)
/usr/lib/ladspa/delay.so:
        Simple Delay Line (1043/delay_5s)
/usr/lib/ladspa/amp.so:
        Mono Amplifier (1048/amp_mono)
        Stereo Amplifier (1049/amp_stereo)
root@edb63e60e37:/wk# ffmpeg -y -i INPUT.mp3 -af 'ladspa=filter:hpf' OUTPUT_h.mka
...
root@edb63e60e37:/wk# ffmpeg -y -i INPUT.mp3 -af 'ladspa=filter:lpf' OUTPUT_l.mka
...
root@edb63e60e37:/wk# exit
[me@host: ~]$

Note

Whether bash can be executed directly as in the example above depends on whether the Dockerfile defines an “ENTRYPOINT”. But in any case (unless the image author has intentionally disabled the command), it is possible with “--entrypoint”:

[me@host: ~]$ docker run -it --rm --entrypoint=bash redis
root@6402e790394b:/data# redis-cli --help
   ...

filters#frei0r, and filters#frei0r_src:

[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours bash
root@c18a9df13d4:/wk# ls /usr/lib/frei0r-1/ | grep disto
distort0r.so
root@c18a9df13d4:/wk# ffmpeg -y -i INPUT.mkv -vf 'frei0r=filter_name=distort0r:filter_params=0.5|0.01' OUTPUT.mkv
...
root@c18a9df13d4:/wk# ls /usr/lib/frei0r-1/ | grep pers
perspective.so
root@c18a9df13d4:/wk# ffmpeg -y -i INPUT.mkv -vf 'frei0r=perspective:0.2/0.2|0.8/0.2' OUTPUT.mkv
...
root@c18a9df13d4:/wk# exit
[me@host: ~]$
Watch on youtube.com
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk hhsprings/ffmpeg-yours bash
root@46ddf68ca1e:/wk# ffmpeg -f lavfi -i 'frei0r_src=size=600x400:filter_name=partik0l:filter_params=1234' OUTPUT1.mkv
root@46ddf68ca1e:/wk# ffmpeg -f lavfi -i 'frei0r_src=size=600x400:filter_name=plasma:filter_params=1|1' OUTPUT2.mkv
...
root@46ddf68ca1e:/wk# exit
[me@host: ~]$
see also

frei0r_src

By the way, unless you are a Super Hyper Executive Ultimate Professional about frei0r, it is almost impossible to use this filter, because the original documentation of ffmpeg, frei0r, and frei0r-plugins is “mad”, or rather “MAD”. Fortunately, the documentation of the GStreamer project (unrelated to FFMPEG), which also incorporates the frei0r plugins, is “relatively decent”, so you should refer to it when using these. (For example: the filter perspective, the source plasma. However, note that, as I mentioned in the video, the order of the “Properties” in those docs is messy. To know the correct order of the “Properties”, refer to the source code, available from https://files.dyne.org/frei0r/releases/.)

Note

Please note that even if ffmpeg itself is configured with a certain feature, the feature may not be usable with my container alone. For example, I built a RabbitMQ-enabled ffmpeg with --enable-librabbitmq, but that cannot work without a RabbitMQ server, and my container doesn’t include one.

If you are familiar with Linux and want such things inside the container, consider creating your own Docker image. In that case, your image can inherit mine with “FROM”. See my GitHub repo for details. However, note that if you want to use, for example, an “RTMP server with nginx”, it is not difficult to use it even from outside this container; I will show an example of this later. (This kind of cooperation can be easy or hard. In general, services based on network communication are easy to work with, but device-related ones are very difficult, especially on Windows.)
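For instance (assuming the base image is Debian-based so that apt-get is available; the package here is only a placeholder for whatever you actually need), a derived image could be as small as:

```dockerfile
# Hypothetical sketch: extend the existing image via FROM.
FROM hhsprings/ffmpeg-yours
# "sox" stands in for any extra tool or server you want inside the container.
RUN apt-get update && apt-get install -y sox && rm -rf /var/lib/apt/lists/*
```

Build it with “docker build -t my-ffmpeg .” and substitute it for hhsprings/ffmpeg-yours in the docker run examples on this page.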

Note

Playing video and audio using only what is inside the Docker container can be a daunting task, especially for Windows users. If your media has only a video stream, you can play it back by cooperating with an “X server” running on your host. Even if you are a Windows user, there are some native Windows X servers (the ones I know of are Xming, VcXsrv, and MobaXterm), so if you run one of them, this may be possible. See this article, etc.

If your host is a UNIX variant, you need to control authentication via “xhost”, etc., and you will also need a UNIX socket mapping. If you run “xhost +” to allow all access, you can use the host’s X server directly as follows.

[me@host: ~]$ docker run -it --rm \
-e DISPLAY=${DISPLAY} \
-v /tmp/.X11-unix:/tmp/.X11-unix \
hhsprings/ffmpeg-yours \
ffmpeg -y -f lavfi \
-i "frei0r_src=size=600x400:filter_name=plasma:filter_params=1|1,format=yuv420p" -f sdl2 normal

On UNIX variants the tricky parts are the UNIX socket mapping and the authentication (and in some cases the XDG_RUNTIME_DIR setting); Windows has problems of a different flavor.

In the first place, Windows doesn’t have UNIX sockets, so there is nothing to map, and there is no standard way to set up authentication; it differs for each X server you use. Instead, you connect to the X server over TCP by pointing DISPLAY at the host’s IP address:

[me@host: ~]$ ipconfig  # ifconfig, if you are in UNIX
   ...
[me@host: ~]$ # Assuming that the host's IP address as seen from the docker
[me@host: ~]$ # container is "172.23.32.1":
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-e DISPLAY=172.23.32.1:0 \
hhsprings/ffmpeg-yours \
ffplay yourvideo.mkv
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-e DISPLAY=172.23.32.1:0 \
hhsprings/ffmpeg-yours \
ffmpeg -i yourvideo.mkv -f sdl -
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-e DISPLAY=172.23.32.1:0 \
hhsprings/ffmpeg-yours \
ffmpeg -i yourvideo.mkv -f sdl2 -
[me@host: ~]$ # You may be able to use "host.docker.internal"
[me@host: ~]$ # (depending on your docker engine's built-in DNS).
[me@host: ~]$ docker run -it --rm -v "$(/bin/pwd)"://wk -w //wk \
-e DISPLAY=host.docker.internal:0 \
hhsprings/ffmpeg-yours \
ffmpeg -y -f lavfi -i gradients=d=10:speed=0.1 -vf format=yuv420p -f sdl2 -
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-e DISPLAY=host.docker.internal:0 \
hhsprings/ffmpeg-yours \
ffmpeg -i yourvideo.mkv -f sdl2 -

The same is true for audio. To use ALSA audio, which uses the UNIX device interface directly, all you need is a device mapping if your host is UNIX:

[me@host: ~]$ docker run -it --rm \
-v $(pwd):/wk -w /wk \
--privileged \
-v /dev/snd:/dev/snd \
hhsprings/ffmpeg-yours \
ffmpeg -y -f lavfi -i "flite=text='Hello, world.':voice=slt" \
-ac 2 -af aresample=48000 -vn -f alsa default

There is no Windows equivalent of /dev/snd, so as far as I know there is no way to use ALSA audio if your host is Windows. On the other hand, an audio system that takes a client/server approach over network communication may be usable on Windows. For example, you may be able to play back audio by relying on “PulseAudio for Windows”:

[me@host: ~]$ # Assuming that the host's IP address as seen from the docker
[me@host: ~]$ # container is "172.23.32.1":
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-e PULSE_SERVER=tcp:172.23.32.1 \
hhsprings/ffmpeg-yours \
ffmpeg -f lavfi -i aevalsrc='0.1*sin(t)*sin(442*2*PI*t):d=30' -f pulse "my pulse"
[me@host: ~]$ #
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-e DISPLAY=172.23.32.1:0 \
-e PULSE_SERVER=tcp:172.23.32.1 \
hhsprings/ffmpeg-yours \
ffplay your_video_with_audio.mp4
[me@host: ~]$ # You may be able to use "host.docker.internal"
[me@host: ~]$ # (depending on your docker engine's built-in DNS).
[me@host: ~]$ docker run --rm -it -v "$(/bin/pwd)"://wk -w //wk \
-e DISPLAY=host.docker.internal:0 \
-e PULSE_SERVER=tcp:host.docker.internal \
hhsprings/ffmpeg-yours \
ffplay your_video_with_audio.mp4

In the “pulseaudio” configuration file, the easiest thing is to set:

your-config.pa (or etc/default.pa)
#load-module module-waveout sink_name=output source_name=input
load-module module-waveout sink_name=output source_name=input record=0

# ...

#load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1
load-module module-native-protocol-tcp auth-anonymous=1

for the time being. I haven’t succeeded in a configuration that is well-behaved in terms of security, so if you really want to be safe, you will have to work that out on your own. You can refer to this site for using “pulseaudio” on Windows. The article was written about using it with WSL, but it is also useful when using it from Docker.

Watch on youtube.com

If you always use ffmpeg in much the same way, you might consider using docker-compose. The usual purpose of docker-compose is to run a combination of containers, but there is nothing wrong with a “compose” that consists only of ffmpeg-yours.

For example, suppose you always use the same volume and port mappings, and you always want to pass the “-hide_banner” and “-y” options to ffmpeg. In this case, prepare the following “docker-compose.yml” file:

services:
  myffmpeg:
    image: hhsprings/ffmpeg-yours
    ports:
      - "8888:8888"
    volumes:
      - //c/Users/you/Videos:/wk
    working_dir: /wk
    entrypoint:
      - ffmpeg
      - -hide_banner
      - -y

You can do this:

[you@yourhost: somedir]$ pwd
/c/Users/you/AppData/Roaming/mywork/somedir
[you@yourhost: somedir]$ # docker-compose.yml is in curdir.
[you@yourhost: somedir]$ ls
docker-compose.yml
[you@yourhost: somedir]$ ls //c/Users/you/Videos
vid1.mkv vid2.mp4
[you@yourhost: somedir]$ # convert video in //c/Users/you/Videos
[you@yourhost: somedir]$ docker-compose run myffmpeg -y -i vid1.mkv -vf scale=960:-1 vid1_s.mkv
...
[you@yourhost: somedir]$ # you can specify docker-compose.yml explicitly
[you@yourhost: somedir]$ docker-compose -f docker-compose.yml run \
myffmpeg -y -i vid1.mkv -vf scale=960:-1 vid1_s.mkv
...

If you want, you can further wrap it in the form of a shell script:

#! /bin/sh
# Assuming that docker-compose.yml and this script are in the same directory.
docker-compose -f "$(dirname "$0")/docker-compose.yml" run myffmpeg "$@"

For example, if you name this script “myffmpeg.sh”, you can:

[you@yourhost: otherdir]$ /path/to/myffmpeg.sh -i vid1.mkv -vf scale=960:-1 vid1_s.mkv
...

Streaming with the RTMP protocol cannot be achieved with ffmpeg alone; you need to cooperate with, for example, an nginx server that has the RTMP module built in. In other words, RTMP streaming cannot be achieved with my container alone. But “alone” is the only limitation: if such a server is running on your host you can use it, and if you know a public service you can use that. And of course you can use “nginx with the RTMP module on Docker”. For example, tiangolo/nginx-rtmp:

[you@yourhost: somedir]$ # start tiangolo/nginx-rtmp as daemon
[you@yourhost: somedir]$ docker run --name=nginx-rtmp -p 1935:1935 -d tiangolo/nginx-rtmp
[you@yourhost: MyVideos]$ ls *.mkv
myvideo.mkv
[you@yourhost: MyVideos]$ # IP address
[you@yourhost: MyVideos]$ ipconfig  # if your host is Unix, use "ifconfig"
   ...
[you@yourhost: MyVideos]$ # see also: https://github.com/tiangolo/nginx-rtmp-docker/blob/master/nginx.conf
[you@yourhost: MyVideos]$ docker run -it --rm -v "$(/bin/pwd)"://wk -w //wk \
hhsprings/ffmpeg-yours \
ffmpeg -re -i myvideo.mkv -f flv rtmp://172.17.0.5:1935/live/myvideo
[you@yourhost: MyVideos]$ # note that in this case you may be able to use
[you@yourhost: MyVideos]$ # "host.docker.internal" rather than 172.17.0.5.

You can play back this stream with ffplay, etc:

[you@yourhost: otherdir]$ ffplay rtmp://localhost:1935/live/myvideo
Watch on youtube.com

Do the same with “docker-compose”:

docker-compose.yml
services:
  nginx-rtmp:
    image: tiangolo/nginx-rtmp

    # You'll get a "name conflict" error if the container from the previous
    # experiment still exists under this name. In that case, remove it with
    # "docker rm" before executing this.
    container_name: nginx-rtmp
    ports:
      - "1935:1935"

  myffmpeg:
    image: hhsprings/ffmpeg-yours
    links:
      - nginx-rtmp
    volumes:
      - //c/Users/you/mywork/MyVideos:/wk
    working_dir: /wk
    entrypoint:
      - ffmpeg
      - -re
      - -i
      - myvideo.mkv
      - -f
      - flv
      - rtmp://nginx-rtmp:1935/live/myvideo
[you@yourhost: otherdir]$ docker-compose run myffmpeg

By the way, the original purpose of docker-compose’s “run” is closer to one-off (almost emergency) use, because docker-compose mainly aims at configuring resident services. In this example, you would normally not deal only with a fixed video called “myvideo.mkv”; you would want to stream videos through some user interface in response to requests. Such examples go beyond the level described on this page; they are full-fledged Docker examples rather than ffmpeg examples, so I will not give one here.

A WEBDAV example is similar to the RTMP one. I found bytemark/webdav, a Docker image for file sharing with WEBDAV:

[you@yourhost: somedir]$ # start bytemark/webdav as daemon
[you@yourhost: somedir]$ docker run \
-v "$(/bin/pwd)"://var/lib/dav/data \
-e AUTH_TYPE=Digest \
-e USERNAME=anonnon \
-e PASSWORD=dadamore \
-p 80:80 \
-d \
bytemark/webdav
[you@yourhost: otrdir]$ docker run -it --rm \
-v //f/MyVideos/://wk -w //wk \
hhsprings/ffmpeg-yours \
ffmpeg -i myvideo01.mkv -r 1/10 \
-protocol_opts method=PUT http://anonnon:dadamore@host.docker.internal/myvideo01-%03d.png

Of course, the most readily available WEBDAV client is a web browser. You can see what you have published with this ffmpeg by going to http://localhost. If you want other client software, you can easily find some by searching.

It’s easy to extend an existing docker container. I made a video that actually does that, so please take a look here.

Watch on youtube.com