extractplanes
#! /bin/sh
ifn="Pexels_2877_2880.mp4"
ifnb="`basename \"${ifn}\" .mp4`"
pref="`basename $0 .sh`"
#
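# split the input's Y, U and V planes into three separate grayscale streams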
ffmpeg -y -i "${ifn}" \
-filter_complex "extractplanes=y+u+v[y][u][v]" \
-an \
-map '[y]' "${pref}_${ifnb}_y.mp4" \
-map '[u]' "${pref}_${ifnb}_u.mp4" \
-map '[v]' "${pref}_${ifnb}_v.mp4"
The same can be done with the R, G and B planes by first converting the input to rgb24:
#! /bin/sh
ifn="Pexels_2877_2880.mp4"
ifnb="`basename \"${ifn}\" .mp4`"
pref="`basename $0 .sh`"
#
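# convert to packed RGB first, then split the R, G and B planes into grayscale streams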
ffmpeg -y -i "${ifn}" \
-filter_complex "format=rgb24,extractplanes=r+g+b[r][g][b]" \
-an \
-map '[r]' "${pref}_${ifnb}_r.mp4" \
-map '[g]' "${pref}_${ifnb}_g.mp4" \
-map '[b]' "${pref}_${ifnb}_b.mp4"
There are two points to note.
The first point is that what “extractplanes” outputs is not the raw data of (for example) the Y plane. Each extracted plane becomes a grayscale video, so the output streams carry a pixel format different from the input's YUV (or RGB).
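One way to see this is with ffprobe. The sketch below is only a quick check; the glob pattern is an assumption about the file names produced by the first script (its prefix comes from that script's own file name), so adjust it if yours differ.
#! /bin/sh
# Sketch: print frame size and pixel format of each extracted plane video.
# The glob is an assumption about the output names of the first script above.
for f in *_Pexels_2877_2880_[yuv].mp4
do
echo "${f}:"
ffprobe -v error -select_streams v:0 \
-show_entries stream=width,height,pix_fmt -of csv=p=0 "${f}"
done
With the yuv420p input used here, the U and V outputs should report half the width and height of the Y output, and each should report a grayscale pixel format if the encoder kept the monochrome input.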
The other point depends on the input. Since the example input is in yuv420p format, the chroma components are subsampled to half the resolution of the luma in each direction. This is why the extracted images differ in size: the U and V outputs are smaller than the Y output.
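If three same-sized outputs are wanted instead, the chroma can be up-sampled before extraction. This is a sketch in the same style as the scripts above, assuming yuv444p is acceptable as an intermediate format:
#! /bin/sh
ifn="Pexels_2877_2880.mp4"
ifnb="`basename \"${ifn}\" .mp4`"
pref="`basename $0 .sh`"
#
# Converting to yuv444p first gives every plane the full frame size,
# so the three extracted grayscale videos all have the same dimensions.
ffmpeg -y -i "${ifn}" \
-filter_complex "format=yuv444p,extractplanes=y+u+v[y][u][v]" \
-an \
-map '[y]' "${pref}_${ifnb}_y.mp4" \
-map '[u]' "${pref}_${ifnb}_u.mp4" \
-map '[v]' "${pref}_${ifnb}_v.mp4"
Note that the up-sampling interpolates the chroma, so the U and V images are no longer the untouched half-resolution planes stored in the source.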