Showing a huge still image as video

See also: overlay

In particular, a mechanically generated graph or a high-resolution topographic map is often unsuitable for viewing as a still image: you want to “dive into and travel around” such high-resolution, large images.

Large graph generated by Graphviz

A typical example of such a situation is a graph generated by Graphviz: see, for example, softmaint (softmaint.gv.txt). It is quite painful to “read and understand” this image with an image viewer. Generate the image as below and open it with whatever viewer you have:

[me@host: ~]$ dot -Tpng -Kneato -Gdpi=330 softmaint.gv.txt > softmaint.png
[me@host: ~]$ ffprobe -hide_banner softmaint.png
Input #0, png_pipe, from 'softmaint.png':
  Duration: N/A, bitrate: N/A
    Stream #0:0: Video: png, rgba(pc), 2310x2278, 25 tbr, 25 tbn, 25 tbc

Ordinary vertical/horizontal scrolling is not a bad choice for such images, but more likely you will want to pan to whatever position interests you.

The “x” and “y” expressions of “crop” may reference the timestamp “t”, and the function “pow” is also available, so we can evaluate a polynomial \(p(t) = c_0 + c_1 * t + \cdots + c_n * t^n\):

ffmpeg filter graph
crop='
x=618.95433437 + 13.81390319 * t + 0.41619951 * pow(t, 2):
y=475.81586466 - 59.58643233 * t + 8.95077133 * pow(t, 2):
w=1280:h=720'
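As a quick sanity check, we can evaluate this polynomial in Python exactly as ffmpeg will (the coefficients below are the crop “x” ones quoted above):

```python
# Evaluate p(t) = c0 + c1*t + c2*t^2, mirroring the structure of the
# ffmpeg expression "c0 * pow(t, 0) + c1 * pow(t, 1) + ...".
coef = [618.95433437, 13.81390319, 0.41619951]

def p(t):
    return sum(c * t ** i for i, c in enumerate(coef))

print(p(0))  # the crop x position at the very first frame
```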

There is no way to find these coefficients within ffmpeg itself, so use another tool such as Python (with NumPy):

# -*- coding: utf-8 -*-
import re
import numpy as np

polyfit = np.polynomial.polynomial.polyfit

nav = """\
 0, 0.25, 0.2
 1, 0.33, 0.22
 2, 0.4,  0.1
 3, 0.38, 0.28
 4, 0.45, 0.4
 5, 0.55, 0.5
 6, 0.73, 0.4
 7, 0.85, 0.35
 8, 0.58, 0.5
 9, 0.6,  0.65
10, 0.71, 0.76
11, 0.6,  0.75
12, 0.45, 0.85
13, 0.42, 0.92
"""
#
dat = np.array(
    [(t * 5, x, y)
     for t, x, y in [
        map(float, re.split(r"\s*,\s*", line))
        for line in re.split(r"\r*\n", nav.strip())]
     ])
# softmaint.png: 2310x2278
cex = 2310 * (polyfit(dat[:,0], dat[:,1], deg=4))  # t, x
cey = 2278 * (polyfit(dat[:,0], dat[:,2], deg=4))  # t, y
print("""#! /bin/sh
ffmpeg -y -i softmaint.png -filter_complex "
[0:v]loop=-1:size=2
,crop='
x=(({}) - 640):
y=(({}) - 360):
w=1280:h=720'
,setsar=1
,trim=0:{:.3f}
" softmaint.mp4
""".format(
        " + \n".join(["(%.8f) * pow(t, %d)" % (c, i) for i, c in enumerate(cex)]),
        " + \n".join(["(%.8f) * pow(t, %d)" % (c, i) for i, c in enumerate(cey)]),
        dat[:,0][-1]))
Shell script generated by the above Python script
#! /bin/sh
ffmpeg -y -i softmaint.png -filter_complex "
[0:v]loop=-1:size=2
,crop='
x=(((642.99044118) * pow(t, 0) +
(-8.13571267) * pow(t, 1) +
(3.11844061) * pow(t, 2) +
(-0.07753054) * pow(t, 3) +
(0.00050124) * pow(t, 4)) - 640):
y=(((398.98500000) * pow(t, 0) +
(10.57501182) * pow(t, 1) +
(0.31311958) * pow(t, 2) +
(-0.00129939) * pow(t, 3) +
(0.00000375) * pow(t, 4)) - 360):
w=1280:h=720'
,setsar=1
,trim=0:65.000
" softmaint.mp4

The following video is the result of running the script above:

Watch on youtube.com

You can also achieve the same with `overlay’:

# -*- coding: utf-8 -*-
import re
import numpy as np

polyfit = np.polynomial.polynomial.polyfit

nav = """\
 0, 0.25, 0.2
 1, 0.33, 0.22
 2, 0.4,  0.1
 3, 0.38, 0.28
 4, 0.45, 0.4
 5, 0.55, 0.5
 6, 0.73, 0.4
 7, 0.85, 0.35
 8, 0.58, 0.5
 9, 0.6,  0.65
10, 0.71, 0.76
11, 0.6,  0.75
12, 0.45, 0.85
13, 0.42, 0.92
"""
#
dat = np.array(
    [(t * 5, x, y)
     for t, x, y in [
        map(float, re.split(r"\s*,\s*", line))
        for line in re.split(r"\r*\n", nav.strip())]
     ])
# softmaint.png: 2310x2278
cex = 2310 * (polyfit(dat[:,0], dat[:,1], deg=4))  # t, x
cey = 2278 * (polyfit(dat[:,0], dat[:,2], deg=4))  # t, y
print("""#! /bin/sh
ffmpeg -y -i softmaint.png -filter_complex "
color=white:s=1280x720,loop=-1:size=2[bg];
[0:v]loop=-1:size=2[fg];
[bg][fg]overlay='
x=-(({}) - 640):
y=-(({}) - 360)'
,setsar=1
,trim=0:{:.3f}
" softmaint.mp4
""".format(
        " + \n".join(["(%.8f) * pow(t, %d)" % (c, i) for i, c in enumerate(cex)]),
        " + \n".join(["(%.8f) * pow(t, %d)" % (c, i) for i, c in enumerate(cey)]),
        dat[:,0][-1]))
Shell script generated by the above Python script
#! /bin/sh
ffmpeg -y -i softmaint.png -filter_complex "
color=white:s=1280x720,loop=-1:size=2[bg];
[0:v]loop=-1:size=2[fg];
[bg][fg]overlay='
x=-(((642.99044118) * pow(t, 0) +
(-8.13571267) * pow(t, 1) +
(3.11844061) * pow(t, 2) +
(-0.07753054) * pow(t, 3) +
(0.00050124) * pow(t, 4)) - 640):
y=-(((398.98500000) * pow(t, 0) +
(10.57501182) * pow(t, 1) +
(0.31311958) * pow(t, 2) +
(-0.00129939) * pow(t, 3) +
(0.00000375) * pow(t, 4)) - 360)'
,setsar=1
,trim=0:65.000
" softmaint.mp4
Watch on youtube.com

Note the difference in out-of-range behavior between the two versions: `crop’ keeps the window within the image bounds, while `overlay’ lets the background show through.
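The difference can be modeled roughly in Python (an illustrative sketch of the behavior, not ffmpeg's actual code; `crop_x` and `overlay_x` are hypothetical helper names):

```python
def crop_x(x, in_w, out_w):
    # crop clamps the evaluated position so the window stays inside the
    # source image: an out-of-range pan "sticks" at the border
    return min(max(x, 0), in_w - out_w)

def overlay_x(x):
    # overlay uses the raw coordinate: an out-of-range pan simply
    # reveals the background (white in the second script above)
    return x

# softmaint.png is 2310 pixels wide; the crop window is 1280 wide
print(crop_x(-100, 2310, 1280), overlay_x(-100))
```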

Long-term time series graph

For long-term time series, it may be desirable to draw a graph that is horizontally long along the time axis.

Example 1

What I show this time is not essentially different from the previous example. The differences are:

  • You can use (almost) the same formula as the one used to draw the graph.
  • Multiple crop and overlay filters are used to keep the axes of the drawn graph in view.

The formula \(100 * \exp(-1/2*(((T-120)/100)^2)) + T/500 * \tan(T/5)\) used in this example has no special meaning. It is just an example.

Python with NumPy + matplotlib
# -*- coding: utf-8 -*-
from __future__ import division


import numpy as np
import matplotlib
import matplotlib.pyplot as plt


def make_graph():
    fig, ax = plt.subplots()

    # set large width
    fig.set_size_inches(16.53 * 10, 11.69 * 4)

    #
    T = np.arange(0, 300, 0.5)
    Y = 100 * np.exp(-1/2*(((T-120)/100)**2)) + T/500 * np.tan(T/5)
    ax.plot(T, Y, "b-")
    ax.set_xticks(range(0, 300, 10))
    ax.set_yticks(
        np.arange(
            np.floor(Y.min() / 10) * 10, np.ceil((Y.max() + 10) / 10) * 10, 10))
    ax.set_xlim((0, T[-1]))
    ax.grid(True)
    plt.savefig("graph.png", bbox_inches="tight")


if __name__ == '__main__':
    make_graph()
Using graph.png generated by the above Python script
#! /bin/bash
#
ow=12873 ; oh=3647
xaxh=33 ; yaxw=48
#
gb=$((${oh} - ${xaxh}))
gw=$((1920 - ${yaxw}))
gh=$((1080 - ${xaxh}))
#
ymin=-55 ; ymax=185
wr=`python -c "print((${ow} - ${yaxw}) / 300.)"`
hr=`python -c "print((${oh} - ${xaxh}) / (float(${ymax}) - ${ymin}))"`
#
T="(${wr} * t)"
Y="${hr} * ($((${ymax} / 5 * 4)) - 100 * exp(-1/2*(pow((t-120)/100,2))) + t/500 * tan(t/5))"
#
ffmpeg -y -i graph.png -filter_complex "
color=black:s=1920x1080:d=255[bg];

[0:v]loop=-1:size=2,crop='x=0:y=${Y}:w=${yaxw}:h=${gh}'[yax];
[0:v]loop=-1:size=2,crop='x=(${T} + ${yaxw}):y=${gb}:w=${gw}:h=${xaxh}'[xax];
[0:v]loop=-1:size=2,crop='x=(${T} + ${yaxw}):y=${Y}:w=${gw}:h=${gh}'[grp];

[bg][yax]overlay='x=0:y=0':shortest=1[v0];
[v0][grp]overlay='x=${yaxw}:y=0:shortest=1'[v1];
[v1][xax]overlay='x=${yaxw}:y=${gh}':shortest=1

,setpts=PTS/2-STARTPTS
" graph.mp4
Watch on youtube.com

Example 2

The next example combines what was done in the previous two examples: polynomial approximation supplies the expression passed to overlay, and multiple crop and overlay filters maintain the axes. The main difference from those examples is that the polynomial coefficients are fitted to the same data used to draw the graph.

Draw the graph and find the coefficients
# -*- coding: utf-8 -*-
import csv
from datetime import datetime
import numpy as np
import math
from matplotlib import pyplot as plt

# values below are for base=0 (<11 km)
_P0 = 1013.25  # static pressure (hPa) at MSL
_T0 = 273.15 + 15  # standard temperature (K) at MSL
_L0 = 6.49 / 1000.  # standard temperature lapse rate (K/m) in ISA
_R = 8.31432  # universal gas constant in N·m/(mol·K)
_g0 = 9.80665   # gravitational acceleration in m/s**2
_M = 0.0289644  # molar mass of Earth's air in kg/mol

#
def a2p(h, delta_T=0):
    t = _T0 + delta_T
    return _P0 * math.pow((t / (t + _L0 * h)), _g0*_M/(_R*_L0))
#
def p2a(p, hA, pA, delta_T=0):
    delta_p = pA - a2p(hA, delta_T)

    t = _T0 + delta_T
    return (t / _L0) * (np.power((p - delta_p) / _P0, -_R*_L0/(_g0*_M)) - 1)

def read():
    with open("sensor_log.2016-08-10_elevs.csv") as fi:
        reader = csv.reader(fi)
        next(reader)  # skip header
        t0 = None
        for line in reader:
            tc = datetime.strptime(line[0], "%Y-%m-%d %H:%M:%S")
            if not t0:
                t0 = tc
            yield (tc - t0).total_seconds(), map(float, line[1:])

if __name__ == '__main__':
    #
    data = np.array([
            (time / 60., GPS_alt, pressure, _, elev_DEM5)
            for time, (
                lon, lat, GPS_alt, pressure, _, elev_DEM5,
                grav_x, grav_y, grav_z, g_scalar) in read()])

    #
    no_cor = p2a(data[:,2], 0., 1013.25, 0)
    corr_1 = p2a(data[:,2], data[0][4], data[0][2], 0)
    corr_2 = p2a(data[:,2], data[0][4], data[0][2], 12)

    fig, ax = plt.subplots()
    fig.set_size_inches(16.53*12, 11.69*4)
    t = data[:,0]
    ymin, ymax = 0, 310

    ax.plot(t, data[:,1], label='GPS')
    ax.plot(t, data[:,4], label='DEM5')
    ax.plot(t, no_cor, label='Pressure Altitude')
    ax.plot(t, corr_1, label='Pressure Altitude (P corr)')
    ax.plot(t, corr_2, label='Pressure Altitude (P + ISA+12 corr)')
    cef = np.polynomial.polynomial.polyfit(t, data[:,4], deg=5)

    #avgelv = (data[:,4] + no_cor + corr_1 + corr_2) / 4.  # except GPS
    #cef = np.polynomial.polynomial.polyfit(t, avgelv, deg=15)
    #t2 = np.linspace(0, t.max(), t.max() * 5)
    #ax.plot(t2, np.polynomial.polynomial.polyval(t2, cef), "k.", label="fit")

    ax.set_xlim((0, t.max() + 1.0))
    ax.set_ylim((ymin, ymax))
    ax.set_xticks(np.arange(0, np.ceil(t.max() + 1), 1))
    ax.set_yticks(np.arange(ymin, ymax, 5))
    ax.grid(True)
    left, right, bottom, top = 5e-3, 1, 1e-2, 1
    fig.subplots_adjust(
        wspace=0, hspace=0, left=left, right=right, bottom=bottom, top=top)
    ax.legend(loc='upper left', shadow=True)
    imgoutbase = "sensor_log.2016-08-10_elevs"
    fig.savefig(imgoutbase + ".png")
    ow, oh = map(int, fig.get_window_extent().bounds[2:])
    #
    tw, th = 1920, 1080
    xaxh, yaxw = int(np.ceil(oh * bottom)), int(np.ceil(ow * left))
    gb = oh - xaxh
    gw = tw - yaxw
    gh = th - xaxh
    dur = t.max() - 12
    wr = ((ow - yaxw) / t.max())
    hr = ((oh - xaxh) / (float(ymax) - ymin))
    Y = " + \n".join([
            "(%e * pow(t + 12, %d))" % (c, i)
            for i, c in enumerate(cef)])
    #
    print("""\
#! /bin/sh
#
T="(%(wr)f * t)"
Y="%(hr)f * (%(ymax)d - (%(Y)s)) - %(th)d / 2"
#
ffmpeg -y -i %(imgoutbase)s.png -filter_complex "
color=black:s=%(tw)dx%(th)d:d=%(dur)f[bg];

[0:v]loop=-1:size=2,crop='x=0:y=${Y}:w=%(yaxw)d:h=%(gh)d'[yax];
[0:v]loop=-1:size=2,crop='x=(${T} + %(yaxw)d):y=%(gb)d:w=%(gw)d:h=%(xaxh)d'[xax];
[0:v]loop=-1:size=2,crop='x=(${T} + %(yaxw)d):y=${Y}:w=%(gw)d:h=%(gh)d'[grp];
[0:v]loop=-1:size=2,crop='x=104:y=4:w=302:h=120',scale=400:-1[legend];

[bg][yax]overlay='x=0:y=0':shortest=1[v0];
[v0][grp]overlay='x=%(yaxw)d:y=0:shortest=1'[v1];
[v1][xax]overlay='x=%(yaxw)d:y=%(gh)d':shortest=1[vmain];
[vmain][legend]overlay=x=100:y=4
" %(imgoutbase)s.mp4
""" % locals())

The example used here is a pressure-altitude calculation, but I do not guarantee its correctness; don’t trust me in this regard. (The data used in this script can be downloaded.)

Using sensor_log.2016-08-10_elevs.png generated by the above Python script
#! /bin/sh
#
T="(122.254801 * t)"
Y="14.932258 * (310 - ((1.953434e+01 * pow(t + 12, 0)) +
(-5.740214e+00 * pow(t + 12, 1)) +
(4.387402e-01 * pow(t + 12, 2)) +
(-7.002593e-03 * pow(t + 12, 3)) +
(4.379414e-05 * pow(t + 12, 4)) +
(-9.857063e-08 * pow(t + 12, 5)))) - 1080 / 2"
#
ffmpeg -y -i sensor_log.2016-08-10_elevs.png -filter_complex "
color=black:s=1920x1080:d=149.433333[bg];

[0:v]loop=-1:size=2,crop='x=0:y=${Y}:w=100:h=1033'[yax];
[0:v]loop=-1:size=2,crop='x=(${T} + 100):y=4629:w=1820:h=47'[xax];
[0:v]loop=-1:size=2,crop='x=(${T} + 100):y=${Y}:w=1820:h=1033'[grp];
[0:v]loop=-1:size=2,crop='x=104:y=4:w=302:h=120',scale=400:-1[legend];

[bg][yax]overlay='x=0:y=0':shortest=1[v0];
[v0][grp]overlay='x=100:y=0:shortest=1'[v1];
[v1][xax]overlay='x=100:y=1033':shortest=1[vmain];
[vmain][legend]overlay=x=100:y=4
" sensor_log.2016-08-10_elevs.mp4
Watch on youtube.com

Walk across the whole area linearly without omission

In some cases, for example when the original image is grid-like, you may want to simply walk across the whole image linearly, without missing any part.

Here, the image generated by the following script is taken as an example:

"""
Original: https://matplotlib.org/examples/color/named_colors.html
"""
from __future__ import division

import matplotlib.pyplot as plt
from matplotlib import colors as mcolors


def make_graph():
    colors = dict(mcolors.BASE_COLORS, **mcolors.CSS4_COLORS)
    by_hsv = sorted((tuple(mcolors.rgb_to_hsv(mcolors.to_rgba(color)[:3])), name)
                    for name, color in colors.items())
    sorted_names = [name for hsv, name in by_hsv]
    #
    n = len(sorted_names)
    ncols = 4
    nrows = n // ncols + 1
    #
    fig, ax = plt.subplots()
    fig.set_size_inches(60, 40)  # set large canvas size

    # Get height and width
    X, Y = fig.get_dpi() * fig.get_size_inches()
    h, w = Y / nrows, X / ncols
    #
    for i, name in enumerate(sorted_names):
        col = i % ncols
        row = i // ncols
        y = Y - (row * h) - h

        xi_line = w * (col + 0.05)
        xf_line = w * (col + 0.25)
        xi_text = w * (col + 0.26)

        ax.text(xi_text, y, name, fontsize=(h * 0.5),
                horizontalalignment='left',
                verticalalignment='center')

        ax.hlines(y + h * 0.1, xi_line, xf_line,
                  color=colors[name], linewidth=(h * 0.6))

    ax.set_xlim(0, X)
    ax.set_ylim(0, Y)
    ax.set_axis_off()

    plt.savefig("named_color.png", bbox_inches="tight")


if __name__ == '__main__':
    make_graph()

Although no complicated calculation is needed to realize this, conditional branching is sometimes necessary, which makes the expressions a little complicated.

Watch on youtube.com
00:00:00
#! /bin/bash
pref="`basename $0 .sh`"
#
ifn="named_color.png"  # 4732x3129
ow=4732 ; oh=3129
tw=1920 ; th=1080
max_x=$((${ow} - ${tw}))
#
t="(t*300)"
floor="floor(${t} / ${max_x})"
#
ffmpeg -y -i ${ifn} -filter_complex "
color=0xDDDDDD:s=${tw}x${th}[bg];
[0:v]loop=-1:size=2,drawgrid=w=${max_x}:h=${th}:c=blue:t=8[0v];

[bg][0v]
overlay='
x=-mod(${t}, ${max_x}):
y=-${th} * ${floor}
'
" -t 28 ${pref}.mp4
00:00:28
#! /bin/bash
pref="`basename $0 .sh`"
#
ifn="named_color.png"  # 4732x3129
ow=4732 ; oh=3129
tw=1920 ; th=1080
max_y=$((${oh} - ${th}))
#
t="(t*300)"
floor="floor(${t} / ${max_y})"
#
ffmpeg -y -i ${ifn} -filter_complex "
color=0xDDDDDD:s=${tw}x${th}[bg];
[0:v]loop=-1:size=2,drawgrid=w=${tw}:h=${max_y}:c=blue:t=8[0v];

[bg][0v]
overlay='
x=-${tw} * ${floor}:
y=-mod(${t}, ${max_y})
'
" -t 21 ${pref}.mp4
00:00:49
#! /bin/bash
pref="`basename $0 .sh`"
#
ifn="named_color.png"  # 4732x3129
ow=4732 ; oh=3129
tw=1920 ; th=1080
max_x=$((${ow} - ${tw}))
#
t="(t*400)"
floor="floor(${t} / ${max_x})"
stage="mod(${floor}, 4)"
#
ffmpeg -y -i ${ifn} -filter_complex "
color=0xDDDDDD:s=${tw}x${th}[bg];
[0:v]loop=-1:size=2,drawgrid=w=${max_x}:h=${th}:c=black:t=3[0v];

[bg][0v]
overlay='
x=
if(eq(${stage}, 0), -mod(${t}, ${max_x}),
if(eq(${stage}, 1), -${max_x},
if(eq(${stage}, 2), -${max_x} + mod(${t}, ${max_x}),
0))):
y=if(eq(${stage}, 0) + eq(${stage}, 2), -${th} * ${floor} / 2,
-${th} * (${floor} - 1) / 2 - (${th} / ${max_x}) * mod(${t}, ${max_x}))
'
" -t 36 ${pref}.mp4
00:01:25
#! /bin/bash
pref="`basename $0 .sh`"
#
ifn="named_color.png"  # 4732x3129
ow=4732 ; oh=3129
tw=1920 ; th=1080
max_y=$((${oh} - ${th}))
#
t="(t*400)"
floor="floor(${t} / ${max_y})"
stage="mod(${floor}, 4)"
#
ffmpeg -y -i ${ifn} -filter_complex "
color=0xDDDDDD:s=${tw}x${th}[bg];
[0:v]loop=-1:size=2,drawgrid=w=${tw}:h=${max_y}:c=black:t=3[0v];

[bg][0v]
overlay='
x=if(eq(${stage}, 0) + eq(${stage}, 2), -${tw} * ${floor} / 2,
-${tw} * (${floor} - 1) / 2 - (${tw} / ${max_y}) * mod(${t}, ${max_y})):
y=
if(eq(${stage}, 0), -mod(${t}, ${max_y}),
if(eq(${stage}, 1), -${max_y},
if(eq(${stage}, 2), -${max_y} + mod(${t}, ${max_y}),
0)))
'
" -t 26 ${pref}.mp4
00:01:51
#! /bin/bash
pref="`basename $0 .sh`"
#
ifn="named_color.png"  # 4732x3129
ow=4732 ; oh=3129
tw=$((${ow} / 4)) ; th=1080
max_y=$((${oh} - ${th}))
#
t="(t*400)"
floor="floor(${t} / ${max_y})"
stage="mod(${floor}, 4)"
#
ffmpeg -y -i ${ifn} -filter_complex "
color=0xDDDDDD:s=${tw}x${th}[bg];
[0:v]loop=-1:size=2,drawgrid=w=${tw}:h=${max_y}:c=black:t=3[0v];

[bg][0v]
overlay='
x=if(eq(${stage}, 0) + eq(${stage}, 2), -${tw} * ${floor} / 2,
-${tw} * (${floor} - 1) / 2 - (${tw} / ${max_y}) * mod(${t}, ${max_y})):
y=
if(eq(${stage}, 0), -mod(${t}, ${max_y}),
if(eq(${stage}, 1), -${max_y},
if(eq(${stage}, 2), -${max_y} + mod(${t}, ${max_y}),
0)))
'
,pad='1920:1080:(ow-iw)/2:0:0xDDDDDD'
" -t 38 ${pref}.mp4
00:02:29
#! /bin/bash
pref="`basename $0 .sh`"
#
ifn="named_color.png"  # 4732x3129
ow=4732 ; oh=3129
tw=$((${ow} / 4)) ; th=1080
max_y=$((${oh} - ${th}))
#
t="(t*400)"
floor="floor(${t} / ${max_y})"
stage="mod(${floor}, 4)"
#
ffmpeg -y -i ${ifn} -filter_complex "
color=0xDDDDDD:s=1920x${th}[bg];
[0:v]loop=-1:size=2,drawgrid=w=${tw}:h=${max_y}:c=black:t=3[0v];

[bg][0v]
overlay='
x=if(eq(${stage}, 0) + eq(${stage}, 2), -${tw} * ${floor} / 2,
-${tw} * (${floor} - 1) / 2 - (${tw} / ${max_y}) * mod(${t}, ${max_y})):
y=
if(eq(${stage}, 0), -mod(${t}, ${max_y}),
if(eq(${stage}, 1), -${max_y},
if(eq(${stage}, 2), -${max_y} + mod(${t}, ${max_y}),
0)))
'

" -t 38 ${pref}.mp4
00:03:07
#! /bin/bash
pref="`basename $0 .sh`"
#
ifn="named_color.png"  # 4732x3129
ow=4732 ; oh=3129
tw=$((${ow} / 4)) ; th=1080
max_y=$((${oh} - ${th}))
left=$(((1920 - ${tw}) / 2))
#
t="(t*400)"
floor="floor(${t} / ${max_y})"
stage="mod(${floor}, 4)"
#
ffmpeg -y -i ${ifn} -filter_complex "
color=0xDDDDDD:s=1920x${th}[bg];
[0:v]loop=-1:size=2,drawgrid=w=${tw}:h=${max_y}:c=black:t=3[0v];

[bg][0v]
overlay='
x=if(eq(${stage}, 0) + eq(${stage}, 2), -${tw} * ${floor} / 2 + ${left},
-${tw} * (${floor} - 1) / 2 + ${left} - (${tw} / ${max_y}) * mod(${t}, ${max_y})):
y=
if(eq(${stage}, 0), -mod(${t}, ${max_y}),
if(eq(${stage}, 1), -${max_y},
if(eq(${stage}, 2), -${max_y} + mod(${t}, ${max_y}),
0)))
'

" -t 38 ${pref}.mp4

For the `circo’ and `twopi’ layouts of Graphviz

When you use Graphviz with `circo’ or `twopi’ as the layout engine, the result will be painful to browse with an ordinary image viewer.

[me@host: ~]$ egrep -v '(717c254aeffbb527dabfc|"385")' \
> twopi2.gv.txt > twopi2_except_717c254aeffbb527dabfc.gv.txt
[me@host: ~]$ dot -Ktwopi -Tpng -Gdpi=55 \
> twopi2_except_717c254aeffbb527dabfc.gv.txt > twopi2.png

In such a case, it is useful to simply use cos for x and sin for y:

#! /bin/sh
pref="`basename $0 .sh`"
#
ifn="twopi2.png"  # 4011x4170
r=1900
#
t="(t / 80 * (2*PI))"
#
ffmpeg -y -i "${ifn}" -filter_complex "
color=0xEFEFEF:s=1920x1080,loop=-1:size=2[bg];
[bg][0:v]overlay='
x=-(${r} + ${r} * (1 - 0.2 * floor(${t} / (2*PI))) * cos(${t}) - 960):
y=-(${r} - ${r} * (1 - 0.2 * floor(${t} / (2*PI))) * sin(${t}) - 540)
'
,scale=1920*2:-1,crop=1920:1080
" -t 240 ${pref}.mp4
Watch on youtube.com

Capturing webpage as image and converting it to video

If you want to capture web pages into video, presumably your purpose is to introduce the page or to explain how to read it. Otherwise, you should know that in most cases this is annoying for the reader: forced scrolling that ignores the reader’s reading speed is not what the reader wants.

Anyway, the task of capturing a web page, which was very difficult until a while ago, has become much easier with the advent of Chrome headless.

If some trial and error is acceptable, a simple script like this may be sufficient:

webpage2movie.sh
#! /bin/bash
#
# usage: webpage2movie.sh url [window width] [window height]
#
#    webpage2movie.sh http://example.com 1280 4096
#

# default: in the case of Windows
chrome="${chrome:-/c/Program Files (x86)/Google/Chrome/Application/chrome}"

#
url="${1:-https://en.wikipedia.org/wiki/Lunar_phase}"
base="`basename \"${url}\"`"
ww="${2:-1120}"
wh="${3:-$((1080*6))}"

#
# 1. To use Chrome's headless mode, you must install a recent version of Chrome.
# 2. Currently, --disable-gpu cannot be omitted.
# 3. --screenshot can take the output path.
# 4. You can think of --force-device-scale-factor as controlling
#    the resolution of the output image.
# 5. A higher --force-device-scale-factor takes much more time.
# 6. This approach requires trial and error to capture the entire page.
#
"${chrome}" \
    --headless \
    --disable-gpu \
    --window-size=${ww},${wh} \
    --force-device-scale-factor=${sc:-1.0} \
    --screenshot=`pwd -W`/"${base}.png" \
    "${url}"
# Note: "pwd -W" is of MSYS bash's own feature. If you use the other
#       environment like Unix, use "pwd" (i,e,, without "-W")

tdur=${tdur:-90}
pbdur=${pbdur:-5}  # pause before scrolling
padur=${padur:-6}  # pause after scrolling
#
ffmpeg -y -i "${base}.png" -filter_complex "
color=black:s=${vw:-1280}x${vh:-720}:d=${tdur}[vb];
[0:v]loop=-1:size=2,scale=${vw:-1280}:-1[vt];
[vb][vt]overlay='
shortest=1
:x=0
:y=-min(max(0, t - ${pbdur}), (${tdur} - ${pbdur} - ${padur})) /
(${tdur} - ${pbdur} - ${padur}) * (h - H)
'
" "${base}.mp4"
[me@host: ~]$ tdur=110 sc=3.0 ./webpage2movie.sh https://en.wikipedia.org/wiki/Lunar_phase 1280 $((1080*7))
Watch on youtube.com
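The overlay `y’ expression in webpage2movie.sh implements a pause-scroll-pause motion. A Python model (illustrative only; the 7000-pixel scaled screenshot height is a hypothetical value):

```python
def scroll_y(t, tdur=90, pbdur=5, padur=6, h=7000, H=720):
    # hold for pbdur seconds, scroll linearly through the (h - H)
    # off-screen pixels, then hold for padur seconds at the bottom
    scroll = tdur - pbdur - padur
    return -min(max(0.0, t - pbdur), scroll) / scroll * (h - H)
```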

Using Puppeteer

If you have node.js installed, you can use Puppeteer:

Puppeteer is a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol. Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium.

For example:

ss_fullpage.js
/*
 * Usage: node thisscript.js url out.png viewportwidth viewportheight scalefactor
 * ex)
 *    node thisscript.js http://example.com example.png 1280 720 1.2
 */
'use strict';

/*
 * puppeteer-core doesn't automatically download Chromium when installed.
 * see:
 *   https://github.com/GoogleChrome/puppeteer/blob/master/docs/api.md#puppeteer-vs-puppeteer-core
 */
const puppeteer = require('puppeteer-core');

/*
 * in the case of "puppeteer-core",
 * we need to call puppeteer.connect([options]) or puppeteer.launch([options])
 * with an explicit executablePath option.
 */
const executablePath = "c:/Program Files (x86)/Google/Chrome/Application/chrome.exe";

(async() => {
    const browser = await puppeteer.launch(
        {
            headless: true,  /* Defaults to true unless the devtools option is true. */
            executablePath: executablePath,

            /* Slows down Puppeteer operations by the specified amount of milliseconds. */
            /*slowMo: 1000,*/

            defaultViewport: {
                width: parseInt(process.argv[4]),
                height: parseInt(process.argv[5]),
                deviceScaleFactor: parseFloat(process.argv[6]), /* --force-device-scale-factor */
            },
        });
    const page = await browser.newPage();
    await page.goto(
        process.argv[2]  /* [node, myscript.js, ...] */
    );
    await page.screenshot({
        path: process.argv[3],
        fullPage: true
    });
    await browser.close();
})();
[me@host: ~]$ # Depending on how npm installs modules, you may need:
[me@host: ~]$ #     export NODE_PATH='C:/Users/hhsprings/AppData/Roaming/npm/node_modules'
[me@host: ~]$ # (This path is from my machine; replace "hhsprings" with your
[me@host: ~]$ # own username.)
[me@host: ~]$
[me@host: ~]$ node ss_fullpage.js \
> 'https://hhsprings.bitbucket.io/docs/programming/examples/ffmpeg/drawing_texts/drawtext.html' \
> drawtext.png 1280 720 1.5

Let’s convert to video in the same way as the previous example:

still2movie.sh
#! /bin/bash
infile="${1}"
base="`basename \"${infile}\" .png`"

tdur=${tdur:-90}
pbdur=${pbdur:-5}  # pause before scrolling
padur=${padur:-5}  # pause after scrolling
#
ffmpeg -y -i "${base}.png" -filter_complex "
color=black:s=${vw:-1280}x${vh:-720}:d=${tdur}[vb];
[0:v]loop=-1:size=2,scale=${vw:-1280}:-1[vt];
[vb][vt]overlay='
shortest=1
:x=0
:y=-min(max(0, t - ${pbdur}), (${tdur} - ${pbdur} - ${padur})) /
(${tdur} - ${pbdur} - ${padur}) * (h - H)
'
" "${base}.mp4"
[me@host: ~]$ tdur=120 vw=1920 vh=1080 ./still2movie.sh drawtext.png
Watch on youtube.com

As a superfluous aside: `viewing the huge still image comfortably’

If your goal is simply to browse large images comfortably, everything introduced here is overkill. Before applying these methods, ask yourself “why do I need to convert this to a video?” Unless there is a purpose such as “explanation”, making it a video is meaningless in most cases.

Very unfortunately, the “Photo Viewer” initially bundled with Windows is extremely poor: its zooming, scrolling, and every other operation are so bad that it could be called “the world’s worst.” But if that is your reason for wanting to make a video, your idea is wrong.

If so, you should instead search for a “Windows photo viewer alternative”. My favorite is Honeyview:

Watch on youtube.com