This demo video was created entirely using VibeFrame CLI. 13 commands. 5 AI providers. Zero manual editing.
The full pipeline — generate, animate, narrate, compose — all from CLI commands.
Each scene started as a generated image, then animated with a single command.
A floating holographic terminal displaying 'vibe generate video' — the CLI as a gateway to creation.


$ vibe gen vid "terminal slowly rotates, text pulses with energy" -i scene1.png -o scene1.mp4
Futuristic control room with holographic screens — representing the multi-provider AI pipeline.


$ vibe gen vid "camera flies through workspace, screens flicker" -i scene2.png -o scene2.mp4
A glowing play button with particle effects — the finished video, ready to ship.


$ vibe gen vid "play button glows, particles converge, burst of light" -i scene3.png -o scene3.mp4
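The three scene commands above can also be run as one batch script. This is a sketch, not part of the original demo: the `vibe gen vid` invocations are exactly those shown above, and the guard (an addition for illustration) lets the script exit cleanly on machines where the `vibe` CLI is not installed.

```shell
#!/bin/sh
# Batch the three scene animations shown above.
# Guard is illustrative: skip gracefully if the vibe CLI is not on PATH.
if command -v vibe >/dev/null 2>&1; then
  vibe gen vid "terminal slowly rotates, text pulses with energy" -i scene1.png -o scene1.mp4
  vibe gen vid "camera flies through workspace, screens flicker" -i scene2.png -o scene2.mp4
  vibe gen vid "play button glows, particles converge, burst of light" -i scene3.png -o scene3.mp4
else
  echo "vibe CLI not found; skipping"
fi
```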
Step-by-step: the exact CLI commands used to produce this demo video.
3 images generated with Gemini Nano Banana (< 10 sec each)
3 scene videos generated with Grok Imagine Video (native audio, $0.07/sec)
AI analyzed mood and camera movement, then suggested narration text
3 narrations (ElevenLabs TTS) + 20s cinematic BGM (ElevenLabs Music)
19.5s demo video with synced narration, BGM, and 3 animated scenes
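As a back-of-envelope check on the Grok Imagine pricing above: assuming the three scenes together account for the full 19.5 s runtime (an assumption; per-scene durations aren't listed), the total video-generation cost works out as:

```shell
# Hypothetical cost estimate: 19.5 s of generated video at $0.07/sec.
# Assumes the three scenes sum to the full demo runtime.
awk 'BEGIN { printf "USD %.3f\n", 19.5 * 0.07 }'   # prints: USD 1.365
```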
Open source. MIT licensed. One install command.