Dogfooding

Built with VibeFrame,
for VibeFrame.

This demo video was created entirely with the VibeFrame CLI. 13 commands. 5 AI providers. Zero manual editing.

CLI Workflow

Text to video in your terminal

The full pipeline — generate, animate, narrate, compose — all from CLI commands.

Generate Image
$ vibe gen img "sunset over mountains" -o scene.png
Image to Video
$ vibe gen vid "camera zooms in" -i scene.png -o scene.mp4
Generate Narration
$ vibe gen tts "Welcome to VibeFrame" -o narration.mp3
Generate Music
$ vibe gen music "cinematic ambient" -o bgm.mp3 -d 20
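The four commands above can be chained into a single script. A minimal sketch, assuming `vibe` is installed and configured with provider API keys, and that the flags behave as shown in the examples:

```shell
#!/usr/bin/env bash
# Sketch: run the full generate → animate → narrate → score pipeline.
set -euo pipefail

vibe gen img "sunset over mountains" -o scene.png          # source still
vibe gen vid "camera zooms in" -i scene.png -o scene.mp4   # animate it
vibe gen tts "Welcome to VibeFrame" -o narration.mp3       # voiceover
vibe gen music "cinematic ambient" -o bgm.mp3 -d 20        # 20 s background track
```

With `set -euo pipefail`, the script stops at the first failed generation instead of composing from missing assets.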
Behind the Scenes

How it was made

Step-by-step: the exact CLI commands used to produce this demo video.

1. Generate source images

$ vibe gen img "futuristic terminal in space" -o scene1.png -r 16:9

3 images generated with Gemini Nano Banana (< 10 sec each)

2. Animate with Image-to-Video

$ vibe gen vid "camera slowly pushes in" -i scene1.png -o scene1.mp4 -d 5

3 scene videos generated with Grok Imagine Video (native audio, $0.07/sec)

3. Analyze with Video Understanding

$ vibe az media scene1.mp4 "Describe this video and suggest narration"

AI analyzed mood, camera movement, and suggested narration text

4. Generate narration & music

$ vibe gen tts "VibeFrame. AI-native video editing." -o narration.mp3

3 narrations (ElevenLabs TTS) + 20s cinematic BGM (ElevenLabs Music)

5. Compose final video

$ ffmpeg concat + audio mix → vibeframe-demo.mp4

19.5s demo video with synced narration, BGM, and 3 animated scenes
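The exact compose command isn't shown above, only summarized. A typical ffmpeg concat-plus-mix for three scenes, one narration track, and background music might look like this (filenames assumed from the earlier steps):

```shell
# Sketch of a possible compose step, not the exact command used.
# 1. Concatenate the three scene videos with ffmpeg's concat demuxer.
printf "file '%s'\n" scene1.mp4 scene2.mp4 scene3.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy scenes.mp4

# 2. Duck the BGM under the narration with amix, then mux onto the video.
ffmpeg -i scenes.mp4 -i narration.mp3 -i bgm.mp3 \
  -filter_complex "[2:a]volume=0.3[bg];[1:a][bg]amix=inputs=2:duration=first[a]" \
  -map 0:v -map "[a]" -c:v copy -shortest vibeframe-demo.mp4
```

`-c copy` in the concat step avoids re-encoding the video, which works because all three scenes came from the same generator with matching codecs and resolution.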

Create your own

Open source. MIT licensed. One install command.

$ curl -fsSL https://vibeframe.ai/install.sh | bash