
@nateraw
Created February 18, 2023 23:55
sdv-SD2-1-example.ipynb
{
"nbformat": 4,
"nbformat_minor": 0,
"metadata": {
"colab": {
"provenance": [],
"authorship_tag": "ABX9TyPn/ls1tZkFLvKoDOMLPVyF",
"include_colab_link": true
},
"kernelspec": {
"name": "python3",
"display_name": "Python 3"
},
"language_info": {
"name": "python"
},
"accelerator": "GPU",
"gpuClass": "premium"
},
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/gist/nateraw/abaf9f6359ca90e3c95f2e1006011bd6/sdv-sd2-1-example.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "code",
"source": [
"! pip install git+https://github.com/nateraw/stable-diffusion-videos"
],
"metadata": {
"id": "rTl3OkX88Or0"
},
"execution_count": null,
"outputs": []
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"id": "ru1c_UQ88Lvh"
},
"outputs": [],
"source": [
"import torch\n",
"\n",
"from stable_diffusion_videos import StableDiffusionWalkPipeline\n",
"from diffusers import DPMSolverMultistepScheduler\n",
"\n",
"device = \"mps\" if torch.backends.mps.is_available() else \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"torch_dtype = torch.float16 if device == \"cuda\" else torch.float32\n",
"pipe = StableDiffusionWalkPipeline.from_pretrained(\n",
" \"stabilityai/stable-diffusion-2-1\",\n",
" torch_dtype=torch_dtype,\n",
").to(device)\n",
"pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)"
]
},
{
"cell_type": "code",
"source": [
"pipe.walk(\n",
" prompts=['a cat', 'a dog'],\n",
" seeds=[1234, 4321],\n",
" num_interpolation_steps=5,\n",
" num_inference_steps=50,\n",
" fps=5,\n",
")"
],
"metadata": {
"id": "o_C6Cn2n8P_B"
},
"execution_count": null,
"outputs": []
}
]
}
@quintendewilde

Love this!
Question: how would I make my videos longer?

Kind regards!!

@nateraw

nateraw commented Feb 21, 2023

Hey there, glad ya like it :)

There are a number of ways to make the video longer. Your video is generated with num_frames = num_interpolation_steps * (len(prompts) - 1) frames, and the video is num_frames / fps seconds long.

So...

  • add more prompts/seeds
  • increase num_interpolation_steps
  • decrease fps (this just makes the video slower, but longer in duration, since it's the same number of frames)

For example, the following would produce num_frames = 30 * 4 = 120 frames at 30 fps, so a 4-second video with decent quality.

prompts=['a cat', 'a dog', 'a bird', 'a horse', 'a camel']
seeds = [0, 1, 2, 3, 4]  # Set to anything you want, or random numbers. Must be the same length as prompts
pipe.walk(prompts=prompts, seeds=seeds, num_interpolation_steps=30, num_inference_steps=50, fps=30)

@quintendewilde

quintendewilde commented Feb 21, 2023 via email
