animatediff-cli is a command line tool that helps you generate animations using Stable Diffusion models, and animatediff-cli-prompt-travel adds a prompt travel feature to make it even more powerful. Follow the instructions below to install this AI animation tool on Windows.
Requirements
- Python 3.10 – https://www.python.org/downloads/
- Git – https://git-scm.com/downloads
- Nvidia GPU – a high-end card such as a 4070, 4080, or 4090 is ideal, but the example below was generated on an RTX 2080 Ti
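Before installing, you can confirm everything is in place from a command prompt (the exact version numbers will vary on your machine):

python --version
git --version
nvidia-smi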
Installation
- Open up a command prompt and change directory to where you want to install animatediff-cli-prompt-travel.
- Enter the following commands to install it.
git clone https://github.com/s9roll7/animatediff-cli-prompt-travel.git
cd animatediff-cli-prompt-travel
python -m venv venv
venv\Scripts\activate.bat
set PYTHONUTF8=1
python -m pip install --upgrade pip
python -m pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
python -m pip install -e .
python -m pip install -e .[stylize]
python -m pip install -e .[dwpose]
pip install pytorch_lightning
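A quick way to sanity-check the install is to print the CLI's help text; it should list the generate command used later in this post:

animatediff --help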
- Download Stable Diffusion models and put them in data\models\sd. If you already have some Stable Diffusion models, you can use mklink to make them available to animatediff-cli-prompt-travel without copying. For example, I entered this in an elevated command prompt under the data\models\sd directory.
mklink MeinaV11.safetensors d:\stable-diffusion-webui\models\Stable-diffusion\MeinaV11.safetensors
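If you have many models, a cmd for loop can link them all in one go. This is just a sketch: adjust the source path to your own installation, avoid paths containing spaces, and run it from data\models\sd in an elevated prompt:

for %f in (d:\stable-diffusion-webui\models\Stable-diffusion\*.safetensors) do mklink "%~nxf" "%f"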
- Download Lora models and put them in data\share\Lora. Note that you have to create this directory (see the commands below).
- Download VAE models and put them in data\share\VAE. This directory also has to be created.
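Both directories can be created from the repository root; mkdir creates the intermediate share folder automatically:

mkdir data\share\Lora
mkdir data\share\VAE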
- Download motion module models and put them in data\models\motion-module. Links can be found at https://github.com/guoyww/AnimateDiff.
- Download embeddings and put them in data\embeddings.
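When all the downloads are in place, the data directory should look roughly like this (the file names shown are just the ones used in this post's example):

data
  embeddings
  models
    motion-module
      mm_sd_v15_v2.safetensors
    sd
      MeinaV11.safetensors
  share
    Lora
    VAE
      kl-f8-anime.ckpt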
Example
- Use a text editor to create a file called Meina.json with the content below and save it under config\prompts. The keys in prompt_map are frame numbers: the prompt gradually travels from one keyframe prompt to the next as the animation plays, which is the prompt travel feature.
{ "name": "MeinaV11", "path": "models/sd/MeinaV11.safetensors", "vae_path": "share/VAE/kl-f8-anime.ckpt", "motion_module": "models/motion-module/mm_sd_v15_v2.safetensors", "context_schedule": "uniform", "lcm_map": {}, "gradual_latent_hires_fix_map": {}, "compile": false, "tensor_interpolation_slerp": true, "seed": [ -1, -1, -1 ], "scheduler": "euler", "steps": 50, "guidance_scale": 7.0, "unet_batch_size": 1, "clip_skip": 2, "prompt_fixed_ratio": 0.5, "head_prompt": "masterpiece, best quality, a beautiful and detailed portriat of a girl in a dance club, dancing, ", "prompt_map": { "0": "smile, wearing white dress,", "32": "wearing tank top and shorts, in the spot light,", "64": "wearing bikini,", "96": "wearing black dress, in the spot light," }, "tail_prompt": "clothed, awesome and detailed background", "n_prompt": [ "(worst quality, low quality:1.4), badhandv4, bad-hands-5, BadDream, nudity,simple background,border,mouth closed,text, patreon,bed,bedroom,white background,((monochrome)),sketch" ], "is_single_prompt_mode": false, "lora_map": {}, "motion_lora_map": {}, "ip_adapter_map": {}, "img2img_map": {}, "region_map": {}, "controlnet_map": {}, "upscale_config": {}, "stylize_config": {}, "output": { "format": "mp4", "fps": 8, "encode_param": { "crf": 10 } }, "result": {} }
- Open up a command prompt and change directory to animatediff-cli-prompt-travel.
- Activate the venv.
venv\Scripts\activate.bat
- Type the following to run it.
animatediff generate -c config\prompts\Meina.json -W 512 -H 640 -L 128 -C 16
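For reference, here is the same command annotated with what each flag controls. This is my reading of the CLI's options, so double-check against animatediff generate --help:

rem -c  prompt config file to use
rem -W  output width in pixels
rem -H  output height in pixels
rem -L  length of the animation in frames (128 frames at 8 fps is a 16-second clip)
rem -C  context frame count for the motion module
animatediff generate -c config\prompts\Meina.json -W 512 -H 640 -L 128 -C 16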
- When it's done, the generated video and images are in the output folder. Generation takes about 16 minutes on an Nvidia RTX 2080 Ti.
Notes
- Not all Stable Diffusion and Lora models work well with animatediff, so you have to experiment to find the best models for you. MeinaV11, used in the example above, gives me good results.
- You can follow my other post to upscale the video and increase the frame rate.