Stable Diffusion with Diffusers
This tutorial demonstrates how to generate images with a pretrained Stable Diffusion model using the diffusers library.
NOTE:
The source code for this tutorial can be found at github.com/dstack-examples.
Requirements
Here is the list of Python libraries that we will use:
diffusers
transformers
scipy
ftfy
accelerate
safetensors
NOTE:
We're using the safetensors library because it implements a new, simple format for storing tensors safely (as opposed to pickle) that is still fast (zero-copy).
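For illustration only (this snippet isn't part of the tutorial's scripts), here's a minimal sketch of how safetensors serializes tensors without pickle:

import torch
from safetensors.torch import save_file, load_file

# Save tensors in the safetensors format: safe to load from untrusted
# sources, since there is no arbitrary code execution (unlike pickle)
save_file({"embedding": torch.zeros(2, 4)}, "example.safetensors")

# Load them back; reads are fast (zero-copy)
tensors = load_file("example.safetensors")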
To ensure our scripts can run smoothly across all environments, let's list them in the stable_diffusion/requirements.txt file.
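Based on the list above, the file can be as simple as one library per line:

diffusers
transformers
scipy
ftfy
accelerate
safetensors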
You can also install these libraries locally:
$ pip install -r stable_diffusion/requirements.txt
Also, because we'll use the dstack CLI, let's install it locally:
$ pip install dstack -U
Download the pre-trained model
In our tutorial, we'll use the runwayml/stable-diffusion-v1-5 model (pretrained by Runway).
Let's create the following Python file:
import shutil
from diffusers import StableDiffusionPipeline

def main():
    # Download the model and locate its cache folder
    _, cache_folder = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", return_cached_folder=True)
    # Copy the files into a plain local folder (no symlinks)
    shutil.copytree(cache_folder, "./models/runwayml/stable-diffusion-v1-5", dirs_exist_ok=True)

if __name__ == "__main__":
    main()
NOTE:
By default, diffusers downloads the model to its own cache folder built using symlinks. Since dstack doesn't support symlinks in artifacts, we're copying the model files to the local models folder.
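If you want to test the download step on its own, you can also run the script directly (assuming you saved it as stable_diffusion/stable_diffusion.py, the path the workflow below refers to):

$ python stable_diffusion/stable_diffusion.py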
To run a script via dstack, it must be defined as a workflow in a YAML file under .dstack/workflows.
Let's define the following workflow YAML file:
workflows:
  - name: stable-diffusion
    provider: bash
    commands:
      - pip install -r stable_diffusion/requirements.txt
      - python stable_diffusion/stable_diffusion.py
    artifacts:
      - path: ./models
    resources:
      memory: 16GB
Now, the workflow can be run anywhere via the dstack CLI.
NOTE:
Before you run a workflow via dstack, make sure your project has a remote Git branch configured, and invoke the dstack init command, which ensures that dstack can access the repository:

$ dstack init
Here's how to run a dstack workflow locally:
$ dstack run stable-diffusion
Once you run it, dstack will run the script and save the models folder as an artifact. After that, you can reuse it in other workflows.
Attach an interactive IDE
Sometimes, before you can run a workflow, you may want to run code interactively, e.g. via an IDE or a notebook.
Look at the following example:
workflows:
  - name: code-stable
    provider: code
    deps:
      - workflow: stable-diffusion
    setup:
      - pip install -r stable_diffusion/requirements.txt
    resources:
      memory: 16GB
As you see, the code-stable workflow refers to the stable-diffusion workflow as a dependency, which makes its output artifact (the models folder) available.
If you run it, dstack will launch VS Code with the code, the pretrained model, and the Python environment:
$ dstack run code-stable
Generate images
Let's write a script that generates images using a pre-trained model and given prompts:
import argparse
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-P", "--prompt", action="append", required=True)
    args = parser.parse_args()

    # Load the model from the local folder produced by the stable-diffusion workflow
    pipe = StableDiffusionPipeline.from_pretrained(
        "./models/runwayml/stable-diffusion-v1-5", local_files_only=True)
    if torch.cuda.is_available():
        pipe.to("cuda")

    # Generate one image per prompt and save them to the local output folder
    images = pipe(args.prompt).images
    output = Path("./output")
    output.mkdir(parents=True, exist_ok=True)
    for i, image in enumerate(images):
        image.save(output / f"{i}.png")
The script loads the model from the local models folder, generates images, and saves them to the local output folder.
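If the model has already been downloaded to the local models folder, you can also try the script directly, outside of dstack (assuming it's saved as stable_diffusion/prompt_stable.py, the path used in the workflow below):

$ python stable_diffusion/prompt_stable.py -P "cats in hats"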
Let's define it in our workflow YAML file:
workflows:
  - name: prompt-stable
    provider: bash
    deps:
      - workflow: stable-diffusion
    commands:
      - pip install -r stable_diffusion/requirements.txt
      - python stable_diffusion/prompt_stable.py ${{ run.args }}
    artifacts:
      - path: ./output
    resources:
      memory: 16GB
NOTE:
The dstack run command allows you to pass arguments to the workflow via ${{ run.args }}.
Let's run the workflow locally:
$ dstack run prompt-stable -P "cats in hats"
The output artifacts of local runs are stored under ~/.dstack/artifacts.
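For example, you can browse them with a plain shell command (the exact subdirectory layout depends on your repository and run, so this is just a sketch):

$ ls -R ~/.dstack/artifacts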
Here's an example of the prompt-stable workflow output:

[Example images generated by the prompt-stable workflow]
Configure a remote
By default, workflows in dstack run locally. However, you can configure a remote to run your workflows. For instance, you can set up your AWS account as a remote.
To configure a remote, run the following command:
$ dstack config
? Choose backend: aws
? AWS profile: default
? Choose AWS region: eu-west-1
? Choose S3 bucket: dstack-142421590066-eu-west-1
? Choose EC2 subnet: no preference
$
Run workflows remotely
Once a remote is configured, you can use the --remote flag with the dstack run command to run workflows remotely.
Let's first run the stable-diffusion workflow:
$ dstack run stable-diffusion --remote
NOTE:
When you run a remote workflow, dstack automatically creates resources in the configured cloud and releases them once the workflow is finished.
Now, you can run the prompt-stable workflow remotely as well:

$ dstack run prompt-stable --remote --gpu-name V100 -P "cats in hats"
NOTE:
You can configure the resources required to run the workflows either via the resources property in YAML or via the dstack run command's arguments, such as --gpu, --gpu-name, etc.
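For example, here's a minimal sketch of requesting a GPU via the resources property; the gpu block with a name field is an assumption that mirrors the --gpu-name flag above, so check the dstack documentation for the exact schema:

resources:
  memory: 16GB
  gpu:
    name: V100  # assumed equivalent of --gpu-name V100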