diff --git a/.gitignore b/.gitignore
index 6010b30..d28572a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -1,5 +1,6 @@
 model_zoo/
 outputs/
+*benchmark_tmp.csv
 
 # Byte-compiled / optimized / DLL files
 __pycache__/
@@ -130,6 +131,7 @@ venv/
 ENV/
 env.bak/
 venv.bak/
+.venv*/
 
 # Spyder project settings
 .spyderproject
@@ -160,4 +162,6 @@ cython_debug/
 # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
 # and can be added to the global gitignore or merged into this file. For a more nuclear
 # option (not recommended) you can uncomment the following to ignore the entire idea folder.
-#.idea/
\ No newline at end of file
+#.idea/
+
+.vscode/
\ No newline at end of file
diff --git a/Dockerfile b/Dockerfile
new file mode 100644
index 0000000..8be068f
--- /dev/null
+++ b/Dockerfile
@@ -0,0 +1,8 @@
+FROM nvidia/cuda:11.2.1-base-ubuntu20.04
+RUN apt-get update && \
+    apt-get install --no-install-recommends --no-install-suggests -y \
+    curl python3 python3-pip
+WORKDIR /lambda_diffusers
+COPY . .
+RUN pip3 install --no-cache-dir -r requirements.txt
+CMD ["python3", "-u", "scripts/benchmark.py", "--samples", "1,2,4,8,16"]
\ No newline at end of file
diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..ea7b0ed
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2022 Lambda, Inc
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/README.md b/README.md
index eea53d1..7306ced 100644
--- a/README.md
+++ b/README.md
@@ -5,6 +5,11 @@ _Additional models and pipelines for 🤗 Diffusers created by [Lambda Labs](htt
 - [Stable Diffusion Image Variations](#stable-diffusion-image-variations)
 - [Pokemon text to image](#pokemon-text-to-image)
 
+
+<div align="center">
+🦄 Other exciting ML projects at Lambda: ML Times, Distributed Training Guide, Text2Video, GPU Benchmark.
+</div>
+
 ## Installation
 
 ```bash
@@ -31,21 +36,33 @@ A fine-tuned version of Stable Diffusion conditioned on CLIP image embeddings to
 ### Usage
 
 ```python
-from pathlib import Path
-from lambda_diffusers import StableDiffusionImageEmbedPipeline
+from diffusers import StableDiffusionImageVariationPipeline
+from torchvision import transforms
 from PIL import Image
-import torch
-device = "cuda" if torch.cuda.is_available() else "cpu"
-pipe = StableDiffusionImageEmbedPipeline.from_pretrained("lambdalabs/sd-image-variations-diffusers")
-pipe = pipe.to(device)
-im = Image.open("your/input/image/here.jpg")
-num_samples = 4
-image = pipe(num_samples*[im], guidance_scale=3.0)
-image = image["sample"]
-base_path = Path("outputs/im2im")
-base_path.mkdir(exist_ok=True, parents=True)
-for idx, im in enumerate(image):
-    im.save(base_path/f"{idx:06}.jpg")
+
+device = "cuda:0"
+sd_pipe = StableDiffusionImageVariationPipeline.from_pretrained(
+    "lambdalabs/sd-image-variations-diffusers",
+    revision="v2.0",
+    )
+sd_pipe = sd_pipe.to(device)
+
+im = Image.open("path/to/image.jpg")
+tform = transforms.Compose([
+    transforms.ToTensor(),
+    transforms.Resize(
+        (224, 224),
+        interpolation=transforms.InterpolationMode.BICUBIC,
+        antialias=False,
+    ),
+    transforms.Normalize(
+        [0.48145466, 0.4578275, 0.40821073],
+        [0.26862954, 0.26130258, 0.27577711]),
+])
+inp = tform(im).to(device).unsqueeze(0)
+
+out = sd_pipe(inp, guidance_scale=3)
+out["images"][0].save("result.jpg")
 ```
 
 ## Pokemon text to image
 
 __Stable Diffusion fine tuned on Pokémon by [Lambda Labs](https://lambdalabs.co
 [](https://replicate.com/lambdal/text-to-pokemon)
 [](https://colab.research.google.com/github/LambdaLabsML/lambda-diffusers/blob/main/notebooks/pokemon_demo.ipynb)
+[](https://huggingface.co/spaces/lambdalabs/text-to-pokemon)
 
 Put in a text prompt and generate your own Pokémon character, no "prompt engineering" required!
@@ -75,7 +93,7 @@ import torch
 from diffusers import StableDiffusionPipeline
 from torch import autocast
 
-pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/sd-pokemon-diffusers", torch_dtype=torch.float16)
+pipe = StableDiffusionPipeline.from_pretrained("lambdalabs/sd-pokemon-diffusers", torch_dtype=torch.float16)
 pipe = pipe.to("cuda")
 
 prompt = "Yoda"
@@ -98,6 +116,36 @@ for idx, im in enumerate(images):
     im.save(f"{idx:06}.png")
 ```
 
+## Benchmarking inference
+
+We have updated the original benchmark using xformers and a newer version of Diffusers; see the [new results here](./docs/benchmark-update.md) (original results can still be found [here](./docs/benchmark.md)).
+
+### Usage
+
+Ensure that [NVIDIA container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) is installed on your system and then run the following:
+
+```bash
+git clone https://github.com/LambdaLabsML/lambda-diffusers.git
+cd lambda-diffusers/scripts
+make bench
+```
+
+Currently `xformers` does not support H100. The "without xformers" results below are generated by running the benchmark with `--xformers no` (can be set in `scripts/Makefile`).
+
+### Results
+
+With [xformers](https://github.com/facebookresearch/xformers), raw data can be found [here](./benchmarks/benchmark.csv).
+
+
+Without [xformers](https://github.com/facebookresearch/xformers), raw data can be found [here](./benchmarks/benchmark_no_xformers.csv).
+
+
+H100 MIG performance, raw data can be found [here](./benchmarks/benchmark_H100_MIG.csv).
+
+
+Cost analysis
+
+
 ## Links
 - [Captioned Pokémon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)
diff --git a/benchmarks/benchmark.csv b/benchmarks/benchmark.csv
new file mode 100644
index 0000000..aaa55ea
--- /dev/null
+++ b/benchmarks/benchmark.csv
@@ -0,0 +1,81 @@
+device,precision,autocast,xformers,runtime,n_samples,latency,memory
+NVIDIA A10,half,FALSE,TRUE,pytorch,1,2.01,3.13
+NVIDIA A10,single,FALSE,TRUE,pytorch,1,4.69,6.29
+NVIDIA A10,half,FALSE,TRUE,pytorch,2,3.65,4.3
+NVIDIA A10,single,FALSE,TRUE,pytorch,2,7.75,8.57
+NVIDIA A10,half,FALSE,TRUE,pytorch,4,6.68,6.63
+NVIDIA A10,single,FALSE,TRUE,pytorch,4,14.35,11.24
+NVIDIA A10,half,FALSE,TRUE,pytorch,8,12.93,11.05
+NVIDIA A10,single,FALSE,TRUE,pytorch,8,28.28,17.91
+NVIDIA A10,half,FALSE,TRUE,pytorch,16,24.65,19.86
+NVIDIA A10,single,FALSE,TRUE,pytorch,16,57.5,21.21
+NVIDIA A10,half,FALSE,TRUE,pytorch,32,48.79,7.37
+NVIDIA A10,single,FALSE,TRUE,pytorch,32,108.78,15.88
+NVIDIA A10,half,FALSE,TRUE,pytorch,64,108.26,17.54
+NVIDIA A10,single,FALSE,TRUE,pytorch,64,-1,-1
+NVIDIA A10,half,FALSE,TRUE,pytorch,128,212.94,22.18
+NVIDIA A10,single,FALSE,TRUE,pytorch,128,-1,-1
+NVIDIA A100 80GB PCIe,single,FALSE,TRUE,pytorch,1,1.78,6.1
+NVIDIA A100 80GB PCIe,half,FALSE,TRUE,pytorch,1,1.17,3.19
+NVIDIA A100 80GB PCIe,single,FALSE,TRUE,pytorch,2,3.68,8.03
+NVIDIA A100 80GB PCIe,half,FALSE,TRUE,pytorch,2,1.73,4.33
+NVIDIA A100 80GB PCIe,single,FALSE,TRUE,pytorch,4,5.56,11.53
+NVIDIA A100 80GB PCIe,half,FALSE,TRUE,pytorch,4,3.73,6.62
+NVIDIA A100 80GB PCIe,single,FALSE,TRUE,pytorch,8,10.95,18.12
+NVIDIA A100 80GB PCIe,half,FALSE,TRUE,pytorch,8,5.25,11.12
+NVIDIA A100 80GB PCIe,single,FALSE,TRUE,pytorch,16,21.05,33.04
+NVIDIA A100 80GB PCIe,half,FALSE,TRUE,pytorch,16,9.93,19.81
+NVIDIA A100 80GB PCIe,single,FALSE,TRUE,pytorch,32,41.02,14.41
+NVIDIA A100 80GB PCIe,half,FALSE,TRUE,pytorch,32,18.75,7.34
+NVIDIA A100 80GB PCIe,single,FALSE,TRUE,pytorch,64,80.45,26.17
+NVIDIA A100 80GB PCIe,half,FALSE,TRUE,pytorch,64,36.89,12.46
+NVIDIA A100 80GB PCIe,single,FALSE,TRUE,pytorch,128,161.52,48.01
+NVIDIA A100 80GB PCIe,half,FALSE,TRUE,pytorch,128,73.72,22.68
+NVIDIA A100-SXM4-40GB,single,FALSE,TRUE,pytorch,1,1.79,6.11
+NVIDIA A100-SXM4-40GB,half,FALSE,TRUE,pytorch,1,1.18,3.18
+NVIDIA A100-SXM4-40GB,single,FALSE,TRUE,pytorch,2,2.97,8.03
+NVIDIA A100-SXM4-40GB,half,FALSE,TRUE,pytorch,2,1.66,4.32
+NVIDIA A100-SXM4-40GB,single,FALSE,TRUE,pytorch,4,5.35,11.54
+NVIDIA A100-SXM4-40GB,half,FALSE,TRUE,pytorch,4,2.68,6.61
+NVIDIA A100-SXM4-40GB,single,FALSE,TRUE,pytorch,8,10.16,18.11
+NVIDIA A100-SXM4-40GB,half,FALSE,TRUE,pytorch,8,4.85,11.12
+NVIDIA A100-SXM4-40GB,half,FALSE,TRUE,pytorch,16,9.13,19.8
+NVIDIA A100-SXM4-40GB,single,FALSE,TRUE,pytorch,16,19.71,33.25
+NVIDIA A100-SXM4-40GB,half,FALSE,TRUE,pytorch,32,17.72,7.33
+NVIDIA A100-SXM4-40GB,single,FALSE,TRUE,pytorch,32,39.03,14.39
+NVIDIA A100-SXM4-40GB,half,FALSE,TRUE,pytorch,64,34.92,13.79
+NVIDIA A100-SXM4-40GB,single,FALSE,TRUE,pytorch,64,77.05,26.34
+NVIDIA A100-SXM4-40GB,half,FALSE,TRUE,pytorch,128,69.31,22.68
+NVIDIA A100-SXM4-40GB,single,FALSE,TRUE,pytorch,128,-1,-1
+NVIDIA RTX A6000,single,FALSE,TRUE,pytorch,1,3.61,6.35
+NVIDIA RTX A6000,half,FALSE,TRUE,pytorch,1,1.93,3.15
+NVIDIA RTX A6000,single,FALSE,TRUE,pytorch,2,5.57,7.73
+NVIDIA RTX A6000,half,FALSE,TRUE,pytorch,2,2.84,4.37
+NVIDIA RTX A6000,single,FALSE,TRUE,pytorch,4,9.67,10.7
+NVIDIA RTX A6000,half,FALSE,TRUE,pytorch,4,4.56,6.64
+NVIDIA RTX A6000,single,FALSE,TRUE,pytorch,8,18.96,16.87
+NVIDIA RTX A6000,half,FALSE,TRUE,pytorch,8,8.39,11.19
+NVIDIA RTX A6000,single,FALSE,TRUE,pytorch,16,37.89,28.82
+NVIDIA RTX A6000,half,FALSE,TRUE,pytorch,16,15.62,20.01
+NVIDIA RTX A6000,single,FALSE,TRUE,pytorch,32,71.57,14.26
+NVIDIA RTX A6000,half,FALSE,TRUE,pytorch,32,31.19,7.65
+NVIDIA RTX A6000,single,FALSE,TRUE,pytorch,64,143.26,26.42
+NVIDIA RTX A6000,half,FALSE,TRUE,pytorch,64,65.72,23.84
+NVIDIA RTX A6000,single,FALSE,TRUE,pytorch,128,287.96,47.92
+NVIDIA RTX A6000,half,FALSE,TRUE,pytorch,128,130.38,34.36
+Tesla V100-SXM2-16GB,single,FALSE,TRUE,pytorch,1,4.42,5.7
+Tesla V100-SXM2-16GB,half,FALSE,TRUE,pytorch,1,1.84,3.24
+Tesla V100-SXM2-16GB,single,FALSE,TRUE,pytorch,2,8.33,8.6
+Tesla V100-SXM2-16GB,half,FALSE,TRUE,pytorch,2,3.08,4.17
+Tesla V100-SXM2-16GB,single,FALSE,TRUE,pytorch,4,16.56,11.86
+Tesla V100-SXM2-16GB,half,FALSE,TRUE,pytorch,4,5.62,6.42
+Tesla V100-SXM2-16GB,single,FALSE,TRUE,pytorch,8,28.71,15.88
+Tesla V100-SXM2-16GB,half,FALSE,TRUE,pytorch,8,10.64,10.45
+Tesla V100-SXM2-16GB,half,FALSE,TRUE,pytorch,16,20.96,10.87
+Tesla V100-SXM2-16GB,single,FALSE,TRUE,pytorch,16,-1,-1
+Tesla V100-SXM2-16GB,half,FALSE,TRUE,pytorch,32,40.13,7.73
+Tesla V100-SXM2-16GB,single,FALSE,TRUE,pytorch,32,110.17,15.72
+Tesla V100-SXM2-16GB,half,FALSE,TRUE,pytorch,64,79.82,13.51
+Tesla V100-SXM2-16GB,single,FALSE,TRUE,pytorch,64,-1,-1
+Tesla V100-SXM2-16GB,single,FALSE,TRUE,pytorch,128,-1,-1
+Tesla V100-SXM2-16GB,half,FALSE,TRUE,pytorch,128,-1,-1
diff --git a/benchmarks/benchmark_H100_MIG.csv b/benchmarks/benchmark_H100_MIG.csv
new file mode 100644
index 0000000..87c70dd
--- /dev/null
+++ b/benchmarks/benchmark_H100_MIG.csv
@@ -0,0 +1,65 @@
+device,precision,autocast,xformers,runtime,n_samples,latency,memory
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,1,1.73,7.7
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,1,1.06,3.46
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,2,2.66,9.79
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,2,1.73,4.57
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,4,4.47,18.49
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,4,2.63,8.91
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,8,8.16,23.86
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,8,4.97,12.57
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,16,15.98,42.38
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,16,9.61,29.01
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,32,32.04,80.51
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,32,19.07,55.57
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe MIG 4g.40gb,single,FALSE,FALSE,pytorch,1,2.3,7.74
+NVIDIA H100 PCIe MIG 4g.40gb,half,FALSE,FALSE,pytorch,1,1.52,3.45
+NVIDIA H100 PCIe MIG 4g.40gb,single,FALSE,FALSE,pytorch,2,3.95,9.48
+NVIDIA H100 PCIe MIG 4g.40gb,half,FALSE,FALSE,pytorch,2,2.42,4.57
+NVIDIA H100 PCIe MIG 4g.40gb,single,FALSE,FALSE,pytorch,4,7.12,18.2
+NVIDIA H100 PCIe MIG 4g.40gb,half,FALSE,FALSE,pytorch,4,4.17,8.9
+NVIDIA H100 PCIe MIG 4g.40gb,single,FALSE,FALSE,pytorch,8,13.91,23.75
+NVIDIA H100 PCIe MIG 4g.40gb,half,FALSE,FALSE,pytorch,8,7.91,12.49
+NVIDIA H100 PCIe MIG 4g.40gb,single,FALSE,FALSE,pytorch,16,-1,-1
+NVIDIA H100 PCIe MIG 4g.40gb,half,FALSE,FALSE,pytorch,16,15.73,29.01
+NVIDIA H100 PCIe MIG 4g.40gb,single,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA H100 PCIe MIG 4g.40gb,half,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA H100 PCIe MIG 4g.40gb,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe MIG 4g.40gb,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe MIG 4g.40gb,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe MIG 4g.40gb,half,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,single,FALSE,FALSE,pytorch,1,4.2,7.76
+NVIDIA H100 PCIe MIG 2g.20gb,half,FALSE,FALSE,pytorch,1,2.58,3.41
+NVIDIA H100 PCIe MIG 2g.20gb,single,FALSE,FALSE,pytorch,2,7.61,11.09
+NVIDIA H100 PCIe MIG 2g.20gb,half,FALSE,FALSE,pytorch,2,4.56,4.59
+NVIDIA H100 PCIe MIG 2g.20gb,single,FALSE,FALSE,pytorch,4,14.45,17.65
+NVIDIA H100 PCIe MIG 2g.20gb,half,FALSE,FALSE,pytorch,4,8.24,6.78
+NVIDIA H100 PCIe MIG 2g.20gb,single,FALSE,FALSE,pytorch,8,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,half,FALSE,FALSE,pytorch,8,15.81,15.65
+NVIDIA H100 PCIe MIG 2g.20gb,single,FALSE,FALSE,pytorch,16,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,half,FALSE,FALSE,pytorch,16,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,single,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,half,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe MIG 2g.20gb,half,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,single,FALSE,FALSE,pytorch,1,9.17,7.76
+NVIDIA H100 PCIe MIG 1g.10gb,half,FALSE,FALSE,pytorch,1,5.39,3.47
+NVIDIA H100 PCIe MIG 1g.10gb,single,FALSE,FALSE,pytorch,2,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,half,FALSE,FALSE,pytorch,2,9.29,4.63
+NVIDIA H100 PCIe MIG 1g.10gb,single,FALSE,FALSE,pytorch,4,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,half,FALSE,FALSE,pytorch,4,17.4,6.8
+NVIDIA H100 PCIe MIG 1g.10gb,single,FALSE,FALSE,pytorch,8,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,half,FALSE,FALSE,pytorch,8,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,single,FALSE,FALSE,pytorch,16,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,half,FALSE,FALSE,pytorch,16,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,single,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,half,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe MIG 1g.10gb,half,FALSE,FALSE,pytorch,128,-1,-1
\ No newline at end of file
diff --git a/benchmarks/benchmark_no_xformers.csv b/benchmarks/benchmark_no_xformers.csv
new file mode 100644
index 0000000..d578b6d
--- /dev/null
+++ b/benchmarks/benchmark_no_xformers.csv
@@ -0,0 +1,97 @@
+device,precision,autocast,xformers,runtime,n_samples,latency,memory
+NVIDIA A10,single,FALSE,FALSE,pytorch,1,4.75,6.73
+NVIDIA A10,half,FALSE,FALSE,pytorch,1,2.71,3.43
+NVIDIA A10,single,FALSE,FALSE,pytorch,2,8.75,9
+NVIDIA A10,half,FALSE,FALSE,pytorch,2,4.99,5.53
+NVIDIA A10,single,FALSE,FALSE,pytorch,4,17.18,18.14
+NVIDIA A10,half,FALSE,FALSE,pytorch,4,9.65,6.84
+NVIDIA A10,single,FALSE,FALSE,pytorch,8,-1,-1
+NVIDIA A10,half,FALSE,FALSE,pytorch,8,18.58,12.66
+NVIDIA A10,single,FALSE,FALSE,pytorch,16,-1,-1
+NVIDIA A10,half,FALSE,FALSE,pytorch,16,36.32,20.64
+NVIDIA A10,single,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA A10,half,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA A10,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA A10,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA A10,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA A10,half,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA A100-SXM4-40GB,single,FALSE,FALSE,pytorch,1,1.72,7.76
+NVIDIA A100-SXM4-40GB,half,FALSE,FALSE,pytorch,1,1.18,3.41
+NVIDIA A100-SXM4-40GB,single,FALSE,FALSE,pytorch,2,3.03,9.04
+NVIDIA A100-SXM4-40GB,half,FALSE,FALSE,pytorch,2,1.88,5.53
+NVIDIA A100-SXM4-40GB,single,FALSE,FALSE,pytorch,4,5.53,18.04
+NVIDIA A100-SXM4-40GB,half,FALSE,FALSE,pytorch,4,3.35,6.74
+NVIDIA A100-SXM4-40GB,single,FALSE,FALSE,pytorch,8,10.95,23.85
+NVIDIA A100-SXM4-40GB,half,FALSE,FALSE,pytorch,8,6.28,12.6
+NVIDIA A100-SXM4-40GB,single,FALSE,FALSE,pytorch,16,-1,-1
+NVIDIA A100-SXM4-40GB,half,FALSE,FALSE,pytorch,16,12.57,20.58
+NVIDIA A100-SXM4-40GB,single,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA A100-SXM4-40GB,half,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA A100-SXM4-40GB,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA A100-SXM4-40GB,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA A100-SXM4-40GB,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA A100-SXM4-40GB,half,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA A100-PCIE-40GB,single,FALSE,FALSE,pytorch,1,1.99,7.76
+NVIDIA A100-PCIE-40GB,half,FALSE,FALSE,pytorch,1,1.5,3.45
+NVIDIA A100-PCIE-40GB,single,FALSE,FALSE,pytorch,2,3.52,11.11
+NVIDIA A100-PCIE-40GB,half,FALSE,FALSE,pytorch,2,2.3,4.53
+NVIDIA A100-PCIE-40GB,single,FALSE,FALSE,pytorch,4,6.31,13.98
+NVIDIA A100-PCIE-40GB,half,FALSE,FALSE,pytorch,4,4.04,8.91
+NVIDIA A100-PCIE-40GB,single,FALSE,FALSE,pytorch,8,12.21,23.91
+NVIDIA A100-PCIE-40GB,half,FALSE,FALSE,pytorch,8,7.59,12.75
+NVIDIA A100-PCIE-40GB,single,FALSE,FALSE,pytorch,16,-1,-1
+NVIDIA A100-PCIE-40GB,half,FALSE,FALSE,pytorch,16,14.54,21.24
+NVIDIA A100-PCIE-40GB,single,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA A100-PCIE-40GB,half,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA A100-PCIE-40GB,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA A100-PCIE-40GB,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA A100-PCIE-40GB,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA A100-PCIE-40GB,half,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA A100 80GB PCIe,single,False,False,pytorch,1,2.05,7.76
+NVIDIA A100 80GB PCIe,half,False,False,pytorch,1,1.53,3.41
+NVIDIA A100 80GB PCIe,single,False,False,pytorch,2,3.09,9.04
+NVIDIA A100 80GB PCIe,half,False,False,pytorch,2,3.06,5.53
+NVIDIA A100 80GB PCIe,single,False,False,pytorch,4,6.34,18.04
+NVIDIA A100 80GB PCIe,half,False,False,pytorch,4,4.57,6.74
+NVIDIA A100 80GB PCIe,single,False,False,pytorch,8,11.16,23.85
+NVIDIA A100 80GB PCIe,half,False,False,pytorch,8,7.91,12.6
+NVIDIA A100 80GB PCIe,single,False,False,pytorch,16,22.59,42.63
+NVIDIA A100 80GB PCIe,half,False,False,pytorch,16,14.22,20.58
+NVIDIA A100 80GB PCIe,single,False,False,pytorch,32,44.02,79.6
+NVIDIA A100 80GB PCIe,half,False,False,pytorch,32,27.73,45.19
+NVIDIA A100 80GB PCIe,single,False,False,pytorch,64,-1.0,-1.0
+NVIDIA A100 80GB PCIe,half,False,False,pytorch,64,55.55,79.54
+NVIDIA A100 80GB PCIe,single,False,False,pytorch,128,-1.0,-1.0
+NVIDIA A100 80GB PCIe,half,False,False,pytorch,128,-1.0,-1.0
+NVIDIA RTX A6000,single,FALSE,FALSE,pytorch,1,4.15,6.76
+NVIDIA RTX A6000,half,FALSE,FALSE,pytorch,1,2.43,3.42
+NVIDIA RTX A6000,single,FALSE,FALSE,pytorch,2,6,11.1
+NVIDIA RTX A6000,half,FALSE,FALSE,pytorch,2,3.88,4.5
+NVIDIA RTX A6000,single,FALSE,FALSE,pytorch,4,12.85,13.97
+NVIDIA RTX A6000,half,FALSE,FALSE,pytorch,4,7.77,8.88
+NVIDIA RTX A6000,single,FALSE,FALSE,pytorch,8,32.69,23.88
+NVIDIA RTX A6000,half,FALSE,FALSE,pytorch,8,21.21,12.74
+NVIDIA RTX A6000,single,FALSE,FALSE,pytorch,16,81.14,42.77
+NVIDIA RTX A6000,half,FALSE,FALSE,pytorch,16,48.49,21.23
+NVIDIA RTX A6000,single,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA RTX A6000,half,FALSE,FALSE,pytorch,32,-1,-1
+NVIDIA RTX A6000,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA RTX A6000,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA RTX A6000,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA RTX A6000,half,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,1,1.73,7.7
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,1,1.06,3.46
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,2,2.66,9.79
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,2,1.73,4.57
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,4,4.47,18.49
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,4,2.63,8.91
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,8,8.16,23.86
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,8,4.97,12.57
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,16,15.98,42.38
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,16,9.61,29.01
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,32,32.04,80.51
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,32,19.07,55.57
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,64,-1,-1
+NVIDIA H100 PCIe,single,FALSE,FALSE,pytorch,128,-1,-1
+NVIDIA H100 PCIe,half,FALSE,FALSE,pytorch,128,-1,-1
diff --git a/docs/benchmark-update.md b/docs/benchmark-update.md
new file mode 100644
index 0000000..b383e01
--- /dev/null
+++ b/docs/benchmark-update.md
@@ -0,0 +1,23 @@
+# Benchmark update
+
+We are currently running benchmarks to update our Stable Diffusion numbers using a more recent version of Diffusers and to take advantage of xformers. The interim results on a limited set of GPUs are presented here.
+
+## Running the benchmark
+
+Ensure that [NVIDIA container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html) is installed on your system and then run the following:
+
+```bash
+git clone https://github.com/LambdaLabsML/lambda-diffusers.git
+cd lambda-diffusers/scripts
+make bench
+```
+
+Results will be written to `results.csv`. The benchmark's duration depends on the GPU present, but expect it to take at least several minutes.
+
+## Results
+
+The current results for the benchmark are available in [`benchmark.csv`](../benchmarks/benchmark.csv). These results were run with Diffusers 0.11.0 and xformers using Ubuntu 20.04, Python 3.8, PyTorch 1.13, CUDA 11.8 ([NGC PyTorch container 22.11](https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes/rel-22-11.html)).
+
+xformers provides a significant boost in performance and a reduction in memory consumption, allowing large batch sizes that maximise utilisation of GPUs. Our best performance comes using an NVIDIA A100-SXM4-40GB on [Lambda GPU cloud](https://cloud.lambdalabs.com): at the maximum batch size tested (128) with half precision, we observe a throughput of 1.85 images/second when using DDIM with 30 sampling steps.
+
\ No newline at end of file
diff --git a/docs/benchmark.md b/docs/benchmark.md
new file mode 100644
index 0000000..f16ea47
--- /dev/null
+++ b/docs/benchmark.md
@@ -0,0 +1,184 @@
+# Benchmarking Diffuser Models
+
+__We are currently in the process of updating our Stable Diffusion benchmark using a more recent version of Diffusers and taking advantage of xformers. See the summary of interim results [here](./benchmark-update.md).__
+
+We present a benchmark of [Stable Diffusion](https://huggingface.co/CompVis/stable-diffusion) model inference. This text2image model uses a text prompt as input and outputs an image of resolution `512x512`.
+
+Our experiments analyze inference performance in terms of speed, memory consumption, throughput, and quality of the output images. We look at how different choices in hardware (GPU model, GPU vs CPU) and software (single vs half precision, pytorch vs onnxruntime) affect inference performance.
+
+For reference, we will be providing benchmark results for the following GPU devices: A100 80GB PCIe, RTX3090, RTXA5500, RTXA6000, RTX3080, RTX8000. Please refer to the ["Reproducing the experiments"](#reproducing-the-experiments) section for details on running these experiments in your own environment.
+
+
+## Inference speed
+
+The figure below shows the latency at inference when using different hardware and precision for generating a single image using the (arbitrary) text prompt: *"a photo of an astronaut riding a horse on mars"*.
+
+
+
+We find that:
+* The inference latencies range from `3.74` to `5.56` seconds across our tested Ampere GPUs, from the consumer 3080 card to the flagship A100 80GB card.
+* Half-precision reduces the latency by about `40%` for Ampere GPUs, and by `52%` for the previous generation `RTX8000` GPU.
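+
+Latency figures of this kind can be reproduced with a simple timing loop. The sketch below is illustrative only (it is not the repository's `scripts/benchmark.py`; the model id and warm-up policy here are assumptions):
+
+```python
+import time
+
+import torch
+from diffusers import StableDiffusionPipeline
+
+pipe = StableDiffusionPipeline.from_pretrained(
+    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
+).to("cuda")
+prompt = "a photo of an astronaut riding a horse on mars"
+
+pipe(prompt)  # warm-up: the first call pays one-off allocation costs
+
+torch.cuda.synchronize()  # drain queued GPU work before starting the clock
+start = time.perf_counter()
+image = pipe(prompt).images[0]
+torch.cuda.synchronize()  # wait until the generation has actually finished
+print(f"latency: {time.perf_counter() - start:.2f} s")
+```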
+
+We believe Ampere GPUs enjoy a relatively "smaller" speedup from half-precision due to their use of `TF32`. For readers who are not familiar with `TF32`, it is a [`19-bit` format](https://blogs.nvidia.com/blog/2020/05/14/tensorfloat-32-precision-format/) that has been used as the default single-precision data type on Ampere GPUs for major deep learning frameworks such as PyTorch and TensorFlow. One can expect half-precision's speedup over `FP32` to be bigger, since `FP32` is a true `32-bit` format.
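+
+On Ampere, this effect can be seen directly by toggling PyTorch's `TF32` backend switches. A minimal sketch (the switches are the real PyTorch API; the timing harness around them is omitted):
+
+```python
+import torch
+
+# On Ampere GPUs these switches route FP32 matmuls and convolutions through
+# TF32 tensor cores. Defaults have changed across PyTorch releases, so set
+# them explicitly when benchmarking single precision.
+torch.backends.cuda.matmul.allow_tf32 = True
+torch.backends.cudnn.allow_tf32 = True
+```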
+
+
+We run these same inference jobs on CPU devices to put the inference speed observed on GPUs into perspective.
+
+
+
+
+We note that:
+* GPUs are significantly faster -- by one to two orders of magnitude, depending on the precision.
+* `onnxruntime` can reduce the latency for CPU by about `40%` to `50%`, depending on the type of CPU.
+
+ONNX currently does not have [stable support](https://github.com/huggingface/diffusers/issues/489) for Hugging Face Diffusers.
+We will investigate `onnxruntime-gpu` in future benchmarks.
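+
+For reference, the `onnxruntime` CPU runs go through the ONNX pipeline class shipped with Diffusers, roughly as below. Treat this as a sketch of the API as documented at the time of writing (the exported `onnx` revision and the provider string are assumptions):
+
+```python
+from diffusers import OnnxStableDiffusionPipeline
+
+# The "onnx" revision hosts an exported copy of the weights;
+# CPUExecutionProvider runs the graph on the CPU via onnxruntime.
+pipe = OnnxStableDiffusionPipeline.from_pretrained(
+    "CompVis/stable-diffusion-v1-4",
+    revision="onnx",
+    provider="CPUExecutionProvider",
+)
+image = pipe("a photo of an astronaut riding a horse on mars").images[0]
+```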
+
+
+
+
+## Memory
+
+We also measure the memory consumption of running Stable Diffusion inference.
+
+
+
+Memory usage is observed to be consistent across all tested GPUs:
+* It takes about `7.7 GB` GPU memory to run single-precision inference with batch size one.
+* It takes about `4.5 GB` GPU memory to run half-precision inference with batch size one.
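+
+Peak usage of this kind can be read off PyTorch's CUDA memory statistics. A minimal sketch, assuming a pipeline already loaded as `pipe`:
+
+```python
+import torch
+
+torch.cuda.reset_peak_memory_stats()
+pipe("a photo of an astronaut riding a horse on mars")
+torch.cuda.synchronize()
+
+# max_memory_allocated reports the high-water mark of tensor allocations;
+# the CUDA context overhead shown by nvidia-smi comes on top of this figure.
+peak_gb = torch.cuda.max_memory_allocated() / 1024**3
+print(f"peak GPU memory: {peak_gb:.1f} GB")
+```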
+
+
+
+
+## Throughput
+
+Latency measures how quickly a _single_ input can be processed, which is critical to online applications that don't tolerate even the slightest delay. However, some (offline) applications may focus on "throughput", which measures the total volume of data processed in a fixed amount of time.
+
+
+Our throughput benchmark pushes the batch size to the maximum for each GPU and measures the number of images each card can process per minute. The reason for maximizing the batch size is to keep tensor cores busy so that computation can dominate the workload, avoiding any non-computational bottlenecks.
+
+We run a series of throughput experiments in PyTorch with half precision, using the maximum batch size that fits on each GPU:
+
+
+
+We note:
+* Once again, A100 80GB is the top performer and has the highest throughput.
+* The gap between A100 80GB and other cards in terms of throughput can be explained by the larger maximum batch size that can be used on this card.
+
+
+As a concrete example, the chart below shows how the A100 80GB's throughput increases by `64%` when we increase the batch size from 1 to 28 (the largest that does not cause an out-of-memory error). It is also interesting to see that the increase is not linear: it flattens out once the batch size reaches a certain value, at which point the tensor cores on the GPU are saturated and any new data in GPU memory has to be queued up before getting its own computing resources.
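+
+A batch-size sweep of this kind can be scripted along the following lines. This is a sketch, assuming a half-precision `pipe` as above (the repository's benchmark script handles warm-up and error cases more carefully):
+
+```python
+import time
+
+import torch
+
+prompt = "a photo of an astronaut riding a horse on mars"
+for batch_size in (1, 2, 4, 8, 16, 28):
+    try:
+        torch.cuda.synchronize()
+        start = time.perf_counter()
+        pipe([prompt] * batch_size)  # one batched generation
+        torch.cuda.synchronize()
+        elapsed = time.perf_counter() - start
+        print(f"batch {batch_size}: {60 * batch_size / elapsed:.1f} images/min")
+    except RuntimeError:  # CUDA out-of-memory surfaces as a RuntimeError
+        print(f"batch {batch_size}: out of memory")
+        break
+```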
+
+
+
+
+## Precision
+
+We are curious about whether half-precision introduces degradations to the quality of the output images. To test this out, we fixed the text prompt as well as the "latent" input vector and fed them to the single-precision model and the half-precision model. We ran the inference for 100 steps and saved both models' outputs at each step, as well as the difference map:
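+
+Concretely, the comparison can be set up by sampling the starting latents once and passing the same tensor to both models. A minimal sketch (the seed and shapes are arbitrary choices; the `latents` argument is part of the pipeline's call API):
+
+```python
+import numpy as np
+import torch
+from diffusers import StableDiffusionPipeline
+
+prompt = "a photo of an astronaut riding a horse on mars"
+# One fixed "latent" input shared by both precisions (64x64 latents -> 512x512 image).
+latents = torch.randn((1, 4, 64, 64), generator=torch.manual_seed(0))
+
+outputs = {}
+for name, dtype in [("fp32", torch.float32), ("fp16", torch.float16)]:
+    pipe = StableDiffusionPipeline.from_pretrained(
+        "CompVis/stable-diffusion-v1-4", torch_dtype=dtype
+    ).to("cuda")
+    image = pipe(prompt, latents=latents.to("cuda", dtype)).images[0]
+    outputs[name] = np.asarray(image).astype(np.int16)
+
+# Difference map: per-pixel absolute difference between the two outputs.
+diff = np.abs(outputs["fp32"] - outputs["fp16"]).astype(np.uint8)
+```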
+
+
+
+Our observation is that there are indeed visible differences between the single-precision output and the half-precision output, especially in the early steps. The differences often decrease with the number of steps, but might not always vanish.
+
+Interestingly, such a difference may not imply artifacts in half-precision's outputs. For example, at step 70, the picture below shows that half precision avoided an artifact present in the single-precision output (an extra front leg):
+
+
+
+---
+
+## Reproducing the experiments
+
+You can use this [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers) repository to reproduce the results presented in this article.
+
+### From your local machine
+
+#### Setup
+
+Before running the benchmark, make sure you have completed the repository [installation steps](../README.md#installation).
+
+You will then need to set the Hugging Face access token:
+1. Create a user account on Hugging Face and generate an access token.
+2. Set your Hugging Face access token as the `ACCESS_TOKEN` environment variable:
+```
+export ACCESS_TOKEN=