Stable Diffusion
Over the last few weeks we have been overwhelmed by the response and have been working hard to ensure a safe and ethical release, incorporating feedback from our beta model tests and the community for developers to act on. In cooperation with the tireless legal, ethics, and technology teams at HuggingFace and the amazing engineers at CoreWeave, we have developed an AI-based Safety Classifier that is included by default in the overall software package. It understands concepts and other factors in generations and removes outputs that the model user may not want. Its parameters can be readily adjusted, and we welcome input from the community on how to improve it. Image generation models are powerful, but they still need to improve at representing what we ask for.
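As a rough illustration of how the default safety checker surfaces in downstream tooling, here is a minimal sketch using the Hugging Face diffusers library; the checkpoint name and the exact output fields are assumptions based on the public v1.x releases, not part of this announcement.

# Minimal sketch (assumption: the "CompVis/stable-diffusion-v1-4" checkpoint, which ships
# with the safety checker enabled by default in diffusers' StableDiffusionPipeline).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
).to("cuda")

# The safety checker runs after generation and blanks any image it flags;
# nsfw_content_detected reports which outputs were filtered.
# (Passing safety_checker=None at load time disables it entirely.)
result = pipe("a watercolor painting of a lighthouse at dawn")
print(result.nsfw_content_detected)
result.images[0].save("lighthouse.png")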
Learn more
Qwen-Image
Qwen-Image is a multimodal diffusion transformer (MMDiT) foundation model offering state-of-the-art image generation, text rendering, editing, and understanding. It excels at complex text integration, seamlessly embedding alphabetic and logographic scripts into visuals with typographic fidelity, and supports diverse artistic styles from photorealism to impressionism, anime, and minimalist design. Beyond creation, it enables advanced image editing operations such as style transfer, object insertion or removal, detail enhancement, in-image text editing, and human pose manipulation through intuitive prompts. Its built-in vision understanding tasks, including object detection, semantic segmentation, depth and edge estimation, novel view synthesis, and super-resolution, extend its capabilities into intelligent visual comprehension. Qwen-Image is accessible via popular libraries like Hugging Face Diffusers and integrates prompt-enhancement tools for multilingual support.
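Since the model is accessible through Hugging Face Diffusers, a minimal text-to-image sketch might look like the following; the "Qwen/Qwen-Image" checkpoint name, dtype, and step count are assumptions for illustration, not official guidance.

# Minimal sketch (assumptions: the "Qwen/Qwen-Image" Hub checkpoint and a bf16-capable GPU;
# DiffusionPipeline.from_pretrained dispatches to the checkpoint's registered pipeline class).
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("Qwen/Qwen-Image", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Qwen-Image is noted for text rendering, so the prompt asks for in-image typography.
prompt = 'A minimalist poster of a mountain lake, with the caption "Quiet Morning" in serif type'
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("qwen_image_poster.png")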
Learn more
SeedEdit
SeedEdit is an advanced AI image-editing model developed by the ByteDance Seed team that enables users to revise an existing image using natural-language text prompts while preserving unedited regions with high fidelity. It accepts an input image plus a text description of the change (such as style conversion, object removal or replacement, background swap, lighting shift, or text change), and produces a seamlessly edited result that maintains the structural integrity, resolution, and identity of the original content. The model leverages a diffusion-based architecture trained via a meta-information embedding pipeline and a joint loss (combining diffusion and reward losses) to balance image reconstruction and re-generation, resulting in strong editing controllability, detail retention, and prompt adherence. The latest version (SeedEdit 3.0) supports high-resolution edits (up to 4K), delivers fast inference (often 10–15 seconds or less), and handles multi-round sequential edits.
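To make the image-plus-instruction workflow concrete, here is a hypothetical sketch of an edit request; the endpoint URL, field names, and response shape are placeholders invented for illustration, not the documented SeedEdit API.

# Hypothetical sketch of an instruction-based edit call; endpoint, payload fields,
# and response format below are placeholders, not ByteDance's published interface.
import base64
import requests

with open("portrait.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "image": image_b64,                                   # source image to edit
    "prompt": "replace the background with a rainy Tokyo street at night",
    "guidance_scale": 5.5,                                # placeholder controllability knob
}
resp = requests.post(
    "https://example.com/v1/seededit",                    # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=60,
)
resp.raise_for_status()
with open("edited.jpg", "wb") as f:
    f.write(base64.b64decode(resp.json()["image"]))       # edited result, unedited regions preserved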
Learn more
Seedream
Seedream 3.0 is ByteDance’s newest high-aesthetic image generation model, officially available through its API with 200 free trial images. It supports native 2K resolution output for crisp, professional visuals across text-to-image and image-to-image tasks. The model excels at realistic character rendering, capturing nuanced facial details, natural skin textures, and expressive emotions while avoiding the artificial look common in older AI outputs. Beyond realism, Seedream provides advanced text typesetting, enabling designer-level posters with accurate typography, layout, and stylistic cohesion. Its image editing capabilities preserve fine details, follow instructions precisely, and adapt seamlessly to varied aspect ratios. With transparent pricing at just $0.03 per image, Seedream delivers professional-grade visuals at an accessible cost.
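For a sense of how a native-2K text-to-image request might look over the API, here is a hypothetical sketch; the endpoint, parameter names, and response format are placeholders, not ByteDance's documented Seedream API.

# Hypothetical sketch of a 2K text-to-image request; endpoint and fields are placeholders.
import base64
import requests

payload = {
    "prompt": "film-poster portrait of an astronaut, bold sans-serif title text 'ORBIT'",
    "width": 2048,    # native 2K output
    "height": 2048,
    "n": 1,           # at the listed $0.03 per image, one generation costs three cents
}
resp = requests.post(
    "https://example.com/v1/seedream/generate",           # placeholder endpoint
    json=payload,
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    timeout=120,
)
resp.raise_for_status()
with open("seedream_poster.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))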
Learn more