A modern, type-safe Python SDK for NovelAI's image generation API. Features robust validation with Pydantic v2 and complete type hints.
- Python 3.10+ with full type hints and Pydantic v2 validation
- High-level convenience API with automatic validation
- Built-in PIL/Pillow support for easy image operations
- SSE streaming for real-time progress monitoring
- Precise Reference (Character Reference), ControlNet, and multi-character positioning
| Feature | novelai-sdk | novelai-api | novelai-python |
|---|---|---|---|
| Type Safety (Pydantic v2) | ✅ | ❌ | ✅ |
| Async Support | ✅ | ✅ | ✅ |
| Image Generation | ✅ | ✅ | ✅ |
| Text Generation | 🚧 | ✅ | ✅ |
| Precise Reference (Character Reference) | ✅ | ❌ | ❌ |
| Multi-Character Positioning | ✅ | ❌ | ✅ |
| ControlNet / Vibe Transfer | ✅ | ❌ | ✅ |
| SSE Streaming | ✅ | ❌ | ✅ |
| Python 3.10+ | ✅ | ❌ | ❌ |
| Active Maintenance | ✅ | ✅ | |
✅ Supported | ❌ Not supported | 🚧 Planned
For detailed guides and advanced usage, visit our Documentation Site.
```bash
# Using pip
pip install novelai-sdk

# Using uv (recommended)
uv add novelai-sdk
```

```python
from novelai import NovelAI
from novelai.types import GenerateImageParams

# Initialize client (API key from NOVELAI_API_KEY environment variable)
client = NovelAI()

# Generate an image
params = GenerateImageParams(
    prompt="1girl, cat ears, masterpiece, best quality",
    model="nai-diffusion-4-5-full",
    size="portrait",  # or (832, 1216)
    steps=23,
    scale=5.0,
)

images = client.image.generate(params)
images[0].save("output.png")
```

```bash
# Basic generation
python -m novelai "1girl, cat ears, maid" -o output.png

# Interactive mode
python -m novelai --interactive --model nai-diffusion-4-5-full

# Generate from request JSON (high-level params)
python -m novelai --request-json examples/request_user.json -o output

# Generate from request JSON (stdin)
cat examples/request_user.json | python -m novelai --request-json-stdin -o output
```

Provide your NovelAI API key via environment variable or direct initialization:
```python
# Using .env file (recommended)
from dotenv import load_dotenv

load_dotenv()
client = NovelAI()

# Environment variable
import os

os.environ["NOVELAI_API_KEY"] = "your_api_key_here"
client = NovelAI()

# Direct initialization
client = NovelAI(api_key="your_api_key_here")
```

The library is designed with two distinct layers of data models:
- User Model (Recommended): User-friendly models with sensible defaults and automatic validation.
- API Model: Direct 1:1 mapping to NovelAI's API endpoints, primarily used internally.
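This split is a common Pydantic pattern: a friendly model resolves presets and defaults, then normalizes into the raw payload the API expects. A minimal, self-contained sketch of the idea using plain dataclasses (the class names, fields, and the landscape dimensions here are illustrative assumptions, not the SDK's actual classes; the portrait dimensions come from the quickstart example above):

```python
from dataclasses import dataclass


@dataclass
class ApiParams:
    """Stand-in for the raw API-layer payload."""

    input: str
    width: int
    height: int


@dataclass
class UserParams:
    """Stand-in for the user-layer model with friendly presets."""

    prompt: str
    size: str = "portrait"

    def to_api(self) -> ApiParams:
        # Resolve the size preset into the concrete dimensions the API
        # expects; (1216, 832) for landscape is an assumed value.
        presets = {"portrait": (832, 1216), "landscape": (1216, 832)}
        width, height = presets[self.size]
        return ApiParams(input=self.prompt, width=width, height=height)


print(UserParams(prompt="1girl").to_api())
# ApiParams(input='1girl', width=832, height=1216)
```

The real SDK performs this translation with Pydantic v2 models, which adds field validation on top of the plain conversion shown here.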
```python
from novelai import NovelAI
from novelai.types import GenerateImageParams

client = NovelAI()
params = GenerateImageParams(
    prompt="a beautiful landscape",
    model="nai-diffusion-4-5-full",
    size="landscape",
    quality=True,
)
images = client.image.generate(params)
```

Maintain consistent character appearances with reference images:
```python
from novelai.types import CharacterReference

character_references = [
    CharacterReference(
        image="reference.png",
        type="character",
        fidelity=0.75,
    )
]

params = GenerateImageParams(
    prompt="1girl, standing",
    model="nai-diffusion-4-5-full",
    character_references=character_references,
)
```

Position multiple characters individually with separate prompts:
```python
from novelai.types import Character

characters = [
    Character(
        prompt="1girl, red hair, blue eyes",
        enabled=True,
        position=(0.2, 0.5),
    ),
    Character(
        prompt="1boy, black hair, green eyes",
        enabled=True,
        position=(0.8, 0.5),
    ),
]

params = GenerateImageParams(
    prompt="two people standing",
    model="nai-diffusion-4-5-full",
    characters=characters,
)
```

Control composition and pose with reference images:
```python
from novelai.types import ControlNet, ControlNetImage, GenerateImageParams

controlnet_image = ControlNetImage(image="pose_reference.png", strength=0.6)
controlnet = ControlNet(images=[controlnet_image])

params = GenerateImageParams(
    prompt="1girl, standing",
    model="nai-diffusion-4-5-full",
    controlnet=controlnet,
)
```

Monitor generation progress in real-time:
```python
from base64 import b64decode

from novelai.types import GenerateImageStreamParams

params = GenerateImageStreamParams(
    prompt="1girl, standing",
    model="nai-diffusion-4-5-full",
    stream="sse",
)

for chunk in client.image.generate_stream(params):
    image_data = b64decode(chunk.image)
    print(f"Received {len(image_data)} bytes")
```

Transform existing images with text prompts:
```python
from novelai.types import GenerateImageParams, I2iParams

i2i_params = I2iParams(
    image="input.png",
    strength=0.5,  # 0.0-1.0
    noise=0.0,
)

params = GenerateImageParams(
    prompt="cyberpunk style",
    model="nai-diffusion-4-5-full",
    i2i=i2i_params,
)
```

Generate multiple variations efficiently:
```python
params = GenerateImageParams(
    prompt="1girl, various poses",
    model="nai-diffusion-4-5-full",
    n_samples=4,
)

images = client.image.generate(params)
for i, img in enumerate(images):
    img.save(f"output_{i}.png")
```

Estimate the generation cost before sending the request:
```python
from novelai.types import GenerateImageParams

params = GenerateImageParams(
    prompt="1girl, night city",
    model="nai-diffusion-4-5-full",
    size=(1024, 1024),
    steps=28,
)

estimate = params.calculate_anlas(is_opus=True)
print(estimate.total_anlas)
```

`calculate_anlas()` is a best-effort estimate based on the current web UI and documentation. It is useful for previews, but it is not guaranteed to be a 100% accurate billing source of truth.
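Because the estimate is best-effort, a caller may want a safety margin before committing to a generation. A hypothetical helper sketching that idea (the 10% buffer is an arbitrary choice for illustration, not SDK behavior):

```python
def within_budget(estimated_anlas: int, remaining_anlas: int) -> bool:
    """Return True if a generation looks safely affordable.

    Treats the estimate as approximate and keeps a hypothetical 10%
    buffer (at least 1 Anlas) against under-estimation.
    """
    margin = max(1, estimated_anlas // 10)
    return estimated_anlas + margin <= remaining_anlas


print(within_budget(24, 30))  # 24 + 2 buffer = 26 <= 30 → True
print(within_budget(28, 29))  # 28 + 2 buffer = 30 > 29 → False
```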
For practical usage examples, see the Examples Documentation or the examples/ directory.
- Async support
- FastAPI integration example
- Vibe transfer file support (`.naiv4vibe`, `.naiv4vibebundle`)
- Anlas consumption calculator
- Image metadata extraction
- Text generation API support
```bash
git clone https://github.com/caru-ini/novelai-sdk.git
cd novelai-sdk
uv sync
```

```bash
# Format code
uv run poe fmt

# Lint code
uv run poe lint

# Type checking
uv run poe check

# Install poe globally for easier access
uv tool install poe

# Run all checks before committing
uv run poe pre-commit
```

Tests will be added in future releases.
- Python 3.10+
- httpx (HTTP client)
- Pillow (image processing)
- Pydantic v2 (validation and type safety)
- python-dotenv (environment variables)
- rich (CLI output rendering)
Contributions are welcome. For major changes, please open an issue first.
Please see CONTRIBUTING.md for details on how to contribute, including development setup, code quality checks, and pull request guidelines.
```
{feat|fix|docs|style|refactor|test|chore}: Short description
```
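A quick way to sanity-check a message against this convention, as an illustrative helper rather than part of the repo's tooling:

```python
import re

# Mirrors the "{type}: Short description" format above: one of the
# allowed type keywords, a colon and space, then a non-empty description.
COMMIT_RE = re.compile(r"^(feat|fix|docs|style|refactor|test|chore): \S.*")


def is_conventional(message: str) -> bool:
    return COMMIT_RE.match(message) is not None


print(is_conventional("feat: add SSE streaming support"))  # True
print(is_conventional("added streaming"))                  # False
```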
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Run code quality checks (`uv run poe pre-commit`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
MIT License. See LICENSE file for details.
This is an unofficial client library. Not affiliated with NovelAI. Requires an active NovelAI subscription.
Thanks to the NovelAI team and all contributors.

