# Kawai

Local AI image and video generator. Simple UI. NSFW-capable. Auto GPU detection (Nvidia / AMD / Intel / Apple Silicon / CPU).

## Quick start

```
python launcher.py
```

Run with **any Python** you have installed. The launcher bootstraps `uv` and uses it to fetch a clean Python 3.11 runtime + venv, then installs the right PyTorch build for your GPU. Nothing about your system Python is touched.

Works on Windows, Linux, and macOS. First run takes a few minutes (uv install + Python 3.11 download + torch + dependencies). Subsequent runs start instantly.

### Force a specific backend

Auto-detect picks one of `cuda` (NVIDIA), `rocm` (AMD on Linux), `directml` (AMD/Intel on Windows), `mps` (Apple Silicon), or `cpu`. To override:

```
python launcher.py --backend cuda      # force CUDA wheel
python launcher.py --backend rocm      # AMD on Linux (ROCm)
python launcher.py --backend directml  # AMD/Intel on Windows
python launcher.py --backend mps      # Apple Silicon (macOS)
python launcher.py --backend cpu      # CPU only
python launcher.py --reinstall        # wipe install marker, re-detect, reinstall torch
```

`--vendor {nvidia,amd,intel,cpu}` is also available if you need to pair it with a backend (e.g. `--backend directml --vendor intel`). The override is persisted in `config.local.json` and survives relaunches until you pass `--backend` again or `--reinstall`.
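The auto-detect order above can be sketched as follows. This is an illustrative sketch only (the function name and the exact probes are assumptions, not the launcher's actual internals); it checks for vendor tools and platform markers in the same priority order the backends are listed:

```python
import platform
import shutil

def detect_backend() -> str:
    """Sketch of the auto-detect priority: cuda > rocm/directml > mps > cpu.

    Hypothetical helper; the real launcher may probe hardware differently.
    """
    system = platform.system()
    # NVIDIA: nvidia-smi on PATH is a cheap, driver-level signal.
    if shutil.which("nvidia-smi"):
        return "cuda"
    # AMD on Linux ships rocm-smi with the ROCm stack.
    if system == "Linux" and shutil.which("rocm-smi"):
        return "rocm"
    # On Windows, AMD/Intel GPUs fall back to DirectML
    # (a real detector would first confirm a GPU is present).
    if system == "Windows":
        return "directml"
    # Apple Silicon reports arm64; Intel Macs drop through to CPU.
    if system == "Darwin" and platform.machine() == "arm64":
        return "mps"
    return "cpu"

print(detect_backend())
```

The same result is what `--backend` overrides: whatever this probe would return is replaced by the flag's value and persisted in `config.local.json`.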
### What the launcher does

1. Installs `uv` to `.tools/` if not present.
2. Creates `venv/` with Python 3.11 (uv downloads the interpreter on demand).
3. Detects GPU (Nvidia / AMD / Intel / Apple Silicon / CPU) and installs the matching PyTorch wheel.
4. Installs latest `diffusers`, `transformers`, etc.
5. Opens browser UI at `http://127.0.0.1:7860`.

### Reset

Delete `venv/` and `.tools/` to force a clean reinstall.

## Hardware tiers

| VRAM | Default image model | Default video model |
|------|---------------------|---------------------|
| 4 GB | SDXL Turbo (fp16) | disabled |
| 8 GB | Pony Diffusion XL | LTX-Video (fp8) |
| 12 GB | Illustrious XL | LTX-Video (fp16) |
| 16 GB+ | Illustrious XL + refiner | Wan 2.1 |

Defaults can be overridden in the UI.

## Safety

CSAM detection runs on all outputs (NudeNet age classifier + hash check). All other content is allowed: NSFW, gore, violence.

## Status

Windows + Linux + macOS. AMD on Linux uses ROCm; AMD/Intel on Windows use DirectML; Apple Silicon uses MPS. Intel Macs run on CPU only (no GPU acceleration path).