// index //
declarative GPU containers. vast.ai. runpod. bare-metal. zero dockerfile cope.
// start here //
- getting started — build your first container, ssh in
- options reference — all the knobs
- architecture — how it works internally
- services & runtime — Nimi, startup sequence
- defining custom services — add your own service modules
- secrets & agenix — keys never touch the nix store
- integrations — plugging into the wider nix ecosystem
// high-level //
- declare containers under perSystem.nix2gpu.<n>
- each container config is a nix module (like nixos modules)
- nix2gpu assembles:
  - root filesystem with nix store + your packages
  - startup script for the runtime environment
  - service graph via Nimi
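a minimal sketch of what a declaration might look like, assuming a flake-parts setup — the option names inside the module (`packages` here) are illustrative placeholders, not the real schema; see the options reference for the actual knobs:

```nix
# flake.nix, inside a flake-parts perSystem block.
# NOTE: the nix2gpu option names below are hypothetical —
# consult the options reference for the real module options.
perSystem = { pkgs, ... }: {
  nix2gpu.my-container = {
    # packages to bake into the container's nix store (illustrative)
    packages = [ pkgs.python3 ];
  };
};
```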
- helper commands:
  - nix build .#<n> — build image
  - nix run .#<n>.copy-to-container-runtime — load into docker/podman
  - nix run .#<n>.copy-to-github — push to ghcr
  - nix run .#<n>.copy-to-runpod — push to runpod
// cloud targets //
| platform | status | notes |
|---|---|---|
| vast.ai | ✅ stable | nvidia libs at /lib/x86_64-linux-gnu |
| runpod | ✅ stable | network volumes, template support |
| lambda labs | ✅ works | standard docker |
| bare-metal | ✅ works | just run the container |
| kubernetes | 🚧 wip | gpu operator integration |
// where to go //
- just want something running → getting started
- want all the options → options reference
- hacking on internals → architecture
- secrets and tailscale → secrets