# Deployment guide — Proxmox LXC (home network)

Target architecture:

```
Reverse proxy (existing)             innercontext LXC (new, Debian 13)
┌──────────────────────┐             ┌────────────────────────────────────┐
│ reverse proxy        │────────────▶│ nginx :80                          │
│ innercontext.lan → * │             │   /api/* → uvicorn :8000/*         │
└──────────────────────┘             │   /*     → SvelteKit Node :3000    │
                                     └────────────────────────────────────┘
                                           │                 │
                                        FastAPI       SvelteKit Node
```

> **Frontend is never built on the server.** The `vite build` + `adapter-node`
> esbuild step is CPU/RAM-intensive and will hang on a small LXC. Build locally,
> deploy the `build/` artifact via `deploy.sh`.

## 1. Prerequisites

- Proxmox VE host with an existing PostgreSQL LXC and a reverse proxy
- LAN hostname `innercontext.lan` resolvable on the network (via router DNS or `/etc/hosts`)
- The PostgreSQL LXC must accept connections from the innercontext LXC IP

---

## 2. Create the LXC container

In the Proxmox UI (or via CLI):

```bash
# CLI example — adjust storage, bridge, IP to your environment
pct create 200 local:vztmpl/debian-13-standard_13.0-1_amd64.tar.zst \
  --hostname innercontext \
  --cores 2 \
  --memory 1024 \
  --swap 512 \
  --rootfs local-lvm:8 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --unprivileged 1 \
  --start 1
```

Note the container's IP address after it starts (`pct exec 200 -- ip -4 a`).

---

## 3. Container setup

```bash
pct enter 200   # or SSH into the container
```

### System packages

```bash
apt update && apt upgrade -y
apt install -y git nginx curl ca-certificates gnupg lsb-release libpq5 rsync
```

### Python 3.12+ and uv

```bash
apt install -y python3 python3-venv
curl -LsSf https://astral.sh/uv/install.sh | UV_INSTALL_DIR=/usr/local/bin sh
```

Installing to `/usr/local/bin` makes `uv` available system-wide (required for `sudo -u innercontext uv sync`).

### Node.js 24 LTS + pnpm

The server needs Node.js to **run** the pre-built frontend bundle, and pnpm to **install production runtime dependencies** (`clsx`, `bits-ui`, etc. — `adapter-node` bundles the SvelteKit framework but leaves these external). The frontend is never **built** on the server.

```bash
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.4/install.sh | bash
. "$HOME/.nvm/nvm.sh"
nvm install 24
```

Copy `node` to `/usr/local/bin` so it is accessible system-wide (required for `sudo -u innercontext` and for systemd). Use `--remove-destination` to replace any existing symlink with a real file:

```bash
cp --remove-destination "$(nvm which current)" /usr/local/bin/node
```

Install pnpm as a standalone binary — self-contained, no wrapper scripts, works system-wide:

```bash
curl -fsSL "https://github.com/pnpm/pnpm/releases/latest/download/pnpm-linux-x64" \
  -o /usr/local/bin/pnpm
chmod 755 /usr/local/bin/pnpm
```

### Application user

```bash
useradd --system --create-home --shell /bin/bash innercontext
```

---

## 4. Create the database on the PostgreSQL LXC

Run on the **PostgreSQL LXC**:

```bash
psql -U postgres <<'SQL'
CREATE USER innercontext WITH PASSWORD 'change-me';
CREATE DATABASE innercontext OWNER innercontext;
SQL
```

Edit `/etc/postgresql/18/main/pg_hba.conf` and add (replace `<container-ip>` with the innercontext container IP):

```
host    innercontext    innercontext    <container-ip>/32    scram-sha-256
```

Then reload:

```bash
systemctl reload postgresql
```

---

## 5. Clone the repository

```bash
mkdir -p /opt/innercontext
git clone https://github.com/your-user/innercontext.git /opt/innercontext
chown -R innercontext:innercontext /opt/innercontext
```

---

## 6. Backend setup

```bash
cd /opt/innercontext/backend
```

### Install dependencies

```bash
sudo -u innercontext uv sync
```

### Create `.env`

Replace `<pg-lxc-ip>` with the PostgreSQL LXC's address:

```bash
cat > /opt/innercontext/backend/.env <<'EOF'
DATABASE_URL=postgresql+psycopg://innercontext:change-me@<pg-lxc-ip>/innercontext
GEMINI_API_KEY=your-gemini-api-key
# GEMINI_MODEL=gemini-flash-latest   # optional, this is the default
EOF
chmod 600 /opt/innercontext/backend/.env
chown innercontext:innercontext /opt/innercontext/backend/.env
```

### Run database migrations

```bash
sudo -u innercontext bash -c '
  cd /opt/innercontext/backend
  uv run alembic upgrade head
'
```

This creates all tables on first run. On subsequent deploys it applies only the new migrations.

> **Existing database (tables already created by `create_db_and_tables`):**
> run `uv run alembic stamp head` instead to mark the current schema as migrated without re-running DDL.

### Test

```bash
sudo -u innercontext bash -c '
  cd /opt/innercontext/backend
  uv run uvicorn main:app --host 127.0.0.1 --port 8000
'
# Ctrl-C after confirming it starts
```

### Install systemd service

```bash
cp /opt/innercontext/systemd/innercontext.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable --now innercontext
systemctl status innercontext
```

---

## 7. Frontend setup

The frontend is **built locally and uploaded** via `deploy.sh` — never built on the server. This section only covers the one-time server-side configuration.
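The repo ships its own unit file (`systemd/innercontext-node.service`), installed later in this section. For orientation only, a minimal unit for running an `adapter-node` bundle might look like the sketch below; the paths and the `HOST`/`PORT` values are assumptions based on the architecture diagram, not the repo's actual file:

```ini
[Unit]
Description=innercontext SvelteKit frontend (Node)
After=network.target

[Service]
User=innercontext
WorkingDirectory=/opt/innercontext/frontend
# adapter-node reads ORIGIN from the environment; .env.production provides it
EnvironmentFile=/opt/innercontext/frontend/.env.production
Environment=HOST=127.0.0.1 PORT=3000
ExecStart=/usr/local/bin/node build/index.js
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

`adapter-node` binds to `HOST`/`PORT` from the environment, which is why nginx proxies `/*` to `:3000`.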
### Create `.env.production`

```bash
cat > /opt/innercontext/frontend/.env.production <<'EOF'
PUBLIC_API_BASE=http://innercontext.lan/api
ORIGIN=http://innercontext.lan
EOF
chmod 600 /opt/innercontext/frontend/.env.production
chown innercontext:innercontext /opt/innercontext/frontend/.env.production
```

### Grant `innercontext` passwordless sudo for service restarts

```bash
cat > /etc/sudoers.d/innercontext-deploy <<'EOF'
innercontext ALL=(root) NOPASSWD: \
    /usr/bin/systemctl restart innercontext, \
    /usr/bin/systemctl restart innercontext-node
EOF
chmod 440 /etc/sudoers.d/innercontext-deploy
```

### Install systemd service

```bash
cp /opt/innercontext/systemd/innercontext-node.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable innercontext-node
# Do NOT start yet — build/ is empty until the first deploy.sh run
```

---

## 8. nginx setup

```bash
cp /opt/innercontext/nginx/innercontext.conf /etc/nginx/sites-available/innercontext
ln -s /etc/nginx/sites-available/innercontext /etc/nginx/sites-enabled/
rm -f /etc/nginx/sites-enabled/default
nginx -t
systemctl reload nginx
```

---

## 9. Reverse proxy configuration

Point your existing reverse proxy at the innercontext LXC's nginx (`<container-ip>:80`).

Example — Caddy:

```
innercontext.lan {
    reverse_proxy <container-ip>:80
}
```

Example — nginx upstream:

```nginx
server {
    listen 80;
    server_name innercontext.lan;

    location / {
        proxy_pass http://<container-ip>:80;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Reload your reverse proxy after applying the change.

---

## 10. First deploy from local machine

All deploys, including the first one, use `deploy.sh` from your local machine.

### SSH config

Add to `~/.ssh/config` on your local machine (replace `<container-ip>` with the innercontext container IP):

```
Host innercontext
    HostName <container-ip>
    User innercontext
```

Make sure your SSH public key is in `/home/innercontext/.ssh/authorized_keys` on the server.
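With SSH access in place, `deploy.sh` drives everything. As a sketch of its control flow only — function bodies are replaced with `echo`, and the real script (which runs `pnpm`, `rsync`, and `ssh`) may be structured differently:

```shell
#!/bin/sh
# Sketch of deploy.sh's target selection — NOT the repo's actual script.
build_frontend()  { echo "build frontend"; }
upload_frontend() { echo "upload frontend/build/"; }
restart_node()    { echo "restart innercontext-node"; }
upload_backend()  { echo "upload backend/"; }
sync_backend()    { echo "uv sync --frozen"; }
restart_api()     { echo "restart innercontext"; }

target="${1:-all}"          # ./deploy.sh [frontend|backend]
case "$target" in
  all|frontend) build_frontend; upload_frontend; restart_node ;;
esac
case "$target" in
  all|backend)  upload_backend; sync_backend; restart_api ;;
esac
```

Running it with no argument walks both halves in order; `frontend` or `backend` runs only the matching half.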
### Run the first deploy

```bash
# From the repo root on your local machine:
./deploy.sh
```

This will:

1. Build the frontend locally (`pnpm run build`)
2. Upload `frontend/build/` to the server via rsync
3. Restart `innercontext-node`
4. Upload `backend/` source to the server
5. Run `uv sync --frozen` on the server
6. Restart `innercontext` (runs alembic migrations on start)

---

## 11. Verification

```bash
# From any machine on the LAN:
curl http://innercontext.lan/api/health-check   # {"status":"ok"}
curl http://innercontext.lan/api/products       # []
curl http://innercontext.lan/                   # SvelteKit HTML shell
```

The web UI should be accessible at `http://innercontext.lan`.

---

## 12. Updating the application

```bash
# From the repo root on your local machine:
./deploy.sh            # full deploy (frontend + backend)
./deploy.sh frontend   # frontend only
./deploy.sh backend    # backend only
```

---

## 13. Troubleshooting

### 502 Bad Gateway on `/api/*`

```bash
systemctl status innercontext
journalctl -u innercontext -n 50
# Check .env DATABASE_URL is correct and the PG LXC accepts connections
```

### 502 Bad Gateway on `/`

```bash
systemctl status innercontext-node
journalctl -u innercontext-node -n 50
# Verify /opt/innercontext/frontend/build/index.js exists (deploy.sh ran successfully)
```

### Database connection refused

```bash
# From the innercontext LXC (requires the postgresql-client package):
psql "postgresql://innercontext:change-me@<pg-lxc-ip>/innercontext" -c "SELECT 1"
# If it fails, check pg_hba.conf on the PG LXC and verify the IP matches
```
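Note that the `DATABASE_URL` in the backend's `.env` is a SQLAlchemy URL; `psql` does not accept the `postgresql+psycopg://` scheme. Dropping the `+psycopg` driver marker yields a plain libpq URL. A sketch, with the PostgreSQL host left as a placeholder:

```shell
# The backend's DATABASE_URL uses SQLAlchemy's "postgresql+psycopg://" scheme,
# which psql does not understand. Strip the driver suffix to get a libpq URL.
DATABASE_URL='postgresql+psycopg://innercontext:change-me@<pg-lxc-ip>/innercontext'
PSQL_URL=$(printf '%s' "$DATABASE_URL" | sed 's|^postgresql+psycopg://|postgresql://|')
printf '%s\n' "$PSQL_URL"
# → postgresql://innercontext:change-me@<pg-lxc-ip>/innercontext
# then: psql "$PSQL_URL" -c "SELECT 1"
```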