cell serve

Start HTTP API server for LLM commands

Synopsis

Starts an OpenAI-compatible HTTP server that proxies chat completions to LLM agent binaries (claude, opencode).

Endpoints:

POST /v1/chat/completions  — OpenAI chat completions API
GET  /api/v1/health        — health check

The model field selects the agent: "claude", "opencode", or "claude/claude-sonnet-4-5" (agent/submodel).
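The agent/submodel split can be illustrated with a small sketch (the helper name `parse_model` is hypothetical; the actual parsing inside cell may differ):

```python
def parse_model(model: str) -> tuple:
    """Split an OpenAI-style model string into (agent, submodel).

    "claude"                    -> ("claude", None)
    "claude/claude-sonnet-4-5"  -> ("claude", "claude-sonnet-4-5")
    """
    agent, sep, submodel = model.partition("/")
    return (agent, submodel if sep else None)
```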

Request:

{"model": "claude", "messages": [{"role": "user", "content": "explain this"}]}

Usage:

cell serve [flags]

Examples:

cell serve
cell serve --port 9090

Options

  -h, --help       help for serve
      --port int   port to listen on (default 8484)

Options inherited from parent commands

      --base-image string         core image for scaffold Dockerfile (default "ghcr.io/dimmkirr/devcell:core-local")
      --build                     rebuild image before running (forces --no-cache)
      --debug                     plain-text mode plus stream full build log to stdout
      --dry-run                   print docker run argv and exit without running
      --engine string             execution engine: docker or vagrant (default "docker")
      --macos                     use macOS VM via Vagrant (alias for --engine=vagrant)
      --plain-text                disable spinners, use plain log output (for CI/non-TTY)
      --session-name string       session name for persistent home (~/.devcell/<name>)
      --vagrant-box string        Vagrant box name override
      --vagrant-provider string   Vagrant provider (e.g. utm) (default "utm")

SEE ALSO