cell models

Detect locally available LLM models and show coding capability ratings

Synopsis

Detects a locally running Ollama instance and lists the available models, ranked by coding capability based on SWE-bench Verified leaderboard data.

Scores are fetched live from swebench.com and represent the best result for each model family (full precision, with agentic scaffolding). Quantized Ollama variants will typically score lower in practice.

Falls back to built-in estimates if SWE-bench data is unavailable.
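The detection step described above can be sketched in shell. Ollama's documented local API serves the installed model list at GET /api/tags on localhost:11434; the helper name, the JSON scraping, and the fallback message below are illustrative only, not the command's actual implementation or output.

```shell
#!/bin/sh
# Sketch of local model detection, assuming the default Ollama API port.
list_ollama_models() {
  # Probe Ollama's /api/tags endpoint and print each installed model name.
  # Crude "name" field scrape keeps this dependency-free (no jq required).
  curl -fsS --max-time 2 http://localhost:11434/api/tags 2>/dev/null |
    grep -o '"name":"[^"]*"' | cut -d'"' -f4
}

models=$(list_ollama_models)
if [ -n "$models" ]; then
  printf '%s\n' "$models"
else
  # Mirrors the documented fallback: no live data, use built-in estimates.
  echo "no local ollama detected; falling back to built-in estimates"
fi
```

When Ollama is not running, `curl` fails silently and the fallback branch is taken, matching the behavior described above.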

Usage:

cell models [flags]

Examples:

cell models
cell models --debug

Options

      --debug   Show detailed detection and ranking logs
  -h, --help    help for models

Options inherited from parent commands

      --base-image string         core image for scaffold Dockerfile (default: ghcr.io/dimmkirr/devcell:core-local)
      --build                     rebuild image before running (forces --no-cache)
      --dry-run                   print docker run argv and exit without running
      --engine string             execution engine: docker or vagrant (default "docker")
      --macos                     use macOS VM via Vagrant (alias for --engine=vagrant)
      --plain-text                disable spinners, use plain log output (for CI/non-TTY)
      --session-name string       session name for persistent home (~/.devcell/<name>)
      --vagrant-box string        Vagrant box name override
      --vagrant-provider string   Vagrant provider (e.g. utm) (default "utm")

SEE ALSO