Release Notes

MVP-1.1 — Dashboard Polish (in progress)

Operator-facing polish atop MVP-1. No backend behavior changes; UX hardening only.

Highlights

  • MCP/Tools panel loads for agent-NNN IDs. The MCP catalog resolver now accepts dashboard-style identifiers (agent-001 through agent-014) in addition to PascalCase agent names. The Fleet → Agent → MCP Tools and Tool Calls panels populate cleanly instead of showing the "Some live detail panels could not load: tool catalog" pill.
  • Single-host deployment indicator. When only one site is registered (the default for a laptop bring-up) the per-agent "Deploy to Site" button row is replaced by a "Running locally on this host" status badge, matching the reality that there is nothing to deploy to.
  • Streaming AI Chat. /chat now consumes the existing POST /api/v1/chat/stream SSE endpoint with token-by-token rendering, a Stop button to abort an in-flight generation, a Regenerate action on the last assistant turn, and localStorage persistence of the active conversation across reloads.
  • Granular MCP/Tool-Call panel states. The Fleet → Agent → MCP Tools and Tool Calls panels now render distinct loading, error (with a retry affordance), empty, and data branches instead of a single ambiguous "still loading" placeholder. Per-section error messages are tracked independently so a tool-catalog failure no longer hides a successful tool-call history (and vice versa).
  • GET /api/v1/system/topology. New endpoint returning deployment_mode (single_host / multi_site), the resolved hostname, platform string, runtime mode, and a compact site list. The agents page consumes it to surface the actual hostname in the single-host pill ("Running locally on <hostname> · <site name>") so operators can verify the bind target at a glance.
  • Real-mode live-fire bring-up path. AuroraSOC now has a repo-local real mode task that starts the API, dashboard, Redis task worker, and local A2A agent mesh together for single-host operation. The SIEM workspace is also no longer backed by transient in-memory log rows in real mode: raw events now persist in the new siem_logs table and can be fed by the new scripts/local_siem_bridge.py Linux log bridge, while scripts/suricata_eve_bridge.py continues to feed the network telemetry pipeline for Kali-driven attack drills.
  • "Activate locally" affordance. On single-host installs, agents that are not yet bound to the local site now render an Activate locally button that deploys the agent to the only available site in one click, replacing the multi-site Deploy to Site matrix that previously implied remote infrastructure that does not exist on a laptop bring-up.
  • Network Analyzer findings respect runtime mode. The GET /api/v1/network-analyzer/findings endpoint no longer fabricates hardcoded C2-beaconing / DNS-tunneling rows in real or dry-run modes. The static showcase fixtures are now scoped to a _network_analysis_seed list that is surfaced only in dummy mode; real and dry-run modes return exclusively the analyses produced by POST /api/v1/network-analyzer/analyze during the current process lifetime (capped at 200 entries). Real-mode analyses continue to persist into NetworkAttackModel. Dry-run analyses are now also retained in the runtime buffer (without DB writes) so operators can review what the agent produced without leaving the page.
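The token-by-token rendering in the streaming chat item above boils down to parsing SSE `data:` lines as they arrive. A minimal sketch of that parsing step follows; the `{"token": ...}` payload shape and the `[DONE]` terminator are assumptions for illustration, not the documented wire format of POST /api/v1/chat/stream.

```python
import json


def iter_sse_tokens(lines):
    """Yield token strings from raw SSE lines of a chat stream.

    Assumes each event is a line like 'data: {"token": "..."}' and
    that the stream ends with 'data: [DONE]' -- both are assumed
    payload shapes, not taken from the release notes.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip SSE comments and keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # server signalled end of generation
        event = json.loads(payload)
        token = event.get("token")
        if token is not None:
            yield token


raw = [
    'data: {"token": "Hel"}',
    'data: {"token": "lo"}',
    "data: [DONE]",
]
print("".join(iter_sse_tokens(raw)))  # -> Hello
```

In the dashboard the same loop would run over the fetch response body, appending each yielded token to the active assistant turn; the Stop button simply aborts the underlying request.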
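The single-host pill described for GET /api/v1/system/topology can be derived with a small amount of client logic. A sketch, assuming the field names listed above (deployment_mode, hostname, and a site list) plus an assumed per-site `name` key:

```python
def single_host_badge(topology: dict):
    """Return the single-host badge label, or None for multi-site installs.

    Field names follow the topology endpoint description; the exact
    shape of each site entry (a dict with a "name" key) is an assumption.
    """
    if topology.get("deployment_mode") != "single_host":
        return None  # multi-site installs keep the Deploy to Site row
    hostname = topology.get("hostname", "localhost")
    sites = topology.get("sites") or []
    site_name = sites[0].get("name", "local") if sites else "local"
    return f"Running locally on {hostname} · {site_name}"


print(single_host_badge({
    "deployment_mode": "single_host",
    "hostname": "laptop",
    "sites": [{"name": "hq"}],
}))  # -> Running locally on laptop · hq
```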

MVP-1 — Single-Command SOC on a Laptop

MVP-1 locks the bring-up surface to a single Granite model running on the host operator's machine and ships the Network Command Center as the primary HITL loop. The goal is for a new operator to go from a clean checkout to a running SOC in three commands.

Highlights

  • Single-model lock. granite3.2:8b (Q4_K_M) is the default for both specialist agents and the orchestrator. GRANITE_SINGLE_MODEL_MODE=true and GRANITE_USE_SHARED_MODEL_POOL=true keep the 14-agent fleet on one warm process. See Local LLM Deployment.
  • make llm-doctor. Validates Ollama connectivity, model presence, and OLLAMA_MODEL / OLLAMA_ORCHESTRATOR_MODEL agreement before the stack is brought up.
  • make stack-up. One target boots Postgres, Redis, FastAPI, and the Next.js dashboard against host-run Ollama. make stack-down KEEP_INFRA=1 preserves Postgres/Redis volumes between cycles.
  • make agents-smoke. Live LLM round-trip smoke for the full fleet (1 orchestrator + 13 specialists) using the BeeAI A2A mesh.
  • Network Command Center. Operator-facing HITL surface at /network-attacks with the Critical Approval Queue, dispatch deep-link receipts, and a documented degraded-mode notice when live reads are unavailable. See the Network Command Center user guide.
  • Agent reasoning trail. Each network attack receipt now carries the specialist's reasoning trail through the API and into the dashboard timeline.
  • Distributed mode runbook. docker-compose.host-ollama.yml and the matching verification script let a second machine target the operator's Ollama. See Deployment Modes.
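The OLLAMA_MODEL / OLLAMA_ORCHESTRATOR_MODEL agreement check that make llm-doctor performs can be sketched as a pure function over the environment; the exact doctor logic is an assumption, but the invariant (both variables set and equal in single-model mode) comes from the bullets above.

```python
REQUIRED_MODEL = "granite3.2:8b"  # MVP-1 single-model default


def check_model_agreement(env: dict):
    """Return a list of configuration problems (empty list = healthy).

    Mirrors the kind of agreement check `make llm-doctor` runs before
    stack bring-up; the exact checks the doctor performs are assumed.
    """
    problems = []
    model = env.get("OLLAMA_MODEL")
    orch = env.get("OLLAMA_ORCHESTRATOR_MODEL")
    if not model:
        problems.append("OLLAMA_MODEL is unset")
    if not orch:
        problems.append("OLLAMA_ORCHESTRATOR_MODEL is unset")
    if model and orch and model != orch:
        # Single-model mode requires specialists and orchestrator to share one model
        problems.append(f"model mismatch: {model} != {orch}")
    return problems


print(check_model_agreement({
    "OLLAMA_MODEL": REQUIRED_MODEL,
    "OLLAMA_ORCHESTRATOR_MODEL": REQUIRED_MODEL,
}))  # -> []
```

In practice the real doctor also probes Ollama connectivity and model presence before the stack starts; this sketch covers only the agreement invariant.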

Operator quick-start

ollama serve &
ollama pull granite3.2:8b
make llm-doctor
make stack-up
make agents-smoke

The dashboard is reachable at http://localhost:3100 and the API at http://localhost:8001. Use make stack-down KEEP_INFRA=1 to stop the app processes while keeping Postgres and Redis warm for the next cycle.

Verification matrix

Operators are expected to capture these numbers in their own environment as part of MVP-1 sign-off — they vary with hardware, so the release does not publish reference figures.

| Check | Command | What to record |
| --- | --- | --- |
| Cold-boot stack | make stack-up | Time until /healthz is ok and the dashboard renders |
| Model warm-up | make llm-doctor | First-token latency reported by the doctor |
| Orchestrator round-trip | make agents-smoke | Median orchestrator latency across the smoke run |
| Approvals decision | Approve a queued action in the Network Command Center | Wall-clock from click to receipt published |
| Distributed mode | make compose-host-ollama on a second host | Network round-trip from API node to host Ollama |
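The cold-boot row can be captured with a small polling loop. A sketch with an injectable probe; in practice `probe` would GET http://localhost:8001/healthz and check for an ok response (that wiring is left out here so the sketch stays self-contained).

```python
import time


def time_until_healthy(probe, timeout_s=180.0, interval_s=0.5):
    """Poll `probe()` until it returns True; return elapsed seconds.

    `probe` is any zero-argument callable returning a bool, e.g. a
    function that requests /healthz and checks the response body.
    """
    start = time.monotonic()
    deadline = start + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return time.monotonic() - start
        time.sleep(interval_s)
    raise TimeoutError(f"stack not healthy after {timeout_s}s")


# Trivial demo: an always-healthy probe returns almost immediately.
print(f"{time_until_healthy(lambda: True):.3f}s")
```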

Known constraints

  • GRANITE_USE_FINETUNED=true exits the MVP-1 envelope; the LoRA path is not part of this release.
  • Live network telemetry reads require a healthy database. In dry_run mode with the database offline, the Network Command Center renders a degraded notice rather than fabricated data.
  • The codebase's canonical defaults are granite4:8b (specialist) and granite4:dense (orchestrator). MVP-1 also supports granite3.2:8b as a verified single-model override; set OLLAMA_MODEL=granite3.2:8b and OLLAMA_ORCHESTRATOR_MODEL=granite3.2:8b to run the entire 14-agent fleet on one warm Ollama process.
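The single-model override from the last constraint is just two environment variables set before bring-up; a minimal shell fragment (variable names and model tag from the notes above, shell placement is the operator's choice):

```shell
# Pin both specialist and orchestrator to the verified MVP-1 model
export OLLAMA_MODEL=granite3.2:8b
export OLLAMA_ORCHESTRATOR_MODEL=granite3.2:8b
```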