Release Notes
MVP-1.1 — Dashboard Polish (in progress)
Operator-facing polish atop MVP-1: UX hardening plus the supporting read-path endpoints and persistence it requires; no changes to existing agent behavior.
Highlights
- MCP/Tools panel loads for `agent-NNN` IDs. The MCP catalog resolver now accepts dashboard-style identifiers (`agent-001`…`agent-014`) in addition to PascalCase agent names. The Fleet → Agent → MCP Tools and Tool Calls panels populate cleanly instead of showing the "Some live detail panels could not load: tool catalog" pill.
- Single-host deployment indicator. When only one site is registered (the default for a laptop bring-up), the per-agent "Deploy to Site" button row is replaced by a "Running locally on this host" status badge, matching the reality that there is nothing to deploy to.
- Streaming AI Chat. `/chat` now consumes the existing `POST /api/v1/chat/stream` SSE endpoint with token-by-token rendering, a Stop button to abort an in-flight generation, a Regenerate action on the last assistant turn, and `localStorage` persistence of the active conversation across reloads.
- Granular MCP/Tool-Call panel states. The Fleet → Agent → MCP Tools and Tool Calls panels now render distinct loading, error (with a retry affordance), empty, and data branches instead of a single ambiguous "still loading" placeholder. Per-section error messages are tracked independently, so a tool-catalog failure no longer hides a successful tool-call history (and vice versa).
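Token-by-token SSE consumption along the lines used by the chat surface can be sketched as below. The exact wire format of `POST /api/v1/chat/stream` is an assumption here (a JSON object with a `token` field per `data:` line, terminated by `data: [DONE]`); the real endpoint may frame events differently.

```python
import json
from typing import Iterable, Iterator


def iter_sse_tokens(lines: Iterable[str]) -> Iterator[str]:
    """Yield chat tokens from an SSE stream, one per 'data:' event.

    Assumes each event payload is a JSON object carrying a 'token'
    field and that the stream ends with 'data: [DONE]' -- both are
    assumptions about the /api/v1/chat/stream wire format.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alives, comments, 'event:' lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        event = json.loads(payload)
        if "token" in event:
            yield event["token"]
```

Aborting the generator mid-iteration (the Stop button's job in the dashboard) simply stops consuming events; the server-side request would be cancelled by closing the HTTP connection.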
- `GET /api/v1/system/topology`. New endpoint returning `deployment_mode` (`single_host`/`multi_site`), the resolved hostname, platform string, runtime mode, and a compact site list. The agents page consumes it to surface the actual hostname in the single-host pill ("Running locally on &lt;hostname&gt; · &lt;site name&gt;") so operators can verify the bind target at a glance.
- Real-mode live-fire bring-up path. AuroraSOC now has a repo-local real-mode task that starts the API, dashboard, Redis task worker, and local A2A agent mesh together for single-host operation. The SIEM workspace is also no longer backed by transient in-memory log rows in real mode: raw events now persist in the new `siem_logs` table and can be fed by the new `scripts/local_siem_bridge.py` Linux log bridge, while `scripts/suricata_eve_bridge.py` continues to feed the network telemetry pipeline for Kali-driven attack drills.
- "Activate locally" affordance. On single-host installs, agents that are not yet bound to the local site now render an `Activate locally` button that deploys the agent to the only available site in one click, replacing the multi-site `Deploy to Site` matrix that previously implied remote infrastructure that does not exist on a laptop bring-up.
- Network Analyzer findings respect runtime mode. The `GET /api/v1/network-analyzer/findings` endpoint no longer fabricates hardcoded C2-beaconing / DNS-tunneling rows in real or dry-run modes. The static showcase fixtures are now scoped to a `_network_analysis_seed` list that is surfaced only in dummy mode; real and dry-run modes return exclusively the analyses produced by `POST /api/v1/network-analyzer/analyze` during the current process lifetime (capped at 200 entries). Real-mode analyses continue to persist into `NetworkAttackModel`. Dry-run analyses are now also retained in the runtime buffer (without DB writes) so operators can review what the agent produced without leaving the page.
MVP-1 — Single-Command SOC on a Laptop
MVP-1 locks the bring-up surface to a single Granite model running on the host operator's machine and ships the Network Command Center as the primary HITL loop. The goal is for a new operator to go from a clean checkout to a running SOC in three commands.
Highlights
- Single-model lock. `granite3.2:8b` (Q4_K_M) is the default for both specialist agents and the orchestrator. `GRANITE_SINGLE_MODEL_MODE=true` and `GRANITE_USE_SHARED_MODEL_POOL=true` keep the 14-agent fleet on one warm process. See Local LLM Deployment.
- `make llm-doctor`. Validates Ollama connectivity, model presence, and `OLLAMA_MODEL`/`OLLAMA_ORCHESTRATOR_MODEL` agreement before the stack is brought up.
- `make stack-up`. One target boots Postgres, Redis, FastAPI, and the Next.js dashboard against host-run Ollama. `make stack-down KEEP_INFRA=1` preserves Postgres/Redis volumes between cycles.
- `make agents-smoke`. Live LLM round-trip smoke for the full fleet (1 orchestrator + 13 specialists) using the BeeAI A2A mesh.
- Network Command Center. Operator-facing HITL surface at `/network-attacks` with the Critical Approval Queue, dispatch deep-link receipts, and a documented degraded-mode notice when live reads are unavailable. See the Network Command Center user guide.
- Agent reasoning trail. Each network attack receipt now carries the specialist's reasoning trail through the API and into the dashboard timeline.
- Distributed mode runbook. `docker-compose.host-ollama.yml` and the matching verification script let a second machine target the operator's Ollama. See Deployment Modes.
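The compose override for the distributed runbook would have roughly this shape. This is an illustrative fragment only; the repo's `docker-compose.host-ollama.yml` is authoritative, the `OLLAMA_BASE_URL` variable name is an assumption, and it presumes Ollama listens on the operator host's default port 11434.

```yaml
# Illustrative sketch, not the shipped docker-compose.host-ollama.yml.
services:
  api:
    environment:
      # Point the containerized API at the operator machine's Ollama
      # (variable name assumed; check the repo's env reference).
      OLLAMA_BASE_URL: "http://host.docker.internal:11434"
    extra_hosts:
      # Make the Docker host reachable by name from inside the container.
      - "host.docker.internal:host-gateway"
```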
Operator quick-start
```bash
ollama serve &
ollama pull granite3.2:8b
make llm-doctor
make stack-up
make agents-smoke
```
The dashboard is reachable at http://localhost:3100 and the API at
http://localhost:8001. Use `make stack-down KEEP_INFRA=1` to stop the app
processes while keeping Postgres and Redis warm for the next cycle.
Verification matrix
Operators are expected to capture these numbers in their own environment as part of MVP-1 sign-off — they vary with hardware, so the release does not publish reference figures.
| Check | Command | What to record |
|---|---|---|
| Cold-boot stack | make stack-up | Time until /healthz is ok and the dashboard renders |
| Model warm-up | make llm-doctor | First-token latency reported by the doctor |
| Orchestrator round-trip | make agents-smoke | Median orchestrator latency across the smoke run |
| Approvals decision | Approve a queued action in the Network Command Center | Wall-clock from click to receipt published |
| Distributed mode | make compose-host-ollama on a second host | Network round-trip from API node to host Ollama |
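For the cold-boot row, a small helper like the following can capture the time-to-healthy number. It is a sketch: the probe is injectable so the timing logic is testable, and in practice it would GET `http://localhost:8001/healthz` and check for an ok response.

```python
import time
from typing import Callable


def time_until_healthy(probe: Callable[[], bool],
                       timeout_s: float = 120.0,
                       interval_s: float = 0.5) -> float:
    """Poll `probe` until it returns True; report elapsed seconds.

    `probe` would typically hit /healthz on the API node; it is passed
    in here so the helper can be exercised without a running stack.
    """
    start = time.monotonic()
    deadline = start + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return time.monotonic() - start
        time.sleep(interval_s)
    raise TimeoutError(f"/healthz not ok within {timeout_s}s")
```

Run it in one terminal while `make stack-up` executes in another, and record the returned value for the verification matrix.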
Known constraints
- `GRANITE_USE_FINETUNED=true` exits the MVP-1 envelope; the LoRA path is not part of this release.
- Live network telemetry reads require a healthy database. In `dry_run` mode with the database offline, the Network Command Center renders a degraded notice rather than fabricated data.
- The codebase canonical default is `granite4:8b` (specialist) and `granite4:dense` (orchestrator). MVP-1 also supports `granite3.2:8b` as a verified single-model override; set `OLLAMA_MODEL=granite3.2:8b` and `OLLAMA_ORCHESTRATOR_MODEL=granite3.2:8b` to run the entire 14-agent fleet on one warm Ollama process.