Run AuroraSOC Locally (Linux / macOS / Windows)

This guide walks through running AuroraSOC directly on your host — no Docker, no Podman — on Linux, macOS, or Windows. It is the safest path on a personal machine because every process is owned by your user, every port is explicit, and there are no shared volumes to clean up.

If you want the full containerised stack instead, see Quick Start and Stack Lifecycle.

Which mode should I run?
  • SYSTEM_MODE=dummy — in-memory demo data, no LLM calls, no persistent writes. Best for first-time UI verification on a personal machine.
  • SYSTEM_MODE=dry_run — real LLM round-trips through your local Ollama; agents reason over alerts but no destructive playbook actions execute.
  • SYSTEM_MODE=production — full read/write against PostgreSQL plus live dispatch. Use only after you understand the database and approval workflows.

This page targets dummy and dry_run. They require no database and produce no side effects on your host.

Prerequisites (all platforms)

Tool      Minimum version   Notes
Python    3.12+             python3 --version
Node.js   22+               node --version (npm ships with Node)
Git       any recent        git --version
Redis     7.x (optional)    Only required for the WebSocket event stream; dummy mode degrades gracefully if Redis is unreachable.
Ollama    0.4+ (optional)   Only needed for dry_run / production. Pull granite4:8b first.

You do not need PostgreSQL, NATS, or any container runtime for this guide.
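
You can sanity-check the toolchain in one pass before continuing. A minimal sketch (assumes redis-cli is on your PATH; skip that line if you are not using Redis):

python3 --version   # expect 3.12 or newer
node --version      # expect v22 or newer
git --version
redis-cli ping 2>/dev/null || echo "Redis not running (fine for dummy mode)"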

Suggested ports (used throughout this page):

Port    Service
8001    FastAPI backend
3100    Next.js dashboard
6379    Redis (host default)
11434   Ollama (host default)

The non-standard 8001 / 3100 choices avoid collisions with other dev tools that commonly bind 8000 / 3000.
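
To confirm the ports are actually free before launching, lsof works on both Linux and macOS; each command prints nothing when the port is unclaimed:

lsof -nP -iTCP:8001 -sTCP:LISTEN
lsof -nP -iTCP:3100 -sTCP:LISTEN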


One-time setup

# 1. Clone
git clone https://github.com/ahmeddwalid/AuroraSOC
cd AuroraSOC

# 2. Python virtual environment
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"

# 3. Dashboard dependencies
( cd dashboard && npm install )

# 4. (Optional) Local services
# Redis: systemd shown for Linux; on macOS use `brew services start redis`, or just run `redis-server &`
sudo systemctl start redis
# Ollama (only for dry_run / production); the Linux install script is shown, macOS/Windows installers live at ollama.com
curl -fsSL https://ollama.com/install.sh | sh
ollama pull granite4:8b
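
A quick way to confirm the setup succeeded is an import check against the installed package (the aurorasoc package name is inferred from the uvicorn target used below) plus a dependency probe in the dashboard:

python -c "import aurorasoc; print('aurorasoc importable')"
( cd dashboard && npm ls next --depth=0 )   # assumes the dashboard depends on next, per its Next.js stack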

Launch — Demo Mode (SYSTEM_MODE=dummy)

Demo mode is the safest first run. The API serves 30 simulated alerts, 14 agents, 10 cases, and 13 CPS devices entirely from memory.

Open two terminals in the repo root.

Terminal 1 — API

source .venv/bin/activate
export SYSTEM_MODE=dummy \
LOCAL_AUTH_ENABLED=true \
REDIS_URL=redis://localhost:6379 \
MCP_HEALTH_PROBE_ENABLED=false \
API_URL=http://localhost:8001 \
NEXT_PUBLIC_API_URL=http://localhost:8001
python -m uvicorn aurorasoc.api.main:app --host 127.0.0.1 --port 8001

Terminal 2 — Dashboard

cd dashboard
API_URL=http://localhost:8001 \
NEXT_PUBLIC_API_URL=http://localhost:8001 \
PORT=3100 \
npm run dev

Or start both at once via the shipped VS Code task “Run AuroraSOC on 3100 and 8001” (Command Palette → Tasks: Run Task).

Verify

curl http://localhost:8001/health # → {"status":"healthy", ...}
curl http://localhost:8001/api/v1/alerts # → 30 alerts
curl http://localhost:8001/api/v1/agents # → 14 agents
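
For a machine-checkable count, pipe the response through Python. This sketch assumes the endpoint returns a bare JSON array; if the API wraps results in an object, adjust the expression accordingly:

curl -s http://localhost:8001/api/v1/alerts \
  | python3 -c "import json,sys; print(len(json.load(sys.stdin)))"   # expect 30 in dummy mode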

Open the dashboard at http://localhost:3100 and sign in:

Field     Value
Username  admin
Password  admin123!

You should see populated Alerts, Agents, Cases, and Dashboard views.


Launch — Dry-Run Mode (SYSTEM_MODE=dry_run)

Dry-run keeps the in-memory demo dataset but routes agent reasoning through your local Ollama. No playbooks fire and nothing is written to disk.

Make sure Ollama is running and the model is pulled:

ollama serve & # if not already running
ollama pull granite4:8b
source .venv/bin/activate
export SYSTEM_MODE=dry_run \
LOCAL_AUTH_ENABLED=true \
REDIS_URL=redis://localhost:6379 \
MCP_HEALTH_PROBE_ENABLED=false \
LLM_BACKEND=ollama \
OLLAMA_BASE_URL=http://localhost:11434 \
OLLAMA_MODEL=granite4:8b \
OLLAMA_ORCHESTRATOR_MODEL=granite4:8b \
API_URL=http://localhost:8001 \
NEXT_PUBLIC_API_URL=http://localhost:8001
python -m uvicorn aurorasoc.api.main:app --host 127.0.0.1 --port 8001
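
Before starting the API, you can confirm Ollama is reachable and the model is present; /api/tags is Ollama's standard model-listing endpoint:

curl -s http://localhost:11434/api/tags >/dev/null && echo "Ollama reachable on 11434"
ollama list | grep granite4 || echo "model missing; run: ollama pull granite4:8b"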

The shipped task “Run AuroraSOC dry-run on 3100 and 8001” wraps the same command.

Then start the dashboard exactly as in dummy mode and stream agent thoughts:

python scripts/stream_dry_run_events.py # optional, prints every BeeAI thought to stdout

Granite 3.2 8B fallback

If your machine cannot host Granite 4 (≈ 8 GB VRAM / 16 GB RAM), AuroraSOC officially supports granite3.2:8b as a single-model override. Pull and pin it:

ollama pull granite3.2:8b
export OLLAMA_MODEL=granite3.2:8b
export OLLAMA_ORCHESTRATOR_MODEL=granite3.2:8b
export GRANITE_SINGLE_MODEL_MODE=true
export GRANITE_USE_SHARED_MODEL_POOL=true
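
To verify the overrides took effect in the current shell before relaunching the API:

env | grep -E '^(OLLAMA_MODEL|OLLAMA_ORCHESTRATOR_MODEL|GRANITE_)'   # both model vars should read granite3.2:8b
ollama list | grep granite3.2   # confirms the model is pulled locally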

See Local Deployment for the full single-model rationale.


Stopping cleanly

Press Ctrl+C in each terminal. To force-kill orphaned processes still bound to the AuroraSOC ports (Linux):

fuser -k 8001/tcp 3100/tcp 2>/dev/null || true
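
fuser is Linux-specific. Equivalent sketches for the other platforms (OS tooling, not AuroraSOC commands):

# macOS
lsof -t -i tcp:8001 -i tcp:3100 | xargs kill 2>/dev/null || true
# Windows (PowerShell)
Get-NetTCPConnection -LocalPort 8001,3100 -ErrorAction SilentlyContinue |
  ForEach-Object { Stop-Process -Id $_.OwningProcess -Force }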

Troubleshooting

  • database_unavailable in API logs (PostgreSQL is not running, but you launched in dummy mode). Safe to ignore; dummy mode runs without persistent storage.
  • Dashboard loads but tables are empty (API URL mismatch). Confirm both API_URL and NEXT_PUBLIC_API_URL point at http://localhost:8001 in both terminals.
  • "Address already in use" on 8001 or 3100 (a previous run did not exit cleanly). Run the "Stopping cleanly" snippet above, then retry.
  • "OTLP trace export failed: localhost:4317" (no OpenTelemetry collector is running). Cosmetic only; set OTEL_SDK_DISABLED=true to silence it.
  • Connection refused to Redis (the Redis service is not started). Start Redis (see prerequisites) or unset REDIS_URL to disable the WebSocket event stream.
  • Granite 4 errors or OOM in dry-run (the model is too large for the available VRAM). Switch to the Granite 3.2 8B fallback.
  • Windows: npm run dev fails with EPERM or long-path errors (NTFS path-length limit). Enable LongPathsEnabled (see Windows setup), then reopen PowerShell.

For deeper diagnostics see FAQ and Common Issues.

Next steps