Run AuroraSOC Locally (Linux / macOS / Windows)
This guide walks through running AuroraSOC directly on your host — no Docker, no Podman — on Linux, macOS, or Windows. It is the safest path on a personal machine because every process is owned by your user, every port is explicit, and there are no shared volumes to clean up.
If you want the full containerised stack instead, see Quick Start and Stack Lifecycle.
- SYSTEM_MODE=dummy — in-memory demo data, no LLM calls, no persistent writes. Best for first-time UI verification on a personal machine.
- SYSTEM_MODE=dry_run — real LLM round-trips through your local Ollama; agents reason over alerts, but no destructive playbook actions execute.
- SYSTEM_MODE=production — full read/write against PostgreSQL plus live dispatch. Use only after you understand the database and approval workflows.
This page targets dummy and dry_run. They require no database and produce no side effects on your host.
Prerequisites (all platforms)
| Tool | Minimum version | Notes |
|---|---|---|
| Python | 3.12+ | python3 --version |
| Node.js | 22+ | node --version (npm ships with Node) |
| Git | any recent | git --version |
| Redis | 7.x (optional) | Only required for the WebSocket event stream. dummy mode degrades gracefully if Redis is unreachable. |
| Ollama | 0.4+ (optional) | Only needed for dry_run / production. Pull granite4:8b first. |
You do not need PostgreSQL, NATS, or any container runtime for this guide.
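Before you start, a quick toolchain check catches stale versions early. This is a convenience sketch, not a script shipped with the repo (on native Windows, substitute py -3.12 --version for python3):

python3 --version     # want 3.12+
node --version        # want 22+
git --version
redis-cli ping 2>/dev/null || echo "Redis not running (fine for dummy mode)"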
Suggested ports (used throughout this page):
| Port | Service |
|---|---|
| 8001 | FastAPI backend |
| 3100 | Next.js dashboard |
| 6379 | Redis (host default) |
| 11434 | Ollama (host default) |
The non-standard 8001 / 3100 choices avoid collisions with other dev tools that commonly bind 8000 / 3000.
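If you still suspect a collision, you can check whether anything already holds these ports before launching. A quick sketch; ss is Linux-only, lsof covers macOS:

# Linux
ss -ltn | grep -E ':(8001|3100|6379|11434)\b'
# macOS
lsof -nP -i :8001 -i :3100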
One-time setup
- Linux
- macOS
- Windows (PowerShell)
- Windows (WSL2)
# 1. Clone
git clone https://github.com/ahmeddwalid/AuroraSOC
cd AuroraSOC
# 2. Python virtual environment
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
# 3. Dashboard dependencies
( cd dashboard && npm install )
# 4. (Optional) Local services
sudo systemctl start redis # or: redis-server &
# Ollama (only for dry_run / production)
curl -fsSL https://ollama.com/install.sh | sh
ollama pull granite4:8b
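As a quick sanity check, confirm that both optional services answer (safe to skip in dummy mode):

redis-cli ping                           # → PONG
curl -s http://localhost:11434/api/tags  # lists pulled models; expect granite4:8b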
# 1. Clone
git clone https://github.com/ahmeddwalid/AuroraSOC
cd AuroraSOC
# 2. Python virtual environment
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"
# 3. Dashboard dependencies
( cd dashboard && npm install )
# 4. (Optional) Local services via Homebrew
brew install redis ollama
brew services start redis
ollama serve & # or: brew services start ollama
ollama pull granite4:8b
Apple Silicon Macs run Granite 4 on the Metal backend automatically — no extra flags required.
The native Windows path uses PowerShell 7+ with separate terminals for the API and the dashboard (the bash & / trap syntax does not translate). If you prefer a single terminal session, use the WSL2 tab below — the experience is identical to Linux.
# 1. Clone
git clone https://github.com/ahmeddwalid/AuroraSOC
cd AuroraSOC
# 2. Python virtual environment
py -3.12 -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -e ".[dev]"
# 3. Dashboard dependencies
Push-Location dashboard
npm install
Pop-Location
# 4. (Optional) Local services
# Redis: install via "Memurai" (https://www.memurai.com/) or run inside WSL2.
# Ollama: download the Windows installer from https://ollama.com/download
ollama pull granite4:8b
Some Next.js build artefacts exceed Windows' default 260-character path limit. Enable long paths once with an elevated PowerShell:
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem" `
-Name "LongPathsEnabled" -Value 1 -PropertyType DWORD -Force
Recommended for Windows users. Install Ubuntu 22.04+ via wsl --install -d Ubuntu, then follow the Linux tab inside the WSL shell. Access the dashboard from your Windows browser at http://localhost:3100 — WSL2 forwards the port automatically.
Launch — Demo Mode (SYSTEM_MODE=dummy)
Demo mode is the safest first run. The API serves 30 simulated alerts, 14 agents, 10 cases, and 13 CPS devices entirely from memory.
- Linux / macOS
- Windows (PowerShell)
Open two terminals in the repo root.
Terminal 1 — API
source .venv/bin/activate
export SYSTEM_MODE=dummy \
LOCAL_AUTH_ENABLED=true \
REDIS_URL=redis://localhost:6379 \
MCP_HEALTH_PROBE_ENABLED=false \
API_URL=http://localhost:8001 \
NEXT_PUBLIC_API_URL=http://localhost:8001
python -m uvicorn aurorasoc.api.main:app --host 127.0.0.1 --port 8001
Terminal 2 — Dashboard
cd dashboard
API_URL=http://localhost:8001 \
NEXT_PUBLIC_API_URL=http://localhost:8001 \
PORT=3100 \
npm run dev
Or start both at once via the shipped VS Code task “Run AuroraSOC on 3100 and 8001” (Command Palette → Tasks: Run Task).
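If you prefer a single terminal and don't use VS Code, a minimal wrapper along these lines also works. It is a sketch, not shipped with the repo; it mirrors the two-terminal commands above and stops the API when the dashboard exits:

#!/usr/bin/env bash
set -euo pipefail
source .venv/bin/activate
export SYSTEM_MODE=dummy LOCAL_AUTH_ENABLED=true \
       REDIS_URL=redis://localhost:6379 MCP_HEALTH_PROBE_ENABLED=false \
       API_URL=http://localhost:8001 NEXT_PUBLIC_API_URL=http://localhost:8001
python -m uvicorn aurorasoc.api.main:app --host 127.0.0.1 --port 8001 &
API_PID=$!
trap 'kill "$API_PID" 2>/dev/null' EXIT   # stop the API when this script exits
( cd dashboard && PORT=3100 npm run dev )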
Open two PowerShell windows in the repo root.
Terminal 1 — API
.\.venv\Scripts\Activate.ps1
$env:SYSTEM_MODE = "dummy"
$env:LOCAL_AUTH_ENABLED = "true"
$env:REDIS_URL = "redis://localhost:6379"
$env:MCP_HEALTH_PROBE_ENABLED = "false"
$env:API_URL = "http://localhost:8001"
$env:NEXT_PUBLIC_API_URL = "http://localhost:8001"
python -m uvicorn aurorasoc.api.main:app --host 127.0.0.1 --port 8001
Terminal 2 — Dashboard
cd dashboard
$env:API_URL = "http://localhost:8001"
$env:NEXT_PUBLIC_API_URL = "http://localhost:8001"
$env:PORT = "3100"
npm run dev
Verify
curl http://localhost:8001/health # → {"status":"healthy", ...}
curl http://localhost:8001/api/v1/alerts # → 30 alerts
curl http://localhost:8001/api/v1/agents # → 14 agents
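If you have jq installed, you can assert the demo counts directly. This assumes the endpoints return plain JSON arrays; adjust the filter if your build wraps them in an envelope:

curl -s http://localhost:8001/api/v1/alerts | jq length   # → 30
curl -s http://localhost:8001/api/v1/agents | jq length   # → 14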
Open the dashboard at http://localhost:3100 and sign in:
| Field | Value |
|---|---|
| Username | admin |
| Password | admin123! |
You should see populated Alerts, Agents, Cases, and Dashboard views.
Launch — Dry-Run Mode (SYSTEM_MODE=dry_run)
Dry-run keeps the in-memory demo dataset but routes agent reasoning through your local Ollama. No playbooks fire and nothing is written to disk.
Make sure Ollama is running and the model is pulled:
ollama serve & # if not already running
ollama pull granite4:8b
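Before launching the API, confirm the model is actually available locally:

ollama list | grep granite4   # the pull above should make this non-empty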
- Linux / macOS
- Windows (PowerShell)
source .venv/bin/activate
export SYSTEM_MODE=dry_run \
LOCAL_AUTH_ENABLED=true \
REDIS_URL=redis://localhost:6379 \
MCP_HEALTH_PROBE_ENABLED=false \
LLM_BACKEND=ollama \
OLLAMA_BASE_URL=http://localhost:11434 \
OLLAMA_MODEL=granite4:8b \
OLLAMA_ORCHESTRATOR_MODEL=granite4:8b \
API_URL=http://localhost:8001 \
NEXT_PUBLIC_API_URL=http://localhost:8001
python -m uvicorn aurorasoc.api.main:app --host 127.0.0.1 --port 8001
The shipped task “Run AuroraSOC dry-run on 3100 and 8001” wraps the same command.
.\.venv\Scripts\Activate.ps1
$env:SYSTEM_MODE = "dry_run"
$env:LOCAL_AUTH_ENABLED = "true"
$env:REDIS_URL = "redis://localhost:6379"
$env:MCP_HEALTH_PROBE_ENABLED = "false"
$env:LLM_BACKEND = "ollama"
$env:OLLAMA_BASE_URL = "http://localhost:11434"
$env:OLLAMA_MODEL = "granite4:8b"
$env:OLLAMA_ORCHESTRATOR_MODEL = "granite4:8b"
$env:API_URL = "http://localhost:8001"
$env:NEXT_PUBLIC_API_URL = "http://localhost:8001"
python -m uvicorn aurorasoc.api.main:app --host 127.0.0.1 --port 8001
Then start the dashboard exactly as in dummy mode and stream agent thoughts:
python scripts/stream_dry_run_events.py # optional, prints every BeeAI thought to stdout
Granite 3.2 8B fallback
If your machine cannot host Granite 4 (it needs roughly 8 GB of VRAM or 16 GB of system RAM), AuroraSOC officially supports granite3.2:8b as a single-model override. Pull and pin it:
ollama pull granite3.2:8b
export OLLAMA_MODEL=granite3.2:8b
export OLLAMA_ORCHESTRATOR_MODEL=granite3.2:8b
export GRANITE_SINGLE_MODEL_MODE=true
export GRANITE_USE_SHARED_MODEL_POOL=true
See Local Deployment for the full single-model rationale.
Stopping cleanly
- Linux / macOS
- Windows (PowerShell)
Ctrl+C in each terminal. To force-clean orphans bound to the AuroraSOC ports:
fuser -k 8001/tcp 3100/tcp 2>/dev/null || true             # Linux
lsof -ti :8001 -ti :3100 | xargs kill 2>/dev/null || true  # macOS (fuser's port syntax is Linux-only)
Ctrl+C in each terminal, then:
Get-NetTCPConnection -LocalPort 8001,3100 -ErrorAction SilentlyContinue |
Select-Object -ExpandProperty OwningProcess -Unique |
ForEach-Object { Stop-Process -Id $_ -Force }
Troubleshooting
| Symptom | Likely cause | Fix |
|---|---|---|
| database_unavailable in API logs | PostgreSQL not running, but you launched in dummy mode | Safe to ignore — dummy mode runs without persistent storage. |
| Dashboard loads but tables are empty | API URL mismatch | Confirm both API_URL and NEXT_PUBLIC_API_URL point at http://localhost:8001 in both terminals. |
| Address already in use on 8001 or 3100 | A previous run did not exit cleanly | Run the “Stopping cleanly” snippet above, then retry. |
| OTLP trace export failed: localhost:4317 | No OpenTelemetry collector running | Cosmetic only. Set OTEL_SDK_DISABLED=true to silence. |
| Connection refused to Redis | Redis service not started | Start Redis (see prerequisites) or unset REDIS_URL to disable the WebSocket event stream. |
| Granite 4 errors / OOM in dry-run | Model too large for available VRAM | Switch to the Granite 3.2 8B fallback. |
| Windows: npm run dev fails with EPERM/long path | NTFS path-length limit | Enable LongPathsEnabled (see Windows setup), reopen PowerShell. |
For deeper diagnostics see FAQ and Common Issues.
Next steps
- Walk through real investigations with Common Workflows.
- Bring up the agent fleet end-to-end with AI Agent Fleet Deployment.
- Promote your local instance to the full container stack via Stack Lifecycle.