Quick Start Guide
Get AuroraSOC running on your machine in under 10 minutes. This guide covers the fastest path to a working system, with the full infrastructure running in Docker and the API serving demo data.
If you are not sure where to begin for your role, read Learning Paths first.
Prerequisites
Before you begin, make sure you have:
- Podman 5.0+ and podman-compose (`sudo dnf install -y podman podman-compose` on Fedora), or Docker 24+ with Docker Compose v2
- Python 3.12+
- Node.js 22+ and npm (for the dashboard)
- Git
- At least 16 GB RAM (32 GB recommended for full agent deployment)
- GPU (optional, but recommended for faster LLM inference)
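To confirm the prerequisites quickly, you can check the installed versions from a terminal. These are standard CLI checks and assume nothing beyond the tools listed above:

```bash
# Verify tool versions against the prerequisites list
podman --version          # 5.0+ (or: docker --version, for 24+)
podman-compose --version  # or: docker compose version
python3 --version         # 3.12+
node --version            # 22+
git --version
free -h                   # total/available RAM
```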
You can run AuroraSOC in two modes:
- Demo Mode: API + Dashboard only (no agents, uses in-memory demo data) — needs just 4 GB RAM
- Full Mode: Agent fleet + infrastructure — needs 16-32 GB RAM + GPU
The Rust core remains optional in both modes and is enabled separately with `--profile rust-core`.
Choose Your Path
Use the path that matches your immediate goal:
Demo Mode (Fastest)
Use this when you want to evaluate the UI and workflows quickly.
- Run infrastructure from `docker-compose.dev.yml`
- Start the API and Dashboard locally
- Log in and explore alerts/cases/agents with seeded data
- Skip specialist agent deployment until you are ready

Full Mode (Agent Fleet)
Use this when you want end-to-end agent investigations and realistic SOC flows.
- Choose between the Ollama local-first path and the vLLM GPU stack
- Start the orchestrator and specialist agents with the correct MCP services
- Add `--profile rust-core` only if you want the Rust fast path
- Validate investigation dispatch and playbook execution
- Continue with AI Agent Fleet Deployment for the full step-by-step guide
If this is your first run, keep one terminal for infrastructure logs and one terminal for API logs. This makes troubleshooting much easier.
Step 1: Clone the Repository
git clone https://github.com/ahmeddwalid/AuroraSOC
cd AuroraSOC
Step 2: Create Local Environment
Initialize a host-run demo configuration that matches the lightweight dev stack:
make env-init
This creates `.env` with the settings needed for the fastest local path:
- `SYSTEM_MODE=dummy`
- `LOCAL_AUTH_ENABLED=true`
- `PG_HOST=localhost`
- `PG_USER=aurora`
- `PG_PASSWORD` aligned with `docker-compose.dev.yml`
- `PG_SSLMODE=disable`
- `REDIS_URL=redis://localhost:6379`
If your machine already has PostgreSQL, Redis, or NATS using the default dev ports, change `DEV_PG_PORT`, `DEV_REDIS_PORT`, or `DEV_NATS_PORT` in `.env` before starting `docker-compose.dev.yml`.
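For example, a minimal override might look like the following. The variable names come from `.env`; the port values are illustrative, so pick any free ports on your host:

```bash
# Illustrative alternate host ports; adjust to whatever is free on your machine
cat >> .env <<'EOF'
DEV_PG_PORT=55432
DEV_REDIS_PORT=6380
DEV_NATS_PORT=4223
EOF
```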
Step 3: Start Infrastructure Services
Start the development dependencies used by the default Python API path:
# Development infrastructure (lightweight)
podman compose -f docker-compose.dev.yml up -d
# If podman compose delegates to Docker Compose on your machine, use Docker directly
docker compose -f docker-compose.dev.yml up -d
# Verify the dev dependencies are healthy
podman compose -f docker-compose.dev.yml ps
# Or, when using Docker directly
docker compose -f docker-compose.dev.yml ps
This starts:
| Service | Port | Purpose |
|---|---|---|
| PostgreSQL 16 | 5432 | Primary database |
| Redis 7 | 6379 | Cache + event streams |
| NATS 2.10 | 4222 | Cross-site federation |
| Mosquitto | 1883/8883 | MQTT for IoT devices |
| Prometheus | 9090 | Metrics scraping |
| Grafana | 3001 | Dashboards |
| OpenTelemetry Collector | 4317/4318/8888 | Telemetry ingestion |
This lightweight dev stack does not start the agent fleet, vLLM, or the optional `rust-core` profile.
If you later want the Rust fast path, enable it from the main compose file:
podman compose --profile rust-core up -d
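Beyond the compose `ps` output, you can spot-check individual services against the ports in the table above. This sketch assumes the PostgreSQL and Redis client tools are installed on the host:

```bash
# Optional spot checks against the default dev ports
pg_isready -h localhost -p 5432            # PostgreSQL: "accepting connections"
redis-cli -p 6379 ping                     # Redis: expect PONG
curl -s http://localhost:9090/-/healthy    # Prometheus health endpoint
curl -s http://localhost:3001/api/health   # Grafana health endpoint
```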
Step 4: Install Python Dependencies
# Create and activate a virtual environment (recommended)
python3 -m venv .venv
source .venv/bin/activate
# Install AuroraSOC with development dependencies
pip install -e ".[dev]"
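To confirm the editable install landed in the virtual environment rather than a system Python, check the interpreter path and import the package (the `aurorasoc` import name matches the `uvicorn` target used later in this guide):

```bash
which python                               # should point into .venv
python -c "import aurorasoc; print('ok')"  # fails if the install went elsewhere
```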
Step 5: Run Database Migrations
alembic upgrade head
This creates all 11 database tables: alerts, cases, case_timeline, cps_devices, attestation_results, playbooks, playbook_executions, iocs, agent_audit_log, human_approvals, and reports.
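If you want to confirm the migration, you can list the tables with `psql`. The host and user below mirror the dev defaults from Step 2; the database name is an assumption, so substitute whatever your `.env` specifies:

```bash
# List tables after the migration; use your actual PG_* values from .env
PGPASSWORD="$PG_PASSWORD" psql -h localhost -U aurora -d aurora -c '\dt'
```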
Step 6: Start the API Server
uvicorn aurorasoc.api.main:app --host 0.0.0.0 --port 8000 --reload
The API starts with comprehensive demo data even without database population:
- 30 simulated security alerts across all severity levels
- 15 investigation cases in various stages
- 13 CPS/IoT devices with attestation status
- Full agent registry (orchestrator + specialists)
- 200 SIEM log entries
- 40 EDR endpoints
- 6 SOAR playbooks
- 20 IOCs (Indicators of Compromise)
Visit http://localhost:8000/docs to see the interactive API documentation (Swagger UI).
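You can also smoke-test the API from the command line. The system mode endpoint below is the same one used in the fallback section later in this guide:

```bash
# Confirm the API is up and report its current backend mode
curl -s http://localhost:8000/api/v1/system/mode
```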
Step 7: Start the Dashboard
In a new terminal:
cd dashboard
npm install
npm run dev
Open http://localhost:3000 in your browser. Log in with:
| Field | Value |
|---|---|
| Username | admin |
| Password | admin123! |
In development mode, AuroraSOC uses an in-memory user store with pre-configured accounts. See Authentication for production setup.
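If you prefer to verify authentication outside the browser, you can request a token directly from the endpoint referenced in Troubleshooting below. This is a sketch assuming a standard form-encoded login body; check the Swagger UI at `/docs` for the exact request shape:

```bash
# Request a token with the demo credentials (field names assumed; see /docs)
curl -s -X POST http://localhost:8000/api/v1/auth/token \
  -d 'username=admin' -d 'password=admin123!'
```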
Verified Fallback: Dummy Mode Without Compose
Use this exact fallback when the default local path is blocked by port conflicts, container runtime issues, or missing optional images.
Typical symptoms:
- `podman compose` cannot reach `podman.sock`
- ports `3000`, `5432`, or `8000` are already in use
- the dev compose stack fails while pulling optional support images
- `alembic` or `uvicorn` resolve to system binaries instead of `.venv`
This path was verified on a Linux host and runs the AuroraSOC API entirely in backend dummy mode with showcase data.
1. Install backend dependencies into the project virtual environment
Run these from the repository root:
python3 -m venv .venv
.venv/bin/pip install -e ".[dev]"
2. Start the API on alternate ports with persistent backends intentionally unavailable
export SYSTEM_MODE=dummy
export PG_HOST=127.0.0.1
export PG_PORT=55432
export REDIS_URL=redis://127.0.0.1:6388
export NATS_URL=nats://127.0.0.1:4223
export CORS_ORIGINS=http://localhost:3002,http://127.0.0.1:3002
.venv/bin/uvicorn aurorasoc.api.main:app --host 0.0.0.0 --port 8010
In this fallback mode, AuroraSOC intentionally fails over to showcase reads because the database is unavailable and SYSTEM_MODE=dummy is active.
You can verify the backend mode in another terminal:
curl -s http://localhost:8010/api/v1/system/mode
Expected response fields include:
"mode":"dummy""uses_showcase_data":true"read_data_source":"showcase"
3. Start the dashboard on an alternate frontend port
In a new terminal:
cd dashboard
API_URL=http://localhost:8010 NEXT_PUBLIC_API_URL=http://localhost:8010 npm run dev -- --port 3002
Open http://localhost:3002 in your browser.
Use these demo credentials if you are prompted to sign in:
| Field | Value |
|---|---|
| Username | admin |
| Password | admin123! |
4. Why both `API_URL` and `NEXT_PUBLIC_API_URL` matter
- `API_URL` drives the Next.js rewrite proxy for `/api/*` requests.
- `NEXT_PUBLIC_API_URL` is still used by dashboard client utilities such as WebSocket and direct browser-side API helpers.

If you only set `NEXT_PUBLIC_API_URL`, the login page can still proxy `/api/*` calls to the stale backend target from `dashboard/.env.local`.
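If you would rather persist the override than pass it inline, you can write both values into `dashboard/.env.local`; this replaces any stale target left there from a previous run:

```bash
# Point both variables at the fallback API port (overwrites dashboard/.env.local)
cat > dashboard/.env.local <<'EOF'
API_URL=http://localhost:8010
NEXT_PUBLIC_API_URL=http://localhost:8010
EOF
```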
5. When you can skip Alembic in this fallback
You do not need `alembic upgrade head` for this exact fallback path.
Because the API is intentionally started with an unavailable Postgres endpoint, AuroraSOC boots without persistent storage and serves showcase data instead.
Optional: Select an Operation Mode
AuroraSOC can run in three backend runtime modes:
- `dummy` — synthetic/showcase behavior, no mutations
- `dry_run` — live reads and analysis previews, no mutations
- `real` — full platform behavior with mutations enabled
You can switch modes from Settings in the dashboard (with the required permissions), or via the system mode API endpoint.
See Operation Modes for exact behavior and when to use each mode.
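As a sketch of the API route: the GET shown earlier in this guide reads the current mode, while the POST body and auth header below are assumptions about the mode-switch call, so confirm the exact request shape in the Swagger UI before relying on it:

```bash
# Read the current mode (endpoint confirmed earlier in this guide)
curl -s http://localhost:8000/api/v1/system/mode
# Switch modes (hypothetical request shape; verify against /docs first)
curl -s -X POST http://localhost:8000/api/v1/system/mode \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  -d '{"mode": "dry_run"}'
```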
Step 8 (Optional): Continue To The AI Agent Guide
If you want the orchestrator and specialist agents running with a real LLM backend, continue with AI Agent Fleet Deployment.
That guide covers:
- The Ollama local-first path for host-run agents and MCP services
- The vLLM GPU stack for the containerized deployment
- Validation, logs, and troubleshooting for the multi-agent runtime
Using the Makefile
AuroraSOC includes a comprehensive Makefile for common operations:
make help # Show all available commands
make install # Install Python dependencies
make test # Run test suite
make lint # Run linter
make prod-validate # Run env + DB + LLM preflight before a real deployment
make api # Start the API server
make dashboard-dev # Start the dashboard
make docker-up # Start default compose stack
make docker-down # Stop all Docker services
make migrate # Run database migrations
What's Next?
Now that you have AuroraSOC running, explore:
- Dashboard Overview — Learn to navigate the interface
- Alert Management — Handle security alerts
- Capabilities Demo Lab — Run the two-laptop Windows/Kali showcase safely
- Core Concepts — Understand how the AI agents work
- Architecture Overview — Deep dive into the system design
Validation Checklist
After setup, confirm each check below:
- API docs open at http://localhost:8000/docs
- Dashboard opens at http://localhost:3000
- Login succeeds with expected role
- Alerts page loads with seeded or live data
- Case creation and status update work
- At least one WebSocket stream is receiving updates
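The first two checks can be scripted; the rest need the browser:

```bash
# Scripted versions of the reachability checks above
curl -sf http://localhost:8000/docs > /dev/null && echo "API docs reachable"
curl -sf http://localhost:3000 > /dev/null && echo "Dashboard reachable"
```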
Troubleshooting
API fails to start with secret-related error
Cause: required environment variables are missing or weak.
Fix:
- Ensure `.env` exists and is loaded.
- Set strong values for `JWT_SECRET_KEY` and `API_SERVICE_KEY`.
- Restart the API process.
`alembic` fails with `ModuleNotFoundError: No module named 'pydantic'`
Cause: the shell is using the system `alembic` binary instead of the one in the project virtual environment.
Fix:
.venv/bin/alembic upgrade head
If you prefer activating the virtual environment first, verify the active binaries before retrying:
source .venv/bin/activate
which python
which pip
which alembic
Dashboard loads but cannot fetch data
Cause: API not reachable from dashboard runtime.
Fix:
- Confirm API is running at http://localhost:8000.
- Check browser network panel for `401`/`403`/`503` errors.
- Verify token was issued from `/api/v1/auth/token`.
Docker services start but remain unhealthy
Cause: local port conflict, disk pressure, or old containers.
Fix:
- Run `podman compose -f docker-compose.dev.yml ps` and inspect unhealthy services.
- Check service logs (`podman compose -f docker-compose.dev.yml logs <service>`).
- Stop conflicting local services or change ports.
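To identify which process holds a conflicting port, `ss` works on most Linux hosts (PostgreSQL's 5432 shown as an example):

```bash
# Show the listener (if any) bound to a conflicting port
sudo ss -ltnp | grep ':5432'
```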
FAQ
Can I run AuroraSOC without GPU?
Yes. GPU improves model inference speed, but Demo Mode and many workflows work without GPU.
Can I skip agents initially?
Yes. You can complete UI and workflow evaluation without starting specialist agents.
Where do I go next for operator playbooks?
See Dashboard Overview and SOAR Playbooks.