Getting Started for New Contributors
This page walks you through setting up a complete AuroraSOC development environment on your machine. By the end, the API will be running, the dashboard will be live, and you will be able to run the full test suite.
Prerequisites
Install these before starting:
| Tool | Minimum Version | What It Does | Install |
|---|---|---|---|
| Python | 3.12+ | Runs the backend, agents, and training scripts | python.org or `sudo apt install python3.12 python3.12-venv` |
| Rust | 1.75+ | Compiles the Rust core edge engine | `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs \| sh` |
| Node.js | 18+ | Runs the Next.js dashboard and Docusaurus docs | nodejs.org or `nvm install 18` |
| Docker & Docker Compose | 24+ / v2 | Runs PostgreSQL, Redis, NATS, Mosquitto, Grafana, etc. | docs.docker.com |
| Ollama | Latest | Serves the Granite 4 LLM locally | `curl -fsSL https://ollama.com/install.sh \| sh` |
| Git | 2.30+ | Version control | `sudo apt install git` |
| Make | Any | Runs Makefile targets (the main developer interface) | `sudo apt install build-essential` |
Run `make help` at any time to see every available target with a short description.
Step 1: Clone and Enter the Repository
git clone https://github.com/your-org/AuroraSOC.git
cd AuroraSOC
Step 2: Start Infrastructure Services
AuroraSOC depends on several services. Docker Compose starts them all with one command:
make docker-up
This starts:
| Service | Port | Purpose |
|---|---|---|
| PostgreSQL 16 | 5432 | Primary database |
| Redis 7 | 6379 | Cache, event streams, rate limiting |
| Qdrant | 6333 | Vector database for agent memory |
| NATS JetStream | 4222 | Cross-site event federation |
| Mosquitto | 1883/8883 | MQTT broker for IoT devices |
| Prometheus | 9090 | Metrics collection |
| Grafana | 3001 | Monitoring dashboards |
| OTel Collector | 4317 | Distributed trace collection |
To check everything is running:
docker compose ps
To view logs:
make docker-logs
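If you want to script this check, the JSON output mode of Compose v2 can be parsed directly. Below is a minimal sketch, assuming `docker compose ps --format json` emits one JSON object per service per line (as recent Compose v2 releases do); the service names shown are illustrative, not necessarily what this stack produces:

```python
import json


def not_running(ps_output: str) -> list[str]:
    """Return names of services whose State is not 'running'.

    Expects the JSON-lines output of `docker compose ps --format json`
    (one JSON object per service, Compose v2).
    """
    names = []
    for line in ps_output.splitlines():
        line = line.strip()
        if not line:
            continue
        svc = json.loads(line)
        if svc.get("State") != "running":
            names.append(svc.get("Name", "<unknown>"))
    return names


sample = (
    '{"Name": "aurorasoc-postgres-1", "State": "running"}\n'
    '{"Name": "aurorasoc-redis-1", "State": "exited"}\n'
)
print(not_running(sample))  # ['aurorasoc-redis-1']
```

An empty result means everything Compose knows about is up.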
Step 3: Install Python Dependencies
Create a virtual environment and install all dependencies (including dev and test extras):
python3 -m venv .venv
source .venv/bin/activate
make dev
This runs `pip install -e ".[dev,test]"`, which installs AuroraSOC in editable mode so your code changes take effect immediately.
Step 4: Pull the LLM Models
AuroraSOC uses IBM Granite 4 models served through Ollama:
make ollama-pull-granite
This pulls two models:
- `granite4:8b`: used by all 16 specialist agents
- `granite4:dense`: used by the Orchestrator for complex reasoning
Verify the models are available:
make ollama-list
If Ollama is not running, start it first:
make ollama-serve
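If you prefer to verify from code rather than via Make, Ollama exposes a `GET /api/tags` endpoint that lists installed models. A hedged sketch follows: the required model names and default port are taken from this guide, and the response shape assumed is Ollama's documented `{"models": [{"name": ...}]}`:

```python
import json
import urllib.request

REQUIRED = ["granite4:8b", "granite4:dense"]


def missing_models(tags: dict, required: list[str]) -> list[str]:
    """Return required model names absent from an Ollama /api/tags response."""
    present = {m.get("name", "") for m in tags.get("models", [])}
    return [name for name in required if name not in present]


def check_ollama(base_url: str = "http://localhost:11434") -> list[str]:
    """Fetch the model list from a running Ollama server and diff it."""
    with urllib.request.urlopen(f"{base_url}/api/tags", timeout=5) as resp:
        tags = json.load(resp)
    return missing_models(tags, REQUIRED)
```

`check_ollama()` returning an empty list is equivalent to `make ollama-list` showing both models.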
Step 5: Run Database Migrations
Apply the schema to PostgreSQL:
make migrate
This runs `alembic upgrade head`, which creates all 13 tables (alerts, cases, agents, playbooks, etc.). To check the current migration state:
alembic current
Step 6: Start the Backend API
make api
This starts FastAPI on http://localhost:8000 with hot-reload enabled. You should see:
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO: Started reloader process
Verify it's working:
curl http://localhost:8000/health
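Because the API can take a few seconds to boot, scripts that depend on it often poll the health endpoint rather than calling it once. Here is a small stdlib-only helper; the only assumption is that `/health` answers HTTP 200 once the server is ready:

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str, timeout_s: float = 30.0) -> bool:
    """Poll a health endpoint until it answers 200 or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # server not up yet; retry after a short pause
        time.sleep(0.5)
    return False
```

For example, `wait_for_health("http://localhost:8000/health")` blocks until the API is reachable or 30 seconds elapse.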
API Documentation
FastAPI auto-generates interactive API docs:
- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
Step 7: Start the Dashboard
In a second terminal:
make dashboard-install # First time only
make dashboard-dev
The Next.js dashboard starts on http://localhost:3000. It connects to the FastAPI backend at port 8000.
You can start both the API and dashboard with a single command:
make dev-all
This runs both processes and stops both when you press Ctrl+C.
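The underlying pattern (start both processes, stop both on interrupt) can be sketched in a few lines of Python if you ever need it outside Make. This is an illustration of the pattern, not the actual `dev-all` implementation:

```python
import subprocess


def run_together(commands: list[list[str]]) -> list[int]:
    """Start all commands, wait for them, and terminate every
    process if the user interrupts with Ctrl+C."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    try:
        return [p.wait() for p in procs]
    except KeyboardInterrupt:
        for p in procs:
            p.terminate()  # mirror `make dev-all` stopping both on Ctrl+C
        return [p.wait() for p in procs]
```

With this helper, `run_together([["make", "api"], ["make", "dashboard-dev"]])` would approximate what `make dev-all` does.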
Step 8: Run the Test Suite
make test
This runs all Python tests with pytest. For a coverage report:
make test-cov
The HTML coverage report is generated at `htmlcov/index.html`.
To run Rust tests as well:
make rust-test
To run all checks (lint + type-check + Python tests + Rust clippy + Rust tests + dashboard lint):
make check
Step 9: Verify Everything Works
Run through this checklist to confirm your setup is complete:
- [ ] `docker compose ps` shows all services as "Up"
- [ ] `curl http://localhost:8000/health` returns OK
- [ ] `make test` passes with no failures
- [ ] `make lint` reports no linting errors
- [ ] http://localhost:3000 loads the dashboard in a browser
- [ ] `ollama list` shows `granite4:8b` and `granite4:dense`
Automated Local Setup
For a fully automated first-time setup, run:
make setup-local
This script (`scripts/setup_local.sh`):
- Checks for all required tools (Python, Rust, Node.js, Docker, Ollama)
- Pulls Granite 4 models via Ollama
- Creates a default `.env` file if one is missing
- Verifies all dependencies
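The tool check in particular is easy to replicate from Python. Below is a sketch of the idea using `shutil.which`; the exact binary names the real script probes are an assumption based on the Prerequisites table:

```python
import shutil

# Binaries assumed from the Prerequisites table; the actual
# setup script may check a different or longer list.
REQUIRED_TOOLS = ["python3", "cargo", "node", "docker", "ollama", "git", "make"]


def missing_tools(tools: list[str]) -> list[str]:
    """Return the tools that are not found on PATH."""
    return [tool for tool in tools if shutil.which(tool) is None]
```

`missing_tools(REQUIRED_TOOLS)` returning an empty list means every prerequisite is installed and on your PATH.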
Common Makefile Targets Reference
Development
| Target | Description |
|---|---|
| `make install` | Install Python dependencies (production only) |
| `make dev` | Install with dev + test extras |
| `make api` | Start FastAPI backend (port 8000, hot-reload) |
| `make dev-all` | Start API + dashboard together |
| `make dashboard-dev` | Start Next.js dashboard (port 3000) |
| `make mcp` | Start the MCP Tool Registry server |
Quality
| Target | Description |
|---|---|
| `make test` | Run Python test suite |
| `make test-cov` | Run tests with coverage |
| `make lint` | Run ruff linter |
| `make format` | Auto-format code with ruff |
| `make type-check` | Run mypy type checking |
| `make check` | Run ALL checks (lint + type + test + Rust + dashboard) |
Rust
| Target | Description |
|---|---|
| `make rust-build` | Build Rust core (release mode) |
| `make rust-test` | Run Rust tests |
| `make rust-clippy` | Run Rust linter |
Database
| Target | Description |
|---|---|
| `make migrate` | Apply all pending migrations |
| `make migrate-new MSG="description"` | Create a new migration |
| `make migrate-down` | Roll back one migration |
Docker
| Target | Description |
|---|---|
| `make docker-up` | Start all infrastructure services |
| `make docker-down` | Stop all services |
| `make docker-logs` | Tail container logs |
| `make docker-build` | Rebuild all Docker images |
LLM / Training
| Target | Description |
|---|---|
| `make ollama-pull-granite` | Pull base Granite 4 models |
| `make ollama-serve` | Start Ollama server |
| `make train-data` | Prepare SOC training datasets |
| `make train` | Fine-tune Granite 4 (requires GPU) |
| `make train-agent AGENT=name` | Fine-tune for a specific agent |
| `make train-eval` | Evaluate fine-tuned model |
| `make enable-finetuned` | Switch agents to fine-tuned models |
| `make disable-finetuned` | Switch agents back to base models |
Cleanup
| Target | Description |
|---|---|
| `make clean` | Remove all build artifacts and caches |
Environment Variables
AuroraSOC is configured via environment variables, managed through `aurorasoc/config/settings.py` using Pydantic Settings. Key variables:
| Variable | Default | Description |
|---|---|---|
| `DATABASE_URL` | `postgresql+asyncpg://aurora:aurora@localhost:5432/aurorasoc` | PostgreSQL connection |
| `REDIS_URL` | `redis://localhost:6379` | Redis connection |
| `QDRANT_URL` | `http://localhost:6333` | Qdrant vector DB |
| `NATS_URL` | `nats://localhost:4222` | NATS JetStream |
| `MQTT_BROKER` | `localhost` | MQTT broker host |
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Ollama API |
| `JWT_SECRET_KEY` | (must set) | Secret for JWT token signing |
| `GRANITE_USE_FINETUNED` | `false` | Use fine-tuned vs base models |
You can set these in a `.env` file in the project root — the `make setup-local` script creates one for you.
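For illustration, the settings pattern looks roughly like this. The real class lives in `aurorasoc/config/settings.py` and uses Pydantic Settings; a plain dataclass is used here only to show the same default-plus-environment-override behavior without extra dependencies:

```python
import os
from dataclasses import dataclass, field


@dataclass
class Settings:
    """Illustrative stand-in for the Pydantic Settings class:
    each field falls back to the documented default unless the
    environment overrides it at instantiation time."""

    database_url: str = field(default_factory=lambda: os.environ.get(
        "DATABASE_URL",
        "postgresql+asyncpg://aurora:aurora@localhost:5432/aurorasoc"))
    redis_url: str = field(default_factory=lambda: os.environ.get(
        "REDIS_URL", "redis://localhost:6379"))
    ollama_base_url: str = field(default_factory=lambda: os.environ.get(
        "OLLAMA_BASE_URL", "http://localhost:11434"))
```

Values placed in `.env` are loaded into the environment before the settings object is built, which is why editing that file is enough to reconfigure the stack.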
Troubleshooting
"Connection refused" on port 5432/6379/6333
Infrastructure services are not running. Run make docker-up and wait 10–20 seconds for startup.
Ollama model not found
Run make ollama-pull-granite. If Ollama itself isn't installed, see the Prerequisites table above.
Import errors after pulling latest code
Re-install dependencies: make dev. If the database schema changed, also run make migrate.
Tests fail with database errors
Ensure PostgreSQL is running (docker compose ps) and migrations are applied (make migrate).
Dashboard shows "Network Error"
The FastAPI backend must be running (make api) on port 8000 before the dashboard can connect.
Rust build fails
Ensure Rust is installed (rustup --version). Then try make rust-build. If proto files are needed, ensure protoc is installed.
Next Steps
Your environment is ready. Now:
- Read the Contribution Guide to understand coding standards and the PR process.
- Pick a task: check the GitHub Issues board for issues tagged `good first issue`.
- Start coding: `make format` and `make lint` will keep your code clean.