Agentic AI in the SOC

AuroraSOC represents a fundamental shift from rule-based security automation to agentic AI security operations. This page explains what agentic AI means, why it matters for security, and how AuroraSOC implements it.

Traditional SOC vs. Agentic SOC

Key Differences

| Aspect | Traditional | Agentic AI |
|---|---|---|
| Analysis | Human reads alerts one by one | AI triages all alerts, prioritizes |
| Context | Analyst manually correlates data | Agent queries multiple sources autonomously |
| Decision | Human determines every action | Agent decides, human approves critical actions |
| Speed | Minutes to hours per alert | Seconds per alert |
| Scalability | Limited by analyst count | Handles thousands of concurrent alerts |
| Consistency | Varies by analyst skill and fatigue | Consistent methodology every time |
| Learning | Institutional knowledge in runbooks | Episodic memory from past investigations |

What Makes an Agent "Agentic"?

An agent is more than a chatbot. AuroraSOC's agents have four key properties:

1. Autonomy

Agents independently decide what actions to take. When given an alert, the Security Analyst agent doesn't just describe what it sees — it actively:

  • Queries the SIEM for related events
  • Extracts and enriches IOCs
  • Maps to MITRE ATT&CK techniques
  • Recommends response actions
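
The four steps above can be sketched as a minimal Python loop. Everything here is illustrative: `siem_query`, `enrich_ioc`, the ATT&CK heuristic, and the field names are placeholder stand-ins, not AuroraSOC's actual tool APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """Accumulates what the hypothetical analyst agent learns about one alert."""
    related_events: list = field(default_factory=list)
    iocs: list = field(default_factory=list)
    attack_techniques: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)

def triage(alert: dict, siem_query, enrich_ioc) -> Finding:
    """Autonomy in miniature: the agent drives every step itself instead of
    waiting for a human to request each query."""
    finding = Finding()
    # 1. Query the SIEM for events related to the alerting host
    finding.related_events = siem_query(host=alert["host"])
    # 2. Extract and enrich IOCs from the alert
    for ioc in alert.get("iocs", []):
        finding.iocs.append(enrich_ioc(ioc))
    # 3. Map observed behavior to MITRE ATT&CK (toy heuristic)
    if any(e.get("process") == "powershell.exe" for e in finding.related_events):
        finding.attack_techniques.append("T1059.001")
    # 4. Recommend response actions based on what was found
    if finding.attack_techniques:
        finding.recommendations.append(f"Isolate host {alert['host']}")
    return finding

# Stubbed data sources stand in for real SIEM and threat-intel tools
siem = lambda host: [{"process": "powershell.exe", "host": host}]
intel = lambda ioc: {"value": ioc, "reputation": "malicious"}
result = triage({"host": "ws-042", "iocs": ["198.51.100.7"]}, siem, intel)
print(result.recommendations)  # ['Isolate host ws-042']
```

The point of the sketch is the control flow: the agent, not the human, decides to query, enrich, map, and recommend.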

2. Tool Use

Agents interact with the real world through 31 MCP tools.
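
The tool surface can be pictured as a registry that pairs each tool's name and description (which the LLM reads when choosing an action) with a handler. This is a conceptual sketch, not the MCP wire protocol or BeeAI's actual tool abstraction.

```python
from typing import Callable, Dict

class ToolRegistry:
    """Minimal registry sketch: name + description (read by the LLM when it
    picks an action) + a handler function that does the work."""
    def __init__(self):
        self._tools: Dict[str, Callable] = {}
        self._descriptions: Dict[str, str] = {}

    def register(self, name: str, description: str):
        def decorator(fn: Callable) -> Callable:
            self._tools[name] = fn
            self._descriptions[name] = description
            return fn
        return decorator

    def call(self, name: str, **kwargs):
        return self._tools[name](**kwargs)

registry = ToolRegistry()

@registry.register("siem_search", "Search SIEM events by host or IOC")
def siem_search(query: str) -> list:
    return [{"event": "login_failure", "query": query}]  # stubbed SIEM hit

print(registry.call("siem_search", query="host:ws-042"))
```

In an MCP deployment the registry lives in a server process and the agent invokes tools over the protocol, but the name/description/handler triad is the same idea.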

3. Memory

Agents remember past investigations through a three-tier memory system:

  • Tier 1 (Sliding Window) — Recent conversation history (fast, ephemeral)
  • Tier 2 (Episodic Memory) — Past cases stored in PostgreSQL via pgvector embeddings (semantic recall)
  • Tier 3 (Threat Intelligence) — IOC knowledge base backed by PostgreSQL, pgvector similarity search, and Redis caching

This means an agent can say: "This pattern is similar to the APT29 campaign we investigated three weeks ago, where we found..."
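
Tier 2's semantic recall can be illustrated with a nearest-neighbor lookup over embeddings. In production the same ranking would run inside PostgreSQL via pgvector rather than a Python loop; the episode summaries and tiny three-dimensional vectors below are made up for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Tier 2 in miniature: past cases stored with embeddings.
# With pgvector this ranking is a SQL ORDER BY on a distance operator
# instead of an in-memory sort.
episodes = [
    {"summary": "APT29 spear-phishing campaign", "embedding": [0.9, 0.1, 0.0]},
    {"summary": "Commodity ransomware on file server", "embedding": [0.1, 0.9, 0.2]},
]

def recall(query_embedding, k=1):
    """Return the k past cases most similar to the current investigation."""
    ranked = sorted(episodes,
                    key=lambda e: cosine(query_embedding, e["embedding"]),
                    reverse=True)
    return ranked[:k]

print(recall([0.8, 0.2, 0.1])[0]["summary"])  # APT29 spear-phishing campaign
```

A query embedded near the APT29 episode recalls that case first, which is what lets the agent surface "we saw this three weeks ago."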

4. Collaboration

Agents work together through the A2A (Agent-to-Agent) protocol:

  • The Orchestrator decomposes complex tasks and delegates to specialists
  • Specialists can request help from other agents via handoff tools
  • Results are aggregated and synthesized into comprehensive reports
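
The decompose/delegate/synthesize flow above can be sketched as follows; the specialist names, the hard-coded subtasks, and the plain-function handoffs are illustrative stand-ins for real A2A messages.

```python
def orchestrate(task: str, specialists: dict) -> str:
    """Orchestrator pattern in miniature: decompose the task, delegate each
    subtask to a specialist agent, then synthesize one report."""
    # Decompose (hard-coded here; a real orchestrator plans this with the LLM)
    subtasks = {
        "triage": "Triage the originating alert",
        "threat_intel": "Enrich observed IOCs",
    }
    # Delegate: each call stands in for an A2A handoff to a specialist
    results = {name: specialists[name](sub) for name, sub in subtasks.items()}
    # Synthesize specialist outputs into a single report
    return "\n".join(f"[{name}] {out}" for name, out in results.items())

specialists = {
    "triage": lambda t: f"done: {t}",
    "threat_intel": lambda t: f"done: {t}",
}
report = orchestrate("Investigate alert #4711", specialists)
print(report)
```

The structure, not the stub logic, is the takeaway: the orchestrator never does specialist work itself, it only plans, routes, and aggregates.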

The BeeAI Framework

AuroraSOC is built on IBM's BeeAI framework, which provides:

  • RequirementAgent — Agent type with tool access and structured output
  • Tool abstraction — Standardized interface for agent-tool interaction
  • Memory interface — Pluggable memory backends
  • Middleware — Global trajectory tracking for agent reasoning
  • AgentWorkflow — Multi-step pipelines connecting agents

Why BeeAI?

BeeAI was chosen over alternatives (LangChain, CrewAI, AutoGen) because it provides:

  1. First-class A2A protocol support for inter-agent communication
  2. MCP (Model Context Protocol) tool integration
  3. Memory interface that supports custom multi-tier backends
  4. Production-grade middleware for observability and control
  5. Forced tool execution at specific steps (ThinkTool at step 1)
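
Point 5 (forced tool execution) can be illustrated conceptually: the runner rejects any trajectory whose first step is not the think tool, and records every step the way trajectory middleware would. BeeAI's real API differs; this only shows the idea.

```python
def run_agent(steps, tools, trajectory):
    """Conceptual sketch: enforce that step 1 calls the 'think' tool and
    record each step for observability. Not BeeAI's actual API."""
    for i, step in enumerate(steps):
        if i == 0 and step["tool"] != "think":
            raise ValueError("step 1 must call the think tool")
        output = tools[step["tool"]](step["input"])
        # Middleware concern: every step lands in the trajectory log
        trajectory.append({"step": i + 1, "tool": step["tool"], "output": output})
    return trajectory

tools = {"think": lambda x: f"thought about {x}", "act": lambda x: f"did {x}"}
steps = [{"tool": "think", "input": "the alert"},
         {"tool": "act", "input": "isolate host"}]
print(run_agent(steps, tools, [])[1]["output"])  # did isolate host
```

Forcing a reasoning step before any action keeps the agent from acting on an alert it has not analyzed.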

LLM Independence

AuroraSOC currently supports two backend families for runtime inference:

| Provider | Configuration |
|---|---|
| vLLM (Production) | `LLM_BACKEND=vllm`<br>`VLLM_BASE_URL=http://vllm:8000/v1` |
| Ollama (Local) | `LLM_BACKEND=ollama`<br>`OLLAMA_BASE_URL=http://ollama:11434` |
| vLLM Models | `VLLM_MODEL=granite-soc-specialist`<br>`VLLM_ORCHESTRATOR_MODEL=granite-soc-specialist` |
| Ollama Models | `OLLAMA_MODEL=granite4:8b`<br>`OLLAMA_ORCHESTRATOR_MODEL=granite4:dense` |

The agent behavior and capabilities remain the same across supported backends, though performance characteristics vary.
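
Backend selection then reduces to reading the variables in the table above. A sketch, where the defaults mirror the documented values but the function itself is hypothetical, not AuroraSOC code:

```python
import os

def resolve_llm_backend(env=None) -> dict:
    """Pick the inference backend from environment variables.
    Variable names match the configuration table; the fallback
    defaults here are illustrative."""
    env = os.environ if env is None else env
    backend = env.get("LLM_BACKEND", "ollama")
    if backend == "vllm":
        return {
            "backend": "vllm",
            "base_url": env.get("VLLM_BASE_URL", "http://vllm:8000/v1"),
            "model": env.get("VLLM_MODEL", "granite-soc-specialist"),
        }
    return {
        "backend": "ollama",
        "base_url": env.get("OLLAMA_BASE_URL", "http://ollama:11434"),
        "model": env.get("OLLAMA_MODEL", "granite4:8b"),
    }

print(resolve_llm_backend({"LLM_BACKEND": "vllm"})["base_url"])  # http://vllm:8000/v1
```

Because only the connection details change, the same agent code runs unmodified against either backend.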