Agentic AI in the SOC

AuroraSOC represents a fundamental shift from rule-based security automation to agentic AI security operations. This page explains what agentic AI means, why it matters for security, and how AuroraSOC implements it.

Traditional SOC vs. Agentic SOC

Key Differences

| Aspect | Traditional | Agentic AI |
| --- | --- | --- |
| Analysis | Human reads alerts one by one | AI triages and prioritizes all alerts |
| Context | Analyst manually correlates data | Agent queries multiple sources autonomously |
| Decision | Human determines every action | Agent decides; human approves critical actions |
| Speed | Minutes to hours per alert | Seconds per alert |
| Scalability | Limited by analyst count | Handles thousands of concurrent alerts |
| Consistency | Varies by analyst skill and fatigue | Consistent methodology every time |
| Learning | Institutional knowledge in runbooks | Episodic memory from past investigations |

What Makes an Agent "Agentic"?

An agent is more than a chatbot. AuroraSOC's agents have four key properties:

1. Autonomy

Agents independently decide what actions to take. When given an alert, the Security Analyst agent doesn't just describe what it sees — it actively:

  • Queries the SIEM for related events
  • Extracts and enriches IOCs
  • Maps to MITRE ATT&CK techniques
  • Recommends response actions
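
The four steps above can be sketched as a single triage pass. This is an illustrative sketch only: `siem_search` and `enrich_ioc` are hypothetical stand-ins for the real MCP tools, and the IOC extraction and ATT&CK mapping are deliberately naive placeholders, not AuroraSOC's actual logic.

```python
# Illustrative autonomy sketch: the agent chains SIEM lookup, IOC enrichment,
# ATT&CK mapping, and response recommendation without a human in the loop.
# siem_search / enrich_ioc are hypothetical stand-ins for the real MCP tools.
import re

def triage(alert_text, siem_search, enrich_ioc):
    """Run the four autonomy steps and return a structured verdict."""
    # 1. Query the SIEM for related events
    related = siem_search(alert_text)
    # 2. Extract and enrich IOCs (naive IPv4 extraction, for illustration only)
    iocs = {ip: enrich_ioc(ip)
            for ip in re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", alert_text)}
    # 3. Map to MITRE ATT&CK techniques (keyword heuristic, for illustration only)
    techniques = ["T1110"] if "brute force" in alert_text.lower() else []
    # 4. Recommend response actions based on the enrichment verdicts
    actions = [f"block_ip:{ip}" for ip, verdict in iocs.items() if verdict == "malicious"]
    return {"related": related, "iocs": iocs,
            "techniques": techniques, "actions": actions}
```

In the real system each numbered step is a tool call the agent chooses to make, not a hard-coded pipeline; the sketch only shows the shape of the output an autonomous pass produces.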

2. Tool Use

Agents interact with the real world through 31 MCP (Model Context Protocol) tools, each exposing a capability such as querying the SIEM or enriching an IOC.
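
The shape of that interaction can be sketched as a small registry that maps tool names and descriptions to callables, loosely modeled on how MCP exposes named tools to an agent. The names and signatures here are illustrative, not the actual MCP SDK or AuroraSOC's tool list.

```python
# Minimal tool-registry sketch, loosely modeled on MCP's named-tool pattern.
# Class and tool names are illustrative, not the actual MCP SDK.
class ToolRegistry:
    def __init__(self):
        self._tools = {}

    def register(self, name, description, fn):
        self._tools[name] = {"description": description, "fn": fn}

    def list_tools(self):
        # What the agent sees when deciding which tool to call
        return {name: t["description"] for name, t in self._tools.items()}

    def call(self, name, **kwargs):
        return self._tools[name]["fn"](**kwargs)

registry = ToolRegistry()
registry.register("siem_search", "Query the SIEM for related events",
                  lambda query: [{"event": "auth_failure", "query": query}])
```

The key design point is that the agent only ever sees names and descriptions; the LLM picks a tool by name and the framework dispatches the call.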

3. Memory

Agents remember past investigations through a three-tier memory system:

  • Tier 1 (Sliding Window) — Recent conversation history (fast, ephemeral)
  • Tier 2 (Episodic Memory) — Past cases stored in Qdrant vector database (semantic recall)
  • Tier 3 (Threat Intelligence) — IOC knowledge base with similarity search
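
A heavily simplified sketch of the three tiers is below. An in-memory list stands in for the Qdrant vector store, embeddings are plain tuples, and similarity is raw cosine; the real system uses proper embeddings and Qdrant's search API.

```python
# Simplified three-tier memory sketch. An in-memory list stands in for the
# Qdrant vector store; embeddings are plain tuples. Illustrative only.
from collections import deque
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class ThreeTierMemory:
    def __init__(self, window=5):
        self.window = deque(maxlen=window)   # Tier 1: sliding conversation window
        self.episodes = []                   # Tier 2: (embedding, case summary)
        self.threat_intel = {}               # Tier 3: IOC -> intel record

    def remember_turn(self, text):
        self.window.append(text)

    def store_case(self, embedding, summary):
        self.episodes.append((embedding, summary))

    def recall_similar(self, embedding, top_k=1):
        # Semantic recall: rank past cases by embedding similarity
        ranked = sorted(self.episodes,
                        key=lambda e: cosine(e[0], embedding), reverse=True)
        return [summary for _, summary in ranked[:top_k]]

mem = ThreeTierMemory()
mem.store_case((1.0, 0.0), "APT29 phishing campaign, credential theft")
mem.store_case((0.0, 1.0), "Cryptominer on build server")
```

Tier 2 is what makes the "similar to the APT29 campaign we investigated three weeks ago" recall possible: a new alert is embedded and matched against stored case summaries.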

This means an agent can say: "This pattern is similar to the APT29 campaign we investigated three weeks ago, where we found..."

4. Collaboration

Agents work together through the A2A (Agent-to-Agent) protocol:

  • The Orchestrator decomposes complex tasks and delegates to specialists
  • Specialists can request help from other agents via handoff tools
  • Results are aggregated and synthesized into comprehensive reports
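
The decompose/delegate/aggregate flow can be sketched as follows. Here specialists are plain functions and the "handoff" is a direct call; the real A2A protocol runs between separate agent processes, so this only illustrates the control flow, not the protocol itself.

```python
# Orchestrator-delegation sketch. Specialists are plain functions and the
# handoff is a direct call; the real A2A protocol is inter-process. The
# specialist names here are illustrative.
def security_analyst(task):
    return f"analysis: {task}"

def threat_hunter(task):
    return f"hunt results: {task}"

SPECIALISTS = {"analyze": security_analyst, "hunt": threat_hunter}

def orchestrator(alert):
    # 1. Decompose the complex task into subtasks for specialists
    subtasks = [("analyze", alert), ("hunt", alert)]
    # 2. Delegate each subtask and collect the results
    results = [SPECIALISTS[kind](task) for kind, task in subtasks]
    # 3. Aggregate into a single synthesized report
    return " | ".join(results)
```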

The BeeAI Framework

AuroraSOC is built on IBM's BeeAI framework, which provides:

  • RequirementAgent — Agent type with tool access and structured output
  • Tool abstraction — Standardized interface for agent-tool interaction
  • Memory interface — Pluggable memory backends
  • Middleware — Global trajectory tracking for agent reasoning
  • AgentWorkflow — Multi-step pipelines connecting agents
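
To make the middleware idea concrete, here is a generic trajectory-tracking wrapper in plain Python. This is not BeeAI's actual API, only the underlying pattern: every tool call an agent makes is recorded in a shared log so the reasoning trail can be inspected afterwards.

```python
# Generic trajectory-tracking middleware sketch (not BeeAI's actual API):
# wraps a tool callable and records every invocation in a shared log.
def with_trajectory(trajectory, name, fn):
    def wrapped(*args, **kwargs):
        result = fn(*args, **kwargs)
        trajectory.append({"tool": name, "args": args, "result": result})
        return result
    return wrapped

trajectory = []
search = with_trajectory(trajectory, "siem_search", lambda q: [q.upper()])
search("failed logins")
```
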

Why BeeAI?

BeeAI was chosen over alternatives (LangChain, CrewAI, AutoGen) because it provides:

  1. First-class A2A protocol support for inter-agent communication
  2. MCP (Model Context Protocol) tool integration
  3. Memory interface that supports custom multi-tier backends
  4. Production-grade middleware for observability and control
  5. Forced tool execution at specific steps (ThinkTool at step 1)

LLM Independence

AuroraSOC is designed to work with any LLM provider:

| Provider | Configuration |
| --- | --- |
| OpenAI GPT-4 | AURORA_LLM_MODEL=gpt-4o |
| Anthropic Claude | AURORA_LLM_MODEL=claude-sonnet-4-20250514 |
| Local Ollama | AURORA_LLM_BASE_URL=http://ollama:11434/v1 |
| Azure OpenAI | Use the Azure-specific base URL and API key |

The agent behavior and capabilities remain the same regardless of the underlying LLM, though performance may vary.
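
One way this provider switch typically works is by resolving the settings from the environment variables in the table above. The helper below is a sketch of that pattern, not AuroraSOC's actual configuration code; the default values are illustrative assumptions.

```python
# Sketch of environment-driven LLM provider selection. The variable names
# come from the configuration table; the defaults here are illustrative.
import os

def resolve_llm_settings(env=os.environ):
    return {
        "model": env.get("AURORA_LLM_MODEL", "gpt-4o"),
        "base_url": env.get("AURORA_LLM_BASE_URL", "https://api.openai.com/v1"),
    }
```

Swapping providers then means changing environment variables, not code: pointing AURORA_LLM_BASE_URL at a local Ollama instance, for example, leaves the agents untouched.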