# Agentic AI in the SOC
AuroraSOC represents a fundamental shift from rule-based security automation to agentic AI security operations. This page explains what agentic AI means, why it matters for security, and how AuroraSOC implements it.
## Traditional SOC vs. Agentic SOC
### Key Differences
| Aspect | Traditional | Agentic AI |
|---|---|---|
| Analysis | Human reads alerts one by one | AI triages and prioritizes all alerts |
| Context | Analyst manually correlates data | Agent queries multiple sources autonomously |
| Decision | Human determines every action | Agent decides, human approves critical actions |
| Speed | Minutes to hours per alert | Seconds per alert |
| Scalability | Limited by analyst count | Handles thousands of concurrent alerts |
| Consistency | Varies by analyst skill and fatigue | Consistent methodology every time |
| Learning | Institutional knowledge in runbooks | Episodic memory from past investigations |
## What Makes an Agent "Agentic"?
An agent is more than a chatbot. AuroraSOC's agents have four key properties:
### 1. Autonomy
Agents independently decide what actions to take. When given an alert, the Security Analyst agent doesn't just describe what it sees — it actively:
- Queries the SIEM for related events
- Extracts and enriches IOCs
- Maps to MITRE ATT&CK techniques
- Recommends response actions
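The loop above can be sketched in Python. Everything here is an illustrative stand-in, not AuroraSOC's implementation: the `Alert` shape, the stubbed SIEM query, and the keyword-based ATT&CK mapping are placeholders for what the agent decides via the LLM and real tools.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    id: str
    description: str
    iocs: list[str] = field(default_factory=list)

def triage(alert: Alert) -> dict:
    """Sketch of the Security Analyst agent's autonomous steps.
    Each step is a decision the agent makes itself, not a human."""
    findings: dict = {"alert_id": alert.id, "related_events": [],
                      "enriched_iocs": [], "attack_techniques": [],
                      "recommendations": []}
    # 1. Query the SIEM for related events (stubbed here)
    findings["related_events"] = [f"event-for-{ioc}" for ioc in alert.iocs]
    # 2. Extract and enrich IOCs (a real agent calls enrichment tools)
    findings["enriched_iocs"] = [{"ioc": i, "reputation": "unknown"}
                                 for i in alert.iocs]
    # 3. Map to MITRE ATT&CK (toy keyword mapping for illustration)
    if "powershell" in alert.description.lower():
        findings["attack_techniques"].append("T1059.001")
    # 4. Recommend response actions; critical ones still need human approval
    if findings["attack_techniques"]:
        findings["recommendations"].append("isolate host pending approval")
    return findings
```

The point of the sketch is the shape of autonomy: the agent runs all four steps without being told which one to do next.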
### 2. Tool Use
Agents interact with the real world through 31 MCP tools.
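A minimal sketch of what a tool looks like from the agent's side. The registry, the `mcp_tool` decorator, and the `siem_search` stub are hypothetical illustrations of the pattern (a name, a JSON-Schema-style input description, and a handler), not the real MCP wiring:

```python
from typing import Callable

# Hypothetical tool registry; MCP itself exposes tools over a protocol,
# but the per-tool shape is similar: name, description, input schema, handler.
TOOLS: dict[str, dict] = {}

def mcp_tool(name: str, description: str, input_schema: dict):
    def register(fn: Callable[..., dict]) -> Callable[..., dict]:
        TOOLS[name] = {"description": description,
                       "input_schema": input_schema, "handler": fn}
        return fn
    return register

@mcp_tool("siem_search", "Search SIEM events by query string",
          {"type": "object", "properties": {"query": {"type": "string"}}})
def siem_search(query: str) -> dict:
    # A real implementation would call the SIEM API; this stub echoes the query.
    return {"query": query, "hits": []}

def call_tool(name: str, **kwargs) -> dict:
    """What the agent runtime does when the LLM asks for a tool call."""
    return TOOLS[name]["handler"](**kwargs)
```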
### 3. Memory
Agents remember past investigations through a three-tier memory system:
- Tier 1 (Sliding Window) — Recent conversation history (fast, ephemeral)
- Tier 2 (Episodic Memory) — Past cases stored in Qdrant vector database (semantic recall)
- Tier 3 (Threat Intelligence) — IOC knowledge base with similarity search
This means an agent can say: "This pattern is similar to the APT29 campaign we investigated three weeks ago, where we found..."
### 4. Collaboration
Agents work together through the A2A (Agent-to-Agent) protocol:
- The Orchestrator decomposes complex tasks and delegates to specialists
- Specialists can request help from other agents via handoff tools
- Results are aggregated and synthesized into comprehensive reports
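The decompose-delegate-aggregate flow can be sketched in a few lines. The fixed decomposition and the lambda "specialists" are placeholders for the LLM-driven planning and the A2A handoffs; nothing here is the actual protocol.

```python
# Hypothetical specialists keyed by capability; in AuroraSOC these are
# separate agents reached over A2A, not local functions.
SPECIALISTS = {
    "ioc_enrichment": lambda task: f"enriched: {task}",
    "attack_mapping": lambda task: f"mapped: {task}",
}

def decompose(task: str) -> dict[str, str]:
    # A real Orchestrator lets the LLM plan the split; here it is fixed.
    return {"ioc_enrichment": task, "attack_mapping": task}

def orchestrate(task: str) -> str:
    """Delegate each subtask to a specialist, then synthesize one report."""
    subtasks = decompose(task)
    results = [SPECIALISTS[name](sub) for name, sub in subtasks.items()]
    return " | ".join(results)  # aggregation step
```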
## The BeeAI Framework
AuroraSOC is built on IBM's BeeAI framework, which provides:
- RequirementAgent — Agent type with tool access and structured output
- Tool abstraction — Standardized interface for agent-tool interaction
- Memory interface — Pluggable memory backends
- Middleware — Global trajectory tracking for agent reasoning
- AgentWorkflow — Multi-step pipelines connecting agents
BeeAI was chosen over alternatives (LangChain, CrewAI, AutoGen) because it provides:
- First-class A2A protocol support for inter-agent communication
- MCP (Model Context Protocol) tool integration
- Memory interface that supports custom multi-tier backends
- Production-grade middleware for observability and control
- Forced tool execution at specific steps (ThinkTool at step 1)
## LLM Independence
AuroraSOC is designed to work with any LLM provider:
| Provider | Configuration |
|---|---|
| OpenAI GPT-4 | AURORA_LLM_MODEL=gpt-4o |
| Anthropic Claude | AURORA_LLM_MODEL=claude-sonnet-4-20250514 |
| Local Ollama | AURORA_LLM_BASE_URL=http://ollama:11434/v1 |
| Azure OpenAI | Use Azure-specific base URL and API key |
The agent behavior and capabilities remain the same regardless of the underlying LLM, though performance may vary.
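A provider-agnostic setup can be as simple as resolving these variables once at startup. The variable names `AURORA_LLM_MODEL` and `AURORA_LLM_BASE_URL` come from the table above; the `AURORA_LLM_API_KEY` name and the defaults are assumptions for illustration.

```python
import os

def llm_settings() -> dict:
    """Resolve LLM provider settings from the environment.
    Swapping providers means changing env vars, not code."""
    return {
        "model": os.environ.get("AURORA_LLM_MODEL", "gpt-4o"),
        "base_url": os.environ.get("AURORA_LLM_BASE_URL",
                                   "https://api.openai.com/v1"),
        # Key variable name is an assumption for this sketch.
        "api_key": os.environ.get("AURORA_LLM_API_KEY", ""),
    }
```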