Tool Base Class
All 31 MCP tools in AuroraSOC extend the AuroraTool base class defined in aurorasoc/tools/base.py. This class bridges the BeeAI framework's tool interface with a simpler execution pattern.
Class Hierarchy

```
Tool (BeeAI framework)
└── AuroraTool (aurorasoc/tools/base.py)
    ├── SearchLogs
    ├── ...
    └── (31 concrete MCP tools in total)
```

AuroraTool Implementation
```python
import json

from beeai import Tool, ToolOutput


class AuroraTool(Tool):
    """Base class for all AuroraSOC tools.

    Bridges BeeAI's _run(input, options, context) interface
    to a simpler _execute(**kwargs) pattern.
    """

    async def _run(self, input: dict, options: dict, context: dict) -> ToolOutput:
        """BeeAI calls this method. We parse input and delegate."""
        try:
            # Parse input JSON to kwargs
            kwargs = self._parse_input(input)
            # Call subclass implementation
            result = await self._execute(**kwargs)
            # Wrap in ToolOutput
            return ToolOutput(
                result=json.dumps(result),
                metadata={"tool": self.name, "success": True},
            )
        except Exception as e:
            return ToolOutput(
                result=json.dumps({"error": str(e)}),
                metadata={"tool": self.name, "success": False},
            )

    async def _execute(self, **kwargs) -> dict:
        """Subclasses implement this."""
        raise NotImplementedError
```
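The `_run()` method above delegates parsing to `_parse_input()`, which is not shown in this excerpt. Below is a minimal sketch of what such a parser plausibly does, assuming it mirrors the `isinstance` check that appears in the `SearchLogs` "without" example later on; the actual implementation in `aurorasoc/tools/base.py` may differ.

```python
import json


def parse_tool_input(raw):
    """Normalize a tool input into a kwargs dict.

    Accepts either a JSON string or an already-parsed dict,
    mirroring the isinstance check in the SearchLogs example.
    (Hypothetical sketch, not AuroraSOC's actual _parse_input.)
    """
    data = json.loads(raw) if isinstance(raw, str) else raw
    if not isinstance(data, dict):
        raise ValueError(f"tool input must be an object, got {type(data).__name__}")
    return data


# Both forms yield the same kwargs dict.
assert parse_tool_input('{"query": "src_ip:10.0.0.1"}') == {"query": "src_ip:10.0.0.1"}
assert parse_tool_input({"query": "x"}) == {"query": "x"}
```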
Why This Abstraction?
Without AuroraTool:
```python
class SearchLogs(Tool):
    async def _run(self, input, options, context):
        # Every tool repeats: parse input, handle errors, format output
        try:
            data = json.loads(input) if isinstance(input, str) else input
            query = data.get("query", "")
            # ... actual logic ...
            return ToolOutput(result=json.dumps(result), metadata={...})
        except Exception as e:
            return ToolOutput(result=json.dumps({"error": str(e)}), metadata={...})
```
With AuroraTool:
```python
class SearchLogs(AuroraTool):
    async def _execute(self, query: str, time_range: str = "15m", source: str | None = None) -> dict:
        # Just the business logic
        return {"events": [...], "count": 47}
```
Benefits:
- DRY — Error handling, input parsing, and output formatting are implemented once
- Testable — Test `_execute()` directly without BeeAI framework setup
- Consistent — Every tool returns the same output format
- Typed — Kwargs provide clear parameter signatures
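The testability benefit is concrete: because `_execute()` is a plain coroutine, a tool's business logic can be exercised without any BeeAI setup. A sketch using a stand-in class (`SearchLogsStub` is hypothetical, for illustration only):

```python
import asyncio


class SearchLogsStub:
    """Stand-in for the SearchLogs example; _execute holds only business logic."""

    async def _execute(self, query: str, time_range: str = "15m") -> dict:
        return {"query": query, "time_range": time_range, "count": 0}


# Testing _execute() directly needs no framework wiring: just run the coroutine.
result = asyncio.run(SearchLogsStub()._execute("src_ip:10.0.0.1"))
assert result == {"query": "src_ip:10.0.0.1", "time_range": "15m", "count": 0}
```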
Input Schema Definition
Each tool defines its input schema for LLM tool-calling:
```python
class SearchLogs(AuroraTool):
    name = "search_logs"
    description = "Search SIEM logs by query string, time range, and source"
    input_schema = {
        "type": "object",
        "properties": {
            "query": {
                "type": "string",
                "description": "Search query (e.g., 'src_ip:192.168.1.100')",
            },
            "time_range": {
                "type": "string",
                "description": "Time window (e.g., '15m', '1h', '24h')",
                "default": "15m",
            },
            "source": {
                "type": "string",
                "description": "Filter by source system",
                "enum": ["wazuh", "suricata", "zeek", "velociraptor"],
            },
        },
        "required": ["query"],
    }
```
The LLM sees this schema and generates tool calls like:
```json
{"tool": "search_logs", "input": {"query": "src_ip:203.0.113.50", "time_range": "1h"}}
```
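Because `input_schema` is standard JSON Schema, required fields can also be checked before dispatching to `_execute()`. The helper below is a hand-rolled sketch for illustration; the source does not show how (or whether) AuroraSOC validates inputs itself.

```python
# Subset of the search_logs schema shown above.
search_logs_schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "time_range": {"type": "string", "default": "15m"},
        "source": {"type": "string"},
    },
    "required": ["query"],
}


def missing_required(payload: dict, schema: dict) -> list:
    """Return the names of required properties absent from payload.
    (Illustrative check only; a full validator would also test types.)"""
    return [key for key in schema.get("required", []) if key not in payload]


assert missing_required({"query": "src_ip:203.0.113.50"}, search_logs_schema) == []
assert missing_required({"time_range": "1h"}, search_logs_schema) == ["query"]
```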
MCP Sandbox Enforcement
AuroraSOC treats MCP tool access as a runtime security boundary, not only a prompt instruction.
At agent startup, aurorasoc/agents/mcp_agent_loader.py loads tools only from the domains listed in AGENT_MCP_BINDINGS. It then compares the remote MCP server's advertised tool names with the registered domain catalog from aurorasoc/tools/mcp_domain_registry.py. Tools outside the catalog are rejected, recorded in MCP health metadata as rejected_tool_names, and are not passed to the agent.
Every loaded MCP tool is wrapped by instrument_mcp_tool() in aurorasoc/tools/mcp_health.py. The wrapper records invocation metrics and denies execution if the agent is not bound to the tool's domain or if the tool is explicitly excluded for that agent. For example, NetworkAnalyzer can receive network and SIEM read tools, but block_ip remains excluded so ad-hoc analysis cannot become containment without the approved response path.
Denied invocations are persisted as MCP tool calls with status="denied", which lets the dashboard and audit trail distinguish policy enforcement from transport failures or tool errors.
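The binding check described above can be sketched as follows. The table contents and the `gate_tool_call` helper are illustrative assumptions; only the `AGENT_MCP_BINDINGS` name and the NetworkAnalyzer/`block_ip` example come from the source.

```python
# Hypothetical bindings: which domains an agent may use, and which
# tools are explicitly excluded even inside an allowed domain.
AGENT_MCP_BINDINGS = {
    "NetworkAnalyzer": {"domains": {"network", "siem_read"}, "excluded": {"block_ip"}},
}

# Hypothetical tool-to-domain catalog.
TOOL_DOMAINS = {"search_logs": "siem_read", "pcap_summary": "network", "block_ip": "response"}


def gate_tool_call(agent: str, tool: str) -> str:
    """Return 'allowed' or 'denied' based on the agent's domain bindings."""
    binding = AGENT_MCP_BINDINGS.get(agent)
    if binding is None:
        return "denied"          # unbound agents get nothing
    if tool in binding["excluded"]:
        return "denied"          # explicit per-agent exclusion wins
    if TOOL_DOMAINS.get(tool) not in binding["domains"]:
        return "denied"          # tool outside the agent's domains
    return "allowed"


assert gate_tool_call("NetworkAnalyzer", "search_logs") == "allowed"
assert gate_tool_call("NetworkAnalyzer", "block_ip") == "denied"
```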
Creating a New Tool
1. Create the Tool Class
```python
# aurorasoc/tools/my_domain/my_tool.py
from aurorasoc.tools.base import AuroraTool


class MyNewTool(AuroraTool):
    name = "my_new_tool"
    description = "Describe what this tool does for the LLM"
    input_schema = {
        "type": "object",
        "properties": {
            "param1": {
                "type": "string",
                "description": "What this parameter means",
            }
        },
        "required": ["param1"],
    }

    async def _execute(self, param1: str) -> dict:
        # Your tool logic here
        result = await some_external_api(param1)
        return {"status": "success", "data": result}
```
2. Register in Module __init__.py
```python
# aurorasoc/tools/my_domain/__init__.py
from .my_tool import MyNewTool

__all__ = ["MyNewTool"]
```
3. Add to Agent Factory
```python
# In the appropriate factory method
tools = [
    MyNewTool(),
    # ... other tools
]
```
4. Add to MCP Registry (Optional)
```python
# aurorasoc/tools/registry/server.py
from aurorasoc.tools.my_domain import MyNewTool

registry.register(MyNewTool())
```
Error Handling Strategy
Tools should return structured errors, not raise exceptions:
```python
async def _execute(self, hostname: str) -> dict:
    try:
        result = await isolate_host(hostname)
        return {"status": "isolated", "hostname": hostname}
    except HostNotFoundError:
        return {"error": "host_not_found", "hostname": hostname}
    except PermissionError:
        return {"error": "insufficient_permissions", "hostname": hostname}
```
The LLM can then reason about errors: "The host was not found. Let me verify the hostname..."
Always return a dict from `_execute()`. Include a `"status"` or `"error"` field so the LLM can distinguish success from failure without parsing natural language.
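One subtlety worth noting: under the base class shown earlier, a structured error dict returned from `_execute()` still produces `success: True` in the wrapper metadata, because only a raised exception takes the `except` path and flips it to `False`. The error signal therefore lives inside the result payload itself. The toy `wrap_execute` helper below (hypothetical, for illustration) traces both paths:

```python
import json


def wrap_execute(execute_result=None, raised=None):
    """Mimic the two outcomes of AuroraTool._run for a given _execute behavior:
    a returned dict is serialized with success=True; a raised exception is
    converted to an error payload with success=False. Illustrative only."""
    if raised is not None:
        return {"result": json.dumps({"error": str(raised)}), "success": False}
    return {"result": json.dumps(execute_result), "success": True}


# A structured error returned by _execute still reports success=True;
# the error signal lives inside the result payload.
out = wrap_execute({"error": "host_not_found", "hostname": "web-01"})
assert out["success"] is True
assert json.loads(out["result"])["error"] == "host_not_found"

# A raised exception is the only path that reports success=False.
out = wrap_execute(raised=RuntimeError("timeout"))
assert out["success"] is False
```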