Transform your SPADE agents into intelligent AI assistants with multi-provider LLM integration, advanced memory management, and enterprise-grade security
SpadeLLM is a production-ready extension for SPADE that integrates Large Language Models into multi-agent systems, enabling intelligent communication, reasoning, and decision-making capabilities across distributed agent networks.
Built with enterprise requirements in mind, SpadeLLM supports multiple LLM providers (OpenAI, Ollama, LM Studio), Model Context Protocol (MCP) integration for both local and HTTP streaming servers, advanced memory management, human-in-the-loop workflows, and comprehensive guardrails for safe AI deployment. Compatible with Python 3.10-3.12 and SPADE 3.3.0+.
Seamlessly integrate with OpenAI GPT-4, Ollama local models, and LM Studio. Switch providers dynamically or use multiple models simultaneously for different agent roles.
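For example, each provider is built through a factory method on LLMProvider. The create_openai call appears in the quick start below; the Ollama and LM Studio factories and the default local endpoints shown here are assumptions to be checked against the provider reference:

from spade_llm import LLMProvider

# Cloud model via OpenAI
openai_provider = LLMProvider.create_openai(
    api_key="your-api-key",
    model="gpt-4.1-mini"
)

# Local model via Ollama (assumed factory; default Ollama endpoint)
ollama_provider = LLMProvider.create_ollama(
    model="llama3.1:8b",
    base_url="http://localhost:11434/v1"
)

# Local model via LM Studio (assumed factory; default LM Studio endpoint)
lmstudio_provider = LLMProvider.create_lm_studio(
    model="local-model",
    base_url="http://localhost:1234/v1"
)

Because every LLMAgent receives its own provider, a router agent can run a small local model while a specialist agent calls a larger cloud model.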
Agents can execute custom tools and functions asynchronously, with JSON schema validation of their arguments. Full Model Context Protocol (MCP) support covers both local servers (stdio) and HTTP streaming servers, alongside LangChain tool integration and custom tool creation.
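As a sketch of a custom tool, the LLMTool name, schema layout, and tools= argument follow the tools tutorial; treat the exact export path and signatures as assumptions:

from spade_llm import LLMTool  # export path assumed

async def get_weather(city: str) -> str:
    # Stand-in for a real weather API call
    return f"Sunny, 22 °C in {city}"

weather_tool = LLMTool(
    name="get_weather",
    description="Get the current weather for a city",
    parameters={
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name"}
        },
        "required": ["city"],
    },
    func=get_weather,
)

# Attach at construction time: LLMAgent(..., tools=[weather_tool])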
Agent-based memory for shared knowledge across the system and thread-level memory for isolated conversation contexts. Smart context window management with configurable size limits and automatic cleanup.
Web-based interface for expert consultation and oversight. Configurable timeout mechanisms for human intervention in critical decisions, ensuring responsible AI deployment.
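A sketch of wiring in expert escalation; the HumanInTheLoopTool name and its parameters are assumptions based on the human-in-the-loop guide:

from spade_llm.tools import HumanInTheLoopTool  # assumed import path

human_tool = HumanInTheLoopTool(
    human_expert_jid="expert@example.com",  # expert reachable through the web interface
    timeout=300.0,                          # seconds to wait before the LLM proceeds alone
)

# Expose escalation to the model: LLMAgent(..., tools=[human_tool])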
Follow our comprehensive tutorials to master SpadeLLM, from your first agent to advanced multi-agent systems.
Learn how to create a basic SPADE-LLM agent step by step, starting with a simple setup and gradually adding features to understand core concepts.
Learn how to implement safety and content filtering in SPADE-LLM agents using comprehensive guardrail mechanisms.
Learn how to create and use custom tools with SPADE-LLM agents to perform actions beyond text generation.
Create sophisticated multi-agent systems with advanced features like MCP integration, human-in-the-loop workflows, and complex routing.
Explore all tutorials and guides in our comprehensive documentation.
Start with Your First Agent
Production-ready integration with leading AI providers and local serving frameworks:
Shared knowledge store across all agent instances. Perfect for maintaining global context, learned behaviors, and system-wide knowledge that persists across restarts and conversations.
Isolated conversation contexts for individual user sessions. Maintains conversation history, user preferences, and context-specific information without cross-contamination between threads.
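A configuration sketch contrasting the two scopes; the interaction_memory and agent_base_memory flags below are hypothetical placeholders for the options described in the memory guide:

from spade_llm import LLMAgent, LLMProvider

provider = LLMProvider.create_openai(api_key="your-api-key", model="gpt-4.1-mini")

# Thread-level memory: each conversation keeps its own isolated history
support_agent = LLMAgent(
    jid="support@example.com",
    password="password",
    provider=provider,
    interaction_memory=True,  # hypothetical flag for per-thread persistence
)

# Agent-based memory: one shared store that survives restarts
expert_agent = LLMAgent(
    jid="expert@example.com",
    password="password",
    provider=provider,
    agent_base_memory=True,  # hypothetical flag for the shared knowledge store
)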
Keyword-based filtering system for input and output validation. Configurable blocking policies prevent inappropriate content, protect sensitive information, and ensure compliance with content policies.
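A keyword-filter sketch, assuming the KeywordGuardrail and GuardrailAction names used in the safety tutorial; the constructor arguments shown are assumptions:

from spade_llm.guardrails import KeywordGuardrail, GuardrailAction  # assumed import path

leak_filter = KeywordGuardrail(
    name="sensitive_data_filter",
    blocked_keywords=["password", "api_key", "confidential"],
    action=GuardrailAction.BLOCK,
    blocked_message="I can't discuss that topic.",
)

# Apply on the input side: LLMAgent(..., input_guardrails=[leak_filter])
# A second list can validate model outputs: LLMAgent(..., output_guardrails=[...])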
Smart and fixed window sizing with automatic cleanup. Optimizes token usage while maintaining conversation coherence. Configurable limits prevent context overflow and manage costs effectively.
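A sketch of bounding the context window; the class name, import path, and options here are illustrative stand-ins for the window-sizing configuration described in the memory guide:

from spade_llm.context import SmartWindowSizeContext  # illustrative name and path

context = SmartWindowSizeContext(
    max_messages=20,     # keep at most the 20 most recent messages
    preserve_initial=4,  # illustrative option: always retain the opening exchange
)

# Hand the manager to the agent: LLMAgent(..., context_manager=context)

The quick start below puts the pieces together in a single runnable script.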
import spade
from spade_llm import LLMAgent, LLMProvider

async def main():
    # Configure provider
    provider = LLMProvider.create_openai(
        api_key="your-api-key",
        model="gpt-4.1-mini"
    )

    # Create intelligent agent
    agent = LLMAgent(
        jid="assistant@example.com",
        password="password",
        provider=provider,
        system_prompt="You are a helpful assistant"
    )

    await agent.start()

if __name__ == "__main__":
    spade.run(main())
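Once started, the agent answers any XMPP message sent to assistant@example.com, so another SPADE agent, or a chat client logged into the same XMPP server, can converse with it directly.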