SPADE-LLM
Build XMPP-based distributed multi-agent systems powered by Large Language Models. SPADE-LLM extends the SPADE multi-agent platform with support for many LLM providers, enabling distributed AI applications, intelligent chatbots, and collaborative agent systems.
Key Features
Built-in XMPP Server
No external server setup required with SPADE 4.0+. Get started instantly with zero configuration.
Multi-Provider Support
140+ providers via LiteLLM. Switch providers seamlessly.
Advanced Tool System
Function calling with async execution, human-in-the-loop workflows, and LangChain integration.
Context Management
Multi-conversation support with automatic cleanup and intelligent context window management.
Guardrails System
Content filtering and safety controls for input/output with customizable rules and policies.
MCP Integration
Model Context Protocol server support for external tool integration and service connectivity.
Coordinator Agents
LLM-driven coordinators orchestrate SPADE subagents with shared context, sequential planning, and inter-organization routing.
RAG System
Ground agent responses in your own documents. Load, chunk, and retrieve relevant context for more accurate answers.
Structured Outputs
Define a Pydantic model as the expected response format and get back strongly-typed structured data from any LLM.
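The structured-output feature can be pictured with a plain Pydantic model. The exact agent parameter used to request a response format is not shown here, so this sketch only demonstrates the typed-parsing half: validating an LLM's JSON reply into a strongly-typed object (the reply text and field names are hypothetical):

```python
from pydantic import BaseModel

# The shape we want the LLM's reply parsed into.
class CityFact(BaseModel):
    city: str
    population: int

# An LLM reply arriving as JSON text validates into a typed object.
raw_reply = '{"city": "Valencia", "population": 792492}'
fact = CityFact.model_validate_json(raw_reply)
print(fact.city, fact.population)
```

A malformed or mistyped reply raises a `ValidationError` instead of silently producing bad data, which is the point of requesting structured output.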
Architecture Overview
graph LR
A[LLMAgent] --> C[ContextManager]
A --> D[LLMProvider]
A --> E[LLMTool]
A --> G[Guardrails]
A --> M[Memory]
A --> S[Structured Outputs]
D --> F[LiteLLM]
F --> R[Anthropic/OpenAI-Compatible/OpenRouter/...]
G --> H[Input/Output Filtering]
E --> I[Human-in-the-Loop]
E --> J[MCP]
E --> P[CustomTool/LangchainTool]
E --> Q["RetrievalTool (for RAG)"]
J --> K[STDIO]
J --> L[HTTP Streaming]
M --> N[Agent Memory]
M --> O[Thread Memory]
Quick Start
import spade
from spade_llm import LLMAgent, LLMProvider

async def main():
    # First, start SPADE's built-in server:
    #   spade run
    provider = LLMProvider(
        model="gpt-5-nano",
        api_key="your-api-key",
    )
    agent = LLMAgent(
        jid="assistant@localhost",
        password="password",
        provider=provider,
        system_prompt="You are a helpful assistant",
    )
    await agent.start()

if __name__ == "__main__":
    spade.run(main())
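Tools extend the agent above with function calling. Per the architecture diagram the wrapper class is `LLMTool`, but its exact constructor signature is not shown in this page, so the sketch below only defines and runs a plain async callable of the kind an agent could expose (the weather data is hypothetical):

```python
import asyncio

# A plain async function that an agent could expose as a tool.
# Hypothetical data; a real tool would query an external service.
async def get_weather(city: str) -> str:
    forecasts = {"Valencia": "sunny, 24C"}
    return forecasts.get(city, "no data")

# Called directly here; inside SPADE-LLM the LLM decides when to call it
# and receives the return value as a tool result.
print(asyncio.run(get_weather("Valencia")))
```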
Documentation Structure
Getting Started
- Installation - Setup and requirements
- Quick Start - Basic usage examples
Core Guides
- Architecture - SPADE_LLM general structure
- Providers - LLM provider configuration
- Tools System - Function calling capabilities
- Memory System - Agent learning and conversation continuity
- Context Management - Context control and message management
- Conversations - Conversation lifecycle and management
- Guardrails - Content filtering and safety controls
- Message Routing - Conditional message routing
- RAG System - Retrieval-Augmented Generation for document-grounded answers
Reference
- API Reference - Complete API documentation
- Examples - Working code examples
Examples
Explore the examples directory for complete working examples:
- multi_provider_chat.py - Chat with different LLM providers
- ollama_with_tools.py - Local models with tool calling
- guardrails.py - Content filtering and safety controls
- langchain_tools.py - LangChain tool integration
- rag_system_ollama_chroma.py - Complete RAG system demonstration
- rag_vs_no_rag_demo.py - Comparison of responses with and without RAG
- valencia_multiagent_trip_planner.py - Multi-agent workflow
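The RAG examples all follow the same load → chunk → retrieve pattern. A minimal dependency-free sketch of that pipeline is below; the real examples use embeddings and a Chroma vector store, which are replaced here by word-overlap scoring purely for illustration:

```python
def chunk(text: str, size: int = 40) -> list[str]:
    # Split a loaded document into fixed-size character chunks.
    return [text[i:i + size] for i in range(0, len(text), size)]

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Rank chunks by word overlap with the query
    # (a crude stand-in for embedding similarity).
    qwords = set(query.lower().split())
    scored = sorted(
        chunks,
        key=lambda c: len(qwords & set(c.lower().split())),
        reverse=True,
    )
    return scored[:k]

doc = "SPADE agents talk over XMPP. RAG grounds answers in your documents."
top = retrieve("how does RAG ground answers", chunk(doc))
print(top)
```

The retrieved chunks are what a `RetrievalTool` would hand back to the LLM as grounding context before it answers.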