
SPADE-LLM

Build distributed XMPP-based multi-agent systems powered by Large Language Models. SPADE-LLM extends the SPADE multi-agent platform with support for many LLM providers, enabling distributed AI applications, intelligent chatbots, and collaborative agent systems.

Key Features

🔧

Built-in XMPP Server

No external server setup required with SPADE 4.0+. Get started instantly with zero configuration.

🧠

Multi-Provider Support

140+ providers via LiteLLM. Switch providers seamlessly.

Advanced Tool System

Function calling with async execution, human-in-the-loop workflows, and LangChain integration.
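The shape of a function-calling tool can be sketched in plain Python. The names and the registry layout below are illustrative only, not the SPADE-LLM `LLMTool` API; the JSON-Schema parameter description is the common format providers expect for tool definitions.

```python
import asyncio

# Hypothetical async tool; a real tool would call an external service here.
async def get_weather(city: str) -> str:
    return f"Sunny in {city}"

# JSON-Schema style parameter description, the usual shape
# for LLM function calling (names here are illustrative).
weather_tool = {
    "name": "get_weather",
    "description": "Return current weather for a city",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
    "func": get_weather,
}

async def dispatch(tool: dict, arguments: dict) -> str:
    """Execute a tool call the way an agent loop might."""
    return await tool["func"](**arguments)

print(asyncio.run(dispatch(weather_tool, {"city": "Valencia"})))
# Sunny in Valencia
```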

🎯

Context Management

Multi-conversation support with automatic cleanup and intelligent context window management.
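The window-management idea reduces to keeping the system prompt plus the most recent turns. This is a naive dependency-free sketch, not the SPADE-LLM `ContextManager` implementation:

```python
def trim_context(messages, max_messages=6):
    """Keep the system prompt plus the most recent turns --
    a minimal sketch of context-window management."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    return system + rest[-(max_messages - len(system)):]

history = [{"role": "system", "content": "You are helpful"}]
for i in range(10):
    history.append({"role": "user", "content": f"question {i}"})
    history.append({"role": "assistant", "content": f"answer {i}"})

trimmed = trim_context(history, max_messages=5)
print(len(trimmed))            # 5
print(trimmed[0]["role"])      # system
print(trimmed[-1]["content"])  # answer 9
```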

🛡️

Guardrails System

Content filtering and safety controls for input/output with customizable rules and policies.
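A guardrail is, at its simplest, a rule applied to input before it reaches the LLM (or to output before it reaches the user). The keyword rule below is an illustrative stand-in for SPADE-LLM's configurable guardrails, not its API:

```python
# Hypothetical blocklist for the sketch; real rules are configurable.
BLOCKED = {"password", "ssn"}

def input_guardrail(text: str):
    """Return (allowed, message): block input containing sensitive keywords."""
    lowered = text.lower()
    hits = sorted(w for w in BLOCKED if w in lowered)
    if hits:
        return False, f"Blocked: contains {', '.join(hits)}"
    return True, text

print(input_guardrail("What is the capital of Spain?"))   # allowed
print(input_guardrail("Here is my PASSWORD: hunter2"))    # blocked
```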

🌐

MCP Integration

Model Context Protocol server support for external tool integration and service connectivity.

🧩

Coordinator Agents

LLM-driven coordinators orchestrate SPADE subagents with shared context, sequential planning, and inter-organization routing.
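The sequential-planning idea can be sketched as a coordinator that routes one step at a time to named subagents, accumulating each result as shared context for the next step. The subagent names and registry are hypothetical, not the SPADE-LLM coordinator API:

```python
import asyncio

# Hypothetical subagents keyed by capability (illustrative names).
async def flights(task: str) -> str:
    return f"booked flights for {task}"

async def hotels(task: str) -> str:
    return f"reserved hotel for {task}"

SUBAGENTS = {"flights": flights, "hotels": hotels}

async def coordinate(plan, task):
    """Run a sequential plan, sharing each result as context for the next step."""
    context = []
    for step in plan:
        result = await SUBAGENTS[step](task)
        context.append(result)
    return context

print(asyncio.run(coordinate(["flights", "hotels"], "Valencia trip")))
```

In a real deployment each step would be an XMPP message exchange with another SPADE agent rather than a local coroutine call.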

📚

RAG System

Ground agent responses in your own documents. Load, chunk, and retrieve relevant context for more accurate answers.
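The load/chunk/retrieve pipeline can be sketched with overlapping character chunks and a word-overlap score standing in for the embedding similarity a real vector store would compute. This is illustrative only, not SPADE-LLM's retrieval implementation:

```python
def chunk(text, size=40, overlap=10):
    """Split text into overlapping character chunks (naive sketch)."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

def retrieve(query, chunks, k=1):
    """Rank chunks by word overlap with the query -- a stand-in for
    the vector similarity a real store would use."""
    q = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

doc = ("SPADE agents talk over XMPP. Tools let the LLM call functions. "
       "Guardrails filter unsafe content.")
chunks = chunk(doc, size=50, overlap=10)
print(retrieve("how do agents communicate", chunks))
```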

📐

Structured Outputs

Define a Pydantic model as the expected response format and get back strongly-typed structured data from any LLM.
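The validate-against-a-schema step can be sketched with a dataclass standing in for the Pydantic model (used here only to keep the sketch dependency-free); the field names are hypothetical:

```python
import json
from dataclasses import dataclass

@dataclass
class CityInfo:
    """Stand-in for the Pydantic response model."""
    name: str
    population: int

def parse_structured(raw: str) -> CityInfo:
    """Validate a JSON reply against the expected fields and types."""
    data = json.loads(raw)
    return CityInfo(name=str(data["name"]), population=int(data["population"]))

# An LLM constrained to this schema might return:
reply = '{"name": "Valencia", "population": 807000}'
info = parse_structured(reply)
print(info.name, info.population)  # Valencia 807000
```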

Architecture Overview

graph LR
    A[LLMAgent] --> C[ContextManager]
    A --> D[LLMProvider]
    A --> E[LLMTool]
    A --> G[Guardrails]
    A --> M[Memory]
    A --> S[Structured Outputs]
    D --> F[LiteLLM]
    F --> R[Anthropic/OpenAI-Compatible/OpenRouter/...]
    G --> H[Input/Output Filtering]
    E --> I[Human-in-the-Loop]
    E --> J[MCP]
    E --> P[CustomTool/LangchainTool]
    E --> Q["RetrievalTool (for RAG)"]
    J --> K[STDIO]
    J --> L[HTTP Streaming]
    M --> N[Agent Memory]
    M --> O[Thread Memory]

Quick Start

🐍 Basic Agent Setup
import spade
from spade_llm import LLMAgent, LLMProvider

async def main():
    # First, start SPADE's built-in server:
    # spade run

    provider = LLMProvider(
        model="gpt-5-nano",
        api_key="your-api-key",
    )

    agent = LLMAgent(
        jid="assistant@localhost",
        password="password",
        provider=provider,
        system_prompt="You are a helpful assistant"
    )

    await agent.start()

if __name__ == "__main__":
    spade.run(main())

Python 3.11+ · MIT License · Beta Release

Documentation Structure

Getting Started

Core Guides

Reference

Examples

Explore the examples directory for complete working examples:

  • multi_provider_chat.py - Chat with different LLM providers
  • ollama_with_tools.py - Local models with tool calling
  • guardrails.py - Content filtering and safety controls
  • langchain_tools.py - LangChain tool integration
  • rag_system_ollama_chroma.py - Complete RAG system demonstration
  • rag_vs_no_rag_demo.py - Comparison of responses with and without RAG
  • valencia_multiagent_trip_planner.py - Multi-agent workflow