
Installation

This page covers installing SPADE-LLM, setting up an XMPP server, and configuring an LLM provider.

Requirements

  • Python 3.11+
  • (recommended) uv installation for environment management

Install

# Install with uv (recommended)
uv pip install spade_llm

# Or install with pip
pip install spade_llm

# If working on a project with uv
uv add spade_llm

For optional features:

# RAG support with ChromaDB
pip install spade_llm[chroma]   # or: uv add spade_llm --extra chroma

# LangChain tool integration
pip install spade_llm[langchain]   # or: uv add spade_llm --extra langchain
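After installing, you can sanity-check that the core package and any optional extras are importable. A minimal sketch; the module names checked for the extras (`chromadb` for the chroma extra, `langchain` for the langchain extra) are assumptions based on the extra names above:

```python
from importlib.util import find_spec

def is_installed(module_name: str) -> bool:
    """Return True if the module can be imported in the current environment."""
    return find_spec(module_name) is not None

# Core package plus assumed module names for the optional extras
for name in ("spade_llm", "chromadb", "langchain"):
    status = "ok" if is_installed(name) else "missing"
    print(f"{name}: {status}")
```

A "missing" result for an extra just means you skipped that optional install; only `spade_llm` itself is required.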

XMPP Server Setup

SPADE 4.0+ includes a built-in XMPP server, so no external server setup is required.

# Start SPADE's built-in server (simplest setup)
spade run

The built-in server provides everything you need to run SPADE-LLM agents locally. Simply start the server in one terminal and run your agents in another.
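Before starting agents in the second terminal, you can confirm the server is actually accepting connections. A small sketch, assuming the server listens on the standard XMPP client port 5222:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True means something is listening there."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 5222 is the standard XMPP client port (assumed default for `spade run`)
if port_open("localhost", 5222):
    print("SPADE server is reachable")
else:
    print("Nothing listening on localhost:5222; start `spade run` first")
```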

Advanced XMPP Server Configuration

For custom setups, you can specify different ports and hosts:

# Custom host and ports
spade run --host localhost --client_port 6222 --server_port 6269

# Use IP address instead of localhost
spade run --host 127.0.0.1 --client_port 6222 --server_port 6269

# Custom ports if defaults are in use
spade run --client_port 6223 --server_port 6270
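If you are unsure which ports are free, you can ask the operating system for unused ones before launching the server. A sketch; binding to port 0 makes the OS pick an available port:

```python
import socket

def free_port() -> int:
    """Ask the OS for an unused TCP port by binding to port 0."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

client_port = free_port()
server_port = free_port()
print(f"spade run --client_port {client_port} --server_port {server_port}")
```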

Alternative: External XMPP Servers

For production use, or if you prefer an external server:

  • Public servers: jabber.at, jabber.org (require manual account creation)
  • Local Prosody: Install Prosody for local hosting
  • Other servers: Any XMPP-compliant server

LLM Provider Setup

Choose one provider:

OpenAI

export OPENAI_API_KEY="your-api-key"
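Since a missing key usually surfaces as a confusing provider error later, it can help to fail fast at startup. A minimal sketch (the helper name is illustrative, not part of SPADE-LLM):

```python
import os

def require_api_key(var: str = "OPENAI_API_KEY") -> str:
    """Read the key from the environment and fail early with a clear message."""
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(f"{var} is not set; export it before starting your agents")
    return key

# Usage: key = require_api_key()  # raises RuntimeError if the variable is unset
```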

Ollama (Local)

# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh

# Start the Ollama server (skip if the installer already runs it as a service)
ollama serve

# In another terminal, download a model
ollama pull llama3.1:8b

LM Studio (Local)

  1. Download LM Studio
  2. Download a model through the GUI
  3. Start the local server

RAG Dependencies (Optional)

For RAG (Retrieval-Augmented Generation) support, install with ChromaDB:

# Install with ChromaDB support
pip install spade_llm[chroma]

# Or with uv
uv add spade_llm --extra chroma

Prerequisites for RAG

Embedding Model - Choose one:

Ollama (recommended for local/private use):

# Pull embedding model
ollama pull nomic-embed-text

# Verify
curl http://localhost:11434/api/tags
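The curl check returns JSON listing the locally installed models. A small helper can confirm the embedding model is among them; the response shape used here (`{"models": [{"name": ...}]}`) reflects Ollama's `/api/tags` endpoint but should be treated as an assumption:

```python
import json

def model_available(tags_json: str, model_name: str) -> bool:
    """Check an Ollama /api/tags response for a model, ignoring the tag suffix."""
    models = json.loads(tags_json).get("models", [])
    return any(m.get("name", "").split(":")[0] == model_name.split(":")[0]
               for m in models)

# Example response shape (assumed) from http://localhost:11434/api/tags
sample = '{"models": [{"name": "nomic-embed-text:latest"}]}'
print(model_available(sample, "nomic-embed-text"))  # True
```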

OpenAI:

# Use with your OpenAI API key
export OPENAI_API_KEY="your-api-key"

# Models: text-embedding-3-small, text-embedding-3-large

Development Install

See the Development Guide for instructions on setting up a development environment, running tests, and contributing code.

Troubleshooting

SPADE server not starting:

# Check if default ports are already in use
netstat -an | grep 5222

# Try different ports if needed
spade run --client_port 6222 --server_port 6269
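If `netstat` is unavailable, a quick bind test tells you the same thing: a port that refuses to bind is already taken. A sketch; the defaults checked (5222 client, 5269 server) are the standard XMPP ports and are an assumption about `spade run`'s defaults:

```python
import socket

def in_use(port: int, host: str = "127.0.0.1") -> bool:
    """True if binding fails, i.e. something already occupies the port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

# Standard XMPP ports (assumed `spade run` defaults)
for port in (5222, 5269):
    print(f"port {port}: {'in use' if in_use(port) else 'free'}")
```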

Agent connection issues: Ensure the SPADE server is running before starting any agents:

# Terminal 1: Start server
spade run

# Terminal 2: Run your agent
python your_agent.py

SSL errors: When developing against the built-in server, disable SSL certificate verification:

agent = LLMAgent(..., verify_security=False)

Ollama connection: Check if Ollama is running:

curl http://localhost:11434/v1/models