AI Integration Guide¶
This guide provides an overview of AI integration in MAID, helping you configure and use AI features for dynamic NPCs, content generation, and intelligent game behaviors.
Table of Contents¶
- Overview
- Quick Start
- AI Providers
- NPC Dialogue
- Rate Limiting and Budgets
- Content Safety
- Custom AI Behaviors
- Best Practices
Overview¶
MAID integrates with Large Language Models (LLMs) to enable:
- Dynamic NPC Dialogue: NPCs with unique personalities that respond naturally to player input
- Content Generation: Procedurally generated descriptions, quests, and items
- Intelligent Behaviors: Context-aware NPC decisions and reactions
Supported Providers¶
| Provider | Best For | Requirements |
|---|---|---|
| Anthropic Claude | Production, quality responses | API key |
| OpenAI GPT | Alternative production option | API key |
| Ollama | Local development, offline use | Local server |
Cost Considerations¶
AI API usage incurs costs. Plan your integration accordingly:
| Usage Level | Monthly Estimate | Approach |
|---|---|---|
| Development | Free-$10 | Ollama locally or low API usage |
| Small Server | $10-50 | AI for key NPCs only |
| Medium Server | $50-200 | AI for important NPCs + rate limits |
| Large Server | $200+ | Full AI with aggressive budgets |
Quick Start¶
1. Configure API Key¶
Create a .env file in your project root:
# For Anthropic Claude (recommended)
MAID_AI_ANTHROPIC_API_KEY=sk-ant-api03-your-key-here
MAID_AI_DIALOGUE_ENABLED=true
# OR for OpenAI
MAID_AI_OPENAI_API_KEY=sk-your-key-here
MAID_AI_DEFAULT_PROVIDER=openai
MAID_AI_DIALOGUE_ENABLED=true
# OR for local Ollama (no API key needed)
MAID_AI_DEFAULT_PROVIDER=ollama
MAID_AI_OLLAMA_HOST=http://localhost:11434
MAID_AI_DIALOGUE_ENABLED=true
2. Install Provider Package¶
# For Anthropic
uv sync --extra anthropic
# For OpenAI
uv sync --extra openai
# For Ollama (no extra package needed)
uv sync
3. Create an AI-Enabled NPC¶
from maid_stdlib.components import DialogueComponent
# Add to any NPC entity
npc.add(DialogueComponent(
    ai_enabled=True,
    personality="A friendly tavern keeper who loves gossip.",
    speaking_style="Casual, uses local slang, laughs often.",
    npc_role="bartender",
    knowledge_domains=["local gossip", "drinks", "travelers"],
    greeting="Welcome to The Rusty Mug! What can I get ya?",
    farewell="Come back anytime, friend!",
))
4. Test the Integration¶
# Start the server
uv run maid server start --debug
# Connect and test
telnet localhost 4000
# After logging in:
> talk bartender Hello! What's the latest news?
AI Providers¶
Anthropic Claude (Recommended)¶
Claude excels at maintaining character consistency and following complex personality instructions.
Configuration:
MAID_AI_DEFAULT_PROVIDER=anthropic
MAID_AI_ANTHROPIC_API_KEY=sk-ant-api03-...
MAID_AI_ANTHROPIC_MODEL=claude-sonnet-4-20250514
Available Models:
| Model | Speed | Quality | Cost |
|---|---|---|---|
| claude-3-5-haiku | Fast | Good | Low |
| claude-sonnet-4 | Medium | Better | Medium |
| claude-opus-4 | Slow | Best | High |
OpenAI GPT¶
GPT models provide a solid alternative with broad general capabilities.
Configuration:
MAID_AI_DEFAULT_PROVIDER=openai
MAID_AI_OPENAI_API_KEY=sk-...
Available Models:
| Model | Speed | Quality | Cost |
|---|---|---|---|
| gpt-3.5-turbo | Fast | Basic | Low |
| gpt-4o-mini | Fast | Good | Low |
| gpt-4o | Medium | Best | Medium |
Ollama (Local)¶
Run models locally for free, ideal for development and offline use.
Setup:
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Start server
ollama serve
# Pull a model
ollama pull llama3.2
Configuration:
MAID_AI_DEFAULT_PROVIDER=ollama
MAID_AI_OLLAMA_HOST=http://localhost:11434
MAID_AI_OLLAMA_MODEL=llama3.2
Recommended Models:
| Model | Size | Quality | Use Case |
|---|---|---|---|
| phi3 | 2GB | Basic | Fast testing |
| llama3.2 | 2GB | Good | Development |
| mistral | 4GB | Better | Production |
| llama3.1 | 5GB | Best | Important NPCs |
Multi-Provider Setup¶
Configure multiple providers for flexibility:
# All providers configured
MAID_AI_ANTHROPIC_API_KEY=sk-ant-...
MAID_AI_OPENAI_API_KEY=sk-...
MAID_AI_OLLAMA_HOST=http://localhost:11434
# Default to Anthropic
MAID_AI_DEFAULT_PROVIDER=anthropic
Then assign providers per-NPC:
# Important NPC uses Claude
# Important NPC uses Claude
important_npc.add(DialogueComponent(
    ai_enabled=True,
    provider_name="anthropic",
    model_name="claude-sonnet-4-20250514",
    # ...
))
# Background NPC uses local Ollama
background_npc.add(DialogueComponent(
    ai_enabled=True,
    provider_name="ollama",
    model_name="phi3",
    # ...
))
NPC Dialogue¶
DialogueComponent Fields¶
DialogueComponent(
    # Core AI settings
    ai_enabled=True,
    provider_name=None,        # Uses default if None
    model_name=None,           # Uses provider default if None
    # Personality definition
    personality="...",         # Who the character is
    speaking_style="...",      # How they talk
    npc_role="...",            # Their job/function
    # Knowledge
    knowledge_domains=["topic1", "topic2"],
    secret_knowledge=["hidden info"],
    # Constraints
    will_discuss=["safe topics"],
    wont_discuss=["forbidden topics"],
    # Response settings
    max_response_tokens=150,   # 50-500
    temperature=0.7,           # 0.0-1.0
    # Fallbacks
    greeting="Hello!",
    farewell="Goodbye!",
    fallback_response="I don't understand.",
    # Rate limiting
    cooldown_seconds=2.0,
)
Personality Guidelines¶
Be Specific:
# Good - specific traits
personality=(
    "A gruff but kind-hearted blacksmith who takes pride in his craft. "
    "Quick to anger but forgives easily. Protective of apprentices. "
    "Distrusts magic users due to a childhood incident."
)
# Bad - vague
personality="A blacksmith who is nice."
Include Motivations:
personality=(
    "A nervous merchant who desperately wants to clear his family's debts. "
    "Will cut corners and bend rules if it means making a sale. "
    "Dreams of one day opening a shop in the capital city."
)
Speaking Style Examples¶
| Character Type | Style |
|---|---|
| Gruff Dwarf | "Short sentences. Mining metaphors. Says 'aye' not yes." |
| Noble | "Formal, never uses contractions. Subtle condescension." |
| Street Urchin | "Slang, incomplete sentences. Suspicious of authority." |
| Sage | "Measured, thoughtful. Speaks in questions. References texts." |
Temperature Guide¶
| Value | Result | Good For |
|---|---|---|
| 0.3-0.4 | Predictable, formal | Guards, officials |
| 0.5-0.6 | Consistent, natural | Merchants, craftsmen |
| 0.7-0.8 | Varied, engaging | Most NPCs |
| 0.9-1.0 | Creative, unpredictable | Mystics, tricksters |
For complete NPC dialogue documentation, see the NPC Dialogue Guide.
Rate Limiting and Budgets¶
Rate Limiting Layers¶
MAID implements multiple rate limiting layers:
- Global Rate Limit: Server-wide requests per minute
- Per-Player Rate Limit: Individual player limits
- Per-NPC Cooldown: Minimum time between NPC responses
Configuration:
# Global: 60 requests/minute for entire server
MAID_AI_DIALOGUE_GLOBAL_RATE_LIMIT_RPM=60
# Per-player: 10 requests/minute each
MAID_AI_DIALOGUE_PER_PLAYER_RATE_LIMIT_RPM=10
# Default NPC cooldown (can be overridden per-NPC)
MAID_AI_DIALOGUE_PER_NPC_COOLDOWN_SECONDS=2.0
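To see how the three layers compose, here is a minimal sliding-window sketch. This is illustrative only, not MAID's internal implementation; all names (`SlidingWindowLimiter`, `may_call_ai`) are hypothetical.

```python
class SlidingWindowLimiter:
    """Allow at most `limit` events in any 60-second window (sketch)."""

    def __init__(self, limit: int):
        self.limit = limit
        self.events: list[float] = []

    def allow(self, now: float) -> bool:
        # Drop events older than the window, then check remaining capacity.
        self.events = [t for t in self.events if now - t < 60.0]
        if len(self.events) >= self.limit:
            return False
        self.events.append(now)
        return True


def may_call_ai(global_lim, player_lim, last_npc_reply, cooldown, now):
    """A request reaches the AI only if every layer permits it."""
    if now - last_npc_reply < cooldown:
        return False                   # per-NPC cooldown
    if not player_lim.allow(now):
        return False                   # per-player RPM
    return global_lim.allow(now)       # global RPM
```

The cheapest check (the cooldown) runs first, so blocked requests never consume a player's or the server's rate allowance.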
Token Budgets¶
Control costs with daily token limits:
# Server-wide daily limit (null = unlimited)
MAID_AI_DIALOGUE_DAILY_TOKEN_BUDGET=100000
# Per-player daily limit
MAID_AI_DIALOGUE_PER_PLAYER_DAILY_BUDGET=5000
When budgets are exhausted:
- NPCs use fallback responses instead of AI
- Players see a message explaining the limit
- Budgets reset at midnight server time
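The exhaustion behavior described above can be sketched as a counter that rolls over on a date change. The `DailyBudget` class is a hypothetical illustration, not a MAID API:

```python
from datetime import date


class DailyBudget:
    """Illustrative daily token budget with midnight rollover."""

    def __init__(self, limit: int):
        self.limit = limit
        self.spent = 0
        self.day = date(2025, 1, 1)  # placeholder start day for the sketch

    def try_spend(self, tokens: int, today: date) -> bool:
        if today != self.day:
            # New day: the budget resets.
            self.day, self.spent = today, 0
        if self.spent + tokens > self.limit:
            return False  # caller serves a fallback response instead
        self.spent += tokens
        return True
```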
Cost Estimation¶
| NPC Response | Approximate Tokens |
|---|---|
| Short (50 tokens) | ~100-150 total |
| Medium (100 tokens) | ~200-300 total |
| Long (200 tokens) | ~400-600 total |
Rough cost per 1000 tokens (varies by provider/model):
- Claude Haiku: ~$0.001
- Claude Sonnet: ~$0.003
- GPT-4o-mini: ~$0.001
- GPT-4o: ~$0.005
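Combining the per-response token counts with the per-1K prices above gives a quick monthly estimate. The helper below is illustrative; the prices are the approximate figures from this section and vary by provider:

```python
def monthly_cost(responses_per_day: int, tokens_per_response: int,
                 price_per_1k_tokens: float, days: int = 30) -> float:
    """Rough monthly spend for one model's pricing tier."""
    total_tokens = responses_per_day * tokens_per_response * days
    return total_tokens / 1000 * price_per_1k_tokens


# 500 medium responses/day (~250 total tokens each) on Claude Sonnet (~$0.003/1K)
estimate = monthly_cost(500, 250, 0.003)  # ~$11/month
```

This lands such a server comfortably in the "Small Server" tier from the cost table above.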
Content Safety¶
Built-in Filtering¶
When content_filtering is enabled, NPCs are instructed to:
- Stay fully in character
- Never acknowledge being AI
- Refuse harmful or inappropriate requests
- Avoid real-world controversial topics
Configuration: content filtering is controlled by the AI dialogue settings; see the AI Configuration Reference for the exact variable names.
Custom Safety Rules¶
Add game-specific safety rules in the NPC configuration:
DialogueComponent(
    # ...
    wont_discuss=[
        "real-world events",
        "how to harm players",
        "exploits or cheats",
        "out-of-character topics",
    ],
)
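MAID enforces `wont_discuss` through the prompt, but a cheap keyword pre-filter can skip the AI call (and its token cost) entirely for obviously blocked topics. This `topic_blocked` helper is a hypothetical sketch, not part of MAID:

```python
def topic_blocked(message: str, wont_discuss: list[str]) -> bool:
    """Crude keyword pre-filter run before spending tokens on an AI call."""
    text = message.lower()
    return any(topic.lower() in text for topic in wont_discuss)
```

A keyword match is deliberately coarse: it catches blatant cases cheaply, while the prompt-level instructions remain the real safety layer.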
Conversation Logging¶
Conversation logging can be enabled for debugging and moderation; see the AI Configuration Reference for the relevant settings.
Warning: Only enable in development or with appropriate privacy policies.
Custom AI Behaviors¶
Programmatic AI Calls¶
Use the AI system directly for custom features:
from maid_engine.ai import get_registry, Message, CompletionOptions
async def generate_custom_content():
    registry = get_registry()
    provider = registry.get()
    messages = [
        Message.system("You generate fantasy item descriptions."),
        Message.user("Generate a description for a legendary sword."),
    ]
    result = await provider.complete(messages, CompletionOptions(
        max_tokens=200,
        temperature=0.8,
    ))
    return result.content
Procedural Content Generation¶
Generate rooms, items, or quests:
import json
from maid_engine.ai import get_registry, Message, CompletionOptions

async def generate_room_description(room_type: str, theme: str) -> dict:
    """Generate a room description using AI."""
    provider = get_registry().get()
    prompt = f"""Generate a room for a MUD game.
Type: {room_type}
Theme: {theme}
Respond in JSON format:
{{
"name": "Room name",
"description": "2-3 sentence description",
"exits": ["possible exits"],
"items": ["items that might be here"]
}}"""
    messages = [
        Message.system("You are a creative game content designer."),
        Message.user(prompt),
    ]
    result = await provider.complete(messages, CompletionOptions(
        temperature=0.8,
        max_tokens=300,
    ))
    return json.loads(result.content)
Context-Aware NPC Responses¶
Inject game context into AI prompts:
from maid_engine.ai import get_registry, Message

async def contextual_response(npc, player, message):
    """Generate NPC response with full context."""
    provider = get_registry().get()
    context = f"""
You are {npc.name}, {npc.personality}
Current situation:
- Location: {npc.room.name}
- Time of day: {world.time.period}
- Weather: {world.weather.current}
About the player:
- Name: {player.name}
- Level: {player.level}
- Reputation with you: {npc.get_reputation(player)}
- Current quest: {player.active_quest or 'None'}
Respond in character, keeping your response under 100 words.
"""
    messages = [
        Message.system(context),
        Message.user(message),
    ]
    result = await provider.complete(messages)
    return result.content
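Any provider call can fail or hang, so custom code should fall back the way the built-in dialogue system does. A minimal wrapper sketch, assuming the `provider.complete` interface used above (`safe_reply` and the timeout value are illustrative, not MAID APIs):

```python
import asyncio


async def safe_reply(provider, messages, fallback: str, timeout: float = 8.0) -> str:
    """Return the AI reply, or the scripted fallback on any error or timeout."""
    try:
        result = await asyncio.wait_for(provider.complete(messages), timeout)
        return result.content
    except Exception:
        return fallback
```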
Best Practices¶
Development Workflow¶
- Start with Ollama: Use local models during development
- Test with Claude/GPT: Verify with production providers before launch
- Set conservative limits: Start with low budgets, increase as needed
- Monitor usage: Track token consumption and costs
NPC Design¶
- Important NPCs: Use high-quality providers (Claude, GPT-4o)
- Background NPCs: Use cheaper/local models (Ollama, Haiku)
- High-traffic NPCs: Set longer cooldowns, shorter responses
- Quest NPCs: Combine scripted triggers with AI dialogue
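One way to apply the tiering above consistently is a small lookup that maps an NPC's importance to provider settings. The tier names and `dialogue_settings` helper are hypothetical; the provider/model names come from the tables earlier in this guide:

```python
# Hypothetical importance tiers -> (provider, model), per the tables above.
TIER_MODELS = {
    "important":  ("anthropic", "claude-sonnet-4-20250514"),
    "standard":   ("anthropic", "claude-3-5-haiku"),
    "background": ("ollama", "phi3"),
}


def dialogue_settings(tier: str) -> dict:
    """Kwargs for DialogueComponent; unknown tiers get the cheapest option."""
    provider, model = TIER_MODELS.get(tier, TIER_MODELS["background"])
    return {"ai_enabled": True, "provider_name": provider, "model_name": model}
```

Usage would look like `npc.add(DialogueComponent(**dialogue_settings("background"), ...))`, keeping model choices in one place.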
Performance¶
# Recommended production settings
MAID_AI_DIALOGUE_DEFAULT_MAX_TOKENS=100 # Keep responses short
MAID_AI_DIALOGUE_PER_NPC_COOLDOWN_SECONDS=3.0 # Prevent spam
MAID_AI_DIALOGUE_MAX_CONVERSATION_HISTORY=5 # Limit context
MAID_AI_DIALOGUE_CONVERSATION_TIMEOUT_MINUTES=15 # Clean up quickly
Cost Control¶
# Conservative budget settings
MAID_AI_DIALOGUE_DAILY_TOKEN_BUDGET=50000
MAID_AI_DIALOGUE_PER_PLAYER_DAILY_BUDGET=2000
MAID_AI_DIALOGUE_GLOBAL_RATE_LIMIT_RPM=30
MAID_AI_DIALOGUE_PER_PLAYER_RATE_LIMIT_RPM=5
Troubleshooting¶
Common Issues¶
NPC not responding with AI:
1. Check ai_enabled=True on DialogueComponent
2. Verify MAID_AI_DIALOGUE_ENABLED=true
3. Check API key is set correctly
4. Review server logs for errors
Responses too slow:
1. Use faster model (Haiku, gpt-3.5-turbo, phi3)
2. Reduce max_response_tokens
3. Disable context injection
Responses break character:
1. Lower temperature (try 0.5)
2. Make personality more specific
3. Add topics to wont_discuss
Rate limit errors:
1. Increase rate limits
2. Add NPC cooldowns
3. Use non-AI fallbacks for high-traffic NPCs
Related Documentation¶
- NPC Dialogue Guide - Complete NPC dialogue documentation
- AI Configuration Reference - All configuration options
- AI Provider Testing - Testing different providers
- Creating NPCs Guide - Full NPC creation guide