AI Integration

MAID provides built-in support for AI-powered features through multiple LLM providers.

Overview

AI integration in MAID enables:

  • NPC Dialogue - Natural conversations with AI-powered NPCs
  • AI Players - Autonomous agents that play the game as virtual players
  • Content Generation - Generate room descriptions, items, quests
  • Dynamic Storytelling - AI-driven narrative elements

Supported Providers

Anthropic (Claude)

The default and recommended provider:

MAID_AI__DEFAULT_PROVIDER=anthropic
MAID_AI__ANTHROPIC_API_KEY=sk-ant-...

OpenAI (GPT)

MAID_AI__DEFAULT_PROVIDER=openai
MAID_AI__OPENAI_API_KEY=sk-...

Ollama (Local LLMs)

For local, self-hosted models:

MAID_AI__DEFAULT_PROVIDER=ollama
MAID_AI__OLLAMA_HOST=http://localhost:11434
MAID_AI__OLLAMA_MODEL=llama2

NPC Dialogue System

NPCs can have AI-powered conversations:

from maid_stdlib.components.dialogue import DialogueComponent

npc.add(DialogueComponent(
    personality="A wise old wizard who speaks in riddles",
    speaking_style="cryptic and archaic",
    npc_role="quest_giver",
    knowledge_domains=["ancient magic", "the prophecy", "dragon lore"],
    will_discuss=["magic", "prophecy"],
    wont_discuss=["secret lair location"],
))

# Players can then talk to NPCs
# > talk wizard about the prophecy

Rate Limiting

Built-in rate limiting protects your API budget:

MAID_AI_DIALOGUE_GLOBAL_RATE_LIMIT_RPM=60
MAID_AI_DIALOGUE_PER_PLAYER_RATE_LIMIT_RPM=10
MAID_AI_DIALOGUE_DAILY_TOKEN_BUDGET=100000

Content Safety

The content filter ensures appropriate responses:

from maid_engine.ai.safety import ContentFilter

content_filter = ContentFilter(  # avoid shadowing the builtin "filter"
    enabled=True,
    block_profanity=True,
    block_harmful_content=True,
    block_real_world_info=False,  # Allow for some games
    custom_blocked_terms=["real-world-politics"],
)

Context Provider System

The ContextOrchestrator assembles LLM prompts from multiple context providers, each contributing relevant information within a shared token budget:

from maid_engine.ai.context_providers import ContextOrchestrator

orchestrator = ContextOrchestrator(provider_timeout_ms=40)
orchestrator.register_provider("world", WorldContextProvider(), priority=10, default_budget_pct=0.4)
orchestrator.register_provider("memory", MemoryContextProvider(), priority=20, default_budget_pct=0.3)
orchestrator.register_provider("relationships", RelationshipContextProvider(), priority=30, default_budget_pct=0.3)

sections = await orchestrator.gather_context(npc_id=npc.id, player_id=player.id, total_budget=4096)

Providers are called in priority order. If the token budget is exhausted or the timeout is reached, lower-priority providers are skipped.
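A custom provider fits this model by returning a context string trimmed to its share of the token budget. The interface below is an assumption for illustration; the real provider protocol in maid_engine may differ:

```python
# Illustrative custom context provider. The method name and signature are
# assumptions, not the engine's actual provider protocol.
class WeatherContextProvider:
    """Contributes current weather conditions to the NPC's prompt context."""

    async def gather(self, npc_id: str, player_id: str, token_budget: int) -> str:
        text = f"The weather near {npc_id} is a light drizzle."
        # Respect the budget; ~4 characters per token is a common heuristic.
        return text[: token_budget * 4]
```

Such a provider would be registered like the built-in ones, with a priority and a budget percentage.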

NPC Memory Extraction

Dialogue flows through an extraction pipeline that creates episodic memories:

  1. Player and NPC exchange dialogue
  2. The conversation is sent to the LLM for memory extraction
  3. Extracted facts are stored as episodic memories with salience scores
  4. Memories decay over time unless reinforced by subsequent interactions
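Step 4 can be modeled as exponential decay with reinforcement. The half-life and boost values below are illustrative assumptions, not MAID's actual memory model:

```python
import math

# Illustrative salience model: memories halve in salience every
# HALF_LIFE_HOURS unless a later interaction reinforces them.
HALF_LIFE_HOURS = 72.0

def decayed_salience(initial: float, hours_elapsed: float) -> float:
    """Exponential decay: salience halves every HALF_LIFE_HOURS."""
    return initial * math.pow(0.5, hours_elapsed / HALF_LIFE_HOURS)

def reinforce(salience: float, boost: float = 0.2) -> float:
    """A later interaction referencing the memory pushes salience back up,
    capped at 1.0."""
    return min(1.0, salience + boost)
```

Memories whose salience falls below a threshold can then be pruned or excluded from the context budget.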

AI Cost Tracking

AICostTracker monitors token usage and costs across all LLM calls:

  • Per-model pricing tables for accurate cost calculation
  • Daily and per-player token budgets with automatic enforcement
  • CompletionChunk and TokenUsage data classes for streaming instrumentation
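The core cost calculation is a lookup against a per-model pricing table. The rates and field names below are illustrative; MAID's actual TokenUsage fields and current provider pricing may differ:

```python
from dataclasses import dataclass

# Illustrative pricing table: (input $/1M tokens, output $/1M tokens).
# These numbers are placeholders, not current provider rates.
PRICE_PER_MTOK = {
    "claude-sonnet": (3.00, 15.00),
    "gpt-4o-mini": (0.15, 0.60),
}

@dataclass
class TokenUsage:
    model: str
    input_tokens: int
    output_tokens: int

def call_cost_usd(usage: TokenUsage) -> float:
    """Compute the dollar cost of one LLM call from its token counts."""
    in_rate, out_rate = PRICE_PER_MTOK[usage.model]
    return (usage.input_tokens * in_rate
            + usage.output_tokens * out_rate) / 1_000_000
```

Summing these per-call costs over a day gives the figure checked against the daily token budget.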

Quest Narrative Generation

The QuestNarrativeGenerator creates quest text from structured quest seeds:

  • Uses LLM to generate narrative descriptions, dialogue, and objective text
  • Falls back to template-based generation when the LLM is unavailable or budget is exhausted
  • Integrates with NPC knowledge graphs to ensure narrative consistency
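The LLM-with-template-fallback pattern can be sketched as follows; the function names, seed keys, and templates are assumptions for illustration, not the QuestNarrativeGenerator's actual API:

```python
import random

# Illustrative quest templates keyed by seed fields; placeholders only.
TEMPLATES = [
    "{giver} asks you to recover the {item} from {place}.",
    "Rumors say the {item} lies in {place}; {giver} will reward its return.",
]

def generate_quest_text(seed: dict, llm_generate=None) -> str:
    """Try the LLM first; fall back to templates if it is unavailable
    or raises (e.g. budget exhausted). seed holds structured quest data."""
    if llm_generate is not None:
        try:
            return llm_generate(seed)
        except Exception:
            pass  # LLM failed: fall through to template-based generation
    return random.choice(TEMPLATES).format(**seed)
```

Because the templates consume the same structured seed, the fallback text stays consistent with the quest's objectives even when no LLM call is made.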

NPC Bark System

Note

The NPC Bark System is a maid-classic-rpg feature, not part of the core engine or stdlib.

NPCs emit ambient "barks" — short flavor lines triggered by world events — without requiring LLM calls:

  • Barks are selected from pre-authored pools based on NPC personality, mood, and context
  • Provides atmosphere and liveliness at zero AI cost
