Testing Guide for Contributors¶
This guide covers testing requirements and practices for MAID contributions.
Running Tests¶
All Tests¶
# Run all package tests
uv run pytest packages/
# Run with verbose output
uv run pytest packages/ -v
# Run with coverage report
uv run pytest packages/ --cov=packages --cov-report=html
Specific Packages¶
# maid-engine tests
uv run pytest packages/maid-engine/tests/
# maid-stdlib tests
uv run pytest packages/maid-stdlib/tests/
# maid-classic-rpg tests
uv run pytest packages/maid-classic-rpg/tests/
Specific Tests¶
# Run a specific test file
uv run pytest packages/maid-engine/tests/test_world.py
# Run a specific test function
uv run pytest packages/maid-engine/tests/test_world.py::test_create_entity
# Run tests matching a pattern
uv run pytest packages/ -k "test_entity"
Test Requirements¶
Coverage Expectations¶
- New code should have > 80% test coverage
- Critical paths (auth, persistence, networking) should have > 90% coverage
- All public APIs must have tests
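To check the threshold locally, pytest-cov's --cov-fail-under option can be combined with the coverage command shown above (the 80% value here simply mirrors the expectation for new code; adjust as appropriate for the package you are touching):
# Fail the run if overall coverage drops below 80%
uv run pytest packages/ --cov=packages --cov-fail-under=80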
What to Test¶
- Unit tests for individual functions and methods
- Integration tests for component interactions
- Edge cases and error conditions
- Async behavior using pytest-asyncio
Writing Tests¶
Test Structure¶
Place tests in the tests/ directory of each package:
packages/maid-engine/
├── src/maid_engine/
│   └── core/
│       └── world.py
└── tests/
    ├── conftest.py       # Shared fixtures
    └── core/
        └── test_world.py
Pytest Configuration¶
Tests use pytest-asyncio for async support. Mark async test functions with @pytest.mark.asyncio, as shown in the examples below.
Basic Test Example¶
# tests/test_example.py
import pytest
from uuid import uuid4
from maid_engine.core.world import World


def test_world_creation():
    """Test that a World can be created."""
    world = World()
    assert world is not None


@pytest.mark.asyncio
async def test_entity_creation(world):
    """Test entity creation with fixture."""
    entity = world.entities.create()
    assert entity.id is not None
    assert world.entities.get(entity.id) is entity
Using Fixtures¶
Define shared fixtures in conftest.py:
# tests/conftest.py
import pytest
from maid_engine.core.world import World


@pytest.fixture
def world():
    """Create a fresh World for each test."""
    return World()


@pytest.fixture
def player_entity(world):
    """Create a player entity."""
    entity = world.entities.create()
    entity.add_tag("player")
    return entity
Testing Async Code¶
@pytest.mark.asyncio
async def test_async_operation(world):
    """Test an async operation."""
    result = await world.some_async_method()
    assert result is not None


@pytest.mark.asyncio
async def test_event_handling(world):
    """Test event handlers are called."""
    received = []

    async def handler(event):
        received.append(event)

    world.events.subscribe(SomeEvent, handler)
    await world.events.emit(SomeEvent())
    assert len(received) == 1
Testing Error Cases¶
def test_invalid_input_raises():
    """Test that invalid input raises appropriate error."""
    with pytest.raises(ValueError, match="must be positive"):
        HealthComponent(current=-10)


@pytest.mark.asyncio
async def test_not_found_returns_none(world):
    """Test that missing entity returns None."""
    result = world.entities.get(uuid4())
    assert result is None
Mocking¶
Use pytest-mock or unittest.mock:
from unittest.mock import AsyncMock, MagicMock


@pytest.mark.asyncio
async def test_with_mock(world, mocker):
    """Test with mocked dependency."""
    mock_store = AsyncMock()
    mock_store.get.return_value = {"name": "Test"}
    mocker.patch.object(world, "document_store", mock_store)

    result = await world.load_data("test_id")

    mock_store.get.assert_called_once_with("test_id")
Test Patterns¶
Testing Components¶
def test_component_defaults():
    """Test component has correct defaults."""
    health = HealthComponent()
    assert health.current == 100
    assert health.maximum == 100


def test_component_validation():
    """Test component validates input."""
    with pytest.raises(ValueError):
        HealthComponent(current=150, maximum=100)
Testing Systems¶
@pytest.mark.asyncio
async def test_system_processes_entities(world, player_entity):
    """Test system processes matching entities."""
    system = RegenerationSystem(world)

    # Set up test state
    health = player_entity.get(HealthComponent)
    health.current = 50

    # Run system
    await system.update(delta=1.0)

    # Verify result
    assert health.current > 50
Testing Commands¶
@pytest.fixture
def mock_session():
    """Create a mock session for command testing."""
    session = MagicMock()
    session.messages = []
    session.send = AsyncMock(side_effect=lambda m: session.messages.append(m))
    return session


@pytest.mark.asyncio
async def test_look_command(world, player_entity, mock_session):
    """Test look command outputs room description."""
    ctx = CommandContext(
        session=mock_session,
        player_id=player_entity.id,
        world=world,
    )

    await look_command(ctx)

    assert len(mock_session.messages) > 0
    assert "room" in mock_session.messages[0].lower()
Content Pack Testing¶
For comprehensive content pack testing patterns, see:
- Content Pack Testing Guide - Detailed testing strategies
- Covers: component tests, system tests, event tests, command tests, persistence tests
Continuous Integration¶
Tests run automatically on pull requests. Your PR must pass:
- All tests (uv run pytest packages/)
- Linting (uv run ruff check packages/)
- Type checking (uv run mypy packages/)
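To catch failures before CI does, run the same checks locally:
# Run the full CI check suite locally
uv run pytest packages/
uv run ruff check packages/
uv run mypy packages/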
Debugging Failed Tests¶
Verbose Output¶
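Re-run with the verbose flag to see each test name and full assertion diffs:
# Show each test name and detailed failure output
uv run pytest packages/ -v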
Drop into Debugger¶
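pytest can drop into the Python debugger at the point of failure with the standard --pdb flag:
# Open pdb when a test fails
uv run pytest packages/ --pdb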
Show Print Statements¶
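By default pytest captures stdout; pass -s to see print() output as tests run:
# Disable output capturing
uv run pytest packages/ -s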
Run Only Failed Tests¶
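Use pytest's --lf (--last-failed) flag to re-run only the tests that failed in the previous run:
# Re-run only previously failing tests
uv run pytest packages/ --lf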
Related Guides¶
- Development Setup - Environment setup
- Style Guide - Code style requirements
- Content Pack Testing - Detailed testing patterns