Plugin Review Process

This document describes the review process for plugins submitted to maid-contrib. Understanding these criteria helps plugin authors prepare successful submissions and ensures consistent quality across the ecosystem.

Review Overview

Timeline

| Phase | Duration | Description |
| --- | --- | --- |
| Triage | 1-2 days | Initial check for completeness |
| Automated Checks | Immediate | CI runs tests and quality checks |
| Technical Review | 3-5 days | Human code review |
| Community Feedback | 3-7 days | Optional community review period |
| Final Decision | 1-2 days | Approval or feedback |

Total typical time: 1-2 weeks

Review Outcomes

  • Approved - Plugin is accepted into maid-contrib
  • Changes Requested - Minor issues that need addressing before approval
  • Major Revision Required - Significant issues requiring substantial changes
  • Declined - Plugin does not meet requirements or fit the ecosystem

Review Criteria

1. Protocol Compliance

The plugin must correctly implement the ContentPack protocol.

Requirements

| Requirement | Priority | Verification Method |
| --- | --- | --- |
| Implements manifest property returning valid ContentPackManifest | P0 | Protocol test |
| Implements get_dependencies() returning list of dependency names | P0 | Protocol test |
| Implements get_systems() returning System instances | P0 | Protocol test |
| Implements get_events() returning Event types | P0 | Protocol test |
| Implements register_commands() method | P0 | Protocol test |
| Implements register_document_schemas() method | P0 | Protocol test |
| Implements on_load() async method | P0 | Protocol test |
| Implements on_unload() async method | P0 | Protocol test |
| Manifest version matches pyproject.toml version | P1 | Manual check |
| Dependencies are correctly declared | P1 | Integration test |

Verification

# Run protocol compliance tests
uv run maid plugin check /path/to/plugin --protocol
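The table above implies a fairly small protocol surface. The following sketch shows a plugin class with every required member; the real ContentPackManifest, System, and Event types come from the MAID engine, so a lightweight placeholder manifest is used here to keep the example self-contained, and the WeatherPack name is hypothetical.

```python
# Sketch of the ContentPack surface described above. The placeholder
# manifest type stands in for the engine's real ContentPackManifest.
from dataclasses import dataclass, field


@dataclass
class ContentPackManifest:  # placeholder for the engine's manifest type
    name: str
    version: str
    dependencies: list[str] = field(default_factory=list)


class WeatherPack:
    """Hypothetical plugin implementing every protocol-tested member."""

    @property
    def manifest(self) -> ContentPackManifest:
        # P1 requirement: this version must match pyproject.toml.
        return ContentPackManifest(name="weather", version="0.1.0")

    def get_dependencies(self) -> list[str]:
        return []  # names of other packs this one depends on

    def get_systems(self) -> list:
        return []  # System instances that run each tick

    def get_events(self) -> list:
        return []  # Event types this pack defines

    def register_commands(self, registry) -> None:
        pass  # e.g. attach command handlers to the registry

    def register_document_schemas(self, registry) -> None:
        pass

    async def on_load(self) -> None:
        pass  # acquire resources; must be an async method

    async def on_unload(self) -> None:
        pass  # release resources; must be an async method
```

A class like this should pass the P0 protocol tests structurally; the P1 rows still need a manual version check and an integration test for declared dependencies.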

2. Testing Requirements

Comprehensive testing ensures plugin reliability and maintainability.

Test Coverage

| Metric | Minimum | Recommended |
| --- | --- | --- |
| Line coverage | 80% | 90%+ |
| Branch coverage | 70% | 80%+ |
| Function coverage | 90% | 100% |

Required Tests

  • [ ] Unit tests for all components
  • [ ] Unit tests for all systems (including tick behavior)
  • [ ] Unit tests for all command handlers
  • [ ] Unit tests for all event handlers
  • [ ] Integration tests for plugin loading
  • [ ] Integration tests for system interactions
  • [ ] Protocol compliance tests using ContentPackTestCase

Test Quality Criteria

  • Tests are deterministic (no flaky tests)
  • Tests are independent (can run in any order)
  • Tests use appropriate fixtures and mocks
  • Tests cover edge cases and error conditions
  • Tests are clearly named and documented
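The criteria above can be illustrated with a short pytest sketch. The component logic (`apply_decay`) and fixture are hypothetical; the point is the shape: injected inputs rather than ambient state (deterministic), per-test fixtures (independent), descriptive names, and an explicit edge case.

```python
# Sketch of tests meeting the quality criteria: deterministic (all inputs
# are passed in, nothing read from clocks or globals), independent (each
# test builds its own fixture state), clearly named, with an edge case.
import pytest


def apply_decay(durability: int, ticks: int, rate: int = 1) -> int:
    """Hypothetical component logic under test."""
    return max(0, durability - ticks * rate)


@pytest.fixture
def fresh_item() -> dict:
    # Each test receives its own dict, so tests can run in any order.
    return {"durability": 10}


def test_decay_reduces_durability(fresh_item: dict) -> None:
    assert apply_decay(fresh_item["durability"], ticks=3) == 7


def test_decay_never_goes_negative(fresh_item: dict) -> None:
    # Edge case: decay past zero clamps instead of going negative.
    assert apply_decay(fresh_item["durability"], ticks=99) == 0
```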

Verification

# Run tests with coverage
uv run pytest /path/to/plugin/tests --cov=your_plugin --cov-report=term-missing

# Verify minimum coverage
uv run maid plugin check /path/to/plugin --coverage

3. Documentation Requirements

Good documentation makes plugins accessible and maintainable.

Required Documentation

| Document | Required Content |
| --- | --- |
| README.md | Description, installation, quick start, commands, configuration |
| CHANGELOG.md | Version history with changes |
| LICENSE | OSI-approved open source license |
| Docstrings | All public classes, functions, and methods |

README Structure

The README must include:

  1. Header - Plugin name and brief description
  2. Features - Bulleted list of features
  3. Installation - Installation commands
  4. Quick Start - Minimal working example
  5. Commands - Table of all commands with usage
  6. Configuration - Environment variables and settings
  7. Requirements - MAID version and dependencies
  8. License - License type and link to full license

Docstring Requirements

  • Google-style docstrings
  • Description of what the item does
  • Args section for functions/methods with parameters
  • Returns section for functions that return values
  • Raises section for functions that raise exceptions
  • Example section for complex functionality
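A docstring covering the sections listed above looks like this. The function itself is hypothetical and exists only to carry the docstring:

```python
# Example of a Google-style docstring with Args, Returns, Raises, and
# Example sections, as required above.
import random


def roll_dice(count: int, sides: int = 6) -> list[int]:
    """Roll a number of dice and return the individual results.

    Args:
        count: How many dice to roll. Must be positive.
        sides: Number of faces per die. Defaults to 6.

    Returns:
        A list of `count` integers, each between 1 and `sides`.

    Raises:
        ValueError: If `count` is not positive or `sides` is below 2.

    Example:
        >>> len(roll_dice(3))
        3
    """
    if count <= 0 or sides < 2:
        raise ValueError("count must be positive and sides >= 2")
    return [random.randint(1, sides) for _ in range(count)]
```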

Verification

# Check documentation completeness
uv run maid plugin check /path/to/plugin --docs

4. Security Review Checklist

Security review protects the MAID ecosystem and its users.

Code Security

| Check | Description | Severity |
| --- | --- | --- |
| No hardcoded secrets | API keys, passwords, tokens | Critical |
| Input validation | All user inputs are validated | High |
| No arbitrary code execution | No eval(), exec(), or unsafe deserialization | Critical |
| Safe file operations | Path traversal prevention | High |
| Dependency security | No known vulnerabilities | High |
| SQL injection prevention | Parameterized queries if using databases | Critical |
| XSS prevention | Proper output encoding for web interfaces | Medium |
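The "safe file operations" row is the check reviewers most often flag. One common pattern, sketched below with a hypothetical helper name: resolve any user-supplied relative path under the plugin's data root and reject anything that escapes it.

```python
# Sketch of path traversal prevention: resolve the candidate path and
# refuse anything outside the plugin's data root.
from pathlib import Path


def safe_resolve(data_root: Path, user_path: str) -> Path:
    """Resolve user_path under data_root, rejecting traversal attempts."""
    root = data_root.resolve()
    candidate = (root / user_path).resolve()
    # Path.is_relative_to (Python 3.9+) catches both '..' escapes and
    # absolute paths smuggled in as user_path.
    if not candidate.is_relative_to(root):
        raise ValueError(f"path escapes data root: {user_path!r}")
    return candidate
```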

Dependency Audit

# Check for known vulnerabilities
uv run pip-audit /path/to/plugin

# Or using the quality checker
uv run maid plugin check /path/to/plugin --security

Permission Model

  • Does the plugin request appropriate permissions?
  • Are sensitive operations properly gated?
  • Is there proper access control for admin commands?
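One way to gate admin commands, sketched below: a decorator that checks the caller before the handler runs. The `actor` object and its `is_admin` flag are assumptions for illustration, not part of any documented MAID API.

```python
# Hedged sketch of access control for admin commands: the decorator
# rejects callers that lack the (assumed) is_admin flag.
import functools


def admin_only(handler):
    """Reject invocation unless the calling actor has admin rights."""

    @functools.wraps(handler)
    def wrapper(actor, *args, **kwargs):
        if not getattr(actor, "is_admin", False):
            raise PermissionError(f"{handler.__name__} requires admin rights")
        return handler(actor, *args, **kwargs)

    return wrapper


@admin_only
def shutdown_world(actor) -> str:
    # Hypothetical sensitive operation, now properly gated.
    return "world stopped"
```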

Data Handling

  • How is user data stored and processed?
  • Is sensitive data encrypted at rest?
  • Are there data retention/deletion policies?

5. API Compatibility Requirements

These requirements ensure plugins work reliably across the MAID versions they claim to support.

Version Declaration

  • Plugin declares minimum supported MAID version
  • Plugin uses >= syntax for flexibility (e.g., maid-engine>=0.2.0)
  • Plugin is tested against all declared supported versions
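A `>=` declaration like `maid-engine>=0.2.0` is satisfied when the installed version compares at or above the minimum. The naive tuple comparison below is only for illustration; real tooling should use the `packaging` library, since this sketch ignores pre-release and local version tags.

```python
# Illustrative-only version comparison for a ">=" declaration.
# Assumes plain dotted-integer versions like "0.2.0".
def parse(version: str) -> tuple[int, ...]:
    return tuple(int(part) for part in version.split("."))


def satisfies_minimum(installed: str, minimum: str) -> bool:
    """True if installed >= minimum, e.g. for 'maid-engine>=0.2.0'."""
    return parse(installed) >= parse(minimum)
```

Tuple comparison also handles multi-digit components correctly (0.10.0 is newer than 0.9.0), which naive string comparison gets wrong.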

Breaking Changes

Plugins must:

  • Document any breaking changes in CHANGELOG
  • Use semantic versioning (MAJOR for breaking changes)
  • Provide migration guides for breaking changes

Compatibility Testing

# Test against multiple MAID versions
uv run maid plugin check /path/to/plugin --compat

# Or manually test against specific version
MAID_TEST_VERSION=0.2.0 uv run pytest

6. Code Quality Standards

High-quality code is maintainable and reliable.

Linting

All code must pass ruff with MAID's default configuration:

uv run ruff check /path/to/plugin/src
uv run ruff format --check /path/to/plugin/src

Type Checking

All code must pass mypy in strict mode:

uv run mypy /path/to/plugin/src

Code Style

  • Follow PEP 8 conventions
  • Maximum line length: 100 characters
  • Use meaningful variable and function names
  • Keep functions focused and reasonably sized
  • Avoid deep nesting
  • Use type hints on all public APIs

7. Performance Considerations

Plugins should not negatively impact server performance.

Guidelines

  • Systems should complete tick processing efficiently
  • Avoid blocking I/O in tick handlers
  • Use appropriate data structures for lookups
  • Cache expensive computations when appropriate
  • Document any performance-intensive operations

Red Flags

  • Unbounded loops in tick handlers
  • Synchronous network calls in systems
  • Large memory allocations per tick
  • Missing indices on frequently-queried data
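The "synchronous network calls in systems" red flag has a standard fix: offload the blocking call to a worker thread so the tick loop stays responsive. The system and fetch function below are hypothetical.

```python
# Illustrates "avoid blocking I/O in tick handlers": the blocking lookup
# is pushed onto a worker thread via asyncio.to_thread, so other systems
# keep ticking while it runs.
import asyncio
import time


def fetch_forecast_blocking() -> str:
    time.sleep(0.01)  # stands in for a slow network or disk call
    return "rain"


class WeatherSystem:
    def __init__(self) -> None:
        self.forecast = "unknown"

    async def tick(self) -> None:
        # BAD: calling fetch_forecast_blocking() directly here would
        # stall every other system for the duration of the call.
        # GOOD: offload it so the event loop stays responsive.
        self.forecast = await asyncio.to_thread(fetch_forecast_blocking)


async def main() -> str:
    system = WeatherSystem()
    await system.tick()
    return system.forecast
```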

Review Workflow

For Reviewers

  1. Triage (Maintainer)
     • Verify submission is complete
     • Check basic eligibility requirements
     • Assign reviewer(s)

  2. Automated Checks (CI)
     • Run linting and formatting
     • Run type checking
     • Run tests with coverage
     • Run security audit
     • Run protocol compliance tests

  3. Code Review (Reviewer)
     • Review code for quality and best practices
     • Review architecture and design decisions
     • Review test quality
     • Review documentation
     • Perform security review

  4. Feedback
     • Provide constructive feedback
     • Request changes if needed
     • Work with author to resolve issues

  5. Final Approval
     • Two reviewer approvals required
     • One must be a maintainer
     • All CI checks must pass

For Plugin Authors

  1. Before Submission
     • Run maid plugin check and fix all issues
     • Review the submission checklist
     • Ensure all documentation is complete

  2. During Review
     • Respond to feedback promptly
     • Ask questions if feedback is unclear
     • Push updates as requested

  3. After Approval
     • Plugin is merged to maid-contrib
     • CI/CD pipelines are configured
     • Documentation is published

Appeals Process

If your plugin is declined:

  1. Understand the Feedback - Review all feedback carefully
  2. Ask for Clarification - If feedback is unclear, ask reviewers
  3. Address Issues - Fix identified problems
  4. Resubmit - Submit a new review request after addressing issues

If you disagree with a decision:

  1. Comment on the review issue explaining your position
  2. A different maintainer will review the appeal
  3. The appeal decision is final

Quality Metrics

We track the following metrics to improve our review process:

| Metric | Target |
| --- | --- |
| Average review time | < 2 weeks |
| First response time | < 48 hours |
| Approval rate | > 70% |
| Post-approval issues | < 10% |