Cloud Deployment¶
Docker Deployment¶
Development (docker-compose)¶
```yaml
# docker-compose.yml
version: '3.8'

services:
  maid:
    build: .
    ports:
      - "4000:4000"
      - "8080:8080"
    environment:
      - MAID_DB__HOST=postgres
      - MAID_DB__USER=maid
      - MAID_DB__PASSWORD=maid
      - MAID_DB__NAME=maid
      - MAID_AI__DEFAULT_PROVIDER=ollama
      - MAID_AI__OLLAMA_HOST=http://ollama:11434
    depends_on:
      - postgres
      - ollama

  postgres:
    image: postgres:16
    environment:
      - POSTGRES_USER=maid
      - POSTGRES_PASSWORD=maid
      - POSTGRES_DB=maid
    volumes:
      - postgres_data:/var/lib/postgresql/data

  ollama:
    image: ollama/ollama
    volumes:
      - ollama_data:/root/.ollama

volumes:
  postgres_data:
  ollama_data:
```
```bash
# Start all services
docker-compose up -d

# Pull AI model
docker-compose exec ollama ollama pull llama3.2

# View logs
docker-compose logs -f maid
```
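Once the stack is up, a quick way to sanity-check the two endpoints exposed by the compose file above; the ports come from the compose file, while the web root path is an assumption:

```bash
# Telnet endpoint (mapped to host port 4000)
telnet localhost 4000

# Web/WebSocket endpoint (mapped to host port 8080); "/" is assumed here
curl -i http://localhost:8080/
```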
Production Dockerfile¶
```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install uv
RUN pip install uv

# Copy and install dependencies
COPY pyproject.toml uv.lock ./
COPY packages/ packages/
RUN uv sync --no-dev

# Copy application
COPY . .

# Expose ports
EXPOSE 4000 8080

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
    CMD python -c "import socket; s=socket.socket(); s.connect(('localhost', 4000)); s.close()"

# Run server
CMD ["uv", "run", "maid", "server", "start", "--host", "0.0.0.0"]
```
AWS Deployment¶
```mermaid
flowchart TB
    subgraph Internet
        Users[Players]
    end

    subgraph AWS["AWS Cloud"]
        subgraph VPC["VPC"]
            subgraph Public["Public Subnet"]
                NLB[Network Load Balancer<br/>TCP:4000]
                ALB[Application Load Balancer<br/>HTTP:8080]
            end
            subgraph Private["Private Subnet"]
                ECS[ECS Fargate<br/>MAID Container]
                RDS[(RDS PostgreSQL)]
                REDIS[(ElastiCache Redis)]
            end
        end
        SM[Secrets Manager<br/>API Keys]
    end

    Users --> NLB
    Users --> ALB
    NLB --> ECS
    ALB --> ECS
    ECS --> RDS
    ECS --> REDIS
    ECS --> SM
```
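The task definition below reads credentials from Secrets Manager. A sketch of creating those secrets with the AWS CLI; the secret names match the ARNs referenced below, and the values are placeholders:

```bash
# Database password, injected as MAID_DB__PASSWORD
aws secretsmanager create-secret \
  --name maid/db-password \
  --secret-string 'change-me'

# Anthropic API key, injected as MAID_AI__ANTHROPIC_API_KEY
aws secretsmanager create-secret \
  --name maid/anthropic-key \
  --secret-string 'sk-ant-...'
```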
AWS ECS Task Definition¶
```json
{
  "family": "maid-server",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "maid",
      "image": "your-ecr-repo/maid:latest",
      "portMappings": [
        {"containerPort": 4000, "protocol": "tcp"},
        {"containerPort": 8080, "protocol": "tcp"}
      ],
      "environment": [
        {"name": "MAID_TELNET__HOST", "value": "0.0.0.0"},
        {"name": "MAID_WEB__HOST", "value": "0.0.0.0"}
      ],
      "secrets": [
        {
          "name": "MAID_DB__PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:region:account:secret:maid/db-password"
        },
        {
          "name": "MAID_AI__ANTHROPIC_API_KEY",
          "valueFrom": "arn:aws:secretsmanager:region:account:secret:maid/anthropic-key"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/maid",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "maid"
        }
      }
    }
  ]
}
```
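To put the task definition into service, a sketch of the standard AWS CLI flow; the cluster name, subnet, and security group are placeholders, and attaching the NLB/ALB target groups via `--load-balancers` is omitted for brevity:

```bash
# Register the task definition (saved locally as task-definition.json)
aws ecs register-task-definition \
  --cli-input-json file://task-definition.json

# Create a single-instance Fargate service in the private subnets
aws ecs create-service \
  --cluster maid-cluster \
  --service-name maid-server \
  --task-definition maid-server \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-xxxx],securityGroups=[sg-xxxx],assignPublicIp=DISABLED}'
```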
Kubernetes Deployment¶
```yaml
# maid-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: maid-server
spec:
  replicas: 1  # Single instance (game state is in-memory)
  selector:
    matchLabels:
      app: maid
  template:
    metadata:
      labels:
        app: maid
    spec:
      containers:
        - name: maid
          image: your-registry/maid:latest
          ports:
            - containerPort: 4000
              name: telnet
            - containerPort: 8080
              name: websocket
          env:
            - name: MAID_DB__HOST
              value: postgres-service
            - name: MAID_AI__DEFAULT_PROVIDER
              value: anthropic
          envFrom:
            - secretRef:
                name: maid-secrets
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          livenessProbe:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 10
            periodSeconds: 30
          readinessProbe:
            tcpSocket:
              port: 4000
            initialDelaySeconds: 5
            periodSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
  name: maid-telnet
spec:
  type: LoadBalancer
  ports:
    - port: 4000
      targetPort: 4000
      protocol: TCP
  selector:
    app: maid
---
apiVersion: v1
kind: Service
metadata:
  name: maid-websocket
spec:
  type: LoadBalancer
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app: maid
```
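The Deployment pulls its remaining credentials from a `maid-secrets` Secret via `envFrom`. A sketch of creating that Secret and applying the manifests; the key names mirror the environment variables used elsewhere in this guide, and the values are placeholders:

```bash
# Create the Secret referenced by envFrom (key names assumed from the MAID_* env vars)
kubectl create secret generic maid-secrets \
  --from-literal=MAID_DB__PASSWORD='change-me' \
  --from-literal=MAID_AI__ANTHROPIC_API_KEY='sk-ant-...'

# Deploy the server and its Services
kubectl apply -f maid-deployment.yaml

# Watch for the external IPs assigned to the LoadBalancer Services
kubectl get svc maid-telnet maid-websocket -w
```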
Cloud Deployment Considerations¶
| Aspect | Telnet (4000) | WebSocket (8080) |
|---|---|---|
| Protocol | Raw TCP | HTTP/WS |
| Load Balancer | Network LB (Layer 4) | Application LB (Layer 7) |
| SSL/TLS | Direct TLS or none | WSS via ALB |
| Scaling | Single instance* | Single instance* |
*MAID currently runs as a single instance due to in-memory game state. Horizontal scaling would require a shared state layer (Redis, database-backed state).