refactor: split docker-service-architecture skill into focused files

- Extract testing patterns to testing.md (271 lines)
  - 3-stage testing pipeline
  - Branch and worktree isolation
  - Rich test runner with progress display
  - Makefile integration

- Extract CI/CD pipelines to ci-cd.md (201 lines)
  - GitHub Actions workflow with ghcr.io
  - Gitea Actions workflow with custom registry
  - Platform comparison table
  - Multi-service matrix builds

- Streamline SKILL.md to core content (200 lines)
  - Overview and when to use
  - Directory structure
  - Docker compose patterns (dev/test/prod)
  - Common mistakes and quick reference
  - Cross-reference table to supporting files
2026-01-01 14:34:43 -05:00
parent f5039d46b6
commit 8f91e71caa
3 changed files with 479 additions and 326 deletions

SKILL.md

@@ -17,6 +17,13 @@ Pattern for organizing multi-service projects with Docker containerization, envi
- Implementing multi-stage testing (unit → service → integration)
- Need parallel test execution across git branches
## Related Documentation
| Topic | File | Use When |
|-------|------|----------|
| Testing patterns | testing.md | Setting up 3-stage testing, branch isolation, test runners |
| CI/CD pipelines | ci-cd.md | Configuring GitHub or Gitea automated Docker builds |
## Single-Service Projects
For projects with only one service, simplify the patterns:
@@ -94,157 +101,6 @@ project/
└── Makefile
```
## 3-Stage Testing Pattern
```
┌─────────────┐     ┌─────────────┐     ┌─────────────────┐
│ Unit Tests  │ ──► │Service Tests│ ──► │Integration Tests│
│ (no Docker) │     │(per-service)│     │  (full stack)   │
└─────────────┘     └─────────────┘     └─────────────────┘
      Fast              Medium                 Slow
   Mocked deps      Real DB, mocked       All services
                     external APIs          together
```
**Fail-fast**: Each stage runs only if the previous stage passes.
### Stage 1: Unit Tests
- **Location**: `services/<name>/tests/unit/`
- **Dependencies**: All mocked
- **Containers**: None
### Stage 2: Service Tests
- **Location**: `services/<name>/tests/integration/`
- **Config**: `services/<name>/deploy/test/docker-compose.yml`
- **Dependencies**: Real database, mocked external APIs
- **Containers**: Service + its direct dependencies only
### Stage 3: Integration Tests
- **Location**: `tests/integration/` or `services/<name>/tests/integration/`
- **Config**: `deploy/test/docker-compose.yml`
- **Dependencies**: All services running
- **Containers**: Full stack
### Test Performance Optimization
**Requirement**: Minimize total test execution time. Slow tests waste developer time and CI resources.
- **Time all tests**: Use pytest's `--durations=10` to identify slowest tests
- **Investigate outliers**: Tests taking >1s in unit tests or >5s in integration tests need review
- **Common culprit**: Tests waiting for timeouts instead of using condition-based assertions
```python
# BAD: Waits full 5 seconds even if ready immediately
await asyncio.sleep(5)
assert result.is_ready()
# GOOD: Returns as soon as condition is met
await wait_for(lambda: result.is_ready(), timeout=5)
```
- **Cache expensive setup**: Use `pytest` fixtures with appropriate scope (`module`, `session`)
- **Parallelize**: Use `pytest-xdist` for CPU-bound test suites
## Test Environment Isolation
### Branch Isolation
Use `TEST_INSTANCE_ID` derived from git branch for parallel testing:
```bash
#!/bin/bash
# scripts/get-test-instance-id.sh
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
# Sanitize: lowercase (Compose project names must be lowercase),
# replace non-alphanumerics with dash, limit length
echo "$BRANCH" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | cut -c1-20
```
### Docker Compose with Instance ID
```yaml
# deploy/test/docker-compose.yml
services:
database:
container_name: mydb-test-${TEST_INSTANCE_ID}
# Use tmpfs for ephemeral storage (fast, stateless)
tmpfs:
- /var/lib/postgresql/data
# Dynamic ports - no conflicts
ports:
- "5432" # Docker assigns random host port
networks:
- test-network
networks:
test-network:
name: myproject-test-${TEST_INSTANCE_ID}
```
### Dynamic Port Discovery
```bash
# After containers start, discover assigned ports
API_PORT=$(docker compose port api 8000 | cut -d: -f2)
DB_PORT=$(docker compose port database 5432 | cut -d: -f2)
export TEST_API_URL="http://localhost:${API_PORT}"
export TEST_DATABASE_URL="postgresql://test:test@localhost:${DB_PORT}/testdb"
```
### Git Worktree Isolation
When using git worktrees for parallel feature development, additional isolation is required beyond container and network naming.
**Problem**: Without `COMPOSE_PROJECT_NAME`, Docker Compose uses the directory name as the project name. Since all test directories are typically named `test`, all worktrees share the same image names (e.g., `test-price-db:latest`). When worktree A builds while worktree B is testing, B's containers can fail due to image conflicts.
**Solution**: Set `COMPOSE_PROJECT_NAME` to include the instance ID:
```bash
# scripts/test-service.sh
TEST_INSTANCE_ID=$(./scripts/get-test-instance-id.sh)
export TEST_INSTANCE_ID
# CRITICAL: Isolate Docker images between worktrees
# Without this, all worktrees share image names like "test-price-db:latest"
export COMPOSE_PROJECT_NAME="test-${TEST_INSTANCE_ID}"
cd "services/${SERVICE}/deploy/test"
docker compose up -d --build --wait
```
**Result**: Each worktree gets unique image names:
- `test-feature-auth-price-db:latest` (worktree on feature/auth)
- `test-feature-api-price-db:latest` (worktree on feature/api)
- `test-main-price-db:latest` (main branch)
**What gets isolated with full implementation**:
| Resource | Isolation Variable | Example |
|----------|-------------------|---------|
| Container names | `container_name: mydb-test-${TEST_INSTANCE_ID}` | `mydb-test-feature-auth` |
| Network names | `name: myproject-test-${TEST_INSTANCE_ID}` | `myproject-test-feature-auth` |
| Image names | `COMPOSE_PROJECT_NAME=test-${TEST_INSTANCE_ID}` | `test-feature-auth-mydb:latest` |
### Cleanup Command Scoping
**Problem**: A global `test-clean-all` command that removes all containers matching `name=test-` will interfere with tests running in other worktrees.
**Solution**: Scope cleanup to the current branch's instance ID:
```makefile
# BAD: Removes ALL test containers across all branches
test-clean-all:
	@docker ps -a --filter "name=test-" --format "{{.Names}}" | xargs -r docker rm -f

# GOOD: Only removes containers for current branch
test-clean-all:
	@TEST_INSTANCE_ID=$$(./scripts/get-test-instance-id.sh); \
	echo "Cleaning up test containers for instance: $$TEST_INSTANCE_ID..."; \
	docker ps -a --filter "name=test-$$TEST_INSTANCE_ID" --format "{{.Names}}" | xargs -r docker rm -f 2>/dev/null || true; \
	docker network rm "myproject-test-network-$$TEST_INSTANCE_ID" 2>/dev/null || true
```
## Docker Compose Patterns
### Development Environment
@@ -318,147 +174,6 @@ services:
max-file: "10"
```
## CI/CD Pipeline
### Release Workflow (Gitea/GitHub Actions)
```yaml
# .gitea/workflows/release.yml
name: Release
on:
  push:
    tags:
      - 'v[0-9]+.[0-9]+.[0-9]+*'
jobs:
  test:
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:15
        env:
          POSTGRES_PASSWORD: test
        options: --health-cmd pg_isready
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        run: make test
  build:
    needs: test  # Only build if tests pass
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [api, worker, frontend]  # Build all services
    steps:
      - uses: actions/checkout@v4
      - name: Extract version
        id: version
        run: |
          VERSION=${GITHUB_REF#refs/tags/v}
          echo "VERSION=$VERSION" >> $GITHUB_OUTPUT
          # Detect pre-release (contains hyphen: v1.0.0-beta).
          # `|| true` keeps the step from failing for stable tags,
          # since run steps execute under `bash -e`.
          { [[ "$VERSION" == *-* ]] && echo "PRERELEASE=true" >> $GITHUB_OUTPUT; } || true
      - name: Build and push
        run: |
          IMAGE=registry.example.com/myproject-${{ matrix.service }}
          docker build -t $IMAGE:${{ steps.version.outputs.VERSION }} \
            -f services/${{ matrix.service }}/Dockerfile .
          docker push $IMAGE:${{ steps.version.outputs.VERSION }}
          # Only tag 'latest' for stable releases
          if [ "${{ steps.version.outputs.PRERELEASE }}" != "true" ]; then
            docker tag $IMAGE:${{ steps.version.outputs.VERSION }} $IMAGE:latest
            docker push $IMAGE:latest
          fi
```
## Rich Test Runner
Context-efficient test output with real-time progress:
```python
# scripts/test-runner.py (simplified structure)
from rich.console import Console
from rich.live import Live
from rich.table import Table

class TestRunner:
    def run_all(self, unit=True, service=True, integration=True):
        with Live(console=self.console):
            if unit:
                self.run_stage("Unit Tests", self.unit_targets)
                if not self.all_passed:
                    return False  # Fail fast
            if service:
                self.run_stage("Service Tests", self.service_targets)
                if not self.all_passed:
                    return False
            if integration:
                self.run_stage("Integration Tests", self.integration_targets)
        self.render_summary()
        return self.all_passed
```
### Output Parsing
```python
import re

# Parse pytest progress: "tests/test_foo.py::test_bar PASSED [ 45%]"
PYTEST_PROGRESS = re.compile(r"\[\s*(\d+)%\]")
PYTEST_SUMMARY = re.compile(r"(\d+) passed(?:.*?(\d+) failed)?")

# Parse vitest: "✓ src/foo.test.tsx (11 tests) 297ms"
VITEST_PASS = re.compile(r"✓\s+(.+\.tsx?)\s+\((\d+)\s+test")
```
### Display Format
```
Unit Tests
● price-db ━━━━━━━━━━░░░░░░░░░░ 52% 497/955 12.3s
→ test_data_quality_validation
✓ orchestrator 74/74 3.2s
○ frontend pending
Service Tests
○ price-db pending
○ orchestrator pending
```
## Makefile Integration
```makefile
VERBOSE ?= 0

test:
	@python scripts/test-runner.py $(if $(filter 1,$(VERBOSE)),-v)

test-verbose:
	@python scripts/test-runner.py -v

test-unit:
	@$(MAKE) -C services/api test-unit VERBOSE=$(VERBOSE)
	@$(MAKE) -C services/worker test-unit VERBOSE=$(VERBOSE)

# Usage: make test-service SERVICE=api
test-service:
	@./scripts/test-service.sh $(SERVICE)

test-integration:
	@./scripts/run-integration-tests.sh

test-infra-up:
	@cd deploy/test && docker compose up -d --build --wait

test-infra-down:
	@cd deploy/test && docker compose down -v
```
## Common Mistakes
| Mistake | Problem | Solution |
@@ -483,37 +198,3 @@ test-infra-down:
| Restart policy | None | None | unless-stopped |
| Resource limits | None | None | Set limits |
| Logging | Default | Default | json-file with rotation |
## Service Test Script Template
```bash
#!/bin/bash
# scripts/test-service.sh
set -e
SERVICE=$1

TEST_INSTANCE_ID=$(./scripts/get-test-instance-id.sh)
export TEST_INSTANCE_ID
# CRITICAL: Isolate Docker images between worktrees
export COMPOSE_PROJECT_NAME="test-${TEST_INSTANCE_ID}"

# Start service-specific containers
cd "services/${SERVICE}/deploy/test"
docker compose up -d --build --wait

# Discover ports
API_PORT=$(docker compose port api 8000 | cut -d: -f2)
export TEST_API_URL="http://localhost:${API_PORT}"

# Run tests (`|| TEST_EXIT=$?` captures the exit code so that
# `set -e` doesn't abort before cleanup runs)
cd "../.."
TEST_EXIT=0
uv run pytest tests/integration/ --integration || TEST_EXIT=$?

# Cleanup
cd "deploy/test"
docker compose down -v
exit $TEST_EXIT
```

ci-cd.md (new file)

@@ -0,0 +1,201 @@
# CI/CD Pipelines
Automated Docker builds triggered by semantic version tags for GitHub and Gitea.
## Automated Docker Builds
Both **GitHub** and **Gitea** support automated Docker builds triggered by semantic version tags. The key differences are in registry authentication, available actions, and runner capabilities.
### Trigger Pattern
Both platforms use the same tag pattern to trigger builds:
```yaml
on:
  push:
    tags:
      - 'v*.*.*'                 # Simple glob (works everywhere)
      # or
      - 'v[0-9]+.[0-9]+.[0-9]+*' # More precise (Actions filter-pattern syntax)
```
### Prerelease Detection
Tags containing `-alpha`, `-beta`, or `-rc` are considered prereleases and should NOT receive the `latest` tag:
```bash
# v1.0.0 → stable → gets :latest
# v1.0.0-alpha → prerelease → version tag only
# v1.0.0-beta.2 → prerelease → version tag only
# v1.0.0-rc.1 → prerelease → version tag only
```
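The same rule can be expressed as a tiny helper for release tooling that runs outside CI (a sketch; `is_prerelease` is an illustrative name, not from this repo):

```python
def is_prerelease(tag: str) -> bool:
    """A tag is a prerelease when a hyphen follows the x.y.z core
    (v1.0.0-alpha, v1.0.0-beta.2, v1.0.0-rc.1); stable tags get :latest."""
    return "-" in tag.lstrip("v")
```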
## GitHub Actions Workflow
GitHub provides first-class actions for Docker operations with built-in caching and metadata extraction.
```yaml
# .github/workflows/build.yaml
name: Build and Push Docker Image
on:
  push:
    tags:
      - 'v*.*.*'
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write  # Required for ghcr.io
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}  # Auto-provided
      - name: Extract metadata for Docker
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
            type=raw,value=latest,enable=${{ !contains(github.ref, '-alpha') && !contains(github.ref, '-beta') && !contains(github.ref, '-rc') }}
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push Docker image
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
**Key features:**
- `docker/metadata-action@v5`: Automatic semver tag generation
- `docker/build-push-action@v6`: Multi-platform builds, layer caching
- `cache-from/cache-to: type=gha`: GitHub Actions native cache (fast)
- `GITHUB_TOKEN`: Auto-provided, no secret setup needed for ghcr.io
## Gitea Actions Workflow
Gitea Actions are GitHub-compatible but lack some marketplace actions. Use direct Docker commands instead.
```yaml
# .gitea/workflows/release.yml
name: Build and Push Docker Image
on:
  push:
    tags:
      - 'v*.*.*'
env:
  REGISTRY: git.example.com  # Your Gitea container registry
  IMAGE_NAME: username/projectname
jobs:
  build:
    runs-on: ubuntu-docker  # Runner with Docker access
    steps:
      - name: Checkout repository
        run: |
          git clone --depth 1 --branch ${GITHUB_REF_NAME} ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git .
      - name: Extract version from tag
        id: version
        run: |
          VERSION=${GITHUB_REF#refs/tags/}
          echo "VERSION=$VERSION" >> $GITHUB_OUTPUT
          if [[ "$VERSION" == *-alpha* ]] || [[ "$VERSION" == *-beta* ]] || [[ "$VERSION" == *-rc* ]]; then
            echo "IS_PRERELEASE=true" >> $GITHUB_OUTPUT
          else
            echo "IS_PRERELEASE=false" >> $GITHUB_OUTPUT
          fi
      - name: Log in to Container Registry
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login ${{ env.REGISTRY }} -u ${{ gitea.actor }} --password-stdin
      - name: Build and push Docker image
        run: |
          docker build -t ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.version.outputs.VERSION }} .
          docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.version.outputs.VERSION }}
          if [ "${{ steps.version.outputs.IS_PRERELEASE }}" = "false" ]; then
            docker tag ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.version.outputs.VERSION }} ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
            docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
          fi
      - name: List images
        run: docker images | grep projectname
```
**Key differences from GitHub:**
- `runs-on: ubuntu-docker`: Custom runner label with Docker daemon
- Manual `git clone` instead of `actions/checkout` (Gitea compatibility)
- Direct `docker build/push` instead of build-push-action
- `secrets.REGISTRY_TOKEN`: Must be configured in Gitea secrets
- `${{ gitea.actor }}` instead of `${{ github.actor }}`
## Platform Comparison
| Feature | GitHub Actions | Gitea Actions |
|---------|---------------|---------------|
| Checkout | `actions/checkout@v4` | `git clone` command |
| Registry auth | `docker/login-action@v3` | `docker login` command |
| Build/push | `docker/build-push-action@v6` | `docker build && docker push` |
| Caching | `type=gha` (built-in) | Manual or none |
| Token for ghcr.io | Auto-provided `GITHUB_TOKEN` | N/A |
| Registry token | Auto for ghcr.io | Manual secret setup |
| Semver parsing | `docker/metadata-action` | Manual bash script |
| Actor variable | `github.actor` | `gitea.actor` |
## Multi-Service Builds
For projects with multiple services, use a matrix strategy:
```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [api, worker, frontend]
    steps:
      - uses: actions/checkout@v4
      - name: Build and push
        run: |
          docker build -t $REGISTRY/${{ matrix.service }}:$VERSION \
            -f services/${{ matrix.service }}/Dockerfile .
          docker push $REGISTRY/${{ matrix.service }}:$VERSION
```
## Creating a Release
```bash
# Stable release
git tag v1.0.0
git push origin v1.0.0

# Prerelease (no :latest tag)
git tag v1.1.0-beta.1
git push origin v1.1.0-beta.1
```

testing.md (new file)

@@ -0,0 +1,271 @@
# Testing Patterns
Testing infrastructure for Docker-based projects: 3-stage pipeline, environment isolation, and rich progress display.
## 3-Stage Testing Pattern
```
┌─────────────┐     ┌─────────────┐     ┌─────────────────┐
│ Unit Tests  │ ──► │Service Tests│ ──► │Integration Tests│
│ (no Docker) │     │(per-service)│     │  (full stack)   │
└─────────────┘     └─────────────┘     └─────────────────┘
      Fast              Medium                 Slow
   Mocked deps      Real DB, mocked       All services
                     external APIs          together
```
**Fail-fast**: Each stage runs only if the previous stage passes.
### Stage 1: Unit Tests
- **Location**: `services/<name>/tests/unit/`
- **Dependencies**: All mocked
- **Containers**: None
### Stage 2: Service Tests
- **Location**: `services/<name>/tests/integration/`
- **Config**: `services/<name>/deploy/test/docker-compose.yml`
- **Dependencies**: Real database, mocked external APIs
- **Containers**: Service + its direct dependencies only
### Stage 3: Integration Tests
- **Location**: `tests/integration/` or `services/<name>/tests/integration/`
- **Config**: `deploy/test/docker-compose.yml`
- **Dependencies**: All services running
- **Containers**: Full stack
### Test Performance Optimization
**Requirement**: Minimize total test execution time. Slow tests waste developer time and CI resources.
- **Time all tests**: Use pytest's `--durations=10` to identify slowest tests
- **Investigate outliers**: Tests taking >1s in unit tests or >5s in integration tests need review
- **Common culprit**: Tests waiting for timeouts instead of using condition-based assertions
```python
# BAD: Waits full 5 seconds even if ready immediately
await asyncio.sleep(5)
assert result.is_ready()
# GOOD: Returns as soon as condition is met
await wait_for(lambda: result.is_ready(), timeout=5)
```
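`wait_for` above is not a stdlib call (`asyncio.wait_for` takes an awaitable, not a predicate), so the snippet assumes a small polling helper, roughly:

```python
import asyncio
import time

async def wait_for(predicate, timeout=5.0, interval=0.05):
    """Poll `predicate` until it returns truthy; raise TimeoutError
    once `timeout` seconds have elapsed. A sketch, not a library API."""
    deadline = time.monotonic() + timeout
    while not predicate():
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout}s")
        await asyncio.sleep(interval)
```

The polling interval trades latency against CPU; 50 ms is usually invisible next to container startup times.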
- **Cache expensive setup**: Use `pytest` fixtures with appropriate scope (`module`, `session`)
- **Parallelize**: Use `pytest-xdist` for CPU-bound test suites
## Test Environment Isolation
### Branch Isolation
Use `TEST_INSTANCE_ID` derived from git branch for parallel testing:
```bash
#!/bin/bash
# scripts/get-test-instance-id.sh
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
# Sanitize: lowercase (Compose project names must be lowercase),
# replace non-alphanumerics with dash, limit length
echo "$BRANCH" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | cut -c1-20
```
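Python tooling (the test runner, for instance) may need the same ID. A sketch of the equivalent logic, with illustrative names; lowercasing is included because Docker Compose project names must be lowercase:

```python
import re
import subprocess

def sanitize_branch(branch: str, max_len: int = 20) -> str:
    # Lowercase, replace non-alphanumerics with dashes, truncate
    return re.sub(r"[^a-z0-9]", "-", branch.lower())[:max_len]

def get_test_instance_id() -> str:
    try:
        branch = subprocess.check_output(
            ["git", "rev-parse", "--abbrev-ref", "HEAD"],
            text=True, stderr=subprocess.DEVNULL,
        ).strip()
    except (subprocess.CalledProcessError, FileNotFoundError):
        branch = "unknown"
    return sanitize_branch(branch)
```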
### Docker Compose with Instance ID
```yaml
# deploy/test/docker-compose.yml
services:
  database:
    container_name: mydb-test-${TEST_INSTANCE_ID}
    # Use tmpfs for ephemeral storage (fast, stateless)
    tmpfs:
      - /var/lib/postgresql/data
    # Dynamic ports - no conflicts
    ports:
      - "5432"  # Docker assigns a random host port
    networks:
      - test-network

networks:
  test-network:
    name: myproject-test-${TEST_INSTANCE_ID}
```
### Dynamic Port Discovery
```bash
# After containers start, discover assigned ports
API_PORT=$(docker compose port api 8000 | cut -d: -f2)
DB_PORT=$(docker compose port database 5432 | cut -d: -f2)
export TEST_API_URL="http://localhost:${API_PORT}"
export TEST_DATABASE_URL="postgresql://test:test@localhost:${DB_PORT}/testdb"
```
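The same discovery works from Python by parsing the `HOST:PORT` line that `docker compose port` prints (a sketch; `host_port` and `parse_port` are illustrative helpers):

```python
import subprocess

def parse_port(output: str) -> int:
    # `docker compose port` prints e.g. "0.0.0.0:49153"; take the port part.
    # rsplit handles IPv6 forms like "[::]:49154" too.
    return int(output.strip().rsplit(":", 1)[1])

def host_port(service: str, container_port: int) -> int:
    out = subprocess.check_output(
        ["docker", "compose", "port", service, str(container_port)],
        text=True,
    )
    return parse_port(out)
```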
### Git Worktree Isolation
When using git worktrees for parallel feature development, additional isolation is required beyond container and network naming.
**Problem**: Without `COMPOSE_PROJECT_NAME`, Docker Compose uses the directory name as the project name. Since all test directories are typically named `test`, all worktrees share the same image names (e.g., `test-price-db:latest`). When worktree A builds while worktree B is testing, B's containers can fail due to image conflicts.
**Solution**: Set `COMPOSE_PROJECT_NAME` to include the instance ID:
```bash
# scripts/test-service.sh
TEST_INSTANCE_ID=$(./scripts/get-test-instance-id.sh)
export TEST_INSTANCE_ID
# CRITICAL: Isolate Docker images between worktrees
# Without this, all worktrees share image names like "test-price-db:latest"
export COMPOSE_PROJECT_NAME="test-${TEST_INSTANCE_ID}"
cd "services/${SERVICE}/deploy/test"
docker compose up -d --build --wait
```
**Result**: Each worktree gets unique image names:
- `test-feature-auth-price-db:latest` (worktree on feature/auth)
- `test-feature-api-price-db:latest` (worktree on feature/api)
- `test-main-price-db:latest` (main branch)
**What gets isolated with full implementation**:
| Resource | Isolation Variable | Example |
|----------|-------------------|---------|
| Container names | `container_name: mydb-test-${TEST_INSTANCE_ID}` | `mydb-test-feature-auth` |
| Network names | `name: myproject-test-${TEST_INSTANCE_ID}` | `myproject-test-feature-auth` |
| Image names | `COMPOSE_PROJECT_NAME=test-${TEST_INSTANCE_ID}` | `test-feature-auth-mydb:latest` |
### Cleanup Command Scoping
**Problem**: A global `test-clean-all` command that removes all containers matching `name=test-` will interfere with tests running in other worktrees.
**Solution**: Scope cleanup to the current branch's instance ID:
```makefile
# BAD: Removes ALL test containers across all branches
test-clean-all:
	@docker ps -a --filter "name=test-" --format "{{.Names}}" | xargs -r docker rm -f

# GOOD: Only removes containers for current branch
test-clean-all:
	@TEST_INSTANCE_ID=$$(./scripts/get-test-instance-id.sh); \
	echo "Cleaning up test containers for instance: $$TEST_INSTANCE_ID..."; \
	docker ps -a --filter "name=test-$$TEST_INSTANCE_ID" --format "{{.Names}}" | xargs -r docker rm -f 2>/dev/null || true; \
	docker network rm "myproject-test-network-$$TEST_INSTANCE_ID" 2>/dev/null || true
```
## Rich Test Runner
Context-efficient test output with real-time progress:
```python
# scripts/test-runner.py (simplified structure)
from rich.console import Console
from rich.live import Live
from rich.table import Table

class TestRunner:
    def run_all(self, unit=True, service=True, integration=True):
        with Live(console=self.console):
            if unit:
                self.run_stage("Unit Tests", self.unit_targets)
                if not self.all_passed:
                    return False  # Fail fast
            if service:
                self.run_stage("Service Tests", self.service_targets)
                if not self.all_passed:
                    return False
            if integration:
                self.run_stage("Integration Tests", self.integration_targets)
        self.render_summary()
        return self.all_passed
```
### Output Parsing
```python
import re

# Parse pytest progress: "tests/test_foo.py::test_bar PASSED [ 45%]"
PYTEST_PROGRESS = re.compile(r"\[\s*(\d+)%\]")
PYTEST_SUMMARY = re.compile(r"(\d+) passed(?:.*?(\d+) failed)?")

# Parse vitest: "✓ src/foo.test.tsx (11 tests) 297ms"
VITEST_PASS = re.compile(r"✓\s+(.+\.tsx?)\s+\((\d+)\s+test")
```
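A quick sanity check of the pytest patterns against sample output lines:

```python
import re

PYTEST_PROGRESS = re.compile(r"\[\s*(\d+)%\]")
PYTEST_SUMMARY = re.compile(r"(\d+) passed(?:.*?(\d+) failed)?")

# Progress percentage from a live pytest line
line = "tests/test_foo.py::test_bar PASSED [ 45%]"
assert PYTEST_PROGRESS.search(line).group(1) == "45"

# Pass/fail counts from the summary line
m = PYTEST_SUMMARY.search("3 passed, 1 failed in 2.10s")
assert (m.group(1), m.group(2)) == ("3", "1")

# A summary without failures leaves the second group empty
assert PYTEST_SUMMARY.search("7 passed in 0.5s").group(2) is None
```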
### Display Format
```
Unit Tests
● price-db ━━━━━━━━━━░░░░░░░░░░ 52% 497/955 12.3s
→ test_data_quality_validation
✓ orchestrator 74/74 3.2s
○ frontend pending
Service Tests
○ price-db pending
○ orchestrator pending
```
## Makefile Integration
```makefile
VERBOSE ?= 0

test:
	@python scripts/test-runner.py $(if $(filter 1,$(VERBOSE)),-v)

test-verbose:
	@python scripts/test-runner.py -v

test-unit:
	@$(MAKE) -C services/api test-unit VERBOSE=$(VERBOSE)
	@$(MAKE) -C services/worker test-unit VERBOSE=$(VERBOSE)

# Usage: make test-service SERVICE=api
test-service:
	@./scripts/test-service.sh $(SERVICE)

test-integration:
	@./scripts/run-integration-tests.sh

test-infra-up:
	@cd deploy/test && docker compose up -d --build --wait

test-infra-down:
	@cd deploy/test && docker compose down -v
```
## Service Test Script Template
```bash
#!/bin/bash
# scripts/test-service.sh
set -e
SERVICE=$1

TEST_INSTANCE_ID=$(./scripts/get-test-instance-id.sh)
export TEST_INSTANCE_ID
# CRITICAL: Isolate Docker images between worktrees
export COMPOSE_PROJECT_NAME="test-${TEST_INSTANCE_ID}"

# Start service-specific containers
cd "services/${SERVICE}/deploy/test"
docker compose up -d --build --wait

# Discover ports
API_PORT=$(docker compose port api 8000 | cut -d: -f2)
export TEST_API_URL="http://localhost:${API_PORT}"

# Run tests (`|| TEST_EXIT=$?` captures the exit code so that
# `set -e` doesn't abort before cleanup runs)
cd "../.."
TEST_EXIT=0
uv run pytest tests/integration/ --integration || TEST_EXIT=$?

# Cleanup
cd "deploy/test"
docker compose down -v
exit $TEST_EXIT
```