Compare commits

...

90 Commits

Author SHA1 Message Date
848cfd684f feat: add upload_attachment MCP tool
All checks were successful
Build and Push Docker Image / build (push) Successful in 24s
Add support for uploading file attachments to Grist documents:

- GristClient.upload_attachment() method using multipart/form-data
- upload_attachment tool function with base64 decoding and MIME detection
- Tool registration in server.py
- Comprehensive unit tests (7 new tests)

Returns attachment ID for linking to records via update_records.

Bumps version to 1.3.0.
2026-01-03 19:59:47 -05:00
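The base64 decoding and MIME detection this commit describes might look roughly like the following sketch (illustrative only — `decode_attachment`, the `client` parameter, and the multipart field name are assumptions, not the project's actual code):

```python
import base64
import mimetypes


def decode_attachment(filename: str, content_base64: str) -> tuple[bytes, str]:
    """Decode base64 content and guess a MIME type from the filename."""
    data = base64.b64decode(content_base64)
    mime, _ = mimetypes.guess_type(filename)
    # Fall back to a generic binary type when the extension is unknown.
    return data, mime or "application/octet-stream"


def upload_attachment(client, doc_id: str, filename: str, content_base64: str) -> int:
    """Upload one file via multipart/form-data and return its attachment ID."""
    data, mime = decode_attachment(filename, content_base64)
    # Grist's attachments endpoint accepts multipart uploads and returns
    # a JSON list of new attachment row IDs (exact field name assumed).
    resp = client.post(
        f"/api/docs/{doc_id}/attachments",
        files={"upload": (filename, data, mime)},
    )
    resp.raise_for_status()
    return resp.json()[0]
```

The returned ID can then be linked to a record's attachment column via `update_records`, as the commit notes.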
ea175d55a2 Add attachment upload feature design 2026-01-03 19:50:01 -05:00
db12fca615 Merge pull request 'chore(deps): update actions/checkout action to v6' (#3) from renovate/actions-checkout-6.x into master
Reviewed-on: #3
2026-01-02 17:19:43 -05:00
d540105d09 docs(proxy): clarify proxy_url usage in documentation
All checks were successful
Build and Push Docker Image / build (push) Successful in 21s
2026-01-02 15:01:33 -05:00
d40ae0b238 feat(main): use GRIST_MCP_URL in startup config output 2026-01-02 14:58:55 -05:00
2a60de1bf1 docs: add GRIST_MCP_URL to environment variables 2026-01-02 14:56:02 -05:00
ba45de4582 fix(session): include full proxy URL from GRIST_MCP_URL env var 2026-01-02 14:54:25 -05:00
d176b03d56 chore: bump version to 1.2.0
All checks were successful
Build and Push Docker Image / build (push) Successful in 21s
2026-01-02 14:43:50 -05:00
50c5cfbab1 Merge master into feature/session-proxy 2026-01-02 14:40:37 -05:00
8484536aae fix(integration): add auth headers and fix mock server routes 2026-01-02 14:36:25 -05:00
b3bfdf97c2 fix(test): increase sleep duration for flaky expiry test 2026-01-02 14:24:10 -05:00
eabddee737 docs: update CHANGELOG for session proxy feature 2026-01-02 14:20:45 -05:00
3d1ac1fe60 test(integration): add session proxy integration test 2026-01-02 14:17:59 -05:00
ed1d14a4d4 feat(main): add /api/v1/proxy HTTP endpoint 2026-01-02 14:16:24 -05:00
80e93ab3d9 test(proxy): add permission denial test 2026-01-02 14:08:58 -05:00
7073182f9e feat(proxy): add method dispatch 2026-01-02 14:07:47 -05:00
caa435d972 feat(proxy): add request parsing 2026-01-02 13:57:38 -05:00
ba88ba01f3 feat(server): register session token tools
Add get_proxy_documentation and request_session_token tools to the MCP
server. The create_server function now accepts an optional token_manager
parameter (SessionTokenManager | None) to maintain backward compatibility.

When token_manager is None, request_session_token returns an error
message instead of creating tokens.
2026-01-02 13:51:47 -05:00
fb6d4af973 feat(tools): add request_session_token tool
Add MCP tool for agents to request short-lived session tokens for HTTP
proxy access. The tool validates that agents can only request permissions
they already have (no privilege escalation).

- Validates document access and each requested permission
- Creates session token via SessionTokenManager
- Returns token metadata including proxy URL and expiration
- Includes tests for success case and permission denial scenarios
2026-01-02 13:45:07 -05:00
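The no-privilege-escalation rule this commit describes reduces to a subset check — a minimal sketch (illustrative names, not the project's actual code):

```python
def validate_requested_permissions(granted: set[str], requested: list[str]) -> None:
    """Reject any session-token request that exceeds the agent's own grants."""
    missing = set(requested) - granted
    if missing:
        # The agent asked for something it does not already hold.
        raise PermissionError(f"not granted: {sorted(missing)}")
```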
a7bb11d765 feat(tools): add get_proxy_documentation tool
Add a new MCP tool that returns complete documentation for the HTTP
proxy API. This enables agents to get all the information they need
to construct valid proxy requests when writing scripts.

The tool is stateless and returns a static documentation dict
describing endpoints, methods, authentication, and example usage.
2026-01-02 13:39:02 -05:00
c65ec0489c test(session): add tests for invalid and expired tokens 2026-01-02 13:34:52 -05:00
681cb0f67c feat(session): add token validation 2026-01-02 13:31:18 -05:00
3c97ad407c feat(session): cap TTL at 1 hour maximum 2026-01-02 13:27:30 -05:00
110f87e53f docs: add logging configuration to README
All checks were successful
Build and Push Docker Image / build (push) Successful in 8s
2026-01-02 13:24:38 -05:00
b310ee10a9 feat(session): add SessionTokenManager with token creation
Add SessionTokenManager class that creates short-lived session tokens
for HTTP proxy access. Each token includes agent identity, document
scope, permissions, and expiration time.
2026-01-02 13:22:53 -05:00
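A token manager of the shape this commit (and the later TTL-cap commit 3c97ad407c) describes could be sketched like this — field names and defaults are assumptions, not the project's actual code:

```python
import secrets
import time
from dataclasses import dataclass

MAX_TTL_SECONDS = 3600  # TTL is capped at 1 hour


@dataclass
class SessionToken:
    token: str
    agent: str
    document: str
    permissions: list[str]
    expires_at: float


class SessionTokenManager:
    def __init__(self) -> None:
        self._tokens: dict[str, SessionToken] = {}

    def create(self, agent: str, document: str, permissions: list[str],
               ttl_seconds: int = 900) -> SessionToken:
        ttl = min(ttl_seconds, MAX_TTL_SECONDS)  # enforce the 1-hour cap
        tok = SessionToken(secrets.token_urlsafe(32), agent, document,
                           list(permissions), time.time() + ttl)
        self._tokens[tok.token] = tok
        return tok

    def validate(self, token: str) -> SessionToken:
        st = self._tokens.get(token)
        if st is None or time.time() >= st.expires_at:
            raise ValueError("invalid or expired session token")
        return st
```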
f48dafc88f fix(logging): suppress uvicorn access logs and prevent duplicate logging
All checks were successful
Build and Push Docker Image / build (push) Successful in 14s
2026-01-02 13:20:35 -05:00
d80eac4a0d fix(logging): properly suppress health checks at INFO level
All checks were successful
Build and Push Docker Image / build (push) Successful in 14s
2026-01-02 13:14:52 -05:00
4923d3110c docs: add session proxy implementation plan 2026-01-02 13:04:49 -05:00
58807ddbd0 Merge branch 'feature/logging-improvements'
All checks were successful
Build and Push Docker Image / build (push) Successful in 23s
2026-01-02 13:04:48 -05:00
09b6be14df chore: bump version to 1.1.0 and update changelog 2026-01-02 13:03:57 -05:00
4e75709c4e chore: update uv.lock for version 1.0.0 2026-01-02 13:02:40 -05:00
0cdf06546c docs: add get_proxy_documentation tool to design
Dedicated tool returns complete API spec so agents can write
reusable scripts before requesting session tokens.
2026-01-02 12:58:37 -05:00
c6fbadecfc chore(logging): add module exports 2026-01-02 12:52:40 -05:00
3cf2400232 docs: add session token proxy design
Enables agents to delegate bulk data operations to scripts,
bypassing LLM generation time for data-intensive operations.
Scripts authenticate via short-lived session tokens requested
through MCP, then call a simplified HTTP proxy endpoint.
2026-01-02 12:52:02 -05:00
1eb64803be feat(logging): suppress health checks at INFO level 2026-01-02 12:51:36 -05:00
38ccaa9cb8 feat(logging): initialize logging on server startup 2026-01-02 12:49:59 -05:00
51e90abd2d feat(logging): add tool call logging to server 2026-01-02 12:47:45 -05:00
d6fb3f4ef0 feat(logging): add get_logger helper 2026-01-02 12:45:17 -05:00
163b48f1f4 feat(logging): add setup_logging with LOG_LEVEL support 2026-01-02 12:41:40 -05:00
a668baa4d0 feat(logging): add log line formatter 2026-01-02 12:37:19 -05:00
69a65a68a6 feat(logging): add stats extraction for all tools 2026-01-02 12:35:18 -05:00
ff7dff7571 feat(logging): add token truncation helper 2026-01-02 12:30:09 -05:00
77027d762e docs: add logging improvements implementation plan 2026-01-02 12:20:26 -05:00
210cfabb52 chore: add .worktrees to gitignore 2026-01-02 12:17:11 -05:00
a31b2652bb docs: add logging improvements design 2026-01-02 12:15:59 -05:00
f79ae5546f chore(deps): update actions/checkout action to v6 2026-01-02 05:20:49 +00:00
16691e1d21 Merge pull request 'chore: Configure Renovate' (#1) from renovate/configure into master
All checks were successful
Build and Push Docker Image / build (push) Successful in 8s
Reviewed-on: #1
2026-01-01 14:08:06 -05:00
204d00caf4 feat: add host_header config for Docker networking
All checks were successful
Build and Push Docker Image / build (push) Successful in 14s
When Grist validates the Host header (common with reverse proxy setups),
internal Docker networking fails because requests arrive with
Host: container-name instead of the external domain.

The new host_header config option allows overriding the Host header
sent to Grist while still connecting via internal Docker hostnames.
2026-01-01 14:06:31 -05:00
ca03d22b97 fix: handle missing config file gracefully in Docker
All checks were successful
Build and Push Docker Image / build (push) Successful in 14s
2026-01-01 12:51:25 -05:00
107db82c52 docs: update README with step-by-step first-time setup 2026-01-01 12:10:17 -05:00
4b89837b43 chore: remove logging and resource limits from prod config 2026-01-01 12:07:47 -05:00
5aaa943010 Add renovate.json 2026-01-01 16:50:55 +00:00
c8cea249bc chore: use ghcr.io image for production deployment
- Update prod docker-compose to pull from ghcr.io/xe138/grist-mcp-server
- Remove debug step from Gitea workflow
2026-01-01 11:48:39 -05:00
ae894ff52e debug: add environment diagnostics
All checks were successful
Build and Push Docker Image / build (push) Successful in 1m56s
2026-01-01 11:29:06 -05:00
7b7eea2f67 fix: use ubuntu-docker runner with Docker CLI 2026-01-01 11:20:21 -05:00
0f2544c960 fix: use git clone instead of actions/checkout for host runner
Some checks failed
Build and Push Docker Image / build (push) Failing after 1s
2026-01-01 11:15:15 -05:00
d7ce2ad962 fix: use linux_x64 host runner for Docker access
Some checks failed
Build and Push Docker Image / test (push) Failing after 1s
Build and Push Docker Image / build (push) Has been skipped
2026-01-01 11:12:43 -05:00
5892eb5cda debug: add environment diagnostics to build job
Some checks failed
Build and Push Docker Image / test (push) Successful in 6s
Build and Push Docker Image / build (push) Failing after 6s
2026-01-01 10:59:00 -05:00
9b55dedec5 test: add test job to match ffmpeg-worker structure
Some checks failed
Build and Push Docker Image / test (push) Successful in 6s
Build and Push Docker Image / build (push) Failing after 6s
2026-01-01 10:49:16 -05:00
75bae256f2 refactor: separate Gitea and GitHub workflows
Some checks failed
Build and Push Docker Image / build (push) Failing after 7s
- Add .gitea/workflows/release.yml for Gitea builds
  - Uses plain docker commands (no action dependencies)
  - Pushes to git.prettyhefty.com registry
- Simplify .github/workflows/build.yaml for GitHub only
  - Remove Gitea detection logic
  - Only push latest tag for non-prerelease versions
2026-01-01 10:46:12 -05:00
a490eab625 revert: restore Gitea build (needs Docker on runner) 2026-01-01 10:41:16 -05:00
1bf18b9ce2 fix: skip Docker build on Gitea runner (no Docker installed) 2026-01-01 10:39:37 -05:00
c30ca25503 fix: use GITEA_ACTIONS env var for registry detection
Some checks failed
Build and Push Docker Image / build (push) Failing after 19s
The workflow was checking vars.GITEA_ACTIONS (repository variable)
but Gitea sets GITEA_ACTIONS as an environment variable. This caused
Gitea builds to incorrectly try using ghcr.io.
2026-01-01 10:36:15 -05:00
880f85a2d8 chore: release v1.0.0-alpha.1
Some checks failed
Build and Push Docker Image / build (push) Failing after 1m56s
- Add CHANGELOG.md documenting initial release features
- Update version to 1.0.0 in pyproject.toml
2026-01-01 10:31:27 -05:00
49c5043661 fix: use correct Grist API endpoint for modify_column
The Grist API uses PATCH /tables/{table}/columns with a columns array
in the body, not PATCH /tables/{table}/columns/{column_id}. Updated
the endpoint to match the API spec.
2026-01-01 10:10:49 -05:00
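In other words, the column ID moves from the URL into the request body. A sketch of the corrected request shape (helper names are illustrative):

```python
def modify_column_request(doc_id: str, table_id: str, column_id: str,
                          fields: dict) -> tuple[str, str, dict]:
    """Build the PATCH request per the Grist API spec: a columns array in the body."""
    url = f"/api/docs/{doc_id}/tables/{table_id}/columns"
    body = {"columns": [{"id": column_id, "fields": fields}]}
    return "PATCH", url, body
```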
9e96d18315 fix: import AuthError for proper permission error handling
The exception handler caught AuthError but it wasn't imported,
causing a NameError instead of a proper authorization error message.
2026-01-01 10:06:08 -05:00
88a6740b42 fix: use external port in MCP config for Docker deployments
The MCP config was using the internal container port (3000) instead of
the external mapped port. Added EXTERNAL_PORT env var support so clients
get the correct connection URL when running behind Docker port mapping.
2026-01-01 09:42:32 -05:00
6e0afa0bfb fix: correct MCP config format with type field 2026-01-01 09:36:56 -05:00
80d4347378 feat: display Claude Code MCP config on startup 2026-01-01 09:27:08 -05:00
3eee0bf296 feat: add dev config and graceful config handling
- Add deploy/dev/config.yaml for dev environment testing
- Mount config from ./config.yaml instead of project root
- Create template config if missing and exit gracefully
- Update .gitignore to only ignore root config.yaml
2026-01-01 09:22:06 -05:00
8809095549 refactor: per-connection auth via Authorization header
Replace startup token authentication with per-SSE-connection auth.
Each client now passes Bearer token in Authorization header when
connecting. Server validates against config.yaml tokens and creates
isolated Server instance per connection.

- server.py: accept (auth, agent) instead of (config_path, token)
- main.py: extract Bearer token, authenticate, create server per connection
- Remove GRIST_MCP_TOKEN from docker-compose environments
2026-01-01 08:49:58 -05:00
a2e8d76237 feat: add make dev command for attached container mode 2026-01-01 08:09:48 -05:00
8c25bec5a4 refactor: use explicit env vars in docker-compose files
Replace env_file with explicit environment variables to allow
composing grist-mcp with other services without .env conflicts.
2026-01-01 08:07:48 -05:00
7890d79bce feat: add rich test runner with progress display
- Add scripts/test-runner.py with rich progress bars and fail-fast behavior
- Add rich>=13.0.0 as dev dependency
- Update Makefile: `make test` now runs all tests (unit + integration)
- Test runner shows live progress, current test, and summary

Now `make test` runs both unit and integration tests with docker
containers, matching the docker-service-architecture skill guidelines.
2025-12-30 21:09:24 -05:00
9d73ac73b1 docs: add docker-service-architecture adaptation plan 2025-12-30 20:54:47 -05:00
c9e51c71d8 docs: update CLAUDE.md with new project structure 2025-12-30 20:49:59 -05:00
3d68de4c54 refactor: update Makefile for new deploy/ structure 2025-12-30 20:48:06 -05:00
f921412f01 feat: add test isolation scripts with dynamic port discovery
- Add get-test-instance-id.sh for branch-based container isolation
- Add run-integration-tests.sh for full test lifecycle management
- Update integration tests to read service URLs from environment
  variables (GRIST_MCP_URL, MOCK_GRIST_URL) with fallback defaults
2025-12-30 19:11:04 -05:00
757afb3c41 refactor: move docker-compose files to deploy/ directory structure
Reorganize Docker configuration into environment-specific directories:
- deploy/dev/: Development with hot reload and source mounting
- deploy/test/: Ephemeral testing with branch isolation and dynamic ports
- deploy/prod/: Production with resource limits, logging, and restart policy

Key improvements in test compose:
- Dynamic ports (no fixed 3000/8484) for parallel test runs
- Branch-isolated container/network names via TEST_INSTANCE_ID
- service_healthy condition instead of service_started
- Increased retry counts for stability
2025-12-30 17:57:09 -05:00
e235e998e4 refactor: organize tests into unit/ and integration/ directories
Move unit tests from tests/ to tests/unit/ for clearer separation
from integration tests. Update pyproject.toml testpaths and Makefile
test target to reflect the new structure.
2025-12-30 17:38:46 -05:00
c57e71b92a fix: use pure ASGI app for SSE transport compatibility
- Replace Starlette routing with direct ASGI dispatcher to avoid
  double-response issues with SSE transport
- Simplify integration test fixtures by removing async client fixture
- Consolidate integration tests into single test functions per file
  to prevent SSE connection cleanup issues between tests
- Fix add_records assertion to expect 'inserted_ids' (actual API response)
2025-12-30 15:05:32 -05:00
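A direct ASGI dispatcher of the kind this commit describes avoids a router ever writing a second response on the SSE connection. A minimal sketch (assumed paths and handler shapes, not the project's actual code):

```python
async def app(scope, receive, send):
    """Minimal pure-ASGI dispatcher: route by path, no framework router."""
    if scope["type"] != "http":
        return
    path = scope["path"]
    if path == "/health":
        await send({"type": "http.response.start", "status": 200,
                    "headers": [(b"content-type", b"application/json")]})
        await send({"type": "http.response.body", "body": b'{"status": "ok"}'})
    elif path == "/sse":
        # Hand off to the MCP SSE transport here (omitted in this sketch);
        # the transport owns the response from this point on.
        pass
    else:
        await send({"type": "http.response.start", "status": 404, "headers": []})
        await send({"type": "http.response.body", "body": b"not found"})
```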
987b6d087a feat: add Makefile for test orchestration 2025-12-30 11:48:19 -05:00
e6f737e2a3 feat: add tool integration tests with Grist API validation 2025-12-30 11:46:34 -05:00
5607946441 feat: add MCP protocol compliance tests 2025-12-30 11:44:18 -05:00
3ecd3303ce feat: add integration test fixtures with MCP client 2025-12-30 11:43:23 -05:00
6060e19b31 feat: add docker-compose for integration testing 2025-12-30 11:39:32 -05:00
ee385d82ad feat: add integration test configuration 2025-12-30 11:38:29 -05:00
7acd602ffd feat: add mock Grist server for integration testing 2025-12-30 11:37:36 -05:00
69ec6ef0e2 feat: add /health endpoint for service readiness checks 2025-12-30 11:29:46 -05:00
f63115c8b3 docs: add pre-deployment testing implementation plan 2025-12-30 11:09:56 -05:00
62 changed files with 7908 additions and 413 deletions

.gitea/workflows/release.yml Normal file

@@ -0,0 +1,45 @@
name: Build and Push Docker Image

on:
  push:
    tags:
      - 'v*.*.*'

env:
  REGISTRY: git.prettyhefty.com
  IMAGE_NAME: bill/grist-mcp

jobs:
  build:
    runs-on: ubuntu-docker
    steps:
      - name: Checkout repository
        run: |
          git clone --depth 1 --branch ${GITHUB_REF_NAME} ${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}.git .

      - name: Extract version from tag
        id: version
        run: |
          VERSION=${GITHUB_REF#refs/tags/}
          echo "VERSION=$VERSION" >> $GITHUB_OUTPUT
          if [[ "$VERSION" == *-alpha* ]] || [[ "$VERSION" == *-beta* ]] || [[ "$VERSION" == *-rc* ]]; then
            echo "IS_PRERELEASE=true" >> $GITHUB_OUTPUT
          else
            echo "IS_PRERELEASE=false" >> $GITHUB_OUTPUT
          fi

      - name: Log in to Container Registry
        run: echo "${{ secrets.REGISTRY_TOKEN }}" | docker login ${{ env.REGISTRY }} -u ${{ gitea.actor }} --password-stdin

      - name: Build and push Docker image
        run: |
          docker build -t ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.version.outputs.VERSION }} .
          docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.version.outputs.VERSION }}
          if [ "${{ steps.version.outputs.IS_PRERELEASE }}" = "false" ]; then
            docker tag ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:${{ steps.version.outputs.VERSION }} ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
            docker push ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}:latest
          fi

      - name: List images
        run: docker images | grep grist-mcp

.github/workflows/build.yaml

@@ -6,7 +6,8 @@ on:
       - 'v*.*.*'
 env:
-  IMAGE_NAME: grist-mcp
+  REGISTRY: ghcr.io
+  IMAGE_NAME: ${{ github.repository }}
 jobs:
   build:
@@ -17,55 +18,29 @@ jobs:
steps:
- name: Checkout repository
uses: actions/checkout@v4
uses: actions/checkout@8e8c483db84b4bee98b60c0593521ed34d9990e8 # v6
- name: Log in to Container Registry
uses: docker/login-action@v3
with:
registry: ${{ env.REGISTRY }}
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata for Docker
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=raw,value=latest,enable=${{ !contains(github.ref, '-alpha') && !contains(github.ref, '-beta') && !contains(github.ref, '-rc') }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Determine registry
id: registry
run: |
if [ "${{ vars.GITEA_ACTIONS }}" = "true" ]; then
# Gitea: use server URL as registry
REGISTRY="${{ github.server_url }}"
REGISTRY="${REGISTRY#https://}"
REGISTRY="${REGISTRY#http://}"
echo "registry=${REGISTRY}" >> $GITHUB_OUTPUT
echo "is_gitea=true" >> $GITHUB_OUTPUT
else
# GitHub: use GHCR
echo "registry=ghcr.io" >> $GITHUB_OUTPUT
echo "is_gitea=false" >> $GITHUB_OUTPUT
fi
- name: Log in to GitHub Container Registry
if: steps.registry.outputs.is_gitea == 'false'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Log in to Gitea Container Registry
if: steps.registry.outputs.is_gitea == 'true'
uses: docker/login-action@v3
with:
registry: ${{ steps.registry.outputs.registry }}
username: ${{ github.actor }}
password: ${{ secrets.REGISTRY_TOKEN }}
- name: Extract metadata (tags, labels)
id: meta
uses: docker/metadata-action@v5
with:
images: ${{ steps.registry.outputs.registry }}/${{ github.repository }}
tags: |
type=semver,pattern={{version}}
type=semver,pattern={{major}}.{{minor}}
type=semver,pattern={{major}}
type=raw,value=latest
- name: Build and push
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .

.gitignore vendored

@@ -2,7 +2,8 @@ __pycache__/
 *.py[cod]
 .venv/
 .env
-config.yaml
+/config.yaml
 *.egg-info/
 dist/
 .pytest_cache/
+.worktrees/

CHANGELOG.md Normal file

@@ -0,0 +1,119 @@
# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [1.3.0] - 2026-01-03
### Added
#### Attachment Upload
- **`upload_attachment` MCP tool**: Upload files to Grist documents
- Base64-encoded content input (required for JSON-based MCP protocol)
- Automatic MIME type detection from filename
- Returns attachment ID for linking to records via `update_records`
#### Usage
```python
# 1. Upload attachment
result = upload_attachment(
    document="accounting",
    filename="invoice.pdf",
    content_base64="JVBERi0xLjQK..."
)
# Returns: {"attachment_id": 42, "filename": "invoice.pdf", "size_bytes": 31395}

# 2. Link to record
update_records(document="accounting", table="Bills", records=[
    {"id": 1, "fields": {"Attachment": [42]}}
])
```
## [1.2.0] - 2026-01-02
### Added
#### Session Token Proxy
- **Session token proxy**: Agents can request short-lived tokens for bulk operations
- `get_proxy_documentation` MCP tool: returns complete proxy API spec
- `request_session_token` MCP tool: creates scoped session tokens with TTL (max 1 hour)
- `POST /api/v1/proxy` HTTP endpoint: accepts session tokens for direct API access
- Supports all 11 Grist operations (read, write, schema) via HTTP
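A script holding a session token could call the proxy roughly like this sketch — the endpoint is confirmed above, but the request body's field names (`token`, `method`, `params`) are assumptions, not documented here:

```python
import json
from urllib import request


def build_proxy_body(session_token: str, method: str, params: dict) -> bytes:
    """Serialize one proxy call (assumed body shape)."""
    return json.dumps({"token": session_token, "method": method,
                       "params": params}).encode()


def call_proxy(proxy_url: str, session_token: str, method: str, params: dict) -> dict:
    """POST a single operation to /api/v1/proxy and return the JSON response."""
    req = request.Request(
        proxy_url,
        data=build_proxy_body(session_token, method, params),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)
```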
## [1.1.0] - 2026-01-02
### Added
#### Logging
- **Tool Call Logging**: Human-readable logs for every MCP tool call with agent identity, document, stats, and duration
- **Token Truncation**: Secure token display in logs (first/last 3 chars only)
- **Stats Extraction**: Meaningful operation stats per tool (e.g., "42 records", "3 tables")
- **LOG_LEVEL Support**: Configure logging verbosity via environment variable (DEBUG, INFO, WARNING, ERROR)
- **Health Check Suppression**: `/health` requests logged at DEBUG level to reduce noise
#### Log Format
```
2026-01-02 10:15:23 | agent-name (abc...xyz) | get_records | sales | 42 records | success | 125ms
```
- Pipe-delimited format for easy parsing
- Multi-line error details with indentation
- Duration tracking in milliseconds
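Since the format is pipe-delimited, a log line like the example above can be parsed with a simple split (sketch; field names are inferred from the example, not a published schema):

```python
def parse_log_line(line: str) -> dict:
    """Split one pipe-delimited tool-call log line into named fields."""
    parts = [p.strip() for p in line.split("|")]
    keys = ["timestamp", "agent", "tool", "document", "stats", "status", "duration"]
    return dict(zip(keys, parts))
```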
## [1.0.0] - 2026-01-01
Initial release of grist-mcp, an MCP server for AI agents to interact with Grist spreadsheets.
### Added
#### Core Features
- **MCP Server**: Full Model Context Protocol implementation with SSE transport
- **Token-based Authentication**: Secure agent authentication via `GRIST_MCP_TOKEN`
- **Granular Permissions**: Per-document access control with `read`, `write`, and `schema` scopes
- **Multi-tenant Support**: Configure multiple Grist instances and documents
#### Discovery Tools
- `list_documents`: List accessible documents with their permissions
#### Read Tools
- `list_tables`: List all tables in a document
- `describe_table`: Get column metadata (id, type, formula)
- `get_records`: Fetch records with optional filter, sort, and limit
- `sql_query`: Execute read-only SELECT queries
#### Write Tools
- `add_records`: Insert new records into a table
- `update_records`: Modify existing records by ID
- `delete_records`: Remove records by ID
#### Schema Tools
- `create_table`: Create new tables with column definitions
- `add_column`: Add columns to existing tables
- `modify_column`: Change column type or formula
- `delete_column`: Remove columns from tables
#### Infrastructure
- **Docker Support**: Multi-stage Dockerfile with non-root user
- **Docker Compose**: Ready-to-deploy configuration with environment variables
- **Health Endpoint**: `/health` for container orchestration readiness checks
- **SSE Transport**: Server-Sent Events for MCP client communication
- **Environment Variable Substitution**: `${VAR}` syntax in config files
#### Testing
- **Unit Tests**: Comprehensive coverage with pytest-httpx mocking
- **Integration Tests**: Docker-based tests with ephemeral containers
- **Rich Test Runner**: Progress display for test execution
- **Test Isolation**: Dynamic port discovery for parallel test runs
#### Developer Experience
- **Makefile**: Commands for testing, building, and deployment
- **Dev Environment**: Docker Compose setup for local development
- **MCP Config Display**: Startup message with client configuration snippet
### Security
- SQL injection prevention with SELECT-only query validation
- API key isolation per document
- Token validation at startup (no runtime exposure)
- Non-root container execution

CLAUDE.md

@@ -17,11 +17,23 @@ grist-mcp is an MCP (Model Context Protocol) server that enables AI agents to in
## Commands
```bash
# Run tests
uv run pytest -v
# Run unit tests
make test-unit
# or: uv run pytest tests/unit/ -v
# Run a specific test file
uv run pytest tests/test_auth.py -v
# Run integration tests (manages containers automatically)
make test-integration
# or: ./scripts/run-integration-tests.sh
# Full pre-deploy pipeline
make pre-deploy
# Development environment
make dev-up # Start
make dev-down # Stop
# Build Docker image
make build
# Run the server (requires config and token)
CONFIG_PATH=./config.yaml GRIST_MCP_TOKEN=your-token uv run python -m grist_mcp.main
@@ -30,7 +42,7 @@ CONFIG_PATH=./config.yaml GRIST_MCP_TOKEN=your-token uv run python -m grist_mcp.
## Project Structure
```
src/grist_mcp/
src/grist_mcp/ # Source code
├── main.py # Entry point, runs stdio server
├── server.py # MCP server setup, tool registration, call_tool dispatch
├── config.py # YAML config loading with env var substitution
@@ -41,6 +53,14 @@ src/grist_mcp/
├── read.py # list_tables, describe_table, get_records, sql_query
├── write.py # add_records, update_records, delete_records
└── schema.py # create_table, add_column, modify_column, delete_column
tests/
├── unit/ # Unit tests (no containers)
└── integration/ # Integration tests (with Docker)
deploy/
├── dev/ # Development docker-compose
├── test/ # Test docker-compose (ephemeral)
└── prod/ # Production docker-compose
scripts/ # Test automation scripts
```
## Key Patterns
@@ -71,11 +91,18 @@ The optional `client` parameter enables dependency injection for testing.
## Testing
Tests use pytest-httpx to mock Grist API responses. Each test file has fixtures for common setup:
### Unit Tests (`tests/unit/`)
Fast tests using pytest-httpx to mock Grist API responses. Run with `make test-unit`.
- `test_auth.py`: Uses in-memory Config objects
- `test_grist_client.py`: Uses HTTPXMock for API mocking
- `test_tools_*.py`: Combine auth fixtures with mocked clients
### Integration Tests (`tests/integration/`)
Tests against real Grist containers. Run with `make test-integration`.
- Automatically manages Docker containers via `scripts/run-integration-tests.sh`
- Uses environment variables for configuration (no hardcoded URLs)
- Containers are ephemeral and cleaned up after tests
## Configuration
See `config.yaml.example` for the configuration format. Key points:

Makefile Normal file

@@ -0,0 +1,40 @@
.PHONY: help test test-unit test-integration build dev dev-up dev-down pre-deploy clean

VERBOSE ?= 0

# Default target
help: ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

# Testing
test: ## Run all tests (unit + integration) with rich progress display
	@uv run python scripts/test-runner.py $(if $(filter 1,$(VERBOSE)),-v)

test-unit: ## Run unit tests only
	@uv run python scripts/test-runner.py --unit-only $(if $(filter 1,$(VERBOSE)),-v)

test-integration: ## Run integration tests only (starts/stops containers)
	@uv run python scripts/test-runner.py --integration-only $(if $(filter 1,$(VERBOSE)),-v)

# Docker
build: ## Build Docker image
	docker build -t grist-mcp:latest .

dev: ## Start development environment (attached, streams logs)
	cd deploy/dev && docker compose up --build

dev-up: ## Start development environment (detached)
	cd deploy/dev && docker compose up -d --build

dev-down: ## Stop development environment
	cd deploy/dev && docker compose down

# Pre-deployment
pre-deploy: test ## Full pre-deployment pipeline
	@echo "Pre-deployment checks passed!"

# Cleanup
clean: ## Remove test artifacts and containers
	cd deploy/test && docker compose down -v --rmi local 2>/dev/null || true
	find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
	find . -type d -name .pytest_cache -exec rm -rf {} + 2>/dev/null || true

README.md

@@ -15,50 +15,33 @@ grist-mcp is a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/)
- **Security**: Token-based authentication with per-document permission scopes (read, write, schema)
- **Multi-tenant**: Support multiple Grist instances and documents
## Requirements
## Quick Start (Docker)
- Python 3.14+
### Prerequisites
- Docker and Docker Compose
- Access to one or more Grist documents with API keys
## Installation
### 1. Create configuration directory
```bash
# Clone the repository
git clone https://github.com/your-org/grist-mcp.git
cd grist-mcp
# Install with uv
uv sync --dev
mkdir grist-mcp && cd grist-mcp
```
## Configuration
Create a `config.yaml` file based on the example:
### 2. Download configuration files
```bash
# Download docker-compose.yml
curl -O https://raw.githubusercontent.com/Xe138/grist-mcp-server/master/deploy/prod/docker-compose.yml
# Download example config
curl -O https://raw.githubusercontent.com/Xe138/grist-mcp-server/master/config.yaml.example
cp config.yaml.example config.yaml
```
### Configuration Structure
### 3. Generate tokens
```yaml
# Document definitions
documents:
my-document:
url: https://docs.getgrist.com # Grist instance URL
doc_id: abcd1234 # Document ID from URL
api_key: ${GRIST_API_KEY} # API key (supports env vars)
# Agent tokens with access scopes
tokens:
- token: your-secret-token # Unique token for this agent
name: my-agent # Human-readable name
scope:
- document: my-document
permissions: [read, write] # Allowed: read, write, schema
```
### Generating Tokens
Generate a secure token for your agent:
```bash
python -c "import secrets; print(secrets.token_urlsafe(32))"
@@ -66,34 +49,53 @@ python -c "import secrets; print(secrets.token_urlsafe(32))"
openssl rand -base64 32
```
### Environment Variables
### 4. Configure config.yaml
- `CONFIG_PATH`: Path to config file (default: `/app/config.yaml`)
- `GRIST_MCP_TOKEN`: Agent token for authentication
- Config file supports `${VAR}` syntax for API keys
Edit `config.yaml` to define your Grist documents and agent tokens:
## Usage
```yaml
# Document definitions
documents:
my-document: # Friendly name (used in token scopes)
url: https://docs.getgrist.com # Your Grist instance URL
doc_id: abcd1234efgh5678 # Document ID from the URL
api_key: your-grist-api-key # Grist API key (or use ${ENV_VAR} syntax)
### Running the Server
The server uses SSE (Server-Sent Events) transport over HTTP:
```bash
# Set your agent token
export GRIST_MCP_TOKEN="your-agent-token"
# Run with custom config path (defaults to port 3000)
CONFIG_PATH=./config.yaml uv run python -m grist_mcp.main
# Or specify a custom port
PORT=8080 CONFIG_PATH=./config.yaml uv run python -m grist_mcp.main
# Agent tokens with access scopes
tokens:
- token: your-generated-token-here # The token you generated in step 3
name: my-agent # Human-readable name
scope:
- document: my-document # Must match a document name above
permissions: [read, write] # Allowed: read, write, schema
```
The server exposes two endpoints:
- `http://localhost:3000/sse` - SSE connection endpoint
- `http://localhost:3000/messages` - Message posting endpoint
**Finding your Grist document ID**: Open your Grist document in a browser. The URL will look like:
`https://docs.getgrist.com/abcd1234efgh5678/My-Document` - the document ID is `abcd1234efgh5678`.
### MCP Client Configuration
**Getting a Grist API key**: In Grist, go to Profile Settings → API → Create API Key.
### 5. Create .env file
Create a `.env` file with your agent token:
```bash
# .env
GRIST_MCP_TOKEN=your-generated-token-here
PORT=3000
```
The `GRIST_MCP_TOKEN` must match one of the tokens defined in `config.yaml`.
### 6. Start the server
```bash
docker compose up -d
```
The server will be available at `http://localhost:3000`.
### 7. Configure your MCP client
Add to your MCP client configuration (e.g., Claude Desktop):
@@ -101,24 +103,13 @@ Add to your MCP client configuration (e.g., Claude Desktop):
{
"mcpServers": {
"grist": {
"type": "sse",
"url": "http://localhost:3000/sse"
}
}
}
```
For remote deployments, use the server's public URL:
```json
{
  "mcpServers": {
    "grist": {
      "url": "https://your-server.example.com/sse"
    }
  }
}
```
## Available Tools
### Discovery
| Tool | Description |
|------|-------------|
| `modify_column` | Change a column's type or formula |
| `delete_column` | Remove a column from a table |
## Configuration Reference
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `PORT` | Server port | `3000` |
| `GRIST_MCP_TOKEN` | Agent authentication token (required) | - |
| `CONFIG_PATH` | Path to config file inside container | `/app/config.yaml` |
| `LOG_LEVEL` | Logging verbosity (`DEBUG`, `INFO`, `WARNING`, `ERROR`) | `INFO` |
| `GRIST_MCP_URL` | Public URL of this server (for session proxy tokens) | - |
### config.yaml Structure
```yaml
# Document definitions (each is self-contained)
documents:
  budget-2024:
    url: https://work.getgrist.com
    doc_id: mK7xB2pQ9mN4v
    api_key: ${GRIST_WORK_API_KEY}  # Supports environment variable substitution

  personal-tracker:
    url: https://docs.getgrist.com
    doc_id: pN0zE5sT2qP7x
    api_key: ${GRIST_PERSONAL_API_KEY}

# Agent tokens with access scopes
tokens:
  - token: your-secure-token-here
    name: finance-agent
    scope:
      - document: budget-2024
        permissions: [read, write]          # Can read and write

  - token: another-token-here
    name: readonly-agent
    scope:
      - document: budget-2024
        permissions: [read]                 # Read only
      - document: personal-tracker
        permissions: [read, write, schema]  # Full access
```
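The `${ENV_VAR}` substitution shown above can be illustrated with a short sketch (a hypothetical helper for illustration, not the server's actual loader):

```python
import os
import re

_ENV_PATTERN = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")


def substitute_env(value: str, env=None) -> str:
    """Replace ${VAR} references in a config value with environment variables."""
    env = os.environ if env is None else env

    def repl(match: re.Match) -> str:
        name = match.group(1)
        if name not in env:
            raise KeyError(f"environment variable {name} is not set")
        return env[name]

    return _ENV_PATTERN.sub(repl, value)


print(substitute_env("${GRIST_WORK_API_KEY}", {"GRIST_WORK_API_KEY": "secret"}))
# → secret
```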
### Permission Levels
- `read`: Query tables and records, run SQL queries
- `write`: Add, update, delete records
- `schema`: Create tables, add/modify/delete columns
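Checking a token's access against its scope list reduces to a simple membership test; a minimal sketch mirroring the scope structure above (hypothetical helper names):

```python
def is_allowed(scopes: list[dict], document: str, permission: str) -> bool:
    """Return True if any scope entry grants `permission` on `document`."""
    return any(
        entry["document"] == document and permission in entry["permissions"]
        for entry in scopes
    )


scopes = [
    {"document": "budget-2024", "permissions": ["read"]},
    {"document": "personal-tracker", "permissions": ["read", "write", "schema"]},
]
print(is_allowed(scopes, "budget-2024", "write"))        # → False
print(is_allowed(scopes, "personal-tracker", "schema"))  # → True
```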
## Logging
### Configuration
Set the `LOG_LEVEL` environment variable to control logging verbosity:
| Level | Description |
|-------|-------------|
| `DEBUG` | Show all logs including HTTP requests and tool arguments |
| `INFO` | Show tool calls with stats (default) |
| `WARNING` | Show only auth errors and warnings |
| `ERROR` | Show only errors |
```bash
# In .env or docker-compose.yml
LOG_LEVEL=INFO
```
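The level names map directly onto Python's standard `logging` levels; an equivalent setup sketch (illustrative, not the server's exact code):

```python
import logging
import os


def configure_logging(env=None) -> int:
    """Resolve LOG_LEVEL (default INFO) to a logging level and configure the root logger."""
    env = os.environ if env is None else env
    name = env.get("LOG_LEVEL", "INFO").upper()
    level = getattr(logging, name, logging.INFO)  # fall back to INFO on unknown names
    logging.basicConfig(level=level)
    return level


print(configure_logging({"LOG_LEVEL": "DEBUG"}) == logging.DEBUG)  # → True
```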
### Log Format
At `INFO` level, each tool call produces a single log line:
```
2026-01-02 10:15:23 | agent-name (abc...xyz) | get_records | sales | 42 records | success | 125ms
```
| Field | Description |
|-------|-------------|
| Timestamp | `YYYY-MM-DD HH:MM:SS` |
| Agent | Agent name with truncated token |
| Tool | MCP tool name |
| Document | Document name (or `-` for list_documents) |
| Stats | Operation result (e.g., `42 records`, `3 tables`) |
| Status | `success`, `auth_error`, or `error` |
| Duration | Execution time in milliseconds |
Errors include details on a second indented line:
```
2026-01-02 10:15:23 | agent-name (abc...xyz) | add_records | sales | - | error | 89ms
Grist API error: Invalid column 'foo'
```
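Because the format is stable and pipe-delimited, log lines are easy to post-process; a small parser sketch (not part of the server):

```python
def parse_log_line(line: str) -> dict:
    """Split a pipe-delimited tool-call log line into its named fields."""
    fields = [part.strip() for part in line.split("|")]
    timestamp, agent, tool, document, stats, status, duration = fields
    return {
        "timestamp": timestamp,
        "agent": agent,
        "tool": tool,
        "document": document,
        "stats": stats,
        "status": status,
        "duration_ms": int(duration.removesuffix("ms")),
    }


line = "2026-01-02 10:15:23 | agent-name (abc...xyz) | get_records | sales | 42 records | success | 125ms"
print(parse_log_line(line)["duration_ms"])  # → 125
```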
### Production Recommendations
- Use `LOG_LEVEL=INFO` for normal operation (default)
- Use `LOG_LEVEL=DEBUG` for troubleshooting (shows HTTP traffic)
- Use `LOG_LEVEL=WARNING` for minimal logging
## Security
- **Token-based auth**: Each agent has a unique token with specific document access
## Development

### Requirements

- Python 3.14+
- uv package manager

### Local Setup

```bash
# Clone the repository
git clone https://github.com/Xe138/grist-mcp-server.git
cd grist-mcp-server

# Install dependencies
uv sync --dev
```

### Running Tests

```bash
# Run unit tests
make test-unit
# or: uv run pytest -v
```
### Running Locally
```bash
export GRIST_MCP_TOKEN="your-agent-token"
CONFIG_PATH=./config.yaml uv run python -m grist_mcp.main
```
### Project Structure
```
grist-mcp/
├── src/grist_mcp/
│ ├── __init__.py
│ ├── main.py # Entry point
│ ├── server.py # MCP server setup and tool registration
│ ├── config.py # Configuration loading
│   ├── write.py          # Write operations
│   └── schema.py         # Schema operations
├── tests/
│   ├── unit/             # Unit tests
│   └── integration/      # Integration tests
├── deploy/
│   ├── dev/              # Development docker-compose
│   ├── test/             # Test docker-compose
│   └── prod/             # Production docker-compose
├── config.yaml.example
└── pyproject.toml
```
## Docker Deployment
### Prerequisites
- Docker and Docker Compose
### Quick Start
```bash
# 1. Copy example files
cp .env.example .env
cp config.yaml.example config.yaml
# 2. Edit .env with your tokens and API keys
# - Set GRIST_MCP_TOKEN to a secure agent token
# - Set your Grist API keys
# 3. Edit config.yaml with your document settings
# - Configure your Grist documents
# - Set up token scopes and permissions
# 4. Start the server
docker compose up -d
```
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `PORT` | Server port | `3000` |
| `GRIST_MCP_TOKEN` | Agent authentication token (required) | - |
| `CONFIG_PATH` | Path to config file inside container | `/app/config.yaml` |
| `GRIST_*_API_KEY` | Grist API keys referenced in config.yaml | - |
### Using Prebuilt Images
To use a prebuilt image from a container registry:
```yaml
# docker-compose.yaml
services:
  grist-mcp:
    image: your-registry/grist-mcp:latest
    ports:
      - "${PORT:-3000}:3000"
    volumes:
      - ./config.yaml:/app/config.yaml:ro
    env_file:
      - .env
    restart: unless-stopped
```
### Building Locally
```bash
# Build the image
docker build -t grist-mcp .
# Run directly
docker run -p 3000:3000 \
  -v $(pwd)/config.yaml:/app/config.yaml:ro \
  --env-file .env \
  grist-mcp
```
## License
```yaml
documents:
  # ...
  personal-tracker:
    doc_id: pN0zE5sT2qP7x
    api_key: ${GRIST_PERSONAL_API_KEY}

  # Docker networking example: connect via internal hostname,
  # but send the external domain in the Host header
  docker-grist:
    url: http://grist:8080
    doc_id: abc123
    api_key: ${GRIST_API_KEY}
    host_header: grist.example.com  # Required when Grist validates Host header

# Agent tokens with access scopes
tokens:
  - token: REPLACE_WITH_GENERATED_TOKEN
```
**deploy/dev/.env.example**

```bash
PORT=3010
```
**deploy/dev/config.yaml**

```yaml
# Development configuration for grist-mcp
#
# Token Generation:
#   python -c "import secrets; print(secrets.token_urlsafe(32))"
#   openssl rand -base64 32

# Document definitions
documents:
  mcp-test-document:
    url: https://grist.bballou.com/
    doc_id: mVQvKTAyZC1FWZQgfuVeHC
    api_key: 83a03433a61ee9d2f2bf055d7f4518bedef0421a

# Agent tokens with access scopes
tokens:
  - token: test-token-all-permissions
    name: dev-agent
    scope:
      - document: mcp-test-document
        permissions: [read, write, schema]
  - token: test-token-read-permissions
    name: dev-agent-read
    scope:
      - document: mcp-test-document
        permissions: [read]
  - token: test-token-no-schema-permissions
    name: dev-agent-no-schema
    scope:
      - document: mcp-test-document
        permissions: [read, write]
```
```yaml
# Development environment - hot reload, persistent data
services:
  grist-mcp:
    build:
      context: ../..
      dockerfile: Dockerfile
    ports:
      - "${PORT:-3000}:3000"
    volumes:
      - ../../src:/app/src:ro
      - ./config.yaml:/app/config.yaml:ro
    environment:
      - CONFIG_PATH=/app/config.yaml
      - EXTERNAL_PORT=${PORT:-3000}
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:3000/health')"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
```
**deploy/prod/.env.example**

```bash
PORT=3000
```
```yaml
# Production environment
services:
  grist-mcp:
    image: ghcr.io/xe138/grist-mcp-server:latest
    ports:
      - "${PORT:-3000}:3000"
    volumes:
      - ./config.yaml:/app/config.yaml:ro
    environment:
      - CONFIG_PATH=/app/config.yaml
      - EXTERNAL_PORT=${PORT:-3000}
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:3000/health')"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
```
```yaml
# Test environment - ephemeral, branch-isolated
services:
  grist-mcp:
    build:
      context: ../..
      dockerfile: Dockerfile
    container_name: grist-mcp-test-${TEST_INSTANCE_ID:-default}
    ports:
      - "3000"  # Dynamic port
    environment:
      - CONFIG_PATH=/app/config.yaml
    volumes:
      - ../../tests/integration/config.test.yaml:/app/config.yaml:ro
    depends_on:
      mock-grist:
        condition: service_healthy
    networks:
      - test-net
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:3000/health')"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 10s

  mock-grist:
    build:
      context: ../../tests/integration/mock_grist
    container_name: mock-grist-test-${TEST_INSTANCE_ID:-default}
    ports:
      - "8484"  # Dynamic port
    environment:
      - PORT=8484
    networks:
      - test-net
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8484/health')"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 10s

networks:
  test-net:
    name: grist-mcp-test-${TEST_INSTANCE_ID:-default}
    driver: bridge
```
Removed:

```yaml
services:
  grist-mcp:
    build: .
    ports:
      - "${PORT:-3000}:3000"
    volumes:
      - ./config.yaml:/app/config.yaml:ro
    env_file:
      - .env
    restart: unless-stopped
```
# Docker Service Architecture Adaptation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Adapt grist-mcp to follow the docker-service-architecture skill guidelines for better test isolation, environment separation, and CI/CD readiness.
**Architecture:** Single-service project pattern with 2-stage testing (unit → integration), environment-specific deploy configs (dev/test/prod), and branch-isolated test infrastructure.
**Tech Stack:** Docker Compose, Make, Python/pytest, bash scripts
---
## Current State Analysis
**What we have:**
- Single service (grist-mcp) with mock server for testing
- 2-stage testing: unit tests (41) + integration tests (2)
- docker-compose.test.yaml at project root
- docker-compose.yaml for production at root
- Basic Makefile with pre-deploy target
**Gaps vs. Skill Guidelines:**
| Area | Current | Skill Guideline |
|------|---------|-----------------|
| Directory structure | Flat docker-compose files at root | `deploy/{dev,test,prod}/` directories |
| Test organization | `tests/*.py` + `tests/integration/` | `tests/unit/` + `tests/integration/` |
| Port allocation | Fixed (3000, 8484) | Dynamic with discovery |
| Branch isolation | None | TEST_INSTANCE_ID from git branch |
| Container naming | Default | Instance-based (`-${TEST_INSTANCE_ID}`) |
| Test storage | Default volumes | tmpfs for ephemeral |
| depends_on | `service_started` | `service_healthy` |
---
## Task 1: Restructure Tests Directory
**Files:**
- Move: `tests/test_*.py``tests/unit/test_*.py`
- Keep: `tests/integration/` as-is
- Create: `tests/unit/__init__.py`
**Step 1: Create unit test directory and move files**
```bash
mkdir -p tests/unit
mv tests/test_*.py tests/unit/
touch tests/unit/__init__.py
```
**Step 2: Update pyproject.toml testpaths**
```toml
[tool.pytest.ini_options]
asyncio_mode = "auto"
testpaths = ["tests/unit", "tests/integration"]
```
**Step 3: Update Makefile test target**
```makefile
test: ## Run unit tests
	uv run pytest tests/unit/ -v
```
**Step 4: Verify tests still pass**
```bash
uv run pytest tests/unit/ -v
uv run pytest tests/ -v --ignore=tests/integration
```
**Step 5: Commit**
```bash
git add tests/ pyproject.toml Makefile
git commit -m "refactor: organize tests into unit/ and integration/ directories"
```
---
## Task 2: Create Deploy Directory Structure
**Files:**
- Create: `deploy/dev/docker-compose.yml`
- Create: `deploy/dev/.env.example`
- Create: `deploy/test/docker-compose.yml`
- Create: `deploy/prod/docker-compose.yml`
- Create: `deploy/prod/.env.example`
- Delete: `docker-compose.yaml`, `docker-compose.test.yaml` (after migration)
**Step 1: Create deploy directory structure**
```bash
mkdir -p deploy/{dev,test,prod}
```
**Step 2: Create deploy/dev/docker-compose.yml**
```yaml
# Development environment - hot reload, persistent data
services:
  grist-mcp:
    build:
      context: ../..
      dockerfile: Dockerfile
    ports:
      - "${PORT:-3000}:3000"
    volumes:
      - ../../src:/app/src:ro
      - ../../config.yaml:/app/config.yaml:ro
    env_file:
      - .env
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:3000/health')"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
```
**Step 3: Create deploy/dev/.env.example**
```bash
PORT=3000
GRIST_MCP_TOKEN=your-token-here
CONFIG_PATH=/app/config.yaml
```
**Step 4: Create deploy/test/docker-compose.yml**
```yaml
# Test environment - ephemeral, branch-isolated
services:
  grist-mcp:
    build:
      context: ../..
      dockerfile: Dockerfile
    container_name: grist-mcp-test-${TEST_INSTANCE_ID:-default}
    ports:
      - "3000"  # Dynamic port
    environment:
      - CONFIG_PATH=/app/config.yaml
      - GRIST_MCP_TOKEN=test-token
      - PORT=3000
    volumes:
      - ../../tests/integration/config.test.yaml:/app/config.yaml:ro
    depends_on:
      mock-grist:
        condition: service_healthy
    networks:
      - test-net
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:3000/health')"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 10s

  mock-grist:
    build:
      context: ../../tests/integration/mock_grist
    container_name: mock-grist-test-${TEST_INSTANCE_ID:-default}
    ports:
      - "8484"  # Dynamic port
    environment:
      - PORT=8484
    networks:
      - test-net
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8484/health')"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 10s

networks:
  test-net:
    name: grist-mcp-test-${TEST_INSTANCE_ID:-default}
    driver: bridge
```
**Step 5: Create deploy/prod/docker-compose.yml**
```yaml
# Production environment - resource limits, logging, restart policy
services:
  grist-mcp:
    build:
      context: ../..
      dockerfile: Dockerfile
    ports:
      - "${PORT:-3000}:3000"
    volumes:
      - ./config.yaml:/app/config.yaml:ro
    env_file:
      - .env
    restart: unless-stopped
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1"
        reservations:
          memory: 128M
    logging:
      driver: "json-file"
      options:
        max-size: "50m"
        max-file: "5"
    healthcheck:
      test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:3000/health')"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
```
**Step 6: Create deploy/prod/.env.example**
```bash
PORT=3000
GRIST_MCP_TOKEN=your-production-token
CONFIG_PATH=/app/config.yaml
```
**Step 7: Verify test compose works**
```bash
cd deploy/test
TEST_INSTANCE_ID=manual docker compose up -d --build
docker compose ps
docker compose down -v
```
**Step 8: Remove old compose files and commit**
```bash
rm docker-compose.yaml docker-compose.test.yaml
git add deploy/
git rm docker-compose.yaml docker-compose.test.yaml
git commit -m "refactor: move docker-compose files to deploy/ directory structure"
```
---
## Task 3: Add Test Isolation Scripts
**Files:**
- Create: `scripts/get-test-instance-id.sh`
- Create: `scripts/run-integration-tests.sh`
**Step 1: Create scripts directory**
```bash
mkdir -p scripts
```
**Step 2: Create get-test-instance-id.sh**
```bash
#!/bin/bash
# scripts/get-test-instance-id.sh
# Generate a unique instance ID from git branch for parallel test isolation
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
# Sanitize: replace non-alphanumeric with dash, limit length
echo "$BRANCH" | sed 's/[^a-zA-Z0-9]/-/g' | cut -c1-20
```
**Step 3: Create run-integration-tests.sh**
```bash
#!/bin/bash
# scripts/run-integration-tests.sh
# Run integration tests with branch isolation and dynamic port discovery
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
# Get branch-based instance ID
TEST_INSTANCE_ID=$("$SCRIPT_DIR/get-test-instance-id.sh")
export TEST_INSTANCE_ID
echo "Test instance ID: $TEST_INSTANCE_ID"
# Start containers
cd "$PROJECT_ROOT/deploy/test"
docker compose up -d --build --wait
# Discover dynamic ports
GRIST_MCP_PORT=$(docker compose port grist-mcp 3000 | cut -d: -f2)
MOCK_GRIST_PORT=$(docker compose port mock-grist 8484 | cut -d: -f2)
echo "grist-mcp available at: http://localhost:$GRIST_MCP_PORT"
echo "mock-grist available at: http://localhost:$MOCK_GRIST_PORT"
# Export for tests
export GRIST_MCP_URL="http://localhost:$GRIST_MCP_PORT"
export MOCK_GRIST_URL="http://localhost:$MOCK_GRIST_PORT"
# Run tests
cd "$PROJECT_ROOT"
TEST_EXIT=0
uv run pytest tests/integration/ -v || TEST_EXIT=$?
# Cleanup
cd "$PROJECT_ROOT/deploy/test"
docker compose down -v
exit $TEST_EXIT
```
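`docker compose port` prints an address such as `0.0.0.0:49153`, and the script above splits it on the colon to recover the host port. The same parsing in Python (a sketch, assuming that output shape):

```python
def parse_published_port(output: str) -> int:
    """Extract the host port from `docker compose port` output (e.g. '0.0.0.0:49153')."""
    # rsplit on the last colon so an IPv6 host address would not break the parse
    return int(output.strip().rsplit(":", 1)[1])


print(parse_published_port("0.0.0.0:49153\n"))  # → 49153
```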
**Step 4: Make scripts executable**
```bash
chmod +x scripts/get-test-instance-id.sh
chmod +x scripts/run-integration-tests.sh
```
**Step 5: Verify scripts work**
```bash
./scripts/get-test-instance-id.sh
./scripts/run-integration-tests.sh
```
**Step 6: Commit**
```bash
git add scripts/
git commit -m "feat: add test isolation scripts with dynamic port discovery"
```
---
## Task 4: Update Integration Tests for Dynamic Ports
**Files:**
- Modify: `tests/integration/conftest.py`
**Step 1: Update conftest.py to use environment variables**
```python
"""Fixtures for integration tests."""
import os
import time
import httpx
import pytest
# Use environment variables for dynamic port discovery
GRIST_MCP_URL = os.environ.get("GRIST_MCP_URL", "http://localhost:3000")
MOCK_GRIST_URL = os.environ.get("MOCK_GRIST_URL", "http://localhost:8484")
MAX_WAIT_SECONDS = 30
def wait_for_service(url: str, timeout: int = MAX_WAIT_SECONDS) -> bool:
"""Wait for a service to become healthy."""
start = time.time()
while time.time() - start < timeout:
try:
response = httpx.get(f"{url}/health", timeout=2.0)
if response.status_code == 200:
return True
except httpx.RequestError:
pass
time.sleep(0.5)
return False
@pytest.fixture(scope="session")
def services_ready():
"""Ensure both services are healthy before running tests."""
if not wait_for_service(MOCK_GRIST_URL):
pytest.fail(f"Mock Grist server not ready at {MOCK_GRIST_URL}")
if not wait_for_service(GRIST_MCP_URL):
pytest.fail(f"grist-mcp server not ready at {GRIST_MCP_URL}")
return True
```
**Step 2: Update test files to use environment URLs**
In `tests/integration/test_mcp_protocol.py` and `tests/integration/test_tools_integration.py`:
```python
import os
GRIST_MCP_URL = os.environ.get("GRIST_MCP_URL", "http://localhost:3000")
MOCK_GRIST_URL = os.environ.get("MOCK_GRIST_URL", "http://localhost:8484")
```
**Step 3: Run tests to verify**
```bash
./scripts/run-integration-tests.sh
```
**Step 4: Commit**
```bash
git add tests/integration/
git commit -m "feat: support dynamic ports via environment variables in tests"
```
---
## Task 5: Update Makefile
**Files:**
- Modify: `Makefile`
**Step 1: Rewrite Makefile with new structure**
```makefile
.PHONY: help test test-unit test-integration build dev-up dev-down integration pre-deploy clean

VERBOSE ?= 0
PYTEST_ARGS := $(if $(filter 1,$(VERBOSE)),-v,-q)

# Default target
help: ## Show this help
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'

# Testing
test: test-unit ## Run all tests (unit only by default)

test-unit: ## Run unit tests
	uv run pytest tests/unit/ $(PYTEST_ARGS)

test-integration: ## Run integration tests (starts/stops containers)
	./scripts/run-integration-tests.sh

# Docker
build: ## Build Docker image
	docker build -t grist-mcp:latest .

dev-up: ## Start development environment
	cd deploy/dev && docker compose up -d --build

dev-down: ## Stop development environment
	cd deploy/dev && docker compose down

# Pre-deployment
pre-deploy: test-unit test-integration ## Full pre-deployment pipeline
	@echo "Pre-deployment checks passed!"

# Cleanup
clean: ## Remove test artifacts and containers
	cd deploy/test && docker compose down -v --rmi local 2>/dev/null || true
	find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
	find . -type d -name .pytest_cache -exec rm -rf {} + 2>/dev/null || true
```
**Step 2: Verify Makefile targets**
```bash
make help
make test-unit
make test-integration
make pre-deploy
```
**Step 3: Commit**
```bash
git add Makefile
git commit -m "refactor: update Makefile for new deploy/ structure"
```
---
## Task 6: Update CLAUDE.md
**Files:**
- Modify: `CLAUDE.md`
**Step 1: Update commands section**
Add to CLAUDE.md:
````markdown
## Commands

```bash
# Run unit tests
make test-unit
# or: uv run pytest tests/unit/ -v

# Run integration tests (manages containers automatically)
make test-integration
# or: ./scripts/run-integration-tests.sh

# Full pre-deploy pipeline
make pre-deploy

# Development environment
make dev-up    # Start
make dev-down  # Stop

# Build Docker image
make build
```

## Project Structure

```
src/grist_mcp/        # Source code
tests/
├── unit/             # Unit tests (no containers)
└── integration/      # Integration tests (with Docker)
deploy/
├── dev/              # Development docker-compose
├── test/             # Test docker-compose (ephemeral)
└── prod/             # Production docker-compose
scripts/              # Test automation scripts
```
````
**Step 2: Commit**
```bash
git add CLAUDE.md
git commit -m "docs: update CLAUDE.md with new project structure"
```
---
## Task 7: Final Verification
**Step 1: Run full pre-deploy pipeline**
```bash
make pre-deploy
```
Expected output:
- Unit tests pass (41 tests)
- Integration tests pass with branch isolation
- Containers cleaned up
**Step 2: Test parallel execution (optional)**
```bash
# In terminal 1
git checkout -b test-branch-1
make test-integration &
# In terminal 2
git checkout -b test-branch-2
make test-integration &
```
Both should run without port conflicts.
**Step 3: Commit final verification**
```bash
git add .
git commit -m "chore: complete docker-service-architecture adaptation"
```
---
## Summary of Changes
| Before | After |
|--------|-------|
| `tests/test_*.py` | `tests/unit/test_*.py` |
| `docker-compose.yaml` | `deploy/dev/docker-compose.yml` |
| `docker-compose.test.yaml` | `deploy/test/docker-compose.yml` |
| (none) | `deploy/prod/docker-compose.yml` |
| Fixed ports (3000, 8484) | Dynamic ports with discovery |
| No branch isolation | TEST_INSTANCE_ID from git branch |
| `service_started` | `service_healthy` |
| Basic Makefile | Environment-aware with VERBOSE support |
## Benefits
1. **Parallel testing** - Multiple branches can run tests simultaneously
2. **Environment parity** - Clear dev/test/prod separation
3. **CI/CD ready** - Scripts work in automated pipelines
4. **Faster feedback** - Dynamic ports eliminate conflicts
5. **Cleaner structure** - Tests and deploys clearly organized
# Pre-Deployment Testing Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Create a pre-deployment test pipeline with Makefile orchestration, mock Grist server, and MCP protocol integration tests.
**Architecture:** Makefile orchestrates unit tests, Docker builds, and integration tests. Integration tests use the MCP Python SDK to connect to the containerized grist-mcp server, which talks to a mock Grist API server. Both run in docker-compose on an isolated network.
**Tech Stack:** Python 3.14, pytest, MCP SDK, Starlette (mock server), Docker Compose, Make
---
## Task 1: Add Health Endpoint to grist-mcp
The integration tests need to poll for service readiness. Add a `/health` endpoint.
**Files:**
- Modify: `src/grist_mcp/main.py:42-47`
**Step 1: Add health endpoint to main.py**
In `src/grist_mcp/main.py`, add a health route to the Starlette app:
```python
from starlette.responses import JSONResponse


async def handle_health(request):
    return JSONResponse({"status": "ok"})
```
And add the route:
```python
return Starlette(
    routes=[
        Route("/health", endpoint=handle_health),
        Route("/sse", endpoint=handle_sse),
        Route("/messages", endpoint=handle_messages, methods=["POST"]),
    ]
)
```
**Step 2: Run existing tests**
Run: `uv run pytest tests/test_server.py -v`
Expected: PASS (health endpoint doesn't break existing tests)
**Step 3: Commit**
```bash
git add src/grist_mcp/main.py
git commit -m "feat: add /health endpoint for service readiness checks"
```
---
## Task 2: Create Mock Grist Server
**Files:**
- Create: `tests/integration/mock_grist/__init__.py`
- Create: `tests/integration/mock_grist/server.py`
- Create: `tests/integration/mock_grist/Dockerfile`
- Create: `tests/integration/mock_grist/requirements.txt`
**Step 1: Create directory structure**
```bash
mkdir -p tests/integration/mock_grist
```
**Step 2: Create requirements.txt**
Create `tests/integration/mock_grist/requirements.txt`:
```
starlette>=0.41.0
uvicorn>=0.32.0
```
**Step 3: Create __init__.py**
Create empty `tests/integration/mock_grist/__init__.py`:
```python
```
**Step 4: Create server.py**
Create `tests/integration/mock_grist/server.py`:
```python
"""Mock Grist API server for integration testing."""
import json
import logging
import os
from datetime import datetime
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
logging.basicConfig(level=logging.INFO, format="%(asctime)s [MOCK-GRIST] %(message)s")
logger = logging.getLogger(__name__)
# Mock data
MOCK_TABLES = {
"People": {
"columns": [
{"id": "Name", "fields": {"type": "Text"}},
{"id": "Age", "fields": {"type": "Int"}},
{"id": "Email", "fields": {"type": "Text"}},
],
"records": [
{"id": 1, "fields": {"Name": "Alice", "Age": 30, "Email": "alice@example.com"}},
{"id": 2, "fields": {"Name": "Bob", "Age": 25, "Email": "bob@example.com"}},
],
},
"Tasks": {
"columns": [
{"id": "Title", "fields": {"type": "Text"}},
{"id": "Done", "fields": {"type": "Bool"}},
],
"records": [
{"id": 1, "fields": {"Title": "Write tests", "Done": False}},
{"id": 2, "fields": {"Title": "Deploy", "Done": False}},
],
},
}
# Track requests for test assertions
request_log: list[dict] = []
def log_request(method: str, path: str, body: dict | None = None):
"""Log a request for later inspection."""
entry = {
"timestamp": datetime.utcnow().isoformat(),
"method": method,
"path": path,
"body": body,
}
request_log.append(entry)
logger.info(f"{method} {path}" + (f" body={json.dumps(body)}" if body else ""))
async def health(request):
"""Health check endpoint."""
return JSONResponse({"status": "ok"})
async def get_request_log(request):
"""Return the request log for test assertions."""
return JSONResponse(request_log)
async def clear_request_log(request):
"""Clear the request log."""
request_log.clear()
return JSONResponse({"status": "cleared"})
async def list_tables(request):
"""GET /api/docs/{doc_id}/tables"""
doc_id = request.path_params["doc_id"]
log_request("GET", f"/api/docs/{doc_id}/tables")
tables = [{"id": name} for name in MOCK_TABLES.keys()]
return JSONResponse({"tables": tables})
async def get_table_columns(request):
"""GET /api/docs/{doc_id}/tables/{table_id}/columns"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
log_request("GET", f"/api/docs/{doc_id}/tables/{table_id}/columns")
if table_id not in MOCK_TABLES:
return JSONResponse({"error": "Table not found"}, status_code=404)
return JSONResponse({"columns": MOCK_TABLES[table_id]["columns"]})
async def get_records(request):
"""GET /api/docs/{doc_id}/tables/{table_id}/records"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
log_request("GET", f"/api/docs/{doc_id}/tables/{table_id}/records")
if table_id not in MOCK_TABLES:
return JSONResponse({"error": "Table not found"}, status_code=404)
return JSONResponse({"records": MOCK_TABLES[table_id]["records"]})
async def add_records(request):
"""POST /api/docs/{doc_id}/tables/{table_id}/records"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("POST", f"/api/docs/{doc_id}/tables/{table_id}/records", body)
# Return mock IDs for new records
new_ids = [{"id": 100 + i} for i in range(len(body.get("records", [])))]
return JSONResponse({"records": new_ids})
async def update_records(request):
"""PATCH /api/docs/{doc_id}/tables/{table_id}/records"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("PATCH", f"/api/docs/{doc_id}/tables/{table_id}/records", body)
return JSONResponse({})
async def delete_records(request):
"""POST /api/docs/{doc_id}/tables/{table_id}/data/delete"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("POST", f"/api/docs/{doc_id}/tables/{table_id}/data/delete", body)
return JSONResponse({})
async def sql_query(request):
"""GET /api/docs/{doc_id}/sql"""
doc_id = request.path_params["doc_id"]
query = request.query_params.get("q", "")
log_request("GET", f"/api/docs/{doc_id}/sql?q={query}")
# Return mock SQL results
return JSONResponse({
"records": [
{"fields": {"Name": "Alice", "Age": 30}},
{"fields": {"Name": "Bob", "Age": 25}},
]
})
async def create_tables(request):
"""POST /api/docs/{doc_id}/tables"""
doc_id = request.path_params["doc_id"]
body = await request.json()
log_request("POST", f"/api/docs/{doc_id}/tables", body)
# Return the created tables with their IDs
tables = [{"id": t["id"]} for t in body.get("tables", [])]
return JSONResponse({"tables": tables})
async def add_column(request):
"""POST /api/docs/{doc_id}/tables/{table_id}/columns"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("POST", f"/api/docs/{doc_id}/tables/{table_id}/columns", body)
columns = [{"id": c["id"]} for c in body.get("columns", [])]
return JSONResponse({"columns": columns})
async def modify_column(request):
"""PATCH /api/docs/{doc_id}/tables/{table_id}/columns/{col_id}"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
col_id = request.path_params["col_id"]
body = await request.json()
log_request("PATCH", f"/api/docs/{doc_id}/tables/{table_id}/columns/{col_id}", body)
return JSONResponse({})
async def delete_column(request):
"""DELETE /api/docs/{doc_id}/tables/{table_id}/columns/{col_id}"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
col_id = request.path_params["col_id"]
log_request("DELETE", f"/api/docs/{doc_id}/tables/{table_id}/columns/{col_id}")
return JSONResponse({})
app = Starlette(
routes=[
# Test control endpoints
Route("/health", endpoint=health),
Route("/_test/requests", endpoint=get_request_log),
Route("/_test/requests/clear", endpoint=clear_request_log, methods=["POST"]),
# Grist API endpoints
Route("/api/docs/{doc_id}/tables", endpoint=list_tables),
Route("/api/docs/{doc_id}/tables", endpoint=create_tables, methods=["POST"]),
Route("/api/docs/{doc_id}/tables/{table_id}/columns", endpoint=get_table_columns),
Route("/api/docs/{doc_id}/tables/{table_id}/columns", endpoint=add_column, methods=["POST"]),
Route("/api/docs/{doc_id}/tables/{table_id}/columns/{col_id}", endpoint=modify_column, methods=["PATCH"]),
Route("/api/docs/{doc_id}/tables/{table_id}/columns/{col_id}", endpoint=delete_column, methods=["DELETE"]),
Route("/api/docs/{doc_id}/tables/{table_id}/records", endpoint=get_records),
Route("/api/docs/{doc_id}/tables/{table_id}/records", endpoint=add_records, methods=["POST"]),
Route("/api/docs/{doc_id}/tables/{table_id}/records", endpoint=update_records, methods=["PATCH"]),
Route("/api/docs/{doc_id}/tables/{table_id}/data/delete", endpoint=delete_records, methods=["POST"]),
Route("/api/docs/{doc_id}/sql", endpoint=sql_query),
]
)
if __name__ == "__main__":
import uvicorn
port = int(os.environ.get("PORT", "8484"))
logger.info(f"Starting mock Grist server on port {port}")
uvicorn.run(app, host="0.0.0.0", port=port)
```
**Step 5: Create Dockerfile**
Create `tests/integration/mock_grist/Dockerfile`:
```dockerfile
FROM python:3.14-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py .
ENV PORT=8484
EXPOSE 8484
CMD ["python", "server.py"]
```
**Step 6: Commit**
```bash
git add tests/integration/mock_grist/
git commit -m "feat: add mock Grist server for integration testing"
```
---
## Task 3: Create Integration Test Configuration
**Files:**
- Create: `tests/integration/__init__.py`
- Create: `tests/integration/config.test.yaml`
**Step 1: Create __init__.py**
Create an empty `tests/integration/__init__.py` (no content; it marks the directory as an importable package).
**Step 2: Create config.test.yaml**
Create `tests/integration/config.test.yaml`:
```yaml
documents:
test-doc:
url: http://mock-grist:8484
doc_id: test-doc-id
api_key: test-api-key
tokens:
- token: test-token
name: test-agent
scope:
- document: test-doc
permissions: [read, write, schema]
```
**Step 3: Commit**
```bash
git add tests/integration/__init__.py tests/integration/config.test.yaml
git commit -m "feat: add integration test configuration"
```
---
## Task 4: Create Docker Compose Test Configuration
**Files:**
- Create: `docker-compose.test.yaml`
**Step 1: Create docker-compose.test.yaml**
Create `docker-compose.test.yaml`:
```yaml
services:
grist-mcp:
build: .
ports:
- "3000:3000"
environment:
- CONFIG_PATH=/app/config.yaml
- GRIST_MCP_TOKEN=test-token
- PORT=3000
volumes:
- ./tests/integration/config.test.yaml:/app/config.yaml:ro
depends_on:
mock-grist:
condition: service_started
networks:
- test-net
healthcheck:
test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:3000/health')"]
interval: 5s
timeout: 5s
retries: 5
mock-grist:
build: tests/integration/mock_grist
ports:
- "8484:8484"
environment:
- PORT=8484
networks:
- test-net
healthcheck:
test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8484/health')"]
interval: 5s
timeout: 5s
retries: 5
networks:
test-net:
driver: bridge
```
**Step 2: Commit**
```bash
git add docker-compose.test.yaml
git commit -m "feat: add docker-compose for integration testing"
```
---
## Task 5: Create Integration Test Fixtures
**Files:**
- Create: `tests/integration/conftest.py`
**Step 1: Create conftest.py**
Create `tests/integration/conftest.py`:
```python
"""Fixtures for integration tests."""
import asyncio
import time
import httpx
import pytest
from mcp import ClientSession
from mcp.client.sse import sse_client
GRIST_MCP_URL = "http://localhost:3000"
MOCK_GRIST_URL = "http://localhost:8484"
MAX_WAIT_SECONDS = 30
def wait_for_service(url: str, timeout: int = MAX_WAIT_SECONDS) -> bool:
"""Wait for a service to become healthy."""
start = time.time()
while time.time() - start < timeout:
try:
response = httpx.get(f"{url}/health", timeout=2.0)
if response.status_code == 200:
return True
except httpx.RequestError:
pass
time.sleep(0.5)
return False
@pytest.fixture(scope="session")
def services_ready():
"""Ensure both services are healthy before running tests."""
if not wait_for_service(MOCK_GRIST_URL):
pytest.fail(f"Mock Grist server not ready at {MOCK_GRIST_URL}")
if not wait_for_service(GRIST_MCP_URL):
pytest.fail(f"grist-mcp server not ready at {GRIST_MCP_URL}")
return True
@pytest.fixture
async def mcp_client(services_ready):
"""Create an MCP client connected to grist-mcp via SSE."""
async with sse_client(f"{GRIST_MCP_URL}/sse") as (read_stream, write_stream):
async with ClientSession(read_stream, write_stream) as session:
await session.initialize()
yield session
@pytest.fixture
def mock_grist_client(services_ready):
"""HTTP client for interacting with mock Grist test endpoints."""
with httpx.Client(base_url=MOCK_GRIST_URL, timeout=10.0) as client:
yield client
@pytest.fixture(autouse=True)
def clear_mock_grist_log(mock_grist_client):
"""Clear the mock Grist request log before each test."""
mock_grist_client.post("/_test/requests/clear")
yield
```
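The async `mcp_client` fixture above yields from an `async with` stack, which requires an async-capable pytest plugin. A minimal sketch, assuming pytest-asyncio is the plugin in use: enabling auto mode in `pyproject.toml` lets plain `@pytest.fixture` async generators work (under the default strict mode they would instead need the `@pytest_asyncio.fixture` decorator).

```toml
# Assumption: pytest-asyncio is installed; auto mode collects async tests
# and async fixtures without per-function decorators.
[tool.pytest.ini_options]
asyncio_mode = "auto"
```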
**Step 2: Commit**
```bash
git add tests/integration/conftest.py
git commit -m "feat: add integration test fixtures with MCP client"
```
---
## Task 6: Create MCP Protocol Tests
**Files:**
- Create: `tests/integration/test_mcp_protocol.py`
**Step 1: Create test_mcp_protocol.py**
Create `tests/integration/test_mcp_protocol.py`:
```python
"""Test MCP protocol compliance over SSE transport."""
import pytest
@pytest.mark.asyncio
async def test_mcp_connection_initializes(mcp_client):
"""Test that MCP client can connect and initialize."""
# If we get here, connection and initialization succeeded
assert mcp_client is not None
@pytest.mark.asyncio
async def test_list_tools_returns_all_tools(mcp_client):
"""Test that list_tools returns all expected tools."""
result = await mcp_client.list_tools()
tool_names = [tool.name for tool in result.tools]
expected_tools = [
"list_documents",
"list_tables",
"describe_table",
"get_records",
"sql_query",
"add_records",
"update_records",
"delete_records",
"create_table",
"add_column",
"modify_column",
"delete_column",
]
for expected in expected_tools:
assert expected in tool_names, f"Missing tool: {expected}"
assert len(result.tools) == 12
@pytest.mark.asyncio
async def test_list_tools_has_descriptions(mcp_client):
"""Test that all tools have descriptions."""
result = await mcp_client.list_tools()
for tool in result.tools:
assert tool.description, f"Tool {tool.name} has no description"
assert len(tool.description) > 10, f"Tool {tool.name} description too short"
@pytest.mark.asyncio
async def test_list_tools_has_input_schemas(mcp_client):
"""Test that all tools have input schemas."""
result = await mcp_client.list_tools()
for tool in result.tools:
assert tool.inputSchema is not None, f"Tool {tool.name} has no inputSchema"
assert "type" in tool.inputSchema, f"Tool {tool.name} schema missing type"
```
**Step 2: Commit**
```bash
git add tests/integration/test_mcp_protocol.py
git commit -m "feat: add MCP protocol compliance tests"
```
---
## Task 7: Create Tool Integration Tests
**Files:**
- Create: `tests/integration/test_tools_integration.py`
**Step 1: Create test_tools_integration.py**
Create `tests/integration/test_tools_integration.py`:
```python
"""Test tool calls through MCP client to verify Grist API interactions."""
import json
import pytest
@pytest.mark.asyncio
async def test_list_documents(mcp_client):
"""Test list_documents returns accessible documents."""
result = await mcp_client.call_tool("list_documents", {})
assert len(result.content) == 1
data = json.loads(result.content[0].text)
assert "documents" in data
assert len(data["documents"]) == 1
assert data["documents"][0]["name"] == "test-doc"
assert "read" in data["documents"][0]["permissions"]
@pytest.mark.asyncio
async def test_list_tables(mcp_client, mock_grist_client):
"""Test list_tables calls correct Grist API endpoint."""
result = await mcp_client.call_tool("list_tables", {"document": "test-doc"})
# Check response
data = json.loads(result.content[0].text)
assert "tables" in data
assert "People" in data["tables"]
assert "Tasks" in data["tables"]
# Verify mock received correct request
log = mock_grist_client.get("/_test/requests").json()
assert len(log) >= 1
assert log[-1]["method"] == "GET"
assert "/tables" in log[-1]["path"]
@pytest.mark.asyncio
async def test_describe_table(mcp_client, mock_grist_client):
"""Test describe_table returns column information."""
result = await mcp_client.call_tool(
"describe_table",
{"document": "test-doc", "table": "People"}
)
data = json.loads(result.content[0].text)
assert "columns" in data
column_ids = [c["id"] for c in data["columns"]]
assert "Name" in column_ids
assert "Age" in column_ids
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
assert any("/columns" in entry["path"] for entry in log)
@pytest.mark.asyncio
async def test_get_records(mcp_client, mock_grist_client):
"""Test get_records fetches records from table."""
result = await mcp_client.call_tool(
"get_records",
{"document": "test-doc", "table": "People"}
)
data = json.loads(result.content[0].text)
assert "records" in data
assert len(data["records"]) == 2
assert data["records"][0]["Name"] == "Alice"
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
assert any("/records" in entry["path"] and entry["method"] == "GET" for entry in log)
@pytest.mark.asyncio
async def test_sql_query(mcp_client, mock_grist_client):
"""Test sql_query executes SQL and returns results."""
result = await mcp_client.call_tool(
"sql_query",
{"document": "test-doc", "query": "SELECT Name, Age FROM People"}
)
data = json.loads(result.content[0].text)
assert "records" in data
assert len(data["records"]) >= 1
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
assert any("/sql" in entry["path"] for entry in log)
@pytest.mark.asyncio
async def test_add_records(mcp_client, mock_grist_client):
"""Test add_records sends correct payload to Grist."""
new_records = [
{"Name": "Charlie", "Age": 35, "Email": "charlie@example.com"}
]
result = await mcp_client.call_tool(
"add_records",
{"document": "test-doc", "table": "People", "records": new_records}
)
data = json.loads(result.content[0].text)
assert "record_ids" in data
assert len(data["record_ids"]) == 1
# Verify API call body
log = mock_grist_client.get("/_test/requests").json()
post_requests = [e for e in log if e["method"] == "POST" and "/records" in e["path"]]
assert len(post_requests) >= 1
assert post_requests[-1]["body"]["records"][0]["fields"]["Name"] == "Charlie"
@pytest.mark.asyncio
async def test_update_records(mcp_client, mock_grist_client):
"""Test update_records sends correct payload to Grist."""
updates = [
{"id": 1, "fields": {"Age": 31}}
]
result = await mcp_client.call_tool(
"update_records",
{"document": "test-doc", "table": "People", "records": updates}
)
data = json.loads(result.content[0].text)
assert "updated" in data
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
patch_requests = [e for e in log if e["method"] == "PATCH" and "/records" in e["path"]]
assert len(patch_requests) >= 1
@pytest.mark.asyncio
async def test_delete_records(mcp_client, mock_grist_client):
"""Test delete_records sends correct IDs to Grist."""
result = await mcp_client.call_tool(
"delete_records",
{"document": "test-doc", "table": "People", "record_ids": [1, 2]}
)
data = json.loads(result.content[0].text)
assert "deleted" in data
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
delete_requests = [e for e in log if "/data/delete" in e["path"]]
assert len(delete_requests) >= 1
assert delete_requests[-1]["body"] == [1, 2]
@pytest.mark.asyncio
async def test_create_table(mcp_client, mock_grist_client):
"""Test create_table sends correct schema to Grist."""
columns = [
{"id": "Title", "type": "Text"},
{"id": "Count", "type": "Int"},
]
result = await mcp_client.call_tool(
"create_table",
{"document": "test-doc", "table_id": "NewTable", "columns": columns}
)
data = json.loads(result.content[0].text)
assert "table_id" in data
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
post_tables = [e for e in log if e["method"] == "POST" and e["path"].endswith("/tables")]
assert len(post_tables) >= 1
@pytest.mark.asyncio
async def test_add_column(mcp_client, mock_grist_client):
"""Test add_column sends correct column definition."""
result = await mcp_client.call_tool(
"add_column",
{
"document": "test-doc",
"table": "People",
"column_id": "Phone",
"column_type": "Text",
}
)
data = json.loads(result.content[0].text)
assert "column_id" in data
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
post_cols = [e for e in log if e["method"] == "POST" and "/columns" in e["path"]]
assert len(post_cols) >= 1
@pytest.mark.asyncio
async def test_modify_column(mcp_client, mock_grist_client):
"""Test modify_column sends correct update."""
result = await mcp_client.call_tool(
"modify_column",
{
"document": "test-doc",
"table": "People",
"column_id": "Age",
"type": "Numeric",
}
)
data = json.loads(result.content[0].text)
assert "modified" in data
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
patch_cols = [e for e in log if e["method"] == "PATCH" and "/columns/" in e["path"]]
assert len(patch_cols) >= 1
@pytest.mark.asyncio
async def test_delete_column(mcp_client, mock_grist_client):
"""Test delete_column calls correct endpoint."""
result = await mcp_client.call_tool(
"delete_column",
{
"document": "test-doc",
"table": "People",
"column_id": "Email",
}
)
data = json.loads(result.content[0].text)
assert "deleted" in data
# Verify API call
log = mock_grist_client.get("/_test/requests").json()
delete_cols = [e for e in log if e["method"] == "DELETE" and "/columns/" in e["path"]]
assert len(delete_cols) >= 1
@pytest.mark.asyncio
async def test_unauthorized_document_fails(mcp_client):
"""Test that accessing unauthorized document returns error."""
result = await mcp_client.call_tool(
"list_tables",
{"document": "unauthorized-doc"}
)
assert "error" in result.content[0].text.lower() or "authorization" in result.content[0].text.lower()
```
**Step 2: Commit**
```bash
git add tests/integration/test_tools_integration.py
git commit -m "feat: add tool integration tests with Grist API validation"
```
---
## Task 8: Create Makefile
**Files:**
- Create: `Makefile`
**Step 1: Create Makefile**
Create `Makefile`:
```makefile
.PHONY: help test build integration-up integration-test integration-down integration pre-deploy clean
# Default target
help: ## Show this help
@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}'
test: ## Run unit tests
uv run pytest tests/ -v --ignore=tests/integration
build: ## Build Docker images for testing
docker compose -f docker-compose.test.yaml build
integration-up: ## Start integration test containers
docker compose -f docker-compose.test.yaml up -d
@echo "Waiting for services to be ready..."
@sleep 5
integration-test: ## Run integration tests (containers must be up)
uv run pytest tests/integration/ -v
integration-down: ## Stop and remove test containers
docker compose -f docker-compose.test.yaml down -v
integration: build integration-up ## Full integration cycle (build, up, test, down)
@$(MAKE) integration-test || ($(MAKE) integration-down && exit 1)
@$(MAKE) integration-down
pre-deploy: test integration ## Full pre-deployment pipeline (unit tests + integration)
@echo "Pre-deployment checks passed!"
clean: ## Remove all test artifacts and containers
docker compose -f docker-compose.test.yaml down -v --rmi local 2>/dev/null || true
find . -type d -name __pycache__ -exec rm -rf {} + 2>/dev/null || true
find . -type d -name .pytest_cache -exec rm -rf {} + 2>/dev/null || true
```
**Step 2: Verify Makefile syntax**
Run: `make help`
Expected: List of available targets with descriptions
**Step 3: Commit**
```bash
git add Makefile
git commit -m "feat: add Makefile for test orchestration"
```
---
## Task 9: Run Full Pre-Deploy Pipeline
**Step 1: Run unit tests**
Run: `make test`
Expected: All unit tests pass
**Step 2: Run full pre-deploy**
Run: `make pre-deploy`
Expected: Unit tests pass, Docker builds succeed, integration tests pass, containers cleaned up
**Step 3: Commit any fixes needed**
If any tests fail, fix them and commit:
```bash
git add -A
git commit -m "fix: resolve integration test issues"
```
---
## Summary
Files created:
- `src/grist_mcp/main.py` - Modified with /health endpoint
- `tests/integration/mock_grist/__init__.py`
- `tests/integration/mock_grist/server.py`
- `tests/integration/mock_grist/Dockerfile`
- `tests/integration/mock_grist/requirements.txt`
- `tests/integration/__init__.py`
- `tests/integration/config.test.yaml`
- `tests/integration/conftest.py`
- `tests/integration/test_mcp_protocol.py`
- `tests/integration/test_tools_integration.py`
- `docker-compose.test.yaml`
- `Makefile`
Usage:
```bash
make help # Show all targets
make test # Unit tests only
make integration    # Full integration cycle (build, up, test, down)
make pre-deploy # Full pipeline
make clean # Cleanup
```

# Logging Improvements Design
## Overview
Improve MCP server logging to provide meaningful operational visibility. Replace generic HTTP request logs with application-level context including agent identity, tool usage, document access, and operation stats.
## Current State
Logs show only uvicorn HTTP requests with no application context:
```
INFO: 172.20.0.2:43254 - "POST /messages?session_id=... HTTP/1.1" 202 Accepted
INFO: 127.0.0.1:41508 - "GET /health HTTP/1.1" 200 OK
```
## Desired State
Human-readable single-line format with full context:
```
2025-01-02 10:15:23 | dev-agent (abc...xyz) | get_records | sales | 42 records | success | 125ms
2025-01-02 10:15:24 | dev-agent (abc...xyz) | update_records | sales | 3 records | success | 89ms
2025-01-02 10:15:25 | dev-agent (abc...xyz) | add_records | inventory | 5 records | error | 89ms
Grist API error: Invalid column 'foo'
```
## Design Decisions
| Decision | Choice |
|----------|--------|
| Log format | Human-readable single-line (pipe-delimited) |
| Configuration | Environment variable only (`LOG_LEVEL`) |
| Log levels | Standard (DEBUG/INFO/WARNING/ERROR) |
| Health checks | DEBUG level only (suppressed at INFO) |
| Error details | Multi-line (indented on second line) |
## Log Format
```
YYYY-MM-DD HH:MM:SS | <agent_name> (<token_truncated>) | <tool> | <document> | <stats> | <status> | <duration>
```
**Token truncation:** First 3 and last 3 characters (e.g., `abc...xyz`). Tokens <=8 chars show `***`.
**Document field:** Shows `-` for tools without a document (e.g., `list_documents`).
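A short illustration of the truncation rule (matching the helper that `src/grist_mcp/logging.py` will provide):

```python
def truncate_token(token: str) -> str:
    # Tokens of 8 characters or fewer are fully masked for security;
    # longer tokens keep only the first and last 3 characters.
    if len(token) <= 8:
        return "***"
    return f"{token[:3]}...{token[-3:]}"

print(truncate_token("abcdefghijklmnop"))  # abc...nop
print(truncate_token("short"))             # ***
```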
## Log Levels
| Level | Events |
|-------|--------|
| ERROR | Unhandled exceptions, Grist API failures |
| WARNING | Auth errors (invalid token, permission denied) |
| INFO | Tool calls (one line per call with stats) |
| DEBUG | Health checks, detailed arguments, full results |
**Environment variable:** `LOG_LEVEL` (default: `INFO`)
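The table above calls for suppressing health checks below DEBUG, but those access lines come from uvicorn, not from `grist_mcp` loggers. A minimal sketch of one way to do this in `main.py`, assuming a `logging.Filter` attached to `uvicorn.access` (uvicorn's exact access-log message format may differ; the path substring check is the idea, and `HealthCheckFilter` is a hypothetical name):

```python
import logging

class HealthCheckFilter(logging.Filter):
    """Drop access-log lines for /health unless the app runs at DEBUG."""

    def filter(self, record: logging.LogRecord) -> bool:
        if "/health" in record.getMessage():
            # Keep health-check lines only when DEBUG logging is enabled.
            return logging.getLogger("grist_mcp").isEnabledFor(logging.DEBUG)
        return True

logging.getLogger("uvicorn.access").addFilter(HealthCheckFilter())
```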
## Stats Per Tool
| Tool | Stats |
|------|-------|
| `list_documents` | `N docs` |
| `list_tables` | `N tables` |
| `describe_table` | `N columns` |
| `get_records` | `N records` |
| `sql_query` | `N rows` |
| `add_records` | `N records` |
| `update_records` | `N records` |
| `delete_records` | `N records` |
| `create_table` | `N columns` |
| `add_column` | `1 column` |
| `modify_column` | `1 column` |
| `delete_column` | `1 column` |
## Files Changed
| File | Change |
|------|--------|
| `src/grist_mcp/logging.py` | New - logging setup, formatters, stats extraction |
| `src/grist_mcp/main.py` | Call `setup_logging()`, configure uvicorn logger |
| `src/grist_mcp/server.py` | Wrap `call_tool` with logging |
| `tests/unit/test_logging.py` | New - unit tests for logging module |
Tool implementations in `tools/` remain unchanged - logging is handled at the server layer.
## Testing
**Unit tests:**
- `test_setup_logging_default_level`
- `test_setup_logging_from_env`
- `test_token_truncation`
- `test_extract_stats`
- `test_format_log_line`
- `test_error_multiline_format`
**Manual verification:**
- Run `make dev-up`, make tool calls, verify log format
- Test with `LOG_LEVEL=DEBUG` for verbose output

# Logging Improvements Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** Add informative application-level logging that shows agent identity, tool usage, document access, and operation stats.
**Architecture:** New `logging.py` module provides setup and formatting. `server.py` wraps tool calls with timing and stats extraction. `main.py` initializes logging and configures uvicorn to suppress health check noise.
**Tech Stack:** Python `logging` stdlib, custom `Formatter`, uvicorn log config
---
### Task 1: Token Truncation Helper
**Files:**
- Create: `src/grist_mcp/logging.py`
- Test: `tests/unit/test_logging.py`
**Step 1: Write the failing test**
Create `tests/unit/test_logging.py`:
```python
"""Unit tests for logging module."""
import pytest
from grist_mcp.logging import truncate_token
class TestTruncateToken:
def test_normal_token_shows_prefix_suffix(self):
token = "abcdefghijklmnop"
assert truncate_token(token) == "abc...nop"
def test_short_token_shows_asterisks(self):
token = "abcdefgh" # 8 chars
assert truncate_token(token) == "***"
def test_very_short_token_shows_asterisks(self):
token = "abc"
assert truncate_token(token) == "***"
```
**Step 2: Run test to verify it fails**
Run: `uv run pytest tests/unit/test_logging.py -v`
Expected: FAIL with "No module named 'grist_mcp.logging'"
**Step 3: Write minimal implementation**
Create `src/grist_mcp/logging.py`:
```python
"""Logging configuration and utilities."""
def truncate_token(token: str) -> str:
"""Truncate token to show first 3 and last 3 chars.
Tokens 8 chars or shorter show *** for security.
"""
if len(token) <= 8:
return "***"
return f"{token[:3]}...{token[-3:]}"
```
**Step 4: Run test to verify it passes**
Run: `uv run pytest tests/unit/test_logging.py -v`
Expected: PASS (3 tests)
**Step 5: Commit**
```bash
git add src/grist_mcp/logging.py tests/unit/test_logging.py
git commit -m "feat(logging): add token truncation helper"
```
---
### Task 2: Stats Extraction Function
**Files:**
- Modify: `src/grist_mcp/logging.py`
- Modify: `tests/unit/test_logging.py`
**Step 1: Write the failing tests**
Add to `tests/unit/test_logging.py`:
```python
from grist_mcp.logging import truncate_token, extract_stats
class TestExtractStats:
def test_list_documents(self):
result = {"documents": [{"name": "a"}, {"name": "b"}, {"name": "c"}]}
assert extract_stats("list_documents", {}, result) == "3 docs"
def test_list_tables(self):
result = {"tables": ["Orders", "Products"]}
assert extract_stats("list_tables", {}, result) == "2 tables"
def test_describe_table(self):
result = {"columns": [{"id": "A"}, {"id": "B"}]}
assert extract_stats("describe_table", {}, result) == "2 columns"
def test_get_records(self):
result = {"records": [{"id": 1}, {"id": 2}]}
assert extract_stats("get_records", {}, result) == "2 records"
def test_sql_query(self):
result = {"records": [{"a": 1}, {"a": 2}, {"a": 3}]}
assert extract_stats("sql_query", {}, result) == "3 rows"
def test_add_records_from_args(self):
args = {"records": [{"a": 1}, {"a": 2}]}
assert extract_stats("add_records", args, {"ids": [1, 2]}) == "2 records"
def test_update_records_from_args(self):
args = {"records": [{"id": 1, "fields": {}}, {"id": 2, "fields": {}}]}
assert extract_stats("update_records", args, {}) == "2 records"
def test_delete_records_from_args(self):
args = {"record_ids": [1, 2, 3]}
assert extract_stats("delete_records", args, {}) == "3 records"
def test_create_table(self):
args = {"columns": [{"id": "A"}, {"id": "B"}]}
assert extract_stats("create_table", args, {}) == "2 columns"
def test_single_column_ops(self):
assert extract_stats("add_column", {}, {}) == "1 column"
assert extract_stats("modify_column", {}, {}) == "1 column"
assert extract_stats("delete_column", {}, {}) == "1 column"
def test_unknown_tool(self):
assert extract_stats("unknown_tool", {}, {}) == "-"
```
**Step 2: Run test to verify it fails**
Run: `uv run pytest tests/unit/test_logging.py::TestExtractStats -v`
Expected: FAIL with "cannot import name 'extract_stats'"
**Step 3: Write minimal implementation**
Add to `src/grist_mcp/logging.py`:
```python
def extract_stats(tool_name: str, arguments: dict, result: dict) -> str:
"""Extract meaningful stats from tool call based on tool type."""
if tool_name == "list_documents":
count = len(result.get("documents", []))
return f"{count} docs"
if tool_name == "list_tables":
count = len(result.get("tables", []))
return f"{count} tables"
if tool_name == "describe_table":
count = len(result.get("columns", []))
return f"{count} columns"
if tool_name == "get_records":
count = len(result.get("records", []))
return f"{count} records"
if tool_name == "sql_query":
count = len(result.get("records", []))
return f"{count} rows"
if tool_name == "add_records":
count = len(arguments.get("records", []))
return f"{count} records"
if tool_name == "update_records":
count = len(arguments.get("records", []))
return f"{count} records"
if tool_name == "delete_records":
count = len(arguments.get("record_ids", []))
return f"{count} records"
if tool_name == "create_table":
count = len(arguments.get("columns", []))
return f"{count} columns"
if tool_name in ("add_column", "modify_column", "delete_column"):
return "1 column"
return "-"
```
**Step 4: Run test to verify it passes**
Run: `uv run pytest tests/unit/test_logging.py -v`
Expected: PASS (all tests)
**Step 5: Commit**
```bash
git add src/grist_mcp/logging.py tests/unit/test_logging.py
git commit -m "feat(logging): add stats extraction for all tools"
```
---
### Task 3: Log Line Formatter
**Files:**
- Modify: `src/grist_mcp/logging.py`
- Modify: `tests/unit/test_logging.py`
**Step 1: Write the failing tests**
Add to `tests/unit/test_logging.py`:
```python
from grist_mcp.logging import truncate_token, extract_stats, format_tool_log
class TestFormatToolLog:
def test_success_format(self):
line = format_tool_log(
agent_name="dev-agent",
token="abcdefghijklmnop",
tool="get_records",
document="sales",
stats="42 records",
status="success",
duration_ms=125,
)
assert "dev-agent" in line
assert "abc...nop" in line
assert "get_records" in line
assert "sales" in line
assert "42 records" in line
assert "success" in line
assert "125ms" in line
# Check pipe-delimited format
assert line.count("|") == 6
def test_no_document(self):
line = format_tool_log(
agent_name="dev-agent",
token="abcdefghijklmnop",
tool="list_documents",
document=None,
stats="3 docs",
status="success",
duration_ms=45,
)
assert "| - |" in line
def test_error_format(self):
line = format_tool_log(
agent_name="dev-agent",
token="abcdefghijklmnop",
tool="add_records",
document="inventory",
stats="5 records",
status="error",
duration_ms=89,
error_message="Grist API error: Invalid column 'foo'",
)
assert "error" in line
assert "\n Grist API error: Invalid column 'foo'" in line
```
**Step 2: Run test to verify it fails**
Run: `uv run pytest tests/unit/test_logging.py::TestFormatToolLog -v`
Expected: FAIL with "cannot import name 'format_tool_log'"
**Step 3: Write minimal implementation**
Add to `src/grist_mcp/logging.py`:
```python
from datetime import datetime
def format_tool_log(
agent_name: str,
token: str,
tool: str,
document: str | None,
stats: str,
status: str,
duration_ms: int,
error_message: str | None = None,
) -> str:
"""Format a tool call log line.
Format: YYYY-MM-DD HH:MM:SS | agent (token) | tool | doc | stats | status | duration
"""
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
truncated = truncate_token(token)
doc = document if document else "-"
line = f"{timestamp} | {agent_name} ({truncated}) | {tool} | {doc} | {stats} | {status} | {duration_ms}ms"
if error_message:
line += f"\n {error_message}"
return line
```
**Step 4: Run test to verify it passes**
Run: `uv run pytest tests/unit/test_logging.py -v`
Expected: PASS (all tests)
**Step 5: Commit**
```bash
git add src/grist_mcp/logging.py tests/unit/test_logging.py
git commit -m "feat(logging): add log line formatter"
```
---
### Task 4: Setup Logging Function
**Files:**
- Modify: `src/grist_mcp/logging.py`
- Modify: `tests/unit/test_logging.py`
**Step 1: Write the failing tests**
Add to `tests/unit/test_logging.py`:
```python
import logging
import os
class TestSetupLogging:
def test_default_level_is_info(self, monkeypatch):
monkeypatch.delenv("LOG_LEVEL", raising=False)
from grist_mcp.logging import setup_logging
setup_logging()
logger = logging.getLogger("grist_mcp")
assert logger.level == logging.INFO
def test_respects_log_level_env(self, monkeypatch):
monkeypatch.setenv("LOG_LEVEL", "DEBUG")
from grist_mcp.logging import setup_logging
setup_logging()
logger = logging.getLogger("grist_mcp")
assert logger.level == logging.DEBUG
def test_invalid_level_defaults_to_info(self, monkeypatch):
monkeypatch.setenv("LOG_LEVEL", "INVALID")
from grist_mcp.logging import setup_logging
setup_logging()
logger = logging.getLogger("grist_mcp")
assert logger.level == logging.INFO
```
**Step 2: Run test to verify it fails**
Run: `uv run pytest tests/unit/test_logging.py::TestSetupLogging -v`
Expected: FAIL with "cannot import name 'setup_logging'"
**Step 3: Write minimal implementation**
Add to `src/grist_mcp/logging.py`:
```python
import logging
import os
def setup_logging() -> None:
"""Configure logging based on LOG_LEVEL environment variable.
Valid levels: DEBUG, INFO, WARNING, ERROR (default: INFO)
"""
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, None)
if not isinstance(level, int):
level = logging.INFO
logger = logging.getLogger("grist_mcp")
logger.setLevel(level)
# Only add handler if not already configured
if not logger.handlers:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
```
**Step 4: Run test to verify it passes**
Run: `uv run pytest tests/unit/test_logging.py -v`
Expected: PASS (all tests)
**Step 5: Commit**
```bash
git add src/grist_mcp/logging.py tests/unit/test_logging.py
git commit -m "feat(logging): add setup_logging with LOG_LEVEL support"
```
---
### Task 5: Get Logger Helper
**Files:**
- Modify: `src/grist_mcp/logging.py`
- Modify: `tests/unit/test_logging.py`
**Step 1: Write the failing test**
Add to `tests/unit/test_logging.py`:
```python
class TestGetLogger:
def test_returns_child_logger(self):
from grist_mcp.logging import get_logger
logger = get_logger("server")
assert logger.name == "grist_mcp.server"
def test_inherits_parent_level(self, monkeypatch):
monkeypatch.setenv("LOG_LEVEL", "WARNING")
from grist_mcp.logging import setup_logging, get_logger
setup_logging()
logger = get_logger("test")
# Child inherits from parent when level is NOTSET
assert logger.getEffectiveLevel() == logging.WARNING
```
**Step 2: Run test to verify it fails**
Run: `uv run pytest tests/unit/test_logging.py::TestGetLogger -v`
Expected: FAIL with "cannot import name 'get_logger'"
**Step 3: Write minimal implementation**
Add to `src/grist_mcp/logging.py`:
```python
def get_logger(name: str) -> logging.Logger:
"""Get a child logger under the grist_mcp namespace."""
return logging.getLogger(f"grist_mcp.{name}")
```
**Step 4: Run test to verify it passes**
Run: `uv run pytest tests/unit/test_logging.py -v`
Expected: PASS (all tests)
**Step 5: Commit**
```bash
git add src/grist_mcp/logging.py tests/unit/test_logging.py
git commit -m "feat(logging): add get_logger helper"
```
---
### Task 6: Integrate Logging into Server
**Files:**
- Modify: `src/grist_mcp/server.py`
**Step 1: Add logging imports and logger**
At the top of `src/grist_mcp/server.py`, add imports:
```python
import time
from grist_mcp.logging import get_logger, extract_stats, format_tool_log
logger = get_logger("server")
```
**Step 2: Wrap call_tool with logging**
Replace the `call_tool` function body (lines 209-276) with this logged version:
```python
@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
start_time = time.time()
document = arguments.get("document")
# Log arguments at DEBUG level
logger.debug(
format_tool_log(
agent_name=_current_agent.name,
token=_current_agent.token,
tool=name,
document=document,
stats=f"args: {json.dumps(arguments)}",
status="started",
duration_ms=0,
)
)
try:
if name == "list_documents":
result = await _list_documents(_current_agent)
elif name == "list_tables":
result = await _list_tables(_current_agent, auth, arguments["document"])
elif name == "describe_table":
result = await _describe_table(
_current_agent, auth, arguments["document"], arguments["table"]
)
elif name == "get_records":
result = await _get_records(
_current_agent, auth, arguments["document"], arguments["table"],
filter=arguments.get("filter"),
sort=arguments.get("sort"),
limit=arguments.get("limit"),
)
elif name == "sql_query":
result = await _sql_query(
_current_agent, auth, arguments["document"], arguments["query"]
)
elif name == "add_records":
result = await _add_records(
_current_agent, auth, arguments["document"], arguments["table"],
arguments["records"],
)
elif name == "update_records":
result = await _update_records(
_current_agent, auth, arguments["document"], arguments["table"],
arguments["records"],
)
elif name == "delete_records":
result = await _delete_records(
_current_agent, auth, arguments["document"], arguments["table"],
arguments["record_ids"],
)
elif name == "create_table":
result = await _create_table(
_current_agent, auth, arguments["document"], arguments["table_id"],
arguments["columns"],
)
elif name == "add_column":
result = await _add_column(
_current_agent, auth, arguments["document"], arguments["table"],
arguments["column_id"], arguments["column_type"],
formula=arguments.get("formula"),
)
elif name == "modify_column":
result = await _modify_column(
_current_agent, auth, arguments["document"], arguments["table"],
arguments["column_id"],
type=arguments.get("type"),
formula=arguments.get("formula"),
)
elif name == "delete_column":
result = await _delete_column(
_current_agent, auth, arguments["document"], arguments["table"],
arguments["column_id"],
)
else:
return [TextContent(type="text", text=f"Unknown tool: {name}")]
duration_ms = int((time.time() - start_time) * 1000)
stats = extract_stats(name, arguments, result)
logger.info(
format_tool_log(
agent_name=_current_agent.name,
token=_current_agent.token,
tool=name,
document=document,
stats=stats,
status="success",
duration_ms=duration_ms,
)
)
return [TextContent(type="text", text=json.dumps(result))]
except AuthError as e:
duration_ms = int((time.time() - start_time) * 1000)
logger.warning(
format_tool_log(
agent_name=_current_agent.name,
token=_current_agent.token,
tool=name,
document=document,
stats="-",
status="auth_error",
duration_ms=duration_ms,
error_message=str(e),
)
)
return [TextContent(type="text", text=f"Authorization error: {e}")]
except Exception as e:
duration_ms = int((time.time() - start_time) * 1000)
logger.error(
format_tool_log(
agent_name=_current_agent.name,
token=_current_agent.token,
tool=name,
document=document,
stats="-",
status="error",
duration_ms=duration_ms,
error_message=str(e),
)
)
return [TextContent(type="text", text=f"Error: {e}")]
```
**Step 3: Run tests to verify nothing broke**
Run: `uv run pytest tests/unit/ -v`
Expected: PASS (all tests)
**Step 4: Commit**
```bash
git add src/grist_mcp/server.py
git commit -m "feat(logging): add tool call logging to server"
```
---
### Task 7: Initialize Logging in Main
**Files:**
- Modify: `src/grist_mcp/main.py`
**Step 1: Add logging setup to main()**
Add import at top of `src/grist_mcp/main.py`:
```python
from grist_mcp.logging import setup_logging
```
**Step 2: Call setup_logging at start of main()**
In the `main()` function, add as the first line after the port/config variables:
```python
def main():
"""Run the SSE server."""
port = int(os.environ.get("PORT", "3000"))
external_port = int(os.environ.get("EXTERNAL_PORT", str(port)))
config_path = os.environ.get("CONFIG_PATH", "/app/config.yaml")
setup_logging() # <-- Add this line
if not _ensure_config(config_path):
```
**Step 3: Run tests to verify nothing broke**
Run: `uv run pytest tests/unit/ -v`
Expected: PASS (all tests)
**Step 4: Commit**
```bash
git add src/grist_mcp/main.py
git commit -m "feat(logging): initialize logging on server startup"
```
---
### Task 8: Suppress Health Check Noise
**Files:**
- Modify: `src/grist_mcp/main.py`
**Step 1: Configure uvicorn to use custom log config**
Replace the `uvicorn.run` call in `main()` with:
```python
# Simplify uvicorn's log format to match our message-only output;
# health check filtering is added separately below
log_config = uvicorn.config.LOGGING_CONFIG
log_config["formatters"]["default"]["fmt"] = "%(message)s"
log_config["formatters"]["access"]["fmt"] = "%(message)s"
uvicorn.run(app, host="0.0.0.0", port=port, log_config=log_config)
```
**Step 2: Add health check filter**
Create a filter class and apply it. Add before the `main()` function:
```python
class HealthCheckFilter(logging.Filter):
    """Drop health check requests unless DEBUG logging is enabled."""

    def filter(self, record: logging.LogRecord) -> bool:
        if "/health" in record.getMessage():
            # Mutating record.levelno here would not suppress the record,
            # because the logger-level check has already passed by the time
            # filters run; returning False drops it instead.
            return logging.getLogger("grist_mcp").isEnabledFor(logging.DEBUG)
        return True
```
Add import at top:
```python
import logging
```
**Step 3: Apply filter in main()**
After `setup_logging()` call, add:
```python
setup_logging()
# Add health check filter to uvicorn access logger
logging.getLogger("uvicorn.access").addFilter(HealthCheckFilter())
```
**Step 4: Run tests to verify nothing broke**
Run: `uv run pytest tests/unit/ -v`
Expected: PASS (all tests)
**Step 5: Commit**
```bash
git add src/grist_mcp/main.py
git commit -m "feat(logging): suppress health checks at INFO level"
```
---
### Task 9: Manual Verification
**Step 1: Start development environment**
Run: `make dev-up`
**Step 2: Make some tool calls**
Use Claude Code or another MCP client to call some tools (e.g. `list_documents`, `get_records`).
**Step 3: Verify log format**
Check docker logs show the expected format:
```
2026-01-02 10:15:23 | dev-agent (abc...xyz) | get_records | sales | 42 records | success | 125ms
```
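The line above is assembled field by field. A minimal sketch of the formatting, consistent with the format shown (the helper name here is illustrative; the real `format_tool_log` lives in `src/grist_mcp/logging.py`):

```python
from datetime import datetime

def format_tool_log_sketch(agent_name, token, tool, document, stats, status, duration_ms):
    """Join the log fields with ' | ' in the order shown above (illustrative)."""
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    # Truncated token: first 3 and last 3 chars, or *** for short tokens
    shown = f"{token[:3]}...{token[-3:]}" if len(token) > 8 else "***"
    return " | ".join([
        timestamp,
        f"{agent_name} ({shown})",
        tool,
        document or "-",
        stats,
        status,
        f"{duration_ms}ms",
    ])
```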
**Step 4: Test DEBUG level**
Restart with `LOG_LEVEL=DEBUG` and verify:
- Health checks appear
- Detailed args appear for each call
**Step 5: Clean up**
Run: `make dev-down`
---
### Task 10: Update Module Exports
**Files:**
- Modify: `src/grist_mcp/logging.py`
**Step 1: Add __all__ export list**
At the top of `src/grist_mcp/logging.py` (after imports), add:
```python
__all__ = [
"setup_logging",
"get_logger",
"truncate_token",
"extract_stats",
"format_tool_log",
]
```
**Step 2: Run all tests**
Run: `uv run pytest tests/unit/ -v`
Expected: PASS (all tests)
**Step 3: Final commit**
```bash
git add src/grist_mcp/logging.py
git commit -m "chore(logging): add module exports"
```
---
## Summary
After completing all tasks, the logging module provides:
- `LOG_LEVEL` environment variable support (DEBUG/INFO/WARNING/ERROR)
- Human-readable pipe-delimited log format
- Token truncation for security
- Stats extraction per tool type
- Health check suppression at INFO level
- Multi-line error details
The implementation follows TDD with frequent commits, keeping each change small and verifiable.


@@ -0,0 +1,310 @@
# Session Token Proxy Design
## Problem
When an agent needs to insert, update, or query thousands of records, the LLM must generate all that JSON in its response. This is slow regardless of how fast the actual API call is. The LLM generation time is the bottleneck.
## Solution
Add a "session token" mechanism that lets agents delegate bulk data operations to scripts that call grist-mcp directly over HTTP, bypassing LLM generation entirely.
## Flow
```
1. Agent calls MCP tool:
request_session_token(document="sales", permissions=["write"], ttl_seconds=300)
2. Server generates token, stores in memory:
{"sess_abc123...": {document: "sales", permissions: ["write"], expires: <timestamp>}}
3. Server returns token to agent:
{"token": "sess_abc123...", "expires_in": 300, "proxy_url": "/api/v1/proxy"}
4. Agent spawns script with token:
python bulk_insert.py --token sess_abc123... --file data.csv
5. Script calls grist-mcp HTTP endpoint:
POST /api/v1/proxy
Authorization: Bearer sess_abc123...
{"table": "Orders", "method": "add_records", "records": [...]}
6. Server validates token, executes against Grist, returns result
```
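Steps 4 and 5 of this flow can be exercised from a plain script with no MCP client at all. A minimal sketch using only the stdlib, where the helper name is illustrative (the endpoint path, Bearer auth, and body shape come from the design above):

```python
import json
import urllib.request

PROXY_PATH = "/api/v1/proxy"  # endpoint from the design above

def build_proxy_request(host: str, token: str, method: str, **fields) -> urllib.request.Request:
    """Build the HTTP request a bulk script sends to grist-mcp (illustrative helper)."""
    body = {"method": method, **fields}
    return urllib.request.Request(
        f"{host}{PROXY_PATH}",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Example: add two records without any LLM round-trip
req = build_proxy_request(
    "http://localhost:3000",
    "sess_abc123",
    "add_records",
    table="Orders",
    records=[{"item": "Widget", "qty": 100}, {"item": "Gadget", "qty": 5}],
)
# with urllib.request.urlopen(req) as resp:  # requires a running server
#     print(json.load(resp))
```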
## Design Decisions
| Decision | Choice | Rationale |
|----------|--------|-----------|
| Token scope | Single document + permission level | Simpler than multi-doc; matches existing permission model |
| Token storage | In-memory dict | Appropriate for short-lived tokens; restart invalidates (acceptable) |
| HTTP interface | Wrapped endpoint `/api/v1/proxy` | Simpler than mirroring Grist API paths |
| Request format | Discrete fields (table, method, etc.) | Scripts don't need to know Grist internals or doc IDs |
| Document in request | Implicit from token | Token is scoped to one document; no need to specify |
| Server architecture | Single process, add routes | Already running HTTP for SSE; just add routes |
## MCP Tool: get_proxy_documentation
Returns complete documentation for the HTTP proxy API. Agents call this when writing scripts that will use the proxy.
**Input schema**:
```json
{
"type": "object",
"properties": {},
"required": []
}
```
**Response**:
```json
{
"description": "HTTP proxy API for bulk data operations. Use request_session_token to get a short-lived token, then call the proxy endpoint directly from scripts.",
"endpoint": "POST /api/v1/proxy",
"authentication": "Bearer token in Authorization header",
"request_format": {
"method": "Operation name (required)",
"table": "Table name (required for most operations)",
"...": "Additional fields vary by method"
},
"methods": {
"get_records": {
"description": "Fetch records from a table",
"fields": {"table": "string", "filter": "object (optional)", "sort": "string (optional)", "limit": "integer (optional)"}
},
"sql_query": {
"description": "Run a read-only SQL query",
"fields": {"query": "string"}
},
"list_tables": {
"description": "List all tables in the document",
"fields": {}
},
"describe_table": {
"description": "Get column information for a table",
"fields": {"table": "string"}
},
"add_records": {
"description": "Add records to a table",
"fields": {"table": "string", "records": "array of objects"}
},
"update_records": {
"description": "Update existing records",
"fields": {"table": "string", "records": "array of {id, fields}"}
},
"delete_records": {
"description": "Delete records by ID",
"fields": {"table": "string", "record_ids": "array of integers"}
},
"create_table": {
"description": "Create a new table",
"fields": {"table_id": "string", "columns": "array of {id, type}"}
},
"add_column": {
"description": "Add a column to a table",
"fields": {"table": "string", "column_id": "string", "column_type": "string", "formula": "string (optional)"}
},
"modify_column": {
"description": "Modify a column's type or formula",
"fields": {"table": "string", "column_id": "string", "type": "string (optional)", "formula": "string (optional)"}
},
"delete_column": {
"description": "Delete a column",
"fields": {"table": "string", "column_id": "string"}
}
},
"response_format": {
"success": {"success": true, "data": "..."},
"error": {"success": false, "error": "message", "code": "ERROR_CODE"}
},
"error_codes": ["UNAUTHORIZED", "INVALID_TOKEN", "TOKEN_EXPIRED", "INVALID_REQUEST", "GRIST_ERROR"],
"example_script": "#!/usr/bin/env python3\nimport requests\nimport sys\n\ntoken = sys.argv[1]\nhost = sys.argv[2]\n\nresponse = requests.post(\n f'{host}/api/v1/proxy',\n headers={'Authorization': f'Bearer {token}'},\n json={'method': 'add_records', 'table': 'Orders', 'records': [{'item': 'Widget', 'qty': 100}]}\n)\nprint(response.json())"
}
```
## MCP Tool: request_session_token
**Input schema**:
```json
{
"type": "object",
"properties": {
"document": {
"type": "string",
"description": "Document name to grant access to"
},
"permissions": {
"type": "array",
"items": {"type": "string", "enum": ["read", "write", "schema"]},
"description": "Permission levels to grant (cannot exceed agent's permissions)"
},
"ttl_seconds": {
"type": "integer",
"description": "Token lifetime in seconds (max 3600, default 300)"
}
},
"required": ["document", "permissions"]
}
```
**Response**:
```json
{
"token": "sess_a1b2c3d4...",
"document": "sales",
"permissions": ["write"],
"expires_at": "2025-01-02T15:30:00Z",
"proxy_url": "/api/v1/proxy"
}
```
**Validation**:
- Agent must have access to the requested document
- Requested permissions cannot exceed agent's permissions for that document
- TTL capped at 3600 seconds (1 hour), default 300 seconds (5 minutes)
## Proxy Endpoint
**Endpoint**: `POST /api/v1/proxy`
**Authentication**: `Authorization: Bearer <session_token>`
**Request body** - method determines required fields:
```python
# Read operations
{"method": "get_records", "table": "Orders", "filter": {...}, "sort": "date", "limit": 1000}
{"method": "sql_query", "query": "SELECT * FROM Orders WHERE amount > 100"}
{"method": "list_tables"}
{"method": "describe_table", "table": "Orders"}
# Write operations
{"method": "add_records", "table": "Orders", "records": [{...}, {...}]}
{"method": "update_records", "table": "Orders", "records": [{"id": 1, "fields": {...}}]}
{"method": "delete_records", "table": "Orders", "record_ids": [1, 2, 3]}
# Schema operations
{"method": "create_table", "table_id": "NewTable", "columns": [{...}]}
{"method": "add_column", "table": "Orders", "column_id": "status", "column_type": "Text"}
{"method": "modify_column", "table": "Orders", "column_id": "status", "type": "Choice"}
{"method": "delete_column", "table": "Orders", "column_id": "old_field"}
```
**Response format**:
```json
{"success": true, "data": {...}}
{"success": false, "error": "Permission denied", "code": "UNAUTHORIZED"}
```
**Error codes**:
- `UNAUTHORIZED` - Permission denied for this operation
- `INVALID_TOKEN` - Token format invalid or not found
- `TOKEN_EXPIRED` - Token has expired
- `INVALID_REQUEST` - Malformed request body
- `GRIST_ERROR` - Error from Grist API
## Implementation Architecture
### New Files
**`src/grist_mcp/session.py`** - Session token management:
```python
@dataclass
class SessionToken:
token: str
document: str
permissions: list[str]
agent_name: str
created_at: datetime
expires_at: datetime
class SessionTokenManager:
def __init__(self):
self._tokens: dict[str, SessionToken] = {}
def create_token(self, agent: Agent, document: str,
permissions: list[str], ttl_seconds: int) -> SessionToken:
"""Create a new session token. Validates permissions against agent's scope."""
...
def validate_token(self, token: str) -> SessionToken | None:
"""Validate token and return session info. Returns None if invalid/expired."""
# Also cleans up this token if expired
...
def cleanup_expired(self) -> int:
"""Remove all expired tokens. Returns count removed."""
...
```
**`src/grist_mcp/proxy.py`** - HTTP proxy handler:
```python
async def handle_proxy(
scope: Scope,
receive: Receive,
send: Send,
token_manager: SessionTokenManager,
auth: Authenticator
) -> None:
"""Handle POST /api/v1/proxy requests."""
# 1. Extract Bearer token from Authorization header
# 2. Validate session token
# 3. Parse request body (method, table, etc.)
# 4. Check permissions for requested method
# 5. Build GristClient for the token's document
# 6. Dispatch to appropriate tool function
# 7. Return JSON response
```
### Modified Files
**`src/grist_mcp/main.py`**:
- Import `SessionTokenManager` and `handle_proxy`
- Instantiate `SessionTokenManager` in `create_app()`
- Add route: `elif path == "/api/v1/proxy" and method == "POST"`
- Pass `token_manager` to `create_server()`
**`src/grist_mcp/server.py`**:
- Accept `token_manager` parameter in `create_server()`
- Add `get_proxy_documentation` tool to `list_tools()` (no parameters, returns static docs)
- Add `request_session_token` tool to `list_tools()`
- Add handlers in `call_tool()` for both tools
## Security
1. **No privilege escalation** - Session token can only grant permissions the agent already has for the document. Validated at token creation.
2. **Short-lived by default** - 5 minute default TTL, 1 hour maximum cap.
3. **Token format** - Prefixed with `sess_` to distinguish from agent tokens. Generated with `secrets.token_urlsafe(32)`.
4. **Lazy cleanup** - Expired tokens removed during validation. No background task needed.
5. **Audit logging** - Token creation and proxy requests logged with agent name, document, method.
## Testing
### Unit Tests
**`tests/unit/test_session.py`**:
- Token creation with valid permissions
- Token creation fails when exceeding agent permissions
- Token validation succeeds for valid token
- Token validation fails for expired token
- Token validation fails for unknown token
- TTL capping at maximum
- Cleanup removes expired tokens
**`tests/unit/test_proxy.py`**:
- Request parsing for each method type
- Error response for invalid token
- Error response for expired token
- Error response for permission denied
- Error response for malformed request
- Successful dispatch to each tool function (mocked)
### Integration Tests
**`tests/integration/test_session_proxy.py`**:
- Full flow: MCP token request → HTTP proxy call → Grist operation
- Verify data actually written to Grist
- Verify token expiry prevents access

(File diff suppressed because it is too large.)


@@ -0,0 +1,187 @@
# Attachment Upload Feature Design
**Date:** 2026-01-03
**Status:** Approved
## Summary
Add an `upload_attachment` MCP tool to upload files to Grist documents and receive an attachment ID for linking to records.
## Design Decisions
| Decision | Choice | Rationale |
|----------|--------|-----------|
| Content encoding | Base64 string | MCP tools use JSON; binary must be encoded |
| Batch support | Single file only | YAGNI; caller can loop if needed |
| Linking behavior | Upload only, return ID | Single responsibility; use existing `update_records` to link |
| Download support | Not included | YAGNI; can add later if needed |
| Permission level | Write | Attachments are data, not schema |
| Proxy support | MCP tool only | Reduces scope; scripts can use Grist API directly |
## Tool Interface
### Input Schema
```json
{
"type": "object",
"properties": {
"document": {
"type": "string",
"description": "Document name"
},
"filename": {
"type": "string",
"description": "Filename with extension (e.g., 'invoice.pdf')"
},
"content_base64": {
"type": "string",
"description": "File content as base64-encoded string"
},
"content_type": {
"type": "string",
"description": "MIME type (optional, auto-detected from filename if omitted)"
}
},
"required": ["document", "filename", "content_base64"]
}
```
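On the caller side, `content_base64` is produced with standard base64 encoding. A minimal sketch (the placeholder bytes are illustrative):

```python
import base64

def encode_for_upload(raw: bytes) -> str:
    """Encode file bytes as the base64 string the tool expects."""
    return base64.b64encode(raw).decode("ascii")

pdf_bytes = b"%PDF-1.4 ..."  # illustrative placeholder content
content_base64 = encode_for_upload(pdf_bytes)
# The server decodes with base64.b64decode(content_base64) and
# recovers the original bytes.
```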
### Response
```json
{
"attachment_id": 42,
"filename": "invoice.pdf",
"size_bytes": 30720
}
```
### Usage Example
```python
# 1. Upload attachment
result = upload_attachment(
document="accounting",
filename="Invoice-001.pdf",
content_base64="JVBERi0xLjQK..."
)
# 2. Link to record via existing update_records tool
update_records("Bills", [{
"id": 1,
"fields": {"Attachment": [result["attachment_id"]]}
}])
```
## Implementation
### Files to Modify
1. **`src/grist_mcp/grist_client.py`** - Add `upload_attachment()` method
2. **`src/grist_mcp/tools/write.py`** - Add tool function
3. **`src/grist_mcp/server.py`** - Register tool
### GristClient Method
```python
async def upload_attachment(
self,
filename: str,
content: bytes,
content_type: str | None = None
) -> dict:
"""Upload a file attachment. Returns attachment metadata."""
if content_type is None:
content_type = "application/octet-stream"
files = {"upload": (filename, content, content_type)}
async with httpx.AsyncClient(timeout=self._timeout) as client:
response = await client.post(
f"{self._base_url}/attachments",
headers=self._headers,
files=files,
)
response.raise_for_status()
# Grist returns list of attachment IDs
attachment_ids = response.json()
return {
"attachment_id": attachment_ids[0],
"filename": filename,
"size_bytes": len(content),
}
```
### Tool Function
```python
import base64
import mimetypes
async def upload_attachment(
agent: Agent,
auth: Authenticator,
document: str,
filename: str,
content_base64: str,
content_type: str | None = None,
client: GristClient | None = None,
) -> dict:
"""Upload a file attachment to a document."""
auth.authorize(agent, document, Permission.WRITE)
# Decode base64
try:
content = base64.b64decode(content_base64)
except Exception:
raise ValueError("Invalid base64 encoding")
# Auto-detect MIME type if not provided
if content_type is None:
content_type, _ = mimetypes.guess_type(filename)
if content_type is None:
content_type = "application/octet-stream"
if client is None:
doc = auth.get_document(document)
client = GristClient(doc)
return await client.upload_attachment(filename, content, content_type)
```
## Error Handling
| Error | Cause | Response |
|-------|-------|----------|
| Invalid base64 | Malformed content_base64 | `ValueError: Invalid base64 encoding` |
| Authorization | Agent lacks write permission | `AuthError` (existing pattern) |
| Grist API error | Upload fails | `httpx.HTTPStatusError` (existing pattern) |
## Testing
### Unit Tests
**`tests/unit/test_tools_write.py`:**
- `test_upload_attachment_success` - Valid base64, returns attachment_id
- `test_upload_attachment_invalid_base64` - Raises ValueError
- `test_upload_attachment_auth_required` - Verifies write permission check
- `test_upload_attachment_mime_detection` - Auto-detects type from filename
**`tests/unit/test_grist_client.py`:**
- `test_upload_attachment_api_call` - Correct multipart request format
- `test_upload_attachment_with_explicit_content_type` - Passes through MIME type
### Mock Approach
Mock `httpx.AsyncClient` responses; no Grist server needed for unit tests.
## Future Considerations
Not included in this implementation (YAGNI):
- Batch upload (multiple files)
- Download attachment
- Proxy API support
- Size limit validation (rely on Grist's limits)
These can be added if real use cases emerge.


@@ -1,6 +1,6 @@
 [project]
 name = "grist-mcp"
-version = "0.1.0"
+version = "1.3.0"
 description = "MCP server for AI agents to interact with Grist documents"
 requires-python = ">=3.14"
 dependencies = [
@@ -17,6 +17,8 @@ dev = [
     "pytest>=8.0.0",
     "pytest-asyncio>=0.24.0",
     "pytest-httpx>=0.32.0",
+    "pytest-timeout>=2.0.0",
+    "rich>=13.0.0",
 ]
 
 [build-system]
@@ -25,4 +27,7 @@ build-backend = "hatchling.build"
 
 [tool.pytest.ini_options]
 asyncio_mode = "auto"
-testpaths = ["tests"]
+testpaths = ["tests/unit", "tests/integration"]
+markers = [
+    "integration: marks tests as integration tests (require Docker containers)",
+]

renovate.json (new file)

@@ -0,0 +1,3 @@
{
"$schema": "https://docs.renovatebot.com/renovate-schema.json"
}


@@ -0,0 +1,7 @@
#!/bin/bash
# scripts/get-test-instance-id.sh
# Generate a unique instance ID from git branch for parallel test isolation
BRANCH=$(git rev-parse --abbrev-ref HEAD 2>/dev/null || echo "unknown")
# Sanitize: replace non-alphanumeric with dash, limit length
echo "$BRANCH" | sed 's/[^a-zA-Z0-9]/-/g' | cut -c1-20


@@ -0,0 +1,39 @@
#!/bin/bash
# scripts/run-integration-tests.sh
# Run integration tests with branch isolation and dynamic port discovery
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
# Get branch-based instance ID
TEST_INSTANCE_ID=$("$SCRIPT_DIR/get-test-instance-id.sh")
export TEST_INSTANCE_ID
echo "Test instance ID: $TEST_INSTANCE_ID"
# Start containers
cd "$PROJECT_ROOT/deploy/test"
docker compose up -d --build --wait
# Discover dynamic ports
GRIST_MCP_PORT=$(docker compose port grist-mcp 3000 | cut -d: -f2)
MOCK_GRIST_PORT=$(docker compose port mock-grist 8484 | cut -d: -f2)
echo "grist-mcp available at: http://localhost:$GRIST_MCP_PORT"
echo "mock-grist available at: http://localhost:$MOCK_GRIST_PORT"
# Export for tests
export GRIST_MCP_URL="http://localhost:$GRIST_MCP_PORT"
export MOCK_GRIST_URL="http://localhost:$MOCK_GRIST_PORT"
# Run tests
cd "$PROJECT_ROOT"
TEST_EXIT=0
uv run pytest tests/integration/ -v || TEST_EXIT=$?
# Cleanup
cd "$PROJECT_ROOT/deploy/test"
docker compose down -v
exit $TEST_EXIT

scripts/test-runner.py (new executable file)

@@ -0,0 +1,248 @@
#!/usr/bin/env python3
"""Rich test runner with progress display and fail-fast behavior.
Runs unit tests, then integration tests with real-time progress indication.
"""
import argparse
import os
import re
import subprocess
import sys
from dataclasses import dataclass, field
from enum import Enum
from pathlib import Path
from rich.console import Console
from rich.live import Live
from rich.table import Table
from rich.text import Text
class Status(Enum):
PENDING = "pending"
RUNNING = "running"
PASSED = "passed"
FAILED = "failed"
@dataclass
class TestStage:
name: str
command: list[str]
status: Status = Status.PENDING
progress: int = 0
total: int = 0
passed: int = 0
failed: int = 0
current_test: str = ""
duration: float = 0.0
output: list[str] = field(default_factory=list)
# Regex patterns for parsing pytest output
PYTEST_PROGRESS = re.compile(r"\[\s*(\d+)%\]")
PYTEST_COLLECTING = re.compile(r"collected (\d+) items?")
PYTEST_RESULT = re.compile(r"(\d+) passed")
PYTEST_FAILED = re.compile(r"(\d+) failed")
PYTEST_DURATION = re.compile(r"in ([\d.]+)s")
PYTEST_TEST_LINE = re.compile(r"(tests/\S+::\S+)")
class TestRunner:
def __init__(self, verbose: bool = False):
self.console = Console()
self.verbose = verbose
self.project_root = Path(__file__).parent.parent
self.stages: list[TestStage] = []
self.all_passed = True
def add_stage(self, name: str, command: list[str]) -> None:
self.stages.append(TestStage(name=name, command=command))
def render_table(self) -> Table:
table = Table(show_header=False, box=None, padding=(0, 1))
table.add_column("Status", width=3)
table.add_column("Name", width=20)
table.add_column("Progress", width=30)
table.add_column("Time", width=8)
for stage in self.stages:
# Status icon
            if stage.status == Status.PENDING:
                icon = Text("○", style="dim")
            elif stage.status == Status.RUNNING:
                icon = Text("▶", style="yellow")
            elif stage.status == Status.PASSED:
                icon = Text("✓", style="green")
            else:
                icon = Text("✗", style="red")
# Progress display
if stage.status == Status.PENDING:
progress = Text("pending", style="dim")
elif stage.status == Status.RUNNING:
if stage.total > 0:
bar_width = 20
filled = int(bar_width * stage.progress / 100)
                    bar = "█" * filled + "░" * (bar_width - filled)
progress = Text(f"{bar} {stage.progress:3d}% {stage.passed}/{stage.total}")
if stage.current_test:
progress.append(f"\n{stage.current_test[:40]}", style="dim")
else:
progress = Text("collecting...", style="yellow")
elif stage.status == Status.PASSED:
progress = Text(f"{stage.passed}/{stage.total}", style="green")
else:
progress = Text(f"{stage.passed}/{stage.total} ({stage.failed} failed)", style="red")
# Duration
if stage.duration > 0:
duration = Text(f"{stage.duration:.1f}s", style="dim")
else:
duration = Text("")
table.add_row(icon, stage.name, progress, duration)
return table
def parse_output(self, stage: TestStage, line: str) -> None:
"""Parse pytest output line and update stage state."""
stage.output.append(line)
# Check for collected count
match = PYTEST_COLLECTING.search(line)
if match:
stage.total = int(match.group(1))
# Check for progress percentage
match = PYTEST_PROGRESS.search(line)
if match:
stage.progress = int(match.group(1))
# Estimate passed based on progress
if stage.total > 0:
stage.passed = int(stage.total * stage.progress / 100)
# Check for current test
match = PYTEST_TEST_LINE.search(line)
if match:
stage.current_test = match.group(1)
# Check for final results
match = PYTEST_RESULT.search(line)
if match:
stage.passed = int(match.group(1))
match = PYTEST_FAILED.search(line)
if match:
stage.failed = int(match.group(1))
match = PYTEST_DURATION.search(line)
if match:
stage.duration = float(match.group(1))
def run_stage(self, stage: TestStage, live: Live) -> bool:
"""Run a single test stage and return True if passed."""
stage.status = Status.RUNNING
live.update(self.render_table())
env = os.environ.copy()
env["PYTHONUNBUFFERED"] = "1"
try:
process = subprocess.Popen(
stage.command,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
text=True,
cwd=self.project_root,
env=env,
)
for line in process.stdout:
line = line.rstrip()
self.parse_output(stage, line)
live.update(self.render_table())
if self.verbose:
self.console.print(line)
process.wait()
if process.returncode == 0:
stage.status = Status.PASSED
stage.progress = 100
return True
else:
stage.status = Status.FAILED
self.all_passed = False
return False
except Exception as e:
stage.status = Status.FAILED
stage.output.append(str(e))
self.all_passed = False
return False
finally:
live.update(self.render_table())
def run_all(self) -> bool:
"""Run all test stages with fail-fast behavior."""
self.console.print()
with Live(self.render_table(), console=self.console, refresh_per_second=4) as live:
for stage in self.stages:
if not self.run_stage(stage, live):
# Fail fast - don't run remaining stages
break
self.console.print()
# Print summary
if self.all_passed:
self.console.print("[green]All tests passed![/green]")
else:
self.console.print("[red]Tests failed![/red]")
# Print failure details
for stage in self.stages:
if stage.status == Status.FAILED:
self.console.print(f"\n[red]Failures in {stage.name}:[/red]")
# Print last 20 lines of output for context
for line in stage.output[-20:]:
self.console.print(f" {line}")
return self.all_passed
def main():
parser = argparse.ArgumentParser(description="Run tests with rich progress display")
parser.add_argument("-v", "--verbose", action="store_true", help="Show full test output")
parser.add_argument("--unit-only", action="store_true", help="Run only unit tests")
parser.add_argument("--integration-only", action="store_true", help="Run only integration tests")
args = parser.parse_args()
runner = TestRunner(verbose=args.verbose)
# Determine which stages to run
run_unit = not args.integration_only
run_integration = not args.unit_only
if run_unit:
runner.add_stage(
"Unit Tests",
["uv", "run", "pytest", "tests/unit/", "-v", "--tb=short"],
)
if run_integration:
# Use the integration test script which handles containers
runner.add_stage(
"Integration Tests",
["bash", "./scripts/run-integration-tests.sh"],
)
success = runner.run_all()
sys.exit(0 if success else 1)
if __name__ == "__main__":
main()


@@ -14,6 +14,7 @@ class Document:
     url: str
     doc_id: str
     api_key: str
+    host_header: str | None = None  # Override Host header for Docker networking
 
 
 @dataclass
@@ -78,6 +79,7 @@ def load_config(config_path: str) -> Config:
             url=doc_data["url"],
             doc_id=doc_data["doc_id"],
             api_key=doc_data["api_key"],
+            host_header=doc_data.get("host_header"),
         )
 
     # Parse tokens


@@ -17,6 +17,8 @@ class GristClient:
         self._doc = document
         self._base_url = f"{document.url.rstrip('/')}/api/docs/{document.doc_id}"
         self._headers = {"Authorization": f"Bearer {document.api_key}"}
+        if document.host_header:
+            self._headers["Host"] = document.host_header
         self._timeout = timeout
 
     async def _request(self, method: str, path: str, **kwargs) -> dict:
@@ -114,6 +116,39 @@ class GristClient:
         """Delete records by ID."""
         await self._request("POST", f"/tables/{table}/data/delete", json=record_ids)
 
+    async def upload_attachment(
+        self,
+        filename: str,
+        content: bytes,
+        content_type: str = "application/octet-stream",
+    ) -> dict:
+        """Upload a file attachment. Returns attachment metadata.
+
+        Args:
+            filename: Name for the uploaded file.
+            content: File content as bytes.
+            content_type: MIME type of the file.
+
+        Returns:
+            Dict with attachment_id, filename, and size_bytes.
+        """
+        files = {"upload": (filename, content, content_type)}
+        async with httpx.AsyncClient(timeout=self._timeout) as client:
+            response = await client.post(
+                f"{self._base_url}/attachments",
+                headers=self._headers,
+                files=files,
+            )
+            response.raise_for_status()
+            # Grist returns list of attachment IDs
+            attachment_ids = response.json()
+        return {
+            "attachment_id": attachment_ids[0],
+            "filename": filename,
+            "size_bytes": len(content),
+        }
+
     # Schema operations
 
     async def create_table(self, table_id: str, columns: list[dict]) -> str:
@@ -160,7 +195,8 @@ class GristClient:
         if formula is not None:
             fields["formula"] = formula
-        await self._request("PATCH", f"/tables/{table}/columns/{column_id}", json={"fields": fields})
+        payload = {"columns": [{"id": column_id, "fields": fields}]}
+        await self._request("PATCH", f"/tables/{table}/columns", json=payload)
 
     async def delete_column(self, table: str, column_id: str) -> None:
         """Delete a column from a table."""

src/grist_mcp/logging.py (new file)

@@ -0,0 +1,120 @@
"""Logging configuration and utilities."""
import logging
import os
from datetime import datetime
__all__ = [
"setup_logging",
"get_logger",
"truncate_token",
"extract_stats",
"format_tool_log",
]
def setup_logging() -> None:
"""Configure logging based on LOG_LEVEL environment variable.
Valid levels: DEBUG, INFO, WARNING, ERROR (default: INFO)
"""
level_name = os.environ.get("LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, None)
if not isinstance(level, int):
level = logging.INFO
logger = logging.getLogger("grist_mcp")
logger.setLevel(level)
logger.propagate = False # Prevent duplicate logs to root logger
# Only add handler if not already configured
if not logger.handlers:
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(message)s"))
logger.addHandler(handler)
def extract_stats(tool_name: str, arguments: dict, result: dict) -> str:
"""Extract meaningful stats from tool call based on tool type."""
if tool_name == "list_documents":
count = len(result.get("documents", []))
return f"{count} docs"
if tool_name == "list_tables":
count = len(result.get("tables", []))
return f"{count} tables"
if tool_name == "describe_table":
count = len(result.get("columns", []))
return f"{count} columns"
if tool_name == "get_records":
count = len(result.get("records", []))
return f"{count} records"
if tool_name == "sql_query":
count = len(result.get("records", []))
return f"{count} rows"
if tool_name == "add_records":
count = len(arguments.get("records", []))
return f"{count} records"
if tool_name == "update_records":
count = len(arguments.get("records", []))
return f"{count} records"
if tool_name == "delete_records":
count = len(arguments.get("record_ids", []))
return f"{count} records"
if tool_name == "create_table":
count = len(arguments.get("columns", []))
return f"{count} columns"
if tool_name in ("add_column", "modify_column", "delete_column"):
return "1 column"
return "-"
def truncate_token(token: str) -> str:
"""Truncate token to show first 3 and last 3 chars.
Tokens of 8 characters or fewer are fully masked as *** for security.
"""
if len(token) <= 8:
return "***"
return f"{token[:3]}...{token[-3:]}"
def format_tool_log(
agent_name: str,
token: str,
tool: str,
document: str | None,
stats: str,
status: str,
duration_ms: int,
error_message: str | None = None,
) -> str:
"""Format a tool call log line.
Format: YYYY-MM-DD HH:MM:SS | agent (token) | tool | doc | stats | status | duration
"""
timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
truncated = truncate_token(token)
doc = document if document else "-"
line = f"{timestamp} | {agent_name} ({truncated}) | {tool} | {doc} | {stats} | {status} | {duration_ms}ms"
if error_message:
line += f"\n {error_message}"
return line
def get_logger(name: str) -> logging.Logger:
"""Get a child logger under the grist_mcp namespace."""
return logging.getLogger(f"grist_mcp.{name}")
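The masking rule in truncate_token can be sanity-checked with a quick self-contained sketch (the function is re-stated here, not imported from the package):

```python
# Mirrors the truncation rule above: short tokens are fully masked,
# longer ones keep only the first and last 3 characters.
def truncate_token(token: str) -> str:
    if len(token) <= 8:
        return "***"
    return f"{token[:3]}...{token[-3:]}"

print(truncate_token("abcd1234"))     # *** (8 chars, fully masked)
print(truncate_token("abc12345xyz"))  # abc...xyz
```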


@@ -1,60 +1,317 @@
"""Main entry point for the MCP server with SSE transport."""
import json
import logging
import os
import sys
from typing import Any
import uvicorn
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.routing import Route
from grist_mcp.server import create_server
from grist_mcp.auth import AuthError
from grist_mcp.config import Config, load_config
from grist_mcp.auth import Authenticator, AuthError
from grist_mcp.session import SessionTokenManager
from grist_mcp.proxy import parse_proxy_request, dispatch_proxy_request, ProxyError
from grist_mcp.logging import setup_logging
def create_app() -> Starlette:
"""Create the Starlette ASGI application."""
config_path = os.environ.get("CONFIG_PATH", "/app/config.yaml")
Scope = dict[str, Any]
Receive = Any
Send = Any
if not os.path.exists(config_path):
print(f"Error: Config file not found at {config_path}", file=sys.stderr)
sys.exit(1)
def _get_bearer_token(scope: Scope) -> str | None:
"""Extract Bearer token from Authorization header."""
headers = dict(scope.get("headers", []))
auth_header = headers.get(b"authorization", b"").decode()
if auth_header.startswith("Bearer "):
return auth_header[7:]
return None
async def send_error(send: Send, status: int, message: str) -> None:
"""Send an HTTP error response."""
body = json.dumps({"error": message}).encode()
await send({
"type": "http.response.start",
"status": status,
"headers": [[b"content-type", b"application/json"]],
})
await send({
"type": "http.response.body",
"body": body,
})
async def send_json_response(send: Send, status: int, data: dict) -> None:
"""Send a JSON response."""
body = json.dumps(data).encode()
await send({
"type": "http.response.start",
"status": status,
"headers": [[b"content-type", b"application/json"]],
})
await send({
"type": "http.response.body",
"body": body,
})
CONFIG_TEMPLATE = """\
# grist-mcp configuration
#
# Token Generation:
# python -c "import secrets; print(secrets.token_urlsafe(32))"
# openssl rand -base64 32
# Document definitions
documents:
my-document:
url: https://docs.getgrist.com
doc_id: YOUR_DOC_ID
api_key: ${GRIST_API_KEY}
# Agent tokens with access scopes
tokens:
- token: REPLACE_WITH_GENERATED_TOKEN
name: my-agent
scope:
- document: my-document
permissions: [read, write]
"""
def _ensure_config(config_path: str) -> bool:
"""Ensure config file exists. Creates template if missing.
Returns True if config is ready, False if template was created.
"""
path = os.path.abspath(config_path)
# Check if path is a directory (Docker creates this when mounting missing file)
if os.path.isdir(path):
print(f"ERROR: Config path is a directory: {path}")
print()
print("This usually means the config file doesn't exist on the host.")
print("Please create the config file before starting the container:")
print()
print(f" mkdir -p $(dirname {config_path})")
print(f" cat > {config_path} << 'EOF'")
print(CONFIG_TEMPLATE)
print("EOF")
print()
return False
if os.path.exists(path):
return True
# Create template config
try:
server = create_server(config_path)
except AuthError as e:
print(f"Authentication error: {e}", file=sys.stderr)
sys.exit(1)
with open(path, "w") as f:
f.write(CONFIG_TEMPLATE)
print(f"Created template configuration at: {path}")
print()
print("Please edit this file to configure your Grist documents and agent tokens,")
print("then restart the server.")
except PermissionError:
print(f"ERROR: Cannot create config file at: {path}")
print()
print("Please create the config file manually before starting the container.")
print()
return False
def create_app(config: Config):
"""Create the ASGI application."""
auth = Authenticator(config)
token_manager = SessionTokenManager()
proxy_base_url = os.environ.get("GRIST_MCP_URL")
sse = SseServerTransport("/messages")
async def handle_sse(request):
async with sse.connect_sse(
request.scope, request.receive, request._send
) as streams:
async def handle_sse(scope: Scope, receive: Receive, send: Send) -> None:
# Extract and validate token from Authorization header
token = _get_bearer_token(scope)
if not token:
await send_error(send, 401, "Missing Authorization header")
return
try:
agent = auth.authenticate(token)
except AuthError as e:
await send_error(send, 401, str(e))
return
# Create a server instance for this authenticated connection
server = create_server(auth, agent, token_manager, proxy_base_url)
async with sse.connect_sse(scope, receive, send) as streams:
await server.run(
streams[0], streams[1], server.create_initialization_options()
)
async def handle_messages(request):
await sse.handle_post_message(request.scope, request.receive, request._send)
async def handle_messages(scope: Scope, receive: Receive, send: Send) -> None:
await sse.handle_post_message(scope, receive, send)
return Starlette(
routes=[
Route("/sse", endpoint=handle_sse),
Route("/messages", endpoint=handle_messages, methods=["POST"]),
]
)
async def handle_health(scope: Scope, receive: Receive, send: Send) -> None:
await send({
"type": "http.response.start",
"status": 200,
"headers": [[b"content-type", b"application/json"]],
})
await send({
"type": "http.response.body",
"body": b'{"status":"ok"}',
})
async def handle_not_found(scope: Scope, receive: Receive, send: Send) -> None:
await send({
"type": "http.response.start",
"status": 404,
"headers": [[b"content-type", b"application/json"]],
})
await send({
"type": "http.response.body",
"body": b'{"error":"Not found"}',
})
async def handle_proxy(scope: Scope, receive: Receive, send: Send) -> None:
# Extract token
token = _get_bearer_token(scope)
if not token:
await send_json_response(send, 401, {
"success": False,
"error": "Missing Authorization header",
"code": "INVALID_TOKEN",
})
return
# Validate session token
session = token_manager.validate_token(token)
if session is None:
await send_json_response(send, 401, {
"success": False,
"error": "Invalid or expired token",
"code": "TOKEN_EXPIRED",
})
return
# Read request body
body = b""
while True:
message = await receive()
body += message.get("body", b"")
if not message.get("more_body", False):
break
try:
request_data = json.loads(body)
except json.JSONDecodeError:
await send_json_response(send, 400, {
"success": False,
"error": "Invalid JSON",
"code": "INVALID_REQUEST",
})
return
# Parse and dispatch
try:
request = parse_proxy_request(request_data)
result = await dispatch_proxy_request(request, session, auth)
await send_json_response(send, 200, result)
except ProxyError as e:
status = 403 if e.code == "UNAUTHORIZED" else 400
await send_json_response(send, status, {
"success": False,
"error": e.message,
"code": e.code,
})
async def app(scope: Scope, receive: Receive, send: Send) -> None:
if scope["type"] != "http":
return
path = scope["path"]
method = scope["method"]
if path == "/health" and method == "GET":
await handle_health(scope, receive, send)
elif path == "/sse" and method == "GET":
await handle_sse(scope, receive, send)
elif path == "/messages" and method == "POST":
await handle_messages(scope, receive, send)
elif path == "/api/v1/proxy" and method == "POST":
await handle_proxy(scope, receive, send)
else:
await handle_not_found(scope, receive, send)
return app
def _print_mcp_config(external_port: int, tokens: list) -> None:
"""Print Claude Code MCP configuration."""
# Use GRIST_MCP_URL if set, otherwise fall back to localhost
base_url = os.environ.get("GRIST_MCP_URL")
if base_url:
sse_url = f"{base_url.rstrip('/')}/sse"
else:
sse_url = f"http://localhost:{external_port}/sse"
print()
print("Claude Code MCP configuration (copy-paste to add):")
for t in tokens:
config = (
f'{{"type": "sse", "url": "{sse_url}", '
f'"headers": {{"Authorization": "Bearer {t.token}"}}}}'
)
print(f" claude mcp add-json grist-{t.name} '{config}'")
print()
class UvicornAccessFilter(logging.Filter):
"""Suppress uvicorn access logs unless LOG_LEVEL is DEBUG.
At INFO level, only grist_mcp tool logs are shown.
At DEBUG level, all HTTP requests are visible.
"""
def filter(self, record: logging.LogRecord) -> bool:
# Only show uvicorn access logs at DEBUG level
return os.environ.get("LOG_LEVEL", "INFO").upper() == "DEBUG"
def main():
"""Run the SSE server."""
port = int(os.environ.get("PORT", "3000"))
app = create_app()
external_port = int(os.environ.get("EXTERNAL_PORT", str(port)))
config_path = os.environ.get("CONFIG_PATH", "/app/config.yaml")
setup_logging()
# Suppress uvicorn access logs at INFO level (only show tool logs)
logging.getLogger("uvicorn.access").addFilter(UvicornAccessFilter())
if not _ensure_config(config_path):
return
config = load_config(config_path)
print(f"Starting grist-mcp SSE server on port {port}")
print(f" SSE endpoint: http://0.0.0.0:{port}/sse")
print(f" Messages endpoint: http://0.0.0.0:{port}/messages")
uvicorn.run(app, host="0.0.0.0", port=port)
_print_mcp_config(external_port, config.tokens)
app = create_app(config)
# Configure uvicorn logging to reduce health check noise
log_config = uvicorn.config.LOGGING_CONFIG
log_config["formatters"]["default"]["fmt"] = "%(message)s"
log_config["formatters"]["access"]["fmt"] = "%(message)s"
uvicorn.run(app, host="0.0.0.0", port=port, log_config=log_config)
if __name__ == "__main__":

src/grist_mcp/proxy.py

@@ -0,0 +1,192 @@
"""HTTP proxy handler for session token access."""
from dataclasses import dataclass
from typing import Any
from grist_mcp.auth import Authenticator
from grist_mcp.grist_client import GristClient
from grist_mcp.session import SessionToken
class ProxyError(Exception):
"""Error during proxy request processing."""
def __init__(self, message: str, code: str):
self.message = message
self.code = code
super().__init__(message)
@dataclass
class ProxyRequest:
"""Parsed proxy request."""
method: str
table: str | None = None
records: list[dict] | None = None
record_ids: list[int] | None = None
filter: dict | None = None
sort: str | None = None
limit: int | None = None
query: str | None = None
table_id: str | None = None
columns: list[dict] | None = None
column_id: str | None = None
column_type: str | None = None
formula: str | None = None
type: str | None = None
METHODS_REQUIRING_TABLE = {
"get_records", "describe_table", "add_records", "update_records",
"delete_records", "add_column", "modify_column", "delete_column",
}
def parse_proxy_request(body: dict[str, Any]) -> ProxyRequest:
"""Parse and validate a proxy request body."""
if "method" not in body:
raise ProxyError("Missing required field: method", "INVALID_REQUEST")
method = body["method"]
if method in METHODS_REQUIRING_TABLE and "table" not in body:
raise ProxyError(f"Missing required field 'table' for method '{method}'", "INVALID_REQUEST")
return ProxyRequest(
method=method,
table=body.get("table"),
records=body.get("records"),
record_ids=body.get("record_ids"),
filter=body.get("filter"),
sort=body.get("sort"),
limit=body.get("limit"),
query=body.get("query"),
table_id=body.get("table_id"),
columns=body.get("columns"),
column_id=body.get("column_id"),
column_type=body.get("column_type"),
formula=body.get("formula"),
type=body.get("type"),
)
# Map methods to required permissions
METHOD_PERMISSIONS = {
"list_tables": "read",
"describe_table": "read",
"get_records": "read",
"sql_query": "read",
"add_records": "write",
"update_records": "write",
"delete_records": "write",
"create_table": "schema",
"add_column": "schema",
"modify_column": "schema",
"delete_column": "schema",
}
async def dispatch_proxy_request(
request: ProxyRequest,
session: SessionToken,
auth: Authenticator,
client: GristClient | None = None,
) -> dict[str, Any]:
"""Dispatch a proxy request to the appropriate handler."""
# Check permission
required_perm = METHOD_PERMISSIONS.get(request.method)
if required_perm is None:
raise ProxyError(f"Unknown method: {request.method}", "INVALID_REQUEST")
if required_perm not in session.permissions:
raise ProxyError(
f"Permission '{required_perm}' required for {request.method}",
"UNAUTHORIZED",
)
# Create client if not provided
if client is None:
doc = auth.get_document(session.document)
client = GristClient(doc)
# Dispatch to appropriate method
try:
if request.method == "list_tables":
data = await client.list_tables()
return {"success": True, "data": {"tables": data}}
elif request.method == "describe_table":
data = await client.describe_table(request.table)
return {"success": True, "data": {"table": request.table, "columns": data}}
elif request.method == "get_records":
data = await client.get_records(
request.table,
filter=request.filter,
sort=request.sort,
limit=request.limit,
)
return {"success": True, "data": {"records": data}}
elif request.method == "sql_query":
if request.query is None:
raise ProxyError("Missing required field: query", "INVALID_REQUEST")
data = await client.sql_query(request.query)
return {"success": True, "data": {"records": data}}
elif request.method == "add_records":
if request.records is None:
raise ProxyError("Missing required field: records", "INVALID_REQUEST")
data = await client.add_records(request.table, request.records)
return {"success": True, "data": {"record_ids": data}}
elif request.method == "update_records":
if request.records is None:
raise ProxyError("Missing required field: records", "INVALID_REQUEST")
await client.update_records(request.table, request.records)
return {"success": True, "data": {"updated": len(request.records)}}
elif request.method == "delete_records":
if request.record_ids is None:
raise ProxyError("Missing required field: record_ids", "INVALID_REQUEST")
await client.delete_records(request.table, request.record_ids)
return {"success": True, "data": {"deleted": len(request.record_ids)}}
elif request.method == "create_table":
if request.table_id is None or request.columns is None:
raise ProxyError("Missing required fields: table_id, columns", "INVALID_REQUEST")
data = await client.create_table(request.table_id, request.columns)
return {"success": True, "data": {"table_id": data}}
elif request.method == "add_column":
if request.column_id is None or request.column_type is None:
raise ProxyError("Missing required fields: column_id, column_type", "INVALID_REQUEST")
await client.add_column(
request.table, request.column_id, request.column_type,
formula=request.formula,
)
return {"success": True, "data": {"column_id": request.column_id}}
elif request.method == "modify_column":
if request.column_id is None:
raise ProxyError("Missing required field: column_id", "INVALID_REQUEST")
await client.modify_column(
request.table, request.column_id,
type=request.type,
formula=request.formula,
)
return {"success": True, "data": {"column_id": request.column_id}}
elif request.method == "delete_column":
if request.column_id is None:
raise ProxyError("Missing required field: column_id", "INVALID_REQUEST")
await client.delete_column(request.table, request.column_id)
return {"success": True, "data": {"deleted": request.column_id}}
else:
raise ProxyError(f"Unknown method: {request.method}", "INVALID_REQUEST")
except ProxyError:
raise
except Exception as e:
raise ProxyError(str(e), "GRIST_ERROR")
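The permission gate at the top of dispatch_proxy_request reduces to a table lookup plus a membership check; a minimal sketch using a subset of the METHOD_PERMISSIONS mapping above:

```python
# Subset of the METHOD_PERMISSIONS table above; unknown methods are rejected.
METHOD_PERMISSIONS = {"get_records": "read", "add_records": "write", "create_table": "schema"}

def check_permission(method: str, granted: list[str]) -> bool:
    required = METHOD_PERMISSIONS.get(method)
    if required is None:
        raise ValueError(f"Unknown method: {method}")
    return required in granted

print(check_permission("get_records", ["read"]))   # True
print(check_permission("add_records", ["read"]))   # False
```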


@@ -1,13 +1,18 @@
"""MCP server setup and tool registration."""
import json
import os
import time
from mcp.server import Server
from mcp.types import Tool, TextContent
from grist_mcp.config import load_config
from grist_mcp.auth import Authenticator, AuthError, Agent
from grist_mcp.auth import Authenticator, Agent, AuthError
from grist_mcp.session import SessionTokenManager
from grist_mcp.tools.session import get_proxy_documentation as _get_proxy_documentation
from grist_mcp.tools.session import request_session_token as _request_session_token
from grist_mcp.logging import get_logger, extract_stats, format_tool_log
logger = get_logger("server")
from grist_mcp.tools.discovery import list_documents as _list_documents
from grist_mcp.tools.read import list_tables as _list_tables
@@ -17,33 +22,33 @@ from grist_mcp.tools.read import sql_query as _sql_query
from grist_mcp.tools.write import add_records as _add_records
from grist_mcp.tools.write import update_records as _update_records
from grist_mcp.tools.write import delete_records as _delete_records
from grist_mcp.tools.write import upload_attachment as _upload_attachment
from grist_mcp.tools.schema import create_table as _create_table
from grist_mcp.tools.schema import add_column as _add_column
from grist_mcp.tools.schema import modify_column as _modify_column
from grist_mcp.tools.schema import delete_column as _delete_column
def create_server(config_path: str, token: str | None = None) -> Server:
"""Create and configure the MCP server.
def create_server(
auth: Authenticator,
agent: Agent,
token_manager: SessionTokenManager | None = None,
proxy_base_url: str | None = None,
) -> Server:
"""Create and configure the MCP server for an authenticated agent.
Args:
config_path: Path to the configuration YAML file.
token: Agent token for authentication. If not provided, reads from
GRIST_MCP_TOKEN environment variable.
auth: Authenticator instance for permission checks.
agent: The authenticated agent for this server instance.
token_manager: Optional session token manager for HTTP proxy access.
proxy_base_url: Base URL for the proxy endpoint (e.g., "https://example.com").
Raises:
AuthError: If token is invalid or not provided.
Returns:
Configured MCP Server instance.
"""
config = load_config(config_path)
auth = Authenticator(config)
server = Server("grist-mcp")
# Authenticate agent from token (required for all tool calls)
auth_token = token or os.environ.get("GRIST_MCP_TOKEN")
if not auth_token:
raise AuthError("No token provided. Set GRIST_MCP_TOKEN environment variable.")
_current_agent: Agent = auth.authenticate(auth_token)
_current_agent = agent
_proxy_base_url = proxy_base_url
@server.list_tools()
async def list_tools() -> list[Tool]:
@@ -214,10 +219,80 @@ def create_server(config_path: str, token: str | None = None) -> Server:
"required": ["document", "table", "column_id"],
},
),
Tool(
name="upload_attachment",
description="Upload a file attachment to a Grist document. Returns attachment ID for linking to records via update_records.",
inputSchema={
"type": "object",
"properties": {
"document": {
"type": "string",
"description": "Document name",
},
"filename": {
"type": "string",
"description": "Filename with extension (e.g., 'invoice.pdf')",
},
"content_base64": {
"type": "string",
"description": "File content as base64-encoded string",
},
"content_type": {
"type": "string",
"description": "MIME type (optional, auto-detected from filename)",
},
},
"required": ["document", "filename", "content_base64"],
},
),
Tool(
name="get_proxy_documentation",
description="Get complete documentation for the HTTP proxy API",
inputSchema={"type": "object", "properties": {}, "required": []},
),
Tool(
name="request_session_token",
description="Request a short-lived token for direct HTTP API access. Use this to delegate bulk data operations to scripts.",
inputSchema={
"type": "object",
"properties": {
"document": {
"type": "string",
"description": "Document name to grant access to",
},
"permissions": {
"type": "array",
"items": {"type": "string", "enum": ["read", "write", "schema"]},
"description": "Permission levels to grant",
},
"ttl_seconds": {
"type": "integer",
"description": "Token lifetime in seconds (max 3600, default 300)",
},
},
"required": ["document", "permissions"],
},
),
]
@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
start_time = time.time()
document = arguments.get("document")
# Log arguments at DEBUG level
logger.debug(
format_tool_log(
agent_name=_current_agent.name,
token=_current_agent.token,
tool=name,
document=document,
stats=f"args: {json.dumps(arguments)}",
status="started",
duration_ms=0,
)
)
try:
if name == "list_documents":
result = await _list_documents(_current_agent)
@@ -276,14 +351,74 @@ def create_server(config_path: str, token: str | None = None) -> Server:
_current_agent, auth, arguments["document"], arguments["table"],
arguments["column_id"],
)
elif name == "upload_attachment":
result = await _upload_attachment(
_current_agent, auth, arguments["document"],
arguments["filename"], arguments["content_base64"],
content_type=arguments.get("content_type"),
)
elif name == "get_proxy_documentation":
result = await _get_proxy_documentation()
elif name == "request_session_token":
if token_manager is None:
return [TextContent(type="text", text="Session tokens not enabled")]
result = await _request_session_token(
_current_agent, auth, token_manager,
arguments["document"],
arguments["permissions"],
ttl_seconds=arguments.get("ttl_seconds", 300),
proxy_base_url=_proxy_base_url,
)
else:
return [TextContent(type="text", text=f"Unknown tool: {name}")]
duration_ms = int((time.time() - start_time) * 1000)
stats = extract_stats(name, arguments, result)
logger.info(
format_tool_log(
agent_name=_current_agent.name,
token=_current_agent.token,
tool=name,
document=document,
stats=stats,
status="success",
duration_ms=duration_ms,
)
)
return [TextContent(type="text", text=json.dumps(result))]
except AuthError as e:
duration_ms = int((time.time() - start_time) * 1000)
logger.warning(
format_tool_log(
agent_name=_current_agent.name,
token=_current_agent.token,
tool=name,
document=document,
stats="-",
status="auth_error",
duration_ms=duration_ms,
error_message=str(e),
)
)
return [TextContent(type="text", text=f"Authorization error: {e}")]
except Exception as e:
duration_ms = int((time.time() - start_time) * 1000)
logger.error(
format_tool_log(
agent_name=_current_agent.name,
token=_current_agent.token,
tool=name,
document=document,
stats="-",
status="error",
duration_ms=duration_ms,
error_message=str(e),
)
)
return [TextContent(type="text", text=f"Error: {e}")]
return server

src/grist_mcp/session.py

@@ -0,0 +1,73 @@
"""Session token management for HTTP proxy access."""
import secrets
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
MAX_TTL_SECONDS = 3600 # 1 hour
DEFAULT_TTL_SECONDS = 300 # 5 minutes
@dataclass
class SessionToken:
"""A short-lived session token for proxy access."""
token: str
document: str
permissions: list[str]
agent_name: str
created_at: datetime
expires_at: datetime
class SessionTokenManager:
"""Manages creation and validation of session tokens."""
def __init__(self):
self._tokens: dict[str, SessionToken] = {}
def create_token(
self,
agent_name: str,
document: str,
permissions: list[str],
ttl_seconds: int = DEFAULT_TTL_SECONDS,
) -> SessionToken:
"""Create a new session token.
TTL is capped at MAX_TTL_SECONDS (1 hour).
"""
now = datetime.now(timezone.utc)
token_str = f"sess_{secrets.token_urlsafe(32)}"
# Cap TTL at maximum
effective_ttl = min(ttl_seconds, MAX_TTL_SECONDS)
session = SessionToken(
token=token_str,
document=document,
permissions=permissions,
agent_name=agent_name,
created_at=now,
expires_at=now + timedelta(seconds=effective_ttl),
)
self._tokens[token_str] = session
return session
def validate_token(self, token: str) -> SessionToken | None:
"""Validate a session token.
Returns the SessionToken if valid and not expired, None otherwise.
Also removes expired tokens lazily.
"""
session = self._tokens.get(token)
if session is None:
return None
now = datetime.now(timezone.utc)
if session.expires_at < now:
# Token expired, remove it
del self._tokens[token]
return None
return session
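The TTL capping and absolute-expiry computation above follow a standard pattern; a self-contained sketch (token prefix and 1-hour cap taken from the module above):

```python
import secrets
from datetime import datetime, timedelta, timezone

MAX_TTL_SECONDS = 3600  # cap from the module above

def issue_token(ttl_seconds: int) -> tuple[str, datetime]:
    # Cap the requested TTL and compute an absolute expiry in UTC.
    now = datetime.now(timezone.utc)
    ttl = min(ttl_seconds, MAX_TTL_SECONDS)
    return f"sess_{secrets.token_urlsafe(32)}", now + timedelta(seconds=ttl)

token, expires_at = issue_token(7200)  # request 2h, silently capped to 1h
assert token.startswith("sess_")
assert expires_at <= datetime.now(timezone.utc) + timedelta(seconds=MAX_TTL_SECONDS)
```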


@@ -0,0 +1,158 @@
"""Session token tools for HTTP proxy access."""
from grist_mcp.auth import Agent, Authenticator, AuthError, Permission
from grist_mcp.session import SessionTokenManager
PROXY_DOCUMENTATION = {
"description": "HTTP proxy API for bulk data operations. Use request_session_token to get a short-lived token, then call the proxy endpoint directly from scripts.",
"endpoint": "POST /api/v1/proxy",
"endpoint_note": "The full URL is returned in the 'proxy_url' field of request_session_token response",
"authentication": "Bearer token in Authorization header",
"request_format": {
"method": "Operation name (required)",
"table": "Table name (required for most operations)",
},
"methods": {
"get_records": {
"description": "Fetch records from a table",
"fields": {
"table": "string",
"filter": "object (optional)",
"sort": "string (optional)",
"limit": "integer (optional)",
},
},
"sql_query": {
"description": "Run a read-only SQL query",
"fields": {"query": "string"},
},
"list_tables": {
"description": "List all tables in the document",
"fields": {},
},
"describe_table": {
"description": "Get column information for a table",
"fields": {"table": "string"},
},
"add_records": {
"description": "Add records to a table",
"fields": {"table": "string", "records": "array of objects"},
},
"update_records": {
"description": "Update existing records",
"fields": {"table": "string", "records": "array of {id, fields}"},
},
"delete_records": {
"description": "Delete records by ID",
"fields": {"table": "string", "record_ids": "array of integers"},
},
"create_table": {
"description": "Create a new table",
"fields": {"table_id": "string", "columns": "array of {id, type}"},
},
"add_column": {
"description": "Add a column to a table",
"fields": {
"table": "string",
"column_id": "string",
"column_type": "string",
"formula": "string (optional)",
},
},
"modify_column": {
"description": "Modify a column's type or formula",
"fields": {
"table": "string",
"column_id": "string",
"type": "string (optional)",
"formula": "string (optional)",
},
},
"delete_column": {
"description": "Delete a column",
"fields": {"table": "string", "column_id": "string"},
},
},
"response_format": {
"success": {"success": True, "data": "..."},
"error": {"success": False, "error": "message", "code": "ERROR_CODE"},
},
"error_codes": [
"UNAUTHORIZED",
"INVALID_TOKEN",
"TOKEN_EXPIRED",
"INVALID_REQUEST",
"GRIST_ERROR",
],
"example_script": """#!/usr/bin/env python3
import requests
import sys
# Use token and proxy_url from request_session_token response
token = sys.argv[1]
proxy_url = sys.argv[2]
response = requests.post(
proxy_url,
headers={'Authorization': f'Bearer {token}'},
json={
'method': 'add_records',
'table': 'Orders',
'records': [{'item': 'Widget', 'qty': 100}]
}
)
print(response.json())
""",
}
async def get_proxy_documentation() -> dict:
"""Return complete documentation for the HTTP proxy API."""
return PROXY_DOCUMENTATION
async def request_session_token(
agent: Agent,
auth: Authenticator,
token_manager: SessionTokenManager,
document: str,
permissions: list[str],
ttl_seconds: int = 300,
proxy_base_url: str | None = None,
) -> dict:
"""Request a short-lived session token for HTTP proxy access.
The token can only grant permissions the agent already has.
"""
# Verify the agent holds each requested permission on the document
for perm_str in permissions:
try:
perm = Permission(perm_str)
except ValueError:
raise AuthError(f"Invalid permission: {perm_str}")
auth.authorize(agent, document, perm)
# Create the session token
session = token_manager.create_token(
agent_name=agent.name,
document=document,
permissions=permissions,
ttl_seconds=ttl_seconds,
)
# Build proxy URL - use base URL if provided, otherwise just path
proxy_path = "/api/v1/proxy"
if proxy_base_url:
proxy_url = f"{proxy_base_url.rstrip('/')}{proxy_path}"
else:
proxy_url = proxy_path
return {
"token": session.token,
"document": session.document,
"permissions": session.permissions,
"expires_at": session.expires_at.isoformat(),
"proxy_url": proxy_url,
}


@@ -1,4 +1,7 @@
"""Write tools - create, update, delete records."""
"""Write tools - create, update, delete records, upload attachments."""
import base64
import mimetypes
from grist_mcp.auth import Agent, Authenticator, Permission
from grist_mcp.grist_client import GristClient
@@ -59,3 +62,50 @@ async def delete_records(
await client.delete_records(table, record_ids)
return {"deleted": True}
async def upload_attachment(
agent: Agent,
auth: Authenticator,
document: str,
filename: str,
content_base64: str,
content_type: str | None = None,
client: GristClient | None = None,
) -> dict:
"""Upload a file attachment to a document.
Args:
agent: The authenticated agent.
auth: Authenticator for permission checks.
document: Document name.
filename: Filename with extension.
content_base64: File content as base64-encoded string.
content_type: MIME type (auto-detected from filename if omitted).
client: Optional GristClient instance.
Returns:
Dict with attachment_id, filename, and size_bytes.
Raises:
ValueError: If content_base64 is not valid base64.
"""
auth.authorize(agent, document, Permission.WRITE)
# Decode base64 content
try:
content = base64.b64decode(content_base64, validate=True)
except Exception as e:
raise ValueError("Invalid base64 encoding") from e
# Auto-detect MIME type if not provided
if content_type is None:
content_type, _ = mimetypes.guess_type(filename)
if content_type is None:
content_type = "application/octet-stream"
if client is None:
doc = auth.get_document(document)
client = GristClient(doc)
return await client.upload_attachment(filename, content, content_type)
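The decode-and-guess steps in upload_attachment can be exercised on their own; a sketch using only the standard library (the sample bytes are illustrative):

```python
import base64
import mimetypes

content = b"%PDF-1.4 sample"  # illustrative payload
encoded = base64.b64encode(content).decode()

# Round-trip: what a caller sends as content_base64 decodes back to the bytes.
assert base64.b64decode(encoded, validate=True) == content

# MIME detection from the filename, with the same generic fallback as above.
ctype, _ = mimetypes.guess_type("invoice.pdf")
print(ctype or "application/octet-stream")  # application/pdf
```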


@@ -0,0 +1,12 @@
documents:
test-doc:
url: http://mock-grist:8484
doc_id: test-doc-id
api_key: test-api-key
tokens:
- token: test-token
name: test-agent
scope:
- document: test-doc
permissions: [read, write, schema]


@@ -0,0 +1,36 @@
"""Fixtures for integration tests."""
import os
import time
import httpx
import pytest
GRIST_MCP_URL = os.environ.get("GRIST_MCP_URL", "http://localhost:3000")
MOCK_GRIST_URL = os.environ.get("MOCK_GRIST_URL", "http://localhost:8484")
MAX_WAIT_SECONDS = 30
def wait_for_service(url: str, timeout: int = MAX_WAIT_SECONDS) -> bool:
"""Wait for a service to become healthy."""
start = time.time()
while time.time() - start < timeout:
try:
response = httpx.get(f"{url}/health", timeout=2.0)
if response.status_code == 200:
return True
except httpx.RequestError:
pass
time.sleep(0.5)
return False
@pytest.fixture(scope="session")
def services_ready():
"""Ensure both services are healthy before running tests."""
if not wait_for_service(MOCK_GRIST_URL):
pytest.fail(f"Mock Grist server not ready at {MOCK_GRIST_URL}")
if not wait_for_service(GRIST_MCP_URL):
pytest.fail(f"grist-mcp server not ready at {GRIST_MCP_URL}")
return True


@@ -0,0 +1,13 @@
FROM python:3.14-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY server.py .
ENV PORT=8484
EXPOSE 8484
CMD ["python", "server.py"]


@@ -0,0 +1,2 @@
starlette>=0.41.0
uvicorn>=0.32.0


@@ -0,0 +1,227 @@
"""Mock Grist API server for integration testing."""
import json
import logging
import os
from datetime import datetime, timezone
from starlette.applications import Starlette
from starlette.responses import JSONResponse
from starlette.routing import Route
logging.basicConfig(level=logging.INFO, format="%(asctime)s [MOCK-GRIST] %(message)s")
logger = logging.getLogger(__name__)
# Mock data
MOCK_TABLES = {
"People": {
"columns": [
{"id": "Name", "fields": {"type": "Text"}},
{"id": "Age", "fields": {"type": "Int"}},
{"id": "Email", "fields": {"type": "Text"}},
],
"records": [
{"id": 1, "fields": {"Name": "Alice", "Age": 30, "Email": "alice@example.com"}},
{"id": 2, "fields": {"Name": "Bob", "Age": 25, "Email": "bob@example.com"}},
],
},
"Tasks": {
"columns": [
{"id": "Title", "fields": {"type": "Text"}},
{"id": "Done", "fields": {"type": "Bool"}},
],
"records": [
{"id": 1, "fields": {"Title": "Write tests", "Done": False}},
{"id": 2, "fields": {"Title": "Deploy", "Done": False}},
],
},
}
# Track requests for test assertions
request_log: list[dict] = []
def log_request(method: str, path: str, body: dict | None = None):
"""Log a request for later inspection."""
entry = {
"timestamp": datetime.now(timezone.utc).isoformat(),
"method": method,
"path": path,
"body": body,
}
request_log.append(entry)
logger.info(f"{method} {path}" + (f" body={json.dumps(body)}" if body else ""))
async def health(request):
"""Health check endpoint."""
return JSONResponse({"status": "ok"})
async def get_request_log(request):
"""Return the request log for test assertions."""
return JSONResponse(request_log)
async def clear_request_log(request):
"""Clear the request log."""
request_log.clear()
return JSONResponse({"status": "cleared"})
async def list_tables(request):
"""GET /api/docs/{doc_id}/tables"""
doc_id = request.path_params["doc_id"]
log_request("GET", f"/api/docs/{doc_id}/tables")
tables = [{"id": name} for name in MOCK_TABLES.keys()]
return JSONResponse({"tables": tables})
async def get_table_columns(request):
"""GET /api/docs/{doc_id}/tables/{table_id}/columns"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
log_request("GET", f"/api/docs/{doc_id}/tables/{table_id}/columns")
if table_id not in MOCK_TABLES:
return JSONResponse({"error": "Table not found"}, status_code=404)
return JSONResponse({"columns": MOCK_TABLES[table_id]["columns"]})
async def get_records(request):
"""GET /api/docs/{doc_id}/tables/{table_id}/records"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
log_request("GET", f"/api/docs/{doc_id}/tables/{table_id}/records")
if table_id not in MOCK_TABLES:
return JSONResponse({"error": "Table not found"}, status_code=404)
return JSONResponse({"records": MOCK_TABLES[table_id]["records"]})
async def add_records(request):
"""POST /api/docs/{doc_id}/tables/{table_id}/records"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("POST", f"/api/docs/{doc_id}/tables/{table_id}/records", body)
# Return mock IDs for new records
new_ids = [{"id": 100 + i} for i in range(len(body.get("records", [])))]
return JSONResponse({"records": new_ids})
async def update_records(request):
"""PATCH /api/docs/{doc_id}/tables/{table_id}/records"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("PATCH", f"/api/docs/{doc_id}/tables/{table_id}/records", body)
return JSONResponse({})
async def delete_records(request):
"""POST /api/docs/{doc_id}/tables/{table_id}/data/delete"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("POST", f"/api/docs/{doc_id}/tables/{table_id}/data/delete", body)
return JSONResponse({})
async def sql_query(request):
"""GET /api/docs/{doc_id}/sql"""
doc_id = request.path_params["doc_id"]
query = request.query_params.get("q", "")
log_request("GET", f"/api/docs/{doc_id}/sql?q={query}")
# Return mock SQL results
return JSONResponse({
"records": [
{"fields": {"Name": "Alice", "Age": 30}},
{"fields": {"Name": "Bob", "Age": 25}},
]
})
async def create_tables(request):
"""POST /api/docs/{doc_id}/tables"""
doc_id = request.path_params["doc_id"]
body = await request.json()
log_request("POST", f"/api/docs/{doc_id}/tables", body)
# Return the created tables with their IDs
tables = [{"id": t["id"]} for t in body.get("tables", [])]
return JSONResponse({"tables": tables})
async def add_column(request):
"""POST /api/docs/{doc_id}/tables/{table_id}/columns"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("POST", f"/api/docs/{doc_id}/tables/{table_id}/columns", body)
columns = [{"id": c["id"]} for c in body.get("columns", [])]
return JSONResponse({"columns": columns})
async def modify_column(request):
"""PATCH /api/docs/{doc_id}/tables/{table_id}/columns/{col_id}"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
col_id = request.path_params["col_id"]
body = await request.json()
log_request("PATCH", f"/api/docs/{doc_id}/tables/{table_id}/columns/{col_id}", body)
return JSONResponse({})
async def modify_columns(request):
"""PATCH /api/docs/{doc_id}/tables/{table_id}/columns - batch modify columns"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
body = await request.json()
log_request("PATCH", f"/api/docs/{doc_id}/tables/{table_id}/columns", body)
return JSONResponse({})
async def delete_column(request):
"""DELETE /api/docs/{doc_id}/tables/{table_id}/columns/{col_id}"""
doc_id = request.path_params["doc_id"]
table_id = request.path_params["table_id"]
col_id = request.path_params["col_id"]
log_request("DELETE", f"/api/docs/{doc_id}/tables/{table_id}/columns/{col_id}")
return JSONResponse({})
app = Starlette(
routes=[
# Test control endpoints
Route("/health", endpoint=health),
Route("/_test/requests", endpoint=get_request_log),
Route("/_test/requests/clear", endpoint=clear_request_log, methods=["POST"]),
# Grist API endpoints
Route("/api/docs/{doc_id}/tables", endpoint=list_tables),
Route("/api/docs/{doc_id}/tables", endpoint=create_tables, methods=["POST"]),
Route("/api/docs/{doc_id}/tables/{table_id}/columns", endpoint=get_table_columns),
Route("/api/docs/{doc_id}/tables/{table_id}/columns", endpoint=add_column, methods=["POST"]),
Route("/api/docs/{doc_id}/tables/{table_id}/columns", endpoint=modify_columns, methods=["PATCH"]),
Route("/api/docs/{doc_id}/tables/{table_id}/columns/{col_id}", endpoint=modify_column, methods=["PATCH"]),
Route("/api/docs/{doc_id}/tables/{table_id}/columns/{col_id}", endpoint=delete_column, methods=["DELETE"]),
Route("/api/docs/{doc_id}/tables/{table_id}/records", endpoint=get_records),
Route("/api/docs/{doc_id}/tables/{table_id}/records", endpoint=add_records, methods=["POST"]),
Route("/api/docs/{doc_id}/tables/{table_id}/records", endpoint=update_records, methods=["PATCH"]),
Route("/api/docs/{doc_id}/tables/{table_id}/data/delete", endpoint=delete_records, methods=["POST"]),
Route("/api/docs/{doc_id}/sql", endpoint=sql_query),
]
)
if __name__ == "__main__":
import uvicorn
port = int(os.environ.get("PORT", "8484"))
logger.info(f"Starting mock Grist server on port {port}")
uvicorn.run(app, host="0.0.0.0", port=port)


@@ -0,0 +1,66 @@
"""Test MCP protocol compliance over SSE transport."""
import os
from contextlib import asynccontextmanager
import pytest
from mcp import ClientSession
from mcp.client.sse import sse_client
GRIST_MCP_URL = os.environ.get("GRIST_MCP_URL", "http://localhost:3000")
GRIST_MCP_TOKEN = os.environ.get("GRIST_MCP_TOKEN", "test-token")
@asynccontextmanager
async def create_mcp_session():
"""Create and yield an MCP session."""
headers = {"Authorization": f"Bearer {GRIST_MCP_TOKEN}"}
async with sse_client(f"{GRIST_MCP_URL}/sse", headers=headers) as (read_stream, write_stream):
async with ClientSession(read_stream, write_stream) as session:
await session.initialize()
yield session
@pytest.mark.asyncio
async def test_mcp_protocol_compliance(services_ready):
"""Test MCP protocol compliance - connection, tools, descriptions, schemas."""
async with create_mcp_session() as client:
# Test 1: Connection initializes
assert client is not None
# Test 2: list_tools returns all expected tools
result = await client.list_tools()
tool_names = [tool.name for tool in result.tools]
expected_tools = [
"list_documents",
"list_tables",
"describe_table",
"get_records",
"sql_query",
"add_records",
"update_records",
"delete_records",
"create_table",
"add_column",
"modify_column",
"delete_column",
"get_proxy_documentation",
"request_session_token",
]
for expected in expected_tools:
assert expected in tool_names, f"Missing tool: {expected}"
assert len(result.tools) == 14, f"Expected 14 tools, got {len(result.tools)}"
# Test 3: All tools have descriptions
for tool in result.tools:
assert tool.description, f"Tool {tool.name} has no description"
assert len(tool.description) > 10, f"Tool {tool.name} description too short"
# Test 4: All tools have input schemas
for tool in result.tools:
assert tool.inputSchema is not None, f"Tool {tool.name} has no inputSchema"
assert "type" in tool.inputSchema, f"Tool {tool.name} schema missing type"


@@ -0,0 +1,52 @@
"""Integration tests for session token proxy."""
import os
import pytest
import httpx
GRIST_MCP_URL = os.environ.get("GRIST_MCP_URL", "http://localhost:3000")
GRIST_MCP_TOKEN = os.environ.get("GRIST_MCP_TOKEN")
@pytest.fixture
def mcp_client():
"""Client for MCP SSE endpoint."""
return httpx.Client(
base_url=GRIST_MCP_URL,
headers={"Authorization": f"Bearer {GRIST_MCP_TOKEN}"},
)
@pytest.fixture
def proxy_client():
"""Client for proxy endpoint (session token set per-test)."""
return httpx.Client(base_url=GRIST_MCP_URL)
@pytest.mark.integration
def test_full_session_proxy_flow(mcp_client, proxy_client):
"""Test: request token via MCP, use token to call proxy."""
    # Requires a running grist-mcp server; skip if not configured.
    if not GRIST_MCP_TOKEN:
        pytest.skip("GRIST_MCP_TOKEN not set")
    # A full flow would first request a session token over MCP (SSE).
    # Here we only verify the proxy endpoint exists and rejects bad tokens.
response = proxy_client.post(
"/api/v1/proxy",
headers={"Authorization": "Bearer invalid_token"},
json={"method": "list_tables"},
)
assert response.status_code == 401
data = response.json()
assert data["success"] is False
assert data["code"] in ["INVALID_TOKEN", "TOKEN_EXPIRED"]


@@ -0,0 +1,225 @@
"""Test tool calls through MCP client to verify Grist API interactions."""
import json
import os
from contextlib import asynccontextmanager
import httpx
import pytest
from mcp import ClientSession
from mcp.client.sse import sse_client
GRIST_MCP_URL = os.environ.get("GRIST_MCP_URL", "http://localhost:3000")
MOCK_GRIST_URL = os.environ.get("MOCK_GRIST_URL", "http://localhost:8484")
GRIST_MCP_TOKEN = os.environ.get("GRIST_MCP_TOKEN", "test-token")
@asynccontextmanager
async def create_mcp_session():
"""Create and yield an MCP session."""
headers = {"Authorization": f"Bearer {GRIST_MCP_TOKEN}"}
async with sse_client(f"{GRIST_MCP_URL}/sse", headers=headers) as (read_stream, write_stream):
async with ClientSession(read_stream, write_stream) as session:
await session.initialize()
yield session
def get_mock_request_log():
"""Get the request log from mock Grist server."""
with httpx.Client(base_url=MOCK_GRIST_URL, timeout=10.0) as client:
return client.get("/_test/requests").json()
def clear_mock_request_log():
"""Clear the mock Grist request log."""
with httpx.Client(base_url=MOCK_GRIST_URL, timeout=10.0) as client:
client.post("/_test/requests/clear")
@pytest.mark.asyncio
async def test_all_tools(services_ready):
"""Test all MCP tools - reads, writes, schema ops, and auth errors."""
async with create_mcp_session() as client:
# ===== READ TOOLS =====
# Test list_documents
clear_mock_request_log()
result = await client.call_tool("list_documents", {})
assert len(result.content) == 1
data = json.loads(result.content[0].text)
assert "documents" in data
assert len(data["documents"]) == 1
assert data["documents"][0]["name"] == "test-doc"
assert "read" in data["documents"][0]["permissions"]
# Test list_tables
clear_mock_request_log()
result = await client.call_tool("list_tables", {"document": "test-doc"})
data = json.loads(result.content[0].text)
assert "tables" in data
assert "People" in data["tables"]
assert "Tasks" in data["tables"]
log = get_mock_request_log()
assert any("/tables" in entry["path"] for entry in log)
# Test describe_table
clear_mock_request_log()
result = await client.call_tool(
"describe_table",
{"document": "test-doc", "table": "People"}
)
data = json.loads(result.content[0].text)
assert "columns" in data
column_ids = [c["id"] for c in data["columns"]]
assert "Name" in column_ids
assert "Age" in column_ids
log = get_mock_request_log()
assert any("/columns" in entry["path"] for entry in log)
# Test get_records
clear_mock_request_log()
result = await client.call_tool(
"get_records",
{"document": "test-doc", "table": "People"}
)
data = json.loads(result.content[0].text)
assert "records" in data
assert len(data["records"]) == 2
assert data["records"][0]["Name"] == "Alice"
log = get_mock_request_log()
assert any("/records" in entry["path"] and entry["method"] == "GET" for entry in log)
# Test sql_query
clear_mock_request_log()
result = await client.call_tool(
"sql_query",
{"document": "test-doc", "query": "SELECT Name, Age FROM People"}
)
data = json.loads(result.content[0].text)
assert "records" in data
assert len(data["records"]) >= 1
log = get_mock_request_log()
assert any("/sql" in entry["path"] for entry in log)
# ===== WRITE TOOLS =====
# Test add_records
clear_mock_request_log()
new_records = [
{"Name": "Charlie", "Age": 35, "Email": "charlie@example.com"}
]
result = await client.call_tool(
"add_records",
{"document": "test-doc", "table": "People", "records": new_records}
)
data = json.loads(result.content[0].text)
assert "inserted_ids" in data
assert len(data["inserted_ids"]) == 1
log = get_mock_request_log()
post_requests = [e for e in log if e["method"] == "POST" and "/records" in e["path"]]
assert len(post_requests) >= 1
assert post_requests[-1]["body"]["records"][0]["fields"]["Name"] == "Charlie"
# Test update_records
clear_mock_request_log()
updates = [{"id": 1, "fields": {"Age": 31}}]
result = await client.call_tool(
"update_records",
{"document": "test-doc", "table": "People", "records": updates}
)
data = json.loads(result.content[0].text)
assert "updated" in data
log = get_mock_request_log()
patch_requests = [e for e in log if e["method"] == "PATCH" and "/records" in e["path"]]
assert len(patch_requests) >= 1
# Test delete_records
clear_mock_request_log()
result = await client.call_tool(
"delete_records",
{"document": "test-doc", "table": "People", "record_ids": [1, 2]}
)
data = json.loads(result.content[0].text)
assert "deleted" in data
log = get_mock_request_log()
delete_requests = [e for e in log if "/data/delete" in e["path"]]
assert len(delete_requests) >= 1
assert delete_requests[-1]["body"] == [1, 2]
# ===== SCHEMA TOOLS =====
# Test create_table
clear_mock_request_log()
columns = [
{"id": "Title", "type": "Text"},
{"id": "Count", "type": "Int"},
]
result = await client.call_tool(
"create_table",
{"document": "test-doc", "table_id": "NewTable", "columns": columns}
)
data = json.loads(result.content[0].text)
assert "table_id" in data
log = get_mock_request_log()
post_tables = [e for e in log if e["method"] == "POST" and e["path"].endswith("/tables")]
assert len(post_tables) >= 1
# Test add_column
clear_mock_request_log()
result = await client.call_tool(
"add_column",
{
"document": "test-doc",
"table": "People",
"column_id": "Phone",
"column_type": "Text",
}
)
data = json.loads(result.content[0].text)
assert "column_id" in data
log = get_mock_request_log()
post_cols = [e for e in log if e["method"] == "POST" and "/columns" in e["path"]]
assert len(post_cols) >= 1
# Test modify_column
clear_mock_request_log()
result = await client.call_tool(
"modify_column",
{
"document": "test-doc",
"table": "People",
"column_id": "Age",
"type": "Numeric",
}
)
data = json.loads(result.content[0].text)
assert "modified" in data
log = get_mock_request_log()
patch_cols = [e for e in log if e["method"] == "PATCH" and "/columns" in e["path"]]
assert len(patch_cols) >= 1
# Test delete_column
clear_mock_request_log()
result = await client.call_tool(
"delete_column",
{
"document": "test-doc",
"table": "People",
"column_id": "Email",
}
)
data = json.loads(result.content[0].text)
assert "deleted" in data
log = get_mock_request_log()
delete_cols = [e for e in log if e["method"] == "DELETE" and "/columns/" in e["path"]]
assert len(delete_cols) >= 1
# ===== AUTHORIZATION =====
# Test unauthorized document fails
result = await client.call_tool(
"list_tables",
{"document": "unauthorized-doc"}
)
assert "error" in result.content[0].text.lower() or "authorization" in result.content[0].text.lower()


@@ -1,52 +0,0 @@
import pytest
from mcp.types import ListToolsRequest
from grist_mcp.server import create_server
@pytest.mark.asyncio
async def test_create_server_registers_tools(tmp_path):
config_file = tmp_path / "config.yaml"
config_file.write_text("""
documents:
test-doc:
url: https://grist.example.com
doc_id: abc123
api_key: test-key
tokens:
- token: test-token
name: test-agent
scope:
- document: test-doc
permissions: [read, write, schema]
""")
server = create_server(str(config_file), token="test-token")
# Server should have tools registered
assert server is not None
# Get the list_tools handler and call it
handler = server.request_handlers.get(ListToolsRequest)
assert handler is not None
req = ListToolsRequest(method="tools/list")
result = await handler(req)
# Check tool names are registered
tool_names = [t.name for t in result.root.tools]
assert "list_documents" in tool_names
assert "list_tables" in tool_names
assert "describe_table" in tool_names
assert "get_records" in tool_names
assert "sql_query" in tool_names
assert "add_records" in tool_names
assert "update_records" in tool_names
assert "delete_records" in tool_names
assert "create_table" in tool_names
assert "add_column" in tool_names
assert "modify_column" in tool_names
assert "delete_column" in tool_names
# Should have all 12 tools
assert len(result.root.tools) == 12


@@ -1,96 +0,0 @@
import pytest
from unittest.mock import AsyncMock
from grist_mcp.tools.write import add_records, update_records, delete_records
from grist_mcp.auth import Authenticator, AuthError
from grist_mcp.config import Config, Document, Token, TokenScope
@pytest.fixture
def config():
return Config(
documents={
"budget": Document(
url="https://grist.example.com",
doc_id="abc123",
api_key="key",
),
},
tokens=[
Token(
token="write-token",
name="write-agent",
scope=[TokenScope(document="budget", permissions=["read", "write"])],
),
Token(
token="read-token",
name="read-agent",
scope=[TokenScope(document="budget", permissions=["read"])],
),
],
)
@pytest.fixture
def auth(config):
return Authenticator(config)
@pytest.fixture
def mock_client():
client = AsyncMock()
client.add_records.return_value = [1, 2]
client.update_records.return_value = None
client.delete_records.return_value = None
return client
@pytest.mark.asyncio
async def test_add_records(auth, mock_client):
agent = auth.authenticate("write-token")
result = await add_records(
agent, auth, "budget", "Table1",
records=[{"Name": "Alice"}, {"Name": "Bob"}],
client=mock_client,
)
assert result == {"inserted_ids": [1, 2]}
@pytest.mark.asyncio
async def test_add_records_denied_without_write(auth, mock_client):
agent = auth.authenticate("read-token")
with pytest.raises(AuthError, match="Permission denied"):
await add_records(
agent, auth, "budget", "Table1",
records=[{"Name": "Alice"}],
client=mock_client,
)
@pytest.mark.asyncio
async def test_update_records(auth, mock_client):
agent = auth.authenticate("write-token")
result = await update_records(
agent, auth, "budget", "Table1",
records=[{"id": 1, "fields": {"Name": "Updated"}}],
client=mock_client,
)
assert result == {"updated": True}
@pytest.mark.asyncio
async def test_delete_records(auth, mock_client):
agent = auth.authenticate("write-token")
result = await delete_records(
agent, auth, "budget", "Table1",
record_ids=[1, 2],
client=mock_client,
)
assert result == {"deleted": True}

tests/unit/__init__.py (new file, empty)


@@ -160,7 +160,7 @@ async def test_add_column(client, httpx_mock: HTTPXMock):
@pytest.mark.asyncio
async def test_modify_column(client, httpx_mock: HTTPXMock):
httpx_mock.add_response(
url="https://grist.example.com/api/docs/abc123/tables/Table1/columns/Amount",
url="https://grist.example.com/api/docs/abc123/tables/Table1/columns",
method="PATCH",
json={},
)
@@ -196,3 +196,43 @@ def test_sql_validation_rejects_multiple_statements(client):
def test_sql_validation_allows_trailing_semicolon(client):
# Should not raise
client._validate_sql_query("SELECT * FROM users;")
# Attachment tests
@pytest.mark.asyncio
async def test_upload_attachment(client, httpx_mock: HTTPXMock):
httpx_mock.add_response(
url="https://grist.example.com/api/docs/abc123/attachments",
method="POST",
json=[42],
)
result = await client.upload_attachment(
filename="invoice.pdf",
content=b"PDF content here",
content_type="application/pdf",
)
assert result == {
"attachment_id": 42,
"filename": "invoice.pdf",
"size_bytes": 16,
}
@pytest.mark.asyncio
async def test_upload_attachment_default_content_type(client, httpx_mock: HTTPXMock):
httpx_mock.add_response(
url="https://grist.example.com/api/docs/abc123/attachments",
method="POST",
json=[99],
)
result = await client.upload_attachment(
filename="data.bin",
content=b"\x00\x01\x02",
)
assert result["attachment_id"] == 99
assert result["size_bytes"] == 3

tests/unit/test_logging.py (new file)

@@ -0,0 +1,170 @@
"""Unit tests for logging module."""
import logging
from grist_mcp.logging import truncate_token, extract_stats, format_tool_log
class TestTruncateToken:
def test_normal_token_shows_prefix_suffix(self):
token = "abcdefghijklmnop"
assert truncate_token(token) == "abc...nop"
def test_short_token_shows_asterisks(self):
token = "abcdefgh" # 8 chars
assert truncate_token(token) == "***"
def test_very_short_token_shows_asterisks(self):
token = "abc"
assert truncate_token(token) == "***"
def test_empty_token_shows_asterisks(self):
assert truncate_token("") == "***"
def test_boundary_token_shows_prefix_suffix(self):
token = "abcdefghi" # 9 chars - first to show truncation
assert truncate_token(token) == "abc...ghi"
class TestExtractStats:
def test_list_documents(self):
result = {"documents": [{"name": "a"}, {"name": "b"}, {"name": "c"}]}
assert extract_stats("list_documents", {}, result) == "3 docs"
def test_list_tables(self):
result = {"tables": ["Orders", "Products"]}
assert extract_stats("list_tables", {}, result) == "2 tables"
def test_describe_table(self):
result = {"columns": [{"id": "A"}, {"id": "B"}]}
assert extract_stats("describe_table", {}, result) == "2 columns"
def test_get_records(self):
result = {"records": [{"id": 1}, {"id": 2}]}
assert extract_stats("get_records", {}, result) == "2 records"
def test_sql_query(self):
result = {"records": [{"a": 1}, {"a": 2}, {"a": 3}]}
assert extract_stats("sql_query", {}, result) == "3 rows"
def test_add_records_from_args(self):
args = {"records": [{"a": 1}, {"a": 2}]}
assert extract_stats("add_records", args, {"ids": [1, 2]}) == "2 records"
def test_update_records_from_args(self):
args = {"records": [{"id": 1, "fields": {}}, {"id": 2, "fields": {}}]}
assert extract_stats("update_records", args, {}) == "2 records"
def test_delete_records_from_args(self):
args = {"record_ids": [1, 2, 3]}
assert extract_stats("delete_records", args, {}) == "3 records"
def test_create_table(self):
args = {"columns": [{"id": "A"}, {"id": "B"}]}
assert extract_stats("create_table", args, {}) == "2 columns"
def test_single_column_ops(self):
assert extract_stats("add_column", {}, {}) == "1 column"
assert extract_stats("modify_column", {}, {}) == "1 column"
assert extract_stats("delete_column", {}, {}) == "1 column"
def test_empty_result_returns_zero(self):
assert extract_stats("list_documents", {}, {"documents": []}) == "0 docs"
def test_unknown_tool(self):
assert extract_stats("unknown_tool", {}, {}) == "-"
class TestFormatToolLog:
def test_success_format(self):
line = format_tool_log(
agent_name="dev-agent",
token="abcdefghijklmnop",
tool="get_records",
document="sales",
stats="42 records",
status="success",
duration_ms=125,
)
assert "dev-agent" in line
assert "abc...nop" in line
assert "get_records" in line
assert "sales" in line
assert "42 records" in line
assert "success" in line
assert "125ms" in line
# Check pipe-delimited format
assert line.count("|") == 6
def test_no_document(self):
line = format_tool_log(
agent_name="dev-agent",
token="abcdefghijklmnop",
tool="list_documents",
document=None,
stats="3 docs",
status="success",
duration_ms=45,
)
assert "| - |" in line
def test_error_format(self):
line = format_tool_log(
agent_name="dev-agent",
token="abcdefghijklmnop",
tool="add_records",
document="inventory",
stats="5 records",
status="error",
duration_ms=89,
error_message="Grist API error: Invalid column 'foo'",
)
assert "error" in line
assert "\n Grist API error: Invalid column 'foo'" in line
class TestSetupLogging:
def test_default_level_is_info(self, monkeypatch):
monkeypatch.delenv("LOG_LEVEL", raising=False)
from grist_mcp.logging import setup_logging
setup_logging()
logger = logging.getLogger("grist_mcp")
assert logger.level == logging.INFO
def test_respects_log_level_env(self, monkeypatch):
monkeypatch.setenv("LOG_LEVEL", "DEBUG")
from grist_mcp.logging import setup_logging
setup_logging()
logger = logging.getLogger("grist_mcp")
assert logger.level == logging.DEBUG
def test_invalid_level_defaults_to_info(self, monkeypatch):
monkeypatch.setenv("LOG_LEVEL", "INVALID")
from grist_mcp.logging import setup_logging
setup_logging()
logger = logging.getLogger("grist_mcp")
assert logger.level == logging.INFO
class TestGetLogger:
def test_returns_child_logger(self):
from grist_mcp.logging import get_logger
logger = get_logger("server")
assert logger.name == "grist_mcp.server"
def test_inherits_parent_level(self, monkeypatch):
monkeypatch.setenv("LOG_LEVEL", "WARNING")
from grist_mcp.logging import setup_logging, get_logger
setup_logging()
logger = get_logger("test")
# Child inherits from parent when level is NOTSET
assert logger.getEffectiveLevel() == logging.WARNING
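The `TestTruncateToken` cases pin down the masking behavior precisely: tokens of nine or more characters show a three-character prefix and suffix, anything shorter is fully masked. An implementation consistent with those cases is a few lines (a sketch; the real `grist_mcp.logging.truncate_token` may differ in detail):

```python
def truncate_token(token: str) -> str:
    """Mask a token for logs: show prefix/suffix only when long enough."""
    # At 9+ chars, "abc...xyz" cannot reveal the whole token; below that,
    # prefix + suffix would leak most of it, so mask entirely.
    if len(token) >= 9:
        return f"{token[:3]}...{token[-3:]}"
    return "***"
```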

tests/unit/test_proxy.py (new file)

@@ -0,0 +1,98 @@
from datetime import datetime, timezone
from unittest.mock import AsyncMock, MagicMock
import pytest
from grist_mcp.proxy import parse_proxy_request, ProxyRequest, ProxyError, dispatch_proxy_request
from grist_mcp.session import SessionToken
@pytest.fixture
def mock_session():
return SessionToken(
token="sess_test",
document="sales",
permissions=["read", "write"],
agent_name="test-agent",
created_at=datetime.now(timezone.utc),
expires_at=datetime.now(timezone.utc),
)
@pytest.fixture
def mock_auth():
auth = MagicMock()
doc = MagicMock()
doc.url = "https://grist.example.com"
doc.doc_id = "abc123"
doc.api_key = "key"
auth.get_document.return_value = doc
return auth
def test_parse_proxy_request_valid_add_records():
body = {
"method": "add_records",
"table": "Orders",
"records": [{"item": "Widget", "qty": 10}],
}
request = parse_proxy_request(body)
assert request.method == "add_records"
assert request.table == "Orders"
assert request.records == [{"item": "Widget", "qty": 10}]
def test_parse_proxy_request_missing_method():
body = {"table": "Orders"}
with pytest.raises(ProxyError) as exc_info:
parse_proxy_request(body)
assert exc_info.value.code == "INVALID_REQUEST"
assert "method" in str(exc_info.value)
@pytest.mark.asyncio
async def test_dispatch_add_records(mock_session, mock_auth):
request = ProxyRequest(
method="add_records",
table="Orders",
records=[{"item": "Widget"}],
)
mock_client = AsyncMock()
mock_client.add_records.return_value = [1, 2, 3]
result = await dispatch_proxy_request(
request, mock_session, mock_auth, client=mock_client
)
assert result["success"] is True
assert result["data"]["record_ids"] == [1, 2, 3]
mock_client.add_records.assert_called_once_with("Orders", [{"item": "Widget"}])
@pytest.mark.asyncio
async def test_dispatch_denies_without_permission(mock_auth):
# Session only has read permission
session = SessionToken(
token="sess_test",
document="sales",
permissions=["read"], # No write
agent_name="test-agent",
created_at=datetime.now(timezone.utc),
expires_at=datetime.now(timezone.utc),
)
request = ProxyRequest(
method="add_records", # Requires write
table="Orders",
records=[{"item": "Widget"}],
)
with pytest.raises(ProxyError) as exc_info:
await dispatch_proxy_request(request, session, mock_auth)
assert exc_info.value.code == "UNAUTHORIZED"

tests/unit/test_server.py (new file)

@@ -0,0 +1,101 @@
import pytest
from mcp.types import ListToolsRequest
from grist_mcp.server import create_server
from grist_mcp.config import load_config
from grist_mcp.auth import Authenticator


@pytest.mark.asyncio
async def test_create_server_registers_tools(tmp_path):
    config_file = tmp_path / "config.yaml"
    config_file.write_text("""
documents:
  test-doc:
    url: https://grist.example.com
    doc_id: abc123
    api_key: test-key
tokens:
  - token: test-token
    name: test-agent
    scope:
      - document: test-doc
        permissions: [read, write, schema]
""")
    config = load_config(str(config_file))
    auth = Authenticator(config)
    agent = auth.authenticate("test-token")
    server = create_server(auth, agent)
    # Server should have tools registered
    assert server is not None
    # Get the list_tools handler and call it
    handler = server.request_handlers.get(ListToolsRequest)
    assert handler is not None
    req = ListToolsRequest(method="tools/list")
    result = await handler(req)
    # Check tool names are registered
    tool_names = [t.name for t in result.root.tools]
    assert "list_documents" in tool_names
    assert "list_tables" in tool_names
    assert "describe_table" in tool_names
    assert "get_records" in tool_names
    assert "sql_query" in tool_names
    assert "add_records" in tool_names
    assert "update_records" in tool_names
    assert "delete_records" in tool_names
    assert "create_table" in tool_names
    assert "add_column" in tool_names
    assert "modify_column" in tool_names
    assert "delete_column" in tool_names
    assert "upload_attachment" in tool_names
    # Session tools (always registered)
    assert "get_proxy_documentation" in tool_names
    assert "request_session_token" in tool_names
    # Should have all 15 tools
    assert len(result.root.tools) == 15


@pytest.mark.asyncio
async def test_create_server_registers_session_tools(tmp_path):
    from grist_mcp.session import SessionTokenManager

    config_file = tmp_path / "config.yaml"
    config_file.write_text("""
documents:
  test-doc:
    url: https://grist.example.com
    doc_id: abc123
    api_key: test-key
tokens:
  - token: valid-token
    name: test-agent
    scope:
      - document: test-doc
        permissions: [read, write, schema]
""")
    config = load_config(str(config_file))
    auth = Authenticator(config)
    agent = auth.authenticate("valid-token")
    token_manager = SessionTokenManager()
    server = create_server(auth, agent, token_manager)
    # Get the list_tools handler and call it
    handler = server.request_handlers.get(ListToolsRequest)
    assert handler is not None
    req = ListToolsRequest(method="tools/list")
    result = await handler(req)
    tool_names = [t.name for t in result.root.tools]
    assert "get_proxy_documentation" in tool_names
    assert "request_session_token" in tool_names


@@ -0,0 +1,81 @@
import pytest
from datetime import datetime, timedelta, timezone
from grist_mcp.session import SessionTokenManager, SessionToken


def test_create_token_returns_valid_session_token():
    manager = SessionTokenManager()
    token = manager.create_token(
        agent_name="test-agent",
        document="sales",
        permissions=["read", "write"],
        ttl_seconds=300,
    )
    assert token.token.startswith("sess_")
    assert len(token.token) > 20
    assert token.document == "sales"
    assert token.permissions == ["read", "write"]
    assert token.agent_name == "test-agent"
    assert token.expires_at > datetime.now(timezone.utc)
    assert token.expires_at < datetime.now(timezone.utc) + timedelta(seconds=310)


def test_create_token_caps_ttl_at_maximum():
    manager = SessionTokenManager()
    # Request 2 hours; should be capped at 1 hour
    token = manager.create_token(
        agent_name="test-agent",
        document="sales",
        permissions=["read"],
        ttl_seconds=7200,
    )
    # Should be capped at 3600 seconds (1 hour)
    max_expires = datetime.now(timezone.utc) + timedelta(seconds=3610)
    assert token.expires_at < max_expires


def test_validate_token_returns_session_for_valid_token():
    manager = SessionTokenManager()
    created = manager.create_token(
        agent_name="test-agent",
        document="sales",
        permissions=["read"],
        ttl_seconds=300,
    )
    session = manager.validate_token(created.token)
    assert session is not None
    assert session.document == "sales"
    assert session.agent_name == "test-agent"


def test_validate_token_returns_none_for_unknown_token():
    manager = SessionTokenManager()
    session = manager.validate_token("sess_unknown_token")
    assert session is None


def test_validate_token_returns_none_for_expired_token():
    manager = SessionTokenManager()
    created = manager.create_token(
        agent_name="test-agent",
        document="sales",
        permissions=["read"],
        ttl_seconds=1,
    )
    # Wait for expiry
    import time
    time.sleep(1.5)
    session = manager.validate_token(created.token)
    assert session is None


@@ -0,0 +1,122 @@
import pytest
from grist_mcp.tools.session import get_proxy_documentation, request_session_token
from grist_mcp.auth import Authenticator, Agent, AuthError
from grist_mcp.config import Config, Document, Token, TokenScope
from grist_mcp.session import SessionTokenManager


@pytest.fixture
def sample_config():
    return Config(
        documents={
            "sales": Document(
                url="https://grist.example.com",
                doc_id="abc123",
                api_key="key",
            ),
        },
        tokens=[
            Token(
                token="agent-token",
                name="test-agent",
                scope=[
                    TokenScope(document="sales", permissions=["read", "write"]),
                ],
            ),
        ],
    )


@pytest.fixture
def auth_and_agent(sample_config):
    auth = Authenticator(sample_config)
    agent = auth.authenticate("agent-token")
    return auth, agent


@pytest.mark.asyncio
async def test_get_proxy_documentation_returns_complete_spec():
    result = await get_proxy_documentation()
    assert "description" in result
    assert "endpoint" in result
    assert result["endpoint"] == "POST /api/v1/proxy"
    assert "authentication" in result
    assert "methods" in result
    assert "add_records" in result["methods"]
    assert "get_records" in result["methods"]
    assert "example_script" in result


@pytest.mark.asyncio
async def test_request_session_token_creates_valid_token(auth_and_agent):
    auth, agent = auth_and_agent
    manager = SessionTokenManager()
    result = await request_session_token(
        agent=agent,
        auth=auth,
        token_manager=manager,
        document="sales",
        permissions=["read", "write"],
        ttl_seconds=300,
    )
    assert "token" in result
    assert result["token"].startswith("sess_")
    assert result["document"] == "sales"
    assert result["permissions"] == ["read", "write"]
    assert "expires_at" in result
    assert result["proxy_url"] == "/api/v1/proxy"


@pytest.mark.asyncio
async def test_request_session_token_rejects_unauthorized_document(sample_config):
    auth = Authenticator(sample_config)
    agent = auth.authenticate("agent-token")
    manager = SessionTokenManager()
    with pytest.raises(AuthError, match="Document not in scope"):
        await request_session_token(
            agent=agent,
            auth=auth,
            token_manager=manager,
            document="unauthorized_doc",
            permissions=["read"],
            ttl_seconds=300,
        )


@pytest.mark.asyncio
async def test_request_session_token_rejects_unauthorized_permission(sample_config):
    auth = Authenticator(sample_config)
    agent = auth.authenticate("agent-token")
    manager = SessionTokenManager()
    # Agent has read/write on sales, but not schema
    with pytest.raises(AuthError, match="Permission denied"):
        await request_session_token(
            agent=agent,
            auth=auth,
            token_manager=manager,
            document="sales",
            permissions=["read", "schema"],  # schema not granted
            ttl_seconds=300,
        )


@pytest.mark.asyncio
async def test_request_session_token_rejects_invalid_permission(sample_config):
    auth = Authenticator(sample_config)
    agent = auth.authenticate("agent-token")
    manager = SessionTokenManager()
    with pytest.raises(AuthError, match="Invalid permission"):
        await request_session_token(
            agent=agent,
            auth=auth,
            token_manager=manager,
            document="sales",
            permissions=["read", "invalid_perm"],
            ttl_seconds=300,
        )


@@ -0,0 +1,200 @@
import base64
import pytest
from unittest.mock import AsyncMock
from grist_mcp.tools.write import add_records, update_records, delete_records, upload_attachment
from grist_mcp.auth import Authenticator, AuthError
from grist_mcp.config import Config, Document, Token, TokenScope


@pytest.fixture
def config():
    return Config(
        documents={
            "budget": Document(
                url="https://grist.example.com",
                doc_id="abc123",
                api_key="key",
            ),
        },
        tokens=[
            Token(
                token="write-token",
                name="write-agent",
                scope=[TokenScope(document="budget", permissions=["read", "write"])],
            ),
            Token(
                token="read-token",
                name="read-agent",
                scope=[TokenScope(document="budget", permissions=["read"])],
            ),
        ],
    )


@pytest.fixture
def auth(config):
    return Authenticator(config)


@pytest.fixture
def mock_client():
    client = AsyncMock()
    client.add_records.return_value = [1, 2]
    client.update_records.return_value = None
    client.delete_records.return_value = None
    return client


@pytest.mark.asyncio
async def test_add_records(auth, mock_client):
    agent = auth.authenticate("write-token")
    result = await add_records(
        agent, auth, "budget", "Table1",
        records=[{"Name": "Alice"}, {"Name": "Bob"}],
        client=mock_client,
    )
    assert result == {"inserted_ids": [1, 2]}


@pytest.mark.asyncio
async def test_add_records_denied_without_write(auth, mock_client):
    agent = auth.authenticate("read-token")
    with pytest.raises(AuthError, match="Permission denied"):
        await add_records(
            agent, auth, "budget", "Table1",
            records=[{"Name": "Alice"}],
            client=mock_client,
        )


@pytest.mark.asyncio
async def test_update_records(auth, mock_client):
    agent = auth.authenticate("write-token")
    result = await update_records(
        agent, auth, "budget", "Table1",
        records=[{"id": 1, "fields": {"Name": "Updated"}}],
        client=mock_client,
    )
    assert result == {"updated": True}


@pytest.mark.asyncio
async def test_delete_records(auth, mock_client):
    agent = auth.authenticate("write-token")
    result = await delete_records(
        agent, auth, "budget", "Table1",
        record_ids=[1, 2],
        client=mock_client,
    )
    assert result == {"deleted": True}


# Upload attachment tests
@pytest.fixture
def mock_client_with_attachment():
    client = AsyncMock()
    client.upload_attachment.return_value = {
        "attachment_id": 42,
        "filename": "invoice.pdf",
        "size_bytes": 1024,
    }
    return client


@pytest.mark.asyncio
async def test_upload_attachment_success(auth, mock_client_with_attachment):
    agent = auth.authenticate("write-token")
    content = b"PDF content"
    content_base64 = base64.b64encode(content).decode()
    result = await upload_attachment(
        agent, auth, "budget",
        filename="invoice.pdf",
        content_base64=content_base64,
        client=mock_client_with_attachment,
    )
    assert result == {
        "attachment_id": 42,
        "filename": "invoice.pdf",
        "size_bytes": 1024,
    }
    mock_client_with_attachment.upload_attachment.assert_called_once_with(
        "invoice.pdf", content, "application/pdf"
    )


@pytest.mark.asyncio
async def test_upload_attachment_invalid_base64(auth, mock_client_with_attachment):
    agent = auth.authenticate("write-token")
    with pytest.raises(ValueError, match="Invalid base64 encoding"):
        await upload_attachment(
            agent, auth, "budget",
            filename="test.txt",
            content_base64="not-valid-base64!!!",
            client=mock_client_with_attachment,
        )


@pytest.mark.asyncio
async def test_upload_attachment_auth_required(auth, mock_client_with_attachment):
    agent = auth.authenticate("read-token")
    content_base64 = base64.b64encode(b"test").decode()
    with pytest.raises(AuthError, match="Permission denied"):
        await upload_attachment(
            agent, auth, "budget",
            filename="test.txt",
            content_base64=content_base64,
            client=mock_client_with_attachment,
        )


@pytest.mark.asyncio
async def test_upload_attachment_mime_detection(auth, mock_client_with_attachment):
    agent = auth.authenticate("write-token")
    content = b"PNG content"
    content_base64 = base64.b64encode(content).decode()
    await upload_attachment(
        agent, auth, "budget",
        filename="image.png",
        content_base64=content_base64,
        client=mock_client_with_attachment,
    )
    # Should auto-detect image/png from filename
    mock_client_with_attachment.upload_attachment.assert_called_once_with(
        "image.png", content, "image/png"
    )


@pytest.mark.asyncio
async def test_upload_attachment_explicit_content_type(auth, mock_client_with_attachment):
    agent = auth.authenticate("write-token")
    content = b"custom content"
    content_base64 = base64.b64encode(content).decode()
    await upload_attachment(
        agent, auth, "budget",
        filename="file.dat",
        content_base64=content_base64,
        content_type="application/custom",
        client=mock_client_with_attachment,
    )
    # Should use explicit content type
    mock_client_with_attachment.upload_attachment.assert_called_once_with(
        "file.dat", content, "application/custom"
    )

uv.lock generated

@@ -153,7 +153,7 @@ wheels = [
[[package]]
name = "grist-mcp"
version = "0.1.0"
version = "1.2.0"
source = { editable = "." }
dependencies = [
{ name = "httpx" },
@@ -169,6 +169,8 @@ dev = [
{ name = "pytest" },
{ name = "pytest-asyncio" },
{ name = "pytest-httpx" },
{ name = "pytest-timeout" },
{ name = "rich" },
]
[package.metadata]
@@ -178,7 +180,9 @@ requires-dist = [
{ name = "pytest", marker = "extra == 'dev'", specifier = ">=8.0.0" },
{ name = "pytest-asyncio", marker = "extra == 'dev'", specifier = ">=0.24.0" },
{ name = "pytest-httpx", marker = "extra == 'dev'", specifier = ">=0.32.0" },
{ name = "pytest-timeout", marker = "extra == 'dev'", specifier = ">=2.0.0" },
{ name = "pyyaml", specifier = ">=6.0" },
{ name = "rich", marker = "extra == 'dev'", specifier = ">=13.0.0" },
{ name = "sse-starlette", specifier = ">=2.1.0" },
{ name = "starlette", specifier = ">=0.41.0" },
{ name = "uvicorn", specifier = ">=0.32.0" },
@@ -276,9 +280,21 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/41/45/1a4ed80516f02155c51f51e8cedb3c1902296743db0bbc66608a0db2814f/jsonschema_specifications-2025.9.1-py3-none-any.whl", hash = "sha256:98802fee3a11ee76ecaca44429fda8a41bff98b00a0f2838151b113f210cc6fe", size = 18437, upload-time = "2025-09-08T01:34:57.871Z" },
]
[[package]]
name = "markdown-it-py"
version = "4.0.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "mdurl" },
]
sdist = { url = "https://files.pythonhosted.org/packages/5b/f5/4ec618ed16cc4f8fb3b701563655a69816155e79e24a17b651541804721d/markdown_it_py-4.0.0.tar.gz", hash = "sha256:cb0a2b4aa34f932c007117b194e945bd74e0ec24133ceb5bac59009cda1cb9f3", size = 73070, upload-time = "2025-08-11T12:57:52.854Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/94/54/e7d793b573f298e1c9013b8c4dade17d481164aa517d1d7148619c2cedbf/markdown_it_py-4.0.0-py3-none-any.whl", hash = "sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147", size = 87321, upload-time = "2025-08-11T12:57:51.923Z" },
]
[[package]]
name = "mcp"
version = "1.23.1"
version = "1.25.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
@@ -296,9 +312,18 @@ dependencies = [
{ name = "typing-inspection" },
{ name = "uvicorn", marker = "sys_platform != 'emscripten'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/12/42/10c0c09ca27aceacd8c428956cfabdd67e3d328fe55c4abc16589285d294/mcp-1.23.1.tar.gz", hash = "sha256:7403e053e8e2283b1e6ae631423cb54736933fea70b32422152e6064556cd298", size = 596519, upload-time = "2025-12-02T18:41:12.807Z" }
sdist = { url = "https://files.pythonhosted.org/packages/d5/2d/649d80a0ecf6a1f82632ca44bec21c0461a9d9fc8934d38cb5b319f2db5e/mcp-1.25.0.tar.gz", hash = "sha256:56310361ebf0364e2d438e5b45f7668cbb124e158bb358333cd06e49e83a6802", size = 605387, upload-time = "2025-12-19T10:19:56.985Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9f/9e/26e1d2d2c6afe15dfba5ca6799eeeea7656dce625c22766e4c57305e9cc2/mcp-1.23.1-py3-none-any.whl", hash = "sha256:3ce897fcc20a41bd50b4c58d3aa88085f11f505dcc0eaed48930012d34c731d8", size = 231433, upload-time = "2025-12-02T18:41:11.195Z" },
{ url = "https://files.pythonhosted.org/packages/e2/fc/6dc7659c2ae5ddf280477011f4213a74f806862856b796ef08f028e664bf/mcp-1.25.0-py3-none-any.whl", hash = "sha256:b37c38144a666add0862614cc79ec276e97d72aa8ca26d622818d4e278b9721a", size = 233076, upload-time = "2025-12-19T10:19:55.416Z" },
]
[[package]]
name = "mdurl"
version = "0.1.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/d6/54/cfe61301667036ec958cb99bd3efefba235e65cdeb9c84d24a8293ba1d90/mdurl-0.1.2.tar.gz", hash = "sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba", size = 8729, upload-time = "2022-08-14T12:40:10.846Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979, upload-time = "2022-08-14T12:40:09.779Z" },
]
[[package]]
@@ -421,7 +446,7 @@ crypto = [
[[package]]
name = "pytest"
version = "9.0.1"
version = "9.0.2"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
@@ -430,9 +455,9 @@ dependencies = [
{ name = "pluggy" },
{ name = "pygments" },
]
sdist = { url = "https://files.pythonhosted.org/packages/07/56/f013048ac4bc4c1d9be45afd4ab209ea62822fb1598f40687e6bf45dcea4/pytest-9.0.1.tar.gz", hash = "sha256:3e9c069ea73583e255c3b21cf46b8d3c56f6e3a1a8f6da94ccb0fcf57b9d73c8", size = 1564125, upload-time = "2025-11-12T13:05:09.333Z" }
sdist = { url = "https://files.pythonhosted.org/packages/d1/db/7ef3487e0fb0049ddb5ce41d3a49c235bf9ad299b6a25d5780a89f19230f/pytest-9.0.2.tar.gz", hash = "sha256:75186651a92bd89611d1d9fc20f0b4345fd827c41ccd5c299a868a05d70edf11", size = 1568901, upload-time = "2025-12-06T21:30:51.014Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/0b/8b/6300fb80f858cda1c51ffa17075df5d846757081d11ab4aa35cef9e6258b/pytest-9.0.1-py3-none-any.whl", hash = "sha256:67be0030d194df2dfa7b556f2e56fb3c3315bd5c8822c6951162b92b32ce7dad", size = 373668, upload-time = "2025-11-12T13:05:07.379Z" },
{ url = "https://files.pythonhosted.org/packages/3b/ab/b3226f0bd7cdcf710fbede2b3548584366da3b19b5021e74f5bde2a8fa3f/pytest-9.0.2-py3-none-any.whl", hash = "sha256:711ffd45bf766d5264d487b917733b453d917afd2b0ad65223959f59089f875b", size = 374801, upload-time = "2025-12-06T21:30:49.154Z" },
]
[[package]]
@@ -460,6 +485,18 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/e2/d2/1eb1ea9c84f0d2033eb0b49675afdc71aa4ea801b74615f00f3c33b725e3/pytest_httpx-0.36.0-py3-none-any.whl", hash = "sha256:bd4c120bb80e142df856e825ec9f17981effb84d159f9fa29ed97e2357c3a9c8", size = 20229, upload-time = "2025-12-02T16:34:56.45Z" },
]
[[package]]
name = "pytest-timeout"
version = "2.4.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pytest" },
]
sdist = { url = "https://files.pythonhosted.org/packages/ac/82/4c9ecabab13363e72d880f2fb504c5f750433b2b6f16e99f4ec21ada284c/pytest_timeout-2.4.0.tar.gz", hash = "sha256:7e68e90b01f9eff71332b25001f85c75495fc4e3a836701876183c4bcfd0540a", size = 17973, upload-time = "2025-05-05T19:44:34.99Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fa/b6/3127540ecdf1464a00e5a01ee60a1b09175f6913f0644ac748494d9c4b21/pytest_timeout-2.4.0-py3-none-any.whl", hash = "sha256:c42667e5cdadb151aeb5b26d114aff6bdf5a907f176a007a30b940d3d865b5c2", size = 14382, upload-time = "2025-05-05T19:44:33.502Z" },
]
[[package]]
name = "python-dotenv"
version = "1.2.1"
@@ -471,11 +508,11 @@ wheels = [
[[package]]
name = "python-multipart"
version = "0.0.20"
version = "0.0.21"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f3/87/f44d7c9f274c7ee665a29b885ec97089ec5dc034c7f3fafa03da9e39a09e/python_multipart-0.0.20.tar.gz", hash = "sha256:8dd0cab45b8e23064ae09147625994d090fa46f5b0d1e13af944c331a7fa9d13", size = 37158, upload-time = "2024-12-16T19:45:46.972Z" }
sdist = { url = "https://files.pythonhosted.org/packages/78/96/804520d0850c7db98e5ccb70282e29208723f0964e88ffd9d0da2f52ea09/python_multipart-0.0.21.tar.gz", hash = "sha256:7137ebd4d3bbf70ea1622998f902b97a29434a9e8dc40eb203bbcf7c2a2cba92", size = 37196, upload-time = "2025-12-17T09:24:22.446Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/45/58/38b5afbc1a800eeea951b9285d3912613f2603bdf897a4ab0f4bd7f405fc/python_multipart-0.0.20-py3-none-any.whl", hash = "sha256:8a62d3a8335e06589fe01f2a3e178cdcc632f3fbe0d492ad9ee0ec35aab1f104", size = 24546, upload-time = "2024-12-16T19:45:44.423Z" },
{ url = "https://files.pythonhosted.org/packages/aa/76/03af049af4dcee5d27442f71b6924f01f3efb5d2bd34f23fcd563f2cc5f5/python_multipart-0.0.21-py3-none-any.whl", hash = "sha256:cf7a6713e01c87aa35387f4774e812c4361150938d20d232800f75ffcf266090", size = 24541, upload-time = "2025-12-17T09:24:21.153Z" },
]
[[package]]
@@ -527,6 +564,19 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/2c/58/ca301544e1fa93ed4f80d724bf5b194f6e4b945841c5bfd555878eea9fcb/referencing-0.37.0-py3-none-any.whl", hash = "sha256:381329a9f99628c9069361716891d34ad94af76e461dcb0335825aecc7692231", size = 26766, upload-time = "2025-10-13T15:30:47.625Z" },
]
[[package]]
name = "rich"
version = "14.2.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "markdown-it-py" },
{ name = "pygments" },
]
sdist = { url = "https://files.pythonhosted.org/packages/fb/d2/8920e102050a0de7bfabeb4c4614a49248cf8d5d7a8d01885fbb24dc767a/rich-14.2.0.tar.gz", hash = "sha256:73ff50c7c0c1c77c8243079283f4edb376f0f6442433aecb8ce7e6d0b92d1fe4", size = 219990, upload-time = "2025-10-09T14:16:53.064Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/25/7a/b0178788f8dc6cafce37a212c99565fa1fe7872c70c6c9c1e1a372d9d88f/rich-14.2.0-py3-none-any.whl", hash = "sha256:76bc51fe2e57d2b1be1f96c524b890b816e334ab4c1e45888799bfaab0021edd", size = 243393, upload-time = "2025-10-09T14:16:51.245Z" },
]
[[package]]
name = "rpds-py"
version = "0.30.0"
@@ -566,14 +616,15 @@ wheels = [
[[package]]
name = "sse-starlette"
version = "3.0.3"
version = "3.1.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "anyio" },
{ name = "starlette" },
]
sdist = { url = "https://files.pythonhosted.org/packages/db/3c/fa6517610dc641262b77cc7bf994ecd17465812c1b0585fe33e11be758ab/sse_starlette-3.0.3.tar.gz", hash = "sha256:88cfb08747e16200ea990c8ca876b03910a23b547ab3bd764c0d8eb81019b971", size = 21943, upload-time = "2025-10-30T18:44:20.117Z" }
sdist = { url = "https://files.pythonhosted.org/packages/62/08/8f554b0e5bad3e4e880521a1686d96c05198471eed860b0eb89b57ea3636/sse_starlette-3.1.1.tar.gz", hash = "sha256:bffa531420c1793ab224f63648c059bcadc412bf9fdb1301ac8de1cf9a67b7fb", size = 24306, upload-time = "2025-12-26T15:22:53.836Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/23/a0/984525d19ca5c8a6c33911a0c164b11490dd0f90ff7fd689f704f84e9a11/sse_starlette-3.0.3-py3-none-any.whl", hash = "sha256:af5bf5a6f3933df1d9c7f8539633dc8444ca6a97ab2e2a7cd3b6e431ac03a431", size = 11765, upload-time = "2025-10-30T18:44:18.834Z" },
{ url = "https://files.pythonhosted.org/packages/e3/31/4c281581a0f8de137b710a07f65518b34bcf333b201cfa06cfda9af05f8a/sse_starlette-3.1.1-py3-none-any.whl", hash = "sha256:bb38f71ae74cfd86b529907a9fda5632195dfa6ae120f214ea4c890c7ee9d436", size = 12442, upload-time = "2025-12-26T15:22:52.911Z" },
]
[[package]]
@@ -611,13 +662,13 @@ wheels = [
[[package]]
name = "uvicorn"
version = "0.38.0"
version = "0.40.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "click" },
{ name = "h11" },
]
sdist = { url = "https://files.pythonhosted.org/packages/cb/ce/f06b84e2697fef4688ca63bdb2fdf113ca0a3be33f94488f2cadb690b0cf/uvicorn-0.38.0.tar.gz", hash = "sha256:fd97093bdd120a2609fc0d3afe931d4d4ad688b6e75f0f929fde1bc36fe0e91d", size = 80605, upload-time = "2025-10-18T13:46:44.63Z" }
sdist = { url = "https://files.pythonhosted.org/packages/c3/d1/8f3c683c9561a4e6689dd3b1d345c815f10f86acd044ee1fb9a4dcd0b8c5/uvicorn-0.40.0.tar.gz", hash = "sha256:839676675e87e73694518b5574fd0f24c9d97b46bea16df7b8c05ea1a51071ea", size = 81761, upload-time = "2025-12-21T14:16:22.45Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ee/d9/d88e73ca598f4f6ff671fb5fde8a32925c2e08a637303a1d12883c7305fa/uvicorn-0.38.0-py3-none-any.whl", hash = "sha256:48c0afd214ceb59340075b4a052ea1ee91c16fbc2a9b1469cca0e54566977b02", size = 68109, upload-time = "2025-10-18T13:46:42.958Z" },
{ url = "https://files.pythonhosted.org/packages/3d/d8/2083a1daa7439a66f3a48589a57d576aa117726762618f6bb09fe3798796/uvicorn-0.40.0-py3-none-any.whl", hash = "sha256:c6c8f55bc8bf13eb6fa9ff87ad62308bbbc33d0b67f84293151efe87e0d5f2ee", size = 68502, upload-time = "2025-12-21T14:16:21.041Z" },
]