feat: add standardized testing scripts and documentation

Add comprehensive suite of testing scripts for different workflows:
- test.sh: Interactive menu for all testing operations
- quick_test.sh: Fast unit test feedback (~10-30s)
- run_tests.sh: Main test runner with full configuration options
- coverage_report.sh: Coverage analysis with HTML/JSON/terminal reports
- ci_test.sh: CI/CD optimized testing with JUnit/coverage XML output

Features:
- Colored terminal output with clear error messages
- Consistent option flags across all scripts
- Support for test markers (unit, integration, e2e, slow, etc.)
- Parallel execution support
- Coverage thresholds (default: 85%)
- Virtual environment and dependency checks

Documentation:
- Update CLAUDE.md with testing section and examples
- Expand docs/developer/testing.md with comprehensive guide
- Add scripts/README.md with quick reference

All scripts are tested and executable. This standardizes the testing
process for local development, CI/CD, and pull request workflows.
Commit 923cdec5ca (parent 84320ab8a5), 2025-11-03 21:39:41 -05:00
12 changed files with 1467 additions and 48 deletions


@@ -327,6 +327,55 @@ DEPLOYMENT_MODE=DEV python main.py configs/default_config.json
## Testing Changes
### Automated Test Scripts
The project includes standardized test scripts for different workflows:
```bash
# Quick feedback during development (unit tests only, ~10-30 seconds)
bash scripts/quick_test.sh
# Full test suite with coverage (before commits/PRs)
bash scripts/run_tests.sh
# Generate coverage report with HTML output
bash scripts/coverage_report.sh -o
# CI/CD optimized testing (for automation)
bash scripts/ci_test.sh -f -m 85
# Interactive menu (recommended for beginners)
bash scripts/test.sh
```
**Common test script options:**
```bash
# Run only unit tests
bash scripts/run_tests.sh -t unit
# Run with custom markers
bash scripts/run_tests.sh -m "unit and not slow"
# Fail fast on first error
bash scripts/run_tests.sh -f
# Run tests in parallel
bash scripts/run_tests.sh -p
# Skip coverage reporting (faster)
bash scripts/run_tests.sh -n
```
**Available test markers:**
- `unit` - Fast, isolated unit tests
- `integration` - Tests with real dependencies
- `e2e` - End-to-end tests (requires Docker)
- `slow` - Tests taking >10 seconds
- `performance` - Performance benchmarks
- `security` - Security tests
### Manual Testing Workflow
When modifying agent behavior or adding tools:
1. Create test config with short date range (2-3 days)
2. Set `max_steps` low (e.g., 10) to iterate faster
@@ -334,6 +383,13 @@ When modifying agent behavior or adding tools:
4. Verify position updates in `position/position.jsonl`
5. Use `main.sh` only for full end-to-end testing
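Step 4 of that loop can be checked from the shell. The sketch below uses a sample file in place of `position/position.jsonl`, and the field names are illustrative, not the project's actual schema:

```shell
# Create a sample log so the sketch is self-contained; the real file is
# position/position.jsonl in the project root (fields here are made up)
printf '%s\n' '{"date": "2025-01-02", "cash": 10000}' \
              '{"date": "2025-01-03", "cash": 9800}' > /tmp/position.jsonl

# Step 4: confirm the most recent position entry is valid JSON
tail -n 1 /tmp/position.jsonl | python3 -m json.tool > /dev/null \
  && echo "last entry: valid JSON"
```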
### Test Coverage
- **Minimum coverage:** 85%
- **Target coverage:** 90%
- **Configuration:** `pytest.ini`
- **Coverage reports:** `htmlcov/index.html`, `coverage.xml`, terminal output
See [docs/developer/testing.md](docs/developer/testing.md) for the complete testing guide.
## Documentation Structure


@@ -61,6 +61,15 @@ curl -X POST http://localhost:5000/simulate/to-date \
**Focus:** Comprehensive testing, documentation, and production readiness
#### API Consolidation & Improvements
- **Endpoint Refactoring** - Simplify API surface before v1.0
- Merge results and reasoning endpoints:
- Current: `/jobs/{job_id}/results` and `/jobs/{job_id}/reasoning/{model_name}` are separate
- Consolidated: Single endpoint with query parameters to control response
- `/jobs/{job_id}/results?include_reasoning=true&model=<model_name>`
- Benefits: Fewer endpoints, more consistent API design, easier to use
- Maintains backward compatibility with legacy endpoints (deprecated but functional)
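For illustration, a call to the consolidated endpoint might look like this from the shell; the job id and model name are made up, and the server address follows the earlier curl examples:

```shell
job_id=123       # illustrative job id
model="gpt-4o"   # illustrative model name
url="http://localhost:5000/jobs/${job_id}/results?include_reasoning=true&model=${model}"
echo "$url"
# then, against a running server: curl -s "$url"
```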
#### Testing & Validation
- **Comprehensive Test Suite** - Full coverage of core functionality
- Unit tests for all agent components
@@ -93,10 +102,37 @@ curl -X POST http://localhost:5000/simulate/to-date \
- File system error handling (disk full, permission errors)
- Comprehensive error messages with troubleshooting guidance
- Logging improvements:
- Structured logging with consistent format
- Log rotation and size management
- Error classification (user error vs. system error)
- Debug mode for detailed diagnostics
- **Configurable Log Levels** - Environment-based logging control
- `LOG_LEVEL` environment variable (DEBUG, INFO, WARNING, ERROR, CRITICAL)
- Per-component log level configuration (API, agents, MCP tools, database)
- Default production level: INFO, development level: DEBUG
- **Structured Logging** - Consistent, parseable log format
- JSON-formatted logs option for production (machine-readable)
- Human-readable format for development
- Consistent fields: timestamp, level, component, message, context
- Correlation IDs for request tracing across components
- **Log Clarity & Organization** - Improve log readability
- Clear log prefixes per component: `[API]`, `[AGENT]`, `[MCP]`, `[DB]`
- Reduce noise: consolidate repetitive messages, rate-limit verbose logs
- Action-oriented messages: "Starting simulation job_id=123" vs "Job started"
- Include relevant context: model name, date, symbols in trading logs
- Progress indicators for long operations (e.g., "Processing date 15/30")
- **Log Rotation & Management** - Prevent disk space issues
- Automatic log rotation by size (default: 10MB per file)
- Retention policy (default: 30 days)
- Separate log files per component (api.log, agents.log, mcp.log)
- Archive old logs with compression
- **Error Classification** - Distinguish error types
- User errors (invalid input, configuration issues): WARN level
- System errors (API failures, database errors): ERROR level
- Critical failures (MCP service down, data corruption): CRITICAL level
- Include error codes for programmatic handling
- **Debug Mode** - Enhanced diagnostics for troubleshooting
- `DEBUG=true` environment variable
- Detailed request/response logging (sanitize API keys)
- MCP tool call/response logging with timing
- Database query logging with execution time
- Memory and resource usage tracking
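Two of these proposals, the INFO production default and the error-to-level mapping, can be sketched in shell. This is a sketch of the roadmap items above, not an existing implementation:

```shell
# Default the log level from the environment (production default: INFO)
LOG_LEVEL="${LOG_LEVEL:-INFO}"
echo "effective log level: $LOG_LEVEL"

# Map an error class to a log level, per the proposed classification
level_for() {
  case "$1" in
    user)     echo "WARN" ;;      # invalid input, configuration issues
    system)   echo "ERROR" ;;     # API failures, database errors
    critical) echo "CRITICAL" ;;  # MCP service down, data corruption
    *)        echo "INFO" ;;
  esac
}
echo "user error -> $(level_for user)"
echo "mcp service down -> $(level_for critical)"
```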
#### Performance & Scalability
- **Performance Optimization** - Ensure efficient resource usage


@@ -1,15 +1,310 @@
# Testing Guide
This guide covers running tests for the AI-Trader project, including unit tests, integration tests, and end-to-end tests.
## Quick Start
```bash
# Interactive test menu (recommended for local development)
bash scripts/test.sh
# Quick unit tests (fast feedback)
bash scripts/quick_test.sh
# Full test suite with coverage
bash scripts/run_tests.sh
# Generate coverage report
bash scripts/coverage_report.sh
```
---
## Test Scripts Overview
### 1. `test.sh` - Interactive Test Helper
**Purpose:** Interactive menu for common test operations
**Usage:**
```bash
# Interactive mode
bash scripts/test.sh
# Non-interactive mode
bash scripts/test.sh -t unit -f
```
**Menu Options:**
1. Quick test (unit only, no coverage)
2. Full test suite (with coverage)
3. Coverage report
4. Unit tests only
5. Integration tests only
6. E2E tests only
7. Run with custom markers
8. Parallel execution
9. CI mode
---
### 2. `quick_test.sh` - Fast Feedback Loop
**Purpose:** Rapid test execution during development
**Usage:**
```bash
bash scripts/quick_test.sh
```
**When to use:**
- During active development
- Before committing code
- Quick verification of changes
- TDD workflow
---
### 3. `run_tests.sh` - Main Test Runner
**Purpose:** Comprehensive test execution with full configuration options
**Usage:**
```bash
# Run all tests with coverage (default)
bash scripts/run_tests.sh
# Run only unit tests
bash scripts/run_tests.sh -t unit
# Run without coverage
bash scripts/run_tests.sh -n
# Run with custom markers
bash scripts/run_tests.sh -m "unit and not slow"
# Fail on first error
bash scripts/run_tests.sh -f
# Run tests in parallel
bash scripts/run_tests.sh -p
```
**Options:**
```
-t, --type TYPE Test type: all, unit, integration, e2e (default: all)
-m, --markers MARKERS Run tests matching markers
-f, --fail-fast Stop on first failure
-n, --no-coverage Skip coverage reporting
-v, --verbose Verbose output
-p, --parallel Run tests in parallel
--no-html Skip HTML coverage report
-h, --help Show help message
```
---
### 4. `coverage_report.sh` - Coverage Analysis
**Purpose:** Generate detailed coverage reports
**Usage:**
```bash
# Generate coverage report (default: 85% threshold)
bash scripts/coverage_report.sh
# Set custom coverage threshold
bash scripts/coverage_report.sh -m 90
# Generate and open HTML report
bash scripts/coverage_report.sh -o
```
**Options:**
```
-m, --min-coverage NUM Minimum coverage percentage (default: 85)
-o, --open Open HTML report in browser
-i, --include-integration Include integration and e2e tests
-h, --help Show help message
```
---
### 5. `ci_test.sh` - CI/CD Optimized Runner
**Purpose:** Test execution optimized for CI/CD environments
**Usage:**
```bash
# Basic CI run
bash scripts/ci_test.sh
# Fail fast with custom coverage
bash scripts/ci_test.sh -f -m 90
# Using environment variables
CI_FAIL_FAST=true CI_COVERAGE_MIN=90 bash scripts/ci_test.sh
```
**Environment Variables:**
```bash
CI_FAIL_FAST=true # Enable fail-fast mode
CI_COVERAGE_MIN=90 # Set coverage threshold
CI_PARALLEL=true # Enable parallel execution
CI_VERBOSE=true # Enable verbose output
```
**Output artifacts:**
- `junit.xml` - Test results for CI reporting
- `coverage.xml` - Coverage data for CI tools
- `htmlcov/` - HTML coverage report
---
## Test Structure
```
tests/
├── conftest.py # Shared pytest fixtures
├── unit/ # Fast, isolated tests
├── integration/ # Tests with dependencies
├── e2e/ # End-to-end tests
├── performance/ # Performance benchmarks
└── security/ # Security tests
```
---
## Test Markers
Tests are organized using pytest markers:
| Marker | Description | Usage |
|--------|-------------|-------|
| `unit` | Fast, isolated unit tests | `-m unit` |
| `integration` | Tests with real dependencies | `-m integration` |
| `e2e` | End-to-end tests (requires Docker) | `-m e2e` |
| `slow` | Tests taking >10 seconds | `-m slow` |
| `performance` | Performance benchmarks | `-m performance` |
| `security` | Security tests | `-m security` |
**Examples:**
```bash
# Run only unit tests
bash scripts/run_tests.sh -m unit
# Run all except slow tests
bash scripts/run_tests.sh -m "not slow"
# Combine markers
bash scripts/run_tests.sh -m "unit and not slow"
```
---
## Common Workflows
### During Development
```bash
# Quick check before each commit
bash scripts/quick_test.sh
# Run relevant test type
bash scripts/run_tests.sh -t unit -f
# Full test before push
bash scripts/run_tests.sh
```
### Before Pull Request
```bash
# Run full test suite
bash scripts/run_tests.sh
# Generate coverage report
bash scripts/coverage_report.sh -o
# Ensure coverage meets 85% threshold
```
### CI/CD Pipeline
```bash
# Run CI-optimized tests
bash scripts/ci_test.sh -f -m 85
```
---
## Debugging Test Failures
```bash
# Run with verbose output
bash scripts/run_tests.sh -v -f
# Run specific test file
./venv/bin/python -m pytest tests/unit/test_database.py -v
# Run specific test function
./venv/bin/python -m pytest tests/unit/test_database.py::test_function -v
# Run with debugger on failure
./venv/bin/python -m pytest --pdb tests/
# Show print statements
./venv/bin/python -m pytest -s tests/
```
---
## Coverage Configuration
Configured in `pytest.ini`:
- Minimum coverage: 85%
- Target coverage: 90%
- Coverage reports: HTML, JSON, terminal
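A `pytest.ini` sketch consistent with those numbers and the markers table; it assumes pytest-cov, and the project's actual file may differ:

```ini
[pytest]
# Coverage targets mirror the test scripts (--cov=api --cov=agent --cov=tools)
addopts = --cov=api --cov=agent --cov=tools --cov-fail-under=85
markers =
    unit: Fast, isolated unit tests
    integration: Tests with real dependencies
    e2e: End-to-end tests (requires Docker)
    slow: Tests taking >10 seconds
    performance: Performance benchmarks
    security: Security tests
```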
---
## Writing New Tests
### Unit Test Example
```python
import pytest


@pytest.mark.unit
def test_function_returns_expected_value():
    # Arrange
    input_data = {"key": "value"}
    expected_output = {"key": "value"}  # the result you expect from the code under test
    # Act
    result = my_function(input_data)  # my_function: the function being tested
    # Assert
    assert result == expected_output
```
### Integration Test Example
```python
import pytest


@pytest.mark.integration
def test_database_integration(clean_db):
    # clean_db: shared fixture from tests/conftest.py; helpers are illustrative
    conn = get_db_connection(clean_db)
    insert_data(conn, test_data)
    result = query_data(conn)
    assert len(result) == 1
```
---
## Docker Testing
### Docker Build Validation
```bash
chmod +x scripts/*.sh
bash scripts/validate_docker_build.sh
```
@@ -30,35 +325,16 @@ Tests all API endpoints with real simulations.
---
## Summary
| Script | Purpose | Speed | Coverage | Use Case |
|--------|---------|-------|----------|----------|
| `test.sh` | Interactive menu | Varies | Optional | Local development |
| `quick_test.sh` | Fast feedback | ⚡⚡⚡ | No | Active development |
| `run_tests.sh` | Full test suite | ⚡⚡ | Yes | Pre-commit, pre-PR |
| `coverage_report.sh` | Coverage analysis | ⚡ | Yes | Coverage review |
| `ci_test.sh` | CI/CD pipeline | ⚡⚡ | Yes | Automation |
---
For detailed testing procedures and troubleshooting, see [TESTING_GUIDE.md](../../TESTING_GUIDE.md).

scripts/README.md Normal file

@@ -0,0 +1,109 @@
# AI-Trader Scripts
This directory contains standardized scripts for testing, validation, and operations.
## Testing Scripts
### Interactive Testing
**`test.sh`** - Interactive test menu
```bash
bash scripts/test.sh
```
User-friendly menu for all testing operations. Best for local development.
### Development Testing
**`quick_test.sh`** - Fast unit test feedback
```bash
bash scripts/quick_test.sh
```
- Runs unit tests only
- No coverage
- Fails fast
- ~10-30 seconds
**`run_tests.sh`** - Full test suite
```bash
bash scripts/run_tests.sh [OPTIONS]
```
- All test types (unit, integration, e2e)
- Coverage reporting
- Parallel execution support
- Highly configurable
**`coverage_report.sh`** - Coverage analysis
```bash
bash scripts/coverage_report.sh [OPTIONS]
```
- Generate HTML/JSON/terminal reports
- Check coverage thresholds
- Open reports in browser
### CI/CD Testing
**`ci_test.sh`** - CI-optimized testing
```bash
bash scripts/ci_test.sh [OPTIONS]
```
- JUnit XML output
- Coverage XML for CI tools
- Environment variable configuration
- Excludes Docker tests
## Validation Scripts
**`validate_docker_build.sh`** - Docker build validation
```bash
bash scripts/validate_docker_build.sh
```
Validates Docker setup, build, and container startup.
**`test_api_endpoints.sh`** - API endpoint testing
```bash
bash scripts/test_api_endpoints.sh
```
Tests all REST API endpoints with real simulations.
## Other Scripts
**`migrate_price_data.py`** - Data migration utility
```bash
python scripts/migrate_price_data.py
```
Migrates price data between formats.
## Quick Reference
| Task | Script | Command |
|------|--------|---------|
| Quick test | `quick_test.sh` | `bash scripts/quick_test.sh` |
| Full test | `run_tests.sh` | `bash scripts/run_tests.sh` |
| Coverage | `coverage_report.sh` | `bash scripts/coverage_report.sh -o` |
| CI test | `ci_test.sh` | `bash scripts/ci_test.sh -f` |
| Interactive | `test.sh` | `bash scripts/test.sh` |
| Docker validation | `validate_docker_build.sh` | `bash scripts/validate_docker_build.sh` |
| API testing | `test_api_endpoints.sh` | `bash scripts/test_api_endpoints.sh` |
## Common Options
Most test scripts support:
- `-h, --help` - Show help
- `-v, --verbose` - Verbose output
- `-f, --fail-fast` - Stop on first failure
- `-t, --type TYPE` - Test type (unit, integration, e2e, all)
- `-m, --markers MARKERS` - Pytest markers
- `-p, --parallel` - Parallel execution
## Documentation
For detailed usage, see:
- [Testing Guide](../docs/developer/testing.md)
- [Testing & Validation Guide](../TESTING_GUIDE.md)
## Making Scripts Executable
If scripts are not executable:
```bash
chmod +x scripts/*.sh
```

scripts/ci_test.sh Executable file

@@ -0,0 +1,243 @@
#!/bin/bash
# AI-Trader CI Test Script
# Optimized for CI/CD environments (GitHub Actions, Jenkins, etc.)
set -e
# Colors for output (disabled in CI if not supported)
if [ -t 1 ]; then
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
else
RED=''
GREEN=''
YELLOW=''
BLUE=''
NC=''
fi
# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
# CI-specific defaults
FAIL_FAST=false
JUNIT_XML=true
COVERAGE_MIN=85
PARALLEL=false
VERBOSE=false
# Parse environment variables (common in CI)
if [ -n "$CI_FAIL_FAST" ]; then
FAIL_FAST="$CI_FAIL_FAST"
fi
if [ -n "$CI_COVERAGE_MIN" ]; then
COVERAGE_MIN="$CI_COVERAGE_MIN"
fi
if [ -n "$CI_PARALLEL" ]; then
PARALLEL="$CI_PARALLEL"
fi
if [ -n "$CI_VERBOSE" ]; then
VERBOSE="$CI_VERBOSE"
fi
# Parse command line arguments (override env vars)
while [[ $# -gt 0 ]]; do
case $1 in
-f|--fail-fast)
FAIL_FAST=true
shift
;;
-m|--min-coverage)
COVERAGE_MIN="$2"
shift 2
;;
-p|--parallel)
PARALLEL=true
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
--no-junit)
JUNIT_XML=false
shift
;;
-h|--help)
cat << EOF
Usage: $0 [OPTIONS]
CI-optimized test runner for AI-Trader.
OPTIONS:
-f, --fail-fast Stop on first failure
-m, --min-coverage NUM Minimum coverage percentage (default: 85)
-p, --parallel Run tests in parallel
-v, --verbose Verbose output
--no-junit Skip JUnit XML generation
-h, --help Show this help message
ENVIRONMENT VARIABLES:
CI_FAIL_FAST Set to 'true' to enable fail-fast
CI_COVERAGE_MIN Minimum coverage threshold
CI_PARALLEL Set to 'true' to enable parallel execution
CI_VERBOSE Set to 'true' for verbose output
EXAMPLES:
# Basic CI run
$0
# Fail fast with custom coverage threshold
$0 -f -m 90
# Parallel execution
$0 -p
# GitHub Actions
CI_FAIL_FAST=true CI_COVERAGE_MIN=90 $0
EOF
exit 0
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
exit 1
;;
esac
done
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}AI-Trader CI Test Runner${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}CI Configuration:${NC}"
echo " Fail Fast: $FAIL_FAST"
echo " Min Coverage: ${COVERAGE_MIN}%"
echo " Parallel: $PARALLEL"
echo " Verbose: $VERBOSE"
echo " JUnit XML: $JUNIT_XML"
echo " Environment: ${CI:-local}"
echo ""
# Change to project root
cd "$PROJECT_ROOT"
# Check Python version
echo -e "${YELLOW}Checking Python version...${NC}"
if [ ! -x "./venv/bin/python" ]; then
echo -e "${RED}Error: Virtual environment not found at $PROJECT_ROOT/venv${NC}"
exit 1
fi
PYTHON_VERSION=$(./venv/bin/python --version 2>&1)
echo " $PYTHON_VERSION"
echo ""
# Install/verify dependencies
echo -e "${YELLOW}Verifying test dependencies...${NC}"
./venv/bin/python -m pip install --quiet pytest pytest-cov pytest-xdist 2>&1 | grep -v "already satisfied" || true
echo " ✓ Dependencies verified"
echo ""
# Build pytest command
PYTEST_CMD="./venv/bin/python -m pytest"
PYTEST_ARGS="-v --tb=short --strict-markers"
# Coverage
PYTEST_ARGS="$PYTEST_ARGS --cov=api --cov=agent --cov=tools"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=term-missing:skip-covered"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=html:htmlcov"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=xml:coverage.xml"
PYTEST_ARGS="$PYTEST_ARGS --cov-fail-under=$COVERAGE_MIN"
# JUnit XML for CI integrations
if [ "$JUNIT_XML" = true ]; then
PYTEST_ARGS="$PYTEST_ARGS --junit-xml=junit.xml"
fi
# Fail fast
if [ "$FAIL_FAST" = true ]; then
PYTEST_ARGS="$PYTEST_ARGS -x"
fi
# Parallel execution
if [ "$PARALLEL" = true ]; then
# Check if pytest-xdist is available
if ./venv/bin/python -c "import xdist" 2>/dev/null; then
PYTEST_ARGS="$PYTEST_ARGS -n auto"
echo -e "${YELLOW}Parallel execution enabled${NC}"
else
echo -e "${YELLOW}Warning: pytest-xdist not available, running sequentially${NC}"
fi
echo ""
fi
# Verbose
if [ "$VERBOSE" = true ]; then
PYTEST_ARGS="$PYTEST_ARGS -vv"
fi
# Exclude e2e tests in CI (require Docker)
PYTEST_ARGS="$PYTEST_ARGS -m 'not e2e'"
# Test path
PYTEST_ARGS="$PYTEST_ARGS tests/"
# Run tests
echo -e "${BLUE}Running test suite...${NC}"
echo ""
echo "Command: $PYTEST_CMD $PYTEST_ARGS"
echo ""
# Execute tests
set +e # Don't exit on test failure, we want to process results
eval "$PYTEST_CMD $PYTEST_ARGS" # eval so the quoted marker filter (-m 'not e2e') is parsed correctly
TEST_EXIT_CODE=$?
set -e
echo ""
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}Test Results${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
# Process results
if [ $TEST_EXIT_CODE -eq 0 ]; then
echo -e "${GREEN}✓ All tests passed!${NC}"
echo ""
# Show artifacts
echo -e "${YELLOW}Artifacts generated:${NC}"
if [ -f "coverage.xml" ]; then
echo " ✓ coverage.xml (for CI coverage tools)"
fi
if [ -f "junit.xml" ]; then
echo " ✓ junit.xml (for CI test reporting)"
fi
if [ -d "htmlcov" ]; then
echo " ✓ htmlcov/ (HTML coverage report)"
fi
else
echo -e "${RED}✗ Tests failed (exit code: $TEST_EXIT_CODE)${NC}"
echo ""
if [ $TEST_EXIT_CODE -eq 1 ]; then
echo " Reason: Test failures"
elif [ $TEST_EXIT_CODE -eq 2 ]; then
echo " Reason: Test execution interrupted"
elif [ $TEST_EXIT_CODE -eq 3 ]; then
echo " Reason: Internal pytest error"
elif [ $TEST_EXIT_CODE -eq 4 ]; then
echo " Reason: pytest usage error"
elif [ $TEST_EXIT_CODE -eq 5 ]; then
echo " Reason: No tests collected"
fi
fi
echo ""
echo -e "${BLUE}========================================${NC}"
# Exit with test result code
exit $TEST_EXIT_CODE

scripts/coverage_report.sh Executable file

@@ -0,0 +1,170 @@
#!/bin/bash
# AI-Trader Coverage Report Generator
# Generate detailed coverage reports and check coverage thresholds
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
# Default values
MIN_COVERAGE=85
OPEN_HTML=false
INCLUDE_INTEGRATION=false
# Usage information
usage() {
cat << EOF
Usage: $0 [OPTIONS]
Generate coverage reports for AI-Trader test suite.
OPTIONS:
-m, --min-coverage NUM Minimum coverage percentage (default: 85)
-o, --open Open HTML report in browser after generation
-i, --include-integration Include integration and e2e tests
-h, --help Show this help message
EXAMPLES:
# Generate coverage report with default threshold (85%)
$0
# Set custom coverage threshold
$0 -m 90
# Generate and open HTML report
$0 -o
# Include integration tests in coverage
$0 -i
EOF
exit 1
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-m|--min-coverage)
MIN_COVERAGE="$2"
shift 2
;;
-o|--open)
OPEN_HTML=true
shift
;;
-i|--include-integration)
INCLUDE_INTEGRATION=true
shift
;;
-h|--help)
usage
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
usage
;;
esac
done
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}AI-Trader Coverage Report${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Configuration:${NC}"
echo " Minimum Coverage: ${MIN_COVERAGE}%"
echo " Include Integration: $INCLUDE_INTEGRATION"
echo ""
# Check if virtual environment exists
if [ ! -d "$PROJECT_ROOT/venv" ]; then
echo -e "${RED}Error: Virtual environment not found${NC}"
exit 1
fi
# Change to project root
cd "$PROJECT_ROOT"
# Build pytest command
PYTEST_CMD="./venv/bin/python -m pytest tests/"
PYTEST_ARGS="-v --tb=short"
PYTEST_ARGS="$PYTEST_ARGS --cov=api --cov=agent --cov=tools"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=term-missing"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=html:htmlcov"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=json:coverage.json"
PYTEST_ARGS="$PYTEST_ARGS --cov-fail-under=$MIN_COVERAGE"
# Filter tests if not including integration
if [ "$INCLUDE_INTEGRATION" = false ]; then
PYTEST_ARGS="$PYTEST_ARGS -m 'not e2e'"
echo -e "${YELLOW}Running tests (excluding e2e)...${NC}"
else
echo -e "${YELLOW}Running all tests...${NC}"
fi
echo ""
# Run tests with coverage: eval so the quoted marker expression is parsed
# correctly, with exit-on-error disabled so the exit code can be captured
set +e
eval "$PYTEST_CMD $PYTEST_ARGS"
TEST_EXIT_CODE=$?
set -e
echo ""
# Parse coverage from JSON report
if [ -f "coverage.json" ]; then
TOTAL_COVERAGE=$(./venv/bin/python -c "import json; data=json.load(open('coverage.json')); print(f\"{data['totals']['percent_covered']:.2f}\")")
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}Coverage Summary${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e " Total Coverage: ${GREEN}${TOTAL_COVERAGE}%${NC}"
echo -e " Minimum Required: ${MIN_COVERAGE}%"
echo ""
if [ $TEST_EXIT_CODE -eq 0 ]; then
echo -e "${GREEN}✓ Coverage threshold met!${NC}"
else
echo -e "${RED}✗ Coverage below threshold${NC}"
fi
echo ""
echo -e "${YELLOW}Reports Generated:${NC}"
echo " HTML: file://$PROJECT_ROOT/htmlcov/index.html"
echo " JSON: $PROJECT_ROOT/coverage.json"
echo " Terminal: (shown above)"
# Open HTML report if requested
if [ "$OPEN_HTML" = true ]; then
echo ""
echo -e "${BLUE}Opening HTML report...${NC}"
# Try different browsers/commands
if command -v xdg-open &> /dev/null; then
xdg-open "htmlcov/index.html"
elif command -v open &> /dev/null; then
open "htmlcov/index.html"
elif command -v start &> /dev/null; then
start "htmlcov/index.html"
else
echo -e "${YELLOW}Could not open browser automatically${NC}"
echo "Please open: file://$PROJECT_ROOT/htmlcov/index.html"
fi
fi
else
echo -e "${RED}Error: coverage.json not generated${NC}"
TEST_EXIT_CODE=1
fi
echo ""
echo -e "${BLUE}========================================${NC}"
exit $TEST_EXIT_CODE

scripts/quick_test.sh Executable file

@@ -0,0 +1,59 @@
#!/bin/bash
# AI-Trader Quick Test Script
# Fast test run for rapid feedback during development
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}AI-Trader Quick Test${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Running unit tests (no coverage, fail-fast)${NC}"
echo ""
# Change to project root
cd "$PROJECT_ROOT"
# Check if virtual environment exists
if [ ! -d "./venv" ]; then
echo -e "${RED}Error: Virtual environment not found${NC}"
echo -e "${YELLOW}Please run: python3 -m venv venv && ./venv/bin/pip install -r requirements.txt${NC}"
exit 1
fi
# Run unit tests only, no coverage, fail on first error
# (disable exit-on-error so the failure branch below can run)
set +e
./venv/bin/python -m pytest tests/ \
-v \
-m "unit and not slow" \
-x \
--tb=short \
--no-cov
TEST_EXIT_CODE=$?
set -e
echo ""
if [ $TEST_EXIT_CODE -eq 0 ]; then
echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN}✓ Quick tests passed!${NC}"
echo -e "${GREEN}========================================${NC}"
echo ""
echo -e "${YELLOW}For full test suite with coverage, run:${NC}"
echo " bash scripts/run_tests.sh"
else
echo -e "${RED}========================================${NC}"
echo -e "${RED}✗ Quick tests failed${NC}"
echo -e "${RED}========================================${NC}"
fi
exit $TEST_EXIT_CODE

scripts/run_tests.sh Executable file

@@ -0,0 +1,221 @@
#!/bin/bash
# AI-Trader Test Runner
# Standardized script for running tests with various options
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
# Default values
TEST_TYPE="all"
COVERAGE=true
VERBOSE=false
FAIL_FAST=false
MARKERS=""
PARALLEL=false
HTML_REPORT=true
# Usage information
usage() {
cat << EOF
Usage: $0 [OPTIONS]
Run AI-Trader test suite with standardized configuration.
OPTIONS:
-t, --type TYPE Test type: all, unit, integration, e2e (default: all)
-m, --markers MARKERS Run tests matching markers (e.g., "unit and not slow")
-f, --fail-fast Stop on first failure
-n, --no-coverage Skip coverage reporting
-v, --verbose Verbose output
-p, --parallel Run tests in parallel (requires pytest-xdist)
--no-html Skip HTML coverage report
-h, --help Show this help message
EXAMPLES:
# Run all tests with coverage
$0
# Run only unit tests
$0 -t unit
# Run integration tests without coverage
$0 -t integration -n
# Run specific markers with fail-fast
$0 -m "unit and not slow" -f
# Run tests in parallel
$0 -p
# Quick test run (unit only, no coverage, fail-fast)
$0 -t unit -n -f
MARKERS:
unit - Fast, isolated unit tests
integration - Tests with real dependencies
e2e - End-to-end tests (requires Docker)
slow - Tests taking >10 seconds
performance - Performance benchmarks
security - Security tests
EOF
exit 1
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-t|--type)
TEST_TYPE="$2"
shift 2
;;
-m|--markers)
MARKERS="$2"
shift 2
;;
-f|--fail-fast)
FAIL_FAST=true
shift
;;
-n|--no-coverage)
COVERAGE=false
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-p|--parallel)
PARALLEL=true
shift
;;
--no-html)
HTML_REPORT=false
shift
;;
-h|--help)
usage
;;
*)
echo -e "${RED}Unknown option: $1${NC}"
usage
;;
esac
done
# Build pytest command
PYTEST_CMD="./venv/bin/python -m pytest"
PYTEST_ARGS="-v --tb=short"
# Add test type markers
if [ "$TEST_TYPE" != "all" ]; then
if [ -n "$MARKERS" ]; then
MARKERS="$TEST_TYPE and ($MARKERS)"
else
MARKERS="$TEST_TYPE"
fi
fi
# Add custom markers
if [ -n "$MARKERS" ]; then
PYTEST_ARGS="$PYTEST_ARGS -m \"$MARKERS\""
fi
# Add coverage options
if [ "$COVERAGE" = true ]; then
PYTEST_ARGS="$PYTEST_ARGS --cov=api --cov=agent --cov=tools"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=term-missing"
if [ "$HTML_REPORT" = true ]; then
PYTEST_ARGS="$PYTEST_ARGS --cov-report=html:htmlcov"
fi
else
PYTEST_ARGS="$PYTEST_ARGS --no-cov"
fi
# Add fail-fast
if [ "$FAIL_FAST" = true ]; then
PYTEST_ARGS="$PYTEST_ARGS -x"
fi
# Add parallel execution
if [ "$PARALLEL" = true ]; then
PYTEST_ARGS="$PYTEST_ARGS -n auto"
fi
# Add verbosity
if [ "$VERBOSE" = true ]; then
PYTEST_ARGS="$PYTEST_ARGS -vv"
fi
# Add test path
PYTEST_ARGS="$PYTEST_ARGS tests/"
# Print configuration
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}AI-Trader Test Runner${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Configuration:${NC}"
echo " Test Type: $TEST_TYPE"
echo " Markers: ${MARKERS:-none}"
echo " Coverage: $COVERAGE"
echo " Fail Fast: $FAIL_FAST"
echo " Parallel: $PARALLEL"
echo " Verbose: $VERBOSE"
echo ""
# Check if virtual environment exists
if [ ! -d "$PROJECT_ROOT/venv" ]; then
echo -e "${RED}Error: Virtual environment not found at $PROJECT_ROOT/venv${NC}"
echo -e "${YELLOW}Please run: python3 -m venv venv && ./venv/bin/pip install -r requirements.txt${NC}"
exit 1
fi
# Check if pytest is installed
if ! ./venv/bin/python -c "import pytest" 2>/dev/null; then
echo -e "${RED}Error: pytest not installed${NC}"
echo -e "${YELLOW}Please run: ./venv/bin/pip install -r requirements.txt${NC}"
exit 1
fi
# Change to project root
cd "$PROJECT_ROOT"
# Run tests
echo -e "${BLUE}Running tests...${NC}"
echo ""
# Execute pytest with eval to handle quoted markers properly
# (disable exit-on-error so the failure branch below can run)
set +e
eval "$PYTEST_CMD $PYTEST_ARGS"
TEST_EXIT_CODE=$?
set -e
# Print results
echo ""
if [ $TEST_EXIT_CODE -eq 0 ]; then
echo -e "${GREEN}========================================${NC}"
echo -e "${GREEN}✓ All tests passed!${NC}"
echo -e "${GREEN}========================================${NC}"
if [ "$COVERAGE" = true ] && [ "$HTML_REPORT" = true ]; then
echo ""
echo -e "${YELLOW}Coverage report generated:${NC}"
echo " HTML: file://$PROJECT_ROOT/htmlcov/index.html"
fi
else
echo -e "${RED}========================================${NC}"
echo -e "${RED}✗ Tests failed${NC}"
echo -e "${RED}========================================${NC}"
fi
exit $TEST_EXIT_CODE

scripts/test.sh Executable file

@@ -0,0 +1,249 @@
#!/bin/bash
# AI-Trader Test Helper
# Interactive menu for common test operations
set -e
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'
# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
show_menu() {
clear
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE} AI-Trader Test Helper${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${CYAN}Quick Actions:${NC}"
echo " 1) Quick test (unit only, no coverage)"
echo " 2) Full test suite (with coverage)"
echo " 3) Coverage report"
echo ""
echo -e "${CYAN}Specific Test Types:${NC}"
echo " 4) Unit tests only"
echo " 5) Integration tests only"
echo " 6) E2E tests only (requires Docker)"
echo ""
echo -e "${CYAN}Advanced Options:${NC}"
echo " 7) Run with custom markers"
echo " 8) Parallel execution"
echo " 9) CI mode (for automation)"
echo ""
echo -e "${CYAN}Other:${NC}"
echo " h) Show help"
echo " q) Quit"
echo ""
echo -ne "${YELLOW}Select an option: ${NC}"
}
run_quick_test() {
echo -e "${BLUE}Running quick test...${NC}"
bash "$SCRIPT_DIR/quick_test.sh"
}
run_full_test() {
echo -e "${BLUE}Running full test suite...${NC}"
bash "$SCRIPT_DIR/run_tests.sh"
}
run_coverage() {
echo -e "${BLUE}Generating coverage report...${NC}"
bash "$SCRIPT_DIR/coverage_report.sh" -o
}
run_unit() {
echo -e "${BLUE}Running unit tests...${NC}"
bash "$SCRIPT_DIR/run_tests.sh" -t unit
}
run_integration() {
echo -e "${BLUE}Running integration tests...${NC}"
bash "$SCRIPT_DIR/run_tests.sh" -t integration
}
run_e2e() {
echo -e "${BLUE}Running E2E tests...${NC}"
echo -e "${YELLOW}Note: This requires Docker to be running${NC}"
read -p "Continue? (y/n) " -n 1 -r
echo
if [[ $REPLY =~ ^[Yy]$ ]]; then
bash "$SCRIPT_DIR/run_tests.sh" -t e2e
fi
}
run_custom_markers() {
echo ""
echo -e "${YELLOW}Available markers:${NC}"
echo " - unit"
echo " - integration"
echo " - e2e"
echo " - slow"
echo " - performance"
echo " - security"
echo ""
echo -e "${YELLOW}Examples:${NC}"
echo " unit and not slow"
echo " integration or performance"
echo " not e2e"
echo ""
read -p "Enter markers expression: " markers
if [ -n "$markers" ]; then
echo -e "${BLUE}Running tests with markers: $markers${NC}"
bash "$SCRIPT_DIR/run_tests.sh" -m "$markers"
else
echo -e "${RED}No markers provided, skipping${NC}"
sleep 2
fi
}
run_parallel() {
echo -e "${BLUE}Running tests in parallel...${NC}"
bash "$SCRIPT_DIR/run_tests.sh" -p
}
run_ci() {
echo -e "${BLUE}Running in CI mode...${NC}"
bash "$SCRIPT_DIR/ci_test.sh"
}
show_help() {
clear
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}AI-Trader Test Scripts Help${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${CYAN}Available Scripts:${NC}"
echo ""
echo -e "${GREEN}1. quick_test.sh${NC}"
echo " Fast feedback loop for development"
echo " - Runs unit tests only"
echo " - No coverage reporting"
echo " - Fails fast on first error"
echo " Usage: bash scripts/quick_test.sh"
echo ""
echo -e "${GREEN}2. run_tests.sh${NC}"
echo " Main test runner with full options"
echo " - Supports all test types (unit, integration, e2e)"
echo " - Coverage reporting"
echo " - Custom marker filtering"
echo " - Parallel execution"
echo " Usage: bash scripts/run_tests.sh [OPTIONS]"
echo " Examples:"
echo " bash scripts/run_tests.sh -t unit"
echo " bash scripts/run_tests.sh -m 'not slow' -f"
echo " bash scripts/run_tests.sh -p"
echo ""
echo -e "${GREEN}3. coverage_report.sh${NC}"
echo " Generate detailed coverage reports"
echo " - HTML, JSON, and terminal reports"
echo " - Configurable coverage thresholds"
echo " - Can open HTML report in browser"
echo " Usage: bash scripts/coverage_report.sh [OPTIONS]"
echo " Examples:"
echo " bash scripts/coverage_report.sh -o"
echo " bash scripts/coverage_report.sh -m 90"
echo ""
echo -e "${GREEN}4. ci_test.sh${NC}"
echo " CI/CD optimized test runner"
echo " - JUnit XML output"
echo " - Coverage XML for CI tools"
echo " - Environment variable configuration"
echo " - Skips Docker-dependent tests"
echo " Usage: bash scripts/ci_test.sh [OPTIONS]"
echo " Examples:"
echo " bash scripts/ci_test.sh -f -m 90"
echo " CI_PARALLEL=true bash scripts/ci_test.sh"
echo ""
echo -e "${CYAN}Common Options:${NC}"
echo " -t, --type Test type (unit, integration, e2e, all)"
echo " -m, --markers Pytest markers expression"
echo " -f, --fail-fast Stop on first failure"
echo " -p, --parallel Run tests in parallel"
echo " -n, --no-coverage Skip coverage reporting"
echo " -v, --verbose Verbose output"
echo " -h, --help Show help"
echo ""
echo -e "${CYAN}Test Markers:${NC}"
echo " unit - Fast, isolated unit tests"
echo " integration - Tests with real dependencies"
echo " e2e - End-to-end tests (requires Docker)"
echo " slow - Tests taking >10 seconds"
echo " performance - Performance benchmarks"
echo " security - Security tests"
echo ""
echo -e "Press any key to return to menu..."
read -n 1 -s
}
# Main menu loop
if [ $# -eq 0 ]; then
# Interactive mode
while true; do
show_menu
read -n 1 choice
echo ""
case $choice in
1)
run_quick_test
;;
2)
run_full_test
;;
3)
run_coverage
;;
4)
run_unit
;;
5)
run_integration
;;
6)
run_e2e
;;
7)
run_custom_markers
;;
8)
run_parallel
;;
9)
run_ci
;;
h|H)
show_help
;;
q|Q)
echo -e "${GREEN}Goodbye!${NC}"
exit 0
;;
*)
echo -e "${RED}Invalid option${NC}"
sleep 1
;;
esac
if [ $? -eq 0 ]; then
echo ""
echo -e "${GREEN}Operation completed successfully!${NC}"
else
echo ""
echo -e "${RED}Operation failed!${NC}"
fi
echo ""
read -p "Press Enter to continue..."
done
else
# Non-interactive: forward to run_tests.sh
bash "$SCRIPT_DIR/run_tests.sh" "$@"
fi
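The dual-mode entry point above — interactive menu when invoked bare, pass-through to the runner otherwise — is the core of `test.sh`. A minimal standalone sketch of the pattern (the function below stands in for the real `run_tests.sh` call):

```shell
#!/bin/bash
# Minimal sketch of test.sh's dual-mode entry point: show a menu when
# called with no arguments, otherwise forward everything to the runner.
run_tests() {                 # stand-in for: bash "$SCRIPT_DIR/run_tests.sh"
    echo "runner invoked with: $*"
}

if [ $# -eq 0 ]; then
    echo "interactive menu would be shown here"
else
    run_tests "$@"            # "$@" preserves each argument as one word
fi
```

Forwarding with `"$@"` rather than `$*` is what keeps quoted marker expressions like `-m "unit and not slow"` intact on the way to `run_tests.sh`.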


@@ -55,7 +55,7 @@ def test_complete_async_download_flow(test_client, monkeypatch):
     monkeypatch.setattr("api.price_data_manager.PriceDataManager", MockPriceManager)
 
     # Mock execution to avoid actual trading
-    def mock_execute_date(self, date, models, config_path):
+    def mock_execute_date(self, date, models, config_path, completion_skips=None):
         # Update job details to simulate successful execution
         from api.job_manager import JobManager
         job_manager = JobManager(db_path=test_client.app.state.db_path)
@@ -155,7 +155,7 @@ def test_flow_with_partial_data(test_client, monkeypatch):
     monkeypatch.setattr("api.price_data_manager.PriceDataManager", MockPriceManagerPartial)
 
-    def mock_execute_date(self, date, models, config_path):
+    def mock_execute_date(self, date, models, config_path, completion_skips=None):
         # Update job details to simulate successful execution
         from api.job_manager import JobManager
         job_manager = JobManager(db_path=test_client.app.state.db_path)


@@ -26,7 +26,7 @@ def test_worker_prepares_data_before_execution(tmp_path):
     def mock_prepare(*args, **kwargs):
         prepare_called.append(True)
-        return (["2025-10-01"], [])  # Return available dates, no warnings
+        return (["2025-10-01"], [], {})  # Return available dates, no warnings, no completion skips
 
     worker._prepare_data = mock_prepare
@@ -55,7 +55,7 @@ def test_worker_handles_no_available_dates(tmp_path):
     worker = SimulationWorker(job_id=job_id, db_path=db_path)
 
     # Mock _prepare_data to return empty dates
-    worker._prepare_data = Mock(return_value=([], []))
+    worker._prepare_data = Mock(return_value=([], [], {}))
 
     # Run worker
     result = worker.run()
@@ -84,7 +84,7 @@ def test_worker_stores_warnings(tmp_path):
     # Mock _prepare_data to return warnings
     warnings = ["Rate limited", "Skipped 1 date"]
-    worker._prepare_data = Mock(return_value=(["2025-10-01"], warnings))
+    worker._prepare_data = Mock(return_value=(["2025-10-01"], warnings, {}))
     worker._execute_date = Mock()
 
     # Run worker


@@ -50,7 +50,7 @@ class TestSimulationWorkerExecution:
         worker = SimulationWorker(job_id=job_id, db_path=clean_db)
 
         # Mock _prepare_data to return both dates
-        worker._prepare_data = Mock(return_value=(["2025-01-16", "2025-01-17"], []))
+        worker._prepare_data = Mock(return_value=(["2025-01-16", "2025-01-17"], [], {}))
 
         # Mock ModelDayExecutor
         with patch("api.simulation_worker.ModelDayExecutor") as mock_executor_class:
@@ -82,7 +82,7 @@ class TestSimulationWorkerExecution:
         worker = SimulationWorker(job_id=job_id, db_path=clean_db)
 
         # Mock _prepare_data to return both dates
-        worker._prepare_data = Mock(return_value=(["2025-01-16", "2025-01-17"], []))
+        worker._prepare_data = Mock(return_value=(["2025-01-16", "2025-01-17"], [], {}))
 
         execution_order = []
@@ -127,7 +127,7 @@ class TestSimulationWorkerExecution:
         worker = SimulationWorker(job_id=job_id, db_path=clean_db)
 
         # Mock _prepare_data to return the date
-        worker._prepare_data = Mock(return_value=(["2025-01-16"], []))
+        worker._prepare_data = Mock(return_value=(["2025-01-16"], [], {}))
 
         def create_mock_executor(job_id, date, model_sig, config_path, db_path):
             """Create mock executor that simulates job detail status updates."""
@@ -168,7 +168,7 @@ class TestSimulationWorkerExecution:
         worker = SimulationWorker(job_id=job_id, db_path=clean_db)
 
         # Mock _prepare_data to return the date
-        worker._prepare_data = Mock(return_value=(["2025-01-16"], []))
+        worker._prepare_data = Mock(return_value=(["2025-01-16"], [], {}))
 
         call_count = 0
@@ -223,7 +223,7 @@ class TestSimulationWorkerErrorHandling:
         worker = SimulationWorker(job_id=job_id, db_path=clean_db)
 
         # Mock _prepare_data to return the date
-        worker._prepare_data = Mock(return_value=(["2025-01-16"], []))
+        worker._prepare_data = Mock(return_value=(["2025-01-16"], [], {}))
 
         execution_count = 0
@@ -298,7 +298,7 @@ class TestSimulationWorkerConcurrency:
         worker = SimulationWorker(job_id=job_id, db_path=clean_db)
 
         # Mock _prepare_data to return the date
-        worker._prepare_data = Mock(return_value=(["2025-01-16"], []))
+        worker._prepare_data = Mock(return_value=(["2025-01-16"], [], {}))
 
         with patch("api.simulation_worker.ModelDayExecutor") as mock_executor_class:
             mock_executor = Mock()
@@ -521,7 +521,7 @@ class TestSimulationWorkerHelperMethods:
         worker.job_manager.get_completed_model_dates = Mock(return_value={})
 
         # Execute
-        available_dates, warnings = worker._prepare_data(
+        available_dates, warnings, completion_skips = worker._prepare_data(
             requested_dates=["2025-10-01"],
             models=["gpt-5"],
             config_path="config.json"
@@ -570,7 +570,7 @@ class TestSimulationWorkerHelperMethods:
         worker.job_manager.get_completed_model_dates = Mock(return_value={})
 
         # Execute
-        available_dates, warnings = worker._prepare_data(
+        available_dates, warnings, completion_skips = worker._prepare_data(
             requested_dates=["2025-10-01"],
             models=["gpt-5"],
             config_path="config.json"