feat: add standardized testing scripts and documentation

Add comprehensive suite of testing scripts for different workflows:
- test.sh: Interactive menu for all testing operations
- quick_test.sh: Fast unit test feedback (~10-30s)
- run_tests.sh: Main test runner with full configuration options
- coverage_report.sh: Coverage analysis with HTML/JSON/terminal reports
- ci_test.sh: CI/CD optimized testing with JUnit/coverage XML output

Features:
- Colored terminal output with clear error messages
- Consistent option flags across all scripts
- Support for test markers (unit, integration, e2e, slow, etc.)
- Parallel execution support
- Coverage thresholds (default: 85%)
- Virtual environment and dependency checks

Documentation:
- Update CLAUDE.md with testing section and examples
- Expand docs/developer/testing.md with comprehensive guide
- Add scripts/README.md with quick reference

All scripts are tested and executable. This standardizes the testing
process for local development, CI/CD, and pull request workflows.
109 scripts/README.md Normal file
@@ -0,0 +1,109 @@
# AI-Trader Scripts

This directory contains standardized scripts for testing, validation, and operations.

## Testing Scripts

### Interactive Testing

**`test.sh`** - Interactive test menu

```bash
bash scripts/test.sh
```

User-friendly menu for all testing operations. Best for local development.

### Development Testing

**`quick_test.sh`** - Fast unit test feedback

```bash
bash scripts/quick_test.sh
```

- Runs unit tests only
- No coverage
- Fails fast
- ~10-30 seconds

**`run_tests.sh`** - Full test suite

```bash
bash scripts/run_tests.sh [OPTIONS]
```

- All test types (unit, integration, e2e)
- Coverage reporting
- Parallel execution support
- Highly configurable

**`coverage_report.sh`** - Coverage analysis

```bash
bash scripts/coverage_report.sh [OPTIONS]
```

- Generate HTML/JSON/terminal reports
- Check coverage thresholds
- Open reports in browser
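
For example, to require 90% coverage and open the HTML report when it is generated (both flags are documented in the script's `--help`):

```bash
bash scripts/coverage_report.sh -m 90 -o
```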

### CI/CD Testing

**`ci_test.sh`** - CI-optimized testing

```bash
bash scripts/ci_test.sh [OPTIONS]
```

- JUnit XML output
- Coverage XML for CI tools
- Environment variable configuration
- Excludes Docker tests
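
On a CI runner, the same script can be configured entirely through environment variables (see `bash scripts/ci_test.sh -h` for the full list):

```bash
CI_FAIL_FAST=true CI_COVERAGE_MIN=90 bash scripts/ci_test.sh
```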

## Validation Scripts

**`validate_docker_build.sh`** - Docker build validation

```bash
bash scripts/validate_docker_build.sh
```

Validates Docker setup, build, and container startup.

**`test_api_endpoints.sh`** - API endpoint testing

```bash
bash scripts/test_api_endpoints.sh
```

Tests all REST API endpoints with real simulations.

## Other Scripts

**`migrate_price_data.py`** - Data migration utility

```bash
python scripts/migrate_price_data.py
```

Migrates price data between formats.

## Quick Reference

| Task | Script | Command |
|------|--------|---------|
| Quick test | `quick_test.sh` | `bash scripts/quick_test.sh` |
| Full test | `run_tests.sh` | `bash scripts/run_tests.sh` |
| Coverage | `coverage_report.sh` | `bash scripts/coverage_report.sh -o` |
| CI test | `ci_test.sh` | `bash scripts/ci_test.sh -f` |
| Interactive | `test.sh` | `bash scripts/test.sh` |
| Docker validation | `validate_docker_build.sh` | `bash scripts/validate_docker_build.sh` |
| API testing | `test_api_endpoints.sh` | `bash scripts/test_api_endpoints.sh` |

## Common Options

Most test scripts support:

- `-h, --help` - Show help
- `-v, --verbose` - Verbose output
- `-f, --fail-fast` - Stop on first failure
- `-t, --type TYPE` - Test type (unit, integration, e2e, all)
- `-m, --markers MARKERS` - Pytest markers (note: in `ci_test.sh` and `coverage_report.sh`, `-m` sets the minimum coverage threshold instead)
- `-p, --parallel` - Parallel execution
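
These options can be combined. For example, a fast pre-push check that runs unit tests in parallel, skips slow cases, and stops on the first failure:

```bash
bash scripts/run_tests.sh -t unit -m "not slow" -f -p
```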

## Documentation

For detailed usage, see:

- [Testing Guide](../docs/developer/testing.md)
- [Testing & Validation Guide](../TESTING_GUIDE.md)

## Making Scripts Executable

If scripts are not executable:

```bash
chmod +x scripts/*.sh
```
243 scripts/ci_test.sh Executable file
@@ -0,0 +1,243 @@
#!/bin/bash
# AI-Trader CI Test Script
# Optimized for CI/CD environments (GitHub Actions, Jenkins, etc.)

set -e

# Colors for output (disabled in CI if not supported)
if [ -t 1 ]; then
    RED='\033[0;31m'
    GREEN='\033[0;32m'
    YELLOW='\033[1;33m'
    BLUE='\033[0;34m'
    NC='\033[0m'
else
    RED=''
    GREEN=''
    YELLOW=''
    BLUE=''
    NC=''
fi

# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

# CI-specific defaults
FAIL_FAST=false
JUNIT_XML=true
COVERAGE_MIN=85
PARALLEL=false
VERBOSE=false

# Parse environment variables (common in CI)
if [ -n "$CI_FAIL_FAST" ]; then
    FAIL_FAST="$CI_FAIL_FAST"
fi

if [ -n "$CI_COVERAGE_MIN" ]; then
    COVERAGE_MIN="$CI_COVERAGE_MIN"
fi

if [ -n "$CI_PARALLEL" ]; then
    PARALLEL="$CI_PARALLEL"
fi

if [ -n "$CI_VERBOSE" ]; then
    VERBOSE="$CI_VERBOSE"
fi

# Parse command line arguments (override env vars)
while [[ $# -gt 0 ]]; do
    case $1 in
        -f|--fail-fast)
            FAIL_FAST=true
            shift
            ;;
        -m|--min-coverage)
            COVERAGE_MIN="$2"
            shift 2
            ;;
        -p|--parallel)
            PARALLEL=true
            shift
            ;;
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        --no-junit)
            JUNIT_XML=false
            shift
            ;;
        -h|--help)
            cat << EOF
Usage: $0 [OPTIONS]

CI-optimized test runner for AI-Trader.

OPTIONS:
    -f, --fail-fast          Stop on first failure
    -m, --min-coverage NUM   Minimum coverage percentage (default: 85)
    -p, --parallel           Run tests in parallel
    -v, --verbose            Verbose output
    --no-junit               Skip JUnit XML generation
    -h, --help               Show this help message

ENVIRONMENT VARIABLES:
    CI_FAIL_FAST       Set to 'true' to enable fail-fast
    CI_COVERAGE_MIN    Minimum coverage threshold
    CI_PARALLEL        Set to 'true' to enable parallel execution
    CI_VERBOSE         Set to 'true' for verbose output

EXAMPLES:
    # Basic CI run
    $0

    # Fail fast with custom coverage threshold
    $0 -f -m 90

    # Parallel execution
    $0 -p

    # GitHub Actions
    CI_FAIL_FAST=true CI_COVERAGE_MIN=90 $0

EOF
            exit 0
            ;;
        *)
            echo -e "${RED}Unknown option: $1${NC}"
            exit 1
            ;;
    esac
done

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}AI-Trader CI Test Runner${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}CI Configuration:${NC}"
echo "  Fail Fast:    $FAIL_FAST"
echo "  Min Coverage: ${COVERAGE_MIN}%"
echo "  Parallel:     $PARALLEL"
echo "  Verbose:      $VERBOSE"
echo "  JUnit XML:    $JUNIT_XML"
echo "  Environment:  ${CI:-local}"
echo ""

# Change to project root
cd "$PROJECT_ROOT"

# Check Python version
echo -e "${YELLOW}Checking Python version...${NC}"
PYTHON_VERSION=$(./venv/bin/python --version 2>&1)
echo "  $PYTHON_VERSION"
echo ""

# Install/verify dependencies
echo -e "${YELLOW}Verifying test dependencies...${NC}"
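# Note: grep exits non-zero when it filters out every line, which would trip
# `set -e`; the trailing `|| true` keeps an already-satisfied install from aborting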
./venv/bin/python -m pip install --quiet pytest pytest-cov pytest-xdist 2>&1 | grep -v "already satisfied" || true
echo "  ✓ Dependencies verified"
echo ""

# Build pytest command
PYTEST_CMD="./venv/bin/python -m pytest"
PYTEST_ARGS="-v --tb=short --strict-markers"

# Coverage
PYTEST_ARGS="$PYTEST_ARGS --cov=api --cov=agent --cov=tools"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=term-missing:skip-covered"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=html:htmlcov"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=xml:coverage.xml"
PYTEST_ARGS="$PYTEST_ARGS --cov-fail-under=$COVERAGE_MIN"

# JUnit XML for CI integrations
if [ "$JUNIT_XML" = true ]; then
    PYTEST_ARGS="$PYTEST_ARGS --junit-xml=junit.xml"
fi

# Fail fast
if [ "$FAIL_FAST" = true ]; then
    PYTEST_ARGS="$PYTEST_ARGS -x"
fi

# Parallel execution
if [ "$PARALLEL" = true ]; then
    # Check if pytest-xdist is available
    if ./venv/bin/python -c "import xdist" 2>/dev/null; then
        PYTEST_ARGS="$PYTEST_ARGS -n auto"
        echo -e "${YELLOW}Parallel execution enabled${NC}"
    else
        echo -e "${YELLOW}Warning: pytest-xdist not available, running sequentially${NC}"
    fi
    echo ""
fi

# Verbose
if [ "$VERBOSE" = true ]; then
    PYTEST_ARGS="$PYTEST_ARGS -vv"
fi

# Exclude e2e tests in CI (require Docker)
PYTEST_ARGS="$PYTEST_ARGS -m 'not e2e'"

# Test path
PYTEST_ARGS="$PYTEST_ARGS tests/"

# Run tests
echo -e "${BLUE}Running test suite...${NC}"
echo ""
echo "Command: $PYTEST_CMD $PYTEST_ARGS"
echo ""

# Execute tests
set +e  # Don't exit on test failure, we want to process results
# Use eval so quoted arguments such as -m 'not e2e' survive word splitting
eval "$PYTEST_CMD $PYTEST_ARGS"
TEST_EXIT_CODE=$?
set -e

echo ""
echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}Test Results${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""

# Process results
if [ $TEST_EXIT_CODE -eq 0 ]; then
    echo -e "${GREEN}✓ All tests passed!${NC}"
    echo ""

    # Show artifacts
    echo -e "${YELLOW}Artifacts generated:${NC}"
    if [ -f "coverage.xml" ]; then
        echo "  ✓ coverage.xml (for CI coverage tools)"
    fi
    if [ -f "junit.xml" ]; then
        echo "  ✓ junit.xml (for CI test reporting)"
    fi
    if [ -d "htmlcov" ]; then
        echo "  ✓ htmlcov/ (HTML coverage report)"
    fi
else
    echo -e "${RED}✗ Tests failed (exit code: $TEST_EXIT_CODE)${NC}"
    echo ""
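    # Map pytest's documented exit codes to human-readable reasons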
    if [ $TEST_EXIT_CODE -eq 1 ]; then
        echo "  Reason: Test failures"
    elif [ $TEST_EXIT_CODE -eq 2 ]; then
        echo "  Reason: Test execution interrupted"
    elif [ $TEST_EXIT_CODE -eq 3 ]; then
        echo "  Reason: Internal pytest error"
    elif [ $TEST_EXIT_CODE -eq 4 ]; then
        echo "  Reason: pytest usage error"
    elif [ $TEST_EXIT_CODE -eq 5 ]; then
        echo "  Reason: No tests collected"
    fi
fi

echo ""
echo -e "${BLUE}========================================${NC}"

# Exit with test result code
exit $TEST_EXIT_CODE
170 scripts/coverage_report.sh Executable file
@@ -0,0 +1,170 @@
#!/bin/bash
# AI-Trader Coverage Report Generator
# Generate detailed coverage reports and check coverage thresholds

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

# Default values
MIN_COVERAGE=85
OPEN_HTML=false
INCLUDE_INTEGRATION=false

# Usage information
usage() {
    cat << EOF
Usage: $0 [OPTIONS]

Generate coverage reports for AI-Trader test suite.

OPTIONS:
    -m, --min-coverage NUM     Minimum coverage percentage (default: 85)
    -o, --open                 Open HTML report in browser after generation
    -i, --include-integration  Include integration and e2e tests
    -h, --help                 Show this help message

EXAMPLES:
    # Generate coverage report with default threshold (85%)
    $0

    # Set custom coverage threshold
    $0 -m 90

    # Generate and open HTML report
    $0 -o

    # Include integration tests in coverage
    $0 -i

EOF
    exit 1
}

# Parse command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -m|--min-coverage)
            MIN_COVERAGE="$2"
            shift 2
            ;;
        -o|--open)
            OPEN_HTML=true
            shift
            ;;
        -i|--include-integration)
            INCLUDE_INTEGRATION=true
            shift
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo -e "${RED}Unknown option: $1${NC}"
            usage
            ;;
    esac
done

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}AI-Trader Coverage Report${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Configuration:${NC}"
echo "  Minimum Coverage:    ${MIN_COVERAGE}%"
echo "  Include Integration: $INCLUDE_INTEGRATION"
echo ""

# Check if virtual environment exists
if [ ! -d "$PROJECT_ROOT/venv" ]; then
    echo -e "${RED}Error: Virtual environment not found${NC}"
    exit 1
fi

# Change to project root
cd "$PROJECT_ROOT"

# Build pytest command
PYTEST_CMD="./venv/bin/python -m pytest tests/"
PYTEST_ARGS="-v --tb=short"
PYTEST_ARGS="$PYTEST_ARGS --cov=api --cov=agent --cov=tools"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=term-missing"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=html:htmlcov"
PYTEST_ARGS="$PYTEST_ARGS --cov-report=json:coverage.json"
PYTEST_ARGS="$PYTEST_ARGS --cov-fail-under=$MIN_COVERAGE"

# Filter tests if not including integration
if [ "$INCLUDE_INTEGRATION" = false ]; then
    PYTEST_ARGS="$PYTEST_ARGS -m 'not e2e'"
    echo -e "${YELLOW}Running tests (excluding e2e)...${NC}"
else
    echo -e "${YELLOW}Running all tests...${NC}"
fi

echo ""
# Run tests with coverage. Suspend errexit so a failing run still reaches the
# summary below, and use eval so the quoted marker expression survives splitting.
set +e
eval "$PYTEST_CMD $PYTEST_ARGS"
TEST_EXIT_CODE=$?
set -e

echo ""

# Parse coverage from JSON report
if [ -f "coverage.json" ]; then
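    # Read the overall percentage from coverage.json (totals.percent_covered)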
    TOTAL_COVERAGE=$(./venv/bin/python -c "import json; data=json.load(open('coverage.json')); print(f\"{data['totals']['percent_covered']:.2f}\")")

    echo -e "${BLUE}========================================${NC}"
    echo -e "${BLUE}Coverage Summary${NC}"
    echo -e "${BLUE}========================================${NC}"
    echo ""
    echo -e "  Total Coverage:   ${GREEN}${TOTAL_COVERAGE}%${NC}"
    echo -e "  Minimum Required: ${MIN_COVERAGE}%"
    echo ""

    if [ $TEST_EXIT_CODE -eq 0 ]; then
        echo -e "${GREEN}✓ Coverage threshold met!${NC}"
    else
        echo -e "${RED}✗ Coverage below threshold${NC}"
    fi

    echo ""
    echo -e "${YELLOW}Reports Generated:${NC}"
    echo "  HTML:     file://$PROJECT_ROOT/htmlcov/index.html"
    echo "  JSON:     $PROJECT_ROOT/coverage.json"
    echo "  Terminal: (shown above)"

    # Open HTML report if requested
    if [ "$OPEN_HTML" = true ]; then
        echo ""
        echo -e "${BLUE}Opening HTML report...${NC}"

        # Try different browsers/commands
        if command -v xdg-open &> /dev/null; then
            xdg-open "htmlcov/index.html"
        elif command -v open &> /dev/null; then
            open "htmlcov/index.html"
        elif command -v start &> /dev/null; then
            start "htmlcov/index.html"
        else
            echo -e "${YELLOW}Could not open browser automatically${NC}"
            echo "Please open: file://$PROJECT_ROOT/htmlcov/index.html"
        fi
    fi
else
    echo -e "${RED}Error: coverage.json not generated${NC}"
    TEST_EXIT_CODE=1
fi

echo ""
echo -e "${BLUE}========================================${NC}"

exit $TEST_EXIT_CODE
59 scripts/quick_test.sh Executable file
@@ -0,0 +1,59 @@
#!/bin/bash
# AI-Trader Quick Test Script
# Fast test run for rapid feedback during development

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

echo -e "${BLUE}========================================${NC}"
echo -e "${BLUE}AI-Trader Quick Test${NC}"
echo -e "${BLUE}========================================${NC}"
echo ""
echo -e "${YELLOW}Running unit tests (no coverage, fail-fast)${NC}"
echo ""

# Change to project root
cd "$PROJECT_ROOT"

# Check if virtual environment exists
if [ ! -d "./venv" ]; then
    echo -e "${RED}Error: Virtual environment not found${NC}"
    echo -e "${YELLOW}Please run: python3 -m venv venv && ./venv/bin/pip install -r requirements.txt${NC}"
    exit 1
fi

# Run unit tests only, no coverage, fail on first error.
# Suspend errexit so a test failure still reaches the summary below.
set +e
./venv/bin/python -m pytest tests/ \
    -v \
    -m "unit and not slow" \
    -x \
    --tb=short \
    --no-cov
TEST_EXIT_CODE=$?
set -e

echo ""
if [ $TEST_EXIT_CODE -eq 0 ]; then
    echo -e "${GREEN}========================================${NC}"
    echo -e "${GREEN}✓ Quick tests passed!${NC}"
    echo -e "${GREEN}========================================${NC}"
    echo ""
    echo -e "${YELLOW}For full test suite with coverage, run:${NC}"
    echo "  bash scripts/run_tests.sh"
else
    echo -e "${RED}========================================${NC}"
    echo -e "${RED}✗ Quick tests failed${NC}"
    echo -e "${RED}========================================${NC}"
fi

exit $TEST_EXIT_CODE
221 scripts/run_tests.sh Executable file
@@ -0,0 +1,221 @@
#!/bin/bash
# AI-Trader Test Runner
# Standardized script for running tests with various options

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"

# Default values
TEST_TYPE="all"
COVERAGE=true
VERBOSE=false
FAIL_FAST=false
MARKERS=""
PARALLEL=false
HTML_REPORT=true

# Usage information
usage() {
    cat << EOF
Usage: $0 [OPTIONS]

Run AI-Trader test suite with standardized configuration.

OPTIONS:
    -t, --type TYPE        Test type: all, unit, integration, e2e (default: all)
    -m, --markers MARKERS  Run tests matching markers (e.g., "unit and not slow")
    -f, --fail-fast        Stop on first failure
    -n, --no-coverage      Skip coverage reporting
    -v, --verbose          Verbose output
    -p, --parallel         Run tests in parallel (requires pytest-xdist)
    --no-html              Skip HTML coverage report
    -h, --help             Show this help message

EXAMPLES:
    # Run all tests with coverage
    $0

    # Run only unit tests
    $0 -t unit

    # Run integration tests without coverage
    $0 -t integration -n

    # Run specific markers with fail-fast
    $0 -m "unit and not slow" -f

    # Run tests in parallel
    $0 -p

    # Quick test run (unit only, no coverage, fail-fast)
    $0 -t unit -n -f

MARKERS:
    unit        - Fast, isolated unit tests
    integration - Tests with real dependencies
    e2e         - End-to-end tests (requires Docker)
    slow        - Tests taking >10 seconds
    performance - Performance benchmarks
    security    - Security tests

EOF
    exit 1
}

# Parse command line arguments
while [[ $# -gt 0 ]]; do
    case $1 in
        -t|--type)
            TEST_TYPE="$2"
            shift 2
            ;;
        -m|--markers)
            MARKERS="$2"
            shift 2
            ;;
        -f|--fail-fast)
            FAIL_FAST=true
            shift
            ;;
        -n|--no-coverage)
            COVERAGE=false
            shift
            ;;
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        -p|--parallel)
            PARALLEL=true
            shift
            ;;
        --no-html)
            HTML_REPORT=false
            shift
            ;;
        -h|--help)
            usage
            ;;
        *)
            echo -e "${RED}Unknown option: $1${NC}"
            usage
            ;;
    esac
done

# Build pytest command
PYTEST_CMD="./venv/bin/python -m pytest"
PYTEST_ARGS="-v --tb=short"

# Add test type markers
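# e.g. `-t unit -m "not slow"` yields the expression: unit and (not slow)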
if [ "$TEST_TYPE" != "all" ]; then
|
||||
if [ -n "$MARKERS" ]; then
|
||||
MARKERS="$TEST_TYPE and ($MARKERS)"
|
||||
else
|
||||
MARKERS="$TEST_TYPE"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Add custom markers
|
||||
if [ -n "$MARKERS" ]; then
|
||||
PYTEST_ARGS="$PYTEST_ARGS -m \"$MARKERS\""
|
||||
fi
|
||||
|
||||
# Add coverage options
|
||||
if [ "$COVERAGE" = true ]; then
|
||||
PYTEST_ARGS="$PYTEST_ARGS --cov=api --cov=agent --cov=tools"
|
||||
PYTEST_ARGS="$PYTEST_ARGS --cov-report=term-missing"
|
||||
|
||||
if [ "$HTML_REPORT" = true ]; then
|
||||
PYTEST_ARGS="$PYTEST_ARGS --cov-report=html:htmlcov"
|
||||
fi
|
||||
else
|
||||
PYTEST_ARGS="$PYTEST_ARGS --no-cov"
|
||||
fi
|
||||
|
||||
# Add fail-fast
|
||||
if [ "$FAIL_FAST" = true ]; then
|
||||
PYTEST_ARGS="$PYTEST_ARGS -x"
|
||||
fi
|
||||
|
||||
# Add parallel execution
|
||||
if [ "$PARALLEL" = true ]; then
|
||||
PYTEST_ARGS="$PYTEST_ARGS -n auto"
|
||||
fi
|
||||
|
||||
# Add verbosity
|
||||
if [ "$VERBOSE" = true ]; then
|
||||
PYTEST_ARGS="$PYTEST_ARGS -vv"
|
||||
fi
|
||||
|
||||
# Add test path
|
||||
PYTEST_ARGS="$PYTEST_ARGS tests/"
|
||||
|
||||
# Print configuration
|
||||
echo -e "${BLUE}========================================${NC}"
|
||||
echo -e "${BLUE}AI-Trader Test Runner${NC}"
|
||||
echo -e "${BLUE}========================================${NC}"
|
||||
echo ""
|
||||
echo -e "${YELLOW}Configuration:${NC}"
|
||||
echo " Test Type: $TEST_TYPE"
|
||||
echo " Markers: ${MARKERS:-none}"
|
||||
echo " Coverage: $COVERAGE"
|
||||
echo " Fail Fast: $FAIL_FAST"
|
||||
echo " Parallel: $PARALLEL"
|
||||
echo " Verbose: $VERBOSE"
|
||||
echo ""
|
||||
|
||||
# Check if virtual environment exists
|
||||
if [ ! -d "$PROJECT_ROOT/venv" ]; then
|
||||
echo -e "${RED}Error: Virtual environment not found at $PROJECT_ROOT/venv${NC}"
|
||||
echo -e "${YELLOW}Please run: python3 -m venv venv && ./venv/bin/pip install -r requirements.txt${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Check if pytest is installed
|
||||
if ! ./venv/bin/python -c "import pytest" 2>/dev/null; then
|
||||
echo -e "${RED}Error: pytest not installed${NC}"
|
||||
echo -e "${YELLOW}Please run: ./venv/bin/pip install -r requirements.txt${NC}"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Change to project root
|
||||
cd "$PROJECT_ROOT"
|
||||
|
||||
# Run tests
|
||||
echo -e "${BLUE}Running tests...${NC}"
|
||||
echo ""
|

# Execute pytest with eval to handle quotes properly; suspend errexit so a
# test failure still reaches the summary below instead of aborting the script
set +e
eval "$PYTEST_CMD $PYTEST_ARGS"
TEST_EXIT_CODE=$?
set -e

# Print results
echo ""
if [ $TEST_EXIT_CODE -eq 0 ]; then
    echo -e "${GREEN}========================================${NC}"
    echo -e "${GREEN}✓ All tests passed!${NC}"
    echo -e "${GREEN}========================================${NC}"

    if [ "$COVERAGE" = true ] && [ "$HTML_REPORT" = true ]; then
        echo ""
        echo -e "${YELLOW}Coverage report generated:${NC}"
        echo "  HTML: file://$PROJECT_ROOT/htmlcov/index.html"
    fi
else
    echo -e "${RED}========================================${NC}"
    echo -e "${RED}✗ Tests failed${NC}"
    echo -e "${RED}========================================${NC}"
fi

exit $TEST_EXIT_CODE
249 scripts/test.sh Executable file
@@ -0,0 +1,249 @@
#!/bin/bash
# AI-Trader Test Helper
# Interactive menu for common test operations

set -e

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m'

# Script directory
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

show_menu() {
    clear
    echo -e "${BLUE}========================================${NC}"
    echo -e "${BLUE}  AI-Trader Test Helper${NC}"
    echo -e "${BLUE}========================================${NC}"
    echo ""
    echo -e "${CYAN}Quick Actions:${NC}"
    echo "  1) Quick test (unit only, no coverage)"
    echo "  2) Full test suite (with coverage)"
    echo "  3) Coverage report"
    echo ""
    echo -e "${CYAN}Specific Test Types:${NC}"
    echo "  4) Unit tests only"
    echo "  5) Integration tests only"
    echo "  6) E2E tests only (requires Docker)"
    echo ""
    echo -e "${CYAN}Advanced Options:${NC}"
    echo "  7) Run with custom markers"
    echo "  8) Parallel execution"
    echo "  9) CI mode (for automation)"
    echo ""
    echo -e "${CYAN}Other:${NC}"
    echo "  h) Show help"
    echo "  q) Quit"
    echo ""
    echo -ne "${YELLOW}Select an option: ${NC}"
}

run_quick_test() {
    echo -e "${BLUE}Running quick test...${NC}"
    bash "$SCRIPT_DIR/quick_test.sh"
}

run_full_test() {
    echo -e "${BLUE}Running full test suite...${NC}"
    bash "$SCRIPT_DIR/run_tests.sh"
}

run_coverage() {
    echo -e "${BLUE}Generating coverage report...${NC}"
    bash "$SCRIPT_DIR/coverage_report.sh" -o
}

run_unit() {
    echo -e "${BLUE}Running unit tests...${NC}"
    bash "$SCRIPT_DIR/run_tests.sh" -t unit
}

run_integration() {
    echo -e "${BLUE}Running integration tests...${NC}"
    bash "$SCRIPT_DIR/run_tests.sh" -t integration
}

run_e2e() {
    echo -e "${BLUE}Running E2E tests...${NC}"
    echo -e "${YELLOW}Note: This requires Docker to be running${NC}"
    read -p "Continue? (y/n) " -n 1 -r
    echo
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        bash "$SCRIPT_DIR/run_tests.sh" -t e2e
    fi
}

run_custom_markers() {
    echo ""
    echo -e "${YELLOW}Available markers:${NC}"
    echo "  - unit"
    echo "  - integration"
    echo "  - e2e"
    echo "  - slow"
    echo "  - performance"
    echo "  - security"
    echo ""
    echo -e "${YELLOW}Examples:${NC}"
    echo "  unit and not slow"
    echo "  integration or performance"
    echo "  not e2e"
    echo ""
    read -p "Enter markers expression: " markers

    if [ -n "$markers" ]; then
        echo -e "${BLUE}Running tests with markers: $markers${NC}"
        bash "$SCRIPT_DIR/run_tests.sh" -m "$markers"
    else
        echo -e "${RED}No markers provided, skipping${NC}"
        sleep 2
    fi
}

run_parallel() {
    echo -e "${BLUE}Running tests in parallel...${NC}"
    bash "$SCRIPT_DIR/run_tests.sh" -p
}

run_ci() {
    echo -e "${BLUE}Running in CI mode...${NC}"
    bash "$SCRIPT_DIR/ci_test.sh"
}

show_help() {
    clear
    echo -e "${BLUE}========================================${NC}"
    echo -e "${BLUE}AI-Trader Test Scripts Help${NC}"
    echo -e "${BLUE}========================================${NC}"
    echo ""
    echo -e "${CYAN}Available Scripts:${NC}"
    echo ""
    echo -e "${GREEN}1. quick_test.sh${NC}"
    echo "   Fast feedback loop for development"
    echo "   - Runs unit tests only"
    echo "   - No coverage reporting"
    echo "   - Fails fast on first error"
    echo "   Usage: bash scripts/quick_test.sh"
    echo ""
    echo -e "${GREEN}2. run_tests.sh${NC}"
    echo "   Main test runner with full options"
    echo "   - Supports all test types (unit, integration, e2e)"
    echo "   - Coverage reporting"
    echo "   - Custom marker filtering"
    echo "   - Parallel execution"
    echo "   Usage: bash scripts/run_tests.sh [OPTIONS]"
    echo "   Examples:"
    echo "     bash scripts/run_tests.sh -t unit"
    echo "     bash scripts/run_tests.sh -m 'not slow' -f"
    echo "     bash scripts/run_tests.sh -p"
    echo ""
    echo -e "${GREEN}3. coverage_report.sh${NC}"
    echo "   Generate detailed coverage reports"
    echo "   - HTML, JSON, and terminal reports"
    echo "   - Configurable coverage thresholds"
    echo "   - Can open HTML report in browser"
    echo "   Usage: bash scripts/coverage_report.sh [OPTIONS]"
    echo "   Examples:"
    echo "     bash scripts/coverage_report.sh -o"
    echo "     bash scripts/coverage_report.sh -m 90"
    echo ""
    echo -e "${GREEN}4. ci_test.sh${NC}"
    echo "   CI/CD optimized test runner"
    echo "   - JUnit XML output"
    echo "   - Coverage XML for CI tools"
    echo "   - Environment variable configuration"
    echo "   - Skips Docker-dependent tests"
    echo "   Usage: bash scripts/ci_test.sh [OPTIONS]"
    echo "   Examples:"
    echo "     bash scripts/ci_test.sh -f -m 90"
    echo "     CI_PARALLEL=true bash scripts/ci_test.sh"
    echo ""
    echo -e "${CYAN}Common Options:${NC}"
    echo "  -t, --type         Test type (unit, integration, e2e, all)"
    echo "  -m, --markers      Pytest markers expression"
    echo "  -f, --fail-fast    Stop on first failure"
    echo "  -p, --parallel     Run tests in parallel"
    echo "  -n, --no-coverage  Skip coverage reporting"
    echo "  -v, --verbose      Verbose output"
    echo "  -h, --help         Show help"
    echo ""
    echo -e "${CYAN}Test Markers:${NC}"
    echo "  unit        - Fast, isolated unit tests"
    echo "  integration - Tests with real dependencies"
    echo "  e2e         - End-to-end tests (requires Docker)"
    echo "  slow        - Tests taking >10 seconds"
    echo "  performance - Performance benchmarks"
    echo "  security    - Security tests"
    echo ""
    echo -e "Press any key to return to menu..."
    read -n 1 -s
}

# Main menu loop
if [ $# -eq 0 ]; then
    # Interactive mode
    while true; do
        show_menu
        read -n 1 choice
        echo ""

        # Disable errexit around the dispatch so a failed test run
        # returns to the menu instead of exiting the whole script
        set +e
        case $choice in
            1)
                run_quick_test
                ;;
            2)
                run_full_test
                ;;
            3)
                run_coverage
                ;;
            4)
                run_unit
                ;;
            5)
                run_integration
                ;;
            6)
                run_e2e
                ;;
            7)
                run_custom_markers
                ;;
            8)
                run_parallel
                ;;
            9)
                run_ci
                ;;
            h|H)
                show_help
                ;;
            q|Q)
                echo -e "${GREEN}Goodbye!${NC}"
                exit 0
                ;;
            *)
                echo -e "${RED}Invalid option${NC}"
                sleep 1
                ;;
        esac
        MENU_STATUS=$?
        set -e

        if [ $MENU_STATUS -eq 0 ]; then
            echo ""
            echo -e "${GREEN}Operation completed successfully!${NC}"
        else
            echo ""
            echo -e "${RED}Operation failed!${NC}"
        fi

        echo ""
        read -p "Press Enter to continue..."
    done
else
    # Non-interactive: forward to run_tests.sh
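    # e.g. `bash scripts/test.sh -t unit -f` behaves like `run_tests.sh -t unit -f`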
bash "$SCRIPT_DIR/run_tests.sh" "$@"
|
||||
fi
|
||||