refactor: remove batch mode, simplify to API-only deployment

Removes dual-mode deployment complexity, focusing on the REST API service only.

Changes:
- Removed batch mode from docker-compose.yml (now single ai-trader service)
- Deleted scripts/test_batch_mode.sh validation script
- Renamed entrypoint-api.sh to entrypoint.sh (now default)
- Simplified Dockerfile (single entrypoint, removed CMD)
- Updated validation scripts to use 'ai-trader' service name
- Updated documentation (README.md, TESTING_GUIDE.md, CHANGELOG.md)

Benefits:
- Eliminates port conflicts between batch and API services
- Simpler configuration and deployment
- API-first architecture aligned with Windmill integration
- Reduced maintenance complexity

Breaking Changes:
- Batch mode no longer available
- All simulations must use REST API endpoints
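Since batch invocations are gone, the replacement workflow is the REST API. A minimal sketch of the two calls involved (endpoint paths as shown in the README and TESTING_GUIDE below; the request payload schema is not reproduced here — see DOCKER_API.md):

```shell
# Sketch of the API-only workflow that replaces batch mode.
# API_BASE_URL default matches the docker-compose port mapping.
API_BASE_URL="${API_BASE_URL:-http://localhost:8080}"

trigger_simulation() {
  # POST a new simulation job; the JSON response includes the job id.
  # $1 is the JSON payload (schema per DOCKER_API.md).
  curl -s -X POST "$API_BASE_URL/simulate/trigger" \
    -H "Content-Type: application/json" \
    -d "$1"
}

job_status() {
  # GET the status of a previously triggered job by id.
  curl -s "$API_BASE_URL/simulate/status/$1"
}
```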
2025-10-31 13:54:14 -04:00
parent a9f9560f76
commit 357e561b1f
10 changed files with 75 additions and 495 deletions

View File

@@ -7,6 +7,19 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [Unreleased]
### Changed
- **Simplified Deployment** - Removed batch mode, now API-only
  - Single docker-compose service (ai-trader) instead of dual mode
  - Removed scripts/test_batch_mode.sh
  - Streamlined entrypoint (entrypoint.sh now runs API server)
  - Simplified docker-compose.yml configuration
### Removed
- **Batch Mode** - Eliminated one-time batch simulation mode
  - All simulations now run through REST API
  - Removes complexity of dual-mode deployment
  - Focus on API-first architecture for Windmill integration
## [0.3.0] - 2025-10-31
### Added - API Service Transformation

View File

@@ -24,8 +24,8 @@ RUN mkdir -p /app/scripts && \
# Create necessary directories
RUN mkdir -p data logs data/agent_data
# Make entrypoints executable
RUN chmod +x entrypoint.sh entrypoint-api.sh
# Make entrypoint executable
RUN chmod +x entrypoint.sh
# Expose MCP service ports, API server, and web dashboard
EXPOSE 8000 8001 8002 8003 8080 8888
@@ -33,6 +33,5 @@ EXPOSE 8000 8001 8002 8003 8080 8888
# Set Python to run unbuffered for real-time logs
ENV PYTHONUNBUFFERED=1
# Use entrypoint script
# Use API entrypoint script (no CMD needed - FastAPI runs as service)
ENTRYPOINT ["./entrypoint.sh"]
CMD ["configs/default_config.json"]

View File

@@ -48,9 +48,9 @@
- Job tracking and lifecycle management
- Position records with P&L tracking
- AI reasoning logs and tool usage analytics
- 🐳 **Dual Docker Deployment** - API server mode + Batch mode
  - API mode: Persistent REST service with health checks
  - Batch mode: One-time simulations (backwards compatible)
- 🐳 **Docker Deployment** - Persistent REST API service
  - Health checks and automatic restarts
  - Volume persistence for database and logs
- 🧪 **Comprehensive Testing** - 102 tests with 85% coverage
  - Unit tests for all components
  - Integration tests for API endpoints
@@ -227,9 +227,7 @@ AI-Trader Bench/
### 🐳 **Docker Deployment (Recommended)**
**Two deployment modes available:**
#### 🌐 API Server Mode (Windmill Integration)
#### 🌐 REST API Server (Windmill Integration)
```bash
# 1. Clone and configure
git clone https://github.com/Xe138/AI-Trader.git
@@ -238,7 +236,7 @@ cp .env.example .env
# Edit .env and add your API keys
# 2. Start API server
docker-compose up -d ai-trader-api
docker-compose up -d
# 3. Test API
curl http://localhost:8080/health
@@ -253,16 +251,7 @@ curl -X POST http://localhost:8080/simulate/trigger \
}'
```
See [DOCKER_API.md](DOCKER_API.md) for complete API documentation.
#### 🎯 Batch Mode (One-time Simulation)
```bash
# Run single simulation
docker-compose --profile batch up ai-trader-batch
# With custom config
docker-compose --profile batch run ai-trader-batch configs/custom.json
```
See [DOCKER_API.md](DOCKER_API.md) for complete API documentation and [TESTING_GUIDE.md](TESTING_GUIDE.md) for validation procedures.
---
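For longer simulations, the trigger-then-poll pattern above can be sketched as a small helper. The endpoint path comes from this README; the terminal status values ("completed", "failed") are assumptions — check DOCKER_API.md for the actual ones:

```shell
# Sketch: poll a job until it reaches a terminal state, then print it.
# Assumes jq is installed and the API is on localhost:8080.
wait_for_job() {
  local job_id="$1" status
  while :; do
    status=$(curl -s "http://localhost:8080/simulate/status/$job_id" | jq -r '.status')
    if [ "$status" = "completed" ] || [ "$status" = "failed" ]; then
      break
    fi
    sleep 30  # same cadence as the `watch -n 30` example in TESTING_GUIDE.md
  done
  echo "$status"
}
```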

View File

@@ -20,9 +20,6 @@ bash scripts/validate_docker_build.sh
# 3. Test API endpoints
bash scripts/test_api_endpoints.sh
# 4. Test batch mode
bash scripts/test_batch_mode.sh
```
---
@@ -85,7 +82,7 @@ Health response: {"status":"healthy","database":"connected","timestamp":"..."}
- Check Docker Desktop is running
- Verify `.env` has all required keys
- Check port 8080 is not already in use
- Review logs: `docker logs ai-trader-api`
- Review logs: `docker logs ai-trader`
---
@@ -96,7 +93,7 @@ Health response: {"status":"healthy","database":"connected","timestamp":"..."}
**Command:**
```bash
# Ensure API is running first
docker-compose up -d ai-trader-api
docker-compose up -d ai-trader
# Run tests
bash scripts/test_api_endpoints.sh
@@ -154,74 +151,13 @@ Test 9: Error handling
```
**If it fails:**
- Ensure container is running: `docker ps | grep ai-trader-api`
- Check API logs: `docker logs ai-trader-api`
- Ensure container is running: `docker ps | grep ai-trader`
- Check API logs: `docker logs ai-trader`
- Verify port 8080 is accessible: `curl http://localhost:8080/health`
- Check MCP services started: `docker exec ai-trader-api ps aux | grep python`
- Check MCP services started: `docker exec ai-trader ps aux | grep python`
---
### Test 3: Batch Mode Testing
**Purpose:** Verify one-time simulation execution works
**Command:**
```bash
bash scripts/test_batch_mode.sh
```
**What it tests:**
- ✅ Batch mode container starts
- ✅ Simulation executes to completion
- ✅ Exit code is 0 (success)
- ✅ Position files created
- ✅ Log files generated
- ✅ Price data persists between runs
**Expected output:**
```
==========================================
AI-Trader Batch Mode Testing
==========================================
✓ Prerequisites OK
Using config: configs/default_config.json
Test 1: Building Docker image
✓ Image built successfully
Test 2: Running batch simulation
🚀 Starting AI-Trader...
✅ Environment variables validated
📊 Fetching and merging price data...
🔧 Starting MCP services...
🤖 Starting trading agent...
[Trading output...]
Test 3: Checking exit status
✓ Batch simulation completed successfully (exit code: 0)
Test 4: Verifying output files
✓ Found 1 position file(s)
Sample position data: {...}
✓ Found 3 log file(s)
Test 5: Checking price data
✓ Price data exists: 100 stocks
Test 6: Testing data persistence
✓ Second run completed successfully
✓ Price data was reused
```
**If it fails:**
- Check `.env` has valid API keys
- Verify internet connection (for price data)
- Check available disk space
- Review batch logs: `docker logs ai-trader-batch`
- Check data directory permissions
---
## Manual Testing Procedures
@@ -229,7 +165,7 @@ Test 6: Testing data persistence
```bash
# Start API
docker-compose up -d ai-trader-api
docker-compose up -d ai-trader
# Test health endpoint
curl http://localhost:8080/health
@@ -299,7 +235,7 @@ ls -lh data/jobs.db
ls -R data/agent_data
# Restart container
docker-compose up -d ai-trader-api
docker-compose up -d ai-trader
# Data should still be accessible via API
curl http://localhost:8080/results | jq '.count'
@@ -312,13 +248,13 @@ curl http://localhost:8080/results | jq '.count'
### Problem: Container won't start
**Symptoms:**
- `docker ps` shows no ai-trader-api container
- `docker ps` shows no ai-trader container
- Container exits immediately
**Debug steps:**
```bash
# Check logs
docker logs ai-trader-api
docker logs ai-trader
# Common issues:
# 1. Missing API keys in .env
@@ -348,25 +284,25 @@ chmod -R 755 data logs
**Debug steps:**
```bash
# Check if API process is running
docker exec ai-trader-api ps aux | grep uvicorn
docker exec ai-trader ps aux | grep uvicorn
# Check internal health
docker exec ai-trader-api curl http://localhost:8080/health
docker exec ai-trader curl http://localhost:8080/health
# Check logs for startup errors
docker logs ai-trader-api | grep -i error
docker logs ai-trader | grep -i error
```
**Solutions:**
```bash
# If MCP services didn't start:
docker exec ai-trader-api ps aux | grep python
docker exec ai-trader ps aux | grep python
# If database issues:
docker exec ai-trader-api ls -l /app/data/jobs.db
docker exec ai-trader ls -l /app/data/jobs.db
# Restart container
docker-compose restart ai-trader-api
docker-compose restart ai-trader
```
### Problem: Job stays in "pending" status
@@ -378,19 +314,19 @@ docker-compose restart ai-trader-api
**Debug steps:**
```bash
# Check worker logs
docker logs ai-trader-api | grep -i "worker\|simulation"
docker logs ai-trader | grep -i "worker\|simulation"
# Check database
docker exec ai-trader-api sqlite3 /app/data/jobs.db "SELECT * FROM job_details;"
docker exec ai-trader sqlite3 /app/data/jobs.db "SELECT * FROM job_details;"
# Check if MCP services are accessible
docker exec ai-trader-api curl http://localhost:8000/health
docker exec ai-trader curl http://localhost:8000/health
```
**Solutions:**
```bash
# Restart container (jobs resume automatically)
docker-compose restart ai-trader-api
docker-compose restart ai-trader
# Check specific job status
curl http://localhost:8080/simulate/status/$JOB_ID | jq '.details'
@@ -411,7 +347,7 @@ curl http://localhost:8080/simulate/status/$JOB_ID | jq '.details'
watch -n 30 "curl -s http://localhost:8080/simulate/status/$JOB_ID | jq '.status, .progress'"
# Check agent logs for slowness
docker logs ai-trader-api | tail -100
docker logs ai-trader | tail -100
```
---
@@ -451,20 +387,20 @@ docker logs ai-trader-api | tail -100
```bash
# Docker handles log rotation, but monitor size:
docker logs ai-trader-api --tail 100
docker logs ai-trader --tail 100
# Clear old logs if needed:
docker logs ai-trader-api > /dev/null 2>&1
docker logs ai-trader > /dev/null 2>&1
```
### Database Size
```bash
# Monitor database growth
docker exec ai-trader-api du -h /app/data/jobs.db
docker exec ai-trader du -h /app/data/jobs.db
# Vacuum periodically
docker exec ai-trader-api sqlite3 /app/data/jobs.db "VACUUM;"
docker exec ai-trader sqlite3 /app/data/jobs.db "VACUUM;"
```
---
@@ -473,12 +409,11 @@ docker exec ai-trader-api sqlite3 /app/data/jobs.db "VACUUM;"
### Validation Complete When:
- ✅ All 3 test scripts pass without errors
- ✅ Both test scripts pass without errors
- ✅ Health endpoint returns "healthy" status
- ✅ Can trigger and complete simulation job
- ✅ Results are retrievable via API
- ✅ Data persists after container restart
- ✅ Batch mode completes successfully
- ✅ No critical errors in logs
### Ready for Production When:
@@ -508,6 +443,6 @@ docker exec ai-trader-api sqlite3 /app/data/jobs.db "VACUUM;"
For issues not covered in this guide:
1. Check `DOCKER_API.md` for detailed API documentation
2. Review container logs: `docker logs ai-trader-api`
3. Check database: `docker exec ai-trader-api sqlite3 /app/data/jobs.db ".tables"`
2. Review container logs: `docker logs ai-trader`
3. Check database: `docker exec ai-trader sqlite3 /app/data/jobs.db ".tables"`
4. Open issue on GitHub with logs and error messages
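The validation checklist above can be condensed into a quick smoke check. This is a sketch, not part of the repo; it assumes the service is reachable on localhost:8080 and the container is named ai-trader, as throughout this guide:

```shell
# Condensed smoke check distilled from the validation checklist.
smoke_check() {
  # Health endpoint must report healthy
  curl -sf http://localhost:8080/health | grep -q "healthy" || { echo "health check failed"; return 1; }
  # Container must be running
  docker ps | grep -q ai-trader || { echo "container not running"; return 1; }
  echo "smoke check passed"
}
```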

View File

@@ -1,52 +1,10 @@
services:
  # Batch mode: Run one-time simulations with config file
  ai-trader-batch:
  # REST API server for Windmill integration
  ai-trader:
    # image: ghcr.io/xe138/ai-trader:latest
    # Uncomment to build locally instead of pulling:
    build: .
    container_name: ai-trader-batch
    volumes:
      - ${VOLUME_PATH:-.}/data:/app/data
      - ${VOLUME_PATH:-.}/logs:/app/logs
      - ${VOLUME_PATH:-.}/configs:/app/configs
    environment:
      # AI Model API Configuration
      - OPENAI_API_BASE=${OPENAI_API_BASE}
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      # Data Source Configuration
      - ALPHAADVANTAGE_API_KEY=${ALPHAADVANTAGE_API_KEY}
      - JINA_API_KEY=${JINA_API_KEY}
      # System Configuration
      - RUNTIME_ENV_PATH=/app/data/runtime_env.json
      # MCP Service Ports (fixed internally)
      - MATH_HTTP_PORT=8000
      - SEARCH_HTTP_PORT=8001
      - TRADE_HTTP_PORT=8002
      - GETPRICE_HTTP_PORT=8003
      # Agent Configuration
      - AGENT_MAX_STEP=${AGENT_MAX_STEP:-30}
    ports:
      # Format: "HOST:CONTAINER" - container ports are fixed, host ports configurable via .env
      - "${MATH_HTTP_PORT:-8000}:8000"
      - "${SEARCH_HTTP_PORT:-8001}:8001"
      - "${TRADE_HTTP_PORT:-8002}:8002"
      - "${GETPRICE_HTTP_PORT:-8003}:8003"
      - "${WEB_HTTP_PORT:-8888}:8888"
    restart: "no" # Batch jobs should not auto-restart
    profiles:
      - batch # Only start with: docker-compose --profile batch up
  # API mode: REST API server for Windmill integration
  ai-trader-api:
    # image: ghcr.io/xe138/ai-trader:latest
    # Uncomment to build locally instead of pulling:
    build: .
    container_name: ai-trader-api
    entrypoint: ["./entrypoint-api.sh"]
    container_name: ai-trader
    volumes:
      - ${VOLUME_PATH:-.}/data:/app/data
      - ${VOLUME_PATH:-.}/logs:/app/logs
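Read as a whole, the consolidated single-service definition comes out roughly as follows (a sketch assembled from the surviving lines above; the environment and port lists are abridged here, as the excerpt itself is truncated):

```yaml
services:
  # REST API server for Windmill integration
  ai-trader:
    build: .
    container_name: ai-trader
    volumes:
      - ${VOLUME_PATH:-.}/data:/app/data
      - ${VOLUME_PATH:-.}/logs:/app/logs
    # environment, ports, restart policy, etc. continue as in the excerpt above
```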

View File

@@ -1,65 +0,0 @@
#!/bin/bash
set -e # Exit on any error
echo "🚀 Starting AI-Trader API Server..."
# Validate required environment variables
echo "🔍 Validating environment variables..."
MISSING_VARS=()
if [ -z "$OPENAI_API_KEY" ]; then
  MISSING_VARS+=("OPENAI_API_KEY")
fi
if [ -z "$ALPHAADVANTAGE_API_KEY" ]; then
  MISSING_VARS+=("ALPHAADVANTAGE_API_KEY")
fi
if [ -z "$JINA_API_KEY" ]; then
  MISSING_VARS+=("JINA_API_KEY")
fi
if [ ${#MISSING_VARS[@]} -gt 0 ]; then
  echo ""
  echo "❌ ERROR: Missing required environment variables:"
  for var in "${MISSING_VARS[@]}"; do
    echo " - $var"
  done
  echo ""
  echo "Please set these variables in your .env file:"
  echo " 1. Copy .env.example to .env"
  echo " 2. Edit .env and add your API keys"
  echo " 3. Restart the container"
  echo ""
  exit 1
fi
echo "✅ Environment variables validated"
# Step 1: Initialize database
echo "📊 Initializing database..."
python -c "from api.database import initialize_database; initialize_database('data/jobs.db')"
echo "✅ Database initialized"
# Step 2: Start MCP services in background
echo "🔧 Starting MCP services..."
cd /app
python agent_tools/start_mcp_services.py &
MCP_PID=$!
# Step 3: Wait for services to initialize
echo "⏳ Waiting for MCP services to start..."
sleep 3
# Step 4: Start FastAPI server with uvicorn
# Note: Container always uses port 8080 internally
# The API_PORT env var only affects the host port mapping in docker-compose.yml
echo "🌐 Starting FastAPI server on port 8080..."
uvicorn api.main:app \
  --host 0.0.0.0 \
  --port 8080 \
  --log-level info \
  --access-log
# Cleanup on exit
trap "echo '🛑 Stopping services...'; kill $MCP_PID 2>/dev/null; exit 0" EXIT SIGTERM SIGINT

View File

@@ -1,7 +1,7 @@
#!/bin/bash
set -e # Exit on any error
echo "🚀 Starting AI-Trader..."
echo "🚀 Starting AI-Trader API Server..."
# Validate required environment variables
echo "🔍 Validating environment variables..."
@@ -31,25 +31,15 @@ if [ ${#MISSING_VARS[@]} -gt 0 ]; then
  echo " 2. Edit .env and add your API keys"
  echo " 3. Restart the container"
  echo ""
  echo "See docs/DOCKER.md for more information."
  exit 1
fi
echo "✅ Environment variables validated"
# Step 1: Data preparation
echo "📊 Checking price data..."
if [ -f "/app/data/merged.jsonl" ] && [ -s "/app/data/merged.jsonl" ]; then
  echo "✅ Using existing price data ($(wc -l < /app/data/merged.jsonl) stocks)"
  echo " To refresh data, delete /app/data/merged.jsonl and restart"
else
  echo "📊 Fetching and merging price data..."
  # Run script from /app/scripts but output to /app/data
  # Note: get_daily_price.py now automatically calls merge_jsonl.py after fetching
  cd /app/data
  python /app/scripts/get_daily_price.py
  cd /app
fi
# Step 1: Initialize database
echo "📊 Initializing database..."
python -c "from api.database import initialize_database; initialize_database('data/jobs.db')"
echo "✅ Database initialized"
# Step 2: Start MCP services in background
echo "🔧 Starting MCP services..."
@@ -61,22 +51,15 @@ MCP_PID=$!
echo "⏳ Waiting for MCP services to start..."
sleep 3
# Step 4: Run trading agent with config file
echo "🤖 Starting trading agent..."
# Smart config selection: custom_config.json takes precedence if it exists
if [ -f "configs/custom_config.json" ]; then
  CONFIG_FILE="configs/custom_config.json"
  echo "✅ Using custom configuration: configs/custom_config.json"
elif [ -n "$1" ]; then
  CONFIG_FILE="$1"
  echo "✅ Using specified configuration: $CONFIG_FILE"
else
  CONFIG_FILE="configs/default_config.json"
  echo "✅ Using default configuration: configs/default_config.json"
fi
python main.py "$CONFIG_FILE"
# Step 4: Start FastAPI server with uvicorn
# Note: Container always uses port 8080 internally
# The API_PORT env var only affects the host port mapping in docker-compose.yml
echo "🌐 Starting FastAPI server on port 8080..."
uvicorn api.main:app \
  --host 0.0.0.0 \
  --port 8080 \
  --log-level info \
  --access-log
# Cleanup on exit
trap "echo '🛑 Stopping MCP services...'; kill $MCP_PID 2>/dev/null; exit 0" EXIT SIGTERM SIGINT
trap "echo '🛑 Stopping services...'; kill $MCP_PID 2>/dev/null; exit 0" EXIT SIGTERM SIGINT
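The entrypoint's env-var checks lend themselves to a single reusable function. A sketch (bash; the function name is illustrative, not part of the repo):

```shell
# Sketch of the entrypoint's env-var validation, generalized.
# Usage: require_env OPENAI_API_KEY ALPHAADVANTAGE_API_KEY JINA_API_KEY
require_env() {
  local missing=() var
  for var in "$@"; do
    # ${!var} is bash indirect expansion: the value of the named variable
    [ -n "${!var}" ] || missing+=("$var")
  done
  if [ ${#missing[@]} -gt 0 ]; then
    echo "❌ ERROR: Missing required environment variables: ${missing[*]}" >&2
    return 1
  fi
  echo "✅ Environment variables validated"
}
```

Calling it with the three keys the script checks reproduces the validation block in one place.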

View File

@@ -25,7 +25,7 @@ echo "Checking if API is accessible..."
if ! curl -f "$API_BASE_URL/health" &> /dev/null; then
  echo -e "${RED}✗${NC} API is not accessible at $API_BASE_URL"
  echo "Make sure the container is running:"
  echo " docker-compose up -d ai-trader-api"
  echo " docker-compose up -d ai-trader"
  exit 1
fi
echo -e "${GREEN}✓${NC} API is accessible"

View File

@@ -1,232 +0,0 @@
#!/bin/bash
# Batch Mode Testing Script
# Tests Docker batch mode with one-time simulation
set -e
echo "=========================================="
echo "AI-Trader Batch Mode Testing"
echo "=========================================="
echo ""
# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m'
# Check prerequisites
echo "Checking prerequisites..."
if ! command -v docker &> /dev/null; then
  echo -e "${RED}✗${NC} Docker not installed"
  exit 1
fi
if [ ! -f .env ]; then
  echo -e "${RED}✗${NC} .env file not found"
  echo "Copy .env.example to .env and configure API keys"
  exit 1
fi
echo -e "${GREEN}✓${NC} Prerequisites OK"
echo ""
# Check if custom config exists
CONFIG_FILE=${1:-configs/default_config.json}
if [ ! -f "$CONFIG_FILE" ]; then
  echo -e "${YELLOW}⚠${NC} Config file not found: $CONFIG_FILE"
  echo "Creating test config..."
  mkdir -p configs
  cat > configs/test_batch.json <<EOF
{
  "agent_type": "BaseAgent",
  "date_range": {
    "init_date": "2025-01-16",
    "end_date": "2025-01-17"
  },
  "models": [
    {
      "name": "GPT-4 Test",
      "basemodel": "gpt-4",
      "signature": "gpt-4-test",
      "enabled": true
    }
  ],
  "agent_config": {
    "max_steps": 10,
    "initial_cash": 10000.0
  },
  "log_config": {
    "log_path": "./data/agent_data"
  }
}
EOF
  CONFIG_FILE="configs/test_batch.json"
  echo -e "${GREEN}✓${NC} Created test config: $CONFIG_FILE"
fi
echo "Using config: $CONFIG_FILE"
echo ""
# Test 1: Build image
echo -e "${BLUE}Test 1: Building Docker image${NC}"
echo "This may take a few minutes..."
if docker build -t ai-trader-batch-test . > /tmp/docker-build.log 2>&1; then
  echo -e "${GREEN}✓${NC} Image built successfully"
else
  echo -e "${RED}✗${NC} Build failed"
  echo "Check logs: /tmp/docker-build.log"
  tail -20 /tmp/docker-build.log
  exit 1
fi
echo ""
# Test 2: Run batch simulation
echo -e "${BLUE}Test 2: Running batch simulation${NC}"
echo "Starting container in batch mode..."
echo "Config: $CONFIG_FILE"
echo ""
# Use docker-compose if available, otherwise docker run
if command -v docker-compose &> /dev/null || docker compose version &> /dev/null; then
  echo "Using docker-compose..."
  # Ensure API is stopped
  docker-compose down 2>/dev/null || true
  # Run batch mode
  echo "Executing: docker-compose --profile batch run --rm ai-trader-batch $CONFIG_FILE"
  docker-compose --profile batch run --rm ai-trader-batch "$CONFIG_FILE"
  BATCH_EXIT_CODE=$?
else
  echo "Using docker run..."
  docker run --rm \
    --env-file .env \
    -v "$(pwd)/data:/app/data" \
    -v "$(pwd)/logs:/app/logs" \
    -v "$(pwd)/configs:/app/configs" \
    ai-trader-batch-test \
    "$CONFIG_FILE"
  BATCH_EXIT_CODE=$?
fi
echo ""
# Test 3: Check exit code
echo -e "${BLUE}Test 3: Checking exit status${NC}"
if [ $BATCH_EXIT_CODE -eq 0 ]; then
  echo -e "${GREEN}✓${NC} Batch simulation completed successfully (exit code: 0)"
else
  echo -e "${RED}✗${NC} Batch simulation failed (exit code: $BATCH_EXIT_CODE)"
  echo "Check logs in ./logs/ directory"
  exit 1
fi
echo ""
# Test 4: Verify output files
echo -e "${BLUE}Test 4: Verifying output files${NC}"
# Check if data directory has position files
POSITION_FILES=$(find data/agent_data -name "position.jsonl" 2>/dev/null | wc -l)
if [ $POSITION_FILES -gt 0 ]; then
  echo -e "${GREEN}✓${NC} Found $POSITION_FILES position file(s)"
  # Show sample position data
  SAMPLE_POSITION=$(find data/agent_data -name "position.jsonl" 2>/dev/null | head -1)
  if [ -n "$SAMPLE_POSITION" ]; then
    echo "Sample position data from: $SAMPLE_POSITION"
    head -1 "$SAMPLE_POSITION" | jq '.' 2>/dev/null || head -1 "$SAMPLE_POSITION"
  fi
else
  echo -e "${YELLOW}⚠${NC} No position files found"
  echo "This could indicate the simulation didn't complete trading"
fi
echo ""
# Check log files
LOG_COUNT=$(find logs -name "*.log" 2>/dev/null | wc -l)
if [ $LOG_COUNT -gt 0 ]; then
  echo -e "${GREEN}✓${NC} Found $LOG_COUNT log file(s)"
else
  echo -e "${YELLOW}⚠${NC} No log files found"
fi
echo ""
# Test 5: Check price data
echo -e "${BLUE}Test 5: Checking price data${NC}"
if [ -f "data/merged.jsonl" ]; then
  STOCK_COUNT=$(wc -l < data/merged.jsonl)
  echo -e "${GREEN}✓${NC} Price data exists: $STOCK_COUNT stocks"
else
  echo -e "${YELLOW}⚠${NC} No price data file found"
  echo "First run will download price data"
fi
echo ""
# Test 6: Re-run to test data persistence
echo -e "${BLUE}Test 6: Testing data persistence${NC}"
echo "Running batch mode again to verify data persists..."
echo ""
if command -v docker-compose &> /dev/null || docker compose version &> /dev/null; then
  docker-compose --profile batch run --rm ai-trader-batch "$CONFIG_FILE" > /tmp/batch-second-run.log 2>&1
  SECOND_EXIT_CODE=$?
else
  docker run --rm \
    --env-file .env \
    -v "$(pwd)/data:/app/data" \
    -v "$(pwd)/logs:/app/logs" \
    -v "$(pwd)/configs:/app/configs" \
    ai-trader-batch-test \
    "$CONFIG_FILE" > /tmp/batch-second-run.log 2>&1
  SECOND_EXIT_CODE=$?
fi
if [ $SECOND_EXIT_CODE -eq 0 ]; then
  echo -e "${GREEN}✓${NC} Second run completed successfully"
  # Check if it reused price data (should be faster)
  if grep -q "Using existing price data" /tmp/batch-second-run.log; then
    echo -e "${GREEN}✓${NC} Price data was reused (data persistence working)"
  else
    echo -e "${YELLOW}⚠${NC} Could not verify price data reuse"
  fi
else
  echo -e "${RED}✗${NC} Second run failed"
fi
echo ""
# Summary
echo "=========================================="
echo "Batch Mode Test Summary"
echo "=========================================="
echo ""
echo "Tests completed:"
echo " ✓ Docker image build"
echo " ✓ Batch mode execution"
echo " ✓ Exit code verification"
echo " ✓ Output file generation"
echo " ✓ Data persistence"
echo ""
echo "Output locations:"
echo " Position data: data/agent_data/*/position/"
echo " Trading logs: data/agent_data/*/log/"
echo " System logs: logs/"
echo " Price data: data/merged.jsonl"
echo ""
echo "To view position data:"
echo " find data/agent_data -name 'position.jsonl' -exec cat {} \;"
echo ""
echo "To view trading logs:"
echo " find data/agent_data -name 'log.jsonl' | head -1 | xargs cat"
echo ""
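The deleted script's key persistence assertion (Test 6) boils down to a single log grep; kept here as a sketch should anyone re-create a one-shot run (function name is illustrative):

```shell
# A second run's log should show that existing price data was reused.
# $1 is the path to the captured run log.
reused_price_data() {
  grep -q "Using existing price data" "$1"
}
```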

View File

@@ -137,7 +137,7 @@ echo ""
echo "Step 5: Testing API mode startup..."
echo "Starting container in background..."
$COMPOSE_CMD up -d ai-trader-api
$COMPOSE_CMD up -d ai-trader
if [ $? -eq 0 ]; then
  print_status 0 "Container started successfully"
@@ -146,20 +146,20 @@ if [ $? -eq 0 ]; then
  sleep 10
  # Check if container is still running
  if docker ps | grep -q ai-trader-api; then
  if docker ps | grep -q ai-trader; then
    print_status 0 "Container is running"
    # Check logs for errors
    ERROR_COUNT=$(docker logs ai-trader-api 2>&1 | grep -i "error" | grep -v "ERROR:" | wc -l)
    ERROR_COUNT=$(docker logs ai-trader 2>&1 | grep -i "error" | grep -v "ERROR:" | wc -l)
    if [ $ERROR_COUNT -gt 0 ]; then
      print_warning "Found $ERROR_COUNT error messages in logs"
      echo "Check logs with: docker logs ai-trader-api"
      echo "Check logs with: docker logs ai-trader"
    else
      print_status 0 "No critical errors in logs"
    fi
  else
    print_status 1 "Container stopped unexpectedly"
    echo "Check logs with: docker logs ai-trader-api"
    echo "Check logs with: docker logs ai-trader"
    exit 1
  fi
else
@@ -188,7 +188,7 @@ else
  echo " - Port 8080 is already in use"
  echo " - MCP services failed to initialize"
  echo ""
  echo "Check logs with: docker logs ai-trader-api"
  echo "Check logs with: docker logs ai-trader"
fi
echo ""
@@ -215,7 +215,7 @@ echo "2. Test batch mode:"
echo " bash scripts/test_batch_mode.sh"
echo ""
echo "3. If any checks failed, review logs:"
echo " docker logs ai-trader-api"
echo " docker logs ai-trader"
echo ""
echo "4. For troubleshooting, see: DOCKER_API.md"
echo ""