feat: remove /reasoning endpoint (replaced by /results)

- Delete Pydantic models: ReasoningMessage, PositionSummary, TradingSessionResponse, ReasoningResponse
- Delete /reasoning endpoint from api/main.py
- Remove /reasoning documentation from API_REFERENCE.md
- Delete old endpoint tests (test_api_reasoning_endpoint.py)
- Add integration tests verifying /results replaces /reasoning

The /reasoning endpoint has been replaced by /results with a `reasoning` query parameter:
- GET /reasoning?job_id=X -> GET /results?job_id=X&reasoning=summary
- GET /reasoning?job_id=X&include_full_conversation=true -> GET /results?job_id=X&reasoning=full
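For callers with existing URLs, the mapping above can be applied mechanically. A hypothetical migration helper (not part of this codebase; shown only as a sketch of the translation rule):

```python
from urllib.parse import parse_qs, urlencode, urlparse

def translate_reasoning_url(old_url: str) -> str:
    """Rewrite a legacy /reasoning URL into the equivalent /results URL."""
    parsed = urlparse(old_url)
    params = {k: v[0] for k, v in parse_qs(parsed.query).items()}
    # include_full_conversation=true becomes reasoning=full; default is summary.
    full = params.pop("include_full_conversation", "false").lower() == "true"
    params["reasoning"] = "full" if full else "summary"
    return f"{parsed.scheme}://{parsed.netloc}/results?{urlencode(params)}"
```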

Benefits of the new endpoint:
- Day-centric structure (easier to understand portfolio progression)
- Daily P&L metrics included
- AI-generated reasoning summaries
- Unified data model (trading_days, actions, holdings)
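A minimal sketch of consuming the new day-centric payload. Field names are taken from the new integration test's assertions (`count`, `results`, `date`, `model`, `trades`, `action_type`, `symbol`, `reasoning`); the exact schema lives in api/routes/results_v2.py:

```python
def summarize_results(payload: dict) -> list[tuple]:
    """Collect (date, model, bought symbols) from a /results payload."""
    out = []
    for day in payload["results"]:
        buys = [t["symbol"] for t in day["trades"] if t["action_type"] == "buy"]
        out.append((day["date"], day["model"], buys))
    return out

# Example payload mirroring the integration test's assertions:
sample = {
    "count": 1,
    "results": [{
        "date": "2025-01-15",
        "model": "test-model",
        "trades": [{"action_type": "buy", "symbol": "AAPL"}],
        "reasoning": [
            {"role": "user", "content": "System prompt"},
            {"role": "assistant", "content": "I will buy AAPL"},
        ],
    }],
}
```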

Commit 9c1c96d4f6 (parent 60ea9ab802)
Date: 2025-11-04 09:58:39 -05:00
4 changed files with 100 additions and 770 deletions

API_REFERENCE.md

@@ -665,241 +665,6 @@ curl "http://localhost:8080/results?job_id=550e8400-e29b-41d4-a716-446655440000&
---
### GET /reasoning
Retrieve AI reasoning logs for simulation days, with optional filters. Returns trading sessions with their positions and, optionally, the full conversation history including all AI messages, tool calls, and responses.
**Query Parameters:**
| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `job_id` | string | No | Filter by job UUID |
| `date` | string | No | Filter by trading date (YYYY-MM-DD) |
| `model` | string | No | Filter by model signature |
| `include_full_conversation` | boolean | No | Include all messages and tool calls (default: false, only returns summaries) |
**Response (200 OK) - Summary Only (default):**
```json
{
"sessions": [
{
"session_id": 1,
"job_id": "550e8400-e29b-41d4-a716-446655440000",
"date": "2025-01-16",
"model": "gpt-4",
"session_summary": "Agent analyzed market conditions, purchased 10 shares of AAPL at $250.50, and 5 shares of MSFT at $380.20. Total portfolio value increased to $10,105.00.",
"started_at": "2025-01-16T10:00:05Z",
"completed_at": "2025-01-16T10:05:23Z",
"total_messages": 8,
"positions": [
{
"action_id": 1,
"action_type": "buy",
"symbol": "AAPL",
"amount": 10,
"price": 250.50,
"cash_after": 7495.00,
"portfolio_value": 10000.00
},
{
"action_id": 2,
"action_type": "buy",
"symbol": "MSFT",
"amount": 5,
"price": 380.20,
"cash_after": 5594.00,
"portfolio_value": 10105.00
}
],
"conversation": null
}
],
"count": 1,
"deployment_mode": "PROD",
"is_dev_mode": false,
"preserve_dev_data": null
}
```
**Response (200 OK) - With Full Conversation:**
```json
{
"sessions": [
{
"session_id": 1,
"job_id": "550e8400-e29b-41d4-a716-446655440000",
"date": "2025-01-16",
"model": "gpt-4",
"session_summary": "Agent analyzed market conditions, purchased 10 shares of AAPL at $250.50, and 5 shares of MSFT at $380.20. Total portfolio value increased to $10,105.00.",
"started_at": "2025-01-16T10:00:05Z",
"completed_at": "2025-01-16T10:05:23Z",
"total_messages": 8,
"positions": [
{
"action_id": 1,
"action_type": "buy",
"symbol": "AAPL",
"amount": 10,
"price": 250.50,
"cash_after": 7495.00,
"portfolio_value": 10000.00
},
{
"action_id": 2,
"action_type": "buy",
"symbol": "MSFT",
"amount": 5,
"price": 380.20,
"cash_after": 5594.00,
"portfolio_value": 10105.00
}
],
"conversation": [
{
"message_index": 0,
"role": "user",
"content": "You are a trading agent. Current date: 2025-01-16. Cash: $10000.00. Previous positions: {}. Yesterday's prices: {...}",
"summary": null,
"tool_name": null,
"tool_input": null,
"timestamp": "2025-01-16T10:00:05Z"
},
{
"message_index": 1,
"role": "assistant",
"content": "I'll analyze the market and make trading decisions...",
"summary": "Agent analyzes market conditions and decides to purchase AAPL",
"tool_name": null,
"tool_input": null,
"timestamp": "2025-01-16T10:00:12Z"
},
{
"message_index": 2,
"role": "tool",
"content": "{\"status\": \"success\", \"symbol\": \"AAPL\", \"shares\": 10, \"price\": 250.50}",
"summary": null,
"tool_name": "trade",
"tool_input": "{\"action\": \"buy\", \"symbol\": \"AAPL\", \"amount\": 10}",
"timestamp": "2025-01-16T10:00:13Z"
},
{
"message_index": 3,
"role": "assistant",
"content": "Trade executed successfully. Now purchasing MSFT...",
"summary": "Agent confirms AAPL purchase and initiates MSFT trade",
"tool_name": null,
"tool_input": null,
"timestamp": "2025-01-16T10:00:18Z"
}
]
}
],
"count": 1,
"deployment_mode": "PROD",
"is_dev_mode": false,
"preserve_dev_data": null
}
```
**Response Fields:**
| Field | Type | Description |
|-------|------|-------------|
| `sessions` | array[object] | Array of trading sessions |
| `count` | integer | Number of sessions returned |
| `deployment_mode` | string | Deployment mode: "PROD" or "DEV" |
| `is_dev_mode` | boolean | True if running in development mode |
| `preserve_dev_data` | boolean\|null | DEV mode only: whether dev data is preserved between runs |
**Trading Session Fields:**
| Field | Type | Description |
|-------|------|-------------|
| `session_id` | integer | Unique session ID |
| `job_id` | string | Job UUID this session belongs to |
| `date` | string | Trading date (YYYY-MM-DD) |
| `model` | string | Model signature |
| `session_summary` | string | High-level summary of AI decisions and actions |
| `started_at` | string | ISO 8601 timestamp when session started |
| `completed_at` | string | ISO 8601 timestamp when session completed |
| `total_messages` | integer | Total number of messages in conversation |
| `positions` | array[object] | All trading actions taken this day |
| `conversation` | array[object]\|null | Full message history (null unless `include_full_conversation=true`) |
**Position Summary Fields:**
| Field | Type | Description |
|-------|------|-------------|
| `action_id` | integer | Action sequence number (1, 2, 3...) for this session |
| `action_type` | string | Action taken: `buy`, `sell`, or `hold` |
| `symbol` | string | Stock symbol traded (or null for `hold`) |
| `amount` | integer | Quantity traded (or null for `hold`) |
| `price` | float | Price per share (or null for `hold`) |
| `cash_after` | float | Cash balance after this action |
| `portfolio_value` | float | Total portfolio value (cash + holdings) |
**Reasoning Message Fields:**
| Field | Type | Description |
|-------|------|-------------|
| `message_index` | integer | Message sequence number starting from 0 |
| `role` | string | Message role: `user`, `assistant`, or `tool` |
| `content` | string | Full message content |
| `summary` | string\|null | Human-readable summary (for assistant messages only) |
| `tool_name` | string\|null | Tool name (for tool messages only) |
| `tool_input` | string\|null | Tool input parameters (for tool messages only) |
| `timestamp` | string | ISO 8601 timestamp |
**Error Responses:**
**400 Bad Request** - Invalid date format
```json
{
"detail": "Invalid date format: 2025-1-16. Expected YYYY-MM-DD"
}
```
**404 Not Found** - No sessions found matching filters
```json
{
  "detail": "No trading sessions found matching the provided filters"
}
```
**Examples:**
All sessions for a specific job (summaries only):
```bash
curl "http://localhost:8080/reasoning?job_id=550e8400-e29b-41d4-a716-446655440000"
```
Sessions for a specific date with full conversation:
```bash
curl "http://localhost:8080/reasoning?date=2025-01-16&include_full_conversation=true"
```
Sessions for a specific model:
```bash
curl "http://localhost:8080/reasoning?model=gpt-4"
```
Combine filters to get full conversation for specific model-day:
```bash
curl "http://localhost:8080/reasoning?job_id=550e8400-e29b-41d4-a716-446655440000&date=2025-01-16&model=gpt-4&include_full_conversation=true"
```
**Use Cases:**
- **Debugging AI decisions**: Examine full conversation history to understand why specific trades were made
- **Performance analysis**: Review session summaries to identify patterns in successful trading strategies
- **Model comparison**: Compare reasoning approaches between different AI models on the same trading day
- **Audit trail**: Document AI decision-making process for compliance or research purposes
- **Strategy refinement**: Analyze tool usage patterns and message sequences to optimize agent prompts
---
### GET /health
Health check endpoint for monitoring and orchestration services.

api/main.py

@@ -115,49 +115,6 @@ class HealthResponse(BaseModel):
preserve_dev_data: Optional[bool] = None
class ReasoningMessage(BaseModel):
"""Individual message in a reasoning conversation."""
message_index: int
role: str
content: str
summary: Optional[str] = None
tool_name: Optional[str] = None
tool_input: Optional[str] = None
timestamp: str
class PositionSummary(BaseModel):
"""Trading position summary."""
action_id: int
action_type: Optional[str] = None
symbol: Optional[str] = None
amount: Optional[int] = None
price: Optional[float] = None
cash_after: float
portfolio_value: float
class TradingSessionResponse(BaseModel):
"""Single trading session with positions and optional conversation."""
session_id: int
job_id: str
date: str
model: str
session_summary: Optional[str] = None
started_at: str
completed_at: Optional[str] = None
total_messages: Optional[int] = None
positions: List[PositionSummary]
conversation: Optional[List[ReasoningMessage]] = None
class ReasoningResponse(BaseModel):
"""Response body for GET /reasoning."""
sessions: List[TradingSessionResponse]
count: int
deployment_mode: str
is_dev_mode: bool
preserve_dev_data: Optional[bool] = None
def create_app(
@@ -429,181 +386,6 @@ def create_app(
# This endpoint used the old positions table schema and is no longer needed
# The new endpoint is defined in api/routes/results_v2.py
@app.get("/reasoning", response_model=ReasoningResponse)
async def get_reasoning(
job_id: Optional[str] = Query(None, description="Filter by job ID"),
date: Optional[str] = Query(None, description="Filter by date (YYYY-MM-DD)"),
model: Optional[str] = Query(None, description="Filter by model signature"),
include_full_conversation: bool = Query(False, description="Include full conversation history")
):
"""
Query reasoning logs from trading sessions.
Supports filtering by job_id, date, and/or model.
Returns session summaries with positions and optionally full conversation history.
Args:
job_id: Optional job UUID filter
date: Optional date filter (YYYY-MM-DD)
model: Optional model signature filter
include_full_conversation: Include all messages (default: false, only returns summaries)
Returns:
List of trading sessions with positions and optional conversation
Raises:
HTTPException 400: Invalid date format
HTTPException 404: No sessions found matching filters
"""
try:
# Validate date format if provided
if date:
try:
datetime.strptime(date, "%Y-%m-%d")
except ValueError:
raise HTTPException(
status_code=400,
detail=f"Invalid date format: {date}. Expected YYYY-MM-DD"
)
conn = get_db_connection(app.state.db_path)
cursor = conn.cursor()
# Build query for trading sessions with filters
query = """
SELECT
ts.id,
ts.job_id,
ts.date,
ts.model,
ts.session_summary,
ts.started_at,
ts.completed_at,
ts.total_messages
FROM trading_sessions ts
WHERE 1=1
"""
params = []
if job_id:
query += " AND ts.job_id = ?"
params.append(job_id)
if date:
query += " AND ts.date = ?"
params.append(date)
if model:
query += " AND ts.model = ?"
params.append(model)
query += " ORDER BY ts.date, ts.model"
cursor.execute(query, params)
session_rows = cursor.fetchall()
if not session_rows:
conn.close()
raise HTTPException(
status_code=404,
detail="No trading sessions found matching the provided filters"
)
sessions = []
for session_row in session_rows:
session_id = session_row[0]
# Fetch positions for this session
cursor.execute("""
SELECT
p.action_id,
p.action_type,
p.symbol,
p.amount,
p.price,
p.cash,
p.portfolio_value
FROM positions p
WHERE p.session_id = ?
ORDER BY p.action_id
""", (session_id,))
position_rows = cursor.fetchall()
positions = [
PositionSummary(
action_id=row[0],
action_type=row[1],
symbol=row[2],
amount=row[3],
price=row[4],
cash_after=row[5],
portfolio_value=row[6]
)
for row in position_rows
]
# Optionally fetch full conversation
conversation = None
if include_full_conversation:
cursor.execute("""
SELECT
rl.message_index,
rl.role,
rl.content,
rl.summary,
rl.tool_name,
rl.tool_input,
rl.timestamp
FROM reasoning_logs rl
WHERE rl.session_id = ?
ORDER BY rl.message_index
""", (session_id,))
message_rows = cursor.fetchall()
conversation = [
ReasoningMessage(
message_index=row[0],
role=row[1],
content=row[2],
summary=row[3],
tool_name=row[4],
tool_input=row[5],
timestamp=row[6]
)
for row in message_rows
]
sessions.append(
TradingSessionResponse(
session_id=session_row[0],
job_id=session_row[1],
date=session_row[2],
model=session_row[3],
session_summary=session_row[4],
started_at=session_row[5],
completed_at=session_row[6],
total_messages=session_row[7],
positions=positions,
conversation=conversation
)
)
conn.close()
# Get deployment mode info
deployment_info = get_deployment_mode_dict()
return ReasoningResponse(
sessions=sessions,
count=len(sessions),
**deployment_info
)
except HTTPException:
raise
except Exception as e:
logger.error(f"Failed to query reasoning logs: {e}", exc_info=True)
raise HTTPException(status_code=500, detail=f"Internal server error: {str(e)}")
@app.get("/health", response_model=HealthResponse)
async def health_check():

(new integration test file)

@@ -0,0 +1,100 @@
"""Verify /results endpoint replaces /reasoning endpoint."""
import pytest
from fastapi.testclient import TestClient
from api.main import create_app
from api.database import Database
def test_results_with_full_reasoning_replaces_old_endpoint(tmp_path):
"""Test /results?reasoning=full provides same data as old /reasoning."""
# Create test database with file path (not in-memory, to avoid sharing issues)
import json
db_path = str(tmp_path / "test.db")
db = Database(db_path)
# Create job first
db.connection.execute("""
INSERT INTO jobs (job_id, config_path, status, date_range, models, created_at)
VALUES (?, ?, ?, ?, ?, ?)
""", ('test-job-123', 'test_config.json', 'completed',
json.dumps({'init_date': '2025-01-15', 'end_date': '2025-01-15'}),
json.dumps(['test-model']), '2025-01-15T10:00:00Z'))
db.connection.commit()
trading_day_id = db.create_trading_day(
job_id='test-job-123',
model='test-model',
date='2025-01-15',
starting_cash=10000.0,
starting_portfolio_value=10000.0,
ending_cash=8500.0,
ending_portfolio_value=10000.0,
daily_profit=0.0,
daily_return_pct=0.0,
days_since_last_trading=0
)
# Add actions
db.create_action(trading_day_id, 'buy', 'AAPL', 10, 150.0)
# Add holdings
db.create_holding(trading_day_id, 'AAPL', 10)
# Update with reasoning
db.connection.execute("""
UPDATE trading_days
SET reasoning_summary = 'Bought AAPL based on earnings',
reasoning_full = ?,
total_actions = 1
WHERE id = ?
""", (json.dumps([
{"role": "user", "content": "System prompt"},
{"role": "assistant", "content": "I will buy AAPL"}
]), trading_day_id))
db.connection.commit()
db.connection.close()
# Create test app with the test database
app = create_app(db_path=db_path)
app.state.test_mode = True
# Override the database dependency to use our test database
from api.routes.results_v2 import get_database
def override_get_database():
return Database(db_path)
app.dependency_overrides[get_database] = override_get_database
client = TestClient(app)
# Query new endpoint
response = client.get("/results?job_id=test-job-123&reasoning=full")
assert response.status_code == 200
data = response.json()
# Verify structure matches old endpoint needs
assert data['count'] == 1
result = data['results'][0]
assert result['date'] == '2025-01-15'
assert result['model'] == 'test-model'
assert result['trades'][0]['action_type'] == 'buy'
assert result['trades'][0]['symbol'] == 'AAPL'
assert isinstance(result['reasoning'], list)
assert len(result['reasoning']) == 2
def test_reasoning_endpoint_returns_404():
"""Verify /reasoning endpoint is removed."""
app = create_app(db_path=":memory:")
client = TestClient(app)
response = client.get("/reasoning?job_id=test-job-123")
assert response.status_code == 404

test_api_reasoning_endpoint.py

@@ -1,317 +0,0 @@
"""
Unit tests for GET /reasoning API endpoint.
Coverage target: 95%+
Tests verify:
- Filtering by job_id, date, and model
- Full conversation vs summaries only
- Error handling (404, 400)
- Deployment mode info in responses
"""
import pytest
from datetime import datetime
from api.database import get_db_connection
@pytest.fixture
def sample_trading_session(clean_db):
"""Create a sample trading session with positions and reasoning logs."""
conn = get_db_connection(clean_db)
cursor = conn.cursor()
# Create job
cursor.execute("""
INSERT INTO jobs (job_id, config_path, status, date_range, models, created_at)
VALUES (?, ?, ?, ?, ?, ?)
""", (
"test-job-123",
"configs/test.json",
"completed",
'["2025-10-02"]',
'["gpt-5"]',
"2025-10-02T10:00:00Z"
))
# Create trading session
cursor.execute("""
INSERT INTO trading_sessions (job_id, date, model, session_summary, started_at, completed_at, total_messages)
VALUES (?, ?, ?, ?, ?, ?, ?)
""", (
"test-job-123",
"2025-10-02",
"gpt-5",
"Analyzed AI infrastructure market. Bought NVDA and GOOGL based on secular AI trends.",
"2025-10-02T10:00:00Z",
"2025-10-02T10:05:23Z",
4
))
session_id = cursor.lastrowid
# Create positions linked to session
cursor.execute("""
INSERT INTO positions (
job_id, date, model, action_id, action_type, symbol, amount, price,
cash, portfolio_value, daily_profit, daily_return_pct, session_id, created_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
"test-job-123", "2025-10-02", "gpt-5", 1, "buy", "NVDA", 10, 189.60,
8104.00, 10000.00, 0.0, 0.0, session_id, "2025-10-02T10:05:00Z"
))
cursor.execute("""
INSERT INTO positions (
job_id, date, model, action_id, action_type, symbol, amount, price,
cash, portfolio_value, daily_profit, daily_return_pct, session_id, created_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
"test-job-123", "2025-10-02", "gpt-5", 2, "buy", "GOOGL", 6, 245.15,
6633.10, 10104.00, 104.00, 1.04, session_id, "2025-10-02T10:05:10Z"
))
# Create reasoning logs
cursor.execute("""
INSERT INTO reasoning_logs (session_id, message_index, role, content, summary, timestamp)
VALUES (?, ?, ?, ?, ?, ?)
""", (
session_id, 0, "user",
"Please analyze and update today's (2025-10-02) positions.",
None,
"2025-10-02T10:00:00Z"
))
cursor.execute("""
INSERT INTO reasoning_logs (session_id, message_index, role, content, summary, timestamp)
VALUES (?, ?, ?, ?, ?, ?)
""", (
session_id, 1, "assistant",
"Key intermediate steps\n\n- Read yesterday's positions...",
"Analyzed market conditions and decided to buy NVDA (10 shares) and GOOGL (6 shares).",
"2025-10-02T10:05:20Z"
))
cursor.execute("""
INSERT INTO reasoning_logs (session_id, message_index, role, content, summary, tool_name, tool_input, timestamp)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", (
session_id, 2, "tool",
"Successfully bought 10 shares of NVDA at $189.60",
None,
"trade",
'{"action": "buy", "symbol": "NVDA", "amount": 10}',
"2025-10-02T10:05:21Z"
))
cursor.execute("""
INSERT INTO reasoning_logs (session_id, message_index, role, content, summary, tool_name, tool_input, timestamp)
VALUES (?, ?, ?, ?, ?, ?, ?, ?)
""", (
session_id, 3, "tool",
"Successfully bought 6 shares of GOOGL at $245.15",
None,
"trade",
'{"action": "buy", "symbol": "GOOGL", "amount": 6}',
"2025-10-02T10:05:22Z"
))
conn.commit()
conn.close()
return {
"session_id": session_id,
"job_id": "test-job-123",
"date": "2025-10-02",
"model": "gpt-5"
}
@pytest.fixture
def multiple_sessions(clean_db):
"""Create multiple trading sessions for testing filters."""
conn = get_db_connection(clean_db)
cursor = conn.cursor()
# Create job
cursor.execute("""
INSERT INTO jobs (job_id, config_path, status, date_range, models, created_at)
VALUES (?, ?, ?, ?, ?, ?)
""", (
"test-job-456",
"configs/test.json",
"completed",
'["2025-10-03", "2025-10-04"]',
'["gpt-5", "claude-4"]',
"2025-10-03T10:00:00Z"
))
# Session 1: gpt-5, 2025-10-03
cursor.execute("""
INSERT INTO trading_sessions (job_id, date, model, session_summary, started_at, completed_at, total_messages)
VALUES (?, ?, ?, ?, ?, ?, ?)
""", (
"test-job-456", "2025-10-03", "gpt-5",
"Session 1 summary", "2025-10-03T10:00:00Z", "2025-10-03T10:05:00Z", 2
))
session1_id = cursor.lastrowid
# Session 2: claude-4, 2025-10-03
cursor.execute("""
INSERT INTO trading_sessions (job_id, date, model, session_summary, started_at, completed_at, total_messages)
VALUES (?, ?, ?, ?, ?, ?, ?)
""", (
"test-job-456", "2025-10-03", "claude-4",
"Session 2 summary", "2025-10-03T10:00:00Z", "2025-10-03T10:05:00Z", 2
))
session2_id = cursor.lastrowid
# Session 3: gpt-5, 2025-10-04
cursor.execute("""
INSERT INTO trading_sessions (job_id, date, model, session_summary, started_at, completed_at, total_messages)
VALUES (?, ?, ?, ?, ?, ?, ?)
""", (
"test-job-456", "2025-10-04", "gpt-5",
"Session 3 summary", "2025-10-04T10:00:00Z", "2025-10-04T10:05:00Z", 2
))
session3_id = cursor.lastrowid
# Add positions for each session
for session_id, date, model in [(session1_id, "2025-10-03", "gpt-5"),
(session2_id, "2025-10-03", "claude-4"),
(session3_id, "2025-10-04", "gpt-5")]:
cursor.execute("""
INSERT INTO positions (
job_id, date, model, action_id, action_type, symbol, amount, price,
cash, portfolio_value, session_id, created_at
)
VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
""", (
"test-job-456", date, model, 1, "buy", "AAPL", 5, 250.00,
8750.00, 10000.00, session_id, f"{date}T10:05:00Z"
))
conn.commit()
conn.close()
return {
"job_id": "test-job-456",
"session_ids": [session1_id, session2_id, session3_id]
}
@pytest.mark.unit
class TestGetReasoningEndpoint:
"""Test GET /reasoning endpoint."""
def test_get_reasoning_with_job_id_filter(self, client, sample_trading_session):
"""Should return sessions filtered by job_id."""
response = client.get(f"/reasoning?job_id={sample_trading_session['job_id']}")
assert response.status_code == 200
data = response.json()
assert data["count"] == 1
assert len(data["sessions"]) == 1
assert data["sessions"][0]["job_id"] == sample_trading_session["job_id"]
assert data["sessions"][0]["date"] == sample_trading_session["date"]
assert data["sessions"][0]["model"] == sample_trading_session["model"]
assert data["sessions"][0]["session_summary"] is not None
assert len(data["sessions"][0]["positions"]) == 2
def test_get_reasoning_with_date_filter(self, client, multiple_sessions):
"""Should return sessions filtered by date."""
response = client.get("/reasoning?date=2025-10-03")
assert response.status_code == 200
data = response.json()
assert data["count"] == 2 # Both gpt-5 and claude-4 on 2025-10-03
assert all(s["date"] == "2025-10-03" for s in data["sessions"])
def test_get_reasoning_with_model_filter(self, client, multiple_sessions):
"""Should return sessions filtered by model."""
response = client.get("/reasoning?model=gpt-5")
assert response.status_code == 200
data = response.json()
assert data["count"] == 2 # gpt-5 on both dates
assert all(s["model"] == "gpt-5" for s in data["sessions"])
def test_get_reasoning_with_full_conversation(self, client, sample_trading_session):
"""Should include full conversation when requested."""
response = client.get(
f"/reasoning?job_id={sample_trading_session['job_id']}&include_full_conversation=true"
)
assert response.status_code == 200
data = response.json()
assert data["count"] == 1
session = data["sessions"][0]
assert session["conversation"] is not None
assert len(session["conversation"]) == 4 # 1 user + 1 assistant + 2 tool messages
# Verify message structure
messages = session["conversation"]
assert messages[0]["role"] == "user"
assert messages[0]["message_index"] == 0
assert messages[0]["summary"] is None
assert messages[1]["role"] == "assistant"
assert messages[1]["message_index"] == 1
assert messages[1]["summary"] is not None
assert messages[2]["role"] == "tool"
assert messages[2]["message_index"] == 2
assert messages[2]["tool_name"] == "trade"
assert messages[2]["tool_input"] is not None
def test_get_reasoning_summaries_only(self, client, sample_trading_session):
"""Should not include conversation when include_full_conversation=false (default)."""
response = client.get(f"/reasoning?job_id={sample_trading_session['job_id']}")
assert response.status_code == 200
data = response.json()
assert data["count"] == 1
session = data["sessions"][0]
assert session["conversation"] is None
assert session["session_summary"] is not None
assert session["total_messages"] == 4
def test_get_reasoning_no_results_returns_404(self, client, clean_db):
"""Should return 404 when no sessions match filters."""
response = client.get("/reasoning?job_id=nonexistent-job")
assert response.status_code == 404
assert "No trading sessions found" in response.json()["detail"]
def test_get_reasoning_invalid_date_returns_400(self, client, clean_db):
"""Should return 400 for invalid date format."""
response = client.get("/reasoning?date=invalid-date")
assert response.status_code == 400
assert "Invalid date format" in response.json()["detail"]
def test_get_reasoning_includes_deployment_mode(self, client, sample_trading_session):
"""Should include deployment mode info in response."""
response = client.get(f"/reasoning?job_id={sample_trading_session['job_id']}")
assert response.status_code == 200
data = response.json()
assert "deployment_mode" in data
assert "is_dev_mode" in data
assert isinstance(data["is_dev_mode"], bool)
@pytest.fixture
def client(clean_db):
"""Create FastAPI test client with clean database."""
from fastapi.testclient import TestClient
from api.main import create_app
app = create_app(db_path=clean_db)
app.state.test_mode = True # Prevent background worker from starting
return TestClient(app)