mirror of
https://github.com/Xe138/AI-Trader.git
synced 2026-04-01 17:17:24 -04:00
Systematic debugging revealed the root cause of the Pydantic validation errors:

- DeepSeek correctly returns tool_calls.arguments as JSON strings
- My wrapper was incorrectly converting those strings to dicts
- This caused LangChain's parse_tool_call() to fail (json.loads(dict) raises TypeError)
- The failure created invalid_tool_calls entries with dict args (should be strings)
- Result: Pydantic validation error on invalid_tool_calls

Solution: Remove all conversion logic. The DeepSeek format is already correct.
ToolCallArgsParsingWrapper now acts as a simple passthrough proxy.
The trading session completes successfully with no errors.

Closes out the systematic-debugging investigation, which identified that the issue
was in our fix attempt, not in the original API response.

Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
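The failure mode described in the commit message can be sketched in a few lines. This is an illustrative reproduction, not code from the repo: the argument payload is a made-up example, but the core fact is real — OpenAI-compatible APIs return tool-call arguments as a JSON *string*, and downstream parsers feed that string to `json.loads()`, which rejects anything that is already a dict:

```python
import json

# Correct path: arguments arrive as a JSON string and parse cleanly.
arguments = '{"symbol": "AAPL", "qty": 10}'  # hypothetical tool-call payload
parsed = json.loads(arguments)
assert parsed == {"symbol": "AAPL", "qty": 10}

# Buggy path: the old wrapper pre-converted the string to a dict, so the
# downstream json.loads() call received a dict and raised TypeError —
# which is what produced the invalid_tool_calls entries.
already_a_dict = {"symbol": "AAPL", "qty": 10}
try:
    json.loads(already_a_dict)
except TypeError as exc:
    print(f"TypeError: {exc}")
```

This is why the fix is a deletion rather than more parsing: the string format was correct all along.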
52 lines
1.7 KiB
Python
"""
|
|
Chat model wrapper - Passthrough wrapper for ChatOpenAI models.
|
|
|
|
Originally created to fix DeepSeek tool_calls arg parsing issues, but investigation
|
|
revealed DeepSeek already returns the correct format (arguments as JSON strings).
|
|
|
|
This wrapper is now a simple passthrough that proxies all calls to the underlying model.
|
|
Kept for backward compatibility and potential future use.
|
|
"""
|
|
|
|
from typing import Any
|
|
|
|
|
|
class ToolCallArgsParsingWrapper:
|
|
"""
|
|
Passthrough wrapper around ChatOpenAI models.
|
|
|
|
After systematic debugging, determined that DeepSeek returns tool_calls.arguments
|
|
as JSON strings (correct format), so no parsing/conversion is needed.
|
|
|
|
This wrapper simply proxies all calls to the wrapped model.
|
|
"""
|
|
|
|
def __init__(self, model: Any, **kwargs):
|
|
"""
|
|
Initialize wrapper around a chat model.
|
|
|
|
Args:
|
|
model: The chat model to wrap
|
|
**kwargs: Additional parameters (ignored, for compatibility)
|
|
"""
|
|
self.wrapped_model = model
|
|
|
|
@property
|
|
def _llm_type(self) -> str:
|
|
"""Return identifier for this LLM type"""
|
|
if hasattr(self.wrapped_model, '_llm_type'):
|
|
return f"wrapped-{self.wrapped_model._llm_type}"
|
|
return "wrapped-chat-model"
|
|
|
|
def __getattr__(self, name: str):
|
|
"""Proxy all attributes/methods to the wrapped model"""
|
|
return getattr(self.wrapped_model, name)
|
|
|
|
def bind_tools(self, tools: Any, **kwargs):
|
|
"""Bind tools to the wrapped model"""
|
|
return self.wrapped_model.bind_tools(tools, **kwargs)
|
|
|
|
def bind(self, **kwargs):
|
|
"""Bind settings to the wrapped model"""
|
|
return self.wrapped_model.bind(**kwargs)
|
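A quick demo of the passthrough behavior. The wrapper class is restated inline so this sketch runs standalone, and `FakeChatModel` is a hypothetical stand-in for ChatOpenAI, not part of the repo — the point is just that explicitly defined members (`_llm_type`) resolve on the wrapper, while everything else (`invoke`) falls through to the wrapped model via `__getattr__`:

```python
from typing import Any


class ToolCallArgsParsingWrapper:
    """Restated from the file above so this demo is self-contained."""

    def __init__(self, model: Any, **kwargs):
        self.wrapped_model = model

    @property
    def _llm_type(self) -> str:
        if hasattr(self.wrapped_model, '_llm_type'):
            return f"wrapped-{self.wrapped_model._llm_type}"
        return "wrapped-chat-model"

    def __getattr__(self, name: str):
        # Only called when normal attribute lookup fails on the wrapper,
        # so wrapper-defined members take precedence over the model's.
        return getattr(self.wrapped_model, name)


class FakeChatModel:
    """Hypothetical stand-in for ChatOpenAI, just to exercise the proxy."""

    _llm_type = "fake-chat"

    def invoke(self, messages):
        return f"echo: {messages}"


wrapper = ToolCallArgsParsingWrapper(FakeChatModel())
print(wrapper._llm_type)     # wrapped-fake-chat (wrapper's own property)
print(wrapper.invoke("hi"))  # echo: hi (proxied to the wrapped model)
```

Note that `__getattr__` (unlike `__getattribute__`) intercepts only failed lookups, which is what makes this minimal proxy pattern safe: the wrapper's own attributes, including `wrapped_model` itself, never recurse into the proxy.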