First: super-cool idea and tool! Really looking forward to playing around with this more.
I may not be using Bespoken correctly out of the box, but I thought I'd raise this issue in case it isn't just me.
Description
When using Bespoken with Ollama-backed models (for example, `mistral-small3.2:latest`), passing a `Toolbox` instance directly into the `tools` list does not work. However, if you extract the individual callable methods from that toolbox and pass those instead, tool invocation succeeds. This appears to be specific to the Ollama integration in Bespoken.
Non-Working Example
```python
from bespoken import chat
from bespoken.tools import FileTool

chat(
    model_name="mistral-small3.2:latest",
    tools=[FileTool("edit.py")],  # ❌ Passing the Toolbox instance directly
    system_prompt="You are a coding assistant that can make edits to a single file.",
    debug=True,
)
```

- Symptom: Model output shows malformed tool calls (e.g. `", "parameters": {}}`) or ignores the tool entirely.
Working Example
```python
from bespoken import chat
from bespoken.tools import FileTool

# Instantiate the toolbox
file_tool = FileTool("edit.py")

# Manually extract only the methods you want to expose as tools
tool_methods = []
for name in dir(file_tool):
    if name in ["read_file", "replace_in_file"]:
        method = getattr(file_tool, name)
        if callable(method):
            tool_methods.append(method)

print("Using tool methods:", [m.__name__ for m in tool_methods])

chat(
    model_name="mistral-small3.2:latest",
    tools=tool_methods,  # ✅ Passing individual callable methods
    system_prompt=(
        "You are a coding assistant that can make edits to a single file. "
        "Use the available tools when asked about files."
    ),
    debug=True,
)
```

- Result: The Ollama model correctly invokes `read_file` or `replace_in_file` and returns the tool's output.
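Until this is fixed upstream, the manual extraction above can be wrapped in a small reusable helper. This is a hypothetical utility, not part of Bespoken; it just collects the public callable attributes of any toolbox-style object:

```python
def extract_tool_methods(toolbox, names=None):
    """Return the callable methods of `toolbox`, optionally filtered by `names`."""
    methods = []
    for name in dir(toolbox):
        if name.startswith("_"):
            continue  # skip private and dunder attributes
        if names is not None and name not in names:
            continue  # keep only explicitly requested methods
        attr = getattr(toolbox, name)
        if callable(attr):
            methods.append(attr)
    return methods

# e.g. tools=extract_tool_methods(FileTool("edit.py"), names=["read_file", "replace_in_file"])
```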
Likely Ollama-Specific Issue
- The same pattern (passing a toolbox instance) works when you use the `llm` CLI directly:

  ```
  llm -m mistral-small3.2:latest -T llm_time "What time is it?"
  ```

- This suggests Bespoken's adapter for Ollama does not automatically unpack `Toolbox` instances into individual tool descriptors.
Expected Behavior
Bespoken should accept either:
- A list of `Toolbox` instances, unpacking them internally into the required callables.
- Or clearly document that only callables may be passed, not toolbox objects.
Proposed Solutions
- **Automatic Unpacking**: Detect when a user passes a `Toolbox` instance to `bespoken.chat(tools=…)` and internally call its `method_tools()` (or equivalent) to extract the callable methods (see the sketch after this list).
- **Documentation Update**: Make it explicit in the Bespoken README and API docs that `tools` must be a list of callables, not toolbox instances, when using Ollama models.
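For the first option, here is a minimal sketch of the unpacking step, assuming a `method_tools()` hook exists on `Toolbox` as noted above; the function name and placement are illustrative only:

```python
def _flatten_tools(tools):
    """Flatten any Toolbox instances in `tools` into individual callables."""
    flat = []
    for tool in tools:
        if callable(tool):
            flat.append(tool)  # plain function or bound method: pass through unchanged
        elif hasattr(tool, "method_tools"):
            flat.extend(tool.method_tools())  # unpack the toolbox into its tool methods
        else:
            raise TypeError(f"Unsupported entry in tools: {tool!r}")
    return flat
```

`bespoken.chat()` could call something like this on the incoming `tools` list before handing anything to the Ollama adapter, which would make both calling styles shown above behave the same way.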
Environment
- OS: macOS
- Python: 3.13.x
- Virtual Env Manager: `uv`
- Bespoken Version: 0.2.2
- llm-ollama Plugin Version: latest
- Tested Ollama Models: `mistral-small3.2:latest`, `llama3.1:8b`