
Toolbox Instance Fails to Invoke Tools with Ollama Models; Explicit Method Extraction Works #3

@ddarmon

First: super-cool idea and tool! Really looking forward to playing around with this more.

I may not be using Bespoken correctly out of the box, but I thought I'd raise this issue in case it isn't just me.

Description

When using Bespoken with Ollama-backed models (for example, mistral-small3.2:latest), passing a Toolbox instance directly into the tools list does not work. However, if you extract the individual callable methods from that toolbox and pass those instead, tool invocation succeeds. This appears to be specific to the Ollama integration in Bespoken.


Non-Working Example

from bespoken import chat
from bespoken.tools import FileTool

chat(
    model_name="mistral-small3.2:latest",
    tools=[ FileTool("edit.py") ],      # ❌ Passing the Toolbox instance directly
    system_prompt="You are a coding assistant that can make edits to a single file.",
    debug=True,
)
  • Symptom: The model output shows malformed tool calls (e.g. truncated fragments like ", "parameters": {}}) or ignores the tool entirely.

Working Example

from bespoken import chat
from bespoken.tools import FileTool

# Instantiate the toolbox
file_tool = FileTool("edit.py")

# Manually extract only the methods you want to expose as tools
tool_methods = []
for name in dir(file_tool):
    if name in ["read_file", "replace_in_file"]:
        method = getattr(file_tool, name)
        if callable(method):
            tool_methods.append(method)

print("Using tool methods:", [m.__name__ for m in tool_methods])

chat(
    model_name="mistral-small3.2:latest",
    tools=tool_methods,                # ✅ Passing individual callable methods
    system_prompt=(
        "You are a coding assistant that can make edits to a single file. "
        "Use the available tools when asked about files."
    ),
    debug=True,
)
  • Result: The Ollama model correctly invokes read_file or replace_in_file and returns the tool’s output.

Likely Ollama-Specific Issue

  • The same pattern (passing a toolbox instance) works when you use the llm CLI directly:

    llm -m mistral-small3.2:latest -T llm_time "What time is it?"
  • This suggests Bespoken’s adapter for Ollama does not automatically unpack Toolbox instances into individual tool descriptors. A quick Python-level check to localize the bug is sketched below.
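
To help localize the issue, the following check should work at the llm Python API level if the CLI behavior carries over. This is only a sketch: it assumes FileTool subclasses llm.Toolbox (so llm itself can unpack it) and that mistral-small3.2:latest is available through llm-ollama.

import llm
from bespoken.tools import FileTool

model = llm.get_model("mistral-small3.2:latest")

# If llm unpacks the Toolbox itself, this chain should be able to
# invoke read_file without any manual method extraction.
response = model.chain(
    "Read edit.py and summarize it.",
    tools=[FileTool("edit.py")],
)
print(response.text())

If this works while bespoken.chat(tools=[FileTool(...)]) does not, the unpacking gap is in Bespoken rather than in llm-ollama.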


Expected Behavior

Bespoken should either:

  1. Accept a list of Toolbox instances, unpacking them internally into the required callables, or
  2. Clearly document that only callables may be passed, not toolbox objects.

Proposed Solutions

  1. Automatic Unpacking
    Detect when a user passes a Toolbox instance to bespoken.chat(tools=…) and internally call its method_tools() (or equivalent) to extract the callable methods. A sketch of this is given after the list below.

  2. Documentation Update
    Make it explicit in the Bespoken README and API docs that tools must be a list of callables, not toolbox instances, when using Ollama models.
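
The unpacking could look something like the sketch below. This is not Bespoken’s actual code: it assumes Bespoken’s tool classes subclass llm.Toolbox, that method_tools() behaves as it does in the llm library, and _flatten_tools is a hypothetical helper name.

import llm

def _flatten_tools(tools):
    """Hypothetical helper: expand Toolbox instances into individual tools."""
    flattened = []
    for tool in tools:
        if isinstance(tool, llm.Toolbox):
            # method_tools() yields one tool per public method of the toolbox
            flattened.extend(tool.method_tools())
        else:
            flattened.append(tool)
    return flattened

# Inside bespoken.chat(), this would run before tools reach the Ollama adapter:
# tools = _flatten_tools(tools)

With this in place, the non-working example above would behave like the working one, since FileTool("edit.py") would be expanded into its callable methods automatically.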


Environment

  • OS: macOS

  • Python: 3.13.x

  • Virtual Env Manager: uv

  • Bespoken Version: 0.2.2

  • llm-ollama Plugin Version: latest

  • Tested Ollama Models:

    • mistral-small3.2:latest
    • llama3.1:8b
