Tool Calling (Function Calling)
Enable models to call external functions and APIs through structured tools.
Overview
Tool calling lets the model request external data or actions. You define tools and the model returns tool calls with JSON arguments.
Your app executes the tool and returns results as tool messages so the model can respond.
Common use cases
- Real-time data (weather, prices, metrics).
- Search and retrieval for fresh information.
- Calculations and transformations.
- Internal APIs, databases, and business logic.
Model availability varies: not every model supports tool calling. Check the /v1/models endpoint for live support.
Defining tools
Each tool is a JSON object with type: "function" and a function definition.
Arguments are defined with JSON Schema under parameters. Required fields: name, description, parameters.
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get current weather for a location",
    "parameters": {
      "type": "object",
      "properties": {
        "location": {
          "type": "string",
          "description": "City name, e.g., \"Tokyo\""
        },
        "unit": {
          "type": "string",
          "enum": ["celsius", "fahrenheit"],
          "description": "Temperature unit"
        }
      },
      "required": ["location"]
    }
  }
}
Definition tips
- Use short, stable names in snake_case and avoid spaces.
- Write explicit descriptions and include input hints.
- Add enums, min/max, and required fields to narrow arguments.
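The tips above can be enforced mechanically. A minimal sketch of a definition linter in Python (the checks and the `check_tool_definition` helper are illustrative, not part of any official SDK):

```python
import re

def check_tool_definition(tool: dict) -> list[str]:
    """Return a list of problems with a tool definition, per the tips above."""
    problems = []
    fn = tool.get("function", {})
    name = fn.get("name", "")
    # Short, stable names in snake_case, no spaces
    if not re.fullmatch(r"[a-z][a-z0-9_]*", name):
        problems.append(f"name {name!r} is not snake_case")
    # Explicit description with input hints
    if len(fn.get("description", "")) < 10:
        problems.append("description is missing or too short")
    params = fn.get("parameters", {})
    if params.get("type") != "object":
        problems.append('parameters.type should be "object"')
    # Required fields narrow the argument space
    if not params.get("required"):
        problems.append("no required fields declared")
    return problems
```

Running this against the get_weather definition above returns an empty list; a tool named "Get Weather" with no description would fail multiple checks.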
Making tool-enabled requests
Attach the tools array and optionally choose a tool policy with tool_choice.
{
  "model": "qwen/qwen3-235b-a22b-instruct-2507-fp8",
  "messages": [
    { "role": "user", "content": "What's the weather in Tokyo?" }
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": { "type": "string" },
            "unit": { "type": "string", "enum": ["celsius", "fahrenheit"] }
          },
          "required": ["location"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
Request fields are documented in the chat completions endpoint of the API reference.
tool_choice options
Control when the model should use tools.
- auto: the model decides whether to call a tool.
- none: disable tool use for this request.
- required: force the model to call at least one tool.
- { "type": "function", "function": { "name": "tool_name" } }: force a specific tool by name.
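As a sketch, forcing a specific tool amounts to attaching the object form of tool_choice to the request body (payload shape only, not a live request; tool definitions are omitted for brevity):

```python
def build_request(messages: list, tools: list, tool_choice="auto") -> dict:
    """Assemble a chat completions request body with a tool policy.
    tool_choice may be "auto", "none", "required", or a specific-tool object."""
    return {
        "model": "qwen/qwen3-235b-a22b-instruct-2507-fp8",
        "messages": messages,
        "tools": tools,
        "tool_choice": tool_choice,
    }

# Force the model to call get_weather on this turn:
forced = build_request(
    [{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=[],  # tool definitions elided for brevity
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
```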
Handling tool calls
When a tool is needed, the assistant message includes tool_calls and finish_reason: "tool_calls".
{
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": null,
        "tool_calls": [
          {
            "id": "call_abc123",
            "type": "function",
            "function": {
              "name": "get_weather",
              "arguments": "{\"location\":\"Tokyo\",\"unit\":\"celsius\"}"
            }
          }
        ]
      },
      "finish_reason": "tool_calls"
    }
  ]
}
Returning tool results
Send tool outputs back as role: "tool" messages that reference the tool_call_id.
{
  "role": "tool",
  "tool_call_id": "call_abc123",
  "content": "{\"location\":\"Tokyo\",\"temp_c\":24,\"conditions\":\"Cloudy\"}"
}
Output format: content is a string, so serialize structured results (e.g., as JSON) before returning them.
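When a tool raises, it is usually better to return the error as the tool message content than to crash the flow, so the model can explain or retry. A sketch (the `safe_tool_result` helper is illustrative, not part of any SDK):

```python
import json

def safe_tool_result(call_id: str, fn, args: dict) -> dict:
    """Run a tool and wrap the outcome as a role:"tool" message.
    Errors become structured content instead of aborting the conversation."""
    try:
        content = json.dumps(fn(**args))
    except Exception as exc:  # let the model see the failure and recover
        content = json.dumps({"error": str(exc)})
    return {"role": "tool", "tool_call_id": call_id, "content": content}
```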
Complete flow example
End-to-end flow: define tools, handle tool calls, return results, and ask for the final response.
from openai import OpenAI
import json

client = OpenAI(
    base_url="https://api.gonkagate.com/v1",
    api_key="gp-your-api-key"
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather for a location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
                },
                "required": ["location"]
            }
        }
    }
]

def get_weather(location: str, unit: str = "celsius"):
    # Replace with your real API call
    return {"location": location, "unit": unit, "temp": 24, "conditions": "Cloudy"}

messages = [{"role": "user", "content": "What's the weather in Tokyo?"}]

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-instruct-2507-fp8",
    messages=messages,
    tools=tools,
    tool_choice="auto"
)

tool_calls = response.choices[0].message.tool_calls or []
if tool_calls:
    # Add the assistant message that contains the tool calls
    messages.append(response.choices[0].message)
    for call in tool_calls:
        args = json.loads(call.function.arguments or "{}")
        result = get_weather(**args)
        messages.append({
            "role": "tool",
            "tool_call_id": call.id,
            "content": json.dumps(result)
        })
    final_response = client.chat.completions.create(
        model="qwen/qwen3-235b-a22b-instruct-2507-fp8",
        messages=messages,
        tools=tools
    )
    print(final_response.choices[0].message.content)
Parallel tool calls
Models can request multiple tools in one turn. Execute them in parallel and return one tool message per call.
const toolCalls = response.choices[0]?.message?.tool_calls ?? [];
messages.push(response.choices[0].message);

const results = await Promise.all(
  toolCalls.map(async (call) => {
    const args = JSON.parse(call.function.arguments || "{}");
    const output = await runTool(call.function.name, args);
    return {
      role: "tool",
      tool_call_id: call.id,
      content: JSON.stringify(output)
    };
  })
);
messages.push(...results);
Streaming with tools
In streaming mode, tool_calls arrive as deltas. Accumulate partial arguments per tool call before parsing JSON.
const stream = await client.chat.completions.create({
  model: "qwen/qwen3-235b-a22b-instruct-2507-fp8",
  messages,
  tools,
  stream: true
});

const toolCalls: Record<string, { name?: string; arguments: string }> = {};
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta;
  const calls = delta?.tool_calls ?? [];
  for (const call of calls) {
    const id = call.id ?? String(call.index);
    const current = toolCalls[id] ?? { arguments: "" };
    if (call.function?.name) {
      current.name = call.function.name;
    }
    if (call.function?.arguments) {
      current.arguments += call.function.arguments;
    }
    toolCalls[id] = current;
  }
}

const finalized = Object.values(toolCalls).map((call) => ({
  name: call.name ?? "unknown",
  args: JSON.parse(call.arguments || "{}")
}));
Arguments arrive in chunks: do not parse a tool call's arguments until the stream has finished delivering them.
Best practices
Build reliable tool flows with these safeguards:
- Keep the tool list small and relevant to the task.
- Validate and sanitize arguments before execution.
- Use timeouts and error handling for external APIs.
- Return compact, structured outputs to the model.
- Limit tool call depth to avoid loops.
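The last point, limiting call depth, can be enforced with a bounded loop. A sketch where `complete` and `run_tool` are stand-ins for your client call and tool dispatcher (both injected, so the loop stays framework-agnostic):

```python
import json

MAX_TOOL_ROUNDS = 5  # guard against models that keep requesting tools

def run_with_tools(complete, run_tool, messages: list) -> str:
    """Drive a tool-calling conversation with a hard depth limit.
    `complete(messages)` returns an assistant message dict;
    `run_tool(name, args)` executes one tool and returns its result."""
    for _ in range(MAX_TOOL_ROUNDS):
        reply = complete(messages)
        messages.append(reply)
        calls = reply.get("tool_calls") or []
        if not calls:
            return reply.get("content") or ""
        for call in calls:
            args = json.loads(call["function"]["arguments"] or "{}")
            result = run_tool(call["function"]["name"], args)
            messages.append({
                "role": "tool",
                "tool_call_id": call["id"],
                "content": json.dumps(result),
            })
    raise RuntimeError("tool call depth limit reached")
```

Stopping with an error after MAX_TOOL_ROUNDS turns an infinite tool loop into a visible, debuggable failure.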
Model compatibility
Tool calling support depends on the model. Verify capabilities before shipping.
Use the live Models page or check GET /models in the API reference for current availability.
- Models page: capabilities badges and notes for each model give quick visual confirmation.
- GET /v1/models: capabilities fields in the response enable programmatic verification in CI.
- Fallback: handle missing tool support with a no-tools path to avoid runtime failures.
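Programmatic verification might look like the following sketch. The `capabilities` field name and shape are assumptions about the /v1/models response, so the code treats a missing field as "unknown" rather than "unsupported" and should be adjusted to the live schema:

```python
from typing import Optional

def supports_tools(models_payload: dict, model_id: str) -> Optional[bool]:
    """Inspect a GET /v1/models response for tool-calling support.
    Returns True/False when the entry declares capabilities, None when unknown.
    (`capabilities`/`tool_calling` are hypothetical field names.)"""
    for entry in models_payload.get("data", []):
        if entry.get("id") == model_id:
            caps = entry.get("capabilities")
            if caps is None:
                return None  # schema is silent; fall back to a no-tools path
            return bool(caps.get("tool_calling"))
    return False  # model not listed at all
```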
Troubleshooting
Common issues and fixes:
- Tool never called → tighten descriptions, set tool_choice to "required", or switch to a tool-capable model.
- Invalid JSON arguments → re-ask the model or validate/repair before execution.
- tool_call_id mismatch → ensure each tool result references the correct id and is included in order.
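For the invalid-JSON case, a small repair step before execution often suffices. A sketch that handles the two common failure modes, fenced output and surrounding prose (the `parse_tool_arguments` helper is illustrative):

```python
import json
import re

def parse_tool_arguments(raw: str) -> dict:
    """Best-effort parse of a tool call's arguments string.
    Tries strict JSON, then strips markdown fences, then extracts the
    outermost {...} span. Raises ValueError if nothing parses."""
    candidates = [raw]
    # Strip a leading ``` or ```json fence and a trailing ```
    candidates.append(re.sub(r"^```(?:json)?\s*|\s*```$", "", raw.strip()))
    match = re.search(r"\{.*\}", raw, flags=re.DOTALL)
    if match:
        candidates.append(match.group(0))
    for candidate in candidates:
        try:
            return json.loads(candidate)
        except (json.JSONDecodeError, TypeError):
            continue
    raise ValueError(f"unparseable tool arguments: {raw!r}")
```

If repair fails, fall back to re-asking the model rather than executing a guess.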
Key Takeaways
- Define tools with clear JSON Schema and descriptions.
- Execute tool_calls and return role: "tool" messages with matching ids.
- Verify model support before production.