# Tool Calling

Infron supports OpenAI-compatible function calling, allowing models to call tools and functions that you define. This follows the same specification as the [OpenAI Function Calling API](https://platform.openai.com/docs/guides/function-calling).

### Basic tool calls

{% tabs %}
{% tab title="Python" %}

```python
from openai import OpenAI
 
client = OpenAI(
    api_key='<API_KEY>',
    base_url='https://llm.onerouter.pro/v1'
)
 
tools = [
    {
        'type': 'function',
        'function': {
            'name': 'get_weather',
            'description': 'Get the current weather in a given location',
            'parameters': {
                'type': 'object',
                'properties': {
                    'location': {
                        'type': 'string',
                        'description': 'The city and state, e.g. San Francisco, CA'
                    },
                    'unit': {
                        'type': 'string',
                        'enum': ['celsius', 'fahrenheit'],
                        'description': 'The unit for temperature'
                    }
                },
                'required': ['location']
            }
        }
    }
]
 
completion = client.chat.completions.create(
    model='claude-sonnet-4-5@20250929',
    messages=[
        {
            'role': 'user',
            'content': 'What is the weather like in San Francisco?'
        }
    ],
    tools=tools,
    tool_choice='auto',
    stream=False,
)
 
print('Assistant:', completion.choices[0].message.content)
print('Tool calls:', completion.choices[0].message.tool_calls)
```

{% endtab %}

{% tab title="TypeScript" %}

```typescript
import OpenAI from 'openai';
 
const openai = new OpenAI({
  apiKey: '<API_KEY>',
  baseURL: 'https://llm.onerouter.pro/v1',
});
 
const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: 'function',
    function: {
      name: 'get_weather',
      description: 'Get the current weather in a given location',
      parameters: {
        type: 'object',
        properties: {
          location: {
            type: 'string',
            description: 'The city and state, e.g. San Francisco, CA',
          },
          unit: {
            type: 'string',
            enum: ['celsius', 'fahrenheit'],
            description: 'The unit for temperature',
          },
        },
        required: ['location'],
      },
    },
  },
];
 
const completion = await openai.chat.completions.create({
  model: 'claude-sonnet-4-5@20250929',
  messages: [
    {
      role: 'user',
      content: 'What is the weather like in San Francisco?',
    },
  ],
  tools: tools,
  tool_choice: 'auto',
  stream: false,
});
 
console.log('Assistant:', completion.choices[0].message.content);
console.log('Tool calls:', completion.choices[0].message.tool_calls);
```

{% endtab %}
{% endtabs %}

**Controlling tool selection**

By default, `tool_choice` is `'auto'`, which lets the model decide whether and when to call a tool. You can also:

* Set `tool_choice` to `'none'` to disable tool calls
* Force a specific tool with `tool_choice: { type: 'function', function: { name: 'your_function_name' } }` (see the sketch after this list)
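For example, reusing the `client` and `tools` objects from the Python tab above, forcing a call to `get_weather` might look like the following sketch:

```python
# Force the model to call get_weather rather than letting it decide.
# Assumes the `client` and `tools` objects defined in the Python example above.
completion = client.chat.completions.create(
    model='claude-sonnet-4-5@20250929',
    messages=[
        {'role': 'user', 'content': 'What is the weather like in San Francisco?'}
    ],
    tools=tools,
    tool_choice={'type': 'function', 'function': {'name': 'get_weather'}},
)

print(completion.choices[0].message.tool_calls)
```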

**Tool call response format**

When the model makes tool calls, the response includes tool call information:

```json
{
  "id": "msg_bdrk_01C8nvCh9VwSAnSuZZG1crX3",
  "choices": [
    {
      "finish_reason": "tool_calls",
      "index": 0,
      "logprobs": null,
      "message": {
        "content": "",
        "refusal": null,
        "role": "assistant",
        "annotations": null,
        "audio": null,
        "function_call": null,
        "tool_calls": [
          {
            "id": "toolu_bdrk_01MDzkbNZ7Vw6UQ51uEB9LbB",
            "function": {
              "arguments": "{\"location\":\"San Francisco, CA\"}",
              "name": "get_weather"
            },
            "type": "function"
          }
        ]
      }
    }
  ],
  "created": 1771655880,
  "model": "claude-sonnet-4-5@20250929",
  "object": "chat.completion",
  "service_tier": null,
  "system_fingerprint": null,
  "usage": {
    "completion_tokens": 56,
    "prompt_tokens": 615,
    "total_tokens": 671,
    "completion_tokens_details": null,
    "prompt_tokens_details": {
      "audio_tokens": null,
      "cached_tokens": null
    },
    "input_tokens": 0,
    "output_tokens": 0,
    "server_tool_use": {
      "web_search_requests": ""
    },
    "ttft": 0
  }
}
```
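After the model returns tool calls, your application runs the corresponding function and sends the result back in a `tool` role message so the model can produce a final answer. Below is a minimal sketch of that round trip, assuming the `client`, `tools`, and `completion` objects from the Python example above and a hypothetical local `get_weather` implementation:

```python
import json

def get_weather(location, unit='celsius'):
    # Hypothetical local implementation; replace with a real weather lookup.
    return {'location': location, 'temperature': 18, 'unit': unit}

message = completion.choices[0].message
tool_call = message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

# Run the tool locally, then return its output in a `tool` message
# referencing the tool call id, so the model can finish its answer.
follow_up = client.chat.completions.create(
    model='claude-sonnet-4-5@20250929',
    messages=[
        {'role': 'user', 'content': 'What is the weather like in San Francisco?'},
        message,  # the assistant message that contains the tool call
        {
            'role': 'tool',
            'tool_call_id': tool_call.id,
            'content': json.dumps(get_weather(**args)),
        },
    ],
    tools=tools,
)

print('Assistant:', follow_up.choices[0].message.content)
```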

### Next steps

{% content-ref url="https://app.gitbook.com/s/Z9C9AjT7j46HAcQrOVWw/features/tool-calling" %}
[Tool Calling](https://app.gitbook.com/s/Z9C9AjT7j46HAcQrOVWw/features/tool-calling)
{% endcontent-ref %}

