Function calling

Function calling allows the model to select a function and suggest a call to it, with arguments filled in from user input, making it a building block for agentic workflows. By defining a set of functions, or tools, you give the model the context it needs to recommend a function and populate its arguments.

How function calling works

Function calling lets your application combine model reasoning with real-time data and structured outputs. A typical interaction has three steps:

  1. Submit a query with tools: Start by submitting a user query along with available tools defined in JSON Schema. This schema specifies the parameters for each function.
  2. The model processes and suggests: The model interprets the query, assesses intent, and decides whether to respond conversationally or suggest a function call. If it suggests a call, it fills in the arguments based on the schema.
  3. Receive a model response: You’ll get a response from the model, which may include a function call suggestion. Execute the function with the provided arguments and return the result to the model for further interaction.
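
Concretely, one round trip can be pictured as a growing message list. The sketch below uses the OpenAI-compatible Chat Completions message format; the get_weather tool and the id and argument values are illustrative placeholders:

# Illustrative message sequence for one tool-call round trip.
messages = [
    # 1. User query, sent together with the tool definitions.
    {"role": "user", "content": "What's the weather like in Paris today?"},
    # 2. Model reply: a tool-call suggestion instead of text.
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_123",  # placeholder id
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'}
        }]
    },
    # 3. Your result, returned to the model as a tool message.
    {
        "role": "tool",
        "tool_call_id": "call_123",
        "content": '{"city": "Paris", "temperature_celsius": 25}'
    }
]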

Supported models

  • Meta-Llama-3.1-8B-Instruct
  • Meta-Llama-3.3-70B-Instruct
  • DeepSeek-V3-0324
  • Llama-4-Maverick-17B-128E-Instruct
  • Qwen3-32B
  • gpt-oss-120b
  • DeepSeek-V3.1

Example usage

The sections below walk through each step of using function calling; a complete end-to-end example follows the last step.

Step 1: Define the function schema

Define a JSON schema for your function. You will need to specify:

  • The name of the function.
  • A description of what it does.
  • The parameters, their data types, and descriptions.
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Gets the current weather information for a given city.",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {
          "type": "string",
          "description": "Name of the city to get weather information for."
        }
      },
      "required": ["city"]
    }
  }
}

Step 2: Configure function calling in your request

When sending a request, include the function definition in the tools parameter and set tool_choice to one of the following:

  • auto : lets the model choose between generating a message or calling a function. This is the default when the field is not specified.
  • required : forces the model to generate a function call. The model will always select one or more functions to call.
  • To enforce a specific function call, set tool_choice = {"type": "function", "function": {"name": "get_weather"}}. This ensures the model will only use the specified function.
import openai
import random
import json

# Initialize the client with the scx base URL and API key
client = openai.OpenAI(
    base_url="https://api.scx.ai/v1",
    api_key="your-scx-api-key"
)

MODEL = 'Meta-Llama-3.3-70B-Instruct'

def get_weather(city: str) -> dict:
    """
    Fake weather lookup: returns a random temperature between 20°C and 50°C.
    """
    temp = random.randint(20, 50)
    return {
        "city": city,
        "temperature_celsius": temp
    }

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a location. The user should supply a location first.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

messages = [{"role": "user", "content": "What's the weather like in Paris today?"}]

completion = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    tools=tools
)

print(completion)
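
The request above leaves tool_choice unset, so it defaults to auto. To force a call to a specific function instead, you could pass the tool_choice form described earlier:

# Variant: force the model to call get_weather rather than answer in text.
completion = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}}
)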

Step 3: Handle tool calls

If the model chooses to call a function, you will find tool_calls in the response. Extract the function call details and execute the corresponding function with the provided parameters.

tool_call = completion.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

result = get_weather(args["city"])
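
In practice the model may answer in plain text, in which case tool_calls is None, and it may suggest more than one call. A minimal defensive sketch, continuing from the request above and dispatching by function name:

# Defensive handling: check for tool calls and dispatch each by name.
available_functions = {"get_weather": get_weather}  # registry of local tools

tool_calls = completion.choices[0].message.tool_calls
if tool_calls:
    for tool_call in tool_calls:
        func = available_functions[tool_call.function.name]
        args = json.loads(tool_call.function.arguments)
        result = func(**args)
else:
    # No tool call suggested; the model answered conversationally.
    print(completion.choices[0].message.content)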

Step 4: Provide function results back to the model

Once you have computed the result, pass it back to the model to continue the conversation or confirm the output.

messages.append(completion.choices[0].message)  # append model's function call message
messages.append({                               # append result message
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps(result)               # serialize the result as JSON
})

completion_2 = client.chat.completions.create(
    model=MODEL,
    messages=messages
)
print(completion_2.choices[0].message.content)
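
Note that this step appends the SDK message object returned by the model directly to the history; the end-to-end example below builds the equivalent assistant message as a plain dictionary instead. Both forms work with the OpenAI Python client.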

Step 5: Example output

An example output is shown below.

The current weather in Paris is 25 degrees Celsius.

End-to-end example

import openai
import random
import json

# Define the OpenAI client
client = openai.OpenAI(
    base_url="https://api.scx.ai/v1",
    api_key="your-scx-api-key"
)

MODEL = 'Meta-Llama-3.3-70B-Instruct'

def get_weather(city: str) -> dict:
    """
    Fake weather lookup: returns a random temperature between 20°C and 50°C.
    """
    temp = random.randint(20, 50)
    return {
        "city": city,
        "temperature_celsius": temp
    }

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the weather for a location. The user should supply a location first.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

messages = [{"role": "user", "content": "What's the weather like in Paris today?"}]

completion = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    tools=tools
)

print(completion)

tool_call = completion.choices[0].message.tool_calls[0]
args = json.loads(tool_call.function.arguments)

result = get_weather(args["city"])

messages.append({                               # append model's function call message
    "role": "assistant",
    "content": None,
    "tool_calls": [tool_call]
})

messages.append({                               # append result message
    "role": "tool",
    "tool_call_id": tool_call.id,
    "content": json.dumps(result)
})

completion_2 = client.chat.completions.create(
    model=MODEL,
    messages=messages
)
print(completion_2.choices[0].message.content)
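
The example above handles a single tool call. For multi-turn or agentic use, you would typically keep executing suggested calls until the model responds with plain text. A minimal sketch of such a loop, reusing the client, MODEL, tools, and get_weather defined above:

# Keep executing suggested tool calls until the model answers in plain text.
available_functions = {"get_weather": get_weather}

messages = [{"role": "user", "content": "What's the weather like in Paris today?"}]

while True:
    completion = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        tools=tools
    )
    message = completion.choices[0].message
    if not message.tool_calls:
        print(message.content)  # final conversational answer
        break
    messages.append(message)  # keep the assistant's tool-call turn in history
    for tool_call in message.tool_calls:
        func = available_functions[tool_call.function.name]
        args = json.loads(tool_call.function.arguments)
        messages.append({
            "role": "tool",
            "tool_call_id": tool_call.id,
            "content": json.dumps(func(**args))
        })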