(1/6) OpenAI API Mastery: Innovating with GPT-4 Turbo and DALL·E 3 – Parallel Function Calling

Welcome to part 1 of the course! My name is Dirk van Meerveld and I will be your host and guide for this series in which we’re going to be exploring all the new features of the OpenAI APIs and what we can do with them.

To get started we’re going to be looking at the OpenAI function calling updates, especially the new ability to call multiple functions in parallel. We’ll also discuss some of the important syntax changes to go along with this and other new functionality in the API.

Let’s create a new folder and file in our base directory to get started.

📁FINX_OPENAI_UPDATES (root project folder)
    📁1_Parallel_function_calling
        📄function_descriptions.py

Function Descriptions

Open up function_descriptions.py.

Here we’ll describe our functions. Each description is a plain-text object that tells ChatGPT what a function does, what name it has, and what parameters it needs as input.

The only purpose of these objects is for ChatGPT to know what functions are available, when it should use a particular function, and what arguments it needs to provide to call a specific function. As such, they are not the functions themselves, which we have separately, but merely descriptions of the functions.

Let’s get started:

describe_get_current_weather = {
    "type": "function",
    "function": {

Note that the syntax has changed slightly from the old function calling format. We now wrap the entire object inside a "function" key and also add a "type": "function" key-value pair at the outermost level.

        "name": "get_current_weather",
        "description": "This function provides the current weather in a specific location.",

The name we provide here is the name ChatGPT will use when it wants to call this particular function, and the description tells ChatGPT what the purpose of this function is and when it should call it.

        "parameters": {

Here we describe what parameters this function needs to be able to run.

            "type": "object",

The parameters as a whole are an object, and as properties it needs a location, which is of type string. We also provide a description of what this parameter should contain, namely the name of a city.

Note the required key, which is an array of the required parameter names (you can list multiple parameters here).

            "properties": {
                "location": {
                    "type": "string",
                    "description": "The location as a city name, e.g. Amsterdam.",
                },
            },
            "required": ["location"],
        },
    },
}

So the whole description is:

describe_get_current_weather = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "This function provides the current weather in a specific location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The location as a city name, e.g. Amsterdam.",
                },
            },
            "required": ["location"],
        },
    },
}

Now we have the second one, which is much the same:

describe_get_weather_forecast = {
    "type": "function",
    "function": {
        "name": "get_weather_forecast",
        "description": "This function provides the weather forecast in a specific location for a specified number of days.",

Here we have multiple parameters. Note that only one of them is required.

        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The location as a city name, e.g. Amsterdam.",
                },
                "days": {
                    "type": "integer",
                    "description": "The number of days to forecast, between 1 and 14.",
                },
            },
            "required": ["location"],
        },
    },
}
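By the way, if we had wanted the model to always supply the number of days as well, we would simply list both names in the required array, like this (a hypothetical variant, not what we’re doing here):

            "required": ["location", "days"],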

The entire function_descriptions.py file now looks like this:

describe_get_current_weather = {
    "type": "function",
    "function": {
        "name": "get_current_weather",
        "description": "This function provides the current weather in a specific location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The location as a city name, e.g. Amsterdam.",
                },
            },
            "required": ["location"],
        },
    },
}


describe_get_weather_forecast = {
    "type": "function",
    "function": {
        "name": "get_weather_forecast",
        "description": "This function provides the weather forecast in a specific location for a specified number of days.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The location as a city name, e.g. Amsterdam.",
                },
                "days": {
                    "type": "integer",
                    "description": "The number of days to forecast, between 1 and 14.",
                },
            },
            "required": ["location"],
        },
    },
}

Prompt Setup

Ok go ahead and close that, and create another file in the '1_Parallel_function_calling' folder called 'prompt_setup.py'. This is where we’ll set up the prompt for ChatGPT to use.

📁FINX_OPENAI_UPDATES (root project folder)
    📁1_Parallel_function_calling
        📄function_descriptions.py
        📄prompt_setup.py

Inside put the following variable:

current_and_forecast_setup = "You are a regular ChatGPT chatbot, just like normal, however, you also have access to some functions that can be called if you need them. One will provide the current weather and one will provide the weather forecast. IF THE USER DOES NOT ASK A WEATHER RELATED QUESTION JUST ANSWER THEM AS NORMAL WITHOUT CALLING ANY FUNCTIONS."

This is just a basic prompt setup telling the model it has functions available but also emphasizing that we don’t want to use them if they are not needed to answer the question. You can always play around with the specific wording and details of this prompt to see what works best for you.

We put it in a separate file to keep large string variables out of the main code and keep it readable; in a larger project, the setup prompt would likely be longer and exist in several versions.

Weather API

Now save and close that file as well. It’s time to create the actual functions that we’re going to be giving to ChatGPT to call. Create a new file in the same folder called 'weather.py':

📁FINX_OPENAI_UPDATES (root project folder)
    📁1_Parallel_function_calling
        📄function_descriptions.py
        📄prompt_setup.py
        📄weather.py

First, sign up for a free account on weatherapi.com.

They will give you the pro tier free for 14 days, after which your account automatically switches back to the free tier. You don’t have to provide any payment or credit card information, so don’t worry: you can use this API for free without any hassle.

Now create a '.env' file in the base directory of your project:

📁FINX_OPENAI_UPDATES (root project folder)
    📁1_Parallel_function_calling
        📄function_descriptions.py
        📄prompt_setup.py
        📄weather.py
    📄.env

And inside this file put both your weatherapi API key and OpenAI API key using the following syntax, making sure not to use quotes or spaces:

OPENAI_API_KEY=supersecretopenaiapikeygoeshere
WEATHER_API_KEY=yoursupersecretweatherapikeygoeshere

Save and close that file so we can load our secret API keys from it later. Now open a terminal and run the following command:

pip install python-decouple

Writing the Functions

This library will allow us to load our API keys from the .env file we just created. Now open up weather.py (which is still empty) and put the following code inside:

from decouple import config
from json import dumps
import requests

The config function will allow us to easily read the contents of our .env file, so we can load our API keys without hard-coding their values. The json module is part of Python’s standard library and provides methods for working with JSON data.

The dumps function is used to convert a Python object into a JSON string, which basically holds the same information but in a string format. This is useful as ChatGPT cannot take Python objects as input, but it can take strings.
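As a quick illustration of what dumps does (with made-up data):

from json import dumps

weather = {"temp_c": 1.0, "condition": "Clear"}
as_string = dumps(weather)
print(as_string)        # {"temp_c": 1.0, "condition": "Clear"}
print(type(as_string))  # <class 'str'>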

Finally, the requests module is a Python library used for making HTTP requests, providing us with a simple API. We’ll use it to send requests to the weatherapi.com API.

Now we define a simple function below:

def get_current_weather(location) -> str:
    if not location:
        return (
            "Please provide a location and call the get_current_weather_function again."
        )
    API_params = {
        "key": config("WEATHER_API_KEY"),
        "q": location,
        "aqi": "no",
        "alerts": "no",
    }

We use the config function to load the API key from the .env file (make sure the name matches exactly and that the .env file does not contain any spaces).

The q parameter holds the location; aqi (air quality index) and alerts are extra data we don’t need, so we turn them off.
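If you’re curious what requests will do with these parameters, here is a small sketch (with a made-up key) that previews the URL it builds, using requests’ prepare mechanism:

import requests

preview = requests.Request(
    "GET",
    "http://api.weatherapi.com/v1/current.json",
    params={"key": "secret", "q": "Seoul", "aqi": "no", "alerts": "no"},
).prepare()
print(preview.url)
# http://api.weatherapi.com/v1/current.json?key=secret&q=Seoul&aqi=no&alerts=no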

Continue inside the function:

    response: requests.models.Response = requests.get(
        "http://api.weatherapi.com/v1/current.json", params=API_params
    )
    str_response: str = dumps(response.json())
    return str_response

We make a get request, passing in our URL and parameters, and get a response object, which contains the server’s response.

We then convert the response to a dictionary by calling the .json() method, and convert this dictionary to a string using the dumps function we imported above.

This is the whole function:

def get_current_weather(location) -> str:
    if not location:
        return (
            "Please provide a location and call the get_current_weather_function again."
        )
    API_params = {
        "key": config("WEATHER_API_KEY"),
        "q": location,
        "aqi": "no",
        "alerts": "no",
    }
    response: requests.models.Response = requests.get(
        "http://api.weatherapi.com/v1/current.json", params=API_params
    )
    str_response: str = dumps(response.json())
    return str_response

Testing

Give it a quick test run to make sure it’s working. Add the print statement below and run your file:

print(get_current_weather("Seoul"))

You should see something like this in your terminal:

{"location": {"name": "Seoul", "region": "", "country": "South Korea", "lat": 37.57, "lon": 127.0, "tz_id": "Asia/Seoul", "localtime_epoch": 1699705164, "localtime": "2023-11-11 21:19"}, "current": {"last_updated_epoch": 1699704900, "last_updated": "2023-11-11 21:15", "temp_c": 1.0, "temp_f": 33.8, "is_day": 0, "condition": {"text": "Clear", "icon": "//cdn.weatherapi.com/weather/64x64/night/113.png", "code": 1000}, "wind_mph": 6.9, "wind_kph": 11.2, "wind_degree": 330, "wind_dir": "NNW", "pressure_mb": 1029.0, "pressure_in": 30.39, "precip_mm": 0.0, "precip_in": 0.0, "humidity": 55, "cloud": 0, "feelslike_c": -3.1, "feelslike_f": 26.3, "vis_km": 10.0, "vis_miles": 6.0, "uv": 1.0, "gust_mph": 12.1, "gust_kph": 19.4}}

Make sure you comment out the print statement so it won’t run every time we import this file in the future.

Writing the Second Function

Now we’ll create a second function to get the weather forecast.

This one is a bit more complicated, as we need to provide a number of days to forecast. We’ll also need to do some error handling to make sure we receive a valid number of days.

def get_weather_forecast(location, days=7) -> str:
    try:
        days = 1 if days < 1 else 14 if days > 14 else days
    except TypeError:
        days = 7

We take a location and set a default of 7 days.

If the days value is less than 1 we set it to 1, and if it’s more than 14 we set it to 14. If neither condition is true, a valid value was provided and we just use the input argument.

Finally, if some weird non-numeric type gets passed in, the comparison raises a TypeError and we simply default to 7 days.
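If the nested ternary reads awkwardly to you, the same clamp can be written with min and max. This little sketch behaves identically (clamp_days is a hypothetical helper, not part of our code):

def clamp_days(days, default=7):
    try:
        # Clamp to the 1-14 range the API accepts.
        return max(1, min(days, 14))
    except TypeError:
        # Non-numeric input, e.g. a string: fall back to the default.
        return default

print(clamp_days(0))    # 1
print(clamp_days(20))   # 14
print(clamp_days("x"))  # 7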

    params = {
        "key": config("WEATHER_API_KEY"),
        "q": location,
        "days": days,
        "aqi": "no",
        "alerts": "no",
    }

    response: requests.models.Response = requests.get(
        "http://api.weatherapi.com/v1/forecast.json", params=params
    )

The parameters are largely the same, except we now also pass the number of days. The only problem is that the API sends back a lot of data, including hourly data (24 entries per day), which is way too much, so we need to do some filtering:

    response: dict = response.json()
    filtered_response = {}
    filtered_response["location"] = response["location"]
    filtered_response["current"] = response["current"]
    filtered_response["forecast"] = [
        [day["date"], day["day"]] for day in response["forecast"]["forecastday"]
    ]
    return dumps(filtered_response)

First we convert the response to a dictionary. We keep the location and the current weather by copying them from the response into the empty dictionary named filtered_response we just created.

For the forecast, we only want the daily data, as the hourly data would completely overload the response. The line just extracts the data we want, based on the structure of the API’s response.

I don’t want to get too deep into it here, as this course is about OpenAI and not list comprehensions, but basically we extract the date and the day summary from each day in the forecast and put them in a list.
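For reference, inside the function that comprehension is equivalent to this plain loop:

forecast = []
for day in response["forecast"]["forecastday"]:
    # Keep only the date and the daily summary, dropping the hourly data.
    forecast.append([day["date"], day["day"]])
filtered_response["forecast"] = forecast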

Finally, we convert the filtered_response dictionary to a string and return it, without all the hourly data that the API sent to us.

The second function now looks like this:

def get_weather_forecast(location, days=7) -> str:
    try:
        days = 1 if days < 1 else 14 if days > 14 else days
    except TypeError:
        days = 7

    params = {
        "key": config("WEATHER_API_KEY"),
        "q": location,
        "days": days,
        "aqi": "no",
        "alerts": "no",
    }

    response: requests.models.Response = requests.get(
        "http://api.weatherapi.com/v1/forecast.json", params=params
    )

    response: dict = response.json()
    filtered_response = {}
    filtered_response["location"] = response["location"]
    filtered_response["current"] = response["current"]
    filtered_response["forecast"] = [
        [day["date"], day["day"]] for day in response["forecast"]["forecastday"]
    ]
    return dumps(filtered_response)

Give it a test run:

print(get_weather_forecast("Seoul", days=3))

And you should get a fairly large output in your terminal. Again, make sure you comment out the print statement so it won’t run every time we import this file in the future.

Parallel Function Calling

Ok go ahead and close your weather.py file. We’re done with it for now. Now we’ll create a new file called 'parallel_function_calling.py' in the same folder:

📁FINX_OPENAI_UPDATES (root project folder)
    📁1_Parallel_function_calling
        📄function_descriptions.py
        📄prompt_setup.py
        📄weather.py
        📄parallel_function_calling.py
    📄.env

Important! Before we get started make sure you run this in a terminal window:

pip install openai --upgrade

This gets the latest version of the openai library, making sure your syntax matches mine, as there are quite a few differences between the old and new versions, which we’ll be going over in the coming parts.

Open the parallel_function_calling.py file and let’s have some fun!

import json
from decouple import config
from openai import OpenAI
from typing import Callable

We import the built-in json module to work with JSON data, config to load our OpenAI API key from the .env file, and OpenAI to access the API.

Note that the syntax is different: where we would just import 'openai' itself in the past, in this new version of the library we import the OpenAI class instead. The 'Callable' from typing is just used for a type hint later in our code.

from weather import get_current_weather, get_weather_forecast
from prompt_setup import current_and_forecast_setup
from function_descriptions import (
    describe_get_current_weather,
    describe_get_weather_forecast,
)

Here we just import our own stuff we prepared ahead of time.

MODEL = "gpt-3.5-turbo-1106"
client = OpenAI(api_key=config("OPENAI_API_KEY"))

Define the model up top, then create a 'client' by calling the OpenAI class we imported, passing in the api_key loaded from the .env file using config.

This is part of the new standard syntax, we will interact with this client object to make API calls to OpenAI’s various API endpoints from here on.

Utility Printer Function

Now create a quick utility to print the output in a more readable manner:

def quick_dirty_printer(messages):
    """
    Prints messages in alternating colors (irrespective of role) and the final message in green. (92 is green, 93 is yellow, 94 is blue)
    """
    for index, message in enumerate(messages):
        if index == len(messages) - 1:
            print(f"\033[92m {message} \033[0m")
        elif index % 2 == 0:
            print(f"\033[93m {message} \033[0m")
        else:
            print(f"\033[94m {message} \033[0m")

This function takes a list of messages and loops over each index and message. If the index is the last one, it prints the message in green; otherwise it prints in alternating yellow and blue, using the remainder operator to distinguish odd and even indexes. This is just a quick and dirty way to make the output more readable.

The \033[92m part is an ANSI color code, which is a special character sequence that tells the terminal to change the color of the text. The \033[0m part resets the color back to the default.
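You can try the codes directly in any ANSI-capable terminal:

print("\033[93myellow\033[0m \033[94mblue\033[0m \033[92mgreen\033[0m")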

GPT Function

Now let’s start on our GPT function:

def ask_weather_gpt(query, message_history=None, simulate_failure=False):
    need_to_fail_once = simulate_failure
    messages = []

We’re going to take a query as input and, optionally, a message history. That way, if we want to call this function a second time with an already established history of messages sent between ChatGPT and the user, we can pass that existing history back in.

I’m also adding the simulate_failure parameter because we’re going to build in a fail-safe in case ChatGPT fails somehow, and we want to be able to test it. An actual failure is not that likely, so this simple simulate_failure switch lets us verify that our fail-safes are working.

Then we have the variable need_to_fail_once, just a boolean value based on what was passed in for simulate_failure.

Then we create messages. This is just a list that will hold all the messages: perhaps the system message first, which tells ChatGPT it’s a helpful assistant that’s supposed to do this or that, then the user message with the query, and then the assistant message from ChatGPT coming back with an answer. Every single time ChatGPT sends us a response, we’ll append it to this message history list.

As we’re going to be doing this several times over, and there’s also currently a small bug that we want to avoid, we’re going to create a small inner function that we can call every single time we want to append something to this list:

    def handle_and_append_response(response):
        """
        Appends message to history and extracts the message,
        prevents a current bug by explicitly setting .content and .function_call
        """
        response_message = response.choices[0].message
        if response_message.content is None:
            response_message.content = ""
        if response_message.function_call is None:
            del response_message.function_call
        messages.append(response_message)
        return response_message

This may look a little confusing, but let’s go over it. This inner handle_and_append_response function handles a response and appends its message to the history, the list of 'messages': when we make a call to ChatGPT and get a response in return, we just put the response into this function.

First, we extract the message from the response and save it as 'response_message'. Then we prevent a current bug by explicitly setting the .content and .function_call attributes.

If response_message.content is None, which is the case when ChatGPT tries to call a function, we set the response message’s .content to an empty string.

Now, why do we do this?

There’s currently a bug where, if you append a message without a .content key to the message history and send it back to ChatGPT, it complains that there’s no message.content. We circumvent this by making sure the key exists, even though it’s just an empty string.

The same goes for the second check: if response_message.function_call is None, we just delete that particular key to make sure it doesn’t bug out on us. If you’re reading this in the future, this may have been fixed, so you can try removing these lines later on.

Then we append the response message, with these small edits, to messages, and also return the response_message from the inner function.

Now we’re outside of the inner function again:

def ask_weather_gpt(...):
    ...
    def handle_and_append_response(...):
        ...
        ...

    # continue down here outside the inner function
    if message_history:
        messages = message_history
    else:
        messages = [
            {"role": "system", "content": current_and_forecast_setup},
            {"role": "user", "content": query},
        ]

If we passed in a message_history as an argument when calling the function, we’re going to use that as the message_history.

Otherwise, we’ll define a basic message history with a system message containing our prompt from prompt_setup.py and the user query in the second message.

    tools = [
        describe_get_current_weather,
        describe_get_weather_forecast,
    ]

Now we create a list of ‘tools‘.

Notice OpenAI has adopted the LangChain 'tools' naming convention. This is a list of the function descriptions from function_descriptions.py, and not the actual functions themselves!

    response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )

Now we make a call to ChatGPT, passing in our model, messages, and tools. We set tool_choice to "auto" to let ChatGPT decide if, and which, function(s) it should call. You can also force a call to a specific tool by naming it here.
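For example, to force the forecast function, you would pass something like this instead of "auto" (a sketch following the tool_choice format from OpenAI’s documentation):

response = client.chat.completions.create(
    model=MODEL,
    messages=messages,
    tools=tools,
    # Force this specific tool instead of letting the model decide.
    tool_choice={"type": "function", "function": {"name": "get_weather_forecast"}},
)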

Now we’re going to handle the response using our inner function:

    response_message = handle_and_append_response(response)

Which means it’s now also in our messages list.

    while response_message.tool_calls:
        tool_calls = response_message.tool_calls
        available_functions = {
            "get_current_weather": get_current_weather,
            "get_weather_forecast": get_weather_forecast,
        }

We open a while loop. As long as ChatGPT wants to call a function, the response_message will have a .tool_calls attribute, which is a list of the functions it wants to call.

So while ChatGPT wants to call functions, we will run this loop. We save the list as tool_calls, then define a simple dictionary of available functions, mapping the function names we gave ChatGPT to the actual functions we defined in weather.py.

        try:
            if need_to_fail_once:
                need_to_fail_once = False
                raise Exception("Simulating failure")
            for call in tool_calls:
                func_name: str = call.function.name
                func_to_call: Callable = available_functions[func_name]
                func_args: dict = json.loads(call.function.arguments)
                func_response = func_to_call(**func_args)

                messages.append(
                    {
                        "tool_call_id": call.id,
                        "role": "tool",
                        "name": func_name,
                        "content": func_response,
                    }
                )

From here on we run a try/except block. Remember, ChatGPT is generating the function names and input arguments from this point onward; if it makes any name or syntax mistakes, our function might blow up, which is why we use try/except to catch any errors and handle them.

First, if the need_to_fail_once variable is set to True, we simulate failure by raising an exception. We also make sure to set the variable back to False so we only raise the exception once. By raising an exception, we force the except block to run so we can test our fail-safe code.

Then we loop over each call in the tool_calls list. We extract the function name from the call, and then we get the actual function from our available_functions dictionary.

We also extract the function arguments from the call and convert them from a string to a dictionary using json.loads.

We then call the function passing in the arguments dictionary using the ** operator. Finally, we append a message to our messages list, containing the tool_call_id, the role, the name of the function, and the response from the function.
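To make those two steps concrete, here is what they do with some hypothetical arguments ChatGPT might generate:

import json

raw_args = '{"location": "Amsterdam", "days": 3}'  # arguments arrive as a JSON string
func_args = json.loads(raw_args)                   # {'location': 'Amsterdam', 'days': 3}

# func_to_call(**func_args) then unpacks the dictionary into keyword
# arguments, equivalent to calling:
# get_weather_forecast(location="Amsterdam", days=3)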

We will feed this message history back to ChatGPT again later and the call id helps ChatGPT discern which answer is related to which function call as multiple functions are being called in parallel here.

Now we go to the except block:

        except Exception:
            messages.pop()
            messages.append(
                {
                    "role": "system",
                    "content": "Based on the above information, please generate the appropriate tool calls with valid arguments per the schema provided.",
                }
            )
            return ask_weather_gpt(query, message_history=messages)

If we get an exception, we pop the last message off the messages list and append a system message telling ChatGPT to generate the appropriate tool calls with valid arguments per the schema provided. Then we call ask_weather_gpt again, passing in the query and the message history, which now contains the system message we just appended.

Basically, what this comes down to is: ChatGPT generated faulty arguments, so we pop that generation off the stack, put in a system message reminding ChatGPT to generate correct arguments, and return out of this function by calling it again with the amended message history.

This is not perfect error handling by any means, but I want to give you a starting point, an idea from which you can build your own error handling, without making this example too complex.

Now outside the try/except block:

        response = client.chat.completions.create(
            model=MODEL,
            messages=messages,
        )

        response_message = handle_and_append_response(response)

        quick_dirty_printer(messages)
        return response_message

    quick_dirty_printer(messages)
    return response_message

We make a second request to ChatGPT, passing in the message history, which now contains all the responses from the functions we called. We then handle the response using our inner function (which appends it to the message history), print the messages using our quick and dirty printer utility, and return the response message.

After that, we call the quick and dirty printer and return the response once more, but notice this is indented one level less, outside the while loop. In case there was no function call to begin with, because the user asked a question that didn’t require one, we bypass the whole while loop and directly print the messages and return the response message.

As this can be a bit confusing in snippets here is the whole function once more:

def ask_weather_gpt(query, message_history=None, simulate_failure=False):
    need_to_fail_once = simulate_failure
    messages = []

    def handle_and_append_response(response):
        """
        Appends message to history and extracts the message,
        prevents a current bug by explicitly setting .content and .function_call
        """
        response_message = response.choices[0].message
        if response_message.content is None:
            response_message.content = ""
        if response_message.function_call is None:
            del response_message.function_call
        messages.append(response_message)
        return response_message

    if message_history:
        messages = message_history
    else:
        messages = [
            {"role": "system", "content": current_and_forecast_setup},
            {"role": "user", "content": query},
        ]

    tools = [
        describe_get_current_weather,
        describe_get_weather_forecast,
    ]

    response = client.chat.completions.create(
        model=MODEL,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )

    response_message = handle_and_append_response(response)

    while response_message.tool_calls:
        tool_calls = response_message.tool_calls
        available_functions = {
            "get_current_weather": get_current_weather,
            "get_weather_forecast": get_weather_forecast,
        }

        try:
            if need_to_fail_once:
                need_to_fail_once = False
                raise Exception("Simulating failure")
            for call in tool_calls:
                func_name: str = call.function.name
                func_to_call: Callable = available_functions[func_name]
                func_args: dict = json.loads(call.function.arguments)
                func_response = func_to_call(**func_args)

                messages.append(
                    {
                        "tool_call_id": call.id,
                        "role": "tool",
                        "name": func_name,
                        "content": func_response,
                    }
                )

        except Exception:
            messages.pop()
            messages.append(
                {
                    "role": "system",
                    "content": "Based on the above information, please generate the appropriate tool calls with valid arguments per the schema provided.",
                }
            )
            return ask_weather_gpt(query, message_history=messages)

        response = client.chat.completions.create(
            model=MODEL,
            messages=messages,
        )

        response_message = handle_and_append_response(response)

        quick_dirty_printer(messages)
        return response_message

    quick_dirty_printer(messages)
    return response_message

Running a Single Function Call

So let’s try it out! Add the following call at the bottom of your file and run it:

ask_weather_gpt("What's the weather in San Francisco?", simulate_failure=False)

You should see something like this in your terminal:

{'role': 'system', 'content': 'You are a regular ChatGPT chatbot, just like normal, however you also have access to some functions that can be called if you need them. One will provide the current weather and one will provide the weather forecast. IF THE USER DOES NOT ASK A WEATHER RELATED QUESTION JUST ANSWER THEM AS NORMAL WITHOUT CALLING ANY FUNCTIONS.'}

{'role': 'user', 'content': "What's the weather in San Francisco?"}

ChatCompletionMessage(content='', role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_8oWdO9OoMqwXUp7QEE7kaaCX', function=Function(arguments='{"location":"San Francisco"}', name='get_current_weather'), type='function')])

{'tool_call_id': 'call_8oWdO9OoMqwXUp7QEE7kaaCX', 'role': 'tool', 'name': 'get_current_weather', 'content': '{"location": {"name": "San Francisco", "region": "California", "country": "United States of America", "lat": 37.78, "lon": -122.42, "tz_id": "America/Los_Angeles", "localtime_epoch": 1699771110, "localtime": "2023-11-11 22:38"}, "current": {"last_updated_epoch": 1699770600, "last_updated": "2023-11-11 22:30", "temp_c": 11.1, "temp_f": 52.0, "is_day": 0, "condition": {"text": "Clear", "icon": "//cdn.weatherapi.com/weather/64x64/night/113.png", "code": 1000}, "wind_mph": 2.2, "wind_kph": 3.6, "wind_degree": 10, "wind_dir": "N", "pressure_mb": 1019.0, "pressure_in": 30.1, "precip_mm": 0.0, "precip_in": 0.0, "humidity": 83, "cloud": 0, "feelslike_c": 11.5, "feelslike_f": 52.6, "vis_km": 16.0, "vis_miles": 9.0, "uv": 1.0, "gust_mph": 1.3, "gust_kph": 2.1}}'}

ChatCompletionMessage(content='The current weather in San Francisco is clear with a temperature of 52.0°F. The wind is blowing at 3.6 km/h from the north, and the humidity is at 83%.', role='assistant', tool_calls=None)

So first we have the system message we set up, followed by the user query. We can then see that ChatGPT sends us a request to call a function, passing us the arguments and giving the call an id. Next we have the tool call result with the matching id to link them together, and finally ChatGPT gives us a readable final answer!

Testing the Failsafe

Before we get into multiple function calls, let’s quickly test our fail-safe. Change the call to this:

ask_weather_gpt("What's the weather in San Francisco?", simulate_failure=True)

Your output should look much the same as above, but with one extra entry in between the user query and the ChatGPT message:

{'role': 'system', 'content': 'setup...'}

{'role': 'user', 'content': "What's the weather in San Francisco?"}

{'role': 'system', 'content': 'Based on the above information, please generate the appropriate tool calls with valid arguments per the schema provided.'}

ChatCompletionMessage(content='', role='assistant', tool_calls=[....])

{'tool_call_id': 'call_QrLTbUe3RqPYyfbXUkyIVqZe', 'role': 'tool', 'name': 'get_current_weather', 'content': '....'}

Exactly as expected: the error triggered, the first messages were sent back to ChatGPT with the third one appended, reminding the model to please generate appropriate tool calls with valid arguments, and a new call was made like nothing happened.

Running Parallel Function Calls

Comment out the above call and let’s try parallel function calls now:

ask_weather_gpt(
    "Please give me the current weather in Seoul and the weather forecast in Amsterdam for the coming three days."
)

And you can see two function calls being sent back simultaneously. We then call both functions and return the results to ChatGPT which gives us the final answer:

{'role': 'system', 'content': 'setup....'}

{'role': 'user', 'content': 'Please give me the current weather in Seoul and the weather forecast in Amsterdam for the coming three days.'}

ChatCompletionMessage(content='', role='assistant', tool_calls=[ChatCompletionMessageToolCall(id='call_QKpzrTXdoh2Carn0bvyhott5', function=Function(arguments='{"location": "Seoul"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_n7BmrrjgnKEAROSZHaOoWxLf', function=Function(arguments='{"location": "Amsterdam", "days": 3}', name='get_weather_forecast'), type='function')])

{'tool_call_id': 'call_QKpzrTXdoh2Carn0bvyhott5', 'role': 'tool', 'name': 'get_current_weather', 'content': '{......}'}

{'tool_call_id': 'call_n7BmrrjgnKEAROSZHaOoWxLf', 'role': 'tool', 'name': 'get_weather_forecast', 'content': '{......}'}

ChatCompletionMessage(content='The current weather in Seoul is sunny with a temperature of 5°C (41°F). The wind is blowing from the north at 11.2 km/h.\n\nIn Amsterdam, the current weather is foggy with a temperature of 5°C (41°F). Over the next three days, expect patchy rain with a high of 8.8°C (47.8°F) and a low of 6.0°C (42.8°F) tomorrow, followed by moderate rain with a high of 14.6°C (58.3°F) and a low of 5.9°C (42.6°F) the day after, and more moderate rain with a high of 11.9°C (53.4°F) and a low of 10.1°C (50.2°F) on the third day.', role='assistant', tool_calls=None)

Perfect! We can now call multiple tools at the same time without having to loop through ChatGPT several times, greatly speeding up the process.

Asking a Simple Question

Finally, ask a normal query to make sure ChatGPT will still answer regular questions without calling functions when they’re not needed:

ask_weather_gpt("What is a zombie watermelon?")

{'role': 'system', 'content': 'You are a regular ChatGPT chatbot, just like normal, however you also have access to some functions that can be called if you need them. One will provide the current weather and one will provide the weather forecast. IF THE USER DOES NOT ASK A WEATHER RELATED QUESTION JUST ANSWER THEM AS NORMAL WITHOUT CALLING ANY FUNCTIONS.'}

{'role': 'user', 'content': 'What is a zombie watermelon?'}

ChatCompletionMessage(content='A "zombie watermelon" is a term that\'s used for a watermelon that has been left in the field for an extended period, causing it to turn mushy and ooze behind its rind after being picked. This causes the inside to rot while the exterior remains vibrant and green, hence the name "zombie watermelon." It\'s not an official term but more of a colloquial description.', role='assistant', tool_calls=None)

Yep, ChatGPT goes straight into the answer. Now that you are up to date with parallel function calls and the new syntax, let’s look at the new JSON mode and seeds in the next part. See you there soon!
