Lesson 5: Calling functions that do not exist to extract structured data

💡 Full Course with Videos and Course Certificate (PDF): https://academy.finxter.com/university/openai-api-function-calls-and-embeddings/

Welcome back to part 5, where we’ll be calling functions that do not even exist. We’ll also briefly touch on pitfalls, failures, and retrying to make our code more robust, as introducing AI tends to make our code a little more unpredictable and therefore potentially fragile.

Why would we want to call a function that doesn’t even exist? Well, technically we won’t be calling the function at all. We’ll just pretend that it exists and give ChatGPT a description of this nonexistent function. Why would we do this? For the arguments! You see, this new function calling ability gives us the side effect of being able to use ChatGPT to extract structured data directly into actual objects.

Say we have a large piece of text, for example. And we want to extract all the names of people and their birthdays from it. Let’s assume they do not have the same formatting for the dates so we cannot just use a regular expression to look for matches in this text. We could of course feed the text to ChatGPT and just ask it to provide us with the names and birthdays. But even though this answer would probably be reasonably well formatted it would still just be a piece of text containing the names and birthdays. We would then have to go and code something to parse the resulting text and pray that ChatGPT will use the exact same format every time (which it won’t!). We also have the problem of ChatGPT’s tendency to provide introductions or extra filler in its answers, which would again mess up our parser, and all of this gets very hairy and unreliable quickly.

But what if we pretend that there is a function, and this function requires a list of persons and their birthdays in a fixed format as parameters? We can feed ChatGPT the text containing all this data and ask it to call this function (which doesn’t exist, but no matter) with the extracted names and birthdays as parameters. ChatGPT will return a function call as we’ve seen many times before, and there will be an arguments object we can easily parse and extract, containing all the data we need in whatever format we specified in the function description. Boom. My mind is blown. This alone makes function calls super powerful even if we’re not calling any functions!

Set up

Let’s whip up an example! First, some setup, as you may be used to by now. Create a file named reader.py in your utils folder:

> utils
    > reader.py

Inside, let’s code up a quick .txt file reader that will read our base text containing the data we want to extract:

def read_txt_file(file_path):
    with open(file_path, "r") as file:
        file_content = file.read()
    return file_content

The function takes a file path as an argument and then opens it in ‘r’ or read mode. We use ‘with open’ as a context manager to make sure the file is automatically closed again after we’re done with it. We then read the file and return the content. Go ahead and save and close this file.
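As a quick sanity check, the reader can be exercised with a throwaway temp file. This snippet is just an illustration, not part of the project files:

```python
import tempfile

# Same function as in utils/reader.py above.
def read_txt_file(file_path):
    with open(file_path, "r") as file:
        file_content = file.read()
    return file_content

# Write a small temp file and read it back.
with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as tmp:
    tmp.write("Hello, reader!")
    tmp_path = tmp.name

print(read_txt_file(tmp_path))  # Hello, reader!
```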

Now let’s create a new prompt setup in our prompt_setups folder named extract.py:

> prompt_setups
    > extract.py

Inside we’ll declare a simple variable containing our prompt setup as a string:

data_extraction_setup = "You will be provided with a block of textual data. You will also be provided with a function. Your job is to call the function with the correct arguments. To get these arguments you will need to methodically extract them from the textual data provided."

Save and close this one and let’s create the function description next. Create a new file named extract.py in your func_descriptions folder:

> func_descriptions
    > extract.py

Function description

Our job in here is to pretend we have a function and describe how it works to get ChatGPT to pass the correct arguments to our nonexistent function so we can extract the data from the arguments. I’m going to be extracting certain events from a historical text, and the dates they occurred. We’ll pretend for ChatGPT that this function will somehow draw a nicer representation of the data and therefore needs the data as arguments.

describe_extract_dates = {
    "name": "make_list_of_dates",
    "description": """
        Use this function to get a nice representation of historical dates and their happenings.

        Input argument instructions:
        The input should be an array containing arrays structured exactly like this ['date', 'happening'].
        Each item in each list should be a string and the date must come first.
        The happening should be a single-sentence description (e.g. 'Battle of Waterloo - Napoleon defeated').
        MAKE SURE YOU INCLUDE ALL HAPPENINGS FOR WHICH YOU CAN FIND A DATE (not unknown dates!), but only include happenings you extracted from the user provided text.
        Example: ["1592", "Japanese invasions of Korea"]
        """,
    "parameters": {
        "type": "object",
        "properties": {
            "list_of_happenings": {
                "type": "array",
                "description": "an array containing arrays with dates in index 0 and happenings in index 1.",
                "items": {
                    "type": "array",
                    "description": "an array containing a date and a happening. THE DATE ALWAYS COMES FIRST, the happening second.",
                    "items": {
                        "date": {
                            "type": "string",
                            "description": "the date for the happening.",
                        },
                        "happening": {
                            "type": "string",
                            "description": "a single-sentence description for the happening (e.g. 'Battle of Waterloo - Napoleon defeated')",
                        },
                    },
                },
            },
        },
        "required": ["list_of_happenings"],
    },
}

This is a pretty long description, but it’s not that complicated. The input should be an array containing arrays structured exactly like this [‘date’, ‘happening’]. Each item in each list should be a string and the date must come first. The happening should be a single-sentence description (e.g. ‘Battle of Waterloo – Napoleon defeated’). MAKE SURE YOU INCLUDE ALL HAPPENINGS FOR WHICH YOU CAN FIND A DATE (not unknown dates!), but only include happenings you extracted from the user-provided text.

We then state that the required properties are a list of happenings, which is of type array and is an array containing arrays with dates in index 0 and happenings in index 1. The items in this array are thus also of type array and have two items, a date, and a happening. The date is a string and the happening is a string. We also state that the date must come first and the happening second and give an example format for the happening.
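Since ChatGPT is not guaranteed to honor this schema perfectly, it can pay to validate the arguments before using them. Here is a minimal defensive sketch of my own; the validate_happenings helper is hypothetical and not part of the course code:

```python
# Hypothetical defensive check for the extracted arguments.
# Assumes the argument shape described above: a list of [date, happening] pairs.
def validate_happenings(data):
    """Return only the well-formed [date, happening] string pairs."""
    if not isinstance(data, list):
        return []
    return [
        pair for pair in data
        if isinstance(pair, list)
        and len(pair) == 2
        and all(isinstance(item, str) for item in pair)
    ]

sample = [["1592", "Japanese invasions of Korea"], ["1910"], [1945, "Division of Korea"]]
print(validate_happenings(sample))  # only the first pair survives
```

Checks like this let the rest of the code assume clean data, regardless of what the model actually sends back.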

Preparing some text data

Go ahead and close this file, and now let’s get some text to use. I’ll be using the Korean history article at https://en.wikipedia.org/wiki/History_of_Korea as my text; I’ve used the introduction on that page up until the ‘Prehistory’ heading. Go ahead and copy and paste the text you chose (or use my article) into a new .txt file in your base directory named Ex_text_data.txt:

> Ex_text_data.txt (paste your text in here)

Note that if you use a large piece of text and run the code we’re about to write many times in a row, you’ll use about $0.10 worth of GPT credits over all the runs combined. That’s still not a big deal, but the large blocks of text we send to ChatGPT do add up slightly quicker than normal calls, so it’s just something to be aware of. I ran the code quite a lot of times for testing purposes and only came to 10 cents, so no worries.

Extracting the data

Save and close this file and now let’s extract some data. Create a new file named Ea_nonexistent_functions.py in your base directory:

> Ea_nonexistent_functions.py

And let’s add our imports first:

import json

from utils.printer import ColorPrinter as Printer
from utils.reader import read_txt_file
from apis.chat_gpt import gpt_3_5_turbo_0613
from prompt_setups.extract import data_extraction_setup
from func_descriptions.extract import describe_extract_dates

We’ll be using json to read the function arguments again. The rest are our own imports.

def extract_structured_info(text_data):
    messages = [
        {"role": "system", "content": data_extraction_setup},
        {"role": "user", "content": text_data},
    ]
    functions = [describe_extract_dates]

    current_response = gpt_3_5_turbo_0613(
        messages, functions, function_call={"name": "make_list_of_dates"}
    )
    current_message = current_response["choices"][0]["message"]
    messages.append(current_message)

    if current_message.get("function_call"):
        structured_data: dict = json.loads(current_message["function_call"]["arguments"])
        list_of_happenings: list = structured_data["list_of_happenings"]

        Printer.color_print(messages)
        for list_item in list_of_happenings:
            print(f"{list_item[0]} - {list_item[1]}")

Our function takes text_data as an argument. We set up the message history putting our setup message in as usual and also set up our functions variable. We make the initial GPT call, forcing the function call of our nonexistent make_list_of_dates function, and then catch the response and append it to message history.

We then extract the structured data by simply loading the function arguments into a dictionary using json.loads. There is a ‘list_of_happenings’ key in here that we can access for our data. We then print the message history and loop through the list of happenings and print them out.
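Since the arguments arrive as ordinary Python objects, exporting them is trivial. As a quick illustration of my own (not part of the lesson’s files), here is how the extracted pairs could be written to a CSV file with just the standard library:

```python
# A minimal sketch (my own addition, not the lesson's code) showing that once
# the data is a Python list, writing it to a CSV file takes only the stdlib.
import csv

list_of_happenings = [
    ["1392", "Joseon dynasty established"],
    ["1592", "Japanese invasions of Korea began"],
]

with open("happenings.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date", "happening"])  # header row
    writer.writerows(list_of_happenings)
```

No parsing, no regular expressions; the data is already structured, so it plugs straight into any downstream tool.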

Let’s continue below (outside the function block):

text_data = read_txt_file("Ex_text_data.txt")

extract_structured_info(text_data)
We read the text file and then call our function with the text data as argument. Go ahead and save and run this file:

###### Conversation History ######
system : You will be provided with a block of textual data. You will also be provided with a function. Your job is to call the function with the correct arguments. To get these arguments you will need to methodically extract them from the textual data provided.
user : ... (huge block of text data, omitted for brevity) ...
assistant : make_list_of_dates({
"list_of_happenings": [
    ["Half a million years ago", "Lower Paleolithic era on the Korean Peninsula and in Manchuria began"],
    ["Around 8000 BC", "Earliest known Korean pottery dates"],
    ["6000 BC", "Neolithic period began"],
    ["2000 BC", "Bronze Age began"],
    ["700 BC", "Iron Age began"],
    ["2000 BC", "Neolithic People estimated to be direct ancestors of present Korean people"],
    ["2333 BC", "Gojoseon (Old Joseon) kingdom founded"],
    ["12th century BC", "Gija Joseon purportedly founded"],
    ["4th century BC", "Gojoseon existed on the Korean Peninsula and Manchuria"],
    ["3rd century BC", "Jin state formed in southern Korea"],
    ["2nd century BC", "Gija Joseon replaced by Wiman Joseon"],
    ["1st century BC", "Goguryeo, Baekje, and Silla controlled the peninsula and Manchuria"],
    ["57 BC", "Three Kingdoms of Korea formed"],
    ["676", "Silla unified the Three Kingdoms of Korea"],
    ["698", "Balhae established"],
    ["892", "Later Three Kingdoms period began"],
    ["1392", "Joseon dynasty established"],
    ["1418", "King Sejong the Great implemented numerous reforms and created Hangul"],
    ["1592", "Japanese invasions of Korea began"],
    ["1897", "Korean Empire came into existence"],
    ["1910", "Japan annexed the Korean Empire"],
    ["1919", "March 1st Movement"],
    ["1945", "Allies divided Korea into North and South"],
    ["1950", "Korean War began"],
    ["1953", "Cease-fire agreement in the Korean War"],
    ["1991", "Both North and South Korea accepted into the United Nations"],
    ["2018", "Agreement to work toward formal end of Korean conflict"]
]
})

And there we go. While it is not perfect, this is still pretty cool. It does have some limitations; for example, you cannot ask ChatGPT to order these dates for you. ChatGPT does not do iterative loops over the data. It’s like a river that flows down wherever the flow takes it, with no idea where it is going or memory of where it came from; it only knows where it is currently flowing. This is still pretty cool though, as we no longer have to parse this data ourselves. It’s already present in Python list or dictionary format, or whatever you want it to be. I’m sure this will get even more robust in the future.
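Because the extracted data is already a plain Python list, post-processing that the model can’t reliably do for us, like ordering, is easy to do ourselves. Here is a small sketch of my own (not part of the lesson’s code) that sorts pairs by the first run of digits in the date string; entries like ‘700 BC’ or ‘Half a million years ago’ would of course need smarter parsing:

```python
import re

# Hypothetical subset of the extracted data, out of order.
happenings = [
    ["1953", "Cease-fire agreement in the Korean War"],
    ["676", "Silla unified the Three Kingdoms of Korea"],
    ["1392", "Joseon dynasty established"],
]

# Sort by the first run of digits found in the date string.
def year_key(pair):
    match = re.search(r"\d+", pair[0])
    return int(match.group()) if match else 0

for date, happening in sorted(happenings, key=year_key):
    print(f"{date} - {happening}")
```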

Retrying on failure

Let’s move on to the next part where we’ll be looking at some pitfalls and how to handle the slightly less predictable nature of AI. Our usual and totally deterministic manner of programming is reasonably predictable. We generally know what will happen when we run our code. But AI is a little more unpredictable. We’ll look at some ways to handle this.

First, let’s take a quick look at a library called Tenacity, run the following command in a terminal window:

pip install tenacity

This library is a retrying library. It allows us to retry a function call if it fails. Now ChatGPT won’t normally fail to return a response, it will just tell you that it doesn’t know, although there are actual errors like server overload that occur occasionally! But when we start calling functions we have the potential for our functions to crash and burn if ChatGPT fails to provide the correct arguments. Though this doesn’t happen often in my experimenting, even with GPT 3.5, we should still take some measures.

So let’s explore how tenacity works. Create a new file named Eb_handling_failures_with_tenacity.py in your base directory:

> Eb_handling_failures_with_tenacity.py

And let’s add our imports first:

from tenacity import retry, stop_after_attempt, stop_after_delay
from apis.chat_gpt import gpt_3_5_turbo_0613

We import some stuff from tenacity we’ll explain in a moment, and our own simplified gpt_3_5_turbo_0613 function. Now let’s create a GPT call that will artificially fail a couple of times before succeeding:

fail_counter = 0

# Define a function that will retry on failure
@retry(stop=stop_after_attempt(3))
def ask_gpt(query):
    ## Simulate failure ##
    global fail_counter
    fail_counter += 1
    if fail_counter < 3:
        raise ValueError("GPT failed")

    response = gpt_3_5_turbo_0613(query)
    return response

messages = [
    {
        "role": "user",
        "content": "What is the capital of France?",
    }
]

print(ask_gpt(messages))

We first define a simple variable fail_counter and set it to 0. Above our function, we use the @retry decorator from the tenacity library. We set it to stop after 3 attempts. This means that if the function defined below crashes out, it will try a total of 3 times before giving up. We then define our function ask_gpt, which takes a query as an argument. We simulate a failure by incrementing our global fail_counter variable and raising a ValueError if the fail_counter is less than 3. This means that the first two times we call this function it will fail and error. The third time it will succeed and call our gpt_3_5_turbo_0613 function, returning the response. Finally, we set up a quick messages object, call ask_gpt, and print the response.

Go ahead and run this file and you will get your answer like normal, even though the function will throw an error twice before succeeding:

{
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "The capital of France is Paris.",
                "role": "assistant"
            }
        }
    ],
    "created": 1691276333,
    "id": "chatcmpl-7kKWTVM7djclvzK9KJYYBMyUvoA5S",
    "model": "gpt-3.5-turbo-0613",
    "object": "chat.completion",
    "usage": {
        "completion_tokens": 7,
        "prompt_tokens": 14,
        "total_tokens": 21
    }
}
Try commenting out the @retry(stop=stop_after_attempt(3)) decorator and running the file again, and you’ll instantly get an error:

ValueError: GPT failed

We can also use the tenacity library to try and give up after a set amount of time has passed:

@retry(stop=stop_after_delay(3))
def try_something_stupid():
    ## Simulate failure ##
    print("Trying to do something stupid and dangerous")
    raise Exception("Something stupid and dangerous happened")

try_something_stupid()

The retry decorator will try to run this function for 3 seconds before giving up. If you run this function you’ll see “Trying to do something stupid and dangerous” printed to your console over and over for 3 seconds, but the error will be caught and muted while tenacity tries to run the function over and over. When the 3 seconds are up you’ll get the error:

Exception: Something stupid and dangerous happened

You can also combine trying for a certain period of time or a certain number of attempts, whichever condition runs out first, like so:

@retry(stop=(stop_after_attempt(10) | stop_after_delay(5)))
def eat_watermelon_in_one_bite():
    ## Simulate failure ##
    print("Trying to eat a watermelon in one bite")
    raise Exception("You cannot eat a watermelon in one bite!")

eat_watermelon_in_one_bite()

As running 10 print statements is much faster than 5 seconds, this function will run 10 times and then give up, raising the ‘You cannot eat a watermelon in one bite!’ error. If this were an API call with a potential for a lengthy timeout, however, this combination of conditions could be useful to limit the time spent waiting for a response.

We can also put waiting times between retries, implement exponential back-off and much more, check out the Tenacity docs if you want to learn more.

You should always think about adding a retry somewhere, especially when incorporating non-deterministic intelligence into your code and having it call functions with AI-generated arguments. The same goes for the database example in part 4, where you should have a failsafe to catch an error that might occur on an incorrectly generated SQL query. The easiest way to add one is to put the retry decorator on our ask_company_db function so it simply makes the call once or twice more if it fails.

Other ways to handle failure/errors

Alternatively, we can also send this error back to ChatGPT, asking it to try generating the SQL query again and then trying to run the function again. So instead of running the request again from scratch you would add the error to the history and ask ChatGPT to do another function call. We send something like “The SQL query generated was not valid, please try calling the function again passing in only a valid and fully formed SQL query as string argument”. Just make sure to use a ‘while’ loop just like we did in the multi-function call example (use the flow from the Cb_multi_functions_multi_calls.py file) so you keep running the loop as long as ChatGPT is generating function calls, in case the first SQL query fails.

Yet another option is to wrap the database SQL call in a try/except block and if it fails you append not the function call response to the message history but a message with the role of ‘user’ and content of ‘Your SQL query was erroneous, please try calling the same function again with the corrected SQL query’. This is basically the same as the previous failsafe but in this case, we pretend to ChatGPT that the end-user is asking for a correction of the SQL query which is quite effective as ChatGPT is programmed mostly to follow user instructions.
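Here is a rough sketch of that feedback pattern; call_gpt and run_sql are hypothetical stand-ins for the course’s real GPT and database helpers, and the canned responses just simulate a bad query followed by a corrected one:

```python
# Hypothetical stand-ins for illustration only.
def call_gpt(messages):
    """Pretend GPT call: returns a broken query first, a fixed one after correction."""
    if any("erroneous" in m["content"] for m in messages if m["role"] == "user"):
        return "SELECT * FROM employees"
    return "SELEC * FRM employees"  # deliberately broken SQL

def run_sql(query):
    """Pretend database call that rejects malformed queries."""
    if not query.startswith("SELECT"):
        raise ValueError("invalid SQL")
    return ["row1", "row2"]

messages = [{"role": "user", "content": "Show me all employees"}]
result = None
for _ in range(3):  # cap the number of correction rounds
    query = call_gpt(messages)
    try:
        result = run_sql(query)
        break
    except ValueError:
        # Pretend the end-user is asking for a correction.
        messages.append({
            "role": "user",
            "content": "Your SQL query was erroneous, please try calling the same function again with the corrected SQL query",
        })
print(result)
```

Capping the loop matters: without the attempt limit, a model that keeps producing bad queries would have us retrying forever.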

To ChatGPT’s credit, I’ve not actually had much trouble with erroneous SQL queries so far, even with GPT 3.5! But for a production environment where high reliability is a top priority, one of these extra failsafe methods is definitely worth considering.

Now that we’ve had a lot of fun with function calls it is time to have our minds blown again in the next part where we’ll take a look at an equally amazing feature called embeddings. See you there!

