Hi, and welcome. I’m Dirk van Meerveld, and I’ll be your host and guide for this tutorial series, where we’ll be focusing on OpenAI’s ChatGPT function calls and the embeddings API.
Function calls will allow us to make ChatGPT even smarter by giving it access to additional information or our own custom functionality: it can ask us to call functions, and we feed the return values back to ChatGPT. Embeddings will allow us to compare pieces of text by meaning instead of by exact character and word matches, which is very powerful. Both of these tools are game changers with mind-blowing potential in the ever-expanding field of AI. So let’s jump right in!
A simple ChatGPT call
I assume if you’re watching this you are at least somewhat familiar with ChatGPT calls using Python. We’ll quickly cover the basics so we’re all on the same page setup-wise, but won’t go too far into the details and basic settings. If you’re using ChatGPT for the first time, you should still be able to follow along though!
Before we get started, you need to make sure you have the newest version of the openai library installed. If you have not used the library before, run:
pip install openai
Else, run the following in your terminal to upgrade to the latest version:
pip install --upgrade openai
I will be using version 1.3.8 for this tutorial. There have been significant syntax updates recently, so make sure you are not running an old version! You can check out which version you have by running the following command:
pip show openai
For the purpose of this tutorial, I will be storing my API keys in a .env file and reading them using the config function from the decouple package. You can follow along with my method or use environment variables on your local machine or any other method you prefer. Just make sure to never ever hardcode API keys in your source code!
Run the below in your terminal to install the decouple package if you don’t have it installed already:
pip install python-decouple
Now create a .env file, which is simply a file with no name and only the .env extension, in the base directory:
📁FINX_FUNC_EMBED (root project folder - whatever you want to name it)
    📄.env
Then add the following line to it:
OPENAI_API_KEY=superdupersecretapikeygoeshere
Make sure to insert your own API key from OpenAI.com, or sign up for an account if you do not have one yet. Also, do not use any spaces around the = sign, as that will not work in .env files.
Now create a new folder named '1_Simple_function_call', and inside it create a new Python file called get_joke.py:
📁FINX_FUNC_EMBED (root project folder - whatever you want to name it)
    📁1_Simple_function_call
        📄get_joke.py
    📄.env
I’ll be using numbered folders for the different parts of this tutorial series as this will make it easy for learning and later reference purposes. Of course, you would not want to structure a real software project in this way, but for progressive lessons, it will make things nice and ordered for us.
A quick overview of the basics before we dive in
Now add the following code to your get_joke.py file:
from decouple import config
from openai import OpenAI

client = OpenAI(api_key=config("OPENAI_API_KEY"))

JOKE_SETUP = "I will give you a subject each time for all the following prompts. You will return to me a joke, but it should not be too long (4 lines at most). Please do not provide an introduction like 'Here's a joke for you' but get straight into the joke."
First, we import config from the decouple package, which will allow us to read our API key from the .env file. Then we import OpenAI from the openai package, which we will use to make our ChatGPT calls. We then set up our client object, calling OpenAI and passing in our API key, which we read from the .env file using the config function. Note that this client object is part of the new openai library syntax, which has changed quite a lot recently. This course has been updated to use all the newest syntax changes.
Finally, we set a constant variable called JOKE_SETUP, which contains a setup with instructions for ChatGPT to follow. We will be asking ChatGPT to generate a joke for us, an idea taken from the ‘Giggle search’ tutorial series also available on the Finxter Academy.
Now continue with the following code:
def get_joke(query: str) -> str:
    result = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0.4,
        max_tokens=200,
        messages=[
            {"role": "system", "content": JOKE_SETUP},
            {"role": "user", "content": query},
        ],
    )
    return result.choices[0].message.content
We define a function called get_joke, which is just a simple ChatGPT API call. We take a query as a string argument, which is the subject of the joke we want ChatGPT to generate for us. We type hint (-> str) that the function will return a string, which is not necessary but can help with code readability.
We then call the client’s chat.completions.create function, and save the result in a variable named result. We pass in the model we want to use and set the temperature to 0.4, which is a measure of how creative we want ChatGPT to be. The higher the temperature, the more creative ChatGPT will be, but the more nonsensical the response can get if you push it too high. max_tokens is self-explanatory. Note that you can leave out the temperature parameter to use the default value.
Note that the model name gpt-3.5-turbo will automatically refer to the newest version of gpt-3.5-turbo. There are currently two versions, namely gpt-3.5-turbo-0613 and gpt-3.5-turbo-1106, so it’s good practice to use the generic gpt-3.5-turbo name to let your code pick up quality improvements when they become available. GPT-4 models are also an option, but as they are more expensive, I’ll leave it to your discretion whether or not you want to use them. Often 3.5 will do the job just fine at a lower cost, and you can use all the code we will be using here by simply changing the model name if you choose to do so later.
Messages is a list of dictionaries, which will contain the messages we want to send to ChatGPT. Each dictionary has a role and a content key, and these messages will function as a sort of history of the conversation so far for ChatGPT to use. We set the first message to be a system message, which is the setup we defined earlier, and the second message to be the query that was passed into the function.
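Since messages is just a plain Python list, the history idea is easy to sketch on its own (the build_history helper below is my own illustration for clarity, not something from the openai library):

```python
def build_history(system_prompt: str, user_query: str) -> list[dict]:
    # Every message is a dict with a "role" and a "content" key
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]


# Start a conversation, then append the assistant's reply so the model
# sees the full history on the next call
history = build_history("You tell short jokes.", "Penguins")
history.append({"role": "assistant", "content": "Why don't penguins fly? ..."})
```

Appending each assistant reply back onto the list is exactly what we will be doing later when function calls enter the picture.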
ChatGPT will send an object in response to us that looks something like the below (There are some other properties that we will get into later which have been left out for simplicity):
{
    "choices": [
        {
            "finish_reason": "stop",
            "index": 0,
            "message": {
                "content": "Why don't penguins like talking to strangers at parties?\nBecause they find it hard to break the ice!",
                "role": "assistant"
            }
        }
    ],
    "created": 1690110711,
    "id": "chatcmpl-7fRI7RKqBl9y1oGv86ShTenOd9km5",
    "model": "gpt-3.5-turbo-0613",
    "object": "chat.completion",
    "usage": {
        "completion_tokens": 22,
        "prompt_tokens": 70,
        "total_tokens": 92
    }
}
As the OpenAI module has already parsed the JSON into an object for us, we don’t have to worry about this; we simply access the properties of the object, like return result.choices[0].message.content, to get the response message from ChatGPT.
Now try running your function by adding a print statement to the bottom of your file like so:
print(get_joke("Penguins"))
And you get a response in the terminal that looks something like this:
Why don't you ever see penguins in the UK? Because they're afraid of Wales!
Ok, now that we have reviewed the bare basics let’s get into the fun stuff!
Creating a function for ChatGPT to call
Before we go ham having ChatGPT call functions for us, we should first create a function for ChatGPT to call. Let’s start with a simple function that gives ChatGPT access to an API that extends its functionality, so we can make our joke generator more powerful. Inside your 1_Simple_function_call folder, create another file called random_word.py:
📁FINX_FUNC_EMBED
    📁1_Simple_function_call
        📄get_joke.py
        📄random_word.py
    📄.env
Now add the following code to your random_word.py file:
import requests


def get_random_word() -> str:
    response = requests.get("https://random-word-api.vercel.app/api?words=1")
    return response.json()[0]
We first import the requests library so we can make an API call. Then we define a function called get_random_word, which will return a random word to us. We make a GET request to the random word API, stating we only want 1 word back. The response is caught in the variable named response, and its body is in JSON format. We call the .json() method to parse it into a valid Python object and then return index 0 of that object, which is the random word we requested.
Once again, in case you’re not familiar with the -> str syntax, it’s just a type hint stating that this function is supposed to return a string. It’s not necessary and does not affect the functionality of the code, but without going too deep into typing here, sometimes it’s just nice to state for clarity that the function we just created returns a string type.
Now let’s test our function by adding the following code to the bottom of your random_word.py file:
if __name__ == "__main__":
    print(get_random_word())
The if __name__ == "__main__" part is a Python idiom that runs the code inside the if block only when we execute the random_word.py file directly, and not when we import the file from another file. It works because Python sets the __name__ variable to "__main__" when we run a file directly, but gives it a different value when the file is imported from another file.
Now go ahead and run your random_word.py file and you should see some random word pop up in your terminal:
demanding
As we used the if __name__ == "__main__" trick, you can just leave the print statement in the file and don’t have to worry about commenting it out or removing it.
Message history
Great! Now that we have a function that will return a random word to us, we can close this file and use it in our ChatGPT function calling later. Before we get into function calling, though, I want to make one more helper for us to use throughout this tutorial series. As ChatGPT tells us to call functions, we call them and feed the return values back into ChatGPT, which in turn returns a message for the end user, the message history is going to get a bit complicated. We will end up with a message history list of dictionaries that looks roughly like this:
[
    {"role": "system", "content": "Setup here"},
    {"role": "user", "content": "Query here"},
    {"role": "assistant", //call a function//},
    {"role": "tool", "content": //function response//},
    {"role": "assistant", "content": //ChatGPT response to end user//},
]
As the purpose of this tutorial series is to get a good understanding of how ChatGPT deals with function calls, and the message history will generally be a huge garbled mess with long function responses in there, we will create a simple helper to help us pretty print the above style message history to the console. This will visually help our learning experience over the coming tutorials as we can see exactly what we are doing every step along the way.
First, create a new file in your 1_Simple_function_call folder named printer.py:
📁FINX_FUNC_EMBED
    📁1_Simple_function_call
        📄get_joke.py
        📄printer.py
        📄random_word.py
    📄.env
Now add the following code to your printer.py file:
class ColorPrinter:
    _colors = {
        "yellow": "\033[33m",
        "green": "\033[32m",
        "blue": "\033[34m",
        "purple": "\033[35m",
        "cyan": "\033[36m",
        "white": "\033[37m",
        "closing_tag": "\033[00m",
    }
First, we define a class called ColorPrinter, and inside it a dictionary called _colors. In it we store a bunch of ANSI codes, which are escape sequences that change the color of text in the console. The final entry, closing_tag, simply resets the color back to the default.
We prepend an underscore (_) to the name, indicating that this is a private property that should not be used outside the class itself but is for internal use.
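To see what these ANSI codes actually do, you can try a quick experiment in any Python shell; the colorize helper below is just my own illustration, not part of the class we are building:

```python
YELLOW = "\033[33m"
RESET = "\033[00m"


def colorize(text: str, color_code: str) -> str:
    # Wrap the text in a color code and a reset code
    # so any output printed afterwards is unaffected
    return f"{color_code}{text}{RESET}"


print(colorize("This prints in yellow in most terminals", YELLOW))
```

The reset code at the end is important: without it, everything you print afterwards would stay yellow.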
Now add the following method inside your ColorPrinter class, below the _colors property:
    def _get_current_color(index, no_of_messages) -> str:
        if index == 0:
            return ColorPrinter._colors["yellow"]
        elif index == no_of_messages - 1:
            return ColorPrinter._colors["purple"]
        elif index % 2 == 0:
            return ColorPrinter._colors["blue"]
        return ColorPrinter._colors["green"]
This method takes an index and the total number of messages.

- The index represents the current message’s number, like message 1 out of 5 for example (but remember, indexes start at 0!).
- The no_of_messages is simply the total number of messages in the message history.
We then check if the index is 0, meaning it’s the first message, and if so we return the yellow color. If the index is the last message, we return the purple color. If the index is even, meaning the remainder is 0 if we divide the number by 2, we return the blue color, and if it’s odd we return the green color.
This is just a simple way to alternate the colors between messages so we can easily distinguish between them and see where one begins and the other ends. We could build a much more sophisticated version identifying and coloring each role like system and user and ChatGPT messages with a specific color, but I don’t want to get too far off-topic with this tutorial.
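For reference, that more sophisticated role-based variant could look something like this; the ROLE_COLORS mapping and the color_for_role name are hypothetical, not code we will use in this series:

```python
# Map each chat role to an ANSI color; unknown roles fall back to white
ROLE_COLORS = {
    "system": "\033[33m",     # yellow
    "user": "\033[32m",       # green
    "assistant": "\033[34m",  # blue
    "tool": "\033[35m",       # purple
}
WHITE = "\033[37m"


def color_for_role(message: dict) -> str:
    # Look up the message's role; default to white if it is missing or unknown
    return ROLE_COLORS.get(message.get("role", ""), WHITE)
```

The alternating-color approach we use instead keeps the helper short and works even for messages that are objects rather than plain dicts.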
Now add the following method to your ColorPrinter class, below the _get_current_color method:
    def color_print(messages) -> None:
        no_of_messages = len(messages)
        cyan_open_tag = ColorPrinter._colors["cyan"]
        color_closing_tag = ColorPrinter._colors["closing_tag"]
        print(f"\n{cyan_open_tag}###### Conversation History ######{color_closing_tag}")
        for index, message in enumerate(messages):
            color = ColorPrinter._get_current_color(index, no_of_messages)
            print(f"{color}{message}{color_closing_tag}")
        print(f"{cyan_open_tag}##################################{color_closing_tag}\n")
This is the public interface, so we did not prepend an _ character. The method takes messages as an argument and returns nothing (None), as it just prints to the console. We initialize a variable named no_of_messages, which is simply the length of the messages list.
We first set the cyan_open_tag and color_closing_tag variables using our _colors dictionary, and then print a header message to the console, sandwiched between some hashtags. Then we loop over the messages with enumerate, which gives us not only each item in the list but also its index.
We then call the _get_current_color method we defined earlier, passing in the index and the total number of messages, and save the result in a variable named color. We then print the message to the console, prepending the color and appending the color_closing_tag. Finally, we print a footer message to the console, which is just some cyan hashtags.
Your whole helper class inside the printer.py file now looks like this:
class ColorPrinter:
    _colors = {
        "yellow": "\033[33m",
        "green": "\033[32m",
        "blue": "\033[34m",
        "purple": "\033[35m",
        "cyan": "\033[36m",
        "white": "\033[37m",
        "closing_tag": "\033[00m",
    }

    def _get_current_color(index, no_of_messages) -> str:
        if index == 0:
            return ColorPrinter._colors["yellow"]
        elif index == no_of_messages - 1:
            return ColorPrinter._colors["purple"]
        elif index % 2 == 0:
            return ColorPrinter._colors["blue"]
        return ColorPrinter._colors["green"]

    def color_print(messages) -> None:
        no_of_messages = len(messages)
        cyan_open_tag = ColorPrinter._colors["cyan"]
        color_closing_tag = ColorPrinter._colors["closing_tag"]
        print(f"\n{cyan_open_tag}###### Conversation History ######{color_closing_tag}")
        for index, message in enumerate(messages):
            color = ColorPrinter._get_current_color(index, no_of_messages)
            print(f"{color}{message}{color_closing_tag}")
        print(f"{cyan_open_tag}##################################{color_closing_tag}\n")
That was quite a lot of work for a simple helper function, but it will help us see and understand exactly what is going on with the back and forth between our code, functions, and ChatGPT.
Calling a function from ChatGPT
Create a new file in your 1_Simple_function_call directory named get_joke_w_function.py:
📁FINX_FUNC_EMBED
    📁1_Simple_function_call
        📄get_joke_w_function.py
        📄get_joke.py
        📄printer.py
        📄random_word.py
    📄.env
Inside this get_joke_w_function.py file, first add some imports:
from openai import OpenAI
from decouple import config
from random_word import get_random_word
from printer import ColorPrinter as Printer

client = OpenAI(api_key=config("OPENAI_API_KEY"))
First, we import OpenAI and config (to read the .env file for our API key), and then we import both helpers we made earlier. Then we create the client object, which is our interface to the openai library, passing in our API key by calling config just like we did before. Now let’s store the model name we want to use in a simple string variable and then write out our prompt setup message for ChatGPT to follow:
MODEL = "gpt-3.5-turbo-1106"

JOKE_SETUP = """
You will be given a subject by the user. You will return a joke, but it should not be too long (4 lines at most). You will not provide an introduction like 'Here's a joke for you' but get straight into the joke.
There is a function called 'get_random_word'. If the user does not provide a subject, you should call this function and use the result as the subject. If the user does provide a subject, you should not call this function. The only exception is if the user asks for a random joke, in which case you should call the function and use the result as the subject.
Example: {user: 'penguins'} = Do not call the function => provide a joke about penguins.
Example: {user: ''} = Call the function => provide a joke about the result of the function.
Example: {user: 'soul music'} = Do not call the function => provide a joke about soul music.
Example: {user: 'random'} = Call the function => provide a joke about the result of the function.
Example: {user: 'guitars'} = Do not call the function => provide a joke about guitars.
Example: {user: 'give me a random joke'} = Call the function => provide a joke about the result of the function.
IF YOU CALL THE FUNCTION, YOU MUST USE THE RESULT AS THE SUBJECT.
"""
We defined the prompt setup up top because it is very long and better put in a separate variable for code readability. You will notice the description is quite lengthy and includes a lot of examples of exactly what we expect. When you’re having trouble getting ChatGPT to do exactly what you want, examples are a powerful tool to use.

Take the time to really think about what you want ChatGPT to do in different situations and write them out. Do not skimp on this part; you need to think of English as a programming language here and use it to 'code' your prompt, so to speak. The better your description, the better ChatGPT will do what you want it to do. And yes, it may seem childish, but SHOUTY CAPSLOCK WORDS actually do seem to carry greater weight with ChatGPT, so don’t be afraid to use them sometimes.
Now define a function named get_joke_result:
def get_joke_result(query):
    messages = [
        {"role": "system", "content": JOKE_SETUP},
        {"role": "user", "content": query},
    ]
The function takes a user query as an argument and then sets up a message history list of dictionaries, which we will use to feed into ChatGPT. We set the first message to be a system message and pass in the setup we defined earlier as content. The second message is the user query that was passed into the function.
Below the messages list, still inside your get_joke_result function, add a list named tools (this list used to be called functions, but that name is now deprecated in favor of tools):
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_random_word",
                "description": "Get a subject for your joke.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "number_of_words": {
                            "type": "integer",
                            "description": "The number of words to generate.",
                        }
                    },
                },
            },
        }
    ]
This is a list of dictionaries, which contains the functions ChatGPT will be able to call. Note that this is not an object that has any real functionality, it’s more us describing in text what this function does, so ChatGPT has a rough idea of what this function is and what it can be used for. The function name doesn’t even have to be the real name of the function, as ChatGPT will not directly call the function itself, which you’ll see in a bit. For the description just literally say what the function does.
In the properties, we describe the arguments the function needs when it is called. ChatGPT will generate these arguments when requesting the function call from us. Even though the get_random_word helper function we defined doesn’t need any arguments, we’ll get in some practice by defining one: we pretend that our get_random_word function needs an integer argument indicating the number of words to generate. (Again, this is not actually implemented in our get_random_word function; it’s just for practice. You’ll learn to extract arguments and send them to function calls in the next part.)
While this is mostly a text-based object that just uses strings to describe a function, do make sure you follow the structure provided so ChatGPT will be able to understand your input correctly.
Now add the following code below the tools list (still inside the get_joke_result function):
    first_response = (
        client.chat.completions.create(
            model=MODEL,
            messages=messages,
            tools=tools,
            tool_choice="auto",  # auto is default
        )
        .choices[0]
        .message
    )
    messages.append(first_response)
So we call chat.completions.create using the client object and save the result in a variable named first_response. We pass in the model we want to use, the list of messages, and the list of tools, which has only one item for now but is still called tools as it is a list that can hold multiple tools. We set the tool_choice to auto, which means ChatGPT gets to decide whether or not a function call is needed.
The .choices[0].message at the end just accesses the properties of the response object to get the message from it, which is what we save in our message history using the append method. Why does this choices property exist at all? Well, it is possible to generate multiple responses at once with ChatGPT, and in that case each choice will be at a different index of the choices list. As we only generate a single response, we will generally just index into index 0.
Now, below this and still inside the get_joke_result function, add:
    if first_response.tool_calls:
        tool_call_id = first_response.tool_calls[0].id
        function_response = get_random_word()
        messages.append(
            {
                "tool_call_id": tool_call_id,
                "role": "tool",
                "name": "get_random_word",
                "content": function_response,
            }
        )
        second_response = (
            client.chat.completions.create(
                model=MODEL,
                messages=messages,
            )
            .choices[0]
            .message
        )
        messages.append(second_response)
        Printer.color_print(messages)
        return second_response.content
    Printer.color_print(messages)
    return first_response.content
We check whether the first_response we got back has anything in its tool_calls property, indicating that ChatGPT wants a function to be called. If so, we get the tool_call_id from the first_response object, which is a unique identifier for the function call; more on that in the next part.
We then simply set a variable named function_response to the result of calling our get_random_word helper. We ignore the argument, as stated before, since it was just for practice, and there is only a single potential function that can be called for now, so we don’t have to worry about that. Note that we are the ones actually calling the function, not ChatGPT!
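We will handle arguments properly in the next part, but as a preview: the arguments ChatGPT generates arrive as a JSON string on the tool call, which you can parse with the standard json module. The arguments string below is a made-up example of what the model might return:

```python
import json

# Example of what first_response.tool_calls[0].function.arguments might contain
arguments_json = '{"number_of_words": 1}'

# Parse the JSON string into a regular Python dict
arguments = json.loads(arguments_json)

# Read the value, with a default in case the model omitted the key
number_of_words = arguments.get("number_of_words", 1)
```

Using .get() with a default is a small safety net, as the model is not guaranteed to include every argument you described in the schema.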
We then append a new message to our message history, which holds the function response. We pass in the tool_call_id, set the role to tool, the name to the name of the function, and the content to the function response. We then make a new chat.completions.create call, passing in the model and the messages, which now contain the result of our function call as well, and save the result in a variable named second_response, once again accessing the .choices[0].message property.
We append the second_response to our message history and then print the message history to the console using our helper function. Finally, we return the content of the second_response as the final result.
If the first_response did not have any tool_calls, we simply bypass the if block, print the message history to the console, and return the content of the first_response as the final result.
Here is the whole get_joke_result function for reference:

def get_joke_result(query):
    messages = [
        {"role": "system", "content": JOKE_SETUP},
        {"role": "user", "content": query},
    ]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_random_word",
                "description": "Get a subject for your joke.",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "number_of_words": {
                            "type": "integer",
                            "description": "The number of words to generate.",
                        }
                    },
                },
            },
        }
    ]
    first_response = (
        client.chat.completions.create(
            model=MODEL,
            messages=messages,
            tools=tools,
            tool_choice="auto",  # auto is default
        )
        .choices[0]
        .message
    )
    messages.append(first_response)
    if first_response.tool_calls:
        tool_call_id = first_response.tool_calls[0].id
        function_response = get_random_word()
        messages.append(
            {
                "tool_call_id": tool_call_id,
                "role": "tool",
                "name": "get_random_word",
                "content": function_response,
            }
        )
        second_response = (
            client.chat.completions.create(
                model=MODEL,
                messages=messages,
            )
            .choices[0]
            .message
        )
        messages.append(second_response)
        Printer.color_print(messages)
        return second_response.content
    Printer.color_print(messages)
    return first_response.content
So let’s try this out. First, add the following to the bottom of your file:
print(get_joke_result("penguins"))
And then run your file. You should get a response in the terminal that looks something like this:
###### Conversation History ######
{'role': 'system', 'content': "\nYou will be given a subject by the user. You will return a joke, but it should not be too long (4 lines at most). You will not provide an introduction like 'Here's a joke for you' but get straight into the joke.\nThere is a function called 'get_random_word'. If the user does not provide a subject, you should call this function and use the result as the subject. If the user does provide a subject, you should not call this function. The only exception is if the user asks for a random joke, in which case you should call the function and use the result as the subject.\nExample: {user: 'penguins'} = Do not call the function => provide a joke about penguins.\nExample: {user: ''} = Call the function => provide a joke about the result of the function.\nExample: {user: 'soul music'} = Do not call the function => provide a joke about soul music.\nExample: {user: 'random'} = Call the function => provide a joke about the result of the function.\nExample: {user: 'guitars'} = Do not call the function => provide a joke about guitars.\nExample: {user: 'give me a random joke'} = Call the function => provide a joke about the result of the function.\nIF YOU CALL THE FUNCTION, YOU MUST USE THE RESULT AS THE SUBJECT.\n"}
{'role': 'user', 'content': 'penguins'}
ChatCompletionMessage(content="Why don't penguins like talking to strangers at parties? Because they find it hard to break the ice!", role='assistant', function_call=None, tool_calls=None)
##################################
Why don't penguins like talking to strangers at parties? Because they find it hard to break the ice!
As we can see, no function was called, since the user provided a valid subject. So far so good.
Now replace the print statement with the following:
print(get_joke_result("random"))
And run your file again. You should get a response in the terminal that looks something like this:
###### Conversation History ######
{'role': 'system', 'content': "\nYou will be given a subject by the user. You will return a joke, but it should not be too long (4 lines at most). You will not provide an introduction like 'Here's a joke for you' but get straight into the joke.\nThere is a function called 'get_random_word'. If the user does not provide a subject, you should call this function and use the result as the subject. If the user does provide a subject, you should not call this function. The only exception is if the user asks for a random joke, in which case you should call the function and use the result as the subject.\nExample: {user: 'penguins'} = Do not call the function => provide a joke about penguins.\nExample: {user: ''} = Call the function => provide a joke about the result of the function.\nExample: {user: 'soul music'} = Do not call the function => provide a joke about soul music.\nExample: {user: 'random'} = Call the function => provide a joke about the result of the function.\nExample: {user: 'guitars'} = Do not call the function => provide a joke about guitars.\nExample: {user: 'give me a random joke'} = Call the function => provide a joke about the result of the function.\nIF YOU CALL THE FUNCTION, YOU MUST USE THE RESULT AS THE SUBJECT.\n"}
{'role': 'user', 'content': 'random'}
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_IucIRyf8n27TVcCwEpbUOgx4', function=Function(arguments='{"number_of_words": 1}', name='get_random_word'), type='function')])
{'tool_call_id': 'call_IucIRyf8n27TVcCwEpbUOgx4', 'role': 'tool', 'name': 'get_random_word', 'content': 'raider'}
ChatCompletionMessage(content='Why did the raider bring a ladder to the bar? Because they heard the drinks were on the house!', role='assistant', function_call=None, tool_calls=None)
##################################
Why did the raider bring a ladder to the bar?
Because they heard the drinks were on the house!
We can see from our pretty-printed history (which will have colors in your terminal) that the assistant first requested us to call a function. The function returned a word, and the assistant (ChatGPT) then used that word as the subject for the joke. Great! Note that ChatGPT doesn’t always make the greatest jokes, so sometimes you need a couple of runs to get a really good one. You can also switch the model name to a GPT-4 model in the MODEL variable at the top of the file if you want to play around with this a little.
Now replace the print statement with the following print statement to finish up our testing and see what happens if the user provides no query at all:
print(get_joke_result(""))
Go ahead and run your file again:
###### Conversation History ######
{'role': 'system', 'content': "\nYou will be given a subject by the user. You will return a joke, but it should not be too long (4 lines at most). You will not provide an introduction like 'Here's a joke for you' but get straight into the joke.\nThere is a function called 'get_random_word'. If the user does not provide a subject, you should call this function and use the result as the subject. If the user does provide a subject, you should not call this function. The only exception is if the user asks for a random joke, in which case you should call the function and use the result as the subject.\nExample: {user: 'penguins'} = Do not call the function => provide a joke about penguins.\nExample: {user: ''} = Call the function => provide a joke about the result of the function.\nExample: {user: 'soul music'} = Do not call the function => provide a joke about soul music.\nExample: {user: 'random'} = Call the function => provide a joke about the result of the function.\nExample: {user: 'guitars'} = Do not call the function => provide a joke about guitars.\nExample: {user: 'give me a random joke'} = Call the function => provide a joke about the result of the function.\nIF YOU CALL THE FUNCTION, YOU MUST USE THE RESULT AS THE SUBJECT.\n"}
{'role': 'user', 'content': ''}
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_L6sokGIUoiVpIyPHye3UfiAY', function=Function(arguments='{"number_of_words":1}', name='get_random_word'), type='function')])
{'tool_call_id': 'call_L6sokGIUoiVpIyPHye3UfiAY', 'role': 'tool', 'name': 'get_random_word', 'content': 'doorway'}
ChatCompletionMessage(content='Why did the doorway go to therapy? It had a real problem with opening up!', role='assistant', function_call=None, tool_calls=None)
##################################
Why did the doorway go to therapy? It had a real problem with opening up!
Yep, still works fine! Note that the function call was only triggered when we provided no subject or asked for something random. If we provide a subject, ChatGPT just works as normal, as it doesn’t need the function.
Now that we know how to make a simple function call let’s take things to the next level in part 2. See you there!