(6/6) OpenAI API Mastery: Innovating with GPT-4 Turbo and DALL·E 3 – Assistants

Hi, and welcome back to part 6 of this tutorial series, where we’ll be looking at the new OpenAI Assistant API, which is where things really get wild.

For this part of the tutorial, we’ll be using Jupyter Notebooks, as it’s more practical to code interactively and keep our Python kernel running while we add more code.

Install the Jupyter extension for VS Code if you don’t have it already, by going to the extensions tab and searching for Jupyter. It’s the extension published by Microsoft, with an insanely high number of downloads.

I won’t be going into a detailed explanation of Jupyter Notebooks here, but if you’re not quite familiar with them, don’t worry, just follow along with the video version of this tutorial, and you’ll get a feel for how it works.

As you may know, I don’t like to have too much theoretical explanation before we ever get started with the code, so let’s just jump right in and we’ll explain what Assistants are and how to use the API as we go along!

First, create a new folder named ‘6_Assistants’ in your project folder, and inside that folder create a new file named ‘coding_assistant.ipynb’, like this:

📁FINX_OPENAI_UPDATES (root project folder)
    📁1_Parallel_function_calling
    📁2_JSON_mode_and_seeds
    📁3_GPT4_turbo
    📁4_DALLE
    📁5_Text_to_speech
    📁6_Assistants
        📄coding_assistant.ipynb

Now in the first code cell let’s start with our imports and setup:

from openai import OpenAI
import decouple

# Pass a search path so decouple can locate our .env file from inside the notebook
config = decouple.AutoConfig(' ')
client = OpenAI(api_key=config("OPENAI_API_KEY"))

We simply import decouple and OpenAI and then set up our client object as always. One thing you may notice is the decouple.AutoConfig line, which is new.

We pass AutoConfig a search-path string so that decouple can detect our .env file in the root directory of our project. This is a Jupyter Notebook quirk: in a plain Python script decouple finds the file on its own, which is why we haven’t needed this so far but need it now.
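For reference, this assumes your .env file in the root project folder contains your API key, like so:

OPENAI_API_KEY=your_api_key_here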

What is an Assistant?

An Assistant can call functions, just like the function calls we defined in part 1, and it is also capable of parallel calls, no surprise there.

In the Assistants API, however, functions are called Tools. Besides the Tools we can define ourselves as local functions inside our own code, as we did in part 1 of this tutorial series, there are also predefined Tools made by OpenAI that we can use.

As of now, the main ones are the ‘code interpreter’ and ‘retrieval’.

As we’ve already looked at writing our own functions, we’ll focus on OpenAI’s Tools here, starting with the ‘code interpreter’. Be aware, though, that you can also pass in your own functions, with the exact same syntax as we wrote in part 1, to run any function you like.

The ‘code interpreter’ has the ability both to generate Python code and to run it inside a sandbox environment. If the assistant tests its own code and it fails to run, it will iteratively try again until the code works.

So let’s look at how we can create an Assistant and give it the Tool 'code_interpreter' to use.

In a new code cell below the imports, write the following code:

assistant = client.beta.assistants.create(
    name="Code Tutor",
    instructions="You are a coding assistant that writes short snippets of code and explains how they work.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview",
)

The syntax is fairly intuitive and follows what we have been using so far.

We call client.beta.assistants.create (though this probably won’t be beta in the future) and give our Assistant a name; I’ll call mine “Code Tutor”.

We give our assistant instructions, which is basically like a system message prompt setup, and then we give it a list of Tools; just think of this as the list of functions we want it to use.

Again, we can pass in our own functions or OpenAI’s ‘code_interpreter’ or ‘retrieval’, which we’ll look at later.

Finally, we specify the model the assistant will run on; we’ll use the newest GPT-4 Turbo model.

So that is the basic concept of an assistant, which has a name and instructions on how it should handle user requests and a set of tools it can use.
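For reference, adding one of your own functions from part 1 works through the same tools list. Here is a minimal sketch; the get_weather function and its schema are purely illustrative assumptions, not something provided by the API:

# Hypothetical example: mixing the code interpreter with a custom function tool
assistant_with_function = client.beta.assistants.create(
    name="Weather Tutor",
    instructions="You answer coding and weather questions.",
    tools=[
        {"type": "code_interpreter"},
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # illustrative function, as in part 1
                "description": "Get the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        },
    ],
    model="gpt-4-1106-preview",
)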

OpenAI Threads

Now normally we’d have to keep track of some kind of message history and pass it in with every call as the list of messages, right?

This is to make sure ChatGPT has the context of what has been said and which functions have been called so far in order to generate its next response.

For Assistants, OpenAI has come up with a new way of doing this called ‘threads’.

This is not related to the ‘threading’ we used in the previous tutorial part to run multiple functions at the same time, but rather a ‘thread’ in the sense of a forum thread.

A thread is like the conversation history object that we have always sent along as the list of messages, except that now we don’t have to manage the message history ourselves and resend it every time.

Basically, OpenAI is now handling and storing the message history for us.

(It will also do things like truncate the number of messages if it gets too long for the model’s context limit, etc).

So in a new cell, first we just create an empty thread with nothing in it:

thread = client.beta.threads.create()

Now make sure you execute the cells we have written so far, including the above one so that a thread object is created and exists in memory.

Adding Messages to a Thread

After that, we simply append a user query to the thread by referencing its specific ID.

(No assistant is coupled or linked to this thread yet; this is just an empty thread with only this user query appended.)

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="I need to write a binary sort algorithm in Python, can you teach me?",
)

Because we created the thread object in the previous cell, we can access the thread.id in this one. We call client.beta.threads.messages.create and pass in the role of user and our query as the content.

Doing a Run

Let’s continue in the next cell:

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Mr. Brown. Provide both the full code and ample explanation for how it works.",
)

We do a ‘run‘, which is an invocation of an Assistant on a Thread. The Assistant we defined above will take in the thread’s messages and then perform tasks by calling OpenAI models and any Tools and Functions we gave it.

In the process of doing so, the Assistant will generate messages and append them to the Thread.

So basically, the assistant is separate from the thread but it takes the thread as input so to speak, and also appends its output to the end of the thread.

Notice that in the client.beta.threads.runs.create call we pass in both the thread_id and the assistant_id, but also some extra instructions.

These instructions are specific to this run and very useful for personalizing output. In this case, we pretend this bot is running on our website or mobile app and the user is logged in as ‘Mr. Brown’, so we tell the assistant to address the user by their username.

Go ahead and make sure you run all the cells up to here. Note that creating a run does not return the output directly. The output is appended to the thread, which you can read once the run is done, so the output will not be available instantly.

Checking When the Run is Done

Remember that the thread and the Assistant are two separate entities and if the Assistant is not done working on the Thread, the Thread will not have any new messages to retrieve yet. So how do we know if the run is done?

We can make a runs.retrieve call, which returns basic information about the specific run, including its status. In the next cell write:

run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
print(run.status)
print(run)

If you run this immediately after creating the run, run.status will return ‘in_progress’, but if you wait a few seconds and run it again, it will return ‘completed’.

This way we can see if the run is done.
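If you don’t want to re-run the cell by hand, you can also poll in a loop. Here is a minimal sketch; the two-second interval is an arbitrary choice, and it reuses the client, thread, and run objects from the earlier cells:

import time

# Keep polling until the run leaves the 'queued'/'in_progress' states
while run.status in ("queued", "in_progress"):
    time.sleep(2)  # arbitrary polling interval
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

print(run.status)  # e.g. 'completed'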

Getting the Messages from the Thread

When it’s done we can read the thread and list the messages. So in the next cell:

messages = client.beta.threads.messages.list(thread_id=thread.id)

for index, message in enumerate(messages.data[::-1]):
    print(f"\n{index+1}:\n{message.content[0].text.value}")

We first get the messages by calling client.beta.threads.messages.list and passing in the thread_id. It’s that simple.

The answers are stored in the messages.data list, but in reverse chronological order (newest message first).

We loop over each index and message in the messages.data list, restoring chronological order by applying the slice [::-1] to it.

Within each object in this list, the actual message is stored in the content field, which is a list; its zeroth element has a text field, which in turn has a value field containing the actual answer. (I know, it’s a bit convoluted.)

We just print each message, adding a number in front and then the message itself so we can read the message history.

When I run this cell my full output is a detailed answer and explanation of binary insertion sort, which we’ll not go into the details of as it is not the point here (just look over it briefly):

1:
I need to write a binary sort algorithm in Python, can you teach me?

2:
Certainly, Mr. Brown! A binary sort is not a standard term in sorting algorithms; however, I believe you might be referring to a binary search algorithm applied in the context of sorting, or you might be talking about a sorting algorithm that utilizes binary search technique to sort the elements (like in insertion sort).

For the sake of understanding, I will explain how a binary insertion sort works, which is a variant of insertion sort and uses binary search to reduce the number of comparisons. In standard insertion sort, elements are compared linearly to find the insertion point, while in binary insertion sort, binary search is used to find the insertion point and then the elements are shifted to make space for the inserted element.

Here's how a binary insertion sort works:

1. Iterate over the array from the second element to the last.
2. For each element, use binary search to find the position where it should be inserted among the already sorted elements (i.e., the subarray before it).
3. Shift all elements in the sorted subarray that are greater than the element to be inserted, one position to the right.
4. Insert the element at its correct position.

Let me write and explain the full code for you.

3:
Here's the `binary_insertion_sort` function that sorts an array using the binary insertion sort algorithm:

```python
def binary_insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]

        # Find the insertion point using binary search
        left = 0
        right = i - 1
        while left <= right:
            mid = (left + right) // 2
            if arr[mid] > key:
                right = mid - 1
            else:
                left = mid + 1

        # Shift elements to the right to create the position for the key
        j = i - 1
        while j >= left:
            arr[j + 1] = arr[j]
            j -= 1

        # Insert the key at the found position
        arr[left] = key
```

The above function takes an array `arr` as an input and sorts it in place. Let me explain how the code works step by step:

1. Loop over the array starting from the second element since the first element is considered sorted on its own.
2. The variable `key` represents the current element to be positioned in the sorted portion of the array.
3. The `left` and `right` pointers represents the range of the sorted portion to apply binary search.
4. The binary search within the sorted part of the array is used to find the correct position for `key`. If the `mid` value is greater than `key`, it means `key` should be to the left of `mid`. Otherwise, `key` belongs to the right.
5. Once the correct position `left` is found, we need to make space for `key`. We do this by moving all elements that are greater than `key` (and to the right of `left`) one position to the right.
6. Finally, we insert the `key` at its correct sorted position.

We ran our `binary_insertion_sort` function with a sample `sample_array = [9, 5, 1, 4, 3]`, and as you can see from the output, the array is sorted to `[1, 3, 4, 5, 9]`.

Remember, although using binary search reduces the number of comparisons, the time complexity in the worst case remains O(n^2) because of the element shifting operation that is still O(n) per insertion. However, for datasets where comparison is much more expensive than swapping, binary insertion sort may provide some performance benefits.

As you can see, the assistant used the ‘code_interpreter’ tool to generate the code and explanation for binary insertion sort.

But more impressive than all of that, it also actually ran the code in its own sandbox environment with an input of [9, 5, 1, 4, 3] to test that the output was correct.

So the assistant doesn’t just generate code and assume it works here, but it actually tests it and makes sure it works, which is pretty mind-blowing!

Asking Another Question

Now Mr. Brown can ask another question.

(Note that this will fail if the previous run is not done yet; you cannot append new messages while a run is still processing.)

So in a new cell add:

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Please write a python function to reverse strings. Then run this function for me passing in the sentence '!dlrow eht revo ekat ot gniog era slerriuqS'.",
)

We can see that the syntax is exactly the same and we can just add to the same thread again. Now we create another run:

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="Please address the user as Mr. Brown. Provide both the full code and ample explanation for how it works.",
)

And let’s check to see if it’s done in the next cell:

run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
print(run.status)

If you see ‘completed’, go ahead and get the messages again in the next cell:

messages = client.beta.threads.messages.list(thread_id=thread.id)
for index, message in enumerate(messages.data[::-1]):
    print(f"\n{index+1}:\n{message.content[0].text.value}")

You will notice the output has the enormous explanation about binary insertion sort in there before it gets to our new question:

1:
I need to write a binary sort algorithm in Python, can you teach me?

2:
..huge explanation to the first question.

3:
..long code for the first question.

4:
Please write a python function to reverse strings. Then run this function for me passing in the sentence '!dlrow eht revo ekat ot gniog era slerriuqS'.

5:
Here is the Python function to reverse strings:

```python
def reverse_string(s):
    return s[::-1]
```

This function takes a string `s` as input and returns the reversed string. It uses slice notation with `[::-1]`, which is a common Python idiom for reversing sequences.

Using this function with the sentence you provided, `'!dlrow eht revo ekat ot gniog era slerriuqS'`, we obtain the reversed sentence:

`'Squirrels are going to take over the world!'`

This reads correctly as intended.

Note here that I used the same thread on purpose to show you that the previous unrelated question is still in the context.

All of this history counts toward your token usage, so while this is great for creating a scroll-down type web interface like a live chat where the page keeps scrolling down, it may be wise to create new Threads for unrelated questions.

If you keep sending unrelated old questions and their answers along to the assistant by calling it on a massive thread history you will burn through tokens much faster.

The point of keeping assistants and threads separate is that you can create a single reusable assistant.

For each user that logs into your site, you can create a new thread and then run the assistant on that thread. In the run step you can pass additional instructions like the user’s username to personalize the answers for them specifically.

Each user’s thread will store their history, and unrelated lines of questioning for the same user could also be stored in separate threads (to keep them from getting really long).
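To make that concrete, here is a minimal sketch of a per-user thread registry; the user_threads dictionary and the get_or_create_thread helper are illustrative assumptions, not part of the API:

user_threads = {}  # maps a username to their thread id (in-memory, illustrative only)

def get_or_create_thread(username):
    # Reuse the user's existing thread, or create a fresh one on their first visit
    if username not in user_threads:
        user_threads[username] = client.beta.threads.create().id
    return user_threads[username]

thread_id = get_or_create_thread("mr_brown")
message = client.beta.threads.messages.create(
    thread_id=thread_id,
    role="user",
    content="How do I reverse a string in Python?",
)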

If you’re familiar with LangChain and its Agents, you will see that this looks surprisingly similar! It seems all AI directions are converging onto the path of autonomous agents taking actions in this type of manner.

Retrieval

Now let’s look at the other major built-in Tool that OpenAI has released with this first version of Assistants, namely the ‘retrieval’ tool.

Now that you’re familiar with the concept of Assistants and Threads, it’s time to have some fun and build something cool!

To understand what the retrieval tool does, let’s take a brief look back at embeddings. Do you remember how in the ‘function calls and embeddings’ or the ‘langchain’ courses we worked with embeddings to store and retrieve data?

The basic process was like this:

  • Take the large document or collection of documents and chunk them into smaller pieces.
  • Create embeddings for each of the smaller pieces, with an embedding being a numerical representation that captures the meaning of the text, and is easily searchable by computers.
  • Store the embeddings in some type of vector database.
  • When the user searches for a specific query, we convert the query into an embedding and then search the vector database for the closest matching embedding(s) with similar meaning.
  • Return the results to ChatGPT which then uses the information to return a natural language response to the end user.

(If you’re not familiar with this and want more information, see my ‘function calls and embeddings’ course here on the Finxter Academy for more details, specifically the last chapters on embeddings.)
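If you’d like to see the search step in miniature anyway, here is a toy sketch of embedding-based similarity search; the embedding model name and the cosine similarity helper are common choices I’ve picked for illustration, and the retrieval tool does all of this internally:

import numpy as np

def embed(text):
    # Turn a piece of text into an embedding vector via the OpenAI embeddings API
    response = client.embeddings.create(model="text-embedding-ada-002", input=text)
    return np.array(response.data[0].embedding)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

chunks = [
    "Use the touchscreen panel to select your species from the list.",
    "Clean the incubator with a mild disinfectant after hatching.",
]
chunk_vectors = [embed(chunk) for chunk in chunks]
query_vector = embed("How do I pick the right temperature settings?")

# The chunk with the highest cosine similarity is the closest match in meaning
best = max(range(len(chunks)), key=lambda i: cosine_similarity(query_vector, chunk_vectors[i]))
print(chunks[best])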

However, if this all sounds very mystical to you, worry no more.

The retrieval tool we’re going to look at for this example does all of that for us. We upload files to OpenAI and pass them to the Assistant. OpenAI will automatically chunk down our documents and store the embeddings.

It will also implement the vector search to retrieve content from the files relevant to the user query. All we need to do is upload the files and pass them to the Assistant, which will then search them for us.

WiFi Dinosaur Egg Incubator Customer Service Chatbot

So let’s give this a spin to demonstrate it! I’ve provided a file with this tutorial called ‘FAQ_wifi_dinosaur_egg_incubator.txt’. In case you cannot find it, the full text is also pasted at the end of this written tutorial.

Inside this file, you will find a bunch of questions and answers dealing with a product called a ‘Wifi dinosaur egg incubator’, which is a silly and fictional product I came up with for this tutorial.

We’re going to feed these questions and answers to the assistant using the retrieval tool, and then have our customer service bot answer user questions about the product using this data.

This particular file is not that long, but it works just the same if you have thousands of questions and answers or extensive documentation over multiple files.

Go ahead and download and save the 'FAQ_wifi_dinosaur_egg_incubator.txt' file to your '6_Assistants' folder, and also create a new file named 'retrieval_tool.ipynb':

📁FINX_OPENAI_UPDATES (root project folder)
    📁1_Parallel_function_calling
    📁2_JSON_mode_and_seeds
    📁3_GPT4_turbo
    📁4_DALLE
    📁5_Text_to_speech
    📁6_Assistants
        📄coding_assistant.ipynb
        📄retrieval_tool.ipynb  (new empty file)
        📄FAQ_wifi_dinosaur_egg_incubator.txt

Ok so let’s get started on our Wifi dinosaur egg incubator customer service chatbot.

Open up the empty 'retrieval_tool.ipynb' file and inside start with the imports in our first cell:

from openai import OpenAI
import decouple
config = decouple.AutoConfig(' ')

client = OpenAI(api_key=config("OPENAI_API_KEY"))

This is all standard; notice we call decouple.AutoConfig again like we did in the last file, as a Jupyter-Notebook-only requirement.

Uploading the Files

We’re going to pass files to OpenAI that represent our documentation base. Again, our file is pretty short, but you can pass many more and much longer documents.

We have two options.

The first is to provide the files on an assistant level, so one assistant has access to the same files for all calls it will execute.

The second option is to provide the files on a thread level, so for each conversation thread, we can have a specific set of files appropriate for that conversation.

For this tutorial, we’re going to supply the files on the assistant level, as we want the customer service bot to have access to our entire Q&A documentation base. First, we upload the file or files separately to OpenAI. In a new code cell, run:

dino_incubator_faq_file = client.files.create(
    file=open("FAQ_wifi_dinosaur_egg_incubator.txt", "rb"),
    purpose='assistants'
)

Make sure you run the two cells so far. The client.files.create call with the purpose of ‘assistants’ will return an object that we catch in the variable ‘dino_incubator_faq_file’.

This object has the .id property that we can use to reference it.

Creating the Assistant

Now in the next cell, let’s create our assistant:

assistant = client.beta.assistants.create(
    name="Customer service assistant",
    instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.",
    tools=[{"type": "retrieval"}],
    model="gpt-4-1106-preview",
    file_ids=[dino_incubator_faq_file.id],
)

We named the bot ‘Customer service assistant’ and gave it instructions to act as a customer support chatbot.

For the tools we pass in the ‘retrieval’ tool we’ve been talking about. Notice that the tool must be passed in as an object, and as there can be multiple tools this object must be wrapped inside a list.

For the model, we use the newest GPT-4 Turbo model, and finally we pass in file_ids, a list of the file IDs we want to use, which lets us attach multiple files.

As we discussed, these files are hosted, processed, and embedded by OpenAI appropriately, depending on their size and the optimum retrieval strategy, so we don’t have to worry about this.
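If you had more than one documentation file, you could upload each one and collect the IDs; a minimal sketch (the file names here are made up):

faq_files = ["FAQ_incubator.txt", "FAQ_mobile_app.txt"]  # hypothetical file names

file_ids = []
for path in faq_files:
    uploaded = client.files.create(file=open(path, "rb"), purpose="assistants")
    file_ids.append(uploaded.id)

# Same assistant as above, now with all uploaded files attached
assistant = client.beta.assistants.create(
    name="Customer service assistant",
    instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.",
    tools=[{"type": "retrieval"}],
    model="gpt-4-1106-preview",
    file_ids=file_ids,
)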

Creating a Thread and Running the Assistant

In the next code cell, let’s first create an empty thread:

thread = client.beta.threads.create()

Ok, now one of our users has come to our website and accessed our customer service chatbot, asking the question “How do I set the correct temperature for my dinosaur eggs?”.

We’re going to pass this question along by appending it to the thread:

message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="How do I set the correct temperature for my dinosaur eggs?",
)

Now we need to create a run to have the customer service bot work on our thread, using its tools and data:

run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id,
    instructions="You are customer support, always be polite and understanding. Answer the questions using your knowledge base.",
)

Now let’s check to see if our run is done yet:

run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
print(run.status)

In production code, you’d want a loop that polls until run.status is ‘completed’, like the sketch we wrote earlier; as soon as it is, we read out the messages and return the answer to the end user:

messages = client.beta.threads.messages.list(thread_id=thread.id)

for index, message in enumerate(messages.data[::-1]):
    print(f"\n{index+1}:\n{message.content[0].text.value}")

The Result

When I run all of this and wait until the run is done, I get the following output:

1:
How do I set the correct temperature for my dinosaur eggs?

2:
To set the correct temperature for your dinosaur eggs, follow these steps:

1. Power on the incubator.
2. Use the touchscreen panel to navigate to the 'Species Settings' menu.
3. Select the specific species of your dinosaur eggs from the pre-programmed species list (for example, 'Velociraptor').
4. The Wifi dinosaur egg incubator will then automatically adjust to the ideal temperature and humidity levels for the selected dinosaur species' eggs【7†source】.

Excellent! So here is what happened behind the scenes:

  • The assistant got our question.
  • The retrieval tool was used to search our documentation base for the most relevant answer to our question using embeddings.
  • The most relevant information related to the user question was passed back to the assistant by the retrieval tool. In this case, it is the snippet below:
        Question: How do I set the correct temperature for my velociraptor eggs? Answer: Once powered on, use the touchscreen panel to navigate to the ‘Species Settings’ menu. Select ‘Velociraptor’ from the pre-programmed species list, and the Wifi dinosaur egg incubator will automatically adjust to the ideal temperature and humidity levels for your velociraptor eggs.
  • The assistant then used the GPT-4 Turbo model to generate a natural language response to the user question, using the retrieved information as a source.

The main point here is that our documentation could be absolutely huge, but by using embeddings behind the scenes, the retrieval tool finds only the relevant information and hands it back to the assistant, so it can answer the question quickly and efficiently without having to search the entire documentation! Pretty cool, right?

Run Steps

Before we wrap this up, one last quick tip: if you want to check out the steps the assistant took to get to the answer, you can retrieve the run steps object. In a new cell add the following call to threads.runs.steps.list:

run_steps = client.beta.threads.runs.steps.list(thread_id=thread.id, run_id=run.id)
print(run_steps)

Now if we inspect the return value, it will look something like the following (I cut out some data to shorten it):

SyncCursorPage[RunStep](
    data=[
        RunStep(
            ...some data cut out for brevity...
            status="completed",
            step_details=MessageCreationStepDetails(
                message_creation=MessageCreation(
                    message_id="msg_XEzZvkhAzhKEoG50hWVImSQB"
                ),
                type="message_creation",
            ),
            thread_id="thread_BohzHGxBDeEP7tVIB092OUKd",
            type="message_creation",
            expires_at=None,
        ),
        RunStep(
            ...some data cut out for brevity...
            status="completed",
            step_details=ToolCallsStepDetails(
                tool_calls=[
                    RetrievalToolCall(
                        id="call_pDgsE1zBVZ86NSOrlcxgT9tS",
                        retrieval={},
                        type="retrieval",
                    )
                ],
                type="tool_calls",
            ),
            thread_id="thread_BohzHGxBDeEP7tVIB092OUKd",
            type="tool_calls",
            expires_at=None,
        ),
    ],
    object="list",
    first_id="step_quN3SFOwaa0QNWSbZdOMxkqT",
    last_id="step_UtQOTnUlOxMIc0burI7z5wF6",
    has_more=False,
)

So if we read from the bottom up, we can see that the RetrievalTool was called first, and after this tool call was completed the MessageCreation step ran to append a message to the thread.

These were the only two steps needed to answer this particular question.
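If you just want the step types in chronological order, a quick loop over the data does the trick (reversed with [::-1], since the steps, like the messages, come back newest-first):

for step in run_steps.data[::-1]:
    # Oldest step first: print each step's type and status
    print(step.type, step.status)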

I think you can see that Assistants are a very powerful thing, combining any functions we write ourselves with retrieval, the code interpreter, and whatever features come out in the future. This was only a basic overview, but with some imagination you can build amazing things.

That’s it for the last part. I hope you enjoyed this course, and as always, it’s been my honor and my pleasure, and I’ll see you in the next one.

End of tutorial – ‘FAQ_wifi_dinosaur_egg_incubator.txt’ provided below:

Question: What exactly does the Wifi feature do on my dinosaur egg incubator?
Answer: The Wifi feature allows you to connect your dinosaur egg incubator to our dedicated mobile app, enabling you to monitor the temperature, humidity, and estimated hatching time from your smartphone or tablet. You can also receive push notifications for critical temperature changes or when it’s time for the eggs to hatch.

Question: How do I set the correct temperature for my velociraptor eggs?
Answer: Once powered on, use the touchscreen panel to navigate to the ‘Species Settings’ menu. Select ‘Velociraptor’ from the pre-programmed species list, and the Wifi dinosaur egg incubator will automatically adjust to the ideal temperature and humidity levels for your velociraptor eggs.

Question: Is it possible to incubate different species at the same time?
Answer: For optimal results, we recommend incubating one species at a time as different dinosaur species require specific environmental conditions. However, our advanced model offers separate compartments with individual settings if you wish to incubate multiple species simultaneously.

Question: How often should I turn the dinosaur eggs?
Answer: The Wifi dinosaur egg incubator is equipped with an automatic turning mechanism that gently rotates the eggs at optimal intervals. No manual turning is required. You can monitor and adjust the turning frequency via the mobile app if needed.

Question: Can I connect multiple egg incubators to the app?
Answer: Yes, you can connect and manage multiple Wifi dinosaur egg incubators through our dedicated app. This way, you can maintain a careful watch over numerous eggs of different species, each with their respective settings, all from a single device.

Question: How do I know if the humidity level is correct?
Answer: The incubator’s built-in hygrometer measures the moisture content inside. The ideal range will be set automatically based on the species you’ve selected. You can manually adjust humidity levels through the control panel or app.

Question: What if there’s a power outage?
Answer: Our Wifi dinosaur egg incubator comes with a built-in rechargeable battery backup that can last up to 4 hours. The app will notify you in the event of a power outage so you can take necessary action.

Question: How do I clean the incubator after hatching?
Answer: Ensure the incubator is unplugged and cooled down. Remove any hatched shells and organic material. Clean the interior with a soft, damp cloth and a mild disinfectant. Thoroughly dry before reusing or storing the incubator.

Question: Does the incubator provide nutrition to the eggs?
Answer: The incubator creates an optimal environment for egg gestation but does not supply nutrition directly to the eggs. Nutritional requirements for growing embryos should be accomplished naturally within the egg, as per the species’ biological standards.

Question: How do I troubleshoot a connectivity issue with the Wifi feature?
Answer: Check to ensure your internet connection is stable. Try disconnecting and reconnecting to your Wifi network. If problems persist, restart your incubator and router. For further assistance, contact our customer support through the app.

Question: What indicators will the incubator show when it’s time for the eggs to hatch?
Answer: The incubator’s display and the app will both show a countdown timer that estimates hatching time. As hatching day approaches, you will receive alerts. Additionally, the incubator will increase internal humidity to aid the hatching process.

Question: Is there a warranty for the Wifi dinosaur egg incubator?
Answer: Yes, our product comes with a one-year warranty covering any manufacturer defects or malfunctions. Be sure to register your incubator through the app to activate your warranty.

Question: Can the incubator be used for bird eggs as well?
Answer: While the incubator is designed specifically for dinosaur eggs, it can be adjusted for bird eggs. Keep in mind that settings and features may not align perfectly with the needs of modern avian species.

Question: What happens if an egg fails to hatch?
Answer: In the unfortunate event an egg does not hatch within the expected timeframe, consult with a paleo-avian fertility expert. The Wifi dinosaur egg incubator will keep the egg under consistent monitoring conditions, which you can review for any anomalies during the incubation period.