💡 This is the supporting text tutorial of our Finxter Academy Course Creating a High-Performance Extensible ChatGPT Chatbot with Python
I wanted to learn how to read the text of the US Army field manual and build a chatbot that answers questions from it. The military is embracing chatbots, so there is demand for question-and-answer chatbots that can put their operational data at their fingertips. They want to combine their curated data with the highly conversant nature of ChatGPT.
💻 Project: So, I created a simple, high-performance Q&A ChatGPT chatbot in less than 100 lines of code: about 50 lines for data loading and 50 for the simple chat feature.
I used ChatGPT so the bot can quickly and impressively answer questions from text data or PDFs.
While this chatbot does not have many lines of code, a lot is happening under the hood. Also, the chatbot can easily be extended to be a web chatbot or a desktop application.
While creating this, I learned several powerful new technologies, such as the generative pre-trained transformer ChatGPT, the vector database Pinecone, and Langchain, a data pipeline library for large language models.
First I will show you how I built the chatbot; then I'll briefly summarize the concepts and provide in-depth links where you can learn more about these great technologies.
Let’s get started building the chatbot. 👇
Building the Chatbot
Sign up for the APIs.
✅ Step 1: Sign up for the ChatGPT API. This lucid Finxter article tells you how to sign up for ChatGPT.
✅ Step 2: Sign up for PineCone API – Here’s how to sign up for the Pinecone API Key. You can get a free developer instance.
✅ Step 3: Select a PDF, Text File, or URL. I recommend using the text file I’ve provided to ensure your code will work.
✅ Step 4: If you select your own PDF, you may find the file corrupted or unusable by the PDF loader. That was the case with the US Army field manual, so I converted the PDF to a text file. Here’s a link to the PDF cleaner that I created to convert the PDF to a text file.
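The cleaner's details are in the linked repo, but conceptually the job is to extract the text and strip the debris PDF extraction leaves behind. Below is a minimal pure-Python sketch of just the cleanup step; the actual text extraction would use a PDF library such as pypdf, and the function name `clean_text` is my own illustration, not the repo's exact code.

```python
import re

def clean_text(raw: str) -> str:
    """Strip the debris PDF-to-text extraction typically leaves behind."""
    # Drop non-printable control characters (form feeds, null bytes, etc.)
    text = "".join(ch for ch in raw if ch.isprintable() or ch in "\n\t ")
    # Collapse runs of spaces/tabs into a single space
    text = re.sub(r"[ \t]+", " ", text)
    # Collapse 3+ consecutive newlines into a single paragraph break
    text = re.sub(r"\n{3,}", "\n\n", text)
    return text.strip()

# Example: stray null byte removed, double space collapsed
# clean_text("a  b\x00c") -> "a bc"
```

Cleanup like this matters because control characters and ragged whitespace pollute the embeddings that get stored in the vector database.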
✅ Step 5: The code is in Google Colab. Here’s a link to the Google Colab. I recommend following along with the Google Colab file and the video. Open the Colab and you should see this.
The vector database gets loaded with the text of the file. My data-cleaning steps are in the comments. The main idea of the Jupyter notebook is to load the data from a text file into the Pinecone vector database. If you’d rather not use Google Colab, the code is also here.
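Under the hood, loading the file means splitting the text into overlapping chunks, embedding each chunk, and upserting the vectors into Pinecone. The notebook does the splitting with Langchain's text splitter; here is a minimal pure-Python sketch of just the chunking idea (the function name and parameters are my own illustration, not the notebook's exact code):

```python
def chunk_text(text: str, chunk_size: int = 1000, overlap: int = 100) -> list[str]:
    """Split text into fixed-size chunks that overlap, so a sentence
    spanning a boundary still appears whole in at least one chunk."""
    chunks = []
    start = 0
    step = chunk_size - overlap  # advance less than a full chunk to create overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

# Each chunk would then be embedded (e.g. with OpenAIEmbeddings) and
# upserted into Pinecone, e.g. via Pinecone.from_texts(chunks, embeddings, index_name=...)
```

The overlap is the reason retrieval works well even when the answer straddles a chunk boundary.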
✅ Step 6: Here’s the ChatBot code. It’s also on GitHub.
```python
import os
import pinecone  # the Pinecone client itself must be imported, not just the Langchain wrapper

from langchain.llms import OpenAI
from langchain.chains.question_answering import load_qa_chain
from langchain.vectorstores import Pinecone
from langchain.embeddings import OpenAIEmbeddings

# Connect to Pinecone using the key and environment stored in environment variables
pinecone.init(
    api_key=os.environ['PINECONE_API_KEY'],
    environment=os.environ['PINECONE_API_ENV']
)

index_name = "fieldmanual"
embeddings = OpenAIEmbeddings(openai_api_key=os.environ['OPENAI_API_KEY'])

# Wrap the existing Pinecone index as a Langchain vector store
docsearch = Pinecone.from_existing_index(index_name, embeddings)

# Temperature 0 keeps answers consistent for the same prompt
openAI = OpenAI(temperature=0, openai_api_key=os.environ['OPENAI_API_KEY'])
chain = load_qa_chain(openAI, chain_type="stuff")

def askGPT(prompt):
    # Retrieve the documents most similar to the prompt, then ask the LLM
    docs = docsearch.similarity_search(prompt, include_metadata=True)
    ch = chain.run(input_documents=docs, question=prompt)
    print(ch)

def main():
    while True:
        print('Open AI + Pinecone: Field Manual Querying\n')
        prompt = "prompt:" + input()
        askGPT(prompt)
        print('\n')

main()
```
Here’s a brief explanation of the code.
First, you will need to install the libraries:

pip install langchain pinecone-client openai
Then the OpenAI LLM, the question-answering chain, the Pinecone vector store, and the OpenAI embeddings are all imported from the langchain library, and the pinecone client is imported separately.
Then, instantiate the Pinecone connection object. It needs the API key and environment, which I stored in environment variables.
Then embeddings from OpenAI are created. The OpenAI temperature is set to 0, which is important: at temperature 0 the chatbot returns the same response given the same prompt. This chatbot needs factual and consistent information.
Next, the Pinecone vector store is loaded from the name of its existing index.
Pinecone’s similarity_search is explained in depth here.
A similarity search operates on semantic representations of our data and finds similar items fast. The semantic representations take the form of vectors.
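To make "similar vectors" concrete, here is a toy sketch of what a vector similarity search does, using cosine similarity over tiny hand-made 2-D vectors. Real embeddings have hundreds or thousands of dimensions, and Pinecone uses an approximate index rather than this brute-force scan; the vectors and texts below are invented for illustration.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def similarity_search(query_vec, index, k=2):
    """Brute-force nearest neighbors: score every stored vector, keep the top k."""
    scored = sorted(index, key=lambda item: cosine_similarity(query_vec, item[1]), reverse=True)
    return [text for text, _ in scored[:k]]

# Toy "index": (text, embedding) pairs with made-up 2-D vectors
index = [
    ("armor tactics", [0.9, 0.1]),
    ("field rations", [0.1, 0.9]),
    ("tank maneuvers", [1.0, 0.0]),
]
```

A query vector pointing in the "armor" direction, such as `[1.0, 0.0]`, retrieves "tank maneuvers" and "armor tactics" ahead of "field rations".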
After that, the data is sent to the LLM in a Langchain call. The chain created by chain = load_qa_chain(openAI, chain_type="stuff") takes multiple fetched documents and asks a question of them. The LLM response contains the answer to your question based on the content of those documents.
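The "stuff" chain type is the simplest strategy: it literally stuffs all retrieved documents into a single prompt and appends the question. Here is a rough sketch of the kind of prompt it builds (my own approximation, not Langchain's exact template):

```python
def stuff_prompt(docs: list[str], question: str) -> str:
    """Approximate what a 'stuff' QA chain does: concatenate every
    retrieved document into one prompt, then append the question."""
    context = "\n\n".join(docs)
    return (
        "Use the following pieces of context to answer the question.\n\n"
        f"{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

Because everything goes into one prompt, "stuff" only works when the retrieved chunks fit in the model's context window; that is why the text is chunked before it is indexed.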
Then, a prompt gets run every time a question is asked. The query is executed by ch = chain.run(input_documents=docs, question=prompt), which returns the result from the chain.
The prompt is set in a loop, so questions are answered until the user exits the loop.
Here are the results from chatting with the bot. My prompts are in question form.
Here’s what Midjourney thinks the ChatBot should look like: 😀
Below is an overview of the technologies used in the code (Langchain, Pinecone, and ChatGPT) and some great resources where you can learn more about them.
👉 Pinecone is a high-performance vector database that vectors can quickly be retrieved from. It has a free developer version. Their pitch is that Pinecone is a “vector database that makes it easy to build high-performance vector search applications. Developer-friendly, fully managed, and easily scalable without infrastructure hassles.”
👉 Langchain is an awesome library. It is both easy to use and great conceptually. It is Large Language Model (LLM) agnostic, so it is not tied to any one Generative Pretrained Transformer (GPT). It has great tutorials and documentation. It pipes data to and from a large language model, connecting it to data sources like prompts, search engines, and knowledge databases like Wolfram Alpha. The goal of Langchain is to help developers create great applications that combine large language models with other technologies such as search engines.
👉 ChatGPT is a Chat Generative Pretrained Transformer (GPT): software that lets a user ask questions in conversational or natural language. It is powered by a large language model, a neural network with many parameters, trained on large quantities of unlabelled text using self-supervised learning.
I was extremely impressed with the technologies I came across while creating this Chatbot.
The chatbot answers questions quickly, thoroughly, and factually.
I hope this article and video convince you to try these technologies, especially if you create a Chatbot.
Langchain is an extremely promising library, and vector databases are only going to get bigger in the coming Age of AI.