Hi and welcome! I’m pleased to introduce this course on the Python LangChain library for working with Large Language Models. We’ll use LangChain to supercharge our ChatGPT abilities and explore applications that are cool but also practical in the real world. Even though we’ll be using ChatGPT throughout this series, everything you learn will also apply to other LLMs. That’s the power of LangChain!
A quick overview of what we’ll cover:
- In the first part, we’ll cover the basics of LangChain and use it to summarize a text that is far too long to fit within the context limit of a single ChatGPT call.
- Then we’ll build a book chat that can answer any question we ask it, using the contents of an entire book as its knowledge base. We’ll be able to ask our book questions in natural language, and it will answer in natural language!
- In the third part, we’ll look at LangChain’s built-in tools and agents, giving our LLMs the ability to use tools such as functions and to decide what to do next using an agent.
- After that, we’ll focus on building our own tools from scratch, so we can add virtually any functionality we want to our AI agents and LLMs.
- In part 5, we’ll dig in and build our own agent step by step. Here you’ll gain a deeper understanding of how we go from an LLM that can only generate text to a powerful agent that can make decisions and use tools.
- In the final part, we’ll look at the new LangChain Expression Language (LCEL) syntax and how it keeps our code readable and easy to understand, by building an RCI chain that finds and fixes its own mistakes.
I hope you’re excited and when you’re ready to get going, let’s jump right in. I’ll see you in part 1!
This tutorial is part of our original course on Python LangChain. You can find the course URL here: 👇
🧑‍💻 Original Course Link: Becoming a Langchain Prompt Engineer with Python – and Build Cool Stuff 🦜🔗