Hi and welcome to this course on running LLMs and other machine learning models on your local machine. In this course, we’ll look at the free and open-source models out there and how you can run them yourself, spanning from LLMs to image generation, text-to-speech, and more.
- In part 1, we’ll get started with the basics of running an LLM locally using Ollama, and also look at the various models available in the open-source community. (For a quick taste of where we’re headed, see the short sketch right after this list.)
- In part 2, we’ll learn how to communicate with our LLM using LangChain, so we can interact with our model from code. We’ll also add memory so it can remember our conversation so far, and look at model preloading to speed up response times.
- In part 3, we’ll implement a simple and effective interface so we can interact with our local LLMs in a more convenient and user-friendly way.
- Part 4 is where we switch over to the Hugging Face community. Here we’ll take the SDXL-Turbo model and use it to generate images on our local machine. We’ll look at both text-to-image generation, where an image is generated from a prompt, and image-to-image generation, which lets us modify existing images.
- In part 5, we’ll look at generating audio from written text using a TTS model. We’ll also tackle the myriad problems and challenges you may run into when running models locally on your own machine, and cover Docker basics and the Gradio API along the way.
- In the final part, we’ll have a look at some really cool niche models. We’ll generate music from text prompts, and even 3D models from image input.
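To give you a small preview of the kind of thing we’ll be doing in parts 1 and 2, here’s a minimal sketch of chatting with a locally running LLM using the `ollama` Python client. This isn’t necessarily the exact setup the course uses: it assumes you’ve installed Ollama, pulled a model beforehand (`llama3` here is just a placeholder name), and installed the `ollama` package with pip.

```python
# Minimal sketch: chatting with a locally running LLM via the Ollama Python client.
# Assumes the Ollama app/server is installed and running, and that you've already
# run `ollama pull llama3` -- swap in whatever model you actually have.
import ollama

response = ollama.chat(
    model="llama3",
    messages=[
        {"role": "user", "content": "In one sentence, what is a local LLM?"},
    ],
)

# The response contains the assistant's reply under message -> content.
print(response["message"]["content"])
```

That’s really all it takes to get a first answer out of a local model; everything else in the course builds on this foundation step by step.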
I hope you’re excited to get started. Let’s jump right in and I’ll see you in part 1!