Hands-on Development

Welcome to the final module of this bootcamp, after which we'll move on to building our project that leverages the power of open source, RAG, and LLMs! 💪

Here, we'll guide you through setting up a Retrieval Augmented Generation (RAG) architecture using LLM App, an open-source production framework for building and serving AI applications and LLM-enabled real-time data pipelines.

While you're working with this tool, consider starring it on GitHub. It's an effortless way to bookmark it for future reference and track updates, and it also helps the community discover the resource.

By the end of this module, you'll be able to build your own LLM application that works with real-time data. This implementation guide is aimed at users of Mac, Linux, and Windows systems.
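Before diving into the tooling, it helps to see the RAG flow itself in miniature: index documents, retrieve the ones most relevant to a query, and prepend them as context to the prompt sent to an LLM. The sketch below is framework-agnostic and purely illustrative; the toy embedding, scoring function, document set, and prompt template are our own assumptions, not the LLM App API.

```python
# Minimal RAG sketch: retrieve relevant documents, then augment the prompt.
# The embedding here is a toy bag-of-words vector, NOT a real embedding model.
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy 'embedding': a term-frequency vector over lowercase word tokens."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the user question with retrieved context before calling an LLM."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

docs = [
    "LLM App serves real-time data pipelines for AI applications.",
    "Retrieval Augmented Generation grounds LLM answers in your documents.",
    "Star repositories on GitHub to bookmark them and track updates.",
]
print(build_prompt("How does RAG ground LLM answers?", docs))
```

In a production setup, LLM App replaces the toy pieces here: real embeddings, a vector index kept in sync with live data, and an actual LLM call at the end, but the retrieve-then-augment shape stays the same.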

Note: If you have already completed your first project by consulting the documentation on the LLM App's open-source repository, that's excellent! In that scenario, you may choose to review the videos in this module for additional perspective and proceed to the 'Final Project + Giveaways' module.
