
Course Syllabus

Brief Overview

The course aims to:

  • Introduce the basics of LLMs and vector embeddings.

  • Explore the intricacies of prompt engineering.

  • Demystify LLM architectures and Retrieval Augmented Generation (RAG), both of which are pivotal in modern LLM applications.

  • Empower you to develop meaningful, real-time RAG-based applications.

Syllabus

Basics of LLMs
  • What is generative AI and how it's different
  • Understanding LLMs
  • Advantages and common industry applications of LLMs
  • Bonus section: Google Gemini and multimodal LLMs

Word Vectors
  • What are word vectors and word-vector relationships
  • Role of context in LLMs
  • Transforming vectors into LLM responses
  • Bonus section: Word2Vec and similarity search in vectors

Prompt Engineering
  • Introduction and in-context learning
  • Best practices to follow: few-shot prompting and more
  • Token limits
  • Prompt engineering exercise

RAG and LLM Architecture
  • Introduction to RAG
  • LLM architecture used by enterprises
  • Architecture diagram and LLM pipeline
  • RAG versus fine-tuning and prompt engineering
  • Key benefits of RAG for real-time applications
  • Bonus resource: Incremental indexing in Pathway (advanced read)

Hands-on Project
  • Installing dependencies and prerequisites
  • Building a Dropbox RAG app using open-source tools
  • Building a real-time discounted-products fetcher for Amazon users
  • Problem statements for projects
  • Standards for project submission
  • Project submission
  • Project evaluation, feedback incorporation, and bootcamp graduation

Understanding the Power of Real-time

A central theme of this course is the integration of real-time data with Large Language Models (LLMs). This powerful combination opens doors to innovative solutions for complex societal and business challenges. While you'll gain proficiency in developing custom LLM applications for static data, our chosen open-source framework simplifies the transition between real-time ("Streaming") and static ("Batch") data with minimal adjustments in Python.
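To make that point concrete, here is a minimal sketch of what switching between batch and streaming can look like with Pathway, the open-source framework referenced in the course's bonus material. The directory path, schema, and column name (`./data/`, `InputSchema`, `value`) are hypothetical, and the connector arguments follow Pathway's documented CSV connector; treat this as an illustrative sketch rather than course code:

```python
# Hypothetical sketch: the same Pathway pipeline in batch or real-time mode.
import pathway as pw


class InputSchema(pw.Schema):
    value: int


# Flip this one flag to move between a one-off batch run ("static")
# and a continuously updating real-time pipeline ("streaming").
MODE = "streaming"  # or "static"

# Read (or keep watching) a directory of CSV files with a single `value` column.
orders = pw.io.csv.read("./data/", schema=InputSchema, mode=MODE)

# The transformation logic stays identical in both modes.
totals = orders.reduce(total=pw.reducers.sum(orders.value))

# Write results; in streaming mode the output keeps updating as new rows arrive.
pw.io.csv.write(totals, "./totals.csv")

pw.run()
```

In static mode the pipeline processes the existing files once and finishes; in streaming mode the same pipeline keeps running and incrementally refreshes its output as new data lands, which is exactly the "minimal adjustments in Python" the course builds on.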

In today's digital era, combining up-to-the-minute data with LLMs is not just innovative – it's transformative. This synergy accelerates everything from financial processes to healthcare responses. Imagine financial transactions, once taking days, now completed in milliseconds. By weaving real-time data streams into LLMs, we create applications that are not only responsive but also capable of making significant contributions to society. That's a cornerstone of what this course aims to achieve.

Your Role as a Learner

The essence of learning and discovery lies with you. While we provide the foundation and tools, the true artistry—the application, innovation, and breakthroughs—stems from your engagement and creativity.

As we embark on this transformative journey, the question is: Are you ready to explore the untapped potential of LLMs merged with real-time data for the greater good? Join us, and let's venture into this exciting realm together! 🚀
