πŸ’ͺ 3 Week Bootcamp: Building Realtime LLM Applications
  • Introduction
    • Timelines and Structure
    • Course Syllabus
    • Meet your Instructors
    • Action Items
  • Basics of LLM
    • What is Generative AI?
    • What is a Large Language Model?
    • Advantages and Applications of Large Language Models
    • Bonus Resource: Multimodal LLMs and Google Gemini
  • Word Vectors Simplified
    • What is a Word Vector?
    • Word Vector Relationships
    • Role of Context in LLMs
    • Transforming Vectors into LLM Responses
      • Neural Networks and Transformers (Bonus Module)
      • Attention and Transformers (Bonus Module)
      • Multi-Head Attention, Transformers Architecture, and Further Reads (Bonus Module)
    • Graded Quiz 1
  • Prompt Engineering
    • What is Prompt Engineering?
    • Prompt Engineering and In-context Learning
    • Best Practices to Follow in Prompt Engineering
    • Token Limits in Prompts
    • Ungraded Prompt Engineering Exercise
      • Story for the Exercise: The eSports Enigma
      • Your Task
  • Retrieval Augmented Generation and LLM Architecture
    • What is Retrieval Augmented Generation (RAG)?
    • Primer to RAG: Pre-Trained and Fine-Tuned LLMs
    • In-Context Learning
    • High-level LLM Architecture Components for In-context Learning
    • Diving Deeper: LLM Architecture Components
    • LLM Architecture Diagram and Various Steps
    • RAG versus Fine-Tuning and Prompt Engineering
    • Versatility and Efficiency in Retrieval-Augmented Generation (RAG)
    • Key Benefits of RAG for Enterprise-Grade LLM Applications
    • Similarity Search in Vectors (Bonus Module)
    • Using kNN and LSH to Enhance Similarity Search in Vector Embeddings (Bonus Module)
    • Graded Quiz 2
  • Hands-on Development
    • Prerequisites
    • Dropbox Retrieval App in 15 Minutes
      • Building the app without Dockerization
      • Understanding Docker
      • Building the Dockerized App
    • Amazon Discounts App
      • How the Project Works
      • Repository Walkthrough
    • How to Run 'Examples'
  • Bonus Resource: Recorded Interactions from the Archives
  • Bootcamp Keynote Session on Vision Transformers
  • Final Project + Giveaways
    • Prizes and Giveaways
    • Tracks for Submission
    • Final Submission

Meet your Instructors

Your Educators

Our heartfelt thanks go to the distinguished professionals who have enriched this course with their expertise and insights:

  • Adrian Kosowski, Chief Product Officer at Pathway | Earned PhD at 20 | Former Professor at Γ‰cole Polytechnique and Co-founder of SPOJ | 100+ Research publications

  • Anup Surendran, Head of Product Marketing & Growth at Pathway | Previously Vice President at QuestionPro | Advisor, Texas A&M University

  • Jan Chorowski, Chief Technology Officer at Pathway | PhD in Neural Networks | Co-authored papers with Yoshua Bengio and Geoff Hinton, two of the three Godfathers of AI | Former Researcher at Google Brain and Mila

  • Sergey Kulik, Lead Software Research Engineer and Solutions Architect at Pathway | International Olympiad in Informatics Gold Medalist | Former Head of Service at Yandex

  • Mudit Srivastava, Growth Manager at Pathway | Former Founding Member and Growth Head at AI Planet (prev. DPhi)

  • Bobur Umurzokov, Developer Advocate at Pathway | Former Lead at Microsoft Tallinn

  • Berke Can Rizai, LLM Research Engineer at Pathway | Former Data Scientist at Getir

Special thanks to Mike Chambers from Amazon Web Services for granting permission to use his invaluable educational resources from the BuildOnAWS YouTube channel.

Throughout this course, we've utilized various resources to enhance your learning experience and have endeavoured to give appropriate credits whenever possible. If there's an oversight or a missed acknowledgement, please don't hesitate to inform us.

Furthermore, if we invite external educators beyond those already mentioned, we will update this section with their details. This will enable you to connect with them later through their social media profiles, extending your learning network.

About Pathway

Pathway is the world's fastest data processing engine, supporting unified workflows for batch data, streaming data, and LLM applications. It serves as a single, integrated data processing layer for real-time intelligence.

  • Mix-and-match: batch, streaming, API calls including LLMs.

  • Effortless transition from batch to real-time - just like setting a flag in your Spark code.

  • Powered by an extremely efficient and scalable Rust engine, reducing the cost of any computation.

  • Enabling the use cases enterprises have been asking for, making advanced data transformations lightning-fast and easy to implement.
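The batch-to-real-time switch mentioned above can be pictured with a toy sketch in plain Python (hypothetical names, not Pathway's actual API): the transformation logic is written once, and a single mode flag decides whether it runs as a one-shot batch job or as a lazy stream.

```python
from typing import Iterable, Iterator

def double_values(rows: Iterable[int]) -> Iterator[int]:
    # The transformation logic is written once...
    for value in rows:
        yield value * 2

def run_pipeline(source: Iterable[int], mode: str = "static"):
    # ...and a single flag decides how it is executed.
    if mode == "static":
        # Batch: materialize the full result in one pass.
        return list(double_values(source))
    if mode == "streaming":
        # Streaming: return a lazy iterator that yields results
        # as input rows arrive.
        return double_values(source)
    raise ValueError(f"unknown mode: {mode!r}")

# Batch run over a finite table
print(run_pipeline([1, 2, 3], mode="static"))  # [2, 4, 6]

# "Streaming" run over a (potentially unbounded) iterator
stream = run_pipeline(iter([10, 20]), mode="streaming")
print(next(stream))  # 20
```

The point of the sketch is only that the user-facing code does not change between the two execution modes; in a real engine the streaming path would also handle late and changing data.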

Why is Pathway so Fast

The Pathway engine is built in Rust. We love Rust πŸ¦€. Rust is designed for speed, parallel computation, and low-level control over hardware resources, which allows us to optimize aggressively for performance.

We also love Python 🐍 – which is why you can write your data processing code in Python, and Pathway will automagically compile it into a Rust dataflow. In other words, with Pathway you don't need to know anything about Rust to enjoy its enormous performance benefits! For now, this is a simple enough starting point (that said, feel free to find more details in our ArXiv Paper – your first bonus resource πŸ™‚).
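To give a feel for why a compiled dataflow is fast, here is a toy incremental computation in plain Python (purely illustrative – Pathway's real engine is written in Rust and is far more general): rather than recomputing an aggregate from scratch whenever the data changes, a dataflow node applies only the delta.

```python
class IncrementalSum:
    """Toy dataflow node: maintains a running sum and updates it in
    O(1) per change, instead of rescanning the whole dataset."""

    def __init__(self) -> None:
        self.total = 0

    def on_update(self, delta: int) -> int:
        # Apply only the change (a row insertion is +value,
        # a deletion is -value), never the full input.
        self.total += delta
        return self.total

node = IncrementalSum()
for change in [5, 3, -2]:  # a stream of insertions/deletions
    node.on_update(change)

print(node.total)  # 6
```

Real dataflow engines apply the same idea to joins, group-bys, and ML pipelines, which is what makes maintaining results over live data so much cheaper than periodic batch reruns.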