3 Week Bootcamp: Building Realtime LLM Application

Building the Dockerized App

Welcome to this module on how to use Docker to set up and run the Dropbox AI Chat application. Before we begin, it's essential to ensure that you meet the prerequisites and understand each step thoroughly.

Basic Prerequisites:

  • Ensure you have Docker installed on your machine.

  • Dropbox must also be installed.
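You can quickly confirm the Docker prerequisite from a terminal; `command -v` prints a tool's location if it is installed:

```shell
# Check that Docker and Docker Compose are on the PATH.
# "command -v" prints the tool's location if it is installed.
command -v docker || echo "docker not found - install Docker first"
command -v docker-compose || echo "docker-compose not found - install Docker Compose"
```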

Step 1: Cloning the Repository

Open your terminal and run:

git clone https://github.com/pathway-labs/dropbox-ai-chat 
cd dropbox-ai-chat

If you have previously cloned an older version of the repository, ensure you're in the correct repository directory and update it using:

git pull https://github.com/pathway-labs/dropbox-ai-chat

What this does: git pull fetches the latest changes from the remote repository and merges them into your local copy. If you're unsure which remote your local copy tracks, git remote -v will list it.
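To see what the pull actually does, here is a self-contained sketch using a throwaway local repository in place of the GitHub remote (paths and commit messages are illustrative):

```shell
# A stand-in "remote" repository with one commit.
tmp=$(mktemp -d)
git init -q "$tmp/remote"
git -C "$tmp/remote" config user.name "Demo"
git -C "$tmp/remote" config user.email "demo@example.com"
git -C "$tmp/remote" commit -q --allow-empty -m "v1"

# Clone it (this is what step 1's git clone did), then let the remote move ahead.
git clone -q "$tmp/remote" "$tmp/local"
git -C "$tmp/remote" commit -q --allow-empty -m "v2"

# git pull fetches the new commit and merges it into the local copy.
git -C "$tmp/local" pull -q
git -C "$tmp/local" log -1 --pretty=%s   # prints: v2
```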

Step 2: Setting the Environment Variables

Overview:

  • The .env file sets crucial environment variables for your application.

  • If you're using macOS, the .env file might be hidden by default when viewed through Finder (filenames beginning with a dot are hidden), but it is visible in Terminal with ls -a. Regardless of the OS, this file plays a pivotal role in configuring the application.

  • The primary change you'll make in this implementation is setting the DROPBOX_LOCAL_FOLDER_PATH variable in the .env file.

Understanding the DROPBOX_LOCAL_FOLDER_PATH variable:

  • This variable defines the relative path from your project directory to your Dropbox folder. If you want to quickly understand how relative paths work in the context of Linux, you can check the quick video by Udacity or read the comprehensive explanations by RedHat or Coding Rooms.
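For instance, suppose (hypothetically) your project lives at projects/dropbox-ai-chat and your Dropbox folder sits two levels up, next to projects; from inside the project directory the relative path would be ../../Dropbox. GNU coreutils can compute it for you:

```shell
# Illustrative layout in a temporary directory (adjust to your machine):
base=$(mktemp -d)
mkdir -p "$base/projects/dropbox-ai-chat" "$base/Dropbox"
cd "$base/projects/dropbox-ai-chat"

# Relative path from the current directory to the Dropbox folder:
realpath --relative-to="$PWD" "$base/Dropbox"   # prints: ../../Dropbox
```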

Setting the environment variables on macOS or Linux

  1. Create an .env file in the project's root directory using touch.

    touch .env
  2. Edit the .env file using a text editor like nano or vim.

    nano .env
  3. Populate the .env file with the following content, replacing the placeholders with actual values:

    1. Replace {REPLACE_WITH_DROPBOX_RELATIVE_PATH} with the relative path to your Dropbox folder, and {OPENAI_API_KEY} with your OpenAI API key (you can get one from the OpenAI platform once you've logged in).

      OPENAI_API_TOKEN={OPENAI_API_KEY}
      EMBEDDER_LOCATOR=text-embedding-ada-002
      EMBEDDING_DIMENSION=1536
      MODEL_LOCATOR=gpt-3.5-turbo
      MAX_TOKENS=200
      TEMPERATURE=0.0
      DROPBOX_LOCAL_FOLDER_PATH={REPLACE_WITH_DROPBOX_RELATIVE_PATH}
    2. Alternative to the .env file: you can set these variables directly in your shell using the commands below. However, variables set with export (Linux/macOS) or set (Windows, as shown later) last only for the current session. If you want them to persist, add them to a shell configuration file such as ~/.bashrc or ~/.bash_profile on Linux/macOS, or use System Properties on Windows.

      export OPENAI_API_TOKEN={OPENAI_API_KEY}
      export EMBEDDER_LOCATOR=text-embedding-ada-002
      export EMBEDDING_DIMENSION=1536
      export MODEL_LOCATOR=gpt-3.5-turbo
      export MAX_TOKENS=200
      export TEMPERATURE=0.0
      export DROPBOX_LOCAL_FOLDER_PATH={REPLACE_WITH_DROPBOX_RELATIVE_PATH}
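A middle ground between the two approaches: keep the .env file and load it into your current shell session when needed. This works because KEY=value lines are themselves valid shell assignments. A common POSIX-shell idiom, shown here with a throwaway .env in a temporary directory so it doesn't touch your real file:

```shell
demo=$(mktemp -d)
cd "$demo"
printf 'MAX_TOKENS=200\nTEMPERATURE=0.0\n' > .env

set -a       # auto-export every variable defined from here on
. ./.env     # read the assignments from the file
set +a       # stop auto-exporting

echo "$MAX_TOKENS"   # prints: 200
```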

Setting the environment variables on Windows:

  1. Create an .env file using a text editor of your choice.

  2. Populate the .env file as shown above.

  3. Alternative: Use the set command in Command Prompt to set the environment variables. Note: Replace {OPENAI_API_KEY} with your OpenAI API key and {REPLACE_WITH_DROPBOX_RELATIVE_PATH} with the relative path to your Dropbox folder.

    set OPENAI_API_TOKEN={OPENAI_API_KEY}
    set EMBEDDER_LOCATOR=text-embedding-ada-002
    set EMBEDDING_DIMENSION=1536
    set MODEL_LOCATOR=gpt-3.5-turbo
    set MAX_TOKENS=200
    set TEMPERATURE=0.0
    set DROPBOX_LOCAL_FOLDER_PATH={REPLACE_WITH_DROPBOX_RELATIVE_PATH}

Step 3: Building and Starting Containers

Now we will build the Docker image and start the containers.

Navigate to the cloned directory dropbox-ai-chat, then run these commands:

docker-compose build
docker-compose up
# or do both in one step: docker-compose up --build

Behind the Scenes with Docker:

  • Dockerfile: This file contains a set of instructions that Docker follows to build an image. It's like a blueprint for your application. Docker reads these instructions and creates a Docker image based on them. This image contains everything your app needs to run.

  • docker-compose: It's a tool for defining and running multi-container Docker applications. In our context, docker-compose uses the docker-compose.yml file to understand how to set up and run the app's services.

  • When you run docker-compose up, it starts the services as defined.
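For intuition, a docker-compose.yml for an app like this might look roughly as follows. This is an illustrative sketch only; the service names, commands, and ports are assumptions, and the repository's actual file is the source of truth:

```yaml
version: "3"
services:
  api:
    build: .                # image built from the repo's Dockerfile
    env_file: .env          # the variables you set in Step 2
    ports:
      - "8080:8080"         # API on localhost:8080
  ui:
    build: .
    command: streamlit run ui/server.py   # hypothetical entry point
    ports:
      - "8501:8501"         # Streamlit UI on localhost:8501
    depends_on:
      - api
```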

Step 4: Accessing Applications

What it does: Opens access to your API and the Streamlit UI.

Access points for your application:

  • API: http://localhost:8080/

  • Streamlit UI: http://localhost:8501/
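Once the containers are up, a quick way to confirm that the API container is listening is to ask for the HTTP status code on its port (the exact routes depend on the app; 000 means nothing is reachable there yet):

```shell
# Prints the HTTP status code returned by the API port (000 if unreachable).
curl -s -o /dev/null -w "%{http_code}\n" --max-time 3 http://localhost:8080/ || true
```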

Step 5: Stopping Containers

To stop the services and remove the containers, execute:

docker-compose down
# add -v to also remove any named volumes: docker-compose down -v

By following these steps, you should be able to get both the main application and the Streamlit UI up and running using Docker Compose.

Now let's look at another example where we use an API as a data source.



An interesting thing to notice

💡 Interestingly, if you quickly revisit the LLM Architecture diagram we saw earlier, there was a prompt from a Customer Support Executive trying to understand the product release notes made by the development team. With something like the Dropbox app, that problem can be easily addressed.