- What are LLMs and their applications
- Difference and evolution from traditional NLP: RNNs, LSTMs, and Transformers (demo: Transformer Explainer)
- Prompt Engineering Techniques (advantages of a well-written prompt)
- Prompt Demonstration: Google AI Studio
- Tokenization: OpenAI Tokenizer
- Types of Tokenization (WordPiece, Byte-Pair Encoding, SentencePiece) and Demonstration: Tokenization Demo
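As a companion to the tokenization demos above, here is a minimal sketch of byte-pair encoding, the merge-based algorithm behind BPE-style tokenizers; the toy corpus and function names are illustrative, not from any library:

```python
from collections import Counter

def get_pair_counts(words):
    # words: dict mapping tuple-of-symbols -> frequency
    pairs = Counter()
    for word, freq in words.items():
        for a, b in zip(word, word[1:]):
            pairs[(a, b)] += freq
    return pairs

def merge_pair(words, pair):
    # replace every adjacent occurrence of `pair` with the merged symbol
    a, b = pair
    merged = {}
    for word, freq in words.items():
        out, i = [], 0
        while i < len(word):
            if i < len(word) - 1 and word[i] == a and word[i + 1] == b:
                out.append(a + b)
                i += 2
            else:
                out.append(word[i])
                i += 1
        merged[tuple(out)] = merged.get(tuple(out), 0) + freq
    return merged

def learn_bpe(corpus, num_merges):
    # start from character-level symbols with an end-of-word marker
    words = dict(Counter(tuple(w) + ("</w>",) for w in corpus.split()))
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(words)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        words = merge_pair(words, best)
        merges.append(best)
    return merges, words
```

For example, `learn_bpe("low low low lower lowest", 3)` first merges `("l", "o")`, then `("lo", "w")`, building subword units exactly as the demo tokenizers do at scale.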
- Preprocess resume text
- Tokenize and create embeddings
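The two preprocessing steps above could be sketched as follows; note the embedding here is a stdlib-only hashing-trick stand-in for illustration — a real pipeline would use a learned encoder such as a sentence-transformer:

```python
import hashlib
import math
import re

def preprocess(text):
    # lowercase, strip punctuation, collapse whitespace
    text = re.sub(r"[^a-z0-9\s]", " ", text.lower())
    return re.sub(r"\s+", " ", text).strip()

def embed(text, dim=64):
    # hashing-trick bag-of-words: each token increments one of `dim` buckets,
    # then the vector is L2-normalized (a toy stand-in for learned embeddings)
    vec = [0.0] * dim
    for tok in preprocess(text).split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]
```

Normalizing the vectors means a plain dot product later doubles as cosine similarity, which simplifies the search step.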
- Create an account and access token
- Understand the utilities in the platform
- Access the Hugging Face Inference API to get feedback on different resumes
- Add custom instructions to see the performance
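The API-access steps above might look like this stdlib-only sketch. The model id is an assumption (any hosted text2text model would do), as are the instruction and token strings; the Inference API expects a JSON body with an `inputs` field and a `Bearer` token header:

```python
import json
from urllib import request

# assumed model id -- substitute any hosted text2text-generation model
API_URL = "https://api-inference.huggingface.co/models/google/flan-t5-base"

def build_request(resume_text, instructions, token):
    # custom instructions are prepended to the resume to steer the feedback
    payload = {"inputs": f"{instructions}\n\nResume:\n{resume_text}"}
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    return request.Request(API_URL, data=json.dumps(payload).encode(),
                           headers=headers)

def get_feedback(resume_text, instructions, token):
    # network call; requires a valid Hugging Face access token
    req = build_request(resume_text, instructions, token)
    with request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Changing only the `instructions` string (e.g. "Critique the skills section" vs. "Suggest stronger action verbs") is enough to compare how custom instructions affect the model's output.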
- Why Fine-Tune?
- Process of Fine-Tuning
- LoRA (Low-Rank Adaptation)
- Fine-Tune a T5 Model for Resume Analysis
- Push the model to the Hugging Face Hub
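The core idea of LoRA above can be shown in a few lines: the pretrained weight W stays frozen, and training only updates two small factors B (d×r) and A (r×k), with the effective weight W + (α/r)·BA. The sizes below are toy values for illustration; real layers are e.g. 768×768 with r = 8:

```python
def matmul(X, Y):
    # plain nested-list matrix multiply, enough for this toy example
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

d, k, r = 6, 6, 2        # toy sizes; real layers are far larger
alpha = 4                # LoRA scaling hyperparameter

# frozen pretrained weight W (identity here, purely for illustration)
W = [[1.0 if i == j else 0.0 for j in range(k)] for i in range(d)]

# trainable low-rank factors: B starts at zero so training begins at W exactly
B = [[0.0] * r for _ in range(d)]   # d x r
A = [[0.1] * k for _ in range(r)]   # r x k

# effective weight: W + (alpha / r) * B @ A
delta = matmul(B, A)
W_eff = [[W[i][j] + (alpha / r) * delta[i][j] for j in range(k)]
         for i in range(d)]

full_params = d * k              # tuning W directly
lora_params = d * r + r * k      # tuning only B and A
```

Because B is initialized to zero, the adapted model starts out identical to the base model; at realistic sizes (768×768, r = 8) the trainable parameter count drops from ~590k to ~12k per layer.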
- Create a Simple UI using Streamlit
- Integrate Fine-Tuned Model to Generate Responses
- Deploy App
- Discuss questions from the previous day
- Conduct a quiz session
- What is RAG (Retrieval-Augmented Generation) and what are its advantages?
- Comparison: RAG vs. Fine-Tuning
- RAG Architecture
- What are Vector Databases?
- Different Types of Search
- FAISS and how it works
- RAG Implementation
- Integrate RAG and FAISS with the Fine-Tuned Model
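The RAG retrieval step above can be sketched end to end with brute-force cosine search — conceptually what a flat FAISS index does at scale. The bag-of-words embedding and prompt template are toy stand-ins (a real pipeline pairs a learned encoder with FAISS and feeds the prompt to the fine-tuned model):

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u)) or 1.0
    nv = math.sqrt(sum(b * b for b in v)) or 1.0
    return dot / (nu * nv)

def embed(text, vocab):
    # toy bag-of-words vector; a real pipeline uses a learned text encoder
    toks = text.lower().split()
    return [toks.count(w) for w in vocab]

def retrieve(query, chunks, k=2):
    # brute-force nearest-neighbour search over all chunk vectors --
    # the same computation a flat FAISS index performs efficiently at scale
    vocab = sorted({w for t in chunks + [query] for w in t.lower().split()})
    qv = embed(query, vocab)
    return sorted(chunks, key=lambda c: cosine(qv, embed(c, vocab)),
                  reverse=True)[:k]

def build_prompt(query, chunks):
    # augment the generation prompt with the retrieved context
    context = "\n".join(retrieve(query, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The prompt returned by `build_prompt` is what gets passed to the fine-tuned model, grounding its answer in the retrieved chunks rather than in its parameters alone.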
Assignment:
- Develop an app to summarize lengthy documents
- Take a document as input and choose the summary format (bullet points, short paragraphs)
- Fine-tune a suitable LLM and integrate with a simple UI
Open Discussion
Interview Question Bank