pranalibose/LangVisionWorkshop


Workshop Agenda

Day 1

Part 1: Introduction

  • What are LLMs and their applications
  • Differences from, and evolution of, traditional NLP models: RNNs, LSTMs, and Transformers
    Transformer Explainer
  • Prompt Engineering Techniques (Advantages of a well-written prompt)
  • Prompt Demonstration:
    Google AI Studio
  • Tokenization:
    OpenAI Tokenizer
  • Types of Tokenization (WordPiece, Byte-Pair Encoding, SentencePiece) and Demonstration:
    Tokenization Demo
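
To make the tokenization discussion concrete, here is a minimal pure-Python sketch of the core Byte-Pair Encoding idea (repeatedly merge the most frequent adjacent pair of symbols). Real tokenizers like OpenAI's or Hugging Face's add byte-level handling, special tokens, and pre-tokenization rules on top of this.

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent symbol pairs and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged, i = [], 0
    while i < len(tokens):
        if i < len(tokens) - 1 and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

def learn_bpe(text, num_merges):
    """Learn `num_merges` BPE merges, starting from single characters."""
    tokens = list(text)
    merges = []
    for _ in range(num_merges):
        pair = most_frequent_pair(tokens)
        merges.append(pair)
        tokens = merge_pair(tokens, pair)
    return tokens, merges
```

On "low lower lowest", the first merges learned are ('l', 'o') and then ('lo', 'w'), so the shared stem "low" quickly becomes a single token — exactly the subword-reuse behaviour the demo above illustrates.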

Hands-on 1

Preprocess Text

  • Preprocess resume text
  • Tokenize and create embeddings
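
The hands-on steps above can be sketched with the standard library alone. This is a toy version: the `embed` function uses the hashing trick as a stand-in for a real sentence-embedding model, which the workshop replaces with proper embeddings later.

```python
import re
import hashlib

def preprocess(text):
    """Lowercase, strip non-alphanumeric characters, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def embed(text, dim=16):
    """Toy bag-of-words embedding via the hashing trick: each token
    increments one of `dim` buckets; the result is L2-normalised."""
    vec = [0.0] * dim
    for token in preprocess(text).split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]
```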

Part 2: Hugging Face Platform

  • Create an account and access token
  • Understand the utilities in the platform

Hands-on 2

HF API call

  • Access HF API to get feedback on different resumes
  • Add custom instructions and observe how the feedback changes
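
A minimal sketch of the API call, assuming the Inference API endpoint; `MODEL_ID` is a placeholder for whichever model the workshop uses, and `HF_TOKEN` is the access token created in Part 2. Only the prompt-building helper is pure; `get_feedback` makes a real network call.

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/MODEL_ID"  # placeholder model id
HF_TOKEN = "hf_..."  # your Hugging Face access token from Part 2

def build_prompt(resume_text, instructions="Give concise, actionable feedback."):
    """Combine custom instructions with the resume text into one prompt,
    so changing `instructions` changes the feedback style."""
    return f"{instructions}\n\nResume:\n{resume_text}\n\nFeedback:"

def get_feedback(resume_text, instructions="Give concise, actionable feedback."):
    """POST the prompt to the HF Inference API (requires a valid token)."""
    headers = {"Authorization": f"Bearer {HF_TOKEN}"}
    payload = {"inputs": build_prompt(resume_text, instructions)}
    response = requests.post(API_URL, headers=headers, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()
```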

Part 3: Fine-Tuning

  • Why Fine-Tune?
  • Process of Fine-Tuning
  • LoRA (Low-Rank Adaptation)
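
The core LoRA idea can be shown in a few lines of NumPy: the pretrained weight W stays frozen, and training only touches two small factors A and B whose product is a low-rank update. Hyperparameter names (`r`, `alpha`) follow the LoRA paper; in practice a library such as PEFT wires this into the model for you.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2                         # hidden size and LoRA rank (r << d)
alpha = 16                          # LoRA scaling hyperparameter
W = rng.normal(size=(d, d))         # frozen pretrained weight: never updated
A = rng.normal(size=(r, d)) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))                # zero-init, so the update starts at exactly 0

def lora_forward(x):
    """Forward pass with the adapted weight W + (alpha/r) * B @ A.
    Only A and B are trained: 2*d*r parameters instead of d*d."""
    delta = (alpha / r) * (B @ A)
    return x @ (W + delta).T
```

Because B starts at zero, the adapted model initially behaves identically to the frozen base model — fine-tuning then gradually learns the update through A and B.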

Hands-on 3


Part 4: Streamlit UI

Hands-on 4


Day 2

Part 1: Query Resolution & Quiz

  • Discuss questions from the previous day
  • Conduct a quiz session

Part 2: Retrieval-Augmented Generation (RAG)

  • What is RAG, and what are its advantages?
  • Comparison: RAG vs. Fine-Tuning
  • RAG Architecture
  • What are Vector Databases?
  • Different Types of Searches
  • FAISS and how it works
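
What FAISS does for a flat index can be sketched in NumPy: normalise the stored vectors, score the query against all of them, and take the top-k. This mirrors `faiss.IndexFlatIP` with L2-normalised vectors (i.e. cosine-similarity search); FAISS adds the fast implementations and approximate indexes on top.

```python
import numpy as np

def build_index(vectors):
    """L2-normalise rows so that inner product equals cosine similarity."""
    vectors = np.asarray(vectors, dtype=np.float32)
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def search(index, query, k=2):
    """Exhaustive (flat) search: score every stored vector against the
    query and return the indices and scores of the k best matches."""
    q = np.asarray(query, dtype=np.float32)
    q = q / np.linalg.norm(q)
    scores = index @ q
    top = np.argsort(-scores)[:k]
    return top, scores[top]
```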

Part 3: Hands-on 1

  • RAG Implementation
  • Integrate RAG and FAISS with the Fine-Tuned Model
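
The integration step above boils down to two pieces: retrieve the most relevant chunks for a query, then stuff them into the prompt sent to the fine-tuned model. A minimal stdlib sketch (the dot-product scoring stands in for a FAISS lookup, and the model call itself is omitted):

```python
def retrieve(chunks, chunk_vectors, query_vector, k=2):
    """Score every chunk against the query by dot product and return
    the k highest-scoring chunks (stand-in for a FAISS index lookup)."""
    scored = sorted(
        zip(chunks, (sum(a * b for a, b in zip(v, query_vector)) for v in chunk_vectors)),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return [chunk for chunk, _ in scored[:k]]

def build_rag_prompt(question, retrieved_chunks):
    """Stuff the retrieved context into the prompt for the fine-tuned model."""
    context = "\n".join(f"- {c}" for c in retrieved_chunks)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```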

Part 4: What’s Next?

  • Assignment:

    • Develop an app to summarize lengthy documents
    • Take a document as input and choose the summary format (bullet points, short paragraphs)
    • Fine-tune a suitable LLM and integrate with a simple UI
  • Open Discussion

  • Interview Question Bank

Deck

About

Fine-tuned LLM to analyse resumes and provide feedback
