In this workshop, you will learn how to use HuggingFace to run AI in your browser and how to integrate your fine-tuned model in your application. The workshop is divided into two chapters. In the first chapter, you will learn how to use pretrained models in a Next.js SSR application using `transformers.js` to solve different NLP tasks. In the second chapter, you will learn how to integrate your fine-tuned model in a Python FastAPI application.
- Node.js 16.14 or later 📡
- Python 3.9.X (`venv` and `pip` are included) 🐍
- Docker 🐳 - Required only for the DevContainer setup
- VSCode 📝
- Clone the repository
- Install client dependencies inside the `client` directory:

  ```bash
  npm install
  ```

- Verify that Python 3.9.X is installed on your machine:

  ```bash
  python --version # or python3 --version
  ```

- Install server dependencies inside the `server` directory:

  Linux/MacOS:

  ```bash
  /usr/bin/python3 -m venv .env
  source .env/bin/activate
  pip install -r requirements.txt
  ```

  Windows:

  ```bash
  python -m venv .env
  .env\Scripts\activate.bat
  pip install -r requirements.txt
  ```
⚠️ For some reason, model inference doesn't work as expected with this setup and gets stuck. ⚠️
- Make sure you have Docker installed and running on your machine
- Open VSCode
- Install the `Remote - Containers` extension
- Press `F1`, select or search for `Dev Containers: Clone Repository in Container Volume`, and paste the repository URL
- Wait for the container to build and install the dependencies
Run the following command inside the `client` directory:

```bash
npm run dev
```

- Open your browser and navigate to `http://localhost:3000`. 🌐
- Type a sentence in English in the text area: "My name is John, I live in Singapore and work at Microsoft."
- Analyze the result. 👀
- Try other sentences and see how the model performs.
Let's explore the HuggingFace platform together and then analyze the code 🔍
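Before diving into the repository, here is a minimal sketch of what a token-classification (NER) API route built with `transformers.js` can look like. The route path and model name (`Xenova/bert-base-NER`) are illustrative assumptions and may differ from the actual code in `client/src/app/api`.

```ts
// Hypothetical sketch of a token-classification (NER) route,
// e.g. client/src/app/api/ner/route.ts. Path and model are assumptions.
import { pipeline } from "@xenova/transformers";
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { text } = await request.json();

  // Load a token-classification pipeline; the model is downloaded and
  // cached on first use.
  const ner = await pipeline("token-classification", "Xenova/bert-base-NER");

  // One entry per recognized token, with its entity label and score.
  const entities = await ner(text);

  return NextResponse.json(entities);
}
```

Because the model is cached after the first download, the first request is noticeably slower than the following ones.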
- Change the model from NER to Fill Mask (e.g. `Xenova/distilbert-base-cased`); see the sketch after this list.
- Try the same sentence as before, "My name is John, I live in Singapore and work at Microsoft.", but replace the word "work" with `[MASK]`.
- Analyze the result. 🧐
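As a reference for the step above, here is a minimal sketch of a fill-mask call with `transformers.js`, assuming the same pipeline-based setup; variable names are illustrative.

```ts
// Hypothetical sketch: switching the pipeline task to fill-mask.
import { pipeline } from "@xenova/transformers";

// Load a fill-mask pipeline with the model suggested above.
const unmasker = await pipeline("fill-mask", "Xenova/distilbert-base-cased");

// The model predicts the most likely tokens for the [MASK] position.
const predictions = await unmasker(
  "My name is John, I live in Singapore and [MASK] at Microsoft."
);

// Each prediction holds the filled-in sequence, the predicted token and a score.
console.log(predictions);
```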
- Let's do a zero-shot classification (e.g. `Xenova/bart-large-mnli`)
- Open `client/src/app/api/text-classification/route.ts`
- Modify the code so you can support different labels/categories, as sketched below:

  ```ts
  await classifier(text, ["cat1", "cat2", "cat3"]);
  ```
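Here is a minimal sketch of what the modified route could look like after switching to zero-shot classification. The handler shape and label names are assumptions built around the snippet above, not necessarily the repository's exact code.

```ts
// Hypothetical sketch of client/src/app/api/text-classification/route.ts
// after switching to zero-shot classification. Labels are illustrative.
import { pipeline } from "@xenova/transformers";
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { text } = await request.json();

  // Zero-shot classification scores the text against arbitrary candidate
  // labels without any task-specific fine-tuning.
  const classifier = await pipeline(
    "zero-shot-classification",
    "Xenova/bart-large-mnli"
  );

  const result = await classifier(text, ["cat1", "cat2", "cat3"]);

  // result contains the input sequence plus the labels sorted by score.
  return NextResponse.json(result);
}
```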
Everything looks great, but how can I customize existing models?
Check this awesome explanation: A brief introduction to transformers 📰
Then we will follow the notebook in Google Colab 📓
Copy the model directory into the `server` directory
Let's analyze the code 🔍
Run the following command inside the `server` directory:

```bash
uvicorn main:app --reload
```

- Open your browser and navigate to `http://localhost:8000/docs`. You should see the Swagger UI.
- Open the `POST /` endpoint and click on `Try it out`.
- Choose a file to upload and select your resume (e.g. `resume.pdf`).
- Click on `Execute` and analyze the JSON response.
- Open your browser and navigate to `http://localhost:3000/resume`.
- Upload your resume and analyze the UI result; a sketch of how the page could call the server is shown below.
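For reference, here is a minimal sketch of how the `/resume` page could send the uploaded file to the FastAPI server. The endpoint URL, form field name, and response shape are assumptions.

```ts
// Hypothetical sketch of the client-side call made by the /resume page.
// The endpoint URL and the "file" field name are assumptions about the server.
async function analyzeResume(file: File): Promise<unknown> {
  const formData = new FormData();
  // FastAPI typically reads uploaded files from multipart form data.
  formData.append("file", file);

  const response = await fetch("http://localhost:8000/", {
    method: "POST",
    body: formData,
  });

  if (!response.ok) {
    throw new Error(`Server returned ${response.status}`);
  }

  // The JSON body is the same payload you inspected in the Swagger UI.
  return response.json();
}
```

The returned JSON can then be rendered directly by the page, mirroring what you saw at `http://localhost:8000/docs`.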
- In Google Colab, open `nlp_workshop_resume_analysis.ipynb` and follow the instructions.