Hey there! HandScribe is a cool little full-stack app I built. You can doodle on a digital canvas, and it'll figure out what you wrote in real time! It's all powered by a React frontend, a Node.js backend to connect everything, and a Python service with a smart TensorFlow model doing the handwriting recognition.
Wanna try it out? The live app is right here, give it a go!
https://handscribe-six.vercel.app/
- Draw in Real-Time: The canvas is super smooth and responsive, perfect for jotting down a few lines.
- Handwriting Recognition: It uses a machine learning model to turn your drawings into actual text!
- Full-Stack Power: This isn't just a frontend trick. It's a robust system with a separate frontend, backend, and ML model.
- Ready to Deploy: The backend is containerized with Docker, so it's consistent and easy to scale.
Curious about what's under the hood? Here's the tech I used:
Frontend
- React, Vite, Axios, CSS
Backend
- Node.js, Express.js, Python (for an image processing script)
ML Model
- Python, TensorFlow, Keras (based on SimpleHTR)
Deployment
- Vercel (Frontend), Render (Backend & Model API), Docker
So, how does it all work together? I set it up with three main parts to keep things organized:
- The Frontend (on Vercel): This is what you see! A user draws on the canvas in React, and when they're done, the drawing data gets sent off to the backend.
- The Backend (on Render): This is the orchestrator. It grabs the drawing data, tells a quick Python script to turn it into a PNG image, and then sends that image over to the brains of the operation: the Model API.
- The Model API (also on Render): This is a separate Python service that hosts the pre-trained SimpleHTR model. It gets the image, works its magic to figure out the text, and sends it back.
Finally, the backend passes the recognized text back to the frontend for you to see. Pretty neat, right?!
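To make that flow a bit more concrete, here's a rough sketch of the frontend hand-off. Heads up: the `/api/recognize` path, the payload shape, and the `recognizeDrawing` helper are my illustrative assumptions, not the app's exact API:

```js
// Hypothetical sketch of the canvas hand-off; endpoint path and
// payload shape are assumptions, not the repo's exact API.
import axios from "axios";

async function recognizeDrawing(canvas, backendUrl) {
  // Serialize the drawing to a base64-encoded PNG data URL.
  const image = canvas.toDataURL("image/png");

  // Ship it to the backend and wait for the recognized text.
  const res = await axios.post(`${backendUrl}/api/recognize`, { image });
  return res.data.text;
}
```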
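And on the other side, the backend's job boils down to: convert, forward, relay. Here's a minimal Express sketch of that orchestration, again with assumed names (the route, the `process_image.py` script, the output path, and the response shape are not from the repo):

```js
// Hypothetical Express sketch of the orchestrator; script name,
// route, and file handling here are illustrative assumptions.
const express = require("express");
const { execFile } = require("child_process");
const axios = require("axios");
const fs = require("fs");

const app = express();
app.use(express.json({ limit: "5mb" })); // drawing data URLs can be large

app.post("/api/recognize", (req, res) => {
  // 1. Pipe the drawing data to a Python script that writes a PNG to disk.
  const py = execFile("python", ["process_image.py", "out.png"], async (err) => {
    if (err) return res.status(500).json({ error: "image processing failed" });

    // 2. Forward the PNG to the model API and relay the recognized text.
    const image = fs.readFileSync("out.png");
    const { data } = await axios.post(process.env.MODEL_API_URL, image, {
      headers: { "Content-Type": "application/octet-stream" },
    });
    res.json({ text: data });
  });
  py.stdin.write(req.body.image);
  py.stdin.end();
});

app.listen(5000, () => console.log("backend listening on :5000"));
```

The real server.js will differ in the details, but that's the shape of the round trip.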
Want to get this running on your own machine? Awesome! Here's how to get the frontend and backend set up. For local development, you can just use the live ML model I've already hosted on Render, which makes things way easier!
You'll need a few things installed first:
- Node.js (v18 or later)
- Python (v3.8 is recommended)
- npm
The backend is the middleman that connects the frontend and the model.
- Head over to the backend directory:

  ```bash
  cd backend
  ```

- Install the Node packages:

  ```bash
  npm install
  ```

- Get the Python stuff ready:

  ```bash
  # Create a virtual environment
  python -m venv venv

  # Fire it up (on Windows)
  .\venv\Scripts\activate
  # Or on macOS/Linux
  # source venv/bin/activate

  # Install the Python packages
  pip install -r requirements.txt
  ```

- Create your environment file:
  - Make a new file called `.env` in the `/backend` folder.
  - Just copy and paste the stuff from the `.env.example` section below.

- Start the backend server!

  ```bash
  node server.js
  ```

  You should see it running at http://localhost:5000.
Now for the frontend:

- Jump into the frontend directory:

  ```bash
  cd frontend
  ```

- Install the Node packages:

  ```bash
  npm install
  ```

- Start the dev server:

  ```bash
  npm run dev
  ```

  Your browser should open up to http://localhost:5173.
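By the way, if you're curious how the frontend knows where the backend lives: Vite exposes any `VITE_`-prefixed variable from `.env` on `import.meta.env` at build time. Here's a tiny sketch of the idea (the actual wiring in this repo may look different):

```js
// Sketch only: the repo's actual client setup may differ.
import axios from "axios";

// Vite inlines VITE_-prefixed vars from frontend/.env at build time.
const api = axios.create({
  baseURL: import.meta.env.VITE_API_BASE_URL,
});

export default api;
```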
You'll just need to create a couple of `.env` files to tell the apps where to find each other on your local machine.

In `/backend/.env`:

```env
# The URL for your LOCAL frontend dev server
FRONTEND_URL_DEV="http://localhost:5173"

# We'll use the live model API for local dev to keep things simple
MODEL_API_URL="https://handscribe.onrender.com/predict"
```

In `/frontend/.env`:

```env
# The URL for your LOCAL backend server
VITE_API_BASE_URL="http://localhost:5000"
```

A huge shout-out to Harald Scheidl! The handwriting recognition model is based on their amazing SimpleHTR project. You can check out the original work here: https://github.com/githubharald/SimpleHTR