A Python-based REST API backend that demonstrates modern web development practices. It's a paragraph management system with user authentication, text processing, and search functionality. The core features include JWT-based authentication, paragraph submission, automatic word indexing using background tasks, and frequency-based search.
Client (Browser/Postman)
↓ HTTP Requests
FastAPI Application
↓ Database Operations
SQLAlchemy ORM
↓ Data Storage
SQLite Database
↓ Background Processing
FastAPI BackgroundTasks (Word Indexing)
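In code, that flow corresponds roughly to the sketch below. It is illustrative only: the route, function, and database file names are assumptions, not the project's actual modules.

```python
from fastapi import BackgroundTasks, Depends, FastAPI
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

# SQLite storage behind a SQLAlchemy session factory
engine = create_engine("sqlite:///./app.db", connect_args={"check_same_thread": False})
SessionLocal = sessionmaker(bind=engine)

app = FastAPI()

def get_db():
    db = SessionLocal()  # one ORM session per request
    try:
        yield db
    finally:
        db.close()

def index_words(paragraphs: list[str]) -> None:
    """Hypothetical background job: tokenizes and indexes words after the response is sent."""
    ...

@app.post("/paragraphs")
def submit_paragraphs(payload: dict, background_tasks: BackgroundTasks,
                      db: Session = Depends(get_db)):
    # store the paragraphs through the ORM session, then queue word indexing
    background_tasks.add_task(index_words, payload.get("paragraphs", []))
    return {"status": "accepted"}
```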
- User Authentication: JWT-based auth with register/login/logout
- Paragraph Management: Submit multiple paragraphs in a single request
- Word Indexing: Automatic background indexing with word frequency tracking
- Smart Search: Find top 10 paragraphs ranked by word frequency
- Containerized: Docker setup with lightweight images
- Interactive Docs: Auto-generated Swagger UI documentation
- Backend: FastAPI (Python 3.11+)
- Database: SQLAlchemy ORM with SQLite
- Authentication: JWT tokens with bcrypt password hashing
- Background Tasks: FastAPI BackgroundTasks for word indexing
- Testing: pytest with comprehensive test coverage
- Deployment: Docker, Render-ready configuration
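The authentication entry above (JWT tokens with bcrypt password hashing) typically translates into helpers like the following sketch. It assumes passlib for bcrypt and PyJWT for token encoding, which are common choices and may differ from the libraries this project actually uses.

```python
from datetime import datetime, timedelta, timezone

import jwt                                 # PyJWT
from passlib.context import CryptContext   # bcrypt backend

pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
SECRET_KEY = "change-me"   # assumption: loaded from environment variables in a real setup
ALGORITHM = "HS256"

def hash_password(password: str) -> str:
    return pwd_context.hash(password)

def verify_password(password: str, hashed: str) -> bool:
    return pwd_context.verify(password, hashed)

def create_access_token(user_id: int, expires_minutes: int = 30) -> str:
    payload = {
        "sub": str(user_id),
        "exp": datetime.now(timezone.utc) + timedelta(minutes=expires_minutes),
    }
    return jwt.encode(payload, SECRET_KEY, algorithm=ALGORITHM)
```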
- Python 3.11+
- pip (or Docker for containerized setup)
- Clone the repository:
  git clone https://github.com/Suk022/Python-based-backend.git
  cd backend-assignment
- Install dependencies:
  pip install -r requirements.txt
- Run the application:
  uvicorn app.main:app --reload --port 8000
- Access the API:
  - API Documentation: http://localhost:8000/docs
  - Health Check: http://localhost:8000/health
  - Root Endpoint: http://localhost:8000/
API Documentation:
- POST /auth/register - Register new user
- POST /auth/login - User login (returns JWT token)
- POST /auth/logout - Logout and invalidate refresh token
- POST /paragraphs - Submit multiple paragraphs
- GET /paragraphs - List user paragraphs (paginated)
- GET /paragraphs/search?word=<word> - Search paragraphs by word frequency
- GET /docs - Interactive Swagger UI
- GET / - API information
- GET /health - Health check endpoint
- Start the application:
  # Using Python directly
  uvicorn app.main:app --reload --port 8000
  # Using Docker
  docker-compose -f docker-compose.dev.yml up --build
- Open Swagger UI: http://localhost:8000/docs
- Test User Registration:
  - Find POST /auth/register
  - Click "Try it out"
  - Enter test data:
    { "email": "test@example.com", "password": "password123" }
  - Click "Execute"
  - Expected: Status 200, user_id returned
- Test User Login:
  - Find POST /auth/login
  - Use same credentials as registration
  - Click "Execute"
  - Copy the access_token from the response
- Authorize for Protected Endpoints:
  - Click "Authorize" button (top right)
  - Enter: YOUR_ACCESS_TOKEN
  - Click "Authorize"
- Test Paragraph Submission:
  - Find POST /paragraphs/
  - Enter test paragraphs:
    { "paragraphs": [ "Python is a great programming language. Python is easy to learn.", "I love coding in Python. Python has many libraries.", "JavaScript is popular, but Python is my favorite programming language." ] }
  - Click "Execute"
  - Expected: Status 200, success message
- Test Word Search:
  - Find GET /paragraphs/search
  - Enter search word: python
  - Click "Execute"
  - Expected: Paragraphs ranked by word frequency
- Test Paragraph Listing:
  - Find GET /paragraphs/
  - Click "Execute"
  - Expected: Paginated list of your paragraphs
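The same flow can be scripted instead of clicked through. Below is a minimal sketch using the requests library against a locally running server; the payload shapes come from the examples above, while the response field names (user_id, access_token) are assumptions based on the descriptions in this README.

```python
import requests

BASE = "http://localhost:8000"
CREDS = {"email": "test@example.com", "password": "password123"}

# Register and log in
requests.post(f"{BASE}/auth/register", json=CREDS)
login = requests.post(f"{BASE}/auth/login", json=CREDS)
token = login.json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# Submit paragraphs, then search by word frequency
requests.post(
    f"{BASE}/paragraphs/",
    headers=headers,
    json={"paragraphs": ["Python is a great programming language. Python is easy to learn."]},
)
results = requests.get(f"{BASE}/paragraphs/search", params={"word": "python"}, headers=headers)
print(results.json())
```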
Testing with Postman:
- Import Collection: Create requests for each endpoint
- Set Environment Variable: {{token}} for authorization
- Test Flow: Register → Login → Save token → Test protected endpoints
# Register user
curl -X POST "http://localhost:8000/auth/register" \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com", "password": "password123"}'
# Login
curl -X POST "http://localhost:8000/auth/login" \
-H "Content-Type: application/json" \
-d '{"email": "user@example.com", "password": "password123"}'
# Submit paragraphs (use token from login)
curl -X POST "http://localhost:8000/paragraphs/" \
-H "Authorization: Bearer <your-token>" \
-H "Content-Type: application/json" \
-d '{"paragraphs": ["I love apple pie. Apple is sweet.", "The apple tree grows apples."]}'
# Search for word
curl -X GET "http://localhost:8000/paragraphs/search?word=apple" \
-H "Authorization: Bearer <your-token>"# Run tests (uses SQLite automatically)
make test
# Or directly with pytest
USE_SQLITE=true python -m pytest tests/ -v

make build # Build Docker images
make up # Start production mode (all services)
make dev-up # Start development mode (web only)
make test # Run tests
make clean # Clean Docker resources
make clean-all # Clean everything including images
make dev-local # Run locally without Docker

Database models:
- User - User accounts with email/password
- Paragraph - User-submitted text content
- WordCount - Global word frequency per user
- ParagraphWordCount - Word frequency per paragraph
- RefreshToken - JWT refresh token storage
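A rough idea of how a few of these models might be declared with SQLAlchemy. Column names and types are assumptions based on the descriptions above, not the project's actual schema.

```python
from sqlalchemy import Column, ForeignKey, Integer, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    email = Column(String, unique=True, nullable=False)
    hashed_password = Column(String, nullable=False)

class Paragraph(Base):
    __tablename__ = "paragraphs"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    content = Column(Text, nullable=False)

class ParagraphWordCount(Base):
    __tablename__ = "paragraph_word_counts"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer, ForeignKey("users.id"), nullable=False)
    paragraph_id = Column(Integer, ForeignKey("paragraphs.id"), nullable=False)
    word = Column(String, nullable=False)
    count = Column(Integer, nullable=False, default=0)
```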
Word indexing flow:
- User submits paragraphs via POST /paragraphs
- Paragraphs are stored immediately
- Background task indexes words:
  - Tokenizes text (lowercase, remove punctuation)
  - Updates ParagraphWordCount (per paragraph)
  - Updates WordCount (global user totals)
  - Uses SQL upserts for concurrency safety
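A sketch of what the indexing task might do, using SQLite's ON CONFLICT upsert through SQLAlchemy. It reuses the hypothetical ParagraphWordCount model from the sketch above; the actual project's tokenizer and table layout may differ.

```python
import re
from collections import Counter

from sqlalchemy.dialects.sqlite import insert
from sqlalchemy.orm import Session

WORD_RE = re.compile(r"[a-z0-9']+")

def tokenize(text: str) -> Counter:
    """Lowercase the text, strip punctuation, and count word occurrences."""
    return Counter(WORD_RE.findall(text.lower()))

def index_paragraph(db: Session, user_id: int, paragraph_id: int, text: str) -> None:
    # assumes a unique constraint on (paragraph_id, word) so the upsert has a conflict target
    table = ParagraphWordCount.__table__
    for word, count in tokenize(text).items():
        stmt = (
            insert(table)
            .values(user_id=user_id, paragraph_id=paragraph_id, word=word, count=count)
            .on_conflict_do_update(
                index_elements=["paragraph_id", "word"],
                set_={"count": table.c.count + count},
            )
        )
        db.execute(stmt)
    db.commit()
```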
Search flow:
- Query ParagraphWordCount for user + word
- Join with Paragraph for content
- Order by word count descending
- Return top 10 results
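In SQLAlchemy terms, that search might be expressed like this sketch against the hypothetical models above; it is not the project's exact query.

```python
from sqlalchemy import desc
from sqlalchemy.orm import Session

def search_paragraphs(db: Session, user_id: int, word: str, limit: int = 10):
    """Return the user's paragraphs that use `word` most often, highest count first."""
    rows = (
        db.query(Paragraph, ParagraphWordCount.count)
        .join(ParagraphWordCount, ParagraphWordCount.paragraph_id == Paragraph.id)
        .filter(
            ParagraphWordCount.user_id == user_id,
            ParagraphWordCount.word == word.lower(),
        )
        .order_by(desc(ParagraphWordCount.count))
        .limit(limit)
        .all()
    )
    return [{"content": p.content, "count": c} for p, c in rows]
```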
Optimized for fast search:
- (user_id, word) - Word lookup
- (user_id, word, count) - Frequency sorting
- (paragraph_id) - Paragraph joins
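With SQLAlchemy, indexes like these could be declared roughly as follows. The index names are illustrative, and the (user_id, word) index may equally apply to the global WordCount table; the project's actual definitions may live in each model's __table_args__.

```python
from sqlalchemy import Index

# composite and single-column indexes on the per-paragraph word counts
Index("ix_pwc_user_word", ParagraphWordCount.user_id, ParagraphWordCount.word)
Index("ix_pwc_user_word_count",
      ParagraphWordCount.user_id, ParagraphWordCount.word, ParagraphWordCount.count)
Index("ix_pwc_paragraph_id", ParagraphWordCount.paragraph_id)
```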
The project is deployed on Render and can be accessed at https://python-based-backend.onrender.com/; the APIs can be tested directly through that link.