HireSense is an AI-powered interview coaching platform that provides real-time feedback, content moderation, personalized suggestions, progress tracking, and audio support to help job seekers improve their interview performance.
- Whisper Integration: Record answers using your microphone
- Real-time Transcription: Convert speech to text using OpenAI Whisper
- Audio Analysis: Get feedback on both text and voice responses
- Professional Audio Processing: Noise suppression and echo cancellation
- Multi-provider support: Groq (primary), OpenAI, Anthropic, Google AI
- Real-time feedback with detailed scoring (1-10 scale)
- Smart fallback system with automatic provider switching
- Session History: Automatic saving of all interview sessions
- Performance Metrics: Track scores across different question categories
- Improvement Analytics: Identify strengths and areas for improvement
- Goal Setting: Weekly and monthly session targets
- Progress Visualization: Charts and trends for performance monitoring
- Advanced filtering for inappropriate or unprofessional responses
- Professional standards enforcement for interview scenarios
- Safety mechanisms to prevent harmful content
- Detailed scoring with strengths, weaknesses, and actionable suggestions
- Category-specific feedback for behavioral, technical, and situational questions
- Industry-standard interview evaluation criteria
- SQLite Database: Local data storage for development
- Session Management: Automatic saving of questions, answers, and feedback
- User Progress: Persistent tracking across sessions
- Analytics Dashboard: Comprehensive performance insights
- Sub-second response times with Groq AI
- High availability with robust error handling
- Scalable architecture built on Next.js 15
- Real-time audio processing and transcription
- Frontend: Next.js 15, React 19, TypeScript, Tailwind CSS
- AI/ML: Groq, OpenAI (GPT-4 + Whisper), Anthropic Claude, Google Gemini
- Database: Prisma ORM with SQLite (development) / PostgreSQL (production)
- Audio Processing: Web Audio API, MediaRecorder, OpenAI Whisper
- Deployment: Vercel, Netlify, or any Node.js hosting platform
- Node.js 18+ and npm
- Groq API key (free tier: 14,400 requests/day)
- OpenAI API key (optional, for Whisper audio transcription)
- Clone and install dependencies:

```bash
git clone https://github.com/gupta-nu/HireSense.git
cd HireSense
npm install
```

- Database setup:

```bash
npx prisma generate
npx prisma db push
```

- Environment setup:

```bash
cp .env.example .env.local
```

Add your API keys to `.env.local`:

```env
# Database
DATABASE_URL="file:./dev.db"
# Primary Provider (Required)
GROQ_API_KEY=gsk_your_groq_api_key_here
# OpenAI for Whisper (Audio Transcription)
OPENAI_API_KEY=sk-your_openai_key_here
# Optional Fallback Providers
ANTHROPIC_API_KEY=your_anthropic_key_here
GOOGLE_API_KEY=your_google_ai_key_here
# Demo Mode (set to 'true' to use without API keys)
NEXT_PUBLIC_DEMO_MODE=false
```

- Start development server:

```bash
npm run dev
```

Visit http://localhost:3000 to start practicing interviews.
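The `NEXT_PUBLIC_DEMO_MODE` flag above switches the app to canned feedback so it can run without any API keys. In Next.js, a public env flag like this is read at build time; a minimal sketch of the check (illustrative only, not necessarily the exact code in `src/lib/demo-feedback.ts`):

```ts
// Illustrative sketch: NEXT_PUBLIC_* variables are inlined into both the server
// and client bundles at build time, so a plain equality check works everywhere.
export const isDemoMode = process.env.NEXT_PUBLIC_DEMO_MODE === "true";
```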
HireSense supports multiple AI providers with automatic fallback:
| Provider | Speed | Free Tier | Best For |
|---|---|---|---|
| Groq | Fastest | 14,400/day | Primary choice |
| OpenAI | Fast | Limited | High quality |
| Anthropic | Good | Limited | Detailed analysis |
| Google AI | Good | Generous | Backup option |
Groq (Recommended - FREE):
- Visit console.groq.com
- Sign up with Google/GitHub
- Create API key (starts with `gsk_`)
OpenAI (Optional):
- Visit platform.openai.com
- Create account and add billing
- Generate API key (starts with `sk-`)
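To confirm a Groq key works before starting the app, you can query Groq's OpenAI-compatible REST API directly (endpoint per Groq's own documentation; this snippet is not part of the repo):

```ts
// Quick sanity check (Node 18+, ESM top-level await): list the models the key
// can access. A 200 response means the key is valid; 401 means it is not.
const res = await fetch("https://api.groq.com/openai/v1/models", {
  headers: { Authorization: `Bearer ${process.env.GROQ_API_KEY}` },
});
console.log(res.ok ? "Groq key accepted" : `Groq returned HTTP ${res.status}`);
```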
`POST /api/interview/analyze`

```jsonc
// Request
{
"question": "Tell me about yourself",
"answer": "I am a software engineer...",
"category": "general" | "behavioral" | "technical" | "situational",
"questionId": "unique-question-id",
"duration": 120, // seconds
"userId": "user-123",
"isAudioAnswer": false,
"transcript": "transcribed audio text" // if audio
}
// Response
{
"success": true,
"feedback": {
"score": 8,
"strengths": ["Clear communication", "Relevant experience"],
"weaknesses": ["Could add specific examples"],
"suggestions": ["Include quantifiable achievements"],
"overallFeedback": "Strong response with room for improvement..."
}
}
```
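A client-side call to this endpoint might look like the following sketch (field names mirror the request/response shapes above; error handling omitted):

```ts
// Illustrative sketch: submitting a typed answer for analysis from the browser.
const res = await fetch("/api/interview/analyze", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    question: "Tell me about yourself",
    answer: "I am a software engineer...",
    category: "general",
    questionId: "unique-question-id",
    duration: 120,
    userId: "user-123",
    isAudioAnswer: false,
  }),
});
const data = await res.json();
if (data.success) {
  console.log(`Score: ${data.feedback.score}/10`);
  console.log("Suggestions:", data.feedback.suggestions);
}
```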
`POST /api/audio/transcribe`

```jsonc
// Request (FormData)
audio: File // Audio file (WebM, MP3, WAV, etc.)
// Response
{
"success": true,
"transcript": "Transcribed text from audio",
"duration": 45, // seconds
"wordCount": 67
}
```
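Recording an answer in the browser and sending it here could look roughly like this sketch, using the MediaRecorder API from the tech stack (the actual `InterviewSimulator.tsx` component may differ):

```ts
// Illustrative sketch: record ~5 seconds of microphone audio and transcribe it.
async function recordAndTranscribe(): Promise<string> {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { noiseSuppression: true, echoCancellation: true },
  });
  const recorder = new MediaRecorder(stream, { mimeType: "audio/webm" });
  const chunks: Blob[] = [];
  recorder.ondataavailable = (e) => chunks.push(e.data);

  const stopped = new Promise<void>((resolve) => (recorder.onstop = () => resolve()));
  recorder.start();
  await new Promise((r) => setTimeout(r, 5000)); // record for 5 seconds
  recorder.stop();
  await stopped;
  stream.getTracks().forEach((t) => t.stop()); // release the microphone

  const form = new FormData();
  form.append("audio", new Blob(chunks, { type: "audio/webm" }), "answer.webm");
  const res = await fetch("/api/audio/transcribe", { method: "POST", body: form });
  const data = await res.json();
  return data.success ? data.transcript : "";
}
```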
`GET /api/user/progress?userId=user-123`

```jsonc
// Response
{
"success": true,
"progress": {
"totalSessions": 15,
"averageScore": 7.2,
"categoryScores": {
"behavioral": 8.1,
"technical": 6.8,
"situational": 7.0,
"general": 7.5
},
"improvementAreas": ["Adding specific examples", "Quantifying achievements"],
"strengths": ["Clear communication", "Technical knowledge"],
"recentSessions": [...] // Last 10 sessions
}
}
```
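Loading these numbers for the dashboard is a single GET request; a minimal sketch:

```ts
// Illustrative sketch: load a user's aggregated progress for the dashboard.
const res = await fetch(`/api/user/progress?userId=${encodeURIComponent("user-123")}`);
const { success, progress } = await res.json();
if (success) {
  console.log(`Average score: ${progress.averageScore} over ${progress.totalSessions} sessions`);
}
```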
Error responses follow a common format:

```jsonc
// Error Response
{
"success": false,
"error": "Error message",
"errorType": "quota_exceeded" | "invalid_api_key" | "rate_limit"
}
```

The system uses a multi-provider AI architecture with automatic fallback:
```
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Audio Input    │────▶│  Next.js        │────▶│  AI Provider    │
│  + Text Input   │     │  API Routes     │     │  Manager        │
│  (Whisper)      │     │  /analyze       │     │  (Multi-LLM)    │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Database       │     │  Response       │     │  Groq AI        │
│  (SQLite/       │     │  Parser &       │     │  (Primary)      │
│  PostgreSQL)    │     │  Validator      │     │  14.4k req/day  │
└─────────────────┘     └─────────────────┘     └─────────────────┘
         │                       │                       │
         ▼                       ▼                       ▼
┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐
│  Progress       │     │  Frontend       │     │  Fallback:      │
│  Analytics &    │◀────│  Interview      │◀────│  OpenAI →       │
│  Tracking       │     │  Simulator      │     │  Anthropic →    │
│  Dashboard      │     │  (React)        │     │  Demo Mode      │
└─────────────────┘     └─────────────────┘     └─────────────────┘
                                 │
                                 ▼
                        ┌─────────────────┐
                        │  Content        │
                        │  Moderation     │
                        │  & Safety       │
                        └─────────────────┘
```
- Audio Processing: Web Audio API + OpenAI Whisper for speech-to-text
- AI Provider Manager: Multi-provider system with intelligent fallback (sketched after this list)
- Database Layer: Prisma ORM with SQLite/PostgreSQL for session storage
- Progress Analytics: Real-time tracking and performance visualization
- Content Moderation: Advanced filtering and safety checks
- Response Parser: Standardized feedback format validation
- Interview Simulator: React component with audio/text input modes
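The provider manager itself lives in `src/lib/ai-providers-simple.ts`; the fallback behavior it implements is roughly the following (simplified sketch with hypothetical names, not the repo's actual code):

```ts
// Simplified sketch of provider fallback. Names and shapes are hypothetical;
// see src/lib/ai-providers-simple.ts for the real implementation.
type Provider = { name: string; analyze: (prompt: string) => Promise<string> };

async function analyzeWithFallback(
  prompt: string,
  providers: Provider[],        // e.g. [groq, openai, anthropic, google]
  demoFallback: () => string,   // canned feedback used when every provider fails
): Promise<string> {
  for (const provider of providers) {
    try {
      // First provider that answers wins; Groq goes first for sub-second latency.
      return await provider.analyze(prompt);
    } catch (err) {
      // Quota, rate-limit, or invalid-key errors fall through to the next provider.
      console.warn(`${provider.name} failed, falling back`, err);
    }
  }
  return demoFallback();
}
```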
```
HireSense/
├── src/
│   ├── app/
│   │   ├── api/
│   │   │   ├── interview/analyze/   # Interview analysis endpoint
│   │   │   ├── audio/transcribe/    # Whisper audio transcription
│   │   │   ├── user/progress/       # User progress tracking
│   │   │   └── analytics/           # Platform analytics
│   │   ├── globals.css              # Global styles
│   │   ├── layout.tsx               # Root layout
│   │   └── page.tsx                 # Home page
│   ├── components/
│   │   ├── InterviewSimulator.tsx   # Main interview interface
│   │   └── ProgressDashboard.tsx    # Progress visualization
│   ├── lib/
│   │   ├── ai-providers-simple.ts   # Multi-provider AI system
│   │   ├── database.ts              # Database operations
│   │   ├── demo-feedback.ts         # Demo mode responses
│   │   └── interview-utils.ts       # Shared utilities
│   └── types/
│       └── interview.ts             # TypeScript definitions
├── prisma/
│   └── schema.prisma                # Database schema
├── public/                          # Static assets
├── .env.example                     # Environment template
├── package.json                     # Dependencies
└── README.md                        # Documentation
```
We welcome contributions! Please read our Contributing Guidelines for details on:
- Code style and standards
- Development workflow
- Pull request process
- Issue reporting
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Make your changes and add tests
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.