# ReadEaseAI

**AI-Powered Accessibility Platform for Inclusive Reading Experiences**

ReadEaseAI transforms PDF documents into personalized, accessible reading experiences tailored for individuals with diverse learning needs and disabilities. Powered by Claude AI and OpenAI, our platform adapts content in real time to support dyslexia, visual impairments, autism, and ADHD.
## Table of Contents

- Features
- Disability-Specific Modes
- Tech Stack
- Getting Started
- Installation
- Environment Variables
- Usage
- API Documentation
- Architecture
- Contributing
- Roadmap
- License
## Features

- 🎯 Multi-Modal Accessibility: Four specialized reading modes optimized for different disabilities
- 🤖 AI-Powered Transformation: Claude Sonnet 4.5 intelligently adapts content structure and complexity
- 🔊 Text-to-Speech: Natural voice narration powered by OpenAI TTS
- 📄 PDF Intelligence: Advanced document parsing with context-aware extraction
- 🎨 Customizable Display: Adaptive fonts, spacing, and visual layouts
- ♿ WCAG Compliant: Designed with accessibility-first principles
- Semantic Simplification: Multi-level text complexity reduction
- Conversational Q&A: Interactive document exploration with memory
- Structured Learning: Auto-generated lessons with quizzes and visual aids
- Content Chunking: ADHD-optimized micro-lessons with engagement tracking
- Voice Navigation: Hands-free document interaction for blind users
## Disability-Specific Modes

### Dyslexia Mode

*Optimized for reading comprehension challenges*

- ✅ OpenDyslexic Font: Specialized typeface with weighted bottoms
- ✅ Adjustable Letter Spacing: 0-5px customization via sidebar
- ✅ Three Reading Levels:
  - Mild: Simplified sentence structure (15-20 words)
  - Moderate: Middle-school vocabulary (≤15 words/sentence)
  - Severe: Elementary-level language (≤10 words/sentence)
- ✅ Color Overlays: Reduce visual stress
- ✅ Text-to-Speech: Synchronized audio reading

AI Processing: Claude analyzes text complexity and rewrites content while preserving meaning (`api/levels`)
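To make the three reading levels above concrete, here is a minimal sketch of how the sentence-length caps could be checked. The function name and shape are illustrative assumptions, not ReadEaseAI's actual implementation; only the word caps come from this README.

```typescript
// Illustrative mapping of the dyslexia reading levels to the sentence-length
// caps listed above. Not the shipped code; the caps are from this README.
type ReadingLevel = "mild" | "moderate" | "severe";

const MAX_WORDS: Record<ReadingLevel, number> = {
  mild: 20,     // simplified sentence structure (15-20 words)
  moderate: 15, // middle-school vocabulary (<=15 words/sentence)
  severe: 10,   // elementary-level language (<=10 words/sentence)
};

/** Returns true if every sentence in `text` respects the level's word cap. */
function meetsLevel(text: string, level: ReadingLevel): boolean {
  return text
    .split(/[.!?]+/)              // naive sentence split for illustration
    .map((s) => s.trim())
    .filter(Boolean)
    .every((s) => s.split(/\s+/).length <= MAX_WORDS[level]);
}
```

A simplification pass would rewrite any sentence that fails this check until the whole text passes for the selected level.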
### Blindness Mode

*Complete audio-first experience*

- ✅ Conversational AI Assistant: Ask questions about document content
- ✅ Voice-Controlled Navigation: Spacebar-activated speech recognition
- ✅ PDF-to-Narration Conversion: Describes text AND images
- ✅ Keyboard-Only Controls: Full accessibility without mouse
- ✅ Conversation Memory: Context-aware dialogue across questions

Interaction Flow:

1. Press & hold SPACEBAR → Ask question
2. Release SPACEBAR → Submit query
3. Claude analyzes document → Generates spoken answer
4. Press SPACEBAR during playback → Interrupt/new question

AI Processing:

- `/api/narrate`: Converts PDF to conversational narration
- `/api/conversation`: Handles Q&A with document context
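The spacebar push-to-talk flow above can be modeled as a small pure state machine. This is an illustrative sketch, not ReadEaseAI's actual implementation; the state and event names are assumptions.

```typescript
// Sketch of the push-to-talk interaction flow as a pure reducer.
// States/events are assumed names for illustration only.
type VoiceState = "idle" | "listening" | "waiting" | "speaking";
type VoiceEvent = "spaceDown" | "spaceUp" | "answerReady" | "audioEnded";

function nextState(state: VoiceState, event: VoiceEvent): VoiceState {
  switch (state) {
    case "idle":
      // Press & hold SPACEBAR -> start capturing the question
      return event === "spaceDown" ? "listening" : state;
    case "listening":
      // Release SPACEBAR -> submit the query, wait for Claude's answer
      return event === "spaceUp" ? "waiting" : state;
    case "waiting":
      // Answer generated -> TTS playback begins
      return event === "answerReady" ? "speaking" : state;
    case "speaking":
      // Pressing SPACEBAR during playback interrupts with a new question
      if (event === "spaceDown") return "listening";
      return event === "audioEnded" ? "idle" : state;
  }
}
```

In the browser, `keydown`/`keyup` handlers for the spacebar would dispatch `spaceDown`/`spaceUp`, with speech recognition and audio playback driven by the resulting state.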
### Autism Mode

*Structured, predictable learning environment*

- ✅ Chunked Content: Manageable sections with clear progression
- ✅ Concrete Examples: Abstract concepts explained literally
- ✅ Visual Vocabulary Aids: Term → Definition → Example tables
- ✅ Interactive Assessments:
  - True/False with explanations
  - Multiple choice (4 options)
  - Short answer with rubrics
- ✅ "Draw-It" Visual Learning: Diagram instructions with labels
- ✅ Spaced Repetition: Auto-generated review schedule
- ✅ Age Personalization: Content adapts to grade level
AI Processing: `/api/generate-lesson` returns a strict JSON schema:

```json
{
  "summary": ["5-7 bullet points"],
  "vocabulary": [{"term": "...", "definition": "...", "example": "..."}],
  "questions": {
    "trueFalse": {...},
    "multipleChoice": {...},
    "shortAnswer": {...}
  },
  "drawIt": {"title": "...", "labels": [...], "caption": "..."},
  "reviewPlan": ["Tomorrow", "In 3 days"]
}
```

### ADHD Mode

*Engagement-optimized micro-learning*
- ✅ Micro-Lessons: 100-150 word chunks with emoji anchors
- ✅ Focus Timer: 25-minute Pomodoro sessions
- ✅ Gamified Progress: 🌱 → 🌿 → 🌳 → 🏆 milestone tracking
- ✅ Active Highlighting: Visual focus on current section
- ✅ Break Reminders: Prevents burnout
- ✅ Interactive Checkpoints: Quick questions after each chunk
- ✅ 6 Voice Options: Personalize narration (alloy, echo, fable, onyx, nova, shimmer)
Content Structure:

```
📋 Quick Summary (3-5 bullets)
        ↓
🎯 Micro-Lesson 1 (100-150 words)
        ↓
✅ Checkpoint Question
        ↓
🎯 Micro-Lesson 2 ...
```

AI Processing: `/api/convert` transforms PDFs with ADHD-specific constraints (short sentences, emojis, whitespace optimization)
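Since the `/api/convert` response returns Markdown with `---` separators between chunks, the client can split it into micro-lessons with a small helper. This is an illustrative sketch, not the shipped reader code.

```typescript
// Illustrative helper (an assumption, not ReadEaseAI's actual code):
// splits the markdown returned by /api/convert into micro-lesson chunks.
// The API response separates chunks with "---" on its own line.
function splitIntoChunks(markdown: string): string[] {
  return markdown
    .split(/^\s*---\s*$/m)          // "---" alone on a line is the separator
    .map((chunk) => chunk.trim())
    .filter((chunk) => chunk.length > 0);
}
```

The chunked reader can then render one element per chunk and advance through them with checkpoints in between.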
## Tech Stack

### Frontend

| Technology | Version | Purpose |
|---|---|---|
| Next.js | 15.2.4 | React framework with App Router |
| React | 19.0.0 | UI library |
| TypeScript | 5.x | Type safety |
| Tailwind CSS | 4.x | Utility-first styling |
| Radix UI | Latest | Accessible component primitives |
| Lucide React | 0.487.0 | Icon library |
| react-markdown | 10.1.0 | Markdown rendering |
### AI & Processing

| Technology | Version | Purpose |
|---|---|---|
| Claude AI | Sonnet 4.5 | Document processing, Q&A, content transformation |
| OpenAI | 4.91.1 | Text-to-Speech (TTS-1, TTS-1-HD) |
| pdf-parse | 1.1.1 | PDF text extraction |
| pdfjs-dist | 5.1.91 | PDF rendering |
| Web Speech API | Native | Browser speech recognition (blindness mode) |
### Infrastructure

- Vercel: Deployment platform
- Vercel Analytics: Usage tracking
- Environment Variables: Secure API key management
## Getting Started

### Prerequisites

- Node.js 18.x or higher
- pnpm (recommended) or npm/yarn
- Anthropic API Key (from the Anthropic Console)
- OpenAI API Key (from the OpenAI platform)
### Installation

1. Clone the repository

   ```bash
   git clone https://github.com/yourusername/ReadEaseAI.git
   cd ReadEaseAI
   ```

2. Install dependencies

   ```bash
   pnpm install   # or npm install
   ```

3. Set up environment variables

   ```bash
   cp .env.example .env
   ```

   Edit `.env` and add your API keys:

   ```env
   ANTHROPIC_API_KEY=your_anthropic_key_here
   OPENAI_API_KEY=your_openai_key_here
   ```

4. Run the development server

   ```bash
   pnpm dev   # or npm run dev
   ```

5. Open your browser and navigate to http://localhost:3000
## Environment Variables

Create a `.env` file in the root directory:

```env
# Required
ANTHROPIC_API_KEY=sk-ant-xxxxx
OPENAI_API_KEY=sk-xxxxx

# Optional
NODE_ENV=development
NEXT_PUBLIC_ANALYTICS_ID=your_analytics_id
```

## Usage

- Select Accessibility Mode: Choose from Dyslexia, Blindness, Autism, or ADHD on the home page
- Upload PDF: Drag & drop or select a PDF document
- AI Processing: Document is analyzed and transformed (10-30 seconds)
- Customized Reading: Interact with content optimized for your needs
### Tips for Dyslexia Mode

- Use the sidebar to adjust letter spacing (find your sweet spot)
- Try different reading levels to match your comfort
- Enable TTS for audio reinforcement

### Tips for Blindness Mode

- Press and HOLD the spacebar to ask questions
- Navigate with Tab/Enter for keyboard-only control
- Ask: "Summarize page 3" or "What is the main argument?"

### Tips for Autism Mode

- Progress through sections linearly for predictable structure
- Use the Draw-It section for visual reinforcement
- Review vocabulary before diving into content

### Tips for ADHD Mode

- Set a realistic timer goal (start with 10 minutes)
- Complete checkpoints to maintain engagement
- Celebrate milestones (track your 🌱 → 🌿 → 🌳 → 🏆 journey)
## API Documentation

### `POST /api/narrate`

Converts a PDF to audio narration.

Request:

```ts
{
  file: Buffer,     // PDF file as buffer
  fileName: string  // Original filename
}
```

Response:

```ts
{
  narration: string, // Conversational narration text
  success: boolean
}
```

Model: Claude Sonnet 4.5 (4000 tokens)
### `POST /api/conversation`

Q&A with document context.

Request:

```ts
{
  question: string,
  documentContext: string,
  conversationHistory: Message[]
}
```

Response:

```ts
{
  answer: string, // Audio-optimized response
  success: boolean
}
```

Model: Claude Sonnet 4.5 (1000 tokens)
### `POST /api/generate-lesson`

Generates an autism-friendly structured lesson.

Request:

```ts
{
  file: Buffer,
  age: number,
  sectionNumber: number
}
```

Response:

```ts
{
  summary: string[],
  vocabulary: Vocabulary[],
  questions: Questions,
  drawIt: DrawItSection,
  reviewPlan: string[]
}
```

Model: Claude Sonnet 4.5 (4000 tokens)
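A client consuming this endpoint can type and validate the lesson payload before rendering. The interfaces below mirror the JSON schema in this README; the runtime guard is an illustrative sketch, not the platform's own validator.

```typescript
// Types mirroring the /api/generate-lesson JSON schema described in this README.
// The shallow runtime check is illustrative only.
interface VocabularyItem { term: string; definition: string; example: string; }

interface Lesson {
  summary: string[];                 // 5-7 bullet points
  vocabulary: VocabularyItem[];
  questions: { trueFalse: unknown; multipleChoice: unknown; shortAnswer: unknown };
  drawIt: { title: string; labels: string[]; caption: string };
  reviewPlan: string[];              // e.g. ["Tomorrow", "In 3 days"]
}

/** Shallow structural check before rendering an AI-generated lesson. */
function isLesson(value: unknown): value is Lesson {
  const v = value as Lesson;
  return (
    !!v && typeof v === "object" &&
    Array.isArray(v.summary) &&
    Array.isArray(v.vocabulary) &&
    typeof v.questions === "object" && v.questions !== null &&
    typeof v.drawIt === "object" && v.drawIt !== null &&
    Array.isArray(v.reviewPlan)
  );
}
```

Validating the response guards the UI against malformed model output, since even a "strict" schema prompt cannot guarantee well-formed JSON every time.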
### `POST /api/convert`

Transforms a PDF for ADHD mode.

Request:

```ts
{
  file: Buffer,
  mode: "adhd" | "dyslexic" | "deaf" | "autism"
}
```

Response:

```ts
{
  converted: string, // Markdown content with "---" separators
  success: boolean
}
```

Model: Claude Sonnet 4.5 (4000 tokens)
### `POST /api/parse`

Extracts and cleans PDF text for dyslexia mode.

Request:

```ts
{
  file: Buffer,
  useAI?: boolean // Default: false
}
```

Response:

```ts
{
  text: string, // Cleaned, dyslexia-friendly text
  success: boolean
}
```

Model: Claude Sonnet 4.5 (4096 tokens), used only when useAI=true
### `POST /api/ai-process`

Adjusts the reading complexity level.

Request:

```ts
{
  text: string,
  level: "mild" | "moderate" | "severe"
}
```

Response:

```ts
{
  summary: string,
  rephrased: string
}
```

Model: Claude Sonnet 4.5 (4096 tokens, temperature 0.3)
### `POST /api/tts`

Generates Text-to-Speech audio.

Request:

```ts
{
  text: string,
  voice?: "alloy" | "echo" | "fable" | "onyx" | "nova" | "shimmer"
}
```

Response:

```ts
{
  audio: string // Base64-encoded MP3
}
```

Model: OpenAI TTS-1
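Since the TTS response carries the MP3 as a base64 string, the client must decode it to bytes before playback. A minimal sketch, with an assumed helper name; decoding is shown with Node's `Buffer`, while in the browser you would use `atob()` and hand the bytes to an `<audio>` element via a Blob URL.

```typescript
import { Buffer } from "node:buffer";

// Decode the base64-encoded MP3 from the /api/tts response into raw bytes.
// Helper name is an assumption for illustration.
function decodeTtsAudio(base64Audio: string): Uint8Array {
  return new Uint8Array(Buffer.from(base64Audio, "base64"));
}
```

In a browser, the resulting bytes can be wrapped as `new Blob([bytes], { type: "audio/mpeg" })` and played through `URL.createObjectURL`.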
## Architecture

### Project Structure

```
ReadEaseAI/
├── src/
│   ├── app/
│   │   ├── api/                  # API Routes
│   │   │   ├── ai-process/       # Reading level simplification
│   │   │   ├── conversation/     # Blindness Q&A
│   │   │   ├── convert/          # ADHD content transformation
│   │   │   ├── generate-lesson/  # Autism lesson generation
│   │   │   ├── narrate/          # PDF to narration
│   │   │   ├── parse/            # PDF parsing
│   │   │   ├── tts/              # Text-to-Speech
│   │   │   └── tts_adhd/         # ADHD-specific TTS
│   │   │
│   │   ├── adhd/                 # ADHD mode pages
│   │   ├── autism/               # Autism mode pages
│   │   ├── blindness/            # Blindness mode pages
│   │   ├── dyslexia/             # Dyslexia mode pages
│   │   ├── processed/            # Dyslexia reader view
│   │   ├── reader/               # ADHD chunked reader
│   │   ├── refined/              # Alternative dyslexia reader
│   │   │
│   │   ├── fonts/                # Font files (OpenDyslexic)
│   │   ├── layout.tsx            # Root layout with global sidebar
│   │   ├── page.tsx              # Home page (mode selection)
│   │   └── globals.css           # Global styles
│   │
│   ├── components/
│   │   ├── ui/                   # Reusable UI components
│   │   │   ├── button.tsx
│   │   │   ├── card.tsx
│   │   │   ├── dialog.tsx
│   │   │   ├── FileUploader.tsx
│   │   │   ├── fontSelector.tsx
│   │   │   ├── input.tsx
│   │   │   ├── slider.tsx
│   │   │   └── ...
│   │   └── app-sidebar.tsx       # Global sidebar (font/spacing controls)
│   │
│   ├── hooks/
│   │   └── use-mobile.ts         # Mobile detection hook
│   │
│   └── lib/
│       └── utils.ts              # Utility functions
│
├── public/
│   └── fonts/                    # Public font assets
│
├── .env                          # Environment variables (gitignored)
├── .env.example                  # Environment template
├── package.json                  # Dependencies
├── tsconfig.json                 # TypeScript config
├── next.config.ts                # Next.js config
├── tailwind.config.ts            # Tailwind config
├── components.json               # shadcn/ui config
└── README.md                     # This file
```
### Data Flow

```
User uploads PDF
      ↓
Next.js API Route
      ↓
PDF Parser (pdf-parse)
      ↓
Claude AI Processing
  ├─ Narration generation
  ├─ Text simplification
  ├─ Lesson structuring
  └─ Q&A answering
      ↓
OpenAI TTS (optional)
      ↓
Client-side rendering
  ├─ Custom fonts
  ├─ Accessibility controls
  └─ Interactive UI
```
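The "Claude AI Processing" step of this pipeline boils down to building a Messages API request from the extracted PDF text. The sketch below shows that shape; the model id, token budget, and prompt wording are assumptions based on figures in this README, not the project's actual code.

```typescript
// Sketch of building the Claude request for the processing step above.
// Model id, max_tokens, and prompt text are illustrative assumptions.
type Mode = "adhd" | "dyslexic" | "deaf" | "autism";

function buildClaudeRequest(pdfText: string, mode: Mode) {
  return {
    model: "claude-sonnet-4-5", // "Claude Sonnet 4.5" per the tech stack table
    max_tokens: 4000,           // matches the budget listed for /api/convert
    messages: [
      {
        role: "user" as const,
        content: `Transform this document for the "${mode}" reading mode:\n\n${pdfText}`,
      },
    ],
  };
}
```

In the real route handler, this object would be passed to the Anthropic SDK's `messages.create` call after `pdf-parse` extracts the text.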
## Contributing

We welcome contributions from the community! Here's how to get started:

1. Fork the repository
2. Create a feature branch

   ```bash
   git checkout -b feature/amazing-feature
   ```

3. Make your changes
   - Follow TypeScript best practices
   - Maintain accessibility standards
   - Test across all disability modes
4. Commit with descriptive messages

   ```bash
   git commit -m "feat(dyslexia): add color overlay customization"
   ```

5. Push to your fork

   ```bash
   git push origin feature/amazing-feature
   ```

6. Open a Pull Request
### Development Guidelines

- TypeScript: Strict mode enabled, no implicit `any`
- Accessibility: WCAG 2.1 Level AA compliance
- Formatting: Use Prettier (run `pnpm format`)
- Linting: Pass ESLint checks (`pnpm lint`)
- Testing: Include tests for new features
### Areas for Contribution

- 🌐 Internationalization: Support for non-English languages
- 📱 Mobile Optimization: Enhanced touch interfaces
- 🎨 Design: Improved visual accessibility
- 🧪 Testing: Unit/integration test coverage
- 📚 Documentation: Tutorials and user guides
- ♿ Accessibility: Additional disability modes (e.g., hearing impairment)
## Roadmap

See FUTURE_ROADMAP.md for detailed expansion plans, including:
- Multi-Agent Systems: AI orchestration with LangGraph
- RAG Integration: Vector database for document intelligence
- MCP Support: Model Context Protocol for extensibility
- Advanced Features: Eye-tracking, spatial audio, haptic feedback
- Personalization: User profiles and adaptive learning
## Performance

- PDF Processing: 10-30 seconds (varies by document size)
- Text-to-Speech: Real-time streaming
- API Response Times:
- Narration: ~15-20s (4000 tokens)
- Q&A: ~3-5s (1000 tokens)
- Lesson Generation: ~20-25s (4000 tokens)
- Browser Support: Chrome, Firefox, Safari (latest 2 versions)
## Privacy & Security

- No Data Storage: PDFs are processed in-memory and discarded
- API Key Security: Keys stored in environment variables, never exposed to client
- HTTPS Only: All communication encrypted in transit
- Client-Side Processing: Where possible, processing happens in browser (Web Speech API)
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Anthropic: For Claude AI and accessibility research
- OpenAI: For TTS capabilities
- Radix UI: For accessible component primitives
- Vercel: For hosting and deployment infrastructure
- OpenDyslexic: For the dyslexia-friendly font
## Support

- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: support@readease.ai
If this project helps you or someone you know, please consider starring it on GitHub! ⭐

Built with ❤️ for accessibility and inclusion

Website • Documentation • Demo