Visual Assist

Empowering independence through intelligent visual assistance


⚠️ BETA VERSION — This app is currently in active development and testing. Features may change and bugs may exist. Report issues →




Features · Requirements · Getting Started · Architecture · Privacy





🎯 Overview

Visual Assist is a native iOS application designed to help visually impaired users navigate their environment safely and independently. Built with Apple's latest frameworks, it leverages the power of:

  • ARKit + LiDAR — depth sensing
  • Vision — text recognition (OCR)
  • Core ML — object detection
  • SwiftUI — modern interface


🧪 Beta Status

Visual Assist is currently in beta testing via TestFlight.

This means:

  • 🔨 Active Development — New features being added regularly
  • 🐛 Bug Fixes — Known issues are being addressed
  • 📝 Feedback Welcome — Your input helps improve the app
  • ⚠️ Not Production Ready — Use with awareness of potential issues

Known Limitations

| Area | Status | Notes |
|------|--------|-------|
| Navigation Mode | ✅ Working | Core functionality complete |
| Text Reading | ✅ Working | OCR accuracy may vary with lighting |
| Object Detection | 🔄 Testing | Accuracy improvements ongoing |
| Voice Commands | ✅ Working | English only for now |
| Apple Watch | 📅 Planned | Coming in a future release |


✨ Features

🧭 Navigation Mode

Real-time obstacle detection powered by LiDAR sensor technology.

| Feature | Status |
|---------|--------|
| 3-Zone Scanning | ✓ |
| Distance Alerts | ✓ |
| Haptic Feedback | ✓ |
| Floor Detection | ✓ |
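
The 3-zone scanning and distance alerts above can be sketched in plain Swift. This is an illustrative stand-in, not the shipped `LiDARService.swift`: the zone split, type names, and alert thresholds (0.8 m / 2.0 m) are assumptions, and the real app would read depths from ARKit's scene-depth buffer rather than a plain array.

```swift
import Foundation

// Hypothetical 3-zone obstacle classification. A LiDAR depth map is split
// into left/center/right columns; the nearest reading per zone drives alerts.

enum Zone { case left, center, right }
enum Alert { case clear, caution, critical }

// Assumed thresholds in meters; the shipped values may differ.
func alertLevel(forDistance meters: Double) -> Alert {
    switch meters {
    case ..<0.8: return .critical
    case ..<2.0: return .caution
    default:     return .clear
    }
}

// Bucket a row-major depth grid into three vertical zones and keep the
// minimum (nearest) reading seen in each.
func nearestPerZone(depths: [[Double]]) -> [Zone: Double] {
    var result: [Zone: Double] = [:]
    for row in depths {
        let third = row.count / 3
        let slices: [(Zone, ArraySlice<Double>)] = [
            (.left,   row[..<third]),
            (.center, row[third..<(2 * third)]),
            (.right,  row[(2 * third)...]),
        ]
        for (zone, slice) in slices {
            if let nearest = slice.min() {
                result[zone] = min(result[zone] ?? .infinity, nearest)
            }
        }
    }
    return result
}

let grid = [[3.0, 0.5, 4.0], [2.5, 1.2, 3.8]]
let nearest = nearestPerZone(depths: grid)
print(alertLevel(forDistance: nearest[.center]!))  // prints "critical"
```

A real pipeline would run this per frame and debounce alerts so haptics fire on zone transitions rather than every reading.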

📖 Text Reading

Point-and-read OCR with natural speech synthesis.

| Feature | Status |
|---------|--------|
| Live OCR | ✓ |
| Freeze Frame | ✓ |
| Natural Speech | ✓ |
| Tap to Focus | ✓ |
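
A live-OCR mode typically filters recognizer output before speaking it. In Vision, `VNRecognizeTextRequest` produces observations whose `topCandidates(_:)` carry a confidence score; the framework-free sketch below shows only the selection step, with a hypothetical `Candidate` type and an assumed threshold.

```swift
import Foundation

// Stand-in for Vision's recognized-text candidates (string + confidence).
struct Candidate {
    let string: String
    let confidence: Float  // Vision reports values in 0.0...1.0
}

// Keep only candidates confident enough to read aloud, in reading order.
func linesToSpeak(_ candidates: [Candidate], threshold: Float = 0.5) -> [String] {
    candidates
        .filter { $0.confidence >= threshold }
        .map { $0.string }
}

let observed = [
    Candidate(string: "EXIT", confidence: 0.98),
    Candidate(string: "Ex1t??", confidence: 0.21),  // lighting noise, dropped
    Candidate(string: "Stairs ahead", confidence: 0.87),
]
print(linesToSpeak(observed))  // prints ["EXIT", "Stairs ahead"]
```

The surviving strings would then go to `AVSpeechSynthesizer` at the user's chosen rate; tuning the threshold trades missed text against misread text.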

πŸ‘οΈ Object Awareness

AI-powered scene understanding and description.

| Feature | Status |
|---------|--------|
| Object Detection | ✓ |
| Scene Description | ✓ |
| People Counting | ✓ |
| On-Device ML | ✓ |
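
People counting can be sketched as a filter over detector output. With Vision + Core ML the app would receive `VNRecognizedObjectObservation` values; here a plain struct stands in so the counting rule is visible, and the `"person"` label and 0.6 confidence floor are assumptions about the model, not taken from the source.

```swift
import Foundation

// Stand-in for an object detector's per-frame output.
struct Detection {
    let label: String
    let confidence: Float
}

// Count confident "person" detections; low-confidence hits are not announced.
func peopleCount(in detections: [Detection], minimumConfidence: Float = 0.6) -> Int {
    detections
        .filter { $0.label == "person" && $0.confidence >= minimumConfidence }
        .count
}

let frame = [
    Detection(label: "person", confidence: 0.91),
    Detection(label: "person", confidence: 0.42),  // too uncertain to announce
    Detection(label: "chair",  confidence: 0.88),
    Detection(label: "person", confidence: 0.77),
]
print(peopleCount(in: frame))  // prints 2
```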

🎀 Voice Commands

πŸ—£οΈ "Navigate"           β†’ Start obstacle detection
πŸ—£οΈ "Read text"          β†’ Begin text reading
πŸ—£οΈ "What's around me"   β†’ Describe surroundings
πŸ—£οΈ "Stop"               β†’ Stop current action
πŸ—£οΈ "Faster" / "Slower"  β†’ Adjust speech rate
πŸ—£οΈ "Help"               β†’ List all commands


📋 Requirements

📱 Hardware

| Device | LiDAR |
|--------|-------|
| iPhone 12 Pro / Pro Max | ✓ |
| iPhone 13 Pro / Pro Max | ✓ |
| iPhone 14 Pro / Pro Max | ✓ |
| iPhone 15 Pro / Pro Max | ✓ |
| iPhone 16 Pro / Pro Max | ✓ |

💻 Software

| Requirement | Version |
|-------------|---------|
| iOS | 17.0+ |
| Xcode | 15.0+ |
| Swift | 5.9+ |

🔑 Permissions

  • 📷 Camera
  • 🎤 Microphone
  • 🗣️ Speech Recognition


🚀 Getting Started

Installation

# Clone the repository
git clone https://github.com/yadava5/VisualAssist.git

# Navigate to project
cd VisualAssist

# Open in Xcode
open VisualAssist.xcodeproj

Build & Run

| Step | Action |
|------|--------|
| 1️⃣ | Select your Development Team in Signing & Capabilities |
| 2️⃣ | Connect your iPhone Pro via USB |
| 3️⃣ | Press ⌘ + R to build and run |

First Launch

Grant permissions → App announces "Visual Assist ready" → Start using!



πŸ—οΈ Architecture

VisualAssist/
├── 📁 App/                    # Entry point & state
│   ├── VisualAssistApp.swift
│   └── AppState.swift
├── 📁 Views/                  # SwiftUI interface
│   ├── HomeView.swift
│   ├── NavigationModeView.swift
│   ├── TextReadingModeView.swift
│   ├── ObjectAwarenessModeView.swift
│   └── Components/
├── 📁 Services/               # Business logic
│   ├── LiDARService.swift
│   ├── CameraService.swift
│   ├── SpeechService.swift
│   └── HapticService.swift
├── 📁 Models/                 # Data structures
└── 📁 Utilities/              # Helpers

Technology Stack

| Framework | Purpose |
|-----------|---------|
| ARKit | LiDAR depth sensing |
| Vision | Text recognition (OCR) |
| Core ML | Object detection |
| AVFoundation | Camera capture |
| Speech | Voice commands |
| Core Haptics | Haptic feedback |

Design Patterns

| Pattern | Usage |
|---------|-------|
| MVVM | Clean view/logic separation |
| Combine | Reactive @Published properties |
| Swift Concurrency | Modern async/await |
| iOS 26 Design | Liquid glass UI effects |
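
To make the Swift Concurrency row concrete, here is a minimal sketch of mode state guarded by an actor and driven with async/await. The type and method names are illustrative stand-ins, not the shipped `AppState.swift`.

```swift
import Foundation

// Hypothetical app modes mirroring the three main features.
enum AssistMode { case idle, navigation, textReading, objectAwareness }

// An actor serializes access, so mode changes from the camera, speech, and
// UI layers cannot race each other.
actor ModeStore {
    private(set) var mode: AssistMode = .idle

    func activate(_ newMode: AssistMode) { mode = newMode }
    func stop() { mode = .idle }
}

let store = ModeStore()
let done = DispatchSemaphore(value: 0)
Task {
    await store.activate(.navigation)
    let active = await store.mode
    print(active == .navigation)  // prints true
    await store.stop()
    done.signal()
}
done.wait()  // keep the script alive until the task finishes
```

In the app itself, a SwiftUI view model would observe this state and publish changes to the interface; the semaphore is only script plumbing.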


🔒 Privacy

| Feature | Description |
|---------|-------------|
| 🔐 On-Device Processing | All ML runs locally on your iPhone |
| 📡 No Network Required | Works completely offline |
| 🚫 No Data Collection | Nothing leaves your device |
| 📊 No Analytics | Zero tracking or telemetry |
| 👤 No Account | Use immediately, no sign-up |


♿ Accessibility

Visual Assist is built with accessibility as a core principle:

VoiceOver & UI

  • ✅ Full VoiceOver support
  • ✅ Dynamic Type compatible
  • ✅ High Contrast mode
  • ✅ Reduce Motion respected
  • ✅ Large touch targets (44pt min)

Haptic Patterns

| Pattern | Meaning |
|---------|---------|
| · | Action confirmed |
| ·· | Mode changed |
| ··· | Warning |
| ~~~ | Critical obstacle |
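
The haptic vocabulary above can be expressed as data before it ever touches the hardware. Core Haptics would turn each entry into `CHHapticEvent` values on a `CHHapticEngine`; the timings and intensities below are illustrative stand-ins, not the shipped `HapticService.swift` values.

```swift
import Foundation

// One transient tap: when it fires (seconds from pattern start) and how hard.
struct Tap {
    let time: TimeInterval
    let intensity: Float  // 0.0...1.0, as Core Haptics expects
}

// One entry per row of the pattern table; each "·" becomes one tap, and the
// critical pattern is a single event a haptic engine would play continuously.
let hapticPatterns: [String: [Tap]] = [
    "confirm":  [Tap(time: 0.0, intensity: 0.6)],
    "mode":     [Tap(time: 0.0, intensity: 0.6),
                 Tap(time: 0.15, intensity: 0.6)],
    "warning":  [Tap(time: 0.0, intensity: 0.8),
                 Tap(time: 0.15, intensity: 0.8),
                 Tap(time: 0.3, intensity: 0.8)],
    "critical": [Tap(time: 0.0, intensity: 1.0)],
]

print(hapticPatterns["warning"]!.count)  // prints 3
```

Keeping patterns as data makes them easy to audit against the table and to replay in tests without a device.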


πŸ—ΊοΈ Roadmap

  • ⌚ Apple Watch companion app
  • 🗺️ Indoor mapping & saved locations
  • 💵 Currency recognition
  • 🌍 Multi-language support
  • 🔗 Siri Shortcuts integration
  • 🚗 CarPlay navigation support


📜 License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.

  • ✅ Share — copy and redistribute
  • ✅ Adapt — remix and build upon
  • ❌ Commercial use without permission

For commercial licensing, contact the author.

CC BY-NC 4.0

View License



📚 Documentation

This project uses DocC for API documentation.

# Build documentation in Xcode
# Product → Build Documentation (⌃⇧⌘D)

# Or via command line
xcodebuild docbuild -scheme VisualAssist -derivedDataPath ./docs



Built with ❀️ for accessibility

© 2026 Ayush. All rights reserved.



Visual Assist is not affiliated with Apple Inc.
iPhone, LiDAR, ARKit, and other Apple trademarks are property of Apple Inc.


Made with Swift Built for iOS
