A simple demonstration of using DepthAnything for depth-based robot navigation. This demo shows how to process a real-time camera feed, estimate depth information, and generate structured navigation commands for a robot based on depth analysis.
- Real-time camera feed processing
- Depth estimation using the DepthAnything model
- Region-based depth analysis for navigation
- Object detection based on depth gradients
- Automatic navigation command generation
- Web-based visualization interface
- Real-time depth visualization
The system uses DepthAnything to provide detailed depth information (a minimal sketch of the region analysis follows this list):
- Divides the image into 5 regions (center, top, bottom, left, right)
- Provides depth statistics for each region:
  - Mean depth
  - Minimum depth
  - Maximum depth
  - Relative distance classification (near/far)
- Detects objects based on depth gradients
- For each detected object, provides:
  - Position (normalized x, y coordinates)
  - Size (width and height as a proportion of the image)
  - Depth information (mean depth and relative distance)
- Generates a depth map visualization that:
  - Shows region divisions
  - Highlights detected objects
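The following is a minimal sketch of how this kind of region analysis could be computed with NumPy. It is illustrative only: the region boundaries, the assumption that the depth map is normalized to [0, 1] with larger values meaning closer, and the `near_threshold` parameter are assumptions for this sketch, not taken from `depth_analyzer.py`.

```python
import numpy as np

def analyze_regions(depth_map: np.ndarray, near_threshold: float = 0.5) -> dict:
    """Split a depth map into five regions and report per-region statistics.

    Assumes depth_map is a 2D array normalized to [0, 1], where larger
    values mean closer (an assumption for this sketch).
    """
    h, w = depth_map.shape
    regions = {
        "center": depth_map[h // 4 : 3 * h // 4, w // 4 : 3 * w // 4],
        "top":    depth_map[: h // 4, :],
        "bottom": depth_map[3 * h // 4 :, :],
        "left":   depth_map[:, : w // 4],
        "right":  depth_map[:, 3 * w // 4 :],
    }
    stats = {}
    for name, region in regions.items():
        mean_depth = float(region.mean())
        stats[name] = {
            "mean_depth": mean_depth,
            "min_depth": float(region.min()),
            "max_depth": float(region.max()),
            # near/far classification against an assumed threshold
            "relative_distance": "near" if mean_depth > near_threshold else "far",
        }
    return stats

if __name__ == "__main__":
    # Random depth map standing in for a DepthAnything prediction
    fake_depth = np.random.rand(480, 640)
    for region, region_stats in analyze_regions(fake_depth).items():
        print(region, region_stats)
```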
- Create a virtual environment:

  ```bash
  python -m venv venv
  source venv/Scripts/activate
  ```

- Install requirements:

  ```bash
  pip install -r requirements.txt
  ```

- Start the server:

  ```bash
  python app.py
  ```

- Open your web browser to http://localhost:53549/
- Allow camera access when prompted
- Click "Start Processing" to begin real-time processing
- Start the depth analysis interface:

  ```bash
  python web_interface.py
  ```

- Open your web browser to http://localhost:53549/
- Use the interface to:
  - Upload images for analysis
  - View depth maps and visualizations
  - See region-based depth analysis
  - Inspect detected objects
You can also use the depth analyzer directly from the command line:
```bash
python depth_analyzer.py path/to/image.jpg
```

This will:
- Generate a depth analysis
- Save visualization to 'depth_analysis.png'
- Print detailed analysis results
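Because each run saves its visualization to `depth_analysis.png`, batch processing several images from a script needs to move the output aside between runs. Below is a small sketch built only on the CLI call shown above; the `images/` and `analyses/` folder names and the output naming scheme are assumptions for illustration.

```python
import shutil
import subprocess
from pathlib import Path

# Hypothetical folder layout; adjust to your own paths.
input_dir = Path("images")
output_dir = Path("analyses")
output_dir.mkdir(exist_ok=True)

for image_path in sorted(input_dir.glob("*.jpg")):
    # Run the documented CLI on a single image.
    subprocess.run(["python", "depth_analyzer.py", str(image_path)], check=True)
    # Each run overwrites 'depth_analysis.png', so move it aside with a per-image name.
    shutil.move("depth_analysis.png", str(output_dir / f"{image_path.stem}_depth.png"))
```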
The system generates JSON commands in the following format:
```json
{
  "velocity_command": {
    "linear_velocity_mps": 0.5,    // Forward/backward speed (-1.0 to 1.0)
    "angular_velocity_radps": 0.2  // Turning speed (-1.0 to 1.0)
  },
  "gait_mode": "trotting",         // Robot's movement style
  "reasoning": "Moving forward to approach the target object while avoiding the obstacle on the left",
  "timestamp": "2024-01-31T17:00:03Z"
}
```
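On the consumer side, a controller might parse and sanity-check these commands before acting on them. The sketch below uses only the fields shown above; clamping the velocities to the documented [-1.0, 1.0] range is an assumed safety measure, not part of the command format itself.

```python
import json
from dataclasses import dataclass

@dataclass
class NavigationCommand:
    linear_velocity_mps: float
    angular_velocity_radps: float
    gait_mode: str
    reasoning: str
    timestamp: str

def parse_command(raw: str) -> NavigationCommand:
    """Parse a navigation command and clamp velocities to [-1.0, 1.0]."""
    def clamp(value: float) -> float:
        return max(-1.0, min(1.0, float(value)))

    data = json.loads(raw)
    velocity = data["velocity_command"]
    return NavigationCommand(
        linear_velocity_mps=clamp(velocity["linear_velocity_mps"]),
        angular_velocity_radps=clamp(velocity["angular_velocity_radps"]),
        gait_mode=data["gait_mode"],
        reasoning=data["reasoning"],
        timestamp=data["timestamp"],
    )

if __name__ == "__main__":
    example = json.dumps({
        "velocity_command": {"linear_velocity_mps": 0.5, "angular_velocity_radps": 0.2},
        "gait_mode": "trotting",
        "reasoning": "Approach the target while avoiding the obstacle on the left",
        "timestamp": "2024-01-31T17:00:03Z",
    })
    print(parse_command(example))
```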