# Bewegungsschrift

Bewegungsschrift is a tool for generating Labanotation from video input, designed exclusively for analyzing human movement. It takes a video file as input, uses computer vision to detect human body movements, and converts these movements into Labanotation symbols, creating a precise representation of human motion.
## Table of Contents

- Features
- Technology Stack
- Installation
- Usage
- Configuration
- Examples
- Limitations
- Contributing
- License
## Features

- Video Input Analysis: Accepts video files containing human subjects and analyzes their movement.
- Human Pose Estimation: Utilizes cutting-edge pose estimation models to detect and track the joints of human figures.
- Labanotation Output: Converts the extracted motion into Labanotation symbols, providing a detailed and standard representation of movement.
- Optimized for Human Models: Exclusively designed to detect and represent human movements, ensuring high accuracy.
- Geometry and Programming Integration: Combines elements of geometry for movement analysis and programming for automatic conversion to Labanotation.
- Human Selection: Allows selecting a human in the video by drawing a border around it.
- Support for Various Video Formats: Ensures compatibility with multiple video formats using OpenCV's `cv2.VideoCapture`.
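
To make the video-analysis pipeline concrete, here is a minimal sketch of reading frames with `cv2.VideoCapture` and running Mediapipe's Pose solution on each one. The structure and names are illustrative only, not Bewegungsschrift's actual internals:

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

cap = cv2.VideoCapture("videos/dance.mp4")
with mp_pose.Pose(static_image_mode=False) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # Mediapipe expects RGB input; OpenCV delivers BGR frames.
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            # Each landmark carries normalized x, y, z coordinates that
            # can later be mapped to Labanotation directions.
            print(results.pose_landmarks.landmark[0])
cap.release()
```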
## Technology Stack

- Python: Core language used for scripting and processing.
- OpenCV: For video processing, frame extraction, and human motion detection.
- Mediapipe: A machine learning framework used for pose estimation and detecting human body joints.
- NumPy: For handling mathematical computations related to geometry.
- Matplotlib: Used to visualize the detected movements and their corresponding Labanotation.
- LabanWriter Integration: The generated Labanotation is compatible with LabanWriter for further editing.
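
The "geometry for movement analysis" mentioned above largely comes down to computing directions and angles from detected joint positions. As a generic NumPy sketch (not a function from this codebase), the angle at a joint formed by three points can be computed like this:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at point b, formed by points a-b-c
    (e.g., shoulder-elbow-wrist landmark coordinates)."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    ba, bc = a - b, c - b
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    # Clip to guard against floating-point drift outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

print(joint_angle([0, 1], [0, 0], [1, 0]))  # 90.0
```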
## Installation

To get started with Bewegungsschrift, you need to install the required tools and dependencies.

### Prerequisites
- Python 3.8 or above
- Git
- Virtual environment (optional, but recommended)
### Steps

- Clone the repository:

  ```bash
  git clone https://github.com/username/Bewegungsschrift.git
  cd Bewegungsschrift
  ```

- Create and activate a virtual environment:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
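
For reference, the dependency list implied by the Technology Stack section would look roughly like the following; the actual `requirements.txt` in the repository is authoritative, and version pins are omitted here:

```text
opencv-python
mediapipe
numpy
matplotlib
PyYAML
```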
## Usage

After installing the necessary dependencies, you can start using Bewegungsschrift to convert videos into Labanotation.
- Basic Usage:

  ```bash
  python bewegungsschrift.py --input path/to/your/video.mp4 --output path/to/output.yaml
  ```

  - `--input`: Path to the video file.
  - `--output`: Path to save the generated Labanotation.
- Optional Parameters:

  - `--select-human`: Enable human selection by drawing a border around the human in the video.
  - `--webcam-test`: Launch a test from the webcam.
  - `--webcam-cube`: Launch a cube test from the webcam.
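
As an illustration of how these flags could be declared, here is a sketch using Python's `argparse`; the actual wiring inside `bewegungsschrift.py` may differ:

```python
import argparse

parser = argparse.ArgumentParser(description="Generate Labanotation from video.")
parser.add_argument("--input", help="Path to the video file.")
parser.add_argument("--output", help="Path to save the generated Labanotation.")
parser.add_argument("--select-human", action="store_true",
                    help="Select a human by drawing a border around it.")
parser.add_argument("--webcam-test", action="store_true",
                    help="Launch a test from the webcam.")
parser.add_argument("--webcam-cube", action="store_true",
                    help="Launch a cube test from the webcam.")
args = parser.parse_args()
```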
To analyze a video of a dance routine and generate the corresponding Labanotation:

```bash
python bewegungsschrift.py --input videos/dance.mp4 --output notations/dance_notation.yaml
```

To analyze a video and select a human by drawing a border around it:

```bash
python bewegungsschrift.py --input videos/dance.mp4 --output notations/dance_notation.yaml --select-human
```

To launch a webcam test:

```bash
python bewegungsschrift.py --webcam-test
```

To launch a cube test from the webcam:

```bash
python bewegungsschrift.py --webcam-cube
```

## Configuration

You can modify the tool's settings by editing the configuration file `config.yaml`. Key configuration options include:
- Frame Rate: Set the rate at which frames are sampled for analysis.
- Pose Estimation Model: Choose between different pose estimation models (e.g., Mediapipe, BlazePose).
- Output Format: Specify the format for Labanotation output (e.g., JSON, XML).
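
As an illustration, a `config.yaml` covering these options might look like the following; the key names and values here are assumptions, not the shipped configuration:

```yaml
# Hypothetical configuration; consult the config.yaml in the repository
# for the actual keys and defaults.
frame_rate: 10                      # frames sampled per second for analysis
pose_estimation_model: mediapipe    # e.g., mediapipe, blazepose
output_format: yaml                 # e.g., yaml, json, xml
```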
## Examples

The tool accepts a video of a human performing movements, such as a dance or exercise routine. The video should clearly show the human subject without obstructions.
The output is a Labanotation file that can be visualized or edited with software like LabanWriter.
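
For programmatic post-processing, the generated file can also be loaded with PyYAML. A minimal sketch, assuming the YAML structure shown in the example below:

```python
import yaml

# Load a generated notation file and iterate over the recorded movements.
with open("notations/dance_notation.yaml") as f:
    notation = yaml.safe_load(f)

for move in notation["movements"]:
    print(move["frame"], move["body_part"], move["direction"], move["angle"])
```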
Example output (YAML format):
```yaml
movements:
  - frame: 1
    body_part: left_arm
    direction: upward
    angle: 45
  - frame: 1
    body_part: right_leg
    direction: forward
    angle: 30
```

## Limitations

- Human Only: The tool is designed to work exclusively with human models. It may not accurately detect animals or non-human objects.
- Video Quality: The accuracy of Labanotation generation is dependent on the quality of the input video. High-resolution videos with minimal background noise are recommended.
- Complex Movements: Very fast or complex movements may result in reduced accuracy in pose estimation.
## Contributing

Contributions are welcome! If you have ideas for new features or find bugs, please open an issue or submit a pull request.
- Fork the repository.
- Create a new branch:

  ```bash
  git checkout -b feature-name
  ```

- Make your changes and commit them:

  ```bash
  git commit -m "Description of changes"
  ```

- Push to your fork and submit a pull request.
## License

This project is licensed under the MIT License. See the LICENSE file for more details.