Revolutionizing motion capture with real-time AI-powered animation workflows!
Akira-app combines Google MediaPipe's cutting-edge pose estimation with Electron's cross-platform capabilities to deliver professional-grade 3D animation tools for creators across industries - from MMD enthusiasts to game developers, VR creators, and digital artists. Create stunning animations in minutes using webcam input, no specialized hardware required.
- Next-Gen Motion Capture: Leverage Google MediaPipe's enhanced pose estimation with new hybrid tracking algorithms combining 2D video analysis and 3D skeletal modeling.
- Cross-Platform Power: Web-based Electron framework works seamlessly on Windows/macOS/Linux.
- GPU Turbocharging: NVIDIA/AMD GPU acceleration enables 3x faster processing for complex animations.
- Multi-Language Support ✅
- ✅ Now Available: English, Russian, Japanese, Chinese
- 🌐 Easily extendable localization system for future language additions
- Universal Animation Tools:
- Real-time motion capture from webcam/video input
- Multi-scene workspace with timeline synchronization
- MP4, VMD, and glTF export
- Visual Customization:
- Dark/Purple/White themes with dynamic UI adaptation
- Bone Structure Visualizer for precise skeletal adjustments
- Zero Setup Required: Works instantly in browser with WebGL 2.0 compatible GPUs
- Hardware-Aware Performance: Automatic resource management for optimal frame rates
- Streamlined Workflow: Capture → Adjust → Export in 3 simple steps
- Node.js v16+
- Electron-compatible browser (Chrome 90+/Edge recommended)
- WebGL 2.0 compatible GPU
```shell
# Clone repository
git clone https://github.com/GOH23/akira-app.git

# Install dependencies
npm install

# Launch Electron app
npm start
```
- Enable browser "Hardware Acceleration" in settings
- Use Chrome/Edge for best WebGL performance
- Optimal resolution: 1280x720 for real-time preview
- Close background apps during capture sessions
- Utilize GPU acceleration toggle in settings for complex scenes
Akira-app features an integrated AI Assistant powered by local Large Language Models (LLMs) via Ollama. This allows you to chat with Akira, generate responses, and trigger character animations using natural language.
- Open the AI Assistant: Click the AI icon or open the "AI Assistant" drawer in the app interface.
- Type your message: Enter your prompt or question in the chat input field.
- Send: Press Enter or click the send button. Akira will respond using the selected LLM model, and may trigger an animation and voice response.
- Change Language: The AI will reply in the language selected in the app settings (English, Russian, Japanese, Chinese).
- Audio Output: Enable or disable voice output using the toggle in the chat footer.
Note: For the AI Assistant to work, you must have a local Ollama server running with a compatible LLM model installed (see below).
Ollama is a local LLM server that allows you to run models like Llama 3, Mistral, Gemma, and others on your own machine. Akira-app connects to Ollama to provide AI chat and animation features.
- Windows/macOS/Linux:
  - Go to https://ollama.com/download and download the installer for your OS.
  - Follow the installation instructions for your platform.
- After installation, start the Ollama server:
  - Windows: Launch "Ollama" from the Start menu or run `ollama serve` in Command Prompt.
  - macOS/Linux: Run `ollama serve` in Terminal.
- By default, Ollama runs at `http://localhost:11434`.
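To confirm the server is up, you can query its REST API (a quick check assuming the default host; `/api/tags` lists the models installed locally):

```shell
# Assumed default Ollama host; adjust if you changed the port
OLLAMA_HOST="http://localhost:11434"

# Any JSON reply here confirms the server is reachable
curl -s "$OLLAMA_HOST/api/tags" || echo "Ollama is not reachable at $OLLAMA_HOST"
```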
- In Terminal/Command Prompt, run:
```shell
ollama pull llama3
```
Or choose another supported model (see Ollama models).
- In the app, open Settings > Ollama Host and ensure the URL matches your Ollama server (default: `http://localhost:11434`).
- Select the desired model from the model dropdown.
- Open the AI Assistant and start chatting!
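Under the hood, clients such as Akira-app talk to Ollama over its HTTP API. As a rough sketch (assuming the default host and a pulled `llama3` model; the prompt text is just an illustration), a single non-streaming completion looks like this:

```shell
# Hypothetical standalone request; assumes `ollama serve` is running
# locally and that the `llama3` model has been pulled
PAYLOAD='{"model": "llama3", "prompt": "Say hello!", "stream": false}'

# POST to Ollama's generate endpoint on the default port
curl -s http://localhost:11434/api/generate \
  -d "$PAYLOAD" || echo "Is the Ollama server running?"
```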
- If you see errors like "Failed to connect to Ollama" or "No models found":
- Make sure Ollama is running and accessible at the configured host.
- Ensure you have pulled at least one model (e.g., `ollama pull llama3`).
- Check firewall/antivirus settings if running on Windows.
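The connectivity check above can be scripted. A minimal sketch (using the default host unless an `OLLAMA_HOST` environment variable is set) that reports whether the server answers:

```shell
# Fall back to the default host if OLLAMA_HOST is not set
HOST="${OLLAMA_HOST:-http://localhost:11434}"

# -f makes curl fail on HTTP errors, -s keeps the output quiet
if curl -sf "$HOST/api/tags" > /dev/null; then
  echo "Ollama is reachable at $HOST"
else
  echo "Ollama is NOT reachable at $HOST (is 'ollama serve' running?)"
fi
```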
For more help, see the Ollama documentation or open an issue in this repository.