Home Assistant Venice AI Conversation Integration
The Venice AI integration allows you to enhance your Home Assistant setup with advanced conversational capabilities powered by Venice AI. This integration enables seamless interaction with your smart home devices through natural language processing.
- Natural language understanding for smart home commands
- Customizable responses based on user preferences
- Integration with various Home Assistant components
- Dynamic model selection from available Venice AI models
- Make sure you have HACS installed in your Home Assistant instance.
- Click on HACS in the sidebar.
- Go to "Integrations".
- Click the three dots in the top right corner and select "Custom repositories".
- Add `https://github.com/grasponcrypto/venice_ai` as a repository with the category "Integration".
- Click "Add".
- Search for "Venice AI" in the integrations tab.
- Click "Download" and follow the installation instructions.
- Restart Home Assistant.
Alternatively, to install the integration manually:
- Clone the repository: `git clone https://github.com/grasponcrypto/venice_ai.git`
- Copy the integration files: Place the `venice_ai` folder in your Home Assistant `custom_components` directory (a scripted sketch of this step follows below).
- Restart Home Assistant: After copying the files, restart your Home Assistant instance to load the new integration.
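If you prefer to script the copy step, here is a minimal sketch. The paths are assumptions for illustration: it presumes the cloned repository keeps the integration under `custom_components/venice_ai` and that your Home Assistant configuration directory is `/config`; adjust both for your setup.

```python
import shutil
from pathlib import Path

# Illustrative paths only -- adjust for your setup. This assumes the cloned
# repo keeps the integration under custom_components/venice_ai and that the
# Home Assistant config directory is /config.
repo = Path("venice_ai")                                # the cloned repository
source = repo / "custom_components" / "venice_ai"       # venice_ai folder inside the repo
target = Path("/config") / "custom_components" / "venice_ai"

# Copy the folder into custom_components, overwriting any previous copy.
shutil.copytree(source, target, dirs_exist_ok=True)
print(f"Copied to {target}; restart Home Assistant to load the integration.")
```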
To configure the Venice AI integration:
- Go to Settings → Devices & Services
- Click "Add Integration" and search for "Venice AI"
- Enter your Venice AI API key
- Configure additional options as needed (the request sketch after this list illustrates what these options control):
- Select your preferred model
- Adjust temperature, max tokens, and other parameters
- Customize the system prompt if desired
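These options correspond to standard chat-completion parameters. As a rough illustration only, not what the integration literally executes, the sketch below sends one request to Venice AI's OpenAI-compatible endpoint; the base URL, model name, and values are assumptions drawn from this README and Venice's public API.

```python
from openai import OpenAI

# Assumed OpenAI-compatible Venice AI endpoint; check Venice's API docs
# if your base URL differs.
client = OpenAI(
    base_url="https://api.venice.ai/api/v1",
    api_key="YOUR_VENICE_API_KEY",
)

response = client.chat.completions.create(
    model="llama-3.3-70b",      # "Select your preferred model"
    temperature=0.7,            # "Adjust temperature"
    max_tokens=512,             # "... max tokens"
    messages=[
        # The system prompt you customize in the options appears here.
        {"role": "system", "content": "You are a helpful smart home assistant."},
        {"role": "user", "content": "Turn off the living room lights."},
    ],
)
print(response.choices[0].message.content)
```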
The Venice AI integration automatically filters and displays only models that support function calling, which is required for Home Assistant device control.
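For orientation, the filtering is conceptually similar to the sketch below. The capability field name (`supportsFunctionCalling`) and the `/models` response shape are assumptions about Venice's API, not guarantees from this README.

```python
import requests

# Fetch the model list from Venice AI. The capability field name is an
# assumption about the /models response; consult Venice's API reference.
resp = requests.get(
    "https://api.venice.ai/api/v1/models",
    headers={"Authorization": "Bearer YOUR_VENICE_API_KEY"},
    timeout=10,
)
resp.raise_for_status()

function_calling_models = [
    m["id"]
    for m in resp.json().get("data", [])
    if m.get("model_spec", {}).get("capabilities", {}).get("supportsFunctionCalling")
]
print(function_calling_models)  # only models like these are shown in the selector
```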
The current default model is Llama 3.3 70B (llama-3.3-70b), which provides excellent function calling capabilities for smart home automation.
For reasoning models like Venice Reasoning (qwen-2.5-qwq-32b) or DeepSeek R1 671B, you can disable thinking for lower latency by enabling the "Disable thinking" option in the configuration.
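In API terms, disabling thinking amounts to an extra request flag. The sketch below shows what such a request could look like; the `venice_parameters` field names are assumptions, so check Venice's API reference for the exact flags.

```python
from openai import OpenAI

client = OpenAI(base_url="https://api.venice.ai/api/v1", api_key="YOUR_VENICE_API_KEY")

# The venice_parameters names below are assumptions about Venice's API,
# not confirmed by this README.
response = client.chat.completions.create(
    model="qwen-2.5-qwq-32b",   # Venice Reasoning
    messages=[{"role": "user", "content": "Is anyone home right now?"}],
    extra_body={
        "venice_parameters": {
            "disable_thinking": True,         # skip the reasoning phase for lower latency
            "strip_thinking_response": True,  # drop any thinking block from the reply
        }
    },
)
print(response.choices[0].message.content)
```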
If you encounter any issues or have feature requests, please open an issue on our GitHub Issues page.
This project is licensed under the MIT License - see the LICENSE file for details.