
Venice AI

Home Assistant Venice AI Conversation Integration

Overview

The Venice AI integration adds conversational capabilities powered by Venice AI to your Home Assistant setup, letting you control and query your smart home devices through natural language.

Features

  • Natural language understanding for smart home commands
  • Customizable responses based on user preferences
  • Integration with various Home Assistant components
  • Dynamic model selection from available Venice AI models

Installation

Option 1: HACS (Recommended)

  1. Make sure you have HACS installed in your Home Assistant instance.
  2. Click on HACS in the sidebar.
  3. Go to "Integrations".
  4. Click the three dots in the top right corner and select "Custom repositories".
  5. Add https://github.com/grasponcrypto/venice_ai as a repository with category "Integration".
  6. Click "Add".
  7. Search for "Venice AI" in the integrations tab.
  8. Click "Download" and follow the installation instructions.
  9. Restart Home Assistant.

Option 2: Manual Installation

  1. Clone the repository:

         git clone https://github.com/grasponcrypto/venice_ai.git

  2. Copy the integration files: place the venice_ai folder in your Home Assistant custom_components directory.

  3. Restart Home Assistant: after copying the files, restart your Home Assistant instance to load the new integration.

Configuration

To configure the Venice AI integration:

  1. Go to Settings → Devices & Services
  2. Click "Add Integration" and search for "Venice AI"
  3. Enter your Venice AI API key
  4. Configure additional options as needed:
    • Select your preferred model
    • Adjust temperature, max tokens, and other parameters
    • Customize the system prompt if desired
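The options above map onto a standard chat-completion request. As a minimal sketch, the integration's request body might look like the following; the field names follow the common OpenAI-style chat completions shape, and the exact payload the integration sends is an assumption here:

```python
# Hypothetical sketch: assemble a chat-completion payload from the options
# configured above (model, temperature, max tokens, system prompt).
# Field names follow the OpenAI-style API shape; the real request the
# integration builds may differ.

def build_payload(model: str, prompt: str, system_prompt: str,
                  temperature: float = 0.7, max_tokens: int = 512) -> dict:
    """Build a chat-completion request body from user-configured options."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": prompt},
        ],
        "temperature": temperature,
        "max_tokens": max_tokens,
    }

payload = build_payload(
    model="llama-3.3-70b",
    prompt="Turn off the kitchen lights",
    system_prompt="You are a Home Assistant voice assistant.",
)
print(payload["model"])  # llama-3.3-70b
```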

Models

The Venice AI integration automatically filters and displays only models that support function calling, which is required for Home Assistant device control.

The current default model is Llama 3.3 70B (llama-3.3-70b), which provides excellent function calling capabilities for smart home automation.
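The filtering step can be sketched as follows. The payload shape and the `supportsFunctionCalling` flag name are illustrative assumptions; the actual Venice AI models response may use different field names:

```python
# Hypothetical sketch of the model-filtering step: keep only models whose
# capabilities report function-calling support, since that is what Home
# Assistant device control requires. The "capabilities" /
# "supportsFunctionCalling" field names are assumptions for illustration.

SAMPLE_MODELS = [
    {"id": "llama-3.3-70b", "capabilities": {"supportsFunctionCalling": True}},
    {"id": "qwen-2.5-qwq-32b", "capabilities": {"supportsFunctionCalling": True}},
    {"id": "some-image-model", "capabilities": {"supportsFunctionCalling": False}},
]

def function_calling_models(models: list[dict]) -> list[str]:
    """Return IDs of models that can drive Home Assistant device control."""
    return [
        m["id"]
        for m in models
        if m.get("capabilities", {}).get("supportsFunctionCalling")
    ]

print(function_calling_models(SAMPLE_MODELS))
# ['llama-3.3-70b', 'qwen-2.5-qwq-32b']
```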

For reasoning models such as Venice Reasoning (qwen-2.5-qwq-32b) or DeepSeek R1 671B, you can reduce latency by enabling the "Disable thinking" option in the configuration.
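Reasoning models commonly wrap their chain of thought in `<think>...</think>` tags before the final answer. As a hedged sketch, stripping that block client-side is one way to return only the answer; whether the integration's "Disable thinking" option works this way or suppresses reasoning server-side is an assumption here:

```python
import re

# Hypothetical sketch: remove a <think>...</think> reasoning block from a
# model reply so only the final answer remains. Whether Venice AI's
# "Disable thinking" option does this, or skips reasoning server-side,
# is an assumption for illustration.

THINK_RE = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def strip_thinking(text: str) -> str:
    """Return the reply with any <think>...</think> block removed."""
    return THINK_RE.sub("", text).strip()

reply = "<think>The user wants the lights off...</think>Turning off the kitchen lights."
print(strip_thinking(reply))  # Turning off the kitchen lights.
```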

Support

If you encounter any issues or have feature requests, please open an issue on our GitHub Issues page.

License

This project is licensed under the MIT License - see the LICENSE file for details.
