This repository contains a sample solution demonstrating how to connect to an AI model in Azure and to localhost using SQL Server 2025
Updated Dec 11, 2025 · Python
Edge AI - a minimal infrastructure implementation for producing a small model under controlled conditions, locally on a machine with a GPU
getllm is a Python package for managing LLM models with Ollama integration and generating Python code. It lets you install models, list them, set the default model, update the model list, and generate code using LLM models. GetLLM is part of the PyLama ecosystem and integrates with LogLama as the primary service for centralized logging and environment management.
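A package like getllm typically builds on Ollama's local REST API, which serves `POST /api/generate` at `http://localhost:11434` by default. A minimal stdlib-only sketch of that kind of integration; the model name is an illustrative example, not one the package mandates:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_generate_request(model: str, prompt: str) -> dict:
    # Non-streaming generation request in the shape Ollama's
    # /api/generate endpoint expects.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    # Send the request to a locally running Ollama server and
    # return the generated text from the JSON response.
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Calling `generate("bielik", "Write a Python function that reverses a string.")` would then return the model's completion, provided an Ollama server is running locally with that model pulled.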
A knowledge base for LLMs, including Ollama running locally and Google ADK examples, with detailed descriptions and articles
A RAG built specifically for the Polish language. By default it uses Bielik 11b-8bit. All of the components (LLM, embeddings model, and reranker) are meant to be self-hosted.
A local, fully private RAG legal assistant running on a GPU. It uses the Bielik-11B model, the Qdrant vector database, and the Polish legal codes to generate precise legal opinions with source citations.
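The retrieve-then-generate flow such an assistant follows can be sketched without the actual model or a running Qdrant instance; the toy vectors, document store, and helper names below are stand-ins for illustration only:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, top_k=2):
    # Rank stored (source, text, vector) chunks by similarity to the query;
    # a real system would delegate this to Qdrant.
    ranked = sorted(store, key=lambda d: cosine(query_vec, d["vector"]), reverse=True)
    return ranked[:top_k]

def build_prompt(question, chunks):
    # Assemble a grounded prompt that asks the LLM to cite its sources,
    # which is how answers end up with citations to the legal codes.
    context = "\n".join(f"[{c['source']}] {c['text']}" for c in chunks)
    return (
        "Answer using only the context below and cite sources in brackets.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

# Toy store; a real deployment holds embedded passages of the legal codes.
store = [
    {"source": "KC art. 415", "text": "Liability for damage caused by fault.", "vector": [1.0, 0.0]},
    {"source": "KK art. 1", "text": "Conditions of criminal liability.", "vector": [0.0, 1.0]},
]
chunks = retrieve([0.9, 0.1], store, top_k=1)
prompt = build_prompt("Who is liable for damage caused by fault?", chunks)
```

The assembled prompt would then be sent to the local Bielik-11B model, so both retrieval and generation stay on-premises.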
🤖 Build and deploy OpenAI-compatible local FastAPI endpoints with SQL Server 2025 AI integration for seamless AI workflows and secure connections.
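"OpenAI-compatible" here means the local endpoint answers `POST /v1/chat/completions` with the chat-completion JSON shape that OpenAI client libraries expect. A stdlib-only sketch of building that response body (the model name and ID prefix are illustrative, not taken from this repository):

```python
import time
import uuid

def chat_completion_response(model: str, content: str) -> dict:
    # Minimal response body in the OpenAI chat-completions format;
    # returning this shape is what makes a local endpoint compatible
    # with off-the-shelf OpenAI clients.
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": content},
                "finish_reason": "stop",
            }
        ],
    }

resp = chat_completion_response("local-model", "Hello from the local endpoint.")
```

A FastAPI route would simply return this dictionary from its `/v1/chat/completions` handler after running the model.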