This is a vibe-coded application that serves as an HTTP API for CLIP models such as those from Jina or OpenAI. It uses the sentence-transformers library with the `clip-ViT-B-32` model to generate high-quality, multimodal embeddings for both text and images.
The service is fully containerized using Docker and Docker Compose, and it utilizes a persistent volume to cache the large model file, ensuring fast startup times after the initial download.
Prerequisites:

- Docker
- Docker Compose
The service uses the following environment variables:
- `MODEL_NAME`: The name of the model to use. Defaults to `clip-ViT-B-32`.
- `TRANSFORMERS_CACHE`: The path to the persistent volume where the model cache will be stored.
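For example, assuming `docker/compose.yml` forwards `MODEL_NAME` from the shell into the container (worth verifying in the compose file), you could select a different sentence-transformers CLIP model at launch:

```bash
MODEL_NAME=clip-ViT-L-14 docker compose -f docker/compose.yml up --build -d
```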
- Build and Run: Execute the following command in the root directory:

  ```bash
  docker compose -f docker/compose.yml up --build -d
  ```

  The first run will download the `clip-ViT-B-32` model by default (or the model set via the `MODEL_NAME` environment variable; approx. 600 MB) and store it in the persistent `model_cache` volume.

- Access the API: The service will be available at `http://localhost:8000`. A quick way to verify it is up is sketched below.
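As a minimal Python check against the health endpoint documented below (the exact response body is not specified here, so it is simply printed):

```python
import requests

# Smoke test: /health reports whether the service is running and the model is loaded.
resp = requests.get("http://localhost:8000/health")
print(resp.status_code, resp.text)
```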
Load balancing is handled by nginx, and a set of replica containers is created; requests to the nginx server are automatically routed to one of the instances.

Note: To avoid multiple instances downloading the default model at the same time, it is better to run the single-instance compose file first to populate the volume with the model files. After that, every instance will simply read and load its own copy of the model into VRAM. A typical sequence is sketched below.

Note: Ensure that you have enough VRAM to hold all the model copies; otherwise some instances might crash or spill into system RAM.
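Using only the compose files mentioned in this README, that sequence looks like this:

```bash
# 1. Populate the model_cache volume with a single instance (downloads the model once).
docker compose -f docker/compose.yml up --build -d

# 2. Once the download has finished, stop the single instance.
docker compose -f docker/compose.yml down

# 3. Start the load-balanced stack; each replica loads the cached model instead of downloading it.
docker compose -f docker/compose_load_balanced.yml up --build -d
```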
- Build and Run: Execute the following command in the root directory:

  ```bash
  docker compose -f docker/compose_load_balanced.yml up --build -d
  ```

  This will also download the default model if the `model_cache` volume has not been populated yet (see the note above).

- Access the API: The service will be available at `http://localhost:8004`.
The service exposes three primary endpoints for generating embeddings, plus a health check.
- Endpoint: `POST /embed/text`
- Description: Generates embeddings for a list of text strings.
- Request Body (`application/json`):

  ```json
  {
    "texts": [
      "A photo of a cat sitting on a couch.",
      "The quick brown fox jumps over the lazy dog."
    ]
  }
  ```

- Response Body (`application/json`):

  ```json
  {
    "embeddings": [
      [0.123, 0.456, ...],
      [0.789, 0.101, ...]
    ],
    "model": "clip-ViT-B-32"
  }
  ```
- Endpoint: `POST /embed/image`
- Description: Downloads images from provided URLs and generates embeddings.
- Request Body (`application/json`):

  ```json
  {
    "image_urls": [
      "https://example.com/image1.jpg",
      "https://example.com/image2.png"
    ]
  }
  ```

- Response Body (`application/json`):

  ```json
  {
    "embeddings": [
      [0.123, 0.456, ...],
      [0.789, 0.101, ...]
    ],
    "model": "clip-ViT-B-32"
  }
  ```
- Endpoint: `POST /v1/embeddings`
- Description: Generates embeddings for images using an OpenAI-compatible API interface.
- Request Body (`application/json`):

  ```json
  {
    "model": "clip-ViT-B-32",
    "input": [
      "https://example.com/image1.jpg",
      "https://example.com/image2.png"
    ]
  }
  ```

- Response Body (`application/json`):

  ```json
  {
    "object": "list",
    "data": [
      { "object": "embedding", "embedding": [0.123, 0.456, ...], "index": 0 },
      { "object": "embedding", "embedding": [0.789, 0.101, ...], "index": 1 }
    ],
    "model": "clip-ViT-B-32",
    "usage": { "prompt_tokens": 0, "total_tokens": 0 }
  }
  ```
- Endpoint: `GET /health`
- Description: Checks if the service is running and the model is loaded.
We welcome contributions! Please follow these basic rules:
- Fork the repository and create your feature branch (`git checkout -b feature/AmazingFeature`).
- Ensure your code adheres to the existing style and conventions (Python, FastAPI).
- Write clear, concise commit messages.
- Open a Pull Request describing your changes.
This project is licensed under the MIT License. See the LICENSE file for details.