CLIP

[Blog] [Paper] [Model Card] [Colab]

CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and 3. We found CLIP matches the performance of the original ResNet50 on ImageNet “zero-shot” without using any of the original 1.28M labeled examples, overcoming several major challenges in computer vision.

Approach

[Figure: overview of the CLIP approach (CLIP.png)]

Usage

First, install PyTorch 1.7.1 (or later) and torchvision, as well as small additional dependencies, and then install this repo as a Python package. On a CUDA GPU machine, the following will do the trick:

$ conda install --yes -c pytorch pytorch=1.7.1 torchvision cudatoolkit=11.0
$ pip install ftfy regex tqdm
$ pip install git+https://github.com/openai/CLIP.git

Replace cudatoolkit=11.0 above with the appropriate CUDA version on your machine or cpuonly when installing on a machine without a GPU.

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1).cpu().numpy()

print("Label probs:", probs)  # prints: [[0.9927937  0.00421068 0.00299572]]

Yiqing demo

To run this project on ComputeCanada, here is a demo. I wrote loadClipModel.py to load the model I want to use.
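
As a rough illustration, loadClipModel.py might look like the following minimal sketch, assuming it simply wraps clip.load() so that other scripts share one loading path (the function name and default model are assumptions, not the actual file contents):

# loadClipModel.py -- hypothetical sketch of a shared model-loading helper
import torch
import clip

def load_clip_model(name="ViT-B/32", device=None):
    """Load a CLIP model and its preprocessing transform on the chosen device."""
    if device is None:
        device = "cuda" if torch.cuda.is_available() else "cpu"
    model, preprocess = clip.load(name, device=device)
    return model, preprocess, device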

  1. Zip the necessary data, models, and code as follows:
zip -r Tmp.zip clip houses patches_output tests CLIP.png hubconf.py requirements.txt yiqing_test.py yiqing_test2.py
  2. The cliptest2.sh job script runs yiqing_test.py and yiqing_test2.py together and copies the patches_output folder back:
chmod 600 cliptest2.sh
sbatch cliptest2.sh

API

The CLIP module clip provides the following methods:

clip.available_models()

Returns the names of the available CLIP models.
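
For example (the exact list of names depends on the installed version):

import clip

# Lists the model names accepted by clip.load(), e.g. 'RN50', 'ViT-B/32', ...
print(clip.available_models())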

clip.load(name, device=..., jit=False)

Returns the model and the TorchVision transform needed by the model, specified by the model name returned by clip.available_models(). It will download the model as necessary. The name argument can also be a path to a local checkpoint.

The device to run the model can be optionally specified, and the default is to use the first CUDA device if there is any, otherwise the CPU. When jit is False, a non-JIT version of the model will be loaded.

clip.tokenize(text: Union[str, List[str]], context_length=77)

Returns a LongTensor containing the tokenized sequences of the given text input(s). This can be used as the input to the model.
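
For example, tokenizing two strings produces a tensor of shape (2, 77) with the default context_length:

import clip

tokens = clip.tokenize(["a diagram", "a dog"])
print(tokens.shape)  # torch.Size([2, 77])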


The model returned by clip.load() supports the following methods:

model.encode_image(image: Tensor)

Given a batch of images, returns the image features encoded by the vision portion of the CLIP model.

model.encode_text(text: Tensor)

Given a batch of text tokens, returns the text features encoded by the language portion of the CLIP model.

model(image: Tensor, text: Tensor)

Given a batch of images and a batch of text tokens, returns two Tensors, containing the logit scores corresponding to each image and text input. The values are cosine similarities between the corresponding image and text features, times 100.
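
As a quick check of that relationship, the minimal sketch below recomputes the logits from the normalized features and the model's learned logit scale (approximately 100 for the released weights):

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    logits_per_image, _ = model(image, text)

    # Normalize the features, then scale the cosine similarities by the
    # learned temperature; this reproduces the logits returned by model().
    image_features = image_features / image_features.norm(dim=-1, keepdim=True)
    text_features = text_features / text_features.norm(dim=-1, keepdim=True)
    manual_logits = model.logit_scale.exp() * image_features @ text_features.t()

print(torch.allclose(manual_logits, logits_per_image, atol=1e-2))  # expected: True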

More Examples

Zero-Shot Prediction

The code below performs zero-shot prediction using CLIP, as shown in Appendix B in the paper. This example takes an image from the CIFAR-100 dataset, and predicts the most likely labels among the 100 textual labels from the dataset.

import os
import clip
import torch
from torchvision.datasets import CIFAR100

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load('ViT-B/32', device)

# Download the dataset
cifar100 = CIFAR100(root=os.path.expanduser("~/.cache"), download=True, train=False)

# Prepare the inputs
image, class_id = cifar100[3637]
image_input = preprocess(image).unsqueeze(0).to(device)
text_inputs = torch.cat([clip.tokenize(f"a photo of a {c}") for c in cifar100.classes]).to(device)

# Calculate features
with torch.no_grad():
    image_features = model.encode_image(image_input)
    text_features = model.encode_text(text_inputs)

# Pick the top 5 most similar labels for the image
image_features /= image_features.norm(dim=-1, keepdim=True)
text_features /= text_features.norm(dim=-1, keepdim=True)
similarity = (100.0 * image_features @ text_features.T).softmax(dim=-1)
values, indices = similarity[0].topk(5)

# Print the result
print("\nTop predictions:\n")
for value, index in zip(values, indices):
    print(f"{cifar100.classes[index]:>16s}: {100 * value.item():.2f}%")

The output will look like the following (the exact numbers may be slightly different depending on the compute device):

Top predictions:

           snake: 65.31%
          turtle: 12.29%
    sweet_pepper: 3.83%
          lizard: 1.88%
       crocodile: 1.75%

Note that this example uses the encode_image() and encode_text() methods that return the encoded features of given inputs.

Linear-probe evaluation

The example below uses scikit-learn to perform logistic regression on image features.

import os
import clip
import torch

import numpy as np
from sklearn.linear_model import LogisticRegression
from torch.utils.data import DataLoader
from torchvision.datasets import CIFAR100
from tqdm import tqdm

# Load the model
device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load('ViT-B/32', device)

# Load the dataset
root = os.path.expanduser("~/.cache")
train = CIFAR100(root, download=True, train=True, transform=preprocess)
test = CIFAR100(root, download=True, train=False, transform=preprocess)


def get_features(dataset):
    all_features = []
    all_labels = []
    
    with torch.no_grad():
        for images, labels in tqdm(DataLoader(dataset, batch_size=100)):
            features = model.encode_image(images.to(device))

            all_features.append(features)
            all_labels.append(labels)

    return torch.cat(all_features).cpu().numpy(), torch.cat(all_labels).cpu().numpy()

# Calculate the image features
train_features, train_labels = get_features(train)
test_features, test_labels = get_features(test)

# Perform logistic regression
classifier = LogisticRegression(random_state=0, C=0.316, max_iter=1000, verbose=1)
classifier.fit(train_features, train_labels)

# Evaluate using the logistic regression classifier
predictions = classifier.predict(test_features)
accuracy = np.mean((test_labels == predictions).astype(float)) * 100.
print(f"Accuracy = {accuracy:.3f}")

Note that the C value should be determined via a hyperparameter sweep using a validation split.
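
A minimal sketch of such a sweep, reusing the train_features and train_labels computed above and holding out part of the training set as a validation split (the C grid is illustrative):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hold out 10% of the training features as a validation split.
X_train, X_val, y_train, y_val = train_test_split(
    train_features, train_labels, test_size=0.1, random_state=0)

best_C, best_acc = None, -1.0
for C in np.logspace(-3, 3, 7):  # 0.001, 0.01, ..., 1000
    clf = LogisticRegression(random_state=0, C=C, max_iter=1000)
    clf.fit(X_train, y_train)
    acc = clf.score(X_val, y_val)
    if acc > best_acc:
        best_C, best_acc = C, acc

print(f"Best C = {best_C} (validation accuracy = {100 * best_acc:.2f}%)")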

See Also

This fork implements pipelines for YOLO_CLIP and non-overlap geometric-crop CLIP (GEO_CLIP).

We use them on the COCO-Search18 dataset. Since I use ComputeCanada to get the results, here are the steps for each pipeline.

YOLO_CLIP (Set thresholds for the 18 categories based on the training data (70%))

  • Store the cosine similarities between the detected patches and the task text in the YOLOCLIP_train_sims.json file (a sketch of this computation follows this section). Each item has the following format (an example of the structure):
{
  'task' : 'bottle',                     # target category (18 total)
  'name' : '000000478726.jpg',           # image name
  'bbox' : [1063, 68, 95, 334],          # [x, y, w, h] bounding box of the target object in the image
  'pred_boxes'  : [[...],[...],[...],[...],[...]],               # list of YOLO detected boxes eg. [x1,y1,x2,y2]
  'similarities': [0.1848, 0.3203, 1.0000, 0.6230, 0.2622],      # per detection sim (YOLO_square_patches vs. task_text)
  'signal_matched_index': 2,             # index of the detection that matches the signal; -1 if none
}
  • Use ComputeCanada to compute the similarity thresholds and store them in output/category_YOLOClip_thresholds.csv.
  • Prepare the submission to ComputeCanada:
  1. Zip the necessary data, models, and code as follows:
zip -r Tmp.zip clip new_jsonFile yolov8x.pt COCOSearch18-images-TP.zip src pipelines requirements.txt
  2. The Set_18thresholdsYOLO.sh job script runs pipelines/Set_18thresholdsYOLO.py and copies the output folder back:
chmod 600 Set_18thresholdsYOLO.sh
sbatch Set_18thresholdsYOLO.sh
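
The sketch below illustrates the patch-to-task similarity computation described above. It is not the actual pipeline code: the function name is hypothetical, it assumes the ultralytics package for the yolov8x.pt detector, and it uses plain axis-aligned crops (the pipeline's square-patch handling may differ).

import torch
import clip
from PIL import Image
from ultralytics import YOLO  # assumed package for the yolov8x.pt weights

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)
detector = YOLO("yolov8x.pt")

def patch_similarities(image_path, task):
    """Cosine similarity between each YOLO-detected patch and the task text."""
    image = Image.open(image_path).convert("RGB")
    pred_boxes = detector(image_path, verbose=False)[0].boxes.xyxy.tolist()  # [x1, y1, x2, y2]

    text = clip.tokenize([f"a photo of a {task}"]).to(device)
    sims = []
    with torch.no_grad():
        text_feat = clip_model.encode_text(text)
        text_feat /= text_feat.norm(dim=-1, keepdim=True)
        for box in pred_boxes:
            x1, y1, x2, y2 = map(int, box)
            patch = preprocess(image.crop((x1, y1, x2, y2))).unsqueeze(0).to(device)
            patch_feat = clip_model.encode_image(patch)
            patch_feat /= patch_feat.norm(dim=-1, keepdim=True)
            sims.append((patch_feat @ text_feat.T).item())
    return pred_boxes, sims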

YOLO_CLIP (Check whether a single input image contains the target category)

  • Use ComputeCanada to run pipelines/YOLOclip.py; the thresholds for the 18 COCO categories come from output/category_YOLOClip_thresholds.csv (a sketch of the decision step follows this section). If the set_thresholds method is updated, make sure to also upload the new threshold values.
  1. Zip the necessary data, models, and code as follows:
zip -r Tmp.zip clip yolov8x.pt COCOSearch18-images-TP.zip src pipelines output requirements.txt
  2. The YOLOclip.sh job script runs YOLOclip.py and prints the results. Some output images for checking are copied back to the local machine:
chmod 600 YOLOclip.sh
sbatch YOLOclip.sh
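
A hypothetical sketch of the decision step, reusing patch_similarities() from the sketch above and assuming the thresholds CSV has "category" and "threshold" columns (the real column names may differ). The GEO_CLIP decision step works the same way, reading output/category_GEOClip_thresholds.csv instead.

import csv

def load_thresholds(path="output/category_YOLOClip_thresholds.csv"):
    # Assumed CSV layout: one row per category with "category" and "threshold" columns.
    with open(path, newline="") as f:
        return {row["category"]: float(row["threshold"]) for row in csv.DictReader(f)}

def contains_target(image_path, task, thresholds):
    """True if any detected patch is at least as similar to the task as the category threshold."""
    _, sims = patch_similarities(image_path, task)
    return bool(sims) and max(sims) >= thresholds[task]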

GEO_CLIP (Set thresholds for the 18 categories based on the training data (70%))

  • Store the cosine similarities between the geometric crop patches and the task text in the GEOCLIP_train_sims.json file (a sketch of the cropping and similarity computation follows this section). Each item has the following format (an example of the structure):
{
  'task' : 'bottle',                     # target category (18 total)
  'name' : '000000478726.jpg',           # image name
  'bbox' : [1063, 68, 95, 334],          # [x, y, w, h] bounding box of the target object in the image
  'pred_boxes'  : [[...],[...],[...],[...],[...]],               # list of GEO_cropped boxes eg. [x1,y1,x2,y2]
  'similarities': [0.1848, 0.3203, 1.0000, 0.6230, 0.2622],      # per detection sim (GEO_crop_patches vs. task_text)
  'signal_matched_index': 2,             # index of the detection that matches the signal; -1 if none
}
  • Use ComputeCanada to compute the similarity thresholds and store them in output/category_GEOClip_thresholds.csv.
  • Prepare the submission to ComputeCanada:
  1. Zip the necessary data, models, and code as follows:
zip -r Tmp.zip clip new_jsonFile COCOSearch18-images-TP.zip src pipelines requirements.txt
  2. The Set_18thresholdsGEO.sh job script runs pipelines/Set_18thresholdsGEO.py and copies the output folder back:
chmod 600 Set_18thresholdsGEO.sh
sbatch Set_18thresholdsGEO.sh
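
A minimal sketch of the geometric cropping described above, assuming a simple non-overlapping grid of patches (the grid size and the exact cropping scheme used by pipelines/Set_18thresholdsGEO.py are assumptions):

import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, preprocess = clip.load("ViT-B/32", device=device)

def geo_patch_similarities(image_path, task, rows=3, cols=3):
    """Cosine similarity between each non-overlapping grid crop and the task text."""
    image = Image.open(image_path).convert("RGB")
    w, h = image.size
    pred_boxes = [(c * w // cols, r * h // rows, (c + 1) * w // cols, (r + 1) * h // rows)
                  for r in range(rows) for c in range(cols)]  # non-overlapping [x1, y1, x2, y2]

    text = clip.tokenize([f"a photo of a {task}"]).to(device)
    sims = []
    with torch.no_grad():
        text_feat = clip_model.encode_text(text)
        text_feat /= text_feat.norm(dim=-1, keepdim=True)
        for box in pred_boxes:
            patch = preprocess(image.crop(box)).unsqueeze(0).to(device)
            patch_feat = clip_model.encode_image(patch)
            patch_feat /= patch_feat.norm(dim=-1, keepdim=True)
            sims.append((patch_feat @ text_feat.T).item())
    return pred_boxes, sims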

GEO_CLIP (Check whether a single new input image contains the target category)

  • Use ComputeCanada to run pipelines/clipPipeline.py; the thresholds for the 18 COCO categories come from output/category_GEOClip_thresholds.csv. If the set_thresholds method is updated, make sure to also upload the new threshold values.
  1. Zip the necessary data, models, and code as follows:
zip -r Tmp.zip clip COCOSearch18-images-TP.zip src pipelines output requirements.txt
  2. The clipPipeline.sh job script runs clipPipeline.py and prints the results. Some output images for checking are copied back to the local machine:
chmod 600 clipPipeline.sh
sbatch clipPipeline.sh
