Distinction between multi_class_dice_loss & avg_dice_loss #27

@gattia

Description

I was looking through the code and, after a quick look, thought I should probably use multi_class_dice_loss with softmax, but it wasn't an option, so I added it. Then I wondered why it wasn't an option and did a closer comparison of multi_class_dice_loss and avg_dice_loss. Below I summarize what I interpret the main difference between them to be.

My general take is that I'd be inclined to use avg_dice_loss because it is less likely to wash out a few bad results on segments with only a few pixels (e.g., if one image in a batch has thick, healthy patellar cartilage and the other has barely any, multi_class_dice_loss would be skewed toward the healthy image, I think?). I'm curious about the original rationale for the distinction between them and whether I might be missing something.

My take is:
multi_class_dice_loss

  • Flattens the batch of images into shape [batch_size * product(image_dims), n_classes].
  • Uses this to calculate a dice loss per class, which effectively treats all images in the batch as one image.
    • Shape after dice loss = [n_classes,]
  • Averages the dice losses across classes to get a single scalar value.
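To make sure I'm reading it right, here is a minimal NumPy sketch of that behaviour (the function name, argument order, and smoothing term are my own assumptions, not necessarily what the repo uses):

```python
import numpy as np

def multi_class_dice_loss(y_true, y_pred, smooth=1e-7):
    # Hypothetical sketch: dice computed per class over the pooled batch.
    n_classes = y_true.shape[-1]
    # Flatten batch and spatial dims together:
    # [batch_size * product(image_dims), n_classes]
    y_true = y_true.reshape(-1, n_classes)
    y_pred = y_pred.reshape(-1, n_classes)
    # One dice score per class, pooled over the whole batch -> shape [n_classes,]
    intersection = np.sum(y_true * y_pred, axis=0)
    denom = np.sum(y_true, axis=0) + np.sum(y_pred, axis=0)
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    # Average over classes to get a single scalar loss
    return 1.0 - np.mean(dice)
```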

avg_dice_loss

  • Reshapes the batch of images into shape [batch_size, product(image_dims), n_classes].
  • Calculates a dice loss per image and per class.
    • Shape after dice loss = [batch_size, n_classes]
  • Averages the dice losses across both images and classes.
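And the corresponding sketch for avg_dice_loss under the same assumptions (again, my own naming and smoothing, just to show where the reductions happen):

```python
import numpy as np

def avg_dice_loss(y_true, y_pred, smooth=1e-7):
    # Hypothetical sketch: dice computed per image and per class.
    batch_size, n_classes = y_true.shape[0], y_true.shape[-1]
    # Keep the batch dimension: [batch_size, product(image_dims), n_classes]
    y_true = y_true.reshape(batch_size, -1, n_classes)
    y_pred = y_pred.reshape(batch_size, -1, n_classes)
    # One dice score per image and per class -> shape [batch_size, n_classes]
    intersection = np.sum(y_true * y_pred, axis=1)
    denom = np.sum(y_true, axis=1) + np.sum(y_pred, axis=1)
    dice = (2.0 * intersection + smooth) / (denom + smooth)
    # Average over both images and classes
    return 1.0 - np.mean(dice)
```

If that reading is correct, the only difference is whether the pixel sums are pooled across the whole batch before the dice ratio is taken (multi_class) or computed per image and then averaged (avg), which is exactly where an image with only a few positive pixels would get washed out in the pooled version.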
