📉 Losses¶
Collection of popular semantic segmentation losses, adapted from pytorch-toolbelt, a repository of PyTorch utilities: https://github.com/BloodAxe/pytorch-toolbelt
Constants¶
- segmentation_models_pytorch.losses.constants.BINARY_MODE: str = 'binary'¶
Loss binary mode assumes you are solving a binary segmentation task: you have only one class, whose pixels are labeled as 1; the remaining pixels are background and labeled as 0. Target mask shape - (N, H, W), model output mask shape (N, 1, H, W).
- segmentation_models_pytorch.losses.constants.MULTICLASS_MODE: str = 'multiclass'¶
Loss multiclass mode assumes you are solving a multi-class segmentation task: you have C classes with unique label values, classes are mutually exclusive, and every pixel is labeled with one of these values. Target mask shape - (N, H, W), model output mask shape (N, C, H, W).
- segmentation_models_pytorch.losses.constants.MULTILABEL_MODE: str = 'multilabel'¶
Loss multilabel mode assumes you are solving a multi-label segmentation task: you have C classes whose pixels are labeled as 1, classes are not mutually exclusive, and each class has its own channel, in which pixels that do not belong to the class are labeled as 0. Target mask shape - (N, C, H, W), model output mask shape (N, C, H, W).
JaccardLoss¶
- class segmentation_models_pytorch.losses.JaccardLoss(mode, classes=None, log_loss=False, from_logits=True, smooth=0.0, eps=1e-07)[source]¶
Implementation of Jaccard loss for image segmentation tasks. It supports binary, multiclass and multilabel cases.
- Parameters
mode – Loss mode ‘binary’, ‘multiclass’ or ‘multilabel’
classes – List of classes that contribute to loss computation. By default, all channels are included.
log_loss – If True, loss computed as - log(jaccard_coeff), otherwise 1 - jaccard_coeff
from_logits – If True, assumes input is raw logits
smooth – Smoothness constant for jaccard coefficient
ignore_index – Label that indicates ignored pixels (does not contribute to loss)
eps – A small epsilon for numerical stability to avoid zero division error (denominator will be always greater or equal to eps)
- Shape
y_pred - torch.Tensor of shape (N, C, H, W)
y_true - torch.Tensor of shape (N, H, W) or (N, C, H, W)
DiceLoss¶
- class segmentation_models_pytorch.losses.DiceLoss(mode, classes=None, log_loss=False, from_logits=True, smooth=0.0, ignore_index=None, eps=1e-07)[source]¶
Implementation of Dice loss for image segmentation tasks. It supports binary, multiclass and multilabel cases.
- Parameters
mode – Loss mode ‘binary’, ‘multiclass’ or ‘multilabel’
classes – List of classes that contribute to loss computation. By default, all channels are included.
log_loss – If True, loss computed as - log(dice_coeff), otherwise 1 - dice_coeff
from_logits – If True, assumes input is raw logits
smooth – Smoothness constant for dice coefficient
ignore_index – Label that indicates ignored pixels (does not contribute to loss)
eps – A small epsilon for numerical stability to avoid zero division error (denominator will be always greater or equal to eps)
- Shape
y_pred - torch.Tensor of shape (N, C, H, W)
y_true - torch.Tensor of shape (N, H, W) or (N, C, H, W)
TverskyLoss¶
- class segmentation_models_pytorch.losses.TverskyLoss(mode, classes=None, log_loss=False, from_logits=True, smooth=0.0, ignore_index=None, eps=1e-07, alpha=0.5, beta=0.5, gamma=1.0)[source]¶
Implementation of Tversky loss for image segmentation tasks, where FPs and FNs are weighted by the alpha and beta params. With alpha == beta == 0.5, this loss is equivalent to DiceLoss. It supports binary, multiclass and multilabel cases.
- Parameters
mode – Loss mode ‘binary’, ‘multiclass’ or ‘multilabel’
classes – List of classes that contribute to loss computation. By default, all channels are included.
log_loss – If True, loss computed as - log(tversky_coeff), otherwise 1 - tversky_coeff
from_logits – If True, assumes input is raw logits
smooth – Smoothness constant for tversky coefficient
ignore_index – Label that indicates ignored pixels (does not contribute to loss)
eps – A small epsilon for numerical stability to avoid zero division error
alpha – Weight constant that penalizes model for FPs (False Positives)
beta – Weight constant that penalizes model for FNs (False Negatives)
gamma – Constant that raises the error function to a power. Defaults to 1.0
- Returns
loss
- Return type
torch.Tensor
FocalLoss¶
- class segmentation_models_pytorch.losses.FocalLoss(mode, alpha=None, gamma=2.0, ignore_index=None, reduction='mean', normalized=False, reduced_threshold=None)[source]¶
Compute Focal loss
- Parameters
mode – Loss mode ‘binary’, ‘multiclass’ or ‘multilabel’
alpha – Prior probability of having positive value in target.
gamma – Power factor for dampening weight (focal strength).
ignore_index – If not None, targets may contain values to be ignored. Target values equal to ignore_index will be ignored from loss computation.
normalized – Compute normalized focal loss (https://arxiv.org/pdf/1909.07829.pdf).
reduced_threshold – Switch to reduced focal loss. Note, when using this mode you should use reduction=”sum”.
- Shape
y_pred - torch.Tensor of shape (N, C, H, W)
y_true - torch.Tensor of shape (N, H, W) or (N, C, H, W)
LovaszLoss¶
- class segmentation_models_pytorch.losses.LovaszLoss(mode, per_image=False, ignore_index=None, from_logits=True)[source]¶
Implementation of Lovasz loss for image segmentation tasks. It supports binary, multiclass and multilabel cases.
- Parameters
mode – Loss mode ‘binary’, ‘multiclass’ or ‘multilabel’
ignore_index – Label that indicates ignored pixels (does not contribute to loss)
per_image – If True, loss is computed per image and then averaged, otherwise it is computed over the whole batch
- Shape
y_pred - torch.Tensor of shape (N, C, H, W)
y_true - torch.Tensor of shape (N, H, W) or (N, C, H, W)
SoftBCEWithLogitsLoss¶
- class segmentation_models_pytorch.losses.SoftBCEWithLogitsLoss(weight=None, ignore_index=-100, reduction='mean', smooth_factor=None, pos_weight=None)[source]¶
Drop-in replacement for torch.nn.BCEWithLogitsLoss with a few additions: ignore_index and label smoothing
- Parameters
ignore_index – Specifies a target value that is ignored and does not contribute to the input gradient.
smooth_factor – Factor to smooth target (e.g. if smooth_factor=0.1 then [1, 0, 1] -> [0.9, 0.1, 0.9])
- Shape
y_pred - torch.Tensor of shape (N, C, H, W)
y_true - torch.Tensor of shape (N, H, W) or (N, 1, H, W)
SoftCrossEntropyLoss¶
- class segmentation_models_pytorch.losses.SoftCrossEntropyLoss(reduction='mean', smooth_factor=None, ignore_index=-100, dim=1)[source]¶
Drop-in replacement for torch.nn.CrossEntropyLoss with label smoothing
- Parameters
smooth_factor – Factor to smooth target (e.g. if smooth_factor=0.1 then [1, 0, 0] -> [0.9, 0.05, 0.05])
- Shape
y_pred - torch.Tensor of shape (N, C, H, W)
y_true - torch.Tensor of shape (N, H, W)