Loss Functions

Additional loss functions

source

BCEWithLogitsLoss

 BCEWithLogitsLoss (weight:Tensor|None=None, reduction:str='mean',
                    pos_weight:Tensor|None=None)

Like nn.BCEWithLogitsLoss, but with the ‘batchmean’ reduction from MosaicML. ‘batchmean’ scales the loss by the batch size rather than the total element count, which yields larger loss values, closer in magnitude to nn.CrossEntropyLoss, than ‘mean’ reduction.

|  | Type | Default | Details |
|---|---|---|---|
| weight | Tensor \| None | None | Rescaling weight for each class |
| reduction | str | mean | Reduction to apply to loss output. Also supports ‘batchmean’. |
| pos_weight | Tensor \| None | None | Weight of positive examples |
| **Returns** | **None** | | |
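A minimal sketch of the ‘batchmean’ reduction using plain PyTorch (the function name `bce_with_logits_batchmean` is illustrative, not the library's implementation): it sums the per-element losses and divides by the batch size instead of the total element count.

```python
import torch
import torch.nn.functional as F

def bce_with_logits_batchmean(input, target, weight=None, pos_weight=None):
    # 'batchmean': sum per-element losses, then divide by the batch size
    # (unlike 'mean', which divides by the total number of elements)
    loss = F.binary_cross_entropy_with_logits(
        input, target, weight=weight, pos_weight=pos_weight, reduction='sum'
    )
    return loss / input.shape[0]

logits = torch.randn(8, 5)
targets = torch.randint(0, 2, (8, 5)).float()

mean_loss = F.binary_cross_entropy_with_logits(logits, targets)
batchmean_loss = bce_with_logits_batchmean(logits, targets)
# here batchmean is num_labels (5) times larger than 'mean'
```

With multi-label targets of shape `(batch, num_labels)`, ‘batchmean’ is exactly `num_labels` times the ‘mean’ loss, which is why its magnitude tracks nn.CrossEntropyLoss more closely.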

source

ClassBalancedCrossEntropyLoss

 ClassBalancedCrossEntropyLoss (samples_per_class:Tensor, beta:float=0.99,
                                ignore_index:int=-100,
                                reduction:str='mean',
                                label_smoothing:float=0.0, axis:int=-1)

Class Balanced Cross Entropy Loss, from https://arxiv.org/abs/1901.05555.
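The class-balanced weighting from the paper can be sketched as follows, assuming the standard effective-number formulation (the helper `class_balanced_weights` is illustrative, not the library's internal API): each class weight is proportional to `(1 - beta) / (1 - beta^n)`, where `n` is that class's sample count, then the weights are normalized to sum to the number of classes.

```python
import torch

def class_balanced_weights(samples_per_class, beta=0.99):
    # Effective number of samples per class: (1 - beta^n) / (1 - beta)
    effective_num = 1.0 - torch.pow(beta, samples_per_class.float())
    weights = (1.0 - beta) / effective_num
    # Normalize so the weights sum to the number of classes
    return weights / weights.sum() * len(samples_per_class)

samples = torch.tensor([1000, 100, 10])
weights = class_balanced_weights(samples, beta=0.99)
# rarer classes receive larger weights
loss_fn = torch.nn.CrossEntropyLoss(weight=weights)
```

As `beta` approaches 1 this converges to inverse-frequency weighting; at `beta = 0` every class is weighted equally.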


source

ClassBalancedBCEWithLogitsLoss

 ClassBalancedBCEWithLogitsLoss (samples_per_class:Tensor,
                                 beta:float=0.99, reduction:str='mean',
                                 thresh:float=0.5)

Class Balanced BCE With Logits Loss, from https://arxiv.org/abs/1901.05555, with ‘batchmean’ reduction.

|  | Type | Default | Details |
|---|---|---|---|
| samples_per_class | Tensor | | |
| beta | float | 0.99 | |
| reduction | str | mean | |
| thresh | float | 0.5 | Threshold for decodes |
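A sketch combining the two ideas above, assuming class-balanced weights are applied per label and the loss uses ‘batchmean’ reduction (the function `class_balanced_bce_batchmean` and its exact weighting are illustrative assumptions, not the library's implementation):

```python
import torch
import torch.nn.functional as F

def class_balanced_bce_batchmean(input, target, samples_per_class, beta=0.99):
    # Per-label weights from the effective number of samples
    effective_num = 1.0 - torch.pow(beta, samples_per_class.float())
    weights = (1.0 - beta) / effective_num
    weights = weights / weights.sum() * len(samples_per_class)
    # 'batchmean': sum, then divide by batch size
    loss = F.binary_cross_entropy_with_logits(
        input, target, weight=weights, reduction='sum'
    )
    return loss / input.shape[0]

logits = torch.randn(8, 3)
targets = torch.randint(0, 2, (8, 3)).float()
loss = class_balanced_bce_batchmean(logits, targets, torch.tensor([500, 50, 5]))

# thresh=0.5 is used when decoding predictions from sigmoid probabilities
preds = torch.sigmoid(logits) > 0.5
```

The `thresh` parameter only affects decoding predictions for metrics, not the loss value itself.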