lossFunctions module

Package with a bunch of loss function callbacks. If you’re planning to write your own loss function classes, you have to set the Learner’s loss and lossG fields. lossG is the original loss, still attached to the autograd graph (hence the “G”), while loss is simply lossG.detach().item(). This convention lets other utilities share a single detached loss value instead of each detaching the graph-attached one themselves, which is better for performance.
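
A minimal sketch of that contract, assuming the callback reaches the Learner through self.l and that the network output and labels live in y and yb (names taken from the parameter descriptions below; the exact attribute access and the Callback import path are assumptions):

    import torch.nn.functional as F
    from k1lib.callbacks import Callback

    class MyMSELoss(Callback):
        """Hypothetical custom loss callback following the lossG/loss contract."""
        def inLoss(self):
            # lossG stays attached to the autograd graph, so backprop can use it
            self.l.lossG = F.mse_loss(self.l.y, self.l.yb)
            # loss is the shared detached float that other utilities read
            self.l.loss = self.l.lossG.detach().item()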

shorts module

For not very complicated loss functions

class k1lib.callbacks.lossFunctions.shorts.LossF(lossF: Callable[[Tuple[Tensor, Tensor]], float])[source]

Bases: Callback

__init__(lossF: Callable[[Tuple[Tensor, Tensor]], float])[source]

Generic loss function. Expected variables in Learner:

  • y: batched output of the network

  • yb: batched labels

Deposits variables into Learner at checkpoint inLoss:

  • lossG: single float tensor value, attached to graph

  • loss: lossG, but as a detached, single float value

Parameters

  • lossF – takes in (y, yb) and returns lossG

inLoss()[source]
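
A short usage sketch, assuming y holds raw logits and yb holds integer class labels:

    import torch.nn.functional as F
    from k1lib.callbacks.lossFunctions.shorts import LossF

    # lossF receives the tuple (y, yb) and must return the graph-attached lossG
    cb = LossF(lambda o: F.cross_entropy(o[0], o[1]))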
class k1lib.callbacks.lossFunctions.shorts.LossNLLCross(nll: bool, integrations: bool)[source]

Bases: Callback

__init__(nll: bool, integrations: bool)[source]

Adds a cross-entropy/negative log likelihood loss function.

Parameters

  • nll – if True, use negative log likelihood loss; otherwise use cross-entropy loss

  • integrations – whether to enable integrations with related callbacks or not
attached()[source]

Called when this is added to a Callbacks object. Override this to do custom stuff when that happens.

inLoss()[source]
detach()[source]

Detaches from the parent Callbacks
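
To illustrate the nll distinction in plain PyTorch (independent of this callback): NLLLoss expects log-probabilities, while CrossEntropyLoss takes raw logits, so the two agree once a log-softmax is applied:

    import torch
    import torch.nn as nn

    y = torch.randn(4, 10)           # (N, C) raw network outputs
    yb = torch.randint(0, 10, (4,))  # (N,) integer labels

    # NLLLoss wants log-probabilities; CrossEntropyLoss applies log-softmax itself
    nll = nn.NLLLoss()(torch.log_softmax(y, dim=1), yb)
    ce = nn.CrossEntropyLoss()(y, yb)
    assert torch.isclose(nll, ce)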

accuracy module

For not very complicated accuracy functions

class k1lib.callbacks.lossFunctions.accuracy.AccF(predF: Optional[Callable[[Tensor], Tensor]] = None, accF: Optional[Callable[[Tuple[Tensor, Tensor]], float]] = None, integrations: bool = True, variable: str = 'accuracy', hookToLearner: bool = True)[source]

Bases: Callback

__init__(predF: Optional[Callable[[Tensor], Tensor]] = None, accF: Optional[Callable[[Tuple[Tensor, Tensor]], float]] = None, integrations: bool = True, variable: str = 'accuracy', hookToLearner: bool = True)[source]

Generic accuracy function.

Built-in default accuracy functions are fine if you aren’t doing anything too dramatic/different. Expected variables in Learner:

  • y: (N, C) tensor output of the network

  • yb: (N,) tensor of labels

Deposits variables into Learner:

  • preds: detached, batched tensor output of predF

  • accuracies: detached, batched tensor output of accF

  • accuracy: detached, single float, mean of accuracies

Where:

  • N is the batch size. Can be multidimensional, but has to agree between y and yb

  • C is the number of categories

Parameters

  • predF – takes in y, returns predictions (tensor with int elements indicating the categories)

  • accF – takes in (predictions, yb), returns accuracies (tensor with 0 or 1 elements)

  • integrations – whether to integrate ConfusionMatrix or not

  • variable – name of the variable to deposit the accuracy into (default 'accuracy')
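
A usage sketch with both hooks spelled out explicitly, roughly what sensible defaults would look like (the library's actual defaults are an assumption here; check the source):

    from k1lib.callbacks.lossFunctions.accuracy import AccF

    # Hypothetical explicit hooks: pick the highest-scoring category,
    # then compare element-wise against the labels
    cb = AccF(
        predF=lambda y: y.argmax(dim=1),
        accF=lambda o: (o[0] == o[1]).float(),
    )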

attached()[source]

Called when this is added to a Callbacks object. Override this to do custom stuff when that happens.

endLoss()[source]
detach()[source]

Detaches from the parent Callbacks