micro_dl.train module

Submodules

micro_dl.train.learning_rates module

class micro_dl.train.learning_rates.CyclicLearning(base_lr=0.001, max_lr=0.006, step_size=2.0, gamma=1.0, scale_mode='cycle')

Bases: Callback

Custom Callback implementing cyclical learning rate (CLR) as in the paper: https://arxiv.org/abs/1506.01186.

Learning rate is increased then decreased in a repeating triangular pattern over time. One triangle = one cycle. step_size is the number of iterations / batches in half a cycle. The paper empirically recommends a step_size of 2-10 times the number of batches in an epoch, i.e. half a cycle spans 2-10 epochs. It is also best to stop training at the end of a cycle, when the learning rate is at its minimum value and accuracy/performance potentially peaks. The initial amplitude is scaled by gamma ** iterations. See https://keras.io/callbacks/ and https://github.com/bckenstler/CLR
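
A minimal usage sketch attaching the callback to a toy Keras model (the model and data below are placeholders, not part of micro_dl; the constructor arguments mirror the defaults above):

    import numpy as np
    from tensorflow.keras import layers, models
    from micro_dl.train.learning_rates import CyclicLearning

    # Toy regression model and data; stand-ins for a real microDL training setup
    model = models.Sequential([layers.Dense(8, activation='relu', input_shape=(4,)),
                               layers.Dense(1)])
    model.compile(optimizer='adam', loss='mse')
    x_train = np.random.rand(64, 4)
    y_train = np.random.rand(64, 1)

    # step_size is the half-cycle length (see the class docstring above for units)
    clr = CyclicLearning(base_lr=0.001, max_lr=0.006, step_size=2.0,
                         gamma=1.0, scale_mode='cycle')
    model.fit(x_train, y_train, epochs=4, batch_size=16, callbacks=[clr])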

clr()

Updates the cyclic learning rate with exponential decline.

Return float clr:

Learning rate as a function of iterations
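
As a reference for the schedule itself, a minimal sketch of the triangular CLR formula with exponentially decaying amplitude, following the bckenstler/CLR reference above (how iterations are counted inside clr() may differ):

    import numpy as np

    def triangular_clr(iteration, base_lr=0.001, max_lr=0.006, step_size=2.0, gamma=1.0):
        """Triangular cyclic learning rate with exponentially decaying amplitude."""
        # One cycle spans 2 * step_size iterations
        cycle = np.floor(1 + iteration / (2 * step_size))
        # x goes 0 -> 1 -> 0 within each cycle
        x = np.abs(iteration / step_size - 2 * cycle + 1)
        scale = gamma ** iteration  # exponential decline of the amplitude
        return base_lr + (max_lr - base_lr) * max(0.0, 1 - x) * scale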

on_batch_end(batch, logs=None)

Updates the learning rate at the end of each batch. Prints learning rate along with other metrics during training.

Parameters:
  • batch – Batch number from Callback super class

  • logs – Log from super class (required but not used here)

on_epoch_end(epoch, logs)

Logs the learning rate at the end of each epoch for TensorBoard and CSVLogger.

Parameters:
  • epoch – Epoch number from Callback super class

  • logs – Log from super class

on_train_begin(logs=None)

Set base learning rate at the beginning of training.

Parameters:

logs – Logging from super class

micro_dl.train.losses module

Custom losses

micro_dl.train.losses.binary_crossentropy_loss(y_true, y_pred, mean_loss=True)

Binary cross entropy loss

Parameters:
  • y_true – Ground truth

  • y_pred – Prediction

Return float:

Binary cross entropy loss

micro_dl.train.losses.dice_coef_loss(y_true, y_pred)

The Dice loss is defined as 1 - DSC, since DSC lies in the range [0, 1] with 1 indicating perfect overlap and the goal is to minimize the loss.

Parameters:
  • y_true – true values

  • y_pred – predicted values

Returns:

Dice loss

micro_dl.train.losses.dssim_loss(y_true, y_pred)

Structural dissimilarity (DSSIM) loss + L1 loss. DSSIM is defined as (1 - SSIM) / 2. https://en.wikipedia.org/wiki/Structural_similarity

Parameters:
  • y_true (tensor) – Labeled ground truth

  • y_pred (tensor) – Predicted labels, potentially non-binary

Return float:

0.8 * DSSIM + 0.2 * L1
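
A minimal sketch of this combination using tf.image.ssim, assuming 4D channels-last tensors scaled to [0, 1] (max_val and the reduction axes are illustrative assumptions, not necessarily what dssim_loss uses):

    import tensorflow as tf

    def dssim_l1_sketch(y_true, y_pred, max_val=1.0):
        # Per-image SSIM over the batch, converted to dissimilarity
        ssim = tf.image.ssim(y_true, y_pred, max_val=max_val)
        dssim = (1.0 - ssim) / 2.0
        # Per-image mean absolute error
        l1 = tf.reduce_mean(tf.abs(y_true - y_pred), axis=(1, 2, 3))
        return 0.8 * dssim + 0.2 * l1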

micro_dl.train.losses.kl_divergence_loss(y_true, y_pred)

KL divergence loss D(y||y’) = sum(p(y)*log(p(y)/p(y’))

Parameters:
  • y_true – Ground truth

  • y_pred – Prediction

Return float:

KL divergence loss
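
A minimal sketch of the formula above in the Keras backend (the clipping is an assumption added here to avoid log(0); the packaged kl_divergence_loss may normalize differently):

    from tensorflow.keras import backend as K

    def kl_divergence_sketch(y_true, y_pred):
        # Clip to avoid log(0) and division by zero
        y_true = K.clip(y_true, K.epsilon(), 1.0)
        y_pred = K.clip(y_pred, K.epsilon(), 1.0)
        # D(y || y') = sum(p(y) * log(p(y) / p(y')))
        return K.sum(y_true * K.log(y_true / y_pred), axis=-1)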

micro_dl.train.losses.latent_loss(dummy_ground_truth, outputs)
micro_dl.train.losses.mae_loss(y_true, y_pred, mean_loss=True)

Mean absolute error

Keras losses by default calculate metrics along axis=-1, which works with image_data_format='channels_last'. The arrays do not appear to be flattened across the batch, so change the axis if using 'channels_first'.
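
A minimal sketch of this behavior (the axis choice reflects the channels_last note above; the mean_loss handling is an assumption based on the signature):

    from tensorflow.keras import backend as K

    def mae_sketch(y_true, y_pred, mean_loss=True):
        loss = K.abs(y_pred - y_true)
        if mean_loss:
            # Reduce along the last axis, i.e. channels when using channels_last
            loss = K.mean(loss, axis=-1)
        return loss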

micro_dl.train.losses.masked_loss(loss_fn, n_channels)

Converts a loss function into a mask-weighted loss function

Loss is multiplied by mask. Mask could be binary, discrete or float. Provides different weighting of loss according to the mask. https://github.com/keras-team/keras/blob/master/keras/engine/training_utils.py https://github.com/keras-team/keras/issues/3270 https://stackoverflow.com/questions/46858016/keras-custom-loss-function-to-pass-arguments-other-than-y-true-and-y-pred

Nested functions -> closures: a closure is a function object that remembers values in enclosing scopes even if they are not present in memory (read-only access). Histogram and logical operators are not differentiable, so avoid them in the loss. For debugging, the modified loss can be printed with tf.Print(modified_loss, [modified_loss], message='modified_loss', summarize=16).

Parameters:
  • loss_fn (Function) – a loss function that returns a loss image to be multiplied by mask

  • n_channels (int) – number of channels in y_true. The mask is added as the last channel in y_true

Returns:

Function masked_loss_fn
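
A minimal sketch of the closure pattern described above, assuming a channels_first layout where the mask occupies the final channel of y_true (adjust the split axis for channels_last; the exact reduction is also an assumption):

    from tensorflow.keras import backend as K

    def masked_loss_sketch(loss_fn, n_channels):
        def masked_loss_fn(y_total, y_pred):
            # Split the stacked target into the true channels and the mask
            y_true = y_total[:, :n_channels, ...]
            mask = y_total[:, n_channels:, ...]
            # Weight the per-pixel loss image by the mask and average
            return K.mean(loss_fn(y_true, y_pred) * mask)
        return masked_loss_fn

The returned closure can then be passed to model.compile(loss=masked_loss_sketch(some_loss, n_channels)), with the mask concatenated to y_true by the data generator.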

micro_dl.train.losses.ms_ssim_loss(y_true, y_pred)

Multiscale structural dissimilarity (MS-SSIM) loss + L1 loss. Uses the same combination weights as the original paper by Wang et al.: https://live.ece.utexas.edu/publications/2003/zw_asil2003_msssim.pdf TensorFlow doesn't have a 3D version, so for stacks the MS-SSIM is the mean over individual slices.

Parameters:
  • y_true (tensor) – Labeled ground truth

  • y_pred (tensor) – Predicted labels, potentially non-binary

Return float:

ms-ssim loss
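
A minimal sketch using tf.image.ssim_multiscale, whose default power factors come from the Wang et al. paper. The 0.8 / 0.2 dissimilarity + L1 split is borrowed from dssim_loss above as an assumption, as are max_val and the 4D channels-last layout:

    import tensorflow as tf

    def ms_ssim_l1_sketch(y_true, y_pred, max_val=1.0):
        ms_ssim = tf.image.ssim_multiscale(y_true, y_pred, max_val=max_val)
        ms_dssim = (1.0 - ms_ssim) / 2.0
        l1 = tf.reduce_mean(tf.abs(y_true - y_pred), axis=(1, 2, 3))
        return 0.8 * ms_dssim + 0.2 * l1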

micro_dl.train.losses.mse_loss(y_true, y_pred, mean_loss=True)

Mean squared error loss

Parameters:
  • y_true – Ground truth

  • y_pred – Prediction

Return float:

Mean squared error loss

micro_dl.train.lr_finder module

class micro_dl.train.lr_finder.LRFinder(fig_fname, max_epochs=3, base_lr=0.0001, max_lr=0.1)

Bases: Callback

on_batch_end(batch, logs=None)

Increase learning rate gradually after each batch.

Parameters:
  • batch – Batch number from Callback super class

  • logs – Log from super class (required)

on_train_begin(logs=None)

Set base learning rate at the beginning of training and get the step size for the learning rate increase.

Parameters:

logs – Logging from super class

on_train_end(logs=None)

After finishing the increase from base_lr to max_lr, save plot.

Parameters:

logs – Log from super class

micro_dl.train.metrics module

Custom metrics

micro_dl.train.metrics.binary_accuracy(y_true, y_pred)

Calculates the mean accuracy rate across all predictions for binary classification problems.

micro_dl.train.metrics.coeff_determination(y_true, y_pred)

R^2 goodness of fit, used as a proxy for accuracy in regression

Parameters:
  • y_true – Ground truth

  • y_pred – Prediction

Return float r2:

Coefficient of determination
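
A minimal sketch of R^2 in the Keras backend (the epsilon term is an assumption added here to avoid division by zero; the packaged metric may differ):

    from tensorflow.keras import backend as K

    def r_squared_sketch(y_true, y_pred):
        # Residual sum of squares vs. total sum of squares
        ss_res = K.sum(K.square(y_true - y_pred))
        ss_tot = K.sum(K.square(y_true - K.mean(y_true)))
        return 1.0 - ss_res / (ss_tot + K.epsilon())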

micro_dl.train.metrics.dice_coef(y_true, y_pred, smooth=1.0)

This is a global non-binary Dice similarity coefficient (DSC) with smoothing. It computes an approximation of Dice over the whole batch and leaves the predicted output continuous, which may help alleviate the discontinuities a binary, image-level Dice might introduce. DSC = 2 * |A intersection B| / (|A| + |B|) = 2 * |ab| / (|a|^2 + |b|^2), where a, b are binary vectors. Smoothed DSC = (2 * |ab| + s) / (|a|^2 + |b|^2 + s), where s is the smoothing constant. Although y_pred is not binary, it is assumed to be near binary (sigmoid transformed), so |y_pred|^2 is approximated by sum(y_pred).

Parameters:
  • y_true (tensor) – Labeled ground truth

  • y_pred (tensor) – Predicted labels, potentially non-binary

  • smooth (float) – Constant added for smoothing and to avoid division by zero

Return float dice:

Smoothed non-binary Dice coefficient
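
A minimal sketch of the smoothed, batch-global Dice described above (the flattening and the sum(y_pred) approximation follow the docstring; the exact reduction order is an assumption):

    from tensorflow.keras import backend as K

    def dice_coef_sketch(y_true, y_pred, smooth=1.0):
        y_true_f = K.flatten(y_true)
        y_pred_f = K.flatten(y_pred)
        intersection = K.sum(y_true_f * y_pred_f)
        # |y_pred|^2 approximated by sum(y_pred) since predictions are near binary;
        # sum(y_true) equals sum(y_true^2) for a binary ground truth
        return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)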

micro_dl.train.metrics.flip_dimensions(func)

Decorator to convert channels first tensor to channels last.

Parameters:

func – Function to be decorated
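
A minimal sketch of such a decorator for 4D tensors (the channels_first check and the NCHW -> NHWC permutation are assumptions; the packaged decorator may also handle 3D stacks):

    import tensorflow as tf
    from tensorflow.keras import backend as K

    def flip_dimensions_sketch(func):
        def wrapper(y_true, y_pred):
            if K.image_data_format() == 'channels_first':
                # NCHW -> NHWC so channels-last metrics apply directly
                y_true = tf.transpose(y_true, [0, 2, 3, 1])
                y_pred = tf.transpose(y_pred, [0, 2, 3, 1])
            return func(y_true, y_pred)
        return wrapper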

micro_dl.train.metrics.mask_accuracy(n_channels)

Split y_true into y_true and mask.

For masked_loss there’s an added function/method to split y_true and pass to loss, metrics and callbacks.

Parameters:

n_channels (int) – Number of channels

micro_dl.train.metrics.mask_coeff_determination(n_channels)

Split y_true into y_true and mask.

For masked_loss there’s an added function/method to split y_true and pass to loss, metrics and callbacks.

Parameters:

n_channels (int) – Number of channels

micro_dl.train.metrics.ms_ssim(y_true, y_pred)

Shifts dimensions to channels last if applicable, before calling func.

Parameters:
  • y_true (tensor) – Ground truth data

  • y_pred (tensor) – Predicted data

Returns:

function called with channels last

micro_dl.train.metrics.pearson_corr(y_true, y_pred)

Pearson correlation

Parameters:
  • y_true (tensor) – Labeled ground truth

  • y_pred (tensor) – Predicted labels

Return float r:

Pearson over all images in the batch
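
A minimal sketch of Pearson correlation over the whole batch in the Keras backend (the epsilon in the denominator is an added assumption):

    from tensorflow.keras import backend as K

    def pearson_corr_sketch(y_true, y_pred):
        # Center both tensors, then compute covariance / (std_x * std_y)
        x_centered = y_true - K.mean(y_true)
        y_centered = y_pred - K.mean(y_pred)
        covariance = K.sum(x_centered * y_centered)
        denom = K.sqrt(K.sum(K.square(x_centered)) * K.sum(K.square(y_centered)))
        return covariance / (denom + K.epsilon())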

micro_dl.train.metrics.ssim(y_true, y_pred)

Shifts dimensions to channels last if applicable, before calling func.

Parameters:
  • y_true (tensor) – Ground truth data

  • y_pred (tensor) – Predicted data

Returns:

function called with channels last

micro_dl.train.trainer module

Keras trainer

class micro_dl.train.trainer.BaseKerasTrainer(sess, train_config, train_dataset, val_dataset, model, num_target_channels, gpu_ids=0, gpu_mem_frac=0.95)

Bases: object

Keras training class

train()

Train the model

https://stackoverflow.com/questions/44747288/keras-sample-weight-array-error https://gist.github.com/andreimouraviev/2642384705034da92d6954dd9993fb4d https://github.com/keras-team/keras/issues/2115

Suggested approach: modify the generator to return a tuple (input, output, sample_weights) and use sample_weight_mode='temporal'. This does not fit the case of dynamic weighting (i.e. weights that change with the input image). model.fit is used instead of fit_generator, as it was unclear how sample weights are passed from the generator to fit_generator / fit.

A workaround for passing dynamic weights to the loss function in Keras is described here: https://groups.google.com/forum/#!searchin/keras-users/pass$20custom$20loss$20|sort:date/keras-users/ue1S8uAPDKU/x2ml5J7YBwAJ

Module contents

Module for train functions