micro_dl.networks module

Submodules

micro_dl.networks.base_conv_net module

Base class for all networks

class micro_dl.networks.base_conv_net.BaseConvNet(network_config, predict=False)

Bases: object

Base class for all networks

abstract build_net()

Assemble/build the network from layers

micro_dl.networks.base_image_to_vector_net module

Network for regressing a vector from a set of images

class micro_dl.networks.base_image_to_vector_net.BaseImageToVectorNet(network_config, predict=False)

Bases: BaseConvNet

Network for regression/classification from a set of images

build_net()

Assemble the network

micro_dl.networks.base_unet module

Base class for U-net

class micro_dl.networks.base_unet.BaseUNet(network_config, predict=False)

Bases: BaseConvNet

Base U-net implementation

  1. U-net: https://arxiv.org/pdf/1505.04597.pdf

  2. Residual U-net: https://arxiv.org/pdf/1711.10684.pdf

border_mode=’same’ is preferred over ‘valid’; otherwise the last block has to be interpolated to match the input image size.

build_net()

Assemble the network

micro_dl.networks.conv_blocks module

Collection of different conv blocks typically used in conv nets

micro_dl.networks.conv_blocks.conv_block(layer, network_config, block_idx)

Convolution block

Allowed block sequences: [conv-BN-activation, conv-activation-BN, BN-activation-conv]

To accommodate the parameters of advanced activations, activation is a dict with keys ‘type’ and ‘params’.

For a complete list of keys in network_config, refer to BaseConvNet.__init__() in base_conv_net.py

Parameters:
  • layer (keras.layers) – current input layer

  • network_config (dict) – dict with network related keys

  • block_idx (int) – block index in the network

Returns:

keras.layers after performing the block-sequence operations, repeated num_convs_per_block times

TODO: data_format from network_config won’t work for full 3D models in predict if depth is set to None
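The mapping from an allowed block sequence to the operations applied inside one block can be sketched in plain Python; the names below are illustrative, not part of micro_dl's API:

```python
# Illustrative sketch (not micro_dl's implementation): map a block-sequence
# string to the ordered list of operations applied inside one conv block.

ALLOWED_SEQUENCES = {
    "conv-bn-activation": ["conv", "batch_norm", "activation"],
    "conv-activation-bn": ["conv", "activation", "batch_norm"],
    "bn-activation-conv": ["batch_norm", "activation", "conv"],
}

def block_ops(block_sequence, num_convs_per_block):
    """Return the full op order for a block: the chosen sequence repeated
    num_convs_per_block times."""
    if block_sequence not in ALLOWED_SEQUENCES:
        raise ValueError("unsupported block sequence: %s" % block_sequence)
    return ALLOWED_SEQUENCES[block_sequence] * num_convs_per_block
```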

micro_dl.networks.conv_blocks.downsample_conv_block(layer, network_config, block_idx, downsample_shape=None)

Conv-BN-activation block

Parameters:
  • layer (keras.layers) – current input layer

  • network_config (dict) – please check conv_block()

  • block_idx (int) – block index in the network

  • downsample_shape (tuple) – anisotropic downsampling kernel shape

Returns:

keras.layers after downsampling and conv_block

micro_dl.networks.conv_blocks.pad_channels(input_layer, final_layer, channel_axis)

Zero pad along channels before residual/skip add

Parameters:
  • input_layer (keras.layers) – input layer to be padded with zeros / 1x1 to match the shape of final_layer

  • final_layer (keras.layers) – layer whose shape has to be matched

  • channel_axis (int) – dimension along which to pad

Returns:

keras.layers layer_padded – layer with the same shape as final_layer
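The zero-padding step can be sketched in plain Python, assuming a symmetric before/after split of the channel deficit (micro_dl's exact split may differ) and representing a channels_first tensor as a list of 2D feature maps:

```python
# Sketch only: zero-pad the channel axis so a residual/skip add becomes
# shape-compatible. Assumes a symmetric before/after split of the deficit.

def channel_pad_amounts(num_in, num_out):
    """Split the channel deficit into (before, after) pad widths."""
    diff = num_out - num_in
    if diff < 0:
        raise ValueError("cannot pad down: %d -> %d" % (num_in, num_out))
    return diff // 2, diff - diff // 2

def pad_channels_first(x, num_out):
    """x is a channels_first tensor (no batch dim) as a list of 2D maps."""
    before, after = channel_pad_amounts(len(x), num_out)
    def zero_map():
        return [[0 for _ in row] for row in x[0]]
    return ([zero_map() for _ in range(before)]
            + x
            + [zero_map() for _ in range(after)])
```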

micro_dl.networks.conv_blocks.residual_conv_block(layer, network_config, block_idx)

Convolution block where the last layer is merged (+) with input layer

Parameters:
  • layer (keras.layers) – current input layer

  • network_config (dict) – please check conv_block()

  • block_idx (int) – block index in the network

Returns:

keras.layers after conv-block and residual merge
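The residual merge itself is an elementwise add of the block output and the (possibly channel-padded, see pad_channels) input; a minimal sketch on flat feature vectors, with illustrative names:

```python
# Minimal sketch of a residual merge on flat feature vectors (illustrative,
# not micro_dl's implementation). When channel counts differ, the input would
# first be zero-padded so the shapes match before the add.

def residual_merge(input_features, block_output):
    if len(input_features) != len(block_output):
        raise ValueError("shapes must match before the residual add")
    return [a + b for a, b in zip(input_features, block_output)]
```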

micro_dl.networks.conv_blocks.residual_downsample_conv_block(layer, network_config, block_idx, downsample_shape=None)

Convolution block where the last layer is merged (+) with input layer

Parameters:
  • layer (keras.layers) – current input layer

  • network_config (dict) – please check conv_block()

  • block_idx (int) – block index in the network

  • downsample_shape (tuple) – anisotropic downsampling kernel shape

Returns:

keras.layers after conv-block and residual merge

micro_dl.networks.conv_blocks.skip_merge(skip_layers, upsampled_layers, skip_merge_type, data_format, num_dims, padding)

Skip connection: concatenate or add skip layers to the upsampled layer

Parameters:
  • skip_layers (keras.layers) – as named

  • upsampled_layers (keras.layers) – as named

  • skip_merge_type (str) – [add, concat]

  • data_format (str) – [channels_first, channels_last]

  • num_dims (int) – as named

  • padding (str) – same or valid

Returns:

keras.layers skip-merged layer
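A sketch of the two merge modes on channels_first tensors represented as lists of 2D feature maps (illustrative names, assuming spatial shapes already agree):

```python
# Sketch of skip merging (not micro_dl's implementation). Tensors are
# channels_first, represented as lists of 2D feature maps.

def skip_merge(skip_layers, upsampled_layers, skip_merge_type):
    if skip_merge_type == "concat":
        # concatenate along the channel axis
        return skip_layers + upsampled_layers
    if skip_merge_type == "add":
        # elementwise add; channel counts must already match
        return [
            [[a + b for a, b in zip(row_s, row_u)]
             for row_s, row_u in zip(map_s, map_u)]
            for map_s, map_u in zip(skip_layers, upsampled_layers)
        ]
    raise ValueError("skip_merge_type must be 'add' or 'concat'")
```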

micro_dl.networks.image2D_to_vector_net module

Image 2D to vector / scalar conv net

class micro_dl.networks.image2D_to_vector_net.Image2DToVectorNet(network_config, predict=False)

Bases: BaseImageToVectorNet

Uses 2D images as input

micro_dl.networks.image3D_to_vector_net module

Image 3D to vector / scalar conv net

class micro_dl.networks.image3D_to_vector_net.Image3DToVectorNet(network_config, predict=False)

Bases: BaseImageToVectorNet

Uses 3D images as input

micro_dl.networks.unet2D module

Unet 2D

class micro_dl.networks.unet2D.UNet2D(network_config, predict=False)

Bases: BaseUNet

2D UNet

[batch_size, num_channels, y, x] or [batch_size, y, x, num_channels]

micro_dl.networks.unet3D module

Unet 3D

class micro_dl.networks.unet3D.UNet3D(network_config, predict=False)

Bases: BaseUNet

3D UNet

[batch_size, num_channels, z, y, x] or [batch_size, z, y, x, num_channels]

micro_dl.networks.unet_stack_2D module

Predict the center slice from a stack of 3-5 slices

class micro_dl.networks.unet_stack_2D.UNetStackTo2D(network_config, predict=False)

Bases: BaseUNet

Implements a U-net that takes a stack and predicts the center slice

build_net()

Assemble the network

The downsampling blocks are treated as 3D and the upsampling blocks as 2D. All blocks use 3D filters: either 3x3x3 or 3x3x1. An alternative variant treats the stack as channels, similar to RGB, so that 2D convolutions suffice to extract features. This can be done by using UNet2D with num_input_channels = 3 (N) and num_output_channels = 1.
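The RGB-like variant amounts to moving the slice (z) axis into the channel axis before feeding a 2D network; a sketch with plain nested lists (illustrative, not micro_dl code):

```python
# Sketch: move the z (slice) axis of a [z][y][x] stack to the trailing
# channel axis, giving [y][x][z], so slices act like RGB channels for a
# 2D network such as UNet2D with num_input_channels = z.

def stack_to_channels_last(stack):
    nz, ny, nx = len(stack), len(stack[0]), len(stack[0][0])
    return [[[stack[k][j][i] for k in range(nz)]
             for i in range(nx)]
            for j in range(ny)]
```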

micro_dl.networks.unet_stack_stack module

Unet for 3D volumes with anisotropic shape

class micro_dl.networks.unet_stack_stack.UNetStackToStack(network_config, predict=False)

Bases: BaseUNet

Unet for anisotropic stacks

build_net()

Assemble the network

micro_dl.networks.vqnet module

class micro_dl.networks.vqnet.VQVAE(network_config, predict=False)

Bases: BaseConvNet

build_net()

Assemble/build the network from layers

decoder_pass(inputs)

encoder_pass(inputs)

class micro_dl.networks.vqnet.VectorQuantizer(k, **kwargs)

Bases: Layer

build(input_shape)

Creates the layer weights.

Must be implemented on all layers that have weights.

# Arguments

input_shape: Keras tensor (future input to layer) or list/tuple of Keras tensors to reference for weight shape computations.

call(inputs)

This is where the layer’s logic lives.

# Arguments

inputs: Input tensor, or list/tuple of input tensors.

**kwargs: Additional keyword arguments.

# Returns

A tensor or list/tuple of tensors.

sample(k_index)
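The core of the quantizer is a nearest-codebook lookup; a minimal sketch of that step in plain Python (illustrative names, assuming squared-L2 distance as in the standard VQ-VAE formulation, not the layer's actual TensorFlow implementation):

```python
# Sketch of the vector-quantization forward step: map each input vector to
# the index of its nearest codebook entry under squared L2 distance.

def nearest_code(vec, codebook):
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(range(len(codebook)), key=lambda k: dist2(vec, codebook[k]))

def quantize(vecs, codebook):
    """Return (index, codebook entry) per vector -- the forward lookup of a
    quantizer with k = len(codebook) entries."""
    out = []
    for v in vecs:
        k = nearest_code(v, codebook)
        out.append((k, codebook[k]))
    return out
```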

Module contents

Classes related to different NN architectures