sparseml.tensorflow_v1.nn package
Submodules
sparseml.tensorflow_v1.nn.layers module

sparseml.tensorflow_v1.nn.layers.activation(x_tens: tensorflow.python.framework.ops.Tensor, act: Union[None, str], name: str = 'act') [source]
Create an activation operation in the current graph and scope.
 Parameters


x_tens – the tensor to apply the op to

act – the activation type to apply, supported: [None, relu, relu6, sigmoid, softmax]

name – the name to give to the activation op in the graph

 Returns

the created operation
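The supported activation types map to standard element-wise functions. A minimal pure-Python sketch of what each `act` value computes (the real op is built from TensorFlow graph ops; `apply_activation` here is a hypothetical helper for illustration only):

```python
import math

# Hypothetical sketch of the activation types the `act` argument selects.
def apply_activation(values, act):
    if act is None:
        return list(values)          # passthrough, no activation
    if act == "relu":
        return [max(v, 0.0) for v in values]
    if act == "relu6":
        return [min(max(v, 0.0), 6.0) for v in values]
    if act == "sigmoid":
        return [1.0 / (1.0 + math.exp(-v)) for v in values]
    if act == "softmax":
        m = max(values)              # subtract max for numerical stability
        exps = [math.exp(v - m) for v in values]
        total = sum(exps)
        return [e / total for e in exps]
    raise ValueError(f"unsupported activation: {act}")

print(apply_activation([-2.0, 0.0, 3.0, 8.0], "relu6"))  # [0.0, 0.0, 3.0, 6.0]
```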

sparseml.tensorflow_v1.nn.layers.conv2d(name: str, x_tens: tensorflow.python.framework.ops.Tensor, in_chan: int, out_chan: int, kernel: int, stride: int, padding: str, act: Optional[str] = None) [source]
Create a convolutional layer with the proper ops and variables.
 Parameters


name – the name scope to create the layer under

x_tens – the tensor to apply the layer to

in_chan – the number of input channels

out_chan – the number of output channels

kernel – the kernel size to create a convolution for

stride – the stride to apply to the convolution

padding – the padding to apply to the convolution

act – an activation type to add into the layer, supported: [None, relu, sigmoid, softmax]

 Returns

the created layer
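Under TensorFlow's padding conventions, the `padding` and `stride` arguments determine the spatial output size of the layer. A small illustrative helper (not part of sparseml) showing the arithmetic for 'same' and 'valid' padding:

```python
import math

# Illustrative sketch of TensorFlow's output-size arithmetic:
# 'same' pads so the output only shrinks by the stride,
# 'valid' applies no padding, so the kernel size also shrinks it.
def conv_out_size(in_size, kernel, stride, padding):
    if padding == "same":
        return math.ceil(in_size / stride)
    if padding == "valid":
        return math.ceil((in_size - kernel + 1) / stride)
    raise ValueError(f"unsupported padding: {padding}")

# A 224x224 input through a 3x3, stride-2 convolution:
print(conv_out_size(224, 3, 2, "same"))   # 112
print(conv_out_size(224, 3, 2, "valid"))  # 111
```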

sparseml.tensorflow_v1.nn.layers.conv2d_block(name: str, x_tens: tensorflow.python.framework.ops.Tensor, training: Union[bool, tensorflow.python.framework.ops.Tensor], channels: int, kernel_size: int, padding: Union[str, int, Tuple[int, ...]] = 'same', stride: int = 1, data_format: str = 'channels_last', include_bn: bool = True, include_bias: Optional[bool] = None, act: Union[None, str] = 'relu', kernel_initializer=<tensorflow.python.ops.init_ops.GlorotUniform object>, bias_initializer=<tensorflow.python.ops.init_ops.Zeros object>, beta_initializer=<tensorflow.python.ops.init_ops.Zeros object>, gamma_initializer=<tensorflow.python.ops.init_ops.Ones object>) [source]
Create a convolution op and supporting ops (batch norm, activation, etc.) in the current graph and scope.
 Parameters


name – The name to group all ops under in the graph

x_tens – The input tensor to apply a convolution and supporting ops to

training – A bool or tensor indicating whether the net is running in training mode; used for batch norm

channels – The number of output channels from the conv op

kernel_size – The size of the kernel to use for the conv op

padding – Any padding to apply to the tensor before the convolution; if a string, uses TensorFlow's built-in padding, otherwise uses symmetric_pad2d

stride – The stride to apply for the convolution

data_format – Either channels_last or channels_first

include_bn – True to include a batch norm operation after the conv, False otherwise

include_bias – If left unset (None), a bias is added only when include_bn is False. Otherwise set True to include a bias after the convolution or False to omit it.

act – The activation to apply after the conv op and batch norm (if included). Defaults to 'relu'; set to None for no activation.

kernel_initializer – The initializer to use for the convolution kernels

bias_initializer – The initializer to use for the bias variable, if a bias is included

beta_initializer – The initializer to use for the beta variable, if batch norm is included

gamma_initializer – The initializer to use for the gamma variable, if batch norm is included

 Returns

the tensor after all ops have been applied
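The `include_bias` default interacts with `include_bn`: a bias is redundant when batch norm follows the convolution, since batch norm's beta offset subsumes it. A sketch of the documented default resolution (a hypothetical helper for illustration, not sparseml source):

```python
# Sketch of the documented include_bias default: when left as None,
# a bias is added only if batch norm is disabled.
def resolve_include_bias(include_bias, include_bn):
    if include_bias is None:
        return not include_bn
    return include_bias

print(resolve_include_bias(None, True))   # False: batch norm's beta supplies the shift
print(resolve_include_bias(None, False))  # True
print(resolve_include_bias(True, True))   # True: explicit override
```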

sparseml.tensorflow_v1.nn.layers.dense_block(name: str, x_tens: tensorflow.python.framework.ops.Tensor, training: Union[bool, tensorflow.python.framework.ops.Tensor], channels: int, include_bn: bool = False, include_bias: Optional[bool] = None, dropout_rate: Optional[float] = None, act: Union[None, str] = 'relu', kernel_initializer=<tensorflow.python.ops.init_ops.GlorotUniform object>, bias_initializer=<tensorflow.python.ops.init_ops.Zeros object>, beta_initializer=<tensorflow.python.ops.init_ops.Zeros object>, gamma_initializer=<tensorflow.python.ops.init_ops.Ones object>) [source]
Create a dense or fully connected op and supporting ops (batch norm, activation, etc.) in the current graph and scope.
 Parameters


name – The name to group all ops under in the graph

x_tens – The input tensor to apply a fully connected and supporting ops to

training – A bool or tensor indicating whether the net is running in training mode; used for batch norm and dropout

channels – The number of output channels from the dense op

include_bn – True to include a batch norm operation after the dense op, False otherwise

include_bias – If left unset (None), a bias is added only when include_bn is False. Otherwise set True to include a bias after the dense op or False to omit it.

dropout_rate – The dropout rate to apply after the fully connected op and batch norm (if included). If None, no dropout is applied

act – The activation to apply after the dense op and batch norm (if included). Defaults to 'relu'; set to None for no activation.

kernel_initializer – The initializer to use for the fully connected kernels

bias_initializer – The initializer to use for the bias variable, if a bias is included

beta_initializer – The initializer to use for the beta variable, if batch norm is included

gamma_initializer – The initializer to use for the gamma variable, if batch norm is included

 Returns

the tensor after all ops have been applied
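The block applies its ops in order: dense, then batch norm (if included), then dropout (training only), then the activation. A pure-Python sketch of that ordering for a single output channel, with batch norm in inference mode using stored statistics (illustrative only, not sparseml source):

```python
import math

# Hypothetical sketch of the op ordering dense_block builds:
# dense (x @ W + b) -> batch norm -> activation ('relu' default).
# Dropout is omitted here since it is only active during training.
def dense_block_forward(x, w, bias, bn_mean, bn_var, gamma, beta, eps=1e-5):
    z = sum(xi * wi for xi, wi in zip(x, w)) + bias             # dense op
    z = gamma * (z - bn_mean) / math.sqrt(bn_var + eps) + beta  # batch norm
    return max(z, 0.0)                                          # relu activation

out = dense_block_forward([1.0, 2.0], [0.5, -0.25], 0.0,
                          bn_mean=0.0, bn_var=1.0, gamma=1.0, beta=0.0)
print(out)  # 0.0: the dense output is 0.5 - 0.5 = 0
```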

sparseml.tensorflow_v1.nn.layers.depthwise_conv2d_block(name: str, x_tens: tensorflow.python.framework.ops.Tensor, training: Union[bool, tensorflow.python.framework.ops.Tensor], channels: int, kernel_size: int, padding: Union[str, int, Tuple[int, ...]] = 'same', stride: int = 1, data_format: str = 'channels_last', include_bn: bool = True, include_bias: Optional[bool] = None, act: Union[None, str] = 'relu', kernel_initializer=<tensorflow.python.ops.init_ops.GlorotUniform object>, bias_initializer=<tensorflow.python.ops.init_ops.Zeros object>, beta_initializer=<tensorflow.python.ops.init_ops.Zeros object>, gamma_initializer=<tensorflow.python.ops.init_ops.Ones object>) [source]
Create a depthwise convolution op and supporting ops (batch norm, activation, etc.) in the current graph and scope.
 Parameters


name – The name to group all ops under in the graph

x_tens – The input tensor to apply a convolution and supporting ops to

training – A bool or tensor indicating whether the net is running in training mode; used for batch norm

channels – The number of output channels from the conv op

kernel_size – The size of the kernel to use for the conv op

padding – Any padding to apply to the tensor before the convolution; if a string, uses TensorFlow's built-in padding, otherwise uses symmetric_pad2d

stride – The stride to apply for the convolution

data_format – Either channels_last or channels_first

include_bn – True to include a batch norm operation after the conv, False otherwise

include_bias – If left unset (None), a bias is added only when include_bn is False. Otherwise set True to include a bias after the convolution or False to omit it.

act – The activation to apply after the conv op and batch norm (if included). Defaults to 'relu'; set to None for no activation.

kernel_initializer – The initializer to use for the convolution kernels

bias_initializer – The initializer to use for the bias variable, if a bias is included

beta_initializer – The initializer to use for the beta variable, if batch norm is included

gamma_initializer – The initializer to use for the gamma variable, if batch norm is included

 Returns

the tensor after all ops have been applied
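A depthwise convolution applies one kernel per input channel rather than mixing all channels, which is why these blocks are far cheaper than standard convolutions. An illustrative parameter-count comparison (arithmetic only, not sparseml code):

```python
# Kernel parameter counts, ignoring bias/batch-norm variables:
# a standard conv mixes every input channel into every output channel,
# a depthwise conv applies a single k x k filter per input channel.
def standard_conv_params(kernel, in_chan, out_chan):
    return kernel * kernel * in_chan * out_chan

def depthwise_conv_params(kernel, in_chan):
    return kernel * kernel * in_chan  # one filter per channel

# A 3x3 convolution over 128 channels:
print(standard_conv_params(3, 128, 128))  # 147456
print(depthwise_conv_params(3, 128))      # 1152, 128x fewer parameters
```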

sparseml.tensorflow_v1.nn.layers.fc(name: str, x_tens: tensorflow.python.framework.ops.Tensor, in_chan: int, out_chan: int, act: Optional[str] = None) [source]
Create a fully connected layer with the proper ops and variables.
 Parameters


name – the name scope to create the layer under

x_tens – the tensor to apply the layer to

in_chan – the number of input channels

out_chan – the number of output channels

act – an activation type to add into the layer, supported: [None, relu, sigmoid, softmax]

 Returns

the created layer
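The layer's math reduces to a matrix multiply against a kernel of shape [in_chan, out_chan] plus a bias, with an optional activation on top. A pure-Python sketch (hypothetical helper, for illustration only):

```python
# Sketch of a fully connected forward pass:
# x has in_chan entries, kernel is in_chan x out_chan, bias has out_chan entries.
def fc_forward(x, kernel, bias):
    out_chan = len(bias)
    return [sum(x[i] * kernel[i][j] for i in range(len(x))) + bias[j]
            for j in range(out_chan)]

# in_chan=3 mapped to out_chan=2:
y = fc_forward([1.0, 2.0, 3.0],
               [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
               [0.5, -0.5])
print(y)  # [4.5, 4.5]
```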

sparseml.tensorflow_v1.nn.layers.pool2d(name: str, x_tens: tensorflow.python.framework.ops.Tensor, type_: str, pool_size: Union[int, Tuple[int, int]], strides: Union[int, Tuple[int, int]] = 1, padding: Union[str, int, Tuple[int, ...]] = 'same', data_format: str = 'channels_last') [source]
Create a pool op with the given name in the current graph and scope. Supported types are [max, avg, global_avg].
 Parameters


name – the name to give the pooling op in the graph

x_tens – the input tensor to apply pooling to

type_ – the type of pooling to apply, one of [max, avg, global_avg]

pool_size – the size of the pooling window to apply; for global_avg, this is the desired output size

strides – the stride to apply for the pooling op; unused for global_avg

padding – any padding to apply to the tensor before pooling; if a string, uses TensorFlow's built-in padding, otherwise uses symmetric_pad2d

data_format – either channels_last or channels_first

 Returns

the tensor after pooling
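For the max type, the op slides a square window over the spatial dimensions and keeps the maximum of each window. An illustrative pure-Python sketch of that behavior with 'valid'-style handling, i.e. no padding (not sparseml code):

```python
# Sketch of 2-D max pooling over a single-channel grid:
# each output cell is the max of a pool_size x pool_size window,
# windows advance by `stride` and never extend past the edge.
def max_pool2d(grid, pool_size, stride):
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows - pool_size + 1, stride):
        out.append([max(grid[r + i][c + j]
                        for i in range(pool_size)
                        for j in range(pool_size))
                    for c in range(0, cols - pool_size + 1, stride)])
    return out

grid = [[1, 2, 3, 4],
        [5, 6, 7, 8],
        [9, 10, 11, 12],
        [13, 14, 15, 16]]
print(max_pool2d(grid, 2, 2))  # [[6, 8], [14, 16]]
```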