BatchNormalization
Inherits From: Layer
Defined in tensorflow/python/layers/normalization.py.
Batch Normalization layer from http://arxiv.org/abs/1502.03167.
"Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift"
Sergey Ioffe, Christian Szegedy
Arguments:

axis: Integer, the axis that should be normalized (typically the features axis). For instance, after a Conv2D layer with data_format="channels_first", set axis=1 in BatchNormalization.
momentum: Momentum for the moving average.
epsilon: Small float added to variance to avoid dividing by zero.
center: If True, add offset of beta to normalized tensor. If False, beta is ignored.
scale: If True, multiply by gamma. If False, gamma is not used. When the next layer is linear (also e.g. nn.relu), this can be disabled since the scaling can be done by the next layer.
beta_initializer: Initializer for the beta weight.
gamma_initializer: Initializer for the gamma weight.
moving_mean_initializer: Initializer for the moving mean.
moving_variance_initializer: Initializer for the moving variance.
beta_regularizer: Optional regularizer for the beta weight.
gamma_regularizer: Optional regularizer for the gamma weight.
beta_constraint: An optional projection function to be applied to the beta weight after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected variable and must return the projected variable (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
gamma_constraint: An optional projection function to be applied to the gamma weight after being updated by an Optimizer.
renorm: Whether to use Batch Renormalization (https://arxiv.org/abs/1702.03275). This adds extra variables during training. The inference is the same for either value of this parameter.
renorm_clipping: A dictionary that may map keys 'rmax', 'rmin', 'dmax' to scalar Tensors used to clip the renorm correction. The correction (r, d) is used as corrected_value = normalized_value * r + d, with r clipped to [rmin, rmax], and d to [-dmax, dmax]. Missing rmax, rmin, dmax are set to inf, 0, inf, respectively.
renorm_momentum: Momentum used to update the moving means and standard deviations with renorm. Unlike momentum, this affects training and should be neither too small (which would add noise) nor too large (which would give stale estimates). Note that momentum is still applied to get the means and variances for inference.
fused: If True, use a faster, fused implementation if possible. If None, use the system recommended implementation.
trainable: Boolean, if True, also add variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
name: A string, the name of the layer.
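For example, a minimal construction sketch (a hedged illustration assuming the TensorFlow 1.x graph API, where this class is exposed as tf.layers.BatchNormalization; the input shape below is illustrative only):

import tensorflow as tf

# Illustrative input: a batch of 32x32 RGB images in channels_first (NCHW) layout.
images = tf.placeholder(tf.float32, shape=[None, 3, 32, 32])

# A convolution in channels_first layout puts the features on axis 1 ...
conv = tf.layers.conv2d(images, filters=16, kernel_size=3,
                        data_format="channels_first")

# ... so the normalization axis is set to 1, per the axis argument above.
bn = tf.layers.BatchNormalization(axis=1, momentum=0.99, epsilon=1e-3)
normalized = bn(conv, training=True)  # training=True uses batch statistics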
Properties

activity_regularizer
Optional regularizer function for the output of this layer.

dtype

graph

input
Retrieves the input tensor(s) of a layer.
Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer.
Returns:
Input tensor or list of input tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.
RuntimeError: If called in Eager mode.
AttributeError: If no inbound nodes are found.

input_shape
Retrieves the input shape(s) of a layer.
Only applicable if the layer has exactly one input, i.e. if it is connected to one incoming layer, or if all inputs have the same shape.
Returns:
Input shape, as an integer shape tuple (or list of shape tuples, one tuple per input tensor).

Raises:
AttributeError: if the layer has no defined input_shape.
RuntimeError: if called in Eager mode.

losses

name

non_trainable_variables

non_trainable_weights

output
Retrieves the output tensor(s) of a layer.
Only applicable if the layer has exactly one output, i.e. if it is connected to one incoming layer.
Returns:
Output tensor or list of output tensors.

Raises:
AttributeError: if the layer is connected to more than one incoming layer.
RuntimeError: if called in Eager mode.

output_shape
Retrieves the output shape(s) of a layer.
Only applicable if the layer has one output, or if all outputs have the same shape.
Returns:
Output shape, as an integer shape tuple (or list of shape tuples, one tuple per output tensor).

Raises:
AttributeError: if the layer has no defined output shape.
RuntimeError: if called in Eager mode.

scope_name

trainable_variables

trainable_weights

updates

variables
Returns the list of all layer variables/weights.
Returns:
A list of variables.

weights
Returns the list of all layer variables/weights.

Returns:
A list of variables.
Methods

__init__

__init__(
axis=-1,
momentum=0.99,
epsilon=0.001,
center=True,
scale=True,
beta_initializer=tf.zeros_initializer(),
gamma_initializer=tf.ones_initializer(),
moving_mean_initializer=tf.zeros_initializer(),
moving_variance_initializer=tf.ones_initializer(),
beta_regularizer=None,
gamma_regularizer=None,
beta_constraint=None,
gamma_constraint=None,
renorm=False,
renorm_clipping=None,
renorm_momentum=0.99,
fused=None,
trainable=True,
name=None,
**kwargs
)
__call__

__call__(
inputs,
*args,
**kwargs
)
Wraps call, applying pre- and post-processing steps.
Arguments:
inputs: input tensor(s).
*args: additional positional arguments to be passed to self.call.
**kwargs: additional keyword arguments to be passed to self.call. Note: kwarg scope is reserved for use by the layer.

Returns:
Output tensor(s).
Note:
- If the layer's call method takes a scope keyword argument, this argument will be automatically set to the current variable scope.
- If the layer's call method takes a mask argument (as some Keras layers do), its default value will be set to the mask generated for inputs by the previous layer (if input did come from a layer that generated a corresponding mask, i.e. if it came from a Keras layer with masking support).
Raises:
ValueError: if the layer's call method returns None (an invalid value).
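A brief sketch of invoking the layer object, which goes through __call__ and forwards extra keyword arguments such as training on to call (TensorFlow 1.x graph mode assumed; the placeholder shapes are illustrative):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 64])
is_training = tf.placeholder(tf.bool, shape=[])  # toggles batch vs. moving statistics

bn = tf.layers.BatchNormalization()
y = bn(x, training=is_training)  # the training kwarg is passed through to call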
__deepcopy__

__deepcopy__(memo)

add_loss

add_loss(
losses,
inputs=None
)
Add loss tensor(s), potentially dependent on layer inputs.
Some losses (for instance, activity regularization losses) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.losses may be dependent on a and some on b. This method automatically keeps track of dependencies.
The get_losses_for method allows retrieving the losses relevant to a specific set of inputs.
Arguments:
losses: Loss tensor, or list/tuple of tensors.
inputs: Optional input tensor(s) that the loss(es) depend on. Must match the inputs argument passed to the __call__ method at the time the losses are created. If None is passed, the losses are assumed to be unconditional, and will apply across all dataflows of the layer (e.g. weight regularization losses).

Raises:
RuntimeError: If called in Eager mode.
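An illustrative sketch of conditional and unconditional losses; the L2 penalties below are arbitrary examples, not something this class adds by itself, and TensorFlow 1.x graph mode is assumed:

import tensorflow as tf

bn = tf.layers.BatchNormalization()
a = tf.placeholder(tf.float32, shape=[None, 8])
b = tf.placeholder(tf.float32, shape=[None, 8])
out_a = bn(a)
out_b = bn(b)

# Conditional losses: each depends only on the inputs it was created for.
bn.add_loss(0.01 * tf.reduce_sum(tf.square(out_a)), inputs=a)
bn.add_loss(0.01 * tf.reduce_sum(tf.square(out_b)), inputs=b)

# Unconditional loss (inputs=None): applies across all dataflows of the layer.
bn.add_loss(1e-4 * tf.add_n([tf.nn.l2_loss(v) for v in bn.trainable_variables]))

losses_for_a = bn.get_losses_for(a)  # only the losses that depend on a
all_losses = bn.losses               # every loss registered on the layer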
add_update

add_update(
updates,
inputs=None
)
Add update op(s), potentially dependent on layer inputs.
Weight updates (for instance, the updates of the moving mean and variance in a BatchNormalization layer) may be dependent on the inputs passed when calling a layer. Hence, when reusing the same layer on different inputs a and b, some entries in layer.updates may be dependent on a and some on b. This method automatically keeps track of dependencies.
The get_updates_for method allows retrieving the updates relevant to a specific set of inputs.
This call is ignored in Eager mode.
Arguments:
updates: Update op, or list/tuple of update ops.
inputs: Optional input tensor(s) that the update(s) depend on. Must match the inputs argument passed to the __call__ method at the time the updates are created. If None is passed, the updates are assumed to be unconditional, and will apply across all dataflows of the layer.
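As a sketch of how these updates are typically consumed in TensorFlow 1.x graph mode (the placeholders and loss below are illustrative): the moving-mean and moving-variance update ops that BatchNormalization registers via add_update are not run automatically, so a common pattern is to group them with the training step:

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 10])
targets = tf.placeholder(tf.float32, shape=[None, 10])

bn = tf.layers.BatchNormalization()
outputs = bn(x, training=True)
loss = tf.reduce_mean(tf.square(outputs - targets))

# bn.updates holds the moving-statistics update ops registered via add_update;
# make the optimization step depend on them so they run every iteration.
optimizer = tf.train.GradientDescentOptimizer(0.1)
with tf.control_dependencies(bn.updates):
    train_op = optimizer.minimize(loss)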
add_variable

add_variable(
name,
shape,
dtype=None,
initializer=None,
regularizer=None,
trainable=True,
constraint=None
)
Adds a new variable to the layer, or gets an existing one; returns it.
Arguments:
name: variable name.
shape: variable shape.
dtype: The type of the variable. Defaults to self.dtype or float32.
initializer: initializer instance (callable).
regularizer: regularizer instance (callable).
trainable: whether the variable should be part of the layer's "trainable_variables" (e.g. variables, biases) or "non_trainable_variables" (e.g. BatchNorm mean, stddev).
constraint: constraint instance (callable).

Returns:
The created variable.
Raises:
RuntimeError: If called in Eager mode with regularizers.
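add_variable is mainly of interest when subclassing; a hedged sketch of a hypothetical custom layer (not part of this module) that creates its weight in build:

import tensorflow as tf

class Scale(tf.layers.Layer):
    """Hypothetical layer: multiplies its input by a learned per-feature scale."""

    def build(self, input_shape):
        # Creates (or retrieves) a trainable variable owned by this layer.
        self.scale = self.add_variable(
            name="scale",
            shape=[int(input_shape[-1])],
            initializer=tf.ones_initializer(),
            trainable=True)
        super(Scale, self).build(input_shape)

    def call(self, inputs):
        return inputs * self.scale

x = tf.placeholder(tf.float32, shape=[None, 4])
y = Scale()(x)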
apply

apply(
inputs,
*args,
**kwargs
)
Apply the layer on an input.
This simply wraps self.__call__.
Arguments:
inputs: Input tensor(s).
*args: additional positional arguments to be passed to self.call.
**kwargs: additional keyword arguments to be passed to self.call.

Returns:
Output tensor(s).
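In other words, for a layer instance bn, bn.apply(x) behaves the same as bn(x); a tiny illustrative sketch (TensorFlow 1.x graph mode assumed):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 4])
bn = tf.layers.BatchNormalization()
y = bn.apply(x, training=True)  # equivalent to bn(x, training=True)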
build

build(input_shape)
call

call(
inputs,
training=False
)
count_params

count_params()
Count the total number of scalars composing the weights.
Returns:
An integer count.

Raises:
ValueError: if the layer isn't yet built (in which case its weights aren't yet defined).

get_input_at

get_input_at(node_index)
Retrieves the input tensor(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:
A tensor (or list of tensors if the layer has multiple inputs).
Raises:
RuntimeError: If called in Eager mode.

get_input_shape_at

get_input_shape_at(node_index)
Retrieves the input shape(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:
A shape tuple (or list of shape tuples if the layer has multiple inputs).
Raises:
RuntimeError: If called in Eager mode.

get_losses_for

get_losses_for(inputs)
Retrieves losses relevant to a specific set of inputs.
Arguments:
inputs: Input tensor or list/tuple of input tensors. Must match the inputs argument passed to the __call__ method at the time the losses were created. If you pass inputs=None, unconditional losses are returned, such as weight regularization losses.

Returns:
List of loss tensors of the layer that depend on inputs.
Raises:
RuntimeError: If called in Eager mode.

get_output_at

get_output_at(node_index)
Retrieves the output tensor(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:
A tensor (or list of tensors if the layer has multiple outputs).
Raises:
RuntimeError: If called in Eager mode.

get_output_shape_at

get_output_shape_at(node_index)
Retrieves the output shape(s) of a layer at a given node.
Arguments:
node_index: Integer, index of the node from which to retrieve the attribute. E.g. node_index=0 will correspond to the first time the layer was called.

Returns:
A shape tuple (or list of shape tuples if the layer has multiple outputs).
Raises:
RuntimeError: If called in Eager mode.

get_updates_for

get_updates_for(inputs)
Retrieves updates relevant to a specific set of inputs.
Arguments:
inputs: Input tensor or list/tuple of input tensors. Must match the inputs argument passed to the __call__ method at the time the updates were created. If you pass inputs=None, unconditional updates are returned.

Returns:
List of update ops of the layer that depend on inputs.
Raises:
RuntimeError: If called in Eager mode.
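A short sketch of retrieving the updates created for a specific input after reusing the layer (TensorFlow 1.x graph mode assumed; shapes are illustrative):

import tensorflow as tf

bn = tf.layers.BatchNormalization()
a = tf.placeholder(tf.float32, shape=[None, 16])
b = tf.placeholder(tf.float32, shape=[None, 16])
out_a = bn(a, training=True)
out_b = bn(b, training=True)

updates_a = bn.get_updates_for(a)  # moving-statistics updates created for a
updates_b = bn.get_updates_for(b)  # moving-statistics updates created for b
all_updates = bn.updates           # both sets, via the updates property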
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/layers/BatchNormalization