Adam
Inherits From: Optimizer
Defined in tensorflow/python/keras/_impl/keras/optimizers.py.
Adam optimizer.
Default parameters follow those provided in the original paper.
Arguments:
lr: float >= 0. Learning rate.
beta_1: float, 0 < beta < 1. Generally close to 1.
beta_2: float, 0 < beta < 1. Generally close to 1.
epsilon: float >= 0. Fuzz factor.
decay: float >= 0. Learning rate decay over each update.

References:
- Adam - A Method for Stochastic Optimization
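For orientation, the update rule from the cited paper, written with the argument names above (g_t denotes the gradient and theta the parameters; this is a sketch for reference, not text from this page):

\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1 - \beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2 \\
\hat{m}_t &= m_t / (1 - \beta_1^t), \qquad \hat{v}_t = v_t / (1 - \beta_2^t) \\
\theta_t &= \theta_{t-1} - \mathit{lr} \cdot \hat{m}_t / (\sqrt{\hat{v}_t} + \epsilon)
\end{aligned}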
__init__

__init__(
    lr=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    decay=0.0,
    **kwargs
)
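A minimal usage sketch: construct the optimizer with its documented defaults and pass it to a Keras model. The model and data below are illustrative, not taken from this page.

import numpy as np
import tensorflow as tf

# Small illustrative model.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(1, input_shape=(3,))
])

# Adam with the documented default hyperparameters.
opt = tf.keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999,
                               epsilon=1e-08, decay=0.0)
model.compile(optimizer=opt, loss='mse')

x = np.random.rand(32, 3).astype('float32')
y = np.random.rand(32, 1).astype('float32')
model.fit(x, y, epochs=1, verbose=0)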
from_config

from_config(
    cls,
    config
)
get_config

get_config()
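A brief sketch of the serialization round trip: get_config returns the optimizer's hyperparameters as a plain dict, and from_config (a classmethod) rebuilds an equivalent optimizer from it.

import tensorflow as tf

opt = tf.keras.optimizers.Adam(lr=0.001, beta_1=0.9, beta_2=0.999)
config = opt.get_config()                        # dict of hyperparameters
restored = tf.keras.optimizers.Adam.from_config(config)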
get_gradients

get_gradients(
    loss,
    params
)
get_updates

get_updates(
    loss,
    params
)
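A sketch of the lower-level symbolic API, assuming the graph-mode Keras backend of this TF 1.x-era release; the tensors, variable names, and K.function usage below are illustrative assumptions, not taken from this page.

import numpy as np
import tensorflow as tf

K = tf.keras.backend

x = K.placeholder(shape=(None, 3))
w = K.variable(np.zeros((3, 1), dtype='float32'), name='w')
loss = K.mean(K.square(K.dot(x, w)))

opt = tf.keras.optimizers.Adam(lr=0.001)
grads = opt.get_gradients(loss, [w])     # symbolic gradients of loss w.r.t. params
updates = opt.get_updates(loss, [w])     # ops that apply one Adam step to params
train_step = K.function([x], [loss], updates=updates)

batch = np.random.rand(8, 3).astype('float32')
current_loss = train_step([batch])[0]    # runs one update and returns the loss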
get_weights

get_weights()
Returns the current value of the weights of the optimizer.
Returns: A list of numpy arrays.
set_weights

set_weights(weights)
Sets the weights of the optimizer from Numpy arrays.
Should only be called after computing the gradients (otherwise the optimizer has no weights).
Arguments:
weights: a list of Numpy arrays. The number of arrays and their shapes must match the number and dimensions of the weights of the optimizer (i.e. it should match the output of get_weights).

Raises:
ValueError: in case of incompatible weight shapes.
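A sketch of capturing and restoring optimizer state with get_weights / set_weights; as noted above, the optimizer only has weights after at least one gradient computation. The model and data are illustrative.

import numpy as np
import tensorflow as tf

model = tf.keras.models.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
opt = tf.keras.optimizers.Adam(lr=0.001)
model.compile(optimizer=opt, loss='mse')

x = np.random.rand(8, 3).astype('float32')
y = np.random.rand(8, 1).astype('float32')
model.train_on_batch(x, y)          # one step, so the optimizer weights now exist

saved = opt.get_weights()           # list of numpy arrays (iteration count, moments, ...)
opt.set_weights(saved)              # shapes must match the output of get_weights()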
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam