tf.train
provides a set of classes and functions that help train models.
The Optimizer base class provides methods to compute gradients for a loss and apply gradients to variables. A collection of subclasses implements classic optimization algorithms such as GradientDescent and Adagrad.
You never instantiate the Optimizer class itself; instead, you instantiate one of its subclasses.
tf.train.Optimizer
tf.train.GradientDescentOptimizer
tf.train.AdadeltaOptimizer
tf.train.AdagradOptimizer
tf.train.AdagradDAOptimizer
tf.train.MomentumOptimizer
tf.train.AdamOptimizer
tf.train.FtrlOptimizer
tf.train.ProximalGradientDescentOptimizer
tf.train.ProximalAdagradOptimizer
tf.train.RMSPropOptimizer
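The simplest of these subclasses, GradientDescentOptimizer, applies the update var <- var - learning_rate * gradient to each variable. Below is a minimal plain-Python sketch of that update rule, purely for illustration; the real optimizer builds the equivalent updates as operations in the graph.

```python
# Plain-Python sketch of the update rule that GradientDescentOptimizer
# applies to each variable: var <- var - learning_rate * gradient.
# (Illustrative only; the real optimizer emits graph ops for this.)

def gradient_descent_step(variables, gradients, learning_rate):
    """Apply one gradient-descent update to a list of scalar variables."""
    return [v - learning_rate * g for v, g in zip(variables, gradients)]

# Minimize f(x) = x**2 (gradient 2*x), starting from x = 4.0.
x = 4.0
for _ in range(100):
    (x,) = gradient_descent_step([x], [2.0 * x], learning_rate=0.1)
# x converges toward the minimum at 0.
```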
TensorFlow provides functions to compute derivatives for a given computation graph by adding operations to the graph. The optimizer classes compute derivatives on your graph automatically, but authors of new Optimizers or expert users can call the lower-level functions below.
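These lower-level functions build ops that compute d(ys)/d(xs) symbolically. As a rough plain-Python illustration of the quantity they return, a central finite difference approximates the same derivative numerically (this is only a sketch; the graph functions use automatic differentiation, not finite differences):

```python
# Central-difference sketch of the derivative that the graph-level
# gradient functions compute symbolically via automatic differentiation.

def numeric_gradient(f, x, eps=1e-6):
    """Central-difference estimate of df/dx at a scalar point x."""
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

g = numeric_gradient(lambda x: x ** 3, 2.0)  # exact derivative: 3 * x**2 = 12
```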
TensorFlow provides several operations that you can use to add clipping functions to your graph. You can use these functions to perform general data clipping, but they're particularly useful for handling exploding or vanishing gradients.
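For example, norm-based clipping rescales a tensor whenever its L2 norm exceeds a threshold, which caps the size of a gradient step. A plain-Python sketch of that behaviour (mirroring what the documented clip-by-norm operation computes, shown here on a small list rather than a tensor):

```python
import math

def clip_by_norm(values, clip_norm):
    """Scale a vector down so its L2 norm is at most clip_norm.
    Sketch of the norm-clipping rule used for gradient clipping."""
    norm = math.sqrt(sum(v * v for v in values))
    if norm <= clip_norm:
        return list(values)
    scale = clip_norm / norm
    return [v * scale for v in values]

clipped = clip_by_norm([3.0, 4.0], clip_norm=1.0)  # norm 5 -> rescaled to norm 1
```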
tf.train.exponential_decay
tf.train.inverse_time_decay
tf.train.natural_exp_decay
tf.train.piecewise_constant
tf.train.polynomial_decay
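The decay schedules above reduce the learning rate as training progresses. For instance, tf.train.exponential_decay computes learning_rate * decay_rate ** (global_step / decay_steps), optionally in discrete "staircase" intervals. A plain-Python sketch of that formula:

```python
def exponential_decay(learning_rate, global_step, decay_steps, decay_rate,
                      staircase=False):
    """Sketch of the formula behind tf.train.exponential_decay:
    learning_rate * decay_rate ** (global_step / decay_steps)."""
    exponent = global_step / decay_steps
    if staircase:
        # Integer division makes the rate decay in discrete intervals.
        exponent = global_step // decay_steps
    return learning_rate * decay_rate ** exponent

# After exactly decay_steps steps, the rate has been multiplied by decay_rate.
lr = exponential_decay(0.1, global_step=100000, decay_steps=100000,
                       decay_rate=0.96)
```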
Some training algorithms, such as GradientDescent and Momentum, often benefit from maintaining a moving average of variables during optimization. Using the moving averages for evaluation often improves results significantly.
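The tf.train.ExponentialMovingAverage class maintains these averages with the update shadow <- decay * shadow + (1 - decay) * value. A plain-Python sketch of that update (an unrealistically low decay is used here so the effect is visible in a few steps; typical decays are close to 1.0, e.g. 0.999):

```python
def update_moving_average(shadow, value, decay=0.999):
    """Sketch of the update rule used by tf.train.ExponentialMovingAverage:
    shadow <- decay * shadow + (1 - decay) * value."""
    return decay * shadow + (1.0 - decay) * value

# With decay=0.5, the shadow value approaches a constant signal quickly.
shadow = 0.0
for value in [1.0, 1.0, 1.0, 1.0]:
    shadow = update_moving_average(shadow, value, decay=0.5)
```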
See Threading and Queues for how to use threads and queues. For documentation on the Queue API, see Queues.
tf.train.Coordinator
tf.train.QueueRunner
tf.train.LooperThread
tf.train.add_queue_runner
tf.train.start_queue_runners
See Distributed TensorFlow for more information about how to configure a distributed TensorFlow program.
tf.train.Server
tf.train.Supervisor
tf.train.SessionManager
tf.train.ClusterSpec
tf.train.replica_device_setter
tf.train.MonitoredTrainingSession
tf.train.MonitoredSession
tf.train.SingularMonitoredSession
tf.train.Scaffold
tf.train.SessionCreator
tf.train.ChiefSessionCreator
tf.train.WorkerSessionCreator
See Summaries and TensorBoard for an overview of summaries, event files, and visualization in TensorBoard.
Hooks are tools that run during the training or evaluation of a model.
tf.train.SessionRunHook
tf.train.SessionRunArgs
tf.train.SessionRunContext
tf.train.SessionRunValues
tf.train.LoggingTensorHook
tf.train.StopAtStepHook
tf.train.CheckpointSaverHook
tf.train.NewCheckpointReader
tf.train.StepCounterHook
tf.train.NanLossDuringTrainingError
tf.train.NanTensorHook
tf.train.SummarySaverHook
tf.train.GlobalStepWaiterHook
tf.train.FinalOpsHook
tf.train.FeedFnHook
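The hooks above plug into a monitored training loop that invokes each hook around every step. The sketch below is a plain-Python illustration of that protocol; CountingHook and run_training are hypothetical names invented for this example, loosely modelled on how a step-counting hook observes the loop:

```python
# Plain-Python sketch of the hook protocol: the loop calls
# before_run / after_run on every registered hook around each step.
# CountingHook and run_training are illustrative names, not part of tf.train.

class CountingHook:
    """Counts completed steps, loosely modelled on a step-counter hook."""
    def __init__(self):
        self.steps = 0

    def before_run(self):
        pass  # a real hook could request extra values to fetch here

    def after_run(self):
        self.steps += 1  # called once after each completed step

def run_training(num_steps, hooks):
    for _ in range(num_steps):
        for hook in hooks:
            hook.before_run()
        # ... one training step would execute here ...
        for hook in hooks:
            hook.after_run()

hook = CountingHook()
run_training(5, [hook])
```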
tf.train.global_step
tf.train.basic_train_loop
tf.train.get_global_step
tf.train.assert_global_step
tf.train.write_graph
© 2017 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_guides/python/train