
tensorflow::ops::SparseSoftmaxCrossEntropyWithLogits

#include <nn_ops.h>

Computes softmax cross entropy cost and gradients to backpropagate.

Summary

Unlike SoftmaxCrossEntropyWithLogits, this operation does not accept a matrix of label probabilities, but rather a single label per row of features. This label is considered to have probability 1.0 for the given row.

Inputs are the logits, not probabilities.
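
Concretely, for a row of logits x with label index y, the loss and the returned gradient take the standard closed forms below (stated here for orientation; the formulas are not part of the original page):

  loss = -log( exp(x[y]) / sum_j exp(x[j]) )
  backprop = softmax(x) - one_hot(y)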

Arguments:

  • scope: A Scope object
  • features: batch_size x num_classes matrix
  • labels: batch_size vector with values in [0, num_classes). This is the label for the given minibatch entry.

Returns:

  • Output loss: Per-example loss (batch_size vector).
  • Output backprop: Backpropagated gradients (batch_size x num_classes matrix).

Public attributes

backprop

::tensorflow::Output backprop

loss

::tensorflow::Output loss

Public functions

SparseSoftmaxCrossEntropyWithLogits

SparseSoftmaxCrossEntropyWithLogits(
  const ::tensorflow::Scope & scope,
  ::tensorflow::Input features,
  ::tensorflow::Input labels
)
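
A minimal end-to-end sketch, assuming a working TensorFlow C++ build with the cc/client and cc/ops libraries linked in; the batch size (2), class count (3), and tensor values are illustrative, not part of the API:

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/const_op.h"
#include "tensorflow/cc/ops/nn_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  tensorflow::Scope root = tensorflow::Scope::NewRootScope();

  // batch_size = 2, num_classes = 3: one row of logits per example.
  auto features = tensorflow::ops::Const(root, {{1.f, 2.f, 3.f},
                                                {1.f, 1.f, 1.f}});
  // One class index per example, each in [0, num_classes).
  auto labels = tensorflow::ops::Const(root, {2, 0});

  auto xent = tensorflow::ops::SparseSoftmaxCrossEntropyWithLogits(
      root, features, labels);

  tensorflow::ClientSession session(root);
  std::vector<tensorflow::Tensor> outputs;
  // Fetch both outputs: loss has shape [2], backprop has shape [2, 3].
  TF_CHECK_OK(session.Run({xent.loss, xent.backprop}, &outputs));

  LOG(INFO) << "loss: " << outputs[0].DebugString();
  LOG(INFO) << "backprop: " << outputs[1].DebugString();
  return 0;
}

Note that labels is a tensor of integer class indices, one per row of features; a dense matrix of label probabilities would instead go through SoftmaxCrossEntropyWithLogits.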
