sklearn.metrics.precision_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None)
Compute the precision.

The precision is the ratio tp / (tp + fp), where tp is the number of true positives and fp the number of false positives. The precision is intuitively the ability of the classifier not to label as positive a sample that is negative.

The best value is 1 and the worst value is 0.
Read more in the User Guide.
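
As a quick sanity check (not part of the original reference), the tp / (tp + fp) definition can be verified directly on a small binary example; the labels below are made up for illustration:

    from sklearn.metrics import precision_score

    # Toy binary labels, invented for this sketch.
    y_true = [0, 1, 1, 0, 1, 0]
    y_pred = [1, 1, 1, 0, 0, 1]

    # Count true and false positives for the positive class (pos_label=1).
    tp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == 1 and t == 0)

    assert precision_score(y_true, y_pred) == tp / (tp + fp)  # 2 / 4 == 0.5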
Parameters:

    y_true : 1d array-like, or label indicator array / sparse matrix
        Ground truth (correct) target values.

    y_pred : 1d array-like, or label indicator array / sparse matrix
        Estimated targets as returned by a classifier.

    labels : list, optional
        The set of labels to include when average != 'binary', and their
        order if average is None. Labels present in the data can be
        excluded, for example to calculate a multiclass average ignoring a
        majority negative class. By default, all labels in y_true and
        y_pred are used in sorted order.
        Changed in version 0.17: parameter labels improved for multiclass
        problem.

    pos_label : str or int, 1 by default
        The class to report if average='binary' and the data is binary.
        If the data are multiclass or multilabel, this will be ignored;
        setting labels=[pos_label] and average != 'binary' will report
        scores for that label only.

    average : string, [None, 'binary' (default), 'micro', 'macro', 'samples', 'weighted']
        This parameter is required for multiclass/multilabel targets. If
        None, the scores for each class are returned. Otherwise, this
        determines the type of averaging performed on the data:

        'binary':
            Only report results for the class specified by pos_label.
            This is applicable only if targets (y_true, y_pred) are
            binary.
        'micro':
            Calculate metrics globally by counting the total true
            positives, false negatives and false positives.
        'macro':
            Calculate metrics for each label, and find their unweighted
            mean. This does not take label imbalance into account.
        'weighted':
            Calculate metrics for each label, and find their average
            weighted by support (the number of true instances for each
            label). This alters 'macro' to account for label imbalance.
        'samples':
            Calculate metrics for each instance, and find their average
            (only meaningful for multilabel classification).

    sample_weight : array-like of shape = [n_samples], optional
        Sample weights.

Returns:

    precision : float (if average is not None) or array of float, shape = [n_unique_labels]
        Precision of the positive class in binary classification or
        weighted average of the precision of each class for the
        multiclass task.
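
To make the averaging options concrete, here is a small sketch (not part of the original reference) deriving the per-class, 'macro', and 'micro' scores from confusion-matrix counts; it uses the same data as the examples below:

    import numpy as np
    from sklearn.metrics import confusion_matrix, precision_score

    y_true = [0, 1, 2, 0, 1, 2]
    y_pred = [0, 2, 1, 0, 0, 1]

    cm = confusion_matrix(y_true, y_pred)  # rows = true class, columns = predicted class
    tp = np.diag(cm).astype(float)         # true positives per class
    fp = cm.sum(axis=0) - tp               # false positives per class

    per_class = tp / (tp + fp)             # safe here: every class is predicted at least once

    assert np.allclose(per_class, precision_score(y_true, y_pred, average=None))
    # 'macro' is the unweighted mean of the per-class scores.
    assert np.isclose(per_class.mean(), precision_score(y_true, y_pred, average='macro'))
    # 'micro' pools the tp/fp counts over all classes before dividing.
    assert np.isclose(tp.sum() / (tp.sum() + fp.sum()),
                      precision_score(y_true, y_pred, average='micro'))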
Examples:

    >>> from sklearn.metrics import precision_score
    >>> y_true = [0, 1, 2, 0, 1, 2]
    >>> y_pred = [0, 2, 1, 0, 0, 1]
    >>> precision_score(y_true, y_pred, average='macro')
    0.22...
    >>> precision_score(y_true, y_pred, average='micro')
    0.33...
    >>> precision_score(y_true, y_pred, average='weighted')
    0.22...
    >>> precision_score(y_true, y_pred, average=None)
    array([ 0.66...,  0.        ,  0.        ])
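
For binary data, pos_label chooses which class the 'binary' average reports on. A brief sketch with made-up string labels:

    from sklearn.metrics import precision_score

    y_true = ['spam', 'ham', 'spam', 'ham']
    y_pred = ['spam', 'spam', 'spam', 'ham']

    # Precision for 'spam' as the positive class: 2 correct out of 3 'spam' predictions.
    print(precision_score(y_true, y_pred, pos_label='spam'))  # 0.666...
    # Switching pos_label reports the score for 'ham' instead: 1 correct out of 1.
    print(precision_score(y_true, y_pred, pos_label='ham'))   # 1.0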