numpy.histogram(a, bins=10, range=None, normed=False, weights=None, density=None)
Compute the histogram of a set of data.
Parameters:

a : array_like
    Input data. The histogram is computed over the flattened array.
bins : int or sequence of scalars or str, optional
    If bins is an int, it defines the number of equal-width bins in the given range (10, by default). If bins is a sequence, it defines the bin edges, including the rightmost edge, allowing for non-uniform bin widths.
    New in version 1.11.0.
    If bins is a string from the list below, histogram will use the method chosen to calculate the optimal bin width and consequently the number of bins (see the Notes section for more detail on the estimators) from the data that falls within the requested range. For visualisation, using the 'auto' option is suggested.
range : (float, float), optional
    The lower and upper range of the bins. If not provided, range is simply (a.min(), a.max()). Values outside the range are ignored (see the sketch after the Returns table below).
normed : bool, optional
    This keyword is deprecated in NumPy 1.6.0 due to confusing/buggy behavior. It will be removed in NumPy 2.0.0. Use the density keyword instead.
weights : array_like, optional
    An array of weights, of the same shape as a. Each value in a only contributes its associated weight towards the bin count (instead of 1). If density is True, the weights are normalized, so that the integral of the density over the range remains 1.
density : bool, optional
    If False, the result will contain the number of samples in each bin. If True, the result is the value of the probability density function at the bin, normalized such that the integral over the range is 1. Note that the sum of the histogram values will not be equal to 1 unless bins of unity width are chosen; it is not a probability mass function. Overrides the normed keyword if given.

Returns:

hist : array
    The values of the histogram. See density and weights for a description of the possible semantics.
bin_edges : array of dtype float
    Return the bin edges (length(hist)+1).
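As a quick illustration of the range keyword (a sketch added here, not one of the page's original examples; numpy is assumed to be imported as np, as elsewhere on this page), values outside the requested range are ignored while the bins still span the full range:

>>> np.histogram([1, 2, 7], bins=3, range=(0, 6))
(array([1, 1, 0]), array([ 0.,  2.,  4.,  6.]))

The value 7 lies above the upper bound of 6, so it is not counted in any bin.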
See also
histogramdd, bincount, searchsorted, digitize
Notes

All but the last (righthand-most) bin is half-open. In other words, if bins is [1, 2, 3, 4], then the first bin is [1, 2) (including 1, but excluding 2) and the second is [2, 3). The last bin, however, is [3, 4], which includes 4.
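To see this edge convention in action (a small sketch, not one of the page's original examples), note where values equal to the bin edges land:

>>> np.histogram([1, 2, 3, 4], bins=[1, 2, 3, 4])
(array([1, 1, 2]), array([1, 2, 3, 4]))

The value 4 is counted in the last bin because only the final bin includes its right edge.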
New in version 1.11.0.
The methods to estimate the optimal number of bins are well founded in literature, and are inspired by the choices R provides for histogram visualisation. Note that having the number of bins proportional to n^{1/3} is asymptotically optimal, which is why it appears in most estimators. These are simply plug-in methods that give good starting points for the number of bins. In the equations below, h is the binwidth and n_h is the number of bins. All estimators that compute bin counts are recast to bin width using the ptp of the data. The final bin count is obtained from np.round(np.ceil(range / h)), as sketched below.
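As referenced above, a minimal sketch of that recasting step (the dataset and the width h here are illustrative assumptions, not values from this page):

>>> a = np.arange(100.)
>>> h = 7.0                        # a binwidth produced by some estimator
>>> int(np.ceil(a.ptp() / h))      # ptp is the data range: a.max() - a.min()
15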
'Auto' (maximum of the 'Sturges' and 'FD' estimators)
A compromise to get a good value. For small datasets the Sturges value will usually be chosen, while larger datasets will usually default to FD. Avoids the overly conservative behaviour of FD and Sturges for small and large datasets respectively.

'FD' (Freedman Diaconis Estimator)
h = 2 * IQR / n^{1/3}
The binwidth is proportional to the interquartile range (IQR) and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The IQR is very robust to outliers.

'Scott'
h = sigma * (24 * sqrt(pi) / n)^{1/3}
The binwidth is proportional to the standard deviation of the data and inversely proportional to cube root of a.size. Can be too conservative for small datasets, but is quite good for large datasets. The standard deviation is not very robust to outliers. Values are very similar to the Freedman-Diaconis estimator in the absence of outliers.

'Rice'
n_h = 2 * n^{1/3}
The number of bins is only proportional to cube root of a.size. It tends to overestimate the number of bins and it does not take into account data variability.

'Sturges'
n_h = log2(n) + 1
The number of bins is the base 2 log of a.size. This estimator assumes normality of data and is too conservative for larger, non-normal datasets. This is the default method in R's hist method.

'Doane'
n_h = 1 + log2(n) + log2(1 + |g1| / sigma_{g1}), where g1 = mean[((x - mu) / sigma)^3] and sigma_{g1} = sqrt(6 (n - 2) / ((n + 1)(n + 3)))
An improved version of Sturges' formula that produces better estimates for non-normal datasets. This estimator attempts to account for the skew of the data.

'Sqrt'
n_h = sqrt(n)
The simplest and fastest estimator. Only takes into account the data size. A consolidated sketch evaluating these rules of thumb appears after this list.
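As noted at the end of the list, here is a consolidated sketch (added here, not from the original page) that evaluates these rules of thumb by hand on a simple dataset; all variable names are illustrative only:

>>> a = np.arange(1000.)                                         # any 1-D dataset
>>> n = a.size
>>> iqr = np.subtract(*np.percentile(a, [75, 25]))               # interquartile range
>>> h_fd = 2. * iqr / n ** (1. / 3)                              # Freedman-Diaconis binwidth
>>> h_scott = a.std() * (24. * np.sqrt(np.pi) / n) ** (1. / 3)   # Scott binwidth
>>> n_rice = int(np.ceil(2. * n ** (1. / 3)))                    # Rice bin count
>>> n_sturges = int(np.ceil(np.log2(n))) + 1                     # Sturges bin count
>>> g1 = np.mean(((a - a.mean()) / a.std()) ** 3)                # sample skewness for Doane
>>> sg1 = np.sqrt(6. * (n - 2) / ((n + 1.) * (n + 3)))
>>> n_doane = int(np.ceil(1 + np.log2(n) + np.log2(1 + abs(g1) / sg1)))
>>> n_sqrt = int(np.ceil(np.sqrt(n)))                            # square-root bin count
>>> (n_rice, n_sturges, n_doane, n_sqrt)
(20, 11, 11, 32)

The width-based estimators (h_fd, h_scott) would then be converted to a bin count via the ptp recasting shown earlier.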
Examples

>>> np.histogram([1, 2, 1], bins=[0, 1, 2, 3])
(array([0, 2, 1]), array([0, 1, 2, 3]))
>>> np.histogram(np.arange(4), bins=np.arange(5), density=True)
(array([ 0.25,  0.25,  0.25,  0.25]), array([0, 1, 2, 3, 4]))
>>> np.histogram([[1, 2, 1], [1, 0, 1]], bins=[0,1,2,3])
(array([1, 4, 1]), array([0, 1, 2, 3]))
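The weights keyword has no example on the original page; here is a minimal sketch of its semantics (the weight values are arbitrary assumptions):

>>> np.histogram([1, 2, 1], bins=[0, 1, 2, 3], weights=[0.5, 1.0, 0.25])
(array([ 0.  ,  0.75,  1.  ]), array([0, 1, 2, 3]))

Each sample contributes its weight instead of 1, so the two samples equal to 1 together contribute 0.5 + 0.25 to the second bin.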
>>> a = np.arange(5)
>>> hist, bin_edges = np.histogram(a, density=True)
>>> hist
array([ 0.5,  0. ,  0.5,  0. ,  0. ,  0.5,  0. ,  0.5,  0. ,  0.5])
>>> hist.sum()
2.4999999999999996
>>> np.sum(hist * np.diff(bin_edges))
1.0
New in version 1.11.0.
Automated bin selection methods example, using two-peak random data with 2000 points:
>>> import matplotlib.pyplot as plt
>>> rng = np.random.RandomState(10)  # deterministic random data
>>> a = np.hstack((rng.normal(size=1000),
...                rng.normal(loc=5, scale=2, size=1000)))
>>> plt.hist(a, bins='auto')  # arguments are passed to np.histogram
>>> plt.title("Histogram with 'auto' bins")
>>> plt.show()
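If you only need the edges that the 'auto' method selects (a quick sketch, not among the original examples, reusing the two-peak data a from the previous example), np.histogram returns them directly without plotting:

>>> hist, bin_edges = np.histogram(a, bins='auto')
>>> bin_edges.size == hist.size + 1    # always one more edge than bins
True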