
modin.pandas.Series.value_counts¶

Series.value_counts(normalize: bool = False, sort: bool = True, ascending: bool = False, bins: int | None = None, dropna: bool = True)[source]¶

Return a Series containing counts of unique values.

The resulting object will be in descending order so that the first element is the most frequently-occurring element. Excludes NA values by default.

Parameters:
  • normalize (bool, default False) – If True, the returned object contains the relative frequencies of the unique values. Unlike native pandas, Snowpark pandas returns a Series with decimal.Decimal values.

  • sort (bool, default True) – Sort by frequencies when True; preserve the order of the data when False. When counts are tied, the order is still deterministic but may differ from native pandas.

  • ascending (bool, default False) – Sort in ascending order.

  • bins (int, optional) – Rather than counting values, group them into half-open bins. This is a convenience for pd.cut and only works with numeric data. This argument is not supported yet.

  • dropna (bool, default True) – Don’t include counts of NaN.
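To illustrate how sort and ascending interact, here is a minimal sketch using native pandas (as noted above, Snowpark pandas may order tied counts differently):

```python
import pandas as pd
import numpy as np

s = pd.Series([3, 1, 2, 3, 4, np.nan])

# sort=False preserves the order in which values first appear
unsorted_counts = s.value_counts(sort=False)

# ascending=True puts the least frequent values first
ascending_counts = s.value_counts(ascending=True)

print(unsorted_counts)
print(ascending_counts)
```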

Return type:

Series

See also

Series.count

Number of non-NA elements in a Series.

DataFrame.count

Number of non-NA elements in a DataFrame.

DataFrame.value_counts

Equivalent method on DataFrames.

Examples

>>> s = pd.Series([3, 1, 2, 3, 4, np.nan])
>>> s.value_counts()
3.0    2
1.0    1
2.0    1
4.0    1
Name: count, dtype: int64

With normalize set to True, returns the relative frequency by dividing all values by the sum of values.

>>> s.value_counts(normalize=True)
3.0    0.4
1.0    0.2
2.0    0.2
4.0    0.2
Name: proportion, dtype: float64


With dropna set to False we can also see NaN index values.

>>> s.value_counts(dropna=False)
3.0    2
1.0    1
2.0    1
4.0    1
NaN    1
Name: count, dtype: int64
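Although Snowpark pandas does not yet support the bins argument, native pandas illustrates the intended behavior: the numeric values are bucketed into equal-width half-open intervals (as with pd.cut) and the intervals are counted. A sketch using native pandas:

```python
import pandas as pd
import numpy as np

s = pd.Series([3, 1, 2, 3, 4, np.nan])

# bins=3 groups the non-NaN values into three half-open intervals
# and counts how many values fall into each one.
binned = s.value_counts(bins=3)
print(binned)
```

The bin counts sum to the number of non-NaN values in the Series.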