
modin.pandas.DataFrame.value_counts

DataFrame.value_counts(subset: Sequence[Hashable] | None = None, normalize: bool = False, sort: bool = True, ascending: bool = False, dropna: bool = True)

Return a Series containing the frequency of each distinct row in the DataFrame.

Parameters:
  • subset (label or list of labels, optional) – Columns to use when counting unique combinations.

  • normalize (bool, default False) – Return proportions rather than frequencies. Unlike native pandas, Snowpark pandas returns a Series of decimal.Decimal values.

  • sort (bool, default True) – Sort by frequencies when True; sort by DataFrame column values when False. When counts tie, the resulting order is deterministic but may differ from native pandas.

  • ascending (bool, default False) – Sort in ascending order.

  • dropna (bool, default True) – Don’t include counts of rows that contain NA values.

Return type:

Series

See also

Series.value_counts : Equivalent method on Series.

Notes

The returned Series will have a MultiIndex with one level per input column but an Index (non-multi) for a single label. By default, rows that contain any NA values are omitted from the result. By default, the resulting Series will be in descending order so that the first element is the most frequently-occurring row.

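As a minimal sketch of the index shapes described above (assuming pd refers to modin.pandas as in the examples below; the column names and values are illustrative):

>>> df = pd.DataFrame({'a': [1, 1, 2], 'b': ['x', 'x', 'y']})
>>> isinstance(df.value_counts().index, pd.MultiIndex)            # two input columns -> MultiIndex
True
>>> isinstance(df.value_counts(subset='a').index, pd.MultiIndex)  # single label -> plain Index
False
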
Examples

>>> df = pd.DataFrame({'num_legs': [2, 4, 4, 6],
...                    'num_wings': [2, 0, 0, 0]},
...                   index=['falcon', 'dog', 'cat', 'ant'])
>>> df
        num_legs  num_wings
falcon         2          2
dog            4          0
cat            4          0
ant            6          0
>>> df.value_counts()
num_legs  num_wings
4         0            2
2         2            1
6         0            1
Name: count, dtype: int64
>>> df.value_counts(sort=False)
num_legs  num_wings
2         2            1
4         0            2
6         0            1
Name: count, dtype: int64
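When sort=True and counts tie, the resulting order may differ from native pandas (see the sort parameter above). A hedged sketch for imposing a specific tie-breaking order: sort by the index first, then re-sort by counts with a stable algorithm (assuming the kind argument is honored by this backend):

>>> counts = df.value_counts(sort=False)
>>> ordered = counts.sort_index().sort_values(ascending=False, kind='mergesort')
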
>>> df.value_counts(ascending=True)
num_legs  num_wings
2         2            1
6         0            1
4         0            2
Name: count, dtype: int64
>>> df.value_counts(normalize=True)
num_legs  num_wings
4         0            0.50
2         2            0.25
6         0            0.25
Name: proportion, dtype: float64
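Because Snowpark pandas may return the proportions as decimal.Decimal values (see the normalize parameter above), a minimal sketch for converting them to floats (assuming astype is supported for this result):

>>> proportions = df.value_counts(normalize=True)
>>> proportions = proportions.astype(float)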

With dropna set to False we can also count rows with NA values.

>>> df = pd.DataFrame({'first_name': ['John', 'Anne', 'John', 'Beth'],
...                    'middle_name': ['Smith', None, None, 'Louise']})
>>> df
  first_name middle_name
0       John       Smith
1       Anne        None
2       John        None
3       Beth      Louise
>>> df.value_counts()
first_name  middle_name
John        Smith          1
Beth        Louise         1
Name: count, dtype: int64
>>> df.value_counts(dropna=False)
first_name  middle_name
John        Smith          1
Anne        NaN            1
John        NaN            1
Beth        Louise         1
Name: count, dtype: int64
>>> df.value_counts("first_name")
first_name
John    2
Anne    1
Beth    1
Name: count, dtype: int64