5 Getting Started with pandas
pandas will be a major tool of interest throughout much of the rest of the book. It contains data structures and data manipulation tools designed to make data cleaning and analysis fast and convenient in Python. pandas is often used in tandem with numerical computing tools like NumPy and SciPy, analytical libraries like statsmodels and scikit-learn, and data visualization libraries like matplotlib. pandas adopts significant parts of NumPy's idiomatic style of array-based computing, especially array-based functions and a preference for data processing without for loops.
While pandas adopts many coding idioms from NumPy, the biggest difference is that pandas is designed for working with tabular or heterogeneous data. NumPy, by contrast, is best suited for working with homogeneously typed numerical array data.
Since becoming an open source project in 2010, pandas has matured into a quite large library that's applicable in a broad set of real-world use cases. The developer community has grown to over 2,500 distinct contributors, who've been helping build the project as they used it to solve their day-to-day data problems. The vibrant pandas developer and user communities have been a key part of its success.
Many people don't know that I haven't been actively involved in day-to-day pandas development since 2013; it has been an entirely community-managed project since then. Be sure to pass on your thanks to the core development team and all the contributors for their hard work!
Throughout the rest of the book, I use the following import conventions for NumPy and pandas:
In [1]: import numpy as np

In [2]: import pandas as pd

Thus, whenever you see pd. in code, it's referring to pandas. You may also find it easier to import Series and DataFrame into the local namespace since they are so frequently used:

In [3]: from pandas import Series, DataFrame
5.1 Introduction to pandas Data Structures
To get started with pandas, you will need to get comfortable with its two workhorse data structures: Series and DataFrame. While they are not a universal solution for every problem, they provide a solid foundation for a wide variety of data tasks.
Series
A Series is a one-dimensional array-like object containing a sequence of values (of similar types to NumPy types) of the same type and an associated array of data labels, called its index. The simplest Series is formed from only an array of data:
In [14]: obj = pd.Series([4, 7, -5, 3])

In [15]: obj
Out[15]: 
0    4
1    7
2   -5
3    3
dtype: int64
The string representation of a Series displayed interactively shows the index on the left and the values on the right. Since we did not specify an index for the data, a default one consisting of the integers 0 through N - 1 (where N is the length of the data) is created. You can get the array representation and index object of the Series via its array and index attributes, respectively:
In [16]: obj.array
Out[16]: 
<PandasArray>
[4, 7, -5, 3]
Length: 4, dtype: int64

In [17]: obj.index
Out[17]: RangeIndex(start=0, stop=4, step=1)

The result of the .array attribute is a PandasArray, which usually wraps a NumPy array but can also contain special extension array types, which will be discussed more in Ch 7.3: Extension Data Types.
Often, you'll want to create a Series with an index identifying each data point with a label:
In [18]: obj2 = pd.Series([4, 7, -5, 3], index=["d", "b", "a", "c"])

In [19]: obj2
Out[19]: 
d    4
b    7
a   -5
c    3
dtype: int64

In [20]: obj2.index
Out[20]: Index(['d', 'b', 'a', 'c'], dtype='object')
Compared with NumPy arrays, you can use labels in the index when selecting single values or a set of values:
In [21]: obj2["a"]
Out[21]: -5

In [22]: obj2["d"] = 6

In [23]: obj2[["c", "a", "d"]]
Out[23]: 
c    3
a   -5
d    6
dtype: int64

Here ["c", "a", "d"] is interpreted as a list of indices, even though it contains strings instead of integers.
Using NumPy functions or NumPy-like operations, such as filtering with a Boolean array, scalar multiplication, or applying math functions, will preserve the index-value link:
In [24]: obj2[obj2 > 0]
Out[24]: 
d    6
b    7
c    3
dtype: int64

In [25]: obj2 * 2
Out[25]: 
d    12
b    14
a   -10
c     6
dtype: int64

In [26]: import numpy as np

In [27]: np.exp(obj2)
Out[27]: 
d     403.428793
b    1096.633158
a       0.006738
c      20.085537
dtype: float64
Another way to think about a Series is as a fixed-length, ordered dictionary, as it is a mapping of index values to data values. It can be used in many contexts where you might use a dictionary:
28]: "b" in obj2
In [28]: True
Out[
29]: "e" in obj2
In [29]: False Out[
Should you have data contained in a Python dictionary, you can create a Series from it by passing the dictionary:
In [30]: sdata = {"Ohio": 35000, "Texas": 71000, "Oregon": 16000, "Utah": 5000}

In [31]: obj3 = pd.Series(sdata)

In [32]: obj3
Out[32]: 
Ohio      35000
Texas     71000
Oregon    16000
Utah       5000
dtype: int64

A Series can be converted back to a dictionary with its to_dict method:

In [33]: obj3.to_dict()
Out[33]: {'Ohio': 35000, 'Texas': 71000, 'Oregon': 16000, 'Utah': 5000}

When you are only passing a dictionary, the index in the resulting Series will respect the order of the keys according to the dictionary's keys method, which depends on the key insertion order. You can override this by passing an index with the dictionary keys in the order you want them to appear in the resulting Series:
In [34]: states = ["California", "Ohio", "Oregon", "Texas"]

In [35]: obj4 = pd.Series(sdata, index=states)

In [36]: obj4
Out[36]: 
California        NaN
Ohio          35000.0
Oregon        16000.0
Texas         71000.0
dtype: float64

Here, three values found in sdata were placed in the appropriate locations, but since no value for "California" was found, it appears as NaN (Not a Number), which is considered in pandas to mark missing or NA values. Since "Utah" was not included in states, it is excluded from the resulting object.

I will use the terms "missing," "NA," or "null" interchangeably to refer to missing data. The isna and notna functions in pandas should be used to detect missing data:
In [37]: pd.isna(obj4)
Out[37]: 
California     True
Ohio          False
Oregon        False
Texas         False
dtype: bool

In [38]: pd.notna(obj4)
Out[38]: 
California    False
Ohio           True
Oregon         True
Texas          True
dtype: bool
Series also has these as instance methods:
In [39]: obj4.isna()
Out[39]: 
California     True
Ohio          False
Oregon        False
Texas         False
dtype: bool
I discuss working with missing data in more detail in Ch 7: Data Cleaning and Preparation.
A useful Series feature for many applications is that it automatically aligns by index label in arithmetic operations:
In [40]: obj3
Out[40]: 
Ohio      35000
Texas     71000
Oregon    16000
Utah       5000
dtype: int64

In [41]: obj4
Out[41]: 
California        NaN
Ohio          35000.0
Oregon        16000.0
Texas         71000.0
dtype: float64

In [42]: obj3 + obj4
Out[42]: 
California         NaN
Ohio           70000.0
Oregon         32000.0
Texas         142000.0
Utah               NaN
dtype: float64
Data alignment features will be addressed in more detail later. If you have experience with databases, you can think about this as being similar to a join operation.
Both the Series object itself and its index have a name attribute, which integrates with other areas of pandas functionality:

In [43]: obj4.name = "population"

In [44]: obj4.index.name = "state"

In [45]: obj4
Out[45]: 
state
California        NaN
Ohio          35000.0
Oregon        16000.0
Texas         71000.0
Name: population, dtype: float64
A Series’s index can be altered in place by assignment:
In [46]: obj
Out[46]: 
0    4
1    7
2   -5
3    3
dtype: int64

In [47]: obj.index = ["Bob", "Steve", "Jeff", "Ryan"]

In [48]: obj
Out[48]: 
Bob      4
Steve    7
Jeff    -5
Ryan     3
dtype: int64
DataFrame
A DataFrame represents a rectangular table of data and contains an ordered, named collection of columns, each of which can be a different value type (numeric, string, Boolean, etc.). The DataFrame has both a row and column index; it can be thought of as a dictionary of Series all sharing the same index.
While a DataFrame is physically two-dimensional, you can use it to represent higher dimensional data in a tabular format using hierarchical indexing, a subject we will discuss in Ch 8: Data Wrangling: Join, Combine, and Reshape and an ingredient in some of the more advanced data-handling features in pandas.
There are many ways to construct a DataFrame, though one of the most common is from a dictionary of equal-length lists or NumPy arrays:
= {"state": ["Ohio", "Ohio", "Ohio", "Nevada", "Nevada", "Nevada"],
data "year": [2000, 2001, 2002, 2001, 2002, 2003],
"pop": [1.5, 1.7, 3.6, 2.4, 2.9, 3.2]}
= pd.DataFrame(data) frame
The resulting DataFrame will have its index assigned automatically, as with Series, and the columns are placed according to the order of the keys in data
(which depends on their insertion order in the dictionary):
In [50]: frame
Out[50]: 
    state  year  pop
0    Ohio  2000  1.5
1    Ohio  2001  1.7
2    Ohio  2002  3.6
3  Nevada  2001  2.4
4  Nevada  2002  2.9
5  Nevada  2003  3.2
If you are using the Jupyter notebook, pandas DataFrame objects will be displayed as a more browser-friendly HTML table. See Figure 5.1 for an example.
For large DataFrames, the head method selects only the first five rows:

In [51]: frame.head()
Out[51]: 
    state  year  pop
0    Ohio  2000  1.5
1    Ohio  2001  1.7
2    Ohio  2002  3.6
3  Nevada  2001  2.4
4  Nevada  2002  2.9
Similarly, tail returns the last five rows:

In [52]: frame.tail()
Out[52]: 
    state  year  pop
1    Ohio  2001  1.7
2    Ohio  2002  3.6
3  Nevada  2001  2.4
4  Nevada  2002  2.9
5  Nevada  2003  3.2
If you specify a sequence of columns, the DataFrame’s columns will be arranged in that order:
In [53]: pd.DataFrame(data, columns=["year", "state", "pop"])
Out[53]: 
   year   state  pop
0  2000    Ohio  1.5
1  2001    Ohio  1.7
2  2002    Ohio  3.6
3  2001  Nevada  2.4
4  2002  Nevada  2.9
5  2003  Nevada  3.2
If you pass a column that isn’t contained in the dictionary, it will appear with missing values in the result:
In [54]: frame2 = pd.DataFrame(data, columns=["year", "state", "pop", "debt"])

In [55]: frame2
Out[55]: 
   year   state  pop debt
0  2000    Ohio  1.5  NaN
1  2001    Ohio  1.7  NaN
2  2002    Ohio  3.6  NaN
3  2001  Nevada  2.4  NaN
4  2002  Nevada  2.9  NaN
5  2003  Nevada  3.2  NaN

In [56]: frame2.columns
Out[56]: Index(['year', 'state', 'pop', 'debt'], dtype='object')
A column in a DataFrame can be retrieved as a Series either by dictionary-like notation or by using the dot attribute notation:
In [57]: frame2["state"]
Out[57]: 
0      Ohio
1      Ohio
2      Ohio
3    Nevada
4    Nevada
5    Nevada
Name: state, dtype: object

In [58]: frame2.year
Out[58]: 
0    2000
1    2001
2    2002
3    2001
4    2002
5    2003
Name: year, dtype: int64

Attribute-like access (e.g., frame2.year) and tab completion of column names in IPython are provided as a convenience. frame2[column] works for any column name, but frame2.column works only when the column name is a valid Python variable name and does not conflict with any of the method names in DataFrame. For example, if a column's name contains whitespace or symbols other than underscores, it cannot be accessed with the dot attribute method.
Note that the returned Series have the same index as the DataFrame, and their name attribute has been appropriately set.
Rows can also be retrieved by position or name with the special iloc and loc attributes (more on this later in Selection on DataFrame with loc and iloc):
In [59]: frame2.loc[1]
Out[59]: 
year     2001
state    Ohio
pop       1.7
debt      NaN
Name: 1, dtype: object

In [60]: frame2.iloc[2]
Out[60]: 
year     2002
state    Ohio
pop       3.6
debt      NaN
Name: 2, dtype: object
Columns can be modified by assignment. For example, the empty debt column could be assigned a scalar value or an array of values:

In [61]: frame2["debt"] = 16.5

In [62]: frame2
Out[62]: 
   year   state  pop  debt
0  2000    Ohio  1.5  16.5
1  2001    Ohio  1.7  16.5
2  2002    Ohio  3.6  16.5
3  2001  Nevada  2.4  16.5
4  2002  Nevada  2.9  16.5
5  2003  Nevada  3.2  16.5

In [63]: frame2["debt"] = np.arange(6.)

In [64]: frame2
Out[64]: 
   year   state  pop  debt
0  2000    Ohio  1.5   0.0
1  2001    Ohio  1.7   1.0
2  2002    Ohio  3.6   2.0
3  2001  Nevada  2.4   3.0
4  2002  Nevada  2.9   4.0
5  2003  Nevada  3.2   5.0
When you are assigning lists or arrays to a column, the value’s length must match the length of the DataFrame. If you assign a Series, its labels will be realigned exactly to the DataFrame’s index, inserting missing values in any index values not present:
In [65]: val = pd.Series([-1.2, -1.5, -1.7], index=[2, 4, 5])

In [66]: frame2["debt"] = val

In [67]: frame2
Out[67]: 
   year   state  pop  debt
0  2000    Ohio  1.5   NaN
1  2001    Ohio  1.7   NaN
2  2002    Ohio  3.6  -1.2
3  2001  Nevada  2.4   NaN
4  2002  Nevada  2.9  -1.5
5  2003  Nevada  3.2  -1.7
Assigning a column that doesn’t exist will create a new column.
The del keyword will delete columns like with a dictionary. As an example, I first add a new column of Boolean values where the state column equals "Ohio":

In [68]: frame2["eastern"] = frame2["state"] == "Ohio"

In [69]: frame2
Out[69]: 
   year   state  pop  debt  eastern
0  2000    Ohio  1.5   NaN     True
1  2001    Ohio  1.7   NaN     True
2  2002    Ohio  3.6  -1.2     True
3  2001  Nevada  2.4   NaN    False
4  2002  Nevada  2.9  -1.5    False
5  2003  Nevada  3.2  -1.7    False
New columns cannot be created with the frame2.eastern dot attribute notation.

The del keyword can then be used to remove this column:

In [70]: del frame2["eastern"]

In [71]: frame2.columns
Out[71]: Index(['year', 'state', 'pop', 'debt'], dtype='object')
The column returned from indexing a DataFrame is a view on the underlying data, not a copy. Thus, any in-place modifications to the Series will be reflected in the DataFrame. The column can be explicitly copied with the Series's copy method.
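A minimal sketch of the distinction (my own example; note that pandas's newer copy-on-write mode changes how modifications to such views propagate):

state_col = frame2["state"]          # a view on frame2's data, not a copy (classic behavior)
state_copy = frame2["state"].copy()  # an independent copy that is safe to modify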
Another common form of data is a nested dictionary of dictionaries:
In [72]: populations = {"Ohio": {2000: 1.5, 2001: 1.7, 2002: 3.6},
   ....:                "Nevada": {2001: 2.4, 2002: 2.9}}
If the nested dictionary is passed to the DataFrame, pandas will interpret the outer dictionary keys as the columns, and the inner keys as the row indices:
In [73]: frame3 = pd.DataFrame(populations)

In [74]: frame3
Out[74]: 
      Ohio  Nevada
2000   1.5     NaN
2001   1.7     2.4
2002   3.6     2.9
You can transpose the DataFrame (swap rows and columns) with similar syntax to a NumPy array:
In [75]: frame3.T
Out[75]: 
        2000  2001  2002
Ohio     1.5   1.7   3.6
Nevada   NaN   2.4   2.9
Note that transposing discards the column data types if the columns do not all have the same data type, so transposing and then transposing back may lose the previous type information. The columns become arrays of pure Python objects in this case.
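For example, round-tripping the mixed-type frame2 through two transposes illustrates this (a sketch of what I would expect, not output reproduced from the book):

frame2.dtypes      # a mix of integer, object (string), and float columns
frame2.T.T.dtypes  # after the round trip, every column has dtype object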
The keys in the inner dictionaries are combined to form the index in the result. This isn’t true if an explicit index is specified:
In [76]: pd.DataFrame(populations, index=[2001, 2002, 2003])
Out[76]: 
      Ohio  Nevada
2001   1.7     2.4
2002   3.6     2.9
2003   NaN     NaN
Dictionaries of Series are treated in much the same way:
In [77]: pdata = {"Ohio": frame3["Ohio"][:-1],
   ....:          "Nevada": frame3["Nevada"][:2]}

In [78]: pd.DataFrame(pdata)
Out[78]: 
      Ohio  Nevada
2000   1.5     NaN
2001   1.7     2.4
For a list of many of the things you can pass to the DataFrame constructor, see Table 5.1.
Type | Notes |
---|---|
2D ndarray | A matrix of data, passing optional row and column labels |
Dictionary of arrays, lists, or tuples | Each sequence becomes a column in the DataFrame; all sequences must be the same length |
NumPy structured/record array | Treated as the “dictionary of arrays” case |
Dictionary of Series | Each value becomes a column; indexes from each Series are unioned together to form the result’s row index if no explicit index is passed |
Dictionary of dictionaries | Each inner dictionary becomes a column; keys are unioned to form the row index as in the “dictionary of Series” case |
List of dictionaries or Series | Each item becomes a row in the DataFrame; unions of dictionary keys or Series indexes become the DataFrame’s column labels |
List of lists or tuples | Treated as the “2D ndarray” case |
Another DataFrame | The DataFrame’s indexes are used unless different ones are passed |
NumPy MaskedArray | Like the “2D ndarray” case except masked values are missing in the DataFrame result |
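To make two of these entries concrete, here is a small sketch (my own example) of constructing a DataFrame from a list of dictionaries and from a 2D ndarray:

# List of dictionaries: the union of the keys becomes the column labels
pd.DataFrame([{"a": 1, "b": 2}, {"b": 3, "c": 4}])

# 2D ndarray with optional row and column labels
pd.DataFrame(np.arange(6).reshape((2, 3)),
             index=["r1", "r2"], columns=["x", "y", "z"])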
If a DataFrame's index and columns have their name attributes set, these will also be displayed:

In [79]: frame3.index.name = "year"

In [80]: frame3.columns.name = "state"

In [81]: frame3
Out[81]: 
state  Ohio  Nevada
year               
2000    1.5     NaN
2001    1.7     2.4
2002    3.6     2.9
Unlike Series, DataFrame does not have a name attribute. DataFrame's to_numpy method returns the data contained in the DataFrame as a two-dimensional ndarray:

In [82]: frame3.to_numpy()
Out[82]: 
array([[1.5, nan],
       [1.7, 2.4],
       [3.6, 2.9]])
If the DataFrame’s columns are different data types, the data type of the returned array will be chosen to accommodate all of the columns:
In [83]: frame2.to_numpy()
Out[83]: 
array([[2000, 'Ohio', 1.5, nan],
       [2001, 'Ohio', 1.7, nan],
       [2002, 'Ohio', 3.6, -1.2],
       [2001, 'Nevada', 2.4, nan],
       [2002, 'Nevada', 2.9, -1.5],
       [2003, 'Nevada', 3.2, -1.7]], dtype=object)
Index Objects
pandas’s Index objects are responsible for holding the axis labels (including a DataFrame's column names) and other metadata (like the axis name or names). Any array or other sequence of labels you use when constructing a Series or DataFrame is internally converted to an Index:
In [84]: obj = pd.Series(np.arange(3), index=["a", "b", "c"])

In [85]: index = obj.index

In [86]: index
Out[86]: Index(['a', 'b', 'c'], dtype='object')

In [87]: index[1:]
Out[87]: Index(['b', 'c'], dtype='object')
Index objects are immutable and thus can’t be modified by the user:
index[1] = "d"  # TypeError
Immutability makes it safer to share Index objects among data structures:
In [88]: labels = pd.Index(np.arange(3))

In [89]: labels
Out[89]: Index([0, 1, 2], dtype='int64')

In [90]: obj2 = pd.Series([1.5, -2.5, 0], index=labels)

In [91]: obj2
Out[91]: 
0    1.5
1   -2.5
2    0.0
dtype: float64

In [92]: obj2.index is labels
Out[92]: True
Some users will not often take advantage of the capabilities provided by an Index, but because some operations will yield results containing indexed data, it's important to understand how they work.
In addition to being array-like, an Index also behaves like a fixed-size set:
93]: frame3
In [93]:
Out[
state Ohio Nevada
year 2000 1.5 NaN
2001 1.7 2.4
2002 3.6 2.9
94]: frame3.columns
In [94]: Index(['Ohio', 'Nevada'], dtype='object', name='state')
Out[
95]: "Ohio" in frame3.columns
In [95]: True
Out[
96]: 2003 in frame3.index
In [96]: False Out[
Unlike Python sets, a pandas Index can contain duplicate labels:
In [97]: pd.Index(["foo", "foo", "bar", "bar"])
Out[97]: Index(['foo', 'foo', 'bar', 'bar'], dtype='object')
Selections with duplicate labels will select all occurrences of that label.
Each Index has a number of methods and properties for set logic, which answer other common questions about the data it contains. Some useful ones are summarized in Table 5.2.
Method/Property | Description |
---|---|
append() | Concatenate with additional Index objects, producing a new Index |
difference() | Compute set difference as an Index |
intersection() | Compute set intersection |
union() | Compute set union |
isin() | Compute Boolean array indicating whether each value is contained in the passed collection |
delete() | Compute new Index with element at Index i deleted |
drop() | Compute new Index by deleting passed values |
insert() | Compute new Index by inserting element at Index i |
is_monotonic | Returns True if each element is greater than or equal to the previous element |
is_unique | Returns True if the Index has no duplicate values |
unique() | Compute the array of unique values in the Index |
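As a quick sketch of the set-like methods in the table (my own example):

left = pd.Index(["a", "b", "c"])
right = pd.Index(["b", "c", "d"])

left.union(right)         # Index(['a', 'b', 'c', 'd'], dtype='object')
left.intersection(right)  # Index(['b', 'c'], dtype='object')
left.difference(right)    # Index(['a'], dtype='object')
left.isin(["a", "d"])     # array([ True, False, False])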
5.2 Essential Functionality
This section will walk you through the fundamental mechanics of interacting with the data contained in a Series or DataFrame. In the chapters to come, we will delve more deeply into data analysis and manipulation topics using pandas. This book is not intended to serve as exhaustive documentation for the pandas library; instead, we'll focus on familiarizing you with heavily used features, leaving the less common (i.e., more esoteric) things for you to learn more about by reading the online pandas documentation.
Reindexing
An important method on pandas objects is reindex, which means to create a new object with the values rearranged to align with the new index. Consider an example:

In [98]: obj = pd.Series([4.5, 7.2, -5.3, 3.6], index=["d", "b", "a", "c"])

In [99]: obj
Out[99]: 
d    4.5
b    7.2
a   -5.3
c    3.6
dtype: float64
Calling reindex on this Series rearranges the data according to the new index, introducing missing values if any index values were not already present:

In [100]: obj2 = obj.reindex(["a", "b", "c", "d", "e"])

In [101]: obj2
Out[101]: 
a   -5.3
b    7.2
c    3.6
d    4.5
e    NaN
dtype: float64
For ordered data like time series, you may want to do some interpolation or filling of values when reindexing. The method option allows us to do this, using a method such as ffill, which forward-fills the values:

In [102]: obj3 = pd.Series(["blue", "purple", "yellow"], index=[0, 2, 4])

In [103]: obj3
Out[103]: 
0      blue
2    purple
4    yellow
dtype: object

In [104]: obj3.reindex(np.arange(6), method="ffill")
Out[104]: 
0      blue
1      blue
2    purple
3    purple
4    yellow
5    yellow
dtype: object
With DataFrame, reindex can alter the (row) index, columns, or both. When passed only a sequence, it reindexes the rows in the result:

In [105]: frame = pd.DataFrame(np.arange(9).reshape((3, 3)),
   .....:                      index=["a", "c", "d"],
   .....:                      columns=["Ohio", "Texas", "California"])

In [106]: frame
Out[106]: 
   Ohio  Texas  California
a     0      1           2
c     3      4           5
d     6      7           8

In [107]: frame2 = frame.reindex(index=["a", "b", "c", "d"])

In [108]: frame2
Out[108]: 
   Ohio  Texas  California
a   0.0    1.0         2.0
b   NaN    NaN         NaN
c   3.0    4.0         5.0
d   6.0    7.0         8.0
The columns can be reindexed with the columns keyword:

In [109]: states = ["Texas", "Utah", "California"]

In [110]: frame.reindex(columns=states)
Out[110]: 
   Texas  Utah  California
a      1   NaN           2
c      4   NaN           5
d      7   NaN           8
Because "Ohio"
was not in states
, the data for that column is dropped from the result.
Another way to reindex a particular axis is to pass the new axis labels as a positional argument and then specify the axis to reindex with the axis
keyword:
111]: frame.reindex(states, axis="columns")
In [111]:
Out[
Texas Utah California1 NaN 2
a 4 NaN 5
c 7 NaN 8 d
See Table 5.3 for more about the arguments to reindex.
Argument | Description |
---|---|
labels | New sequence to use as an index. Can be Index instance or any other sequence-like Python data structure. An Index will be used exactly as is without any copying. |
index | Use the passed sequence as the new index labels. |
columns | Use the passed sequence as the new column labels. |
axis | The axis to reindex, whether "index" (rows) or "columns". The default is "index". You can alternately do reindex(index=new_labels) or reindex(columns=new_labels). |
method | Interpolation (fill) method; "ffill" fills forward, while "bfill" fills backward. |
fill_value | Substitute value to use when introducing missing data by reindexing. Use fill_value="missing" (the default behavior) when you want absent labels to have null values in the result. |
limit | When forward filling or backfilling, the maximum size gap (in number of elements) to fill. |
tolerance | When forward filling or backfilling, the maximum size gap (in absolute numeric distance) to fill for inexact matches. |
level | Match simple Index on level of MultiIndex; otherwise select subset of. |
copy | If True, always copy underlying data even if the new index is equivalent to the old index; if False, do not copy the data when the indexes are equivalent. |
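As a sketch of two more of these arguments (my own example), backward filling and capping the gap size with limit:

obj5 = pd.Series(["blue", "purple"], index=[0, 4])

obj5.reindex(np.arange(6), method="bfill")           # positions 1-3 backfilled with "purple"
obj5.reindex(np.arange(6), method="bfill", limit=2)  # at most 2 consecutive positions filled per gap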
As we'll explore later in Selection on DataFrame with loc and iloc, you can also reindex by using the loc operator, and many users prefer to always do it this way. This works only if all of the new index labels already exist in the DataFrame (whereas reindex will insert missing data for new labels):

In [112]: frame.loc[["a", "d", "c"], ["California", "Texas"]]
Out[112]: 
   California  Texas
a           2      1
d           8      7
c           5      4
Dropping Entries from an Axis
Dropping one or more entries from an axis is simple if you already have an index array or list without those entries, since you can use the reindex method or .loc-based indexing. As that can require a bit of munging and set logic, the drop method will return a new object with the indicated value or values deleted from an axis:

In [113]: obj = pd.Series(np.arange(5.), index=["a", "b", "c", "d", "e"])

In [114]: obj
Out[114]: 
a    0.0
b    1.0
c    2.0
d    3.0
e    4.0
dtype: float64

In [115]: new_obj = obj.drop("c")

In [116]: new_obj
Out[116]: 
a    0.0
b    1.0
d    3.0
e    4.0
dtype: float64

In [117]: obj.drop(["d", "c"])
Out[117]: 
a    0.0
b    1.0
e    4.0
dtype: float64
With DataFrame, index values can be deleted from either axis. To illustrate this, we first create an example DataFrame:
In [118]: data = pd.DataFrame(np.arange(16).reshape((4, 4)),
   .....:                     index=["Ohio", "Colorado", "Utah", "New York"],
   .....:                     columns=["one", "two", "three", "four"])

In [119]: data
Out[119]: 
          one  two  three  four
Ohio        0    1      2     3
Colorado    4    5      6     7
Utah        8    9     10    11
New York   12   13     14    15
Calling drop with a sequence of labels will drop values from the row labels (axis 0):

In [120]: data.drop(index=["Colorado", "Ohio"])
Out[120]: 
          one  two  three  four
Utah        8    9     10    11
New York   12   13     14    15
To drop labels from the columns, instead use the columns keyword:

In [121]: data.drop(columns=["two"])
Out[121]: 
          one  three  four
Ohio        0      2     3
Colorado    4      6     7
Utah        8     10    11
New York   12     14    15
You can also drop values from the columns by passing axis=1 (which is like NumPy) or axis="columns":

In [122]: data.drop("two", axis=1)
Out[122]: 
          one  three  four
Ohio        0      2     3
Colorado    4      6     7
Utah        8     10    11
New York   12     14    15

In [123]: data.drop(["two", "four"], axis="columns")
Out[123]: 
          one  three
Ohio        0      2
Colorado    4      6
Utah        8     10
New York   12     14
Indexing, Selection, and Filtering
Series indexing (obj[...]) works analogously to NumPy array indexing, except you can use the Series's index values instead of only integers. Here are some examples of this:

In [124]: obj = pd.Series(np.arange(4.), index=["a", "b", "c", "d"])

In [125]: obj
Out[125]: 
a    0.0
b    1.0
c    2.0
d    3.0
dtype: float64

In [126]: obj["b"]
Out[126]: 1.0

In [127]: obj[1]
Out[127]: 1.0

In [128]: obj[2:4]
Out[128]: 
c    2.0
d    3.0
dtype: float64

In [129]: obj[["b", "a", "d"]]
Out[129]: 
b    1.0
a    0.0
d    3.0
dtype: float64

In [130]: obj[[1, 3]]
Out[130]: 
b    1.0
d    3.0
dtype: float64

In [131]: obj[obj < 2]
Out[131]: 
a    0.0
b    1.0
dtype: float64
While you can select data by label this way, the preferred way to select index values is with the special loc operator:

In [132]: obj.loc[["b", "a", "d"]]
Out[132]: 
b    1.0
a    0.0
d    3.0
dtype: float64
The reason to prefer loc is because of the different treatment of integers when indexing with []. Regular []-based indexing will treat integers as labels if the index contains integers, so the behavior differs depending on the data type of the index. For example:

In [133]: obj1 = pd.Series([1, 2, 3], index=[2, 0, 1])

In [134]: obj2 = pd.Series([1, 2, 3], index=["a", "b", "c"])

In [135]: obj1
Out[135]: 
2    1
0    2
1    3
dtype: int64

In [136]: obj2
Out[136]: 
a    1
b    2
c    3
dtype: int64

In [137]: obj1[[0, 1, 2]]
Out[137]: 
0    2
1    3
2    1
dtype: int64

In [138]: obj2[[0, 1, 2]]
Out[138]: 
a    1
b    2
c    3
dtype: int64
When using loc, the expression obj.loc[[0, 1, 2]] will fail when the index does not contain integers:

In [134]: obj2.loc[[0, 1]]
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
/tmp/ipykernel_804589/4185657903.py in <module>
----> 1 obj2.loc[[0, 1]]
^ LONG EXCEPTION ABBREVIATED ^
KeyError: "None of [Int64Index([0, 1], dtype='int64')] are in the [index]"
Since the loc operator indexes exclusively with labels, there is also an iloc operator that indexes exclusively with integers to work consistently whether or not the index contains integers:

In [139]: obj1.iloc[[0, 1, 2]]
Out[139]: 
2    1
0    2
1    3
dtype: int64

In [140]: obj2.iloc[[0, 1, 2]]
Out[140]: 
a    1
b    2
c    3
dtype: int64
You can also slice with labels, but it works differently from normal Python slicing in that the endpoint is inclusive:
In [141]: obj2.loc["b":"c"]
Out[141]: 
b    2
c    3
dtype: int64
Assigning values using these methods modifies the corresponding section of the Series:
In [142]: obj2.loc["b":"c"] = 5

In [143]: obj2
Out[143]: 
a    1
b    5
c    5
dtype: int64
It can be a common newbie error to try to call loc or iloc like functions rather than "indexing into" them with square brackets. The square bracket notation is used to enable slice operations and to allow for indexing on multiple axes with DataFrame objects.
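For example (my own illustration):

obj.loc[["b", "a"]]    # correct: index into loc with square brackets
# obj.loc(["b", "a"])  # wrong: raises an exception, since loc is not called like a function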
Indexing into a DataFrame retrieves one or more columns either with a single value or sequence:
In [144]: data = pd.DataFrame(np.arange(16).reshape((4, 4)),
   .....:                     index=["Ohio", "Colorado", "Utah", "New York"],
   .....:                     columns=["one", "two", "three", "four"])

In [145]: data
Out[145]: 
          one  two  three  four
Ohio        0    1      2     3
Colorado    4    5      6     7
Utah        8    9     10    11
New York   12   13     14    15

In [146]: data["two"]
Out[146]: 
Ohio         1
Colorado     5
Utah         9
New York    13
Name: two, dtype: int64

In [147]: data[["three", "one"]]
Out[147]: 
          three  one
Ohio          2    0
Colorado      6    4
Utah         10    8
New York     14   12
Indexing like this has a few special cases. The first is slicing or selecting data with a Boolean array:
In [148]: data[:2]
Out[148]: 
          one  two  three  four
Ohio        0    1      2     3
Colorado    4    5      6     7

In [149]: data[data["three"] > 5]
Out[149]: 
          one  two  three  four
Colorado    4    5      6     7
Utah        8    9     10    11
New York   12   13     14    15
The row selection syntax data[:2] is provided as a convenience. Passing a single element or a list to the [] operator selects columns.
Another use case is indexing with a Boolean DataFrame, such as one produced by a scalar comparison. Consider a DataFrame with all Boolean values produced by comparing with a scalar value:
In [150]: data < 5
Out[150]: 
            one    two  three   four
Ohio       True   True   True   True
Colorado   True  False  False  False
Utah      False  False  False  False
New York  False  False  False  False
We can use this DataFrame to assign the value 0 to each location with the value True, like so:

In [151]: data[data < 5] = 0

In [152]: data
Out[152]: 
          one  two  three  four
Ohio        0    0      0     0
Colorado    0    5      6     7
Utah        8    9     10    11
New York   12   13     14    15
Selection on DataFrame with loc and iloc
Like Series, DataFrame has special attributes loc and iloc for label-based and integer-based indexing, respectively. Since DataFrame is two-dimensional, you can select a subset of the rows and columns with NumPy-like notation using either axis labels (loc) or integers (iloc).
As a first example, let's select a single row by label:
In [153]: data
Out[153]: 
          one  two  three  four
Ohio        0    0      0     0
Colorado    0    5      6     7
Utah        8    9     10    11
New York   12   13     14    15

In [154]: data.loc["Colorado"]
Out[154]: 
one      0
two      5
three    6
four     7
Name: Colorado, dtype: int64
The result of selecting a single row is a Series with an index that contains the DataFrame's column labels. To select multiple rows, creating a new DataFrame, pass a sequence of labels:
In [155]: data.loc[["Colorado", "New York"]]
Out[155]: 
          one  two  three  four
Colorado    0    5      6     7
New York   12   13     14    15
You can combine both row and column selection in loc by separating the selections with a comma:

In [156]: data.loc["Colorado", ["two", "three"]]
Out[156]: 
two      5
three    6
Name: Colorado, dtype: int64
We'll then perform some similar selections with integers using iloc:

In [157]: data.iloc[2]
Out[157]: 
one       8
two       9
three    10
four     11
Name: Utah, dtype: int64

In [158]: data.iloc[[2, 1]]
Out[158]: 
          one  two  three  four
Utah        8    9     10    11
Colorado    0    5      6     7

In [159]: data.iloc[2, [3, 0, 1]]
Out[159]: 
four    11
one      8
two      9
Name: Utah, dtype: int64

In [160]: data.iloc[[1, 2], [3, 0, 1]]
Out[160]: 
          four  one  two
Colorado     7    0    5
Utah        11    8    9
Both indexing functions work with slices in addition to single labels or lists of labels:
In [161]: data.loc[:"Utah", "two"]
Out[161]: 
Ohio        0
Colorado    5
Utah        9
Name: two, dtype: int64

In [162]: data.iloc[:, :3][data.three > 5]
Out[162]: 
          one  two  three
Colorado    0    5      6
Utah        8    9     10
New York   12   13     14
Boolean arrays can be used with loc but not iloc:

In [163]: data.loc[data.three >= 2]
Out[163]: 
          one  two  three  four
Colorado    0    5      6     7
Utah        8    9     10    11
New York   12   13     14    15
There are many ways to select and rearrange the data contained in a pandas object. For DataFrame, Table 5.4 provides a short summary of many of them. As you will see later, there are a number of additional options for working with hierarchical indexes.
Type | Notes |
---|---|
df[column] | Select single column or sequence of columns from the DataFrame; special case conveniences: Boolean array (filter rows), slice (slice rows), or Boolean DataFrame (set values based on some criterion) |
df.loc[rows] | Select single row or subset of rows from the DataFrame by label |
df.loc[:, cols] | Select single column or subset of columns by label |
df.loc[rows, cols] | Select both row(s) and column(s) by label |
df.iloc[rows] | Select single row or subset of rows from the DataFrame by integer position |
df.iloc[:, cols] | Select single column or subset of columns by integer position |
df.iloc[rows, cols] | Select both row(s) and column(s) by integer position |
df.at[row, col] | Select a single scalar value by row and column label |
df.iat[row, col] | Select a single scalar value by row and column position (integers) |
reindex method | Select either rows or columns by labels |
Integer indexing pitfalls
Working with pandas objects indexed by integers can be a stumbling block for new users since they work differently from built-in Python data structures like lists and tuples. For example, you might not expect the following code to generate an error:
In [164]: ser = pd.Series(np.arange(3.))

In [165]: ser
Out[165]: 
0    0.0
1    1.0
2    2.0
dtype: float64

In [166]: ser[-1]
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
~/miniforge-x86/envs/book-env/lib/python3.10/site-packages/pandas/core/indexes/range.py in get_loc(self, key)
    344             try:
--> 345                 return self._range.index(new_key)
    346             except ValueError as err:
ValueError: -1 is not in range

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
<ipython-input-166-44969a759c20> in <module>
----> 1 ser[-1]

~/miniforge-x86/envs/book-env/lib/python3.10/site-packages/pandas/core/series.py in __getitem__(self, key)
   1010 
   1011         elif key_is_scalar:
-> 1012             return self._get_value(key)
   1013 
   1014         if is_hashable(key):

~/miniforge-x86/envs/book-env/lib/python3.10/site-packages/pandas/core/series.py in _get_value(self, label, takeable)
   1119 
   1120         # Similar to Index.get_value, but we do not fall back to positional
-> 1121         loc = self.index.get_loc(label)
   1122 
   1123         if is_integer(loc):

~/miniforge-x86/envs/book-env/lib/python3.10/site-packages/pandas/core/indexes/range.py in get_loc(self, key)
    345                 return self._range.index(new_key)
    346             except ValueError as err:
--> 347                 raise KeyError(key) from err
    348         self._check_indexing_error(key)
    349         raise KeyError(key)

KeyError: -1
In this case, pandas could "fall back" on integer indexing, but it is difficult to do this in general without introducing subtle bugs into the user code. Here we have an index containing 0, 1, and 2, but pandas does not want to guess what the user wants (label-based indexing or position-based):

In [167]: ser
Out[167]: 
0    0.0
1    1.0
2    2.0
dtype: float64
On the other hand, with a noninteger index, there is no such ambiguity:
In [168]: ser2 = pd.Series(np.arange(3.), index=["a", "b", "c"])

In [169]: ser2[-1]
Out[169]: 2.0
If you have an axis index containing integers, data selection will always be label oriented. As I said above, if you use loc (for labels) or iloc (for integers) you will get exactly what you want:

In [170]: ser.iloc[-1]
Out[170]: 2.0
On the other hand, slicing with integers is always integer oriented:
In [171]: ser[:2]
Out[171]: 
0    0.0
1    1.0
dtype: float64
As a result of these pitfalls, it is best to always prefer indexing with loc and iloc to avoid ambiguity.
Pitfalls with chained indexing
In the previous section we looked at how you can do flexible selections on a DataFrame using loc and iloc. These indexing attributes can also be used to modify DataFrame objects in place, but doing so requires some care.
For example, in the example DataFrame above, we can assign to a column or row by label or integer position:
In [172]: data.loc[:, "one"] = 1

In [173]: data
Out[173]: 
          one  two  three  four
Ohio        1    0      0     0
Colorado    1    5      6     7
Utah        1    9     10    11
New York    1   13     14    15

In [174]: data.iloc[2] = 5

In [175]: data
Out[175]: 
          one  two  three  four
Ohio        1    0      0     0
Colorado    1    5      6     7
Utah        5    5      5     5
New York    1   13     14    15

In [176]: data.loc[data["four"] > 5] = 3

In [177]: data
Out[177]: 
          one  two  three  four
Ohio        1    0      0     0
Colorado    3    3      3     3
Utah        5    5      5     5
New York    3    3      3     3
A common gotcha for new pandas users is to chain selections when assigning, like this:
In [177]: data.loc[data.three == 5]["three"] = 6
<ipython-input-11-0ed1cf2155d5>:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
Depending on the data contents, this may print a special SettingWithCopyWarning, which warns you that you are trying to modify a temporary value (the nonempty result of data.loc[data.three == 5]) instead of the original DataFrame data, which might be what you were intending. Here, data was unmodified:

In [179]: data
Out[179]: 
          one  two  three  four
Ohio        1    0      0     0
Colorado    3    3      3     3
Utah        5    5      5     5
New York    3    3      3     3
In these scenarios, the fix is to rewrite the chained assignment to use a single loc operation:

In [180]: data.loc[data.three == 5, "three"] = 6

In [181]: data
Out[181]: 
          one  two  three  four
Ohio        1    0      0     0
Colorado    3    3      3     3
Utah        5    5      6     5
New York    3    3      3     3
A good rule of thumb is to avoid chained indexing when doing assignments. There are other cases where pandas will generate SettingWithCopyWarning that have to do with chained indexing. I refer you to this topic in the online pandas documentation.
Arithmetic and Data Alignment
pandas can make it much simpler to work with objects that have different indexes. For example, when you add objects, if any index pairs are not the same, the respective index in the result will be the union of the index pairs. Let’s look at an example:
In [182]: s1 = pd.Series([7.3, -2.5, 3.4, 1.5], index=["a", "c", "d", "e"])

In [183]: s2 = pd.Series([-2.1, 3.6, -1.5, 4, 3.1],
   .....:                index=["a", "c", "e", "f", "g"])

In [184]: s1
Out[184]: 
a    7.3
c   -2.5
d    3.4
e    1.5
dtype: float64

In [185]: s2
Out[185]: 
a   -2.1
c    3.6
e   -1.5
f    4.0
g    3.1
dtype: float64
Adding these yields:
In [186]: s1 + s2
Out[186]: 
a    5.2
c    1.1
d    NaN
e    0.0
f    NaN
g    NaN
dtype: float64
The internal data alignment introduces missing values in the label locations that don’t overlap. Missing values will then propagate in further arithmetic computations.
In the case of DataFrame, alignment is performed on both rows and columns:
In [187]: df1 = pd.DataFrame(np.arange(9.).reshape((3, 3)), columns=list("bcd"),
   .....:                    index=["Ohio", "Texas", "Colorado"])

In [188]: df2 = pd.DataFrame(np.arange(12.).reshape((4, 3)), columns=list("bde"),
   .....:                    index=["Utah", "Ohio", "Texas", "Oregon"])

In [189]: df1
Out[189]: 
            b    c    d
Ohio      0.0  1.0  2.0
Texas     3.0  4.0  5.0
Colorado  6.0  7.0  8.0

In [190]: df2
Out[190]: 
          b     d     e
Utah    0.0   1.0   2.0
Ohio    3.0   4.0   5.0
Texas   6.0   7.0   8.0
Oregon  9.0  10.0  11.0
Adding these returns a DataFrame with index and columns that are the unions of the ones in each DataFrame:
In [191]: df1 + df2
Out[191]: 
            b   c     d   e
Colorado  NaN NaN   NaN NaN
Ohio      3.0 NaN   6.0 NaN
Oregon    NaN NaN   NaN NaN
Texas     9.0 NaN  12.0 NaN
Utah      NaN NaN   NaN NaN

Since the "c" and "e" columns are not found in both DataFrame objects, they appear as missing in the result. The same holds for the rows with labels that are not common to both objects.
If you add DataFrame objects with no column or row labels in common, the result will contain all nulls:
In [192]: df1 = pd.DataFrame({"A": [1, 2]})

In [193]: df2 = pd.DataFrame({"B": [3, 4]})

In [194]: df1
Out[194]: 
   A
0  1
1  2

In [195]: df2
Out[195]: 
   B
0  3
1  4

In [196]: df1 + df2
Out[196]: 
    A   B
0 NaN NaN
1 NaN NaN
Arithmetic methods with fill values
In arithmetic operations between differently indexed objects, you might want to fill with a special value, like 0, when an axis label is found in one object but not the other. Here is an example where we set a particular value to NA (null) by assigning np.nan to it:

In [197]: df1 = pd.DataFrame(np.arange(12.).reshape((3, 4)),
   .....:                    columns=list("abcd"))

In [198]: df2 = pd.DataFrame(np.arange(20.).reshape((4, 5)),
   .....:                    columns=list("abcde"))

In [199]: df2.loc[1, "b"] = np.nan

In [200]: df1
Out[200]: 
     a    b     c     d
0  0.0  1.0   2.0   3.0
1  4.0  5.0   6.0   7.0
2  8.0  9.0  10.0  11.0

In [201]: df2
Out[201]: 
      a     b     c     d     e
0   0.0   1.0   2.0   3.0   4.0
1   5.0   NaN   7.0   8.0   9.0
2  10.0  11.0  12.0  13.0  14.0
3  15.0  16.0  17.0  18.0  19.0
Adding these results in missing values in the locations that don’t overlap:
In [202]: df1 + df2
Out[202]: 
      a     b     c     d   e
0   0.0   2.0   4.0   6.0 NaN
1   9.0   NaN  13.0  15.0 NaN
2  18.0  20.0  22.0  24.0 NaN
3   NaN   NaN   NaN   NaN NaN
Using the add method on df1, I pass df2 and an argument to fill_value, which substitutes the passed value for any missing values in the operation:

In [203]: df1.add(df2, fill_value=0)
Out[203]: 
      a     b     c     d     e
0   0.0   2.0   4.0   6.0   4.0
1   9.0   5.0  13.0  15.0   9.0
2  18.0  20.0  22.0  24.0  14.0
3  15.0  16.0  17.0  18.0  19.0
See Table 5.5 for a listing of Series and DataFrame methods for arithmetic. Each has a counterpart, starting with the letter r, that has arguments reversed. So these two statements are equivalent:

In [204]: 1 / df1
Out[204]: 
       a         b         c         d
0    inf  1.000000  0.500000  0.333333
1  0.250  0.200000  0.166667  0.142857
2  0.125  0.111111  0.100000  0.090909

In [205]: df1.rdiv(1)
Out[205]: 
       a         b         c         d
0    inf  1.000000  0.500000  0.333333
1  0.250  0.200000  0.166667  0.142857
2  0.125  0.111111  0.100000  0.090909
Relatedly, when reindexing a Series or DataFrame, you can also specify a different fill value:
In [206]: df1.reindex(columns=df2.columns, fill_value=0)
Out[206]: 
     a    b     c     d  e
0  0.0  1.0   2.0   3.0  0
1  4.0  5.0   6.0   7.0  0
2  8.0  9.0  10.0  11.0  0
Method | Description |
---|---|
add, radd | Methods for addition (+) |
sub, rsub | Methods for subtraction (-) |
div, rdiv | Methods for division (/) |
floordiv, rfloordiv | Methods for floor division (//) |
mul, rmul | Methods for multiplication (*) |
pow, rpow | Methods for exponentiation (**) |
Operations between DataFrame and Series
As with NumPy arrays of different dimensions, arithmetic between DataFrame and Series is also defined. First, as a motivating example, consider the difference between a two-dimensional array and one of its rows:
In [207]: arr = np.arange(12.).reshape((3, 4))

In [208]: arr
Out[208]: 
array([[ 0.,  1.,  2.,  3.],
       [ 4.,  5.,  6.,  7.],
       [ 8.,  9., 10., 11.]])

In [209]: arr[0]
Out[209]: array([0., 1., 2., 3.])

In [210]: arr - arr[0]
Out[210]: 
array([[0., 0., 0., 0.],
       [4., 4., 4., 4.],
       [8., 8., 8., 8.]])
When we subtract arr[0] from arr, the subtraction is performed once for each row. This is referred to as broadcasting and is explained in more detail as it relates to general NumPy arrays in Appendix A: Advanced NumPy. Operations between a DataFrame and a Series are similar:
In [211]: frame = pd.DataFrame(np.arange(12.).reshape((4, 3)),
   .....:                      columns=list("bde"),
   .....:                      index=["Utah", "Ohio", "Texas", "Oregon"])

In [212]: series = frame.iloc[0]

In [213]: frame
Out[213]: 
          b     d     e
Utah    0.0   1.0   2.0
Ohio    3.0   4.0   5.0
Texas   6.0   7.0   8.0
Oregon  9.0  10.0  11.0

In [214]: series
Out[214]: 
b    0.0
d    1.0
e    2.0
Name: Utah, dtype: float64
By default, arithmetic between DataFrame and Series matches the index of the Series on the columns of the DataFrame, broadcasting down the rows:
In [215]: frame - series
Out[215]: 
          b    d    e
Utah    0.0  0.0  0.0
Ohio    3.0  3.0  3.0
Texas   6.0  6.0  6.0
Oregon  9.0  9.0  9.0
If an index value is not found in either the DataFrame’s columns or the Series’s index, the objects will be reindexed to form the union:
In [216]: series2 = pd.Series(np.arange(3), index=["b", "e", "f"])

In [217]: series2
Out[217]: 
b    0
e    1
f    2
dtype: int64

In [218]: frame + series2
Out[218]: 
          b   d     e   f
Utah    0.0 NaN   3.0 NaN
Ohio    3.0 NaN   6.0 NaN
Texas   6.0 NaN   9.0 NaN
Oregon  9.0 NaN  12.0 NaN
If you want to instead broadcast over the columns, matching on the rows, you have to use one of the arithmetic methods and specify to match over the index. For example:
In [219]: series3 = frame["d"]

In [220]: frame
Out[220]: 
          b     d     e
Utah    0.0   1.0   2.0
Ohio    3.0   4.0   5.0
Texas   6.0   7.0   8.0
Oregon  9.0  10.0  11.0

In [221]: series3
Out[221]: 
Utah       1.0
Ohio       4.0
Texas      7.0
Oregon    10.0
Name: d, dtype: float64

In [222]: frame.sub(series3, axis="index")
Out[222]: 
          b    d    e
Utah   -1.0  0.0  1.0
Ohio   -1.0  0.0  1.0
Texas  -1.0  0.0  1.0
Oregon -1.0  0.0  1.0

The axis that you pass is the axis to match on. In this case we mean to match on the DataFrame's row index (axis="index") and broadcast across the columns.
Function Application and Mapping
NumPy ufuncs (element-wise array methods) also work with pandas objects:
In [223]: frame = pd.DataFrame(np.random.standard_normal((4, 3)),
   .....:                      columns=list("bde"),
   .....:                      index=["Utah", "Ohio", "Texas", "Oregon"])

In [224]: frame
Out[224]: 
               b         d         e
Utah   -0.204708  0.478943 -0.519439
Ohio   -0.555730  1.965781  1.393406
Texas   0.092908  0.281746  0.769023
Oregon  1.246435  1.007189 -1.296221

In [225]: np.abs(frame)
Out[225]: 
               b         d         e
Utah    0.204708  0.478943  0.519439
Ohio    0.555730  1.965781  1.393406
Texas   0.092908  0.281746  0.769023
Oregon  1.246435  1.007189  1.296221
Another frequent operation is applying a function on one-dimensional arrays to each column or row. DataFrame's apply method does exactly this:

In [226]: def f1(x):
   .....:     return x.max() - x.min()

In [227]: frame.apply(f1)
Out[227]: 
b    1.802165
d    1.684034
e    2.689627
dtype: float64
Here the function f1, which computes the difference between the maximum and minimum of a Series, is invoked once on each column in frame. The result is a Series having the columns of frame as its index.
If you pass axis="columns" to apply, the function will be invoked once per row instead. A helpful way to think about this is as "apply across the columns":

In [228]: frame.apply(f1, axis="columns")
Out[228]: 
Utah      0.998382
Ohio      2.521511
Texas     0.676115
Oregon    2.542656
dtype: float64
Many of the most common array statistics (like sum and mean) are DataFrame methods, so using apply is not necessary.
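For example, the result of frame.apply(f1) above could be computed directly with the built-in reductions:

frame.max() - frame.min()  # same result as frame.apply(f1)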
The function passed to apply need not return a scalar value; it can also return a Series with multiple values:

In [229]: def f2(x):
   .....:     return pd.Series([x.min(), x.max()], index=["min", "max"])

In [230]: frame.apply(f2)
Out[230]: 
            b         d         e
min -0.555730  0.281746 -1.296221
max  1.246435  1.965781  1.393406
Element-wise Python functions can be used, too. Suppose you wanted to compute a formatted string from each floating-point value in frame. You can do this with applymap:

In [231]: def my_format(x):
   .....:     return f"{x:.2f}"

In [232]: frame.applymap(my_format)
Out[232]: 
            b     d      e
Utah    -0.20  0.48  -0.52
Ohio    -0.56  1.97   1.39
Texas    0.09  0.28   0.77
Oregon   1.25  1.01  -1.30
The reason for the name applymap is that Series has a map method for applying an element-wise function:

In [233]: frame["e"].map(my_format)
Out[233]: 
Utah      -0.52
Ohio       1.39
Texas      0.77
Oregon    -1.30
Name: e, dtype: object
Sorting and Ranking
Sorting a dataset by some criterion is another important built-in operation. To sort lexicographically by row or column label, use the sort_index method, which returns a new, sorted object:

In [234]: obj = pd.Series(np.arange(4), index=["d", "a", "b", "c"])

In [235]: obj
Out[235]: 
d    0
a    1
b    2
c    3
dtype: int64

In [236]: obj.sort_index()
Out[236]: 
a    1
b    2
c    3
d    0
dtype: int64
With a DataFrame, you can sort by index on either axis:
In [237]: frame = pd.DataFrame(np.arange(8).reshape((2, 4)),
   .....:                      index=["three", "one"],
   .....:                      columns=["d", "a", "b", "c"])

In [238]: frame
Out[238]: 
       d  a  b  c
three  0  1  2  3
one    4  5  6  7

In [239]: frame.sort_index()
Out[239]: 
       d  a  b  c
one    4  5  6  7
three  0  1  2  3

In [240]: frame.sort_index(axis="columns")
Out[240]: 
       a  b  c  d
three  1  2  3  0
one    5  6  7  4
The data is sorted in ascending order by default but can be sorted in descending order, too:
In [241]: frame.sort_index(axis="columns", ascending=False)
Out[241]: 
       d  c  b  a
three  0  3  2  1
one    4  7  6  5
To sort a Series by its values, use its sort_values method:

In [242]: obj = pd.Series([4, 7, -3, 2])

In [243]: obj.sort_values()
Out[243]: 
2   -3
3    2
0    4
1    7
dtype: int64
Any missing values are sorted to the end of the Series by default:
In [244]: obj = pd.Series([4, np.nan, 7, np.nan, -3, 2])

In [245]: obj.sort_values()
Out[245]: 
4   -3.0
5    2.0
0    4.0
2    7.0
1    NaN
3    NaN
dtype: float64
Missing values can be sorted to the start instead by using the na_position option:

In [246]: obj.sort_values(na_position="first")
Out[246]: 
1    NaN
3    NaN
4   -3.0
5    2.0
0    4.0
2    7.0
dtype: float64
When sorting a DataFrame, you can use the data in one or more columns as the sort keys. To do so, pass one or more column names to sort_values:

In [247]: frame = pd.DataFrame({"b": [4, 7, -3, 2], "a": [0, 1, 0, 1]})

In [248]: frame
Out[248]: 
   b  a
0  4  0
1  7  1
2 -3  0
3  2  1

In [249]: frame.sort_values("b")
Out[249]: 
   b  a
2 -3  0
3  2  1
0  4  0
1  7  1
To sort by multiple columns, pass a list of names:
In [250]: frame.sort_values(["a", "b"])
Out[250]: 
   b  a
2 -3  0
0  4  0
3  2  1
1  7  1
Ranking assigns ranks from one through the number of valid data points in an array, starting from the lowest value. The rank methods for Series and DataFrame are the place to look; by default, rank breaks ties by assigning each group the mean rank:

In [251]: obj = pd.Series([7, -5, 7, 4, 2, 0, 4])

In [252]: obj.rank()
Out[252]: 
0    6.5
1    1.0
2    6.5
3    4.5
4    3.0
5    2.0
6    4.5
dtype: float64
Ranks can also be assigned according to the order in which they’re observed in the data:
In [253]: obj.rank(method="first")
Out[253]: 
0    6.0
1    1.0
2    7.0
3    4.0
4    3.0
5    2.0
6    5.0
dtype: float64
Here, instead of using the average rank 6.5 for the entries 0 and 2, they instead have been set to 6 and 7 because label 0 precedes label 2 in the data.
You can rank in descending order, too:
In [254]: obj.rank(ascending=False)
Out[254]: 
0    1.5
1    7.0
2    1.5
3    3.5
4    5.0
5    6.0
6    3.5
dtype: float64
See Table 5.6 for a list of tie-breaking methods available.
DataFrame can compute ranks over the rows or the columns:
In [255]: frame = pd.DataFrame({"b": [4.3, 7, -3, 2], "a": [0, 1, 0, 1],
   .....:                       "c": [-2, 5, 8, -2.5]})

In [256]: frame
Out[256]: 
     b  a    c
0  4.3  0 -2.0
1  7.0  1  5.0
2 -3.0  0  8.0
3  2.0  1 -2.5

In [257]: frame.rank(axis="columns")
Out[257]: 
     b    a    c
0  3.0  2.0  1.0
1  3.0  1.0  2.0
2  1.0  2.0  3.0
3  3.0  2.0  1.0
Method | Description |
---|---|
"average" | Default: assign the average rank to each entry in the equal group |
"min" | Use the minimum rank for the whole group |
"max" | Use the maximum rank for the whole group |
"first" | Assign ranks in the order the values appear in the data |
"dense" | Like method="min", but ranks always increase by 1 between groups rather than the number of equal elements in a group |
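As a sketch of how two of these differ on the obj used above (my own example):

obj.rank(method="min")    # the tied 7s both get rank 6.0, leaving a gap in the ranks
obj.rank(method="dense")  # group ranks run 1 through 5 with no gaps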
Axis Indexes with Duplicate Labels
Up until now almost all of the examples we have looked at have unique axis labels (index values). While many pandas functions (like reindex) require that the labels be unique, it's not mandatory. Let's consider a small Series with duplicate indices:

In [258]: obj = pd.Series(np.arange(5), index=["a", "a", "b", "b", "c"])

In [259]: obj
Out[259]: 
a    0
a    1
b    2
b    3
c    4
dtype: int64
The is_unique property of the index can tell you whether or not its labels are unique:

In [260]: obj.index.is_unique
Out[260]: False
Data selection is one of the main things that behaves differently with duplicates. Indexing a label with multiple entries returns a Series, while single entries return a scalar value:
In [261]: obj["a"]
Out[261]: 
a    0
a    1
dtype: int64

In [262]: obj["c"]
Out[262]: 4
This can make your code more complicated, as the output type from indexing can vary based on whether or not a label is repeated.
The same logic extends to indexing rows (or columns) in a DataFrame:
In [263]: df = pd.DataFrame(np.random.standard_normal((5, 3)),
   .....:                   index=["a", "a", "b", "b", "c"])

In [264]: df
Out[264]: 
          0         1         2
a  0.274992  0.228913  1.352917
a  0.886429 -2.001637 -0.371843
b  1.669025 -0.438570 -0.539741
b  0.476985  3.248944 -1.021228
c -0.577087  0.124121  0.302614

In [265]: df.loc["b"]
Out[265]: 
          0         1         2
b  1.669025 -0.438570 -0.539741
b  0.476985  3.248944 -1.021228

In [266]: df.loc["c"]
Out[266]: 
0   -0.577087
1    0.124121
2    0.302614
Name: c, dtype: float64
5.3 Summarizing and Computing Descriptive Statistics
pandas objects are equipped with a set of common mathematical and statistical methods. Most of these fall into the category of reductions or summary statistics, methods that extract a single value (like the sum or mean) from a Series, or a Series of values from the rows or columns of a DataFrame. Compared with the similar methods found on NumPy arrays, they have built-in handling for missing data. Consider a small DataFrame:
In [267]: df = pd.DataFrame([[1.4, np.nan], [7.1, -4.5],
   .....:                    [np.nan, np.nan], [0.75, -1.3]],
   .....:                   index=["a", "b", "c", "d"],
   .....:                   columns=["one", "two"])

In [268]: df
Out[268]: 
    one  two
a  1.40  NaN
b  7.10 -4.5
c   NaN  NaN
d  0.75 -1.3
Calling DataFrame's sum method returns a Series containing column sums:

In [269]: df.sum()
Out[269]: 
one    9.25
two   -5.80
dtype: float64
Passing axis="columns" or axis=1 sums across the columns instead:

In [270]: df.sum(axis="columns")
Out[270]: 
a    1.40
b    2.60
c    0.00
d   -0.55
dtype: float64
When an entire row or column contains all NA values, the sum is 0. This can be disabled with the skipna option, in which case any NA value in a row or column makes the corresponding result NA:
In [271]: df.sum(axis="index", skipna=False)
Out[271]: 
one   NaN
two   NaN
dtype: float64

In [272]: df.sum(axis="columns", skipna=False)
Out[272]: 
a     NaN
b    2.60
c     NaN
d   -0.55
dtype: float64
Some aggregations, like mean, require at least one non-NA value to yield a value result, so here we have:

In [273]: df.mean(axis="columns")
Out[273]: 
a    1.400
b    1.300
c      NaN
d   -0.275
dtype: float64
See Table 5.7 for a list of common options for each reduction method.
Method | Description |
---|---|
axis | Axis to reduce over; "index" for DataFrame's rows and "columns" for columns |
skipna | Exclude missing values; True by default |
level | Reduce grouped by level if the axis is hierarchically indexed (MultiIndex) |
Some methods, like idxmin and idxmax, return indirect statistics, like the index value where the minimum or maximum values are attained:

In [274]: df.idxmax()
Out[274]: 
one    b
two    d
dtype: object
Other methods are accumulations:
In [275]: df.cumsum()
Out[275]: 
    one  two
a  1.40  NaN
b  8.50 -4.5
c   NaN  NaN
d  9.25 -5.8
Some methods are neither reductions nor accumulations. describe is one such example, producing multiple summary statistics in one shot:

In [276]: df.describe()
Out[276]: 
            one       two
count  3.000000  2.000000
mean   3.083333 -2.900000
std    3.493685  2.262742
min    0.750000 -4.500000
25%    1.075000 -3.700000
50%    1.400000 -2.900000
75%    4.250000 -2.100000
max    7.100000 -1.300000
On nonnumeric data, describe produces alternative summary statistics:

In [277]: obj = pd.Series(["a", "a", "b", "c"] * 4)

In [278]: obj.describe()
Out[278]: 
count     16
unique     3
top        a
freq       8
dtype: object
See Table 5.8 for a full list of summary statistics and related methods.
Method | Description |
---|---|
count | Number of non-NA values |
describe | Compute set of summary statistics |
min, max | Compute minimum and maximum values |
argmin, argmax | Compute index locations (integers) at which minimum or maximum value is obtained, respectively; not available on DataFrame objects |
idxmin, idxmax | Compute index labels at which minimum or maximum value is obtained, respectively |
quantile | Compute sample quantile ranging from 0 to 1 (default: 0.5) |
sum | Sum of values |
mean | Mean of values |
median | Arithmetic median (50% quantile) of values |
mad | Mean absolute deviation from mean value |
prod | Product of all values |
var | Sample variance of values |
std | Sample standard deviation of values |
skew | Sample skewness (third moment) of values |
kurt | Sample kurtosis (fourth moment) of values |
cumsum | Cumulative sum of values |
cummin, cummax | Cumulative minimum or maximum of values, respectively |
cumprod | Cumulative product of values |
diff | Compute first arithmetic difference (useful for time series) |
pct_change | Compute percent changes |
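As a brief sketch of a few of these on the df used above (my own example):

df.cummax()              # running maximum down each column, skipping NAs
df.diff()                # difference between consecutive rows
df["one"].quantile(0.5)  # 1.4, matching the 50% row of describe above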
Correlation and Covariance
Some summary statistics, like correlation and covariance, are computed from pairs of arguments. Let’s consider some DataFrames of stock prices and volumes originally obtained from Yahoo! Finance and available in binary Python pickle files you can find in the accompanying datasets for the book:
In [279]: price = pd.read_pickle("examples/yahoo_price.pkl")

In [280]: volume = pd.read_pickle("examples/yahoo_volume.pkl")
I now compute percent changes of the prices, a time series operation that will be explored further in Ch 11: Time Series:
In [281]: returns = price.pct_change()

In [282]: returns.tail()
Out[282]: 
                AAPL      GOOG       IBM      MSFT
Date                                              
2016-10-17 -0.000680  0.001837  0.002072 -0.003483
2016-10-18 -0.000681  0.019616 -0.026168  0.007690
2016-10-19 -0.002979  0.007846  0.003583 -0.002255
2016-10-20 -0.000512 -0.005652  0.001719 -0.004867
2016-10-21 -0.003930  0.003011 -0.012474  0.042096
The corr method of Series computes the correlation of the overlapping, non-NA, aligned-by-index values in two Series. Relatedly, cov computes the covariance:

In [283]: returns["MSFT"].corr(returns["IBM"])
Out[283]: 0.49976361144151166

In [284]: returns["MSFT"].cov(returns["IBM"])
Out[284]: 8.870655479703549e-05
DataFrame's corr and cov methods, on the other hand, return a full correlation or covariance matrix as a DataFrame, respectively:

In [285]: returns.corr()
Out[285]: 
          AAPL      GOOG       IBM      MSFT
AAPL  1.000000  0.407919  0.386817  0.389695
GOOG  0.407919  1.000000  0.405099  0.465919
IBM   0.386817  0.405099  1.000000  0.499764
MSFT  0.389695  0.465919  0.499764  1.000000

In [286]: returns.cov()
Out[286]: 
          AAPL      GOOG       IBM      MSFT
AAPL  0.000277  0.000107  0.000078  0.000095
GOOG  0.000107  0.000251  0.000078  0.000108
IBM   0.000078  0.000078  0.000146  0.000089
MSFT  0.000095  0.000108  0.000089  0.000215
Using DataFrame's corrwith method, you can compute pair-wise correlations between a DataFrame's columns or rows with another Series or DataFrame. Passing a Series returns a Series with the correlation value computed for each column:

In [287]: returns.corrwith(returns["IBM"])
Out[287]: 
AAPL    0.386817
GOOG    0.405099
IBM     1.000000
MSFT    0.499764
dtype: float64
Passing a DataFrame computes the correlations of matching column names. Here, I compute correlations of percent changes with volume:
In [288]: returns.corrwith(volume)
Out[288]: 
AAPL   -0.075565
GOOG   -0.007067
IBM    -0.204849
MSFT   -0.092950
dtype: float64
Passing axis="columns" does things row-by-row instead. In all cases, the data points are aligned by label before the correlation is computed.
Unique Values, Value Counts, and Membership
Another class of related methods extracts information about the values contained in a one-dimensional Series. To illustrate these, consider this example:
In [289]: obj = pd.Series(["c", "a", "d", "a", "a", "b", "b", "c", "c"])
The first function is unique, which gives you an array of the unique values in a Series:

In [290]: uniques = obj.unique()

In [291]: uniques
Out[291]: array(['c', 'a', 'd', 'b'], dtype=object)
The unique values are not necessarily returned in the order in which they first appear, and not in sorted order, but they could be sorted after the fact if needed (uniques.sort()). Relatedly, value_counts computes a Series containing value frequencies:

In [292]: obj.value_counts()
Out[292]: 
c    3
a    3
b    2
d    1
Name: count, dtype: int64
The Series is sorted by value in descending order as a convenience. value_counts is also available as a top-level pandas method that can be used with NumPy arrays or other Python sequences:

In [293]: pd.value_counts(obj.to_numpy(), sort=False)
Out[293]: 
c    3
a    3
d    1
b    2
Name: count, dtype: int64
isin performs a vectorized set membership check and can be useful in filtering a dataset down to a subset of values in a Series or column in a DataFrame:

In [294]: obj
Out[294]: 
0    c
1    a
2    d
3    a
4    a
5    b
6    b
7    c
8    c
dtype: object

In [295]: mask = obj.isin(["b", "c"])

In [296]: mask
Out[296]: 
0     True
1    False
2    False
3    False
4    False
5     True
6     True
7     True
8     True
dtype: bool

In [297]: obj[mask]
Out[297]: 
0    c
5    b
6    b
7    c
8    c
dtype: object
Related to isin is the Index.get_indexer method, which gives you an index array from an array of possibly nondistinct values into another array of distinct values:

In [298]: to_match = pd.Series(["c", "a", "b", "b", "c", "a"])

In [299]: unique_vals = pd.Series(["c", "b", "a"])

In [300]: indices = pd.Index(unique_vals).get_indexer(to_match)

In [301]: indices
Out[301]: array([0, 2, 1, 1, 0, 2])
See Table 5.9 for a reference on these methods.
Method | Description |
---|---|
isin | Compute a Boolean array indicating whether each Series or DataFrame value is contained in the passed sequence of values |
get_indexer | Compute integer indices for each value in an array into another array of distinct values; helpful for data alignment and join-type operations |
unique | Compute an array of unique values in a Series, returned in the order observed |
value_counts | Return a Series containing unique values as its index and frequencies as its values, ordered count in descending order |
In some cases, you may want to compute a histogram on multiple related columns in a DataFrame. Here’s an example:
In [302]: data = pd.DataFrame({"Qu1": [1, 3, 4, 3, 4],
   .....:                      "Qu2": [2, 3, 1, 2, 3],
   .....:                      "Qu3": [1, 5, 2, 4, 4]})

In [303]: data
Out[303]: 
   Qu1  Qu2  Qu3
0    1    2    1
1    3    3    5
2    4    1    2
3    3    2    4
4    4    3    4
We can compute the value counts for a single column, like so:
In [304]: data["Qu1"].value_counts().sort_index()
Out[304]: 
Qu1
1    1
3    2
4    2
Name: count, dtype: int64
To compute this for all columns, pass pandas.value_counts to the DataFrame's apply method:

In [305]: result = data.apply(pd.value_counts).fillna(0)

In [306]: result
Out[306]: 
   Qu1  Qu2  Qu3
1  1.0  1.0  1.0
2  0.0  2.0  1.0
3  2.0  2.0  0.0
4  2.0  0.0  2.0
5  0.0  0.0  1.0
Here, the row labels in the result are the distinct values occurring in all of the columns. The values are the respective counts of these values in each column.
There is also a DataFrame.value_counts method, but it computes counts considering each row of the DataFrame as a tuple to determine the number of occurrences of each distinct row:

In [307]: data = pd.DataFrame({"a": [1, 1, 1, 2, 2], "b": [0, 0, 1, 0, 0]})

In [308]: data
Out[308]: 
   a  b
0  1  0
1  1  0
2  1  1
3  2  0
4  2  0

In [309]: data.value_counts()
Out[309]: 
a  b
1  0    2
2  0    2
1  1    1
Name: count, dtype: int64
In this case, the result has an index representing the distinct rows as a hierarchical index, a topic we will explore in greater detail in Ch 8: Data Wrangling: Join, Combine, and Reshape.
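As a sketch (my own example), that hierarchical index can be inspected or flattened back into regular columns:

counts = data.value_counts()
counts.index          # a MultiIndex with one level per column ("a" and "b")
counts.reset_index()  # a DataFrame of the distinct rows plus a "count" column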
5.4 Conclusion
In the next chapter, we will discuss tools for reading (or loading) and writing datasets with pandas. After that, we will dig deeper into data cleaning, wrangling, analysis, and visualization tools using pandas.