Split-apply-combine on data, with examples: missing values, random sampling, weighted averages, and linear regression
Split-apply-combine
- analogous to MapReduce in big data systems
The most general-purpose GroupBy method is apply, which is the subject of the rest of this section. As illustrated in Figure 10-2, apply splits the object being manipulated into pieces, invokes the passed function on each piece, and then attempts to concatenate the pieces together.
Returning to the tipping dataset from before, suppose you wanted to select the top five tip_pct values by group. First, write a function that selects the rows with the largest values in a particular column:
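A minimal sketch of such a function. Since the real tips dataset is loaded from a CSV, a small stand-in DataFrame is used here for illustration:

```python
import pandas as pd

# Small stand-in for the tips dataset (the real one is loaded from tips.csv)
tips = pd.DataFrame({
    "total_bill": [16.99, 10.34, 11.61, 3.07, 9.60, 7.25],
    "tip": [1.01, 1.66, 3.39, 1.00, 4.00, 5.15],
    "smoker": ["No", "No", "No", "Yes", "Yes", "Yes"],
})
tips["tip_pct"] = tips["tip"] / tips["total_bill"]

def top(df, n=5, column="tip_pct"):
    # Select the n rows with the largest values in `column`
    return df.sort_values(column, ascending=False)[:n]

print(top(tips, n=6))
```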
|   | total_bill | tip | smoker | day | time | size |
|---|---|---|---|---|---|---|
| 0 | 16.99 | 1.01 | No | Sun | Dinner | 2 |
| 1 | 10.34 | 1.66 | No | Sun | Dinner | 3 |
|   | total_bill | tip | smoker | day | time | size | tip_pct |
|---|---|---|---|---|---|---|---|
| 109 | 14.31 | 4.00 | Yes | Sat | Dinner | 2 | 0.279525 |
| 183 | 23.17 | 6.50 | Yes | Sun | Dinner | 4 | 0.280535 |
| 232 | 11.61 | 3.39 | No | Sat | Dinner | 2 | 0.291990 |
| 67 | 3.07 | 1.00 | Yes | Sat | Dinner | 1 | 0.325733 |
| 178 | 9.60 | 4.00 | Yes | Sun | Dinner | 2 | 0.416667 |
| 172 | 7.25 | 5.15 | Yes | Sun | Dinner | 2 | 0.710345 |
Now, if we group by smoker, say, and call apply with this function, we get the following:
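With the same stand-in data, grouping and applying the `top` function looks like this (the row index becomes the inner level of a hierarchical index):

```python
import pandas as pd

tips = pd.DataFrame({
    "total_bill": [16.99, 10.34, 11.61, 3.07, 9.60, 7.25],
    "tip": [1.01, 1.66, 3.39, 1.00, 4.00, 5.15],
    "smoker": ["No", "No", "No", "Yes", "Yes", "Yes"],
})
tips["tip_pct"] = tips["tip"] / tips["total_bill"]

def top(df, n=5, column="tip_pct"):
    return df.sort_values(column, ascending=False)[:n]

# top is called on each group; the pieces are glued back together
result = tips.groupby("smoker").apply(top, n=2)
print(result)
```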
| smoker |   | total_bill | tip | smoker | day | time | size | tip_pct |
|---|---|---|---|---|---|---|---|---|
| No | 88 | 24.71 | 5.85 | No | Thur | Lunch | 2 | 0.236746 |
|    | 185 | 20.69 | 5.00 | No | Sun | Dinner | 5 | 0.241663 |
|    | 51 | 10.29 | 2.60 | No | Sun | Dinner | 2 | 0.252672 |
|    | 149 | 7.51 | 2.00 | No | Thur | Lunch | 2 | 0.266312 |
|    | 232 | 11.61 | 3.39 | No | Sat | Dinner | 2 | 0.291990 |
| Yes | 109 | 14.31 | 4.00 | Yes | Sat | Dinner | 2 | 0.279525 |
|    | 183 | 23.17 | 6.50 | Yes | Sun | Dinner | 4 | 0.280535 |
|    | 67 | 3.07 | 1.00 | Yes | Sat | Dinner | 1 | 0.325733 |
|    | 178 | 9.60 | 4.00 | Yes | Sun | Dinner | 2 | 0.416667 |
|    | 172 | 7.25 | 5.15 | Yes | Sun | Dinner | 2 | 0.710345 |
What has happened here? The top function is called on each row group from the DataFrame (much like operating on one RDD partition at a time), and then the results are glued together using pandas.concat, labeling the pieces with the group names. The result therefore has a hierarchical index whose inner level contains index values from the original DataFrame.
If you pass a function to apply that takes other arguments or keywords, you can pass these after the function:
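For instance, to get the single row with the largest total_bill within each smoker/day group (again on a small stand-in for the tips data), the extra keyword arguments are listed after the function itself:

```python
import pandas as pd

# Stand-in data with a day column added
tips = pd.DataFrame({
    "total_bill": [16.99, 48.33, 22.75, 40.17, 50.81, 45.35],
    "tip": [1.01, 9.00, 3.25, 4.73, 10.00, 3.50],
    "smoker": ["No", "No", "No", "Yes", "Yes", "Yes"],
    "day": ["Sun", "Sat", "Fri", "Fri", "Sat", "Sun"],
})
tips["tip_pct"] = tips["tip"] / tips["total_bill"]

def top(df, n=5, column="tip_pct"):
    return df.sort_values(column, ascending=False)[:n]

# Arguments after the function are forwarded to it on every call
result = tips.groupby(["smoker", "day"]).apply(top, n=1, column="total_bill")
print(result)
```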
| smoker | day |   | total_bill | tip | smoker | day | time | size | tip_pct |
|---|---|---|---|---|---|---|---|---|---|
| No | Fri | 94 | 22.75 | 3.25 | No | Fri | Dinner | 2 | 0.142857 |
|    | Sat | 212 | 48.33 | 9.00 | No | Sat | Dinner | 4 | 0.186220 |
|    | Sun | 156 | 48.17 | 5.00 | No | Sun | Dinner | 6 | 0.103799 |
|    | Thur | 142 | 41.19 | 5.00 | No | Thur | Lunch | 5 | 0.121389 |
| Yes | Fri | 95 | 40.17 | 4.73 | Yes | Fri | Dinner | 4 | 0.117750 |
|    | Sat | 170 | 50.81 | 10.00 | Yes | Sat | Dinner | 3 | 0.196812 |
|    | Sun | 182 | 45.35 | 3.50 | Yes | Sun | Dinner | 3 | 0.077178 |
|    | Thur | 197 | 43.11 | 5.00 | Yes | Thur | Lunch | 4 | 0.115982 |
Beyond these basic usage mechanics, getting the most out of apply may require some creativity. What occurs inside the function passed is up to you; it only needs to return a pandas object or a scalar value. The rest of this chapter will mainly consist of examples showing you how to solve various problems using groupby.
You can define any custom function you like; as long as it returns a DataFrame (or another pandas object), the result can then be fed into further groupby operations.
You may recall that I earlier called describe on a GroupBy object:
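With a stand-in for the tips data, that call looks like this:

```python
import pandas as pd

# Small stand-in for the tips dataset
tips = pd.DataFrame({
    "tip_pct": [0.0594, 0.1605, 0.2920, 0.3257, 0.4167, 0.7103],
    "smoker": ["No", "No", "No", "Yes", "Yes", "Yes"],
})
grouped = tips.groupby("smoker")
# Summary statistics of tip_pct, computed within each smoker group
result = grouped["tip_pct"].describe()
print(result)
```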
| smoker | count | mean | std | min | 25% | 50% | 75% | max |
|---|---|---|---|---|---|---|---|---|
| No | 151.0 | 0.159328 | 0.039910 | 0.056797 | 0.136906 | 0.155625 | 0.185014 | 0.291990 |
| Yes | 93.0 | 0.163196 | 0.085119 | 0.035638 | 0.106771 | 0.153846 | 0.195059 | 0.710345 |
Inside GroupBy, when you invoke a method like describe, it's actually just a shortcut for:
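That is, a plain function passed to apply that calls describe on each chunk (shown here on a small stand-in for the tips data):

```python
import pandas as pd

tips = pd.DataFrame({
    "tip_pct": [0.0594, 0.1605, 0.2920, 0.3257, 0.4167, 0.7103],
    "smoker": ["No", "No", "No", "Yes", "Yes", "Yes"],
})
grouped = tips.groupby("smoker")

def f(group):
    return group.describe()

# grouped.describe() is essentially a shortcut for this apply call
result = grouped.apply(f)
print(result)
```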
Suppressing the group keys
- group_keys=False
In the preceding examples, you see that the resulting object has a hierarchical index formed from the group keys along with the indexes of each piece of the original object. You can disable this by passing group_keys=False to groupby.
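With group_keys=False, the result keeps only the original row index (stand-in data again):

```python
import pandas as pd

tips = pd.DataFrame({
    "tip_pct": [0.0594, 0.1605, 0.2920, 0.3257, 0.4167, 0.7103],
    "smoker": ["No", "No", "No", "Yes", "Yes", "Yes"],
})

def top(df, n=5, column="tip_pct"):
    return df.sort_values(column, ascending=False)[:n]

# group_keys=False suppresses the outer smoker index level
result = tips.groupby("smoker", group_keys=False).apply(top, n=2)
print(result)
```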
|   | total_bill | tip | smoker | day | time | size | tip_pct |
|---|---|---|---|---|---|---|---|
| 88 | 24.71 | 5.85 | No | Thur | Lunch | 2 | 0.236746 |
| 185 | 20.69 | 5.00 | No | Sun | Dinner | 5 | 0.241663 |
| 51 | 10.29 | 2.60 | No | Sun | Dinner | 2 | 0.252672 |
| 149 | 7.51 | 2.00 | No | Thur | Lunch | 2 | 0.266312 |
| 232 | 11.61 | 3.39 | No | Sat | Dinner | 2 | 0.291990 |
| 109 | 14.31 | 4.00 | Yes | Sat | Dinner | 2 | 0.279525 |
| 183 | 23.17 | 6.50 | Yes | Sun | Dinner | 4 | 0.280535 |
| 67 | 3.07 | 1.00 | Yes | Sat | Dinner | 1 | 0.325733 |
| 178 | 9.60 | 4.00 | Yes | Sun | Dinner | 2 | 0.416667 |
| 172 | 7.25 | 5.15 | Yes | Sun | Dinner | 2 | 0.710345 |
Quantile and bucket analysis
- cut, qcut
As you may recall from Chapter 8, pandas has some tools, in particular cut and qcut, for slicing data up into buckets with bins of your choosing or by sample quantiles. Combining these functions with groupby makes it convenient to perform bucket or quantile analysis on a dataset. Consider a simple random dataset and an equal-length bucket categorization using cut:
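A sketch of that setup (the seed is arbitrary; the original notes use unseeded random data):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(12345)  # arbitrary seed for reproducibility
frame = pd.DataFrame({"data1": rng.standard_normal(1000),
                      "data2": rng.standard_normal(1000)})
# Four bins of equal width spanning the range of data1
quartiles = pd.cut(frame["data1"], 4)
print(quartiles.head())
```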
The Categorical object returned by cut can be passed directly to groupby. So we could compute a set of statistics for the data2 column like so:
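A sketch of that computation; note that the statistic methods must be *called* (writing `group.max` without parentheses puts the bound method itself into the output, which is what garbled the original notes' table):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(12345)
frame = pd.DataFrame({"data1": rng.standard_normal(1000),
                      "data2": rng.standard_normal(1000)})
quartiles = pd.cut(frame["data1"], 4)

def get_stats(group):
    # The parentheses matter: group.max() returns the value,
    # group.max alone is the bound method object
    return pd.Series({"count": group.count(), "max": group.max(),
                      "mean": group.mean(), "min": group.min()})

grouped = frame["data2"].groupby(quartiles, observed=True)
stats = grouped.apply(get_stats).unstack()
print(stats)
```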
| data1 | count | mean | min |
|---|---|---|---|
| (-3.714, -1.672] | 49 | -0.2432 | -2.16709 |
| (-1.672, 0.361] | 601 | -0.0253114 | -2.90659 |
| (0.361, 2.395] | 340 | 0.024466 | -3.14779 |
| (2.395, 4.429] | 10 | -0.267874 | -0.835444 |

(The max column is omitted: the original output printed `<bound method Series.max of ...>` because `group.max` was passed without being called; it should be `group.max()`.)
These were equal-length buckets; to compute equal-size buckets based on sample quantiles, use qcut. I'll pass labels=False to get just quantile numbers:
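A sketch of the qcut version (same arbitrary seed as before); each decile bucket holds exactly 100 of the 1,000 observations:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(12345)
frame = pd.DataFrame({"data1": rng.standard_normal(1000),
                      "data2": rng.standard_normal(1000)})

# labels=False returns the bucket number instead of an Interval label
grouping = pd.qcut(frame["data1"], 10, labels=False)

def get_stats(group):
    return pd.Series({"count": group.count(), "max": group.max(),
                      "mean": group.mean(), "min": group.min()})

stats = frame["data2"].groupby(grouping).apply(get_stats).unstack()
print(stats)
```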
| data1 | count | mean | min |
|---|---|---|---|
| 0 | 100 | -0.069347 | -2.25593 |
| 1 | 100 | -0.0408363 | -2.75307 |
| 2 | 100 | -0.212456 | -2.88498 |
| 3 | 100 | 0.0688246 | -2.82311 |
| 4 | 100 | 0.0401668 | -2.69601 |
| 5 | 100 | -0.12863 | -2.90659 |
| 6 | 100 | 0.108924 | -3.14779 |
| 7 | 100 | 0.0391474 | -1.8324 |
| 8 | 100 | -0.00849982 | -2.19997 |
| 9 | 100 | -0.0121871 | -2.40748 |

(The garbled max column is omitted here as well.)
Example: filling missing values with group-specific values
When cleaning up missing data, in some cases you will remove observations using dropna, but in others you may want to impute (fill in) the null (NA) values using a fixed value or some value derived from the data (or even predicted from it, e.g. with a random forest). fillna is the right tool to use; for example, here I fill in NA values with the mean.
Suppose you need the fill value to vary by group. One way to do this is to group the data and use apply with a function that calls fillna on each data chunk. Here is some sample data on US states divided into eastern and western regions:
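A sketch of the whole-series case first:

```python
import numpy as np
import pandas as pd

s = pd.Series(np.random.standard_normal(6))
s[::2] = np.nan               # knock out every other value
filled = s.fillna(s.mean())   # impute with the overall mean
print(filled)
```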
Note that the syntax ['East'] * 4 produces a list containing four copies of the elements in ['East']. Adding lists together concatenates them.
Let's set some values in the data to be missing:
We can fill the NA values using the group means like so:
In another case, you might have predefined fill values in your code that vary by group. Since the groups have a name attribute set internally, we can use that:
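The whole example can be sketched as follows (group_keys=False keeps the original state index; the fill values are arbitrary):

```python
import numpy as np
import pandas as pd

states = ["Ohio", "New York", "Vermont", "Florida",
          "Oregon", "Nevada", "California", "Idaho"]
group_key = ["East"] * 4 + ["West"] * 4
data = pd.Series(np.random.standard_normal(8), index=states)
data[["Vermont", "Nevada", "Idaho"]] = np.nan

# Fill NAs with each group's own mean
def fill_mean(group):
    return group.fillna(group.mean())
print(data.groupby(group_key, group_keys=False).apply(fill_mean))

# Fill NAs with a predefined per-group value, via the group's name attribute
fill_values = {"East": 0.5, "West": -1}
def fill_func(group):
    return group.fillna(fill_values[group.name])
print(data.groupby(group_key, group_keys=False).apply(fill_func))
```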
Example: random sampling
Suppose you wanted to draw a random sample (with or without replacement) from a large dataset for Monte Carlo simulation purposes or some other application. There are a number of ways to perform the "draws"; here we use the sample method for Series.
To demonstrate, here's a way to construct a deck of English-style playing cards:
So now we have a Series of length 52 whose index contains card names and whose values are the ones used in blackjack and other games.
Now, based on what I said before, drawing a hand of five cards from the deck could be written as:
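A sketch of the deck construction and the draw:

```python
import pandas as pd

# Hearts, Spades, Clubs, Diamonds
suits = ["H", "S", "C", "D"]
card_val = (list(range(1, 11)) + [10] * 3) * 4   # face cards count as 10
base_names = ["A"] + list(range(2, 11)) + ["J", "K", "Q"]
cards = []
for suit in suits:
    cards.extend(str(num) + suit for num in base_names)
deck = pd.Series(card_val, index=cards)

def draw(deck, n=5):
    # sample draws without replacement by default
    return deck.sample(n)

print(draw(deck))
```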
Suppose you wanted two random cards from each suit. Because the suit is the last character of each card name, we can group based on this and use apply:
Alternatively, we could write:
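Both spellings can be sketched like so, grouping on a function that extracts the suit letter from each index value:

```python
import pandas as pd

suits = ["H", "S", "C", "D"]
card_val = (list(range(1, 11)) + [10] * 3) * 4
base_names = ["A"] + list(range(2, 11)) + ["J", "K", "Q"]
cards = [str(num) + suit for suit in suits for num in base_names]
deck = pd.Series(card_val, index=cards)

def get_suit(card):
    return card[-1]   # the last letter is the suit

# With the suit as an outer index level:
print(deck.groupby(get_suit).apply(lambda d: d.sample(2)))

# Or without it:
print(deck.groupby(get_suit, group_keys=False).apply(lambda d: d.sample(2)))
```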
Example: group weighted average and correlation
Under the split-apply-combine paradigm of groupby, operations between columns in a DataFrame or between two Series, such as a group weighted average, are possible. As an example, take this dataset containing group keys, values, and some weights:
|   | category | data | weights |
|---|---|---|---|
| 0 | a | 0.434777 | 0.486455 |
| 1 | a | -2.414575 | 0.374778 |
| 2 | a | -0.682643 | 0.651142 |
| 3 | a | 0.538472 | 0.238194 |
| 4 | b | 1.001960 | 0.724147 |
| 5 | b | -2.006634 | 0.770404 |
| 6 | b | 0.162167 | 0.262188 |
| 7 | b | 0.924946 | 0.723322 |
The group weighted average by category would then be:
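A sketch of that computation, reconstructing the DataFrame from the values shown above:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "category": ["a"] * 4 + ["b"] * 4,
    "data": [0.434777, -2.414575, -0.682643, 0.538472,
             1.001960, -2.006634, 0.162167, 0.924946],
    "weights": [0.486455, 0.374778, 0.651142, 0.238194,
                0.724147, 0.770404, 0.262188, 0.723322],
})

def get_wavg(group):
    # Weighted average of data within the group, weighted by weights
    return np.average(group["data"], weights=group["weights"])

grouped = df.groupby("category")
print(grouped.apply(get_wavg))
```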
As another example, consider a financial dataset originally obtained from Yahoo! Finance containing end-of-day prices for a few stocks and the S&P 500 index.
|   | AAPL | MSFT | XOM | SPX |
|---|---|---|---|---|
| 2011-10-11 | 400.29 | 27.00 | 76.27 | 1195.54 |
| 2011-10-12 | 402.19 | 26.96 | 77.16 | 1207.25 |
| 2011-10-13 | 408.43 | 27.18 | 76.37 | 1203.66 |
| 2011-10-14 | 422.00 | 27.27 | 78.11 | 1224.58 |
One task of interest might be to compute a DataFrame consisting of the yearly correlations of daily returns with SPX. As one way to do this, we first create a function that computes the pairwise correlation of each column with the 'SPX' column:
Next, we compute percent change on close_px using pct_change:
Lastly, we group these percent changes by year, which can be extracted from each row label with a one-line function that returns the year attribute of each datetime label:
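The three steps can be sketched together. Since the real prices come from a CSV, a synthetic random-walk price table stands in for them here (so the correlation values themselves are meaningless, but the mechanics are the same):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Yahoo! Finance close prices
idx = pd.date_range("2003-01-01", "2011-12-31", freq="B")
rng = np.random.default_rng(0)
close_px = pd.DataFrame(
    100 * np.exp(rng.normal(0.0002, 0.01, (len(idx), 4)).cumsum(axis=0)),
    index=idx, columns=["AAPL", "MSFT", "XOM", "SPX"])

def spx_corr(group):
    # Pairwise correlation of every column with the SPX column
    return group.corrwith(group["SPX"])

# Daily percent changes (returns)
rets = close_px.pct_change().dropna()

def get_year(x):
    return x.year

# Group the returns by calendar year and apply spx_corr to each chunk
by_year = rets.groupby(get_year)
result = by_year.apply(spx_corr)
print(result)
```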
|   | AAPL | MSFT | XOM | SPX |
|---|---|---|---|---|
| 2003 | 0.541124 | 0.745174 | 0.661265 | 1.0 |
| 2004 | 0.374283 | 0.588531 | 0.557742 | 1.0 |
| 2005 | 0.467540 | 0.562374 | 0.631010 | 1.0 |
| 2006 | 0.428267 | 0.406126 | 0.518514 | 1.0 |
| 2007 | 0.508118 | 0.658770 | 0.786264 | 1.0 |
| 2008 | 0.681434 | 0.804626 | 0.828303 | 1.0 |
| 2009 | 0.707103 | 0.654902 | 0.797921 | 1.0 |
| 2010 | 0.710105 | 0.730118 | 0.839057 | 1.0 |
| 2011 | 0.691931 | 0.800996 | 0.859975 | 1.0 |
You could also compute inter-column correlations. Here we compute the annual correlation between Apple and Microsoft:
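A sketch, again on synthetic stand-in prices, using a lambda that correlates the two columns within each yearly chunk:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2003-01-01", "2011-12-31", freq="B")
rng = np.random.default_rng(0)
close_px = pd.DataFrame(
    100 * np.exp(rng.normal(0.0002, 0.01, (len(idx), 2)).cumsum(axis=0)),
    index=idx, columns=["AAPL", "MSFT"])
rets = close_px.pct_change().dropna()

# Yearly correlation between the AAPL and MSFT return columns
annual_corr = rets.groupby(lambda x: x.year).apply(
    lambda g: g["AAPL"].corr(g["MSFT"]))
print(annual_corr)
```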
Example: group-wise linear regression
In the same theme as the previous example, you can use groupby to perform more complex group-wise statistical analysis, as long as the function returns a pandas object or a scalar value.
For example, I can define the following regress function, which executes an ordinary least squares (OLS) regression on each chunk of data:
Now, to run a yearly linear regression of AAPL on SPX returns, execute:
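The notes use statsmodels' sm.OLS for this; the sketch below swaps in numpy's lstsq to stay dependency-free, and runs on the same synthetic stand-in prices as above (so the coefficients are meaningless, but the shape of the result matches):

```python
import numpy as np
import pandas as pd

idx = pd.date_range("2003-01-01", "2011-12-31", freq="B")
rng = np.random.default_rng(0)
close_px = pd.DataFrame(
    100 * np.exp(rng.normal(0.0002, 0.01, (len(idx), 2)).cumsum(axis=0)),
    index=idx, columns=["AAPL", "SPX"])
rets = close_px.pct_change().dropna()

def regress(data, yvar, xvars):
    # OLS via numpy's least squares (the notes use statsmodels' sm.OLS)
    Y = data[yvar].to_numpy()
    X = np.column_stack([data[xvars].to_numpy(), np.ones(len(data))])
    beta, _, _, _ = np.linalg.lstsq(X, Y, rcond=None)
    return pd.Series(beta, index=list(xvars) + ["intercept"])

by_year = rets.groupby(lambda x: x.year)
# Extra positional arguments after the function are forwarded to regress
result = by_year.apply(regress, "AAPL", ["SPX"])
print(result)
```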
|   | SPX | intercept |
|---|---|---|
| 2003 | 1.195406 | 0.000710 |
| 2004 | 1.363463 | 0.004201 |
| 2005 | 1.766415 | 0.003246 |
| 2006 | 1.645496 | 0.000080 |
| 2007 | 1.198761 | 0.003438 |
| 2008 | 0.968016 | -0.001110 |
| 2009 | 0.879103 | 0.002954 |
| 2010 | 1.052608 | 0.001261 |
| 2011 | 0.806605 | 0.001514 |
Patience and perseverance always pay off.