Important! There is a huge chance that the assignment will be impossible to pass if the versions of lightgbm and scikit-learn are wrong. The versions being tested:
import gc
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

import lightgbm as lgb
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from tqdm import tqdm_notebook
from itertools import product
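Since the exact versions being tested are not stated in this excerpt, a quick sanity check is to print what is installed and compare against the grader's requirements:

# Print installed versions to compare against what the grader expects
import sklearn
print('lightgbm:', lgb.__version__)
print('scikit-learn:', sklearn.__version__)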
def downcast_dtypes(df):
    '''
    Changes column types in the dataframe:
        `float64` type to `float32`
        `int64`   type to `int32`
    '''
    float_cols = [c for c in df if df[c].dtype == "float64"]
    int_cols   = [c for c in df if df[c].dtype == "int64"]

    # Downcast
    df[float_cols] = df[float_cols].astype(np.float32)
    df[int_cols]   = df[int_cols].astype(np.int32)

    return df
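For a quick illustration of what downcast_dtypes does (toy data, not part of the assignment):

# Toy frame: int64/float64 columns get downcast to int32/float32
toy = pd.DataFrame({'a': np.arange(3, dtype='int64'), 'b': np.ones(3, dtype='float64')})
print(downcast_dtypes(toy).dtypes)  # a: int32, b: float32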
We now need to prepare the features. This part is all implemented for you.
# Create "grid" with columns index_cols = ['shop_id', 'item_id', 'date_block_num']
# For every month we create a grid from all shops/items combinations from that month grid = [] for block_num in sales['date_block_num'].unique(): cur_shops = sales.loc[sales['date_block_num'] == block_num, 'shop_id'].unique() cur_items = sales.loc[sales['date_block_num'] == block_num, 'item_id'].unique() grid.append(np.array(list(product(*[cur_shops, cur_items, [block_num]])),dtype='int32'))
# Turn the grid into a dataframe grid = pd.DataFrame(np.vstack(grid), columns = index_cols,dtype=np.int32)
print(grid.shape)
grid.head()
(278619, 3)
   shop_id  item_id  date_block_num
0       28     7738               0
1       28     7737               0
2       28     7770               0
3       28     7664               0
4       28     7814               0
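The grid above is built with itertools.product, which enumerates every (shop, item, month) combination. A tiny illustration with made-up ids:

# Cross-product of two shops, two items, and one month
list(product([28, 29], [7738, 7737], [0]))
# -> [(28, 7738, 0), (28, 7737, 0), (29, 7738, 0), (29, 7737, 0)]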
# Groupby data to get shop-item-month aggregates
gb = sales.groupby(index_cols, as_index=False).agg({'item_cnt_day': {'target': 'sum'}})
# Fix column names
gb.columns = [col[0] if col[-1] == '' else col[-1] for col in gb.columns.values]
print(gb.shape)
gb.head()
/opt/conda/lib/python3.6/site-packages/pandas/core/groupby.py:4036: FutureWarning: using a dict with renaming is deprecated and will be removed in a future version
return super(DataFrameGroupBy, self).aggregate(arg, *args, **kwargs)
(145463, 4)
   shop_id  item_id  date_block_num  target
0       26       27               0     1.0
1       26       27              10     1.0
2       26       27              14     1.0
3       26       28               8     1.0
4       26       28               9     1.0
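As the FutureWarning above notes, the dict-renaming form of agg is deprecated. On newer pandas (0.25+), an equivalent without the warning is named aggregation:

# Named aggregation: same shop-item-month sums, no deprecated dict renaming
gb = sales.groupby(index_cols, as_index=False).agg(target=('item_cnt_day', 'sum'))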
# Join it to the grid
all_data = pd.merge(grid, gb, how='left', on=index_cols).fillna(0)
print(all_data.shape)
all_data.head()
(278619, 4)
   shop_id  item_id  date_block_num  target
0       28     7738               0     4.0
1       28     7737               0    10.0
2       28     7770               0     6.0
3       28     7664               0     1.0
4       28     7814               0     2.0
# Same as above but with shop-month aggregates
gb = sales.groupby(['shop_id', 'date_block_num'], as_index=False).agg({'item_cnt_day': {'target_shop': 'sum'}})
gb.columns = [col[0] if col[-1] == '' else col[-1] for col in gb.columns.values]
all_data = pd.merge(all_data, gb, how='left', on=['shop_id', 'date_block_num']).fillna(0)
print(all_data.shape)
all_data.head()
(278619, 5)
/opt/conda/lib/python3.6/site-packages/pandas/core/groupby.py:4036: FutureWarning: using a dict with renaming is deprecated and will be removed in a future version
return super(DataFrameGroupBy, self).aggregate(arg, *args, **kwargs)
   shop_id  item_id  date_block_num  target  target_shop
0       28     7738               0     4.0       7057.0
1       28     7737               0    10.0       7057.0
2       28     7770               0     6.0       7057.0
3       28     7664               0     1.0       7057.0
4       28     7814               0     2.0       7057.0
# Same as above but with item-month aggregates
gb = sales.groupby(['item_id', 'date_block_num'], as_index=False).agg({'item_cnt_day': {'target_item': 'sum'}})
gb.columns = [col[0] if col[-1] == '' else col[-1] for col in gb.columns.values]
all_data = pd.merge(all_data, gb, how='left', on=['item_id', 'date_block_num']).fillna(0)
print(all_data.shape)
all_data.head()
/opt/conda/lib/python3.6/site-packages/pandas/core/groupby.py:4036: FutureWarning: using a dict with renaming is deprecated and will be removed in a future version
return super(DataFrameGroupBy, self).aggregate(arg, *args, **kwargs)
(278619, 6)
   shop_id  item_id  date_block_num  target  target_shop  target_item
0       28     7738               0     4.0       7057.0         11.0
1       28     7737               0    10.0       7057.0         16.0
2       28     7770               0     6.0       7057.0         10.0
3       28     7664               0     1.0       7057.0          1.0
4       28     7814               0     2.0       7057.0          6.0
# Downcast dtypes from 64 to 32 bit to save memory
all_data = downcast_dtypes(all_data)
del grid, gb
gc.collect();
After creating a grid, we can calculate some features. We will use lags from [1, 2, 3, 4, 5, 12] months ago.
# List of columns that we will use to create lags
cols_to_rename = list(all_data.columns.difference(index_cols))
shift_range = [1, 2, 3, 4, 5, 12]
for month_shift in tqdm_notebook(shift_range):
    train_shift = all_data[index_cols + cols_to_rename].copy()
    train_shift['date_block_num'] = train_shift['date_block_num'] + month_shift

    foo = lambda x: '{}_lag_{}'.format(x, month_shift) if x in cols_to_rename else x
    train_shift = train_shift.rename(columns=foo)

    # Merge the shifted copy back, so each row gets its lagged values
    all_data = pd.merge(all_data, train_shift, on=index_cols, how='left').fillna(0)
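After the loop, all_data has one extra column per source column and shift. You can verify the generated names like this:

# Inspect the first few generated lag columns
print([c for c in all_data.columns if '_lag_' in c][:6])
# e.g. ['target_lag_1', 'target_item_lag_1', 'target_shop_lag_1', 'target_lag_2', ...]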
# Don't use old data from year 2013
all_data = all_data[all_data['date_block_num'] >= 12]
# List of all lagged features
fit_cols = [col for col in all_data.columns if col[-1] in [str(item) for item in shift_range]]
# We will drop these at fitting stage
to_drop_cols = list(set(list(all_data.columns)) - (set(fit_cols) | set(index_cols))) + ['date_block_num']
# Category for each item
item_category_mapping = items[['item_id', 'item_category_id']].drop_duplicates()
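The excerpt does not show how this mapping is used; presumably, as in the original assignment, it is merged into the feature matrix so that item_category_id becomes a feature (this merge is an assumption, since it did not survive extraction):

# Join the category id onto every row, then downcast again
all_data = pd.merge(all_data, item_category_mapping, how='left', on='item_id')
all_data = downcast_dtypes(all_data)
gc.collect();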
You need to implement a basic stacking scheme. We have a time component here, so we will use scheme f) from the reading material. Recall that we always use the first-level models to build two datasets: test meta-features and second-level train meta-features. Let's see how we get the test meta-features first.
Test meta-features
First, we will run linear regression on the numeric columns and get predictions for the last month.
lr = LinearRegression()
lr.fit(X_train.values, y_train)
pred_lr = lr.predict(X_test.values)
print('Test R-squared for linreg is %f' % r2_score(y_test, pred_lr))
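The LightGBM baseline and its lgb_params dict, both referenced below, are defined elsewhere in the notebook and are missing from this excerpt. A plausible sketch (the parameter values here are assumptions, not necessarily the notebook's exact ones) that also builds the test meta-features X_test_level2:

# lgb_params: assumed values, shown for completeness only
lgb_params = {
    'objective': 'mse', 'metric': 'rmse',
    'learning_rate': 0.03, 'num_leaves': 2 ** 7,
    'min_data_in_leaf': 2 ** 7, 'feature_fraction': 0.75,
    'bagging_fraction': 0.5, 'bagging_freq': 1, 'bagging_seed': 0,
    'nthread': 1, 'verbose': 0,
}
model = lgb.train(lgb_params, lgb.Dataset(X_train, label=y_train), 100)
pred_lgb = model.predict(X_test)
print('Test R-squared for LightGBM is %f' % r2_score(y_test, pred_lgb))

# Test meta-features: column order [linreg, lightgbm]
X_test_level2 = np.c_[pred_lr, pred_lgb]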
Now it is your turn to write the code. You need to implement scheme f) from the reading material. Here, we will use a duration T equal to one month and M=15.
That is, you need to get predictions (meta-features) from linear regression and LightGBM for months 27, 28, 29, 30, 31, 32. Use the same parameters as in the models above.
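The code below also relies on dates_train_level2 and y_train_level2, whose construction is missing from this excerpt; presumably they restrict the train data to the six months being predicted, e.g.:

# Level-2 targets and their dates: only the six months we predict
dates_train_level2 = dates_train[dates_train.isin([27, 28, 29, 30, 31, 32])]
y_train_level2 = y_train[dates_train.isin([27, 28, 29, 30, 31, 32])]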
# And here we create the 2nd-level feature matrix, initialized with zeros
X_train_level2 = np.zeros([y_train_level2.shape[0], 2])
# Now fill `X_train_level2` with meta-features
for cur_block_num in [27, 28, 29, 30, 31, 32]:
    print(cur_block_num)

    '''
    1. Split `X_train` into parts.
       Remember that the corresponding dates are stored in `dates_train`.
    2. Fit linear regression.
    3. Fit LightGBM and put predictions.
    4. Store predictions from 2. and 3. in the right place of `X_train_level2`.
       You can use `dates_train_level2` for it.
       Make sure the order of the meta-features is the same as in `X_test_level2`.
    '''
    # Train on all months strictly before the current one
    train, train_y = X_train[dates_train < cur_block_num], y_train[dates_train < cur_block_num]
    lr.fit(train.values, train_y)
    model = lgb.train(lgb_params, lgb.Dataset(train, label=train_y), 100)

    # Predict the current month (use `dates_train`, not `dates`, so the mask is aligned)
    test = X_train[dates_train == cur_block_num]
    pred_lr = lr.predict(test.values)
    pred_gb = model.predict(test)

    # Same column order as in `X_test_level2`: [linreg, lightgbm]
    X_train_level2[dates_train_level2 == cur_block_num, :] = np.c_[pred_lr, pred_gb]
27
28
29
30
31
32
X_train_level2.shape
(34404, 2)
Remember that ensembles work best when the first-level models are diverse. We can qualitatively analyze the diversity by examining a scatter plot of the two meta-features: if the points all fell on a straight line, the two models would be nearly redundant and ensembling would add little. Plot the scatter plot below.
# Scatter of the linreg meta-feature vs. the LightGBM meta-feature
plt.scatter(X_train_level2[:, 0], X_train_level2[:, 1])
<matplotlib.collections.PathCollection at 0x7f2ea416f278>
Ensembling
Now that the meta-features are created, we can ensemble our first-level models.
Simple convex mix
Let's start with a simple linear convex mix:

$$mix = \alpha \cdot linreg\_prediction + (1 - \alpha) \cdot lgb\_prediction$$

We need to find an optimal $\alpha$, and that is easy, since a grid search is feasible here. Find the optimal $\alpha$ out of the alphas_to_try array. Remember that you need to use the train meta-features (not test) when searching for $\alpha$.
alphas_to_try = np.linspace(0, 1, 1001)

# Grid search over alpha on the *train* meta-features
r2_scores = [r2_score(y_train_level2, a * X_train_level2[:, 0] + (1 - a) * X_train_level2[:, 1])
             for a in alphas_to_try]
best_alpha = alphas_to_try[np.argmax(r2_scores)]
r2_train_simple_mix = np.max(r2_scores)
# Apply the best alpha to the test meta-features
r2_test_simple_mix = r2_score(y_test, best_alpha * X_test_level2[:, 0] + (1 - best_alpha) * X_test_level2[:, 1])
print('Test R-squared for simple mix is %f' % r2_test_simple_mix)
Test R-squared for simple mix is 0.781144
Stacking
Now, we will try a more advanced ensembling technique. Fit a linear regression model to the meta-features. Use the same parameters as in the model above.
# Fit the 2nd-level model on the train meta-features
meta_model = LinearRegression()
meta_model.fit(X_train_level2, y_train_level2)

# Score the stacker on the train and test meta-features
r2_train_stacking = r2_score(y_train_level2, meta_model.predict(X_train_level2))
r2_test_stacking = r2_score(y_test, meta_model.predict(X_test_level2))
print('Train R-squared for stacking is %f' % r2_train_stacking)
print('Test R-squared for stacking is %f' % r2_test_stacking)
Train R-squared for stacking is 0.632176
Test R-squared for stacking is 0.771297
Interestingly, the score turned out to be lower than with the previous method. Although the model is very simple (just three parameters) and, in fact, mixes the predictions linearly, it looks like it managed to overfit: its train R-squared is slightly higher than the simple mix's (0.632 vs. 0.627), while its test R-squared is lower (0.771 vs. 0.781). And of course, this particular case does not mean that a simple mix is always better than stacking.
We are all done! Now submit everything we need to the grader.
Current answer for task best_alpha is: 0.765
Current answer for task r2_train_simple_mix is: 0.627255043446
Current answer for task r2_test_simple_mix is: 0.781144169579
Current answer for task r2_train_stacking is: 0.632175561459
Current answer for task r2_test_stacking is: 0.771297132342