
___

Copyright by Pierian Data Inc. For more information, visit us at www.pieriandata.com

Linear Regression with SciKit-Learn

We saw how to create a very simple best-fit line; now let's greatly expand our toolkit to start thinking about overfitting, underfitting, and model evaluation, as well as working with multiple features!

Imports

In [2]:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
C:\ProgramData\Anaconda3\lib\site-packages\statsmodels\tools\_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
  import pandas.util.testing as tm

Sample Data

This sample data is from ISLR. It displays sales (in thousands of units) for a particular product as a function of advertising budgets (in thousands of dollars) for TV, radio, and newspaper media.

In [3]:
df = pd.read_csv("Advertising.csv")
In [4]:
df.head()
Out[4]:
TV radio newspaper sales
0 230.1 37.8 69.2 22.1
1 44.5 39.3 45.1 10.4
2 17.2 45.9 69.3 9.3
3 151.5 41.3 58.5 18.5
4 180.8 10.8 58.4 12.9
In [5]:
# Everything BUT the sales column
X = df.drop('sales',axis=1)
In [6]:
y = df['sales']

SciKit Learn


Polynomial Regression

From the preprocessing module, import PolynomialFeatures, which will help us transform our original data set by adding polynomial features.

We will go from the equation in the form (shown here as if we only had one x feature):

$$\hat{y} = \beta_0 + \beta_1x_1 + \epsilon $$

and create more features from the original x feature for some d degree of polynomial.

$$\hat{y} = \beta_0 + \beta_1x_1 + \beta_2x^2_1 + \ldots + \beta_dx^d_1 + \epsilon$$

Then we can call the linear regression model on it, since in reality we're just treating these new polynomial features x^2, x^3, ..., x^d as additional features. Obviously we need to be careful about choosing the correct value of d, the degree of the model. Our metric results on the test set will help us with this!

The other thing to note here is that we have multiple X features, not just a single one as in the formula above, so PolynomialFeatures will also take interaction terms into account. For example, if an input sample is two-dimensional and of the form [a, b], the degree-2 polynomial features are [1, a, b, a^2, ab, b^2].
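As a quick sketch of that ordering (using made-up values a=2, b=3 rather than the advertising data), you can verify it directly:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Tiny two-feature example; include_bias=True keeps the leading column of 1s
sample = np.array([[2.0, 3.0]])   # a=2, b=3
PolynomialFeatures(degree=2, include_bias=True).fit_transform(sample)
# array([[1., 2., 3., 4., 6., 9.]])  ->  [1, a, b, a^2, ab, b^2]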

In [7]:
from sklearn.preprocessing import PolynomialFeatures
In [8]:
polynomial_converter = PolynomialFeatures(degree=2,include_bias=False)
In [9]:
# Converter "fits" to data, in this case, reads in every X column
# Then it "transforms" and outputs the new polynomial data
poly_features = polynomial_converter.fit_transform(X)
In [10]:
poly_features.shape
Out[10]:
(200, 9)
In [11]:
X.shape
Out[11]:
(200, 3)
In [12]:
X.iloc[0]
Out[12]:
TV           230.1
radio         37.8
newspaper     69.2
Name: 0, dtype: float64
In [13]:
poly_features[0]
Out[13]:
array([2.301000e+02, 3.780000e+01, 6.920000e+01, 5.294601e+04,
       8.697780e+03, 1.592292e+04, 1.428840e+03, 2.615760e+03,
       4.788640e+03])
In [14]:
poly_features[0][:3]
Out[14]:
array([230.1,  37.8,  69.2])
In [15]:
poly_features[0][:3]**2
Out[15]:
array([52946.01,  1428.84,  4788.64])

The remaining entries are the interaction terms $$x_1 \cdot x_2 \text{, } x_1 \cdot x_3 \text{, and } x_2 \cdot x_3$$ which we can verify by hand:

In [16]:
230.1*37.8
Out[16]:
8697.779999999999
In [17]:
230.1*69.2
Out[17]:
15922.92
In [18]:
37.8*69.2
Out[18]:
2615.7599999999998
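As a compact sanity check (a sketch, not in the original notebook), we can confirm programmatically that the last six entries of poly_features[0] are exactly the squares and pairwise products of the three raw features:

tv, radio, news = poly_features[0][:3]
expected = np.array([tv**2, tv*radio, tv*news, radio**2, radio*news, news**2])
np.allclose(poly_features[0][3:], expected)   # True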

Train | Test Split

Make sure you have watched the Machine Learning Overview videos on Supervised Learning to understand why we do this step.

In [19]:
from sklearn.model_selection import train_test_split
In [20]:
# random_state: 
# https://stackoverflow.com/questions/28064634/random-state-pseudo-random-number-in-scikit-learn
X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.3, random_state=101)

Model for fitting on Polynomial Data

Create an instance of the model with parameters

In [22]:
from sklearn.linear_model import LinearRegression
In [23]:
model = LinearRegression(fit_intercept=True)

Fit/Train the Model on the training data

Make sure you only fit to the training data, in order to fairly evaluate your model's performance on future data.

In [24]:
model.fit(X_train,y_train)
Out[24]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
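Although the notebook doesn't display them, it can be instructive to peek at the fitted parameters (an optional inspection step, not part of the original lecture):

model.intercept_   # the fitted beta_0
model.coef_        # one coefficient per polynomial/interaction feature (9 of them here)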

Evaluation on the Test Set

Calculate Performance on Test Set

We want to fairly evaluate our model, so we get performance metrics on the test set (data the model has never seen before).

In [25]:
test_predictions = model.predict(X_test)
In [26]:
from sklearn.metrics import mean_absolute_error,mean_squared_error
In [27]:
MAE = mean_absolute_error(y_test,test_predictions)
MSE = mean_squared_error(y_test,test_predictions)
RMSE = np.sqrt(MSE)
In [28]:
MAE
Out[28]:
0.489679804480361
In [29]:
MSE
Out[29]:
0.4417505510403426
In [30]:
RMSE
Out[30]:
0.6646431757269028
In [31]:
df['sales'].mean()
Out[31]:
14.022500000000003
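To put these errors in context (an optional aside, not part of the original notebook), we can compare the test RMSE against the average sales value:

RMSE / df['sales'].mean()   # roughly 0.047, i.e. the RMSE is about 5% of mean sales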

Comparison with Simple Linear Regression

Results on the Test Set (Note: use the same random split to fairly compare! A sketch for reproducing the simple linear regression baseline follows the list below.)

  • Simple Linear Regression:

    • MAE: 1.213
    • RMSE: 1.516
  • Polynomial 2-degree:

    • MAE: 0.4896
    • RMSE: 0.664
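The simple linear regression numbers above come from the earlier lecture; as a hedged sketch, they can be approximately reproduced by fitting on the raw three features with the same 70/30 split and random_state=101:

# Sketch: baseline (non-polynomial) linear regression on the raw features
Xr_train, Xr_test, yr_train, yr_test = train_test_split(X, y, test_size=0.3, random_state=101)
baseline = LinearRegression()
baseline.fit(Xr_train, yr_train)
baseline_pred = baseline.predict(Xr_test)
mean_absolute_error(yr_test, baseline_pred)            # ~1.213
np.sqrt(mean_squared_error(yr_test, baseline_pred))    # ~1.516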


Choosing a Model

Adjusting Parameters

Are we satisfied with this performance? Perhaps a higher order would improve performance even more! But how high is too high? It is now up to us to possibly go back and adjust our model and parameters. Let's explore higher-order polynomials in a loop and plot out their error. This will nicely lead us into a discussion on overfitting.

Let's use a for loop to do the following:

  1. Create different order polynomial X data
  2. Split that polynomial data for train/test
  3. Fit on the training data
  4. Report back the metrics on both the train and test results
  5. Plot these results and explore overfitting
In [32]:
# TRAINING ERROR PER DEGREE
train_rmse_errors = []
# TEST ERROR PER DEGREE
test_rmse_errors = []

for d in range(1,10):
    
    # CREATE POLY DATA SET FOR DEGREE "d"
    polynomial_converter = PolynomialFeatures(degree=d,include_bias=False)
    poly_features = polynomial_converter.fit_transform(X)
    
    # SPLIT THIS NEW POLY DATA SET
    X_train, X_test, y_train, y_test = train_test_split(poly_features, y, test_size=0.3, random_state=101)
    
    # TRAIN ON THIS NEW POLY SET
    model = LinearRegression(fit_intercept=True)
    model.fit(X_train,y_train)
    
    # PREDICT ON BOTH TRAIN AND TEST
    train_pred = model.predict(X_train)
    test_pred = model.predict(X_test)
    
    # Calculate Errors
    
    # Errors on Train Set
    train_RMSE = np.sqrt(mean_squared_error(y_train,train_pred))
    
    # Errors on Test Set
    test_RMSE = np.sqrt(mean_squared_error(y_test,test_pred))

    # Append errors to lists for plotting later
    train_rmse_errors.append(train_RMSE)
    test_rmse_errors.append(test_RMSE)
In [33]:
plt.plot(range(1,6),train_rmse_errors[:5],label='TRAIN')
plt.plot(range(1,6),test_rmse_errors[:5],label='TEST')
plt.xlabel("Polynomial Complexity")
plt.ylabel("RMSE")
plt.legend()
Out[33]:
<matplotlib.legend.Legend at 0x168c0d109c8>
In [34]:
plt.plot(range(1,10),train_rmse_errors,label='TRAIN')
plt.plot(range(1,10),test_rmse_errors,label='TEST')
plt.xlabel("Polynomial Complexity")
plt.ylabel("RMSE")
plt.legend()
Out[34]:
<matplotlib.legend.Legend at 0x168c1d7df08>
In [35]:
plt.plot(range(1,10),train_rmse_errors,label='TRAIN')
plt.plot(range(1,10),test_rmse_errors,label='TEST')
plt.xlabel("Polynomial Complexity")
plt.ylabel("RMSE")
plt.ylim(0,100)
plt.legend()
Out[35]:
<matplotlib.legend.Legend at 0x168c41e5a88>
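For a quick numeric view alongside the plots (an optional aside, not part of the original notebook), the per-degree errors can also be tabulated:

pd.DataFrame({'degree': list(range(1,10)),
              'train RMSE': train_rmse_errors,
              'test RMSE': test_rmse_errors})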

Finalizing Model Choice

There are now two things we need to save: the PolynomialFeatures converter AND the model itself. Let's explore how we would proceed from here:

  1. Choose final parameters based on test metrics
  2. Retrain on all data
  3. Save Polynomial Converter object
  4. Save model
In [42]:
# Based on our chart, this could also have been degree=4, but
# it is better to err on the safe side of complexity
final_poly_converter = PolynomialFeatures(degree=3,include_bias=False)
In [43]:
final_model = LinearRegression()
In [45]:
final_model.fit(final_poly_converter.fit_transform(X),y)
Out[45]:
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=None, normalize=False)
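As a small aside (not shown in the original notebook), degree 3 on the three raw features yields 19 polynomial/interaction columns, which you can confirm from the fitted objects:

final_model.coef_.shape                     # (19,)
final_poly_converter.transform(X).shape     # (200, 19)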

Saving Model and Converter

In [46]:
from joblib import dump, load
In [49]:
dump(final_model, 'sales_poly_model.joblib')
Out[49]:
['sales_poly_model.joblib']
In [50]:
dump(final_poly_converter,'poly_converter.joblib')
Out[50]:
['poly_converter.joblib']

Deployment and Predictions

Prediction on New Data

Recall that we will need to convert any incoming data to polynomial data, since that is what our model is trained on. We simply load up our saved converter object and only call .transform() on the new data, since we're not refitting to a new data set.

Our next ad campaign will have a total spend of 149k on TV, 22k on Radio, and 12k on Newspaper Ads. How many units could we expect to sell as a result?

In [62]:
loaded_poly = load('poly_converter.joblib')
loaded_model = load('sales_poly_model.joblib')
In [63]:
campaign = [[149,22,12]]
In [64]:
campaign_poly = loaded_poly.transform(campaign)
In [65]:
campaign_poly
Out[65]:
array([[1.490000e+02, 2.200000e+01, 1.200000e+01, 2.220100e+04,
        3.278000e+03, 1.788000e+03, 4.840000e+02, 2.640000e+02,
        1.440000e+02, 3.307949e+06, 4.884220e+05, 2.664120e+05,
        7.211600e+04, 3.933600e+04, 2.145600e+04, 1.064800e+04,
        5.808000e+03, 3.168000e+03, 1.728000e+03]])
In [67]:
loaded_model.predict(campaign_poly)
Out[67]:
array([14.64501014])
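As a final hedged sketch (the helper name predict_sales is our own and not part of the course notebook), the transform-then-predict steps can be wrapped into a single reusable function:

def predict_sales(spend, converter=loaded_poly, model=loaded_model):
    """Predict sales (in thousands of units) for a single [TV, radio, newspaper]
    spend, each given in thousands of dollars."""
    features = converter.transform([spend])   # transform only; never refit on new data
    return model.predict(features)[0]

predict_sales([149, 22, 12])   # ~14.64, matching the prediction above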

