Linear Regression Project Exercise
Now that we have learned about feature engineering, cross validation, and grid search, let's test all of your new skills with a Machine Learning project exercise. This exercise takes a more guided approach; later ML projects will be more open-ended. We'll start with the final version of the Ames Housing dataset we worked on throughout the feature engineering section of the course. Your goal is to create a Linear Regression model, train it on the data with the optimal parameters found by a grid search, and then evaluate the model's capabilities on a test set.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
Data
df = pd.read_csv("../DATA/AMES_Final_DF.csv")
df.head()
df.info()
TASK: The label we are trying to predict is the SalePrice column. Separate the data into X features and y labels.
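One possible approach, assuming the df DataFrame loaded above:

X = df.drop('SalePrice', axis=1)   # all feature columns
y = df['SalePrice']                # label we want to predict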
TASK: Use scikit-learn to split up X and y into a training set and test set. Since we will later be using a Grid Search strategy, set your test proportion to 10%. To get the same data split as the solutions notebook, you can specify random_state = 101.
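A minimal sketch of the split, using the X and y defined in the previous step:

from sklearn.model_selection import train_test_split

# 10% test split; random_state=101 reproduces the split used in the solutions notebook
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=101)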
TASK: The dataset features have a variety of scales and units. For optimal regression performance, scale the X features. Take careful note of what to use for .fit() vs. what to use for .transform().
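One way to do this is with StandardScaler (other scalers would also work); the key point is to fit only on the training data so no test-set information leaks into the scaling:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
scaled_X_train = scaler.fit_transform(X_train)  # learn the scaling parameters from the training data only
scaled_X_test = scaler.transform(X_test)        # apply the same training-set statistics to the test data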
TASK: We will use an Elastic Net model. Create an instance of a default ElasticNet model with scikit-learn.
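For example (the variable name base_elastic_model is just an illustrative choice):

from sklearn.linear_model import ElasticNet

base_elastic_model = ElasticNet()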
TASK: The Elastic Net model has two main parameters, alpha and the L1 ratio. Create a dictionary parameter grid of values for the ElasticNet. Feel free to play around with these values, but keep in mind that you may not match the solution choices exactly. One possible grid is sketched below.
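A possible starting grid; the exact values here are only an assumption and are yours to tune:

param_grid = {
    'alpha': [0.1, 1, 5, 10, 50, 100],          # overall regularization strength
    'l1_ratio': [0.1, 0.5, 0.7, 0.95, 0.99, 1],  # mix between L1 and L2 penalties
}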
TASK: Using scikit-learn, create a GridSearchCV object and run a grid search for the best parameters for your model based on your scaled training data. Don't be alarmed if you receive convergence warnings for certain parameter combinations; this is expected for some alpha and l1_ratio values.
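A sketch of the grid search, assuming the base_elastic_model and param_grid defined above; the scoring metric, number of folds, and verbosity shown here are illustrative choices:

from sklearn.model_selection import GridSearchCV

grid_model = GridSearchCV(estimator=base_elastic_model,
                          param_grid=param_grid,
                          scoring='neg_mean_squared_error',
                          cv=5,
                          verbose=1)

# fit on the scaled training data; every parameter combination is cross-validated
grid_model.fit(scaled_X_train, y_train)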
TASK: Display the best combination of parameters for your model.
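For example:

grid_model.best_params_        # dictionary with the winning alpha and l1_ratio
# grid_model.best_estimator_   # or inspect the refit ElasticNet model itself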
TASK: Evaluate your model's performance on the unseen 10% scaled test set. In the solutions notebook we achieved an MAE of \$14,149 and an RMSE of \$20,532.
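One way to compute these metrics, assuming the fitted grid_model and scaled test set from above (your exact numbers may differ if you chose a different parameter grid):

from sklearn.metrics import mean_absolute_error, mean_squared_error

y_pred = grid_model.predict(scaled_X_test)   # uses the best estimator found by the search

MAE = mean_absolute_error(y_test, y_pred)
RMSE = np.sqrt(mean_squared_error(y_test, y_pred))
MAE, RMSE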