In this post, we’ll walk through building linear regression models to predict housing prices resulting from economic activity. Topics covered will include:
Reading in the Data with pandas
Ordinary Least Squares (OLS) Assumptions
Simple Linear Regression
Multiple Linear Regression
Another Look at Partial Regression Plots

Future posts will cover related topics such as exploratory analysis, regression diagnostics, and advanced regression modeling, but I wanted to jump right in so readers could get their hands dirty with data.
What is Regression?

Linear regression is a model that estimates a straight-line relationship between the dependent variable (plotted on the vertical or Y axis) and the predictor variables (plotted on the X axis), like so:

Linear regression will be discussed in greater detail as we move through the modeling process.
Variable Selection

For our dependent variable we’ll use housing_price_index (HPI), which measures price changes of residential housing.
For our predictor variables, we use our intuition to select drivers of macro- (or “big picture”) economic activity, such as unemployment, interest rates, and gross domestic product (total productivity). For an explanation of our variables, including assumptions about how they impact housing prices, and all the sources of data used in this post, see here.
Reading in the Data with pandas

Once we’ve downloaded the data, read it in using pandas’ read_csv method.
import pandas as pd

# read in from csv using pd.read_csv
# be sure to use the file path where you saved the data
housing_price_index = pd.read_csv('/Users/tdobbins/Downloads/hpi/monthly-hpi.csv')
unemployment = pd.read_csv('/Users/tdobbins/Downloads/hpi/unemployment.csv')
federal_funds_rate = pd.read_csv('/Users/tdobbins/Downloads/hpi/fed_funds.csv')
shiller = pd.read_csv('/Users/tdobbins/Downloads/hpi/shiller.csv')
gross_domestic_product = pd.read_csv('/Users/tdobbins/Downloads/hpi/gdp.csv')
Once we have the data, invoke pandas’ merge method to join the data together in a single dataframe for analysis. Some data is reported monthly, others are reported quarterly. No worries. We merge the dataframes on a certain column so each row is in its logical place for measurement purposes. In this example, the best column to merge on is the date column. See below.
# merge dataframes into single dataframe by date
df = shiller.merge(housing_price_index, on='date')\
.merge(unemployment, on='date')\
.merge(federal_funds_rate, on='date')\
.merge(gross_domestic_product, on='date')
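Because the quarterly series constrain the merge, the result is one row per quarter. As a quick, purely optional sanity check, the row count should match the No. Observations figure we’ll see in the model summary later:

# confirm the merged dataframe's dimensions (rows, columns)
print(df.shape)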
Let’s get a quick look at our variables with pandas’ head method. The headers in bold text represent the date and the variables we’ll test for our model. Each row represents a different time period.
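The call is a one-liner on our merged dataframe:

# preview the first five rows of the merged data
df.head()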
Out[23]:

         date    sp500  consumer_price_index  long_interest_rate  housing_price_index
0  2011-01-01  1282.62                220.22                3.39               181.35
1  2011-04-01  1331.51                224.91                3.46               180.80
2  2011-07-01  1325.19                225.92                3.00               184.25
3  2011-10-01  1207.22                226.42                2.15               181.51
4  2012-01-01  1300.58                226.66                1.97               179.13

   total_unemployed  more_than_15_weeks  not_in_labor_searched_for_work  multi_jobs
0              16.2                8393                            2800        6816
1              16.1                8016                            2466        6823
2              15.9                8177                            2785        6850
3              15.8                7802                            2555        6917
4              15.2                7433                            2809        7022

   leavers  losers  federal_funds_rate  total_expenditures  labor_force_pr
0      6.5    60.1                0.17              5766.7            64.2
1      6.8    59.4                0.10              5870.8            64.2
2      6.8    59.2                0.07              5802.6            64.0
3      8.0    57.9                0.07              5812.9            64.1
4      7.4    57.1                0.08              5765.7            63.7

   producer_price_index  gross_domestic_product
0                 192.7                 14881.3
1                 203.1                 14989.6
2                 204.6                 15021.1
3                 201.1                 15190.3
4                 200.7                 15291.0

Usually, the next step after gathering data would be exploratory analysis. Exploratory analysis is the part of the process where we analyze the variables (with plots and descriptive statistics) and figure out the best predictors of our dependent variable. For the sake of brevity, we’ll skip the exploratory analysis here. Keep in the back of your mind, though, that it’s of utmost importance: in the real world, you would never skip straight to prediction without it.
We’ll use ordinary least squares (OLS), a basic yet powerful way to estimate our model.
Ordinary Least Squares Assumptions

OLS estimates the parameters of a linear regression model by minimizing the sum of the squared residuals.
OLS is built on assumptions which, if held, indicate the model may be the correct lens through which to interpret our data. If the assumptions don’t hold, our model’s conclusions lose their validity. Take extra effort to choose the right model to avoid Auto-esotericism/Rube-Goldberg’s Disease.
Here are the OLS assumptions:
Linearity: A linear relationship exists between the dependent and predictor variables. If no linear relationship exists, linear regression isn’t the correct model to explain our data.

No multicollinearity: Predictor variables are not collinear, i.e., they aren’t highly correlated. If the predictors are highly correlated, try removing one or more of them. Since additional predictors are supplying redundant information, removing them shouldn’t drastically reduce the Adj. R-squared (see below); a quick way to screen for this is sketched just after this list.

Zero conditional mean: The average of the distances (or residuals) between the observations and the trend line is zero. Some will be positive, others negative, but they won’t be biased toward a set of values.

Homoskedasticity: The certainty (or uncertainty) of our dependent variable is equal across all values of a predictor variable; that is, there is no pattern in the residuals. In statistical jargon, the variance is constant.

No autocorrelation (serial correlation): Autocorrelation is when a variable is correlated with itself across observations. For example, a stock price might be serially correlated if one day’s stock price impacts the next day’s stock price.
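As a quick screen for multicollinearity, one informal option is a pairwise correlation matrix of candidate predictors. A minimal sketch, assuming the merged df from above (this particular column subset is just illustrative):

# pairwise correlations among a few candidate predictors;
# values near +1 or -1 suggest redundant information
predictors = ['total_unemployed', 'long_interest_rate',
              'federal_funds_rate', 'gross_domestic_product']
print(df[predictors].corr())

Let’s begin modeling.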
Simple Linear Regression

Simple linear regression uses a single predictor variable to explain a dependent variable. A simple linear regression equation is as follows:
y = α + βx + ε
Where:
y = dependent variable
β = regression coefficient
α = intercept (expected mean value of housing prices when our independent variable is zero)
x = predictor (or independent) variable used to predict y
ε = the error term, which accounts for the randomness that our model can’t explain.
Using statsmodels’ ols function, we construct our model, setting housing_price_index as a function of total_unemployed. We assume that an increase in the total number of unemployed people will put downward pressure on housing prices. Maybe we’re wrong, but we have to start somewhere!
The code below shows how to set up a simple linear regression model with total_unemployed as our predictor variable.
from IPython.display import HTML, display
import statsmodels.api as sm
from statsmodels.formula.api import ols
# fit our model with .fit() and show results
# we use statsmodels' formula API to invoke the syntax below,
# where we write out the formula using ~
housing_model = ols("housing_price_index ~ total_unemployed", data=df).fit()
# summarize our model
housing_model_summary = housing_model.summary()
# convert our table to HTML and add colors to headers for explanatory purposes
# (the specific colors below are illustrative -- any CSS color will do)
HTML(
housing_model_summary\
    .as_html()\
    .replace('Adj. R-squared:', '<span style="color:#c00">Adj. R-squared:</span>')\
    .replace('coef', '<span style="color:#1f77b4">coef</span>')\
    .replace('std err', '<span style="color:#2ca02c">std err</span>')\
    .replace('P>|t|', '<span style="color:#9467bd">P>|t|</span>')\
    .replace('[95.0% Conf. Int.]', '<span style="color:#ff7f0e">[95.0% Conf. Int.]</span>')
)
Out[24]:

                            OLS Regression Results
==============================================================================
Dep. Variable:     housing_price_index   R-squared:                     0.952
Model:                             OLS   Adj. R-squared:                0.949
Method:                  Least Squares   F-statistic:                   413.2
Date:                 Fri, 17 Feb 2017   Prob (F-statistic):         2.71e-15
Time:                         17:57:05   Log-Likelihood:              -65.450
No. Observations:                   23   AIC:                           134.9
Df Residuals:                       21   BIC:                           137.2
Df Model:                            1
Covariance Type:             nonrobust
====================================================================================
                       coef    std err          t      P>|t|    [95.0% Conf. Int.]
------------------------------------------------------------------------------------
Intercept         313.3128      5.408     57.938      0.000     302.067    324.559
total_unemployed   -8.3324      0.410    -20.327      0.000      -9.185     -7.480
==============================================================================
Omnibus:                        0.492   Durbin-Watson:                 1.126
Prob(Omnibus):                  0.782   Jarque-Bera (JB):              0.552
Skew:                           0.294   Prob(JB):                      0.759
Kurtosis:                       2.521   Cond. No.                       78.9
==============================================================================

Referring to the OLS regression results above, we’ll offer a high-level explanation of a few metrics to understand the strength of our model: Adj. R-squared, coefficients, standard errors, and p-values.
To explain:
Adj. R-squared indicates that 95% of the variation in housing prices can be explained by our predictor variable, total_unemployed.
The regression coefficient (coef) represents the change in the dependent variable resulting from a one unit change in the predictor variable, all other variables being held constant. In our model, a one unit increase in total_unemployed reduces housing_price_index by 8.33. In line with our assumptions, an increase in unemployment appears to reduce housing prices.
The standard error measures the accuracy of total_unemployed’s coefficient by estimating the variation of the coefficient if the same test were run on a different sample of our population. Our standard error, 0.41, is low relative to the coefficient, so the estimate appears precise.
The p-value is the probability of observing an effect at least as large as our 8.33 decrease in housing_price_index per one unit increase in total_unemployed if there were actually no relationship between the two variables. Here that probability is effectively 0%. In general, a p-value below 0.05 indicates that the results are statistically significant.
The confidence interval is a range within which our coefficient is likely to fall. We can be 95% confident that total_unemployed’s coefficient will be within our confidence interval, [-9.185, -7.480].
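Each of these quantities is also available programmatically on the fitted model, and the model can turn the estimated equation (roughly housing_price_index = 313.31 - 8.33 * total_unemployed) into predictions. A minimal sketch; the 16.2 below is just the first total_unemployed value from our preview, not a value from the original analysis:

# pull the pieces of the summary table directly off the fitted model
print(housing_model.params)      # intercept and coefficient estimates
print(housing_model.bse)         # standard errors
print(housing_model.pvalues)     # p-values
print(housing_model.conf_int())  # 95% confidence intervals

# predict the housing price index at a chosen unemployment level
print(housing_model.predict(pd.DataFrame({'total_unemployed': [16.2]})))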
Let’s use statsmodels’ plot_regress_exog function to help us understand our model. Please see the four graphs below.
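The call that produces them is short; a minimal sketch, assuming matplotlib is available alongside the statsmodels import above:

import matplotlib.pyplot as plt

# draw the four diagnostic plots for the total_unemployed predictor
fig = plt.figure(figsize=(12, 8))
fig = sm.graphics.plot_regress_exog(housing_model, 'total_unemployed', fig=fig)
plt.show()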
The “Y and Fitted vs. X” graph plots the dependent variable against our predicted values with a confidence interval. The inverse relationship in our graph indicates that housing_price_index and total_unemployed are negatively correlated, i.e., when one variable increases the other decreases.

The “Residuals versus total_unemployed” graph shows our model’s errors versus the specified predictor variable. Each dot is an observed value; the line represents the mean of those observed values. Since there’s no pattern in the distance between the dots and the mean value, the OLS assumption of homoskedasticity holds.

The “Partial regression plot” shows the relationship between housing_price_index and total_unemployed, taking into account the impact of adding other independent variables on our existing total_unemployed coefficient. We’ll see later how this same graph changes when we add more variables.

The Component and Component Plus Residual (CCPR) plot is an extension of the partial regression plot; it shows where our trend line would lie after adding the impact of our other independent variables on our existing total_unemployed coefficient. More on this plot can be found in the statsmodels documentation.