## Intro

Regression is central to many of the statistical analysis and machine learning tools that we leverage as data scientists.

Stated simply, we utilize regression techniques to model Y through some function of X.

We’ll start with a few ideas that set up the premise of regression, and then we’ll look at modeling a numeric variable Y with a numeric X.

## Regression with a Numeric Explanatory Variable

I have pulled down a house prices dataset from Kaggle. You can find it here: https://www.kaggle.com/shree1992/housedata/data

Below you’ll see a scatter plot of the sqft living space of a home against its price.

In addition to that scatter plot, I also include a regression line. More on that in a moment.

```
library(tidyverse)

# housing is the Kaggle house prices dataset read into a data frame,
# e.g. housing <- read_csv("data.csv")
housing %>%
  ggplot(aes(x = sqft_living, y = price)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE) # overlay an OLS regression line
```

What you can see above is that these two variables are indeed correlated, and you’ll also notice that the trend line moves right through the middle of the data points.
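As a quick check on that, reusing the same `housing` data frame, you can compute the correlation directly:

```
# Pearson correlation between living space and price
cor(housing$sqft_living, housing$price)
```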

The way regression works (in this case we’re talking about ordinary least squares regression, or OLS) is that it fits a line: a line that has a y-intercept and a slope. Think rise over run!

Now what I would like to highlight here is the objective function that determines the placement of said line.

The line is placed where the sum of squared distances between the line and the surrounding data points is smallest. In other words, if you placed that y-intercept a little higher, or increased the slope of the line… the squared differences between the actuals and the predictions would add up to more. Hence the rationale for the positioning of the line.
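As a minimal sketch of that objective (reusing the `housing` data frame from above; the helper name is just for illustration), you can compute the sum of squared residuals for any candidate intercept and slope:

```
# Sum of squared residuals for a candidate line: price ~ intercept + slope * sqft_living
sum_sq_resid <- function(intercept, slope, data) {
  predicted <- intercept + slope * data$sqft_living
  sum((data$price - predicted)^2)
}

# The OLS line is the intercept/slope pair that minimizes this quantity;
# nudging either value away from the fitted estimates only increases it.
```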

## Correlation versus Causation

Now, we’ve observed a relationship between two variables that are positively correlated.

With that said, can we conclude that X causes Y? Certainly not! If you remembered that from college statistics, give yourself a pat on the back. Obviously there could be any number of other factors at play.

To call on the notion of the general modeling framework, when we build a linear model, we are creating a linear function or a *line*.

The purpose of this line is to allow us to either explain or predict.

Whatever the case, modeling a line requires a y-intercept and a slope.

In another post, I talk about the general modeling framework: Y as some function of X plus epsilon, or error. In the case of the equation of a line, you may ask yourself where epsilon is… the answer is that we don’t represent epsilon in our equation of a line or linear function, because the sole purpose of the model is to capture *signal*, not noise.
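To make that distinction concrete (the notation here is mine, not from any model output), the general framework and the fitted line look like this:

```
Y = f(X) + \varepsilon        % general framework: signal plus noise
\hat{y} = b_0 + b_1 x         % fitted line: the estimated signal only
```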

## Interpreting Your Regression Model Output

We’ll first run the `lm` function in R. This function builds a simple linear model as determined by the formula you pass it: `y ~ x`, or in this case, price as a function of sqft living.

`fit <- lm(price ~ sqft_living, data = housing)`
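If you’re following along and want to see the numbers referenced below, printing the model (or pulling the coefficients out directly) will show the fitted intercept and slope:

```
# Print the fitted model to see the call and the coefficient estimates
fit

# Or extract the intercept and slope as a named vector
coef(fit)
```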

You can see in that output not only our call but also a coefficients section.

This section gives us our equation of a line. The y-intercept is 12954 and the coefficient for our explanatory variable, `sqft_living`, is 252. The way to interpret that coefficient is that for every 1 unit increase in `sqft_living`, we should see a 252 unit increase in `price`.

My house is about 3000 sqft, so according to this equation of a line, if you plopped my house down in Seattle, we’d predict its value to be $12,954 + $252 * 3000 ≈ $769K… needless to say, all of this data is based on the Seattle housing market… my home is not nearly that valuable.
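As a rough sketch of that same calculation in R (the 3000 sqft figure is just my house, not a row in the dataset):

```
# Predict the price of a hypothetical 3000 sqft home with the fitted model
new_home <- tibble(sqft_living = 3000)
predict(fit, newdata = new_home)
# roughly 12954 + 252 * 3000, i.e. about 769,000
```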

With this example behind us, one thing to keep in mind is that it’s the slope, or coefficient, that we rely on to quantify the relationship between X and Y.

## Diving Deeper into Your Regression Output

We are going to dive deeper into the nitty gritty of your linear model. We’ll do so a couple of different ways, but the first will be with the classic `summary` function in R.

`summary(fit)`

With a call as simple as that we get the following regression output.

Let’s go from the top!

First things first, the call makes sense. We get some stats on the residuals, or in other words the error, but we won’t dive deep into that for now.

Next we see the coefficients as we saw before in a slightly different format.

A couple of things I want to point you to are R-squared and the p-value… two of the most misused terms in statistics.

R-squared is defined as the proportion of the variation in Y that can be explained by variation in X.

The p-value is the traditional measure of statistical significance. The key takeaway here is that the p-value tells us how likely it is that we would see a relationship this strong if the output were really just random noise. By convention, if that likelihood is 5% or less, we call the result statistically significant.
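If you’d rather pull those two quantities out of the model programmatically than read them off the printed summary, a quick sketch:

```
model_summary <- summary(fit)

# R-squared: proportion of variation in price explained by sqft_living
model_summary$r.squared

# p-value for the sqft_living coefficient (the "Pr(>|t|)" column)
model_summary$coefficients["sqft_living", "Pr(>|t|)"]
```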

Another way to get a glance at similar output is passing our model to the `get_regression_table` function in the `moderndive` package.

```
library(moderndive)
get_regression_table(fit)
```

`get_regression_table` serves as a quick wrapper around the model that conveniently displays some of its more important statistics.

## Conclusion

Hopefully this proved to be a useful introduction to linear regression: how to build these models and how to interpret them.

### Recap

Today we got a crash course in the following:

- visualizing the relationship between a Y and an X
- adding regression lines to our Y & X visualizations
- building a linear regression model
- evaluating said model through its statistical significance (the p-value) and the amount of variation in Y we can explain through the variation in X

If this was useful come check out the rest of my posts at datasciencelessons.com! As always, Happy Data Science-ing!