Prediction (out of sample)¶
[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
Artificial data¶
[3]:
nsample = 50  # number of observations
sig = 0.25  # standard deviation of the noise
x1 = np.linspace(0, 20, nsample)
# regressors: x1, a sine term, and a shifted quadratic, plus a constant added below
X = np.column_stack((x1, np.sin(x1), (x1 - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, 0.5, -0.02]  # true coefficients
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)  # add Gaussian noise
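Note that y contains random noise, so the numbers in the output cells below will differ from run to run. A minimal sketch if you want reproducible draws (the seed value is an arbitrary choice, not part of the original example):

rng = np.random.default_rng(12345)  # fixed seed; the value is arbitrary
y = y_true + sig * rng.standard_normal(nsample)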
Estimation¶
[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.984
Model:                            OLS   Adj. R-squared:                  0.983
Method:                 Least Squares   F-statistic:                     928.1
Date:                Sun, 27 Mar 2022   Prob (F-statistic):           3.87e-41
Time:                        11:20:47   Log-Likelihood:                 3.2215
No. Observations:                  50   AIC:                             1.557
Df Residuals:                      46   BIC:                             9.205
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          5.1541      0.081     63.931      0.000       4.992       5.316
x1             0.4845      0.012     38.967      0.000       0.459       0.510
x2             0.4751      0.049      9.721      0.000       0.377       0.574
x3            -0.0199      0.001    -18.191      0.000      -0.022      -0.018
==============================================================================
Omnibus:                        1.014   Durbin-Watson:                   2.794
Prob(Omnibus):                  0.602   Jarque-Bera (JB):                0.355
Skew:                          -0.083   Prob(JB):                        0.838
Kurtosis:                       3.378   Cond. No.                         221.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
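The summary is only a formatted report; the underlying quantities are available as attributes of the results instance, for example:

print(olsres.params)  # estimated coefficients
print(olsres.bse)  # standard errors of the coefficients
print(olsres.rsquared)  # R-squared of the fit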
In-sample prediction¶
[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.65760317 5.12169866 5.54819216 5.91118911 6.19414018 6.39256024
6.51476528 6.5805061 6.61772332 6.6579568 6.73116375 6.86079683
7.05995097 7.32921185 7.65655976 8.01934433 8.38800595 8.73093235
9.01965238 9.23351374 9.3630761 9.41166213 9.39481158 9.33772816
9.27113861 9.22624442 9.22959549 9.29872807 9.43928359 9.6440818
9.89430046 10.1625678 10.41745957 10.62866203 10.77195235 10.83317838
10.81058382 10.71509673 10.56853414 10.40001752 10.24118825 10.121009
10.06100524 10.07172871 10.15102374 10.28438148 10.44732361 10.6094241
10.73930808 10.80980669]
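For the estimation sample, these values are also available as the fittedvalues attribute; a quick consistency check:

assert np.allclose(ypred, olsres.fittedvalues)  # in-sample predict equals fitted values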
Create a new sample of explanatory variables Xnew, predict and plot¶
[6]:
x1n = np.linspace(20.5, 25, 10)  # 10 new points beyond the estimation range
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5) ** 2))  # same transformations as above
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew)  # predict out of sample
print(ynewpred)
[10.78907741 10.64245773 10.3885807 10.06990876 9.74233739 9.46150997
9.26919435 9.1830566 9.19233578 9.26047852]
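predict returns point predictions only. If you also need confidence or prediction intervals, get_prediction provides them; a short sketch (alpha=0.05 requests 95% intervals):

pred = olsres.get_prediction(Xnew)
print(pred.summary_frame(alpha=0.05))  # mean, mean_ci_*, and obs_ci_* columns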
Plot comparison¶
[7]:
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.legend(loc="best")
[7]:
<matplotlib.legend.Legend at 0x7fcf4eebcaf0>
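Building on the get_prediction sketch above, the out-of-sample part of the figure can be annotated with a 95% prediction interval band; a minimal variant of the same plot:

frame = olsres.get_prediction(Xnew).summary_frame(alpha=0.05)
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.fill_between(x1n, frame["obs_ci_lower"], frame["obs_ci_upper"], color="r", alpha=0.2, label="95% prediction interval")
ax.legend(loc="best")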
Predicting with Formulas¶
Using formulas can make both estimation and prediction a lot easier.
[8]:
from statsmodels.formula.api import ols
data = {"x1": x1, "y": y}
res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()
We use I() to indicate use of the identity transform, i.e., we do not want any expansion magic from using **2.
[9]:
res.params
[9]:
Intercept 5.154058
x1 0.484504
np.sin(x1) 0.475137
I((x1 - 5) ** 2) -0.019858
dtype: float64
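Up to the automatically generated names, these are the same estimates as from the array interface above; a quick cross-check:

assert np.allclose(res.params.values, olsres.params)  # formula and array fits agree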
Now we only have to pass the single variable x1, and the transformed right-hand-side variables are constructed automatically.
[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0 10.789077
1 10.642458
2 10.388581
3 10.069909
4 9.742337
5 9.461510
6 9.269194
7 9.183057
8 9.192336
9 9.260479
dtype: float64
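The formula interface should also work with get_prediction, passing the same dict of raw variables, if you want interval estimates here as well (a sketch, assuming the default transform handling of the exog argument):

pred = res.get_prediction(exog=dict(x1=x1n))
print(pred.summary_frame(alpha=0.05))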