Prediction (out of sample)¶
[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
Artificial data¶
[3]:
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
# Design matrix: linear trend, sine term, and a quadratic centered at 5
X = np.column_stack((x1, np.sin(x1), (x1 - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
# Observed response = true mean + Gaussian noise
y = y_true + sig * np.random.normal(size=nsample)
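The noise is drawn randomly, so the exact numbers in the outputs below will vary from run to run. To make the example reproducible you could seed a generator first; a minimal sketch (the seed value is arbitrary, and using it will change the printed results):
rng = np.random.default_rng(12345)  # arbitrary seed, for illustration only
y = y_true + sig * rng.standard_normal(nsample)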
Estimation¶
[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.982
Model:                            OLS   Adj. R-squared:                  0.980
Method:                 Least Squares   F-statistic:                     815.6
Date:                Sat, 09 Jul 2022   Prob (F-statistic):           7.18e-40
Time:                        10:15:30   Log-Likelihood:                -2.3645
No. Observations:                  50   AIC:                             12.73
Df Residuals:                      46   BIC:                             20.38
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          5.0717      0.090     56.259      0.000       4.890       5.253
x1             0.4952      0.014     35.619      0.000       0.467       0.523
x2             0.4595      0.055      8.408      0.000       0.350       0.570
x3            -0.0193      0.001    -15.798      0.000      -0.022      -0.017
==============================================================================
Omnibus:                        2.101   Durbin-Watson:                   2.156
Prob(Omnibus):                  0.350   Jarque-Bera (JB):                1.443
Skew:                          -0.407   Prob(JB):                        0.486
Kurtosis:                       3.169   Cond. No.                         221.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
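The quantities in the summary are also available programmatically on the results object; for example, the coefficient table columns can be read off directly (a short sketch using standard result attributes):
print(olsres.params)      # point estimates
print(olsres.bse)         # standard errors
print(olsres.conf_int())  # 95% confidence intervals, i.e. the [0.025, 0.975] columns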
In-sample prediction¶
[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.58961885 5.0496539 5.47329499 5.83549733 6.12025464 6.323229
6.45246349 6.52706077 6.57404458 6.62391979 6.70566054 6.84194988
7.04545302 7.31673644 7.64417483 8.00586112 8.37320572 8.71563374
9.0056083 9.22315483 9.3591428 9.41678584 9.41111375 9.36650317
9.31267251 9.27979939 9.29356245 9.37092247 9.51733577 9.72585771
9.97828373 10.24814025 10.50503425 10.71964674 10.86854978 10.93805563
10.92646561 10.8443492 10.71280746 10.56000625 10.4165486 10.31044676
10.26251984 10.28297331 10.36972225 10.5087339 10.67633324 10.84309266
10.9786669 11.05677863]
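For in-sample points, predict(X) reproduces the stored fitted values, so the call above is equivalent to reading them off the results object (a quick check using only attributes already introduced):
assert np.allclose(ypred, olsres.fittedvalues)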
Create a new sample of explanatory variables Xnew, predict and plot¶
[6]:
x1n = np.linspace(20.5, 25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5) ** 2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew) # predict out of sample
print(ynewpred)
[11.04885439 10.91913205 10.68563323 10.38942693 10.0845744 9.82489304
9.65078008 9.57932186 9.60011056 9.67779235]
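predict returns only point predictions. If you also want uncertainty for the new points, get_prediction exposes standard errors and intervals; a minimal sketch (column names follow summary_frame's usual output):
pred = olsres.get_prediction(Xnew)
frame = pred.summary_frame(alpha=0.05)  # mean, mean_ci_*, obs_ci_* columns
print(frame[["mean", "obs_ci_lower", "obs_ci_upper"]])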
Plot comparison¶
[7]:
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.legend(loc="best")
[7]:
<matplotlib.legend.Legend at 0x7fbd605cc790>
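To visualize the uncertainty in the out-of-sample region, the intervals from the sketch above can be shaded onto the same kind of figure (a continuation of that sketch, reusing its frame variable):
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1n, ynewpred, "r-", label="OLS prediction")
# Shade the 95% prediction interval for the new points
ax.fill_between(x1n, frame["obs_ci_lower"], frame["obs_ci_upper"], alpha=0.2, color="r", label="95% prediction interval")
ax.legend(loc="best")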
Predicting with Formulas¶
Using formulas can make both estimation and prediction a lot easier.
[8]:
from statsmodels.formula.api import ols
data = {"x1": x1, "y": y}
res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()
We use the I() function to indicate the identity transform, i.e., we do not want any formula expansion magic from using **2; the expression inside I() is evaluated as ordinary arithmetic.
[9]:
res.params
[9]:
Intercept 5.071725
x1 0.495223
np.sin(x1) 0.459545
I((x1 - 5) ** 2) -0.019284
dtype: float64
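If you prefer to avoid I(), an equivalent approach is to precompute the transformed column yourself and refer to it by name in the formula (a sketch; x1_sq is a made-up column name):
data2 = {"x1": x1, "x1_sq": (x1 - 5) ** 2, "y": y}
res_alt = ols("y ~ x1 + np.sin(x1) + x1_sq", data=data2).fit()
# res_alt.params should match res.params, with the quadratic term renamed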
Now we only have to pass the single variable, and the transformed right-hand-side variables are created automatically.
[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0 11.048854
1 10.919132
2 10.685633
3 10.389427
4 10.084574
5 9.824893
6 9.650780
7 9.579322
8 9.600111
9 9.677792
dtype: float64
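Since the formula fits the same design matrix as the array interface, these predictions agree with ynewpred from above; a one-line check:
assert np.allclose(res.predict(exog=dict(x1=x1n)), ynewpred)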