Prediction (out of sample)¶
[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
Artificial data¶
[3]:
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, np.sin(x1), (x1 - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
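Note that y is simulated without fixing a seed, so the printed numbers in the cells below will differ from run to run. A minimal reproducible variant (the seed value is arbitrary) could draw the noise from a seeded Generator instead:

rng = np.random.default_rng(12345)  # arbitrary seed, for reproducibility only
y = y_true + sig * rng.standard_normal(nsample)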
Estimation¶
[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.983
Model: OLS Adj. R-squared: 0.982
Method: Least Squares F-statistic: 910.2
Date: Wed, 01 Feb 2023 Prob (F-statistic): 6.01e-41
Time: 21:31:58 Log-Likelihood: 0.91756
No. Observations: 50 AIC: 6.165
Df Residuals: 46 BIC: 13.81
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 5.1062 0.084 60.485 0.000 4.936 5.276
x1 0.4853 0.013 37.270 0.000 0.459 0.511
x2 0.3892 0.051 7.604 0.000 0.286 0.492
x3 -0.0185 0.001 -16.212 0.000 -0.021 -0.016
==============================================================================
Omnibus: 2.107 Durbin-Watson: 2.531
Prob(Omnibus): 0.349 Jarque-Bera (JB): 1.241
Skew: -0.319 Prob(JB): 0.538
Kurtosis: 3.434 Cond. No. 221.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
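As a quick sanity check, the estimated coefficients can be compared against the true beta used to simulate the data:

print(olsres.params)  # should be close to beta = [5.0, 0.5, 0.5, -0.02]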
In-sample prediction¶
[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.64288088 5.06799138 5.4615446 5.80232862 6.07678678 6.28124502
6.42251552 6.51677745 6.58691882 6.658776 6.75688892 6.9004692
7.10024374 7.35669233 7.65996882 7.99151881 8.32712808 8.64090093
8.90951476 9.11605192 9.25277923 9.32241848 9.33769933 9.31926777
9.29229411 9.28233755 9.31114685 9.39308711 9.53277959 9.7243425
9.95235728 10.19440191 10.42473474 10.61852376 10.75592648 10.82534993
10.82535541 10.76489482 10.66183992 10.54004602 10.42543267 10.34172508
10.30655621 10.32856965 10.40599957 10.52696081 10.67140157 10.81439788
10.93024852 10.99669769]
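predict returns point predictions only. If standard errors or intervals are also needed, get_prediction provides them; a short sketch (summary_frame's default columns are mean, mean_se, mean_ci_lower/upper, obs_ci_lower/upper):

pred = olsres.get_prediction(X)
frame = pred.summary_frame(alpha=0.05)  # 95% confidence and prediction intervals
print(frame.head())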
Create a new sample of explanatory variables Xnew, predict and plot¶
[6]:
x1n = np.linspace(20.5, 25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5) ** 2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew) # predict out of sample
print(ynewpred)
[10.98934447 10.87773314 10.67712729 10.42231068 10.15907098 9.93298935
9.77828059 9.70941614 9.71758098 9.77283188]
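Since OLS is a linear model, predict is just the matrix product of the design matrix with the fitted coefficients, so the same numbers can be reproduced by hand:

manual = Xnew @ olsres.params
print(np.allclose(manual, ynewpred))  # True, up to floating point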
Plot comparison¶
[7]:
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.legend(loc="best")
[7]:
<matplotlib.legend.Legend at 0x7fc7dda89350>
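A variant of this plot could also shade the 95% confidence band for the out-of-sample mean prediction via get_prediction; a sketch reusing the objects above:

pred_new = olsres.get_prediction(Xnew)
ci = pred_new.conf_int(alpha=0.05)  # columns: lower and upper bounds for the mean
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1n, ynewpred, "r-", label="OLS prediction")
ax.fill_between(x1n, ci[:, 0], ci[:, 1], color="r", alpha=0.2, label="95% CI")
ax.legend(loc="best")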
Predicting with Formulas¶
Using formulas can make both estimation and prediction a lot easier.
[8]:
from statsmodels.formula.api import ols
data = {"x1": x1, "y": y}
res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()
We use I() to indicate the identity transform, i.e., we do not want any formula expansion magic applied to **2.
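The formula machinery (patsy in the classic statsmodels stack; newer releases may use formulaic) builds the design matrix for us. Assuming patsy is installed, one can verify that the I() column really is the literal quadratic:

import patsy
dm = patsy.dmatrix("x1 + np.sin(x1) + I((x1-5)**2)", data)
print(np.allclose(np.asarray(dm)[:, -1], (x1 - 5) ** 2))  # True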
[9]:
res.params
[9]:
Intercept 5.106209
x1 0.485255
np.sin(x1) 0.389216
I((x1 - 5) ** 2) -0.018533
dtype: float64
Now we only have to pass the single variable x1, and the transformed right-hand side variables are constructed automatically.
[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0 10.989344
1 10.877733
2 10.677127
3 10.422311
4 10.159071
5 9.932989
6 9.778281
7 9.709416
8 9.717581
9 9.772832
dtype: float64
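As a final consistency check, the formula-based out-of-sample predictions should match the array-based ones computed earlier, up to floating point:

print(np.allclose(res.predict(exog=dict(x1=x1n)), ynewpred))  # True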