Prediction (out of sample)¶
[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
Artificial data¶
[3]:
nsample = 50  # number of observations
sig = 0.25  # standard deviation of the noise
x1 = np.linspace(0, 20, nsample)
# Explanatory variables: linear, sine and shifted-quadratic terms, plus a constant.
X = np.column_stack((x1, np.sin(x1), (x1 - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, 0.5, -0.02]  # true coefficients
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)  # add Gaussian noise
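Note that the noise is drawn without a fixed seed, so the numbers shown below will vary slightly across runs. As a small optional sketch (the seed value here is arbitrary), a seeded generator would make the example reproducible:
# Optional: seeded noise so that all results below are reproducible.
rng = np.random.default_rng(12345)  # arbitrary seed, for illustration only
y = y_true + sig * rng.standard_normal(nsample)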
Estimation¶
[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.988
Model:                            OLS   Adj. R-squared:                  0.987
Method:                 Least Squares   F-statistic:                     1262.
Date:                Sun, 20 Feb 2022   Prob (F-statistic):           3.62e-44
Time:                        20:03:58   Log-Likelihood:                 7.3858
No. Observations:                  50   AIC:                            -6.772
Df Residuals:                      46   BIC:                            0.8765
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          4.8827      0.074     65.824      0.000       4.733       5.032
x1             0.5136      0.011     44.895      0.000       0.491       0.537
x2             0.4657      0.045     10.355      0.000       0.375       0.556
x3            -0.0205      0.001    -20.449      0.000      -0.023      -0.019
==============================================================================
Omnibus:                        1.390   Durbin-Watson:                   2.455
Prob(Omnibus):                  0.499   Jarque-Bera (JB):                0.928
Skew:                           0.332   Prob(JB):                        0.629
Kurtosis:                       3.072   Cond. No.                         221.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
In-sample prediction¶
[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.36916808 4.84405393 5.28172753 5.65680991 5.95308125 6.16614571
6.30415371 6.3864628 6.44045727 6.49704901 6.58559875 6.72909219
6.94036353 7.21998679 7.55618159 7.92674852 8.30271656 8.65310285
8.95000311 9.17317625 9.31336991 9.37384061 9.36981868 9.326006
9.27251753 9.2399335 9.2542752 9.3327297 9.48082611 9.69152716
9.94638536 10.21857393 10.47729432 10.69283633 10.84145945 10.90929331
10.89461656 10.80813969 10.67124533 10.51247513 10.36284046 10.2507273
10.19723256 10.21269788 10.2950105 10.42995046 10.59352682 10.75591936
10.8863779 10.95827454]
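As a quick sanity check (a small sketch added here, not part of the original notebook), the in-sample predictions agree with the fitted values stored on the results object:
# In-sample predict(X) is the same as the stored fitted values.
assert np.allclose(ypred, olsres.fittedvalues)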
Create a new sample of explanatory variables Xnew, predict and plot¶
[6]:
x1n = np.linspace(20.5, 25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5) ** 2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew) # predict out of sample
print(ynewpred)
[10.94090093 10.79960485 10.55264838 10.24164846 9.92138768 9.64640151
9.45762616 9.37237597 9.38010423 9.44498542]
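If uncertainty around the out-of-sample predictions is also needed, get_prediction returns standard errors and confidence/prediction intervals (a minimal sketch; the 95% level via alpha=0.05 is only an illustration):
# Point predictions plus interval estimates for the new observations.
pred = olsres.get_prediction(Xnew)
print(pred.summary_frame(alpha=0.05))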
Plot comparison¶
[7]:
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.legend(loc="best")
[7]:
<matplotlib.legend.Legend at 0x7f6bf5576400>
Predicting with Formulas¶
Using formulas can make both estimation and prediction a lot easier.
[8]:
from statsmodels.formula.api import ols
data = {"x1": x1, "y": y}
res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()
We use I() to indicate use of the identity transform; i.e., we do not want any formula-operator expansion magic from using **2.
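To see that I() simply evaluates the enclosed expression as ordinary numpy arithmetic, the term can be built directly with patsy, the formula engine statsmodels uses here (a small sketch, not part of the original notebook):
from patsy import dmatrix

# The I(...) column is exactly (x1 - 5) ** 2 evaluated element-wise
# (column 0 of the design matrix is the intercept).
dm = dmatrix("I((x1 - 5) ** 2)", data={"x1": x1})
assert np.allclose(np.asarray(dm)[:, 1], (x1 - 5) ** 2)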
[9]:
res.params
[9]:
Intercept 4.882670
x1 0.513599
np.sin(x1) 0.465677
I((x1 - 5) ** 2) -0.020540
dtype: float64
Now we only have to pass the single variable x1, and the transformed right-hand-side variables are generated automatically.
[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0 10.940901
1 10.799605
2 10.552648
3 10.241648
4 9.921388
5 9.646402
6 9.457626
7 9.372376
8 9.380104
9 9.444985
dtype: float64
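As a final consistency check (a small sketch, not part of the original notebook), the formula-based out-of-sample predictions match the ones computed earlier from the explicit design matrix:
# Formula interface and explicit design matrix give the same predictions.
assert np.allclose(res.predict(exog=dict(x1=x1n)), ynewpred)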