Prediction (out of sample)¶
[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
Artificial data¶
[3]:
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
# design matrix: linear trend, sine term, and quadratic term, plus a constant
X = np.column_stack((x1, np.sin(x1), (x1 - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
# observed response: true signal plus Gaussian noise
y = y_true + sig * np.random.normal(size=nsample)
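The random draw is not seeded, so the printed numbers below reflect one particular sample and will differ on a rerun. To make a rerun reproducible, one could seed NumPy before this cell (not part of the original example):
np.random.seed(12345)  # hypothetical seed; any fixed value makes the draw repeatable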
Estimation¶
[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.978
Model: OLS Adj. R-squared: 0.977
Method: Least Squares F-statistic: 694.1
Date: Sun, 23 Jul 2023 Prob (F-statistic): 2.72e-38
Time: 11:01:09 Log-Likelihood: -6.5763
No. Observations: 50 AIC: 21.15
Df Residuals: 46 BIC: 28.80
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 4.9918 0.098 50.900 0.000 4.794 5.189
x1 0.4921 0.015 32.536 0.000 0.462 0.523
x2 0.5293 0.059 8.902 0.000 0.410 0.649
x3 -0.0188 0.001 -14.143 0.000 -0.021 -0.016
==============================================================================
Omnibus: 1.587 Durbin-Watson: 1.833
Prob(Omnibus): 0.452 Jarque-Bera (JB): 1.559
Skew: 0.357 Prob(JB): 0.459
Kurtosis: 2.512 Cond. No. 221.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
In-sample prediction¶
[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.52227713 5.00677089 5.45048777 5.8245801 6.11061119 6.30358441
6.41276414 6.46015364 6.47687999 6.49807998 6.55712728 6.68014917
6.88173397 7.16253417 7.50915928 7.89637578 8.29125298 8.65857341
8.96661894 9.19238202 9.3253457 9.36921159 9.34129165 9.2696639
9.18855932 9.13273771 9.13177685 9.20521288 9.35933062 9.58613094
9.86464504 10.16437955 10.45032651 10.68871503 10.85255984 10.92609482
10.90736336 10.80853982 10.65392931 10.47597446 10.30992513 10.18804702
10.13432047 10.16050055 10.26418578 10.42921264 10.62831121 10.82758555
10.99208268 11.09153529]
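The in-sample predictions coincide with the fitted values stored on the results object; a quick sanity check (not in the original example):
# predict(X) on the estimation sample equals the fitted values
np.allclose(ypred, olsres.fittedvalues)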
Create a new sample of explanatory variables Xnew, predict and plot¶
[6]:
# evaluation points beyond the estimation range (x1 stopped at 20)
x1n = np.linspace(20.5, 25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5) ** 2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew)  # predict out of sample
print(ynewpred)
[11.09557905 10.96104613 10.7086946 10.38582948 10.05472079 9.77735772
9.60027149 9.54314287 9.5939837 9.71207194]
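If uncertainty around these point predictions is needed, the results object also provides get_prediction, which returns standard errors and interval estimates. A minimal sketch; the column names are those produced by summary_frame in current statsmodels releases:
pred = olsres.get_prediction(Xnew)
frame = pred.summary_frame(alpha=0.05)
# 'mean' is the point prediction; the obs_ci columns bound new observations
print(frame[["mean", "mean_ci_lower", "mean_ci_upper", "obs_ci_lower", "obs_ci_upper"]])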
Plot comparison¶
[7]:
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.legend(loc="best")
[7]:
<matplotlib.legend.Legend at 0x7fa16fc56610>
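To visualize the prediction uncertainty, the interval columns from the summary_frame sketch above could be overlaid on the same axes, for example:
# shade the 95% prediction interval over the out-of-sample range
ax.fill_between(x1n, frame["obs_ci_lower"], frame["obs_ci_upper"], color="r", alpha=0.2, label="95% prediction interval")
ax.legend(loc="best")
fig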
Predicting with Formulas¶
Using formulas can make both estimation and prediction a lot easier.
[8]:
from statsmodels.formula.api import ols
data = {"x1": x1, "y": y}
res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()
We use I() to indicate the identity transform, i.e., we do not want any formula expansion magic from using **2; the expression inside I() is evaluated as a plain numerical operation.
[9]:
res.params
[9]:
Intercept 4.991810
x1 0.492114
np.sin(x1) 0.529324
I((x1 - 5) ** 2) -0.018781
dtype: float64
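The formula-based fit reproduces the array-based estimates; only the parameter names change. A quick consistency check (not in the original example):
# same coefficients as olsres, just with formula-derived names
np.allclose(res.params.values, olsres.params)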
Now we only have to pass the single variable x1 and the transformed right-hand-side variables are created automatically.
[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0 11.095579
1 10.961046
2 10.708695
3 10.385829
4 10.054721
5 9.777358
6 9.600271
7 9.543143
8 9.593984
9 9.712072
dtype: float64
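These predictions match the array-based out-of-sample predictions computed earlier; a final consistency check (not in the original example):
np.allclose(res.predict(exog=dict(x1=x1n)), ynewpred)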