Prediction (out of sample)¶
[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.api as sm
plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)
Artificial data¶
[3]:
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, np.sin(x1), (x1 - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
Estimation¶
[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.984
Model: OLS Adj. R-squared: 0.983
Method: Least Squares F-statistic: 932.6
Date: Thu, 03 Aug 2023 Prob (F-statistic): 3.47e-41
Time: 21:28:34 Log-Likelihood: 1.9172
No. Observations: 50 AIC: 4.166
Df Residuals: 46 BIC: 11.81
Df Model: 3
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
const 5.0674 0.083 61.237 0.000 4.901 5.234
x1 0.4871 0.013 38.170 0.000 0.461 0.513
x2 0.4293 0.050 8.558 0.000 0.328 0.530
x3 -0.0191 0.001 -17.005 0.000 -0.021 -0.017
==============================================================================
Omnibus: 0.690 Durbin-Watson: 1.956
Prob(Omnibus): 0.708 Jarque-Bera (JB): 0.739
Skew: 0.075 Prob(JB): 0.691
Kurtosis: 2.424 Cond. No. 221.
==============================================================================
Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
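As a quick sanity check, the estimated coefficients can be compared with the true beta used to simulate the data (a small sketch using only the objects defined above):

print(olsres.params)  # estimated coefficients: [const, x1, x2, x3]
print(beta)           # true coefficients used to generate y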
In-sample prediction¶
[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.59103596 5.03487491 5.44436667 5.79611295 6.07515976 6.27745436
6.41051106 6.49217669 6.54769836 6.60557548 6.69287738 6.83079587
7.03116351 7.29450955 7.60997295 7.95708686 8.30914124 8.63757125
8.91665036 9.12771717 9.26224142 9.32322545 9.32471099 9.28947207
9.24527339 9.22030859 9.23856814 9.3158978 9.45739515 9.65657214
9.89642099 10.15220856 10.39553996 10.59902366 10.74077195 10.80799696
10.79911167 10.72399069 10.60234787 10.46049738 10.32703032 10.22811736
10.18320892 10.20183952 10.28206133 10.41076413 10.56582933 10.71976399
10.84421779 10.91464096]
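In-sample predictions are simply the fitted values; a minimal sketch confirming this with the results object above (predict() with no argument defaults to the estimation sample):

print(np.allclose(ypred, olsres.fittedvalues))  # True: same as the in-sample fit
print(np.allclose(ypred, olsres.predict()))     # True: default exog is the estimation sample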
Create a new sample of explanatory variables Xnew, predict and plot¶
[6]:
x1n = np.linspace(20.5, 25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5) ** 2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew) # predict out of sample
print(ynewpred)
[10.90363851 10.77832626 10.55554103 10.27365186 9.98316588 9.73436243
9.56498291 9.49099038 9.50266125 9.56696592]
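Point predictions alone carry no measure of uncertainty. A small sketch of how get_prediction can supply standard errors and confidence/prediction intervals for the new observations (alpha=0.05 is just an illustrative choice):

pred = olsres.get_prediction(Xnew)
print(pred.summary_frame(alpha=0.05))  # mean, mean_se, and mean/obs interval bounds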
Plot comparison¶
[7]:
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.legend(loc="best")
[7]:
<matplotlib.legend.Legend at 0x7f4a801a5f90>
Predicting with Formulas¶
Using formulas can make both estimation and prediction a lot easier.
[8]:
from statsmodels.formula.api import ols
data = {"x1": x1, "y": y}
res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()
We use I() to indicate the identity transform; that is, we do not want any formula expansion magic from using **2.
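For illustration, a small sketch using patsy (the formula engine behind statsmodels formulas) shows the design matrix columns the formula produces; inside a formula, ** is the interaction operator rather than numeric exponentiation, so the squared term must be protected with I():

from patsy import dmatrix

mat = dmatrix("x1 + np.sin(x1) + I((x1 - 5) ** 2)", data)
print(mat.design_info.column_names)  # ['Intercept', 'x1', 'np.sin(x1)', 'I((x1 - 5) ** 2)']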
[9]:
res.params
[9]:
Intercept 5.067397
x1 0.487127
np.sin(x1) 0.429334
I((x1 - 5) ** 2) -0.019054
dtype: float64
Now we only have to pass the single variable x1, and the transformed right-hand side variables are created automatically.
[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0 10.903639
1 10.778326
2 10.555541
3 10.273652
4 9.983166
5 9.734362
6 9.564983
7 9.490990
8 9.502661
9 9.566966
dtype: float64
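As a final check (a small sketch, not part of the original output), the formula-based predictions agree with those computed earlier from the manually built design matrix:

print(np.allclose(res.predict(exog=dict(x1=x1n)), ynewpred))  # True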