Prediction (out of sample)

[1]:
%matplotlib inline
[2]:
import numpy as np
import matplotlib.pyplot as plt

import statsmodels.api as sm

plt.rc("figure", figsize=(16, 8))
plt.rc("font", size=14)

Artificial data

[3]:
nsample = 50
sig = 0.25
x1 = np.linspace(0, 20, nsample)
X = np.column_stack((x1, np.sin(x1), (x1 - 5) ** 2))
X = sm.add_constant(X)
beta = [5.0, 0.5, 0.5, -0.02]
y_true = np.dot(X, beta)
y = y_true + sig * np.random.normal(size=nsample)
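
The draw above is random, so the numbers below will differ from run to run. To make the example repeatable, one could seed NumPy before generating y (a small aside; the seed value is arbitrary and not part of the original notebook):

np.random.seed(12345)  # arbitrary fixed seed so y is reproducible across runs
y = y_true + sig * np.random.normal(size=nsample)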

Estimation

[4]:
olsmod = sm.OLS(y, X)
olsres = olsmod.fit()
print(olsres.summary())
                            OLS Regression Results
==============================================================================
Dep. Variable:                      y   R-squared:                       0.987
Model:                            OLS   Adj. R-squared:                  0.987
Method:                 Least Squares   F-statistic:                     1209.
Date:                Sat, 04 Feb 2023   Prob (F-statistic):           9.61e-44
Time:                        20:49:19   Log-Likelihood:                 7.3284
No. Observations:                  50   AIC:                            -6.657
Df Residuals:                      46   BIC:                            0.9912
Df Model:                           3
Covariance Type:            nonrobust
==============================================================================
                 coef    std err          t      P>|t|      [0.025      0.975]
------------------------------------------------------------------------------
const          4.9629      0.074     66.830      0.000       4.813       5.112
x1             0.5057      0.011     44.153      0.000       0.483       0.529
x2             0.4582      0.045     10.176      0.000       0.368       0.549
x3            -0.0204      0.001    -20.308      0.000      -0.022      -0.018
==============================================================================
Omnibus:                        0.634   Durbin-Watson:                   1.723
Prob(Omnibus):                  0.728   Jarque-Bera (JB):                0.740
Skew:                           0.146   Prob(JB):                        0.691
Kurtosis:                       2.480   Cond. No.                         221.
==============================================================================

Notes:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
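
As a quick sanity check (an aside, not part of the original notebook), the estimated coefficients and their standard errors are available directly on the results object and can be compared against the true beta used to simulate the data:

print(olsres.params)  # estimated coefficients, close to beta = [5.0, 0.5, 0.5, -0.02]
print(olsres.bse)     # standard errors, the "std err" column of the summary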

In-sample prediction

[5]:
ypred = olsres.predict(X)
print(ypred)
[ 4.45239724  4.9206011   5.35212304  5.72199419  6.01425682  6.22458614
  6.36100086  6.44254572  6.49616252  6.55226364  6.63973537  6.78119199
  6.9892603   7.26450506  7.59533622  7.95991314  8.3297329   8.67431329
  8.96620097  9.1854819   9.32305324  9.38211888  9.37766305  9.33398847
  9.28072347  9.24795414  9.26128127  9.33761424  9.48239279  9.68869337
  9.93836658 10.20501914 10.45835018 10.66912958 10.81400044 10.87931654
 10.86338434 10.77674127 10.64042442 10.48251424 10.33352098 10.22137195
 10.16682314 10.18004908 10.25897117 10.3895992  10.54832992 10.70582497
 10.83183106 10.90015057]
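
For OLS, predicting on the estimation sample simply returns the fitted values; a one-line check (an aside, not in the original notebook) confirms this:

assert np.allclose(ypred, olsres.fittedvalues)  # in-sample predict == fitted values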

Create a new sample of explanatory variables, Xnew, predict, and plot

[6]:
x1n = np.linspace(20.5, 25, 10)
Xnew = np.column_stack((x1n, np.sin(x1n), (x1n - 5) ** 2))
Xnew = sm.add_constant(Xnew)
ynewpred = olsres.predict(Xnew)  # predict out of sample
print(ynewpred)
[10.88000217 10.73782725 10.49159278 10.18224324  9.86367596  9.58954518
  9.40012563  9.31245187  9.3161476   9.37596621]
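
Point predictions carry no measure of uncertainty. As an aside, results objects also provide get_prediction, which returns standard errors plus confidence and prediction intervals for new observations:

pred = olsres.get_prediction(Xnew)
print(pred.summary_frame(alpha=0.05))  # mean, mean_se, and 95% CI / PI columns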

Plot comparison

[7]:
fig, ax = plt.subplots()
ax.plot(x1, y, "o", label="Data")
ax.plot(x1, y_true, "b-", label="True")
ax.plot(np.hstack((x1, x1n)), np.hstack((ypred, ynewpred)), "r", label="OLS prediction")
ax.legend(loc="best")
[7]:
<matplotlib.legend.Legend at 0x7f959985f050>
[Figure: data points, true curve, and OLS in-sample plus out-of-sample prediction]

Predicting with Formulas

Using formulas can make both estimation and prediction a lot easier.

[8]:
from statsmodels.formula.api import ols

data = {"x1": x1, "y": y}

res = ols("y ~ x1 + np.sin(x1) + I((x1-5)**2)", data=data).fit()

We use I() to indicate the identity transform; i.e., we do not want any expansion magic from using **2.
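
To see the expansion magic that I suppresses: in a patsy formula, ** applied to a sum of terms produces interactions, not arithmetic powers. A small illustration using patsy directly (an aside with made-up column names a and b, not part of the original notebook):

from patsy import dmatrix

demo = {"a": x1, "b": np.sin(x1)}
# (a + b)**2 expands to the terms a, b, and a:b -- no values are squared
print(dmatrix("(a + b)**2", demo).design_info.column_names)
# I((a - 5)**2) evaluates the enclosed Python expression literally
print(dmatrix("I((a - 5)**2)", demo).design_info.column_names)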

[9]:
res.params
[9]:
Intercept           4.962927
x1                  0.505686
np.sin(x1)          0.458152
I((x1 - 5) ** 2)   -0.020421
dtype: float64

Now we only have to pass the single variable, and the transformed right-hand-side variables are generated automatically.

[10]:
res.predict(exog=dict(x1=x1n))
[10]:
0    10.880002
1    10.737827
2    10.491593
3    10.182243
4     9.863676
5     9.589545
6     9.400126
7     9.312452
8     9.316148
9     9.375966
dtype: float64
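
The formula interface composes with get_prediction as well, so interval estimates come through with the same dict-based exog (a brief sketch, same caveats as the aside above):

frame = res.get_prediction(exog=dict(x1=x1n)).summary_frame()
print(frame[["mean", "mean_ci_lower", "mean_ci_upper"]])  # point predictions with 95% CI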