Stationarity of Econometric Learning with Bounded Memory and a Predicted State Variable

In this paper, we consider a model where producers set their prices based on their prediction of the aggregate price level and an exogenous variable, which can be a demand or a cost-push shock. To form their expectations, they use OLS-type econometric learning with bounded memory. We show that the aggregate price follows a random coefficient autoregressive process, and we prove that this process is covariance stationary.


Introduction
* We would like to thank Adriana Cornea, Martin Ellison, Michele Berardi, Jack R. Rogers and Dooruj Rambaccussing for comments and discussions. Keqing Liu and Šarūnas Girdėnas are grateful for financial support from the ESRC. Special acknowledgement goes to the anonymous referee who corrected some typos in our proof.

Econometric learning was designed to model forecasts of future economic variables in forward-looking models. In contrast to rational expectations theory, which imposes the very strong assumption that agents know the structure of the model, econometric learning only assumes that agents behave as professional econometricians. They collect the available data and use OLS regression to produce the forecast. As more data becomes available, this econometric forecast often converges to the rational expectations equilibrium (Sargent, 1993). Although econometric learning relaxes many assumptions of the rational expectations mechanism, we think that one of them could still be too strong. In particular, it assumes that agents have access to the entire history of the variables and use all of it to form the forecast. Not only does that assumption require infinite memory, it also neglects the cost of data collection and processing.
Several papers relax the assumption of infinite memory and consider the case where memory is bounded (for a survey, see Chevillon and Mavroeidis, 2014). However, the majority of the results are proven for non-stochastic models (Evans and Honkapohja, 2000). The only exception known to us is Honkapohja and Mitra (2003), who investigate learning with bounded memory in a stochastic environment. However, they consider a very special case of learning the intercept parameter, and their model does not account for the possibility of using exogenous independent variables when the expectation is formed.
This paper picks up the research from Honkapohja and Mitra (2003) and explores the dynamic properties of econometric learning with bounded memory in a stochastic environment.
We expand that paper by adding a stochastic exogenous variable which can be used for econometric forecasts.
The introduction of a stochastic independent variable makes the mathematical framework more complex than that of Honkapohja and Mitra (2003), where the model evolves according to a simple autoregressive (AR) process. In this paper, the transition matrix has random coefficients (the random coefficient autoregressive model, RCAR, as in Nicholls and Quinn, 1982). The model is also more complex than Conlisk (1974), since our transition matrices are autocorrelated. Nevertheless, we prove the stationarity of the model.
In addition, we formulate a sufficient condition for stationarity which can be applied more generally in the RCAR literature.
This paper is structured as follows. In Section 2 we present the model and introduce OLS-type learning with finite memory. In Section 3 we prove that the RCAR process of price movement is covariance stationary. Section 4 concludes the paper.
The Model

We consider a model where producer j sets the current price p_t(j) depending on the expected aggregate price level p_t^e and an exogenous, not completely observable state variable \tilde{w}_t:

p_t(j) = \lambda p_t^e + \gamma \tilde{w}_t,   (1)

where \lambda and \gamma are known constant parameters and \tilde{w}_t is the estimated value of the exogenous cost-push shock, which can negatively affect profit. The cost-push shock w_t is not observed in period t; however, every producer has access to the historical data of its past realisations. This model is very similar to the cobweb model as presented in Kaldor (1934), Ezekiel (1938) and, more recently, Evans and Honkapohja (2003). It is known to be stable when |\lambda| < 1, and we restrict our analysis to this case. In equilibrium, each producer sets the same price, that is, p_t = p_t(j).

OLS Learning
As w_t is the only state variable, the producer expects the aggregate price to depend on it linearly:

p_t = a_2 + b_2 w_t,   (2)

where a_2 and b_2 are unknown parameters which the producer estimates from the available historical data {p_s, w_s}. The price expectation is then

p_t^e = \hat{a}_{2,t} + \hat{b}_{2,t} \tilde{w}_t,   (3)

where \hat{a}_{2,t} and \hat{b}_{2,t} are the estimated coefficients and \tilde{w}_t is a proxy for w_t. The classical OLS-type learning model assumes that agents forecast future prices by running the OLS regression of equation (2) and that, at time t, the available information set consists of the entire history of prices and the exogenous state variable, {p_s, w_s}_{s=0}^{t-1}. The coefficients \hat{a}_{2,t} and \hat{b}_{2,t} are the OLS estimators on this information set.
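As a concrete illustration of the regression step, the sketch below estimates the intercept and slope by OLS. The variable names (a2_hat, b2_hat) and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Sketch of the OLS step in classical learning: regress prices on a constant
# and the state variable w. The names and synthetic data are illustrative.
def ols_estimates(p, w):
    """Return intercept and slope from regressing p on [1, w]."""
    X = np.column_stack([np.ones_like(w), w])
    coef, *_ = np.linalg.lstsq(X, p, rcond=None)
    return coef[0], coef[1]

rng = np.random.default_rng(0)
w = rng.normal(size=200)
p = 0.5 + 2.0 * w               # data generated with intercept 0.5, slope 2.0
a2_hat, b2_hat = ols_estimates(p, w)
```

On noiseless data the estimates recover the generating coefficients exactly, up to floating-point error.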

Learning with Bounded Memory
Learning with bounded memory in our paper simply means that the agent uses only a limited number of observations, T, to form expectations.^1 The forecast is made using the same OLS algorithm as in the classical case (3); however, we assume that only a finite set of historical data, {p_s, w_s}_{s=t-T}^{t-1}, is used to estimate the coefficients. Consequently, the estimators \hat{a}_{2,t} and \hat{b}_{2,t} are defined as follows:

\hat{b}_{2,t-1} = \frac{\sum_{s=t-T}^{t-1} (p_s - \bar{p}_{t-1})(w_s - \bar{w}_{t-1})}{\sum_{s=t-T}^{t-1} (w_s - \bar{w}_{t-1})^2},   (4)

\hat{a}_{2,t-1} = \bar{p}_{t-1} - \hat{b}_{2,t-1} \bar{w}_{t-1},   (5)

where \bar{p}_{t-1} and \bar{w}_{t-1} are the sample means over the last T observations. Finally, as the agents cannot observe the realisation of w_t at the time they set their prices, the forecast \tilde{w}_t is used. The forecast is based on the available historical data {w_s}_{s=t-T}^{t-1} and consists of a weighted sum, as in Honkapohja and Mitra (2003). Formally, \tilde{w}_t can be written as

\tilde{w}_t = \sum_{i=1}^{T} \mu_{i,t} w_{t-i},   (6)

where \mu_{i,t} is the expected probability that w_t = w_{t-i}, and therefore \sum_{i=1}^{T} \mu_{i,t} = 1 with \mu_{i,t} \geq 0. Our setup covers an extensive range of models. For example, if w_t follows a Markov process with high persistence, the best prediction for w_t is w_{t-1}; in this case \mu_{1,t} = 1 and \mu_{i,t} = 0 for i > 1. In particular, for T = 2, \mu_1 = 1, \mu_2 = 0, the price p_t follows a simple autoregressive process. If w_t is i.i.d., the best proxy for w_t might be the sample mean \bar{w}_{t-1}; in this case \mu_{i,t} = 1/T, and the price p_t follows an AR(T) process. Our model will also work if \mu_{i,t} corresponds to precautionary predictors with larger weights attached to the worse realisations, as in robust control or ambiguity aversion theories.
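To make the mechanics concrete, here is a minimal simulation sketch of bounded-memory learning. The parameter values (lam, gam, T) and the equal forecast weights mu_i = 1/T are illustrative assumptions corresponding to the i.i.d. case discussed above.

```python
import numpy as np

# Minimal sketch of bounded-memory OLS learning; lam, gam, T and the equal
# weights mu_i = 1/T are illustrative assumptions, not values from the paper.
def simulate(T=5, periods=200, lam=0.9, gam=0.5, seed=1):
    rng = np.random.default_rng(seed)
    w = rng.normal(size=periods)            # i.i.d. cost-push shock
    p = np.zeros(periods)
    for t in range(T, periods):
        ps, ws = p[t - T:t], w[t - T:t]     # bounded memory: last T obs only
        b_hat = np.cov(ps, ws, bias=True)[0, 1] / max(np.var(ws), 1e-12)
        a_hat = ps.mean() - b_hat * ws.mean()   # intercept, as in (5)
        w_tilde = ws.mean()                 # equal weights: mu_i = 1/T
        p_e = a_hat + b_hat * w_tilde       # price expectation, as in (3)
        p[t] = lam * p_e + gam * w_tilde    # actual price rule
    return p

prices = simulate()
```

Note that with mu_i = 1/T the expectation collapses to the window mean of past prices (the slope term cancels, since a_hat + b_hat*w̄ = p̄), so the simulated price follows an AR(T)-type process, matching the i.i.d. example above.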
First, we show that the aggregate price p_t follows a Random Coefficient Autoregressive (RCAR) process.

Proposition 1
The actual price follows an autoregressive process of order T with random coefficients,

p_t = \lambda \sum_{i=1}^{T} Z_{i,t} p_{t-i} + \varepsilon_t,   (10)

where the random coefficients Z_{i,t} are determined by the bounded-memory OLS estimates and the forecast weights \mu_{i,t}.

Stationarity of Bounded Memory Learning
Proposition 1 allows us to write our model in the RCAR representation.
Lemma 2 For any realisation of w_t: i) \sum_{i=1}^{T} Z_{i,t} = 1, and ii) |Z_{i,t}| < 1.

According to (11), the model can be written in vector form as y_t = M_t y_{t-1} + \varepsilon_t, where the coefficients k_{i,t} can be any numbers satisfying the restrictions above. First, we prove that the expectation of y_t is finite by applying Proposition 3; thus, E[|y_t|] is finite whenever E[|\varepsilon_t|] exists. To complete the proof, we need to show that E[y_t y_t'] is also finite. Iterating the vector form backwards, we obtain

y_t = \varepsilon_t + \sum_{k=1}^{\infty} (M_t M_{t-1} \cdots M_{t-k+1}) \varepsilon_{t-k}.

Finally, we show that the expectations of the absolute values of these matrix products are bounded.^2 Another interesting implication of Proposition 3 is that the spectral radius of M_t is smaller than one.
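The vector-form recursion can be simulated directly to illustrate covariance stationarity. This is a sketch under illustrative assumptions: lam and T are arbitrary choices, the random coefficients are drawn from a Dirichlet distribution (nonnegative and summing to one, consistent with Lemma 2), and a standard normal innovation enters the first component.

```python
import numpy as np

# Simulate y_t = M_t y_{t-1} + eps_t with M_t = lam*Z_t + S (a sketch).
# Assumptions: lam, T, the Dirichlet draw for the coefficients and the
# N(0,1) innovation are illustrative choices, not taken from the paper.
T, lam, steps = 4, 0.9, 5000
rng = np.random.default_rng(2)
S = np.diag(np.ones(T - 1), k=-1)          # lower shift matrix
y = np.zeros(T)
avg_sq_norm = 0.0                          # running average of ||y_t||^2
for _ in range(steps):
    Z = np.zeros((T, T))
    Z[0] = rng.dirichlet(np.ones(T))       # coefficients sum to one (Lemma 2)
    eps = np.zeros(T)
    eps[0] = rng.normal()                  # innovation enters first component
    y = (lam * Z + S) @ y + eps
    avg_sq_norm += y @ y / steps
```

The empirical average of ||y_t||^2 settles at a finite value, in line with covariance stationarity of the RCAR process.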
Lemma 5 For any realization of the stochastic matrix M t , its eigenvalues are less than one in absolute value.
Proof. Consider G_n = (M_t)^n. Applying Proposition 3, we can claim that |G_n| < c_T \bar{e}^n J, where J is the matrix of ones. Since \bar{e} can be chosen in (\lambda, 1), G_n \to 0 as n \to \infty, which implies that every eigenvalue of M_t is smaller than one in absolute value.

References

[9] Sargent, Thomas J., 1993, Bounded Rationality in Macroeconomics, Clarendon Press, Oxford.
5 Appendix: Proof of Proposition 3

Proposition 3 For any memory length T and constant \bar{e} > \lambda, there exists a boundary c_T such that every element of the product of n matrices M_t is bounded in absolute value by c_T \bar{e}^n. The matrix M_t can be represented as

M_t = \lambda Z_t + S,

where Z_t has the form

[ Z_{1,t}  Z_{2,t}  ...  Z_{T,t} ]
[    0        0     ...     0    ]
[   ...      ...    ...    ...   ]
[    0        0     ...     0    ]

with each element Z_{i,t} smaller than 1 in absolute value, |Z_{i,t}| < 1, and S is the lower shift matrix

[ 0  0  ...  0  0 ]
[ 1  0  ...  0  0 ]
[ 0  1  ...  0  0 ]
[ ... ... ... ... ]
[ 0  0  ...  1  0 ]

Proof. First, we compute the product using the following property of the matrix S: for any matrix A, the first row of SA is zero; moreover, if the first k rows of A are zero, then the first k+1 rows of SA are also zero.
To compute the product (22), we need to sum the products of n matrices, each of which is either \lambda Z or S. However, if S appears more than T-1 times, the product is zero. Therefore, we can restrict our attention to those terms in which S appears fewer than T times.
The number of products with S appearing in exactly k places is n!/(k!(n-k)!), and therefore the total number of non-zero products is less than (T-1) \cdot n!/((T-1)!(n-T+1)!). Moreover, every such term is a matrix whose elements are less than (\lambda z)^{n-T}, where z = \max_i |Z_{i,t}| < 1. Therefore, every element of the product satisfies

[(\lambda Z_{t-1} + S)(\lambda Z_{t-2} + S) \cdots (\lambda Z_{t-n} + S)]_{ij} < \frac{n!}{(T-2)!(n-T+1)!} \lambda^{n-T} < n^T \lambda^{n-T}.

Consider the sequence {a_n} defined as a_n = n^T \lambda^{n-T}. Let \bar{e} \in (\lambda, 1); then we can find n^*(\bar{e}, T, \lambda) such that for any n > n^*,

a_{n+1}/a_n < \bar{e}.   (23)

It follows from (23) that for any positive k,

a_{n^*+k} < \bar{e}^k a_{n^*} = \bar{e}^k (n^*)^T \lambda^{n^*-T}.   (24)

To complete the proof, we define c_T = \max_{n \leq n^*} a_n / \bar{e}^n.
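Both claims above — the geometric decay of products of the matrices M_t (Proposition 3) and the sub-unit spectral radius (Lemma 5) — can be checked in a small simulation. This is a sketch under illustrative assumptions: T, lam and the horizon n are arbitrary, and the first-row coefficients of Z_t are drawn from a Dirichlet distribution, so they are nonnegative and sum to one, consistent with Lemma 2.

```python
import numpy as np

# Numerical check (sketch) of Proposition 3 and Lemma 5.
# Assumptions: T, lam, n are illustrative; the first-row coefficients of Z_t
# are Dirichlet-distributed, hence nonnegative and summing to one.
T, lam, n = 4, 0.8, 60
rng = np.random.default_rng(4)
S = np.diag(np.ones(T - 1), k=-1)      # lower shift matrix

G = np.eye(T)                          # running product of the M_t
max_radius = 0.0                       # largest |eigenvalue| seen (Lemma 5)
for _ in range(n):
    Z = np.zeros((T, T))
    Z[0] = rng.dirichlet(np.ones(T))   # random first row, sums to one
    M = lam * Z + S
    max_radius = max(max_radius, np.abs(np.linalg.eigvals(M)).max())
    G = M @ G

max_entry = np.abs(G).max()            # largest element of the n-fold product
```

With these draws the entries of G decay geometrically (every T steps each row sum of the product shrinks by at least a factor lam), and every sampled M_t has spectral radius below one, as Lemma 5 asserts.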