
Econometrics is a branch of economics that deals with the statistical and mathematical modelling of economic systems and the use of quantitative methods to analyze and make predictions about economic phenomena. It combines the tools of economics, mathematics, and statistics to study real-world economic data and to test and validate economic theories.
Econometric methods are used in a variety of applications, including forecasting, causal inference, policy evaluation, and model building. For example, econometric models can be used to study the relationship between economic variables, such as inflation, gross domestic product (GDP), and unemployment, or to estimate the impact of a policy change, such as a change in tax rates, on the economy.

Econometric models can be simple, such as linear regression models, or more complex, such as dynamic stochastic general equilibrium models. The choice of model depends on the research question and the available data. Econometricians use a variety of software and programming languages, such as R, MATLAB, and Stata, to perform their analyses and create models.
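For illustration, here is a minimal sketch of a simple linear regression in Python. The data are synthetic and purely illustrative, and the statsmodels library is assumed to be available; this is not a model used in the article, just a demonstration of the simplest case mentioned above.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic, purely illustrative data: "GDP growth" explained by "investment growth".
rng = np.random.default_rng(0)
investment_growth = rng.normal(2.0, 1.0, size=200)
gdp_growth = 1.0 + 0.5 * investment_growth + rng.normal(0.0, 0.3, size=200)

# Ordinary least squares: gdp_growth = b0 + b1 * investment_growth + error
X = sm.add_constant(investment_growth)   # adds the intercept column
ols_fit = sm.OLS(gdp_growth, X).fit()

print(ols_fit.params)      # estimated intercept and slope
print(ols_fit.rsquared)    # goodness of fit
```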
Read more about econometrics in Ackerberg D, Benkard L, Berry S, Pakes A. Econometric Tools for Analyzing Market Outcomes. In: Heckman J, Leamer E, eds. Handbook of Econometrics. Vol. 6A. Amsterdam: North-Holland; 2007. pp. 4171-4276.
Economic models
There are many different models that have been developed to analyze the US economy and its various components, including macroeconomic models, microeconomic models, and models that focus on specific sectors of the economy. Here are a few examples:
- Solow growth model: This model is used to analyze long-term economic growth and the factors that contribute to it, such as capital accumulation, technological progress, and population growth.
- IS-LM model: This model is used to analyze the interactions between the goods market and the money market in the economy and how changes in interest rates and investment affect economic activity.
- Phillips curve: This model shows the relationship between unemployment and inflation and is used to analyze the tradeoff between these two economic variables.
- Ricardian model: This model is used to analyze the effects of trade on the economy and how changes in trade policy, such as tariffs, alter those effects.
- Keynesian macroeconomic model: This model is used to analyze macroeconomic variables, such as aggregate demand, output, and employment, and the impact of government policy, such as fiscal policy, on the economy.
- Production function model: This model is used to analyze the relationship between inputs, such as labour and capital, and output in the economy (see the sketch after this list).
- The CMI model: This model measures the rate of market (competition) imperfection. It is based on calculating the natural (normal) price (P0) that corresponds to an ideal market, even if such an ideal market has never formed in the real world.
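To make the production-function and Solow-type models above concrete, here is a minimal sketch of a Cobb-Douglas production function and a Solow-style capital accumulation path. All parameter values are illustrative assumptions, not calibrated estimates.

```python
import numpy as np

# Illustrative (not calibrated) parameters.
alpha = 0.33    # capital share in a Cobb-Douglas production function
s = 0.2         # savings rate
delta = 0.05    # depreciation rate
A = 1.0         # total factor productivity

def output_per_worker(k: float) -> float:
    """Cobb-Douglas production in per-worker terms: y = A * k^alpha."""
    return A * k ** alpha

# Solow-style capital accumulation: k_{t+1} = s * f(k_t) + (1 - delta) * k_t
k = 1.0
path = []
for _ in range(100):
    path.append(k)
    k = s * output_per_worker(k) + (1 - delta) * k

# Closed-form steady state for comparison: k* = (s * A / delta)^(1 / (1 - alpha))
k_star = (s * A / delta) ** (1 / (1 - alpha))
print(f"capital per worker after 100 periods: {path[-1]:.3f}, steady state: {k_star:.3f}")
```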
These are just a few examples of the many models that have been developed to study the US economy. The choice of model depends on the research question and the available data.
Econometricians use various techniques, such as regression analysis and time series analysis, to estimate these models and make predictions about the economy. The main advantage of econometric theories is the possibility of real-time analysis: based on statistics and history, such models can calculate the dynamics of economic processes and hence the timing of key events.
The main advantage of deductive theories, by contrast, is explanatory power: by tracing cause-and-effect relationships they can predict the possible outcome of the prevailing factors, but they do not answer the question of when it may happen.
The CMI model combines the best of both worlds: it is a synthesis of an econometric approach with a deductive model. A CMI forecast, based on the influencing factors, therefore gives an unambiguously readable result that explains both the situation and the time at which it may occur.
Who is the leader in US economy modelling?
It’s difficult to say who the leader is in US economy modelling, as it is a highly interdisciplinary field that involves input from various institutions and individuals, including economists, statisticians, financial analysts, and experts in machine learning and artificial intelligence. Every hedge fund has its own models. Some of the well-known institutions that engage in economic modelling and forecasting include the Federal Reserve, the International Monetary Fund (IMF), the Organisation for Economic Co-operation and Development (OECD), and private consulting firms such as The Conference Board and Moody’s Analytics. Our research group therefore compares its own CMI model forecasts with those of the rating agencies and the oldest banks.
Limitations of models
The limitations of economic models vary depending on the specific model and the assumptions and methods used in its development. Some common limitations include:
Simplification: Economic models often simplify reality by making assumptions that may not hold in all situations. This can lead to inaccurate predictions and a limited understanding of real-world phenomena.
Data limitations: The accuracy of an economic model depends on the quality and quantity of data used in its development. Models can be limited by the availability and accuracy of data and the difficulty of measuring certain economic variables.
Assumption of stationarity: Many economic models assume that certain underlying processes, such as inflation or interest rates, are stationary. However, this assumption may not hold in all cases, leading to incorrect predictions (see the stationarity-test sketch after this list).
Structural breaks: Economic models may fail to account for sudden changes in the underlying processes that drive economic activity, such as financial crises or natural disasters.
Model specification: Choosing an appropriate specification for an economic model can be challenging, and misspecification can lead to incorrect predictions.
Limited predictive ability: Economic models are often developed to explain past economic behaviour and may not be able to accurately predict future events.
Model uncertainty: The uncertainty surrounding model parameters and model selection can also lead to limitations in the accuracy of economic model predictions.
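As a sketch of how the stationarity assumption above can be checked in practice, here is an augmented Dickey-Fuller test on synthetic data. The statsmodels library is assumed to be available, and both series are illustrative, not real economic indicators.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)

# Two illustrative series: a stationary AR(1) process and a non-stationary random walk.
stationary = np.zeros(500)
for t in range(1, 500):
    stationary[t] = 0.5 * stationary[t - 1] + rng.normal()
random_walk = np.cumsum(rng.normal(size=500))

for name, series in [("AR(1)", stationary), ("random walk", random_walk)]:
    stat, pvalue, *_ = adfuller(series)
    # Small p-value -> reject the unit-root hypothesis -> treat the series as stationary.
    print(f"{name}: ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```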
Our research team understands the limitations of forecasting economic phenomena, and ongoing research aims to improve the accuracy of the CMI model and address these limitations.
How to check economic model accuracy?
Backtesting: A common method of testing economic models is to use historical data to evaluate their accuracy. This involves comparing the predicted values generated by the model to actual observed values.
Out-of-sample testing: This method involves testing the model on data that was not used to develop the model. This helps to determine the accuracy of the model when applied to new and unseen data.
Cross-validation: This method involves dividing the data into multiple sub-samples, repeatedly training the model on all but one sub-sample, and evaluating its accuracy on the held-out portion.
Model comparison: The accuracy of different models can be compared by using the same data and comparing their predictions. The model with the lowest prediction error is generally considered to be the most accurate (see the sketch after this list).
Real-world evaluation: The accuracy of an economic model can be tested by comparing its predictions to real-world events or data. This helps to determine the accuracy of the model in predicting real-world outcomes.
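Here is a minimal sketch of out-of-sample testing and model comparison on a synthetic series, using plain NumPy. The two "models" are deliberately simple placeholders (a naive forecast and a fitted linear trend), not the CMI model or any model described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic monthly series with a trend plus noise (purely illustrative).
t = np.arange(240)
series = 100 + 0.3 * t + rng.normal(0, 2, size=t.size)

# Out-of-sample split: estimate on the first 200 points, evaluate on the last 40.
train, test = series[:200], series[200:]

# Model A: naive forecast (last observed value carried forward).
forecast_naive = np.full(test.size, train[-1])

# Model B: linear trend fitted to the training window only.
coef = np.polyfit(np.arange(train.size), train, deg=1)
forecast_trend = np.polyval(coef, np.arange(train.size, train.size + test.size))

def rmse(actual, predicted):
    return float(np.sqrt(np.mean((actual - predicted) ** 2)))

# The model with the lower out-of-sample error is preferred.
print("naive RMSE:", rmse(test, forecast_naive))
print("trend RMSE:", rmse(test, forecast_trend))
```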
It’s important to keep in mind that no model is perfect, and there are always limitations and uncertainties associated with economic models. The accuracy of an economic model also depends on the quality and quantity of the data used to develop it and on the complexity of the relationships being modelled. The CMI model aggregates several thousand data inputs.
What is nonparametric econometrics?
Nonparametric econometrics is a branch of quantitative economic modelling that lets the data determine the form of the econometric model itself. The prediction variables of interest are estimated with a range of methods (neural networks, nearest-neighbour regression, spline smoothing). The main difference between parametric and nonparametric econometrics is that the latter does not require the functional forms of the estimated relationships to be specified in advance. By weakening parametric assumptions, nonparametric econometrics produces predictions that stay closer to the empirical data.
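As a sketch of one nonparametric estimator mentioned above, here is k-nearest-neighbour regression on synthetic data: no functional form is specified in advance, and all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data from an unknown nonlinear relationship.
x_train = np.sort(rng.uniform(0, 10, size=300))
y_train = np.sin(x_train) + 0.1 * x_train + rng.normal(0, 0.2, size=300)

def knn_regress(x_query: np.ndarray, k: int = 15) -> np.ndarray:
    """Predict y at each query point as the mean of the k nearest training targets."""
    predictions = []
    for xq in x_query:
        nearest = np.argsort(np.abs(x_train - xq))[:k]
        predictions.append(y_train[nearest].mean())
    return np.array(predictions)

x_grid = np.linspace(0, 10, 5)
print(knn_regress(x_grid))   # fitted curve values, no parametric form assumed
```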
An important stage in the formation of econometrics was the construction of economic barometers, based on the idea that some indicators change earlier than others and can therefore serve as signals of coming changes in the latter. The first and most famous was the Harvard Barometer, created in 1903 under the leadership of W. Persons and W. Mitchell; it consisted of curves characterizing the stock, commodity, and money markets.
Until the 1970s, econometrics was understood as an empirical evaluation of models created within the framework of economic theory.
An important event in the development of econometrics was the advent of computers, thanks to which the statistical analysis of time series developed greatly. G. Box and G. Jenkins created the ARIMA model in 1970, and C. Sims and other researchers developed VAR models in the early 1980s. The rapid development of financial markets and derivatives also stimulated econometric research; one example is the approach of J. Tobin, winner of the 1981 Nobel Prize in Economics, to building models with censored data. The Norwegian economist Trygve Magnus Haavelmo treated economic series as realizations of random processes. The main problems that arise when working with such data are non-stationarity and strong volatility: if the variables are non-stationary, there is a risk of establishing a connection where there is none (a spurious regression). One solution to this problem is the transition from the levels of the series to their differences. In 1989, Haavelmo was awarded the Nobel Prize in Economics “for his clarification of the probability theory foundations of econometrics and his analyses of simultaneous economic structures.”
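Here is a minimal sketch of fitting a Box-Jenkins ARIMA model, where the differencing step inside the model handles a non-stationary, random-walk-like series. The statsmodels library is assumed to be available and the data are synthetic.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)

# Synthetic non-stationary series: a random walk with drift.
y = np.cumsum(0.1 + rng.normal(size=300))

# ARIMA(1, 1, 1): the middle "1" differences the series once before modelling it.
fit = ARIMA(y, order=(1, 1, 1)).fit()
print(fit.summary())
print(fit.forecast(steps=5))   # out-of-sample forecast of the next 5 observations
```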
The disadvantage of differencing is the difficulty of economically interpreting the results obtained. To address this problem, Clive Granger introduced the concept of cointegration: a stationary combination of non-stationary variables. He proposed the error correction model (ECM) and developed methods for estimating, generalizing, and testing its parameters. Cointegration applies when the short-term dynamics reflect significant destabilizing factors while the long-term dynamics tend toward economic equilibrium. Granger’s models were generalized to the multivariate case by S. Johansen in 1990. In 2003, Granger, together with R. Engle, received the Nobel Prize. R. Engle, in turn, is known as the creator of models with time-varying volatility (the so-called ARCH models), which have become widespread in financial markets.
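As a sketch of the Engle-Granger idea described above, here is a cointegration test on two synthetic non-stationary series that share a common stochastic trend. The statsmodels library is assumed to be available, and the data are illustrative only.

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(5)

# A common stochastic trend drives both series, so each is non-stationary
# on its own, but a linear combination of them is stationary (cointegration).
common_trend = np.cumsum(rng.normal(size=500))
y1 = common_trend + rng.normal(0, 0.5, size=500)
y2 = 2.0 * common_trend + rng.normal(0, 0.5, size=500)

stat, pvalue, _ = coint(y1, y2)
# A small p-value suggests the two series are cointegrated.
print(f"Engle-Granger statistic = {stat:.2f}, p-value = {pvalue:.3f}")
```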