Open Access
Sustainability, volume 17, issue 3, article 843

Development of per Capita GDP Forecasting Model Using Deep Learning: Including Consumer Goods Index and Unemployment Rate

Publication type: Journal Article
Publication date: 2025-01-21
Journal: Sustainability
Scimago: Q1
WoS: Q2
SJR: 0.672
CiteScore: 6.8
Impact factor: 3.3
ISSN: 2071-1050
Abstract

In the 21st century, the increasing complexity and uncertainty of the global economy have heightened the need for accurate economic forecasting. Per capita GDP, a critical indicator of living standards, economic growth, and productivity, plays a key role in government policy-making, corporate strategy, and investor decisions. However, predicting per capita GDP poses significant challenges due to its sensitivity to various economic and social factors. Traditional methods such as statistical analysis, regression, and time-series models have shown limitations in capturing the nonlinear interactions and volatility of economic data. To address these limitations, this study develops a per capita GDP forecasting model based on deep learning, incorporating key macroeconomic variables, the Consumer Price Index (CPI) and unemployment rate (UR), to enhance predictive accuracy. The study employs five deep-learning regression models (RNN, LSTM, GRU, TCN, and Transformer) applied to real and placebo datasets, each incorporating combinations of CPI and UR. The results demonstrate that deep learning models can effectively capture complex, nonlinear relationships in economic data, significantly improving predictive accuracy compared to traditional models. Among the models, the Transformer consistently achieves the highest R-squared and the lowest error values across metrics (MSE, RMSE, and MSLE), indicating its superior ability to model intricate economic patterns. In addition, including CPI and UR as additional predictors enhances model robustness, with the TCN and Transformer models showing particularly strong performance in capturing short-term economic fluctuations. The findings suggest that deep learning models, especially the Transformer, offer valuable tools for policymakers and business leaders, providing reliable GDP forecasts that support economic decision-making, resource allocation, and strategic planning.
Academically, this study advances the understanding of deep learning applications in economic forecasting, particularly in integrating significant macroeconomic variables for enhanced predictive performance. The developed model is a foundation for informed economic policy and strategic decisions, offering a robust and actionable framework for managing economic uncertainties. This research contributes to theoretical and applied economics, providing insights that bridge academic innovation with practical utility in economic forecasting.
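The evaluation metrics the abstract names (MSE, RMSE, MSLE, and R-squared) can be sketched with NumPy. This is an illustrative computation of the metrics only, not the paper's code; the function name and the example values are ours.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return the four metrics named in the abstract for one model's forecasts."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    rmse = np.sqrt(mse)
    # MSLE assumes non-negative values, as with per capita GDP levels
    msle = np.mean((np.log1p(y_true) - np.log1p(y_pred)) ** 2)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return {"MSE": mse, "RMSE": rmse, "MSLE": msle, "R2": r2}
```

Comparing the five architectures then amounts to computing this dictionary on each model's held-out forecasts and ranking by R-squared and the error terms.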

Adewale M.D., Ebem D.U., Awodele O., Sambo-Magaji A., Aggrey E.M., Okechalu E.A., Donatus R.E., Olayanju K.A., Owolabi A.F., Oju J.U., Ubadike O.C., Otu G.A., Muhammed U.I., Danjuma O.R., Oluyide O.P.
2024-12-01 citations by CoLab: 2
Balderas L., Lastra M., Benítez J.M.
2024-09-11 citations by CoLab: 3
Time series forecasting is undoubtedly a key area in machine learning due to the numerous fields where it is crucial to estimate future data points of sequences based on a set of previously observed values. Deep learning has been successfully applied to this area. On the other hand, growing concerns about the steady increase in the amount of resources required by deep learning-based tools have made Green AI gain traction as a move towards making machine learning more sustainable. In this paper, we present a deep learning-based time series forecasting methodology called GreeNNTSF, which aims to reduce the size of the resulting model, thereby diminishing the associated computational and energetic costs without giving up adequate forecasting performance. The methodology, based on the ODF2NNA algorithm, produces models that outperform state-of-the-art techniques not only in terms of prediction accuracy but also in terms of computational costs and memory footprint. To prove this claim, after presenting the main state-of-the-art methods that utilize deep learning for time series forecasting and introducing our methodology we test GreeNNTSF on a selection of real-world forecasting problems that are commonly used as benchmarks, such as SARS-CoV-2 and PhysioNet (medicine), Brazilian Weather (climate), WTI and Electricity (economics), and Traffic (smart cities). The results of each experiment conducted objectively demonstrate, rigorously following the experimentation presented in the original papers that addressed these problems, that our method is more competitive than other state-of-the-art approaches, producing more accurate and efficient models.
Jadon A., Patil A., Jadon S.
2024-07-27 citations by CoLab: 17
Time Series Forecasting has been an active area of research due to its many applications ranging from network usage prediction, resource allocation, anomaly detection, and predictive maintenance. Numerous publications published in the last five years have proposed diverse sets of objective loss functions to address cases such as biased data, long-term forecasting, and multicollinear features. In this paper, we have summarized well-known 14 regression loss functions commonly used for time series forecasting and listed out the circumstances where their application can aid in faster and better model convergence. We have also demonstrated how certain categories of loss functions perform well across all datasets and can be considered as a baseline objective function in circumstances where the distribution of the data is unknown. Our code is available on GitHub .
Atif D.
Computational Economics scimago Q2 wos Q2
2024-07-16 citations by CoLab: 2
Accurate long-term forecasting of Gross Domestic Product (GDP) is crucial for informed policy-making and strategic economic decisions. This research paper compares two hybrid forecasting models: ARIMA-LSTM and ARIMA-TCN. We also introduce an innovative methodology where linear and non-linear GDP components are fed into dense regression layers to enhance forecast accuracy. By combining the strengths of linear autoregressive integrated moving average (ARIMA) models with the memory-retaining capabilities of long short-term memory (LSTM) networks and temporal convolutional networks (TCN), we create hybrid architectures that capture diverse patterns in GDP time series. Additionally, dense regression is utilized to learn the optimal combination of components to improve accuracy further. Our empirical analysis involves extensive experimentation on real-world GDP datasets, assessing the models’ predictive capabilities in long-term forecasting through evaluation metrics such as MAE and RMSE. The investigation reveals that the ARIMA-LSTM hybrid model outperforms other models, demonstrating a superior ability to minimize significant errors in the presence of heteroskedastic innovations. These findings underscore the importance of hybridizing ARIMA and LSTM to enhance GDP predictive accuracy in volatile economies.
Chen S., Huang Y., Ge L.
2024-03-15 citations by CoLab: 3
The widespread and substantial effect of the global financial crisis in history underlines the importance of forecasting financial crisis effectively. In this paper, we propose temporal convolutional network (TCN), which based on a convolutional neural network, to construct an early warning system for financial crises. The proposed TCN is compared with logit model and other deep learning models. The Shapley value decomposition is calculated for the interpretability of the early warning system. Experimental results show that the proposed TCN outperforms other models, and the stock price and the real GDP growth have the largest contributions in the crises prediction.
Shams M.Y., Tarek Z., El-kenawy E.M., Eid M.M., Elshewey A.M.
Computational Urban Science scimago Q1 wos Q2 Open Access
2024-01-29 citations by CoLab: 14
Gross Domestic Product (GDP) is significant for measuring the strength of national and global economies in urban profiling areas. GDP is significant because it provides information on the size and performance of an economy. The real GDP growth rate is frequently used to indicate the economy’s health. This paper proposes a new model called Pearson Correlation-Long Short-Term Memory-Recurrent Neural Network (PC-LSTM-RNN) for predicting GDP in urban profiling areas. Pearson correlation is used to select the important features strongly correlated with the target feature. This study employs two separate datasets, denoted as Dataset A and Dataset B. Dataset A comprises 227 instances and 20 features, with 70% utilized for training and 30% for testing purposes. On the other hand, Dataset B consists of 61 instances and 4 features, encompassing historical GDP growth data for India from 1961 to 2021. To enhance GDP prediction performance, we implement a parameter transfer approach, fine-tuning the parameters learned from Dataset A on Dataset B. Moreover, in this study, a preprocessing stage that includes median imputation and data normalization is performed. Mean Square Error, Mean Absolute Error, Root Mean Square Error, Mean Absolute Percentage Error, Median Absolute Error, and determination coefficient (R2) evaluation metrics are utilized in this study to demonstrate the performance of the proposed model. The experimental results demonstrated that the proposed model gave better results than other regression models used in this study. Also, the results show that the proposed model achieved the highest results for R2, with 99.99%. This paper addresses a critical research gap in the domain of GDP prediction through artificial intelligence (AI) algorithms. While acknowledging the widespread application of such algorithms in forecasting GDP, the proposed model introduces distinctive advantages over existing approaches, using PC-LSTM-RNN, which achieves a high R2 with minimal error rates.
Abakah E.J., Hossain S., Abdullah M., Goodell J.W.
Finance Research Letters scimago Q1 wos Q1
2024-01-01 citations by CoLab: 11
We investigate the interconnectedness between the US electricity market and cryptocurrency, NFTs, and DeFi markets, while considering the conditioning effect of uncertainty factors. We employ an R2 connectedness approach to analyze the shock transmission mechanism and results reveal significant connectedness among these markets with varying degrees of asymmetry in return spillovers. Notably, the US electricity market acts as a receiver of shocks. Further, global uncertainty factors positively influence interconnectedness. Findings provide valuable insights for policymakers and market participants to manage risks and promote a sustainable and resilient ecosystem between the blockchain and electricity markets.
Han Y., Tian Y., Yu L., Gao Y.
Neurocomputing scimago Q1 wos Q1
2023-10-01 citations by CoLab: 9
Although helpful in reducing the uncertainty associated with economic activities, economic forecasting often suffers from low accuracy. Recognizing the high compatibility between deep learning and the nonlinear characteristics of socioeconomic systems, in this paper, we introduce state-of-the-art temporal fusion transformers (TFTs) into the field of economic system forecasting and predict the performance of the Chinese macroeconomic system. Based on an extended analysis of gross final product (GFP) and the intertemporal dynamic relationship between demand-side indicators and output indicators, we establish a scientific economic forecasting framework. To summarize the forecasting characteristics of the TFT algorithm, we compare its one-step and three-step modeling effects in forecasting output indicators with a series of representative benchmark models. According to our proposed four-dimensional evaluation system, the forecasts for China’s macroeconomic system provided by the TFT model have obvious advantages in terms of overall stability, forecasting efficiency, reduction of numerical and timing errors, direction accuracy, and turning point accuracy. The forecast results show that China’s economy faces a risk of slowing growth in the post-pandemic period.
Zhang X., Liu C.
Journal of Econometrics scimago Q1 wos Q1
2023-07-01 citations by CoLab: 112
This paper considers the model averaging prediction in a quasi-likelihood framework that allows for parameter uncertainty and model misspecification. We propose an averaging prediction that selects the data-driven weights by minimizing a K -fold cross-validation. We provide two theoretical justifications for the proposed method. First, when all candidate models are misspecified, we show that the proposed averaging prediction using K -fold cross-validation weights is asymptotically optimal in the sense of achieving the lowest possible prediction risk. Second, when the model set includes correctly specified models, we demonstrate that the proposed K -fold cross-validation asymptotically assigns all weights to the correctly specified models. Monte Carlo simulations show that the proposed averaging prediction achieves lower empirical risk than other existing model averaging methods. As an empirical illustration, the proposed method is applied to credit card default prediction.
Srinivasan N., M K., V N., S M K., Kumar S., R S.
2023-03-17 citations by CoLab: 5
Zema T., Kozina A., Sulich A., Römer I., Schieck M.
2022-10-19 citations by CoLab: 10
The usage of machine learning methods in the financial sector, regarding repayment prediction or forecasting, is quite a new topic, constantly gaining in importance. The concept of the alternative costs in the literature covering machine learning and deep learning occurs most often in connection with the non-financial areas as costs of lost benefits. This empirical paper presents research dedicated to deep learning used in forecasting the alternative costs of leasing represented by the variable KUK_PRC. The study is based on the experimental approach and uses real organization data to solve the forecasting problems in the financial area with AI solutions. This research contributes to the science by identifying and exploration of the research gap in the field of applied economics and finances. The main finding of this paper is the proposed forecasting ACSeq-DNN model that forecasts opportunity costs with smaller deviations from actual values than the forecasting achieved by state-of-the-art models.
Ferrara L., Simoni A.
2022-10-10 citations by CoLab: 11
Alternative data sets are widely used for macroeconomic nowcasting together with machine learning--based tools. The latter are often applied without a complete picture of their theoretical nowcasting properties. Against this background, this paper proposes a theoretically grounded nowcasting methodology that allows researchers to incorporate alternative Google Search Data (GSD) among the predictors and that combines targeted preselection, Ridge regularization, and Generalized Cross Validation. Breaking with most existing literature, which focuses on asymptotic in-sample theoretical properties, we establish the theoretical out-of-sample properties of our methodology and support them by Monte-Carlo simulations. We apply our methodology to GSD to nowcast GDP growth rate of several countries during various economic periods. Our empirical findings support the idea that GSD tend to increase nowcasting accuracy, even after controlling for official variables, but that the gain differs between periods of recessions and of macroeconomic stability.
Prusty S., Patnaik S., Dash S.K.
Frontiers in Nanotechnology scimago Q2 wos Q2 Open Access
2022-08-19 citations by CoLab: 114
Cancer is the unregulated development of abnormal cells in the human body system. Cervical cancer, also known as cervix cancer, develops on the cervix’s surface. This causes an overabundance of cells to build up, eventually forming a lump or tumour. As a result, early detection is essential to determine what effective treatment we can take to overcome it. Therefore, the novel Machine Learning (ML) techniques come to a place that predicts cervical cancer before it becomes too serious. Furthermore, four common diagnosis testing namely, Hinselmann, Schiller, Cytology, and Biopsy have been compared and predicted with four common ML models, namely Support Vector Machine (SVM), Random Forest (RF), K-Nearest Neighbors (K-NNs), and Extreme Gradient Boosting (XGB). Additionally, to enhance the better performance of ML models, the Stratified k-fold cross-validation (SKCV) method has been implemented over here. The findings of the experiments demonstrate that utilizing an RF classifier for analyzing the cervical cancer risk, could be a good alternative for assisting clinical specialists in classifying this disease in advance.
Vu H.L., Ng K.T., Richter A., An C.
2022-06-01 citations by CoLab: 110
The use of machine learning techniques in waste management studies is increasingly popular. Recent literature suggests k-fold cross validation may reduce input dataset partition uncertainties and minimize overfitting issues. The objectives are to quantify the benefits of k-fold cross validation for municipal waste disposal prediction and to identify the relationship of testing dataset variance on predictive neural network model performance. It is hypothesized that the dataset characteristics and variances may dictate the necessity of k-fold cross validation on neural network waste model construction. Seven RNN-LSTM predictive models were developed using historical landfill waste records and climatic and socio-economic data. The performance of all trials was acceptable in the training and validation stages, with MAPE all less than 10%. In this study, the 7-fold cross validation reduced the bias in selection of testing sets as it helps to reduce MAPE by up to 44.57%, MSE by up to 54.15%, and increased R value by up to 8.33%. Correlation analysis suggests that fewer outliers and less variance of the testing dataset correlated well with lower modeling error. The length of the continuous high waste season and length of total high waste period appear not important to the model performance. The result suggests that k-fold cross validation should be applied to testing datasets with higher variances. The use of MSE as an evaluation index is recommended.
Li Q., Yu C., Yan G.
IEEE Access scimago Q1 wos Q2 Open Access
2022-04-28 citations by CoLab: 14
Gross domestic product (GDP) can effectively reflect the situation of economic development and resource allocation in different regions. The high-precision GDP prediction technology lays a foundation for the sustainable development of regional resources and the proposal of economic management policies. To build an accurate GDP prediction model, this paper proposed a new multi-predictor ensemble decision framework based on deep reinforcement learning. Overall modeling consists of the following steps: Firstly, GRU, TCN, and DBN are the main predictors to train three GDP forecasting models with their characteristics. Then, the DQN algorithm effectively analyses the adaptability of these three neural networks to different GDP datasets to obtain an ensemble model. Finally, by adaptive optimization of the ensemble weight coefficients of these three neural networks, the DQN algorithm got the final GDP prediction results. Through three groups of experimental cases from China, the following conclusions can be drawn: (1) the DQN algorithm can obtain excellent experimental results in ensemble learning, which effectively improves the prediction performance of single predictors by more than 10 %. (2) The ensemble multi-predictor region GDP prediction framework based on deep reinforcement learning can achieve better prediction results than 18 benchmark models. In addition, the MAPE value of the proposed model is lower than 4.2% in all cases.
