Global Renewable Energy Consumption Forecasting: A Comparative Benchmarking Study of Statistical, Machine Learning, and Deep Learning Models
Abstract
Energy independence is a critical component of national sovereignty and economic security. Because fossil fuel resources are geographically concentrated and global energy markets are strongly influenced by geopolitical dynamics, many countries are increasingly transitioning toward renewable energy sources. Renewable energy not only enhances energy security but also contributes to global climate objectives, including United Nations Sustainable Development Goal 7 (SDG 7) and the International Energy Agency's Net-Zero Emissions scenario. In this context, accurate forecasting of renewable energy consumption is essential for energy policy planning, infrastructure investment, and monitoring the progress of the global energy transition. This study presents a rigorous comparative benchmarking of four forecasting approaches, Autoregressive Integrated Moving Average (ARIMA), eXtreme Gradient Boosting (XGBoost), Long Short-Term Memory networks (LSTM), and Transformer, applied to annual renewable energy consumption data sourced from the World Bank (indicator EG.FEC.RNEW.ZS) across 11 aggregate regions and income groups. The dataset spans 61 annual observations (56 training, 5 test) covering 1960–2020; the years 2021 and 2022 were excluded due to incomplete reporting. Each model undergoes automated hyperparameter optimisation, and predictive accuracy is evaluated on the held-out test period (2016–2020) using Root Mean Squared Error (RMSE). The World aggregate renewable energy share ranged from 16.54% (2007) to 19.74% (2020), and the Augmented Dickey-Fuller test indicates non-stationarity (ADF = 0.5240, p = 0.9856; the unit-root null cannot be rejected). Results show that deep learning models outperform classical baselines: LSTM achieves the best test RMSE of 0.7286, followed by Transformer (0.8938), ARIMA (1.2294), and XGBoost (1.2518). Notably, the Transformer achieved a lower validation RMSE (0.1567) than the LSTM (0.1963) during tuning yet generalised less effectively to the test period, indicating overfitting under limited-data conditions. The champion LSTM model is subsequently retrained on each of the 11 regions and used to generate 20-year forecasts (2021–2040), revealing divergent regional energy transition trajectories. Deep learning training was hardware-accelerated throughout via Apple Metal Performance Shaders (MPS) on PyTorch.
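The 56/5 chronological split and RMSE evaluation described in the abstract can be sketched as follows. The series below is a hypothetical linear stand-in for the World aggregate renewable share, not the actual World Bank indicator values, and the naive last-value forecast is only a floor against which the four models would be compared:

```python
import math

# Hypothetical upward-trending series standing in for the World aggregate
# renewable energy share (the real data is World Bank EG.FEC.RNEW.ZS, 1960-2020).
series = [16.5 + 0.05 * t for t in range(61)]  # 61 annual observations

# Chronological split mirroring the paper's protocol:
# 56 training years (1960-2015), 5 held-out test years (2016-2020).
train, test = series[:56], series[56:]

# Naive last-value forecast: repeat the final training observation.
pred = [train[-1]] * len(test)

# Root Mean Squared Error, the paper's evaluation metric.
rmse = math.sqrt(sum((y - p) ** 2 for y, p in zip(test, pred)) / len(test))
```

Any candidate model (ARIMA, XGBoost, LSTM, Transformer) would replace `pred` with its own 2016–2020 forecasts and be ranked by the same `rmse` value.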
Keywords: Renewable energy forecasting; LSTM; Transformer; ARIMA; XGBoost; time series benchmarking; deep learning; World Bank; energy transition; PyTorch MPS
DOI: 10.7176/CEIS/17-1-05
Publication date: March 28th, 2026
Paper submission email: CEIS@iiste.org
ISSN (Paper) 2222-1727  ISSN (Online) 2222-2863
This journal follows the ISO 9001 management standard and is licensed under a Creative Commons Attribution 3.0 License.
Copyright © www.iiste.org
Computer Engineering and Intelligent Systems