SPERT: Reinforcement Learning-Enhanced Transformer Model for Agile Story Point Estimation
Story point estimation is a key practice in Agile project management that assigns effort values to user stories, helping teams manage workloads effectively. Inaccurate story point estimation can lead to project delays, resource misallocation and budget overruns. This study introduces Story Point Estimation using Reinforced Transformers (SPERT), a novel model that integrates transformer-based embeddings with reinforcement learning (RL) to improve the accuracy of story point estimation. SPERT uses Bidirectional Encoder Representations from Transformers (BERT) embeddings to capture deep semantic relationships within user stories, while the RL component dynamically refines predictions based on project feedback. We evaluate SPERT across multiple Agile projects and benchmark its performance against state-of-the-art models, including SBERT-XG, LHC-SE, Deep-SE and TF-IDF-SE. Results demonstrate that SPERT outperforms these models in terms of Mean Absolute Error (MAE), Median Absolute Error (MdAE) and Standardized Accuracy (SA). Statistical analysis using Wilcoxon tests and the A12 effect size confirms that SPERT’s gains are statistically significant, highlighting its ability to generalize across diverse projects and improve estimation accuracy in Agile environments.
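The evaluation metrics named above (MAE, MdAE, SA) are standard in effort-estimation research. As a minimal sketch of how they are conventionally computed, the following uses the usual random-guessing baseline for SA; the sample story-point values are illustrative, not data from this study:

```python
import random
import statistics

def mae(actual, predicted):
    """Mean Absolute Error: average magnitude of estimation errors."""
    return statistics.mean(abs(a - p) for a, p in zip(actual, predicted))

def mdae(actual, predicted):
    """Median Absolute Error: robust to a few badly misestimated stories."""
    return statistics.median(abs(a - p) for a, p in zip(actual, predicted))

def sa(actual, predicted, runs=1000, seed=0):
    """Standardized Accuracy: SA = (1 - MAE / MAE_p0) * 100, where MAE_p0
    is the mean MAE of a random-guessing baseline that predicts a value
    sampled uniformly from the observed actuals."""
    rng = random.Random(seed)
    baseline = statistics.mean(
        mae(actual, [rng.choice(actual) for _ in actual]) for _ in range(runs)
    )
    return (1 - mae(actual, predicted) / baseline) * 100

# Illustrative actual vs. predicted story points (Fibonacci-style scale)
actual = [1, 2, 3, 5, 8, 13]
predicted = [1, 3, 3, 5, 8, 10]

print(round(mae(actual, predicted), 2))   # 0.67
print(mdae(actual, predicted))            # 0.0
print(sa(actual, predicted) > 0)          # True: better than random guessing
```

Higher SA indicates a larger improvement over random guessing, which is why it complements the scale-dependent MAE and MdAE when comparing models across projects of different sizes.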