Explainable Deep Learning for COVID-19 Vaccine Sentiment in Arabic Tweets Using Multi-Self-Attention BiLSTM with XLNet
The COVID-19 pandemic has generated a vast corpus of online conversations about vaccines, predominantly on social media platforms such as X (formerly Twitter). Analyzing sentiment in Arabic text, however, is challenging due to the diversity of dialects and the scarcity of sentiment analysis resources for the Arabic language. This paper proposes an explainable Deep Learning (DL) approach for sentiment analysis of Arabic tweets related to COVID-19 vaccination. The approach combines a Bidirectional Long Short-Term Memory (BiLSTM) network with a Multi-Self-Attention (MSA) mechanism: the BiLSTM learns the sequential structure of Arabic text, while the attention mechanism captures long-range contextual dependencies within tweets. XLNet embeddings supply contextual information to the model. Two Explainable Artificial Intelligence (XAI) methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are then employed to quantify each feature's contribution to the model's predictions and thereby provide a reasoned interpretation of its output. Experimental results indicate that the combined XLNet-BiLSTM model outperforms the other implemented state-of-the-art methods, achieving an accuracy of 93.2% and an average F-measure of 92% for sentiment classification. The integration of LIME and SHAP not only enhances the model's interpretability but also yields detailed insights into the factors driving each sentiment classification. These findings underscore the model's effectiveness and reliability for sentiment analysis in low-resource languages such as Arabic.
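To make the MSA step concrete, the following is a minimal NumPy sketch of scaled dot-product multi-head self-attention applied over a sequence of hidden states, as would be produced by the BiLSTM layer. All names, dimensions, and the random projection matrices are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(H, Wq, Wk, Wv, n_heads):
    """H: (seq_len, d_model) hidden states (e.g., BiLSTM outputs).
    Wq, Wk, Wv: (d_model, d_model) learned projections (random here)."""
    seq_len, d_model = H.shape
    d_head = d_model // n_heads
    # Project to queries/keys/values and split into heads: (heads, seq, d_head).
    Q = (H @ Wq).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    K = (H @ Wk).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    V = (H @ Wv).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    # Scaled dot-product attention per head: (heads, seq, seq).
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)
    A = softmax(scores, axis=-1)  # rows of A are attention distributions
    # Weighted sum of values, heads concatenated back to (seq, d_model).
    out = (A @ V).transpose(1, 0, 2).reshape(seq_len, d_model)
    return out, A

rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 6, 8, 2  # toy sizes for illustration
H = rng.normal(size=(seq_len, d_model))  # stand-in for BiLSTM outputs
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, A = multi_head_self_attention(H, Wq, Wk, Wv, n_heads)
```

In the full pipeline described above, `H` would come from the BiLSTM over XLNet token embeddings, and `out` would be pooled and passed to the sentiment classifier; the attention weights `A` are also the kind of quantity that LIME and SHAP explanations can be cross-checked against.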