Open Access
pages 253-260
Assessing Large Language Models Effectiveness in Outdated Method Renaming
Publication type: Book Chapter
Publication date: 2024-12-07
scimago Q2
SJR: 0.352
CiteScore: 2.4
Impact factor: —
ISSN: 0302-9743, 1611-3349, 1861-2075, 1861-2083
Abstract
Identifying effective techniques for automatically renaming methods after code modifications is crucial for maintaining developer productivity and for the performance of source code analysis tools. In this study, we benchmark the effectiveness of the ChatGPT large language model (LLM) in predicting new method names after code modifications. Leveraging a dataset of method code snippets along with their original and modified names, we conducted experiments on 116 samples to assess ChatGPT's prediction accuracy. Using Jaccard similarity as the metric, we varied the similarity threshold to evaluate the classification performance of the predicted names. However, Jaccard similarity retains neither the magnitude nor the direction of the underlying vectors, which reflect the strength and polarity of the similarity, and it ignores word order and context, so it can miss syntactic or semantic variations. To address this, we propose an additional validation process that detects not only whether an LLM captured the semantic changes of a method but also its structural changes, so that a suitable name can be generated for the given method. Our results indicate that ChatGPT achieves a high success rate in predicting method names: 98% (resp. 94%) when the threshold is set to 0.5 for the Cosine (resp. Jaccard) similarity. For a threshold of 1 (maximum similarity), ChatGPT maintains notable performance, with 49% (resp. 74%) for the Cosine (resp. Jaccard) similarity. This demonstrates the potential of ChatGPT for automating method renaming tasks in software development workflows.
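To illustrate the two metrics discussed in the abstract, the sketch below computes Jaccard similarity over subword token sets and cosine similarity over token-count vectors of two method names. The tokenization scheme (splitting camelCase and snake_case) and the token-count vector representation are illustrative assumptions, not necessarily the exact setup used in the paper.

```python
import math
import re
from collections import Counter

def tokenize(name: str) -> list[str]:
    """Split a method name into lowercase subword tokens (camelCase / snake_case)."""
    spaced = re.sub(r"([a-z0-9])([A-Z])", r"\1 \2", name).replace("_", " ")
    return [t.lower() for t in spaced.split()]

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the token *sets* of two names (order-insensitive)."""
    ta, tb = set(tokenize(a)), set(tokenize(b))
    if not ta and not tb:
        return 1.0
    return len(ta & tb) / len(ta | tb)

def cosine(a: str, b: str) -> float:
    """Cosine similarity between token-count vectors of the two names."""
    ca, cb = Counter(tokenize(a)), Counter(tokenize(b))
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    if na == 0.0 or nb == 0.0:
        return 0.0
    return dot / (na * nb)

print(jaccard("getUserName", "get_user_name"))  # 1.0: identical token sets
print(jaccard("getUserName", "fetchUserId"))    # 0.2: only "user" is shared
```

Note that, as the abstract points out, both set-based Jaccard and this bag-of-tokens cosine ignore token order; the paper's proposed validation process addresses that limitation separately.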
Metrics
Total citations: 0
Cite this
GOST
Ben Mrad A. et al. Assessing Large Language Models Effectiveness in Outdated Method Renaming // Lecture Notes in Computer Science. 2024. pp. 253-260.
GOST (all authors, up to 50)
Ben Mrad A., O. Thiombiano A. M., Mkaouer M. W., Hnich B. Assessing Large Language Models Effectiveness in Outdated Method Renaming // Lecture Notes in Computer Science. 2024. pp. 253-260.
RIS
TY  - CHAP
DO - 10.1007/978-981-96-0805-8_18
UR - https://link.springer.com/10.1007/978-981-96-0805-8_18
TI - Assessing Large Language Models Effectiveness in Outdated Method Renaming
T2 - Lecture Notes in Computer Science
AU - Ben Mrad, Ali
AU - O. Thiombiano, Abdoul Majid
AU - Mkaouer, Mohamed Wiem
AU - Hnich, Brahim
PY - 2024
DA - 2024/12/07
PB - Springer Nature
SP  - 253
EP  - 260
SN - 0302-9743
SN - 1611-3349
SN - 1861-2075
SN - 1861-2083
ER -
BibTeX (up to 50 authors)
@incollection{2024_BenMrad,
  author    = {Ali Ben Mrad and Abdoul Majid O. Thiombiano and Mohamed Wiem Mkaouer and Brahim Hnich},
  title     = {Assessing Large Language Models Effectiveness in Outdated Method Renaming},
  booktitle = {Lecture Notes in Computer Science},
  publisher = {Springer Nature},
  year      = {2024},
  month     = {dec},
  pages     = {253--260},
  doi       = {10.1007/978-981-96-0805-8_18}
}