Open Access
Volume 2, Issue 1, Article No. 36

An empirical study of LLaMA3 quantization: from LLMs to MLLMs

Publication type: Journal Article
Publication date: 2024-12-30
CiteScore: 4.0
ISSN: 2731-9008, 2097-3330
Abstract

The LLaMA family, a collection of foundation language models ranging from 7B to 65B parameters, has become one of the most powerful open-source large language model (LLM) families and a popular LLM backbone for multi-modal large language models (MLLMs), widely used in computer vision and natural language understanding tasks. In particular, the recently released LLaMA3 models have achieved impressive performance across various domains thanks to super-large-scale pre-training on over 15T tokens of data. Given the wide application of low-bit quantization for LLMs in resource-constrained scenarios, we explore LLaMA3's capabilities when quantized to low bit-widths. This exploration can potentially provide new insights and challenges for the low-bit quantization of LLaMA3 and other future LLMs, especially in addressing the performance degradation issues that arise in LLM compression. Specifically, we comprehensively evaluate 10 existing post-training quantization and LoRA fine-tuning (LoRA-FT) methods on LLaMA3 at 1-8 bits and on various datasets to reveal its low-bit quantization performance. To uncover the capabilities of low-bit quantized MLLMs, we also assess the performance of the LLaMA3-based LLaVA-Next-8B model under ultra-low bit-widths (2-4 bits) with post-training quantization methods. Our experimental results indicate that LLaMA3 still suffers from non-negligible degradation in linguistic and visual contexts, particularly under ultra-low bit-widths. This highlights the significant performance gap at low bit-widths that needs to be addressed in future developments. We expect this empirical study to prove valuable in advancing future models, driving LLMs and MLLMs toward higher accuracy at lower bit-widths and greater practicality.
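
For readers unfamiliar with the quantization settings the abstract refers to, the sketch below shows plain round-to-nearest (RTN) per-channel weight quantization, the simplest post-training baseline such studies typically compare against. It is an illustrative example only, not the exact pipeline evaluated in the paper; the function name, layer shape, and 4-bit setting are assumptions made for the demonstration.

```python
# Minimal sketch (assumed example, not the paper's pipeline): symmetric
# per-output-channel round-to-nearest weight quantization in PyTorch.
import torch

def quantize_rtn(weight: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    """Fake-quantize a 2-D weight matrix to n_bits with per-channel scales."""
    qmax = 2 ** (n_bits - 1) - 1                       # e.g. 7 for signed 4-bit
    scale = weight.abs().amax(dim=1, keepdim=True) / qmax
    scale = scale.clamp(min=1e-8)                      # guard against all-zero rows
    q = torch.clamp(torch.round(weight / scale), -qmax - 1, qmax)
    return q * scale                                   # de-quantized weights for evaluation

# Hypothetical usage: quantize one 4096x4096 linear layer to 4 bits
w = torch.randn(4096, 4096)
w_q = quantize_rtn(w, n_bits=4)
print((w - w_q).abs().mean())                          # mean absolute quantization error
```

The returned tensor is a "fake-quantized" copy (quantize then de-quantize), which is how post-training quantization quality is usually measured before committing to a packed low-bit storage format.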


Top-30

Journals

  • Information Sciences: 1 publication, 7.69%
  • Neural Networks: 1 publication, 7.69%
  • ACM Transactions on Internet of Things: 1 publication, 7.69%
  • Communications in Computer and Information Science: 1 publication, 7.69%

Publishers

  • Institute of Electrical and Electronics Engineers (IEEE): 7 publications, 53.85%
  • Association for Computing Machinery (ACM): 3 publications, 23.08%
  • Elsevier: 2 publications, 15.38%
  • Springer Nature: 1 publication, 7.69%
  • We do not take into account publications without a DOI.
  • Statistics recalculated weekly.

Metrics: 13
Cite this

GOST
Huang W. et al. An empirical study of LLaMA3 quantization: from LLMs to MLLMs // Visual Intelligence. 2024. Vol. 2. No. 1. 36
GOST (all authors, up to 50)
Huang W., Zheng X., Ma X., Qin H., Lv C., Chen H., Luo J., Qi X., Liu X., Magno M. An empirical study of LLaMA3 quantization: from LLMs to MLLMs // Visual Intelligence. 2024. Vol. 2. No. 1. 36
RIS
TY - JOUR
DO - 10.1007/s44267-024-00070-x
UR - https://link.springer.com/10.1007/s44267-024-00070-x
TI - An empirical study of LLaMA3 quantization: from LLMs to MLLMs
T2 - Visual Intelligence
AU - Huang, Wei
AU - Zheng, Xingyu
AU - Ma, Xudong
AU - Qin, Haotong
AU - Lv, Chengtao
AU - Chen, Hong
AU - Luo, Jie
AU - Qi, Xiaojuan
AU - Liu, Xianglong
AU - Magno, Michele
PY - 2024
DA - 2024/12/30
PB - Springer Nature
IS - 1
VL - 2
SN - 2731-9008
SN - 2097-3330
ER -
BibTeX
@article{2024_Huang,
author = {Wei Huang and Xingyu Zheng and Xudong Ma and Haotong Qin and Chengtao Lv and Hong Chen and Jie Luo and Xiaojuan Qi and Xianglong Liu and Michele Magno},
title = {An empirical study of LLaMA3 quantization: from LLMs to MLLMs},
journal = {Visual Intelligence},
year = {2024},
volume = {2},
publisher = {Springer Nature},
month = {dec},
url = {https://link.springer.com/10.1007/s44267-024-00070-x},
number = {1},
pages = {36},
doi = {10.1007/s44267-024-00070-x}
}