J-Tucker: Joint Compression Scheme for Efficient Deployment of Multi-Task Deep Learning Models on Edge Devices
Publication type: Journal Article
Publication date: 2025-03-01
Scimago: Q1
WoS: Q1
SJR: 2.516
CiteScore: 16.2
Impact factor: 6.3
ISSN: 0890-8044, 1558-156X
Abstract
With the advancement of intelligent edge computing, deploying deep learning models (e.g., convolutional neural networks, CNNs) on edge devices is becoming increasingly popular. However, the limited storage and computing capabilities of these devices are a significant bottleneck for deploying and executing large models. In this paper, we propose J-Tucker, a joint compression scheme based on Tucker decomposition that compresses CNN models in a multi-task scenario. In J-Tucker, each task's core tensor represents the information unique to that task, while shared factor matrices capture the correlations and common features among tasks. Moreover, we propose several novel techniques, such as a shared rank selection strategy that chooses ranks to maximally reduce redundancy in the kernels while limiting the impact on model accuracy. We have conducted extensive experiments on two neural networks (AlexNet and VGG-16) with three real datasets (CIFAR-10, CIFAR-100, and STL-10) to evaluate the effectiveness of the proposed algorithms. The results demonstrate that, compared with existing compression algorithms that compress each task individually, J-Tucker achieves significantly better performance with a much higher compression ratio.
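The core idea lends itself to a short illustration. Below is a minimal, hypothetical sketch of shared-factor Tucker compression using the TensorLy library: the per-task convolution kernels are stacked along a new task mode, Tucker decomposition is applied with the task mode kept at full rank, and the task-mode factor is absorbed into the core so that each task keeps a private core tensor while the remaining factor matrices are shared. All shapes, ranks, and variable names here are illustrative assumptions, not the paper's implementation or its rank-selection strategy.

```python
# Minimal sketch of shared-factor Tucker compression for multi-task
# conv kernels, using TensorLy. Shapes and ranks are hypothetical.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from tensorly.tenalg import mode_dot

tl.set_backend("numpy")

# Toy 4-way conv kernels for T tasks: (out_ch, in_ch, kH, kW).
T, O, I, K = 3, 64, 32, 3
kernels = [np.random.randn(O, I, K, K) for _ in range(T)]

# Stack along a new leading task mode -> 5-way tensor (T, O, I, kH, kW).
stacked = np.stack(kernels, axis=0)

# Tucker-decompose. The task mode is kept at full rank T; the other
# modes are compressed with (hypothetical) shared ranks.
ranks = [T, 16, 8, K, K]
core, factors = tucker(tl.tensor(stacked), rank=ranks)

# Absorb the task-mode factor into the core: per_task_cores[t] is then
# task t's private core, while factors[1:] are shared by all tasks.
per_task_cores = mode_dot(core, factors[0], mode=0)

# Reconstruct task 0's kernel from its core and the shared factors.
approx = tl.tucker_to_tensor((per_task_cores[0], factors[1:]))
rel_err = np.linalg.norm(approx - kernels[0]) / np.linalg.norm(kernels[0])
print(f"task 0 relative reconstruction error: {rel_err:.3f}")
```

Under these toy numbers, storage drops from T*O*I*K*K kernel weights to the shared factor matrices plus T small cores, which is where a joint scheme can gain extra compression over decomposing each task's kernels independently.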
Top-30
Journals
- IEEE Transactions on Artificial Intelligence: 1 publication, 50%
- IEEE Access: 1 publication, 50%
Publishers
- Institute of Electrical and Electronics Engineers (IEEE): 2 publications, 100%
- We do not take into account publications without a DOI.
- Statistics recalculated weekly.
Metrics
Total citations: 2
Citations from 2024: 2 (100%)
Cite this
GOST
Wen J. et al. J-Tucker: Joint Compression Scheme for Efficient Deployment of Multi-Task Deep Learning Models on Edge Devices // IEEE Network. 2025. Vol. 39. No. 2. pp. 13-19.
GOST (all authors, up to 50)
Wen J., Li X., Xie K., Liang W., Xie G. J-Tucker: Joint Compression Scheme for Efficient Deployment of Multi-Task Deep Learning Models on Edge Devices // IEEE Network. 2025. Vol. 39. No. 2. pp. 13-19.
RIS
TY - JOUR
DO - 10.1109/mnet.2024.3516196
UR - https://ieeexplore.ieee.org/document/10794779/
TI - J-Tucker: Joint Compression Scheme for Efficient Deployment of Multi-Task Deep Learning Models on Edge Devices
T2 - IEEE Network
AU - Wen, Jigang
AU - Li, Xiaocan
AU - Xie, Kun
AU - Liang, Wei
AU - Xie, Gaogang
PY - 2025
DA - 2025/03/01
PB - Institute of Electrical and Electronics Engineers (IEEE)
SP - 13-19
IS - 2
VL - 39
SN - 0890-8044
SN - 1558-156X
ER -
BibTeX (up to 50 authors)
@article{2025_Wen,
author = {Jigang Wen and Xiaocan Li and Kun Xie and Wei Liang and Gaogang Xie},
title = {J-Tucker: Joint Compression Scheme for Efficient Deployment of Multi-Task Deep Learning Models on Edge Devices},
journal = {IEEE Network},
year = {2025},
volume = {39},
publisher = {Institute of Electrical and Electronics Engineers (IEEE)},
month = {mar},
url = {https://ieeexplore.ieee.org/document/10794779/},
number = {2},
pages = {13--19},
doi = {10.1109/mnet.2024.3516196}
}
MLA
Wen, Jigang, et al. “J-Tucker: Joint Compression Scheme for Efficient Deployment of Multi-Task Deep Learning Models on Edge Devices.” IEEE Network, vol. 39, no. 2, Mar. 2025, pp. 13-19. https://ieeexplore.ieee.org/document/10794779/.