Open Access
Applied Sciences (Switzerland), volume 15, issue 3, article 1252

FPE–Transformer: A Feature Positional Encoding-Based Transformer Model for Attack Detection

Publication type: Journal Article
Publication date: 2025-01-26
Scimago quartile: Q2
SJR: 0.508
CiteScore: 5.3
Impact factor: 2.5
ISSN: 2076-3417
Abstract

The increase in cybersecurity threats has made attack detection systems critically important. Traditional deep learning methods often require large amounts of data and struggle to model relationships between features effectively. With their self-attention mechanism, Transformers excel at modeling complex relationships and long-term dependencies. They also adapt to varied data types and sources, which makes them well suited to large-scale attack detection. This paper introduces the FPE–Transformer framework, which leverages these strengths of the Transformer architecture. FPE–Transformer incorporates an innovative feature positional encoding mechanism that encodes the positional information of each feature separately, enabling a deeper understanding of feature relationships and more precise attack detection. The model also includes a dedicated ClassificationHead that improves accuracy on complex patterns. The framework was validated on the NSL-KDD and CIC-IDS2017 datasets, where it outperformed traditional methods in detecting diverse attack types. These results establish FPE–Transformer as a robust solution to modern attack detection challenges, addressing key limitations of traditional deep learning methods.
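To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of a Transformer that gives each tabular feature its own positional encoding before self-attention, followed by a classification head. This is an illustration reconstructed from the abstract alone, not the authors' implementation: the class name FPETransformer, the learned per-feature positional embeddings, the mean pooling, and all hyperparameter values are assumptions (the paper may instead use, for example, a fixed encoding scheme or a different head design).

```python
import torch
import torch.nn as nn

class FPETransformer(nn.Module):
    """Sketch of a feature positional encoding Transformer for tabular
    intrusion-detection data. Each scalar input feature is embedded as a
    token and given its own learned positional embedding, so self-attention
    can model pairwise relationships between features."""

    def __init__(self, num_features: int, d_model: int = 64,
                 nhead: int = 4, num_layers: int = 2, num_classes: int = 2):
        super().__init__()
        # Project each scalar feature value to a d_model-dimensional token.
        self.value_embed = nn.Linear(1, d_model)
        # One learned positional embedding per feature (hypothetical choice).
        self.feature_pos = nn.Parameter(torch.randn(num_features, d_model) * 0.02)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Classification head: pooled token representations -> class logits.
        self.head = nn.Sequential(
            nn.LayerNorm(d_model),
            nn.Linear(d_model, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) of continuous or label-encoded features.
        tokens = self.value_embed(x.unsqueeze(-1))  # (B, F, d_model)
        tokens = tokens + self.feature_pos          # add per-feature position
        encoded = self.encoder(tokens)              # (B, F, d_model)
        pooled = encoded.mean(dim=1)                # average over feature tokens
        return self.head(pooled)                    # (B, num_classes)

# Usage: 41 features as in NSL-KDD, binary attack/normal classification.
model = FPETransformer(num_features=41, num_classes=2)
logits = model(torch.randn(8, 41))
print(logits.shape)  # torch.Size([8, 2])
```

Treating each feature as its own token, rather than concatenating the record into a single vector, lets self-attention weigh pairwise feature interactions directly; that is the property the abstract credits for the deeper understanding of feature relationships.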
