TY - JOUR
T1 - A Security Risk Taxonomy for Prompt-Based Interaction With Large Language Models
AU - Derner, Erik
AU - Batistic, Kristina
AU - Zahalka, Jan
AU - Babuska, Robert
PY - 2024
Y1 - 2024
N2 - As large language models (LLMs) permeate an increasing number of applications, assessing their associated security risks becomes increasingly necessary. The potential for exploitation by malicious actors, ranging from disinformation to data breaches and reputation damage, is substantial. This paper addresses a gap in current research by focusing specifically on security risks posed by LLMs within the prompt-based interaction scheme, extending beyond the widely covered ethical and societal implications. Our work proposes a taxonomy of security risks along the user-model communication pipeline and categorizes attacks by target and attack type, alongside the commonly used confidentiality, integrity, and availability (CIA) triad. The taxonomy is reinforced with specific attack examples to showcase the real-world impact of these risks. Through this taxonomy, we aim to inform the development of robust and secure LLM applications, enhancing their safety and trustworthiness.
KW - jailbreak
KW - large language models
KW - natural language processing
KW - security
UR - http://www.scopus.com/inward/record.url?scp=85202710051&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2024.3450388
DO - 10.1109/ACCESS.2024.3450388
M3 - Review article
AN - SCOPUS:85202710051
SN - 2169-3536
VL - 12
SP - 126176
EP - 126187
JO - IEEE Access
JF - IEEE Access
ER -