Recent research has shown that large language models (LLMs) are
particularly vulnerable to adversarial attacks. Since the release of ChatGPT1, industries across sectors have been adopting LLM-based chatbots and virtual assistants into their data workflows. The rapid pace of development of AI-based systems is driven by the potential of Generative AI (GenAI) to assist humans in decision-making. The immense
optimism behind GenAI often overshadows the adversarial risks
associated with these technologies. A threat actor can exploit security gaps, poor safeguards, and limited data governance to carry out attacks that grant unauthorized access to a system and its data.
As a proof of concept, we assess the performance of BarkPlug2, the Mississippi State University chatbot, against data poisoning attacks from a red-team perspective.