Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread use underscores the critical need to strengthen their security posture, ensuring the integrity and reliability of their outputs while minimizing harmful effects. Prompt injection and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, and both can lead to unpredictable and undesirable behaviors such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities so that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
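The abstract's extended LLM-specific metrics are not reproduced here, but the scoring machinery being extended is the standard CVSS base formula. As a rough illustration, the sketch below computes a CVSS v3.1 base score for the scope-unchanged case; the metric weights come from the v3.1 specification, while the example vector assigned to a prompt injection is purely hypothetical.

```python
import math

# CVSS v3.1 base-metric weights (scope-unchanged case), per the FIRST spec.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact levels

def roundup(x: float) -> float:
    """Simplified CVSS 'Roundup': smallest one-decimal value >= x."""
    return math.ceil(x * 10) / 10

def base_score(av: str, ac: str, pr: str, ui: str,
               c: str, i: str, a: str) -> float:
    """CVSS v3.1 base score, scope-unchanged case only."""
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# Hypothetical rating for a remotely triggerable prompt injection:
# network vector, low complexity, no privileges or user interaction,
# low confidentiality impact, high integrity impact.
print(base_score("N", "L", "N", "N", "L", "H", "N"))
```

An LLM-aware extension of CVSS would adjust or add metrics on top of a calculation like this one; the function above only reproduces the unmodified baseline.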
This paper studies cyber risk management by integrating contextual log analysis with User and Entity Behavior Analytics (UEBA). Leveraging Python scripting and PostgreSQL database management, the solution enriches log data with contextual and behavioral information drawn from Linux system logs and semantic datasets. By incorporating Common Vulnerability Scoring System (CVSS) metrics and customized risk scoring algorithms, the system calculates Insider Threat scores to identify potential security breaches. The integration of contextual log analysis and UEBA [1] offers a proactive defense against insider threats, reducing false positives and prioritizing high-risk alerts.
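The paper's actual scoring algorithm is not given in the abstract. As a minimal sketch of the idea it describes, the Python snippet below blends CVSS severity, a UEBA anomaly score, and contextual asset criticality into a single Insider Threat score; the `LogEvent` fields, weights, and 0–100 scaling are all illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass

@dataclass
class LogEvent:
    user: str
    cvss_base: float          # CVSS base score of the vulnerability involved (0-10)
    anomaly: float            # UEBA behavioral anomaly score in [0, 1]
    asset_criticality: float  # contextual weight of the affected host in [0, 1]

def insider_threat_score(e: LogEvent,
                         w_vuln: float = 0.4,
                         w_behavior: float = 0.4,
                         w_asset: float = 0.2) -> float:
    """Weighted blend of vulnerability severity, behavioral anomaly,
    and asset context, scaled to 0-100. Weights are illustrative."""
    score = (w_vuln * (e.cvss_base / 10)
             + w_behavior * e.anomaly
             + w_asset * e.asset_criticality)
    return round(100 * score, 1)

# A user exhibiting highly unusual behavior on a critical host:
event = LogEvent(user="alice", cvss_base=7.5, anomaly=0.9, asset_criticality=1.0)
print(insider_threat_score(event))  # flagged as a high-risk alert candidate
```

In a pipeline like the one described, events scoring above a tuned threshold would be surfaced first, which is how such a blend can suppress low-context false positives while prioritizing high-risk alerts.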