Abstract: Large Language Models (LLMs) have revolutionized Generative Artificial Intelligence (GenAI) tasks, becoming an integral part of various applications in society, including text generation, translation, summarization, and more. However, their widespread usage emphasizes the critical need to enhance their security posture to ensure the integrity and reliability of their outputs and minimize harmful effects. Prompt injections and training data poisoning attacks are two of the most prominent vulnerabilities in LLMs, which could potentially lead to unpredictable and undesirable behaviors, such as biased outputs, misinformation propagation, and even malicious content generation. The Common Vulnerability Scoring System (CVSS) framework provides a standardized approach to capturing the principal characteristics of vulnerabilities, facilitating a deeper understanding of their severity within the security and AI communities. By extending the current CVSS framework, we generate scores for these vulnerabilities so that organizations can prioritize mitigation efforts, allocate resources effectively, and implement targeted security measures to defend against potential risks.
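The abstract does not reproduce the extended scoring itself, but the standard CVSS v3.1 base-score computation it builds on is published by FIRST.org. Below is a minimal sketch of that base formula applied to a hypothetical prompt-injection vector; the metric weights are the specification's constants, while the example vector (network-reachable, low complexity, no privileges, user interaction required, high integrity impact) is an illustrative assumption, not the authors' rating.

```python
import math

# CVSS v3.1 base-metric weights (FIRST.org specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # Privileges Required (scope unchanged)
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality / Integrity / Availability

def roundup(x: float) -> float:
    """CVSS v3.1 Roundup: smallest one-decimal value >= x."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score(av, ac, pr, ui, c, i, a, scope_changed=False):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if scope_changed else 6.42 * iss)
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    raw = 1.08 * (impact + exploitability) if scope_changed else impact + exploitability
    return roundup(min(raw, 10))

# Hypothetical prompt-injection vector: AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:H/A:N
print(base_score("N", "L", "N", "R", "N", "H", "N"))  # -> 6.5 (Medium)
```

Extending the framework, as the paper proposes, then amounts to choosing defensible metric values for LLM-specific weaknesses such as prompt injection or training data poisoning.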
Abstract: This paper introduces a novel multi-tiered defense architecture to protect language models from adversarial prompt attacks. We construct adversarial prompts using strategies like role emulation and manipulative assistance to simulate real threats. We introduce a comprehensive, multi-tiered defense framework named GUARDIAN (Guardrails for Upholding Ethics in Language Models) comprising a system prompt filter, a pre-processing filter leveraging a toxicity classifier and an ethical prompt generator, and a pre-display filter that uses the model itself for output screening. Extensive testing on Meta's Llama-2 model demonstrates the capability to block 100% of attack prompts. The approach also auto-suggests safer prompt alternatives, thereby bolstering language model security. Quantitatively evaluated defense layers and an ethical substitution mechanism represent key innovations to counter sophisticated attacks. The integrated methodology not only fortifies smaller LLMs against emerging cyber threats but also guides the broader application of LLMs in a secure and ethical manner.
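The abstract names three layers but not their interfaces, so the following is only a minimal sketch of how such a pipeline could be wired together; `llm`, `toxicity`, `rewrite`, and `self_check` are hypothetical stand-ins for GUARDIAN's actual components.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    allowed: bool
    text: str
    reason: str = ""

def system_prompt_filter(prompt: str) -> Verdict:
    # Tier 1: reject attempts to override or impersonate the system prompt
    # (role-emulation style attacks). Trigger list is illustrative only.
    triggers = ("ignore previous instructions", "you are now", "act as")
    if any(t in prompt.lower() for t in triggers):
        return Verdict(False, prompt, "role-emulation pattern")
    return Verdict(True, prompt)

def preprocess_filter(prompt: str,
                      toxicity: Callable[[str], float],
                      rewrite: Callable[[str], str],
                      threshold: float = 0.5) -> Verdict:
    # Tier 2: score the prompt with a toxicity classifier; if it exceeds
    # the threshold, substitute a safer, ethically rephrased alternative
    # rather than refusing outright.
    if toxicity(prompt) >= threshold:
        return Verdict(True, rewrite(prompt), "substituted safer prompt")
    return Verdict(True, prompt)

def predisplay_filter(output: str,
                      self_check: Callable[[str], bool]) -> Verdict:
    # Tier 3: ask the model itself whether its output is safe to display.
    return Verdict(self_check(output), output, "model self-screening")

def guarded_generate(prompt, llm, toxicity, rewrite, self_check):
    v = system_prompt_filter(prompt)
    if not v.allowed:
        return f"[blocked: {v.reason}]"
    v = preprocess_filter(v.text, toxicity, rewrite)
    out = llm(v.text)
    v = predisplay_filter(out, self_check)
    return out if v.allowed else "[withheld by pre-display filter]"
```

In practice, `toxicity` could be any off-the-shelf toxicity classifier and `rewrite` a call back into the LLM requesting an ethical rephrasing; the substitution step mirrors the abstract's auto-suggestion of safer prompt alternatives.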
Abstract: AIM: To prospectively evaluate the efficacy of submucosal injection of platelet-rich plasma (PRP) in endoscopic resection of large sessile lesions. METHODS: Eleven patients underwent endoscopic mucosal resection (EMR) with prior injection of PRP, obtained at the time of endoscopy. Patients were followed for 1 mo. The incidence of adverse events (delayed bleeding or perforation) and the mucosal healing rate (MHR) after 4 wk were registered. RESULTS: EMR was performed on 11 lesions (46.4 mm ± 4 mm, range 40-70 mm). Delayed bleeding or perforation was not observed in any patient. The mean ulcerated area was 22.7 cm² ± 11.7 cm² at baseline versus 2.9 cm² ± 1.5 cm² at week 4. Patients treated with PRP showed a very high MHR after 4 wk (87.5%). CONCLUSION: PRP is an easy-to-obtain solution with proven and favourable biological activities that could be used in advanced endoscopic resection.