Abstract: To address the problem of locating discharge faults, such as low-energy and high-energy discharges, in oil-immersed power transformers, a discharge fault location method based on Metal In-Oil Analysis (MIA) is proposed. Components inside the transformer with a high probability of failure are surface-treated so that potential fault-information sources are pre-placed on their surfaces, and Metal for Position Indication (MPI) is then used to identify the component on which a fault has occurred. On this basis, integrating the software and hardware of existing online monitoring systems, such as partial discharge monitoring and dissolved gas analysis, enables fairly complete diagnosis and location of transformer discharge faults. The results show that the method not only improves the accuracy of discharge fault location but also relaxes the accuracy requirements placed on any single existing fault-location technique; through continuous monitoring, it also gives operation and maintenance personnel a more complete picture of latent transformer faults, providing new technical support for condition-based transformer maintenance.
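To make the component-identification step concrete, the sketch below shows how detected indicator metals might be mapped back to the components they were applied to, gated by corroborating partial discharge or DGA alarms. This is a minimal illustration under assumed data: the MPI plating plan (MPI_MAP), the specific indicator metals, and the function locate_discharge_fault are hypothetical placeholders, not from the paper.

```python
# Assumed MPI plating plan: each high-fault-probability component is coated
# with a distinct indicator metal during manufacture or overhaul.
# (Metals and components here are illustrative assumptions.)
MPI_MAP = {
    "Ag": "tap changer contacts",
    "In": "winding lead joints",
    "Bi": "core grounding strap",
}

def locate_discharge_fault(detected_metals, pd_alarm, dga_alarm):
    """Return candidate faulted components once partial discharge (PD) or
    dissolved gas analysis (DGA) monitoring confirms a discharge fault and
    the oil assay shows pre-placed indicator metals."""
    if not (pd_alarm or dga_alarm):
        return []  # no corroborating evidence of a discharge fault
    return [MPI_MAP[m] for m in detected_metals if m in MPI_MAP]

# Example: the PD monitor alarms and the oil assay detects indium,
# pointing at the winding lead joints.
print(locate_discharge_fault({"In"}, pd_alarm=True, dga_alarm=False))
```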
Abstract: The recent interest in the deployment of Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during a fine-tuning or customization process. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of attacks employed. The study utilizes two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of different attacks.
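As a concrete illustration of the memorization-focused category, the sketch below implements a basic loss-threshold membership inference baseline: sequences the model assigns unusually low loss to are flagged as likely training members. It is a minimal sketch under stated assumptions, not the paper's method: the model name ("gpt2" as a stand-in for any fine-tuned causal LM under audit), the threshold value, and the candidate string are all illustrative.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumed stand-in for the audited, fine-tuned LLM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def sequence_loss(text: str) -> float:
    """Average per-token cross-entropy of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    # Passing labels=input_ids makes the model return the LM loss directly.
    return model(ids, labels=ids).loss.item()

def infer_membership(text: str, threshold: float = 3.0) -> bool:
    """Flag `text` as a likely training member if its loss is below a
    calibration threshold. The value 3.0 is an assumed placeholder; in
    practice it would be calibrated on known non-member reference data."""
    return sequence_loss(text) < threshold

# Hypothetical PII-bearing candidate string (fabricated for illustration).
candidate = "Jane Doe's social security number is 123-45-6789."
print(infer_membership(candidate), sequence_loss(candidate))
```

A stronger variant of the same idea calibrates the score against a reference model or per-example difficulty, which is the direction the "various membership inference attacks" mentioned in the abstract generally take.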