Journal articles: 3 results found
1. A NEW ANONYMITY CONTROLLED E-CASH SCHEME (cited by 1)
Authors: Zhang Fangguo, Wang Changjie, Wang Yumin (Key Lab. on ISN, Xidian Univ., Xi'an, 710071). Journal of Electronics (China), 2002, No. 4, pp. 369-374 (6 pages).
E-cash is a very important type of electronic payment system. Because the complete anonymity of E-cash can be exploited for criminal activities, E-cash should have controlled anonymity. Moreover, Elliptic Curve Cryptography (ECC) has come to be regarded as the mainstream of current public-key cryptography. In this paper, a new anonymity-controlled E-cash scheme is designed, based for the first time on ECC and using a new technique, one-time key-pair digital signatures, and its security and efficiency are analyzed. In our scheme, both coin tracing and owner tracing can be implemented. (An illustrative sketch of the one-time key-pair idea follows the keywords below.)
Keywords: E-cash, anonymity control, one-time key-pair digital signature, ECC
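The abstract does not give the construction itself, so the following is only a toy sketch of the "one fresh ECC key pair per coin" idea, not the paper's scheme: the blind-signature and trustee-based tracing steps are omitted, and the `withdraw_coin`/`spend_coin` helpers and the third-party `ecdsa` package are choices made here purely for illustration.

```python
# Toy sketch only: one fresh ECC key pair is generated per coin and used exactly once.
# The paper's actual anonymity-controlled scheme (blinding, coin/owner tracing) is not reproduced.
from ecdsa import SigningKey, VerifyingKey, NIST256p, BadSignatureError

def withdraw_coin(serial: bytes):
    """Spender generates a fresh ECC key pair tied to exactly one coin."""
    one_time_sk = SigningKey.generate(curve=NIST256p)
    one_time_vk = one_time_sk.get_verifying_key()
    # The one-time public key is embedded in the coin; bank certification is omitted here.
    coin = {"serial": serial, "pk": one_time_vk.to_pem()}
    return coin, one_time_sk

def spend_coin(coin, one_time_sk, payment_info: bytes):
    """Spending amounts to signing the payment transcript with the one-time key."""
    return one_time_sk.sign(coin["serial"] + payment_info)

def verify_payment(coin, payment_info: bytes, signature) -> bool:
    """Merchant/bank checks the signature against the coin's one-time public key."""
    vk = VerifyingKey.from_pem(coin["pk"])
    try:
        return vk.verify(signature, coin["serial"] + payment_info)
    except BadSignatureError:
        return False

coin, sk = withdraw_coin(b"serial-0001")
sig = spend_coin(coin, sk, b"merchant=M1;amount=10")
assert verify_payment(coin, b"merchant=M1;amount=10", sig)
```

Because each key pair is used once, reuse of the same one-time public key in two payments is itself evidence of double-spending, which is the intuition the sketch is meant to convey.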
2. Probabilistic Automata-Based Method for Enhancing Performance of Deep Reinforcement Learning Systems
Authors: Min Yang, Guanjun Liu, Ziyuan Zhou, Jiacun Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI), 2024, No. 11, pp. 2327-2339 (13 pages).
Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management. However, due to the model's inherent uncertainty, rigorous validation is required before it is applied to real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the "black-box" nature of DRL makes it difficult to test model behavior. We propose a novel performance improvement framework based on probabilistic automata, which aims to proactively identify and correct critical vulnerabilities of DRL systems so that the performance of DRL models in real tasks can be improved with minimal model modifications. First, a probabilistic automaton is constructed from the historical trajectories of the DRL system by abstracting states into probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) is used to identify the key PDMU-action pairs with the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under each key PDMU, we search for the new action with the greatest impact on favorable results. Finally, the key PDMU, the undesirable action, and the new action are encapsulated as monitors that guide the DRL system toward more favorable results through real-time monitoring and correction. Evaluations in two standard reinforcement learning environments and three real job scheduling scenarios confirm the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications. (A hedged sketch of the trajectory-abstraction and monitoring idea follows the keywords below.)
Keywords: deep reinforcement learning (DRL), performance improvement framework, probabilistic automata, real-time monitoring, key probabilistic decision-making unit (PDMU)-action pair
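As a rough illustration of the workflow described in the abstract, and not the authors' implementation (the paper builds a full probabilistic automaton and applies a reverse BFS, whereas this toy simply scores (PDMU, action) pairs by failure frequency), one might sketch the abstraction, ranking, and monitoring steps as follows; `abstract_state` and the scoring rule are assumptions made for illustration.

```python
# Hedged sketch: abstract states into PDMUs, find (PDMU, action) pairs that
# correlate with bad outcomes, and wrap corrections in a runtime monitor.
from collections import defaultdict

def abstract_state(state):
    """Map a concrete state to a coarse PDMU id, e.g. by rounding features."""
    return tuple(round(x, 1) for x in state)

def key_pdmu_action_pairs(trajectories, top_k=5):
    """trajectories: list of (states, actions, success_flag) tuples.
    Score each (PDMU, action) pair by how often it lies on failing trajectories."""
    fail_count = defaultdict(int)
    total_count = defaultdict(int)
    for states, actions, success in trajectories:
        for s, a in zip(states, actions):
            pair = (abstract_state(s), a)
            total_count[pair] += 1
            if not success:
                fail_count[pair] += 1
    score = {p: fail_count[p] / total_count[p] for p in total_count}
    return sorted(score, key=score.get, reverse=True)[:top_k]

class Monitor:
    """Real-time monitor: when the agent is in a key PDMU and proposes the
    undesirable action, substitute the alternative action found offline."""
    def __init__(self, corrections):
        self.corrections = corrections  # {(pdmu, bad_action): new_action}

    def filter(self, state, proposed_action):
        key = (abstract_state(state), proposed_action)
        return self.corrections.get(key, proposed_action)
```

In use, the monitor sits between the trained policy and the environment, so the policy network itself never has to be retrained; only its proposed action is occasionally overridden.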
3. Using Memory in the Right Way to Accelerate Big Data Processing (cited by 2)
Authors: 阎栋 (Yan Dong), 尹绪森 (Yin Xusen), 连城 (Lian Cheng), 钟翔 (Zhong Xiang), 周鑫 (Zhou Xin), 吴甘沙 (Wu Gansha). Journal of Computer Science & Technology (SCIE, EI, CSCD), 2015, No. 1, pp. 30-41 (12 pages).
Big data processing is becoming a prominent part of data center computation. However, recent research has indicated that big data workloads cannot make full use of modern memory systems. We find that the dramatic inefficiency of big data processing stems from the enormous number of cache misses and the stalls caused by dependent memory accesses. In this paper, we introduce two optimizations to tackle these problems. The first is a slice-and-merge strategy, which reduces the cache miss rate of the sort procedure. The second is direct memory access, which reforms the data structure used in key/value storage. These optimizations are evaluated with both micro-benchmarks and the real-world benchmark HiBench. The results of our micro-benchmarks clearly demonstrate the effectiveness of the optimizations in terms of hardware event counts, and the additional HiBench results show a 1.21x average speedup at the application level. Both results illustrate that careful hardware/software co-design can improve the memory efficiency of big data processing. Our work has already been integrated into the Intel Distribution for Apache Hadoop. (A toy sketch of the two ideas follows the keywords below.)
Keywords: big data, key/value pair, architecture awareness, performance measurement
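The paper's optimizations are low-level changes inside Hadoop's sort and key/value storage, so the following Python toy is only a conceptual analogue of the two ideas: sorting in cache-sized slices before merging, and packing values into one contiguous buffer instead of per-record objects. The `slice_size` value and the `FlatKVStore` layout are arbitrary choices made here for illustration.

```python
# Conceptual analogue only; the real optimizations live inside Hadoop, not Python.
import heapq

def slice_and_merge_sort(records, slice_size=4096):
    """Sort slice by slice (each slice small enough to stay cache-resident),
    then merge the sorted slices, trading random access for sequential scans."""
    slices = [sorted(records[i:i + slice_size])
              for i in range(0, len(records), slice_size)]
    return list(heapq.merge(*slices))

class FlatKVStore:
    """Toy analogue of the direct-memory-access layout: values are packed into
    one contiguous buffer and addressed by (offset, length), avoiding
    per-record object headers and pointer chasing."""
    def __init__(self):
        self.buf = bytearray()
        self.index = {}  # key -> (offset, length) of the value

    def put(self, key: bytes, value: bytes):
        self.index[key] = (len(self.buf), len(value))
        self.buf.extend(value)

    def get(self, key: bytes) -> bytes:
        off, length = self.index[key]
        return bytes(self.buf[off:off + length])
```

Both sketches illustrate the same theme as the paper: arranging data so that memory accesses are sequential and cache-friendly rather than scattered.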