Abstract: The happens-before memory model (HMM) is used as the basis of the Java memory model (JMM). Although HMM itself is simple, JMM has to introduce complex axioms to prevent causality loops, which cause absurd out-of-thin-air reads that may break the type-safety and security guarantees of Java. The resulting JMM is complex and difficult to understand. It also exhibits many counterintuitive behaviors, as demonstrated by the "ugly examples" of Aspinall and Ševčík [1]. Furthermore, HMM (and JMM) specifies only which execution traces are acceptable, but says nothing about how these traces are generated. This gap makes static reasoning about programs difficult. In this paper we present OHMM, an operational variation of HMM. The model is specified by giving an operational semantics to a language running on an abstract machine designed to simulate HMM. Thanks to its generative nature, the model naturally prevents out-of-thin-air reads. On the other hand, it uses a novel replay mechanism that allows instructions to be executed multiple times, which can be used to model many useful speculations and optimizations. The model is weaker than JMM for lockless programs, and thus can accommodate more optimizations, such as the reordering of independent memory accesses, which is not valid in JMM. Program behaviors are more natural in this model than in JMM, and many of the counterintuitive examples in JMM are no longer valid here. We hope OHMM can serve as the basis for new memory models for Java-like languages.
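The out-of-thin-air problem the abstract refers to is usually illustrated with a classic litmus test (this sketch is illustrative background, not taken from the paper): two threads each copy one shared variable into the other, so any value a thread reads must trace back to the initial zeros.

```java
// Classic "out-of-thin-air" litmus test (illustrative sketch).
public class OutOfThinAir {
    static int x = 0, y = 0;   // shared, deliberately not volatile
    static int r1, r2;         // read results of the two threads

    static void run() throws InterruptedException {
        Thread t1 = new Thread(() -> { r1 = x; y = r1; });
        Thread t2 = new Thread(() -> { r2 = y; x = r2; });
        t1.start(); t2.start();
        t1.join(); t2.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run();
        // Every value traces back to the initial zeros, so r1 == r2 == 0
        // is the only causally justifiable outcome. A naive happens-before
        // model would also admit, e.g., r1 == r2 == 42, each write
        // "justifying" the other in a causal loop; JMM's causality axioms
        // (and, per the abstract, OHMM's generative semantics) rule it out.
        System.out.println("r1=" + r1 + " r2=" + r2);
    }
}
```

On a real JVM both reads always return 0; the point of the example is that a happens-before-only specification does not by itself forbid the circular justification.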
Abstract: Data races are ubiquitous in multi-threaded applications, but they are by no means easy to detect. One of the most important reasons is the complexity of thread interleavings. A large body of research has been devoted to interleaving-insensitive detection. However, all previous work focuses on uniform detection (oblivious to the characteristics of thread interleavings), making the detection defective in either reporting false positives or suffering prohibitive overhead. To cope with this problem, we propose an efficient, precise, and sound step-by-step resolution based on the characteristics of thread interleavings. We first tease apart the categories of thread interleavings arising from several typical sources of lock synchronization. We then conduct a brief study and find a new, complex pattern that previous work cannot detect. We also show that the simple pattern, which accounts for the majority of thread interleavings, can be resolved by a simple processing step, yielding a large benefit for previous reordering-based designs. Our final experimental results demonstrate the effectiveness of our empiricism-based approach, and show that 51.0% of the execution time and 52.3% of the trace size incurred by the state-of-the-art reordering technique can be saved through a quick filtering of the simple pattern, with a negligible (4.45%) performance overhead introduced on-the-fly.
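As background for the lock-synchronization interleavings discussed above (this is a standard vector-clock happens-before check over a recorded trace, not the paper's reordering-based algorithm), the following sketch shows how a lock release/acquire pair orders two accesses that would otherwise race:

```java
import java.util.Arrays;

// Minimal vector-clock happens-before check over a recorded trace
// (background sketch; the trace layout and thread count are assumptions).
public class HbSketch {
    static final int N = 2;  // two threads, T0 and T1

    static boolean leq(int[] a, int[] b) {        // does a happen-before b?
        for (int i = 0; i < N; i++) if (a[i] > b[i]) return false;
        return true;
    }

    static void join(int[] into, int[] from) {    // pointwise max
        for (int i = 0; i < N; i++) into[i] = Math.max(into[i], from[i]);
    }

    static boolean[] demo() {
        // Trace 1: T0 writes x, then T1 writes x with no synchronization.
        int[] t0 = new int[N], t1 = new int[N];
        t0[0]++;                                  // T0: write x
        int[] writeX = t0.clone();
        t1[1]++;                                  // T1: write x
        boolean raceUnsync = !leq(writeX, t1);    // unordered -> race

        // Trace 2: same accesses, ordered by a lock release/acquire.
        int[] u0 = new int[N], u1 = new int[N], lock = new int[N];
        u0[0]++;                                  // T0: write x
        int[] writeX2 = u0.clone();
        join(lock, u0);                           // T0: release lock
        join(u1, lock);                           // T1: acquire lock
        u1[1]++;                                  // T1: write x
        boolean raceSync = !leq(writeX2, u1);     // ordered -> no race

        return new boolean[] { raceUnsync, raceSync };
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(demo()));  // [true, false]
    }
}
```

The lock's clock carries T0's history to T1 at the acquire, which is exactly the interleaving characteristic the abstract's simple pattern exploits: accesses already ordered this way can be filtered out before any expensive reordering analysis.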