Abstract: Objective: Atherosclerosis is an inflammatory process that results in complex lesions, or plaques, that protrude into the arterial lumen. Carotid atherosclerotic plaque rupture, with distal embolization of atheromatous debris, causes cerebrovascular events. This review aimed to explore research progress on the risk factors and outcomes of human carotid atherosclerotic plaques, and the molecular and cellular mechanisms of human carotid atherosclerotic plaque vulnerability, with a view to therapeutic intervention. Data Sources: We searched the PubMed database for research articles published up to June 2016, using the key words "risk factors", "outcomes", "blood components", "molecular mechanisms", "cellular mechanisms", and "human carotid atherosclerotic plaques". Study Selection: Articles on the latest developments related to the risk factors and outcomes, plaque composition, blood components, and consequences of human carotid atherosclerotic plaques, and on the molecular and cellular mechanisms of plaque vulnerability relevant to therapeutic intervention, were selected. Results: This review describes the latest research on the interactive effects of both traditional and novel risk factors for human carotid atherosclerotic plaques, novel insights into plaque composition and blood components, and the consequences of human carotid atherosclerotic plaque. Conclusion: Carotid plaque biology and serologic biomarkers of vulnerability can be used to predict the risk of cerebrovascular events. Furthermore, plaque composition, rather than lesion burden, appears to be the strongest predictor of rupture and subsequent thrombosis.
Funding: This work was supported by the National Key Research and Development Project of China under Grant No. 2020YFB1707600, the National Natural Science Foundation of China under Grant Nos. 62072228, 61972222, and 92067206, the Fundamental Research Funds for the Central Universities of China, the Collaborative Innovation Center of Novel Software Technology and Industrialization, and the Jiangsu Innovation and Entrepreneurship (Shuangchuang) Program.
Abstract: Distributed computing systems have been widely used as the amount of data grows exponentially in the era of information explosion. Job completion time (JCT) is a major metric for assessing their effectiveness. How to reduce the JCT of these systems through reasonable scheduling has become a hot topic in both industry and academia. Data skew is a common phenomenon that can compromise the performance of such distributed computing systems. This paper proposes SMART, which can effectively reduce the JCT by handling data skew during the reduce phase. SMART predicts the sizes of reduce tasks based on a fraction of the completed map tasks and then enforces largest-first scheduling in the reduce phase according to the predicted reduce task sizes. SMART makes minimal modifications to the original Hadoop, with only 20 additional lines of code, and is readily deployable. The robustness and effectiveness of SMART have been evaluated on a real-world cluster against a large number of datasets. Experiments show that SMART reduces the JCT by up to 6.47%, 9.26%, and 13.66% for Terasort, WordCount, and InvertedIndex, respectively, on the Purdue MapReduce benchmarks suite (PUMA) dataset.
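The largest-first policy summarized in this abstract can be illustrated with a small scheduling sketch. The Python snippet below is a hypothetical, simplified illustration rather than SMART's actual Hadoop implementation: it assumes reduce-task sizes have already been predicted from the completed map tasks, and it simply dispatches the largest predicted tasks first onto the earliest-available reduce slots; the per-byte processing rate is a toy assumption.

```python
import heapq

def largest_first_schedule(predicted_sizes, num_slots):
    """Illustrative largest-first scheduler.

    predicted_sizes: dict mapping reduce task id -> predicted size in bytes.
    num_slots: number of reduce slots available.
    Hypothetical sketch under stated assumptions, not SMART's code.
    """
    # Sort reduce tasks by predicted size, largest first.
    tasks = sorted(predicted_sizes.items(), key=lambda kv: kv[1], reverse=True)
    # Track each slot by the time at which it becomes free.
    slots = [(0.0, slot_id) for slot_id in range(num_slots)]
    heapq.heapify(slots)
    schedule = []
    for task_id, size in tasks:
        free_at, slot_id = heapq.heappop(slots)
        duration = size / 1e6  # toy assumption: 1 MB processed per time unit
        schedule.append((task_id, slot_id, free_at, free_at + duration))
        heapq.heappush(slots, (free_at + duration, slot_id))
    # Job completion time is when the last reduce task finishes.
    jct = max(end for _, _, _, end in schedule)
    return schedule, jct

# Example: a skewed workload where one reduce task is much larger than the rest.
predicted = {"r0": 8e8, "r1": 1e8, "r2": 1e8, "r3": 1e8}
plan, jct = largest_first_schedule(predicted, num_slots=2)
print(plan, jct)
```

Scheduling the largest task first prevents the skewed task from starting late and becoming the straggler that alone determines the JCT.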
Funding: This work was supported by the Key-Area Research and Development Program of Guangdong Province of China under Grant No. 2020B0101390001, the National Natural Science Foundation of China under Grant Nos. 61772265 and 62072228, the Fundamental Research Funds for the Central Universities of China, the Collaborative Innovation Center of Novel Software Technology and Industrialization of Jiangsu Province of China, and the Jiangsu Innovation and Entrepreneurship (Shuangchuang) Program of China.
Abstract: Remote direct memory access (RDMA) has become one of the state-of-the-art high-performance network technologies in datacenters. The reliable transport of RDMA is designed on the assumption of a lossless underlying network and cannot tolerate a high packet loss rate. However, besides switch buffer overflow, there is another kind of packet loss in RDMA networks, namely packet corruption, which has not been discussed in depth. Packet corruption incurs long application tail latency by causing timeout retransmissions. The challenges in addressing packet corruption in RDMA networks are twofold: 1) packet corruption is inevitable regardless of any remedial mechanisms, and 2) RDMA hardware is not programmable. This paper proposes designs that can guarantee the expected tail latency of applications in the presence of packet corruption. The key idea is to control the occurrence probability of timeout events caused by packet corruption by transforming timeout retransmissions into out-of-order retransmissions. We build a probabilistic model to estimate the occurrence probabilities and real effects of the corruption patterns. We implement the two mechanisms with the help of programmable switches and the zero-byte message RDMA feature. We build an ns-3 simulation and implement the optimization mechanisms on our testbed. The simulation and testbed experiments show that the optimizations can decrease the flow completion time by several orders of magnitude with less than 3% bandwidth cost at different packet corruption rates.
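As a rough illustration of the probabilistic reasoning the abstract refers to, the sketch below uses a simplified model of my own (an assumption, not the authors' published model): each packet is corrupted independently with a fixed rate, and a timeout occurs only if a packet is corrupted on its original transmission and on every out-of-order (fast) retransmission allowed before the timer fires. The function name, parameters, and example numbers are hypothetical.

```python
def p_timeout_per_flow(n_packets, p_corrupt, retries_before_timeout=1):
    """Probability that a flow of n_packets experiences at least one timeout.

    Simplified illustrative model (assumption, not the paper's exact model):
    a packet escalates to a timeout only if it is corrupted on the original
    transmission and on each of `retries_before_timeout` out-of-order
    retransmissions; corruption events are independent with rate p_corrupt.
    """
    # Probability that a single packet escalates to a timeout.
    p_pkt_timeout = p_corrupt ** (1 + retries_before_timeout)
    # Probability that at least one of the n packets does so.
    return 1.0 - (1.0 - p_pkt_timeout) ** n_packets

p = 1e-4      # example packet corruption rate
n = 10_000    # packets in a flow
# Baseline: every corruption directly triggers a timeout retransmission.
print(p_timeout_per_flow(n, p, retries_before_timeout=0))  # ~0.63
# With out-of-order retransmission absorbing single corruptions.
print(p_timeout_per_flow(n, p, retries_before_timeout=1))  # ~1e-4
```

Under these assumptions, converting timeout retransmissions into out-of-order retransmissions drives the per-flow timeout probability down by several orders of magnitude, which is consistent with the direction of the tail-latency improvement the abstract reports.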