Each rock joint is unique by nature, which means that the utilization of replicas in direct shear tests is required in experimental parameter studies. However, a method to acquire knowledge about the ability of the replicas to imitate the shear mechanical behavior of the rock joint, and about their dispersion in direct shear testing, is lacking. In this study, a novel method is presented for geometric quality assurance of replicas. The aim is to facilitate the generation of high-quality direct shear testing data as a prerequisite for reliable subsequent analyses of the results. In Part 1 of this study, two quality assurance parameters, σ_(mf) and V_(Hp100), are derived and their usefulness for the evaluation of geometric deviations, i.e. geometric reproducibility, is shown. In Part 2, the parameters are validated by showing a correlation between the parameters and the shear mechanical behavior, which qualifies the parameters for use in the quality assurance method. Unique results from direct shear tests comparing replicas with the rock joint show that replicas fulfilling the proposed threshold values of σ_(mf) < 0.06 mm and |V_(Hp100)| < 0.2 mm have a narrow dispersion and imitate the shear mechanical behavior of the rock joint in all aspects apart from having a slightly lower peak shear strength. The wear in these replicas, which have a similar morphology to the rock joint, occurs in the same areas as in the rock joint. The wear is slightly larger in the rock joint, and the discrepancy in peak shear strength therefore derives from differences in material properties, possibly from differences in toughness. Application of the suggested method shows that the quality-assured replicas manufactured following the process employed in this study phenomenologically capture the shear strength characteristics, which makes them useful in parameter studies.
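The acceptance criterion described in the abstract reduces to two threshold checks per replica. The following minimal sketch illustrates that screening step; the function and variable names are hypothetical, and how σ_mf and V_Hp100 are computed from the scanned geometry is defined in the paper itself and is assumed to be available here.

```python
# Hypothetical sketch of the quality-assurance acceptance check.
# Threshold values follow the abstract; sigma_mf and v_hp100 are
# assumed to be precomputed from the replica's surface scan.

SIGMA_MF_MAX = 0.06   # mm, proposed threshold for sigma_mf
V_HP100_MAX = 0.2     # mm, proposed threshold for |V_Hp100|

def replica_passes_qa(sigma_mf: float, v_hp100: float) -> bool:
    """Return True if a replica meets both proposed geometric thresholds."""
    return sigma_mf < SIGMA_MF_MAX and abs(v_hp100) < V_HP100_MAX

# Example: screen a batch of replicas before direct shear testing.
replicas = {"R1": (0.04, 0.1), "R2": (0.07, 0.05), "R3": (0.05, -0.25)}
accepted = [name for name, (s, v) in replicas.items()
            if replica_passes_qa(s, v)]
```

In this example only R1 is accepted: R2 exceeds the σ_mf threshold and R3 exceeds the |V_Hp100| threshold.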
MapReduce is a popular programming model for processing large-scale datasets in a distributed environment and is a fundamental component of current cloud computing and big data applications. In this paper, a heartbeat mechanism for a MapReduce task scheduler using dynamic calibration (HMTS-DC) is proposed to address the unbalanced node computation capacity problem in a heterogeneous MapReduce environment. HMTS-DC uses two mechanisms to dynamically adapt and balance the tasks assigned to each compute node: 1) using heartbeats to dynamically estimate the capacity of the compute nodes, and 2) using the data locality of replicated data blocks to reduce data transfer between nodes. With the first mechanism, based on the heartbeats received during the early stage of the job, the task scheduler can dynamically estimate the computational capacity of each node. With the second mechanism, unprocessed tasks local to each compute node are reassigned and reserved, allowing nodes with greater capacities to reserve more local tasks than their weaker counterparts. Experimental results show that HMTS-DC performs better than Hadoop and the Dynamic Data Placement Strategy (DDP) in a dynamic environment. Furthermore, an enhanced HMTS-DC (EHMTS-DC) is proposed by incorporating historical data. In contrast to the "slow start" property of HMTS-DC, EHMTS-DC relies on the historical computation capacity of the slave machines. The experimental results show that EHMTS-DC outperforms HMTS-DC in a dynamic environment.
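The two HMTS-DC mechanisms can be sketched as follows. This is an illustrative toy model, not the authors' implementation: capacity is estimated as the mean number of tasks a node completes per heartbeat interval, and a reservation budget of local tasks is then split across nodes in proportion to those estimates. All names and the proportional-split rule are assumptions for illustration.

```python
# Illustrative sketch (not the paper's implementation) of the two
# HMTS-DC ideas: heartbeat-based capacity estimation, and letting
# faster nodes reserve proportionally more of their local tasks.

def estimate_capacity(completed_per_heartbeat: list[int]) -> float:
    """Capacity estimate: mean tasks completed per heartbeat interval,
    measured during the early stage of the job."""
    return sum(completed_per_heartbeat) / len(completed_per_heartbeat)

def reserve_local_tasks(local_tasks: dict[str, int],
                        capacities: dict[str, float],
                        total_reserve: int) -> dict[str, int]:
    """Split a reservation budget across nodes in proportion to their
    estimated capacity, capped by each node's count of local tasks."""
    total_cap = sum(capacities.values())
    return {
        node: min(local_tasks[node],
                  round(total_reserve * capacities[node] / total_cap))
        for node in local_tasks
    }

# Node A completes tasks twice as fast as node B, so it reserves
# twice as many of its local unprocessed tasks.
caps = {"A": estimate_capacity([4, 4, 4]),
        "B": estimate_capacity([2, 2, 2])}
reserved = reserve_local_tasks({"A": 10, "B": 10}, caps, total_reserve=9)
```

Here node A ends up reserving 6 tasks and node B 3, reflecting their 2:1 capacity ratio; a real scheduler would recompute the estimates as new heartbeats arrive.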