Fund: Supported by the National Natural Science Foundation of China (No. 10675123)
Abstract: HENDL2.0, the latest version of the Hybrid Evaluated Nuclear Data Library, was developed on the basis of selected evaluated data from FENDL2.1 and ENDF/B-VII. To qualify and validate the working library, an integral test of the neutron production data in HENDL2.0 was performed against a series of existing spherical-shell benchmark experiments (V, Be, Fe, Pb, Cr, Mn, Cu, Al, Si, Co, Zr, Nb, Mo, W and Ti). These experiments were simulated numerically with HENDL2.0/MG and the home-developed code VisualBUS. For comparison, calculations were also conducted with FENDL2.1/MG and with FENDL2.1/MC, the latter based on the continuous-energy Monte Carlo code MCNP/4C. From the comparison and analysis of the neutron leakage spectra and the integral test results, benchmark results for the neutron production data are presented in this paper.
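A comparison of this kind typically requires collapsing the continuous-energy Monte Carlo leakage tally onto the multigroup energy grid before groupwise ratios can be formed. The minimal Python sketch below illustrates such a collapse; the group boundaries, tally values and array shapes are illustrative assumptions, not data from the benchmarks above.

import numpy as np

# Assumed coarse group boundaries in eV (illustrative, not the actual
# multigroup structure used with HENDL2.0/MG).
group_edges = np.array([1.0e3, 1.0e5, 1.0e6, 5.0e6, 1.5e7])

# Hypothetical continuous-energy Monte Carlo leakage tally:
# bin-centre energies (eV) and leakage per source neutron in each tally bin.
mc_energies = np.array([2.0e3, 5.0e4, 3.0e5, 2.0e6, 1.2e7])
mc_leakage = np.array([1.0e-8, 3.0e-7, 5.0e-6, 2.0e-5, 8.0e-6])

# Collapse the pointwise tally onto the group grid by summing the
# contributions that fall inside each group.
group_index = np.digitize(mc_energies, group_edges) - 1
mc_groupwise = np.zeros(len(group_edges) - 1)
for g, value in zip(group_index, mc_leakage):
    if 0 <= g < mc_groupwise.size:
        mc_groupwise[g] += value

# Hypothetical multigroup (deterministic) leakage for the same four groups.
mg_leakage = np.array([3.0e-7, 5.2e-6, 1.9e-5, 7.5e-6])

# Groupwise ratio between the two calculations, the quantity usually
# inspected when comparing leakage spectra.
print(mc_groupwise / mg_leakage)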
Abstract: At present, big data is very popular because it has proved highly successful in many fields, such as social media and e-commerce transactions. Big data describes the tools and technologies needed to capture, manage, store, distribute, and analyze petabyte-sized or larger datasets with varied structures at high speed. Big data can be structured, unstructured, or semi-structured. Hadoop is an open-source framework used to process large amounts of data inexpensively and efficiently, and job scheduling is a key factor in achieving high performance in big data processing. This paper gives an overview of big data and highlights its problems and challenges. It then describes the Hadoop Distributed File System (HDFS), Hadoop MapReduce, and the various components that affect the performance of job scheduling algorithms in big data, such as the JobTracker, TaskTracker, NameNode, and DataNode. The primary purpose of this paper is to present a comparative study of job scheduling algorithms, along with their experimental results, in a Hadoop environment. In addition, the paper describes the advantages, disadvantages, features, and drawbacks of various Hadoop job schedulers, such as FIFO, Fair, Capacity, Deadline Constraints, Delay, LATE, and Resource Aware, and provides a comparative study among these schedulers.
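For orientation, Hadoop's default FIFO policy simply offers every free task slot to the oldest waiting job. The sketch below illustrates that policy in Python; the Job fields and the scheduler interface are illustrative assumptions and do not reproduce Hadoop's actual JobTracker API.

from collections import deque
from dataclasses import dataclass
from typing import Optional

@dataclass
class Job:
    job_id: str
    submit_time: float   # arrival time of the job (illustrative field)
    tasks: int           # remaining tasks to run (illustrative field)

class FifoScheduler:
    """First-in, first-out policy: task slots always go to the oldest waiting job."""

    def __init__(self) -> None:
        self._queue: deque = deque()

    def submit(self, job: Job) -> None:
        self._queue.append(job)

    def next_job(self) -> Optional[Job]:
        # Offer every free slot to the job at the head of the queue.
        return self._queue[0] if self._queue else None

    def task_finished(self, job: Job) -> None:
        job.tasks -= 1
        # Retire the job once all of its tasks have completed.
        if job.tasks == 0 and self._queue and self._queue[0] is job:
            self._queue.popleft()

# A short, later job waits behind an earlier, long one -- the behaviour that
# the Fair and Capacity schedulers are designed to avoid.
scheduler = FifoScheduler()
scheduler.submit(Job("long-etl-job", submit_time=0.0, tasks=1000))
scheduler.submit(Job("small-report-job", submit_time=1.0, tasks=2))
print(scheduler.next_job().job_id)   # -> long-etl-job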
Abstract: This work illustrates the innovative results obtained by applying the recently developed 2nd-order predictive modeling methodology called “2nd-BERRU-PM”, where the acronym BERRU denotes “best-estimate results with reduced uncertainties” and “PM” denotes “predictive modeling.” The physical system selected for this illustrative application is a polyethylene-reflected plutonium (acronym: PERP) OECD/NEA reactor physics benchmark. This benchmark is modeled using the neutron transport Boltzmann equation (involving 21,976 uncertain parameters), whose solution is representative of “large-scale computations.” The results obtained in this work confirm that the 2nd-BERRU-PM methodology predicts best-estimate results that fall between the corresponding computed and measured values, while reducing the standard deviations of the predicted results to values smaller than either the experimentally measured or the computed standard deviations. The results also indicate that 2nd-order response sensitivities must always be included to quantify the need for including (or not) the 3rd- and/or 4th-order sensitivities. When the parameters are known with high precision, the contributions of the higher-order sensitivities diminish with increasing order, so that including the 1st- and 2nd-order sensitivities may suffice for obtaining accurate best-estimate response values and best-estimate standard deviations. On the other hand, when the parameters’ standard deviations are large enough to approach (or exceed) the radius of convergence of the multivariate Taylor series that represents the response in the phase-space of model parameters, the contributions stemming from the 3rd- and even 4th-order sensitivities are necessary to ensure consistency between the computed and measured response. In such cases, using only the 1st-order sensitivities erroneously indicates that the computed results are inconsistent with the respective measured response. Ongoing research aims at extending the 2nd-BERRU-PM methodology to fourth order, thus enabling the computation of third-order response correlations (skewness) and fourth-order response correlations (kurtosis).
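The qualitative behaviour summarized above (best-estimate values lying between the computed and measured results, with reduced standard deviations) is already visible in the simplest scalar case of inverse-variance weighting of a computed response r^c (variance \sigma_c^2) and a measured response r^m (variance \sigma_m^2); the expression below is only a one-dimensional caricature, not the actual multivariate 2nd-BERRU-PM formulas:

r^{be} = \frac{r^{c}/\sigma_{c}^{2} + r^{m}/\sigma_{m}^{2}}{1/\sigma_{c}^{2} + 1/\sigma_{m}^{2}}, \qquad \frac{1}{(\sigma^{be})^{2}} = \frac{1}{\sigma_{c}^{2}} + \frac{1}{\sigma_{m}^{2}} \;\Rightarrow\; \sigma^{be} \le \min(\sigma_{c}, \sigma_{m}).

The role of the higher-order sensitivities can likewise be read off the multivariate Taylor expansion of the response about the nominal parameter values, whose radius of convergence is the quantity referred to above:

r(\boldsymbol{\alpha}) = r(\boldsymbol{\alpha}^{0}) + \sum_{i} \frac{\partial r}{\partial \alpha_i}\,\delta\alpha_i + \frac{1}{2}\sum_{i,j} \frac{\partial^{2} r}{\partial \alpha_i\,\partial \alpha_j}\,\delta\alpha_i\,\delta\alpha_j + \cdots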
Fund: The project was supported by the Natural Science Foundation of Anhui Province (No. 01043601)
Abstract: HENDL1.0/MG, a multi-group working library of the Hybrid Evaluated Nuclear Data Library, was developed in-house by the FDS Team of ASIPP (Institute of Plasma Physics, Chinese Academy of Sciences) on the basis of several national data libraries. To validate and qualify the process of producing HENDL1.0/MG, simulation calculations of a series of existing spherical-shell benchmark experiments (Al, Mo, Co, Ti, Mn, W, Be and V) have been performed with HENDL1.0/MG and the multifunctional neutronics code system VisualBUS, which was also developed in-house by the FDS Team.
Abstract: The multi-group working nuclear data library HENDL1.0/MG is numerically tested against a series of existing spherical-shell benchmark experiments (Si, Cr, Fe, Cu, Zr and Nb) through calculations with the multifunctional neutronics code VisualBUS. The ratios of calculated-to-measured neutron leakage rates and the neutron leakage spectra are presented in tables and figures. Results calculated with the code ANISN and the IAEA data library FENDL2.0/MG, whose underlying data originate from sources different from those of HENDL1.0/MG, are also included for comparison.
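For reference, the tabulated calculated-to-measured quantity can be written generically as

\left(\frac{C}{E}\right) = \frac{\int_{E_{\min}}^{E_{\max}} \Phi^{\mathrm{calc}}(E)\,\mathrm{d}E}{\int_{E_{\min}}^{E_{\max}} \Phi^{\mathrm{exp}}(E)\,\mathrm{d}E},

where \Phi denotes the neutron leakage spectrum and the integration limits are chosen per benchmark (either the whole spectrum or an individual energy interval). This generic definition is given for orientation only and is not quoted from the paper.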
Abstract: Systems-on-a-chip with intellectual property cores need a large volume of data for testing. A large test data volume requires long testing times and large test data memory, so new techniques are needed to optimize the test data volume, decrease the testing time, and overcome the ATE memory limitation for SOC designs. This paper presents a new test data compression method for intellectual-property-core-based systems-on-chip. The proposed method is based on new split-data variable length (SDV) codes, which are designed using split-options along with identification bits in a string of test data. The paper analyses the reduction in test data volume, testing time, run time, and ATE memory requirements, as well as the improvement in compression ratio. Experimental results for the ISCAS 85 and ISCAS 89 benchmark circuits show that SDV codes outperform other compression methods, achieving the best compression ratio for test data compression. A decompression architecture for SDV codes is also presented for decoding the compressed bits. The proposed scheme shows that SDV codes can adapt to any variation in the input test data stream.
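As background on the family of techniques being compared, variable-length test-data compression schemes replace runs or blocks of the test stream with shorter codewords. The Python sketch below shows a plain run-length encoding of a test cube whose don't-care bits are filled to lengthen runs; it is a generic illustration only, not the SDV code construction proposed in the paper, and the test cube is a made-up example.

def run_length_encode(bits):
    """Encode a binary test stream as (symbol, run_length) pairs."""
    if not bits:
        return []
    encoded = []
    current, count = bits[0], 1
    for b in bits[1:]:
        if b == current:
            count += 1
        else:
            encoded.append((current, count))
            current, count = b, 1
    encoded.append((current, count))
    return encoded

# A scan-chain test cube with don't-cares ('X') mapped to '0' to lengthen runs,
# the usual first step before any run-length-style encoding.
test_cube = "00XX0001111X110000"
filled = test_cube.replace("X", "0")
print(run_length_encode(filled))
# -> [('0', 7), ('1', 4), ('0', 1), ('1', 2), ('0', 4)]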
Abstract: Data envelopment analysis (DEA) not only assesses the efficiency of decision making units (DMUs) but also provides the inefficient units with improvement measures for eliminating their inefficiency, i.e., benchmark information. However, the benchmark targets that classical DEA models assign to inefficient units are difficult to reach in a single step, and such models make insufficient use of grouping information. Working within the context-dependent DEA framework, this paper proposes a group-based two-step DEA benchmark learning model. The model uses a weighted L1 norm to measure how close a unit under evaluation is to its corresponding target. By minimizing the distance from the actual point to the Pareto-efficient frontier, a separate benchmark is set for each DMU on both the within-group and the global best-practice frontiers, which resolves the practical difficulty of reaching a target point in one step; the model's results can therefore be regarded as a long-term improvement strategy toward best practice. Because grouping information is fully taken into account, the model reflects the environment surrounding the DMUs involved in a given benchmarking process and gives the DMUs within a group greater flexibility in setting targets. The model is applied to evaluating the research performance of Spanish public universities, and comparative experiments confirm its advantages.
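The target-setting step described above can be written schematically as a weighted-L1 projection of the evaluated unit onto an efficient frontier; the notation below (inputs x, outputs y, weights w, frontier set F) is generic and chosen for illustration rather than taken from the paper's exact formulation:

\min_{(\hat{x},\,\hat{y}) \in F} \; \sum_{i} w_i^{x}\,\bigl|x_{i0} - \hat{x}_i\bigr| + \sum_{r} w_r^{y}\,\bigl|y_{r0} - \hat{y}_r\bigr|,

where (x_{i0}, y_{r0}) are the observed inputs and outputs of the DMU under evaluation and F is either the within-group or the global best-practice frontier, yielding the group-level and global benchmarks respectively.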