Funding: This work was financially supported by the "95" Key Project of China (No. 1997-02-08).
Abstract: One way to form the protective slag-metal shell is to design the cooling staves and hearth appropriately so that they do not overheat during the campaign life of the furnace. Three-dimensional steady-state mathematical models for calculating the temperature distribution in the coolers, and two-dimensional unsteady mathematical models with phase-change latent heat for calculating the temperature distribution of the hearth bottom, were established. The calculation results show that the formation of the slag-metal protection shell can be achieved by optimizing the design parameters of the coolers. Increasing the thermal conductivity of the carbon brick moves the 1150 °C isotherm upward and out of the hearth bottom.
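The abstract does not give the governing equations or material data, but the kind of two-dimensional unsteady model with latent heat it describes can be illustrated with a minimal explicit finite-difference sketch in which the latent heat is handled by an apparent heat capacity over an assumed melting interval around 1150 °C. All grid sizes, material properties, and boundary temperatures below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative 2-D explicit finite-difference model of unsteady heat conduction
# with phase-change latent heat handled by an apparent heat capacity.
# All values below are assumptions for demonstration, not data from the paper.
nx, ny = 50, 50
dx = dy = 0.02                 # grid spacing, m
k = 15.0                       # thermal conductivity, W/(m.K)
rho, cp = 1600.0, 1000.0       # density kg/m^3, specific heat J/(kg.K)
L = 270e3                      # latent heat of the freezing layer, J/kg
T_sol, T_liq = 1140.0, 1160.0  # assumed melting interval around 1150 deg C

def apparent_cp(T):
    """Smear the latent heat over the melting interval (apparent heat capacity)."""
    c = np.full_like(T, cp)
    melting = (T >= T_sol) & (T <= T_liq)
    c[melting] += L / (T_liq - T_sol)
    return c

T = np.full((ny, nx), 300.0)   # initial temperature field, deg C
T[0, :] = 1500.0               # hot face (hearth side), held fixed
T[-1, :] = 40.0                # cooled face (stave side), held fixed

alpha = k / (rho * cp)
dt = 0.2 * dx**2 / alpha       # below the explicit 2-D stability limit dx^2/(4*alpha)

for _ in range(20000):
    c = apparent_cp(T)
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) - 2.0 * T) / dy**2 \
        + (np.roll(T, 1, 1) + np.roll(T, -1, 1) - 2.0 * T) / dx**2
    T[1:-1, 1:-1] += dt * k / (rho * c[1:-1, 1:-1]) * lap[1:-1, 1:-1]

# Approximate position of the 1150 deg C isotherm along the vertical centreline.
row_1150 = int(np.argmin(np.abs(T[:, nx // 2] - 1150.0)))
print("1150 deg C isotherm near row", row_1150, "of", ny)
```

In the paper's setting, the position of the 1150 °C isotherm relative to the hearth bottom would be read from such a temperature field; this sketch only demonstrates the numerical treatment of the phase-change term, not the authors' model or geometry.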
Abstract: Cloud computing, after its success as a commercial infrastructure, is now emerging as a private infrastructure. The software platforms available for building private cloud computing infrastructure vary in their performance for managing cloud resources as well as in their utilization of local physical resources. Organizations and individuals looking to reap the benefits of private cloud computing need to understand which software platform would provide efficient services and optimum utilization of cloud resources for their target applications. In this paper, we present our initial study on the performance evaluation and comparison of three cloud computing software platforms from the perspective of common cloud users who intend to build their private clouds. We compare the performance of the selected software platforms in several respects that describe their suitability for applications from different domains. Our results highlight the critical parameters for performance evaluation of a software platform and the best software platform for different application domains.
Abstract: Monte Carlo methods are a class of computational algorithms that rely on repeated random sampling to compute their results. They are often used to simulate complex systems. Because of their reliance on repeated computation of random or pseudo-random numbers, these methods are most suited to calculation by a computer and tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. In finance, the Monte Carlo simulation method is used to value companies and to evaluate economic investments and financial derivatives. Grid computing, on the other hand, applies the heterogeneous computer resources of many geographically dispersed computers in a network to a single problem that requires a great number of processing cycles or access to large amounts of data. In this paper, we have developed a Monte Carlo simulation, deployed on a grid computing infrastructure, that predicts future trends in stock prices through complex calculations.
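The abstract does not state which price model the simulation uses. As a minimal single-node sketch, assuming geometric Brownian motion for the stock price, a Monte Carlo estimate of the distribution of the terminal price might look as follows; the drift, volatility, horizon, and path count are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative single-node Monte Carlo simulation of stock price paths.
# The price model (geometric Brownian motion) and all parameters are assumptions,
# not details taken from the paper.
rng = np.random.default_rng(42)

S0, mu, sigma = 100.0, 0.08, 0.25   # spot price, annual drift, annual volatility
horizon, steps, n_paths = 1.0, 252, 50_000
dt = horizon / steps

# Simulate log-returns for every path and time step, then accumulate them.
z = rng.standard_normal((n_paths, steps))
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S_T = S0 * np.exp(log_returns.sum(axis=1))   # terminal prices of all paths

print("mean terminal price:", S_T.mean())
print("5% / 95% quantiles:", np.quantile(S_T, [0.05, 0.95]))
```

On a grid, the path count would typically be partitioned across worker nodes, each drawing from an independent random stream, with the per-node results aggregated into the final estimate; the sketch above shows only the sampling step each worker would perform.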
基金supported by the Discovery grant No.RGPIN 2014-05254 from Natural Science&Engineering Research Council(NSERC),Canada
Abstract: Cloud monitoring is a source of big data that is constantly produced from traces of infrastructures, platforms, and applications. Analysis of monitoring data delivers insights into the system's workload and usage patterns and ensures that workloads are operating at optimum levels. The analysis process involves data query and extraction, data analysis, and result visualization. Since the volume of monitoring data is large, these operations require a scalable and reliable architecture to extract, aggregate, and analyze data at an arbitrary range of granularity. Ultimately, the results of analysis become the knowledge of the system and should be shared and communicated. This paper presents our cloud service architecture, which exploits a search cluster for data indexing and query. We develop REST APIs through which the data can be accessed by different analysis modules. This architecture enables extensions that integrate with software frameworks for both batch processing (such as Hadoop) and stream processing (such as Spark) of big data. The analysis results are structured in Semantic MediaWiki pages in the context of the monitoring data source and the analysis process. This cloud architecture is empirically assessed to evaluate its responsiveness when processing a large set of data records under node failures.
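The abstract does not name the search engine or the shape of its REST APIs. Purely as an illustration, the sketch below assumes an Elasticsearch-compatible search cluster with made-up index and field names, and shows how an analysis module might pull an aggregated view of monitoring data (average CPU usage per minute over the last hour) through a single REST call.

```python
import requests

# Hypothetical example: an analysis module pulling aggregated monitoring data
# over REST. The paper does not name the search engine; this sketch assumes an
# Elasticsearch-compatible cluster, and the index/field names are made up.
SEARCH_URL = "http://localhost:9200/monitoring-metrics/_search"

query = {
    "size": 0,
    "query": {"range": {"timestamp": {"gte": "now-1h"}}},
    "aggs": {
        "per_minute": {
            "date_histogram": {"field": "timestamp", "fixed_interval": "1m"},
            "aggs": {"avg_cpu": {"avg": {"field": "cpu_usage"}}},
        }
    },
}

resp = requests.post(SEARCH_URL, json=query, timeout=10)
resp.raise_for_status()
for bucket in resp.json()["aggregations"]["per_minute"]["buckets"]:
    print(bucket["key_as_string"], bucket["avg_cpu"]["value"])
```

Heavier aggregations over longer time ranges would instead be handed to the batch (Hadoop) or stream (Spark) extensions that the abstract describes; the REST query here stands in for the lightweight, interactive end of that spectrum.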