Indirect heating and intensified digestion technology can be applied to greatly reduce the energy consumption of the Bayer process for diasporic bauxite. A great advantage of the two-stream process is that it avoids or efficiently reduces the serious scaling of bauxite slurry on indirect heating surfaces, which inevitably occurs in the single-stream process and causes great trouble for indirect heating. Based on a large number of experiments and theoretical analysis, this paper develops a new lime-addition technology for the two-stream digestion process, in which lime is added to the spent-liquor stream instead of to the bauxite slurry; this is better suited to the two-stream processing of diasporic bauxite. The influences of the new lime-addition technology on the preheating and digestion processes are discussed. It is deduced that the new technology can also be applied efficiently to the two-stream processing of non-diasporic bauxite.
In the era of Big Data, the typical architecture of a distributed real-time stream processing system is the combination of Flume, Kafka, and Storm. As a distributed message system, Kafka offers horizontal scalability and high throughput, and is mainly deployed to address the speed mismatch between message producers and consumers. When using Kafka, data sent by producers must be received quickly, and data must likewise be delivered to consumers quickly; the performance of Kafka is therefore critical to the performance of the whole stream processing system. In this paper, we propose an improved design for real-time stream processing systems, focusing on Kafka's data loading process. We use kafkacat to transfer data from the source directly to a Kafka topic, which reduces network transmission, and we utilize a memory file system to accelerate data loading, which addresses the bottleneck and performance problems caused by disk I/O. Extensive experiments evaluate the performance and show the superiority of our improved design.
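The broker role Kafka plays here, decoupling fast producers from slower consumers through a buffer, can be sketched in miniature with a bounded in-memory queue. This is a toy stdlib illustration of the speed-mismatch problem, not Kafka's API; all names below are made up.

```python
import queue
import threading

def run_pipeline(n_items, buffer_size=8):
    """Toy illustration of a bounded buffer decoupling a fast
    producer from a slower consumer (the role a broker like Kafka
    plays at scale)."""
    buf = queue.Queue(maxsize=buffer_size)  # broker-like buffer
    consumed = []

    def producer():
        for i in range(n_items):
            buf.put(i)          # blocks when the buffer is full (back-pressure)
        buf.put(None)           # sentinel: end of stream

    def consumer():
        while True:
            item = buf.get()
            if item is None:
                break
            consumed.append(item)

    t_p = threading.Thread(target=producer)
    t_c = threading.Thread(target=consumer)
    t_p.start(); t_c.start()
    t_p.join(); t_c.join()
    return consumed

print(run_pipeline(100) == list(range(100)))  # → True
```

A single FIFO queue preserves order and applies back-pressure when full; Kafka generalizes this with persistent, partitioned, replicated logs.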
In this work we discuss SDSPbMM, an integrated Strategy for Data Stream Processing based on Measurement Metadata, applied to an outpatient monitoring scenario. The measures associated with the attributes of the patient (the entity under monitoring) come from heterogeneous data sources as data streams, together with metadata associated with the formal definition of a measurement and evaluation project. Such metadata supports patient analysis and monitoring in a more consistent way, facilitating for instance: i) the early detection of problems typical of data, such as missing values and outliers; and ii) risk anticipation by means of on-line classification models adapted to the patient. We also performed a simulation using a prototype developed for outpatient monitoring, in order to empirically analyze processing times and scalability, which sheds light on the feasibility of applying the prototype to real situations. In addition, we analyzed the simulation results statistically in order to detect the components that contribute the most variability to the system.
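The metadata-driven checks described in point i) can be sketched as a per-measure validation step: each streamed value is compared against the metadata that formally defines its attribute. The metadata fields and the temperature attribute below are hypothetical examples, not SDSPbMM's actual schema.

```python
def check_measure(value, metadata):
    """Flag a single streamed measure against its metadata
    (hypothetical range fields; a real measurement and evaluation
    project defines these formally)."""
    if value is None:
        return "missing"
    lo, hi = metadata["min"], metadata["max"]
    if value < lo or value > hi:
        return "outlier"
    return "ok"

# Hypothetical metadata for an axillary-temperature attribute.
temp_meta = {"attribute": "axillary_temperature", "unit": "C",
             "min": 35.0, "max": 42.0}

stream = [36.8, None, 51.2, 37.1]
flags = [check_measure(v, temp_meta) for v in stream]
print(flags)  # → ['ok', 'missing', 'outlier', 'ok']
```

Keeping the valid ranges in metadata rather than in code is what lets the same processing pipeline be reused across attributes and patients.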
To analyze the physical structure of the assembly process and assure product quality, the quality stability of a multi-station assembly process was investigated. First, the assembly process was modeled as a one-dimensional discrete variation system via a state-space equation based on the variation stream. Then, a criterion to judge whether the process is stable, and an index, the stability degree, to quantify the level of stability, were derived by analyzing the bounded-input bounded-output (BIBO) stability of the system. Finally, a simulated example of a three-station sheet-metal assembly process was provided to verify the effectiveness of the proposed method.
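For a discrete state-space model x_k = A x_{k-1} + B u_k, BIBO stability hinges on the spectral radius of the state-transition matrix A being below 1, so bounded station-level disturbances cannot accumulate without bound along the variation stream. A minimal plain-Python sketch, with a made-up two-feature transfer matrix (the paper's actual matrices come from fixture geometry):

```python
def mat_vec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def spectral_radius(A, iters=200):
    """Estimate the largest |eigenvalue| of A by power iteration
    (plain-Python sketch; numpy.linalg.eigvals is the usual tool)."""
    x = [1.0] * len(A)
    lam = 0.0
    for _ in range(iters):
        y = mat_vec(A, x)
        lam = max(abs(v) for v in y)   # max-norm of the iterate
        if lam == 0.0:
            return 0.0
        x = [v / lam for v in y]       # re-normalize
    return lam

# Hypothetical variation-transfer matrix: each station passes on 60%
# of upstream variation and couples 20% across two features.
A = [[0.6, 0.2],
     [0.2, 0.6]]
rho = spectral_radius(A)
print(rho < 1.0)  # variation stream is BIBO stable → True
```

Here the eigenvalues are 0.8 and 0.4, so variation decays from station to station; a spectral radius at or above 1 would mean upstream deviations are amplified downstream.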
With the advent of the IoT era, the amount of real-time data processed in data centers has increased explosively. As a result, stream mining, that is, extracting useful knowledge from huge amounts of data in real time, is attracting more and more attention. Real-time stream processing is expected to become more difficult in the near future, however, because the performance of processing platforms continues to increase at a rate of only 10% - 15% each year, while the amount of data to be processed is increasing exponentially. In this study, we focused on a promising stream mining algorithm, specifically a Frequent Itemset Mining (FIsM) algorithm, and improved its performance using an FPGA. FIsM algorithms are important, basic data-mining techniques used to discover association rules from transactional databases. We improved a recently proposed approximate FIsM algorithm so that it fits efficiently onto a hardware architecture, and then ran experiments on an FPGA. As a result, we achieved a speed 400% faster than the original algorithm implemented on a CPU. Moreover, our FPGA prototype showed a 20-fold speed improvement over the CPU version.
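What a FIsM algorithm computes can be shown with a minimal exact miner: count itemsets across transactions and keep those meeting a support threshold. The paper accelerates an approximate FIsM algorithm on an FPGA; the exact Apriori-flavoured baseline below (all names illustrative) only shows the task itself.

```python
from itertools import combinations
from collections import Counter

def frequent_itemsets(transactions, min_support, max_size=2):
    """Minimal exact frequent-itemset miner (Apriori flavour),
    counting itemsets up to max_size items."""
    counts = Counter()
    for t in transactions:
        items = sorted(set(t))
        for k in range(1, max_size + 1):
            for combo in combinations(items, k):
                counts[combo] += 1
    return {iset: c for iset, c in counts.items() if c >= min_support}

tx = [["bread", "milk"],
      ["bread", "beer"],
      ["bread", "milk", "beer"],
      ["milk"]]
print(frequent_itemsets(tx, min_support=3))
# → {('bread',): 3, ('milk',): 3}
```

The counting loop is embarrassingly parallel across transactions, which is one reason itemset counting maps well onto FPGA hardware.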
The present work describes the amount of di-n-butyl phosphate (DBP) produced when PUREX solvent (30% tri-n-butyl phosphate (TBP) mixed with 70% hydrocarbon diluent) is exposed to intense radiolytic and chemical attack during the separation of uranium and plutonium from fission products in FBTR mixed-carbide fuel reprocessing solution. DBP is the major degradation product of TBP. The amounts of DBP formed in the lean organic streams of FBTR carbide fuel reprocessing solutions of different burn-ups were analyzed by gas chromatography. The method is based on the preparation of diazomethane, the conversion of non-volatile DBP into volatile, stable derivatives by the action of diazomethane, and subsequent determination by gas chromatography (GC). A calibration graph was constructed for DBP over the concentration range 200 to 1800 ppm, with a correlation coefficient of 0.99587 and an RSD of 1.2%. The degraded 30% TBP-NPH solvent, loaded with heavy-metal ions such as uranium, was analyzed after repeated use, and the results were compared with a standard ion chromatographic technique. A column comparison study for selecting a proper gas chromatographic column to separate DBP from the other components in a single injected aliquot is also examined.
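The calibration graph and its correlation coefficient come from an ordinary least-squares fit of detector response against standard concentrations. A stdlib sketch of that computation, using made-up peak areas over the same 200 - 1800 ppm range (the paper's actual data are not reproduced here):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares line y = a*x + b plus the
    correlation coefficient r of a calibration graph."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    syy = sum((y - my) ** 2 for y in ys)
    a = sxy / sxx                  # slope (response per ppm)
    b = my - a * mx                # intercept
    r = sxy / (sxx * syy) ** 0.5   # Pearson correlation
    return a, b, r

# Hypothetical DBP standards (ppm) vs. detector peak area:
conc = [200, 600, 1000, 1400, 1800]
area = [410, 1190, 2030, 2790, 3610]
a, b, r = linear_fit(conc, area)
print(r > 0.999)  # a usable calibration should give r close to 1 → True
```

An unknown sample's concentration is then read off as (peak_area - b) / a, valid only inside the calibrated range.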
Most distributed stream processing engines (DSPEs) do not support online task management and cannot adapt to time-varying data flows. Recently, some studies have proposed online task deployment algorithms to solve this problem. However, these approaches do not guarantee Quality of Service (QoS) when the task deployment changes at runtime, because the task migrations caused by the change of deployments impose an exorbitant cost. We study one of the most popular DSPEs, Apache Storm, and find that when a task needs to be migrated, Storm has to stop the resource (implemented as a Worker process in Storm) where the task is deployed. This leads to the stop and restart of all tasks in that resource, resulting in poor task-migration performance. To solve this problem, in this paper we propose N-Storm (Nonstop Storm), a task-resource decoupling DSPE. N-Storm allows the tasks allocated to resources to be changed at runtime, implemented by a thread-level scheme for task migrations. In particular, we add a local shared key/value store on each node to make resources aware of changes in the allocation plan, so that each resource can manage its own tasks at runtime. Based on N-Storm, we further propose Online Task Deployment (OTD). Differing from traditional task deployment algorithms, which deploy all tasks at once without considering the cost of the task migrations caused by a re-deployment, OTD gradually adjusts the current task deployment toward an optimized one based on the communication cost and the runtime states of resources. We demonstrate that OTD can adapt to different kinds of applications, including computation- and communication-intensive applications. Experimental results on a real DSPE cluster show that N-Storm can avoid the system stop and save up to 87% of the performance degradation time compared with Apache Storm and other state-of-the-art approaches. In addition, OTD can increase average CPU usage by 51% for computation-intensive applications and reduce network communication costs by 88% for communication-intensive applications.
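The task-resource decoupling idea can be sketched as workers polling a shared allocation plan and starting or stopping task threads themselves, so a migration never restarts the whole worker process. The class and field names below are illustrative, not N-Storm's API, and a plain dict stands in for the node-local key/value store.

```python
import threading
import time

class Worker:
    """Toy sketch of thread-level task migration: the worker reads a
    shared allocation plan (task -> worker name) and reconciles its
    running task threads against it, touching only migrated tasks."""
    def __init__(self, name, plan):
        self.name = name
        self.plan = plan          # shared dict standing in for the K/V store
        self.running = {}         # task -> stop Event

    def sync(self):
        wanted = {t for t, w in self.plan.items() if w == self.name}
        for task in wanted - self.running.keys():   # start newly assigned tasks
            stop = threading.Event()
            threading.Thread(target=self._run, args=(task, stop),
                             daemon=True).start()
            self.running[task] = stop
        for task in set(self.running) - wanted:     # stop migrated tasks only
            self.running.pop(task).set()

    def _run(self, task, stop):
        while not stop.is_set():
            time.sleep(0.01)      # placeholder for real tuple processing

plan = {"spout-1": "w1", "bolt-1": "w1", "bolt-2": "w2"}
w1, w2 = Worker("w1", plan), Worker("w2", plan)
w1.sync(); w2.sync()
plan["bolt-1"] = "w2"             # migrate one task at runtime
w1.sync(); w2.sync()
print(sorted(w1.running), sorted(w2.running))
# → ['spout-1'] ['bolt-1', 'bolt-2']
```

Only bolt-1's thread is stopped and recreated; spout-1 and bolt-2 keep running throughout, which is the contrast with Storm's Worker-process restart.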
Funding (Kafka-based stream processing study): supported by the Research Fund of the National Key Laboratory of Computer Architecture under Grant No. CARCH201501, and the Open Project Program of the State Key Laboratory of Mathematical Engineering and Advanced Computing under Grant No. 2016A09.
Funding (multi-station assembly stability study): supported by the National High-Tech Research and Development Program (National "863" Program) under Grant No. 2006AA04Z115, and the Tianjin Science and Technology Key Project under Grant No. 05YFGDGX08700.
Funding (N-Storm study): supported by the National Natural Science Foundation of China under Grant Nos. 62072419 and 61672479.