In the procedure of steady-state hierarchical optimization with feedback for large-scale industrial processes, a sequence of set-point changes with different magnitudes is carried out on the optimization layer. To improve the dynamic performance of the transient response driven by the set-point changes, a filter-based iterative learning control strategy is proposed. In the proposed updating law, a local-symmetric-integral operator is adopted to eliminate the measurement noise in the output information, a set of desired trajectories is specified according to the sequence of set-point changes, and the current control input is obtained iteratively by using the smoothed output error to modify the control input of the previous iteration, into which amplification coefficients related to the different magnitudes of the set-point changes are introduced. The convergence of the algorithm is established by incorporating frequency-domain techniques into the time-domain analysis. Numerical simulation demonstrates the effectiveness of the proposed strategy.
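The updating law described in this abstract can be illustrated with a toy sketch: a P-type iterative learning controller whose error is first passed through a local-symmetric-integral (moving-average) operator to suppress measurement noise. The first-order plant, gains, filter width, and noise level below are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def smooth(e, half_width=2):
    """Local symmetric integral operator: a clipped moving average
    that suppresses measurement noise in the output error."""
    n = len(e)
    out = np.empty(n)
    for t in range(n):
        lo, hi = max(0, t - half_width), min(n, t + half_width + 1)
        out[t] = e[lo:hi].mean()
    return out

def plant(u, a=0.9, b=0.5):
    """Illustrative first-order plant: y[t] = a*y[t-1] + b*u[t]."""
    y = 0.0
    out = np.empty(len(u))
    for t, ut in enumerate(u):
        y = a * y + b * ut
        out[t] = y
    return out

def ilc(y_d, iterations=50, gain=0.2, noise=0.01, seed=0):
    """P-type iterative learning control with a filtered error term:
    the previous iteration's input is modified by the smoothed error."""
    rng = np.random.default_rng(seed)
    u = np.zeros_like(y_d)
    for _ in range(iterations):
        y = plant(u) + rng.normal(0.0, noise, len(y_d))  # noisy measurement
        u = u + gain * smooth(y_d - y)                   # learning update
    return u

y_d = np.ones(50)  # desired trajectory for a unit set-point change
u = ilc(y_d)
print(np.abs(plant(u) - y_d).mean())
```

Over the iterations the tracking error shrinks toward the noise floor; the smoothing operator trades a slower correction of sharp transients for robustness to measurement noise.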
The energy Internet operation platform provides market entities such as energy users, energy enterprises, suppliers, and governments with the ability to interact, transact, and manage various operations. Owing to the large number of platform users, complex businesses, and large amounts of data-mining tasks, it is necessary to solve the problems afflicting platform task scheduling and the provision of simultaneous access to a large number of users. This study examines the two core technologies of platform task scheduling and multiuser concurrent processing, proposes a distributed task-scheduling method and a technical implementation scheme based on the particle swarm optimization algorithm, and presents a systematic solution for concurrent processing of massive user numbers. Based on the results of this study, the energy Internet operation platform can effectively deal with the concurrent access of tens of millions of users and complex task-scheduling problems.
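The idea of particle-swarm-based task scheduling can be sketched in miniature: particles encode a task-to-worker assignment as continuous positions that are decoded by flooring, and the swarm minimizes the makespan. The encoding, coefficients, and cost function are illustrative assumptions and are much simpler than the paper's platform-scale scheduler.

```python
import random

def makespan(assign, durations, n_workers):
    """Cost of an assignment: the load of the busiest worker."""
    loads = [0.0] * n_workers
    for task, w in enumerate(assign):
        loads[w] += durations[task]
    return max(loads)

def pso_schedule(durations, n_workers, n_particles=20, iters=100, seed=1):
    """Discrete scheduling via PSO: continuous positions in
    [0, n_workers) are decoded to worker indices by flooring."""
    rng = random.Random(seed)
    n = len(durations)
    pos = [[rng.uniform(0, n_workers) for _ in range(n)] for _ in range(n_particles)]
    vel = [[0.0] * n for _ in range(n_particles)]
    pbest = [p[:] for p in pos]

    def cost(p):
        return makespan([min(int(x), n_workers - 1) for x in p], durations, n_workers)

    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 0.0), n_workers - 1e-9)
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return [min(int(x), n_workers - 1) for x in gbest], gbest_cost

assign, best = pso_schedule([5, 3, 8, 2, 7, 4, 6, 1], 3)
print(assign, best)
```

For these eight task durations on three workers the total work is 36, so no schedule can beat a makespan of 12; the swarm typically lands at or near that bound.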
Task decomposition is a powerful technique increasingly used within industry as a pathway to success in product development. In this paper, the concept of topology in modern mathematics is used to derive a task decomposition technique for the product development process. The paper puts forward the notions of resolvability, measurability, and connectivity of tasks, together with their practical principles. Combined with an example of developing a typical mechanical product, it explains the implementation of task decomposition in Concurrent Engineering (CE).
As one of the most widely used languages in the world, the Chinese language is distinct from most western languages in many properties, thus providing a unique opportunity for understanding the brain basis of human language and cognition. In recent years, non-invasive neuroimaging techniques such as magnetic resonance imaging (MRI) have blazed a new trail for comprehensively studying the specific neural correlates of Chinese language processing and of Chinese speakers. We reviewed the application of functional MRI (fMRI) in such studies and some essential findings on the brain systems involved in processing Chinese, for example, the application of task fMRI and resting-state fMRI in observing the processes of reading and writing logographic characters and of producing or listening to tonal speech. Elementary cognitive neuroscience and several potential research directions concerning the brain and the Chinese language are discussed, which may be informative for future research.
The battlefield environment simulation process is an important part of battlefield environment information support, and it needs to be built around the task process. At present, the interoperability between battlefield environment simulation systems and command and control systems is still imperfect, and the traditional simulation data model cannot meet war fighters' need for a highly efficient and accurate understanding and analysis of battlefield environment information. Therefore, a task-oriented battlefield environment simulation process model needs to be constructed to effectively analyze the key information demands of the command and control system. The structured characteristics of tasks and the simulation process are analyzed, and the simulation process concept model is constructed with an object-oriented method. The data model and formal syntax of GeoBML are analyzed, and the logical model of the simulation process is constructed with a formal language. The object data structure of the simulation process is defined, and the object model of the simulation process, which maps tasks, is constructed. In the end, the battlefield environment simulation platform modules are designed and applied based on this model, verifying that the model can effectively express the real-time dynamic correlation between battlefield environment simulation data and operational tasks.
The key-value store can provide flexibility of data types because it does not need to specify the data types to be stored in advance and can store any type of data as the value of a key-value pair. Various studies have been conducted to improve the performance of the key-value store while maintaining its flexibility. However, research efforts on storing large-scale values, such as multimedia data files (e.g., images or videos), in the key-value store have been limited. In this study, we propose a new key-value store, WR-Store++, aiming to store large-scale values stably. Specifically, it provides a new design that separates data and index by working with the built-in data structure of the Windows operating system and the file system. The utilization of the built-in data structure of the Windows operating system achieves the efficiency of the key-value store, and that of the file system extends the limited space of the storage significantly. We also present chunk-based memory management and parallel processing for WR-Store++ to further improve its performance in the GET operation. Through experiments, we show that WR-Store++ can store datasets at least 32.74 times larger than the existing baseline key-value store, WR-Store, which is limited in storing large-scale data sets. Furthermore, in terms of processing efficiency, we show that WR-Store++ outperforms not only WR-Store but also other state-of-the-art key-value stores, namely LevelDB, RocksDB, and BerkeleyDB, for individual key-value operations and mixed workloads.
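The chunk-based handling of large values can be illustrated with a minimal in-memory sketch: a value is split into fixed-size chunks on PUT and the chunks are fetched in parallel and reassembled on GET. The class, chunk size, and thread pool below are illustrative assumptions; WR-Store++'s actual use of Windows built-in structures and the file system is not reproduced here.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK = 4  # tiny chunk size, for illustration only

class ChunkedKV:
    """Toy key-value store that splits large values into chunks
    and reassembles them in parallel on GET (sketch only)."""
    def __init__(self):
        self._chunks = {}   # (key, chunk index) -> bytes
        self._count = {}    # key -> number of chunks

    def put(self, key, value: bytes):
        parts = [value[i:i + CHUNK] for i in range(0, len(value), CHUNK)] or [b""]
        for i, part in enumerate(parts):
            self._chunks[(key, i)] = part
        self._count[key] = len(parts)

    def get(self, key) -> bytes:
        n = self._count[key]
        # fetch chunks concurrently; map() preserves chunk order
        with ThreadPoolExecutor(max_workers=4) as pool:
            parts = pool.map(lambda i: self._chunks[(key, i)], range(n))
        return b"".join(parts)

kv = ChunkedKV()
kv.put("video", b"0123456789abcdef")
print(kv.get("video"))  # b'0123456789abcdef'
```

In a real store the chunks would live in files rather than a dict, so the parallel GET overlaps I/O instead of dictionary lookups.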
Stuttering is a common neurological deficit and its underlying cognitive mechanisms are a matter of debate, with evidence suggesting abnormal modulation between speech encoding and other cognitive components. Previous studies have mainly used single-task experiments to investigate the disturbance of language production. It is unclear whether there is abnormal interaction between the three language tasks (orthographic, phonological, and semantic judgment) in stuttering patients. This study used dual tasks and manipulated the stimulus onset asynchrony (SOA) between tasks 1 and 2 and the nature of the second task, including orthographic, phonological, and semantic judgments. The results showed that performance on orthographic judgment, phonological judgment, and semantic judgment was significantly reduced in the patient group compared with the control group at short SOA (P < 0.05). However, different patterns of interaction between task 2 and SOA were observed across subject groups: subjects with stuttering were more strongly modulated by SOA when the second task was semantic judgment or phonological judgment (P < 0.05), but not in the orthographic judgment experiment (P > 0.05). These results indicate that the interaction mechanism between semantic processing and phonological encoding might be an underlying cause of stuttering.
Large-scale magnetic structures are the main carrier of major eruptions in the solar atmosphere. These structures are rooted in the photosphere and are driven by the unceasing motion of the photospheric material through a series of equilibrium configurations. The motion brings energy into the coronal magnetic field until the system ceases to be in equilibrium. The catastrophe theory for solar eruptions indicates that loss of mechanical equilibrium constitutes the main trigger mechanism of major eruptions, usually manifested as solar flares, eruptive prominences, and coronal mass ejections (CMEs). Magnetic reconnection, which takes place at the very beginning of the eruption as a result of plasma instabilities/turbulence inside the current sheet, converts magnetic energy into heating and kinetic energy that are responsible for solar flares, and for accelerating both plasma ejecta (flows and CMEs) and energetic particles. The various manifestations are thus related to one another, and the physics behind these relationships is catastrophe and magnetic reconnection. This work reports on recent progress in both theoretical research and observations of eruptive phenomena showing the above manifestations. We start by displaying the properties of large-scale structures in the corona and the related magnetic fields prior to an eruption, and show various morphological features of the disrupting magnetic fields. Then, in the framework of the catastrophe theory, we look into the physics behind those features, investigated in a succession of previous works, and discuss the approaches they used.
Generating diverse and factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to offer paragraph-level semantic information for enhancing factual correctness. The challenge lies in effectively generating factual text using sentence-level variational autoencoder-based models. In this paper, a novel model called the fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, our model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network. By training the conditional variational autoencoder network, the model is enabled to generate text based on input facts. Building upon this foundation, the input text is passed to the discriminator along with the generated text. By employing adversarial training, the model is encouraged to generate text that is indistinguishable to the discriminator, thereby enhancing the quality of the generated text. To further improve factual correctness, inspired by natural language inference systems, an entailment recognition task is introduced and trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is proposed to reconstruct the loss of our model, forcing the generator to generate text consistent with the facts. Experimental results demonstrate that, compared with competitive models, our model achieves substantial improvements in both the quality and the factual correctness of the generated text, despite sacrificing only a small amount of diversity. Furthermore, when considering a comprehensive evaluation of diversity and quality metrics, our model also demonstrates the best performance.
The implementation of product development process management (PDPM) is an effective means of developing products with higher quality in a shorter lead time. It is argued in this paper that product, data, person, and activity are the basic factors in PDPM. With a detailed analysis of these basic factors and their relations in the product development process, all product development activities are considered as tasks, and the management of the product development process is regarded as the management of task execution. A task-decomposition-based product development model is proposed, with methods for constructing the task relation matrix from the layer model and the constraint model resulting from task decomposition. An algorithm for constructing the directed task graph is given and is used in the management of tasks. Finally, the usage and limitations of the proposed PDPM model are given, and further work is proposed.
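The step from a task relation matrix to an executable ordering of the directed task graph can be sketched with a standard topological sort (Kahn's algorithm). The matrix convention and the four-task example are illustrative assumptions, not the paper's actual algorithm.

```python
from collections import deque

def execution_order(n_tasks, relation):
    """Derive a feasible execution order from a task relation matrix.
    relation[i][j] == 1 means task i must finish before task j starts."""
    indeg = [0] * n_tasks
    for i in range(n_tasks):
        for j in range(n_tasks):
            if relation[i][j]:
                indeg[j] += 1
    ready = deque(t for t in range(n_tasks) if indeg[t] == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for j in range(n_tasks):       # releasing t unblocks its successors
            if relation[t][j]:
                indeg[j] -= 1
                if indeg[j] == 0:
                    ready.append(j)
    if len(order) != n_tasks:
        raise ValueError("cyclic constraints: tasks cannot be ordered")
    return order

# 4 development tasks: 0 precedes 1 and 2; 1 and 2 precede 3
R = [[0, 1, 1, 0],
     [0, 0, 0, 1],
     [0, 0, 0, 1],
     [0, 0, 0, 0]]
print(execution_order(4, R))  # [0, 1, 2, 3]
```

The cycle check matters in practice: a task decomposition whose constraint model contains a dependency loop cannot be scheduled and must be re-decomposed.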
Most distributed stream processing engines (DSPEs) do not support online task management and cannot adapt to time-varying data flows. Recently, some studies have proposed online task deployment algorithms to solve this problem. However, these approaches do not guarantee the Quality of Service (QoS) when the task deployment changes at runtime, because the task migrations caused by the change of task deployments impose an exorbitant cost. We study one of the most popular DSPEs, Apache Storm, and find that when a task needs to be migrated, Storm has to stop the resource (implemented as a Worker process in Storm) where the task is deployed. This leads to the stop and restart of all tasks in the resource, resulting in poor task migration performance. Aiming to solve this problem, in this paper we propose N-Storm (Nonstop Storm), a task-resource decoupling DSPE. N-Storm allows the tasks allocated to resources to be changed at runtime, implemented by a thread-level scheme for task migrations. In particular, we add a local shared key/value store on each node to make resources aware of changes in the allocation plan, so that each resource can manage its tasks at runtime. Based on N-Storm, we further propose Online Task Deployment (OTD). Differing from traditional task deployment algorithms that deploy all tasks at once without considering the cost of task migrations caused by a re-deployment, OTD can gradually adjust the current task deployment to an optimized one based on the communication cost and the runtime states of resources. We demonstrate that OTD can adapt to different kinds of applications, including computation- and communication-intensive applications. The experimental results on a real DSPE cluster show that N-Storm can avoid the system stop and save up to 87% of the performance degradation time compared with Apache Storm and other state-of-the-art approaches. In addition, OTD can increase the average CPU usage by 51% for computation-intensive applications and reduce network communication costs by 88% for communication-intensive applications.
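The shared key/value store mechanism can be mocked in a few lines: a versioned allocation map lets each resource notice a plan change and adjust only its own task set, instead of being stopped and restarted. The classes and the polling scheme are hypothetical illustrations, not N-Storm's actual implementation.

```python
import threading

class AllocationStore:
    """Stand-in for the per-node shared key/value store:
    maps task id -> worker id, with a version bumped on every change."""
    def __init__(self, allocation):
        self._alloc = dict(allocation)
        self._version = 0
        self._lock = threading.Lock()

    def migrate(self, task, new_worker):
        with self._lock:
            self._alloc[task] = new_worker
            self._version += 1

    def snapshot(self):
        with self._lock:
            return self._version, dict(self._alloc)

class Worker:
    """A worker polls the store and adjusts its task set in place;
    unaffected tasks keep running across a migration."""
    def __init__(self, wid, store):
        self.wid, self.store = wid, store
        self.seen_version, self.tasks = -1, set()

    def poll(self):
        version, alloc = self.store.snapshot()
        if version != self.seen_version:  # allocation plan changed
            self.tasks = {t for t, w in alloc.items() if w == self.wid}
            self.seen_version = version
        return self.tasks

store = AllocationStore({"t1": 0, "t2": 0, "t3": 1})
w0, w1 = Worker(0, store), Worker(1, store)
print(sorted(w0.poll()), sorted(w1.poll()))  # ['t1', 't2'] ['t3']
store.migrate("t2", 1)   # thread-level migration: only t2 moves
print(sorted(w0.poll()), sorted(w1.poll()))  # ['t1'] ['t2', 't3']
```

The key point the sketch captures is that a migration touches one map entry, so only the migrated task changes hands; neither worker process is restarted.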
Heterogeneous computing is an effective method of high-performance computing with many advantages. Task scheduling is a critical issue in heterogeneous environments as well as in homogeneous environments. A number of task scheduling algorithms for homogeneous environments have been proposed, whereas only a few for heterogeneous environments can be found in the literature. A novel task scheduling algorithm for heterogeneous environments, called the heterogeneous critical task (HCT) scheduling algorithm, is presented. By means of the directed acyclic graph and the Gantt chart, the HCT algorithm defines the critical task and the idle time slot. After determining the critical tasks of a given task, the HCT algorithm tentatively duplicates the critical tasks onto the processor that hosts the given task, in the idle time slot, to reduce the start time of the given task. To compare the performance of the HCT algorithm with several recently proposed algorithms, a large set of randomly generated applications and the Gaussian elimination application are used. The experimental results have shown that the HCT algorithm outperforms the other algorithms.
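The notion of a critical task can be sketched numerically: among the parents of a task, the critical one is the parent whose data arrives last at the target processor, since communication cost is paid only across processors; duplicating that parent locally removes its communication term and can pull the task's start time forward. The DAG, costs, and function names below are illustrative assumptions, not the HCT algorithm itself.

```python
def arrival_times(task, parents, finish, comm, proc_of, target_proc):
    """Data-arrival time of each parent of `task` at `target_proc`:
    the communication cost is paid only across processors."""
    return {
        p: finish[p] + (0 if proc_of[p] == target_proc else comm[(p, task)])
        for p in parents[task]
    }

def critical_parent(task, parents, finish, comm, proc_of, target_proc):
    """The critical task of `task`: the parent whose data arrives last.
    Duplicating it onto `target_proc` (into an idle time slot) removes
    its communication term and may reduce the start time of `task`."""
    arr = arrival_times(task, parents, finish, comm, proc_of, target_proc)
    return max(arr, key=arr.get)

# toy DAG: task 3 depends on tasks 1 and 2
parents = {3: [1, 2]}
finish = {1: 4, 2: 5}            # finish times of the parents
comm = {(1, 3): 6, (2, 3): 2}    # inter-processor communication costs
proc_of = {1: 0, 2: 1}           # parent 1 runs on P0, parent 2 on P1
# scheduling task 3 on P1: parent 1's data arrives at 4+6=10, parent 2's at 5
print(critical_parent(3, parents, finish, comm, proc_of, target_proc=1))  # 1
```

Here duplicating task 1 onto P1 (if an idle slot of length 4 exists) would cut task 3's earliest start from 10 to 5, which is exactly the kind of improvement the HCT algorithm probes for.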
Assembly process planning (APP) for complicated products is a time-consuming and difficult task with conventional methods. Virtual assembly process planning (VAPP) provides engineers a new and efficient way. Previous studies in VAPP are mostly isolated and dispersive, and have not established a complete picture or discussed the key realization techniques of VAPP from a systemic and integrated view. The integrated virtual assembly process planning (IVAPP) system is a new virtual-reality-based engineering application, which offers engineers an efficient, intuitive, immersive, and integrated method for assembly process planning in a virtual environment. Based on an analysis of the information integration requirements of VAPP, the architecture of IVAPP is proposed. Through the integrated structure, the IVAPP system can realize information integration and workflow control. In order to model the assembly process in IVAPP, a hierarchical assembly task list (HATL) is presented, in which the different assembly tasks for assembling different components are organized into a hierarchical list. A process-oriented automatic geometrical constraint recognition algorithm (AGCR) is proposed, so that geometrical constraints between components can be automatically recognized during the process of interactive assembling. At the same time, a progressive hierarchical reasoning (PHR) model is discussed. AGCR and PHR greatly reduce the interactive workload. A discrete control node model (DCNM) for cable harness assembly planning in IVAPP is detailed. DCNM converts a cable harness into continuous flexed line segments connected by a series of section center points, and designers can realize cable harness planning by controlling those control nodes. Mechanical assemblies (such as the transmission case and engine of an automobile) are used to illustrate the feasibility of the proposed method and algorithms. The application of the IVAPP system reveals advantages over the traditional assembly process planning method in shortening the time consumed in assembly planning and in minimizing handling difficulty, excessive reorientation, and dissimilarity of assembly operations.
Two types of persistent heavy rainfall events (PHREs) over the Yangtze River-Huaihe River Basin were determined in a recent statistical study: type A, whose precipitation is mainly located to the south of the Yangtze River; and type B, whose precipitation is mainly located to the north of the river. The present study investigated these two PHRE types using a newly derived set of energy equations to show the scale interactions and the main energy paths contributing to the persistence of the precipitation. The main results were as follows. The available potential energy (APE) and kinetic energy (KE) associated with both PHRE types generally increased upward in the troposphere, with the energy of the type-A PHREs stronger than that of the type-B PHREs (except in the middle troposphere). There were two main common energy paths for the two PHRE types: (1) the baroclinic energy conversion from APE to KE was the dominant energy source for the evolution of the large-scale background circulations; and (2) the downscaled energy cascade processes of KE and APE were vital for sustaining the eddy flow, which directly caused the PHREs. The significant differences between the two PHRE types mainly appeared in the lower troposphere, where the baroclinic energy conversion associated with the eddy flow in type-A PHREs was from KE to APE, which reduced the intensity of the precipitation-related eddy flow, whereas the conversion in type-B PHREs was from APE to KE, which enhanced the eddy flow.
Perovskite solar cells (PSCs) have emerged as one of the most promising candidates for photovoltaic applications. Low-cost, low-temperature solution processes, including coating and printing techniques, make PSCs promising for commercialization owing to their scalability and compatibility with large-scale, roll-to-roll manufacturing processes. In this review, we focus on the solution deposition of the charge transport layers and the perovskite absorption layer in both mesoporous and planar PSC devices. Furthermore, the most recent design strategies via solution deposition are presented as well, which have been explored to enlarge the active area, enhance crystallization, and passivate defects, leading to the performance improvement of PSC devices.
The genetic algorithm has been proposed to solve the problem of task assignment. However, it has some drawbacks: for example, it often takes a long time to find an optimal solution, and the success rate is low. To overcome these problems, a new coarse-grained parallel genetic algorithm with a central migration scheme is presented, which exploits isolated subpopulations. The new approach has been implemented in the PVM environment and has been evaluated on a workstation network for solving the task assignment problem. The results show that it not only significantly improves the result quality but also increases the speed of finding the best solution.
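The coarse-grained (island) scheme with central migration can be sketched sequentially: isolated subpopulations evolve independently, and every few generations each island's best individual is collected into a central pool and injected into the other islands. The GA operators, rates, and the toy cost matrix are illustrative assumptions; the paper's PVM version runs the islands as separate processes.

```python
import random

def assignment_cost(assign, cost):
    """Total cost of assigning task t to processor assign[t]."""
    return sum(cost[t][p] for t, p in enumerate(assign))

def island_ga(cost, n_islands=4, pop=20, gens=60, migrate_every=10, seed=7):
    rng = random.Random(seed)
    n_tasks, n_procs = len(cost), len(cost[0])
    islands = [[[rng.randrange(n_procs) for _ in range(n_tasks)]
                for _ in range(pop)] for _ in range(n_islands)]

    def fit(ind):
        return assignment_cost(ind, cost)

    for g in range(gens):
        for isl in islands:                         # evolve each island in isolation
            isl.sort(key=fit)
            next_gen = isl[:2]                      # elitism
            while len(next_gen) < pop:
                a, b = rng.sample(isl[:10], 2)      # parents from the better half
                cut = rng.randrange(1, n_tasks)
                child = a[:cut] + b[cut:]           # one-point crossover
                if rng.random() < 0.2:              # mutation
                    child[rng.randrange(n_tasks)] = rng.randrange(n_procs)
                next_gen.append(child)
            isl[:] = next_gen
        if (g + 1) % migrate_every == 0:            # central migration step
            pool = [min(isl, key=fit) for isl in islands]
            for i, isl in enumerate(islands):
                for j, mig in enumerate(pool):
                    if i != j:                      # migrant replaces a worst member
                        isl[isl.index(max(isl, key=fit))] = mig[:]
    best = min((min(isl, key=fit) for isl in islands), key=fit)
    return best, fit(best)

cost = [[4, 1, 9], [7, 3, 2], [5, 8, 1], [2, 6, 4]]  # cost[task][processor]
best, c = island_ga(cost)
print(best, c)
```

For this separable toy instance the optimum is 6 (each task on its cheapest processor); isolation preserves diversity while migration spreads good building blocks between islands.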
Multiple Earth-observing satellites need to communicate with each other to jointly observe a large number of targets on the Earth. Factors such as external interference cause satellite information interaction delays, which make it impossible to ensure the integrity and timeliness of the decision-making information available to the satellites, and the optimality of the planning result is affected. Therefore, the effect of communication delay is considered during the multi-satellite coordination process. For this problem, firstly, a distributed cooperative optimization problem for multiple satellites in a delayed communication environment is formulated. Secondly, based on both an analysis of the temporal sequence of tasks in a single satellite and the dynamically decoupled characteristics of the multi-satellite system, the environment information of multi-satellite distributed cooperative optimization is constructed on the basis of the directed acyclic graph (DAG). Then, a cooperative optimization decision-making framework and model are built according to the decentralized partially observable Markov decision process (DEC-POMDP). After that, satellite coordination strategies aimed at different conditions of communication delay are analyzed, and a unified processing strategy for communication delay is designed. An approximate cooperative optimization algorithm based on simulated annealing is proposed. Finally, the effectiveness and robustness of the method presented in this paper are verified via simulation.
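The simulated-annealing core of such an approximate optimizer can be sketched on a drastically simplified surrogate: annealing over task-to-satellite assignments with an additive cost. The cost matrix, move scheme, and cooling schedule are illustrative assumptions and stand in for, rather than reproduce, the paper's DEC-POMDP formulation.

```python
import math
import random

def anneal(cost, n_sats, T0=5.0, cooling=0.99, steps=4000, seed=3):
    """Simulated annealing over task-to-satellite assignments.
    cost[t][s] is the cost of giving observation task t to satellite s."""
    rng = random.Random(seed)
    n = len(cost)
    x = [rng.randrange(n_sats) for _ in range(n)]   # random initial assignment
    cur = sum(cost[t][x[t]] for t in range(n))
    best, best_cost = x[:], cur
    T = T0
    for _ in range(steps):
        t = rng.randrange(n)                        # propose: reassign one task
        old = x[t]
        x[t] = rng.randrange(n_sats)
        new = cur + cost[t][x[t]] - cost[t][old]    # incremental cost update
        if new <= cur or rng.random() < math.exp((cur - new) / T):
            cur = new                               # accept (always if not worse)
            if cur < best_cost:
                best, best_cost = x[:], cur
        else:
            x[t] = old                              # reject: undo the move
        T *= cooling                                # geometric cooling
    return best, best_cost

cost = [[3, 1], [2, 5], [4, 2], [1, 6]]  # 4 tasks, 2 satellites
best, best_cost = anneal(cost, n_sats=2)
print(best, best_cost)  # [1, 0, 1, 0] 6
```

Early on, the high temperature lets the search accept worse assignments and escape local minima; as T decays, the accept rule becomes effectively greedy and the assignment settles.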
Funding: This work was supported by the National Natural Science Foundation of China (No. 60274055).
Funding: Supported by the Science and Technology Project of State Grid Corporation "Research and Application of Internet Operation Platform for Ubiquitous Power Internet of Things" (5700-201955462A-0-0-00).
Funding: Supported by the State High-Tech Development Plan of China (No. 863-511-9930-007).
Funding: Supported by the National Natural Science Foundation of China (Grants 81790650, 81790651, 81727808, 81627901, and 31771253), the Beijing Municipal Science and Technology Commission (Grants Z171100000117012 and Z181100001518003), and the Collaborative Research Fund of the Chinese Institute for Brain Research, Beijing (No. 2020-NKXPT-02).
Funding: Supported by the National Natural Science Foundation of China (41271393).
Funding: the China Postdoctoral Science Foundation, No. 2001, #14; the Capital Medical Development Science Research Program, No. 2005-2003
Abstract: Stuttering is a common neurological deficit whose underlying cognitive mechanisms are a matter of debate, with evidence suggesting abnormal modulation between speech encoding and other cognitive components. Previous studies have mainly used single-task experiments to investigate the disturbance of language production. It is unclear whether there is an abnormal interaction among the three language tasks (orthographic, phonological, and semantic judgment) in stuttering patients. This study used dual tasks and manipulated the stimulus onset asynchrony (SOA) between tasks 1 and 2 as well as the nature of the second task: orthographic, phonological, or semantic judgment. The results showed that performance on orthographic, phonological, and semantic judgment differed significantly between the patient and control groups at short SOAs (P < 0.05). However, different patterns of interaction between task 2 and SOA were observed across the groups: subjects who stutter were more strongly modulated by SOA when the second task was semantic or phonological judgment (P < 0.05), but not when it was orthographic judgment (P > 0.05). These results indicate that the interaction between semantic processing and phonological encoding might be an underlying cause of stuttering.
Funding: the National Natural Science Foundation of China.
Abstract: Large-scale magnetic structures are the main carriers of major eruptions in the solar atmosphere. These structures are rooted in the photosphere and are driven by the unceasing motion of the photospheric material through a series of equilibrium configurations. The motion brings energy into the coronal magnetic field until the system ceases to be in equilibrium. The catastrophe theory for solar eruptions indicates that loss of mechanical equilibrium constitutes the main trigger mechanism of major eruptions, usually manifested as solar flares, eruptive prominences, and coronal mass ejections (CMEs). Magnetic reconnection, which takes place at the very beginning of the eruption as a result of plasma instabilities and turbulence inside the current sheet, converts magnetic energy into the heating and kinetic energy responsible for solar flares, and for accelerating both plasma ejecta (flows and CMEs) and energetic particles. The various manifestations are thus related to one another, and the physics behind these relationships is catastrophe and magnetic reconnection. This work reports recent progress in both theoretical research and observations of eruptive phenomena showing the above manifestations. We start by describing the properties of large-scale coronal structures and the related magnetic fields prior to an eruption, and show various morphological features of the disrupting magnetic fields. Then, in the framework of catastrophe theory, we examine the physics behind those features as investigated in a succession of previous works, and discuss the approaches they used.
Funding: supported by the Science and Technology Department of Sichuan Province (No. 2021YFG0156).
Abstract: Generating diverse and factual text is challenging and is receiving increasing attention. By sampling from the latent space, variational autoencoder-based models have recently enhanced the diversity of generated text. However, existing research predominantly depends on summarization models to provide paragraph-level semantic information for enhancing factual correctness. The challenge lies in effectively generating factual text using sentence-level variational autoencoder-based models. In this paper, a novel model called the fact-aware conditional variational autoencoder is proposed to balance the factual correctness and diversity of generated text. Specifically, our model encodes the input sentences and uses them as facts to build a conditional variational autoencoder network. By training this network, the model learns to generate text conditioned on the input facts. Building on this foundation, the input text is passed to a discriminator along with the generated text. Through adversarial training, the model is encouraged to generate text that is indistinguishable from the input to the discriminator, thereby enhancing the quality of the generated text. To further improve factual correctness, inspired by natural language inference systems, an entailment recognition task is introduced and trained together with the discriminator via multi-task learning. Moreover, based on the entailment recognition results, a penalty term is added to the loss of our model, forcing the generator to produce text consistent with the facts. Experimental results demonstrate that, compared with competitive models, our model achieves substantial improvements in both the quality and the factual correctness of the generated text, while sacrificing only a small amount of diversity. Furthermore, under a comprehensive evaluation of diversity and quality metrics, our model also demonstrates the best performance.
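The multi-term objective described above (reconstruction, KL, adversarial, and entailment penalty) can be sketched as a single scalar combination. The weights and the linear form of the penalty are illustrative assumptions, not the paper's actual loss formulation.

```python
def total_loss(rec_loss, kl_loss, adv_loss, entail_prob,
               kl_weight=1.0, adv_weight=0.5, penalty_weight=2.0):
    """Combine the CVAE reconstruction and KL terms with the adversarial
    loss, then penalize generations that the entailment recognizer judges
    unlikely to be entailed by the input facts.  All weights are
    illustrative; entail_prob is the recognizer's entailment probability."""
    entail_penalty = penalty_weight * (1.0 - entail_prob)
    return rec_loss + kl_weight * kl_loss + adv_weight * adv_loss + entail_penalty
```

When the recognizer is fully confident the output is entailed by the facts (`entail_prob == 1.0`), the penalty vanishes; as confidence drops, the generator pays an increasing cost, pushing it toward fact-consistent text.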
Abstract: The implementation of product development process management (PDPM) is an effective means of developing products with higher quality in a shorter lead time. It is argued in this paper that product, data, person, and activity are the basic factors in PDPM. With a detailed analysis of these basic factors and their relations in the product development process, all product development activities are considered as tasks, and the management of the product development process is regarded as the management of task execution. A task-decomposition-based product development model is proposed, with methods for constructing the task relation matrix from the layer model and the constraint model resulting from task decomposition. An algorithm for constructing a directed task graph is given and is used in the management of tasks. Finally, the usage and limitations of the proposed PDPM model are discussed and further work is proposed.
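One plausible reading of the task relation matrix is as an adjacency matrix of precedence constraints, from which a directed task graph and a feasible execution order follow. This sketch uses Kahn's topological sort; the matrix encoding (`matrix[i][j] == 1` meaning task i precedes task j) is an assumption, not the paper's exact construction.

```python
from collections import deque

def task_graph_order(matrix):
    """Build a directed task graph from a task relation matrix and return
    one feasible execution order via Kahn's topological sort."""
    n = len(matrix)
    # indegree[j] = number of tasks that must finish before task j starts.
    indegree = [sum(matrix[i][j] for i in range(n)) for j in range(n)]
    ready = deque(j for j in range(n) if indegree[j] == 0)
    order = []
    while ready:
        i = ready.popleft()
        order.append(i)
        for j in range(n):
            if matrix[i][j]:
                indegree[j] -= 1
                if indegree[j] == 0:
                    ready.append(j)
    if len(order) != n:
        raise ValueError("cyclic task dependencies")
    return order
```

The cycle check matters in practice: a task decomposition that accidentally introduces circular constraints cannot be scheduled, and detecting that early is part of managing task execution.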
Funding: The work was supported by the National Natural Science Foundation of China under Grant Nos. 62072419 and 61672479.
Abstract: Most distributed stream processing engines (DSPEs) do not support online task management and cannot adapt to time-varying data flows. Recently, some studies have proposed online task deployment algorithms to solve this problem. However, these approaches do not guarantee Quality of Service (QoS) when the task deployment changes at runtime, because the task migrations caused by changes of task deployment impose an exorbitant cost. We study one of the most popular DSPEs, Apache Storm, and find that when a task needs to be migrated, Storm has to stop the resource (implemented as a Worker process in Storm) where the task is deployed. This leads to the stop and restart of all tasks in that resource, resulting in poor task-migration performance. To solve this problem, in this paper we propose N-Storm (Nonstop Storm), a task-resource decoupling DSPE. N-Storm allows the tasks allocated to resources to be changed at runtime, implemented by a thread-level scheme for task migrations. In particular, we add a local shared key/value store on each node to make resources aware of changes in the allocation plan, so that each resource can manage its tasks at runtime. Based on N-Storm, we further propose Online Task Deployment (OTD). Differing from traditional task deployment algorithms that deploy all tasks at once without considering the cost of task migrations caused by re-deployment, OTD gradually adjusts the current task deployment to an optimized one based on the communication cost and the runtime states of resources. We demonstrate that OTD can adapt to different kinds of applications, including computation- and communication-intensive applications. Experimental results on a real DSPE cluster show that N-Storm can avoid system stops and save up to 87% of the performance degradation time compared with Apache Storm and other state-of-the-art approaches. In addition, OTD can increase the average CPU usage by 51% for computation-intensive applications and reduce network communication costs by 88% for communication-intensive applications.
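The thread-level migration idea can be illustrated as follows: each worker polls a node-local shared allocation map and starts or stops task threads itself, so the hosting process never restarts. This is a minimal sketch of the concept only; the class names, polling model, and plain-dict "store" are illustrative assumptions, not N-Storm's implementation.

```python
class Worker:
    """Sketch of thread-level task migration: the worker reconciles its
    locally running tasks against a shared allocation plan, so only the
    migrated task moves while all other tasks keep running."""

    def __init__(self, worker_id, allocation):
        self.worker_id = worker_id
        self.allocation = allocation   # shared map: task_id -> worker_id
        self.running = set()

    def sync(self):
        # Reconcile locally running tasks with the shared allocation plan.
        wanted = {t for t, w in self.allocation.items() if w == self.worker_id}
        for task in wanted - self.running:
            self.running.add(task)     # would spawn a task thread here
        for task in self.running - wanted:
            self.running.discard(task) # would stop that task thread here

allocation = {"spout-1": "w1", "bolt-1": "w1", "bolt-2": "w2"}
w1, w2 = Worker("w1", allocation), Worker("w2", allocation)
w1.sync(); w2.sync()
allocation["bolt-1"] = "w2"            # scheduler migrates bolt-1 at runtime
w1.sync(); w2.sync()                   # only the affected task moves
```

Contrast this with the Worker-process restart in stock Storm, where reassigning `bolt-1` would also take `spout-1` offline.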
Abstract: Heterogeneous computing is an effective method of high performance computing with many advantages. Task scheduling is a critical issue in heterogeneous environments as well as in homogeneous environments. A number of task scheduling algorithms for homogeneous environments have been proposed, whereas few for heterogeneous environments can be found in the literature. A novel task scheduling algorithm for heterogeneous environments, called the heterogeneous critical task (HCT) scheduling algorithm, is presented. By means of the directed acyclic graph and the Gantt chart, the HCT algorithm defines the critical task and the idle time slot. After determining the critical tasks of a given task, the HCT algorithm tentatively duplicates the critical tasks into idle time slots on the processor that hosts the given task, to reduce the start time of the given task. To compare the performance of the HCT algorithm with several recently proposed algorithms, a large set of randomly generated applications, together with the Gaussian elimination application, is used. The experimental results show that the HCT algorithm outperforms the other algorithms.
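The core duplication decision can be compressed into one comparison: either wait for a critical parent's result to arrive over the network, or re-run (duplicate) that parent in a local idle slot if it fits. This is a deliberately simplified sketch of the idea, not the full HCT algorithm; all parameter names and the single-parent setting are illustrative assumptions.

```python
def earliest_start(parent_finish_remote, comm_cost, parent_exec_cost, idle_slot):
    """Earliest start time of a task on a processor whose critical parent
    ran elsewhere.  Option 1: wait for the parent's result to arrive over
    the network.  Option 2: duplicate the critical parent into the local
    idle time slot, if the slot is long enough to hold its execution."""
    remote = parent_finish_remote + comm_cost
    slot_start, slot_end = idle_slot
    if slot_end - slot_start >= parent_exec_cost:   # parent fits in the slot
        local = slot_start + parent_exec_cost
        return min(remote, local)
    return remote
```

With a parent finishing remotely at time 10, a communication cost of 5, and a 3-unit parent that fits into an idle slot starting at 0, duplication cuts the start time from 15 to 3; when the slot is too short, the task falls back to waiting for the network.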
Funding: supported by the National Natural Science Foundation of China (Grant No. 50805009) and the Eleventh Five-Year Plan Defense Pre-Research Fund, China (Grant No. 51318010205).
Abstract: Assembly process planning (APP) for complicated products is time-consuming and difficult work with conventional methods. Virtual assembly process planning (VAPP) provides engineers with a new and efficient way. Previous studies on VAPP have been mostly isolated and dispersed, and have neither established a whole understanding nor discussed the key realization techniques of VAPP from a systemic and integrated view. The integrated virtual assembly process planning (IVAPP) system is a new virtual-reality-based engineering application, which offers engineers an efficient, intuitive, immersive, and integrated method for assembly process planning in a virtual environment. Based on an analysis of the information integration requirements of VAPP, the architecture of IVAPP is proposed. Through the integrated structure, the IVAPP system can realize information integration and workflow control. To model the assembly process in IVAPP, a hierarchical assembly task list (HATL) is presented, in which the assembly tasks for assembling different components are organized into a hierarchical list. A process-oriented automatic geometrical constraint recognition algorithm (AGCR) is proposed, so that geometrical constraints between components can be automatically recognized during interactive assembly. At the same time, a progressive hierarchical reasoning (PHR) model is discussed. AGCR and PHR greatly reduce the interactive workload. A discrete control node model (DCNM) for cable harness assembly planning in IVAPP is detailed. DCNM converts a cable harness into continuous flexed line segments connected by a series of section center points, and designers can realize cable harness planning by controlling those control nodes. Mechanical assemblies (such as the transmission case and engine of an automobile) are used to illustrate the feasibility of the proposed method and algorithms. The application of the IVAPP system reveals advantages over the traditional assembly process planning method in shortening the time consumed in assembly planning and in minimizing handling difficulty, excessive reorientation, and dissimilarity of assembly operations.
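A hierarchical assembly task list of the kind described above can be sketched as a simple tree whose depth-first, children-before-parent traversal yields a feasible assembly sequence. The class name, fields, and the example components are illustrative assumptions, not the paper's HATL data structure.

```python
class AssemblyTask:
    """Minimal sketch of a hierarchical assembly task list (HATL):
    each node assembles one component, and child tasks must complete
    before their parent task can be carried out."""

    def __init__(self, component):
        self.component = component
        self.children = []

    def add(self, subtask):
        self.children.append(subtask)
        return subtask

    def flatten(self):
        # Depth-first, children-before-parent: a feasible assembly sequence.
        order = []
        for child in self.children:
            order.extend(child.flatten())
        order.append(self.component)
        return order

# Hypothetical fragment of a transmission-case assembly hierarchy.
case = AssemblyTask("transmission case")
gears = case.add(AssemblyTask("gear set"))
gears.add(AssemblyTask("input shaft"))
case.add(AssemblyTask("cover"))
```

Flattening the hierarchy gives planners a linear task sequence to refine interactively, while the tree itself keeps the component structure explicit.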
Funding: supported by the National Key Basic Research and Development Project of China (Grant No. 2012CB417201) and the National Natural Science Foundation of China (Grant Nos. 41375053 and 41505038).
Abstract: Two types of persistent heavy rainfall events (PHREs) over the Yangtze River-Huaihe River Basin were identified in a recent statistical study: type A, whose precipitation is mainly located to the south of the Yangtze River; and type B, whose precipitation is mainly located to the north of the river. The present study investigated these two PHRE types using a newly derived set of energy equations to show the scale interactions and the main energy paths contributing to the persistence of the precipitation. The main results were as follows. The available potential energy (APE) and kinetic energy (KE) associated with both PHRE types generally increased upward in the troposphere, with the energy of type-A PHREs stronger than that of type-B PHREs (except in the middle troposphere). There were two main energy paths common to the two PHRE types: (1) the baroclinic energy conversion from APE to KE was the dominant energy source for the evolution of the large-scale background circulations; and (2) the downscale energy cascade processes of KE and APE were vital for sustaining the eddy flow, which directly caused the PHREs. The significant differences between the two PHRE types mainly appeared in the lower troposphere, where the baroclinic energy conversion associated with the eddy flow in type-A PHREs was from KE to APE, which reduced the intensity of the precipitation-related eddy flow, whereas the conversion in type-B PHREs was from APE to KE, which enhanced the eddy flow.
Funding: Projects (51673214, 51673218, 61774170) supported by the National Natural Science Foundation of China; Project (2017YFA0206600) supported by the National Key Research and Development Program of China.
Abstract: Perovskite solar cells (PSCs) have emerged as one of the most promising candidates for photovoltaic applications. Low-cost, low-temperature solution processes, including coating and printing techniques, make PSCs highly promising for commercialization owing to their scalability and compatibility with large-scale, roll-to-roll manufacturing. In this review, we focus on the solution deposition of the charge transport layers and the perovskite absorption layer in both mesoporous and planar PSC devices. Furthermore, the most recent design strategies via solution deposition are presented, which have been explored to enlarge the active area, enhance crystallization, and passivate defects, leading to improved performance of PSC devices.
Funding: Supported by the National "863" Hi-Tech Development Program of China (863-306-ZD11-01-8).
Abstract: Genetic algorithms have been proposed to solve the task assignment problem. However, they have some drawbacks; e.g., they often take a long time to find an optimal solution, and the success rate is low. To overcome these problems, a new coarse-grained parallel genetic algorithm with a central migration scheme is presented, which exploits isolated subpopulations. The new approach has been implemented in the PVM environment and evaluated on a workstation network for solving the task assignment problem. The results show that it not only significantly improves the result quality but also increases the speed of finding the best solution.
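The island structure with central migration can be sketched in a few dozen lines: isolated subpopulations evolve independently, and after each epoch the globally best individual is broadcast from a central pool into every island. This is a single-process sketch, not a PVM implementation, and the OneMax bit-counting fitness stands in for a real task-assignment objective; all parameters are illustrative assumptions.

```python
import random

def evolve(pop, fitness, generations=30, mut=0.05):
    # Standard generational GA on bit strings; the top half survives
    # (elitism), and the rest are filled by one-point crossover + mutation.
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mut) for g in child]  # flip bits
            children.append(child)
        pop = parents + children
    return pop

def coarse_grained_ga(islands=4, size=20, length=16, epochs=5):
    """Coarse-grained parallel GA with central migration: subpopulations
    evolve in isolation; after each epoch the central node collects the
    global best and injects it into every island."""
    fitness = sum  # OneMax: count of 1-bits, a stand-in objective
    pops = [[[random.randint(0, 1) for _ in range(length)] for _ in range(size)]
            for _ in range(islands)]
    for _ in range(epochs):
        pops = [evolve(p, fitness) for p in pops]          # evolve in isolation
        best = max((max(p, key=fitness) for p in pops), key=fitness)
        for p in pops:                                     # central migration
            p[-1] = list(best)
    return max((max(p, key=fitness) for p in pops), key=fitness)

random.seed(7)
best = coarse_grained_ga()
```

In a real PVM deployment each island would run on its own workstation and only the migrants would cross the network, which is what keeps the communication cost of this scheme low.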
基金supported by the National Science Foundation for Young Scholars of China(6130123471401175)
Abstract: Multiple Earth observing satellites need to communicate with each other to jointly observe numerous targets on the Earth. Factors such as external interference cause satellite information interaction delays, which makes it impossible to ensure the integrity and timeliness of the decision-making information on the satellites, and the optimality of the planning result is affected. Therefore, the effect of communication delay is considered during the multi-satellite coordination process. For this problem, firstly, a distributed cooperative optimization problem for multiple satellites in a delayed communication environment is formulated. Secondly, based on an analysis of the temporal sequence of tasks in a single satellite and the dynamically decoupled characteristics of the multi-satellite system, the environment information of multi-satellite distributed cooperative optimization is constructed on the basis of a directed acyclic graph (DAG). Then, a cooperative optimization decision-making framework and model are built according to the decentralized partially observable Markov decision process (DEC-POMDP). After that, satellite coordination strategies for different conditions of communication delay are analyzed, and a unified strategy for handling communication delay is designed. An approximate cooperative optimization algorithm based on simulated annealing is proposed. Finally, the effectiveness and robustness of the proposed method are verified via simulation.
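A generic simulated-annealing skeleton of the kind such an approximate optimizer could build on is shown below: worse plans are accepted with probability exp(-delta/T), so the search can escape locally optimal task plans despite imperfect, delayed information. The schedule parameters and the toy ordering objective are illustrative assumptions, not the paper's actual algorithm.

```python
import math
import random

def simulated_annealing(cost, neighbor, state, t0=10.0, cooling=0.95, steps=500):
    """Minimize cost(state) by repeatedly proposing neighbors; worse moves
    are accepted with probability exp(-delta / T), and T cools geometrically.
    The best state seen so far is always tracked and returned."""
    best = current = state
    t = t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = cost(cand) - cost(current)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = cand
            if cost(current) < cost(best):
                best = current
        t = max(t * cooling, 1e-6)
    return best

# Toy usage: find an observation order minimizing a quadratic misplacement cost.
random.seed(0)

def swap_two(order):
    i, j = random.sample(range(len(order)), 2)
    order = list(order)
    order[i], order[j] = order[j], order[i]
    return order

cost = lambda order: sum((p - i) ** 2 for i, p in enumerate(order))
result = simulated_annealing(cost, swap_two, list(range(6, -1, -1)))
```

Because the annealer only needs a cost function and a neighbor move, delayed information can be handled by evaluating the cost against each satellite's latest known (possibly stale) view of the plan.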