Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts, so how to design photocatalysts is now generating widespread interest as a way to boost the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional "trial and error" method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practice and motivating the search for better descriptors.
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need to provide a basis for interpretation using logging simulation models. Among the various detection tools that use nuclear sources, the detector response can reflect various types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires a computational process with extensive random sampling, consumes considerable resources, and does not provide real-time response results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through a Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity results calculated by Monte Carlo and FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
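The perturbation idea behind the FFCM can be sketched in a few lines. This is only a first-order illustration of the flux-sensitivity-function concept, not the paper's model: the detector count in a perturbed environment is approximated as the baseline count corrected by a sensitivity-weighted sum of local property changes over voxels in the effective detection volume. All numbers below are illustrative assumptions.

```python
# Minimal sketch of a flux-sensitivity-function perturbation estimate.
# sensitivity[i]    -- flux sensitivity of voxel i (from a precomputed library)
# delta_property[i] -- change of the formation property in voxel i vs. baseline

def perturbed_count(baseline_count, sensitivity, delta_property):
    """First-order estimate of the detector response in a perturbed environment."""
    correction = sum(s * d for s, d in zip(sensitivity, delta_property))
    return baseline_count * (1.0 + correction)

# Baseline of 1000 counts and three voxels with made-up sensitivities.
sens = [0.04, 0.02, 0.01]
delta = [0.5, -0.2, 0.0]          # e.g. porosity shifts relative to baseline
est = perturbed_count(1000.0, sens, delta)
```

The appeal of such a forward model is that, once the sensitivity library is built by Monte Carlo, evaluating a new environment is a cheap weighted sum rather than a fresh random-sampling run.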
This study developed a numerical model for efficiently treating solid waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow, heat, and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary and secondary nozzle flow ratios, velocity ratios, and secondary nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate depends primarily on the average residence time of the particles, which can be increased by decreasing the flow rate and velocity ratios and increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
On the basis of computational fluid dynamics, the flow field characteristics of multi-trophic artificial reefs were analyzed, including the flow field distribution features of a single reef under three different velocities and the effect of spacing between reefs on flow scale and flow state. Results indicate upwelling, slow flow, and eddies around a single reef. The maximum velocity, height, and volume of the upwelling in front of a single reef were positively correlated with inflow velocity. The length and volume of the slow-flow region increased with increasing inflow velocity. Eddies were present both inside and behind the reef, and vorticity was positively correlated with inflow velocity. Spacing between reefs had a minor influence on the maximum velocity and height of the upwelling. With the increase in spacing from 0.5 L to 1.5 L (where L is the reef length), the length of the slow-flow regions in front of and behind the combined reefs increased slightly. When the spacing was 2.0 L, the length of the slow-flow region decreased. At all four spacings, eddies were present inside and behind each reef. The maximum vorticity was negatively correlated with spacing from 0.5 L to 1.5 L, but at 2.0 L spacing the maximum vorticity was close to that of a single reef under the same inflow velocity.
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) a descriptive module, which determines the influencing factors and response variables of the system by modeling an artificial society; 2) an interpretative module, which selects a factorial experimental design to identify the relationship between influencing factors and macro phenomena; 3) a predictive module, which builds a meta-model equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
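The interpretative module's factorial design can be sketched as a full factorial over a few influencing factors of the artificial society. The factor names and levels below are illustrative assumptions in the spirit of the crowd-sourcing case study, not the paper's actual design.

```python
# Full factorial experimental design: one run per combination of factor levels.
from itertools import product

factors = {
    "subsidy": [0.0, 0.5, 1.0],           # hypothetical platform subsidy level
    "rider_density": ["low", "high"],     # hypothetical agent-population setting
    "dispatch_rule": ["nearest", "balanced"],
}

design = [dict(zip(factors, combo)) for combo in product(*factors.values())]
```

Each dict in `design` is one computational experiment; the response variables observed across these 12 runs are what the predictive module's meta-model is then fitted to.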
Model checking is an automated formal verification method for verifying whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, quantitative and uncertain property verification for these systems is still lacking. In uncertain environments, agents must make judicious decisions based on subjective epistemic knowledge. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we further illustrate the practical application of our approach through an example of a train control system.
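A toy example may help fix ideas: below is one FCTL primitive, EX ("exists next"), evaluated on a tiny fuzzy Kripke structure under Gödel (min/max) semantics. This is a sketch of the kind of evaluation the FCTLK-to-FCTL reduction ultimately bottoms out in; the states, transition degrees, and proposition values are invented, and the paper's exact semantics may differ.

```python
# Fuzzy transition relation: R[s][t] = degree of the edge s -> t.
R = {
    "s0": {"s1": 0.8, "s2": 0.5},
    "s1": {"s0": 1.0},
    "s2": {"s2": 0.9},
}
# Fuzzy truth value of atomic proposition p at each state.
p = {"s0": 0.2, "s1": 0.7, "s2": 0.6}

def ex(phi, state):
    """[[EX phi]](state) = max over successors t of min(R(state, t), phi(t))."""
    return max(min(deg, phi[t]) for t, deg in R[state].items())

v = ex(p, "s0")   # degree to which "some next state satisfies p" holds at s0
```

Richer operators (EU, EG, the epistemic modalities of FCTLK) are then fixpoint or group-accessibility combinations of primitives like this one.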
An extreme ultraviolet solar corona multispectral imager allows direct observation of high-temperature coronal plasma, which is related to solar flares, coronal mass ejections, and other significant coronal activities. This manuscript proposes a novel end-to-end computational design method for an extreme ultraviolet (EUV) solar corona multispectral imager operating at wavelengths near 100 nm, including a stray-light suppression design and computational image recovery. To suppress the strong stray light from the solar disk, an outer opto-mechanical structure is designed to protect the imaging component of the system. Considering the low reflectivity (less than 70%) and strong scattering (roughness) of existing extreme ultraviolet optical elements, the imaging component comprises only a primary mirror and a curved grating. A Lyot aperture is used to further suppress any residual stray light. Finally, a deep-learning computational imaging method is used to recover the individual multi-wavelength images from the original recorded multi-slit data. The results show that this design can achieve a far-field angular resolution below 7″ and a spectral resolution below 0.05 nm. The field of view is ±3 R☉ along the multi-slit moving direction, where R☉ represents the radius of the solar disk. The ratio of the corona's stray-light intensity to the irradiation intensity of the solar center is less than 10⁻⁶ at the circle of 1.3 R☉.
The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflective surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server. In the system model, the user node accesses the primary user's spectrum while adhering to the constraint of satisfying the primary user's peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system in Rayleigh channels is derived and analyzed. The study investigates, through simulation, the impact of various system parameters on the system's outage performance, including the number of UAVs, peak interference power, and the TS and PS factors. The proposed system is also compared with two conventional benchmark schemes: optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmark schemes.
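For a single Rayleigh-fading link, the kind of outage probability derived analytically in such work can be checked by simulation in a few lines. Under Rayleigh fading the channel power gain is exponentially distributed, so P(SNR < γ_th) has the closed form 1 − exp(−γ_th / mean SNR). The SNR values below are illustrative, not the paper's system parameters.

```python
# Monte Carlo estimate of a single-link Rayleigh outage probability,
# cross-checked against the exponential-distribution closed form.
import math
import random

random.seed(42)

mean_snr = 10.0     # average received SNR (linear scale, assumed)
gamma_th = 3.0      # SNR threshold below which the link is in outage (assumed)

n = 200_000
outages = sum(1 for _ in range(n)
              if random.expovariate(1.0 / mean_snr) < gamma_th)
p_hat = outages / n
p_exact = 1.0 - math.exp(-gamma_th / mean_snr)
```

The full multi-UAV system with interference constraints and TS/PS harvesting is of course more involved, but its analytical outage expressions are validated against exactly this style of simulation.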
In this paper, the authors extend [1] and provide more details of how the brain may act like a quantum computer. In particular, positing the difference between voltages on two axons as the environment for ions undergoing spatial superposition, we argue that evolution in the presence of metric perturbations will differ from that in the absence of these waves. This differential state evolution will then encode the information being processed by the tract, due to the interaction of the quantum state of the ions at the nodes with the "controlling" potential. Upon decoherence, which is equivalent to a measurement, the final spatial state of the ions is decided, and it is also reset by the next impulse initiation time. Under synchronization, several tracts undergo such processes in synchrony, and the picture of a quantum computing circuit is therefore complete. Under this model, based on the number of axons in the corpus callosum alone, we estimate that upwards of 50 million quantum states might be prepared and evolved every second in this white matter tract, far greater processing than any present quantum computer can accomplish.
Handling the massive amount of data generated by Smart Mobile Devices (SMDs) is a challenging computational problem. Edge Computing is an emerging computation paradigm employed to conquer this problem. It can bring computation power closer to the end devices to reduce their computation latency and energy consumption. This paradigm therefore increases the computational ability of SMDs through collaboration with edge servers, achieved by offloading computation from the mobile devices to the edge nodes or servers. However, not all applications benefit from computation offloading, which is only suitable for certain types of tasks. Task properties, SMD capability, wireless channel state, and other factors must be considered when making computation offloading decisions. Hence, optimization methods are important tools for scheduling computation offloading tasks in Edge Computing networks. In this paper, we review six types of optimization methods: Lyapunov optimization, convex optimization, heuristic techniques, game theory, machine learning, and others. For each type, we focus on the objective functions, application areas, types of offloading methods, evaluation methods, and the time complexity of the proposed algorithms. We also discuss a few research problems that are still open. Our purpose in this review is to provide a concise summary that can help new researchers get started with their computation offloading research for Edge Computing networks.
Deuterium (D₂) is one of the important fuel sources that power nuclear fusion reactors. The existing D₂/H₂ separation technologies that obtain high-purity D₂ are cost-intensive. Recent research has shown that metal-organic frameworks (MOFs) have good potential for D₂/H₂ separation. In this work, a high-throughput computational screening of 12,020 computation-ready experimental MOFs is carried out to determine the best MOFs for hydrogen isotope separation. Meanwhile, the detailed structure-performance correlation is systematically investigated with the aid of machine learning. The results indicate that the ideal D₂/H₂ adsorption selectivity calculated from Henry coefficients is strongly correlated with the 1/ΔAD feature descriptor, that is, the inverse of the adsorbability difference of the two adsorbates. Meanwhile, the machine learning (ML) results show that the prediction accuracy of all four ML methods is significantly improved after the addition of this feature descriptor. In addition, the ML results based on an extreme gradient boosting model also reveal that the 1/ΔAD descriptor has the highest relative importance compared with other commonly used descriptors. To further explore hydrogen isotope separation in binary mixtures, the 1,548 MOFs with ideal adsorption selectivity greater than 1.5 are simulated at equimolar conditions. The structure-performance relationship shows that MOFs with high adsorption selectivity generally have smaller pore sizes (0.3-0.5 nm) and lower surface areas. Among the top 200 performers, the materials mainly have the sql, pcu, cds, hxl, and ins topologies. Finally, three MOFs with high D₂/H₂ selectivity and good D₂ uptake are identified as the best candidates, all of which have one-dimensional channel pores. The findings obtained in this work may be helpful for identifying promising candidates for hydrogen isotope separation.
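The two screening quantities can be made concrete for one hypothetical MOF. The Henry coefficients below are made-up numbers, and treating "adsorbability" as a simple per-adsorbate affinity value is an assumption about the paper's exact definition; only the form of the two expressions is being illustrated.

```python
# Ideal selectivity and the 1/Delta-AD descriptor for one hypothetical MOF.

def ideal_selectivity(k_d2, k_h2):
    """Ideal D2/H2 selectivity as the ratio of Henry coefficients."""
    return k_d2 / k_h2

def inv_delta_ad(a_d2, a_h2):
    """Inverse of the adsorbability difference of the two adsorbates."""
    return 1.0 / (a_d2 - a_h2)

s = ideal_selectivity(3.6e-6, 2.4e-6)   # units cancel; 1.5 is the cutoff used above
d = inv_delta_ad(5.0, 4.6)
```

In the screening described above, a MOF with `s > 1.5` would pass to the binary-mixture simulation stage, and `d` would be one of the features fed to the ML models.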
Lithium-ion batteries (LIBs) and lithium-sulfur (Li–S) batteries are two types of energy storage systems of significance in both scientific research and commercialization. Nevertheless, the rational design of electrode materials to overcome the bottlenecks of LIBs and Li–S batteries (such as low diffusion rates in LIBs and low sulfur utilization in Li–S batteries) remains the greatest challenge, while two-dimensional (2D) electrode materials provide a solution because of their unique structural and electrochemical properties. In this article, from the perspective of ab-initio simulations, we review the design of 2D electrode materials for LIBs and Li–S batteries. We first propose theoretical design principles for 2D electrodes, including stability, electronic properties, capacity, and ion diffusion descriptors. Next, classified examples of promising 2D electrodes designed by theoretical simulations are given, covering graphene, phosphorene, MXene, transition metal sulfides, and so on. Finally, common challenges and a future perspective are provided. This review paves the way for the rational design of 2D electrode materials for LIB and Li–S battery applications and may provide a guide for future experiments.
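One of the capacity descriptors used in such screenings is the theoretical gravimetric capacity from Faraday's law, C = xF/(3.6M) in mAh/g, where x is the number of transferred Li per host formula unit and M the host molar mass in g/mol. Graphite (LiC₆) serves as the standard sanity check; whether a given paper uses exactly this form is an assumption, but the formula itself is textbook.

```python
# Theoretical gravimetric capacity descriptor for electrode screening.

F = 96485.0  # Faraday constant, C/mol

def theoretical_capacity(x_li, molar_mass):
    """Capacity in mAh/g for x_li lithium ions per host formula unit of mass molar_mass."""
    return x_li * F / (3.6 * molar_mass)

# Graphite hosts one Li per C6 unit -> the well-known ~372 mAh/g.
c_graphite = theoretical_capacity(1, 6 * 12.011)
```

For a candidate 2D material, this descriptor is evaluated at the maximum stable Li coverage found by the ab-initio adsorption calculations.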
Despite the advances mobile devices have made, they remain resource-restricted computing devices, so there is a need for a technology that supports them. An emerging technology that supports such resource-constrained devices is called fog computing. End devices can offload tasks to close-by fog nodes to improve the quality of service and experience. Since computation offloading is a multiobjective problem, many factors must be considered before taking offloading decisions, such as task length, remaining battery power, latency, and communication cost. This study uses the multiobjective grey wolf optimization (MOGWO) technique for optimizing offloading decisions. This is the first time MOGWO has been applied to computation offloading in fog computing. A gravity reference point method is also integrated with MOGWO to propose an enhanced multiobjective grey wolf optimization (E-MOGWO) algorithm. It finds the optimal offloading target by taking into account two parameters, i.e., energy consumption and computation time, in a heterogeneous, scalable, multi-fog, multi-user environment. The proposed E-MOGWO is compared with MOGWO, the non-dominated sorting genetic algorithm (NSGA-II), and accelerated particle swarm optimization (APSO). The results show that the proposed algorithm achieves better results than existing approaches in terms of energy consumption, computation time, and the number of tasks successfully executed.
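At the heart of any multiobjective optimizer like E-MOGWO is the Pareto-dominance test over the two objectives being minimized (energy consumption and computation time). The snippet below shows only this primitive, with invented candidate solutions, not the wolf-update machinery or the gravity reference point method.

```python
# Pareto dominance over (energy, time), both minimized.

def dominates(a, b):
    """True if solution a = (energy, time) Pareto-dominates solution b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

local_exec = (5.0, 2.0)     # (energy J, time s) if the task runs on the SMD (assumed)
fog_offload = (3.5, 1.8)    # if the task is offloaded to a nearby fog node (assumed)
```

Solutions that no other candidate dominates form the Pareto front from which E-MOGWO's archive and leader wolves are drawn.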
Fundamental particles in nature can be classified as bosons or fermions, which satisfy their corresponding statistics. However, quasiparticles in condensed matter physics may be neither bosons nor fermions but anyons, which satisfy a generalized statistics. These anyons can be related to topological phases of matter. Interestingly, anyons can be used to encode qubits to perform quantum computation, with the specific advantage that the corresponding qubits are naturally fault-tolerant due to topological protection [1,2]. This approach is called topological quantum computation. However, its implementation based on natural systems still seems far from realization.
In view of the random distribution of multiple users in dynamic large-scale Internet of Things (IoT) scenarios, comprehensively formulating the available resources of fog nodes in the area and achieving computation services at low cost have become great challenges. As a result, this paper studies an efficient and intelligent computation offloading mechanism with resource allocation. Specifically, an optimization problem is formulated to minimize the total energy consumption of all tasks under the joint optimization of computation offloading decisions, bandwidth resources, and transmission power. Meanwhile, a Twin Delayed Deep Deterministic Policy Gradient-based Intelligent Computation Offloading (TD3PG-ICO) algorithm is proposed to solve this optimization problem. Drawing on the actor-critic concept, the proposed algorithm designs two independent critic networks that avoid the biased prediction of a single critic network and better guide the policy network toward the globally optimal computation offloading policy. Additionally, the algorithm introduces a continuous-variable discretization operation to select the target offloading node with random probability. The available resources of the target node are dynamically allocated to improve the model's decision-making. Finally, simulation results show that the proposed algorithm has a faster convergence speed and good robustness, and it consistently approaches the greedy algorithm with respect to the lowest total energy consumption. Furthermore, compared with full local execution and Deep Q-learning Network (DQN)-based computation offloading schemes, the total energy consumption is reduced by an average of 15.53% and 6.41%, respectively.
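One plausible reading of the continuous-to-discrete step is sketched below: the policy emits one continuous score per candidate node, the scores are softmax-normalized, and the target offloading node is sampled with those probabilities. This is an assumption about how "select the target offloading node with random probability" could be realized, not the paper's exact operation, and the scores are invented.

```python
# Softmax sampling of a discrete offloading target from continuous policy scores.
import math
import random

random.seed(7)

def sample_node(scores):
    """Return (sampled node index, softmax probabilities) for continuous scores."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r <= acc:
            return i, probs
    return len(probs) - 1, probs   # guard against floating-point round-off

node, probs = sample_node([0.2, 1.5, -0.3])   # three candidate fog/edge nodes
```

Sampling rather than taking the argmax keeps exploration alive early in training, which matters for the convergence behavior the abstract reports.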
Mobile-edge computing (MEC) is a promising technology for the fifth-generation (5G) and sixth-generation (6G) architectures, providing resourceful computing capabilities for Internet of Things (IoT) devices and applications such as virtual reality, mobile devices, and smart cities. In general, these IoT applications bring higher energy consumption than traditional applications and are usually energy-constrained. To provide persistent energy, many references have studied the offloading problem to save energy consumption. However, a dynamic environment dramatically increases the difficulty of optimizing the offloading decision. In this paper, we aim to minimize the energy consumption of the entire MEC system under a latency constraint while fully considering the dynamic environment. Under Markov games, we propose a multi-agent deep reinforcement learning approach based on a bi-level actor-critic learning structure to jointly optimize the offloading decision and resource allocation. It solves the combinatorial optimization problem using an asymmetric method and computes the Stackelberg equilibrium, a better convergence point than the Nash equilibrium in terms of Pareto superiority. Our method adapts better to a dynamic environment during data transmission than the single-agent strategy and can effectively tackle the coordination problem in the multi-agent environment. Simulation results show that the proposed method decreases the total computational overhead by 17.8% compared with the actor-critic-based method, and by 31.3%, 36.5%, and 44.7% compared with random offloading, all-local execution, and all-offloading execution, respectively.
In the field of single-server blind quantum computation (BQC), a major focus is to make the client as classical as possible. To achieve this goal, we propose two single-server BQC protocols that achieve verifiable universal quantum computation. In these two protocols, the client only needs to perform either the gate T (in the first protocol) or the gates H and X (in the second protocol). With assistance from a single server, the client can use his quantum capabilities to generate some single-qubit states while keeping the actual state of these qubits confidential from others. By using these single-qubit states, verifiable universal quantum computation can be achieved.
Driven by the demands of diverse artificial intelligence (AI)-enabled applications, Mobile Edge Computing (MEC) is considered one of the key technologies for 6G edge intelligence. In this paper, we consider a serial task model and design quality-of-service (QoS)-aware task offloading via communication-computation resource coordination for multi-user MEC systems, which can mitigate the I/O interference brought about by resource reuse among virtual machines. We then construct a system utility measuring QoS based on application latency and user devices' energy consumption. We also propose a heuristic offloading algorithm to maximize the system utility function under the constraints of task priority and I/O interference. Simulation results demonstrate the proposed algorithm's significant advantages in terms of task completion time, terminal energy consumption, and system resource utilization.
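A minimal greedy version of a utility-driven offloading decision can be sketched as follows: for each task, run it wherever the QoS utility is higher, with utility defined as a weighted penalty on latency and device energy (negated so that higher is better). The weights and task parameters are assumptions for illustration, not the paper's utility function or algorithm.

```python
# Greedy per-task offloading decision driven by a latency/energy utility.

def utility(latency, energy, w_t=0.6, w_e=0.4):
    """QoS utility: negated weighted cost, so larger values are better."""
    return -(w_t * latency + w_e * energy)

def decide(local, edge):
    """Pick 'local' or 'edge' for one task, given (latency s, energy J) pairs."""
    return "edge" if utility(*edge) > utility(*local) else "local"

choice = decide(local=(2.0, 3.0), edge=(1.2, 1.0))
```

The paper's heuristic additionally orders tasks by priority and rejects placements whose virtual machines would suffer I/O interference; the per-task comparison above is the inner decision such an algorithm repeats.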
Background: Pan-genomics is a recently emerging strategy that can be used to provide a more comprehensive characterization of genetic variation. Joint calling is routinely used to combine identified variants across multiple related samples. However, for population-scale genotyping, the improvement of variant identification using mutual support information from multiple samples remains quite limited. Results: In this study, we developed a computational framework for jointly calling genetic variants from 5,061 sheep by incorporating sequencing error and optimizing mutual support information from multiple samples' data. Variants were accurately identified from multiple samples in four steps: (1) probabilities of variants from two widely used algorithms, GATK and Freebayes, were calculated by a Poisson model incorporating base sequencing error potential; (2) variants with high mapping quality, or consistently identified from at least two samples by GATK and Freebayes, were used to construct the raw high-confidence identification (rHID) variants database; (3) high-confidence variants identified in a single sample were ordered by probability value and controlled for false discovery rate (FDR) using the rHID database; (4) to avoid eliminating potentially true variants absent from the rHID database, variants that failed FDR were reexamined to rescue potentially true variants and ensure highly accurate variant identification. The results indicated that the percentage of concordant SNPs and indels from Freebayes and GATK after our new method was significantly improved by 12%-32% compared with the raw variants, and the method advantageously found low-frequency variants of individual sheep involving several traits, including nipple number (GPC5), scrapie pathology (PAPSS2), seasonal reproduction and litter size (GRM1), coat color (RAB27A), and lentivirus susceptibility (TMEM154). Conclusion: The new method uses a computational strategy to reduce the number of false positives and simultaneously improve the identification of genetic variants. This strategy does not incur any extra cost in additional samples or sequencing data and advantageously identifies rare variants, which can be important for practical applications of animal breeding.
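The FDR-control step (step 3) can be sketched with the standard Benjamini-Hochberg procedure: order the candidate variants' p-values, find the largest rank k with p(k) ≤ αk/m, and keep the top k. The p-values below are invented, and whether the framework uses exactly BH is an assumption; the abstract only specifies ordering by probability and controlling FDR against the rHID database.

```python
# Benjamini-Hochberg step-up FDR control over candidate variant p-values.

def benjamini_hochberg(pvals, alpha=0.05):
    """Return sorted indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-value
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank                                  # largest rank passing
    return sorted(order[:k])

kept = benjamini_hochberg([0.001, 0.04, 0.03, 0.2, 0.9])
```

Candidates rejected here are not simply discarded: step 4 of the framework re-examines them against the rHID database to rescue true variants that a hard FDR cut would lose.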
Computational optical imaging is an interdisciplinary subject integrating optics, mathematics, and information technology. It introduces information processing into optical imaging and combines it with intelligent computing, subverting the imaging mechanism of traditional optical imaging, which relies only on orderly information transmission. To meet the high-precision requirements of traditional optical imaging for optical processing and alignment, and to solve its sensitivity to gravity and temperature in use, we establish an optical imaging system model from the perspective of computational optical imaging and study how to design and solve the imaging-consistency problem of an optical system under the influence of gravity, thermal effects, stress, and other external environmental factors, so as to build a highly robust optical system. The results show that a high-robustness interval of the optical system exists and can effectively reduce the sensitivity of the optical system to disturbances in each link, thus realizing highly robust optical imaging.
Funding: The authors are grateful for financial support from the National Key Projects for Fundamental Research and Development of China (2021YFA1500803), the National Natural Science Foundation of China (51825205, 52120105002, 22102202, 22088102, U22A20391), the DNL Cooperation Fund, CAS (DNL202016), and the CAS Project for Young Scientists in Basic Research (YSBR-004).
Abstract: Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; hence the design of photocatalysts is now generating widespread interest as a route to boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional “trial and error” method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental-practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practice and motivating the search for better descriptors.
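As a minimal illustration of the screening idea discussed above, the sketch below filters hypothetical candidates by band gap and band-edge alignment against the water redox levels; all candidate materials and values are invented, and real screening pipelines use many more descriptors.

```python
# Hypothetical high-throughput screening funnel for water-splitting
# photocatalysts. Candidate data are illustrative, not from any database.

H2_H2O = -4.44   # H+/H2 reduction potential vs. vacuum (eV)
O2_H2O = -5.67   # O2/H2O oxidation potential vs. vacuum (eV)

candidates = [
    # name, band gap (eV), CBM (eV vs. vacuum), VBM (eV vs. vacuum)
    ("mat-A", 2.4, -4.1, -6.5),
    ("mat-B", 1.0, -4.2, -5.2),   # gap too small
    ("mat-C", 3.8, -3.9, -7.7),   # gap too wide for visible light
    ("mat-D", 2.0, -4.6, -6.6),   # CBM below H+/H2 level: cannot reduce
]

def passes(name, gap, cbm, vbm):
    # visible-light absorption window
    if not (1.23 <= gap <= 3.0):
        return False
    # CBM must lie above the H+/H2 level, VBM below the O2/H2O level
    return cbm > H2_H2O and vbm < O2_H2O

survivors = [c[0] for c in candidates if passes(*c)]
print(survivors)  # -> ['mat-A']
```

Each filter here is a cheap descriptor; the point of the funnel is that expensive calculations (and eventually experiments) are only spent on the survivors.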
Funding: This work is supported by the National Natural Science Foundation of China (Nos. U23B20151 and 52171253).
Abstract: Owing to the complex lithology of unconventional reservoirs, field interpreters usually need logging simulation models to provide a basis for interpretation. Among the various detection tools that use nuclear sources, the detector response can reflect many types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires a computational process with extensive random sampling, consumes considerable resources, and does not provide real-time results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity calculated by Monte Carlo and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
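The flux-sensitivity-function response model can be sketched as a first-order Rytov-style correction: the perturbed count is the base count scaled by the exponential of sensitivity-weighted cross-section perturbations over the detection volume. The function name, cell decomposition, and all numbers below are illustrative assumptions, not the paper's implementation.

```python
import math

def ffcm_response(base_count, cells):
    """First-order Rytov-style forward model (illustrative sketch).

    cells: list of (sensitivity, delta_sigma, volume) tuples within the
    effective detection volume; the exponent accumulates the weighted
    perturbation of each cell."""
    exponent = sum(s * d_sigma * vol for s, d_sigma, vol in cells)
    return base_count * math.exp(exponent)

# Unperturbed tool reading and a small hypothetical borehole perturbation
cells = [(0.02, -1.5, 1.0), (0.01, -0.5, 2.0)]
print(round(ffcm_response(1000.0, cells), 1))
```

The appeal of such a model is that, once the sensitivity functions are tabulated from Monte Carlo, evaluating a new environment is a cheap weighted sum rather than a fresh random-sampling run.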
Funding: Financial support for this work was provided by the National Key R&D Program of China ‘Technologies and Integrated Application of Magnesite Waste Utilization for High-Valued Chemicals and Materials’ (2020YFC1909303).
Abstract: This study developed a numerical model to efficiently treat solid-waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow, heat, and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary and secondary nozzle flow ratios, velocity ratios, and secondary nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate depends primarily on the average residence time of particles, which can be increased by decreasing the flow and velocity ratios and by increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
Funding: Supported by the National Natural Science Foundation of China (No. 32002442) and the National Key R&D Program (No. 2019YFD0902101).
Abstract: On the basis of computational fluid dynamics, the flow field characteristics of multi-trophic artificial reefs were analyzed, including the flow field distribution features of a single reef under three different velocities and the effect of spacing between reefs on flow scale and flow state. Results indicate upwelling, slow flow, and eddies around a single reef. The maximum velocity, height, and volume of upwelling in front of a single reef were positively correlated with inflow velocity. The length and volume of slow flow increased with increasing inflow velocity. Eddies were present both inside and behind the reef, and vorticity was positively correlated with inflow velocity. Spacing between reefs had a minor influence on the maximum velocity and height of upwelling. As the spacing increased from 0.5 L to 1.5 L (L is the reef length), the length of slow flow in front of and behind the combined reefs increased slightly. When the spacing was 2.0 L, the length of the slow flow decreased. At all four spacings, eddies were present inside and behind each reef. The maximum vorticity was negatively correlated with spacing from 0.5 L to 1.5 L, but at 2.0 L spacing the maximum vorticity was close to that of a single reef under the same inflow velocity.
基金the National Key Research and Development Program of China(2021YFF0900800)the National Natural Science Foundation of China(61972276,62206116,62032016)+2 种基金the New Liberal Arts Reform and Practice Project of National Ministry of Education(2021170002)the Open Research Fund of the State Key Laboratory for Management and Control of Complex Systems(20210101)Tianjin University Talent Innovation Reward Program for Literature and Science Graduate Student(C1-2022-010)。
Abstract: Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize causal analysis of complex systems by means of the “algorithmization” of “counterfactuals”. However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by means of modeling an artificial society; 2) Interpretative module: selecting a factorial experimental design solution to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the “rider race”.
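The interpretative module's factorial design can be sketched as enumerating every combination of factor levels and recording a macro response for each run; `toy_society` below is a stand-in response function, not an actual artificial-society model, and both factors are invented for illustration.

```python
from itertools import product

def toy_society(commission_rate, rider_count):
    # Stand-in response surface; a real study would run an agent-based
    # model of the platform and measure an emergent macro quantity.
    return rider_count * (1.0 - commission_rate)

# Two hypothetical influencing factors with their levels
factors = {
    "commission_rate": [0.1, 0.2, 0.3],
    "rider_count": [100, 200],
}

# Full factorial design: every combination of levels (3 x 2 = 6 runs)
design = list(product(*factors.values()))
results = {combo: toy_society(*combo) for combo in design}
print(len(design))  # -> 6
```

Once the design table is filled, standard effect analysis (main effects, interactions) can be run on `results` to link factors to the macro phenomenon.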
Funding: The work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Abstract: Model checking is an automated formal verification method for verifying whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of verification of quantitative and uncertain properties for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas by using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we further illustrate the practical application of our approach through an example of a train control system.
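A minimal sketch of fuzzy model checking on a fuzzy Kripke structure, assuming sup-min (Gödel-style) semantics for the EX modality; the states, transition degrees, and proposition values below are invented for illustration.

```python
# Fuzzy Kripke structure: transition degrees and valuations lie in [0, 1].
states = ["s0", "s1", "s2"]

# Fuzzy transition relation: degree of each edge
R = {("s0", "s1"): 0.8, ("s0", "s2"): 0.5, ("s1", "s2"): 1.0}

# Fuzzy valuation of an atomic proposition p in each state
p = {"s0": 0.2, "s1": 0.9, "s2": 0.4}

def EX(phi):
    """[[EX phi]](s) = sup over successors t of min(R(s, t), phi(t))."""
    return {
        s: max((min(d, phi[t]) for (u, t), d in R.items() if u == s),
               default=0.0)
        for s in states
    }

ex_p = EX(p)
print(ex_p)  # s0: max(min(0.8, 0.9), min(0.5, 0.4)) = 0.8
```

The FCTLK-to-FCTL reduction described in the abstract would first compile the epistemic modalities away, after which an evaluator of exactly this shape (extended to the full set of temporal operators) computes the truth degrees.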
Funding: This study is partially supported by the National Natural Science Foundation of China (NSFC) (62005120, 62125504).
Abstract: An extreme ultraviolet solar corona multispectral imager allows direct observation of high-temperature coronal plasma, which is related to solar flares, coronal mass ejections, and other significant coronal activities. This manuscript proposes a novel end-to-end computational design method for an extreme ultraviolet (EUV) solar corona multispectral imager operating at wavelengths near 100 nm, including a stray-light suppression design and computational image recovery. To suppress the strong stray light from the solar disk, an outer opto-mechanical structure is designed to protect the imaging component of the system. Considering the low reflectivity (less than 70%) and strong scattering (roughness) of existing extreme ultraviolet optical elements, the imaging component comprises only a primary mirror and a curved grating. A Lyot aperture is used to further suppress any residual stray light. Finally, a deep-learning computational imaging method is used to recover the individual multi-wavelength images from the original recorded multi-slit data. The system achieves a far-field angular resolution below 7″ and a spectral resolution below 0.05 nm. The field of view is ±3 R☉ along the multi-slit moving direction, where R☉ represents the radius of the solar disk. The ratio of the corona's stray-light intensity to the solar center's irradiation intensity is less than 10⁻⁶ at the circle of 1.3 R☉.
基金the National Natural Science Foundation of China(62271192)Henan Provincial Scientists Studio(GZS2022015)+10 种基金Central Plains Talents Plan(ZYYCYU202012173)NationalKeyR&DProgramofChina(2020YFB2008400)the Program ofCEMEE(2022Z00202B)LAGEO of Chinese Academy of Sciences(LAGEO-2019-2)Program for Science&Technology Innovation Talents in the University of Henan Province(20HASTIT022)Natural Science Foundation of Henan under Grant 202300410126Program for Innovative Research Team in University of Henan Province(21IRTSTHN015)Equipment Pre-Research Joint Research Program of Ministry of Education(8091B032129)Training Program for Young Scholar of Henan Province for Colleges and Universities(2020GGJS172)Program for Science&Technology Innovation Talents in Universities of Henan Province under Grand(22HASTIT020)Henan Province Science Fund for Distinguished Young Scholars(222300420006).
Abstract: The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflective surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server system model. Specifically, the user node accesses the primary-user spectrum while adhering to the constraint of satisfying the primary user's peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy-harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system in Rayleigh channels is derived and analyzed. The study investigates, through simulation, the impact of various system parameters on the system's outage performance, including the number of UAVs, the peak interference power, and the TS and PS factors. The proposed system is also compared with two conventional benchmark schemes: optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmarks.
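The best-UAV selection rule can be checked numerically: with independent Rayleigh fading, the channel power gain of each link is exponentially distributed, and an outage occurs when even the best instantaneous SNR falls below a threshold. The parameters below are illustrative; the paper derives this probability in closed form rather than by simulation.

```python
import random

def outage_probability(n_uav, mean_snr, threshold, trials=200_000, seed=1):
    """Monte Carlo estimate of P(max SNR over n_uav links < threshold)."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        # |h|^2 under Rayleigh fading is exponentially distributed,
        # so each link SNR is exponential with the given mean.
        best = max(rng.expovariate(1.0 / mean_snr) for _ in range(n_uav))
        outages += best < threshold
    return outages / trials

# With i.i.d. links, P(outage) = (1 - exp(-threshold/mean_snr))^n_uav,
# so adding UAVs drives the outage probability down geometrically.
est = outage_probability(n_uav=3, mean_snr=10.0, threshold=2.0)
print(round(est, 4))
```

Sweeping `n_uav` reproduces the qualitative diversity gain the abstract reports: each extra relay multiplies the outage probability by another factor below one.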
Abstract: In this paper, the authors extend [1] and provide more details of how the brain may act like a quantum computer. In particular, positing the difference between voltages on two axons as the environment for ions undergoing spatial superposition, we argue that evolution in the presence of metric perturbations will differ from that in their absence. This differential state evolution will then encode the information being processed by the tract, due to the interaction of the quantum state of the ions at the nodes with the “controlling” potential. Upon decoherence, which is equivalent to a measurement, the final spatial state of the ions is decided, and it is also reset by the next impulse initiation time. Under synchronization, several tracts undergo such processes in synchrony, completing the picture of a quantum computing circuit. Under this model, based on the number of axons in the corpus callosum alone, we estimate that upwards of 50 million quantum states might be prepared and evolved every second in this white-matter tract, far greater processing than any present quantum computer can accomplish.
基金supported by National Key R&D Program of China under Grant.No.2018YFB1800805National Natural Science Foundation of China under Grant No.61772345,61902257,61972261Shenzhen Science and Technology Program under Grant No.RCYX20200714114645048,No.JCYJ20190808142207420,No.GJHZ20190822095416463.
Abstract: Handling the massive amount of data generated by Smart Mobile Devices (SMDs) is a challenging computational problem. Edge Computing is an emerging computation paradigm employed to address this problem. It brings computation power closer to the end devices to reduce their computation latency and energy consumption, thereby increasing the computational ability of SMDs through collaboration with edge servers. This is achieved by offloading computation from the mobile devices to the edge nodes or servers. However, not all applications benefit from computation offloading, which is only suitable for certain types of tasks. Task properties, SMD capability, wireless channel state, and other factors must be considered when making computation offloading decisions. Hence, optimization methods are important tools for scheduling computation offloading tasks in Edge Computing networks. In this paper, we review six types of optimization methods: Lyapunov optimization, convex optimization, heuristic techniques, game theory, machine learning, and others. For each type, we focus on the objective functions, application areas, types of offloading methods, evaluation methods, and the time complexity of the proposed algorithms. We also discuss several research problems that remain open. Our purpose in this review is to provide a concise summary that can help new researchers get started with their computation offloading research for Edge Computing networks.
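As a toy instance of the trade-offs listed above (task properties, device capability, channel state), the sketch below compares a weighted latency/energy cost for local versus remote execution; the cost model, the CMOS energy constant, and all parameters are hypothetical and chosen only for illustration.

```python
def should_offload(task_bits, task_cycles, *, f_local, f_server, rate,
                   p_tx, kappa=1e-27, w_time=0.5, w_energy=0.5):
    """Offload iff the weighted remote cost beats the local cost."""
    # Local execution: time = cycles/f, energy = kappa * f^2 * cycles
    # (a commonly assumed CMOS dynamic-power model).
    t_loc = task_cycles / f_local
    e_loc = kappa * f_local**2 * task_cycles
    # Remote execution: upload time/energy plus server compute time
    # (the returned result is assumed negligibly small).
    t_off = task_bits / rate + task_cycles / f_server
    e_off = p_tx * (task_bits / rate)
    cost_loc = w_time * t_loc + w_energy * e_loc
    cost_off = w_time * t_off + w_energy * e_off
    return cost_off < cost_loc

# A compute-heavy task over a fast link favors offloading...
print(should_offload(1e6, 5e9, f_local=1e9, f_server=10e9,
                     rate=50e6, p_tx=0.1))   # -> True
# ...while a light task over a slow link should stay local.
print(should_offload(1e6, 1e6, f_local=1e9, f_server=10e9,
                     rate=1e3, p_tx=0.1))    # -> False
```

Every optimization family the review covers (Lyapunov, convex, heuristic, game-theoretic, learned) is ultimately producing decisions of this shape, just under richer dynamics and constraints.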
基金supported by the National Natural Science Foundation of China (22078004)the Research Development Fund from Xi’an Jiaotong-Liverpool University (RDF-16-02-03 and RDF15-01-23)key program special fund (KSF-E-03)。
Abstract: Deuterium (D₂) is one of the important fuel sources that power nuclear fusion reactors. Existing D₂/H₂ separation technologies for obtaining high-purity D₂ are cost-intensive. Recent research has shown that metal-organic frameworks (MOFs) have good potential for D₂/H₂ separation. In this work, a high-throughput computational screening of 12,020 computation-ready experimental MOFs is carried out to determine the best MOFs for hydrogen isotope separation. Meanwhile, the detailed structure-performance correlation is systematically investigated with the aid of machine learning. The results indicate that the ideal D₂/H₂ adsorption selectivity calculated from Henry coefficients is strongly correlated with the 1/ΔAD feature descriptor, that is, the inverse of the difference in adsorbility of the two adsorbates. Meanwhile, the machine learning (ML) results show that the prediction accuracy of all four ML methods is significantly improved after the addition of this feature descriptor. In addition, the ML results based on an extreme gradient boosting model also reveal that the 1/ΔAD descriptor has the highest relative importance compared with other commonly used descriptors. To further explore hydrogen isotope separation in a binary mixture, the 1,548 MOFs with ideal adsorption selectivity greater than 1.5 are simulated at equimolar conditions. The structure-performance relationship shows that MOFs with high adsorption selectivity generally have smaller pore sizes (0.3-0.5 nm) and lower surface areas. Among the top 200 performers, the materials mainly have the sql, pcu, cds, hxl, and ins topologies. Finally, three MOFs with high D₂/H₂ selectivity and good D₂ uptake are identified as the best candidates, all of which have one-dimensional channel pores. The findings obtained in this work may be helpful for identifying promising candidates for hydrogen isotope separation.
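The selectivity and descriptor computations described above reduce to simple ratios per material; the sketch below uses invented Henry coefficients and adsorbility values, whereas the study derives them from molecular simulation of each MOF.

```python
mofs = {
    # name: (Henry coefficient K_D2, K_H2, adsorbility A_D2, A_H2)
    # all values invented for illustration
    "MOF-a": (4.2e-6, 2.1e-6, 0.80, 0.55),
    "MOF-b": (1.0e-6, 0.9e-6, 0.40, 0.38),
}

def descriptors(k_d2, k_h2, a_d2, a_h2):
    """Ideal D2/H2 selectivity and the 1/ΔAD feature descriptor."""
    selectivity = k_d2 / k_h2                 # ratio of Henry coefficients
    inv_delta_ad = 1.0 / abs(a_d2 - a_h2)     # inverse adsorbility gap
    return selectivity, inv_delta_ad

for name, vals in mofs.items():
    s, f = descriptors(*vals)
    print(name, round(s, 2), round(f, 1))
```

Computed this way for every framework, the `(selectivity, 1/ΔAD)` pairs form the feature table that the gradient-boosting model in the study is trained on.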
基金supported by the Research Grants Council of the Hong Kong Special Administrative Region,China(PolyU152178/20 E)the Hong Kong Polytechnic University(1-W19S)Science and Technology Program of Guangdong Province of China(2020A0505090001).
Abstract: Lithium-ion batteries (LIBs) and lithium-sulfur (Li–S) batteries are two types of energy storage systems of significance in both scientific research and commercialization. Nevertheless, the rational design of electrode materials to overcome the bottlenecks of LIBs and Li–S batteries (such as low diffusion rates in LIBs and low sulfur utilization in Li–S batteries) remains the greatest challenge, and two-dimensional (2D) electrode materials provide a solution because of their unique structural and electrochemical properties. In this article, from the perspective of ab-initio simulations, we review the design of 2D electrode materials for LIBs and Li–S batteries. We first propose theoretical design principles for 2D electrodes, including stability, electronic properties, capacity, and ion-diffusion descriptors. Next, classified examples of promising 2D electrodes designed by theoretical simulations are given, covering graphene, phosphorene, MXenes, transition metal sulfides, and so on. Finally, common challenges and a future perspective are provided. This review paves the way for the rational design of 2D electrode materials for LIB and Li–S battery applications and may provide a guide for future experiments.
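As a worked instance of the ion-diffusion descriptor, an Arrhenius estimate shows how strongly the migration barrier controls diffusivity; the prefactor `d0`, the barrier values, and the temperature below are illustrative assumptions, not values for any specific material.

```python
import math

KB = 8.617333262e-5  # Boltzmann constant, eV/K

def diffusivity(ea_ev, d0=1e-7, temp=300.0):
    """Arrhenius estimate D = D0 * exp(-Ea / (kB * T))."""
    return d0 * math.exp(-ea_ev / (KB * temp))

# Compare two hypothetical 2D electrodes by Li migration barrier:
# a 0.25 eV barrier vs. a 0.45 eV barrier at room temperature.
ratio = diffusivity(0.25) / diffusivity(0.45)
print(f"{ratio:.1e}")  # the lower barrier is ~2e3 times faster
```

This is why a barrier difference of a fraction of an eV, easily resolved by ab-initio nudged-elastic-band calculations, is such a discriminating screening descriptor.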
Abstract: Despite the advances mobile devices have made, they remain resource-restricted computing devices, so there is a need for technology that supports them. An emerging technology that supports such resource-constrained devices is fog computing. End devices can offload tasks to nearby fog nodes to improve the quality of service and experience. Since computation offloading is a multiobjective problem, many factors must be considered before taking offloading decisions, such as task length, remaining battery power, latency, and communication cost. This study uses the multiobjective grey wolf optimization (MOGWO) technique to optimize offloading decisions. This is the first time MOGWO has been applied to computation offloading in fog computing. A gravity reference point method is also integrated with MOGWO to propose an enhanced multiobjective grey wolf optimization (E-MOGWO) algorithm. It finds the optimal offloading target by taking into account two parameters, energy consumption and computational time, in a heterogeneous, scalable, multi-fog, multi-user environment. The proposed E-MOGWO is compared with MOGWO, the non-dominated sorting genetic algorithm (NSGA-II), and accelerated particle swarm optimization (APSO). The results show that the proposed algorithm achieves better results than existing approaches in terms of energy consumption, computational time, and the number of tasks successfully executed.
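The multiobjective core of such methods is Pareto dominance over the (energy, time) pair; a minimal non-dominated filter, with invented candidate values, looks like this:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (both objectives are minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep the points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (energy in J, time in s) for running one task on each node
candidates = [(2.0, 1.0), (1.0, 3.0), (2.5, 1.5), (1.5, 2.0)]
print(pareto_front(candidates))  # (2.5, 1.5) is dominated by (2.0, 1.0)
```

Metaheuristics like MOGWO or NSGA-II are essentially searching a huge decision space for this front; the filter above is the acceptance test they all share.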
Abstract: Fundamental particles in nature can be classified as bosons or fermions, which satisfy their corresponding statistics. However, quasiparticles in condensed matter physics may be neither bosons nor fermions but instead anyons, which satisfy a generalized statistics. These anyons can be related to topological phases of matter. Interestingly, anyons can be used to encode qubits for quantum computation, with the specific advantage that the corresponding qubits are naturally fault-tolerant due to topological protection. [1,2] This approach is called topological quantum computation. However, its implementation based on natural systems still seems far from realization.
基金partially supported by the National Natural Science Foundation of China(No.61971235)the China Postdoctoral Science Foundation(No.2018M630590)+3 种基金the Jiangsu Planned Projects for Postdoctoral Research Funds(No.2021K501C)the 333 High-level Talents Training Project of Jiangsu Provincethe 1311 Talents Plan of NJUPTthe Postgraduate Research&Practice Innovation Program of Jiangsu Province(No.KYCX20_0851).
Abstract: In view of the random distribution of multiple users in dynamic large-scale Internet of Things (IoT) scenarios, comprehensively marshaling the available resources of fog nodes in an area and delivering computation services at low cost have become great challenges. This paper therefore studies an efficient and intelligent computation offloading mechanism with resource allocation. Specifically, an optimization problem is formulated to minimize the total energy consumption of all tasks under the joint optimization of computation offloading decisions, bandwidth resources, and transmission power. Meanwhile, a Twin Delayed Deep Deterministic Policy Gradient-based Intelligent Computation Offloading (TD3PG-ICO) algorithm is proposed to solve this optimization problem. Drawing on the actor-critic architecture, the proposed algorithm designs two independent critic networks, which avoids the biased prediction of a single critic network and better guides the policy network toward the globally optimal computation offloading policy. Additionally, the algorithm introduces a continuous-variable discretization operation to select the target offloading node with random probability. The available resources of the target node are dynamically allocated to improve the decision-making quality of the model. Finally, the simulation results show that the proposed algorithm converges quickly and is robust. It consistently approaches the greedy algorithm with respect to the lowest total energy consumption. Furthermore, compared with fully local and Deep Q-learning Network (DQN)-based computation offloading schemes, the total energy consumption can be reduced by an average of 15.53% and 6.41%, respectively.
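The continuous-variable discretization step can be sketched as normalizing the actor's continuous output into selection probabilities over fog nodes and sampling one; the actor output below is a stand-in vector, not a trained TD3 policy.

```python
import random

def select_node(actor_output, rng):
    """Map a continuous actor output to a discrete fog-node index.

    Non-negative scores are normalized into probabilities, then a node
    is drawn by inverse-CDF sampling (hence 'with random probability')."""
    scores = [max(a, 0.0) + 1e-9 for a in actor_output]
    total = sum(scores)
    probs = [s / total for s in scores]
    r, acc = rng.random(), 0.0
    for node, p in enumerate(probs):
        acc += p
        if r <= acc:
            return node
    return len(probs) - 1  # guard against floating-point round-off

rng = random.Random(0)
picks = [select_node([0.1, 0.7, 0.2], rng) for _ in range(10_000)]
print(picks.count(1) / len(picks))  # roughly 0.7
```

Because the mapping is stochastic rather than an argmax, the policy keeps exploring alternative nodes during training instead of collapsing onto one target early.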
基金supported by the National Natural Science Foundation of China(62162050)the Fundamental Research Funds for the Central Universities(No.N2217002)the Natural Science Foundation of Liaoning ProvincialDepartment of Science and Technology(No.2022-KF-11-04).
Abstract: Mobile-edge computing (MEC) is a promising technology for the fifth-generation (5G) and sixth-generation (6G) architectures, providing resourceful computing capabilities for Internet of Things (IoT) applications such as virtual reality, mobile devices, and smart cities. In general, these IoT applications bring higher energy consumption than traditional applications and are usually energy-constrained. To provide persistent energy, many references have studied the offloading problem to save energy consumption. However, the dynamic environment dramatically increases the difficulty of optimizing the offloading decision. In this paper, we aim to minimize the energy consumption of the entire MEC system under a latency constraint while fully accounting for the dynamic environment. Under Markov games, we propose a multi-agent deep reinforcement learning approach based on a bi-level actor-critic learning structure to jointly optimize the offloading decision and resource allocation. It solves the combinatorial optimization problem using an asymmetric method and computes the Stackelberg equilibrium, which is a better convergence point than the Nash equilibrium in terms of Pareto superiority. Our method adapts better to a dynamic environment during data transmission than a single-agent strategy and effectively tackles the coordination problem in the multi-agent environment. The simulation results show that the proposed method decreases the total computational overhead by 17.8% compared with an actor-critic-based method, and by 31.3%, 36.5%, and 44.7% compared with random offloading, all-local execution, and all-offloading execution, respectively.
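The leader-follower asymmetry behind a Stackelberg equilibrium can be illustrated on a toy bimatrix game: the leader commits first, anticipating the follower's best response, rather than both players best-responding simultaneously as at a Nash point. The payoff matrices are invented for illustration.

```python
# leader_payoff[i][j], follower_payoff[i][j]: payoffs when the leader
# plays action i and the follower plays action j (both maximize).
leader_payoff = [[2, 4], [1, 5]]
follower_payoff = [[1, 0], [0, 2]]

def follower_best(i):
    """Follower's best response to the leader's committed action i."""
    row = follower_payoff[i]
    return max(range(len(row)), key=row.__getitem__)

def stackelberg():
    """Leader picks the action whose induced best response pays it most."""
    best_i = max(range(len(leader_payoff)),
                 key=lambda i: leader_payoff[i][follower_best(i)])
    return best_i, follower_best(best_i)

print(stackelberg())  # -> (1, 1): leader gets 5, follower gets 2
```

In the paper's bi-level actor-critic structure, one level plays the leader role by committing to its decision while the other level best-responds, which is what steers learning toward this asymmetric equilibrium.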
基金Project supported by the National Science Foundation of Sichuan Province (Grant No. 2022NSFSC0534)the Central Guidance on Local Science and Technology Development Fund of Sichuan Province (Grant No. 22ZYZYTS0064)+1 种基金the Chengdu Key Research and Development Support Program (Grant No. 2021-YF09-0016-GX)the Key Project of Sichuan Normal University (Grant No. XKZX-02)。
Abstract: In the field of single-server blind quantum computation (BQC), a major focus is making the client as classical as possible. To achieve this goal, we propose two single-server BQC protocols that achieve verifiable universal quantum computation. In these two protocols, the client only needs to perform either the gate T (in the first protocol) or the gates H and X (in the second protocol). With assistance from a single server, the client can utilize his quantum capabilities to generate certain single-qubit states while keeping the actual state of these qubits confidential from others. Using these single-qubit states, verifiable universal quantum computation can be achieved.
基金funded in part by the Open Research Fund of the Shaanxi Province Key Laboratory of Information Communication Network and Security under Grant No.ICNS202003in part supported by BUPT Excellent Ph.D.Students Foundation under Grant CX2022210。
Abstract: Driven by the demands of diverse artificial intelligence (AI)-enabled applications, mobile edge computing (MEC) is considered one of the key technologies for 6G edge intelligence. In this paper, we consider a serial task model and design quality of service (QoS)-aware task offloading via communication-computation resource coordination for multi-user MEC systems, which can mitigate the I/O interference brought about by resource reuse among virtual machines. We then construct a system utility measuring QoS based on application latency and the energy consumption of user devices. We also propose a heuristic offloading algorithm to maximize the system utility function under the constraints of task priority and I/O interference. Simulation results demonstrate the proposed algorithm's significant advantages in terms of task completion time, terminal energy consumption, and system resource utilization.
Funding: Superior Farms sheep producers and IBEST for their support; financial support from the Idaho Global Entrepreneurial Mission.
Abstract: Background: Pan-genomics is a recently emerging strategy that can provide a more comprehensive characterization of genetic variation. Joint calling is routinely used to combine identified variants across multiple related samples. However, improvement of variant identification using mutual support information from multiple samples remains quite limited for population-scale genotyping. Results: In this study, we developed a computational framework for jointly calling genetic variants from 5,061 sheep by incorporating sequencing error and optimizing mutual support information from multiple samples' data. Variants were accurately identified from multiple samples in four steps: (1) probabilities of variants from two widely used algorithms, GATK and Freebayes, were calculated by a Poisson model incorporating base sequencing error potential; (2) variants with high mapping quality, or consistently identified from at least two samples by both GATK and Freebayes, were used to construct a raw high-confidence identification (rHID) variants database; (3) high-confidence variants identified in a single sample were ordered by probability value and controlled by false discovery rate (FDR) using the rHID database; (4) to avoid eliminating potentially true variants absent from the rHID database, variants that failed FDR were re-examined to rescue potentially true variants and ensure highly accurate identification. The results indicate that the percentages of concordant SNPs and indels from Freebayes and GATK were significantly improved by 12%-32% with our new method compared with the raw variants, and low-frequency variants of individual sheep were advantageously found for several traits, including nipple number (GPC5), scrapie pathology (PAPSS2), seasonal reproduction and litter size (GRM1), coat color (RAB27A), and lentivirus susceptibility (TMEM154). Conclusion: The new method uses a computational strategy to reduce the number of false positives while simultaneously improving the identification of genetic variants. This strategy does not incur any extra cost from additional samples or sequencing data and advantageously identifies rare variants, which can be important for practical applications in animal breeding.
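Steps (1) and (3) above can be sketched with a Poisson survival probability and Benjamini-Hochberg FDR control: the chance of seeing at least k alternate-allele reads from sequencing error alone is scored per site, and the resulting p-values are filtered. The depth, error rate, and read counts below are illustrative.

```python
import math

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam): survival function."""
    return 1.0 - sum(math.exp(-lam) * lam**i / math.factorial(i)
                     for i in range(k))

def bh_pass(pvals, alpha=0.05):
    """Indices of p-values passing Benjamini-Hochberg at level alpha."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, cutoff = len(pvals), -1
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            cutoff = rank
    return sorted(order[:cutoff]) if cutoff > 0 else []

# 30x depth with a 1% base error rate -> expect 0.3 erroneous alt reads
# per site; four sites observed 8, 1, 9, and 0 alt reads respectively.
p_sites = [poisson_sf(k, lam=0.3) for k in (8, 1, 9, 0)]
print(bh_pass(p_sites))  # -> [0, 2]: only the 8- and 9-read sites survive
```

A single erroneous alt read (site 1) is entirely compatible with the error model, which is how this scoring suppresses false positives without needing extra samples.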
Abstract: Computational optical imaging is an interdisciplinary subject integrating optics, mathematics, and information technology. It introduces information processing into optical imaging and combines it with intelligent computing, subverting the imaging mechanism of traditional optical imaging, which relies only on orderly information transmission. To meet the high-precision requirements of traditional optical imaging for optical processing and adjustment, as well as to solve its sensitivity to gravity and temperature in use, we establish an optical imaging system model from the perspective of computational optical imaging and study how to design for, and solve, the imaging-consistency problem of an optical system under the influence of gravity, thermal effects, stress, and other external environmental factors, so as to build a highly robust optical system. The results show that a high-robustness interval of the optical system exists and can effectively reduce the sensitivity of the optical system to disturbances in each link, thus realizing highly robust optical imaging.