The purpose of this review is to explore the intersection of computational engineering and biomedical science, highlighting the transformative potential this convergence holds for innovation in healthcare and medical research. The review covers key topics such as computational modelling, bioinformatics, machine learning in medical diagnostics, and the integration of wearable technology for real-time health monitoring. Major findings indicate that computational models have significantly enhanced the understanding of complex biological systems, while machine learning algorithms have improved the accuracy of disease prediction and diagnosis. The synergy between bioinformatics and computational techniques has led to breakthroughs in personalized medicine, enabling more precise treatment strategies. Additionally, the integration of wearable devices with advanced computational methods has opened new avenues for continuous health monitoring and early disease detection. The review emphasizes the need for interdisciplinary collaboration to further advance this field. Future research should focus on developing more robust and scalable computational models, enhancing data integration techniques, and addressing ethical considerations related to data privacy and security. By fostering innovation at the intersection of these disciplines, the potential to revolutionize healthcare delivery and outcomes becomes increasingly attainable.
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desired paradigm to timely process the data from IoT for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, such that the long network transmission delay in the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process massive amounts of data, computation offloading is significantly important considering the cooperation among edge devices. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might provide different computation delays to the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server are coupled to determine service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading delay broadcast mechanism, DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under slightly heavy task load, the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%, while that given by the benchmark algorithms drops to intolerable values. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantee in MEC.
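The delay-greedy offloading decision described in the abstract above can be illustrated with a minimal sketch: a new task goes to whichever edge device currently promises the smallest estimated service delay. The device attributes and numbers below are our own illustrative assumptions, not the paper's DGCO algorithm or parameters.

```python
# Hypothetical delay-greedy offloading sketch: pick the edge device with the
# smallest estimated completion delay (queue backlog / processing rate plus
# link delay). All device parameters are illustrative.

def delay_greedy_offload(task_cycles, devices):
    """Return the index of the device minimizing estimated service delay.

    devices: list of dicts with keys
      'backlog' - queued work in CPU cycles
      'rate'    - processing rate in cycles per second
      'link'    - one-way transmission delay in seconds
    """
    def est_delay(d):
        return (d['backlog'] + task_cycles) / d['rate'] + d['link']
    return min(range(len(devices)), key=lambda i: est_delay(devices[i]))

devices = [
    {'backlog': 8e9, 'rate': 2e9, 'link': 0.01},   # busy but fast
    {'backlog': 1e9, 'rate': 1e9, 'link': 0.05},   # idle but slow
    {'backlog': 5e9, 'rate': 4e9, 'link': 0.02},   # fast, moderate queue
]
best = delay_greedy_offload(2e9, devices)   # index of the chosen device
```

Note that the greedy choice depends on the task size: a heavy task favors the fast device even with a moderate queue, while a tiny task favors the idle one.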
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts, and thus how to design photocatalysts is now generating widespread interest in boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional “trial and error” method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of the current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practices and motivating the search for better descriptors.
The emphasis on the simplification of cognitive and motor tasks by recent results of morphological computation has made possible the construction of appropriate “mimetic bodies” able to render the accompanying computations simpler, according to a general appeal to the “simplexity” of animal embodied cognition. A new activity of what we can call “distributed computation” holds the promise of originating a new generation of robots with better adaptability and a restricted number of required control parameters. The framework of distributed computation helps us see them in a more naturalized and prudent perspective, avoiding ontological or metaphysical considerations. Despite this progress, problems regarding the epistemological limitations of computational modeling remain to be solved.
Aptamers are a type of single-chain oligonucleotide that can bind to a specific target. Due to their simple preparation, easy modification, stable structure and reusability, aptamers have been widely applied as biochemical sensors for medicine, food safety and environmental monitoring. However, there is little research on aptamer-target binding mechanisms, which limits their application and development. Computational simulation has gained much attention for revealing aptamer-target binding mechanisms at the atomic level. This work summarizes the main simulation methods used in the mechanistic analysis of aptamer-target complexes, the characteristics of binding between aptamers and different targets (metal ions, small organic molecules, biomacromolecules, cells, bacteria and viruses), the types of aptamer-target interactions and the factors influencing their strength. It provides a reference for further use of simulations in understanding aptamer-target binding mechanisms.
This study developed a numerical model to efficiently treat solid waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow, heat, and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary and secondary nozzle flow ratios, velocity ratios, and secondary nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate is primarily dependent on the average residence time of particles, which can be increased by decreasing the flow and velocity ratios and increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
On the basis of computational fluid dynamics, the flow field characteristics of multi-trophic artificial reefs were analyzed, including the flow field distribution features of a single reef under three different velocities and the effect of spacing between reefs on flow scale and flow state. Results indicate upwelling, slow flow, and eddies around a single reef. The maximum velocity, height, and volume of upwelling in front of a single reef were positively correlated with inflow velocity. The length and volume of the slow-flow region increased with increasing inflow velocity. Eddies were present both inside and behind the reef, and vorticity was positively correlated with inflow velocity. Spacing between reefs had a minor influence on the maximum velocity and height of upwelling. As the spacing increased from 0.5 L to 1.5 L (L is the reef length), the length of slow flow in front of and behind the combined reefs increased slightly. When the spacing was 2.0 L, the length of the slow flow decreased. At all four spacings, eddies were present inside and behind each reef. The maximum vorticity was negatively correlated with spacing from 0.5 L to 1.5 L, but at 2.0 L spacing, the maximum vorticity was close to that of a single reef under the same inflow velocity.
For living anionic polymerization (LAP), the solvent has a great influence on both reaction mechanism and kinetics. In this work, using the classical butyllithium-styrene polymerization as a model system, the effect of solvent on the mechanism and kinetics of LAP was revealed through a strategy combining density functional theory (DFT) calculations and kinetic modeling. In terms of mechanism, detailed energy decomposition analysis of the electrostatic interactions between initiator and solvent molecules shows that the stronger the solvent polarity, the more electrons transfer from initiator to solvent. Furthermore, we also found that the stronger the solvent polarity, the higher the monomer initiation energy barrier and the smaller the initiation rate coefficient. Counterintuitively, initiation is more favorable at lower temperatures based on the calculated ΔG_TS results. Finally, the kinetic characteristics in different solvents were further examined by kinetic modeling. In benzene and n-pentane, the polymerization rate exhibits first-order kinetics, whereas slow initiation and fast propagation were observed in tetrahydrofuran (THF) due to the slow free-ion formation rate, leading to a deviation from first-order kinetics.
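The first-order behavior mentioned above follows from a constant concentration of living chain ends: with d[M]/dt = -kp[P*][M] and [P*] constant, ln([M]0/[M]) grows linearly in time. A quick numerical check, with made-up rate constants (not the paper's values):

```python
import math

# Illustrative first-order monomer consumption in LAP: with a constant
# living-chain-end concentration [P*], d[M]/dt = -kp*[P*]*[M] integrates to
# ln([M]0/[M]) = kp*[P*]*t. The constants below are illustrative only.

kp, P_star, M0 = 100.0, 1e-3, 1.0   # L/(mol*s), mol/L, mol/L

def monomer(t):
    """Monomer concentration at time t (seconds)."""
    return M0 * math.exp(-kp * P_star * t)

# ln([M]0/[M]) at t = 10, 20, 30 s: linear growth confirms first-order kinetics
ratios = [math.log(M0 / monomer(t)) for t in (10, 20, 30)]
```

A semilog plot of these values against time would be a straight line of slope kp[P*]; deviation from linearity is the signature of the slow-initiation regime described for THF.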
According to the World Health Organization (WHO), meningitis is a severe infection of the meninges, the membranes covering the brain and spinal cord. It is a devastating disease and remains a significant public health challenge. This study investigates a bacterial meningitis model through deterministic and stochastic versions. Four-compartment population dynamics explain the concept, namely the susceptible, carrier, infected, and recovered populations. The model predicts the nonnegative equilibrium points and the reproduction number, i.e., the Meningitis-Free Equilibrium (MFE) and the Meningitis-Existing Equilibrium (MEE). For the stochastic version of the existing deterministic model, the two methodologies studied are transition probabilities and nonparametric perturbations. Also, positivity, boundedness, extinction, and disease persistence are studied rigorously with the help of well-known theorems. Standard and nonstandard techniques such as Euler-Maruyama, stochastic Euler, stochastic Runge-Kutta, and stochastic nonstandard finite difference in the sense of delay are presented for computational analysis of the stochastic model. Unfortunately, the standard methods fail to restore the biological properties of the model, so the stochastic nonstandard finite difference approximation is offered as an efficient, low-cost method that is independent of time step size. In addition, the convergence and the local and global stability around the equilibria of the nonstandard computational method are studied by assuming the perturbation effect is zero. Simulations and comparisons of the methods are presented to support the theoretical results and for the best visualization of results.
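The key claim above, that nonstandard finite difference (NSFD) schemes preserve biological properties where standard methods fail, can be seen in a deterministic toy equation (our simplification, not the paper's four-compartment model): treating the loss term implicitly makes the update a ratio of nonnegative quantities, so positivity holds for any step size.

```python
# Toy positivity comparison for dI/dt = beta*S*I - gamma*I with S held fixed.
# Explicit Euler subtracts the loss term explicitly and can go negative for a
# large step; the NSFD scheme treats the loss implicitly and stays positive.
# Parameters are illustrative assumptions.

beta, gamma, S = 0.1, 2.0, 1.0
h = 1.0            # deliberately large time step

def euler_step(I):
    return I + h * (beta * S * I - gamma * I)

def nsfd_step(I):
    # I_{n+1} = (I_n + h*beta*S*I_n) / (1 + h*gamma): nonnegative numerator
    # over positive denominator, so I stays nonnegative for any h > 0.
    return (I + h * beta * S * I) / (1.0 + h * gamma)

I_euler = I_nsfd = 0.5
for _ in range(3):
    I_euler = euler_step(I_euler)
    I_nsfd = nsfd_step(I_nsfd)
```

After three steps the Euler iterate has left the biologically meaningful region (a negative "infected" population), while the NSFD iterate decays toward zero from above.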
Model checking is an automated formal verification method to verify whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of quantitative and uncertain property verification for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic knowledge. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas by using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we further illustrate the practical application of our approach through an example of a train control system.
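To make the fuzzy-Kripke idea concrete, here is an assumption-level sketch (not the paper's FCTLK algorithm) of one fuzzy model-checking step: on a fuzzy Kripke structure, the truth degree of EX p at a state is computed by replacing Boolean conjunction and disjunction with min and max over the fuzzy transition degrees.

```python
# Fuzzy "exists next" (EX) on a tiny fuzzy Kripke structure:
#   [[EX p]](s) = max over successors t of min(R(s, t), [[p]](t)).
# States, degrees, and the proposition p are illustrative assumptions.

R = {                      # fuzzy transition degrees R[s][t]
    's0': {'s1': 0.8, 's2': 0.5},
    's1': {'s0': 1.0},
    's2': {'s2': 0.9},
}
p = {'s0': 0.2, 's1': 0.6, 's2': 0.9}   # fuzzy truth degrees of p

def fuzzy_EX(p, R, s):
    """Truth degree of EX p at state s (min/max in place of and/or)."""
    return max(min(deg, p[t]) for t, deg in R[s].items())

val = fuzzy_EX(p, R, 's0')
```

At 's0' the two successor contributions are min(0.8, 0.6) and min(0.5, 0.9), so EX p holds to degree 0.6; the other temporal operators (EU, EG) are handled by fixpoint iteration over the same min/max algebra.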
By deploying the ubiquitous and reliable coverage of low Earth orbit (LEO) satellite networks using optical inter-satellite links (OISL), computation offloading services can be provided for any users without proximal servers, while the limited computation and storage resources on satellites are the important factors affecting the maximum task completion time. In this paper, we study a delay-optimal multi-satellite collaborative computation offloading scheme that allows satellites to actively migrate tasks among themselves by employing the high-speed OISLs, such that tasks with long queuing delay will be served as quickly as possible by utilizing idle computation resources in the neighborhood. To satisfy the delay requirement of delay-sensitive tasks, we first propose a deadline-aware task scheduling scheme in which a priority model is constructed to sort the order in which tasks are served based on their deadlines, and then a delay-optimal collaborative offloading scheme is derived such that tasks which cannot be completed locally can be migrated to other idle satellites. Simulation results demonstrate the effectiveness of our multi-satellite collaborative computation offloading strategy in reducing task completion time and improving resource utilization of the LEO satellite network.
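A deadline-based priority model like the one described above can be sketched as an earliest-deadline-first queue: tasks are popped in order of their deadlines, so the most urgent task is always served next. Task names and deadlines below are illustrative, not from the paper.

```python
import heapq

# Earliest-deadline-first ordering via a min-heap keyed on deadline: a sketch
# of a deadline-aware priority model (field names and values are ours).

tasks = [
    ('navigation', 30.0),   # (name, deadline in seconds)
    ('telemetry', 5.0),
    ('imaging', 12.0),
]
heap = [(deadline, name) for name, deadline in tasks]
heapq.heapify(heap)         # O(n) build; each pop is O(log n)

served = []
while heap:
    deadline, name = heapq.heappop(heap)   # most urgent task first
    served.append(name)
```

In the collaborative setting, a task whose local queue makes its deadline infeasible would be popped and migrated over an OISL to a neighbor with spare capacity instead of being served locally.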
The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflecting surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server in the system model. Specifically, the user node accesses the primary user spectrum while adhering to the constraint of satisfying the primary user peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system in Rayleigh channels is derived and analyzed. The study investigates the impact of various system parameters, including the number of UAVs, the peak interference power, and the TS and PS factors, on the system's outage performance through simulation. The proposed system is also compared to two conventional benchmark schemes: optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmark schemes.
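The best-UAV selection under Rayleigh fading admits a simple closed form that a Monte Carlo run can cross-check: per-link SNR is exponentially distributed, and picking the maximum of N independent links gives outage probability (1 - exp(-th/avg))^N. The SNR numbers below are illustrative assumptions, not the paper's system parameters.

```python
import math, random

# Outage of best-of-N UAV selection in Rayleigh fading: per-link SNR is
# exponential with mean snr_avg, so P(max SNR < snr_th) = (1-exp(-th/avg))^N.
# Monte Carlo estimate vs. the analytic expression, illustrative values only.

def outage_analytic(n_uav, snr_avg, snr_th):
    return (1.0 - math.exp(-snr_th / snr_avg)) ** n_uav

def outage_mc(n_uav, snr_avg, snr_th, trials=20000, seed=1):
    rng = random.Random(seed)
    fails = 0
    for _ in range(trials):
        # instantaneous SNR of each UAV link; keep the best relay
        best = max(rng.expovariate(1.0 / snr_avg) for _ in range(n_uav))
        fails += best < snr_th
    return fails / trials

a1 = outage_analytic(1, 10.0, 5.0)   # single UAV
a3 = outage_analytic(3, 10.0, 5.0)   # selection over 3 UAVs
m3 = outage_mc(3, 10.0, 5.0)
```

The diversity gain is visible directly: with three UAVs the outage drops roughly from 0.39 to 0.06 at this threshold, matching the abstract's observation that outage improves with the number of UAVs.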
In this paper, the authors extend [1] and provide more details of how the brain may act like a quantum computer. In particular, positing the difference between voltages on two axons as the environment for ions undergoing spatial superposition, we argue that evolution in the presence of metric perturbations will differ from that in the absence of these waves. This differential state evolution will then encode the information being processed by the tract, due to the interaction of the quantum state of the ions at the nodes with the “controlling” potential. Upon decoherence, which is equivalent to a measurement, the final spatial state of the ions is decided, and it is also reset by the next impulse initiation time. Under synchronization, several tracts undergo such processes in synchrony, and therefore the picture of a quantum computing circuit is complete. Under this model, based on the number of axons in the corpus callosum alone, we estimate that upwards of 50 million quantum states might be prepared and evolved every second in this white matter tract, far greater processing than any present quantum computer can accomplish.
Inflammatory bowel diseases (IBD) are complex multifactorial disorders that include Crohn's disease (CD) and ulcerative colitis (UC). Considering that IBD is a genetic and multifactorial disease, we screened for the distribution dynamics of IBD pathogenic genetic variants (single nucleotide polymorphisms; SNPs) and risk factors in four (4) IBD pediatric patients, by integrating both clinical exome sequencing and computational statistical approaches, aiming to categorize IBD patients into CD and UC phenotypes. To this end, we first aligned genomic read sequences of these IBD patients to the hg19 human genome by using the bowtie 2 package. Next, we performed genetic variant calling analysis in terms of single nucleotide polymorphisms (SNPs) for genes covered by at least 20 genomic read sequences. Finally, we checked the biological and genomic functions of genes exhibiting statistically significant genetic variants (SNPs) by introducing the Fitcon genomic parameter. Findings showed that the Fitcon parameter normalizes IBD patient population variability and induces relatively good clustering of IBD patients into CD and UC phenotypes. Genomic analysis revealed a random distribution of risk factors as well as pathogenic SNP genetic variants in the four IBD patients' genomes, claimed to be involved in: i) metabolic disorders, ii) autoimmune deficiencies, and iii) Crohn's disease pathways. Integration of genomic and computational statistical analysis supported a relative genetic variability in the IBD patient population by processing IBD pathogenic SNP genetic variants as opposed to IBD risk factor variants. Interestingly, the findings clearly allowed categorizing IBD patients into CD and UC phenotypes by applying the Fitcon parameter in selecting IBD pathogenic genetic variants. Taken as a whole, the study suggests the efficiency of integrating clinical exome sequencing and computational statistical tools as a sound approach for discriminating IBD phenotypes and improving the inflammatory bowel disease (IBD) molecular diagnostic process.
Single-pixel imaging (SPI) enables an invisible target to be imaged onto a photosensitive surface without a lens, emerging as a promising way for indirect optical encryption. However, due to its linear and broadcast imaging principles, SPI encryption has long been confined to a single-user framework. We propose a multi-image SPI encryption method and combine it with orthogonal frequency division multiplexing-assisted key management to achieve a multiuser SPI encryption and authentication framework. Multiple images are first encrypted as a composite intensity sequence containing the plaintexts and authentication information, simultaneously generating different sets of keys for users. Then, the SPI keys for encryption and authentication are asymmetrically isolated into independent frequency carriers and encapsulated into a Malus metasurface, so as to establish an individually private and content-independent channel for each user. Users can receive different plaintexts privately and verify their authenticity, eliminating the broadcast transparency of SPI encryption. The improved linear security is also verified by simulating attacks. By the combination of direct key management and indirect image encryption, our work achieves encryption and authentication functionality under a multiuser computational imaging framework, facilitating its application in optical communication, imaging, and security.
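The "linear imaging principle" that makes plain SPI transparent to any key holder can be shown in a toy forward model (our illustration, not the paper's encryption scheme): each bucket measurement is an inner product between a structured pattern and the scene, so whoever knows the patterns recovers the scene linearly.

```python
# Toy single-pixel imaging forward model with a 4x4 Hadamard pattern set:
# each "bucket" value is one inner product pattern . scene, and the scene is
# recovered linearly from the bucket sequence. Scene values are illustrative.

H = [                       # one row per illumination pattern (+1/-1 pixels)
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
]
scene = [0.2, 0.8, 0.5, 0.1]            # 4-pixel "image"

# forward model: single-pixel detector integrates pattern * scene
bucket = [sum(p * s for p, s in zip(row, scene)) for row in H]

# linear recovery: H is symmetric with H*H = 4*I, so x = (H * bucket) / 4
recovered = [sum(H[i][j] * bucket[i] for i in range(4)) / 4 for j in range(4)]
```

This linearity is exactly why the pattern sequence acts as the key, and why, without the per-user key isolation described above, every receiver of the broadcast bucket sequence who holds the patterns sees the same plaintext.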
Introduction to Computer Science, as one of the fundamental courses in computer-related majors, plays an important role in the cultivation of computer professionals. However, traditional teaching models and content can no longer fully meet the needs of modern information technology development. In response to these issues, this article introduces the concept of computational creative thinking, optimizes course content, adopts exploratory teaching methods, and innovates course assessment methods, aiming to comprehensively enhance students' computational thinking and innovative abilities. By continuously improving and promoting this teaching model, computer education in universities can be advanced to a new level.
The missile interception problem can be regarded as a two-person zero-sum differential game, which depends on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation. It has been proved impossible to obtain a closed-form solution due to the nonlinearity of the HJI equation, and many iterative algorithms have been proposed to solve it. The simultaneous policy updating algorithm (SPUA) is an effective algorithm for solving the HJI equation, but it is an on-policy integral reinforcement learning (IRL) method. For online implementation of SPUA, the disturbance signals need to be adjustable, which is unrealistic. In this paper, an off-policy IRL algorithm based on SPUA is proposed that does not require any knowledge of the system dynamics. Then, a neural-network-based online adaptive critic implementation scheme of the off-policy IRL algorithm is presented. Based on the online off-policy IRL method, a computational intelligence interception guidance (CIIG) law is developed for intercepting a high-maneuvering target. As a model-free method, interception can be achieved by measuring system data online. The effectiveness of the CIIG law is verified through two missile-target engagement scenarios.
This paper takes the assessment and evaluation of a computational mechanics course as its background and constructs a diversified course evaluation system that is student-centered and integrates both quantitative and qualitative evaluation methods. The system not only pays attention to students' practical operation and mastery of theoretical knowledge but also puts special emphasis on the cultivation of students' innovative abilities. In order to realize a comprehensive and objective evaluation, an assessment method combining TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) multi-attribute decision analysis with entropy weight theory is adopted, and its validity and practicability are verified through example analysis. This method can not only comprehensively and objectively evaluate students' learning outcomes but also provide a scientific decision-making basis for curriculum teaching reform. The implementation of this diversified course evaluation system can better reflect the comprehensive ability of students and promote the continuous improvement of teaching quality.
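The entropy-weight TOPSIS pipeline described above can be sketched end to end on a toy decision matrix (our own numbers, not the paper's course data): entropy weights are derived from the normalized matrix, then alternatives are ranked by relative closeness to the ideal solution. All criteria are treated here as benefit-type.

```python
import math

# Entropy-weight TOPSIS sketch on illustrative data.
X = [  # rows: students, columns: criteria (e.g., practice, theory, innovation)
    [85, 70, 90],
    [78, 88, 75],
    [92, 80, 60],
]
m, n = len(X), len(X[0])

# 1) entropy weights: low-entropy (more discriminating) criteria weigh more
P = [[X[i][j] / sum(X[k][j] for k in range(m)) for j in range(n)]
     for i in range(m)]
E = [-sum(P[i][j] * math.log(P[i][j]) for i in range(m)) / math.log(m)
     for j in range(n)]
d = [1 - e for e in E]
w = [dj / sum(d) for dj in d]

# 2) weighted vector-normalized matrix
norm = [math.sqrt(sum(X[i][j] ** 2 for i in range(m))) for j in range(n)]
V = [[w[j] * X[i][j] / norm[j] for j in range(n)] for i in range(m)]

# 3) TOPSIS closeness to the ideal (best) and anti-ideal (worst) solutions
best = [max(V[i][j] for i in range(m)) for j in range(n)]
worst = [min(V[i][j] for i in range(m)) for j in range(n)]
def dist(v, ref):
    return math.sqrt(sum((v[j] - ref[j]) ** 2 for j in range(n)))
C = [dist(V[i], worst) / (dist(V[i], worst) + dist(V[i], best))
     for i in range(m)]   # closeness in [0, 1]; higher ranks better
```

Ranking students by descending C combines all criteria objectively; the entropy step removes the need for hand-picked weights, which is the point of pairing it with TOPSIS.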
In blind quantum computation (BQC), a client with weak quantum computation capabilities is allowed to delegate its quantum computation tasks to a server with powerful quantum computation capabilities, while the inputs, algorithms and outputs of the computation remain confidential to the server. Verifiability refers to the ability of the client to verify, with a certain probability, whether the server has executed the protocol correctly; it can be realized by introducing trap qubits into the computation graph state to detect server deception. The existing verifiable universal BQC protocols are analyzed and compared in detail. The XTH protocol (proposed by Xu Q S, Tan X Q, Huang R in 2020), a recent improvement of verifiable universal BQC, uses a sandglass-like graph state to further decrease resource expenditure and enhance verification capability. However, the XTH protocol has two shortcomings: limitations in the coloring scheme and a high probability of accepting an incorrect computation result. In this paper, we present an improved version of the XTH protocol, which revises the limitations of the original coloring scheme and further improves the verification ability. The analysis demonstrates that the resource expenditure is the same as for the XTH protocol, while the probability of accepting a wrong computation result is reduced from the original minimum of (0.866)^(d*) to (0.819)^(d*), where d* is the number of repeated executions of the protocol.
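The practical meaning of lowering the base from 0.866 to 0.819 is fewer repetitions for the same soundness target. A quick arithmetic check (the 1% target is our illustrative choice):

```python
import math

# Repetitions d* needed to push the wrong-result acceptance probability
# base**d below a target, for the original (0.866) and improved (0.819)
# bounds quoted above. The 1% target is an illustrative choice.

def reps_needed(base, target=0.01):
    return math.ceil(math.log(target) / math.log(base))

d_old = reps_needed(0.866)   # repetitions under the XTH bound
d_new = reps_needed(0.819)   # repetitions under the improved bound
```

Under these bounds, reaching 1% acceptance probability takes 33 repetitions with base 0.866 but only 24 with base 0.819, roughly a quarter fewer protocol runs.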
To support the explosive growth of Information and Communications Technology (ICT), Mobile Edge Computing (MEC) provides users with low-latency and high-bandwidth service by offloading computational tasks to the network's edge. However, resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications. To address the difficulty of running computationally intensive applications on resource-constrained clients, a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers is studied in this paper. Then a user benefit function, EoU (Experience of Users), is proposed that jointly considers energy consumption and time delay. The EoU maximization problem is decomposed into two steps, i.e., resource allocation and offloading decision. The offloading decision is usually given by heuristic algorithms, which often face the challenge of slow convergence and poor stability. Thus, a combined offloading algorithm, i.e., a Gini coefficient-based adaptive genetic algorithm (GCAGA), is proposed to alleviate this dilemma. The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates convergence with the Gini coefficient. The simulation compares the proposed algorithm with the genetic algorithm (GA) and the adaptive genetic algorithm (AGA). Experiment results show that the Gini coefficient and the adaptive heuristic operators can accelerate the convergence speed, and the proposed algorithm performs better in terms of convergence while obtaining higher EoU. The simulation code of the proposed algorithm is available at: https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
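One plausible reading of the Gini-coefficient mechanism is as a diversity signal: when population fitness values are nearly uniform (Gini near 0), the population has converged, so genetic operators should be boosted. The thresholds and rates below are our illustrative assumptions, not GCAGA's actual parameters.

```python
# Sketch: use the Gini coefficient of population fitness as a diversity
# signal to adapt the mutation rate. Thresholds/rates are assumptions.

def gini(values):
    """Gini coefficient of nonnegative values: 0 = perfectly uniform."""
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * total) - (n + 1) / n

def adaptive_mutation_rate(fitness, low=0.05, base=0.01, boosted=0.1):
    # near-uniform fitness (low Gini) suggests premature convergence:
    # raise the mutation rate to restore diversity
    return boosted if gini(fitness) < low else base

uniform = [10.0, 10.0, 10.0, 10.0]   # converged population
diverse = [1.0, 5.0, 9.0, 20.0]      # still-exploring population
```

With this rule the converged population gets the boosted rate while the diverse one keeps the base rate, which matches the abstract's claim that the Gini coefficient is what accelerates and stabilizes convergence.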
Funding: Supported in part by the National Natural Science Foundation of China under Grants 61901128 and 62273109, and by the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (21KJB510032).
Abstract: The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted, and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desirable paradigm for processing IoT data in a timely manner to maximize its value. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, so that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading through cooperation among edge devices is critically important. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might yield different computation delays for the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server are coupled in determining service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading-delay broadcast mechanism, DGCO and RLPS cooperate to maximize the delay-guarantee ratio. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under a slightly heavy task load, the delay-guarantee ratio given by DGCO-RLPS can still approximate 95%, while that given by the benchmark algorithms drops to intolerable values. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantees in MEC.
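The delay-greedy offloading decision can be illustrated with a minimal sketch. The function name, parameters, and delay model below are illustrative assumptions, not the paper's actual formulation: a device simply compares the estimated completion delay of local execution against each edge option whose delay was broadcast, and picks the smallest.

```python
# Hypothetical delay-greedy offloading decision (names and delay model are
# illustrative): pick the option with the smallest estimated completion delay.

def delay_greedy_offload(task_cycles, local_rate, options):
    """options: list of (server_id, tx_delay, queue_delay, server_rate)."""
    best_choice, best_delay = "local", task_cycles / local_rate
    for server_id, tx_delay, queue_delay, server_rate in options:
        delay = tx_delay + queue_delay + task_cycles / server_rate
        if delay < best_delay:
            best_choice, best_delay = server_id, delay
    return best_choice, best_delay

# A task of 8e8 CPU cycles on a slow local device versus two edge servers
choice, delay = delay_greedy_offload(
    task_cycles=8e8, local_rate=1e8,
    options=[("edge1", 0.2, 0.5, 4e8), ("edge2", 0.1, 2.0, 8e8)])
```

Here the nearer server wins despite being slower, because its queue is shorter; this is exactly the coupling between offloading and scheduling that the broadcast mechanism exposes.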
Funding: The authors are grateful for financial support from the National Key Projects for Fundamental Research and Development of China (2021YFA1500803), the National Natural Science Foundation of China (51825205, 52120105002, 22102202, 22088102, U22A20391), the DNL Cooperation Fund, CAS (DNL202016), and the CAS Project for Young Scientists in Basic Research (YSBR-004).
Abstract: Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; how to design photocatalysts is therefore generating widespread interest for boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the perspective of experimental practice, especially the inefficiency of the traditional "trial and error" method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental-practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practice and motivating the search for better descriptors.
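At its core, a high-throughput screening pipeline of the kind discussed here is a sequence of property filters applied to a candidate pool. The sketch below is a hypothetical illustration (the candidate data and thresholds are invented): it keeps only materials whose computed band gap lies in a visible-light window and whose band edges straddle the water redox potentials (CBM below 0 V and VBM above 1.23 V vs. NHE).

```python
# Invented candidate pool; in practice these values would come from DFT runs.
CANDIDATES = [
    {"name": "A", "gap_eV": 2.4, "cbm_eV": -0.5, "vbm_eV": 1.9},
    {"name": "B", "gap_eV": 1.1, "cbm_eV": -0.2, "vbm_eV": 0.9},
    {"name": "C", "gap_eV": 2.8, "cbm_eV": -0.8, "vbm_eV": 2.0},
]

def screen(candidates, gap_range=(1.6, 3.0), h2_eV=0.0, o2_eV=1.23):
    # Keep materials whose gap is in the visible-light window and whose band
    # edges straddle the water redox couples (CBM < H+/H2, VBM > O2/H2O, vs NHE)
    return [c["name"] for c in candidates
            if gap_range[0] <= c["gap_eV"] <= gap_range[1]
            and c["cbm_eV"] < h2_eV and c["vbm_eV"] > o2_eV]

selected = screen(CANDIDATES)
```

Real screening workflows chain many such descriptor filters (stability, carrier mobility, cost) before any standardized experiment is attempted, which is why the choice of descriptors matters so much.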
Abstract: The emphasis placed on the simplification of cognitive and motor tasks by recent results in morphological computation has made possible the construction of appropriate "mimetic bodies" able to render the accompanying computations simpler, according to a general appeal to the "simplexity" of animal embodied cognition. A new activity of what we can call "distributed computation" holds the promise of originating a new generation of robots with better adaptability and a restricted number of required control parameters. The framework of distributed computation helps us see these robots in a more naturalized and prudent perspective, avoiding ontological or metaphysical considerations. Despite this progress, problems regarding the epistemological limitations of computational modeling remain to be solved.
Abstract: Aptamers are a type of single-stranded oligonucleotide that can bind to a specific target. Due to their simple preparation, easy modification, stable structure, and reusability, aptamers have been widely applied as biochemical sensors in medicine, food safety, and environmental monitoring. However, there is little research on aptamer-target binding mechanisms, which limits their application and development. Computational simulation has gained much attention for revealing aptamer-target binding mechanisms at the atomic level. This work summarizes the main simulation methods used in the mechanistic analysis of aptamer-target complexes, the characteristics of binding between aptamers and different targets (metal ions, small organic molecules, biomacromolecules, cells, bacteria, and viruses), the types of aptamer-target interactions, and the factors influencing their strength. It provides a reference for the further use of simulations in understanding aptamer-target binding mechanisms.
Funding: The authors acknowledge the financial support for this work provided by the National Key R&D Program of China, "Technologies and Integrated Application of Magnesite Waste Utilization for High-Valued Chemicals and Materials" (2020YFC1909303).
Abstract: This study developed a numerical model to efficiently treat solid-waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow and heat and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary-to-secondary nozzle flow ratios, velocity ratios, and secondary-nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate depends primarily on the average residence time of the particles, which can be increased by decreasing the flow and velocity ratios and increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary-nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
Funding: Supported by the National Natural Science Foundation of China (No. 32002442) and the National Key R&D Program (No. 2019YFD0902101).
Abstract: On the basis of computational fluid dynamics, the flow field characteristics of multi-trophic artificial reefs, including the flow-field distribution features of a single reef under three different velocities and the effect of spacing between reefs on flow scale and flow state, were analyzed. Results indicate upwelling, slow flow, and eddies around a single reef. The maximum velocity, height, and volume of upwelling in front of a single reef were positively correlated with inflow velocity. The length and volume of the slow-flow region increased with inflow velocity. Eddies were present both inside and behind the reef, and vorticity was positively correlated with inflow velocity. Spacing between reefs had a minor influence on the maximum velocity and height of upwelling. As the spacing increased from 0.5 L to 1.5 L (where L is the reef length), the length of the slow-flow regions in front of and behind the combined reefs increased slightly. When the spacing was 2.0 L, the length of the slow-flow region decreased. At all four spacings, eddies were present inside and behind each reef. The maximum vorticity was negatively correlated with spacing from 0.5 L to 1.5 L, but at 2.0 L spacing, the maximum vorticity was close to that of a single reef under the same inflow velocity.
Funding: Financially supported by the National Natural Science Foundation of China (U21A20313, 22222807).
Abstract: For living anionic polymerization (LAP), the solvent has a great influence on both the reaction mechanism and the kinetics. In this work, using the classical butyllithium-styrene polymerization as a model system, the effect of solvent on the mechanism and kinetics of LAP was revealed through a strategy combining density functional theory (DFT) calculations and kinetic modeling. In terms of mechanism, detailed energy decomposition analysis of the electrostatic interactions between initiator and solvent molecules shows that the stronger the solvent polarity, the more electron transfer occurs from initiator to solvent. Furthermore, we found that the stronger the solvent polarity, the higher the monomer initiation energy barrier and the smaller the initiation rate coefficient. Counterintuitively, initiation is more favorable at lower temperatures based on the calculated ΔG_TS values. Finally, the kinetic characteristics in different solvents were further examined by kinetic modeling. In benzene and n-pentane, the polymerization rate exhibits first-order kinetics, while slow initiation and fast propagation were observed in tetrahydrofuran (THF) due to the slow rate of free-ion formation, leading to a deviation from first-order kinetics.
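The first-order behavior reported for benzene and n-pentane can be sketched numerically. Assuming a constant concentration of living chain ends (no termination), monomer consumption follows d[M]/dt = -k_app[M], so ln([M]0/[M]) grows linearly in time; the rate coefficient below is an arbitrary placeholder, not a value from the paper.

```python
import math

def monomer_conversion(k_app, t):
    # Ideal living polymerization with a constant chain-end concentration:
    # d[M]/dt = -k_app [M]  =>  [M](t) = [M]0 exp(-k_app t)
    return 1.0 - math.exp(-k_app * t)

# ln([M]0/[M]) = k_app * t is linear in t: the signature of first-order kinetics
times = [0.0, 50.0, 100.0, 150.0]
log_ratios = [-math.log(1.0 - monomer_conversion(0.01, t)) for t in times]
```

The THF case deviates from this picture precisely because initiation is slow relative to propagation, so the chain-end concentration is not constant early in the reaction.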
Funding: The authors thank the Deanship of Research and Graduate Studies at King Khalid University for funding this work through a large Research Project under Grant Number RGP2/302/45. This work was also supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Grant Number A426).
Abstract: According to the World Health Organization (WHO), meningitis is a severe infection of the meninges, the membranes covering the brain and spinal cord. It is a devastating disease and remains a significant public health challenge. This study investigates a bacterial meningitis model in both deterministic and stochastic versions. A four-compartment population dynamics model explains the concept, comprising the susceptible, carrier, infected, and recovered populations. The model predicts the nonnegative equilibrium points and the reproduction number, i.e., the Meningitis-Free Equilibrium (MFE) and the Meningitis-Existing Equilibrium (MEE). For the stochastic version of the deterministic model, the two methodologies studied are transition probabilities and nonparametric perturbations. Positivity, boundedness, extinction, and disease persistence are also studied rigorously with the help of well-known theorems. Standard and nonstandard techniques such as Euler-Maruyama, stochastic Euler, stochastic Runge-Kutta, and stochastic nonstandard finite difference (in the sense of delay) are presented for the computational analysis of the stochastic model. Unfortunately, the standard methods fail to preserve the biological properties of the model, so the stochastic nonstandard finite difference approximation is offered as an efficient, low-cost method that is independent of the time step size. In addition, the convergence and the local and global stability around the equilibria of the nonstandard computational method are studied by assuming the perturbation effect is zero. Simulations and comparisons of the methods are presented to support the theoretical results and to best visualize the outcomes.
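The Euler-Maruyama scheme mentioned above can be stated in a few lines. For an SDE dX = f(X)dt + g(X)dW, each step adds the drift times dt plus the diffusion times an N(0, dt) increment. The compartment model and parameter values below are placeholders, not the paper's model; the sketch only shows the update rule, and incidentally why the scheme can violate positivity (a large negative noise increment can push a compartment below zero, which is what the nonstandard finite-difference method is designed to avoid).

```python
import math
import random

def euler_maruyama_step(state, drift, diffusion, dt, rng):
    # X_{t+dt} = X_t + f(X_t)*dt + g(X_t)*dW,  with dW ~ N(0, dt) per component
    fs, gs = drift(state), diffusion(state)
    return [x + f * dt + g * rng.gauss(0.0, math.sqrt(dt))
            for x, f, g in zip(state, fs, gs)]

def drift(x):
    # Hypothetical susceptible-carrier-infected-recovered drift (placeholders)
    s, c, i, r = x
    beta, theta, gamma, mu = 0.5, 0.2, 0.1, 0.05
    infection = beta * s * (c + i)
    return [-infection,              # susceptible
            infection - theta * c,   # carrier
            theta * c - gamma * i,   # infected
            gamma * i - mu * r]      # recovered

def diffusion(x):
    sigma = 0.02                     # multiplicative noise intensity
    return [sigma * v for v in x]

rng = random.Random(1)
state = [0.9, 0.05, 0.05, 0.0]
for _ in range(100):
    state = euler_maruyama_step(state, drift, diffusion, 0.01, rng)
```

With the diffusion set to zero the step reduces exactly to the deterministic Euler method, which is a convenient sanity check when implementing either scheme.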
Funding: The work is partially supported by the Natural Science Foundation of Ningxia (Grant No. AAC03300), the National Natural Science Foundation of China (Grant No. 61962001), and the Graduate Innovation Project of North Minzu University (Grant No. YCX23152).
Abstract: Model checking is an automated formal verification method used to verify whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, there is still a lack of verification of quantitative and uncertain properties for these systems. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic by introducing epistemic modalities and proposing a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we transform the FCTLK model checking problem into FCTL model checking. This enables the verification of FCTLK formulas using the fuzzy model checking algorithm of FCTL without additional computational overhead. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and further illustrate the practical application of our approach through an example of a train control system.
Funding: This work was supported by the National Key Research and Development Program of China (2021YFB2900600), the National Natural Science Foundation of China (61971041, 62001027), the Beijing Natural Science Foundation (M22001), and the Technological Innovation Program of Beijing Institute of Technology (2022CX01027).
Abstract: By leveraging the ubiquitous and reliable coverage of low Earth orbit (LEO) satellite networks with optical inter-satellite links (OISLs), computation offloading services can be provided to any user without a proximal server, while the limited computation and storage resources on satellites are the key factor affecting the maximum task completion time. In this paper, we study a delay-optimal multi-satellite collaborative computation offloading scheme that allows satellites to actively migrate tasks among themselves over the high-speed OISLs, so that tasks with long queuing delays are served as quickly as possible by idle computation resources in the neighborhood. To satisfy the delay requirements of delay-sensitive tasks, we first propose a deadline-aware task scheduling scheme in which a priority model sorts the order in which tasks are served based on their deadlines, and we then derive a delay-optimal collaborative offloading scheme such that tasks which cannot be completed locally are migrated to other idle satellites. Simulation results demonstrate the effectiveness of our multi-satellite collaborative computation offloading strategy in reducing task completion time and improving resource utilization of the LEO satellite network.
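The deadline-based priority model can be illustrated with an earliest-deadline-first queue. The class below is a generic sketch (its names and structure are assumptions, not the paper's scheme): tasks are served in ascending order of deadline, which is the natural way to prioritize delay-sensitive tasks.

```python
import heapq

class DeadlineQueue:
    """Earliest-deadline-first task queue built on a binary heap."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker so equal deadlines stay FIFO

    def push(self, deadline, task):
        heapq.heappush(self._heap, (deadline, self._counter, task))
        self._counter += 1

    def pop(self):
        # Always serves the task whose deadline expires soonest
        return heapq.heappop(self._heap)[2]

q = DeadlineQueue()
q.push(5.0, "imaging")
q.push(1.5, "telemetry")
q.push(3.0, "navigation")
order = [q.pop() for _ in range(3)]  # served by ascending deadline
```

A collaborative scheme of the kind described above would additionally migrate the tail of such a queue to a neighboring satellite whenever the local backlog makes a deadline infeasible.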
Funding: Supported by the National Natural Science Foundation of China (62271192), the Henan Provincial Scientists Studio (GZS2022015), the Central Plains Talents Plan (ZYYCYU202012173), the National Key R&D Program of China (2020YFB2008400), the Program of CEMEE (2022Z00202B), LAGEO of the Chinese Academy of Sciences (LAGEO-2019-2), the Program for Science & Technology Innovation Talents in the Universities of Henan Province (20HASTIT022), the Natural Science Foundation of Henan under Grant 202300410126, the Program for Innovative Research Teams in the Universities of Henan Province (21IRTSTHN015), the Equipment Pre-Research Joint Research Program of the Ministry of Education (8091B032129), the Training Program for Young Scholars of Henan Province for Colleges and Universities (2020GGJS172), the Program for Science & Technology Innovation Talents in the Universities of Henan Province (22HASTIT020), and the Henan Province Science Fund for Distinguished Young Scholars (222300420006).
Abstract: The utilization of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication presents a viable solution for achieving high-reliability, low-latency communication. This study explores the potential of employing intelligent reflecting surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server in the system model. Specifically, the user node accesses the primary-user spectrum while adhering to the constraint of satisfying the primary user's peak interference power. Furthermore, the UAV acquires energy without interrupting the primary user's regular communication by employing two energy-harvesting schemes, namely time switching (TS) and power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. Subsequently, the analytical expression for the outage probability of the system over Rayleigh channels is derived and analyzed. The study investigates, through simulation, the impact of various system parameters, including the number of UAVs, the peak interference power, and the TS and PS factors, on the system's outage performance. The proposed system is also compared with two conventional benchmark schemes: optimal UAV-link transmission and IRS-link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmarks.
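For a single Rayleigh-faded link the instantaneous SNR is exponentially distributed, so the outage probability has the closed form P_out = 1 - exp(-γ_th/γ̄). The sketch below (variable names and parameter values are illustrative, and this is the textbook single-link case, not the paper's full relay system) cross-checks that expression against a Monte Carlo estimate, mirroring how analytical outage derivations are typically validated by simulation.

```python
import math
import random

def outage_analytic(gamma_th, gamma_avg):
    # Rayleigh fading => instantaneous SNR ~ Exponential(mean = gamma_avg),
    # so P_out = P(SNR < gamma_th) = 1 - exp(-gamma_th / gamma_avg)
    return 1.0 - math.exp(-gamma_th / gamma_avg)

def outage_monte_carlo(gamma_th, gamma_avg, trials, seed=0):
    # Empirical outage frequency over `trials` independent channel draws
    rng = random.Random(seed)
    hits = sum(rng.expovariate(1.0 / gamma_avg) < gamma_th
               for _ in range(trials))
    return hits / trials

p_a = outage_analytic(1.0, 4.0)
p_mc = outage_monte_carlo(1.0, 4.0, trials=200000)
```

With 2×10^5 trials the two estimates agree to within a few parts per thousand, which is the level of agreement usually shown in outage-probability validation plots.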
Abstract: In this paper, the authors extend [1] and provide more details of how the brain may act like a quantum computer. In particular, positing the difference between the voltages on two axons as the environment for ions undergoing spatial superposition, we argue that evolution in the presence of metric perturbations will differ from that in the absence of these waves. This differential state evolution will then encode the information being processed by the tract, due to the interaction of the quantum state of the ions at the nodes with the "controlling" potential. Upon decoherence, which is equivalent to a measurement, the final spatial state of the ions is decided, and it is also reset by the next impulse initiation time. Under synchronization, several tracts undergo such processes in synchrony, and therefore the picture of a quantum computing circuit is complete. Under this model, based on the number of axons in the corpus callosum alone, we estimate that upwards of 50 million quantum states might be prepared and evolved every second in this white-matter tract, far greater processing than any present quantum computer can accomplish.
Abstract: Inflammatory bowel diseases (IBD) are complex multifactorial disorders that include Crohn's disease (CD) and ulcerative colitis (UC). Considering that IBD is a genetic and multifactorial disease, we screened for the distribution dynamics of IBD pathogenic genetic variants (single nucleotide polymorphisms, SNPs) and risk factors in four (4) pediatric IBD patients by integrating clinical exome sequencing and computational statistical approaches, aiming to categorize the IBD patients into CD and UC phenotypes. To this end, we first aligned the genomic read sequences of these IBD patients to the hg19 human genome using the Bowtie 2 package. Next, we performed genetic variant calling analysis in terms of single nucleotide polymorphisms (SNPs) for genes covered by at least 20 genomic read sequences. Finally, we examined the biological and genomic functions of genes exhibiting statistically significant genetic variants (SNPs) by introducing the Fitcon genomic parameter. Findings showed the Fitcon parameter normalizing IBD patient-population variability and inducing relatively good clustering of IBD patients into CD and UC phenotypes. Genomic analysis revealed a random distribution of risk factors as well as pathogenic SNP genetic variants in the four IBD patients' genomes, claimed to be involved in: i) metabolic disorders, ii) autoimmune deficiencies, and iii) Crohn's disease pathways. Integration of genomic and computational statistical analysis supported a relative genetic variability in the IBD patient population when processing IBD pathogenic SNP genetic variants as opposed to IBD risk-factor variants. Interestingly, the findings clearly allowed categorizing IBD patients into CD and UC phenotypes by applying the Fitcon parameter in selecting IBD pathogenic genetic variants. Taken as a whole, the study suggests that integrating clinical exome sequencing and computational statistical tools is an effective approach for discriminating IBD phenotypes and improving the IBD molecular diagnostic process.
Funding: Supported by the National Key R&D Program of China (Grant No. 2021YFB3900300), the National Natural Science Foundation of China (Grant Nos. 61860206007, 62275177, and 62371321), the Ministry of Education Science and Technology Chunhui Project (Grant No. HZKY20220559), the International S&T Cooperation Program of Sichuan Province (Grant No. 2023YFH0030), the Sichuan Science and Technology Innovation Seeding Project (Grant No. 23-YCG034), the Sichuan Science and Technology Program (Grant No. 2023YFG0334), and the Chengdu Science and Technology Program (Grant No. 2022-GH02-00001-HZ).
Abstract: Single-pixel imaging (SPI) enables an invisible target to be imaged onto a photosensitive surface without a lens, emerging as a promising route to indirect optical encryption. However, due to its linear and broadcast imaging principles, SPI encryption has long been confined to a single-user framework. We propose a multi-image SPI encryption method and combine it with orthogonal frequency division multiplexing-assisted key management to achieve a multiuser SPI encryption and authentication framework. Multiple images are first encrypted as a composite intensity sequence containing the plaintexts and authentication information, simultaneously generating different sets of keys for users. Then, the SPI keys for encryption and authentication are asymmetrically isolated into independent frequency carriers and encapsulated into a Malus metasurface, so as to establish an individually private and content-independent channel for each user. Users can receive different plaintexts privately and verify their authenticity, eliminating the broadcast transparency of SPI encryption. The improved linear security is also verified by simulated attacks. By combining direct key management and indirect image encryption, our work achieves encryption and authentication functionality under a multiuser computational imaging framework, facilitating its application in optical communication, imaging, and security.
Funding: 2024 Education and Teaching Reform Research Project of Hainan Normal University (hsjg2024-04).
Abstract: Introduction to Computer Science, as one of the fundamental courses in computer-related majors, plays an important role in the cultivation of computer professionals. However, traditional teaching models and content can no longer fully meet the needs of modern information technology development. In response to these issues, this article introduces the concept of computational creative thinking, optimizes course content, adopts exploratory teaching methods, and innovates course assessment methods, aiming to comprehensively enhance students' computational thinking and innovative abilities. Continuously improving and promoting this teaching model will help raise computer education in universities to a new level.
Abstract: The missile interception problem can be regarded as a two-person zero-sum differential game, which depends on the solution of the Hamilton-Jacobi-Isaacs (HJI) equation. It has been proved impossible to obtain a closed-form solution due to the nonlinearity of the HJI equation, and many iterative algorithms have been proposed to solve it. The simultaneous policy updating algorithm (SPUA) is an effective algorithm for solving the HJI equation, but it is an on-policy integral reinforcement learning (IRL) method. For online implementation of SPUA, the disturbance signals would need to be adjustable, which is unrealistic. In this paper, an off-policy IRL algorithm based on SPUA is proposed that does not require any knowledge of the system dynamics. A neural-network-based online adaptive-critic implementation scheme of the off-policy IRL algorithm is then presented. Based on the online off-policy IRL method, a computational intelligence interception guidance (CIIG) law is developed for intercepting high-maneuvering targets. As a model-free method, it achieves interception by measuring system data online. The effectiveness of the CIIG law is verified through two missile-target engagement scenarios.
Funding: 2024 Key Project of Teaching Reform Research and Practice in Higher Education in Henan Province, "Exploration and Practice of a Training Model for Outstanding Students in the Basic Mechanics Discipline" (2024SJGLX094); the Henan Province "Mechanics+X" Basic Discipline Outstanding Student Training Base; and the 2024 Research and Practice Project of Higher Education Teaching Reform at Henan University of Science and Technology, "Optimization and Practice of an Ability-Oriented Teaching Mode for the Computational Mechanics Course: A New Exploration in Cultivating Practical Simulation Engineers" (2024BK074).
Abstract: Taking the assessment and evaluation of a computational mechanics course as its background, this paper constructs a diversified, student-centered course evaluation system that integrates both quantitative and qualitative evaluation methods. The system not only attends to students' practical skills and mastery of theoretical knowledge but also places special emphasis on cultivating students' innovative abilities. To realize a comprehensive and objective evaluation, an assessment method combining TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) multi-attribute decision analysis with entropy-weight theory is adopted, and its validity and practicability are verified through an example analysis. This method can comprehensively and objectively evaluate students' learning outcomes and provide a scientific basis for curriculum teaching reform. The implementation of this diversified course evaluation system can better reflect students' comprehensive abilities and promote the continuous improvement of teaching quality.
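The entropy-weight TOPSIS procedure follows a standard recipe: derive objective criterion weights from the entropy of each criterion column, then rank alternatives by their relative closeness to the ideal solution. The sketch below is a generic implementation, not the paper's exact scheme (it assumes all criteria are benefit-type, and the score matrix is invented for illustration).

```python
import math

def entropy_weights(matrix):
    # Entropy method: low-entropy (more discriminating) criteria get more weight
    m = len(matrix)
    weights = []
    for col in zip(*matrix):
        total = sum(col)
        p = [x / total for x in col]
        e = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        weights.append(1.0 - e)
    s = sum(weights)
    return [w / s for w in weights]

def topsis_scores(matrix, weights):
    # Vector-normalize, weight, then score by closeness to the ideal solution
    norms = [math.sqrt(sum(x * x for x in col)) for col in zip(*matrix)]
    v = [[w * x / n for x, w, n in zip(row, weights, norms)] for row in matrix]
    best = [max(col) for col in zip(*v)]   # all criteria treated as benefits
    worst = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, best)))
        d_worst = math.sqrt(sum((a - b) ** 2 for a, b in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Rows: students; columns: invented criteria (e.g. theory, practice, innovation)
M = [[9.0, 8.0, 9.0], [5.0, 6.0, 5.0], [1.0, 2.0, 1.0]]
weights = entropy_weights(M)
scores = topsis_scores(M, weights)
```

A student who dominates on every criterion receives a closeness score of 1, and one dominated on every criterion receives 0, so the ranking matches intuition on this toy matrix.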
Abstract: In blind quantum computation (BQC), a client with weak quantum computation capabilities is allowed to delegate its quantum computation tasks to a server with powerful quantum computation capabilities, while the inputs, algorithms, and outputs of the computation remain confidential to the server. Verifiability refers to the ability of the client to verify, with a certain probability, whether the server has executed the protocol correctly; it can be realized by introducing trap qubits into the computation graph state to detect server deception. The existing verifiable universal BQC protocols are analyzed and compared in detail. The XTH protocol (proposed by Xu Q S, Tan X Q, and Huang R in 2020), a recent improvement of verifiable universal BQC, uses a sandglass-like graph state to further decrease resource expenditure and enhance verification capability. However, the XTH protocol has two shortcomings: limitations in its coloring scheme and a high probability of accepting an incorrect computation result. In this paper, we present an improved version of the XTH protocol, which removes the limitations of the original coloring scheme and further improves the verification ability. The analysis demonstrates that the resource expenditure is the same as for the XTH protocol, while the probability of accepting a wrong computation result is reduced from the original minimum of (0.866)^(d*) to (0.819)^(d*), where d* is the number of repeated executions of the protocol.
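The quoted improvement compounds with repetition: if each execution accepts a wrong result independently with probability `base`, then d* repetitions accept with probability base^(d*). A one-line check (the exponent value 20 is an arbitrary illustration) shows how quickly the gap between the two protocols grows.

```python
def accept_wrong(base, d):
    # Probability of accepting a wrong result after d independent repetitions
    return base ** d

xth = accept_wrong(0.866, 20)       # original XTH bound
improved = accept_wrong(0.819, 20)  # improved bound from this paper
```

At d* = 20 the acceptance probability of a wrong result falls from roughly 5.6% to below 2%, i.e. the improved base nearly triples the effective verification strength for the same number of repetitions.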
Abstract: To support the explosive growth of Information and Communications Technology (ICT), Mobile Edge Computing (MEC) provides users with low-latency, high-bandwidth service by offloading computational tasks to the network's edge. However, resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications. To address the difficulty of running computationally intensive applications on resource-constrained clients, this paper studies a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers. A user benefit function, EoU (Experience of Users), is then proposed that jointly considers energy consumption and time delay. The EoU maximization problem is decomposed into two steps, i.e., resource allocation and offloading decision. The offloading decision is usually given by heuristic algorithms, which often face the challenges of slow convergence and poor stability. Thus, a combined offloading algorithm, a Gini coefficient-based adaptive genetic algorithm (GCAGA), is proposed to alleviate this dilemma. The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates convergence with the Gini coefficient. The simulation compares the proposed algorithm with the genetic algorithm (GA) and the adaptive genetic algorithm (AGA). Experimental results show that the Gini coefficient and the adaptive heuristic operators accelerate convergence, and the proposed algorithm converges better while obtaining a higher EoU. The simulation code of the proposed algorithm is available at: https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
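How a Gini coefficient can steer an adaptive genetic algorithm is easy to sketch. The adaptation rule below is a hypothetical illustration, not GCAGA itself (see the linked repository for the actual algorithm): the Gini coefficient of the population's fitness values measures how unequal they are, and when they become nearly equal (Gini near 0) the population has likely converged, so the mutation rate is raised to restore diversity.

```python
def gini(values):
    # Gini coefficient of a non-negative sample:
    # 0 when all values are equal, approaching 1 when one value dominates.
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    if total == 0:
        return 0.0
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2.0 * weighted) / (n * total) - (n + 1.0) / n

def adaptive_mutation_rate(fitnesses, p_min=0.01, p_max=0.20):
    # Hypothetical adaptation rule: near-equal fitnesses (low Gini) suggest
    # premature convergence, so mutate more; diverse fitnesses, mutate less.
    g = gini(fitnesses)
    return p_max - (p_max - p_min) * g
```

Coupling operator probabilities to a population-diversity statistic in this way is what lets an adaptive GA escape the slow convergence and instability attributed to plain heuristics above.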