Journal Articles
Found 245,693 articles
1. Computational Experiments for Complex Social Systems: Experiment Design and Generative Explanation (Cited by 2)
Authors: Xiao Xue, Deyu Zhou, Xiangning Yu, Gang Wang, Juanjuan Li, Xia Xie, Lizhen Cui, Fei-Yue Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, No. 4, pp. 1022-1038 (17 pages).
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiments (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by modeling an artificial society; 2) Interpretative module: selecting a factorial experimental design to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
Keywords: Agent-based modeling; computational experiments; cyber-physical-social systems (CPSS); generative deduction; generative experiments; meta-model
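The three-module workflow described in this abstract can be illustrated with a toy full-factorial experiment over a stand-in agent-based model. The model, factor names, and levels below are illustrative assumptions, not taken from the paper:

```python
import itertools
import random
import statistics

def artificial_society(reward_rate, rider_count, seed=0):
    """Toy stand-in for an agent-based crowd-sourcing model:
    returns a macro response (mean deliveries per rider)."""
    rng = random.Random(seed)
    effort = [min(1.0, rng.random() + reward_rate) for _ in range(rider_count)]
    return statistics.mean(e * 10 for e in effort)

# Descriptive module: influencing factors and their levels.
factors = {"reward_rate": [0.1, 0.5, 0.9], "rider_count": [50, 200]}

# Interpretative module: full-factorial design over all factor combinations.
design = list(itertools.product(*factors.values()))
results = {combo: artificial_society(*combo) for combo in design}

# Predictive module would fit a meta-model (e.g., a regression) to `results`.
for combo, response in results.items():
    print(combo, round(response, 2))
```

The full factorial here has 3 x 2 = 6 runs; a real study would replace the stand-in model with the artificial society and fit a surrogate meta-model to the response table.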
2. Hypergraph Computation
Authors: Yue Gao, Shuyi Ji, Xiangmin Han, Qionghai Dai. Engineering (SCIE, EI, CAS, CSCD), 2024, No. 9, pp. 188-201 (14 pages).
Practical real-world scenarios such as the Internet, social networks, and biological networks present the challenges of data scarcity and complex correlations, which limit the applications of artificial intelligence. The graph structure is a typical tool used to formulate such correlations; however, it is incapable of modeling high-order correlations among different objects in systems and thus cannot fully convey the intricate correlations among objects. Confronted with these two challenges, hypergraph computation models high-order correlations among data, knowledge, and rules through hyperedges and leverages these high-order correlations to enhance the data. Additionally, hypergraph computation achieves collaborative computation using data and high-order correlations, thereby offering greater modeling flexibility. In particular, we introduce three types of hypergraph computation methods: 1) hypergraph structure modeling, 2) hypergraph semantic computing, and 3) efficient hypergraph computing. We then specify how to adopt hypergraph computation in practice by focusing on specific tasks such as three-dimensional (3D) object recognition, revealing that hypergraph computation can reduce the data requirement by 80% while achieving comparable performance, or improve the performance by 52% given the same data, compared with a traditional data-based method. A comprehensive overview of the applications of hypergraph computation in diverse domains, such as intelligent medicine and computer vision, is also provided. Finally, we introduce an open-source deep learning library, DeepHypergraph (DHG), which can serve as a tool for the practical usage of hypergraph computation.
Keywords: High-order correlation; hypergraph structure modeling; hypergraph semantic computing; efficient hypergraph computing; hypergraph computation framework
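The core idea of hypergraph structure modeling is that a hyperedge may join any number of vertices, capturing high-order correlations an ordinary graph cannot. The DHG library mentioned in the abstract provides such structures at scale; the following is only a minimal, dependency-free sketch with made-up vertices:

```python
# A hypergraph as a vertex-hyperedge incidence structure: each hyperedge
# may join any number of vertices, unlike the pairwise edges of a graph.
vertices = ["v0", "v1", "v2", "v3", "v4"]
hyperedges = [
    {"v0", "v1", "v2"},   # one 3-way correlation
    {"v2", "v3"},         # an ordinary pairwise edge is a special case
    {"v0", "v3", "v4"},
]

# Incidence matrix H: H[i][j] = 1 iff vertex i belongs to hyperedge j.
H = [[1 if v in e else 0 for e in hyperedges] for v in vertices]

# Vertex degree = number of hyperedges containing it;
# hyperedge degree = number of vertices it joins.
vertex_degree = [sum(row) for row in H]
edge_degree = [sum(col) for col in zip(*H)]
print(vertex_degree)  # [2, 1, 2, 2, 1]
print(edge_degree)    # [3, 2, 3]
```

Hypergraph neural networks propagate signals along exactly this incidence structure (vertex to hyperedge to vertex), which is how high-order correlations enhance the data.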
3. Joint computation offloading and parallel scheduling to maximize delay-guarantee in cooperative MEC systems
Authors: Mian Guo, Mithun Mukherjee, Jaime Lloret, Lei Li, Quansheng Guan, Fei Ji. Digital Communications and Networks (SCIE, CSCD), 2024, No. 3, pp. 693-705 (13 pages).
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted, and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desired paradigm to timely process the data from IoT for value maximization. In MEC, a number of computing-capable devices are deployed at the network edge near data sources to support edge computing, such that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading is significantly important considering the cooperation among edge devices. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might provide different computation delays to the offloaded tasks. Thus, offloading in mobile nodes and scheduling in the MEC server are coupled to determine service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading delay broadcast mechanism, DGCO and RLPS cooperate to achieve the goal of delay-guarantee-ratio maximization. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks. Even under a slightly heavy task load, the delay-guarantee-ratio given by DGCO-RLPS can still approximate 95%, while that given by benchmark algorithms is reduced to an intolerable value. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantee in MEC.
Keywords: Edge computing; computation offloading; parallel scheduling; mobile-edge cooperation; delay guarantee
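The delay-greedy idea behind offloading decisions can be sketched generically: pick the target (local or cooperating edge server) with the smallest estimated completion delay, and accept only if the task's deadline is met. The server fields, numbers, and names below are illustrative assumptions; the paper's DGCO algorithm and its broadcast mechanism are more involved:

```python
# Generic delay-greedy offloading decision (illustrative sketch only).
def estimated_delay(task_cycles, task_bits, server):
    transmit = task_bits / server["uplink_bps"]            # 0 for local execution
    compute = (server["backlog_cycles"] + task_cycles) / server["cpu_hz"]
    return transmit + compute

def delay_greedy_offload(task, servers):
    best = min(servers, key=lambda s: estimated_delay(task["cycles"], task["bits"], s))
    delay = estimated_delay(task["cycles"], task["bits"], best)
    # Accept the greedy choice only if it can still meet the deadline.
    return (best["name"], delay) if delay <= task["deadline"] else (None, delay)

servers = [
    {"name": "local", "cpu_hz": 1e9, "backlog_cycles": 5e8, "uplink_bps": float("inf")},
    {"name": "edge1", "cpu_hz": 8e9, "backlog_cycles": 2e9, "uplink_bps": 1e7},
    {"name": "edge2", "cpu_hz": 4e9, "backlog_cycles": 0.0, "uplink_bps": 1e7},
]
task = {"cycles": 2e9, "bits": 1e6, "deadline": 1.0}
print(delay_greedy_offload(task, servers))
```

Here the local CPU is too backlogged (2.5 s), while either edge server finishes in 0.6 s including the uplink transfer, so the task is offloaded.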
4. Secure and Efficient Outsourced Computation in Cloud Computing Environments
Authors: Varun Dixit, Davinderjit Kaur. Journal of Software Engineering and Applications, 2024, No. 9, pp. 750-762 (13 pages).
Secure and efficient outsourced computation in cloud computing environments is crucial for ensuring data confidentiality, integrity, and resource optimization. In this research, we propose novel algorithms and methodologies to address these challenges. Through a series of experiments, we evaluate the performance, security, and efficiency of the proposed algorithms in real-world cloud environments. Our results demonstrate the effectiveness of homomorphic encryption-based secure computation, secure multiparty computation, and trusted execution environment-based approaches in mitigating security threats while ensuring efficient resource utilization. Specifically, our homomorphic encryption-based algorithm exhibits encryption times ranging from 20 to 1000 milliseconds and decryption times ranging from 25 to 1250 milliseconds for payload sizes varying from 100 KB to 5000 KB. Furthermore, our comparative analysis against state-of-the-art solutions reveals the strengths of our proposed algorithms in terms of security guarantees, encryption overhead, and communication latency.
Keywords: Secure computation; cloud computing; homomorphic encryption; secure multiparty computation; resource optimization
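The homomorphic property that outsourced computation relies on can be shown with a toy example. Textbook RSA with tiny primes is multiplicatively homomorphic: the cloud can multiply two ciphertexts without seeing the plaintexts. This is an illustration only (not the paper's scheme, and unpadded RSA with small parameters is not secure in practice):

```python
# Toy demonstration: Enc(a) * Enc(b) mod n decrypts to a * b.
p, q = 61, 53
n = p * q                      # modulus (3233)
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent via modular inverse

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

a, b = 7, 12
# The cloud multiplies ciphertexts without learning a or b...
c_prod = (enc(a) * enc(b)) % n
# ...and the client decrypts the product of the plaintexts.
print(dec(c_prod))  # 84
```

Additively homomorphic schemes (e.g., Paillier) and fully homomorphic lattice schemes extend the same idea to sums and arbitrary circuits, which is what makes general outsourced computation on encrypted payloads possible.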
5. EG-STC: An Efficient Secure Two-Party Computation Scheme Based on Embedded GPU for Artificial Intelligence Systems
Authors: Zhenjiang Dong, Xin Ge, Yuehua Huang, Jiankuo Dong, Jiang Xu. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4021-4044 (24 pages).
This paper presents a comprehensive exploration of the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computation on an embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5-6 times compared to solutions running on server-grade CPUs. Our solution also exhibited a reduced runtime, requiring only 60% to 70% of the runtime of the previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
Keywords: Secure two-party computation; embedded GPU acceleration; privacy-preserving machine learning; edge computing
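A standard building block behind secure two-party multiplication (and hence matrix multiplication) is additive secret sharing with Beaver triples. The single-process sketch below shows only the arithmetic; EG-STC's actual protocol, communication, and GPU kernels are not described in the abstract, so treat this as a generic illustration:

```python
import random

Q = 2**31 - 1  # shares live in Z_Q

def share(x):
    """Split x into two additive shares modulo Q."""
    r = random.randrange(Q)
    return (r, (x - r) % Q)

def reconstruct(s):
    return (s[0] + s[1]) % Q

def beaver_mul(x, y, a, b, c):
    """x, y: share pairs of the secrets; (a, b, c): dealer-supplied
    Beaver triple with reconstruct(c) == reconstruct(a) * reconstruct(b) % Q."""
    # Each party masks its shares; the masked values e and f are opened.
    e = (x[0] - a[0] + x[1] - a[1]) % Q
    f = (y[0] - b[0] + y[1] - b[1]) % Q
    # Local share computation; only party 0 adds the public e*f term.
    z0 = (c[0] + e * b[0] + f * a[0] + e * f) % Q
    z1 = (c[1] + e * b[1] + f * a[1]) % Q
    return (z0, z1)

# Dealer phase: generate a correlated random triple.
a_val, b_val = random.randrange(Q), random.randrange(Q)
a, b, c = share(a_val), share(b_val), share(a_val * b_val % Q)

x, y = share(1234), share(5678)
z = beaver_mul(x, y, a, b, c)
print(reconstruct(z))  # 7006652
```

Secure matrix multiplication repeats this per element (or per tile), which is exactly the throughput-critical kernel that an embedded GPU can batch.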
6. From the perspective of experimental practice: High-throughput computational screening in photocatalysis
Authors: Yunxuan Zhao, Junyu Gao, Xuanang Bian, Han Tang, Tierui Zhang. Green Energy & Environment (SCIE, EI, CAS, CSCD), 2024, No. 1, pp. 1-6 (6 pages).
Photocatalysis, a critical strategy for harvesting sunlight to address energy demand and environmental concerns, is underpinned by the discovery of high-performance photocatalysts; accordingly, how to design photocatalysts is now generating widespread interest for boosting the conversion efficiency of solar energy. In the past decade, computational technologies and theoretical simulations have led to a major leap in the development of high-throughput computational screening strategies for novel high-efficiency photocatalysts. In this viewpoint, we start by introducing the challenges of photocatalysis from the view of experimental practice, especially the inefficiency of the traditional "trial and error" method. Subsequently, a cross-sectional comparison between experimental and high-throughput computational screening for photocatalysis is presented and discussed in detail. On the basis of current experimental progress in photocatalysis, we also exemplify the various challenges associated with high-throughput computational screening strategies. Finally, we offer a preferred high-throughput computational screening procedure for photocatalysts from an experimental practice perspective (model construction and screening, standardized experiments, assessment and revision), with the aim of better correlating high-throughput simulations with experimental practices and motivating the search for better descriptors.
Keywords: Photocatalysis; high-throughput computational screening; photocatalyst; theoretical simulations; experiments
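Operationally, a high-throughput screening procedure is a funnel: compute descriptors for many candidates, then keep only those passing every criterion. The candidate records, descriptor values, and thresholds below are hypothetical placeholders (a real workflow would pull them from DFT calculations), but the funnel logic is the point:

```python
# Sketch of a descriptor-based screening funnel for water-splitting
# photocatalysts (hypothetical data and thresholds; illustration only).
candidates = [
    {"formula": "TiO2", "band_gap_eV": 3.2, "cbm_eV": -0.3, "vbm_eV": 2.9},
    {"formula": "CdS",  "band_gap_eV": 2.4, "cbm_eV": -0.5, "vbm_eV": 1.9},
    {"formula": "Si",   "band_gap_eV": 1.1, "cbm_eV": -0.2, "vbm_eV": 0.9},
]

criteria = [
    # Visible-light absorption needs a moderate band gap.
    lambda m: 1.23 <= m["band_gap_eV"] <= 3.0,
    # Band edges must straddle the water redox potentials (0 and 1.23 V vs. NHE).
    lambda m: m["cbm_eV"] < 0.0 and m["vbm_eV"] > 1.23,
]

survivors = [m["formula"] for m in candidates if all(c(m) for c in criteria)]
print(survivors)  # ['CdS']
```

The viewpoint's closing recommendation, standardized experiments feeding back into assessment and revision, amounts to re-tuning exactly these descriptor thresholds against measured performance.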
7. A fast forward computational method for nuclear measurement using volumetric detection constraints
Authors: Qiong Zhang, Lin-Lv Lin. Nuclear Science and Techniques (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 47-63 (17 pages).
Owing to the complex lithology of unconventional reservoirs, field interpreters usually need to provide a basis for interpretation using logging simulation models. Among the various detection tools that use nuclear sources, the detector response can reflect various types of information about the medium. The Monte Carlo method is one of the primary methods used to obtain nuclear detection responses in complex environments. However, it requires a computational process with extensive random sampling, consumes considerable resources, and does not provide real-time response results. Therefore, a novel fast forward computational method (FFCM) for nuclear measurement is proposed that uses volumetric detection constraints to rapidly calculate the detector response in various complex environments. First, the data library required for the FFCM is built by collecting the detection volume, detector counts, and flux sensitivity functions through Monte Carlo simulation. Then, based on perturbation theory and the Rytov approximation, a model for the detector response is derived using the flux sensitivity function method and a one-group diffusion model. The environmental perturbation is constrained to optimize the model according to the tool structure and the impact of the formation and borehole within the effective detection volume. Finally, the method is applied to a neutron porosity tool for verification. In various complex simulation environments, the maximum relative error between the porosity results calculated by Monte Carlo and by the FFCM was 6.80%, with a root-mean-square error of 0.62 p.u. In field well applications, the formation porosity model obtained using the FFCM was in good agreement with the model obtained by interpreters, which demonstrates the validity and accuracy of the proposed method.
Keywords: Nuclear measurement; fast forward computation; volumetric constraints
8. Computational fluid dynamics modeling of rapid pyrolysis of solid waste magnesium nitrate hydrate under different injection methods
Authors: Wenchang Wu, Kefan Yu, Liang Zhao, Hui Dong. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 224-237 (14 pages).
This study developed a numerical model to efficiently treat solid waste magnesium nitrate hydrate through multi-step chemical reactions. The model simulates two-phase flow, heat, and mass transfer processes in a pyrolysis furnace to improve the decomposition rate of magnesium nitrate. The performance of multi-nozzle and single-nozzle injection methods was evaluated, and the effects of primary and secondary nozzle flow ratios, velocity ratios, and secondary nozzle inclination angles on the decomposition rate were investigated. Results indicate that multi-nozzle injection has a higher conversion efficiency and decomposition rate than single-nozzle injection, with a 10.3% higher conversion rate under the design parameters. The decomposition rate is primarily dependent on the average residence time of particles, which can be increased by decreasing the flow and velocity ratios and increasing the inclination angle of the secondary nozzles. The optimal parameters are an injection flow ratio of 40%, an injection velocity ratio of 0.6, and a secondary nozzle inclination of 30°, corresponding to a maximum decomposition rate of 99.33%.
Keywords: Multi-nozzle; computational fluid dynamics; thermal decomposition reaction; pyrolysis furnace
9. Flow Field Characteristics of Multi-Trophic Artificial Reef Based on Computational Fluid Dynamics
Authors: HUANG Junlin, LI Jiao, LI Yan, GONG Pihai, GUAN Changtao, XIA Xu. Journal of Ocean University of China (CAS, CSCD), 2024, No. 2, pp. 317-327 (11 pages).
On the basis of computational fluid dynamics, the flow field characteristics of multi-trophic artificial reefs, including the flow field distribution features of a single reef under three different velocities and the effect of spacing between reefs on flow scale and flow state, were analyzed. Results indicate upwelling, slow flow, and eddies around a single reef. The maximum velocity, height, and volume of upwelling in front of a single reef were positively correlated with inflow velocity. The length and volume of slow flow increased with increasing inflow velocity. Eddies were present both inside and behind the reef, and vorticity was positively correlated with inflow velocity. Spacing between reefs had a minor influence on the maximum velocity and height of upwelling. With the increase in spacing from 0.5 L to 1.5 L (L is the reef length), the length of slow flow in front of and behind the combined reefs increased slightly. When the spacing was 2.0 L, the length of the slow flow decreased. In all four spacings, eddies were present inside and behind each reef. The maximum vorticity was negatively correlated with spacing from 0.5 L to 1.5 L, but at 2.0 L spacing, the maximum vorticity was close to that of a single reef under the same inflow velocity.
Keywords: Artificial reef; flow field characteristics; computational fluid dynamics; multi-trophic structure
10. GCAGA: A Gini Coefficient-Based Optimization Strategy for Computation Offloading in Multi-User-Multi-Edge MEC System
Authors: Junqing Bai, Qiuchao Dai, Yingying Li. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 5083-5103 (21 pages).
To support the explosive growth of Information and Communications Technology (ICT), Mobile Edge Computing (MEC) provides users with low-latency and high-bandwidth service by offloading computational tasks to the network's edge. However, resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications. To address the difficulty of running computationally intensive applications on resource-constrained clients, a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers is studied in this paper. A user benefit function, EoU (Experience of Users), is then proposed that jointly considers energy consumption and time delay. The EoU maximization problem is decomposed into two steps, i.e., resource allocation and offloading decision. The offloading decision is usually given by heuristic algorithms, which often face the challenge of slow convergence and poor stability. Thus, a combined offloading algorithm, i.e., a Gini coefficient-based adaptive genetic algorithm (GCAGA), is proposed to alleviate this dilemma. The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates convergence with the Gini coefficient. The simulation compares the proposed algorithm with the genetic algorithm (GA) and adaptive genetic algorithm (AGA). Experimental results show that the Gini coefficient and the adaptive heuristic operators can accelerate the convergence speed, and the proposed algorithm performs better in terms of convergence while obtaining higher EoU. The simulation code of the proposed algorithm is available at: https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
Keywords: Mobile edge computing; multi-user-multi-edge; joint optimization; Gini coefficient; adaptive genetic algorithm
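The Gini coefficient that gives GCAGA its name measures inequality of a distribution (0 means all values equal, approaching 1 means maximally unequal). In a genetic algorithm it can quantify how spread out population fitness is, for example to adapt operator rates; exactly how GCAGA applies it is not detailed in the abstract, so only the metric itself is sketched here:

```python
def gini(values):
    """Gini coefficient of a list of nonnegative values."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if total == 0:
        return 0.0
    # Equivalent closed form: G = sum_i (2i - n - 1) * x_i / (n * sum(x)),
    # with 1-based index i over the sorted values.
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * total)

uniform_fitness = [10, 10, 10, 10]   # converged population: no diversity
skewed_fitness = [1, 1, 1, 37]       # one dominant individual
print(gini(uniform_fitness))         # 0.0
print(round(gini(skewed_fitness), 3))  # 0.675
```

A rising Gini value signals that a few individuals dominate the population, which is a natural trigger for raising mutation rates to restore diversity.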
11. Effect of solvent on the initiation mechanism of living anionic polymerization of styrene: A computational study
Authors: Shen Li, Yin-Ning Zhou, Zhong-Xin Liu, Zheng-Hong Luo. Chinese Journal of Chemical Engineering (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 135-142 (8 pages).
For living anionic polymerization (LAP), the solvent has a great influence on both the reaction mechanism and kinetics. In this work, using the classical butyl lithium-styrene polymerization as a model system, the effect of solvent on the mechanism and kinetics of LAP was revealed through a strategy combining density functional theory (DFT) calculations and kinetic modeling. In terms of mechanism, detailed energy decomposition analysis of the electrostatic interactions between initiator and solvent molecules shows that the stronger the solvent polarity, the more electrons transfer from the initiator to the solvent. Furthermore, we also found that the stronger the solvent polarity, the higher the monomer initiation energy barrier and the smaller the initiation rate coefficient. Counterintuitively, initiation is more favorable at lower temperatures based on the calculated ΔG_(TS) results. Finally, the kinetic characteristics in different solvents were further examined by kinetic modeling. It is found that in benzene and n-pentane, the polymerization rate exhibits first-order kinetics, while slow initiation and fast propagation were observed in tetrahydrofuran (THF) due to the slow free-ion formation rate, leading to a deviation from first-order kinetics.
Keywords: Living anionic polymerization; solvent effect; reaction kinetics; computational chemistry; mathematical modeling; kinetic modeling
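The contrast between the two kinetic regimes in this abstract can be sketched with a minimal initiation/propagation model integrated by explicit Euler. The rate constants below are hypothetical, not the paper's DFT-derived values; the point is only that fast initiation gives first-order monomer decay while slow initiation makes the decay accelerate as active chains accumulate:

```python
# Minimal LAP kinetics sketch: I = initiator, P = active chains, M = monomer.
# dI/dt = -k_init*I*M, dP/dt = +k_init*I*M, dM/dt = -(k_init*I + k_prop*P)*M.
def simulate(k_init, k_prop, I0=0.01, M0=1.0, dt=0.001, steps=5000):
    I, P, M = I0, 0.0, M0
    history = []
    for _ in range(steps):
        rate_init = k_init * I * M
        I -= rate_init * dt
        P += rate_init * dt
        M -= (rate_init + k_prop * P * M) * dt
        history.append(M)
    return history

fast = simulate(k_init=1000.0, k_prop=5.0)  # benzene-like: near-instant initiation
slow = simulate(k_init=0.05, k_prop=5.0)    # THF-like regime: slow chain build-up

# With fast initiation, M decays first-order (equal ratios over equal windows);
# with slow initiation, the decay rate grows over time as P accumulates.
print(fast[-1] < slow[-1])  # True: fast initiation consumes more monomer
```

Plotting ln[M] versus time for the two runs reproduces the qualitative signature described in the abstract: a straight line for benzene/n-pentane and upward curvature for THF-like slow initiation.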
12. Two-Stage IoT Computational Task Offloading Decision-Making in MEC with Request Holding and Dynamic Eviction
Authors: Dayong Wang, Kamalrulnizam Bin Abu Bakar, Babangida Isyaku. Computers, Materials & Continua (SCIE, EI), 2024, No. 8, pp. 2065-2080 (16 pages).
The rapid development of Internet of Things (IoT) technology has led to a significant increase in the computational task load of Terminal Devices (TDs). TDs reduce response latency and energy consumption with the support of task offloading in Multi-access Edge Computing (MEC). However, existing task-offloading optimization methods typically assume that MEC's computing resources are unlimited, and there is a lack of research on the optimization of task offloading when MEC resources are exhausted. In addition, existing solutions only decide whether to accept an offloaded task request based on the single decision result of the current time slot, but lack support for multiple retries in subsequent time slots, resulting in TDs missing potential offloading opportunities in the future. To fill this gap, we propose a Two-Stage Offloading Decision-making Framework (TSODF) with request holding and dynamic eviction. Long Short-Term Memory (LSTM)-based task-offloading request prediction and MEC resource release estimation are integrated to infer the probability of a request being accepted in the subsequent time slot. The framework continuously learns optimized decision-making experiences to increase the success rate of task offloading based on deep learning technology. Simulation results show that TSODF reduces the TDs' total energy consumption and task-execution delay and improves the task offloading rate and system resource utilization compared to the benchmark method.
Keywords: Decision making; Internet of Things; load prediction; task offloading; multi-access edge computing
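The hold-and-retry mechanic can be sketched independently of the learning component: a rejected request is held for the next slot when the predicted acceptance probability is high enough, and evicted when its deadline passes or the outlook is poor. The capacity model, threshold, and the stub standing in for the LSTM predictor below are illustrative assumptions:

```python
from collections import deque

def predicted_accept_prob(slot):
    """Stub for the LSTM-based estimate of free MEC capacity next slot."""
    return 0.2 if slot < 2 else 0.9

def run_slots(requests, total_slots, evict_below=0.1):
    held = deque(requests)            # requests waiting for a (re)try
    accepted, evicted = [], []
    for slot in range(total_slots):
        capacity = 1                  # toy MEC: one accepted task per slot
        for _ in range(len(held)):
            req = held.popleft()
            if slot > req["deadline_slot"]:
                evicted.append(req["id"])    # dynamic eviction: too late
            elif capacity > 0:
                accepted.append(req["id"])   # offload succeeds this slot
                capacity -= 1
            elif predicted_accept_prob(slot + 1) >= evict_below:
                held.append(req)             # hold and retry next slot
            else:
                evicted.append(req["id"])    # outlook too poor: evict now
    return accepted, evicted

reqs = [{"id": i, "deadline_slot": 3} for i in range(3)]
print(run_slots(reqs, total_slots=4))  # ([0, 1, 2], [])
```

Under a single-slot decision policy, two of the three requests above would simply be rejected in slot 0; holding them lets all three succeed within their deadlines.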
13. Numerical Analysis of Bacterial Meningitis Stochastic Delayed Epidemic Model through Computational Methods
Authors: Umar Shafique, Mohamed Mahyoub Al-Shamiri, Ali Raza, Emad Fadhal, Muhammad Rafiq, Nauman Ahmed. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 10, pp. 311-329 (19 pages).
According to the World Health Organization (WHO), meningitis is a severe infection of the meninges, the membranes covering the brain and spinal cord. It is a devastating disease and remains a significant public health challenge. This study investigates a bacterial meningitis model through deterministic and stochastic versions. Four-compartment population dynamics explain the concept, namely the susceptible, carrier, infected, and recovered populations. The model predicts the nonnegative equilibrium points and reproduction number, i.e., the Meningitis-Free Equilibrium (MFE) and Meningitis-Existing Equilibrium (MEE). For the stochastic version of the existing deterministic model, the two methodologies studied are transition probabilities and non-parametric perturbations. Positivity, boundedness, extinction, and disease persistence are also studied rigorously with the help of well-known theorems. Standard and nonstandard techniques such as Euler-Maruyama, stochastic Euler, stochastic Runge-Kutta, and stochastic nonstandard finite difference in the sense of delay have been presented for computational analysis of the stochastic model. Unfortunately, the standard methods fail to preserve the biological properties of the model, so the stochastic nonstandard finite difference approximation is offered as an efficient, low-cost method that is independent of time step size. In addition, the convergence and the local and global stability around the equilibria of the nonstandard computational method are studied by assuming the perturbation effect is zero. Simulations and a comparison of the methods are presented to support the theoretical results and for the best visualization of results.
Keywords: Bacterial meningitis disease; stochastic delayed model; stability analysis; extinction and persistence; computational methods
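Euler-Maruyama, the baseline scheme named in the abstract, discretizes a stochastic differential equation by adding a Gaussian Brownian increment to each Euler step. The one-dimensional SIS-type infection fraction and its parameters below are illustrative, not the paper's four-compartment delayed model:

```python
import math
import random

# dI = I * (beta * (1 - I) - gamma) dt + sigma * I dW
def euler_maruyama(I0, beta, gamma, sigma, dt, steps, seed=42):
    rng = random.Random(seed)
    I = I0
    path = [I]
    for _ in range(steps):
        dW = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment ~ N(0, dt)
        I = I + I * (beta * (1 - I) - gamma) * dt + sigma * I * dW
        path.append(I)
    return path

path = euler_maruyama(I0=0.1, beta=0.8, gamma=0.3, sigma=0.1, dt=0.01, steps=2000)
# With beta > gamma, the infected fraction drifts toward the endemic
# level 1 - gamma/beta = 0.625 and then fluctuates around it.
print(round(path[-1], 3))
```

The abstract's criticism of such standard schemes is visible when parameters are pushed harder: large steps or strong noise can drive the state negative, which is exactly the biological property (positivity) that the nonstandard finite difference scheme is designed to preserve.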
14. Computational Fluid Dynamics Approach for Predicting Pipeline Response to Various Blast Scenarios: A Numerical Modeling Study
Authors: Farman Saifi, Mohd Javaid, Abid Haleem, S.M. Anas. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 9, pp. 2747-2777 (31 pages).
Recent industrial explosions globally have intensified the focus in mechanical engineering on designing infrastructure systems and networks capable of withstanding blast loading. Initially centered on high-profile facilities such as embassies and petrochemical plants, this concern now extends to a wider array of infrastructures and facilities. Engineers and scholars increasingly prioritize structural safety against explosions, particularly to prevent disproportionate collapse and damage to nearby structures. Urbanization has further amplified the reliance on oil and gas pipelines, making them vital for urban life and prime targets for terrorist activities. Consequently, there is a growing imperative for computational engineering solutions to tackle blast loading on pipelines and mitigate associated risks to avert disasters. In this study, an empty pipe model was successfully validated under contact blast conditions using Abaqus software, a powerful tool in mechanical engineering for simulating blast effects on buried pipelines. Employing a Eulerian-Lagrangian computational fluid dynamics approach, the investigation extended to above-surface and below-surface blasts at standoff distances of 25 and 50 mm. Material descriptions in the numerical model relied on Abaqus's default mechanical models. Comparative analysis revealed varying pipe performance, with deformation decreasing as the explosion-to-pipe distance increased. The explosion's location relative to the pipe surface notably influenced deformation levels, a key finding of the study. Moreover, quantitative findings indicated varying ratios of plastic dissipation energy (PDE) for different blast scenarios compared to the contact blast (P0). Specifically, P1 (25 mm subsurface blast) and P2 (50 mm subsurface blast) showed approximately 24.07% and 14.77% of P0's PDE, respectively, while P3 (25 mm above-surface blast) and P4 (50 mm above-surface blast) exhibited lower PDE values, accounting for about 18.08% and 9.67% of P0's PDE, respectively. Utilising energy-absorbing materials on the pipeline, such as thin coatings of ultra-high-strength concrete, metallic foams, carbon fiber-reinforced polymer wraps, and others, is recommended to effectively mitigate blast damage. This research contributes to the advancement of mechanical engineering by providing insights and solutions crucial for enhancing the resilience and safety of underground pipelines in the face of blast events.
Keywords: Blast loading; computational fluid dynamics; computer modeling; pipe networks; response prediction; structural safety
15. Computation Offloading in Edge Computing for Internet of Vehicles via Game Theory
Authors: Jianhua Liu, Jincheng Wei, Rongxin Luo, Guilin Yuan, Jiajia Liu, Xiaoguang Tu. Computers, Materials & Continua (SCIE, EI), 2024, No. 10, pp. 1337-1361 (25 pages).
With the rapid advancement of Internet of Vehicles (IoV) technology, the demands for real-time navigation, advanced driver-assistance systems (ADAS), vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communications, and multimedia entertainment systems have made in-vehicle applications increasingly computing-intensive and delay-sensitive. These applications require significant computing resources, which can overwhelm the limited computing capabilities of vehicle terminals, despite advancements in computing hardware, due to the complexity of tasks, energy consumption, and cost constraints. To address this issue in IoV-based edge computing, particularly in scenarios where available computing resources in vehicles are scarce, a multi-master, multi-slave double-layer game model is proposed, based on task-offloading and pricing strategies. The existence of a Nash equilibrium of the game is proven, and a distributed artificial bee colony algorithm is employed to reach the game equilibrium. Our proposed solution addresses these bottlenecks by leveraging a game-theoretic approach for task offloading and resource allocation in mobile edge computing (MEC)-enabled IoV environments. Simulation results demonstrate that the proposed scheme outperforms existing solutions in terms of convergence speed and system utility. Specifically, the total revenue achieved by our scheme surpasses that of other algorithms by at least 8.98%.
Keywords: edge computing; Internet of Vehicles; resource allocation; game theory; artificial bee colony algorithm
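The search step of the abstract above, a distributed artificial bee colony search over a vehicle's offload fraction, can be sketched as follows. The cost model and every parameter (task size, CPU speeds, link rate, unit price) are illustrative assumptions, not values from the paper:

```python
import random

def vehicle_cost(x, task_bits=8e6, f_local=2e8, f_mec=8e9, rate=2e7, price=0.01):
    """Delay-plus-payment cost of offloading a fraction x of one task.
    Local and offloaded parts run in parallel, so delay is the max of the two."""
    local_delay = (1 - x) * task_bits / f_local
    offload_delay = x * task_bits / rate + x * task_bits / f_mec
    return max(local_delay, offload_delay) + price * x   # fee set by the server

def abc_minimize(cost, n_bees=10, iters=200, seed=0):
    """Minimal artificial bee colony search over the offload fraction in [0, 1]."""
    rng = random.Random(seed)
    foods = [rng.random() for _ in range(n_bees)]        # candidate fractions
    for _ in range(iters):
        for i in range(n_bees):                          # employed-bee step
            partner = rng.randrange(n_bees)
            cand = foods[i] + rng.uniform(-1, 1) * (foods[i] - foods[partner])
            cand = min(1.0, max(0.0, cand))
            if cost(cand) < cost(foods[i]):              # greedy selection
                foods[i] = cand
        worst = max(range(n_bees), key=lambda j: cost(foods[j]))
        foods[worst] = rng.random()                      # scout step
    return min(foods, key=cost)

x_star = abc_minimize(vehicle_cost)
print(f"offload fraction ~ {x_star:.3f}, cost {vehicle_cost(x_star):.4f} s")
```

In the actual scheme each MEC server (master) also adjusts its price in an outer game layer; the sketch fixes the price and shows only one slave's best response.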
Computation Tree Logic Model Checking of Multi-Agent Systems Based on Fuzzy Epistemic Interpreted Systems
16
Authors: Xia Li, Zhanyou Ma, Zhibao Mian, Ziyuan Liu, Ruiqi Huang, Nana He. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 4129-4152 (24 pages)
Model checking is an automated formal verification method for checking whether epistemic multi-agent systems adhere to property specifications. Although there is an extensive literature on qualitative properties such as safety and liveness, verification of quantitative and uncertain properties of these systems is still lacking. In uncertain environments, agents must make judicious decisions based on subjective epistemic states. To verify epistemic and measurable properties in multi-agent systems, this paper extends fuzzy computation tree logic with epistemic modalities, yielding a new Fuzzy Computation Tree Logic of Knowledge (FCTLK). We represent fuzzy multi-agent systems as distributed knowledge bases with fuzzy epistemic interpreted systems. In addition, we provide a transformation algorithm from fuzzy epistemic interpreted systems to fuzzy Kripke structures, as well as transformation rules from FCTLK formulas to Fuzzy Computation Tree Logic (FCTL) formulas. Accordingly, we reduce the FCTLK model checking problem to FCTL model checking, which enables verification of FCTLK formulas with the fuzzy model checking algorithm of FCTL at no additional computational cost. Finally, we present correctness proofs and complexity analyses of the proposed algorithms, and we illustrate the practical application of the approach with an example of a train control system.
Keywords: model checking; multi-agent systems; fuzzy epistemic interpreted systems; fuzzy computation tree logic; transformation algorithm
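To make the FCTL target of the reduction concrete, here is a toy fuzzy Kripke structure evaluated with Zadeh (min/max) semantics for the EX and EG modalities. The states, transition degrees, and labels are invented for illustration; the paper's FCTLK-to-FCTL translation of the knowledge modalities is not reproduced:

```python
from typing import Dict, Tuple

# Hypothetical fuzzy Kripke structure: transition degrees and labels in [0, 1].
STATES = ["s0", "s1", "s2"]
R: Dict[Tuple[str, str], float] = {
    ("s0", "s1"): 0.8, ("s0", "s2"): 0.3,
    ("s1", "s2"): 0.9, ("s2", "s2"): 1.0,
}
LABEL = {"safe": {"s0": 0.9, "s1": 0.6, "s2": 0.2}}

def ex(val: Dict[str, float]) -> Dict[str, float]:
    """EX phi: best successor, taking min of edge degree and successor value."""
    return {s: max((min(R.get((s, t), 0.0), val[t]) for t in STATES), default=0.0)
            for s in STATES}

def eg(val: Dict[str, float], iters: int = 10) -> Dict[str, float]:
    """EG phi as the greatest fixpoint of v = min(phi, EX v), iterated to stability."""
    v = dict(val)
    for _ in range(iters):
        exv = ex(v)
        v = {s: min(val[s], exv[s]) for s in STATES}
    return v

print(ex(LABEL["safe"]))  # degree to which some successor is "safe"
print(eg(LABEL["safe"]))  # degree to which "safe" can hold forever
```

On this structure every EG degree collapses to 0.2 because all paths are eventually trapped in s2, whose "safe" degree is 0.2.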
Delay-optimal multi-satellite collaborative computation offloading supported by OISL in LEO satellite network
17
Authors: ZHANG Tingting, GUO Zijian, LI Bin, FENG Yuan, FU Qi, HU Mingyu, QU Yunbo. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, Issue 4, pp. 805-814 (10 pages)
By exploiting the ubiquitous and reliable coverage of low Earth orbit (LEO) satellite networks connected by optical inter-satellite links (OISL), computation offloading services can be provided to users who lack proximal servers, although the limited computation and storage resources on satellites are the main factors affecting the maximum task completion time. In this paper, we study a delay-optimal multi-satellite collaborative computation offloading scheme that lets satellites actively migrate tasks among themselves over the high-speed OISLs, so that tasks with long queuing delays are served as quickly as possible by idle computation resources in the neighborhood. To meet the requirements of delay-sensitive tasks, we first propose a deadline-aware task scheduling scheme in which a priority model sorts the order in which tasks are served according to their deadlines, and we then derive a delay-optimal collaborative offloading scheme in which tasks that cannot be completed locally are migrated to other, idle satellites. Simulation results demonstrate the effectiveness of the multi-satellite collaborative computation offloading strategy in reducing task completion time and improving resource utilization in the LEO satellite network.
Keywords: low Earth orbit (LEO) satellite network; computation offloading; task migration; resource allocation
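The deadline-aware priority model can be illustrated with a small earliest-deadline-first sketch in which a task that would miss its deadline locally is pushed over one OISL hop to an idle neighbor. Task sizes, CPU speeds, and the 20 ms link delay are made-up numbers, and the paper's queueing model is richer than this:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline: float                       # seconds; also the EDF priority key
    name: str = field(compare=False)
    cycles: float = field(compare=False)  # required CPU cycles

def schedule(tasks, sat_speed=1e9, neighbor_speed=1e9, oisl_delay=0.02):
    """Serve tasks earliest-deadline-first; migrate a task over one OISL hop
    to an idle neighbor when it would miss its deadline locally."""
    heap = list(tasks)
    heapq.heapify(heap)
    t_local = t_neighbor = 0.0
    plan = []
    while heap:
        task = heapq.heappop(heap)
        finish_local = t_local + task.cycles / sat_speed
        if finish_local <= task.deadline:
            t_local = finish_local
            plan.append((task.name, "local", finish_local))
        else:
            start = max(t_neighbor, oisl_delay)       # arrives after one hop
            t_neighbor = start + task.cycles / neighbor_speed
            plan.append((task.name, "migrated", t_neighbor))
    return plan

tasks = [Task(0.05, "A", 3e7), Task(0.09, "C", 3e7), Task(0.06, "B", 4e7)]
plan = schedule(tasks)
print(plan)  # B cannot finish locally by 60 ms, so it is migrated
```

Task B would finish locally at 70 ms, past its 60 ms deadline, so the scheduler ships it to the neighbor, where it completes in time despite the hop delay.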
End-to-end computational design for an EUV solar corona multispectral imager with stray light suppression
18
Authors: Jinming Gao, Yue Sun, Yinxu Bian, Jilong Peng, Qian Yu, Cuifang Kuang, Xiangzhao Wang, Xu Liu, Xiangqun Cui. Astronomical Techniques and Instruments (CSCD), 2024, Issue 1, pp. 31-41 (11 pages)
An extreme ultraviolet (EUV) solar corona multispectral imager allows direct observation of high-temperature coronal plasma, which is associated with solar flares, coronal mass ejections, and other significant coronal activity. This manuscript proposes a novel end-to-end computational design method for an EUV solar corona multispectral imager operating at wavelengths near 100 nm, covering both stray light suppression and computational image recovery. To suppress the strong stray light from the solar disk, an outer opto-mechanical structure is designed to shield the imaging component of the system. Given the low reflectivity (less than 70%) and strong scattering (roughness) of existing EUV optical elements, the imaging component comprises only a primary mirror and a curved grating, and a Lyot aperture further suppresses any residual stray light. Finally, a deep-learning computational imaging method recovers the individual multi-wavelength images from the raw multi-slit data. The design achieves a far-field angular resolution below 7" and a spectral resolution below 0.05 nm. The field of view is ±3 R_☉ along the multi-slit moving direction, where R_☉ denotes the radius of the solar disk. The ratio of the corona's stray light intensity to the solar center's irradiation intensity is less than 10^-6 at the circle of 1.3 R_☉.
Keywords: EUV solar corona imager; curved grating; stray light suppression; computational multispectral imaging
Outage Analysis of Optimal UAV Cooperation with IRS via Energy Harvesting Enhancement Assisted Computational Offloading
19
Authors: Baofeng Ji, Ying Wang, Weixing Wang, Shahid Mumtaz, Charalampos Tsimenidis. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 2, pp. 1885-1905 (21 pages)
The use of mobile edge computing (MEC) for unmanned aerial vehicle (UAV) communication is a viable way to achieve high-reliability, low-latency communication. This study explores employing intelligent reflective surfaces (IRS) and UAVs as relay nodes to efficiently offload user computing tasks to the MEC server. Specifically, the user node accesses the primary user's spectrum while respecting the primary user's peak interference power constraint. Furthermore, the UAV harvests energy without interrupting the primary user's regular communication, using one of two energy harvesting schemes: time switching (TS) or power splitting (PS). The optimal UAV is selected by maximizing the instantaneous signal-to-noise ratio. An analytical expression for the outage probability of the system over Rayleigh channels is then derived and analyzed. The impact of system parameters, including the number of UAVs, the peak interference power, and the TS and PS factors, on outage performance is investigated through simulation. The proposed system is also compared with two conventional benchmark schemes, optimal UAV link transmission and IRS link transmission. The simulation results validate the theoretical derivation and demonstrate the superiority of the proposed scheme over the benchmarks.
Keywords: unmanned aerial vehicle (UAV); intelligent reflective surface (IRS); energy harvesting; computational offloading; outage probability
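The Rayleigh-fading outage analysis can be sanity-checked with a short Monte Carlo sketch of best-UAV selection: under Rayleigh fading each link's instantaneous SNR is exponentially distributed, and the relay with the highest SNR is chosen. The average SNR and threshold are arbitrary; the paper's closed-form expressions additionally account for the IRS link, the peak interference constraint, and the TS/PS harvesting factors:

```python
import random

def simulate_outage(n_uav, snr_avg=10.0, snr_th=2.0, trials=20000, seed=1):
    """Monte Carlo outage estimate for best-relay selection over i.i.d.
    Rayleigh links: outage occurs when even the best SNR is below snr_th."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        best = max(rng.expovariate(1.0 / snr_avg) for _ in range(n_uav))
        outages += best < snr_th
    return outages / trials

p1 = simulate_outage(n_uav=1)
p4 = simulate_outage(n_uav=4)
# Analytic check for i.i.d. links: P_out = (1 - exp(-snr_th / snr_avg)) ** n_uav
print(f"outage with 1 UAV: {p1:.4f}, with 4 UAVs: {p4:.4f}")
```

With four candidate relays the simulated outage drops by roughly two orders of magnitude versus a single relay, matching the analytic expression in the comment.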
Mechanism of Universal Quantum Computation in the Brain
20
Authors: Aman Chawla, Salvatore Domenic Morgera. Journal of Applied Mathematics and Physics, 2024, Issue 2, pp. 468-474 (7 pages)
In this paper, the authors extend [1] and provide more detail on how the brain may act like a quantum computer. In particular, positing the difference between the voltages on two axons as the environment for ions undergoing spatial superposition, we argue that evolution in the presence of metric perturbations will differ from evolution in their absence. This differential state evolution then encodes the information being processed by the tract, through the interaction of the quantum state of the ions at the nodes with the "controlling" potential. Upon decoherence, which is equivalent to a measurement, the final spatial state of the ions is decided, and the state is reset by the next impulse initiation time. Under synchronization, several tracts undergo such processes in synchrony, completing the picture of a quantum computing circuit. Under this model, based on the number of axons in the corpus callosum alone, we estimate that upwards of 50 million quantum states might be prepared and evolved every second in this white matter tract, far more processing than any present quantum computer can accomplish.
Keywords: axons; quantum computation; metric perturbation; decoherence; time-coded information