Journal Articles
6,865 articles found
1. Clinical and imaging analysis of 17 patients with otosclerosis
Authors: 魏建初, 张敏, 何云生, 胡先芳. 《中国耳鼻咽喉头颈外科》 CSCD, 2024, Issue 4, pp. 266-268 (3 pages)
Objective: To analyze the clinical features, imaging findings, and surgical outcomes of 17 patients with otosclerosis. Methods: Clinical and imaging data were collected for 17 patients (17 ears) with otosclerosis who underwent surgery at Huzhou Central Hospital between May 2020 and May 2023; clinical presentation, temporal bone CT findings, and pre- and postoperative pure-tone audiometry were analyzed. Results: Of the 17 ears, 11 (64.71%) were of the fenestral (oval window) type, presenting with progressive conductive hearing loss; 9 of these had tinnitus and 3 had mild balance problems. The remaining 6 ears (35.29%) were of the mixed type, showing mixed hearing loss; 5 had tinnitus and 3 had occasional balance disturbance. On imaging, the fenestral type showed bony thickening in the stapes footplate region and increased density at the oval window; the mixed type showed footplate thickening with reduced pericochlear density, and a "double-ring sign" was seen in 1 ear. Bone density did not differ significantly between the two types (P>0.05). Postoperatively, no worsened tinnitus or other serious complications were observed; only 2 patients (11.76%) had transient dizziness, and follow-up showed good healing of the surgical site with no infection or other delayed complications. Postoperative audiometry showed significant improvement in air-conduction thresholds and the air-bone gap compared with preoperative values (all P<0.05). Conclusion: Fenestral and mixed otosclerosis each have characteristic imaging and clinical features; stapedoplasty significantly improves hearing, lowering air-conduction thresholds and the air-bone gap, and is safe with few complications.
Keywords: Otosclerosis; Signs and Symptoms; Temporal Bone; Tomography, X-Ray Computed; Audiometry, Pure-Tone; Auditory Threshold
2. Edge computing offloading based on high-altitude platforms: networks, algorithms, and prospects
Authors: 孙恩昌, 李梦思, 何若兰, 张卉, 张延华. 《北京工业大学学报》 CAS CSCD 北大核心, 2024, Issue 3, pp. 348-361 (14 pages)
Combining high altitude platform (HAP) technology with multi-access edge computing (MEC) extends MEC server deployment from the ground into the air, overcoming the limitations of traditional terrestrial MEC networks and providing users with ubiquitous computation offloading services. This paper surveys research on HAP-based MEC offloading. First, HAP-based MEC networks are analyzed from four aspects: the advantages of HAP computing nodes, network components, network architecture, and the main challenges together with their countermeasure technologies. Second, HAP-based MEC offloading algorithms are analyzed horizontally and compared vertically from the perspectives of graph theory, game theory, machine learning, and federated learning. Finally, open problems in HAP-based MEC offloading are identified and future research directions are outlined.
Keywords: high altitude platform (HAP); multi-access edge computing (MEC); computation offloading; graph theory; game theory; machine learning
3. Prediction of the thermal conductivity of Mg–Al–La alloys by CALPHAD method (Cited: 1)
Authors: Hongxia Li, Wenjun Xu, Yufei Zhang, Shenglan Yang, Lijun Zhang, Bin Liu, Qun Luo, Qian Li. 《International Journal of Minerals, Metallurgy and Materials》 SCIE EI CSCD, 2024, Issue 1, pp. 129-137 (9 pages)
Mg-Al alloys have excellent strength and ductility but relatively low thermal conductivity due to Al addition. The accurate prediction of thermal conductivity is a prerequisite for designing Mg-Al alloys with high thermal conductivity. Thus, databases for predicting temperature- and composition-dependent thermal conductivities must be established. In this study, Mg-Al-La alloys with different contents of Al_(2)La, Al_(3)La, and Al_(11)La_(3) phases and solid solubility of Al in the α-Mg phase were designed. The influence of the second phase(s) and Al solid solubility on thermal conductivity was investigated. Experimental results revealed a second-phase transformation from Al_(2)La to Al_(3)La and further to Al_(11)La_(3) with increasing Al content at a constant La amount. The degree of the negative effect of the second phase(s) on thermal diffusivity followed the sequence Al_(2)La > Al_(3)La > Al_(11)La_(3). Compared with the second phase, an increase in the solid solubility of Al in α-Mg remarkably reduced the thermal conductivity. On the basis of the experimental data, a database of the reciprocal thermal diffusivity of the Mg-Al-La system was established by the calculation of phase diagram (CALPHAD) method. With a standard error of ±1.2 W/(m·K), the predicted results were in good agreement with the experimental data. The established database can be used to design Mg-Al alloys with high thermal conductivity and provide valuable guidance for expanding their application prospects.
Keywords: magnesium alloy; thermal conductivity; thermodynamic calculations; materials computation
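The conductivity values reported above follow from the measured thermal diffusivity via the standard relation k = αρC_p. As a hedged sketch of the kind of CALPHAD-type composition model used for such property databases (the Redlich-Kister mixture form below is an assumption for illustration, not the paper's exact parameterization):

```latex
\[
k = \alpha\,\rho\,C_p, \qquad
\frac{1}{\alpha_{\phi}} = \sum_i x_i\,\frac{1}{\alpha_i}
  + \sum_i \sum_{j>i} x_i x_j \sum_{n\ge 0} {}^{n}L_{i,j}\,(x_i - x_j)^{n}
\]
```

Here α_φ is the thermal diffusivity of phase φ, x_i are component fractions, and the {}^{n}L_{i,j} are fitted (typically temperature-dependent) interaction parameters.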
4. Computational Experiments for Complex Social Systems: Experiment Design and Generative Explanation (Cited: 1)
Authors: Xiao Xue, Deyu Zhou, Xiangning Yu, Gang Wang, Juanjuan Li, Xia Xie, Lizhen Cui, Fei-Yue Wang. 《IEEE/CAA Journal of Automatica Sinica》 SCIE EI CSCD, 2024, Issue 4, pp. 1022-1038 (17 pages)
Powered by advanced information technology, more and more complex systems are exhibiting characteristics of cyber-physical-social systems (CPSS). In this context, the computational experiments method has emerged as a novel approach for the design, analysis, management, control, and integration of CPSS, which can realize the causal analysis of complex systems by means of the "algorithmization" of "counterfactuals". However, because CPSS involve human and social factors (e.g., autonomy, initiative, and sociality), it is difficult for traditional design of experiment (DOE) methods to achieve a generative explanation of system emergence. To address this challenge, this paper proposes an integrated approach to the design of computational experiments, incorporating three key modules: 1) Descriptive module: determining the influencing factors and response variables of the system by means of the modeling of an artificial society; 2) Interpretative module: selecting a factorial experimental design solution to identify the relationship between influencing factors and macro phenomena; 3) Predictive module: building a meta-model that is equivalent to the artificial society to explore its operating laws. Finally, a case study of crowd-sourcing platforms is presented to illustrate the application process and effectiveness of the proposed approach, which can reveal the social impact of algorithmic behavior on the "rider race".
Keywords: agent-based modeling; computational experiments; cyber-physical-social systems (CPSS); generative deduction; generative experiments; meta-model
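The interpretative module above relies on factorial experimental design. A minimal sketch of a full factorial design over hypothetical influencing factors of an artificial crowd-sourcing society (factor names and levels are illustrative, not from the paper):

```python
from itertools import product

# Hypothetical influencing factors and their levels.
factors = {
    "rider_count": [100, 500, 1000],
    "dispatch_algo": ["greedy", "batched"],
    "pay_per_order": [3.0, 5.0],
}

# Full factorial design: one experimental run per combination of levels.
design = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for run_id, setting in enumerate(design):
    # A real study would execute the agent-based model here and record the
    # response variables (e.g., mean delivery delay); printed as a placeholder.
    print(run_id, setting)
```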
5. IRS Assisted UAV Communications against Proactive Eavesdropping in Mobile Edge Computing Networks (Cited: 1)
Authors: Ying Zhang, Weiming Niu, Leibing Yan. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, Issue 1, pp. 885-902 (18 pages)
In this paper, we consider mobile edge computing (MEC) networks against proactive eavesdropping. To maximize the transmission rate, IRS assisted UAV communications are applied. We jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain the closed-form solution for the boundary conditions of the two modes. Then we transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero forcing (ZF) based method as a sub-optimal solution, and the simulation section shows that the two proposed schemes obtain better performance than traditional schemes.
Keywords: mobile edge computing (MEC); unmanned aerial vehicle (UAV); intelligent reflecting surface (IRS); zero forcing (ZF)
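Alternating optimization fixes all but one block of variables and solves each block-subproblem in turn until the objective stops improving. A generic sketch with a toy two-variable objective (the paper's actual subproblems over the UAV trajectory, user beamformers, and IRS phase shifts are far more involved):

```python
def alternating_optimization(blocks, objective, solvers, tol=1e-9, max_iter=100):
    """Generic AO loop: solvers[name] returns the minimizer of that block
    with all other blocks held fixed."""
    prev = objective(blocks)
    for _ in range(max_iter):
        for name, solve in solvers.items():
            blocks[name] = solve(blocks)
        cur = objective(blocks)
        if abs(prev - cur) < tol:   # monotone decrease implies convergence
            break
        prev = cur
    return blocks

# Toy demo: minimize (x - 2y)^2 + (y - 1)^2 by alternating over x and y.
objective = lambda b: (b["x"] - 2 * b["y"]) ** 2 + (b["y"] - 1) ** 2
solvers = {
    "x": lambda b: 2 * b["y"],            # argmin over x with y fixed
    "y": lambda b: (2 * b["x"] + 1) / 5,  # argmin over y with x fixed
}
print(alternating_optimization({"x": 0.0, "y": 0.0}, objective, solvers))
```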
6. Airfoil and array optimization for power enhancement of vertical axis wind turbines
Authors: 雷鸣, 方辉. 《中国海洋平台》, 2024, Issue 2, pp. 12-18, 47 (8 pages)
For an H-type vertical axis wind turbine (VAWT), computational fluid dynamics (CFD) simulations were used to link airfoil design with turbine array layout, comparing the torque coefficient C_(m), power coefficient C_(P), and average power parameter Ω of the VAWT across multiple airfoils and array configurations. The results show that, compared with symmetric airfoils, asymmetric airfoils have a lower power coefficient at high tip-speed ratios, and the camber effect can significantly increase the airfoil's power coefficient in the downwind region. Within a wind farm array, optimization of a three-turbine array markedly improves downwind turbine power: single-turbine power can rise by 40% and overall farm power by about 20%. A five-turbine array proposed for the structural layout of a marine ranch raises overall farm efficiency by 65% after optimization, with single-turbine performance gains of up to 100%. These results support improving the function and design of deep-sea net cage systems.
Keywords: vertical axis wind turbine; computational fluid dynamics (CFD); simulation; airfoil; wind farm array
7. ATSSC: An Attack Tolerant System in Serverless Computing
Authors: Zhang Shuai, Guo Yunfei, Hu Hongchao, Liu Wenyan, Wang Yawen. 《China Communications》 SCIE CSCD, 2024, Issue 6, pp. 192-205 (14 pages)
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming. With serverless computing, developers only provide function code to the serverless platform, and these functions are invoked by their driving events. Nonetheless, security threats in serverless computing, such as vulnerability-based attacks, have become a pain point hindering its wide adoption. Proactive defense ideas such as redundancy, diversity, and dynamism offer promising approaches to protect against cyberattacks. However, these security technologies are mostly applied to serverless platforms in a "stacked" mode, as they were designed independently of serverless computing. The lack of security consideration in the initial design makes it especially challenging to achieve whole-life-cycle protection for serverless applications at limited cost. In this paper, we present ATSSC, a proactive-defense-enabled attack tolerant serverless platform. ATSSC seamlessly integrates redundancy, diversity, and dynamism into serverless computing to achieve high security and efficiency. Specifically, ATSSC constructs multiple diverse function replicas to process the driving events and performs cross-validation to verify the results. To create diverse function replicas, both software diversity and environment diversity are adopted. Furthermore, a dynamic function refresh strategy is proposed to keep serverless functions in a clean state. We implement ATSSC based on Kubernetes and Knative. Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks at acceptable cost.
Keywords: active defense; attack tolerant; cloud computing; security; serverless computing
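The heart of the cross-validation step described above is comparing the outputs of diverse function replicas and accepting only a majority-agreed result. A minimal sketch (the replica functions and the quorum threshold are illustrative, not ATSSC's actual implementation):

```python
from collections import Counter

def cross_validate(replicas, event, quorum=2):
    """Invoke diverse replicas of one serverless function on the same event
    and return the majority result; no quorum flags a possible compromise."""
    results = [replica(event) for replica in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes < quorum:
        raise RuntimeError("no quorum -- possible compromised replica")
    return value

# Three diverse replicas of the same function (in ATSSC the diversity would
# come from different runtimes, library versions, and environments).
replicas = [lambda e: e * 2, lambda e: e + e, lambda e: 2 * e]
print(cross_validate(replicas, 21))  # -> 42
```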
8. Static Analysis Techniques for Fixing Software Defects in MPI-Based Parallel Programs
Authors: Norah Abdullah Al-Johany, Sanaa Abdullah Sharaf, Fathy Elbouraey Eassa, Reem Abdulaziz Alnanih. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 5, pp. 3139-3173 (35 pages)
The Message Passing Interface (MPI) is a widely accepted standard for parallel computing on distributed memory systems. However, MPI implementations can contain defects that impact the reliability and performance of parallel applications. Detecting and correcting these defects is crucial, yet there is a lack of published models specifically designed for correcting MPI defects. To address this, we propose a model for detecting and correcting MPI defects (DC_MPI), which aims to detect and correct defects in various types of MPI communication, including blocking point-to-point (BPTP), nonblocking point-to-point (NBPTP), and collective communication (CC). The defects addressed by the DC_MPI model include illegal MPI calls, deadlocks (DL), race conditions (RC), and message mismatches (MM). To assess the effectiveness of the DC_MPI model, we performed experiments on a dataset consisting of 40 MPI codes. The results indicate that the model achieved a detection rate of 37 out of 40 codes, resulting in an overall detection accuracy of 92.5%. Additionally, the execution duration of the DC_MPI model ranged from 0.81 to 1.36 s. These findings show that the DC_MPI model is useful in detecting and correcting defects in MPI implementations, thereby enhancing the reliability and performance of parallel applications. The DC_MPI model fills an important research gap and provides a valuable tool for improving the quality of MPI-based parallel computing systems.
Keywords: high-performance computing; parallel computing; software engineering; software defect; message passing interface; deadlock
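The classic blocking point-to-point (BPTP) deadlock that such models target arises when two ranks both issue a blocking send before any receive. A sketch using mpi4py (run with `mpiexec -n 2`); reordering the calls on one rank is one standard correction, shown here as an illustration rather than the DC_MPI model's actual output:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
peer = 1 - rank
data = [rank] * 1024 * 1024  # large messages typically exceed the eager
                             # threshold, so a blocking send cannot buffer

# Defective pattern: both ranks block inside send(), neither reaches recv():
#   comm.send(data, dest=peer)
#   received = comm.recv(source=peer)

# One correction: order the calls so that one rank receives first.
if rank == 0:
    comm.send(data, dest=peer)
    received = comm.recv(source=peer)
else:
    received = comm.recv(source=peer)
    comm.send(data, dest=peer)
```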
9. Complementary memtransistors for neuromorphic computing: How, what and why
Authors: Qi Chen, Yue Zhou, Weiwei Xiong, Zirui Chen, Yasai Wang, Xiangshui Miao, Yuhui He. 《Journal of Semiconductors》 EI CAS CSCD, 2024, Issue 6, pp. 64-80 (17 pages)
Memtransistors, in which the source-drain channel conductance can be nonvolatilely manipulated through gate signals, have emerged as promising components for implementing neuromorphic computing. On the other side, it is known that complementary metal-oxide-semiconductor (CMOS) field effect transistors have played the fundamental role in modern integrated circuit technology. Therefore, will complementary memtransistors (CMT) also play such a role in future neuromorphic circuits and chips? In this review, various types of materials and physical mechanisms for constructing CMT (how) are inspected, with their merits and need-to-address challenges discussed. Then the unique properties (what) and potential applications of CMT in different learning algorithms/scenarios of spiking neural networks (why) are reviewed, including the supervised rule, the reinforcement one, dynamic vision with in-sensor computing, etc. By exploiting the novel functions related to the complementary structure, significant reductions in hardware consumption, enhancement of the energy/efficiency ratio, and other advantages have been gained, illustrating the alluring prospect of design technology co-optimization (DTCO) of CMT towards neuromorphic computing.
Keywords: complementary memtransistor; neuromorphic computing; reward-modulated spike timing-dependent plasticity; remote supervised method; in-sensor computing
10. Enhanced Temporal Correlation for Universal Lesion Detection
Authors: Muwei Jian, Yue Jin, Hui Yu. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, Issue 3, pp. 3051-3063 (13 pages)
Universal lesion detection (ULD) methods for computed tomography (CT) images play a vital role in modern clinical medicine and intelligent automation. It is well known that single 2D CT slices lack the spatial-temporal characteristics and contextual information of 3D CT blocks, but 3D CT blocks require significantly more hardware resources during the learning phase. Therefore, efficiently exploiting the temporal correlation and spatial-temporal features of 2D CT slices is crucial for ULD tasks. In this paper, we propose a ULD network with enhanced temporal correlation for this purpose, named TCE-Net. The designed TCE module is applied to enrich the discriminative feature representation of multiple sequential CT slices. Besides, we employ multi-scale feature maps to facilitate the localization and detection of lesions of various sizes. Extensive experiments conducted on the DeepLesion benchmark demonstrate that this method achieves 66.84% and 78.18% for FS@0.5 and FS@1.0, respectively, outperforming compared state-of-the-art methods.
Keywords: universal lesion detection; computational biology; medical computing; deep learning; enhanced temporal correlation
11. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. 《Chinese Physics B》 SCIE EI CAS CSCD, 2024, Issue 3, pp. 1-23 (23 pages)
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems demand computing power that conventional computing hardware can barely satisfy. In the post-Moore era, the increase in computing power brought about by shrinking CMOS feature sizes in very large-scale integrated circuits (VLSIC) is struggling to meet the growing demand for AI computing power. To address the issue, technical approaches like neuromorphic computing attract great attention because they break with the Von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of human neural networks, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in neuromorphic architectures like the spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNN and the artificial neuron devices supporting neuromorphic computing, then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
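The SNN architectures surveyed above are built from spiking neurons; the leaky integrate-and-fire (LIF) model is the most common choice. A minimal discrete-time sketch (all parameter values are illustrative):

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
               v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane potential decays toward v_rest,
    integrates the input, and emits a spike on crossing v_thresh."""
    v, spikes = v_rest, []
    for i_t in input_current:
        v += dt / tau * (v_rest - v) + i_t   # leak + input integration
        if v >= v_thresh:                    # threshold crossing => spike
            spikes.append(True)
            v = v_reset                      # reset after firing
        else:
            spikes.append(False)
    return spikes

spike_train = lif_neuron(np.full(100, 0.08))
print(sum(spike_train), "spikes in 100 steps")
```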
12. Task Offloading in Edge Computing Using GNNs and DQN
Authors: Asier Garmendia-Orbegozo, Jose David Nunez-Gonzalez, Miguel Angel Anton. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, Issue 6, pp. 2649-2671 (23 pages)
In a network environment composed of different types of computing centers that can be divided into different layers (cloud, edge layer, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, the computation of many types of tasks is not feasible because they lack the available memory and processing capacity to support it. In this scenario, it is worth considering transferring these tasks to resource-rich platforms, such as edge data centers or remote cloud servers. Depending on the properties and state of the environment and the nature of the tasks, it is more appropriate to offload different tasks to different destinations. At the same time, establishing an optimal offloading policy that ensures all tasks are executed within the required latency while avoiding excessive workload on specific computing centers is not easy. This study presents two alternatives for solving the offloading decision problem by introducing two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Network (DQN). It applies them in a well-known edge computing simulator, PureEdgeSim, and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution; in terms of energy efficiency, they provided similar results. Finally, the success rates of different computing centers are tested, and the inability of remote cloud servers to respond to applications in real time is demonstrated. These ways of finding an offloading strategy in a local networking environment are novel in that they emulate the state and structure of the environment, considering the quality of its connections and constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. At the same time, the suitability of Reinforcement Learning (RL) techniques is demonstrated by the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Keywords: edge computing; edge offloading; fog computing; task offloading
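DQN replaces a Q-table with a neural network, but the offloading decision logic is easiest to see in tabular form. A simplified tabular Q-learning stand-in (states, actions, and the reward signal are illustrative; the paper's DQN and GNN state encodings are much richer):

```python
import random
from collections import defaultdict

ACTIONS = ["local", "edge", "cloud"]      # candidate offload destinations
Q = defaultdict(float)                    # Q[(state, action)], default 0.0
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def choose(state):
    if random.random() < EPS:             # epsilon-greedy exploration
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy episode: state is a coarse load level; reward is negative latency.
latency = {"local": 8.0, "edge": 3.0, "cloud": 5.0}
state = "low_load"
for _ in range(1000):
    a = choose(state)
    update(state, a, -latency[a] - random.random(), state)
print(max(ACTIONS, key=lambda a: Q[(state, a)]))  # typically -> "edge"
```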
13. Hybrid Approach for Cost Efficient Application Placement in Fog-Cloud Computing Environments
Authors: Abdulelah Alwabel, Chinmaya Kumar Swain. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 6, pp. 4127-4148 (22 pages)
Fog computing has recently developed as a new paradigm that aims to serve time-sensitive applications better than cloud computing by placing and processing tasks close to the data sources. However, most fog nodes in this environment are geographically scattered, with resources that are limited compared to cloud nodes, making the application placement problem more complex than in cloud computing. Cost-efficient application placement in fog-cloud environments combines the benefits of both fog and cloud computing to optimize the placement of applications and services while minimizing costs, which is particularly relevant in scenarios where latency, resource constraints, and cost are crucial deployment factors. In this study, we propose a hybrid approach that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. The proposed hybrid approach, called GA-FSA, places the application modules with respect to the application's deadline and deploys them appropriately to fog or cloud nodes to curtail the overall cost of the system. An extensive simulation is conducted to assess the performance of the proposed approach against other state-of-the-art approaches. The results demonstrate that the GA-FSA approach is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
Keywords: placement mechanism; application module placement; fog computing; cloud computing; genetic algorithm; flamingo search algorithm
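A sketch of the GA half of such a hybrid: each individual encodes an assignment of modules to nodes, and fitness is the total placement cost. The cost matrix and operators are illustrative stand-ins for the paper's four cost types and GA-FSA operators:

```python
import random

NODES, MODULES = 5, 8                       # fog/cloud nodes, application modules
cost = [[random.uniform(1, 10) for _ in range(NODES)] for _ in range(MODULES)]

def fitness(ind):                           # lower total placement cost is better
    return sum(cost[m][node] for m, node in enumerate(ind))

def evolve(pop_size=30, generations=100, mut_rate=0.1):
    pop = [[random.randrange(NODES) for _ in range(MODULES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]    # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, MODULES)
            child = a[:cut] + b[cut:]       # one-point crossover
            if random.random() < mut_rate:  # mutation: reassign one module
                child[random.randrange(MODULES)] = random.randrange(NODES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```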
14. Exploring reservoir computing: Implementation via double stochastic nanowire networks
Authors: 唐健峰, 夏磊, 李广隶, 付军, 段书凯, 王丽丹. 《Chinese Physics B》 SCIE EI CAS CSCD, 2024, Issue 3, pp. 572-582 (11 pages)
Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors for pulse voltages, allowing time series prediction analysis. Our experiment uses a double stochastic nanowire network architecture for processing multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real-time series datasets, making neuromorphic nanowire networks promising for the physical implementation of reservoir computing.
Keywords: double-layer stochastic (DS) nanowire network architecture; neuromorphic computation; nanowire network; reservoir computing; time series prediction
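In a reservoir computing scheme, the nanowire network plays the role of a fixed, randomly connected nonlinear reservoir, and only a linear readout is trained. An echo-state-style numpy sketch of that scheme (a software analogue for illustration, not the physical device model):

```python
import numpy as np

rng = np.random.default_rng(0)
N, steps = 100, 500                        # reservoir size, time steps

W_in = rng.uniform(-0.5, 0.5, N)           # fixed random input weights
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1 for stability

u = np.sin(np.arange(steps) * 0.1)         # input series; task: predict u[t+1]

x = np.zeros(N)
states = np.empty((steps, N))
for t in range(steps):                     # fixed nonlinear reservoir dynamics
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

X, y = states[:-1], u[1:]                  # one-step-ahead training pairs
ridge = 1e-6                               # train only the linear readout
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```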
15. Online Learning-Based Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks
Authors: Tong Minglei, Li Song, Han Wanjiang, Wang Xiaoxiang. 《China Communications》 SCIE CSCD, 2024, Issue 3, pp. 230-246 (17 pages)
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, consisting of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than the baseline schemes.
Keywords: computing resource allocation; mobile edge computing; satellite-terrestrial networks; task offloading decision
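The offloading decision builds on the upper confidence bound rule: pick the target whose estimated reward plus an exploration bonus is largest. The classic UCB1 form below is a stand-in for the paper's cooperation-aided variant:

```python
import math, random

class UCB1:
    """Classic UCB1 over K offloading targets (e.g., MEC servers)."""
    def __init__(self, k):
        self.counts = [0] * k        # times each arm was selected
        self.means = [0.0] * k       # empirical mean reward per arm
        self.t = 0                   # total number of plays

    def select(self):
        self.t += 1
        for arm, n in enumerate(self.counts):
            if n == 0:               # play every arm once first
                return arm
        return max(range(len(self.counts)),
                   key=lambda a: self.means[a]
                   + math.sqrt(2 * math.log(self.t) / self.counts[a]))

    def update(self, arm, reward):   # e.g., reward = -task_completion_delay
        self.counts[arm] += 1
        self.means[arm] += (reward - self.means[arm]) / self.counts[arm]

# Toy run: three servers with different mean delays.
bandit, delays = UCB1(3), [5.0, 2.0, 3.5]
for _ in range(2000):
    arm = bandit.select()
    bandit.update(arm, -(delays[arm] + random.random()))
print(bandit.counts)  # the low-delay server dominates the selections
```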
16. Linearized waveform inversion for vertical transversely isotropic elastic media: Methodology and multi-parameter crosstalk analysis
Authors: Ke Chen, Lu Liu, Li-Nan Xu, Fei Hu, Yuan Yang, Jia-Hui Zuo, Le-Le Zhang, Yang Zhao. 《Petroleum Science》 SCIE EI CAS CSCD, 2024, Issue 1, pp. 252-271 (20 pages)
Seismic migration and inversion are closely related techniques for portraying subsurface images and identifying hydrocarbon reservoirs. Seismic migration aims at obtaining structural images of subsurface geologic discontinuities; more specifically, it estimates the reflectivity function (stacked average reflectivity or pre-stack angle-dependent reflectivity) from seismic reflection data. On the other hand, seismic inversion quantitatively estimates the intrinsic rock properties of subsurface formations. Such inversion methods are applicable to detecting hydrocarbon reservoirs that may exhibit lateral variations in the inverted parameters. Although there are many differences, pre-stack seismic migration is similar to the first iteration of general linearized seismic inversion. Usually, seismic migration and inversion techniques assume an acoustic or isotropic elastic medium, whereas unconventional reservoirs such as shale and tight sand formations have notable anisotropy. We present a linearized waveform inversion (LWI) scheme for weakly anisotropic elastic media with vertical transversely isotropic (VTI) symmetry. It is based on the two-way anisotropic elastic wave equation and simultaneously inverts for the localized perturbations (ΔVp_(0)/Vp_(0), ΔVs_(0)/Vs_(0), Δε, Δδ) from a long-wavelength reference model. Our proposed VTI-elastic LWI is an iterative method that requires a forward and an adjoint operator acting on vectors in each iteration. We derive the forward Born approximation operator by perturbation theory and the adjoint operator via the adjoint-state method. The inversion improves the quality of the images and reduces the multi-parameter crosstalk compared with adjoint-based images. We have observed that the multi-parameter crosstalk problem is more prominent in the inversion images for the Thomsen anisotropy parameters; especially, the Thomsen parameter is the most difficult to resolve. We also analyze the multi-parameter crosstalk using scattering radiation patterns. The linearized waveform inversion for VTI-elastic media presented in this article provides quantitative information on rock properties that has the potential to help identify hydrocarbon reservoirs.
Keywords: elastic anisotropy; least-squares imaging; waveform inversion; computational geophysics
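In generic linearized-inversion terms, the Born forward operator maps model perturbations to data perturbations, and the iteration uses its adjoint. A schematic statement with generic notation (not the paper's exact discretization or step-length rule):

```latex
% Linearized (Born) forward modeling:
%   \delta d = \mathbf{F}\,\delta m, \quad
%   \delta m = \bigl(\Delta V_{p0}/V_{p0},\ \Delta V_{s0}/V_{s0},\ \Delta\epsilon,\ \Delta\delta\bigr)
% Least-squares objective and gradient iteration using the adjoint operator:
\[
\min_{\delta m}\ \tfrac{1}{2}\,\bigl\lVert \mathbf{F}\,\delta m - \delta d \bigr\rVert_2^2,
\qquad
\delta m_{k+1} = \delta m_k - \alpha_k\,\mathbf{F}^{\top}\!\bigl(\mathbf{F}\,\delta m_k - \delta d\bigr)
\]
```

The first iteration from δm_0 = 0 reduces to the adjoint (migration) image F^T δd, which is the sense in which pre-stack migration resembles the first step of linearized inversion.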
17. Supersonic expansion and condensation characteristics of hydrogen gas under different temperature conditions
Authors: Xinyue Duan, Zeyu Zhang, Ziyuan Zhao, Yang Liu, Liang Gong, Xuewen Cao, Jiang Bian. 《Chinese Journal of Chemical Engineering》 SCIE EI CAS CSCD, 2024, Issue 5, pp. 220-226 (7 pages)
This paper introduces supersonic expansion liquefaction technology into the field of hydrogen liquefaction. A mathematical model for the supersonic condensation of hydrogen gas in a Laval nozzle was established, and the supersonic expansion and condensation characteristics of hydrogen under different temperature conditions were investigated. The simulation results show that the droplet number rises rapidly from 0 at the nozzle throat as the inlet temperature increases, with a maximum droplet number of 1.339×10^(18) kg^(-1) at an inlet temperature of 36.0 K. When hydrogen nucleation occurs, the droplet radius increases significantly and correlates positively with the inlet temperature; the maximum droplet radii are 6.667×10^(-8) m, 1.043×10^(-7) m, and 1.099×10^(-7) m at inlet temperatures of 36.0 K, 36.5 K, and 37.0 K, respectively. The maximum nucleation rate decreases with increasing inlet temperature, and the nucleation region of the Laval nozzle becomes wider. Liquefaction efficiency can be effectively improved by lowering the inlet temperature, because a lower inlet temperature provides more subcooling, which allows the hydrogen to reach the thermodynamic conditions required for large-scale condensation more quickly.
Keywords: hydrogen; liquefaction; supersonic; condensation; Laval nozzle; computational fluid dynamics
18. EG-STC: An Efficient Secure Two-Party Computation Scheme Based on Embedded GPU for Artificial Intelligence Systems
Authors: Zhenjiang Dong, Xin Ge, Yuehua Huang, Jiankuo Dong, Jiang Xu. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 6, pp. 4021-4044 (24 pages)
This paper presents a comprehensive exploration of the integration of the Internet of Things (IoT), big data analysis, cloud computing, and Artificial Intelligence (AI), which has led to an unprecedented era of connectivity. We delve into the emerging trend of machine learning on embedded devices, enabling tasks in resource-limited environments. However, the widespread adoption of machine learning raises significant privacy concerns, necessitating the development of privacy-preserving techniques. One such technique, secure multi-party computation (MPC), allows collaborative computations without exposing private inputs. Despite its potential, complex protocols and communication interactions hinder performance, especially on resource-constrained devices. Efforts to enhance efficiency have been made, but scalability remains a challenge. Given the success of GPUs in deep learning, leveraging embedded GPUs, such as those offered by NVIDIA, emerges as a promising solution. Therefore, we propose an Embedded GPU-based Secure Two-party Computation (EG-STC) framework for Artificial Intelligence (AI) systems. To the best of our knowledge, this work represents the first endeavor to fully implement machine learning model training based on secure two-party computation on an embedded GPU platform. Our experimental results demonstrate the effectiveness of EG-STC. On an embedded GPU with a power draw of 5 W, our implementation achieved a secure two-party matrix multiplication throughput of 5881.5 kilo-operations per millisecond (kops/ms), with an energy efficiency ratio of 1176.3 kops/ms/W. Furthermore, leveraging our EG-STC framework, we achieved an overall time acceleration ratio of 5-6 times compared to solutions running on server-grade CPUs. Our solution also exhibited reduced runtime, requiring only 60% to 70% of the runtime of the previously best-known methods on the same platform. In summary, our research contributes to the advancement of secure and efficient machine learning implementations on resource-constrained embedded devices, paving the way for broader adoption of AI technologies in various applications.
Keywords: secure two-party computation; embedded GPU acceleration; privacy-preserving machine learning; edge computing
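A common building block behind secure two-party multiplication is additive secret sharing with Beaver multiplication triples. A plain-integer sketch over a prime field (scalar values for clarity; this illustrates the general technique, not EG-STC's batched GPU kernels):

```python
import random

P = 2**61 - 1  # prime modulus for additive shares

def share(x):                       # split x into two additive shares mod P
    r = random.randrange(P)
    return r, (x - r) % P

def beaver_mul(x_sh, y_sh, triple):
    """Multiply secret-shared x and y using a preprocessed triple (a, b, c=a*b).
    Each party holds one share of x, y, a, b, c; d and e are safe to open."""
    (a0, a1), (b0, b1), (c0, c1) = triple
    d = (x_sh[0] - a0 + x_sh[1] - a1) % P    # opened value d = x - a
    e = (y_sh[0] - b0 + y_sh[1] - b1) % P    # opened value e = y - b
    z0 = (c0 + d * b0 + e * a0 + d * e) % P  # party 0's share (adds d*e once)
    z1 = (c1 + d * b1 + e * a1) % P          # party 1's share
    return z0, z1                            # z0 + z1 = x * y (mod P)

a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))
x_sh, y_sh = share(7), share(6)
print(sum(beaver_mul(x_sh, y_sh, triple)) % P)  # -> 42
```

Matrix multiplication in such frameworks applies the same idea elementwise or blockwise, which is what makes it amenable to GPU batching.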
19. Complementary comments on diagnosis, severity and prognosis prediction of acute pancreatitis
Authors: Muhsin Ozgun Ozturk, Sonay Aydin. 《World Journal of Gastroenterology》 SCIE CAS, 2024, Issue 1, pp. 108-111 (4 pages)
The radiological differential diagnosis of acute pancreatitis includes diffuse pancreatic lymphoma, diffuse autoimmune pancreatitis, and groove-located mass lesions that may mimic groove pancreatitis. Dual energy computed tomography and diffusion weighted magnetic resonance imaging are useful in the early diagnosis of acute pancreatitis, and dual energy computed tomography is also useful in severity assessment and prognosis prediction. Walled off necrosis is an important complication in terms of prognosis, and it is important to know its radiological findings and distinguish it from pseudocyst.
Keywords: acute pancreatitis; computed tomography; diffusion weighted imaging; dual energy computed tomography; walled off necrosis