Journal Articles
656 articles found
CT-CloudDetect: A Hybrid Model for Cloud Detection in Remote Sensing Satellite Imagery
1
Authors: 方巍, 陶恩屹. 《遥感信息》 (Remote Sensing Information), CSCD, Peking University Core, 2024, No. 5, pp. 1-11.
Cloud detection is the task of detecting clouds in remote sensing satellite imagery. In recent years, deep learning-based cloud detection methods have been proposed and have achieved good performance. However, most existing deep learning-based cloud detection models are still built on convolutional neural networks (CNNs), which struggle to capture long-range dependencies due to the inherent locality of the convolution operation. To address this problem, this paper proposes a hybrid cloud detection model based on CNN and Vision Transformer (ViT), together with a CNN-ViT encoder that equips the network to capture both local and global information. To better fuse features with inconsistent semantics and scales, a dual-scale attention fusion module is proposed, which selectively fuses features through an attention mechanism. In addition, a lightweight routing decoder is proposed, which reduces model complexity through a routing structure. The model was evaluated on three public cloud detection datasets, and extensive experiments show that it outperforms existing models.
Keywords: deep learning; convolutional neural network; spatial Vision Transformer; hybrid model; cloud detection
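The dual-scale attention fusion module above is described only at the level of "selectively fusing features through attention". Below is a minimal sketch of attention-weighted fusion of a local (CNN) branch and a global (ViT) branch; the channel-wise softmax gate and all names are my illustration, not the paper's actual design:

```python
import numpy as np

def dual_scale_attention_fusion(f_local, f_global):
    """Fuse two (C, H, W) feature maps with a per-channel attention gate.

    Channel descriptors from global average pooling drive a softmax
    over the two branches, deciding how much of each to keep.
    """
    d_local = f_local.mean(axis=(1, 2))            # (C,) descriptors
    d_global = f_global.mean(axis=(1, 2))          # (C,)
    logits = np.stack([d_local, d_global])         # (2, C)
    weights = np.exp(logits - logits.max(axis=0))
    weights /= weights.sum(axis=0)                 # softmax over branches
    a = weights[0][:, None, None]                  # broadcast to (C, 1, 1)
    return a * f_local + (1.0 - a) * f_global      # convex combination

rng = np.random.default_rng(0)
f1, f2 = rng.normal(size=(8, 4, 4)), rng.normal(size=(8, 4, 4))
fused = dual_scale_attention_fusion(f1, f2)
print(fused.shape)  # (8, 4, 4)
```

Because the gate is a convex combination per channel, the fused map always lies elementwise between the two inputs.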
Combining neural network-based method with heuristic policy for optimal task scheduling in hierarchical edge cloud (Cited: 1)
2
Authors: Zhuo Chen, Peihong Wei, Yan Li. Digital Communications and Networks, SCIE, CSCD, 2023, No. 3, pp. 688-697.
Deploying service nodes hierarchically at the edge of the network can effectively improve the service quality of offloaded task requests and increase the utilization of resources. In this paper, we study the task scheduling problem in the hierarchically deployed edge cloud. We first formulate the minimization of the service time of scheduled tasks in the edge cloud as a combinatorial optimization problem, and then prove the NP-hardness of the problem. Different from existing work, which mostly designs heuristic approximation-based algorithms or policies to make scheduling decisions, we propose a newly designed scheduling policy, named Joint Neural Network and Heuristic Scheduling (JNNHSP), which combines a neural network-based method with a heuristic-based solution. JNNHSP takes the Sequence-to-Sequence (Seq2Seq) model trained by Reinforcement Learning (RL) as the primary policy and adopts the heuristic algorithm as the auxiliary policy to obtain the scheduling solution, thereby achieving a good balance between the quality and the efficiency of the scheduling solution. In-depth experiments show that, compared with a variety of related policies and optimization solvers, JNNHSP achieves better performance in terms of scheduling error ratio, the degree to which the policy is affected by resource limitations, average service latency, and execution efficiency in a typical hierarchical edge cloud.
Keywords: edge cloud; task scheduling; neural network; reinforcement learning
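The primary/auxiliary policy combination can be sketched as follows, with a stub standing in for the RL-trained Seq2Seq policy. All names, the homogeneous-node assumption, and the makespan objective are illustrative assumptions, not the paper's formulation:

```python
import random

def greedy_schedule(tasks, nodes):
    """Heuristic auxiliary policy: assign each task to the node
    that would finish it earliest (homogeneous nodes assumed)."""
    load = {n: 0.0 for n in nodes}
    assignment = {}
    for task, cost in tasks.items():
        best = min(nodes, key=lambda n: load[n] + cost)
        assignment[task] = best
        load[best] += cost
    return assignment, max(load.values())

def policy_schedule(tasks, nodes, rng):
    """Stub for the learned Seq2Seq primary policy: random here."""
    assignment = {t: rng.choice(nodes) for t in tasks}
    load = {n: 0.0 for n in nodes}
    for t, n in assignment.items():
        load[n] += tasks[t]
    return assignment, max(load.values())

def joint_schedule(tasks, nodes, rng):
    """Try the primary policy; keep the heuristic answer when better."""
    primary = policy_schedule(tasks, nodes, rng)
    auxiliary = greedy_schedule(tasks, nodes)
    return min([primary, auxiliary], key=lambda s: s[1])

rng = random.Random(1)
tasks = {"t1": 3.0, "t2": 2.0, "t3": 4.0, "t4": 1.0}
assignment, makespan = joint_schedule(tasks, ["edge1", "edge2"], rng)
print(makespan)
```

The key design point mirrored here is that the learned policy's output is never trusted unconditionally: the heuristic provides a quality floor.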
Chaotic Metaheuristics with Multi-Spiking Neural Network Based Cloud Intrusion Detection
3
Authors: Mohammad Yamin, Saleh Bajaba, Zenah Mahmoud AlKubaisy. Computers, Materials & Continua, SCIE, EI, 2023, No. 3, pp. 6101-6118.
Cloud Computing (CC) provides data storage options as well as computing services to its users through the Internet. On the other hand, cloud users are concerned about security and privacy issues due to the increased number of cyberattacks. Data protection has become an important issue since users' information gets exposed to third parties. Computer networks are exposed to different types of attacks, which have grown extensively alongside novel intrusion methods and hacking tools. Intrusion Detection Systems (IDSs) can be used in a network to manage suspicious activities. These IDSs monitor the activities of the CC environment and decide whether an activity is legitimate (normal) or malicious (intrusive) based on the established system's confidentiality, availability, and integrity of the data sources. In the current study, a Chaotic Metaheuristics with Optimal Multi-Spiking Neural Network-based Intrusion Detection (CMOMSNN-ID) model is proposed to secure the cloud environment. The presented CMOMSNN-ID model involves the Chaotic Artificial Bee Colony Optimization-based Feature Selection (CABC-FS) technique to reduce the curse of dimensionality. In addition, the Multi-Spiking Neural Network (MSNN) classifier, based on a simulation of brain functioning, is applied to resolve pattern classification problems. To fine-tune the parameters of the MSNN model, the Whale Optimization Algorithm (WOA) is employed to boost the classification results. To demonstrate the superiority of the proposed CMOMSNN-ID model, a useful set of simulations was performed. The simulation outcomes infer that the proposed model accomplishes superior performance over other models, with a maximum accuracy of 99.20%.
Keywords: cloud computing security; intrusion detection; feature selection; multi-spiking neural network
Adaptive Butterfly Optimization Algorithm (ABOA)-Based Feature Selection and Deep Neural Network (DNN) for Detection of Distributed Denial-of-Service (DDoS) Attacks in Cloud
4
Authors: S. Sureshkumar, G. K. D. Prasanna Venkatesan, R. Santhosh. Computer Systems Science & Engineering, SCIE, EI, 2023, No. 10, pp. 1109-1123.
Cloud computing technology provides flexible, on-demand, and fully controlled computing resources and services, which are highly desirable. Despite this, with its distributed and dynamic nature and shortcomings in virtualization deployment, the cloud environment is exposed to a wide variety of cyber-attacks and security difficulties. The Intrusion Detection System (IDS) is a specialized security tool that network professionals use to keep networks safe from attacks launched from various sources. DDoS attacks are becoming more frequent and powerful, and their attack pathways are continually changing, which requires the development of new detection methods. The purpose of this study is to improve detection accuracy. Feature Selection (FS) is critical: by focusing on the most relevant features, the IDS's computational burden is limited while its performance and accuracy increase. In this research work, the suggested Adaptive Butterfly Optimization Algorithm (ABOA) framework is used to select an effective reduced feature subset during the feature selection phase without compromising classification accuracy. A Deep Neural Network (DNN) then categorizes network traffic into normal and DDoS threat traffic, and its parameters are fine-tuned with specially built algorithms to detect DDoS attacks better. Reduced reconstruction error, no exploding or vanishing gradients, and a smaller network are all benefits of the changes outlined in this paper. On performance criteria such as accuracy, precision, recall, and F1-score, the suggested architecture outperforms the other existing approaches. Hence the proposed ABOA+DNN is an excellent method for obtaining accurate predictions, with an improved accuracy rate of 99.05% compared to other existing approaches.
Keywords: cloud computing; distributed denial of service; intrusion detection system; adaptive butterfly optimization algorithm; deep neural network
Grid-Side Distributed Energy Storage Cloud-Group-End Region Hierarchical Time-Sharing Configuration Algorithm Based on Multi-Scale and Multi-Feature Convolution Neural Network
5
Authors: Wen Long, Bin Zhu, Huaizheng Li, Yan Zhu, Zhiqiang Chen, Gang Cheng. Energy Engineering, EI, 2023, No. 5, pp. 1253-1269.
There is instability in the distributed energy storage cloud-group-end region on the power grid side. In order to avoid large-scale fluctuating charging and discharging in the power grid environment and make the capacitor components show a continuous and stable charging and discharging state, a hierarchical time-sharing configuration algorithm for the grid-side distributed energy storage cloud-group-end region based on a multi-scale and multi-feature convolution neural network is proposed. Firstly, a voltage stability analysis model based on the multi-scale and multi-feature convolution neural network is constructed, and the network is optimized with the Self-Organizing Maps (SOM) algorithm to analyze the voltage stability of the grid-side distributed energy storage cloud-group-end region under a credibility framework. Then, according to the optimal scheduling objectives and network size, the distributed robust optimal configuration control model is solved under the framework of coordinated optimal scheduling at multiple time scales. Finally, the time series characteristics of regional power grid load and distributed generation are analyzed, and according to the regional hierarchical time-sharing configuration model of the "cloud", "group", and "end" layers, the grid-side distributed energy storage cloud-group-end regional hierarchical time-sharing configuration algorithm is realized. The experimental results show that after applying this algorithm, the best grid-side distributed energy storage configuration scheme can be determined, and the stability of the layered time-sharing configuration can be improved.
Keywords: multi-scale and multi-feature convolution neural network; grid-side distributed energy storage; cloud-group-end region; layered time-sharing configuration algorithm
A graph neural network approach to the inverse design for thermal transparency with periodic interparticle system
6
Authors: 刘斌, 王译浠. Chinese Physics B, SCIE, EI, CAS, CSCD, 2024, No. 8, pp. 295-303.
Recent years have witnessed significant advances in utilizing machine learning-based techniques for thermal metamaterial-based structures and devices to attain favorable thermal transport behaviors. Among the various thermal transport behaviors, achieving thermal transparency stands out as particularly desirable and intriguing. Our earlier work demonstrated the use of a thermal metamaterial-based periodic interparticle system as the underlying structure for manipulating thermal transport behavior and achieving thermal transparency. In this paper, we introduce an approach based on a graph neural network to address the complex inverse design problem of determining the design parameters for a thermal metamaterial-based periodic interparticle system with the desired thermal transport behavior. Our work demonstrates that combining graph neural network modeling and inference is an effective approach for solving inverse design problems associated with attaining desirable thermal transport behaviors using thermal metamaterials.
Keywords: thermal metamaterial; thermal transparency; inverse design; machine learning; graph neural network
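A single round of graph message passing, the building block such a model rests on, can be sketched as follows. This is a generic GCN-style layer with symmetric normalization; the toy particle chain and its features are invented for illustration:

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One graph-convolution round: aggregate neighbors, then transform.

    adj: (N, N) adjacency, self-loops already added by the caller.
    feats: (N, F_in) node features; weight: (F_in, F_out).
    """
    deg = adj.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(deg)
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 A D^-1/2
    return np.maximum(a_norm @ feats @ weight, 0.0)           # ReLU

# toy 4-particle chain with self-loops, standing in for the periodic system
adj = np.eye(4)
for i in range(3):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))   # e.g. radius, conductivity, spacing (assumed)
hidden = gcn_layer(adj, feats, rng.normal(size=(3, 8)))
print(hidden.shape)  # (4, 8)
```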
Regression Method for Rail Fastener Tightness Based on Center-Line Projection Distance Feature and Neural Network
7
Authors: Yuanhang Wang, Duxin Liu, Sheng Guo, Yifan Wu, Jing Liu, Wei Li, Hongjie Wang. Chinese Journal of Mechanical Engineering, SCIE, EI, CAS, CSCD, 2024, No. 2, pp. 356-371.
In the railway system, fasteners have the functions of damping, maintaining the track distance, and adjusting the track level. Therefore, routine maintenance and inspection of fasteners are important to ensure the safe operation of track lines. Current assessment methods for fastener tightness include manual observation, acoustic wave detection, and image detection, which suffer from limitations such as low accuracy and efficiency, susceptibility to interference and misjudgment, and a lack of accurate, stable, and fast detection. Aiming at the small deformation characteristics and large elastic change of fasteners from full loosening to full tightening, this study proposes high-precision surface-structured light technology for fastener detection, fastener deformation feature extraction based on the center-line projection distance, and a fastener tightness regression method based on neural networks. First, the method uses a 3D camera to obtain a fastener point cloud and then segments the elastic rod area based on iterative closest point algorithm registration. Principal component analysis is used to calculate the normal vector of the segmented elastic rod surface and extract the point on the centerline of the elastic rod. The point is projected onto the upper surface of the bolt to calculate the projection distance. Subsequently, the mapping relationship between the projection distance sequence and fastener tightness is established, and the influence of each parameter on the fastener tightness prediction is analyzed. Finally, a fastener detection scene was set up in the track experimental base, data were collected, and the algorithm was verified. The results showed that the RMSE between the regressed fastener tightness and the actual measured value was 0.2196 mm, a significant improvement over other tightness detection methods, realizing an effective fastener tightness regression.
Keywords: railway system; fasteners; tightness inspection; neural network regression; 3D point cloud processing
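The PCA-based normal estimation and projection-distance steps are textbook point cloud operations and can be sketched generically (this is not the authors' code; the synthetic "bolt surface" data and the test point are illustrative):

```python
import numpy as np

def surface_normal(points):
    """Estimate a surface normal via PCA: the eigenvector of the point
    covariance matrix with the smallest eigenvalue."""
    centered = points - points.mean(axis=0)
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues ascending
    return eigvecs[:, 0]

def projection_distance(point, plane_point, plane_normal):
    """Signed distance from a centerline point to the bolt's upper
    surface plane, measured along the plane normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    return float(np.dot(point - plane_point, n))

# noisy samples of the z = 0 plane standing in for the bolt's upper surface
rng = np.random.default_rng(0)
surface = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
n = surface_normal(surface)
d = projection_distance(np.array([0.2, 0.1, 5.0]), surface.mean(axis=0), n)
print(abs(d))  # close to 5.0
```

A sequence of such distances along the elastic rod's centerline is what the regression network would take as input.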
3D Ice Shape Description Method Based on BLSOM Neural Network
8
Authors: ZHU Bailiu, ZUO Chenglin. Transactions of Nanjing University of Aeronautics and Astronautics, EI, CSCD, 2024, No. S01, pp. 70-80.
When checking ice shape calculation software, its accuracy is judged based on the proximity between the calculated ice shape and the typical test ice shape. Therefore, determining the typical test ice shape becomes the key task of icing wind tunnel tests. In the icing wind tunnel test of the tail wing model of a large amphibious aircraft, in order to obtain an accurate typical test ice shape, the Romer Absolute Scanner is used to obtain the 3D point cloud data of the ice shape on the tail wing model. Then, the batch-learning self-organizing map (BLSOM) neural network is used to obtain the 2D average ice shape along the model direction based on the 3D point cloud data, while its tolerance band is calculated using a probabilistic statistical method. The results show that the combination of the 2D average ice shape and its tolerance band can effectively represent the 3D characteristics of the test ice shape, and can be used as the typical test ice shape for comparative analysis with the calculated ice shape.
Keywords: icing wind tunnel test; ice shape; batch-learning self-organizing map neural network; 3D point cloud
Intelligent extraction of road cracks based on vehicle laser point cloud and panoramic sequence images
9
Authors: Ming Guo, Li Zhu, Ming Huang, Jie Ji, Xian Ren, Yaxuan Wei, Chutian Gao. Journal of Road Engineering, 2024, No. 1, pp. 69-79.
In light of the limited efficacy of conventional methods for identifying pavement cracks and the absence of comprehensive depth and location data in two-dimensional photographs, this study presents an intelligent strategy for extracting road cracks. The methodology integrates laser point cloud data obtained from a vehicle-mounted system with a panoramic sequence of images. The study employs a vehicle-mounted LiDAR measurement system to acquire laser point cloud and panoramic sequence image data simultaneously. A convolutional neural network is utilized to extract cracks from the panoramic sequence image. The extracted sequence image is then aligned with the laser point cloud, enabling the assignment of RGB information to the vehicle-mounted three-dimensional (3D) point cloud and location information to the two-dimensional (2D) panoramic image. Additionally, a threshold value is set based on the crack elevation change to extract the aligned roadway point cloud, so that three-dimensional data pertaining to the cracks can be acquired. The experimental findings demonstrate that the convolutional neural network yields noteworthy outcomes in the extraction of road cracks, and that point cloud and image alignment enables the extraction of precise crack location data. The approach exhibits superior accuracy compared to conventional methods and facilitates rapid and accurate identification and localization of road cracks, thereby playing a crucial role in road maintenance and traffic safety. Consequently, this technique shows significant promise in the domains of intelligent transportation and urban development.
Keywords: road crack extraction; vehicle laser point cloud; panoramic sequence images; convolutional neural network
Job Scheduling for Cloud Computing Using Neural Networks (Cited: 1)
10
Authors: Mahmoud Maqableh, Huda Karajeh, Ra'ed Masa'deh. Communications and Network, 2014, No. 3, pp. 191-200.
Cloud computing aims to maximize the benefit of distributed resources and aggregate them to achieve higher throughput to solve large-scale computation problems. In this technology, the customers rent the resources and only pay per use. Job scheduling is one of the biggest issues in cloud computing. Scheduling of users' requests means how to allocate resources to these requests to finish the tasks in minimum time. The main task of a job scheduling system is to find the best resources for users' jobs, taking into consideration static and dynamic parameter restrictions of users' jobs. In this research, we introduce cloud computing, genetic algorithms, and artificial neural networks, and then review the literature of cloud job scheduling. Many researchers in the literature have tried to solve cloud job scheduling using different techniques. Most of them use artificial intelligence techniques, such as genetic algorithms and ant colony optimization, to solve the job scheduling problem and find the optimal distribution of resources. Unfortunately, there are still open problems in this research area. Therefore, we propose implementing artificial neural networks to optimize job scheduling results in the cloud, as they can find new classifications rather than only searching within the available set.
Keywords: cloud computing; job scheduling; artificial intelligence; artificial neural networks
Convolutional Neural Network-Based Deep Q-Network (CNN-DQN) Resource Management in Cloud Radio Access Network (Cited: 2)
11
Authors: Amjad Iqbal, Mau-Luen Tham, Yoong Choon Chang. China Communications, SCIE, CSCD, 2022, No. 10, pp. 129-142.
The recent surge of mobile subscribers and user data traffic has accelerated the telecommunication sector towards the adoption of the fifth-generation (5G) mobile networks. Cloud radio access network (CRAN) is a prominent framework in the 5G mobile network to meet the above requirements by deploying low-cost and intelligent multiple distributed antennas known as remote radio heads (RRHs). However, achieving the optimal resource allocation (RA) in CRAN using the traditional approach is still challenging due to the complex structure. In this paper, we introduce the convolutional neural network-based deep Q-network (CNN-DQN) to balance the energy consumption and guarantee the user quality of service (QoS) demand in downlink CRAN. We first formulate the Markov decision process (MDP) for energy efficiency (EE) and build up a 3-layer CNN to capture the environment feature as an input state space. We then use DQN to turn on/off the RRHs dynamically based on the user QoS demand and energy consumption in the CRAN. Finally, we solve the RA problem based on the user constraint and transmit power to guarantee the user QoS demand and maximize the EE with a minimum number of active RRHs. In the end, we conduct a simulation to compare our proposed scheme with the Nature DQN and the traditional approach.
Keywords: energy efficiency (EE); Markov decision process (MDP); convolutional neural network (CNN); cloud RAN; deep Q-network (DQN)
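The RRH on/off decision and the energy-efficiency objective can be illustrated with a toy model. Here a greedy heuristic stands in for the DQN, and the per-RRH rates, power figures, and QoS floor are invented for illustration, not taken from the paper:

```python
import numpy as np

def energy_efficiency(rates, active, p_static=10.0, p_tx=5.0):
    """EE reward: served rate divided by consumed power.
    Each active RRH costs static plus transmit power (assumed values)."""
    served = rates[active].sum()
    power = active.sum() * (p_static + p_tx)
    return served / power if power > 0 else 0.0

def greedy_sleep(rates, qos_min):
    """Switch off the weakest RRHs while the aggregate rate still meets
    the QoS floor; a heuristic stand-in for the DQN's on/off actions."""
    active = np.ones(len(rates), dtype=bool)
    for i in np.argsort(rates):            # try weakest RRHs first
        trial = active.copy()
        trial[i] = False
        if rates[trial].sum() >= qos_min:
            active = trial
    return active

rates = np.array([12.0, 3.0, 8.0, 1.0])    # per-RRH served rate (illustrative)
active = greedy_sleep(rates, qos_min=20.0)
print(active, energy_efficiency(rates, active))
```

The trade-off the paper's DQN learns is visible even here: sleeping the two weakest RRHs halves the power while the QoS constraint still holds.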
Underground Disease Detection Based on Cloud Computing and Attention Region Neural Network (Cited: 1)
12
Authors: Pinjie Xu, Ce Li, Liguo Zhang, Feng Yang, Jing Zheng, Jingwu Feng. Journal on Artificial Intelligence, 2019, No. 1, pp. 9-18.
Detecting underground disease is crucial for roadbed health monitoring and the maintenance of transport facilities, since it is closely related to structural health and reliability amid the rapid development of road traffic. Ground penetrating radar (GPR) is widely used to detect road and underground diseases. However, it is still a challenging task due to data access anywhere, transmission security, and data processing on the cloud. Cloud computing can provide scalable and powerful technologies for large-scale storage, processing, and dissemination of GPR data. Combining cloud computing and radar detection technology makes it possible to locate underground disease quickly and accurately. This paper deploys the framework of a ground disease detection system based on cloud computing and proposes an attention region convolution neural network for object detection in GPR images. Experimental results on precision and recall metrics show that the proposed approach is more efficient than traditional object detection methods in ground disease detection for a cloud-based system.
Keywords: cloud computing; ground penetrating radar; convolution neural network
Building a Tax Predictive Model Based on the Cloud Neural Network
13
Authors: 田永青, 李志, 朱仲英. Journal of Systems Engineering and Electronics, SCIE, EI, CSCD, 2003, No. 3, pp. 81-86.
Tax is very important to the whole country, so a scientific tax predictive model is needed. This paper introduces the theory of the cloud model. On this basis, it presents a cloud neural network and analyzes the main factors which influence tax revenue. It then proposes a tax predictive model based on the cloud neural network. The model combines the strong points of the cloud model and the neural network. The experiment and simulation results show the effectiveness of the algorithm in this paper.
Keywords: cloud model; simplified TS cloud inference; neural network; tax predictive model
Insider Attack Detection Using Deep Belief Neural Network in Cloud Computing
14
Authors: A. S. Anakath, R. Kannadasan, Niju P. Joseph, P. Boominathan, G. R. Sreekanth. Computer Systems Science & Engineering, SCIE, EI, 2022, No. 5, pp. 479-492.
Cloud computing is a high network infrastructure where users, owners, third users, authorized users, and customers can access and store their information quickly. The use of cloud computing has realized the rapid increase of information in every field and the need for a centralized location for efficient processing. The cloud is nowadays highly affected by internal threats from its users. Sensitive applications such as banking, hospital, and business are more likely to be affected by real user threats. An intruder presents as a user and becomes a member of the network. After becoming an insider, they try to attack or steal sensitive data during information sharing or conversation. A major issue in today's technological development is identifying such insider threats in the cloud network. When data are lost, compromised cloud users are difficult to restore, privacy and security are not ensured, and usage of the cloud is not trusted. Several solutions are available for the external security of the cloud network; however, insider or internal threats still need to be addressed. In this research work, we focus on identifying an insider attack using an artificial intelligence technique. An insider attack is possible through the nodes of weak users' systems: attackers log in with a weak user id, connect to the network, and pretend to be a trusted node, after which they can easily attack and hijack information as an insider, and identifying them is very difficult. These types of attacks need intelligent solutions. Machine learning approaches are widely used for security issues, but existing approaches lag in classifying attackers accurately, and this elusive information hijacking process motivates researchers to provide solutions for internal threats. In our proposed work, we track attackers using user interaction behavior patterns and a deep learning technique. The mouse movements, clicks, and keystrokes of the real user are stored in a database. The deep belief neural network is designed using restricted Boltzmann machines (RBMs) so that each RBM layer communicates with the previous and subsequent layers. The result is evaluated using a Cooja simulator based on the cloud environment. The accuracy and F-measure are highly improved compared with the existing long short-term memory and support vector machine approaches.
Keywords: cloud computing security; insider attack; network security; privacy; user interaction behavior; deep belief neural network
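The restricted Boltzmann machine building block can be sketched as a single Gibbs sampling step. This is a generic binary RBM step, not the paper's trained network; the feature dimensions standing in for binned keystroke/mouse features are assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_gibbs_step(v, w, b_h, b_v, rng):
    """One Gibbs step in a binary RBM: sample the hidden units given
    the visible layer, then reconstruct the visible layer."""
    p_h = sigmoid(v @ w + b_h)                    # P(h = 1 | v)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    p_v = sigmoid(h @ w.T + b_v)                  # P(v = 1 | h)
    v_new = (rng.random(p_v.shape) < p_v).astype(float)
    return h, v_new, p_h

rng = np.random.default_rng(0)
n_visible, n_hidden = 6, 4     # e.g. binned behavior features (assumed sizes)
w = rng.normal(0, 0.1, (n_visible, n_hidden))
v = rng.integers(0, 2, n_visible).astype(float)
h, v_new, p_h = rbm_gibbs_step(v, w, np.zeros(n_hidden), np.zeros(n_visible), rng)
print(p_h)
```

Stacking several such layers, each trained on the previous layer's hidden activations, yields the deep belief network the paper uses for classification.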
Cooperative Content Caching and Delivery in Vehicular Networks: A Deep Neural Network Approach
15
Authors: Xuelian Cai, Jing Zheng, Yuchuan Fu, Yao Zhang, Weigang Wu. China Communications, SCIE, CSCD, 2023, No. 3, pp. 43-54.
The growing demand for low-delay vehicular content has put tremendous strain on the backbone network. As a promising alternative, cooperative content caching among different cache nodes can reduce content access delay. However, heterogeneous cache nodes have different communication modes and limited caching capacities. In addition, the high mobility of vehicles renders the caching environment more complicated. Therefore, performing efficient cooperative caching becomes a key issue. In this paper, we propose a cross-tier cooperative caching architecture for all contents, which allows the distributed cache nodes to cooperate. Then, we devise the communication link and content caching model to facilitate timely content delivery. Aiming at minimizing transmission delay and cache cost, an optimization problem is formulated. Furthermore, we use a multi-agent deep reinforcement learning (MADRL) approach to model the decision-making process for caching among heterogeneous cache nodes, where each agent interacts with the environment collectively, receives observations yet a common reward, and learns its own optimal policy. Extensive simulations validate that the MADRL approach can enhance hit ratio while reducing transmission delay and cache cost.
Keywords: dynamic content delivery; cooperative content caching; deep neural network; vehicular networks
Secured Health Data Transmission Using Lagrange Interpolation and Artificial Neural Network
16
Authors: S. Vidhya, V. Kalaivani. Computer Systems Science & Engineering, SCIE, EI, 2023, No. 6, pp. 2673-2692.
In recent decades, cloud computing has played a prominent role in the health care sector, as patient health records are transferred and collected using cloud computing services. Doctors have switched to cloud computing as it provides multiple advantages, including wide storage space and easy availability without any limitations. This necessitates that the medical field be redesigned with cloud technology to preserve information about patients' critical diseases, electrocardiogram (ECG) reports, and payment details. The proposed work utilizes a hybrid cloud pattern to share Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) resources over the private and public cloud. The stored data are categorized as significant and non-significant by Artificial Neural Networks (ANN). The significant data undergo encryption by Lagrange key management, which automatically generates the key and stores it in the hidden layer. Upon receiving a request from a secondary user, the primary user verifies the authentication of the request and transmits the key via Gmail to the secondary user. Once the key matches the key in the hidden layer, the preserved information is shared between the users. Due to the enhanced privacy-preserving key generation, the proposed work prevents the tracking of keys by malicious users. The outcomes reveal that the introduced work provides an improved success rate with reduced computational time.
Keywords: cloud computing; homomorphic encryption; artificial neural network; Lagrange method; cryptography
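Lagrange interpolation over a finite field, the mathematical core of such key schemes, can be sketched as a Shamir-style secret reconstruction (the paper's exact key-management protocol may differ; the polynomial and prime below are illustrative):

```python
def lagrange_interpolate_at_zero(shares, prime):
    """Recover the secret f(0) from points (x_i, y_i) of a polynomial
    over GF(prime) via Lagrange interpolation. Requires Python 3.8+
    for the three-argument pow modular inverse."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime         # product of (0 - x_j)
                den = (den * (xi - xj)) % prime   # product of (x_i - x_j)
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

# illustrative polynomial f(x) = 1234 + 166x + 94x^2 over GF(1613)
prime = 1613
f = lambda x: (1234 + 166 * x + 94 * x * x) % prime
shares = [(1, f(1)), (2, f(2)), (3, f(3))]
print(lagrange_interpolate_at_zero(shares, prime))  # 1234
```

With a degree-2 polynomial, any three shares reconstruct the secret exactly, while two reveal nothing; this threshold property is what makes Lagrange interpolation attractive for key management.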
Cloud Resource Integrated Prediction Model Based on Variational Modal Decomposition-Permutation Entropy and LSTM
17
Authors: Xinfei Li, Xiaolan Xie, Yigang Tang, Qiang Guo. Computer Systems Science & Engineering, SCIE, EI, 2023, No. 11, pp. 2707-2724.
Predicting the usage of container cloud resources has always been an important and challenging problem in improving the performance of cloud resource clusters. We propose an integrated prediction method for stacking container cloud resources based on variational modal decomposition (VMD)-permutation entropy (PE) and a long short-term memory (LSTM) neural network, to solve the prediction difficulties caused by the non-stationarity and volatility of resource data. The variational modal decomposition algorithm decomposes the time series data of cloud resources to obtain intrinsic mode functions and residual components, which solves the end-effect and modal confusion problems of signal decomposition algorithms. Permutation entropy is used to evaluate the complexity of the intrinsic mode functions, and reconstruction based on similar entropy and low complexity is used to reduce the difficulty of modeling. Finally, we use the LSTM and stacking fusion models to predict and superimpose. The stacking integration model uses Gradient Boosting Regression (GBR), Kernel Ridge Regression (KRR), and Elastic Net Regression (ENet) as primary learners, while the secondary learner adopts kernel ridge regression for its solid generalization ability. Experiments on an Amazon public dataset show that, compared with Holt-Winters, LSTM, and NeuralProphet models, the method improves root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), and variance (VAR) by 0.338 to 1.913, 0.057 to 0.940, 0.000 to 0.017, and 1.038 to 8.481 respectively, showing its stability and better prediction accuracy.
Keywords: cloud resource prediction; variational modal decomposition; permutation entropy; long short-term memory neural network; stacking integration
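Permutation entropy itself is simple to compute; a minimal sketch of the generic definition (not the authors' code) shows why it works as a complexity score for the decomposed components:

```python
import math

def permutation_entropy(series, order=3, normalize=True):
    """Permutation entropy: Shannon entropy of the ordinal patterns of
    length `order` in a time series. Low values indicate a regular,
    easy-to-model signal; values near 1 (normalized) indicate noise."""
    counts = {}
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # ordinal pattern = ranking of values inside the window
        pattern = tuple(sorted(range(order), key=window.__getitem__))
        counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = 0.0
    for c in counts.values():
        p = c / total
        h -= p * math.log2(p)
    return h / math.log2(math.factorial(order)) if normalize else h

trend = [0.1 * i for i in range(100)]   # monotone: a single ordinal pattern
print(permutation_entropy(trend))       # 0.0
```

In the paper's pipeline, intrinsic mode functions with similar (low) entropy are grouped and reconstructed before being fed to the LSTM, shrinking the number of sub-models to train.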
Research on a Collaborative Retrieval Algorithm for Heterogeneous Databases on an Intelligent Cloud Platform (Cited: 1)
18
Authors: 倪强, 周守东, 宋婷婷. 《保定学院学报》 (Journal of Baoding University), 2024, No. 2, pp. 91-97.
An intelligent cloud platform is an efficient computing platform that integrates various resources and functions, providing users with flexible data storage and efficient data retrieval services. With the rapid development of information technology, data in heterogeneous databases is growing explosively. To improve retrieval over networked heterogeneous databases, a collaborative retrieval algorithm for heterogeneous databases on an intelligent cloud platform is proposed. A heterogeneous database on the intelligent cloud platform is constructed, balancing the energy consumption of its nodes; the platform's multi-source heterogeneous data is sorted and preprocessed; and a retrieval service engine centered on an index repository is built, with a neural network extracting features of the multi-source heterogeneous data to realize matched retrieval across heterogeneous databases. Test results show that the proposed algorithm achieves a precision of 96%, a recall of 94%, and a data loss of only 1, demonstrating that the method effectively improves retrieval over networked heterogeneous databases.
Keywords: cloud platform; heterogeneous database; index repository; neural network; matched retrieval
Deep Learning-Based Cloud and Cloud Shadow Detection Using Texture Features
19
Authors: 张昊, 焦瑞莉, 乔聪聪, 霍娟, 宗雪梅. 《计算机工程与设计》 (Computer Engineering and Design), Peking University Core, 2024, No. 5, pp. 1580-1587.
To address problems such as inaccurate boundaries and confusion with the land surface in cloud and cloud shadow detection, a convolutional neural network model incorporating a texture feature module is constructed for cloud and cloud shadow detection on Landsat 8 remote sensing images. A texture feature module based on statistical properties is introduced to extract and learn texture features, and a focal loss function is adopted during training to weaken the impact of sample imbalance. Experimental results show that the model refines texture details such as the boundaries of clouds and cloud shadows, reduces false and missed detections, and improves detection accuracy.
Keywords: cloud detection; cloud shadow detection; statistical properties; texture features; convolutional neural network; remote sensing images; focal loss function
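The focal loss used to counter sample imbalance can be sketched for the binary case. This is the standard definition from Lin et al.; the example pixel probabilities are illustrative:

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss for one prediction: down-weights easy examples
    so rare classes (e.g. thin cloud-shadow pixels) dominate training.
    gamma = 0 recovers the alpha-weighted cross-entropy."""
    p_t = p if y == 1 else 1.0 - p       # probability of the true class
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)

# a confidently-correct pixel contributes far less than a hard one
easy = focal_loss(0.95, 1)
hard = focal_loss(0.30, 1)
print(easy < hard)  # True
```

The `(1 - p_t) ** gamma` modulating factor is what shrinks the gradient contribution of the abundant, easily-classified background pixels.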
A Survey of Graph-Based Point Cloud Research
20
Authors: 梁循, 李志莹, 蒋洪迅. 《计算机研究与发展》 (Journal of Computer Research and Development), EI, CSCD, Peking University Core, 2024, No. 11, pp. 3870-3896.
Point cloud processing, transmission, and semantic segmentation are important analysis tasks in 3D computer vision. The effectiveness of graph neural networks and graph structures for point cloud research has now been established, and graph-based point cloud (GPC) studies continue to emerge, so a unified research perspective, framework, and methodology urgently needs to be formed. This survey systematically reviews the application scenarios of GPC research, including registration, denoising, compression, representation learning, classification, segmentation, and detection; summarizes a general framework for GPC research; and proposes a technical roadmap covering the current GPC field. Specifically, it presents a layered conceptual taxonomy of GPC research: low-level data processing, mid-level representation learning, and high-level recognition tasks. It surveys GPC models and algorithms in each area, including processing algorithms for static and dynamic point clouds, supervised and unsupervised representation learning models, and traditional or machine learning GPC recognition algorithms. Representative results and their core ideas are summarized, such as dynamically updating the nearest-neighbor graph of each layer's feature space, hierarchical and parameter-shared dynamic point aggregation modules, and combining graph partitioning with graph convolution to improve segmentation accuracy. Model performance is compared in terms of overall accuracy (OA), mean accuracy (mAcc), and mean intersection over union (mIoU). Building on this analysis, the survey identifies the main challenges currently facing GPC, formulates corresponding research questions, and outlines future research directions. The proposed GPC research framework is general and universal, providing later researchers with a domain orientation, technical summary, and macroscopic perspective for this emerging interdisciplinary field. Point cloud research emerged from major advances in detector hardware; its current state shows challenges remaining between theory and practice, with key problems still to be solved, and its development will push artificial intelligence into a new era.
Keywords: point cloud; graph structure; graph-based point cloud; graph signal processing; spatio-temporal graph; graph neural network
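A recurring low-level step across the GPC pipelines surveyed above is building a k-nearest-neighbor graph over raw points, the structure that graph convolutions then operate on. A minimal numpy sketch (brute-force distances, fine for small clouds; real pipelines would use a spatial index):

```python
import numpy as np

def knn_graph(points, k):
    """Build the directed k-nearest-neighbor adjacency of a point cloud:
    the usual first step when applying graph networks to raw points."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))          # (N, N) pairwise distances
    np.fill_diagonal(dist, np.inf)               # exclude self-edges
    neighbors = np.argsort(dist, axis=1)[:, :k]  # k closest per point
    adj = np.zeros((len(points), len(points)), dtype=bool)
    rows = np.repeat(np.arange(len(points)), k)
    adj[rows, neighbors.ravel()] = True
    return adj

rng = np.random.default_rng(0)
cloud = rng.normal(size=(32, 3))
adj = knn_graph(cloud, k=4)
print(adj.sum(axis=1))  # every point has exactly 4 out-edges
```

The "dynamic graph" idea highlighted in the survey amounts to recomputing this adjacency in each layer's learned feature space rather than once in coordinate space.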