Journal Articles
2,325 articles found
1. An Optimized Approach to Deep Learning for Botnet Detection and Classification for Cybersecurity in Internet of Things Environment
Author: Abdulrahman Alzahrani. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 2331-2349 (19 pages).
The recent development of the Internet of Things (IoT) has resulted in the growth of IoT-based DDoS attacks. Botnet detection in IoT systems applies advanced cybersecurity measures to detect and reduce malevolent botnets in interconnected devices. Anomaly detection models evaluate transmission patterns, network traffic, and device behaviour to detect deviations from usual activities. Machine learning (ML) techniques detect patterns signalling botnet activity, namely sudden traffic increases, unusual command-and-control patterns, or irregular device behaviour. In addition, intrusion detection systems (IDSs) and signature-based techniques are applied to recognize known malware signatures related to botnets. Various ML and deep learning (DL) techniques have been developed to detect botnet attacks in IoT systems. To overcome security issues in an IoT environment, this article designs a gorilla troops optimizer with DL-enabled botnet attack detection and classification (GTODL-BADC) technique. The GTODL-BADC technique follows feature selection (FS) with optimal DL-based classification to accomplish security in an IoT environment. For data preprocessing, the min-max data normalization approach is primarily used. The GTODL-BADC technique uses the GTO algorithm to select features and elect optimal feature subsets. Moreover, the multi-head attention-based long short-term memory (MHA-LSTM) technique is applied for botnet detection. Finally, the tree seed algorithm (TSA) is used to select the optimum hyperparameters for the MHA-LSTM method. The GTODL-BADC technique was experimentally validated on a benchmark dataset. The simulation results highlight that the GTODL-BADC technique demonstrates promising performance in the botnet detection process.
Keywords: Botnet detection; Internet of Things; gorilla troops optimizer; hyperparameter tuning; intrusion detection system
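The entry above names min-max normalization as its preprocessing step. As a point of reference only, here is a minimal NumPy sketch of column-wise min-max scaling; it is a generic illustration of that step, not the authors' code, and the small epsilon guard is an assumption to avoid division by zero on constant features.

```python
import numpy as np

def min_max_normalize(X, eps=1e-12):
    """Scale each feature column into [0, 1]; eps guards constant columns (assumed, not from the paper)."""
    X = np.asarray(X, dtype=float)
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return (X - x_min) / (x_max - x_min + eps)

# Example: three samples with two features
print(min_max_normalize([[1.0, 10.0], [2.0, 30.0], [3.0, 20.0]]))
```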
2. The Impact of Domain Name Server (DNS) over Hypertext Transfer Protocol Secure (HTTPS) on Cyber Security: Limitations, Challenges, and Detection Techniques
Authors: Muhammad Dawood, Shanshan Tu, Chuangbai Xiao, Muhammad Haris, Hisham Alasmary, Muhammad Waqas, Sadaqat Ur Rehman. Computers, Materials & Continua (SCIE, EI), 2024, Issue 9, pp. 4513-4542 (30 pages).
DNS over HTTPS (Hypertext Transfer Protocol Secure) (DoH) is a new technology that encrypts DNS traffic, enhancing the privacy and security of end-users. However, the adoption of DoH still faces several research challenges, such as ensuring security, compatibility, standardization, performance, privacy, and increasing user awareness. DoH significantly impacts network security, including better end-user privacy and security, challenges for network security professionals, increasing usage of encrypted malware communication, and difficulty adapting DNS-based security measures. Therefore, it is important to understand the impact of DoH on network security and develop new privacy-preserving techniques to allow the analysis of DoH traffic without compromising user privacy. This paper provides an in-depth analysis of the effects of DoH on cybersecurity. We discuss various techniques for detecting DoH tunneling and identify essential research challenges that need to be addressed in future security studies. Overall, this paper highlights the need for continued research and development to ensure the effectiveness of DoH as a tool for improving privacy and security.
Keywords: DNS; DNS over HTTPS; cybersecurity; machine learning
3. Machine Learning Empowered Security and Privacy Architecture for IoT Networks with the Integration of Blockchain
Authors: Sohaib Latif, M. Saad Bin Ilyas, Azhar Imran, Hamad Ali Abosaq, Abdulaziz Alzubaidi, Vincent Karovic Jr. Intelligent Automation & Soft Computing, 2024, Issue 2, pp. 353-379 (27 pages).
The Internet of Things (IoT) is growing rapidly and impacting almost every aspect of our lives, from wearables and healthcare to security, traffic management, and fleet management systems. This has generated massive volumes of data, and security and data privacy risks are increasing with the advancement of technology and network connections. Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection, owing to their vulnerability to a single point of failure. Additionally, conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices. Previous machine learning approaches were also unable to detect denial-of-service (DoS) attacks. This study introduces a novel decentralized and secure framework for blockchain integration. To avoid a single point of failure, an accredited access control scheme is incorporated, combining blockchain with local peers to record each transaction and verify the signature to access. Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters, managing keys, and revoking users on the blockchain. An innovative contract-based DoS attack mitigation method is also incorporated to effectively validate devices with intelligent contracts as trusted or untrusted, preventing the server from becoming overwhelmed. The proposed framework effectively controls access, safeguards data privacy, and reduces the risk of cyberattacks. The results show that the suggested framework outperforms existing approaches, with accuracy, precision, sensitivity, recall, and F-measure of 96.9%, 98.43%, 98.8%, 98.43%, and 98.4%, respectively.
Keywords: Machine learning; Internet of Things; blockchain; data privacy; security; Industry 4.0
4. Potential Benefits and Obstacles of the Use of Internet of Things in Saudi Universities: Empirical Study
Authors: Najmah Adel Fallatah, Fahad Mahmoud Ghabban, Omair Ameerbakhsh, Ibrahim Alfadli, Wael Ghazy Alheadary, Salem Sulaiman Alatawi, Ashwaq Hasen Al-Shehri. Advances in Internet of Things, 2024, Issue 1, pp. 1-20 (20 pages).
Among all technology revolutions, the Internet of Things (IoT) has been considered the next evolution of the internet. IoT has become a far more popular area in the computing world. IoT combines a huge number of things (devices) that can be connected through the internet. Purpose: this paper aims to explore the concept of the Internet of Things (IoT) in general and outline the main definitions of IoT. The paper also aims to examine and discuss the obstacles and potential benefits of IoT in Saudi universities. Methodology: the researchers reviewed the previous literature and focused on several databases to draw on recent studies and research related to IoT. The researchers then used a quantitative methodology to examine the factors affecting the obstacles and potential benefits of IoT. The data were collected using a questionnaire distributed online among academic staff, and a total of 150 participants completed the survey. Findings: the results of this study reveal twelve factors that affect the potential benefits of using IoT, such as reducing human errors and increasing business income and workers' productivity. They also show eighteen factors that affect the obstacles to IoT use, for example sensors' cost, data privacy, and data security. These factors have the most influence on the use of IoT in Saudi universities.
Keywords: Internet of Things (IoT); M2M; factors; obstacles; potential benefits; universities
5. A High Efficiency Hardware Implementation of S-Boxes Based on Composite Field for Advanced Encryption Standard
Authors: Yawen Wang, Sini Bin, Shikai Zhu, Xiaoting Hu. Journal of Computer and Communications, 2024, Issue 4, pp. 228-246 (19 pages).
The SubBytes (S-box) transformation is the most crucial operation in the AES algorithm, significantly impacting the implementation performance of AES chips. To design a high-performance S-box, a segmented optimization implementation of the S-box, based on the composite field inverse operation, is proposed in this paper. The proposed S-box implementation is modeled in the Verilog language and synthesized with the Design Compiler tool, after verifying the correctness of the simulation results. The synthesis results show that, compared with several current S-box implementation schemes, the proposed implementation significantly reduces area overhead and critical path delay and thus achieves higher hardware efficiency. This provides strong support for realizing efficient and compact S-box ASIC designs.
Keywords: Advanced Encryption Standard (AES); S-box; tower field; hardware implementation; Application Specific Integrated Circuit (ASIC)
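For readers unfamiliar with the S-box construction discussed above, the sketch below derives the standard AES S-box in Python from the multiplicative inverse in GF(2^8) followed by the FIPS-197 affine transform. This is the reference definition that composite-field hardware designs optimize, not the paper's circuit; the brute-force inverse is purely illustrative.

```python
def gf_mul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return p

def gf_inv(a):
    """Multiplicative inverse by exhaustive search (0 maps to 0, as in AES)."""
    return next((x for x in range(1, 256) if gf_mul(a, x) == 1), 0) if a else 0

def affine(x):
    """FIPS-197 affine transform: b'_i = b_i ^ b_{i+4} ^ b_{i+5} ^ b_{i+6} ^ b_{i+7} ^ c_i, c = 0x63."""
    c, out = 0x63, 0
    for i in range(8):
        bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8))
               ^ (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8)) ^ (c >> i)) & 1
        out |= bit << i
    return out

sbox = [affine(gf_inv(x)) for x in range(256)]
assert sbox[0x00] == 0x63 and sbox[0x01] == 0x7C  # known S-box entries
```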
6. Research and Application of Caideng Model Rendering Technology for Virtual Reality
Authors: Xuefeng Wang, Yadong Wu, Yan Luo, Dan Luo. Journal of Computer and Communications, 2024, Issue 4, pp. 95-110 (16 pages).
With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR technology. In response to the problem that the lighting effect of Caideng cannot be rendered directly in digital Caideng scenes, this article analyzes the lighting model and combines it with the lighting effect of Caideng scenes to design an optimized lighting model algorithm that fuses the bidirectional transmission distribution function (BTDF) model. This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment. In addition, image optimization processing methods are used to enhance the immersive experience in VR. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, and delivers a good immersive experience.
Keywords: Virtual reality; Caideng model; lighting model; point light; rendering
7. Road Traffic Monitoring from Aerial Images Using Template Matching and Invariant Features [Cited by 1]
Authors: Asifa Mehmood Qureshi, Naif Al Mudawi, Mohammed Alonazi, Samia Allaoua Chelloug, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3683-3701 (19 pages).
Road traffic monitoring is an imperative topic widely discussed among researchers. Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides. However, aerial images provide the flexibility to use mobile platforms to detect the location and motion of vehicles over a larger area. To this end, different models have shown the ability to recognize and track vehicles. However, these methods are not mature enough to produce accurate results in complex road scenes. Therefore, this paper presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts. The extracted frames were converted to grayscale, followed by the application of a georeferencing algorithm to embed coordinate information into the images. A masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system. Next, Sobel edge detection combined with Canny edge detection and the Hough line transform was applied for noise reduction. After preprocessing, a blob detection algorithm helped detect the vehicles. Vehicles of varying sizes were detected by implementing a dynamic thresholding scheme. Detection was done on the first image of every burst. Then, to track vehicles, the model of each vehicle was matched in the succeeding images using a template matching algorithm. To further improve tracking accuracy by incorporating motion information, Scale Invariant Feature Transform (SIFT) features were used to find the best possible match among multiple matches. An accuracy rate of 87% for detection and 80% for tracking was achieved on the A1 Motorway Netherlands dataset. For the Vehicle Aerial Imaging from Drone (VAID) dataset, an accuracy rate of 86% for detection and 78% for tracking was achieved.
Keywords: Unmanned Aerial Vehicles (UAV); aerial images; dataset; object detection; object tracking; data elimination; template matching; blob detection; SIFT; VAID
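As a rough illustration of the two matching stages mentioned in the entry above, the OpenCV sketch below runs normalized template matching on a frame and then uses SIFT descriptors with Lowe's ratio test to disambiguate candidate matches. The file names and the 0.8 similarity threshold are placeholders; this is a generic sketch, not the paper's georeferenced burst pipeline.

```python
import cv2

frame = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)          # hypothetical file names
template = cv2.imread("vehicle_template.png", cv2.IMREAD_GRAYSCALE)

# Stage 1: normalized cross-correlation template matching
result = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
if max_val > 0.8:                                                    # assumed threshold
    h, w = template.shape
    cv2.rectangle(frame, max_loc, (max_loc[0] + w, max_loc[1] + h), 255, 2)

# Stage 2: SIFT features to pick the best among multiple candidate matches
sift = cv2.SIFT_create()
kp_t, des_t = sift.detectAndCompute(template, None)
kp_f, des_f = sift.detectAndCompute(frame, None)
matches = cv2.BFMatcher().knnMatch(des_t, des_f, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]     # Lowe's ratio test
print(f"template score {max_val:.2f}, {len(good)} good SIFT matches")
```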
8. Leveraging User-Generated Comments and Fused BiLSTM Models to Detect and Predict Issues with Mobile Apps
Authors: Wael M. S. Yafooz, Abdullah Alsaeedi. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 735-759 (25 pages).
In the last decade, technical advancements and faster Internet speeds have also led to an increasing number of mobile devices and users. Thus, all contributors to society, whether young or old members, can use these mobile apps. The use of these apps eases our daily lives, and all customers who need any type of service can access it easily, comfortably, and efficiently through mobile apps. Particularly, Saudi Arabia greatly depends on digital services to assist people and visitors. Such mobile devices are used in organizing daily work schedules and services, particularly during two large occasions, Umrah and Hajj. However, pilgrims encounter mobile app issues such as slowness, conflict, unreliability, or user-unfriendliness. Pilgrims comment on these issues on mobile app platforms through reviews of their experiences with these digital services. Scholars have made several attempts to solve such mobile issues by reporting bugs or non-functional requirements by utilizing user comments. However, solving such issues is a great challenge, and the issues still exist. Therefore, this study aims to propose a hybrid deep learning model to classify and predict mobile app software issues encountered by millions of pilgrims during the Hajj and Umrah periods from the user perspective. Firstly, a dataset was constructed using user-generated comments from relevant mobile apps using natural language processing methods, including information extraction, the annotation process, and pre-processing steps, considering a multi-class classification problem. Then, several experiments were conducted using common machine learning classifiers, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) architectures, to examine the performance of the proposed model. Results show 96% in F1-score and accuracy, and the proposed model outperformed the mentioned models.
Keywords: Mobile apps issues; play store; user comments; deep learning; LSTM; bidirectional LSTM
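To make the BiLSTM idea in the entry above concrete, here is a minimal Keras sketch of a bidirectional LSTM text classifier over tokenized comment sequences. Vocabulary size, sequence length, class count, and layer widths are assumptions for illustration; the paper's actual hybrid architecture and preprocessing are not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, max_len, num_classes = 20000, 100, 4   # assumed sizes, not taken from the paper

model = models.Sequential([
    layers.Input(shape=(max_len,)),
    layers.Embedding(vocab_size, 128),              # token ids -> dense embeddings
    layers.Bidirectional(layers.LSTM(64)),          # reads each comment in both directions
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```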
9. Segmentation of Head and Neck Tumors Using Dual PET/CT Imaging: Comparative Analysis of 2D, 2.5D, and 3D Approaches Using UNet Transformer
Authors: Mohammed A. Mahdi, Shahanawaj Ahamad, Sawsan A. Saad, Alaa Dafhalla, Alawi Alqushaibi, Rizwan Qureshi. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 12, pp. 2351-2373 (23 pages).
The segmentation of head and neck (H&N) tumors in dual Positron Emission Tomography/Computed Tomography (PET/CT) imaging is a critical task in medical imaging, providing essential information for diagnosis, treatment planning, and outcome prediction. Motivated by the need for more accurate and robust segmentation methods, this study addresses key research gaps in the application of deep learning techniques to multimodal medical images. Specifically, it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution. The primary research questions guiding this study are: (1) How can the integration of convolutional neural networks (CNNs) and transformer networks enhance segmentation accuracy in dual PET/CT imaging? (2) What are the comparative advantages of 2D, 2.5D, and 3D model configurations in this context? To answer these questions, we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers. Our proposed methodology involved a comprehensive preprocessing pipeline, including normalization, contrast enhancement, and resampling, followed by segmentation using 2D, 2.5D, and 3D UNet Transformer models. The models were trained and tested on three diverse datasets: HeckTor2022, AutoPET2023, and SegRap2023. Performance was assessed using metrics such as the Dice Similarity Coefficient, Jaccard Index, Average Surface Distance (ASD), and Relative Absolute Volume Difference (RAVD). The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics, achieving the highest Dice and Jaccard values, indicating superior segmentation accuracy. For instance, on the HeckTor2022 dataset, the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705, surpassing other model configurations. The 3D model showed strong boundary delineation performance but exhibited variability across datasets, while the 2D model, although effective, generally underperformed compared to its 2.5D and 3D counterparts. Compared to related literature, our study confirms the advantages of incorporating additional spatial context, as seen in the improved performance of the 2.5D model. This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
Keywords: PET/CT imaging; tumor segmentation; weighted fusion transformer; multi-modal imaging; deep learning; neural networks; clinical oncology
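Two of the metrics reported in the entry above, the Dice Similarity Coefficient and the Jaccard Index, reduce to simple overlap ratios between binary masks. The NumPy sketch below computes both; the epsilon smoothing term is an assumption to keep empty masks well defined, and this is independent of the paper's evaluation code.

```python
import numpy as np

def dice_and_jaccard(pred, gt, eps=1e-7):
    """pred, gt: binary segmentation masks of identical shape."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)
    jaccard = (inter + eps) / (union + eps)
    return dice, jaccard

pred = np.array([[0, 1, 1], [0, 1, 0]])
gt = np.array([[0, 1, 0], [0, 1, 1]])
print(dice_and_jaccard(pred, gt))   # 2 overlapping pixels out of 3 in each mask
```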
10. An object detection approach with residual feature fusion and second-order term attention mechanism
Authors: Cuijin Li, Zhong Qu, Shengye Wang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 2, pp. 411-424 (14 pages).
Automatically detecting and locating distant, occluded, small objects in images of complex traffic environments is a valuable and challenging research problem. Since bounding box locations are not sufficiently accurate and it is difficult to distinguish overlapping and occluded objects, the authors propose a network model with a second-order term attention mechanism and an occlusion loss. First, the backbone network is built on CSPDarkNet53. Then a method is designed for the feature extraction network based on an item-wise attention mechanism, which uses the filtered weighted feature vector to replace the original residual fusion and adds a second-order term to reduce the information loss in the fusion process and accelerate the convergence of the model. Finally, an object occlusion regression loss function is studied to reduce missed detections caused by dense objects. Sufficient experimental results demonstrate that the authors' method achieves state-of-the-art performance without reducing the detection speed. The method's mAP@.5 is 85.8% on the Foggy_cityscapes dataset and 97.8% on the KITTI dataset.
Keywords: Artificial intelligence; computer vision; image processing; machine learning; neural network; object recognition
11. A Time Series Short-Term Prediction Method Based on Multi-Granularity Event Matching and Alignment
Authors: Haibo Li, Yongbo Yu, Zhenbo Zhao, Xiaokang Tang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 1, pp. 653-676 (24 pages).
Accurate forecasting of time series is crucial across various domains. Many prediction tasks rely on effectively segmenting, matching, and aligning time series data. For instance, even for time series with the same granularity, segmenting them into events of different granularities can effectively mitigate the impact of varying time scales on prediction accuracy. However, these events of varying granularity frequently intersect with each other and may have unequal durations. Even minor differences can result in significant errors when matching time series with future trends. Besides, directly using matched but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy. Therefore, this paper proposes a short-term forecasting method for time series based on multi-granularity events, MGE-SP (multi-granularity event-based short-term prediction). First, a methodological framework for MGE-SP is established to guide the implementation steps. The framework consists of three key steps: multi-granularity event matching based on the LTF (latest time first) strategy, multi-granularity event alignment using a piecewise aggregate approximation based on the compression ratio, and a short-term prediction model based on XGBoost. Data from a nationwide online car-hailing service in China ensures the method's reliability. The average RMSE (root mean square error) and MAE (mean absolute error) of the proposed method are 3.204 and 2.360, lower than the respective values of 4.056 and 3.101 obtained using the ARIMA (autoregressive integrated moving average) method, as well as the values of 4.278 and 2.994 obtained using the k-means-SVR (support vector regression) method. Another experiment was conducted on stock data from a public dataset. The proposed method achieved an average RMSE and MAE of 0.836 and 0.696, lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method, as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.
Keywords: Time series; short-term prediction; multi-granularity event; alignment; event matching
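The alignment step in the entry above uses piecewise aggregate approximation (PAA). A minimal NumPy sketch of plain PAA follows, compressing a series into segment means; the compression-ratio-driven variant described in the abstract is not reproduced, and np.array_split is used so series lengths need not divide evenly.

```python
import numpy as np

def paa(series, n_segments):
    """Piecewise Aggregate Approximation: mean of (roughly) equal-width windows."""
    series = np.asarray(series, dtype=float)
    return np.array([seg.mean() for seg in np.array_split(series, n_segments)])

x = np.sin(np.linspace(0, 6 * np.pi, 240))
print(paa(x, 12))   # 240-point series compressed to 12 segment means
```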
12. Cluster Detection Method of Endogenous Security Abnormal Attack Behavior in Air Traffic Control Network
Authors: Ruchun Jia, Jianwei Zhang, Yi Lin, Yunxiang Han, Feike Yang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2523-2546 (24 pages).
In order to enhance the accuracy of Air Traffic Control (ATC) cybersecurity attack detection, this paper designs a new clustering detection method for air traffic control network security attacks. The feature set for ATC cybersecurity attacks is constructed by setting the feature states, adding recursive features, and determining the feature criticality. The expected information gain and entropy of the feature data are computed to determine the information gain of the feature data and reduce the interference of similar feature data. An autoencoder is introduced into the AI (artificial intelligence) algorithm to encode and decode the characteristics of ATC network security attack behavior and reduce the dimensionality of the attack behavior data. Based on this processing, an unsupervised learning algorithm for clustering detection of ATC network security attacks is designed. First, the distances between clusters of ATC network security attack behavior characteristics are determined, the clustering threshold is calculated, and the initial cluster centers are constructed. Then, the new average of all feature objects in each cluster is recalculated as the new cluster center, and all objects in each cluster of attack behavior feature data are traversed. Finally, cluster detection of ATC network security attack behavior is completed by computing the objective functions. The experiments used three groups of attack behavior datasets as test objects, took the detection rate, false detection rate, and recall rate as test indicators, and selected three similar methods for comparison. The experimental results show that the detection rate of this method is about 98%, the false positive rate is below 1%, and the recall rate is above 97%. This shows that the method can improve the detection performance for security attacks in air traffic control networks.
Keywords: Air traffic control; network security; attack behavior; cluster detection; behavioral characteristics; information gain; cluster threshold; automatic encoder
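The feature-screening step in the entry above scores features by expected information gain. As a generic reference, the sketch below computes entropy and information gain for a discrete feature against class labels; it illustrates only the standard definition, not the paper's recursive-feature construction.

```python
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

def information_gain(labels, feature):
    """IG(Y; X) = H(Y) - sum_v P(X = v) * H(Y | X = v)."""
    labels, feature = np.asarray(labels), np.asarray(feature)
    cond = sum((feature == v).mean() * entropy(labels[feature == v])
               for v in np.unique(feature))
    return entropy(labels) - cond

y = np.array([0, 0, 1, 1, 1, 0])
x = np.array(["a", "a", "b", "b", "b", "a"])
print(information_gain(y, x))   # perfectly predictive feature -> IG equals H(Y) = 1.0
```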
13. Preserving Data Secrecy and Integrity for Cloud Storage Using Smart Contracts and Cryptographic Primitives
Author: Maher Alharby. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2449-2463 (15 pages).
Cloud computing has emerged as a viable alternative to traditional computing infrastructures, offering various benefits. However, the adoption of cloud storage poses significant risks to data secrecy and integrity. This article presents an effective mechanism to preserve the secrecy and integrity of data stored on the public cloud by leveraging blockchain technology, smart contracts, and cryptographic primitives. The proposed approach utilizes a Solidity-based smart contract as an auditor for maintaining and verifying the integrity of outsourced data. To preserve data secrecy, symmetric encryption systems are employed to encrypt user data before outsourcing it. An extensive performance analysis is conducted to illustrate the efficiency of the proposed mechanism. Additionally, a rigorous assessment is conducted to ensure that the developed smart contract is free from vulnerabilities and to measure its associated running costs. The security analysis of the proposed system confirms that our approach can securely maintain the confidentiality and integrity of cloud storage, even in the presence of malicious entities. The proposed mechanism contributes to enhancing data security in cloud computing environments and can be used as a foundation for developing more secure cloud storage systems.
Keywords: Cloud storage; data secrecy; data integrity; smart contracts; cryptography
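To ground the secrecy/integrity split described in the entry above, here is a small sketch using the Python cryptography package: data is symmetrically encrypted before outsourcing, and a SHA-256 digest stands in for the integrity tag a smart-contract auditor could hold. Fernet and SHA-256 are example primitives chosen for illustration; the paper's Solidity contract and its exact scheme are not reproduced.

```python
import hashlib
from cryptography.fernet import Fernet

payload = b"sensor readings to outsource"               # hypothetical user data
key = Fernet.generate_key()                              # symmetric key kept by the data owner

ciphertext = Fernet(key).encrypt(payload)                # secrecy: encrypt before uploading
digest = hashlib.sha256(ciphertext).hexdigest()          # integrity tag an on-chain auditor could store

# Later, after fetching the blob back from the cloud:
assert hashlib.sha256(ciphertext).hexdigest() == digest  # integrity check passed
print(Fernet(key).decrypt(ciphertext))                   # recover the plaintext
```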
14. Starlet: Network defense resource allocation with multi-armed bandits for cloud-edge crowd sensing in IoT
Authors: Hui Xia, Ning Huang, Xuecai Feng, Rui Zhang, Chao Liu. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 3, pp. 586-596 (11 pages).
The cloud platform has limited defense resources to fully protect the edge servers used to process crowd sensing data in the Internet of Things. To guarantee the network's overall security, we present a network defense resource allocation scheme with multi-armed bandits to maximize the network's overall benefit. Firstly, we propose a method for dynamically setting node defense resource thresholds to obtain the defender (attacker) benefit function of edge servers (nodes) and its distribution. Secondly, we design a defense resource sharing mechanism for neighboring nodes to obtain the defense capability of nodes. Subsequently, we use the decomposability and Lipschitz continuity of the defender's total expected utility to reduce the difference between the utility's discrete and continuous arms and analyze the difference theoretically. Finally, experimental results show that the method maximizes the defender's total expected utility and reduces the difference between the discrete and continuous arms of the utility.
Keywords: Internet of Things; defense resource sharing; multi-armed bandits; defense resource allocation
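For readers new to the multi-armed bandit framing in the entry above, the sketch below runs a plain epsilon-greedy bandit over a handful of arms whose rewards stand in for per-node defense utility. Arm count, horizon, epsilon, and the Gaussian reward model are all assumptions; the paper's Lipschitz-based treatment of continuous arms is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n_arms, horizon, eps = 5, 2000, 0.1            # assumed sizes
true_means = rng.uniform(0.2, 0.8, n_arms)     # hidden per-arm expected utility (toy stand-in)

counts, values = np.zeros(n_arms), np.zeros(n_arms)
for _ in range(horizon):
    arm = rng.integers(n_arms) if rng.random() < eps else int(np.argmax(values))
    reward = rng.normal(true_means[arm], 0.1)  # observed defender utility for this round
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]   # incremental mean update

print("true means     :", np.round(true_means, 3))
print("estimated means:", np.round(values, 3))
```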
15. Enhancing personalized exercise recommendation with student and exercise portraits
Authors: Wei-Wei Gao, Hui-Fang Ma, Yan Zhao, Jing Wang, Quan-Hong Tian. Journal of Electronic Science and Technology (EI, CAS, CSCD), 2024, Issue 2, pp. 91-109 (19 pages).
The exercise recommendation system is emerging as a promising application in online learning scenarios, providing personalized recommendations to assist students with explicit learning directions. Existing solutions generally follow a collaborative filtering paradigm, while the implicit connections between students (exercises) have been largely ignored. In this study, we aim to propose an exercise recommendation paradigm that can reveal the latent student-student (exercise-exercise) connections. Specifically, a new framework is proposed, namely personalized exercise recommendation with student and exercise portraits (PERP). It consists of three sequential and interdependent modules: collaborative student exercise graph (CSEG) construction, joint random walk, and recommendation list optimization. Technically, the CSEG is created as a unified heterogeneous graph with students' response behaviors and student (exercise) relationships. Then, a joint random walk that takes full advantage of the spectral properties of nearly uncoupled Markov chains is performed on the CSEG, which allows for full exploration of both similar exercises that students have finished and connections between students (exercises) with similar portraits. Finally, we propose to optimize the recommendation list to obtain different exercise suggestions. After analyses of two public datasets, the results demonstrate that PERP can satisfy novelty, accuracy, and diversity.
Keywords: Educational data mining; exercise recommendation; joint random walk; nearly uncoupled Markov chains; optimization; personalized learning
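The joint random walk in the entry above operates on a heterogeneous student-exercise graph. As a generic illustration of the underlying operation, the sketch below iterates a random walk with restart on an adjacency matrix until the visiting distribution converges; the restart probability and convergence tolerance are assumptions, and the nearly-uncoupled-Markov-chain machinery of PERP is not reproduced.

```python
import numpy as np

def random_walk_with_restart(adj, seed, restart=0.15, tol=1e-8, max_iter=1000):
    """Visiting probabilities from a seed node; adj[i, j] is the weight of edge j -> i."""
    adj = np.asarray(adj, dtype=float)
    col_sums = adj.sum(axis=0)
    P = np.divide(adj, col_sums, out=np.zeros_like(adj), where=col_sums > 0)  # column-stochastic
    r = np.zeros(adj.shape[0])
    r[seed] = 1.0
    p = r.copy()
    for _ in range(max_iter):
        p_next = (1 - restart) * P @ p + restart * r
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

adj = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
print(np.round(random_walk_with_restart(adj, seed=0), 3))
```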
16. Hierarchical Optimization Method for Federated Learning with Feature Alignment and Decision Fusion
Authors: Ke Li, Xiaofeng Wang, Hu Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 10, pp. 1391-1407 (17 pages).
In the realm of data privacy protection, federated learning aims to collaboratively train a global model. However, heterogeneous data between clients presents challenges, often resulting in slow convergence and inadequate accuracy of the global model. Utilizing shared feature representations alongside customized classifiers for individual clients emerges as a promising personalized solution. Nonetheless, previous research has frequently neglected the integration of global knowledge into local representation learning and the synergy between global and local classifiers, thereby limiting model performance. To tackle these issues, this study proposes a hierarchical optimization method for federated learning with feature alignment and the fusion of classification decisions (FedFCD). FedFCD regularizes the relationship between global and local feature representations to achieve alignment and incorporates decision information from the global classifier, facilitating the late fusion of decision outputs from both global and local classifiers. Additionally, FedFCD employs a hierarchical optimization strategy to flexibly optimize model parameters. Through experiments on the Fashion-MNIST, CIFAR-10 and CIFAR-100 datasets, we demonstrate the effectiveness and superiority of FedFCD. For instance, on the CIFAR-100 dataset, FedFCD exhibited a significant improvement in average test accuracy of 6.83% compared to four outstanding personalized federated learning approaches. Furthermore, extended experiments confirm the robustness of FedFCD across various hyperparameter values.
Keywords: Federated learning; data heterogeneity; feature alignment; decision fusion; hierarchical optimization
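As background for the federated setup in the entry above, the sketch below shows the classic FedAvg aggregation step: a size-weighted average of per-client parameter arrays. This is the plain baseline, not FedFCD's feature-alignment or decision-fusion logic, and the toy shapes are assumptions.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Size-weighted average of parameters; client_weights[k] is a list of arrays for client k."""
    coeffs = np.asarray(client_sizes, dtype=float)
    coeffs /= coeffs.sum()
    n_tensors = len(client_weights[0])
    return [sum(c * w[i] for c, w in zip(coeffs, client_weights)) for i in range(n_tensors)]

# Toy example: two clients, each holding one weight matrix and one bias vector
client_a = [np.ones((2, 2)), np.zeros(2)]
client_b = [3 * np.ones((2, 2)), np.ones(2)]
print(fedavg([client_a, client_b], client_sizes=[100, 300]))   # weighted 1:3 toward client_b
```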
17. Bifurcation analysis and control study of improved full-speed differential model in connected vehicle environment
Authors: 艾文欢, 雷正清, 李丹洋, 方栋梁, 刘大为. Chinese Physics B (SCIE, EI, CAS, CSCD), 2024, Issue 7, pp. 245-266 (22 pages).
In recent years, the traffic congestion problem has become more and more serious, and research on traffic system control has become a new hot spot. Studying the bifurcation characteristics of traffic flow systems and designing control schemes for unstable pivots can alleviate the traffic congestion problem from a new perspective. In this work, the full-speed differential model considering the vehicle network environment is improved in order to adjust the traffic flow from the perspective of bifurcation control, the existence conditions of Hopf bifurcation and saddle-node bifurcation in the model are proved theoretically, and the stability mutation point for the stability of the transportation system is found. For the unstable bifurcation point, a nonlinear system feedback controller is designed by using Chebyshev polynomial approximation and a stochastic feedback control method. The advancement, postponement, and elimination of Hopf bifurcation are achieved without changing the system equilibrium point, and the mutation behavior of the transportation system is controlled so as to alleviate the traffic congestion. The changes in the stability of complex traffic systems are explained through the bifurcation analysis, which can better capture the characteristics of the traffic flow. By adjusting the control parameters in the feedback controllers, the influence of the boundary conditions on the stability of the traffic system is adequately described, and the effects of the unstable focuses and saddle points on the system are suppressed to slow down the traffic flow. In addition, the unstable bifurcation points can be eliminated and the Hopf bifurcation can be controlled to advance, delay, and disappear, so as to realize the control of the stability behavior of the traffic system, which can help to alleviate the traffic congestion and describe the actual traffic phenomena as well.
Keywords: Bifurcation analysis; vehicle queuing; bifurcation control; Hopf bifurcation
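For context on the car-following dynamics analyzed in the entry above, the sketch below integrates the classic full velocity difference (FVD) model on a ring road with a commonly cited optimal-velocity calibration (V1 = 6.75, V2 = 7.91, C1 = 0.13, C2 = 1.57, lc = 5). The parameter values, step size, and plain Euler scheme are assumptions for illustration; the improved connected-vehicle model and the bifurcation controller from the paper are not reproduced.

```python
import numpy as np

def optimal_velocity(dx, v1=6.75, v2=7.91, c1=0.13, c2=1.57, lc=5.0):
    """Commonly used optimal-velocity function V(dx) = V1 + V2 * tanh(C1 * (dx - lc) - C2)."""
    return v1 + v2 * np.tanh(c1 * (dx - lc) - c2)

def fvd_step(x, v, kappa=0.41, lam=0.5, dt=0.1, road_len=1500.0):
    """One Euler step of dv/dt = kappa * (V(dx) - v) + lam * dv on a circular road."""
    dx = np.roll(x, -1) - x
    dx[-1] += road_len                       # wrap-around headway for the last vehicle
    dv = np.roll(v, -1) - v
    acc = kappa * (optimal_velocity(dx) - v) + lam * dv
    return x + v * dt, v + acc * dt

n = 50
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1500.0, n, endpoint=False) + rng.normal(0, 0.5, n)  # perturbed platoon
v = np.full(n, optimal_velocity(1500.0 / n))                             # start at steady speed
for _ in range(2000):
    x, v = fvd_step(x, v)
print("velocity spread after 200 s:", float(v.max() - v.min()))
```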
18. Contemporary Study for Detection of COVID-19 Using Machine Learning with Explainable AI
Authors: Saad Akbar, Humera Azam, Sulaiman Sulmi Almutairi, Omar Alqahtani, Habib Shah, Aliya Aleryani. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 1075-1104 (30 pages).
The prompt spread of COVID-19 has emphasized the necessity for effective and precise diagnostic tools. In this article, a hybrid approach, in terms of both the datasets and the methodology, is proposed: a previously unexplored dataset obtained from a private hospital is utilized for detecting COVID-19, pneumonia, and normal conditions in chest X-ray images (CXIs), coupled with Explainable Artificial Intelligence (XAI). Our study leverages less preprocessing with pre-trained cutting-edge models like InceptionV3, VGG16, and VGG19 that excel in the task of feature extraction. The methodology is further enhanced by the inclusion of the t-SNE (t-Distributed Stochastic Neighbor Embedding) technique for visualizing the extracted image features and Contrast Limited Adaptive Histogram Equalization (CLAHE) to improve images before feature extraction. Additionally, an Attention Mechanism is utilized, which helps clarify how the model makes decisions and builds trust in artificial intelligence (AI) systems. To evaluate the effectiveness of the proposed approach, both benchmark datasets and a private dataset obtained with permissions from Jinnah Postgraduate Medical Center (JPMC) in Karachi, Pakistan, are utilized. In 12 experiments, VGG19 showcased remarkable performance in the hybrid dataset approach, achieving 100% accuracy in COVID-19 vs. pneumonia classification and 97% in distinguishing normal cases. Overall, across all classes, the approach achieved 98% accuracy, demonstrating its efficiency in detecting COVID-19 and differentiating it from other chest disorders (pneumonia and healthy) while also providing insights into the decision-making process of the models.
Keywords: COVID-19 detection; deep neural networks; support vector machine; CXIs; InceptionV3; VGG16; VGG19; t-SNE embedding; CLAHE; attention mechanism; XAI
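One concrete step from the pipeline in the entry above is contrast-limited adaptive histogram equalization (CLAHE) before feature extraction. The OpenCV sketch below applies it to a grayscale chest X-ray; the file name, clip limit, and tile grid are placeholders rather than the paper's settings.

```python
import cv2

img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)      # hypothetical input image
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))    # assumed parameters
enhanced = clahe.apply(img)                                    # contrast-limited equalization
cv2.imwrite("chest_xray_clahe.png", enhanced)                  # image ready for feature extraction
```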
19. For LEO Satellite Networks: Intelligent Interference Sensing and Signal Reconstruction Based on Blind Separation Technology
Authors: Chengjie Li, Lidong Zhu, Zhen Zhang. China Communications (SCIE, CSCD), 2024, Issue 2, pp. 85-95 (11 pages).
In LEO satellite communication networks, the number of satellites has increased sharply and the relative velocity of satellites is very high, so electronic signal aliasing occurs from time to time. These aliased signals degrade the receiving ability of the signal receiver, weaken the signal processing ability, and lower the anti-interference ability of the communication system. Aiming at the above problems, to save communication resources and improve communication efficiency, and considering the irregularity of interference signals, underdetermined blind separation technology can effectively deal with interference sensing and signal reconstruction in this scenario. In order to improve the stability of source signal separation and the security of information transmission, a greedy optimization algorithm can be executed. At the same time, to improve network information transmission efficiency and prevent the algorithm from getting trapped in local optima, low-energy points are deleted during each iteration. Ultimately, simulation experiments validate that the algorithm presented in this paper enhances both the transmission efficiency of the network transmission system and the security of the communication system, achieving interference sensing and signal reconstruction in the LEO satellite communication system.
Keywords: Blind source separation; greedy optimization algorithm; interference sensing; LEO satellite communication networks; signal reconstruction
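To illustrate blind source separation in its simplest (determined) form, the sketch below mixes two toy signals and recovers them with scikit-learn's FastICA. The mixing matrix and signals are made up, and this classic ICA setting is only a reference point; the paper's underdetermined, greedy-optimization approach is not reproduced.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 4000)
s1 = np.sin(2 * np.pi * 40 * t)                    # toy narrowband source
s2 = np.sign(np.sin(2 * np.pi * 7 * t))            # toy square-wave interferer
S = np.c_[s1, s2]
A = np.array([[1.0, 0.6], [0.4, 1.0]])             # hypothetical mixing matrix
X = S @ A.T                                        # observed mixtures at the receiver

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)                       # recovered sources (up to scale/permutation)
print("estimated mixing matrix:\n", np.round(ica.mixing_, 3))
```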
20. RepBoTNet-CESA: An Alzheimer's Disease Computer Aided Diagnosis Method Using Structural Reparameterization BoTNet and Cubic Embedding Self Attention
Authors: Xiabin Zhang, Zhongyi Hu, Lei Xiao, Hui Huang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2879-2905 (27 pages).
Various deep learning models have been proposed for the accurate assisted diagnosis of early-stage Alzheimer's disease (AD). Most studies predominantly employ Convolutional Neural Networks (CNNs), which focus solely on local features and thus encounter difficulties in handling global features. In contrast to natural images, Structural Magnetic Resonance Imaging (sMRI) images exhibit a higher number of channel dimensions. However, during the Position Embedding stage of Multi Head Self Attention (MHSA), the coded information related to the channel dimension is disregarded. To tackle these issues, we propose the RepBoTNet-CESA network, an advanced AD-aided diagnostic model that is capable of learning local and global features simultaneously. It combines the advantages of CNN networks in capturing local information and Transformer networks in integrating global information, reducing computational costs while achieving excellent classification performance. Moreover, it uses the Cubic Embedding Self Attention (CESA) proposed in this paper to incorporate the channel code information, enhancing the classification performance within the Transformer structure. Finally, RepBoTNet-CESA performs well in various AD-aided diagnosis tasks, with an accuracy of 96.58%, precision of 97.26%, and recall of 96.23% in the AD/NC task; an accuracy of 92.75%, precision of 92.84%, and recall of 93.18% in the EMCI/NC task; and an accuracy of 80.97%, precision of 83.86%, and recall of 80.91% in the AD/EMCI/LMCI/NC task. This demonstrates that RepBoTNet-CESA delivers outstanding outcomes in various AD-aided diagnostic tasks. Furthermore, our study has shown that MHSA exhibits superior performance compared to conventional attention mechanisms in enhancing ResNet performance. Besides, the deeper RepBoTNet-CESA network fails to make further progress in AD-aided diagnostic tasks.
Keywords: Alzheimer; CNN; structural reparameterization; multi head self attention; computer aided diagnosis
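Since multi-head self-attention (MHSA) is central to the entry above, here is a compact NumPy sketch of the standard scaled dot-product formulation over a token sequence. The shapes and random weights are illustrative; the cubic embedding variant (CESA) proposed in the paper is not implemented here.

```python
import numpy as np

def multi_head_self_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """x: (seq_len, d_model); W*: (d_model, d_model); d_model must be divisible by n_heads."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads

    def split(m):  # (seq_len, d_model) -> (n_heads, seq_len, d_head)
        return m.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)

    Q, K, V = split(x @ Wq), split(x @ Wk), split(x @ Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)        # scaled dot products
    scores -= scores.max(axis=-1, keepdims=True)               # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)             # softmax over key positions
    heads = weights @ V                                        # per-head context vectors
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

rng = np.random.default_rng(0)
x = rng.normal(size=(10, 64))
Wq, Wk, Wv, Wo = (rng.normal(size=(64, 64)) * 0.1 for _ in range(4))
print(multi_head_self_attention(x, Wq, Wk, Wv, Wo, n_heads=8).shape)   # (10, 64)
```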