The recent development of the Internet of Things(IoTs)resulted in the growth of IoT-based DDoS attacks.The detection of Botnet in IoT systems implements advanced cybersecurity measures to detect and reduce malevolent ...The recent development of the Internet of Things(IoTs)resulted in the growth of IoT-based DDoS attacks.The detection of Botnet in IoT systems implements advanced cybersecurity measures to detect and reduce malevolent botnets in interconnected devices.Anomaly detection models evaluate transmission patterns,network traffic,and device behaviour to detect deviations from usual activities.Machine learning(ML)techniques detect patterns signalling botnet activity,namely sudden traffic increase,unusual command and control patterns,or irregular device behaviour.In addition,intrusion detection systems(IDSs)and signature-based techniques are applied to recognize known malware signatures related to botnets.Various ML and deep learning(DL)techniques have been developed to detect botnet attacks in IoT systems.To overcome security issues in an IoT environment,this article designs a gorilla troops optimizer with DL-enabled botnet attack detection and classification(GTODL-BADC)technique.The GTODL-BADC technique follows feature selection(FS)with optimal DL-based classification for accomplishing security in an IoT environment.For data preprocessing,the min-max data normalization approach is primarily used.The GTODL-BADC technique uses the GTO algorithm to select features and elect optimal feature subsets.Moreover,the multi-head attention-based long short-term memory(MHA-LSTM)technique was applied for botnet detection.Finally,the tree seed algorithm(TSA)was used to select the optimum hyperparameter for the MHA-LSTM method.The experimental validation of the GTODL-BADC technique can be tested on a benchmark dataset.The simulation results highlighted that the GTODL-BADC technique demonstrates promising performance in the botnet detection process.展开更多
The DNS over HTTPS(Hypertext Transfer Protocol Secure)(DoH)is a new technology that encrypts DNS traffic,enhancing the privacy and security of end-users.However,the adoption of DoH is still facing several research cha...The DNS over HTTPS(Hypertext Transfer Protocol Secure)(DoH)is a new technology that encrypts DNS traffic,enhancing the privacy and security of end-users.However,the adoption of DoH is still facing several research challenges,such as ensuring security,compatibility,standardization,performance,privacy,and increasing user awareness.DoH significantly impacts network security,including better end-user privacy and security,challenges for network security professionals,increasing usage of encrypted malware communication,and difficulty adapting DNS-based security measures.Therefore,it is important to understand the impact of DoH on network security and develop newprivacy-preserving techniques to allowthe analysis of DoH traffic without compromising user privacy.This paper provides an in-depth analysis of the effects of DoH on cybersecurity.We discuss various techniques for detecting DoH tunneling and identify essential research challenges that need to be addressed in future security studies.Overall,this paper highlights the need for continued research and development to ensure the effectiveness of DoH as a tool for improving privacy and security.展开更多
The Internet of Things(IoT)is growing rapidly and impacting almost every aspect of our lives,fromwearables and healthcare to security,traffic management,and fleet management systems.This has generated massive volumes ...The Internet of Things(IoT)is growing rapidly and impacting almost every aspect of our lives,fromwearables and healthcare to security,traffic management,and fleet management systems.This has generated massive volumes of data and security,and data privacy risks are increasing with the advancement of technology and network connections.Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection owing to their vulnerability to single-point OF failure.Additionally,conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices.Previous machine learning approaches were also unable to detect denial-of-service(DoS)attacks.This study introduced a novel decentralized and secure framework for blockchain integration.To avoid single-point OF failure,an accredited access control scheme is incorporated,combining blockchain with local peers to record each transaction and verify the signature to access.Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters,managing keys,and revoking users on the blockchain.An innovative contract-based DOS attack mitigation method is also incorporated to effectively validate devices with intelligent contracts as trusted or untrusted,preventing the server from becoming overwhelmed.The proposed framework effectively controls access,safeguards data privacy,and reduces the risk of cyberattacks.The results depict that the suggested framework outperforms the results in terms of accuracy,precision,sensitivity,recall,and F-measure at 96.9%,98.43%,98.8%,98.43%,and 98.4%,respectively.展开更多
Internet of Things (IoT) among of all the technology revolutions has been considered the next evolution of the internet. IoT has become a far more popular area in the computing world. IoT combined a huge number of thi...Internet of Things (IoT) among of all the technology revolutions has been considered the next evolution of the internet. IoT has become a far more popular area in the computing world. IoT combined a huge number of things (devices) that can be connected through the internet. The purpose: this paper aims to explore the concept of the Internet of Things (IoT) generally and outline the main definitions of IoT. The paper also aims to examine and discuss the obstacles and potential benefits of IoT in Saudi universities. Methodology: the researchers reviewed the previous literature and focused on several databases to use the recent studies and research related to the IoT. Then, the researchers also used quantitative methodology to examine the factors affecting the obstacles and potential benefits of IoT. The data were collected by using a questionnaire distributed online among academic staff and a total of 150 participants completed the survey. Finding: the result of this study reveals there are twelve factors that affect the potential benefits of using IoT such as reducing human errors, increasing business income and worker’s productivity. It also shows the eighteen factors which affect obstacles the IoT use, for example sensors’ cost, data privacy, and data security. These factors have the most influence on using IoT in Saudi universities.展开更多
The SubBytes (S-box) transformation is the most crucial operation in the AES algorithm, significantly impacting the implementation performance of AES chips. To design a high-performance S-box, a segmented optimization...The SubBytes (S-box) transformation is the most crucial operation in the AES algorithm, significantly impacting the implementation performance of AES chips. To design a high-performance S-box, a segmented optimization implementation of the S-box is proposed based on the composite field inverse operation in this paper. This proposed S-box implementation is modeled using Verilog language and synthesized using Design Complier software under the premise of ensuring the correctness of the simulation result. The synthesis results show that, compared to several current S-box implementation schemes, the proposed implementation of the S-box significantly reduces the area overhead and critical path delay, then gets higher hardware efficiency. This provides strong support for realizing efficient and compact S-box ASIC designs.展开更多
With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR technology. In response to the problem of not being able to directly render the lighting effect of C...With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR technology. In response to the problem of not being able to directly render the lighting effect of Caideng in digital Caideng scenes, this article analyzes the lighting model. It combines it with the lighting effect of Caideng scenes to design an optimized lighting model algorithm that fuses the bidirectional transmission distribution function (BTDF) model. This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment. And using image optimization processing methods, the immersive experience effect on the VR is enhanced. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, and has a good immersive experience.展开更多
Road traffic monitoring is an imperative topic widely discussed among researchers.Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides.However,aerial images provide the flexibilit...Road traffic monitoring is an imperative topic widely discussed among researchers.Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides.However,aerial images provide the flexibility to use mobile platforms to detect the location and motion of the vehicle over a larger area.To this end,different models have shown the ability to recognize and track vehicles.However,these methods are not mature enough to produce accurate results in complex road scenes.Therefore,this paper presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts.The extracted frames were converted to grayscale,followed by the application of a georeferencing algorithm to embed coordinate information into the images.The masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system.Next,Sobel edge detection combined with Canny edge detection and Hough line transform has been applied for noise reduction.After preprocessing,the blob detection algorithm helped detect the vehicles.Vehicles of varying sizes have been detected by implementing a dynamic thresholding scheme.Detection was done on the first image of every burst.Then,to track vehicles,the model of each vehicle was made to find its matches in the succeeding images using the template matching algorithm.To further improve the tracking accuracy by incorporating motion information,Scale Invariant Feature Transform(SIFT)features have been used to find the best possible match among multiple matches.An accuracy rate of 87%for detection and 80%accuracy for tracking in the A1 Motorway Netherland dataset has been achieved.For the Vehicle Aerial Imaging from Drone(VAID)dataset,an accuracy rate of 86%for detection and 78%accuracy for tracking has been achieved.展开更多
In the last decade, technical advancements and faster Internet speeds have also led to an increasing number ofmobile devices and users. Thus, all contributors to society, whether young or old members, can use these mo...In the last decade, technical advancements and faster Internet speeds have also led to an increasing number ofmobile devices and users. Thus, all contributors to society, whether young or old members, can use these mobileapps. The use of these apps eases our daily lives, and all customers who need any type of service can accessit easily, comfortably, and efficiently through mobile apps. Particularly, Saudi Arabia greatly depends on digitalservices to assist people and visitors. Such mobile devices are used in organizing daily work schedules and services,particularly during two large occasions, Umrah and Hajj. However, pilgrims encounter mobile app issues such asslowness, conflict, unreliability, or user-unfriendliness. Pilgrims comment on these issues on mobile app platformsthrough reviews of their experiences with these digital services. Scholars have made several attempts to solve suchmobile issues by reporting bugs or non-functional requirements by utilizing user comments.However, solving suchissues is a great challenge, and the issues still exist. Therefore, this study aims to propose a hybrid deep learningmodel to classify and predict mobile app software issues encountered by millions of pilgrims during the Hajj andUmrah periods from the user perspective. Firstly, a dataset was constructed using user-generated comments fromrelevant mobile apps using natural language processing methods, including information extraction, the annotationprocess, and pre-processing steps, considering a multi-class classification problem. Then, several experimentswere conducted using common machine learning classifiers, Artificial Neural Networks (ANN), Long Short-TermMemory (LSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) architectures, toexamine the performance of the proposed model. Results show 96% in F1-score and accuracy, and the proposedmodel outperformed the mentioned models.展开更多
The segmentation of head and neck(H&N)tumors in dual Positron Emission Tomography/Computed Tomogra-phy(PET/CT)imaging is a critical task in medical imaging,providing essential information for diagnosis,treatment p...The segmentation of head and neck(H&N)tumors in dual Positron Emission Tomography/Computed Tomogra-phy(PET/CT)imaging is a critical task in medical imaging,providing essential information for diagnosis,treatment planning,and outcome prediction.Motivated by the need for more accurate and robust segmentation methods,this study addresses key research gaps in the application of deep learning techniques to multimodal medical images.Specifically,it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution.The primary research questions guiding this study are:(1)How can the integration of convolutional neural networks(CNNs)and transformer networks enhance segmentation accuracy in dual PET/CT imaging?(2)What are the comparative advantages of 2D,2.5D,and 3D model configurations in this context?To answer these questions,we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers.Our proposed methodology involved a comprehensive preprocessing pipeline,including normalization,contrast enhancement,and resampling,followed by segmentation using 2D,2.5D,and 3D UNet Transformer models.The models were trained and tested on three diverse datasets:HeckTor2022,AutoPET2023,and SegRap2023.Performance was assessed using metrics such as Dice Similarity Coefficient,Jaccard Index,Average Surface Distance(ASD),and Relative Absolute Volume Difference(RAVD).The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics,achieving the highest Dice and Jaccard values,indicating superior segmentation accuracy.For instance,on the HeckTor2022 dataset,the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705,surpassing other model configurations.The 3D model showed strong boundary delineation performance but exhibited variability across datasets,while the 2D model,although effective,generally underperformed compared to its 2.5D and 3D counterparts.Compared to related literature,our study confirms the advantages of incorporating additional spatial context,as seen in the improved performance of the 2.5D model.This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.展开更多
Automatically detecting and locating remote occlusion small objects from the images of complex traffic environments is a valuable and challenging research.Since the boundary box location is not sufficiently accurate a...Automatically detecting and locating remote occlusion small objects from the images of complex traffic environments is a valuable and challenging research.Since the boundary box location is not sufficiently accurate and it is difficult to distinguish overlapping and occluded objects,the authors propose a network model with a second-order term attention mechanism and occlusion loss.First,the backbone network is built on CSPDarkNet53.Then a method is designed for the feature extraction network based on an item-wise attention mechanism,which uses the filtered weighted feature vector to replace the original residual fusion and adds a second-order term to reduce the information loss in the process of fusion and accelerate the convergence of the model.Finally,an objected occlusion regression loss function is studied to reduce the problems of missed detections caused by dense objects.Sufficient experimental results demonstrate that the authors’method achieved state-of-the-art performance without reducing the detection speed.The mAP@.5 of the method is 85.8%on the Foggy_cityscapes dataset and the mAP@.5 of the method is 97.8%on the KITTI dataset.展开更多
Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same g...Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same granularity,segmenting them into different granularity events can effectively mitigate the impact of varying time scales on prediction accuracy.However,these events of varying granularity frequently intersect with each other,which may possess unequal durations.Even minor differences can result in significant errors when matching time series with future trends.Besides,directly using matched events but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy.Therefore,this paper proposes a short-term forecasting method for time series based on a multi-granularity event,MGE-SP(multi-granularity event-based short-termprediction).First,amethodological framework for MGE-SP established guides the implementation steps.The framework consists of three key steps,including multi-granularity event matching based on the LTF(latest time first)strategy,multi-granularity event alignment using a piecewise aggregate approximation based on the compression ratio,and a short-term prediction model based on XGBoost.The data from a nationwide online car-hailing service in China ensures the method’s reliability.The average RMSE(root mean square error)and MAE(mean absolute error)of the proposed method are 3.204 and 2.360,lower than the respective values of 4.056 and 3.101 obtained using theARIMA(autoregressive integratedmoving average)method,as well as the values of 4.278 and 2.994 obtained using k-means-SVR(support vector regression)method.The other experiment is conducted on stock data froma public data set.The proposed method achieved an average RMSE and MAE of 0.836 and 0.696,lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method,as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.展开更多
In order to enhance the accuracy of Air Traffic Control(ATC)cybersecurity attack detection,in this paper,a new clustering detection method is designed for air traffic control network security attacks.The feature set f...In order to enhance the accuracy of Air Traffic Control(ATC)cybersecurity attack detection,in this paper,a new clustering detection method is designed for air traffic control network security attacks.The feature set for ATC cybersecurity attacks is constructed by setting the feature states,adding recursive features,and determining the feature criticality.The expected information gain and entropy of the feature data are computed to determine the information gain of the feature data and reduce the interference of similar feature data.An autoencoder is introduced into the AI(artificial intelligence)algorithm to encode and decode the characteristics of ATC network security attack behavior to reduce the dimensionality of the ATC network security attack behavior data.Based on the above processing,an unsupervised learning algorithm for clustering detection of ATC network security attacks is designed.First,determine the distance between the clustering clusters of ATC network security attack behavior characteristics,calculate the clustering threshold,and construct the initial clustering center.Then,the new average value of all feature objects in each cluster is recalculated as the new cluster center.Second,it traverses all objects in a cluster of ATC network security attack behavior feature data.Finally,the cluster detection of ATC network security attack behavior is completed by the computation of objective functions.The experiment took three groups of experimental attack behavior data sets as the test object,and took the detection rate,false detection rate and recall rate as the test indicators,and selected three similar methods for comparative test.The experimental results show that the detection rate of this method is about 98%,the false positive rate is below 1%,and the recall rate is above 97%.Research shows that this method can improve the detection performance of security attacks in air traffic control network.展开更多
Cloud computing has emerged as a viable alternative to traditional computing infrastructures,offering various benefits.However,the adoption of cloud storage poses significant risks to data secrecy and integrity.This a...Cloud computing has emerged as a viable alternative to traditional computing infrastructures,offering various benefits.However,the adoption of cloud storage poses significant risks to data secrecy and integrity.This article presents an effective mechanism to preserve the secrecy and integrity of data stored on the public cloud by leveraging blockchain technology,smart contracts,and cryptographic primitives.The proposed approach utilizes a Solidity-based smart contract as an auditor for maintaining and verifying the integrity of outsourced data.To preserve data secrecy,symmetric encryption systems are employed to encrypt user data before outsourcing it.An extensive performance analysis is conducted to illustrate the efficiency of the proposed mechanism.Additionally,a rigorous assessment is conducted to ensure that the developed smart contract is free from vulnerabilities and to measure its associated running costs.The security analysis of the proposed system confirms that our approach can securely maintain the confidentiality and integrity of cloud storage,even in the presence of malicious entities.The proposed mechanism contributes to enhancing data security in cloud computing environments and can be used as a foundation for developing more secure cloud storage systems.展开更多
The cloud platform has limited defense resources to fully protect the edge servers used to process crowd sensing data in Internet of Things.To guarantee the network's overall security,we present a network defense ...The cloud platform has limited defense resources to fully protect the edge servers used to process crowd sensing data in Internet of Things.To guarantee the network's overall security,we present a network defense resource allocation with multi-armed bandits to maximize the network's overall benefit.Firstly,we propose the method for dynamic setting of node defense resource thresholds to obtain the defender(attacker)benefit function of edge servers(nodes)and distribution.Secondly,we design a defense resource sharing mechanism for neighboring nodes to obtain the defense capability of nodes.Subsequently,we use the decomposability and Lipschitz conti-nuity of the defender's total expected utility to reduce the difference between the utility's discrete and continuous arms and analyze the difference theoretically.Finally,experimental results show that the method maximizes the defender's total expected utility and reduces the difference between the discrete and continuous arms of the utility.展开更多
The exercise recommendation system is emerging as a promising application in online learning scenarios,providing personalized recommendations to assist students with explicit learning directions.Existing solutions gen...The exercise recommendation system is emerging as a promising application in online learning scenarios,providing personalized recommendations to assist students with explicit learning directions.Existing solutions generally follow a collaborative filtering paradigm,while the implicit connections between students(exercises)have been largely ignored.In this study,we aim to propose an exercise recommendation paradigm that can reveal the latent connections between student-student(exercise-exercise).Specifically,a new framework was proposed,namely personalized exercise recommendation with student and exercise portraits(PERP).It consists of three sequential and interdependent modules:Collaborative student exercise graph(CSEG)construction,joint random walk,and recommendation list optimization.Technically,CSEG is created as a unified heterogeneous graph with students’response behaviors and student(exercise)relationships.Then,a joint random walk to take full advantage of the spectral properties of nearly uncoupled Markov chains is performed on CSEG,which allows for full exploration of both similar exercises that students have finished and connections between students(exercises)with similar portraits.Finally,we propose to optimize the recommendation list to obtain different exercise suggestions.After analyses of two public datasets,the results demonstrated that PERP can satisfy novelty,accuracy,and diversity.展开更多
In the realm of data privacy protection,federated learning aims to collaboratively train a global model.However,heterogeneous data between clients presents challenges,often resulting in slow convergence and inadequate...In the realm of data privacy protection,federated learning aims to collaboratively train a global model.However,heterogeneous data between clients presents challenges,often resulting in slow convergence and inadequate accuracy of the global model.Utilizing shared feature representations alongside customized classifiers for individual clients emerges as a promising personalized solution.Nonetheless,previous research has frequently neglected the integration of global knowledge into local representation learning and the synergy between global and local classifiers,thereby limiting model performance.To tackle these issues,this study proposes a hierarchical optimization method for federated learning with feature alignment and the fusion of classification decisions(FedFCD).FedFCD regularizes the relationship between global and local feature representations to achieve alignment and incorporates decision information from the global classifier,facilitating the late fusion of decision outputs from both global and local classifiers.Additionally,FedFCD employs a hierarchical optimization strategy to flexibly optimize model parameters.Through experiments on the Fashion-MNIST,CIFAR-10 and CIFAR-100 datasets,we demonstrate the effectiveness and superiority of FedFCD.For instance,on the CIFAR-100 dataset,FedFCD exhibited a significant improvement in average test accuracy by 6.83%compared to four outstanding personalized federated learning approaches.Furthermore,extended experiments confirm the robustness of FedFCD across various hyperparameter values.展开更多
In recent years, the traffic congestion problem has become more and more serious, and the research on traffic system control has become a new hot spot. Studying the bifurcation characteristics of traffic flow systems ...In recent years, the traffic congestion problem has become more and more serious, and the research on traffic system control has become a new hot spot. Studying the bifurcation characteristics of traffic flow systems and designing control schemes for unstable pivots can alleviate the traffic congestion problem from a new perspective. In this work, the full-speed differential model considering the vehicle network environment is improved in order to adjust the traffic flow from the perspective of bifurcation control, the existence conditions of Hopf bifurcation and saddle-node bifurcation in the model are proved theoretically, and the stability mutation point for the stability of the transportation system is found. For the unstable bifurcation point, a nonlinear system feedback controller is designed by using Chebyshev polynomial approximation and stochastic feedback control method. The advancement, postponement, and elimination of Hopf bifurcation are achieved without changing the system equilibrium point, and the mutation behavior of the transportation system is controlled so as to alleviate the traffic congestion. The changes in the stability of complex traffic systems are explained through the bifurcation analysis, which can better capture the characteristics of the traffic flow. By adjusting the control parameters in the feedback controllers, the influence of the boundary conditions on the stability of the traffic system is adequately described, and the effects of the unstable focuses and saddle points on the system are suppressed to slow down the traffic flow. In addition, the unstable bifurcation points can be eliminated and the Hopf bifurcation can be controlled to advance, delay, and disappear,so as to realize the control of the stability behavior of the traffic system, which can help to alleviate the traffic congestion and describe the actual traffic phenomena as well.展开更多
The prompt spread of COVID-19 has emphasized the necessity for effective and precise diagnostic tools.In this article,a hybrid approach in terms of datasets as well as the methodology by utilizing a previously unexplo...The prompt spread of COVID-19 has emphasized the necessity for effective and precise diagnostic tools.In this article,a hybrid approach in terms of datasets as well as the methodology by utilizing a previously unexplored dataset obtained from a private hospital for detecting COVID-19,pneumonia,and normal conditions in chest X-ray images(CXIs)is proposed coupled with Explainable Artificial Intelligence(XAI).Our study leverages less preprocessing with pre-trained cutting-edge models like InceptionV3,VGG16,and VGG19 that excel in the task of feature extraction.The methodology is further enhanced by the inclusion of the t-SNE(t-Distributed Stochastic Neighbor Embedding)technique for visualizing the extracted image features and Contrast Limited Adaptive Histogram Equalization(CLAHE)to improve images before extraction of features.Additionally,an AttentionMechanism is utilized,which helps clarify how the modelmakes decisions,which builds trust in artificial intelligence(AI)systems.To evaluate the effectiveness of the proposed approach,both benchmark datasets and a private dataset obtained with permissions from Jinnah PostgraduateMedical Center(JPMC)in Karachi,Pakistan,are utilized.In 12 experiments,VGG19 showcased remarkable performance in the hybrid dataset approach,achieving 100%accuracy in COVID-19 vs.pneumonia classification and 97%in distinguishing normal cases.Overall,across all classes,the approach achieved 98%accuracy,demonstrating its efficiency in detecting COVID-19 and differentiating it fromother chest disorders(Pneumonia and healthy)while also providing insights into the decision-making process of the models.展开更多
In LEO satellite communication networks,the number of satellites has increased sharply, the relative velocity of satellites is very fast, then electronic signal aliasing occurs from time to time. Those aliasing signal...In LEO satellite communication networks,the number of satellites has increased sharply, the relative velocity of satellites is very fast, then electronic signal aliasing occurs from time to time. Those aliasing signals make the receiving ability of the signal receiver worse, the signal processing ability weaker,and the anti-interference ability of the communication system lower. Aiming at the above problems, to save communication resources and improve communication efficiency, and considering the irregularity of interference signals, the underdetermined blind separation technology can effectively deal with the problem of interference sensing and signal reconstruction in this scenario. In order to improve the stability of source signal separation and the security of information transmission, a greedy optimization algorithm can be executed. At the same time, to improve network information transmission efficiency and prevent algorithms from getting trapped in local optima, delete low-energy points during each iteration process. Ultimately, simulation experiments validate that the algorithm presented in this paper enhances both the transmission efficiency of the network transmission system and the security of the communication system, achieving the process of interference sensing and signal reconstruction in the LEO satellite communication system.展开更多
Various deep learning models have been proposed for the accurate assisted diagnosis of early-stage Alzheimer’s disease(AD).Most studies predominantly employ Convolutional Neural Networks(CNNs),which focus solely on l...Various deep learning models have been proposed for the accurate assisted diagnosis of early-stage Alzheimer’s disease(AD).Most studies predominantly employ Convolutional Neural Networks(CNNs),which focus solely on local features,thus encountering difficulties in handling global features.In contrast to natural images,Structural Magnetic Resonance Imaging(sMRI)images exhibit a higher number of channel dimensions.However,during the Position Embedding stage ofMulti Head Self Attention(MHSA),the coded information related to the channel dimension is disregarded.To tackle these issues,we propose theRepBoTNet-CESA network,an advanced AD-aided diagnostic model that is capable of learning local and global features simultaneously.It combines the advantages of CNN networks in capturing local information and Transformer networks in integrating global information,reducing computational costs while achieving excellent classification performance.Moreover,it uses the Cubic Embedding Self Attention(CESA)proposed in this paper to incorporate the channel code information,enhancing the classification performance within the Transformer structure.Finally,the RepBoTNet-CESA performs well in various AD-aided diagnosis tasks,with an accuracy of 96.58%,precision of 97.26%,and recall of 96.23%in the AD/NC task;an accuracy of 92.75%,precision of 92.84%,and recall of 93.18%in the EMCI/NC task;and an accuracy of 80.97%,precision of 83.86%,and recall of 80.91%in the AD/EMCI/LMCI/NC task.This demonstrates that RepBoTNet-CESA delivers outstanding outcomes in various AD-aided diagnostic tasks.Furthermore,our study has shown that MHSA exhibits superior performance compared to conventional attention mechanisms in enhancing ResNet performance.Besides,the Deeper RepBoTNet-CESA network fails to make further progress in AD-aided diagnostic tasks.展开更多
文摘The recent development of the Internet of Things(IoTs)resulted in the growth of IoT-based DDoS attacks.The detection of Botnet in IoT systems implements advanced cybersecurity measures to detect and reduce malevolent botnets in interconnected devices.Anomaly detection models evaluate transmission patterns,network traffic,and device behaviour to detect deviations from usual activities.Machine learning(ML)techniques detect patterns signalling botnet activity,namely sudden traffic increase,unusual command and control patterns,or irregular device behaviour.In addition,intrusion detection systems(IDSs)and signature-based techniques are applied to recognize known malware signatures related to botnets.Various ML and deep learning(DL)techniques have been developed to detect botnet attacks in IoT systems.To overcome security issues in an IoT environment,this article designs a gorilla troops optimizer with DL-enabled botnet attack detection and classification(GTODL-BADC)technique.The GTODL-BADC technique follows feature selection(FS)with optimal DL-based classification for accomplishing security in an IoT environment.For data preprocessing,the min-max data normalization approach is primarily used.The GTODL-BADC technique uses the GTO algorithm to select features and elect optimal feature subsets.Moreover,the multi-head attention-based long short-term memory(MHA-LSTM)technique was applied for botnet detection.Finally,the tree seed algorithm(TSA)was used to select the optimum hyperparameter for the MHA-LSTM method.The experimental validation of the GTODL-BADC technique can be tested on a benchmark dataset.The simulation results highlighted that the GTODL-BADC technique demonstrates promising performance in the botnet detection process.
基金Deanship of Scientific Research at King Khalid University for funding this work through a large group Research Project under Grant Number RGP.2/373/45.
文摘The DNS over HTTPS(Hypertext Transfer Protocol Secure)(DoH)is a new technology that encrypts DNS traffic,enhancing the privacy and security of end-users.However,the adoption of DoH is still facing several research challenges,such as ensuring security,compatibility,standardization,performance,privacy,and increasing user awareness.DoH significantly impacts network security,including better end-user privacy and security,challenges for network security professionals,increasing usage of encrypted malware communication,and difficulty adapting DNS-based security measures.Therefore,it is important to understand the impact of DoH on network security and develop newprivacy-preserving techniques to allowthe analysis of DoH traffic without compromising user privacy.This paper provides an in-depth analysis of the effects of DoH on cybersecurity.We discuss various techniques for detecting DoH tunneling and identify essential research challenges that need to be addressed in future security studies.Overall,this paper highlights the need for continued research and development to ensure the effectiveness of DoH as a tool for improving privacy and security.
文摘The Internet of Things(IoT)is growing rapidly and impacting almost every aspect of our lives,fromwearables and healthcare to security,traffic management,and fleet management systems.This has generated massive volumes of data and security,and data privacy risks are increasing with the advancement of technology and network connections.Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection owing to their vulnerability to single-point OF failure.Additionally,conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices.Previous machine learning approaches were also unable to detect denial-of-service(DoS)attacks.This study introduced a novel decentralized and secure framework for blockchain integration.To avoid single-point OF failure,an accredited access control scheme is incorporated,combining blockchain with local peers to record each transaction and verify the signature to access.Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters,managing keys,and revoking users on the blockchain.An innovative contract-based DOS attack mitigation method is also incorporated to effectively validate devices with intelligent contracts as trusted or untrusted,preventing the server from becoming overwhelmed.The proposed framework effectively controls access,safeguards data privacy,and reduces the risk of cyberattacks.The results depict that the suggested framework outperforms the results in terms of accuracy,precision,sensitivity,recall,and F-measure at 96.9%,98.43%,98.8%,98.43%,and 98.4%,respectively.
文摘Internet of Things (IoT) among of all the technology revolutions has been considered the next evolution of the internet. IoT has become a far more popular area in the computing world. IoT combined a huge number of things (devices) that can be connected through the internet. The purpose: this paper aims to explore the concept of the Internet of Things (IoT) generally and outline the main definitions of IoT. The paper also aims to examine and discuss the obstacles and potential benefits of IoT in Saudi universities. Methodology: the researchers reviewed the previous literature and focused on several databases to use the recent studies and research related to the IoT. Then, the researchers also used quantitative methodology to examine the factors affecting the obstacles and potential benefits of IoT. The data were collected by using a questionnaire distributed online among academic staff and a total of 150 participants completed the survey. Finding: the result of this study reveals there are twelve factors that affect the potential benefits of using IoT such as reducing human errors, increasing business income and worker’s productivity. It also shows the eighteen factors which affect obstacles the IoT use, for example sensors’ cost, data privacy, and data security. These factors have the most influence on using IoT in Saudi universities.
文摘The SubBytes (S-box) transformation is the most crucial operation in the AES algorithm, significantly impacting the implementation performance of AES chips. To design a high-performance S-box, a segmented optimization implementation of the S-box is proposed based on the composite field inverse operation in this paper. This proposed S-box implementation is modeled using Verilog language and synthesized using Design Complier software under the premise of ensuring the correctness of the simulation result. The synthesis results show that, compared to several current S-box implementation schemes, the proposed implementation of the S-box significantly reduces the area overhead and critical path delay, then gets higher hardware efficiency. This provides strong support for realizing efficient and compact S-box ASIC designs.
文摘With the development of virtual reality (VR) technology, more and more industries are beginning to integrate with VR technology. In response to the problem of not being able to directly render the lighting effect of Caideng in digital Caideng scenes, this article analyzes the lighting model. It combines it with the lighting effect of Caideng scenes to design an optimized lighting model algorithm that fuses the bidirectional transmission distribution function (BTDF) model. This algorithm can efficiently render the lighting effect of Caideng models in a virtual environment. And using image optimization processing methods, the immersive experience effect on the VR is enhanced. Finally, a Caideng roaming interactive system was designed based on this method. The results show that the frame rate of the system is stable during operation, maintained above 60 fps, and has a good immersive experience.
基金supported by a grant from the Basic Science Research Program through the National Research Foundation(NRF)(2021R1F1A1063634)funded by the Ministry of Science and ICT(MSIT),Republic of KoreaThe authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program Grant Code(NU/RG/SERC/13/40)+2 种基金Also,the authors are thankful to Prince Satam bin Abdulaziz University for supporting this study via funding from Prince Satam bin Abdulaziz University project number(PSAU/2024/R/1445)This work was also supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2023R54)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Road traffic monitoring is an imperative topic widely discussed among researchers.Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides.However,aerial images provide the flexibility to use mobile platforms to detect the location and motion of the vehicle over a larger area.To this end,different models have shown the ability to recognize and track vehicles.However,these methods are not mature enough to produce accurate results in complex road scenes.Therefore,this paper presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts.The extracted frames were converted to grayscale,followed by the application of a georeferencing algorithm to embed coordinate information into the images.The masking technique eliminated irrelevant data and reduced the computational cost of the overall monitoring system.Next,Sobel edge detection combined with Canny edge detection and Hough line transform has been applied for noise reduction.After preprocessing,the blob detection algorithm helped detect the vehicles.Vehicles of varying sizes have been detected by implementing a dynamic thresholding scheme.Detection was done on the first image of every burst.Then,to track vehicles,the model of each vehicle was made to find its matches in the succeeding images using the template matching algorithm.To further improve the tracking accuracy by incorporating motion information,Scale Invariant Feature Transform(SIFT)features have been used to find the best possible match among multiple matches.An accuracy rate of 87%for detection and 80%accuracy for tracking in the A1 Motorway Netherland dataset has been achieved.For the Vehicle Aerial Imaging from Drone(VAID)dataset,an accuracy rate of 86%for detection and 78%accuracy for tracking has been achieved.
文摘In the last decade, technical advancements and faster Internet speeds have also led to an increasing number ofmobile devices and users. Thus, all contributors to society, whether young or old members, can use these mobileapps. The use of these apps eases our daily lives, and all customers who need any type of service can accessit easily, comfortably, and efficiently through mobile apps. Particularly, Saudi Arabia greatly depends on digitalservices to assist people and visitors. Such mobile devices are used in organizing daily work schedules and services,particularly during two large occasions, Umrah and Hajj. However, pilgrims encounter mobile app issues such asslowness, conflict, unreliability, or user-unfriendliness. Pilgrims comment on these issues on mobile app platformsthrough reviews of their experiences with these digital services. Scholars have made several attempts to solve suchmobile issues by reporting bugs or non-functional requirements by utilizing user comments.However, solving suchissues is a great challenge, and the issues still exist. Therefore, this study aims to propose a hybrid deep learningmodel to classify and predict mobile app software issues encountered by millions of pilgrims during the Hajj andUmrah periods from the user perspective. Firstly, a dataset was constructed using user-generated comments fromrelevant mobile apps using natural language processing methods, including information extraction, the annotationprocess, and pre-processing steps, considering a multi-class classification problem. Then, several experimentswere conducted using common machine learning classifiers, Artificial Neural Networks (ANN), Long Short-TermMemory (LSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) architectures, toexamine the performance of the proposed model. Results show 96% in F1-score and accuracy, and the proposedmodel outperformed the mentioned models.
基金supported by Scientific Research Deanship at University of Ha’il,Saudi Arabia through project number RG-23137.
文摘The segmentation of head and neck(H&N)tumors in dual Positron Emission Tomography/Computed Tomogra-phy(PET/CT)imaging is a critical task in medical imaging,providing essential information for diagnosis,treatment planning,and outcome prediction.Motivated by the need for more accurate and robust segmentation methods,this study addresses key research gaps in the application of deep learning techniques to multimodal medical images.Specifically,it investigates the limitations of existing 2D and 3D models in capturing complex tumor structures and proposes an innovative 2.5D UNet Transformer model as a solution.The primary research questions guiding this study are:(1)How can the integration of convolutional neural networks(CNNs)and transformer networks enhance segmentation accuracy in dual PET/CT imaging?(2)What are the comparative advantages of 2D,2.5D,and 3D model configurations in this context?To answer these questions,we aimed to develop and evaluate advanced deep-learning models that leverage the strengths of both CNNs and transformers.Our proposed methodology involved a comprehensive preprocessing pipeline,including normalization,contrast enhancement,and resampling,followed by segmentation using 2D,2.5D,and 3D UNet Transformer models.The models were trained and tested on three diverse datasets:HeckTor2022,AutoPET2023,and SegRap2023.Performance was assessed using metrics such as Dice Similarity Coefficient,Jaccard Index,Average Surface Distance(ASD),and Relative Absolute Volume Difference(RAVD).The findings demonstrate that the 2.5D UNet Transformer model consistently outperformed the 2D and 3D models across most metrics,achieving the highest Dice and Jaccard values,indicating superior segmentation accuracy.For instance,on the HeckTor2022 dataset,the 2.5D model achieved a Dice score of 81.777 and a Jaccard index of 0.705,surpassing other model configurations.The 3D model showed strong boundary delineation performance but exhibited variability across datasets,while the 2D model,although effective,generally underperformed compared to its 2.5D and 3D counterparts.Compared to related literature,our study confirms the advantages of incorporating additional spatial context,as seen in the improved performance of the 2.5D model.This research fills a significant gap by providing a detailed comparative analysis of different model dimensions and their impact on H&N segmentation accuracy in dual PET/CT imaging.
基金Doctoral Talent Training Project of Chongqing University of Posts and Telecommunications,Grant/Award Number:BYJS202007Natural Science Foundation of Chongqing,Grant/Award Number:cstc2021jcyj-msxmX0941+1 种基金National Natural Science Foundation of China,Grant/Award Number:62176034Scientific and Technological Research Program of Chongqing Municipal Education Commission,Grant/Award Number:KJQN202101901。
文摘Automatically detecting and locating remote occlusion small objects from the images of complex traffic environments is a valuable and challenging research.Since the boundary box location is not sufficiently accurate and it is difficult to distinguish overlapping and occluded objects,the authors propose a network model with a second-order term attention mechanism and occlusion loss.First,the backbone network is built on CSPDarkNet53.Then a method is designed for the feature extraction network based on an item-wise attention mechanism,which uses the filtered weighted feature vector to replace the original residual fusion and adds a second-order term to reduce the information loss in the process of fusion and accelerate the convergence of the model.Finally,an objected occlusion regression loss function is studied to reduce the problems of missed detections caused by dense objects.Sufficient experimental results demonstrate that the authors’method achieved state-of-the-art performance without reducing the detection speed.The mAP@.5 of the method is 85.8%on the Foggy_cityscapes dataset and the mAP@.5 of the method is 97.8%on the KITTI dataset.
基金funded by the Fujian Province Science and Technology Plan,China(Grant Number 2019H0017).
文摘Accurate forecasting of time series is crucial across various domains.Many prediction tasks rely on effectively segmenting,matching,and time series data alignment.For instance,regardless of time series with the same granularity,segmenting them into different granularity events can effectively mitigate the impact of varying time scales on prediction accuracy.However,these events of varying granularity frequently intersect with each other,which may possess unequal durations.Even minor differences can result in significant errors when matching time series with future trends.Besides,directly using matched events but unaligned events as state vectors in machine learning-based prediction models can lead to insufficient prediction accuracy.Therefore,this paper proposes a short-term forecasting method for time series based on a multi-granularity event,MGE-SP(multi-granularity event-based short-termprediction).First,amethodological framework for MGE-SP established guides the implementation steps.The framework consists of three key steps,including multi-granularity event matching based on the LTF(latest time first)strategy,multi-granularity event alignment using a piecewise aggregate approximation based on the compression ratio,and a short-term prediction model based on XGBoost.The data from a nationwide online car-hailing service in China ensures the method’s reliability.The average RMSE(root mean square error)and MAE(mean absolute error)of the proposed method are 3.204 and 2.360,lower than the respective values of 4.056 and 3.101 obtained using theARIMA(autoregressive integratedmoving average)method,as well as the values of 4.278 and 2.994 obtained using k-means-SVR(support vector regression)method.The other experiment is conducted on stock data froma public data set.The proposed method achieved an average RMSE and MAE of 0.836 and 0.696,lower than the respective values of 1.019 and 0.844 obtained using the ARIMA method,as well as the values of 1.350 and 1.172 obtained using the k-means-SVR method.
基金National Natural Science Foundation of China(U2133208,U20A20161)National Natural Science Foundation of China(No.62273244)Sichuan Science and Technology Program(No.2022YFG0180).
文摘In order to enhance the accuracy of Air Traffic Control(ATC)cybersecurity attack detection,in this paper,a new clustering detection method is designed for air traffic control network security attacks.The feature set for ATC cybersecurity attacks is constructed by setting the feature states,adding recursive features,and determining the feature criticality.The expected information gain and entropy of the feature data are computed to determine the information gain of the feature data and reduce the interference of similar feature data.An autoencoder is introduced into the AI(artificial intelligence)algorithm to encode and decode the characteristics of ATC network security attack behavior to reduce the dimensionality of the ATC network security attack behavior data.Based on the above processing,an unsupervised learning algorithm for clustering detection of ATC network security attacks is designed.First,determine the distance between the clustering clusters of ATC network security attack behavior characteristics,calculate the clustering threshold,and construct the initial clustering center.Then,the new average value of all feature objects in each cluster is recalculated as the new cluster center.Second,it traverses all objects in a cluster of ATC network security attack behavior feature data.Finally,the cluster detection of ATC network security attack behavior is completed by the computation of objective functions.The experiment took three groups of experimental attack behavior data sets as the test object,and took the detection rate,false detection rate and recall rate as the test indicators,and selected three similar methods for comparative test.The experimental results show that the detection rate of this method is about 98%,the false positive rate is below 1%,and the recall rate is above 97%.Research shows that this method can improve the detection performance of security attacks in air traffic control network.
文摘Cloud computing has emerged as a viable alternative to traditional computing infrastructures,offering various benefits.However,the adoption of cloud storage poses significant risks to data secrecy and integrity.This article presents an effective mechanism to preserve the secrecy and integrity of data stored on the public cloud by leveraging blockchain technology,smart contracts,and cryptographic primitives.The proposed approach utilizes a Solidity-based smart contract as an auditor for maintaining and verifying the integrity of outsourced data.To preserve data secrecy,symmetric encryption systems are employed to encrypt user data before outsourcing it.An extensive performance analysis is conducted to illustrate the efficiency of the proposed mechanism.Additionally,a rigorous assessment is conducted to ensure that the developed smart contract is free from vulnerabilities and to measure its associated running costs.The security analysis of the proposed system confirms that our approach can securely maintain the confidentiality and integrity of cloud storage,even in the presence of malicious entities.The proposed mechanism contributes to enhancing data security in cloud computing environments and can be used as a foundation for developing more secure cloud storage systems.
基金supported by the National Natural Science Foundation of China(NSFC)[grant numbers 62172377,61872205]the Shandong Provincial Natural Science Foundation[grant number ZR2019MF018]the Startup Research Foundation for Distinguished Scholars No.202112016.
文摘The cloud platform has limited defense resources to fully protect the edge servers used to process crowd sensing data in Internet of Things.To guarantee the network's overall security,we present a network defense resource allocation with multi-armed bandits to maximize the network's overall benefit.Firstly,we propose the method for dynamic setting of node defense resource thresholds to obtain the defender(attacker)benefit function of edge servers(nodes)and distribution.Secondly,we design a defense resource sharing mechanism for neighboring nodes to obtain the defense capability of nodes.Subsequently,we use the decomposability and Lipschitz conti-nuity of the defender's total expected utility to reduce the difference between the utility's discrete and continuous arms and analyze the difference theoretically.Finally,experimental results show that the method maximizes the defender's total expected utility and reduces the difference between the discrete and continuous arms of the utility.
基金supported by the Industrial Support Project of Gansu Colleges under Grant No.2022CYZC-11Gansu Natural Science Foundation Project under Grant No.21JR7RA114+1 种基金National Natural Science Foundation of China under Grants No.622760736,No.1762078,and No.61363058Northwest Normal University Teachers Research Capacity Promotion Plan under Grant No.NWNU-LKQN2019-2.
文摘The exercise recommendation system is emerging as a promising application in online learning scenarios,providing personalized recommendations to assist students with explicit learning directions.Existing solutions generally follow a collaborative filtering paradigm,while the implicit connections between students(exercises)have been largely ignored.In this study,we aim to propose an exercise recommendation paradigm that can reveal the latent connections between student-student(exercise-exercise).Specifically,a new framework was proposed,namely personalized exercise recommendation with student and exercise portraits(PERP).It consists of three sequential and interdependent modules:Collaborative student exercise graph(CSEG)construction,joint random walk,and recommendation list optimization.Technically,CSEG is created as a unified heterogeneous graph with students’response behaviors and student(exercise)relationships.Then,a joint random walk to take full advantage of the spectral properties of nearly uncoupled Markov chains is performed on CSEG,which allows for full exploration of both similar exercises that students have finished and connections between students(exercises)with similar portraits.Finally,we propose to optimize the recommendation list to obtain different exercise suggestions.After analyses of two public datasets,the results demonstrated that PERP can satisfy novelty,accuracy,and diversity.
基金the National Natural Science Foundation of China(Grant No.62062001)Ningxia Youth Top Talent Project(2021).
文摘In the realm of data privacy protection,federated learning aims to collaboratively train a global model.However,heterogeneous data between clients presents challenges,often resulting in slow convergence and inadequate accuracy of the global model.Utilizing shared feature representations alongside customized classifiers for individual clients emerges as a promising personalized solution.Nonetheless,previous research has frequently neglected the integration of global knowledge into local representation learning and the synergy between global and local classifiers,thereby limiting model performance.To tackle these issues,this study proposes a hierarchical optimization method for federated learning with feature alignment and the fusion of classification decisions(FedFCD).FedFCD regularizes the relationship between global and local feature representations to achieve alignment and incorporates decision information from the global classifier,facilitating the late fusion of decision outputs from both global and local classifiers.Additionally,FedFCD employs a hierarchical optimization strategy to flexibly optimize model parameters.Through experiments on the Fashion-MNIST,CIFAR-10 and CIFAR-100 datasets,we demonstrate the effectiveness and superiority of FedFCD.For instance,on the CIFAR-100 dataset,FedFCD exhibited a significant improvement in average test accuracy by 6.83%compared to four outstanding personalized federated learning approaches.Furthermore,extended experiments confirm the robustness of FedFCD across various hyperparameter values.
基金Project supported by the National Natural Science Foundation of China(Grant No.72361031)the Gansu Province University Youth Doctoral Support Project(Grant No.2023QB-049)。
文摘In recent years, the traffic congestion problem has become more and more serious, and the research on traffic system control has become a new hot spot. Studying the bifurcation characteristics of traffic flow systems and designing control schemes for unstable pivots can alleviate the traffic congestion problem from a new perspective. In this work, the full-speed differential model considering the vehicle network environment is improved in order to adjust the traffic flow from the perspective of bifurcation control, the existence conditions of Hopf bifurcation and saddle-node bifurcation in the model are proved theoretically, and the stability mutation point for the stability of the transportation system is found. For the unstable bifurcation point, a nonlinear system feedback controller is designed by using Chebyshev polynomial approximation and stochastic feedback control method. The advancement, postponement, and elimination of Hopf bifurcation are achieved without changing the system equilibrium point, and the mutation behavior of the transportation system is controlled so as to alleviate the traffic congestion. The changes in the stability of complex traffic systems are explained through the bifurcation analysis, which can better capture the characteristics of the traffic flow. By adjusting the control parameters in the feedback controllers, the influence of the boundary conditions on the stability of the traffic system is adequately described, and the effects of the unstable focuses and saddle points on the system are suppressed to slow down the traffic flow. In addition, the unstable bifurcation points can be eliminated and the Hopf bifurcation can be controlled to advance, delay, and disappear,so as to realize the control of the stability behavior of the traffic system, which can help to alleviate the traffic congestion and describe the actual traffic phenomena as well.
基金the Deanship of Graduate Studies and Scientific Research at Qassim University for financial support(QU-APC-2024-9/1).
文摘The prompt spread of COVID-19 has emphasized the necessity for effective and precise diagnostic tools.In this article,a hybrid approach in terms of datasets as well as the methodology by utilizing a previously unexplored dataset obtained from a private hospital for detecting COVID-19,pneumonia,and normal conditions in chest X-ray images(CXIs)is proposed coupled with Explainable Artificial Intelligence(XAI).Our study leverages less preprocessing with pre-trained cutting-edge models like InceptionV3,VGG16,and VGG19 that excel in the task of feature extraction.The methodology is further enhanced by the inclusion of the t-SNE(t-Distributed Stochastic Neighbor Embedding)technique for visualizing the extracted image features and Contrast Limited Adaptive Histogram Equalization(CLAHE)to improve images before extraction of features.Additionally,an AttentionMechanism is utilized,which helps clarify how the modelmakes decisions,which builds trust in artificial intelligence(AI)systems.To evaluate the effectiveness of the proposed approach,both benchmark datasets and a private dataset obtained with permissions from Jinnah PostgraduateMedical Center(JPMC)in Karachi,Pakistan,are utilized.In 12 experiments,VGG19 showcased remarkable performance in the hybrid dataset approach,achieving 100%accuracy in COVID-19 vs.pneumonia classification and 97%in distinguishing normal cases.Overall,across all classes,the approach achieved 98%accuracy,demonstrating its efficiency in detecting COVID-19 and differentiating it fromother chest disorders(Pneumonia and healthy)while also providing insights into the decision-making process of the models.
基金supported by National Natural Science Foundation of China (62171390)Central Universities of Southwest Minzu University (ZYN2022032,2023NYXXS034)the State Scholarship Fund of the China Scholarship Council (NO.202008510081)。
文摘In LEO satellite communication networks,the number of satellites has increased sharply, the relative velocity of satellites is very fast, then electronic signal aliasing occurs from time to time. Those aliasing signals make the receiving ability of the signal receiver worse, the signal processing ability weaker,and the anti-interference ability of the communication system lower. Aiming at the above problems, to save communication resources and improve communication efficiency, and considering the irregularity of interference signals, the underdetermined blind separation technology can effectively deal with the problem of interference sensing and signal reconstruction in this scenario. In order to improve the stability of source signal separation and the security of information transmission, a greedy optimization algorithm can be executed. At the same time, to improve network information transmission efficiency and prevent algorithms from getting trapped in local optima, delete low-energy points during each iteration process. Ultimately, simulation experiments validate that the algorithm presented in this paper enhances both the transmission efficiency of the network transmission system and the security of the communication system, achieving the process of interference sensing and signal reconstruction in the LEO satellite communication system.
基金the Key Project of Zhejiang Provincial Natural Science Foundation under Grants LD21F020001,Z20F020022the National Natural Science Foundation of China under Grants 62072340,62076185the Major Project of Wenzhou Natural Science Foundation under Grants 2021HZSY0071,ZS2022001.
文摘Various deep learning models have been proposed for the accurate assisted diagnosis of early-stage Alzheimer’s disease(AD).Most studies predominantly employ Convolutional Neural Networks(CNNs),which focus solely on local features,thus encountering difficulties in handling global features.In contrast to natural images,Structural Magnetic Resonance Imaging(sMRI)images exhibit a higher number of channel dimensions.However,during the Position Embedding stage ofMulti Head Self Attention(MHSA),the coded information related to the channel dimension is disregarded.To tackle these issues,we propose theRepBoTNet-CESA network,an advanced AD-aided diagnostic model that is capable of learning local and global features simultaneously.It combines the advantages of CNN networks in capturing local information and Transformer networks in integrating global information,reducing computational costs while achieving excellent classification performance.Moreover,it uses the Cubic Embedding Self Attention(CESA)proposed in this paper to incorporate the channel code information,enhancing the classification performance within the Transformer structure.Finally,the RepBoTNet-CESA performs well in various AD-aided diagnosis tasks,with an accuracy of 96.58%,precision of 97.26%,and recall of 96.23%in the AD/NC task;an accuracy of 92.75%,precision of 92.84%,and recall of 93.18%in the EMCI/NC task;and an accuracy of 80.97%,precision of 83.86%,and recall of 80.91%in the AD/EMCI/LMCI/NC task.This demonstrates that RepBoTNet-CESA delivers outstanding outcomes in various AD-aided diagnostic tasks.Furthermore,our study has shown that MHSA exhibits superior performance compared to conventional attention mechanisms in enhancing ResNet performance.Besides,the Deeper RepBoTNet-CESA network fails to make further progress in AD-aided diagnostic tasks.