A large number of network security breaches in IoT networks have demonstrated the unreliability of current Network Intrusion Detection Systems (NIDSs). Consequently, network interruptions and loss of sensitive data have occurred, which has led to an active research area for improving NIDS technologies. In an analysis of related works, it was observed that most researchers aim to obtain better classification results by using a set of untried combinations of Feature Reduction (FR) and Machine Learning (ML) techniques on NIDS datasets. However, these datasets differ in feature sets, attack types, and network design. Therefore, this paper aims to discover whether these techniques can be generalised across various datasets. Six ML models are utilised: a Deep Feed Forward (DFF) network, Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Decision Tree (DT), Logistic Regression (LR), and Naive Bayes (NB). Three Feature Extraction (FE) algorithms, Principal Component Analysis (PCA), Auto-encoder (AE), and Linear Discriminant Analysis (LDA), are evaluated using three benchmark datasets: UNSW-NB15, ToN-IoT, and CSE-CIC-IDS2018. Although PCA and AE algorithms have been widely used, the determination of their optimal number of extracted dimensions has been overlooked. The results indicate that no single FE method or ML model achieves the best scores for all datasets. The optimal number of extracted dimensions is identified for each dataset, and LDA degrades the performance of the ML models on two datasets. The variance is used to analyse the extracted dimensions of LDA and PCA. Finally, this paper concludes that the choice of datasets significantly alters the performance of the applied techniques. We believe that a universal (benchmark) feature set is needed to facilitate further advancement and progress of research in this field.
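As a rough illustration of the FR/ML sweep described above, the sketch below varies the number of PCA dimensions and scores three of the six classifier families on synthetic data; the dataset, dimension candidates, and models shown are stand-ins, not the paper's experimental setup.

```python
# Sweep candidate PCA dimension counts and compare classifiers.
# Synthetic data stands in for UNSW-NB15 / ToN-IoT / CSE-CIC-IDS2018.
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=5000, n_features=40, n_informative=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {"DT": DecisionTreeClassifier(), "LR": LogisticRegression(max_iter=1000), "NB": GaussianNB()}
for n_dims in (5, 10, 20):                      # candidate numbers of extracted dimensions
    pca = PCA(n_components=n_dims).fit(X_tr)
    for name, model in models.items():
        acc = model.fit(pca.transform(X_tr), y_tr).score(pca.transform(X_te), y_te)
        print(f"PCA dims={n_dims:2d}  {name}: accuracy={acc:.3f}")
```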
In this paper, we develop a 6G wireless powered Internet of Things (IoT) system assisted by unmanned aerial vehicles (UAVs) to intelligently supply energy and collect data at the same time. In our dual-UAV scheme, UAV-E, with a constant power supply, transmits energy to charge the IoT devices on the ground, whereas UAV-B serves the IoT devices by data collection as a base station. In this framework, the system's energy efficiency is maximized, which we define as the ratio of the sum rate of the IoT devices to the energy consumption of the two UAVs during a fixed working duration. With constraints on duration, transmit power, energy, and mobility, a difficult non-convex problem arises from jointly optimizing the trajectory, the time duration allocation, and the uplink transmit power. To tackle this non-convex fractional optimization problem, we decompose it into three subproblems and solve each of them iteratively using the descent method in conjunction with sequential convex approximation (SCA) approaches and the Dinkelbach algorithm. The simulation findings indicate that the suggested cooperative design can greatly increase the energy efficiency of the 6G intelligent UAV-assisted wireless powered IoT system when compared to previous benchmark systems.
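The Dinkelbach step mentioned above turns the fractional energy-efficiency objective max f(x)/g(x) into a sequence of parametric problems max_x f(x) - lam*g(x), with lam updated to the achieved ratio. A minimal numeric sketch, with a toy rate/energy pair in place of the paper's trajectory and power variables (which would be handled by SCA), is:

```python
# Dinkelbach method for a fractional objective max f(x)/g(x):
# repeatedly solve max_x f(x) - lam*g(x) over a grid and update lam.
import numpy as np

x_grid = np.linspace(0.0, 10.0, 10001)      # feasible transmit powers (toy)
f = np.log2(1.0 + x_grid)                   # "sum rate"
g = 2.0 + 0.5 * x_grid                      # "energy consumption"

lam = 0.0
for it in range(50):
    idx = np.argmax(f - lam * g)            # parametric subproblem
    F = f[idx] - lam * g[idx]
    lam = f[idx] / g[idx]                   # Dinkelbach update
    if abs(F) < 1e-9:                       # F(lam*) = 0 at the optimum
        break
print(f"optimal ratio ~ {lam:.4f} at x = {x_grid[idx]:.3f} after {it + 1} iterations")
```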
The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks. In addition, IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources. Given the unpredictable nature of network technologies and diverse intrusion methods, conventional machine-learning approaches seem to lack efficiency. Across numerous research domains, deep learning techniques have demonstrated their capability to precisely detect anomalies. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. Firstly, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality and obtain a significant data representation by calculating the reconstruction error. Secondly, the Convolutional Neural Network (CNN) technique is employed to create a binary classification approach. The proposed SAE-CNN approach is validated using the Bot-IoT dataset. The proposed models exceed the performance of the existing deep learning approaches in the literature, with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 of 99.9%, a False Positive Rate (FPR) of 0.0003, and a True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicate that SAE-CNN performs better.
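A minimal sketch of the two-stage SAE-CNN pipeline is given below, assuming random tensors in place of Bot-IoT flows and illustrative layer sizes; the sparse autoencoder is trained on reconstruction error, and its code feeds a small 1-D CNN binary classifier.

```python
# Stage 1: sparse autoencoder compresses flow features.
# Stage 2: a 1-D CNN classifies the encoded representation as normal/attack.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, regularizers

n_features, code_dim = 40, 16
X = np.random.rand(1024, n_features).astype("float32")   # stand-in flows
y = np.random.randint(0, 2, size=(1024,))                # stand-in labels

inp = layers.Input(shape=(n_features,))
code = layers.Dense(code_dim, activation="relu",
                    activity_regularizer=regularizers.l1(1e-4))(inp)  # sparsity penalty
out = layers.Dense(n_features, activation="sigmoid")(code)
sae = tf.keras.Model(inp, out)
sae.compile(optimizer="adam", loss="mse")                # reconstruction error
sae.fit(X, X, epochs=2, batch_size=64, verbose=0)

encoder = tf.keras.Model(inp, code)
Z = encoder.predict(X, verbose=0)[..., None]             # (N, code_dim, 1) for Conv1D

cnn = tf.keras.Sequential([
    layers.Conv1D(16, 3, activation="relu", input_shape=(code_dim, 1)),
    layers.GlobalMaxPooling1D(),
    layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(Z, y, epochs=2, batch_size=64, verbose=0)
```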
In recent years, the Internet of Things (IoT) has gradually developed applications such as collecting sensory data and building intelligent services, which has led to an explosion in mobile data traffic. Meanwhile, with the rapid development of artificial intelligence, semantic communication has attracted great attention as a new communication paradigm. For IoT devices, however, processing image information efficiently in real time is an essential task for the rapid transmission of semantic information. As the number of parameters in deep learning models increases, the model inference time on sensor devices continues to grow. In contrast, the Pulse Coupled Neural Network (PCNN) has fewer parameters, making it more suitable for real-time scene tasks such as image segmentation, which lays the foundation for real-time, effective, and accurate image transmission. However, the parameters of the PCNN are determined by trial and error, which limits its application. To overcome this limitation, an Improved Pulse Coupled Neural Network (IPCNN) model is proposed in this work. The IPCNN constructs a connection between the static properties of the input image and the dynamic properties of the neurons, and all its parameters are set adaptively, which avoids the inconvenience of manual setting in traditional methods and improves the adaptability of the parameters to different types of images. Experimental segmentation results demonstrate the validity and efficiency of the proposed self-adaptive parameter setting method of the IPCNN on grey images and natural images from the Matlab and Berkeley Segmentation Datasets. The IPCNN achieves better segmentation results without training, providing a new solution for the real-time transmission of image semantic information.
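For reference, a simplified classical PCNN iteration is sketched below with hand-set constants; the IPCNN's contribution is precisely to derive such parameters adaptively from image statistics, which this sketch does not attempt.

```python
# Simplified PCNN segmentation loop (classic formulation).
# S is the normalised input image used as the neuron stimulus.
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
S = rng.random((64, 64))                       # stand-in grey image in [0, 1]
W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])

F = np.zeros_like(S); L = np.zeros_like(S)
Y = np.zeros_like(S); T = np.ones_like(S)      # firing output and dynamic threshold
aF, aL, aT, VF, VL, VT, beta = 0.1, 0.3, 0.2, 0.1, 0.2, 20.0, 0.1

for n in range(10):
    link = convolve2d(Y, W, mode="same")
    F = np.exp(-aF) * F + VF * link + S        # feeding input
    L = np.exp(-aL) * L + VL * link            # linking input
    U = F * (1.0 + beta * L)                   # internal activity
    Y = (U > T).astype(float)                  # neurons fire above threshold
    T = np.exp(-aT) * T + VT * Y               # fired neurons raise their threshold
print("fraction of neurons firing at the final step:", Y.mean())
```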
The use of the Internet of Things (IoT) is expanding at an unprecedented scale in many critical applications due to the ability to interconnect and utilize a wide range of devices. In critical infrastructure domains like oil and gas supply, intelligent transportation, power grids, and autonomous agriculture, it is essential to guarantee the confidentiality, integrity, and authenticity of the data collected and exchanged. However, the limited resources, coupled with the heterogeneity of IoT devices, make it inefficient, or sometimes infeasible, to achieve secure data transmission using traditional cryptographic techniques. Consequently, designing a lightweight secure data transmission scheme is becoming essential. In this article, we propose a lightweight secure data transmission (LSDT) scheme for IoT environments. LSDT consists of three phases and utilizes an effective combination of symmetric keys and the Elliptic Curve Menezes-Qu-Vanstone (ECMQV) asymmetric key agreement protocol. We design the simulation environment and experiments to evaluate the performance of the LSDT scheme in terms of communication and computation costs. Security and performance analysis indicates that the LSDT scheme is secure, suitable for IoT applications, and performs better in comparison to other related security schemes.
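To make the hybrid design concrete, the sketch below bootstraps a symmetric session key from an elliptic-curve key agreement using the `cryptography` package. Plain ephemeral ECDH plus HKDF stands in here for ECMQV (which additionally mixes static and ephemeral keys to authenticate both parties); names such as `lsdt-session` are illustrative assumptions.

```python
# Hybrid key establishment: an EC key agreement derives a symmetric session
# key. ECDH is used as a stand-in for the Menezes-Qu-Vanstone variant.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

dev_priv = ec.generate_private_key(ec.SECP256R1())      # IoT device key pair
gw_priv = ec.generate_private_key(ec.SECP256R1())       # gateway key pair

shared_dev = dev_priv.exchange(ec.ECDH(), gw_priv.public_key())
shared_gw = gw_priv.exchange(ec.ECDH(), dev_priv.public_key())
assert shared_dev == shared_gw                          # both sides agree

session_key = HKDF(algorithm=hashes.SHA256(), length=32,
                   salt=None, info=b"lsdt-session").derive(shared_dev)
print("32-byte symmetric session key:", session_key.hex())
```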
A large number of Web APIs have been released as services in mobile communications, but the service provided by a single Web API is usually limited. To enrich the services in mobile communications, developers have combined Web APIs to develop a new kind of service known as a mashup. The emergence of mashups greatly increases the number of services in mobile communications, especially in mobile networks and the Internet of Things (IoT), and has encouraged companies and individuals to develop even more mashups, leading to a dramatic increase in their number. Such a trend brings with it big data, such as the massive text data from the mashups themselves and continually generated usage data. Thus, the question of how to determine the most suitable mashups from big data has become a challenging problem. In this paper, we propose a mashup recommendation framework for big data in mobile networks and the IoT. The proposed framework is driven by machine learning techniques, including neural embedding, clustering, and matrix factorization. We employ neural embedding to learn distributed representations of mashups and propose to use cluster analysis to learn the relationships among the mashups. We also develop a novel Joint Matrix Factorization (JMF) model to complete the mashup recommendation task, for which we design a new objective function and an optimization algorithm. We then crawl a real-world large mashup dataset and perform experiments. The experimental results demonstrate that our framework achieves high accuracy in mashup recommendation and performs better than all compared baselines.
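The core factorization step that JMF extends can be sketched as plain SGD matrix factorization on the observed entries of a usage matrix; the paper's joint model couples further matrices and a custom objective, which this toy version omits.

```python
# Toy matrix factorization: factor a sparse interaction matrix R into
# user/mashup embeddings by gradient steps on observed entries only.
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(0, 2, size=(30, 20)).astype(float)     # stand-in usage matrix
mask = rng.random(R.shape) < 0.3                        # observed entries only
k, lr, reg = 8, 0.05, 0.01
U = 0.1 * rng.standard_normal((R.shape[0], k))
V = 0.1 * rng.standard_normal((R.shape[1], k))

for epoch in range(200):
    err = mask * (R - U @ V.T)                          # error on observed cells
    U += lr * (err @ V - reg * U)                       # gradient steps
    V += lr * (err.T @ U - reg * V)

err = mask * (R - U @ V.T)
print("observed-entry RMSE:", np.sqrt((err[mask] ** 2).mean()))
```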
As the key infrastructure of space-ground integrated information networks, satellite communication networks provide high-speed and reliable information transmission. In order to meet the burgeoning service demands of the IoT and the Internet, low-latency LEO satellite networks have developed rapidly. However, LEO satellites face inherent problems such as small coverage, fast movement, and short overhead time, which become more severe when serving highly dynamic users, e.g. high-speed rail and airplanes. A heterogeneous network composed of GEO, MEO, and LEO satellites can provide a variety of services, but its network management and resource allocation are also more challenging.
The widespread adoption of Internet of Things (IoT) devices has resulted in notable progress in different fields, improving operational effectiveness while also raising privacy concerns due to their vulnerability to virus attacks. This study suggests an advanced approach that utilizes machine learning, specifically the Wide Residual Network (WRN), to identify hidden malware in IoT systems. The research intends to improve privacy protection by accurately identifying malicious software that undermines the security of IoT devices, using the MalMemAnalysis dataset. Thorough experimentation provides evidence for the effectiveness of the WRN-based strategy, resulting in exceptional performance measures such as accuracy, precision, F1-score, and recall. The study of the test data demonstrates highly impressive results, with a multiclass accuracy surpassing 99.97% and a binary-class accuracy beyond 99.98%. The results emphasize the strength and dependability of using advanced deep learning methods such as the WRN for identifying hidden malware risks in IoT environments. Furthermore, a comparison with the current body of literature emphasizes the originality and efficacy of the suggested methodology. This research builds upon previous studies that have investigated several machine learning methods for detecting malware on IoT devices, but it distinguishes itself by showcasing exceptional performance metrics and validating its findings through thorough experimentation with real-world datasets. Utilizing the WRN offers benefits in managing the intricacies of malware detection, emphasizing its capacity to enhance the security of IoT ecosystems. To summarize, this work proposes an effective way to address privacy concerns on IoT devices by utilizing advanced machine learning methods. The research provides useful insights into the changing landscape of IoT cybersecurity by emphasizing methodological rigor and comparative performance analysis. Future research could focus on enhancing the recommended approach by adding more datasets and leveraging real-time monitoring capabilities to strengthen IoT devices' defenses against new cybersecurity threats.
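A minimal sketch of the "wide" residual block underlying a WRN is shown below in Keras; the widening factor, depth, and input shape are assumptions for illustration, not the paper's WRN configuration for MalMemAnalysis features.

```python
# Wide residual block: a pre-activation ResNet block whose channel count is
# multiplied by a widening factor k (the defining idea of WRN).
import tensorflow as tf
from tensorflow.keras import layers

def wide_basic_block(x, filters, k, stride=1):
    """Pre-activation residual block with widened channel count filters*k."""
    out = layers.BatchNormalization()(x)
    out = layers.Activation("relu")(out)
    out = layers.Conv2D(filters * k, 3, strides=stride, padding="same")(out)
    out = layers.BatchNormalization()(out)
    out = layers.Activation("relu")(out)
    out = layers.Conv2D(filters * k, 3, padding="same")(out)
    if stride != 1 or x.shape[-1] != filters * k:       # match shortcut shape
        x = layers.Conv2D(filters * k, 1, strides=stride)(x)
    return layers.Add()([out, x])

inp = layers.Input(shape=(32, 32, 1))                   # stand-in feature "image"
h = layers.Conv2D(16, 3, padding="same")(inp)
h = wide_basic_block(h, 16, k=4)
h = wide_basic_block(h, 32, k=4, stride=2)
h = layers.GlobalAveragePooling2D()(h)
out = layers.Dense(2, activation="softmax")(h)          # binary malware classes
model = tf.keras.Model(inp, out)
model.summary()
```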
Vehicular Ad hoc Networks (VANETs) enable vehicles to act as mobile nodes that can fetch, share, and disseminate information about vehicle safety, emergency events, warning messages, and passenger infotainment. However, the continuous dissemination of information from vehicles and their one-hop neighbour nodes, Road Side Units (RSUs), and VANET infrastructures can degrade the performance of VANETs in the existing host-centric IP-based network. Therefore, Information Centric Networks (ICN) are being explored as an alternative architecture for vehicular communication to achieve robust content distribution in highly mobile, dynamic, and error-prone domains. In ICN-based Vehicular-IoT networks, consumer mobility is implicitly supported, but producer mobility may result in redundant data transmission and caching inefficiency at intermediate vehicular nodes. This paper proposes an efficient redundant transmission control algorithm based on network coding to reduce data redundancy and accelerate information dissemination. The proposed protocol, called Network Coding Multiple Solutions Scheduling (NCMSS), is a receiver-driven collaborative scheduling scheme between requesters and information sources that uses a global parameter, the expectation deadline, to effectively manage the transmission of encoded data packets and control the selection of information sources. Experimental results for the proposed NCMSS protocol are presented to analyse the performance of ICN-vehicular-IoT networks in terms of caching, data retrieval delay, and end-to-end application throughput. The end-to-end throughput of NCMSS is 22% higher (for 1024-byte data) than that of existing solutions, whereas its delay is reduced by 5% in comparison with existing solutions.
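The gain network coding offers can be seen in the smallest possible sketch: one XOR-coded transmission serves two receivers that each hold one of the packets. Practical schemes such as the one NCMSS schedules use random linear coding over larger packet generations, but the decoding principle is the same.

```python
# XOR network coding: intermediate nodes send combinations of packets
# instead of copies; a receiver holding one packet recovers the other.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"vehicle-safety-msg-0001"        # packet cached by receiver A
p2 = b"road-hazard-alert-0002!"        # packet cached by receiver B (same length)

coded = xor_bytes(p1, p2)              # one coded transmission serves both

# Receiver A holds p1 and recovers p2; receiver B does the converse.
assert xor_bytes(coded, p1) == p2
assert xor_bytes(coded, p2) == p1
print("both receivers decoded from a single coded packet")
```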
Complex networks in the Internet of Things (IoT) and brain communication are the main focus of this paper. The benefits of complex networks may be applicable to future research directions in 6G, photonic, IoT, brain, and other communication technologies. Heavy data traffic, huge capacity, and minimal dynamic latency are some of the future requirements of 5G+ and 6G communication systems. In emerging communication technologies such as 5G+/6G-based photonic sensor communication, complex networks play an important role in meeting the future requirements of IoT and brain communication. In this paper, the state of the complex system, considered as a complex network (the connections between brain cells, neurons, etc.), needs to be measured for analysing the functions of the neurons during brain communication. Here, we measure the state of the complex system through observability. Using 5G+/6G-based photonic sensor nodes, finding the observability influenced by the concept of contraction provides the stability of neurons. When IoT or other sensors fail to measure the state of the connectivity in 5G+ or 6G communication due to external noise and attacks, some information about the sensor nodes during the communication is lost. Similarly, neurons, treated as neuron sensors in the brain under the complex-network concept, lose communication and connections. Therefore, the affected sensor nodes in a contraction must be compensated equivalently to maintain the stability conditions. In this compensation, the loss of observability depends on the contraction size, which is a key factor for employing a complex network. To analyse the observability recovery, we can use a contraction detection algorithm with complex network properties. Our survey shows that the contraction size allows us to improve the performance of brain communication and the stability of neurons through the clustering coefficient considered in the contraction detection algorithm. In addition, we discuss the scalability of IoT communication using 5G+/6G-based photonic technology.
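The clustering coefficient used by the contraction detection algorithm is a standard graph measure; the sketch below computes it with networkx on a random graph standing in for a sensor or neuron connectivity network.

```python
# Per-node and network-wide clustering coefficients on a stand-in graph.
import networkx as nx

G = nx.erdos_renyi_graph(n=50, p=0.1, seed=42)   # stand-in connectivity graph
local = nx.clustering(G)                         # per-node clustering coefficient
print("average clustering coefficient:", nx.average_clustering(G))
print("five least-clustered nodes:",
      sorted(local, key=local.get)[:5])          # candidates to watch in a contraction
```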
Cybersecurity has become the most significant research area in the domain of the Internet of Things (IoT) owing to the ever-increasing number of cyberattacks. The rapid penetration of Android platforms in mobile devices has made the detection of malware attacks a challenging process. Furthermore, Android malware is increasing on a daily basis, so precise malware detection techniques need a large amount of hardware resources, which are significantly limited in mobile devices. In this research article, an optimal Graph Convolutional Neural Network-based Malware Detection and Classification (OGCNN-MDC) model is introduced for an IoT-cloud environment. The proposed OGCNN-MDC model aims to recognize and categorize malware occurrences in IoT-enabled cloud platforms. The presented OGCNN-MDC model has three stages in total: data pre-processing, malware detection, and parameter tuning. To detect and classify the malware, the GCNN model is exploited in this work. In order to enhance the overall efficiency of the GCNN model, the Group Mean-Based Optimizer (GMBO) algorithm is utilized to appropriately adjust the GCNN parameters, and this phenomenon shows the novelty of the current study. A widespread experimental analysis was conducted to establish the superiority of the proposed OGCNN-MDC model. A comprehensive comparison study was conducted, and the outcomes highlighted the supreme performance of the proposed OGCNN-MDC model over other recent approaches.
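The graph-convolution layer at the core of a GCNN follows the propagation rule H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); a toy numpy version on a stand-in graph (not the paper's malware data) looks like this:

```python
# One GCN layer: symmetric normalisation of the adjacency matrix with
# self-loops, followed by a linear map and ReLU.
import numpy as np

A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)       # stand-in relation graph
A_hat = A + np.eye(4)                           # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt        # symmetric normalisation

rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))                 # node features
W = rng.standard_normal((8, 4))                 # learnable layer weights
H_next = np.maximum(A_norm @ H @ W, 0.0)        # one GCN layer with ReLU
print(H_next.shape)                             # (4 nodes, 4 hidden dims)
```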
The advent of pandemics such as COVID-19 significantly impacts human behaviour and lives every day. Therefore, it is essential to make medical services connected to the internet available in every remote location during such situations. In addition, the security issues in the Internet of Medical Things (IoMT) used in these services make the situation even more critical, because cyberattacks on medical devices might cause treatment delays or clinical failures. Hence, services in the healthcare ecosystem need rapid, uninterrupted, and secure facilities. The solution provided in this research addresses security concerns and service availability for patients with critical health conditions in remote areas. This research aims to develop an intelligent Software Defined Network (SDN)-enabled secure framework for the IoT healthcare ecosystem. We propose a hybrid of machine learning and deep learning techniques (DNN + SVM) to identify network intrusions in sensor-based healthcare data. In addition, this system can efficiently monitor connected devices and suspicious behaviours. Finally, we evaluate the performance of our proposed framework using various performance metrics based on healthcare application scenarios. The experimental results show that the proposed approach effectively detects and mitigates attacks in SDN-enabled IoT networks and performs better than other state-of-the-art approaches.
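The abstract does not spell out how the DNN and SVM are combined; one plausible reading, sketched below with scikit-learn, trains both on the same features and soft-votes their probabilities, with synthetic traffic standing in for the healthcare sensor data.

```python
# Hybrid intrusion detector: a neural network and an SVM combined by
# soft voting over predicted probabilities.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=3000, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

hybrid = VotingClassifier(
    estimators=[("dnn", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
                ("svm", SVC(probability=True))],
    voting="soft")                               # average predicted probabilities
hybrid.fit(X_tr, y_tr)
print("hybrid intrusion-detection accuracy:", hybrid.score(X_te, y_te))
```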
Specific emitter identification can distinguish individual transmitters by analysing received signals and extracting the inherent features of their hardware circuits. Feature extraction is a key part of traditional machine learning-based methods, but manual extraction is generally limited by prior professional knowledge. At the same time, it has been noted that the performance of most specific emitter identification methods degrades in low signal-to-noise ratio (SNR) environments. The deep residual shrinkage network (DRSN) is proposed for specific emitter identification, particularly at low SNRs. Its soft thresholding preserves more key features to improve performance, and an identity shortcut speeds up the training process. We collect signals via a receiver to create a dataset in real environments. The DRSN is trained to automatically extract features and implement the classification of transmitters. Experimental results show that the DRSN obtains the best accuracy under different SNRs and has a shorter running time, which demonstrates the effectiveness of the DRSN in identifying specific emitters.
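The soft-thresholding operation that distinguishes a DRSN from a plain residual network zeroes small-magnitude activations and shrinks the rest toward zero, suppressing noise while keeping strong features; in the full network a small sub-network learns the threshold per channel, whereas this sketch fixes it:

```python
# Soft thresholding: sign(x) * max(|x| - tau, 0).
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

noisy_features = np.array([-2.1, -0.3, 0.05, 0.4, 1.8])
print(soft_threshold(noisy_features, tau=0.5))   # -> [-1.6  -0.  0.  0.  1.3]
```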
The rapid growth in hardware technologies and the fourth industrial revolution, Industry 4.0, have enabled the Internet of Things (IoT) to become smarter. One of the main drivers of Industry 4.0 is the smart and secured Industrial IoT (IIoT) [1]. The IIoT results from the widespread use of computers and the interconnectedness of machines, and it has made software a crucial tool for almost every industry, from bakeries and arts to manufacturing facilities and healthcare systems [2]. IIoT devices can be mobile and geographically distributed over long distances, which exposes them to network disturbances, Quality of Service (QoS) degradation, and security vulnerabilities. In addition, the IIoT is a complex network at a large scale, and there is a dire need for network architecture and protocol designs that accommodate these diverse domains and competencies and handle the increasing levels of complexity. Therefore, this special issue focuses on the challenges of network architecture and communication protocol design in the context of the smart industry. The special issue has attracted numerous high-quality research articles, and fourteen research papers have been accepted [3–16].
Quality of Service (QoS) in 6G application scenarios is an important issue under the premise of massive data transmission. Edge caching based on fog computing networks is considered a potential solution to effectively reduce the content fetch delay for the latency-sensitive services of Internet of Things (IoT) devices. In time-varying scenarios, machine learning techniques can further reduce the content fetch delay by optimizing the caching decisions. In this paper, to minimize the content fetch delay and ensure the QoS of the network, a Device-to-Device (D2D) assisted fog computing network architecture is introduced, which supports federated learning and QoS-aware caching decisions based on time-varying user preferences. To relieve network congestion and the risk of user privacy leakage, federated learning is enabled in the D2D-assisted fog computing network. Specifically, it has been observed that federated learning yields suboptimal results under Non-Independent and Identically Distributed (Non-IID) local user data. To address this issue, a distributed cluster-based user preference estimation algorithm is proposed to optimize the content caching placement and to improve the cache hit rate, the content fetch delay, and the convergence rate, which can effectively mitigate the impact of Non-IID data sets by clustering. The simulation results show that the proposed algorithm provides a considerable performance improvement, with better learning results than the existing algorithms.
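A minimal sketch of the clustering idea behind the user preference estimation is given below: clients are grouped by the similarity of their local content-type distributions, so that per-cluster aggregation sidesteps the Non-IID problem. KMeans over Dirichlet-sampled histograms stands in for the paper's distributed algorithm.

```python
# Group federated-learning clients by their local preference distributions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_clients, n_content_types = 40, 6
# Each client's empirical distribution over content types (rows sum to 1);
# a small Dirichlet alpha makes the data deliberately Non-IID.
prefs = rng.dirichlet(alpha=np.full(n_content_types, 0.3), size=n_clients)

clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(prefs)
for c in range(4):
    members = np.where(clusters == c)[0]
    print(f"cluster {c}: {len(members)} clients, "
          f"mean preference = {prefs[members].mean(axis=0).round(2)}")
```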
Nowadays, the widespread application of 5G has promoted rapid development in different areas, particularly in the Internet of Things (IoT), where 5G provides the advantages of a higher data transfer rate, lower latency, and widespread connections. Wireless sensor networks (WSNs), which comprise various sensors, are crucial components of the IoT. The main functions of a WSN include providing users with real-time monitoring information, deploying regional information collection, and synchronizing with the Internet. Security in WSNs is becoming increasingly essential because of the across-the-board nature of wireless technology in many fields. Recently, Yu et al. proposed a user authentication protocol for WSNs. However, their design is vulnerable to sensor capture and temporary information disclosure attacks. Thus, in this study, an improved protocol called PSAP-WSN is proposed. The security of PSAP-WSN is demonstrated by employing the ROR model, BAN logic, and the ProVerif tool for the analysis. The experimental evaluation shows that our design is more efficient and suitable for WSN environments.
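As background for the class of protocols being analysed (and not the PSAP-WSN message flow itself, which the abstract does not detail), a shared-key challenge-response exchange in which a fresh nonce prevents replay can be sketched with the Python standard library:

```python
# Generic WSN-style challenge-response authentication: the sensor proves
# possession of the shared key by MACing a fresh nonce from the gateway.
import hashlib
import hmac
import os

shared_key = os.urandom(32)                    # provisioned at deployment

nonce = os.urandom(16)                         # gateway -> sensor: fresh challenge
response = hmac.new(shared_key, nonce, hashlib.sha256).digest()   # sensor -> gateway

expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
print("sensor authenticated:", hmac.compare_digest(response, expected))
```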
6G IoT networks aim to provide significantly higher data rates and extremely low latency. However, due to the increasingly scarce spectrum bands and the ever-growing massive number of IoT devices (IoDs) deployed, 6G IoT networks face two critical challenges, i.e., energy limitation and severe signal attenuation. Simultaneous wireless information and power transfer (SWIPT) and cooperative relaying provide effective ways to address these two challenges. In this paper, we investigate the energy self-sustainability (ESS) of 6G IoT networks and propose an OFDM-based bidirectional multi-relay SWIPT strategy for 6G IoT networks. In the proposed strategy, the transmission process is equally divided into two phases. Specifically, in phase 1, two source nodes transmit their signals to relay nodes, which then use different subcarrier sets to decode information and harvest energy, respectively. In phase 2, the relay nodes forward the signals to the corresponding destination nodes with the harvested energy. We maximize the weighted sum transmission rate by optimizing the subcarrier and power allocation. Our proposed strategy achieves a larger weighted sum transmission rate compared with the benchmark scheme.
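Once subcarrier sets are fixed, per-phase power optimization in OFDM systems typically reduces to water-filling over the channel gains; a small bisection-based sketch with illustrative gains is:

```python
# Water-filling power allocation: given per-subcarrier gains and a total
# power budget, pour power where channels are strongest. Bisection finds
# the water level mu such that sum_i max(mu - 1/g_i, 0) = P_total.
import numpy as np

def water_filling(gains, p_total, iters=60):
    lo, hi = 0.0, p_total + 1.0 / gains.min()      # bracket the water level mu
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.maximum(mu - 1.0 / gains, 0.0)      # power per subcarrier
        lo, hi = (mu, hi) if p.sum() < p_total else (lo, mu)
    return p

gains = np.array([2.0, 0.8, 1.5, 0.2, 1.0])        # stand-in channel gains
p = water_filling(gains, p_total=5.0)
rate = np.log2(1.0 + gains * p).sum()
print("allocation:", p.round(3), " sum rate:", round(rate, 3))
```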
In recent years, real-time video streaming has grown in popularity. The growing popularity of the Internet of Things (IoT) and other wireless heterogeneous networks mandates that network resources be carefully apportioned among versatile users in order to achieve the best Quality of Experience (QoE) and performance objectives. Most researchers have focused on Forward Error Correction (FEC) techniques when attempting to strike a balance between QoE and performance. However, as network capacity increases, the performance degrades, impacting the live visual experience. Recently, Deep Learning (DL) algorithms have been successfully integrated with FEC to stream videos across multiple heterogeneous networks, but these algorithms need to be adapted to improve the experience without sacrificing packet loss and delay time. To address this challenge, this paper proposes a novel intelligent algorithm that streams video in multi-home heterogeneous networks based on network-centric characteristics. The proposed framework contains modules such as the Intelligent Content Extraction Module (ICEM), the Channel Status Monitor (CSM), and Adaptive FEC (AFEC). The framework adopts a Cognitive Learning-based Scheduling (CLS) module, which works on the principle of deep Reinforced Gated Recurrent Networks (RGRN) and embeds them along with the FEC to achieve better performance. The complete framework was developed using the Objective Modular Network Testbed in C++ (OMNeT++), the INET framework, and Python 3.10, with Keras as the front end and TensorFlow 2.10 as the back end. In extensive experiments, the proposed model outperforms the other existing intelligent models in terms of improving QoE, minimizing the End-to-End Delay (EED), and maintaining the highest accuracy (98%) and a lower Root Mean Square Error (RMSE) value of 0.001.
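One hedged reading of the CLS module is a recurrent predictor of channel loss that drives the AFEC redundancy level; the Keras sketch below uses random tensors for the channel history and an ad hoc mapping from predicted loss to parity packets, both assumptions for illustration.

```python
# A small GRU predicts the next loss rate from a window of recent channel
# measurements; an adaptive-FEC stage maps the prediction to redundancy.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

window, n_feats = 10, 3
X = np.random.rand(512, window, n_feats).astype("float32")  # stand-in channel history
y = np.random.rand(512, 1).astype("float32")                # next-step loss rate

model = tf.keras.Sequential([
    layers.GRU(32, input_shape=(window, n_feats)),
    layers.Dense(1, activation="sigmoid"),                  # loss rate in [0, 1]
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)

predicted_loss = float(model.predict(X[:1], verbose=0)[0, 0])
fec_redundancy = int(np.ceil(10 * predicted_loss))          # parity packets per 10 data packets
print(f"predicted loss {predicted_loss:.2f} -> add {fec_redundancy} parity packets")
```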
An IoT-based wireless sensor network (WSN) comprises many small sensors that collect data and share it with central repositories. These sensors are battery-driven, resource-restrained devices that consume most of their energy in sensing or collecting data and transmitting it. During data sharing, security is an important concern in such networks, as they are prone to many threats, of which the deadliest is the wormhole attack. These attacks are launched without acquiring any vital information about the network, and they severely compromise the communication, security, and performance of the network. In an IoT-based network environment, their mitigation becomes more challenging because of the low resource availability of the sensing devices. We have performed an extensive literature study of the existing techniques against the wormhole attack and categorised them according to their methodology. The analysis of the literature has motivated our research. In this paper, we develop the ESWI technique for detecting the wormhole attack while improving performance and security. The algorithm has been designed to be simple and uncomplicated so as to avoid overheads and energy drainage in its operation. The simulation results of our technique show competitive results for the detection rate and packet delivery ratio. It also gives an increased throughput, a decreased end-to-end delay, and a much-reduced consumption of energy.
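The abstract leaves ESWI's internals unspecified, so the sketch below shows a common ingredient of wormhole detectors in general: flagging advertised one-hop links whose estimated physical distance exceeds the radio range, a consistency check that a wormhole tunnel typically violates.

```python
# Distance-consistency wormhole check: a one-hop link between nodes that
# are physically farther apart than the radio range is suspicious, since
# wormhole endpoints tunnel traffic to fake such links.
import math

RADIO_RANGE = 30.0                                   # metres, assumed
positions = {"A": (0, 0), "B": (20, 5), "C": (200, 180)}
neighbour_links = [("A", "B"), ("A", "C")]           # links as advertised

def dist(u, v):
    (x1, y1), (x2, y2) = positions[u], positions[v]
    return math.hypot(x2 - x1, y2 - y1)

for u, v in neighbour_links:
    suspicious = dist(u, v) > RADIO_RANGE
    print(f"link {u}-{v}: {dist(u, v):6.1f} m ->",
          "possible wormhole" if suspicious else "ok")
```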