As the Internet of Things (IoT) continues to expand, incorporating a vast array of devices into a digital ecosystem also increases the risk of cyber threats, necessitating robust defense mechanisms. This paper presents an innovative hybrid deep learning architecture that excels at detecting IoT threats in real-world settings. Our proposed model combines Convolutional Neural Networks (CNN), Bidirectional Long Short-Term Memory (BLSTM), Gated Recurrent Units (GRU), and Attention mechanisms into a cohesive framework. This integrated structure aims to enhance the detection and classification of complex cyber threats while accommodating the operational constraints of diverse IoT systems. We evaluated our model using the RT-IoT2022 dataset, which includes various devices, standard operations, and simulated attacks. Our research's significance lies in the comprehensive evaluation metrics, including Cohen's Kappa and the Matthews Correlation Coefficient (MCC), which underscore the model's reliability and predictive quality. Our model surpassed traditional machine learning algorithms and the state of the art, achieving over 99.6% precision, recall, F1-score, and accuracy, together with a low False Positive Rate (FPR) and detection time, and effectively identifying specific threats such as Message Queuing Telemetry Transport (MQTT) Publish, Denial of Service SYN flooding crafted with the Hping tool (DOS SYN Hping), and Network Mapper Operating System Detection (NMAP OS DETECTION). The experimental analysis reveals a significant improvement over existing detection systems, strengthening IoT security standards. Our model effectively addresses the need for advanced, dependable, and adaptable security solutions, demonstrating the power of deep learning in strengthening IoT ecosystems amid the constantly evolving cyber threat landscape. This achievement marks a significant stride towards protecting the integrity of IoT infrastructure, ensuring operational resilience, and preserving privacy in this groundbreaking technology.
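The Cohen's Kappa and MCC metrics the abstract highlights can be computed directly from a binary confusion matrix. The sketch below is illustrative only; the counts are hypothetical and not taken from the paper's RT-IoT2022 results.

```python
import math

def kappa_and_mcc(tp, fp, fn, tn):
    """Cohen's Kappa and Matthews Correlation Coefficient for a
    binary confusion matrix (attack vs. benign)."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement (accuracy)
    # expected agreement under chance, from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / (n * n)
    kappa = (po - pe) / (1 - pe)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return kappa, mcc

# hypothetical counts for illustration only
kappa, mcc = kappa_and_mcc(tp=980, fp=5, fn=15, tn=1000)
print(f"kappa={kappa:.4f}  mcc={mcc:.4f}")
```

Unlike raw accuracy, both metrics correct for chance agreement, which is why they are informative on imbalanced intrusion datasets.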
Increasing Internet of Things (IoT) device connectivity makes botnet attacks more dangerous, carrying potentially catastrophic hazards. As IoT botnets evolve, their dynamic and multifaceted nature hampers conventional detection methods. This paper proposes a risk assessment framework based on fuzzy logic and Particle Swarm Optimization (PSO) to address the risks associated with IoT botnets. Fuzzy logic handles the uncertainties and ambiguities of IoT threats methodically, and PSO optimizes the fuzzy component settings to improve accuracy. The methodology allows for more nuanced judgments by transitioning from binary to continuous assessment, and instead of relying on expert inputs, PSO tunes the rules and membership functions in a data-driven manner. This study presents a complete IoT botnet risk assessment system that helps security teams allocate resources by categorizing threats as high, medium, or low severity, demonstrated on the CICIoT2023 dataset. Our research has implications beyond detection, as it provides a proactive approach to risk management and promotes the development of more secure IoT environments.
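The continuous, fuzzy grading of severity described above can be sketched with triangular membership functions. This is a toy illustration: the breakpoints below are invented, whereas in the paper such parameters would be tuned by PSO rather than set by hand.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_severity(anomaly_score):
    """Map a normalized anomaly score in [0, 1] to a severity grade.
    Breakpoints are illustrative; the paper would tune them via PSO."""
    grades = {
        "low":    tri(anomaly_score, -0.01, 0.0, 0.5),
        "medium": tri(anomaly_score, 0.2, 0.5, 0.8),
        "high":   tri(anomaly_score, 0.5, 1.0, 1.01),
    }
    return max(grades, key=grades.get)

print(risk_severity(0.1), risk_severity(0.5), risk_severity(0.9))
# → low medium high
```

Because membership grades overlap, a score near a boundary belongs partly to two classes, which is exactly the ambiguity fuzzy logic is meant to represent.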
In the early days of IoT's introduction, it was challenging to introduce encrypted communication due to the limited capabilities of each component, such as computing resources like CPUs and batteries, for encrypting and decrypting data. Because IoT is applied and utilized in many important fields, a cyberattack on IoT can result in astronomical financial and human casualties. For this reason, the application of encrypted communication to IoT has been required, and it has become possible thanks to improvements in the computing performance of IoT devices and the development of lightweight cryptography. However, encrypted communication in IoT has also made it possible to launch cyberattacks over encrypted channels. The approach of extracting evidence of an attack from the primary information of a network packet is no longer valid, because critical information, such as the payload, is hidden by encryption. Technology that can detect cyberattacks over the encrypted network traffic occurring in IoT environments is therefore required. This research proposes an encrypted cyberattack detection system for the IoT (ECDS-IoT) that derives valid features for cyberattack detection from the encrypted network traffic generated in the IoT environment and performs detection based on the derived features. ECDS-IoT identifies observable information in encrypted traffic collected in IoT environments and extracts statistics-based features through statistical analysis of that information. ECDS-IoT learns only the statistical features extracted from normal data and detects cyberattacks based solely on that learned profile of normal behavior. To evaluate the cyberattack detection performance of the proposed ECDS-IoT, we used CICIoT2023, a dataset containing encrypted traffic generated by normal operation and seven categories of cyberattacks in the IoT environment, and experimented with cyberattack detection on encrypted traffic using Autoencoder, RNN, GRU, LSTM, BiLSTM, and AE-LSTM algorithms. ECDS-IoT achieved high performance, with accuracy 0.99739, precision 0.99154, recall 1.0, F1-score 0.99575, and ROC-AUC 0.99822 when using the AE-LSTM algorithm. These results show that most cyberattacks can be detected even through encrypted traffic. By applying ECDS-IoT, cyberattacks concealed in encrypted traffic can be detected effectively, promoting the efficient operation of IoT and preventing the financial and human damage caused by cyberattacks.
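The core idea, learning statistical features of benign flows only and flagging deviations, can be sketched without any deep learning. The feature set, thresholds, and sample flows below are invented for illustration and stand in for the paper's autoencoder-based approach.

```python
import statistics

def flow_features(packet_sizes):
    """Statistical features of an encrypted flow's packet sizes;
    the payload is unreadable, but sizes remain observable."""
    return (statistics.mean(packet_sizes), statistics.pstdev(packet_sizes))

def fit_normal(flows):
    """Learn the range of each feature from benign flows only."""
    feats = [flow_features(f) for f in flows]
    lo = tuple(min(col) for col in zip(*feats))
    hi = tuple(max(col) for col in zip(*feats))
    return lo, hi

def is_attack(flow, model, margin=0.2):
    """Flag a flow whose features fall outside the learned normal range."""
    lo, hi = model
    for v, l, h in zip(flow_features(flow), lo, hi):
        span = (h - l) or 1.0
        if v < l - margin * span or v > h + margin * span:
            return True
    return False

# hypothetical benign packet-size sequences
normal = [[100, 110, 95, 105], [98, 102, 99, 101], [90, 120, 100, 110]]
model = fit_normal(normal)
print(is_attack([100, 104, 98, 102], model))    # benign-like → False
print(is_attack([1400, 1400, 1400, 60], model))  # flood-like → True
```

A real system replaces the min/max envelope with an autoencoder whose reconstruction error on benign traffic defines the anomaly threshold, but the training-on-normal-only principle is the same.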
With the commercialization of 5th-generation mobile communications (5G) networks, a large-scale Internet of Things (IoT) environment is being built. Security is becoming increasingly crucial in 5G network environments due to the growing risk of various distributed denial of service (DDoS) attacks across vast numbers of IoT devices. Recently, research on automated intrusion detection using machine learning (ML) for 5G environments has been actively conducted. However, 5G traffic data is scarce due to privacy protection concerns, and it is imbalanced, with significantly less attack data. If such data is used to train an ML model, the model is likely to suffer from generalization errors because it has not been trained on enough distinct features of the attack data. This paper therefore studies a training method that mitigates the generalization error of ML models classifying IoT DDoS attacks even under conditions of insufficient and imbalanced 5G traffic. We built a 5G testbed to construct a 5G training dataset, addressing the problem of insufficient data. To address the imbalance problem, the synthetic minority oversampling technique (SMOTE) and the generative adversarial network (GAN)-based conditional tabular GAN (CTGAN) were used for data augmentation. The performance of the trained ML models was compared and analyzed with respect to the generalization error problem. The experimental results showed that CTGAN decreased accuracy and F1-score compared to the baseline; still, regarding generalization error, the gap between validation and test results was reduced by a factor of at least 1.7 and up to 22.88, indicating an improvement. This result suggests that training ML models with CTGAN-augmented attack data mitigates the generalization error problem in the 5G environment.
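SMOTE, one of the two augmentation techniques mentioned, creates synthetic minority samples by interpolating between a real sample and one of its nearest neighbours. The plain-Python sketch below uses invented two-feature attack flows; production code would use a library implementation such as imbalanced-learn.

```python
import random

def smote(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by interpolating
    between a sample and one of its k nearest neighbours."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # random point on the segment x → nb
        synthetic.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return synthetic

attack_flows = [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]  # scarce attack class
augmented = attack_flows + smote(attack_flows, n_new=3)
print(len(augmented))  # 6
```

Because each synthetic point lies on a segment between two real attack samples, SMOTE interpolates within the minority class rather than modeling its full distribution, which is the gap CTGAN aims to close.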
Funding: Deanship of Scientific Research at King Faisal University, Grant Number KFU241648.
Funding: Supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00493, 5G Massive Next Generation Cyber Attack Deception Technology Development).
Funding: This work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00796, Research on Foundational Technologies for 6G Autonomous Security-by-Design to Guarantee Constant Quality of Security).