Due to the overwhelming characteristics of the Internet of Things(IoT)and its adoption in approximately every aspect of our lives,the concept of individual devices’privacy has gained prominent attention from both cus...Due to the overwhelming characteristics of the Internet of Things(IoT)and its adoption in approximately every aspect of our lives,the concept of individual devices’privacy has gained prominent attention from both customers,i.e.,people,and industries as wearable devices collect sensitive information about patients(both admitted and outdoor)in smart healthcare infrastructures.In addition to privacy,outliers or noise are among the crucial issues,which are directly correlated with IoT infrastructures,as most member devices are resource-limited and could generate or transmit false data that is required to be refined before processing,i.e.,transmitting.Therefore,the development of privacy-preserving information fusion techniques is highly encouraged,especially those designed for smart IoT-enabled domains.In this paper,we are going to present an effective hybrid approach that can refine raw data values captured by the respectivemember device before transmission while preserving its privacy through the utilization of the differential privacy technique in IoT infrastructures.Sliding window,i.e.,δi based dynamic programming methodology,is implemented at the device level to ensure precise and accurate detection of outliers or noisy data,and refine it prior to activation of the respective transmission activity.Additionally,an appropriate privacy budget has been selected,which is enough to ensure the privacy of every individualmodule,i.e.,a wearable device such as a smartwatch attached to the patient’s body.In contrast,the end module,i.e.,the server in this case,can extract important information with approximately the maximum level of accuracy.Moreover,refined data has been processed by adding an appropriate nose through the Laplace mechanism to make it useless or meaningless for the adversary modules in the IoT.The proposed hybrid approach is trusted from both the device’s privacy and the integrity of the transmitted information perspectives.Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as IoT and the Internet ofMedical Things,where both device privacy and information integrity are important.Finally,the proposed hybrid approach is proven against well-known intruder attacks,especially those related to the privacy of the respective device in IoT infrastructures.展开更多
Recently,researchers have shown increasing interest in combining more than one programming model into systems running on high performance computing systems(HPCs)to achieve exascale by applying parallelism at multiple ...Recently,researchers have shown increasing interest in combining more than one programming model into systems running on high performance computing systems(HPCs)to achieve exascale by applying parallelism at multiple levels.Combining different programming paradigms,such as Message Passing Interface(MPI),Open Multiple Processing(OpenMP),and Open Accelerators(OpenACC),can increase computation speed and improve performance.During the integration of multiple models,the probability of runtime errors increases,making their detection difficult,especially in the absence of testing techniques that can detect these errors.Numerous studies have been conducted to identify these errors,but no technique exists for detecting errors in three-level programming models.Despite the increasing research that integrates the three programming models,MPI,OpenMP,and OpenACC,a testing technology to detect runtime errors,such as deadlocks and race conditions,which can arise from this integration has not been developed.Therefore,this paper begins with a definition and explanation of runtime errors that result fromintegrating the three programming models that compilers cannot detect.For the first time,this paper presents a classification of operational errors that can result from the integration of the three models.This paper also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++programming language that uses the triple programming models MPI,OpenMP,and OpenACC.This hybrid technology combines static technology and dynamic technology,given that some errors can be detected using static techniques,whereas others can be detected using dynamic technology.The hybrid technique can detect more errors because it combines two distinct technologies.The proposed static technology detects a wide range of error types in less time,whereas a portion of the potential errors that may or may not occur depending on the 4502 CMC,2023,vol.74,no.2 operating environment are left to the dynamic technology,which completes the validation.展开更多
To guarantee a unified response to disasters, humanitarian organizations work together via the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Although the OCHA has made great strides to imp...To guarantee a unified response to disasters, humanitarian organizations work together via the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Although the OCHA has made great strides to improve its information management and increase the availability of accurate, real-time data for disaster and humanitarian response teams, significant gaps persist. There are inefficiencies in the emergency management of data at every stage of its lifecycle: collection, processing, analysis, distribution, storage, and retrieval. Disaster risk reduction and disaster risk management are the two main tenets of the United Nations’ worldwide plan for disaster management. Information systems are crucial because of the crucial roles they play in capturing, processing, and transmitting data. The management of information is seldom discussed in published works. The goal of this study is to employ qualitative research methods to provide insight by facilitating an expanded comprehension of relevant contexts, phenomena, and individual experiences. Humanitarian workers and OCHA staffers will take part in the research. The study subjects will be chosen using a random selection procedure. Online surveys with both closed- and open-ended questions will be used to compile the data. UN OCHA offers a structure for the handling of information via which all humanitarian actors may contribute to the overall response. This research will enable the UN Office for OCHA better gather, process, analyze, disseminate, store, and retrieve data in the event of a catastrophe or humanitarian crisis.展开更多
Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses.Early detection of these diseases is essential for effective management.We propose a novel transformed wa...Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses.Early detection of these diseases is essential for effective management.We propose a novel transformed wavelet,feature-fused,pre-trained deep learning model for detecting olive leaf diseases.The proposed model combines wavelet transforms with pre-trained deep-learning models to extract discriminative features from olive leaf images.The model has four main phases:preprocessing using data augmentation,three-level wavelet transformation,learning using pre-trained deep learning models,and a fused deep learning model.In the preprocessing phase,the image dataset is augmented using techniques such as resizing,rescaling,flipping,rotation,zooming,and contrasting.In wavelet transformation,the augmented images are decomposed into three frequency levels.Three pre-trained deep learning models,EfficientNet-B7,DenseNet-201,and ResNet-152-V2,are used in the learning phase.The models were trained using the approximate images of the third-level sub-band of the wavelet transform.In the fused phase,the fused model consists of a merge layer,three dense layers,and two dropout layers.The proposed model was evaluated using a dataset of images of healthy and infected olive leaves.It achieved an accuracy of 99.72%in the diagnosis of olive leaf diseases,which exceeds the accuracy of other methods reported in the literature.This finding suggests that our proposed method is a promising tool for the early detection of olive leaf diseases.展开更多
This research introduces an innovative ensemble approach,combining Deep Residual Networks(ResNets)and Bidirectional Gated Recurrent Units(BiGRU),augmented with an Attention Mechanism,for the classification of heart ar...This research introduces an innovative ensemble approach,combining Deep Residual Networks(ResNets)and Bidirectional Gated Recurrent Units(BiGRU),augmented with an Attention Mechanism,for the classification of heart arrhythmias.The escalating prevalence of cardiovascular diseases necessitates advanced diagnostic tools to enhance accuracy and efficiency.The model leverages the deep hierarchical feature extraction capabilities of ResNets,which are adept at identifying intricate patterns within electrocardiogram(ECG)data,while BiGRU layers capture the temporal dynamics essential for understanding the sequential nature of ECG signals.The integration of an Attention Mechanism refines the model’s focus on critical segments of ECG data,ensuring a nuanced analysis that highlights the most informative features for arrhythmia classification.Evaluated on a comprehensive dataset of 12-lead ECG recordings,our ensemble model demonstrates superior performance in distinguishing between various types of arrhythmias,with an accuracy of 98.4%,a precision of 98.1%,a recall of 98%,and an F-score of 98%.This novel combination of convolutional and recurrent neural networks,supplemented by attention-driven mechanisms,advances automated ECG analysis,contributing significantly to healthcare’s machine learning applications and presenting a step forward in developing non-invasive,efficient,and reliable tools for early diagnosis and management of heart diseases.展开更多
Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues,particularly in the field of lung disease diagnosis.One promising avenue involves the use of chest X-Rays,w...Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues,particularly in the field of lung disease diagnosis.One promising avenue involves the use of chest X-Rays,which are commonly utilized in radiology.To fully exploit their potential,researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems.However,constructing and compressing these systems presents a significant challenge,as it relies heavily on the expertise of data scientists.To tackle this issue,we propose an automated approach that utilizes an evolutionary algorithm(EA)to optimize the design and compression of a convolutional neural network(CNN)for X-Ray image classification.Our approach accurately classifies radiography images and detects potential chest abnormalities and infections,including COVID-19.Furthermore,our approach incorporates transfer learning,where a pre-trainedCNNmodel on a vast dataset of chest X-Ray images is fine-tuned for the specific task of detecting COVID-19.This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model.We have validated our method via a series of experiments against state-of-the-art architectures.展开更多
The Internet of Things(IoT)is a smart networking infrastructure of physical devices,i.e.,things,that are embedded with sensors,actuators,software,and other technologies,to connect and share data with the respective se...The Internet of Things(IoT)is a smart networking infrastructure of physical devices,i.e.,things,that are embedded with sensors,actuators,software,and other technologies,to connect and share data with the respective server module.Although IoTs are cornerstones in different application domains,the device’s authenticity,i.e.,of server(s)and ordinary devices,is the most crucial issue and must be resolved on a priority basis.Therefore,various field-proven methodologies were presented to streamline the verification process of the communicating devices;however,location-aware authentication has not been reported as per our knowledge,which is a crucial metric,especially in scenarios where devices are mobile.This paper presents a lightweight and location-aware device-to-server authentication technique where the device’s membership with the nearest server is subjected to its location information along with other measures.Initially,Media Access Control(MAC)address and Advance Encryption Scheme(AES)along with a secret shared key,i.e.,λ_(i) of 128 bits,have been utilized by Trusted Authority(TA)to generate MaskIDs,which are used instead of the original ID,for every device,i.e.,server and member,and are shared in the offline phase.Secondly,TA shares a list of authentic devices,i.e.,server S_(j) and members C_(i),with every device in the IoT for the onward verification process,which is required to be executed before the initialization of the actual communication process.Additionally,every device should be located such that it lies within the coverage area of a server,and this location information is used in the authentication process.A thorough analytical analysis was carried out to check the susceptibility of the proposed and existing authentication approaches against well-known intruder attacks,i.e.,man-in-the-middle,masquerading,device,and server impersonations,etc.,especially in the IoT domain.Moreover,proposed authentication and existing state-of-the-art approaches have been simulated in the real environment of IoT to verify their performance,particularly in terms of various evaluation metrics,i.e.,processing,communication,and storage overheads.These results have verified the superiority of the proposed scheme against existing state-of-the-art approaches,preferably in terms of communication,storage,and processing costs.展开更多
As autonomous vehicles and the other supporting infrastructures(e.g.,smart cities and intelligent transportation systems)become more commonplace,the Internet of Vehicles(IoV)is getting increasingly prevalent.There hav...As autonomous vehicles and the other supporting infrastructures(e.g.,smart cities and intelligent transportation systems)become more commonplace,the Internet of Vehicles(IoV)is getting increasingly prevalent.There have been attempts to utilize Digital Twins(DTs)to facilitate the design,evaluation,and deployment of IoV-based systems,for example by supporting high-fidelity modeling,real-time monitoring,and advanced predictive capabilities.However,the literature review undertaken in this paper suggests that integrating DTs into IoV-based system design and deployment remains an understudied topic.In addition,this paper explains how DTs can benefit IoV system designers and implementers,as well as describes several challenges and opportunities for future researchers.展开更多
As the extensive use of cloud computing raises questions about the security of any personal data stored there,cryptography is being used more frequently as a security tool to protect data confidentiality and privacy i...As the extensive use of cloud computing raises questions about the security of any personal data stored there,cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment.A hypervisor is a virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware.The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment.An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance;Each hypervisor should be examined to meet specific needs.The main objective of this study is to provide accurate results to compare the performance of Hyper-V and Kernel-based Virtual Machine(KVM)while implementing different cryptographic algorithms to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs.This study evaluated the efficiency of two hypervisors,Hyper-V and KVM,in implementing six cryptographic algorithms:Rivest,Shamir,Adleman(RSA),Advanced Encryption Standard(AES),Triple Data Encryption Standard(TripleDES),Carlisle Adams and Stafford Tavares(CAST-128),BLOWFISH,and TwoFish.The study’s findings show that KVM outperforms Hyper-V,with 12.2%less Central Processing Unit(CPU)use and 12.95%less time overall for encryption and decryption operations with various file sizes.The study’s findings emphasize how crucial it is to pick a hypervisor that is appropriate for cryptographic needs in a cloud environment,which could assist both cloud service providers and end users.Future research may focus more on how various hypervisors perform while handling cryptographic workloads.展开更多
Effective user authentication is key to ensuring equipment security,data privacy,and personalized services in Internet of Things(IoT)systems.However,conventional mode-based authentication methods(e.g.,passwords and sm...Effective user authentication is key to ensuring equipment security,data privacy,and personalized services in Internet of Things(IoT)systems.However,conventional mode-based authentication methods(e.g.,passwords and smart cards)may be vulnerable to a broad range of attacks(e.g.,eavesdropping and side-channel attacks).Hence,there have been attempts to design biometric-based authentication solutions,which rely on physiological and behavioral characteristics.Behavioral characteristics need continuous monitoring and specific environmental settings,which can be challenging to implement in practice.However,we can also leverage Artificial Intelligence(AI)in the extraction and classification of physiological characteristics from IoT devices processing to facilitate authentication.Thus,we review the literature on the use of AI in physiological characteristics recognition pub-lished after 2015.We use the three-layer architecture of the IoT(i.e.,sensing layer,feature layer,and algorithm layer)to guide the discussion of existing approaches and their limitations.We also identify a number of future research opportunities,which will hopefully guide the design of next generation solutions.展开更多
Chronic kidney disease(CKD)is a major health concern today,requiring early and accurate diagnosis.Machine learning has emerged as a powerful tool for disease detection,and medical professionals are increasingly using ...Chronic kidney disease(CKD)is a major health concern today,requiring early and accurate diagnosis.Machine learning has emerged as a powerful tool for disease detection,and medical professionals are increasingly using ML classifier algorithms to identify CKD early.This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California,UC Irvine Machine Learning repository.The research introduces TrioNet,an ensemble model combining extreme gradient boosting,random forest,and extra tree classifier,which excels in providing highly accurate predictions for CKD.Furthermore,K nearest neighbor(KNN)imputer is utilized to deal withmissing values while synthetic minority oversampling(SMOTE)is used for class-imbalance problems.To ascertain the efficacy of the proposed model,a comprehensive comparative analysis is conducted with various machine learning models.The proposed TrioNet using KNN imputer and SMOTE outperformed other models with 98.97%accuracy for detectingCKD.This in-depth analysis demonstrates the model’s capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.展开更多
Aim:This study aims to establish an artificial intelligence model,ThyroidNet,to diagnose thyroid nodules using deep learning techniques accurately.Methods:A novel method,ThyroidNet,is introduced and evaluated based on...Aim:This study aims to establish an artificial intelligence model,ThyroidNet,to diagnose thyroid nodules using deep learning techniques accurately.Methods:A novel method,ThyroidNet,is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules.First,we propose the multitask TransUnet,which combines the TransUnet encoder and decoder with multitask learning.Second,we propose the DualLoss function,tailored to the thyroid nodule localization and classification tasks.It balances the learning of the localization and classification tasks to help improve the model’s generalization ability.Third,we introduce strategies for augmenting the data.Finally,we submit a novel deep learning model,ThyroidNet,to accurately detect thyroid nodules.Results:ThyroidNet was evaluated on private datasets and was comparable to other existing methods,including U-Net and TransUnet.Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules.It achieved improved accuracy of 3.9%and 1.5%,respectively.Conclusion:ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks.Future research directions include optimization of the model structure,expansion of the dataset size,reduction of computational complexity and memory requirements,and exploration of additional applications of ThyroidNet in medical image analysis.展开更多
Face recognition (FR) technology has numerous applications in artificial intelligence including biometrics, security,authentication, law enforcement, and surveillance. Deep learning (DL) models, notably convolutional ...Face recognition (FR) technology has numerous applications in artificial intelligence including biometrics, security,authentication, law enforcement, and surveillance. Deep learning (DL) models, notably convolutional neuralnetworks (CNNs), have shown promising results in the field of FR. However CNNs are easily fooled since theydo not encode position and orientation correlations between features. Hinton et al. envisioned Capsule Networksas a more robust design capable of retaining pose information and spatial correlations to recognize objects morelike the brain does. Lower-level capsules hold 8-dimensional vectors of attributes like position, hue, texture, andso on, which are routed to higher-level capsules via a new routing by agreement algorithm. This provides capsulenetworks with viewpoint invariance, which has previously evaded CNNs. This research presents a FR model basedon capsule networks that was tested using the LFW dataset, COMSATS face dataset, and own acquired photos usingcameras measuring 128 × 128 pixels, 40 × 40 pixels, and 30 × 30 pixels. The trained model outperforms state-ofthe-art algorithms, achieving 95.82% test accuracy and performing well on unseen faces that have been blurred orrotated. Additionally, the suggested model outperformed the recently released approaches on the COMSATS facedataset, achieving a high accuracy of 92.47%. Based on the results of this research as well as previous results, capsulenetworks perform better than deeper CNNs on unobserved altered data because of their special equivarianceproperties.展开更多
The Internet of Things(IoT)is growing rapidly and impacting almost every aspect of our lives,fromwearables and healthcare to security,traffic management,and fleet management systems.This has generated massive volumes ...The Internet of Things(IoT)is growing rapidly and impacting almost every aspect of our lives,fromwearables and healthcare to security,traffic management,and fleet management systems.This has generated massive volumes of data and security,and data privacy risks are increasing with the advancement of technology and network connections.Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection owing to their vulnerability to single-point OF failure.Additionally,conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices.Previous machine learning approaches were also unable to detect denial-of-service(DoS)attacks.This study introduced a novel decentralized and secure framework for blockchain integration.To avoid single-point OF failure,an accredited access control scheme is incorporated,combining blockchain with local peers to record each transaction and verify the signature to access.Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters,managing keys,and revoking users on the blockchain.An innovative contract-based DOS attack mitigation method is also incorporated to effectively validate devices with intelligent contracts as trusted or untrusted,preventing the server from becoming overwhelmed.The proposed framework effectively controls access,safeguards data privacy,and reduces the risk of cyberattacks.The results depict that the suggested framework outperforms the results in terms of accuracy,precision,sensitivity,recall,and F-measure at 96.9%,98.43%,98.8%,98.43%,and 98.4%,respectively.展开更多
End-user computing empowers non-developers to manage data and applications, enhancing collaboration and efficiency. Spreadsheets, a prime example of end-user programming environments widely used in business for data a...End-user computing empowers non-developers to manage data and applications, enhancing collaboration and efficiency. Spreadsheets, a prime example of end-user programming environments widely used in business for data analysis. However, Excel functionalities have limits compared to dedicated programming languages. This paper addresses this gap by proposing a prototype for integrating Python’s capabilities into Excel through on-premises desktop to build custom spreadsheet functions with Python. This approach overcomes potential latency issues associated with cloud-based solutions. This prototype utilizes Excel-DNA and IronPython. Excel-DNA allows creating custom Python functions that seamlessly integrate with Excel’s calculation engine. IronPython enables the execution of these Python (CSFs) directly within Excel. C# and VSTO add-ins form the core components, facilitating communication between Python and Excel. This approach empowers users with a potentially open-ended set of Python (CSFs) for tasks like mathematical calculations, statistical analysis, and even predictive modeling, all within the familiar Excel interface. This prototype demonstrates smooth integration, allowing users to call Python (CSFs) just like standard Excel functions. This research contributes to enhancing spreadsheet capabilities for end-user programmers by leveraging Python’s power within Excel. Future research could explore expanding data analysis capabilities by expanding the (CSFs) functions for complex calculations, statistical analysis, data manipulation, and even external library integration. The possibility of integrating machine learning models through the (CSFs) functions within the familiar Excel environment.展开更多
In light of the rapid growth and development of social media, it has become the focus of interest in many different scientific fields. They seek to extract useful information from it, and this is called (knowledge), s...In light of the rapid growth and development of social media, it has become the focus of interest in many different scientific fields. They seek to extract useful information from it, and this is called (knowledge), such as extracting information related to people’s behaviors and interactions to analyze feelings or understand the behavior of users or groups, and many others. This extracted knowledge has a very important role in decision-making, creating and improving marketing objectives and competitive advantage, monitoring events, whether political or economic, and development in all fields. Therefore, to extract this knowledge, we need to analyze the vast amount of data found within social media using the most popular data mining techniques and applications related to social media sites.展开更多
In the past decade,online Peer-to-Peer(P2P)lending platforms have transformed the lending industry,which has been historically dominated by commercial banks.Information technology breakthroughs such as big data-based ...In the past decade,online Peer-to-Peer(P2P)lending platforms have transformed the lending industry,which has been historically dominated by commercial banks.Information technology breakthroughs such as big data-based financial technologies(Fintech)have been identified as important disruptive driving forces for this paradigm shift.In this paper,we take an information economics perspective to investigate how big data affects the transformation of the lending industry.By identifying how signaling and search costs are reduced by big data analytics for credit risk management of P2P lending,we discuss how information asymmetry is reduced in the big data era.Rooted in the lending business,we propose a theory on the economics of big data and outline a number of research opportunities and challenging issues.展开更多
This paper deals with the robust control problem for a class of uncertain nonlinear networked systems with stochastic communication delays via sliding mode conception (SMC). A sequence of variables obeying Bernoulli...This paper deals with the robust control problem for a class of uncertain nonlinear networked systems with stochastic communication delays via sliding mode conception (SMC). A sequence of variables obeying Bernoulli distribution are employed to model the randomly occurring communication delays which could be different for different state variables. A discrete switching function that is different from those in the existing literature is first proposed. Then, expressed as the feasibility of a linear matrix inequality (LMI) with an equality constraint, sufficient conditions are derived in order to ensure the globally mean-square asymptotic stability of the system dynamics on the sliding surface. A discrete-time SMC controller is then synthesized to guarantee the discrete-time sliding mode reaching condition with the specified sliding surface. Finally, a simulation example is given to show the effectiveness of the proposed method.展开更多
Cyber-physical systems(CPS)are increasingly commonplace,with applications in energy,health,transportation,and many other sectors.One of the major requirements in CPS is that the interaction between cyber-world and man...Cyber-physical systems(CPS)are increasingly commonplace,with applications in energy,health,transportation,and many other sectors.One of the major requirements in CPS is that the interaction between cyber-world and man-made physical world(exchanging and sharing of data and information with other physical objects and systems)must be safe,especially in bi-directional communications.In particular,there is a need to suitably address security and/or privacy concerns in this human-in-the-loop CPS ecosystem.However,existing centralized architecture models in CPS,and also the more general IoT systems,have a number of associated limitations,in terms of single point of failure,data privacy,security,robustness,etc.Such limitations reinforce the importance of designing reliable,secure and privacy-preserving distributed solutions and other novel approaches,such as those based on blockchain technology due to its features(e.g.,decentralization,transparency and immutability of data).This is the focus of this special issue.展开更多
Internet of Things(IoT)devices work mainly in wireless mediums;requiring different Intrusion Detection System(IDS)kind of solutions to leverage 802.11 header information for intrusion detection.Wireless-specific traff...Internet of Things(IoT)devices work mainly in wireless mediums;requiring different Intrusion Detection System(IDS)kind of solutions to leverage 802.11 header information for intrusion detection.Wireless-specific traffic features with high information gain are primarily found in data link layers rather than application layers in wired networks.This survey investigates some of the complexities and challenges in deploying wireless IDS in terms of data collection methods,IDS techniques,IDS placement strategies,and traffic data analysis techniques.This paper’s main finding highlights the lack of available network traces for training modern machine-learning models against IoT specific intrusions.Specifically,the Knowledge Discovery in Databases(KDD)Cup dataset is reviewed to highlight the design challenges of wireless intrusion detection based on current data attributes and proposed several guidelines to future-proof following traffic capture methods in the wireless network(WN).The paper starts with a review of various intrusion detection techniques,data collection methods and placement methods.The main goal of this paper is to study the design challenges of deploying intrusion detection system in a wireless environment.Intrusion detection system deployment in a wireless environment is not as straightforward as in the wired network environment due to the architectural complexities.So this paper reviews the traditional wired intrusion detection deployment methods and discusses how these techniques could be adopted into the wireless environment and also highlights the design challenges in the wireless environment.The main wireless environments to look into would be Wireless Sensor Networks(WSN),Mobile Ad Hoc Networks(MANET)and IoT as this are the future trends and a lot of attacks have been targeted into these networks.So it is very crucial to design an IDS specifically to target on the wireless networks.展开更多
基金Ministry of Higher Education of Malaysia under theResearch GrantLRGS/1/2019/UKM-UKM/5/2 and Princess Nourah bint Abdulrahman University for financing this researcher through Supporting Project Number(PNURSP2024R235),Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Due to the overwhelming characteristics of the Internet of Things(IoT)and its adoption in approximately every aspect of our lives,the concept of individual devices’privacy has gained prominent attention from both customers,i.e.,people,and industries as wearable devices collect sensitive information about patients(both admitted and outdoor)in smart healthcare infrastructures.In addition to privacy,outliers or noise are among the crucial issues,which are directly correlated with IoT infrastructures,as most member devices are resource-limited and could generate or transmit false data that is required to be refined before processing,i.e.,transmitting.Therefore,the development of privacy-preserving information fusion techniques is highly encouraged,especially those designed for smart IoT-enabled domains.In this paper,we are going to present an effective hybrid approach that can refine raw data values captured by the respectivemember device before transmission while preserving its privacy through the utilization of the differential privacy technique in IoT infrastructures.Sliding window,i.e.,δi based dynamic programming methodology,is implemented at the device level to ensure precise and accurate detection of outliers or noisy data,and refine it prior to activation of the respective transmission activity.Additionally,an appropriate privacy budget has been selected,which is enough to ensure the privacy of every individualmodule,i.e.,a wearable device such as a smartwatch attached to the patient’s body.In contrast,the end module,i.e.,the server in this case,can extract important information with approximately the maximum level of accuracy.Moreover,refined data has been processed by adding an appropriate nose through the Laplace mechanism to make it useless or meaningless for the adversary modules in the IoT.The proposed hybrid approach is trusted from both the device’s privacy and the integrity of the transmitted information perspectives.Simulation and analytical results have proved that the proposed privacy-preserving information fusion technique for wearable devices is an ideal solution for resource-constrained infrastructures such as IoT and the Internet ofMedical Things,where both device privacy and information integrity are important.Finally,the proposed hybrid approach is proven against well-known intruder attacks,especially those related to the privacy of the respective device in IoT infrastructures.
基金[King Abdulaziz University][Deanship of Scientific Research]Grant Number[KEP-PHD-20-611-42].
文摘Recently,researchers have shown increasing interest in combining more than one programming model into systems running on high performance computing systems(HPCs)to achieve exascale by applying parallelism at multiple levels.Combining different programming paradigms,such as Message Passing Interface(MPI),Open Multiple Processing(OpenMP),and Open Accelerators(OpenACC),can increase computation speed and improve performance.During the integration of multiple models,the probability of runtime errors increases,making their detection difficult,especially in the absence of testing techniques that can detect these errors.Numerous studies have been conducted to identify these errors,but no technique exists for detecting errors in three-level programming models.Despite the increasing research that integrates the three programming models,MPI,OpenMP,and OpenACC,a testing technology to detect runtime errors,such as deadlocks and race conditions,which can arise from this integration has not been developed.Therefore,this paper begins with a definition and explanation of runtime errors that result fromintegrating the three programming models that compilers cannot detect.For the first time,this paper presents a classification of operational errors that can result from the integration of the three models.This paper also proposes a parallel hybrid testing technique for detecting runtime errors in systems built in the C++programming language that uses the triple programming models MPI,OpenMP,and OpenACC.This hybrid technology combines static technology and dynamic technology,given that some errors can be detected using static techniques,whereas others can be detected using dynamic technology.The hybrid technique can detect more errors because it combines two distinct technologies.The proposed static technology detects a wide range of error types in less time,whereas a portion of the potential errors that may or may not occur depending on the 4502 CMC,2023,vol.74,no.2 operating environment are left to the dynamic technology,which completes the validation.
文摘To guarantee a unified response to disasters, humanitarian organizations work together via the United Nations Office for the Coordination of Humanitarian Affairs (OCHA). Although the OCHA has made great strides to improve its information management and increase the availability of accurate, real-time data for disaster and humanitarian response teams, significant gaps persist. There are inefficiencies in the emergency management of data at every stage of its lifecycle: collection, processing, analysis, distribution, storage, and retrieval. Disaster risk reduction and disaster risk management are the two main tenets of the United Nations’ worldwide plan for disaster management. Information systems are crucial because of the crucial roles they play in capturing, processing, and transmitting data. The management of information is seldom discussed in published works. The goal of this study is to employ qualitative research methods to provide insight by facilitating an expanded comprehension of relevant contexts, phenomena, and individual experiences. Humanitarian workers and OCHA staffers will take part in the research. The study subjects will be chosen using a random selection procedure. Online surveys with both closed- and open-ended questions will be used to compile the data. UN OCHA offers a structure for the handling of information via which all humanitarian actors may contribute to the overall response. This research will enable the UN Office for OCHA better gather, process, analyze, disseminate, store, and retrieve data in the event of a catastrophe or humanitarian crisis.
文摘Olive trees are susceptible to a variety of diseases that can cause significant crop damage and economic losses.Early detection of these diseases is essential for effective management.We propose a novel transformed wavelet,feature-fused,pre-trained deep learning model for detecting olive leaf diseases.The proposed model combines wavelet transforms with pre-trained deep-learning models to extract discriminative features from olive leaf images.The model has four main phases:preprocessing using data augmentation,three-level wavelet transformation,learning using pre-trained deep learning models,and a fused deep learning model.In the preprocessing phase,the image dataset is augmented using techniques such as resizing,rescaling,flipping,rotation,zooming,and contrasting.In wavelet transformation,the augmented images are decomposed into three frequency levels.Three pre-trained deep learning models,EfficientNet-B7,DenseNet-201,and ResNet-152-V2,are used in the learning phase.The models were trained using the approximate images of the third-level sub-band of the wavelet transform.In the fused phase,the fused model consists of a merge layer,three dense layers,and two dropout layers.The proposed model was evaluated using a dataset of images of healthy and infected olive leaves.It achieved an accuracy of 99.72%in the diagnosis of olive leaf diseases,which exceeds the accuracy of other methods reported in the literature.This finding suggests that our proposed method is a promising tool for the early detection of olive leaf diseases.
基金supported by the research project—Application of Machine Learning Methods for Early Diagnosis of Pathologies of the Cardiovascular System funded by the Ministry of Science and Higher Education of the Republic of Kazakhstan.Grant No.IRN AP13068289.
文摘This research introduces an innovative ensemble approach,combining Deep Residual Networks(ResNets)and Bidirectional Gated Recurrent Units(BiGRU),augmented with an Attention Mechanism,for the classification of heart arrhythmias.The escalating prevalence of cardiovascular diseases necessitates advanced diagnostic tools to enhance accuracy and efficiency.The model leverages the deep hierarchical feature extraction capabilities of ResNets,which are adept at identifying intricate patterns within electrocardiogram(ECG)data,while BiGRU layers capture the temporal dynamics essential for understanding the sequential nature of ECG signals.The integration of an Attention Mechanism refines the model’s focus on critical segments of ECG data,ensuring a nuanced analysis that highlights the most informative features for arrhythmia classification.Evaluated on a comprehensive dataset of 12-lead ECG recordings,our ensemble model demonstrates superior performance in distinguishing between various types of arrhythmias,with an accuracy of 98.4%,a precision of 98.1%,a recall of 98%,and an F-score of 98%.This novel combination of convolutional and recurrent neural networks,supplemented by attention-driven mechanisms,advances automated ECG analysis,contributing significantly to healthcare’s machine learning applications and presenting a step forward in developing non-invasive,efficient,and reliable tools for early diagnosis and management of heart diseases.
基金via funding from Prince Sattam bin Abdulaziz University Project Number(PSAU/2023/R/1444).
文摘Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues,particularly in the field of lung disease diagnosis.One promising avenue involves the use of chest X-Rays,which are commonly utilized in radiology.To fully exploit their potential,researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems.However,constructing and compressing these systems presents a significant challenge,as it relies heavily on the expertise of data scientists.To tackle this issue,we propose an automated approach that utilizes an evolutionary algorithm(EA)to optimize the design and compression of a convolutional neural network(CNN)for X-Ray image classification.Our approach accurately classifies radiography images and detects potential chest abnormalities and infections,including COVID-19.Furthermore,our approach incorporates transfer learning,where a pre-trainedCNNmodel on a vast dataset of chest X-Ray images is fine-tuned for the specific task of detecting COVID-19.This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model.We have validated our method via a series of experiments against state-of-the-art architectures.
文摘The Internet of Things(IoT)is a smart networking infrastructure of physical devices,i.e.,things,that are embedded with sensors,actuators,software,and other technologies,to connect and share data with the respective server module.Although IoTs are cornerstones in different application domains,the device’s authenticity,i.e.,of server(s)and ordinary devices,is the most crucial issue and must be resolved on a priority basis.Therefore,various field-proven methodologies were presented to streamline the verification process of the communicating devices;however,location-aware authentication has not been reported as per our knowledge,which is a crucial metric,especially in scenarios where devices are mobile.This paper presents a lightweight and location-aware device-to-server authentication technique where the device’s membership with the nearest server is subjected to its location information along with other measures.Initially,Media Access Control(MAC)address and Advance Encryption Scheme(AES)along with a secret shared key,i.e.,λ_(i) of 128 bits,have been utilized by Trusted Authority(TA)to generate MaskIDs,which are used instead of the original ID,for every device,i.e.,server and member,and are shared in the offline phase.Secondly,TA shares a list of authentic devices,i.e.,server S_(j) and members C_(i),with every device in the IoT for the onward verification process,which is required to be executed before the initialization of the actual communication process.Additionally,every device should be located such that it lies within the coverage area of a server,and this location information is used in the authentication process.A thorough analytical analysis was carried out to check the susceptibility of the proposed and existing authentication approaches against well-known intruder attacks,i.e.,man-in-the-middle,masquerading,device,and server impersonations,etc.,especially in the IoT domain.Moreover,proposed authentication and existing state-of-the-art approaches have been simulated in the real environment of IoT to verify their performance,particularly in terms of various evaluation metrics,i.e.,processing,communication,and storage overheads.These results have verified the superiority of the proposed scheme against existing state-of-the-art approaches,preferably in terms of communication,storage,and processing costs.
基金supported by the Natural Science Foundation of Jiangsu Province of China under grant no.BK20211284the Financial and Science Technology Plan Project of Xinjiang Production and Construction Corps under grant no.2020DB005.
文摘As autonomous vehicles and the other supporting infrastructures(e.g.,smart cities and intelligent transportation systems)become more commonplace,the Internet of Vehicles(IoV)is getting increasingly prevalent.There have been attempts to utilize Digital Twins(DTs)to facilitate the design,evaluation,and deployment of IoV-based systems,for example by supporting high-fidelity modeling,real-time monitoring,and advanced predictive capabilities.However,the literature review undertaken in this paper suggests that integrating DTs into IoV-based system design and deployment remains an understudied topic.In addition,this paper explains how DTs can benefit IoV system designers and implementers,as well as describes several challenges and opportunities for future researchers.
文摘As the extensive use of cloud computing raises questions about the security of any personal data stored there,cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment.A hypervisor is a virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware.The choice of hypervisor can significantly impact the performance of cryptographic operations in the cloud environment.An important issue that must be carefully examined is that no hypervisor is completely superior in terms of performance;Each hypervisor should be examined to meet specific needs.The main objective of this study is to provide accurate results to compare the performance of Hyper-V and Kernel-based Virtual Machine(KVM)while implementing different cryptographic algorithms to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs.This study evaluated the efficiency of two hypervisors,Hyper-V and KVM,in implementing six cryptographic algorithms:Rivest,Shamir,Adleman(RSA),Advanced Encryption Standard(AES),Triple Data Encryption Standard(TripleDES),Carlisle Adams and Stafford Tavares(CAST-128),BLOWFISH,and TwoFish.The study’s findings show that KVM outperforms Hyper-V,with 12.2%less Central Processing Unit(CPU)use and 12.95%less time overall for encryption and decryption operations with various file sizes.The study’s findings emphasize how crucial it is to pick a hypervisor that is appropriate for cryptographic needs in a cloud environment,which could assist both cloud service providers and end users.Future research may focus more on how various hypervisors perform while handling cryptographic workloads.
基金funded in part by the National Natural Science Foundation of China under Grant No.61872038in part by the Fundamental Research Funds for the Central Universities under Grant No.FRF-GF-20-15B.
文摘Effective user authentication is key to ensuring equipment security,data privacy,and personalized services in Internet of Things(IoT)systems.However,conventional mode-based authentication methods(e.g.,passwords and smart cards)may be vulnerable to a broad range of attacks(e.g.,eavesdropping and side-channel attacks).Hence,there have been attempts to design biometric-based authentication solutions,which rely on physiological and behavioral characteristics.Behavioral characteristics need continuous monitoring and specific environmental settings,which can be challenging to implement in practice.However,we can also leverage Artificial Intelligence(AI)in the extraction and classification of physiological characteristics from IoT devices processing to facilitate authentication.Thus,we review the literature on the use of AI in physiological characteristics recognition pub-lished after 2015.We use the three-layer architecture of the IoT(i.e.,sensing layer,feature layer,and algorithm layer)to guide the discussion of existing approaches and their limitations.We also identify a number of future research opportunities,which will hopefully guide the design of next generation solutions.
基金funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number PNURSP2024R333,Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.
文摘Chronic kidney disease(CKD)is a major health concern today,requiring early and accurate diagnosis.Machine learning has emerged as a powerful tool for disease detection,and medical professionals are increasingly using ML classifier algorithms to identify CKD early.This study explores the application of advanced machine learning techniques on a CKD dataset obtained from the University of California,UC Irvine Machine Learning repository.The research introduces TrioNet,an ensemble model combining extreme gradient boosting,random forest,and extra tree classifier,which excels in providing highly accurate predictions for CKD.Furthermore,K nearest neighbor(KNN)imputer is utilized to deal withmissing values while synthetic minority oversampling(SMOTE)is used for class-imbalance problems.To ascertain the efficacy of the proposed model,a comprehensive comparative analysis is conducted with various machine learning models.The proposed TrioNet using KNN imputer and SMOTE outperformed other models with 98.97%accuracy for detectingCKD.This in-depth analysis demonstrates the model’s capabilities and underscores its potential as a valuable tool in the diagnosis of CKD.
基金supported by MRC,UK (MC_PC_17171)Royal Society,UK (RP202G0230)+8 种基金BHF,UK (AA/18/3/34220)Hope Foundation for Cancer Research,UK (RM60G0680)GCRF,UK (P202PF11)Sino-UK Industrial Fund,UK (RP202G0289)LIAS,UK (P202ED10,P202RE969)Data Science Enhancement Fund,UK (P202RE237)Fight for Sight,UK (24NN201)Sino-UK Education Fund,UK (OP202006)BBSRC,UK (RM32G0178B8).
文摘Aim:This study aims to establish an artificial intelligence model,ThyroidNet,to diagnose thyroid nodules using deep learning techniques accurately.Methods:A novel method,ThyroidNet,is introduced and evaluated based on deep learning for the localization and classification of thyroid nodules.First,we propose the multitask TransUnet,which combines the TransUnet encoder and decoder with multitask learning.Second,we propose the DualLoss function,tailored to the thyroid nodule localization and classification tasks.It balances the learning of the localization and classification tasks to help improve the model’s generalization ability.Third,we introduce strategies for augmenting the data.Finally,we submit a novel deep learning model,ThyroidNet,to accurately detect thyroid nodules.Results:ThyroidNet was evaluated on private datasets and was comparable to other existing methods,including U-Net and TransUnet.Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules.It achieved improved accuracy of 3.9%and 1.5%,respectively.Conclusion:ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks.Future research directions include optimization of the model structure,expansion of the dataset size,reduction of computational complexity and memory requirements,and exploration of additional applications of ThyroidNet in medical image analysis.
基金Princess Nourah bint Abdulrahman University Riyadh,Saudi Arabia with Researchers Supporting Project Number:PNURSP2024R234.
文摘Face recognition (FR) technology has numerous applications in artificial intelligence including biometrics, security,authentication, law enforcement, and surveillance. Deep learning (DL) models, notably convolutional neuralnetworks (CNNs), have shown promising results in the field of FR. However CNNs are easily fooled since theydo not encode position and orientation correlations between features. Hinton et al. envisioned Capsule Networksas a more robust design capable of retaining pose information and spatial correlations to recognize objects morelike the brain does. Lower-level capsules hold 8-dimensional vectors of attributes like position, hue, texture, andso on, which are routed to higher-level capsules via a new routing by agreement algorithm. This provides capsulenetworks with viewpoint invariance, which has previously evaded CNNs. This research presents a FR model basedon capsule networks that was tested using the LFW dataset, COMSATS face dataset, and own acquired photos usingcameras measuring 128 × 128 pixels, 40 × 40 pixels, and 30 × 30 pixels. The trained model outperforms state-ofthe-art algorithms, achieving 95.82% test accuracy and performing well on unseen faces that have been blurred orrotated. Additionally, the suggested model outperformed the recently released approaches on the COMSATS facedataset, achieving a high accuracy of 92.47%. Based on the results of this research as well as previous results, capsulenetworks perform better than deeper CNNs on unobserved altered data because of their special equivarianceproperties.
文摘The Internet of Things(IoT)is growing rapidly and impacting almost every aspect of our lives,fromwearables and healthcare to security,traffic management,and fleet management systems.This has generated massive volumes of data and security,and data privacy risks are increasing with the advancement of technology and network connections.Traditional access control solutions are inadequate for establishing access control in IoT systems to provide data protection owing to their vulnerability to single-point OF failure.Additionally,conventional privacy preservation methods have high latency costs and overhead for resource-constrained devices.Previous machine learning approaches were also unable to detect denial-of-service(DoS)attacks.This study introduced a novel decentralized and secure framework for blockchain integration.To avoid single-point OF failure,an accredited access control scheme is incorporated,combining blockchain with local peers to record each transaction and verify the signature to access.Blockchain-based attribute-based cryptography is implemented to protect data storage privacy by generating threshold parameters,managing keys,and revoking users on the blockchain.An innovative contract-based DOS attack mitigation method is also incorporated to effectively validate devices with intelligent contracts as trusted or untrusted,preventing the server from becoming overwhelmed.The proposed framework effectively controls access,safeguards data privacy,and reduces the risk of cyberattacks.The results depict that the suggested framework outperforms the results in terms of accuracy,precision,sensitivity,recall,and F-measure at 96.9%,98.43%,98.8%,98.43%,and 98.4%,respectively.
文摘End-user computing empowers non-developers to manage data and applications, enhancing collaboration and efficiency. Spreadsheets, a prime example of end-user programming environments widely used in business for data analysis. However, Excel functionalities have limits compared to dedicated programming languages. This paper addresses this gap by proposing a prototype for integrating Python’s capabilities into Excel through on-premises desktop to build custom spreadsheet functions with Python. This approach overcomes potential latency issues associated with cloud-based solutions. This prototype utilizes Excel-DNA and IronPython. Excel-DNA allows creating custom Python functions that seamlessly integrate with Excel’s calculation engine. IronPython enables the execution of these Python (CSFs) directly within Excel. C# and VSTO add-ins form the core components, facilitating communication between Python and Excel. This approach empowers users with a potentially open-ended set of Python (CSFs) for tasks like mathematical calculations, statistical analysis, and even predictive modeling, all within the familiar Excel interface. This prototype demonstrates smooth integration, allowing users to call Python (CSFs) just like standard Excel functions. This research contributes to enhancing spreadsheet capabilities for end-user programmers by leveraging Python’s power within Excel. Future research could explore expanding data analysis capabilities by expanding the (CSFs) functions for complex calculations, statistical analysis, data manipulation, and even external library integration. The possibility of integrating machine learning models through the (CSFs) functions within the familiar Excel environment.
文摘In light of the rapid growth and development of social media, it has become the focus of interest in many different scientific fields. They seek to extract useful information from it, and this is called (knowledge), such as extracting information related to people’s behaviors and interactions to analyze feelings or understand the behavior of users or groups, and many others. This extracted knowledge has a very important role in decision-making, creating and improving marketing objectives and competitive advantage, monitoring events, whether political or economic, and development in all fields. Therefore, to extract this knowledge, we need to analyze the vast amount of data found within social media using the most popular data mining techniques and applications related to social media sites.
文摘In the past decade,online Peer-to-Peer(P2P)lending platforms have transformed the lending industry,which has been historically dominated by commercial banks.Information technology breakthroughs such as big data-based financial technologies(Fintech)have been identified as important disruptive driving forces for this paradigm shift.In this paper,we take an information economics perspective to investigate how big data affects the transformation of the lending industry.By identifying how signaling and search costs are reduced by big data analytics for credit risk management of P2P lending,we discuss how information asymmetry is reduced in the big data era.Rooted in the lending business,we propose a theory on the economics of big data and outline a number of research opportunities and challenging issues.
基金supported by the Engineering and Physical Sciences Research Council(EPSRC)of the UK(No.GR/S27658/01)the Royal Society of the UK and the Alexander von Humboldt Foundation of Germany
文摘This paper deals with the robust control problem for a class of uncertain nonlinear networked systems with stochastic communication delays via sliding mode conception (SMC). A sequence of variables obeying Bernoulli distribution are employed to model the randomly occurring communication delays which could be different for different state variables. A discrete switching function that is different from those in the existing literature is first proposed. Then, expressed as the feasibility of a linear matrix inequality (LMI) with an equality constraint, sufficient conditions are derived in order to ensure the globally mean-square asymptotic stability of the system dynamics on the sliding surface. A discrete-time SMC controller is then synthesized to guarantee the discrete-time sliding mode reaching condition with the specified sliding surface. Finally, a simulation example is given to show the effectiveness of the proposed method.
文摘Cyber-physical systems(CPS)are increasingly commonplace,with applications in energy,health,transportation,and many other sectors.One of the major requirements in CPS is that the interaction between cyber-world and man-made physical world(exchanging and sharing of data and information with other physical objects and systems)must be safe,especially in bi-directional communications.In particular,there is a need to suitably address security and/or privacy concerns in this human-in-the-loop CPS ecosystem.However,existing centralized architecture models in CPS,and also the more general IoT systems,have a number of associated limitations,in terms of single point of failure,data privacy,security,robustness,etc.Such limitations reinforce the importance of designing reliable,secure and privacy-preserving distributed solutions and other novel approaches,such as those based on blockchain technology due to its features(e.g.,decentralization,transparency and immutability of data).This is the focus of this special issue.
基金The authors acknowledge Jouf University,Saudi Arabia for his funding support.
文摘Internet of Things(IoT)devices work mainly in wireless mediums;requiring different Intrusion Detection System(IDS)kind of solutions to leverage 802.11 header information for intrusion detection.Wireless-specific traffic features with high information gain are primarily found in data link layers rather than application layers in wired networks.This survey investigates some of the complexities and challenges in deploying wireless IDS in terms of data collection methods,IDS techniques,IDS placement strategies,and traffic data analysis techniques.This paper’s main finding highlights the lack of available network traces for training modern machine-learning models against IoT specific intrusions.Specifically,the Knowledge Discovery in Databases(KDD)Cup dataset is reviewed to highlight the design challenges of wireless intrusion detection based on current data attributes and proposed several guidelines to future-proof following traffic capture methods in the wireless network(WN).The paper starts with a review of various intrusion detection techniques,data collection methods and placement methods.The main goal of this paper is to study the design challenges of deploying intrusion detection system in a wireless environment.Intrusion detection system deployment in a wireless environment is not as straightforward as in the wired network environment due to the architectural complexities.So this paper reviews the traditional wired intrusion detection deployment methods and discusses how these techniques could be adopted into the wireless environment and also highlights the design challenges in the wireless environment.The main wireless environments to look into would be Wireless Sensor Networks(WSN),Mobile Ad Hoc Networks(MANET)and IoT as this are the future trends and a lot of attacks have been targeted into these networks.So it is very crucial to design an IDS specifically to target on the wireless networks.