Funding: This study was funded by Taif University Researchers Supporting Project No. TURSP-2020/150, Taif University, Taif, Saudi Arabia.
Abstract: Digital signal processing of electroencephalography (EEG) data is now widely used in applications such as motor imagery classification, seizure detection and prediction, emotion classification, mental task classification, drug impact identification, and sleep state classification. With the increasing number of recorded EEG channels, effective channel selection algorithms have become essential for many of these applications. The proposed feature selection algorithm, a Guided Whale Optimization Method (Guided WOA) based on the Stochastic Fractal Search (SFS) technique, evaluates each candidate subset of channels. It can be used to select the optimal EEG channels for Brain-Computer Interfaces (BCIs), to identify essential and irrelevant characteristics in a dataset, and to eliminate unnecessary complexity. This enables the SFS-Guided WOA algorithm to choose the most appropriate EEG channels while supporting machine learning classification and the training of the classifier on the dataset. The SFS-Guided WOA algorithm is superior on the considered performance metrics, and statistical tests such as ANOVA and the Wilcoxon rank-sum test are used to demonstrate this.
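As a loose illustration of the idea above, the sketch below performs wrapper-based EEG channel selection with a simplified binary whale-optimization-style search, using classifier accuracy as the fitness. The synthetic data, KNN fitness, and all parameters are assumptions for illustration only; this is not the paper's SFS-Guided WOA implementation.

```python
# Minimal sketch of wrapper-based EEG channel selection with a simplified
# binary whale-optimization-style search. The random data, KNN fitness, and
# parameters are illustrative stand-ins, not the paper's SFS-Guided WOA.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 22          # e.g., epochs x EEG channels (features)
X = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)   # binary motor-imagery-style labels

def fitness(mask):
    """Accuracy of a KNN classifier on the selected channels (higher is better)."""
    if mask.sum() == 0:
        return 0.0
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def binary_woa(n_agents=10, n_iter=30):
    pos = rng.random((n_agents, n_channels))           # continuous whale positions
    masks = (pos > 0.5).astype(int)
    scores = np.array([fitness(m) for m in masks])
    best, best_score = masks[scores.argmax()].copy(), scores.max()
    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                          # linearly decreasing coefficient
        for i in range(n_agents):
            r = rng.random(n_channels)
            A, C = 2 * a * r - a, 2 * rng.random(n_channels)
            if rng.random() < 0.5:                      # encircling / exploring move
                D = np.abs(C * best - pos[i])
                pos[i] = best - A * D
            else:                                       # spiral move around the best whale
                D = np.abs(best - pos[i])
                l = rng.uniform(-1, 1)
                pos[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            pos[i] = np.clip(pos[i], -6, 6)             # keep positions numerically tame
            # sigmoid transfer function maps positions back to a binary channel mask
            mask = (1 / (1 + np.exp(-pos[i])) > rng.random(n_channels)).astype(int)
            score = fitness(mask)
            if score > best_score:
                best, best_score = mask.copy(), score
    return best, best_score

mask, acc = binary_woa()
print("selected channels:", np.flatnonzero(mask), "cv accuracy: %.3f" % acc)
```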
Funding: This research is funded by Taif University, TURSP-2020/150.
Abstract: Fruit classification is one of the rising fields in computer and machine vision. Many deep learning-based procedures developed so far to classify images may suffer from ill-posed issues. The performance of a classification scheme depends on the range of captured images, the number of features, the types of characteristics, the choice of features from the extracted features, and the type of classifier used. This paper proposes a novel deep learning approach consisting of a Convolutional Neural Network (CNN), a Recurrent Neural Network (RNN), and Long Short-Term Memory (LSTM) to classify fruit images. Classification accuracy depends on the extracted and selected optimal features. The CNN, RNN, and LSTM are used jointly: the CNN extracts image features, the RNN selects the optimal extracted features, and the LSTM classifies the fruits based on the features extracted and selected by the CNN and RNN. An empirical study shows the superiority of the proposed approach over the competing Support Vector Machine (SVM), Feed-forward Neural Network (FFNN), and Adaptive Neuro-Fuzzy Inference System (ANFIS) techniques for fruit image classification. The accuracy of the proposed approach is considerably better than that of the SVM, FFNN, and ANFIS schemes, and it is concluded that the proposed technique outperforms the existing ones.
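To make the CNN-to-recurrent pipeline concrete, here is a minimal Keras sketch in which a CNN extracts spatial features that are then fed, as a sequence, through an RNN and an LSTM layer before a softmax classifier. The input size, number of fruit classes, and layer widths are assumed values, not the paper's architecture.

```python
# Minimal sketch of a CNN feature extractor feeding a recurrent (RNN + LSTM)
# stage with a softmax classifier, in the spirit of the pipeline described above.
# Input size, number of fruit classes, and layer widths are assumed values.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10                                  # assumed number of fruit categories

model = models.Sequential([
    layers.Input(shape=(100, 100, 3)),            # assumed RGB fruit image size
    layers.Conv2D(32, 3, activation="relu"),      # CNN: low-level image features
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),      # CNN: higher-level features
    layers.MaxPooling2D(),
    layers.Reshape((-1, 64)),                     # treat spatial positions as a sequence
    layers.SimpleRNN(64, return_sequences=True),  # RNN stage over the feature sequence
    layers.LSTM(64),                              # LSTM aggregates the selected features
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```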
Funding: This study is supported through Taif University Researchers Supporting Project Number (TURSP-2020/150), Taif University, Taif, Saudi Arabia.
Abstract: Today, the Internet of Things (IoT) is a technology paradigm that attracts many researchers aiming to achieve high packet-delivery performance in IoT applications such as smart cities. Interconnecting physical devices such as sensors or actuators with the Internet imposes different constraints on network resources, such as packet delivery ratio, energy efficiency, and end-to-end delay. Traditional scheduling methodologies in large-scale environments such as big data smart cities cannot meet the requirements of high-performance network metrics. In big data smart city applications that need fast packet transmission, such as sending priority packets to hospitals in an emergency, an efficient scheduling mechanism is mandatory; this is the main concern of this paper. We overcome the shortcomings of the traditional scheduling algorithms used in big data smart city emergency applications. Transmission information about the priority packets is exchanged between the source nodes (i.e., people with emergency cases) and the destination nodes (i.e., hospitals) before the packets are sent, in order to reserve transmission channels and prepare the transmission sequence of these priority packets between the two parties. In our proposed mechanism, Software Defined Networking (SDN) with a centralized communication controller is responsible for determining the scheduling and processing sequences for priority packets in big data smart city environments. We compare our proposed Priority Packets Deadline First (PPDF) scheduling scheme with existing and traditional scheduling algorithms that can be used in urgent smart city applications, and show its superior network performance in terms of average waiting time, packet loss rate, priority packet end-to-end delay, and energy consumption.
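As a rough illustration of a priority-and-deadline-driven queue of the kind a centralized SDN controller could maintain, the sketch below orders packets first by priority and then by deadline. The packet fields and the two-level ordering are illustrative assumptions, not the PPDF specification.

```python
# Minimal sketch of a "priority packets, earliest deadline first" queue of the
# kind a centralized SDN controller could use; packet fields and the two-level
# (priority, deadline) ordering are illustrative assumptions, not the PPDF spec.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Packet:
    priority: int                      # 0 = emergency, larger = less urgent
    deadline: float                    # seconds until the packet must arrive
    payload: str = field(compare=False)

class PriorityDeadlineScheduler:
    def __init__(self):
        self._queue = []

    def submit(self, pkt: Packet):
        heapq.heappush(self._queue, pkt)   # ordered by (priority, deadline)

    def next_packet(self):
        return heapq.heappop(self._queue) if self._queue else None

sched = PriorityDeadlineScheduler()
sched.submit(Packet(priority=1, deadline=0.50, payload="traffic sensor update"))
sched.submit(Packet(priority=0, deadline=0.20, payload="emergency: patient vitals"))
sched.submit(Packet(priority=0, deadline=0.05, payload="emergency: ambulance route"))

while (pkt := sched.next_packet()) is not None:
    print(f"send p{pkt.priority} (deadline {pkt.deadline:.2f}s): {pkt.payload}")
```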
Funding: This study was funded by Taif University Researchers Supporting Project No. TURSP-2020/150, Taif University, Taif, Saudi Arabia.
Abstract: Hemoglobin and glucose levels can be determined by drawing a blood sample from the human body with a needle and analyzing it. Hemoglobin (HGB) is a critical component of the human body because it transports oxygen from the lungs to the body's tissues and returns carbon dioxide from the tissues to the lungs. Calculating the HGB level is a critical step in any blood analysis task, and HGB levels often indicate whether a person has anemia or polycythemia vera. Constructing ensemble models by combining two or more base machine learning (ML) models can produce an improved model. The purpose of this work is to present a weighted average ensemble model for predicting hemoglobin levels. An optimization method is used to obtain the ensemble's optimal weights; in this work, the optimal weights are determined using a sine cosine algorithm based on stochastic fractal search (SCSFS). The proposed SCSFS ensemble is compared to Decision Tree, Multilayer Perceptron (MLP), Support Vector Regression (SVR), and Random Forest regressors as model-based approaches, as well as to the simple average ensemble model. The results indicate that the proposed model outperforms the existing models and provides a hemoglobin estimate close to the true value.
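The sketch below shows the weighted-average ensemble idea in isolation: base regressors make validation predictions, and the combination weights are tuned to minimize validation error. Here scipy's SLSQP minimizer is used as a simple stand-in for the paper's SCSFS optimizer, and the data and base models are synthetic.

```python
# Minimal sketch of a weighted-average regression ensemble. The weights are
# tuned with scipy's SLSQP minimizer on validation MSE as a simple stand-in
# for the paper's SCSFS optimizer; data and base models are synthetic.
import numpy as np
from scipy.optimize import minimize
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

X, y = make_regression(n_samples=400, n_features=8, noise=10.0, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

models = [DecisionTreeRegressor(random_state=0), SVR(),
          RandomForestRegressor(random_state=0)]
preds = np.column_stack([m.fit(X_tr, y_tr).predict(X_val) for m in models])

def val_mse(w):
    return mean_squared_error(y_val, preds @ w)

n = preds.shape[1]
res = minimize(val_mse, x0=np.full(n, 1.0 / n), method="SLSQP",
               bounds=[(0, 1)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("optimized weights:", np.round(res.x, 3))
print("average-ensemble MSE :", val_mse(np.full(n, 1.0 / n)))
print("weighted-ensemble MSE:", val_mse(res.x))
```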
Funding: This work was supported by Taif University (Taif, Saudi Arabia) through the Researchers Supporting Project Number TURSP-2020/150.
Abstract: Breast cancer (BC) is the most widely recognized cancer in women worldwide. By 2018, 627,000 women had died of breast cancer (World Health Organization Report, 2018). To diagnose BC, tumours are evaluated by analyzing histological specimens. At present, the Nottingham (Bloom-Richardson) framework is the least expensive approach used to grade BC aggressiveness. Pathologists consider three elements: (1) mitotic count, (2) gland formation, and (3) nuclear atypia; this is a laborious process subject to variation in experts' opinions. Recently, some algorithms have been proposed for the detection of mitotic cells, but nuclear atypia in breast cancer histopathology has not received much attention. Nuclear atypia analysis is performed not only to grade BC but also to provide critical information for discriminating normal breast tissue, non-invasive lesions (usual ductal hyperplasia, atypical ductal hyperplasia), pre-invasive lesions (ductal carcinoma in situ), and invasive breast lesions. We propose a deep stacked multi-layer autoencoder ensemble with a softmax layer for feature extraction and classification. The classification results show the value of the multi-layer autoencoder model in the evaluation of nuclear pleomorphism. The proposed method shows promising results, making it well suited to breast cancer grading.
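To illustrate the autoencoder-plus-softmax pattern, the sketch below pretrains a single stacked autoencoder on unlabeled feature vectors and then trains a softmax classifier on the encoder output (the paper ensembles several such autoencoders). The feature dimension, layer sizes, and the three atypia grades are assumed placeholders, and random vectors stand in for extracted nuclei/patch features.

```python
# Minimal sketch of a stacked multi-layer autoencoder used for feature
# extraction, with a softmax layer for classification. Feature dimension,
# layer sizes, and the 3 atypia grades are assumed placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
X = rng.random((500, 256)).astype("float32")   # stand-in patch/nuclei feature vectors
y = rng.integers(0, 3, size=500)               # stand-in atypia grades

# 1) Stacked autoencoder: 256 -> 128 -> 64 -> 128 -> 256, trained to reconstruct X.
encoder = models.Sequential([
    layers.Input(shape=(256,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(64, activation="relu"),
])
decoder = models.Sequential([
    layers.Dense(128, activation="relu"),
    layers.Dense(256, activation="sigmoid"),
])
autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X, X, epochs=5, batch_size=32, verbose=0)   # unsupervised pretraining

# 2) Softmax classifier on top of the (frozen) encoder features.
encoder.trainable = False
classifier = models.Sequential([encoder, layers.Dense(3, activation="softmax")])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
classifier.fit(X, y, epochs=5, batch_size=32, verbose=0)
print("train accuracy:", classifier.evaluate(X, y, verbose=0)[1])
```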
Funding: This work was supported by Taif University (Taif, Saudi Arabia) through the Researchers Supporting Project Number TURSP-2020/150.
Abstract: Many patients have begun to use mobile applications to handle different health needs because of better access to high-speed Internet and smartphones. These devices and mobile applications are now increasingly used and integrated through the medical Internet of Things (mIoT). mIoT is an important part of the digital transformation of healthcare because it can introduce new business models, allow efficiency improvements and cost control, and improve the patient experience. In the mIoT system, when migrating from traditional medical services to electronic medical services, patient protection and privacy are priorities for every stakeholder. It is therefore recommended to use different user authentication and authorization methods to improve security and privacy. In this paper, our proposed model involves a shared identity verification process covering different situations in the e-health system. We aim to reduce the joint key authentication model to a strict and formal specification. We use the AVISPA tool, through the well-known HLPSL specification language, to verify user authentication and smart-card use cases in a user-friendly environment. Our model has economic and strategic advantages for healthcare organizations and healthcare workers: medical staff can increase their knowledge and analyze medical data more easily. The model can continuously track health indicators to automatically manage treatments and monitor health data in real time. Further, it can help patients prevent chronic diseases through enhanced cognitive-function support. The need for efficient identity verification in e-health care is even more crucial for cognitive mitigation because we increasingly rely on mIoT systems.
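The paper's scheme is specified in HLPSL and checked with AVISPA; as a loosely related illustration only, the sketch below shows a minimal HMAC challenge-response between a "smart card" and an e-health server. The keys, identifiers, and message flow are hypothetical and are not the authors' authentication model.

```python
# Minimal, illustrative HMAC challenge-response between a "smart card" and an
# e-health server. This is NOT the paper's authentication model (which is
# specified in HLPSL and checked with AVISPA); keys and names are hypothetical.
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(32)      # provisioned on the card and the server

class Server:
    def __init__(self, key):
        self.key = key
    def challenge(self):
        self.nonce = secrets.token_bytes(16)
        return self.nonce
    def verify(self, card_id: bytes, response: bytes) -> bool:
        expected = hmac.new(self.key, self.nonce + card_id, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)   # constant-time comparison

class SmartCard:
    def __init__(self, key, card_id: bytes):
        self.key, self.card_id = key, card_id
    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(self.key, nonce + self.card_id, hashlib.sha256).digest()

server = Server(SHARED_KEY)
card = SmartCard(SHARED_KEY, card_id=b"patient-042")
nonce = server.challenge()
print("authenticated:", server.verify(card.card_id, card.respond(nonce)))
```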
Funding: This work was supported by Taif University (Taif, Saudi Arabia) through the Researchers Supporting Project Number TURSP-2020/150.
Abstract: Industrial automation or assembly automation is a strictly monitored environment in which changes occur at high speed. There are many types of entities in the environment of interest, and the data generated by these devices is huge. In addition, because robustness is achieved by sensing redundant data, the data volume becomes even larger. The data-generating device, whether a sensing device or a physical device, streams its data to a higher-level decision device for computation, so that it can be driven and configured according to the updated conditions. With the emergence of the Industry 4.0 concept, which encompasses a variety of automation technologies, a large volume of data is generated by numerous devices. Therefore, the data generated for industrial automation requires a dedicated Information Architecture (IA). The IA should be able to satisfy hard real-time constraints so that the environment and the instantaneous configuration of all participants can change spontaneously. To illustrate its applicability, we use the smart grid as an example. A smart grid system needs an IA that fulfils the communication requirements for reporting hard real-time changes in power immediately throughout the system. In addition, a smart grid system must report changes on either side of the system, i.e., consumers and suppliers configure and reconfigure the system according to the changes. In this article, we propose an analogy with a physical phenomenon: a point charge represents a data-generating device, a streamline of electric flux represents a data flow, the charge distribution on a closed surface represents a configuration, and changes in field intensity represent changes in the physical process, e.g., the smart grid. The analogy is explained through metaphors, and the structure-mapping framework is used for its theoretical proof. The proposed analogy provides a theoretical basis for developing information architectures that can represent data flows, definition changes (deterministic and non-deterministic), events, and instantaneous configuration definitions of entities in the system. It also provides a mechanism for performing computation during communication, using a simple concept on the closed surface to integrate the two layers of cyber-physical systems (computation and communication on one side, the physical process on the other). The proposed analogy is a good candidate for implementation in smart grid security.
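For readers unfamiliar with the physics behind the analogy, the tiny sketch below evaluates the Gauss's-law relation it rests on: the total electric flux through a closed surface equals the enclosed charge divided by the vacuum permittivity. Here each "charge" stands in for a data-generating device and the flux for its aggregate data flow; the device names and values are hypothetical.

```python
# Tiny numeric illustration of the Gauss's-law relation behind the analogy:
# total flux through a closed surface = enclosed "charge" / eps0. Each charge
# stands in for a data-generating device; the values are hypothetical.
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

# (device name, "charge" i.e. data-generation intensity, inside the surface?)
devices = [("smart-meter-A", 3e-9, True),
           ("smart-meter-B", 5e-9, True),
           ("substation-sensor", 2e-9, False)]   # outside: contributes no net flux

enclosed = sum(q for _, q, inside in devices if inside)
flux = enclosed / EPS0
print(f"enclosed charge: {enclosed:.1e} C  ->  total flux: {flux:.1f} V*m")
```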
Funding: This study was funded by Taif University, Taif, Saudi Arabia, under Grant No. TURSP-2020/150.
Abstract: In recent years, it has been observed that the disclosure of information increases the risk of terrorism. Without restricting the accessibility of information, providing security is difficult, so it is time to fill the gap between the security and the accessibility of information. In fact, security tools should be usable in ways that improve both the security and the accessibility of information. Although security and accessibility do not influence each other directly, some of their factors influence each other indirectly. Attributes play an important role in bridging the gap between security and accessibility. In this paper, we identify the key attributes of accessibility and security that affect each other directly and indirectly, such as confidentiality, integrity, availability, and severity. The significance of each attribute, expressed by its obtained weight, matters because of its effect on security during the big data security life cycle. For the proposed work, the researchers used the Fuzzy Analytic Hierarchy Process (Fuzzy AHP). The findings show that Fuzzy AHP is a very accurate mechanism for determining the best security solution in a real-time healthcare context. The study also examines rapidly evolving security technologies in healthcare that could help improve healthcare services, along with future prospects in this area.
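As a minimal sketch of how fuzzy AHP turns pairwise judgments into attribute weights, the code below applies Buckley's geometric-mean method to the four attributes named above. The triangular-fuzzy pairwise judgments are hypothetical and do not reproduce the paper's survey data.

```python
# Minimal sketch of fuzzy AHP attribute weighting (Buckley's geometric-mean
# method) for four attributes: confidentiality, integrity, availability,
# severity. The triangular-fuzzy pairwise judgments below are hypothetical.
import numpy as np

attrs = ["confidentiality", "integrity", "availability", "severity"]

# Pairwise comparison matrix of triangular fuzzy numbers (l, m, u):
# row i vs column j gives how much more important attribute i is than j.
one = (1, 1, 1)
M = [[one,             (2, 3, 4),     (1, 2, 3),       (3, 4, 5)],
     [(1/4, 1/3, 1/2), one,           (1/3, 1/2, 1),   (1, 2, 3)],
     [(1/3, 1/2, 1),   (1, 2, 3),     one,             (2, 3, 4)],
     [(1/5, 1/4, 1/3), (1/3, 1/2, 1), (1/4, 1/3, 1/2), one]]

n = len(attrs)
# Fuzzy geometric mean of each row: r_i = (prod_j a_ij)^(1/n), component-wise.
r = np.array([[np.prod([M[i][j][k] for j in range(n)]) ** (1 / n)
               for k in range(3)] for i in range(n)])
# Fuzzy weights: w_i = r_i * (sum_i r_i)^(-1); l and u swap when inverting a TFN.
total = r.sum(axis=0)                       # (sum_l, sum_m, sum_u)
w_fuzzy = r * np.array([1 / total[2], 1 / total[1], 1 / total[0]])
# Defuzzify by centre of area and normalize to obtain crisp weights.
w = w_fuzzy.mean(axis=1)
w /= w.sum()
for a, wi in zip(attrs, w):
    print(f"{a:15s} weight = {wi:.3f}")
```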
Abstract: In Wireless Sensor Networks (WSNs), sensor nodes collect data and send them to a Base Station (BS) for further processing. One of the most important issues in WSNs, for which researchers have proposed hundreds of techniques, is the energy constraint, since sensor nodes have small batteries, little memory, and limited data processing and computational capabilities. Many research efforts have focused on prolonging the battery lifetime of sensor nodes by proposing different routing, MAC, localization, data aggregation, and topology construction techniques. In this paper, we focus on routing techniques that aim to prolong the network lifetime. Hence, we propose an Energy-Efficient Routing technique for WSNs based on Stationary and Mobile nodes (EERSM). The sensing field is divided into intersecting circles that contain Mobile Nodes (MNs). The proposed data aggregation technique over this circular topology eliminates redundant data before it is sent to the Base Station (BS). The MN in each circle routes packets for its source nodes and moves to the intersection area, where another MN is waiting (in sleep mode) to receive the transmitted packet; the packet is then delivered to the next intersection area until it arrives at the BS. Our proposed EERSM technique is simulated using MATLAB and compared with conventional multi-hop techniques under different network models and scenarios. In the simulation, we show how the proposed EERSM technique outperforms many routing protocols in terms of the number of hops counted when sending packets from a source node to the destination (i.e., the BS), the average residual energy, the number of packets sent to the BS, and the number of alive sensor nodes versus the simulation rounds.
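To give a feel for the kind of energy accounting behind such comparisons, the sketch below uses the first-order radio energy model common in WSN routing studies to compare a long static multi-hop path with the shorter path a mobile relay could provide. The parameters are textbook values and the two paths are illustrative, not the paper's MATLAB simulation settings.

```python
# Minimal sketch of the first-order radio energy model often used in WSN
# routing studies, comparing a long static multi-hop path with the shorter
# path a mobile relay would provide. Parameters are textbook values, not the
# paper's MATLAB simulation settings.
E_ELEC = 50e-9        # J/bit, electronics energy for TX or RX
E_AMP = 100e-12       # J/bit/m^2, free-space amplifier energy
PACKET_BITS = 4000

def tx_energy(bits, d):
    return bits * (E_ELEC + E_AMP * d ** 2)

def rx_energy(bits):
    return bits * E_ELEC

def path_energy(hop_distances):
    """Total sensor-side energy to forward one packet along the given hops."""
    total = 0.0
    for i, d in enumerate(hop_distances):
        total += tx_energy(PACKET_BITS, d)
        if i < len(hop_distances) - 1:     # every hop except the BS also receives
            total += rx_energy(PACKET_BITS)
    return total

static_multihop = [30, 30, 30, 30, 30]     # five 30 m hops to the BS
with_mobile_relay = [30, 60]               # MN collects nearby, then one longer hop

print(f"static multi-hop : {path_energy(static_multihop) * 1e6:.1f} uJ/packet")
print(f"mobile-relay path: {path_energy(with_mobile_relay) * 1e6:.1f} uJ/packet")
```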
Abstract: Collision detection mechanisms in Wireless Sensor Networks (WSNs) have largely revolved around direct demodulation and decoding of received packets, deciding on a collision based on some form of frame error detection, such as a CRC check. The obvious drawback of full detection of a received packet is the need to expend a significant amount of energy and processing complexity to fully decode a packet, only to discover that the packet is illegible due to a collision. In this paper, we propose a suite of novel, yet simple and power-efficient, algorithms to detect a collision without fully decoding the received packet. Our algorithms aim to detect a collision through fast examination of the signal statistics of a short snippet of the received packet, via a relatively small number of computations over a small number of received IQ samples. Hence, the proposed algorithms operate directly on the output of the receiver's analog-to-digital converter and eliminate the need to pass the signal through the entire receive chain. In addition, we present a complexity and power-saving comparison between our algorithms and conventional full decoding (for selected coding schemes) to demonstrate the significant power and complexity savings of our algorithms.
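The sketch below illustrates the general idea of screening for collisions from raw IQ-sample statistics rather than decoding: the normalized envelope variance of a short snippet is compared against a threshold, since overlapping transmissions perturb the envelope of a constant-modulus signal. The QPSK signal model, snippet length, metric, and threshold are illustrative assumptions, not the paper's specific algorithms.

```python
# Minimal sketch of collision screening from raw IQ-sample statistics: the
# normalized envelope variance of a short snippet is thresholded instead of
# decoding the packet. Signal model, snippet length, and threshold are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N = 256                                   # short snippet of IQ samples

def qpsk_burst(n, freq_offset=0.0):
    symbols = (rng.choice([1, -1], n) + 1j * rng.choice([1, -1], n)) / np.sqrt(2)
    return symbols * np.exp(2j * np.pi * freq_offset * np.arange(n))

noise = 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))
clean = qpsk_burst(N) + noise                             # single transmitter
collided = qpsk_burst(N) + qpsk_burst(N, 0.01) + noise    # two overlapping bursts

def collision_metric(iq):
    """Variance of the envelope power, normalized by its squared mean."""
    mag2 = np.abs(iq) ** 2
    return mag2.var() / (mag2.mean() ** 2)

THRESHOLD = 0.1   # illustrative; in practice calibrated from collision-free packets
for name, sig in [("clean packet", clean), ("collided packet", collided)]:
    m = collision_metric(sig)
    print(f"{name:16s} metric={m:.3f} -> {'collision' if m > THRESHOLD else 'ok'}")
```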
Funding: This work was supported at Taif University by TURSP-2020/150.
Abstract: Currently, the majority of institutions make use of information technologies to improve and develop their diverse educational methods and attract more learners. Through information technologies, institutions have adopted e-learning and learning on the go to provide affordable and flexible educational services. Most educational institutes offer online teaching classes using technologies such as cloud computing and networking, and have developed their own e-learning platforms for the online learning process, thereby paving the way for distance learning. However, e-learning platforms face many security challenges in terms of cyberattacks and data hacking through unauthorized access. Fog computing is one of the new technologies that facilitates control over access to big data, as it acts as a mediator between the cloud and the user to bring services closer and reduce their latency. This paper presents the use of fog computing for the development of an e-learning platform and introduces different algorithms to secure the data and information shared through e-learning platforms. Moreover, it provides a comparison among the RSA, AES, and ECC algorithms for fog-enabled cybersecurity systems. These algorithms are compared, using Python implementations, in terms of encryption/decryption time, key generation techniques, and other features offered. In addition, we propose a hybrid cryptography system combining two types of encryption algorithms, RSA and AES, to fulfil the security, file size, and latency requirements for communication between the fog and the e-learning system. We tested our proposed system and highlight the pros and cons of the integrated encryption schemes by building a testbed for an e-learning website scenario using ASP.NET and C#.
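The sketch below shows the hybrid RSA-with-AES pattern described above: a fresh AES-256-GCM key encrypts the content, and RSA-OAEP wraps that key for transport between the fog node and the e-learning platform. It uses the Python `cryptography` package; the key sizes, names, and sample payload are illustrative assumptions rather than the paper's exact implementation.

```python
# Minimal sketch of the hybrid scheme discussed above: a random AES-256-GCM key
# encrypts the learning content, and RSA-OAEP wraps that key for transport
# between the fog node and the e-learning platform. Uses the `cryptography`
# package; key sizes and the sample payload are illustrative.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives import hashes

# Fog node's long-term RSA key pair (the public key is shared with clients).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def hybrid_encrypt(plaintext: bytes):
    aes_key = AESGCM.generate_key(bit_length=256)     # fresh session key
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(aes_key, oaep)   # RSA protects the AES key
    return wrapped_key, nonce, ciphertext

def hybrid_decrypt(wrapped_key, nonce, ciphertext):
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    aes_key = private_key.decrypt(wrapped_key, oaep)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)

wrapped, nonce, ct = hybrid_encrypt(b"lecture-03.pdf contents ...")
print(hybrid_decrypt(wrapped, nonce, ct))
```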