Abstract: The rapidly rising quantity of Big Data creates opportunities to violate people's privacy. Because Big Data requires high processing capacity and massive storage, distributed networks have been used. With several parties involved in these activities, the system may contribute to privacy infringements. Frameworks have been developed for the preservation of privacy at various levels (e.g., data generation, data management, and data processing) with respect to the existing pattern of Big Data. We frame this paper as a literature survey of these categories, covering the privacy processes in Big Data and presenting the associated challenges. Partially homomorphic encryption is restricted to a single type of operation on the encrypted data, whereas fully homomorphic encryption supports many exact arithmetic operations on encrypted numerical data and therefore protects the encrypted sensitive information even more.
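A minimal additively homomorphic (Paillier-style) sketch makes the single-operation restriction concrete: ciphertexts can be multiplied to add the underlying plaintexts, but no other operation is supported. The key sizes below are toy values chosen for illustration; real deployments use 2048-bit-plus moduli.

```python
import math, random

# Toy Paillier cryptosystem (additively homomorphic only).
# Tiny primes for illustration -- NOT secure; real keys are 2048+ bits.
p, q = 17, 19
n = p * q                       # public modulus
n2 = n * n
g = n + 1                       # standard generator choice
lam = math.lcm(p - 1, q - 1)    # private key
mu = pow(lam, -1, n)            # valid since L(g^lam mod n^2) = lam mod n when g = n+1

def encrypt(m, rnd=random.Random(0)):
    # pick a random r coprime to n, then blind g^m with r^n
    while True:
        r = rnd.randrange(1, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n   # L(x) = (x - 1) / n
    return (L * mu) % n

c1, c2 = encrypt(12), encrypt(30)
# The single supported homomorphic operation: ciphertext product = plaintext sum.
print(decrypt((c1 * c2) % n2))   # 42
```

Multiplying the two ciphertexts modulo n² yields an encryption of 12 + 30 = 42 without ever decrypting the operands.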
Funding: This work was supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R151), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: With the rapid evolution of Internet technology, fog computing has taken a major role in managing large amounts of data. The major concerns in this domain are security and privacy. Therefore, attaining a reliable level of confidentiality in the fog computing environment is a pivotal task. Among the different types of data stored in the fog, 3D point and mesh fog data have become increasingly popular in recent days due to the growth of 3D modelling and 3D printing technologies. Hence, in this research, we propose a novel scheme for preserving the privacy of 3D point and mesh fog data. Chaotic Cat map-based data encryption is a trending research area due to its unique properties such as pseudo-randomness, deterministic nature, sensitivity to initial conditions, and ergodicity. To boost encryption efficiency significantly, we propose a novel Chaotic Cat map in this work. The sequence generated by this map is used to transform the coordinates of the fog data. The improved range of the proposed map is depicted using bifurcation analysis. The quality of the proposed Chaotic Cat map is also analyzed using metrics such as the Lyapunov exponent and approximate entropy. We also evaluate the proposed encryption framework against brute-force and statistical attacks. The experimental results clearly show that the proposed framework outperforms previous works in the literature.
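To make the coordinate-transformation idea concrete, here is the textbook 2D Arnold cat map that such schemes generalise (the paper's novel variant is not specified in the abstract, so this is only the classical map): iterating an integer matrix modulo N permutes grid coordinates in a pseudo-random yet deterministic and exactly reversible way.

```python
# Sketch of the classical 2D Arnold cat map on an N x N grid.
# This illustrates chaotic coordinate scrambling, not the paper's novel map.

def cat_map(x, y, n):
    """One forward iteration: matrix [[1, 1], [1, 2]] applied modulo n."""
    return (x + y) % n, (x + 2 * y) % n

def inverse_cat_map(x, y, n):
    """Inverse iteration: the matrix has determinant 1, so the transform
    is exactly invertible modulo n (this is the decryption direction)."""
    return (2 * x - y) % n, (-x + y) % n

def encrypt_points(points, n, rounds):
    """Scramble 2D integer coordinates by iterating the map `rounds` times."""
    out = []
    for x, y in points:
        for _ in range(rounds):
            x, y = cat_map(x, y, n)
        out.append((x, y))
    return out

def decrypt_points(points, n, rounds):
    """Undo the scrambling by iterating the inverse map `rounds` times."""
    out = []
    for x, y in points:
        for _ in range(rounds):
            x, y = inverse_cat_map(x, y, n)
        out.append((x, y))
    return out
```

Because the map matrix is unimodular, decryption recovers the original coordinates exactly, which is what makes such maps usable for lossless encryption of point coordinates.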
Abstract: Because stress has such a powerful impact on human health, we must be able to identify it automatically in our everyday lives. Human activity recognition (HAR) systems use data from several kinds of sensors to recognize and evaluate human actions automatically. Using the multimodal DEAP dataset (Database for Emotion Analysis using Physiological Signals), this paper presents a deep learning (DL) technique for effectively detecting human stress. Combining vision-based and sensor-based approaches to stress recognition can increase the efficiency of current stress recognition systems and help predict potentially dangerous situations in advance. Based on visual and EEG (electroencephalogram) data, this research aims to enhance performance and extract the dominant characteristics of stress detection. For the stress identification test, we utilized the DEAP dataset, which includes video and EEG data. We also demonstrate that combining video and EEG characteristics may increase overall performance, with the suggested stochastic features providing the most accurate results. In the first step, a CNN (Convolutional Neural Network) extracts feature vectors from the video frames and EEG data. Feature-level (FL) fusion then combines the features extracted from the video and EEG data. We use XGBoost as our classifier model to predict stress. The stress recognition accuracy of the proposed method is compared to existing methods: Decision Tree (DT), Random Forest (RF), AdaBoost, Linear Discriminant Analysis (LDA), and K-Nearest Neighbor (KNN). Compared to existing state-of-the-art approaches, the suggested DL methodology combining multimodal and heterogeneous inputs improves stress identification.
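The feature-level fusion step described above can be sketched as follows: per-sample feature vectors from the two modalities are concatenated into one fused vector per sample before classification. The feature dimensions and the random stand-ins for CNN outputs are assumptions for illustration, not the paper's actual setup.

```python
# Sketch of feature-level (FL) fusion of video and EEG features.
# Random arrays stand in for CNN-extracted feature vectors; the
# dimensions (128 and 32) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 8
video_feats = rng.standard_normal((n_samples, 128))  # e.g. CNN features from frames
eeg_feats = rng.standard_normal((n_samples, 32))     # e.g. CNN features from EEG

# FL fusion: concatenate per-sample vectors along the feature axis.
fused = np.concatenate([video_feats, eeg_feats], axis=1)
print(fused.shape)  # (8, 160)
# `fused` would then be fed to a classifier such as XGBoost.
```

The key point is that fusion happens before classification, so the classifier sees one joint representation rather than two separate modality-specific predictions.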
Abstract: The latest 6G improvements have made autonomous driving realistic in Intelligent Autonomous Transport Systems (IATS). Despite the IATS's benefits, security remains a significant challenge. Blockchain technology has grown in popularity as a means of implementing safe, dependable, and decentralised IATS systems, allowing for more utilisation of legacy IATS infrastructures and resources, which is especially advantageous for crowdsourcing technologies. Blockchain technology can be used to address security concerns in the IATS and to aid in logistics development. In light of the lack of trust and the disregard for rights created by centralised and conventional logistics systems, this paper discusses the creation of a blockchain-based IATS powered by deep learning for secure cargo and vehicle matching (BDL-IATS). The BDL-IATS approach utilises Ethereum as the primary blockchain for storing private data such as order and shipment details. The deep belief network (DBN) model is used to select suitable vehicles and goods for transportation, and the chaotic krill herd technique is used to tune the DBN model's hyper-parameters. The performance of the BDL-IATS technique is validated, and the findings are inspected under a variety of conditions. The simulation findings indicate that the BDL-IATS strategy outperforms recent state-of-the-art approaches.
Abstract: Due to an increase in agricultural mislabeling and careless handling of non-perishable foods in recent years, consumers have been calling for the food sector to be more transparent. Because of information dispersion between divisions and the propensity to record inaccurate data, current traceability solutions typically fail to provide reliable farm-to-fork histories of products. The three most enticing characteristics of blockchain technology are openness, integrity, and traceability, which make it a potentially crucial tool for guaranteeing the integrity and correctness of data. In this paper, we suggest a permissioned blockchain system run by organizations, such as regulatory bodies, to promote the origin-tracking of shelf-stable agricultural products. We propose a four-tiered architecture with parallel side chains, Zero-Knowledge Proofs (ZKPs), and the InterPlanetary File System (IPFS). These ensure that provenance information is shared, that commercial competitors cannot access it, that large storage demands are handled, and that the system can scale to handle many transactions at once. The solution maintains the confidentiality of all transaction flows when provenance data is queried, utilizing smart contracts and a consumer-grade reliance rate. Extensive simulation testing on Ethereum Rinkeby and Polygon demonstrates reduced execution time, latency, and throughput overheads.
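The core traceability guarantee, tamper-evident append-only provenance records, can be illustrated with a plain hash chain. This deliberately simplifies away the consensus layer, side chains, ZKPs, and IPFS described above; it only shows why altering a past record invalidates everything recorded after it.

```python
# Minimal hash-chain sketch of tamper-evident provenance records.
import hashlib, json

def add_record(chain, record):
    """Append a record linked to the hash of the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain):
    """Recompute every hash; any edit to a past record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = {"record": block["record"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != digest:
            return False
        prev = block["hash"]
    return True

chain = []
add_record(chain, {"stage": "farm", "lot": "A1"})
add_record(chain, {"stage": "warehouse", "lot": "A1"})
print(verify(chain))               # True: valid chain
chain[0]["record"]["lot"] = "B9"   # retroactive mislabeling attempt
print(verify(chain))               # False: tampering detected
```

A permissioned blockchain adds access control and distributed consensus on top of exactly this linking property.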
Abstract: Crop protection is a great obstacle to food safety, with crop diseases being one of the most serious issues, as plant diseases diminish the quality of crop yield. Deep learning technology can be employed to detect disease spots on grape leaves; however, the precision and efficiency of identification remain issues. The quantity of images of diseased leaves taken from plants is often uneven, and with an uneven and small collection of images, spotting disease is hard. The plant leaf dataset therefore needs to be expanded to detect illness accurately. A novel hybrid technique employing segmentation, augmentation, and a capsule neural network (CapsNet) is used in this paper to tackle these challenges. The proposed method involves three phases. First, a graph-based technique extracts the leaf area from a plant image. Second, the dataset is expanded using an Efficient Generative Adversarial Network (E-GAN). Third, a CapsNet identifies the illness and its stage. The proposed work was evaluated on real-time grape leaf images captured with an SD1000 camera and on the PlantVillage grape leaf dataset. The proposed method achieves effective classification accuracy for disease-type and disease-stage detection compared to other existing models.
Funding: This work was supported partially by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2024-2018-0-01431) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: The extensive utilization of the Internet in everyday life can be attributed to the substantial accessibility of online services and the growing significance of the data transmitted via the Internet. Regrettably, this development has expanded the potential targets that hackers might exploit. Without adequate safeguards, data transmitted on the Internet is significantly more susceptible to unauthorized access, theft, or alteration. The identification of unauthorised access attempts is a critical component of cybersecurity, as it aids in the detection and prevention of malicious attacks. This research paper introduces a novel intrusion detection framework that utilizes Recurrent Neural Networks (RNNs) integrated with Long Short-Term Memory (LSTM) units. The proposed model can identify various types of cyberattacks, including conventional and distinctive forms. Recurrent networks, unlike feedforward neural networks, possess an intrinsic memory component, and RNNs incorporating LSTM mechanisms have demonstrated greater capabilities in retaining and utilizing data dependencies over extended periods. Metrics such as data types, training duration, accuracy, number of false positives, and number of false negatives are among the parameters employed to assess the effectiveness of these models in identifying both common and unusual cyberattacks. RNNs are utilised in conjunction with LSTM to support human analysts in identifying possible intrusion events, hence enhancing their decision-making capabilities. As a solution to the limitations of shallow learning, we introduce the Eccentric Intrusion Detection Model, which utilises RNNs and specifically exploits LSTM techniques. The proposed model achieves a detection accuracy of 99.5%, generalisation of 99%, and a false-positive rate of 0.72%; these findings reveal that it is superior to state-of-the-art techniques.
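The LSTM memory mechanism the framework relies on can be sketched as a single-cell forward step in NumPy. The dimensions and random weights are illustrative; this is the standard LSTM cell, not the paper's full model.

```python
# Standard LSTM cell forward step, shown to illustrate how the gates
# let the cell state carry information across long sequences.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W: (4H, D+H) stacked gate weights, b: (4H,)."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:H])          # input gate
    f = sigmoid(z[H:2 * H])      # forget gate: preserves long-range dependencies
    o = sigmoid(z[2 * H:3 * H])  # output gate
    g = np.tanh(z[3 * H:4 * H])  # candidate cell update
    c = f * c_prev + i * g       # cell state is the persistent memory
    h = o * np.tanh(c)           # hidden state exposed to the next layer
    return h, c

rng = np.random.default_rng(1)
D, H = 6, 4                      # input and hidden sizes (illustrative)
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for t in range(5):               # run the cell over a short sequence
    h, c = lstm_step(rng.standard_normal(D), h, c, W, b)
print(h.shape)
```

Because the forget gate multiplies the previous cell state rather than repeatedly squashing it, gradients survive over far longer windows than in a plain RNN, which is the property the abstract attributes to LSTM.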
Abstract: The most noteworthy neurodegenerative disorder nationwide is apparently Alzheimer's disease (AD), which has no proven viable treatment to date; despite clinical trials showing the potential of preclinical therapy, a sensitive method for evaluating AD has yet to be developed. Due to the correlations between ocular and brain tissue, the eye (retinal blood vessels) has been investigated for predicting AD. Hence, an enhanced method named Enhanced Long Short-Term Memory (E-LSTM) is proposed in this work, which aims at finding the severity of AD from ocular biomarkers. To find the level of disease severity, a new layer named the precise layer is introduced in E-LSTM, which will help doctors provide apt treatment to patients rapidly. To avoid the problem of overfitting, dropout has been added to the LSTM. In the existing work, boundary detection of retinal layers was found to be inaccurate during the segmentation of Optical Coherence Tomography (OCT) images; to overcome this issue, Particle Swarm Optimization (PSO) has been utilized. To the best of our understanding, this is the first paper to use Particle Swarm Optimization for this purpose. When compared with existing works, the proposed work performs better in terms of F1 score, precision, recall, training loss, and segmentation accuracy, and the prediction accuracy was found to be 10% higher than that of existing systems.
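Particle Swarm Optimization, applied above to retinal-layer boundary refinement, can be sketched in its generic form. The objective below is a toy sphere function; the paper's actual fitness function over OCT boundaries is not given in the abstract, so it is replaced here with an assumed stand-in.

```python
# Generic PSO sketch: particles track their personal best positions and
# are pulled toward the swarm's global best. Coefficients are common
# textbook defaults, assumed for illustration.
import random

def pso(objective, dim, n_particles=20, iters=100, seed=0):
    rnd = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive pull, social pull
    pos = [[rnd.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:   # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: sphere function, minimum 0 at the origin.
best, best_val = pso(lambda p: sum(x * x for x in p), dim=3)
print(round(best_val, 4))
```

For the segmentation task, the position vector would encode candidate boundary parameters and the objective would score boundary fit against the OCT image.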
Funding: This work is partially funded by FCT/MCTES through national funds and, when applicable, co-funded by EU funds under Project UIDB/50008/2020; by the Ministry of Science and Higher Education of the Russian Federation, Grant 08-08; and by the Brazilian National Council for Scientific and Technological Development (CNPq), via Grant No. 313036/2020-9.
Abstract: Forested areas are extremely vulnerable to disasters leading to environmental destruction, and forest fire is one among them that requires immediate attention. Many works exist in which wireless sensors and IoT have been used for forest fire monitoring. Towards monitoring forest fires and managing energy efficiently in IoT, the Energy Efficient Routing Protocol for Low-power and Lossy Networks (E-RPL) was developed. There were challenges with the scalability of the network, resulting in a large end-to-end delay and less packet delivery, which led to the development of Aggregator-based Energy Efficient RPL with Data Compression (CAA-ERPL). Though CAA-ERPL proved effective in terms of reduced delay, less energy consumption, and an increased packet delivery ratio for varying numbers of nodes, there is still a challenge in the selection of the aggregator, which is based purely on a probability percentage of nodes. Fuzzy logic has been employed for mobile ad-hoc routing, RPL routing, and cluster-head selection in wireless sensor networks, but there has been no work in which fuzzy logic is employed for aggregator selection in energy-efficient RPL. Accordingly, we propose fuzzy-based aggregator selection in energy-efficient RPL per region, thereby forming a DODAG for communicating with the Fog/Edge. We develop fuzzy inference rules for selecting the aggregator based on a strength value that takes residual power, node degree, and Expected Transmission Count (ETX) as input metrics. The Fuzzy Aggregator Energy Efficient RPL (FA-ERPL) based on these fuzzy inference rules was analysed against E-RPL in terms of scalability (first and half node death), energy consumption, and aggregator node energy deviation. From the analysis, it was found that FA-ERPL performed better than E-RPL. The simulations were carried out using MATLAB.
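The fuzzy aggregator-selection idea can be sketched with triangular membership functions over the three input metrics, combined into a single strength score. The membership breakpoints and weights below are illustrative assumptions, not the paper's calibrated rule base, and the weighted average is a simplification of full Mamdani-style inference.

```python
# Sketch of fuzzy aggregator selection from residual power, node degree,
# and ETX. All breakpoints and weights are assumed for illustration.

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def aggregator_strength(residual_power, node_degree, etx):
    """Crisp 'strength' score in [0, 1]: high residual power, high degree,
    and low ETX make a node a better aggregator candidate."""
    power_high = tri(residual_power, 0.4, 1.0, 1.6)   # power normalised to 0..1
    degree_high = tri(node_degree, 2, 10, 18)
    etx_low = tri(etx, -2, 1, 4)                      # ETX of 1 (no retries) is ideal
    # Weighted-average defuzzification (simple sketch, not full Mamdani).
    return 0.4 * power_high + 0.3 * degree_high + 0.3 * etx_low

# The node with the highest strength in a region becomes the aggregator.
candidates = {"n1": (0.9, 8, 1.2), "n2": (0.5, 4, 2.5), "n3": (0.95, 12, 1.1)}
best = max(candidates, key=lambda k: aggregator_strength(*candidates[k]))
print(best)
```

In the actual protocol, this selection would run per region of the DODAG, with the chosen aggregator compressing and forwarding traffic toward the Fog/Edge root.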
Abstract: Road Side Units (RSUs) are an essential component of vehicular communication, aimed at improving safety and mobility in road transportation. RSUs are generally deployed at the roadside, and more specifically at intersections, in order to collect traffic information from vehicles and disseminate alarms and messages in emergency situations to the neighbouring vehicles cooperating with the network. However, developing a predominant RSU placement algorithm that ensures competent communication in VANETs is a challenging issue due to the hindrance of obstacles such as water bodies, trees, and buildings. In this paper, Ruppert's Delaunay Triangulation Refinement Scheme (RDTRS) for optimal RSU placement is proposed for accurately estimating the optimal number of RSUs, with the possibility of enhancing the area of coverage during data communication. RDTRS considers a maximal number of factors, such as global coverage, intersection popularity, vehicle density, and the obstacles present in the map, which is its core improvement over the existing optimal RSU placement strategies. It deploys the requisite RSUs, with the essential transmission range for maximal coverage in a convex map, such that each position of the map is effectively covered by at least one RSU in the presence of obstacles. Simulation experiments of the proposed RDTRS are conducted with complex road traffic environments. The results confirm its predominance in reducing end-to-end delay by 21.32% and packet loss by 9.38%, with an improved packet delivery rate of 10.68%, compared to the benchmarked schemes.
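The abstract leaves the RDTRS internals to the paper itself; as a hedged illustration of the coverage objective only (a greedy set-cover heuristic, not the paper's Delaunay-refinement scheme, with an assumed transmission range and an obstacle-free map), one could write:

```python
import math

def covered(point, rsu, rng):
    """A point is covered when it lies within the RSU's transmission range."""
    return math.dist(point, rsu) <= rng

def greedy_rsu_placement(intersections, rng):
    """Greedy set cover: repeatedly place an RSU at the intersection
    covering the most still-uncovered intersections."""
    uncovered = set(intersections)
    rsus = []
    while uncovered:
        best = max(intersections,
                   key=lambda c: sum(covered(p, c, rng) for p in uncovered))
        rsus.append(best)
        uncovered = {p for p in uncovered if not covered(p, best, rng)}
    return rsus

# Hypothetical 5x5 grid of intersections, 100 m apart, 150 m radio range.
grid = [(x, y) for x in range(0, 500, 100) for y in range(0, 500, 100)]
rsus = greedy_rsu_placement(grid, 150)
print(len(rsus))
```

The guarantee this sketch shares with the paper's goal is that every position in the set ends up covered by at least one RSU; accounting for obstacles and intersection popularity would add per-candidate weights and visibility checks.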
Abstract: Every year, the number of women affected by breast tumors increases worldwide. Hence, detecting and segmenting the cancer regions in mammogram images is important for preventing deaths due to breast cancer. Conventional methods obtain low sensitivity, specificity, and cancer-region segmentation accuracy; a further drawback is that they support only high-resolution standard mammogram images. Conventional methods also mostly segment the cancer regions in mammogram images by their exterior pixel boundaries. These drawbacks are resolved by the cancer-region detection methods proposed in this paper. The mammogram images are classified into normal, benign, and malignant types using the Adaptive Neuro-Fuzzy Inference System (ANFIS) approach. This mammogram classification process consists of a noise-filtering module, a spatial-frequency transformation module, a feature computation module, and a classification module. The Gaussian Filtering Algorithm (GFA) is used as the pixel smoothing filter, and the Ridgelet transform is used as the spatial-frequency transformation module. Statistical Ridgelet feature metrics are computed from the transformed coefficients, and these values are classified by the ANFIS technique. Finally, a Probability Histogram Segmentation Algorithm (PHSA) is proposed to compute and segment the tumor pixels in the abnormal mammogram images. The proposed breast cancer detection approach is evaluated on the mammogram images in the MIAS and DDSM datasets. Extensive analysis against other works shows that the proposed methods achieve significantly higher performance. The methodologies proposed in this paper can be used in breast cancer detection hospitals to assist the breast surgeon in detecting and segmenting the cancer regions.
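The PHSA computation itself is not detailed in the abstract; as a stand-in illustration of histogram-driven tumor-pixel thresholding, the classic Otsu method on grey-level pixels might look like:

```python
def otsu_threshold(pixels, bins=256):
    """Pick the grey level that maximises between-class variance."""
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = w_b = 0
    best_t, best_var = 0, -1.0
    for t in range(bins):
        w_b += hist[t]                    # background weight
        if w_b == 0:
            continue
        w_f = total - w_b                 # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b, m_f = sum_b / w_b, (sum_all - sum_b) / w_f
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def segment(pixels):
    """Label pixels above the threshold as tumor candidates (1)."""
    t = otsu_threshold(pixels)
    return [1 if p > t else 0 for p in pixels]

pixels = [30] * 50 + [200] * 50          # synthetic bimodal "image"
print(otsu_threshold(pixels))             # → 30
```

This is only a sketch of the histogram-thresholding idea; the paper's PHSA works on probability histograms of abnormal mammograms and would differ in how the cut point is chosen.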
Abstract: The rapidly rising quantity of Big Data creates opportunities to flout people's privacy. Since Big Data requires high processing capacity and massive storage, distributed networks have been used. With several parties involved in these activities, the system may contribute to privacy infringements. Frameworks have been developed for the preservation of privacy at various levels (e.g., data generation, data management, and data processing) with respect to the existing pattern of Big Data. We frame this paper as a literature survey of these classifications, covering the privacy processes in Big Data and a presentation of the associated challenges. Partially homomorphic encryption is restricted to a single type of operation on the encrypted data. Fully homomorphic encryption schemes, in contrast, support many exact arithmetic operations on encrypted numerical data and therefore protect the sensitive encrypted information even further.
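The single-operation restriction mentioned above is exactly what a partially homomorphic scheme such as Paillier exhibits: multiplying ciphertexts adds the underlying plaintexts, and nothing else is supported. A toy sketch with deliberately insecure, illustrative parameters (real deployments use primes of thousands of bits):

```python
import math
import random

def paillier_keygen(p=1009, q=1013):
    """Toy Paillier key generation; p and q are illustrative small primes."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    mu = pow(lam, -1, n)          # valid because we fix g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    n2 = n * n
    r = random.randrange(1, n)    # fresh randomness per ciphertext
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    n, lam, mu = priv
    l = (pow(c, lam, n * n) - 1) // n   # the Paillier L function
    return (l * mu) % n

pub, priv = paillier_keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
# Multiplying ciphertexts adds the underlying plaintexts.
print(decrypt(priv, (c1 * c2) % (pub[0] ** 2)))   # → 42
```

The additive homomorphism lets an untrusted party aggregate encrypted numerical data without ever seeing the plaintexts, which is the privacy property the survey is concerned with.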