Funding: This work was supported by Universiti Kebangsaan Malaysia under "Dana Pecutan Penerbitan FTSM 2022, Dana Softam 2022".
Abstract: Several unique characteristics of Internet of Things (IoT) devices, such as distributed deployment and limited storage, make it challenging for standard centralized access control systems to enable access control in today's large-scale IoT ecosystem. To address these challenges, this study presents an IoT access control system called Ether-IoT, built on the Ethereum Blockchain (BC) infrastructure with Attribute-Based Access Control (ABAC). The proposed system comprises four central smart contracts (SCs): the Access Contract (AC), Cache Contract (CC), Device Contract (DC), and Policy Contract (PC). The CC offers a way to save user attributes in a local cache system to avoid delays during transactions between the BC and IoT devices. The AC is the fundamental program users typically need to run to build an access control technique. The DC provides a means for storing the resource data created by devices and a method for querying that data. The PC offers administrative settings to handle ABAC policies on users' behalf. Combining ABAC with the BC, Ether-IoT enables IoT access control management that is decentralized, fine-grained, and dynamically scalable. This research gives a real-world case study to illustrate the suggested framework's implementation. Finally, a simulation experiment is performed to evaluate the system's performance. The results show that Ether-IoT can sustain high throughput in contexts with a large number of requests while ensuring data integrity in distributed systems.
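The division of labor the abstract describes (a cache of user attributes, a policy store, and an access-decision step) can be sketched off-chain in a few lines. This is a minimal illustration of the ABAC flow only; the class and function names (`AttributeCache`, `PolicyStore`, `check_access`) are invented here and are not part of Ether-IoT or its smart contracts.

```python
# Off-chain sketch of the ABAC decision flow described in the abstract.
# All names here are illustrative, not taken from Ether-IoT's contracts.

class AttributeCache:
    """Local cache of user attributes (the role the Cache Contract plays)."""
    def __init__(self):
        self._store = {}

    def put(self, user, attrs):
        self._store[user] = dict(attrs)

    def get(self, user):
        # Unknown users simply have no attributes.
        return self._store.get(user, {})


class PolicyStore:
    """ABAC policies: each policy lists the attributes required for an
    (resource, action) pair (the role the Policy Contract plays)."""
    def __init__(self):
        self._policies = []

    def add_policy(self, resource, action, required_attrs):
        self._policies.append((resource, action, required_attrs))

    def check_access(self, user_attrs, resource, action):
        # Grant access if any matching policy's required attributes
        # are all satisfied by the requester's cached attributes.
        for res, act, required in self._policies:
            if res == resource and act == action:
                if all(user_attrs.get(k) == v for k, v in required.items()):
                    return True
        return False


cache = AttributeCache()
cache.put("alice", {"role": "admin", "dept": "iot-lab"})

policies = PolicyStore()
policies.add_policy("sensor-42", "read", {"role": "admin"})

granted = policies.check_access(cache.get("alice"), "sensor-42", "read")
denied = policies.check_access(cache.get("bob"), "sensor-42", "read")
print(granted, denied)  # -> True False
```

In the actual system these two stores live in smart contracts on the BC; the point of the CC, per the abstract, is that attribute lookups like `cache.get(...)` above avoid a round trip to the chain.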
Abstract: Given the increasing volume of text documents, automatic summarization is one of the important tools for quick and optimal utilization of such sources. Automatic summarization is a text compression process that produces a shorter document in order to quickly access the important goals and main features of the input document. In this study, a novel method is introduced for selective text summarization using a genetic algorithm and the generation of repetitive patterns. An important feature of the proposed summarization, compared to previous methods, is that it identifies and extracts the relationships between the main features of the input text and creates repetitive patterns in order to produce and optimize the feature vector of the main document when generating the summary. Attempts were made to encompass all the main parameters of the summary text, including an unambiguous summary with the highest precision, continuity, and consistency. To investigate the efficiency of the proposed algorithm, the results of the study were evaluated with respect to the precision and recall criteria. The evaluation showed optimization of the feature dimensions and the generation of a sequence of summary sentences with the greatest consistency with the main goals and features of the input document.
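The precision and recall criteria mentioned at the end of the abstract reduce, for extractive summarization, to set overlap between the sentences the system selects and those in a reference summary. The sketch below assumes sentence-index sets as a stand-in for real summaries; the indices and the `precision_recall` helper are illustrative, not from the paper.

```python
# Sentence-level precision/recall for an extractive summary, evaluated
# against a reference. The sentence-index sets below are toy placeholders.

def precision_recall(selected, reference):
    selected, reference = set(selected), set(reference)
    tp = len(selected & reference)  # sentences both summaries contain
    precision = tp / len(selected) if selected else 0.0
    recall = tp / len(reference) if reference else 0.0
    return precision, recall

selected = {1, 3, 5, 7}    # sentence indices chosen by the summarizer
reference = {1, 2, 3, 7}   # indices in the human reference summary
p, r = precision_recall(selected, reference)
print(p, r)  # -> 0.75 0.75
```

A genetic algorithm for summarization would typically use a score derived from measures like these (on training data) as part of its fitness function when evolving candidate sentence selections.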
Funding: Grants to HAR and HP. HAR is supported by a UNSW Scientia Program Fellowship and is a member of the UNSW Graduate School of Biomedical Engineering.
Abstract: These days, imbalanced datasets, denoted throughout the paper by ID (a dataset containing some, usually two, classes where one has a considerably smaller number of samples than the other(s)), emerge in many real-world problems (such as health care and disease diagnosis systems, anomaly detection, fraud detection, stream-based malware detection systems, and so on). These datasets cause problems in the classification process and its applications, such as under-training of the minority class(es), over-training of the majority class(es), and bias towards the majority class(es). They have therefore attracted the attention of many researchers across the sciences, and several solutions exist for dealing with this problem. The main aim of this study is to resample the borderline samples discovered by Support Vector Data Description (SVDD). There are naturally two kinds of resampling: under-sampling (U-S) and over-sampling (O-S). The main drawback of O-S is that it may cause over-fitting; the main drawback of U-S is that it may cause significant information loss. To avoid the drawbacks of these sampling techniques, this study focuses on the samples that may be misclassified: the borderline data points lying on the border(s) between the majority class(es) and minority class(es). First, the borderline examples are found by SVDD; then, data resampling is applied to them. In the next step, the base classifier is trained on the newly created dataset. Finally, the method is compared with other state-of-the-art methods in terms of Area Under Curve (AUC), F-measure, and G-mean, and the experimental study shows that it achieves better results.
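The pipeline the abstract outlines (mark borderline minority points, resample only those) can be sketched with a simple nearest-neighbour proxy in place of SVDD, which is what the paper actually uses to locate the boundary. Everything below — the synthetic points, the `is_borderline` rule, and duplication as the resampling step — is an illustrative stand-in, not the paper's method.

```python
# Sketch of borderline resampling with a k-NN proxy instead of SVDD.
# Data and thresholds are synthetic and purely illustrative.
import math
from collections import Counter

# (point, label) pairs: 1 = minority class, 0 = majority class
data = [((0.0, 0.0), 0), ((0.1, 0.0), 0), ((0.2, 0.1), 0),
        ((1.0, 1.0), 0), ((0.9, 1.1), 0),
        ((0.5, 0.5), 1), ((0.6, 0.5), 1)]

def neighbours(x, samples, k):
    # k nearest (point, label) pairs by Euclidean distance
    return sorted(samples, key=lambda s: math.dist(x, s[0]))[:k]

def is_borderline(sample, samples, k=3):
    """Call a minority point 'borderline' if at least half of its k
    nearest neighbours belong to the majority class."""
    point, label = sample
    if label != 1:
        return False
    nbrs = neighbours(point, [s for s in samples if s != sample], k)
    majority_count = sum(1 for _, lbl in nbrs if lbl == 0)
    return majority_count >= k / 2

borderline = [s for s in data if is_borderline(s, data)]
# Resample: duplicate the borderline minority points (a crude stand-in
# for synthesizing new samples near the boundary).
resampled = data + borderline
print(len(borderline), Counter(lbl for _, lbl in resampled))
```

A base classifier would then be trained on `resampled`; the point, as in the abstract, is that only the boundary region is enriched, avoiding both the over-fitting of blanket O-S and the information loss of U-S.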
Abstract: An invariant can be described as an essential relationship between program variables. Invariants are very useful in software checking and verification, and the tools used to detect them are called invariant detectors. There are two types of invariant detectors: dynamic and static. Daikon is an available computer program that implements a special case of a dynamic invariant detection algorithm. Daikon's algorithm is based on several runs of the tested program: it gathers the values of the program's variables and then detects relationships between them based on a simple statistical analysis. This method has some drawbacks, one of the biggest being its overwhelming time order. It is observed that the runtime of the Daikon invariant detection tool depends on the ordering of traces in the trace file. A mechanism is proposed to reduce the differences between adjacent traces in the file by applying special mutation/crossover techniques from the genetic algorithm (GA). An experiment is run to assess the benefits of this approach. Experimental findings reveal that, with these improvements, the runtime of the proposed dynamic invariant detection algorithm is superior to that of the original approach.
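The core idea of dynamic invariant detection can be shown in miniature: record variable values over several runs, then keep only the candidate relations that held in every trace. This toy sketch is far simpler than Daikon's actual grammar of invariants; the variable names, traces, and the two candidate relation kinds (constant value, pairwise ordering) are invented for illustration.

```python
# Toy dynamic invariant detection in the Daikon style: test a small set
# of candidate relations against every recorded trace, and report only
# the ones that never fail. Traces below are made up.
from itertools import combinations

# Each trace maps variable names to values observed at one program point.
traces = [
    {"i": 0, "n": 10, "total": 0},
    {"i": 5, "n": 10, "total": 15},
    {"i": 10, "n": 10, "total": 55},
]

def detect_invariants(traces):
    invariants = []
    names = sorted(traces[0])
    # Candidate 1: variable held the same value in all runs
    for v in names:
        vals = {t[v] for t in traces}
        if len(vals) == 1:
            invariants.append(f"{v} == {vals.pop()}")
    # Candidate 2: pairwise ordering that held in every run
    for a, b in combinations(names, 2):
        if all(t[a] <= t[b] for t in traces):
            invariants.append(f"{a} <= {b}")
    return invariants

print(detect_invariants(traces))  # -> ['n == 10', 'i <= n', 'i <= total']
```

Even this miniature version hints at the cost the abstract targets: every candidate relation is re-checked against every trace, so the work grows with both the invariant grammar and the trace file, and the order in which traces are processed can affect how quickly failing candidates are discarded.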