Funding: Support from Abu Dhabi University's Office of Research and Sponsored Programs, Grant Number 19300810.
Abstract: Cookies are a fundamental means by which web application services authenticate Hypertext Transfer Protocol (HTTP) requests and maintain clients' state information over the Internet. HTTP cookies carry client patterns observed by a website, and these patterns facilitate the client's future visits to that website. However, security and privacy are primary concerns, owing to the value of the information transmitted over public channels and the storage of client information in the browser. Several protocols have been introduced to protect HTTP cookies, but many of them fail to achieve the required security or incur substantial resource overhead. In this article, we introduce a lightweight Elliptic Curve Cryptography (ECC)-based protocol for authenticating client and server transactions to preserve the privacy and security of HTTP cookies. The proposed protocol uses a secret key embedded within a cookie. It is more efficient and lightweight than related protocols because of its reduced computation, storage, and communication costs. Moreover, the analysis presented in this paper confirms that the proposed protocol resists various known attacks.
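The abstract does not include an implementation, but the general idea of binding a cookie to a server-held ECC key can be shown with a minimal sketch. The following illustration (not the authors' protocol) uses the Python `cryptography` package: the server signs the cookie payload with an ECDSA private key and verifies the signature before trusting the client state. The helper names `issue_cookie` and `verify_cookie` and the payload format are assumptions for illustration.

```python
# Minimal sketch of signing and verifying an HTTP cookie value with ECC (ECDSA).
# This illustrates the general idea of an ECC-protected cookie, not the exact
# protocol proposed in the paper.
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Server-side ECC key pair (kept secret on the server).
_private_key = ec.generate_private_key(ec.SECP256R1())
_public_key = _private_key.public_key()


def issue_cookie(client_state: str) -> str:
    """Return a cookie value of the form '<payload>.<signature>' (both base64)."""
    payload = client_state.encode()
    signature = _private_key.sign(payload, ec.ECDSA(hashes.SHA256()))
    return (base64.urlsafe_b64encode(payload).decode()
            + "." + base64.urlsafe_b64encode(signature).decode())


def verify_cookie(cookie: str):
    """Return the client state if the signature is valid, otherwise None."""
    try:
        payload_b64, signature_b64 = cookie.split(".", 1)
        payload = base64.urlsafe_b64decode(payload_b64)
        signature = base64.urlsafe_b64decode(signature_b64)
        _public_key.verify(signature, payload, ec.ECDSA(hashes.SHA256()))
        return payload.decode()
    except (ValueError, InvalidSignature):
        return None


if __name__ == "__main__":
    cookie = issue_cookie("session=42;role=user")
    print(verify_cookie(cookie))                # valid -> original client state
    print(verify_cookie(cookie[:-4] + "AAAA"))  # tampered -> None
```

A tampered payload or signature fails verification, so the server never trusts forged client state; the lightweight property claimed for the protocol would come from the small key and signature sizes typical of ECC.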
Abstract: The rapid growth of service-oriented and cloud computing has created large-scale data centres worldwide. Modern data centres' operating costs come mostly from back-end cloud infrastructure and energy consumption. Cloud computing requires extensive communication resources, and cloud applications need considerable bandwidth to transfer large amounts of data to satisfy end-user requirements. It is also essential that no communication source causes congestion or packet loss owing to unnecessary switching buffers. This paper proposes a novel Energy and Communication (EC) aware scheduling (EC-scheduler) algorithm for green cloud computing, which optimizes data centre energy consumption and traffic load. The primary goal of the proposed EC-scheduler is to assign user applications to cloud data centre resources with minimal utilization of data centres. We first introduce a Multi-Objective Leader Salp Swarm (MLSS) algorithm for task sorting, which ensures traffic load balancing, and then an Emotional Artificial Neural Network (EANN) for efficient resource allocation. The EC-scheduler schedules cloud user requirements onto cloud servers by optimizing both energy and communication delay, which lowers the carbon dioxide emissions of the cloud server system and supports a greener environment. We tested the proposed scheduler and existing cloud scheduling methods using the GreenCloud simulator to analyze data centre energy optimization and other scheduler metrics. The EC-scheduler achieved up to 26.738%, 37.59%, 50%, 4.34%, 34.2%, and 33.54% higher efficiency in Power Usage Effectiveness (PUE), Data Centre Energy Productivity (DCEP), Throughput, Average Execution Time (AET), Energy Consumption, and Makespan, respectively, than existing state-of-the-art schedulers with respect to the number of user applications and the number of user requests.
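The MLSS sorting stage and the EANN allocator are not detailed in the abstract, but the underlying objective, trading off energy against communication delay when placing tasks, can be sketched as a simple scoring function. The cost model, the weights, and the greedy placement below are assumptions for illustration only, not the paper's algorithm.

```python
# Illustrative sketch of an energy- and communication-aware assignment step.
# Only the core idea of scoring servers by energy plus delay is shown; the
# MLSS/EANN stages of the EC-scheduler are not reproduced here.
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    idle_power_w: float        # power drawn when idle (watts)
    power_per_mips: float      # extra watts per MIPS of assigned load
    link_delay_ms: float       # communication delay to this server
    load_mips: float = 0.0     # currently assigned load


def assignment_cost(server: Server, task_mips: float,
                    w_energy: float = 0.7, w_delay: float = 0.3) -> float:
    """Weighted cost of running a task on a server (lower is better)."""
    energy = server.idle_power_w + server.power_per_mips * (server.load_mips + task_mips)
    return w_energy * energy + w_delay * server.link_delay_ms


def schedule(tasks, servers):
    """Greedily place each task (given in MIPS) on the cheapest server."""
    plan = []
    for task in sorted(tasks, reverse=True):      # larger tasks first
        best = min(servers, key=lambda s: assignment_cost(s, task))
        best.load_mips += task
        plan.append((task, best.name))
    return plan


if __name__ == "__main__":
    servers = [Server("dc1-host1", 80, 0.4, 2.0),
               Server("dc1-host2", 60, 0.6, 5.0)]
    print(schedule([120.0, 300.0, 50.0], servers))
```

In the paper the two objectives are handled by dedicated multi-objective and neural components rather than fixed weights; the sketch only conveys why both energy and delay enter the placement decision.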
Abstract: A submersible robot model is presented, with its parts adapted for 3D printing. Our design shows that, using printable connectors, several motor and weight arrangements can be realized, permitting different movement possibilities. After presenting the configuration and outlining a set of potential structures, a working prototype based on open-source hardware and software is tentatively presented. The model can be readily tested in dives in streams and lakes around the world, although the reliability of the printed models can only be assessed in relatively shallow waters. Nonetheless, we believe that their accessibility will inspire the general public to build and test underwater robots, thereby accelerating the development of innovative solutions and applications.
Abstract: Home users rely on a wide and growing range of technologies, devices, platforms, applications, and services every day. In parallel, they install and use an enormous number of apps, which collect and share large amounts of data. Users are often unaware of what information apps collect about them, even though that information can be highly valuable and sensitive. As a result, users are becoming increasingly concerned about the personal information stored in these apps. While most mobile operating systems, such as Android and iOS, provide some privacy safeguards, it is unrealistic to expect users to manage and control such a large volume of data. Accordingly, there is a need for a new technique that can predict many of a user's mobile app privacy preferences. A major contribution of this work is the use of different machine learning techniques to assign users to the privacy profiles that most closely capture their privacy preferences. Applying these privacy profiles as default settings for initial interfaces could significantly reduce user burden and frustration. The results show that the user's burden can be reduced from 46 questions to 10 while achieving 86% accuracy, indicating that many of a user's mobile app privacy preferences can be predicted by asking only a small number of questions.
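The general shape of this approach, clustering users' full preference vectors into a small number of privacy profiles and then predicting a user's profile from a handful of answers, can be illustrated with scikit-learn. The data below is synthetic and the number of profiles, the classifier choice, and the 46-vs-10 question split are assumptions mirroring the abstract, not the authors' exact pipeline.

```python
# Sketch of predicting privacy profiles from a few questions: cluster full
# preference vectors into profiles, then classify users from a small subset
# of answers. Synthetic data; profile count and model choice are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 users, 46 allow/deny preferences each.
full_preferences = (rng.random((500, 46)) > 0.5).astype(int)

# Step 1: derive a small set of privacy profiles from the full preference vectors.
profiles = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(full_preferences)

# Step 2: train a classifier that predicts the profile from only 10 questions.
few_questions = full_preferences[:, :10]
X_train, X_test, y_train, y_test = train_test_split(
    few_questions, profiles, test_size=0.3, random_state=0)

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
print("profile prediction accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real preference data the predicted profile would then be applied as the user's default permission settings, which is what reduces the number of questions the interface has to ask.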
Abstract: Developing successful software with no defects is one of the main goals of software projects. To provide a software project with the anticipated quality, the prediction of software defects plays a vital role. Machine learning, and particularly deep learning, have been advocated for predicting software defects; however, both can suffer from inadequate accuracy, overfitting, and overly complicated structures. In this paper, we aim to address these issues. We propose a novel 1-Dimensional Convolutional Neural Network (1D-CNN) structure, a deep learning architecture that extracts useful knowledge by identifying and modelling patterns in the data sequence, reduces overfitting, and finally predicts whether units of code are defect-prone. We design large-scale empirical studies to evaluate the proposed model's effectiveness by comparing it against four established traditional machine learning baseline models and four state-of-the-art baselines in software defect prediction on the NASA datasets. The experimental results demonstrate that, in terms of F-measure, an optimal and modest 1D-CNN with a dropout layer outperforms the baseline and state-of-the-art models by 66.79% and 23.88%, respectively, while minimizing overfitting and improving prediction performance for software defects. According to these results, 1D-CNN appears to be successful in predicting software defects and may be adopted for practical problems in software engineering, which in turn could save software development resources and produce more reliable software.
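The abstract does not specify the exact layer configuration, so the following is a minimal PyTorch sketch of a 1D-CNN with a dropout layer for binary defect prediction, assuming that per-module software metrics are fed as a fixed-length, single-channel sequence. The layer sizes, input length, and pooling choices are illustrative assumptions, not the architecture evaluated in the paper.

```python
# Minimal sketch of a 1D-CNN with dropout for binary defect prediction.
# Layer sizes and the input length (e.g. 40 static code metrics per module,
# treated as a 1-channel sequence) are illustrative assumptions.
import torch
import torch.nn as nn


class DefectCNN1D(nn.Module):
    def __init__(self, dropout: float = 0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),      # collapse the sequence dimension
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),          # the dropout layer discussed above
            nn.Linear(32, 1),             # single logit: defect-prone or not
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, seq_len) -> (batch, 1) logits
        return self.classifier(self.features(x))


if __name__ == "__main__":
    model = DefectCNN1D()
    metrics = torch.randn(8, 1, 40)           # batch of 8 code modules
    probs = torch.sigmoid(model(metrics))     # defect-proneness probabilities
    print(probs.shape)                        # torch.Size([8, 1])
```

Training such a model with a binary cross-entropy loss on labelled modules, with dropout active only during training, is the usual way the overfitting reduction mentioned above is realized in practice.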