Abstract: In Cloud Computing (CC), a Cloud Datacenter (DC) is a conglomerate of physical servers whose performance can be hindered by bottlenecks as CC services proliferate. The Cloud Service Broker (CSB), which orchestrates DC selection, is central to CC performance: if it fails to route user requests to suitable DCs, the CSB itself becomes a bottleneck and endangers service quality. Deploying an efficient CSB policy is therefore imperative, optimizing DC selection to meet stringent Quality-of-Service (QoS) demands. Although numerous CSB policies exist, their implementation faces challenges such as cost and availability. This article presents a holistic review of diverse CSB policies and surveys the problems confronting current ones. Its foremost objective is to pinpoint research gaps and remedies that can guide future policy development. It also clarifies the various DC selection methodologies employed in CC, benefiting practitioners and researchers alike. Using synthetic analysis, the article systematically assesses and compares DC selection techniques, giving decision-makers a pragmatic framework for choosing the technique appropriate to their needs. In sum, the article underscores the importance of efficient CSB policies in DC selection and in optimizing CC performance; by emphasizing these policies and their modeling implications, it contributes both to the general modeling discourse and to practical applications in the CC domain.
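To make the broker's role concrete, here is a minimal sketch of one possible CSB routing policy: send each request to the DC minimizing estimated response time (region latency plus a load penalty). All names, fields, and numbers are illustrative assumptions, not any specific policy from the survey.

```python
# Hypothetical CSB policy sketch: route a request to the datacenter with the
# lowest score = network latency to the user's region + queueing penalty.
# The data model (latency_ms, pending_requests, ms_per_request) is invented
# for illustration.

def select_datacenter(request_region, datacenters):
    """Pick the DC minimizing latency-to-region plus a load penalty."""
    def score(dc):
        latency = dc["latency_ms"].get(request_region, float("inf"))
        load_penalty = dc["pending_requests"] * dc["ms_per_request"]
        return latency + load_penalty
    return min(datacenters, key=score)

datacenters = [
    {"name": "dc-eu", "latency_ms": {"eu": 20, "us": 110},
     "pending_requests": 50, "ms_per_request": 2},
    {"name": "dc-us", "latency_ms": {"eu": 120, "us": 25},
     "pending_requests": 5, "ms_per_request": 2},
]

# A European request: dc-eu scores 20 + 100 = 120, dc-us scores 120 + 10 = 130.
print(select_datacenter("eu", datacenters)["name"])  # dc-eu
```

Real CSB policies weigh many more factors (cost, availability, QoS constraints); this only illustrates the selection step the article surveys.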
Abstract: Discovering floating waste, especially bottles on water, is a crucial research problem in environmental hygiene. Real-world applications, however, face challenges such as interference from irrelevant objects and the high cost of data collection, so devising algorithms that accurately localize specific objects within a scene when annotated data is limited remains a formidable challenge. To solve this problem, this paper proposes an object-discovery-by-request problem setting and a corresponding algorithmic framework. The problem setting aims to identify specified objects in scenes, and the framework comprises pseudo-data generation and an object-discovery-by-request network. Pseudo-data generation produces images resembling natural scenes through various data augmentation rules, using a small number of object samples and scene images. The network uses a pre-trained Vision Transformer (ViT) as its backbone, employs object-centric methods to learn latent representations of foreground objects, and applies patch-level reconstruction constraints to the model. During validation, the generated pseudo datasets serve as training sets and performance is evaluated on the original test sets. Experiments show that the method achieves state-of-the-art performance on the Unmanned Aerial Vehicles-Bottle Detection (UAV-BD) dataset and the self-constructed Bottle dataset, especially in multi-object scenarios.
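The pseudo-data generation step can be sketched in miniature: paste a small object sample into a scene image at a random position and record its bounding box as the label. Images are plain 2-D lists here, and the augmentation is deliberately minimal; the paper's actual rules are richer (and operate on real imagery).

```python
import random

# Toy pseudo-data generator: composite an object patch onto a background
# scene and emit the bounding box. Pixel values and sizes are fabricated
# for illustration.

def paste_object(scene, obj, seed=0):
    """Return (new_image, (x0, y0, x1, y1)) with obj pasted at a random spot."""
    rng = random.Random(seed)
    h, w = len(obj), len(obj[0])
    H, W = len(scene), len(scene[0])
    y = rng.randrange(H - h + 1)
    x = rng.randrange(W - w + 1)
    out = [row[:] for row in scene]          # copy so the scene is untouched
    for dy in range(h):
        for dx in range(w):
            out[y + dy][x + dx] = obj[dy][dx]
    return out, (x, y, x + w, y + h)

scene = [[0] * 6 for _ in range(4)]          # empty 4x6 background
obj = [[9, 9], [9, 9]]                       # 2x2 "bottle" patch
img, box = paste_object(scene, obj)
print(box)
```

A real pipeline would add scaling, rotation, and blending rules so the composites resemble natural scenes, as the abstract describes.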
Abstract: The publisher would like to draw the reader's attention to the following errors. Ethics approval statements were not included in the published versions of the following articles, which appeared in previous issues of Grain & Oil Science and Technology. The authors were contacted after publication to request ethics approval statements for these articles.
Abstract: Biological test request forms carry the information on the medical analysis requested by both the patient and the prescriber; they are a communication link between the prescriber and the laboratory staff. Missing information on request forms not only affects the drafting quality of the test and patient care, but can also make thousands of data items produced by healthcare centers unusable. The aim of this study was to assess the drafting quality of request forms submitted to the Malaria and Parasitology Units at the Institut Pasteur de Côte d'Ivoire. Methods: This was a descriptive cross-sectional study of request forms from various prescribers received at the Malaria and Parasitology Units, Department of Parasitology and Mycology (Institut Pasteur de Côte d'Ivoire), from 6th December 2020 to 6th December 2021. The information on each request form was recorded on a data collection form designed for this purpose; each data collection form corresponds to one request form, and each test to one patient. Results: Of the 1990 request forms received, the patient's age was missing on 18% of tests and the patient's sex on 26.8%. More than half (51.8%) of request forms did not indicate the patient's place of residence, and clinical information was not provided on 45.9% of tests. Prescribers omitted their signature on 51% of forms, their stamp on 50.3%, and their contact details on 71.2%. Only 5.4% of request forms were of good drafting quality. Providing all the required information on the forms would facilitate the use and analysis of data and samples.
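The audit described above amounts to computing, for each required field, the fraction of forms on which it is missing. A minimal sketch of that tally follows; the field names and sample records are assumptions for illustration, not the study's actual data schema.

```python
# Completeness audit sketch: fraction of request forms missing each field.
# Field names and the two sample forms are fabricated for illustration.

REQUIRED_FIELDS = ["age", "sex", "residence", "clinical_info",
                   "signature", "stamp", "contact"]

def missing_rates(forms):
    """Return {field: fraction of forms where the field is absent or empty}."""
    n = len(forms)
    return {f: sum(1 for form in forms if not form.get(f)) / n
            for f in REQUIRED_FIELDS}

forms = [
    {"age": 34, "sex": "F", "residence": "Abidjan", "clinical_info": "fever",
     "signature": True, "stamp": True, "contact": "0102030405"},
    {"age": None, "sex": "M", "residence": "", "clinical_info": "",
     "signature": False, "stamp": True, "contact": ""},
]
rates = missing_rates(forms)
print(rates["age"], rates["stamp"])  # 0.5 0.0
```

Run over the 1990 real forms, this kind of tally yields exactly the percentages the study reports.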
Abstract: Considering the escalating frequency and sophistication of cyber threats targeting web applications, this paper presents the design and implementation of an automated web security analysis tool, AWSAT, aimed at enabling individuals with limited security expertise to effectively assess and mitigate vulnerabilities in web applications, addressing the accessibility gap for non-security professionals. Leveraging advanced scanning techniques, the tool identifies common threats such as Cross-Site Scripting (XSS), SQL Injection, and Cross-Site Request Forgery (CSRF), providing detailed reports with actionable insights. By integrating sample payloads and reference study links, the tool facilitates informed decision-making when enhancing the security posture of web applications. Through its user-friendly interface and robust functionality, the tool aims to democratize web security practice, empowering a wider audience to proactively safeguard against cyber threats.
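As a hedged illustration of one check a scanner of this kind might perform (the paper does not disclose AWSAT's internals), a reflected-XSS probe injects a unique marker payload into a parameter and tests whether the response echoes it back unescaped. The fetch step is stubbed here with literal response bodies; a real tool would issue HTTP requests.

```python
import html

# Illustrative reflected-XSS probe. The payload marker and the two mock
# response bodies are fabricated; this is not AWSAT's actual logic.

PAYLOAD = "<script>alert('awsat-probe-1337')</script>"

def is_reflected_unescaped(response_body, payload=PAYLOAD):
    """True if the raw payload appears verbatim in the response body."""
    return payload in response_body

# A vulnerable page echoes the parameter verbatim:
vulnerable = f"<p>You searched for: {PAYLOAD}</p>"
# A safe page HTML-escapes it first:
safe = f"<p>You searched for: {html.escape(PAYLOAD)}</p>"

print(is_reflected_unescaped(vulnerable), is_reflected_unescaped(safe))  # True False
```

Production scanners also check DOM-based and stored variants and encode payloads per context; this sketch shows only the simplest reflection test.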
Abstract: In this paper, we present a novel approach to modeling user request patterns on the World Wide Web. Instead of focusing on user traffic to web pages, we capture user interaction at the object level of the pages. Our framework consists of three sub-models: one for user file access, one for web pages, and one for storage servers. Web pages are assumed to consist of objects of different types and sizes, characterized in several categories: articles, media, and mosaics. The model is implemented as a discrete-event simulation and used to investigate the performance of our system over a variety of parameters. Our performance measure of choice is mean response time, and by varying the composition of web pages across our categories, we find that the framework captures a wide range of conditions that serve as a basis for generating a variety of user request patterns. In addition, we are able to establish a set of parameters that can be used as base cases. One goal of this research is for the framework to be general enough that its parameters can be varied to serve as input for investigating other distributed applications that require the generation of user request access patterns.
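The core discrete-event idea behind such a simulation can be sketched in a few lines: requests arrive at given times, a server processes them FIFO, and the response time of each request is its finish time minus its arrival time. This single-server toy is an assumption for illustration, far simpler than the paper's three-sub-model framework.

```python
# Minimal discrete-event sketch of mean response time for a single FIFO
# server. Arrival and service times below are made-up inputs.

def mean_response_time(arrivals, service_times):
    """Serve requests FIFO on one server; response = finish - arrival."""
    clock = 0.0     # time at which the server becomes free
    total = 0.0
    for arrive, service in zip(arrivals, service_times):
        start = max(clock, arrive)      # wait if the server is busy
        clock = start + service         # server busy until this finish time
        total += clock - arrive         # this request's response time
    return total / len(arrivals)

# Three requests: the second queues behind the first, the third finds
# the server idle.
print(mean_response_time([0.0, 1.0, 10.0], [3.0, 2.0, 1.0]))
```

In the paper's framework, the arrival stream and per-object service demands would instead be drawn from the page-composition categories (articles, media, mosaics).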
Funding: Supported by the National Social Science Fund (NSSF) under Grant No. 22BTQ033.
Abstract: Currently, open-source software is gradually being integrated into industrial software, while industry protocols in industrial software are gradually transferred to open-source community development. Industrial protocol standardization organizations are confronted with numerous fragmented code PRs (Pull Requests) and informal proposals, and differing workflows lead to increased operating costs. Open-source community maintenance teams need more intelligent software to guide the identification and classification of these issues. To solve these problems, this paper proposes a PR review prediction model based on multi-dimensional features. We extract 43 features of PRs and divide them into five dimensions: contributor, reviewer, software project, PR, and the social network of developers. The model integrates these five-dimensional features, and a prediction model is built on a Random Forest classifier to predict PR review results. On the other hand, to improve the quality of rejected PRs, we focus on problems raised during the review process and on review comments of similar PRs. We propose a PR revision recommendation model based on a PR review knowledge graph. Entity information and relationships between entities are extracted from the text and code of PRs, historical review comments, and related issues. PR revisions are recommended to code contributors by graph-based similarity calculation. The experimental results illustrate that the two models are effective and robust in PR review result prediction and PR revision recommendation.
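The recommendation step matches a new PR against historical PRs by similarity over their extracted entities. A minimal sketch using Jaccard similarity on entity sets follows; the entity names, comments, and the choice of Jaccard (rather than the paper's exact graph-based measure) are assumptions for illustration.

```python
# Toy PR-revision recommendation: rank historical PRs by Jaccard similarity
# of entity sets and return their review comments. Entities and comments
# below are invented examples.

def jaccard(a, b):
    """Jaccard similarity of two iterables treated as sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(new_pr_entities, history, top_k=1):
    """Return review comments of the top_k most similar historical PRs."""
    ranked = sorted(history,
                    key=lambda h: jaccard(new_pr_entities, h["entities"]),
                    reverse=True)
    return [h["comment"] for h in ranked[:top_k]]

history = [
    {"entities": {"modbus", "checksum", "unit-test"},
     "comment": "Add a CRC test for the new frame type."},
    {"entities": {"docs", "readme"},
     "comment": "Update the protocol table in README."},
]
print(recommend({"modbus", "checksum"}, history))
```

The paper's knowledge-graph version additionally exploits relations between entities, not just set overlap, so similar PRs can be found even when their surface entities differ.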
Funding: This research was supported by the 2022 scientific promotion program funded by Jeju National University.
Abstract: Data is growing quickly due to a significant increase in social media applications. Today, billions of people use an enormous amount of data to access the Internet, and the backbone network experiences a substantial load as the number of users grows. Users in the same region or company frequently request similar material, especially on social media platforms; a subsequent request for the same content can be satisfied from the edge if the content is stored in proximity to the user. Applications that require relatively low latency can use Content Delivery Network (CDN) technology to meet their requirements. An edge and a data center constitute the CDN architecture. To fulfill requests at the edge and minimize the impact on the network, requested content can be buffered closer to the user device; which content should be kept at the edge is the primary concern. The cache policy has been optimized using various conventional and unconventional methods, but these have yet to consider the timestamp attached to a video request. We obtained a 24-hour content request pattern from publicly available datasets; a time-based video profile shows that the popularity of a video is influenced by the time of day. We present a cache optimization method based on this time-based request pattern. The problem is formulated as cache hit ratio maximization, emphasizing a relevance score and machine-learning model accuracy: a model predicts the video to be cached in the next time slot, and the relevance score identifies the video to be removed from the cache. We then gather logs and generate content requests using the extracted video request pattern; these logs are pre-processed into a dataset divided into three time slots per day, and a Long Short-Term Memory (LSTM) model is trained on this dataset to forecast the video at the next time interval. The proposed optimized caching policy is evaluated on our CDN architecture deployed on the Korean Advanced Research Network (KOREN) infrastructure. Our findings demonstrate how adding time-based request patterns increases the cache hit rate. To show the effectiveness of the proposed model, we compare the results with state-of-the-art techniques.
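The eviction side of such a policy can be sketched simply: each cached video gets a relevance score from its request count in the current time slot, and the lowest-scoring video is evicted when a predicted video must be admitted. The scoring rule here (raw slot counts) is an assumption, not the paper's exact relevance formula.

```python
# Toy time-slot cache admission/eviction. Video ids and counts are invented.

def evict_and_admit(cache, slot_counts, predicted, capacity):
    """cache: set of video ids; slot_counts: {video: requests this slot}.

    Admit the predicted video, evicting the least-relevant cached video
    (fewest requests in the current slot) if the cache is full.
    """
    if predicted in cache:
        return cache                      # already cached, nothing to do
    cache = set(cache)                    # copy; do not mutate the caller's set
    if len(cache) >= capacity:
        victim = min(cache, key=lambda v: slot_counts.get(v, 0))
        cache.remove(victim)
    cache.add(predicted)
    return cache

cache = {"v1", "v2", "v3"}
counts = {"v1": 40, "v2": 2, "v3": 15}    # requests in the current time slot
print(sorted(evict_and_admit(cache, counts, "v9", capacity=3)))  # ['v1', 'v3', 'v9']
```

In the full system, `predicted` would come from the LSTM's forecast for the next time slot, so admissions anticipate demand instead of reacting to it.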
Funding: Supported by the Postdoctoral Research Funding Program of Jiangsu Province under Grant 2021K622C.
Abstract: A blockchain-based power transaction method is proposed for the Active Distribution Network (ADN), addressing the poor security and high cost of centralized power trading systems. Firstly, a decentralized blockchain structure for ADN power transactions is built, and transaction information is kept in blocks. Secondly, considering the transaction needs between users and power suppliers in the ADN, an energy request mechanism is proposed, and the optimization objective function is designed by integrating cost-aware and storage-aware requests. Finally, the particle swarm optimization algorithm performs a multi-objective search for the power trading scheme that minimizes users' power purchase cost and maximizes the power sold by suppliers. Experiments on the test platform show that when the number of participants is no more than 10, the transaction delay is 0.2 s and the transaction cost fluctuates around 200,000 yuan, outperforming the comparison methods.
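The search step can be illustrated with a bare-bones particle swarm optimization minimizing a toy one-dimensional purchase-cost curve, cost(q) = (q − 7)² + 3. Both the cost function and the PSO hyperparameters are assumptions for illustration; the paper's objective combines cost-aware and storage-aware terms over multiple variables.

```python
import random

# Minimal PSO sketch: inertia 0.5, cognitive/social weights 1.5, positions
# clamped to [lo, hi]. All constants are illustrative defaults.

def pso(cost, lo, hi, n_particles=20, iters=100, seed=0):
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]   # positions
    vs = [0.0] * n_particles                                 # velocities
    pbest = xs[:]                                            # personal bests
    gbest = min(xs, key=cost)                                # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (0.5 * vs[i]
                     + 1.5 * r1 * (pbest[i] - xs[i])
                     + 1.5 * r2 * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=cost)
    return gbest

best = pso(lambda q: (q - 7) ** 2 + 3, 0.0, 20.0)
print(round(best, 2))
```

The swarm converges near the minimizer q = 7. The paper's multi-objective variant searches over trading schemes rather than a scalar, but the velocity/position update is the same mechanism.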
Abstract: To Internet users, the RFC standards documents are like air to an ordinary person: transparent and invisible, so that one hardly notices their existence, yet utterly essential, giving the Internet its real life and vitality and letting it flourish over decades of development. The concept and significance of RFCs: an RFC (Request for Comments) is one of a series of memoranda published by the Internet Engineering Task Force (IETF, the world's only international, non-governmental, open organization for Internet technology development and standard setting), containing important written material about the Internet. It is fair to say that if a technology is to become a standard voluntarily implemented, and a specification observed, by software developers, hardware manufacturers, and network operators worldwide, then becoming an RFC is the path it must take.