Funding: This work was funded by King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Project Number (RSPD2024R857).
Abstract: Scalability and data privacy are vital for training and deploying large-scale deep learning models. Federated learning trains models on private data by aggregating weights from various devices, taking advantage of the device-agnostic environment of web browsers. Nevertheless, relying on a central server in browser-based federated systems can limit scalability and disrupt the training process as client numbers grow. Additionally, information about the training dataset can potentially be extracted from the distributed weights, reducing the privacy of the local data used for training. In this paper, we investigate the challenges of scalability and data privacy in order to increase the efficiency of distributed model training. We propose a web-federated learning exchange (WebFLex) framework, which aims to improve the decentralization of the federated learning process. WebFLex is also designed to secure distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices. Furthermore, WebFLex uses peer-to-peer interactions and secure weight exchanges via browser-to-browser web real-time communication (WebRTC), effectively removing the need for a central server. WebFLex has been evaluated in various setups using the MNIST dataset. Experimental results show WebFLex's ability to improve the scalability of federated learning systems, allowing a smooth increase in the number of participating devices without central data aggregation. In addition, WebFLex maintains a robust federated learning procedure even in the face of device disconnections and network variability. Finally, it improves data privacy by adding artificial noise, which achieves an appropriate balance between accuracy and privacy preservation.
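The two core mechanisms in the abstract, perturbing weights with artificial noise before sharing and averaging the weights received from peers, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the Gaussian noise model, and the FedAvg-style mean are assumptions for the sketch, and the noise_scale knob stands in for whatever accuracy/privacy trade-off the authors actually tune.

```python
import numpy as np

def privatize_weights(weights, noise_scale=0.05, rng=None):
    """Add zero-mean Gaussian noise to a flat weight vector before sharing.

    A stand-in for the paper's 'artificial noise'; noise_scale is a
    hypothetical knob trading accuracy against privacy.
    """
    rng = np.random.default_rng() if rng is None else rng
    return weights + rng.normal(0.0, noise_scale, size=weights.shape)

def federated_average(peer_weights):
    """Aggregate noisy weight vectors received from browser peers by a
    simple FedAvg-style element-wise mean."""
    return np.mean(np.stack(peer_weights), axis=0)

# Three simulated peers perturb and share the same underlying weights.
rng = np.random.default_rng(0)
true_w = np.ones(4)
shared = [privatize_weights(true_w, 0.05, rng) for _ in range(3)]
avg = federated_average(shared)
```

Averaging over more peers cancels more of the injected noise, which is one intuition for why this kind of scheme can keep accuracy acceptable while obscuring any individual client's contribution.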
Abstract: In this paper, we present a novel approach to modeling user request patterns on the World Wide Web. Instead of focusing on user traffic at the level of web pages, we capture user interaction at the level of the objects within the pages. Our framework consists of three sub-models: one for user file access, one for web pages, and one for storage servers. Web pages are assumed to consist of objects of different types and sizes, characterized using several categories: articles, media, and mosaics. The model is implemented with a discrete event simulation and then used to investigate the performance of our system over a variety of model parameters. Our performance measure of choice is mean response time; by varying the composition of web pages across our categories, we find that the framework captures a wide range of conditions that serve as a basis for generating a variety of user request patterns. In addition, we establish a set of parameters that can be used as base cases. One goal of this research is for the framework to be general enough that, by varying its parameters, it can serve as input for investigating other distributed applications that require generated user request access patterns.
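The abstract's pipeline, generate requests, simulate the system event by event, and report mean response time, can be illustrated with a much smaller model than the paper's three-sub-model framework. The sketch below is a single-server FIFO queue with exponential arrivals and service (all names and rates are invented for the example), just to show the discrete-event bookkeeping behind a mean-response-time measurement.

```python
import random

def simulate_mean_response(n_requests=5000, arrival_rate=50.0,
                           service_rate=80.0, seed=1):
    """Single-server FIFO queue simulated request by request; returns the
    mean response time (waiting + service) over all requests."""
    rnd = random.Random(seed)
    t_arrive = 0.0      # arrival time of the current request
    server_free = 0.0   # time at which the server next becomes idle
    total = 0.0
    for _ in range(n_requests):
        t_arrive += rnd.expovariate(arrival_rate)   # next arrival
        start = max(t_arrive, server_free)          # wait if server busy
        service = rnd.expovariate(service_rate)
        server_free = start + service
        total += server_free - t_arrive             # this request's response time
    return total / n_requests

mean_rt = simulate_mean_response()
```

With these rates the M/M/1 prediction is 1/(80-50) ≈ 0.033 time units, so the simulated mean should land near that value; the paper's object-level categories would replace the single exponential service draw with per-object-type size and service distributions.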
Funding: Supported by the National Natural Science Foundation of China (Grant No. 22075146).
Abstract: Drying is a complicated physical process involving simultaneous heat and mass transfer during the removal of solvents from propellants. For long stick propellants with large web thickness, inappropriate drying techniques may cause a hard skin layer to form near the surface, blocking the free passage of most of the solvent and leading to lower drying efficiency and worse drying quality. This study aims to gain a comprehensive understanding of the drying process and to clarify the mechanism of the blocked layer near the propellant surface. A new three-dimensional coupled heat and mass transfer (3D-CHMT) model was developed under transient conditions. The drying experiments show that the 3D-CHMT model describes the drying process well, since the relative error in solvent content between simulated and experimental values is only 5.5%. The solvent behavior simulation demonstrates that the mass transfer process can be divided into a super-fast (SF) stage and a subsequent minor-fast (MF) stage, and that the SF stage is vital to preventing the blocked layer from obstructing the free passage of solvent molecules inside propellant grains. The effective solvent diffusion coefficient (Deff) of the propellant surface initially increases from 3.4×10^(-6) to 5.3×10^(-6) m^(2)/s as the temperature increases, and then decreases to 4.1×10^(-8) m^(2)/s at 60-100 min. Deff in the surface region between 0 and 1.4 mm follows a unique trend compared with other regions, and at 100 min it is much lower than that of the interior under the simulation conditions. Meanwhile, the temperature of the propellant surface increases rapidly during the SF stage (0-100 min) and only very slowly thereafter. Both the evolution of Deff and the temperature distribution demonstrate that the blocked layer near the propellant surface forms within approximately 0-100 min and that its thickness is about 1.4 mm. To mitigate the formation of the blocked layer and effectively improve the drying quality of the final propellant products, double-base gun propellants should initially be dried at a lower temperature (30-40℃) for 0-100 min and then at a higher temperature (50-60℃) to shorten the later drying process. The present results can provide theoretical guidance for the drying process and the optimization of drying parameters for long stick propellants with large web thickness.
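The abstract does not write out the governing equations of the 3D-CHMT model; a generic transient form consistent with the quantities it mentions (solvent concentration, temperature, and the effective diffusivity Deff) would be the following, where the symbols ρ, c_p, k, and q are standard placeholders (density, specific heat, thermal conductivity, and a volumetric heat source/sink for evaporation) and are not taken from the paper:

```latex
% Solvent mass transfer: Fick's second law with an effective diffusivity
\frac{\partial C}{\partial t} = \nabla \cdot \left( D_{\mathrm{eff}} \, \nabla C \right)

% Transient heat conduction coupled through temperature-dependent D_eff
\rho c_p \frac{\partial T}{\partial t} = \nabla \cdot \left( k \, \nabla T \right) + q
```

The coupling runs through the temperature dependence of D_eff: as the surface heats and dries, D_eff there collapses (from ~10^(-6) to ~10^(-8) m^(2)/s in the abstract's figures), which is exactly the blocked-layer mechanism the study describes.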
Abstract: Background With the rapid development of Web3D technologies, online Web3D visualization, particularly of complex models or scenes, is in great demand. Owing to the major conflict between Web3D system load and resource consumption when processing these huge models, this paper reviews lightweighting methods for huge 3D models in online Web3D visualization. Methods Observing the geometric redundancy introduced by man-made operations during the modeling procedure, several categories of lightweighting-related work aimed at reducing the amount of data and resource consumption are elaborated for Web3D visualization. Results By comparing perspectives, the characteristics of each method are summarized. Among the reviewed methods, geometric redundancy removal, which achieves the lightweighting goal by detecting and removing repeated components, is an appropriate method for current online Web3D visualization, while learning-based algorithms, still maturing at present, are our expected future research topic. Conclusions Various aspects should be considered in an efficient lightweighting method for online Web3D visualization, such as the characteristics of the original data, combination or extension of existing methods, scheduling strategy, cache management, and the rendering mechanism. Innovative methods, particularly learning-based algorithms, are worth exploring.
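The "detect and remove repeated components" idea behind geometric redundancy removal can be sketched as signature-based deduplication: give each mesh component a signature that ignores its placement, keep one representative per signature, and record an instance index for the rest. This is an illustrative simplification, not a method from the review; the signature below is only translation-invariant (real systems would also normalize rotation and scale), and all names are invented.

```python
import numpy as np

def component_signature(vertices, decimals=4):
    """Translation-invariant signature for a mesh component: center the
    vertices on their centroid, quantize, sort, and hash."""
    v = np.asarray(vertices, dtype=float)
    v = v - v.mean(axis=0)          # remove translation
    q = np.round(v, decimals)       # quantize to absorb float noise
    rows = sorted(map(tuple, q))    # order-independent vertex list
    return hash(tuple(rows))

def deduplicate(components):
    """Keep one representative per repeated component; return the unique
    components plus an index map suitable for instanced rendering."""
    seen, unique, index = {}, [], []
    for comp in components:
        sig = component_signature(comp)
        if sig not in seen:
            seen[sig] = len(unique)
            unique.append(comp)
        index.append(seen[sig])
    return unique, index

tri = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
shifted = [(5, 5, 5), (6, 5, 5), (5, 6, 5)]   # same shape, translated
unique, index = deduplicate([tri, shifted, tri])
```

Transmitting one copy of each unique component plus a short index list is what makes this family of methods attractive for bandwidth-bound Web3D delivery.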
Funding: Supported by the National Key R&D Program of China (Nos. 2019YFD0901204 and 2019YFD0901205).
Abstract: Aggregating species with similar ecological properties is an effective way to simplify food web research. However, species aggregation affects not only the complexity of the modeling process but also the accuracy of model outputs. The choice of aggregation method and the number of trophospecies are key to studying food web simplification. In this study, three aggregation methods, taxonomic aggregation (TA), structural equivalence aggregation (SEA), and self-organizing maps (SOM), were analyzed and compared using the linear inverse model-Markov Chain Monte Carlo (LIM-MCMC) model. The impacts of aggregation method and trophospecies number on food webs were evaluated based on the robustness and unitless nature of ecological network indices. Results showed that SEA performed better than the other two methods in estimating food web structure and function indices. The effects of the aggregation methods were driven by differences in their aggregation principles, which alter food web structure and function through the redistribution of energy flow. Using the mean absolute percentage error (MAPE) to evaluate model accuracy, we found that MAPE in food web indices increases as the number of trophospecies decreases, and that MAPE in food web function indices was smaller and more stable than in food web structure indices. Therefore, the trade-off between simplifying food webs and reflecting the status of the ecosystem should be considered in food web studies. These findings highlight the importance of aggregation method and trophospecies number in the analysis of food web simplification. This study provides a framework for exploring the extent to which food web models are affected by different species aggregations and will provide a scientific basis for the construction of food webs.
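MAPE, the accuracy measure the abstract relies on, has a simple closed form: the mean of |actual - predicted| / |actual|, usually expressed as a percentage. A minimal sketch (the index values below are made-up illustration data, not results from the study):

```python
def mape(actual, predicted):
    """Mean absolute percentage error, as a percent.

    Undefined when any actual value is zero, so this sketch assumes
    strictly nonzero reference values.
    """
    pairs = list(zip(actual, predicted))
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in pairs) / len(pairs)

# Hypothetical reference food-web indices vs. an aggregated model's estimates.
err = mape([2.0, 4.0, 5.0], [2.2, 3.8, 5.0])   # → 5.0 (percent)
```

Because each term is normalized by the reference value, MAPE lets structure and function indices with very different units be compared on one scale, which is presumably why it suits the unitless comparison the study performs.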
Abstract: Today, in the field of computer networks, new services have been developed on the Internet and on intranets, including mail servers, database management, audio, video, and the Apache web server itself. The number of solutions for this server is growing continuously; these services are becoming more and more complex and expensive without fulfilling users' needs. The absence of benchmarks for websites with dynamic content is the major obstacle to research in this area. Users place high demands on the speed of access to information on the Internet, which is why web server performance is critically important. Several factors influence performance, such as server execution speed, network saturation on the Internet or intranet, increased response time, and throughput. By measuring these factors, we propose a performance evaluation strategy for servers that allows us to determine the actual performance of different servers in terms of user satisfaction. Furthermore, we identified performance characteristics of a system, such as throughput, resource utilization, and response time, through measurement and modeling by simulation. Finally, we present a simple queueing model of an Apache web server that reasonably represents the behavior of a saturated web server, built as a Simulink model in Matlab (Matrix Laboratory) and incorporating sporadic incoming traffic. We obtain server performance metrics such as average response time and throughput through simulations. Compared to other models, our model is conceptually straightforward, and it has been validated through the measurements and simulations conducted in our tests.
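The simplest analytic baseline for the kind of queueing model described here is the M/M/1 queue (Poisson arrivals, exponential service, one server), whose utilization, mean response time, and throughput have closed forms. The sketch below is that textbook baseline, not the paper's Simulink model, which additionally handles sporadic traffic and saturation; the example rates are invented.

```python
def mm1_metrics(arrival_rate, service_rate):
    """Closed-form M/M/1 results; valid only while arrival_rate < service_rate.

    utilization   rho = lambda / mu
    response time W   = 1 / (mu - lambda)   (mean time in system)
    throughput        = lambda              (all offered work is served)
    """
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable (rho >= 1)")
    rho = arrival_rate / service_rate
    response = 1.0 / (service_rate - arrival_rate)
    return {"utilization": rho,
            "response_time": response,
            "throughput": arrival_rate}

m = mm1_metrics(80.0, 100.0)   # e.g. 80 req/s offered, 100 req/s capacity
```

Note how W blows up as lambda approaches mu; a saturated Apache server sits in exactly that regime, which is why the paper needs a model that remains meaningful past the point where the M/M/1 formulas diverge.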
Abstract: Ensuring signal quality and safe broadcasting of radio waves is a basic requirement of the radio and television industry and depends on the efficient management of wireless transmitter stations. Traditional client-based station management systems, with their high cost and low efficiency, can no longer meet current needs. We therefore propose a broadcast television wireless transmitter station management system based on WebGIS (Web Geographic Information System), aiming to improve the management capability of such stations. This paper first describes the characteristics of the system, then designs its overall architecture, analyzes its main functional modules, and finally discusses the implementation of each module.
Abstract: To innovate the as-built delivery model on the basis of smart construction, a new digital as-built delivery platform is proposed, built on Web and Building Information Modeling (BIM) technology. The platform covers hardware design, including server and network equipment selection, and software design, including the software framework; Web- and BIM-based standardized processing of project models; BIM model lightweighting and its intelligent association with quality-inspection documents; and whole-process digital management and delivery of electronic archives. The platform was applied to the Beijing Chaoyang Station construction project, realizing digital-twin delivery of Chaoyang Station's digital and physical buildings, providing base data for the operation and maintenance phase, and exploring the integration of the platform with the archive-receiving system of the urban construction archives.