Funding: This work has been funded by King Saud University, Riyadh, Saudi Arabia, through Researchers Supporting Project Number (RSPD2024R857).
Abstract: Scalability and data privacy are vital for training and deploying large-scale deep learning models. Federated learning trains models on private data by aggregating weights from various devices, taking advantage of the device-agnostic environment of web browsers. Nevertheless, relying on a central server in browser-based federated systems can limit scalability and disrupt the training process as the number of clients grows. Additionally, information about the training dataset can be extracted from the shared weights, compromising the privacy of the local data used for training. In this paper, we investigate the challenges of scalability and data privacy to increase the efficiency of distributed model training. To this end, we propose the web-federated learning exchange (WebFLex) framework, which aims to improve the decentralization of the federated learning process. WebFLex is also designed to secure distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices. Furthermore, WebFLex uses peer-to-peer interaction and secure weight exchange over browser-to-browser Web Real-Time Communication (WebRTC), effectively eliminating the need for a central server. WebFLex has been evaluated in various configurations using the MNIST dataset. Experimental results show WebFLex's ability to improve the scalability of federated learning systems, allowing a smooth increase in the number of participating devices without central data aggregation. In addition, WebFLex maintains a robust federated learning process even under device disconnections and network variability. Finally, it improves data privacy by adding artificial noise, achieving an appropriate balance between accuracy and privacy preservation.
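As a rough illustration of the weight-perturbation idea mentioned in the abstract, the following Python sketch adds Gaussian noise to model weights before they are exchanged with peers and then averages the perturbed updates. The noise scale sigma, the layer shapes, and the helper names are assumptions for illustration only; the abstract does not specify WebFLex's actual noise mechanism or aggregation rule.

```python
import numpy as np

def add_artificial_noise(weights, sigma=0.01, rng=None):
    """Perturb a list of weight arrays with Gaussian noise before sharing.

    Larger sigma gives stronger privacy but lower accuracy; the exact
    mechanism and scale used by WebFLex are not given in the abstract,
    so this is purely illustrative.
    """
    rng = rng or np.random.default_rng()
    return [w + rng.normal(0.0, sigma, size=w.shape) for w in weights]

def federated_average(peer_updates):
    """Average weight updates collected from peers (no central server).

    peer_updates: list of per-peer lists of weight arrays with matching shapes.
    """
    n_peers = len(peer_updates)
    return [sum(layers) / n_peers for layers in zip(*peer_updates)]

if __name__ == "__main__":
    # Two toy "peers", each with one 2x2 weight matrix and one bias vector.
    peer_a = [np.ones((2, 2)), np.zeros(2)]
    peer_b = [np.full((2, 2), 3.0), np.ones(2)]
    shared = [add_artificial_noise(w, sigma=0.05) for w in (peer_a, peer_b)]
    merged = federated_average(shared)
    print(merged[0])  # entries are ~2.0, up to the injected noise
```

In a browser deployment the perturbed arrays would be serialized and sent over WebRTC data channels rather than collected in one process; this sketch only shows the noise-then-average step.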
Abstract: When the Android system allocates resources to the browser, it cannot perceive web page content, which leads to over-allocation of resources and unnecessary battery drain. At the same time, as the density of adjustable CPU frequency levels grows, optimizing energy consumption through dynamic voltage and frequency scaling (DVFS) becomes increasingly difficult. In addition, the system's default governing policy ignores the role of the graphics processing unit (GPU) in browser execution. To address these problems, a method that coordinates CPU and GPU regulation to optimize power consumption is proposed. First, web pages are classified with logistic regression based on processor behavior during page loading, and the page features are weighted to quantify complexity; then, according to the page class and complexity, DVFS is used to cap the CPU frequency while adjusting the GPU frequency. The method was applied to the Chromium browser on a Google Pixel 2 XL and tested on the top 500 Chinese websites, saving 12% power on average while reducing page load time by 5%.
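To make the two-stage control loop described above concrete, here is a minimal Python sketch: a logistic regression classifier labels a page from processor features observed while it loads, a weighted sum of the same features yields a complexity score, and the class plus score are mapped to a CPU frequency cap and a GPU frequency. The feature set, class labels, feature weights, and frequency values are all hypothetical; the paper's actual features and the Chromium/Android governor interface are not given in the abstract.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-page features sampled during loading:
# [mean CPU utilization, mean GPU utilization, layout/JS time share]
X_train = np.array([
    [0.25, 0.10, 0.20],   # light, mostly static pages
    [0.30, 0.15, 0.25],
    [0.70, 0.55, 0.60],   # heavy, graphics/script intensive pages
    [0.80, 0.60, 0.70],
])
y_train = np.array([0, 0, 1, 1])  # 0 = light, 1 = heavy (labels are illustrative)

clf = LogisticRegression().fit(X_train, y_train)

# Hypothetical weighting of features into a scalar complexity score in [0, 1].
FEATURE_WEIGHTS = np.array([0.5, 0.3, 0.2])

def plan_frequencies(features):
    """Return an illustrative (cpu_cap_khz, gpu_freq_khz) pair for a page."""
    page_class = int(clf.predict([features])[0])
    complexity = float(np.dot(FEATURE_WEIGHTS, features))
    if page_class == 0:
        cpu_cap = 1_200_000          # cap CPU low for light pages
        gpu_freq = 300_000
    else:
        cpu_cap = 1_800_000          # allow higher CPU for heavy pages
        gpu_freq = 600_000
    # Nudge the GPU frequency with the complexity score.
    gpu_freq = int(gpu_freq * (0.8 + 0.4 * complexity))
    return cpu_cap, gpu_freq

print(plan_frequencies([0.75, 0.58, 0.65]))
```

On a real device the chosen values would be written to the kernel's cpufreq and GPU governor interfaces; the sketch stops at the decision step, which is the part the abstract actually describes.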