The rapid development of emerging technologies, such as edge intelligence and digital twins, has added momentum to the development of the Industrial Internet of Things (IIoT). However, the massive amount of data generated by the IIoT, coupled with heterogeneous computation capacity across IIoT devices and users' data privacy concerns, poses challenges to achieving industrial edge intelligence (IEI). To achieve IEI, in this paper, we propose a semi-federated learning framework in which data with higher privacy requirements are kept locally, while less private data can be uploaded to the edge server. In addition, we leverage digital twins to overcome the computation capacity heterogeneity of IIoT devices through the mapping of physical entities. We formulate a synchronization latency minimization problem that jointly optimizes edge association and the proportion of uploaded nonprivate data. As the joint problem is NP-hard and combinatorial, and taking into account the reality of large-scale device training, we develop a multi-agent hybrid-action deep reinforcement learning (DRL) algorithm to find the optimal solution. Simulation results show that our proposed DRL algorithm reduces latency and achieves better convergence for semi-federated learning than benchmark algorithms.
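The hybrid action in such a DRL formulation pairs a discrete edge association with a continuous upload proportion per device. A toy latency model for one synchronization round might look like the following sketch; the cost terms and queueing assumptions are illustrative, not the paper's actual system model:

```python
import numpy as np

def sync_latency(assoc, alpha, local_cycles, cpu_hz, data_bits, rate_bps, edge_hz):
    """Latency of one semi-federated round (illustrative model, not the paper's).

    assoc[i] : index of the edge server device i associates with (discrete action)
    alpha[i] : fraction of device i's nonprivate data uploaded (continuous action)
    """
    n = len(alpha)
    # local training on the data retained at the device
    t_local = (1 - alpha) * local_cycles / cpu_hz
    # transmission of the uploaded nonprivate share
    t_up = alpha * data_bits / rate_bps
    # edge-side training: devices sharing a server queue their load on it
    t_edge = np.zeros(n)
    for s in set(assoc):
        members = [i for i in range(n) if assoc[i] == s]
        load = sum(alpha[i] * local_cycles[i] for i in members)
        for i in members:
            t_edge[i] = load / edge_hz
    # the round synchronizes on the slowest device
    return float(np.max(t_local + t_up + t_edge))
```

An optimizer (here, the multi-agent DRL policy) would search over `assoc` and `alpha` to minimize this quantity.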
As the scale of federated learning expands, solving the non-IID data problem of federated learning has become a key challenge. Most existing solutions aim to improve the overall performance across all clients; however, this often sacrifices the performance of certain clients, such as those with less data. Ignoring fairness may greatly reduce the willingness of some clients to participate in federated learning. To address this problem, the authors propose Ada-FFL, an adaptive fairness federated aggregation learning algorithm that dynamically adjusts the fairness coefficient according to the updates of the local models, ensuring both the convergence of the global model and fairness among federated learning clients. By integrating coarse-grained and fine-grained equity solutions, the authors evaluate the deviation of local models by considering both global equity and individual equity; the weight ratio is then dynamically allocated to each client based on the evaluated deviation value, ensuring that the update differences of local models are fully considered in each round of training. Finally, by adding a regularisation term that keeps local model updates close to the global model, the sensitivity of the model to input perturbations is reduced and the generalisation ability of the global model is improved. Through extensive experiments on several federated datasets, the authors show that the method outperforms existing baselines in both convergence and fairness.
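A minimal sketch of deviation-based fair aggregation with a proximal pull toward the global model is shown below. The abstract does not give Ada-FFL's exact coefficient update rule, so the power-law reweighting and the proximal step here are assumptions standing in for it:

```python
import numpy as np

def fair_aggregate(global_w, local_ws, q=1.0, mu=0.1):
    """Fairness-aware aggregation sketch (inspired by the Ada-FFL description;
    the paper's actual coefficient update likely differs).

    q  : fairness coefficient -- larger q boosts clients that deviate more
    mu : proximal strength pulling each local update toward the global model
    """
    # proximal step: soften each local model toward the global one
    proxed = [w - mu * (w - global_w) for w in local_ws]
    # deviation of each (softened) local model from the global model
    dev = np.array([np.linalg.norm(w - global_w) for w in proxed])
    # fairness reweighting: clients with larger deviation get more weight,
    # so clients whose data pulls them far from the consensus are not ignored
    weights = (dev + 1e-12) ** q
    weights /= weights.sum()
    # the weighted average becomes the new global model
    return sum(wi * w for wi, w in zip(weights, proxed))
```

With `q = 0` this reduces to a plain unweighted average; increasing `q` shifts weight toward the most-deviating clients.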
As a basic technology at the physical layer of mobile communications, non-orthogonal multiple access has attracted wide attention across academia and industry. During the standardization of the fifth generation (5G) of mobile communications, 3GPP conducted a preliminary study on non-orthogonal multiple access without reaching a consensus to standardize the technology.
Multi-agent reinforcement learning relies on reward signals to guide the policy networks of individual agents. However, in high-dimensional continuous spaces, the non-stationary environment can provide outdated experiences that hinder convergence, resulting in ineffective training for multi-agent systems. To tackle this issue, a novel reinforcement learning scheme, Mutual Information Oriented Deep Skill Chaining (MioDSC), is proposed that generates an optimised cooperative policy by incorporating intrinsic rewards based on mutual information to improve exploration efficiency. These rewards encourage agents to diversify their learning process by engaging in actions that increase the mutual information between their actions and the environment state. In addition, MioDSC can generate cooperative policies using the options framework, allowing agents to learn and reuse complex action sequences and accelerating the convergence of multi-agent learning. MioDSC was evaluated in the multi-agent particle environment and the StarCraft multi-agent challenge at varying difficulty levels. The experimental results demonstrate that MioDSC outperforms state-of-the-art methods and is robust across various multi-agent system tasks with high stability.
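A histogram-based estimate of the mutual information between actions and states could serve as such an intrinsic reward. The estimator below is a generic discretized sketch for scalar action/state samples, not MioDSC's actual MI estimator:

```python
import numpy as np

def intrinsic_reward(actions, states, bins=8):
    """Estimate I(A; S) from paired scalar samples via a joint histogram.
    A generic sketch -- MioDSC's real estimator is not specified in the abstract."""
    # discretize both signals onto histogram bins
    a_idx = np.digitize(actions, np.histogram_bin_edges(actions, bins))
    s_idx = np.digitize(states, np.histogram_bin_edges(states, bins))
    # empirical joint distribution over (action bin, state bin)
    joint = np.zeros((bins + 2, bins + 2))
    for a, s in zip(a_idx, s_idx):
        joint[a, s] += 1
    joint /= joint.sum()
    # marginals and their outer product
    pa, ps = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    # KL divergence of the joint from the product of marginals = mutual information
    return float((joint[nz] * np.log(joint[nz] / (pa @ ps)[nz])).sum())
```

Adding a scaled version of this quantity to the extrinsic reward is one way to push agents toward actions that are informative about the environment state.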
This work introduces a modification to the Heisenberg Uncertainty Principle (HUP) by incorporating quantum complexity, including potential nonlinear effects. Our theoretical framework extends the traditional HUP to consider the complexity of quantum states, offering a more nuanced understanding of measurement precision. By adding a complexity term to the uncertainty relation, we explore nonlinear modifications such as polynomial, exponential, and logarithmic functions. Rigorous mathematical derivations demonstrate the consistency of the modified principle with classical quantum mechanics and quantum information theory. We investigate the implications of this modified HUP for various aspects of quantum mechanics, including quantum metrology, quantum algorithms, quantum error correction, and quantum chaos. Additionally, we propose experimental protocols to test the validity of the modified HUP, evaluating their feasibility with current and near-term quantum technologies. This work highlights the importance of quantum complexity in quantum mechanics and provides a refined perspective on the interplay between complexity, entanglement, and uncertainty in quantum systems. The modified HUP has the potential to stimulate interdisciplinary research at the intersection of quantum physics, information theory, and complexity theory, with significant implications for the development of quantum technologies and the understanding of the quantum-to-classical transition.
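A complexity-modified uncertainty relation of the sort described could be written schematically as follows; the specific functional forms are illustrative assumptions, since the abstract does not give the paper's exact expressions:

```latex
\Delta x\,\Delta p \;\ge\; \frac{\hbar}{2}\,\bigl(1 + f(\mathcal{C})\bigr),
\qquad
f(\mathcal{C}) \in \Bigl\{\,\alpha\,\mathcal{C}^{\,n},\;
  \alpha\bigl(e^{\beta\mathcal{C}} - 1\bigr),\;
  \alpha\ln\bigl(1 + \beta\mathcal{C}\bigr)\Bigr\},
```

where $\mathcal{C} \ge 0$ is a complexity measure of the state and $\alpha, \beta > 0$ are coupling constants. Each choice satisfies $f(0) = 0$, so the standard HUP is recovered for states of vanishing complexity.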
This paper presents a novel framework for understanding time as an emergent phenomenon arising from quantum information dynamics. We propose that the flow of time and its directional arrow are intrinsically linked to the growth of quantum complexity and the evolution of entanglement entropy in physical systems. By integrating principles from quantum mechanics, information theory, and holography, we develop a comprehensive theory that explains how time can emerge from timeless quantum processes. Our approach unifies concepts from quantum mechanics, general relativity, and thermodynamics, providing new perspectives on longstanding puzzles such as the black hole information paradox and the arrow of time. We derive modified Friedmann equations that incorporate quantum information measures, offering novel insights into cosmic evolution and the nature of dark energy. The paper presents a series of experimental proposals to test key aspects of this theory, ranging from quantum simulations to cosmological observations. Our framework suggests a deeply information-theoretic view of the universe, challenging our understanding of the nature of reality and opening new avenues for technological applications in quantum computing and sensing. This work contributes to the ongoing quest for a unified theory of quantum gravity and information, potentially with far-reaching implications for our understanding of space, time, and the fundamental structure of the cosmos.
This paper proposes an extension to the Einstein Field Equations by integrating quantum informational measures, specifically entanglement entropy and quantum complexity. These modified equations aim to bridge the gap between general relativity and quantum mechanics, offering a unified framework that incorporates the geometric properties of spacetime with fundamental aspects of quantum information theory. The theoretical implications of this approach include potential resolutions to longstanding issues like the black hole information paradox and new perspectives on dark energy. The paper presents modified versions of classical solutions such as the Schwarzschild metric and Friedmann equations, incorporating quantum corrections. It also outlines testable predictions in areas including gravitational wave propagation, black hole shadows, and cosmological observables. We propose several avenues for future research, including exploring connections with other quantum gravity approaches and designing experiments to test the theory's predictions. This work contributes to the ongoing exploration of quantum gravity, offering a framework that potentially unifies general relativity and quantum mechanics with testable predictions.
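The abstract does not state the modified equations explicitly. Schematically, an information-augmented field equation of the kind described might take the following form; the source term and couplings below are purely illustrative assumptions, not the paper's result:

```latex
G_{\mu\nu} + \Lambda g_{\mu\nu}
  = \frac{8\pi G}{c^{4}}\Bigl(T_{\mu\nu} + T^{(\mathrm{info})}_{\mu\nu}\Bigr),
\qquad
T^{(\mathrm{info})}_{\mu\nu}
  = \alpha\,\nabla_{\mu}\nabla_{\nu} S_{\mathrm{ent}}
  + \beta\,\mathcal{C}\,g_{\mu\nu},
```

where $S_{\mathrm{ent}}$ is an entanglement entropy density, $\mathcal{C}$ a quantum complexity measure, and $\alpha, \beta$ coupling constants; the classical equations are recovered when both couplings vanish.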
Large language models (LLMs), such as GPT and BERT, were proposed for natural language processing (NLP) and have shown promising results as general-purpose language models. An increasing number of industry professionals and researchers are adopting LLMs for program analysis tasks. However, one significant difference between programming languages and natural languages is that a programmer has the flexibility to assign any names to variables, methods, and functions in a program, whereas a natural language writer does not. Intuitively, the quality of naming in a program affects the performance of LLMs in program analysis tasks. This paper investigates how naming affects LLMs on code analysis tasks. Specifically, we create a set of datasets with code containing nonsense or misleading names for variables, methods, and functions, respectively. We then use well-trained models (CodeBERT) to perform code analysis tasks on these datasets. The experimental results show that naming has a significant impact on the performance of code analysis tasks based on LLMs, indicating that code representation learning based on LLMs heavily relies on well-defined names in code. Additionally, we conduct a case study on some special code analysis tasks using GPT, providing further insights.
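Building a dataset with nonsense names amounts to systematically renaming identifiers while preserving program behavior. The `ast`-based transformer below is a naive illustration of the idea; the paper's actual dataset construction may differ, and this sketch also renames references to builtins:

```python
import ast

class NonsenseRenamer(ast.NodeTransformer):
    """Replace variable, parameter, and function names with meaningless ones.
    A sketch of the kind of dataset construction the paper describes."""

    def __init__(self):
        self.mapping, self.counter = {}, 0

    def _fresh(self, old):
        # reuse the same nonsense name for every occurrence of an identifier
        if old not in self.mapping:
            self.mapping[old] = f"v{self.counter}"
            self.counter += 1
        return self.mapping[old]

    def visit_Name(self, node):          # variable uses and assignments
        node.id = self._fresh(node.id)
        return node

    def visit_arg(self, node):           # function parameters
        node.arg = self._fresh(node.arg)
        return node

    def visit_FunctionDef(self, node):   # function definitions
        node.name = self._fresh(node.name)
        self.generic_visit(node)
        return node

src = "def add(x, y):\n    return x + y"
print(ast.unparse(NonsenseRenamer().visit(ast.parse(src))))
```

Running the transformed code behaves identically to the original, but all name-based cues (`add`, `x`, `y`) that a model might rely on are gone.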
Funding (semi-federated learning for IIoT): supported in part by the National Natural Science Foundation of China under Grant 62001168, and in part by the Foundation and Application Research Grant of Guangzhou under Grant 202102020515.
Funding (Ada-FFL): National Natural Science Foundation of China, Grant/Award Number: 62272114; Joint Research Fund of Guangzhou and University, Grant/Award Number: 202201020380; Guangdong Higher Education Innovation Group, Grant/Award Number: 2020KCXTD007; Pearl River Scholars Funding Program of Guangdong Universities (2019); National Key R&D Program of China, Grant/Award Number: 2022ZD0119602; Major Key Project of PCL, Grant/Award Number: PCL2022A03.
Funding (MioDSC): National Natural Science Foundation of China, Grant/Award Number: 61872171; The Belt and Road Special Foundation of the State Key Laboratory of Hydrology-Water Resources and Hydraulic Engineering, Grant/Award Number: 2021490811.