Funding: Supported in part by NSFC (62102099, U22A2054, 62101594); in part by the Pearl River Talent Recruitment Program (2021QN02S643) and the Guangzhou Basic Research Program (2023A04J1699); in part by the National Research Foundation, Singapore, and the Infocomm Media Development Authority under its Future Communications Research Development Programme; DSO National Laboratories under the AI Singapore Programme (AISG Award No. AISG2-RP-2020-019); the Energy Research Test-Bed and Industry Partnership Funding Initiative, Energy Grid (EG) 2.0 programme; the DesCartes programme under the Campus for Research Excellence and Technological Enterprise (CREATE); MOE Tier 1 under Grant RG87/22; in part by the Singapore University of Technology and Design (SUTD) (SRG-ISTD-2021-165); in part by the SUTD-ZJU IDEA Grant SUTD-ZJU (VP) 202102; and in part by the Ministry of Education, Singapore, through its SUTD Kickstarter Initiative (SKI 20210204).
Abstract: Avatars, as promising digital representations and service assistants of users in Metaverses, enable drivers and passengers to immerse themselves in the 3D virtual services and spaces of UAV-assisted vehicular Metaverses. However, avatar tasks involve a multitude of human-to-avatar and avatar-to-avatar interactive applications, e.g., augmented reality navigation, which consume intensive computing resources, making it inefficient and impractical for vehicles to process avatar tasks locally. Fortunately, migrating avatar tasks to the nearest roadside units (RSUs) or unmanned aerial vehicles (UAVs) for execution is a promising solution to decrease computation overhead and reduce task processing latency. However, the high mobility of vehicles makes it challenging for them to independently make avatar migration decisions based on current and future vehicle status. To address these challenges, in this paper we propose a novel avatar task migration system based on multi-agent deep reinforcement learning (MADRL) to execute immersive vehicular avatar tasks dynamically. Specifically, we first formulate the problem of avatar task migration from vehicles to RSUs/UAVs as a partially observable Markov decision process that can be solved by MADRL algorithms. We then adopt multi-agent proximal policy optimization (MAPPO) as the MADRL algorithm for the avatar task migration problem. To overcome the slow convergence resulting from the curse of dimensionality and the non-stationarity caused by shared parameters in MAPPO, we further propose a transformer-based MAPPO approach built on sequential decision-making models for efficient representation of the relationships among agents. Finally, to motivate terrestrial or non-terrestrial edge servers (e.g., RSUs or UAVs) to share computation resources and to ensure traceability of the sharing records, we apply smart contracts and blockchain technologies to achieve secure sharing management. Numerical results demonstrate that the proposed approach outperforms the MAPPO approach by around 2% and reduces the latency of avatar task execution in UAV-assisted vehicular Metaverses by approximately 20%.
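As a rough illustration of the transformer-based MAPPO idea, the sketch below (PyTorch; the dimensions, the three-way action set, and the module names are assumptions for illustration, not the authors' implementation) embeds each vehicle's partial observation and lets a transformer encoder model inter-agent relationships before the actor and critic heads.

```python
# Illustrative sketch (not the authors' code): a transformer encoder over per-agent
# observations feeding a MAPPO-style actor-critic; sizes and action set are hypothetical.
import torch
import torch.nn as nn

class TransformerMAPPOAgent(nn.Module):
    def __init__(self, obs_dim=32, d_model=64, n_heads=4, n_layers=2, n_actions=3):
        super().__init__()
        self.embed = nn.Linear(obs_dim, d_model)                # per-agent observation embedding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)   # models agent-to-agent relationships
        self.actor = nn.Linear(d_model, n_actions)              # e.g., {keep local, migrate to RSU, migrate to UAV}
        self.critic = nn.Linear(d_model, 1)                     # value estimate per agent

    def forward(self, obs):                                     # obs: (batch, n_agents, obs_dim)
        h = self.encoder(self.embed(obs))
        return self.actor(h), self.critic(h)

# Usage: sample migration actions for 5 vehicles from partial observations.
agent = TransformerMAPPOAgent()
logits, values = agent(torch.randn(8, 5, 32))
actions = torch.distributions.Categorical(logits=logits).sample()
```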
Funding: Supported in part by the National Natural Science Foundation of China (61903090, 61727810, 62073086, 62076077, 61803096, U191140003), the Guangzhou Science and Technology Program Project (202002030289), and the Japan Society for the Promotion of Science (JSPS) KAKENHI (18K18083).
Abstract: Key frame extraction based on sparse coding can reduce the redundancy of continuous frames and concisely express the entire video. However, how to develop a key frame extraction algorithm that automatically extracts a few frames with a low reconstruction error remains a challenge. In this paper, we propose a novel model of structured sparse-coding-based key frame extraction, wherein a nonconvex group log-regularizer is used to obtain strong sparsity and a low reconstruction error. To automatically extract key frames, a decomposition scheme is designed to separate the sparse coefficient matrix by rows. The rows enforced by the nonconvex group log-regularizer become zero or nonzero, leading to the learning of a structured sparse coefficient matrix. To solve the nonconvex problem introduced by the log-regularizer, the difference-of-convex algorithm (DCA) is employed to decompose the log-regularizer into the difference of two convex functions related to the l1 norm, whose proximal operator can be obtained directly. An efficient structured sparse coding algorithm with the group log-regularizer for key frame extraction is thus developed, which can automatically extract a few frames directly from the video to represent the entire video with a low reconstruction error. Experimental results demonstrate that the proposed algorithm extracts more accurate key frames from most SumMe videos than the state-of-the-art methods. Furthermore, the proposed algorithm achieves a higher compression ratio, with a nearly 18% increase over sparse modeling representation selection (SMRS) and an 8% increase over SC-det on the VSUMM dataset.
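A minimal NumPy sketch of the overall scheme, assuming a self-expressive model Y ≈ YX in which nonzero rows of X mark the key frames: each DCA step linearizes the concave part of the group log-regularizer into row weights, and the resulting weighted group-l1 problem is solved by proximal gradient with row-wise soft-thresholding. The penalty form, parameter names, and iteration counts are illustrative assumptions, not the paper's exact algorithm.

```python
# Hedged sketch: DCA + proximal gradient for a row-wise group log-regularizer
# on a self-expressive key frame selection model (toy data, illustrative parameters).
import numpy as np

def group_log_keyframes(Y, lam=1.0, eps=1e-2, dca_iters=10, prox_iters=50):
    n = Y.shape[1]                          # number of frames (columns of Y)
    X = np.zeros((n, n))
    L = np.linalg.norm(Y, 2) ** 2           # Lipschitz constant of the quadratic term
    for _ in range(dca_iters):
        # DCA step: linearize the concave part of log(eps + ||x_i||) into row weights
        w = lam / (eps + np.linalg.norm(X, axis=1))
        for _ in range(prox_iters):         # proximal gradient on the weighted group-l1 problem
            G = Y.T @ (Y @ X - Y)           # gradient of 0.5 * ||Y - Y X||_F^2
            Z = X - G / L
            norms = np.linalg.norm(Z, axis=1, keepdims=True)
            shrink = np.maximum(1 - (w[:, None] / L) / np.maximum(norms, 1e-12), 0)
            X = shrink * Z                  # row-wise group soft-thresholding
    return np.where(np.linalg.norm(X, axis=1) > 1e-6)[0]   # nonzero rows = key frame indices

frames = np.random.rand(100, 40)            # 100-dim features for 40 frames (toy data)
print(group_log_keyframes(frames, lam=5.0))
```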
Funding: Supported by the National Natural Science Foundation of China (61003170, 61372142, 61103121), the Fundamental Research Funds for the Central Universities SCUT (2014ZG0037), and the China Postdoctoral Science Foundation (2012M511561).
Abstract: The active contour model based on local image fitting (LIF) energy is an effective method to deal with intensity inhomogeneities, but it often suffers from the local minimum problem because the LIF energy function is nonconvex. At the same time, the parameters of LIF are hard to choose for good performance. A global minimization of the adaptive LIF energy model is proposed. The regularized length term that constrains the zero level set is introduced to improve the accuracy of the boundaries, and a global minimization of the active contour model is presented. In addition, based on the statistical information of the intensity histogram, the standard deviation σ of the truncated Gaussian window is computed automatically from the image. Consequently, the proposed method improves the performance and adaptivity in dealing with intensity inhomogeneities. Experimental results on synthetic and real images show the desirable performance and efficiency of the proposed method.
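One plausible way to compute σ automatically from the intensity histogram is sketched below; the paper's exact formula is not reproduced, and the function below merely ties the window width to the intensity spread measured from the normalized histogram.

```python
# Hedged sketch: derive a Gaussian-window standard deviation from the intensity
# histogram of a grayscale image in [0, 1] (illustrative, not the paper's formula).
import numpy as np

def sigma_from_histogram(image, bins=256):
    hist, edges = np.histogram(image.ravel(), bins=bins, range=(0.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    p = hist / hist.sum()                               # histogram as a probability mass function
    mean = np.sum(p * centers)
    return float(np.sqrt(np.sum(p * (centers - mean) ** 2)))   # intensity spread sets the window size

img = np.clip(np.random.rand(64, 64), 0, 1)             # toy grayscale image
print(sigma_from_histogram(img))
```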
Funding: Supported in part by the National Natural Science Foundation of China (61573108, 61273192, 61333013), the Ministry of Education New Century Excellent Talent Program (NCET-12-0637), the Natural Science Foundation of Guangdong Province through the Science Fund for Distinguished Young Scholars (S20120011437), and the Doctoral Fund of the Ministry of Education of China (20124420130001).
Funding: This work was supported by the National Natural Science Foundation of China (62073087, 62071132, 61973090).
Abstract: Deep matrix factorization (DMF) has been demonstrated to be a powerful tool for capturing the complex hierarchical information of multi-view data. However, existing multi-view DMF methods mainly explore the consistency of multi-view data while neglecting the diversity among different views as well as the high-order relationships of the data, resulting in the loss of valuable complementary information. In this paper, we design a hypergraph regularized diverse deep matrix factorization (HDDMF) model for multi-view data representation (MDR), to jointly exploit multi-view diversity and a high-order manifold in a multi-layer factorization framework. A novel diversity enhancement term is designed to exploit the structural complementarity between different views of the data. Hypergraph regularization is utilized to preserve the high-order geometric structure of the data in each view. An efficient iterative optimization algorithm is developed to solve the proposed model, with a theoretical convergence analysis. Experimental results on five real-world data sets demonstrate that the proposed method significantly outperforms state-of-the-art multi-view learning approaches.
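For the hypergraph regularization term, a minimal sketch of the standard normalized hypergraph Laplacian is given below, assuming k-NN hyperedges and unit hyperedge weights (illustrative choices, not the HDDMF settings); the penalty tr(V^T L V) on a representation matrix V preserves the high-order geometric structure of the data.

```python
# Hedged sketch: normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
# built from k-NN hyperedges, used as a high-order manifold regularizer.
import numpy as np

def hypergraph_laplacian(X, k=5):
    n = X.shape[0]                                    # samples are rows of X
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    H = np.zeros((n, n))                              # incidence: hyperedge j = sample j and its k nearest neighbors
    for j in range(n):
        H[np.argsort(d2[:, j])[:k + 1], j] = 1.0
    W = np.ones(n)                                    # unit hyperedge weights
    Dv = H @ W                                        # vertex degrees
    De = H.sum(axis=0)                                # hyperedge degrees
    S = (H * W) / De                                  # H W De^{-1}
    Theta = (H / np.sqrt(Dv)[:, None]) @ S.T / np.sqrt(Dv)[None, :]
    return np.eye(n) - Theta

X = np.random.rand(30, 8)                             # toy data: 30 samples
L = hypergraph_laplacian(X)
V = np.random.rand(30, 4)                             # a hypothetical learned representation
print(np.trace(V.T @ L @ V))                          # high-order manifold penalty tr(V^T L V)
```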
Funding: Supported by the Guangdong Basic and Applied Basic Research Foundation (2024A1515011936) and the National Natural Science Foundation of China (62320106008).
Abstract: The concept of reward is fundamental in reinforcement learning, with a wide range of applications in the natural and social sciences. Seeking an interpretable reward for decision-making that largely shapes a system's behavior has always been a challenge in reinforcement learning. In this work, we explore a discrete-time reward for reinforcement learning in continuous time and action spaces, which represent many phenomena governed by physical laws. We find that the discrete-time reward leads to the extraction of the unique continuous-time decision law and improves computational efficiency by dropping the integrator operator that appears in classical results with integral rewards. We apply this finding to solve output-feedback design problems in power systems. The results reveal that our approach removes an intermediate stage of identifying dynamical models. Our work suggests that the discrete-time reward is efficient in the search for the desired decision law, providing a computational tool to understand and modify the behavior of large-scale engineering systems using the optimal learned decision.
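A toy linear-quadratic sketch of the core point, under heavy assumptions (a known two-state continuous-time system, zero-order-hold sampling, and quadratic per-sample rewards; none of this is the paper's data-driven output-feedback method for power systems, which avoids model identification): with a discrete-time reward there is no integral-reward term to evaluate, and the decision law follows from ordinary discrete-time Riccati iteration on the sampled system.

```python
# Toy illustration (NumPy/SciPy): discrete-time rewards r_k = -(x_k^T Q x_k + u_k^T R u_k)
# sampled every h seconds let the decision law for a continuous-time linear system be
# found by discrete-time Riccati iteration, with no integral-reward (integrator) term.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, -0.5]])       # toy continuous-time dynamics dx/dt = A x + B u
B = np.array([[0.0], [1.0]])
h = 0.05                                        # sampling interval of the discrete-time reward
Ad = expm(A * h)                                # exact zero-order-hold discretization
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B
Q, R = np.eye(2), np.eye(1)

P = np.eye(2)
for _ in range(500):                            # value iteration on the discrete-time Riccati equation
    K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
    P = Q + Ad.T @ P @ (Ad - Bd @ K)
print("decision law u = -K x with K =", K)
```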