Persistent memory (PM) file systems have been developed to achieve high performance by exploiting the advanced features of PMs, including nonvolatility, byte addressability, and dynamic random access memory (DRAM)-like performance. Unfortunately, PMs suffer from limited write endurance. The space management strategies of existing PM file systems can induce a severely unbalanced wear problem, which can quickly damage the underlying PMs. In this paper, we propose a Wear-leveling-aware Multi-grained Allocator, called WMAlloc, to achieve wear leveling of PMs while improving file system performance. WMAlloc adopts multiple min-heaps to manage the unused space of PMs, where each heap represents an allocation granularity. WMAlloc then serves allocation requests with less-worn blocks drawn from the corresponding min-heap. Moreover, to avoid recursive splits and inefficient heap lookups in WMAlloc, we further propose a bitmap-based multi-heap tree (BMT) to enhance WMAlloc, yielding WMAlloc-BMT. We implement WMAlloc and WMAlloc-BMT in the Linux kernel on top of NOVA, a typical PM file system. Experimental results show that, compared with the original NOVA and dynamic wear-aware range management (DWARM), the state-of-the-art wear-leveling-aware allocator for PM file systems, WMAlloc achieves 4.11× and 1.81× maximum-write-count reduction and 1.02× and 1.64× performance, respectively, averaged over four workloads. Furthermore, WMAlloc-BMT outperforms WMAlloc with 1.08× performance and a 1.17× maximum-write-count reduction averaged over the same four workloads.
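The core allocation policy the abstract describes — one min-heap of free blocks per granularity, ordered by wear — can be sketched in a few lines of Python. This is a toy model for illustration only, not the authors' kernel implementation; the `WearAwareAllocator` class, block sizes, and wear counters are invented:

```python
import heapq

class WearAwareAllocator:
    """Toy model of WMAlloc-style allocation: one min-heap of free
    blocks per granularity, keyed by wear count, so every request is
    served with the least-worn free block of the requested size."""

    def __init__(self, granularities):
        # one min-heap per allocation granularity (e.g. 4 KiB, 2 MiB)
        self.heaps = {g: [] for g in granularities}

    def free(self, granularity, block_id, wear):
        heapq.heappush(self.heaps[granularity], (wear, block_id))

    def allocate(self, granularity):
        # pop the least-worn block of the requested granularity
        wear, block_id = heapq.heappop(self.heaps[granularity])
        return block_id, wear

alloc = WearAwareAllocator([4096, 2 * 1024 * 1024])
alloc.free(4096, block_id=0, wear=7)
alloc.free(4096, block_id=1, wear=2)
alloc.free(4096, block_id=2, wear=5)
print(alloc.allocate(4096))  # least-worn block first -> (1, 2)
```

The BMT extension in the paper replaces the per-granularity heap lookup with a bitmap-indexed tree; that structure is not reproduced here.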
High-efficiency, low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to keep local data-learning models efficient while preventing privacy leakage in a high-mobility environment. To protect data privacy and improve learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as the local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation cost of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. To improve resource scheduling efficiency in FBL, a double Davidon–Fletcher–Powell (DDFP) algorithm is presented to solve the time-slot allocation and RIS configuration problem. Based on the resource scheduling results, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. Simulation results show that the proposed FBL framework outperforms the comparison models in efficiency, accuracy, and cost for knowledge sharing in the IoV.
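For readers unfamiliar with the Davidon–Fletcher–Powell family the abstract builds on, the classical DFP inverse-Hessian update can be shown on a small quadratic. This is the textbook single DFP method under invented test data, not the paper's "double DFP" variant or its scheduling problem:

```python
def dfp_minimize(A, x, iters=10):
    """Minimize f(x) = 0.5 x^T A x (A symmetric positive definite)
    with the classical DFP quasi-Newton method and exact line search.
    The inverse-Hessian estimate H is refreshed with the rank-two
    update H += s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)."""
    n = len(x)
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H0 = I

    def mv(M, v):  # matrix-vector product
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    g = mv(A, x)
    for _ in range(iters):
        d = [-gi for gi in mv(H, g)]            # quasi-Newton direction
        denom = dot(d, mv(A, d))
        if denom == 0:
            break
        alpha = -dot(g, d) / denom              # exact line search for quadratic f
        x_new = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = mv(A, x_new)
        s = [a - b for a, b in zip(x_new, x)]
        y = [a - b for a, b in zip(g_new, g)]
        sy, Hy = dot(s, y), mv(H, y)
        yHy = dot(y, Hy)
        if sy == 0 or yHy == 0:                 # converged, nothing to update
            x, g = x_new, g_new
            break
        for i in range(n):                      # DFP rank-two update
            for j in range(n):
                H[i][j] += s[i] * s[j] / sy - Hy[i] * Hy[j] / yHy
        x, g = x_new, g_new
    return x

A = [[2.0, 0.0], [0.0, 10.0]]
x_min = dfp_minimize(A, [3.0, -1.0])
print(x_min)  # close to the minimizer [0, 0]
```

On a quadratic with exact line search, DFP terminates in at most `n` iterations, which is why the ten-iteration budget is ample here.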
BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is being transformed by the integration of machine learning (ML) and artificial intelligence models.
AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability to established traditional scoring systems.
METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough, standardized literature search of the PubMed/MEDLINE database. The search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, and studies exhibiting evident methodological flaws.
RESULTS: The search yielded 64 articles, of which 23 met the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined, and only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capability for 90-day mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI.
CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
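The area-under-the-ROC-curve metric the review reports (values from 0.6 to 1) has a simple rank-based definition: the probability that a randomly chosen positive case is scored above a randomly chosen negative one. A minimal sketch, with invented toy scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U)
    identity: count positive/negative score pairs where the positive
    wins, with ties counting one half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy example: a model that mostly ranks positives above negatives
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
print(auc(labels, scores))  # 8/9 ≈ 0.889
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation, which is why the 0.6–1 range in the reviewed studies spans "satisfactory to excellent".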
Understanding understory seedling regeneration mechanisms is important for the sustainable development of temperate primary forests in the context of increasingly intense climate warming events. The poor regeneration of dominant tree species, however, is currently one of the biggest challenges these forests face. In particular, regeneration of the shade-intolerant Quercus mongolica is difficult in primary forests, in contrast with the extreme abundance of its understory seedlings in secondary forests; the mechanism behind this phenomenon is still unknown. This study used in-situ monitoring and a nursery-controlled experiment to investigate the survival rate, growth performance, and nonstructural carbohydrate (NSC) concentrations and pools of various organ tissues of seedlings over two consecutive years, and further analyzed understory light availability and simulated foliage carbon (C) gain in the secondary and primary forests. The results suggest that seedlings in the secondary forest had greater aboveground biomass allocation, height, and specific leaf area (SLA) in summer, which allowed them to survive longer during the canopy closure period. High light availability and positive C gain in early spring and late autumn are key factors affecting the growth and survival of understory seedlings in the secondary forest, whereas seedlings in the primary forest had negative annual carbon gain. Through the growing season, the total NSC concentrations of seedlings gradually decreased, whereas those of seedlings in the secondary forest increased significantly in autumn and were mainly stored in roots for winter consumption and the following summer's shade period; this was verified by the nursery-controlled experiment, in which simulated autumn-enhanced light availability improved seedling survival rate and NSC pools. In conclusion, our results reveal the survival trade-off strategies of Quercus mongolica seedlings and highlight the necessity of high light availability during the spring and autumn phenological periods for shade-intolerant tree seedling recruitment.
With the development of the transportation industry, effectively guiding aircraft in an emergency to prevent catastrophic accidents remains one of the top safety concerns. Undoubtedly, operational status data of the aircraft play an important role in the judgment and command of the Operational Control Center (OCC). However, how to transmit various operational status data from an abnormal aircraft back to the OCC in an emergency is still an open problem. In this paper, we propose a novel Telemetry, Tracking, and Command (TT&C) architecture named Collaborative TT&C (CoTT&C), based on a mega-constellation, to solve this problem. CoTT&C allows each satellite to help the abnormal aircraft by sharing TT&C resources when needed, realizing real-time and reliable aeronautical communication in an emergency. Specifically, we design a dynamic resource-sharing mechanism for CoTT&C and model it as a single-leader-multi-follower Stackelberg game. Further, we derive the unique Nash Equilibrium (NE) of the game in closed form. Simulation results demonstrate that the proposed resource-sharing mechanism is effective, incentive compatible, fair, and reciprocal. We hope that our findings shed some light on future research on aeronautical communications in emergencies.
In this paper, we explore a cooperative decode-and-forward (DF) relay network comprising a source, a relay, and a destination in the presence of an eavesdropper. To improve the physical-layer security of the relay system, we propose a jamming-aided decode-and-forward relay (JDFR) scheme combining artificial noise with DF relaying, which requires two stages to transmit a packet. Specifically, in stage one, the source sends a confidential message to the relay while the destination acts as a friendly jammer and transmits artificial noise to confound the eavesdropper. In stage two, the relay forwards its re-encoded message to the destination while the source emits artificial noise to confuse the eavesdropper. In addition, we analyze the security-reliability tradeoff (SRT) performance of the proposed JDFR scheme, where security and reliability are evaluated by deriving the intercept probability (IP) and outage probability (OP), respectively. For comparison, the SRT of the traditional decode-and-forward relay (TDFR) scheme is also analyzed. Numerical results show that the SRT performance of the proposed JDFR scheme is better than that of the TDFR scheme, and that for the JDFR scheme a better SRT can be obtained through optimal power allocation (OPA) between the friendly jammer and the user.
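The outage probability that underpins an SRT analysis has a one-line closed form for a single Rayleigh-fading link, since the instantaneous SNR is then exponentially distributed. A sketch with invented SNR values, checked by Monte Carlo; the paper's actual IP/OP expressions for the two-stage jamming scheme are more involved and are not reproduced here:

```python
import math
import random

def outage_probability(avg_snr, threshold):
    """Closed-form outage probability of one Rayleigh-fading link:
    the instantaneous SNR is exponential with mean avg_snr, so
    OP = P(SNR < threshold) = 1 - exp(-threshold / avg_snr)."""
    return 1.0 - math.exp(-threshold / avg_snr)

def outage_monte_carlo(avg_snr, threshold, trials=200_000, seed=1):
    """Empirical check: draw exponential SNRs and count outages."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(1.0 / avg_snr) < threshold
               for _ in range(trials))
    return hits / trials

avg_snr, threshold = 10.0, 3.0  # linear scale, illustrative numbers
print(outage_probability(avg_snr, threshold))  # ≈ 0.259
print(outage_monte_carlo(avg_snr, threshold))  # close to the closed form
```

Security-reliability tradeoff curves are obtained by sweeping a parameter (here it would be the jammer/user power split) and plotting IP against OP.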
Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, and thus the incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium-blockchain-enabled collaborative edge computing framework in which users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of the computing capability of each ECS. We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation-based node selection approach to facilitate the consensus process, and also consider completion-time-based primary node selection to avoid monopolization by particular edge nodes and enhance the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm, showing that the total delay can be reduced by up to 40% compared with the non-cooperative case.
In this paper, we optimize the spectral efficiency (SE) of an uplink massive multiple-input multiple-output (MIMO) system with imperfect channel state information (CSI) over a Rayleigh fading channel. The SE optimization problem is formulated under the constraints of maximum power and minimum rate for each user. We then develop a near-optimal power allocation (PA) scheme using the successive convex approximation (SCA) method, the Lagrange multiplier method, and the block coordinate descent (BCD) method; it obtains almost the same SE as the benchmark scheme with lower complexity. Since this scheme needs three layers of iteration, a suboptimal PA scheme is developed to further reduce the complexity, in which the characteristic of massive MIMO (i.e., numerous receive antennas) is exploited for convex reformulation and the rate constraint is converted into linear constraints. This suboptimal scheme needs only a single layer of iteration and thus has lower complexity than the near-optimal scheme. Finally, we jointly design the pilot power and data power to further improve performance, and propose a two-stage algorithm to obtain the joint PA. Simulation results verify the effectiveness of the proposed schemes, which achieve superior SE performance.
Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation. While traditional task allocation methods can help reduce costs and improve efficiency, they may encounter challenges when dealing with abnormal data-flow nodes, leading to decreased allocation accuracy and efficiency. To address these issues, this study proposes a novel two-part invalid-detection task allocation framework. In the first step, an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data; compared to the baseline method, the model achieves an approximately 4% increase in F1 score on the public dataset. In the second step of the framework, task allocation is modeled using a bipartite graph matching method. This phase introduces a P-queue KM algorithm that implements a more efficient optimization strategy, improving allocation efficiency by approximately 23.83% compared to the baseline method. Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes, enhancing allocation precision, and achieving efficient allocation.
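The KM (Kuhn-Munkres) algorithm the second step builds on solves max-weight bipartite assignment in O(n³); on a tiny instance its optimum can be verified by brute force, which is all this sketch does (the weight matrix is invented, and the paper's P-queue variant is not reproduced):

```python
from itertools import permutations

def best_assignment(weights):
    """Exhaustive max-weight assignment of tasks to workers: the same
    optimum a Kuhn-Munkres solver finds in O(n^3), checked here by
    trying all n! permutations (only viable for tiny n)."""
    n = len(weights)
    best_perm, best_value = None, float("-inf")
    for perm in permutations(range(n)):
        value = sum(weights[task][perm[task]] for task in range(n))
        if value > best_value:
            best_perm, best_value = perm, value
    return best_perm, best_value

# rows = tasks, columns = workers, entries = matching quality
weights = [[4, 1, 3],
           [2, 0, 5],
           [3, 2, 2]]
print(best_assignment(weights))  # -> ((0, 2, 1), 11)
```

A production allocator would replace the permutation loop with KM; the brute-force version is useful as a correctness oracle in tests.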
Networked robots can perceive their surroundings, interact with each other or with humans, and make decisions to accomplish specified tasks in remote/hazardous/complex environments. Satellite-unmanned aerial vehicle (UAV) networks can support such robots by providing on-demand communication services. However, under the traditional open-loop communication paradigm, network resources are usually divided into user-wise, mostly independent links, ignoring the task-level dependency of robot collaboration. Thus, it is imperative to develop a new communication paradigm that takes into account the high-level content and values behind the data, to facilitate multi-robot operation. Inspired by Wiener's Cybernetics theory, this article explores a closed-loop communication paradigm for the robot-oriented satellite-UAV network. This paradigm handles group-wise structured links so as to allocate resources in a task-oriented manner. It can also exploit the mobility of robots to liberate the network from full coverage, enabling new orchestration between network serving and positive mobility control of robots. Moreover, the integration of sensing, communications, computing, and control would enlarge the benefits of this new paradigm. We present a case study on joint mobile edge computing (MEC) offloading and mobility control of robots, and finally outline potential challenges and open issues.
With the rapid development of Network Function Virtualization (NFV), the problem of low resource utilization in traditional data centers is gradually being addressed. However, existing research does not optimize both local and global allocation of data center resources. Hence, we propose an adaptive hybrid optimization strategy that combines dynamic programming and neural networks to improve resource utilization and service quality in data centers. Our approach encompasses a service function chain simulation generator, a parallel-architecture service system, a dynamic programming strategy for maximizing the utilization of local server resources, a neural network for predicting the global resource utilization rate, and a global resource optimization strategy for bottleneck and redundant resources. Simulations show that, with our local and global resource allocation strategies in place, system performance is significantly improved.
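The local dynamic-programming step can be illustrated with the classic 0/1 knapsack recurrence: pack service-function instances onto a server's remaining capacity to maximize the value served. The instance below is invented and is only a sketch of the DP idea, not the paper's strategy:

```python
def pack_server(capacity, demands, values):
    """0/1 knapsack DP: choose a subset of service-function instances
    whose resource demands fit the server capacity while maximizing
    total value (e.g. served traffic). dp[c] = best value achievable
    with a resource budget of c."""
    dp = [0] * (capacity + 1)
    for demand, value in zip(demands, values):
        # iterate budgets in reverse so each instance is used at most once
        for c in range(capacity, demand - 1, -1):
            dp[c] = max(dp[c], dp[c - demand] + value)
    return dp[capacity]

# toy instance: CPU units demanded by candidate VNF instances
print(pack_server(10, demands=[6, 4, 3, 5], values=[8, 5, 4, 6]))  # -> 13
```

The reverse inner loop is what distinguishes 0/1 packing from the unbounded variant, where a forward loop would let one instance be placed repeatedly.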
Quantum key distribution (QKD) is a technology that can resist the threat quantum computers pose to existing conventional cryptographic protocols. However, due to the stringent requirements of the quantum key generation environment, the generated quantum keys are considered valuable, and the slow key generation rate conflicts with the high-speed data transmission of traditional optical networks. In this paper, for a QKD network with trusted relays, which is mainly based on point-to-point quantum keys and has complex changes in network resources, we aim to allocate resources reasonably for data packet distribution. First, we formulate a linear programming constraint model for the key resource allocation (KRA) problem based on time-slot scheduling. Second, we propose a new scheduling scheme based on graded key security requirements (GKSR) and a new micro-log key storage algorithm for effective storage and management of key resources. Finally, we propose a key resource consumption (KRC) routing optimization algorithm to properly allocate time slots, routes, and key resources. Simulation results show that the proposed scheme significantly improves the key distribution success rate and the key resource utilization rate, among other metrics.
To meet communication services with diverse requirements, dynamic resource allocation is increasingly important. In this paper, we consider multi-slot and multi-user resource allocation (MSMU-RA) in a downlink cellular scenario, with the aim of maximizing system spectral efficiency while guaranteeing user fairness. We first model the MSMU-RA problem as a dual-sequence decision-making process and then solve it by a novel Transformer-based deep reinforcement learning (TDRL) approach. Specifically, the proposed TDRL approach rests on two aspects: 1) to adapt to the dynamic wireless environment, the proximal policy optimization (PPO) algorithm is used to optimize the multi-slot RA strategy; 2) to avoid co-channel interference, a Transformer-based PPO algorithm is presented to obtain the optimal multi-user RA scheme by exploring the mapping between the user sequence and the resource sequence. Experimental results show that: i) the proposed approach outperforms both traditional and DRL methods in spectral efficiency and user fairness; ii) the proposed algorithm is superior to existing DRL approaches in convergence speed and generalization performance.
A modular system can change its physical structure through self-assembly and self-disassembly between modules to dynamically adapt to task and environmental requirements. Recognizing the adaptive capability of modular systems, we introduce a modular reconfigurable flight array (MRFA) to pursue a multifunction aircraft suited to diverse tasks and requirements, and we investigate the attitude control and control allocation problems using the MRFA as a platform. First, considering the variable and irregular topological configuration of the modular array, a center-of-mass-independent flight array dynamics model is proposed to allow control allocation in over-actuated situations. Second, to achieve stable, fast, and accurate attitude tracking for the MRFA, a fixed-time convergent sliding mode controller with state-dependent variable exponent coefficients is proposed to ensure a fast convergence rate both away from and near the system equilibrium point without encountering singularity. It is shown that the controller retains its fixed-time convergence even in the presence of external disturbances. Finally, simulation results demonstrate the effectiveness of the proposed modeling and control strategies.
In this paper, we propose a two-way deep reinforcement learning (DRL)-based resource allocation algorithm that solves the resource allocation problem in a cognitive downlink network operating in underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new power-domain sparse code multiple access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low-latency communication (URLLC) slice. We design a Double Deep Q Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the quality of service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and a modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with power-domain non-orthogonal multiple access (PD-NOMA) slices and sparse code multiple access (SCMA) slices, the PD-SCMA slices dramatically enhance spectral efficiency and increase the number of accessible users.
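The "double" in DDQN refers to decoupling action selection from action evaluation to curb overestimation bias. That idea is easiest to see in its tabular ancestor, double Q-learning, sketched here on a tiny two-step chain with invented rewards (the paper's deep networks and cognitive-radio environment are not reproduced):

```python
import random

def double_q_chain(episodes=5000, alpha=0.1, gamma=0.9, eps=0.3, seed=0):
    """Tabular double Q-learning on a 2-step deterministic chain.
    Two tables are kept: the greedy next action is selected with one
    table but evaluated with the other -- the estimation-bias fix
    that Double DQN lifts into deep networks."""
    rng = random.Random(seed)
    R = [[0.0, 1.0],   # state 0: action 1 pays more
         [2.0, 0.5]]   # state 1 (final step): action 0 pays more
    qa = [[0.0, 0.0], [0.0, 0.0]]
    qb = [[0.0, 0.0], [0.0, 0.0]]
    for _ in range(episodes):
        for s in (0, 1):
            q = [qa[s][a] + qb[s][a] for a in (0, 1)]
            a = rng.randrange(2) if rng.random() < eps else q.index(max(q))
            r, done = R[s][a], s == 1
            if done:
                target_a = target_b = r
            else:
                nxt = s + 1
                # select with one table, evaluate with the other
                target_a = r + gamma * qb[nxt][qa[nxt].index(max(qa[nxt]))]
                target_b = r + gamma * qa[nxt][qb[nxt].index(max(qb[nxt]))]
            if rng.random() < 0.5:             # update one table at random
                qa[s][a] += alpha * (target_a - qa[s][a])
            else:
                qb[s][a] += alpha * (target_b - qb[s][a])
    return qa, qb

qa, qb = double_q_chain()
greedy = [max((0, 1), key=lambda a: qa[s][a] + qb[s][a]) for s in (0, 1)]
print(greedy)  # learned policy -> [1, 0]
```

DDPG extends the same actor-critic machinery to continuous actions, which is why the paper pairs it with DDQN for the (continuous) power allocation half of the problem.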
Users and edge servers are not fully mutually trusted in mobile edge computing (MEC), and hence blockchain can be introduced to provide trustable MEC. In blockchain-based MEC, each edge server functions as a node in both the MEC system and the blockchain, processing users' tasks and then uploading task-related information to the blockchain. That is, each edge server runs both users' offloaded tasks and blockchain tasks simultaneously. Note that there is a trade-off between the resources allocated to MEC tasks and to blockchain tasks; the division of each edge server's resources between the two is therefore crucial for the processing delay of blockchain-based MEC. Most existing research tackles resource allocation in either the blockchain or the MEC alone, which leads to unfavorable performance of the blockchain-based MEC system. In this paper, we study how to allocate the computing resources of edge servers between MEC and blockchain tasks with the aim of minimizing the total system processing delay. For this problem, we propose a computing resource Allocation algorithm for Blockchain-based MEC (ABM), which utilizes Slater's condition, the Karush-Kuhn-Tucker (KKT) conditions, partial derivatives of the Lagrangian function, and the subgradient projection method to obtain the solution. Simulation results show that ABM converges and effectively reduces the processing delay of blockchain-based MEC.
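The flavor of a KKT-based delay minimization can be shown on the textbook single-constraint case: split a server's CPU budget F across workloads to minimize the total delay Σᵢ wᵢ/fᵢ. Stationarity of the Lagrangian gives a closed-form square-root rule. This is only an illustrative special case under invented numbers, not the full ABM algorithm:

```python
import math

def allocate_cpu(workloads, total_capacity):
    """Minimize total delay sum_i w_i / f_i subject to sum_i f_i = F.
    Setting d/df_i [w_i/f_i + lam*f_i] = 0 gives w_i / f_i^2 = lam,
    so f_i = F * sqrt(w_i) / sum_j sqrt(w_j)."""
    roots = [math.sqrt(w) for w in workloads]
    total = sum(roots)
    return [total_capacity * r / total for r in roots]

workloads = [1.0, 4.0, 9.0]            # CPU-cycles demanded per task class
f = allocate_cpu(workloads, total_capacity=12.0)
print(f)                               # -> [2.0, 4.0, 6.0]
print(sum(w / fi for w, fi in zip(workloads, f)))  # minimal total delay -> 3.0
```

The subgradient projection step in ABM handles the cases where such a closed form is unavailable, iterating dual variables toward the same KKT point.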
A real-time adaptive role-allocation method based on reinforcement learning is proposed to improve human-robot cooperation performance for a curtain wall installation task. This method breaks with the traditional idea that the robot is always the follower, or that cooperation merely swaps the leader and follower roles. We propose a self-learning method that dynamically adapts and continuously adjusts the robot's initiative weight according to changes in the task. First, a physical human-robot cooperation model that includes the role factor is built. Then, a reinforcement learning model that can adjust the role factor in real time is established, and a reward and action model is designed. The role factor is adjusted continuously according to the comprehensive performance of the human-robot interaction force and the robot's jerk during repeated installation. Finally, the role-adjustment rule established above continuously improves the comprehensive performance. Experiments on dynamic role allocation and on the effect of the performance weighting coefficient have been conducted. The results show that the proposed method realizes role adaptation and achieves the dual optimization goal of reducing the sum of the cooperator force and the robot's jerk.
As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model by stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of the edge devices. We first derive a closed form of the communication compression in terms of the sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem with the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
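The two compression knobs the abstract names, sparsification and quantization, compose naturally: keep only the largest-magnitude gradient entries, then represent the survivors with a few uniform levels. A toy sketch; the factors `k` and `levels` stand in for the paper's sparsification and quantization factors, and the actual encoder there may differ:

```python
def compress_gradient(grad, k, levels):
    """Toy gradient compressor: top-k sparsification followed by
    uniform quantization of the surviving entries to `levels` points
    on [-m, m], where m is the largest surviving magnitude.
    Returns a sparse dict: index -> quantized value."""
    # top-k sparsification: indices of the k largest |g_i|
    idx = sorted(range(len(grad)), key=lambda i: -abs(grad[i]))[:k]
    m = max(abs(grad[i]) for i in idx)
    step = 2 * m / (levels - 1)              # uniform quantizer step
    sparse = {}
    for i in idx:
        q = round((grad[i] + m) / step)      # snap to the nearest level
        sparse[i] = q * step - m
    return sparse

g = [0.05, -1.2, 0.7, 0.01, 2.0, -0.3]
print(compress_gradient(g, k=3, levels=5))  # {4: 2.0, 1: -1.0, 2: 1.0}
```

Transmitting the (index, level) pairs instead of the dense float vector is where the communication saving comes from; the convergence analysis in the paper bounds the error this lossy step introduces into SGD.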
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, however, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decisions and resource allocation in MEC-enabled STNs. We formulate the problem of minimizing the average sum task completion delay of all IoT devices over all time periods, and decompose it into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, consisting of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme outperforms other baseline schemes.
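The UCB principle behind the offloading decision is compact enough to sketch directly: each candidate server (arm) is scored by its empirical mean reward plus an exploration bonus that shrinks as it is tried more. This is plain UCB1 on invented statistics; the paper's device-cooperation-aided variant, which shares observations across devices, is not reproduced:

```python
import math

def ucb_select(counts, mean_rewards, t):
    """UCB1 arm selection: pick the arm maximizing the empirical mean
    plus the exploration bonus sqrt(2 ln t / n_i); any untried arm
    is selected first."""
    for arm, n in enumerate(counts):
        if n == 0:           # try every arm once before trusting the bound
            return arm
    scores = [m + math.sqrt(2 * math.log(t) / n)
              for m, n in zip(mean_rewards, counts)]
    return scores.index(max(scores))

# after 10 rounds: arm 0 looks best on average, but arm 2 is under-explored
counts, means = [6, 3, 1], [0.6, 0.4, 0.5]
print(ucb_select(counts, means, t=10))  # -> 2
```

In the offloading setting the "reward" would be the negative observed task completion delay, so the bound steers devices toward servers whose delay is still uncertain.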
In this paper, we investigate an IRS-aided user cooperation (UC) scheme in millimeter-wave (mmWave) wireless-powered sensor networks (WPSN), where two single-antenna users are first wirelessly powered in the wireless energy transfer (WET) phase and then cooperatively transmit information to a hybrid access point (AP) in the wireless information transmission (WIT) phase, with an IRS deployed to enhance the system performance of both the WET and the WIT. We maximize the weighted sum rate by jointly optimizing the transmit time slots, power allocations, and the phase shifts of the IRS. Due to the non-convexity of the original problem, a semidefinite programming relaxation-based approach is proposed to convert the formulated problem into a convex optimization framework, which can obtain the globally optimal solution. Simulation results demonstrate that the weighted sum throughput of the proposed UC scheme outperforms the non-UC scheme whether equipped with an IRS or not.
Funding: Project supported by the National Natural Science Foundation of China (No. 62162011) and the Doctor Funds of Guizhou University, China (Nos. 2020(13) and 2022(44)).
Funding: Supported in part by the National Natural Science Foundation of China (62371116 and 62231020); in part by the Science and Technology Project of Hebei Province Education Department (ZD2022164); in part by the Fundamental Research Funds for the Central Universities (N2223031); in part by the Open Research Project of Xidian University (ISN24-08); and in part by the Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education (Guilin University of Electronic Technology, China, CRKL210203).
Abstract: High-efficiency and low-cost knowledge sharing can improve the decision-making ability of autonomous vehicles by mining knowledge from the Internet of Vehicles (IoV). However, it is challenging to ensure high efficiency of local data learning models while preventing privacy leakage in a high-mobility environment. To protect data privacy and improve learning efficiency in knowledge sharing, we propose an asynchronous federated broad learning (FBL) framework that integrates broad learning (BL) into federated learning (FL). In FBL, we design a broad fully connected model (BFCM) as the local model for training client data. To enhance the wireless channel quality for knowledge sharing and reduce the communication and computation costs of participating clients, we construct a joint resource allocation and reconfigurable intelligent surface (RIS) configuration optimization framework for FBL. The problem is decoupled into two convex subproblems. To improve resource scheduling efficiency in FBL, a double Davidon-Fletcher-Powell (DDFP) algorithm is presented to solve the time-slot allocation and RIS configuration problem. Based on the results of resource scheduling, we design a reward-allocation algorithm based on federated incentive learning (FIL) in FBL to compensate clients for their costs. Simulation results show that the proposed FBL framework outperforms the comparison models in terms of efficiency, accuracy, and cost for knowledge sharing in the IoV.
Abstract: BACKGROUND: Liver transplantation (LT) is a life-saving intervention for patients with end-stage liver disease. However, the equitable allocation of scarce donor organs remains a formidable challenge. Prognostic tools are pivotal in identifying the most suitable transplant candidates. Traditionally, scoring systems like the model for end-stage liver disease have been instrumental in this process. Nevertheless, the landscape of prognostication is being transformed by the integration of machine learning (ML) and artificial intelligence models. AIM: To assess the utility of ML models in prognostication for LT, comparing their performance and reliability with established traditional scoring systems. METHODS: Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines, we conducted a thorough and standardized literature search of the PubMed/MEDLINE database. Our search imposed no restrictions on publication year, age, or gender. Exclusion criteria encompassed non-English studies, review articles, case reports, conference papers, studies with missing data, and those exhibiting evident methodological flaws. RESULTS: Our search yielded 64 articles, of which 23 met the inclusion criteria. Among the selected studies, 60.8% originated from the United States and China combined, and only one pediatric study met the criteria. Notably, 91% of the studies were published within the past five years. ML models consistently demonstrated satisfactory to excellent area under the receiver operating characteristic curve values (ranging from 0.6 to 1) across all studies, surpassing the performance of traditional scoring systems. Random forest exhibited superior predictive capability for 90-d mortality following LT, sepsis, and acute kidney injury (AKI). In contrast, gradient boosting excelled in predicting the risk of graft-versus-host disease, pneumonia, and AKI. CONCLUSION: This study underscores the potential of ML models in guiding decisions related to allograft allocation and LT, marking a significant evolution in the field of prognostication.
Funding: Supported by the Ministry of Science and Technology of China (No. 2019FY101602).
Abstract: Understanding understory seedling regeneration mechanisms is important for the sustainable development of temperate primary forests under increasingly intense climate warming. The poor regeneration of dominant tree species, however, is one of the biggest challenges these forests currently face. In particular, regeneration of the shade-intolerant Quercus mongolica seedling is difficult in primary forests, in contrast with the extreme abundance of understory seedlings in secondary forests; the mechanism behind this phenomenon is still unknown. This study used in-situ monitoring and a nursery-controlled experiment to investigate the survival rate, growth performance, and nonstructural carbohydrate (NSC) concentrations and pools of various organ tissues of seedlings for two consecutive years, and further analyzed the understory light availability and simulated the foliage carbon (C) gain in the secondary and primary forests. Results suggested that seedlings in the secondary forest had greater aboveground biomass allocation, height, and specific leaf area (SLA) in summer, which allowed the seedlings to survive longer during the canopy closure period. High light availability and positive C gain in early spring and late autumn were key factors affecting the growth and survival of understory seedlings in the secondary forest, whereas seedlings in the primary forest had negative annual carbon gain. Through the growing season, the total NSC concentrations of seedlings gradually decreased, whereas those of seedlings in the secondary forest increased significantly in autumn and were mainly stored in roots for winter consumption and the following summer's shade period. This was verified by the nursery-controlled experiment, in which simulated autumn-enhanced light availability improved the seedling survival rate and NSC pools. In conclusion, our results reveal the survival trade-off strategies of Quercus mongolica seedlings and highlight the necessity of high light availability during the spring and autumn phenological periods for shade-intolerant tree seedling recruitment.
Funding: Supported by the National Natural Science Foundation of China under Grants 62131012 and 61971261.
Abstract: With the development of the transportation industry, the effective guidance of aircraft in an emergency to prevent catastrophic accidents remains one of the top safety concerns. Undoubtedly, operational status data of the aircraft play an important role in the judgment and command of the Operational Control Center (OCC). However, how to transmit various operational status data from an abnormal aircraft back to the OCC in an emergency is still an open problem. In this paper, we propose a novel Telemetry, Tracking, and Command (TT&C) architecture named Collaborative TT&C (CoTT&C), based on a mega-constellation, to solve this problem. CoTT&C allows each satellite to help the abnormal aircraft by sharing TT&C resources when needed, realizing real-time and reliable aeronautical communication in an emergency. Specifically, we design a dynamic resource sharing mechanism for CoTT&C and model the mechanism as a single-leader-multi-follower Stackelberg game. Further, we derive a unique Nash equilibrium (NE) of the game in closed form. Simulation results demonstrate that the proposed resource sharing mechanism is effective, incentive compatible, fair, and reciprocal. We hope that our findings can shed light on future research on aeronautical communications in emergencies.
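As a toy illustration of the single-leader-multi-follower structure (not the paper's TT&C game or its closed-form NE): a leader posts a price for a shared resource, each follower best-responds with a demand, and the leader anticipates those responses before choosing. The linear-utility form and all names here are assumptions made for the sketch:

```python
def follower_demand(price, a, b):
    # Follower best response: consume until marginal utility a - b*d
    # drops to the posted price (never a negative demand).
    return max(0.0, (a - price) / b)

def leader_best_price(followers, prices):
    # Leader anticipates every follower's best response and picks the
    # revenue-maximizing price on the grid -- the Stackelberg outcome.
    return max(
        prices,
        key=lambda p: p * sum(follower_demand(p, a, b) for a, b in followers),
    )
```

With two identical followers (a=10, b=1), leader revenue is p·2(10-p), maximized at p=5, which the grid search recovers.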
Funding: Supported in part by the National Natural Science Foundation of China under Grants 62271268, 62071253, and 62371252; in part by the Jiangsu Provincial Key Research and Development Program under Grant BE2022800; and in part by the Jiangsu Provincial 333 Talent Project.
Abstract: In this paper, we explore a cooperative decode-and-forward (DF) relay network comprising a source, a relay, and a destination in the presence of an eavesdropper. To improve the physical-layer security of the relay system, we propose a jamming-aided decode-and-forward relay (JDFR) scheme combining the use of artificial noise with DF relaying, which requires two stages to transmit a packet. Specifically, in stage one, the source sends a confidential message to the relay while the destination acts as a friendly jammer and transmits artificial noise to confound the eavesdropper. In stage two, the relay forwards its re-encoded message to the destination while the source emits artificial noise to confuse the eavesdropper. In addition, we analyze the security-reliability tradeoff (SRT) performance of the proposed JDFR scheme, where security and reliability are evaluated by deriving the intercept probability (IP) and outage probability (OP), respectively. For comparison, the SRT of the traditional decode-and-forward relay (TDFR) scheme is also analyzed. Numerical results show that the SRT performance of the proposed JDFR scheme is better than that of the TDFR scheme. It is also shown that, for the JDFR scheme, better SRT performance can be obtained by optimal power allocation (OPA) between the friendly jammer and the user.
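The outage-probability side of such an SRT analysis is easy to check numerically. Below is a minimal Monte-Carlo sketch for a single Rayleigh fading link (unit-mean exponential channel gain); the SNR and rate values are arbitrary placeholders, not the paper's system model:

```python
import math
import random

def outage_probability(snr, rate, trials=20000, seed=1):
    """Monte-Carlo outage probability over a Rayleigh fading link:
    an outage occurs when instantaneous capacity falls below `rate`."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        gain = rng.expovariate(1.0)            # |h|^2 for Rayleigh fading
        if math.log2(1.0 + snr * gain) < rate:
            outages += 1
    return outages / trials
```

Raising the transmit SNR lowers the OP, which is the reliability half of the security-reliability tradeoff; the IP is estimated the same way on the eavesdropper's link.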
Funding: Supported in part by the National Key R&D Program of China under Grant 2020YFB1005900; the National Natural Science Foundation of China under Grant 62001220; the Jiangsu Provincial Key Research and Development Program under Grant BE2022068; the Natural Science Foundation of Jiangsu Province under Grant BK20200440; the Future Network Scientific Research Fund Project FNSRFP-2021-YB-03; and the Young Elite Scientist Sponsorship Program, China Association for Science and Technology.
Abstract: Collaborative edge computing is a promising direction for handling computation-intensive tasks in B5G wireless networks. However, edge computing servers (ECSs) from different operators may not trust each other, and thus incentives for collaboration cannot be guaranteed. In this paper, we propose a consortium blockchain-enabled collaborative edge computing framework, where users can offload computing tasks to ECSs from different operators. To minimize the total delay of users, we formulate a joint task offloading and resource optimization problem under the constraint of the computing capability of each ECS. We apply the Tammer decomposition method and heuristic optimization algorithms to obtain the optimal solution. Finally, we propose a reputation-based node selection approach to facilitate the consensus process, and also consider completion-time-based primary node selection to avoid monopolization by certain edge nodes and to enhance the security of the blockchain. Simulation results validate the effectiveness of the proposed algorithm; the total delay can be reduced by up to 40% compared with the non-cooperative case.
Funding: Supported by the Fundamental Research Funds for the Central Universities of NUAA (No. kfjj20200414) and the Natural Science Foundation of Jiangsu Province, China (No. BK20181289).
Abstract: In this paper, we optimize the spectral efficiency (SE) of an uplink massive multiple-input multiple-output (MIMO) system with imperfect channel state information (CSI) over a Rayleigh fading channel. The SE optimization problem is formulated under constraints on the maximum power and minimum rate of each user. We then develop a near-optimal power allocation (PA) scheme using the successive convex approximation (SCA) method, the Lagrange multiplier method, and the block coordinate descent (BCD) method; it obtains almost the same SE as the benchmark scheme with lower complexity. Since this scheme needs three layers of iteration, a suboptimal PA scheme is developed to further reduce the complexity, where the characteristic of massive MIMO (i.e., numerous receive antennas) is exploited for convex reformulation, and the rate constraint is converted into linear constraints. This suboptimal scheme needs only a single layer of iteration and thus has lower complexity than the near-optimal scheme. Finally, we jointly design the pilot power and data power to further improve performance, and propose a two-stage algorithm to obtain the joint PA. Simulation results verify the effectiveness of the proposed schemes, which achieve superior SE performance.
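While the paper's near-optimal scheme combines SCA, Lagrange multipliers, and BCD, the flavor of SE-maximizing power allocation under a sum-power budget is conveyed by the classic water-filling solution, sketched here with a bisection search for the water level. This is an illustrative simplification, not the proposed algorithm (it ignores the per-user rate constraints and imperfect CSI):

```python
def water_filling(noise, total_power, iters=60):
    """Classic water-filling: p_k = max(0, mu - n_k), with the water
    level mu found by bisection so that sum(p_k) == total_power."""
    lo, hi = min(noise), max(noise) + total_power
    for _ in range(iters):
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - n) for n in noise)
        if used > total_power:   # poured too much water: lower the level
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - n) for n in noise]
```

Cleaner channels (lower effective noise) receive more power, and very noisy channels may receive none, which matches the intuition behind rate-constrained PA schemes.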
Funding: National Natural Science Foundation of China (62072392).
Abstract: Crowdsourcing technology is widely recognized for its effectiveness in task scheduling and resource allocation. While traditional methods for task allocation can help reduce costs and improve efficiency, they may encounter challenges when dealing with abnormal data-flow nodes, leading to decreased allocation accuracy and efficiency. To address these issues, this study proposes a novel two-part invalid-detection task allocation framework. In the first step, an anomaly detection model is developed using a dynamic self-attentive GAN to identify anomalous data. Compared to the baseline method, the model achieves an approximately 4% increase in F1 value on the public dataset. In the second step of the framework, task allocation is modeled using a two-part (bipartite) graph matching method. This phase introduces a P-queue KM algorithm that implements a more efficient optimization strategy, improving allocation efficiency by approximately 23.83% compared to the baseline method. Empirical results confirm the effectiveness of the proposed framework in detecting abnormal data nodes, enhancing allocation precision, and achieving efficient allocation.
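The KM (Kuhn-Munkres) algorithm solves minimum-cost bipartite matching in O(n³); for tiny instances the same optimum can be found exhaustively, which makes the objective easy to see. The cost matrix below is invented for illustration, and the paper's P-queue variant adds its own optimization strategy on top:

```python
from itertools import permutations

def min_cost_assignment(cost):
    """Exhaustive min-cost worker-to-task assignment (small instances only);
    the KM algorithm finds the same optimum in polynomial time."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[w][t] for w, t in enumerate(perm))  # total matching cost
        if c < best_cost:
            best_cost, best_perm = c, perm
    return best_perm, best_cost
```

Here `perm[w]` is the task given to worker `w`; the brute force serves as a correctness oracle when testing a faster KM implementation.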
Funding: Supported in part by the National Key Research and Development Program of China (Grant No. 2020YFA0711301); in part by the National Natural Science Foundation of China (Grant Nos. 62341110 and U22A2002); and in part by the Suzhou Science and Technology Project.
Abstract: Networked robots can perceive their surroundings, interact with each other or with humans, and make decisions to accomplish specified tasks in remote/hazardous/complex environments. Satellite-unmanned aerial vehicle (UAV) networks can support such robots by providing on-demand communication services. However, under the traditional open-loop communication paradigm, network resources are usually divided into user-wise, mostly independent links, ignoring the task-level dependency of robot collaboration. Thus, it is imperative to develop a new communication paradigm, one that takes into account the high-level content and value behind the traffic, to facilitate multi-robot operation. Inspired by Wiener's Cybernetics theory, this article explores a closed-loop communication paradigm for the robot-oriented satellite-UAV network. This paradigm handles group-wise structured links, so as to allocate resources in a task-oriented manner. It can also exploit the mobility of robots to liberate the network from full coverage, enabling new orchestration between network serving and positive mobility control of robots. Moreover, the integration of sensing, communications, computing, and control would enlarge the benefits of this new paradigm. We present a case study on joint mobile edge computing (MEC) offloading and mobility control of robots, and finally outline potential challenges and open issues.
Funding: Supported by the Fundamental Research Program of Guangdong, China, under Grants 2020B1515310023 and 2023A1515011281, and in part by the National Natural Science Foundation of China under Grant 61571005.
Abstract: With the rapid development of Network Function Virtualization (NFV), the problem of low resource utilization in traditional data centers is gradually being addressed. However, existing research does not optimize both local and global allocation of resources in data centers. Hence, we propose an adaptive hybrid optimization strategy that combines dynamic programming and neural networks to improve resource utilization and service quality in data centers. Our approach encompasses a service function chain simulation generator, a parallel-architecture service system, a dynamic programming strategy for maximizing the utilization of local server resources, a neural network for predicting the global utilization rate of resources, and a global resource optimization strategy for bottleneck and redundant resources. With the implementation of our local and global resource allocation strategies, the system performance is significantly optimized in simulation.
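As a hedged illustration of the local dynamic-programming step, packing service functions onto a single server under a capacity budget maps naturally onto the standard 0/1 knapsack recurrence. The single-dimension capacity and the values/weights below are simplifying assumptions, not the paper's model:

```python
def knapsack(values, weights, capacity):
    """DP over residual capacity: dp[c] = best total value achievable
    with capacity c. Models packing service functions onto one server
    to maximize its resource utilization."""
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacity in reverse so each function is placed at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

The global step in the paper then reallocates across servers using the neural network's utilization predictions; this sketch covers only the per-server subproblem.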
Funding: Project supported by the Natural Science Foundation of Jilin Province of China (Grant No. 20210101417JC).
Abstract: Quantum key distribution (QKD) is a technology that can resist the threat posed by quantum computers to existing conventional cryptographic protocols. However, due to the stringent requirements of the quantum key generation environment, the generated quantum keys are considered valuable, and the slow key generation rate conflicts with the high-speed data transmission of traditional optical networks. In this paper, for a QKD network with a trusted relay, which is mainly based on point-to-point quantum keys and has complex changes in network resources, we aim to allocate resources reasonably for data packet distribution. First, we formulate a linear programming constraint model for the key resource allocation (KRA) problem based on time-slot scheduling. Second, we propose a new scheduling scheme based on graded key security requirements (GKSR) and a new micro-log key storage algorithm for effective storage and management of key resources. Finally, we propose a key resource consumption (KRC) routing optimization algorithm to properly allocate time slots, routes, and key resources. Simulation results show that the proposed scheme significantly improves the key distribution success rate and key resource utilization rate, among other metrics.
Funding: Supported by the National Natural Science Foundation of China (No. 62071354), the Key Research and Development Program of Shaanxi (No. 2022ZDLGY05-08), and the ISN State Key Laboratory.
Abstract: To meet communication services with diverse requirements, dynamic resource allocation has become increasingly important. In this paper, we consider multi-slot and multi-user resource allocation (MSMU-RA) in a downlink cellular scenario, with the aim of maximizing system spectral efficiency while guaranteeing user fairness. We first model the MSMU-RA problem as a dual-sequence decision-making process and then solve it with a novel Transformer-based deep reinforcement learning (TDRL) approach. Specifically, the proposed TDRL approach rests on two aspects: 1) to adapt to the dynamic wireless environment, the proximal policy optimization (PPO) algorithm is used to optimize the multi-slot RA strategy; 2) to avoid co-channel interference, the Transformer-based PPO algorithm is presented to obtain the optimal multi-user RA scheme by exploring the mapping between the user sequence and the resource sequence. Experimental results show that: i) the proposed approach outperforms both traditional and DRL methods in spectral efficiency and user fairness; ii) the proposed algorithm is superior to other DRL approaches in terms of convergence speed and generalization performance.
Funding: Supported by the National Natural Science Foundation of China (62063011, 62273169, 61922037, 61873115), Yunnan Fundamental Research Projects (202001AV070001), and Yunnan Major Scientific and Technological Projects (202202AG050002), and partially supported by the Open Foundation of the Key Laboratory in Software Engineering of Yunnan Province (2020SE502).
Abstract: A modular system can change its physical structure by self-assembly and self-disassembly between modules to dynamically adapt to task and environmental requirements. Recognizing the adaptive capability of modular systems, we introduce a modular reconfigurable flight array (MRFA) to pursue a multi-function aircraft suited to diverse tasks and requirements, and investigate the attitude control and control allocation problems using the MRFA as a platform. First, considering the variable and irregular topological configuration of the modular array, a center-of-mass-independent flight array dynamics model is proposed to allow control allocation in over-actuated situations. Second, to achieve stable, fast, and accurate attitude tracking of the MRFA, a fixed-time convergent sliding mode controller with state-dependent variable exponent coefficients is proposed to ensure a fast convergence rate both away from and near the system equilibrium point without encountering singularity. The controller is shown to retain its fixed-time convergence characteristics even in the presence of external disturbances. Finally, simulation results demonstrate the effectiveness of the proposed modeling and control strategies.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61971057).
Abstract: In this paper, we propose a two-way deep reinforcement learning (DRL)-based resource allocation algorithm, which solves the resource allocation problem in a cognitive downlink network based on the underlay mode. Secondary users (SUs) in the cognitive network are multiplexed by a new Power Domain Sparse Code Multiple Access (PD-SCMA) scheme, and the physical resources of the cognitive base station are virtualized into two types of slices: an enhanced mobile broadband (eMBB) slice and an ultra-reliable low-latency communication (URLLC) slice. We design a Double Deep Q-Network (DDQN) to output the optimal codebook assignment scheme and simultaneously use a Deep Deterministic Policy Gradient (DDPG) network to output the optimal power allocation scheme. The objective is to jointly optimize the spectral efficiency of the system and the quality of service (QoS) of the SUs. Simulation results show that the proposed algorithm outperforms the CNDDQN algorithm and the modified JEERA algorithm in terms of spectral efficiency and QoS satisfaction. Additionally, compared with Power Domain Non-orthogonal Multiple Access (PD-NOMA) slices and Sparse Code Multiple Access (SCMA) slices, the PD-SCMA slices can dramatically enhance spectral efficiency and increase the number of accessible users.
Funding: Supported by the Key Research and Development Project in Anhui Province of China (Grant No. 202304a05020059), the Fundamental Research Funds for the Central Universities of China (Grant No. PA2023GDSK0055), and the Project of Anhui Province Economic and Information Bureau (Grant No. JB20099).
Abstract: Users and edge servers are not fully mutually trusted in mobile edge computing (MEC), and hence blockchain can be introduced to provide trustable MEC. In blockchain-based MEC, each edge server functions as a node in both MEC and the blockchain, processing users' tasks and then uploading the task-related information to the blockchain. That is, each edge server runs both users' offloaded tasks and blockchain tasks simultaneously. Note that there is a trade-off between the resource allocation for MEC and blockchain tasks. Therefore, the allocation of the resources of edge servers to the blockchain and MEC is crucial for the processing delay of blockchain-based MEC. Most of the existing research tackles the problem of resource allocation in either blockchain or MEC alone, which leads to unfavorable performance of the blockchain-based MEC system. In this paper, we study how to allocate the computing resources of edge servers to MEC and blockchain tasks with the aim of minimizing the total system processing delay. For this problem, we propose a computing resource Allocation algorithm for Blockchain-based MEC (ABM), which utilizes Slater's condition, the Karush-Kuhn-Tucker (KKT) conditions, partial derivatives of the Lagrangian function, and the subgradient projection method to obtain the solution. Simulation results show that ABM converges and effectively reduces the processing delay of blockchain-based MEC.
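The KKT machinery behind such delay-minimizing allocation can be previewed on a one-dimensional toy: split a server's capacity C between an MEC load a and a blockchain load b so that the total delay a/x + b/(C-x) is minimized. Stationarity yields a closed form. This toy objective is our assumption for illustration, not the paper's delay model:

```python
import math

def split_capacity(a, b, C):
    """Minimize D(x) = a/x + b/(C - x) over 0 < x < C.
    Setting D'(x) = -a/x^2 + b/(C - x)^2 = 0 (the KKT stationarity
    condition for this interior problem) gives:
        x* = C * sqrt(a) / (sqrt(a) + sqrt(b))
    """
    return C * math.sqrt(a) / (math.sqrt(a) + math.sqrt(b))
```

The heavier workload attracts the larger capacity share, in proportion to the square roots of the loads; ABM solves the multi-server analogue with subgradient projection.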
Funding: This research was generously supported by the Tianjin Education Commission Scientific Research Program (2020KJ056), China, and the Tianjin Science and Technology Planning Project (22YDTPJC00970), China. The authors would like to express their sincere appreciation for all support provided.
Abstract: A real-time adaptive role allocation method based on reinforcement learning is proposed to improve human-robot cooperation performance for a curtain wall installation task. This method breaks with the traditional idea that the robot is regarded as the follower, or that cooperation only switches between leader and follower. In this paper, a self-learning method is proposed that can dynamically adapt and continuously adjust the initiative weight of the robot according to changes in the task. First, a physical human-robot cooperation model including the role factor is built. Then, a reinforcement learning model that can adjust the role factor in real time is established, and reward and action models are designed. The role factor can be adjusted continuously according to the comprehensive performance of the human-robot interaction force and the robot's jerk during repeated installation. Finally, the role adjustment rule established above continuously improves the comprehensive performance. Experiments on dynamic role allocation and on the effect of the performance weighting coefficient on the result have been carried out. The results show that the proposed method can realize role adaptation and achieve the dual optimization goal of reducing the sum of the cooperator's force and the robot's jerk.
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2020YFB1807700, and in part by the National Science Foundation of China under Grant U200120122.
Abstract: As a mature distributed machine learning paradigm, federated learning enables wireless edge devices to collaboratively train a shared AI model via stochastic gradient descent (SGD). However, devices need to upload high-dimensional stochastic gradients to the edge server during training, which causes a severe communication bottleneck. To address this problem, we compress the communication by sparsifying and quantizing the stochastic gradients of the edge devices. We first derive a closed form of the communication compression in terms of the sparsification and quantization factors. Then, the convergence rate of this communication-compressed system is analyzed and several insights are obtained. Finally, we formulate and solve the quantization resource allocation problem with the goal of minimizing the convergence upper bound, under the constraint of multiple-access channel capacity. Simulations show that the proposed scheme outperforms the benchmarks.
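A minimal sketch of the sparsify-then-quantize step: keep the k largest-magnitude gradient entries, then uniformly quantize the survivors. The exact operators (and the closed-form compression analysis) in the paper may differ; this is a generic illustration:

```python
def compress_gradient(grad, k, levels):
    """Top-k sparsification followed by uniform quantization of the
    surviving entries -- a generic sketch of gradient compression."""
    # Keep the k largest-magnitude entries, zero the rest.
    idx = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k]
    kept = {i: grad[i] for i in idx}
    # Uniform quantization with `levels` steps up to the largest magnitude.
    vmax = max(abs(v) for v in kept.values())
    step = vmax / levels
    compressed = [0.0] * len(grad)
    for i, v in kept.items():
        compressed[i] = round(v / step) * step
    return compressed
```

Only k quantized values (plus indices) need to be uplinked instead of the full dense gradient; the paper's resource allocation then tunes the sparsification and quantization factors per device against the channel capacity.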
Funding: Supported by the National Key Research and Development Program of China (2018YFC1504502).
Abstract: Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decision and resource allocation in MEC-enabled STNs in this paper. The problem of minimizing the average sum task completion delay of all IoT devices over all time periods is formulated. We decompose this optimization problem into a task offloading decision problem and a computing resource allocation problem. A joint optimization scheme of offloading decision and resource allocation is then proposed, which consists of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme performs better than the baseline schemes.
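A plain UCB1 sketch shows the exploration bonus that drives such online offloading decisions. Here the "arms" stand for candidate offloading targets with unknown Bernoulli rewards; the paper's device-cooperation-aided variant additionally shares observations across devices, which this sketch omits:

```python
import math
import random

def ucb1(arm_means, rounds, seed=0):
    """UCB1: after trying each arm once, repeatedly pick the arm that
    maximizes (empirical mean) + sqrt(2 ln t / n_i).  Returns pull counts."""
    rng = random.Random(seed)
    n = [0] * len(arm_means)       # pulls per arm
    s = [0.0] * len(arm_means)     # cumulative reward per arm
    for t in range(1, rounds + 1):
        if t <= len(arm_means):
            a = t - 1              # initialization: play each arm once
        else:
            a = max(range(len(arm_means)),
                    key=lambda i: s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]))
        reward = 1.0 if rng.random() < arm_means[a] else 0.0
        n[a] += 1
        s[a] += reward
    return n
```

Over time the pull counts concentrate on the best arm while the confidence bonus keeps occasionally re-checking the others, which is exactly the exploration-exploitation balance needed when network state information is unknown.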
Funding: This work was supported in part by the Open Research Fund of the National Mobile Communications Research Laboratory, Southeast University (No. 2023D11); in part by the Program for Science & Technology Innovation Talents in Universities of Henan Province (23HASTIT019); in part by the Natural Science Foundation of Henan Province (20232300421097); in part by the China Postdoctoral Science Foundation (2020M682345); and in part by the Henan Postdoctoral Foundation (202001015).
Abstract: In this paper, we investigate an IRS-aided user cooperation (UC) scheme in millimeter-wave (mmWave) wireless-powered sensor networks (WPSN), where two single-antenna users are first wirelessly powered in the wireless energy transfer (WET) phase and then cooperatively transmit information to a hybrid access point (AP) in the wireless information transmission (WIT) phase; the IRS is deployed to enhance the system performance of both the WET and the WIT. We maximize the weighted sum rate by jointly optimizing the transmit time slots, power allocations, and the phase shifts of the IRS. Due to the non-convexity of the original problem, a semidefinite programming relaxation-based approach is proposed to convert the formulated problem into a convex optimization framework, which can obtain the globally optimal solution. Simulation results demonstrate that the weighted sum throughput of the proposed UC scheme outperforms the non-UC scheme whether equipped with an IRS or not.