Growth of gallium nitride (GaN) inverted pyramids on c-plane sapphire substrates is beneficial for fabricating novel devices, as it forms semipolar facets. In this work, GaN inverted pyramids are grown directly on c-plane patterned sapphire substrates (PSS) by metal-organic vapor phase epitaxy (MOVPE). The influence of growth conditions on the surface morphology is studied experimentally and explained by Wulff constructions. Competition among the growth rates of the {0001}, {1011}, and {1122} facets produces the various surface morphologies of GaN. A higher growth temperature of 985 °C and a lower V/III ratio of 25 can expand the area of the {} facets in the GaN inverted pyramids. Conversely, GaN inverted pyramids with almost pure {} facets are obtained by using a lower growth temperature of 930 °C, a higher V/III ratio of 100, and PSS with the pattern arrangement perpendicular to the substrate primary flat.
This paper investigates an age of information (AoI)-based multi-user mobile edge computing (MEC) network with a partial offloading mode. The weighted sum AoI (WSA) is first analyzed and derived, and a WSA minimization problem is then formulated by jointly optimizing user scheduling and data assignment. Because the WSA has no analytic expression in the optimization variables and future network information is unknowable, the problem cannot be solved with known methods. Therefore, an online Joint Partial Offloading and User Scheduling Optimization (JPO-USO) algorithm is proposed, which transforms the original problem into a single-slot data assignment sub-problem and a single-slot user scheduling sub-problem and solves the two sub-problems separately. We analyze the computational complexity of the JPO-USO algorithm, which is O(N), with N being the number of users. Simulation results show that the proposed JPO-USO algorithm achieves better AoI performance than various baseline methods. They also show that both a user's data assignment and its AoI should be taken into account jointly when scheduling users to decrease the system WSA.
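As a rough illustration of the weighted-sum-AoI objective in the abstract above, the sketch below tracks the WSA of a slotted system. The per-user weights and the greedy "serve the largest weighted age" rule are illustrative assumptions, not the paper's JPO-USO algorithm.

```python
# Sketch: evolution of the weighted sum AoI (WSA) in a slotted system.
# Each slot, the scheduled user's age resets to 1; all others age by 1.
# The weights and the greedy scheduling rule are illustrative only.

def wsa(ages, weights):
    """Weighted sum of the users' current ages of information."""
    return sum(w * a for w, a in zip(weights, ages))

def simulate(weights, slots):
    ages = [1] * len(weights)
    history = []
    for _ in range(slots):
        # Greedy baseline: schedule the user with the largest weighted age.
        k = max(range(len(ages)), key=lambda i: weights[i] * ages[i])
        ages = [1 if i == k else a + 1 for i, a in enumerate(ages)]
        history.append(wsa(ages, weights))
    return history

history = simulate([0.5, 0.3, 0.2], 20)
print(history[-1])
```

A real scheduler would also decide how much of each user's data to offload, which is exactly the joint optimization the paper addresses.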
In the upcoming large-scale Internet of Things (IoT), it is increasingly challenging to defend against malicious traffic, owing to the heterogeneity of IoT devices and the diversity of IoT communication protocols. In this paper, we propose a semi-supervised learning-based approach to detect malicious traffic at the access side. It overcomes the resource bottleneck of traditional malicious traffic defenders deployed at the victim side and requires no labeled traffic data for model training. Specifically, we design a coarse-grained behavior model of IoT devices by self-supervised learning with unlabeled traffic data. We then fine-tune this model to improve its accuracy in malicious traffic detection by adopting a transfer learning method with a small amount of labeled data. Experimental results show that our method achieves an accuracy of 99.52% and an F1-score of 99.52% with only 1% of the labeled training data on the CICDDoS2019 dataset. Moreover, with 1% of the training data, our method outperforms state-of-the-art supervised learning-based methods in accuracy, precision, recall, and F1-score.
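The abstract above reports accuracy, precision, recall, and F1-score. A minimal sketch of how these four metrics are computed from binary predictions (the labels below are toy values, not CICDDoS2019 data):

```python
# Sketch: the standard binary-classification metrics quoted in the
# abstract, computed from a confusion matrix. Toy labels only.

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

y_true = [1, 1, 1, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1]
print(binary_metrics(y_true, y_pred))
```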
In this paper, we investigate the minimization of the age of information (AoI), a metric that measures information freshness, at the network edge with unreliable wireless communications. In particular, we consider a set of users transmitting status updates, which each user collects randomly over time, to an edge server through unreliable orthogonal channels. This raises a natural question: with random status update arrivals and obscure channel conditions, can we devise an intelligent scheduling policy that matches users and channels so as to stabilize the queues of all users while minimizing the average AoI? To answer it, we define a bipartite graph and formulate a dynamic edge activation problem with stability constraints. We then propose an online matching-while-learning algorithm (MatL) and discuss its implementation for wireless scheduling. Finally, simulation results demonstrate that MatL reliably learns the channel states and manages the users' buffers to keep information fresher at the edge.
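The matching step underlying the abstract above can be illustrated with a brute-force one-slot user-to-channel assignment, assuming the channel success probabilities are known. This is not the MatL algorithm (which learns those probabilities online); it only shows the bipartite matching objective.

```python
# Sketch: one-slot user-to-channel matching that maximizes the expected
# age reduction, assuming KNOWN channel success probabilities. The
# paper's MatL algorithm learns these online; brute force is fine here
# because the toy instance is tiny.
from itertools import permutations

def best_matching(ages, success_prob):
    """success_prob[u][c]: probability user u's update survives channel c.
    Returns (value, assignment) where assignment[u] is user u's channel."""
    n = len(ages)
    best_val, best_assign = -1.0, None
    for perm in permutations(range(n)):  # channel perm[u] for user u
        # Expected age reduction: a successful delivery resets user u's age.
        val = sum(ages[u] * success_prob[u][perm[u]] for u in range(n))
        if val > best_val:
            best_val, best_assign = val, list(perm)
    return best_val, best_assign

ages = [4, 2]
p = [[0.9, 0.5],
     [0.8, 0.3]]
val, assign = best_matching(ages, p)
print(val, assign)
```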
Providing an alternative PNT (positioning, navigation, and timing) service to GNSS-challenged users will be an important function of next-generation NGSO broadband satellite communication systems. Herein, a packet-based PNT service architecture for NGSO broadband systems is proposed, in which a primary satellite and selected assistant satellites work together to provide PNT service to requesting users. Its positioning performance bounds are formulated mathematically by rigorously analyzing the bounds imposed by different waveforms. Simulations on different configurations of Walker Delta MEO constellations and Walker Star LEO constellations corroborate the analysis and reveal the following: (1) both MEO and LEO constellations achieve sub-meter-level positioning precision given enough satellites; (2) compared with the GNSS Doppler-based velocity estimation method, the position-advance-based velocity estimation algorithm is more precise and better suited to PNT service in NGSO broadband systems; and (3) to serve users in GNSS-challenged environments, the primary satellite and each assistant satellite need only ~0.1‰ of the time of one downlink beam.
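The abstract above contrasts Doppler-based velocity estimation with position-advance-based estimation. The paper's algorithm is not reproduced here; the sketch below only illustrates the basic idea of deriving velocity from successive position fixes by finite differences.

```python
# Sketch: estimating velocity from successive position fixes ("position
# advance") by finite differences, in contrast to Doppler-based
# estimation. The paper's actual algorithm is more elaborate.

def velocity_from_positions(positions, dt):
    """positions: list of (x, y, z) fixes taken every dt seconds.
    Returns the per-interval velocity vectors."""
    vels = []
    for p0, p1 in zip(positions, positions[1:]):
        vels.append(tuple((b - a) / dt for a, b in zip(p0, p1)))
    return vels

fixes = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (20.0, 5.0, 0.0)]
vels = velocity_from_positions(fixes, dt=2.0)
print(vels)  # [(5.0, 0.0, 0.0), (5.0, 2.5, 0.0)]
```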
With the development of the transportation industry, effective guidance of aircraft in an emergency to prevent catastrophic accidents remains one of the top safety concerns. Undoubtedly, the operational status data of an aircraft play an important role in the judgment and command of the Operational Control Center (OCC). However, how to transmit various operational status data from an abnormal aircraft back to the OCC in an emergency is still an open problem. In this paper, we propose a novel Telemetry, Tracking, and Command (TT&C) architecture named Collaborative TT&C (CoTT&C), based on a mega-constellation, to solve this problem. CoTT&C allows each satellite to help the abnormal aircraft by sharing TT&C resources when needed, realizing real-time and reliable aeronautical communication in an emergency. Specifically, we design a dynamic resource sharing mechanism for CoTT&C and model it as a single-leader multi-follower Stackelberg game. Further, we give a unique Nash equilibrium (NE) of the game in closed form. Simulation results demonstrate that the proposed resource sharing mechanism is effective, incentive-compatible, fair, and reciprocal. We hope that our findings shed light on future research on aeronautical communications in emergencies.
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are hungry for computing power, which conventional computing hardware can barely satisfy. In the post-Moore era, the increase in computing power brought by shrinking CMOS feature sizes in very-large-scale integrated circuits (VLSIC) struggles to meet the growing demand of AI. To address this issue, technical approaches such as neuromorphic computing attract great attention because they break with the von Neumann architecture and execute AI algorithms with far more parallelism and energy efficiency. Inspired by the architecture of human neural networks, neuromorphic computing hardware is built from novel artificial neurons constructed with new materials or devices. Although deploying a training process in a neuromorphic architecture such as a spiking neural network (SNN) is relatively difficult, development in this field has incubated promising technologies such as in-sensor computing, which opens new opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
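The SNNs surveyed in the review above are built from spiking neurons. A minimal sketch of the textbook leaky integrate-and-fire (LIF) neuron, the most common such building block (the leak and threshold values here are illustrative, not taken from the review):

```python
# Sketch: a discrete-time leaky integrate-and-fire (LIF) neuron, the
# textbook building block of spiking neural networks (SNNs).
# Parameter values are illustrative only.

def lif(inputs, leak=0.9, threshold=1.0):
    """Returns the output spike train (0/1) for a sequence of input currents."""
    v, spikes = 0.0, []
    for i in inputs:
        v = leak * v + i          # leaky integration of the input current
        if v >= threshold:        # fire and reset once the threshold is crossed
            spikes.append(1)
            v = 0.0
        else:
            spikes.append(0)
    return spikes

print(lif([0.4, 0.4, 0.4, 0.0, 1.2]))  # [0, 0, 1, 0, 1]
```

Such neurons communicate with sparse binary events rather than dense activations, which is the source of the energy-efficiency claims for neuromorphic hardware.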
The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive structure for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper aims to fill this gap by introducing the concept of "data components." It proposes a graph-theoretic representation model that gives a clear mathematical definition and demonstrates the superiority of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that provides a way to calculate the information entropy of data components and establish their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
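The measurement model in the abstract above builds on information entropy. The paper's exact model is richer than this, but a sketch of the underlying Shannon entropy computation over an empirical value distribution:

```python
# Sketch: Shannon entropy (in bits) of a data element's empirical value
# distribution, the basic quantity an information measurement model for
# "data components" would build on. The paper's model is richer.
import math
from collections import Counter

def shannon_entropy(values):
    """Entropy in bits of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(shannon_entropy(["a", "a", "b", "b"]))  # uniform over 2 symbols -> 1.0
```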
Channel prediction is critical for addressing the channel aging issue in mobile scenarios. Existing channel prediction techniques are mainly designed for discrete channel prediction, which can only predict the future channel in a fixed time slot per frame, while the other intra-frame channels are usually recovered by interpolation. However, these approaches suffer from serious interpolation loss, especially for mobile millimeter-wave communications. To solve this challenging problem, we propose a tensor neural ordinary differential equation (TN-ODE)-based continuous-time channel prediction scheme to realize direct prediction of intra-frame channels. Specifically, inspired by the recently developed continuous mapping model named the neural ODE in machine learning, we first utilize the neural ODE model to predict future continuous-time channels. To improve prediction accuracy and reduce computational complexity, we then propose the TN-ODE scheme, which learns the structural characteristics of the high-dimensional channel through a low-dimensional learnable transform. Simulation results show that the proposed scheme achieves higher intra-frame channel prediction accuracy than existing schemes.
This paper considers the frame-asynchronous grant-free rateless multiple access (FA-GF-RMA) scenario, where users can initiate access at any symbol time and use shared channel resources to transmit data to the base station. Rateless coding is introduced to enhance the reliability of the system. Previous literature has shown that FA-GF-RMA can achieve lower access delay than frame-synchronous grant-free rateless multiple access (FS-GF-RMA), with extreme reliability enabled by rateless coding. To support FA-GF-RMA in more practical scenarios, a joint activity and data detection (JADD) scheme is proposed. Exploiting the sporadic nature of the traffic, approximate message passing (AMP) is used to estimate the transmitted signal matrix. Then, to determine the packet start points, a maximum a posteriori (MAP) estimation problem is solved based on the recovered transmitted signals, leveraging the intrinsic power pattern in the codeword. An iterative power-pattern-aided AMP algorithm is devised to enhance the estimation performance of AMP. Simulation results verify that the proposed solution achieves delay performance comparable to the performance limit of FA-GF-RMA.
Aspect-Based Sentiment Analysis (ABSA) is a fundamental area of research in Natural Language Processing (NLP). Within ABSA, Aspect Sentiment Quad Prediction (ASQP) aims to accurately identify sentiment quadruplets in target sentences: aspect terms, aspect categories, corresponding opinion terms, and sentiment polarity. However, most existing research has focused on English datasets; consequently, while ASQP has seen significant progress in English, the Chinese ASQP task has remained relatively stagnant. Drawing inspiration from methods applied to English ASQP, we propose Chinese generation templates and employ prompt-based instruction learning to enhance the model's understanding of the task, ultimately improving ASQP performance in the Chinese context. Under the same pre-training model configuration, our approach achieved a 5.79% improvement in F1 score over the previously leading method; with a larger model and fewer training parameters, the F1 score improved by 8.14%. Additionally, we propose a novel evaluation metric based on the characteristics of generative models that better reflects model generalization. Experimental results validate the effectiveness of our approach.
Networked robots can perceive their surroundings, interact with each other or with humans, and make decisions to accomplish specified tasks in remote, hazardous, or complex environments. Satellite-unmanned aerial vehicle (UAV) networks can support such robots by providing on-demand communication services. However, under the traditional open-loop communication paradigm, network resources are usually divided into mostly independent per-user links, ignoring the task-level dependency of robot collaboration. It is therefore imperative to develop a new communication paradigm that takes into account the high-level content and value behind the data, to facilitate multi-robot operation. Inspired by Wiener's Cybernetics theory, this article explores a closed-loop communication paradigm for the robot-oriented satellite-UAV network. This paradigm handles group-wise structured links, allocating resources in a task-oriented manner. It can also exploit the mobility of robots to liberate the network from full coverage, enabling new orchestration between network serving and positive mobility control of robots. Moreover, the integration of sensing, communications, computing, and control would enlarge the benefit of this new paradigm. We present a case study of joint mobile edge computing (MEC) offloading and mobility control of robots, and finally outline potential challenges and open issues.
Human mobility prediction is important for many applications. However, training an accurate mobility prediction model requires a large set of human trajectories, where privacy becomes an important problem. The rise of federated learning provides a promising solution: mobile devices collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud. However, existing federated learning-based methods either provide no privacy guarantees or remain vulnerable to privacy leakage. In this paper, we combine data perturbation and model perturbation mechanisms and propose a privacy-preserving mobility prediction algorithm, adding noise to both the transmitted model and the raw data to protect user privacy while preserving the mobility prediction performance. Extensive experimental results show that our method significantly outperforms the existing state-of-the-art mobility prediction method in defensive performance against practical attacks while retaining comparable mobility prediction performance, demonstrating its effectiveness.
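The model perturbation mentioned in the abstract above amounts to adding noise to a model update before transmission. A minimal sketch of that step with Gaussian noise; the noise scale here is an illustrative parameter, not a calibrated privacy budget, and the paper's combined data-and-model scheme is more involved.

```python
# Sketch: perturbing a model update with Gaussian noise before it is
# transmitted, the basic mechanism behind model perturbation in
# privacy-preserving federated learning. sigma is illustrative only,
# not a calibrated differential-privacy parameter.
import random

def perturb(update, sigma, seed=None):
    """Add i.i.d. zero-mean Gaussian noise to each model weight."""
    rng = random.Random(seed)
    return [w + rng.gauss(0.0, sigma) for w in update]

update = [0.5, -0.2, 0.1]
noisy = perturb(update, sigma=0.05, seed=42)
print(noisy)
```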
Electroencephalography (EEG) analysis extracts critical information from brain signals, enabling brain disease diagnosis and providing fundamental support for brain-computer interfaces. However, performing energy-efficient artificial intelligence analysis of EEG signals poses significant challenges for electronic processors on edge computing devices, especially with large neural network models. Herein, we propose an EEG opto-processor based on diffractive photonic computing units (DPUs) to process extracranial and intracranial EEG signals effectively and to detect epileptic seizures. The signals of the EEG channels within a one-second time window are optically encoded as inputs to the constructed diffractive neural networks for classification, which monitors the brain state to identify symptoms of an epileptic seizure. We developed both free-space and integrated DPUs as edge computing systems and demonstrated their application to real-time epileptic seizure detection on benchmark datasets, namely the Children's Hospital Boston (CHB)-Massachusetts Institute of Technology (MIT) extracranial and the Epilepsy-iEEG-Multicenter intracranial EEG datasets, with excellent computing performance. Together with the channel selection mechanism, both numerical evaluations and experimental results validated the sufficiently high classification accuracy of the proposed opto-processors for supporting clinical diagnosis. Our study opens a new research direction for utilizing photonic computing techniques to process large-scale EEG signals and promotes broader applications.
Anticipating others' actions is innate and essential for humans to navigate and interact well with others in dense crowds. This ability is urgently required by unmanned systems such as service robots and self-driving cars. However, existing solutions struggle to predict pedestrian anticipation accurately, because the influence of group-related social behaviors has not been well considered. While group relationships and group interactions are ubiquitous and significantly influence pedestrian anticipation, their influence is diverse and subtle, making it difficult to quantify explicitly. Here, we propose the group interaction field (GIF), a novel group-aware representation that quantifies pedestrian anticipation as a probability field of pedestrians' future locations and attention orientations. An end-to-end neural network, GIFNet, is tailored to estimate the GIF from explicit multidimensional observations. GIFNet quantifies the influence of group behaviors by formulating a group interaction graph with propagation and graph attention that adapts to the group size and dynamic interaction states. The experimental results show that the GIF effectively represents the change in pedestrians' anticipation under the prominent impact of group behaviors and accurately predicts pedestrians' future states. Moreover, the GIF helps explain various predictions of pedestrians' behavior in different social states. The proposed GIF will eventually allow unmanned systems to work in a human-like manner and comply with social norms, thereby promoting harmonious human-machine relationships.
With the rapid development of unmanned aerial vehicles (UAVs), more and more UAVs access satellite networks for data transmission. To improve spectral efficiency, non-orthogonal multiple access (NOMA) is adopted to integrate UAVs into the satellite network, where multiple satellites cooperatively serve the UAVs and mobile terminals using the Ku band and above. Taking into account rain fading and fading correlation, the outage performance is first obtained analytically for fixed power allocation and then calculated efficiently by the proposed power allocation algorithm, which guarantees user fairness. Simulation results verify the outage performance analysis and show the performance improvement of the proposed power allocation scheme.
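The quantity analyzed in the abstract above is an outage probability. As a generic stand-in for the paper's correlated rain-fading model, the sketch below estimates the outage probability of a single Rayleigh-faded link by Monte Carlo simulation; all parameter values are illustrative.

```python
# Sketch: Monte Carlo estimate of an outage probability. Rayleigh
# fading with a plain SNR threshold is a generic stand-in for the
# paper's correlated rain-fading model; parameters are illustrative.
import random

def outage_probability(snr_avg, snr_threshold, trials, seed=0):
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        # Rayleigh fading: the channel power gain is exponentially distributed.
        gain = rng.expovariate(1.0)
        if snr_avg * gain < snr_threshold:
            outages += 1
    return outages / trials

p = outage_probability(snr_avg=10.0, snr_threshold=2.0, trials=100_000)
print(p)  # analytic value: 1 - exp(-2/10) ~ 0.181
```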
There is a contradiction between high processing complexity and limited processing resources when turbo codes are used on an on-board processing (OBP) satellite platform. To solve this problem, this paper proposes a partial iterative decoding method for on-board application, in which the satellite carries out only a limited number of iterations according to the on-board resource limitation and the throughput requirements. In this method, the soft information of the parity bits, which is not obtained individually in a conventional turbo decoder, is encoded and forwarded along with that of the information bits. To save downlink transmit power, the soft information is limited and normalized before forwarding. The iteration number and the limiter parameters are optimized with the help of an EXIT chart and numerical analysis, respectively. Simulation results show that the proposed method effectively decreases the complexity of on-board processing while achieving most of the decoding gain.
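The limiting-and-normalizing step described in the abstract above can be sketched directly: each log-likelihood ratio is clipped to a limiter range and scaled before forwarding. The clip level below is illustrative; the paper optimizes the limiter parameters via EXIT charts and numerical analysis.

```python
# Sketch: limiting and normalizing soft information (log-likelihood
# ratios, LLRs) before forwarding it on the downlink. The clip level is
# illustrative; the paper optimizes the limiter parameters.

def limit_and_normalize(llrs, clip):
    """Clip each LLR to [-clip, clip], then scale to unit peak amplitude."""
    limited = [max(-clip, min(clip, x)) for x in llrs]
    return [x / clip for x in limited]

print(limit_and_normalize([12.0, -3.0, 0.5, -40.0], clip=8.0))
# [1.0, -0.375, 0.0625, -1.0]
```

Bounding the forwarded values this way caps the peak downlink amplitude, which is what saves transmit power.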
Indoor Wi-Fi localization of mobile devices plays an increasingly important role with the rapid growth of location-based services and Wi-Fi mobile devices. In this paper, a new method of constructing the channel state information (CSI) image is proposed to improve localization accuracy. Compared with previous methods, the proposed CSI image contains more channel information, such as the angle of arrival (AoA), the time of arrival (ToA), and the amplitude. We construct three gray images from the phase differences of different antennas and the amplitudes of different subcarriers of one antenna, and then merge them into one RGB image. The localization method has an off-line stage and an on-line stage. In the off-line stage, the composed three-channel RGB images at training locations are used to train a convolutional neural network (CNN), which has proved efficient in image recognition. In the on-line stage, images at test locations are fed to the well-trained CNN model, and the localization result is the mean of the locations weighted by the highest output values. The performance of the proposed method is verified with extensive experiments in a representative indoor environment.
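The image-construction step in the abstract above stacks three gray matrices into one RGB image. A minimal sketch of that merge; the matrix shapes and values are illustrative, and a real pipeline would fill the channels from measured CSI phase differences and amplitudes.

```python
# Sketch: merging three gray-scale CSI matrices into one RGB image, as
# in the proposed method: two phase-difference channels (between
# antenna pairs) and one amplitude channel. Shapes and values are
# illustrative only.

def merge_rgb(phase_diff_12, phase_diff_13, amplitude):
    """Each input: a rows x cols gray image (0..255). Returns RGB pixels."""
    rows, cols = len(amplitude), len(amplitude[0])
    return [[(phase_diff_12[r][c], phase_diff_13[r][c], amplitude[r][c])
             for c in range(cols)] for r in range(rows)]

g1 = [[10, 20], [30, 40]]    # hypothetical phase differences, antennas 1-2
g2 = [[50, 60], [70, 80]]    # hypothetical phase differences, antennas 1-3
g3 = [[90, 100], [110, 120]] # hypothetical subcarrier amplitudes
rgb = merge_rgb(g1, g2, g3)
print(rgb[0][0], rgb[1][1])  # (10, 50, 90) (40, 80, 120)
```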
As the non-ferrous metal structural material most used in industry and production, aluminum (Al) alloy shows great value in the national economy and industrial manufacturing, and classifying Al alloys rapidly and accurately is a significant and meaningful task. Classification methods based on laser-induced breakdown spectroscopy (LIBS) have been reported in recent years. Although LIBS is an advanced detection technology, it must be combined with an appropriate algorithm to achieve rapid and accurate classification. As an important machine learning method, the random forest (RF) algorithm plays a great role in pattern recognition and material classification. This paper introduces a rapid classification method for Al alloys based on LIBS and the RF algorithm. The results show that the best accuracy reached with this method on Al alloy samples is 98.59%, with an average of 98.45%. The paper also characterizes how the accuracy varies with the number of trees in the RF and the size of the training sample set, from which researchers can select the optimized RF parameters to achieve good results. These results prove that LIBS combined with the RF algorithm can classify Al alloys effectively, precisely, and rapidly with high accuracy, which has significant practical value.
With Mobile Edge Computing (MEC), computation-intensive tasks are offloaded from mobile devices to cloud servers, and thus the energy consumption of mobile devices can be notably reduced. In this paper, we study task offloading in multi-user MEC systems with heterogeneous clouds, including edge clouds and remote clouds. Tasks are forwarded from mobile devices to edge clouds via wireless channels, and they can be further forwarded to remote clouds via the Internet. Our objective is to minimize the total energy consumption of multiple mobile devices, subject to the bounded-delay requirements of the tasks. Based on dynamic programming, we propose an algorithm that minimizes the energy consumption by jointly allocating bandwidth and computational resources to mobile devices. The algorithm has pseudo-polynomial complexity. To further reduce the complexity, we propose an approximation algorithm with energy discretization, whose total energy consumption is proved to be within a bounded gap from the optimum. Simulation results show that nearly 82.7% of the energy of mobile devices can be saved by task offloading compared with mobile device execution.
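The flavor of the pseudo-polynomial dynamic program in the abstract above can be sketched with a much smaller problem: pick, for each task, local execution or offloading so that the total delay stays within a budget while energy is minimized. The paper's DP additionally allocates bandwidth and compute across users; the costs below are made-up integers.

```python
# Sketch: a pseudo-polynomial dynamic program over a discretized delay
# budget, choosing local execution vs. offloading per task to minimize
# energy. The paper's DP is richer (joint bandwidth and compute
# allocation); costs here are made-up.

def min_energy(tasks, delay_budget):
    """tasks: list of ((e_local, d_local), (e_off, d_off)) option pairs.
    Returns the minimum total energy within the delay budget, or None."""
    INF = float("inf")
    dp = [0] + [INF] * delay_budget      # dp[d] = min energy using delay exactly d
    for options in tasks:
        new = [INF] * (delay_budget + 1)
        for d in range(delay_budget + 1):
            if dp[d] == INF:
                continue
            for energy, delay in options:
                if d + delay <= delay_budget:
                    new[d + delay] = min(new[d + delay], dp[d] + energy)
        dp = new
    best = min(dp)
    return None if best == INF else best

tasks = [((5, 1), (2, 3)),   # task 1: local(5 J, 1 ms) vs offload(2 J, 3 ms)
         ((4, 2), (1, 4))]   # task 2: local(4 J, 2 ms) vs offload(1 J, 4 ms)
print(min_energy(tasks, delay_budget=5))  # 6
```

Relaxing the delay budget lets more tasks offload, trading latency for energy, which is the tension the paper's algorithm resolves.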
Funding (GaN inverted pyramid study): the National Key Research and Development Program (2021YFA0716400); the National Natural Science Foundation of China (62225405, 62350002, 61991443); the Key R&D Project of Jiangsu Province, China (BE2020004); and the Collaborative Innovation Centre of Solid-State Lighting and Energy-Saving Electronics.
Funding (AoI-based MEC study): supported in part by the Fundamental Research Funds for the Central Universities under Grant 2022JBGP003; in part by the National Natural Science Foundation of China (NSFC) under Grant 62071033; and in part by ZTE Industry-University-Institute Cooperation Funds under Grant No. IA20230217003.
Funding (IoT malicious traffic detection study): supported in part by the National Key R&D Program of China under Grant 2018YFA0701601; in part by the National Natural Science Foundation of China (Grant Nos. U22A2002, 61941104, 62201605); and in part by the Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute.
Funding: Supported in part by the Shanghai Pujiang Program under Grant No. 21PJ1402600, in part by the Natural Science Foundation of Chongqing, China under Grant No. CSTB2022NSCQ-MSX0375, in part by the Song Shan Laboratory Foundation under Grant No. YYJC022022007, in part by the Zhejiang Provincial Natural Science Foundation of China under Grant LGJ22F010001, in part by the National Key Research and Development Program of China under Grant 2020YFA0711301, and in part by the National Natural Science Foundation of China under Grant 61922049.
Abstract: In this paper, we investigate the minimization of age of information (AoI), a metric that measures information freshness, at the network edge with unreliable wireless communications. In particular, we consider a set of users transmitting status updates, which each user collects randomly over time, to an edge server through unreliable orthogonal channels. This begs a natural question: with random status update arrivals and obscure channel conditions, can we devise an intelligent scheduling policy that matches the users and channels to stabilize the queues of all users while minimizing the average AoI? To give an adequate answer, we define a bipartite graph and formulate a dynamic edge activation problem with stability constraints. Then, we propose an online matching while learning algorithm (MatL) and discuss its implementation for wireless scheduling. Finally, simulation results demonstrate that MatL reliably learns the channel states and manages the users' buffers for fresher information at the edge.
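The user-channel matching step can be sketched in a toy form. Greedy matching on empirically estimated success probabilities is a deliberate simplification of the paper's MatL algorithm, used here only to show the shape of the problem:

```python
# Toy sketch (a simplification of MatL): given estimated success
# probabilities for every user-channel pair, greedily match users to
# channels by the current estimates, one channel per user.

def greedy_match(est):
    """est[u][c] = estimated success probability; returns {user: channel}."""
    pairs = sorted(
        ((p, u, c) for u, row in enumerate(est) for c, p in enumerate(row)),
        reverse=True,
    )
    used_u, used_c, match = set(), set(), {}
    for p, u, c in pairs:
        if u not in used_u and c not in used_c:
            match[u] = c
            used_u.add(u)
            used_c.add(c)
    return match

est = [[0.9, 0.2],   # user 0: channel 0 looks best
       [0.8, 0.6]]   # user 1: channel 0 also good, but taken by user 0
print(greedy_match(est))  # → {0: 0, 1: 1}
```

In the full algorithm the estimates themselves are learned online while scheduling, which is what couples the matching with the learning.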
Funding: Supported by the National Key Research and Development Program of China (2020YFB1804800) and the National Natural Science Foundation of China (No. 62071270).
Abstract: Providing an alternative PNT service to GNSS-challenged users will be an important function of next-generation NGSO broadband satellite communication systems. Herein, a packet-based PNT service architecture for NGSO broadband systems is proposed, in which a primary satellite and selected assistant satellites work together to provide PNT service to requesting users. Its positioning performance bounds are mathematically formulated by rigorously analyzing the bounds constrained by different waveforms. Simulations are conducted on different configurations of Walker Delta MEO constellations and Walker Star LEO constellations for corroboration, revealing the following: (1) Both MEO and LEO constellations achieve sub-meter-level positioning precision given enough satellites. (2) Compared with the GNSS Doppler-based velocity estimation method, the position-advance-based velocity estimation algorithm is more precise and better suited to the PNT service in NGSO broadband systems. (3) To provide PNT service to users in GNSS-challenged environments, the primary and each assistant satellite need only ∼0.1‰ of the time of one downlink beam.
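The position-advance idea in point (2) amounts to estimating velocity from successive position fixes rather than from Doppler shifts. A minimal finite-difference sketch, with positions and timing invented purely for illustration:

```python
# Illustrative finite-difference velocity estimate from two successive
# position fixes ("position advance"), as opposed to Doppler-based
# estimation; the 2-D positions and timing here are invented.

def velocity_from_positions(p0, p1, dt):
    """Average velocity vector between two fixes taken dt seconds apart."""
    return tuple((b - a) / dt for a, b in zip(p0, p1))

# A user moving 30 m east and 40 m north in 10 s → (3, 4) m/s.
print(velocity_from_positions((0.0, 0.0), (30.0, 40.0), 10.0))  # → (3.0, 4.0)
```

A real implementation would filter a sequence of fixes rather than difference a single pair, but the principle of deriving velocity from the positioning solution itself is the same.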
Funding: Supported by the National Natural Science Foundation of China under Grant 62131012/61971261.
Abstract: With the development of the transportation industry, effectively guiding aircraft in an emergency to prevent catastrophic accidents remains one of the top safety concerns. Undoubtedly, the operational status data of the aircraft play an important role in the judgment and command of the Operational Control Center (OCC). However, how to transmit various operational status data from an abnormal aircraft back to the OCC in an emergency is still an open problem. In this paper, we propose a novel Telemetry, Tracking, and Command (TT&C) architecture named Collaborative TT&C (CoTT&C), based on a mega-constellation, to solve this problem. CoTT&C allows each satellite to help the abnormal aircraft by sharing TT&C resources when needed, realizing real-time and reliable aeronautical communication in an emergency. Specifically, we design a dynamic resource sharing mechanism for CoTT&C and model the mechanism as a single-leader multi-follower Stackelberg game. Further, we give a unique Nash Equilibrium (NE) of the game in closed form. Simulation results demonstrate that the proposed resource sharing mechanism is effective, incentive compatible, fair, and reciprocal. We hope that our findings can shed some light on future research on aeronautical communications in an emergency.
Funding: Supported in part by the National Key Research and Development Program of China (Grant No. 2021YFA0716400), the National Natural Science Foundation of China (Grant Nos. 62225405, 62150027, 61974080, 61991443, 61975093, 61927811, 61875104, 62175126, and 62235011), the Ministry of Science and Technology of China (Grant Nos. 2021ZD0109900 and 2021ZD0109903), the Collaborative Innovation Center of Solid-State Lighting and Energy-Saving Electronics, and the Tsinghua University Initiative Scientific Research Program.
Abstract: AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are thirsty for computing power, which conventional computing hardware can barely satisfy. In the post-Moore era, the increase in computing power brought about by shrinking CMOS feature sizes in very large-scale integrated circuits (VLSICs) is struggling to meet the growing demand for AI computing power. To address this issue, technical approaches such as neuromorphic computing have attracted great attention because they break with the von Neumann architecture and handle AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of the human neural network, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture such as a spiking neural network (SNN), development in this field has incubated promising technologies such as in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures could reduce unnecessary data transfer and realize fast and energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Funding: Supported by the EU H2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement (Project DEEP, Grant number: 101109045), the National Key R&D Program of China under Grant 2018YFB1800804, the National Natural Science Foundation of China (Nos. NSFC 61925105 and 62171257), the Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute, and the Fundamental Research Funds for the Central Universities, China (No. FRF-NP-20-03).
Abstract: The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive framework for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper aims to fill this research gap by introducing the innovative concept of "data components." It proposes a graph-theoretic representation model that presents a clear mathematical definition and demonstrates the superiority of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that provides a way to calculate the information entropy of data components and establish their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
Funding: Supported in part by the National Key Research and Development Program of China (Grant No. 2020YFB1805005), in part by the National Natural Science Foundation of China (Grant No. 62031019), and in part by the European Commission through the H2020-MSCA-ITN META WIRELESS Research Project under Grant 956256.
Abstract: Channel prediction is critical to address the channel aging issue in mobile scenarios. Existing channel prediction techniques are mainly designed for discrete channel prediction, which can only predict the future channel in a fixed time slot per frame, while the other intra-frame channels are usually recovered by interpolation. However, these approaches suffer from a serious interpolation loss, especially for mobile millimeter-wave communications. To solve this challenging problem, we propose a tensor neural ordinary differential equation (TN-ODE)-based continuous-time channel prediction scheme to realize the direct prediction of intra-frame channels. Specifically, inspired by the recently developed continuous mapping model named neural ODE in the field of machine learning, we first utilize the neural ODE model to predict future continuous-time channels. To improve the channel prediction accuracy and reduce computational complexity, we then propose the TN-ODE scheme to learn the structural characteristics of the high-dimensional channel by a low-dimensional learnable transform. Simulation results show that the proposed scheme achieves higher intra-frame channel prediction accuracy than existing schemes.
Funding: Supported by the Key Research and Development Program of China (2018YFB1801102 and 2020YFB1806603), the Fundamental Research Funds for the Central Universities under Grant 2242022k60006, the Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute, the Civil Aerospace Technology Project (D040202), the National Natural Science Foundation of China (Grant No. 92067206), the Tsinghua-Qualcomm Joint Project, and the Tsinghua University Initiative Scientific Research Program (20193080005).
Abstract: This paper considers the frame-asynchronous grant-free rateless multiple access (FA-GF-RMA) scenario, where users can initiate access at any symbol time, using shared channel resources to transmit data to the base station. Rateless coding is introduced to enhance the reliability of the system. Previous literature has shown that FA-GF-RMA can achieve lower access delay than frame-synchronous grant-free rateless multiple access (FS-GF-RMA), with extreme reliability enabled by rateless coding. To support FA-GF-RMA in more practical scenarios, a joint activity and data detection (JADD) scheme is proposed. Exploiting the sporadic nature of the traffic, approximate message passing (AMP) is employed for transmission signal matrix estimation. Then, to determine the packet start points, a maximum a posteriori probability (MAP) estimation problem is solved based on the recovered transmitted signals, leveraging the intrinsic power pattern in the codeword. An iterative power-pattern-aided AMP algorithm is devised to enhance the estimation performance of AMP. Simulation results verify that the proposed solution achieves a delay performance that is comparable to the performance limit of FA-GF-RMA.
Funding: Supported by the National Key Research and Development Program (Nos. 2021YFF0901705 and 2021YFF0901700), the State Key Laboratory of Media Convergence and Communication, Communication University of China, the Fundamental Research Funds for the Central Universities, and the High-Quality and Cutting-Edge Disciplines Construction Project for Universities in Beijing (Internet Information, Communication University of China).
Abstract: Aspect-Based Sentiment Analysis (ABSA) is a fundamental area of research in Natural Language Processing (NLP). Within ABSA, Aspect Sentiment Quad Prediction (ASQP) aims to accurately identify sentiment quadruplets in target sentences, including aspect terms, aspect categories, corresponding opinion terms, and sentiment polarity. However, most existing research has focused on English datasets. Consequently, while ASQP has seen significant progress in English, the Chinese ASQP task has remained relatively stagnant. Drawing inspiration from methods applied to English ASQP, we propose Chinese generation templates and employ prompt-based instruction learning to enhance the model's understanding of the task, ultimately improving ASQP performance in the Chinese context. Under the same pre-training model configuration, our approach achieves a 5.79% improvement in the F1 score over the previously leading method. Furthermore, when utilizing a larger model with reduced training parameters, the F1 score shows an 8.14% improvement. Additionally, we suggest a novel evaluation metric based on the characteristics of generative models that better reflects model generalization. Experimental results validate the effectiveness of our approach.
Funding: Supported in part by the National Key Research and Development Program of China (Grant No. 2020YFA0711301), in part by the National Natural Science Foundation of China (Grant Nos. 62341110 and U22A2002), and in part by the Suzhou Science and Technology Project.
Abstract: Networked robots can perceive their surroundings, interact with each other or humans, and make decisions to accomplish specified tasks in remote/hazardous/complex environments. Satellite-unmanned aerial vehicle (UAV) networks can support such robots by providing on-demand communication services. However, under the traditional open-loop communication paradigm, network resources are usually divided into user-wise, mostly independent links, ignoring the task-level dependency of robot collaboration. Thus, it is imperative to develop a new communication paradigm that takes into account the high-level content and value behind the data, to facilitate multi-robot operation. Inspired by Wiener's Cybernetics theory, this article explores a closed-loop communication paradigm for the robot-oriented satellite-UAV network. This paradigm turns to handling group-wise structured links, so as to allocate resources in a task-oriented manner. It could also exploit the mobility of robots to liberate the network from full coverage, enabling new orchestration between network serving and positive mobility control of robots. Moreover, the integration of sensing, communications, computing, and control would enlarge the benefit of this new paradigm. We present a case study of joint mobile edge computing (MEC) offloading and mobility control of robots, and finally outline potential challenges and open issues.
Funding: Supported in part by the National Key Research and Development Program of China under Grant 2020AAA0106000, in part by the National Natural Science Foundation of China under Grants U20B2060 and U21B2036, and in part by a grant from the Guoqiang Institute, Tsinghua University, under Grant 2021GQG1005.
Abstract: Human mobility prediction is important for many applications. However, training an accurate mobility prediction model requires a large number of human trajectories, where privacy issues become an important problem. The rise of federated learning provides a promising solution to this problem, enabling mobile devices to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud. However, existing federated learning-based methods either do not provide privacy guarantees or are vulnerable to privacy leakage. In this paper, we combine data perturbation and model perturbation mechanisms and propose a privacy-preserving mobility prediction algorithm, in which we add noise to both the transmitted model and the raw data to protect user privacy while preserving mobility prediction performance. Extensive experimental results show that our proposed method significantly outperforms the existing state-of-the-art mobility prediction method in terms of defensive performance against practical attacks while having comparable mobility prediction performance, demonstrating its effectiveness.
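The combined data-and-model perturbation can be sketched in a few lines. The Gaussian mechanism and the noise scales below are illustrative stand-ins, not the paper's calibrated privacy budget:

```python
# Minimal sketch of combined perturbation (noise scales are illustrative,
# not a calibrated privacy budget): Gaussian noise is added both to the
# raw data and to the local model update before either leaves the device.
import random

def perturb(values, sigma, rng):
    """Add zero-mean Gaussian noise of scale sigma to each value."""
    return [v + rng.gauss(0.0, sigma) for v in values]

rng = random.Random(0)                # fixed seed for reproducibility
data = [1.0, 2.0, 3.0]                # raw trajectory features
update = [0.10, -0.20, 0.05]          # local model update (e.g., gradients)

noisy_data = perturb(data, sigma=0.1, rng=rng)
noisy_update = perturb(update, sigma=0.01, rng=rng)
print(noisy_data, noisy_update)
```

Perturbing both channels means an attacker observing either the uploaded update or any leaked features sees only noised versions, at a tunable cost to prediction accuracy.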
Funding: Supported by the National Major Science and Technology Projects of China (2021ZD0109902 and 2020AA0105500), the National Natural Science Foundation of China (62275139 and 62088102), and the Tsinghua University Initiative Scientific Research Program.
Abstract: Electroencephalography (EEG) analysis extracts critical information from brain signals, enabling brain disease diagnosis and providing fundamental support for brain-computer interfaces. However, performing an artificial intelligence analysis of EEG signals with high energy efficiency poses significant challenges for electronic processors on edge computing devices, especially with large neural network models. Herein, we propose an EEG opto-processor based on diffractive photonic computing units (DPUs) to process extracranial and intracranial EEG signals effectively and to detect epileptic seizures. The signals of the EEG channels within a one-second time window are optically encoded as inputs to the constructed diffractive neural networks for classification, which monitors the brain state to identify symptoms of an epileptic seizure. We developed both free-space and integrated DPUs as edge computing systems and demonstrated their applications for real-time epileptic seizure detection using benchmark datasets, that is, the Children's Hospital Boston (CHB)-Massachusetts Institute of Technology (MIT) extracranial and Epilepsy-iEEG-Multicenter intracranial EEG datasets, with excellent computing performance. Along with the channel selection mechanism, both numerical evaluations and experimental results validated the sufficiently high classification accuracies of the proposed opto-processors for supporting clinical diagnosis. Our study opens a new research direction for utilizing photonic computing techniques to process large-scale EEG signals and promote broader applications.
Funding: Supported in part by the National Natural Science Foundation of China (NSFC, 62125106, 61860206003, and 62088102), in part by the Ministry of Science and Technology of China (2021ZD0109901), and in part by the Provincial Key Research and Development Program of Zhejiang (2021C01016).
Abstract: Anticipating others' actions is innate and essential for humans to navigate and interact well with others in dense crowds. This ability is urgently required for unmanned systems such as service robots and self-driving cars. However, existing solutions struggle to predict pedestrian anticipation accurately, because the influence of group-related social behaviors has not been well considered. While group relationships and group interactions are ubiquitous and significantly influence pedestrian anticipation, their influence is diverse and subtle, making it difficult to quantify explicitly. Here, we propose the group interaction field (GIF), a novel group-aware representation that quantifies pedestrian anticipation as a probability field of pedestrians' future locations and attention orientations. An end-to-end neural network, GIFNet, is tailored to estimate the GIF from explicit multidimensional observations. GIFNet quantifies the influence of group behaviors by formulating a group interaction graph with propagation and graph attention that is adaptive to the group size and dynamic interaction states. The experimental results show that the GIF effectively represents the change in pedestrians' anticipation under the prominent impact of group behaviors and accurately predicts pedestrians' future states. Moreover, the GIF contributes to explaining various predictions of pedestrians' behavior in different social states. The proposed GIF will eventually allow unmanned systems to work in a human-like manner and comply with social norms, thereby promoting harmonious human-machine relationships.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 91638205, 91438206, 61771286, and 61621091).
Abstract: With the rapid development of unmanned aerial vehicles (UAVs), more and more UAVs access satellite networks for data transmission. To improve the spectral efficiency, non-orthogonal multiple access (NOMA) is adopted to integrate UAVs into the satellite network, where multiple satellites cooperatively serve the UAVs and mobile terminals using the Ku-band and above. Taking into account the rain fading and the fading correlation, the outage performance is first obtained analytically for fixed power allocation and then efficiently calculated by the proposed power allocation algorithm to guarantee user fairness. Simulation results verify the outage performance analysis and show the performance improvement of the proposed power allocation scheme.
Funding: Supported by the National High Technology Research and Development Program of China (863 Program, 2012AA01A502), the National Natural Science Foundation of China (41206031), and the National Basic Research Program of China (2012CB316000).
Abstract: There is a contradiction between high processing complexity and limited processing resources when turbo codes are used on an on-board processing (OBP) satellite platform. To solve this problem, this paper proposes a partial iterative decoding method for on-board application, in which the satellite carries out only a limited number of iterations according to the on-board processing resource limitation and the throughput requirements. In this method, the soft information of the parity bits, which is not obtained individually in a conventional turbo decoder, is encoded and forwarded along with that of the information bits. To save downlink transmit power, the soft information is limited and normalized before forwarding. The iteration number and limiter parameters are optimized with the help of the EXIT chart and numerical analysis, respectively. Simulation results show that the proposed method can effectively decrease the complexity of on-board processing while achieving most of the decoding gain.
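The "limit and normalize before forwarding" step can be sketched as a clip followed by a unit-power scaling of the soft values (log-likelihood ratios). The clip threshold and the RMS normalization below are illustrative choices, not the paper's optimized limiter parameters:

```python
# Sketch of limiting and normalizing parity-bit soft information (LLRs)
# before forwarding on the downlink; the clip threshold and unit-RMS
# normalization are illustrative, not the paper's optimized parameters.
import math

def limit_and_normalize(llrs, clip=8.0):
    """Clip each LLR to [-clip, clip], then scale to unit RMS power."""
    clipped = [max(-clip, min(clip, x)) for x in llrs]
    power = math.sqrt(sum(x * x for x in clipped) / len(clipped))
    return [x / power for x in clipped]

out = limit_and_normalize([12.0, -3.0, 0.5, -20.0])
print(out)  # clipped outliers, unit average power
```

Clipping bounds the peak downlink amplitude, and normalization fixes the average transmit power, which is why the limiter parameters trade off quantization distortion against power savings.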
Funding: Supported by the National Natural Science Foundation of China (No. 61631013), the National Key Basic Research Program of China (973 Program, No. 2013CB329002), and the National Major Project (No. 2018ZX03001006003).
Abstract: Indoor Wi-Fi localization of mobile devices plays an increasingly important role with the rapid growth of location-based services and Wi-Fi mobile devices. In this paper, a new method of constructing the channel state information (CSI) image is proposed to improve localization accuracy. Compared with previous methods of constructing the CSI image, the new kind of CSI image proposed here is able to contain more channel information, such as the angle of arrival (AoA), the time of arrival (TOA), and the amplitude. We construct three gray images using the phase differences of different antennas and the amplitudes of different subcarriers of one antenna, and then merge them to form one RGB image. The localization method has an off-line stage and an on-line stage. In the off-line stage, the composed three-channel RGB images at the training locations are used to train a convolutional neural network (CNN), which has been proved efficient in image recognition. In the on-line stage, images at the test locations are fed to the well-trained CNN model, and the localization result is the weighted mean of the locations with the highest output values. The performance of the proposed method is verified with extensive experiments in a representative indoor environment.
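The three-gray-to-RGB construction is just a channel-wise stack. A minimal sketch with tiny placeholder matrices standing in for real CSI measurements:

```python
# Sketch of merging three gray images (two antenna phase-difference maps
# and one amplitude map) into a single RGB image; the 2x2 matrices here
# are placeholders for real CSI measurements.

def merge_rgb(r, g, b):
    """Stack three HxW gray images into an HxWx3 RGB image (nested lists)."""
    return [[[r[i][j], g[i][j], b[i][j]] for j in range(len(r[0]))]
            for i in range(len(r))]

phase_diff_12 = [[0, 1], [2, 3]]    # antenna 1-2 phase differences
phase_diff_13 = [[4, 5], [6, 7]]    # antenna 1-3 phase differences
amplitude     = [[8, 9], [10, 11]]  # subcarrier amplitudes, antenna 1

rgb = merge_rgb(phase_diff_12, phase_diff_13, amplitude)
print(rgb[0][0])  # pixel (0, 0) → [0, 4, 8]
```

In practice each gray map would be scaled to a common pixel range before stacking, so that no single channel dominates the CNN input.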
Funding: Supported by the National High Technology Research and Development Program of China (863 Program, No. 2013AA102402).
Abstract: As an important non-ferrous structural metal widely used in industry and production, aluminum (Al) alloy shows its great value in the national economy and industrial manufacturing. How to classify Al alloys rapidly and accurately is a significant and meaningful task. Classification methods based on laser-induced breakdown spectroscopy (LIBS) have been reported in recent years. Although LIBS is an advanced detection technology, it is necessary to combine it with an algorithm to achieve rapid and accurate classification. As an important machine learning method, the random forest (RF) algorithm plays a great role in pattern recognition and material classification. This paper introduces a rapid classification method for Al alloys based on LIBS and the RF algorithm. The results show that the best accuracy achieved by this method on Al alloy samples is 98.59%, with an average of 98.45%. It also reveals how the accuracy varies with the number of trees in the RF and with the size of the training sample set. From these relationships, researchers can find the optimized parameters of the RF algorithm in order to achieve a good result. These results prove that LIBS combined with the RF algorithm can classify Al alloys effectively, precisely, and rapidly with high accuracy, which clearly has significant practical value.
Funding: Supported by the National Key R&D Program of China under Grant 2018YFB1800804, the Natural Science Foundation of China (Nos. 61871254, 61861136003, and 91638204), and Hitachi, Ltd.
Abstract: With Mobile Edge Computing (MEC), computation-intensive tasks are offloaded from mobile devices to cloud servers, and thus the energy consumption of mobile devices can be notably reduced. In this paper, we study task offloading in multi-user MEC systems with heterogeneous clouds, including edge clouds and remote clouds. Tasks are forwarded from mobile devices to edge clouds via wireless channels, and they can be further forwarded to remote clouds via the Internet. Our objective is to minimize the total energy consumption of multiple mobile devices, subject to the bounded-delay requirements of tasks. Based on dynamic programming, we propose an algorithm that minimizes the energy consumption by jointly allocating bandwidth and computational resources to mobile devices. The algorithm is of pseudo-polynomial complexity. To further reduce the complexity, we propose an approximation algorithm with energy discretization, and its total energy consumption is proved to be within a bounded gap from the optimum. Simulation results show that nearly 82.7% of the energy of mobile devices can be saved by task offloading compared with execution on the mobile device.
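The dynamic-programming idea can be illustrated with a toy version of the problem. The local/offload energy costs, the discretized bandwidth units, and the budget below are all invented for illustration; the paper's algorithm additionally handles per-task delay bounds and computational resource allocation:

```python
# Toy dynamic program over a discretized bandwidth budget (all numbers
# invented): each device either computes locally (high energy, no
# bandwidth) or offloads (low energy, consumes bandwidth units);
# minimize total energy within the budget.

def min_energy(devices, budget):
    """devices: list of (local_energy, offload_energy, bandwidth_units)."""
    INF = float("inf")
    dp = [0.0] + [INF] * budget  # dp[b] = min energy using b bandwidth units
    for local_e, off_e, bw in devices:
        new = [INF] * (budget + 1)
        for b in range(budget + 1):
            if dp[b] == INF:
                continue
            new[b] = min(new[b], dp[b] + local_e)        # compute locally
            if b + bw <= budget:                         # offload
                new[b + bw] = min(new[b + bw], dp[b] + off_e)
        dp = new
    return min(dp)

devices = [(5.0, 1.0, 2), (4.0, 1.0, 2), (3.0, 2.0, 1)]
print(min_energy(devices, 3))  # → 7.0 (offload devices 1 and 3, run 2 locally)
```

The table indexed by discretized resource use is what gives the pseudo-polynomial complexity; discretizing energy instead, as the paper's approximation does, bounds the table size at a controlled cost in optimality.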