Journal Articles
667 articles found
1. Control of GaN inverted pyramids growth on c-plane patterned sapphire substrates
Authors: Luming Yu, Xun Wang, Zhibiao Hao, Yi Luo, Changzheng Sun, Bing Xiong, Yanjun Han, Jian Wang, Hongtao Li, Lin Gan, Lai Wang. 《Journal of Semiconductors》 EI CAS CSCD, 2024, Issue 6, pp. 92-96 (5 pages)
Growth of gallium nitride (GaN) inverted pyramids on c-plane sapphire substrates is beneficial for fabricating novel devices, as it forms semipolar facets. In this work, GaN inverted pyramids are directly grown on c-plane patterned sapphire substrates (PSS) by metal organic vapor phase epitaxy (MOVPE). The influences of growth conditions on the surface morphology are experimentally studied and explained by Wulff constructions. The competition in growth rate among the {0001}, {1011}, and {1122} facets results in the various surface morphologies of GaN. A higher growth temperature of 985 °C and a lower V/III ratio of 25 can expand the area of {} facets in GaN inverted pyramids. On the other hand, GaN inverted pyramids with almost pure {} facets are obtained by using a lower growth temperature of 930 °C, a higher V/III ratio of 100, and PSS with a pattern arrangement perpendicular to the substrate primary flat.
Keywords: inverted pyramids; GaN; MOVPE; crystal growth; competition model
2. Age of Information Based User Scheduling and Data Assignment in Multi-User Mobile Edge Computing Networks: An Online Algorithm
Authors: Ge Yiyang, Xiong Ke, Dong Rui, Lu Yang, Fan Pingyi, Qu Gang. 《China Communications》 SCIE CSCD, 2024, Issue 5, pp. 153-165 (13 pages)
This paper investigates the age of information (AoI)-based multi-user mobile edge computing (MEC) network with partial offloading mode. The weighted sum AoI (WSA) is first analyzed and derived, and then a WSA minimization problem is formulated by jointly optimizing the user scheduling and data assignment. Due to the non-analytic expression of the WSA with respect to the optimization variables and the unknowability of future network information, the problem cannot be solved with known solution methods. Therefore, an online Joint Partial Offloading and User Scheduling Optimization (JPO-USO) algorithm is proposed by transforming the original problem into a single-slot data assignment sub-problem and a single-slot user scheduling sub-problem, and solving the two sub-problems separately. We analyze the computational complexity of the proposed JPO-USO algorithm, which is O(N), with N being the number of users. Simulation results show that the proposed JPO-USO algorithm achieves better AoI performance than various baseline methods. Both the user's data assignment and the user's AoI should be jointly taken into account to decrease the system WSA when scheduling users.
Keywords: age of information (AoI); mobile edge computing (MEC); user scheduling
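The WSA metric in this abstract has a paper-specific derivation; as a minimal illustration of the metric itself, here is a toy sketch of per-slot AoI bookkeeping and its weighted time average. The reset-to-1-on-delivery model and the `deliveries`/`weights` inputs are illustrative assumptions, not the paper's system model.

```python
def weighted_sum_aoi(deliveries, weights):
    """Toy AoI bookkeeping: a user's age of information (AoI) resets to 1 in a
    slot where its fresh update is delivered and grows by 1 otherwise; the
    weighted sum AoI (WSA) is averaged over the horizon.
    deliveries[u][t] is True if user u's update is delivered in slot t."""
    num_users = len(weights)
    horizon = len(deliveries[0])
    age = [1] * num_users          # initial AoI of every user
    total = 0.0
    for t in range(horizon):
        for u in range(num_users):
            age[u] = 1 if deliveries[u][t] else age[u] + 1
        total += sum(w * a for w, a in zip(weights, age))
    return total / horizon

# One always-served user and one never-served user, equal weights.
wsa = weighted_sum_aoi([[True, True, True], [False, False, False]], [1, 1])
```

A scheduler that trades such AoI terms against data-assignment costs would evaluate exactly this kind of running sum slot by slot.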
3. Detecting While Accessing: A Semi-Supervised Learning-Based Approach for Malicious Traffic Detection in Internet of Things [Cited by 1]
Authors: Yantian Luo, Hancun Sun, Xu Chen, Ning Ge, Wei Feng, Jianhua Lu. 《China Communications》 SCIE CSCD, 2023, Issue 4, pp. 302-314 (13 pages)
In the upcoming large-scale Internet of Things (IoT), it is increasingly challenging to defend against malicious traffic, due to the heterogeneity of IoT devices and the diversity of IoT communication protocols. In this paper, we propose a semi-supervised learning-based approach to detect malicious traffic at the access side. It overcomes the resource-bottleneck problem of traditional malicious traffic defenders deployed at the victim side, and it requires no labeled traffic data in model training. Specifically, we design a coarse-grained behavior model of IoT devices by self-supervised learning with unlabeled traffic data. Then, we fine-tune this model to improve its accuracy in malicious traffic detection by adopting a transfer learning method using a small amount of labeled data. Experimental results show that our method can achieve an accuracy of 99.52% and an F1-score of 99.52% with only 1% of the labeled training data on the CICDDoS2019 dataset. Moreover, our method outperforms state-of-the-art supervised learning-based methods in terms of accuracy, precision, recall, and F1-score with 1% of the training data.
Keywords: malicious traffic detection; semi-supervised learning; Internet of Things (IoT); transformer; masked behavior model
4. Matching while Learning: Wireless Scheduling for Age of Information Optimization at the Edge [Cited by 2]
Authors: Kun Guo, Hao Yang, Peng Yang, Wei Feng, Tony Q. S. Quek. 《China Communications》 SCIE CSCD, 2023, Issue 3, pp. 347-360 (14 pages)
In this paper, we investigate the minimization of age of information (AoI), a metric that measures information freshness, at the network edge with unreliable wireless communications. Particularly, we consider a set of users transmitting status updates, which are collected by each user randomly over time, to an edge server through unreliable orthogonal channels. This begs a natural question: with random status update arrivals and obscure channel conditions, can we devise an intelligent scheduling policy that matches the users and channels to stabilize the queues of all users while minimizing the average AoI? To give an adequate answer, we define a bipartite graph and formulate a dynamic edge activation problem with stability constraints. Then, we propose an online matching while learning algorithm (MatL) and discuss its implementation for wireless scheduling. Finally, simulation results demonstrate that MatL reliably learns the channel states and manages the users' buffers for fresher information at the edge.
Keywords: information freshness; Lyapunov optimization; multi-armed bandit; wireless scheduling
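MatL couples Lyapunov drift terms with multi-armed bandit learning; the learning half can be illustrated with a plain UCB1 index policy over channels of unknown success probability. This is a generic textbook sketch, not the paper's algorithm: the `success_prob` arms and the confidence-bound formula are standard UCB1 assumptions.

```python
import math
import random

def ucb1_schedule(success_prob, horizon, seed=0):
    """UCB1 bandit sketch: each slot, schedule the channel with the highest
    upper confidence bound on its (unknown) delivery success probability.
    Returns the learned success-rate estimates and per-channel usage counts."""
    rng = random.Random(seed)
    n_arms = len(success_prob)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                    # try every channel once first
        else:
            arm = max(range(n_arms), key=lambda a: means[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < success_prob[arm] else 0.0
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]   # running average
    return means, counts

# A reliable channel (0.9) vs. a poor one (0.2): UCB1 should favor the first.
means, counts = ucb1_schedule([0.9, 0.2], horizon=2000)
```

In a matching setting, such indices would feed a per-slot user-channel assignment instead of a single arm pull.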
5. Performance Analysis of the Packet-Based PNT Service in NGSO Broadband Satellite Communication Systems
Authors: Xi Chen, Qihui Wei, Yafeng Zhan, Linling Kuang. 《China Communications》 SCIE CSCD, 2023, Issue 9, pp. 247-259 (13 pages)
Providing alternative PNT service to GNSS-challenged users will be an important function of next-generation NGSO broadband satellite communication systems. Herein, a packet-based PNT service architecture in NGSO broadband systems is proposed, in which a primary satellite and selected assistant satellites work together to provide PNT service to requesting users. Its positioning performance bounds are mathematically formulated by rigorously analyzing the bounds constrained by different waveforms. Simulations are conducted on different configurations of Walker Delta MEO constellations and Walker Star LEO constellations for corroboration, revealing the following: (1) Both MEO and LEO constellations achieve sub-meter-level positioning precision given enough satellites. (2) Compared to the GNSS Doppler-based velocity estimation method, the position-advance-based velocity estimation algorithm is more precise and applicable to the PNT service in NGSO broadband systems. (3) To provide PNT service to users in GNSS-challenged environments, the primary and each assistant satellite need only ~0.1‰ of the time of one downlink beam.
Keywords: 6G; non-terrestrial network; NGSO satellite; positioning performance analysis
6. Mega-Constellations Based TT&C Resource Sharing: Keep Reliable Aeronautical Communication in an Emergency
Authors: Haoran Xie, Yafeng Zhan, Jianhua Lu. 《China Communications》 SCIE CSCD, 2024, Issue 2, pp. 1-16 (16 pages)
With the development of the transportation industry, the effective guidance of aircraft in an emergency to prevent catastrophic accidents remains one of the top safety concerns. Undoubtedly, operational status data of the aircraft play an important role in the judgment and command of the Operational Control Center (OCC). However, how to transmit various operational status data from an abnormal aircraft back to the OCC in an emergency is still an open problem. In this paper, we propose a novel Telemetry, Tracking, and Command (TT&C) architecture named Collaborative TT&C (CoTT&C) based on mega-constellations to solve this problem. CoTT&C allows each satellite to help the abnormal aircraft by sharing TT&C resources when needed, realizing real-time and reliable aeronautical communication in an emergency. Specifically, we design a dynamic resource sharing mechanism for CoTT&C and model the mechanism as a single-leader-multi-follower Stackelberg game. Further, we give a unique Nash Equilibrium (NE) of the game in closed form. Simulation results demonstrate that the proposed resource sharing mechanism is effective, incentive compatible, fair, and reciprocal. We hope that our findings can shed some light on future research on aeronautical communications in an emergency.
Keywords: aeronautical emergency communication; mega-constellation; networked TT&C; resource allocation; Stackelberg game
7. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. 《Chinese Physics B》 SCIE EI CAS CSCD, 2024, Issue 3, pp. 1-23 (23 pages)
AI development has brought great success to upgrading the information age. At the same time, the large-scale artificial neural networks used to build AI systems are thirsty for computing power, which is barely satisfied by conventional computing hardware. In the post-Moore era, the increase in computing power brought about by the size reduction of CMOS in very large-scale integrated circuits (VLSIC) struggles to meet the growing demand for AI computing power. To address this issue, technical approaches like neuromorphic computing attract great attention because they break the von Neumann architecture and handle AI algorithms much more parallelly and energy-efficiently. Inspired by the human neural network architecture, neuromorphic computing hardware is brought to life based on novel artificial neurons constructed from new materials or devices. Although it is relatively difficult to deploy a training process in a neuromorphic architecture like a spiking neural network (SNN), development in this field has incubated promising technologies like in-sensor computing, which brings new opportunities for multidisciplinary research, including the fields of optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures could reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNNs and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
8. Data Component: An Innovative Framework for Information Value Metrics in the Digital Economy
Authors: Tao Xiaoming, Wang Yu, Peng Jieyang, Zhao Yuelin, Wang Yue, Wang Youzheng, Hu Chengsheng, Lu Zhipeng. 《China Communications》 SCIE CSCD, 2024, Issue 5, pp. 17-35 (19 pages)
The increasing dependence on data highlights the need for a detailed understanding of its behavior, encompassing the challenges involved in processing and evaluating it. However, current research lacks a comprehensive structure for measuring the worth of data elements, hindering effective navigation of the changing digital environment. This paper aims to fill this research gap by introducing the innovative concept of "data components." It proposes a graph-theoretic representation model that presents a clear mathematical definition and demonstrates the superiority of data components over traditional processing methods. Additionally, the paper introduces an information measurement model that provides a way to calculate the information entropy of data components and establish their increased informational value. The paper also assesses the value of information, suggesting a pricing mechanism based on its significance. In conclusion, this paper establishes a robust framework for understanding and quantifying the value of implicit information in data, laying the groundwork for future research and practical applications.
Keywords: data component; data element; data governance; data science; information theory
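The paper's information measurement model computes the information entropy of data components; as a minimal stand-in for the idea, the empirical Shannon entropy of a single data column can be computed like this. The frequency-based estimator is an assumption here; the paper's graph-theoretic model is richer.

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Empirical Shannon entropy (in bits) of one data column: a simple proxy
    for the informational value carried by a data component."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A constant column scores 0 bits, a uniformly varied one scores log2 of its cardinality, which is the intuition behind pricing data by information content.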
9. Continuous-Time Channel Prediction Based on Tensor Neural Ordinary Differential Equation
Authors: Mingyao Cui, Hao Jiang, Yuhao Chen, Yang Du, Linglong Dai. 《China Communications》 SCIE CSCD, 2024, Issue 1, pp. 163-174 (12 pages)
Channel prediction is critical to address the channel aging issue in mobile scenarios. Existing channel prediction techniques are mainly designed for discrete channel prediction, which can only predict the future channel in a fixed time slot per frame, while the other intra-frame channels are usually recovered by interpolation. However, these approaches suffer from serious interpolation loss, especially for mobile millimeter-wave communications. To solve this challenging problem, we propose a tensor neural ordinary differential equation (TN-ODE) based continuous-time channel prediction scheme to realize direct prediction of intra-frame channels. Specifically, inspired by the recently developed continuous mapping model named neural ODE in the field of machine learning, we first utilize the neural ODE model to predict future continuous-time channels. To improve the channel prediction accuracy and reduce computational complexity, we then propose the TN-ODE scheme to learn the structural characteristics of the high-dimensional channel by a low-dimensional learnable transform. Simulation results show that the proposed scheme achieves higher intra-frame channel prediction accuracy than existing schemes.
Keywords: channel prediction; massive multiple-input multiple-output; millimeter-wave communications; ordinary differential equation
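Neural ODEs predict a state at arbitrary continuous times by integrating a learned derivative function; a fixed-step Euler integrator shows the mechanics. The hand-written derivative `f` below is a stand-in assumption for the learned TN-ODE dynamics.

```python
def euler_predict(f, h0, t0, t1, steps=100):
    """Fixed-step Euler integration of dh/dt = f(h, t): the core mechanic of
    ODE-based continuous-time prediction, where a learned network would play
    the role of the hand-written derivative f."""
    h, t = h0, t0
    dt = (t1 - t0) / steps
    for _ in range(steps):
        h = h + dt * f(h, t)   # one Euler step
        t += dt
    return h
```

Because `t1` is a free argument, the same model queries the channel at any intra-frame instant, which is exactly what interpolation-based discrete predictors cannot do.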
10. A Joint Activity and Data Detection Scheme for Asynchronous Grant-Free Rateless Multiple Access
Authors: Wei Zhang, Xiaofeng Zhong, Shidong Zhou. 《China Communications》 SCIE CSCD, 2024, Issue 1, pp. 34-52 (19 pages)
This paper considers the frame-asynchronous grant-free rateless multiple access (FA-GF-RMA) scenario, where users can initiate access at any symbol time, using shared channel resources to transmit data to the base station. Rateless coding is introduced to enhance the reliability of the system. Previous literature has shown that FA-GF-RMA can achieve lower access delay than frame-synchronous grant-free rateless multiple access (FS-GF-RMA), with extreme reliability enabled by rateless coding. To support FA-GF-RMA in more practical scenarios, a joint activity and data detection (JADD) scheme is proposed. Exploiting the sporadic nature of the traffic, approximate message passing (AMP) is used for transmission signal matrix estimation. Then, to determine the packet start points, a maximum a posteriori probability (MAP) estimation problem is solved based on the recovered transmitted signals, leveraging the intrinsic power pattern in the codeword. An iterative power-pattern-aided AMP algorithm is devised to enhance the estimation performance of AMP. Simulation results verify that the proposed solution achieves delay performance comparable to the performance limit of FA-GF-RMA.
Keywords: asynchronous grant-free; JADD; rateless codes
11. Improve Chinese Aspect Sentiment Quadruplet Prediction via Instruction Learning Based on Large Generate Models
Authors: Zhaoliang Wu, Yuewei Wu, Xiaoli Feng, Jiajun Zou, Fulian Yin. 《Computers, Materials & Continua》 SCIE EI, 2024, Issue 3, pp. 3391-3412 (22 pages)
Aspect-Based Sentiment Analysis (ABSA) is a fundamental area of research in Natural Language Processing (NLP). Within ABSA, Aspect Sentiment Quad Prediction (ASQP) aims to accurately identify sentiment quadruplets in target sentences, including aspect terms, aspect categories, corresponding opinion terms, and sentiment polarity. However, most existing research has focused on English datasets. Consequently, while ASQP has seen significant progress in English, the Chinese ASQP task has remained relatively stagnant. Drawing inspiration from methods applied to English ASQP, we propose Chinese generation templates and employ prompt-based instruction learning to enhance the model's understanding of the task, ultimately improving ASQP performance in the Chinese context. Under the same pre-training model configuration, our approach achieved a 5.79% improvement in the F1 score compared with the previously leading method. Furthermore, when utilizing a larger model with reduced training parameters, the F1 score demonstrated an 8.14% enhancement. Additionally, we suggest a novel evaluation metric based on the characteristics of generative models, better reflecting model generalization. Experimental results validate the effectiveness of our approach.
Keywords: ABSA; ASQP; LLMs; sentiment analysis; Chinese comments
12. Robot-Oriented 6G Satellite-UAV Networks: Requirements, Paradigm Shifts, and Case Studies
Authors: Peng Wei, Wei Feng, Yunfei Chen, Ning Ge, Wei Xiang. 《China Communications》 SCIE CSCD, 2024, Issue 2, pp. 74-84 (11 pages)
Networked robots can perceive their surroundings, interact with each other or with humans, and make decisions to accomplish specified tasks in remote/hazardous/complex environments. Satellite-unmanned aerial vehicle (UAV) networks can support such robots by providing on-demand communication services. However, under the traditional open-loop communication paradigm, network resources are usually divided into user-wise, mostly independent links, ignoring the task-level dependency of robot collaboration. Thus, it is imperative to develop a new communication paradigm that takes into account the high-level content and values behind the traffic, to facilitate multi-robot operation. Inspired by Wiener's Cybernetics theory, this article explores a closed-loop communication paradigm for the robot-oriented satellite-UAV network. This paradigm handles group-wise structured links, so as to allocate resources in a task-oriented manner. It can also exploit the mobility of robots to liberate the network from full coverage, enabling new orchestration between network serving and positive mobility control of robots. Moreover, the integration of sensing, communications, computing, and control would enlarge the benefit of this new paradigm. We present a case study on joint mobile edge computing (MEC) offloading and mobility control of robots, and finally outline potential challenges and open issues.
Keywords: closed-loop communication; mobility control; satellite-UAV network; structured resource allocation
13. Privacy-Preserving Federated Mobility Prediction with Compound Data and Model Perturbation Mechanism
Authors: Long Qingyue, Wang Huandong, Chen Huiming, Jin Depeng, Zhu Lin, Yu Li, Li Yong. 《China Communications》 SCIE CSCD, 2024, Issue 3, pp. 160-173 (14 pages)
Human mobility prediction is important for many applications. However, training an accurate mobility prediction model requires a large number of human trajectories, so privacy becomes an important problem. The rise of federated learning provides a promising solution: it enables mobile devices to collaboratively learn a shared prediction model while keeping all the training data on the device, decoupling the ability to do machine learning from the need to store the data in the cloud. However, existing federated learning-based methods either do not provide privacy guarantees or are vulnerable to privacy leakage. In this paper, we combine data perturbation and model perturbation mechanisms and propose a privacy-preserving mobility prediction algorithm, where we add noise to the transmitted model and the raw data collaboratively, to protect user privacy while preserving mobility prediction performance. Extensive experimental results show that our proposed method significantly outperforms the existing state-of-the-art mobility prediction method in terms of defensive performance against practical attacks, while having comparable mobility prediction performance, demonstrating its effectiveness.
Keywords: federated learning; mobility prediction; privacy
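The compound mechanism adds noise both to the raw data and to the transmitted model; a minimal sketch with Gaussian noise at both stages conveys the idea. The 0.1-scaling "gradient step" and the noise scales are illustrative assumptions, not calibrated to the paper's privacy analysis.

```python
import random

def perturb(features, sigma_data, sigma_model, seed=0):
    """Compound perturbation sketch: Gaussian noise is added once to the raw
    features (data perturbation) and once to the resulting model update
    (model perturbation). Both scales are illustrative stand-ins, not a
    calibrated differential-privacy mechanism."""
    rng = random.Random(seed)
    noisy = [x + rng.gauss(0.0, sigma_data) for x in features]   # data noise
    update = [x * 0.1 for x in noisy]                # stand-in model update
    return [u + rng.gauss(0.0, sigma_model) for u in update]     # model noise
```

Splitting the noise budget across the two stages is the design knob: data noise defends against reconstruction of trajectories, model noise against gradient-leakage attacks.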
14. EEG opto-processor: epileptic seizure detection based on diffractive photonic computing units
Authors: Tao Yan, Maoqi Zhang, Hang Chen, Sen Wan, Kaifeng Shang, Haiou Zhang, Xun Cao, Xing Lin, Qionghai Dai. 《Engineering》 SCIE EI CAS CSCD, 2024, Issue 4, pp. 56-66 (11 pages)
Electroencephalography (EEG) analysis extracts critical information from brain signals, enabling brain disease diagnosis and providing fundamental support for brain-computer interfaces. However, performing artificial intelligence analysis of EEG signals with high energy efficiency poses significant challenges for electronic processors on edge computing devices, especially with large neural network models. Herein, we propose an EEG opto-processor based on diffractive photonic computing units (DPUs) to effectively process extracranial and intracranial EEG signals and to detect epileptic seizures. The signals of the EEG channels within a one-second time window are optically encoded as inputs to the constructed diffractive neural networks for classification, which monitors the brain state to identify symptoms of an epileptic seizure. We developed both free-space and integrated DPUs as edge computing systems and demonstrated their applications for real-time epileptic seizure detection using benchmark datasets, namely the Children's Hospital Boston (CHB)-Massachusetts Institute of Technology (MIT) extracranial and the Epilepsy-iEEG-Multicenter intracranial EEG datasets, with excellent computing performance results. Along with the channel selection mechanism, both numerical evaluations and experimental results validated the sufficiently high classification accuracies of the proposed opto-processors for supervising clinical diagnosis. Our study opens a new research direction for utilizing photonic computing techniques to process large-scale EEG signals and promote broader applications.
Keywords: epileptic seizure detection; EEG analysis; diffractive photonic computing unit; photonic computing
15. Group interaction field for learning and explaining pedestrian anticipation
Authors: Xueyang Wang, Xuecheng Chen, Puhua Jiang, Haozhe Lin, Xiaoyun Yuan, Mengqi Ji, Yuchen Guo, Ruqi Huang, Lu Fang. 《Engineering》 SCIE EI CAS CSCD, 2024, Issue 3, pp. 70-82 (13 pages)
Anticipating others' actions is innate and essential for humans to navigate and interact well with others in dense crowds. This ability is urgently required for unmanned systems such as service robots and self-driving cars. However, existing solutions struggle to predict pedestrian anticipation accurately, because the influence of group-related social behaviors has not been well considered. While group relationships and group interactions are ubiquitous and significantly influence pedestrian anticipation, their influence is diverse and subtle, making it difficult to quantify explicitly. Here, we propose the group interaction field (GIF), a novel group-aware representation that quantifies pedestrian anticipation as a probability field of pedestrians' future locations and attention orientations. An end-to-end neural network, GIFNet, is tailored to estimate the GIF from explicit multidimensional observations. GIFNet quantifies the influence of group behaviors by formulating a group interaction graph with propagation and graph attention that is adaptive to the group size and dynamic interaction states. The experimental results show that the GIF effectively represents the change in pedestrians' anticipation under the prominent impact of group behaviors and accurately predicts pedestrians' future states. Moreover, the GIF contributes to explaining various predictions of pedestrians' behavior in different social states. The proposed GIF will eventually be able to allow unmanned systems to work in a human-like manner and comply with social norms, thereby promoting harmonious human-machine relationships.
Keywords: human behavior modeling and prediction; implicit representation of pedestrian anticipation; group interaction; graph neural network
16. Outage Performance of Non-Orthogonal Multiple Access Based Unmanned Aerial Vehicles Satellite Networks [Cited by 18]
Authors: Ting Qi, Wei Feng, Youzheng Wang. 《China Communications》 SCIE CSCD, 2018, Issue 5, pp. 1-8 (8 pages)
With the rapid development of unmanned aerial vehicles (UAVs), more and more UAVs access satellite networks for data transmission. To improve spectral efficiency, non-orthogonal multiple access (NOMA) is adopted to integrate UAVs into the satellite network, where multiple satellites cooperatively serve the UAVs and mobile terminals using the Ku-band and above. Taking into account rain fading and fading correlation, the outage performance is first analytically obtained for fixed power allocation and then efficiently calculated by the proposed power allocation algorithm to guarantee user fairness. Simulation results verify the outage performance analysis and show the performance improvement of the proposed power allocation scheme.
Keywords: satellite networks; access; multiple access; aerial vehicles; antennas; orthogonality; performance; allocation algorithms
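Outage probability under fading is commonly cross-checked by Monte Carlo simulation; a toy two-user power-domain NOMA link over Rayleigh fading gives the flavor of the analysis. The power split, target rate, and the omission of rain fading and fading correlation are simplifying assumptions relative to the paper.

```python
import random

def noma_outage(snr_db, a_near=0.2, a_far=0.8, rate=1.0, trials=20000, seed=1):
    """Monte Carlo outage probability of the far (weak) user in a two-user
    power-domain NOMA downlink over Rayleigh fading. The far user decodes
    its own signal while treating the near user's signal as interference."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)          # transmit SNR, linear scale
    threshold = 2 ** rate - 1          # SINR needed for the target rate
    outages = 0
    for _ in range(trials):
        g = rng.expovariate(1.0)       # exponential power gain (Rayleigh envelope)
        sinr_far = a_far * snr * g / (a_near * snr * g + 1)
        if sinr_far < threshold:
            outages += 1
    return outages / trials
```

Sweeping `a_near`/`a_far` in such a loop is the brute-force counterpart of the paper's fairness-driven power allocation.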
17. Partial Iterative Decode of Turbo Codes for On-Board Processing Satellite Platform [Cited by 8]
Authors: LI Hang, GAO Zhen, ZHAO Ming, WANG Jing. 《China Communications》 SCIE CSCD, 2015, Issue 11, pp. 104-111 (8 pages)
There is a contradiction between high processing complexity and limited processing resources when turbo codes are used on an on-board processing (OBP) satellite platform. To solve this problem, this paper proposes a partial iterative decoding method for on-board application, in which the satellite carries out only a limited number of iterations according to the on-board processing resource limitation and the throughput requirements. In this method, the soft information of the parity bits, which is not obtained individually in a conventional turbo decoder, is encoded and forwarded along with that of the information bits. To save downlink transmit power, the soft information is limited and normalized before forwarding. The iteration number and limiter parameters are optimized with the help of an EXIT chart and numerical analysis, respectively. Simulation results show that the proposed method can effectively decrease the complexity of on-board processing while achieving most of the decoding gain.
Keywords: satellite communication; on-board processing; partial iterative decoding
18. Convolutional Neural Networks Based Indoor Wi-Fi Localization with a Novel Kind of CSI Images [Cited by 8]
Authors: Haihan Li, Xiangsheng Zeng, Yunzhou Li, Shidong Zhou, Jing Wang. 《China Communications》 SCIE CSCD, 2019, Issue 9, pp. 250-260 (11 pages)
Indoor Wi-Fi localization of mobile devices plays an increasingly important role with the rapid growth of location-based services and Wi-Fi mobile devices. In this paper, a new method of constructing the channel state information (CSI) image is proposed to improve localization accuracy. Compared with previous methods of constructing the CSI image, the new kind of CSI image is able to contain more channel information, such as the angle of arrival (AoA), the time of arrival (TOA), and the amplitude. We construct three gray images using the phase differences of different antennas and the amplitudes of different subcarriers of one antenna, and then merge them to form one RGB image. The localization method has an off-line stage and an on-line stage. In the off-line stage, the composed three-channel RGB images at training locations are used to train a convolutional neural network (CNN), which has been proved efficient in image recognition. In the on-line stage, images at test locations are fed to the well-trained CNN model, and the localization result is the weighted mean of the locations with the highest output values. The performance of the proposed method is verified with extensive experiments in a representative indoor environment.
Keywords: convolutional neural network; indoor Wi-Fi localization; channel state information; CSI image
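The construction merges phase-difference and amplitude channels into one RGB image; here is a sketch of that idea for a single packet. The exact normalization and channel layout are guesses for illustration, not the paper's recipe.

```python
import cmath
import math

def csi_to_rgb(csi):
    """Merge three gray channels into one row of RGB pixels for one packet:
    R = phase difference antenna 1 vs antenna 0, G = phase difference
    antenna 2 vs antenna 0, B = amplitude of antenna 0, each normalized to
    [0, 1]. csi[a][s] is the complex CSI of antenna a on subcarrier s."""
    def norm_phase(x):                 # map phase in (-pi, pi] to [0, 1]
        return (x + math.pi) / (2 * math.pi)
    amps = [abs(h) for h in csi[0]]
    peak = max(amps) or 1.0            # guard against an all-zero packet
    red = [norm_phase(cmath.phase(h1 / h0)) for h0, h1 in zip(csi[0], csi[1])]
    green = [norm_phase(cmath.phase(h2 / h0)) for h0, h2 in zip(csi[0], csi[2])]
    blue = [a / peak for a in amps]
    return list(zip(red, green, blue)) # one RGB pixel per subcarrier

# Two subcarriers, three antennas with fixed phase offsets of pi/2 and pi.
pixels = csi_to_rgb([[1 + 0j, 2 + 0j], [1j, 2j], [-1 + 0j, -2 + 0j]])
```

Stacking such pixel rows over consecutive packets would yield the 2-D image that the CNN consumes.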
19. A rapid classification method of aluminum alloy based on laser-induced breakdown spectroscopy and random forest algorithm [Cited by 6]
Authors: 詹浏洋, 马晓红, 方玮骐, 王锐, 刘泽生, 宋阳, 赵华凤. 《Plasma Science and Technology》 SCIE EI CAS CSCD, 2019, Issue 3, pp. 148-154 (7 pages)
As an important non-ferrous metal structural material widely used in industry and production, aluminum (Al) alloy shows great value in the national economy and industrial manufacturing. How to classify Al alloy rapidly and accurately is a significant and meaningful task. Classification methods based on laser-induced breakdown spectroscopy (LIBS) have been reported in recent years. Although LIBS is an advanced detection technology, it must be combined with an algorithm to achieve rapid and accurate classification. As an important machine learning method, the random forest (RF) algorithm plays a great role in pattern recognition and material classification. This paper introduces a rapid classification method for Al alloy based on LIBS and the RF algorithm. The results show that the best accuracy reached using this method to classify Al alloy samples is 98.59%, with an average of 98.45%. It also reveals how the accuracy varies with the number of trees in the RF and the size of the training sample set, from which researchers can find the optimized parameters of the RF algorithm to achieve good results. These results prove that LIBS with the RF algorithm can classify Al alloy effectively, precisely, and rapidly with high accuracy, which has significant practical value.
Keywords: laser-induced breakdown spectroscopy (LIBS); random forest (RF); aluminum (Al) alloy; classification
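The pipeline here is LIBS spectra in, random forest out; to make the bootstrap-and-vote idea concrete without any external library, here is a miniature forest of decision stumps. This is a drastic simplification of real random forests, and the toy two-feature samples stand in for emission-line intensities.

```python
import random
from collections import Counter

def train_forest(X, y, n_trees=25, seed=0):
    """Miniature random-forest-style classifier: every 'tree' is a one-feature
    decision stump fit on a bootstrap sample with a randomly chosen feature;
    prediction is a majority vote over stumps."""
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]   # bootstrap resample
        f = rng.randrange(d)                         # random feature choice
        best = None
        for t in sorted({X[i][f] for i in idx}):     # candidate thresholds
            for lo, hi in ((0, 1), (1, 0)):          # which side is class 1
                err = sum((hi if X[i][f] > t else lo) != y[i] for i in idx)
                if best is None or err < best[0]:
                    best = (err, f, t, lo, hi)
        stumps.append(best[1:])
    return stumps

def predict(stumps, x):
    votes = Counter(hi if x[f] > t else lo for f, t, lo, hi in stumps)
    return votes.most_common(1)[0][0]

# Toy "spectra": two informative features separating class 0 from class 1.
X = [[0.1, 0.2], [0.2, 0.1], [0.3, 0.3], [0.9, 1.0], [1.0, 0.8], [1.1, 0.9]]
y = [0, 0, 0, 1, 1, 1]
stumps = train_forest(X, y)
```

Varying `n_trees` and the training-set size is the same tuning exercise the abstract describes for the full RF.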
20. Energy-Optimal and Delay-Bounded Computation Offloading in Mobile Edge Computing with Heterogeneous Clouds [Cited by 23]
Authors: Tianchu Zhao, Sheng Zhou, Linqi Song, Zhiyuan Jiang, Xueying Guo, Zhisheng Niu. 《China Communications》 SCIE CSCD, 2020, Issue 5, pp. 191-210 (20 pages)
With mobile edge computing (MEC), computation-intensive tasks are offloaded from mobile devices to cloud servers, and thus the energy consumption of mobile devices can be notably reduced. In this paper, we study task offloading in multi-user MEC systems with heterogeneous clouds, including edge clouds and remote clouds. Tasks are forwarded from mobile devices to edge clouds via wireless channels, and they can be further forwarded to remote clouds via the Internet. Our objective is to minimize the total energy consumption of multiple mobile devices, subject to bounded-delay requirements of tasks. Based on dynamic programming, we propose an algorithm that minimizes the energy consumption by jointly allocating bandwidth and computational resources to mobile devices. The algorithm is of pseudo-polynomial complexity. To further reduce the complexity, we propose an approximation algorithm with energy discretization, and its total energy consumption is proved to be within a bounded gap from the optimum. Simulation results show that nearly 82.7% of the energy of mobile devices can be saved by task offloading compared with mobile-device execution.
Keywords: mobile edge computing; heterogeneous clouds; energy saving; delay bounds; dynamic programming
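The paper obtains a pseudo-polynomial DP and then discretizes energy for its approximation algorithm; the same dynamic-programming pattern can be sketched by discretizing the delay budget instead: pick one execution option per task to minimize energy under a delay bound. The per-task option lists (e.g. local vs. edge vs. remote cloud) are illustrative assumptions.

```python
def min_energy_offload(tasks, delay_budget):
    """Pick one execution option per task, minimizing total energy subject to
    a total delay bound. DP table over the discretized delay budget;
    returns None if no feasible choice of options exists."""
    INF = float("inf")
    best = [INF] * (delay_budget + 1)   # best[d] = min energy using delay d
    best[0] = 0.0
    for options in tasks:               # one task decided per DP stage
        nxt = [INF] * (delay_budget + 1)
        for d in range(delay_budget + 1):
            if best[d] == INF:
                continue
            for energy, delay in options:
                if d + delay <= delay_budget:
                    nxt[d + delay] = min(nxt[d + delay], best[d] + energy)
        best = nxt
    answer = min(best)
    return None if answer == INF else answer

# Two tasks, each with a fast-but-costly and a slow-but-cheap option.
tasks = [[(5, 1), (1, 3)], [(4, 1), (2, 2)]]
```

Discretizing the continuous quantity (here delay, in the paper energy) is what turns the search into a table of pseudo-polynomial size.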