Journal Articles
15 articles found
1. Distributed Weighted Data Aggregation Algorithm in End-to-Edge Communication Networks Based on Multi-armed Bandit (Cited: 1)
Authors: Yifei ZOU, Senmao QI, Cong'an XU, Dongxiao YU. Computer Science (《计算机科学》), CSCD, PKU Core, 2023, No. 2, pp. 13-22 (10 pages)
As a combination of edge computing and artificial intelligence, edge intelligence has become a promising technique that provides its users with fast, precise, and customized services. In edge intelligence, when learning agents are deployed on the edge side, data aggregation from the end side to the designated edge devices is an important research topic. Considering the varying importance of end devices, this paper studies the weighted data aggregation problem in a single-hop end-to-edge communication network. First, to ensure that all end devices with various weights are treated fairly in data aggregation, a distributed end-to-edge cooperative scheme is proposed. Then, to handle the massive contention on the wireless channel caused by end devices, a multi-armed bandit (MAB) algorithm is designed to help the end devices find their most appropriate update rates. Different from traditional data aggregation works, incorporating the MAB gives our algorithm higher efficiency in data aggregation. With a theoretical analysis, we show that the efficiency of our algorithm is asymptotically optimal. Comparative experiments with previous works are also conducted to show the strength of our algorithm.
Keywords: Weighted data aggregation, End-to-edge communication, Multi-armed bandit, Edge intelligence
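The abstract describes end devices using a multi-armed bandit to pick their update rates. As an illustration only (the candidate rates, the toy reward function, and the epsilon-greedy policy below are invented stand-ins, not the paper's algorithm), the rate-selection loop can be sketched as:

```python
import random

def epsilon_greedy_bandit(rates, reward_fn, rounds=2000, eps=0.1, seed=0):
    """Pick an update rate per round with epsilon-greedy exploration;
    track a running mean reward per arm and return the best-looking rate."""
    rng = random.Random(seed)
    counts = [0] * len(rates)
    values = [0.0] * len(rates)
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(len(rates))              # explore
        else:
            arm = max(range(len(rates)), key=lambda a: values[a])  # exploit
        r = reward_fn(rates[arm], rng)
        counts[arm] += 1
        values[arm] += (r - values[arm]) / counts[arm]   # incremental mean
    return rates[max(range(len(rates)), key=lambda a: values[a])]

def toy_reward(rate, rng):
    # Hypothetical channel: higher rates send more data but raise contention,
    # so rate * (1 - rate) peaks at a moderate update rate.
    return rate * (1.0 - rate) + rng.gauss(0, 0.01)

best = epsilon_greedy_bandit([0.1, 0.3, 0.5, 0.7, 0.9], toy_reward)
```

The toy reward mimics the throughput-versus-collision trade-off the abstract alludes to; the paper's actual reward and regret analysis are not reproduced here.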
2. Methodology for local correction of the heights of global geoid models to improve the accuracy of GNSS leveling
Authors: Stepan Savchuk, Alina Fedorchuk. Geodesy and Geodynamics, EI CSCD, 2024, No. 1, pp. 42-49 (8 pages)
At present, one of the methods used to determine the height of points on the Earth's surface is Global Navigation Satellite System (GNSS) leveling. The orthometric or normal height can be determined by this method only if a geoid or quasi-geoid height model is available. This paper proposes a methodology for local correction of the heights of high-order global geoid models such as EGM08, EIGEN-6C4, GECO, and XGM2019e_2159. The methodology was tested in different areas of the research field, covering various relief forms. The dependence of the corrected height accuracy on the input data was analyzed, and the correction was also conducted for model heights in three tidal systems: "tide free", "mean tide", and "zero tide". The results show that the heights of the EIGEN-6C4 model can be corrected with an accuracy of up to 1 cm for flat and foothill terrains over areas of 1°×1°, 2°×2°, and 3°×3°. The EGM08 model gives an almost identical result. The EIGEN-6C4 model is best suited for mountainous relief, providing an accuracy of 1.5 cm on the 1°×1° area. The height correction accuracy of the GECO and XGM2019e_2159 models is somewhat poorer, showing larger numerical fluctuations.
Keywords: GNSS leveling, Global geoid model, Gravity anomaly, Weight data, Correcting data
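The local correction is built from control points where both GNSS/leveling heights and model geoid heights are known. A minimal sketch of one common building block — a least-squares corrector plane fitted to the height residuals — is shown below; the control-point coordinates and residual surface are synthetic, and the paper's full methodology goes beyond a plane fit:

```python
import numpy as np

def fit_local_corrector(lon, lat, residuals):
    """Fit N_corr = a + b*lon + c*lat by least squares to the residuals
    (GNSS/leveling geoid heights minus global-model heights) at control points."""
    A = np.column_stack([np.ones_like(lon), lon, lat])
    coef, *_ = np.linalg.lstsq(A, residuals, rcond=None)
    return coef

def apply_corrector(coef, lon, lat):
    """Evaluate the corrector plane at new points."""
    return coef[0] + coef[1] * lon + coef[2] * lat

# Synthetic control points with a tilted residual surface (metres).
lon = np.array([30.0, 30.5, 31.0, 30.2, 30.8, 31.2])
lat = np.array([50.0, 50.4, 50.1, 50.6, 50.9, 50.3])
residuals = 0.10 + 0.02 * lon - 0.01 * lat
coef = fit_local_corrector(lon, lat, residuals)
```

With exact planar residuals the fit recovers the coefficients; on real data the residuals also absorb tidal-system and datum differences discussed in the paper.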
3. Weighted Multi-sensor Data Level Fusion Method of Vibration Signal Based on Correlation Function (Cited: 7)
Authors: BIN Guangfu, JIANG Zhinong, LI Xuejun, DHILLON B. S. Chinese Journal of Mechanical Engineering, SCIE EI CAS CSCD, 2011, No. 5, pp. 899-904 (6 pages)
Because differences in sensor precision and some random factors are difficult to control, the actual measurement signals deviate from the target signals, which affects the reliability and precision of rotating machinery fault diagnosis. Traditional signal processing methods, such as classical inference and weighted averaging algorithms, usually lack dynamic adaptability, which easily causes faults to be misjudged or missed. To enhance the accuracy and precision of vibration signal measurement in multi-sensor fault diagnosis of rotating machinery, a novel data-level fusion approach is presented on the basis of correlation function analysis to quickly determine the weighted values of multi-sensor vibration signals. The approach does not require prior information about the sensors; the weighted value of each sensor is determined from the correlation measure of the real-time data during the data-level fusion process. A greater correlation measure of a sensor signal yields a greater weighted value, and vice versa. The approach can effectively suppress large errors and can even still fuse data in the case of sensor failures, because it takes full advantage of the sensors' own information to determine the weighted values. Moreover, it has good anti-jamming performance, because the correlation measures between noise and effective signals are usually small. Through simulation of typical signals collected from multiple sensors, a comparative analysis of dynamic adaptability and fault tolerance between the proposed approach and the traditional weighted averaging approach is performed. Finally, a rotor dynamics and integrated fault simulator is taken as an example to verify the feasibility and advantages of the proposed approach. It is shown that multi-sensor data-level fusion based on the correlation function weighted approach is better than the traditional weighted average approach with respect to fusion precision and dynamic adaptability. Meanwhile, the approach is adaptable, easy to use, and applicable to other areas of vibration measurement.
Keywords: vibration signal, multi-sensor, data level fusion, correlation function, weighted value
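A minimal sketch of the correlation-weighting idea: give each channel a weight proportional to how strongly it agrees with the other channels, so a failed sensor is automatically down-weighted. The signals below are synthetic, and the paper weights by correlation functions of the time signals rather than this simple correlation-coefficient matrix:

```python
import numpy as np

def correlation_weighted_fusion(signals):
    """Fuse multi-sensor signals: weight each channel by its total positive
    correlation with the other channels, then form the weighted average."""
    s = np.asarray(signals, dtype=float)          # shape (n_sensors, n_samples)
    c = np.corrcoef(s)                            # pairwise correlation matrix
    np.fill_diagonal(c, 0.0)
    support = np.clip(c.sum(axis=1), 0.0, None)   # total agreement per sensor
    if support.sum() == 0:                        # degenerate: plain mean
        w = np.full(len(s), 1.0 / len(s))
    else:
        w = support / support.sum()
    return w, w @ s

t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
rng = np.random.default_rng(0)
good1 = clean + 0.05 * rng.normal(size=t.size)
good2 = clean + 0.05 * rng.normal(size=t.size)
faulty = rng.normal(size=t.size)                  # failed sensor: pure noise
weights, fused = correlation_weighted_fusion([good1, good2, faulty])
```

Because the faulty channel barely correlates with the others, its weight collapses toward zero and the fused signal tracks the two healthy sensors.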
4. Weighted total variation using split Bregman fast quantitative susceptibility mapping reconstruction method (Cited: 1)
Authors: 陈琳, 郑志伟, 包立君, 方金生, 杨天和, 蔡淑惠, 蔡聪波. Chinese Physics B, SCIE EI CAS CSCD, 2018, No. 8, pp. 645-654 (10 pages)
An ill-posed inverse problem in quantitative susceptibility mapping (QSM) is usually solved using a regularization and optimization solver, which is time consuming for three-dimensional volume data. In clinical diagnosis, however, it is necessary to reconstruct a susceptibility map efficiently. Here, a modified QSM reconstruction method called weighted total variation using split Bregman (WTVSB) is proposed. It reconstructs the susceptibility map with fast computation and effective artifact suppression by incorporating noise-suppressed data weighting into split Bregman iteration. The noise-suppressed data weighting is determined using the Laplacian of the calculated local field, which prevents noise and errors in the field maps from spreading into the susceptibility inversion. The split Bregman iteration accelerates the solution of the L1-regularized reconstruction model by utilizing a preconditioned conjugate gradient solver. In experiments, the proposed method is compared with truncated k-space division (TKD), morphology enabled dipole inversion (MEDI), and total variation using split Bregman (TVSB) on numerical simulation, phantom, and in vivo human brain data, evaluated by root mean square error and mean structural similarity. Experimental results demonstrate that the proposed method achieves a better balance between accuracy and efficiency of QSM reconstruction than conventional methods, thus facilitating clinical applications of QSM.
Keywords: quantitative susceptibility mapping, ill-posed inverse problem, noise-suppressed data weighting, split Bregman iteration
5. STATISTICAL SPACE-TIME ADAPTIVE PROCESSING ALGORITHM
Authors: Yang Jie. Journal of Electronics (China), 2010, No. 3, pp. 412-419 (8 pages)
For the slowly changing, range-dependent non-homogeneous environment, a new statistical space-time adaptive processing algorithm is proposed, which uses statistical methods such as the Bayes or likelihood criterion to estimate the approximate covariance matrix under non-homogeneous conditions. According to the statistical characteristics of the space-time snapshot data, by defining the aggregate snapshot data and corresponding events, the conditional probability of the space-time snapshot data that constitutes the effective training data is given; the weighting coefficients for the weighting method are then obtained. Theoretical analysis indicates that the Bayes and likelihood criteria for covariance matrix estimation are more reasonable than methods that estimate the covariance matrix from the training data with detected outliers removed. Simulations confirm that the proposed algorithms can estimate the covariance accurately under non-homogeneous conditions and have favorable characteristics.
Keywords: Space-Time Adaptive Processing (STAP), Non-homogeneous condition, Bayes and likelihood criterion, data weighting
6. ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD (Cited: 9)
Authors: SONG Kaichen, NIE Xili. Chinese Journal of Mechanical Engineering, SCIE EI CAS CSCD, 2006, No. 3, pp. 451-454 (4 pages)
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm, in which the relationship between the weight coefficients and the measurement noise is established, is proposed by taking the correlation of the measurement noise into account. A simplified weighted fusion algorithm is then deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm that adjusts the weight coefficients of the simplified algorithm by estimating the measurement noise from the measurements is presented. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
Keywords: Weighted least square method, Data fusion, Measurement noise, Correlation
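For the uncorrelated-noise case mentioned in the abstract, weighted least-squares fusion of several measurements of one quantity reduces to the classic inverse-variance rule, sketched below (the numeric measurements and variances are invented for illustration):

```python
import numpy as np

def inverse_variance_fusion(measurements, variances):
    """WLS fusion of scalar measurements of one quantity with uncorrelated
    noise: w_i ∝ 1/σ_i², fused variance = 1 / Σ(1/σ_i²)."""
    m = np.asarray(measurements, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)               # normalized inverse variances
    return float(w @ m), float(1.0 / np.sum(1.0 / v))

est, var = inverse_variance_fusion([10.2, 9.8, 10.0], [0.04, 0.01, 0.01])
```

The fused variance is always smaller than the best single sensor's variance, which is the precision gain the abstract reports for the multi-sensor system.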
7. New approach to determine common weights in DEA efficiency evaluation model (Cited: 7)
Authors: Feng Yang, Chenchen Yang, Liang Liang, Shaofu Du. Journal of Systems Engineering and Electronics, SCIE EI CSCD, 2010, No. 4, pp. 609-615 (7 pages)
Data envelopment analysis (DEA) is a mathematical programming approach for appraising the relative efficiencies of peer decision-making units (DMUs), and is widely used in ranking DMUs. However, almost all DEA-related ranking approaches are based on self-evaluation efficiencies: each DMU chooses the weights it prefers most, so the resulting efficiencies are not suitable as ranking criteria. This paper therefore proposes a new approach to determine a bundle of common weights in the DEA efficiency evaluation model by introducing a multi-objective integer program. The paper also gives the solving process of this multi-objective integer program, and the solution is proven to be Pareto efficient. The solving process ensures that the obtained common weight bundle is acceptable to a great number of DMUs. Finally, a numerical example is given to demonstrate the approach.
Keywords: data envelopment analysis (DEA), common weight, ranking, multi-objective programming
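For contrast with the paper's common-weight model (a multi-objective integer program, not reproduced here), the self-evaluation efficiency it criticizes is the standard input-oriented CCR multiplier model, which can be sketched as a linear program; the toy single-input, single-output data below are invented:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0, eps=1e-6):
    """Input-oriented CCR multiplier model (self-evaluation) for DMU j0.
    X: (m, n) inputs, Y: (s, n) outputs; columns are DMUs.
    max u·y0  s.t.  v·x0 = 1,  u·y_j - v·x_j <= 0 for all j,  u, v >= eps."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate([-Y[:, j0], np.zeros(m)])           # minimize -u·y0
    A_ub = np.hstack([Y.T, -X.T])                          # u·y_j - v·x_j <= 0
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(s), X[:, j0]])[None]   # v·x0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(eps, None)] * (s + m))
    return -res.fun

# Toy example: 3 DMUs, one input, one output.
X = np.array([[2.0, 4.0, 4.0]])
Y = np.array([[2.0, 4.0, 2.0]])
effs = [ccr_efficiency(X, Y, j) for j in range(3)]
```

In this self-evaluation, each DMU picks the multipliers (u, v) most favorable to itself; the paper's contribution is to force one shared weight bundle across all DMUs so the efficiencies become comparable ranking criteria.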
8. Monthly gravity field recovery from GRACE orbits and K-band measurements using variational equations approach (Cited: 1)
Authors: Wang Changqing, Xu Houze, Zhong Min, Feng Wei. Geodesy and Geodynamics, 2015, No. 4, pp. 253-260 (8 pages)
The Gravity Recovery and Climate Experiment (GRACE) mission can significantly improve our knowledge of the temporal variability of the Earth's gravity field. We obtained monthly gravity field solutions based on the variational equations approach from GPS-derived positions of the GRACE satellites and K-band range-rate measurements. The impact of different fixed data weighting ratios on temporal gravity field recovery when combining the two types of data was investigated in order to derive the best combined solution. The monthly solutions obtained through the above procedures are named the Institute of Geodesy and Geophysics (IGG) temporal gravity field models. The IGG models were compared with GRACE Release 05 (RL05) products in the following aspects: (i) the trend of the mass anomaly in China and its nearby regions within 2005-2010; (ii) the root mean squares of the global mass anomaly during 2005-2010; and (iii) time-series changes in the mean water storage in the Amazon Basin and the Sahara Desert between 2005 and 2010. The results showed that the IGG solutions were almost consistent with the GRACE RL05 products in aspects (i)-(iii). Changes in the annual amplitude of mean water storage in the Amazon Basin were 14.7 ± 1.2 cm for IGG, 17.1 ± 1.3 cm for the Centre for Space Research (CSR), 16.4 ± 0.9 cm for the GeoForschungsZentrum (GFZ), and 16.9 ± 1.2 cm for the Jet Propulsion Laboratory (JPL), in terms of equivalent water height (EWH). The root mean squares of the mean mass anomaly in the Sahara were 1.2 cm, 0.9 cm, 0.9 cm, and 1.2 cm for the IGG, CSR, GFZ, and JPL models, respectively. The comparison suggests that the IGG temporal gravity field solutions are at the same accuracy level as the latest solutions published by CSR, GFZ, and JPL.
Keywords: Gravity Recovery and Climate Experiment (GRACE), Temporal gravity field, Variational equations approach, Water storage changes, Equivalent water height (EWH), Data weight ratio, Geoid height per degree, IGG temporal gravity model
9. k-NN METHOD IN PARTIAL LINEAR MODEL UNDER RANDOM CENSORSHIP (Cited: 1)
Authors: QIN Gengsheng (Department of Mathematics, Sichuan University, Chengdu 610064). Applied Mathematics (A Journal of Chinese Universities), SCIE CSCD, 1995, No. 3, pp. 275-286 (12 pages)
Consider the regression model Y = Xβ + g(T) + e, where g is an unknown smooth function on [0, 1], β is an l-dimensional parameter to be estimated, and e is an unobserved error. When the data are randomly censored, estimators βn* and gn* for β and g are obtained using the class K and least squares methods. It is shown that βn* is asymptotically normal and gn* achieves the convergence rate O(n^(-1/3)).
Keywords: Partial linear model, censored data, class K method, k-nearest neighbor weights
10. Distributed and Weighted Extreme Learning Machine for Imbalanced Big Data Learning (Cited: 8)
Authors: Zhiqiong Wang, Junchang Xin, Hongxu Yang, Shuo Tian, Ge Yu, Chenren Xu, Yudong Yao. Tsinghua Science and Technology, SCIE EI CAS CSCD, 2017, No. 2, pp. 160-173 (14 pages)
The Extreme Learning Machine (ELM) and its variants are effective in many machine learning applications such as Imbalanced Learning (IL) or Big Data (BD) learning. However, they are unable to solve imbalanced and large-volume data learning problems simultaneously. This study addresses the IL problem in BD applications. The Distributed and Weighted ELM (DW-ELM) algorithm, based on the MapReduce framework, is proposed. To confirm the feasibility of parallel computation, the decomposability of the matrix multiplication operators is first illustrated. Then, to further improve computational efficiency, an improved DW-ELM algorithm (IDW-ELM) is developed using only one MapReduce job. The operation of the proposed DW-ELM and IDW-ELM algorithms is finally validated through experiments.
Keywords: weighted Extreme Learning Machine (ELM), imbalanced big data, MapReduce framework, user-defined counter
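A single-machine sketch of the weighted-ELM building block that DW-ELM distributes over MapReduce: a random hidden layer followed by a weighted ridge solution, with per-class weights 1/N_k so the minority class is not swamped. The network size, regularization constant, and toy data below are invented; the paper's actual contribution, the distributed decomposition of the matrix products, is not shown:

```python
import numpy as np

def weighted_elm_train(X, y, sample_weight, hidden=50, C=1.0, seed=0):
    """Weighted ELM: random input layer, then the weighted ridge solution
    beta = (H^T W H + I/C)^(-1) H^T W y, where W holds per-sample weights."""
    rng = np.random.default_rng(seed)
    W_in = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W_in + b)                     # hidden-layer output matrix
    W = np.diag(sample_weight)
    beta = np.linalg.solve(H.T @ W @ H + np.eye(hidden) / C, H.T @ W @ y)
    return W_in, b, beta

def elm_predict(X, W_in, b, beta):
    return np.tanh(X @ W_in + b) @ beta

# Toy imbalanced two-class problem: 90 majority vs. 10 minority samples.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1.0, 1.0, size=(90, 2)),
               rng.normal(+1.0, 1.0, size=(10, 2))])
y = np.concatenate([-np.ones(90), np.ones(10)])
w = np.where(y > 0, 1.0 / 10, 1.0 / 90)           # per-class 1/N_k weights
W_in, b, beta = weighted_elm_train(X, y, w)
pred = np.sign(elm_predict(X, W_in, b, beta))
```

The 1/N_k weighting gives both classes equal total weight in the ridge objective, which is the standard weighted-ELM remedy for imbalance.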
11. Discussion of Fan et al.'s paper "Gaining efficiency via weighted estimators for multivariate failure time data" (Cited: 1)
Authors: QU Annie (Department of Statistics, University of Illinois at Urbana-Champaign, IL 61820, USA), XUE Lan (Department of Statistics, Oregon State University, Corvallis, OR 97331-4606, USA). Science China Mathematics, SCIE, 2009, No. 6, pp. 1134-1136 (3 pages)
In the analysis of correlated data, it is ideal to capture the true dependence structure to increase the efficiency of the estimation. However, for multivariate survival data, this is extremely …
12. Discussion on "Gaining Efficiency via Weighted Estimators for Multivariate Failure Time Data" by Fan, Zhou and Chen
Authors: KUK Anthony. Science China Mathematics, SCIE, 2009, No. 6, pp. 1129-1130 (2 pages)
The survival analysis literature has always lagged behind the categorical data literature in developing methods to analyze clustered or multivariate data. While estimators based on …
13. Rejoinder for "Gaining efficiency via weighted estimators for multivariate failure time data"
Authors: FAN JianQing, ZHOU Yong, CAI JianWen, CHEN Min. Science China Mathematics, SCIE, 2009, No. 6, pp. 1137-1138 (2 pages)
We thank all the discussants for their interesting and stimulating contributions. They have touched on various aspects that were not considered in the original article.
14. A simple data assimilation method for improving the MODIS LAI time-series data products based on the object analysis and gradient inverse weighted filter
Authors: HE Binbin (何彬彬). Chinese Optics Letters, SCIE EI CAS CSCD, 2007, No. 6, pp. 367-369 (3 pages)
A simple data assimilation method for improving the estimation of moderate resolution imaging spectroradiometer (MODIS) leaf area index (LAI) time-series data products, based on the gradient inverse weighted filter and object analysis, is proposed. The properties and quality control (QC) of MODIS LAI data products are introduced, and the gradient inverse weighted filter and object analysis are analyzed. An experiment based on the proposed method is performed using MODIS LAI data sets from 2000 to 2005 for Guizhou Province, China.
Keywords: MODIS, LAI time series, data assimilation, object analysis, gradient inverse weighted filter
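A 1-D sketch of the gradient inverse weighted filter named in the title, applied to a noisy time series. The half-weight-for-the-centre scheme and all parameters below are one common formulation assumed for illustration, not taken from the paper, which combines the filter with object analysis and MODIS QC flags:

```python
import numpy as np

def giw_filter(x, radius=2, eps=1e-6):
    """Gradient inverse weighted smoothing of a 1-D series: the centre sample
    keeps weight 0.5 and its neighbours share the other 0.5 in proportion to
    1/(|x[j] - x[i]| + eps), so large jumps (edges) are preserved."""
    x = np.asarray(x, dtype=float)
    out = x.copy()
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        idx = [j for j in range(lo, hi) if j != i]
        nbr = x[idx]
        w = 1.0 / (np.abs(nbr - x[i]) + eps)
        w = 0.5 * w / w.sum()                    # neighbours share half the mass
        out[i] = 0.5 * x[i] + np.sum(w * nbr)    # centre keeps the other half
    return out

rng = np.random.default_rng(0)
t = np.arange(300)
clean = np.sin(2 * np.pi * t / 150.0)            # slowly varying toy "LAI" curve
noisy = clean + 0.3 * rng.normal(size=t.size)
smoothed = giw_filter(noisy)
```

Because neighbours far from the centre value get small weights, the filter smooths noise on the slowly varying curve while blurring genuine jumps less than a plain moving average would.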
15. Improved low-distortion sigma-delta ADC with DWA for WLAN standards
Authors: 李迪, 杨银堂, 朱樟明, 石立春, 吴笑峰, 王江安. Journal of Semiconductors, EI CAS CSCD, PKU Core, 2010, No. 2, pp. 82-87 (6 pages)
An improved low-distortion sigma-delta ADC (analog-to-digital converter) for wireless local area network standards is presented. A feed-forward MASH 2-2 multi-bit cascaded sigma-delta ADC is adopted; this work achieves much better performance than previously reported ADCs by adding a feedback factor in the second stage to improve the in-band SNDR (signal-to-noise-and-distortion ratio) and by using 4-bit quantizers in both stages to minimize the quantization noise. Data weighted averaging (DWA) is used to decrease the mismatch noise induced by the 4-bit DACs, which improves the SFDR (spurious-free dynamic range) of the ADC. The modulator has been implemented in a 0.18 μm CMOS process and operates from a single 1.8 V supply. Experimental results show that for a 1.25 MHz, -6 dBFS input signal at a 160 MHz sampling frequency, the improved ADC, with all non-idealities considered, achieves a peak SNDR of 80.9 dB and an SFDR of 87 dB; the effective number of bits is 13.15.
Keywords: WLAN, low distortion, sigma-delta ADC, data weighted averaging
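Data weighted averaging, as used in the abstract, rotates through the DAC's unit elements so every element is used equally often over time, converting static element mismatch into first-order shaped noise. A behavioral sketch of the element-selection logic (the input codes and element count below are invented):

```python
def dwa_select(codes, n_elements):
    """Data Weighted Averaging: for each input code, select that many DAC
    unit elements starting at a pointer that advances by the code each cycle,
    wrapping modulo the element count. Returns the selected indices per cycle."""
    ptr = 0
    selections = []
    for code in codes:
        sel = [(ptr + k) % n_elements for k in range(code)]
        selections.append(sel)
        ptr = (ptr + code) % n_elements            # rotate past the used elements
    return selections

sel = dwa_select([3, 5, 2, 7], 8)                  # 4 cycles of a 3-bit (8-element) DAC
```

Over the four cycles, every one of the 8 elements is used either 2 or 3 times, which is the equal-usage property that first-order shapes the mismatch error.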