As a combination of edge computing and artificial intelligence, edge intelligence has become a promising technique that provides its users with fast, precise, and customized services. In edge intelligence, when learning agents are deployed on the edge side, data aggregation from the end side to the designated edge devices is an important research topic. Considering the varying importance of end devices, this paper studies the weighted data aggregation problem in a single-hop end-to-edge communication network. First, to ensure that end devices with different weights are treated fairly in data aggregation, a distributed end-to-edge cooperative scheme is proposed. Then, to handle the massive contention on the wireless channel caused by end devices, a multi-armed bandit (MAB) algorithm is designed to help the end devices find their most appropriate update rates. Different from traditional data aggregation works, incorporating the MAB gives our algorithm higher efficiency in data aggregation. Through a theoretical analysis, we show that the efficiency of our algorithm is asymptotically optimal. Comparative experiments with previous works are also conducted to demonstrate the strength of our algorithm.
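The abstract does not specify which MAB policy the end devices run; as a hedged illustration of how a device could learn its update rate, the sketch below uses the standard UCB1 policy over a small set of hypothetical candidate rates, with a made-up Bernoulli reward standing in for a successful aggregation update.

```python
import math
import random

def ucb1(reward_fn, n_arms, horizon):
    """UCB1: try every arm once, then always pull the arm maximizing
    empirical mean + sqrt(2 ln t / pulls)."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                                   # initial exploration
        else:
            arm = max(range(n_arms),
                      key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2.0 * math.log(t) / counts[a]))
        counts[arm] += 1
        sums[arm] += reward_fn(arm)
    return counts

random.seed(0)
# hypothetical candidate update rates; reward = 1 if the update succeeded
success_prob = [0.2, 0.5, 0.8, 0.4]       # made-up per-rate success chances
counts = ucb1(lambda a: 1.0 if random.random() < success_prob[a] else 0.0,
              n_arms=4, horizon=2000)
```

After 2000 rounds the best rate (index 2) accounts for most pulls; a real deployment would derive the reward from observed collision or latency feedback rather than a fixed probability.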
At present, one of the methods used to determine the height of points on the Earth's surface is Global Navigation Satellite System (GNSS) leveling. The orthometric or normal height can be determined by this method only if a geoid or quasi-geoid height model is available. This paper proposes a methodology for local correction of the heights of high-order global geoid models such as EGM08, EIGEN-6C4, GECO, and XGM2019e_2159. The methodology was tested in different areas of the research field, covering various relief forms. The dependence of the corrected height accuracy on the input data was analyzed, and the correction was also conducted for model heights in three tidal systems: "tide free", "mean tide", and "zero tide". The results show that the heights of the EIGEN-6C4 model can be corrected with an accuracy of up to 1 cm for flat and foothill terrains over areas of 1°×1°, 2°×2°, and 3°×3°. The EGM08 model gives an almost identical result. The EIGEN-6C4 model is best suited for mountainous relief, providing an accuracy of 1.5 cm on the 1°×1° area. The height correction accuracy of the GECO and XGM2019e_2159 models is somewhat poorer, showing noticeable numerical fluctuation.
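The paper's correction methodology is not detailed in this abstract; the sketch below shows only the simplest conceivable variant, a zero-order (constant-offset) local correction estimated from hypothetical GNSS/leveling control points, to make the idea of locally correcting global-model heights concrete.

```python
def correct_model_heights(control_heights, model_at_control, model_grid):
    """Zero-order local correction: shift model geoid heights by the mean
    residual observed at GNSS/leveling control points (a simplification;
    real corrector surfaces are usually higher-order)."""
    residuals = [obs - mod
                 for obs, mod in zip(control_heights, model_at_control)]
    offset = sum(residuals) / len(residuals)
    return [h + offset for h in model_grid], offset

# hypothetical control data: observed vs. model heights at three benchmarks
corrected, offset = correct_model_heights(
    [10.1, 10.2, 10.3],      # GNSS/leveling-derived heights (made up)
    [10.0, 10.1, 10.2],      # model heights at the same points (made up)
    [5.0, 6.0])              # model heights to correct elsewhere
```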
Because differences in sensor precision and various random factors are difficult to control, the actual measured signals deviate from the target signals, which affects the reliability and precision of rotating machinery fault diagnosis. Traditional signal processing methods, such as classical inference and the weighted averaging algorithm, usually lack dynamic adaptability, making it easy for faults to be misjudged or missed. To enhance the accuracy and precision of vibration signals in multi-sensor fault diagnosis of rotating machinery, a novel data-level fusion approach is presented based on correlation function analysis, which quickly determines the weights of the multi-sensor vibration signals. The approach does not require prior information about the sensors; the sensor weights are determined from the correlation measures of the real-time data tested during the data-level fusion process. A sensor signal with a larger correlation measure receives a larger weight, and vice versa. The approach can effectively suppress large errors and can even fuse data in the case of sensor failures, because it takes full advantage of the sensors' own information to determine the weights. Moreover, it offers good anti-jamming performance, since the correlation measures between noise and effective signals are usually small. Through simulation of typical signals collected from multiple sensors, a comparative analysis of dynamic adaptability and fault tolerance between the proposed approach and the traditional weighted averaging approach is conducted. Finally, a rotor dynamics and integrated fault simulator is used to verify the feasibility and advantages of the proposed approach. The results show that multi-sensor data-level fusion based on correlation function weighting outperforms the traditional weighted average approach in fusion precision and dynamic adaptability. Moreover, the approach is adaptable, easy to use, and applicable to other areas of vibration measurement.
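The exact correlation measure used by the authors is not given in this abstract; the sketch below assumes a plain Pearson correlation coefficient and weights each sensor by its total correlation with the other sensors, which reproduces the stated behavior: a failed, uncorrelated sensor receives a near-zero weight.

```python
def corr(x, y):
    """Pearson correlation coefficient (0 if either signal is constant)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def fuse(signals):
    """Weight each sensor by its total correlation with the other sensors,
    then form the weighted average sample by sample."""
    m = len(signals)
    scores = [sum(abs(corr(signals[i], signals[j]))
                  for j in range(m) if j != i) for i in range(m)]
    total = sum(scores)
    w = [s / total for s in scores]
    fused = [sum(w[i] * signals[i][k] for i in range(m))
             for k in range(len(signals[0]))]
    return fused, w

# two consistent sensors plus one stuck (failed) sensor
fused, w = fuse([[1.0, 2.0, 3.0, 4.0],
                 [2.0, 4.0, 6.0, 8.0],
                 [5.0, 5.0, 5.0, 5.0]])
```

The stuck sensor gets weight ~0, so its large constant error is suppressed automatically, matching the fault-tolerance claim above.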
An ill-posed inverse problem in quantitative susceptibility mapping (QSM) is usually solved using a regularization and optimization solver, which is time consuming given the three-dimensional volume data. In clinical diagnosis, however, the susceptibility map must be reconstructed efficiently with an appropriate method. Here, a modified QSM reconstruction method called weighted total variation using split Bregman (WTVSB) is proposed. It reconstructs the susceptibility map with fast computational speed and effective artifact suppression by incorporating noise-suppressed data weighting into the split Bregman iteration. The noise-suppressed data weighting is determined using the Laplacian of the calculated local field, which prevents noise and errors in the field maps from spreading into the susceptibility inversion. The split Bregman iteration accelerates the solution of the L1-regularized reconstruction model by utilizing a preconditioned conjugate gradient solver. In experiments, the proposed method is compared with truncated k-space division (TKD), morphology enabled dipole inversion (MEDI), and total variation using split Bregman (TVSB) on numerical simulation, phantom, and in vivo human brain data, evaluated by root mean square error and mean structural similarity. Experimental results demonstrate that the proposed method achieves a better balance between accuracy and efficiency of QSM reconstruction than conventional methods, thus facilitating clinical applications of QSM.
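WTVSB operates on 3-D volumes with noise-suppressed weights and a preconditioned conjugate gradient solver; as a minimal, unweighted illustration of the split Bregman machinery it builds on, the sketch below denoises a 1-D signal under a total-variation penalty, substituting Gauss-Seidel sweeps for the CG solve.

```python
import random

def tv_denoise_sb(f, lam=0.1, mu=2.0, iters=50):
    """1-D total variation denoising, min_u 0.5||u-f||^2 + lam*||Du||_1,
    via split Bregman with auxiliary variable d ~ Du and Bregman variable b."""
    n = len(f)
    u, d, b = list(f), [0.0] * (n - 1), [0.0] * (n - 1)

    def shrink(x, t):                      # soft-thresholding operator
        return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

    for _ in range(iters):
        # u-step: Gauss-Seidel sweeps on (I + mu D^T D) u = f + mu D^T (d - b)
        rhs = list(f)
        for i in range(n - 1):
            rhs[i] -= mu * (d[i] - b[i])
            rhs[i + 1] += mu * (d[i] - b[i])
        for _ in range(10):
            for i in range(n):
                diag = 1.0 + mu * ((i > 0) + (i < n - 1))
                off = mu * ((u[i - 1] if i > 0 else 0.0)
                            + (u[i + 1] if i < n - 1 else 0.0))
                u[i] = (rhs[i] + off) / diag
        # d-step (shrinkage) and Bregman update
        for i in range(n - 1):
            du = u[i + 1] - u[i]
            d[i] = shrink(du + b[i], lam / mu)
            b[i] += du - d[i]
    return u

random.seed(1)
clean = [0.0] * 10 + [1.0] * 10          # piecewise-constant test signal
noisy = [c + random.uniform(-0.2, 0.2) for c in clean]
denoised = tv_denoise_sb(noisy)
```

The denoised signal has much lower total variation than the noisy input while the step edge survives, which is the qualitative behavior the TV penalty is chosen for.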
For slowly changing, range-dependent non-homogeneous environments, a new statistical space-time adaptive processing algorithm is proposed, which uses statistical methods such as the Bayes or likelihood criterion to estimate the approximate covariance matrix under non-homogeneous conditions. According to the statistical characteristics of the space-time snapshot data, by defining the aggregate snapshot data and the corresponding events, the conditional probability that a space-time snapshot is effective training data is derived, from which the weighting coefficients for the weighting method are obtained. Theoretical analysis indicates that the Bayes and likelihood criteria for covariance matrix estimation are more reasonable than methods that estimate the covariance matrix from training data with only the detected outliers excluded. Final simulations confirm that the proposed algorithms estimate the covariance accurately under non-homogeneous conditions and have favorable characteristics.
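The weighting coefficients in the paper come from conditional probabilities of the snapshot data; taking such weights as given, a weighted covariance estimate is simply a weighted sum of snapshot outer products, sketched here for real-valued snapshots (STAP snapshots are complex in practice).

```python
def weighted_covariance(snapshots, weights):
    """R = (sum_k w_k x_k x_k^T) / (sum_k w_k): weighted sample covariance,
    where snapshots judged more likely to be clean training data get
    larger weights."""
    n = len(snapshots[0])
    total = sum(weights)
    R = [[0.0] * n for _ in range(n)]
    for w, x in zip(weights, snapshots):
        for i in range(n):
            for j in range(n):
                R[i][j] += w * x[i] * x[j] / total
    return R

# two toy snapshots; the first is trusted 3x more than the second
R = weighted_covariance([[1.0, 0.0], [0.0, 1.0]], weights=[3.0, 1.0])
```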
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm that establishes the relationship between weight coefficients and measurement noise is proposed, taking the correlation of the measurement noise into account. A simplified weighted fusion algorithm is then deduced under the assumption that the measurement noise is uncorrelated. In addition, an algorithm that adjusts the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements themselves is presented. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of multi-sensor systems based on other algorithms.
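For the simplified uncorrelated-noise case mentioned in the abstract, the weighted least squares solution reduces to classical inverse-variance weighting, which can be sketched as:

```python
def fuse_uncorrelated(measurements, variances):
    """Weighted least squares fusion for uncorrelated measurement noise:
    weight w_i is proportional to 1/sigma_i^2, and the fused variance is
    1 / sum(1/sigma_i^2), never worse than the best single sensor."""
    inv = [1.0 / v for v in variances]
    s = sum(inv)
    estimate = sum(w * m for w, m in zip(inv, measurements)) / s
    return estimate, 1.0 / s

# two sensors with equal noise: the fusion is the plain average
est, var = fuse_uncorrelated([10.0, 10.4], [1.0, 1.0])
```

With unequal variances the noisier sensor is down-weighted; e.g. variances `[1.0, 3.0]` give weights 0.75 and 0.25.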
Data envelopment analysis (DEA) is a mathematical programming approach for appraising the relative efficiencies of peer decision-making units (DMUs), and it is widely used for ranking DMUs. However, almost all DEA-related ranking approaches are based on self-evaluation efficiencies: each DMU chooses the weights it prefers most, so the resulting efficiencies are not suitable as ranking criteria. This paper therefore proposes a new approach to determine a bundle of common weights in the DEA efficiency evaluation model by introducing a multi-objective integer program. The paper also gives the solving process of this multi-objective integer program, and the solution is proven to be Pareto efficient. The solving process ensures that the obtained common weight bundle is acceptable to a large number of DMUs. Finally, a numerical example is given to demonstrate the approach.
The Gravity Recovery and Climate Experiment (GRACE) mission can significantly improve our knowledge of the temporal variability of the Earth's gravity field. We obtained monthly gravity field solutions, based on the variational equations approach, from GPS-derived positions of the GRACE satellites and K-band range-rate measurements. The impact of different fixed data-weighting ratios on temporal gravity field recovery when combining the two types of data was investigated in order to derive the best combined solution. The monthly gravity field solutions obtained through this procedure are named the Institute of Geodesy and Geophysics (IGG) temporal gravity field models. The IGG models were compared with GRACE Release 05 (RL05) products in the following aspects: (i) the trend of the mass anomaly in China and nearby regions within 2005-2010; (ii) the root mean squares of the global mass anomaly during 2005-2010; (iii) time-series changes in the mean water storage in the Amazon Basin and the Sahara Desert between 2005 and 2010. The results showed that the IGG solutions were consistent with the GRACE RL05 products in aspects (i)-(iii). The annual amplitude of the mean water storage change in the Amazon Basin was 14.7 ± 1.2 cm for IGG, 17.1 ± 1.3 cm for the Centre for Space Research (CSR), 16.4 ± 0.9 cm for the GeoForschungsZentrum (GFZ), and 16.9 ± 1.2 cm for the Jet Propulsion Laboratory (JPL), in terms of equivalent water height (EWH). The root mean squares of the mean mass anomaly in the Sahara were 1.2 cm, 0.9 cm, 0.9 cm, and 1.2 cm for the IGG, CSR, GFZ, and JPL temporal gravity field models, respectively. The comparison suggests that the IGG temporal gravity field solutions are at the same accuracy level as the latest temporal gravity field solutions published by CSR, GFZ, and JPL.
Consider the regression model Y = Xβ + g(T) + e, where g is an unknown smooth function on [0, 1], β is an l-dimensional parameter to be estimated, and e is an unobserved error. When the data are randomly censored, estimators βn* and gn* for β and g are obtained using class K and least squares methods. It is shown that βn* is asymptotically normal and gn* achieves the convergence rate O(n^(-1/3)).
The Extreme Learning Machine (ELM) and its variants are effective in many machine learning applications such as Imbalanced Learning (IL) or Big Data (BD) learning. However, they are unable to solve both imbalanced and large-volume data learning problems simultaneously. This study addresses the IL problem in BD applications. The Distributed and Weighted ELM (DW-ELM) algorithm, based on the MapReduce framework, is proposed. To confirm the feasibility of parallel computation, it is first shown that the matrix multiplication operators are decomposable. Then, to further improve computational efficiency, an improved DW-ELM algorithm (IDW-ELM) is developed using only one MapReduce job. The proposed DW-ELM and IDW-ELM algorithms are finally validated through experiments.
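The decomposability that makes DW-ELM MapReduce-friendly can be checked numerically: for a weighted Gram matrix of the form HᵀWH as it appears in weighted ELM (the names below are generic, not the paper's notation), summing per-partition partial products equals the full product, so each "map" task can work on its own rows.

```python
def htwh(H, w):
    """H^T W H for diagonal W = diag(w), i.e. sum over rows k of w_k h_k h_k^T."""
    L = len(H[0])
    out = [[0.0] * L for _ in range(L)]
    for wk, row in zip(w, H):
        for i in range(L):
            for j in range(L):
                out[i][j] += wk * row[i] * row[j]
    return out

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

H = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [7.0, 8.0]]
w = [1.0, 0.5, 2.0, 1.0]
full = htwh(H, w)
# "map": partial products on two row partitions; "reduce": sum the partials
combined = mat_add(htwh(H[:2], w[:2]), htwh(H[2:], w[2:]))
```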
In the analysis of correlated data, it is ideal to capture the true dependence structure to increase efficiency of the estimation. However, for multivariate survival data, this is extremely
The survival analysis literature has always lagged behind the categorical data literature in developing methods to analyze clustered or multivariate data. While estimators based on
We thank all the discussants for their interesting and stimulating contributions. They have touched various aspects that have not been considered by the original articles.
A simple data assimilation method for improving the estimation of moderate resolution imaging spectroradiometer (MODIS) leaf area index (LAI) time-series data products, based on the gradient inverse weighted filter and object analysis, is proposed. The properties and quality control (QC) of MODIS LAI data products are introduced, and the gradient inverse weighted filter and object analysis are analyzed. An experiment based on the method is performed using MODIS LAI data sets of Guizhou Province, China, from 2000 to 2005.
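A gradient inverse weighted filter assigns each neighbor a weight inversely proportional to its difference from the center sample, so smooth regions are averaged while outliers are pulled toward their surroundings. A minimal 1-D sketch follows; the paper applies the filter to LAI time series together with QC flags, which are omitted here.

```python
def giw_filter(x, eps=1e-6):
    """1-D gradient inverse weighted smoothing: each neighbour's weight is
    the inverse of its absolute difference from the centre sample
    (eps avoids division by zero for identical samples)."""
    out = list(x)
    for i in range(1, len(x) - 1):
        w_left = 1.0 / (abs(x[i - 1] - x[i]) + eps)
        w_right = 1.0 / (abs(x[i + 1] - x[i]) + eps)
        nb = (w_left * x[i - 1] + w_right * x[i + 1]) / (w_left + w_right)
        out[i] = 0.5 * x[i] + 0.5 * nb   # blend centre with weighted neighbours
    return out

# a lone spike (e.g. a cloud-contaminated LAI sample) at index 2
smoothed = giw_filter([1.0, 1.0, 5.0, 1.0, 1.0])
```

The spike is damped while the flat samples next to it stay essentially unchanged, because their nearly identical left neighbors dominate the weighting.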
An improved low-distortion sigma-delta ADC (analog-to-digital converter) for wireless local area network standards is presented. A feed-forward MASH 2-2 multi-bit cascaded sigma-delta ADC is adopted; this work achieves much better performance than previously reported ADCs by adding a feedback factor in the second stage to improve the in-band SNDR (signal-to-noise and distortion ratio) and by using 4-bit ADCs in both stages to minimize quantization noise. Data weighted averaging (DWA) is used to decrease the mismatch noise induced by the 4-bit DACs, which improves the SFDR (spurious-free dynamic range) of the ADC. The modulator is implemented in a 0.18 μm CMOS process and operates from a single 1.8 V supply. Experimental results show that for a 1.25 MHz, -6 dBFS input signal at a 160 MHz sampling frequency, the improved ADC with all non-idealities considered achieves a peak SNDR of 80.9 dB and an SFDR of 87 dB, and the effective number of bits is 13.15.
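Data weighted averaging rotates which unit elements of the DAC are selected so that static mismatch errors are used equally often and are first-order noise shaped. A behavioral sketch of the element-rotation pointer, independent of any particular circuit, follows.

```python
def dwa_select(codes, n_elements):
    """Data weighted averaging: a pointer walks cyclically over the DAC's
    unit elements; each conversion enables the next `code` elements, so
    over time every element is used equally often."""
    ptr, selections = 0, []
    for c in codes:                  # c = number of unit elements to enable
        selections.append([(ptr + k) % n_elements for k in range(c)])
        ptr = (ptr + c) % n_elements
    return selections

# three successive 4-bit codes on an 8-element DAC
sel = dwa_select([3, 2, 4], n_elements=8)
```

Note how the third conversion wraps around to element 0, which is what spreads each element's mismatch error evenly across conversions.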
Funding: supported by the National Natural Science Foundation of China (NSFC) (62102232, 62122042, 61971269) and the Natural Science Foundation of Shandong Province (ZR2021QF064).
Funding: the International Center for Global Earth Models (ICGEM) provided the height anomaly and gravity anomaly data, and the Bureau Gravimetrique International (BGI) provided free-air gravity anomaly data from the World Gravity Map project (WGM2012). The authors are grateful to Główny Urząd Geodezji i Kartografii of Poland for the height anomaly data of the quasi-geoid PL-geoid2021.
Funding: supported by the National Hi-tech Research and Development Program of China (863 Program, Grant No. 2007AA04Z433), the Hunan Provincial Natural Science Foundation of China (Grant No. 09JJ8005), and the Scientific Research Foundation of the Graduate School of Beijing University of Chemical Technology, China (Grant No. 10Me002).
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 11474236, 81671674, and 11775184) and the Science and Technology Project of Fujian Province, China (Grant No. 2016Y0078).
Funding: supported by the National Postdoctoral Foundation (No. 20090451251) and the Shaanxi Industry Surmount Foundation (2009K08-31) of China.
Funding: supported by the National Natural Science Foundation of China for Innovative Research Groups (70821001) and the National Natural Science Foundation of China (70801056).
Funding: funded by the Major National Scientific Research Plan (2013CB733305, 2012CB957703) and the National Natural Science Foundation of China (41174066, 41131067, 41374087, 41431070).
Funding: partially supported by the National Natural Science Foundation of China (Nos. 61402089, 61472069, and 61501101), the Fundamental Research Funds for the Central Universities (Nos. N161904001, N161602003, and N150408001), the Natural Science Foundation of Liaoning Province (No. 2015020553), the China Postdoctoral Science Foundation (No. 2016M591447), and the Postdoctoral Science Foundation of Northeastern University (No. 20160203).
Funding: this work was supported by the China Postdoctoral Science Foundation (No. 20060390326) and the Key International S&T Cooperation Project of China (No. 2004DFA06300).
Funding: supported by the National Natural Science Foundation of China (Nos. 60725415, 60971066) and the National High-Tech Programs of China (Nos. 2009AA01Z258, 2009AA01Z260).