The Lunar Environment heliospheric X-ray Imager (LEXI) and Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) missions will image the Earth's dayside magnetopause and cusps in soft X-rays after their respective launches in the near future, to specify global magnetic reconnection modes for varying solar wind conditions. To support the success of these scientific missions, it is critical to develop techniques that extract the magnetopause locations from the observed soft X-ray images. In this research, we introduce a new geometric equation that calculates the subsolar magnetopause position (RS) from a satellite position, the look direction of the instrument, and the angle at which the X-ray emission is maximized. Two assumptions are used in this method: (1) the look direction where soft X-ray emissions are maximized lies tangent to the magnetopause, and (2) the magnetopause surface near the subsolar point is almost spherical, and thus RS is nearly equal to the radius of the magnetopause curvature. We create synthetic soft X-ray images by using the Open Geospace General Circulation Model (OpenGGCM) global magnetohydrodynamic model, the galactic background, the instrument point spread function, and Poisson noise. We then apply fast Fourier transform and Gaussian low-pass filters to the synthetic images to remove noise and obtain accurate look angles for the soft X-ray peaks. From the filtered images, we calculate RS and its accuracy for different LEXI locations, look directions, and solar wind densities by using the OpenGGCM subsolar magnetopause location as ground truth. Our method estimates RS with an accuracy of <0.3 RE when the solar wind density exceeds 10 cm⁻³. The accuracy improves for greater solar wind densities and during southward interplanetary magnetic fields. The method captures the magnetopause motion during southward interplanetary magnetic field turnings. Consequently, the technique will enable quantitative analysis of the magnetopause motion and help reveal the dayside reconnection modes for dynamic solar wind conditions. This technique will support the LEXI and SMILE missions in achieving their scientific objectives.
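The filtering step described above can be sketched in one dimension: a noisy emission profile is transformed with the FFT, multiplied by a Gaussian low-pass window, and inverse-transformed before the peak look angle is read off. This is a minimal illustration, not the authors' OpenGGCM pipeline; the profile shape, count rate, and filter width are all assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
angles = np.linspace(0.0, 90.0, 901)              # look angle (degrees)
true_peak = 30.0                                  # assumed peak-emission angle
profile = np.exp(-0.5 * ((angles - true_peak) / 5.0) ** 2)
noisy = rng.poisson(50.0 * profile) / 50.0        # Poisson counting noise

# Gaussian low-pass filter applied in the Fourier domain.
freqs = np.fft.rfftfreq(angles.size, d=angles[1] - angles[0])
window = np.exp(-0.5 * (freqs / 0.05) ** 2)       # cutoff scale assumed
filtered = np.fft.irfft(np.fft.rfft(noisy) * window, n=angles.size)

peak_angle = float(angles[np.argmax(filtered)])
```

Without the low-pass step, a single noisy bin can win the argmax; after filtering, the recovered peak angle lands within a fraction of the profile width of the true value.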
Two types of filters are widely used to remove semidiurnal and diurnal tidal signals and other high-frequency noise in oceanography. The first type uses a moving average with weights in the time domain and can be easily operated; some data are lost at each end of the time series, especially for low-pass filters with low cutoff frequencies. The second type uses the discrete Fourier transform filter (DFTF), which operates in the frequency domain, and no data are lost at the ends in the forward transform. However, owing to the Gibbs phenomenon and discrete sampling (the Nyquist effect), ringing appears in the inverse-transformed data, which is especially serious at each end; thus some data at the ends are also discarded. The present study tries to find out what causes the ringing and then seeks methods to overcome it. We have found that there are two kinds of ringing: one is the Gibbs phenomenon, as defined before; the other is the 'Nyquist' ringing due to sampling at the Nyquist critical frequency. The former is due to the abrupt transition in the frequency band. The Gibbs and Nyquist effects show the ringing at each end of the filtered time series. Thus, the use of a cosine taper or a linear taper on the window in the frequency domain makes the transition band smooth, so that the Gibbs phenomenon is minimized. Before applying the fast Fourier transform (FFT), the original time series is properly tapered at each end by a split cosine bell, which reduces significant ringing since this method limits the energy transfer from outside the Nyquist frequency. Thus, the DFTF can be a powerful tool to suppress the signals in which we are not interested, with sharp peaks in low-frequency variation and less data loss at each end.
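The tapering-plus-DFTF procedure above can be sketched as follows: a split cosine bell tapers the ends of a synthetic series containing a slow oscillation and a semidiurnal tide, and a cosine-tapered transition band (rather than an abrupt cutoff) suppresses the tidal band with little Gibbs ringing in the interior. The taper fraction, cutoff, and transition width are illustrative choices, not values from the paper.

```python
import numpy as np

def split_cosine_bell(n, frac=0.1):
    """Taper: cosine ramps over the first and last `frac` of the record."""
    w = np.ones(n)
    m = int(frac * n)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(m) / m))
    w[:m] = ramp
    w[-m:] = ramp[::-1]
    return w

t = np.arange(0.0, 30.0, 1.0 / 24.0)              # 30 days of hourly samples
low = np.sin(2.0 * np.pi * t / 15.0)              # slow signal (15-day period)
tide = 0.5 * np.sin(2.0 * np.pi * t / 0.5175)     # semidiurnal tide (~12.42 h)
series = low + tide

tapered = series * split_cosine_bell(series.size)
freqs = np.fft.rfftfreq(series.size, d=1.0 / 24.0)   # cycles per day

# Cosine-tapered transition band instead of an abrupt cutoff (reduces Gibbs ringing).
cutoff, width = 0.5, 0.4                          # transition spans 0.3-0.7 cpd
H = np.clip((cutoff + width / 2.0 - freqs) / width, 0.0, 1.0)
H = 0.5 * (1.0 - np.cos(np.pi * H))
filtered = np.fft.irfft(np.fft.rfft(tapered) * H, n=series.size)

interior = slice(series.size // 3, 2 * series.size // 3)
max_err = float(np.max(np.abs(filtered[interior] - low[interior])))
```

In the interior of the record the filtered series closely follows the slow signal; the tapered ends still need to be discarded, but far less data are lost than with an abrupt cutoff.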
Feature matching plays a key role in computer vision. However, due to the limitations of descriptors, putative matches are inevitably contaminated by massive outliers. This paper tackles the outlier filtering problem from two aspects. First, a robust and efficient graph interaction model is proposed, with the assumption that matches are correlated with each other rather than independently distributed. To this end, we construct a graph based on the local relationships of matches and formulate the outlier filtering task as a binary labeling energy minimization problem, where the pairwise term encodes the interaction between matches. We further show that this formulation can be solved globally by the graph cut algorithm. Our new formulation consistently improves the performance of the previous locality-based method without noticeable deterioration in processing time, adding only a few milliseconds. Second, to construct a better graph structure, a robust and geometrically meaningful topology-aware relationship is developed to capture the topological relationship between matches. The two components together lead to topology interaction matching (TIM), an effective and efficient method for outlier filtering. Extensive experiments on several large and diverse datasets for multiple vision tasks, including general feature matching as well as relative pose estimation, homography and fundamental matrix estimation, loop-closure detection, and multi-modal image matching, demonstrate that TIM is more competitive than current state-of-the-art methods in terms of generality, efficiency, and effectiveness. The source code is publicly available at http://github.com/YifanLu2000/TIM.
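The binary-labeling energy can be illustrated at toy scale. The sketch below enumerates all labelings of four hypothetical matches to minimize a unary-plus-pairwise energy; the paper instead minimizes this form globally with a graph cut, and all costs here are invented for illustration.

```python
from itertools import product

# Toy outlier filtering over 4 putative matches (0 = outlier, 1 = inlier).
# unary_inlier[i]: cost of calling match i an inlier (low = geometrically good).
unary_inlier = [0.2, 0.3, 0.9, 0.1]      # hypothetical costs
unary_outlier = [0.8, 0.7, 0.1, 0.9]
edges = [(0, 1), (1, 3), (2, 3)]          # local-neighborhood interaction graph
lam = 0.4                                 # interaction strength

def energy(labels):
    # Unary term: per-match evidence; pairwise term: neighbors should agree.
    e = sum(unary_inlier[i] if l else unary_outlier[i]
            for i, l in enumerate(labels))
    e += lam * sum(labels[i] != labels[j] for i, j in edges)
    return e

best = min(product([0, 1], repeat=4), key=energy)
```

Brute force is exponential in the number of matches; because this Potts-style pairwise term is submodular, a graph-cut solver finds the same global minimum in polynomial time for thousands of matches.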
In existing landslide susceptibility prediction (LSP) models, the influences of random errors in landslide conditioning factors on LSP are not considered; instead, the original conditioning factors are directly taken as the model inputs, which brings uncertainties to the LSP results. This study aims to reveal how different proportions of random error in the conditioning factors influence LSP uncertainties, and further to explore a method that can effectively reduce the random errors in the conditioning factors. The original conditioning factors are first used to construct original-factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Secondly, low-pass-filter-based LSP models are constructed by eliminating the random errors with the low-pass filter method. Thirdly, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the case study. Three typical machine learning models, i.e., multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease the LSP uncertainties; (2) as the proportion of random errors increases from 5% to 20%, the LSP uncertainty increases continuously; (3) the original-factors-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influence degrees of the two uncertainty sources, machine learning models and different proportions of random errors, on LSP modeling are large and essentially the same; and (5) the Shapley values effectively explain the internal mechanism of the machine learning models in predicting landslide susceptibility. In conclusion, a greater proportion of random errors in the conditioning factors results in higher LSP uncertainty, and the low-pass filter can effectively reduce these random errors.
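The error-injection and filtering idea can be sketched on a single synthetic conditioning factor: a proportional random error is added and then attenuated with a simple FFT low-pass filter. The factor profile, the 10% error level, and the cutoff bin are assumptions for the example, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 10.0, 512)
factor = 100.0 + 20.0 * np.sin(0.6 * x)       # smooth synthetic conditioning factor
noisy = factor * (1.0 + 0.10 * rng.standard_normal(x.size))   # 10% random error

# FFT low-pass: keep only the lowest-frequency bins (cutoff assumed).
spec = np.fft.rfft(noisy)
spec[20:] = 0.0
denoised = np.fft.irfft(spec, n=x.size)

err_before = np.sqrt(np.mean((noisy - factor) ** 2))
err_after = np.sqrt(np.mean((denoised - factor) ** 2))
```

Because the underlying factor varies slowly while the injected error is broadband, discarding the high-frequency bins removes most of the error power while preserving the factor, which is the premise behind the low-pass-filter-based LSP models.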
To solve the problem of many valid triples missing from knowledge graphs (KGs), a novel model based on a convolutional neural network (CNN), called ConvKG, is proposed, which employs a joint learning strategy for knowledge graph completion (KGC). Related research has shown the superiority of CNNs in extracting semantic features of triple embeddings. However, these studies use only one single-shaped filter and fail to extract semantic features of different granularity. To solve this problem, ConvKG exploits multi-shaped filters to co-convolve on the triple embeddings, jointly learning semantic features of different granularity. Filters of different shapes cover different sizes of the triple embeddings and capture pairwise interactions of different granularity among triple elements. Experimental results confirm the strength of joint learning: compared with state-of-the-art CNN-based KGC models, ConvKG achieves better mean rank (MR) and Hits@10 metrics on the WN18RR dataset, and better MR on the FB15k-237 dataset.
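The multi-shaped convolution idea can be sketched with plain numpy: the head, relation, and tail embeddings are stacked into a 3 × d matrix and convolved with filters of several shapes, and the resulting feature maps are concatenated. The embedding size and filter shapes are hypothetical; a real KGC model would learn the filters and feed the joint vector to a scoring layer.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 8
triple = rng.standard_normal((3, d))   # stacked (head, relation, tail) embeddings

def conv2d_valid(x, k):
    """Plain 'valid'-mode 2-D cross-correlation."""
    kh, kw = k.shape
    out = np.empty((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(x[r:r + kh, c:c + kw] * k)
    return out

# Multi-shaped filters: 1x3 sees one embedding row at a time, 2x3 sees pairs,
# and 3x3 spans all three triple elements at once.
shapes = [(1, 3), (2, 3), (3, 3)]      # hypothetical filter shapes
features = [conv2d_valid(triple, rng.standard_normal(s)).ravel() for s in shapes]
joint = np.concatenate(features)       # joint multi-granularity feature vector
```

The taller filters capture pairwise and triple-wide interactions that a single 1-row filter cannot, which is the granularity argument made in the abstract.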
A novel defected ground structure (DGS) for the microstrip line is proposed in this paper. The DGS lattice has more defect parameters, so it can provide better performance than the conventional dumbbell-shaped DGS. Selectivity is improved by 97.2%, with a sharpness factor of 24.6%. The method is applied to the design of a low-pass filter to confirm the validity of the proposed DGS.
Graph filtering is an important part of graph signal processing and a useful tool for image denoising. Existing graph filtering methods, such as adaptive weighted graph filtering (AWGF), focus on coefficient shrinkage strategies in the graph-frequency domain. However, they seldom consider image attributes in the graph-filtering procedure. Consequently, the denoising performance of graph filtering is barely comparable with that of other state-of-the-art denoising methods. To fully exploit the image attributes, we propose a guided intra-patch smoothing AWGF (AWGF-GPS) method for single-image denoising. Unlike AWGF, which employs graph topology on patches, AWGF-GPS learns the topology of superpixels by introducing the pixel-smoothing attribute of a patch. This operation forces the restored pixels to evolve smoothly in local areas, where both intra- and inter-patch relationships of the image are utilized during patch restoration. Meanwhile, a guided-patch regularizer is incorporated into AWGF-GPS. The guided patch is obtained in advance using a maximum-a-posteriori probability estimator. Because the guided patch is considered a sketch of the denoised patch, AWGF-GPS can effectively supervise patch restoration during graph filtering to increase the reliability of the denoised patch. Experiments demonstrate that the AWGF-GPS method suitably rebuilds denoised images. It outperforms most state-of-the-art single-image denoising methods and is competitive with certain deep-learning methods. In particular, it has the advantage of managing images with significant noise.
Graph filtering, which is founded on the theory of graph signal processing, has proved to be a useful tool for image denoising. Most graph filtering methods focus on learning an ideal low-pass filter to remove noise, where clean images are restored from noisy ones by retaining the image components in low graph-frequency bands. However, this low-pass filter has limited ability to separate low-frequency noise from clean images, which makes the denoising procedure less effective. To address this issue, we propose an adaptive weighted graph filtering (AWGF) method to replace the traditional ideal low-pass filter design. In detail, we reassess the existing low-rank denoising method with adaptive regularizer learning (ARLLR) from the viewpoint of graph filtering. A shrinkage approach is subsequently presented in the graph-frequency domain, where the components of the noisy image are adaptively attenuated in each band by calculating their component significances. As a result, the proposed graph filtering becomes more explainable and suitable for denoising. Meanwhile, we demonstrate that a graph filter under the constraint of subspace representation is employed in the ARLLR method; therefore, ARLLR can be treated as a special form of graph filtering. This not only enriches the theory of graph filtering but also builds a bridge from low-rank methods to graph filtering methods. In the experiments, we apply the AWGF method with a graph filter generated by the classical graph Laplacian matrix. The results show that our method achieves denoising performance comparable with several state-of-the-art denoising methods.
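The spectral-shrinkage view shared by these graph filtering methods can be sketched as follows: build a graph Laplacian, project a noisy signal onto its eigenbasis (the graph Fourier transform), damp the coefficients in high-frequency bands, and project back. A 1-D path graph and a simple 1/(1 + 4λ) weighting stand in for the learned patch graphs and adaptive weights of AWGF.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 32
# Path-graph Laplacian over a 1-D pixel strip (a stand-in for a patch graph).
A = np.zeros((n, n))
i = np.arange(n - 1)
A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

evals, U = np.linalg.eigh(L)                 # graph Fourier basis
clean = np.linspace(0.0, 1.0, n)             # smooth (low graph frequency) signal
noisy = clean + 0.1 * rng.standard_normal(n)

coef = U.T @ noisy                           # graph Fourier transform
weights = 1.0 / (1.0 + 4.0 * evals)          # shrink high-frequency bands (assumed law)
denoised = U @ (weights * coef)

err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
```

An ideal low-pass filter would set `weights` to a hard 0/1 mask; the smooth, band-dependent weighting above is the kind of adaptive shrinkage AWGF learns per band instead.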
Complementary hexagonal-omega structures are used to design a compact low-pass filter with low insertion loss (IL) and sharp cut-off. It is designed to improve roll-off performance based on both the μ-negative and ε-negative properties of the complementary hex-omega structure while maintaining the filter's pass-band performance. Properly designing and loading the hexagonal-omega structure in the ground plane of the microstrip line not only improves the roll-off of the low-pass filter but also reduces the filter size. The simulated results indicate that the proposed filter achieves a flat pass band with no ripples as well as a selectivity of 19.68 dB/GHz for 5 unit cells of hex-omega structures. This significantly exceeds the 5.6 dB/GHz selectivity of the conventional low-pass filter design, owing to the sub-wavelength dimensions of the hex-omega structure. The prototype filter occupies an area of 0.712λg × 0.263λg, λg being the guided wavelength at the 3-dB cut-off frequency (fc); the proposed filter is thus 36.2% smaller.
Determining design parameters from fine (accurate but expensive) models is a time-consuming and often iterative procedure. To decrease the number of fine-model evaluations, space mapping techniques may be employed. In this approach, it is assumed that both a fine model and a coarse (fast but inaccurate) model are available. First, the coarse model is optimized to obtain design parameters satisfying the design objectives. Next, auxiliary parameters are calibrated to match the coarse and fine models' responses. Then, the improved coarse model is re-optimized to obtain new design parameters. The design procedure stops when a satisfactory solution is reached. In this paper, an implicit space mapping method is used to design a microstrip low-pass elliptic filter. Simulation results show that only two fine-model evaluations are sufficient to obtain satisfactory results.
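The space mapping loop can be sketched on a 1-D toy problem: optimize a coarse model, evaluate the fine model near the coarse optimum (two fine evaluations in total, echoing the result above), calibrate the coarse model's preassigned parameter so the responses match, and re-optimize. Both models here are invented quadratics, not microwave filter simulations.

```python
import numpy as np

def fine(x):                     # expensive, accurate model (hypothetical)
    return (x - 2.3) ** 2 + 0.5

def coarse(x, p):                # cheap surrogate; p is the preassigned parameter
    return (x - p) ** 2 + 0.5

# 1) Optimize the coarse model with its nominal parameter.
p = 2.0
xs = np.linspace(0.0, 5.0, 5001)
x1 = xs[np.argmin(coarse(xs, p))]

# 2) Calibrate p so the coarse response matches the fine response near x1
#    (only two fine-model evaluations are spent here).
samples = np.array([x1, x1 + 0.1])
f_resp = fine(samples)
ps = np.linspace(0.0, 5.0, 5001)
mismatch = [np.sum((coarse(samples, q) - f_resp) ** 2) for q in ps]
p = ps[np.argmin(mismatch)]

# 3) Re-optimize the calibrated coarse model.
x2 = float(xs[np.argmin(coarse(xs, p))])
```

All the optimization work is done on the cheap coarse model; the fine model is touched only for calibration, which is the economy space mapping is designed for.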
High-quality speed information is one of the key issues in sensorless machine drives, which often require proper filtering of the estimated speed. This paper comparatively studies typical low-pass filters (LPFs) and phase-locked loop (PLL) type filters with respect to ramp speed-reference tracking and steady-state performance, as well as the achievement of adaptive cutoff-frequency control. An improved LPF-based filter structure with no ramping or steady-state errors caused by filter-parameter quantization effects is proposed; it is suitable for applying an LPF in sensorless drives of AC machines, especially when a fixed-point digital signal processor is selected, e.g., in mass production. Furthermore, the potential of adopting a PLL for speed filtering is explored. It is demonstrated that PLL-type filters maintain the advantages offered by the improved LPF. Moreover, the PLL-type filters exhibit an almost linear relationship between the cutoff frequency of the PLL filter and its proportional-integral (PI) gains, which eases the realization of speed filters with adaptive cutoff frequency for improving the speed transient performance. The proposed filters are verified experimentally. The PLL-type filter with adaptive cutoff frequency provides satisfactory performance under various operating conditions and is therefore recommended.
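The ramp-tracking difference between the two filter types can be sketched numerically: a first-order LPF settles to a constant lag on a ramp speed reference, while a PI-based PLL-type tracking loop (a type-2 system) drives the ramp error to nearly zero. The sample time, bandwidth, and gain choices are assumptions for the example.

```python
import numpy as np

dt, wc = 1e-3, 50.0                      # sample time and bandwidth (assumed)
t = np.arange(0.0, 1.0, dt)
speed_ref = 100.0 * t                    # ramp speed reference

# First-order low-pass filter: settles to a constant lag on a ramp input.
lpf = np.zeros(t.size)
for k in range(1, t.size):
    lpf[k] = lpf[k - 1] + wc * dt * (speed_ref[k] - lpf[k - 1])

# PLL-type filter: PI tracking loop, near-zero steady-state error on a ramp.
kp, ki = 2.0 * wc, wc ** 2               # gains tied to the loop bandwidth
pll = np.zeros(t.size)
integ = 0.0
for k in range(1, t.size):
    err = speed_ref[k] - pll[k - 1]
    integ += ki * err * dt               # integral path
    pll[k] = pll[k - 1] + (kp * err + integ) * dt

lag_lpf = speed_ref[-1] - lpf[-1]        # roughly ramp slope / wc
lag_pll = speed_ref[-1] - pll[-1]        # close to zero
```

Making `wc` (and hence `kp`, `ki`) larger during transients and smaller in steady state is the adaptive cutoff-frequency idea the paper explores.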
Funding (LEXI/SMILE magnetopause imaging): supported by NASA (Grant Nos. 80NSSC19K0844, 80NSSC20K1670, 80MSFC20C0019, and 80GSFC21M0002) and by NASA Goddard Space Flight Center internal funding programs (HIF, Internal Scientist Funding Model, and Internal Research and Development).
Funding (topology interaction matching): supported by the National Natural Science Foundation of China (62276192).
Funding (landslide susceptibility prediction): funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Funding (ConvKG): supported by the National Natural Science Foundation of China (No. 61876144).
Funding (defected ground structure): supported by the Shanghai Leading Academic Discipline Project (Grant No. T0102).
Funding (AWGF-GPS image denoising): supported by the Natural Science Foundation of Jiangsu Province, China [BK20170306] and the National Key R&D Program, China [2017YFC0306100]; the initials of the authors who received these grants are YZ and JL, respectively. Also supported by the Fundamental Research Funds for Central Universities, China [B200202217] and the Changzhou Science and Technology Program, China [CJ20200065]; the initials of the author who received these grants are YT.
Funding (AWGF image denoising): supported by the National Natural Science Foundation of China [61673108, 41706103]; the initials of the authors who received these grants are LZ and YZ, respectively. Also supported by the Natural Science Foundation of Jiangsu Province, China [BK20170306]; the initials of the author who received this grant are YZ.
Funding (sensorless-drive speed filtering): supported in part by Lodam A/S and in part by the PSO-ELFORSK Program.