Finger vein extraction and recognition hold significance in various applications due to the unique and reliable nature of finger vein patterns. While finger vein recognition has recently gained popularity, there are still challenges in extracting and processing finger vein patterns related to image quality, positioning and alignment, skin conditions, security concerns, and the processing techniques applied. In this paper, a method for robust segmentation of line patterns in strongly blurred images is presented and evaluated on vessel network extraction from infrared images of human fingers. A four-step process is involved: local normalization of brightness, image enhancement, segmentation, and cleaning. A novel image enhancement method re-establishes the line patterns from the brightness sum of the independent closed-form solutions of the adopted optimization criterion derived in small windows. The proposed method reduces the computational resources significantly compared to the solution derived when the whole image is processed. In the enhanced image, where the concave structures have been sufficiently emphasized, accurate detection of line patterns is obtained by local entropy thresholding. Typical segmentation errors appearing in the binary image are removed using morphological dilation with a line structuring element and morphological filtering with a majority filter to eliminate isolated blobs. As the experimental results show, the proposed method performs accurate detection of the vessel network in human finger infrared images, applied to both real and artificial images, and can readily be used in many image enhancement and segmentation applications.
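The cleaning step described above (dilation plus a majority filter) can be sketched in a few lines. Below is a minimal pure-NumPy majority filter; the function name and the 3x3 window size are illustrative choices, not taken from the paper:

```python
import numpy as np

def majority_filter(mask, size=3):
    """Keep a pixel only if more than half of its size-by-size
    neighborhood is set; isolated blobs in a binary mask are removed."""
    pad = size // 2
    padded = np.pad(mask.astype(np.uint8), pad)
    h, w = mask.shape
    counts = np.zeros((h, w), dtype=np.int32)
    for dy in range(size):          # sum the neighborhood via shifted views
        for dx in range(size):
            counts += padded[dy:dy + h, dx:dx + w]
    return counts > (size * size) // 2

# an isolated noise pixel disappears; the interior of a solid region stays
m = np.zeros((7, 7), dtype=bool)
m[1, 1] = True          # segmentation noise
m[3:6, 3:6] = True      # vessel fragment
cleaned = majority_filter(m)
```

Note that a plain majority vote also erodes the corners of solid regions, which is why the paper pairs it with a dilation step beforehand.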
When castings become complicated and the demands for precision of numerical simulation become higher, the numerical data of casting simulation become massive. On a general personal computer, these data may exceed the capacity of available memory, resulting in failure of rendering. Based on the out-of-core technique, this paper proposes a method to effectively utilize external storage and reduce memory usage dramatically, so as to solve the problem of insufficient memory for massive data rendering on general personal computers. Based on this method, a new post-processor is developed. It is capable of illustrating the filling and solidification processes of casting, as well as thermal stress. The new post-processor also provides fast interaction with simulation results. Theoretical analysis as well as several practical examples prove that the memory usage and loading time of the post-processor are independent of the size of the relevant files and depend only on the proportion of cells on the surface. Meanwhile, the speed of rendering and of fetching values at the mouse position is appreciable, and the demands of real-time interaction are satisfied.
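The out-of-core idea, keeping the bulk of the cell data on disk and mapping in only what rendering needs, can be illustrated with NumPy's memory-mapped arrays. The file layout and array sizes below are invented for the demo and are not taken from the post-processor:

```python
import numpy as np, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "cells.dat")

# write a large cell array to external storage
full = np.memmap(path, dtype=np.float32, mode="w+", shape=(1000, 1000))
full[:] = 0.0
full[10, :] = 1.5            # pretend row 10 holds surface-cell values
full.flush()
del full                     # nothing of the array remains in memory

# later, map the file read-only and load just the slice being rendered
view = np.memmap(path, dtype=np.float32, mode="r", shape=(1000, 1000))
surface = np.array(view[10])
```

Only the pages actually touched are pulled into memory, so resident usage scales with the slices accessed rather than the file size, which mirrors the claim in the abstract.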
To improve the ability to detect underwater targets in strong wideband interference environments, an efficient method of line spectrum extraction is proposed, which fully utilizes the feature of the target spectrum that a highly intense and stable line spectrum is superimposed on a wide continuous spectrum. This method modifies the traditional beamforming algorithm by calculating and fusing the beamforming results over multiple frequency bands and azimuth intervals, providing an excellent way to extract the line spectrum when the interference and the target are not in the same azimuth interval. The statistical efficiency of the estimated azimuth variance and the corresponding power of the line spectrum band depend on the line spectra ratio (LSR) of the line spectrum. The variation of the output signal-to-noise ratio (SNR) with the LSR, the input SNR, the integration time, and the filtering bandwidth of different algorithms yields the selection principle for the critical LSR. On this basis, the detection gains of wideband energy integration and of the narrowband line spectrum algorithm are analyzed theoretically. The simulated detection gain matches the theoretical model well. The application conditions of all methods are verified by receiver operating characteristic (ROC) curves and experimental data from Qiandao Lake. In fact, combining the two methods for target detection reduces the missed detection rate. The proposed two-dimensional post-processing method, with a Kalman filter in the time dimension and a background equalization algorithm in the azimuth dimension, makes use of the strong correlation between adjacent frames and can further remove background fluctuation and improve the display effect.
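The Kalman filtering in the time dimension mentioned at the end can be illustrated with a scalar constant-level filter applied to one azimuth cell's power across frames. The process and measurement noise variances q and r below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-1):
    """Scalar constant-level Kalman filter: smooths one azimuth cell's
    power across frames, exploiting frame-to-frame correlation."""
    x, p = z[0], 1.0
    out = np.empty_like(z, dtype=float)
    out[0] = x
    for k in range(1, len(z)):
        p += q                      # predict: level assumed constant
        kgain = p / (p + r)         # Kalman gain
        x += kgain * (z[k] - x)     # update with the new frame
        p *= (1.0 - kgain)
        out[k] = x
    return out
```

On a slowly varying background this suppresses frame-to-frame fluctuation while tracking the level, which is the behavior the abstract relies on.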
Low contrast of Magnetic Resonance (MR) images limits the visibility of subtle structures and adversely affects the outcome of both subjective and automated diagnosis. State-of-the-art contrast boosting techniques intolerably alter the inherent features of MR images. Drastic changes in brightness features induced by post-processing are not appreciated in medical imaging, as the grey level values have certain diagnostic meanings. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving the underlying features as well. This method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the histogram of the image into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms and subsequent normalization, cumulative histograms are computed. Enhanced grey level values are computed from the resultant cumulative histograms. The performance of the PLMHE algorithm is compared with traditional histogram equalization based algorithms, and the results show that PLMHE can boost the image contrast without causing dynamic range compression, a significant change in mean brightness, or contrast overshoot.
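The partitioning idea behind PLMHE can be sketched roughly: pre-transform with a power law, split the histogram at the mean, and equalize each sub-histogram into its own output range so that mean brightness shifts far less than with plain histogram equalization. This is a simplified stand-in, not the authors' exact PLMHE; the log-compression and dispersion-modification steps are omitted, and the gamma value is an assumption:

```python
import numpy as np

def bi_histogram_equalize(img, gamma=0.8):
    """Power-law pre-transform, then equalize the two sub-histograms
    (below/above the mean) into separate output ranges."""
    x = 255.0 * (img / 255.0) ** gamma      # power-law transformation
    m = x.mean()

    def eq(values, a, b):
        # map grey levels to [a, b] through the sub-histogram's CDF
        if values.size == 0:
            return values
        uniq, counts = np.unique(values, return_counts=True)
        cdf = np.cumsum(counts) / counts.sum()
        return a + (b - a) * np.interp(values, uniq, cdf)

    out = np.empty_like(x)
    out[x <= m] = eq(x[x <= m], 0.0, m)
    out[x > m] = eq(x[x > m], m, 255.0)
    return out.astype(np.uint8)
```

Because pixels below the mean stay below it after mapping (and likewise above), the output mean cannot drift across the partition point, which is the brightness-preservation mechanism the abstract describes.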
The travel time data collection method is used to assist congestion management. The use of traditional sensors (e.g., inductive loops, AVI sensors) or more recent Bluetooth sensors installed on major roads for collecting data is not sufficient because of their limited coverage and the expensive costs of installation and maintenance. Application of the Global Positioning System (GPS) in travel time and delay data collection has proven efficient in terms of accuracy, level of detail, and required data-collection man-power. While data collection automation is improved by the GPS technique, human errors can easily find their way into the post-processing phase, and therefore data post-processing remains a challenge, especially in big projects with large amounts of data. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for carrying out data post-processing. It is a Visual Basic application that processes the data files obtained in the field and integrates them into a Geographic Information System (GIS) for analysis and representation. The results show that this tool obtains results similar to the currently used data post-processing method, reduces the post-processing effort, and also eliminates the need for a second person during data collection.
In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularity problem produced by wall holes and the loss of precision induced by using the differential method to derive strains, the displacement-based elements cannot always provide sufficient accuracy for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. In order to find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOFs. Numerical results show that by using the proposed method, the accuracy of the stress solutions of these two displacement-based plane elements can be improved.
A large amount of data can partly assure good fitting quality for trained neural networks. While the quantity of experimental or on-site monitoring data is commonly insufficient and its quality is difficult to control in engineering practice, numerical simulations can provide a large amount of controlled, high-quality data. Once neural networks are trained on such data, they can be used to predict the properties/responses of engineering objects instantly, saving the further computing effort of simulation tools. Correspondingly, a strategy for efficiently transferring the input and output data used and obtained in numerical simulations to neural networks is desirable for engineers and programmers. In this work, we propose a simple image representation strategy for numerical simulations, in which the input and output data are all represented by images. The temporal and spatial information is kept while the data are greatly compressed. In addition, the results are readable not only by computers but also by humans. Some examples are given, indicating the effectiveness of the proposed strategy.
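One minimal instance of the image-representation idea is to rescale a 2-D simulation field into an 8-bit greyscale image: the spatial layout survives unchanged and the storage shrinks to one byte per cell. The min-max normalization choice here is ours, for illustration:

```python
import numpy as np

def field_to_image(field):
    """Compress a 2-D simulation field into an 8-bit greyscale image;
    spatial layout is kept, values are min-max scaled to 0..255."""
    lo, hi = float(field.min()), float(field.max())
    if hi == lo:                     # constant field -> flat image
        return np.zeros(field.shape, dtype=np.uint8)
    scaled = (field - lo) / (hi - lo)
    return np.round(255.0 * scaled).astype(np.uint8)
```

Stacking such images over time steps gives a compact, human-readable record of a transient simulation, in the spirit of the proposed strategy.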
The Explicit Upstream FEM (EU-FEM) proposed in this paper not only possesses the memory- and CPU-time-saving advantages of the FDM, but also fits boundaries easily, arranges nodal points flexibly, and conveniently refines local grids. The software package, which consists of EU-FEM models and pre-/post-processing tools, has been widely applied to estuaries, near-shore zones, bays, lakes, and complex waters with many islands and channels. In addition to the flow field, the model can be used to calculate the distribution fields of pollutants, temperature, salinity, sediment, and oil spills, and for case studies of estuary regulation projects. Some practical applications are presented and some problems discussed.
In the present paper, the algorithm of Binary Image Cross-Correlation (BICC) is developed to measure unsteady flow fields. A vortex flow field was used to test the algorithm by numerical simulation. The results show that BICC is an effective algorithm for particle identification from consecutive images, and an accurate velocity vector field can be obtained. The real velocity field in a valve chamber was measured by BICC in this study. From the full-field velocity information, the pressure and vorticity fields were also extracted by post-processing.
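The core of cross-correlation-based particle image processing, recovering the displacement between two frames from the peak of their correlation, can be sketched with FFTs. This is a toy illustration of the principle, not the BICC algorithm itself:

```python
import numpy as np

def correlation_shift(frame_a, frame_b):
    """Estimate the integer displacement between two binary particle
    images from the peak of their circular cross-correlation (FFT-based)."""
    a = frame_a.astype(float) - frame_a.mean()
    b = frame_b.astype(float) - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # fold indices in the upper half back to negative displacements
    return (dy if dy <= h // 2 else dy - h,
            dx if dx <= w // 2 else dx - w)
```

Applied per interrogation window, this displacement divided by the inter-frame time gives the local velocity vector.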
Superconvergence has been studied for a long time, and many different numerical methods have been analyzed. This paper is concerned with the problem of superconvergence for a two-dimensional time-dependent linear Schrödinger equation with the finite element method. The error estimate and superconvergence property of order O(h^(k+1)) in the H^1 norm are given by using the elliptic projection operator in the semi-discrete scheme. Global superconvergence is derived by the interpolation post-processing technique. The superconvergence result of order O(h^(k+1)+τ^2) in the H^1 norm is obtained for the Crank-Nicolson fully discrete scheme.
This paper proposes a simple and powerful optimal integration (OPI) method for improving hourly quantitative precipitation forecasts (QPFs, 0-24 h) of a single model by integrating the benefits of different bias-correction methods, using the high-resolution CMA-GD model from the Guangzhou Institute of Tropical and Marine Meteorology of the China Meteorological Administration (CMA). Three techniques are used to generate multi-method calibrated members for OPI: deep neural network (DNN), frequency matching (FM), and optimal threat score (OTS). The results are as follows: (1) The QPF using DNN follows the basic physical patterns of CMA-GD. Despite providing superior improvements for clear-rainy and weak precipitation, DNN cannot improve the predictions for severe precipitation, while OTS can significantly strengthen these predictions. As a result, DNN and OTS are the optimal members to be incorporated into OPI. (2) Our new approach achieves state-of-the-art performance on a single model for all magnitudes of precipitation. Compared with CMA-GD, OPI improves the TS by 2.5%, 5.4%, 7.8%, 8.3%, and 6.1% for QPFs from clear-rainy to rainstorms in the verification dataset. Moreover, OPI shows good stability in the test dataset. (3) It is also noted that the rainstorm pattern of OPI relies heavily on the original model and that OPI cannot correct deviations in the location of severe precipitation. Therefore, further improvement in predicting severe precipitation with this method should be realized by improving the numerical model's forecasting capability.
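Frequency matching (FM) is commonly realized as a quantile mapping between the forecast and observed precipitation climatologies. The sketch below shows that general idea and is not the paper's exact calibration:

```python
import numpy as np

def frequency_matching(forecast, past_fcst, past_obs):
    """Quantile-mapping sketch of frequency matching: transform forecast
    values so their climatological frequencies match the observations."""
    fq = np.sort(np.asarray(past_fcst, dtype=float))
    oq = np.sort(np.asarray(past_obs, dtype=float))
    # piecewise-linear map from forecast quantiles to observed quantiles
    return np.interp(forecast, fq, oq)
```

If a model systematically doubles rainfall amounts, the training quantiles encode that bias and the mapping halves new forecasts accordingly.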
Precise interferometric synthetic aperture radar (InSAR) is a new intelligent photogrammetric technology that uses automatic imaging and processing. Precise InSAR has become the most efficient satellite surveying and mapping (SASM) method, using the interferometric phase to create a global digital elevation model (DEM) with high precision. In this paper, we propose the application of systematic InSAR technologies to SASM. Three key technologies are proposed: calibration, data processing, and post-processing. First, we calibrate the geometric and interferometric parameters, including the azimuth time delay, range time delay, and atmospheric delay, as well as baseline errors. Second, we use the calibrated parameters to create a precise DEM; one of the important procedures in data processing is the determination of phase ambiguities. Finally, we improve the DEM quality through the joint use of the block adjustment method, the long and short baseline combination method, and the descending and ascending data merging method. We use 6 sets of TanDEM-X data covering Shanxi to conduct the experiment. The root mean square error of the final DEM is 5.07 m in the mountainous regions, and the low-coherence area is 0.8 km^2. The result meets the Chinese domestic SASM accuracy standard at both the 1:50 000 and 1:25 000 measurement scales.
The effect of range-Doppler coupling caused by aircraft moving at very high speed complicates the selection of waveform parameters when using frequency-modulated interrupted continuous wave (FMICW) or frequency-coded pulse (FCP) waveforms. It also limits the increase of coherent integration time. In this paper, the application of a coherent phase-coded pulse train (CPCPT) solves range-Doppler coupling well. The relevant processing of CPCPT consists of three parts: Doppler preprocessing, pulse compression, and Doppler post-processing. The velocity information obtained by Doppler preprocessing is used for better pulse compression and range tracking. Doppler post-processing with range tracking enables longer coherent accumulation for better target detection and higher velocity resolution. Finally, simulated data examples are given to demonstrate the achievements mentioned above.
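The pulse-compression stage is a matched filter against the known phase code. A minimal sketch follows, using a Barker-13 code as a stand-in for the paper's phase code:

```python
import numpy as np

# Barker-13 binary phase code: aperiodic autocorrelation sidelobes <= 1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(rx, code):
    """Matched filter: correlate the received samples with the known
    phase code; the output peak marks the target's range bin."""
    return np.correlate(rx, code, mode="valid")

# target echo starting at delay 20 in a 64-sample receive window
rx = np.zeros(64)
rx[20:20 + len(barker13)] += barker13
out = pulse_compress(rx, barker13)
```

The peak value equals the code length (13), a compression gain that lets a long, low-power pulse train achieve the range resolution of a short pulse; in CPCPT the Doppler pre-compensation described above keeps this correlation peak from degrading at high target speed.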
Fine-grained weather forecasting data, i.e., grid data with high resolution, have attracted increasing attention in recent years, especially for specific applications such as the Winter Olympic Games. Although the European Centre for Medium-Range Weather Forecasts (ECMWF) provides grid predictions up to 240 hours, the coarse data are unable to meet the high requirements of these major events. In this paper, we propose a method, called model residual machine learning (MRML), to generate high-resolution grid predictions based on high-precision station forecasts. MRML applies model output machine learning (MOML) for station forecasting. Subsequently, MRML utilizes these forecasts to improve the quality of the grid data by fitting a machine learning (ML) model to the residuals. We demonstrate that MRML achieves high capability for diverse meteorological elements, specifically temperature, relative humidity, and wind speed. In addition, MRML can easily be extended to other post-processing methods by invoking different techniques. In our experiments, MRML outperforms traditional downscaling methods such as piecewise linear interpolation (PLI) on the testing data.
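The residual-fitting step of MRML can be sketched in one dimension: interpolate the coarse forecast to the stations, fit a simple model to the station residuals (a quadratic here, a purely illustrative choice), and add the fitted residual back at fine-grid query points:

```python
import numpy as np

def residual_correct(coarse_x, coarse_val, station_x, station_obs, query_x):
    """MRML-style sketch: fit a model to station residuals of the
    interpolated coarse forecast, then add it back on the fine grid."""
    base_at_stations = np.interp(station_x, coarse_x, coarse_val)
    residual = station_obs - base_at_stations          # what ML must learn
    coeffs = np.polyfit(station_x, residual, deg=2)    # stand-in ML model
    base = np.interp(query_x, coarse_x, coarse_val)
    return base + np.polyval(coeffs, query_x)
```

The real method replaces the quadratic with a trained ML model over many predictors, but the pipeline shape (base field plus learned residual) is the same.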
The correction of model forecasts is an important step in evaluating weather forecast results. In recent years, post-processing models based on deep learning have become prominent. In this paper, a deep learning model named ED-ConvLSTM, based on an encoder-decoder structure and ConvLSTM, is developed, which appears able to effectively correct numerical weather forecasts. Compared with traditional post-processing methods and convolutional neural networks, ED-ConvLSTM has strong collaborative extraction ability, effectively extracting the temporal and spatial features of numerical weather forecasts and fitting the complex nonlinear relationship between the forecast field and the observation field. In this paper, the ED-ConvLSTM post-processing method for 2 m temperature prediction is tested using The International Grand Global Ensemble dataset and ERA5-Land data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Root mean square error and temperature prediction accuracy are used as evaluation indexes to compare ED-ConvLSTM with the model output statistics method, convolutional neural network post-processing methods, and the original prediction by the ECMWF. The results show that the correction effect of ED-ConvLSTM is better than that of the other two post-processing methods in terms of both indexes, especially at long forecast lead times.
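The two evaluation indexes can be written down directly. Note that the 2-degree tolerance used below for "temperature prediction accuracy" is a common convention we assume here; it is not stated in the abstract:

```python
import numpy as np

def rmse(pred, obs):
    """Root mean square error between predicted and observed fields."""
    d = np.asarray(pred, dtype=float) - np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def temperature_accuracy(pred, obs, tol=2.0):
    """Fraction of points whose 2 m temperature error is within `tol`
    degrees; the 2-degree tolerance is an assumed convention."""
    err = np.abs(np.asarray(pred, dtype=float) - np.asarray(obs, dtype=float))
    return float(np.mean(err <= tol))
```

A lower RMSE and a higher accuracy fraction together indicate a better correction, which is how the three methods in the paper are ranked.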
This article focuses on exploiting superconvergence to obtain more accurate multi-resolution analysis. Specifically, we concentrate on enhancing the quality of information transfer between scales by implementing Smoothness-Increasing Accuracy-Conserving (SIAC) filtering combined with multi-wavelets. This allows for a more accurate approximation when passing information between meshes of different resolutions. Although this article presents the details of the SIAC filter for the standard discontinuous Galerkin method, these techniques are easily extendable to other types of data.
Higher order accuracy is one of the well-known beneficial properties of the discontinuous Galerkin (DG) method. Furthermore, many studies have demonstrated the superconvergence property of the semi-discrete DG method. One can take advantage of this superconvergence property by post-processing techniques to enhance the accuracy of the DG solution. The smoothness-increasing accuracy-conserving (SIAC) filter is a popular post-processing technique introduced by Cockburn et al. (Math. Comput. 72(242): 577-606, 2003). It can raise the convergence rate of the DG solution (with a polynomial of degree k) from order k+1 to order 2k+1 in the L2 norm. This paper first investigates general basis functions used to construct the SIAC filter for superconvergence extraction. The generic basis function framework relaxes the SIAC filter structure and provides flexibility for more intricate features, such as extra smoothness. Second, we study the distribution of the basis functions and propose a new SIAC filter, called the compact SIAC filter, that significantly reduces the support size of the original SIAC filter while preserving (or even improving) its ability to enhance the accuracy of the DG solution. We prove the superconvergence error estimates of the new SIAC filters. Numerical results are presented to confirm the theoretical results and demonstrate the performance of the new SIAC filters.
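For orientation, the classical symmetric SIAC filter convolves the DG solution u_h with a kernel built from 2k+1 B-splines of order k+1; this is the standard form from the SIAC literature, stated here as background rather than taken from the paper:

$$
u^\star(x) = \big(K_h^{(2k+1,\,k+1)} * u_h\big)(x),
\qquad
K^{(2k+1,\,k+1)}(x) = \sum_{\gamma=-k}^{k} c_\gamma\, \psi^{(k+1)}(x-\gamma),
$$

where \(\psi^{(k+1)}\) is the B-spline of order \(k+1\) and the weights \(c_\gamma\) are chosen so that the kernel reproduces polynomials of degree up to \(2k\). The generic-basis and compact filters of the paper relax, respectively, the choice of \(\psi\) and the number and placement of the translated copies.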
In this paper, we present negative norm estimates for the arbitrary Lagrangian-Eulerian discontinuous Galerkin (ALE-DG) method solving nonlinear hyperbolic equations with smooth solutions. The smoothness-increasing accuracy-conserving (SIAC) filter is a post-processing technique to enhance the accuracy of discontinuous Galerkin (DG) solutions. This work is the essential step to extend the SIAC filter to moving meshes for nonlinear problems. By the post-processing theory, the negative norm estimates are vital to obtaining the superconvergence error estimates, in the L2 norm, of the solutions after post-processing. Although the SIAC filter has been extended to nonuniform meshes, the analysis of filtered solutions on a nonuniform mesh is complicated. We prove superconvergence error estimates in the negative norm for the ALE-DG method on moving meshes. The main difficulties of the analysis are the terms in the ALE-DG scheme introduced by the grid velocity field, and the time-dependent function space. The mapping from time-dependent cells to reference cells is crucial in the proof. The numerical results also confirm the theoretical proof.
Faster-Than-Nyquist (FTN) transmission is a promising method to improve spectral efficiency for future wireless communication systems. However, this benefit of FTN comes at the price of inducing inter-symbol interference (ISI), which increases the complexity of the receiver. In this paper, a circulated block transmission scheme for FTN signaling, i.e., the CB-FTN system, is proposed. The detailed implementation structure of the CB-FTN transceiver is presented, in which the ISI caused by FTN transmission is canceled by frequency-domain equalization (FDE), and the inter-block interference (IBI) caused by the multi-path channel is overcome by the cyclic prefix. The post-processing signal-to-noise ratio (pSNR) is analyzed for the CB-FTN receiver with zero-forcing FDE in an AWGN channel, and is verified by the simulation results. Moreover, the BER performance and computational complexity of the CB-FTN system are compared with the existing scheme.
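The cyclic-prefix plus zero-forcing FDE mechanism can be sketched end to end on a toy two-tap multipath channel. BPSK symbols and the channel taps below are invented for the demo; this illustrates the CP/equalization mechanism only, not an FTN pulse-shaping model:

```python
import numpy as np

def zf_fde(rx_block, h, cp_len):
    """Zero-forcing frequency-domain equalization: strip the cyclic
    prefix, divide by the channel's frequency response, transform back."""
    y = rx_block[cp_len:]                    # removing the CP makes the
    H = np.fft.fft(h, n=len(y))              # channel act circularly
    return np.fft.ifft(np.fft.fft(y) / H)

# BPSK block through a 2-tap channel with cyclic prefix
rng = np.random.default_rng(2)
sym = 2.0 * rng.integers(0, 2, 16) - 1.0
h = np.array([1.0, 0.4])
cp = 4
tx = np.concatenate([sym[-cp:], sym])       # prepend cyclic prefix
rx = np.convolve(tx, h)[:len(tx)]           # linear multipath channel
est = zf_fde(rx, h, cp)
```

Because the CP is at least as long as the channel memory, the linear convolution looks circular inside the block, so a single per-subcarrier division removes the interference exactly in the noiseless case.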
In this paper, a new technique is proposed for automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI). The technique uses textural features to describe the blocks of each MRI slice, along with position and neighborhood features. A trained support vector machine (SVM) is used to discriminate between blocks in MS lesion regions and blocks in non-lesion regions, based mainly on the textural features with the aid of the other features. The classification of the MRI slice blocks provides an initial segmentation. A comprehensive post-processing module is then utilized to refine and improve the quality of the initial segmentation. The main contribution of the proposed technique is the use of textural features to detect MS lesions in a fully automated process without the need to manually define regions of interest (ROIs). In addition, the post-processing module is generic enough to be applied to the results of any other MS segmentation technique to improve segmentation quality. The technique is evaluated using ten real MRI datasets, with 10% used for training the textural-based SVM. The average performance results of the presented technique were 0.79 for Dice similarity, 0.68 for sensitivity, and 0.9 for the percentage of detected lesion load. These results indicate that the proposed method would be useful in clinical practice for the detection of MS lesions from MRI.
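Block-wise textural description, the heart of the technique, can be sketched with three simple per-block statistics (mean, standard deviation, grey-level entropy). The actual feature set and block size used in the paper may differ; this is an illustrative reduction:

```python
import numpy as np

def block_texture_features(slice2d, block=8):
    """Per-block texture descriptors (mean, std, entropy) for an MRI
    slice; a simplified stand-in for richer textural features."""
    h, w = slice2d.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = slice2d[y:y + block, x:x + block].astype(float)
            hist, _ = np.histogram(b, bins=16, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]                         # drop empty bins
            entropy = -np.sum(p * np.log2(p))    # grey-level entropy
            feats.append((b.mean(), b.std(), entropy))
    return np.array(feats)
```

Each row of the returned array is one block's feature vector; in the full method such vectors (plus position and neighborhood features) are what the SVM classifies as lesion or non-lesion.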
Funding: This work was supported by Taif University Researchers Supporting Project Number (TURSP-2020/114), Taif University, Taif, Saudi Arabia.
Abstract: The low contrast of magnetic resonance (MR) images limits the visibility of subtle structures and adversely affects both subjective and automated diagnosis. State-of-the-art contrast-boosting techniques intolerably alter the inherent features of MR images; drastic post-processing changes in brightness are unwelcome in medical imaging because grey-level values carry diagnostic meaning. To overcome these issues, this paper proposes an algorithm that enhances the contrast of MR images while preserving their underlying features. The method, termed Power-law and Logarithmic Modification-based Histogram Equalization (PLMHE), partitions the image histogram into two sub-histograms after a power-law transformation and a log compression. After a modification intended to improve the dispersion of the sub-histograms, followed by normalization, cumulative histograms are computed, and the enhanced grey-level values are derived from them. The performance of PLMHE is compared with traditional histogram equalization-based algorithms, and the results show that PLMHE can boost image contrast without causing dynamic-range compression, a significant change in mean brightness, or contrast overshoot.
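The overall flow described here (power-law transform, log-based split point, per-partition equalization) can be sketched in a simplified form. This is an illustrative stand-in, not the authors' exact PLMHE formulation: the split rule, the dispersion modification, and the normalization steps are replaced by simpler choices.

```python
import numpy as np

def plmhe_sketch(img, gamma=0.8, levels=256):
    """Simplified two-sub-histogram equalization: apply a power-law
    stretch, pick a split point from a log-compressed copy, then
    equalize each partition within its own grey-level sub-range
    (so the two partitions cannot swap order)."""
    x = img.astype(np.float64) / (levels - 1)
    p = x ** gamma                                # power-law transform
    split = np.expm1(np.mean(np.log1p(p)))        # log-compressed split point
    out = np.empty_like(p)
    for mask in (p <= split, p > split):
        vals = p[mask]
        if vals.size == 0:
            continue
        ranks = np.argsort(np.argsort(vals))      # rank-based CDF stand-in
        lo, hi = vals.min(), vals.max()
        out[mask] = lo + (hi - lo) * ranks / max(vals.size - 1, 1)
    return np.clip(out * (levels - 1), 0, levels - 1).astype(np.uint8)

img = np.array([[10, 12, 14], [16, 200, 220]], dtype=np.uint8)
enhanced = plmhe_sketch(img)
```

Because each partition is remapped only within its own sub-range, the global ordering of grey levels is preserved, which mirrors the abstract's concern about keeping diagnostically meaningful brightness relationships intact.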
Abstract: Travel time data collection assists congestion management. Traditional sensors (e.g., inductive loops, AVI sensors) and more recent Bluetooth sensors installed on major roads are insufficient because of their limited coverage and the high cost of installation and maintenance. The Global Positioning System (GPS) has proven efficient for travel time and delay data collection in terms of accuracy, level of detail, and the man-power required. While the GPS technique automates data collection, human errors can easily creep into the post-processing phase, so data post-processing remains a challenge, especially in large projects with high data volumes. This paper introduces a stand-alone post-processing tool called GPS Calculator, which provides an easy-to-use environment for data post-processing. It is a Visual Basic application that processes the data files obtained in the field and integrates them into a Geographic Information System (GIS) for analysis and representation. The results show that the tool obtains results similar to the currently used post-processing method, reduces the post-processing effort, and eliminates the need for a second person during data collection.
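One representative step of GPS travel-time post-processing is interpolating the time at which the probe vehicle passes each route checkpoint and differencing those times. The sketch below assumes a log of (time, distance-along-route) samples; the field layout and checkpoint scheme are illustrative, not taken from the GPS Calculator tool.

```python
from bisect import bisect_left

def checkpoint_travel_times(samples, checkpoints):
    """samples: list of (t_seconds, dist_m) sorted by distance along the
    route; checkpoints: route distances in metres. Linearly interpolate
    the passing time of each checkpoint and return the travel time (s)
    between consecutive checkpoints."""
    dists = [d for _, d in samples]
    times = []
    for c in checkpoints:
        i = bisect_left(dists, c)
        (t0, d0) = samples[max(i - 1, 0)]
        (t1, d1) = samples[min(i, len(samples) - 1)]
        if d1 == d0:                      # checkpoint coincides with a sample
            times.append(t0)
        else:
            times.append(t0 + (t1 - t0) * (c - d0) / (d1 - d0))
    return [round(b - a, 3) for a, b in zip(times, times[1:])]

# hypothetical GPS log: 1.3 km covered in 100 s
log = [(0, 0.0), (30, 400.0), (70, 900.0), (100, 1300.0)]
tt = checkpoint_travel_times(log, [100.0, 500.0, 1200.0])
```

In a full tool this would run per trip and per link, with the checkpoint distances coming from the GIS layer the results are joined back onto.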
Abstract: In the analysis of high-rise buildings, traditional displacement-based plane elements are often used to obtain the in-plane internal forces of shear walls by stress integration. Limited by the singularities introduced by wall holes and the loss of precision caused by deriving strains through differentiation, displacement-based elements are not always accurate enough for design. In this paper, a hybrid post-processing procedure based on the Hellinger-Reissner variational principle is used to improve the stress precision of two quadrilateral plane elements. To find the best stress field, three different forms are assumed for the displacement-based plane elements with drilling DOFs. Numerical results show that the proposed method improves the accuracy of the stress solutions of both displacement-based plane elements.
Funding: Supported by the National Natural Science Foundation of China (NSFC) (52178324).
Abstract: A large amount of data partly assures good fitting quality for trained neural networks. Since the quantity of experimental or on-site monitoring data is often insufficient, and its quality is hard to control in engineering practice, numerical simulations can supply large amounts of controlled, high-quality data. Once neural networks are trained on such data, they can predict the properties/responses of engineering objects instantly, saving further computation by the simulation tools. A strategy for efficiently transferring the input and output data of numerical simulations to neural networks is therefore desirable for engineers and programmers. In this work, we propose a simple image representation strategy for numerical simulations, in which the input and output data are all represented as images. Temporal and spatial information is preserved while the data are greatly compressed, and the results are readable not only by computers but also by humans. Several examples demonstrate the effectiveness of the proposed strategy.
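The idea of packing simulation input/output into an image can be sketched as follows: quantize a (time, y, x) field to 8 bits and tile the time steps side by side, so both spatial and temporal structure remain visible. This is a minimal illustration of the representation idea, not the paper's exact encoding.

```python
import numpy as np

def field_to_image(frames, lo=None, hi=None):
    """Pack a (time, ny, nx) float field into one 8-bit image by tiling
    time steps horizontally and quantizing values to 0-255."""
    a = np.asarray(frames, dtype=np.float64)
    lo = a.min() if lo is None else lo
    hi = a.max() if hi is None else hi
    scaled = (a - lo) / (hi - lo if hi > lo else 1.0)
    img = np.round(255 * scaled).astype(np.uint8)
    t, ny, nx = img.shape
    # (t, ny, nx) -> (ny, t*nx): each time step becomes one tile
    return img.transpose(1, 0, 2).reshape(ny, t * nx)

# three time steps of a 2x2 field, values rising from 0 to 1
frames = np.linspace(0.0, 1.0, 12).reshape(3, 2, 2)
img = field_to_image(frames)
```

The resulting array can be saved as an ordinary greyscale image, inspected by eye, and fed to a convolutional network; fixing `lo`/`hi` across a dataset keeps the quantization consistent between samples.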
Abstract: The Explicit Upstream FEM (EU-FEM) proposed in this paper not only retains the memory and CPU-time savings of the FDM, but also fits boundaries easily, arranges nodal points flexibly, and allows convenient local grid refinement. The software package, consisting of EU-FEM models and pre/post-processing tools, has been widely applied to estuaries, near-shore regions, bays, lakes, and complex waters with many islands and channels. Besides the flow field, the model can compute the distribution fields of pollutants, temperature, salinity, sediment, and oil spills, and can be used in case studies of estuary regulation projects. Some practical applications are presented and several problems discussed.
Funding: The project was supported by the National Natural Science Foundation of China.
Abstract: In this paper, the Binary Image Cross-Correlation (BICC) algorithm is developed to measure unsteady flow fields. A vortex flow field was used to test the algorithm by numerical simulation. The results show that BICC is effective for identifying particles across consecutive images, from which an accurate velocity vector field can be obtained. The real velocity field in a valve chamber was measured by BICC in this study; from the full-field velocity information, the pressure and vorticity fields were also extracted by post-processing.
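The core of cross-correlating two binary particle images, locating the correlation peak to read off the displacement between frames, can be sketched with FFTs. This is a minimal single-window illustration (integer shifts, circular correlation), not the paper's full BICC implementation.

```python
import numpy as np

def bicc_shift(frame1, frame2):
    """Estimate the dominant particle displacement between two binary
    images from the peak of their circular cross-correlation,
    computed via FFTs. Returns (dy, dx) as signed integer shifts."""
    f1, f2 = np.fft.fft2(frame1), np.fft.fft2(frame2)
    corr = np.real(np.fft.ifft2(f2 * np.conj(f1)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    n, m = frame1.shape
    # map peak indices in [0, n) to signed shifts
    return (dy - n if dy > n // 2 else dy,
            dx - m if dx > m // 2 else dx)

a = np.zeros((16, 16))
a[3, 4] = a[8, 10] = a[12, 2] = 1                 # synthetic particle image
b = np.roll(np.roll(a, 2, axis=0), -3, axis=1)    # next frame: shifted copy
shift = bicc_shift(a, b)
```

A PIV-style measurement would apply this per interrogation window and divide the shift by the inter-frame time to get the local velocity vector.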
Funding: Project supported by the National Natural Science Foundation of China (No. 11671157).
Abstract: Superconvergence has long been studied, and many numerical methods have been analyzed. This paper is concerned with superconvergence of the finite element method for a two-dimensional time-dependent linear Schrödinger equation. The error estimate and superconvergence property of order O(h^(k+1)) in the H^1 norm are obtained by using the elliptic projection operator in the semi-discrete scheme. Global superconvergence is derived by the interpolation post-processing technique. In the Crank-Nicolson fully discrete scheme, a superconvergence result of order O(h^(k+1) + τ^2) in the H^1 norm is obtained.
Funding: Open Project Fund of the Guangdong Provincial Key Laboratory of Regional Numerical Weather Prediction, CMA (J202009); the Heavy Rain and Drought-Flood Disasters in Plateau and Basin Key Laboratory of Sichuan Province (SZKT202005); and the Innovation and Development Project of the China Meteorological Administration (CXFZ2021J020).
Abstract: This paper proposes a simple and powerful optimal integration (OPI) method for improving hourly quantitative precipitation forecasts (QPFs, 0-24 h) of a single model by integrating the benefits of different bias-correction methods, using the high-resolution CMA-GD model from the Guangzhou Institute of Tropical and Marine Meteorology of the China Meteorological Administration (CMA). Three techniques are used to generate multi-method calibrated members for OPI: a deep neural network (DNN), frequency matching (FM), and the optimal threat score (OTS). The results are as follows. (1) The QPF produced by the DNN follows the basic physical patterns of CMA-GD; although it gives superior improvements for clear-rainy and weak precipitation, the DNN cannot improve predictions of severe precipitation, whereas OTS strengthens them significantly, so DNN and OTS are the optimal members to incorporate into OPI. (2) The new approach achieves state-of-the-art single-model performance for all magnitudes of precipitation: compared with CMA-GD, OPI improves the threat score by 2.5%, 5.4%, 7.8%, 8.3%, and 6.1% for QPFs from clear-rainy to rainstorms in the verification dataset, and it shows good stability in the test dataset. (3) The rainstorm pattern of OPI still relies heavily on the original model, and OPI cannot correct deviations in the location of severe precipitation; further improvement for severe precipitation therefore requires improving the numerical model's own forecasting capability.
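The generic shape of integrating several bias-corrected members into one forecast can be sketched with least-squares blending weights fitted on a verification period. This is a stand-in for the paper's OPI scheme, whose exact weighting rule is not given in the abstract; the member data below are synthetic.

```python
import numpy as np

def blend_weights(members, obs):
    """Fit least-squares weights for combining calibrated forecast
    members into one integrated forecast over a training period.
    members: list of 1-D arrays (one per member); obs: observations."""
    X = np.column_stack(members)                 # (n_times, n_members)
    w, *_ = np.linalg.lstsq(X, obs, rcond=None)
    return w

# hypothetical hourly precipitation: truth plus member-specific errors
truth = np.array([0.0, 2.0, 5.0, 9.0, 1.0])
dnn = truth + np.array([0.5, -0.2, 0.3, 0.1, -0.4])   # "DNN" member
ots = truth + np.array([-0.3, 0.4, -0.2, 0.5, 0.2])   # "OTS" member
w = blend_weights([dnn, ots], truth)
blended = np.column_stack([dnn, ots]) @ w
```

Because assigning weight 1 to a single member is always feasible, the fitted blend can never do worse (in squared error on the training period) than the best individual member; an operational OPI would fit such weights per precipitation magnitude and verify on held-out data.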
Abstract: Precise interferometric synthetic aperture radar (InSAR) is a new intelligent photogrammetric technology that uses automatic imaging and processing. It has become the most efficient satellite surveying and mapping (SASM) method, using the interferometric phase to create a high-precision global digital elevation model (DEM). In this paper, we propose applying systematic InSAR technologies to SASM through three key technologies: calibration, data processing, and post-processing. First, we calibrate the geometric and interferometric parameters, including the azimuth time delay, range time delay, and atmospheric delay, as well as baseline errors. Second, we use the calibrated parameters to create a precise DEM; an important procedure in data processing is the determination of phase ambiguities. Finally, we improve the DEM quality through the joint use of block adjustment, long and short baseline combination, and descending/ascending data merging. We used six sets of TanDEM-X data covering Shanxi in the experiment. The root mean square error of the final DEM is 5.07 m in mountainous regions, and the low-coherence area is 0.8 km². The result meets the Chinese domestic SASM accuracy standard at both the 1:50 000 and 1:25 000 measurement scales.
Abstract: The range-Doppler coupling caused by aircraft moving at very high speed complicates the selection of waveform parameters when using frequency-modulated interrupted continuous wave (FMICW) or frequency-coded pulse (FCP) waveforms, and it also limits the increase of coherent integration time. In this paper, the application of a coherent phase-coded pulse train (CPCPT) resolves range-Doppler coupling well. CPCPT processing consists of three parts: Doppler preprocessing, pulse compression, and Doppler post-processing. The velocity information obtained by Doppler preprocessing is used for better pulse compression and range tracking; Doppler post-processing with range tracking allows longer coherent accumulation for better target detection and higher velocity resolution. Finally, simulation examples are given to demonstrate these achievements.
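The middle stage of the processing chain, pulse compression of a phase-coded pulse, can be sketched as a matched filter: correlate the received samples with the transmitted code and read the target range bin off the correlation peak. The Barker-13 code, echo delay, and noiseless model below are illustrative; the Doppler pre- and post-processing stages are not shown.

```python
import numpy as np

# Barker-13 binary phase code: peak sidelobe magnitude of 1
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(rx, code):
    """Matched-filter pulse compression: correlate received samples
    with the transmitted phase code; the peak marks the range bin."""
    return np.correlate(rx, code, mode="valid")

rx = np.zeros(60)
delay = 17                          # hypothetical target range bin
rx[delay:delay + 13] = barker13     # echo of one phase-coded pulse
out = pulse_compress(rx, barker13)
peak = int(np.argmax(out))
```

In a CPCPT receiver, the Doppler preprocessing step would first compensate the phase rotation across the pulse train; without it, a fast target's Doppler shift decorrelates the code and smears this peak, which is exactly the coupling problem the abstract describes.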
Funding: Project supported by the National Natural Science Foundation of China (Nos. 12101072 and 11421101), the National Key Research and Development Program of China (No. 2018YFF0300104), the Beijing Municipal Science and Technology Project (No. Z201100005820002), and the Open Research Fund of the Shenzhen Research Institute of Big Data (No. 2019ORF01001).
Abstract: Fine-grained weather forecasting data, i.e., high-resolution grid data, have attracted increasing attention in recent years, especially for major events such as the Winter Olympic Games. Although the European Centre for Medium-Range Weather Forecasts (ECMWF) provides grid predictions up to 240 hours ahead, the coarse data cannot meet the high requirements of such events. In this paper, we propose a method called model residual machine learning (MRML) to generate high-resolution grid predictions based on high-precision station forecasts. MRML applies model output machine learning (MOML) to station forecasting, then uses these forecasts to improve the quality of the grid data by fitting a machine learning (ML) model to the residuals. We demonstrate that MRML performs well for diverse meteorological elements, specifically temperature, relative humidity, and wind speed. In addition, MRML can easily be extended to other post-processing methods by invoking different techniques. In our experiments, MRML outperforms traditional downscaling methods such as piecewise linear interpolation (PLI) on the test data.
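The central mechanism here, fit a model to the residuals of a base forecast and add the predicted residual back, can be sketched with plain least squares standing in for the paper's ML model. The "elevation" predictor and the synthetic bias are hypothetical; MRML's actual features and learner are not specified in the abstract.

```python
import numpy as np

def fit_residual_model(base_pred, features, obs):
    """Fit a linear model (with intercept) to the residuals
    obs - base_pred; a stand-in for the ML model in MRML."""
    X = np.column_stack([features, np.ones(len(features))])
    coef, *_ = np.linalg.lstsq(X, obs - base_pred, rcond=None)
    return coef

def correct(base_pred, features, coef):
    """Corrected forecast = base forecast + predicted residual."""
    X = np.column_stack([features, np.ones(len(features))])
    return base_pred + X @ coef

# synthetic case: the raw grid forecast is biased by 1.5 degrees
# plus an elevation-dependent error of 2.0 * elev
elev = np.array([0.1, 0.4, 0.2, 0.8, 0.5])
truth = np.array([10.0, 12.0, 11.0, 15.0, 13.0])
base = truth - 1.5 - 2.0 * elev
coef = fit_residual_model(base, elev[:, None], truth)
fixed = correct(base, elev[:, None], coef)
```

Because the synthetic residual is exactly linear in the feature, the corrected forecast recovers the truth here; with real data the residual model only removes the part of the error that the features explain.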
Funding: National Key Research and Development Program of China (2017YFC1502104) and the Beijige Foundation of NJIAS (BJG202103).
Abstract: Correcting model forecasts is an important step in evaluating weather forecast results, and in recent years post-processing models based on deep learning have become prominent. This paper develops a deep learning model named ED-ConvLSTM, based on an encoder-decoder structure and ConvLSTM, which can effectively correct numerical weather forecasts. Compared with traditional post-processing methods and convolutional neural networks, ED-ConvLSTM has strong joint feature-extraction ability: it effectively extracts the temporal and spatial features of numerical weather forecasts and fits the complex nonlinear relationship between the forecast and observation fields. The ED-ConvLSTM post-processing method is tested on 2 m temperature prediction using The International Grand Global Ensemble dataset and ERA5-Land data from the European Centre for Medium-Range Weather Forecasts (ECMWF). Using root mean square error and temperature prediction accuracy as evaluation indexes, ED-ConvLSTM is compared with model output statistics, convolutional neural network post-processing, and the original ECMWF prediction. The results show that ED-ConvLSTM corrects better than the other two post-processing methods on both indexes, especially at long forecast lead times.
Funding: This work was motivated by discussions with Dr. Venke Sankaran (Edwards Air Force Research Lab, USA) and was performed while visiting the Applied Mathematics group at Heinrich Heine University, Düsseldorf, Germany. Research supported by the Air Force Office of Scientific Research (AFOSR) Computational Mathematics Program (Program Manager: Dr. Fariba Fahroo) under Grant numbers FA9550-18-1-0486 and FA9550-19-S-0003.
Abstract: This article focuses on exploiting superconvergence to obtain more accurate multi-resolution analysis. Specifically, we improve the quality of information passed between scales by combining Smoothness-Increasing Accuracy-Conserving (SIAC) filtering with multi-wavelets, allowing a more accurate approximation when passing information between meshes of different resolutions. Although this article presents the details of the SIAC filter for the standard discontinuous Galerkin method, the techniques extend easily to other types of data.
Funding: This work was partially supported by the National Natural Science Foundation of China (NSFC) under Grant No. 11801062.
Abstract: Higher-order accuracy is one of the well-known beneficial properties of the discontinuous Galerkin (DG) method, and many studies have demonstrated the superconvergence property of the semi-discrete DG method. One can exploit this superconvergence through post-processing techniques that enhance the accuracy of the DG solution. The smoothness-increasing accuracy-conserving (SIAC) filter is a popular post-processing technique introduced by Cockburn et al. (Math. Comput. 72(242): 577-606, 2003); it can raise the convergence rate of the DG solution (with polynomials of degree k) from order k+1 to order 2k+1 in the L2 norm. This paper first investigates general basis functions for constructing the SIAC filter for superconvergence extraction; the generic basis-function framework relaxes the SIAC filter structure and provides flexibility for more intricate features, such as extra smoothness. Second, we study the distribution of the basis functions and propose a new compact SIAC filter that significantly reduces the support size of the original SIAC filter while preserving (or even improving) its ability to enhance the accuracy of the DG solution. We prove superconvergence error estimates for the new SIAC filters; numerical results confirm the theory and demonstrate their performance.
Funding: Supported by a fellowship of the China Postdoctoral Science Foundation (No. 2020TQ0030). Y. Xu: research supported by the National Numerical Windtunnel Project (NNW2019ZT4-B08), the Science Challenge Project (TZZT2019-A2.3), and NSFC Grants 11722112 and 12071455. X. Li: research supported by NSFC Grant 11801062.
Abstract: In this paper, we present negative-norm estimates for the arbitrary Lagrangian-Eulerian discontinuous Galerkin (ALE-DG) method applied to nonlinear hyperbolic equations with smooth solutions. The smoothness-increasing accuracy-conserving (SIAC) filter is a post-processing technique that enhances the accuracy of discontinuous Galerkin (DG) solutions, and this work is the essential step in extending the SIAC filter to moving meshes for nonlinear problems. By post-processing theory, negative-norm estimates are vital for obtaining superconvergence error estimates of the post-processed solutions in the L2 norm. Although the SIAC filter has been extended to nonuniform meshes, the analysis of filtered solutions on such meshes is complicated. We prove superconvergence error estimates in the negative norm for the ALE-DG method on moving meshes. The main difficulties of the analysis are the terms in the ALE-DG scheme introduced by the grid velocity field and the time-dependent function space; the mapping from time-dependent cells to reference cells is crucial in the proof. Numerical results confirm the theoretical analysis.
Abstract: Faster-Than-Nyquist (FTN) transmission is a promising method to improve spectral efficiency for future wireless communication systems, but this benefit comes at the price of inter-symbol interference (ISI), which increases receiver complexity. In this paper, a circulated block transmission scheme for FTN signaling, the CB-FTN system, is proposed. The detailed implementation structure of the CB-FTN transceiver is presented: the ISI caused by FTN transmission is canceled by frequency-domain equalization (FDE), and the inter-block interference (IBI) caused by the multipath channel is overcome by a cyclic prefix. The post-processing signal-to-noise ratio (pSNR) is analyzed for the CB-FTN receiver with zero-forcing FDE in an AWGN channel and verified by simulation. The BER performance and computational complexity of the CB-FTN system are also compared with an existing scheme.
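The cyclic-prefix plus zero-forcing FDE mechanism described here can be sketched in isolation: with a prefix at least as long as the channel memory, the received block equals a circular convolution, so per-bin division by the channel frequency response inverts it exactly. The channel taps and block sizes are illustrative, and the FTN-specific matched filtering and noise are omitted.

```python
import numpy as np

def zf_fde(rx_block, channel, n_fft):
    """Zero-forcing frequency-domain equalization of one cyclic-prefixed
    block: FFT, divide bin-by-bin by the channel response, IFFT."""
    H = np.fft.fft(channel, n_fft)
    return np.fft.ifft(np.fft.fft(rx_block) / H)

n, cp = 16, 4                                   # block length, prefix length
h = np.array([1.0, 0.5, 0.25])                  # illustrative multipath taps
sym = np.random.default_rng(7).choice([-1.0, 1.0], size=n)  # BPSK block
tx = np.concatenate([sym[-cp:], sym])           # prepend cyclic prefix
rx = np.convolve(tx, h)[cp:cp + n]              # channel, then drop the CP
eq = np.real(zf_fde(rx, h, n))
```

Because the prefix (4 samples) exceeds the channel memory (2 samples), dropping it turns the linear convolution into a circular one and `eq` recovers the symbols exactly; with noise, the per-bin division also shapes the noise, which is what the pSNR analysis in the abstract quantifies.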
Abstract: In this paper, a new technique is proposed for automatic segmentation of multiple sclerosis (MS) lesions from brain magnetic resonance imaging (MRI). The technique describes the blocks of each MRI slice with textural features, along with position and neighborhood features. A trained support vector machine (SVM) discriminates between blocks in MS lesion regions and blocks in non-lesion regions, based mainly on the textural features with the aid of the other features. The block classification provides an initial segmentation, which a comprehensive post-processing module then refines and improves. The main contribution of the proposed technique is the use of textural features to detect MS lesions in a fully automated process, without the need to manually define regions of interest (ROIs). In addition, the post-processing module is generic enough to be applied to the results of any other MS segmentation technique to improve segmentation quality. The technique is evaluated on ten real MRI data sets, with 10% used to train the textural-feature SVM. The average performance results were 0.79 for Dice similarity, 0.68 for sensitivity, and 0.9 for the percentage of detected lesion load, indicating that the proposed method would be useful in clinical practice for detecting MS lesions from MRI.
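The block-wise pipeline described here, split the slice into blocks, compute per-block features, classify each block, can be sketched as follows. To stay self-contained, a trivial mean-intensity rule replaces the trained SVM, and the features (mean, variance, normalized position) are a pared-down stand-in for the paper's textural feature set.

```python
import numpy as np

def block_features(img, bs=4):
    """Split an MRI-like slice into bs x bs blocks and compute simple
    per-block features: mean, variance, and normalized block position."""
    ny, nx = img.shape
    feats, pos = [], []
    for y in range(0, ny, bs):
        for x in range(0, nx, bs):
            blk = img[y:y + bs, x:x + bs]
            feats.append([blk.mean(), blk.var(), y / ny, x / nx])
            pos.append((y, x))
    return np.array(feats), pos

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, (16, 16))     # synthetic low-intensity tissue
img[4:8, 8:12] += 0.6                     # one bright "lesion" block
feats, pos = block_features(img)
# stand-in classifier: threshold on block mean (the paper uses an SVM)
labels = feats[:, 0] > 0.5
lesions = [p for p, flagged in zip(pos, labels) if flagged]
```

In the full technique the feature matrix would feed an SVM trained on labeled slices, and the resulting block mask would go through the post-processing module before being reported as a lesion segmentation.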