The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way with a spacecraft traveling in a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data in SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to reduce the number of pixels in each image under compression. Multiple predictive coding methods are combined to eliminate redundancy by exploiting the spatial, spectral, and polarization correlations in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67, and the compression time is shorter than the typical observation period. The method is highly feasible and can be easily adapted to MHI.
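As a rough illustration of the two ideas in this abstract (on-disk masking and predictive coding), the sketch below masks pixels outside an assumed solar-disk radius and encodes each remaining pixel as the residual from a simple previous-pixel predictor. The disk parameters, frame size, and predictor are illustrative assumptions, not the MHI pipeline.

```python
import numpy as np

def disk_mask(shape, center, radius):
    """Boolean mask of pixels inside an assumed circular solar disk."""
    yy, xx = np.indices(shape)
    return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

def predictive_residuals(image, mask):
    """Left-neighbor predictive coding restricted to on-disk pixels.

    Returns the stream of prediction errors, which is typically far more
    compressible by an entropy coder than the raw pixel values.
    """
    residuals = []
    prev = 0
    for value in image[mask]:          # row-major scan of on-disk pixels
        residuals.append(int(value) - prev)
        prev = int(value)
    return np.array(residuals, dtype=np.int32)

# Hypothetical example: a 512x512 frame with a disk of radius 200 pixels.
img = np.random.randint(0, 4096, (512, 512), dtype=np.uint16)
m = disk_mask(img.shape, center=(256, 256), radius=200)
res = predictive_residuals(img, m)
print(res.size, "on-disk residuals out of", img.size, "pixels")
```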
Indoor visual localization, which uses camera images to compute the user's pose, is a core component of Augmented Reality (AR) and Simultaneous Localization and Mapping (SLAM). Existing indoor localization technologies generally rely on scene-specific 3D representations or are trained on specific datasets, making it difficult to balance accuracy and cost when they are applied to new scenes. To address this issue, this paper proposes a universal indoor visual localization method based on efficient image retrieval. First, a Multi-Layer Perceptron (MLP) is employed to aggregate features from intermediate layers of a convolutional neural network into a global representation of the image, which ensures accurate and rapid retrieval of reference images. Then, a new mechanism based on Random Sample Consensus (RANSAC) is designed to resolve the relative-pose ambiguity caused by decomposing the essential matrix estimated with the five-point method. Finally, the absolute pose of the query image is computed, yielding the indoor user pose estimate. The proposed method is simple, flexible, and generalizes well across scenes. Experiments show a positioning error of 0.09 m and 2.14° on the 7Scenes dataset, and 0.15 m and 6.37° on the 12Scenes dataset, demonstrating the strong performance of the proposed indoor localization method.
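The pose-ambiguity step can be pictured with standard OpenCV primitives: the essential matrix from the five-point solver admits four (R, t) decompositions, and a cheirality (positive-depth) check, as performed by recoverPose, selects the physically valid one. The snippet below is a generic sketch of that step, not the paper's specific RANSAC mechanism; the intrinsics, synthetic scene, and point correspondences are placeholders.

```python
import numpy as np
import cv2

# Hypothetical intrinsics; stand-in matched keypoints between the query image
# and a retrieved reference image, generated from a synthetic scene so the
# two-view geometry is consistent.
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-2, 2, 200),
                     rng.uniform(-2, 2, 200),
                     rng.uniform(4, 8, 200)])          # 3D points in front of camera 1
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))   # small rotation
t_true = np.array([[0.5], [0.0], [0.0]])               # baseline

def project(P, R, t):
    """Pinhole projection of 3D points P with pose (R, t)."""
    Xc = (R @ P.T + t).T
    uv = (K @ Xc.T).T
    return uv[:, :2] / uv[:, 2:]

pts_ref = project(X, np.eye(3), np.zeros((3, 1)))
pts_query = project(X, R_true, t_true)

# Five-point essential matrix estimation inside a RANSAC loop.
E, inliers = cv2.findEssentialMat(pts_query, pts_ref, K,
                                  method=cv2.RANSAC, prob=0.999, threshold=1.0)

# The decomposition of E has a four-fold ambiguity; recoverPose resolves it
# with a cheirality check over the triangulated points.
_, R, t, _ = cv2.recoverPose(E, pts_query, pts_ref, K)
print("recovered rotation:\n", R)
print("recovered translation direction:", t.ravel())
```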
Obtaining high precision is an important consideration for astrometric studies using images from the Narrow Angle Camera (NAC) of the Cassini Imaging Science Subsystem (ISS), and selecting the best centering algorithm is key to enhancing astrometric accuracy. In this study, we compared the accuracy of five centering algorithms: Gaussian fitting, the modified moments method, and three point-spread function (PSF) fitting methods (effective PSF (ePSF), PSFEx, and the extended PSF (xPSF) from the Cassini Imaging Central Laboratory for Operations (CICLOPS)). We assessed these algorithms using 70 ISS NAC star field images taken with the CL1 and CL2 filters across different stellar magnitudes. The ePSF method consistently demonstrated the highest accuracy, achieving precision below 0.03 pixels for stars of magnitude 8-9. Compared with the modified moments method, previously considered the best, the ePSF method improved overall accuracy by about 10% and 21% in the sample and line directions, respectively. Surprisingly, the xPSF model provided by CICLOPS had lower precision than the ePSF: the ePSF improves measurement precision by 23% and 17% in the sample and line directions, respectively, over the xPSF. This discrepancy might be attributed to the xPSF being designed for photometry rather than astrometry. These findings highlight the necessity of constructing PSF models specifically tailored for astrometric purposes in NAC images and provide guidance for enhancing astrometric measurements using ISS NAC images.
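For intuition, a minimal sketch of a moment-based centering step is shown below: background-subtracted pixels above a threshold contribute intensity-weighted coordinates, which is the basic idea behind moments-style centroiding. The threshold choice and stamp construction are illustrative assumptions, not the exact algorithms compared in the paper.

```python
import numpy as np

def moment_center(stamp, frac=0.1):
    """Intensity-weighted centroid above a simple background and threshold.

    A simplified, moments-style center estimate: subtract the median
    background and keep only pixels brighter than `frac` of the peak.
    """
    work = stamp - np.median(stamp)
    work[work < frac * work.max()] = 0.0      # suppress background pixels
    yy, xx = np.indices(stamp.shape)
    total = work.sum()
    return (yy * work).sum() / total, (xx * work).sum() / total

# Synthetic Gaussian star centered at (10.3, 11.7) in a 21x21 stamp.
yy, xx = np.indices((21, 21))
star = 1000.0 * np.exp(-((yy - 10.3) ** 2 + (xx - 11.7) ** 2) / (2 * 1.5 ** 2))
stamp = star + np.random.normal(50.0, 5.0, star.shape)
print(moment_center(stamp))   # close to (10.3, 11.7)
```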
Multiplicative noise removal problems have attracted much attention in recent years. Unlike additive noise, multiplicative noise destroys almost all information of the original image, especially for texture images. Motivated by the TV-Stokes model, in this paper we propose a new two-step variational model, with a clear geometric interpretation, to denoise texture images corrupted by multiplicative noise. In the first step, we convert the multiplicative denoising problem into an additive one by a logarithm transform and propagate the isophote directions by smoothing the tangential field. Once the isophote directions are constructed, the image is restored in the second step by fitting it to the constructed directions. The existence and uniqueness of the solutions to the variational problems are proved. In both steps, we use the gradient descent method and construct finite difference schemes to solve the problems; in particular, the augmented Lagrangian method and the fast Fourier transform are adopted to accelerate the computation. Experimental results show that the proposed model removes multiplicative noise efficiently while preserving texture well.
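The first-step conversion is easy to illustrate: under multiplicative noise f = u·η, taking logarithms gives log f = log u + log η, an additive-noise problem. A tiny numerical sketch is below, with hypothetical gamma-distributed speckle as the multiplicative noise; it shows only the change of variables the model builds on, not the variational model itself.

```python
import numpy as np

rng = np.random.default_rng(1)
u = np.clip(rng.normal(0.5, 0.1, (64, 64)), 0.05, 1.0)   # stand-in clean image
L = 10                                                     # number of looks
eta = rng.gamma(shape=L, scale=1.0 / L, size=u.shape)      # mean-1 speckle
f = u * eta                                                # multiplicative degradation

# Logarithm turns the multiplicative model into an additive one:
#   log f = log u + log eta
g = np.log(f)
noise_term = g - np.log(u)
print("additive noise term mean/std:", noise_term.mean(), noise_term.std())
# Any additive-noise denoiser can now operate on g; exponentiate the result
# (with a bias correction if needed) to return to the image domain.
```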
In this work, we propose a second-order model for image denoising by employing a novel potential function recently developed in Zhu (J Sci Comput 88:46, 2021) for the design of a regularization term. Due to this new second-order derivative-based regularizer, the model is able to alleviate the staircase effect and preserve image contrast. The augmented Lagrangian method (ALM) is utilized to minimize the associated functional, and convergence analysis is established for the proposed algorithm. Numerical experiments are presented to demonstrate the features of the proposed model.
This study investigated the correlations between the mechanical properties and mineralogy of granite using digital image processing (DIP) and the discrete element method (DEM). The results showed that the X-ray diffraction (XRD)-based DIP method effectively analyzed the mineral composition contents and spatial distributions of granite. During the particle flow code (PFC2D) model calibration phase, the numerical simulation showed that the uniaxial compressive strength (UCS), elastic modulus (E), and failure pattern of the granite specimen in the UCS test were comparable to the experiment. By establishing 351 sets of numerical models and exploring the impact of mineral composition on the mechanical properties of granite, it was found that the quartz and feldspar contents showed no negative correlation with UCS, tensile strength (σ_t), and E, whereas the mica content showed a significant negative correlation with all three. The presence of quartz increased the brittleness of the granite, whereas mica and feldspar increased its ductility in the UCS and direct tensile strength (DTS) tests. Varying the contents of the major mineral compositions had only a minor influence on the number of cracks in both the UCS and DTS tests.
Attitude is one of the crucial parameters of space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may appear in the obtained light curves for various reasons, so preprocessing is required to remove them and obtain high-quality light curves. Statistical analysis shows that the causes of outliers fall into two main types: first, the brightness of the object increases significantly when a star passes nearby, referred to as "stellar contamination"; and second, the brightness decreases markedly under cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive, so we propose machine learning methods as a substitute. Convolutional Neural Networks (CNNs) and Support Vector Machines (SVMs) are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
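As a generic illustration of the classification step (not the paper's trained models), the sketch below fits a support vector machine to hypothetical feature vectors extracted from image cutouts, labeled as contaminated or clean, and reports the F1 score on a held-out split.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import f1_score

# Hypothetical feature vectors (e.g., brightness statistics of a cutout around
# the object) with binary labels: 1 = contaminated frame, 0 = clean frame.
rng = np.random.default_rng(42)
clean = rng.normal(0.0, 1.0, (300, 8))
contaminated = rng.normal(1.5, 1.0, (300, 8))   # brighter, shifted statistics
X = np.vstack([clean, contaminated])
y = np.array([0] * 300 + [1] * 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
print("F1 on test split:", f1_score(y_te, clf.predict(X_te)))
```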
We investigate the following inverse problem: starting from the acoustic wave equation, reconstruct a piecewise constant passive acoustic source from a single boundary temporal measurement without knowing the speed of sound. When the amplitudes of the source are known a priori, we prove a unique determination result for the shape and propose a level set algorithm to reconstruct the singularities. When the singularities of the source are known a priori, we show unique determination of the source amplitudes and propose a least-squares fitting algorithm to recover them. The analysis bridges the low-frequency source inversion problem and the inverse problem of gravimetry. The proposed algorithms are validated and quantitatively evaluated with numerical experiments in 2D and 3D.
We have developed a novel method for co-adding multiple under-sampled images that combines the iteratively reweighted least squares and divide-and-conquer algorithms. Our approach not only allows for anti-aliasing of the images but also enables Point-Spread Function (PSF) deconvolution, resulting in enhanced restoration of extended sources, the highest peak signal-to-noise ratio, and reduced ringing artefacts. To test our method, we conducted numerical simulations that replicated observation runs of the China Space Station Telescope and the VLT Survey Telescope (VST) and compared our results to those obtained using previous algorithms. The simulations showed that our method outperforms previous approaches in several ways, such as restoring the profiles of extended sources and minimizing ringing artefacts. Additionally, because our method relies on the inherent advantages of least-squares fitting, it is more versatile and does not depend on the local uniformity hypothesis for the PSF. However, the new method requires much more computation than the other approaches.
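To make the "iteratively reweighted least squares" ingredient concrete, here is a generic IRLS loop for a robust linear model: each iteration solves a weighted least-squares problem whose weights downweight large residuals. This is a textbook sketch, not the co-addition solver described in the abstract.

```python
import numpy as np

def irls(A, b, p=1.0, n_iter=50, eps=1e-6):
    """Minimize the l_p norm of (A x - b) via iteratively reweighted least squares."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]          # ordinary LS start
    for _ in range(n_iter):
        r = A @ x - b
        w = (np.abs(r) + eps) ** (p - 2.0)            # IRLS weights
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b) # weighted normal equations
    return x

# Hypothetical over-determined system with a few gross outliers in b.
rng = np.random.default_rng(3)
A = rng.normal(size=(200, 5))
x_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
b = A @ x_true + 0.01 * rng.normal(size=200)
b[:5] += 10.0                                          # outliers
print("robust IRLS estimate:", np.round(irls(A, b), 3))
```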
In this paper, we design an efficient, multi-stage image segmentation framework that incorporates a weighted difference of anisotropic and isotropic total variation (AITV). The framework consists of two stages, smoothing and thresholding, and is thus referred to as smoothing-and-thresholding (SaT). In the first stage, a smoothed image is obtained from an AITV-regularized Mumford-Shah (MS) model, which can be solved efficiently by the alternating direction method of multipliers (ADMM) with a closed-form solution of the proximal operator of the l_1 - α l_2 regularizer. The convergence of the ADMM algorithm is analyzed. In the second stage, we threshold the smoothed image by K-means clustering to obtain the final segmentation result. Numerical experiments demonstrate that the proposed segmentation framework is versatile for both grayscale and color images, efficient in producing high-quality segmentation results within a few seconds, and robust to input images corrupted by noise, blur, or both. We compare the AITV method with its original convex TV and nonconvex TV^p (0 < p < 1) counterparts, showcasing the qualitative and quantitative advantages of our proposed method.
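The second (thresholding) stage can be sketched with scikit-learn: cluster the intensities of the smoothed image with K-means and map each pixel to its cluster label. The smoothing stage is faked here with a Gaussian filter purely to have something to threshold; it is not the AITV-regularized Mumford-Shah model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.cluster import KMeans

# Stand-in "smoothed image": a noisy two-region synthetic image blurred with a
# Gaussian filter (the real first stage is the AITV-regularized MS model).
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 32:96] = 1.0
noisy = img + 0.3 * rng.normal(size=img.shape)
smoothed = gaussian_filter(noisy, sigma=2.0)

# Thresholding stage: K-means on pixel intensities, then relabel the image.
labels = KMeans(n_clusters=2, n_init=10, random_state=0) \
    .fit_predict(smoothed.reshape(-1, 1))
segmentation = labels.reshape(smoothed.shape)
print("pixels per segment:", np.bincount(segmentation.ravel()))
```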
There are two types of methods for image segmentation. Traditional image processing methods are sensitive to details and boundaries yet fail to capture semantic information, while deep learning methods can locate and identify different objects but do not delineate boundaries accurately enough; neither produces complete segmentation information on its own. To obtain accurate edge detection together with semantic information, an Adaptive Boundary and Semantic Composite Segmentation method (ABSCS) is proposed. The method can precisely and semantically segment individual objects in large aerial images with limited GPU resources. It adaptively divides and modifies the aerial images according to the proposed principles and methods, applies a deep learning method to semantically segment and preprocess the small divided pieces, applies three traditional methods to segment and preprocess the original-size aerial images, adaptively selects traditional results to modify the boundaries of individual objects in the deep learning results, and finally combines the results of the different objects. Individual-object semantic segmentation experiments are conducted on the AeroScapes dataset, and the results are analyzed qualitatively and quantitatively. The experimental results demonstrate that the proposed method achieves more accurate object boundaries than the original deep learning method. This work also demonstrates the advantages of the proposed method in point cloud semantic segmentation and image inpainting applications.
Rainbow particle image velocimetry (PIV) can recover the three-dimensional velocity field of particles with a single camera; however, it requires a relatively long time to complete the reconstruction. This paper proposes a hybrid algorithm that combines a fast Fourier transform (FFT)-based cross-correlation algorithm with the Horn-Schunck (HS) optical flow pyramid iterative algorithm to increase the reconstruction speed. A Rankine vortex simulation experiment was performed, in which the particle velocity field was reconstructed using the proposed algorithm and the rainbow PIV method. The average endpoint error and average angular error of the proposed algorithm were roughly the same as those of the rainbow PIV algorithm, while the reconstruction time was 20% shorter. Furthermore, the effects of velocity magnitude and particle density on the reconstruction results were analyzed. Finally, the performance of the proposed algorithm was verified on real experimental single-vortex and double-vortex datasets, from which a particle velocity field similar to that of the rainbow PIV algorithm was obtained. The results show that the reconstruction speed of the proposed hybrid algorithm is approximately 25% faster than that of the rainbow PIV algorithm.
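The FFT-based correlation step in PIV can be illustrated in a few lines: correlate two interrogation windows in the Fourier domain and take the location of the correlation peak as the displacement. The sketch below is a minimal 2D version with a synthetic shift; real PIV pipelines add windowing, sub-pixel peak fitting, and outlier validation.

```python
import numpy as np

def fft_displacement(win_a, win_b):
    """Estimate the integer-pixel shift between two interrogation windows
    via FFT-based cross-correlation."""
    A = np.fft.fft2(win_a - win_a.mean())
    B = np.fft.fft2(win_b - win_b.mean())
    corr = np.fft.ifft2(A * np.conj(B)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices so shifts are reported in the range (-N/2, N/2].
    return [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]

# Synthetic particle image and a copy shifted by (3, -5) pixels.
rng = np.random.default_rng(7)
frame = rng.random((64, 64))
shifted = np.roll(frame, shift=(3, -5), axis=(0, 1))
print(fft_displacement(shifted, frame))   # approximately [3, -5]
```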
Content-based medical image retrieval (CBMIR) is a technique for retrieving medical images based on automatically derived image features. There are many applications of CBMIR, such as teaching, research, diagnosis, and electronic patient records. Several methods are applied to enhance the retrieval performance of CBMIR systems; developing new and effective similarity measures and feature fusion methods are two of the most powerful strategies for improving them. This study proposes the relative difference-based similarity measure (RDBSM) for CBMIR. The new measure was first used in the similarity calculation stage of a CBMIR system with an unweighted fusion of traditional color and texture features. The study also proposes a weighted fusion method for medical image features extracted using pre-trained convolutional neural network (CNN) models. The proposed RDBSM outperformed the standard well-known similarity and distance measures on two popular medical image datasets, Kvasir and PH2, in terms of recall and precision retrieval measures. The effectiveness and quality of the proposed similarity measure are also demonstrated using a significance test and statistical confidence bounds.
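The abstract does not give the formula for the RDBSM, so the sketch below uses one plausible notion of relative difference, |a - b| / (|a| + |b|), averaged over feature dimensions and converted to a similarity. Treat the formula, feature vectors, and ranking loop as illustrative assumptions rather than the paper's definition.

```python
import numpy as np

def relative_difference_similarity(x, y, eps=1e-12):
    """Hypothetical relative-difference similarity between feature vectors.

    The per-dimension relative difference |x - y| / (|x| + |y|) lies in [0, 1];
    one minus its mean gives a similarity score (1 = identical features).
    """
    rel = np.abs(x - y) / (np.abs(x) + np.abs(y) + eps)
    return 1.0 - rel.mean()

# Rank a small hypothetical database of fused color/texture feature vectors
# against a query and return the most similar images first.
rng = np.random.default_rng(5)
database = rng.random((100, 64))         # 100 images, 64-D fused features
query = rng.random(64)
scores = np.array([relative_difference_similarity(query, f) for f in database])
top5 = np.argsort(scores)[::-1][:5]
print("top-5 most similar image indices:", top5)
```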
The numerous photos captured by low-cost Internet of Things (IoT) sensors are frequently affected by meteorological factors, especially rainfall, which leaves white streaks of varying size on the image, destroying the image texture and degrading the performance of outdoor computer vision systems. Existing methods train on pairs of images, which makes it difficult to cover all scenes and leads to domain gaps. In addition, their network structures rely on deep learning to map rain images to rain-free images, failing to use prior knowledge effectively. To solve these problems, we introduce a single-image derain model for edge computing that combines prior knowledge of rain patterns with the learning capability of a neural network. Specifically, the algorithm first uses the Residue Channel Prior to filter out rain textural features and then uses a Feature Fusion Module to fuse the original image with the background feature information. The resulting pre-processed image is fed into Half Instance Net (HINet) to recover a high-quality rain-free image with a clear and accurate structure, and the model does not rely on any rainfall assumptions. Experimental results on synthetic and real-world datasets show that the average peak signal-to-noise ratio of the model decreases by 0.37 dB on the synthetic dataset and increases by 0.43 dB on the real-world dataset, demonstrating that the combined model reduces the gap between synthetic data and natural rain scenes, improves the generalization ability of the derain network, and alleviates the overfitting problem.
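For orientation only: residue-channel approaches to deraining exploit the observation that rain streaks are nearly achromatic, so differencing the color channels largely suppresses them. The sketch below computes a per-pixel residue channel as the maximum minus the minimum color channel; this particular definition is an assumption made for illustration and may differ from the prior used in the paper.

```python
import numpy as np

def residue_channel(rgb):
    """Per-pixel max-channel minus min-channel value of an RGB image.

    Because rain streaks are close to achromatic (similar in R, G, and B),
    they largely cancel in this channel difference, while chromatic scene
    content survives.
    """
    return rgb.max(axis=-1) - rgb.min(axis=-1)

# Hypothetical rainy patch: a colored background plus a bright, nearly
# achromatic streak added to one row.
rng = np.random.default_rng(2)
img = rng.random((4, 4, 3))
img[1, :, :] = np.clip(img[1, :, :] + 0.5, 0.0, 1.0)
print(residue_channel(img))
```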
In high-resolution cone-beam computed tomography (CBCT) using a flat-panel detector, imperfect or defective detector elements cause ring artifacts due to the non-uniformity of their X-ray response, which often degrades image quality. A dedicated fitting correction method for high-resolution micro-CT is presented. The method converts each elementary X-ray response curve to an average one, eliminating the response inconsistency among pixels. Other aspects of the method are discussed, such as the variability of the correction factors across different sampling frames and nonlinear factors over the whole spectrum. Results show that both noise and artifacts are reduced in the reconstructed images.
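A minimal sketch of the core idea is shown below: map each pixel's measured response curve onto the panel-average response so that all pixels report consistent values for the same exposure. The calibration exposures and response shapes are made up for illustration and are not the paper's calibration procedure.

```python
import numpy as np

# Calibration: detector readings of each pixel at several known exposures.
exposures = np.linspace(0.0, 1.0, 6)                     # relative X-ray flux
rng = np.random.default_rng(4)
n_pixels = 5
gain = rng.uniform(0.8, 1.2, n_pixels)                   # per-pixel gain error
offset = rng.uniform(-0.02, 0.02, n_pixels)              # per-pixel offset
responses = gain[:, None] * exposures[None, :] + offset[:, None]

avg_response = responses.mean(axis=0)                    # target "average" curve

def correct(pixel_idx, raw_value):
    """Map a raw reading of one pixel onto the average response curve.

    Invert the pixel's own calibration curve to recover the exposure, then
    evaluate the average curve at that exposure.
    """
    exposure = np.interp(raw_value, responses[pixel_idx], exposures)
    return np.interp(exposure, exposures, avg_response)

raw = responses[:, 3]                                     # same exposure, all pixels
corrected = np.array([correct(i, v) for i, v in enumerate(raw)])
print("raw spread:", np.ptp(raw), "corrected spread:", np.ptp(corrected))
```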
We conducted a study on the numerical calculation and response analysis of the transient electromagnetic field generated by a ground source in geological media. One existing solution, the traditional discrete image method, involves complex operations, and its digital filtering algorithm requires a large number of calculations. To solve these problems, we proposed an improved discrete image method that realizes the following: a real-valued electromagnetic field solution based on the Gaver-Stehfest algorithm for approximate inversion, exponential approximation of the objective kernel function using the Prony method, the transient electromagnetic field according to discrete image theory, and a closed-form solution for the approximation coefficients. To verify the method, we calculated the transient electromagnetic field in a homogeneous model and compared it with the results obtained from the Hankel-transform digital filtering method; the comparison shows that the method has considerable accuracy and good applicability. We then used this method to calculate the transient electromagnetic field generated by a ground magnetic dipole source in a typical geoelectric model and analyzed the horizontal component response of the induced magnetic field obtained from the "ground excitation-stratum measurement" method. We conclude that the horizontal component response of the transient field is related to the geoelectric structure, observation time, spatial location, and other factors, and that it reflects the eddy current field distribution and its vertical gradient variation. During the detection of anomalous objects, positions with zero or comparatively large offset should be selected for the drill-hole measurements, or a comparatively long observation delay should be adopted, to reduce the influence of the ambient field on the survey results. The discrete image method and the forward calculation results in this paper can serve as references for related research.
A two-level Bregmanized method with graph-regularized sparse coding (TBGSC) is presented for image interpolation. The outer-level Bregman iterative procedure enforces the observation data constraints, while the inner-level Bregmanized method handles dictionary updating and sparse representation of small overlapping image patches. The introduced graph-regularized sparse coding constraint captures local image features effectively and consequently enables accurate reconstruction from highly undersampled partial data. Furthermore, the modified sparse coding and simple dictionary updating applied in the inner minimization make the proposed algorithm converge within a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can effectively reconstruct images and that it outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
A comprehensive understanding of the spatial distribution and clustering patterns of gravels is of great significance for ecological restoration and monitoring. However, traditional methods for studying gravels are inefficient and error-prone. This study investigated the spatial distribution and clustering characteristics of gravels in the grassland of the northern Tibetan Plateau using digital image processing technology combined with a self-organizing map (SOM) and multivariate statistical methods. Moreover, the correlations of gravel morphological parameters between different cluster groups and the environmental factors affecting gravel distribution were analyzed. The results showed that the morphological characteristics of gravels in the northern region (cluster C) and the southern region (cluster B) of the Tibetan Plateau were similar, with low gravel coverage, small gravel diameters, and elongated shapes; these regions are mainly distributed in high mountainous areas with large topographic relief. The central region (cluster A) has high gravel coverage with larger diameters and is mainly distributed in high-altitude plains with smaller undulation. Principal component analysis (PCA) showed that the gravel distribution of cluster A may be mainly affected by vegetation, while those of clusters B and C could be mainly affected by topography, climate, and soil. The study confirmed that the combination of digital image processing technology and SOM can effectively analyze the spatial distribution characteristics of gravels, providing a new approach to gravel research.
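The multivariate step can be pictured with scikit-learn: run PCA on a table of per-site gravel morphological parameters and inspect how much variance the leading components explain. The feature names and data here are placeholders, not the study's measurements.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical per-site morphological parameters:
# columns = [coverage, mean diameter (mm), elongation, roundness]
rng = np.random.default_rng(8)
sites = rng.random((150, 4)) * [0.6, 80.0, 3.0, 1.0]

# Standardize (the parameters have very different units), then apply PCA.
X = StandardScaler().fit_transform(sites)
pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))
print("first two PC scores of site 0:", np.round(scores[0], 3))
```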
A product-shape multi-objective imagery optimization model based on the second-generation fast Non-dominated Sorting Genetic Algorithm with a degradation strategy (DNSGA-II) is proposed so that the product appearance optimization scheme meets users' complex emotional needs for the product. First, the semantic differential method and K-means cluster analysis are applied to extract the users' multi-objective imagery; next, multidimensional scaling analysis is applied to classify the research objects, the reference samples are screened again with the semantic differential method, and the samples are parameterized in two dimensions using elliptic Fourier analysis; finally, with the fuzzy dynamic evaluation function as the objective function of the algorithm and the coordinates of key points of the product profile as the decision variables, the optimal set of product profile solutions is solved by DNSGA-II. The validity of the model is verified by taking the optimization of the shape scheme of a hospital connection site as an example. Other multi-objective optimization algorithms are also run for comparison with DNSGA-II, and the performance evaluation index values of the five multi-objective optimization algorithms are calculated. The results show that DNSGA-II is superior in improving individual diversity and has better overall performance.
Linear scan computed tomography (LCT) is of great benefit to online industrial scanning and security inspection due to its straight-line source trajectory and high scanning speed. However, in practical applications of LCT, image reconstruction is challenging because the data are limited-angle and insufficient. In this paper, a new reconstruction algorithm based on total-variation (TV) minimization is developed to reconstruct images from limited-angle and insufficient data in LCT. The main idea of our approach is to reformulate the TV problem as a linear equality constrained problem with a separable objective function, and then minimize its augmented Lagrangian function using the alternating direction method (ADM) to solve the subproblems. The convergence of ADM is shown, and the proposed method is robust and efficient for this reconstruction task. Numerical simulations and real-data reconstructions show that the proposed method gives reasonable performance and outperforms some previous methods when applied to an LCT imaging problem.
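A small building block behind this kind of ADM splitting is the soft-thresholding (shrinkage) operator, which solves the separable l1 subproblem in closed form. The sketch below applies it to finite-difference gradients of an image as a generic illustration, not the paper's full reconstruction algorithm.

```python
import numpy as np

def shrink(v, tau):
    """Soft-thresholding: argmin_z tau*|z| + 0.5*(z - v)^2, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def grad(u):
    """Forward finite-difference gradients (replicated border)."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

# In an ADM/ADMM splitting of a TV problem, an auxiliary variable d ~ grad(u)
# is updated in closed form by shrinking the (gradient + dual) term.
rng = np.random.default_rng(9)
u = rng.random((8, 8))
gx, gy = grad(u)
tau = 0.1
dx, dy = shrink(gx, tau), shrink(gy, tau)   # anisotropic TV shrinkage step
print("nonzeros before/after:", np.count_nonzero(gx), np.count_nonzero(dx))
```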