In recent times, lithium-ion batteries have been widely used owing to their high energy density, extended cycle lifespan, and minimal self-discharge rate. The design of high-speed rechargeable lithium-ion batteries faces a significant challenge owing to the need to increase average electric power during charging. This challenge results from the direct influence of the power level on the rate of chemical reactions occurring in the battery electrodes. In this study, the Taguchi optimization method was used to enhance the average electric power during the charging process of lithium-ion batteries. The Taguchi technique is a statistical strategy that facilitates the systematic and efficient evaluation of numerous experimental variables. The proposed method involved varying seven input factors: positive electrode thickness, positive electrode material, positive electrode active material volume fraction, negative electrode active material volume fraction, separator thickness, positive current collector thickness, and negative current collector thickness. Three levels were assigned to each control factor to identify the optimal conditions and maximize the average electric power during charging. Moreover, an analysis of variance was conducted to validate the results obtained from the Taguchi analysis. The results revealed that the Taguchi method is an effective approach for optimizing the average electric power during the charging of lithium-ion batteries. The positive electrode material, followed by the separator thickness and the negative electrode active material volume fraction, were the key factors significantly influencing the average-electric-power response during charging.
The identification of optimal conditions resulted in the improved performance of lithium-ion batteries, extending their potential in various applications. In particular, lithium-ion batteries with average electric powers of 16 W and 17 W during charging were designed and simulated over the range 0-12000 s using COMSOL Multiphysics software. This study efficiently employs the Taguchi optimization technique to develop lithium-ion batteries capable of storing a predetermined average electric power during the charging phase. This method therefore enables a battery to achieve complete charging within a timeframe tailored to a specific application. Its implementation can save costs, time, and materials compared with alternative methods such as the trial-and-error approach.
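As a rough illustration of the Taguchi analysis described above, the sketch below computes the "larger-is-better" signal-to-noise ratio, the standard Taguchi figure of merit when a response (here, average charging power) is to be maximized, and picks the best factor level. The power values and the three-level layout are hypothetical, not taken from the study.

```python
import math

def sn_larger_is_better(values):
    # Taguchi "larger-is-better" signal-to-noise ratio:
    # SN = -10 * log10(mean(1 / y_i^2))
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in values) / len(values))

# Hypothetical average-power results (W) for one control factor at three
# levels, two repetitions each; illustrative numbers only.
results = {1: [14.2, 14.5], 2: [15.8, 16.1], 3: [15.0, 15.2]}
sn = {level: sn_larger_is_better(ys) for level, ys in results.items()}
best_level = max(sn, key=sn.get)  # level with the highest S/N ratio
```

In a full study this ratio is computed for each of the seven factors from an orthogonal-array experiment, and the level with the highest S/N is selected per factor.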
In source detection in the Tianlai project, accurately locating the interferometric fringe in visibility data drastically influences downstream tasks such as physical parameter estimation and weak source exploration. Considering that traditional locating methods are time-consuming and supervised methods require a large quantity of expensive labeled data, in this paper we first investigate the characteristics of interferometric fringes in simulated and real scenarios separately, and integrate an almost parameter-free unsupervised clustering method with a seed-filling or eraser algorithm to propose a hierarchical plug-and-play method that improves location accuracy. We then apply our method to locate the interferometric fringes of single and multiple sources in simulation data, and next to real data taken from the Tianlai radio telescope array. Finally, we compare it with state-of-the-art unsupervised methods. The results show that our method is robust in different scenarios and can effectively improve location measurement accuracy.
Imaging methods are frequently used to diagnose gastrointestinal diseases and play a crucial role in verifying clinical diagnoses among all diagnostic algorithms. However, each of these methods has its own limitations and challenges as well as benefits and advantages. Addressing the limitations requires the application of objective criteria to assess the effectiveness of each diagnostic method. The diagnostic process is dynamic and requires a consistent algorithm, progressing from clinical subjective data, such as patient history (anamnesis), and objective findings to diagnosis ex juvantibus. Caution must be exercised when interpreting diagnostic results, and there is an urgent need for better diagnostic tests. In the absence of such tests, preliminary criteria and a diagnosis ex juvantibus must be relied upon. Diagnostic imaging methods are critical stages in the diagnostic workflow, with sensitivity, specificity, and accuracy serving as the primary criteria for evaluating clinical, laboratory, and instrumental symptoms. A comprehensive evaluation of all available diagnostic data guarantees an accurate diagnosis. The "gold standard" for diagnosis is typically established through either the results of a pathological autopsy or a lifetime diagnosis resulting from a thorough examination using all diagnostic methods.
Obtaining high precision is an important consideration for astrometric studies using images from the Narrow Angle Camera (NAC) of the Cassini Imaging Science Subsystem (ISS). Selecting the best centering algorithm is key to enhancing astrometric accuracy. In this study, we compared the accuracy of five centering algorithms: Gaussian fitting, the modified moments method, and three point-spread function (PSF) fitting methods (effective PSF (ePSF), PSFEx, and extended PSF (xPSF) from the Cassini Imaging Central Laboratory for Operations (CICLOPS)). We assessed these algorithms using 70 ISS NAC star field images taken with CL1 and CL2 filters across different stellar magnitudes. The ePSF method consistently demonstrated the highest accuracy, achieving precision below 0.03 pixels for stars of magnitude 8-9. Compared with the previously considered best method, the modified moments method, the ePSF method improved overall accuracy by about 10% and 21% in the sample and line directions, respectively. Surprisingly, the xPSF model provided by CICLOPS had lower precision than the ePSF; the ePSF improves measurement precision by 23% and 17% in the sample and line directions, respectively, over the xPSF. This discrepancy might be attributed to the xPSF focusing on photometry rather than astrometry. These findings highlight the necessity of constructing PSF models specifically tailored for astrometric purposes in NAC images and provide guidance for enhancing astrometric measurements using these images.
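For orientation, the simplest of the centering families compared above is the moments approach: an intensity-weighted mean of pixel coordinates after background subtraction. The sketch below is a minimal version of that idea on a tiny synthetic stamp, not the modified-moments code used in the study.

```python
def moment_centroid(img, bkg=0.0):
    # Intensity-weighted centroid; pixels at or below `bkg` are ignored.
    # A simplified illustration of the moments idea only.
    tot = sx = sy = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            w = v - bkg
            if w > 0:
                tot += w
                sx += w * x
                sy += w * y
    return sx / tot, sy / tot

# A 4x4 synthetic star stamp whose flux is skewed toward column x = 2.
img = [[0, 0, 0, 0],
       [0, 2, 4, 0],
       [0, 2, 4, 0],
       [0, 0, 0, 0]]
cx, cy = moment_centroid(img)
```

PSF-fitting methods such as ePSF instead fit an empirical model of the stellar profile, which is what yields the sub-0.03-pixel precision reported above.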
Attitude is one of the crucial parameters of space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves for various reasons, so preprocessing is required to remove them and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a nearby star, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. We instead propose the use of machine learning methods. Convolutional neural networks and support vector machines (SVMs) are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and Light Gradient Boosting Machine, and conduct comparative analyses of the results.
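The two contamination types above are separable by the sign and size of the brightness deviation, which is what makes the problem learnable. As a stand-in for the paper's CNN/SVM classifiers, the toy rule below labels points from a single hypothetical "deviation from the local baseline" feature; the 0.5 threshold and the data are invented for illustration.

```python
def classify(deviation):
    # Toy rule mirroring the two contamination types: a large positive
    # brightness deviation suggests a passing star ("stellar"), a large
    # negative one suggests cloud cover ("cloudy"); otherwise keep the point.
    if deviation > 0.5:
        return "stellar"
    if deviation < -0.5:
        return "cloudy"
    return "clean"

# Hypothetical normalized brightness deviations from three exposures.
labels = [classify(d) for d in (1.2, -0.9, 0.1)]
```

A real classifier replaces the hand-set threshold with a decision boundary learned from labeled image cutouts, which is what lifts the F1 scores to the reported 1.00 and 0.98.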
We investigate the following inverse problem: starting from the acoustic wave equation, reconstruct a piecewise constant passive acoustic source from a single boundary temporal measurement without knowing the speed of sound. When the amplitudes of the source are known a priori, we prove a unique determination result for the shape and propose a level set algorithm to reconstruct the singularities. When the singularities of the source are known a priori, we show unique determination of the source amplitudes and propose a least-squares fitting algorithm to recover them. The analysis bridges the low-frequency source inversion problem and the inverse problem of gravimetry. The proposed algorithms are validated and quantitatively evaluated with numerical experiments in 2D and 3D.
The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way, with a spacecraft traveling in an orbit with a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data on SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in each image under compression. Multiple predictive coding methods are combined to eliminate redundancy by exploiting the spatial, spectral, and polarization correlations in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67, and the compression time is less than the general observation period. The method exhibits strong feasibility and can be easily adapted to MHI.
Gravity, as a fundamental force, plays a dominant role in the formation and evolution of cosmic objects and leaves its imprint in the emergence of symmetric and asymmetric structures. Analyzing symmetry criteria therefore allows us to uncover mechanisms behind the gravitational interaction and to understand the underlying physical processes that contribute to the formation of large-scale structures such as galaxies. We use a segmentation process based on intensity thresholding and the k-means clustering algorithm to analyze radio galaxy images. We employ a symmetry criterion and explore the relation between morphological symmetry in radio maps and host galaxy properties. Optical properties (stellar mass, black hole mass, optical size R_50, concentration, stellar mass surface density μ_50, and stellar age) and radio properties (radio flux density, radio luminosity, and radio size) are considered. We found a correlation between symmetry and radio size, indicating that larger radio sources have smaller symmetry indices; the size of radio sources should therefore be considered in any investigation of symmetry. Weak correlations are also observed with other properties, such as R_50 for FRI galaxies and stellar age. We compare the symmetry differences between FRI and FRII radio galaxies: FRII galaxies show higher symmetry in the 1.4 GHz and 150 MHz maps, and we find that this result is independent of the sizes of the radio sources. These findings contribute to our understanding of the morphological properties and analyses of radio galaxies.
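The segmentation step described above can be sketched in one dimension: Lloyd's k-means on pixel intensities splits background from source, and a segmentation mask can then be compared with its mirror image to score symmetry. Everything below (the k = 2 choice, the min/max initialization, the toy pixel row, and the mirror-overlap index) is an illustrative simplification, not the paper's actual pipeline.

```python
def kmeans_1d(values, iters=20):
    # Lloyd's algorithm for k = 2 on scalar intensities,
    # with a deterministic min/max initialization.
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = [[], []]
        for v in values:
            idx = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else c for g, c in zip(groups, centers)]
    return sorted(centers)

# Toy "radio map" row: faint background pixels then a bright source region.
pixels = [0.1, 0.2, 0.15, 0.9, 0.95, 1.0]
lo, hi = kmeans_1d(pixels)
mask = [1 if abs(v - hi) < abs(v - lo) else 0 for v in pixels]
# Crude mirror-symmetry index: fraction of pixels whose segmentation label
# matches the label at the mirrored position (1.0 = perfectly symmetric).
symmetry = sum(a == b for a, b in zip(mask, mask[::-1])) / len(mask)
```

Here the bright region sits entirely on one side, so the mirror overlap is zero, the fully asymmetric case.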
To address the small monitoring range, long observation time, and high cost of existing subsidence observation methods, surface change information for the Guqiao mining area in Huainan City was obtained from two Sentinel-1 radar images acquired on November 4, 2017 and November 28, 2017 in combination with D-InSAR, and the three-dimensional surface deformation was monitored by the two-pass method and the single line-of-sight D-InSAR method. The results show that during the 24-day study period, the maximum deformation of the mining area reached 71 mm, and subsidence in the south was the most obvious, in line with the mining subsidence law. The maximum displacement from north to south was about 250 mm, the maximum displacement from east to west was about 80 mm, and the maximum subsidence in the center was 110 mm. It is concluded that the D-InSAR technique performs well in the inversion of mining subsidence and that this method is suitable for three-dimensional surface monitoring in areas with similar geological conditions. The monitoring results have reference value.
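The core D-InSAR conversion behind deformation maps like the one above is simple: each cycle of unwrapped differential phase corresponds to half a radar wavelength of line-of-sight motion. The sketch below applies that relation using an approximate Sentinel-1 C-band wavelength of 5.55 cm (an assumption here, not a value from the study).

```python
import math

# Approximate Sentinel-1 C-band wavelength in meters (assumed value).
WAVELENGTH_M = 0.0555

def los_deformation_m(delta_phase_rad):
    # Differential-interferometric conversion: a full 2*pi cycle of
    # unwrapped phase change corresponds to half a wavelength of
    # line-of-sight motion (sign convention: positive phase = motion
    # away from the satellite, mapped here to negative deformation).
    return -WAVELENGTH_M / (4.0 * math.pi) * delta_phase_rad

one_fringe_mm = los_deformation_m(2.0 * math.pi) * 1000.0
```

One fringe is thus roughly 28 mm of line-of-sight change, which is why millimeter-to-centimeter subsidence such as the 71 mm reported above is well within reach of a 24-day interferometric pair.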
As dense seismic arrays are deployed at different scales, techniques that make full use of array data at low computing cost are increasingly needed. The wave gradiometry method (WGM) is a new branch of seismic tomography that utilizes the spatial gradients of the wavefield to determine the phase velocity, wave propagation direction, geometrical spreading, and radiation pattern. Seismic wave propagation parameters obtained using the WGM can be further applied to invert for 3D velocity models, Q values, and anisotropy at lithospheric (crust and/or mantle) and smaller scales (e.g., an industrial oilfield or a fault zone). Herein, we review the theoretical foundation, technical development, and major applications of the WGM, compare it with other commonly used array imaging methods, and discuss its future development.
The measurement of solar irradiation is still a necessary basis for planning the installation of photovoltaic parks and concentrating solar power systems. Meteorological stations for measuring the solar flux at any point of the Earth's surface are still insufficient worldwide; moreover, these ground measurements are expensive and rare. To overcome this shortcoming, the exploitation of images from the second-generation European meteorological satellites (MSG) is a reliable way to estimate the global horizontal irradiance (GHI) on the ground with good spatial and temporal coverage. Since 2004, the MSG satellites have provided images of Africa and Europe every 15 minutes with a spatial resolution of about 1 km × 1 km at the sub-satellite point. The objective of this work was to apply the Brazil-SR method to evaluate the GHI over the entire Moroccan national territory from MSG satellite images. This review also presents the standard model for calculating clear-sky GHI from terrestrial meteorological measurements.
A multi-objective linear programming problem is derived from a fuzzy linear programming problem, because the fuzzy programming method is used during the solution. The multi-objective linear programming problem can be converted into a single objective function by various methods, such as Chandra Sen's method, the weighted sum method, the ranking function method, and the statistical averaging method. In this paper, both Chandra Sen's method and the statistical averaging method are used to form a single objective function from the multi-objective one. Two multi-objective programming problems, one numerical example and one real-life example, are solved to verify the result. The problems are then solved by the ordinary simplex method and by the fuzzy programming method. It can be seen that the fuzzy programming method gives better optimal values than the ordinary simplex method.
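The common step in all the scalarization methods named above is collapsing several objective-coefficient vectors into one. The sketch below shows one simple averaging scheme (max-normalize each objective, then average elementwise); it is a stand-in for the averaging idea, not necessarily the exact formula of Chandra Sen's or the statistical averaging method, and the coefficients are invented.

```python
def combine_objectives(costs):
    # Max-normalize each objective's coefficient vector so no single
    # objective dominates by scale, then average the vectors elementwise
    # to obtain one combined linear objective.
    normalized = [[c / max(abs(c2) for c2 in row) for c in row] for row in costs]
    n = len(costs)
    return [sum(col) / n for col in zip(*normalized)]

# Two hypothetical maximization objectives over variables (x1, x2):
# z1 = 3*x1 + 6*x2 and z2 = 4*x1 + 2*x2.
combined = combine_objectives([[3.0, 6.0], [4.0, 2.0]])
```

The combined vector is then handed to an ordinary LP solver (e.g., the simplex method) in place of the original multiple objectives.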
In large-scale, grid-connected wind power operations, it is important to establish an accurate probability distribution model for wind farm fluctuations. In this study, a wind power fluctuation modeling method is proposed based on the method of moving averages and an adaptive nonparametric kernel density estimation (NPKDE) method. First, the method of moving averages is used to reduce the fluctuation of the sampled wind power component, and the probability characteristics of the model are then determined based on the NPKDE. Second, the model is improved adaptively and then solved using constrained-order optimization. The simulation results show that this method has better accuracy and applicability than modeling methods based on traditional parameter estimation, and it solves the local adaptation problem of traditional NPKDE.
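The two building blocks named above, a moving average followed by a Gaussian-kernel density estimate, can be sketched directly; the fixed window and bandwidth and the power samples below are illustrative choices, not the paper's adaptive scheme.

```python
import math

def moving_average(series, window):
    # Simple trailing-window mean; shortens the series by window - 1 points.
    return [sum(series[i:i + window]) / window
            for i in range(len(series) - window + 1)]

def kde(samples, x, bandwidth):
    # Nonparametric density estimate with a Gaussian kernel at point x.
    n = len(samples)
    norm = n * bandwidth * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
               for s in samples) / norm

power_mw = [10.0, 12.0, 11.0, 15.0, 14.0, 13.0]  # hypothetical samples
smoothed = moving_average(power_mw, 3)
density_at_13 = kde(smoothed, 13.0, bandwidth=1.0)
```

The adaptive NPKDE of the study replaces the fixed bandwidth with one that varies with local sample density, which is what resolves the local adaptation problem mentioned above.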
As the key technology for extracting remote sensing information, the classification of remote sensing images has always been a research focus in the field of remote sensing. This paper introduces the classification process and systems for remote sensing images. Based on the recent status of domestic and international research on remote sensing classification methods, new developments such as artificial neural networks, support vector machines, active learning, and ensembles of multiple classifiers are introduced, providing references for the automatic and intelligent development of remote sensing image classification.
The method of images is used to study the charge distribution for cases where Coulomb's law deviates from the inverse square law. This method shows that in these cases some of the charge goes to the surface, while the remaining charge is distributed over the volume of the conductor. In accord with experimental work, we show that the charge distribution depends on the photon rest mass and is very sensitive to it; a very small value of the photon rest mass will create a deviation from Coulomb's law.
By analyzing the existing average skidding distance formulae and the shape of the landing area, the authors put forward that the average skidding distance is shortest when the ratio of length to width is 1, and that the landing collection area is in proportion to the average geometrical skidding distance. New models for calculating the average distance are presented.
In this paper, the evaluation of discretely sampled Asian options is considered by numerically solving the associated partial differential equations with the Legendre spectral method. Double average options are discussed as examples. The problem is a parabolic one on a finite domain whose equation degenerates into ordinary differential equations on the boundaries. A fully discrete scheme is established by using the Legendre spectral method in space and the Crank-Nicolson finite difference scheme in time. The stability and convergence of the scheme are analyzed. Numerical results show that the method retains spectral accuracy in space for such degenerate problems.
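The time discretization named above is easiest to see on the scalar test equation u' = -u: Crank-Nicolson averages the right-hand side between the old and new time levels, giving a second-order one-step formula. This is only a minimal illustration of the time scheme; the paper couples it with a Legendre spectral discretization in space, which is not reproduced here.

```python
import math

def crank_nicolson_decay(lam, dt, steps, u0=1.0):
    # Crank-Nicolson for u' = -lam * u:
    # (u_new - u_old) / dt = -lam * (u_new + u_old) / 2,
    # which rearranges to a constant per-step multiplier.
    step_factor = (1.0 - lam * dt / 2.0) / (1.0 + lam * dt / 2.0)
    u = u0
    for _ in range(steps):
        u *= step_factor
    return u

approx = crank_nicolson_decay(1.0, 0.01, 100)  # integrate to t = 1
error = abs(approx - math.exp(-1.0))           # compare with exact e^{-1}
```

With dt = 0.01 the error at t = 1 is already far below 1e-4, consistent with the scheme's second-order accuracy in time.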
Various methods have been used to determine the salinity of sea and ocean water. The current method is field sampling at various points of the sea and measuring salinity directly. In the last decade, remote sensing satellite images have shown a high capability for determining sea water salinity. Because present remote sensing methods depend on the region studied, they must be customized. Freshwater springs are very significant because of their impact on water salinity and temperature, as well as on physical properties of the environment such as density and sound velocity; since the coasts and islands of the Persian Gulf lie in arid and semi-arid regions lacking drinking water, access to freshwater springs is especially important. After the relevant studies, salinity observations were prepared and two series of suitable images, together with field data, were acquired for complete coverage of the region; preprocessing and calibration were then performed. ENVI software was used to convert the acquired radiance to reflectance. The histogram of calibrated gray levels in the images was determined so that the reflectance of each sample could be extracted. In this paper, the efficiency of the least squares method in determining the salinity of Persian Gulf waters was examined, and freshwater springs were finally identified using remote sensing techniques. The results of the least squares method, after combining various image bands, showed that the combination of bands 2, 3, 5, and 7 has the smallest standard deviation with respect to the training and test data, equal to 0.385 and 0.991978, respectively.
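The least squares step above amounts to regressing field-measured salinity on band reflectances. The closed-form one-band sketch below shows the idea; the reflectance and salinity numbers are hypothetical, and the real study fits a combination of four bands rather than one.

```python
def linfit(xs, ys):
    # Ordinary least squares for y = a + b * x (closed-form solution).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical calibrated reflectances (one band) vs. measured salinity (psu).
refl = [0.10, 0.12, 0.15, 0.20]
sal = [36.0, 36.5, 37.2, 38.4]
a, b = linfit(refl, sal)
pred = a + b * 0.15  # predicted salinity at a reflectance of 0.15
```

With multiple bands the same least-squares criterion is solved via the normal equations, and band combinations are ranked, as above, by the standard deviation of the residuals on training and test data.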
A great amount of work has addressed methods for predicting battery lifetime in wireless sensor systems. In spite of these efforts, reported experimental results demonstrate that the duty-cycle current average method, which is widely used for this purpose, fails to accurately estimate the battery lifetime of most of the presented wireless sensor system applications. The aim of this paper is to experimentally assess the duty-cycle current average method in order to give more effective insight into its validity. An electronic metering system based on a dedicated PCB was designed and developed to experimentally measure node current consumption profiles and the charge extracted from the battery in two selected case studies. A battery lifetime measurement over 30 days was carried out. Experimental results were assessed and compared with estimates given by the duty-cycle current average method. Based on the measurement results, we show that the assumptions on which the method is based do not hold in real operating cases, and the rationale of the duty-cycle current average method needs reconsidering.
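For reference, the duty-cycle current average method being assessed above is just a time-weighted mean of the active and sleep currents, divided into the nominal battery capacity. The currents, times, and capacity below are hypothetical; the paper's point is precisely that estimates produced this way overstate real lifetimes, since battery capacity varies with discharge profile.

```python
def duty_cycle_average_ma(i_active_ma, t_active_s, i_sleep_ma, t_sleep_s):
    # Time-weighted mean current over one duty cycle.
    period = t_active_s + t_sleep_s
    return (i_active_ma * t_active_s + i_sleep_ma * t_sleep_s) / period

def lifetime_hours(capacity_mah, i_avg_ma):
    # Naive lifetime estimate: rated capacity divided by average current.
    return capacity_mah / i_avg_ma

# Hypothetical node: 20 mA for 0.1 s, then 5 uA sleep for 9.9 s (1% duty cycle).
i_avg = duty_cycle_average_ma(20.0, 0.1, 0.005, 9.9)
life_h = lifetime_hours(2400.0, i_avg)  # assumed 2400 mAh battery
```

The method's hidden assumptions, constant usable capacity and rate-independent discharge, are exactly what the measurements in the paper show failing in practice.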
By employing the previous Voronoi approach and replacing its nearest-neighbor approximation with Drizzle in iterative signal extraction, we develop a fast iterative Drizzle algorithm, named fiDrizzle, to reconstruct the underlying band-limited image from undersampled dithered frames. Compared with the existing iDrizzle, the new algorithm improves the rate of convergence and accelerates the computation. Moreover, under the same conditions (e.g., the same number of dithers and iterations), fiDrizzle achieves a better-quality reconstruction than iDrizzle, owing to the newly discovered High Sampling caused Decelerating Convergence (HSDC) effect in the iterative signal extraction process. fiDrizzle demonstrates a powerful ability to perform image deconvolution from undersampled dithers.
Funding: Funded by the National Natural Science Foundation of China (NSFC, Nos. 12373086 and 12303082), the CAS "Light of West China" Program, the Yunnan Revitalization Talent Support Program of Yunnan Province, the National Key R&D Program of China, and Gravitational Wave Detection Project No. 2022YFC2203800.
Abstract: Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves for various reasons, so preprocessing is required to remove them and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be grouped into two main types: first, the brightness of the object significantly increases when a star passes nearby, referred to as "stellar contamination," and second, the brightness markedly decreases under cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive; we propose machine learning methods as a substitute. Convolutional neural networks (CNNs) and support vector machines (SVMs) are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
Funding: Partially supported by the NSF (Grant Nos. 2012046, 2152011, and 2309534), partially supported by the NSF (Grant Nos. DMS-1715178, DMS-2006881, and DMS-2237534), the NIH (Grant No. R03-EB033521), and a startup fund from Michigan State University.
Abstract: We investigate the following inverse problem: starting from the acoustic wave equation, reconstruct a piecewise constant passive acoustic source from a single boundary temporal measurement without knowing the speed of sound. When the amplitudes of the source are known a priori, we prove a unique determination result for the shape and propose a level set algorithm to reconstruct the singularities. When the singularities of the source are known a priori, we show unique determination of the source amplitudes and propose a least-squares fitting algorithm to recover them. The analysis bridges the low-frequency source inversion problem and the inverse problem of gravimetry. The proposed algorithms are validated and quantitatively evaluated with numerical experiments in 2D and 3D.
Funding: Supported by the National Key R&D Program of China (grant No. 2022YFF0503800), the National Natural Science Foundation of China (NSFC) (grant No. 11427901), the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS-SPP) (grant No. XDA15320102), and the Youth Innovation Promotion Association (CAS No. 2022057).
Abstract: The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way, with a spacecraft traveling at a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data on SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in an image under compression. Multiple predictive coding methods are combined to eliminate redundancy by exploiting the correlations (spatial, spectral, and polarimetric) in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67. The compression time is also less than the general observation period. The method exhibits strong feasibility and can be easily adapted to MHI.
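As a rough illustration of the predictive-coding idea described above (not the paper's actual MHI pipeline), the sketch below applies a simple left-neighbor predictor to a smooth synthetic image and compares the Shannon entropy of the raw pixels with that of the prediction residuals; the entropy drop is what a subsequent entropy coder turns into an actual compression ratio.

```python
import numpy as np

def predictive_residuals(img: np.ndarray) -> np.ndarray:
    """Left-neighbor predictive coding: residual = pixel - previous pixel in the row."""
    pred = np.zeros_like(img)
    pred[:, 1:] = img[:, :-1]               # predict each pixel from its left neighbor
    return img.astype(np.int32) - pred.astype(np.int32)

def entropy_bits(values: np.ndarray) -> float:
    """Shannon entropy in bits/symbol -- a lower bound on lossless code length."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

# Smooth synthetic data (random walk along rows): residuals concentrate near zero,
# so the predictive code needs far fewer bits per pixel than the raw values.
rng = np.random.default_rng(0)
img = (128 + np.cumsum(rng.integers(-2, 3, size=(64, 64)), axis=1)).astype(np.uint8)

raw_bits = entropy_bits(img)
res_bits = entropy_bits(predictive_residuals(img))
print(f"raw entropy {raw_bits:.2f} bits/px, residual entropy {res_bits:.2f} bits/px")
```

The paper combines several such predictors across space, spectrum, and polarization; this sketch shows only the single-predictor principle.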
Abstract: Gravity, as a fundamental force, plays a dominant role in the formation and evolution of cosmic objects and leaves its imprint in the emergence of symmetric and asymmetric structures. Analyzing symmetry criteria therefore allows us to uncover mechanisms behind the gravitational interaction and understand the underlying physical processes that contribute to the formation of large-scale structures such as galaxies. We use a segmentation process based on intensity thresholding and the k-means clustering algorithm to analyze radio galaxy images. We employ a symmetry criterion and explore the relation between morphological symmetry in radio maps and host galaxy properties. Optical properties (stellar mass, black hole mass, optical size (R_50), concentration, stellar mass surface density (μ_50), and stellar age) and radio properties (radio flux density, radio luminosity, and radio size) are considered. We found a correlation between symmetry and radio size, indicating that larger radio sources have smaller symmetry indices; the size of radio sources should therefore be considered in any investigation of symmetry. Weak correlations are also observed with other properties, such as R_50 for FRI galaxies and stellar age. We compare the symmetry differences between FRI and FRII radio galaxies: FRII galaxies show higher symmetry in the 1.4 GHz and 150 MHz maps. Investigating the influence of radio source sizes, we discovered that this result is independent of source size. These findings contribute to our understanding of the morphological properties and analyses of radio galaxies.
Funding: Supported by the Talent Introduction Project of Anhui University of Science and Technology (ZHYJ202104), Horizontal Cooperation Projects (881079, 880554, 880982), and the Innovation and Entrepreneurship Project for National College Students (S202310879289, S202310879296, X202310879098, X202310879097).
Abstract: To address the small monitoring range, long observation time, and high cost of existing subsidence observation methods, surface change information for the Guqiao mining area in Huainan City was obtained with D-InSAR from two Sentinel-1 radar images acquired between November 4, 2017 and November 28, 2017, and three-dimensional surface deformation was monitored using the two-pass method and the single line-of-sight D-InSAR method. The results show that during the 24-day study period, the maximum deformation of the mining area reached 71 mm, with subsidence most pronounced in the south, in line with the mining subsidence law. The maximum north-south displacement was about 250 mm, the maximum east-west displacement was about 80 mm, and the maximum subsidence in the center was 110 mm. It is concluded that the D-InSAR technique performs well in the inversion of mining subsidence, and the method is suitable for three-dimensional surface monitoring in areas with similar geological conditions. The monitoring results have reference value.
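The core conversion step in D-InSAR, turning an unwrapped interferometric phase change into line-of-sight deformation, can be sketched in a few lines. The phase value below is illustrative, and the Sentinel-1 C-band wavelength of roughly 5.6 cm is a general assumption, not a value stated in the abstract.

```python
import math

# D-InSAR: line-of-sight (LOS) deformation from an unwrapped interferometric phase.
# d_los = -(lambda / (4*pi)) * delta_phi  (two-way radar path, hence 4*pi per cycle)
wavelength_m = 0.056            # Sentinel-1 C-band wavelength, roughly 5.6 cm
delta_phi_rad = -math.pi / 2    # illustrative unwrapped phase change between scenes

d_los_mm = -(wavelength_m / (4 * math.pi)) * delta_phi_rad * 1000.0
print(f"LOS deformation: {d_los_mm:.2f} mm")
```

Resolving this one-dimensional LOS quantity into the north-south, east-west, and vertical components reported in the abstract is what the two-pass and single-LOS decomposition methods provide.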
Abstract: As dense seismic arrays are deployed at different scales, techniques that make full use of array data at low computing cost are increasingly needed. The wave gradiometry method (WGM) is a new branch of seismic tomography that utilizes the spatial gradients of the wavefield to determine the phase velocity, wave propagation direction, geometrical spreading, and radiation pattern. Seismic wave propagation parameters obtained using the WGM can be further applied to invert 3D velocity models, Q values, and anisotropy at lithospheric (crust and/or mantle) and smaller scales (e.g., an industrial oilfield or a fault zone). Here, we review the theoretical foundation, technical development, and major applications of the WGM, and compare it with other commonly used array imaging methods. Future development of the WGM is also discussed.
Abstract: The measurement of solar irradiation remains a necessary basis for planning the installation of photovoltaic parks and concentrating solar power systems. Meteorological stations measuring the solar flux at points on the earth's surface are still insufficient worldwide; moreover, these ground measurements are expensive and rare. To overcome this shortcoming, the exploitation of images from the second-generation European meteorological satellites (MSG) is a reliable solution for estimating the global horizontal irradiance (GHI) at the ground with good spatial and temporal coverage. Since 2004, the MSG satellites have provided images of Africa and Europe every 15 minutes with a spatial resolution of about 1 km × 1 km at the sub-satellite point. The objective of this work was to apply the Brazil-SR method to evaluate the GHI for the entire Moroccan national territory from MSG satellite images. This bibliographic review also presents the standard model for calculating clear-sky GHI from terrestrial meteorological measurements.
Abstract: A multi-objective linear programming problem is derived from a fuzzy linear programming problem, because the fuzzy programming method is used during the solution. The multi-objective linear programming problem can be converted into a single objective function by various methods, such as Chandra Sen's method, the weighted sum method, the ranking function method, and the statistical averaging method. In this paper, both Chandra Sen's method and the statistical averaging method are used to form a single objective function from the multi-objective function. Two multi-objective programming problems, one numerical example and one real-life example, are solved to verify the result. The problems are then solved by the ordinary simplex method and the fuzzy programming method. The fuzzy programming method is seen to give better optimal values than the ordinary simplex method.
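A minimal sketch of the scalarization idea described above, in the spirit of Chandra Sen's method: each objective is optimized separately, then the objectives are combined into a single one with weights given by the reciprocals of their individual optima. The LP coefficients below are illustrative, not taken from the paper's examples, and `scipy` is assumed to be available.

```python
import numpy as np
from scipy.optimize import linprog

# Two maximization objectives over the same feasible region A x <= b, x >= 0.
c1 = np.array([3.0, 2.0])
c2 = np.array([1.0, 4.0])
A = np.array([[1.0, 1.0], [2.0, 1.0]])
b = np.array([10.0, 16.0])

def maximize(c):
    """linprog minimizes, so negate c to maximize c @ x."""
    res = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c))
    return res.x, float(c @ res.x)

# Step 1: optimize each objective separately to obtain its optimum phi_i.
(_, phi1), (_, phi2) = maximize(c1), maximize(c2)

# Step 2: combine the objectives, each scaled by the reciprocal of its own
# optimum, into one objective and solve the resulting single-objective LP.
c_combined = c1 / abs(phi1) + c2 / abs(phi2)
x_opt, _ = maximize(c_combined)
print("phi1 =", phi1, "phi2 =", phi2, "compromise solution x =", x_opt)
```

The statistical averaging variant differs only in how the combining weights are computed (averages of the optima rather than their reciprocals one by one); the two-step structure is the same.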
Funding: Supported by the Science and Technology Project of the State Grid Corporation of China, "Research on Active Development Planning Technology and Comprehensive Benefit Analysis Method for Regional Smart Grid Comprehensive Demonstration Zone," and the National Natural Science Foundation of China (51607104).
Abstract: In the process of large-scale, grid-connected wind power operations, it is important to establish an accurate probability distribution model for wind farm fluctuations. In this study, a wind power fluctuation modeling method is proposed based on the method of moving average and adaptive nonparametric kernel density estimation (NPKDE). First, the method of moving average is used to reduce the fluctuation of the sampled wind power component, and the probability characteristics of the model are then determined based on the NPKDE. Second, the model is improved adaptively and then solved by using constraint-order optimization. The simulation results show that this method has better accuracy and applicability than modeling methods based on traditional parameter estimation, and it solves the local adaptation problem of traditional NPKDE.
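A bare-bones sketch of the two ingredients named above: a moving average to separate out the fluctuation component, and a fixed-bandwidth Gaussian kernel density estimate of its distribution. The synthetic power series, window length, and Silverman bandwidth are illustrative assumptions; the paper's adaptive NPKDE varies the bandwidth locally, which this sketch does not do.

```python
import numpy as np

def moving_average(x: np.ndarray, w: int) -> np.ndarray:
    """Centered moving average: separates the slow trend from fast fluctuations."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def gaussian_kde(samples: np.ndarray, grid: np.ndarray, h: float) -> np.ndarray:
    """Fixed-bandwidth Gaussian kernel density estimate (the simplest NPKDE)."""
    u = (grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
t = np.linspace(0, 8 * np.pi, 2000)
power = 50 + 10 * np.sin(t) + rng.normal(0, 2, size=t.size)   # synthetic wind power

# Fluctuation component = signal minus trend; trim edge-padding artifacts.
fluct = (power - moving_average(power, 25))[25:-25]
grid = np.linspace(fluct.min(), fluct.max(), 200)
h = 1.06 * fluct.std() * fluct.size ** (-1 / 5)   # Silverman rule-of-thumb bandwidth
pdf = gaussian_kde(fluct, grid, h)

mass = (pdf * (grid[1] - grid[0])).sum()          # should be close to 1
print(f"bandwidth h = {h:.3f}, total probability mass ~ {mass:.3f}")
```

The estimated density can then be compared against parametric fits (e.g., Gaussian or t-location-scale), which is where the accuracy advantage claimed in the abstract would show up.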
Funding: Supported by the Science Research Foundation (2010Y290) of the Yunnan Department of Education.
Abstract: As the key technology for extracting remote sensing information, the classification of remote sensing images has always been a research focus in the field of remote sensing. This paper introduces the classification process and systems for remote sensing images. Based on the recent research status of domestic and international remote sensing classification methods, new research directions in remote sensing classification, such as artificial neural networks, support vector machines, active learning, and ensembles of multiple classifiers, are introduced, providing references for the automatic and intelligent development of remote sensing image classification.
Abstract: The method of images is used to study the charge distribution in cases where Coulomb's law deviates from the inverse square law. This method shows that in these cases some of the charge goes to the surface, while the remaining charge is distributed over the volume of the conductor. In accord with experimental work, we show that the charge distribution depends on the photon rest mass and is very sensitive to it; even a very small photon rest mass creates a deviation from Coulomb's law.
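The deviation discussed above is commonly modeled by giving the photon a small rest mass, which replaces the Coulomb potential with a Yukawa form; a standard way to write this (the notation is conventional, not taken from the paper) is:

```latex
V(r) = \frac{q}{4\pi\varepsilon_0}\,\frac{e^{-\mu r}}{r},
\qquad
\mu = \frac{m_\gamma c}{\hbar},
```

so that \(\mu \to 0\) (zero photon rest mass) recovers the exact inverse-square law, while any nonzero \(m_\gamma\) shortens the interaction range and drives part of the charge into the conductor's volume.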
Abstract: By analyzing the existing average skidding distance formulae and the shape of the landing area, the authors show that the average skidding distance is shortest when the ratio of length to width is 1, and that the landing collection area is proportional to the square of the average geometrical skidding distance. New models for calculating the average skidding distance are presented.
文摘In this paper, the evaluation of discretely sampled Asian options was considered by numerically solving the associated partial differential equations with the Legendre spectral method. Double average options were discussed as examples. The problem is a parabolic one on a finite domain whose equation degenerates into ordinary differential equations on the boundaries. A fully discrete scheme was established by using the Legendre spectral method in space and the Crank-Nicolson finite difference scheme in time. The stability and convergence of the scheme were analyzed. Numerical results show that the method can keep the spectral accuracy in space for such degenerate problems.
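The time discretization named in the abstract above, the Crank-Nicolson scheme, can be illustrated on the simplest parabolic problem. The sketch below uses second-order finite differences in space (the paper instead uses a Legendre spectral discretization) to march the heat equation and checks the result against the exact solution.

```python
import numpy as np

# Crank-Nicolson time stepping for u_t = u_xx on [0, 1] with u(0) = u(1) = 0.
N, M, T = 50, 200, 0.1
dx, dt = 1.0 / N, T / M
x = np.linspace(0.0, 1.0, N + 1)
u = np.sin(np.pi * x)                      # initial condition

# Tridiagonal second-difference operator on the interior nodes.
main = -2.0 * np.ones(N - 1)
off = np.ones(N - 2)
L = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
I = np.eye(N - 1)
A = I - 0.5 * dt * L                       # implicit half of the step
B = I + 0.5 * dt * L                       # explicit half of the step

for _ in range(M):                         # (I - dt/2 L) u^{n+1} = (I + dt/2 L) u^n
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

exact = np.exp(-np.pi**2 * T) * np.sin(np.pi * x)
print("max error:", np.abs(u - exact).max())
```

Crank-Nicolson is second-order in time and unconditionally stable, which is why it pairs well with the high spatial accuracy of a spectral method as in the paper.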
Abstract: Various methods have been used to determine the salinity of sea and ocean water. The conventional approach has been field examination at various points of the sea. In the last decade, remote sensing satellite images have shown a high capability for determining seawater salinity. Because existing remote sensing methods depend on the region studied, they must be customized. Freshwater springs are significant because of their impact on water salinity and temperature, as well as on physical properties of the environment such as density and sound velocity; since the coasts and islands of the Persian Gulf lie in arid and semi-arid regions and lack drinking water, access to freshwater springs is especially important. After the necessary studies, salinity observations were prepared and two series of suitable images with field data were acquired for complete coverage of the region, followed by preprocessing and calibration. ENVI software was used to convert the acquired radiance to reflectance. The histogram of calibrated gray levels in the images was computed so that the reflectance of each sample could be extracted. In this paper, the efficiency of the least squares method in determining the salinity of Persian Gulf waters was examined, and freshwater springs were identified using remote sensing techniques. The results of the least squares method, after combining various image bands, showed that the combination of bands 2, 3, 5, and 7 has the lowest standard deviation against the training and test data, equal to 0.385 and 0.991978, respectively.
Abstract: A great deal of work has addressed methods for predicting battery lifetime in wireless sensor systems. In spite of these efforts, reported experimental results demonstrate that the duty-cycle current average method, which is widely used for this purpose, fails to accurately estimate the battery lifetime of most of the presented wireless sensor system applications. The aim of this paper is to experimentally assess the duty-cycle current average method and give more effective insight into its limitations. An electronic metering system, based on a dedicated PCB, was designed and developed to experimentally measure node current consumption profiles and the charge extracted from the battery in two selected case studies. A battery lifetime measurement (over 30 days) was carried out. Experimental results were assessed and compared with estimates given by the duty-cycle current average method. Based on the measurement results, we show that the assumptions on which the method is based do not hold in real operating cases. The rationale of the duty-cycle current average method needs reconsidering.
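The method under test above reduces to a time-weighted mean of the per-phase currents divided into the rated battery capacity; a minimal sketch follows (the current profile and battery capacity are hypothetical illustration values, not measurements from the paper):

```python
# Hypothetical duty-cycled current profile of a sensor node: (phase, mA, seconds).
phases = [
    ("sleep",    0.005, 9.0),
    ("sense",    2.0,   0.5),
    ("transmit", 20.0,  0.5),
]
battery_mAh = 2400.0

# Duty-cycle current average: time-weighted mean current over one cycle.
cycle_time = sum(t for _, _, t in phases)
avg_mA = sum(i * t for _, i, t in phases) / cycle_time

# Idealized lifetime estimate: rated capacity divided by average current.
lifetime_h = battery_mAh / avg_mA
print(f"average current {avg_mA:.4f} mA -> estimated lifetime {lifetime_h:.0f} h")
```

The paper's point is precisely that this idealized estimate can diverge from measured lifetimes, since real batteries and loads violate the constant-capacity, constant-current-per-phase assumptions baked into these three lines of arithmetic.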
Funding: Supported by the National Basic Research Program of China (973 Program, Nos. 2015CB857000 and 2013CB834900), the Foundation for Distinguished Young Scholars of Jiangsu Province (No. BK20140050), the Strategic Priority Research Program "The Emergence of Cosmological Structure" of the CAS (No. XDB09010000), and the National Natural Science Foundation of China (Nos. 11333008, 11233005, 11273061 and 11673065).
Abstract: By employing the previous Voronoi approach and replacing its nearest-neighbor approximation with Drizzle in iterative signal extraction, we develop a fast iterative Drizzle algorithm, named fiDrizzle, to reconstruct the underlying band-limited image from undersampled dithered frames. Compared with the existing iDrizzle, the new algorithm improves the rate of convergence and accelerates the computation. Moreover, under the same conditions (e.g., the same number of dithers and iterations), fiDrizzle produces a better-quality reconstruction than iDrizzle, owing to the newly discovered High Sampling caused Decelerating Convergence (HSDC) effect in the iterative signal extraction process. fiDrizzle demonstrates its powerful ability to perform image deconvolution from undersampled dithers.