Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may appear in the obtained light curves for various reasons, so preprocessing is required to remove them and obtain high-quality light curves. Statistical analysis shows that the causes of outliers fall into two main types: first, the brightness of the object increases significantly when a star passes nearby, referred to as "stellar contamination"; second, the brightness decreases markedly under cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive, so we propose machine learning methods as a substitute. Convolutional Neural Networks and SVMs are employed to identify cases of stellar and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
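The paper's CNN/SVM classifiers need labeled training images; as a rough illustration of what the preprocessing step is trying to achieve, the sketch below flags outliers in a raw light curve using simple robust statistics (a running median plus an MAD-based threshold). The function name, window size, and the 3-sigma rule are all our own illustrative choices, not the paper's method.

```python
import numpy as np

def flag_contamination(mags, window=5, k=3.0):
    """Flag light-curve points whose brightness deviates strongly from a
    running median. In magnitudes, smaller values mean brighter, so a large
    negative residual suggests stellar contamination (a star passing nearby)
    and a large positive residual suggests cloudy contamination."""
    mags = np.asarray(mags, dtype=float)
    pad = window // 2
    padded = np.pad(mags, pad, mode="edge")
    med = np.array([np.median(padded[i:i + window]) for i in range(mags.size)])
    resid = mags - med
    # robust sigma estimate via the median absolute deviation
    sigma = 1.4826 * np.median(np.abs(resid - np.median(resid))) or 1e-6
    labels = np.full(mags.size, "clean", dtype=object)
    labels[resid < -k * sigma] = "stellar"   # much brighter than neighbours
    labels[resid > k * sigma] = "cloudy"     # much fainter than neighbours
    return labels
```

A learned classifier replaces the fixed threshold with features drawn from the images themselves, which is what makes the CNN/SVM approach more robust than a rule like this.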
In today's world, image processing techniques play a crucial role in the prognosis and diagnosis of various diseases, owing to the development of several precise and accurate methods for medical images. Automated analysis of medical images is essential for doctors, as manual investigation often leads to inter-observer variability. This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework. The proposed hybridized method combines the Modified Inertia Weight Particle Swarm Optimization (MIWPSO) and Fuzzy C-Means clustering (FCM) algorithms. Traditional FCM does not incorporate spatial neighborhood features, making it highly sensitive to noise, which significantly affects segmentation output. Our method uses a modified FCM that includes spatial functions in the fuzzy membership matrix to suppress noise. The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation. Furthermore, segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule (DT-TAR) algorithm. Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42% on the dataset, highlighting its significant impact on improving diagnostic capabilities in medical imaging.
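A minimal sketch of the spatial-function idea in the modified FCM: after the standard membership update, each pixel's membership is re-weighted by the membership mass of its 3x3 neighbourhood, which suppresses isolated noisy pixels. The exponents `p` and `q` and the neighbourhood size follow the common spatial-FCM formulation; the paper's exact update (and the MIWPSO coupling) may well differ.

```python
import numpy as np

def spatial_fcm(img, c=2, m=2.0, p=1, q=1, iters=20, seed=0):
    """Fuzzy C-Means with a spatial membership function (sketch)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    x = img.astype(float).ravel()
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)
    for _ in range(iters):
        um = u ** m
        centers = (um @ x) / um.sum(axis=1)
        # standard FCM membership update from distances to the centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=0)
        # spatial function: membership mass of each pixel's 3x3 window
        hfun = np.empty_like(u)
        for k in range(c):
            g = np.pad(u[k].reshape(h, w), 1, mode="edge")
            hfun[k] = sum(g[i:i + h, j:j + w]
                          for i in range(3) for j in range(3)).ravel()
        u = (u ** p) * (hfun ** q)
        u /= u.sum(axis=0)
    return u.argmax(axis=0).reshape(h, w), centers
```

An isolated bright pixel inside a dark region keeps a high "bright" membership under plain FCM, but its neighbourhood mass pulls it back to the surrounding cluster here, which is exactly the noise sensitivity the abstract is addressing.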
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and magnetic resonance imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of high-definition medical images burden the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
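To make the bit-plane/chaos combination concrete, here is a toy cipher: a keystream generated by the logistic map is XOR-ed against each bit plane of an 8-bit image. This only sketches the two ingredients named in the abstract; the actual scheme's key schedule and diffusion steps are not described here, and a bare XOR stream cipher would not by itself provide the claimed security properties.

```python
import numpy as np

def logistic_keystream(n, x0=0.7, r=3.99):
    """Chaotic keystream from the logistic map x -> r*x*(1-x)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1 - x)
        xs[i] = x
    return (xs * 256).astype(np.uint8)   # quantise (0,1) to 8-bit bytes

def xor_bitplane_cipher(img, x0=0.7):
    """Encrypt (or decrypt: XOR is symmetric) an 8-bit image by XOR-ing
    each bit plane with the matching bit of the chaotic keystream."""
    flat = img.astype(np.uint8).ravel()
    ks = logistic_keystream(flat.size, x0=x0)
    out = np.zeros_like(flat)
    for b in range(8):                     # bit-plane decomposition
        plane = (flat >> b) & 1
        kbit = (ks >> b) & 1
        out |= ((plane ^ kbit) << b).astype(np.uint8)
    return out.reshape(img.shape)
```

The initial condition `x0` acts as the key: chaotic sensitivity to `x0` is what gives such schemes their large effective key space.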
Visual data mining is an important branch of data mining techniques. Most approaches are based on computer graphics techniques, but few exploit image processing. This paper proposes an image processing method, named RNAM (resemble neighborhood averaging method), to facilitate visual data mining: it post-processes the data mining result-image and helps users discover significant features and useful patterns effectively. The experiments show that the method is intuitive, easy to understand, and effective. It provides a new approach for visual data mining.
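The abstract does not define RNAM beyond its name, so the following is only a guess at the idea behind "resemble neighborhood averaging": average each pixel with just those neighbours whose values resemble it, smoothing the interior of patterns in the result-image while keeping pattern boundaries sharp. The function name and tolerance parameter are hypothetical.

```python
import numpy as np

def resemble_neighborhood_average(img, tol=0.2):
    """Average each pixel with the 3x3 neighbours whose values lie within
    `tol` of it: smoothing inside regions, no blurring across edges."""
    img = img.astype(float)
    h, w = img.shape
    padded = np.pad(img, 1, mode="edge")
    out = np.empty_like(img)
    for r in range(h):
        for c in range(w):
            win = padded[r:r + 3, c:c + 3]
            mask = np.abs(win - img[r, c]) <= tol   # "resembling" neighbours
            out[r, c] = win[mask].mean()
    return out
```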
In a geographical information system (GIS), geological data are stored in vector format, while other data such as remote sensing images, geographical data, and geochemical data are stored as rasters. This paper converts the vector data into 8-bit images by programming, weighting each layer according to its importance to mineralization. The raster images produced this way convey the geological meaning directly. The paper also fuses geographical and geochemical data with the programmed strata data. The results show that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and present it more directly.
Remotely sensed spectral data and images are acquired under significant additional effects accompanying their formation process, which largely determine measurement accuracy. To be usable in subsequent quantitative analysis and assessment, such data should undergo preliminary processing aimed at improving their accuracy and credibility. The paper considers some major problems related to the preliminary processing of remotely sensed spectral data and images. The major factors that cause data noise or uncertainty are analyzed, along with methods for their reduction or removal. An assessment is made of the extent to which available equipment and technologies may help reduce measurement errors.
The traditional printing checking method uses printing control strips, but its repeatability and stability are poor. In this paper, image-based methods for checking printing quality are taken as the research object. Building on traditional printing-quality checking methods, and combining digital image processing theory with printing theory in the domain of image quality checking, we construct an image-based printing-quality checking system and expound its theoretical design and model. This is an application of machine vision. The system uses a high-resolution industrial CCD (Charge Coupled Device) color camera. It displays real-time photographs on the monitor, feeds the video signal to an image acquisition card, and transmits the image data through the computer's PCI bus to memory; at the same time, the system carries out processing and data analysis. The method is validated by experiments, mainly concerning the data conversion of images and the display of ink limits in printing.
The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way, with a spacecraft traveling at a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data in SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in each image under compression. Multiple predictive coding methods are combined to eliminate redundancy by exploiting the spatial, spectral, and polarization correlations in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67. The compression time is also less than the general observation period. The method is highly feasible and can be easily adapted to MHI.
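A minimal sketch of the two ideas the abstract spells out: discard the off-disk background, then apply predictive coding before entropy coding. Here the predictor is simply the previous on-disk pixel and zlib stands in for the entropy coder; the paper combines multiple predictors across space, spectrum, and polarization, which this sketch does not attempt.

```python
import zlib
import numpy as np

def compress_disk(img, disk_mask):
    """Lossless sketch: keep only on-disk pixels, take previous-pixel
    prediction residuals, and entropy-code them with zlib.
    Residuals are assumed to fit in int16."""
    pixels = img[disk_mask].astype(np.int16)
    resid = np.empty_like(pixels)
    resid[0] = pixels[0]
    resid[1:] = pixels[1:] - pixels[:-1]     # prediction residuals
    return zlib.compress(resid.tobytes(), 9)

def decompress_disk(blob, disk_mask, shape):
    """Exact inverse: cumulative sum of residuals restores the pixels."""
    resid = np.frombuffer(zlib.decompress(blob), dtype=np.int16)
    out = np.zeros(shape, dtype=np.int16)
    out[disk_mask] = np.cumsum(resid)
    return out
```

On smooth solar-disk data the residuals are small and highly repetitive, which is why prediction before entropy coding raises the compression ratio.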
To evaluate radiometric normalization techniques, two image normalization algorithms for absolute radiometric correction of Landsat imagery were quantitatively compared in this paper: the Illumination Correction Model proposed by Markham and Irish, and the Illumination and Atmospheric Correction Model developed by the Remote Sensing and GIS Laboratory of Utah State University. Relative noise, correlation coefficient, and slope value were used as the evaluation criteria; they were derived from pseudo-invariant features identified in multitemporal Landsat image pairs of the Xiamen (厦门) and Fuzhou (福州) areas, both located in eastern Fujian (福建) Province, China. Compared with the unnormalized images, the radiometric differences between the normalized multitemporal images were significantly reduced when the images were taken in different seasons; however, there was no significant difference between the normalized and unnormalized images under similar seasonal conditions. Furthermore, the correction results of the two algorithms are similar when the images are relatively clear with a uniform atmospheric condition. Therefore, radiometric normalization should be carried out when the multitemporal images show a significant seasonal difference.
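Relative radiometric normalization with pseudo-invariant features (PIFs) reduces, in essence, to fitting a gain and offset over the PIF pixels and applying them to the whole scene. The sketch below shows that step only; the two models compared in the paper are physically based absolute corrections, which a plain linear fit like this does not reproduce.

```python
import numpy as np

def normalize_to_reference(subject, reference, pif_mask):
    """Fit gain/offset by least squares over pseudo-invariant feature
    pixels, then apply them to the whole subject image."""
    x = subject[pif_mask].astype(float)
    y = reference[pif_mask].astype(float)
    gain, offset = np.polyfit(x, y, 1)        # y ~ gain * x + offset
    return gain * subject.astype(float) + offset, (gain, offset)
```

The slope of this fit is essentially the "slope value" criterion the abstract lists: after a good normalization it should be close to 1 between the corrected image pair.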
This paper presents a lineament detection method using multi-band remote sensing images. The main objective of this work is to design an automatic image processing tool for lineament mapping from Landsat-7 ETM+ satellite data. Five procedures were involved: 1) principal component analysis; 2) image enhancement using the histogram equalization technique; 3) directional Sobel filtering of the original data; 4) histogram segmentation; and 5) binary image generation. The applied methodology helped identify several known large-scale faults in northeastern Tunisia. The statistical and spatial analyses of the lineament map indicate differences in the morphological appearance of lineaments in the satellite image. Indeed, the lineaments show a specific organization: five groups were classified based on three orientations, NE-SW, E-W, and NW-SE. Overlaying the lineament map on the geologic map confirms that these lineaments of diverse directions can be identified and recognized in the field as faults. The identified lineaments were linked to deep faults caused by tectonic movements in Tunisia. This study shows the performance of satellite image processing in the analysis and mapping of the tectonic accidents in the northern Atlas.
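Steps 2 through 5 of the pipeline can be sketched directly in NumPy; step 1, the principal component analysis of the multi-band data, is assumed to have already produced a single 8-bit band. The percentile used for the histogram segmentation is an illustrative choice, not the paper's threshold.

```python
import numpy as np

def _conv3(padded, k, h, w):
    """Valid 3x3 correlation over an edge-padded image."""
    return sum(k[i, j] * padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3))

def sobel_lineaments(img, thresh_pct=90):
    """Histogram equalisation, directional Sobel gradients, percentile
    segmentation, and a binary lineament map, for an 8-bit band."""
    h, w = img.shape
    flat = img.ravel()
    # 2) histogram equalisation
    cdf = np.bincount(flat, minlength=256).cumsum() / flat.size
    eq = (cdf[flat] * 255.0).reshape(h, w)
    # 3) directional Sobel filters (x and y)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    padded = np.pad(eq, 1, mode="edge")
    gx = _conv3(padded, kx, h, w)
    gy = _conv3(padded, kx.T, h, w)
    mag = np.hypot(gx, gy)
    # 4)-5) histogram segmentation and binary image generation
    return mag >= np.percentile(mag, thresh_pct)
```

Orientation grouping (NE-SW, E-W, NW-SE) would follow from `np.arctan2(gy, gx)` on the retained pixels, which is how the five lineament groups in the study could be separated.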
This paper introduces MapReduce as a distributed data processing model, using the open-source Hadoop framework for manipulating large volumes of data. The huge volume of data in the modern world, particularly multimedia data, creates new requirements for processing and storage. As an open-source distributed computational framework, Hadoop allows large numbers of images to be processed on an arbitrarily large set of computing nodes by providing the necessary infrastructure. This paper introduces the framework, current work based on it, and its advantages and disadvantages.
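The MapReduce model itself is easy to show in miniature: a mapper emits key-value pairs, the framework groups them by key (the "shuffle"), and a reducer folds each group. The toy below runs in-process; Hadoop distributes exactly this pattern across nodes and adds scheduling and fault tolerance. The record format and job are invented for illustration.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Toy in-process MapReduce."""
    groups = defaultdict(list)
    for record in records:                  # "map" phase
        for key, value in mapper(record):
            groups[key].append(value)
    # grouping above is the "shuffle"; now the "reduce" phase
    return {key: reducer(key, values) for key, values in groups.items()}

# Hypothetical job: total bytes of image data per file format,
# standing in for a real distributed image processing workload.
images = [("a.png", 10), ("b.jpg", 20), ("c.png", 30)]
totals = map_reduce(
    images,
    mapper=lambda rec: [(rec[0].rsplit(".", 1)[1], rec[1])],
    reducer=lambda key, sizes: sum(sizes),
)
```

Because the mapper and reducer are pure functions over independent records and groups, the framework is free to run them anywhere, which is the property that lets Hadoop scale image workloads across nodes.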
Precision Livestock Farming studies are based on data measured from animals via technical devices. In automated systems, the animals' reactions to the devices, and individual animal behaviour during the gathering of sensor data, are usually not accounted for. In this study, 14 Holstein-Friesian cows were recorded with a 2D video camera while walking through a scanning passage comprising six Microsoft Kinect 3D cameras. Elementary behavioural traits, such as how long the cows avoided the passage, the time they needed to walk through, and the number of times they stopped walking, were assessed from the video footage and analysed with respect to the target variable "udder depth", which was calculated from the recorded 3D data using an automated procedure. Ten repeated passages were recorded for each cow. Over the repetitions, the cows adjusted individually (p < 0.001) to the recording situations. The averaged total time to complete a passage (p = 0.05) and the averaged number of stops (p = 0.07) depended on the lactation numbers of the cows. The measurement precision of the target variable "udder depth" was affected by the time the cows avoided the recording (p = 0.06) and by the time it took them to walk through the scanning passage (p = 0.03). Effects of animal behaviour during the collection of sensor data can alter the results and should therefore be considered in the development of sensor-based devices.
Backscatter electron analysis in scanning electron microscopes (BSE-SEM) produces high-resolution image data of both rock samples and thin sections, showing detailed structural and geochemical (mineralogical) information. This allows an in-depth exploration of rock microstructures and their coupled chemical characteristics in the BSE-SEM image using image processing techniques. Although image processing is a powerful tool for revealing the more subtle data "hidden" in a picture, it is not a commonly employed method in geoscientific microstructural analysis. Here, we briefly introduce the general principles of image processing and discuss its application to studying rock microstructures using BSE-SEM image data.
Scientists are dedicated to detecting the onset of Alzheimer's disease in order to find a cure or, at the very least, medication that can slow its progression. This article explores the effectiveness of longitudinal data analysis, artificial intelligence, and machine learning approaches based on magnetic resonance imaging and positron emission tomography neuroimaging modalities for progression estimation and the detection of Alzheimer's disease onset. The significance of feature extraction in highly complex neuroimaging data, the identification of vulnerable brain regions, and the determination of threshold values for plaques, tangles, and neurodegeneration in these regions are evaluated extensively. Developing automated methods to improve these research areas would enable specialists to determine the progression of the disease, link the biomarkers to outcomes, and detect Alzheimer's disease onset more accurately.
A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and can implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation in observed data. The paper then discusses the application of EDD to image compression, presents a two-dimensional data decomposition framework, and modifies the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is better suited to compressing non-stationary image data.
Due to the limited scenes that a synthetic aperture radar (SAR) satellite can detect, the full-track utilization rate is not high, and because of the computing and storage limitations of a single satellite, it is difficult to process the large amounts of data produced by spaceborne SARs. We propose a new, networked method of satellite data processing to improve efficiency. A multi-satellite distributed SAR real-time processing method based on the Chirp Scaling (CS) imaging algorithm is studied in this paper, and a distributed data processing system is built with field programmable gate array (FPGA) chips as its kernel. Unlike traditional CS processing, the system divides data processing into three stages, and the computing tasks of each stage are allocated appropriately to different data processing units (i.e., satellites). The method effectively saves the satellites' computing and storage resources, improves the utilization rate of a single satellite, and shortens the data processing time. Gaofen-3 (GF-3) SAR raw data were processed by the system, verifying the performance of the method.
To address the low accuracy of transverse velocity field measurements for small targets in high-resolution solar images, we propose a novel velocity field measurement method based on PWCNet, which casts the transverse velocity field measurement as an optical flow prediction problem. We evaluated the method on the Hα and TiO data sets obtained from New Vacuum Solar Telescope observations. The experimental results show that our method predicts the optical flow of small targets more effectively than several typical machine- and deep-learning methods. On the Hα data set, the proposed method improves the image structure similarity from 0.9182 to 0.9587 and reduces the mean of residuals from 24.9931 to 15.2818; on the TiO data set, it improves the image structure similarity from 0.9289 to 0.9628 and reduces the mean of residuals from 25.9908 to 17.0194. The optical flow predicted by the proposed method can provide accurate data on the atmospheric motion in solar images. The code implementing the proposed method is available at https://github.com/lygmsy123/transverse-velocity-field-measurement.
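The two reported metrics are easy to state precisely. Below is a single-window variant of structural similarity and a mean-of-residuals computed after warping one frame by a flow (integer-valued here, for simplicity). The paper presumably uses the standard local-window SSIM and sub-pixel warping, so treat this as a definitional sketch only.

```python
import numpy as np

def global_ssim(a, b, L=255.0):
    """SSIM computed over the whole image as a single window."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2))

def flow_residual(frame0, frame1, flow):
    """Mean absolute residual after warping frame0 by an integer flow
    (dy, dx); a perfect flow drives this to zero."""
    dy, dx = flow
    warped = np.roll(np.roll(frame0, dy, axis=0), dx, axis=1)
    return np.abs(warped.astype(float) - frame1).mean()
```

Higher SSIM and lower residual after warping are exactly the directions of improvement the abstract reports for the PWCNet-based method.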
Preserving biodiversity and maintaining ecological balance is essential under current environmental conditions, yet vegetation is difficult to determine using traditional map classification approaches. The primary issue in detecting vegetation patterns is that they appear with complex spatial structures and similar spectral properties, so multiple spectral analyses are needed to improve the accuracy of vegetation mapping from remotely sensed images. The proposed framework ensembles three effective strategies into a robust architecture for vegetation mapping: a feature-based approach, a region-based approach, and a texture-based approach for classifying the vegetation area. The novel Deep Meta Fusion Model (DMFM) uses a fusion framework of residual stacking of convolution layers with unique covariate features (UCF), intensity features (IF), and colour features (CF). The GPU-utilization overhead of convolutional neural network (CNN) models is reduced here with a lightweight architecture. The system considers detailed feature areas to improve classification accuracy and reduce processing time. The proposed DMFM model achieved 99% accuracy with a maximum processing time of 130 s, and the training, testing, and validation losses fall to levels that reflect the model's quality. The system serves as a standard analysis platform for dynamic datasets, since all three feature types (UCF, IF, and CF) are taken into account.
Detecting the deformation of tunnel structures with image sensor networks is an advanced application of spatial sensor technology. Addressing the vertical settlement of metro tunnels caused by internal and external stress over long periods of operation, the overall scheme and measuring principle of a tunnel deformation detection system are introduced. Image data acquisition and processing of the detection target are achieved through the cooperative work of image sensors and an ARM embedded system, and RS485 communication transfers the data between the ARM memory and the host computer. The database system on the station platform analyses the detection data and derives the deformation state of the tunnel's inner wall, making it possible to give early warning of tunnel deformation and take preventive measures in time.
The hard X-ray modulation telescope (HXMT) mission is mainly devoted to performing an all-sky survey at 1-250 keV with both high sensitivity and high spatial resolution. The observed data reduction, as well as the image reconstruction for HXMT, can be achieved using the direct demodulation method (DDM). However, the original DDM is too computationally expensive for multi-dimensional, high-resolution data to be employed for HXMT. We propose an accelerated direct demodulation method especially adapted to data from HXMT, and present simulations to demonstrate the method.
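Direct demodulation solves the observation equation d = P f iteratively under physical constraints such as non-negativity of the source f. As a stand-in, the sketch below uses a Richardson-Lucy-style multiplicative iteration, which preserves non-negativity by construction; the actual DDM iteration, its constraints, and the paper's acceleration are different and not reproduced here.

```python
import numpy as np

def demodulate(P, d, iters=20000):
    """Solve the observation equation d = P @ f for a non-negative
    source f by multiplicative (Richardson-Lucy-type) iteration."""
    f = np.full(P.shape[1], max(d.mean(), 1e-12))   # flat initial guess
    colsum = P.sum(axis=0)
    for _ in range(iters):
        pred = np.maximum(P @ f, 1e-12)             # forward model P @ f
        f *= (P.T @ (d / pred)) / colsum            # multiplicative update
    return f
```

On a toy problem where P is a circulant blur, the iteration recovers a point source from its blurred observation; the cost per iteration is one forward and one transposed application of P, which is what makes acceleration important for the multi-dimensional HXMT case.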
Funding (light-curve attitude study): National Natural Science Foundation of China (NSFC, Nos. 12373086 and 12303082); CAS "Light of West China" Program; Yunnan Revitalization Talent Support Program, Yunnan Province; National Key R&D Program of China; Gravitational Wave Detection Project No. 2022YFC2203800.
Funding (diabetic retinopathy study): Scientific Research Deanship, University of Ha'il, Saudi Arabia, project number RG-21104.
Funding (visual data mining study): National Natural Science Foundation of China (60173051); Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions, Ministry of Education of China; Liaoning Province Higher Education Research Foundation (20040206).
文摘Visual data mining is one of important approach of data mining techniques. Most of them are based on computer graphic techniques but few of them exploit image-processing techniques. This paper proposes an image processing method, named RNAM (resemble neighborhood averaging method), to facilitate visual data mining, which is used to post-process the data mining result-image and help users to discover significant features and useful patterns effectively. The experiments show that the method is intuitive, easily-understanding and effectiveness. It provides a new approach for visual data mining.
文摘The geological data are constructed in vector format in geographical information system (GIS) while other data such as remote sensing images, geographical data and geochemical data are saved in raster ones. This paper converts the vector data into 8 bit images according to their importance to mineralization each by programming. We can communicate the geological meaning with the raster images by this method. The paper also fuses geographical data and geochemical data with the programmed strata data. The result shows that image fusion can express different intensities effectively and visualize the structure characters in 2 dimensions. Furthermore, it also can produce optimized information from multi-source data and express them more directly.
文摘Remotely sensed spectral data and images are acquired under significant additional effects accompanying their major formation process, which greatly determine measurement accuracy. In order to be used in subsequent quantitative analysis and assessment, this data should be subject to preliminary processing aiming to improve its accuracy and credibility. The paper considers some major problems related with preliminary processing of remotely sensed spectral data and images. The major factors are analyzed, which affect the occurrence of data noise or uncertainties and the methods for reduction or removal thereof. Assessment is made of the extent to which available equipment and technologies may help reduce measurement errors.
Abstract: Traditional print quality inspection relies on printing control strips, but the results are poor in repeatability and stability. This paper takes image-based print quality inspection as its research object. Building on traditional print quality inspection methods, and combining digital image processing theory with printing theory in the emerging domain of image quality inspection, it constructs an image-processing-based print quality inspection system and expounds the system's theoretical design and model. The system is an application of machine vision: a high-resolution industrial CCD (Charge Coupled Device) color camera displays real-time photographs on a monitor and feeds the video signal to an image acquisition card, which transmits the image data over the computer's PCI bus to memory, where the system performs processing and data analysis. The method is validated by experiments, mainly concerning image data conversion and the ink-limit behavior of printing.
Funding: Supported by the National Key R&D Program of China (grant No. 2022YFF0503800), the National Natural Science Foundation of China (NSFC) (grant No. 11427901), the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS-SPP) (grant No. XDA15320102), and the Youth Innovation Promotion Association (CAS No. 2022057).
Abstract: The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way, with a spacecraft traveling at a large solar inclination angle and a small ellipticity. One of the most significant challenges, however, lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data in SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in each image under compression. Multiple predictive coding methods are combined to eliminate redundancy by exploiting the spatial, spectral, and polarization correlations in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67, with a compression time shorter than the general observation period. The method is highly feasible and can be easily adapted to MHI.
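The abstract names predictive coding as the core redundancy-elimination step without giving the exact predictors. A minimal sketch of the simplest such predictor, left-neighbor (delta) coding, which is exactly invertible and therefore lossless (names and the toy data are illustrative):

```python
import numpy as np

def predict_residuals(row):
    """Left-neighbor predictive coding: store the first sample, then
    differences from the previous sample. Smooth data yields small
    residuals that compress better under entropy coding."""
    row = np.asarray(row, dtype=np.int64)
    residuals = np.empty_like(row)
    residuals[0] = row[0]
    residuals[1:] = row[1:] - row[:-1]
    return residuals

def reconstruct(residuals):
    """Invert the predictor exactly -- the scheme is lossless."""
    return np.cumsum(residuals)

row = [100, 101, 103, 102, 105]
res = predict_residuals(row)
print(res.tolist())               # -> [100, 1, 2, -1, 3]
print(reconstruct(res).tolist())  # -> [100, 101, 103, 102, 105]
```

The paper's method combines several predictors across space, spectrum, and polarization; the same invert-exactly property is what keeps the pipeline lossless.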
Funding: This paper is supported by the National Natural Science Foundation of China (No. 40371107).
Abstract: In order to evaluate radiometric normalization techniques, two image normalization algorithms for absolute radiometric correction of Landsat imagery were quantitatively compared in this paper: the Illumination Correction Model proposed by Markham and Irish, and the Illumination and Atmospheric Correction Model developed by the Remote Sensing and GIS Laboratory of Utah State University. Relative noise, correlation coefficient, and slope value were used as the criteria for evaluation and comparison, derived from pseudo-invariant features identified in multitemporal Landsat image pairs of the Xiamen (厦门) and Fuzhou (福州) areas, both located in eastern Fujian (福建) Province, China. Compared with the unnormalized images, the radiometric differences between the normalized multitemporal images were significantly reduced when the seasons of the multitemporal images differed; however, there was no significant difference between the normalized and unnormalized images under similar seasonal conditions. Furthermore, the correction results of the two algorithms are similar when the images are relatively clear with uniform atmospheric conditions. Therefore, radiometric normalization procedures should be carried out when the multitemporal images have a significant seasonal difference.
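Two of the paper's three criteria, the correlation coefficient and the slope, are regression statistics over pseudo-invariant feature (PIF) pixel pairs (the paper's exact definition of relative noise is not given here, so it is omitted). A minimal sketch, assuming the PIF values are available as flat arrays (function names are illustrative):

```python
import numpy as np

def normalization_criteria(ref, target):
    """Evaluate radiometric normalization using pseudo-invariant features:
    the correlation coefficient and regression slope between reference
    and target digital numbers (both 1-D arrays of PIF pixel values).
    Ideal normalization gives correlation ~1 and slope ~1."""
    ref = np.asarray(ref, dtype=float)
    target = np.asarray(target, dtype=float)
    corr = np.corrcoef(ref, target)[0, 1]
    slope = np.polyfit(ref, target, 1)[0]
    return corr, slope

# Perfectly normalized PIFs: correlation 1, slope 1.
ref = np.array([10.0, 20.0, 30.0, 40.0])
corr, slope = normalization_criteria(ref, ref)
print(round(corr, 3), round(slope, 3))  # -> 1.0 1.0
```

A slope far from 1 with high correlation indicates a systematic gain difference between dates, exactly the situation normalization is meant to remove.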
Abstract: This paper presents a lineament detection method using multi-band remote sensing images. The main objective of this work is to design an automatic image processing tool for lineament mapping from Landsat-7 ETM+ satellite data. Five procedures are involved: 1) principal component analysis; 2) image enhancement using the histogram equalization technique; 3) directional Sobel filtering of the original data; 4) histogram segmentation; and 5) binary image generation. The methodology contributed to identifying several known large-scale faults in northeastern Tunisia. Statistical and spatial analyses of the lineament map indicate differences in the morphological appearance of lineaments in the satellite image; indeed, the lineaments present a specific organization. Five groups were classified based on three orientations: NE-SW, E-W, and NW-SE. Overlaying the lineament map on the geologic map confirms that these lineaments of diverse directions can be identified and recognized in the field as faults. The identified lineaments were linked to deep faults caused by tectonic movements in Tunisia. This study shows the performance of satellite image processing in the analysis and mapping of faults in the northern Atlas.
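Step 3 of the pipeline applies directional Sobel filters, which respond to edges in one orientation. A minimal sketch on a small grayscale array (names are illustrative; a production version would use an optimized convolution routine):

```python
import numpy as np

def directional_sobel(img, kernel):
    """Convolve an image with a 3x3 directional Sobel kernel
    (valid region only) to emphasize edges in one orientation."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            out[y, x] = np.sum(img[y:y + 3, x:x + 3] * kernel)
    return out

# The horizontal-edge kernel responds strongly to a vertical intensity step.
sobel_h = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])
img = np.vstack([np.zeros((3, 5)), np.ones((3, 5))])  # step between rows 2 and 3
resp = directional_sobel(img, sobel_h)
print(resp.max())  # -> 4.0
```

Rotating the kernel gives filters tuned to other orientations, which is how the NE-SW, E-W, and NW-SE lineament families can be separated.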
Abstract: This paper introduces MapReduce as a distributed data processing model, using the open source Hadoop framework to manipulate large volumes of data. The huge volume of data in the modern world, particularly multimedia data, creates new requirements for processing and storage. As an open source distributed computational framework, Hadoop allows large numbers of images to be processed on an arbitrarily large set of computing nodes by providing the necessary infrastructure. This paper introduces the framework and current work on it, along with its advantages and disadvantages.
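The MapReduce dataflow can be sketched in a few lines of single-process Python; Hadoop's contribution is distributing exactly this map, group-by-key, reduce pattern across nodes. A minimal sketch (the toy job and names are illustrative):

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal in-process MapReduce: map each record to (key, value)
    pairs, group values by key, then reduce each group -- the same
    dataflow that Hadoop distributes across computing nodes."""
    groups = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            groups[key].append(value)
    return {key: reducer(key, values) for key, values in groups.items()}

# Toy job: count images per format tag.
images = [("a.jpg", "jpeg"), ("b.png", "png"), ("c.jpg", "jpeg")]
counts = map_reduce(images,
                    mapper=lambda rec: [(rec[1], 1)],
                    reducer=lambda k, vs: sum(vs))
print(counts)  # -> {'jpeg': 2, 'png': 1}
```

Because the mapper sees one record at a time and the reducer sees one key's values at a time, both stages parallelize naturally, which is why the model scales to very large image collections.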
Abstract: Precision Livestock Farming studies are based on data measured from animals via technical devices. In automated settings, the animals' reactions toward the devices, and individual animal behaviour during the gathering of sensor data, are usually not accounted for. In this study, 14 Holstein-Friesian cows were recorded with a 2D video camera while walking through a scanning passage comprising six Microsoft Kinect 3D cameras. Elementary behavioural traits, such as how long the cows avoided the passage, the time they needed to walk through, and the number of times they stopped walking, were assessed from the video footage and analysed with respect to the target variable "udder depth", which was calculated from the recorded 3D data using an automated procedure. Ten repeated passages were recorded for each cow. Over the repetitions, the cows adjusted individually (p < 0.001) to the recording situations. The averaged total time to complete a passage (p = 0.05) and the averaged number of stops (p = 0.07) depended on the lactation numbers of the cows. The measurement precision of the target variable "udder depth" was affected by the time the cows avoided the recording (p = 0.06) and by the time it took them to walk through the scanning passage (p = 0.03). Effects of animal behaviour during the collection of sensor data can alter the results and should therefore be considered in the development of sensor-based devices.
Funding: Funded by the National Natural Science Foundation (No. 42261134535), the National Key Research and Development Program (No. 2023YFE0125000), the Frontiers Science Center for Deep-time Digital Earth (No. 2652023001), and the 111 Project of the Ministry of Science and Technology (No. BP0719021); supported by the Department of Geology, University of Vienna (No. FA536901).
Abstract: Backscattered electron analysis with scanning electron microscopes (BSE-SEM) produces high-resolution image data of both rock samples and thin sections, showing detailed structural and geochemical (mineralogical) information. This allows an in-depth exploration of rock microstructures and the coupled chemical characteristics in the BSE-SEM image using image processing techniques. Although image processing is a powerful tool for revealing the more subtle data "hidden" in a picture, it is not a commonly employed method in geoscientific microstructural analysis. Here, we briefly introduce the general principles of image processing and further discuss its application to studying rock microstructures using BSE-SEM image data.
Abstract: Scientists are dedicated to detecting the onset of Alzheimer's disease, aiming to find a cure or, at the very least, medication that can slow the progression of the disease. This article explores the effectiveness of longitudinal data analysis, artificial intelligence, and machine learning approaches, based on magnetic resonance imaging and positron emission tomography neuroimaging modalities, for progression estimation and the detection of Alzheimer's disease onset. The significance of feature extraction in highly complex neuroimaging data, the identification of vulnerable brain regions, and the determination of threshold values for plaques, tangles, and neurodegeneration in these regions will be extensively evaluated. Developing automated methods to improve these research areas would enable specialists to determine the progression of the disease and find the link between biomarkers and more accurate detection of Alzheimer's disease onset.
Funding: This project was supported by the National Natural Science Foundation of China (60532060), the Hainan Education Bureau Research Project (Hjkj200602), and the Hainan Natural Science Foundation (80551).
Abstract: A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and can implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively decorrelate the observed data. The paper then discusses applications of EDD to image compression, presents a two-dimensional data decomposition framework, and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
Funding: Project (2017YFC1405600) supported by the National Key R&D Program of China; Project (18JK05032) supported by the Scientific Research Project of the Education Department of Shaanxi Province, China.
Abstract: Because synthetic aperture radar (SAR) satellites can detect only a limited range of scenes, the full-track utilization rate is not high, and the computing and storage limitations of a single satellite make it difficult to process the large volumes of data produced by spaceborne SAR. A new networked-satellite data processing method is proposed to improve processing efficiency. A multi-satellite distributed SAR real-time processing method based on the Chirp Scaling (CS) imaging algorithm is studied in this paper, and a distributed data processing system is built with field programmable gate array (FPGA) chips as its kernel. Unlike traditional CS processing, the system divides data processing into three stages, with the computing tasks reasonably allocated to different data processing units (i.e., satellites) at each stage. The method effectively saves the computing and storage resources of the satellites, improves the utilization rate of a single satellite, and shortens the data processing time. Gaofen-3 (GF-3) satellite SAR raw data are processed by the system, verifying the performance of the method.
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 12063002, 12163004, and 12073077.
Abstract: To address the low accuracy of transverse velocity field measurements for small targets in high-resolution solar images, we propose a novel velocity field measurement method for high-resolution solar images based on PWCNet. This method transforms the transverse velocity field measurement into an optical flow prediction problem. We evaluated the performance of the proposed method using the Hα and TiO data sets obtained from New Vacuum Solar Telescope observations. The experimental results show that our method effectively predicts the optical flow of small targets in images compared with several typical machine learning and deep learning methods. On the Hα data set, the proposed method improves the image structure similarity from 0.9182 to 0.9587 and reduces the mean of the residuals from 24.9931 to 15.2818; on the TiO data set, it improves the image structure similarity from 0.9289 to 0.9628 and reduces the mean of the residuals from 25.9908 to 17.0194. The optical flow predicted using the proposed method can provide accurate data on the atmospheric motion in solar images. The code implementing the proposed method is available at https://github.com/lygmsy123/transverse-velocity-field-measurement.
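The two reported metrics are image structure similarity and the mean of the residuals. A minimal sketch of both, using a simplified single-window SSIM computed over the whole image (the paper likely uses the standard sliding-window SSIM; names here are illustrative):

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified SSIM (Wang et al.) computed over the whole image
    rather than over sliding windows."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

def mean_residual(x, y):
    """Mean absolute residual between two frames."""
    return np.abs(x - y).mean()

a = np.random.default_rng(0).random((16, 16))
print(round(global_ssim(a, a), 3))  # -> 1.0  (identical images)
print(mean_residual(a, a))          # -> 0.0
```

In this context the metrics would compare a frame warped by the predicted flow against the true next frame: better flow gives higher structure similarity and a smaller mean residual.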
Abstract: Preserving biodiversity and maintaining ecological balance is essential under current environmental conditions. Determining vegetation using traditional map classification approaches is challenging: the primary issue in detecting vegetation patterns is that they appear with complex spatial structures and similar spectral properties, so multiple spectral analyses are needed to improve the accuracy of vegetation mapping from remotely sensed images. The proposed framework ensembles three effective strategies to produce a robust architecture for vegetation mapping, comprising a feature-based approach, a region-based approach, and a texture-based approach for classifying the vegetation area. A novel Deep Meta Fusion Model (DMFM) is created with a unique fusion framework of residual stacking of convolution layers with unique covariate features (UCF), intensity features (IF), and colour features (CF). The overhead of GPU utilization in convolutional neural network (CNN) models is reduced here with a lightweight architecture. The system considers detailed feature areas to improve classification accuracy and reduce processing time. The proposed DMFM model achieved 99% accuracy with a maximum processing time of 130 s, and the training, testing, and validation losses degrade to a level that demonstrates the quality of the model's performance. Since all three feature types (UCF, IF, and CF) are well accounted for, the system can serve as a standard analysis platform for dynamic datasets.
Funding: Science and Technology Commission of Shanghai Municipality (No. 08201202103).
Abstract: Detecting the deformation of tunnel structures with image sensor networks is an advanced study and application of spatial sensor technology. Addressing the vertical settlement of metro tunnels caused by internal and external stress over long periods of operation, the overall scheme and measuring principle of a tunnel deformation detection system are introduced. Image data acquisition and processing of the detection target are achieved through the cooperative work of an image sensor and an ARM embedded system, with RS485 communication carrying the data between the ARM memory and the host computer. The database system on the station platform analyses the detection data and derives the deformation state of the tunnel's inner wall, making it possible to give early warning of tunnel deformation and take preventive measures in time.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11173038 and 11103022) and the Tsinghua University Initiative Scientific Research Program (Grant No. 20111081102).
Abstract: The hard X-ray modulation telescope (HXMT) mission is mainly devoted to performing an all-sky survey at 1-250 keV with both high sensitivity and high spatial resolution. Data reduction and image reconstruction for HXMT can be achieved with the direct demodulation method (DDM); however, the original DDM is too computationally expensive for high-resolution multi-dimensional data to be employed on HXMT data. We propose an accelerated direct demodulation method especially adapted for data from HXMT. Simulations are also presented to demonstrate this method.
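The accelerated DDM itself is not reproduced here, but the underlying idea, iteratively solving the modulation equation d = P f for the sky intensity f under a non-negativity constraint, can be sketched with a Richardson-Lucy style multiplicative update (a simplified stand-in, not the authors' algorithm; the toy response matrix and names are invented for illustration):

```python
import numpy as np

def direct_demodulation(d, P, n_iter=1000):
    """Iteratively solve the modulation equation d = P @ f for the
    object intensity f, using a Richardson-Lucy style multiplicative
    update that keeps f non-negative at every step."""
    f = np.ones(P.shape[1])
    for _ in range(n_iter):
        model = P @ f
        f *= P.T @ (d / np.maximum(model, 1e-12)) / P.sum(axis=0)
    return f

# Toy 1-D example: recover a source distribution through a smearing response.
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
f_true = np.array([1.0, 5.0, 2.0])
d = P @ f_true
f_est = direct_demodulation(d, P)
print(np.round(f_est, 2))
```

The cost of such iterations grows quickly with resolution and dimensionality, which is precisely the bottleneck the paper's accelerated variant targets.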