An airborne 3D imaging system that integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR), and a spectral scanner has been developed successfully. The spectral scanner and the SLR share the same optical system, which ensures that each laser point matches its image pixel seamlessly. The distinctive advantage of the 3D imaging system is that it can produce geo-referenced images and digital surface model (DSM) images without any ground control points (GCPs). Surveying GCPs is no longer necessary, and with suitable software the data can be processed into DSMs and geo-referenced images in quasi-real time, so the efficiency of the 3D imaging system is 10–100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing the GPS data, calculating the positions of the laser sample points, producing the geo-referenced image, producing the DSM, and mosaicking the strips. The principle of the 3D imaging system is introduced first, and then we focus on the fast processing technique and algorithms. Flight tests and the processed results show that the processing technique is feasible and meets the requirements of quasi-real-time applications.
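As an illustration of the georeferencing step ("calculating the positions of laser sample points"), the minimal C++ sketch below computes the local-frame coordinates of one laser sample from the GPS-derived platform position, the AMU attitude angles, the scan angle, and the measured range. The rotation order (yaw-pitch-roll), the local east-north-up frame, and all variable names are assumptions made for illustration; the actual system geometry and boresight calibration are not given in the abstract.

```cpp
#include <array>
#include <cmath>
#include <iostream>

// 3x3 matrix times 3-vector.
static std::array<double, 3> mul(const std::array<std::array<double, 3>, 3>& m,
                                 const std::array<double, 3>& v) {
    std::array<double, 3> r{};
    for (int i = 0; i < 3; ++i)
        r[i] = m[i][0] * v[0] + m[i][1] * v[1] + m[i][2] * v[2];
    return r;
}

// Body-to-local rotation from roll (x), pitch (y), yaw (z); assumed order R = Rz * Ry * Rx.
static std::array<std::array<double, 3>, 3> attitudeMatrix(double roll, double pitch, double yaw) {
    double cr = std::cos(roll), sr = std::sin(roll);
    double cp = std::cos(pitch), sp = std::sin(pitch);
    double cy = std::cos(yaw), sy = std::sin(yaw);
    std::array<std::array<double, 3>, 3> m{};
    m[0] = {cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr};
    m[1] = {sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr};
    m[2] = {-sp, cp * sr, cp * cr};
    return m;
}

int main() {
    // GPS-derived platform position in a local east-north-up frame (metres) -- illustrative values.
    std::array<double, 3> platform = {1200.0, 3400.0, 800.0};
    // AMU attitude (radians) and SLR scan angle / range for one sample (illustrative values).
    double roll = 0.01, pitch = -0.02, yaw = 1.57;
    double scanAngle = 0.20;   // across-track mirror angle
    double range = 850.0;      // metres

    // Laser vector in the body frame: across-track deflection, pointing downward.
    std::array<double, 3> beamBody = {range * std::sin(scanAngle), 0.0, -range * std::cos(scanAngle)};

    // Rotate into the local frame and add the platform position to get the ground point.
    std::array<double, 3> beamLocal = mul(attitudeMatrix(roll, pitch, yaw), beamBody);
    std::cout << "ground point: " << platform[0] + beamLocal[0] << ", "
              << platform[1] + beamLocal[1] << ", " << platform[2] + beamLocal[2] << "\n";
}
```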
Attitude is one of the crucial parameters of space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may appear in the obtained light curves for various reasons, so preprocessing is required to remove them and obtain high-quality light curves. Through statistical analysis, the causes of outliers fall into two main types: first, the brightness of the object increases significantly because a star passes nearby, referred to as "stellar contamination," and second, the brightness decreases markedly because of cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive, so we propose machine learning methods as a substitute. Convolutional neural networks and support vector machines (SVMs) are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine, and conduct comparative analyses of the results. Funding: National Natural Science Foundation of China (NSFC, Nos. 12373086 and 12303082); CAS "Light of West China" Program; Yunnan Revitalization Talent Support Program in Yunnan Province; National Key R&D Program of China; Gravitational Wave Detection Project No. 2022YFC2203800.
Rail surface status images are affected by noise in the shooting environment and contain a large amount of interference information, which increases the difficulty of rail surface status identification. To solve this problem, a preprocessing method for rail surface status images is proposed. The preprocessing pipeline mainly includes image graying, image denoising, image geometric correction, image extraction, and data augmentation, and finally the rail surface image database is built. The experimental results show that this method can complete image processing efficiently, facilitate feature extraction from rail surface status images, and improve rail surface status recognition accuracy.
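The first two steps of the pipeline, graying and denoising, can be sketched as follows. A luminance-weighted grayscale conversion and a 3x3 median filter are assumed here purely for illustration, since the abstract does not name the specific denoising operator used.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> pixels;  // grayscale, or interleaved RGB for the colour input
};

// Convert interleaved 8-bit RGB to grayscale with the usual luminance weights.
Image toGray(const Image& rgb) {
    Image gray{rgb.width, rgb.height, std::vector<uint8_t>(rgb.width * rgb.height)};
    for (int i = 0; i < rgb.width * rgb.height; ++i) {
        const uint8_t* p = &rgb.pixels[3 * i];
        gray.pixels[i] = static_cast<uint8_t>(0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]);
    }
    return gray;
}

// 3x3 median filter, a common choice for suppressing impulse noise in such images.
Image medianDenoise(const Image& gray) {
    Image out = gray;
    for (int y = 1; y < gray.height - 1; ++y) {
        for (int x = 1; x < gray.width - 1; ++x) {
            std::array<uint8_t, 9> window;
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    window[k++] = gray.pixels[(y + dy) * gray.width + (x + dx)];
            std::nth_element(window.begin(), window.begin() + 4, window.end());
            out.pixels[y * gray.width + x] = window[4];  // the median of the 3x3 neighbourhood
        }
    }
    return out;
}
```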
In today’s world, image processing techniques play a crucial role in the prognosis and diagnosis of various diseases, owing to the development of several precise and accurate methods for medical images. Automated analysis of medical images is essential for doctors, as manual investigation often leads to inter-observer variability. This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework. The proposed hybridized method combines Modified Inertia Weight Particle Swarm Optimization (MIWPSO) and Fuzzy C-Means clustering (FCM). Traditional FCM does not incorporate spatial neighborhood features, making it highly sensitive to noise, which significantly affects segmentation output. Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise. The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation. Furthermore, the segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule (DT-TAR) algorithm. Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42% on the dataset, highlighting its significant impact on improving diagnostic capabilities in medical imaging. Funding: Scientific Research Deanship at the University of Ha’il, Saudi Arabia, project number RG-21104.
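A minimal sketch of one fuzzy C-means iteration with a spatial term is given below. The specific spatial function used in the paper is not detailed in the abstract, so a common variant is assumed here in which each pixel's membership is reweighted by the summed memberships of its 3x3 neighbourhood before the cluster centres are updated; the function and variable names are illustrative.

```cpp
#include <cmath>
#include <vector>

// One FCM iteration on a grayscale image with a simple spatial membership reweighting.
// u[c][i] is the membership of pixel i in cluster c, v[c] the cluster centres, m the fuzzifier.
void fcmSpatialIteration(const std::vector<double>& img, int width, int height,
                         std::vector<std::vector<double>>& u, std::vector<double>& v, double m) {
    const int n = width * height;
    const int C = static_cast<int>(v.size());

    // 1. Standard membership update from distances to the current centres.
    for (int i = 0; i < n; ++i)
        for (int c = 0; c < C; ++c) {
            double dci = std::abs(img[i] - v[c]) + 1e-12;
            double sum = 0.0;
            for (int k = 0; k < C; ++k)
                sum += std::pow(dci / (std::abs(img[i] - v[k]) + 1e-12), 2.0 / (m - 1.0));
            u[c][i] = 1.0 / sum;
        }

    // 2. Spatial function (assumed form): reweight each membership by the summed memberships of
    //    its 3x3 neighbourhood, which suppresses isolated noisy pixels.
    std::vector<std::vector<double>> us = u;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            int i = y * width + x;
            double norm = 0.0;
            for (int c = 0; c < C; ++c) {
                double h = 0.0;
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        int yy = y + dy, xx = x + dx;
                        if (yy >= 0 && yy < height && xx >= 0 && xx < width)
                            h += u[c][yy * width + xx];
                    }
                us[c][i] = u[c][i] * h;
                norm += us[c][i];
            }
            for (int c = 0; c < C; ++c) u[c][i] = us[c][i] / norm;
        }

    // 3. Centre update using the fuzzified memberships.
    for (int c = 0; c < C; ++c) {
        double num = 0.0, den = 0.0;
        for (int i = 0; i < n; ++i) {
            double w = std::pow(u[c][i], m);
            num += w * img[i];
            den += w;
        }
        v[c] = num / den;
    }
}
```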
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images place a burden on the limited bandwidth of the communication channel, leading to data transmission delays. To address the security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
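To make the bit-plane/chaos idea concrete, the sketch below XORs the most significant bit planes of an 8-bit image with a keystream generated by a logistic map. The logistic map, its parameters, and the choice to encrypt only the four upper planes (a typical lightweight shortcut) are assumptions for illustration, not the exact scheme of the paper.

```cpp
#include <cstdint>
#include <vector>

// Generate a chaotic keystream with the logistic map x <- r * x * (1 - x); one byte per pixel.
std::vector<uint8_t> logisticKeystream(std::size_t n, double x0, double r = 3.99) {
    std::vector<uint8_t> key(n);
    double x = x0;  // secret initial condition acts as the key
    for (std::size_t i = 0; i < n; ++i) {
        x = r * x * (1.0 - x);
        key[i] = static_cast<uint8_t>(x * 255.0);
    }
    return key;
}

// Decompose into bit planes and encrypt the planes that carry most of the visual information.
// Decryption is the same operation with the same key, since XOR is its own inverse.
std::vector<uint8_t> encrypt(const std::vector<uint8_t>& image, double key0) {
    std::vector<uint8_t> key = logisticKeystream(image.size(), key0);
    std::vector<uint8_t> cipher(image.size(), 0);
    for (int plane = 0; plane < 8; ++plane) {
        uint8_t mask = static_cast<uint8_t>(1u << plane);
        for (std::size_t i = 0; i < image.size(); ++i) {
            // Encrypt only the four most significant planes (assumed lightweight choice);
            // the low-order planes are copied through unchanged.
            uint8_t bit = plane >= 4 ? static_cast<uint8_t>((image[i] ^ key[i]) & mask)
                                     : static_cast<uint8_t>(image[i] & mask);
            cipher[i] |= bit;
        }
    }
    return cipher;
}
```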
Multimodal medical image fusion has attained immense popularity in recent years as a robust technology for clinical diagnosis. It fuses multiple images into a single image to improve image quality by retaining significant information, aiding diagnostic practitioners in diagnosing and treating many diseases. However, recent image fusion techniques have encountered several challenges, including fusion artifacts, algorithm complexity, and high computing costs. To solve these problems, this study presents a novel medical image fusion strategy that combines the benefits of pixel significance with edge-preserving processing to achieve the best fusion performance. First, the method employs a cross-bilateral filter (CBF) that uses one image to determine the kernel and the other for filtering, and vice versa, considering both the geometric closeness and the gray-level similarities of neighboring pixels without smoothing edges. The outputs of the CBF are then subtracted from the original images to obtain detail images. The method further uses edge-preserving processing that combines linear low-pass filtering with a non-linear technique that selects relevant regions in the detail images while maintaining structural properties. These regions are selected using morphologically processed linear filter residuals to identify significant regions with high-amplitude edges and adequate size. The outputs of the low-pass filtering are fused with the meaningfully restored regions to reconstruct the original shape of the edges. In addition, weights are computed from these reconstructed images by estimating the strength of horizontal and vertical details, and the weights are then fused with the original input images to produce the final fusion result. Numerous standard quality evaluation metrics with complementary properties are used for objective comparison with existing, well-known algorithms to validate the fusion results. Experimental results exhibit superior performance compared with competing techniques in both qualitative and quantitative evaluation. In addition, the proposed method offers lower computational complexity and execution time while improving diagnostic computing accuracy; owing to the lower complexity of the fusion algorithm, its efficiency in practical applications is high. The results reveal that the proposed method exceeds the latest state-of-the-art methods in terms of detailed information, edge contour, and overall contrast.
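The cross-bilateral filtering step can be sketched as below: one modality is smoothed with spatial weights plus range weights taken from the other modality, and the detail layer is the original minus its cross-bilateral smoothing. The window radius and the two sigmas are illustrative assumptions.

```cpp
#include <cmath>
#include <vector>

// Cross-bilateral filter: smooth 'target' using geometric closeness plus gray-level similarity
// measured on 'guide' (the other modality), so edges present in the guide are not smoothed.
std::vector<double> crossBilateral(const std::vector<double>& target, const std::vector<double>& guide,
                                   int width, int height, int radius = 3,
                                   double sigmaSpace = 1.8, double sigmaRange = 25.0) {
    std::vector<double> out(target.size(), 0.0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            double acc = 0.0, wsum = 0.0;
            for (int dy = -radius; dy <= radius; ++dy)
                for (int dx = -radius; dx <= radius; ++dx) {
                    int yy = y + dy, xx = x + dx;
                    if (yy < 0 || yy >= height || xx < 0 || xx >= width) continue;
                    double ds = (dx * dx + dy * dy) / (2.0 * sigmaSpace * sigmaSpace);
                    double dg = guide[yy * width + xx] - guide[y * width + x];
                    double dr = (dg * dg) / (2.0 * sigmaRange * sigmaRange);
                    double w = std::exp(-ds - dr);
                    acc += w * target[yy * width + xx];
                    wsum += w;
                }
            out[y * width + x] = acc / wsum;
        }
    return out;
}

// Detail layer used later for weight computation: original minus its cross-bilateral smoothing.
std::vector<double> detailLayer(const std::vector<double>& a, const std::vector<double>& b,
                                int width, int height) {
    std::vector<double> smoothed = crossBilateral(a, b, width, height);
    std::vector<double> d(a.size());
    for (std::size_t i = 0; i < a.size(); ++i) d[i] = a[i] - smoothed[i];
    return d;
}
```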
A novel method for noise removal from the rotating accelerometer gravity gradiometer (MAGG) is presented. It introduces a head-to-tail data expansion technique based on the zero-phase filtering principle. A scheme for determining band-pass filter parameters based on the signal-to-noise ratio gain, a smoothness index, and the cross-correlation coefficient is designed using Chebyshev optimal consistent approximation theory. Additionally, a wavelet denoising evaluation function is constructed, and the dmey wavelet basis function is identified as the most effective for processing gravity gradient data. The results of hardware-in-the-loop simulations and prototype experiments show that, compared with other commonly used methods, the proposed processing method improves the measurement variance of gravity gradient signals by 14% and achieves a measurement accuracy within 4 E. This verifies that the proposed method effectively removes noise from the gradient signals, improves gravity gradiometry accuracy, and offers useful technical insight for high-precision airborne gravity gradiometry.
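A minimal sketch of zero-phase filtering with head-to-tail expansion is given below: the record is extended by mirror-reflected copies of its ends, filtered forward and backward with the same causal filter so the phase delays cancel, and then trimmed back to its original length. The simple first-order IIR stands in for the Chebyshev-designed band-pass filter, and the pad length is an illustrative assumption.

```cpp
#include <algorithm>
#include <vector>

// Mirror-extend both ends of the record so filter start-up transients fall outside the useful data.
static std::vector<double> mirrorExpand(const std::vector<double>& x, int pad) {
    std::vector<double> y;
    y.reserve(x.size() + 2 * pad);
    for (int i = pad; i >= 1; --i) y.push_back(x[i]);                         // reflected head
    y.insert(y.end(), x.begin(), x.end());
    for (int i = static_cast<int>(x.size()) - 2; i >= static_cast<int>(x.size()) - 1 - pad; --i)
        y.push_back(x[i]);                                                    // reflected tail
    return y;
}

// Simple causal low-pass stand-in for the designed band-pass filter.
static void causalFilter(std::vector<double>& x, double alpha = 0.2) {
    for (std::size_t i = 1; i < x.size(); ++i) x[i] = alpha * x[i] + (1.0 - alpha) * x[i - 1];
}

// Zero-phase filtering: expand, filter forward, reverse, filter again, reverse, trim.
std::vector<double> zeroPhaseFilter(const std::vector<double>& signal, int pad = 64) {
    std::vector<double> y = mirrorExpand(signal, pad);
    causalFilter(y);
    std::reverse(y.begin(), y.end());
    causalFilter(y);
    std::reverse(y.begin(), y.end());
    return std::vector<double>(y.begin() + pad, y.begin() + pad + signal.size());
}
```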
The convergence of the Internet of Things (IoT), 5G, and cloud collaboration offers tailored solutions to the rigorous demands of multi-flow integrated energy aggregation dispatch data processing. While generative adversarial networks (GANs) are instrumental in resource scheduling, their application in this domain is impeded by challenges such as slow convergence, inferior optimality-searching capability, and the inability to learn from failed decision-making feedback. Therefore, a cloud-edge collaborative federated GAN-based communication and computing resource scheduling algorithm with long-term constraint violation sensitiveness is proposed to address these challenges. The proposed algorithm facilitates real-time, energy-efficient data processing by optimizing transmission power control, data migration, and computing resource allocation. It employs federated learning for global parameter aggregation to enhance GAN parameter updating, and it dynamically adjusts GAN learning rates and global aggregation weights based on energy consumption constraint violations. Simulation results indicate that the proposed algorithm effectively reduces data processing latency, energy consumption, and convergence time. Funding: China Southern Power Grid Technology Project under Grant 03600KK52220019 (GDKJXM20220253).
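The global aggregation step can be illustrated with a heavily simplified sketch: each edge node uploads its GAN parameters together with its accumulated energy-constraint violation, the cloud averages the parameters with weights that shrink as the violation grows, and the per-node learning rate is reduced accordingly. The exponential weighting rule, the learning-rate rule, and all names below are illustrative assumptions, not the paper's exact update.

```cpp
#include <cmath>
#include <vector>

struct EdgeUpdate {
    std::vector<double> params;   // flattened local GAN parameters after local training
    double energyViolation = 0.0; // accumulated long-term energy-constraint violation (>= 0)
    double dataShare = 0.0;       // fraction of the total data held by this node
};

// Federated aggregation: data-share weights are damped by the constraint violation, so nodes that
// overshoot their energy budget contribute less to the global model in this round.
std::vector<double> aggregate(const std::vector<EdgeUpdate>& updates, double kappa = 1.0) {
    std::vector<double> global(updates.front().params.size(), 0.0);
    double wsum = 0.0;
    for (const EdgeUpdate& u : updates) {
        double w = u.dataShare * std::exp(-kappa * u.energyViolation);
        wsum += w;
        for (std::size_t i = 0; i < global.size(); ++i) global[i] += w * u.params[i];
    }
    for (double& g : global) g /= wsum;
    return global;
}

// Learning-rate adjustment for the next local round: cut the step size when the node violated
// its energy budget, keep the nominal rate otherwise (illustrative rule).
double nextLearningRate(double baseRate, double energyViolation, double kappa = 1.0) {
    return baseRate / (1.0 + kappa * energyViolation);
}
```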
The mechanical properties and failure mechanism of lightweight aggregate concrete (LWAC) are a hot topic in the engineering field, and the relationship between its microstructure and macroscopic mechanical properties is also a frontier research topic in academia. In this study, image processing technology is used to establish a mesostructural model of lightweight aggregate concrete. By extracting and processing information from section images of actual LWAC specimens, a mesostructural model with real aggregate characteristics is established. Numerical simulations of the uniaxial tensile test, the uniaxial compression test, and the three-point bending test of LWAC are then carried out using a new finite element method, the base force element method. First, image processing technology is used to generate beam specimens, uniaxial compression specimens, and uniaxial tensile specimens of LWAC, which better reproduce the aggregate shape and random distribution of real lightweight aggregate concrete. Second, the three-point bending test is simulated numerically. Third, the uniaxial compression specimen generated by image processing is simulated. Fourth, the uniaxial tensile specimen generated by image processing is simulated. The mechanical behavior and damage modes of the specimens during loading are analyzed, and the numerical results are compared with those of the corresponding experiments. The comparison verifies the feasibility and correctness of the mesoscale model established in this study for analyzing the mesoscopic mechanics of LWAC materials. Image processing technology has broad application prospects in the field of concrete mesoscopic damage analysis. Funding: National Science Foundation of China (10972015, 11172015); Beijing Natural Science Foundation (8162008).
Conventional edge detection methods can roughly delineate the edge positions of geological bodies, but they still suffer from low detection accuracy and susceptibility to noise interference. In this paper, three image processing methods, the Canny, LoG, and Sobel operators, are briefly introduced and applied to edge detection to determine the edges of geological bodies. Model data are built to analyze the edge detection ability of these image processing methods and to compare them with conventional methods. Combined with the gravity anomaly of the Sichuan basin and the magnetic anomaly of the Zhurihe area, the detection performance of the image processing methods is further verified on real data. The results show that image processing methods can effectively identify the edges of geological bodies. Moreover, when both positive and negative anomalies exist and noise is abundant, false edges are avoided and the edge delineation is clearer, yielding satisfactory edge detection results. Funding: National Key Research and Development Plan (Nos. 2017YFC0602203, 2017YFC0601606); National Science and Technology Major Project Task (No. 2016ZX05027-002-003); National Natural Science Foundation of China (Nos. 41604089, 41404089); State Key Program of the National Natural Science Foundation of China (No. 41430322).
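For reference, the Sobel step applied to a gridded anomaly can be sketched as follows: the grid is treated as a floating-point image and the edge strength is the gradient magnitude obtained from the two 3x3 Sobel kernels. Grid names and the zero border handling are illustrative choices.

```cpp
#include <cmath>
#include <vector>

// Sobel gradient magnitude of a gridded gravity/magnetic anomaly; high values mark the edges of
// geological bodies. Border cells are left at zero for simplicity.
std::vector<double> sobelEdgeStrength(const std::vector<double>& grid, int nx, int ny) {
    const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    std::vector<double> edge(grid.size(), 0.0);
    for (int j = 1; j < ny - 1; ++j)
        for (int i = 1; i < nx - 1; ++i) {
            double sx = 0.0, sy = 0.0;
            for (int dj = -1; dj <= 1; ++dj)
                for (int di = -1; di <= 1; ++di) {
                    double v = grid[(j + dj) * nx + (i + di)];
                    sx += gx[dj + 1][di + 1] * v;
                    sy += gy[dj + 1][di + 1] * v;
                }
            edge[j * nx + i] = std::sqrt(sx * sx + sy * sy);
        }
    return edge;
}
```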
This paper seeks a synthesis of Bayesian and geostatistical approaches to combining categorical data in the context of remote sensing classification. Through experiments with aerial photographs and Landsat TM data, the accuracy of spectral, spatial, and combined classification results was evaluated. It was confirmed that incorporating spatial information into spectral classification increases accuracy significantly. Secondly, tests with a 5-class and a 3-class classification scheme revealed that setting a proper semantic framework for classification is fundamental to any endeavor of categorical mapping and is the most important factor affecting accuracy. Lastly, this paper promotes non-parametric methods both for defining class membership profiles from band-specific histograms of image intensities and for deriving spatial probability via indicator kriging, a non-parametric geostatistical technique.
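The combination described above can be illustrated with a small sketch: a non-parametric class likelihood is read from a band-specific histogram of training intensities and multiplied by a spatial probability (for example, one obtained from indicator kriging) before normalisation. The single-band simplification and all names below are assumptions for illustration.

```cpp
#include <vector>

// Non-parametric class-conditional likelihood: relative frequency of the observed intensity in
// that class's training histogram for one band (256 bins, 8-bit data, histogram sums to 1).
double histogramLikelihood(const std::vector<double>& classHistogram, int intensity) {
    return classHistogram[intensity];
}

// Bayesian combination: posterior over classes proportional to the spectral likelihood times the
// spatial probability supplied externally, e.g., by indicator kriging of nearby training labels.
std::vector<double> combine(const std::vector<std::vector<double>>& classHistograms,
                            const std::vector<double>& spatialProbability, int intensity) {
    std::vector<double> posterior(classHistograms.size(), 0.0);
    double norm = 0.0;
    for (std::size_t c = 0; c < classHistograms.size(); ++c) {
        posterior[c] = histogramLikelihood(classHistograms[c], intensity) * spatialProbability[c];
        norm += posterior[c];
    }
    if (norm > 0.0)
        for (double& p : posterior) p /= norm;
    return posterior;
}
```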
Real-time capabilities and computational efficiency are provided by parallel image processing using OpenMP. However, race conditions can affect the accuracy and reliability of the outcomes. This paper highlights the importance of addressing race conditions in parallel image processing, focusing specifically on color inverse filtering with OpenMP. We considered three solutions to the race conditions, each with distinct characteristics: #pragma omp atomic protects individual memory operations for fine-grained control; #pragma omp critical protects entire code blocks for exclusive access; and #pragma omp parallel sections with a reduction clause safely aggregates values across threads. Our findings show that the produced images were unaffected by the race condition. However, it becomes evident that resolving the race conditions in the code makes it significantly faster, especially when it is executed on multiple cores.
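The three synchronisation options behave as in the sketch below, here accumulating a global checksum while inverting an image; the checksum is an illustrative stand-in for the shared-variable update in the filtering code, and the reduction clause is shown on a parallel for loop rather than parallel sections. The reduction variant is typically fastest because each thread keeps a private partial sum. Compile with an OpenMP flag such as -fopenmp.

```cpp
#include <cstdint>
#include <omp.h>
#include <vector>

// Invert an image and accumulate a checksum of the output in three race-free ways.
long long invertWithAtomic(std::vector<uint8_t>& img) {
    long long sum = 0;
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(img.size()); ++i) {
        img[i] = 255 - img[i];          // per-pixel write: each index is touched by one thread only
        #pragma omp atomic              // protect the single shared addition
        sum += img[i];
    }
    return sum;
}

long long invertWithCritical(std::vector<uint8_t>& img) {
    long long sum = 0;
    #pragma omp parallel for
    for (long long i = 0; i < static_cast<long long>(img.size()); ++i) {
        img[i] = 255 - img[i];
        #pragma omp critical            // exclusive access to the whole block
        { sum += img[i]; }
    }
    return sum;
}

long long invertWithReduction(std::vector<uint8_t>& img) {
    long long sum = 0;
    #pragma omp parallel for reduction(+ : sum)   // private partial sums, combined at the end
    for (long long i = 0; i < static_cast<long long>(img.size()); ++i) {
        img[i] = 255 - img[i];
        sum += img[i];
    }
    return sum;
}
```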
In recent years, the widespread adoption of parallel computing, especially on multi-core processors and in high-performance computing environments, ushered in a new era of efficiency and speed. This trend was particularly noteworthy in the field of image processing, which witnessed significant advancements. This parallel computing project explored parallel image processing, with a focus on the grayscale conversion of colorful images. Our approach integrated OpenMP into our framework to parallelize a critical image processing task: grayscale conversion. By using OpenMP, we enhanced the overall performance of the conversion by distributing the workload across multiple threads. The primary objectives of the project were to optimize computation time and improve overall efficiency in the grayscale conversion of colorful images. Utilizing OpenMP for concurrent processing across multiple cores significantly reduced execution times through effective distribution of tasks among the cores. The speedup values for various image sizes highlighted the efficacy of parallel processing, especially for large images. However, a detailed examination revealed a potential decline in parallelization efficiency as the number of cores increases. This underscores the importance of a carefully optimized parallelization strategy that considers factors such as load balancing and minimizing communication overhead. Despite these challenges, the overall scalability and efficiency achieved with parallel image processing underscore OpenMP's effectiveness in accelerating image manipulation tasks.
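A minimal version of the parallel grayscale conversion is sketched below. The static row-wise schedule and the luminance weights are typical choices and are assumed here, since the abstract does not state which were used.

```cpp
#include <cstdint>
#include <omp.h>
#include <vector>

// Convert interleaved 8-bit RGB to grayscale, distributing rows of pixels across threads.
void grayscaleParallel(const std::vector<uint8_t>& rgb, std::vector<uint8_t>& gray,
                       int width, int height) {
    gray.resize(static_cast<std::size_t>(width) * height);
    #pragma omp parallel for schedule(static)
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            std::size_t i = static_cast<std::size_t>(y) * width + x;
            const uint8_t* p = &rgb[3 * i];
            // Each output pixel is written by exactly one thread, so no synchronisation is needed.
            gray[i] = static_cast<uint8_t>(0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2]);
        }
    }
}
```

Wall-clock speedup can then be measured by timing this routine with omp_get_wtime() for different thread counts, which is one way the per-core efficiency decline described above would show up.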
Visual data mining is an important branch of data mining techniques. Most visual data mining approaches are based on computer graphics techniques, but few exploit image processing techniques. This paper proposes an image processing method, named RNAM (resemble neighborhood averaging method), to facilitate visual data mining. It is used to post-process the data mining result image and helps users discover significant features and useful patterns effectively. The experiments show that the method is intuitive, easy to understand, and effective. It provides a new approach to visual data mining. Funding: National Natural Science Foundation of China (60173051); Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of the Ministry of Education of China; Liaoning Province Higher Education Research Foundation (20040206).
In a geographical information system (GIS), geological data are constructed in vector format, while other data such as remote sensing images, geographical data, and geochemical data are stored in raster format. This paper converts the vector data into 8-bit images by programming, weighting each layer according to its importance to mineralization. In this way, the geological meaning can be conveyed through the raster images. The paper also fuses geographical and geochemical data with the programmed strata data. The results show that image fusion can express different intensities effectively and visualize structural characteristics in two dimensions. Furthermore, it can produce optimized information from multi-source data and express it more directly.
Remotely sensed spectral data and images are acquired under significant additional effects that accompany their formation process and largely determine measurement accuracy. To be used in subsequent quantitative analysis and assessment, these data should undergo preliminary processing aimed at improving their accuracy and credibility. The paper considers some major problems related to the preliminary processing of remotely sensed spectral data and images. The major factors that cause data noise or uncertainties are analyzed, together with methods for reducing or removing them. An assessment is made of the extent to which available equipment and technologies may help reduce measurement errors.
Disease recognition in plants is one of the essential problems in agricultural image processing. This article focuses on designing a framework that can recognize and classify diseases on pomegranate plants accurately. The framework utilizes image processing techniques such as image acquisition, image resizing, image enhancement, image segmentation, region-of-interest (ROI) extraction, and feature extraction. An image dataset of pomegranate leaf diseases, divided into a training set and a test set, is used to implement the framework. In the implementation, image enhancement and image segmentation are primarily used to identify the ROI and features. Image classification is then performed by combining a supervised learning model with a support vector machine. The proposed framework is developed in MATLAB with a graphical user interface. According to the experimental results, the framework achieves 98.39% accuracy in classifying diseased and healthy leaves, and 98.07% accuracy in classifying the diseases on pomegranate leaves.
Images taken underwater mostly present color shift and hazy effects due to the special properties of water. Underwater image enhancement methods have been proposed to handle this issue; however, their enhancement results are typically evaluated on only a small number of underwater images. The lack of a sufficiently large and diverse dataset for efficient evaluation of underwater image enhancement methods motivates the present paper, which proposes an organized method to synthesize diverse underwater images that can serve as a benchmark dataset. The synthesis is based on the underwater image formation model, which describes the physical degradation process. An indoor RGB-D image dataset is used as the seed for underwater-style image generation. The ambient light is simulated based on the statistical mean value of real-world underwater images, and attenuation coefficients for diverse water types are carefully selected. In total, 14490 underwater images of 10 water types are synthesized. Based on the synthesized database, state-of-the-art image enhancement methods are evaluated appropriately. Besides, the large and diverse underwater image database is beneficial for the development of learning-based methods.
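The synthesis follows the standard underwater image formation model I(x) = J(x) t(x) + B (1 - t(x)) with transmission t(x) = exp(-beta d(x)); the sketch below applies it per channel to an RGB-D seed image. The attenuation coefficients and ambient light values shown in the usage comment are placeholders, not the ten water types actually used.

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <cstdint>
#include <vector>

// Synthesize an underwater-style image from a clean RGB image and its depth map using
// I = J * t + B * (1 - t), with per-channel transmission t = exp(-beta * depth).
std::vector<uint8_t> synthesizeUnderwater(const std::vector<uint8_t>& rgb,
                                          const std::vector<double>& depth,   // metres, one per pixel
                                          std::array<double, 3> beta,         // attenuation per channel
                                          std::array<double, 3> ambient) {    // ambient light, 0..255
    std::vector<uint8_t> out(rgb.size());
    for (std::size_t i = 0; i < depth.size(); ++i) {
        for (int c = 0; c < 3; ++c) {
            double t = std::exp(-beta[c] * depth[i]);
            double v = rgb[3 * i + c] * t + ambient[c] * (1.0 - t);
            out[3 * i + c] = static_cast<uint8_t>(std::clamp(v, 0.0, 255.0));
        }
    }
    return out;
}

// Example water type (placeholder coefficients): red attenuates fastest, giving the blue-green cast.
// auto img = synthesizeUnderwater(rgb, depth, {0.45, 0.12, 0.08}, {35.0, 110.0, 120.0});
```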
In this paper, the application of image enhancement techniques to potential field data is briefly described, and two improved enhancement methods are introduced. One method is derived from the histogram equalization technique and automatically determines the color spectra of geophysical maps. With this method, colors can be distributed properly, and visual effects and resolution can be enhanced. The other method is based on the modified Radon transform and gradient calculation and is used to detect and enhance linear features in gravity and magnetic images; it facilitates the detection of line segments in the transform domain. Tests with synthetic images and real data show both methods to be effective for feature enhancement. Funding: research project (grant No. G20000467) of the Institute of Geology and Geophysics, CAS; China Postdoctoral Science Foundation (No. 2004036083).
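The histogram-equalization idea behind the first method can be sketched as below: the cumulative histogram of the gridded field, quantised to 8 bits, serves as the mapping from data values to colour-spectrum indices, so each colour covers roughly the same number of grid cells. The 256-level quantisation is an assumption made for illustration.

```cpp
#include <cstdint>
#include <vector>

// Map 8-bit quantised field values to colour indices via the cumulative histogram, so that the
// colour spectrum is distributed evenly over the data rather than evenly over the value range.
std::vector<uint8_t> equalizeColorIndex(const std::vector<uint8_t>& quantised) {
    std::vector<std::size_t> hist(256, 0);
    for (uint8_t v : quantised) ++hist[v];

    std::vector<uint8_t> lut(256, 0);
    std::size_t cumulative = 0;
    const std::size_t total = quantised.size();
    for (int v = 0; v < 256; ++v) {
        cumulative += hist[v];
        lut[v] = static_cast<uint8_t>((255 * cumulative) / total);  // equalised colour index
    }

    std::vector<uint8_t> index(quantised.size());
    for (std::size_t i = 0; i < quantised.size(); ++i) index[i] = lut[quantised[i]];
    return index;
}
```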
To obtain good welding quality, quality control must be applied because many factors influence the laser welding process, and the key to realizing welding quality control is obtaining the quality information. Abundant weld quality information is contained in the weld pool and the keyhole. For Nd:YAG laser welding of stainless steel, a coaxial visual sensing system was constructed and images of the weld pool and keyhole were obtained. Based on the gray-level character of the weld pool and keyhole in the images, an image processing algorithm was designed, and the search start point and search criteria for the weld pool and keyhole edges were determined respectively. Funding: Project 10776020, supported by the Joint Foundation of the National Natural Science Foundation of China and the China Academy of Engineering Physics.
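The edge-search idea can be illustrated with a simple radial scan: starting from a seed point inside the bright weld pool (or dark keyhole), each ray is followed outward until the gray level crosses a threshold, and the crossing points trace the edge. The fixed threshold, the ray count, and the names below are assumptions for illustration; the paper's actual search criteria are derived from the gray-level character of its images.

```cpp
#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

// Trace the weld-pool edge: from a seed inside the pool, walk outward along equally spaced rays
// until the gray level drops below a threshold, and record the last in-pool pixel on each ray.
std::vector<std::pair<int, int>> traceEdge(const std::vector<uint8_t>& gray, int width, int height,
                                           int seedX, int seedY, uint8_t threshold, int rays = 72) {
    std::vector<std::pair<int, int>> edge;
    for (int k = 0; k < rays; ++k) {
        double angle = 2.0 * 3.14159265358979 * k / rays;
        double dx = std::cos(angle), dy = std::sin(angle);
        int lastX = seedX, lastY = seedY;
        for (double r = 1.0;; r += 1.0) {
            int x = seedX + static_cast<int>(std::lround(r * dx));
            int y = seedY + static_cast<int>(std::lround(r * dy));
            if (x < 0 || x >= width || y < 0 || y >= height) break;   // left the image
            if (gray[y * width + x] < threshold) break;               // crossed the pool boundary
            lastX = x;
            lastY = y;
        }
        edge.emplace_back(lastX, lastY);
    }
    return edge;
}
```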