In today’s world, image processing techniques play a crucial role in the prognosis and diagnosis of various diseases due to the development of several precise and accurate methods for medical images. Automated analysis of medical images is essential for doctors, as manual investigation often leads to inter-observer variability. This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework. The proposed hybridized method combines Modified Inertia Weight Particle Swarm Optimization (MIWPSO) and Fuzzy C-Means clustering (FCM) algorithms. Traditional FCM does not incorporate spatial neighborhood features, making it highly sensitive to noise, which significantly affects segmentation output. Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise. The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation. Furthermore, segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule (DT-TAR) algorithm. Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42% on the dataset, highlighting its significant impact on improving diagnostic capabilities in medical imaging.
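For readers unfamiliar with spatial FCM, the sketch below shows one common way a spatial function enters the membership matrix (the paper's exact exponents and neighborhood weighting may differ): the standard membership u_ij is combined with a neighborhood term h_ij accumulated over a window NB(x_j) around pixel j,

$$u_{ij}=\Biggl[\sum_{k=1}^{C}\Bigl(\frac{\lVert x_j-c_i\rVert}{\lVert x_j-c_k\rVert}\Bigr)^{\frac{2}{m-1}}\Biggr]^{-1},\qquad h_{ij}=\sum_{r\in NB(x_j)}u_{ir},\qquad u'_{ij}=\frac{u_{ij}^{\,p}\,h_{ij}^{\,q}}{\sum_{k=1}^{C}u_{kj}^{\,p}\,h_{kj}^{\,q}},$$

so an isolated noisy pixel whose neighbors belong to another cluster has its membership pulled toward that cluster, which is what suppresses speckle-like noise in the segmentation.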
Object tracking is one of the major tasks for mobile robots in many real-world applications. Artificial intelligence and automatic control techniques also play an important role in enhancing the performance of mobile robot navigation. In contrast to previous simulation studies, this paper presents a new intelligent mobile robot for accomplishing multiple tasks by tracking red-green-blue (RGB) colored objects in a real experimental field. Moreover, a practical smart controller is developed based on adaptive fuzzy logic and custom proportional-integral-derivative (PID) schemes to achieve accurate tracking results, considering robot command delay and tolerance errors. The design of the developed controllers incorporates motion rules that mimic the knowledge of experienced operators. Twelve scenarios of three colored object combinations have been successfully tested and evaluated using the developed controlled image-based robot tracker. Classical PID control failed to handle some tracking scenarios in this study. The proposed adaptive fuzzy PID control achieved the most accurate results, with a minimum average final error of 13.8 cm in reaching the colored targets, while our designed custom PID control is efficient in saving both average time and traveling distance, at 6.6 s and 14.3 cm, respectively. These promising results demonstrate the feasibility of applying our developed image-based robotic system in a colored object-tracking environment to reduce human workloads.
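To make the control part concrete, here is a minimal, self-contained sketch of a discrete PID step with a placeholder for fuzzy gain adaptation; it is illustrative only and is not the authors' controller (the adaptGains rule, the gains, the loop rate and the toy plant are all assumptions):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Minimal discrete PID controller. The fuzzy-adaptive part of the paper is only hinted
// at by adaptGains(), a hypothetical rule that rescales gains from the error magnitude.
struct PID {
    double kp = 1.0, ki = 0.1, kd = 0.05;
    double integral = 0.0, prevError = 0.0;

    double step(double error, double dt) {
        integral += error * dt;                        // integral term
        double derivative = (error - prevError) / dt;  // finite-difference derivative
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

// Hypothetical adaptation: large errors strengthen the proportional action and relax the
// derivative action, roughly imitating what a small fuzzy rule base might output.
void adaptGains(PID& pid, double error, double maxError) {
    double e = std::min(std::fabs(error) / maxError, 1.0);
    pid.kp = 0.5 + 1.5 * e;
    pid.kd = 0.2 * (1.0 - e);
}

int main() {
    PID pid;
    double distance = 1.5;                             // metres to the colored target (toy value)
    for (int k = 0; k < 50; ++k) {
        adaptGains(pid, distance, 2.0);
        double v = pid.step(distance, 0.1);            // velocity command, 10 Hz loop
        distance -= 0.1 * v;                           // toy plant: pure integrator
    }
    std::printf("residual distance after 50 steps: %.3f m\n", distance);
    return 0;
}
```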
The mechanical properties and failure mechanism of lightweight aggregate concrete (LWAC) are a hot topic in the engineering field, and the relationship between its microstructure and macroscopic mechanical properties is also a frontier research topic in the academic field. In this study, image processing technology is used to establish a mesostructural model of lightweight aggregate concrete. Through the extraction and processing of information from section images of actual lightweight aggregate concrete specimens, a mesostructural model with real aggregate characteristics is established. Numerical simulations of the uniaxial tensile test, uniaxial compression test, and three-point bending test of lightweight aggregate concrete are carried out using a new finite element method, the base force element method. Firstly, image processing technology is used to produce beam specimens, uniaxial compression specimens, and uniaxial tensile specimens of lightweight aggregate concrete, which better reproduce the aggregate shape and random distribution of real lightweight aggregate concrete. Secondly, the three-point bending test is numerically simulated. Thirdly, the uniaxial compression specimen generated by image processing technology is numerically simulated. Fourthly, the uniaxial tensile specimen generated by image processing technology is numerically simulated. The mechanical behavior and damage modes of the specimens during loading are analyzed, and the numerical results are compared with those of the relevant experiments. The feasibility and correctness of the mesoscale model established in this study for analyzing the mesomechanics of lightweight aggregate concrete materials are verified. Image processing technology has broad application prospects in the field of concrete mesoscopic damage analysis.
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves due to various reasons; therefore, preprocessing is required to remove these outliers and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a star nearby, referred to as “stellar contamination,” and second, the brightness markedly decreases due to cloud cover, referred to as “cloudy contamination.” The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. We therefore propose using machine learning methods as a substitute. Convolutional neural networks (CNNs) and support vector machines (SVMs) are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and Light Gradient Boosting Machine, and conduct comparative analyses of the results.
The rail surface status image is affected by noise in the shooting environment and contains a large amount of interference information, which increases the difficulty of rail surface status identification. To solve this problem, a preprocessing method for rail surface status images is proposed. The preprocessing pipeline mainly includes image graying, image denoising, image geometric correction, image extraction, and data augmentation, and finally the rail surface image database is built. The experimental results show that this method can efficiently complete image processing, facilitate feature extraction of rail surface status images, and improve rail surface status recognition accuracy.
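As a concrete illustration of such a preprocessing chain, the following OpenCV sketch runs graying, denoising, geometric correction, region extraction and a simple augmentation step; the file names, filter sizes, correction quadrilateral and augmentation policy are assumptions, not the paper's settings:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical preprocessing chain for a rail-surface image: graying, denoising,
// geometric correction, ROI extraction and a simple augmentation. Parameters are
// illustrative only.
int main() {
    cv::Mat src = cv::imread("rail.png");              // assumed input file
    if (src.empty()) return 1;

    cv::Mat gray, denoised, corrected;
    cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);        // image graying
    cv::medianBlur(gray, denoised, 5);                  // image denoising

    // Geometric correction: warp an assumed rail quadrilateral to a rectangle.
    std::vector<cv::Point2f> from = {{40, 0}, {600, 0}, {620, 480}, {20, 480}};
    std::vector<cv::Point2f> to   = {{0, 0}, {256, 0}, {256, 480}, {0, 480}};
    cv::Mat H = cv::getPerspectiveTransform(from, to);
    cv::warpPerspective(denoised, corrected, H, cv::Size(256, 480));

    cv::Mat roi = corrected(cv::Rect(0, 0, 256, 256)).clone();  // image (ROI) extraction

    // Data augmentation: a horizontal flip as a simple example.
    cv::Mat flipped;
    cv::flip(roi, flipped, 1);
    cv::imwrite("rail_roi.png", roi);
    cv::imwrite("rail_roi_flip.png", flipped);
    return 0;
}
```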
Real-time capabilities and computational efficiency are provided by parallel image processing utilizing OpenMP. However, race conditions can affect the accuracy and reliability of the outcomes. This paper highlights the importance of addressing race conditions in parallel image processing, specifically focusing on color inverse filtering using OpenMP. We considered three solutions for race conditions, each with distinct characteristics: #pragma omp atomic, which protects individual memory operations for fine-grained control; #pragma omp critical, which protects entire code blocks for exclusive access; and #pragma omp parallel sections with a reduction clause, which safely aggregates values across threads. Our findings show that the produced images were unaffected by the race conditions. However, it becomes evident that resolving the race conditions makes the code significantly faster, especially when it is executed on multiple cores.
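A minimal, self-contained example (not the paper's code) of the three remedies applied to a shared accumulator during a color-inverse pass is shown below; the reduction variant is written on a parallel for loop for brevity:

```cpp
#include <omp.h>
#include <cstdio>
#include <vector>

// Invert an 8-bit "image" in parallel and accumulate a shared statistic three ways.
// The per-pixel inversion itself is race-free; only the shared sums need protection.
int main() {
    const int n = 1 << 22;
    std::vector<unsigned char> img(n, 100);            // synthetic image buffer
    long long sumAtomic = 0, sumCritical = 0, sumReduction = 0;

    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        img[i] = 255 - img[i];                         // color inverse filtering
        #pragma omp atomic                             // protects this single update
        sumAtomic += img[i];
    }

    #pragma omp parallel for
    for (int i = 0; i < n; ++i) {
        #pragma omp critical                           // exclusive access to a whole block
        { sumCritical += img[i]; }
    }

    #pragma omp parallel for reduction(+ : sumReduction)  // per-thread partial sums
    for (int i = 0; i < n; ++i)
        sumReduction += img[i];

    std::printf("%lld %lld %lld\n", sumAtomic, sumCritical, sumReduction);
    return 0;
}
```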
In recent years, the widespread adoption of parallel computing, especially in multi-core processors and high-performance computing environments, ushered in a new era of efficiency and speed. This trend was particularly noteworthy in the field of image processing, which witnessed significant advancements. This parallel computing project explored the field of parallel image processing, with a focus on the grayscale conversion of colorful images. Our approach involved integrating OpenMP into our framework for parallelization to execute a critical image processing task: grayscale conversion. By using OpenMP, we strategically enhanced the overall performance of the conversion process by distributing the workload across multiple threads. The primary objectives of our project revolved around optimizing computation time and improving overall efficiency, particularly in the task of grayscale conversion of colorful images. Utilizing OpenMP for concurrent processing across multiple cores significantly reduced execution times through the effective distribution of tasks among these cores. The speedup values for various image sizes highlighted the efficacy of parallel processing, especially for large images. However, a detailed examination revealed a potential decline in parallelization efficiency with an increasing number of cores. This underscored the importance of a carefully optimized parallelization strategy, considering factors like load balancing and minimizing communication overhead. Despite challenges, the overall scalability and efficiency achieved with parallel image processing underscored OpenMP’s effectiveness in accelerating image manipulation tasks.
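The following is a minimal sketch of the kind of OpenMP grayscale kernel and wall-clock measurement described above; the BT.601 luminance weights, image size and interleaved layout are assumptions, not the project's exact code:

```cpp
#include <omp.h>
#include <cstdio>
#include <vector>

// Hypothetical grayscale conversion of an interleaved RGB buffer, parallelized with
// OpenMP, plus a simple timing hook for computing speedup against a serial run.
int main() {
    const int w = 4096, h = 4096, n = w * h;
    std::vector<unsigned char> rgb(3ull * n, 128), gray(n);

    double t0 = omp_get_wtime();
    #pragma omp parallel for schedule(static)
    for (int p = 0; p < n; ++p) {
        unsigned char r = rgb[3 * p], g = rgb[3 * p + 1], b = rgb[3 * p + 2];
        gray[p] = (unsigned char)(0.299 * r + 0.587 * g + 0.114 * b);  // ITU-R BT.601 weights
    }
    double t1 = omp_get_wtime();

    std::printf("threads=%d time=%.3fs\n", omp_get_max_threads(), t1 - t0);
    return 0;
}
```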
As a branch of quantum image processing, quantum image scaling has been widely studied. However, most existing quantum image scaling algorithms are based on nearest-neighbor interpolation and bilinear interpolation; the quantum version of bicubic interpolation has not yet been studied. In this work, we present the first quantum image scaling scheme for bicubic interpolation based on the novel enhanced quantum representation (NEQR). Our scheme can realize synchronous enlargement and reduction of an image of size 2^n × 2^n by an integral multiple. Firstly, the image is represented by NEQR and the original image coordinates are obtained through multiple CNOT modules. Then, 16 neighborhood pixels are obtained by quantum operation circuits, and the corresponding weights of these pixels are calculated by quantum arithmetic modules. Finally, a quantum matrix operation, instead of a classical convolution operation, is used to realize the sum of the convolution of these pixels. Through simulation experiments and complexity analysis, we demonstrate that our scheme achieves exponential speedup over the classical bicubic interpolation algorithm and performs better than the quantum version of bilinear interpolation.
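As a classical point of reference for the "16 neighborhood pixels and their weights" step, the sketch below computes the separable bicubic weights with Keys' cubic convolution kernel (a = -0.5); this is the classical calculation that the quantum arithmetic modules would reproduce, not the quantum circuit itself:

```cpp
#include <cmath>
#include <cstdio>

// Keys' cubic convolution kernel, the usual basis for classical bicubic interpolation.
double cubic(double t, double a = -0.5) {
    t = std::fabs(t);
    if (t <= 1.0) return (a + 2.0) * t * t * t - (a + 3.0) * t * t + 1.0;
    if (t <  2.0) return a * t * t * t - 5.0 * a * t * t + 8.0 * a * t - 4.0 * a;
    return 0.0;
}

int main() {
    // Fractional offsets of the target sample inside its 4x4 source neighborhood.
    double fx = 0.3, fy = 0.7;
    double w[4][4], sum = 0.0;
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            w[i][j] = cubic(fy - (i - 1)) * cubic(fx - (j - 1));  // separable weights
            sum += w[i][j];
        }
    std::printf("weight sum = %.6f (should be ~1)\n", sum);
    return 0;
}
```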
Underwater images often have biased colours and reduced contrast because of the absorption and scattering effects that occur when light propagates in water. Such degraded images cannot meet the needs of underwater operations. The main problem with classic underwater image restoration or enhancement methods is that they consume long calculation times, and the colour or contrast of the resulting images is often still unsatisfactory. Instead of using the complicated physical model of underwater imaging degradation, we propose a new method that deals with underwater images by imitating the colour constancy mechanism of human vision using double-opponency. Firstly, the original image is converted to the LMS space. Then the signals are linearly combined, and Gaussian convolutions are performed to imitate the function of receptive fields (RFs). Next, two RFs with different sizes work together to constitute the double-opponency response. Finally, the underwater light is estimated to correct the colours in the image; further contrast stretching on the luminance is optional. Experiments show that the proposed method can obtain clarified underwater images with higher quality than before, and it requires significantly less computation time than other previously published typical methods.
Due to hardware limitations, existing hyperspectral (HS) cameras often suffer from low spatial/temporal resolution. Recently, it has become prevalent to super-resolve a low-resolution (LR) HS image into a high-resolution (HR) HS image with the guidance of an HR RGB (or multispectral) image. Previous approaches to this guided super-resolution task often model the intrinsic characteristics of the desired HR HS image using hand-crafted priors. More recently, researchers have paid increasing attention to deep learning methods with direct supervised or unsupervised learning, which exploit deep priors only from the training dataset or the testing data. In this article, an efficient convolutional neural network-based method is presented to progressively super-resolve an HS image with RGB image guidance. Specifically, a progressive HS image super-resolution network is proposed, which progressively super-resolves the LR HS image with pixel-shuffled HR RGB image guidance. The super-resolution network is then trained progressively with supervised pre-training and unsupervised adaptation, where supervised pre-training learns the general prior on training data and unsupervised adaptation generalises the general prior to a specific prior for varying testing scenes. The proposed method can effectively exploit priors from the training dataset and from the testing HS and RGB images with a spectral-spatial constraint. It has good generalisation capability, especially for blind HS image super-resolution. Comprehensive experimental results show that the proposed deep progressive learning method outperforms existing state-of-the-art methods for HS image super-resolution in both non-blind and blind cases.
In recent times, an image enhancement approach that learns a global transformation function using deep neural networks has gained attention. However, many existing methods based on this approach share a limitation: their transformation functions are too simple to imitate the complex colour transformations between low-quality images and manually retouched high-quality images. To address this limitation, a simple yet effective approach for image enhancement is proposed. The proposed algorithm is built on a channel-wise intensity transformation; however, the transformation is applied in a learnt embedding space instead of a specific colour space, and the enhanced features are then mapped back to colours. To this end, the authors define the continuous intensity transformation (CIT) to describe the mapping between input and output intensities in the embedding space. Then, an enhancement network is developed, which produces multi-scale feature maps from input images, derives a set of transformation functions, and performs the CIT to obtain enhanced images. Extensive experiments on the MIT-Adobe 5K dataset demonstrate that the authors’ approach improves the performance of conventional intensity transforms on colour space metrics. Specifically, the authors achieved a 3.8% improvement in peak signal-to-noise ratio, a 1.8% improvement in structural similarity index measure, and a 27.5% improvement in learned perceptual image patch similarity. The authors’ algorithm also outperforms state-of-the-art alternatives on three image enhancement datasets: MIT-Adobe 5K, Low-Light, and Google HDR+.
Obtaining high precision is an important consideration for astrometric studies using images from the Narrow Angle Camera (NAC) of the Cassini Imaging Science Subsystem (ISS). Selecting the best centering algorithm is key to enhancing astrometric accuracy. In this study, we compared the accuracy of five centering algorithms: Gaussian fitting, the modified moments method, and three point-spread function (PSF) fitting methods (effective PSF (ePSF), PSFEx, and extended PSF (xPSF) from the Cassini Imaging Central Laboratory for Operations (CICLOPS)). We assessed these algorithms using 70 ISS NAC star field images taken with CL1 and CL2 filters across different stellar magnitudes. The ePSF method consistently demonstrated the highest accuracy, achieving precision below 0.03 pixels for stars of magnitude 8-9. Compared with the previously considered best method, the modified moments method, the ePSF method improved overall accuracy by about 10% and 21% in the sample and line directions, respectively. Surprisingly, the xPSF model provided by CICLOPS had lower precision than the ePSF: the ePSF exhibits an improvement in measurement precision of 23% and 17% in the sample and line directions, respectively, over the xPSF. This discrepancy might be attributed to the xPSF focusing on photometry rather than astrometry. These findings highlight the necessity of constructing PSF models specifically tailored for astrometric purposes in NAC images and provide guidance for enhancing astrometric measurements using these ISS NAC images.
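As a point of reference for the moments family of centering methods, the sketch below computes a background-subtracted, intensity-weighted centroid over a small window; the "modified" variant used in the paper may threshold or weight pixels differently, which is not reproduced here:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Intensity-weighted centroid over a window after subtracting an estimated background:
// a simple stand-in for moments-based star centering.
struct Centroid { double x, y; };

Centroid momentsCenter(const std::vector<double>& win, int w, int h, double background) {
    double sum = 0.0, sx = 0.0, sy = 0.0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double v = std::max(win[y * w + x] - background, 0.0);  // clip negative residuals
            sum += v; sx += v * x; sy += v * y;
        }
    if (sum <= 0.0) return {0.5 * (w - 1), 0.5 * (h - 1)};          // fallback: window centre
    return {sx / sum, sy / sum};
}

int main() {
    // Tiny synthetic 5x5 "star" peaked at pixel (2, 2), sitting on a background of 1.
    std::vector<double> win = {
        1, 1, 1, 1, 1,
        1, 2, 4, 2, 1,
        1, 4, 9, 4, 1,
        1, 2, 4, 2, 1,
        1, 1, 1, 1, 1};
    Centroid c = momentsCenter(win, 5, 5, 1.0);
    std::printf("centroid = (%.3f, %.3f)\n", c.x, c.y);   // expected (2.000, 2.000)
    return 0;
}
```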
Person image generation aims to generate images that maintain the original human appearance in different target poses. Recent works have revealed that the critical element in achieving this task is the alignment of the appearance domain and the pose domain. Previous alignment methods, such as appearance flow warping, correspondence learning, and cross attention, often encounter challenges when it comes to producing fine texture details. These approaches suffer from limitations in accurately estimating appearance flows due to the lack of a global receptive field. Alternatively, they can only perform cross-domain alignment on high-level feature maps with small spatial dimensions, since the computational complexity increases quadratically with larger feature sizes. In this article, the significance of multi-scale alignment, in both low-level and high-level domains, for ensuring reliable cross-domain alignment of appearance and pose is demonstrated. To this end, a novel and effective method named Multi-scale Crossdomain Alignment (MCA) is proposed. Firstly, MCA adopts a global context aggregation transformer to model multi-scale interaction between pose and appearance inputs, which employs pair-wise window-based cross attention. Furthermore, leveraging the integrated global source information for each target position, MCA applies a flexible flow prediction head and point correlation to effectively conduct warping and fusing for final transformed person image generation. The proposed MCA achieves superior performance over other methods on two popular datasets, which verifies the effectiveness of our approach.
Geological discontinuity (GD) plays a pivotal role in determining the catastrophic mechanical failure of jointed rock masses. Accurate and efficient acquisition of GD networks is essential for characterizing and understanding the progressive damage mechanisms of slopes based on monitoring image data. Inspired by recent advances in computer vision, deep learning (DL) models have been widely utilized for image-based fracture identification. The multi-scale characteristics, image resolution, and annotation quality of images cause a scale-space effect (SSE) that makes features indistinguishable from noise, directly affecting accuracy. However, this effect has not received adequate attention. Herein, we try to address this gap by collecting slope images at various proportional scales and constructing multi-scale datasets using image processing techniques. Next, we quantify the intensity of feature signals using metrics such as the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Combining these metrics with scale-space theory, we investigate the influence of the SSE on the differentiation of multi-scale features and the accuracy of recognition. It is found that augmenting an image's level of detail does not always benefit vision-based recognition models. In light of these observations, we propose a scale hybridization approach based on the diffusion mechanism of scale-space representation. The results show that scale hybridization strengthens the tolerance of multi-scale feature recognition under complex environmental noise interference and significantly enhances the recognition accuracy of GD. It also facilitates the objective understanding, description, and analysis of rock behavior and slope stability from the perspective of image data.
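For reference, the conventional definitions of the two feature-signal metrics named above (assumed here to be the standard forms) are

$$\mathrm{PSNR}=10\log_{10}\frac{L^{2}}{\mathrm{MSE}},\qquad \mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl(I(i,j)-K(i,j)\bigr)^{2},$$

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^{2}+\mu_y^{2}+c_1)(\sigma_x^{2}+\sigma_y^{2}+c_2)},$$

where L is the peak pixel value (255 for 8-bit images), I and K are the reference and degraded images, and c_1, c_2 are small stabilizing constants.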
Oscillation detection has been a hot research topic in industry due to the high incidence of oscillation loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation; however, manual visual inspection is labor-intensive and prone to missed detections. Convolutional neural networks (CNNs), inspired by animal visual systems, offer powerful feature extraction capabilities. In this work, an exploration of typical CNN models for visual oscillation detection is performed. Specifically, we tested the MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well suited for oscillation detection. The feasibility and validity of this framework are verified using extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean-nonstationarity. In addition, this framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. Thus, we propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address the above problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. The results obtained by applying the proposed network model to the CAVE test set show that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525, and 0.9438, respectively. Moreover, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model in hyperspectral super-resolution reconstruction.
Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance. With the surge of now-available big data, the speed of the algorithm becomes increasingly important, carrying almost equivalent weight to its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing the loops with vectorization; thus the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
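To give a flavour of the morphological side of such pipelines (this is only a possible component, not the authors' multiscale line detection method; the channel choice, kernel size and thresholding are assumptions), a black top-hat on the green channel highlights dark, elongated vessels:

```cpp
#include <opencv2/opencv.hpp>
#include <vector>

// Hypothetical morphological vessel-enhancement step: vessels appear dark on the green
// channel, so a black top-hat with a disc larger than the vessel width emphasizes them.
int main() {
    cv::Mat fundus = cv::imread("fundus.png");
    if (fundus.empty()) return 1;

    std::vector<cv::Mat> ch;
    cv::split(fundus, ch);                    // BGR split; green channel has the best contrast
    cv::Mat green = ch[1];

    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));
    cv::Mat tophat, vessels;
    cv::morphologyEx(green, tophat, cv::MORPH_BLACKHAT, kernel);  // enhance dark vessels
    cv::threshold(tophat, vessels, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    cv::imwrite("vessels.png", vessels);
    return 0;
}
```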
This study investigated the correlations between the mechanical properties and mineralogy of granite using digital image processing (DIP) and the discrete element method (DEM). The results showed that the X-ray diffraction (XRD)-based DIP method effectively analyzed the mineral composition contents and spatial distributions of granite. During the particle flow code (PFC2D) model calibration phase, the numerical simulation showed that the uniaxial compressive strength (UCS) value, elastic modulus (E), and failure pattern of the granite specimen in the UCS test were comparable to the experiment. By establishing 351 sets of numerical models and exploring the impacts of mineral composition on the mechanical properties of granite, it was found that quartz and feldspar showed no negative correlation with UCS, tensile strength (σ_t), or E. In contrast, mica had a significant negative correlation with UCS, σ_t, and E. The presence of quartz increased the brittleness of granite, whereas the presence of mica and feldspar increased its ductility in UCS and direct tensile strength (DTS) tests. Varying the contents of the major mineral compositions in granite showed only a minor influence on the number of cracks in both UCS and DTS tests.
The research aims to improve the performance of image recognition methods based on descriptions in the form of sets of keypoint descriptors. The main focus is on increasing the speed of establishing the relevance between object and etalon descriptions while maintaining the required level of classification efficiency. The class to be recognized is represented by an infinite set of images obtained from the etalon by applying arbitrary geometric transformations. It is proposed to reduce the descriptions in the etalon database by selecting the most significant descriptor components according to an information content criterion. The informativeness of an etalon descriptor is estimated by the difference of the closest distances to its own and to other descriptions. The developed method determines the relevance of the full description of the recognized object to the reduced descriptions of the etalons. Several practical models of the classifier, with different options for establishing the correspondence between object descriptors and etalons, are considered. The results of experimental modeling of the proposed methods on a database of museum jewelry images are presented. The test sample is formed as a set of images from the etalon database and outside the database, with geometric transformations of scale and rotation applied in the field of view. The practical problem of determining the threshold for the number of votes on which a classification decision is based has been researched. Modeling has revealed the practical possibility of a tenfold reduction of the descriptions with full preservation of classification accuracy; reducing the descriptions by twenty times leads to slightly decreased accuracy. The speed of the analysis increases in proportion to the degree of reduction. The use of reduction by the informativeness criterion confirms the possibility of obtaining the most significant subset of features for classification, which guarantees a decent level of accuracy.
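As a rough illustration of the informativeness criterion described above (the distance metric, the sign convention and the selection threshold are assumptions, not the paper's exact definitions), the sketch below scores each etalon descriptor by how much farther its nearest foreign descriptor is than its nearest same-etalon descriptor, then keeps the highest-scoring ones as the reduced description:

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// One descriptor = fixed-length float vector (e.g. a 64- or 128-D keypoint descriptor).
using Desc = std::vector<float>;

static float dist2(const Desc& a, const Desc& b) {
    float s = 0.f;
    for (size_t i = 0; i < a.size(); ++i) { float d = a[i] - b[i]; s += d * d; }
    return s;
}

// Informativeness of descriptor `d` of one etalon: nearest distance to descriptors of
// other etalons minus nearest distance to the remaining descriptors of its own etalon.
// Larger values are taken to mean "more discriminative".
float informativeness(const Desc& d, size_t selfIdx,
                      const std::vector<Desc>& own,
                      const std::vector<std::vector<Desc>>& others) {
    float dOwn = 1e30f, dOther = 1e30f;
    for (size_t i = 0; i < own.size(); ++i)
        if (i != selfIdx) dOwn = std::min(dOwn, dist2(d, own[i]));
    for (const auto& etalon : others)
        for (const auto& o : etalon) dOther = std::min(dOther, dist2(d, o));
    return dOther - dOwn;
}

// Keep the `keep` most informative descriptors of one etalon (its reduced description).
std::vector<Desc> reduceEtalon(const std::vector<Desc>& own,
                               const std::vector<std::vector<Desc>>& others, size_t keep) {
    std::vector<std::pair<float, size_t>> scored;
    for (size_t i = 0; i < own.size(); ++i)
        scored.push_back({informativeness(own[i], i, own, others), i});
    std::sort(scored.begin(), scored.end(),
              [](const std::pair<float, size_t>& a, const std::pair<float, size_t>& b) {
                  return a.first > b.first;
              });
    std::vector<Desc> reduced;
    for (size_t k = 0; k < std::min(keep, scored.size()); ++k)
        reduced.push_back(own[scored[k].second]);
    return reduced;
}

int main() {
    // Toy 2-D descriptors: one etalon with three descriptors, two foreign etalons.
    std::vector<Desc> own = {{0.0f, 0.0f}, {0.2f, 0.0f}, {5.0f, 5.0f}};
    std::vector<std::vector<Desc>> others = {{{0.05f, 0.0f}}, {{4.0f, 4.0f}}};
    std::vector<Desc> reduced = reduceEtalon(own, others, 1);
    std::printf("kept %zu descriptor(s)\n", reduced.size());
    return 0;
}
```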