The mechanical properties and failure mechanism of lightweight aggregate concrete (LWAC) are a hot topic in the engineering field, and the relationship between its microstructure and macroscopic mechanical properties is also a frontier research topic in the academic field. In this study, image processing technology is used to establish a mesostructural model of lightweight aggregate concrete. Through information extraction and processing of section images of actual LWAC specimens, a mesostructural model with real aggregate characteristics is established. Numerical simulations of the uniaxial tensile test, the uniaxial compression test, and the three-point bending test of LWAC are carried out using a new finite element method, the base force element method. First, image processing technology is used to generate beam specimens, uniaxial compression specimens, and uniaxial tensile specimens of LWAC, which better reproduce the aggregate shape and random distribution of real lightweight aggregate concrete. Second, the three-point bending test is numerically simulated. Third, the uniaxial compression specimen generated by image processing is numerically simulated. Fourth, the uniaxial tensile specimen generated by image processing is numerically simulated. The mechanical behavior and damage modes of the specimens during loading are analyzed, and the numerical results are compared with those of the relevant experiments. The comparison verifies the feasibility and correctness of the mesoscale model established in this study for analyzing the mesoscale mechanics of lightweight aggregate concrete materials. Image processing technology has broad application prospects in the field of concrete mesoscopic damage analysis.
In today’s world, image processing techniques play a crucial role in the prognosis and diagnosis of various diseases due to the development of several precise and accurate methods for medical images. Automated analysis of medical images is essential for doctors, as manual investigation often leads to inter-observer variability. This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework. The proposed hybridized method combines Modified Inertia Weight Particle Swarm Optimization (MIWPSO) and Fuzzy C-Means clustering (FCM) algorithms. Traditional FCM does not incorporate spatial neighborhood features, making it highly sensitive to noise, which significantly affects segmentation output. Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise. The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation. Furthermore, segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule (DT-TAR) algorithm. Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42% on the dataset, highlighting its significant impact on improving diagnostic capabilities in medical imaging.
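For orientation, the sketch below shows one common way a spatial function can be folded into FCM memberships: each pixel's membership in a cluster is re-weighted by the summed memberships of its neighbors, so isolated noisy pixels are pulled toward the label their neighborhood supports. The 3×3 window and unit exponents are assumptions for illustration; this is not the authors' exact FCM-MIWPSO formulation.

```cpp
// Minimal sketch of spatially re-weighted fuzzy memberships (assumed 3x3 window,
// unit exponents); not the authors' exact FCM-MIWPSO update.
#include <vector>
#include <cstddef>

// u[c][y*W + x] holds the fuzzy membership of pixel (x, y) in cluster c.
void spatial_reweight(std::vector<std::vector<double>>& u, int W, int H) {
    const std::size_t C = u.size();
    std::vector<std::vector<double>> h(C, std::vector<double>(W * H, 0.0));

    // Spatial function: sum of memberships over each pixel's 3x3 neighborhood.
    for (std::size_t c = 0; c < C; ++c)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                for (int dy = -1; dy <= 1; ++dy)
                    for (int dx = -1; dx <= 1; ++dx) {
                        const int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < W && ny >= 0 && ny < H)
                            h[c][y * W + x] += u[c][ny * W + nx];
                    }

    // Re-weight and renormalize: a noisy pixel loses weight in clusters
    // that its neighbors do not support.
    for (int i = 0; i < W * H; ++i) {
        double denom = 0.0;
        for (std::size_t c = 0; c < C; ++c) denom += u[c][i] * h[c][i];
        if (denom > 0.0)
            for (std::size_t c = 0; c < C; ++c) u[c][i] = u[c][i] * h[c][i] / denom;
    }
}
```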
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves for various reasons, so preprocessing is required to remove these outliers and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a nearby star, referred to as “stellar contamination,” and second, the brightness markedly decreases due to cloud cover, referred to as “cloudy contamination.” The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive, so we propose the use of machine learning methods as a substitute. Convolutional neural networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and Light Gradient Boosting Machine, and conduct comparative analyses of the results.
Real-time capabilities and computational efficiency are provided by parallel image processing utilizing OpenMP. However, race conditions can affect the accuracy and reliability of the outcomes. This paper highlights the importance of addressing race conditions in parallel image processing, specifically focusing on color inverse filtering using OpenMP. We considered three solutions to the race conditions, each with distinct characteristics: #pragma omp atomic, which protects individual memory operations for fine-grained control; #pragma omp critical, which protects entire code blocks for exclusive access; and #pragma omp parallel sections with a reduction clause, which safely aggregates values across threads. Our findings show that the produced images were unaffected by the race conditions. However, it becomes evident that resolving the race conditions in the code makes it significantly faster, especially when it is executed on multiple cores.
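The sketch below illustrates the three protections named above on a deliberately racy aggregate (a global pixel-intensity sum); it is illustrative only and not the paper's color-inverse-filtering code.

```cpp
// Three OpenMP protections applied to the same shared accumulator.
#include <omp.h>
#include <vector>
#include <cstdio>

int main() {
    std::vector<int> pixels(1 << 20, 128);
    long long sum_atomic = 0, sum_critical = 0, sum_reduction = 0;

    // 1) atomic: protects the single read-modify-write on sum_atomic.
    #pragma omp parallel for
    for (int i = 0; i < (int)pixels.size(); ++i) {
        #pragma omp atomic
        sum_atomic += pixels[i];
    }

    // 2) critical: serializes a whole block; coarser-grained than atomic.
    #pragma omp parallel for
    for (int i = 0; i < (int)pixels.size(); ++i) {
        #pragma omp critical
        { sum_critical += pixels[i]; }
    }

    // 3) reduction: each thread keeps a private partial sum, combined at the end.
    #pragma omp parallel for reduction(+ : sum_reduction)
    for (int i = 0; i < (int)pixels.size(); ++i)
        sum_reduction += pixels[i];

    std::printf("%lld %lld %lld\n", sum_atomic, sum_critical, sum_reduction);
    return 0;
}
```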
In recent years, the widespread adoption of parallel computing, especially in multi-core processors and high-performance computing environments, has ushered in a new era of efficiency and speed. This trend is particularly noteworthy in the field of image processing, which has witnessed significant advancements. This parallel computing project explored the field of parallel image processing, with a focus on the grayscale conversion of color images. Our approach involved integrating OpenMP into our framework to parallelize a critical image processing task: grayscale conversion. By using OpenMP, we distributed the workload across multiple threads and thereby improved the overall performance of the conversion process. The primary objectives of the project were to optimize computation time and improve overall efficiency, particularly for the grayscale conversion of color images. Utilizing OpenMP for concurrent processing across multiple cores significantly reduced execution times through the effective distribution of tasks among these cores. The speedup values for various image sizes highlighted the efficacy of parallel processing, especially for large images. However, a detailed examination revealed a potential decline in parallelization efficiency with an increasing number of cores. This underscores the importance of a carefully optimized parallelization strategy that considers factors such as load balancing and minimizing communication overhead. Despite these challenges, the overall scalability and efficiency achieved with parallel image processing underscore OpenMP’s effectiveness in accelerating image manipulation tasks.
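A minimal sketch of the kind of OpenMP grayscale kernel described above is given below; the Rec. 601 luminance weights and the interleaved RGB buffer layout are assumptions, not the project's exact implementation.

```cpp
// Grayscale conversion of an interleaved RGB buffer, parallelized with OpenMP.
#include <omp.h>
#include <vector>
#include <cstdint>

std::vector<std::uint8_t> to_grayscale(const std::vector<std::uint8_t>& rgb,
                                       int width, int height) {
    std::vector<std::uint8_t> gray(static_cast<std::size_t>(width) * height);

    // Each output pixel is independent, so the loop parallelizes with no shared writes.
    #pragma omp parallel for
    for (int i = 0; i < width * height; ++i) {
        const std::uint8_t r = rgb[3 * i], g = rgb[3 * i + 1], b = rgb[3 * i + 2];
        gray[i] = static_cast<std::uint8_t>(0.299 * r + 0.587 * g + 0.114 * b);
    }
    return gray;
}
```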
The rail surface status image is affected by noise in the shooting environment and contains a large amount of interference information, which increases the difficulty of rail surface status identification. To solve this problem, a preprocessing method for rail surface status images is proposed. The preprocessing process mainly includes image graying, image denoising, image geometric correction, image extraction, and data augmentation, and finally the rail surface image database is built. The experimental results show that this method can efficiently complete image processing, facilitate feature extraction of rail surface status images, and improve rail surface status recognition accuracy.
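As an example of one step in such a pipeline, the sketch below applies a 3×3 median filter to a grayscale rail image to suppress impulse noise; the window size and border handling are assumptions rather than the paper's exact denoising choice.

```cpp
// 3x3 median filter on an 8-bit grayscale image; borders are copied unchanged.
#include <algorithm>
#include <array>
#include <vector>
#include <cstdint>

std::vector<std::uint8_t> median3x3(const std::vector<std::uint8_t>& img,
                                    int W, int H) {
    std::vector<std::uint8_t> out(img);
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            std::array<std::uint8_t, 9> win;
            int k = 0;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    win[k++] = img[(y + dy) * W + (x + dx)];
            std::nth_element(win.begin(), win.begin() + 4, win.end());
            out[y * W + x] = win[4];   // the median suppresses isolated impulse noise
        }
    return out;
}
```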
This study investigated the correlations between the mechanical properties and mineralogy of granite using digital image processing (DIP) and the discrete element method (DEM). The results showed that the X-ray diffraction (XRD)-based DIP method effectively analyzed the mineral composition contents and spatial distributions of granite. During the particle flow code (PFC2D) model calibration phase, the numerical simulation showed that the uniaxial compressive strength (UCS) value, elastic modulus (E), and failure pattern of the granite specimen in the UCS test were comparable to the experiment. By establishing 351 sets of numerical models and exploring the impacts of mineral composition on the mechanical properties of granite, it was found that quartz and feldspar showed no negative correlation with UCS, tensile strength (σ_t), and E. In contrast, mica had a significant negative correlation with UCS, σ_t, and E. The presence of quartz increased the brittleness of granite, whereas the presence of mica and feldspar increased its ductility in UCS and direct tensile strength (DTS) tests. Varying the contents of the major mineral compositions of granite had a minor influence on the number of cracks in both UCS and DTS tests.
Large structures, such as bridges and highways, need to be inspected to evaluate their actual physical and functional condition, to predict future conditions, and to help decision makers allocate maintenance and rehabilitation resources. The assessment of civil infrastructure condition is carried out using information obtained by inspection and/or monitoring operations. Traditional structural health monitoring (SHM) techniques rely on visual inspection against inspection standards, and the associated data collection can be time-consuming, expensive, labor-intensive, and dangerous. To address these limitations, machine vision-based inspection procedures have increasingly been investigated within the research community. In this context, this paper proposes and compares four different computer vision procedures to identify damage by image processing: Otsu thresholding, Markov random field segmentation, an RGB color detection technique, and the K-means clustering algorithm. The first method is based on segmentation by thresholding, which returns a binary image from a grayscale image. The Markov random field technique uses a probabilistic approach to assign labels that model the spatial dependencies among image pixels. The RGB technique uses color detection to evaluate the extent of defects. Finally, the K-means algorithm clusters the images based on Euclidean distance. The benefits and limitations of each technique are discussed, and the challenges of using them are highlighted. To show the effectiveness of the described techniques in damage detection for civil infrastructure, a case study is presented. The results show that various types of corrosion and cracks can be detected by image processing, making the proposed techniques a suitable tool for predicting damage evolution in civil infrastructure.
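For reference, the first of the four procedures can be sketched compactly: Otsu's method picks the grayscale threshold that maximizes the between-class variance of the resulting binary split. The implementation below is a generic version, not the paper's code.

```cpp
// Otsu's global threshold for an 8-bit grayscale image.
#include <vector>
#include <cstdint>

int otsu_threshold(const std::vector<std::uint8_t>& img) {
    std::vector<double> hist(256, 0.0);              // 256-bin histogram
    for (std::uint8_t v : img) hist[v] += 1.0;
    const double total = static_cast<double>(img.size());

    double sum_all = 0.0;
    for (int t = 0; t < 256; ++t) sum_all += t * hist[t];

    double w0 = 0.0, sum0 = 0.0, best_var = -1.0;
    int best_t = 0;
    for (int t = 0; t < 256; ++t) {
        w0 += hist[t];                               // weight of the background class
        if (w0 == 0.0) continue;
        const double w1 = total - w0;                // weight of the foreground class
        if (w1 == 0.0) break;
        sum0 += t * hist[t];
        const double m0 = sum0 / w0;
        const double m1 = (sum_all - sum0) / w1;
        const double var_between = w0 * w1 * (m0 - m1) * (m0 - m1);
        if (var_between > best_var) { best_var = var_between; best_t = t; }
    }
    return best_t;                                   // binarize by comparing pixels to best_t
}
```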
A comprehensive understanding of the spatial distribution and clustering patterns of gravels is of great significance for ecological restoration and monitoring. However, traditional methods for studying gravels are inefficient and error-prone. This study investigated the spatial distribution and clustering characteristics of gravels in the grassland of the northern Tibetan Plateau based on digital image processing technology combined with a self-organizing map (SOM) and multivariate statistical methods. Moreover, the correlations of gravel morphological parameters between different cluster groups and the environmental factors affecting gravel distribution were analyzed. The results showed that the morphological characteristics of gravels in the northern region (cluster C) and southern region (cluster B) of the Tibetan Plateau were similar, with low gravel coverage, small gravel diameter, and elongated shapes; these regions are mainly distributed in high mountainous areas with large topographic relief. The central region (cluster A) has high gravel coverage with larger diameters and is mainly distributed in high-altitude plains with smaller undulation. Principal component analysis (PCA) results showed that the gravel distribution of cluster A may be mainly affected by vegetation, while those of clusters B and C could be mainly affected by topography, climate, and soil. The study confirmed that the combination of digital image processing technology and SOM can effectively analyze the spatial distribution characteristics of gravels, providing a new mode for gravel research.
Image processing networks have achieved great success in many fields, and thus the issue of copyright protection for image processing networks has become a focus of attention. Model watermarking techniques are widely used for model copyright protection, but two challenges remain: (1) designing universal trigger sample watermarks for different network models is still a challenge; and (2) existing copyright protection methods based on trigger sample watermarking have difficulty resisting forgery attacks. In this work, we propose a dual model watermarking framework for copyright protection in image processing networks. The trigger sample watermark is embedded in the training process of the model, which can effectively verify the model copyright. We also design a common method for generating trigger sample watermarks based on generative adversarial networks, adaptively generating trigger sample watermarks according to different models. The spatial watermark is embedded into the model output; when an attacker claims model copyright using a forged trigger sample watermark, the spatial watermark can be correctly extracted to distinguish between the pirated and the protected model. Experiments show that the proposed framework performs well on different image segmentation networks, namely UNET, UNET++, and FCN (fully convolutional network), and effectively resists forgery attacks.
The continuous growth in the scale of unmanned aerial vehicle (UAV) applications in transmission line inspection has resulted in a corresponding increase in the demand for UAV inspection image processing. Owing to its excellent performance in computer vision, deep learning has been applied to UAV inspection image processing tasks such as power line identification and insulator defect detection. Despite their excellent performance, deep-learning-based electric power UAV inspection image processing models face several problems, such as a small application scope, the need for constant retraining and optimization, and high R&D monetary and time costs, due to the black-box and scene-data-driven characteristics of deep learning. In this study, an automated deep learning system for electric power UAV inspection image analysis and processing is proposed as a solution to the aforementioned problems. The system design is based on three critical design principles: generalizability, extensibility, and automation. Pre-trained models, fine-tuning (downstream task adaptation), and automated machine learning, which are closely related to these design principles, are reviewed. In addition, an automated deep learning system architecture for electric power UAV inspection image analysis and processing is presented. A prototype system was constructed, and experiments were conducted on two electric power UAV inspection image analysis and processing tasks: insulator self-detonation and bird nest recognition. The models constructed using the prototype system achieved 91.36% and 86.13% mAP for insulator self-detonation and bird nest recognition, respectively, demonstrating that the system design concept is reasonable and the system architecture is feasible.
Observing and analyzing surface images is critical for studying the interaction between plasma and irradiated plasma-facing materials. This paper presents a method for the automatic recognition of bubbles in transmission electron microscope (TEM) images of W nanofibers using image processing techniques and a convolutional neural network (CNN). We employ a three-stage approach consisting of Otsu, local-threshold, and watershed segmentation to extract bubbles from noisy images. To address over-segmentation, we propose a combination of an area factor and radial pixel intensity scanning. A CNN is used to recognize bubbles, outperforming traditional neural network models such as AlexNet and GoogLeNet with an accuracy of 97.1% and a recall of 98.6%. Our method is tested on both clear and blurred TEM images and demonstrates human-like performance in recognizing bubbles. This work contributes to the development of quantitative image analysis in the field of plasma-material interactions, offering a scalable solution for analyzing material defects. Overall, this study's findings establish the potential for automatic defect recognition and its applications in the assessment of plasma-material interactions. The method can be employed in a variety of specialties, including plasma physics and materials science.
Numerical simulation is the most powerful computational analysis tool for a large variety of engineering and physical problems. For a complex problem involving multiple fields, processes, and scales, different computing tools have to be developed to solve particular fields at different scales and for different processes, so the integration of different types of software is inevitable. However, it is difficult to transfer meshes and simulated results among software packages because of the lack of shared data formats or because of encrypted data formats. An image processing based method for three-dimensional model reconstruction for numerical simulation is proposed, which solves the integration problem using a series of slice or projection images obtained from the post-processing modules of the numerical simulation software. By mapping image pixels to the meshes of either finite difference or finite element models, the geometry contour can be extracted to export a stereolithography (STL) model. The result values, represented by color, can be deduced and assigned to the meshes. All the models, together with their data, can be directly or indirectly integrated into other software as a continued or new numerical simulation. The three-dimensional reconstruction method has been validated in the numerical simulation of castings, and case studies are provided in this study.
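A minimal sketch of the pixel-to-mesh mapping idea is given below: each foreground pixel of a slice stack becomes a cell of a structured grid carrying a field value decoded from its intensity. The threshold test and the linear intensity-to-value mapping are assumptions for illustration, not the paper's exact reconstruction procedure.

```cpp
// Map a stack of grayscale slice images onto structured voxel cells with field values.
#include <vector>
#include <cstdint>

struct Voxel { int i, j, k; double value; };

std::vector<Voxel> voxelize(const std::vector<std::vector<std::uint8_t>>& slices,
                            int W, int H, std::uint8_t threshold,
                            double v_min, double v_max) {
    std::vector<Voxel> cells;
    for (int k = 0; k < (int)slices.size(); ++k)       // slice index = z layer
        for (int j = 0; j < H; ++j)
            for (int i = 0; i < W; ++i) {
                const std::uint8_t p = slices[k][j * W + i];
                if (p < threshold) continue;           // background: no cell
                // Decode the simulated field (e.g., temperature) from intensity.
                const double value = v_min + (v_max - v_min) * p / 255.0;
                cells.push_back({i, j, k, value});
            }
    return cells;                                      // cells map one-to-one to FD/FE meshes
}
```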
In steel plants, estimation of the production system characteristics is highly critical for adjusting the system parameters for best efficiency. Although the system parameters may be tuned very well, deficiencies may still occur in the product due to the machine and human factors involved in the production line, and it is important to detect such problems as early as possible. Surface defects and dimensional deviations are the most important quality problems. This study aims to develop an approach to measure the dimensions of metal profiles from images of them, which can be used to detect deviations in dimensions. A platform was introduced to simulate the real-time environment, and images were taken of the metal profile using four laser light sources. The shape of the material is reconstructed by combining the images taken from different cameras, and the real dimensions are obtained by applying image processing and mathematical conversion operations to the images. The results, which deviate only slightly from the real values, show that this method can be applied in a real-time production line.
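The "mathematical conversion" step can be illustrated by a simple pixel-to-millimetre calibration, sketched below under the assumption of a fixed scale factor obtained from a target of known size; the described system actually combines images from multiple cameras and four laser light sources, so this is illustrative only.

```cpp
// Convert a measured pixel distance into millimetres using a calibrated scale factor.
#include <cmath>

struct Calibration {
    double mm_per_pixel;   // obtained by imaging a target of known width
};

Calibration calibrate(double target_width_mm, double target_width_px) {
    return {target_width_mm / target_width_px};
}

double edge_distance_mm(double x1_px, double y1_px, double x2_px, double y2_px,
                        const Calibration& cal) {
    const double d_px = std::hypot(x2_px - x1_px, y2_px - y1_px);
    return d_px * cal.mm_per_pixel;   // profile dimension in real-world units
}
```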
Geological discontinuity (GD) plays a pivotal role in determining the catastrophic mechanical failure of jointed rock masses. Accurate and efficient acquisition of GD networks is essential for characterizing and understanding the progressive damage mechanisms of slopes based on monitoring image data. Inspired by recent advances in computer vision, deep learning (DL) models have been widely utilized for image-based fracture identification. The multi-scale characteristics, image resolution, and annotation quality of images cause a scale-space effect (SSE) that makes features indistinguishable from noise, directly affecting accuracy. However, this effect has not received adequate attention. Herein, we try to address this gap by collecting slope images at various proportional scales and constructing multi-scale datasets using image processing techniques. Next, we quantify the intensity of feature signals using metrics such as the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Combining these metrics with scale-space theory, we investigate the influence of the SSE on the differentiation of multi-scale features and on recognition accuracy. It is found that augmenting the image's detail capacity does not always benefit vision-based recognition models. In light of these observations, we propose a scale hybridization approach based on the diffusion mechanism of the scale-space representation. The results show that scale hybridization strengthens the tolerance of multi-scale feature recognition under complex environmental noise interference and significantly enhances the recognition accuracy of GD. It also facilitates the objective understanding, description, and analysis of rock behavior and slope stability from the perspective of image data.
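For reference, the PSNR metric used above to quantify feature-signal intensity can be computed as sketched below for 8-bit grayscale images of equal size.

```cpp
// Peak signal-to-noise ratio (dB) between two equally sized 8-bit grayscale images.
#include <cmath>
#include <vector>
#include <cstdint>
#include <limits>

double psnr(const std::vector<std::uint8_t>& a, const std::vector<std::uint8_t>& b) {
    double mse = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        const double d = static_cast<double>(a[i]) - static_cast<double>(b[i]);
        mse += d * d;
    }
    mse /= static_cast<double>(a.size());
    if (mse == 0.0) return std::numeric_limits<double>::infinity();  // identical images
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}
```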
Algal blooms, the spread of algae on the surface of water bodies, have adverse effects not only on aquatic ecosystems but also on human life. The adverse effects of harmful algal blooms (HABs) necessitate a convenient solution for detection and monitoring. Unmanned aerial vehicles (UAVs) have recently emerged as a tool for algal bloom detection, efficiently providing on-demand images at high spatiotemporal resolutions. This study developed an image processing method for algal bloom area estimation from aerial images (obtained from the internet) captured using UAVs. As a remote sensing method for HAB detection, analysis, and monitoring, a combination of histogram and texture analyses was used to efficiently estimate the area of HABs. Statistical features such as entropy (using the Kullback-Leibler method) were emphasized with the aid of a gray-level co-occurrence matrix. The results showed that the orthogonal images produced fewer errors, and that a morphological filter best detected algal blooms in real time, with a precision of 80%. This study provides efficient image processing approaches using on-board UAVs for HAB monitoring.
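The texture side of such an analysis can be illustrated with a gray-level co-occurrence matrix (GLCM) entropy, sketched below; the 8-level quantization and the horizontal (1, 0) offset are assumptions, not the study's exact configuration.

```cpp
// Entropy of a GLCM built for a single horizontal offset on an 8-bit grayscale image.
#include <cmath>
#include <vector>
#include <cstdint>

double glcm_entropy(const std::vector<std::uint8_t>& img, int W, int H) {
    const int levels = 8;                               // coarse gray-level quantization
    std::vector<double> glcm(levels * levels, 0.0);
    double total = 0.0;

    for (int y = 0; y < H; ++y)
        for (int x = 0; x + 1 < W; ++x) {               // horizontal neighbor offset (1, 0)
            const int a = img[y * W + x] * levels / 256;
            const int b = img[y * W + x + 1] * levels / 256;
            glcm[a * levels + b] += 1.0;
            total += 1.0;
        }

    double entropy = 0.0;
    for (double c : glcm) {
        if (c <= 0.0) continue;
        const double p = c / total;
        entropy -= p * std::log2(p);                    // high entropy = irregular texture
    }
    return entropy;
}
```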
Oscillation detection has been a hot research topic in industry due to the high incidence of oscillation loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation, but manual visual inspection is labor-intensive and prone to missed detections. Convolutional neural networks (CNNs), inspired by animal visual systems, offer powerful feature extraction capabilities. In this work, an exploration of typical CNN models for visual oscillation detection is performed. Specifically, we tested the MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models and found that such a visual framework is well suited to oscillation detection. The feasibility and validity of this framework are verified using extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean-nonstationarity. In addition, the framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
As a branch of quantum image processing, quantum image scaling has been widely studied. However, most of the existing quantum image scaling algorithms are based on nearest-neighbor interpolation or bilinear interpolation; the quantum version of bicubic interpolation has not yet been studied. In this work, we present the first quantum image scaling scheme for bicubic interpolation based on the novel enhanced quantum representation (NEQR). Our scheme can realize synchronous enlargement and reduction of an image of size 2^n × 2^n by an integer factor. First, the image is represented by NEQR and the original image coordinates are obtained through multiple CNOT modules. Then, the 16 neighborhood pixels are obtained by quantum operation circuits, and the corresponding weights of these pixels are calculated by quantum arithmetic modules. Finally, a quantum matrix operation, instead of a classical convolution operation, is used to realize the convolution sum over these pixels. Through simulation experiments and complexity analysis, we demonstrate that our scheme achieves exponential speedup over the classical bicubic interpolation algorithm and performs better than the quantum version of bilinear interpolation.
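For context, the 16-neighborhood weights computed by the quantum arithmetic modules correspond to the classical bicubic convolution kernel, sketched below; the kernel parameter a = -0.5 is the common choice and an assumption here, not necessarily the scheme's exact value.

```cpp
// Classical bicubic (cubic convolution) kernel and its 16-neighbor weights.
#include <cmath>

double cubic_weight(double x, double a = -0.5) {
    const double ax = std::fabs(x);
    if (ax <= 1.0) return (a + 2.0) * ax * ax * ax - (a + 3.0) * ax * ax + 1.0;
    if (ax < 2.0)  return a * ax * ax * ax - 5.0 * a * ax * ax + 8.0 * a * ax - 4.0 * a;
    return 0.0;
}

// Weight of the neighbor at integer offset (dx, dy) in {-1, 0, 1, 2}^2 around the
// sampling point with fractional offsets fx, fy in [0, 1).
double bicubic_weight(int dx, int dy, double fx, double fy) {
    return cubic_weight(dx - fx) * cubic_weight(dy - fy);
}
```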
Diagnosing various diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting retinal blood vessels. The task is particularly challenging when dealing with color fundus images due to issues like non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance: with the surge of now-available big data, the speed of the algorithm becomes increasingly important, carrying almost equivalent weight to its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation, leveraging efficient and robust techniques based on multiscale line detection and mathematical morphology. Our algorithm's performance is evaluated on two publicly available datasets, namely the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structured Analysis of the Retina (STARE) dataset. The experimental results demonstrate the effectiveness of our method, with mean accuracy values of 0.9467 for DRIVE and 0.9535 for STARE, as well as sensitivity values of 0.6952 for DRIVE and 0.6809 for STARE. Notably, our algorithm exhibits competitive performance with state-of-the-art methods. Importantly, it operates at an average speed of 3.73 s per image for DRIVE and 3.75 s for STARE. It is worth noting that these results were achieved using Matlab scripts containing multiple loops, which suggests that the processing time can be further reduced by replacing loops with vectorization; thus the proposed algorithm can be deployed in real-time applications. In summary, our proposed system strikes a fine balance between swift computation and accuracy that is on par with the best available methods in the field.
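A single-scale sketch of the line-detector response underlying the multiscale method is given below: the strongest mean intensity along short oriented segments through a pixel is compared with the local window mean, which responds strongly to elongated vessel-like structures. The window size, segment length, and 12 orientations are assumptions; the full method repeats this at several lengths and combines the responses with morphological post-processing.

```cpp
// Basic single-scale line-detector response on a grayscale image stored row-major.
#include <algorithm>
#include <cmath>
#include <vector>

std::vector<double> line_response(const std::vector<double>& img, int W, int H,
                                  int win = 15, int len = 15) {
    std::vector<double> resp(img.size(), 0.0);
    const int r = win / 2;
    const double pi = 3.14159265358979323846;

    for (int y = r; y < H - r; ++y)
        for (int x = r; x < W - r; ++x) {
            // Mean over the square window centered on the pixel.
            double win_sum = 0.0;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx)
                    win_sum += img[(y + dy) * W + (x + dx)];
            const double win_mean = win_sum / (win * win);

            // Strongest mean along 12 oriented line segments through the pixel.
            double best_line = 0.0;
            for (int o = 0; o < 12; ++o) {
                const double th = o * pi / 12.0;
                double s = 0.0;
                for (int t = -len / 2; t <= len / 2; ++t) {
                    const int px = x + static_cast<int>(std::lround(t * std::cos(th)));
                    const int py = y + static_cast<int>(std::lround(t * std::sin(th)));
                    s += img[py * W + px];
                }
                best_line = std::max(best_line, s / (len / 2 * 2 + 1));
            }
            resp[y * W + x] = best_line - win_mean;  // large for elongated bright structures
        }
    return resp;
}
```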
As a part of quantum image processing, quantum image filtering is a crucial technology in the development of quantum computing. Low-pass filtering can effectively achieve anti-aliasing effects on images. Currently, most quantum image filtering is defined in the classical domain and on grayscale images, and there are relatively few studies on anti-aliasing in the quantum domain. This paper proposes a scheme for anti-aliasing filtering based on quantum grayscale and color image scaling in the spatial domain, achieving an anti-aliasing filtering effect on quantum images during the scaling process. First, we use the novel enhanced quantum representation (NEQR) and the improved quantum representation of color images (INCQI) to represent classical images. Since aliasing is more pronounced when images are scaled down, this paper focuses only on the anti-aliasing effect in the reduction case. Subsequently, we perform anti-aliasing filtering on the quantum representation of the original image and then use bilinear interpolation to scale down the image, achieving the anti-aliasing effect. The constructed pyramid model is then used to select an appropriate image for upscaling back to the original image size. Finally, the complexity of the circuit is analyzed. Compared with images that experience aliasing solely due to scaling, applying anti-aliasing filtering results in smoother and clearer outputs. Additionally, the anti-aliasing filtering allows for manual intervention to select the desired level of image smoothness.
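For reference, the classical bilinear sampling used in the scaling step can be sketched as below; the quantum encoding and the anti-aliasing pre-filter are omitted, and the sample coordinates are assumed to lie inside the image.

```cpp
// Bilinear sample of an 8-bit grayscale image at fractional coordinates
// (x, y) assumed to lie in [0, W-1] x [0, H-1].
#include <algorithm>
#include <vector>
#include <cstdint>

double bilinear_sample(const std::vector<std::uint8_t>& img, int W, int H,
                       double x, double y) {
    const int x0 = std::min(static_cast<int>(x), W - 2);
    const int y0 = std::min(static_cast<int>(y), H - 2);
    const double fx = x - x0, fy = y - y0;               // fractional offsets
    const double p00 = img[y0 * W + x0],       p10 = img[y0 * W + x0 + 1];
    const double p01 = img[(y0 + 1) * W + x0], p11 = img[(y0 + 1) * W + x0 + 1];
    return (1 - fx) * (1 - fy) * p00 + fx * (1 - fy) * p10
         + (1 - fx) * fy * p01       + fx * fy * p11;
}
```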
基金supported by the National Science Foundation of China(10972015,11172015)the Beijing Natural Science Foundation(8162008).
文摘The mechanical properties and failure mechanism of lightweight aggregate concrete(LWAC)is a hot topic in the engineering field,and the relationship between its microstructure and macroscopic mechanical properties is also a frontier research topic in the academic field.In this study,the image processing technology is used to establish a micro-structure model of lightweight aggregate concrete.Through the information extraction and processing of the section image of actual light aggregate concrete specimens,the mesostructural model of light aggregate concrete with real aggregate characteristics is established.The numerical simulation of uniaxial tensile test,uniaxial compression test and three-point bending test of lightweight aggregate concrete are carried out using a new finite element method-the base force element method respectively.Firstly,the image processing technology is used to produce beam specimens,uniaxial compression specimens and uniaxial tensile specimens of light aggregate concrete,which can better simulate the aggregate shape and random distribution of real light aggregate concrete.Secondly,the three-point bending test is numerically simulated.Thirdly,the uniaxial compression specimen generated by image processing technology is numerically simulated.Fourth,the uniaxial tensile specimen generated by image processing technology is numerically simulated.The mechanical behavior and damage mode of the specimen during loading were analyzed.The results of numerical simulation are compared and analyzed with those of relevant experiments.The feasibility and correctness of the micromodel established in this study for analyzing the micromechanics of lightweight aggregate concrete materials are verified.Image processing technology has a broad application prospect in the field of concrete mesoscopic damage analysis.
基金Scientific Research Deanship has funded this project at the University of Ha’il–Saudi Arabia Ha’il–Saudi Arabia through project number RG-21104.
文摘In today’s world,image processing techniques play a crucial role in the prognosis and diagnosis of various diseases due to the development of several precise and accurate methods for medical images.Automated analysis of medical images is essential for doctors,as manual investigation often leads to inter-observer variability.This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework.The proposed hybridized method combines Modified Inertia Weight Particle Swarm Optimization(MIWPSO)and Fuzzy C-Means clustering(FCM)algorithms.Traditional FCM does not incorporate spatial neighborhood features,making it highly sensitive to noise,which significantly affects segmentation output.Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise.The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation.Furthermore,segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule(DT-TAR)Algorithm.Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42%on the dataset,highlighting its significant impact on improving diagnostic capabilities in medical imaging.
基金funded by the National Natural Science Foundation of China(NSFC,Nos.12373086 and 12303082)CAS“Light of West China”Program+2 种基金Yunnan Revitalization Talent Support Program in Yunnan ProvinceNational Key R&D Program of ChinaGravitational Wave Detection Project No.2022YFC2203800。
文摘Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal.Analyzing light curves to determine attitude is the most commonly used method.In photometric observations,outliers may exist in the obtained light curves due to various reasons.Therefore,preprocessing is required to remove these outliers to obtain high quality light curves.Through statistical analysis,the reasons leading to outliers can be categorized into two main types:first,the brightness of the object significantly increases due to the passage of a star nearby,referred to as“stellar contamination,”and second,the brightness markedly decreases due to cloudy cover,referred to as“cloudy contamination.”The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive.However,we propose the utilization of machine learning methods as a substitute.Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination,achieving F1 scores of 1.00 and 0.98 on a test set,respectively.We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine,then conduct comparative analyses of the results.
文摘Real-time capabilities and computational efficiency are provided by parallel image processing utilizing OpenMP. However, race conditions can affect the accuracy and reliability of the outcomes. This paper highlights the importance of addressing race conditions in parallel image processing, specifically focusing on color inverse filtering using OpenMP. We considered three solutions to solve race conditions, each with distinct characteristics: #pragma omp atomic: Protects individual memory operations for fine-grained control. #pragma omp critical: Protects entire code blocks for exclusive access. #pragma omp parallel sections reduction: Employs a reduction clause for safe aggregation of values across threads. Our findings show that the produced images were unaffected by race condition. However, it becomes evident that solving the race conditions in the code makes it significantly faster, especially when it is executed on multiple cores.
文摘In recent years, the widespread adoption of parallel computing, especially in multi-core processors and high-performance computing environments, ushered in a new era of efficiency and speed. This trend was particularly noteworthy in the field of image processing, which witnessed significant advancements. This parallel computing project explored the field of parallel image processing, with a focus on the grayscale conversion of colorful images. Our approach involved integrating OpenMP into our framework for parallelization to execute a critical image processing task: grayscale conversion. By using OpenMP, we strategically enhanced the overall performance of the conversion process by distributing the workload across multiple threads. The primary objectives of our project revolved around optimizing computation time and improving overall efficiency, particularly in the task of grayscale conversion of colorful images. Utilizing OpenMP for concurrent processing across multiple cores significantly reduced execution times through the effective distribution of tasks among these cores. The speedup values for various image sizes highlighted the efficacy of parallel processing, especially for large images. However, a detailed examination revealed a potential decline in parallelization efficiency with an increasing number of cores. This underscored the importance of a carefully optimized parallelization strategy, considering factors like load balancing and minimizing communication overhead. Despite challenges, the overall scalability and efficiency achieved with parallel image processing underscored OpenMP’s effectiveness in accelerating image manipulation tasks.
文摘The rail surface status image is affected by the noise in the shooting environment and contains a large amount of interference information, which increases the difficulty of rail surface status identification. In order to solve this problem, a preprocessing method for the rail surface state image is proposed. The preprocessing process mainly includes image graying, image denoising, image geometric correction, image extraction, data amplification, and finally building the rail surface image database. The experimental results show that this method can efficiently complete image processing, facilitate feature extraction of rail surface status images, and improve rail surface status recognition accuracy.
基金This research was supported by the Department of Mining Engineering at the University of Utah.In addition,the lead author wishes to acknowledge the financial support received from the Talent Introduction Project,part of the Elite Program of Shandong University of Science and Technology(No.0104060540171).
文摘This study investigated the correlations between mechanical properties and mineralogy of granite using the digital image processing(DIP) and discrete element method(DEM). The results showed that the X-ray diffraction(XRD)-based DIP method effectively analyzed the mineral composition contents and spatial distributions of granite. During the particle flow code(PFC2D) model calibration phase, the numerical simulation exhibited that the uniaxial compressive strength(UCS) value, elastic modulus(E), and failure pattern of the granite specimen in the UCS test were comparable to the experiment. By establishing 351 sets of numerical models and exploring the impacts of mineral composition on the mechanical properties of granite, it indicated that there was no negative correlation between quartz and feldspar for UCS, tensile strength(σ_(t)), and E. In contrast, mica had a significant negative correlation for UCS, σ_(t), and E. The presence of quartz increased the brittleness of granite, whereas the presence of mica and feldspar increased its ductility in UCS and direct tensile strength(DTS) tests. Varying contents of major mineral compositions in granite showed minor influence on the number of cracks in both UCS and DTS tests.
基金Part of the research leading to these results has received funding from the research project DESDEMONA–Detection of Steel Defects by Enhanced MONitoring and Automated procedure for self-inspection and maintenance (grant agreement number RFCS-2018_800687) supported by EU Call RFCS-2017sponsored by the NATO Science for Peace and Security Programme under grant id. G5924。
文摘Large structures,such as bridges,highways,etc.,need to be inspected to evaluate their actual physical and functional condition,to predict future conditions,and to help decision makers allocating maintenance and rehabilitation resources.The assessment of civil infrastructure condition is carried out through information obtained by inspection and/or monitoring operations.Traditional techniques in structural health monitoring(SHM)involve visual inspection related to inspection standards that can be time-consuming data collection,expensive,labor intensive,and dangerous.To address these limitations,machine vision-based inspection procedures have increasingly been investigated within the research community.In this context,this paper proposes and compares four different computer vision procedures to identify damage by image processing:Otsu method thresholding,Markov random fields segmentation,RGB color detection technique,and K-means clustering algorithm.The first method is based on segmentation by thresholding that returns a binary image from a grayscale image.The Markov random fields technique uses a probabilistic approach to assign labels to model the spatial dependencies in image pixels.The RGB technique uses color detection to evaluate the defect extensions.Finally,K-means algorithm is based on Euclidean distance for clustering of the images.The benefits and limitations of each technique are discussed,and the challenges of using the techniques are highlighted.To show the effectiveness of the described techniques in damage detection of civil infrastructures,a case study is presented.Results show that various types of corrosion and cracks can be detected by image processing techniques making the proposed techniques a suitable tool for the prediction of the damage evolution in civil infrastructures.
基金funded by the National Natural Science Foundation of China(41971226,41871357)the Major Research and Development and Achievement Transformation Projects of Qinghai,China(2022-QY-224)the Strategic Priority Research Program of the Chinese Academy of Sciences(XDA28110502,XDA19030303).
文摘A comprehensive understanding of spatial distribution and clustering patterns of gravels is of great significance for ecological restoration and monitoring.However,traditional methods for studying gravels are low-efficiency and have many errors.This study researched the spatial distribution and cluster characteristics of gravels based on digital image processing technology combined with a self-organizing map(SOM)and multivariate statistical methods in the grassland of northern Tibetan Plateau.Moreover,the correlation of morphological parameters of gravels between different cluster groups and the environmental factors affecting gravel distribution were analyzed.The results showed that the morphological characteristics of gravels in northern region(cluster C)and southern region(cluster B)of the Tibetan Plateau were similar,with a low gravel coverage,small gravel diameter,and elongated shape.These regions were mainly distributed in high mountainous areas with large topographic relief.The central region(cluster A)has high coverage of gravels with a larger diameter,mainly distributed in high-altitude plains with smaller undulation.Principal component analysis(PCA)results showed that the gravel distribution of cluster A may be mainly affected by vegetation,while those in clusters B and C could be mainly affected by topography,climate,and soil.The study confirmed that the combination of digital image processing technology and SOM could effectively analyzed the spatial distribution characteristics of gravels,providing a new mode for gravel research.
基金supported by the National Natural Science Foundation of China under grants U1836208,by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD)fundby the Collaborative Innovation Center of Atmospheric Environment and Equipment Technology (CICAEET)fund,China.
文摘Image processing networks have gained great success in many fields,and thus the issue of copyright protection for image processing networks hasbecome a focus of attention. Model watermarking techniques are widely usedin model copyright protection, but there are two challenges: (1) designinguniversal trigger sample watermarking for different network models is stilla challenge;(2) existing methods of copyright protection based on trigger swatermarking are difficult to resist forgery attacks. In this work, we propose adual model watermarking framework for copyright protection in image processingnetworks. The trigger sample watermark is embedded in the trainingprocess of the model, which can effectively verify the model copyright. And wedesign a common method for generating trigger sample watermarks based ongenerative adversarial networks, adaptively generating trigger sample watermarksaccording to different models. The spatial watermark is embedded intothe model output. When an attacker steals model copyright using a forgedtrigger sample watermark, which can be correctly extracted to distinguishbetween the piratical and the protected model. The experiments show that theproposed framework has good performance in different image segmentationnetworks of UNET, UNET++, and FCN (fully convolutional network), andeffectively resists forgery attacks.
基金This work was supported by Science and Technology Project of State Grid Corporation“Research on Key Technologies of Power Artificial Intelligence Open Platform”(5700-202155260A-0-0-00).
文摘The continuous growth in the scale of unmanned aerial vehicle (UAV) applications in transmission line inspection has resulted in a corresponding increase in the demand for UAV inspection image processing. Owing to its excellent performance in computer vision, deep learning has been applied to UAV inspection image processing tasks such as power line identification and insulator defect detection. Despite their excellent performance, electric power UAV inspection image processing models based on deep learning face several problems such as a small application scope, the need for constant retraining and optimization, and high R&D monetary and time costs due to the black-box and scene data-driven characteristics of deep learning. In this study, an automated deep learning system for electric power UAV inspection image analysis and processing is proposed as a solution to the aforementioned problems. This system design is based on the three critical design principles of generalizability, extensibility, and automation. Pre-trained models, fine-tuning (downstream task adaptation), and automated machine learning, which are closely related to these design principles, are reviewed. In addition, an automated deep learning system architecture for electric power UAV inspection image analysis and processing is presented. A prototype system was constructed and experiments were conducted on the two electric power UAV inspection image analysis and processing tasks of insulator self-detonation and bird nest recognition. The models constructed using the prototype system achieved 91.36% and 86.13% mAP for insulator self-detonation and bird nest recognition, respectively. This demonstrates that the system design concept is reasonable and the system architecture feasible .
基金supported by the National Key R&D Program of China(No.2017YFE0300106)Dalian Science and Technology Star Project(No.2020RQ136)+1 种基金the Central Guidance on Local Science and Technology Development Fund of Liaoning Province(No.2022010055-JH6/100)the Fundamental Research Funds for the Central Universities(No.DUT21RC(3)066)。
文摘Observing and analyzing surface images is critical for studying the interaction between plasma and irradiated plasma-facing materials.This paper presents a method for the automatic recognition of bubbles in transmission electron microscope(TEM)images of W nanofibers using image processing techniques and convolutional neural network(CNN).We employ a three-stage approach consisting of Otsu,local-threshold,and watershed segmentation to extract bubbles from noisy images.To address over-segmentation,we propose a combination of area factor and radial pixel intensity scanning.A CNN is used to recognize bubbles,outperforming traditional neural network models such as Alex Net and Google Net with an accuracy of 97.1%and recall of 98.6%.Our method is tested on both clear and blurred TEM images,and demonstrates humanlike performance in recognizing bubbles.This work contributes to the development of quantitative image analysis in the field of plasma-material interactions,offering a scalable solution for analyzing material defects.Overall,this study's findings establish the potential for automatic defect recognition and its applications in the assessment of plasma-material interactions.This method can be employed in a variety of specialties,including plasma physics and materials science.
基金funded by National Key R&D Program of China(No.2021YFB3401200)the National Natural Science Foundation of China(No.51875308)the Beijing Nature Sciences Fund-Haidian Originality Cooperation Project(L212002).
文摘Numerical simulation is the most powerful computational and analysis tool for a large variety of engineering and physical problems.For a complex problem relating to multi-field,multi-process and multi-scale,different computing tools have to be developed so as to solve particular fields at different scales and for different processes.Therefore,the integration of different types of software is inevitable.However,it is difficult to perform the transfer of the meshes and simulated results among software packages because of the lack of shared data formats or encrypted data formats.An image processing based method for three-dimensional model reconstruction for numerical simulation was proposed,which presents a solution to the integration problem by a series of slice or projection images obtained by the post-processing modules of the numerical simulation software.By means of mapping image pixels to meshes of either finite difference or finite element models,the geometry contour can be extracted to export the stereolithography model.The values of results,represented by color,can be deduced and assigned to the meshes.All the models with data can be directly or indirectly integrated into other software as a continued or new numerical simulation.The three-dimensional reconstruction method has been validated in numerical simulation of castings and case studies were provided in this study.
文摘In steel plants, estimation of the production system characteristic is highly critical to adjust the system parameters for best efficiency. Although the system parameters may be tuned very well, due to the machine and human factors involved in the production line some deficiencies may occur in product. It is important to detect such problems as early as possible. Surface defects and dimensional deviations are the most important quality problems. In this study, it is aimed to develop an approach to measure the dimensions of metal profiles by obtaining images of them. This will be of use in detecting the deviations in dimensions. A platform was introduced to simulate the real-time environment and images were taken from the metal profile using 4 laser light sources. The shape of the material is generated by combining the images taken from different cameras. Real dimensions were obtained by using image processing and mathematical conversion operations on the images. The results obtained with small deviations from the real values showed that this method can be applied in a real-time production line.
基金supported by the National Natural Science Foundation of China(Grant No.52090081)the State Key Laboratory of Hydro-science and Hydraulic Engineering(Grant No.2021-KY-04).
文摘Geological discontinuity(GD)plays a pivotal role in determining the catastrophic mechanical failure of jointed rock masses.Accurate and efficient acquisition of GD networks is essential for characterizing and understanding the progressive damage mechanisms of slopes based on monitoring image data.Inspired by recent advances in computer vision,deep learning(DL)models have been widely utilized for image-based fracture identification.The multi-scale characteristics,image resolution and annotation quality of images will cause a scale-space effect(SSE)that makes features indistinguishable from noise,directly affecting the accuracy.However,this effect has not received adequate attention.Herein,we try to address this gap by collecting slope images at various proportional scales and constructing multi-scale datasets using image processing techniques.Next,we quantify the intensity of feature signals using metrics such as peak signal-to-noise ratio(PSNR)and structural similarity(SSIM).Combining these metrics with the scale-space theory,we investigate the influence of the SSE on the differentiation of multi-scale features and the accuracy of recognition.It is found that augmenting the image's detail capacity does not always yield benefits for vision-based recognition models.In light of these observations,we propose a scale hybridization approach based on the diffusion mechanism of scale-space representation.The results show that scale hybridization strengthens the tolerance of multi-scale feature recognition under complex environmental noise interference and significantly enhances the recognition accuracy of GD.It also facilitates the objective understanding,description and analysis of the rock behavior and stability of slopes from the perspective of image data.
文摘Algal blooms,the spread of algae on the surface of water bodies,have adverse effects not only on aquatic ecosystems but also on human life.The adverse effects of harmful algal blooms(HABs)necessitate a convenient solution for detection and monitoring.Unmanned aerial vehicles(UAVs)have recently emerged as a tool for algal bloom detection,efficiently providing on-demand images at high spatiotemporal resolutions.This study developed an image processing method for algal bloom area estimation from the aerial images(obtained from the internet)captured using UAVs.As a remote sensing method of HAB detection,analysis,and monitoring,a combination of histogram and texture analyses was used to efficiently estimate the area of HABs.Statistical features like entropy(using the Kullback-Leibler method)were emphasized with the aid of a gray-level co-occurrence matrix.The results showed that the orthogonal images demonstrated fewer errors,and the morphological filter best detected algal blooms in real time,with a precision of 80%.This study provided efficient image processing approaches using on-board UAVs for HAB monitoring.
Funding: Supported by the National Natural Science Foundation of China (62003298, 62163036), the Major Project of Science and Technology of Yunnan Province (202202AD080005, 202202AH080009), and the Yunnan University Professional Degree Graduate Practice Innovation Fund Project (ZC-22222770).
Abstract: Oscillation detection has been a hot research topic in industry because of the high incidence of oscillating loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them address only part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation; however, manual visual inspection is labor-intensive and prone to missed detections. Convolutional neural networks (CNNs), inspired by animal visual systems, offer powerful feature extraction capabilities. In this work, typical CNN models are explored for visual oscillation detection. Specifically, we tested the MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models and found that such a visual framework is well suited for oscillation detection. The feasibility and validity of the framework are verified on extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean non-stationarity. In addition, it generalizes well and can handle features absent from the training data, such as multiple oscillations and outliers.
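A minimal sketch of such a visual detector is shown below: a 1-D loop signal is rasterised into an image and fed to an off-the-shelf CNN backbone. It uses MobileNet-V2 from torchvision (the paper's MobileNet-V1 and GhostNet are not bundled with torchvision), and the rasterisation scheme is an assumption rather than the authors' preprocessing.

```python
import numpy as np
import torch
from torchvision.models import mobilenet_v2

def signal_to_image(signal, size=224):
    """Rasterise a 1-D loop signal into a square binary image of its trace."""
    signal = np.asarray(signal, dtype=np.float64)
    x = np.linspace(0, size - 1, num=len(signal)).astype(int)
    s = (signal - signal.min()) / (np.ptp(signal) + 1e-12)
    y = ((1.0 - s) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.float32)
    img[y, x] = 1.0          # mark the trajectory pixels
    return img

def build_detector(num_classes=2):
    """Binary oscillation / non-oscillation classifier on a MobileNet-V2 backbone."""
    model = mobilenet_v2(weights=None)
    model.classifier[1] = torch.nn.Linear(model.last_channel, num_classes)
    return model

# Example use with a synthetic noisy sine wave:
# sig = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * np.random.randn(2000)
# img = torch.from_numpy(signal_to_image(sig)).repeat(3, 1, 1).unsqueeze(0)
# logits = build_detector()(img)
```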
Funding: Supported by the Scientific Research Fund of Hunan Provincial Education Department, China (Grant No. 21A0470), the Natural Science Foundation of Hunan Province, China (Grant No. 2023JJ50268), the National Natural Science Foundation of China (Grant Nos. 62172268 and 62302289), and the Shanghai Science and Technology Project, China (Grant Nos. 21JC1402800 and 23YF1416200).
Abstract: As a branch of quantum image processing, quantum image scaling has been widely studied. However, most existing quantum image scaling algorithms are based on nearest-neighbor or bilinear interpolation, and a quantum version of bicubic interpolation has not yet been studied. In this work, we present the first quantum image scaling scheme with bicubic interpolation based on the novel enhanced quantum representation (NEQR). Our scheme can both enlarge and reduce an image of size 2^n × 2^n by an integer factor. First, the image is represented by NEQR and the original image coordinates are obtained through multiple CNOT modules. Then, the 16 neighborhood pixels are obtained by quantum operation circuits, and the corresponding weights of these pixels are calculated by quantum arithmetic modules. Finally, a quantum matrix operation, instead of a classical convolution operation, is used to realize the weighted sum over these pixels. Through simulation experiments and complexity analysis, we demonstrate that our scheme achieves an exponential speedup over the classical bicubic interpolation algorithm and performs better than the quantum version of bilinear interpolation.
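For reference, the weights of the 16-pixel neighborhood that the quantum arithmetic modules would have to reproduce can be computed classically with the standard cubic convolution kernel; the sketch below (kernel parameter a = -0.5) shows only that classical counterpart, not the quantum circuit itself.

```python
import numpy as np

def cubic_kernel(x, a=-0.5):
    """Cubic convolution kernel used in classical bicubic interpolation."""
    x = abs(x)
    if x <= 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def bicubic_weights(dx, dy, a=-0.5):
    """4x4 weight matrix for fractional offsets (dx, dy) within the pixel grid."""
    wx = np.array([cubic_kernel(dx + 1, a), cubic_kernel(dx, a),
                   cubic_kernel(1 - dx, a), cubic_kernel(2 - dx, a)])
    wy = np.array([cubic_kernel(dy + 1, a), cubic_kernel(dy, a),
                   cubic_kernel(1 - dy, a), cubic_kernel(2 - dy, a)])
    return np.outer(wy, wx)   # the 16 weights; each row/column set sums to 1

# bicubic_weights(0.3, 0.7).sum()  # ≈ 1.0 (partition of unity)
```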
Abstract: Diagnosing diseases such as glaucoma, age-related macular degeneration, cardiovascular conditions, and diabetic retinopathy involves segmenting the retinal blood vessels. The task is particularly challenging for color fundus images because of non-uniform illumination, low contrast, and variations in vessel appearance, especially in the presence of different pathologies. Furthermore, the speed of the retinal vessel segmentation system is of utmost importance: with the surge of available big data, the speed of an algorithm carries almost the same weight as its accuracy. To address these challenges, we present a novel approach for retinal vessel segmentation based on efficient and robust multiscale line detection and mathematical morphology. The algorithm's performance is evaluated on two publicly available datasets, the Digital Retinal Images for Vessel Extraction (DRIVE) dataset and the Structure Analysis of Retina (STARE) dataset. The experimental results demonstrate the effectiveness of the method, with mean accuracies of 0.9467 on DRIVE and 0.9535 on STARE, and sensitivities of 0.6952 on DRIVE and 0.6809 on STARE. Notably, the algorithm is competitive with state-of-the-art methods. Importantly, it runs at an average speed of 3.73 s per image on DRIVE and 3.75 s per image on STARE. These results were achieved with Matlab scripts containing multiple loops, which suggests that the processing time can be reduced further by replacing the loops with vectorized operations, so the proposed algorithm can be deployed in real-time applications. In summary, the proposed system strikes a fine balance between fast computation and accuracy on par with the best available methods in the field.
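A simplified morphological variant of multiscale line detection is sketched below; the structuring-element lengths, the 15-degree angle step, and the Otsu thresholding are illustrative choices and do not reproduce the authors' exact algorithm or its reported accuracy.

```python
import numpy as np
from skimage.draw import line
from skimage.filters import threshold_otsu
from skimage.morphology import white_tophat

def line_footprint(length, angle_deg):
    """Binary line-shaped structuring element of a given length and orientation."""
    theta = np.deg2rad(angle_deg)
    r = length // 2
    fp = np.zeros((length, length), dtype=bool)
    rr, cc = line(r - int(r * np.sin(theta)), r - int(r * np.cos(theta)),
                  r + int(r * np.sin(theta)), r + int(r * np.cos(theta)))
    fp[rr, cc] = True
    return fp

def segment_vessels(green_channel, lengths=(9, 13, 17), angles=range(0, 180, 15)):
    """Multiscale line-response segmentation on the inverted green channel."""
    inverted = green_channel.max() - green_channel   # vessels become bright
    response = np.zeros(inverted.shape, dtype=float)
    for length in lengths:
        for angle in angles:
            # Top-hat keeps bright elongated structures matching the line element.
            response = np.maximum(response,
                                  white_tophat(inverted, line_footprint(length, angle)))
    return response > threshold_otsu(response)
```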
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62172268 and 62302289) and the Shanghai Science and Technology Project (Grant Nos. 21JC1402800 and 23YF1416200).
Abstract: As part of quantum image processing, quantum image filtering is a crucial technology in the development of quantum computing. Low-pass filtering can effectively achieve anti-aliasing effects on images. Currently, most quantum image filtering schemes are based on the classical domain and grayscale images, and there are relatively few studies on anti-aliasing in the quantum domain. This paper proposes an anti-aliasing filtering scheme based on quantum grayscale and color image scaling in the spatial domain, achieving anti-aliasing filtering of quantum images during the scaling process. First, we use the novel enhanced quantum representation (NEQR) and the improved quantum representation of color images (INCQI) to represent classical images. Since aliasing is more pronounced when images are scaled down, this paper focuses only on the anti-aliasing effect in the reduction case. Subsequently, we apply anti-aliasing filtering to the quantum representation of the original image and then use bilinear interpolation to scale down the image, achieving the anti-aliasing effect. The constructed pyramid model is then used to select an appropriate image for upscaling back to the original image size. Finally, the complexity of the circuit is analyzed. Compared with images that undergo only scaling and therefore suffer aliasing, applying the anti-aliasing filter produces smoother and clearer outputs. In addition, the anti-aliasing filter allows the desired level of image smoothness to be selected manually.
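The classical analogue of this blur-then-downsample idea can be written in a few lines with OpenCV; the rule tying the Gaussian sigma to the reduction factor is an assumption for illustration and is not the quantum filter proposed in the paper.

```python
import cv2

def antialiased_downscale(image, factor):
    """Blur-then-downsample: classical counterpart of the anti-aliasing filter step."""
    # Low-pass filtering before subsampling removes frequencies that would alias.
    sigma = max(factor / 2.0, 0.01)
    blurred = cv2.GaussianBlur(image, ksize=(0, 0), sigmaX=sigma)
    h, w = image.shape[:2]
    new_size = (max(1, int(w / factor)), max(1, int(h / factor)))  # (width, height)
    return cv2.resize(blurred, new_size, interpolation=cv2.INTER_LINEAR)

# Example use: compare with a naive bilinear reduction of the same image.
# img = cv2.imread("texture.png")
# smooth = antialiased_downscale(img, factor=4)
# naive = cv2.resize(img, smooth.shape[1::-1], interpolation=cv2.INTER_LINEAR)
```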