Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves due to various reasons. Therefore, preprocessing is required to remove these outliers and obtain high-quality light curves. Through statistical analysis, the reasons leading to outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a star nearby, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive, so we propose machine learning methods as a substitute. Convolutional neural networks (CNNs) and support vector machines (SVMs) are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine, and conduct comparative analyses of the results.
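As a rough illustration of the classification step, the sketch below trains a support vector machine on a few summary statistics of each light curve; the features, labels and data layout are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: an SVM classifier for light-curve contamination,
# assuming each curve is summarised by hand-picked statistics.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def summarize(curve: np.ndarray) -> np.ndarray:
    """Reduce one magnitude time series to a few robust statistics."""
    return np.array([
        curve.mean(),
        curve.std(),
        np.percentile(curve, 95) - np.percentile(curve, 5),  # brightness spread
        np.max(np.abs(np.diff(curve))),                      # largest jump
    ])

# curves: 1-D magnitude arrays; labels: 0 = clean, 1 = stellar, 2 = cloudy (placeholders)
rng = np.random.default_rng(0)
curves = [rng.normal(10.0, 0.1, 200) for _ in range(300)]
labels = rng.integers(0, 3, 300)

X = np.stack([summarize(c) for c in curves])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```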
In today's world, image processing techniques play a crucial role in the prognosis and diagnosis of various diseases due to the development of several precise and accurate methods for medical images. Automated analysis of medical images is essential for doctors, as manual investigation often leads to inter-observer variability. This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework. The proposed hybridized method combines Modified Inertia Weight Particle Swarm Optimization (MIWPSO) and Fuzzy C-Means clustering (FCM) algorithms. Traditional FCM does not incorporate spatial neighborhood features, making it highly sensitive to noise, which significantly affects segmentation output. Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise. The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation. Furthermore, segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule (DT-TAR) algorithm. Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42% on the dataset, highlighting its significant impact on improving diagnostic capabilities in medical imaging.
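The sketch below is a minimal, generic "spatial FCM" membership update in NumPy, assuming a grayscale image and a 3x3 neighbourhood average of memberships; the MIWPSO initialisation and the paper's exact spatial function are not reproduced here.

```python
# Minimal sketch of FCM with a spatial membership term for a grayscale image.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_fcm(img, n_clusters=3, m=2.0, p=1, q=1, iters=30, eps=1e-9):
    x = img.astype(float).ravel()
    h, w = img.shape
    centers = np.linspace(x.min(), x.max(), n_clusters)
    for _ in range(iters):
        # standard FCM memberships from distances to the cluster centres
        d = np.abs(x[None, :] - centers[:, None]) + eps            # (k, N)
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        # spatial function: average membership over each pixel's 3x3 neighbourhood
        hspat = np.stack([uniform_filter(u[k].reshape(h, w), size=3).ravel()
                          for k in range(n_clusters)])
        u = (u ** p) * (hspat ** q)
        u /= u.sum(axis=0, keepdims=True)                          # renormalise
        centers = (u ** m @ x) / (u ** m).sum(axis=1)              # centre update
    return u.argmax(axis=0).reshape(h, w), centers

labels, centers = spatial_fcm(np.random.default_rng(1).random((64, 64)))
```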
As deep learning techniques are applied with greater depth and sophistication in the food industry, food image processing has progressively emerged as a central focus of research interest. This work provides an overview of key practices in food image processing, detailing common processing tasks including classification, recognition, detection, segmentation, and image retrieval, outlining metrics for evaluating task performance, and thoroughly examining existing food image datasets along with specialized food-related datasets. In terms of methodology, this work offers insight into the evolution of food image processing, tracing its development from traditional methods that extract low- and intermediate-level features to advanced deep learning techniques for high-level feature extraction, along with synergistic fusions of these approaches. These methods are expected to play a significant role in practical application scenarios such as self-checkout systems, dietary health management, intelligent food service, disease etiology tracing, chronic disease management, and food safety monitoring. However, due to the complex content and various types of distortions in food images, further improvements in related methods are needed to meet the requirements of practical applications. We believe this study can help researchers better understand work in the field of food imaging and contribute to the advancement of research in this field.
The mechanical properties and failure mechanism of lightweight aggregate concrete (LWAC) are a hot topic in the engineering field, and the relationship between its microstructure and macroscopic mechanical properties is also a frontier research topic in the academic field. In this study, image processing technology is used to establish a mesostructural model of lightweight aggregate concrete. Through the extraction and processing of information from section images of actual LWAC specimens, a mesostructural model with real aggregate characteristics is established. Numerical simulations of the uniaxial tensile test, uniaxial compression test, and three-point bending test of LWAC are carried out using a new finite element method, the base force element method. First, image processing technology is used to produce beam specimens, uniaxial compression specimens, and uniaxial tensile specimens of LWAC, which better reproduce the aggregate shape and random distribution of real lightweight aggregate concrete. Second, the three-point bending test is numerically simulated. Third, the uniaxial compression specimen generated by image processing technology is numerically simulated. Fourth, the uniaxial tensile specimen generated by image processing technology is numerically simulated. The mechanical behavior and damage mode of the specimens during loading are analyzed, and the numerical results are compared with those of relevant experiments. The feasibility and correctness of the mesoscale model established in this study for analyzing the micromechanics of LWAC materials are verified. Image processing technology has broad application prospects in the field of concrete mesoscopic damage analysis.
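As a loose illustration of mapping a section image onto a structured mesh, the sketch below tags each element as aggregate or mortar by the mean intensity of the pixels it covers; the threshold, grid size and data are placeholders, and the base force element method itself is not modelled.

```python
# Hedged sketch: map a grey-level section image onto a structured grid of elements,
# assigning a material ID per element (1 = aggregate, 0 = mortar) from pixel intensity.
import numpy as np

def image_to_material_grid(section: np.ndarray, elems_x: int, elems_y: int,
                           aggregate_threshold: float = 0.5) -> np.ndarray:
    h, w = section.shape
    ys = np.linspace(0, h, elems_y + 1, dtype=int)
    xs = np.linspace(0, w, elems_x + 1, dtype=int)
    grid = np.zeros((elems_y, elems_x), dtype=int)
    for i in range(elems_y):
        for j in range(elems_x):
            patch = section[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            grid[i, j] = int(patch.mean() > aggregate_threshold)
    return grid

section = np.random.default_rng(2).random((256, 256))   # stand-in for a real section scan
material = image_to_material_grid(section, elems_x=64, elems_y=64)
print("aggregate fraction:", material.mean())
```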
The rail surface status image is affected by noise in the shooting environment and contains a large amount of interference information, which increases the difficulty of rail surface status identification. To solve this problem, a preprocessing method for rail surface status images is proposed. The preprocessing process mainly includes image graying, image denoising, image geometric correction, image extraction, and data amplification, and finally the rail surface image database is built. The experimental results show that this method can efficiently complete image processing, facilitate feature extraction of rail surface status images, and improve rail surface status recognition accuracy.
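A hedged sketch of such a preprocessing chain in OpenCV is given below; the corner coordinates, denoising strength and augmentation choices are placeholders rather than the paper's parameters.

```python
# Hedged sketch of the described preprocessing chain using OpenCV.
import cv2
import numpy as np

def preprocess_rail_image(bgr: np.ndarray) -> list[np.ndarray]:
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)              # image graying
    denoised = cv2.fastNlMeansDenoising(gray, None, 10)       # image denoising
    h, w = denoised.shape
    # geometric correction: warp four (hypothetical) corner points to a rectangle
    src = np.float32([[30, 0], [w - 30, 0], [0, h], [w, h]])
    dst = np.float32([[0, 0], [w, 0], [0, h], [w, h]])
    corrected = cv2.warpPerspective(denoised, cv2.getPerspectiveTransform(src, dst), (w, h))
    roi = corrected[:, w // 4: 3 * w // 4]                    # rail-surface extraction
    # data amplification: simple flips as stand-ins for the paper's augmentations
    return [roi, cv2.flip(roi, 0), cv2.flip(roi, 1)]

samples = preprocess_rail_image(np.zeros((480, 640, 3), dtype=np.uint8))
```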
Real-time capabilities and computational efficiency are provided by parallel image processing utilizing OpenMP. However, race conditions can affect the accuracy and reliability of the outcomes. This paper highlights the importance of addressing race conditions in parallel image processing, specifically focusing on color inverse filtering using OpenMP. We considered three solutions to race conditions, each with distinct characteristics: #pragma omp atomic protects individual memory operations for fine-grained control; #pragma omp critical protects entire code blocks for exclusive access; and #pragma omp parallel sections with a reduction clause safely aggregates values across threads. Our findings show that the produced images were unaffected by race conditions. However, it becomes evident that resolving the race conditions in the code makes it significantly faster, especially when it is executed on multiple cores.
In recent years, the widespread adoption of parallel computing, especially in multi-core processors and high-performance computing environments, has ushered in a new era of efficiency and speed. This trend has been particularly noteworthy in the field of image processing, which has witnessed significant advancements. This project explored parallel image processing, with a focus on the grayscale conversion of colorful images. Our approach integrated OpenMP into our framework to parallelize a critical image processing task: grayscale conversion. By using OpenMP, we enhanced the overall performance of the conversion process by distributing the workload across multiple threads. The primary objectives of our project were optimizing computation time and improving overall efficiency in the grayscale conversion of colorful images. Utilizing OpenMP for concurrent processing across multiple cores significantly reduced execution times through the effective distribution of tasks among these cores. The speedup values for various image sizes highlighted the efficacy of parallel processing, especially for large images. However, a detailed examination revealed a potential decline in parallelization efficiency with an increasing number of cores. This underscored the importance of a carefully optimized parallelization strategy, considering factors like load balancing and minimizing communication overhead. Despite these challenges, the overall scalability and efficiency achieved with parallel image processing underscored OpenMP's effectiveness in accelerating image manipulation tasks.
A comprehensive understanding of the spatial distribution and clustering patterns of gravels is of great significance for ecological restoration and monitoring. However, traditional methods for studying gravels are inefficient and error-prone. This study researched the spatial distribution and clustering characteristics of gravels in the grassland of the northern Tibetan Plateau based on digital image processing technology combined with a self-organizing map (SOM) and multivariate statistical methods. Moreover, the correlation of morphological parameters of gravels between different cluster groups and the environmental factors affecting gravel distribution were analyzed. The results showed that the morphological characteristics of gravels in the northern region (cluster C) and southern region (cluster B) of the Tibetan Plateau were similar, with low gravel coverage, small gravel diameter, and elongated shape. These regions were mainly distributed in high mountainous areas with large topographic relief. The central region (cluster A) has high gravel coverage with larger diameters, mainly distributed in high-altitude plains with smaller undulation. Principal component analysis (PCA) results showed that the gravel distribution of cluster A may be mainly affected by vegetation, while those in clusters B and C could be mainly affected by topography, climate, and soil. The study confirmed that the combination of digital image processing technology and SOM can effectively analyze the spatial distribution characteristics of gravels, providing a new approach for gravel research.
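The sketch below illustrates the statistical side of such an analysis: standardised morphological parameters are clustered and projected with PCA. The feature names are hypothetical placeholders, and k-means is used here as a simple stand-in for the self-organizing map.

```python
# Sketch under stated assumptions: rows are sampling plots, columns are gravel
# morphology metrics (placeholder data); k-means replaces the SOM for brevity.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
features = rng.random((120, 4))   # e.g. coverage, mean diameter, elongation, roundness

X = StandardScaler().fit_transform(features)
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

pca = PCA(n_components=2).fit(X)
scores = pca.transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("cluster sizes:", np.bincount(clusters))
```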
Observing and analyzing surface images is critical for studying the interaction between plasma and irradiated plasma-facing materials. This paper presents a method for the automatic recognition of bubbles in transmission electron microscope (TEM) images of W nanofibers using image processing techniques and a convolutional neural network (CNN). We employ a three-stage approach consisting of Otsu, local-threshold, and watershed segmentation to extract bubbles from noisy images. To address over-segmentation, we propose a combination of an area factor and radial pixel intensity scanning. A CNN is used to recognize bubbles, outperforming traditional neural network models such as AlexNet and GoogLeNet with an accuracy of 97.1% and a recall of 98.6%. Our method is tested on both clear and blurred TEM images, and demonstrates human-like performance in recognizing bubbles. This work contributes to the development of quantitative image analysis in the field of plasma-material interactions, offering a scalable solution for analyzing material defects. Overall, this study's findings establish the potential for automatic defect recognition and its applications in the assessment of plasma-material interactions. The method can be employed in a variety of specialties, including plasma physics and materials science.
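A minimal version of the three-stage segmentation (Otsu, local threshold, marker-based watershed) can be written with scikit-image as below; the CNN recognition stage and the over-segmentation fixes (area factor, radial intensity scanning) are omitted, and the input is synthetic.

```python
# Minimal sketch of the three-stage bubble segmentation on a grayscale TEM-like image.
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu, threshold_local
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

img = np.random.default_rng(4).random((128, 128))          # stand-in for a TEM image

# stage 1 + 2: combine a global Otsu mask with a local (adaptive) threshold
mask = (img > threshold_otsu(img)) & (img > threshold_local(img, block_size=35))

# stage 3: marker-based watershed on the distance transform to split touching bubbles
distance = ndi.distance_transform_edt(mask)
coords = peak_local_max(distance, min_distance=5, labels=mask.astype(int))
markers = np.zeros(distance.shape, dtype=bool)
markers[tuple(coords.T)] = True
markers, _ = ndi.label(markers)
segments = watershed(-distance, markers, mask=mask)
print("candidate bubbles:", segments.max())
```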
Numerical simulation is the most powerful computational and analysis tool for a large variety of engineering and physical problems. For a complex problem involving multiple fields, processes, and scales, different computing tools have to be developed to solve particular fields at different scales and for different processes, so the integration of different types of software is inevitable. However, it is difficult to transfer meshes and simulated results among software packages because of the lack of shared data formats or the use of encrypted data formats. An image processing based method for three-dimensional model reconstruction for numerical simulation is proposed, which solves the integration problem using a series of slice or projection images obtained from the post-processing modules of the numerical simulation software. By mapping image pixels to the meshes of either finite difference or finite element models, the geometry contour can be extracted to export a stereolithography model. The values of the results, represented by color, can be deduced and assigned to the meshes. All the models with data can be directly or indirectly integrated into other software for a continued or new numerical simulation. The three-dimensional reconstruction method has been validated in numerical simulations of castings, and case studies are provided in this study.
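The sketch below illustrates the core idea on synthetic data: stack slice images into a voxel volume, threshold it, and extract a surface mesh with marching cubes. Writing the STL file and mapping colour-coded field values back onto the mesh are left out, and the slice data are synthetic.

```python
# Hedged illustration of slice-based 3-D reconstruction with marching cubes.
import numpy as np
from skimage.measure import marching_cubes

slices = [np.linalg.norm(np.indices((64, 64)) - 32, axis=0) < 20 + 5 * np.sin(z / 5)
          for z in range(40)]                      # synthetic circular cross-sections
volume = np.stack(slices).astype(float)            # (z, y, x) voxel model from slices

# surface at the material boundary; verts/faces could then be written out as an STL model
verts, faces, normals, values = marching_cubes(volume, level=0.5)
print("mesh:", verts.shape[0], "vertices,", faces.shape[0], "triangles")
```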
The recent developments in Multimedia Internet of Things (MIoT) devices, empowered with Natural Language Processing (NLP) models, seem to be a promising future for smart devices. NLP plays an important role in industrial models such as speech understanding, emotion detection, home automation, and so on. If an image needs to be captioned, then the objects in that image, their actions and connections, and any salient feature that remains under-projected or missing from the image should be identified. The aim of the image captioning process is to generate a caption for an image. In the next step, the image should be provided with one of the most significant and detailed descriptions that is syntactically as well as semantically correct. In this scenario, a computer vision model is used to identify the objects and NLP approaches are followed to describe the image. The current study develops a Natural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System (NLPODL-IICS). The aim of the presented NLPODL-IICS model is to produce a proper description for an input image. To attain this, the proposed NLPODL-IICS follows two stages, namely encoding and decoding. Initially, at the encoding side, the proposed NLPODL-IICS model makes use of Hunger Games Search (HGS) with the Neural Search Architecture Network (NASNet) model, which represents the input data appropriately by inserting it into a predefined-length vector. During the decoding phase, the Chimp Optimization Algorithm (COA) with a deeper Long Short Term Memory (LSTM) approach is followed to concatenate the description sentences produced by the method. The application of the HGS and COA algorithms helps in accomplishing proper parameter tuning for the NASNet and LSTM models, respectively. The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets. A widespread comparative analysis confirmed the superior performance of the NLPODL-IICS model over other models.
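As a rough, generic sketch of the encoder-decoder pipeline, the PyTorch snippet below uses a small CNN in place of NASNet and plain teacher forcing in place of the HGS/COA tuning; vocabulary size, embedding size and all shapes are placeholders.

```python
# Compact encoder-decoder captioning sketch: CNN image encoder + LSTM decoder.
import torch
import torch.nn as nn

class CaptionModel(nn.Module):
    def __init__(self, vocab_size=1000, embed=256, hidden=512):
        super().__init__()
        self.encoder = nn.Sequential(                 # stand-in image encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, embed),
        )
        self.embed = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, images, captions):
        ctx = self.encoder(images).unsqueeze(1)        # (B, 1, embed) image context
        tokens = self.embed(captions)                  # (B, T, embed) word embeddings
        out, _ = self.lstm(torch.cat([ctx, tokens], dim=1))
        return self.head(out[:, 1:, :])                # next-token logits

model = CaptionModel()
logits = model(torch.randn(2, 3, 128, 128), torch.randint(0, 1000, (2, 12)))
print(logits.shape)   # (2, 12, 1000)
```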
Underwater images often have biased colours and reduced contrast because of the absorption and scattering effects when light propagates in water. Such degraded images cannot meet the needs of underwater operations. The main problem with classic underwater image restoration or enhancement methods is that they require long calculation times, and often the colour or contrast of the resulting images is still unsatisfactory. Instead of using a complicated physical model of underwater imaging degradation, we propose a new method that processes underwater images by imitating the colour constancy mechanism of human vision using double-opponency. Firstly, the original image is converted to the LMS space. Then the signals are linearly combined, and Gaussian convolutions are performed to imitate the function of receptive fields (RFs). Next, two RFs with different sizes work together to constitute the double-opponency response. Finally, the underwater light is estimated to correct the colours in the image. Further contrast stretching on the luminance is optional. Experiments show that the proposed method can obtain clarified underwater images of higher quality than before, while requiring significantly less computation time than other previously published typical methods.
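A hedged sketch of the pipeline is given below; the RGB-to-LMS matrix, the centre-surround weighting and the Gaussian sizes are illustrative choices, and the light-estimation step is simplified compared with the paper.

```python
# Hedged sketch: LMS conversion, small/large Gaussian "receptive fields", a
# centre-surround double-opponency response, and von Kries style channel gains.
import numpy as np
from scipy.ndimage import gaussian_filter

RGB2LMS = np.array([[0.3811, 0.5783, 0.0402],      # illustrative conversion matrix
                    [0.1967, 0.7244, 0.0782],
                    [0.0241, 0.1288, 0.8444]])

def correct(rgb: np.ndarray, sigma_small=2.0, sigma_large=10.0, k=0.8):
    lms = rgb @ RGB2LMS.T
    centre = gaussian_filter(lms, sigma=(sigma_small, sigma_small, 0))    # small RF
    surround = gaussian_filter(lms, sigma=(sigma_large, sigma_large, 0))  # large RF
    double_opp = centre - k * surround          # two RFs form the double-opponency response
    light = surround.reshape(-1, 3).mean(axis=0) + 1e-6   # crude underwater-light estimate
    corrected_lms = lms * (light.mean() / light)          # channel gains
    corrected_rgb = np.clip(corrected_lms @ np.linalg.inv(RGB2LMS).T, 0.0, 1.0)
    return corrected_rgb, double_opp

out, response = correct(np.random.default_rng(6).random((64, 64, 3)))
```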
Person image generation aims to generate images that maintain the original human appearance in different target poses. Recent works have revealed that the critical element in achieving this task is the alignment of the appearance domain and the pose domain. Previous alignment methods, such as appearance flow warping, correspondence learning, and cross attention, often encounter challenges when it comes to producing fine texture details. These approaches suffer from limitations in accurately estimating appearance flows due to the lack of a global receptive field. Alternatively, they can only perform cross-domain alignment on high-level feature maps with small spatial dimensions, since the computational complexity increases quadratically with larger feature sizes. In this article, the significance of multi-scale alignment, in both low-level and high-level domains, for ensuring reliable cross-domain alignment of appearance and pose is demonstrated. To this end, a novel and effective method named Multi-scale Cross-domain Alignment (MCA) is proposed. Firstly, MCA adopts a global context aggregation transformer to model multi-scale interaction between pose and appearance inputs, which employs pair-wise window-based cross attention. Furthermore, leveraging the integrated global source information for each target position, MCA applies a flexible flow prediction head and point correlation to effectively conduct warping and fusing for final transformed person image generation. The proposed MCA achieves superior performance over other methods on two popular datasets, which verifies the effectiveness of our approach.
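The snippet below sketches the basic cross-domain attention idea, with target-pose features acting as queries over appearance features; for brevity it uses global rather than window-based attention and a single scale, and all dimensions are placeholders.

```python
# Minimal cross-attention sketch for pose-appearance alignment.
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
pose_feats = torch.randn(1, 16 * 16, 64)         # flattened target-pose feature map
appearance_feats = torch.randn(1, 16 * 16, 64)   # flattened source-appearance feature map
aligned, weights = attn(query=pose_feats, key=appearance_feats, value=appearance_feats)
print(aligned.shape)   # (1, 256, 64): appearance features aligned to the target pose
```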
Due to hardware limitations, existing hyperspectral (HS) cameras often suffer from low spatial/temporal resolution. Recently, it has become prevalent to super-resolve a low resolution (LR) HS image into a high resolution (HR) HS image with the guidance of an HR RGB (or multispectral) image. Previous approaches for this guided super-resolution task often model the intrinsic characteristics of the desired HR HS image using hand-crafted priors. Recently, researchers have paid more attention to deep learning methods with direct supervised or unsupervised learning, which exploit a deep prior only from the training dataset or the testing data. In this article, an efficient convolutional neural network-based method is presented to progressively super-resolve an HS image with RGB image guidance. Specifically, a progressive HS image super-resolution network is proposed, which progressively super-resolves the LR HS image with pixel-shuffled HR RGB image guidance. Then, the super-resolution network is progressively trained with supervised pre-training and unsupervised adaptation, where supervised pre-training learns the general prior on training data and unsupervised adaptation generalises the general prior to a specific prior for variant testing scenes. The proposed method can effectively exploit priors from the training dataset and the testing HS and RGB images with a spectral-spatial constraint. It has a good generalisation capability, especially for blind HS image super-resolution. Comprehensive experimental results show that the proposed deep progressive learning method outperforms the existing state-of-the-art methods for HS image super-resolution in both non-blind and blind cases.
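One progressive step of such guided upsampling might look like the sketch below, where the HR RGB guidance is pixel-(un)shuffled to the current grid and fused with the HS features; channel counts, the number of stages and the training schedule are placeholders.

```python
# Hedged sketch of a single guided upsampling step with pixel shuffling.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedUpStep(nn.Module):
    def __init__(self, hs_bands=31, rgb_ch=3, scale=2, feats=64):
        super().__init__()
        self.scale = scale
        self.unshuffle = nn.PixelUnshuffle(scale)       # pack the HR RGB onto the LR grid
        self.body = nn.Sequential(
            nn.Conv2d(hs_bands + rgb_ch * scale * scale, feats, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feats, hs_bands * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)           # expand the HS features by `scale`

    def forward(self, lr_hs, hr_rgb):
        guide = self.unshuffle(hr_rgb)                  # (B, 3*s*s, h, w)
        x = torch.cat([lr_hs, guide], dim=1)
        up = F.interpolate(lr_hs, scale_factor=self.scale, mode="bilinear", align_corners=False)
        return self.shuffle(self.body(x)) + up          # residual refinement

step = GuidedUpStep()
out = step(torch.randn(1, 31, 16, 16), torch.randn(1, 3, 32, 32))
print(out.shape)   # (1, 31, 32, 32)
```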
Oscillation detection has been a hot research topic in industry due to the high incidence of oscillating loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation; however, manual visual inspection is labor-intensive and prone to missed detections. Convolutional neural networks (CNNs), inspired by animal visual systems, have emerged with powerful feature extraction capabilities. In this work, an exploration of typical CNN models for visual oscillation detection is performed. Specifically, we tested MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well-suited for oscillation detection. The feasibility and validity of this framework are verified using extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean non-stationarity. In addition, this framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
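The sketch below conveys the visual framing: each time series is rasterised into a small image and fed to an off-the-shelf lightweight CNN (MobileNetV2 here, as a readily available stand-in for the models tested); image size and rasterisation are illustrative.

```python
# Hedged sketch: render control-loop signals as trend images and classify them.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

def series_to_image(x: np.ndarray, size: int = 64) -> torch.Tensor:
    """Scale a 1-D signal to [0, 1] and rasterise it into a size x size image."""
    x = (x - x.min()) / (np.ptp(x) + 1e-9)
    cols = np.linspace(0, size - 1, num=x.size).astype(int)
    rows = ((1 - x) * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.float32)
    img[rows, cols] = 1.0
    return torch.from_numpy(img).expand(3, size, size)   # replicate to 3 channels

model = mobilenet_v2(weights=None)
model.classifier[1] = nn.Linear(model.last_channel, 2)   # oscillating vs. not

t = np.linspace(0, 20, 500)
batch = torch.stack([series_to_image(np.sin(2 * np.pi * t) + 0.1 * np.random.randn(500)),
                     series_to_image(np.cumsum(0.1 * np.random.randn(500)))])
print(model(batch).shape)   # (2, 2) logits
```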
Hyperspectral images typically have high spectral resolution but low spatial resolution, which impacts the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. Thus, we propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address the above problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. The results obtained by applying the proposed network model to the CAVE test set show that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525, and 0.9438, respectively. In addition, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model in hyperspectral super-resolution reconstruction.
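A short sketch of the parallel 2D/3D branch idea is given below: a 2D convolution treats bands as channels, while a 3D convolution keeps a separate spectral axis. Kernel sizes and channel counts are placeholders, not the paper's.

```python
# Parallel 2-D (channel-grouping spatial) and 3-D (spectral-spatial) branches.
import torch
import torch.nn as nn

x = torch.randn(1, 31, 64, 64)                            # (B, bands, H, W) HS patch
branch_2d = nn.Conv2d(31, 64, kernel_size=3, padding=1)   # bands treated as channels
branch_3d = nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1))  # explicit spectral axis

f2d = branch_2d(x)                 # (1, 64, 64, 64)
f3d = branch_3d(x.unsqueeze(1))    # (1, 8, 31, 64, 64)
print(f2d.shape, f3d.shape)
```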
Geological discontinuity (GD) plays a pivotal role in determining the catastrophic mechanical failure of jointed rock masses. Accurate and efficient acquisition of GD networks is essential for characterizing and understanding the progressive damage mechanisms of slopes based on monitoring image data. Inspired by recent advances in computer vision, deep learning (DL) models have been widely utilized for image-based fracture identification. The multi-scale characteristics, image resolution and annotation quality of images will cause a scale-space effect (SSE) that makes features indistinguishable from noise, directly affecting the accuracy. However, this effect has not received adequate attention. Herein, we try to address this gap by collecting slope images at various proportional scales and constructing multi-scale datasets using image processing techniques. Next, we quantify the intensity of feature signals using metrics such as peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Combining these metrics with the scale-space theory, we investigate the influence of the SSE on the differentiation of multi-scale features and the accuracy of recognition. It is found that augmenting the image's detail capacity does not always yield benefits for vision-based recognition models. In light of these observations, we propose a scale hybridization approach based on the diffusion mechanism of scale-space representation. The results show that scale hybridization strengthens the tolerance of multi-scale feature recognition under complex environmental noise interference and significantly enhances the recognition accuracy of GD. It also facilitates the objective understanding, description and analysis of the rock behavior and stability of slopes from the perspective of image data.
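The metric side of such a multi-scale analysis can be sketched as below: a reference image is downscaled and restored at several factors, and PSNR/SSIM quantify how much feature signal survives. The scale factors and interpolation choices are illustrative.

```python
# PSNR/SSIM of downscaled-then-restored versions of a reference image.
import numpy as np
from skimage.transform import rescale, resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(7)
reference = rng.random((256, 256))                   # stand-in for a slope image

for scale in (0.5, 0.25, 0.125):
    small = rescale(reference, scale, anti_aliasing=True)
    restored = resize(small, reference.shape)
    psnr = peak_signal_noise_ratio(reference, restored, data_range=1.0)
    ssim = structural_similarity(reference, restored, data_range=1.0)
    print(f"scale {scale}: PSNR={psnr:.2f} dB, SSIM={ssim:.3f}")
```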
Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed, showing a variety of scales with irregular edges. The complex characteristics of the lesions make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. Specifically, first, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions, where the MSDA module can fuse features at different scales and adjust the attention field of each element dynamically to generate discriminative multi-scale features. Second, the cross deformable attention-based skip connection (CDASC) module is designed to model the irregular edge characteristic of SAH lesions, where the CDASC module can utilise the spatial details from encoder features to refine the spatial information of decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
As a part of quantum image processing, quantum image filtering is a crucial technology in the development of quantum computing. Low-pass filtering can effectively achieve anti-aliasing effects on images. Currently, most quantum image filtering methods are based on classical domains and grayscale images, and there are relatively few studies on anti-aliasing in the quantum domain. This paper proposes a scheme for anti-aliasing filtering based on quantum grayscale and color image scaling in the spatial domain, which achieves an anti-aliasing filtering effect on quantum images during the scaling process. First, we use the novel enhanced quantum representation (NEQR) and the improved quantum representation of color images (INCQI) to represent classical images. Since aliasing phenomena are more pronounced when images are scaled down, this paper focuses only on the anti-aliasing effects in the case of reduction. Subsequently, we perform anti-aliasing filtering on the quantum representation of the original image and then use bilinear interpolation to scale down the image, achieving the anti-aliasing effect. The constructed pyramid model is then used to select an appropriate image for upscaling to the original image size. Finally, the complexity of the circuit is analyzed. Compared with images that experience aliasing solely due to scaling, applying anti-aliasing filtering results in smoother and clearer outputs. Additionally, the anti-aliasing filtering allows for manual intervention to select the desired level of image smoothness.
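For intuition, the classical-domain analogue of the scheme is sketched below: low-pass filter the image before bilinear downscaling so that high frequencies do not fold back as aliasing. The quantum NEQR/INCQI circuits are not modelled, and the filter strength is an illustrative choice.

```python
# Classical analogue: pre-filter before bilinear down-scaling to suppress aliasing.
import cv2
import numpy as np

x = np.arange(512)
stripes = (np.sin(x / 2.0)[None, :] * np.ones((512, 1)) * 127 + 128).astype(np.uint8)

scale = 0.25
naive = cv2.resize(stripes, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
blurred = cv2.GaussianBlur(stripes, (0, 0), 1.0 / scale)     # low-pass pre-filter
filtered = cv2.resize(blurred, None, fx=scale, fy=scale, interpolation=cv2.INTER_LINEAR)
print(naive.shape, filtered.shape)
```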
Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines the image processing filters by predicting parameters such as exposure and hue, optimizing image quality. We adopt a novel image encoder that enhances parameter prediction accuracy by enabling Edip to handle features at different scales. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine the segmentation outputs. The entire network is trained end-to-end, with segmentation results guiding the optimization of parameter prediction, promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for addressing the challenges of nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and Nightcity datasets.
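A toy version of the "parameter predictor plus differentiable filters" idea is sketched below; the real Edip/Mdif/DPM modules and the depth guided filter are considerably richer than this, and all ranges and sizes are placeholders.

```python
# Hedged sketch: a tiny CNN predicts per-image exposure and gamma, applied as
# simple differentiable filters so the whole pipeline can be trained end-to-end.
import torch
import torch.nn as nn

class TinyIAEN(nn.Module):
    def __init__(self):
        super().__init__()
        self.predictor = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
        )

    def forward(self, x):                                          # x in [0, 1]
        params = self.predictor(x)
        exposure = torch.tanh(params[:, 0]).view(-1, 1, 1, 1)      # in (-1, 1)
        gamma = (0.5 + torch.sigmoid(params[:, 1])).view(-1, 1, 1, 1)  # in (0.5, 1.5)
        enhanced = torch.clamp(x * (2.0 ** exposure), 1e-4, 1.0)   # exposure filter
        return enhanced ** gamma                                   # gamma filter

model = TinyIAEN()
out = model(torch.rand(2, 3, 128, 128))
print(out.shape, float(out.min()), float(out.max()))
```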
基金funded by the National Natural Science Foundation of China(NSFC,Nos.12373086 and 12303082)CAS“Light of West China”Program+2 种基金Yunnan Revitalization Talent Support Program in Yunnan ProvinceNational Key R&D Program of ChinaGravitational Wave Detection Project No.2022YFC2203800。
文摘Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal.Analyzing light curves to determine attitude is the most commonly used method.In photometric observations,outliers may exist in the obtained light curves due to various reasons.Therefore,preprocessing is required to remove these outliers to obtain high quality light curves.Through statistical analysis,the reasons leading to outliers can be categorized into two main types:first,the brightness of the object significantly increases due to the passage of a star nearby,referred to as“stellar contamination,”and second,the brightness markedly decreases due to cloudy cover,referred to as“cloudy contamination.”The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive.However,we propose the utilization of machine learning methods as a substitute.Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination,achieving F1 scores of 1.00 and 0.98 on a test set,respectively.We also explore other machine learning methods such as ResNet-18 and Light Gradient Boosting Machine,then conduct comparative analyses of the results.
基金Scientific Research Deanship has funded this project at the University of Ha’il–Saudi Arabia Ha’il–Saudi Arabia through project number RG-21104.
文摘In today’s world,image processing techniques play a crucial role in the prognosis and diagnosis of various diseases due to the development of several precise and accurate methods for medical images.Automated analysis of medical images is essential for doctors,as manual investigation often leads to inter-observer variability.This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework.The proposed hybridized method combines Modified Inertia Weight Particle Swarm Optimization(MIWPSO)and Fuzzy C-Means clustering(FCM)algorithms.Traditional FCM does not incorporate spatial neighborhood features,making it highly sensitive to noise,which significantly affects segmentation output.Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise.The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation.Furthermore,segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule(DT-TAR)Algorithm.Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42%on the dataset,highlighting its significant impact on improving diagnostic capabilities in medical imaging.
文摘As deep learning techniques are increasingly applied with greater depth and sophistication in the food industry,the realm of food image processing has progressively emerged as a central focus of research interest.This work provides an overview of key practices in food image processing techniques,detailing common processing tasks including classification,recognition,detection,segmentation,and image retrieval,as well as outlining metrics for evaluating task performance and thoroughly examining existing food image datasets,along with specialized food-related datasets.In terms of methodology,this work offers insight into the evolution of food image processing,tracing its development from traditional methods extracting low and intermediate-level features to advanced deep learning techniques for high-level feature extraction,along with some synergistic fusion of these approaches.It is believed that these methods will play a significant role in practical application scenarios such as self-checkout systems,dietary health management,intelligent food service,disease etiology tracing,chronic disease management,and food safety monitoring.However,due to the complex content and various types of distortions in food images,further improvements in related methods are needed to meet the requirements of practical applications in the future.It is believed that this study can help researchers to further understand the research in the field of food imaging and provide some contribution to the advancement of research in this field.
基金supported by the National Science Foundation of China(10972015,11172015)the Beijing Natural Science Foundation(8162008).
文摘The mechanical properties and failure mechanism of lightweight aggregate concrete(LWAC)is a hot topic in the engineering field,and the relationship between its microstructure and macroscopic mechanical properties is also a frontier research topic in the academic field.In this study,the image processing technology is used to establish a micro-structure model of lightweight aggregate concrete.Through the information extraction and processing of the section image of actual light aggregate concrete specimens,the mesostructural model of light aggregate concrete with real aggregate characteristics is established.The numerical simulation of uniaxial tensile test,uniaxial compression test and three-point bending test of lightweight aggregate concrete are carried out using a new finite element method-the base force element method respectively.Firstly,the image processing technology is used to produce beam specimens,uniaxial compression specimens and uniaxial tensile specimens of light aggregate concrete,which can better simulate the aggregate shape and random distribution of real light aggregate concrete.Secondly,the three-point bending test is numerically simulated.Thirdly,the uniaxial compression specimen generated by image processing technology is numerically simulated.Fourth,the uniaxial tensile specimen generated by image processing technology is numerically simulated.The mechanical behavior and damage mode of the specimen during loading were analyzed.The results of numerical simulation are compared and analyzed with those of relevant experiments.The feasibility and correctness of the micromodel established in this study for analyzing the micromechanics of lightweight aggregate concrete materials are verified.Image processing technology has a broad application prospect in the field of concrete mesoscopic damage analysis.
文摘The rail surface status image is affected by the noise in the shooting environment and contains a large amount of interference information, which increases the difficulty of rail surface status identification. In order to solve this problem, a preprocessing method for the rail surface state image is proposed. The preprocessing process mainly includes image graying, image denoising, image geometric correction, image extraction, data amplification, and finally building the rail surface image database. The experimental results show that this method can efficiently complete image processing, facilitate feature extraction of rail surface status images, and improve rail surface status recognition accuracy.
文摘Real-time capabilities and computational efficiency are provided by parallel image processing utilizing OpenMP. However, race conditions can affect the accuracy and reliability of the outcomes. This paper highlights the importance of addressing race conditions in parallel image processing, specifically focusing on color inverse filtering using OpenMP. We considered three solutions to solve race conditions, each with distinct characteristics: #pragma omp atomic: Protects individual memory operations for fine-grained control. #pragma omp critical: Protects entire code blocks for exclusive access. #pragma omp parallel sections reduction: Employs a reduction clause for safe aggregation of values across threads. Our findings show that the produced images were unaffected by race condition. However, it becomes evident that solving the race conditions in the code makes it significantly faster, especially when it is executed on multiple cores.
文摘In recent years, the widespread adoption of parallel computing, especially in multi-core processors and high-performance computing environments, ushered in a new era of efficiency and speed. This trend was particularly noteworthy in the field of image processing, which witnessed significant advancements. This parallel computing project explored the field of parallel image processing, with a focus on the grayscale conversion of colorful images. Our approach involved integrating OpenMP into our framework for parallelization to execute a critical image processing task: grayscale conversion. By using OpenMP, we strategically enhanced the overall performance of the conversion process by distributing the workload across multiple threads. The primary objectives of our project revolved around optimizing computation time and improving overall efficiency, particularly in the task of grayscale conversion of colorful images. Utilizing OpenMP for concurrent processing across multiple cores significantly reduced execution times through the effective distribution of tasks among these cores. The speedup values for various image sizes highlighted the efficacy of parallel processing, especially for large images. However, a detailed examination revealed a potential decline in parallelization efficiency with an increasing number of cores. This underscored the importance of a carefully optimized parallelization strategy, considering factors like load balancing and minimizing communication overhead. Despite challenges, the overall scalability and efficiency achieved with parallel image processing underscored OpenMP’s effectiveness in accelerating image manipulation tasks.
基金funded by the National Natural Science Foundation of China(41971226,41871357)the Major Research and Development and Achievement Transformation Projects of Qinghai,China(2022-QY-224)the Strategic Priority Research Program of the Chinese Academy of Sciences(XDA28110502,XDA19030303).
文摘A comprehensive understanding of spatial distribution and clustering patterns of gravels is of great significance for ecological restoration and monitoring.However,traditional methods for studying gravels are low-efficiency and have many errors.This study researched the spatial distribution and cluster characteristics of gravels based on digital image processing technology combined with a self-organizing map(SOM)and multivariate statistical methods in the grassland of northern Tibetan Plateau.Moreover,the correlation of morphological parameters of gravels between different cluster groups and the environmental factors affecting gravel distribution were analyzed.The results showed that the morphological characteristics of gravels in northern region(cluster C)and southern region(cluster B)of the Tibetan Plateau were similar,with a low gravel coverage,small gravel diameter,and elongated shape.These regions were mainly distributed in high mountainous areas with large topographic relief.The central region(cluster A)has high coverage of gravels with a larger diameter,mainly distributed in high-altitude plains with smaller undulation.Principal component analysis(PCA)results showed that the gravel distribution of cluster A may be mainly affected by vegetation,while those in clusters B and C could be mainly affected by topography,climate,and soil.The study confirmed that the combination of digital image processing technology and SOM could effectively analyzed the spatial distribution characteristics of gravels,providing a new mode for gravel research.
基金supported by the National Key R&D Program of China(No.2017YFE0300106)Dalian Science and Technology Star Project(No.2020RQ136)+1 种基金the Central Guidance on Local Science and Technology Development Fund of Liaoning Province(No.2022010055-JH6/100)the Fundamental Research Funds for the Central Universities(No.DUT21RC(3)066)。
文摘Observing and analyzing surface images is critical for studying the interaction between plasma and irradiated plasma-facing materials.This paper presents a method for the automatic recognition of bubbles in transmission electron microscope(TEM)images of W nanofibers using image processing techniques and convolutional neural network(CNN).We employ a three-stage approach consisting of Otsu,local-threshold,and watershed segmentation to extract bubbles from noisy images.To address over-segmentation,we propose a combination of area factor and radial pixel intensity scanning.A CNN is used to recognize bubbles,outperforming traditional neural network models such as Alex Net and Google Net with an accuracy of 97.1%and recall of 98.6%.Our method is tested on both clear and blurred TEM images,and demonstrates humanlike performance in recognizing bubbles.This work contributes to the development of quantitative image analysis in the field of plasma-material interactions,offering a scalable solution for analyzing material defects.Overall,this study's findings establish the potential for automatic defect recognition and its applications in the assessment of plasma-material interactions.This method can be employed in a variety of specialties,including plasma physics and materials science.
基金funded by National Key R&D Program of China(No.2021YFB3401200)the National Natural Science Foundation of China(No.51875308)the Beijing Nature Sciences Fund-Haidian Originality Cooperation Project(L212002).
文摘Numerical simulation is the most powerful computational and analysis tool for a large variety of engineering and physical problems.For a complex problem relating to multi-field,multi-process and multi-scale,different computing tools have to be developed so as to solve particular fields at different scales and for different processes.Therefore,the integration of different types of software is inevitable.However,it is difficult to perform the transfer of the meshes and simulated results among software packages because of the lack of shared data formats or encrypted data formats.An image processing based method for three-dimensional model reconstruction for numerical simulation was proposed,which presents a solution to the integration problem by a series of slice or projection images obtained by the post-processing modules of the numerical simulation software.By means of mapping image pixels to meshes of either finite difference or finite element models,the geometry contour can be extracted to export the stereolithography model.The values of results,represented by color,can be deduced and assigned to the meshes.All the models with data can be directly or indirectly integrated into other software as a continued or new numerical simulation.The three-dimensional reconstruction method has been validated in numerical simulation of castings and case studies were provided in this study.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2022R161)PrincessNourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the|Deanship of Scientific Research at Umm Al-Qura University|for supporting this work by Grant Code:(22UQU4310373DSR33).
文摘The recent developments in Multimedia Internet of Things(MIoT)devices,empowered with Natural Language Processing(NLP)model,seem to be a promising future of smart devices.It plays an important role in industrial models such as speech understanding,emotion detection,home automation,and so on.If an image needs to be captioned,then the objects in that image,its actions and connections,and any silent feature that remains under-projected or missing from the images should be identified.The aim of the image captioning process is to generate a caption for image.In next step,the image should be provided with one of the most significant and detailed descriptions that is syntactically as well as semantically correct.In this scenario,computer vision model is used to identify the objects and NLP approaches are followed to describe the image.The current study develops aNatural Language Processing with Optimal Deep Learning Enabled Intelligent Image Captioning System(NLPODL-IICS).The aim of the presented NLPODL-IICS model is to produce a proper description for input image.To attain this,the proposed NLPODL-IICS follows two stages such as encoding and decoding processes.Initially,at the encoding side,the proposed NLPODL-IICS model makes use of Hunger Games Search(HGS)with Neural Search Architecture Network(NASNet)model.This model represents the input data appropriately by inserting it into a predefined length vector.Besides,during decoding phase,Chimp Optimization Algorithm(COA)with deeper Long Short Term Memory(LSTM)approach is followed to concatenate the description sentences 4436 CMC,2023,vol.74,no.2 produced by the method.The application of HGS and COA algorithms helps in accomplishing proper parameter tuning for NASNet and LSTM models respectively.The proposed NLPODL-IICS model was experimentally validated with the help of two benchmark datasets.Awidespread comparative analysis confirmed the superior performance of NLPODL-IICS model over other models.
文摘Underwater images are often with biased colours and reduced contrast because of the absorption and scattering effects when light propagates in water.Such images with degradation cannot meet the needs of underwater operations.The main problem in classic underwater image restoration or enhancement methods is that they consume long calcu-lation time,and often,the colour or contrast of the result images is still unsatisfied.Instead of using the complicated physical model of underwater imaging degradation,we propose a new method to deal with underwater images by imitating the colour constancy mechanism of human vision using double-opponency.Firstly,the original image is converted to the LMS space.Then the signals are linearly combined,and Gaussian convolutions are per-formed to imitate the function of receptive fields(RFs).Next,two RFs with different sizes work together to constitute the double-opponency response.Finally,the underwater light is estimated to correct the colours in the image.Further contrast stretching on the luminance is optional.Experiments show that the proposed method can obtain clarified underwater images with higher quality than before,and it spends significantly less time cost compared to other previously published typical methods.
基金National Natural Science Foundation of China,Grant/Award Number:62274142Hangzhou Major Technology Innovation Project of Artificial Intelligence,Grant/Award Number:2022AIZD0060。
文摘Person image generation aims to generate images that maintain the original human appearance in different target poses.Recent works have revealed that the critical element in achieving this task is the alignment of appearance domain and pose domain.Previous alignment methods,such as appearance flow warping,correspondence learning and cross attention,often encounter challenges when it comes to producing fine texture details.These approaches suffer from limitations in accurately estimating appearance flows due to the lack of global receptive field.Alternatively,they can only perform cross-domain alignment on high-level feature maps with small spatial dimensions since the computational complexity increases quadratically with larger feature sizes.In this article,the significance of multi-scale alignment,in both low-level and high-level domains,for ensuring reliable cross-domain alignment of appearance and pose is demonstrated.To this end,a novel and effective method,named Multi-scale Crossdomain Alignment(MCA)is proposed.Firstly,MCA adopts global context aggregation transformer to model multi-scale interaction between pose and appearance inputs,which employs pair-wise window-based cross attention.Furthermore,leveraging the integrated global source information for each target position,MCA applies flexible flow prediction head and point correlation to effectively conduct warping and fusing for final transformed person image generation.Our proposed MCA achieves superior performance on two popular datasets than other methods,which verifies the effectiveness of our approach.
基金National Key R&D Program of China,Grant/Award Number:2022YFC3300704National Natural Science Foundation of China,Grant/Award Numbers:62171038,62088101,62006023。
文摘Due to hardware limitations,existing hyperspectral(HS)camera often suffer from low spatial/temporal resolution.Recently,it has been prevalent to super-resolve a low reso-lution(LR)HS image into a high resolution(HR)HS image with a HR RGB(or mul-tispectral)image guidance.Previous approaches for this guided super-resolution task often model the intrinsic characteristic of the desired HR HS image using hand-crafted priors.Recently,researchers pay more attention to deep learning methods with direct supervised or unsupervised learning,which exploit deep prior only from training dataset or testing data.In this article,an efficient convolutional neural network-based method is presented to progressively super-resolve HS image with RGB image guidance.Specif-ically,a progressive HS image super-resolution network is proposed,which progressively super-resolve the LR HS image with pixel shuffled HR RGB image guidance.Then,the super-resolution network is progressively trained with supervised pre-training and un-supervised adaption,where supervised pre-training learns the general prior on training data and unsupervised adaptation generalises the general prior to specific prior for variant testing scenes.The proposed method can effectively exploit prior from training dataset and testing HS and RGB images with spectral-spatial constraint.It has a good general-isation capability,especially for blind HS image super-resolution.Comprehensive experimental results show that the proposed deep progressive learning method out-performs the existing state-of-the-art methods for HS image super-resolution in non-blind and blind cases.
基金the National Natural Science Foundation of China(62003298,62163036)the Major Project of Science and Technology of Yunnan Province(202202AD080005,202202AH080009)the Yunnan University Professional Degree Graduate Practice Innovation Fund Project(ZC-22222770)。
文摘Oscillation detection has been a hot research topic in industries due to the high incidence of oscillation loops and their negative impact on plant profitability.Although numerous automatic detection techniques have been proposed,most of them can only address part of the practical difficulties.An oscillation is heuristically defined as a visually apparent periodic variation.However,manual visual inspection is labor-intensive and prone to missed detection.Convolutional neural networks(CNNs),inspired by animal visual systems,have been raised with powerful feature extraction capabilities.In this work,an exploration of the typical CNN models for visual oscillation detection is performed.Specifically,we tested MobileNet-V1,ShuffleNet-V2,Efficient Net-B0,and GhostNet models,and found that such a visual framework is well-suited for oscillation detection.The feasibility and validity of this framework are verified utilizing extensive numerical and industrial cases.Compared with state-of-theart oscillation detectors,the suggested framework is more straightforward and more robust to noise and mean-nonstationarity.In addition,this framework generalizes well and is capable of handling features that are not present in the training data,such as multiple oscillations and outliers.
Funding: the National Natural Science Foundation of China (Nos. 61471263, 61872267 and U21B2024); the Natural Science Foundation of Tianjin, China (No. 16JCZDJC31100); Tianjin University Innovation Foundation (No. 2021XZC0024).
Abstract: Hyperspectral images typically have high spectral resolution but low spatial resolution, which affects the reliability and accuracy of subsequent applications, for example, remote sensing classification and mineral identification. In traditional methods based on deep convolutional neural networks, indiscriminately extracting and fusing spectral and spatial features makes it challenging to utilize the differentiated information across adjacent spectral channels. We therefore propose a multi-branch interleaved iterative upsampling hyperspectral image super-resolution reconstruction network (MIIUSR) to address the above problems. We reinforce spatial feature extraction by integrating detailed features from different receptive fields across adjacent channels. Furthermore, we propose an interleaved iterative upsampling process during the reconstruction stage, which progressively fuses incremental information among adjacent frequency bands. Additionally, we add two parallel three-dimensional (3D) feature extraction branches to the backbone network to extract spectral and spatial features of varying granularity. We further enhance the backbone network's reconstruction results by leveraging the difference between two-dimensional (2D) channel-grouping spatial features and 3D multi-granularity features. The results obtained by applying the proposed network to the CAVE test set show that, at a scaling factor of ×4, the peak signal-to-noise ratio, spectral angle mapping, and structural similarity are 37.310 dB, 3.525 and 0.9438, respectively. In addition, extensive experiments conducted on the Harvard and Foster datasets demonstrate the superior potential of the proposed model for hyperspectral super-resolution reconstruction.
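A minimal PyTorch sketch of the two-branch idea, 2-D channel-grouping spatial features computed alongside 3-D spectral-spatial features and then fused, is given below; the layer widths, the grouping scheme and the residual fusion are illustrative assumptions, not the exact MIIUSR configuration.

```python
import torch
import torch.nn as nn

class TwoBranchBlock(nn.Module):
    def __init__(self, bands=31):
        super().__init__()
        # 2-D branch: grouped convolution so each band keeps its own spatial filter
        self.spatial2d = nn.Conv2d(bands, bands, kernel_size=3, padding=1, groups=bands)
        # 3-D branch: joint spectral-spatial filtering over (band, height, width)
        self.spectral3d = nn.Conv3d(1, 1, kernel_size=3, padding=1)
        self.fuse = nn.Conv2d(2 * bands, bands, kernel_size=1)

    def forward(self, x):                                       # x: (B, bands, H, W)
        f2d = self.spatial2d(x)
        f3d = self.spectral3d(x.unsqueeze(1)).squeeze(1)        # (B, 1, bands, H, W) -> (B, bands, H, W)
        return x + self.fuse(torch.cat([f2d, f3d], dim=1))      # residual fusion of both branches

hs = torch.randn(2, 31, 32, 32)                                 # a 31-band hyperspectral patch
print(TwoBranchBlock()(hs).shape)                               # torch.Size([2, 31, 32, 32])
```

The grouped 2-D path preserves per-band spatial detail, while the 3-D path mixes adjacent bands; comparing and fusing the two is one simple way to exploit the differentiated information across neighbouring spectral channels.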
Funding: supported by the National Natural Science Foundation of China (Grant No. 52090081) and the State Key Laboratory of Hydro-science and Hydraulic Engineering (Grant No. 2021-KY-04).
Abstract: Geological discontinuity (GD) plays a pivotal role in determining the catastrophic mechanical failure of jointed rock masses. Accurate and efficient acquisition of GD networks is essential for characterizing and understanding the progressive damage mechanisms of slopes based on monitoring image data. Inspired by recent advances in computer vision, deep learning (DL) models have been widely utilized for image-based fracture identification. The multi-scale characteristics, image resolution and annotation quality of the images cause a scale-space effect (SSE) that makes features indistinguishable from noise, directly affecting recognition accuracy. However, this effect has not received adequate attention. Herein, we address this gap by collecting slope images at various proportional scales and constructing multi-scale datasets using image processing techniques. Next, we quantify the intensity of feature signals using metrics such as the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). Combining these metrics with scale-space theory, we investigate the influence of the SSE on the differentiation of multi-scale features and on recognition accuracy. It is found that augmenting an image's level of detail does not always benefit vision-based recognition models. In light of these observations, we propose a scale hybridization approach based on the diffusion mechanism of the scale-space representation. The results show that scale hybridization strengthens the tolerance of multi-scale feature recognition under complex environmental noise interference and significantly enhances the recognition accuracy of GD. It also facilitates the objective understanding, description and analysis of rock behavior and slope stability from the perspective of image data.
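The scale-space effect can be illustrated numerically: blurring an image at increasing Gaussian scales and measuring PSNR against the original shows how quickly fine, fracture-like detail decays into the noise floor. The sketch below uses a synthetic image and PSNR only; the study itself works with real slope photographs and also reports SSIM.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
image = rng.random((256, 256))                        # stand-in for a slope image scaled to [0, 1]
for sigma in (0.5, 1.0, 2.0, 4.0):                    # increasing scale-space (blur) parameter
    blurred = gaussian_filter(image, sigma=sigma)     # Gaussian diffusion = scale-space representation
    print(f"sigma={sigma}: PSNR={psnr(image, blurred):.2f} dB")
```

A scale hybridization scheme would mix such representations from several sigmas so that recognisable structure survives even when any single scale is dominated by noise.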
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 62377026, 62201222; Knowledge Innovation Program of Wuhan-Shuguang Project, Grant/Award Number: 2023010201020382; National Key Research and Development Programme of China, Grant/Award Number: 2022YFD1700204; Fundamental Research Funds for the Central Universities, Grant/Award Numbers: CCNU22QN014, CCNU22JC007, CCNU22XJ034.
Abstract: Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed, showing a variety of scales with irregular edges. These complex characteristics make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. Specifically, first, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions; the MSDA module fuses features at different scales and dynamically adjusts the attention field of each element to generate discriminative multi-scale features. Second, a cross deformable attention-based skip connection (CDASC) module is designed to model the irregular edge characteristic of SAH lesions; the CDASC module utilises spatial details from the encoder features to refine the spatial information of the decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
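For intuition, a single-scale, single-head variant of deformable attention can be sketched as follows: each position predicts a few sampling offsets and attention weights, samples the value map at those offsets by bilinear interpolation, and aggregates the samples. The real MSDA module operates across multiple scales inside the UDT; the module name, point count and offset normalisation below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeformableAttention2d(nn.Module):
    def __init__(self, dim, n_points=4):
        super().__init__()
        self.n_points = n_points
        self.offsets = nn.Conv2d(dim, 2 * n_points, 1)     # (dx, dy) per sampling point
        self.weights = nn.Conv2d(dim, n_points, 1)
        self.value = nn.Conv2d(dim, dim, 1)
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):                                   # x: (B, C, H, W)
        B, C, H, W = x.shape
        v = self.value(x)
        off = self.offsets(x).view(B, self.n_points, 2, H, W)
        w = self.weights(x).softmax(dim=1)                  # (B, K, H, W) attention weights
        # reference grid in normalised [-1, 1] coordinates, (x, y) order for grid_sample
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
        ref = torch.stack((xs, ys), dim=-1).to(x)           # (H, W, 2)
        out = 0.0
        for k in range(self.n_points):
            grid = ref + off[:, k].permute(0, 2, 3, 1) / max(H, W)   # small learned offsets
            sampled = F.grid_sample(v, grid, align_corners=True)     # bilinear sampling, (B, C, H, W)
            out = out + w[:, k:k + 1] * sampled
        return self.proj(out)

feat = torch.randn(2, 64, 32, 32)
print(DeformableAttention2d(64)(feat).shape)                # torch.Size([2, 64, 32, 32])
```

Because each element attends only to a handful of learned locations, the attention field can follow diffuse, irregularly shaped lesions without the quadratic cost of dense attention.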
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 62172268 and 62302289) and the Shanghai Science and Technology Project (Grant Nos. 21JC1402800 and 23YF1416200).
Abstract: As a part of quantum image processing, quantum image filtering is a crucial technology in the development of quantum computing. Low-pass filtering can effectively achieve anti-aliasing effects on images. Currently, most quantum image filtering methods are based on classical domains and grayscale images, and there are relatively few studies on anti-aliasing in the quantum domain. This paper proposes a scheme for anti-aliasing filtering based on quantum grayscale and color image scaling in the spatial domain, which achieves an anti-aliasing filtering effect on quantum images during the scaling process. First, we use the novel enhanced quantum representation (NEQR) and the improved quantum representation of color images (INCQI) to represent classical images. Since aliasing phenomena are more pronounced when images are scaled down, this paper focuses only on the anti-aliasing effect in the reduction case. Subsequently, we perform anti-aliasing filtering on the quantum representation of the original image and then use bilinear interpolation to scale down the image, achieving the anti-aliasing effect. The constructed pyramid model is then used to select an appropriate image for upscaling to the original image size. Finally, the complexity of the circuit is analyzed. Compared with images that exhibit aliasing solely due to scaling, applying anti-aliasing filtering results in smoother and clearer outputs. Additionally, the anti-aliasing filtering allows for manual intervention to select the desired level of image smoothness.
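Although the scheme is realised with quantum circuits over NEQR/INCQI states, its image-processing logic has a direct classical counterpart, sketched below with NumPy/SciPy: low-pass filter first, then reduce by bilinear interpolation. The mean filter, scaling factor and test image are illustrative assumptions and do not reproduce the quantum circuit construction.

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def antialiased_downscale(img, factor=0.25, ksize=3):
    """Smooth with a ksize x ksize mean filter, then bilinearly downscale by `factor`."""
    smoothed = uniform_filter(img.astype(np.float64), size=ksize)   # low-pass (anti-aliasing) step
    return zoom(smoothed, factor, order=1)                          # order=1 -> bilinear interpolation

gray = (np.indices((256, 256)).sum(axis=0) % 2) * 255.0             # checkerboard: worst case for aliasing
naive = zoom(gray, 0.25, order=1)                                   # direct reduction keeps aliasing artefacts
filtered = antialiased_downscale(gray, 0.25)                        # pre-filtering suppresses them
print(naive.shape, filtered.shape)                                  # (64, 64) (64, 64)
```

Choosing the filter size before reduction is the classical analogue of the manual intervention the scheme offers for selecting the desired level of smoothness.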
Funding: This work is supported in part by the National Natural Science Foundation of China (Grant Number 61971078), which provided domain expertise and computational power that greatly assisted the activity. This work was also financially supported by the Chongqing Municipal Education Commission Grants for Major Science and Technology Project (Grant Number gzlcx20243175).
Abstract: Semantic segmentation of driving-scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors such as poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines the image processing filters to predict parameters such as exposure and hue, optimizing image quality. We adopt a novel image encoder that enables Edip to handle features at different scales, enhancing parameter prediction accuracy. DPM strengthens overlooked image details, extending the functionality of the IAEN module. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine the segmentation outputs. The entire network is trained end-to-end, with the segmentation results guiding the optimization of parameter prediction, thereby promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for addressing the challenges of nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and NightCity datasets.
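The adaptive-enhancement idea can be sketched as a tiny parameter predictor whose outputs drive differentiable exposure and gamma filters ahead of the segmentation network, so that segmentation gradients flow back into the parameter prediction. The parameter set, network sizes and value ranges below are illustrative assumptions, not the exact Edip/Mdif configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyParamPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)                       # -> raw (exposure, gamma) values

    def forward(self, img):                                # img: (B, 3, H, W) in [0, 1]
        small = F.interpolate(img, size=(64, 64), mode="bilinear", align_corners=False)
        p = self.head(self.features(small).flatten(1))
        exposure = 1.0 + torch.tanh(p[:, :1])              # gain constrained to (0, 2)
        gamma = torch.exp(0.5 * torch.tanh(p[:, 1:2]))     # gamma kept close to 1
        return exposure, gamma

def apply_filters(img, exposure, gamma):
    # differentiable per-image exposure and gamma filters
    out = img * exposure.view(-1, 1, 1, 1)
    return out.clamp(1e-6, 1.0) ** gamma.view(-1, 1, 1, 1)

night = torch.rand(2, 3, 128, 256) * 0.2                   # dark, low-contrast input
exposure, gamma = TinyParamPredictor()(night)
enhanced = apply_filters(night, exposure, gamma)            # fed to the segmentation network
print(enhanced.shape)                                       # torch.Size([2, 3, 128, 256])
```

Because both the predictor and the filters are differentiable, the segmentation loss alone can teach the predictor which exposure and gamma settings make nighttime objects easier to segment, which is the end-to-end self-learning behaviour described above.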