In recent times, an image enhancement approach that learns a global transformation function using deep neural networks has gained attention. However, many existing methods based on this approach have a limitation: their transformation functions are too simple to imitate the complex colour transformations between low-quality images and manually retouched high-quality images. To address this limitation, a simple yet effective approach for image enhancement is proposed. The proposed algorithm is based on a channel-wise intensity transformation; however, this transformation is applied to a learnt embedding space instead of a specific colour space, and the enhanced features are then returned to colours. To this end, the authors define the continuous intensity transformation (CIT) to describe the mapping between input and output intensities on the embedding space. Then, an enhancement network is developed, which produces multi-scale feature maps from input images, derives the set of transformation functions, and performs the CIT to obtain enhanced images. Extensive experiments on the MIT-Adobe 5K dataset demonstrate that the authors' approach improves the performance of conventional intensity transforms on colour-space metrics. Specifically, the authors achieved a 3.8% improvement in peak signal-to-noise ratio, a 1.8% improvement in structural similarity index measure, and a 27.5% improvement in learned perceptual image patch similarity. Also, the authors' algorithm outperforms state-of-the-art alternatives on three image enhancement datasets: MIT-Adobe 5K, Low-Light, and Google HDR+.
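As a rough illustration of what a channel-wise intensity transformation looks like in isolation, the sketch below applies a piecewise-linear mapping independently to each colour channel with NumPy. It is only a toy stand-in: the CIT described above operates on a learnt embedding space with network-predicted transformation functions, and the control points here are arbitrary assumptions.

```python
import numpy as np

def channelwise_intensity_transform(image, control_points):
    """Apply a piecewise-linear intensity mapping independently to each channel.

    image          : float array in [0, 1], shape (H, W, C)
    control_points : list of (x_knots, y_knots) pairs, one per channel,
                     each defining a monotone input-to-output intensity mapping
    """
    out = np.empty_like(image)
    for c in range(image.shape[-1]):
        x_knots, y_knots = control_points[c]
        out[..., c] = np.interp(image[..., c], x_knots, y_knots)
    return out

# Hypothetical usage: brighten red, keep green, darken blue slightly.
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
knots = [
    (np.linspace(0, 1, 5), np.array([0.0, 0.35, 0.6, 0.85, 1.0])),  # R
    (np.linspace(0, 1, 5), np.linspace(0, 1, 5)),                   # G (identity)
    (np.linspace(0, 1, 5), np.array([0.0, 0.2, 0.45, 0.7, 1.0])),   # B
]
enhanced = channelwise_intensity_transform(img, knots)
```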
Due to the selective absorption of light and the presence of a large amount of floating media in sea water, underwater images often suffer from color casts and detail blur. It is therefore necessary to perform color correction and detail restoration. However, the existing enhancement algorithms cannot achieve the desired results. To solve the above problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from an illumination stream, a color stream and a structure stream by contrast-limited histogram equalization, gamma correction and white balance, respectively. Next, these three streams and the original raw stream are sent to residual blocks to extract features, which are subsequently fused; this strengthens the feature representation of underwater images. In the meantime, a composite loss function with three terms is used to ensure the quality of the enhanced image in terms of color balance, structure preservation and image smoothness, so that the enhanced image is more in line with human visual perception. Finally, the effectiveness of the proposed method is verified by comparison experiments with many state-of-the-art underwater image enhancement algorithms. The experimental results show that the proposed method provides superior results in terms of MSE, PSNR, SSIM, UIQM and UCIQE, and that the enhanced images are more similar to their ground-truth images.
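The three preprocessing streams named above are standard operations; a minimal OpenCV sketch of one plausible implementation is given below. The gray-world variant of white balance and all parameter values are assumptions, since the abstract does not specify them, and the residual blocks and fusion network are not reproduced here.

```python
import cv2
import numpy as np

def clahe_stream(bgr, clip=2.0, grid=(8, 8)):
    """Contrast-limited adaptive histogram equalization on the L channel."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

def gamma_stream(bgr, gamma=0.7):
    """Gamma correction via a lookup table."""
    lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)], dtype=np.uint8)
    return cv2.LUT(bgr, lut)

def white_balance_stream(bgr):
    """Gray-world white balance: scale each channel towards the global mean."""
    img = bgr.astype(np.float32)
    means = img.reshape(-1, 3).mean(axis=0)
    img *= means.mean() / means
    return np.clip(img, 0, 255).astype(np.uint8)

raw = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)  # stand-in underwater frame
streams = [raw, clahe_stream(raw), gamma_stream(raw), white_balance_stream(raw)]
```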
Handheld ultrasound devices are known for their portability and affordability, making them widely utilized in underdeveloped areas and community healthcare for rapid diagnosis and early screening. However, the image quality of handheld ultrasound devices is not always satisfactory due to the limited equipment size, which hinders accurate diagnoses by doctors. At the same time, paired ultrasound images are difficult to obtain in the clinic because the imaging process is complicated. Therefore, we propose a modified cycle generative adversarial network (cycleGAN) for ultrasound image enhancement from multiple organs via unpaired pre-training. We introduce an ultrasound image pre-training method that does not require paired images, alleviating the requirement for large-scale paired datasets. We also propose an enhanced block with different structures in the pre-training and fine-tuning phases, which helps achieve the goals of the different training phases. To improve the robustness of the model, we add Gaussian noise to the training images as data augmentation. Our approach effectively improves the image quality of handheld ultrasound devices, obtaining the best quantitative evaluation results with a small number of parameters and lower training cost.
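The Gaussian-noise augmentation mentioned above is straightforward; a minimal PyTorch-style sketch is shown below, with the noise standard deviation chosen arbitrarily rather than taken from the paper.

```python
import torch

def add_gaussian_noise(images, std=0.02):
    """Additive Gaussian noise augmentation for training batches in [0, 1]."""
    noise = torch.randn_like(images) * std
    return (images + noise).clamp(0.0, 1.0)

batch = torch.rand(8, 1, 256, 256)      # hypothetical grayscale ultrasound batch
augmented = add_gaussian_noise(batch)
```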
Underwater image enhancement aims to restore a clean appearance and thus improve the quality of underwater degraded images. Current methods feed the whole image directly into the model for enhancement. However, they ignore the fact that the R, G and B channels of underwater degraded images present varied degrees of degradation due to the selective absorption of light. To address this issue, we propose an unsupervised multi-expert learning model that considers the enhancement of each color channel. Specifically, an unsupervised architecture based on a generative adversarial network is employed to alleviate the need for paired underwater images. Based on this, we design a generator, including a multi-expert encoder, a feature fusion module and a feature fusion-guided decoder, to generate the clear underwater image. Accordingly, a multi-expert discriminator is proposed to verify the authenticity of the R, G and B channels, respectively. In addition, a content perceptual loss and an edge loss are introduced into the loss function to further improve the content and details of the enhanced images. Extensive experiments on public datasets demonstrate that our method achieves more pleasing results in visual quality, and the various metrics (PSNR, SSIM, UIQM and UCIQE) evaluated on our enhanced images show clear improvements.
Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but the collection of suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, resulting in more convenience and better generalization. Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment methods, which leave remnants of other degradations, uneven brightness and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE. It can handle multiple degradations, including illumination attenuation, noise pollution, and color shift, all in a self-supervised manner. Illumination attenuation is estimated based on physical principles and local neighborhood information. The removal of noise and the correction of color shift are realized solely with noisy images and images with color shifts. As a result, the comprehensive and fully self-supervised approach achieves better adaptability and generalization: it is applicable to various low-light conditions and can reproduce the original colors of scenes under natural light. Extensive experiments conducted on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.
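The abstract does not detail the illumination estimator; a common physically motivated choice in Retinex-style methods is the per-pixel maximum over colour channels smoothed over a local neighbourhood. The sketch below illustrates that generic idea under these assumptions, not the exact SLIE estimator.

```python
import cv2
import numpy as np

def estimate_illumination(rgb, ksize=15):
    """Rough illumination map: max over channels, smoothed over a local window.

    rgb : float image in [0, 1], shape (H, W, 3)
    """
    bright = rgb.max(axis=2)                          # max-RGB prior
    illum = cv2.blur(bright, (ksize, ksize))          # local-neighborhood smoothing
    return np.clip(illum, 1e-3, 1.0)                  # avoid division by zero

img = np.random.rand(128, 128, 3).astype(np.float32) * 0.3  # stand-in low-light image
L = estimate_illumination(img)
reflectance = np.clip(img / L[..., None], 0.0, 1.0)          # Retinex-style split
```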
In this study, an underwater image enhancement method based on a multi-scale adversarial network was proposed to solve the problems of detail blur and color distortion in underwater images. Firstly, the local features of each layer were merged into the global features by the proposed residual dense block, which ensured that the generated images retain more details. Secondly, a multi-scale structure was adopted to extract multi-scale semantic features of the original images. Finally, the features obtained from the dual channels were fused by an adaptive fusion module to further optimize the features. The discriminant network adopted the structure of the Markov discriminator. In addition, by constructing mean square error, structural similarity and perceived color loss functions, the generated image is kept consistent with the reference image in structure, color and content. The experimental results showed that the proposed algorithm deblurs underwater images well and effectively alleviates the problem of underwater color bias. In both subjective and objective evaluation indexes, the experimental results of the proposed algorithm are better than those of the comparison algorithms.
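A hedged PyTorch sketch of a three-term composite loss of the kind described above is given below. The weights are arbitrary, a simple gradient-difference term stands in for a full SSIM loss, and the colour term matches per-image mean colour; none of these are the paper's exact definitions.

```python
import torch
import torch.nn.functional as F

def composite_loss(pred, target, w_mse=1.0, w_struct=0.5, w_color=0.1):
    """Illustrative three-term loss: pixel fidelity + structure + color consistency.

    pred, target : tensors of shape (N, 3, H, W) in [0, 1]
    """
    # 1) pixel-wise fidelity
    mse = F.mse_loss(pred, target)

    # 2) simple structural term: match horizontal/vertical image gradients
    #    (a crude stand-in for an SSIM-based loss)
    dx_p, dy_p = pred[..., :, 1:] - pred[..., :, :-1], pred[..., 1:, :] - pred[..., :-1, :]
    dx_t, dy_t = target[..., :, 1:] - target[..., :, :-1], target[..., 1:, :] - target[..., :-1, :]
    struct = F.l1_loss(dx_p, dx_t) + F.l1_loss(dy_p, dy_t)

    # 3) color term: match per-image mean color
    color = F.l1_loss(pred.mean(dim=(2, 3)), target.mean(dim=(2, 3)))

    return w_mse * mse + w_struct * struct + w_color * color

loss = composite_loss(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```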
Finger vein extraction and recognition hold significance in various applications due to the unique and reliable nature of finger vein patterns. While finger vein recognition has recently gained popularity, there are still challenges in extracting and processing finger vein patterns related to image quality, positioning and alignment, skin conditions, security concerns and the processing techniques applied. In this paper, a method for robust segmentation of line patterns in strongly blurred images is presented and evaluated on vessel network extraction from infrared images of human fingers. The method is a four-step process involving local normalization of brightness, image enhancement, segmentation and cleaning. A novel image enhancement method is used to re-establish the line patterns from the brightness sum of the independent closed-form solutions of the adopted optimization criterion, derived in small windows. With this formulation, the computational resources required are reduced significantly compared to the solution derived when the whole image is processed. In the enhanced image, where the concave structures have been sufficiently emphasized, accurate detection of line patterns is obtained by local entropy thresholding. Typical segmentation errors appearing in the binary image are removed using morphological dilation with a line structuring element and morphological filtering with a majority filter to eliminate isolated blobs. As the experimental results on both real and artificial images show, the proposed method performs accurate detection of the vessel network in infrared images of human fingers and can readily be applied in many image enhancement and segmentation applications.
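The final segmentation-and-cleaning stage described above can be approximated with scikit-image as sketched below. The entropy window size, the use of a small disk for dilation instead of a line structuring element, and small-object removal in place of a majority filter are simplifying assumptions for illustration.

```python
import numpy as np
from skimage.filters.rank import entropy
from skimage.filters import threshold_otsu
from skimage.morphology import disk, binary_dilation, remove_small_objects

def segment_line_patterns(enhanced, entropy_radius=5, min_blob=64):
    """Binarize an enhanced vein image via local entropy, then clean the mask.

    enhanced : uint8 grayscale image where line (vessel) structures are emphasized
    """
    ent = entropy(enhanced, disk(entropy_radius))          # local entropy map
    mask = ent > threshold_otsu(ent)                       # threshold the entropy map
    mask = binary_dilation(mask, disk(1))                  # close small gaps along lines
    mask = remove_small_objects(mask, min_size=min_blob)   # drop isolated blobs
    return mask

img = (np.random.rand(128, 128) * 255).astype(np.uint8)   # stand-in enhanced image
vein_mask = segment_line_patterns(img)
```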
Recovering high-quality inscription images from noisy inscription images with unknown and complex degradation is a challenging research issue. Different from natural images, character images pay more attention to stroke information. However, existing models mainly consider pixel-level information while ignoring structural information of the character, such as its edge and glyph, resulting in reconstructed images with mottled local structure and character damage. To solve these problems, we propose a novel generative adversarial network (GAN) framework based on an edge-guided generator and a discriminator constructed on a dual-domain U-Net framework, i.e., EDU-GAN. Unlike existing frameworks, the generator introduces an edge-extraction module and guides it into the denoising process through an attention mechanism, which maintains the edge detail of the restored inscription image. Moreover, a dual-domain U-Net-based discriminator is proposed to learn the global and local discrepancy between the denoised and label images in both the image and morphological domains, which is helpful for blind denoising tasks. The proposed dual-domain discriminator and generator for adversarial training can reduce local artifacts and keep the denoised character structure intact. Due to the lack of real inscription images, we built a real-inscription dataset to provide an effective benchmark for studying inscription image denoising. The experimental results show the superiority of our method on both the synthetic and real-inscription datasets.
Digital watermarking technology is well suited to copyright protection and content authentication, yet watermarking algorithms that remain robust after printing and scanning have received relatively little research attention. Aiming at the problem that existing anti-print-scanning text image watermarking algorithms cannot balance the invisibility and robustness of the watermark, an anti-print-scanning watermarking algorithm suitable for text images is proposed. This algorithm first performs a series of image enhancement preprocessing operations on the printed-and-scanned image to eliminate the interference of incorrect bit information with watermark embedding, and then uses a combination of the Discrete Wavelet Transform (DWT) and Singular Value Decomposition (SVD) to embed the watermark. Experiments show that the average Normalized Correlation (NC) of the watermark extracted by this algorithm under attacks such as Joint Photographic Experts Group (JPEG) compression, JPEG2000 compression, and print scanning is above 0.93. In particular, the average NC of the watermark extracted after print-scanning attacks is greater than 0.964, and the average Bit Error Ratio (BER) is 5.15%. This indicates that the algorithm is strongly resistant to various attacks, including print-scanning attacks, while better preserving the invisibility of the watermark.
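The DWT-SVD embedding step is a classic construction; one common variant, sketched below with PyWavelets and NumPy, perturbs the singular values of the LL sub-band by a scaled watermark. The wavelet, sub-band choice and strength factor are assumptions rather than the paper's settings, and the preprocessing and extraction stages are omitted.

```python
import numpy as np
import pywt

def embed_dwt_svd(cover, watermark, alpha=0.05, wavelet="haar"):
    """Embed a watermark into the singular values of the cover's LL sub-band.

    cover     : 2-D float array (grayscale host image)
    watermark : 2-D float array with the same shape as the LL sub-band
    """
    LL, (LH, HL, HH) = pywt.dwt2(cover, wavelet)
    U, S, Vt = np.linalg.svd(LL, full_matrices=False)
    Uw, Sw, Vwt = np.linalg.svd(watermark, full_matrices=False)
    S_marked = S + alpha * Sw                      # perturb singular values
    LL_marked = (U * S_marked) @ Vt                # rebuild the LL sub-band
    return pywt.idwt2((LL_marked, (LH, HL, HH)), wavelet)

cover = np.random.rand(256, 256)                   # stand-in text image
wm = np.random.rand(128, 128)                      # stand-in watermark (LL-sized)
marked = embed_dwt_svd(cover, wm)
```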
A method to remove stripes from remote sensing images is proposed based on statistics and a new image enhancement method. The overall processing steps for improving the quality of remote sensing images are introduced to provide a general baseline. Due to differences between satellite sensors when producing images, subtle but inherent stripes can appear at the stitching positions between the sensors. These stitching stripes cannot be eliminated by conventional relative radiometric calibration, and they cause difficulties in downstream tasks such as the segmentation, classification and interpretation of remote sensing images. Therefore, a method to remove the stripes based on statistics and a new image enhancement approach are proposed in this paper. First, the inconsistency in grayscale around stripes is eliminated with the statistical method. Second, the pixels within stripes are weighted and averaged based on updated pixel values to improve the uniformity of the overall image radiation quality. Finally, the details of the images are highlighted by a new image enhancement method, which makes the whole image clearer. Comprehensive experiments are performed, and the results indicate that the proposed method outperforms the baseline approach in terms of visual quality and radiation correction accuracy.
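To make the statistical step concrete, the sketch below shows a classic statistics-based destriping idea: matching each column's mean and standard deviation to a locally smoothed reference. It is a generic illustration only; the method above targets stitching stripes specifically, and its weighting scheme is not reproduced.

```python
import numpy as np

def destripe_moment_matching(img, smooth_win=31):
    """Column-wise moment matching: align each column's mean/std to a smoothed reference.

    img : 2-D float array with vertical stripes (one band of a remote sensing image)
    """
    col_mean = img.mean(axis=0)
    col_std = img.std(axis=0) + 1e-6
    kernel = np.ones(smooth_win) / smooth_win
    ref_mean = np.convolve(col_mean, kernel, mode="same")   # stripe-free reference stats
    ref_std = np.convolve(col_std, kernel, mode="same")
    return (img - col_mean) / col_std * ref_std + ref_mean

band = np.random.rand(512, 512)         # stand-in image band
band[:, 200] += 0.3                      # inject a synthetic stripe
clean = destripe_moment_matching(band)
```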
Aiming at the scattering and absorption of light in the water body, which cause color shift, uneven brightness, poor sharpness and missing details in acquired underwater images, an underwater image enhancement algorithm based on IMSRCR and CLAHE-WGIF is proposed. Firstly, the IMSRCR algorithm proposed in this paper is used to apply adaptive color-shift correction to the original underwater image; secondly, the image is converted to the HSV color space, and a segmented exponential algorithm is used to process the S component to enhance the image saturation; finally, multi-scale Retinex is used to decompose the V component into a detail layer and a base layer, adaptive two-dimensional gamma correction is applied to the base layer to correct the uneven brightness, and the detail layer is processed by the CLAHE-WGIF algorithm to enhance the image contrast and detail information. The experimental results show that our algorithm has advantages over existing algorithms in both subjective and objective evaluations: the information entropy of the image is improved by 6.3% on average, and the UIQM and UCIQE indexes are improved by 12.9% and 20.3% on average.
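The base/detail processing pattern applied to the V channel can be sketched with OpenCV as below. Gaussian filtering stands in for multi-scale Retinex, a fixed gamma stands in for adaptive two-dimensional gamma correction, plain CLAHE stands in for CLAHE-WGIF, and the saturation step is omitted; all parameters are assumptions.

```python
import cv2
import numpy as np

def enhance_v_channel(bgr, gamma=0.8, sigma=15):
    """Split V into base + detail, gamma-correct the base, CLAHE the detail."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    v = hsv[..., 2] / 255.0
    base = cv2.GaussianBlur(v, (0, 0), sigma)           # stand-in for the Retinex base layer
    detail = v - base
    base_corr = np.power(np.clip(base, 0, 1), gamma)    # brightness correction on the base
    detail_u8 = np.clip((detail + 0.5) * 255, 0, 255).astype(np.uint8)
    detail_eq = cv2.createCLAHE(2.0, (8, 8)).apply(detail_u8) / 255.0 - 0.5
    hsv[..., 2] = np.clip((base_corr + detail_eq) * 255, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

img = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)  # stand-in underwater frame
out = enhance_v_channel(img)
```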
The current study provides a quantum calculus-based medical image enhancement technique that dynamically adjusts the spatial distribution of image pixel intensity values. The technique focuses on boosting the edges and texture of an image while leaving the smooth areas alone. Brain Magnetic Resonance Imaging (MRI) scans are used to visualize tumors that have spread throughout the brain in order to gain a better understanding of the stage of brain cancer. Accurately detecting brain cancer is a complex challenge that the medical system faces when diagnosing the disease. To address this issue, this research offers a quantum calculus-based MRI image enhancement as a pre-processing step for brain cancer diagnosis. The proposed image enhancement approach improves images with low gray-level changes by estimating each pixel's quantum probability. The suggested image enhancement technique is demonstrated to be robust and resistant to major quality changes on a variety of MRI scan datasets of variable quality. For MRI scans, the BRISQUE (blind/referenceless image spatial quality evaluator) and NIQE (natural image quality evaluator) measures were 39.38 and 3.58, respectively. According to the data, the proposed image enhancement model produces the best image quality ratings, and it may be able to aid medical experts in the diagnosis process. The experimental results were achieved using a publicly available collection of MRI scans.
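For readers unfamiliar with quantum (q-) calculus, the Jackson q-derivative below is the standard operator on which q-calculus-based enhancement schemes are typically built. The abstract does not state which q-operators the authors use, so this is background rather than their formulation.

```latex
D_q f(x) \;=\; \frac{f(qx) - f(x)}{(q-1)\,x}, \qquad x \neq 0,\; 0 < q < 1,
\qquad \lim_{q \to 1} D_q f(x) = f'(x).
```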
Low-light image enhancement methods have limitations in addressing issues such as color distortion, lack of vibrancy, and uneven light distribution, and they often require paired training data. To address these issues, we propose a two-stage unsupervised low-light image enhancement algorithm called the Retinex and Exposure Fusion Network (RFNet), which can overcome the problems of over-enhancement of the high dynamic range and under-enhancement of the low dynamic range seen in existing enhancement algorithms. By training with unpaired low-light and regular-light images, the algorithm can better manage the challenges brought about by complex environments in real-world scenarios. In the first stage, we design a multi-scale feature extraction module based on Retinex theory, capable of extracting details and structural information at different scales to generate high-quality illumination and reflection images. In the second stage, an exposure image generator is designed around the camera response function to acquire exposure images containing more dark-region features, and the generated images are fused with the original input images to complete the low-light image enhancement. Experiments show the effectiveness and rationality of each module designed in this paper. The method reconstructs the details of contrast and color distribution, outperforms current state-of-the-art methods in both qualitative and quantitative metrics, and shows excellent performance in the real world.
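The two ideas combined above, a Retinex-style decomposition followed by fusion with a synthetically re-exposed image, can be sketched as below. A simple gamma-style adjustment stands in for the camera response function, the fusion is a fixed weighted average rather than a learned one, and all parameters are assumptions.

```python
import cv2
import numpy as np

def retinex_exposure_fusion(rgb, gain=2.0, gamma=0.6, fuse_w=0.5):
    """Toy two-stage pipeline: Retinex decomposition, re-exposure, weighted fusion.

    rgb : float image in [0, 1], shape (H, W, 3)
    """
    # Stage 1: rough Retinex decomposition
    illum = cv2.GaussianBlur(rgb.max(axis=2), (0, 0), 15)
    illum = np.clip(illum, 1e-3, 1.0)[..., None]
    reflect = np.clip(rgb / illum, 0.0, 1.0)

    # Stage 2: synthesize a brighter "exposure" (gamma stands in for the CRF)
    exposure = np.clip((gain * rgb) ** gamma, 0.0, 1.0)

    # Fuse the re-exposed image with the reflectance-corrected input
    return np.clip(fuse_w * exposure + (1 - fuse_w) * reflect, 0.0, 1.0)

low_light = np.random.rand(128, 128, 3).astype(np.float32) * 0.3   # stand-in input
enhanced = retinex_exposure_fusion(low_light)
```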
Dear Editor, This letter proposes to integrate a dendritic learnable network architecture with the Vision Transformer to improve the accuracy of image recognition. In this study, based on the theory of dendritic neurons in neuroscience, we design a network that is more practical for engineering to classify visual features. Based on this, we propose a dendritic learning-incorporated vision Transformer (DVT), which outperforms other state-of-the-art methods on three image recognition benchmarks.
The act of transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images from counterfeiting and modification is a critical domain that can still be improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a highly effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. This paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM), it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, the transmission of biometric data in a compressed format over a wireless network has been successfully implemented for applications involving the transmission and sharing of images across a network. The suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing responsibilities. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained a Gaussian noise value of 98% and a rotation accuracy surpassing 99%.
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impression of fused images by improving the quality of infrared and visible light picture fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and a Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing high-resolution hyperspectral (HR-HS) images. With previously collected large amounts of external data, these methods are intuitively realised under the full supervision of ground-truth data. Thus, database construction in the research paradigm of merging a low-resolution (LR) HS (LR-HS) image with an HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting corresponding training triplets (HR-MS (RGB), LR-HS and HR-HS images) simultaneously, and often faces difficulties in reality. Models learned from training datasets collected under controlled conditions may significantly degrade the HSI super-resolution performance on real images captured under diverse environments. To handle the above-mentioned limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by preparing the training triplet samples online from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from the transformed data of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by considering the observations as unlabelled training samples. Specifically, the degradation modules inside the network are elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation into the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations are then formulated for measuring the network modelling performance. By consolidating DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method without any prior training, with great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments have been conducted on two benchmark HS datasets, the CAVE and Harvard datasets, and demonstrate the great performance gain of the proposed method over state-of-the-art methods.
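The two degradation operators mentioned above, spatial down-sampling of the HR-HS estimate to an LR-HS image and spectral down-sampling to an RGB image, can be sketched in PyTorch as below. The Gaussian blur kernel, scale factor and random spectral response matrix are placeholders, not the learned degradation modules used by the authors.

```python
import torch
import torch.nn.functional as F

def spatial_downsample(hrhs, scale=4, blur_sigma=1.0, k=7):
    """HR-HS (N, B, H, W) -> LR-HS via per-band Gaussian blur + striding."""
    n, b, h, w = hrhs.shape
    ax = torch.arange(k, dtype=torch.float32) - k // 2
    g1d = torch.exp(-(ax ** 2) / (2 * blur_sigma ** 2))
    k2d = torch.outer(g1d, g1d)
    k2d = k2d / k2d.sum()
    weight = k2d.view(1, 1, k, k).repeat(b, 1, 1, 1)        # one kernel per band
    blurred = F.conv2d(hrhs, weight, padding=k // 2, groups=b)
    return blurred[..., ::scale, ::scale]

def spectral_downsample(hrhs, srf):
    """HR-HS (N, B, H, W) -> RGB (N, 3, H, W) via a spectral response matrix srf (3, B)."""
    return torch.einsum("cb,nbhw->nchw", srf, hrhs)

hrhs_est = torch.rand(1, 31, 64, 64)             # hypothetical 31-band estimate
srf = torch.rand(3, 31)
srf = srf / srf.sum(dim=1, keepdim=True)          # placeholder spectral response
lrhs = spatial_downsample(hrhs_est)               # (1, 31, 16, 16)
rgb = spectral_downsample(hrhs_est, srf)          # (1, 3, 64, 64)
```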
Strong coupling between resonantly matched surface plasmons of metals and excitons of quantum emitters results in the formation of new plasmon-exciton hybridized energy states. In plasmon-exciton strong coupling, plasmonic nanocavities play a significant role due to their ability to confine light in an ultrasmall volume. Additionally, two-dimensional transition metal dichalcogenides (TMDCs) have a significant exciton binding energy and remain stable at ambient conditions, making them an excellent alternative for investigating light-matter interactions. As a result, strong plasmon-exciton coupling has been reported by introducing a single metallic cavity. However, single nanoparticles have lower spatial confinement of electromagnetic fields and limited tunability to match the excitonic resonance. Here, we introduce the concept of catenary-shaped optical fields induced by plasmonic metamaterial cavities to scale the strength of plasmon-exciton coupling. The demonstrated plasmon modes of metallic metamaterial cavities offer high confinement and tunability and can match with the excitons of TMDCs to exhibit a strong coupling regime by tuning either the size of the cavity gap or its thickness. The calculated Rabi splitting of Au-MoSe_2 and Au-WSe_2 heterostructures strongly depends on the catenary-like field enhancement induced by the Au cavity, resulting in room-temperature Rabi splitting ranging between 77.86 and 320 meV. These plasmonic metamaterial cavities can pave the way for manipulating excitons in TMDCs and operating active nanophotonic devices at ambient temperature.
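For context, plasmon-exciton Rabi splitting is commonly analysed with a two-coupled-oscillator model; the standard zero-detuning expression is given below as background, since the abstract does not specify the authors' model. Here g is the coupling strength and γ_pl, γ_ex are the plasmon and exciton linewidths (linewidth conventions vary between references).

```latex
\hbar\Omega_{R} \;=\; \sqrt{\,4(\hbar g)^{2} \;-\; \tfrac{1}{4}\left(\hbar\gamma_{\mathrm{pl}} - \hbar\gamma_{\mathrm{ex}}\right)^{2}\,},
\qquad \text{with strong coupling commonly taken to require }\;
\hbar\Omega_{R} > \tfrac{1}{2}\left(\hbar\gamma_{\mathrm{pl}} + \hbar\gamma_{\mathrm{ex}}\right).
```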
Olympus Corporation developed texture and color enhancement imaging (TXI) as a novel image-enhancing endoscopic technique. This topic highlights a series of hot-topic articles that investigated the efficacy of TXI for gastrointestinal disease identification in the clinical setting. A randomized controlled trial demonstrated improvements in the colorectal adenoma detection rate (ADR) and the mean number of adenomas per procedure (MAP) of TXI compared with those of white-light imaging (WLI) observation (58.7% vs 42.7%, adjusted relative risk 1.35, 95%CI: 1.17-1.56; 1.36 vs 0.89, adjusted incident risk ratio 1.48, 95%CI: 1.22-1.80, respectively). A cross-over study also showed that the colorectal MAP and ADR in TXI were higher than those in WLI (1.5 vs 1.0, adjusted odds ratio 1.4, 95%CI: 1.2-1.6; 58.2% vs 46.8%, 1.5, 1.0-2.3, respectively). A randomized controlled trial demonstrated non-inferiority of TXI to narrow-band imaging in the colorectal mean number of adenomas and sessile serrated lesions per procedure (0.29 vs 0.30, difference for non-inferiority -0.01, 95%CI: -0.10 to 0.08). A cohort study found that scoring for ulcerative colitis severity using TXI could predict relapse of ulcerative colitis. A cross-sectional study found that TXI improved the gastric cancer detection rate compared to WLI (0.71% vs 0.29%). A cross-sectional study revealed that the sensitivity and accuracy for active Helicobacter pylori gastritis in TXI were higher than those of WLI (69.2% vs 52.5% and 85.3% vs 78.7%, respectively). In conclusion, TXI can improve gastrointestinal lesion detection and qualitative diagnosis. Therefore, further studies on the efficacy of TXI in clinical practice are required.
BACKGROUND: This study aimed to evaluate the safety of enhanced recovery after surgery (ERAS) in elderly patients with gastric cancer (GC). AIM: To evaluate the safety of ERAS in elderly patients with GC. METHODS: The PubMed, EMBASE, and Cochrane Library databases were searched for eligible studies from inception to April 1, 2023. The mean difference (MD), odds ratio (OR) and 95% confidence interval (95%CI) were pooled for analysis. The quality of the included studies was evaluated using Newcastle-Ottawa Scale scores. Stata (V.16.0) software was used for data analysis. RESULTS: Six studies involving 878 elderly patients were included. By analyzing the clinical outcomes, we found that the ERAS group had shorter postoperative hospital stays (MD = -0.51, I² = 0.00%, 95%CI = -0.72 to -0.30, P = 0.00); earlier times to first flatus (defecation; MD = -0.30, I² = 0.00%, 95%CI = -0.55 to -0.06, P = 0.02); less intestinal obstruction (OR = 3.24, I² = 0.00%, 95%CI = 1.07 to 9.78, P = 0.04); less nausea and vomiting (OR = 4.07, I² = 0.00%, 95%CI = 1.29 to 12.84, P = 0.02); and less gastric retention (OR = 5.69, I² = 2.46%, 95%CI = 2.00 to 16.20, P = 0.00). Our results also showed that the conventional group had a greater mortality rate than the ERAS group (OR = 0.24, I² = 0.00%, 95%CI = 0.07 to 0.84, P = 0.03). However, there was no statistically significant difference in major complications between the ERAS group and the conventional group (OR = 0.67, I² = 0.00%, 95%CI = 0.38 to 1.18, P = 0.16). CONCLUSION: Compared with conventional recovery, elderly GC patients who received the ERAS protocol after surgery had a lower risk of mortality.
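For readers unfamiliar with how pooled MD/OR values and I² are obtained, the generic inverse-variance pooling and heterogeneity formulas are given below; they are standard meta-analysis background, not the specific fixed- versus random-effects choices made in the study.

```latex
\hat{\theta} = \frac{\sum_{i} w_{i}\,\hat{\theta}_{i}}{\sum_{i} w_{i}},
\qquad w_{i} = \frac{1}{\widehat{\mathrm{Var}}(\hat{\theta}_{i})},
\qquad Q = \sum_{i} w_{i}\,\bigl(\hat{\theta}_{i} - \hat{\theta}\bigr)^{2},
\qquad I^{2} = \max\!\Bigl(0,\ \frac{Q - (k-1)}{Q}\Bigr) \times 100\%,
```

where the effect estimate from study i (the log OR or the MD) enters as θ_i and k is the number of studies pooled.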
基金National Research Foundation of Korea,Grant/Award Numbers:2022R1I1A3069113,RS-2023-00221365Electronics and Telecommunications Research Institute,Grant/Award Number:2014-3-00123。
文摘In recent times,an image enhancement approach,which learns the global transformation function using deep neural networks,has gained attention.However,many existing methods based on this approach have a limitation:their transformation functions are too simple to imitate complex colour transformations between low-quality images and manually retouched high-quality images.In order to address this limitation,a simple yet effective approach for image enhancement is proposed.The proposed algorithm based on the channel-wise intensity transformation is designed.However,this transformation is applied to the learnt embedding space instead of specific colour spaces and then return enhanced features to colours.To this end,the authors define the continuous intensity transformation(CIT)to describe the mapping between input and output intensities on the embedding space.Then,the enhancement network is developed,which produces multi-scale feature maps from input images,derives the set of transformation functions,and performs the CIT to obtain enhanced images.Extensive experiments on the MIT-Adobe 5K dataset demonstrate that the authors’approach improves the performance of conventional intensity transforms on colour space metrics.Specifically,the authors achieved a 3.8%improvement in peak signal-to-noise ratio,a 1.8%improvement in structual similarity index measure,and a 27.5%improvement in learned perceptual image patch similarity.Also,the authors’algorithm outperforms state-of-the-art alternatives on three image enhancement datasets:MIT-Adobe 5K,Low-Light,and Google HDRþ.
基金supported by the national key research and development program (No.2020YFB1806608)Jiangsu natural science foundation for distinguished young scholars (No.BK20220054)。
文摘Due to the selective absorption of light and the existence of a large number of floating media in sea water, underwater images often suffer from color casts and detail blurs. It is therefore necessary to perform color correction and detail restoration. However,the existing enhancement algorithms cannot achieve the desired results. In order to solve the above problems, this paper proposes a multi-stream feature fusion network. First, an underwater image is preprocessed to obtain potential information from the illumination stream, color stream and structure stream by histogram equalization with contrast limitation, gamma correction and white balance, respectively. Next, these three streams and the original raw stream are sent to the residual blocks to extract the features. The features will be subsequently fused. It can enhance feature representation in underwater images. In the meantime, a composite loss function including three terms is used to ensure the quality of the enhanced image from the three aspects of color balance, structure preservation and image smoothness. Therefore, the enhanced image is more in line with human visual perception.Finally, the effectiveness of the proposed method is verified by comparison experiments with many stateof-the-art underwater image enhancement algorithms. Experimental results show that the proposed method provides superior results over them in terms of MSE,PSNR, SSIM, UIQM and UCIQE, and the enhanced images are more similar to their ground truth images.
文摘Handheld ultrasound devices are known for their portability and affordability,making them widely utilized in underdeveloped areas and community healthcare for rapid diagnosis and early screening.However,the image quality of handheld ultrasound devices is not always satisfactory due to the limited equipment size,which hinders accurate diagnoses by doctors.At the same time,paired ultrasound images are difficult to obtain from the clinic because imaging process is complicated.Therefore,we propose a modified cycle generative adversarial network(cycleGAN) for ultrasound image enhancement from multiple organs via unpaired pre-training.We introduce an ultrasound image pre-training method that does not require paired images,alleviating the requirement for large-scale paired datasets.We also propose an enhanced block with different structures in the pre-training and fine-tuning phases,which can help achieve the goals of different training phases.To improve the robustness of the model,we add Gaussian noise to the training images as data augmentation.Our approach is effective in obtaining the best quantitative evaluation results using a small number of parameters and less training costs to improve the quality of handheld ultrasound devices.
基金supported in part by the National Key Research and Development Program of China(2020YFB1313002)the National Natural Science Foundation of China(62276023,U22B2055,62222302,U2013202)+1 种基金the Fundamental Research Funds for the Central Universities(FRF-TP-22-003C1)the Postgraduate Education Reform Project of Henan Province(2021SJGLX260Y)。
文摘Underwater image enhancement aims to restore a clean appearance and thus improves the quality of underwater degraded images.Current methods feed the whole image directly into the model for enhancement.However,they ignored that the R,G and B channels of underwater degraded images present varied degrees of degradation,due to the selective absorption for the light.To address this issue,we propose an unsupervised multi-expert learning model by considering the enhancement of each color channel.Specifically,an unsupervised architecture based on generative adversarial network is employed to alleviate the need for paired underwater images.Based on this,we design a generator,including a multi-expert encoder,a feature fusion module and a feature fusion-guided decoder,to generate the clear underwater image.Accordingly,a multi-expert discriminator is proposed to verify the authenticity of the R,G and B channels,respectively.In addition,content perceptual loss and edge loss are introduced into the loss function to further improve the content and details of the enhanced images.Extensive experiments on public datasets demonstrate that our method achieves more pleasing results in vision quality.Various metrics(PSNR,SSIM,UIQM and UCIQE) evaluated on our enhanced images have been improved obviously.
基金supported by the National Natural Science Foundation of China(62276192)。
文摘Low-light images suffer from low quality due to poor lighting conditions,noise pollution,and improper settings of cameras.To enhance low-light images,most existing methods rely on normal-light images for guidance but the collection of suitable normal-light images is difficult.In contrast,a self-supervised method breaks free from the reliance on normal-light data,resulting in more convenience and better generalization.Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment methods,resulting in remnants of other degradations,uneven brightness and artifacts.In response,this paper proposes a self-supervised enhancement method,termed as SLIE.It can handle multiple degradations including illumination attenuation,noise pollution,and color shift,all in a self-supervised manner.Illumination attenuation is estimated based on physical principles and local neighborhood information.The removal and correction of noise and color shift removal are solely realized with noisy images and images with color shifts.Finally,the comprehensive and fully self-supervised approach can achieve better adaptability and generalization.It is applicable to various low light conditions,and can reproduce the original color of scenes in natural light.Extensive experiments conducted on four public datasets demonstrate the superiority of SLIE to thirteen state-of-the-art methods.Our code is available at https://github.com/hanna-xu/SLIE.
文摘In this study,an underwater image enhancement method based on multi-scale adversarial network was proposed to solve the problem of detail blur and color distortion in underwater images.Firstly,the local features of each layer were enhanced into the global features by the proposed residual dense block,which ensured that the generated images retain more details.Secondly,a multi-scale structure was adopted to extract multi-scale semantic features of the original images.Finally,the features obtained from the dual channels were fused by an adaptive fusion module to further optimize the features.The discriminant network adopted the structure of the Markov discriminator.In addition,by constructing mean square error,structural similarity,and perceived color loss function,the generated image is consistent with the reference image in structure,color,and content.The experimental results showed that the enhanced underwater image deblurring effect of the proposed algorithm was good and the problem of underwater image color bias was effectively improved.In both subjective and objective evaluation indexes,the experimental results of the proposed algorithm are better than those of the comparison algorithm.
文摘Finger vein extraction and recognition hold significance in various applications due to the unique and reliable nature of finger vein patterns. While recently finger vein recognition has gained popularity, there are still challenges associated with extracting and processing finger vein patterns related to image quality, positioning and alignment, skin conditions, security concerns and processing techniques applied. In this paper, a method for robust segmentation of line patterns in strongly blurred images is presented and evaluated in vessel network extraction from infrared images of human fingers. In a four-step process: local normalization of brightness, image enhancement, segmentation and cleaning were involved. A novel image enhancement method was used to re-establish the line patterns from the brightness sum of the independent close-form solutions of the adopted optimization criterion derived in small windows. In the proposed method, the computational resources were reduced significantly compared to the solution derived when the whole image was processed. In the enhanced image, where the concave structures have been sufficiently emphasized, accurate detection of line patterns was obtained by local entropy thresholding. Typical segmentation errors appearing in the binary image were removed using morphological dilation with a line structuring element and morphological filtering with a majority filter to eliminate isolated blobs. The proposed method performs accurate detection of the vessel network in human finger infrared images, as the experimental results show, applied both in real and artificial images and can readily be applied in many image enhancement and segmentation applications.
基金supported by the Key R&D Program of Shaanxi Province,China(Grant Nos.2022GY-274,2023-YBSF-505)the National Natural Science Foundation of China(Grant No.62273273).
文摘Recovering high-quality inscription images from unknown and complex inscription noisy images is a challenging research issue.Different fromnatural images,character images pay more attention to stroke information.However,existingmodelsmainly consider pixel-level informationwhile ignoring structural information of the character,such as its edge and glyph,resulting in reconstructed images with mottled local structure and character damage.To solve these problems,we propose a novel generative adversarial network(GAN)framework based on an edge-guided generator and a discriminator constructed by a dual-domain U-Net framework,i.e.,EDU-GAN.Unlike existing frameworks,the generator introduces the edge extractionmodule,guiding it into the denoising process through the attention mechanism,which maintains the edge detail of the restored inscription image.Moreover,a dual-domain U-Net-based discriminator is proposed to learn the global and local discrepancy between the denoised and the label images in both image and morphological domains,which is helpful to blind denoising tasks.The proposed dual-domain discriminator and generator for adversarial training can reduce local artifacts and keep the denoised character structure intact.Due to the lack of a real-inscription image,we built the real-inscription dataset to provide an effective benchmark for studying inscription image denoising.The experimental results show the superiority of our method both in the synthetic and real-inscription datasets.
基金sponsored by the National Natural Science Foundation of China under Grants 61972207,U1836208,U1836110,61672290,and the Project was through the Priority Academic Program Development(PAPD)of Jiangsu Higher Education Institution.
文摘Digital watermarking technology is adequate for copyright protection and content authentication.There needs to be more research on the watermarking algorithm after printing and scanning.Aiming at the problem that existing anti-print scanning text image watermarking algorithms cannot take into account the invisibility and robustness of the watermark,an anti-print scanning watermarking algorithm suitable for text images is proposed.This algorithm first performs a series of image enhancement preprocessing operations on the printed scanned image to eliminate the interference of incorrect bit information on watermark embedding and then uses a combination of Discrete Wavelet Transform(DWT)-Singular Value Decomposition(SVD)to embed the watermark.Experiments show that the average Normalized Correlation(NC)of the watermark extracted by this algorithm against attacks such as Joint Photographic Experts Group(JPEG)compression,JPEG2000 compression,and print scanning is above 0.93.Especially,the average NC of the watermark extracted after print scanning attacks is greater than 0.964,and the average Bit Error Ratio(BER)is 5.15%.This indicates that this algorithm has strong resistance to various attacks and print scanning attacks and can better take into account the invisibility of the watermark.
文摘A method to remove stripes from remote sensing images is proposed based on statistics and a new image enhancement method.The overall processing steps for improving the quality of remote sensing images are introduced to provide a general baseline.Due to the differences in satellite sensors when producing images,subtle but inherent stripes can appear at the stitching positions between the sensors.These stitchingstripes cannot be eliminated by conventional relative radiometric calibration.The inherent stitching stripes cause difficulties in downstream tasks such as the segmentation,classification and interpretation of remote sensing images.Therefore,a method to remove the stripes based on statistics and a new image enhancement approach are proposed in this paper.First,the inconsistency in grayscales around stripes is eliminated with the statistical method.Second,the pixels within stripes are weighted and averaged based on updated pixel values to enhance the uniformity of the overall image radiation quality.Finally,the details of the images are highlighted by a new image enhancement method,which makes the whole image clearer.Comprehensive experiments are performed,and the results indicate that the proposed method outperforms the baseline approach in terms of visual quality and radiation correction accuracy.
文摘Aiming at the scattering and absorption of light in the water body,which causes the problems of color shift,uneven brightness,poor sharpness and missing details in the acquired underwater images,an underwater image enhancement algorithm based on IMSRCR and CLAHE-WGIF is proposed.Firstly,the IMSRCR algorithm proposed in this paper is used to process the original underwater image with adaptive color shift correction;secondly,the image is converted to HSV color space,and the segmentation exponential algorithm is used to process the S component to enhance the image saturation;finally,multi-scale Retinex is used to decompose the V component image into detail layer and base layer,and adaptive two-dimensional gamma correction is made to the base layer to adjust the brightness unevenness,while the detail layer is processed by CLAHE-WGIF algorithm to enhance the image contrast and detail information.The experimental results show that our algorithm has some advantages over existing algorithms in both subjective and objective evaluations,and the information entropy of the image is improved by 6.3%on average,and the UIQM and UCIQE indexes are improved by 12.9%and 20.3%on average.
文摘The current study provides a quantum calculus-based medical image enhancement technique that dynamically chooses the spatial distribution of image pixel intensity values.The technique focuses on boosting the edges and texture of an image while leaving the smooth areas alone.The brain Magnetic Resonance Imaging(MRI)scans are used to visualize the tumors that have spread throughout the brain in order to gain a better understanding of the stage of brain cancer.Accurately detecting brain cancer is a complex challenge that the medical system faces when diagnosing the disease.To solve this issue,this research offers a quantum calculus-based MRI image enhancement as a pre-processing step for brain cancer diagnosis.The proposed image enhancement approach improves images with low gray level changes by estimating the pixel’s quantum probability.The suggested image enhancement technique is demonstrated to be robust and resistant to major quality changes on a variety ofMRIscan datasets of variable quality.ForMRI scans,the BRISQUE“blind/referenceless image spatial quality evaluator”and the NIQE“natural image quality evaluator”measures were 39.38 and 3.58,respectively.The proposed image enhancement model,according to the data,produces the best image quality ratings,and it may be able to aid medical experts in the diagnosis process.The experimental results were achieved using a publicly available collection of MRI scans.
基金supported by the National Key Research and Development Program Topics(Grant No.2021YFB4000905)the National Natural Science Foundation of China(Grant Nos.62101432 and 62102309)in part by Shaanxi Natural Science Fundamental Research Program Project(No.2022JM-508).
文摘Low-light image enhancement methods have limitations in addressing issues such as color distortion,lack of vibrancy,and uneven light distribution and often require paired training data.To address these issues,we propose a two-stage unsupervised low-light image enhancement algorithm called Retinex and Exposure Fusion Network(RFNet),which can overcome the problems of over-enhancement of the high dynamic range and under-enhancement of the low dynamic range in existing enhancement algorithms.This algorithm can better manage the challenges brought about by complex environments in real-world scenarios by training with unpaired low-light images and regular-light images.In the first stage,we design a multi-scale feature extraction module based on Retinex theory,capable of extracting details and structural information at different scales to generate high-quality illumination and reflection images.In the second stage,an exposure image generator is designed through the camera response mechanism function to acquire exposure images containing more dark features,and the generated images are fused with the original input images to complete the low-light image enhancement.Experiments show the effectiveness and rationality of each module designed in this paper.And the method reconstructs the details of contrast and color distribution,outperforms the current state-of-the-art methods in both qualitative and quantitative metrics,and shows excellent performance in the real world.
基金partially supported by the Japan Society for the Promotion of Science(JSPS)KAKENHI(JP22H03643)Japan Science and Technology Agency(JST)Support for Pioneering Research Initiated by the Next Generation(SPRING)(JPMJSP2145)JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation(JPMJFS2115)。
文摘Dear Editor,This letter proposes to integrate dendritic learnable network architecture with Vision Transformer to improve the accuracy of image recognition.In this study,based on the theory of dendritic neurons in neuroscience,we design a network that is more practical for engineering to classify visual features.Based on this,we propose a dendritic learning-incorporated vision Transformer(DVT),which out-performs other state-of-the-art methods on three image recognition benchmarks.
文摘The act of transmitting photos via the Internet has become a routine and significant activity.Enhancing the security measures to safeguard these images from counterfeiting and modifications is a critical domain that can still be further enhanced.This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images.The main goal of this work is to create a very effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression.This paper introduces a content-based image authentication mechanism that is suitable for usage across an untrusted network and resistant to data loss during transmission.By employing scale attributes and a key-dependent parametric Long Short-Term Memory(LSTM),it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions.Furthermore,the successful implementation of transmitting biometric data in a compressed format over a wireless network has been accomplished.For applications involving the transmission and sharing of images across a network.The suggested technique utilizes the scalability of a structural digital signature to attain a satisfactory equilibrium between security and picture transfer.An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by sharing the processing of responsibilities.This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss.This approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration.The proposed system attained a Gaussian noise value of 98%and a rotation accuracy surpassing 99%.
文摘A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase thevisual impression of fused images by improving the quality of infrared and visible light picture fusion. The networkcomprises an encoder module, fusion layer, decoder module, and edge improvementmodule. The encoder moduleutilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformerto achieve deep-level co-extraction of local and global features from the original picture. An edge enhancementmodule (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy isintroduced to enhance the adaptive representation of information in various regions of the source image, therebyenhancing the contrast of the fused image. The encoder and the EEM module extract features, which are thencombined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test thealgorithmproposed in this paper. The results of the experiments demonstrate that the network effectively preservesbackground and detail information in both infrared and visible images, yielding superior outcomes in subjectiveand objective evaluations.
Funding: Ministry of Education, Culture, Sports, Science and Technology, Grant/Award Number: 20K11867.
Abstract: By automatically learning the priors embedded in images with powerful modelling capabilities, deep learning-based algorithms have recently made considerable progress in reconstructing the high-resolution hyperspectral (HR-HS) image. With large amounts of previously collected external data, these methods are typically realised under full supervision of the ground-truth data. Thus, database construction in the research paradigm that merges the low-resolution hyperspectral (LR-HS) and HR multispectral (MS) or RGB image, commonly named HSI SR, requires collecting the corresponding training triplets (HR-MS/RGB, LR-HS and HR-HS images) simultaneously, which is often difficult in reality. Models learned from training datasets collected simultaneously under controlled conditions may also degrade significantly when super-resolving real images captured under diverse environments. To handle these limitations, the authors propose to leverage deep internal and self-supervised learning to solve the HSI SR problem. The authors advocate that it is possible to train a specific CNN model at test time, called deep internal learning (DIL), by preparing the training triplet samples online from the observed LR-HS/HR-MS (or RGB) images and the down-sampled LR-HS version. However, the number of training triplets extracted solely from transformed versions of the observation itself is extremely small, particularly for HSI SR tasks with large spatial upscale factors, which would result in limited reconstruction performance. To solve this problem, the authors further exploit deep self-supervised learning (DSL) by treating the observations as unlabelled training samples. Specifically, the degradation modules inside the network are elaborated to realise the spatial and spectral down-sampling procedures for transforming the generated HR-HS estimation into the high-resolution RGB/LR-HS approximation, and the reconstruction errors of the observations are then formulated to measure the network modelling performance. By consolidating DIL and DSL into a unified deep framework, the authors construct a more robust HSI SR method that requires no prior training and has great potential for flexible adaptation to different settings per observation. To verify the effectiveness of the proposed approach, extensive experiments were conducted on two benchmark HS datasets, CAVE and Harvard, and demonstrate a large performance gain of the proposed method over state-of-the-art methods.
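A minimal sketch of how internal-learning triplets could be prepared at test time is shown below, assuming the observed LR-HS image serves as the HR-HS target while further spatially down-sampled versions of the observations serve as inputs; the scale factor and interpolation kernel are placeholders.

```python
# Sketch of on-line triplet preparation for deep internal learning (assumed details).
import torch
import torch.nn.functional as F

def make_internal_triplet(lr_hs, hr_rgb, scale=4):
    # lr_hs:  (1, bands, h, w)          observed low-resolution hyperspectral image
    # hr_rgb: (1, 3, h*scale, w*scale)  observed high-resolution RGB image
    target_hr_hs = lr_hs                                     # acts as the HR-HS label
    input_lr_hs = F.interpolate(lr_hs, scale_factor=1 / scale,
                                mode="bicubic", align_corners=False)
    input_hr_rgb = F.interpolate(hr_rgb, scale_factor=1 / scale,
                                 mode="bicubic", align_corners=False)
    return input_lr_hs, input_hr_rgb, target_hr_hs

lr_hs = torch.rand(1, 31, 64, 64)
hr_rgb = torch.rand(1, 3, 256, 256)
print([t.shape for t in make_internal_triplet(lr_hs, hr_rgb)])
```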
Funding: Supported by the Australian Research Council (DP200101353).
Abstract: Strong coupling between resonantly matched surface plasmons of metals and excitons of quantum emitters results in the formation of new plasmon-exciton hybridized energy states. In plasmon-exciton strong coupling, plasmonic nanocavities play a significant role due to their ability to confine light in an ultrasmall volume. Additionally, two-dimensional transition metal dichalcogenides (TMDCs) have a significant exciton binding energy and remain stable at ambient conditions, making them an excellent alternative for investigating light-matter interactions. As a result, strong plasmon-exciton coupling has been reported by introducing a single metallic cavity. However, single nanoparticles have lower spatial confinement of electromagnetic fields and limited tunability to match the excitonic resonance. Here, we introduce the concept of catenary-shaped optical fields induced by plasmonic metamaterial cavities to scale the strength of plasmon-exciton coupling. The demonstrated plasmon modes of metallic metamaterial cavities offer high confinement and tunability and can match the excitons of TMDCs to exhibit a strong coupling regime by tuning either the size of the cavity gap or its thickness. The calculated Rabi splitting of Au-MoSe_2 and Au-WSe_2 heterostructures strongly depends on the catenary-like field enhancement induced by the Au cavity, resulting in room-temperature Rabi splitting ranging between 77.86 and 320 meV. These plasmonic metamaterial cavities can pave the way for manipulating excitons in TMDCs and operating active nanophotonic devices at ambient temperature.
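For reference, the standard two-coupled-oscillator relations commonly used to quantify Rabi splitting are reproduced below; the symbols (coupling strength g, linewidths γ_pl and γ_ex, detuning δ) are generic textbook quantities, not values or derivations taken from the article.

```latex
% Generic coupled-oscillator relations for plasmon-exciton strong coupling (not from the article).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Hybrid-state energies for plasmon and exciton modes with detuning $\delta=E_{pl}-E_{ex}$:
\begin{equation}
E_{\pm}=\frac{E_{pl}+E_{ex}}{2}
  -\frac{i(\gamma_{pl}+\gamma_{ex})}{4}
  \pm\sqrt{g^{2}+\frac{1}{4}\left(\delta-\frac{i(\gamma_{pl}-\gamma_{ex})}{2}\right)^{2}} .
\end{equation}
At zero detuning the Rabi splitting and a commonly used strong-coupling criterion read
\begin{equation}
\hbar\Omega_{R}=2\sqrt{g^{2}-\frac{(\gamma_{pl}-\gamma_{ex})^{2}}{16}},
\qquad
2g>\sqrt{\frac{\gamma_{pl}^{2}+\gamma_{ex}^{2}}{2}} .
\end{equation}
\end{document}
```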
Abstract: Olympus Corporation developed texture and color enhancement imaging (TXI) as a novel image-enhancing endoscopic technique. This topic highlights a series of hot-topic articles that investigated the efficacy of TXI for gastrointestinal disease identification in the clinical setting. A randomized controlled trial demonstrated improvements in the colorectal adenoma detection rate (ADR) and the mean number of adenomas per procedure (MAP) of TXI compared with those of white-light imaging (WLI) observation (58.7% vs 42.7%, adjusted relative risk 1.35, 95%CI: 1.17-1.56; 1.36 vs 0.89, adjusted incident risk ratio 1.48, 95%CI: 1.22-1.80, respectively). A cross-over study also showed that the colorectal MAP and ADR in TXI were higher than those in WLI (1.5 vs 1.0, adjusted odds ratio 1.4, 95%CI: 1.2-1.6; 58.2% vs 46.8%, 1.5, 1.0-2.3, respectively). A randomized controlled trial demonstrated non-inferiority of TXI to narrow-band imaging in the colorectal mean number of adenomas and sessile serrated lesions per procedure (0.29 vs 0.30, difference for non-inferiority -0.01, 95%CI: -0.10 to 0.08). A cohort study found that scoring ulcerative colitis severity using TXI could predict relapse of ulcerative colitis. A cross-sectional study found that TXI improved the gastric cancer detection rate compared with WLI (0.71% vs 0.29%). Another cross-sectional study revealed that the sensitivity and accuracy for active Helicobacter pylori gastritis in TXI were higher than those of WLI (69.2% vs 52.5% and 85.3% vs 78.7%, respectively). In conclusion, TXI can improve gastrointestinal lesion detection and qualitative diagnosis. Therefore, further studies on the efficacy of TXI in clinical practice are required.
Funding: Supported by the Chongqing Medical University Program for Youth Innovation in Future Medicine, No. W0190.
Abstract: BACKGROUND This study aimed to evaluate the safety of enhanced recovery after surgery (ERAS) in elderly patients with gastric cancer (GC). AIM To evaluate the safety of ERAS in elderly patients with GC. METHODS The PubMed, EMBASE, and Cochrane Library databases were searched for eligible studies from inception to April 1, 2023. The mean difference (MD), odds ratio (OR) and 95% confidence interval (95%CI) were pooled for analysis. The quality of the included studies was evaluated using Newcastle-Ottawa Scale scores. Stata (V.16.0) software was used for data analysis. RESULTS This analysis includes six studies involving 878 elderly patients. By analyzing the clinical outcomes, we found that the ERAS group had shorter postoperative hospital stays (MD = -0.51, I² = 0.00%, 95%CI = -0.72 to -0.30, P = 0.00); earlier times to first flatus (defecation; MD = -0.30, I² = 0.00%, 95%CI = -0.55 to -0.06, P = 0.02); less intestinal obstruction (OR = 3.24, I² = 0.00%, 95%CI = 1.07 to 9.78, P = 0.04); less nausea and vomiting (OR = 4.07, I² = 0.00%, 95%CI = 1.29 to 12.84, P = 0.02); and less gastric retention (OR = 5.69, I² = 2.46%, 95%CI = 2.00 to 16.20, P = 0.00). Our results showed that the conventional group had a greater mortality rate than the ERAS group (OR = 0.24, I² = 0.00%, 95%CI = 0.07 to 0.84, P = 0.03). However, there was no statistically significant difference in major complications between the ERAS group and the conventional group (OR = 0.67, I² = 0.00%, 95%CI = 0.38 to 1.18, P = 0.16). CONCLUSION Compared with conventional recovery, elderly GC patients who received the ERAS protocol after surgery had a lower risk of mortality.
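As a generic illustration of how pooled odds ratios and 95% confidence intervals such as those above are obtained, the snippet below performs fixed-effect inverse-variance pooling of log odds ratios on made-up placeholder data; it is not the study's analysis or data.

```python
# Generic fixed-effect (inverse-variance) pooling of log odds ratios; inputs are placeholders.
import math

def pool_log_or(studies):
    # studies: list of (log_or, standard_error) pairs, one per included study
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * lor for (lor, _), w in zip(studies, weights)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    ci = (math.exp(pooled - 1.96 * se_pooled), math.exp(pooled + 1.96 * se_pooled))
    return math.exp(pooled), ci              # pooled OR and its 95% CI

toy = [(math.log(0.30), 0.45), (math.log(0.20), 0.60)]   # made-up example studies
print(pool_log_or(toy))
```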