Due to hardware limitations, existing hyperspectral (HS) cameras often suffer from low spatial/temporal resolution. Recently, it has become prevalent to super-resolve a low-resolution (LR) HS image into a high-resolution (HR) HS image with the guidance of an HR RGB (or multispectral) image. Previous approaches for this guided super-resolution task often model the intrinsic characteristics of the desired HR HS image using hand-crafted priors. More recently, researchers have paid greater attention to deep learning methods with direct supervised or unsupervised learning, which exploit deep priors only from the training dataset or the testing data. In this article, an efficient convolutional neural network-based method is presented to progressively super-resolve an HS image with RGB image guidance. Specifically, a progressive HS image super-resolution network is proposed, which progressively super-resolves the LR HS image under the guidance of a pixel-shuffled HR RGB image. The super-resolution network is then trained progressively with supervised pre-training and unsupervised adaptation, where supervised pre-training learns a general prior on the training data and unsupervised adaptation specialises that general prior into specific priors for varying testing scenes. The proposed method can effectively exploit priors from both the training dataset and the testing HS and RGB images with a spectral-spatial constraint. It has good generalisation capability, especially for blind HS image super-resolution. Comprehensive experimental results show that the proposed deep progressive learning method outperforms existing state-of-the-art methods for HS image super-resolution in both non-blind and blind cases.
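The "pixel shuffled HR RGB image guidance" implies rearranging the HR RGB guide into a lower-resolution, higher-channel tensor (inverse pixel shuffle, also called space-to-depth) so it can be stacked with LR HS features at each progressive stage. A minimal NumPy sketch of that rearrangement (the network itself is not reproduced here; shapes and scale factor are illustrative):

```python
import numpy as np

def pixel_unshuffle(img, r):
    """Space-to-depth: (C, H, W) -> (C*r*r, H//r, W//r).

    Rearranges an HR guide image into a lower-resolution, higher-channel
    tensor so it can be concatenated with LR features at each stage.
    """
    c, h, w = img.shape
    assert h % r == 0 and w % r == 0
    out = img.reshape(c, h // r, r, w // r, r)
    out = out.transpose(0, 2, 4, 1, 3)   # (C, r, r, H//r, W//r)
    return out.reshape(c * r * r, h // r, w // r)

rgb = np.arange(3 * 8 * 8, dtype=np.float32).reshape(3, 8, 8)
guide = pixel_unshuffle(rgb, 2)
print(guide.shape)  # (12, 4, 4)
```

Each output channel holds one phase of the original sampling grid, so no information is lost; deep-learning frameworks expose the same operation (e.g. as a pixel-unshuffle layer).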
Generating a realistic image of a person in one source pose conditioned on a different target pose is a promising computer vision task. Previous mainstream methods mainly focus on exploring the transformation relationship between the keypoint-based source pose and the target pose, but rarely investigate region-based human semantic information. Some current methods that adopt the parsing map consider neither the precise local pose-semantic matching issue nor the correspondence between the two different poses. In this study, a Region Semantics-Assisted Generative Adversarial Network (RSA-GAN) is proposed for the pose-guided person image generation task. In particular, a regional pose-guided semantic fusion module is first developed to solve the imprecise matching between the semantic parsing map of a source image and the corresponding keypoints in the source pose. To align the style of the human in the source image with the target pose, a pose correspondence guided style injection module is designed to learn the correspondence between the source pose and the target pose. In addition, a gated depth-wise convolutional cross-attention based style integration module is proposed to distribute the well-aligned coarse style information, together with the precisely matched pose-guided semantic information, towards the target pose. The experimental results indicate that the proposed RSA-GAN achieves a 23% reduction in LPIPS compared to the method without semantic maps and a 6.9% reduction in FID compared to the method with semantic maps, and also produces qualitatively more realistic results.
Current data-driven deep learning (DL) methods typically reconstruct subsurface velocity models directly from pre-stack seismic records. However, these purely data-driven methods are often less robust and produce results that are less physically interpretable. Here, the authors propose a new method that uses migration images as input, combined with convolutional neural networks, to construct high-resolution velocity models. Compared to using pre-stack seismic records directly as input, the nonlinearity between migration images and velocity models is significantly reduced. Additionally, migration images more comprehensively capture the reflective properties of the subsurface medium, including amplitude and phase information, and thereby provide richer physical information to guide the reconstruction of the velocity model. This approach not only improves the accuracy and resolution of the reconstructed velocity models, but also enhances their physical interpretability and robustness. Numerical experiments on synthetic data show that the proposed method has superior reconstruction performance and strong generalization capability when dealing with complex geological structures, and shows great potential for providing efficient solutions to the task of reconstructing high-wavenumber components.
The key factor in photothermal therapy lies in the selection of photothermal agents. Traditional photothermal agents generally suffer from problems such as poor photothermal stability and low photothermal conversion efficiency. Herein, we have designed and synthesized an isoindigo (IID) dye, using isoindigo as the molecular center and introducing common triphenylamine and methoxy groups as rotors. To improve photothermal stability and tumor-targeting ability, we encapsulated the IID dye into nanoparticles. As a result, the nanoparticles exhibited high photothermal stability and a photothermal conversion efficiency of 67% upon 635 nm laser irradiation. The nanoparticles thus demonstrated a significant inhibitory effect on tumors in vivo in photothermal therapy guided by photoacoustic imaging, providing a viable strategy to overcome the treatment challenges.
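A conversion efficiency such as the reported 67% is usually extracted from laser heating/cooling curves via the standard energy-balance (Roper-type) analysis. The sketch below implements that textbook formula; every input number is purely illustrative and none comes from the paper:

```python
def photothermal_efficiency(hS, T_max, T_surr, Q_dis, power, absorbance):
    """Energy-balance estimate of photothermal conversion efficiency.

    hS         : heat-transfer coefficient x surface area (mW/K)
    T_max      : steady-state maximum temperature (deg C)
    T_surr     : surrounding temperature (deg C)
    Q_dis      : baseline heat input from solvent/container (mW)
    power      : incident laser power (mW)
    absorbance : absorbance at the laser wavelength
    """
    absorbed = power * (1.0 - 10.0 ** (-absorbance))   # fraction of light absorbed
    return (hS * (T_max - T_surr) - Q_dis) / absorbed

# Illustrative numbers only (not from the paper):
eta = photothermal_efficiency(hS=18.0, T_max=55.0, T_surr=25.0,
                              Q_dis=30.0, power=900.0, absorbance=1.0)
print(round(eta, 3))  # ~0.63
```

In practice hS itself is fitted from the exponential cooling curve after the laser is switched off.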
Hypoxia is a common characteristic of almost all solid tumors, and it prevents therapeutic drugs from reaching the tumors. The development of new targeted agents for the accurate diagnosis of hypoxic tumors has therefore attracted wide concern. As carbonic anhydrase IX (CA IX) is abundantly distributed on hypoxic tumor cells, it is considered a potential tumor biomarker. 4-(2-Aminoethyl)benzenesulfonamide (ABS), a CA IX inhibitor, has inherent inhibitory activity and a good targeting effect. In this study, Ag2S quantum dots (QDs) were used as the carrier to prepare a novel diagnostic and therapeutic bioprobe (Ag2S@polyethylene glycol (PEG)-ABS) through ligand exchange and an amide condensation reaction. Ag2S@PEG-ABS can selectively target tumors via the surface-modified ABS and achieve accurate tumor imaging through the near-infrared-II (NIR-II) fluorescence of the Ag2S QDs. PEG modification of the Ag2S QDs greatly improves their water solubility and stability, yielding high photothermal stability and a high photothermal conversion efficiency (PCE) of 45.17%. Under laser irradiation, Ag2S@PEG-ABS combines powerful photothermal action with inherent antitumor activity against colon cancer cells (CT-26) in vitro. It has also been proved that Ag2S@PEG-ABS can effectively treat hypoxic tumors in vivo and shows good biocompatibility. It is therefore a new, efficient, integrated platform for the diagnosis and treatment of hypoxic tumors.
Semantic segmentation of driving scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors like poor lighting and overexposure, making it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines the image processing filters and predicts parameters such as exposure and hue to optimize image quality. We adopt a novel image encoder that enables Edip to handle features at different scales, enhancing parameter prediction accuracy. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine the segmentation outputs. The entire network is trained end-to-end, with segmentation results guiding the optimization of parameter prediction, promoting self-learning and network improvement. This lightweight and efficient architecture is particularly suitable for nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and NightCity datasets.
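The filter bank that Edip parameterizes is only named by example (exposure, hue). As a minimal illustration of one such differentiable filter, the common exposure adjustment of the form I * 2^ev is sketched below; the filter form, parameter value, and names are assumptions, not the paper's Mdif:

```python
import numpy as np

def exposure_filter(img, ev):
    """Differentiable exposure adjustment: scale intensities by 2**ev,
    then clip to the valid [0, 1] range. In the IAEN setting, `ev` would
    be predicted per image by the parameter predictor; here it is fixed."""
    return np.clip(img * (2.0 ** ev), 0.0, 1.0)

night = np.full((2, 2), 0.1)           # underexposed patch
brightened = exposure_filter(night, ev=1.5)
print(brightened[0, 0])                # 0.1 * 2**1.5, about 0.283
```

Because the filter is differentiable in both the image and `ev`, segmentation loss gradients can flow back into the parameter predictor during end-to-end training.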
Deep learning has been widely used in mammographic image classification owing to its superiority in automatic feature extraction. However, general deep learning models cannot achieve very satisfactory classification results on mammographic images because they are not specifically designed for such images and do not take their specific traits into account. To exploit the essential discriminant information of mammographic images, we propose a novel classification method based on a convolutional neural network. Specifically, the proposed method uses two branches to extract discriminative features from the mediolateral oblique (MLO) and craniocaudal (CC) mammographic views. The features extracted from the two views contain complementary information that enables breast cancer to be more easily distinguished. Moreover, an attention block is introduced to capture channel-wise information by adjusting the weight of each feature map, which is beneficial for emphasising the important features of mammographic images. Furthermore, we add a penalty term based on a fuzzy clustering algorithm to the cross-entropy function, which improves the generalisation ability of the classification model by maximising the inter-class distance and minimising the intra-class distance of the samples. Experimental results on the Digital Database for Screening Mammography, INbreast, and MIAS mammography databases illustrate that the proposed method achieves the best classification performance and is more robust than the compared state-of-the-art methods.
BACKGROUND: This case report describes the simultaneous occurrence of a gastrointestinal stromal tumour (GIST) and arteriovenous malformations (AVMs) within the jejunal mesentery. A 74-year-old male presented to the department of surgery at our institution with a one-month history of abdominal pain. Contrast-enhanced computed tomography revealed an AVM. During exploratory laparotomy, hyperspectral imaging (HSI) and indocyanine green (ICG) fluorescence were used to evaluate the extent of the tumour and determine the resection margins. Intraoperative imaging confirmed the AVM, while histopathological evaluation showed an epithelioid, partially spindle-cell GIST. CASE SUMMARY: This is the first case report of the use of HSI and ICG to image a GIST intermingled with an AVM. The resection margins were planned using intraoperative analysis of the additional optical data. Image-guided surgery enhances the clinician's knowledge of tissue composition and facilitates tissue differentiation. CONCLUSION: Since image-guided surgery is safe and is associated with better postoperative outcomes, this approach should gain popularity among the next generation of surgeons.
Background: Owing to the rapid development of deep networks, single-image deraining has progressed significantly. Various architectures have been designed to remove rain recursively or directly, and most rain streaks can be removed by existing deraining methods. However, many of them cause detail loss, resulting in visual artifacts. Method: To resolve this issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single-image deraining, based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls the single-image deraining task into two subproblems: rain extraction and detail recovery. Result: Specifically, a context aggregation attention network is first introduced to effectively extract rain streaks; thereafter, a rain attention map is generated as an indicator to guide the detail recovery process. For the detail recovery sub-network, with the guidance of the rain attention map, a simple encoder-decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach achieves performance comparable to that of other state-of-the-art methods.
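As an illustration of how a rain attention map can gate detail recovery, the pixelwise blend below weights a recovered detail layer by the map, so regions flagged as heavily rain-corrupted receive the most correction. The actual URDRN sub-networks are CNNs; this is only an assumed, minimal reading of the guidance step:

```python
import numpy as np

def attention_guided_merge(derained, details, attn):
    """Blend recovered details back into the derained image, weighted by
    the rain attention map (1 = heavily rain-corrupted, needs the most
    recovery; 0 = untouched region)."""
    return derained + attn * details

derained = np.zeros((2, 2))                    # coarse derained output
details = np.full((2, 2), 0.5)                 # recovered detail layer
attn = np.array([[1.0, 0.0], [0.5, 0.0]])      # rain attention map
out = attention_guided_merge(derained, details, attn)
print(out)
```

The same map can equally be fed to the detail sub-network as an extra input channel rather than used as an output-side mask.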
Dear Editor, This letter proposes to integrate a dendritic learnable network architecture with the Vision Transformer to improve the accuracy of image recognition. In this study, based on the theory of dendritic neurons in neuroscience, we design a network that is more practical for engineering use to classify visual features. On this basis, we propose a dendritic-learning-incorporated Vision Transformer (DVT), which outperforms other state-of-the-art methods on three image recognition benchmarks.
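The letter does not detail its dendritic architecture. A common toy model of a dendritic neuron, with sigmoidal branch responses combined multiplicatively at the soma, is sketched below purely for orientation; it is an assumption, not the DVT design:

```python
import numpy as np

def dendritic_neuron(x, branch_weights):
    """Toy dendritic unit: each dendritic branch applies a sigmoid to its
    weighted inputs, and branch outputs interact multiplicatively at the
    soma (one common dendritic-neuron abstraction)."""
    branch_out = 1.0 / (1.0 + np.exp(-(branch_weights @ x)))
    return float(np.prod(branch_out))

# Zero inputs: every branch outputs 0.5, soma multiplies three branches.
y = dendritic_neuron(np.zeros(4), np.zeros((3, 4)))
print(y)  # 0.5 ** 3 = 0.125
```

The multiplicative soma is what gives dendritic units their nonlinear feature-conjunction behaviour, which is the property such hybrids aim to add to a Transformer head.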
A novel image fusion network framework with an autonomous encoder and decoder is proposed to increase the visual impression of fused images by improving the quality of infrared and visible light image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and a Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is created to extract significant edge features, and a modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source images, thereby enhancing the contrast of the fused image. The features extracted by the encoder and the EEM are combined in the fusion layer, and the decoder then produces the fused image. Three datasets were chosen to test the proposed algorithm. The experimental results demonstrate that the network effectively preserves background and detail information from both the infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
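The "modal maximum difference" strategy is named but not specified in the abstract. One plausible per-pixel reading, choosing whichever modality deviates most from its own mean (i.e. carries the most distinctive local information), is sketched below; this rule is an assumption, not the paper's:

```python
import numpy as np

def max_difference_fusion(ir, vis):
    """Per-pixel fusion rule: at each pixel, keep the modality whose value
    deviates most from that modality's own mean, so the more distinctive
    source dominates (an assumed reading of 'modal maximum difference')."""
    dev_ir = np.abs(ir - ir.mean())
    dev_vis = np.abs(vis - vis.mean())
    return np.where(dev_ir >= dev_vis, ir, vis)

ir = np.array([[0.9, 0.1], [0.5, 0.5]])    # thermal: strong hot/cold contrast
vis = np.array([[0.4, 0.6], [0.2, 0.8]])   # visible: texture where IR is flat
fused = max_difference_fusion(ir, vis)
print(fused)  # [[0.9, 0.1], [0.2, 0.8]]
```

In the paper's pipeline the rule would act on encoder feature maps rather than raw pixels, but the selection principle is the same.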
Astronomical imaging technologies are basic tools for the exploration of the universe, providing fundamental data for research in astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath using soft X-ray imaging. However, the observed 2-D images are affected by many noise factors that destroy the information they contain, which hampers the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. Analysis of SXI-simulated observation images shows that this degradation cannot be described by traditional restoration models, making it difficult to establish a mathematical mapping between SXI-simulated observation images and target images. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structural information on the magnetosphere. The idea is to train a patch estimator on noise-clean patch pairs with the same distribution, selected by the Classification-Expectation Maximization algorithm, and to use the trained estimator to establish the mapping from the SXI-simulated observation image to the target image. The Classification-Expectation Maximization algorithm selects multiple patch clusters with the same distribution and trains a different patch estimator for each cluster, improving the accuracy of the estimation. Experimental results show that our image restoration algorithm is superior to classical image restoration algorithms on the SXI-simulated observation image restoration task in terms of peak signal-to-noise ratio and structural similarity. The restored SXI-simulated observation images are used in the tangent fitting approach and the computed tomography approach to magnetospheric reconstruction, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
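Restoration quality above is ranked by peak signal-to-noise ratio and structural similarity. The standard PSNR computation used in such comparisons is:

```python
import numpy as np

def psnr(reference, estimate, peak=1.0):
    """Peak signal-to-noise ratio in dB between a clean reference image
    and a restored estimate (higher is better)."""
    mse = np.mean((reference - estimate) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((4, 4))
noisy = clean + 0.1          # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(clean, noisy), 2))  # 20.0 dB
```

SSIM is the usual complementary metric; unlike PSNR it compares local luminance, contrast, and structure rather than raw pixel error.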
The Soft X-ray Imager (SXI) is part of the scientific payload of the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission. SMILE is a joint science mission between the European Space Agency (ESA) and the Chinese Academy of Sciences (CAS) and is due for launch in 2025. SXI is a compact X-ray telescope with a wide field-of-view (FOV) capable of encompassing large portions of Earth's magnetosphere from the vantage point of the SMILE orbit. SXI is sensitive to the soft X-rays produced by the Solar Wind Charge eXchange (SWCX) process, which occurs when heavy ions of solar wind origin interact with neutral particles in Earth's exosphere. SWCX provides a mechanism for boundary detection within the magnetosphere, such as the position of Earth's magnetopause, because the solar wind heavy ions have a very low density in regions of closed magnetic field lines. The sensitivity of SXI is such that it can potentially track movements of the magnetopause on timescales of a few minutes, and the SMILE orbit will enable such movements to be tracked in segments lasting many hours. SXI is led by the University of Leicester in the United Kingdom (UK), with collaborating organisations providing hardware, software, and science support within the UK, Europe, China, and the United States.
Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can contribute to the background signal. To fully utilize such images in space science, background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust, flexible, and iteratively deselects outliers, such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model by using the three far ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions, such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
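The background model is a linear inverse problem (with B-splines and spherical harmonics as basis functions) solved while iteratively deselecting outliers such as auroral emissions. A minimal 1-D sketch with a toy linear basis is below; the basis, threshold rule, and all numbers are simplifications, not the paper's:

```python
import numpy as np

def robust_background_fit(A, y, n_iter=5, k=2.0):
    """Least-squares fit y ~ A @ c that iteratively deselects positive
    outliers (e.g. auroral emissions sitting on a smooth background),
    refitting on the remaining samples each pass."""
    keep = np.ones(len(y), dtype=bool)
    for _ in range(n_iter):
        c, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        resid = y - A @ c
        # Drop samples far above the fit; small floor avoids a zero threshold.
        keep = resid <= k * resid[keep].std() + 1e-6
    return c

# Smooth linear background plus one large 'auroral' spike:
x = np.linspace(0.0, 1.0, 50)
A = np.stack([np.ones_like(x), x], axis=1)   # toy basis: constant + slope
y = 1.0 + 2.0 * x
y[25] += 10.0                                # the outlier to be deselected
coef = robust_background_fit(A, y)
print(np.round(coef, 3))  # recovers [1. 2.]
```

The real model replaces the two-column toy basis with B-spline and spherical-harmonic columns, but the deselect-and-refit loop is the same idea.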
The deterioration of unstable rock masses has raised interest in evaluating rock mass quality. However, the traditional evaluation method for the geological strength index (GSI) primarily emphasizes rock structure and the characteristics of discontinuities; it ignores the influence of mineral composition and shows a deficiency in assessing the integrity coefficient. In this context, hyperspectral imaging and digital panoramic borehole camera technologies are applied to analyze the mineral content and integrity of the rock mass. Based on the carbonate mineral content and the fissure area ratio, a strength reduction factor and an integrity coefficient are calculated to improve the GSI evaluation method. According to the results of mineral classification and fissure identification, the strength reduction factor and integrity coefficient increase with the depth of the rock mass. The rock mass GSI calculated by the improved method is mainly concentrated between 40 and 60, which is close to the results of the traditional method. The GSI error rates obtained by the two methods are mostly less than 10%, indicating the rationality of the coupled hyperspectral-digital borehole image evaluation method. Moreover, the sensitivity of the fissure area ratio (Sr) to GSI is greater than that of the strength reduction factor (a), which means the proposed GSI is suitable for rocks with significant fissure development. The improved method reduces the influence of subjective factors and provides a reliable index for the deterioration evaluation of rock masses.
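The abstract names the two new indices but not the formulas combining them with GSI, so the multiplicative scaling below is purely a hypothetical stand-in to show where the strength reduction factor and the integrity coefficient would enter the calculation:

```python
def adjusted_gsi(gsi_base, reduction_factor, integrity_coeff):
    """Hypothetical combination rule (not the paper's): scale a baseline
    GSI by the strength reduction factor (from carbonate mineral content)
    and the integrity coefficient (from the fissure area ratio)."""
    return gsi_base * reduction_factor * integrity_coeff

# Illustrative values only:
g = adjusted_gsi(65.0, 0.9, 0.85)
print(g)  # ~49.7, inside the 40-60 band the paper reports
```

Whatever the actual combination rule, the stated sensitivity result means a unit change in the fissure-derived coefficient moves GSI more than a unit change in the mineral-derived factor.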
Limited by the dynamic range of the detector, saturation artifacts usually occur in optical coherence tomography (OCT) imaging of highly scattering media. Existing methods struggle to remove saturation artifacts and restore texture completely in OCT images. In this paper, we propose a deep learning-based inpainting method for saturation artifacts. The generation mechanism of saturation artifacts was analyzed, and experimental and simulated datasets were built based on this mechanism. Enhanced super-resolution generative adversarial networks were trained on clear-saturated phantom image pairs. The well-reconstructed results on experimental zebrafish and thyroid OCT images demonstrate the method's feasibility, strong generalization, and robustness.
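Saturation artifacts appear where the interference signal exceeds the detector's dynamic range, typically corrupting whole A-lines (columns of a B-scan). A minimal sketch of locating such A-lines to form an inpainting mask; the threshold level and data layout are assumptions:

```python
import numpy as np

def saturation_mask(bscan, level=0.98):
    """Mark columns (A-lines) of a B-scan containing values at or above
    the detector saturation level; these are the regions to inpaint."""
    saturated = bscan >= level
    return saturated.any(axis=0)   # one flag per A-line

bscan = np.random.default_rng(0).uniform(0.0, 0.9, size=(8, 6))
bscan[:, 2] = 1.0                  # a saturated A-line
print(saturation_mask(bscan))
```

A mask like this is what pairs a "saturated" input with its "clear" target when building the training datasets the abstract describes.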
Throughout the SMILE mission the satellite will be bombarded by radiation, which gradually damages the focal plane devices and degrades their performance. In order to understand the changes of the CCD370s within the Soft X-ray Imager, an initial characterisation of the devices has been carried out to give a baseline performance level. Three CCDs have been characterised: the two flight devices and the flight spare. This was carried out at the Open University in a bespoke cleanroom measurement facility. The results show that there is a cluster of bright pixels in the flight spare which increases in size with temperature; however, at the nominal operating temperature (-120℃) it is within the procurement specifications. Overall, the devices meet the specifications when operating at -120℃ in 6 × 6 binned frame-transfer science mode. The serial charge transfer inefficiency degrades with temperature in full-frame mode; however, any charge losses are recovered when binning/frame transfer is implemented.
Introduction: The latest ultrafast developments in artificial intelligence (AI) have recently multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel, commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. In order to score the reliability of the responses received from the image-recognition bot, two corresponding scales were developed, each ranging from 0 to 5. The set of images was divided into two groups, unlabeled (Group A) and labeled (Group B), and also according to the type of surgical procedure and image resolution. Results: The AI was able to correctly recognize the context of surgery-related images in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled pictures it scored 2.905/5 (58.1%). Phases of the procedure were commented on in detail after all successful interpretations. With scores of 4 to 5 out of 5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications, and outcome rates of the operation in question. Conclusion: Interaction between surgeon and chatbot appears to be an interesting frontier for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts with commercially available software. Further development of medically-oriented AI software and growing clinical awareness are expected to bring fruitful information on the topic in the years to come.
Funding: National Key R&D Program of China, Grant/Award Number: 2022YFC3300704; National Natural Science Foundation of China, Grant/Award Numbers: 62171038, 62088101, 62006023.
Funding: financially supported by the National Natural Science Foundation of China (22078046), the Fundamental Research Funds for the Central Universities (DUT22LAB601), the Liaoning Binhai Laboratory (LBLB-2023-03), and the China Postdoctoral Science Foundation (2023M740487).
Abstract: The key factor in photothermal therapy lies in the selection of photothermal agents. Traditional photothermal agents generally suffer from poor photothermal stability and low photothermal conversion efficiency. Herein, we designed and synthesized an isoindigo (IID) dye, using isoindigo as the molecular center and introducing common triphenylamine and methoxy groups as rotors. To improve photothermal stability and tumor-targeting ability, we encapsulated the IID dye into nanoparticles. As a result, the nanoparticles exhibited high photothermal stability and a photothermal conversion efficiency of 67% upon 635 nm laser irradiation. The nanoparticles thus demonstrated a significant inhibitory effect on live tumors in photothermal therapy guided by photoacoustic imaging, providing a viable strategy to overcome the treatment challenges.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 82073808 and 82273885).
Abstract: Hypoxia is a common characteristic of almost all solid tumors and prevents therapeutic drugs from reaching them. Therefore, the development of new targeted agents for the accurate diagnosis of hypoxic tumors has attracted wide interest. As carbonic anhydrase IX (CA IX) is abundantly distributed on hypoxic tumor cells, it is considered a potential tumor biomarker. 4-(2-Aminoethyl)benzenesulfonamide (ABS), a CA IX inhibitor, has inherent inhibitory activity and a good targeting effect. In this study, Ag2S quantum dots (QDs) were used as the carrier to prepare a novel diagnostic and therapeutic bioprobe, Ag2S@polyethylene glycol (PEG)-ABS, through ligand exchange and an amide condensation reaction. Ag2S@PEG-ABS can selectively target tumors via the surface-modified ABS and achieve accurate tumor imaging through the near-infrared-II (NIR-II) fluorescence of the Ag2S QDs. PEG modification of the Ag2S QDs greatly improves their water solubility and stability, yielding high photothermal stability and a high photothermal conversion efficiency (PCE) of 45.17%. Under laser irradiation, Ag2S@PEG-ABS combines powerful photothermal and inherent antitumor effects on colon cancer cells (CT-26) in vitro. It has also been shown to treat hypoxic tumors effectively in vivo with good biocompatibility. It is therefore a new, efficient, integrated platform for the diagnosis and treatment of hypoxic tumors.
Funding: This work is supported in part by the National Natural Science Foundation of China (Grant Number 61971078), which provided domain expertise and computational power that greatly assisted the activity. This work was also financially supported by the Chongqing Municipal Education Commission Grants for Major Science and Technology Projects (Grant Number gzlcx20243175).
Abstract: Semantic segmentation of driving-scene images is crucial for autonomous driving. While deep learning technology has significantly improved daytime image semantic segmentation, nighttime images pose challenges due to factors such as poor lighting and overexposure, which make it difficult to recognize small objects. To address this, we propose an Image Adaptive Enhancement (IAEN) module comprising a parameter predictor (Edip), multiple image-processing filters (Mdif), and a Detail Processing Module (DPM). Edip combines the image-processing filters to predict parameters such as exposure and hue, optimizing image quality. We adopt a novel image encoder that enables Edip to handle features at different scales, improving parameter-prediction accuracy. DPM strengthens overlooked image details, extending the IAEN module's functionality. After the segmentation network, we integrate a Depth Guided Filter (DGF) to refine the segmentation outputs. The entire network is trained end-to-end, with segmentation results guiding the optimization of parameter prediction, promoting self-learning and network improvement. This lightweight and efficient network architecture is particularly suitable for nighttime image segmentation. Extensive experiments validate significant performance improvements of our approach on the ACDC-night and NightCity datasets.
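The filter-bank idea in this abstract can be sketched with two of the simplest differentiable filters used in this line of work. The paper does not specify the forms of its filters (Mdif), so the `2**ev` exposure scaling and the gamma curve below are illustrative assumptions, with `ev` and `gamma` standing in for values the parameter predictor (Edip) would output per image.

```python
import numpy as np

def apply_exposure(img, ev):
    """Exposure filter: scale linear intensities by 2**ev (illustrative form)."""
    return np.clip(img * (2.0 ** ev), 0.0, 1.0)

def apply_gamma(img, g):
    """Gamma filter: values of g < 1 brighten dark regions (illustrative form)."""
    return np.clip(img, 1e-6, 1.0) ** g

def enhance(img, params):
    """Chain the filters using parameters that a predictor network would supply."""
    out = apply_exposure(img, params["ev"])
    return apply_gamma(out, params["gamma"])
```

In the end-to-end setting described above, the segmentation loss would be backpropagated through such filters into the predictor, which is why differentiable closed forms like these are chosen.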
Funding: Guangdong Basic and Applied Basic Research Foundation (Grant/Award Number: 2019A1515110582), Shenzhen Key Laboratory of Visual Object Detection and Recognition (Grant/Award Number: ZDSYS20190902093015527), and the National Natural Science Foundation of China (Grant/Award Number: 61876051).
Abstract: Deep learning has been widely used for mammographic image classification owing to its superiority in automatic feature extraction. However, general deep learning models cannot achieve very satisfactory classification results on mammographic images because they are not specifically designed for such images and do not take their specific traits into account. To exploit the essential discriminant information of mammographic images, we propose a novel classification method based on a convolutional neural network. Specifically, the proposed method uses two branches to extract discriminative features from the mediolateral oblique and craniocaudal (CC) mammographic views. The features extracted from the two views contain complementary information that enables breast cancer to be distinguished more easily. Moreover, an attention block is introduced to capture channel-wise information by adjusting the weight of each feature map, which helps emphasise the important features of mammographic images. Furthermore, we add a penalty term based on a fuzzy clustering algorithm to the cross-entropy loss, which improves the generalisation ability of the classification model by maximising the inter-class distance and minimising the intra-class distance of the samples. Experimental results on the Digital Database for Screening Mammography, INbreast, and MIAS mammography databases illustrate that the proposed method achieves the best classification performance and is more robust than the compared state-of-the-art methods.
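The penalised loss described above (cross-entropy plus a term that shrinks intra-class scatter and grows inter-class separation) can be sketched as follows. The paper's exact fuzzy-cluster penalty is not reproduced here: the hard class centres, the `intra - inter` combination, and the weight `lam` are simplifying assumptions for a two-class (benign/malignant) case.

```python
import numpy as np

def cluster_penalised_loss(features, labels, probs, lam=0.1):
    """Cross-entropy plus an illustrative clustering penalty:
    minimise intra-class scatter, maximise inter-class centre distance."""
    n = len(labels)
    ce = -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))
    centres = np.array([features[labels == c].mean(axis=0)
                        for c in np.unique(labels)])
    intra = np.mean([np.sum((features[i] - centres[labels[i]]) ** 2)
                     for i in range(n)])
    inter = np.sum((centres[0] - centres[1]) ** 2)  # two classes assumed
    return ce + lam * (intra - inter)
```

With this shape of penalty, feature sets whose classes form tight, well-separated clusters score a lower loss than overlapping ones, which is the behaviour the abstract attributes to its regulariser.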
Abstract: BACKGROUND: This case report describes the simultaneous occurrence of a gastrointestinal stromal tumour (GIST) and arteriovenous malformations (AVMs) within the jejunal mesentery. A 74-year-old male presented to the department of surgery at our institution with a one-month history of abdominal pain. Contrast-enhanced computed tomography revealed an AVM. During exploratory laparotomy, hyperspectral imaging (HSI) and indocyanine green (ICG) fluorescence were used to evaluate the extent of the tumour and determine the resection margins. Intraoperative imaging confirmed the AVM, while histopathological evaluation showed an epithelioid, partially spindle-cell GIST. CASE SUMMARY: This is the first case report on the use of HSI and ICG to image a GIST intermingled with an AVM. The resection margins were planned using intraoperative analysis of the additional optical data. Image-guided surgery enhances the clinician's knowledge of tissue composition and facilitates tissue differentiation. CONCLUSION: Since image-guided surgery is safe, this procedure should grow in popularity among the next generation of surgeons, as it is associated with better postoperative outcomes.
Funding: Supported by the Project of Guangzhou Science and Technology (202102020591, 202007010004, 202007040005).
Abstract: Background: Owing to the rapid development of deep networks, single-image deraining has progressed significantly. Various architectures have been designed to remove rain recursively or directly, and most rain streaks can be removed by existing deraining methods. However, many of them cause detail loss, resulting in visual artifacts. Method: To resolve this issue, we propose a novel unrolling rain-guided detail recovery network (URDRN) for single-image deraining, based on the observation that the most degraded areas of the background image tend to be the most rain-corrupted regions. Furthermore, to address the problem that most existing deep-learning-based methods trivialize the observation model and simply learn an end-to-end mapping, the proposed URDRN unrolls single-image deraining into two subproblems: rain extraction and detail recovery. Result: Specifically, a context-aggregation attention network is first introduced to extract rain streaks effectively; thereafter, a rain attention map is generated as an indicator to guide the detail-recovery process. For the detail-recovery sub-network, with the guidance of the rain attention map, a simple encoder–decoder model is sufficient to recover the lost details. Experiments on several well-known benchmark datasets show that the proposed approach achieves performance comparable to that of other state-of-the-art methods.
Funding: partially supported by the Japan Society for the Promotion of Science (JSPS) KAKENHI (JP22H03643), the Japan Science and Technology Agency (JST) Support for Pioneering Research Initiated by the Next Generation (SPRING) (JPMJSP2145), and JST through the Establishment of University Fellowships towards the Creation of Science Technology Innovation (JPMJFS2115).
Abstract: Dear Editor, This letter proposes to integrate a dendritic learnable network architecture with the Vision Transformer to improve the accuracy of image recognition. In this study, based on the theory of dendritic neurons in neuroscience, we design a network that is more practical for engineering use in classifying visual features. On this basis, we propose a dendritic-learning-incorporated Vision Transformer (DVT), which outperforms other state-of-the-art methods on three image recognition benchmarks.
Abstract: A novel image fusion network framework with an autonomous encoder and decoder is suggested to improve the visual impression of fused images by improving the quality of infrared and visible-light image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and a Transformer to achieve deep-level co-extraction of local and global features from the original image. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM extract features, which are then combined in the fusion layer to create a fused image via the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The experimental results demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations.
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 42322408, 42188101, 41974211, and 42074202), the Key Research Program of Frontier Sciences, Chinese Academy of Sciences (Grant No. QYZDJ-SSW-JSC028), the Strategic Priority Program on Space Science, Chinese Academy of Sciences (Grant Nos. XDA15052500, XDA15350201, and XDA15014800), and the Youth Innovation Promotion Association of the Chinese Academy of Sciences (Grant No. Y202045).
Abstract: Astronomical imaging technologies are basic tools for the exploration of the universe, providing basic data for research in astronomy and space physics. The Soft X-ray Imager (SXI) carried by the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) aims to capture two-dimensional (2-D) images of the Earth's magnetosheath by means of soft X-ray imaging. However, the observed 2-D images are affected by many noise factors that destroy the information they contain, which hinders the subsequent reconstruction of the three-dimensional (3-D) structure of the magnetopause. Analysis of SXI-simulated observation images shows that such damage cannot be evaluated with traditional restoration models, making it difficult to establish the mapping relationship between SXI-simulated observation images and target images using mathematical models. We propose an image restoration algorithm for SXI-simulated observation images that can recover large-scale structural information on the magnetosphere. The idea is to train a patch estimator by selecting noisy–clean patch pairs with the same distribution through the Classification–Expectation-Maximization algorithm, thereby restoring the SXI-simulated observation image, whose mapping relationship with the target image is established by the patch estimator. The Classification–Expectation-Maximization algorithm is used to select multiple patch clusters with the same distribution and then train different patch estimators, so as to improve estimator accuracy. Experimental results showed that our algorithm is superior to classical image restoration algorithms on the SXI-simulated observation image restoration task, according to the peak signal-to-noise ratio and structural similarity. The restoration results are used in the tangent fitting approach and the computed tomography approach to magnetospheric reconstruction, significantly improving the reconstruction results. Hence, the proposed technology may be feasible for processing SXI-simulated observation images.
Funding: funding and support from the United Kingdom Space Agency (UKSA) and the European Space Agency (ESA); funded and supported through the ESA PRODEX scheme (PRODEX PEA 4000123238); the Research Council of Norway grant 223252; Spanish MCIN/AEI/10.13039/501100011033 grant PID2019-107061GB-C61; funding and support from the Chinese Academy of Sciences (CAS); and funding and support from the National Aeronautics and Space Administration (NASA).
Abstract: The Soft X-ray Imager (SXI) is part of the scientific payload of the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission. SMILE is a joint science mission between the European Space Agency (ESA) and the Chinese Academy of Sciences (CAS) and is due for launch in 2025. SXI is a compact X-ray telescope with a wide field of view (FOV) capable of encompassing large portions of Earth's magnetosphere from the vantage point of the SMILE orbit. SXI is sensitive to the soft X-rays produced by the Solar Wind Charge eXchange (SWCX) process, which occurs when heavy ions of solar-wind origin interact with neutral particles in Earth's exosphere. SWCX provides a mechanism for boundary detection within the magnetosphere, such as the position of Earth's magnetopause, because the solar-wind heavy ions have a very low density in regions of closed magnetic field lines. The sensitivity of SXI is such that it can potentially track movements of the magnetopause on timescales of a few minutes, and the orbit of SMILE will enable such movements to be tracked in segments lasting many hours. SXI is led by the University of Leicester in the United Kingdom (UK), with collaborating organisations providing hardware, software, and science support within the UK, Europe, China, and the United States.
Funding: supported by the Research Council of Norway under contracts 223252/F50 and 300844/F50, and by the Trond Mohn Foundation.
Abstract: Global images of auroras obtained by cameras on spacecraft are a key tool for studying the near-Earth environment. However, the cameras are sensitive not only to auroral emissions produced by precipitating particles, but also to dayglow emissions produced by photoelectrons induced by sunlight. Nightglow emissions and scattered sunlight can also contribute to the background signal. To fully utilize such images in space science, this background contamination must be removed to isolate the auroral signal. Here we outline a data-driven approach to modeling the background intensity in multiple images by formulating linear inverse problems based on B-splines and spherical harmonics. The approach is robust and flexible, and it iteratively deselects outliers such as auroral emissions. The final model is smooth across the terminator and accounts for slow temporal variations and large-scale asymmetries in the dayglow. We demonstrate the model using the three far-ultraviolet cameras on the Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) mission. The method can be applied to historical missions and is relevant for upcoming missions such as the Solar wind Magnetosphere Ionosphere Link Explorer (SMILE) mission.
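The iterative outlier-deselection idea described above can be sketched in a toy 1-D setting, with a polynomial basis standing in for the B-spline/spherical-harmonic design matrix; the one-sided threshold rule (drop points far *above* the model, since aurora adds positive signal) and the factor `kappa` are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def robust_background_fit(x, y, degree=3, n_iter=10, kappa=2.0):
    """Iteratively fit a smooth background model, deselecting positive
    outliers (e.g. auroral emission on top of dayglow) between iterations.
    A toy 1-D polynomial basis stands in for B-splines/spherical harmonics."""
    A = np.vander(x, degree + 1)            # design matrix (basis evaluation)
    keep = np.ones_like(y, dtype=bool)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[keep], y[keep], rcond=None)
        resid = y - A @ coef
        sigma = resid[keep].std()
        keep = resid < kappa * sigma        # drop points far above the model
    return A @ coef
```

Because the fit is re-solved on the surviving points at each pass, a few bright auroral "spikes" stop biasing the background estimate after the first iteration or two, which is the behaviour the abstract relies on.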
Funding: supported by the National Key R&D Program of China (Grant Nos. 2021YFB3901403 and 2023YFC3007203).
Abstract: The deterioration of unstable rock masses has raised interest in evaluating rock mass quality. However, the traditional evaluation method for the geological strength index (GSI) primarily emphasizes the rock structure and the characteristics of discontinuities; it ignores the influence of mineral composition and is deficient in assessing the integrity coefficient. In this context, hyperspectral imaging and digital panoramic borehole camera technologies are applied to analyze the mineral content and integrity of the rock mass. Based on the carbonate mineral content and the fissure area ratio, a strength reduction factor and an integrity coefficient are calculated to improve the GSI evaluation method. According to the results of mineral classification and fissure identification, the strength reduction factor and integrity coefficient increase with the depth of the rock mass. The GSI calculated by the improved method is mainly concentrated between 40 and 60, which is close to the results of the traditional method. The GSI error rates between the two methods are mostly less than 10%, indicating the rationality of the coupled hyperspectral–digital borehole image evaluation method. Moreover, the sensitivity of the fissure area ratio (Sr) to GSI is greater than that of the strength reduction factor (a), which means the proposed GSI is suitable for rocks with significant fissure development. The improved method reduces the influence of subjective factors and provides a reliable index for the deterioration evaluation of rock masses.
Funding: supported by the National Natural Science Foundation of China (62375144 and 61875092), the Tianjin Foundation of Natural Science (21JCYBJC00260), and the Beijing-Tianjin-Hebei Basic Research Cooperation Special Program (19JCZDJC65300).
Abstract: Limited by the dynamic range of the detector, saturation artifacts usually occur in optical coherence tomography (OCT) imaging of highly scattering media. Existing methods struggle to remove saturation artifacts and restore texture completely in OCT images. We propose a deep-learning-based method for inpainting saturation artifacts. The generation mechanism of saturation artifacts was analyzed, and experimental and simulated datasets were built based on this mechanism. Enhanced super-resolution generative adversarial networks were trained on the clear–saturated phantom image pairs. The successful reconstruction of experimental zebrafish and thyroid OCT images demonstrates the method's feasibility, strong generalization, and robustness.
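The paired-dataset construction mentioned above can be sketched minimally, assuming saturation is modelled as clipping at the detector's full-scale value; the paper's actual simulation of the artifact mechanism may differ, and `full_scale` is an illustrative parameter.

```python
import numpy as np

def make_saturated_pair(clear, full_scale=0.8):
    """Simulate a saturated OCT image from a clear one by clipping at the
    detector full-scale value; returns (saturated input, clear target, mask)."""
    saturated = np.minimum(clear, full_scale)
    mask = clear > full_scale           # locations where texture must be inpainted
    return saturated, clear, mask
```

Pairs generated this way give a network a supervised target for the clipped regions, which is the role the clear–saturated phantom pairs play in the training described above.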
Abstract: Throughout the SMILE mission, the satellite will be bombarded by radiation, which gradually damages the focal-plane devices and degrades their performance. In order to understand the changes in the CCD370s within the Soft X-ray Imager, an initial characterisation of the devices has been carried out to give a baseline performance level. Three CCDs have been characterised: the two flight devices and the flight spare. This was carried out at the Open University in a bespoke cleanroom measurement facility. The results show that there is a cluster of bright pixels in the flight spare which increases in size with temperature; however, at the nominal operating temperature (-120℃) it is within the procurement specifications. Overall, the devices meet the specifications when operating at -120℃ in 6 × 6 binned frame-transfer science mode. The serial charge transfer inefficiency degrades with temperature in full-frame mode; however, any charge losses are recovered when binning/frame transfer is implemented.
Abstract: Introduction: Ultrafast recent developments in artificial intelligence (AI) have multiplied concerns regarding the future of robotic autonomy in surgery. However, the literature on the topic is still scarce. Aim: To test a novel commercially available AI tool for image analysis on a series of laparoscopic scenes. Methods: The research tools included OpenAI ChatGPT 4.0 with its corresponding image-recognition plugin, which was fed a list of 100 selected laparoscopic snapshots from common surgical procedures. To score the reliability of the responses received from the image-recognition bot, two corresponding scales were developed, ranging from 0 to 5. The set of images was divided into two groups, unlabeled (Group A) and labeled (Group B), and further grouped according to the type of surgical procedure and image resolution. Results: The AI was able to recognize the context of surgery-related images correctly in 97% of its reports. For the labeled surgical pictures, the image-processing bot scored 3.95/5 (79%), whilst for the unlabeled ones it scored 2.905/5 (58.1%). The phases of each procedure were commented on in detail after all successful interpretations. With scores of 4-5/5, the chatbot was able to discuss in detail the indications, contraindications, stages, instrumentation, complications, and outcome rates of the operation in question. Conclusion: Interaction between surgeon and chatbot appears to be an interesting front end for further research by clinicians, in parallel with the evolution of its complex underlying infrastructure. In this early phase of using artificial intelligence for image recognition in surgery, no safe conclusions can be drawn from small cohorts with commercially available software. Further development of medically oriented AI software and clinical-world awareness are expected to bring fruitful information on the topic in the years to come.