Feature matching plays a key role in computer vision. However, due to the limitations of the descriptors, the putative matches are inevitably contaminated by massive outliers. This paper attempts to tackle the outlier filtering problem from two aspects. First, a robust and efficient graph interaction model is proposed, with the assumption that matches are correlated with each other rather than independently distributed. To this end, we construct a graph based on the local relationships of matches and formulate the outlier filtering task as a binary labeling energy minimization problem, where the pairwise term encodes the interaction between matches. We further show that this formulation can be solved globally by the graph cut algorithm. Our new formulation consistently improves the performance of previous locality-based methods without noticeable deterioration in processing time, adding only a few milliseconds. Second, to construct a better graph structure, a robust and geometrically meaningful topology-aware relationship is developed to capture the topological relationship between matches. Together, the two components lead to topology interaction matching (TIM), an effective and efficient method for outlier filtering. Extensive experiments on several large and diverse datasets for multiple vision tasks, including general feature matching as well as relative pose estimation, homography and fundamental matrix estimation, loop-closure detection, and multi-modal image matching, demonstrate that our TIM is more competitive than current state-of-the-art methods in terms of generality, efficiency, and effectiveness. The source code is publicly available at http://github.com/YifanLu2000/TIM.
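The binary labeling formulation can be made concrete with a toy sketch. The unary costs, neighbor edges, and the smoothness weight `lam` below are illustrative assumptions, not the paper's actual terms; for a tiny problem the energy can be minimized by exhaustive search, whereas TIM solves this kind of energy globally and efficiently with a graph cut:

```python
from itertools import product

def label_matches(unary_in, unary_out, edges, lam=1.0):
    """Minimize the binary labeling energy
        E(x) = sum_i [x_i ? unary_in_i : unary_out_i] + lam * sum_(i,j) [x_i != x_j]
    by brute force (kept dependency-free for the sketch; the paper solves the
    same family of energies globally with a graph cut).

    unary_in[i]  : cost of labeling match i an inlier (x_i = 1)
    unary_out[i] : cost of labeling match i an outlier (x_i = 0)
    edges        : neighboring match pairs whose labels should agree
    Returns the indices labeled as inliers.
    """
    n = len(unary_in)
    best, best_x = float("inf"), None
    for x in product([0, 1], repeat=n):
        e = sum(unary_in[i] if x[i] else unary_out[i] for i in range(n))
        e += lam * sum(x[i] != x[j] for i, j in edges)  # Potts pairwise term
        if e < best:
            best, best_x = e, x
    return [i for i in range(n) if best_x[i]]
```

Because the Potts pairwise term is submodular, the same energy admits an exact s-t min-cut solution, which is why the formulation can be solved globally in practice.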
Spectral compressive imaging has emerged as a powerful technique to collect 3D spectral information as 2D measurements. The algorithm for restoring the original 3D hyperspectral images (HSIs) from compressive measurements is pivotal in the imaging process. Early approaches painstakingly designed networks to directly map compressive measurements to HSIs, failing to exploit the imaging priors and thus lacking interpretability. While some recent works have introduced the deep unfolding framework for explainable reconstruction, the performance of these methods is still limited by the weak information transmission between iterative stages. In this paper, we propose a Memory-Augmented deep Unfolding Network, termed MAUN, for explainable and accurate HSI reconstruction. Specifically, MAUN implements a novel CNN scheme to facilitate a better extrapolation step of the fast iterative shrinkage-thresholding algorithm, introducing an extra momentum incorporation step at each iteration to alleviate information loss. Moreover, to exploit the high correlation of intermediate images from neighboring iterations, we customize a cross-stage transformer (CSFormer) as the deep denoiser to simultaneously capture self-similarity from both in-stage and cross-stage features, which is the first attempt to model the long-distance dependencies between iteration stages. Extensive experiments demonstrate that the proposed MAUN is superior to other state-of-the-art methods both visually and metrically. Our code is publicly available at https://github.com/HuQ1an/MAUN.
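As context for the extrapolation step that MAUN learns, a plain NumPy FISTA for a LASSO problem is sketched below; the second-to-last line of the loop is the scalar momentum/extrapolation step that MAUN replaces with a CNN (the LASSO objective and step sizes here are standard textbook choices, not MAUN's actual reconstruction model):

```python
import numpy as np

def fista(A, y, lam=0.1, iters=200):
    """Plain FISTA for min_x ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = A.T @ (A @ z - y)              # gradient at the extrapolated point
        v = z - g / L
        x_new = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0)  # soft-threshold
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)  # extrapolation (momentum) step
        x, t = x_new, t_new
    return x
```

In deep unfolding, each loop iteration becomes a network stage; MAUN's momentum incorporation mitigates the information loss between such stages.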
Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but the collection of suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, resulting in more convenience and better generalization. Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment methods, leaving remnants of other degradations, uneven brightness, and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE. It can handle multiple degradations, including illumination attenuation, noise pollution, and color shift, all in a self-supervised manner. Illumination attenuation is estimated based on physical principles and local neighborhood information. Noise removal and color-shift correction are realized solely with noisy images and images exhibiting color shifts. The comprehensive and fully self-supervised approach achieves better adaptability and generalization: it is applicable to various low-light conditions and can reproduce the original color of scenes in natural light. Extensive experiments conducted on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.
Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from a low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, which is beneficial for subsequent applications. The development of deep learning has promoted significant progress in hyperspectral image super-resolution, and the powerful expression capabilities of deep neural networks make the predicted results more reliable. Recently, the latest deep learning technologies have driven an explosion of hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective is absent. To this end, in this survey, we first introduce the concept of hyperspectral image super-resolution and classify the methods according to whether they use auxiliary information. Then, we review the learning-based methods in three categories: single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, we summarize the commonly used hyperspectral datasets and evaluate representative methods in the three categories both qualitatively and quantitatively. Moreover, we briefly introduce several typical applications of hyperspectral image super-resolution, including ground object classification, urban change detection, and ecosystem monitoring. Finally, we provide conclusions and discuss the challenges of existing learning-based methods, looking forward to potential future research directions.
Dear Editor, Since existing hyperspectral image denoising methods suffer from excessive or incomplete denoising, leading to information distortion and loss, this letter proposes a deep denoising network in the frequency domain, termed D2Net. Our motivation stems from the observation that images from different hyperspectral image (HSI) bands share the same structural and contextual features, while the reflectance variations in the spectra mainly fall on the details and textures.
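The frequency-domain view behind this observation can be illustrated with a simple FFT split of a band into a shared low-frequency structure and a high-frequency detail/texture residual; the circular mask and the `radius` fraction are hypothetical choices for the sketch, not D2Net's actual decomposition:

```python
import numpy as np

def frequency_split(img, radius=0.25):
    """Split a 2D image into low- and high-frequency parts with an FFT mask.
    radius is the fraction of the spectrum kept as 'low' (illustrative)."""
    f = np.fft.fftshift(np.fft.fft2(img))               # centered spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= (radius * min(h, w)) ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real  # low-pass reconstruction
    return low, img - low                                # residual = details/textures
```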
A critical component of visual simultaneous localization and mapping is loop closure detection (LCD), an operation judging whether a robot has returned to a previously visited area. Concretely, given a query image (i.e., the latest view observed by the robot), it proceeds by first exploring images with similar semantic information, followed by solving the relative relationship between candidate pairs in 3D space. In this work, a novel appearance-based LCD system is proposed. Specifically, candidate frame selection is conducted via the combination of Superfeatures and the aggregated selective match kernel (ASMK). We incorporate an incremental strategy into the vanilla ASMK to make it applicable to the LCD task. It is demonstrated that this setting is memory-efficient and can achieve remarkable performance. To uncover consistent geometry between image pairs during loop closure verification, we propose a simple yet surprisingly effective feature matching algorithm, termed locality preserving matching with global consensus (LPM-GC). The major objective of LPM-GC is to retain the local neighborhood information of true feature correspondences between candidate pairs, where a global constraint is further designed to effectively remove false correspondences in challenging scenes, e.g., those containing numerous repetitive structures. Meanwhile, we derive a closed-form solution that enables our approach to provide reliable correspondences within only a few milliseconds. The performance of the proposed approach has been experimentally evaluated on ten publicly available and challenging datasets. Results show that our method achieves better performance than the state-of-the-art in both feature matching and LCD tasks. We have released our code of LPM-GC at https://github.com/jiayi-ma/LPM-GC.
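The locality-preserving idea can be sketched as a neighborhood-overlap vote: a true correspondence should have roughly the same nearest neighbors on both sides of the match. This is an illustrative criterion only, not LPM-GC's actual cost function, global constraint, or closed-form solution; `k` and `tau` are hypothetical parameters:

```python
import numpy as np

def lpm_filter(pts1, pts2, k=3, tau=0.5):
    """Keep putative matches whose k-nearest-neighbour sets in image 1 and
    image 2 overlap by at least a fraction tau (locality preservation vote)."""
    def knn(pts):
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)  # pairwise distances
        np.fill_diagonal(d, np.inf)                               # exclude self
        return np.argsort(d, axis=1)[:, :k]
    n1 = knn(np.asarray(pts1, float))
    n2 = knn(np.asarray(pts2, float))
    keep = [len(set(a) & set(b)) / k >= tau for a, b in zip(n1, n2)]
    return [i for i, kept in enumerate(keep) if kept]
```

Under a rigid or smooth transformation the two neighbor sets coincide, so inliers pass the vote while a match whose second point lands far from its true neighborhood is rejected.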
Dear Editor, This letter is concerned with a new hyperspectral fusion paradigm that simultaneously fuses hyperspectral, multispectral, and panchromatic images. Seeking an efficient prior for the target hyperspectral image (HSI) is vital for constructing an accurate fusion model in this problem. To this end, this work suggests a novel sparse tensor prior using patch-based sparse tensor dictionary learning.
Pan-sharpening aims to obtain high-resolution multispectral (HRMS) images from paired low-resolution multispectral (LRMS) and panchromatic (PAN) images; the key is how to maximally integrate the spatial and spectral information from the PAN and LRMS images. Following the principle of gradual advance, this paper designs a novel network that contains two main logical functions, i.e., detail enhancement and progressive fusion, to solve the problem. More specifically, the detail enhancement module attempts to produce enhanced MS results with the same spatial sizes as the corresponding PAN images, which are of higher quality than directly up-sampled LRMS images. Having a better MS base (enhanced MS) and its PAN, we progressively extract information from the PAN and enhanced MS images, expecting to capture the pivotal and complementary information of the two modalities for the purpose of constructing the desired HRMS. Extensive experiments together with ablation studies on widely used datasets are provided to verify the efficacy of our design and demonstrate its superiority over other state-of-the-art methods both quantitatively and qualitatively. Our code has been released at https://github.com/JiaYN1/PAPS.
This study proposes a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long-range dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration while maintaining appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary sizes. On the other hand, multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as to present optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion over state-of-the-art unified image fusion algorithms and task-specific alternatives. The implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
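The shifted-windows mechanism can be sketched for a single-channel feature map: attention is computed within non-overlapping windows, and on alternating blocks the map is cyclically shifted first so that information crosses the previous window borders. This is a minimal partitioning sketch only, ignoring batching, channels, and the attention masks a full Swin block needs:

```python
import numpy as np

def window_partition(x, win, shift=0):
    """Partition an (H, W) map into non-overlapping win x win windows,
    optionally after a cyclic shift (the 'shifted windows' trick)."""
    if shift:
        x = np.roll(x, (-shift, -shift), axis=(0, 1))  # cyclic shift before windowing
    h, w = x.shape
    return (x.reshape(h // win, win, w // win, win)
             .transpose(0, 2, 1, 3)                    # gather windows contiguously
             .reshape(-1, win, win))
```

After attention within each window, the inverse reshape and a reverse roll restore the original layout, so alternating shifted and unshifted blocks connect all windows.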
Image fusion aims to integrate complementary information in source images to synthesize a fused image that comprehensively characterizes the imaging scene. However, existing image fusion algorithms are only applicable to strictly aligned source images and cause severe artifacts in the fusion results when the input images have slight shifts or deformations. In addition, the fusion results typically only have good visual effect but neglect the semantic requirements of high-level vision tasks. This study incorporates image registration, image fusion, and the semantic requirements of high-level vision tasks into a single framework and proposes a novel image registration and fusion method, named SuperFusion. Specifically, we design a registration network to estimate bidirectional deformation fields to rectify geometric distortions of input images under the supervision of both photometric and end-point constraints. The registration and fusion are combined in a symmetric scheme, in which mutual promotion is achieved by optimizing the naive fusion loss and further enhanced by a mono-modal consistency constraint on the symmetric fusion outputs. In addition, the image fusion network is equipped with a global spatial attention mechanism to achieve adaptive feature integration. Moreover, a semantic constraint based on a pre-trained segmentation model and the Lovasz-Softmax loss is deployed to guide the fusion network to focus more on the semantic requirements of high-level vision tasks. Extensive experiments on image registration, image fusion, and semantic segmentation tasks demonstrate the superiority of our SuperFusion over state-of-the-art alternatives. The source code and pre-trained model are publicly available at https://github.com/Linfeng-Tang/SuperFusion.
Dear Editor, This letter develops two new self-training strategies for domain adaptive semantic segmentation, which formulate self-training as the processes of mining more training samples and reducing the influence of false pseudo-labels. In particular, a self-training strategy based on entropy ranking is proposed to mine intra-domain information.
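Entropy-ranking pseudo-label mining can be sketched as follows: rank predictions by entropy and keep only the most confident fraction as pseudo-labels, dropping the unreliable rest. The `keep_ratio` quantile threshold and per-prediction treatment are illustrative assumptions, not the letter's exact strategy:

```python
import numpy as np

def entropy_rank_select(probs, keep_ratio=0.5):
    """probs: (N, C) class probabilities for N predictions.
    Returns (pseudo_labels, keep_mask): argmax labels plus a mask that keeps
    the keep_ratio fraction of predictions with the LOWEST entropy."""
    p = np.clip(probs, 1e-12, 1.0)
    ent = -(p * np.log(p)).sum(axis=-1)        # per-prediction entropy
    thresh = np.quantile(ent, keep_ratio)      # rank-based cutoff
    return probs.argmax(axis=-1), ent <= thresh
```

Training then uses only the kept predictions as pseudo-labels, which reduces the influence of false ones.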
Objective: To investigate the proteomic characteristics of overweight/obesity and the related abnormal glucose and lipid metabolism caused by phlegm-dampness retention, and to identify related biomarkers. Methods: Seventy-one subjects were enrolled in the study. We assessed blood glucose, blood lipids, body mass index (BMI), and phlegm-dampness pattern, which was confirmed by a traditional Chinese medicine clinician. Of the participants, we included healthy participants with normal weight (NW, n=23), overweight/obese participants with normal metabolism (ONM, n=19), overweight/obese participants with pre-diabetes (OPD, n=12), and overweight/obese participants with marginally elevated blood lipids (OML, n=17). Among them, the ONM, OPD, and OML groups were diagnosed with phlegm-dampness pattern. The data-independent acquisition (DIA) method was first used to analyze the plasma protein expression of each group, and the relevant differential proteins of each group were screened. The co-expressed proteins were evaluated by Venn analysis. Pathway analyses of the differential proteins were performed using Ingenuity Pathway Analysis (IPA) software. Parallel reaction monitoring (PRM) was used to verify the differential and common proteins in each group. Results: After comparing the ONM, OPD, and OML groups with the NW group, we identified the differentially expressed proteins (DEPs). Next, we determined the DEPs among the OPD, OML, and ONM groups. Using Venn analysis of the DEPs in each group, 24 co-expressed proteins were screened. Two co-expressed proteins were verified by PRM. IPA analysis showed that pathways including LXR/RXR activation, acute phase response signaling, and FXR/RXR activation were common to all three groups of phlegm-damp overweight/obesity participants. However, the activation or inhibition of these pathways differed among the three groups. Conclusion: Participants with overweight/obesity share similar proteomic characteristics, though each type also shows specific ones. Two co-expressed proteins, VTN and ORM1, are potential biomarkers for glucose and lipid metabolism diseases with overweight/obesity caused by phlegm-dampness retention.
Previous deep learning-based super-resolution (SR) methods rely on the assumption that the degradation process is predefined (e.g., bicubic downsampling). Thus, their performance deteriorates if the real degradation is not consistent with this assumption. To deal with real-world scenarios, existing blind SR methods are committed to estimating both the degradation and the super-resolved image with an extra loss or an iterative scheme. However, degradation estimation requires extra computation, and its accumulated errors limit SR performance. In this paper, we propose a contrastive regularization built upon contrastive learning to exploit the information of blurry images and clear images as negative and positive samples, respectively. Contrastive regularization ensures that the restored image is pulled closer to the clear image and pushed far away from the blurry image in the representation space. Furthermore, instead of estimating the degradation, we extract global statistical prior information to capture the characteristics of the distortion. Considering the coupling between the degradation and the low-resolution image, we embed the global prior into the distortion-specific SR network to make our method adaptive to changes of distortion. We term our distortion-specific network with contrastive regularization CRDNet. Extensive experiments on synthetic and real-world scenes demonstrate that our lightweight CRDNet surpasses state-of-the-art blind super-resolution approaches.
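The contrastive regularization can be sketched as a ratio of distances to the positive (clear) and negative (blurry) samples: minimizing it pulls the restored image toward the clear one and pushes it away from the blurry one. Distances here are plain pixel-space L1 for illustration, whereas the paper computes them in a learned representation space:

```python
import numpy as np

def contrastive_reg(restored, clear, blurry, eps=1e-7):
    """Contrastive regularization sketch:
    small when restored is near the clear (positive) sample,
    large when it is near the blurry (negative) sample."""
    d_pos = np.abs(restored - clear).mean()    # distance to positive
    d_neg = np.abs(restored - blurry).mean()   # distance to negative
    return d_pos / (d_neg + eps)
```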
Sequential recommendation based on a multi-interest framework aims to analyze different aspects of interest based on historical interactions and generate predictions of a user's potential interest in a list of items. Most existing methods focus only on what the multiple interests behind interactions are but neglect the evolution of user interests over time. To explore the impact of temporal dynamics on interest extraction, this paper explicitly models the timestamp with a multi-interest network and proposes a time-highlighted network to learn user preferences, which considers not only the interests at different moments but also the possible trends of interest over time. More specifically, the time intervals between historical interactions and the prediction moment are first mapped to vectors. Meanwhile, a time-attentive aggregation layer is designed to capture the trends of items in the sequence over time, where the time intervals are seen as additional information to distinguish the importance of different neighbors. Then, the learned item transition trends are aggregated with the items themselves by a gated unit. Finally, a self-attention network is deployed to capture multiple interests with the obtained temporal information vectors. Extensive experiments are carried out on three real-world datasets, and the results convincingly establish the superiority of the proposed method over other state-of-the-art baselines in terms of model performance.
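The time-attentive weighting can be sketched with a softmax over relevance scores biased by each interaction's time gap to the prediction moment, so recent items weigh more. The scalar `decay` is a hypothetical stand-in for the learned interval embeddings described above:

```python
import numpy as np

def time_attentive_weights(scores, intervals, decay=0.1):
    """scores: per-item relevance logits; intervals: time gaps to the
    prediction moment. Returns softmax weights penalized by the gap."""
    biased = scores - decay * np.asarray(intervals, float)  # older -> lower logit
    e = np.exp(biased - biased.max())                       # numerically stable softmax
    return e / e.sum()
```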
Reducing the defocus blur that arises from the finite aperture size and short exposure time is an essential problem in computational photography. It is very challenging because the blur kernel is spatially varying and difficult to estimate by traditional methods. Owing to their great breakthroughs in low-level tasks, convolutional neural networks (CNNs) have been introduced to the defocus deblurring problem and have achieved significant progress. However, previous methods apply the same learned kernel to different regions of the defocus-blurred images, making it difficult to handle nonuniformly blurred images. To this end, this study designs a novel blur-aware multi-branch network (BaMBNet), in which different regions are treated differentially. In particular, we estimate the blur amounts of different regions via the internal geometric constraint of the dual-pixel (DP) data, which measures the defocus disparity between the left and right views. Based on the assumption that image regions with different blur amounts pose different deblurring difficulties, we leverage different networks with different capacities to treat different image regions. Moreover, we introduce a meta-learning defocus mask generation algorithm to assign each pixel to a proper branch. In this way, we can expect to preserve the information of the clear regions well while recovering the missing details of the blurred regions. Both quantitative and qualitative experiments demonstrate that our BaMBNet outperforms the state-of-the-art (SOTA) methods. On the dual-pixel defocus deblurring (DPD)-blur dataset, the proposed BaMBNet achieves a 1.20 dB gain over the previous SOTA method in terms of peak signal-to-noise ratio (PSNR) and reduces learnable parameters by 85%. The code and dataset are available at https://github.com/junjun-jiang/BaMBNet.
Dear Editor, Loop closure detection (LCD) is an important module in simultaneous localization and mapping (SLAM). In this letter, we address the LCD task from the semantic aspect down to the geometric one. To this end, a network termed AttentionNetVLAD, which can simultaneously extract global and local features, is proposed. It leverages attentive selection for local features, coupled with reweighting the soft assignment in NetVLAD via the attention map for global features. Given a query image, candidate frames are first identified coarsely by retrieving similar global features in the database via hierarchical navigable small world (HNSW). As global features mainly summarize the semantic information of images and lead to a compact representation, information about the spatial arrangement of visual elements is lost.
Background: Based on the theory of traditional Chinese medicine (TCM) and prior epidemiological investigation, Professor Qi Wang classified the entire human population into nine constitutions and put forward the theory of 'Nine-Constitution Medicine.' Among these constitutions, the main feature of the yang-deficiency constitution (YADC) is intolerance of cold, which has been proven to reduce quality of life and confer susceptibility to specific diseases. Previous studies explored the genetic and transcriptional bases of YADC. In this experiment, we explored the potential mechanism of YADC using protein microarray to deepen our understanding of its biological mechanism. Methods: Subjects identified with YADC (n=12) or a balanced constitution (BC; n=12) in accordance with the Classification and Determination Standards of Constitutions in Traditional Chinese Medicine were selected. Blood was collected to separate serum, and protein microarray technology was used to analyze serum protein expression. Results: The clustering of subjects' constitutions based on protein expression profiling largely coincided with the TCM classification. Based on false discovery rate correction (P < .01) and fold change ≥ 5 or ≤ 0.2, a total of 85 proteins differentially expressed in YADC compared with BC were selected, including 64 upregulated and 21 downregulated ones. Enrichment analysis suggested that subjects with YADC are susceptible to endocrine and energy metabolism disorders, as well as a decline in immune function. Conclusion: This study revealed that YADC exhibits systematic differences in its protein expression profile. Moreover, we can potentially explain the characteristics of YADC partly via the differentially expressed proteins.
Most existing light field (LF) super-resolution (SR) methods either fail to fully use angular information or have an unbalanced performance distribution because they use only part of the views. To address these issues, we propose a novel integration network based on the macro-pixel representation for the LF SR task, named MPIN. To restore the entire LF image simultaneously, we couple the spatial and angular information by rearranging the four-dimensional LF image into a two-dimensional macro-pixel image. Then, two special convolutions are deployed to extract spatial and angular information separately. To fully exploit spatial-angular correlations, an integration resblock is designed to merge the two kinds of information for mutual guidance, allowing our method to be angular-coherent. Under the macro-pixel representation, an angular shuffle layer is tailored to improve the spatial resolution of the macro-pixel image, which can effectively avoid aliasing. Extensive experiments on both synthetic and real-world LF datasets demonstrate that our method achieves better performance than the state-of-the-art methods both qualitatively and quantitatively. Moreover, the proposed method has an advantage in preserving the inherent epipolar structures of LF images with a balanced performance distribution.
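The macro-pixel rearrangement is a pure reindexing of the 4D light field: each spatial location becomes a small angular block in a single 2D image. A NumPy sketch, assuming angular axes come first and a single channel:

```python
import numpy as np

def to_macro_pixel(lf):
    """Rearrange a 4D light field of shape (U, V, H, W) into a 2D
    macro-pixel image of shape (H*U, W*V), where the macro-pixel at
    spatial location (i, j) holds all U x V angular samples of (i, j)."""
    u, v, h, w = lf.shape
    # (U, V, H, W) -> (H, U, W, V) -> (H*U, W*V)
    return lf.transpose(2, 0, 3, 1).reshape(h * u, w * v)
```

The inverse reshape recovers the 4D light field exactly, which is why spatial and angular convolutions can both operate on this single 2D representation.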
As spiders are predators, the macronutrients they extract from their prey play important roles in their mating and reproduction. Previous studies of the effects of macronutrients on spider mating and reproduction focus on protein; the potential impact of prey lipid content remains largely unexplored. Here, we tested the influence of prey varying in lipid content on female mating, sexual cannibalism, reproduction, and offspring fitness in the wolf spider Pardosa pseudoannulata. We obtained two groups of fruit flies (Drosophila melanogaster) that differed significantly in lipid but not protein content by supplementing culture media with a high or low dose of sucrose on which the fruit flies were reared (HL: high lipid; LL: low lipid). Subadult (i.e., one molt before adult) female spiders fed HL flies matured with significantly higher lipid content than those fed LL flies. We found that mated females fed HL flies had a significantly shortened pre-oviposition time and significantly higher fecundity. However, female lipid content had no significant effect on other behaviors and traits, including the latency to courtship, courtship duration, mating, copulation duration, sexual cannibalism, offspring body size, and survival. Hence, our results suggest that the lipid content of prey may be a limiting factor for female reproduction, but not for other behavioral traits, in the wolf spider P. pseudoannulata.
Bone tissue engineering (BTE) has proven to be a promising strategy for bone defect repair. Due to their excellent biological properties, gelatin methacrylate (GelMA) hydrogels have been used as bioinks for 3D bioprinting in some BTE studies to produce scaffolds for bone regeneration. However, applications in load-bearing defects are limited by poor mechanical properties and a lack of bioactivity. In this study, 3D printing technology was used to create nano-attapulgite (nano-ATP)/GelMA composite hydrogels loaded with mouse bone mesenchymal stem cells (BMSCs) and mouse umbilical vein endothelial cells (MUVECs). The bioprintability, physicochemical properties, and mechanical properties were all thoroughly evaluated. Our findings showed that the nano-ATP groups outperformed the control group in terms of printability, indicating that nano-ATP is beneficial for printability. Additionally, after the incorporation of nano-ATP, the mechanical strength of the composite hydrogels was significantly improved, resulting in adequate mechanical properties for bone regeneration. The presence of nano-ATP in the scaffolds was also studied for cell-material interactions. The findings show that cells within the scaffold not only have high viability but also a clear proclivity to promote osteogenic differentiation of BMSCs. Besides, the MUVEC-loaded composite hydrogels demonstrated increased angiogenic activity. A cranial defect model was also developed to evaluate the bone repair capability of scaffolds loaded with rat BMSCs. According to histological analysis, cell-laden nano-ATP composite hydrogels can effectively improve bone regeneration and promote angiogenesis. This study demonstrated the potential of nano-ATP for bone tissue engineering, which should also increase its clinical practicality.
Funding: supported by the National Natural Science Foundation of China (62276192).
Abstract: Feature matching plays a key role in computer vision. However, due to the limitations of descriptors, putative matches are inevitably contaminated by massive outliers. This paper attempts to tackle the outlier filtering problem from two aspects. First, a robust and efficient graph interaction model is proposed, under the assumption that matches are correlated with each other rather than independently distributed. To this end, we construct a graph based on the local relationships of matches and formulate the outlier filtering task as a binary labeling energy minimization problem, where the pairwise term encodes the interaction between matches. We further show that this formulation can be solved globally by the graph cut algorithm. Our new formulation consistently improves the performance of the previous locality-based method without noticeable deterioration in processing time, adding only a few milliseconds. Second, to construct a better graph structure, a robust and geometrically meaningful topology-aware relationship is developed to capture the topological relationship between matches. Together, the two components lead to topology interaction matching (TIM), an effective and efficient method for outlier filtering. Extensive experiments on several large and diverse datasets for multiple vision tasks, including general feature matching as well as relative pose estimation, homography and fundamental matrix estimation, loop-closure detection, and multi-modal image matching, demonstrate that our TIM is more competitive than current state-of-the-art methods in terms of generality, efficiency, and effectiveness. The source code is publicly available at http://github.com/YifanLu2000/TIM.
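The binary-labeling formulation described above can be illustrated with a toy sketch: each match carries a unary cost from its own plausibility score, and a Potts pairwise cost penalizes neighboring matches that take different labels. The paper solves this globally with graph cut; the sketch below simply enumerates labelings on a tiny instance, and the scores, edges, and weight `lam` are illustrative assumptions, not TIM's actual model.

```python
from itertools import product

def energy(labels, unary, edges, lam):
    # unary[i][l]: cost of giving match i label l (0 = outlier, 1 = inlier)
    e = sum(unary[i][l] for i, l in enumerate(labels))
    # Potts pairwise term: neighbouring matches prefer the same label
    e += lam * sum(labels[i] != labels[j] for i, j in edges)
    return e

def min_energy_labeling(unary, edges, lam):
    # brute force over all 2^n labelings; graph cut gives the same global
    # optimum in polynomial time for this submodular energy
    n = len(unary)
    best = min(product((0, 1), repeat=n),
               key=lambda lab: energy(lab, unary, edges, lam))
    return list(best)

# Toy instance: five matches; high scores look like inliers.
scores = [0.9, 0.8, 0.85, 0.2, 0.75]
unary = [(s, 1.0 - s) for s in scores]            # (outlier cost, inlier cost)
edges = [(0, 1), (1, 2), (0, 2), (2, 4), (3, 4)]  # locality graph over matches
labels = min_energy_labeling(unary, edges, lam=0.3)
```

With these toy numbers, the low-scoring match 3 is labeled an outlier while its neighbors stay inliers, because the pairwise penalty is too weak to override its unary evidence.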
Funding: supported by the National Natural Science Foundation of China (62276192).
Abstract: Spectral compressive imaging has emerged as a powerful technique to collect 3D spectral information as 2D measurements. The algorithm for restoring the original 3D hyperspectral images (HSIs) from compressive measurements is pivotal in the imaging process. Early approaches painstakingly designed networks to directly map compressive measurements to HSIs, resulting in a lack of interpretability without exploiting the imaging priors. While some recent works have introduced the deep unfolding framework for explainable reconstruction, the performance of these methods is still limited by the weak information transmission between iterative stages. In this paper, we propose a Memory-Augmented deep Unfolding Network, termed MAUN, for explainable and accurate HSI reconstruction. Specifically, MAUN implements a novel CNN scheme to facilitate a better extrapolation step of the fast iterative shrinkage-thresholding algorithm, introducing an extra momentum incorporation step for each iteration to alleviate information loss. Moreover, to exploit the high correlation of intermediate images from neighboring iterations, we customize a cross-stage transformer (CSFormer) as the deep denoiser to simultaneously capture self-similarity from both in-stage and cross-stage features, which is the first attempt to model the long-distance dependencies between iteration stages. Extensive experiments demonstrate that the proposed MAUN is superior to other state-of-the-art methods both visually and metrically. Our code is publicly available at https://github.com/HuQ1an/MAUN.
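The extrapolation (momentum) step of FISTA that MAUN builds on can be written in a few lines. Below is a generic NumPy sketch on a sparse least-squares toy problem, with soft-thresholding standing in for the learned deep denoiser; the problem sizes and λ are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def fista(A, b, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x_prev = np.zeros(A.shape[1])
    z, t = x_prev.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ z - b)
        x = soft_threshold(z - grad / L, lam / L)    # proximal "denoising" step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x + ((t - 1.0) / t_next) * (x - x_prev)  # momentum extrapolation
        x_prev, t = x, t_next
    return x_prev

# Toy sparse recovery: b = A @ x_true with two nonzero entries in x_true
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.0, -2.0
x_hat = fista(A, A @ x_true, lam=0.01, n_iter=500)
```

MAUN's contribution is to carry extra memory across these iterations; the `z` update above is exactly the extrapolation step it augments.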
Funding: supported by the National Natural Science Foundation of China (62276192).
Abstract: Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but collecting suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, resulting in more convenience and better generalization. Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment methods, leaving remnants of other degradations, uneven brightness, and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE. It can handle multiple degradations, including illumination attenuation, noise pollution, and color shift, all in a self-supervised manner. Illumination attenuation is estimated based on physical principles and local neighborhood information. The removal of noise and the correction of color shift are realized solely with noisy images and images with color shifts. Finally, the comprehensive and fully self-supervised approach achieves better adaptability and generalization. It is applicable to various low-light conditions and can reproduce the original color of scenes in natural light. Extensive experiments conducted on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.
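SLIE's exact illumination model is not specified in the abstract, but the classic Retinex-style initialization it alludes to (physical principles plus local neighborhood information) can be sketched: take the per-pixel maximum over color channels, smooth it with a local max filter, and divide it out. Everything below (window radius, epsilon) is an illustrative assumption, not SLIE's implementation.

```python
import numpy as np

def estimate_illumination(img, radius=1):
    # initial illumination: per-pixel max over channels, then a local max filter
    init = img.max(axis=-1)
    pad = np.pad(init, radius, mode='edge')
    out = np.empty_like(init)
    for i in range(init.shape[0]):
        for j in range(init.shape[1]):
            out[i, j] = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1].max()
    return out

def enhance(img, eps=1e-4):
    # Retinex decomposition: reflectance = image / illumination
    L = estimate_illumination(img)
    return img / (L[..., None] + eps)

dark = np.full((4, 4, 3), 0.2)   # uniformly under-exposed toy image
bright = enhance(dark)
```

On the uniform toy image the estimated illumination equals the pixel value, so the output is pushed toward full brightness.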
Funding: supported in part by the National Natural Science Foundation of China (62276192).
Abstract: Hyperspectral image super-resolution, which refers to reconstructing a high-resolution hyperspectral image from a low-resolution observation, aims to improve the spatial resolution of the hyperspectral image, which benefits subsequent applications. The development of deep learning has promoted significant progress in hyperspectral image super-resolution, and the powerful expression capabilities of deep neural networks make the predicted results more reliable. Recently, the latest deep learning technologies have driven rapid growth in hyperspectral image super-resolution methods. However, a comprehensive review and analysis of the latest deep learning methods from the hyperspectral image super-resolution perspective is absent. To this end, in this survey, we first introduce the concept of hyperspectral image super-resolution and classify the methods according to whether auxiliary information is used. Then, we review the learning-based methods in three categories: single hyperspectral image super-resolution, panchromatic-based hyperspectral image super-resolution, and multispectral-based hyperspectral image super-resolution. Subsequently, we summarize the commonly used hyperspectral datasets, and evaluations of some representative methods in the three categories are performed qualitatively and quantitatively. Moreover, we briefly introduce several typical applications of hyperspectral image super-resolution, including ground object classification, urban change detection, and ecosystem monitoring. Finally, we provide conclusions and the challenges of existing learning-based methods, looking forward to potential future research directions.
Funding: supported by the National Natural Science Foundation of China (61903279).
Abstract: Dear Editor, Since existing hyperspectral image denoising methods suffer from excessive or incomplete denoising, leading to information distortion and loss, this letter proposes a deep denoising network in the frequency domain, termed D2Net. Our motivation stems from the observation that images from different hyperspectral image (HSI) bands share the same structural and contextual features, while the reflectance variations in the spectra mainly fall on the details and textures.
Funding: supported by the Key Research and Development Program of Hubei Province (2020BAB113).
Abstract: A critical component of visual simultaneous localization and mapping is loop closure detection (LCD), an operation that judges whether a robot has returned to a previously visited area. Concretely, given a query image (i.e., the latest view observed by the robot), it proceeds by first retrieving images with similar semantic information, followed by solving the relative relationship between candidate pairs in 3D space. In this work, a novel appearance-based LCD system is proposed. Specifically, candidate frame selection is conducted via the combination of Superfeatures and the aggregated selective match kernel (ASMK). We incorporate an incremental strategy into the vanilla ASMK to make it applicable to the LCD task. It is demonstrated that this setting is memory-efficient and can achieve remarkable performance. To dig up consistent geometry between image pairs during loop closure verification, we propose a simple yet surprisingly effective feature matching algorithm, termed locality preserving matching with global consensus (LPM-GC). The major objective of LPM-GC is to retain the local neighborhood information of true feature correspondences between candidate pairs, where a global constraint is further designed to effectively remove false correspondences in challenging scenes, e.g., those containing numerous repetitive structures. Meanwhile, we derive a closed-form solution that enables our approach to provide reliable correspondences within only a few milliseconds. The performance of the proposed approach has been experimentally evaluated on ten publicly available and challenging datasets. Results show that our method achieves better performance than the state-of-the-art in both feature matching and LCD tasks. We have released our code of LPM-GC at https://github.com/jiayi-ma/LPM-GC.
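The core locality-preserving idea, that a true correspondence keeps roughly the same spatial neighbors in both images, can be sketched in a few lines. This is only a toy version of that principle: the paper's LPM-GC additionally imposes a global consensus constraint and derives a closed-form solution, neither of which is shown here, and the keypoints, `k`, and threshold below are illustrative assumptions.

```python
import numpy as np

def knn_indices(pts, k):
    # pairwise distances; a point is never its own neighbour
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def locality_filter(src, dst, k=3, min_common=2):
    # keep match i if it shares enough k-nearest neighbours in both images
    nn_src, nn_dst = knn_indices(src, k), knn_indices(dst, k)
    return np.array([len(set(nn_src[i]) & set(nn_dst[i])) >= min_common
                     for i in range(len(src))])

# 3x2 grid of keypoints; all matches are a pure translation except index 2
src = np.array([[0, 0], [1, 0], [2, 0], [0, 1], [1, 1], [2, 1]], float)
dst = src + 5.0
dst[2] = [0.0, 9.0]          # a gross outlier
keep = locality_filter(src, dst)
```

The outlier at index 2 loses its neighborhood when mapped to the second image and is rejected, while the translated matches survive.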
Abstract: Dear Editor, This letter is concerned with a new hyperspectral fusion paradigm that simultaneously fuses hyperspectral, multispectral, and panchromatic images. Seeking an efficient prior for the target hyperspectral image (HSI) is vital for constructing an accurate fusion model for this problem. To this end, this work suggests a novel sparse tensor prior using patch-based sparse tensor dictionary learning.
Funding: partially supported by the National Natural Science Foundation of China (62372251).
Abstract: Pan-sharpening aims to obtain high-resolution multispectral (HRMS) images from paired low-resolution multispectral (LRMS) and panchromatic (PAN) images, the key to which is how to maximally integrate spatial and spectral information from the PAN and LRMS images. Following the principle of gradual advance, this paper designs a novel network that contains two main logical functions, i.e., detail enhancement and progressive fusion, to solve the problem. More specifically, the detail enhancement module attempts to produce enhanced MS results with the same spatial sizes as the corresponding PAN images, which are of higher quality than directly up-sampled LRMS images. Having a better MS base (enhanced MS) and its PAN, we progressively extract information from the PAN and enhanced MS images, expecting to capture pivotal and complementary information of the two modalities for the purpose of constructing the desired HRMS. Extensive experiments together with ablation studies on widely used datasets are provided to verify the efficacy of our design and demonstrate its superiority over other state-of-the-art methods both quantitatively and qualitatively. Our code has been released at https://github.com/JiaYN1/PAPS.
Funding: this work was supported by the National Natural Science Foundation of China (62075169, 62003247, 62061160370) and the Key Research and Development Program of Hubei Province (2020BAB113).
Abstract: This study proposes a novel general image fusion framework based on cross-domain long-range learning and the Swin Transformer, termed SwinFusion. On the one hand, an attention-guided cross-domain module is devised to achieve sufficient integration of complementary information and global interaction. More specifically, the proposed method involves an intra-domain fusion unit based on self-attention and an inter-domain fusion unit based on cross-attention, which mine and integrate long-range dependencies within the same domain and across domains. Through long-range dependency modeling, the network is able to fully implement domain-specific information extraction and cross-domain complementary information integration, as well as maintaining the appropriate apparent intensity from a global perspective. In particular, we introduce the shifted-windows mechanism into the self-attention and cross-attention, which allows our model to receive images of arbitrary sizes. On the other hand, multi-scene image fusion problems are generalized to a unified framework with structure maintenance, detail preservation, and proper intensity control. Moreover, an elaborate loss function, consisting of SSIM loss, texture loss, and intensity loss, drives the network to preserve abundant texture details and structural information, as well as presenting optimal apparent intensity. Extensive experiments on both multi-modal image fusion and digital photography image fusion demonstrate the superiority of our SwinFusion compared to state-of-the-art unified image fusion algorithms and task-specific alternatives. Implementation code and pre-trained weights can be accessed at https://github.com/Linfeng-Tang/SwinFusion.
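The intensity and texture terms of such a fusion loss are easy to sketch (the SSIM term is omitted here). The "follow the stronger source" targets below are a common choice in the fusion literature but an assumption on my part, not necessarily SwinFusion's exact definitions.

```python
import numpy as np

def gradients(img):
    # absolute first differences as a cheap texture measure
    gx = np.diff(img, axis=1, prepend=img[:, :1])
    gy = np.diff(img, axis=0, prepend=img[:1, :])
    return np.abs(gx) + np.abs(gy)

def fusion_loss(fused, a, b, w_int=1.0, w_tex=1.0):
    # intensity term: follow the stronger of the two sources per pixel
    l_int = np.mean(np.abs(fused - np.maximum(a, b)))
    # texture term: follow the stronger per-pixel gradient of the two sources
    l_tex = np.mean(np.abs(gradients(fused) -
                           np.maximum(gradients(a), gradients(b))))
    return w_int * l_int + w_tex * l_tex

flat = np.full((4, 4), 0.5)   # identical flat sources: perfect fusion costs zero
```

A fused image equal to both sources incurs zero loss, while one that drops the sources' intensity is penalized by the intensity term.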
Funding: supported by the National Natural Science Foundation of China (62276192, 62075169, 62061160370) and the Key Research and Development Program of Hubei Province (2020BAB113).
Abstract: Image fusion aims to integrate complementary information in source images to synthesize a fused image that comprehensively characterizes the imaging scene. However, existing image fusion algorithms are only applicable to strictly aligned source images and cause severe artifacts in the fusion results when the input images have slight shifts or deformations. In addition, the fusion results typically only have good visual effect but neglect the semantic requirements of high-level vision tasks. This study incorporates image registration, image fusion, and the semantic requirements of high-level vision tasks into a single framework and proposes a novel image registration and fusion method, named SuperFusion. Specifically, we design a registration network to estimate bidirectional deformation fields to rectify geometric distortions of input images under the supervision of both photometric and end-point constraints. The registration and fusion are combined in a symmetric scheme, in which mutual promotion can be achieved by optimizing the naive fusion loss, and it is further enhanced by the mono-modal consistency constraint on symmetric fusion outputs. In addition, the image fusion network is equipped with a global spatial attention mechanism to achieve adaptive feature integration. Moreover, a semantic constraint based on the pre-trained segmentation model and the Lovasz-Softmax loss is deployed to guide the fusion network to focus more on the semantic requirements of high-level vision tasks. Extensive experiments on image registration, image fusion, and semantic segmentation tasks demonstrate the superiority of our SuperFusion compared to state-of-the-art alternatives. The source code and pre-trained model are publicly available at https://github.com/Linfeng-Tang/SuperFusion.
基金supported by the Key Research and Development Program of Hubei Province(2020BAB113)the Natural Science Fund of Hubei Province(2019CFA037)。
Abstract: Dear Editor, This letter develops two new self-training strategies for domain-adaptive semantic segmentation, which formulate self-training as the processes of mining more training samples and reducing the influence of false pseudo-labels. In particular, a self-training strategy based on entropy ranking is proposed to mine intra-domain information.
Funding: supported by the General Program of the National Natural Science Foundation of China (81673836).
Abstract: Objective: To investigate the proteomic characteristics of overweight/obesity and related abnormal glucose and lipid metabolism caused by phlegm-dampness retention, and to identify related biomarkers. Methods: Seventy-one subjects were enrolled in the study. We assessed blood glucose, blood lipids, body mass index (BMI), and phlegm-dampness pattern, which was confirmed by a traditional Chinese medicine clinician. The participants included healthy participants with normal weight (NW, n=23), overweight/obese participants with normal metabolism (ONM, n=19), overweight/obese participants with pre-diabetes (OPD, n=12), and overweight/obese participants with marginally elevated blood lipids (OML, n=17). Among them, the ONM, OPD, and OML groups were diagnosed with phlegm-dampness pattern. The data-independent acquisition (DIA) method was first used to analyze the plasma protein expression of each group, and the relevant differential proteins of each group were screened. The co-expressed proteins were evaluated by Venn analysis. Pathway analyses of the differential proteins were performed using Ingenuity Pathway Analysis (IPA) software. Parallel reaction monitoring (PRM) was used to verify the differential and common proteins in each group. Results: After comparing the ONM, OPD, and OML groups with the NW group, we identified the differentially expressed proteins (DEPs). Next, we determined the DEPs among the OPD, OML, and ONM groups. Using Venn analysis of the DEPs in each group, 24 co-expressed proteins were screened. Two co-expressed proteins were verified by PRM. IPA analysis showed that pathways including LXR/RXR activation, acute phase response signaling, and FXR/RXR activation were common to all three groups of phlegm-damp overweight/obesity participants. However, the activation or inhibition of these pathways differed among the three groups. Conclusion: Participants with overweight/obesity have similar proteomic characteristics, though each type shows specific proteomic characteristics. Two co-expressed proteins, VTN and ORM1, are potential biomarkers for glucose and lipid metabolism diseases with overweight/obesity caused by phlegm-dampness retention.
Funding: supported by the National Natural Science Foundation of China (61971165) and the Key Research and Development Program of Hubei Province (2020BAB113).
Abstract: Previous deep learning-based super-resolution (SR) methods rely on the assumption that the degradation process is predefined (e.g., bicubic downsampling). Thus, their performance deteriorates if the real degradation is not consistent with the assumption. To deal with real-world scenarios, existing blind SR methods are committed to estimating both the degradation and the super-resolved image with an extra loss or iterative scheme. However, degradation estimation, which requires more computation, results in limited SR performance due to accumulated estimation errors. In this paper, we propose a contrastive regularization built upon contrastive learning to exploit the information of both blurry images and clear images as negative and positive samples, respectively. Contrastive regularization ensures that the restored image is pulled closer to the clear image and pushed far away from the blurry image in the representation space. Furthermore, instead of estimating the degradation, we extract global statistical prior information to capture the character of the distortion. Considering the coupling between the degradation and the low-resolution image, we embed the global prior into the distortion-specific SR network to make our method adaptive to changes in distortion. We term our distortion-specific network with contrastive regularization CRDNet. Extensive experiments on synthetic and real-world scenes demonstrate that our lightweight CRDNet surpasses state-of-the-art blind super-resolution approaches.
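The pull/push behavior of contrastive regularization can be sketched as a ratio of distances: the restored image's distance to the positive (clear) sample over its distance to the negative (blurry) one. The real method measures these distances in a learned representation space; the raw-pixel version below is only an illustrative stand-in.

```python
import numpy as np

def contrastive_reg(restored, clear, blurry, eps=1e-8):
    pos = np.mean(np.abs(restored - clear))    # pull toward the clear (positive) sample
    neg = np.mean(np.abs(restored - blurry))   # push away from the blurry (negative) one
    return pos / (neg + eps)

clear, blurry = np.ones((2, 2)), np.zeros((2, 2))
good = contrastive_reg(0.9 * np.ones((2, 2)), clear, blurry)  # near the clear image
bad = contrastive_reg(0.1 * np.ones((2, 2)), clear, blurry)   # near the blurry image
```

A restoration close to the clear image gets a small penalty; one that drifts toward the blurry input is penalized heavily, which is exactly the gradient signal the regularizer provides.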
Funding: supported in part by the National Natural Science Foundation of China under Grant 61702060.
Abstract: Sequential recommendation based on a multi-interest framework aims to analyze different aspects of interest from historical interactions and generate predictions of a user's potential interest in a list of items. Most existing methods only focus on what the multiple interests behind interactions are but neglect the evolution of user interests over time. To explore the impact of temporal dynamics on interest extraction, this paper explicitly models the timestamp with a multi-interest network and proposes a time-highlighted network to learn user preferences, which considers not only the interests at different moments but also the possible trends of interest over time. More specifically, the time intervals between historical interactions and prediction moments are first mapped to vectors. Meanwhile, a time-attentive aggregation layer is designed to capture the trends of items in the sequence over time, where the time intervals are seen as additional information to distinguish the importance of different neighbors. Then, the learned item transition trends are aggregated with the items themselves by a gated unit. Finally, a self-attention network is deployed to capture multiple interests with the obtained temporal information vectors. Extensive experiments are carried out on three real-world datasets, and the results convincingly establish the superiority of the proposed method over other state-of-the-art baselines in terms of model performance.
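The idea of time-attentive aggregation, letting the interval between an interaction and the prediction moment discount that interaction's attention weight, can be sketched simply. The linear decay, embeddings, and `decay` rate below are illustrative assumptions; the paper learns interval embeddings rather than using a fixed penalty.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def time_attentive_pool(item_embs, intervals, query, decay=0.2):
    # similarity to the query, discounted by how long ago the interaction happened
    scores = item_embs @ query - decay * np.asarray(intervals, float)
    w = softmax(scores)
    return w @ item_embs

items = np.array([[1.0, 0.0],    # recent interaction
                  [0.0, 1.0]])   # older interaction, equally similar to the query
pooled = time_attentive_pool(items, intervals=[0.0, 5.0],
                             query=np.array([1.0, 1.0]))
```

Both items are equally similar to the query, so any difference in the pooled vector comes purely from the time intervals: the recent item dominates.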
Funding: supported by the National Natural Science Foundation of China (61971165, 61922027, 61773295), in part by the Fundamental Research Funds for the Central Universities (FRFCU5710050119), the Natural Science Foundation of Heilongjiang Province (YQ2020F004), and the Chinese Association for Artificial Intelligence (CAAI)-Huawei MindSpore Open Fund.
Abstract: Reducing the defocus blur that arises from the finite aperture size and short exposure time is an essential problem in computational photography. It is very challenging because the blur kernel is spatially varying and difficult to estimate by traditional methods. Due to their great breakthroughs in low-level tasks, convolutional neural networks (CNNs) have been introduced to the defocus deblurring problem and have achieved significant progress. However, previous methods apply the same learned kernel to different regions of defocus-blurred images, making it difficult to handle nonuniformly blurred images. To this end, this study designs a novel blur-aware multi-branch network (BaMBNet), in which different regions are treated differentially. In particular, we estimate the blur amounts of different regions by the internal geometric constraint of the dual-pixel (DP) data, which measures the defocus disparity between the left and right views. Based on the assumption that image regions with different blur amounts have different deblurring difficulties, we leverage networks with different capacities to treat different image regions. Moreover, we introduce a meta-learning defocus mask generation algorithm to assign each pixel to a proper branch. In this way, we can expect to maintain the information of the clear regions well while recovering the missing details of the blurred regions. Both quantitative and qualitative experiments demonstrate that our BaMBNet outperforms state-of-the-art (SOTA) methods. On the dual-pixel defocus deblurring (DPD)-blur dataset, the proposed BaMBNet achieves a 1.20 dB gain over the previous SOTA method in terms of peak signal-to-noise ratio (PSNR) and reduces learnable parameters by 85%. The details of the code and dataset are available at https://github.com/junjun-jiang/BaMBNet.
Funding: supported by the Key Research and Development Program of Hubei Province (2020BAB113) and the Natural Science Fund of Hubei Province (2019CFA037).
Abstract: Dear Editor, Loop closure detection (LCD) is an important module in simultaneous localization and mapping (SLAM). In this letter, we address the LCD task from the semantic aspect to the geometric one. To this end, a network termed AttentionNetVLAD, which can simultaneously extract global and local features, is proposed. It leverages attentive selection for local features, coupled with reweighting the soft assignment in NetVLAD via the attention map for global features. Given a query image, candidate frames are first identified coarsely by retrieving similar global features in the database via hierarchical navigable small world (HNSW). As global features mainly summarize the semantic information of images and lead to a compact representation, information about the spatial arrangement of visual elements is lost.
Funding: this study was supported by the National Natural Science Foundation of China General Program (81373504) and the National Key Basic Research Program of China (973 Program) (2011CB505400).
Abstract: Background: Based on the theory of traditional Chinese medicine (TCM) and prior epidemiological investigation, Professor Qi Wang classified the entire human population into nine constitutions and put forward the theory of "Nine-Constitution Medicine." Among these constitutions, the main feature of the yang-deficiency constitution (YADC) is intolerance of cold, which has been proven to reduce quality of life and confer susceptibility to specific diseases. Previous studies explored the genetic and transcriptional bases of YADC. In this experiment, we explored the potential mechanism of YADC using protein microarray to deepen our understanding of its biological mechanism. Methods: Subjects identified with YADC (n=12) or a balanced constitution (BC; n=12) in accordance with the Classification and Determination Standards of Constitutions in Traditional Chinese Medicine were selected. Blood was collected to separate serum, and protein microarray technology was used to analyze serum protein expression. Results: The clustering of subjects' constitutions based on protein expression profiling largely coincided with the TCM classification. Based on false discovery rate correction (P < .01) and fold change ≥ 5 or ≤ 0.2, a total of 85 proteins differentially expressed in YADC compared with BC were selected, including 64 upregulated and 21 downregulated ones. Enrichment analysis suggested that subjects with YADC are susceptible to endocrine and energy metabolism disorders, as well as a decline in immune function. Conclusion: This study revealed that YADC exhibits systematic differences in its protein expression profile. Moreover, we can potentially explain the characteristics of YADC partly via the differentially expressed proteins.
Funding: project supported by the National Natural Science Foundation of China (No. 61773295).
Abstract: Most existing light field (LF) super-resolution (SR) methods either fail to fully use angular information or have an unbalanced performance distribution because they use only part of the views. To address these issues, we propose a novel integration network based on macro-pixel representation for the LF SR task, named MPIN. Restoring the entire LF image simultaneously, we couple the spatial and angular information by rearranging the four-dimensional LF image into a two-dimensional macro-pixel image. Then, two special convolutions are deployed to extract spatial and angular information separately. To fully exploit spatial-angular correlations, an integration resblock is designed to merge the two kinds of information for mutual guidance, allowing our method to be angular-coherent. Under the macro-pixel representation, an angular shuffle layer is tailored to improve the spatial resolution of the macro-pixel image, which can effectively avoid aliasing. Extensive experiments on both synthetic and real-world LF datasets demonstrate that our method achieves better performance than state-of-the-art methods both qualitatively and quantitatively. Moreover, the proposed method has an advantage in preserving the inherent epipolar structures of LF images with a balanced performance distribution.
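The 4D-to-2D macro-pixel rearrangement is a pure transpose/reshape: each spatial location expands into a small block holding all of its angular samples. The exact index layout below is an assumption (the abstract does not fix one), but any consistent layout works the same way.

```python
import numpy as np

def to_macro_pixel(lf):
    # (U, V, H, W) light field -> (H*U, W*V) image where each spatial
    # location (h, w) expands into its own U x V block of angular samples
    U, V, H, W = lf.shape
    return lf.transpose(2, 0, 3, 1).reshape(H * U, W * V)

def from_macro_pixel(m, U, V):
    # inverse rearrangement back to the (U, V, H, W) light field
    H, W = m.shape[0] // U, m.shape[1] // V
    return m.reshape(H, U, W, V).transpose(1, 3, 0, 2)

lf = np.arange(2 * 3 * 4 * 5, dtype=float).reshape(2, 3, 4, 5)
m = to_macro_pixel(lf)
```

Because the mapping is lossless, spatial and angular convolutions can both be run on the single 2D image, which is the property MPIN exploits.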
Funding: this study was supported by grants from the National Natural Science Foundation of China (NSFC) (30800121).
Abstract: As predators, the macronutrients spiders extract from their prey play important roles in their mating and reproduction. Previous studies of macronutrient effects on spider mating and reproduction focus on protein; the potential impact of prey lipid content on spider mating and reproduction remains largely unexplored. Here, we tested the influence of prey varying in lipid content on female mating, sexual cannibalism, reproduction, and offspring fitness in the wolf spider Pardosa pseudoannulata. We acquired two groups of fruit flies (Drosophila melanogaster) that differed significantly in lipid but not protein content by supplementing culture media with a high or low dose of sucrose on which the fruit flies were reared (HL: high lipid; LL: low lipid). Subadult (i.e., one molt before adult) female spiders fed HL flies matured with significantly higher lipid content than those fed LL flies. We found that mated females fed HL flies had a significantly shortened pre-oviposition time and significantly higher fecundity. However, there was no significant difference between female spiders varying in lipid content in other behaviors and traits, including the latency to courtship, courtship duration, mating, copulation duration, sexual cannibalism, offspring body size, and survival. Hence, our results suggest that the lipid content of prey may be a limiting factor for female reproduction, but not for other behavioral traits, in the wolf spider P. pseudoannulata.
Funding: this research was funded by Jiangsu Province's Key Project of Science and Technology (Grant No. BE2018644) and the Changzhou Health Commission's Young Talents Science and Technology Project (Grant No. QN202029).
Abstract: Bone tissue engineering (BTE) has proven to be a promising strategy for bone defect repair. Due to their excellent biological properties, gelatin methacrylate (GelMA) hydrogels have been used as bioinks for 3D bioprinting in some BTE studies to produce scaffolds for bone regeneration. However, applications for load-bearing defects are limited by poor mechanical properties and a lack of bioactivity. In this study, 3D printing technology was used to create nano-attapulgite (nano-ATP)/GelMA composite hydrogels loaded with mouse bone mesenchymal stem cells (BMSCs) and mouse umbilical vein endothelial cells (MUVECs). The bioprintability, physicochemical properties, and mechanical properties were all thoroughly evaluated. Our findings showed that the nano-ATP groups outperform the control group in terms of printability, indicating that nano-ATP is beneficial for printability. Additionally, after incorporation of nano-ATP, the mechanical strength of the composite hydrogels was significantly improved, resulting in adequate mechanical properties for bone regeneration. The presence of nano-ATP in the scaffolds has also been studied for cell-material interactions. The findings show that cells within the scaffold not only have high viability but also a clear proclivity to promote osteogenic differentiation of BMSCs. Besides, the MUVEC-loaded composite hydrogels demonstrated increased angiogenic activity. A cranial defect model was also developed to evaluate the bone repair capability of scaffolds loaded with rat BMSCs. According to histological analysis, cell-laden nano-ATP composite hydrogels can effectively improve bone regeneration and promote angiogenesis. This study demonstrated the potential of nano-ATP for bone tissue engineering, which should also increase its clinical practicality.