Funding: Supported in part by the National Natural Science Foundation of China (Grant No. 62062003) and the Natural Science Foundation of Ningxia (Grant No. 2023AAC03293).
Abstract: Multimodal lung tumor medical images, such as Positron Emission Tomography (PET), Computed Tomography (CT), and PET-CT, can provide anatomical and functional information about the same lesion. How to exploit this anatomical and functional information effectively and improve network segmentation performance is a key question. To address it, a Saliency Feature-Guided Interactive Feature Enhancement Lung Tumor Segmentation Network (Guide-YNet) is proposed in this paper. First, a double-encoder single-decoder U-Net is used as the backbone of the model, while a single-encoder single-decoder U-Net generates a saliency guidance feature from the PET image and feeds it into the skip connections of the backbone, so that the high sensitivity of PET images to tumors guides the network to locate lesions accurately. Second, a Cross-Scale Feature Enhancement Module (CSFEM) is designed to extract multi-scale fusion features after downsampling. Third, a Cross-Layer Interactive Feature Enhancement Module (CIFEM) is designed in the encoder to enhance spatial position information and semantic information. Finally, a Cross-Dimension Cross-Layer Feature Enhancement Module (CCFEM) is proposed in the decoder, which effectively extracts multimodal image features through global attention and multi-dimensional local attention. The proposed method is verified on multimodal lung medical image datasets, and the results show that the Mean Intersection over Union (MIoU), Accuracy (Acc), Dice Similarity Coefficient (Dice), Volumetric Overlap Error (VOE), and Relative Volume Difference (RVD) of the proposed method on lung lesion segmentation are 87.27%, 93.08%, 97.77%, 95.92%, 89.28%, and 88.68%, respectively. It is of great significance for computer-aided diagnosis.
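As context for the reported numbers, the sketch below computes the standard definitions of the evaluation metrics named in the abstract (IoU/MIoU, pixel accuracy, Dice, VOE, RVD) from a pair of binary masks. This is a minimal NumPy illustration of the textbook formulas, not the paper's evaluation code; the paper's averaging over classes/cases and its reporting convention for VOE and RVD may differ.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Standard overlap metrics for binary lesion masks (textbook definitions).

    pred, gt: arrays of the same shape, interpreted as boolean masks.
    The paper's own evaluation protocol (class averaging, edge cases,
    VOE/RVD sign and scaling) may differ from this sketch.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()

    iou = inter / union if union else 1.0                 # Intersection over Union
    dice = 2 * inter / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 1.0
    acc = (pred == gt).mean()                             # pixel accuracy
    voe = 1.0 - iou                                       # volumetric overlap error
    rvd = (pred.sum() - gt.sum()) / gt.sum() if gt.sum() else 0.0  # relative volume difference
    return {"IoU": iou, "Dice": dice, "Acc": acc, "VOE": voe, "RVD": rvd}

# Toy usage with random masks
rng = np.random.default_rng(0)
gt = rng.random((128, 128)) > 0.7
pred = np.logical_or(gt, rng.random((128, 128)) > 0.95)
print(segmentation_metrics(pred, gt))
```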
Abstract: [Objective] To safeguard rice production in Guangdong Province, reduce losses caused by rodent pests in farming areas, summarize the results of trap barrier system (TBS) trials against farmland rodents, and develop a replicable, extendable control model. [Methods] The monitoring and control performance of TBS in Guangdong farmland was studied by installing 5005 m of TBS fencing and 502 trap buckets in Taishan City (Jiangmen) and Lianping County (Heyuan). [Results] A total of 478 rodents were captured, comprising 151 house mice (Mus musculus), 126 lesser ricefield rats (Rattus losea), 109 greater bandicoot rats (Bandicota indica), 76 brown rats (Rattus norvegicus), and 16 Oriental house rats (Rattus tanezumi). The house mouse, lesser ricefield rat, greater bandicoot rat, and brown rat were the dominant local farmland species, accounting for 31.6%, 26.4%, 22.8%, and 15.9% of all captures, respectively; this species composition was broadly consistent with the local 2023 snap-trap monitoring results. Comparison across habitats showed that, at the rice seedling stage, TBS fences in the banana-grove area captured significantly more rodents than those along the river channel, and at the yellow-ripening stage, captures in the lotus-pond area were significantly higher than in the river-channel area. No rodents were captured in the treatment areas of either Lianping County or Taishan City, whereas capture rates in the control areas were 10.50% in Taishan and 9.50% in Lianping, indicating that TBS effectively suppressed the rodent population. In Taishan, the number of hills surveyed, the effective panicle number, and the damaged-panicle rate differed significantly between the treatment and control areas (P<0.05), and the control efficacy of the TBS fence was 95.41%; in Lianping, the effective panicle number and damaged-panicle rate differed significantly (P<0.05), and the control efficacy of TBS was 85.35%. [Conclusion] The application of TBS reduced rodent damage and rice yield loss. TBS is safe, environmentally friendly, and sustainable, and has good prospects for wider application.
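The species-composition percentages in the abstract follow directly from the reported capture counts, and control-efficacy figures of this kind are conventionally derived from damaged-panicle rates. The short Python sketch below reproduces the composition percentages from the abstract's own counts and shows the usual corrected-rate form of the efficacy calculation; the damaged-panicle rates passed to `control_efficacy` are placeholders, since the abstract reports only the final efficacy values (95.41% in Taishan, 85.35% in Lianping), and the exact formula used in the trial is not stated.

```python
# Species composition from the reported TBS captures (counts taken from the abstract).
captures = {
    "house mouse (Mus musculus)": 151,
    "lesser ricefield rat (Rattus losea)": 126,
    "greater bandicoot rat (Bandicota indica)": 109,
    "brown rat (Rattus norvegicus)": 76,
    "Oriental house rat (Rattus tanezumi)": 16,
}
total = sum(captures.values())  # 478 rodents in total
for species, n in captures.items():
    print(f"{species}: {n} ({n / total:.1%})")

# Control efficacy in the usual corrected-rate form; the rates below are
# illustrative placeholders, not figures from the trial.
def control_efficacy(damage_rate_control, damage_rate_tbs):
    return (damage_rate_control - damage_rate_tbs) / damage_rate_control * 100

print(f"example efficacy: {control_efficacy(5.0, 0.25):.2f}%")
```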
Funding: Natural Science Foundation of Jiangsu Province, Grant/Award Number: BK20170765; National Natural Science Foundation of China, Grant/Award Number: 61703201; Future Network Scientific Research Fund Project, Grant/Award Number: FNSRFP2021YB26; Science Foundation of Nanjing Institute of Technology, Grant/Award Numbers: ZKJ202002, ZKJ202003, and YKJ202019.
Abstract: Unconstrained face images are affected by many factors, such as illumination, posture, expression, occlusion, age, and accessories, resulting in random noise pollution in the original samples. To improve sample quality, a weighted block cooperative sparse representation algorithm based on a visual saliency dictionary is proposed. First, the algorithm uses the biological visual attention mechanism to quickly and accurately locate the salient facial regions and constructs the visual saliency dictionary. Then, a block cooperation framework is presented to perform sparse coding for different local structures of the face, and a weighted regularisation term is introduced into the sparse representation process to enhance the discriminability of the information hidden in the coding coefficients. Finally, by synthesising the sparse representation results of all visually salient block dictionaries, the global coding residual is obtained and the class label is assigned. Experimental results on four databases, namely AR, Extended Yale B, LFW, and PubFig, indicate that the combination of the visual saliency dictionary, block cooperative sparse representation, and weighted constraint coding effectively improves the accuracy of the sparse representation of the test samples and the performance of unconstrained face recognition.
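To make the classification step concrete, the following sketch shows a generic block-wise sparse-representation classifier with per-block weights: each face block is sparse-coded over its block dictionary, and the class with the smallest weighted global reconstruction residual is returned. The Lasso-based coder, the function name, and the simple residual weighting are illustrative assumptions; they stand in for, but are not, the authors' visual-saliency dictionary construction and weighted constraint coding.

```python
import numpy as np
from sklearn.linear_model import Lasso

def block_src_predict(blocks_dict, blocks_test, labels, block_weights=None, alpha=0.01):
    """Block-wise sparse-representation classification (simplified sketch).

    blocks_dict   : list of (d_b, n_atoms) arrays, one dictionary per face block
                    (columns are training atoms, shared ordering across blocks).
    blocks_test   : list of (d_b,) arrays, the matching blocks of the test face.
    labels        : (n_atoms,) array of class labels for the dictionary atoms.
    block_weights : optional per-block weights (e.g. derived from a saliency map).
    Returns the class label with the minimum weighted global residual.
    """
    classes = np.unique(labels)
    if block_weights is None:
        block_weights = np.ones(len(blocks_dict))

    residuals = np.zeros(len(classes))
    for D, y, w in zip(blocks_dict, blocks_test, block_weights):
        # l1-regularised sparse coding of the test block over the block dictionary.
        coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
        coder.fit(D, y)
        x = coder.coef_
        # Class-wise reconstruction residual, accumulated with the block weight.
        for i, c in enumerate(classes):
            mask = labels == c
            residuals[i] += w * np.linalg.norm(y - D[:, mask] @ x[mask])
    return classes[int(np.argmin(residuals))]

# Toy usage: 2 classes, 3 atoms each, 2 face blocks of dimension 20.
rng = np.random.default_rng(0)
labels = np.array([0, 0, 0, 1, 1, 1])
blocks_dict = [rng.standard_normal((20, 6)) for _ in range(2)]
test_face = [D[:, 4] + 0.05 * rng.standard_normal(20) for D in blocks_dict]
print(block_src_predict(blocks_dict, test_face, labels))  # should print 1 (test built near a class-1 atom)
```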