Funding: Supported by the National Key R&D Program of China (2019YFC1521102), the National Natural Science Foundation of China (61932003), and the Beijing Science and Technology Plan (Z221100007722004).
Abstract: Hierarchical multi-granularity image classification is a challenging task that aims to tag each given image with multiple granularity labels simultaneously. Existing methods tend to overlook that different image regions contribute differently to label prediction at different granularities, and they also insufficiently consider the relationships between hierarchical multi-granularity labels. We introduce a sequence-to-sequence mechanism to overcome these two problems and propose a multi-granularity sequence generation (MGSG) approach for hierarchical multi-granularity image classification. Specifically, we introduce a transformer architecture to encode the image into visual representation sequences. Next, we traverse the taxonomic tree, organize the multi-granularity labels into sequences, vectorize them, and add positional information. The proposed multi-granularity sequence generation method builds a decoder that takes the visual representation sequences and semantic label embeddings as inputs and outputs the predicted multi-granularity label sequence. The decoder models dependencies and correlations between multi-granularity labels through a masked multi-head self-attention mechanism, and relates visual information to semantic label information through a cross-modality attention mechanism. In this way, the proposed method preserves the relationships between labels at different granularity levels and takes into account the influence of different image regions on labels of different granularities. Evaluations on six public benchmarks qualitatively and quantitatively demonstrate the advantages of the proposed method. Our project is available at https://github.com/liuxindazz/mgs.
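To make the encoder-decoder idea concrete, the following is a minimal PyTorch sketch of a sequence-to-sequence multi-granularity classifier: a patch-based transformer encoder produces a visual representation sequence, and a transformer decoder with masked self-attention over label embeddings (plus cross-attention to the visual sequence) predicts one label per granularity level. It is not the authors' released implementation; the module name, patch-embedding encoder, and hyper-parameters are illustrative assumptions.

```python
# Illustrative sketch only; class name, sizes, and structure are assumptions.
import torch
import torch.nn as nn


class MGSGSketch(nn.Module):
    def __init__(self, num_labels, num_granularities=3, d_model=256,
                 nhead=8, num_layers=4, image_size=224, patch_size=16):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Encoder: embed image patches into a visual representation sequence.
        self.patch_embed = nn.Conv2d(3, d_model, kernel_size=patch_size,
                                     stride=patch_size)
        self.patch_pos = nn.Parameter(torch.zeros(1, num_patches, d_model))
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers)
        # Decoder inputs: label embeddings plus positional information for the
        # granularity order (coarse to fine along the taxonomic tree).
        self.label_embed = nn.Embedding(num_labels + 1, d_model)  # +1 for <BOS>
        self.level_pos = nn.Parameter(torch.zeros(1, num_granularities, d_model))
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers)
        self.classifier = nn.Linear(d_model, num_labels)

    def forward(self, images, label_seq):
        # images: (B, 3, H, W); label_seq: (B, G) previous labels (teacher forcing).
        vis = self.patch_embed(images).flatten(2).transpose(1, 2) + self.patch_pos
        memory = self.encoder(vis)
        tgt = self.label_embed(label_seq) + self.level_pos[:, :label_seq.size(1)]
        # Masked self-attention keeps each granularity from attending to finer levels;
        # cross-attention relates label queries to the visual sequence.
        causal = nn.Transformer.generate_square_subsequent_mask(
            label_seq.size(1)).to(images.device)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.classifier(out)  # (B, G, num_labels) logits per granularity
```

At inference time one would decode autoregressively from the coarsest level to the finest, feeding each predicted label back as the next decoder input, which is how a sequence-generation formulation preserves the coarse-to-fine label dependencies.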
Abstract: Mining the rich semantic information hidden in heterogeneous information networks is one of the important tasks of data mining. Generally, a nuclear medicine text consists of a description of the disease (i.e., lesions) and the diagnostic results. However, how to construct a computer-aided diagnostic model from a large number of medical texts is a challenging task. To automatically diagnose diseases with SPECT imaging, in this work we create a knowledge-based diagnostic model by exploring the association between a disease and its properties. First, an overview of nuclear medicine and data mining is presented. Second, a method for preprocessing textual nuclear medicine diagnostic reports is proposed. Finally, diagnostic models based on random forest and SVM are built. Experimental evaluation conducted on real-world diagnostic reports of SPECT imaging demonstrates that our diagnostic models are workable and effective for automatically identifying diseases from textual diagnostic reports.
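The following is a minimal scikit-learn sketch of the kind of text-based diagnostic model the abstract describes: preprocessed report text is vectorized and fed to a random forest and an SVM for comparison. It is illustrative only; the file name "spect_reports.csv", the column names, and the TF-IDF features are assumptions rather than the authors' pipeline.

```python
# Illustrative sketch only; data path, columns, and features are assumed.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

# Assumed layout: one row per SPECT report, with the lesion description text
# and the diagnostic label extracted during preprocessing.
reports = pd.read_csv("spect_reports.csv")  # hypothetical file
X_train, X_test, y_train, y_test = train_test_split(
    reports["lesion_text"], reports["diagnosis"],
    test_size=0.2, random_state=0)

models = {
    "random_forest": make_pipeline(
        TfidfVectorizer(),
        RandomForestClassifier(n_estimators=200, random_state=0)),
    "svm": make_pipeline(
        TfidfVectorizer(),
        SVC(kernel="linear", C=1.0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)          # learn term-weight -> diagnosis mapping
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```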