Funding: Hunan University of Arts and Science provided doctoral research funding for this study (grant number 16BSQD23); the Fund of Geography Subject ([2022]351) also provided funding.
Abstract: Recently, semantic classification (SC) algorithms for remote sensing images (RSI) have been greatly improved by deep learning (DL) techniques, e.g., deep convolutional neural networks (CNNs). However, many methods employ complex procedures (e.g., multiple stages), excessive hardware budgets (e.g., multiple models), and heavy reliance on domain knowledge (e.g., handcrafted features) purely to improve accuracy. This runs counter to the key advantages of DL, namely simplicity and automation, and these algorithms also carry unnecessarily expensive overhead in parameters and hardware costs. As a solution, the author previously proposed a fast and simple training algorithm based on the smallest architecture of EfficientNet version 2, called FST-EfficientNet. The approach employs a routine transfer learning strategy and trains quickly. It outperforms all earlier methods, with an accuracy gain of 0.8%–2.7%. It does, however, use higher testing resolutions of 512×512 and 600×600, which results in high consumption of graphics processing units (GPUs). As an upgrade, the author proposes a more efficient successor named FST-EfficientNetV2. The new algorithm still employs a routine transfer learning strategy and maintains fast training, but it introduces a set of crucial algorithmic tweaks and re-optimized hyperparameters. As a result, it achieves a noticeable accuracy increase of 0.3%–1.1% over its predecessor. More importantly, the algorithm's GPU costs are reduced by 75%–81%, with a significant reduction in training time of 60%–80%. The results demonstrate that an efficient training optimization strategy can significantly boost a CNN algorithm's performance for RSI-SC. More crucially, the results show that the distribution shift introduced by data augmentation (DA) techniques is vital to the method's performance for RSI-SC, a factor that has been overlooked to date. These findings may help us gain a correct understanding of CNN algorithms for RSI-SC.
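To make the transfer learning setup concrete, the sketch below shows routine fine-tuning with the smallest EfficientNetV2 variant available in torchvision (efficientnet_v2_s), training at a modest resolution and evaluating at a higher one. The class count, resolutions, and hyperparameters are illustrative assumptions only; this is not the paper's FST-EfficientNetV2 pipeline or its DA schedule.

```python
# Minimal sketch: routine transfer learning with torchvision's smallest
# EfficientNetV2 model. All numbers below are illustrative assumptions,
# not the FST-EfficientNetV2 settings reported in the paper.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 45                  # assumed RSI scene-class count
TRAIN_RES, TEST_RES = 224, 384    # assumed lower train / higher test resolution

# Load an ImageNet-pretrained backbone and replace the classifier head.
weights = models.EfficientNet_V2_S_Weights.IMAGENET1K_V1
model = models.efficientnet_v2_s(weights=weights)
in_features = model.classifier[1].in_features
model.classifier[1] = nn.Linear(in_features, NUM_CLASSES)

# Standard augmentation for training; plain resizing at a higher test resolution.
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(TRAIN_RES),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])
test_tf = transforms.Compose([
    transforms.Resize(TEST_RES),
    transforms.CenterCrop(TEST_RES),
    transforms.ToTensor(),
    normalize,
])

# Fine-tune all weights with a small learning rate (routine transfer learning).
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

Training at a lower resolution and testing at a higher one is the generic train/test-resolution pattern the abstract refers to; the paper's specific algorithmic tweaks are not reproduced here.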
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 60873150, 60970056, and 90920004.
Abstract: This paper proposes a tree kernel method for semantic relation detection and classification (RDC) between named entities. It resolves two critical problems in previous tree kernel methods for RDC. First, a new tree kernel is presented to better capture the inherent structural information in a parse tree by extending the standard convolution tree kernel with context sensitivity and approximate matching of sub-trees. Second, an enriched parse tree structure is proposed to effectively derive the necessary structural information, e.g., proper latent annotations, from a parse tree. Evaluation on the ACE RDC corpora shows that both the new tree kernel and the enriched parse tree structure contribute significantly to RDC, and that our tree kernel method significantly outperforms state-of-the-art ones.
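For context, the enriched kernel builds on the standard convolution tree kernel (Collins and Duffy), which sums the number of common sub-tree fragments rooted at every pair of nodes, down-weighted by a decay factor. The sketch below illustrates only that baseline kernel; the node representation, decay value, and toy trees are assumptions, and the paper's context-sensitive, approximately matching extension is not implemented.

```python
# Minimal sketch of the standard convolution tree kernel that the paper extends.
# Tree representation, decay factor, and the toy example are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def production(n):
    """A node's production rule: its label plus the labels of its children."""
    return (n.label, tuple(c.label for c in n.children))

def internal_nodes(t):
    """All non-leaf nodes of a tree, in pre-order."""
    if not t.children:
        return []
    result = [t]
    for c in t.children:
        result.extend(internal_nodes(c))
    return result

def common_subtrees(n1, n2, lam, cache):
    """C(n1, n2): number of common sub-tree fragments rooted at n1 and n2,
    down-weighted by the decay factor lam."""
    key = (id(n1), id(n2))
    if key in cache:
        return cache[key]
    if production(n1) != production(n2):
        val = 0.0
    elif all(not c.children for c in n1.children):   # matching pre-terminals
        val = lam
    else:
        val = lam
        for c1, c2 in zip(n1.children, n2.children):
            val *= 1 + common_subtrees(c1, c2, lam, cache)
    cache[key] = val
    return val

def tree_kernel(t1, t2, lam=0.4):
    """K(T1, T2) = sum of C(n1, n2) over all pairs of internal nodes."""
    cache = {}
    return sum(common_subtrees(n1, n2, lam, cache)
               for n1 in internal_nodes(t1) for n2 in internal_nodes(t2))

# Toy example: two tiny NP fragments sharing the NP -> DT NN structure.
t1 = Node("NP", [Node("DT", [Node("the")]), Node("NN", [Node("company")])])
t2 = Node("NP", [Node("DT", [Node("the")]), Node("NN", [Node("firm")])])
print(tree_kernel(t1, t2))   # counts the shared DT sub-tree and the partial NP match
```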