Abstract: The pharmaceutical industry increasingly values medicinal plants due to their perceived safety and cost-effectiveness compared to modern drugs. Throughout the extensive history of medicinal plant usage, various plant parts, including flowers, leaves, and roots, have been acknowledged for their healing properties and employed in plant identification. Leaf images, however, stand out as the preferred and most easily accessible source of information. Manual plant identification by plant taxonomists is intricate, time-consuming, and prone to error, relying heavily on human perception. Artificial intelligence (AI) techniques offer a solution by automating plant recognition. This study thoroughly examines cutting-edge AI approaches for leaf-image-based plant identification, drawing insights from literature across renowned repositories. The paper critically summarizes the relevant literature in terms of AI algorithms, extracted features, and results achieved. Additionally, it analyzes the datasets most widely used in automated plant classification research and offers deep insights into the techniques and methods employed for medicinal plant recognition. Moreover, this review discusses the opportunities and challenges of employing these AI-based approaches. Finally, in-depth statistical findings and lessons learned from the survey are highlighted, along with novel research areas, with the aim of informing readers and motivating new research directions. This review is expected to serve as a foundational resource for future researchers in the field of AI-based identification of medicinal plants.
Funding: Supported by the National Natural Science Foundation of China (Nos. U22A2034 and 62177047), the High-Caliber Foreign Experts Introduction Plan funded by MOST, and the Central South University Research Programme of Advanced Interdisciplinary Studies (No. 2023QYJC020).
Abstract: Image captioning has gained increasing attention in recent years. The visual characteristics of the input image play a crucial role in generating high-quality captions. Prior studies have used visual attention mechanisms to dynamically focus on localized regions of the input image, improving the effectiveness of identifying relevant regions at each step of caption generation. However, giving image captioning models the capability to select the most relevant visual features from the input image and attend to them can significantly improve the utilization of these features and, consequently, the performance of the captioning network. In light of this, we present an image captioning framework that efficiently exploits the extracted image representations. Our framework comprises three key components: the Visual Feature Detector (VFD) module, the Visual Feature Visual Attention (VFVA) module, and the language model. The VFD module detects a subset of the most pertinent local visual features, creating an updated visual feature matrix. The VFVA module then attends to the feature matrix generated by the VFD, producing an updated context vector that the language model uses to generate an informative description. Integrating the VFD and VFVA modules introduces an additional layer of processing for the visual features, thereby enhancing the image captioning model's performance. Experiments on the MS-COCO dataset show that the proposed framework competes well with state-of-the-art methods, effectively leveraging visual representations to improve performance. The implementation code is available at https://github.com/althobhani/VFDICM (accessed on 30 July 2024).
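The select-then-attend pipeline described in this abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); the scoring weights `w_score`, the top-k selection rule, the bilinear attention weights `w_att`, and all dimensions below are illustrative assumptions. A VFD-like step scores each local region and keeps the top-k features; a VFVA-like step then computes soft attention over the selected features, conditioned on a decoder query, to form the context vector fed to the language model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def vfd(features, w_score, k):
    """Sketch of a Visual Feature Detector: score each local region
    and keep the top-k most relevant, forming an updated feature matrix."""
    scores = features @ w_score              # (N,) relevance score per region
    top = np.argsort(scores)[::-1][:k]       # indices of the k highest-scoring regions
    return features[top]                     # (k, D) selected feature matrix

def vfva(selected, query, w_att):
    """Sketch of Visual Feature Visual Attention: soft attention over the
    selected features, returning the context vector for the language model."""
    att = softmax(selected @ w_att @ query)  # (k,) attention weights
    return att @ selected                    # (D,) context vector

rng = np.random.default_rng(0)
N, D, k = 36, 8, 5                  # e.g. 36 local regions, 8-dim features
features = rng.normal(size=(N, D))  # local visual features from a CNN/encoder
w_score = rng.normal(size=D)        # hypothetical relevance-scoring weights
w_att = rng.normal(size=(D, D))     # hypothetical bilinear attention weights
query = rng.normal(size=D)          # e.g. the decoder's hidden state

sel = vfd(features, w_score, k)
ctx = vfva(sel, query, w_att)
print(sel.shape, ctx.shape)         # (5, 8) (8,)
```

In a trained model the scoring and attention weights would be learned jointly with the language model, and the selection step would typically be made differentiable (e.g. via soft gating) rather than a hard argsort; the sketch only shows the data flow of filtering features before attending to them.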