This study presents a single-class and multi-class instance segmentation approach applied to ancient Palmyrene inscriptions, employing two state-of-the-art deep learning algorithms, namely YOLOv8 and Roboflow 3.0. The goal is to contribute to the preservation and understanding of historical texts, showcasing the potential of modern deep learning methods in archaeological research. Our research culminates in several key findings and scientific contributions. We comprehensively compare the performance of YOLOv8 and Roboflow 3.0 in the context of Palmyrene character segmentation; this comparative analysis focuses mainly on the strengths and weaknesses of each algorithm in this setting. We also created and annotated an extensive dataset of Palmyrene inscriptions, a crucial resource for further research in the field, which serves for training and evaluating the segmentation models. We employ comparative evaluation metrics to quantitatively assess the segmentation results, ensuring the reliability and reproducibility of our findings, and we present custom visualization tools for predicted segmentation masks. Our study advances the state of the art in semi-automatic reading of Palmyrene inscriptions and establishes a benchmark for future research. The availability of the Palmyrene dataset and the insights into algorithm performance contribute to the broader understanding of historical text analysis.
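As an illustration of the kind of pipeline such a study relies on, the following is a minimal sketch of training and running an instance segmentation model with the Ultralytics YOLOv8 API; the dataset path, model size, and hyperparameters are placeholder assumptions, not the authors' configuration.

```python
from ultralytics import YOLO

# Start from a pretrained segmentation checkpoint (model size is an assumption).
model = YOLO("yolov8n-seg.pt")

# Train on a custom dataset described by a YOLO-format data.yaml
# (path and hyperparameters are illustrative placeholders).
model.train(data="palmyrene/data.yaml", epochs=100, imgsz=640)

# Run inference on a new inscription photograph and inspect the predicted masks.
results = model("inscription.jpg")
for r in results:
    if r.masks is not None:
        print(len(r.masks.data), "character instances,",
              "classes:", r.boxes.cls.tolist())
```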
The precise detection and segmentation of tumor lesions are very important for lung cancer computer-aided diagnosis. However, in PET/CT (Positron Emission Tomography/Computed Tomography) lung images, the lesion shapes are complex, the edges are blurred, and the sample numbers are unbalanced. To solve these problems, this paper proposes a Multi-branch Cross-scale Interactive Feature fusion Transformer model (MCIF-Transformer Mask RCNN) for PET/CT lung tumor instance segmentation. The main innovative works of this paper are as follows. Firstly, the ResNet-Transformer backbone network is used to extract global and local features in lung images; pixel dependence relationships are established in local and non-local fields to improve the model's perception ability. Secondly, the Cross-scale Interactive Feature Enhancement auxiliary network is designed to provide the shallow features to the deep features, and the cross-scale interactive feature enhancement module (CIFEM) is used to strengthen attention over fine-grained features. Thirdly, the Cross-scale Interactive Feature fusion FPN network (CIF-FPN) is constructed to realize bidirectional interactive fusion between deep and shallow features, so that low-level features are enhanced with deep semantic features. Finally, 4 ablation experiments, 3 detection comparison experiments, 3 segmentation comparison experiments and 6 comparison experiments with two-stage and single-stage instance segmentation networks are conducted on PET/CT lung medical image datasets. The results show that the APdet, APseg, ARdet and ARseg indexes are improved by 5.5%, 5.15%, 3.11% and 6.79% compared with Mask RCNN (ResNet50). Based on the above research, precise detection and segmentation of the lesion region are realized in this paper. This method has positive significance for the detection of lung tumors.
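The cross-scale fusion idea described above can be illustrated with a minimal PyTorch sketch that upsamples a deep feature map and fuses it with a shallow one; the channel sizes and the simple add-based fusion are assumptions for illustration and do not reproduce the paper's CIF-FPN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleCrossScaleFusion(nn.Module):
    """Fuse a deep (low-resolution) feature map into a shallow (high-resolution) one."""
    def __init__(self, deep_ch=256, shallow_ch=64, out_ch=64):
        super().__init__()
        self.reduce = nn.Conv2d(deep_ch, out_ch, kernel_size=1)   # align deep channels
        self.align = nn.Conv2d(shallow_ch, out_ch, kernel_size=1)
        self.refine = nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, shallow, deep):
        # Upsample deep semantics to the shallow resolution and add them in.
        deep_up = F.interpolate(self.reduce(deep), size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        return self.refine(self.align(shallow) + deep_up)

# Toy usage with assumed tensor shapes.
fused = SimpleCrossScaleFusion()(torch.randn(1, 64, 128, 128), torch.randn(1, 256, 32, 32))
print(fused.shape)  # torch.Size([1, 64, 128, 128])
```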
The real-time detection and instance segmentation of strawberries constitute fundamental components in the development of strawberry harvesting robots. Real-time identification of strawberries in an unstructured environment is a challenging task, and current instance segmentation algorithms for strawberries suffer from issues such as poor real-time performance and low accuracy. To this end, the present study proposes an Efficient YOLACT (E-YOLACT) algorithm for strawberry detection and segmentation based on the YOLACT framework. The key enhancements of the E-YOLACT encompass the development of a lightweight attention mechanism, pyramid squeeze shuffle attention (PSSA), for efficient feature extraction. Additionally, an attention-guided context-feature pyramid network (AC-FPN) is employed instead of the FPN to optimize the architecture's performance. Furthermore, a feature-enhanced model (FEM) is introduced to strengthen the prediction head's capabilities, while efficient fast non-maximum suppression (EF-NMS) is devised to improve non-maximum suppression. The experimental results demonstrate that the E-YOLACT achieves a Box-mAP and Mask-mAP of 77.9 and 76.6, respectively, on the custom dataset. Moreover, it exhibits an impressive category accuracy of 93.5%. Notably, the E-YOLACT also demonstrates a remarkable real-time detection capability with a speed of 34.8 FPS. The method proposed in this article presents an efficient approach for the vision system of a strawberry-picking robot.
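For context on what EF-NMS refines, here is a minimal NumPy sketch of standard IoU-based non-maximum suppression over predicted boxes; the threshold and box format (x1, y1, x2, y2 with scores) are assumptions, and the paper's EF-NMS variant is not reproduced here.

```python
import numpy as np

def nms(boxes, scores, iou_thr=0.5):
    """Greedy NMS: keep the highest-scoring box, drop strongly overlapping ones."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]   # keep only weakly overlapping boxes
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
print(nms(boxes, np.array([0.9, 0.8, 0.7])))  # -> [0, 2]
```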
Tea leaf picking is a crucial stage in tea production that directly influences the quality and value of the tea, and traditional tea-picking machines may compromise the quality of the leaves. High-quality teas are often handpicked and require more delicate operations from intelligent picking machines. Compared with traditional image processing techniques, deep learning models have stronger feature extraction capabilities and better generalization, and are more suitable for practical tea shoot harvesting. However, current research mostly focuses on shoot detection and cannot directly accomplish end-to-end shoot segmentation tasks. We propose a tea shoot instance segmentation model based on multi-scale mixed attention (Mask2FusionNet) using a dataset from a tea garden in Hangzhou. We further analyzed the characteristics of the tea shoot dataset, in which the proportion of small to medium-sized targets is 89.9%. Our algorithm is compared with several mainstream object segmentation algorithms, and the results demonstrate that our model achieves an accuracy of 82% in recognizing the tea shoots, a better performance than the other models. Through ablation experiments, we found that ResNet50, the PointRend strategy, and the Feature Pyramid Network (FPN) architecture improve performance by 1.6%, 1.4%, and 2.4%, respectively. These experiments demonstrate that our proposed multi-scale and point selection strategy optimizes the feature extraction capability for overlapping small targets. The results indicate that the proposed Mask2FusionNet model can perform shoot segmentation in unstructured environments, realizing the individual distinction of tea shoots and complete extraction of the shoot edge contours with a segmentation accuracy of 82.0%. The research results can provide algorithmic support for the segmentation and intelligent harvesting of premium tea shoots at different scales.
Dynamic Simultaneous Localization and Mapping (SLAM) in visual scenes is currently a major research area in fields such as robot navigation and autonomous driving. However, in the face of complex real-world environments, current dynamic SLAM systems struggle to achieve precise localization and map construction. With the advancement of deep learning, there has been increasing interest in deep learning-based dynamic SLAM visual odometry in recent years, and more researchers are turning to deep learning techniques to address the challenges of dynamic SLAM. Compared to dynamic SLAM systems based on deep learning methods such as object detection and semantic segmentation, dynamic SLAM systems based on instance segmentation can not only detect dynamic objects in the scene but also distinguish different instances of the same type of object, thereby reducing the impact of dynamic objects on the SLAM system's positioning. This article not only introduces traditional dynamic SLAM systems based on mathematical models but also provides a comprehensive analysis of existing instance segmentation algorithms and dynamic SLAM systems based on instance segmentation, comparing and summarizing their advantages and disadvantages. Through comparisons on datasets, it is found that instance segmentation-based methods have significant advantages in accuracy and robustness in dynamic environments. However, the real-time performance of instance segmentation algorithms hinders the widespread application of dynamic SLAM systems. In recent years, the rapid development of single-stage instance segmentation methods has brought hope for the widespread application of dynamic SLAM systems based on instance segmentation. Finally, possible future research directions and improvement measures are discussed for reference by relevant professionals.
Instance segmentation plays an important role in image processing. The Deep Snake algorithm, based on contour iteration, deforms an initial bounding box into an instance contour end-to-end, which can improve the performance of instance segmentation, but it has defects such as slow segmentation speed and a sub-optimal initial contour. To solve these problems, a real-time instance segmentation algorithm based on contour learning was proposed. Firstly, ShuffleNet V2 was used as the backbone network, and the receptive field of the model was expanded by using a 5×5 convolution kernel. Secondly, a lightweight up-sampling module, multi-stage aggregation (MSA), performs residual fusion of multi-layer features, which not only improves segmentation speed but also extracts effective features more comprehensively. Thirdly, a contour initialization method learned by the network was designed, and a global contour feature aggregation mechanism was used to regress a coarse contour, which addresses the excessive error between a manually initialized contour and the real contour. Finally, the Snake deformation module was used to iteratively optimize the coarse contour to obtain the final instance contour. The experimental results showed that the proposed method improved the instance segmentation accuracy on the semantic boundaries dataset (SBD), Cityscapes and KINS datasets, and the average precision reached 55.8 on the SBD. Compared with Deep Snake, the model parameters were reduced by 87.2%, the computation amount was reduced by 78.3%, and the segmentation speed reached 39.8 frame·s−1 when instance segmentation was performed on an image of 512×512 pixels on a 2080Ti GPU. The proposed method can reduce resource consumption and realize instance segmentation tasks quickly and accurately, and is therefore better suited to embedded platforms with limited resources.
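To make the contour-initialization idea concrete, the sketch below uniformly samples a fixed number of vertices along a detected bounding box, which is the usual starting contour that snake-style methods then deform; the point count and box format are illustrative assumptions, not the paper's learned initialization.

```python
import numpy as np

def init_contour_from_box(x1, y1, x2, y2, n_points=128):
    """Sample n_points evenly along the perimeter of a box as an initial contour."""
    corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2], [x1, y1]], dtype=float)
    seg_lens = np.linalg.norm(np.diff(corners, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg_lens)])
    targets = np.linspace(0.0, cum[-1], n_points, endpoint=False)
    pts = []
    for t in targets:
        i = np.searchsorted(cum, t, side="right") - 1   # which edge t falls on
        r = (t - cum[i]) / seg_lens[i]                   # fraction along that edge
        pts.append(corners[i] * (1 - r) + corners[i + 1] * r)
    return np.array(pts)                                 # (n_points, 2) contour vertices

contour = init_contour_from_box(10, 20, 110, 80)
print(contour.shape)  # (128, 2)
```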
We introduce a novel method using a new generative model that automatically learns effective representations of the target and background appearance to detect, segment and track each instance in a video sequence. Unlike current discriminative tracking-by-detection solutions, our proposed hierarchical structural embedding learning can predict higher-quality masks with accurate boundary details over spatio-temporal space via normalizing flows. We formulate the instance inference procedure as hierarchical spatio-temporal embedded learning across time and space. Given a video clip, our method first coarsely locates pixels belonging to a particular instance with a Gaussian distribution and then builds a novel mixing distribution to refine the instance boundary by fusing hierarchical appearance embedding information in a coarse-to-fine manner. For the mixing distribution, we utilize a factorized conditional normalizing-flow scheme to estimate the distribution parameters and improve segmentation performance. Comprehensive qualitative, quantitative, and ablation experiments are performed on three representative video instance segmentation benchmarks (i.e., YouTube-VIS19, YouTube-VIS21, and OVIS), and the effectiveness of the proposed method is demonstrated. More impressively, the superior performance of our model on an unsupervised video object segmentation dataset (i.e., DAVIS19) proves its generalizability. Our algorithm implementations are publicly available at https://github.com/zyqin19/HEVis.
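The coarse localization step described above, assigning pixels to an instance through a Gaussian over embedding space, can be sketched as follows; the embedding dimensionality, the isotropic variance, and the 0.5 cut-off are illustrative assumptions rather than the paper's learned parameters.

```python
import numpy as np

def gaussian_instance_mask(pixel_embeddings, center, sigma=1.0, threshold=0.5):
    """Soft-assign pixels to an instance via an isotropic Gaussian around its embedding center.

    pixel_embeddings: (H, W, D) per-pixel embedding vectors
    center:           (D,) embedding of the instance center
    """
    sq_dist = np.sum((pixel_embeddings - center) ** 2, axis=-1)
    prob = np.exp(-sq_dist / (2.0 * sigma ** 2))   # in (0, 1], equals 1 at the center
    return prob, prob > threshold                  # soft map and a coarse binary mask

emb = np.random.randn(64, 64, 8)
prob, mask = gaussian_instance_mask(emb, center=emb[32, 32], sigma=2.0)
print(prob.shape, int(mask.sum()))
```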
Autonomous driving technology has made many outstanding achievements with deep learning, and the vehicle detection and classification algorithm has become one of the critical technologies of autonomous driving systems. Vehicle instance segmentation can perform instance-level semantic parsing of vehicle information, which is more accurate and reliable than object detection. However, existing instance segmentation algorithms still suffer from poor mask prediction accuracy and low detection speed. Therefore, this paper proposes an advanced real-time instance segmentation model named FIR-YOLACT, which fuses the ICIoU (Improved Complete Intersection over Union) and Res2Net into the YOLACT algorithm. Specifically, the ICIoU function can effectively solve the degradation problem of the original CIoU loss function and improve the training convergence speed and detection accuracy. The Res2Net module, fused with the ECA (Efficient Channel Attention) Net, is added to the model's backbone network, which improves the multi-scale detection capability and mask prediction accuracy. Furthermore, the Cluster NMS (Non-Maximum Suppression) algorithm is introduced into the model's bounding box regression to enhance the performance of detecting similar occluded objects. The experimental results demonstrate the superiority of FIR-YOLACT over the baseline methods and the effectiveness of all components. The processing speed reaches 28 FPS, which meets the demands of real-time vehicle instance segmentation.
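For reference, the standard CIoU loss that ICIoU builds on can be written down in a few lines; the sketch below follows the commonly published CIoU formulation for axis-aligned boxes (x1, y1, x2, y2), and the paper's ICIoU modification is not reproduced.

```python
import math

def ciou_loss(box_a, box_b, eps=1e-7):
    """CIoU loss = 1 - IoU + center-distance term + aspect-ratio term."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter = max(0.0, min(ax2, bx2) - max(ax1, bx1)) * max(0.0, min(ay2, by2) - max(ay1, by1))
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    iou = inter / (union + eps)
    # Squared distance between box centers, normalized by the enclosing-box diagonal.
    rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2 + eps
    # Aspect-ratio consistency term.
    v = (4 / math.pi ** 2) * (math.atan((bx2 - bx1) / (by2 - by1 + eps))
                              - math.atan((ax2 - ax1) / (ay2 - ay1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)
    return 1 - iou + rho2 / c2 + alpha * v

print(round(ciou_loss((0, 0, 10, 10), (2, 2, 12, 12)), 4))
```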
Mature soybean phenotyping is an important process in soybean breeding; however, the manual process is time-consuming and labor-intensive. Therefore, a novel approach that is rapid, accurate and highly precise is required to obtain the phenotypic data of soybean stems, pods and seeds. In this research, we propose a mature soybean phenotype measurement algorithm called Soybean Phenotype Measure-instance Segmentation (SPM-IS). SPM-IS is based on a feature pyramid network, Principal Component Analysis (PCA) and instance segmentation. We also propose a new method that uses PCA to locate and measure the length and width of a target object via image instance segmentation. After 60,000 iterations, the maximum mean Average Precision (mAP) of the mask and box reached 95.7%. The correlation coefficients R² between the manual measurements and the SPM-IS measurements of pod length, pod width, stem length, complete main stem length, seed length and seed width were 0.9755, 0.9872, 0.9692, 0.9803, 0.9656 and 0.9716, respectively. The correlation coefficients R² between manual counting and SPM-IS counting of pods, stems and seeds were 0.9733, 0.9872 and 0.9851, respectively. The above results show that SPM-IS is a robust measurement and counting algorithm that can reduce labor intensity, improve efficiency and speed up the soybean breeding process.
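The PCA-based length/width idea, fitting the principal axes of an instance mask and measuring the extent of its pixels along them, can be sketched as below; treating the first and second principal components as length and width is the general technique, with the toy mask and pixel units assumed for illustration.

```python
import numpy as np

def mask_length_width(mask):
    """Estimate length and width of a binary mask from its principal axes (in pixels)."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                         # center the pixel coordinates
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvectors = principal axes
    major, minor = eigvecs[:, -1], eigvecs[:, -2]   # sorted ascending by eigenvalue
    proj_major = pts @ major
    proj_minor = pts @ minor
    length = proj_major.max() - proj_major.min()    # extent along the major axis
    width = proj_minor.max() - proj_minor.min()     # extent along the minor axis
    return length, width

mask = np.zeros((60, 60), dtype=bool)
mask[20:25, 5:55] = True                            # a 50x5 horizontal bar
print([round(v, 1) for v in mask_length_width(mask)])  # [49.0, 4.0]
```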
3D object recognition is a challenging task for intelligent and robot systems in industrial and home indoor environments. It is critical for such systems to recognize and segment the 3D object instances that they encounter on a frequent basis, and the problem has received much attention from the computer vision, graphics, and machine learning communities. Traditionally, 3D segmentation was done with hand-crafted features and purpose-designed approaches that did not achieve acceptable performance and could not be generalized to large-scale data. Deep learning approaches have lately become the preferred method for 3D segmentation challenges owing to their great success in 2D computer vision, yet the task of instance segmentation is currently less explored. In this paper, we propose a novel approach for efficient 3D instance segmentation using red green blue and depth (RGB-D) data based on deep learning. The 2D region-based convolutional neural network (Mask R-CNN) deep learning model with a point-based rendering module is adapted to integrate with depth information to recognize and segment 3D instances of objects. In order to generate 3D point cloud coordinates (x, y, z), segmented 2D pixels (u, v) of recognized object regions in the RGB image are merged into the (u, v) points of the depth image. Moreover, we conducted experiments and analysis to compare our proposed method from various viewpoints and distances. The experimentation shows that the proposed 3D object recognition and instance segmentation are sufficiently beneficial to support object handling in robotic and intelligent systems.
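Merging segmented pixels with depth into 3D points is typically done with the pinhole back-projection shown below; the camera intrinsics used here are placeholder values, and depth is assumed to be in metres and aligned with the RGB image.

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel coordinates (u, v) with depth z into camera-frame 3D points."""
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Assumed intrinsics for illustration (not from the paper).
fx = fy = 525.0
cx, cy = 319.5, 239.5

# Pixels belonging to one segmented instance and their depth values (metres).
us = np.array([300, 310, 320])
vs = np.array([200, 205, 210])
zs = np.array([0.80, 0.81, 0.79])

points = backproject(us, vs, zs, fx, fy, cx, cy)
print(points.round(3))   # (3, 3) array of x, y, z coordinates
```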
Efficient and accurate segmentation of complex microstructures is a critical challenge in establishing process-structure-property (PSP) linkages of materials. Deep learning (DL)-based instance segmentation algorithms show potential in achieving this goal. However, to ensure prediction reliability, current algorithms usually have complex structures and demand vast training data. To overcome the model complexity and its dependence on the amount of data, we developed an ingenious DL framework based on a simple method called dual-layer semantics. In the framework, a data standardization module was designed to remove extraneous microstructural noise and accentuate the desired structural characteristics, while a post-processing module was employed to further improve segmentation accuracy. The framework was successfully applied to a small dataset of bimodal Ti-6Al-4V microstructures with only 112 samples. Compared with the ground truth, it achieves an IoU accuracy of 86.81% for the globular α phase and a 94.70% average size-distribution similarity for the colony structures. More importantly, only 36 s was needed to handle a 1024 × 1024 micrograph, which is much faster than treatment by experienced experts (usually 900 s). The framework proved reliable, interpretable, and scalable, enabling its use on complex microstructures to deepen the understanding of PSP linkages.
Edible mushrooms are rich in nutrients; however, harvesting mainly relies on manual labor. Coarse localization of each mushroom is necessary to enable a robotic arm to pick edible mushrooms accurately. Previous studies used detection algorithms that did not consider mushroom pixel-level information; when these algorithms are combined with a depth map, that information is lost. Moreover, among instance segmentation algorithms, convolutional neural network (CNN)-based methods are lightweight, but the extracted features are not globally correlated. To guarantee real-time location detection and improve the accuracy of mushroom segmentation, this study proposed a new spatial-channel transformer network model based on Mask-RCNN (SCT-Mask-RCNN). The fusion of Mask-RCNN with a self-attention mechanism extracts the global correlation of image features along the channel and spatial dimensions. Subsequently, Mask-RCNN was used to maintain a lightweight structure and extract local features using a spatial pooling pyramidal structure to achieve multiscale local feature fusion and improve detection accuracy. The results showed that the SCT-Mask-RCNN method achieved a segmentation accuracy of 0.750 on segm_Precision_mAP and a detection accuracy of 0.638 on Bbox_Precision_mAP. Compared to existing methods, the proposed method improved the evaluation metrics Bbox_Precision_mAP and segm_Precision_mAP by over 2% and 5%, respectively.
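As a rough illustration of attending over channel and spatial dimensions, the following PyTorch sketch applies a squeeze-style channel gate followed by a spatial gate to a feature map; it is a generic attention block written for illustration and is not the paper's SCT module.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Re-weight a feature map along its channel axis, then along its spatial axes."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                     # global average pooling
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),   # mix avg- and max-pooled maps
            nn.Sigmoid(),
        )

    def forward(self, x):
        x = x * self.channel_gate(x)                                  # channel attention
        pooled = torch.cat([x.mean(dim=1, keepdim=True),
                            x.amax(dim=1, keepdim=True)], dim=1)
        return x * self.spatial_gate(pooled)                          # spatial attention

feat = torch.randn(1, 64, 32, 32)
print(ChannelSpatialAttention(64)(feat).shape)  # torch.Size([1, 64, 32, 32])
```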
Cell instance segmentation is a fundamental task for many biological applications, especially for packed cells in three-dimensional (3D) microscope images that can fully display cellular morphology. Image processing algorithms based on neural networks and feature engineering have enabled great progress in two-dimensional (2D) instance segmentation. However, current methods cannot achieve high segmentation accuracy for irregular cells in 3D images. In this study, we introduce a universal, morphology-based 3D instance segmentation algorithm called Crop Once Merge Twice (C1M2), which can segment cells from a wide range of image types and does not require nucleus images. C1M2 can be extended to quantify the fluorescence intensity of fluorescent proteins and antibodies and to automatically annotate their expression levels in individual cells. Our results suggest that C1M2 can serve as a tissue cytometry tool for 3D histopathological assays by quantifying fluorescence intensity with spatial localization and morphological information.
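Quantifying fluorescence per segmented cell reduces to averaging intensities inside each instance label, as in the short sketch below; the label volume and fluorescence channel here are synthetic stand-ins, not real C1M2 outputs.

```python
import numpy as np

def per_cell_intensity(labels, fluorescence):
    """Mean fluorescence intensity for every labelled instance (label 0 = background)."""
    stats = {}
    for cell_id in np.unique(labels):
        if cell_id == 0:
            continue
        stats[int(cell_id)] = float(fluorescence[labels == cell_id].mean())
    return stats

# Synthetic 3D label volume with two "cells" and a matching fluorescence channel.
labels = np.zeros((4, 32, 32), dtype=int)
labels[:, 5:15, 5:15] = 1
labels[:, 20:30, 20:30] = 2
fluor = np.random.rand(4, 32, 32) + labels          # cell 2 is brighter on average
print(per_cell_intensity(labels, fluor))
```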
Instance segmentation has drawn mounting attention due to its significant utility. However, high computational costs have been widely acknowledged in this domain, as the instance mask is generally achieved by pixel-level labeling. In this paper, we present a conceptually efficient contour regression network based on the you only look once (YOLO) architecture, named YOLO-CORE, for instance segmentation. The mask of the instance is efficiently acquired by explicit and direct contour regression using our designed multi-order constraint, which consists of a polar distance loss and a sector loss. Our proposed YOLO-CORE yields impressive segmentation performance in terms of both accuracy and speed. It achieves 57.9% AP@0.5 at 47 FPS (frames per second) on the semantic boundaries dataset (SBD) and 51.1% AP@0.5 at 46 FPS on the COCO dataset. The superior performance achieved by our method with explicit contour regression suggests a new technical direction in the YOLO-based image understanding field. Moreover, our instance segmentation design can be flexibly integrated into existing deep detectors with negligible computation cost (65.86 BFLOPs (billion floating-point operations) to 66.15 BFLOPs with the YOLOv3 detector).
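The polar-distance idea behind the contour regression, describing an instance mask as a fixed set of center-to-boundary distances, can be sketched as follows; the ray count and the simple max-radius-per-angular-bin approximation are assumptions for illustration, and the loss terms themselves are not shown.

```python
import numpy as np

def mask_to_polar(mask, n_rays=36):
    """Approximate a binary mask by n_rays center-to-boundary distances."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()           # mass center of the instance
    angles = np.arctan2(ys - cy, xs - cx)   # angle of every foreground pixel
    radii = np.hypot(ys - cy, xs - cx)      # distance of every foreground pixel
    bins = ((angles + np.pi) / (2 * np.pi) * n_rays).astype(int) % n_rays
    dists = np.zeros(n_rays)
    for b in range(n_rays):
        sel = radii[bins == b]
        if sel.size:
            dists[b] = sel.max()            # farthest foreground pixel per ray
    return (cy, cx), dists

mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 20:44] = True                   # a rectangular toy instance
center, dists = mask_to_polar(mask)
print(len(dists), round(dists.max(), 1))    # 36 rays; max radius reaches the corners
```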
The application of robotic grasping to agricultural products pushes automation in agriculture-related industries. Cucumber, a common vegetable in greenhouses and supermarkets, often needs to be grasped from a cluttered scene. In order to realize efficient grasping in cluttered scenes, a fully automatic cucumber recognition, grasping, and palletizing robot system was constructed in this paper. The system adopts the Yolact++ deep learning network to segment cucumber instances. An early-fusion method, F-RGBD, was proposed, which increases the algorithm's ability to discriminate appearance-similar cucumbers at different depths and different degrees of occlusion. A comparative experiment between the F-RGBD dataset and the common RGB dataset on Yolact++ proves the positive effect of the F-RGBD fusion method: its segmentation masks have higher quality, are more continuous, and produce fewer false positives for prioritized-grasping prediction. Based on the segmentation result, a 4D grab-line prediction method was proposed for cucumber grasping, and a real-world cucumber detection experiment in cluttered scenarios was carried out. The success rate is 93.67% and the average sorting time is 9.87 s. The effectiveness of the cucumber segmentation and grasping pose acquisition method is verified by experiments.
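Early fusion of color and depth, in its simplest form, stacks a normalized depth map as an extra input channel and widens the network's first convolution accordingly, as sketched below; this generic RGB-D stacking is an assumption for illustration and does not reproduce the specific F-RGBD construction.

```python
import torch
import torch.nn as nn

def fuse_rgbd(rgb, depth, max_depth=2.0):
    """Stack RGB (3xHxW, in [0,1]) and a depth map (HxW, metres) into a 4-channel input."""
    depth_norm = (depth / max_depth).clamp(0, 1).unsqueeze(0)   # scale depth to [0, 1]
    return torch.cat([rgb, depth_norm], dim=0)                  # (4, H, W)

# A backbone stem widened to accept the extra depth channel (channel count assumed).
stem = nn.Conv2d(in_channels=4, out_channels=64, kernel_size=7, stride=2, padding=3)

rgb = torch.rand(3, 480, 640)
depth = torch.rand(480, 640) * 2.0
x = fuse_rgbd(rgb, depth).unsqueeze(0)     # add batch dimension -> (1, 4, 480, 640)
print(stem(x).shape)                       # torch.Size([1, 64, 240, 320])
```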
Recognizing 3D part instances from a 3D point cloud is crucial for 3D structure and scene understanding. Several learning-based approaches use semantic segmentation and instance center prediction as training tasks but fail to further exploit the inherent relationship between shape semantics and part instances. In this paper, we present a new method for 3D part instance segmentation. Our method exploits semantic segmentation to fuse non-local instance features, such as center predictions, and further enhances the fusion scheme in a multi- and cross-level way. We also propose a semantic region center prediction task for training and leverage the prediction results to improve the clustering of instance points. Our method outperforms existing methods by a large margin on the PartNet benchmark. We also demonstrate that our feature fusion scheme can be applied to other existing methods to improve their performance on indoor scene instance segmentation tasks.
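A common way such center predictions are turned into instances is to shift every point by its predicted offset and cluster the shifted points, as in the sketch below; DBSCAN and its parameters here are illustrative choices rather than the paper's clustering procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_by_centers(points, offsets, eps=0.05, min_samples=10):
    """Group points into instances by clustering points shifted toward their predicted centers.

    points:  (N, 3) xyz coordinates
    offsets: (N, 3) predicted vectors pointing to each point's instance center
    """
    shifted = points + offsets                       # points collapse near their centers
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(shifted)  # -1 = noise

# Two synthetic blobs whose offsets point to their respective centroids.
rng = np.random.default_rng(0)
pts = np.concatenate([rng.normal(0.0, 0.02, (50, 3)), rng.normal(0.5, 0.02, (50, 3))])
centers = np.concatenate([np.zeros((50, 3)), np.full((50, 3), 0.5)])
labels = cluster_by_centers(pts, centers - pts)
print(np.unique(labels))   # two instance ids, e.g. [0 1]
```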
In actual traffic scenarios, precise recognition of traffic participants, such as vehicles and pedestrians, is crucial for intelligent transportation. This study proposes an improved algorithm built on Mask-RCNN to enhance the ability of autonomous driving systems to recognize traffic participants. The algorithm incorporates long short-term memory networks and a fused attention module (GSAM, GCT, and Spatial Attention Module) to enhance its capability to process both global and local information. Additionally, to increase the network's initial operating stability, the original activation function was replaced with the Gaussian error linear unit (GELU). Experiments were conducted using the publicly available Cityscapes dataset. A comparison of the test results shows that the revised algorithm outperformed the original in terms of AP50, AP75 and other metrics, by 8.7% and 9.6% for object detection and 12.5% and 13.3% for segmentation.
Skin defect inspection is one of the most significant tasks in the conventional process of aircraft inspection. This paper proposes a vision-based method for pixel-level defect detection based on the Mask Scoring R-CNN. First, an attention mechanism and a feature fusion module are introduced to improve feature representation. Second, a new classifier head, consisting of four convolutional layers and a fully connected layer, is proposed to reduce the influence of information around the defect area. Third, to evaluate the proposed method, a dataset of aircraft skin defects was constructed, containing 276 images with a resolution of 960×720 pixels. Experimental results show that the proposed classifier head improves detection and segmentation accuracy for aircraft skin defect inspection more effectively than the attention mechanism and the feature fusion module. Compared with the Mask R-CNN and Mask Scoring R-CNN, the proposed method increased the segmentation precision by approximately 21% and 19.59%, respectively. These results demonstrate that the proposed method performs favorably against the other two methods of pixel-level aircraft skin defect detection.
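A head of the shape described, four convolutional layers followed by a fully connected classifier, can be sketched in PyTorch as below; the channel widths, ROI size, and class count are assumptions for illustration, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvFCClassifierHead(nn.Module):
    """Four 3x3 conv layers followed by a fully connected classification layer."""
    def __init__(self, in_channels=256, num_classes=2, roi_size=7):
        super().__init__()
        layers = []
        for _ in range(4):
            layers += [nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True)]
        self.convs = nn.Sequential(*layers)
        self.fc = nn.Linear(in_channels * roi_size * roi_size, num_classes)

    def forward(self, roi_feats):                  # (N, C, roi_size, roi_size)
        x = self.convs(roi_feats)
        return self.fc(x.flatten(start_dim=1))     # (N, num_classes) class logits

rois = torch.randn(8, 256, 7, 7)                   # eight pooled ROI feature maps
print(ConvFCClassifierHead()(rois).shape)          # torch.Size([8, 2])
```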
The process of segmenting point cloud data into several homogeneous regions, with points in the same region having the same attributes, is known as 3D segmentation. Segmentation is challenging with point cloud data due to substantial redundancy, fluctuating sample density and lack of apparent organization. The research area has a wide range of robotics applications, including intelligent vehicles, autonomous mapping and navigation, and a number of researchers have introduced various methodologies and algorithms. Deep learning has been successfully applied to a spectrum of 2D vision domains as the prevailing AI methodology. However, due to the specific problems of processing point clouds with deep neural networks, deep learning on point clouds is still in its initial stages. This study examines many strategies that have been presented for 3D instance and semantic segmentation and gives a complete assessment of current developments in deep learning-based 3D segmentation. These approaches' benefits, drawbacks, and design mechanisms are studied and addressed. The study also evaluates the competitiveness of various segmentation algorithms on publicly accessible datasets, as well as the most frequently used pipelines, their advantages and limits, insightful findings and intriguing future research directions.
Instance co-segmentation aims to segment the co-occurrent instances between two images. This task relies heavily on instance-related cues provided by co-peaks, which are generally estimated by exhaustively exploiting all paired candidates in point-to-point patterns. However, such patterns can yield a high number of false-positive co-peaks, resulting in over-segmentation whenever there are mutual occlusions. To tackle this issue, this paper proposes an instance co-segmentation method via tensor-based salient co-peak search (TSCPS-ICS). The proposed method explores high-order correlations via triple-to-triple matching among feature maps to find reliable co-peaks with the help of co-saliency detection. The proposed method is shown to capture more accurate intra-peaks and inter-peaks among feature maps, reducing the false-positive rate of the co-peak search. With accurate co-peaks, one can efficiently infer the responses of the targeted instance. Experiments on four benchmark datasets validate the superior performance of the proposed method.
Funding (Palmyrene inscription segmentation study): The results and knowledge included herein have been obtained owing to support from the following institutional grant. Internal grant agency of the Faculty of Economics and Management, Czech University of Life Sciences Prague, Grant No. 2023A0004, "Text Segmentation Methods of Historical Alphabets in OCR Development", https://iga.pef.czu.cz/. Funds were granted to T. Novák, A. Hamplová, O. Svojše, and A. Veselý from the author team.
Funding (PET/CT lung tumor segmentation study): Funded by the National Natural Science Foundation of China (No. 62062003) and the Ningxia Natural Science Foundation Project (No. 2023AAC03293).
Funding (strawberry E-YOLACT study): Funded by the Anhui Provincial Natural Science Foundation (No. 2208085ME128), the Anhui University-Level Special Project of Anhui University of Science and Technology (No. XCZX2021-01), the Research and Development Fund of the Institute of Environmental Friendly Materials and Occupational Health, Anhui University of Science and Technology (No. ALW2022YF06), and the Anhui Province New Era Education Quality Project (Graduate Education) (No. 2022xscx073).
Funding (tea shoot segmentation study): This research was supported by the National Natural Science Foundation of China (No. 62276086), the National Key R&D Program of China (No. 2022YFD2000100), and the Zhejiang Provincial Natural Science Foundation of China under Grant No. LTGN23D010002.
Funding (dynamic SLAM survey): Supported by the National Natural Science Foundation of China (No. 62063006), the Natural Science Foundation of Guangxi Province (No. 2023GXNSFAA026025), the Innovation Fund of Chinese Universities Industry-University-Research (ID: 2021RYC06005), the Research Project for Young and Middle-Aged Teachers in Guangxi Universities (ID: 2020KY15013), and the Special Research Project of Hechi University (ID: 2021GCC028); financially supported by the Project of Outstanding Thousand Young Teachers' Training in Higher Education Institutions of Guangxi, Guangxi Colleges and Universities Key Laboratory of AI and Information Processing (Hechi University), Education Department of Guangxi Zhuang Autonomous Region.
Funding (contour-learning instance segmentation study): Supported by the National Key Research and Development Program (No. 2022YFE0112400), the National Natural Science Foundation of China (No. 21706096), and the Natural Science Foundation of Jiangsu Province (No. BK20160162).
Funding (video instance segmentation study): Supported in part by the National Natural Science Foundation of China (62176139, 62106128, 62176141), the Major Basic Research Project of Shandong Natural Science Foundation (ZR2021ZD15), the Natural Science Foundation of Shandong Province (ZR2021QF001), the Young Elite Scientists Sponsorship Program by CAST (2021QNRC001), the Open Project of the Key Laboratory of Artificial Intelligence, Ministry of Education, the Shandong Provincial Natural Science Foundation for Distinguished Young Scholars (ZR2021JQ26), and the Taishan Scholar Project of Shandong Province (tsqn202103088).
Funding (FIR-YOLACT vehicle segmentation study): Supported by the Natural Science Foundation of Guizhou Province (Grant Number: 20161054), the Joint Natural Science Foundation of Guizhou Province (Grant Number: LH20177226), the 2017 Special Project of New Academic Talent Training and Innovation Exploration of Guizhou University (Grant Number: 20175788), and the National Natural Science Foundation of China under Grant No. 12205062.
Funding (soybean phenotyping study): Supported by the National Natural Science Foundation of China (31400074, 31471516, 31271747, and 30971809), the Natural Science Foundation of Heilongjiang Province of China (ZD201213), and the Heilongjiang Postdoctoral Science Foundation (LBH-Q18025).
Funding (microstructure segmentation study): Supported by the National Key R&D Program of China (Grant No. 2023YFB4606502), the National Natural Science Foundation of China (Grant Nos. 51871183 and 51874245), and the Research Fund of the State Key Laboratory of Solidification Processing (NPU), China (Grant No. 2020-TS-06); sponsored by the Practice and Innovation Funds for Graduate Students of Northwestern Polytechnical University.
Funding (edible mushroom segmentation study): Supported by the China Agriculture Research System of MOF and MARA (CARS-20), the Zhejiang Provincial Key Laboratory of Agricultural Intelligent Equipment and Robotics Open Fund (2023ZJZD2301), the Chinese Academy of Agricultural Science and Technology Innovation Project "Fruit and Vegetable Production and Processing Technical Equipment Team" (2024), and the Beijing Nova Program (20220484023).
Funding (3D cell segmentation study): Supported by the National Key Research and Development Program of China (2017YFA0700403, 2017YFA0700402), the National Natural Science Foundation of China (62061160490), the Applied Fundamental Research of Wuhan (2020010601012167), the Fundamental Research Funds for the Central Universities (2019kfyXMBZ022), and the Innovation Fund of Wuhan National Laboratory for Optoelectronics (WNLO).
Funding: Supported by the National Key R&D Program of China (Nos. 2018AAA0100104 and 2018AAA0100100) and the Natural Science Foundation of Jiangsu Province, China (No. BK20211164).
Abstract: Instance segmentation has drawn mounting attention due to its significant utility. However, high computational cost is widely acknowledged in this domain, as the instance mask is generally obtained by pixel-level labeling. In this paper, we present a conceptually efficient contour regression network based on the you only look once (YOLO) architecture, named YOLO-CORE, for instance segmentation. The mask of an instance is efficiently acquired by explicit and direct contour regression using our designed multi-order constraint, which consists of a polar distance loss and a sector loss. Our proposed YOLO-CORE yields impressive segmentation performance in terms of both accuracy and speed. It achieves 57.9% AP@0.5 at 47 FPS (frames per second) on the semantic boundaries dataset (SBD) and 51.1% AP@0.5 at 46 FPS on the COCO dataset. The superior performance achieved by our method with explicit contour regression suggests a new technical line in the YOLO-based image understanding field. Moreover, our instance segmentation design can be flexibly integrated into existing deep detectors with negligible computational cost (65.86 BFLOPs (billion floating-point operations) to 66.15 BFLOPs with the YOLOv3 detector).
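To make the contour-regression idea concrete, here is a minimal sketch of encoding an instance contour as N polar ray lengths and penalizing them with a smooth-L1 term. The paper's exact polar distance loss, its weighting, and the sector loss are not reproduced; the angular binning below is only one plausible encoding.

```python
# Polar encoding of a contour and a simple ray-length regression loss.
import torch
import torch.nn.functional as F

def contour_to_polar(contour_xy: torch.Tensor, center_xy: torch.Tensor, n_rays: int = 36):
    """Convert a dense contour (K x 2) into N ray lengths at fixed angles around the centre."""
    rel = contour_xy - center_xy                          # K x 2, centre-relative coordinates
    angles = torch.atan2(rel[:, 1], rel[:, 0])            # K angles in [-pi, pi]
    dists = rel.norm(dim=1)                               # K distances from the centre
    bins = ((angles + torch.pi) / (2 * torch.pi) * n_rays).long().clamp(0, n_rays - 1)
    rays = torch.zeros(n_rays)
    for b in range(n_rays):                               # keep the farthest point per angular bin
        sel = dists[bins == b]
        rays[b] = sel.max() if sel.numel() else 0.0
    return rays

def polar_distance_loss(pred_rays: torch.Tensor, gt_rays: torch.Tensor) -> torch.Tensor:
    """Smooth-L1 between predicted and ground-truth ray lengths (illustrative loss only)."""
    return F.smooth_l1_loss(pred_rays, gt_rays)
```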
Funding: Supported by the Beijing Innovation Consortium of Agriculture Research System (BAIC12).
Abstract: The application of robotic grasping to agricultural products pushes automation in agriculture-related industries. Cucumbers, a common vegetable in greenhouses and supermarkets, often need to be grasped from a cluttered scene. To realize efficient grasping in cluttered scenes, a fully automatic cucumber recognition, grasping, and palletizing robot system was constructed in this paper. The system adopts the Yolact++ deep learning network to segment cucumber instances. An early fusion method, F-RGBD, is proposed, which increases the algorithm's ability to discriminate between similar-looking cucumbers at different depths and different degrees of occlusion. A comparative experiment between the F-RGBD dataset and a common RGB dataset on Yolact++ confirms the positive effect of the F-RGBD fusion method: its segmentation masks have higher quality, are more continuous, and produce fewer false positives for prioritized-grasping prediction. Based on the segmentation result, a 4D grab line prediction method is proposed for cucumber grasping, and a cucumber detection experiment in cluttered scenarios is carried out in the real world. The success rate is 93.67% and the average sorting time is 9.87 s. The effectiveness of the cucumber segmentation and grasping pose acquisition method is verified by experiments.
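A minimal sketch of early RGB-D fusion of the kind F-RGBD builds on is shown below: the depth map is clipped, normalized, and stacked as a fourth input channel. The specific filtering used to form F-RGBD is not described in the abstract, so the normalization and the working-distance bounds are placeholders.

```python
# Early fusion of RGB and depth into a 4-channel input (placeholder normalization).
import numpy as np

def fuse_rgbd(rgb: np.ndarray, depth: np.ndarray,
              d_min: float = 300.0, d_max: float = 1200.0) -> np.ndarray:
    """Return an H x W x 4 array: RGB in [0, 1] plus a clipped, scaled depth channel.

    d_min / d_max are hypothetical working-distance bounds (in mm) of the depth camera.
    """
    rgb_n = rgb.astype(np.float32) / 255.0                   # H x W x 3
    d = np.clip(depth.astype(np.float32), d_min, d_max)
    d_n = (d - d_min) / (d_max - d_min)                      # H x W, scaled to [0, 1]
    return np.concatenate([rgb_n, d_n[..., None]], axis=-1)  # H x W x 4
```

With a four-channel input like this, the first convolution of the segmentation network (Yolact++ in this case) would also need to accept four channels instead of three.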
Abstract: Recognizing 3D part instances from a 3D point cloud is crucial for 3D structure and scene understanding. Several learning-based approaches use semantic segmentation and instance center prediction as training tasks but fail to further exploit the inherent relationship between shape semantics and part instances. In this paper, we present a new method for 3D part instance segmentation. Our method exploits semantic segmentation to fuse nonlocal instance features, such as center predictions, and further enhances the fusion scheme in a multi- and cross-level way. We also propose a semantic region center prediction task for training and leverage its prediction results to improve the clustering of instance points. Our method outperforms existing methods by a large margin on the PartNet benchmark. We also demonstrate that our feature fusion scheme can be applied to other existing methods to improve their performance on indoor scene instance segmentation tasks.
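Such methods typically group points into instances by clustering them around their predicted centers. The sketch below shows that generic grouping step (offset-shifted points clustered with DBSCAN); it illustrates the idea the abstract refers to, not the paper's own clustering scheme, and the eps/min_points values are arbitrary.

```python
# Centre-based grouping of points into instances (generic illustration).
import numpy as np
from sklearn.cluster import DBSCAN

def group_points_by_center(points: np.ndarray, pred_offsets: np.ndarray,
                           eps: float = 0.05, min_points: int = 20) -> np.ndarray:
    """points, pred_offsets: N x 3 arrays. Returns an N-vector of instance ids (-1 = noise)."""
    shifted = points + pred_offsets            # points collapse towards their predicted centres
    return DBSCAN(eps=eps, min_samples=min_points).fit_predict(shifted)
```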
Funding: Supported by the National Natural Science Foundation of China (52175236) and the Qingdao People's Livelihood Science and Technology Plan (19-6-1-88-nsh).
Abstract: In actual traffic scenarios, precise recognition of traffic participants, such as vehicles and pedestrians, is crucial for intelligent transportation. This study proposes an improved algorithm built on Mask-RCNN to enhance the ability of autonomous driving systems to recognize traffic participants. The algorithm incorporates long short-term memory networks and the fused attention module (GSAM, GCT, and Spatial Attention Module) to enhance its capability to process both global and local information. Additionally, to increase the network's initial operation stability, the original activation function was replaced with the Gaussian error linear unit. Experiments were conducted on the publicly available Cityscapes dataset. Comparing the test results, the revised algorithm outperformed the original on metrics such as AP_(50) and AP_(75), by 8.7% and 9.6% for target detection and by 12.5% and 13.3% for segmentation.
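As a small illustration of the activation swap mentioned above, the helper below recursively replaces every ReLU in a PyTorch model with the Gaussian error linear unit, GELU(x) = x·Φ(x); the attention modules (GSAM, GCT, Spatial Attention Module) are not reproduced here.

```python
# Swap ReLU activations for GELU throughout a PyTorch model.
import torch.nn as nn

def replace_relu_with_gelu(model: nn.Module) -> nn.Module:
    """Recursively replace every nn.ReLU submodule of `model` with nn.GELU."""
    for name, child in model.named_children():
        if isinstance(child, nn.ReLU):
            setattr(model, name, nn.GELU())
        else:
            replace_relu_with_gelu(child)      # recurse into nested modules
    return model
```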
Funding: National Natural Science Foundation of China (Nos. U2033201 and U1633105).
Abstract: Skin defect inspection is one of the most significant tasks in the conventional process of aircraft inspection. This paper proposes a vision-based method for pixel-level defect detection based on the Mask Scoring R-CNN. First, an attention mechanism and a feature fusion module are introduced to improve feature representation. Second, a new classifier head, consisting of four convolutional layers and a fully connected layer, is proposed to reduce the influence of information surrounding the defect area. Third, to evaluate the proposed method, a dataset of aircraft skin defects was constructed, containing 276 images with a resolution of 960×720 pixels. Experimental results show that the proposed classifier head improves detection and segmentation accuracy for aircraft skin defect inspection more effectively than the attention mechanism and the feature fusion module. Compared with the Mask R-CNN and Mask Scoring R-CNN, the proposed method increases segmentation precision by approximately 21% and 19.59%, respectively. These results demonstrate that the proposed method performs favorably against the other two methods for pixel-level aircraft skin defect detection.
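The classifier-head layout described above (four convolutional layers followed by a fully connected layer on RoI features) can be sketched in PyTorch as follows; channel widths, kernel sizes, and the 7×7 RoI size are assumptions rather than the paper's exact configuration.

```python
# A conv + FC classifier head for RoI features (layout only; hyperparameters assumed).
import torch
import torch.nn as nn

class ConvFCClassifierHead(nn.Module):
    def __init__(self, in_channels: int = 256, num_classes: int = 2, roi_size: int = 7):
        super().__init__()
        convs = []
        for _ in range(4):                                 # four 3x3 convolutional layers
            convs += [nn.Conv2d(in_channels, in_channels, 3, padding=1),
                      nn.BatchNorm2d(in_channels),
                      nn.ReLU(inplace=True)]
        self.convs = nn.Sequential(*convs)
        self.fc = nn.Linear(in_channels * roi_size * roi_size, num_classes)

    def forward(self, roi_feats: torch.Tensor) -> torch.Tensor:
        x = self.convs(roi_feats)                          # B x C x roi_size x roi_size
        return self.fc(x.flatten(1))                       # per-RoI class logits

# Example: logits = ConvFCClassifierHead()(torch.randn(8, 256, 7, 7))
```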
Funding: This research was supported by BB21 plus, funded by Busan Metropolitan City and the Busan Institute for Talent and Lifelong Education (BIT), and by a grant from the Tongmyong University Innovated University Research Park (I-URP), funded by Busan Metropolitan City, Republic of Korea.
Abstract: 3D segmentation is the process of partitioning point cloud data into several homogeneous regions such that points in the same region share the same attributes. Segmentation of point cloud data is challenging due to substantial redundancy, fluctuating sample density, and the lack of apparent organization. The research area has a wide range of robotics applications, including intelligent vehicles, autonomous mapping, and navigation, and a number of researchers have introduced various methodologies and algorithms. Deep learning, a prevailing AI methodology, has been successfully applied to a spectrum of 2D vision domains; however, owing to the specific difficulties of processing point clouds with deep neural networks, deep learning on point clouds is still in its initial stages. This study examines the many strategies that have been proposed for 3D instance and semantic segmentation and gives a complete assessment of current developments in deep learning-based 3D segmentation. The benefits, drawbacks, and design mechanisms of these approaches are studied and addressed. The study also evaluates the competitiveness of various segmentation algorithms on publicly accessible datasets and discusses the most commonly used pipelines, their advantages and limits, insightful findings, and promising future research directions.
Funding: Supported in part by the National Natural Science Foundation of China (Grant Nos. U21A20520, 62172112), the Key-Area Research and Development of Guangdong Province (2022A0505050014, 2020B1111190001), the National Key Research and Development Program of China (2022YFE0112200), and the Key-Area Research and Development Program of Guangzhou City (202206030009).
Abstract: Instance co-segmentation aims to segment the co-occurring instances between two images. This task relies heavily on instance-related cues provided by co-peaks, which are generally estimated by exhaustively examining all paired candidates in point-to-point patterns. However, such patterns can yield a large number of false-positive co-peaks, resulting in over-segmentation whenever there are mutual occlusions. To tackle this issue, this paper proposes an instance co-segmentation method via tensor-based salient co-peak search (TSCPS-ICS). The proposed method explores high-order correlations via triple-to-triple matching among feature maps to find reliable co-peaks with the help of co-saliency detection. The method captures more accurate intra-peaks and inter-peaks among feature maps, reducing the false-positive rate of the co-peak search. With accurate co-peaks, one can efficiently infer the responses of the targeted instance. Experiments on four benchmark datasets validate the superior performance of the proposed method.
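For contrast with the triple-to-triple matching described above, the sketch below implements the simpler point-to-point co-peak pattern that the abstract says such methods usually start from: pairwise affinities between all spatial positions of two feature maps, keeping mutually maximal pairs as co-peak candidates. Saliency weighting and the tensor-based search of TSCPS-ICS are not included.

```python
# Point-to-point co-peak candidates via mutually maximal affinities (baseline illustration).
import numpy as np

def point_to_point_copeaks(feat_a: np.ndarray, feat_b: np.ndarray, top_k: int = 10):
    """feat_a, feat_b: C x H x W feature maps. Returns up to top_k ((row_a, col_a), (row_b, col_b), score)."""
    c, ha, wa = feat_a.shape
    _, hb, wb = feat_b.shape
    fa = feat_a.reshape(c, -1)                       # C x (Ha*Wa)
    fb = feat_b.reshape(c, -1)                       # C x (Hb*Wb)
    affinity = fa.T @ fb                             # (Ha*Wa) x (Hb*Wb) pairwise affinities
    row_max = affinity.argmax(axis=1)                # best match in B for each position of A
    col_max = affinity.argmax(axis=0)                # best match in A for each position of B
    peaks = []
    for i, j in enumerate(row_max):
        if col_max[j] == i:                          # keep only mutually maximal pairs
            peaks.append((divmod(i, wa), divmod(int(j), wb), float(affinity[i, j])))
    peaks.sort(key=lambda t: -t[2])
    return peaks[:top_k]
```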