Funding: This work was supported by the Science and Technology Cooperation Special Project of Shijiazhuang (SJZZXA23005).
Abstract: In minimally invasive surgery, endoscopes or laparoscopes equipped with miniature cameras and tools enter the human body for therapeutic purposes through small incisions or natural orifices. In clinical operating environments, however, endoscopic images often suffer from low texture, uneven illumination, and non-rigid structures, which hinder feature observation and extraction. Missing feature points in endoscopic images can severely impair surgical navigation and clinical diagnosis, affecting treatment and postoperative recovery. To address these challenges, this paper introduces, for the first time, a Cross-Channel Multi-Modal Adaptive Spatial Feature Fusion (ASFF) module built on the lightweight EfficientViT architecture, together with a novel lightweight attention-based feature extraction and matching network. The network dynamically adjusts attention weights for cross-modal information from grayscale images and optical-flow images through a dual-branch Siamese network. It extracts static and dynamic features from low level to high level and from local to global, ensuring robust feature extraction across different image widths, noise levels, and blur conditions. Global and local matching are performed through a multi-level cascaded attention mechanism, with cross-channel attention introduced to extract low-level and high-level features simultaneously. Extensive ablation and comparative experiments are conducted on the HyperKvasir, EAD, M2caiSeg, CVC-ClinicDB, and UCL synthetic datasets. The results show that the proposed network improves accuracy (Acc) over the baseline EfficientViT-B3 model by 75.4% while also improving runtime and storage efficiency. Compared with the complex DenseDescriptor feature extraction network, the difference in Acc is less than 7.22%, and the IoU results on specific datasets outperform the complex dense model. Furthermore, the method increases the F1 score by 33.2% and accelerates runtime by 70.2%. Notably, CMMCAN is faster than the comparative lightweight models, with feature extraction and matching performance comparable to existing complex models at higher speed and better cost-effectiveness.
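To make the dual-modality idea above concrete, the sketch below (Python/OpenCV, not the authors' CMMCAN code) shows one plausible way to prepare the two branch inputs, a normalized grayscale frame and a dense optical-flow image; the Farneback flow, the normalization, and the function name are assumptions.

```python
import cv2
import numpy as np

def prepare_dual_branch_inputs(prev_bgr, curr_bgr):
    """Build grayscale and optical-flow modalities for a dual-branch network.

    Illustrative preprocessing only; the CMMCAN network itself
    (EfficientViT backbone + ASFF fusion) is not reproduced here.
    """
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Dense optical flow (Farneback) as the dynamic modality.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        pyr_scale=0.5, levels=3, winsize=15,
                                        iterations=3, poly_n=5, poly_sigma=1.2,
                                        flags=0)
    # Encode the 2-channel flow as an image-like array (magnitude + angle).
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    flow_img = np.stack([mag, ang], axis=-1).astype(np.float32)

    static_input = curr_gray.astype(np.float32) / 255.0   # static branch
    dynamic_input = flow_img / (flow_img.max() + 1e-8)    # dynamic branch
    return static_input, dynamic_input
```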
Funding: Supported by a grant from the Basic Science Research Program through the National Research Foundation (NRF) (2021R1F1A1063634) funded by the Ministry of Science and ICT (MSIT), Republic of Korea. The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program, Grant Code (NU/RG/SERC/13/40). The authors are also thankful to Prince Satam bin Abdulaziz University for supporting this study via project number (PSAU/2024/R/1445). This work was also supported by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2023R54), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Road traffic monitoring is an important topic widely discussed among researchers. Systems used to monitor traffic frequently rely on cameras mounted on bridges or roadsides; aerial images, however, offer the flexibility of mobile platforms to detect vehicle location and motion over a larger area. Different models have shown the ability to recognize and track vehicles, but these methods are not mature enough to produce accurate results in complex road scenes. This paper therefore presents an algorithm that combines state-of-the-art techniques for identifying and tracking vehicles in conjunction with image bursts. The extracted frames are converted to grayscale, followed by a georeferencing algorithm that embeds coordinate information into the images. A masking technique eliminates irrelevant data and reduces the computational cost of the overall monitoring system. Next, Sobel edge detection combined with Canny edge detection and the Hough line transform is applied for noise reduction. After preprocessing, a blob detection algorithm detects the vehicles, and a dynamic thresholding scheme allows vehicles of varying sizes to be detected. Detection is performed on the first image of every burst. To track vehicles, the model of each vehicle is matched in the succeeding images using a template matching algorithm. To further improve tracking accuracy by incorporating motion information, Scale-Invariant Feature Transform (SIFT) features are used to select the best match among multiple candidates. The method achieves 87% detection accuracy and 80% tracking accuracy on the A1 Motorway Netherlands dataset, and 86% detection accuracy and 78% tracking accuracy on the Vehicle Aerial Imaging from Drone (VAID) dataset.
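The tracking step above combines template matching with SIFT to disambiguate between multiple candidate matches; the sketch below illustrates that combination under assumed details (top-k candidate windows, Lowe's ratio test, and the function name are not taken from the paper).

```python
import cv2
import numpy as np

def track_vehicle(template_gray, next_gray, top_k=3):
    """Find the best match of a vehicle template in the next frame.

    Template matching proposes `top_k` candidate windows; SIFT keypoint
    matches then pick the most consistent candidate.
    """
    h, w = template_gray.shape
    res = cv2.matchTemplate(next_gray, template_gray, cv2.TM_CCOEFF_NORMED)

    # Take the top-k correlation peaks as candidate positions (row, col).
    flat = np.argsort(res, axis=None)[::-1][:top_k]
    candidates = [np.unravel_index(i, res.shape) for i in flat]

    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(template_gray, None)
    bf = cv2.BFMatcher(cv2.NORM_L2)

    best_pos, best_score = None, -1
    for (y, x) in candidates:
        window = next_gray[y:y + h, x:x + w]
        kp_w, des_w = sift.detectAndCompute(window, None)
        if des_t is None or des_w is None:
            continue
        matches = bf.knnMatch(des_t, des_w, k=2)
        # Lowe's ratio test to count reliable matches per candidate window.
        good = [p[0] for p in matches
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        if len(good) > best_score:
            best_score, best_pos = len(good), (x, y)
    return best_pos
```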
Funding: Supported by the Key Research Program of the Chinese Academy of Sciences (ZDRE-KT-2021-3).
Abstract: Augmented solar images were used to study the adaptability of four representative image feature extraction and matching algorithms in the space weather domain: the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the binary robust invariant scalable keypoints (BRISK) algorithm, and the oriented FAST and rotated BRIEF (ORB) algorithm. Their performance was evaluated in terms of matching accuracy, feature point richness, and running time. The experimental results showed that no algorithm achieved high accuracy while keeping running time low, and none is well suited to feature extraction and matching of augmented solar images. To solve this problem, an improved method is proposed that uses two-frame matching to exploit the accuracy advantage of SIFT and the speed advantage of ORB. The proposed method and the four representative algorithms were then applied to augmented solar images. The application experiments show that the method achieves a recognition rate similar to that of SIFT, which is significantly higher than that of the other algorithms, while its running time is similar to that of ORB, which is significantly lower than that of the other algorithms.
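The abstract describes combining SIFT's accuracy with ORB's speed through two-frame matching without specifying the exact scheme; the sketch below is one assumed arrangement (SIFT on a key-frame pair, ORB on consecutive frames) and should not be read as the paper's actual algorithm.

```python
import cv2

def match_pair(img1, img2, detector, norm):
    """Detect, describe, and cross-check match one grayscale image pair."""
    kp1, des1 = detector.detectAndCompute(img1, None)
    kp2, des2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(norm, crossCheck=True)
    return kp1, kp2, matcher.match(des1, des2)

def two_frame_matching(key_frame, frame_a, frame_b):
    """Assumed two-frame scheme: SIFT anchors the key frame accurately,
    ORB handles the fast per-frame step (binary descriptors, Hamming norm).
    """
    sift, orb = cv2.SIFT_create(), cv2.ORB_create(nfeatures=2000)
    # Accurate but slow: key frame vs. first frame with SIFT.
    _, _, sift_matches = match_pair(key_frame, frame_a, sift, cv2.NORM_L2)
    # Fast: consecutive frames with ORB.
    _, _, orb_matches = match_pair(frame_a, frame_b, orb, cv2.NORM_HAMMING)
    return sift_matches, orb_matches
```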
Abstract: To obtain a sparse decomposition and flexible representation of traffic images, this paper proposes a fast matching pursuit for traffic images using differential evolution. Exploiting the structural features of traffic images, the algorithm selects image atoms in a fast and flexible way from an over-complete image dictionary to adaptively match the local structures of traffic images and thereby implement the sparse decomposition. Extensive experiments comparing the method with the traditional approach and with a genetic-algorithm-based matching pursuit show that differential evolution achieves much higher quality of the reconstructed traffic images with much less computational time, which indicates the effectiveness of the proposed algorithm.
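As a rough illustration of matching pursuit driven by differential evolution, the sketch below decomposes a 1-D signal over parameterized Gabor atoms, with SciPy's differential evolution searching the atom parameters at each iteration; the 2-D traffic-image dictionary and the paper's specific atom family are not reproduced.

```python
import numpy as np
from scipy.optimize import differential_evolution

def gabor_atom(n, center, width, freq, phase):
    """Discrete 1-D Gabor atom, normalized to unit energy."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(freq * (t - center) + phase)
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g

def de_matching_pursuit(signal, n_atoms=10):
    """Matching pursuit where differential evolution searches the atom
    parameters at each iteration instead of scanning a fixed dictionary.
    Illustrative 1-D sketch only.
    """
    n = len(signal)
    residual = signal.astype(float).copy()
    atoms, coeffs = [], []
    # Parameter bounds: center, width, frequency, phase.
    bounds = [(0, n - 1), (1, n / 4), (0.0, np.pi), (0.0, 2 * np.pi)]

    for _ in range(n_atoms):
        # DE minimizes, so use the negative absolute correlation with the residual.
        def neg_corr(p):
            return -abs(np.dot(residual, gabor_atom(n, *p)))

        result = differential_evolution(neg_corr, bounds, maxiter=50,
                                        popsize=20, tol=1e-6, seed=0)
        atom = gabor_atom(n, *result.x)
        c = np.dot(residual, atom)
        residual -= c * atom            # subtract the matched component
        atoms.append(atom)
        coeffs.append(c)
    return np.array(coeffs), np.array(atoms), residual
```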
Funding: This work was supported by the Equipment Pre-Research Foundation of China (6140001020310).
Abstract: Three-dimensional (3D) reconstruction based on aerial images has broad prospects, and feature matching is an important step in it. However, for high-resolution aerial images, traditional algorithms usually suffer from long processing times, mismatches, and sparse feature pairs. An algorithm is therefore proposed to achieve fast, accurate, and dense feature matching. The algorithm consists of four steps. First, a balance between matching time and the number of matching pairs is achieved by appropriately reducing the image resolution. Second, to further screen out mismatches, a feature screening algorithm based on similarity judgment or local optimization is proposed. Third, to make the algorithm more widely applicable, the results of different algorithms are combined to obtain dense results. Finally, all matched feature pairs in the low-resolution images are restored to the original images. Comparisons between the original algorithms and the proposed algorithm show that it effectively reduces matching time, screens out mismatches, and increases the number of matches.
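The first and last steps (matching at reduced resolution, then restoring coordinates to the original images) can be sketched as follows; SIFT, the ratio test, and the scale factor are illustrative assumptions rather than the paper's exact configuration.

```python
import cv2

def match_at_reduced_resolution(img1, img2, scale=0.25):
    """Match SIFT features on downscaled copies, then map the keypoint
    coordinates back to the full-resolution images.
    """
    small1 = cv2.resize(img1, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    small2 = cv2.resize(img2, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(small1, None)
    kp2, des2 = sift.detectAndCompute(small2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]

    # Restore matched coordinates to the original image resolution.
    pairs = [((kp1[m.queryIdx].pt[0] / scale, kp1[m.queryIdx].pt[1] / scale),
              (kp2[m.trainIdx].pt[0] / scale, kp2[m.trainIdx].pt[1] / scale))
             for m in good]
    return pairs
```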
Funding: Project (61070090) supported by the National Natural Science Foundation of China; Project (2012J4300030) supported by the Guangzhou Science and Technology Support Key Projects, China.
Abstract: The mean Hausdorff distance, though highly applicable in image registration, does not work well on partially matching images. An improvement upon the traditional Hausdorff-distance-based image registration method is proposed, consisting of two aspects. The first is to estimate the transformation parameters between two images from the distributions of geometric property differences instead of establishing explicit feature correspondences; this procedure is treated as pre-registration. The second is to replace the mean Hausdorff distance computation with an analysis of the second difference of the generalized Hausdorff distance so as to eliminate redundant points. Experimental results show that the proposed registration method outperforms the method based on the mean Hausdorff distance, with registration errors noticeably reduced on partially matching images.
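For reference, the standard mean and generalized (partial) Hausdorff distances discussed above can be computed as follows; the paper's second-difference analysis is not reproduced here.

```python
import numpy as np
from scipy.spatial.distance import cdist

def directed_distances(A, B):
    """For every point in A, the distance to its nearest neighbour in B."""
    return cdist(A, B).min(axis=1)

def mean_hausdorff(A, B):
    """Symmetric mean Hausdorff distance between two (N, d) point sets."""
    return 0.5 * (directed_distances(A, B).mean() + directed_distances(B, A).mean())

def generalized_hausdorff(A, B, frac=0.8):
    """Generalized (partial) Hausdorff distance: the k-th ranked nearest-
    neighbour distance, which tolerates the unmatched points that break
    the mean Hausdorff distance on partially overlapping images.
    """
    d = np.sort(directed_distances(A, B))
    k = max(int(frac * len(d)) - 1, 0)
    return d[k]
```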
Abstract: An iterative algorithm is proposed to calculate mutual correlation using hierarchical key points and the search-space mark principle, and an effective algorithm is designed to improve matching speed. Using the hierarchical key point algorithm and the mutual correlation coefficients of the matching images, the important points can be calculated iteratively and hierarchically, and the correlation coefficient can be obtained with satisfactory precision. Large regions of the parameter space that cannot possibly match are removed by the search-space mark principle. The method exploits two approximate continuities in correlation-based image matching: the continuity of the image gray-level distribution and the continuity of the correlation coefficient value in the parameter space. Experiments show that the new algorithm greatly enhances matching speed and achieves accurate matching results.
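A minimal coarse-to-fine correlation search, in the spirit of the hierarchical key-point idea above, can be sketched as follows (the pyramid depth, refinement window, and normalized-correlation metric are assumptions; the search-space mark principle itself is not implemented, and the template is assumed to lie well inside the image).

```python
import cv2

def coarse_to_fine_correlation(image, template, levels=3):
    """Hierarchical normalized cross-correlation: locate the template on a
    coarse pyramid level, then refine the position on finer levels instead
    of scanning the full-resolution search space.
    """
    img_pyr, tpl_pyr = [image], [template]
    for _ in range(levels - 1):
        img_pyr.append(cv2.pyrDown(img_pyr[-1]))
        tpl_pyr.append(cv2.pyrDown(tpl_pyr[-1]))

    # Full search only at the coarsest level.
    res = cv2.matchTemplate(img_pyr[-1], tpl_pyr[-1], cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(res)

    # Refine within a small window when moving to each finer level.
    for lvl in range(levels - 2, -1, -1):
        x, y = 2 * x, 2 * y
        h, w = tpl_pyr[lvl].shape[:2]
        y0, x0 = max(y - 4, 0), max(x - 4, 0)
        roi = img_pyr[lvl][y0:y + h + 4, x0:x + w + 4]
        res = cv2.matchTemplate(roi, tpl_pyr[lvl], cv2.TM_CCOEFF_NORMED)
        _, _, _, (dx, dy) = cv2.minMaxLoc(res)
        x, y = x0 + dx, y0 + dy
    return x, y
```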
Abstract: As a fundamental problem in computer vision, image matching has wide applications in pose estimation, 3D reconstruction, image retrieval, etc. Affected by external factors, image matching pipelines based on classical local detectors, e.g., the scale-invariant feature transform (SIFT), and outlier filtering approaches, e.g., random sample consensus (RANSAC), offer high computation speed but poor robustness under changing illumination and viewpoints, whereas deep-learning-based matching approaches (such as HardNet and OANet) achieve reliable results on large-scale datasets with challenging scenes. However, past learning-based approaches are limited by the distinctiveness and quality of the dataset and by the training strategy. As an extension of our previous conference paper, this paper proposes an accurate and robust end-to-end image matching approach that uses less training data and can be used to estimate pose error. First, a novel dataset cleaning and construction strategy is proposed to eliminate noise and improve training efficiency. Second, a novel loss, the quadratic hinge triplet (QHT) loss, is proposed to obtain more effective and stable feature matching. Third, in the outlier filtering process, a stricter OANet and bundle adjustment are applied to judge samples by adding an epipolar distance constraint and a triangulation constraint, producing more outstanding matches. Finally, to recall matching pairs, dynamic guided matching is used, and the inliers are submitted after the PyRANSAC process. Evaluated with multiple metrics, the method ranked 1st in Track 1 of the CVPR Image Matching Challenge Workshop. The results show that the proposed method achieves advanced performance on the large-scale and challenging Phototourism benchmark.
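The epipolar distance constraint mentioned in the outlier-filtering step can be illustrated as follows; this is a simplified stand-in using OpenCV's RANSAC fundamental-matrix estimate, not the paper's OANet + bundle-adjustment pipeline, and the threshold value is an assumption.

```python
import cv2
import numpy as np

def epipolar_filter(pts1, pts2, threshold=1.0):
    """Reject correspondences whose symmetric epipolar distance exceeds a
    threshold. pts1 and pts2 are (N, 2) float arrays with N >= 8.
    """
    F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    if F is None:
        return np.zeros(len(pts1), dtype=bool)

    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])          # homogeneous coordinates
    x2 = np.hstack([pts2, ones])

    Fx1 = x1 @ F.T                        # epipolar lines in image 2
    Ftx2 = x2 @ F                         # epipolar lines in image 1
    x2Fx1 = np.sum(x2 * Fx1, axis=1)      # x2^T F x1 per correspondence

    # Symmetric epipolar distance of each correspondence.
    d = x2Fx1 ** 2 * (1.0 / (Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + 1e-12) +
                      1.0 / (Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2 + 1e-12))
    return d < threshold ** 2
```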
Abstract: Spectral unmixing helps to identify the different components present in spectral mixtures, which occur in the uppermost layer of an area owing to the low spatial resolution of hyperspectral images. Most spectral unmixing methods operate globally and do not consider the spectral variability among endmembers caused by illumination, atmospheric, and environmental conditions. Endmember bundle extraction plays a major role in overcoming these limitations and leads to more accurate abundance fractions. Accordingly, a two-stage approach is proposed to extract endmembers through endmember bundles in hyperspectral images. In the first stage, a divide-and-conquer method is applied to subset images containing only the non-redundant bands to extract endmembers using the Vertex Component Analysis (VCA) and N-FINDR algorithms. In the second stage, a fuzzy rule-based inference system utilizing spectral matching parameters is proposed to categorize the endmembers, and the endmember with the minimum error is chosen as the final endmember in each category. The proposed method is simple and automatically accounts for endmember variability in hyperspectral images. Its efficiency is evaluated on two real hyperspectral datasets, using the average spectral angle and abundance angle as performance measures.
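The spectral angle, used above both as a spectral matching parameter and as a performance measure, can be computed as below; the nearest-angle categorization is a crude stand-in for the fuzzy rule-based inference system, which is not reproduced.

```python
import numpy as np

def spectral_angle(a, b):
    """Spectral angle (radians) between two spectra of equal length."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def categorize_endmembers(candidates, references):
    """Assign each candidate endmember to the reference class with the
    smallest spectral angle (illustrative stand-in for fuzzy inference).
    """
    labels, angles = [], []
    for c in candidates:
        a = [spectral_angle(c, r) for r in references]
        labels.append(int(np.argmin(a)))
        angles.append(float(np.min(a)))
    return labels, angles
```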
Funding: Supported by the Natural Science Foundation of China (62072388, 62276146); the Industry Guidance Project Foundation of the Science and Technology Bureau of Fujian Province (2020H0047); the Natural Science Foundation of the Science and Technology Bureau of Fujian Province (2019J01601); the Creation Fund Project of the Science and Technology Bureau of Fujian Province (JAT190596); and the Putian University Research Project (2022034).
Abstract: Background: Image matching is crucial in numerous computer vision tasks such as 3D reconstruction and simultaneous visual localization and mapping, and matching accuracy significantly impacts subsequent work. When image pairs contain comparable patterns but the feature pairs are positioned differently, local similarity can cause incorrect recognition if global motion consistency is disregarded. Methods: This study proposes an image-matching filtering algorithm based on global motion consistency. Built on the principle of motion smoothness, it can be used as a subsequent filter for the initial matching results generated by other matching algorithms. A particular matching algorithm first performs the initial matching; the rotation and movement information of the global feature vectors is then combined to effectively identify outlier matches. The principle is that if the matching result is accurate, the feature vectors formed by any matched points should have similar rotation angles and moving distances. Thus, global motion direction consistency and global motion distance consistency are used to reject outliers caused by similar patterns in different locations. Results: Four datasets were used to test the effectiveness of the proposed method: three with similar patterns in different locations, which other algorithms easily match incorrectly, and one commonly used dataset for the general image-matching problem. The experimental results suggest that the proposed method identifies mismatches in the initial matching set more accurately than other state-of-the-art algorithms. Conclusions: The proposed outlier rejection method significantly improves matching accuracy for similar images with locally similar feature pairs in different locations and provides more accurate matching results for subsequent computer vision tasks.
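A minimal version of the global motion direction and distance consistency test described above might look like the following; the median-based statistics and the tolerance values are assumptions, not the paper's exact formulation.

```python
import numpy as np

def reject_by_global_motion(pts1, pts2, angle_tol=0.35, dist_tol=0.35):
    """Keep only matches whose motion vector agrees with the dominant
    (median) rotation angle and moving distance of all matches.
    pts1 and pts2 are (N, 2) arrays of matched point coordinates.
    """
    vec = np.asarray(pts2, float) - np.asarray(pts1, float)
    dist = np.linalg.norm(vec, axis=1)
    angle = np.arctan2(vec[:, 1], vec[:, 0])

    med_dist = np.median(dist)
    # Approximate circular median of the motion direction.
    med_angle = np.arctan2(np.median(np.sin(angle)), np.median(np.cos(angle)))

    # Wrap angular differences into [-pi, pi] before thresholding.
    d_angle = np.abs(np.arctan2(np.sin(angle - med_angle), np.cos(angle - med_angle)))
    d_dist = np.abs(dist - med_dist) / (med_dist + 1e-12)

    return (d_angle < angle_tol) & (d_dist < dist_tol)
```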
Abstract: To improve the real-time performance of small-target recognition and classification while reducing the resource consumption of the recognition system, this paper proposes a simple and efficient small-target recognition and classification system based on a Field Programmable Gate Array (FPGA). The system first removes image noise through preprocessing and uses parallel computation to improve real-time performance. The processed image is then matched against templates to obtain the recognition result; the designed template-matching circuit has low hardware complexity and fast processing speed. Experimental results show that the proposed recognition system achieves a processing speed of 137.5 frames/s at a 680×480 image resolution, offering strong real-time performance while consuming only 9 Block Random Access Memories (BRAMs) and 2 Digital Signal Processors (DSPs). With its low hardware resource consumption, the system has good practical value for small-target recognition and classification.
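As a software model of the template-matching step, a sum-of-absolute-differences (SAD) search over the frame might look like the following; the abstract does not state which similarity measure the hardware circuit implements, so SAD is an assumption.

```python
import numpy as np

def sad_match(frame, template):
    """Exhaustive SAD template matching over a grayscale frame.

    Returns the (x, y) position with the smallest sum of absolute
    differences and the corresponding SAD value.
    """
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            sad = np.abs(frame[y:y + th, x:x + tw].astype(int) -
                         template.astype(int)).sum()
            if best is None or sad < best:
                best, best_pos = sad, (x, y)
    return best_pos, best
```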