Abstract: To address the problems of excessive node expansion and long search times in conventional global path planning, a global path-planning algorithm based on JPS+ (jump point search plus) is proposed, aiming to make robots more intelligent and efficient in complex environments. First, a density-based rule for identifying obstacle corner points is introduced, which limits the number of primary jump points that are identified and thus reduces the number of expandable nodes during path search; the rule for judging target jump points during path solving is also modified. Together these changes reduce the amount of computation and shorten the computing time. To verify the effectiveness of the improved JPS+ algorithm, it was compared with the A* and conventional JPS+ algorithms on maps of different types. Simulation results show that the improved JPS+ algorithm clearly outperforms A* in path length, search time, and number of expanded nodes; compared with conventional JPS+, while generating identical paths, it reduces search time by 7.58% and the number of expanded nodes by 9.38% on a map with 33.25% obstacle coverage, meeting the requirements of fast global path planning for mobile robots.
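The comparison above rests on two metrics: path length and the number of expanded nodes. As an illustrative baseline only (not the paper's JPS+ implementation), a minimal 4-connected A* on an occupancy grid can report both:

```python
import heapq

def astar(grid, start, goal):
    """Minimal 4-connected A* on a 0/1 occupancy grid (1 = obstacle).
    Returns (path_length, expanded_nodes) -- the two metrics used in the
    comparison above. Illustrative baseline, not the paper's planner."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_heap = [(h(start), 0, start)]
    g_best, parent, closed = {start: 0}, {start: None}, set()
    expanded = 0
    while open_heap:
        _, g, node = heapq.heappop(open_heap)
        if node in closed:          # skip stale heap entries
            continue
        closed.add(node)
        expanded += 1
        if node == goal:            # walk the parent chain to count steps
            length, n = 0, node
            while parent[n] is not None:
                n, length = parent[n], length + 1
            return length, expanded
        r, c = node
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nb
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    parent[nb] = node
                    heapq.heappush(open_heap, (ng + h(nb), ng, nb))
    return None, expanded           # goal unreachable
```

JPS+ improves on this baseline precisely by pruning most of these per-cell expansions down to precomputed jump points.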
Abstract: Building on a study and analysis of video-based detection in intelligent transport systems (ITS, Intelligent Transport Systems), this work examines shadow removal, one of its key steps, in depth: it analyzes how shadows arise and what characterizes them, reviews existing shadow-removal algorithms, and, building on them, proposes a shadow-removal algorithm based on region clustering. Experiments show that the method removes the shadows of moving vehicles well while retaining relatively complete vehicle target information, laying the foundation for accurate vehicle target extraction.
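The abstract does not spell out the region-clustering algorithm itself, but shadow detectors in this setting commonly exploit one physical fact: a cast shadow darkens the background without changing it otherwise. As a hedged illustration (not the paper's method), the classic brightness-ratio test flags foreground pixels whose brightness is an attenuated copy of the background's:

```python
import numpy as np

def shadow_mask(value_fg, value_bg, ratio_low=0.4, ratio_high=0.9):
    """Classic brightness-ratio shadow test (illustrative, not the
    paper's region-clustering method): a moving-foreground pixel is
    flagged as shadow when its brightness is an attenuated version of
    the background's, i.e. fg/bg falls within [ratio_low, ratio_high].
    The thresholds are illustrative assumptions."""
    ratio = value_fg / np.maximum(value_bg, 1e-6)  # avoid divide-by-zero
    return (ratio >= ratio_low) & (ratio <= ratio_high)
```

Pixels flagged this way are removed from the motion mask, leaving the vehicle body for target extraction.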
Funding: Supported by the Fund from Hongyun Honghe Tobacco (Group) Co., Ltd. (HYHH2012YL01).
Abstract: [Objective] This study aimed to find a new method for classifying the common aroma components in tobacco leaves. [Method] Sixty-four common aroma components in tobacco leaves were classified by cluster analysis based on their relative molecular weight. The contents and distribution of aroma components in another 71 C3F and 64 B2F tobacco leaf samples were then analyzed using the new method. [Result] The 64 common aroma components were divided into three categories through cluster analysis based on their molecular weight. Category Ⅰ consisted of 12 aroma components with high molecular weight (281.308±21.536 on average) and high boiling point (371.311±29.904 ℃ on average). Category Ⅱ included 27 components with low molecular weight (103.722±13.115 on average) and low boiling point (176.132±42.342 ℃ on average). Category Ⅲ included 25 components with middle molecular weight (175.393±24.906 on average) and middle boiling point (250.562±45.431 ℃ on average). The content of high-molecular-weight aroma components in middle leaves (547.344±224.391 μg/g) was much higher than that in upper leaves (477.549±182.066 μg/g). The content of low-molecular-weight aroma components in middle leaves (17.468±3.459 μg/g) was also significantly higher than that in upper leaves (15.936±3.456 μg/g). The content of middle-molecular-weight aroma components in middle leaves (44.931±8.953 μg/g) was extremely significantly higher than that in upper leaves (37.997±6.042 μg/g). [Conclusion] This study proposed a new way to classify the aroma components in flue-cured tobacco leaves using relative molecular weight as the index, which will provide a theoretical reference for developing special tobacco leaves.
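Clustering on a single scalar such as molecular weight is especially simple: for k clusters, agglomerative single-linkage clustering reduces to cutting the k-1 largest gaps in the sorted sequence. A minimal sketch with hypothetical molecular weights (not the study's 64 real compounds):

```python
def cluster_1d(values, k):
    """Cluster scalar values into k groups by cutting the k-1 largest
    gaps in the sorted sequence -- equivalent to single-linkage
    agglomerative clustering in one dimension. Illustrative sketch;
    the study's own linkage choice is not stated in the abstract."""
    xs = sorted(values)
    # rank the gaps between consecutive sorted values, largest first
    gaps = sorted(range(1, len(xs)), key=lambda i: xs[i] - xs[i - 1], reverse=True)
    cuts = sorted(gaps[:k - 1])
    groups, start = [], 0
    for c in cuts:
        groups.append(xs[start:c])
        start = c
    groups.append(xs[start:])
    return groups

# Hypothetical relative molecular weights standing in for aroma components
mw = [282.5, 296.4, 104.1, 96.2, 110.5, 176.3, 162.2, 250.1]
low, mid, high = cluster_1d(mw, 3)
```

The three resulting groups correspond to the low-, middle-, and high-molecular-weight categories described above.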
Funding: Supported by the National Natural Science Foundation of China (61104131) and the Fundamental Research Funds for the Central Universities (ZY1111).
Abstract: To address dynamic changes in production demand and operating contradictions in the production process, a new extension-theory-based production operation method is proposed. Its core comprises demand requisition, contradiction resolution, and operation classification. For demand requisition, deep and comprehensive demand elements are collected by conjugate analysis. For contradiction resolution, conflicts between demand and operating elements are resolved by extension reasoning, extension transformation, and consistency judgment. For operation classification, the operating importance of the operating elements is calculated by extension clustering, so as to guide production operation and ensure production safety. Through application to the cascade reaction process for high-density polyethylene (HDPE) at a chemical plant, case studies and comparisons show that the proposed extension-theory-based method is significantly better than the traditional experience-based operation method in actual production, opening a new avenue for research on production operating methods for industrial processes.
Funding: Supported by the National 863 Program (2001AA114140) and the National Natural Science Foundation of China (60135020).
Abstract: A non-parametric Bayesian classifier based on Kernel Density Estimation (KDE) is presented for face recognition; in formulation it can be regarded as a weighted Nearest Neighbor (NN) classifier. The class-conditional density is estimated by KDE, and the bandwidth of the kernel function is estimated by the Expectation Maximization (EM) algorithm. Two subspace analysis methods, linear Principal Component Analysis (PCA) and Kernel-based PCA (KPCA), are used to extract features, and the proposed method is compared with the Probabilistic Reasoning Models (PRM), Nearest Center (NC), and NN classifiers widely used in face recognition systems. Experiments on two benchmarks show that the KDE classifier outperforms the PRM, NC, and NN classifiers.
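A minimal sketch of the decision rule this describes, under two stated simplifications: equal class priors, and a fixed Gaussian bandwidth (the paper instead estimates the bandwidth by EM):

```python
import numpy as np

def kde_bayes_predict(train_X, train_y, x, bandwidth=1.0):
    """Non-parametric Bayes rule sketch: estimate each class-conditional
    density p(x|c) with a Gaussian kernel density estimate over the
    class's training points, then pick the class maximizing the
    (equal-prior) posterior. Fixed bandwidth is a simplification; the
    paper fits it by EM."""
    d = train_X.shape[1]
    norm = (2 * np.pi * bandwidth ** 2) ** (-d / 2)  # Gaussian kernel constant
    best_c, best_p = None, -1.0
    for c in np.unique(train_y):
        Xc = train_X[train_y == c]
        sq = np.sum((Xc - x) ** 2, axis=1)           # squared distances to x
        p = norm * np.mean(np.exp(-sq / (2 * bandwidth ** 2)))
        if p > best_p:
            best_c, best_p = c, p
    return best_c
```

Each kernel term weights a training point by its distance to the query, which is why the rule behaves like a weighted NN classifier.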
Abstract: We propose a new clustering algorithm that helps researchers analyze data quickly and accurately. We call this algorithm the Combined Density-based and Constraint-based Algorithm (CDC). CDC consists of two phases. In the first phase, CDC uses the idea of density-based clustering to split the original data into a number of fragmented clusters, while cutting off noise and outliers. In the second phase, CDC uses the concept of K-means clustering to select the largest cluster as the center; this center cluster then merges with smaller clusters that satisfy certain constraint rules. Because the clusters around the center cluster are merged into it, the clustering results show high accuracy. Moreover, CDC reduces the amount of calculation and speeds up the clustering process. In this paper, the accuracy of CDC is evaluated and compared with that of K-means, hierarchical clustering, and the genetic clustering algorithm (GCA) proposed in 2004. Experimental results show that CDC performs better.
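A compact sketch of the two-phase idea, with a naive neighbor-graph fragmentation standing in for the density phase and a centroid-distance rule standing in for the constraint rules; all thresholds are illustrative assumptions, not the paper's:

```python
import numpy as np

def cdc_sketch(X, eps=1.0, min_pts=2, merge_dist=4.0):
    """Two-phase sketch of the CDC idea.
    Phase 1: density-based fragmentation -- connected components of the
    eps-neighbor graph, dropping sparse points as noise (label -1).
    Phase 2: merge smaller fragments into the largest cluster when their
    centroids lie within merge_dist of its centroid."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    dense = (dist < eps).sum(axis=1) >= min_pts   # self + at least min_pts-1 neighbors
    labels = np.full(n, -1)
    cur = 0
    for i in range(n):                            # flood-fill dense components
        if not dense[i] or labels[i] != -1:
            continue
        labels[i], stack = cur, [i]
        while stack:
            j = stack.pop()
            for k in np.where((dist[j] < eps) & dense)[0]:
                if labels[k] == -1:
                    labels[k] = cur
                    stack.append(k)
        cur += 1
    if cur == 0:
        return labels
    sizes = [np.sum(labels == c) for c in range(cur)]
    center_c = int(np.argmax(sizes))              # largest fragment is the center
    center = X[labels == center_c].mean(axis=0)
    for c in range(cur):                          # phase 2: constraint-based merging
        if c != center_c and np.linalg.norm(X[labels == c].mean(axis=0) - center) < merge_dist:
            labels[labels == c] = center_c
    return labels
```

Nearby fragments collapse into the center cluster while distant clusters and noise points are left untouched.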
Funding: Supported by Professor Hong Yu of the Intelligent Fishery Innovative Team (No. C202109), School of Information Engineering, Dalian Ocean University, and funded by the National Natural Science Foundation of China (No. 31800615 and No. 21933010).
Abstract: Performing cluster analysis on molecular conformations is an important way to find representative conformations in molecular dynamics trajectories. It is usually a critical step for interpreting complex conformational changes or interaction mechanisms. As one of the density-based clustering algorithms, find density peaks (FDP) is an accurate and reasonable candidate for molecular conformation clustering. However, as simulation lengths grow rapidly with increasing computing power, the low computational efficiency of FDP limits its applicability. Here we propose a marginal extension of FDP, K-means find density peaks (KFDP), to address its heavy resource consumption. In KFDP, the points are first clustered by a high-efficiency algorithm such as K-means. Cluster centers are taken as typical points, each carrying a weight equal to its cluster size. The weighted typical points are then clustered again by FDP and refined into core, boundary, and redefined halo points. In this way, KFDP matches the accuracy of FDP while its computational complexity is reduced from O(n^2) to O(n). We apply and test KFDP on trajectory data of several small proteins in terms of torsion angles, secondary structure, and contact maps. Comparisons with K-means and with density-based spatial clustering of applications with noise (DBSCAN) validate the proposed KFDP.
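The two stages can be sketched as follows, under stated simplifications: deterministic K-means initialization from the first k points (real use would prefer k-means++), and density counted with a hard distance cutoff dc as in the standard FDP decision-graph quantities:

```python
import numpy as np

def kmeans_compress(X, k, iters=20):
    """Stage 1 of the KFDP idea: compress n points into k weighted
    'typical points' (K-means centers, weight = cluster size) so that
    the O(n^2) density-peak stage runs on k << n points. Deterministic
    init from the first k points is an illustrative simplification."""
    centers = X[:k].astype(float).copy()
    for _ in range(iters):
        assign = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([X[assign == c].mean(axis=0) if np.any(assign == c)
                            else centers[c] for c in range(k)])
    weights = np.bincount(assign, minlength=k)
    return centers, weights

def weighted_fdp(centers, weights, dc):
    """Stage 2: FDP decision quantities on the weighted typical points.
    rho = weight-summed density within cutoff dc; delta = distance to
    the nearest point of higher density (max distance for the densest
    point). High rho*delta marks cluster centers."""
    d = np.linalg.norm(centers[:, None] - centers[None], axis=2)
    rho = ((d < dc) * weights[None]).sum(axis=1)
    delta = np.empty(len(centers))
    for i in range(len(centers)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()
    return rho, delta
```

Because each typical point carries its cluster size as a weight, the compressed density estimate tracks the density of the original n points.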
Abstract: This paper examines the utility of high-resolution airborne RGB orthophotos and LiDAR data for mapping residential land uses within the spatial limits of a suburb of Athens, Greece. Modern remote sensors deliver ample information from the AOI (area of interest) for estimating 2D indicators or, with the inclusion of elevation data, 3D indicators for urban land classification. In this research, two of these indicators, BCR (building coverage ratio) and FAR (floor area ratio), are evaluated automatically. In the pre-processing step, the low-resolution elevation data are fused with the high-resolution optical data through a mean-shift-based discontinuity-preserving smoothing algorithm. The outcome is an nDSM (normalized digital surface model) of upsampled elevation data with considerable improvement in region filling and the "straightness" of elevation discontinuities. Following this step, an MFNN (multilayer feedforward neural network) is used to classify all pixels of the AOI into building or non-building categories. The information derived from the BCR and FAR building indicators, adapted to the landscape characteristics of the test area, is used to propose two new indices and an automatic post-classification based on building density.
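The two indicators have simple raster definitions: BCR is building footprint area over block area, and FAR is total floor area over block area, with floor count approximated from nDSM height. A sketch assuming a per-pixel building mask from the classifier, a uniform floor height, and unit cell area (all illustrative assumptions):

```python
import numpy as np

def bcr_far(building_mask, height, floor_height=3.0, cell_area=1.0):
    """BCR and FAR from a boolean per-pixel building mask and an nDSM
    height raster covering one block. BCR = footprint area / block
    area; FAR = total floor area / block area, with floors approximated
    as height / floor_height (at least one floor per building pixel).
    floor_height and cell_area are illustrative assumptions."""
    block_area = building_mask.size * cell_area
    footprint = building_mask.sum() * cell_area
    # estimated floor count per pixel; zero outside buildings
    floors = np.where(building_mask,
                      np.maximum(np.round(height / floor_height), 1), 0)
    floor_area = floors.sum() * cell_area
    return footprint / block_area, floor_area / block_area
```

For a single-storey footprint FAR equals BCR; taller buildings raise FAR without changing BCR, which is what makes the pair useful for separating residential density classes.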