Journal Articles
Found 2 articles
Trade-Off between Efficiency and Effectiveness: A Late Fusion Multi-View Clustering Algorithm (Cited by: 1)
Authors: Yunping Zhao, Weixuan Liang, +2 more authors, Jianzhuang Lu, Xiaowen Chen, Nijiwa Kong. Computers, Materials & Continua (SCIE, EI), 2021, Issue 3, pp. 2709–2722 (14 pages)
Late fusion multi-view clustering (LFMVC) algorithms aim to integrate the base partition of each single view into a consensus partition. Base partitions can be obtained by performing kernel k-means clustering on all views. This type of method is not only computationally efficient, but also more accurate than multiple kernel k-means, and is thus widely used in the multi-view clustering context. LFMVC improves computational efficiency to the extent that the computational complexity of each iteration is reduced from O(n³) to O(n) (where n is the number of samples). However, LFMVC also limits the search space of the optimal solution, meaning that the clustering results obtained are not ideal. Accordingly, in order to obtain more information from each base partition and thus improve the clustering performance, we propose a new late fusion multi-view clustering algorithm with a computational complexity of O(n²). Experiments on several commonly used datasets demonstrate that the proposed algorithm converges quickly. Moreover, compared with other late fusion algorithms with a computational complexity of O(n), the actual time consumption of the proposed algorithm does not increase significantly. At the same time, comparisons with several other state-of-the-art algorithms reveal that the proposed algorithm also obtains the best clustering performance.
Keywords: late fusion; kernel k-means; similarity matrix
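The paper's exact optimization is not given in the abstract; the sketch below only illustrates the general late-fusion idea it builds on: cluster each view independently to get base partitions, fuse them into a consensus similarity matrix, then cluster that matrix. The co-association fusion rule and the deterministic k-means initialization here are illustrative choices, not the authors' algorithm.

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Minimal Lloyd's k-means with deterministic init (evenly spaced rows),
    # standing in for the kernel k-means used to produce base partitions.
    centers = X[np.linspace(0, len(X) - 1, k, dtype=int)].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def late_fusion_consensus(views, k):
    # 1) Base partition per view; 2) fuse into an averaged co-association
    # matrix S (S[i,j] = fraction of views placing i and j together);
    # 3) cluster the rows of S to get the consensus partition.
    n = views[0].shape[0]
    S = np.zeros((n, n))
    for X in views:
        labels = kmeans(X, k)
        H = np.eye(k)[labels]   # n x k one-hot partition indicator
        S += H @ H.T            # 1 where two samples share a cluster
    S /= len(views)
    return kmeans(S, k)

# Toy data: two views of the same 20 samples, two well-separated groups
rng = np.random.default_rng(1)
v1 = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
v2 = np.vstack([rng.normal(-3, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
labels = late_fusion_consensus([v1, v2], k=2)
```

Because both views agree on the grouping, the consensus recovers the two groups exactly; with conflicting views, S takes fractional values and the final clustering arbitrates between them.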
A Dynamically Reconfigurable Accelerator Design Using a Sparse-Winograd Decomposition Algorithm for CNNs
Authors: Yunping Zhao, Jianzhuang Lu, Xiaowen Chen. Computers, Materials & Continua (SCIE, EI), 2021, Issue 1, pp. 517–535 (19 pages)
Convolutional Neural Networks (CNNs) are widely used in many fields. Because of their high throughput demands and computation-intensive characteristics, an increasing number of researchers are focusing on how to improve the computational efficiency, hardware utilization, and flexibility of CNN hardware accelerators. Accordingly, this paper proposes a dynamically reconfigurable accelerator architecture that implements a Sparse-Winograd F(2×2, 3×3)-based high-parallelism hardware architecture. This approach not only eliminates the pre-calculation complexity associated with the Winograd algorithm, thereby reducing the difficulty of hardware implementation, but also greatly improves the flexibility of the hardware; as a result, the accelerator can perform conventional convolution, grouped convolution (GCONV), or depthwise separable convolution (DSC) on the same hardware architecture. Our experimental results show that the accelerator achieves a 3x–4.14x speedup on VGG-16 and MobileNet V1 compared with designs that do not use the acceleration algorithm. Moreover, compared with previous designs using the traditional Winograd algorithm, the accelerator achieves a 1.4x–1.8x speedup. At the same time, multiplier efficiency improves by up to 142%.
Keywords: high performance computing; accelerator architecture; hardware
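The abstract does not detail the hardware datapath, but the F(2×2, 3×3) Winograd transform it accelerates is standard. The NumPy sketch below computes one 2×2 output tile from a 4×4 input tile and a 3×3 kernel using the well-known transform matrices, and checks it against direct cross-correlation; this shows why the algorithm is attractive in hardware (16 elementwise multiplies per tile instead of 36), not how the paper's accelerator is built.

```python
import numpy as np

# Standard F(2x2, 3x3) Winograd transform matrices
B_T = np.array([[1,  0, -1,  0],
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],
                [0, 1, -1, -1]], dtype=float)

def winograd_f2x2_3x3(d, g):
    """4x4 input tile d, 3x3 kernel g -> 2x2 output tile.
    Uses 16 elementwise multiplies vs. 36 for direct computation."""
    U = G @ g @ G.T               # kernel transform (precomputable per kernel)
    V = B_T @ d @ B_T.T           # input tile transform
    return A_T @ (U * V) @ A_T.T  # elementwise product + inverse transform

def direct_corr(d, g):
    # Reference: direct 2D cross-correlation over the 'valid' region
    return np.array([[(d[i:i+3, j:j+3] * g).sum() for j in range(2)]
                     for i in range(2)])

rng = np.random.default_rng(0)
d = rng.standard_normal((4, 4))
g = rng.standard_normal((3, 3))
out = winograd_f2x2_3x3(d, g)
```

The kernel transform U depends only on the weights, so a hardware design can precompute it once per kernel; the per-tile work reduces to the input transform, the Hadamard product, and the small inverse transform.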