Restricted Boltzmann Machines (RBMs) are an effective model for machine learning; however, they require a significant amount of processing time. In this study, we propose a highly parallel, highly flexible architecture that combines small, completely parallel RBMs. This proposal addresses problems associated with calculation speed and the exponential increase in circuit scale, and we show that the architecture can flexibly respond to the trade-off between these two problems. Furthermore, our FPGA implementation achieves a speedup factor of 134 over a conventional CPU.
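For context, the training step whose matrix products dominate RBM processing time (and which parallel hardware implementations typically accelerate) is one round of contrastive divergence (CD-1). The sketch below is illustrative only; the network sizes, learning rate, and update loop are assumptions for the example, not parameters from the paper.

```python
import numpy as np

# Minimal RBM trained with one-step contrastive divergence (CD-1).
# Sizes, learning rate, and iteration count are illustrative assumptions.
rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 4          # a small RBM, of the kind tiled in parallel designs
W = rng.normal(0, 0.1, (n_visible, n_hidden))
b = np.zeros(n_visible)             # visible biases
c = np.zeros(n_hidden)              # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, lr=0.1):
    """One CD-1 step; the inner matrix-vector products here are the
    operations that a parallel hardware implementation would compute
    concurrently across units."""
    global W, b, c
    p_h0 = sigmoid(v0 @ W + c)                        # hidden activation probabilities
    h0 = (rng.random(n_hidden) < p_h0).astype(float)  # sample hidden units
    p_v1 = sigmoid(h0 @ W.T + b)                      # reconstruct visible layer
    p_h1 = sigmoid(p_v1 @ W + c)                      # re-infer hidden probabilities
    W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
    b += lr * (v0 - p_v1)
    c += lr * (p_h0 - p_h1)

# Repeatedly train on a single binary pattern; reconstruction error shrinks.
v = rng.integers(0, 2, n_visible).astype(float)
for _ in range(100):
    cd1_update(v)

recon = sigmoid(sigmoid(v @ W + c) @ W.T + b)
```

On a CPU, the `v0 @ W` products are computed sequentially or with limited vectorization; the appeal of an FPGA design is that every unit's weighted sum can be evaluated in the same clock cycles, which is where the reported speedup comes from.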