Efficient stochastic parallel gradient descent training for on-chip optical processor (Cited by: 1)
Authors: Yuanjian Wan, Xudong Liu, Guangze Wu, Min Yang, Guofeng Yan, Yu Zhang, Jian Wang. Opto-Electronic Advances (SCIE, EI, CAS, CSCD), 2024, Issue 4, pp. 5-15 (11 pages)
In recent years, space-division multiplexing (SDM) technology, which transmits data on multiple parallel channels for efficient capacity scaling, has been widely used in fiber and free-space optical communication systems. To enable flexible data management and cope with the mixing between different channels, an integrated reconfigurable optical processor is used for optical switching and for mitigating channel crosstalk. However, efficient online training becomes intricate and challenging, particularly when dealing with a significant number of channels. Here we use the stochastic parallel gradient descent (SPGD) algorithm to configure the integrated optical processor, which requires less computation than the traditional gradient descent (GD) algorithm. We design and fabricate a 6×6 on-chip optical processor on a silicon platform to implement optical switching and descrambling, assisted by online training with the SPGD algorithm. Moreover, we apply the on-chip processor configured by the SPGD algorithm to optical communications for optical switching and for efficiently mitigating channel crosstalk in SDM systems. In comparison with the traditional GD algorithm, the SPGD algorithm shows better performance, especially when the scale of the matrix is large, which means it has the potential to optimize large-scale optical matrix computation acceleration chips.
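The computational advantage the abstract claims for SPGD over GD comes from how the gradient is estimated: SPGD dithers all control parameters simultaneously with a random bipolar perturbation and needs only two evaluations of the measured cost per iteration, independent of the parameter count, whereas finite-difference GD needs two per parameter. A minimal sketch of that update rule follows; the quadratic toy cost, the 36-parameter size (loosely matching a 6×6 mesh), and the gain/amplitude values are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical optimum: 36 phase-shifter settings for a 6x6-style mesh.
target = rng.uniform(-np.pi, np.pi, size=36)

def cost(theta):
    """Black-box objective standing in for measured channel crosstalk:
    distance from the current settings to the (unknown) optimum."""
    return float(np.sum((theta - target) ** 2))

theta = np.zeros(36)          # initial control voltages/phases
gain, amp = 0.3, 0.05         # update gain and dither amplitude (assumed)

for _ in range(2000):
    # Bipolar random perturbation applied to ALL parameters at once:
    # only two cost evaluations per step, regardless of parameter count.
    delta = amp * rng.choice([-1.0, 1.0], size=theta.size)
    dJ = cost(theta + delta) - cost(theta - delta)
    theta -= gain * dJ * delta  # descend along the estimated gradient

print(cost(theta))  # residual cost after training
```

Because the update direction is `dJ * delta`, each iteration contracts the error only along the random dither direction, so convergence is slower per step than exact GD but each step is vastly cheaper when the parameter count is large, which matches the abstract's observation about large matrix scales.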
Keywords: optical communications; optical signal processing; channel descrambling; optical neural network chip; silicon photonics