Funding: the National Natural Science Foundation of China (Nos. 62276285 and 62236011) and the Major Projects of the Social Science Foundation of China (No. 20&ZD279).
Abstract: The game of Tibetan Go faces a scarcity of expert knowledge and research literature. Therefore, we study the zero-learning model of Tibetan Go under limited computing resources and propose TibetanGoTinyNet, a novel scale-invariant, U-Net-style, two-headed-output lightweight network. Lightweight convolutional neural networks and a capsule structure are applied to the encoder and decoder of TibetanGoTinyNet to reduce the computational burden and improve feature extraction. Several autonomous self-attention mechanisms are integrated into TibetanGoTinyNet to capture the spatial and global information of the Tibetan Go board and to select important channels. The training data are generated entirely from self-play games. TibetanGoTinyNet achieves a 62%–78% winning rate against four other U-Net-style models: Res-UNet, Res-UNet Attention, Ghost-UNet, and Ghost Capsule-UNet. It also achieves a 75% winning rate in ablation experiments on the attention mechanism with embedded positional information. When migrated from 9×9 to 11×11 boards, the model saves about 33% of the training time while maintaining a 45%–50% winning rate across different Monte Carlo tree search (MCTS) simulation counts. Code for our model is available at https://github.com/paulzyy/TibetanGoTinyNet.
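To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch (not the authors' released code; see the repository above for that) of a two-headed, U-Net-style network with fully convolutional heads. The input planes, layer widths, and head designs are illustrative assumptions, and the paper's capsule and attention blocks are not reproduced; the sketch only shows how an encoder-decoder with a skip connection can feed a policy head and a value head while staying board-size agnostic, the property that allows migration from 9×9 to 11×11 boards.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyUNetTwoHead(nn.Module):
    """Illustrative two-headed U-Net-style network (assumed design).

    Assumes 3 input planes (own stones, opponent stones, side to move)
    and AlphaZero-style outputs: per-point move logits plus a pass logit,
    and a scalar value in [-1, 1]. Not the paper's exact architecture.
    """
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        # Encoder: two lightweight conv stages; the second is downsampled.
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1),
                                  nn.BatchNorm2d(base), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, padding=1),
                                  nn.BatchNorm2d(base * 2), nn.ReLU())
        self.pool = nn.MaxPool2d(2, ceil_mode=True)  # e.g., 9x9 -> 5x5
        # Decoder: fuse upsampled deep features with the skip connection.
        self.dec = nn.Sequential(nn.Conv2d(base * 3, base, 3, padding=1),
                                 nn.BatchNorm2d(base), nn.ReLU())
        # Fully convolutional heads keep the network board-size agnostic:
        # a 1x1 conv gives one logit per board point; global pooling feeds
        # the pass logit and the scalar value head.
        self.policy_conv = nn.Conv2d(base, 1, 1)
        self.pass_fc = nn.Linear(base, 1)
        self.value_fc = nn.Sequential(nn.Linear(base, 32), nn.ReLU(),
                                      nn.Linear(32, 1), nn.Tanh())

    def forward(self, x):
        s1 = self.enc1(x)                       # skip at full resolution
        s2 = self.enc2(self.pool(s1))           # downsampled features
        up = F.interpolate(s2, size=s1.shape[-2:],
                           mode="bilinear", align_corners=False)
        d = self.dec(torch.cat([up, s1], dim=1))
        g = d.mean(dim=(2, 3))                  # global average pooling
        move_logits = self.policy_conv(d).flatten(1)   # (N, n*n)
        logits = torch.cat([move_logits, self.pass_fc(g)], dim=1)
        return F.log_softmax(logits, dim=1), self.value_fc(g)

# Usage: the same weights serve both board sizes, since no layer depends
# on a fixed spatial extent.
net = TinyUNetTwoHead()
for n in (9, 11):
    p, v = net(torch.zeros(1, 3, n, n))
    print(p.shape, v.shape)  # (1, n*n + 1) policy logits, (1, 1) value

The design choice worth noting is that every size-dependent quantity (the number of move logits) is produced convolutionally rather than by a fixed-width linear layer, which is one plausible way to realize the scale invariance the abstract claims.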