Journal Articles
4 articles found.
1. In-memory computing to break the memory wall (Cited by: 1)
Authors: Xiaohe Huang, Chunsen Liu, Yu-Gang Jiang, Peng Zhou. Chinese Physics B (SCIE, EI, CAS, CSCD), 2020, No. 7, pp. 28-48 (21 pages).
Facing the computing demands of the Internet of Things (IoT) and artificial intelligence (AI), the cost of moving data between the central processing unit (CPU) and memory is the key problem, and a chip featuring flexible structural units, ultra-low power consumption, and massive parallelism will be needed. In-memory computing, a non-von Neumann architecture that fuses memory units and computing units, can eliminate data-transfer time and energy consumption while performing massively parallel computations. Prototype in-memory computing schemes adapted from different memory technologies have shown orders-of-magnitude improvements in computing efficiency, leading it to be regarded as the ultimate computing paradigm. Here we review the state-of-the-art memory device technologies with potential for in-memory computing, summarize their versatile applications in neural networks, stochastic generation, and hybrid-precision digital computing, with promising solutions for unprecedented computing tasks, and also discuss the challenges of stability and integration for general in-memory computing.
Keywords: in-memory computing; non-volatile memory device technologies; crossbar array
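A minimal sketch of the crossbar-array principle this abstract describes, not the authors' implementation: weights stored as device conductances multiply input voltages via Ohm's law, and each column wire sums the resulting currents via Kirchhoff's current law, so a matrix-vector product happens in place inside the memory array. The conductance range and noise model below are illustrative assumptions.

```python
import numpy as np

# Illustrative analog in-memory matrix-vector multiplication.
# Weights are stored as conductances G (siemens); inputs are applied as
# row voltages V (volts); each column wire sums currents I = G^T @ V,
# so the multiply-accumulate happens where the data is stored.

rng = np.random.default_rng(0)

n_rows, n_cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_rows, n_cols))  # device conductances (assumed range)
V = rng.uniform(0.0, 0.2, size=n_rows)              # input read voltages

I_ideal = G.T @ V                                   # ideal column currents

# Real devices drift and vary: model this as multiplicative conductance noise.
G_noisy = G * (1.0 + 0.05 * rng.standard_normal(G.shape))
I_real = G_noisy.T @ V

print("ideal  :", I_ideal)
print("noisy  :", I_real)
print("rel err:", np.abs(I_real - I_ideal) / np.abs(I_ideal))
```

The relative error printed at the end is the stability challenge the abstract mentions: analog device variation directly perturbs every multiply-accumulate.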
2. MOSS: An Open Conversational Large Language Model (Cited by: 1)
Authors: Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Xiangyang Liu, Hang Yan, Yunfan Shao, Qiong Tang, Shiduo Zhang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, Yu-Gang Jiang, Xipeng Qiu. Machine Intelligence Research (EI, CSCD), 2024, No. 5, pp. 888-905 (18 pages).
Conversational large language models (LLMs) such as ChatGPT and GPT-4 have recently exhibited remarkable capabilities across various domains, capturing widespread public attention. To facilitate this line of research, in this paper we report the development of MOSS, an open-sourced conversational LLM that contains 16B parameters and can perform a variety of instructions in multi-turn interactions with humans. The base model of MOSS is pre-trained on large-scale unlabeled English, Chinese, and code data. To optimize the model for dialogue, we generate 1.1M synthetic conversations based on user prompts collected through earlier versions of the model API. We then perform preference-aware training on preference data annotated from AI feedback. Evaluation results on real-world use cases and academic benchmarks demonstrate the effectiveness of the proposed approaches. In addition, we present an effective practice for augmenting MOSS with several external tools. Through the development of MOSS, we have established a complete technical roadmap for large language models, from pre-training and supervised fine-tuning to alignment, verifying the feasibility of ChatGPT-style models under resource-limited conditions and providing a reference for both the academic and industrial communities. Model weights and code are publicly available at https://github.com/OpenMOSS/MOSS.
Keywords: large language models; natural language processing; pre-training; alignment; ChatGPT; MOSS
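A minimal usage sketch assuming the Hugging Face transformers API. The checkpoint id and the <|Human|>/<|MOSS|> turn format below are assumptions taken from the linked repository, not confirmed by this abstract; verify both against the README at https://github.com/OpenMOSS/MOSS before running.

```python
# Hypothetical multi-turn inference with MOSS via Hugging Face transformers.
# The hub id and prompt template are assumptions from the project repo;
# check https://github.com/OpenMOSS/MOSS for the authoritative versions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fnlp/moss-moon-003-sft"  # assumed hub id for the 16B SFT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, trust_remote_code=True, device_map="auto"
)

# Assumed turn format: "<|Human|>: ...<eoh>\n<|MOSS|>:"
prompt = "<|Human|>: What can you do?<eoh>\n<|MOSS|>:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs, max_new_tokens=128, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```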
3. Correction to: MOSS: An Open Conversational Large Language Model
Authors: Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Xiangyang Liu, Hang Yan, Yunfan Shao, Qiong Tang, Shiduo Zhang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, Yu-Gang Jiang, Xipeng Qiu. Machine Intelligence Research (EI, CSCD), 2024, No. 6, p. 1216 (1 page).
Correction to: MOSS: An Open Conversational Large Language Model. DOI: 10.1007/s11633-024-1502-8. Authors: Tianxiang Sun, Xiaotian Zhang, Zhengfu He, Peng Li, Qinyuan Cheng, Xiangyang Liu, Hang Yan, Yunfan Shao, Qiong Tang, Shiduo Zhang, Xingjian Zhao, Ke Chen, Yining Zheng, Zhejian Zhou, Ruixiao Li, Jun Zhan, Yunhua Zhou, Linyang Li, Xiaogui Yang, Lingling Wu, Zhangyue Yin, Xuanjing Huang, Yu-Gang Jiang, Xipeng Qiu.
4. Extreme vocabulary learning
Authors: Hanze Dong, Zhenfeng Sun, Yanwei Fu, Shi Zhong, Zhengjun Zhang, Yu-Gang Jiang. Frontiers of Computer Science (SCIE, EI, CSCD), 2020, No. 6, pp. 5-15 (11 pages).
From the standpoint of extreme value theory, the unseen novel classes in open-set recognition can be seen as the extreme values of the training classes. Following this idea, we introduce the margin and coverage distribution to model the training classes. A novel visual-semantic embedding framework, extreme vocabulary learning (EVoL), is proposed; EVoL embeds the visual features into the semantic space in a probabilistic way. Notably, we adopt the vast open vocabulary in the semantic space to help further constrain the margin and coverage of the training classes. The learned embedding can be used directly to solve supervised learning, zero-shot learning, and open-set recognition simultaneously. Experiments on two benchmark datasets demonstrate the effectiveness of the proposed framework against conventional approaches.
Keywords: vocabulary-informed learning; zero-shot learning; extreme value theory
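A minimal sketch of the extreme-value idea behind this abstract, not the paper's actual model: fit a Weibull distribution to the largest within-class distances of the training embeddings, then score a test point by its tail probability to decide whether it falls outside all known classes. The toy data, the choice of Weibull family, and the tail size k are illustrative assumptions.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)

# Toy "embeddings" for one known training class, plus its centroid.
train = rng.normal(loc=0.0, scale=1.0, size=(500, 16))
centroid = train.mean(axis=0)

# Extreme value theory: model the tail of within-class distances by
# fitting a Weibull to the k largest centroid distances (k = 50 here).
dists = np.linalg.norm(train - centroid, axis=1)
tail = np.sort(dists)[-50:]
shape, loc, scale = weibull_min.fit(tail)

def openness_score(x):
    """CDF of the fitted tail distribution: ~1 means far outside the class."""
    d = np.linalg.norm(x - centroid)
    return weibull_min.cdf(d, shape, loc=loc, scale=scale)

known = rng.normal(0.0, 1.0, size=16)  # resembles the training class
novel = rng.normal(4.0, 1.0, size=16)  # far from the training class
print("known class score:", openness_score(known))  # near 0: accept
print("novel class score:", openness_score(novel))  # near 1: reject as unseen
```

Thresholding this tail probability is one standard way to turn a closed-set embedding into an open-set recognizer; the paper's framework additionally constrains the margin and coverage with an open vocabulary in the semantic space.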