Abstract: Mathematics Foundations of Information Security is a core course in the information security discipline. In view of the current national conference on ideological and political education in universities, finding a way to integrate this course with ideological and political education has attracted considerable attention from the education community. This paper analyzes the significance of combining the Mathematics Foundations of Information Security course with ideological and political education, and introduces the corresponding teaching practice. By integrating ideological and political education into the curriculum content, the course aims to cultivate well-rounded talents in information security.
Funding: supported by the National Natural Science Foundation of China (No. 62022027).
Abstract: Conversational large language models (LLMs) such as ChatGPT and GPT-4 have recently exhibited remarkable capabilities across various domains, capturing widespread public attention. To facilitate this line of research, in this paper we report the development of MOSS, an open-source conversational LLM that contains 16B parameters and can follow a variety of instructions in multi-turn interactions with humans. The base model of MOSS is pre-trained on large-scale unlabeled English, Chinese, and code data. To optimize the model for dialogue, we generate 1.1M synthetic conversations based on user prompts collected through earlier versions of the model API. We then perform preference-aware training on preference data annotated from AI feedback. Evaluation results on real-world use cases and academic benchmarks demonstrate the effectiveness of the proposed approaches. In addition, we present an effective practice to augment MOSS with several external tools. Through the development of MOSS, we have established a complete technical roadmap for large language models, from pre-training and supervised fine-tuning to alignment, verifying the feasibility of ChatGPT-style models under resource-limited conditions and providing a reference for both the academic and industrial communities. Model weights and code are publicly available at https://github.com/OpenMOSS/MOSS.
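The preference-aware alignment step described above can be illustrated with a minimal sketch. The abstract does not specify the training objective, so the DPO-style pairwise loss below is an assumption chosen for illustration: given log-probabilities of a preferred and a rejected response under the policy and a frozen reference model, it penalizes the policy when it does not favor the preferred response more than the reference does.

```python
import math

def preference_loss(logp_chosen, logp_rejected,
                    ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Pairwise preference loss (DPO-style sketch; hypothetical, not
    necessarily MOSS's actual objective). Lower loss when the policy
    assigns relatively higher likelihood to the preferred response
    than the reference model does."""
    # Log-ratio of policy vs. reference for each response
    chosen_ratio = logp_chosen - ref_logp_chosen
    rejected_ratio = logp_rejected - ref_logp_rejected
    # Scaled preference margin; beta controls deviation from the reference
    margin = beta * (chosen_ratio - rejected_ratio)
    # Negative log-sigmoid of the margin
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy log-probabilities: policy already prefers the chosen response
loss_good = preference_loss(-2.0, -5.0, -3.0, -4.0)
# Policy prefers the rejected response instead: loss is higher
loss_bad = preference_loss(-5.0, -2.0, -3.0, -4.0)
assert loss_good < loss_bad
```

In a real alignment run, the log-probabilities would come from summing per-token logits of the 16B policy and reference models over each response, and the loss would be averaged over a batch of annotated preference pairs.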