Like every other societal domain, science faces yet another reckoning caused by a bot called ChatGPT (Chat Generative Pre-Trained Transformer). ChatGPT was introduced in November 2022 to produce conversational messages that read as though they were written by humans. With the release of the latest version of ChatGPT, GPT-4, and of similar systems such as Google Bard, Chatsonic, and ColossalChat, these chatbots are built on large language models (LLMs) pre-trained with billions of parameters (about 175 billion in GPT-3), allowing them to respond to user prompts much like humans. Thanks to its dialogue format, GPT-4, for example, can admit its mistakes and challenge false assumptions; it can also write essays and keep track of the context of an ongoing discussion. However, the human-like structure of the generated text may deceive users into believing that it has a human origin [1]. Even though these chatbot models generate fluent text, they leave room for improvement: they occasionally produce inappropriate or wrong responses, resulting in faulty inferences or ethical issues. This article discusses some fundamental strengths and weaknesses of this artificial intelligence (AI) system with respect to scientific research.
Funding: financially supported by the 2115 Talent Development Program of China Agricultural University.