Journal Articles
2 articles found
1. Can ChatGPT evaluate research quality?
Author: Mike Thelwall. Journal of Data and Information Science (CSCD), 2024, No. 2, pp. 1-21 (21 pages)
Purpose: Assess whether ChatGPT 4.0 is accurate enough to perform research evaluations on journal articles to automate this time-consuming task.
Design/methodology/approach: Test the extent to which ChatGPT-4 can assess the quality of journal articles using a case study of the published scoring guidelines of the UK Research Excellence Framework (REF) 2021 to create a research evaluation ChatGPT. This was applied to 51 of my own articles and compared against my own quality judgements.
Findings: ChatGPT-4 can produce plausible document summaries and quality evaluation rationales that match the REF criteria. Its overall scores have weak correlations with my self-evaluation scores of the same documents (averaging r=0.281 over 15 iterations, with 8 being statistically significantly different from 0). In contrast, the average scores from the 15 iterations produced a statistically significant positive correlation of 0.509. Thus, averaging scores from multiple ChatGPT-4 rounds seems more effective than individual scores. The positive correlation may be due to ChatGPT being able to extract the author's significance, rigour, and originality claims from inside each paper. If my weakest articles are removed, then the correlation with average scores (r=0.200) falls below statistical significance, suggesting that ChatGPT struggles to make fine-grained evaluations.
Research limitations: The data is self-evaluations of a convenience sample of articles from one academic in one field.
Practical implications: Overall, ChatGPT does not yet seem to be accurate enough to be trusted for any formal or informal research quality evaluation tasks. Research evaluators, including journal editors, should therefore take steps to control its use.
Originality/value: This is the first published attempt at post-publication expert review accuracy testing for ChatGPT.
Keywords: ChatGPT; Large Language Models; LLM; Research Excellence Framework; REF 2021; research quality; research assessment
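The first abstract's central finding, that the mean of 15 ChatGPT-4 scoring rounds tracks the author's self-ratings (r=0.509) better than individual rounds do (average r=0.281), is a standard noise-reduction effect: averaging independent noisy ratings shrinks the noise while preserving the shared signal. A minimal, self-contained sketch with entirely synthetic scores (not the paper's data; the score scale and noise level here are assumptions for illustration) shows the mechanism:

```python
# Illustrative sketch (synthetic data, not the paper's): averaging several
# noisy scoring rounds correlates better with a reference rating than a
# single round does.
import random
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

random.seed(0)
n_articles, n_rounds = 51, 15

# Hypothetical human self-evaluation scores on a REF-like 1-4 quality scale.
human = [random.uniform(1, 4) for _ in range(n_articles)]

# Each simulated "round" is the human signal plus heavy independent noise,
# standing in for one ChatGPT scoring run.
rounds = [[h + random.gauss(0, 2) for h in human] for _ in range(n_rounds)]

single_r = pearson(rounds[0], human)                       # one round alone
mean_scores = [statistics.fmean(col) for col in zip(*rounds)]
averaged_r = pearson(mean_scores, human)                   # mean of 15 rounds

print(f"single round r = {single_r:.3f}")
print(f"averaged r     = {averaged_r:.3f}")
```

Because the per-round noise is independent, averaging 15 rounds cuts its standard deviation by roughly a factor of sqrt(15), so the averaged scores correlate substantially more strongly with the reference ratings than any single round.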
2. Google Scholar University Ranking Algorithm to Evaluate the Quality of Institutional Research
Authors: Noor Ul Sabah, Muhammad Murad Khan, Ramzan Talib, Muhammad Anwar, Muhammad Sheraz Arshad Malik, Puteri Nor Ellyza Nohuddin. Computers, Materials & Continua (SCIE, EI), 2023, No. 6, pp. 4955-4972 (18 pages)
Education quality has undoubtedly become an important local and international benchmark for education, and an institute's ranking is assessed based on the quality of education, research projects, theses, and dissertations, which has always been controversial. Hence, this research paper is motivated by institute rankings all over the world. The data of institutes are obtained through Google Scholar (GS) as input to investigate the United Kingdom's Research Excellence Framework (UK-REF) process. For this purpose, the current research used a bespoke program to evaluate the institutes' ranking based on their source. The bespoke program requires changes to improve the results by addressing these methodological issues: firstly, redundant profiles, which inflated citation counts and rank and produced false results; secondly, the exclusion of theses and dissertation documents so that only actual publications are counted for citations; thirdly, the elimination of falsely owned articles from scholars' profiles. To accomplish this task, the experimental design collected data from 120 UK-REF institutes and GS for the present year to enhance the correlation analysis in this new evaluation. The data extracted from GS are processed into structured data, which are then used to generate statistical computations of citation analysis that contribute to the ranking based on citations. The research promoted the predictive approach of correlational research. Furthermore, the experimental evaluation reported encouraging results in comparison to the previous modification made by the proposed taxonomy. This paper discusses the limitations of the current evaluation and suggests potential paths to improve the research impact algorithm.
Keywords: Google Scholar; institutes ranking; research assessment exercise; Research Excellence Framework; impact evaluation; citation data
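The second abstract lists three clean-up steps applied before citations are counted: dropping redundant scholar profiles, excluding theses and dissertations, and removing falsely owned articles. A minimal sketch of that filtering pipeline, using an entirely hypothetical data model (the field names and records below are assumptions, not the paper's actual program), could look like this:

```python
# Illustrative sketch (hypothetical data model, not the paper's code) of the
# three clean-up steps described in the abstract:
#   1) skip redundant (duplicate) scholar profiles,
#   2) exclude theses/dissertations so only actual publications count,
#   3) drop falsely owned articles from profiles.

profiles = [
    {"id": "p1", "institute": "A", "papers": [
        {"title": "X", "type": "article", "citations": 30, "owned": True},
        {"title": "Y", "type": "thesis", "citations": 12, "owned": True},
        {"title": "Z", "type": "article", "citations": 8, "owned": False},
    ]},
    {"id": "p1", "institute": "A", "papers": [  # redundant duplicate profile
        {"title": "X", "type": "article", "citations": 30, "owned": True},
    ]},
    {"id": "p2", "institute": "B", "papers": [
        {"title": "W", "type": "article", "citations": 20, "owned": True},
    ]},
]

def institute_citations(profiles):
    """Sum citations per institute after the three filtering steps."""
    seen, totals = set(), {}
    for prof in profiles:
        if prof["id"] in seen:              # step 1: skip duplicate profiles
            continue
        seen.add(prof["id"])
        for paper in prof["papers"]:
            if paper["type"] == "thesis":   # step 2: exclude theses
                continue
            if not paper["owned"]:          # step 3: drop falsely owned items
                continue
            totals[prof["institute"]] = (
                totals.get(prof["institute"], 0) + paper["citations"]
            )
    return totals

print(institute_citations(profiles))  # A counts X once; thesis Y and unowned Z excluded
```

Without the filters, institute A would be credited twice for paper X and once each for the thesis and the falsely owned article, which is exactly the kind of inflated count the abstract says produced false rankings.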