Journal Articles
12 articles found
1. Is big team research fair in national research assessments? The case of the UK Research Excellence Framework 2021
Authors: Mike Thelwall, Kayvan Kousha, Meiko Makita, Mahshid Abdoli, Emma Stuart, Paul Wilson, Jonathan Levitt. Journal of Data and Information Science (CSCD), 2023, Issue 1, pp. 9-20 (12 pages).
Collaborative research causes problems for research assessments because of the difficulty in fairly crediting its authors. Whilst splitting the rewards for an article amongst its authors has the greatest surface-level fairness, many important evaluations assign full credit to each author, irrespective of team size. The underlying rationales for this are labour reduction and the need to incentivise collaborative work because it is necessary to solve many important societal problems. This article assesses whether full counting changes results compared to fractional counting in the case of the UK's Research Excellence Framework (REF) 2021. For this assessment, fractional counting reduces the number of journal articles to as little as 10% of the full counting value, depending on the Unit of Assessment (UoA). Despite this large difference, allocating an overall grade point average (GPA) based on full counting or fractional counting gives results with a median Pearson correlation within UoAs of 0.98. The largest changes are for Archaeology (r=0.84) and Physics (r=0.88). There is a weak tendency for higher scoring institutions to lose from fractional counting, with the loss being statistically significant in 5 of the 34 UoAs. Thus, whilst the apparent over-weighting of contributions to collaboratively authored outputs does not seem too problematic from a fairness perspective overall, it may be worth examining in the few UoAs in which it makes the most difference.
Keywords: Collaboration; Research assessment; REF; REF2021; Research quality; Scientometrics
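A minimal sketch of the full versus fractional counting comparison described in the abstract above, using invented outputs (REF-style quality scores 0-4 and author counts) rather than REF data; the fractional GPA is assumed here to be a 1/n-weighted average of output scores.

```python
from statistics import mean

# Illustrative outputs for two institutions: (quality score 0-4, number of authors).
outputs = {
    "Institution A": [(4, 12), (3, 2), (3, 5), (2, 1)],
    "Institution B": [(4, 1), (4, 3), (2, 8), (1, 2)],
}

def gpa_full(records):
    # Full counting: every output contributes its whole score, whatever the team size.
    return mean(score for score, _ in records)

def gpa_fractional(records):
    # Fractional counting: each output is weighted by 1 / number_of_authors,
    # so the GPA becomes a weighted average of the quality scores.
    weights = [1 / n_authors for _, n_authors in records]
    weighted_sum = sum(score * w for (score, _), w in zip(records, weights))
    return weighted_sum / sum(weights)

for name, records in outputs.items():
    print(name, round(gpa_full(records), 2), round(gpa_fractional(records), 2))

# Across all institutions in a Unit of Assessment, the two GPA lists would then be
# compared with a Pearson correlation (the paper reports a median r of 0.98).
```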
2. Can ChatGPT evaluate research quality?
Author: Mike Thelwall. Journal of Data and Information Science (CSCD), 2024, Issue 2, pp. 1-21 (21 pages).
Purpose: Assess whether ChatGPT 4.0 is accurate enough to perform research evaluations on journal articles to automate this time-consuming task. Design/methodology/approach: Test the extent to which ChatGPT-4 can assess the quality of journal articles using a case study of the published scoring guidelines of the UK Research Excellence Framework (REF) 2021 to create a research evaluation ChatGPT. This was applied to 51 of my own articles and compared against my own quality judgements. Findings: ChatGPT-4 can produce plausible document summaries and quality evaluation rationales that match the REF criteria. Its overall scores have weak correlations with my self-evaluation scores of the same documents (averaging r=0.281 over 15 iterations, with 8 being statistically significantly different from 0). In contrast, the average scores from the 15 iterations produced a statistically significant positive correlation of 0.509. Thus, averaging scores from multiple ChatGPT-4 rounds seems more effective than individual scores. The positive correlation may be due to ChatGPT being able to extract the author's significance, rigour, and originality claims from inside each paper. If my weakest articles are removed, then the correlation with average scores (r=0.200) falls below statistical significance, suggesting that ChatGPT struggles to make fine-grained evaluations. Research limitations: The data is self-evaluations of a convenience sample of articles from one academic in one field. Practical implications: Overall, ChatGPT does not yet seem to be accurate enough to be trusted for any formal or informal research quality evaluation tasks. Research evaluators, including journal editors, should therefore take steps to control its use. Originality/value: This is the first published attempt at post-publication expert review accuracy testing for ChatGPT.
Keywords: ChatGPT; Large Language Models; LLM; Research Excellence Framework; REF 2021; Research quality; Research assessment
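A hedged sketch of the averaging effect reported above: correlating single-round scores versus the per-article mean of several rounds against an author's self-scores. The arrays are synthetic placeholders, not the paper's data; `scipy.stats.pearsonr` supplies the correlation.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Placeholder data: 51 self-assigned quality scores (1-4) and 15 noisy
# ChatGPT-style scoring iterations per article (a proxy, not real model output).
self_scores = rng.integers(1, 5, size=51).astype(float)
chatgpt_scores = self_scores + rng.normal(0, 1.5, size=(15, 51))

# Correlation of each single iteration with the self-scores (weak on average).
single_round_r = [pearsonr(run, self_scores)[0] for run in chatgpt_scores]

# Correlation of the per-article mean over all 15 iterations (noise averages out).
averaged_r, p_value = pearsonr(chatgpt_scores.mean(axis=0), self_scores)

print(f"mean single-round r = {np.mean(single_round_r):.3f}")
print(f"averaged-scores r   = {averaged_r:.3f} (p = {p_value:.3f})")
```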
3. Google Scholar University Ranking Algorithm to Evaluate the Quality of Institutional Research
Authors: Noor Ul Sabah, Muhammad Murad Khan, Ramzan Talib, Muhammad Anwar, Muhammad Sheraz Arshad Malik, Puteri Nor Ellyza Nohuddin. Computers, Materials & Continua (SCIE, EI), 2023, Issue 6, pp. 4955-4972 (18 pages).
Education quality has become an important local and international benchmark, and an institute's ranking is assessed based on the quality of its education, research projects, theses, and dissertations, which has always been controversial. This research paper is therefore motivated by institute rankings worldwide. Data on institutes are obtained through Google Scholar (GS) as input to investigate the United Kingdom's Research Excellence Framework (UK-REF) process. For this purpose, the current research used a bespoke program to evaluate the institutes' ranking based on this source. The bespoke program requires changes to improve the results by addressing three methodological issues: firstly, redundant profiles, which inflate citations and rank and produce false results; secondly, the exclusion of theses and dissertations so that only actual publications are counted for citations; thirdly, the elimination of falsely owned articles from scholars' profiles. The experimental design collected data from 120 UK-REF institutes and GS for the present year to strengthen the correlation analysis in this new evaluation. The data extracted from GS are processed into structured form and then used to generate citation statistics that contribute to the citation-based ranking. The research adopted the predictive approach of correlational research. The experimental evaluation reported encouraging results in comparison with the previous modification made by the proposed taxonomy. The paper discusses the limitations of the current evaluation and suggests potential paths to improve the research impact algorithm.
Keywords: Google Scholar; Institutes ranking; Research assessment exercise; Research excellence framework; Impact evaluation; Citation data
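An illustrative sketch of the three clean-up steps named in the abstract above (deduplicating redundant profiles, excluding theses/dissertations, and dropping falsely owned articles) before summing citations per institution. The record layout and helper names are assumptions, not the authors' Bespoke Program.

```python
from collections import defaultdict

# Assumed layout for scraped Google Scholar records.
records = [
    {"profile_id": "p1", "institution": "Univ X", "title": "Paper A",
     "doc_type": "article", "citations": 120, "verified_author": True},
    {"profile_id": "p1", "institution": "Univ X", "title": "Paper A",
     "doc_type": "article", "citations": 120, "verified_author": True},   # redundant duplicate
    {"profile_id": "p2", "institution": "Univ X", "title": "PhD Thesis",
     "doc_type": "thesis", "citations": 40, "verified_author": True},
    {"profile_id": "p3", "institution": "Univ Y", "title": "Paper B",
     "doc_type": "article", "citations": 75, "verified_author": False},   # falsely owned
    {"profile_id": "p4", "institution": "Univ Y", "title": "Paper C",
     "doc_type": "article", "citations": 60, "verified_author": True},
]

def institution_citations(records):
    seen = set()
    totals = defaultdict(int)
    for rec in records:
        key = (rec["profile_id"], rec["title"])           # 1) drop redundant duplicates
        if key in seen:
            continue
        seen.add(key)
        if rec["doc_type"] in {"thesis", "dissertation"}:  # 2) exclude theses/dissertations
            continue
        if not rec["verified_author"]:                     # 3) drop falsely owned articles
            continue
        totals[rec["institution"]] += rec["citations"]
    return dict(totals)

# Citation-based ranking that could then be correlated with UK-REF positions.
ranking = sorted(institution_citations(records).items(), key=lambda kv: kv[1], reverse=True)
print(ranking)
```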
4. Research Progress in Occupational Health Risk Assessment Methods in China (cited 23 times)
Authors: ZHOU Li Fang, TIAN Fang, ZOU Hua, YUAN Wei Ming, HAO Mo, ZHANG Mei Bian. Biomedical and Environmental Sciences (SCIE, CAS, CSCD), 2017, Issue 8, pp. 616-622 (7 pages).
Traditional occupational disease control and prevention has remained prevalent in China over recent decades. There are approximately 30,000 new case reports of occupational diseases annually. Although China has already established a series of occupational disease prevention programs, occupational health risk assessment (OHRA) strategies continue to be a limitation.
5. Achieve Research Excellence through Improvement of Science Assessment System
Bulletin of the Chinese Academy of Sciences, 2017, Issue 3, pp. 163-167 (5 pages).
With the support of the Consultation and Evaluation Committee of the Academic Divisions of the Chinese Academy of Sciences (CAS), a task force headed by CAS Member FANG Rongxiang recently conducted a project entitled Science Value Assessment in an Increasingly Globalized World. Based on the study, the researchers presented a report on the improvement of China's science assessment system. The following paragraphs present the major ideas of this report.
6. A Local Adaptation in an Output-Based Research Support Scheme (OBRSS) at University College Dublin (cited 2 times)
Authors: Liam Cleere, Lai Ma. Journal of Data and Information Science (CSCD), 2018, Issue 4, pp. 74-84 (11 pages).
University College Dublin (UCD) has implemented the Output-Based Research Support Scheme (OBRSS) since 2016. Adapted from the Norwegian model, the OBRSS awards individual academic staff using a points system based on the number of publications and doctoral students. This article describes the design and implementation processes of the OBRSS, including the creation of the ranked publication list and points system and infrastructure requirements. Some results of the OBRSS will be presented, focusing on the coverage of publications reported in the OBRSS ranked publication list and Scopus, as well as information about spending patterns. Challenges such as the evaluation of the OBRSS in terms of fairness, transparency, and effectiveness will also be discussed.
Keywords: Output-Based Research Support Scheme; Norwegian model; Performance-based funding; Research assessment; Research information systems
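A minimal sketch of a Norwegian-model-style points calculation of the kind described above, where each output earns points by publication level and each doctoral student adds further points. The level weights and point values are illustrative assumptions, not UCD's actual OBRSS tariff.

```python
# Illustrative point values only; the real OBRSS ranked publication list differs.
PUBLICATION_POINTS = {
    ("article", 1): 1.0,   # level-1 (standard) journal article
    ("article", 2): 3.0,   # level-2 (leading) journal article
    ("book", 1): 5.0,
    ("book", 2): 8.0,
}
POINTS_PER_DOCTORAL_STUDENT = 1.0

def obrss_points(publications, doctoral_students):
    """publications: iterable of (output_type, level) tuples from the ranked list."""
    pub_points = sum(PUBLICATION_POINTS.get(p, 0.0) for p in publications)
    return pub_points + POINTS_PER_DOCTORAL_STUDENT * doctoral_students

# Example: two level-1 articles, one level-2 article, and two doctoral students.
print(obrss_points([("article", 1), ("article", 1), ("article", 2)], doctoral_students=2))  # 7.0
```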
7. Citation distributions and research evaluations: The impossibility of formulating a universal indicator
Author: Alonso Rodríguez-Navarro. Journal of Data and Information Science, 2024, Issue 4, pp. 24-48 (25 pages).
Purpose: To analyze the diversity of citation distributions to publications in different research topics to investigate the accuracy of size-independent, rank-based indicators. The top percentile-based indicators are the most common indicators of this type, and the evaluations of Japan are the most evident misjudgments. Design/methodology/approach: The distributions of citations to publications from countries and journals in several research topics were analyzed along with the corresponding global publications using histograms with logarithmic binning, double rank plots, and normal probability plots of log-transformed numbers of citations. Findings: Size-independent, top percentile-based indicators are accurate when the global ranks of local publications fit a power law, but deviations in the least cited papers are frequent in countries and occur in all journals with high impact factors. In these cases, a single indicator is misleading. Comparisons of the proportions of uncited papers are the best way to predict these deviations. Research limitations: This study is fundamentally analytical, and its results describe mathematical facts that are self-evident. Practical implications: Respectable institutions, such as the OECD, the European Commission, and the U.S. National Science Board, produce research country rankings and individual evaluations using size-independent percentile indicators that are misleading in many countries. These misleading evaluations should be discontinued because they can cause confusion among research policymakers and lead to incorrect research policies. Originality/value: Studies linking the lower tail of citation distribution, including uncited papers, to percentile research indicators have not been performed previously. The present results demonstrate that studies of this type are necessary to find reliable procedures for research assessments.
Keywords: Scientometrics; Research assessment; Research indicators; Citation distribution; Rank analysis
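A rough sketch, on synthetic data, of two diagnostics named in the abstract above: a log-binned histogram of citation counts and a normal probability check of log-transformed citations, with the share of uncited papers as the suggested early-warning signal. This is not the author's actual procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic citation counts: roughly lognormal, with a forced share of uncited papers.
citations = np.round(rng.lognormal(mean=1.5, sigma=1.2, size=2000)).astype(int)
citations[rng.random(2000) < 0.15] = 0

# Proportion of uncited papers (the lower-tail signal highlighted in the abstract).
print("uncited share:", (citations == 0).mean())

# Histogram with logarithmic binning (papers with at least one citation).
cited = citations[citations > 0]
bins = np.logspace(0, np.log10(cited.max() + 1), 12)
counts, edges = np.histogram(cited, bins=bins)
for low, high, c in zip(edges[:-1], edges[1:], counts):
    print(f"[{low:8.1f}, {high:8.1f}) : {c}")

# Normal probability plot of log-transformed citations: a straight line (high r)
# suggests a lognormal fit; curvature in the lower tail flags deviations.
(_, _), (slope, intercept, r) = stats.probplot(np.log(cited), dist="norm")
print("normal probability plot r =", round(r, 3))
```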
8. The Climate-traffic Assessment Model Research and Annual Assessment
Natural Disaster Reduction in China, 1997, Issue 1, pp. 31-37 (7 pages).
9. REVIEW OF RESEARCH ON DISASTER MONITORING AND ASSESSMENT AT LREIS AND DGIS
Authors: State Lab. of Resources and Environment Information System, Department of Geographic Information System, NRSCC. Natural Disaster Reduction in China, 1995, Issue 4, pp. 182-185 (4 pages).
10. Quality Assessment of Clinical Practice Guidelines for Integrative Medicine in China: A Systematic Review (cited 3 times)
Authors: YAO Sha, Wei Dang, CHEN Yao-long, WANG Qi, WANG Xiao-qin, ZENG Zhao, LI Hui. Chinese Journal of Integrative Medicine (SCIE, CAS, CSCD), 2017, Issue 5, pp. 381-385 (5 pages).
Objective: To assess the quality of integrative medicine clinical practice guidelines (CPGs) published before 2014. Methods: A systematic search of the scientific literature published before 2014 was conducted to select integrative medicine CPGs. Four major Chinese integrated databases and one guideline database were searched: the Chinese Biomedical Literature Database (CBM), the China National Knowledge Infrastructure (CNKI), the China Science and Technology Journal Database (VIP), Wanfang Data, and the China Guideline Clearinghouse (CGC). Four reviewers independently assessed the quality of the included guidelines using the Appraisal of Guidelines for Research and Evaluation (AGREE) II Instrument. Overall consensus among the reviewers was assessed using the intra-class correlation coefficient (ICC). Results: A total of 41 guidelines published from 2003 to 2014 were included. The overall consensus among the reviewers was good [ICC: 0.928; 95% confidence interval (CI): 0.920 to 0.935]. The scores on the 6 AGREE domains were: 17% for scope and purpose (range: 6% to 32%), 11% for stakeholder involvement (range: 0 to 24%), 10% for rigor of development (range: 3% to 22%), 39% for clarity and presentation (range: 25% to 64%), 11% for applicability (range: 4% to 24%), and 1% for editorial independence (range: 0 to 15%). Conclusions: The quality of integrative medicine CPGs was low, and the development of integrative medicine CPGs should be guided by systematic methodology. More emphasis should be placed on multi-disciplinary guideline development groups, quality of evidence, management of funding and conflicts of interest, and guideline updates in the process of developing integrative medicine CPGs in China.
Keywords: Clinical practice guideline; Appraisal of Guidelines for Research and Evaluation; Quality assessment
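A small sketch of the standard AGREE II scaled domain score used in appraisals like the one above: (obtained score minus minimum possible) divided by (maximum possible minus minimum possible). The example ratings are invented, and the ICC step is only indicated in a comment.

```python
def agree_domain_score(ratings_by_reviewer):
    """ratings_by_reviewer: one list of 1-7 item ratings per reviewer for a single domain.

    Returns the AGREE II scaled domain score as a percentage:
    (obtained - minimum possible) / (maximum possible - minimum possible).
    """
    n_reviewers = len(ratings_by_reviewer)
    n_items = len(ratings_by_reviewer[0])
    obtained = sum(sum(ratings) for ratings in ratings_by_reviewer)
    min_possible = 1 * n_items * n_reviewers
    max_possible = 7 * n_items * n_reviewers
    return 100 * (obtained - min_possible) / (max_possible - min_possible)

# Invented example: 4 reviewers rating the 3 items of Domain 1 (scope and purpose).
domain1 = [[2, 1, 3], [1, 2, 2], [3, 1, 1], [2, 2, 1]]
print(f"Domain 1 scaled score: {agree_domain_score(domain1):.0f}%")

# Agreement among reviewers across guidelines would then be summarized with an
# intra-class correlation coefficient (e.g., pingouin.intraclass_corr).
```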
11. Research methods in complementary and alternative medicine: an integrative review (cited 1 time)
Authors: Fabiana de Almeida Andrade, Caio Fabio Schlechta Portella. Journal of Integrative Medicine (SCIE, CAS, CSCD), 2018, Issue 1, pp. 6-13 (8 pages).
The scientific literature presents a modest amount of evidence on the use of complementary and alternative medicine (CAM). On the other hand, in practice, relevant results are common. The debates among CAM practitioners about the quality and execution of scientific research are important. Therefore, the aim of this review is to gather, synthesize and describe the differentiated methodological models that encompass the complexity of therapeutic interventions. The process of bringing evidence-based medicine into clinical practice in CAM is essential for the growth and strengthening of complementary medicines worldwide.
Keywords: Research methodology; Complementary medicine; Alternative medicine; Complementary therapies; Comparative effectiveness research; Outcome assessment; Nonlinear dynamics; Evaluation studies
12. Building Momentum to Realign Incentives to Support Open Science
Author: Heather Joseph. Data Intelligence, 2021, Issue 1, pp. 71-78 (8 pages).
The COVID-19 pandemic highlights the urgent need to strengthen global scientific collaboration, and to ensure the fundamental right to universal access to scientific progress and its applications. Open Science (OS) is central to achieving these goals. It aims to make science accessible, transparent, and effective by providing barrier-free access to scientific publications, data, and infrastructures, along with open software, Open Educational Resources, and open technologies. OS also promotes public trust in science at a time when it has never been more important to do so. Over the past decade, momentum towards the widespread adoption of OS practices has been primarily driven by declarations (e.g., DORA, the Leiden Manifesto). These serve an important role, but for OS to truly take root, researchers also must be fully incentivized and rewarded for its practice. This requires research funders and academic leaders to take the lead in collaborating with researchers in designing and implementing new incentive structures, and to actively work to socialize these throughout the research ecosystem. The US National Academies of Science, Engineering, and Medicine (NASEM) Roundtable on Aligning Research Incentives for OS is one such effort. This paper examines the strategy behind convening the Roundtable, its current participant makeup, focus, and outputs. It also explores how this approach might be expanded and adapted throughout the global OS community.
Keywords: Research assessment; Incentives; Open Science; NASEM Roundtable