Abstract:
Purpose: This paper presents an overview of different kinds of lists of scholarly publication channels and of experiences related to the construction and maintenance of national lists supporting performance-based research funding systems. It also contributes a set of recommendations for the construction and maintenance of national lists of journals and book publishers.
Design/methodology/approach: The study is based on an analysis of previously published studies, policy papers, and reported experiences related to the construction and use of lists of scholarly publication channels.
Findings: Several countries have systems for research funding and/or evaluation that involve the use of national lists of scholarly publication channels (mainly journals and publishers). Typically, such lists are selective (they do not include all scholarly or non-scholarly channels) and differentiated (they distinguish between channels of different levels and quality). At the same time, most lists are embedded in a system that encompasses multiple or all disciplines. This raises the question of how such lists can be organized and maintained to ensure that all relevant disciplines and all types of research are adequately represented.
Research limitations: The conclusions and recommendations of the study are based on the authors' interpretation of a complex and sometimes controversial process involving many different stakeholders.
Practical implications: The recommendations and the related background information provided in this paper enable mutual learning that may feed into improvements in the construction and maintenance of national and other lists of scholarly publication channels in any geographical context. This may foster the development of responsible evaluation practices.
Originality/value: This paper presents the first general overview and typology of different kinds of publication channel lists, provides insights on expert-based versus metrics-based evaluation, and formulates a set of recommendations for the responsible construction and maintenance of publication channel lists.
Abstract:
Two journal-level indicators, the mean ($m^i$) and the standard deviation ($v^i$), are proposed as the core indicators of each journal, and we show that several other indicators can be calculated from these two core indicators, assuming that the yearly citation counts of papers in each journal approximately follow a log-normal distribution. These other journal-level indicators include the journal index, the journal one-by-one-sample comparison citation success index $S_j^i$, the journal multiple-sample $K^i$-$K^j$ comparison success rate $S_{j,k^j}^{i,k^i}$, the minimum representative sizes $k_j^i$ and $k_i^j$, and the average ranking of all papers in a journal within a set of journals ($R^i$). We find that these indicators are consistent with those calculated directly from the raw citation data $\{C^i=(c_1^i,c_2^i,\ldots,c_{N^i}^i),\ \forall i\}$ of the journals. In addition to its theoretical significance, the ability to estimate other indicators from the core indicators has practical implications: it enables individuals who lack access to raw citation count data to use the other indicators by relying only on the core indicators, which are typically easy to obtain.
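To illustrate how a pairwise indicator can be derived from the two core indicators alone, the sketch below computes a citation success index from log-normal parameters. This is a minimal sketch under stated assumptions, not the paper's implementation: it assumes $m^i$ and $v^i$ denote the mean and standard deviation of the log-transformed citation counts, in which case the probability that a randomly drawn paper from journal i receives more citations than a randomly drawn paper from journal j has a closed form via the standard normal CDF. The function name and the numerical parameters are hypothetical.

from math import erf, sqrt

def citation_success_index(m_i, v_i, m_j, v_j):
    """Probability that a random paper from journal i outcites a random
    paper from journal j, assuming citation counts in each journal are
    log-normal with log-scale mean m and log-scale standard deviation v.

    The difference log(c_i) - log(c_j) of two independent normals is
    normal with mean m_i - m_j and variance v_i**2 + v_j**2, so the
    success probability is the standard normal CDF at the standardized
    mean difference.
    """
    z = (m_i - m_j) / sqrt(v_i**2 + v_j**2)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF Phi(z)

# Hypothetical core indicators: journal i cites slightly higher on average.
print(citation_success_index(m_i=2.1, v_i=1.0, m_j=1.8, v_j=1.1))  # ~0.58

Note that only the four core-indicator values enter the computation; no raw citation vector $C^i$ is needed, which is the practical point the abstract makes about estimating indicators without access to the underlying citation data.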