Purpose: This paper presents findings of a quasi-experimental assessment to gauge the research productivity and degree of interdisciplinarity of research center outputs. Of special interest, we share an enriched visualization of research co-authoring patterns. Design/methodology/approach: We compile publications by 45 researchers in each of 1) the iUTAH project, which we consider here to be analogous to a "research center," 2) CG1, a comparison group of participants in two other Utah environmental research centers, and 3) CG2, a comparison group of Utah university environmental researchers not associated with a research center. We draw bibliometric data from Web of Science and from Google Scholar. We gather publications for a period before iUTAH had been established (2010-2012) and a period after (2014-2016). We compare these research outputs in terms of publications and the citations they receive. We also measure interdisciplinarity using Integration scoring and generate science overlay maps to locate the research publications across disciplines. Findings: We find that participation in the iUTAH project appears to increase research outputs (publications in the After period) and to increase research citation rates relative to the comparison group researchers (although CG1 research remains the most cited, as it was in the Before period). Most notably, participation in iUTAH markedly increases co-authoring among researchers--in general; for junior as well as senior faculty; for men and women; across organizations; and across disciplines. Research limitations: The quasi-experimental design necessarily generates suggestive, not definitively causal, findings because of the imperfect controls. Practical implications: This study demonstrates a viable approach for research assessment of a center or program for which random assignment of control groups is not possible. It illustrates the use of bibliometric indicators to inform R&D program management. Originality/value: New visualizations of researcher collaboration provide compelling comparisons of the extent and nature of social networking among target cohorts.
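To make the interdisciplinarity measure concrete, the following is a minimal sketch of how an Integration score of the Rao-Stirling type is typically computed from a paper's cited references; the category shares, similarity values, and function name are illustrative assumptions, not data or code from the study.

```python
import numpy as np

def integration_score(proportions, similarity):
    """Rao-Stirling-style Integration score: 1 - sum_ij s_ij * p_i * p_j.

    proportions: share of a paper's cited references in each subject category
                 (a 1-D array summing to 1).
    similarity:  square matrix of pairwise category similarities (e.g. cosine
                 similarity of citing patterns), with 1.0 on the diagonal.
    """
    p = np.asarray(proportions, dtype=float)
    s = np.asarray(similarity, dtype=float)
    return 1.0 - float(p @ s @ p)

# Illustrative (made-up) example: a paper whose references fall in three categories.
p = np.array([0.5, 0.3, 0.2])              # shares of cited references
s = np.array([[1.0, 0.6, 0.1],             # hypothetical category similarities
              [0.6, 1.0, 0.2],
              [0.1, 0.2, 1.0]])
print(round(integration_score(p, s), 3))   # larger values = more interdisciplinary
```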
Asian leaf-litter toads of the genus Leptobrachella represent a major anuran diversification in Asia. Previous studies have suggested that the diversity of this genus is still underestimated. During herpetological surveys from 2013 to 2018, a series of Leptobrachella specimens were collected from the international border areas in the southern and western parts of Yunnan Province, China. Subsequent analyses based on morphological and molecular data revealed three distinct and previously unknown lineages, which we formally describe as three new species herein. Among them, we describe a new species that occurs at the highest known elevation for Leptobrachella in China. Four species of Leptobrachella, including two of the new species, are found in the same reserve. Furthermore, our results suggest that the population from Longchuan County, Yunnan, may represent an additional new species of Leptobrachella, although we tentatively assign it to Leptobrachella cf. yingjiangensis because of the small sample size examined. Lastly, we provide the first description of females of L. yingjiangensis. Our results further highlight that both micro-endemism and sympatric distributions are common patterns in Leptobrachella, which contribute to taxonomic and conservation challenges in these frogs. We provide an identification key to the Leptobrachella species known to occur in Yunnan. Given the lack of knowledge on species diversity of Leptobrachella along international border areas, we recommend that future studies include transboundary collaborative surveys.
Purpose: The purpose of this study is to modernize previous work on science overlay maps by updating the underlying citation matrix, generating new clusters of scientific disciplines, enhancing visualizations, and providing more accessible means for analysts to generate their own maps. Design/methodology/approach: We use the combined set of 2015 Journal Citation Reports for the Science Citation Index (n = 8,778 journals) and the Social Sciences Citation Index (n = 3,212), for a total of 11,365 journals (titles indexed in both databases are counted once). The set of Web of Science Categories in the Science Citation Index and the Social Sciences Citation Index increased from 224 in 2010 to 227 in 2015. Using dedicated software, a matrix of 227 × 227 cells is generated on the basis of whole-number citation counting. We normalize this matrix using the cosine function. We first develop the citing-side, cosine-normalized map using 2015 data and VOSviewer visualization with default parameter values. A routine for making overlays on the basis of the map ("wc 15.exe") is available at http://www.leydesdorff.net/wc 15/index.htm. Findings: Findings appear in the form of visuals throughout the manuscript. In Figures 1-9 we provide basemaps of science and science overlay maps for a number of companies, universities, and technologies. Research limitations: As Web of Science Categories change and/or are updated, so does the need to update the routine we provide. Also, to apply the routine, users need access to the Web of Science. Practical implications: Visualization of science overlay maps is now more accurate and true to the 2015 Journal Citation Reports than was the case with the previous version of the routine. Originality/value: The routine we advance allows users to visualize science overlay maps in VOSviewer using data from more recent Journal Citation Reports.
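As a concrete illustration of the normalization step described above, here is a minimal sketch assuming a category-by-category matrix of whole-number citation counts; the toy 3 × 3 matrix merely stands in for the 227 × 227 Web of Science Category matrix and is not the study's data.

```python
import numpy as np

def cosine_normalize(citation_matrix):
    """Cosine-normalize a category-by-category citation matrix.

    Cell [i, j] of the input holds whole-number counts of citations from
    category i to category j; cell [i, j] of the output is the cosine
    similarity between the citing profiles (rows) of categories i and j.
    """
    m = np.asarray(citation_matrix, dtype=float)
    norms = np.linalg.norm(m, axis=1, keepdims=True)
    norms[norms == 0] = 1.0                  # guard against empty rows
    unit_rows = m / norms
    return unit_rows @ unit_rows.T           # pairwise cosine similarities

# Toy stand-in for the 227 x 227 Web of Science Category matrix.
toy = np.array([[120,  30,   5],
                [ 25, 200,  10],
                [  2,  12,  80]])
print(np.round(cosine_normalize(toy), 2))
```

A similarity matrix of this form is what a tool such as VOSviewer then lays out as the basemap onto which individual portfolios are overlaid.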
Purpose: The ability to identify the scholarship of individual authors is essential for performance evaluation. A number of factors hinder this endeavor. Common and similarly spelled surnames make it difficult to isolate the scholarship of individual authors indexed in large databases. Variations in the spelling of individual scholars' names further complicate matters. Common family names in scientific powerhouses like China make it problematic to distinguish between authors possessing ubiquitous and/or anglicized surnames (as well as the same or similar first names). The assignment of unique author identifiers provides a major step toward resolving these difficulties. We maintain, however, that in and of themselves, author identifiers are not sufficient to fully address the author uncertainty problem. In this study we build on the author identifier approach by considering commonalities in fielded data between authors sharing the same surname and first initial. We illustrate our approach using three case studies. Design/methodology/approach: The approach we advance in this study is based on commonalities among fielded data in search results. We cast a broad initial net—i.e., a Web of Science (WOS) search for a given author's last name, followed by a comma, followed by the first initial of his or her first name (e.g., a search for ‘John Doe' would assume the form ‘Doe, J'). Results for this search typically contain all of the scholarship legitimately belonging to this author in the given database (i.e., all of his or her true positives), along with a large amount of noise, or scholarship not belonging to this author (i.e., a large number of false positives). From this corpus we proceed to iteratively weed out false positives and retain true positives. Author identifiers provide a good starting point—e.g., if ‘Doe, J' and ‘Doe, John' share the same author identifier, this is sufficient for us to conclude that they are one and the same individual. We find email addresses similarly adequate—e.g., if two author names that share the same surname and first initial have an email address in common, we conclude these authors are the same person. Author identifier and email address data are not always available, however. When this occurs, other fields are used to address the author uncertainty problem. Commonalities among author data other than unique identifiers and email addresses are less conclusive for name consolidation purposes. For example, if ‘Doe, John' and ‘Doe, J' have an affiliation in common, do we conclude that these names belong to the same person? They may or may not; an affiliation can employ two or more faculty members who share the same surname and first initial. Similarly, it is conceivable that two individuals with the same surname and first initial publish in the same journal, publish with the same co-authors, and/or cite the same references. Should we then ignore commonalities among these fields and conclude they are too imprecise for name consolidation purposes?
It is our position that such commonalities are indeed valuable for addressing the author uncertainty problem, but more so when used in combination. Our approach makes use of automation as well as manual inspection, relying initially on author identifiers, then on commonalities among fielded data other than author identifiers, and finally on manual verification. To achieve name consolidation independent of author identifier matches, we have developed a procedure that is used with bibliometric software called VantagePoint (see www.thevantagepoint.com). While the application of our technique does not exclusively depend on VantagePoint, it is the software we found most efficient in this study. The script we developed is designed to carry out our name disambiguation procedure in a way that significantly reduces manual effort on the user's part. Those who seek to replicate our procedure independent of VantagePoint can do so by manually following the method we outline, but we note that manual application of the procedure takes a significant amount of time and effort, especially when working with larger datasets. Our script begins by prompting the user for a surname and a first initial (for any author of interest). It then prompts the user to select a WOS field on which to consolidate author names. After this the user is prompted to point to the name of the authors field, and finally asked to identify a specific author name (referred to by the script as the primary author) within this field whom the user knows to be a true positive (a suggested approach is to point to an author name associated with one of the records that has the author's ORCID iD or email address attached to it). The script proceeds to identify and combine all author names sharing the primary author's surname and first initial that also share commonalities in the WOS field on which the user chose to consolidate author names. This typically results in a significant reduction in the initial dataset size. After the procedure completes, the user is usually left with a much smaller (and more manageable) dataset to manually inspect (and/or apply additional name disambiguation techniques to). Research limitations: Match field coverage can be an issue. When field coverage is paltry, dataset reduction is less significant, which results in more manual inspection on the user's part. Our procedure does not lend itself to scholars who have had a legal family name change (after marriage, for example). Moreover, the technique we advance is sometimes (but not always) likely to have a difficult time dealing with scholars who have changed careers or fields dramatically, as well as scholars whose work is highly interdisciplinary. Practical implications: The procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research, especially when the name under consideration is a more common family name. It is more effective when match field coverage is high and a number of match fields exist. Originality/value: Once again, the procedure we advance can save a significant amount of time and effort for individuals engaged in name disambiguation research. It combines preexisting approaches with more recent ones, harnessing the benefits of both. Findings: Our study applies the name disambiguation procedure we advance to three case studies. Ideal match fields are not the same for each of our case studies.
We find that match field effectiveness is in large part a function of field coverage. The case studies also differ in original dataset size, in the timeframe analyzed, and in the subject areas in which the authors publish. Our procedure is most effective when applied to our third case study, both in terms of list reduction and 100% retention of true positives. We attribute this to excellent match field coverage, especially in the more specific match fields, as well as to a more modest/manageable number of publications. While machine learning is considered authoritative by many, we do not see it as practical or replicable. The procedure advanced herein is practical, replicable, and relatively user-friendly. It might be categorized into a space between ORCID and machine learning. Machine learning approaches typically look for commonalities among citation data, which are not always available, structured, or easy to work with. The procedure we advance is intended to be applied across numerous fields in a dataset of interest (e.g., emails, co-authors, affiliations, etc.), resulting in multiple rounds of reduction. Results indicate that effective match fields include author identifiers, emails, source titles, co-authors, and ISSNs. While the script we present is not likely to result in a dataset consisting solely of true positives (at least for more common surnames), it does significantly reduce manual effort on the user's part. Dataset reduction (after our procedure is applied) is in large part a function of (a) field availability and (b) field coverage.
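To make the field-matching logic of the procedure concrete, here is a minimal Python sketch of a single consolidation pass under stated assumptions: the record structure, field names, and example values are hypothetical, and this is not the VantagePoint script itself, only the underlying idea of merging name variants that share a value in a chosen match field with a known true-positive record.

```python
def consolidate(records, primary_index, match_field):
    """Retain records whose match_field values overlap with values already
    linked to the primary (known true-positive) author, growing the linked
    set iteratively until no further records join."""
    linked_values = set(records[primary_index].get(match_field, []))
    retained = {primary_index}
    changed = True
    while changed:
        changed = False
        for i, rec in enumerate(records):
            if i in retained:
                continue
            values = set(rec.get(match_field, []))
            if values & linked_values:       # shared email / identifier / source, etc.
                retained.add(i)
                linked_values |= values
                changed = True
    return sorted(retained)

# Hypothetical records returned by a 'Doe, J' search:
records = [
    {"email": ["jdoe@utah.edu"], "orcid": ["0000-0001-0000-0000"]},  # known true positive
    {"email": ["jdoe@utah.edu"]},                                    # shares the email -> merged
    {"email": ["j.doe@other.org"]},                                  # no overlap -> left for manual inspection
]
print(consolidate(records, primary_index=0, match_field="email"))   # -> [0, 1]
```

In practice such a pass would be repeated across several match fields (author identifiers, emails, source titles, co-authors, ISSNs), with each round shrinking the set left for manual inspection.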
Listeriosis is caused by Listeria monocytogenes (LM) and is currently considered one of the leading food-borne diseases worldwide, with a mortality rate of 20%-30%. Existing detection methods for LM are time-consuming, have low sensitivity, and deliver delayed results. SYTO9 has a high affinity for DNA and exhibits enhanced fluorescence upon binding. Therefore, this study used SYTO9 staining and image processing to develop a rapid loop-mediated isothermal amplification (LAMP) detection method for LM. A smartphone was successfully used to detect the color change across different concentrations of LM. The optimal LAMP reaction temperature determined by color identification was 63 °C, and the limit of detection for LM was 6 copies/μL in the green channel. The developed image-processing-based method is therefore simple, sensitive, and rapid, and it provides a new approach for the rapid detection of LM and other food-borne bacterial pathogens.
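The following is a minimal sketch, assuming the general approach described (smartphone images of LAMP reactions analyzed in the green channel); the file names and the fold-change threshold are hypothetical placeholders, not values from the study.

```python
import numpy as np
from PIL import Image

def mean_green(path, crop_box=None):
    """Mean green-channel intensity of an RGB image (optionally cropped to the tube region)."""
    img = Image.open(path).convert("RGB")
    if crop_box is not None:
        img = img.crop(crop_box)             # (left, upper, right, lower)
    return float(np.asarray(img)[:, :, 1].mean())

sample = mean_green("reaction_tube.jpg")         # hypothetical sample image
control = mean_green("negative_control.jpg")     # hypothetical negative-control image
threshold = 1.2                                  # assumed fold-change cut-off, not from the paper
print("LM positive" if sample / control >= threshold else "LM negative")
```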
Learning with coefficient-based regularization has attracted a considerable amount of attention in recent years, in both theoretical analysis and applications. In this paper, we study a coefficient-based learning scheme (CBLS) for the regression problem with an ℓq-regularizer (1 < q ≤ 2). Our analysis is conducted under more general conditions; in particular, the kernel function is not necessarily positive definite. This paper applies a concentration inequality with ℓ2-empirical covering numbers to present an elaborate capacity-dependence analysis for CBLS, which yields sharper estimates than existing bounds. Moreover, we estimate the regularization error to support our assumptions in the error analysis, and we also provide an illustrative example to further verify the theoretical results.
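For readers unfamiliar with coefficient-based regularization, the following block sketches the standard form of such a scheme with an ℓq penalty; the notation and the exact scaling of the regularization parameter are assumptions of this sketch rather than the paper's precise formulation.

```latex
% Sketch of a coefficient-based learning scheme (CBLS) with an \ell_q penalty.
% Sample z = \{(x_i, y_i)\}_{i=1}^m; the kernel K need not be positive definite.
\[
  f_z = \sum_{i=1}^{m} \alpha_i^{z}\, K(x_i, \cdot), \qquad
  \alpha^{z} = \arg\min_{\alpha \in \mathbb{R}^{m}}
  \left\{ \frac{1}{m} \sum_{i=1}^{m}
     \Bigl( y_i - \sum_{j=1}^{m} \alpha_j K(x_j, x_i) \Bigr)^{2}
     + \lambda \sum_{j=1}^{m} \lvert \alpha_j \rvert^{q} \right\},
  \qquad 1 < q \le 2 .
\]
```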
Funding: The five-year "innovative Urban Transitions and Aridregion Hydro-sustainability" (iUTAH) project was initiated in 2012 with support from the US National Science Foundation's (NSF) "Established Program to Stimulate Competitive Research" (EPSCoR, award # OIA-1208732).
Funding: The National Natural Science Foundation of China (31900323 to J.M.C., 31622052 to J.C.); the Southeast Asia Biodiversity Research Institute, Chinese Academy of Sciences (CAS) (Y4ZK111B01:2017CASSEABRIQG002); the Nanjing Institute of Environmental Sciences, Ministry of Environmental Protection of China, and the Animal Branch of the Germplasm Bank of Wild Species, CAS (Large Research Infrastructure Funding) to J.C.; the Russian Science Foundation (19-14-00050) to N.A.P.; the Biodiversity Investigation, Observation and Assessment Program (2019-2023) of the Ministry of Ecology and Environment of China to Z.Y.Y.; and the Unit of Excellence 2020 on Biodiversity and Natural Resources Management, University of Phayao, to C.S.
Funding: Support from the US National Science Foundation under Award 1645237.
Funding: Supported by grants from the National Natural Science Foundation of China (Nos. 61901168, 82002405, 81902153); the Zhuzhou Innovative City Construction Project (No. 2020-020); and the China Postdoctoral Science Foundation (No. 2018M630498).
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11226111 and 71171166).