Journal Articles
8 articles found
1. Methodological quality (risk of bias) assessment tools for primary and secondary medical studies: What are they and which is better? (Cited by: 32)
Authors: Lin-Lu Ma, Yun-Yun Wang, Zhi-Hua Yang, Di Huang, Hong Weng, Xian-Tao Zeng. Military Medical Research (SCIE, CAS, CSCD), 2020, Issue 3, pp. 359-370 (12 pages).
Methodological quality (risk of bias) assessment is an important step before study initiation and usage. Therefore, accurately judging the study type is the first priority, and choosing the proper tool is also important. In this review, we introduced methodological quality assessment tools for randomized controlled trials (including individual and cluster), animal studies, non-randomized interventional studies (including follow-up studies, controlled before-and-after studies, before-after/pre-post studies, uncontrolled longitudinal studies, and interrupted time series studies), cohort studies, case-control studies, cross-sectional studies (including analytical and descriptive), observational case series and case reports, comparative effectiveness research, diagnostic studies, health economic evaluations, prediction studies (including predictor finding studies, prediction model impact studies, and prognostic prediction model studies), qualitative studies, outcome measurement instruments (including patient-reported outcome measure development, content validity, structural validity, internal consistency, cross-cultural validity/measurement invariance, reliability, measurement error, criterion validity, hypotheses testing for construct validity, and responsiveness), systematic reviews and meta-analyses, and clinical practice guidelines. Readers of this review can distinguish the types of medical studies and choose appropriate tools. In short, comprehensively mastering the relevant knowledge and implementing more practice are basic requirements for correctly assessing methodological quality.
Keywords: methodological quality; risk of bias; quality assessment; critical appraisal; methodology checklist; appraisal tool; observational study; qualitative study; interventional study; outcome measurement instrument
2. Methodological quality assessment of meta-analyses of cognitive interventions among Alzheimer's disease
Authors: Guang-Hong Han, Xiao-Li Pang, Wei Wang, Hui-Li Sun. TMR Integrative Nursing, 2021, Issue 5, pp. 170-173 (4 pages).
Aims: This study was designed to assess the methodological quality of meta-analyses of cognitive interventions in Alzheimer's disease and to investigate their compliance with the 16 AMSTAR 2 items. Method: We searched Web of Science, SinoMed, PubMed, Embase, and the Cochrane Library from 2016 to 2021 to retrieve meta-analyses of cognitive interventions in Alzheimer's disease. AMSTAR 2 was used to assess methodological quality, and compliance with each AMSTAR 2 item was also explored. Results: Nine studies were included. Of them, 6 articles were rated as "extremely low", 2 as "low", and 1 as "high". The reporting rates for the 16 AMSTAR 2 items ranged from 22.22% to 100%. Conclusion: The methodological quality of meta-analyses of cognitive interventions in Alzheimer's disease is not ideal, and there is still room for improvement. Future studies should explore the relevant factors that may influence methodological quality.
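The ratings above follow the AMSTAR 2 scheme, in which overall confidence is derived from the number of critical and non-critical weaknesses rather than a numeric score. The sketch below illustrates that published rating logic (Shea et al., 2017); the critical-item set shown is the tool's default rather than anything reported in this abstract, and AMSTAR 2 itself labels the lowest category "critically low" (rendered as "extremely low" above).

```python
# A minimal sketch of the published AMSTAR 2 rating logic (Shea et al., 2017):
# overall confidence depends on how many weaknesses fall in critical domains.
# The critical-item set is the tool's default; reviewers may adapt it.

CRITICAL_ITEMS = {2, 4, 7, 9, 11, 13, 15}  # protocol, search, excluded-study list,
                                           # risk of bias, meta-analysis methods,
                                           # RoB in interpretation, publication bias

def amstar2_rating(weak_items: set) -> str:
    """Map the set of AMSTAR 2 items judged inadequate to an overall confidence rating."""
    critical = len(weak_items & CRITICAL_ITEMS)
    non_critical = len(weak_items - CRITICAL_ITEMS)
    if critical > 1:
        return "critically low"
    if critical == 1:
        return "low"
    if non_critical > 1:
        return "moderate"
    return "high"

# Example: flaws in items 2 and 7 (both critical) plus item 16 (non-critical)
print(amstar2_rating({2, 7, 16}))  # -> critically low
```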
Keywords: Alzheimer's disease; AMSTAR; methodological quality; meta-analyses; cognitive interventions
3. Methodological quality evaluation of systematic reviews of music therapy for Alzheimer's disease in the recent five years
Authors: Guang-Hong Han, Xiao-Li Pang, Wei Wang, Hui-Li Sun. Aging Communications, 2022, Issue 1, pp. 1-5 (5 pages).
Aims: This study aims to investigate the methodological quality of systematic reviews of music therapy for Alzheimer's disease published in the past five years and to explore their compliance with each AMSTAR (A Measurement Tool to Assess Systematic Reviews) item, in order to facilitate the translation of evidence on music therapy in Alzheimer's disease. Method: The Cochrane Library, Web of Science, Embase, and PubMed were searched from 2017 to 2021 to obtain systematic reviews of music therapy in Alzheimer's disease. AMSTAR was used to evaluate their methodological quality, and compliance with the 16 AMSTAR items was investigated. Results: Twelve systematic reviews were included. The methodological quality of 10 articles was "very low", 1 was "low", and 1 was "high". Compliance with the 16 AMSTAR items ranged from 25% to 100%. Conclusion: The methodological quality of systematic reviews of music therapy in Alzheimer's disease over the recent five years is not high and needs further improvement. Future research should continue to explore the factors that affect methodological quality, to promote the translation of evidence into practice.
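The per-item compliance figures reported in these appraisals (for example, the 25%-100% range above) are simple proportions: the share of included reviews judged to satisfy each checklist item. A minimal sketch of that tabulation follows; the 0/1 rating matrix is randomly generated for illustration and is not the authors' data.

```python
# Tabulating per-item compliance as the share of included reviews meeting each
# item. The 0/1 rating matrix below is randomly generated for illustration; it
# is not the authors' data.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(12, 16))  # rows: 12 reviews, columns: 16 AMSTAR items

compliance = ratings.mean(axis=0) * 100      # % of reviews satisfying each item
for item, rate in enumerate(compliance, start=1):
    print(f"Item {item:2d}: {rate:5.1f}% compliance")
```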
Keywords: Alzheimer's disease; methodological quality; music therapy; AMSTAR; systematic review
4. The need to develop tailored tools for improving the quality of thematic bibliometric analyses: Evidence from papers published in Sustainability and Scientometrics
Authors: Alvaro Cabezas-Clavijo, Yusnelkis Milanés-Guisado, Ruben Alba-Ruiz, Ángel M. Delgado-Vázquez. Journal of Data and Information Science (CSCD), 2023, Issue 4, pp. 10-35 (26 pages).
Purpose: The aim of this article is to explore up to seven parameters related to the methodological quality and reproducibility of thematic bibliometric research published in the two most productive journals in bibliometrics: Sustainability (a journal outside the discipline) and Scientometrics, the flagship journal in the field. Design/methodology/approach: The study identifies the need for developing tailored tools for improving the quality of thematic bibliometric analyses and presents a framework that can guide the development of such tools. A total of 508 papers are analysed, 77% published in Sustainability and 23% in Scientometrics, for the 2019-2021 period. Findings: An average of 2.6 shortcomings per paper was found for the whole sample, with an almost identical number of flaws in both journals. Sustainability has more flaws than Scientometrics in four of the seven parameters studied, while Scientometrics has more shortcomings in the remaining three. Research limitations: The first limitation of this work is that it studies only two scientific journals, so the results cannot be directly extrapolated to all thematic bibliometric analyses published in journals across fields. Practical implications: We propose the adoption of protocols, guidelines, and other similar tools, adapted to bibliometric practice, which could increase the thoroughness, transparency, and reproducibility of this type of research. Originality/value: These results show considerable room for improvement in the adequate use and detailed reporting of methodological procedures in thematic bibliometric research, both in journals in the Information Science area and in journals outside the discipline.
Keywords: thematic bibliometric analyses; Sustainability; Scientometrics; reproducibility; methodological quality
5. Unsatisfied methodological qualities assessment of systematic reviews/Meta-analyses on Chinese medicine for stroke and their risk factors
Authors: Jia-Ying Wang, Nan Li, Jun-Feng Wang, Ming-Hui Wang. Medical Data Mining, 2021, Issue 1, pp. 1-9 (9 pages).
Background: Stroke is not only high in morbidity and mortality but also poses a great burden of disease, and it is the most frequently reported disease in Chinese medicine systematic reviews. The quality of such evidence therefore cannot be ignored. This study aims to use A Measurement Tool to Assess Systematic Reviews (AMSTAR) to assess the methodological quality of systematic reviews/Meta-analyses of Chinese medicine on stroke. Methods: Seven electronic databases and the PROSPERO registration platform were searched systematically. After training, two researchers independently selected studies, extracted bibliographical characteristics, and scored every included study. The total score and the proportion of studies completing each item were compared across subgroups. Spearman rank correlation and multivariable logistic regression were used to measure the association between bibliographical characteristics and the total score or each item. Results: The average AMSTAR 1.0 total score of the 234 systematic reviews/Meta-analyses was 4.47 (95% CI 4.27-4.68), and quality was unsatisfactory, especially for a priori design (2.14%), inclusion of grey literature (5.13%), providing a list of excluded studies (2.14%), and conflict of interest (0.00%). No improvement was found over 3 years, even after the publication of AMSTAR. Chinese-language or non-registered systematic reviews/Meta-analyses showed even worse methodological quality (P < 0.01). Positive correlations were found between individual items and the number of pages, number of authors, research questions, language, or the presence of a Meta-analysis, respectively (P < 0.05). Conclusion: The methodological quality of systematic reviews/Meta-analyses of Chinese medicine on stroke is poor, especially for Chinese-language studies, non-registered studies, brief studies, and studies without a Meta-analysis or cooperation. There has been no obvious improvement over the years, even after the publication of the AMSTAR tool, so it is urgent to promote the use of AMSTAR or to develop other efficient methods to control the quantity and monitor the quality of such reviews in the future.
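As a rough illustration of the association analyses described above (Spearman rank correlation between bibliographical characteristics and the AMSTAR total score, and logistic regression for individual item completion), here is a minimal sketch. The DataFrame, column names, and values are hypothetical, and a regularized scikit-learn logistic regression stands in for the multivariable model reported by the authors.

```python
# Hypothetical sketch: Spearman rank correlation with the AMSTAR total score,
# and a (regularized) multivariable logistic regression for one item's
# completion. All values and column names are invented for illustration.
import pandas as pd
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression

df = pd.DataFrame({
    "total_score": [3, 5, 4, 6, 2, 7, 5, 4, 6, 3],    # AMSTAR 1.0 total (0-11)
    "n_pages":     [6, 9, 8, 12, 5, 14, 10, 7, 11, 8],
    "n_authors":   [3, 5, 4, 7, 2, 8, 6, 4, 5, 3],
    "item4_met":   [0, 1, 1, 1, 0, 1, 0, 0, 1, 1],    # e.g., grey literature searched
})

# Spearman rank correlation: article length vs. total methodological score
rho, p = spearmanr(df["n_pages"], df["total_score"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Multivariable logistic regression: which characteristics predict item completion
model = LogisticRegression().fit(df[["n_pages", "n_authors"]], df["item4_met"])
print(dict(zip(["n_pages", "n_authors"], model.coef_[0])))
```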
Keywords: stroke; Meta-analysis; medicine, Chinese traditional; methodology quality
6. Clinical practice guidelines for traditional Chinese medicine and integrated traditional Chinese and western medicine: a cross-sectional study of data analysis from 2010 to 2020
Authors: Jie Zhou, Jing Guo, Jia-Ying Wang, Qiao Huang, Rong Zhang, Zheng-Rong Zhao, Hong-Jie Xia, Xiang-Ying Ren, Yi-Bei Si, Jian-Peng Liao, Ying-Hui Jin, Hong-Cai Shang. TMR Modern Herbal Medicine (CAS), 2022, Issue 1, pp. 20-38 (19 pages).
Objective: With the increasing publication of clinical practice guidelines (CPGs) for Traditional Chinese Medicine (TCM) and Integrated Traditional Chinese and Western Medicine (IM), the standardization and scientific rigor of their formulation have gradually attracted attention. This study offers an overview of TCM and IM CPGs published over the past decade and analyzes their general characteristics and methodological quality. Methods: The China National Knowledge Infrastructure (CNKI) and WANFANG databases were searched for clinical practice guidelines and expert consensus papers from January 2010 to June 2021. Two researchers independently completed the literature screening and cross-checking according to the inclusion and exclusion criteria and extracted information on the general characteristics and methodological quality of the CPGs. Results: According to the selection criteria, 231 CPGs (119 evidence-based, EB-CPGs; 112 consensus-based, CB-CPGs) were selected, and the number of CPGs published over the 11 years showed an overall upward trend. The vast majority of CPGs used the Western naming system for diseases, and only 11 CPGs were named after TCM diseases or symptoms. TCM treatments were recommended in 223 CPGs. A total of 156 ancient Chinese medicine literature sources were cited in the 231 CPGs, and the opinions and experiences of 62 TCM experts were cited in 37 CPGs. The methodological quality of EB-CPGs for TCM and IM was significantly better than that of CB-CPGs on 11 items. Only 60 EB-CPGs and 7 CB-CPGs designated clear criteria for grading the quality of evidence and the strength of recommendations, and 74 CPGs presented both the level of evidence and the strength of recommendations. When all CPGs were classified according to whether they used GRADE, the CPGs using GRADE had higher methodological quality and more standardized reporting. Conclusion: The quantity and quality of CPGs in both TCM and IM have improved over the time span, but the methodological quality, especially evidence citation and the use of criteria for grading the quality of evidence and the strength of recommendations, still needs further improvement.
Keywords: evidence-based CPG; consensus-based CPG; traditional Chinese medicine; integrated traditional Chinese and western medicine; methodological quality
7. Delphi methodology in healthcare research: How to decide its appropriateness (Cited by: 23)
Authors: Prashant Nasa, Ravi Jain, Deven Juneja. World Journal of Methodology, 2021, Issue 4, pp. 116-129 (14 pages).
The Delphi technique is a systematic process of forecasting using the collective opinion of panel members. The structured method of developing consensus among panel members using Delphi methodology has gained acceptance in diverse fields of medicine. Delphi methods have assumed a pivotal role in recent decades in developing best practice guidance using collective intelligence where research is limited, ethically or logistically difficult, or the evidence is conflicting. However, attempts to assess the quality standard of Delphi studies have reported significant variance, and details of the process followed are usually unclear. We recommend systematic quality tools for the evaluation of Delphi methodology: identification of the problem area of research, selection of the panel, anonymity of panelists, controlled feedback, iterative Delphi rounds, consensus criteria, analysis of consensus, closing criteria, and stability of the results. Based on these nine qualitative evaluation points, we assessed the quality of Delphi studies in the medical field related to coronavirus disease 2019. There was inconsistency in reporting vital elements of the Delphi methods, such as the identification of panel members, definition of consensus, closing criteria for rounds, and presentation of the results. We propose these evaluation points for researchers, medical journal editorial boards, and reviewers to evaluate the quality of Delphi methods in healthcare research.
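The nine evaluation points listed above lend themselves to a simple reporting checklist. A minimal sketch is below; the checklist wording is taken from the abstract, while the appraisal function and the example study are hypothetical.

```python
# The nine evaluation points from the abstract, used as a simple reporting
# checklist. The appraisal function and the example study are hypothetical.
DELPHI_CHECKLIST = [
    "identification of problem area of research",
    "selection of panel",
    "anonymity of panelists",
    "controlled feedback",
    "iterative Delphi rounds",
    "consensus criteria",
    "analysis of consensus",
    "closing criteria",
    "stability of the results",
]

def appraise(reported):
    """Count how many of the nine points a Delphi study reports."""
    met = [point for point in DELPHI_CHECKLIST if point in reported]
    missing = [point for point in DELPHI_CHECKLIST if point not in reported]
    return f"{len(met)}/{len(DELPHI_CHECKLIST)} points reported; missing: {missing}"

# Hypothetical COVID-19 Delphi study reporting only four of the nine points
print(appraise({"selection of panel", "anonymity of panelists",
                "iterative Delphi rounds", "consensus criteria"}))
```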
Keywords: Delphi studies; quality tools for methodology; research methods; Delphi technique; consensus; expert panel; coronavirus disease 2019; SARS-CoV-2
8. Investigation and evaluation of randomized controlled trials for interventions involving artificial intelligence
Authors: Jianjian Wang, Shouyuan Wu, Qiangqiang Guo, Hui Lan, Janne Estill, Ling Wang, Juanjuan Zhang, Qi Wang, Yang Song, Nan Yang, Xufei Luo, Qi Zhou, Qianling Shi, Xuan Yu, Yanfang Ma, Joseph L. Mathew, Hyeong Sik Ahn, Myeong Soo Lee, Yaolong Chen. Intelligent Medicine, 2021, Issue 2, pp. 61-69 (9 pages).
Objective: Complete and transparent reporting is of critical importance for randomized controlled trials (RCTs). This study aimed to determine the reporting quality and methodological quality of RCTs for interventions involving artificial intelligence (AI) and of their protocols. Methods: We searched MEDLINE (via PubMed), Embase, Web of Science, CBMdisc, Wanfang Data, and CNKI from January 1, 2016, to November 11, 2020, to collect RCTs involving AI, and retrieved the protocol of each included RCT where available. The CONSORT-AI (Consolidated Standards of Reporting Trials-Artificial Intelligence) statement and the Cochrane Collaboration's tool for assessing risk of bias (ROB) were used to evaluate reporting quality and methodological quality, respectively, and the SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence) statement was used to evaluate the reporting quality of the protocols. The associations of the CONSORT-AI reporting rate with publication year, journal impact factor (IF), number of authors, sample size, and first author's country were analyzed univariately using Pearson's chi-squared test, or Fisher's exact test if the expected values in any of the cells were below 5. The compliance of the retrieved protocols with SPIRIT-AI was presented descriptively. Results: Overall, 29 RCTs and three protocols were eligible. The CONSORT-AI items "title and abstract" and "interpretation of results" were reported by all RCTs, while the items with the lowest reporting rates were "funding" (0), "implementation" (3.5%), and "harms" (3.5%). The risk of bias was high in 13 (44.8%) RCTs and unclear in 15 (51.7%) RCTs; only one RCT (3.5%) had a low risk of bias. Compliance did not differ significantly by publication year, journal IF, number of authors, sample size, or first author's country. Ten of the 35 SPIRIT-AI items (funding, participant timeline, allocation concealment mechanism, implementation, data management, auditing, declaration of interests, access to data, informed consent materials, and biological specimens) were not reported by any of the three protocols. Conclusions: The reporting and methodological quality of RCTs involving AI need to be improved. Because of the limited availability of protocols, their quality could not be fully judged. Following the CONSORT-AI and SPIRIT-AI statements, together with appropriate attention to the risk of bias when designing and reporting AI-related RCTs, can promote standardization and transparency.
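The univariate test selection described above (Pearson's chi-squared test, with Fisher's exact test when any expected cell count is below 5) can be sketched as follows; the 2x2 table, pairing a dichotomized study characteristic with adequate versus inadequate reporting of a CONSORT-AI item, is hypothetical.

```python
# Sketch of the univariate test selection: Pearson's chi-squared test, with a
# fall-back to Fisher's exact test when any expected cell count is below 5.
# The 2x2 table (e.g., high vs. low journal IF against adequate vs. inadequate
# reporting of a CONSORT-AI item) is hypothetical.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

table = np.array([[8, 3],
                  [5, 13]])

chi2, p, dof, expected = chi2_contingency(table)
if (expected < 5).any():
    _, p = fisher_exact(table)           # small expected counts: exact test
    print(f"Fisher's exact test: p = {p:.3f}")
else:
    print(f"Chi-squared test: chi2 = {chi2:.2f}, p = {p:.3f}")
```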
Keywords: artificial intelligence; randomized controlled trials; reporting quality; methodological quality