Advancing Scientific Rigor: Towards a Universal Informational Criterion for Assessing Model-Phenomenon Mismatch

Abstract: The escalating costs of research and development, coupled with the influx of researchers, have led to a surge in published articles across scientific disciplines. However, concerns have arisen regarding the accuracy, validity, and reproducibility of reported findings. Issues such as replication failures, fraudulent practices, and a lack of expertise in measurement theory and uncertainty analysis have cast doubt on the reliability and credibility of scientific research. Rigorous assessment practices in certain fields highlight the importance of identifying potential errors and understanding the relationship between technical parameters and research outcomes. To address these concerns, a universally applicable criterion called comparative certainty is urgently needed. This criterion, grounded in an analysis of the modeling process and of the transmission, accumulation, and transformation of information in both theoretical and applied research, aims to evaluate the acceptable deviation between a model and the observed phenomenon. It provides a theoretically grounded framework applicable to all scientific disciplines adhering to the International System of Units (SI). Objective evaluations based on this criterion can enhance the reproducibility and reliability of scientific investigations, instilling greater confidence in published findings. Establishing this criterion would be a significant stride towards ensuring the robustness and credibility of scientific research across disciplines.
Author: Boris Menin (Independent Researcher, Beer-Sheva, Israel)
Affiliation: Independent Researcher
Published in: Journal of Applied Mathematics and Physics, 2023, No. 7, pp. 1817-1836 (20 pages)
Keywords: Amount of Information; Information Channel; Measurement; Model; System of Units; Uncertainty; Underwater Electrical Discharge