Abstract: The new World Health Organization (WHO) Manual for Semen Analysis contains several improvements. One is that the 20 million spermatozoa per mL paradigm has been ousted in favour of proper calculations of lower reference limits for semen from men whose partners had a time-to-pregnancy of 12 months or less. The recommendation to grade progressive motility as described in the third and fourth editions of the WHO manual was not evidence-based, and WHO was therefore motivated to abandon it. However, the new recommendation is not evidence-based either, and it is difficult to understand the rationale for the new assessment. It may have been a compromise to avoid returning to the rather robust system recommended in the first edition (1980). The unconditional recommendation of the 'Tygerberg strict criteria' is not evidence-based, and seems to be the result of an unfortunate bias in the composition of the Committee in favour of individuals known to support the 'strict criteria' method. This recommendation will have negative effects on the development of andrology as a scientific field. Given the importance of the WHO manual, it is unfortunate that the recommendations for such important variables as motility and morphology lack evidence-based support.
Funding: Institute for Safety (IFV) of the Netherlands and the Swedish Civil Contingencies Agency (MSB).
Abstract: Evaluations of simulated disasters (for example, exercises) and of real responses are important activities. However, little attention has been paid to how reports documenting such events should be written. A key issue is how to make them as useful as possible to professionals working in disaster risk management. Here, we focus on three aspects of a written evaluation: how the object of the evaluation is described, how the analysis is described, and how the conclusions are described. This empirical experiment, based on real evaluation documents, asked 84 Dutch mayors and crisis management professionals to evaluate the perceived usefulness of the three aspects noted above. The results showed that how evaluations are written does matter. Specifically, the usefulness of an evaluation intended for learning purposes is improved when its analysis and conclusions are clearer. In contrast, evaluations used for accountability purposes are only improved by the clarity of the conclusion. These findings have implications for the way disaster management evaluations should be documented.