This article reviews the development of acoustic computer simulation for performance spaces. The Web of Science and Scopus databases were searched for peer-reviewed journal articles published in English between 1960 and 2021, using the keywords “simulation”, “acoustic”, “performance space”, “measure”, and their synonyms. The inclusion criteria were as follows: (1) the article should focus on the field of room acoustics (reviews were excluded); (2) a computer simulation algorithm should be used; (3) it should be clearly stated that the simulated object is a performance space; and (4) acoustic measurements should be used for comparison with the simulation. In total, twenty studies were included. A standardised data extraction form was used to collect the modelling information, software/algorithm, indicators used for comparison, and other information. The results revealed that the most frequently used acoustic indicators were early decay time (EDT), reverberation time (T30), strength (G), and definition (D50). The accuracy of these indicators differed greatly. For non-iterative simulations, the accuracies of most indicators fell outside their respective just noticeable differences (JNDs). Although a larger sample size is required for further validation, simulations of T30, EDT, and D50 all showed increasing accuracy over time from 1979 to 2020, whereas G did not. In terms of frequency, simulations were generally less accurate at lower frequencies, as observed for T30, G, D50, and T20; EDT accuracy, however, did not exhibit significant frequency sensitivity, and the prediction accuracy of the inter-aural cross-correlation coefficient (IACC) was actually higher at low frequencies than at high frequencies. The mean error of most indicators showed a clear systematic deviation from zero, providing hints for future algorithm improvements. The limitations and risks of bias of this review are discussed, and various types of benchmark tests are suggested for different comparison goals.
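The comparison criterion used throughout the review, whether the error between a simulated and a measured indicator stays within that indicator's just noticeable difference (JND), can be illustrated with a minimal sketch. The indicator values below are hypothetical placeholders, not data from the included studies, and the JND figures are the values commonly attributed to ISO 3382-1 (relative 5 % JNDs for T30 and EDT; absolute JNDs for G, D50, and IACC).

```python
# Minimal sketch: check whether simulation errors of room-acoustic indicators
# fall within commonly cited just noticeable differences (JNDs).
# JND values follow those commonly attributed to ISO 3382-1 (assumption);
# all indicator values below are illustrative placeholders, not review data.

JND = {
    "T30":  ("relative", 0.05),    # 5 % of the measured value
    "EDT":  ("relative", 0.05),    # 5 % of the measured value
    "G":    ("absolute", 1.0),     # 1 dB
    "D50":  ("absolute", 0.05),    # 5 percentage points
    "IACC": ("absolute", 0.075),
}

def within_jnd(indicator, simulated, measured):
    """Return (error, inside) where `inside` is True if the simulation error
    for `indicator` stays within its just noticeable difference."""
    kind, jnd = JND[indicator]
    error = simulated - measured
    limit = jnd * abs(measured) if kind == "relative" else jnd
    return error, abs(error) <= limit

# Hypothetical simulated/measured pairs for a single octave band and receiver.
examples = {
    "T30": (1.92, 2.05),   # seconds
    "EDT": (1.75, 1.90),   # seconds
    "G":   (4.8, 3.5),     # dB
    "D50": (0.48, 0.52),   # ratio
}

for name, (sim, meas) in examples.items():
    err, ok = within_jnd(name, sim, meas)
    print(f"{name}: error = {err:+.3f} -> {'within' if ok else 'outside'} JND")
```

Under these placeholder numbers, only D50 stays within its JND, which mirrors the review's observation that, for non-iterative simulations, most indicator errors exceed perceptible thresholds.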