Abstract
For complex compressor flows involving flow separation and adverse pressure gradients, numerical results based on Reynolds-averaged Navier-Stokes (RANS) models often deviate from experimental measurements. To improve prediction accuracy and reduce the discrepancy between RANS predictions and experimental measurements, an experimental-data-driven flow field prediction method based on deep learning and ℓ1 regularization is proposed and applied to a compressor cascade flow field. The inlet boundary conditions and turbulence model parameters are calibrated to obtain high-fidelity flow fields. The Spalart-Allmaras and SST turbulence models are used independently for mutual validation, and the contributions of the key modified parameters are analyzed via sensitivity analysis. The results show that the proposed algorithm reduces the prediction error by nearly 70%. The flow fields predicted by the two calibrated turbulence models are almost identical, i.e., nearly independent of the choice of turbulence model. Correcting the inlet boundary conditions reduces the error over the first half of the chord, while the turbulence model calibration fixes the overprediction of flow separation on the suction surface near the trailing edge.
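The calibration described above can be read as minimizing a misfit against experimental measurements plus an ℓ1 penalty on the parameter corrections, which drives unneeded corrections to exactly zero. Below is a minimal, illustrative sketch of such an objective, not the authors' implementation: the surrogate model, measurement data, and penalty weight are all hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical surrogate for the RANS solver (in the paper, a deep-learning
# model plays this role), mapping calibration parameters to flow-field samples.
def surrogate_flow_field(theta):
    # theta[0:2]: illustrative inlet boundary-condition corrections
    # theta[2:4]: illustrative turbulence-model coefficient corrections
    A = np.array([[1.0, 0.5, 0.2, 0.1],
                  [0.3, 1.0, 0.4, 0.2],
                  [0.2, 0.3, 1.0, 0.5]])
    return A @ theta

u_exp = np.array([0.9, 1.1, 1.0])   # experimental measurements (made-up values)
theta0 = np.zeros(4)                # baseline (uncalibrated) parameters
lam = 0.1                           # ℓ1 penalty weight (assumed)

def objective(theta):
    # Data misfit + ℓ1 penalty on corrections relative to the baseline;
    # the ℓ1 term promotes sparse, interpretable parameter changes.
    misfit = np.sum((surrogate_flow_field(theta) - u_exp) ** 2)
    return misfit + lam * np.sum(np.abs(theta - theta0))

res = minimize(objective, theta0, method="Nelder-Mead")
print("calibrated corrections:", res.x)
```

A derivative-free optimizer is used here only because the ℓ1 term is nonsmooth and the toy problem is low-dimensional; a gradient-based method with a proximal treatment of the penalty would be the natural choice at realistic scale.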
Funding
Supported by the National Natural Science Foundation of China (No. 52106053, No. 92152301).