The design of a new Satellite Launch Vehicle (SLV) is of interest, especially when a combination of solid and liquid propulsion is included. Proposed is a conceptual design and optimization technique for a multistage Low Earth Orbit (LEO) bound SLV comprising solid and liquid stages, with a Genetic Algorithm (GA) as the global optimizer. Convergence of the GA is improved by introducing an initial population based on the Design of Experiments (DOE) technique. Latin Hypercube Sampling (LHS) DOE is used for its good space-filling properties. LHS is a stratified random procedure that provides an efficient way of sampling variables from their multivariate distributions. In SLV design, a minimum Gross Lift-off Weight (GLOW) concept is traditionally sought. Since development costs tend to vary as a function of GLOW, the minimum GLOW concept is treated as a minimum development cost concept. The design approach is well suited to initial design sizing, as its computational efficiency gives quick insight into vehicle performance prior to detailed design.
Direct measurement of snow water equivalent (SWE) in snow-dominated mountainous areas is difficult, so its prediction is essential for water resources management in such areas. In addition, because of the nonlinear trend of snow spatial distribution and the multiple factors influencing the SWE spatial distribution, statistical models are usually unable to produce acceptable results. Therefore, methods able to predict nonlinear trends are necessary. In this research, the Sohrevard Watershed in northwest Iran was selected as the case study for SWE prediction. A database was collected and the required maps were derived. Snow depth (SD) was measured at 150 points using two sampling patterns, systematic random sampling and Latin hypercube sampling (LHS), and snow density was randomly measured at 18 points; SWE was then calculated. SWE was predicted using artificial neural network (ANN), adaptive neuro-fuzzy inference system (ANFIS) and regression methods. The results showed that the ANN and ANFIS models performed better than the regression method under both sampling patterns. Moreover, by most of the efficiency criteria, the ANN, ANFIS and regression methods were more efficient under the LHS pattern than under the systematic random sampling pattern. However, there were no significant differences between ANN and ANFIS in SWE prediction. Data from both sampling patterns had the highest sensitivity to elevation. In addition, the LHS and systematic random sampling patterns had the least sensitivity to profile curvature and plan curvature, respectively.
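Several of the abstracts above lean on the same core property of LHS: each variable's range is divided into n equal-probability strata and exactly one sample is drawn per stratum. A minimal sketch of that stratification using SciPy's quasi-Monte Carlo module (the variable names and physical ranges here are illustrative, not taken from any of the studies):

```python
import numpy as np
from scipy.stats import qmc

# Draw 10 samples in 2 dimensions; each dimension is split into
# 10 equal-width strata and receives exactly one point per stratum.
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=10)          # values in [0, 1)^2

# Scale to illustrative physical ranges, e.g. elevation 1500-3000 m,
# slope 0-45 degrees.
lower, upper = [1500.0, 0.0], [3000.0, 45.0]
samples = qmc.scale(unit_samples, lower, upper)

# Verify the stratification: in each dimension the 10 strata
# [0, 0.1), [0.1, 0.2), ... each contain exactly one sample.
for dim in range(2):
    strata = np.floor(unit_samples[:, dim] * 10).astype(int)
    assert sorted(strata) == list(range(10))
```

The same unit design can be mapped through any marginal distribution, which is how LHS is combined with non-uniform inputs in the studies below.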
Coupling Bayes' theorem with a two-dimensional (2D) groundwater solute advection-diffusion transport equation allows an inverse model to be established to identify a set of contamination source parameters, including source intensity (M), release location (X0, Y0) and release time (T0), based on monitoring well data. To address the issues of insufficient monitoring wells or weak correlation between monitoring data and model parameters, a monitoring well design optimization approach was developed based on the Bayesian formula and information entropy. To demonstrate how the model works, an exemplar problem with an instantaneous release of a contaminant in a confined groundwater aquifer was employed. The information entropy of the posterior distribution of the model parameters was used as a criterion to evaluate the monitoring data quantity index. The optimal monitoring well position and monitoring frequency were solved by a two-step Monte Carlo method and a differential evolution algorithm, given known monitoring well locations and monitoring events. Based on the optimized monitoring well position and sampling frequency, the contamination source was identified by an improved Metropolis algorithm using the Latin hypercube sampling approach. The case study results show that: 1) the optimal monitoring well position (D) is at (445, 200); and 2) the optimal monitoring frequency (Δt) is 7, provided that the number of monitoring events is set to 5. Employing the optimized monitoring well position and frequency, the mean errors of the inverse modeling results for the source parameters (M, X0, Y0, T0) were 9.20%, 0.25%, 0.0061%, and 0.33%, respectively. It was also learnt that the improved Metropolis-Hastings algorithm (a Markov chain Monte Carlo method) makes the inverse modeling result independent of the initial sampling points and achieves an overall optimization, which significantly improves the accuracy and numerical stability of the inverse modeling results.
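The Metropolis algorithm referenced above is the basic building block of Markov chain Monte Carlo. The study's specific improvements (LHS-based starting points) are not reproduced here; this is a generic random-walk sketch on a toy one-dimensional posterior whose Gaussian shape and numbers are assumptions for illustration only:

```python
import numpy as np

def metropolis(log_post, x0, n_steps, step=5.0, seed=0):
    """Plain random-walk Metropolis (a Markov chain Monte Carlo method).
    The paper's 'improved' variant adds LHS-based initialization; this
    sketch shows only the core accept/reject mechanics."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    x, lp = x0, log_post(x0)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        # Accept with probability min(1, posterior ratio).
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

# Toy target: posterior ~ N(445, 10^2), loosely mimicking one coordinate
# of a release location; the numbers are illustrative only.
chain = metropolis(lambda x: -0.5 * ((x - 445.0) / 10.0) ** 2,
                   x0=400.0, n_steps=20000)
burned = chain[2000:]          # discard burn-in
print(burned.mean())           # close to 445
```

Starting far from the mode (x0 = 400) shows why burn-in is discarded; a poor start is exactly the sensitivity the improved initialization is meant to remove.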
Nutrient release from sediment is considered a significant source for overlying water. Given that nutrient release mechanisms in sediment are complex and difficult to simulate, traditional approaches commonly use assigned parameter values to simulate these processes. In this study, a nitrogen flux model was developed and coupled with the water quality model of an urban lake. After parameter sensitivity analyses and model calibration and validation, this model was used to simulate nitrogen exchange at the sediment–water interface in eight scenarios. The results showed that sediment acted as a buffer in the sediment–water system. It could store or release nitrogen at any time, regulate the distribution of nitrogen between sediment and the water column, and provide algae with nitrogen. The most effective way to reduce nitrogen levels in urban lakes within a short time is to reduce external nitrogen loadings. However, sediment release might continue to contribute to the water column until a new balance is achieved. Therefore, effective measures for reducing sediment nitrogen should be developed as supplementary measures. Furthermore, model parameter sensitivity should be individually examined for different research subjects.
The anti-sliding stability of a gravity dam along its foundation surface is a key problem in the design of gravity dams. In this study, a sensitivity analysis framework was proposed for investigating the factors affecting gravity dam anti-sliding stability along the foundation surface. According to the design specifications, the loads and factors affecting the stability of a gravity dam were comprehensively selected. Afterwards, the sensitivity of the factors was preliminarily analyzed using the Sobol method with Latin hypercube sampling. Then, the results of the sensitivity analysis were verified against those obtained using the Garson method. Finally, the effects of different sampling methods, probability distribution types of factor samples, and ranges of factor values on the analysis results were evaluated. A case study of a typical gravity dam in Yunnan Province of China showed that the dominant factors affecting the gravity dam anti-sliding stability were the anti-shear cohesion, upstream and downstream water levels, anti-shear friction coefficient, uplift pressure reduction coefficient, concrete density, and silt height. The choice of sampling method had no significant effect, but the probability distribution type and the range of factor values greatly affected the analysis results. Therefore, these two elements should be sufficiently considered to improve the reliability of the dam anti-sliding stability analysis.
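Sobol sensitivity analysis, as used in the dam-stability study above, attributes output variance to individual input factors. A compact pick-freeze sketch on a toy linear model with known analytic indices; plain random base matrices are used here for brevity, though they could equally be drawn with LHS as in the study:

```python
import numpy as np

# Toy model f(x) = 4*x1 + x2 with x_i ~ U(0,1); for this additive model
# the analytic first-order Sobol indices are S_i = a_i^2 / (a_1^2 + a_2^2).
a = np.array([4.0, 1.0])

def f(x):
    return x @ a

rng = np.random.default_rng(1)
n, d = 100_000, 2
A, B = rng.random((n, d)), rng.random((n, d))
fA, fB = f(A), f(B)
var = fA.var()

S = np.empty(d)
for i in range(d):
    AB = A.copy()
    AB[:, i] = B[:, i]           # A with column i taken from B
    # Saltelli-style first-order estimator: V_i = E[f(B) * (f(AB_i) - f(A))]
    S[i] = np.mean(fB * (f(AB) - fA)) / var

print(S)   # close to [16/17, 1/17], i.e. roughly [0.94, 0.06]
```

A dominant first factor mirrors the study's finding that a few inputs (cohesion, water levels) account for most of the output variance.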
In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables. Impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies are calculated using the software CFX 14.5 at the two operating points selected as objectives. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables. Finally, the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of impeller parameters. The results show that the performance curve predicted by numerical simulation agrees well with the experimental results. Compared with the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. A comparison of the internal flow between the original and optimized pumps illustrates the performance improvement. The optimization process can provide a useful reference for improving the performance of other pumps and even for reducing pressure fluctuations.
This study investigates strategies for solving the system reliability of large three-dimensional jacket structures. These structural systems normally fail as a result of a series of different component failures. The failure characteristics are investigated under various environmental conditions and direction combinations. The β-unzipping technique is adopted to determine critical failure components, and the entire system is simplified as a series-parallel system to approximately evaluate the structural system reliability. However, this approach requires excessive computational effort for searching failure components and failure paths. Based on a trained artificial neural network (ANN), which can be used to approximate the implicit limit-state function of a complicated structure, a new alternative procedure is proposed to improve the efficiency of the system reliability analysis method. The failure probability is calculated through Monte Carlo simulation (MCS) with Latin hypercube sampling (LHS). The features and applicability of the above procedure are discussed and compared using an example jacket platform located in the Chengdao Oilfield, Bohai Sea, China. This study provides a reference for the evaluation of the system reliability of jacket structures.
To optimize peaking operation when a high proportion of new energy is connected to the power grid, evaluation indexes are proposed that simultaneously consider wind-solar complementation and source-load coupling. A typical wind-solar power output scenario model based on peaking demand, which has anti-peaking characteristics, is established. This model uses balancing scenarios and key scenarios with probability distributions, based on an improved Latin hypercube sampling (LHS) algorithm and scenario reduction technology, to illustrate the influence of wind and solar power on peaking demand. On this basis, a peak-shaving operation optimization model for power generation with a high proportion of new energy is established. The various operating indexes after optimization in multi-scenario peaking are calculated, and the peaking operation capability of the power grid is compared with that obtained when considering wind-solar complementation and source-load coupling. Finally, a case with a high proportion of new energy verifies the feasibility and validity of the proposed operation strategy.
This study presents a robust design method for autonomous photovoltaic (PV)-wind hybrid power systems to obtain an optimum system configuration insensitive to design variable variations. The issue is formulated as a constrained multi-objective optimization problem, which is solved by a multi-objective genetic algorithm, NSGA-II. The Monte Carlo Simulation (MCS) method, combined with Latin Hypercube Sampling (LHS), is applied to evaluate the stochastic system performance. The potential of the proposed method has been demonstrated through a conceptual system design. A comparative study between the proposed robust method and the deterministic method presented in the literature has been conducted. The results indicate that the proposed method can find a large number of Pareto-optimal system configurations with better compromising performance than the deterministic method. Trade-off information may be derived by a systematic comparison of these configurations. The proposed robust design method should be useful for hybrid power systems that require both optimality and robustness.
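The benefit of combining MCS with LHS, as in the hybrid-power study above, is variance reduction: both estimators are unbiased, but stratification shrinks the spread of the estimate for the same sample size. A toy comparison (the "performance model" is a stand-in, not the paper's system model):

```python
import numpy as np
from scipy.stats import qmc

def estimate(u):
    # Stand-in performance model; the true mean of U^2 on [0, 1] is 1/3.
    return np.mean(u ** 2)

n, repeats = 16, 200
rng = np.random.default_rng(7)

# Repeat both estimators many times and compare their spread.
mc_est = [estimate(rng.random(n)) for _ in range(repeats)]
lhs_est = [estimate(qmc.LatinHypercube(d=1, seed=s).random(n).ravel())
           for s in range(repeats)]

print(np.std(mc_est), np.std(lhs_est))   # LHS spread is much smaller
```

For smooth models like this one, the LHS estimator's standard deviation is several times smaller than plain Monte Carlo's at the same n, which is why LHS is the default pairing for stochastic performance evaluation.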
In this study, the seismic stability of arch dam abutments is investigated within the framework of the probabilistic method. A large concrete arch dam is considered, with six wedges for each abutment. The seismic safety of the dam abutments is studied with quasi-static analysis for different hazard levels. The Londe limit equilibrium method is utilized to calculate the stability of the wedges in the abutments. Since the finite element method is time-consuming, a neural network is used as an alternative for calculating the wedge safety factor. For training the neural network, 1000 random samples are generated and the dam response is calculated. The direction of the applied acceleration is changed in 5-degree intervals to reveal the critical direction corresponding to the minimum safety factor. Latin hypercube sampling (LHS) is employed for sample generation, and the safety level is determined with reliability analysis. Three sample sizes of 1000, 2000 and 4000 are used to examine the average and standard deviation of the results. Global sensitivity analysis is used to identify the effects of the random variables on abutment stability. It is shown that friction, cohesion and uplift pressure have the most significant effects on the wedge stability variance.
This paper presents an artificial neural network (ANN)-based response surface method that can be used to predict the failure probability of c–φ slopes with spatially variable soil. In this method, the Latin hypercube sampling technique is adopted to generate input datasets for establishing an ANN model; the random finite element method is then utilized to calculate the corresponding output datasets considering the spatial variability of soil properties; and finally, an ANN model is trained to construct the response surface of failure probability and obtain an approximate function that incorporates the relevant variables. The results of the illustrated example indicate that the proposed method provides credible and accurate estimations of failure probability. As a result, the obtained approximate function can be used as an alternative to the specific analysis process in c–φ slope reliability analyses.
Since the first step of the fire/gas detection systems of floating production storage and offloading (FPSO) units is to identify leakage accidents, gas detectors play an important role in controlling leakage risk. To improve the leakage scenario detection rate and reduce the cumulative risk value, this paper presents an optimization method for gas detector placement. The probability density distributions and cumulative probability density distributions of the leakage source variables and environmental variables were calculated based on the Offshore Reliability Data and statistical data of the relevant leakage variables. A potential leakage scenario set was constructed using Latin hypercube sampling. Typical FPSO leakage scenarios were analyzed through computational fluid dynamics (CFD), and the impacts of different parameters on the leakage were addressed. A series of detectors was deployed according to the simulation results. Minimization of the product of effective detection time and gas leakage volume was taken as the risk optimization objective, with the locations and number of detectors as decision variables. A greedy extraction heuristic algorithm was used to solve the optimization problem. The results show that the optimized placement had a better monitoring effect on the leakage scenarios.
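Building a scenario set from fitted distributions, as described above, amounts to drawing a uniform LHS design and pushing each column through the corresponding variable's inverse CDF. A sketch with illustrative distributions and parameters (assumptions for demonstration, not the paper's OREDA-fitted statistics):

```python
import numpy as np
from scipy import stats
from scipy.stats import qmc

# Uniform LHS design in [0, 1)^3, one column per uncertain variable.
u = qmc.LatinHypercube(d=3, seed=3).random(n=100)

# Map each column through an inverse CDF (ppf) to get physical scenarios.
scenarios = np.column_stack([
    stats.lognorm(s=0.8, scale=2.0).ppf(u[:, 0]),      # leak rate, kg/s
    stats.weibull_min(c=2.0, scale=6.0).ppf(u[:, 1]),  # wind speed, m/s
    stats.uniform(loc=0.0, scale=360.0).ppf(u[:, 2]),  # wind direction, deg
])
print(scenarios.shape)   # (100, 3)
```

Each resulting row is one candidate leakage scenario, and the LHS stratification guarantees the tails of every marginal distribution are represented even in a modest scenario set.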
This paper presents the probabilistic analysis of landslides in spatially variable soil deposits, modeled by a stochastic framework which integrates random field theory with the generalized interpolation material point method (GIMP). Random fields are simulated using the Cholesky matrix decomposition (CMD) method and the Latin hypercube sampling (LHS) method, which represent material properties discretized into sets of random soil shear strength variables with statistical properties. The approach is applied to landslides in clayey deposits under undrained conditions with random fields of undrained shear strength parameters, in order to quantify the uncertainties of post-failure behavior at different scales of fluctuation (SOF) and coefficients of variation (COV). Results show that the employed approach can reliably simulate the whole landslide process and assess the uncertainties of runout motions. It is demonstrated that the natural heterogeneity of shear strength in landslides notably influences their post-failure behavior. Compared with a homogeneous landslide model, which yields conservative results and underestimation of the risks, consideration of heterogeneity shows larger landslide influence zones. As SOF values increase, the variances of the influence zones also increase, and with higher values of COV, the mean values of the influence zone also increase, resulting in higher uncertainties of post-failure behavior.
In this paper, we are interested in finding the most sensitive parameters and the local and global stability of an ovarian tumor growth model. For sensitivity analysis, we use the Latin Hypercube Sampling (LHS) method to generate sample points and the Partial Rank Correlation Coefficient (PRCC) method, which uses those sample points to find out which parameters are important for the model. Based on our findings, we suggest some treatment strategies. We investigate the sensitivity of the parameters for tumor volume y, cell nutrient density Q, and maximum tumor size ymax. We also use the scatter plot method with LHS samples to show the consistency of the results obtained using PRCC. Moreover, we discuss the qualitative analysis of the ovarian tumor growth model, investigating local and global stability.
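PRCC, as used above, measures the rank correlation between one parameter and the output after the (rank-space) linear effects of all other parameters are regressed out. A self-contained sketch on a toy model; the three-parameter model and its coefficients are illustrative assumptions, not the tumor model:

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def prcc(X, y):
    """Partial rank correlation coefficient of each column of X with y:
    rank-transform everything, regress out the other parameters via
    least squares, then correlate the residuals."""
    Xr = np.column_stack([rankdata(c) for c in X.T])
    yr = rankdata(y)
    out = []
    for i in range(Xr.shape[1]):
        others = np.column_stack([np.ones(len(yr)), np.delete(Xr, i, axis=1)])
        res_x = Xr[:, i] - others @ np.linalg.lstsq(others, Xr[:, i], rcond=None)[0]
        res_y = yr - others @ np.linalg.lstsq(others, yr, rcond=None)[0]
        out.append(pearsonr(res_x, res_y)[0])
    return np.array(out)

# Toy model: output depends strongly on p1, weakly on p2, not at all on p3.
rng = np.random.default_rng(5)
X = rng.random((500, 3))
y = 10.0 * X[:, 0] + 1.0 * X[:, 1] + rng.normal(0.0, 0.5, 500)
print(prcc(X, y))   # large for p1, moderate for p2, near zero for p3
```

In a real study the rows of X would come from an LHS design over the parameter ranges, exactly as the abstract describes.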
The aim of this paper is to present a newly developed approach for reliability-based design optimization. It is based on a double-loop framework, where the outer loop covers the optimization part of the reliability-based optimization process and the reliability constraints are calculated in the inner loop. The innovation of the suggested approach is the application of a newly developed optimization strategy based on multilevel simulation using an advanced Latin Hypercube Sampling technique. This method, called Aimed multilevel sampling, is designed for optimizing problems where only a limited number of simulations can be performed due to enormous computational demands.
Sampling design (SD) plays a crucial role in providing reliable input for digital soil mapping (DSM) and increasing its efficiency. Sampling design, with a predetermined sample size and consideration of budget and spatial variability, is a selection procedure for identifying a set of sample locations spread over a geographical space or with good feature space coverage. Good feature space coverage ensures accurate estimation of regression parameters, while spatial coverage contributes to effective spatial interpolation. First, we review several statistical and geometric SDs that mainly optimize the sampling pattern in geographical space and illustrate the strengths and weaknesses of these SDs by considering spatial coverage, simplicity, accuracy, and efficiency. Furthermore, Latin hypercube sampling, which obtains a full representation of the multivariate distribution in geographical space, is described in detail with regard to its development, improvement, and application. In addition, we discuss fuzzy k-means sampling, response surface sampling, and Kennard-Stone sampling, which optimize sampling patterns in a feature space. We then discuss some practical applications that are mainly addressed by conditioned Latin hypercube sampling, with the flexibility and feasibility of adding multiple optimization criteria. We also discuss different methods of validation, an important stage of DSM, and conclude that an independent dataset selected by probability sampling is superior for its freedom from model assumptions. For future work, we recommend: 1) exploring SDs with both good spatial coverage and good feature space coverage; 2) uncovering the real impacts of an SD on the integral DSM procedure; and 3) testing the feasibility and contribution of SDs in three-dimensional (3D) DSM with variability for multiple layers.
Digital soil mapping (DSM) aims to produce detailed maps of soil properties or soil classes to improve agricultural management and soil quality assessment. Optimized sampling design can reduce the substantial costs and efforts associated with sampling, profile description, and laboratory analysis. The purpose of this study was to compare common sampling designs for DSM, including grid sampling (GS), grid random sampling (GRS), stratified random sampling (StRS), and conditioned Latin hypercube sampling (cLHS). In an agricultural field (11 ha) in Quebec, Canada, a total of 118 unique locations were selected using the four sampling designs (45 locations each), and an additional 30 sample locations were selected as an independent testing dataset (evaluation dataset). Soil visible near-infrared (Vis-NIR) spectra were collected in situ at the 148 locations (1 m depth), and soil cores were collected from a subset of 32 locations and subdivided at 10-cm depth intervals, totaling 251 samples. The Cubist model was used to elucidate the relationship between Vis-NIR spectra and soil properties (soil organic matter (SOM) and clay), and was then used to predict the soil properties at all 148 sample locations. Digital maps of soil properties at multiple depths for the entire field (148 sample locations) were prepared using a quantile random forest model to obtain complete model maps (CM-maps). Soil properties were also mapped using the 45 locations of each sampling design to obtain sampling design maps (SD-maps). The SD-maps were evaluated using the independent testing dataset (30 sample locations), and the spatial distribution and model uncertainty of each SD-map were compared with those of the corresponding CM-map.
The spatial and feature space coverage were compared across the four sampling designs. The results showed that GS resulted in the most even spatial coverage, cLHS provided the best coverage of the feature space, and GS and cLHS yielded similar prediction accuracies and spatial distributions of soil properties. The SOM content was underestimated using GRS, with large errors at 0–50 cm depth, because some values were not captured by this sampling design, whereas larger errors in the deeper soil layers were produced using StRS. Predictions of SOM and clay contents were more accurate for topsoil (0–30 cm) than for deep subsoil (60–100 cm). It was concluded that soil sampling designs with either good spatial coverage or good feature space coverage can provide good accuracy in 3D DSM, but their performance may differ for different soil properties.
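The spatial-coverage comparison above can be quantified with a simple metric: the mean distance from locations in the field to their nearest sample point, where lower means better coverage. A sketch contrasting a regular grid with simple random sampling on a unit square (the field geometry and sample counts are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def mean_dist_to_nearest_sample(pts):
    # Evaluate coverage at a dense 50x50 raster of field locations.
    gx, gy = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
    field = np.column_stack([gx.ravel(), gy.ravel()])
    d, _ = cKDTree(pts).query(field)   # distance to nearest sample
    return d.mean()

# 36 samples: regular 6x6 grid vs simple random locations.
g = np.linspace(1 / 12, 11 / 12, 6)
grid_pts = np.array([(x, y) for x in g for y in g])
rand_pts = np.random.default_rng(11).random((36, 2))

print(mean_dist_to_nearest_sample(grid_pts),
      mean_dist_to_nearest_sample(rand_pts))
```

The grid's distance is consistently smaller, matching the finding above that GS gives the most even spatial coverage; cLHS trades some of that evenness for coverage of the covariate (feature) space instead.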
This study combined a neural network and Latin hypercube sampling (LHS) to calibrate soil parameters. The Monte Carlo parameters were calibrated by generating different numbers of training samples for pressuremeter tests and excavations. The results showed that when the number of samples was 25 or 50, the parameter calibration accuracy was very high. However, the accuracy did not increase significantly with a further increase in the number of samples, but tended to be stable. The number of training samples was set at 50 to strike a balance between calibration accuracy and efficiency for four parameters. For 25 groups of samples, the calibration results using LHS were better than those using orthogonal sampling. Compared to stochastic optimization algorithms, a neural network combined with LHS could significantly reduce the calibration time. This method was applied to an actual foundation pit engineering project in China. The results showed that the proposed calibration method clearly improved the accuracy of predicting the deformation induced by the excavation.
In this paper, multi-objective optimization of wavy microchannel heat sinks is performed by combining numerical calculation, a prediction algorithm and a genetic algorithm. In the numerical calculation, the fluid-solid conjugate heat transfer of heat sinks with different parameters is simulated in Fluent. On this basis, the variable parameters and objective parameters are used to train a neural network model, which aims to achieve accurate prediction of the objective parameters. Finally, the multi-objective genetic algorithm is applied to find the Pareto front according to different requirements on the foundation of the prediction model. Results show that the coefficients of determination of the neural network models are all greater than 0.85, which proves that the prediction model has high accuracy. The Pareto fronts are obtained by the non-dominated sorting genetic algorithm (NSGA-II) with different objective parameters, and they reveal that the channel with the optimal performance corresponds to a larger channel width or Reynolds number. In addition, it is found that the dimensionless temperature difference is correlated with the Nusselt number.
Urban electricity and heat networks (UEHN) consist of the coupling and interactions between electric power systems and district heating systems, in which the geographical and functional features of integrated energy systems are demonstrated. UEHN are expected to provide an effective way to accommodate intermittent and unpredictable renewable energy sources, for which the application of stochastic optimization approaches to UEHN analysis is highly desired. In this paper, we propose a chance-constrained coordinated optimization approach for UEHN considering the uncertainties in electricity loads, heat loads, and photovoltaic outputs, as well as the correlations between these uncertain sources. A solution strategy, which combines the Latin Hypercube Sampling Monte Carlo Simulation (LHSMCS) approach and a heuristic algorithm, is specifically designed to deal with the proposed chance-constrained coordinated optimization. Finally, test results on a UEHN comprised of a modified IEEE 33-bus system and a 32-node district heating system at Barry Island have verified the feasibility and effectiveness of the proposed framework.
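Checking a chance constraint with LHS-based Monte Carlo, as in the LHSMCS strategy above, reduces to estimating a violation probability from the sample. A one-variable sketch with an assumed Gaussian load; the distribution, capacity, and 95% confidence level are illustrative numbers, not the test system's:

```python
import numpy as np
from scipy.stats import norm, qmc

# Chance constraint: require P(load <= capacity) >= 0.95.
u = qmc.LatinHypercube(d=1, seed=9).random(n=2000).ravel()
load = norm(loc=80.0, scale=10.0).ppf(u)   # LHS sample of the uncertain load
capacity = 100.0

prob_ok = np.mean(load <= capacity)        # estimated P(load <= capacity)
print(prob_ok)   # close to Phi(2) = 0.97725, so the constraint holds
```

Inside an optimization loop, this estimate is recomputed for each candidate decision and the candidate is rejected whenever the estimated probability falls below the required confidence level.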
Funding: This work was supported by the Major Science and Technology Program for Water Pollution Control and Treatment (No. 2015ZX07406005), the National Natural Science Foundation of China (Nos. 41430643 and 51774270), and the National Key Research & Development Plan (No. 2016YFC0501109).
Abstract: Coupling Bayes' theorem with a two-dimensional (2D) groundwater solute advection-diffusion transport equation allows an inverse model to be established that identifies a set of contamination source parameters, including source intensity (M), release location (X0, Y0), and release time (T0), from monitoring well data. To address the issues of insufficient monitoring wells or weak correlation between monitoring data and model parameters, a monitoring well design optimization approach was developed based on the Bayesian formula and information entropy. To demonstrate how the model works, an exemplar problem with an instantaneous release of a contaminant into a confined aquifer was employed. The information entropy of the posterior distribution of the model parameters was used as the criterion for evaluating the information content of the monitoring data. The optimal monitoring well position and monitoring frequency were solved by a two-step Monte Carlo method and a differential evolution algorithm, given known monitoring locations and a fixed number of monitoring events. Based on the optimized monitoring well position and sampling frequency, the contamination source was identified by an improved Metropolis algorithm using Latin hypercube sampling. The case study results show that the optimal monitoring well position (D) is at (445, 200) and the optimal monitoring interval (Δt) is 7, with the number of monitoring events set to 5. Employing the optimized monitoring well position and frequency, the mean errors of the inverse modeling results for the source parameters (M, X0, Y0, T0) were 9.20%, 0.25%, 0.0061%, and 0.33%, respectively. It was also found that the improved Metropolis-Hastings algorithm (a Markov chain Monte Carlo method) makes the inverse modeling result independent of the initial sampling points and achieves an overall optimum, which significantly improved the accuracy and numerical stability of the inverse modeling results.
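A stripped-down version of the source identification step, Metropolis sampling of a posterior with chain starting points stratified by one-dimensional LHS, might look like this. The linear forward model, well gains, and all numbers are illustrative, not the 2D advection-diffusion solution:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative forward model: well concentrations linear in source intensity M
# (a stand-in for the advection-diffusion solution at three monitoring wells).
well_gain = np.array([0.8, 0.5, 0.2])
forward = lambda M: M * well_gain

M_true, sigma = 10.0, 0.1
obs = forward(M_true) + rng.normal(0.0, sigma, well_gain.size)

def log_post(M):
    # Flat prior on M > 0, Gaussian measurement error.
    return -np.inf if M <= 0 else -0.5 * np.sum((obs - forward(M)) ** 2) / sigma**2

# Four Metropolis chains; starts stratified over [0, 20] by 1-D LHS so the
# result does not hinge on a single initial guess.
starts = (rng.permutation(4) + rng.random(4)) / 4 * 20.0
samples = []
for M in starts:
    lp = log_post(M)
    for it in range(3000):
        M_prop = M + rng.normal(0.0, 0.5)        # random-walk proposal
        lp_prop = log_post(M_prop)
        if np.log(rng.random()) < lp_prop - lp:  # Metropolis acceptance
            M, lp = M_prop, lp_prop
        if it >= 1000:                           # discard burn-in
            samples.append(M)

print(np.mean(samples))  # posterior mean, close to M_true
```

Spreading the chain starts with LHS is a simple way to realize the abstract's point that the improved sampler should not depend on where the chains begin.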
Funding: This work was supported by the Funds of the Nanjing Institute of Technology (Grant Nos. JCYJ201619 and ZKJ201804).
Abstract: Nutrient release from sediment is considered a significant source of nutrients for the overlying water. Given that nutrient release mechanisms in sediment are complex and difficult to simulate, traditional approaches commonly use assigned parameter values to represent these processes. In this study, a nitrogen flux model was developed and coupled with the water quality model of an urban lake. After parameter sensitivity analyses and model calibration and validation, the model was used to simulate nitrogen exchange at the sediment-water interface in eight scenarios. The results showed that sediment acts as a buffer in the sediment-water system: it can store or release nitrogen at any time, regulate the distribution of nitrogen between the sediment and the water column, and supply algae with nitrogen. The most effective way to reduce nitrogen levels in urban lakes within a short time is to reduce external nitrogen loadings; however, sediment release may continue to feed the water column until a new balance is achieved. Therefore, effective measures for reducing sediment nitrogen should be developed as a supplement. Furthermore, model parameter sensitivity should be examined individually for different study sites.
Funding: This work was supported by the National Natural Science Foundation of China (Grant No. 52079120).
Abstract: The anti-sliding stability of a gravity dam along its foundation surface is a key problem in gravity dam design. In this study, a sensitivity analysis framework was proposed for investigating the factors affecting the anti-sliding stability of a gravity dam along the foundation surface. The loads and factors affecting dam stability were selected comprehensively according to the design specifications. The sensitivity of the factors was first analyzed using the Sobol method with Latin hypercube sampling, and the results were then verified against those obtained with the Garson method. Finally, the effects of different sampling methods, probability distribution types of the factor samples, and ranges of factor values on the analysis results were evaluated. A case study of a typical gravity dam in Yunnan Province, China, showed that the dominant factors affecting anti-sliding stability were the anti-shear cohesion, the upstream and downstream water levels, the anti-shear friction coefficient, the uplift pressure reduction coefficient, the concrete density, and the silt height. The choice of sampling method had no significant effect, but the probability distribution type and the range of factor values greatly affected the results; these two elements should therefore be considered carefully to improve the reliability of dam anti-sliding stability analysis.
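The Sobol-with-LHS step can be sketched with a pick-freeze (Saltelli-style) estimator of first-order indices on a toy function. The linear model and its coefficients are invented for illustration and merely mimic one dominant factor and one nearly inert factor, standing in for the dam safety-factor computation:

```python
import numpy as np

def lhs_unit(n, d, rng):
    """One Latin hypercube sample of n points in the unit hypercube."""
    strata = np.stack([rng.permutation(n) for _ in range(d)], axis=1)
    return (strata + rng.random((n, d))) / n

rng = np.random.default_rng(2)
n, d = 4096, 3

# Toy stand-in for the safety-factor model: factor 0 dominates, factor 2
# is nearly inert (analogous to cohesion vs. a weakly influential input).
g = lambda x: 5.0 * x[:, 0] + 1.0 * x[:, 1] + 0.01 * x[:, 2]

# Two independent LHS matrices (the "A" and "B" sample blocks).
A, B = lhs_unit(n, d, rng), lhs_unit(n, d, rng)
fA, fB = g(A), g(B)
total_var = np.var(np.concatenate([fA, fB]))

S1 = []
for j in range(d):
    AB = A.copy()
    AB[:, j] = B[:, j]  # "pick-freeze": column j taken from B, rest from A
    S1.append(np.mean(fB * (g(AB) - fA)) / total_var)

print(np.round(S1, 3))  # first-order indices, ordered like the factor weights
```

For this linear model the analytic first-order indices are about 0.96, 0.04, and 0.00, so the ranking recovered by the estimator mirrors how the study ranks the dam stability factors.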
Funding: Supported by the Jiangsu Provincial Natural Science Foundation of China (Grant No. BK20140554), the National Natural Science Foundation of China (Grant No. 51409123), the China Postdoctoral Science Foundation (Grant No. 2015T80507), the Innovation Project for Postgraduates of Jiangsu Province, China (Grant No. KYLX15_1066), and the Priority Academic Program Development of Jiangsu Higher Education Institutions, China (PAPD).
Abstract: In order to widen the high-efficiency operating range of a low-specific-speed centrifugal pump, an optimization process considering efficiencies under both 1.0Qd and 1.4Qd is proposed. Three parameters, namely the blade outlet width b2, blade outlet angle β2, and blade wrap angle φ, are selected as design variables, and impellers are generated using the optimal Latin hypercube sampling method. The pump efficiencies at the two operating points selected as objectives are calculated using the software CFX 14.5. Surrogate models are then constructed to analyze the relationship between the objectives and the design variables, and finally the particle swarm optimization algorithm is applied to the surrogate model to determine the best combination of impeller parameters. The results show that the performance curve predicted by numerical simulation is in good agreement with the experimental results. Compared with the original impeller, the hydraulic efficiencies of the optimized impeller are increased by 4.18% and 0.62% under 1.0Qd and 1.4Qd, respectively. A comparison of the inner flow between the original and optimized pumps illustrates the improvement in performance. The optimization process can serve as a useful reference for performance improvement of other pumps, and even for reduction of pressure fluctuations.
Funding: This work was supported by the National Natural Science Foundation of China (No. 51779236), the NSFC-Shandong Joint Fund Project (No. U1706226), and the National Key Research and Development Program (No. 2016YFC0303401).
Abstract: This study investigates strategies for evaluating the system reliability of large three-dimensional jacket structures. Such structural systems normally fail through a series of component failures, whose characteristics are investigated under various environmental conditions and direction combinations. The β-unzipping technique is adopted to determine the critical failure components, and the entire system is simplified as a series-parallel system to approximately evaluate the structural system reliability. However, this approach requires excessive computational effort for searching failure components and failure paths. Based on a trained artificial neural network (ANN), which can approximate the implicit limit-state function of a complicated structure, a new procedure is proposed to improve the efficiency of the system reliability analysis. The failure probability is calculated through Monte Carlo simulation (MCS) with Latin hypercube sampling (LHS). The features and applicability of the procedure are discussed and compared using an example jacket platform located in the Chengdao Oilfield, Bohai Sea, China. This study provides a reference for evaluating the system reliability of jacket structures.
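The MCS-with-LHS failure-probability step, minus the ANN surrogate, reduces to something like the following. The explicit limit state g = R − S and its normal statistics are invented for illustration; the real study would evaluate the trained surrogate of the jacket's limit state instead:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(3)
n = 100_000
inv_norm = np.vectorize(NormalDist().inv_cdf)

def lhs_normal(mu, sd):
    """LHS draw of n values from N(mu, sd): stratify the probability axis,
    then map each stratum point through the inverse normal CDF."""
    u = (rng.permutation(n) + rng.random(n)) / n
    return mu + sd * inv_norm(np.clip(u, 1e-12, 1 - 1e-12))

# Hypothetical explicit limit state g = R - S (resistance minus load effect).
R = lhs_normal(300.0, 30.0)
S = lhs_normal(200.0, 40.0)
pf = np.mean(R - S < 0.0)
print(pf)  # analytic value is Phi(-2), about 0.0228
```

Stratifying the probability axis guarantees the distribution tails are sampled, which is exactly where rare failures live; that is why LHS pairs well with MCS for small failure probabilities.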
Funding: Supported by the Youth Science and Technology Fund Project of Gansu Province (No. 18JR3RA011), Major Projects in Gansu Province (No. 17ZD2GA010), Science and Technology Projects Funding of the State Grid Corporation (No. 522727160001), and Science and Technology Projects of the State Grid Gansu Electric Power Company (No. 52272716000K).
Abstract: To optimize peaking operation when a high proportion of new energy is connected to the power grid, evaluation indexes are proposed that simultaneously consider wind-solar complementation and source-load coupling. A typical wind-solar power output scene model based on peaking demand is established that captures the anti-peaking characteristic of these sources. The model uses balancing scenes and key scenes with probability distributions, generated by an improved Latin hypercube sampling (LHS) algorithm and scene reduction technology, to describe the influence of wind and solar power on peaking demand. On this basis, a peak-shaving operation optimization model for high-proportion new energy generation is established. The operating indexes after optimization in multi-scene peaking are calculated, and the peaking capability of the grid is compared with that obtained when wind-solar complementation and source-load coupling are considered. Finally, a case with a high proportion of new energy verifies the feasibility and validity of the proposed operation strategy.
Abstract: This study presents a robust design method for autonomous photovoltaic (PV)-wind hybrid power systems that yields an optimum system configuration insensitive to variations in the design variables. The problem is formulated as a constrained multi-objective optimization problem, which is solved by a multi-objective genetic algorithm, NSGA-II. Monte Carlo simulation (MCS), combined with Latin hypercube sampling (LHS), is applied to evaluate the stochastic system performance. The potential of the proposed method is demonstrated by a conceptual system design, and a comparative study against the deterministic method in the literature is conducted. The results indicate that the proposed method finds a large number of Pareto-optimal system configurations with better compromise performance than the deterministic method, and trade-off information may be derived by systematically comparing these configurations. The proposed robust design method should be useful for hybrid power systems that require both optimality and robustness.
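The core operation NSGA-II repeats, extracting the non-dominated front from a set of objective vectors, can be written compactly. The cost/performance numbers below are arbitrary illustrations, not results from the study:

```python
import numpy as np

def first_front(F):
    """Indices of non-dominated points under minimization: the first front
    that NSGA-II's non-dominated sorting would extract."""
    F = np.asarray(F, dtype=float)
    front = []
    for i in range(len(F)):
        # i is dominated if some point is <= in every objective and < in one.
        dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
        if not dominated:
            front.append(i)
    return front

# Hypothetical (annualized cost, loss-of-power probability) pairs for five
# candidate PV-wind configurations.
objs = [[1.0, 9.0], [2.0, 7.0], [3.0, 8.0], [4.0, 3.0], [5.0, 4.0]]
print(first_front(objs))  # → [0, 1, 3]
```

Configurations 2 and 4 are dominated (something else is at least as good on both objectives and strictly better on one); the survivors are exactly the trade-off set from which the abstract's "trade-off information" is read.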
Abstract: In this study, the seismic stability of arch dam abutments is investigated within a probabilistic framework. A large concrete arch dam is considered, with six wedges for each abutment. The seismic safety of the abutments is studied with quasi-static analysis for different hazard levels, using the Londe limit equilibrium method to calculate the stability of the wedges. Since the finite element method is time-consuming, a neural network is used as an alternative for calculating the wedge safety factor; 1000 random samples are generated for training and the dam response is calculated for each. The direction of the applied acceleration is varied in 5-degree intervals to reveal the critical direction corresponding to the minimum safety factor. Latin hypercube sampling (LHS) is employed for sample generation, and the safety level is determined through reliability analysis. Sample sizes of 1000, 2000, and 4000 are used to examine the mean and standard deviation of the results, and global sensitivity analysis is used to identify the effects of the random variables on abutment stability. It is shown that friction, cohesion, and uplift pressure have the most significant effects on the variance of wedge stability.
Funding: This work was financially supported by the National Natural Science Foundation of China (Grant No. 51278217).
Abstract: This paper presents an artificial neural network (ANN)-based response surface method for predicting the failure probability of c-φ slopes with spatially variable soil. In this method, the Latin hypercube sampling technique is adopted to generate input datasets for establishing an ANN model; the random finite element method is then used to calculate the corresponding output datasets, accounting for the spatial variability of the soil properties; finally, an ANN model is trained to construct the response surface of failure probability and obtain an approximate function of the relevant variables. The results of the illustrative example indicate that the proposed method provides credible and accurate estimates of failure probability, so the approximate function can be used as an alternative to the full analysis process in c-φ slope reliability analyses.
Funding: The authors acknowledge the support of the Fundamental Research Funds for the Central Universities (No. 3072021CF0101), the project "Integration Software of Offshore Floating Platform Engineering Design" (No. 2016YFC0302900) from the Ministry of Science and Technology of China, and the Project of Development of Floating Offshore Wind Turbine Risk Assessment Software, funded by the International S&T Cooperation Program of China (No. 2013DFE73060).
Abstract: As the first task of the fire/gas-detection systems of floating production storage and offloading (FPSO) units is to identify leakage accidents, gas detectors play an important role in controlling leakage risk. To improve the leakage scenario detection rate and reduce the cumulative risk value, this paper presents an optimization method for gas detector placement. The probability density distributions and cumulative distributions of the leakage source variables and environmental variables were derived from the Offshore Reliability Data and statistical data for the relevant leakage variables, and a set of potential leakage scenarios was constructed using Latin hypercube sampling. Typical FPSO leakage scenarios were analyzed through computational fluid dynamics (CFD), and the impacts of different parameters on the leakage were addressed. A series of detectors was deployed according to the simulation results. Minimization of the product of effective detection time and gas leakage volume was taken as the risk optimization objective, with the locations and number of detectors as decision variables, and a greedy-extraction heuristic algorithm was used to solve the optimization problem. The results show that the optimized placement monitors the leakage scenarios more effectively.
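The greedy-extraction idea can be sketched as follows. The detection-time matrix, risk weights, and penalty value are all hypothetical; in the study they would come from the CFD runs over the LHS-generated scenario set:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical inputs: t[i, j] = time (s) for candidate detector j to sense
# leakage scenario i (np.inf if never detected), and w[i] = scenario risk
# weight (e.g. scenario probability times leak volume).
n_scen, n_cand = 200, 30
t = rng.exponential(60.0, (n_scen, n_cand))
t[rng.random((n_scen, n_cand)) < 0.6] = np.inf  # most detectors miss most leaks
w = rng.random(n_scen)
PENALTY = 600.0  # charged to scenarios no chosen detector ever sees

def greedy_placement(t, w, k):
    """Greedy-extraction heuristic: repeatedly add the detector giving the
    largest drop in risk-weighted detection time over all scenarios."""
    chosen, best_time = [], np.full(t.shape[0], np.inf)
    for _ in range(k):
        scores = []
        for j in range(t.shape[1]):
            m = np.minimum(best_time, t[:, j])
            scores.append(np.sum(w * np.where(np.isfinite(m), m, PENALTY)))
        j_best = int(np.argmin(scores))
        chosen.append(j_best)
        best_time = np.minimum(best_time, t[:, j_best])
    return chosen

layout = greedy_placement(t, w, 4)
print(layout)  # indices of the four selected detector positions
```

Because the objective (risk-weighted detection time) improves monotonically with each added detector, the greedy loop gives a cheap, near-optimal layout without enumerating all position subsets.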
Abstract: This paper presents a probabilistic analysis of landslides in spatially variable soil deposits, modeled by a stochastic framework that integrates random field theory with the generalized interpolation material point method (GIMP). Random fields are simulated using the Cholesky matrix decomposition (CMD) method and Latin hypercube sampling (LHS), representing the material properties as sets of random soil shear strength variables with prescribed statistics. The approach is applied to landslides in clayey deposits under undrained conditions, with random fields of undrained shear strength, in order to quantify the uncertainties of post-failure behavior at different scales of fluctuation (SOF) and coefficients of variation (COV). The results show that the approach can reliably simulate the whole landslide process and assess the uncertainties of runout motions, and demonstrate that the natural heterogeneity of shear strength notably influences post-failure behavior. Compared with a homogeneous landslide model, which yields conservative results and underestimates the risks, consideration of heterogeneity produces larger landslide influence zones. As the SOF increases, the variance of the influence zone also increases, and higher values of COV increase the mean influence zone, resulting in greater uncertainty in the post-failure behavior.
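The CMD step, turning a prescribed correlation structure into correlated strength values, can be sketched in one dimension. The Markov (exponential) correlation model and the parameter values are typical textbook choices, not the paper's exact inputs:

```python
import numpy as np

rng = np.random.default_rng(8)

# 100 material points along a 50 m profile.
x = np.linspace(0.0, 50.0, 100)
mean_su, cov, sof = 40.0, 0.3, 10.0   # mean su (kPa), COV, SOF (m)

# Markov (exponential) autocorrelation and its Cholesky factor.
C = np.exp(-2.0 * np.abs(x[:, None] - x[None, :]) / sof)
L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))  # tiny jitter for stability

# 500 correlated realizations: su = mean * (1 + cov * correlated standard normals).
Z = rng.standard_normal((x.size, 500))
su = mean_su * (1.0 + cov * (L @ Z))
print(su.shape)  # (100, 500): one column per realization
```

Multiplying independent standard normals by the Cholesky factor L imprints the target correlation, so nearby points share strength while points a few SOFs apart are effectively independent, which is what drives the scale-of-fluctuation effects reported above.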
Abstract: In this paper, we are interested in finding the most sensitive parameters and the local and global stability of an ovarian tumor growth model. For the sensitivity analysis, we use the Latin hypercube sampling (LHS) method to generate sample points and the partial rank correlation coefficient (PRCC) method to determine, from those sample points, which parameters are important for the model; based on our findings, we suggest some treatment strategies. We investigate the sensitivity of the parameters for the tumor volume y, cell nutrient density Q, and maximum tumor size ymax. We also use the scatter plot method with the LHS samples to show the consistency of the results obtained using PRCC. Moreover, we discuss the qualitative behavior of the ovarian tumor growth model by investigating its local and global stability.
Funding: The authors acknowledge the support of the Ministry of Education of the Czech Republic, project KONTAKT No. LH12062, and build on previous achievements of the Technological Agency of the Czech Republic project No. TA01011019.
Abstract: The aim of this paper is to present a newly developed approach for reliability-based design optimization. It is based on a double-loop framework in which the outer loop handles the optimization part of the process and the reliability constraints are calculated in the inner loop. The innovation of the suggested approach lies in its optimization strategy, based on multilevel simulation using an advanced Latin hypercube sampling technique. This method, called Aimed Multilevel Sampling, is designed for optimization problems in which only a limited number of simulations can be performed due to enormous computational demands.
Funding: This work was funded by the Natural Sciences and Engineering Research Council (NSERC) of Canada (No. RGPIN-2014-04100).
Abstract: Sampling design (SD) plays a crucial role in providing reliable input for digital soil mapping (DSM) and increasing its efficiency. Sampling design, with a predetermined sample size and consideration of budget and spatial variability, is a procedure for selecting a set of sample locations spread over a geographical space or with good feature space coverage. Good feature space coverage ensures accurate estimation of regression parameters, while spatial coverage contributes to effective spatial interpolation. First, we review several statistical and geometric SDs that mainly optimize the sampling pattern in geographical space, and illustrate their strengths and weaknesses with regard to spatial coverage, simplicity, accuracy, and efficiency. Latin hypercube sampling, which obtains a full representation of the multivariate distribution in geographical space, is described in detail in terms of its development, improvement, and application. In addition, we discuss fuzzy k-means sampling, response surface sampling, and Kennard-Stone sampling, which optimize sampling patterns in feature space, and then discuss practical applications that are mainly addressed by conditioned Latin hypercube sampling, with its flexibility to add multiple optimization criteria. We also discuss different methods of validation, an important stage of DSM, and conclude that an independent dataset selected by probability sampling is superior because it is free of model assumptions. For future work, we recommend: 1) exploring SDs with both good spatial coverage and good feature space coverage; 2) uncovering the real impact of an SD on the integral DSM procedure; and 3) testing the feasibility and contribution of SDs in three-dimensional (3D) DSM with variability across multiple layers.
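A minimal version of conditioned Latin hypercube sampling, choosing n existing locations whose covariate values best fill n equal-probability strata per covariate, can be written as a random-search loop. This keeps only the core stratification objective (real cLHS implementations add simulated annealing and correlation-matching terms), and the covariate matrix here is synthetic:

```python
import numpy as np

def clhs(X, n, iters=3000, seed=None):
    """Conditioned LHS by random search: pick n rows of the candidate
    covariate matrix X so that each covariate has about one sample per
    equal-probability stratum."""
    rng = np.random.default_rng(seed)
    edges = np.quantile(X, np.linspace(0.0, 1.0, n + 1), axis=0)

    def cost(idx):
        dev = 0
        for j in range(X.shape[1]):
            counts, _ = np.histogram(X[idx, j], bins=edges[:, j])
            dev += np.abs(counts - 1).sum()  # ideal: exactly 1 per stratum
        return dev

    idx = rng.choice(len(X), n, replace=False)
    best = cost(idx)
    for _ in range(iters):
        trial = idx.copy()
        trial[rng.integers(n)] = rng.integers(len(X))  # swap one site
        if len(np.unique(trial)) < n:
            continue
        c = cost(trial)
        if c <= best:
            idx, best = trial, c
    return idx, best

# Synthetic covariates (say, elevation and slope) at 500 candidate grid cells.
rng = np.random.default_rng(4)
X = rng.random((500, 2))
sites, dev = clhs(X, 10, seed=5)
print(len(sites), dev)  # 10 selected cells; dev == 0 means a perfect fill
```

Unlike plain LHS, the search is constrained to locations that actually exist on the landscape, which is exactly the "conditioned" property that makes cLHS attractive for DSM fieldwork.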
Funding: The authors thank the Natural Sciences and Engineering Research Council of Canada (No. RGPIN-2014-04100) for funding this project.
Abstract: Digital soil mapping (DSM) aims to produce detailed maps of soil properties or soil classes to improve agricultural management and soil quality assessment. An optimized sampling design can reduce the substantial costs and effort associated with sampling, profile description, and laboratory analysis. The purpose of this study was to compare common sampling designs for DSM, including grid sampling (GS), grid random sampling (GRS), stratified random sampling (StRS), and conditioned Latin hypercube sampling (cLHS). In an 11-ha agricultural field in Quebec, Canada, a total of 118 unique locations were selected across the four sampling designs (45 locations each), and an additional 30 locations were selected as an independent testing (evaluation) dataset. Soil visible and near-infrared (Vis-NIR) spectra were collected in situ at the 148 locations (to 1 m depth), and soil cores were collected from a subset of 32 locations and subdivided at 10-cm depth intervals, totaling 251 samples. A Cubist model was used to elucidate the relationship between the Vis-NIR spectra and soil properties (soil organic matter (SOM) and clay), and was then used to predict the soil properties at all 148 locations. Digital maps of soil properties at multiple depths for the entire field were prepared from all 148 locations using a quantile random forest model to obtain complete model maps (CM-maps). Soil properties were also mapped using the 45 locations of each sampling design to obtain sampling design maps (SD-maps). The SD-maps were evaluated using the independent testing dataset, the spatial distribution and model uncertainty of each SD-map were compared with those of the corresponding CM-map, and the spatial and feature space coverage were compared across the four sampling designs. The results showed that GS gave the most even spatial coverage, cLHS gave the best coverage of the feature space, and GS and cLHS yielded similar prediction accuracies and spatial distributions of soil properties. The SOM content was underestimated with GRS, with large errors at 0-50 cm depth, because some values were not captured by this design, whereas StRS produced larger errors in the deeper soil layers. Predictions of SOM and clay contents were more accurate for the topsoil (0-30 cm) than for the deep subsoil (60-100 cm). It is concluded that sampling designs with either good spatial coverage or good feature space coverage can provide good accuracy in 3D DSM, but their performance may differ between soil properties.
Funding: This work was supported by the Fundamental Research Funds for the Central Universities, the Shanghai Science and Technology Committee Rising-Star Program (19QC1400500), and the National Natural Science Foundation of China (Grant No. 41877252).
Abstract: This study combined a neural network with Latin hypercube sampling (LHS) to calibrate soil parameters. The model parameters were calibrated by generating different numbers of training samples for pressuremeter tests and excavations. The results showed that with 25 or 50 samples the parameter calibration accuracy was very high; with further increases in sample number the accuracy did not improve significantly but tended to stabilize, so the number of training samples was set at 50 to balance calibration accuracy and efficiency for four parameters. For 25 groups of samples, the calibration results using LHS were better than those using orthogonal sampling. Compared with stochastic optimization algorithms, a neural network combined with LHS significantly reduces the calibration time. The method was applied to an actual foundation pit project in China, and the results showed that the proposed calibration method clearly improved the accuracy of predictions of excavation-induced deformation.
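The surrogate-calibration loop can be sketched end to end on a toy problem. The closed-form "simulator", parameter ranges, and the linear least-squares surrogate (standing in for the paper's neural network) are all hypothetical; the real study runs a finite-element excavation model at each LHS point:

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative "numerical model": two wall deflections as a function of two
# soil parameters (stiffness E in MPa, friction angle phi in degrees).
def simulate(theta):
    E, phi = theta[..., 0], theta[..., 1]
    return np.stack([50.0 / E + 0.10 * phi, 80.0 / E - 0.05 * phi], axis=-1)

# 50 LHS training samples over the parameter ranges, as in the study.
n = 50
u = np.stack([(rng.permutation(n) + rng.random(n)) / n for _ in range(2)], axis=1)
lo, hi = np.array([10.0, 20.0]), np.array([100.0, 40.0])
theta_tr = lo + u * (hi - lo)
y_tr = simulate(theta_tr)

# Surrogate: a linear fit on the basis [1, 1/E, phi] stands in for the NN.
basis = lambda th: np.column_stack([np.ones(len(th)), 1.0 / th[:, 0], th[:, 1]])
coef, *_ = np.linalg.lstsq(basis(theta_tr), y_tr, rcond=None)

# Calibration: grid-search the cheap surrogate against "measured" deflections.
obs = simulate(np.array([40.0, 30.0]))          # synthetic field measurement
Eg, Pg = np.meshgrid(np.linspace(10, 100, 300), np.linspace(20, 40, 300))
cand = np.column_stack([Eg.ravel(), Pg.ravel()])
misfit = np.sum((basis(cand) @ coef - obs) ** 2, axis=1)
print(cand[np.argmin(misfit)])  # recovered parameters, near (40, 30)
```

Once the surrogate is trained on the LHS runs, each calibration query costs a matrix multiply instead of a full numerical simulation, which is the time saving the abstract reports over stochastic optimization on the simulator itself.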
Abstract: In this paper, multi-objective optimization of wavy microchannel heat sinks is performed by combining numerical calculation, a prediction algorithm, and a genetic algorithm. In the numerical calculation, the fluid-solid conjugate heat transfer of heat sinks with different parameters is simulated in Fluent. On this basis, the design variables and objective parameters are used to train a neural network model, which aims to achieve accurate prediction of the objectives. Finally, a multi-objective genetic algorithm is applied, on the foundation of the prediction model, to find the Pareto front for different requirements. The results show that the coefficients of determination of the neural network models are all greater than 0.85, which demonstrates the high accuracy of the prediction model. The Pareto fronts obtained by the non-dominated sorting genetic algorithm (NSGA-II) for different objective parameters reveal that the channel with the optimal performance corresponds to a larger channel width or Reynolds number. In addition, the dimensionless temperature difference is found to be correlated with the Nusselt number.
Funding: This work was supported in part by the Natural Science Foundation of Jiangsu Province, China (No. BK20171433), and in part by the Science and Technology Project of State Grid Jiangsu Electric Power Corporation, China (No. J2018066).
Abstract: Urban electricity and heat networks (UEHN) involve the coupling of and interactions between electric power systems and district heating systems, demonstrating the geographical and functional features of integrated energy systems. UEHN are expected to provide an effective way to accommodate intermittent and unpredictable renewable energy sources, so the application of stochastic optimization approaches to UEHN analysis is highly desirable. In this paper, we propose a chance-constrained coordinated optimization approach for UEHN that considers the uncertainties in electricity loads, heat loads, and photovoltaic outputs, as well as the correlations between these uncertain sources. A solution strategy combining Latin hypercube sampling Monte Carlo simulation (LHS-MCS) and a heuristic algorithm is specifically designed to handle the proposed chance-constrained coordinated optimization. Finally, test results on a UEHN comprising a modified IEEE 33-bus system and a 32-node district heating system at Barry Island verify the feasibility and effectiveness of the proposed framework.