Funding: Project (2015CX005) supported by the Innovation Driven Plan of Central South University, China; also supported by the Sheng Hua Lie Ying Program of Central South University, China.
Abstract: The database of 254 rockburst events was examined for rockburst damage classification using stochastic gradient boosting (SGB) methods. Five potentially relevant indicators were analyzed: the stress condition factor, the ground support system capacity, the excavation span, the geological structure, and the peak particle velocity of the rockburst sites. The performance of the model was evaluated using a 10-fold cross-validation (CV) procedure on 80% of the original data during modeling, and an external testing set (the remaining 20%) was employed to validate the prediction performance of the SGB model. Two accuracy measures for multi-class problems were employed: the classification accuracy rate and Cohen's Kappa. The accuracy analysis together with the Kappa statistic for the rockburst damage dataset shows that the SGB model yields acceptable predictions of rockburst damage.
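The two accuracy measures named above, classification accuracy rate and Cohen's Kappa, can be computed directly from true and predicted labels. A minimal pure-Python sketch (the function names and toy labels are illustrative, not from the paper):

```python
from collections import Counter

def accuracy(y_true, y_pred):
    # Fraction of exactly matching labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def cohens_kappa(y_true, y_pred):
    # Kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    # and p_e is the agreement expected by chance from the class marginals.
    n = len(y_true)
    p_o = accuracy(y_true, y_pred)
    true_counts = Counter(y_true)
    pred_counts = Counter(y_pred)
    p_e = sum(true_counts[c] * pred_counts.get(c, 0) for c in true_counts) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy 4-class labels standing in for rockburst damage categories.
y_true = [0, 0, 1, 1, 2, 2, 3, 3]
y_pred = [0, 0, 1, 0, 2, 2, 3, 1]
print(accuracy(y_true, y_pred))                 # 0.75
print(round(cohens_kappa(y_true, y_pred), 4))   # 0.6667
```

Kappa corrects the raw accuracy for chance agreement, which matters for imbalanced multi-class damage categories.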
Funding: Partially supported by the National Natural Science Foundation of China (No. 41230318).
Abstract: With the growth of computational power, there has been increased focus on data-fitting seismic inversion techniques that deliver high-fidelity seismic velocity models and images, such as full-waveform inversion and least-squares migration. However, though more advanced than conventional methods, these data-fitting methods can be very expensive in terms of computational cost. Recently, various techniques for optimizing these data-fitting seismic inversion problems have been implemented to meet the industrial need for much-improved efficiency. In this study, we propose a general stochastic conjugate gradient method for these data-fitting inverse problems. We first present the basic theory of the method and then give synthetic examples. Our numerical experiments illustrate the potential of this method for large-scale seismic inversion applications.
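As a hedged illustration of the stochastic conjugate gradient idea, the sketch below applies it to a small linear least-squares problem standing in for the seismic data-fitting objective. The Fletcher-Reeves beta, the periodic restarts, and the per-batch exact line search are all illustrative choices, not details taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 8
A = rng.standard_normal((m, n))   # stand-in for the forward modeling operator
x_true = rng.standard_normal(n)
b = A @ x_true                    # noise-free synthetic data for simplicity

def full_loss(x):
    r = A @ x - b
    return 0.5 * r @ r

x = np.zeros(n)
d = np.zeros(n)
g_prev_sq = None
loss0 = full_loss(x)
for k in range(200):
    idx = rng.choice(m, size=15, replace=False)   # random mini-batch of data rows
    Ai, bi = A[idx], b[idx]
    g = Ai.T @ (Ai @ x - bi)                      # stochastic gradient on the batch
    g_sq = g @ g
    # Fletcher-Reeves beta, reset every 8 iterations to curb noise build-up.
    beta = 0.0 if (g_prev_sq is None or k % 8 == 0) else g_sq / g_prev_sq
    d = -g + beta * d
    Ad = Ai @ d
    denom = Ad @ Ad
    if denom > 0:
        alpha = -(g @ d) / denom                  # exact line search on the batch objective
        x = x + alpha * d
    g_prev_sq = g_sq
print(full_loss(x) < loss0)   # the data fit improves
```

Each iteration touches only a subset of the data, which is the source of the cost saving over full-batch conjugate gradients on large seismic volumes.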
Funding: Supported by the National Natural Science Foundation of China (Nos. 11871135 and 11801054) and the Fundamental Research Funds for the Central Universities (No. DUT19K46).
Abstract: In this paper, we establish a unified framework to study the almost sure global convergence and the expected convergence rates of a class of mini-batch stochastic (projected) gradient (SG) methods, including two popular types of SG: SG with diminishing stepsize and SG with increasing batch size. We also show that the standard uniformly bounded variance assumption, which is frequently used in the literature to investigate the convergence of SG, is actually not required when the gradient of the objective function is Lipschitz continuous. Finally, we show that our framework can also be used to analyze the convergence of a mini-batch stochastic extragradient method for stochastic variational inequalities.
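One of the two regimes, diminishing-stepsize SG with a fixed batch size, can be sketched on a toy least-squares problem; the problem and all constants below are illustrative:

```python
import random

random.seed(1)
data = [random.gauss(3.0, 1.0) for _ in range(1000)]
true_mean = sum(data) / len(data)

# Minimize f(x) = (1/N) * sum_i (x - d_i)^2, whose minimizer is the data mean.
x = 0.0
for k in range(2000):
    batch = random.sample(data, 8)       # fixed mini-batch size
    m = sum(batch) / len(batch)
    grad = 2.0 * (x - m)                 # stochastic gradient on the batch
    alpha = 0.5 / (k + 1)                # diminishing stepsize ~ 1/k
    x -= alpha * grad
print(abs(x - true_mean) < 0.1)          # True: the iterates approach the minimizer
```

With this particular stepsize, the iterate is exactly a running average of the batch means, which is why the 1/k schedule suppresses the gradient noise without any bounded-variance assumption on this Lipschitz-gradient objective.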
Funding: Partially supported by DOE grant DE-SC0022253; the work of JL was partially supported by NSF grants DMS-1719851 and DMS-2011148.
Abstract: In this work, we develop a stochastic gradient descent method for the computational optimal design of random rough surfaces in thin-film solar cells. We formulate the design problems as random PDE-constrained optimization problems and seek the optimal statistical parameters for the random surfaces. Optimization at a fixed frequency as well as at multiple frequencies and multiple incident angles is investigated. To evaluate the gradient of the objective function, we derive the shape derivatives for the interfaces and apply the adjoint state method to perform the computation. The stochastic gradient descent method evaluates the gradient of the objective function at only a few samples per iteration, which reduces the computational cost significantly. Various numerical experiments are conducted to illustrate the efficiency of the method and the significant increase in absorptance for the optimal random structures. We also examine the convergence of the stochastic gradient descent algorithm theoretically and prove that the numerical method is convergent under certain assumptions on the random interfaces.
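The central computational saving, estimating the gradient of an expected objective from only a few random samples per iteration, can be sketched with a toy random "surface" of height h(w) = mu + sigma*w. Minimizing E[(h - t)^2] = (mu - t)^2 + sigma^2 over the statistical parameters drives mu to the target t and sigma to 0; this stands in for, but is not, the paper's PDE-constrained absorptance objective:

```python
import random

random.seed(0)
t = 1.0                        # target height (stand-in for the design goal)
mu, sigma = 0.0, 1.0           # statistical parameters being optimized
for k in range(3000):
    # Gradient of E[(mu + sigma*w - t)^2] estimated from only 4 samples,
    # instead of a full Monte Carlo average over the random surface.
    g_mu = g_sigma = 0.0
    for _ in range(4):
        w = random.gauss(0.0, 1.0)
        r = mu + sigma * w - t
        g_mu += 2.0 * r            # d/dmu of r^2
        g_sigma += 2.0 * r * w     # d/dsigma of r^2
    g_mu /= 4.0
    g_sigma /= 4.0
    alpha = 1.0 / (k + 20)         # diminishing stepsize
    mu -= alpha * g_mu
    sigma -= alpha * g_sigma
print(abs(mu - t) < 0.1 and abs(sigma) < 0.1)
```

In the paper's setting each sample would require a full PDE solve plus an adjoint solve, so cutting the per-iteration sample count is where the cost reduction comes from.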
Funding: Funded by Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2022R79), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Deep learning seeks parameters that minimize the cost function derived from a dataset; in neural networks these are known as the optimal parameters. Optimization begins by initializing the parameters, and at the global minimum the parameters should no longer vary. The momentum technique is a common parameter-optimization approach; however, it has difficulty stopping the parameter updates once the cost function value reaches the global minimum (the non-stop problem). Moreover, existing learning-rate schedules reduce the learning rate monotonically at a steady rate over the iterations; our goal instead is to make the learning rate respond to the parameters. We present a method that adjusts the learning rate according to the current cost function value, so that the schedule completes once the cost function has been optimized. This approach is shown to ensure convergence to the optimal parameters, meaning that our strategy minimizes the cost function (effective learning). The proposed method builds on the momentum approach; to resolve its non-stop problem, we incorporate the cost function value into the update, so that the impact of the cost function shrinks the parameter updates as the minimum is approached. We verify the method with a proof of convergence and with empirical tests against current methods, and the results are obtained using Python.
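The core idea, scaling the learning rate by the current cost value so that updates vanish as the cost approaches the global minimum (resolving the non-stop problem of momentum), can be sketched on a one-dimensional cost f(x) = x^2. All constants are illustrative, and the sketch assumes the minimum cost is 0:

```python
def f(x):
    return x * x          # toy cost with global minimum f(0) = 0

def grad_f(x):
    return 2.0 * x

x, v = 2.0, 0.0
gamma = 0.5               # momentum coefficient
alpha0 = 0.05             # base learning rate
for _ in range(200):
    # Effective learning rate shrinks with the cost value, so the
    # updates die out once the cost nears the global minimum.
    alpha = alpha0 * f(x)
    v = gamma * v - alpha * grad_f(x)
    x = x + v
print(f(x) < 1e-4)        # the cost is driven close to its minimum
```

Plain momentum with a fixed alpha keeps oscillating around the minimizer; coupling alpha to f(x) makes the steps self-terminating without a hand-tuned decay schedule.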