Journal literature: 3 articles found
1. A robust optimization method for label noisy datasets based on adaptive threshold: Adaptive-k
Authors: Enes DEDEOGLU, Himmet Toprak KESGIN, Mehmet Fatih AMASYALI. Frontiers of Computer Science (SCIE, EI, CSCD), 2024, Issue 4, pp. 49-60 (12 pages)
Using all samples in the optimization process does not produce robust results on datasets with label noise, because the gradients computed from the losses of noisy samples push the optimization in the wrong direction. In this paper, we recommend using only the samples whose loss is below a threshold determined during optimization, instead of all samples in the mini-batch. Our proposed method, Adaptive-k, aims to exclude label-noise samples from the optimization process and make the process robust. On noisy datasets, we found that a threshold-based approach such as Adaptive-k produces better results than using all samples or a fixed number of low-loss samples in the mini-batch. On the basis of our theoretical analysis and experimental results, we show that Adaptive-k comes closest to the performance of the Oracle, in which noisy samples are entirely removed from the dataset. Adaptive-k is a simple but effective method: it does not require prior knowledge of the dataset's noise ratio, does not require additional model training, and does not significantly increase training time. In the experiments, we also show that Adaptive-k is compatible with different optimizers such as SGD, SGDM, and Adam. The code for Adaptive-k is available on GitHub.
Keywords: robust optimization; label noise; noisy label; deep learning; noisy datasets; noise ratio estimation; robust training
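The selection rule this abstract describes, computing per-sample losses in each mini-batch and updating only on samples below a threshold, can be illustrated with a short sketch. This is a minimal, hypothetical PyTorch illustration, not the authors' released code: the rule that adapts `threshold` during training is the paper's contribution and is left abstract here, and `model`, `criterion`, and `optimizer` are assumed inputs.

```python
import torch

def thresholded_step(model, criterion, optimizer, x, y, threshold):
    # Per-sample losses; criterion must be built with reduction='none'
    losses = criterion(model(x), y)
    # Keep only low-loss samples: high-loss samples are treated as
    # presumed label noise and excluded from this update
    mask = losses < threshold
    if mask.any():
        optimizer.zero_grad()
        losses[mask].mean().backward()
        optimizer.step()
    return losses.detach()  # could feed a threshold-adaptation rule
```

With the threshold set above the largest loss in the batch this reduces to ordinary mini-batch training; keeping a fixed number of lowest-loss samples instead of a loss cutoff gives the fixed-k baseline the abstract compares against.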
2. Towards robust neural networks via a global and monotonically decreasing robustness training strategy (Cited by 1)
Authors: Zhen LIANG, Taoran WU, Wanwei LIU, Bai XUE, Wenjing YANG, Ji WANG, Zhengbin PANG. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2023, Issue 10, pp. 1375-1389 (15 pages)
The robustness of deep neural networks (DNNs) has caused great concern in the academic and industrial communities, especially in safety-critical domains. Instead of verifying whether the robustness property holds in a given neural network, this paper focuses on training robust neural networks with respect to given perturbations. State-of-the-art training methods, interval bound propagation (IBP) and CROWN-IBP, perform well under small perturbations, but their performance declines significantly under large perturbations, a phenomenon termed "drawdown risk" in this paper. Specifically, drawdown risk refers to the phenomenon that IBP-family training methods cannot provide the expected robust neural networks in larger perturbation cases as they do in smaller perturbation cases. To alleviate this unexpected drawdown risk, we propose a global and monotonically decreasing robustness training strategy that takes multiple perturbations into account during each training epoch (global robustness training) and combines the corresponding robustness losses with monotonically decreasing weights (monotonically decreasing robustness training). Experiments demonstrate that the presented strategy maintains performance under small perturbations while alleviating the drawdown risk under large perturbations to a great extent. It is also noteworthy that our training method achieves higher model accuracy than the original training methods, which means the presented strategy gives more balanced consideration to robustness and accuracy.
Keywords: robust neural networks; training method; drawdown risk; global robustness training; monotonically decreasing robustness
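The two ingredients named in the abstract, evaluating robustness losses at several perturbation radii in each epoch and combining them with monotonically decreasing weights, can be sketched as below. The weighting schedule follows one plausible reading (weights decreasing across radii), and `certified_loss` is an assumed routine standing in for an IBP-style robust loss; neither is the paper's exact formulation.

```python
import torch

def global_robust_loss(certified_loss, model, x, y, epsilons):
    # epsilons assumed sorted in increasing order; weights decrease
    # monotonically so larger perturbation radii contribute less
    raw = torch.tensor([1.0 / (i + 1) for i in range(len(epsilons))])
    weights = raw / raw.sum()
    total = torch.zeros(())
    for w, eps in zip(weights, epsilons):
        # certified_loss(model, x, y, eps) is assumed to return an
        # IBP-style robust loss at perturbation radius eps
        total = total + w * certified_loss(model, x, y, eps)
    return total
```

Training against this combined objective covers the whole range of radii in every epoch, rather than a single fixed radius as in plain IBP training.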
3. Robust energy-efficient train speed profile optimization in a scenario-based position-time-speed network (Cited by 3)
Authors: Yu CHENG, Jiateng YIN, Lixing YANG. Frontiers of Engineering Management, 2021, Issue 4, pp. 595-614 (20 pages)
Train speed profile optimization is an efficient approach to reducing energy consumption in urban rail transit systems. Unlike most existing studies, which assume deterministic parameters as model inputs, this paper proposes a robust energy-efficient train speed profile optimization approach that considers the uncertainty of train modeling parameters. Specifically, we first construct a scenario-based position-time-speed (PTS) network by treating resistance parameters as discrete, scenario-based random variables. Then, a percentile reliability model is proposed to generate a robust train speed profile such that the scenario-based energy consumption stays below the model objective value at a given confidence level. To solve the model efficiently, we present several algorithms that eliminate infeasible nodes and arcs in the PTS network, and we propose a reformulation strategy that transforms the original model into an equivalent linear programming model. Lastly, on the basis of field test data collected on the Beijing metro Yizhuang line, a series of experiments is conducted to verify the effectiveness of the model and to analyze the influence of parameter uncertainties on the generated train speed profile.
Keywords: robust train speed profile; percentile reliability model; scenario-based position-time-speed network; mixed-integer programming
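The percentile reliability criterion in the abstract, an objective value that the scenario-based energy consumption stays below at a given confidence level, amounts to an alpha-quantile over the discrete scenarios. The sketch below illustrates just that quantile computation with hypothetical names and inputs; the paper's actual model is an optimization over the PTS network, reformulated as an equivalent linear program.

```python
import numpy as np

def percentile_objective(energies, probs, alpha=0.95):
    """Smallest z with P(scenario energy <= z) >= alpha."""
    energies = np.asarray(energies, dtype=float)
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(energies)            # sort scenarios by energy
    cum = np.cumsum(probs[order])           # cumulative scenario probability
    idx = int(np.searchsorted(cum, alpha))  # first index reaching alpha
    return energies[order][min(idx, len(energies) - 1)]
```

For example, with scenario energies [10, 12, 15] kWh, probabilities [0.5, 0.4, 0.1], and alpha = 0.9, the objective value is 12 kWh, since the two cheapest scenarios already cover 90% of the probability mass.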