Funding: the authors thank the National Natural Science Foundation of China (Grant No. 61973033) and the Preliminary Research of Equipment (Grant No. 9090102010305) for funding the experiments.
Abstract: The longitudinal dispersion of the projectile in shooting tests of two-dimensional trajectory correction fuses with fixed canards is so large that it sometimes exceeds the correction ability of the correction fuse actuator. The impact point easily deviates from the target, and thus the correction result cannot be readily evaluated. However, the cost of shooting tests is too high to conduct many tests for data collection. To address this issue, this study proposes an aiming method for shooting tests based on a small sample size. The proposed method uses the Bootstrap method to expand the test data; repeatedly iterates and corrects the position of the simulated theoretical impact points through an improved compatibility test method; and dynamically adjusts the weight of the prior distribution of simulation results based on Kullback-Leibler divergence, which to some extent avoids the real data being "submerged" by the simulation data and achieves a fused Bayesian estimation of the dispersion center. The experimental results show that when the simulation accuracy is sufficiently high, the proposed method yields a smaller mean-square deviation in estimating the dispersion center and higher shooting accuracy than the three comparison methods, which better reflects the effect of the control algorithm and helps test personnel iterate their proposed structures and algorithms. In addition, this study provides a knowledge base for further comprehensive studies in the future.
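To make the KL-weighted fusion concrete, here is a minimal Python sketch assuming one-dimensional Gaussian dispersion and a simple down-weighting rule w = 1/(1 + KL); the paper's actual compatibility test and weighting scheme are more elaborate, so all names and values below are illustrative only, not the authors' method.

```python
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """KL divergence D(N(mu0, s0^2) || N(mu1, s1^2)) for 1-D Gaussians."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def fused_center(real_pts, sim_mu, sim_sigma, n_boot=1000, seed=0):
    """Fuse a simulated prior with a sparse real sample (1-D impact coordinate)."""
    rng = np.random.default_rng(seed)
    # Bootstrap-expand the small real sample to estimate its center and spread
    boot = np.array([rng.choice(real_pts, size=len(real_pts), replace=True).mean()
                     for _ in range(n_boot)])
    mu_d, s_d = boot.mean(), boot.std(ddof=1)
    # Down-weight the simulation prior by its divergence from the data
    w = 1.0 / (1.0 + kl_gauss(sim_mu, sim_sigma, mu_d, s_d))
    prior_prec, data_prec = w / sim_sigma**2, 1.0 / s_d**2
    # Conjugate Gaussian update of the dispersion center
    return (prior_prec * sim_mu + data_prec * mu_d) / (prior_prec + data_prec)

# Example: five real impact points, simulation prior centered at 2.0 m
print(fused_center(np.array([1.2, 0.8, 1.5, 0.9, 1.1]), sim_mu=2.0, sim_sigma=0.5))
```

As the KL divergence between prior and data grows, the prior's precision shrinks toward zero, which is one simple way to keep the real data from being "submerged" by the simulation.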
Funding: partially sponsored by the Natural Science Foundation of Shanghai (No. 23ZR1429300) and the Innovation Fund of CNNC (Lingchuang Fund).
Abstract: The estimation of model parameters is an important subject in engineering. In this area of work, the prevailing approach is to estimate or calculate these as deterministic parameters. In this study, we consider the model parameters from the perspective of random variables and describe the general form of the parameter distribution inference problem. Under this framework, we propose an ensemble Bayesian method by introducing Bayesian inference and the Markov chain Monte Carlo (MCMC) method. Experiments on a finite cylindrical reactor and a 2D IAEA benchmark problem show that the proposed method converges quickly and can estimate parameters effectively, even for several correlated parameters simultaneously. Our experiments include cases of engineering software calls, demonstrating that the method can be applied to engineering fields such as nuclear reactor engineering.
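For orientation on the MCMC machinery involved, the following is a minimal random-walk Metropolis sampler in Python, with a toy exponential-decay model standing in for the engineering software call; the model, step size, and all values are assumptions for illustration, not the paper's reactor setup.

```python
import numpy as np

def metropolis(log_post, theta0, n_steps=5000, step=0.1, seed=0):
    """Random-walk Metropolis sampler for a 1-D posterior."""
    rng = np.random.default_rng(seed)
    samples, theta, lp = [], theta0, log_post(theta0)
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept/reject step
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)

# Toy use: infer a decay constant k from noisy measurements y = exp(-k*t) + noise
t = np.linspace(0, 5, 20)
y = np.exp(-0.7 * t) + 0.05 * np.random.default_rng(0).standard_normal(20)
log_post = lambda k: -np.sum((y - np.exp(-k * t))**2) / (2 * 0.05**2)  # flat prior
k_samples = metropolis(log_post, theta0=1.0)
print(k_samples[1000:].mean(), k_samples[1000:].std())  # posterior mean/sd of k
```

In the parameter-as-random-variable view, the returned sample cloud, rather than a single point estimate, is the answer to the inference problem.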
Funding: supported by the National Natural Science Foundation of China (61104182).
Abstract: This study presents a Bayesian methodology for designing step stress accelerated degradation testing (SSADT) and its application to batteries. First, the simulation-based Bayesian design framework for SSADT is presented. Then, by considering historical data, a Kullback-Leibler (KL) divergence criterion oriented toward specific optimal objectives is established. A numerical example is discussed to illustrate the design approach. It is assumed that the degradation model (or process) follows a drift Brownian motion, the acceleration model follows the Arrhenius equation, and the corresponding parameters follow normal and Gamma prior distributions. Using the Markov Chain Monte Carlo (MCMC) method and the WinBUGS software, the comparison shows that KL divergence is better than quadratic loss as an optimality criterion. Further, the effect of simulation outliers on the optimization plan is analyzed and the preferred surface fitting algorithm is chosen. At the end of the paper, a NASA lithium-ion battery dataset is used as historical information, and the KL-divergence-oriented Bayesian design is compared with a maximum-likelihood-oriented locally optimal design. The results show that the proposed method can provide a much better testing plan for this engineering application.
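The KL design criterion can be read as expected information gain, estimated by Monte Carlo over candidate plans. The sketch below simplifies the paper's Arrhenius/normal-Gamma setup to a drift-Brownian degradation with a known exponential acceleration factor and conjugate normal updates; the factor 0.3, the stress levels, and all plan values are placeholders.

```python
import numpy as np

def kl_gauss(mu0, s0, mu1, s1):
    """KL divergence D(N(mu0, s0^2) || N(mu1, s1^2)) for 1-D Gaussians."""
    return np.log(s1 / s0) + (s0**2 + (mu0 - mu1)**2) / (2 * s1**2) - 0.5

def plan_utility(stresses, n_per_step, dt, prior_mu, prior_sd,
                 sigma_b=0.5, n_sim=200, seed=0):
    """Expected information gain KL(posterior || prior) of a candidate SSADT plan."""
    rng = np.random.default_rng(seed)
    gains = []
    for _ in range(n_sim):
        mu_true = rng.normal(prior_mu, prior_sd)   # draw a plausible drift rate
        mu_n, prec_n = prior_mu, 1.0 / prior_sd**2
        for s, n in zip(stresses, n_per_step):
            g = np.exp(0.3 * s)                    # placeholder acceleration factor
            obs_sd = sigma_b / (g * np.sqrt(dt))   # sd of increment / (g * dt)
            y = rng.normal(mu_true, obs_sd, size=n)
            prec_obs = n / obs_sd**2               # conjugate normal update
            mu_n = (prec_n * mu_n + prec_obs * y.mean()) / (prec_n + prec_obs)
            prec_n += prec_obs
        gains.append(kl_gauss(mu_n, 1 / np.sqrt(prec_n), prior_mu, prior_sd))
    return float(np.mean(gains))

# Compare two candidate step-stress plans by expected information gain
print(plan_utility([1, 2, 3], [10, 10, 10], dt=1.0, prior_mu=1.0, prior_sd=0.5))
print(plan_utility([3, 3, 3], [10, 10, 10], dt=1.0, prior_mu=1.0, prior_sd=0.5))
```

Higher utility marks the plan expected to move the posterior further from the prior, which is the sense in which the KL criterion outperforms quadratic loss as a design objective.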
Abstract: In view of the shortcomings of traditional life prediction methods for machine tools, such as low prediction accuracy and few sample attributes, a life prediction model for machine tools that incorporates machine tool attributes is proposed. The model adopts KL divergence distribution theory, uses the modal superposition method to analyze machine tool life, calculates the theoretical life of the machine tool, and then runs simulations to obtain a predicted life value. Compared with traditional methods, the model builds on an applied-life fatigue damage model, superimposes the service times and maintenance cycle of the machine tool, derives a machine tool life influence factor, and obtains a linear relationship between this influence factor and machine tool life. The influence factor is introduced as a life prediction parameter. A data transformation relationship for HT300 parts is constructed, the original part data are augmented, and an effective training set is obtained, completing a deep-learning-based life prediction model that supports quantitative analysis of machine tool life. Experiments on machine tool life prediction using the training data set demonstrate the validity of the model, a regression test on the training set reflects its robustness, and the prediction accuracy is further verified by a Weibull test.
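As a pointer to the Weibull verification step, a minimal SciPy sketch fitting a two-parameter Weibull to predicted lifetimes; the data here are synthetic stand-ins, not the paper's HT300 results.

```python
import numpy as np
from scipy.stats import weibull_min

# Synthetic predicted machine-tool lifetimes (hours); stand-ins for real data
predicted = weibull_min.rvs(2.0, scale=8000, size=50, random_state=0)
# Fit shape and scale with the location fixed at zero (two-parameter Weibull)
shape, loc, scale = weibull_min.fit(predicted, floc=0)
print(f"fitted shape={shape:.2f}, scale={scale:.0f} h")
```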
Funding: the authors are grateful to the National Key R&D Program of China (Grant No. 2022ZD0117301) for funding this study.
Abstract: Currently, a surge in the number of spacecraft and fragments is observed, leading to more frequent breakup events in low Earth orbits (LEOs). The causes of these events are being identified, and specific triggers, such as collisions or explosions, are being examined for their importance to space traffic management. Backward propagation methods were employed to trace the origins of these types of breakup events. Simulations were conducted using the NASA standard breakup model, and the breakup of the satellite Hitomi was analyzed. Kullback-Leibler (KL) divergences, Euclidean 2-norms, and Jensen-Shannon (JS) divergences were computed to deduce potential types of breakups and the associated fragmentation masses. In the simulated case, a discrepancy of 22.12 s between the estimated and actual time was noted. Additionally, the breakup of the Hitomi satellite was estimated to have occurred around UTC 1:49:26.4 on March 26, 2016. This contrasts with the epoch provided by the Joint Space Operations Center, which was estimated to be at 1:42 UTC ± 11 min. The findings suggest that the techniques introduced in this study can be effectively used to trace the origins of short-term breakup events and to deduce the types of collisions and fragmentation masses under certain conditions.
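The divergence comparison can be sketched as follows, with synthetic delta-v samples standing in for the observed and simulated fragment clouds; the bin edges and Rayleigh parameters are assumptions, not values from the study.

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """Discrete KL divergence D(p || q) over shared histogram bins."""
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def js_div(p, q):
    """Jensen-Shannon divergence: symmetric and bounded by log 2."""
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    return 0.5 * kl_div(p, m) + 0.5 * kl_div(q, m)

# Synthetic delta-v samples (m/s) standing in for fragment clouds
rng = np.random.default_rng(0)
observed = rng.rayleigh(40, 500)
hypotheses = {"collision": rng.rayleigh(45, 500), "explosion": rng.rayleigh(80, 500)}

bins = np.linspace(0, 300, 31)
obs_hist, _ = np.histogram(observed, bins=bins)
for name, sample in hypotheses.items():
    h, _ = np.histogram(sample, bins=bins)
    # Lower divergence suggests the better-matching breakup hypothesis
    print(name, js_div(obs_hist.astype(float), h.astype(float)))
```

JS divergence is often preferred for this kind of hypothesis ranking because, unlike KL, it is symmetric and stays finite when one histogram has empty bins.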
Funding: this research is supported by NSF Award #1526383.
Abstract: The sophistication of cyberattacks penetrating enterprise networks has called for predictive defense beyond intrusion detection, where different attack strategies can be analyzed and used to anticipate the next malicious actions, especially unusual ones. Unfortunately, traditional predictive analytics or machine learning techniques that require training data of known attack strategies are not practical, given the scarcity of representative data and the evolving nature of cyberattacks. This paper describes the design and evaluation of a novel automated system, ASSERT, which continuously synthesizes and separates cyberattack behavior models to enable better prediction of future actions. It takes streaming malicious event evidence as input, abstracts it to edge-based behavior aggregates, and associates the edges to attack models, where each model represents a unique and collective attack behavior. It follows a dynamic Bayesian model generation approach to determine when a new attack behavior is present, and creates new attack models by maximizing a cluster validity index. ASSERT generates empirical attack models by separating evidence and uses the generated models to predict unseen future incidents. It continuously evaluates the quality of the model separation and triggers a re-clustering process when needed. Through the use of 2017 National Collegiate Penetration Testing Competition data, this work demonstrates the effectiveness of ASSERT in terms of the quality of the generated empirical models and the predictability of future actions using the models.
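ASSERT's model generation is dynamic Bayesian; as a loose stand-in for its "maximize a cluster validity index" step, the sketch below picks the number of behavior models with k-means and the silhouette score. The feature construction and synthetic data are assumptions, not ASSERT's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def choose_model_count(X, k_max=8):
    """Pick the number of attack-behavior models by maximizing a validity index."""
    best_k, best_score = 2, -1.0
    for k in range(2, min(k_max, len(X) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        score = silhouette_score(X, labels)        # validity index in [-1, 1]
        if score > best_score:
            best_k, best_score = k, score
    return best_k, best_score

# Synthetic edge-based behavior aggregates: three latent behaviors in 4-D
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(c, 0.3, size=(40, 4)) for c in (0.0, 2.0, 4.0)])
print(choose_model_count(X))   # expect k = 3 for three distinct behaviors
```

In a streaming setting, re-running this selection on each new batch of aggregates and finding that a larger k now scores higher is one signal that a previously unseen attack behavior has emerged.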
Funding: supported in part by the National Key R&D Program of China under Grant 2021YFE0203700, Grant NSFC/RGC N_CUHK415/19, Grant ITF MHP/038/20, Grants RGC 14300219, 14302920, and 14301121, and by the CUHK Direct Grant for Research under Grants 4053405 and 4053460.
Abstract: Non-blind deblurring is crucial in image restoration. While most previous works assume that the exact blurring kernel is known, this is often not the case in practice. The blurring kernel is most likely estimated by a blind deblurring method and is not error-free. In this work, we incorporate a kernel error term into an advanced non-blind deblurring method to recover a clear image with an inaccurately estimated kernel. Based on the celebrated principle of Maximum Entropy on the Mean (MEM), regularization at the level of the probability distribution of images is carefully combined with the classical total variation regularizer at the level of the image/kernel. Extensive experiments clearly show the effectiveness of the proposed method in the presence of kernel error. As a traditional method, the proposed method even outperforms some state-of-the-art deep-learning-based methods. We also demonstrate the potential of combining the MEM framework with classical regularization approaches in image deblurring, which is inspiring for other related works.
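For orientation, a bare-bones frequency-domain deconvolution baseline in Python; the paper's MEM plus total-variation formulation is considerably more robust to kernel error, and nothing below reproduces it. The kernel, image, and regularization weight are all placeholders.

```python
import numpy as np

def tikhonov_deblur(blurred, kernel, reg=1e-3):
    """Frequency-domain deconvolution with Tikhonov (ridge) regularization.
    A simple non-blind baseline, not the paper's MEM + total-variation model."""
    H = np.fft.fft2(kernel, s=blurred.shape)       # kernel transfer function
    B = np.fft.fft2(blurred)
    X = np.conj(H) * B / (np.abs(H)**2 + reg)      # regularized inverse filter
    return np.real(np.fft.ifft2(X))

# Toy check: blur a synthetic image with a 5x5 box kernel, then invert
rng = np.random.default_rng(0)
img = rng.random((64, 64))
k = np.ones((5, 5)) / 25.0                         # corner-anchored when padded
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k, s=img.shape)))
print(np.abs(tikhonov_deblur(blurred, k, reg=1e-4) - img).mean())
```

A baseline like this degrades quickly when the kernel passed to the solver differs from the one that produced the blur, which is exactly the regime the kernel error term in the paper is designed to handle.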