Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches that often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, resulting in high-sensitivity capacitance output. Through wireless transmission from a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only a few labeled samples are sufficient for fine-tuning the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% in different scenarios, including the prediction of eight-direction commands and air-writing of all numbers and letters. The proposed method facilitates smooth transitions between multiple tasks without the need to modify the structure or undergo extensive task-specific training. Its utility has been further extended to enhance human–machine interaction on digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communicating.
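The self-supervised pretraining objective described above can be sketched as an InfoNCE-style contrastive loss: two augmented views of the same unlabeled signal segment are pulled together while other segments are pushed apart. The four-dimensional feature vectors and temperature value below are illustrative assumptions, not the paper's actual features or hyperparameters.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    # InfoNCE: maximize agreement with the positive view relative to negatives.
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# Two views of the same (hypothetical) wrist-motion segment vs. an unrelated one.
view_a = [0.9, 0.1, 0.0, 0.2]
view_b = [0.8, 0.2, 0.1, 0.1]   # augmented copy of view_a
other  = [0.0, 0.1, 0.9, 0.8]   # a different gesture segment

loss_aligned = contrastive_loss(view_a, view_b, [other])
loss_shuffled = contrastive_loss(view_a, other, [view_b])
```

After such pretraining on unlabeled wrist movements, a small labeled set suffices to fine-tune a classification head, which is the few-shot adaptation step the abstract describes.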
Reinforcement learning (RL) has roots in dynamic programming and is called adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP and RL and their applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Some effective offline and online algorithms for ADP/adaptive critic control are presented, and the main results for discrete-time and continuous-time systems are surveyed, respectively. Then, the research progress on adaptive critic control based on the event-triggered framework and under uncertain environments is discussed, covering event-based design, robust stabilization, and game design. Moreover, extensions of ADP for addressing control problems under complex environments have attracted enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance ADP formulations. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era, and they also play a vital role in promoting environmental protection and industrial intelligence.
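The dynamic-programming core that ADP approximates can be illustrated with value iteration on a toy discrete-time regulation problem; the states, cost function, and discount factor below are illustrative assumptions, not from the survey itself.

```python
# Toy regulation problem: drive the state toward 0 at minimum discounted cost.
states = [0, 1, 2]                          # 0 is the regulated (goal) state
actions = {0: [0], 1: [0, 1], 2: [1, 2]}    # an action names the next state
cost = lambda s, a: 0 if s == 0 else 1 + abs(s - a)
gamma = 0.9                                 # discount factor

# Bellman (value) iteration until convergence.
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: min(cost(s, a) + gamma * V[a] for a in actions[s]) for s in states}

# Greedy policy extracted from the converged value function.
policy = {s: min(actions[s], key=lambda a: cost(s, a) + gamma * V[a])
          for s in states}
```

ADP/adaptive critic methods replace the exact table `V` with a trained critic approximator so that the same Bellman recursion scales to continuous state spaces.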
Unsupervised learning methods such as graph contrastive learning have been used for dynamic graph representation learning to eliminate the dependence on labels. However, existing studies neglect positional information when learning discrete snapshots, resulting in insufficient network topology learning. At the same time, due to the lack of appropriate data augmentation methods, it is difficult to capture the evolving patterns of the network effectively. To address these problems, a position-aware and subgraph-enhanced dynamic graph contrastive learning method is proposed for discrete-time dynamic graphs. Firstly, a global snapshot is built from the historical snapshots to express the stable pattern of the dynamic graph, and random walks are used to obtain a position representation by learning the positional information of the nodes. Secondly, a new data augmentation method is designed from the perspectives of short-term changes and long-term stable structures of dynamic graphs. Specifically, subgraph sampling based on snapshots and global snapshots is used to obtain two structural augmentation views, and node structures and evolving patterns are learned by combining a graph neural network, a gated recurrent unit, and an attention mechanism. Finally, the quality of the node representation is improved by combining contrastive learning between the different structural augmentation views and between the structural and positional representations. Experimental results on four real datasets show that the proposed method outperforms existing unsupervised methods and is more competitive than supervised learning methods under a semi-supervised setting.
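The random-walk positional encoding can be sketched as follows: short walks are launched from each node, and the normalized visit frequencies of the other nodes serve as that node's position feature. The five-node toy graph, walk count, and walk length are illustrative assumptions.

```python
import random

random.seed(0)
# Adjacency list of a small undirected path-like graph.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}

def position_feature(start, walks=200, length=4):
    # Run many short random walks from `start`; visit frequencies encode
    # how "close" every other node is, i.e. the node's position in the graph.
    visits = [0] * len(adj)
    for _ in range(walks):
        node = start
        for _ in range(length):
            node = random.choice(adj[node])
            visits[node] += 1
    total = walks * length
    return [v / total for v in visits]

feat0 = position_feature(0)   # node 0 sits in the dense cluster {0, 1, 2}
feat4 = position_feature(4)   # node 4 is a leaf attached to node 3
```

Nodes in the same region of the graph end up with similar visit-frequency vectors, which is the property the contrastive objective between structural and positional views exploits.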
As the simplest hydrogen-bonded alcohol, liquid methanol has attracted intensive experimental and theoretical interest. However, theoretical investigations of this system have primarily relied on empirical intermolecular force fields or ab initio molecular dynamics with semilocal density functionals. Inspired by recent studies on bulk water using increasingly accurate machine learning force fields, we report a new machine learning force field for liquid methanol based on the hybrid functional revPBE0 plus dispersion correction. Molecular dynamics simulations with this machine learning force field are orders of magnitude faster than ab initio molecular dynamics simulations, yielding radial distribution functions, self-diffusion coefficients, and hydrogen bond network properties with very small statistical errors. The resulting structural and dynamical properties compare well with experimental data, demonstrating the superior accuracy of this machine learning force field. This work represents a successful step toward a first-principles description of this benchmark system and showcases the general applicability of machine learning force fields to liquid systems.
We present a large deviation theory that characterizes the exponential estimate for rare events in stochastic dynamical systems in the limit of weak noise. We consider a next-to-leading-order approximation for a more accurate calculation of the mean exit time by computing large deviation prefactors with the aid of machine learning. More specifically, we design a neural network framework to compute the quasipotential, most probable paths, and prefactors based on the orthogonal decomposition of a vector field. We corroborate the effectiveness and accuracy of our algorithm with two toy models. Numerical experiments demonstrate its powerful functionality in exploring the internal mechanism of rare events triggered by weak random fluctuations.
The shear deformation mechanisms of diamond-like carbon (DLC) remain unclear because its thickness of several micrometers limits detailed analysis of its microstructural evolution and mechanical performance, which in turn hinders improvement of the friction and wear performance of DLC. This study investigates this issue using molecular dynamics simulation and machine learning (ML) techniques. It is shown that the changes in the mechanical properties of DLC are mainly due to the expansion and reduction of sp3 networks, causing the stick-slip patterns in the shear force. In addition, cluster analysis shows that sp2-sp3 transitions arise in the stick stage, while sp3-sp2 transitions occur in the slip stage. To analyze the mechanisms governing bond breaking/re-formation in these transitions, a Random Forest (RF) model identifies the kinetic energies of sp3 atoms and their velocities along the loading direction as the most influential factors. This is because high atomic kinetic energies can exacerbate the instability of the bonding state and increase the probability of bond breaking/re-formation. Finally, the RF model finds that the shear force of DLC is highly correlated with its potential energy, with less correlation to its sp3 atom content. Since the changes in potential energy are caused by variations in sp3 content and localized strains, potential energy is an ideal parameter for evaluating the shear deformation of DLC. The results enhance the understanding of the shear deformation of DLC and support the improvement of its friction and wear performance.
Discrete dislocation dynamics (DDD) simulations reveal the evolution of dislocation structures and the interaction of dislocations. This study investigated the compression behavior of single-crystal copper micropillars using few-shot machine learning with data provided by DDD simulations. Two types of features are considered: external features, comprising specimen size and loading orientation, and internal features, involving dislocation source length, Schmid factor, the orientation of the most easily activated dislocations, and their distance from the free boundary. The yield stress and stress-strain curves of single-crystal copper micropillars are predicted well by incorporating both external and internal features of the sample as separate or combined inputs. It is found that the prediction accuracy for single-crystal micropillar compression can be improved by combining easily activated dislocation features with external features. However, the effect of easily activated dislocations on yielding is less important than the effects of specimen size and Schmid factor (which encodes orientation information), though it becomes more evident in small micropillars. Overall, incorporating internal features, especially information on the most easily activated dislocations, improves predictive capability across diverse sample sizes and orientations.
With the maturing of the 5G field, Mobile Edge CrowdSensing (MECS), as an intelligent data collection paradigm, offers broad prospects for various IoT applications. However, sensing users, as data uploaders, lack a balance between data benefits and privacy threats, leading either to conservative data uploads with low revenue or to excessive uploads and privacy breaches. To solve this problem, a Dynamic Privacy Measurement and Protection (DPMP) framework is proposed based on differential privacy and reinforcement learning. Firstly, a DPM model is designed to quantify the amount of data privacy, and a method to calculate a personalized privacy threshold for each user is also designed. Furthermore, a Dynamic Private sensing data Selection (DPS) algorithm is proposed to help sensing users maximize data benefits within their privacy thresholds. Finally, theoretical analysis and extensive experimental results show that the DPMP framework effectively and efficiently balances data benefits and sensing-user privacy protection; in particular, it achieves 63% higher training efficiency and 23% higher data benefits compared to the Monte Carlo algorithm.
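The differential-privacy building block underlying such privacy quantification can be sketched with the classic Laplace mechanism: noise scaled to sensitivity/epsilon is added to a query answer, so a smaller epsilon (stronger privacy) yields noisier data. The count query, sensitivity, and epsilon values below are illustrative assumptions, not the DPMP framework's actual mechanism.

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(true_count, epsilon, sensitivity=1.0, rng=random):
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    return true_count + laplace_noise(sensitivity / epsilon, rng)

rng = random.Random(42)
answers = [private_count(100, epsilon=1.0, rng=rng) for _ in range(2000)]
mean = sum(answers) / len(answers)   # unbiased: centered on the true count
```

Individual answers are perturbed, but the mechanism is unbiased, so aggregate utility is preserved, which is the data-benefit side of the privacy/benefit trade-off the abstract describes.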
Conventional wing aerodynamic optimization processes can be time-consuming and imprecise due to the complexity of versatile flight missions. Plenty of existing literature has considered two-dimensional infinite airfoil optimization, while three-dimensional finite wing optimization has received limited study because of its high computational cost. Here we create an adaptive optimization methodology built upon digitized wing shape deformation and deep learning algorithms, which enables the rapid formulation of finite wing designs for specific aerodynamic performance demands under different cruise conditions. This methodology unfolds in three stages: radial basis function interpolated wing generation, collection of inputs from computational fluid dynamics simulations, and a deep neural network that constructs the surrogate model for the optimal wing configuration. It has been demonstrated that the proposed methodology can significantly reduce the computational cost of numerical simulations. It also has the potential to optimize various aerial vehicles undergoing different mission environments, loading conditions, and safety requirements.
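The radial-basis-function interpolation step can be sketched in one dimension: the deformation field is a weighted sum of Gaussian kernels centred on control points, with weights chosen so the prescribed displacements are matched exactly. The 1-D control points, values, and kernel width are illustrative assumptions, not the paper's wing parameterization.

```python
import math

def gaussian_rbf(r, eps=1.0):
    # Gaussian radial basis kernel.
    return math.exp(-(eps * r) ** 2)

def solve(A, rhs):
    # Plain Gaussian elimination with partial pivoting for the small system.
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

centers = [0.0, 0.5, 1.0]   # hypothetical control points along a section
values = [0.0, 0.1, 0.0]    # prescribed deformations at those points
A = [[gaussian_rbf(abs(a - c)) for c in centers] for a in centers]
w = solve(A, values)        # interpolation weights

def deform(x):
    # Smooth deformation everywhere, exact at the control points.
    return sum(wi * gaussian_rbf(abs(x - c)) for wi, c in zip(w, centers))
```

A handful of control-point displacements thus parameterizes a smooth shape change, which keeps the design space small enough for a surrogate neural network to learn.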
The pipeline isolation plugging robot (PIPR) is an important tool in pipeline maintenance operations. During the plugging process, violent vibration induced by the flow field can cause serious damage to the pipeline and the PIPR. In this paper, we propose a dynamic regulating strategy to reduce the plugging-induced vibration by regulating the spoiler angle and plugging velocity. Firstly, dynamic plugging simulations and experiments are performed to study the flow field changes during dynamic plugging, and the pressure difference is proposed to evaluate the degree of flow field vibration. Secondly, mathematical models of the pressure difference with respect to plugging states and spoiler angles are established based on the extreme learning machine (ELM) optimized by an improved sparrow search algorithm (ISSA). Finally, a modified Q-learning algorithm based on simulated annealing is applied to determine the optimal strategy for the spoiler angle and plugging velocity in real time. The results show that the proposed method reduces the plugging-induced vibration by 19.9% and 32.7% on average compared with single-regulating methods. This study can effectively ensure the stability of the plugging process.
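The Q-learning-with-annealing idea can be sketched on a toy problem: an annealed (temperature-decaying) exploration schedule makes random actions rarer as learning progresses. The two-state "plugging" environment, rewards, and schedule below are illustrative stand-ins, not the paper's actual model.

```python
import math
import random

random.seed(1)
actions = [0, 1]            # e.g. decrease / increase the spoiler angle
Q = {(s, a): 0.0 for s in (0, 1) for a in actions}

def step(state, action):
    # Hypothetical dynamics: action 1 in state 0 reaches the
    # low-vibration state and earns a reward.
    if state == 0 and action == 1:
        return 1, 1.0
    return 0, 0.0

alpha, gamma = 0.5, 0.9
state = 0
for episode in range(500):
    # Simulated-annealing-style exploration: probability decays with episode.
    if random.random() < math.exp(-episode / 100):
        action = random.choice(actions)          # explore
    else:
        action = max(actions, key=lambda a: Q[(state, a)])  # exploit
    nxt, reward = step(state, action)
    best_next = max(Q[(nxt, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
    state = 0 if nxt == 1 else nxt               # reset after reaching the goal
```

The learned Q-table ends up preferring the vibration-reducing action, which mirrors how the real-time regulating strategy is selected online.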
Organizations are adopting the Bring Your Own Device (BYOD) concept to enhance productivity and reduce expenses. However, this trend introduces security challenges, such as unauthorized access. Traditional access control systems, such as Attribute-Based Access Control (ABAC) and Role-Based Access Control (RBAC), are limited in their ability to enforce access decisions due to the variability and dynamism of attributes related to users and resources. This paper proposes an adaptable and dynamic method for enforcing access decisions based on multilayer hybrid deep learning techniques, particularly the Tabular Deep Neural Network (Tabular DNN) method. This technique transforms all input attributes in an access request into a binary classification (allow or deny) using multiple layers, ensuring accurate and efficient access decision-making. The proposed solution was evaluated on the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94% accuracy rate. Additionally, the proposed solution enhances the enforcement of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point (PAP). This solution significantly improves the flexibility of access control systems, making them more dynamic and adaptable to the evolving needs of modern organizations. Furthermore, it offers a scalable approach to managing the complexities of the BYOD environment, providing a robust framework for secure and efficient access management.
Traditional optimal scheduling methods are limited by their need for accurate physical models and parameter settings, which makes it difficult to adapt to the uncertainty of source and load, and they cannot make dynamic decisions continuously. This paper proposes a dynamic economic scheduling method for distribution networks based on deep reinforcement learning. Firstly, the economic scheduling model of the new energy distribution network is established considering the action characteristics of micro gas turbines, the dynamic scheduling model based on deep reinforcement learning is constructed for the new energy distribution network system with a high proportion of new energy, and the Markov decision process of the model is defined. Secondly, to handle the changing characteristics of source-load uncertainty, agents are trained interactively with the distribution network in a data-driven manner. Then, through the proximal policy optimization algorithm, the agents adaptively learn the scheduling strategy and realize dynamic scheduling decisions for the new energy distribution network system. Finally, the feasibility and superiority of the proposed method are verified on an improved IEEE 33-node simulation system.
The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. A comparative analysis against baseline models indicates that DMST-GNODE delivers superior performance across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend at 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this model's advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, consistently maintaining superiority across all time intervals. These results indicate that DMST-GNODE not only outperforms baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
Prediction of stability in the SG (Smart Grid) is essential for maintaining the consistency and reliability of the power supply in grid infrastructure. Analyzing the fluctuations in power generation and consumption patterns of smart cities assists in effectively managing a continuous power supply in the grid. It also helps avert overloading and permits effective energy storage. Even though many traditional techniques have predicted consumption rates to preserve stability, prediction measures with minimized loss still require enhancement. To overcome the complications in existing studies, this paper predicts stability from the smart grid stability prediction dataset using machine learning algorithms. To accomplish this, pre-processing is performed initially to handle missing values, since mishandling them produces biased models, and feature scaling is performed to normalize independent data features. Then, the pre-processed data are used for training and testing. Following that, regression is performed using a Modified PSO (Particle Swarm Optimization)-optimized XGBoost technique with a dynamic inertia weight update, which analyses variables such as gamma (G), reaction time (tau1-tau4), and power balance (p1-p4) to provide effective future stability in the SG. Since PSO attains the optimal solution by adjusting particle positions through dynamic inertia weights, it is integrated with XGBoost for its scalability and fast computational speed. The hyperparameters of XGBoost are fine-tuned during training to achieve promising prediction outcomes. Regression results are measured through evaluation metrics: an MSE (Mean Square Error) of 0.011312781, an MAE (Mean Absolute Error) of 0.008596322, an RMSE (Root Mean Square Error) of 0.010636156, and a MAPE (Mean Absolute Percentage Error) of 0.0052, which determine the efficacy of the system.
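PSO with a dynamic (linearly decaying) inertia weight, the modification used above to tune XGBoost's hyperparameters, can be sketched on a toy objective; the 1-D quadratic, swarm size, and coefficient values below are illustrative assumptions, not the paper's actual search space.

```python
import random

random.seed(7)

def objective(x):
    # Stand-in for the cross-validated XGBoost loss being minimised.
    return (x - 3.0) ** 2

n_particles, iters = 10, 60
w_max, w_min, c1, c2 = 0.9, 0.4, 1.5, 1.5   # inertia range, cognitive/social gains
pos = [random.uniform(-10, 10) for _ in range(n_particles)]
vel = [0.0] * n_particles
pbest = pos[:]                              # each particle's best position
gbest = min(pos, key=objective)             # swarm-wide best position

for t in range(iters):
    # Dynamic inertia weight: decays from w_max to w_min over the run,
    # shifting the swarm from exploration to exploitation.
    w = w_max - (w_max - w_min) * t / (iters - 1)
    for i in range(n_particles):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i]
                  + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
        if objective(pos[i]) < objective(gbest):
            gbest = pos[i]
```

In the hyperparameter-tuning setting, each particle position would encode a candidate XGBoost configuration and `objective` would be a validation-set error.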
In this paper, the stability of iterative learning control (ILC) with data dropouts is discussed. Through the super-vector formulation, an ILC system with data dropouts can be modeled as an asynchronous dynamical system with rate constraints on events in the iteration domain. The stability condition is provided in the form of linear matrix inequalities (LMIs), drawing on the stability theory of asynchronous dynamical systems. The analysis is supported by simulations.
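The basic iteration-domain contraction that such stability conditions guarantee can be sketched with a nominal P-type ILC update, u_{k+1} = u_k + L*e_k, on a scalar plant; the plant gain, learning gain, and reference are illustrative assumptions, and the dropout modeling of the paper is omitted.

```python
# P-type ILC on a static SISO plant y = G*u, repeated over trials.
# With |1 - G*L| < 1 the tracking error contracts every iteration.
N = 5                                   # trial length (samples per trial)
plant_gain = 2.0                        # hypothetical plant gain G
L = 0.4                                 # learning gain; |1 - 2.0*0.4| = 0.2 < 1
reference = [1.0, 2.0, 3.0, 2.0, 1.0]   # desired output trajectory

u = [0.0] * N
errors = []
for k in range(20):                     # iterations (trials)
    y = [plant_gain * ui for ui in u]               # run the trial
    e = [r - yi for r, yi in zip(reference, y)]     # tracking error
    errors.append(max(abs(ei) for ei in e))
    u = [ui + L * ei for ui, ei in zip(u, e)]       # ILC update for next trial
```

Data dropouts would randomly skip some of the per-sample updates; the paper's LMI condition bounds the allowed dropout rate so that this contraction survives.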
A novel centralized approach for Dynamic Spectrum Allocation (DSA) in the Cognitive Radio (CR) network is presented in this paper. Instead of formulating the solution through models of the network environment such as linear programming or convex optimization, the new approach learns the environment's behavior iteratively online using a Reinforcement Learning (RL) algorithm, motivated by the variability and uncertainty of heterogeneous wireless networks. Appropriate access decisions are then obtained by employing a Fuzzy Inference System (FIS), which ensures that the strategy can sufficiently explore possible states and exploit accumulated experience. The new approach effectively considers multiple objectives, such as spectrum efficiency and fairness between CR Access Points (APs). By interacting with the environment and accumulating comprehensive advantages, it can achieve the largest expected long-term reward on the desired objectives and implement the best action. Moreover, the present algorithm is relatively simple and does not require complex calculations. Simulation results show that the proposed approach achieves better performance than a fixed frequency planning scheme or a general dynamic spectrum allocation policy.
Cognitive Internet of Vehicles (CIoV) can improve spectrum utilization by accessing spectrum licensed to a primary user (PU) under the premise of not disturbing the PU's transmissions. However, traditional static spectrum access leaves the CIoV unable to adapt to varying spectrum environments. In this paper, a reinforcement learning based dynamic spectrum access scheme is proposed to improve the transmission performance of the CIoV in the licensed spectrum while avoiding harmful interference to the PU. The frame structure of the CIoV is separated into a sensing period and an access period, whereby the CIoV can optimize the transmission parameters in the access period according to the spectrum decisions made in the sensing period. Considering both detection probability and false alarm probability, a Q-learning based spectrum access algorithm is proposed for the CIoV to intelligently select the optimal channel, bandwidth, and transmit power under dynamic spectrum states and varying spectrum sensing performance. Simulations show that, compared with a traditional non-learning spectrum access algorithm, the proposed Q-learning algorithm can effectively improve the spectral efficiency and throughput of the CIoV and decrease the interference power to the PU.
With the construction of the power Internet of Things (IoT), communication between smart devices in urban distribution networks has been gradually moving towards high speed, high compatibility, and low latency, which provides reliable support for reconfiguration optimization in urban distribution networks. Thus, this study proposed a deep reinforcement learning based multi-level dynamic reconfiguration method for urban distribution networks in a cloud-edge collaboration architecture to obtain a real-time optimal multi-level dynamic reconfiguration solution. First, the multi-level dynamic reconfiguration method was discussed, covering the feeder, transformer, and substation levels. Subsequently, a multi-agent system was combined with the cloud-edge collaboration architecture to build a deep reinforcement learning model for multi-level dynamic reconfiguration in an urban distribution network. The cloud-edge collaboration architecture effectively supports the multi-agent system's "centralized training and decentralized execution" operation mode and improves the learning efficiency of the model. Thereafter, the multi-agent system adopted a combination of offline and online learning to endow the model with the ability to automatically optimize and update its strategy. In the offline learning phase, a multi-agent conservative Q-learning (MACQL) algorithm based on Q-learning was proposed to stabilize the learning results and reduce the risk of the subsequent online learning phase. In the online learning phase, a multi-agent deep deterministic policy gradient (MADDPG) algorithm based on policy gradients was proposed to explore the action space and update the experience pool. Finally, the effectiveness of the proposed method was verified through a simulation analysis of a real-world 445-node system.
Machine learning (ML) methods, with their good applicability to complex and highly nonlinear sequences, have been attracting much attention in recent years for predicting the complicated mechanical properties of various materials. As widely known ML methods, back-propagation (BP) neural networks with and without optimization by a genetic algorithm (GA) are also established for comparisons of time cost and prediction error. To further increase the prediction accuracy and efficiency, this paper proposes a long short-term memory (LSTM) network model to predict the dynamic compressive performance of concrete-like materials at high strain rates. Dynamic explicit analysis is performed in the finite element (FE) software ABAQUS to simulate various waveforms in split Hopkinson pressure bar (SHPB) experiments by applying different stress waves in the incident bar. The FE simulation accuracy is validated against SHPB experimental results from the viewpoint of the dynamic increase factor. To cover more extensive loading scenarios, 60 sets of FE simulations are conducted to generate three kinds of waveforms in the incident and transmission bars of the SHPB experiments. By training the three proposed networks, nonlinear mapping relations can be reasonably established between the incident, reflected, and transmitted waves. Statistical measures are used to quantify the network prediction accuracy, confirming that the stress-strain curves of concrete-like materials at high strain rates predicted by the proposed networks agree well with those from the FE simulations. It is found that, compared with the BP network, the GA-BP network can effectively stabilize the network structure, indicating that the GA optimization improves the prediction accuracy of the SHPB dynamic responses by performing crossover and mutation operations on the weights and thresholds of the original BP network. By eliminating long-time dependencies, the proposed LSTM network achieves better results than the BP and GA-BP networks, with a smaller mean square error (MSE) and a higher correlation coefficient. More importantly, the proposed LSTM algorithm, after training with a limited number of FE simulations, can replace the time-consuming and laborious FE pre- and post-processing and modelling.
Social infrastructures such as dams are exposed to high risks of terrorist and military attacks, drawing increasing attention to their vulnerability and the catastrophic consequences of such events. This paper develops advanced deep learning approaches for structural dynamic response prediction and dam health diagnosis. First, improved long short-term memory (LSTM) networks are proposed for data-driven structural dynamic response analysis, trained on data generated by a single-degree-of-freedom (SDOF) model and finite-element numerical simulation, owing to the unavailability of abundant practical structural response data for concrete gravity dams under blast events. Three kinds of LSTM-based models are discussed for various cases of noise-contaminated signals, and the results prove that LSTM-based models have the potential for quick structural response estimation under blast loads. Furthermore, damage indicators (i.e., peak vibration velocity and dominant frequency) are extracted from the predicted velocity histories, and their relationship with the dam damage status from the numerical simulation is established. This study provides a deep-learning based structural health monitoring (SHM) framework for quick assessment of dams subjected to underwater explosions through blast-induced monitoring data.
Funding: Supported by the Research Grant Fund from Kwangwoon University in 2023, the National Natural Science Foundation of China under Grant 62311540155, the Taishan Scholars Project Special Funds (tsqn202312035), and the Open Research Foundation of the State Key Laboratory of Integrated Chips and Systems.
Abstract: Wearable wristband systems leverage deep learning to revolutionize hand gesture recognition in daily activities. Unlike existing approaches, which often focus on static gestures and require extensive labeled data, the proposed wearable wristband with self-supervised contrastive learning excels at dynamic motion tracking and adapts rapidly across multiple scenarios. It features a four-channel sensing array composed of an ionic hydrogel with hierarchical microcone structures and ultrathin flexible electrodes, yielding high-sensitivity capacitance output. Using signals transmitted wirelessly by a Wi-Fi module, the proposed algorithm learns latent features from the unlabeled signals of random wrist movements. Remarkably, only a few labeled samples are needed to fine-tune the model, enabling rapid adaptation to various tasks. The system achieves a high accuracy of 94.9% across different scenarios, including the prediction of eight-direction commands and the air-writing of all numbers and letters. The proposed method supports smooth transitions between multiple tasks without modifying the model structure or undergoing extensive task-specific training. Its utility has been further extended to human–machine interaction on digital platforms, such as game controls, calculators, and three-language login systems, offering users a natural and intuitive way of communicating.
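The self-supervised pretraining idea described above — pull two views of the same unlabeled wrist-motion window together, push other windows apart — can be sketched with the widely used NT-Xent contrastive objective. This is a minimal numpy illustration under assumed embeddings; the paper's actual loss, encoder architecture, and signal augmentations are not specified here.

```python
import numpy as np

def nt_xent_loss(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1, z2 : (N, d) arrays; row i of z1 and z2 are two augmented views
    of the same unlabeled signal window (hypothetical encoder outputs).
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # cosine space
    sim = z @ z.T / tau                                 # pairwise similarities
    np.fill_diagonal(sim, -np.inf)                      # mask self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])  # positive index
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Nearly identical views of the same window should produce a lower loss than randomly paired windows, which is exactly the signal the pretraining stage exploits.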
Funding: Supported in part by the National Natural Science Foundation of China (62222301, 62073085, 62073158, 61890930-5, 62021003), the National Key Research and Development Program of China (2021ZD0112302, 2021ZD0112301, 2018YFC1900800-5), and the Beijing Natural Science Foundation (JQ19013).
Abstract: Reinforcement learning (RL) has roots in dynamic programming and is known as adaptive/approximate dynamic programming (ADP) within the control community. This paper reviews recent developments in ADP and RL and their applications to various advanced control fields. First, the background of the development of ADP is described, emphasizing the significance of regulation and tracking control problems. Effective offline and online algorithms for ADP/adaptive critic control are presented, surveying the main results for discrete-time and continuous-time systems, respectively. Then, research progress on adaptive critic control under the event-triggered framework and in uncertain environments is discussed, covering event-based design, robust stabilization, and game design. Moreover, extensions of ADP for addressing control problems in complex environments have attracted enormous attention. The ADP architecture is revisited from the perspective of data-driven and RL frameworks, showing how they significantly advance the ADP formulation. Finally, several typical control applications of RL and ADP are summarized, particularly in the fields of wastewater treatment processes and power systems, followed by some general prospects for future research. Overall, this comprehensive survey of ADP and RL for advanced control applications demonstrates their remarkable potential in the artificial intelligence era, as well as their vital role in promoting environmental protection and industrial intelligence.
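For the regulation problem that underlies much of ADP, the adaptive critic ultimately approximates the solution of a Riccati-type Bellman equation. The model-based core that data-driven ADP methods approximate can be sketched as plain value iteration on a discrete-time LQR problem; the toy system matrices below are assumptions for illustration, not taken from the survey.

```python
import numpy as np

def lqr_value_iteration(A, B, Q, R, iters=500):
    """Approximate the discrete-time algebraic Riccati solution by
    iterating the Bellman (Riccati) recursion; returns the value
    matrix P and the greedy feedback gain K."""
    P = np.zeros_like(Q)
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy gain
        P = Q + A.T @ P @ (A - B @ K)                      # Bellman backup
    return P, K
```

At convergence P satisfies the discrete-time algebraic Riccati equation and u = −Kx stabilizes the closed loop; adaptive critic designs learn the same fixed point from data rather than from (A, B).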
Abstract: Unsupervised learning methods such as graph contrastive learning have been used for dynamic graph representation learning to eliminate the dependence on labels. However, existing studies neglect positional information when learning discrete snapshots, resulting in insufficient learning of network topology. At the same time, owing to the lack of appropriate data augmentation methods, it is difficult to capture the evolving patterns of the network effectively. To address these problems, a position-aware and subgraph-enhanced dynamic graph contrastive learning method is proposed for discrete-time dynamic graphs. First, a global snapshot is built from the historical snapshots to express the stable pattern of the dynamic graph, and random walks are used to obtain a position representation by learning the positional information of the nodes. Second, a new data augmentation method is designed from the perspectives of the short-term changes and long-term stable structures of dynamic graphs. Specifically, subgraph sampling based on snapshots and global snapshots is used to obtain two structural augmentation views, and node structures and evolving patterns are learned by combining a graph neural network, a gated recurrent unit, and an attention mechanism. Finally, the quality of the node representation is improved by combining contrastive learning between the different structural augmentation views and between the structure and position representations. Experimental results on four real datasets show that the proposed method outperforms existing unsupervised methods and is more competitive than supervised learning methods under a semi-supervised setting.
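The positional component can be illustrated with random-walk landing probabilities, a common way to derive node position features from walks. This is a simplified stand-in for the paper's walk-based position representation; the adjacency matrix in the check below is a made-up example.

```python
import numpy as np

def rw_position_encoding(adj, steps=4):
    """Positional features from random-walk landing probabilities:
    block k of row i gives the distribution of a walker started at
    node i after k steps, stacked over k = 1..steps."""
    deg = adj.sum(axis=1, keepdims=True)
    T = adj / np.maximum(deg, 1)          # row-stochastic transition matrix
    feats, Tk = [], np.eye(len(adj))
    for _ in range(steps):
        Tk = Tk @ T                       # k-step landing probabilities
        feats.append(Tk)
    return np.concatenate(feats, axis=1)  # (n, n * steps)
```

Each k-step block is itself a probability distribution per node, so the features encode where in the topology a node sits rather than what its attributes are.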
Funding: Supported by the CAS Project for Young Scientists in Basic Research (YSBR-005) and the National Natural Science Foundation of China (22325304, 22221003, and 22033007). We acknowledge the Supercomputing Center of USTC, Hefei Advanced Computing Center, and Beijing PARATERA Tech Co., Ltd., for providing high-performance computing services.
Abstract: As the simplest hydrogen-bonded alcohol, liquid methanol has attracted intensive experimental and theoretical interest. However, theoretical investigations of this system have primarily relied on empirical intermolecular force fields or ab initio molecular dynamics with semilocal density functionals. Inspired by recent studies on bulk water using increasingly accurate machine learning force fields, we report a new machine learning force field for liquid methanol based on the hybrid functional revPBE0 with dispersion correction. Molecular dynamics simulations with this machine learning force field are orders of magnitude faster than ab initio molecular dynamics, yielding radial distribution functions, self-diffusion coefficients, and hydrogen bond network properties with very small statistical errors. The resulting structural and dynamical properties compare well with experimental data, demonstrating the superior accuracy of this machine learning force field. This work represents a successful step toward a first-principles description of this benchmark system and showcases the general applicability of machine learning force fields in studying liquid systems.
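One of the observables quoted above, the radial distribution function g(r), can be computed from any MD trajectory frame in a few lines. A brute-force periodic-box sketch follows (uniform random particles stand in for a real methanol frame, so g(r) ≈ 1 away from r = 0); the box size and bin count are arbitrary choices.

```python
import numpy as np

def radial_distribution(pos, box, r_max, nbins=50):
    """g(r) for particles in a cubic periodic box using the
    minimum-image convention. Brute-force O(N^2), illustration only."""
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                     # periodic wrap
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0, r_max))
    rho = n / box ** 3
    shell = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    g = hist / (shell * rho * n / 2)                 # normalize by ideal gas
    return g, 0.5 * (edges[1:] + edges[:-1])
```

For a real trajectory one would average the histogram over many frames before normalizing, which is what drives the "very small statistical errors" mentioned in the abstract.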
Funding: Project supported by the Natural Science Foundation of Jiangsu Province (Grant No. BK20220917) and the National Natural Science Foundation of China (Grant Nos. 12001213 and 12302035).
Abstract: We present a large deviation theory that characterizes the exponential estimate for rare events in stochastic dynamical systems in the limit of weak noise. We consider a next-to-leading-order approximation for more accurate calculation of the mean exit time by computing large deviation prefactors with the aid of machine learning. More specifically, we design a neural network framework to compute the quasipotential, most probable paths, and prefactors based on the orthogonal decomposition of a vector field. We corroborate the higher effectiveness and accuracy of our algorithm with two toy models. Numerical experiments demonstrate its powerful functionality in exploring the internal mechanism of rare events triggered by weak random fluctuations.
Funding: The simulations in this work were supported by the High-Performance Computing Center of Central South University.
Abstract: The shear deformation mechanisms of diamond-like carbon (DLC) remain unclear because its thickness of only a few micrometers hinders detailed analysis of its microstructural evolution and mechanical performance, which in turn limits improvement of the friction and wear performance of DLC. This study investigates this issue using molecular dynamics simulation and machine learning (ML) techniques. The results indicate that changes in the mechanical properties of DLC are mainly due to the expansion and reduction of sp3 networks, causing stick-slip patterns in the shear force. In addition, cluster analysis shows that sp2-sp3 transitions arise in the stick stage, while sp3-sp2 transitions occur in the slip stage. To analyze the mechanisms governing bond breaking and re-formation in these transitions, a Random Forest (RF) model identifies the kinetic energies of sp3 atoms and their velocities along the loading direction as the most influential features: high atomic kinetic energies exacerbate the instability of the bonding state and increase the probability of bond breaking and re-formation. Finally, the RF model finds that the shear force of DLC is highly correlated with its potential energy and less correlated with its sp3 content. Since the changes in potential energy are caused by variations in sp3 content and localized strains, potential energy is an ideal parameter for evaluating the shear deformation of DLC. These results enhance the understanding of the shear deformation of DLC and support the improvement of its friction and wear performance.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12192214 and 12222209).
Abstract: Discrete dislocation dynamics (DDD) simulations reveal the evolution of dislocation structures and the interaction of dislocations. This study investigated the compression behavior of single-crystal copper micropillars using few-shot machine learning with data provided by DDD simulations. Two types of features are considered: external features, comprising specimen size and loading orientation, and internal features, involving dislocation source length, Schmid factor, the orientation of the most easily activated dislocations, and their distance from the free boundary. The yield stress and stress-strain curves of single-crystal copper micropillars are predicted well by incorporating both external and internal features of the sample as separate or combined inputs. It is found that the prediction accuracy for single-crystal micropillar compression can be improved by incorporating easily activated dislocation features alongside external features. However, the effect of easily activated dislocations on yielding is less important than the effects of specimen size and Schmid factor, which encodes orientation information, although it becomes more evident in small-sized micropillars. Overall, incorporating internal features, especially information on the most easily activated dislocations, improves predictive capability across diverse sample sizes and orientations.
Funding: Supported in part by the National Natural Science Foundation of China under Grant U1905211, Grant 61872088, Grant 62072109, Grant 61872090, and Grant U1804263; in part by the Guangxi Key Laboratory of Trusted Software under Grant KX202042; in part by the Science and Technology Major Support Program of Guizhou Province under Grant 20183001; in part by the Science and Technology Program of Guizhou Province under Grant 20191098; in part by the Project of High-level Innovative Talents of Guizhou Province under Grant 20206008; and in part by the Open Research Fund of the Key Laboratory of Cryptography of Zhejiang Province under Grant ZCL21015.
Abstract: With the maturation and development of 5G, Mobile Edge CrowdSensing (MECS), as an intelligent data collection paradigm, offers broad prospects for various IoT applications. However, sensing users, as data uploaders, lack a balance between data benefits and privacy threats, leading either to conservative uploads with low revenue or to excessive uploads and privacy breaches. To solve this problem, a Dynamic Privacy Measurement and Protection (DPMP) framework is proposed based on differential privacy and reinforcement learning. First, a DPM model is designed to quantify the amount of data privacy, along with a method for calculating a personalized privacy threshold for each user. Furthermore, a Dynamic Private sensing data Selection (DPS) algorithm is proposed to help sensing users maximize data benefits within their privacy thresholds. Finally, theoretical analysis and extensive experimental results show that the DPMP framework effectively and efficiently balances data benefits against sensing-user privacy protection; in particular, it achieves 63% higher training efficiency and 23% higher data benefits compared to a Monte Carlo algorithm.
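The differential-privacy side can be sketched with the standard Laplace mechanism: each sensing user perturbs an uploaded reading with noise scaled to sensitivity/epsilon, so a tighter personal privacy threshold (smaller epsilon) means more noise. This is a generic textbook sketch, not the paper's DPM model.

```python
import numpy as np

def laplace_perturb(value, sensitivity, epsilon, rng):
    """epsilon-differentially-private release of a numeric reading via
    the Laplace mechanism; `sensitivity` is the max change one user's
    data can cause in `value`."""
    scale = sensitivity / epsilon          # noise scale grows as epsilon shrinks
    return value + rng.laplace(0.0, scale)
```

A personalized-threshold scheme would simply hand each user their own epsilon here, trading per-user data utility against privacy exactly as the abstract describes.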
Funding: Supported by CITRIS and the Banatao Institute, the Air Force Office of Scientific Research (Grant No. FA9550-22-1-0420), and the National Science Foundation (Grant No. ACI-1548562).
Abstract: Conventional wing aerodynamic optimization processes can be time-consuming and imprecise owing to the complexity of versatile flight missions. Much of the existing literature considers two-dimensional infinite-airfoil optimization, while three-dimensional finite-wing optimization has received limited study because of high computational costs. Here we create an adaptive optimization methodology built upon digitized wing shape deformation and deep learning algorithms, which enables the rapid formulation of finite-wing designs for specific aerodynamic performance demands under different cruise conditions. The methodology unfolds in three stages: wing generation by radial basis function interpolation, collection of inputs from computational fluid dynamics simulations, and a deep neural network that constructs the surrogate model for the optimal wing configuration. It is demonstrated that the proposed methodology can significantly reduce the computational cost of numerical simulations. It also has the potential to optimize various aerial vehicles under different mission environments, loading conditions, and safety requirements.
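The first stage above — deforming a baseline wing from a handful of control points — relies on radial basis function interpolation. A minimal Gaussian-kernel sketch follows; the kernel choice, shape parameter, and control-point layout are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def rbf_interpolate(centers, values, query, eps=1.0):
    """Gaussian-kernel RBF interpolant: fit weights so the surface
    passes exactly through (centers, values), then evaluate at query
    points. This is how a few control-point displacements are smoothly
    spread over a whole wing surface."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    w = np.linalg.solve(kernel(centers, centers), values)  # fit weights
    return kernel(query, centers) @ w
```

Because the interpolant is exact at the control points, a designer can prescribe displacements at a few stations and let the RBF carry them smoothly across the mesh.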
Funding: This work was financially supported by the National Natural Science Foundation of China (Grant No. 51575528) and the Science Foundation of China University of Petroleum, Beijing (No. 2462022QEDX011).
Abstract: The pipeline isolation plugging robot (PIPR) is an important tool in pipeline maintenance operations. During the plugging process, violent vibration can be induced by the flow field, causing serious damage to the pipeline and the PIPR. In this paper, we propose a dynamic regulating strategy to reduce the plugging-induced vibration by regulating the spoiler angle and plugging velocity. First, dynamic plugging simulations and experiments are performed to study the flow field changes during dynamic plugging, and the pressure difference is proposed as a measure of the degree of flow field vibration. Second, mathematical models relating the pressure difference to the plugging state and spoiler angle are established based on an extreme learning machine (ELM) optimized by an improved sparrow search algorithm (ISSA). Finally, a modified Q-learning algorithm based on simulated annealing is applied to determine the optimal spoiler angle and plugging velocity in real time. The results show that the proposed method reduces the plugging-induced vibration by 19.9% and 32.7% on average compared with the single-regulating methods. This study can effectively ensure the stability of the plugging process.
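The decision layer can be sketched as tabular Q-learning whose exploration rate follows a simulated-annealing temperature schedule, echoing the modified Q-learning used to choose the spoiler angle and plugging velocity in real time. The one-step toy task and reward values below are hypothetical stand-ins (think of reward as the negative pressure difference).

```python
import numpy as np

def sa_q_learning(rewards, episodes=3000, alpha=0.1, t0=1.0, decay=0.999,
                  rng=None):
    """Tabular Q-learning on a one-step task; the probability of a
    random (exploratory) action is an annealing temperature that decays
    each episode, so the agent explores early and exploits late."""
    if rng is None:
        rng = np.random.default_rng(0)
    q = np.zeros(len(rewards))
    temp = t0
    for _ in range(episodes):
        if rng.random() < temp:                  # annealed exploration
            a = int(rng.integers(len(rewards)))
        else:
            a = int(np.argmax(q))
        r = rewards[a] + 0.1 * rng.normal()      # noisy reward observation
        q[a] += alpha * (r - q[a])               # Q-value update
        temp *= decay                            # cool the temperature
    return q
```

In the paper's setting the action space would be a discretized grid of spoiler angles and plugging velocities rather than three abstract actions.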
Funding: Partly supported by the University of Malaya Impact Oriented Interdisciplinary Research Grant under Grant IIRG008(A,B,C)-19IISS.
Abstract: Organizations are adopting the Bring Your Own Device (BYOD) concept to enhance productivity and reduce expenses. However, this trend introduces security challenges, such as unauthorized access. Traditional access control systems, such as Attribute-Based Access Control (ABAC) and Role-Based Access Control (RBAC), are limited in their ability to enforce access decisions owing to the variability and dynamism of the attributes related to users and resources. This paper proposes an adaptable and dynamic method for enforcing access decisions based on multilayer hybrid deep learning techniques, particularly the Tabular Deep Neural Network (Tabular DNN) method. This technique maps all the input attributes of an access request to a binary classification (allow or deny) through multiple layers, ensuring accurate and efficient access decision-making. The proposed solution was evaluated on the Kaggle Amazon access control policy dataset and demonstrated its effectiveness by achieving a 94% accuracy rate. Additionally, the proposed solution enhances the enforcement of access decisions based on a variety of resource and user attributes while ensuring privacy through indirect communication with the Policy Administration Point (PAP). This solution significantly improves the flexibility of access control systems, making them more dynamic and adaptable to the evolving needs of modern organizations. Furthermore, it offers a scalable approach to managing the complexities associated with the BYOD environment, providing a robust framework for secure and efficient access management.
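The core formulation — encoded request attributes in, a binary allow/deny out — can be illustrated with a minimal trained classifier. A logistic-regression stand-in is used here instead of the paper's multilayer Tabular DNN, and the two-attribute encoding is invented for illustration.

```python
import numpy as np

def train_access_classifier(X, y, lr=0.5, steps=2000):
    """Gradient descent on logistic loss: X holds numerically encoded
    request attributes, y holds 1 = allow / 0 = deny labels."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted allow probability
        g = p - y                                # logistic-loss gradient signal
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def decide(X, w, b):
    """Enforce the decision: 1 = allow, 0 = deny."""
    return (X @ w + b > 0).astype(int)
```

The Tabular DNN in the paper replaces the single linear layer with multiple hidden layers, but the enforcement step — thresholding a learned score per request — is the same.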
Funding: Supported by the State Grid Liaoning Electric Power Supply Co., Ltd. (Research on Scheduling Decision Technology Based on Interactive Reinforcement Learning for Adapting High Proportion of New Energy, No. 2023YF-49).
Abstract: Traditional optimal scheduling methods rely on accurate physical models and parameter settings, which are difficult to adapt to the uncertainty of sources and loads, and they cannot make dynamic decisions continuously. This paper proposes a dynamic economic scheduling method for distribution networks based on deep reinforcement learning. First, an economic scheduling model of the new energy distribution network is established considering the action characteristics of micro gas turbines, a dynamic scheduling model based on deep reinforcement learning is constructed for a new energy distribution network system with a high proportion of new energy, and the Markov decision process of the model is defined. Second, to cope with the changing characteristics of source-load uncertainty, agents are trained interactively with the distribution network in a data-driven manner. Then, through the proximal policy optimization algorithm, the agents adaptively learn the scheduling strategy and realize dynamic scheduling decisions for the new energy distribution network system. Finally, the feasibility and superiority of the proposed method are verified on an improved IEEE 33-node simulation system.
Abstract: The ability to accurately predict urban traffic flows is crucial for optimising city operations. Consequently, various methods for forecasting urban traffic have been developed, focusing on analysing historical data to understand complex mobility patterns. Deep learning techniques, such as graph neural networks (GNNs), are popular for their ability to capture spatio-temporal dependencies. However, these models often become overly complex due to the large number of hyper-parameters involved. In this study, we introduce Dynamic Multi-Graph Spatial-Temporal Graph Neural Ordinary Differential Equation Networks (DMST-GNODE), a framework based on ordinary differential equations (ODEs) that autonomously discovers effective spatial-temporal graph neural network (STGNN) architectures for traffic prediction tasks. Comparative analysis shows that the DMST-GNODE model performs best across multiple datasets, consistently achieving the lowest Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) values alongside the highest accuracy. On the BKK (Bangkok) dataset, it outperformed other models with an RMSE of 3.3165 and an accuracy of 0.9367 for a 20-min interval, maintaining this trend at 40 and 60 min. Similarly, on the PeMS08 dataset, DMST-GNODE achieved the best performance with an RMSE of 19.4863 and an accuracy of 0.9377 at 20 min, demonstrating its effectiveness over longer periods. The Los_Loop dataset results further emphasise this advantage, with an RMSE of 3.3422 and an accuracy of 0.7643 at 20 min, with superiority consistently maintained across all time intervals. These results indicate that DMST-GNODE not only outperforms the baseline models but also achieves higher accuracy and lower errors across different time intervals and datasets.
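The continuous-depth idea behind graph-ODE layers can be shown with the simplest linear graph ODE, dx/dt = (T − I)x, integrated with Euler steps: node features diffuse along edges, smoothing toward neighbours. The 4-cycle graph and linear dynamics below are toy choices, not the DMST-GNODE architecture.

```python
import numpy as np

def graph_ode_rollout(adj, x0, dt=0.1, steps=50):
    """Euler integration of dx/dt = (T - I) x, where T is the
    row-stochastic diffusion operator of the graph; this is the
    continuous-time analogue of stacked message-passing layers."""
    T = adj / adj.sum(axis=1, keepdims=True)
    x = x0.astype(float)
    for _ in range(steps):
        x = x + dt * (T @ x - x)                 # one Euler step
    return x
```

A learned GNODE replaces the fixed linear operator with a trained network and lets an ODE solver choose the effective depth, but the rollout structure is the same.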
Funding: Prince Sattam bin Abdulaziz University, project number PSAU/2023/R/1445.
Abstract: Prediction of stability in a Smart Grid (SG) is essential for maintaining the consistency and reliability of the power supply in grid infrastructure. Analyzing fluctuations in the power generation and consumption patterns of smart cities helps manage continuous power supply in the grid effectively; it also helps avert overloading and enables effective energy storage. Although many traditional techniques have predicted the consumption rate to preserve stability, prediction measures still require enhancement with minimized loss. To overcome the shortcomings of existing studies, this paper predicts stability from the smart grid stability prediction dataset using machine learning algorithms. To accomplish this, pre-processing is performed first: missing values are handled, since mishandling them yields biased models, and feature scaling is applied to normalize the independent data features. The pre-processed data are then split for training and testing. Following that, regression is performed using a Modified PSO (Particle Swarm Optimization)-optimized XGBoost technique with a dynamic inertia weight update, which analyzes variables such as gamma (G), reaction time (tau1-tau4), and power balance (p1-p4) to provide effective future stability in the SG. Since PSO attains an optimal solution by adjusting particle positions through dynamic inertia weights, it is integrated with XGBoost for its scalability and fast computation. The hyperparameters of XGBoost are fine-tuned during training to achieve promising prediction outcomes. The regression results are measured by evaluation metrics: an MSE (Mean Square Error) of 0.011312781, an MAE (Mean Absolute Error) of 0.008596322, an RMSE (Root Mean Square Error) of 0.010636156, and a MAPE (Mean Absolute Percentage Error) of 0.0052, which demonstrate the efficacy of the system.
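The "dynamic inertia weight update" can be illustrated in isolation: a particle swarm whose inertia weight decays linearly over iterations, shown here minimizing a plain sphere objective instead of searching XGBoost hyperparameters. The swarm size, coefficients, and bounds are conventional defaults, not values from the paper.

```python
import numpy as np

def pso_minimize(f, dim, n=20, iters=200, w_max=0.9, w_min=0.4, rng=None):
    """Particle swarm optimization with a linearly decaying inertia
    weight; returns the best position and best objective value found."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = rng.uniform(-5.0, 5.0, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()]
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters  # dynamic inertia weight
        r1, r2 = rng.random((2, n, dim))
        v = w * v + 2.0 * r1 * (pbest - x) + 2.0 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pval                       # update personal bests
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()]                 # update global best
    return g, pval.min()
```

For hyperparameter tuning, each particle position would encode a candidate XGBoost configuration and f would be a cross-validated error rather than the sphere function.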
Funding: Supported by the General Program (No. 60774022) and State Key Program (No. 60834001) of the National Natural Science Foundation of China.
Abstract: In this paper, the stability of iterative learning control with data dropouts is discussed. Through the super-vector formulation, an iterative learning control (ILC) system with data dropouts can be modeled as an asynchronous dynamical system with rate constraints on events in the iteration domain. The stability condition is provided in the form of linear matrix inequalities (LMIs), building on the stability theory of asynchronous dynamical systems. The analysis is supported by simulations.
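The iteration-domain contraction that the LMI condition generalizes is easy to see in the nominal dropout-free case: a P-type ILC update u_{k+1} = u_k + γ·e_k drives the tracking error to zero over repeated trials. The FIR plant below is a toy assumption chosen so the learning operator is a strict contraction; the paper's dropout analysis perturbs exactly this kind of baseline.

```python
import numpy as np

def p_type_ilc(plant, y_ref, gain=0.9, trials=30):
    """P-type iterative learning control over a finite horizon: the
    whole input trajectory is corrected by the previous trial's error."""
    u = np.zeros_like(y_ref)
    errs = []
    for _ in range(trials):
        e = y_ref - plant(u)                   # trial error
        errs.append(np.abs(e).max())
        u = u + gain * e                       # learning update
    return u, errs

def plant(u):
    """Toy FIR plant y[t] = u[t] + 0.3 u[t-1] (assumption)."""
    return u + 0.3 * np.concatenate([[0.0], u[:-1]])
```

With direct feedthrough 1 and gain 0.9, the error map I − γG has infinity-norm 0.37, so the peak error shrinks by at least that factor every trial.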
Funding: Supported in part by the National Science Fund for Distinguished Young Scholars under Grant No. 60725105, the National Basic Research Program of China (973 Program) under Grant No. 2009CB320404, the National Natural Science Foundation of China under Grant No. 61072068, and the Fundamental Research Funds for the Central Universities under Grant No. JY10000901031.
Abstract: A novel centralized approach to Dynamic Spectrum Allocation (DSA) in Cognitive Radio (CR) networks is presented in this paper. Instead of formulating the network environment explicitly, as in linear programming or convex optimization, the new approach iteratively learns environment performance online using a Reinforcement Learning (RL) algorithm, after observing the variability and uncertainty of heterogeneous wireless networks. Appropriate access decisions are then obtained by employing a Fuzzy Inference System (FIS), which ensures that the strategy can sufficiently explore possible states and exploit accumulated experience. The new approach effectively handles multiple objectives, such as spectrum efficiency and fairness between CR Access Points (APs). By interacting with the environment and accumulating comprehensive advantages, it can achieve the largest expected long-term reward on the desired objectives and implement the best action. Moreover, the algorithm is relatively simple and does not require complex calculations. Simulation results show that the proposed approach achieves better performance than a fixed frequency planning scheme or a general dynamic spectrum allocation policy.
Funding: This work was supported by the Joint Foundations of the National Natural Science Foundations of China and the Civil Aviation of China under Grant U1833102, the Natural Science Foundation of Liaoning Province under Grants 2020-HYLH-13 and 2019-ZD-0014, the Fundamental Research Funds for the Central Universities under Grant DUT21JC20, and the Engineering Research Center of Mobile Communications, Ministry of Education.
Abstract: The Cognitive Internet of Vehicles (CIoV) can improve spectrum utilization by accessing spectrum licensed to a primary user (PU), on the premise of not disturbing the PU's transmissions. However, traditional static spectrum access prevents the CIoV from adapting to varying spectrum environments. In this paper, a reinforcement learning based dynamic spectrum access scheme is proposed to improve the transmission performance of the CIoV in the licensed spectrum while avoiding harmful interference to the PU. The frame structure of the CIoV is separated into a sensing period and an access period, whereby the CIoV can optimize its transmission parameters in the access period according to the spectrum decisions made in the sensing period. Considering both detection probability and false alarm probability, a Q-learning based spectrum access algorithm is proposed for the CIoV to intelligently select the optimal channel, bandwidth, and transmit power under dynamic spectrum states and varying spectrum sensing performance. Simulations show that, compared with a traditional non-learning spectrum access algorithm, the proposed Q-learning algorithm can effectively improve the spectral efficiency and throughput of the CIoV while decreasing the interference power to the PU.
Funding: Supported by the National Natural Science Foundation of China under Grant 52077146.
Abstract: With the construction of the power Internet of Things (IoT), communication between smart devices in urban distribution networks has been gradually moving towards high speed, high compatibility, and low latency, which provides reliable support for reconfiguration optimization in urban distribution networks. This study therefore proposes a deep reinforcement learning based multi-level dynamic reconfiguration method for urban distribution networks in a cloud-edge collaboration architecture, to obtain a real-time optimal multi-level dynamic reconfiguration solution. First, the multi-level dynamic reconfiguration method is described, covering the feeder, transformer, and substation levels. Subsequently, a multi-agent system is combined with the cloud-edge collaboration architecture to build a deep reinforcement learning model for multi-level dynamic reconfiguration of an urban distribution network. The cloud-edge collaboration architecture effectively supports the multi-agent system in the “centralized training and decentralized execution” operation mode and improves the learning efficiency of the model. Thereafter, the multi-agent system adopts a combination of offline and online learning, endowing the model with the ability to automatically optimize and update its strategy. In the offline learning phase, a Q-learning-based multi-agent conservative Q-learning (MACQL) algorithm is proposed to stabilize the learning results and reduce the risk in the subsequent online learning phase. In the online learning phase, a multi-agent deep deterministic policy gradient (MADDPG) algorithm based on policy gradients is proposed to explore the action space and update the experience pool. Finally, the effectiveness of the proposed method is verified through a simulation analysis of a real-world 445-node system.
Funding: Supported by the National Natural Science Foundation of China (No. 52175148), the Natural Science Foundation of Shaanxi Province (No. 2021KW-25), the Open Cooperation Innovation Fund of Xi’an Modern Chemistry Research Institute (No. SYJJ20210409), and the Fundamental Research Funds for the Central Universities (No. 3102018ZY015).
Abstract: Machine learning (ML) methods, with good applicability to complex and highly nonlinear sequences, have been attracting much attention in recent years for predicting the complicated mechanical properties of various materials. As one of the widely known ML methods, back-propagation (BP) neural networks, with and without optimization by a genetic algorithm (GA), are also established for comparison of time cost and prediction error. With the aim of further increasing prediction accuracy and efficiency, this paper proposes a long short-term memory (LSTM) network model to predict the dynamic compressive performance of concrete-like materials at high strain rates. Dynamic explicit analysis is performed in the finite element (FE) software ABAQUS to simulate various waveforms in split Hopkinson pressure bar (SHPB) experiments by applying different stress waves in the incident bar. The FE simulation accuracy is validated against SHPB experimental results in terms of the dynamic increase factor. To cover more extensive loading scenarios, 60 sets of FE simulations are conducted in this paper to generate three kinds of waveforms in the incident and transmission bars of the SHPB experiments. By training the proposed three networks, nonlinear mapping relations can be reasonably established between the incident, reflected, and transmitted waves. Statistical measures are used to quantify the network prediction accuracy, confirming that the stress-strain curves of concrete-like materials at high strain rates predicted by the proposed networks agree sufficiently with those from FE simulations. It is found that, compared with the BP network, the GA-BP network can effectively stabilize the network structure, indicating that the GA optimization improves the prediction accuracy of the SHPB dynamic responses by performing crossover and mutation operations on the weights and thresholds of the original BP network. By eliminating long-time dependencies, the proposed LSTM network achieves better results than the BP and GA-BP networks, with a smaller mean square error (MSE) and a higher correlation coefficient. More importantly, the proposed LSTM algorithm, after training with a limited number of FE simulations, could replace the time-consuming and laborious FE pre- and post-processing and modelling.
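The GA optimization of the BP network — crossover and mutation applied to flattened weight-and-threshold vectors — can be sketched as one generation of a simple genetic algorithm. The selection scheme, crossover operator, and mutation settings below are toy choices; the paper's GA configuration is not given.

```python
import numpy as np

def ga_step(pop, fitness, mut_rate=0.1, mut_scale=0.1, rng=None):
    """One GA generation over a population of flattened weight vectors:
    tournament selection, single-point crossover, Gaussian mutation."""
    if rng is None:
        rng = np.random.default_rng(0)
    n, d = pop.shape

    def pick():  # binary tournament selection
        i, j = rng.integers(n, size=2)
        return pop[i] if fitness[i] >= fitness[j] else pop[j]

    children = []
    for _ in range(n):
        a, b = pick(), pick()
        cut = int(rng.integers(1, d))                  # crossover point
        child = np.concatenate([a[:cut], b[cut:]])
        mask = rng.random(d) < mut_rate                # mutation sites
        child[mask] += mut_scale * rng.normal(size=mask.sum())
        children.append(child)
    return np.stack(children)
```

In the GA-BP scheme, fitness would be the negative training MSE of the BP network whose weights and thresholds the vector encodes, and `ga_step` would be iterated until the population converges.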
Funding: Supported by grants from the National Natural Science Foundation of China (Grant Nos. 52109163 and 51979188).
Abstract: Social infrastructures such as dams are exposed to a high risk of terrorist and military attacks, drawing increasing attention to their vulnerability and the catastrophic consequences of such events. This paper develops advanced deep learning approaches for structural dynamic response prediction and dam health diagnosis. First, improved long short-term memory (LSTM) networks are proposed for data-driven structural dynamic response analysis, trained on data generated by a single-degree-of-freedom (SDOF) model and finite element numerical simulation, owing to the unavailability of abundant practical structural response data for concrete gravity dams under blast events. Three kinds of LSTM-based models are discussed for various cases of noise-contaminated signals, and the results show that LSTM-based models have the potential for quick structural response estimation under blast loads. Furthermore, damage indicators (i.e., peak vibration velocity and dominant frequency) are extracted from the predicted velocity histories, and their relationship with the dam damage status from the numerical simulation is established. This study provides a deep learning based structural health monitoring (SHM) framework for quick assessment of dams subjected to underwater explosions through blast-induced monitoring data.