Brain tissue is one of the softest parts of the human body, composed of white matter and grey matter. The mechanical behavior of brain tissue plays an essential role in regulating brain morphology and brain function. Moreover, traumatic brain injury (TBI) and various brain diseases are also greatly influenced by the brain's mechanical properties. Whether white matter or grey matter, brain tissue contains multiscale structures composed of neurons, glial cells, fibers, blood vessels, etc., each with different mechanical properties. As such, brain tissue exhibits complex mechanical behavior, usually with strong nonlinearity, heterogeneity, and directional dependence. Building a constitutive law for multiscale brain tissue using traditional function-based approaches can be very challenging. Instead, this paper proposes a data-driven approach to establish the desired mechanical model of brain tissue. We focus on blood vessels with internal pressure embedded in a white or grey matter matrix material to demonstrate our approach. The matrix is described by an isotropic or anisotropic nonlinear elastic model. A representative unit cell (RUC) with blood vessels is built and used to generate stress-strain data under different internal blood pressures and various proportional displacement loading paths. The generated stress-strain data are then used to train a mechanical law using artificial neural networks to predict the macroscopic mechanical response of brain tissue under different internal pressures. Finally, the trained material model is implemented in finite element software to predict the mechanical behavior of a whole brain under intracranial pressure and distributed body forces. Compared with a direct numerical simulation that employs a reference material model, our proposed approach greatly reduces the computational cost and improves modeling efficiency. The predictions made by our trained model demonstrate sufficient accuracy. Specifically, we find that the level of internal blood pressure can greatly influence the stress distribution and determine the possible related damage behaviors.
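The core training step can be sketched in miniature. The following is an illustrative stand-in, not the authors' model: a one-hidden-layer network fit by gradient descent to synthetic stress-strain pairs playing the role of RUC-generated data, with strain and internal pressure as inputs and stress as output. The surrogate "ground truth" law and all sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "RUC" data: stress grows nonlinearly with strain, shifted by pressure.
strain = rng.uniform(0.0, 0.3, 200)
pressure = rng.uniform(0.0, 1.0, 200)
stress = 2.0 * strain + 5.0 * strain**2 + 0.5 * pressure  # surrogate ground truth

X = np.column_stack([strain, pressure])  # inputs: (strain, internal pressure)
y = stress.reshape(-1, 1)                # output: stress

# One hidden tanh layer trained by full-batch gradient descent on the MSE.
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)             # hidden activations
    pred = h @ W2 + b2
    err = pred - y                       # MSE gradient chain
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)       # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once trained, such a network can be queried inside a finite element material routine in place of the RUC, which is the source of the reported speedup.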
Using Louisiana’s Interstate system, this paper aims to demonstrate how data can be used to evaluate freight movement reliability, economy, and safety of truck freight operations to improve decision-making. Data mainly from the National Performance Management Research Data Set (NPMRDS) and the Louisiana Crash Database were used to analyze the Truck Travel Time Reliability Index, commercial vehicle User Delay Costs, and commercial vehicle safety. The results indicate that while Louisiana’s Interstate system remained reliable over the years, some segments were found to be unreliable; these accounted for less than 12% of the state’s Interstate mileage each year. The User Delay Costs incurred by commercial vehicles on these unreliable segments were, on average, 65.45% of the User Delay Cost incurred by all vehicles on the Interstate highway system between 2016 and 2019, 53.10% between 2020 and 2021, and 70.36% in 2022, which are considerably high. These disproportionate ratios indicate the economic impact of the unreliability of the Interstate system on commercial vehicle operations. Additionally, though annual crash frequencies remained relatively constant, an increasing proportion of commercial vehicles were involved in crashes, with segments (mileposts) that have high crash frequencies appearing to correspond with locations of recurring congestion on the Interstate highway system. The study highlights the potential of using data to identify areas that need improvement in transportation systems to support better decision-making.
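The reliability metric above follows the FHWA definition of the Truck Travel Time Reliability (TTTR) index: the ratio of the 95th-percentile to the 50th-percentile truck travel time on a segment, with values at or above 1.50 commonly flagged as unreliable. A minimal sketch with synthetic travel-time samples (not NPMRDS data):

```python
import numpy as np

def tttr(travel_times):
    """95th-percentile travel time divided by the median travel time."""
    t = np.asarray(travel_times, dtype=float)
    return np.percentile(t, 95) / np.percentile(t, 50)

# A mostly free-flowing segment vs. one with recurring congestion spikes (minutes).
reliable_segment   = [60, 61, 62, 60, 63, 61, 62, 64, 60, 62]
unreliable_segment = [60, 62, 61, 63, 60, 95, 110, 61, 62, 105]

# Flag segments whose TTTR meets or exceeds the 1.50 unreliability threshold.
flags = {name: tttr(t) >= 1.50
         for name, t in [("reliable", reliable_segment),
                         ("unreliable", unreliable_segment)]}
```

In practice the index is computed per segment and per time-of-day period from NPMRDS records, and the flagged mileage is summed to get figures like the sub-12% share reported above.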
Based on actual data collected from the tight sandstone development zone, correlation analysis using the Spearman method was conducted to determine the main factors influencing the gas production rate of tight sandstone fracturing. An integrated model combining geological engineering and numerical simulation of fracture propagation and production was completed. Based on data analysis, the hydraulic fracture parameters were optimized to develop a differentiated fracturing treatment adjustment plan. The results indicate that the influence of geological and engineering factors in the X1 and X2 development zones in the study area differs significantly. Therefore, it is challenging to adopt a uniform development strategy to achieve a rapid production increase. The data analysis reveals that the variation in gas production rate is primarily affected by the reservoir thickness and permeability parameters as geological factors. On the other hand, the amount of treatment fluid and proppant addition significantly impact the gas production rate as engineering factors. Among these factors, the influence of geological factors is more pronounced in block X1. Therefore, the main focus should be on further optimizing the fracturing interval and adjusting the geological development well locations. Given the existing well locations, there is limited potential for further optimizing fracture parameters to increase production. For block X2, the fracturing parameters should be optimized. Data screening was conducted to identify outliers in the entire dataset, and a data-driven fracturing parameter optimization method was employed to determine the basic adjustment direction for reservoir stimulation in the target block. This approach provides insights into the influence of geological, stimulation, and completion parameters on gas production rate. Consequently, the subsequent fracturing parameter optimization design can significantly reduce the modeling and simulation workload and guide field operations to improve and optimize hydraulic fracturing efficiency.
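The Spearman screening step described above can be sketched as follows: rank candidate geological and engineering factors by the absolute Spearman rank correlation with the gas production rate. The well records below are synthetic placeholders; only the factor names mirror the text.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n = 200
thickness = rng.uniform(5, 30, n)          # reservoir thickness, m
permeability = rng.uniform(0.01, 0.5, n)   # permeability, mD
fluid = rng.uniform(500, 2000, n)          # treatment fluid volume, m^3
proppant = rng.uniform(30, 120, n)         # proppant amount, t

# Synthetic production rate: monotone in thickness, permeability, and fluid
# volume, plus noise; proppant is deliberately irrelevant here.
rate = 0.8 * thickness + 20 * permeability + 0.01 * fluid + rng.normal(0, 2, n)

factors = {"thickness": thickness, "permeability": permeability,
           "fluid": fluid, "proppant": proppant}
# Rank factors by |Spearman rho| against the production rate.
ranking = sorted(((abs(spearmanr(v, rate)[0]), k)
                  for k, v in factors.items()), reverse=True)
```

On field data, the same ranking computed per block (X1 vs. X2) is what reveals whether geological or engineering factors dominate.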
In the synthesis of control algorithms for complex systems, we are often faced with imprecise or unknown mathematical models of the dynamical systems, or even with problems in finding a mathematical model of the system in the open loop. To tackle these difficulties, an approach to data-driven model identification and control algorithm design based on the maximum stability degree criterion is proposed in this paper. The data-driven model identification procedure involves finding the mathematical model of the system from the undamped transient response of the closed-loop system. The system is approximated with an inertial model, whose coefficients are calculated from the values of the critical transfer coefficient and the oscillation amplitude and period of the underdamped response of the closed-loop system. The data-driven control design assumes that the tuning parameters of the controller are calculated from the parameters obtained in the preceding system identification step, and expressions for calculating the tuning parameters are presented. The obtained results of the data-driven model identification and controller synthesis algorithm were verified by computer simulation.
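The raw quantities this procedure works from can be extracted numerically. The sketch below (the paper's exact coefficient formulas are not reproduced) measures the oscillation period and first-overshoot amplitude from a simulated underdamped closed-loop step response; the natural frequency and damping ratio are assumed for illustration.

```python
import numpy as np

t = np.linspace(0, 10, 2001)
wn, zeta = 2.0 * np.pi, 0.1      # assumed natural frequency and damping ratio
wd = wn * np.sqrt(1 - zeta**2)   # damped oscillation frequency
y = 1 - np.exp(-zeta * wn * t) * np.cos(wd * t)  # simplified step response

# Local maxima of y locate successive oscillation peaks.
peaks = [i for i in range(1, len(t) - 1) if y[i] > y[i - 1] and y[i] > y[i + 1]]
period = t[peaks[1]] - t[peaks[0]]   # oscillation period T
amplitude = y[peaks[0]] - 1.0        # first overshoot above the setpoint
```

The measured period and amplitude, together with the critical transfer coefficient, are then plugged into the paper's expressions to obtain the inertial-model coefficients and the controller tuning parameters.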
Hydrocarbon production from shale has attracted much attention in recent years. When applied to these prolific and hydrocarbon-rich resource plays, our understanding of the complexities of the flow mechanism (the sorption process and flow behavior in complex fracture systems, induced or natural) leaves much to be desired. In this paper, we present and discuss a novel approach to the modeling and history matching of hydrocarbon production from a Marcellus shale asset in southwestern Pennsylvania using advanced data mining, pattern recognition, and machine learning technologies. In this new approach, instead of imposing our understanding of the flow mechanism, the impact of multi-stage hydraulic fractures, and the production process on the reservoir model, we allow the production history, well log, completion, and hydraulic fracturing data to guide our model and determine its behavior. The uniqueness of this technology is that it incorporates so-called "hard data" directly into the reservoir model, so that the model can be used to optimize the hydraulic fracturing process. The "hard data" refers to field measurements taken during the hydraulic fracturing process, such as fluid and proppant type and amount, injection pressure and rate, and proppant concentration. This novel approach contrasts with the current industry focus on the use of "soft data" (non-measured, interpretive data such as frac length, width, height, and conductivity) in reservoir models. The study focuses on a Marcellus shale asset that includes 135 wells with multiple pads, different landing targets, well lengths, and reservoir properties. The full-field history matching process was successfully completed using this data-driven approach, capturing the production behavior with acceptable accuracy for individual wells and for the entire asset.
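The "hard data" idea can be illustrated with a deliberately simple stand-in: fit a predictive model of production directly from measured completion inputs, with no interpreted frac geometry anywhere in the model. The well records and coefficients below are synthetic; the real workflow uses far richer nonlinear pattern-recognition models.

```python
import numpy as np

rng = np.random.default_rng(7)
n_wells = 135                               # matches the asset size in the text
fluid = rng.uniform(20e3, 60e3, n_wells)    # frac fluid volume (assumed units)
proppant = rng.uniform(2e6, 6e6, n_wells)   # proppant mass
rate = rng.uniform(40, 90, n_wells)         # injection rate

# Synthetic "production history" driven only by hard data, plus noise.
prod = 0.02 * fluid + 1e-4 * proppant + 5 * rate + rng.normal(0, 50, n_wells)

# Least-squares fit of production on the measured completion inputs.
A = np.column_stack([fluid, proppant, rate, np.ones(n_wells)])
coef, *_ = np.linalg.lstsq(A, prod, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((prod - pred) ** 2) / np.sum((prod - prod.mean()) ** 2)
```

Because the fitted inputs are controllable field measurements, a model of this kind can be interrogated directly for hydraulic fracture design optimization, which is the point of the "hard data" approach.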
During the past few decades, mobile wireless communications have experienced four generations of technological revolution, namely from 1G to 4G, and the deployment of the latest 5G networks is expected to take place in 2019. One fundamental question is how we can push forward the development of mobile wireless communications now that it has become an extremely complex and sophisticated system. We believe that the answer lies in the huge volumes of data produced by the network itself, and machine learning may become the key to exploiting such information. In this paper, we elaborate on why the conventional model-based paradigm, which has been widely proven useful in pre-5G networks, can be less efficient or even less practical in future 5G and beyond mobile networks. Then, we explain how the data-driven paradigm, using state-of-the-art machine learning techniques, can become a promising solution. Finally, we provide a typical use case of the data-driven paradigm, i.e., proactive load balancing, in which online learning is utilized to adjust cell configurations in advance to avoid burst congestion caused by rapid traffic changes.
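The proactive load balancing use case can be sketched as follows: forecast each cell's next-interval load with exponential smoothing (a simple stand-in for the online learner) and hand traffic to the least-loaded neighbor before congestion occurs. Cell names, load histories, and the 0.8 overload threshold are all illustrative.

```python
def smooth_forecast(history, alpha=0.5):
    """One-step-ahead load forecast by exponential smoothing."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

cells = {
    "A": [0.50, 0.60, 0.72, 0.85, 0.93],   # rapidly rising traffic
    "B": [0.40, 0.38, 0.41, 0.39, 0.40],   # stable
    "C": [0.30, 0.32, 0.29, 0.31, 0.30],   # stable, lightly loaded
}
forecasts = {c: smooth_forecast(h) for c, h in cells.items()}

actions = []
for cell, f in forecasts.items():
    if f > 0.8:  # predicted congestion: offload to least-loaded neighbor now
        target = min(forecasts, key=forecasts.get)
        actions.append((cell, target))
```

The key difference from reactive balancing is that the reconfiguration decision is made on the forecast, not on the already-congested measurement.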
The application scope and future development directions of machine learning models (supervised learning, transfer learning, and unsupervised learning) that have driven energy material design are discussed.
Fault prognosis mainly refers to the estimation of the operating time before a failure occurs, which is vital for ensuring the stability, safety, and long lifetime of degrading industrial systems. Based on the results of fault prognosis, the maintenance strategy for the underlying industrial systems can shift from passive maintenance to active maintenance. With the increased complexity and improved automation level of industrial systems, fault prognosis techniques have become more and more indispensable. In particular, data-driven prognosis approaches, which tend to find the hidden fault factors and determine the specific fault occurrence time of the system by analysing historical or real-time measurement data, have gained great attention from different industrial sectors. In this context, the major task of this paper is to present a systematic overview of data-driven fault prognosis for industrial systems. Firstly, the characteristics of different prognosis methods are reviewed, with the data-based ones highlighted. Moreover, based on the different data characteristics that exist in industrial systems, the corresponding fault prognosis methodologies are illustrated, with emphasis on analyses and comparisons of the different prognosis methods. Finally, we reveal current research trends and look forward to future challenges in this field. This review is expected to serve as a tutorial and source of references for fault prognosis researchers.
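The simplest data-driven prognosis scheme the survey covers is trend extrapolation: fit a model to a monitored degradation indicator and extrapolate the remaining useful life (RUL) as the time left until a failure threshold is crossed. The degradation signal, noise level, and threshold below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 100)                            # operating hours observed so far
health = 0.02 * t + rng.normal(0, 0.05, t.size)  # drifting degradation indicator
threshold = 3.0                                  # assumed failure level

slope, intercept = np.polyfit(t, health, 1)      # linear degradation model
t_fail = (threshold - intercept) / slope         # predicted failure time
rul = t_fail - t[-1]                             # remaining useful life
```

Real systems rarely degrade linearly, which is why the survey emphasizes richer models (state-space, neural, and stochastic-process methods), but the threshold-crossing logic is the same.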
To achieve zero-defect production during computer numerical control (CNC) machining processes, it is imperative to develop effective diagnosis systems to detect anomalies efficiently. However, due to the dynamic conditions of the machine and tooling during machining processes, the relevant diagnosis systems currently adopted in industry are incompetent. To address this issue, this paper presents a novel data-driven diagnosis system for anomalies. In this system, power data for condition monitoring are continuously collected during dynamic machining processes to support online diagnosis analysis. To facilitate the analysis, preprocessing mechanisms have been designed to de-noise, normalize, and align the monitored data. Important features are extracted from the monitored data, and thresholds are defined to identify anomalies. Considering the dynamic conditions of the machine and tooling during machining processes, the thresholds used to identify anomalies can vary. Based on historical data, the threshold values are optimized using a fruit fly optimization (FFO) algorithm to achieve more accurate detection. Practical case studies were used to validate the system, demonstrating its potential and effectiveness for industrial applications.
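A simplified one-dimensional fruit fly optimization loop for threshold tuning can be sketched as follows: flies scatter randomly around the swarm location (the "smell" search), each candidate threshold is scored, and the swarm moves to the best fly (the "vision" stage). The detection-error objective and all constants are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

def detection_error(threshold):
    """Assumed objective: misclassification cost, minimized near 2.5."""
    return (threshold - 2.5) ** 2 + 0.1

swarm = 0.0                                  # initial swarm location
for _ in range(60):                          # FFO iterations
    flies = swarm + rng.uniform(-1, 1, 20)   # random smell-based search
    errors = np.array([detection_error(f) for f in flies])
    best = flies[np.argmin(errors)]
    if detection_error(best) < detection_error(swarm):
        swarm = best                         # vision stage: relocate the swarm
best_threshold = swarm
```

In the diagnosis system, the objective would instead score each candidate threshold against labeled historical power data, and a threshold would be tuned per operating condition.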
In the wastewater treatment process (WWTP), accurate and real-time monitoring values of key variables are crucial for operational strategies. However, most existing methods have difficulty obtaining the real-time values of some key variables in the process. To handle this issue, a data-driven intelligent monitoring system, using the soft sensor technique and data distribution service, is developed to monitor the concentrations of effluent total phosphorus (TP) and ammonia nitrogen (NH4-N). In this intelligent monitoring system, a fuzzy neural network (FNN) is applied to design the soft sensor model, and a principal component analysis (PCA) method is used to select the input variables of the soft sensor model. Moreover, data transfer software is exploited to integrate the soft sensor technique into the supervisory control and data acquisition (SCADA) system. Finally, the proposed intelligent monitoring system is tested in several real plants to demonstrate the reliability and effectiveness of its monitoring performance.
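The PCA-based input selection step can be sketched as follows: run PCA on standardized process measurements and keep the variables with the largest loadings on the leading principal component as soft-sensor inputs. The variable names and data are synthetic, not plant measurements.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
latent = rng.normal(size=n)                 # one dominant process mode
data = np.column_stack([
    latent + 0.1 * rng.normal(size=n),      # "DO"  - tracks the process mode
    latent + 0.1 * rng.normal(size=n),      # "ORP" - tracks the process mode
    rng.normal(size=n),                     # "pH"  - unrelated variation
    rng.normal(size=n),                     # "T"   - unrelated variation
])
names = ["DO", "ORP", "pH", "T"]

Z = (data - data.mean(0)) / data.std(0)     # standardize each variable
cov = np.cov(Z, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)        # eigenvalues in ascending order
pc1 = eigvec[:, -1]                         # leading principal direction
selected = [names[i] for i in np.argsort(-np.abs(pc1))[:2]]
```

The selected variables then become the inputs of the FNN soft sensor, reducing input dimensionality while retaining the dominant process variation.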
This paper presents a simple nonparametric regression approach to data-driven computing in elasticity. We apply kernel regression to the material data set and formulate a system of nonlinear equations that is solved to obtain a static equilibrium state of an elastic structure. Preliminary numerical experiments illustrate that, compared with existing methods, the proposed method finds a reasonable solution even if the data points are distributed coarsely in a given material data set.
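The kernel-regression ingredient can be shown on a one-dimensional material data set: a noisy stress-strain cloud smoothed with a Nadaraya-Watson (Gaussian kernel) estimator. The data, constitutive law, and bandwidth are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
strain = np.sort(rng.uniform(0, 0.5, 80))
stress = 200 * strain + 300 * strain**2 + rng.normal(0, 2, 80)  # noisy samples

def kernel_regress(x, xs, ys, h=0.05):
    """Nadaraya-Watson estimate of stress at strain x with bandwidth h."""
    w = np.exp(-0.5 * ((x - xs) / h) ** 2)  # Gaussian kernel weights
    return np.sum(w * ys) / np.sum(w)

est = kernel_regress(0.25, strain, stress)
true = 200 * 0.25 + 300 * 0.25**2           # noise-free value at strain 0.25
```

In the paper's setting, the smoothed response is embedded in the equilibrium equations of the structure rather than evaluated pointwise, but the estimator is the same.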
Solid oxide fuel cells (SOFCs) are considered to be one of the most important clean, distributed resources. However, SOFCs present a challenging control problem owing to their slow dynamics, nonlinearity, and tight operating constraints. A novel data-driven nonlinear control strategy was proposed to solve the SOFC control problem by combining a virtual reference feedback tuning (VRFT) method and a support vector machine. To fulfill the requirements for fuel utilization and control constraints, a dynamic constraints unit and an anti-windup scheme were adopted. In addition, a feedforward loop was designed to deal with the current disturbance. Detailed simulations demonstrate that a fast fuel-flow response to current demand disturbances and zero steady-state error of the output voltage are both achieved. Meanwhile, fuel utilization is kept almost entirely within the safe region.
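The VRFT ingredient can be shown in its bare linear form: from one open-loop I/O record, reconstruct the virtual reference through the inverse of the desired closed-loop model, form the virtual tracking error, and fit the controller parameters by least squares. The first-order plant, reference model, and PI controller class below are assumptions for illustration; the paper additionally uses a support vector machine and constraint handling.

```python
import numpy as np

rng = np.random.default_rng(4)
a, b = 0.9, 0.5                  # assumed plant: y(k+1) = a*y(k) + b*u(k)
u = rng.uniform(-1, 1, 200)      # open-loop excitation input
y = np.zeros(201)
for k in range(200):
    y[k + 1] = a * y[k] + b * u[k]

m = 0.7                          # desired closed loop: y(k+1) = m*y(k) + (1-m)*r(k)
# Virtual reference r and virtual tracking error e reconstructed from the data.
r = (y[1:] - m * y[:-1]) / (1 - m)
e = r - y[:-1]
# Fit a PI law u(k) = Kp*e(k) + Ki*sum(e(0..k)) by least squares.
E = np.column_stack([e, np.cumsum(e)])
theta, *_ = np.linalg.lstsq(E, u, rcond=None)
Kp, Ki = theta
```

For this plant and reference model the ideal controller lies exactly in the PI class (Kp = a(1-m)/b, Ki = (1-m)/b - Kp), so the fit recovers it from data alone, without ever identifying a and b explicitly.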
Complex engineered systems are often difficult to analyze and design due to the tangled interdependencies among their subsystems and components. Conventional design methods often need exact modeling or accurate structure decomposition, which limits their practical application. The rapid expansion of data makes utilizing data to guide and improve system design indispensable in practical engineering. In this paper, a data-driven uncertainty evaluation approach is proposed to support the design of complex engineered systems. The core of the approach is a data-mining-based uncertainty evaluation method that predicts the uncertainty level of a specific system design by analyzing association relations along different system attributes and synthesizing the information entropy of the covered attribute areas; a quantitative measure of system uncertainty can be obtained accordingly. Monte Carlo simulation is introduced to obtain the uncertainty extrema, and the possible data distributions under different situations are discussed in detail. The uncertainty values can be normalized using the simulation results, and the normalized values can be used to evaluate different system designs. A prototype system is established, and two case studies have been carried out. The case of an inverted pendulum system validates the effectiveness of the proposed method, and the case of an oil sump design shows its practicability when two or more design plans need to be compared. This research can be used to evaluate the uncertainty of complex engineered systems relying entirely on data, and is ideally suited for plan selection and performance analysis in system design.
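The entropy-based core of the approach can be illustrated in miniature: quantify the uncertainty of a set of observed attribute outcomes with Shannon entropy, then normalize by the maximum entropy so different designs are comparable on a 0-1 scale. The two candidate "designs" and their outcome histograms are illustrative, not the paper's case-study data.

```python
import numpy as np

def normalized_entropy(counts):
    """Shannon entropy of a discrete outcome histogram, scaled to [0, 1]."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    h = -np.sum(p * np.log2(p))
    return h / np.log2(len(counts))  # divide by max entropy for this bin count

# Outcomes concentrated on one bin (predictable) vs. spread uniformly (uncertain).
design_a = [90, 5, 3, 2]
design_b = [25, 25, 25, 25]
ua, ub = normalized_entropy(design_a), normalized_entropy(design_b)
```

In the paper, the normalization bounds come from Monte Carlo extrema rather than the analytic maximum used here, but the comparison logic (lower normalized entropy means a more predictable design) is the same.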
Data-driven fault diagnosis methods can improve the reliability of analog circuits by using the data generated by them. These data have characteristics such as randomness and incompleteness, which make the diagnostic results sensitive to specific values and random noise. This paper presents a data-driven fault diagnosis method for analog circuits based on robust competitive agglomeration (RCA), which alleviates the incompleteness of the data by clustering with a competing process. The robustness of the diagnostic results is enhanced by applying robust statistics within RCA. A series of experiments demonstrates that RCA can classify incomplete data with high accuracy. The experimental results show that RCA is robust both to the data to be classified and to the parameters to be adjusted. The effectiveness of RCA in practical use is demonstrated on two analog circuits.
In this study, the medium-term response of beach profiles was investigated at two sites: a gently sloping sandy beach and a steeper mixed sand and gravel beach. The former is the Duck site in North Carolina, on the east coast of the USA, which is exposed to Atlantic Ocean swells and storm waves, and the latter is the Milford-on-Sea site at Christchurch Bay, on the south coast of England, which is partially sheltered from Atlantic swells but has a directionally bimodal wave exposure. The data sets comprise detailed bathymetric surveys of beach profiles covering a period of more than 25 years for the Duck site and over 18 years for the Milford-on-Sea site. The structure of the data sets and the data-driven methods are described. Canonical correlation analysis (CCA) was used to find linkages between the wave characteristics and beach profiles. The sensitivity of the linkages was investigated by deploying a wave height threshold to filter out the smaller waves incrementally. The results of the analysis indicate that, for the gently sloping sandy beach, waves of all heights are important to the morphological response. For the mixed sand and gravel beach, filtering the smaller waves improves the statistical fit, suggesting that low-height waves do not play a primary role in the medium-term morphological response, which is primarily driven by the intermittent larger storm waves.
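The CCA linkage step can be sketched numerically: two multivariate data sets ("wave" and "profile" variables) sharing one latent mode should yield a leading canonical correlation near one. The synthetic data and the QR-plus-SVD computation below are a minimal stand-in for the study's wave-to-profile analysis.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
mode = rng.normal(size=n)   # shared latent forcing (e.g., a storm-wave mode)
waves = np.column_stack([mode, rng.normal(size=n)]) \
        + 0.1 * rng.normal(size=(n, 2))
profiles = np.column_stack([2 * mode, rng.normal(size=n)]) \
           + 0.1 * rng.normal(size=(n, 2))

def leading_canonical_corr(X, Y):
    """First canonical correlation via QR of each block and SVD of Qx^T Qy."""
    X = X - X.mean(0); Y = Y - Y.mean(0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(qx.T @ qy, compute_uv=False)
    return s[0]

rho = leading_canonical_corr(waves, profiles)
```

Applying a wave-height threshold, as in the study, simply means recomputing rho after removing records below the threshold and checking whether the fit improves.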
The recently proposed data-driven pole placement method is able to use measurement data to simultaneously identify a state-space model and derive a pole placement state feedback gain. It can achieve this precisely for systems that are linear time-invariant and for which noiseless measurement datasets are available. However, for nonlinear systems, or when only noisy measurement datasets are available, this approach is unable to yield satisfactory results. In this study, we investigated the effect on data-driven pole placement performance of introducing a prefilter to reduce the noise present in the datasets. Using numerical simulations of a self-balancing robot, we demonstrated the important role that prefiltering can play in reducing the interference caused by noise.
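The prefiltering effect can be demonstrated on a toy identification problem: estimate a first-order model from noisy data by least squares, with and without a simple moving-average prefilter applied to both input and output (a linear filter preserves the I/O relation of an LTI plant, so the filtered data still satisfy the same model). The plant and noise level are assumed for illustration; the study uses a self-balancing robot model.

```python
import numpy as np

rng = np.random.default_rng(6)
a_true, b_true = 0.8, 0.4
u = rng.uniform(-1, 1, 2000)
y = np.zeros(2001)
for k in range(2000):
    y[k + 1] = a_true * y[k] + b_true * u[k]
y_noisy = y + rng.normal(0, 0.1, y.size)   # measurement noise on the output

def identify(ys, us):
    """Least-squares fit of [a, b] in y(k+1) = a*y(k) + b*u(k)."""
    A = np.column_stack([ys[:-1], us])
    theta, *_ = np.linalg.lstsq(A, ys[1:], rcond=None)
    return theta

def prefilter(x, w=5):
    """Centered moving-average low-pass filter of width w."""
    return np.convolve(x, np.ones(w) / w, mode="same")

a_raw, b_raw = identify(y_noisy, u)
a_filt, b_filt = identify(prefilter(y_noisy), prefilter(u))
```

The unfiltered estimate of `a` suffers the classic noise-in-regressor attenuation bias; prefiltering shrinks the noise power in the regressor and pulls the estimate back toward the true value, which is the mechanism the study exploits.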
In this paper, a real-time online data-driven adaptive method is developed to deal with uncertainties such as high nonlinearity, strong coupling, parameter perturbation, and external disturbances in the attitude control of fixed-wing unmanned aerial vehicles (UAVs). Firstly, a model-free adaptive control (MFAC) method requiring only input/output (I/O) data and no model information is adopted for the control scheme design of the angular velocity subsystem, which contains all the model information and the aforementioned uncertainties. Secondly, the internal model control (IMC) method, featuring fewer tuning parameters and a convenient tuning process, is adopted for the control scheme design of the certain (model-known) Euler angle subsystem. Simulation results show that the developed method is clearly superior to the cascade PID (CPID) method and the nonlinear dynamic inversion (NDI) method.
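The compact-form MFAC scheme can be sketched on a scalar example: the controller sees only I/O data, estimates a pseudo-partial derivative (PPD) online, and updates the input with an increment proportional to the tracking error. The nonlinear plant below and all gains are illustrative stand-ins for the angular velocity subsystem.

```python
import numpy as np

eta, mu = 0.8, 1.0   # PPD estimator step size and regularization
rho, lam = 0.6, 1.0  # control step size and regularization
y_ref = 1.0          # attitude-rate setpoint

def plant(y, u):
    """Assumed unknown nonlinear plant (not available to the controller)."""
    return 0.6 * y + 0.8 * u + 0.1 * np.sin(y)

y = [0.0, 0.0]
u = [0.0, 0.0]
phi = 1.0            # initial PPD estimate
for k in range(1, 200):
    du = u[k] - u[k - 1]
    # PPD estimate updated from the latest I/O increments.
    phi = phi + eta * du / (mu + du**2) * (y[k] - y[k - 1] - phi * du)
    # Compact-form MFAC control law (incremental, integral-like action).
    u_new = u[k] + rho * phi / (lam + phi**2) * (y_ref - y[k])
    u.append(u_new)
    y.append(plant(y[k], u_new))

steady_error = abs(y_ref - y[-1])
```

Because the input update vanishes only when the tracking error does, the scheme drives the output to the setpoint without any model of the plant, which is the property the paper relies on for the uncertain angular velocity loop.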
Funding: Supported by the State Key Program of the National Natural Science Foundation of China (60834001) and the National Natural Science Foundation of China (60774022). Acknowledgement: The authors would like to thank the NSFC organizers and participants who shared their ideas and work with us during the NSFC workshop on data-based control, decision making, scheduling, and fault diagnosis. In particular, the authors would like to thank Chai Tian-You, Sun You-Xian, Wang Hong, Yan Hong-Sheng, and Gao Fu-Rong for discussing the concept of the design model shown in Fig. 12, the concept of temporal multi-scale shown in Fig. 8, the concept of fault diagnosis shown in Fig. 14, the concept of dynamic scheduling shown in Fig. 15, and the concept of the interval model shown in Fig. 16, respectively.
Funding: Supported by the National Basic Research Program of China (973 Program) (2009CB320600), the National Natural Science Foundation of China (60828007, 60534010, 60821063), the Leverhulme Trust (F/00.120/BC) in the United Kingdom, and the 111 Project (B08015)
Funding: Partially supported by the National Natural Science Foundation of China (61751306, 61801208, 61671233), the Jiangsu Science Foundation (BK20170650), the Postdoctoral Science Foundation of China (BX201700118, 2017M621712), the Jiangsu Postdoctoral Science Foundation (1701118B), and the Fundamental Research Funds for the Central Universities (021014380094)
Abstract: During the past few decades, mobile wireless communications have experienced four generations of technological revolution, namely from 1G to 4G, and the deployment of the latest 5G networks is expected to take place in 2019. One fundamental question is how we can push forward the development of mobile wireless communications when it has become an extremely complex and sophisticated system. We believe that the answer lies in the huge volumes of data produced by the network itself, and machine learning may become the key to exploiting this information. In this paper, we elaborate on why the conventional model-based paradigm, which has been widely proven useful in pre-5G networks, can be less efficient or even impractical in future 5G and beyond mobile networks. Then, we explain how the data-driven paradigm, using state-of-the-art machine learning techniques, can become a promising solution. Finally, we provide a typical use case of the data-driven paradigm, i.e., proactive load balancing, in which online learning is utilized to adjust cell configurations in advance to avoid burst congestion caused by rapid traffic changes.
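The proactive load-balancing use case can be caricatured as follows: predict each cell's next-interval load online and steer traffic away from cells that are about to congest. This is an illustrative sketch, not the paper's algorithm; the prediction rule, threshold, and load values are all assumptions.

```python
def ewma_predict(history, alpha=0.7):
    """Online one-step load prediction via an exponentially weighted average."""
    pred = history[0]
    for x in history[1:]:
        pred = alpha * x + (1 - alpha) * pred
    return pred

def rebalance(cell_loads, threshold=0.8):
    """Plan handovers: steer traffic from predicted-overloaded cells
    to the cell with the lowest predicted load."""
    preds = {c: ewma_predict(h) for c, h in cell_loads.items()}
    plan = {}
    for cell, load in preds.items():
        if load > threshold:
            plan[cell] = min(preds, key=preds.get)
    return plan

# Hypothetical recent load histories for three cells (utilization in [0, 1])
loads = {"A": [0.5, 0.7, 0.95], "B": [0.3, 0.2, 0.25], "C": [0.6, 0.6, 0.6]}
plan = rebalance(loads)   # cell "A" is predicted to congest
```

The point of acting on the *prediction* rather than the current load is exactly the "in advance" aspect the abstract emphasizes: the handover happens before the burst congestion materializes.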
Funding: Supported by the National Key R&D Program of China (Grant No. 2021YFC2100100), the National Natural Science Foundation of China (Grant No. 21901157), the Shanghai Science and Technology Project of China (Grant No. 21JC1403400), and the SJTU Global Strategic Partnership Fund (Grant No. 2020 SJTUHUJI)
Abstract: The application scope and future development directions of machine learning models (supervised learning, transfer learning, and unsupervised learning) that have driven energy material design are discussed.
Funding: Supported by the National Natural Science Foundation of China (61773087), the National Key Research and Development Program of China (2018YFB1601500), and the High-tech Ship Research Project of the Ministry of Industry and Information Technology: Research of Intelligent Ship Testing and Verification ([2018]473)
Abstract: Fault prognosis mainly refers to the estimation of the operating time before a failure occurs, which is vital for ensuring the stability, safety, and long lifetime of degrading industrial systems. Based on the results of fault prognosis, the maintenance strategy for the underlying industrial systems can shift from passive maintenance to active maintenance. With the increased complexity and improved automation level of industrial systems, fault prognosis techniques have become more and more indispensable. In particular, data-driven prognosis approaches, which tend to find the hidden fault factors and determine the specific fault occurrence time of a system by analysing historical or real-time measurement data, have gained great attention from different industrial sectors. In this context, the major task of this paper is to present a systematic overview of data-driven fault prognosis for industrial systems. First, the characteristics of different prognosis methods are reviewed, with the data-based ones highlighted. Moreover, based on the different data characteristics that exist in industrial systems, the corresponding fault prognosis methodologies are illustrated, with emphasis on analyses and comparisons of the different prognosis methods. Finally, we reveal current research trends and look forward to future challenges in this field. This review is expected to serve as a tutorial and a source of references for fault prognosis researchers.
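One of the simplest data-driven prognosis ideas surveyed in this literature is degradation-trend extrapolation: fit a trend to a health indicator and extrapolate to a failure threshold to estimate the remaining useful life (RUL). The sketch below uses a linear least-squares fit on synthetic data; real methods in the survey are far more elaborate.

```python
# Minimal RUL sketch (illustrative data and threshold, not from the paper):
# fit a linear degradation trend, extrapolate to the failure level.

def estimate_rul(times, health, failure_level):
    """Least-squares line fit, then extrapolate to the failure threshold."""
    n = len(times)
    mt = sum(times) / n
    mh = sum(health) / n
    slope = sum((t - mt) * (h - mh) for t, h in zip(times, health)) / \
            sum((t - mt) ** 2 for t in times)
    intercept = mh - slope * mt
    t_fail = (failure_level - intercept) / slope   # time threshold is reached
    return t_fail - times[-1]                      # remaining time from now

# Health indicator degrading from 1.0 toward a failure level of 0.2
rul = estimate_rul([0, 1, 2, 3], [1.0, 0.9, 0.8, 0.7], failure_level=0.2)
```

The estimated RUL is what drives the passive-to-active maintenance conversion mentioned above: maintenance is scheduled before the extrapolated failure time rather than after a fault.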
Funding: This work received funding from the EU Smarter project (PEOPLE-2013-IAPP-610675)
Abstract: To achieve zero-defect production during computer numerical control (CNC) machining processes, it is imperative to develop effective diagnosis systems that detect anomalies efficiently. However, due to the dynamic conditions of the machine and tooling during machining processes, the diagnosis systems currently adopted in industry are inadequate. To address this issue, this paper presents a novel data-driven anomaly diagnosis system. In this system, power data for condition monitoring are continuously collected during dynamic machining processes to support online diagnosis analysis. To facilitate the analysis, preprocessing mechanisms have been designed to de-noise, normalize, and align the monitored data. Important features are extracted from the monitored data, and thresholds are defined to identify anomalies. Because the conditions of the machine and tooling vary during machining, the thresholds used to identify anomalies can vary as well. Based on historical data, the threshold values are optimized using a fruit fly optimization (FFO) algorithm to achieve more accurate detection. Practical case studies were used to validate the system, demonstrating its potential and effectiveness for industrial applications.
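The threshold-optimization step can be sketched with a minimal one-dimensional FFO loop: random "flies" search around the current swarm location, and the swarm moves to the fly with the best fitness (here, the fewest misclassifications on labelled history). The power readings, labels, and parameter values are invented for illustration.

```python
import random

# Hypothetical labelled history of a monitored power feature
random.seed(0)
normal = [1.0, 1.1, 0.9, 1.05, 0.95]   # readings during normal machining
anomalous = [1.8, 2.0, 1.9]            # readings during known anomalies

def error(threshold):
    """Misclassifications: normal readings above + anomalies at or below."""
    return sum(x > threshold for x in normal) + \
           sum(x <= threshold for x in anomalous)

best_t = swarm = 1.0                   # initial swarm location
best_e = error(best_t)
for _ in range(100):                   # FFO generations
    flies = [swarm + random.uniform(-0.5, 0.5) for _ in range(10)]
    t = min(flies, key=error)          # fly with the best "smell" (fitness)
    if error(t) < best_e:
        best_t, best_e = t, error(t)
    swarm = best_t                     # the swarm moves to the best fly
```

In the paper the thresholds are condition-dependent, so this search would run per operating condition rather than once globally.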
Funding: Supported by the National Natural Science Foundation of China (61622301, 61533002), the Beijing Natural Science Foundation (4172005), and the Major National Science and Technology Project (2017ZX07104)
Abstract: In the wastewater treatment process (WWTP), accurate and real-time monitoring values of key variables are crucial for operational strategies. However, most existing methods have difficulty obtaining the real-time values of some key variables in the process. To handle this issue, a data-driven intelligent monitoring system, using the soft sensor technique and a data distribution service, is developed to monitor the concentrations of effluent total phosphorus (TP) and ammonia nitrogen (NH_4-N). In this intelligent monitoring system, a fuzzy neural network (FNN) is applied to design the soft sensor model, and a principal component analysis (PCA) method is used to select the input variables of the soft sensor model. Moreover, data transfer software is exploited to integrate the soft sensor technique into the supervisory control and data acquisition (SCADA) system. Finally, the proposed intelligent monitoring system is tested in several real plants to demonstrate the reliability and effectiveness of its monitoring performance.
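The PCA input-selection step can be illustrated in a few lines: decompose the (centred) candidate process variables and keep the principal components that explain most of the variance, which would then feed the soft-sensor model (an FNN in the paper; omitted here). The data matrix is synthetic, with one deliberately redundant variable.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                     # candidate process variables
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=100)   # a nearly redundant variable
Xc = X - X.mean(axis=0)                           # centre the data
cov = Xc.T @ Xc / (len(X) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
order = np.argsort(eigvals)[::-1]                 # sort by variance explained
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
ratio = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(ratio, 0.95) + 1)         # components for 95% variance
scores = Xc @ eigvecs[:, :k]                      # reduced soft-sensor inputs
```

Because one column nearly duplicates another, fewer components than original variables suffice, which is exactly why PCA is useful for pruning soft-sensor inputs.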
Funding: Supported by the National Basic Research Program of China (973 Program) (2013CB035500), the National Natural Science Foundation of China (61233004, 61221003, 61074061), the International Cooperation Program of the Shanghai Science and Technology Commission (12230709600), and the Higher Education Research Fund for the Doctoral Program of China (20120073130006)
Funding: Supported by JSPS KAKENHI (Grants 17K06633 and 18K18898)
Abstract: This paper presents a simple nonparametric regression approach to data-driven computing in elasticity. We apply kernel regression to the material data set and formulate a system of nonlinear equations that is solved to obtain a static equilibrium state of an elastic structure. Preliminary numerical experiments illustrate that, compared with existing methods, the proposed method finds a reasonable solution even if the data points are distributed coarsely in a given material data set.
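A one-dimensional caricature of this approach: a Nadaraya-Watson kernel regression over sampled (strain, stress) pairs replaces the constitutive function, and the equilibrium of a single bar under a prescribed axial stress is found by a root solve. The data points, bandwidth, and bisection solver are illustrative choices, not the paper's formulation.

```python
import math

# Sampled material data: (strain, stress) pairs from a tanh-like response
data = [(e / 10, math.tanh(e / 10)) for e in range(11)]

def stress(strain, h=0.05):
    """Nadaraya-Watson kernel-regression estimate of stress at a strain."""
    w = [math.exp(-((strain - e) / h) ** 2) for e, _ in data]
    return sum(wi * s for wi, (_, s) in zip(w, data)) / sum(w)

def equilibrium_strain(applied_stress, lo=0.0, hi=1.0):
    """Solve stress(strain) = applied_stress by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if stress(mid) < applied_stress:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

eps = equilibrium_strain(0.4)   # strain at which the bar carries 0.4 stress
```

The kernel smoothing is what lets the method interpolate sensibly between coarsely distributed data points, the situation the abstract highlights.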
Funding: Projects (51076027, 51036002) supported by the National Natural Science Foundation of China; Project (20090092110051) supported by the Doctoral Fund of the Ministry of Education of China
Abstract: Solid oxide fuel cells (SOFCs) are considered to be one of the most important clean, distributed resources. However, SOFCs present a challenging control problem owing to their slow dynamics, nonlinearity, and tight operating constraints. A novel data-driven nonlinear control strategy was proposed to solve the SOFC control problem by combining a virtual reference feedback tuning (VRFT) method and a support vector machine. To fulfill the requirements on fuel utilization and control constraints, a dynamic constraints unit and an anti-windup scheme were adopted. In addition, a feedforward loop was designed to deal with the current disturbance. Detailed simulations demonstrate that a fast fuel-flow response to the current demand disturbance and zero steady-state error of the output voltage are both achieved. Meanwhile, fuel utilization is kept almost entirely within the safe region.
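The core VRFT step can be sketched with a linear toy problem (the SVM component, constraint handling, and feedforward loop of the paper are omitted): from one open-loop I/O data set, pass the output through the inverse reference model to get a virtual reference, and fit controller gains by least squares so the controller would have produced the recorded input. The plant and reference model below are simple first-order stand-ins, not an SOFC model.

```python
import random

# Collect open-loop I/O data from the (unknown-to-the-designer) plant
random.seed(3)
N = 200
u = [random.uniform(-1, 1) for _ in range(N)]
y = [0.0]
for k in range(N - 1):
    y.append(0.9 * y[k] + 0.1 * u[k])          # "measured" plant output

# Reference model: y(k+1) = 0.8 y(k) + 0.2 r(k).  Invert it to obtain the
# virtual reference, then form the virtual tracking error.
e = [(y[k + 1] - 0.8 * y[k]) / 0.2 - y[k] for k in range(N - 1)]
s = [sum(e[:k + 1]) for k in range(len(e))]    # running sum for the I term

# Least squares for a PI controller: u(k) ~ Kp e(k) + Ki s(k)
a11 = sum(x * x for x in e); a12 = sum(x * z for x, z in zip(e, s))
a22 = sum(z * z for z in s)
b1 = sum(x * w for x, w in zip(e, u)); b2 = sum(z * w for z, w in zip(s, u))
det = a11 * a22 - a12 * a12
Kp = (b1 * a22 - b2 * a12) / det
Ki = (a11 * b2 - a12 * b1) / det
```

For this plant the ideal controller happens to lie exactly in the PI class, so the least-squares fit recovers it from data alone, which is the appeal of VRFT: no plant model is ever identified.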
基金Supported by National Hi-tech Research and Development Program of China(863 Program,Grant No.2015AA042101)
Abstract: Complex engineered systems are often difficult to analyze and design due to the tangled interdependencies among their subsystems and components. Conventional design methods often require exact modeling or accurate structure decomposition, which limits their practical application. The rapid expansion of data makes utilizing data to guide and improve system design indispensable in practical engineering. In this paper, a data-driven uncertainty evaluation approach is proposed to support the design of complex engineered systems. The core of the approach is a data-mining-based uncertainty evaluation method that predicts the uncertainty level of a specific system design by analyzing association relations along different system attributes and synthesizing the information entropy of the covered attribute areas, from which a quantitative measure of system uncertainty can be obtained. Monte Carlo simulation is introduced to obtain the uncertainty extrema, and the possible data distributions under different situations are discussed in detail. The uncertainty values can be normalized using the simulation results and then used to evaluate different system designs. A prototype system was established, and two case studies were carried out. The case of an inverted pendulum system validates the effectiveness of the proposed method, and the case of an oil sump design shows its practicability when two or more design plans need to be compared. This research can be used to evaluate the uncertainty of complex engineered systems relying entirely on data, and is ideally suited for plan selection and performance analysis in system design.
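A stripped-down sketch of the entropy-plus-Monte-Carlo idea: score a design's uncertainty as the Shannon entropy of its attribute distribution, use Monte Carlo sampling to estimate the entropy extremum, and normalize the score by it. The attribute counts and bin structure are invented for illustration; the paper works with association relations across many attributes.

```python
import math, random

def entropy(counts):
    """Shannon entropy (bits) of a discrete attribute distribution."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

random.seed(1)
n_bins = 4
# Monte Carlo search for the entropy extremum over random distributions;
# the theoretical maximum for 4 bins is log2(4) = 2 bits.
h_max = max(entropy([random.random() for _ in range(n_bins)])
            for _ in range(5000))

design_counts = [8, 1, 1, 0]           # concentrated attribute values
u = entropy(design_counts) / h_max     # normalized uncertainty in (0, 1)
```

Normalizing against the simulated extremum is what makes scores comparable across design plans, which is the plan-selection use the abstract describes.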
Funding: Supported by the National Natural Science Foundation of China (61202078, 61071139) and the National High Technology Research and Development Program of China (863 Program) (SQ2011AA110101)
Abstract: Data-driven fault diagnosis methods can improve the reliability of analog circuits by using the data generated from them. These data have characteristics such as randomness and incompleteness, which make the diagnostic results sensitive to specific values and random noise. This paper presents a data-driven fault diagnosis method for analog circuits based on robust competitive agglomeration (RCA), which alleviates the incompleteness of the data by clustering through a competitive process. The robustness of the diagnostic results is enhanced by using robust statistics within RCA. A series of experiments is provided to demonstrate that RCA can classify incomplete data with high accuracy. The experimental results show that RCA is robust both to the data being classified and to the parameters that need to be adjusted. The effectiveness of RCA in practical use is demonstrated on two analog circuits.
Funding: Supported by the UK Natural Environment Research Council (Grant No. NE/J005606/1), the UK Engineering and Physical Sciences Research Council (Grant No. EP/C005392/1), and the Ensemble Estimation of Flood Risk in a Changing Climate (EFRaCC) project funded by the British Council under its Global Innovation Initiative
Abstract: In this study, the medium-term response of beach profiles was investigated at two sites: a gently sloping sandy beach and a steeper mixed sand and gravel beach. The former is the Duck site in North Carolina, on the east coast of the USA, which is exposed to Atlantic Ocean swells and storm waves; the latter is the Milford-on-Sea site at Christchurch Bay, on the south coast of England, which is partially sheltered from Atlantic swells but has a directionally bimodal wave exposure. The data sets comprise detailed bathymetric surveys of beach profiles covering a period of more than 25 years at the Duck site and over 18 years at the Milford-on-Sea site. The structure of the data sets and the data-driven methods are described. Canonical correlation analysis (CCA) was used to find linkages between the wave characteristics and the beach profiles. The sensitivity of the linkages was investigated by deploying a wave height threshold to filter out the smaller waves incrementally. The results of the analysis indicate that, for the gently sloping sandy beach, waves of all heights are important to the morphological response. For the mixed sand and gravel beach, filtering out the smaller waves improves the statistical fit, suggesting that low-height waves do not play a primary role in the medium-term morphological response, which is primarily driven by the intermittent larger storm waves.
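The threshold-filtering step can be demonstrated on toy data: correlate wave heights with a profile-change indicator before and after discarding waves below a height threshold. Plain Pearson correlation stands in for the canonical correlation analysis used in the study, and the data are synthetic, constructed so that small waves carry little signal.

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# (wave height, profile change): small waves are noise-like here
waves  = [0.4, 0.5, 0.6, 0.5, 2.0, 2.5, 3.0, 3.5]
change = [0.3, -0.2, 0.1, -0.1, 1.0, 1.3, 1.5, 1.8]

r_all = pearson(waves, change)
keep = [(w, c) for w, c in zip(waves, change) if w >= 1.0]  # height threshold
r_filtered = pearson([w for w, _ in keep], [c for _, c in keep])
```

Raising the threshold incrementally and watching the statistical fit, as the study does, is what distinguishes the two beach types: the fit improves with filtering only where small waves are morphologically unimportant.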
Abstract: The recently proposed data-driven pole placement method is able to make use of measurement data to simultaneously identify a state-space model and derive a pole placement state feedback gain. It can achieve this precisely for systems that are linear time-invariant and for which noiseless measurement datasets are available. However, for nonlinear systems, and/or when the only measurement datasets available contain noise, this approach is unable to yield satisfactory results. In this study, we investigated the effect on data-driven pole placement performance of introducing a prefilter to reduce the noise present in datasets. Using numerical simulations of a self-balancing robot, we demonstrated the important role that prefiltering can play in reducing the interference caused by noise.
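The prefiltering idea can be shown in isolation: smooth the noisy measurements before they are used by the identification step (the pole-placement computation itself is omitted). A causal moving-average FIR filter is used here as one simple choice of prefilter; the signal and noise are synthetic.

```python
import random

def moving_average(signal, window=5):
    """Causal moving-average prefilter over the last `window` samples."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

random.seed(42)
clean = [0.01 * k for k in range(200)]             # true (ramp) response
noisy = [c + random.gauss(0, 0.1) for c in clean]  # measured with noise
filtered = moving_average(noisy)

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

improved = mse(filtered, clean) < mse(noisy, clean)
```

The trade-off the study investigates is visible even here: averaging suppresses the noise variance but introduces a small lag bias, so the filter window must be chosen with the dataset's dynamics in mind.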
Abstract: In this paper, a real-time online data-driven adaptive method is developed to deal with uncertainties such as high nonlinearity, strong coupling, parameter perturbation, and external disturbances in the attitude control of fixed-wing unmanned aerial vehicles (UAVs). First, a model-free adaptive control (MFAC) method requiring only input/output (I/O) data and no model information is adopted for the control scheme design of the angular velocity subsystem, which contains all the model information and the aforementioned uncertainties. Second, the internal model control (IMC) method, featuring fewer tuning parameters and a convenient tuning process, is adopted for the control scheme design of the certain Euler angle subsystem. Simulation results show that the developed method is clearly superior to the cascade PID (CPID) method and the nonlinear dynamic inversion (NDI) method.
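A compact-form dynamic-linearization MFAC loop, in its standard textbook form rather than the paper's exact UAV scheme, can be sketched on a scalar nonlinear plant: a pseudo-partial derivative (PPD) is estimated online from I/O increments only, then used in the control update. The plant, gains, and reference are illustrative assumptions.

```python
def plant(y, u):
    """Nonlinear SISO dynamics, unknown to the controller."""
    return 0.6 * y + u / (1.0 + y * y)

eta, mu = 1.0, 1.0     # PPD estimator step size and weight
rho, lam = 0.8, 1.0    # control step size and weight
r = 1.0                # constant reference (illustrative)

y, u, u_prev = 0.0, 0.1, 0.0
phi = 1.0              # pseudo-partial derivative estimate
for _ in range(200):
    y_next = plant(y, u)
    du = u - u_prev
    # Update the PPD estimate from the latest I/O increments only
    phi += eta * du / (mu + du * du) * (y_next - y - phi * du)
    y, u_prev = y_next, u
    # MFAC law: adjust u along the estimated I/O sensitivity
    u = u_prev + rho * phi / (lam + phi * phi) * (r - y)

err = abs(r - y)       # final tracking error
```

Note that nothing about the plant's structure enters the controller, only measured inputs and outputs, which is the property that makes MFAC attractive for the uncertain angular velocity subsystem described above.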