Brain tissue is one of the softest parts of the human body, composed of white matter and grey matter. The mechanical behavior of brain tissue plays an essential role in regulating brain morphology and brain function. In addition, traumatic brain injury (TBI) and various brain diseases are strongly influenced by the brain's mechanical properties. Whether white matter or grey matter, brain tissue contains multiscale structures composed of neurons, glial cells, fibers, blood vessels, etc., each with different mechanical properties. As such, brain tissue exhibits complex mechanical behavior, usually with strong nonlinearity, heterogeneity, and directional dependence. Building a constitutive law for multiscale brain tissue using traditional function-based approaches can be very challenging. Instead, this paper proposes a data-driven approach to establish the desired mechanical model of brain tissue. We focus on blood vessels with internal pressure embedded in a white or grey matter matrix material to demonstrate our approach. The matrix is described by an isotropic or anisotropic nonlinear elastic model. A representative unit cell (RUC) containing blood vessels is built and used to generate stress-strain data under different internal blood pressures and various proportional displacement loading paths. The generated stress-strain data are then used to train a mechanical law with artificial neural networks to predict the macroscopic mechanical response of brain tissue under different internal pressures. Finally, the trained material model is implemented in finite element software to predict the mechanical behavior of a whole brain under intracranial pressure and distributed body forces. Compared with a direct numerical simulation that employs a reference material model, our approach greatly reduces the computational cost and improves modeling efficiency, and its predictions are sufficiently accurate. Specifically, we find that the level of internal blood pressure can greatly influence the stress distribution and determine the possible related damage behaviors.
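A minimal sketch of the surrogate-training step described above: fit a neural network mapping strain components and internal blood pressure to stress components. The data here are synthetic placeholders standing in for RUC simulation outputs, and the network size and variable names are illustrative assumptions, not the paper's configuration.

```python
# Sketch: train an ANN surrogate on (strain, pressure) -> stress pairs.
# Synthetic placeholder data; in the paper these come from RUC simulations
# under proportional displacement loading at various blood pressures.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
strain = rng.uniform(-0.2, 0.2, size=(n, 3))    # e.g. exx, eyy, exy
pressure = rng.uniform(0.0, 5.0, size=(n, 1))   # internal blood pressure (kPa)
X = np.hstack([strain, pressure])
# Placeholder "constitutive law": stiffening response plus a pressure term.
y = 10 * strain + 50 * strain**3 + 0.3 * pressure  # stress components (kPa)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))
```

The trained surrogate can then be queried pointwise at each integration point of a finite element model, which is what makes it cheaper than a full direct numerical simulation of the RUC.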
With ongoing advancements in sensor networks and data acquisition technologies across systems in manufacturing, aviation, and healthcare, data-driven vibration control (DDVC) has attracted broad interest from both the industrial and academic communities. Input shaping (IS), a simple and effective feedforward method, is in great demand in DDVC. It convolves the desired input command with an impulse sequence, without requiring parametric dynamics or the closed-loop system structure, thereby suppressing residual vibration. Based on a thorough investigation of state-of-the-art DDVC methods, this survey makes the following efforts: 1) introducing IS theory and typical input shapers; 2) categorizing recent progress in DDVC methods; 3) summarizing commonly adopted metrics for DDVC; and 4) discussing engineering applications and future trends of DDVC. By doing so, this study provides a systematic and comprehensive overview of existing DDVC methods, from design to optimization perspectives, aiming to promote future research on this emerging and vital issue.
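To make the convolution idea concrete, here is a sketch of the classic zero-vibration (ZV) shaper, one of the typical input shapers such a survey covers. The natural frequency, damping ratio, and command below are illustrative values, not taken from the survey.

```python
# Sketch: zero-vibration (ZV) input shaping. A two-impulse sequence is
# derived from an assumed natural frequency wn and damping ratio zeta,
# then convolved with the desired command.
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Return the two-impulse ZV sequence sampled at interval dt."""
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    t2 = np.pi / (wn * np.sqrt(1 - zeta**2))     # half of the damped period
    seq = np.zeros(int(round(t2 / dt)) + 1)
    seq[0], seq[-1] = 1 / (1 + K), K / (1 + K)   # impulse amplitudes sum to 1
    return seq

dt = 0.001
command = np.ones(1000)                          # unshaped step command
shaped = np.convolve(command, zv_shaper(wn=10.0, zeta=0.05, dt=dt))
print(shaped[:5], shaped[-5:])
```

Because the shaper only needs an estimate of the dominant frequency and damping, it fits the data-driven setting: those two quantities can be identified from measured response data rather than from a parametric model.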
Based on actual data collected from a tight sandstone development zone, correlation analysis using the Spearman method was conducted to determine the main factors influencing the gas production rate of tight sandstone fracturing. An integrated model combining geological engineering with numerical simulation of fracture propagation and production was completed. Based on the data analysis, the hydraulic fracture parameters were optimized to develop a differentiated fracturing treatment adjustment plan. The results indicate that the influence of geological and engineering factors differs significantly between the X1 and X2 development zones in the study area; it is therefore challenging to achieve a rapid production increase with a uniform development strategy. The data analysis reveals that the variation in gas production rate is primarily affected by reservoir thickness and permeability among the geological factors, while the amount of treatment fluid and proppant added significantly impacts the gas production rate among the engineering factors. The influence of geological factors is more pronounced in block X1, so the main focus there should be on further optimizing the fracturing interval and adjusting the geological development well locations; given the existing well locations, there is limited potential to increase production by further optimizing fracture parameters. For block X2, the fracturing parameters should be optimized. Data screening was conducted to identify outliers in the dataset, and a data-driven fracturing parameter optimization method was employed to determine the basic adjustment direction for reservoir stimulation in the target block. This approach provides insights into the influence of geological, stimulation, and completion parameters on gas production rate. Consequently, the subsequent fracturing parameter optimization design can significantly reduce the modeling and simulation workload and guide field operations to improve hydraulic fracturing efficiency.
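A minimal sketch of the Spearman screening step: rank-correlate candidate geological and engineering factors against gas production rate. The factor names follow the abstract, but the arrays are synthetic placeholders for the field dataset.

```python
# Sketch: Spearman rank correlation between candidate factors and gas rate.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_wells = 60
factors = {
    "reservoir_thickness": rng.normal(12, 3, n_wells),
    "permeability": rng.lognormal(0, 0.5, n_wells),
    "fluid_volume": rng.normal(1500, 300, n_wells),
    "proppant_mass": rng.normal(120, 25, n_wells),
}
# Placeholder response loosely driven by two of the factors.
gas_rate = (0.4 * factors["reservoir_thickness"]
            + 2.0 * factors["permeability"]
            + rng.normal(0, 1.5, n_wells))

for name, values in factors.items():
    rho, p = spearmanr(values, gas_rate)
    print(f"{name:20s} rho={rho:+.2f} p={p:.3f}")
```

Ranking the factors by |rho| is what supports the block-level conclusion that geological factors dominate in X1 while engineering factors leave more room for optimization in X2.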
Using Louisiana's Interstate system, this paper demonstrates how data can be used to evaluate the reliability, economy, and safety of truck freight operations to improve decision-making. Data mainly from the National Performance Management Research Data Set (NPMRDS) and the Louisiana Crash Database were used to analyze the Truck Travel Time Reliability (TTTR) Index, commercial vehicle User Delay Costs, and commercial vehicle safety. The results indicate that while Louisiana's Interstate system remained reliable over the years, some segments were unreliable, although these annually amounted to less than 12% of the state's Interstate mileage. User Delay Costs incurred by commercial vehicles on these unreliable segments averaged 65.45% of the User Delay Costs of all vehicles on the Interstate system between 2016 and 2019, 53.10% between 2020 and 2021, and 70.36% in 2022, which is considerably high. These disproportionate ratios indicate the economic impact of Interstate unreliability on commercial vehicle operations. Additionally, although annual crash frequencies remained relatively constant, an increasing proportion of crashes involve commercial vehicles, and segments (mileposts) with high crash frequencies appear to correspond with locations of recurring congestion on the Interstate system. The study highlights the potential of using data to identify areas of transportation systems that need improvement, supporting better decision-making.
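A sketch of the TTTR index computation, following the common FHWA definition (95th-percentile truck travel time divided by the median travel time per segment); the travel-time sample below is a synthetic stand-in for NPMRDS records.

```python
# Sketch: Truck Travel Time Reliability (TTTR) index for one segment,
# computed as the 95th / 50th percentile ratio of observed travel times.
import numpy as np

rng = np.random.default_rng(2)
segment_travel_times = rng.lognormal(mean=3.0, sigma=0.25, size=5000)  # minutes

tttr = (np.percentile(segment_travel_times, 95)
        / np.percentile(segment_travel_times, 50))
print(f"TTTR index: {tttr:.2f}  (values well above 1 indicate unreliability)")
```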
In synthesizing control algorithms for complex systems, we are often faced with imprecise or unknown mathematical models of the dynamical systems, or even with difficulty obtaining a mathematical model of the system in open loop. To tackle these difficulties, this paper proposes an approach to data-driven model identification and control algorithm design based on the maximum stability degree criterion. The data-driven model identification procedure finds the mathematical model of the system from the undamped transient response of the closed-loop system: the system is approximated by an inertial model whose coefficients are calculated from the critical transfer coefficient and from the oscillation amplitude and period of the underdamped closed-loop response. In the data-driven control design, the controller tuning parameters are calculated from the parameters obtained in the identification step, and expressions for computing these tuning parameters are presented. The results of the data-driven model identification and controller synthesis algorithm were verified by computer simulation.
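The paper derives its own tuning expressions from the maximum stability degree criterion; as an illustration of the same workflow (measured critical gain and oscillation period mapped to controller parameters), the sketch below applies the classic closed-loop Ziegler-Nichols PID rule instead. The numeric inputs are illustrative.

```python
# Sketch: controller tuning from experimentally measured oscillation data,
# here using the classic Ziegler-Nichols closed-loop rule (not the paper's
# maximum-stability-degree expressions).
def zn_pid(K_crit, T_osc):
    """Ziegler-Nichols PID tuning from ultimate gain and oscillation period."""
    Kp = 0.6 * K_crit
    Ti = T_osc / 2.0               # integral time
    Td = T_osc / 8.0               # derivative time
    return Kp, Kp / Ti, Kp * Td    # (P, I, D) gains

Kp, Ki, Kd = zn_pid(K_crit=4.2, T_osc=1.8)   # illustrative measured values
print(f"Kp={Kp:.2f} Ki={Ki:.2f} Kd={Kd:.2f}")
```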
Dynamic data driven simulation (DDDS) is proposed to improve a model by incorporating real data from the physical system into the model. Instead of a single static input, multiple possible sets of inputs are fed into the model, and the computational errors are corrected using statistical approaches. DDDS involves a variety of aspects, including uncertainty modeling, measurement evaluation, coupling between the system model and the measurement model, computational complexity, and performance. The authors set up a DDDS architecture for the wildfire spread model DEVS-FIRE, based on the discrete event specification (DEVS) formalism. The experimental results show that the framework can track a dynamically changing fire front based on fire sensor data, thus providing more accurate predictions.
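A toy sketch of the data-assimilation idea behind DDDS: run an ensemble of simulations from multiple input sets, then reweight and resample ensemble members by how well they match incoming sensor data. The scalar "fire front position" model and all numbers below are illustrative stand-ins for DEVS-FIRE, not the paper's implementation.

```python
# Sketch: particle-filter-style assimilation of sensor data into an
# ensemble of simple fire-spread simulations.
import numpy as np

rng = np.random.default_rng(3)
n_particles = 500
spread_rate = rng.uniform(0.5, 2.0, n_particles)   # uncertain model input
front = np.zeros(n_particles)                      # simulated front position

for t, observed_front in enumerate([1.1, 2.3, 3.2, 4.6]):
    front += spread_rate + rng.normal(0, 0.05, n_particles)  # advance model
    weights = np.exp(-0.5 * ((front - observed_front) / 0.3) ** 2)
    weights /= weights.sum()
    idx = rng.choice(n_particles, n_particles, p=weights)    # resample
    front, spread_rate = front[idx], spread_rate[idx]
    print(f"t={t}: estimated front = {front.mean():.2f}")
```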
Cloud storage is widely used by large companies to store vast amounts of data and files, offering flexibility, financial savings, and security. However, information theft poses significant threats, potentially leading to poor performance and privacy breaches. Blockchain-based cognitive computing can help protect and maintain information security and privacy in cloud platforms, letting businesses focus on business development. To ensure data security in cloud platforms, this research proposes a blockchain-based Hybridized Data Driven Cognitive Computing (HD2C) model. The HD2C framework addresses breaches of the privacy information of mixed Internet of Things (IoT) participants in the cloud. HD2C is developed by combining Federated Learning (FL) with a blockchain consensus algorithm, connecting smart contracts with Proof of Authority (PoA). The "data island" problem can be solved by FL's emphasis on privacy and fast processing, while the blockchain provides a decentralized incentive structure that is resistant to poisoning. FL with blockchain allows quick consensus through smart member selection and verification. The HD2C paradigm significantly improves the computational processing efficiency of intelligent manufacturing. Extensive analysis results derived from IIoT datasets confirm the superiority of HD2C. Compared with other consensus algorithms, the foundational cost of blockchain PoA is significant. The accuracy and memory utilization evaluations indicate the overall benefits of the system: among the tested values of 0.004, 0.04, and 0.4, the value of 0.4 achieves good accuracy, and the experiments show that the number of transactions per second has minimal impact on memory requirements. The findings of this study result in a new blockchain-based IIoT framework.
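A minimal federated-averaging sketch of the FL side of HD2C: each IIoT client updates a local model on private data, and only parameter vectors are aggregated, so raw data never leaves the client. The blockchain/PoA layer that verifies and incentivizes contributions is omitted; the model and data are toy assumptions.

```python
# Sketch: FedAvg aggregation over simulated IIoT clients with a shared
# linear model. Only parameters cross the client boundary.
import numpy as np

rng = np.random.default_rng(4)
true_w = np.array([2.0, -1.0])

def client_update(w, n=200, lr=0.1, steps=10):
    X = rng.normal(size=(n, 2))                   # private local data
    y = X @ true_w + rng.normal(0, 0.1, n)
    for _ in range(steps):
        w = w - lr * (2 / n) * X.T @ (X @ w - y)  # gradient step on MSE
    return w

w_global = np.zeros(2)
for rnd in range(5):
    local = [client_update(w_global.copy()) for _ in range(4)]
    w_global = np.mean(local, axis=0)             # FedAvg aggregation
    print(f"round {rnd}: w = {np.round(w_global, 3)}")
```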
Hydrocarbon production from shale has attracted much attention in recent years, yet our understanding of the complexities of the flow mechanism in these prolific, hydrocarbon-rich resource plays (the sorption process and flow behavior in complex fracture systems, induced or natural) leaves much to be desired. In this paper, we present and discuss a novel approach to modeling and history matching hydrocarbon production from a Marcellus shale asset in southwestern Pennsylvania using advanced data mining, pattern recognition, and machine learning technologies. In this approach, instead of imposing our understanding of the flow mechanism, the impact of multi-stage hydraulic fractures, and the production process on the reservoir model, we allow the production history, well logs, completion, and hydraulic fracturing data to guide the model and determine its behavior. The uniqueness of this technology is that it incorporates so-called "hard data" directly into the reservoir model, so that the model can be used to optimize the hydraulic fracturing process. "Hard data" refers to field measurements taken during hydraulic fracturing, such as fluid and proppant type and amount, injection pressure and rate, and proppant concentration. This contrasts with the current industry focus on using "soft data" (non-measured, interpretive quantities such as fracture length, width, height, and conductivity) in reservoir models. The study focuses on a Marcellus shale asset comprising 135 wells with multiple pads, different landing targets, well lengths, and reservoir properties. The full-field history matching process was successfully completed using this data-driven approach, capturing the production behavior with acceptable accuracy for individual wells and for the entire asset.
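A sketch of the "hard data" idea: map measured completion quantities directly to production with a learned model, with no imposed flow physics. The feature names mirror the abstract, but the values, model choice, and well count structure are synthetic assumptions, not the paper's workflow.

```python
# Sketch: data-driven production model trained on measured "hard data"
# (fluid volume, proppant mass, injection pressure and rate).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(5)
n_wells = 135
hard_data = np.column_stack([
    rng.normal(5000, 800, n_wells),   # fluid volume
    rng.normal(400, 60, n_wells),     # proppant mass
    rng.normal(60, 8, n_wells),       # injection pressure
    rng.normal(80, 10, n_wells),      # injection rate
])
production = (0.3 * hard_data[:, 0] + 2.0 * hard_data[:, 1]
              + rng.normal(0, 300, n_wells))

model = GradientBoostingRegressor(random_state=0).fit(hard_data, production)
print("feature importances:", np.round(model.feature_importances_, 2))
```

Inspecting feature importances is one simple way such a model can feed back into optimizing the hydraulic fracturing design.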
The application scope and future development directions of machine learning models (supervised learning, transfer learning, and unsupervised learning) that have driven energy material design are discussed.
During the past few decades, mobile wireless communications have experienced four generations of technological revolution, from 1G to 4G, and deployment of the latest 5G networks is expected to take place in 2019. One fundamental question is how we can push forward the development of mobile wireless communications when it has become an extremely complex and sophisticated system. We believe that the answer lies in the huge volumes of data produced by the network itself, and that machine learning may become the key to exploiting this information. In this paper, we elaborate on why the conventional model-based paradigm, widely proven useful in pre-5G networks, can be less efficient or even impractical in future 5G-and-beyond mobile networks. Then, we explain how the data-driven paradigm, using state-of-the-art machine learning techniques, can become a promising solution. Finally, we provide a typical use case of the data-driven paradigm, proactive load balancing, in which online learning is used to adjust cell configurations in advance to avoid burst congestion caused by rapid traffic changes.
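A toy online-learning sketch of proactive load balancing: an epsilon-greedy agent picks a handover-offset configuration each period and learns which one best avoids congestion. The candidate offsets and the reward function are invented for illustration; real rewards would come from observed cell load.

```python
# Sketch: epsilon-greedy online learning over candidate cell-offset
# configurations, learning from congestion feedback.
import numpy as np

rng = np.random.default_rng(6)
offsets_db = [-3, 0, 3, 6]                  # candidate cell-offset configs
q = np.zeros(len(offsets_db))               # running reward estimates
counts = np.zeros(len(offsets_db))

def observed_reward(action):                # stand-in for measured congestion
    best = 2                                # config index 2 is secretly best
    return 1.0 - 0.3 * abs(action - best) + rng.normal(0, 0.05)

for t in range(500):
    explore = rng.random() < 0.1
    a = int(rng.integers(len(offsets_db))) if explore else int(np.argmax(q))
    r = observed_reward(a)
    counts[a] += 1
    q[a] += (r - q[a]) / counts[a]          # incremental mean update
print("learned best offset:", offsets_db[int(np.argmax(q))], "dB")
```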
Fault prognosis mainly refers to estimating the operating time before a failure occurs, which is vital for ensuring the stability, safety, and long lifetime of degrading industrial systems. Based on the results of fault prognosis, the maintenance strategy for the underlying industrial systems can shift from passive to active maintenance. With the increased complexity and improved automation level of industrial systems, fault prognosis techniques have become more and more indispensable. In particular, data-driven prognosis approaches, which aim to find hidden fault factors and determine the specific fault occurrence time of a system by analysing historical or real-time measurement data, have gained great attention across industrial sectors. In this context, the major task of this paper is to present a systematic overview of data-driven fault prognosis for industrial systems. First, the characteristics of different prognosis methods are reviewed, with the data-based ones highlighted. Then, based on the different data characteristics found in industrial systems, the corresponding fault prognosis methodologies are illustrated, with emphasis on analyses and comparisons of the different prognosis methods. Finally, we discuss current research trends and future challenges in this field. This review is expected to serve as a tutorial and source of references for fault prognosis researchers.
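A minimal example of the core prognosis task the survey describes: fit a trend to a monitored degradation signal and extrapolate to a failure threshold to estimate remaining useful life (RUL). The linear trend, signal, and threshold are illustrative simplifications of what real data-driven methods do.

```python
# Sketch: RUL estimation by extrapolating a fitted degradation trend
# to a failure threshold.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(100.0)                              # operating hours so far
health = 0.02 * t + rng.normal(0, 0.05, t.size)   # drifting health indicator
threshold = 3.0                                   # assumed failure level

slope, intercept = np.polyfit(t, health, 1)       # fitted degradation trend
t_fail = (threshold - intercept) / slope
print(f"estimated RUL: {t_fail - t[-1]:.1f} hours")
```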
To achieve zero-defect production in computer numerical control (CNC) machining, it is imperative to develop effective diagnosis systems that detect anomalies efficiently. However, due to the dynamic conditions of the machine and tooling during machining, the diagnosis systems currently adopted in industry are inadequate. To address this issue, this paper presents a novel data-driven diagnosis system for anomalies. In this system, power data for condition monitoring are continuously collected during dynamic machining processes to support online diagnosis. To facilitate the analysis, preprocessing mechanisms de-noise, normalize, and align the monitored data. Important features are extracted from the monitored data, and thresholds are defined to identify anomalies. Because the conditions of the machine and tooling vary during machining, the thresholds used to identify anomalies can vary as well; based on historical data, the threshold values are optimized using a fruit fly optimization (FFO) algorithm to achieve more accurate detection. Practical case studies were used to validate the system, demonstrating its potential and effectiveness for industrial applications.
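A simplified FFO-style sketch of the threshold-optimization step: candidate thresholds are scattered around the current best location (the "smell-based" search) and scored on labeled historical power data. The data, the single-threshold objective, and the search parameters are illustrative assumptions; the paper's FFO operates on its own feature thresholds.

```python
# Sketch: simplified fruit fly optimization of an anomaly threshold,
# scored by detection accuracy on labeled historical data.
import numpy as np

rng = np.random.default_rng(8)
normal = rng.normal(1.0, 0.1, 500)            # power features, normal cuts
anomal = rng.normal(1.6, 0.2, 50)             # power features, anomalies

def accuracy(th):                             # detection accuracy at threshold
    correct = (normal < th).sum() + (anomal >= th).sum()
    return correct / (normal.size + anomal.size)

best_th, best_acc = 1.0, accuracy(1.0)
for _ in range(100):                          # smell-based random search
    swarm = best_th + rng.normal(0, 0.1, 20)  # flies around current best
    scores = [accuracy(th) for th in swarm]
    if max(scores) > best_acc:
        best_acc = max(scores)
        best_th = swarm[int(np.argmax(scores))]
print(f"threshold={best_th:.3f} accuracy={best_acc:.3f}")
```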
When designing large, complex machinery products, the design focus is always on overall performance; however, no performance-driven design theory and method has been established. In view of this deficiency in existing design theory, and according to the performance features of complex mechanical products, performance indices are introduced into the traditional "Requirement-Function-Structure" design theory to construct a new five-domain design theory of "Client Requirement-Function-Performance-Structure-Design Parameter". To support design practice based on this new theory, a product data model is established using the performance indices and the mapping relationships between them and the other four domains. Applying the product data model to high-speed train design, and combining existing research results and relevant standards, the corresponding five-domain data model and structure for high-speed trains are established. This can provide technical support for studying the relationships between typical performance indices and design parameters and for quickly achieving a high-speed train scheme design. The five domains provide a reference for the design specification and evaluation criteria of high-speed trains and a new idea for the train's parameter design.
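A hypothetical sketch of such a five-domain product data model: each performance index records its mappings into the other four domains. All names and fields below are illustrative inventions, not taken from the paper or any high-speed train standard.

```python
# Sketch: a performance index as the hub linking the other four design
# domains (requirement, function, structure, design parameter).
from dataclasses import dataclass, field

@dataclass
class PerformanceIndex:
    name: str
    requirements: list = field(default_factory=list)   # client-requirement links
    functions: list = field(default_factory=list)      # function links
    structures: list = field(default_factory=list)     # structure links
    design_params: list = field(default_factory=list)  # design-parameter links

ride_comfort = PerformanceIndex(
    name="ride comfort index",
    requirements=["smooth ride"],
    functions=["carry passengers"],
    structures=["bogie", "secondary suspension"],
    design_params=["suspension stiffness", "damping coefficient"],
)
print(ride_comfort)
```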
Data-driven fault diagnosis methods can improve the reliability of analog circuits by using the data the circuits generate. These data have characteristics such as randomness and incompleteness, which make diagnostic results sensitive to specific values and random noise. This paper presents a data-driven fault diagnosis method for analog circuits based on robust competitive agglomeration (RCA), which alleviates the incompleteness of the data by clustering with a competitive process and enhances the robustness of the diagnostic results by applying robust statistics within RCA. A series of experiments demonstrates that RCA can classify incomplete data with high accuracy. The experimental results show that RCA is robust both to the data being classified and to the parameters that must be adjusted. The effectiveness of RCA in practical use is demonstrated on two analog circuits.
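A simplified sketch of the competitive-agglomeration idea: start with deliberately many clusters, run fuzzy c-means-style updates, and discard clusters whose cardinality falls below a threshold, so the cluster count emerges from the data. Full RCA additionally uses robust (M-estimator) weights against noise, which this toy version omits; data and thresholds are illustrative.

```python
# Sketch: competitive agglomeration - FCM-style updates plus elimination
# of low-cardinality clusters, starting from too many clusters.
import numpy as np

rng = np.random.default_rng(9)
data = np.vstack([rng.normal(0, 0.3, (100, 2)),    # two true fault classes
                  rng.normal(3, 0.3, (100, 2))])

centers = rng.uniform(-1, 4, (8, 2))               # deliberately too many
for _ in range(30):
    d2 = ((data[:, None, :] - centers[None]) ** 2).sum(-1) + 1e-9
    u = (1 / d2) / (1 / d2).sum(1, keepdims=True)  # FCM memberships (m=2)
    card = u.sum(0)                                # cluster cardinalities
    keep = card > 0.05 * len(data)                 # competition: drop weak ones
    centers, u = centers[keep], u[:, keep]
    u = u / u.sum(1, keepdims=True)
    w = u ** 2
    centers = (w.T @ data) / w.sum(0)[:, None]     # center update
print("surviving clusters:", len(centers))
print(np.round(centers, 2))
```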