With the ongoing advancements in sensor networks and data acquisition technologies across various systems like manufacturing, aviation, and healthcare, data-driven vibration control (DDVC) has attracted broad interest from both the industrial and academic communities. Input shaping (IS), as a simple and effective feedforward method, is in great demand in DDVC. It convolves the desired input command with an impulse sequence, without requiring parametric dynamics or the closed-loop system structure, thereby suppressing residual vibration. Based on a thorough investigation into state-of-the-art DDVC methods, this survey makes the following efforts: 1) introducing IS theory and typical input shapers; 2) categorizing recent progress of DDVC methods; 3) summarizing commonly adopted metrics for DDVC; and 4) discussing the engineering applications and future trends of DDVC. In doing so, this study provides a systematic and comprehensive overview of existing DDVC methods from the design to the optimization perspective, aiming to promote future research on this emerging and vital topic.
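The convolution with an impulse sequence that the abstract describes can be illustrated with the classic two-impulse zero-vibration (ZV) shaper. This is only a minimal sketch of the general idea, not any specific method from the surveyed literature; the frequency, damping ratio, and sample rate below are made-up values.

```python
import math

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) shaper for one second-order mode.

    wn: undamped natural frequency (rad/s); zeta: damping ratio in [0, 1).
    Returns impulse amplitudes (summing to 1) and their times; convolving
    these impulses with the command cancels residual vibration of the mode.
    """
    K = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta ** 2))
    wd = wn * math.sqrt(1.0 - zeta ** 2)        # damped natural frequency
    amps = [1.0 / (1.0 + K), K / (1.0 + K)]     # unity-gain amplitudes
    times = [0.0, math.pi / wd]                 # half a damped period apart
    return amps, times

def shape_command(command, amps, times, dt):
    """Convolve a sampled command with the impulse sequence."""
    shaped = [0.0] * len(command)
    for a, t in zip(amps, times):
        shift = round(t / dt)                   # impulse delay in samples
        for i in range(len(command)):
            if i - shift >= 0:
                shaped[i] += a * command[i - shift]
    return shaped

amps, times = zv_shaper(wn=10.0, zeta=0.05)     # hypothetical mode
shaped = shape_command([1.0] * 100, amps, times, dt=0.01)
```

Because the amplitudes sum to one, the shaped command settles at the same final value as the unshaped step, just with a short added delay.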
Dynamic data driven simulation (DDDS) is proposed to improve a model by incorporating real data from the running system into the model. Instead of a single static input, multiple possible sets of inputs are fed into the model, and the computational errors are corrected using statistical approaches. DDDS involves a variety of aspects, including uncertainty modeling, measurement evaluation, coupling of the system model and the measurement model, computational complexity, and performance. The authors set up a DDDS architecture for the wildfire spread model DEVS-FIRE, based on the discrete event specification (DEVS) formalism. The experimental results show that the framework can track a dynamically changing fire front based on fire sensor data and thus provides more accurate predictions.
A data driven computational model that accounts for more than two material states is presented in this work. The presented model can account for multiple state variables, such as stresses, strains, strain rates, and failure stress, as compared to previously reported models with two states. The model is used to perform deformation and failure simulations of carbon nanotubes and carbon nanotube/epoxy nanocomposites. The model's capability of capturing strain rate dependent deformation and failure has been demonstrated through predictions against uniaxial test data taken from the literature. The predicted results show good agreement between the literature data sets and the simulations.
Most large Vietnamese businesses use outsourcing services for marketing research tasks such as analysing and evaluating consumer intention and behaviour, customer satisfaction, customer loyalty, market share, market segmentation, and similar studies. One of the most popular marketing research firms in Vietnam is ACNielsen, and large Vietnamese businesses usually plan and adjust marketing activities based on ACNielsen's reports. Owing to budget limitations, Vietnamese small and medium enterprises (SMEs) often do marketing research themselves. Among the marketing research activities in SMEs, customer segmentation is conducted with tools such as Excel, Facebook analytics, or only a simple design thinking approach, in order to save costs. However, these tools are no longer suitable for today's age of information explosion. This article presents a case analysis of a United Kingdom online retailer using a clustering algorithm in R. The result demonstrates the clustering method's superiority in customer segmentation over the traditional methods (SPSS, Excel, Facebook analytics, design thinking) that Vietnamese SMEs are using. More importantly, this article helps Vietnamese SMEs understand and apply clustering algorithms in R to segment customers on their own data sets efficiently. On that basis, Vietnamese SMEs can plan marketing programs and drive their actions, contextualizing and/or personalizing their messages to customers appropriately.
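The kind of clustering-based customer segmentation described above can be sketched in a few lines. The article works in R; the sketch below uses Python with scikit-learn instead, and the RFM (recency, frequency, monetary) feature values are synthetic, not the retailer's data set.

```python
# Hypothetical RFM customer segmentation with k-means clustering.
# Feature columns: recency (days), frequency (orders), monetary (currency).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Three synthetic customer groups: loyal, occasional, lapsed (made-up values).
rfm = np.vstack([
    rng.normal([5, 40, 900], [2, 5, 100], size=(50, 3)),    # loyal
    rng.normal([30, 10, 200], [5, 3, 50], size=(50, 3)),    # occasional
    rng.normal([180, 2, 40], [20, 1, 15], size=(50, 3)),    # lapsed
])

# Standardize so no single feature (e.g. monetary) dominates the distances.
X = StandardScaler().fit_transform(rfm)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
```

Each label then identifies a segment whose mean RFM profile can be inspected to decide which marketing message fits that group.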
Complex engineered systems are often difficult to analyze and design due to the tangled interdependencies among their subsystems and components. Conventional design methods often need exact modeling or accurate structure decomposition, which limits their practical application. The rapid expansion of data makes utilizing data to guide and improve system design indispensable in practical engineering. In this paper, a data driven uncertainty evaluation approach is proposed to support the design of complex engineered systems. The core of the approach is a data-mining based uncertainty evaluation method that predicts the uncertainty level of a specific system design by analyzing association relations along different system attributes and synthesizing the information entropy of the covered attribute areas; a quantitative measure of system uncertainty can be obtained accordingly. Monte Carlo simulation is introduced to obtain the uncertainty extrema, and the possible data distributions under different situations are discussed in detail. The uncertainty values can be normalized using the simulation results and used to evaluate different system designs. A prototype system is established, and two case studies have been carried out. The case of an inverted pendulum system validates the effectiveness of the proposed method, and the case of an oil sump design shows its practicability when two or more design plans need to be compared. This research can be used to evaluate the uncertainty of complex engineered systems relying entirely on data, and is well suited for plan selection and performance analysis in system design.
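The information-entropy ingredient of the approach above has a standard definition that can be sketched directly. The equal-width binning scheme and the sample attribute values below are illustrative assumptions, not the paper's exact formulation.

```python
import math
from collections import Counter

def shannon_entropy(samples, bins):
    """Shannon entropy (bits) of samples discretized into equal-width bins.

    Higher entropy means the attribute's observed values are spread over
    more of its range, i.e. the covered attribute area is less certain.
    """
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0             # guard against a zero range
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

uniform = [i / 100 for i in range(100)]          # spread-out attribute values
peaked = [0.5] * 100                             # concentrated attribute values
```

With 10 bins, the spread-out sample reaches the maximum entropy log2(10) bits while the concentrated sample scores zero, which is the contrast such a measure exploits when comparing designs.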
We develop a data driven method (a probability model) to construct a composite shape descriptor by combining a pair of scale-based shape descriptors. The selection of a pair of scale-based shape descriptors is modeled as the computation of the union of two events, i.e., retrieving similar shapes by using a single scale-based shape descriptor. The pair of scale-based shape descriptors with the highest probability forms the composite shape descriptor. Given a shape database, the composite shape descriptors for the shapes constitute a planar point set. A VoR-Tree of the planar point set is then used as an indexing structure for efficient query operation. Experiments and comparisons show the effectiveness and efficiency of the proposed composite shape descriptor.
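The union-of-two-events selection above follows the inclusion-exclusion rule P(A or B) = P(A) + P(B) - P(A and B): a pair of descriptors is only as good as the shapes they retrieve jointly, minus the overlap. The descriptor names, probabilities, and overlaps below are made up purely to illustrate the selection step.

```python
from itertools import combinations

def union_probability(p_a, p_b, p_ab):
    """P(A or B) for two retrieval events with joint probability P(A and B)."""
    return p_a + p_b - p_ab

# Hypothetical per-descriptor retrieval probabilities and pairwise overlaps,
# as they might be estimated from a training shape database.
p = {"d1": 0.70, "d2": 0.65, "d3": 0.55}
overlap = {("d1", "d2"): 0.50, ("d1", "d3"): 0.30, ("d2", "d3"): 0.35}

# Pick the pair with the highest union probability as the composite descriptor.
best = max(combinations(sorted(p), 2),
           key=lambda pair: union_probability(p[pair[0]], p[pair[1]], overlap[pair]))
```

Note how the individually strongest descriptors (d1, d2) lose to (d1, d3) here: their large overlap means the second descriptor adds little new retrieval power.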
In the transition of China's economy from high-speed growth to high-quality growth in the new era, economic practices are oriented to fostering new growth drivers, developing new industries, and forming new models. Based on the data flow, big data effectively integrates technology, material, fund, and human resource flows and reveals new paths for the development of new growth drivers, new industries, and new models. Adopting an analytical framework with "macro-meso-micro" levels, this paper elaborates on the theoretical mechanisms by which big data drives high-quality growth through efficiency improvements, upgrades of industrial structures, and business model innovations. It also explores the practical foundations for big data driven high-quality growth, including technological advancements of big data, the development of big data industries, and the formulation of big data strategies. Finally, this paper proposes policy options for big data promoting high-quality growth in terms of developing the digital economy, consolidating big data infrastructure construction, expediting the convergence of big data and the real economy, advocating for a big data culture, and expanding financing options for big data.
Brain tissue is one of the softest parts of the human body, composed of white matter and grey matter. The mechanical behavior of brain tissue plays an essential role in regulating brain morphology and brain function. In addition, traumatic brain injury (TBI) and various brain diseases are greatly influenced by the brain's mechanical properties. Whether white matter or grey matter, brain tissue contains multiscale structures composed of neurons, glial cells, fibers, blood vessels, etc., each with different mechanical properties. As such, brain tissue exhibits complex mechanical behavior, usually with strong nonlinearity, heterogeneity, and directional dependence. Building a constitutive law for multiscale brain tissue using traditional function-based approaches can be very challenging. Instead, this paper proposes a data-driven approach to establish the desired mechanical model of brain tissue. We focus on blood vessels with internal pressure embedded in a white or grey matter matrix material to demonstrate our approach. The matrix is described by an isotropic or anisotropic nonlinear elastic model. A representative unit cell (RUC) with blood vessels is built and used to generate stress-strain data under different internal blood pressures and various proportional displacement loading paths. The generated stress-strain data are then used to train a mechanical law using artificial neural networks to predict the macroscopic mechanical response of brain tissue under different internal pressures. Finally, the trained material model is implemented in finite element software to predict the mechanical behavior of a whole brain under intracranial pressure and distributed body forces. Compared with a direct numerical simulation that employs a reference material model, our proposed approach greatly reduces the computational cost and improves modeling efficiency. The predictions made by our trained model demonstrate sufficient accuracy. Specifically, we find that the level of internal blood pressure can greatly influence the stress distribution and determine the possible related damage behaviors.
The accuracy of fluid property data plays a pivotal role in reservoir computational processes. Reliable data can be obtained through various experimental methods, but these methods are very expensive and time consuming. Alternatives are numerical models, which use measured experimental data to develop a representative model for predicting the desired parameters. In this study, several Artificial Intelligence (AI) models were developed to predict saturation pressure, oil formation volume factor, and solution gas oil ratio. 582 reported data sets, covering a wide range of fluid properties, were used as the data bank. The accuracy and reliability of the models were examined with statistical parameters such as the correlation coefficient (R2), average absolute relative deviation (AARD), and root mean square error (RMSE). The results illustrated good agreement between predicted data and target values. The models were also compared with previous works and previously developed empirical correlations, which indicated that they are more reliable than all compared models and correlations. Finally, a relevancy factor was calculated for each input parameter to illustrate the impact of the different parameters on the predicted values. The relevancy factor showed that, in these models, solution gas oil ratio has the greatest impact on both saturation pressure and oil formation volume factor; in turn, saturation pressure has the greatest effect on solution gas oil ratio.
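The three statistical parameters named in the abstract (R2, AARD, RMSE) have standard definitions, sketched below on made-up observed/predicted pairs rather than the study's 582 data sets.

```python
import math

def metrics(pred, obs):
    """R^2, average absolute relative deviation (%), and RMSE."""
    n = len(obs)
    mean = sum(obs) / n
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))   # residual sum
    ss_tot = sum((o - mean) ** 2 for o in obs)              # total sum
    r2 = 1.0 - ss_res / ss_tot
    aard = 100.0 / n * sum(abs((o - p) / o) for o, p in zip(obs, pred))
    rmse = math.sqrt(ss_res / n)
    return r2, aard, rmse

# Illustrative values only (e.g. observed vs. predicted saturation pressures).
obs = [100.0, 200.0, 300.0]
pred = [110.0, 190.0, 300.0]
r2, aard, rmse = metrics(pred, obs)
```

AARD is the one of the three that is scale-free per point, which is why it is popular for fluid properties spanning wide ranges; RMSE, by contrast, is dominated by errors on the largest values.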
Using Louisiana's Interstate system, this paper demonstrates how data can be used to evaluate the freight movement reliability, economy, and safety of truck freight operations to improve decision-making. Data mainly from the National Performance Management Research Data Set (NPMRDS) and the Louisiana Crash Database were used to analyze the Truck Travel Time Reliability Index, commercial vehicle User Delay Costs, and commercial vehicle safety. The results indicate that while Louisiana's Interstate system remained reliable over the years, some segments were found to be unreliable, accounting for less than 12% of the state's Interstate mileage annually. The User Delay Costs incurred by commercial vehicles on these unreliable segments were, on average, 65.45% of the User Delay Costs of all vehicles on the Interstate system between 2016 and 2019, 53.10% between 2020 and 2021, and 70.36% in 2022, which are considerably high. These disproportionate ratios indicate the economic impact of Interstate unreliability on commercial vehicle operations. Additionally, though annual crash frequencies remained relatively constant, an increasing proportion of commercial vehicles are involved in crashes, and segments (mileposts) with high crash frequencies appear to correspond with locations of recurring congestion on the Interstate system. The study highlights the potential of using data to identify areas that need improvement in transportation systems to support better decision-making.
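The Truck Travel Time Reliability (TTTR) index used in such NPMRDS analyses is, per FHWA's freight reliability measure, the ratio of the 95th-percentile to the 50th-percentile truck travel time on a segment. The sketch below uses a simple nearest-rank percentile and fabricated travel-time samples; the exact percentile convention and time-period weighting in the official measure differ in detail.

```python
import math

def percentile(values, q):
    """Nearest-rank percentile (no interpolation), q in (0, 100]."""
    s = sorted(values)
    k = max(0, math.ceil(q / 100.0 * len(s)) - 1)
    return s[k]

def tttr(travel_times):
    """95th-percentile over 50th-percentile truck travel time."""
    return percentile(travel_times, 95) / percentile(travel_times, 50)

# Hypothetical truck travel times (minutes) on one segment over 100 periods:
# mostly free-flow with occasional congestion-related delay.
times = [20.0] * 90 + [35.0] * 10
index = tttr(times)    # 35 / 20 = 1.75 for this sample
```

A segment whose worst days (95th percentile) take much longer than its typical day (median) scores a high TTTR, which is exactly the "reliable on average but occasionally bad" pattern the paper flags on a minority of segments.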
Based on actual data collected from a tight sandstone development zone, correlation analysis using the Spearman method was conducted to determine the main factors influencing the gas production rate of tight sandstone fracturing. An integrated model combining geological engineering with numerical simulation of fracture propagation and production was completed. Based on the data analysis, the hydraulic fracture parameters were optimized to develop a differentiated fracturing treatment adjustment plan. The results indicate that the influence of geological and engineering factors in the X1 and X2 development zones in the study area differs significantly; it is therefore challenging to adopt a uniform development strategy to achieve a rapid production increase. The data analysis reveals that the variation in gas production rate is primarily affected by reservoir thickness and permeability as geological factors, while the amounts of treatment fluid and added proppant significantly impact the gas production rate as engineering factors. Among these, the influence of geological factors is more pronounced in block X1, so the main focus there should be on further optimizing the fracturing interval and adjusting the geological development well locations; given the existing well locations, there is limited potential to increase production by further optimizing fracture parameters. For block X2, the fracturing parameters should be optimized. Data screening was conducted to identify outliers in the data set, and a data-driven fracturing parameter optimization method was employed to determine the basic adjustment direction for reservoir stimulation in the target block. This approach provides insights into the influence of geological, stimulation, and completion parameters on the gas production rate. Consequently, the subsequent fracturing parameter optimization design can significantly reduce the modeling and simulation workload and guide field operations to improve and optimize hydraulic fracturing efficiency.
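The Spearman screening step used above can be sketched with the no-ties rank formula rho = 1 - 6*sum(d^2)/(n*(n^2-1)). The well data below are fabricated for illustration (a perfectly monotone thickness-rate relationship), and the simple formula assumes no tied values.

```python
def spearman(x, y):
    """Spearman rank correlation via the no-ties formula.

    Used to screen which geological/engineering factors move monotonically
    with gas production rate; assumes all values in x and in y are distinct.
    """
    n = len(x)
    rank = lambda v: {val: i + 1 for i, val in enumerate(sorted(v))}
    rx, ry = rank(x), rank(y)
    d2 = sum((rx[a] - ry[b]) ** 2 for a, b in zip(x, y))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

# Hypothetical wells: reservoir thickness (m) vs. gas production rate.
thickness = [8.0, 10.0, 12.0, 15.0, 20.0]
rate = [1.1, 1.4, 1.9, 2.3, 3.0]
```

Unlike Pearson correlation, the rank-based coefficient captures any monotone dependence, which suits factor screening where the thickness-rate relationship need not be linear.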
In the synthesis of control algorithms for complex systems, we are often faced with imprecise or unknown mathematical models of the dynamical system, or even with difficulty finding a mathematical model of the system in open loop. To tackle these difficulties, this paper proposes an approach to data-driven model identification and control algorithm design based on the maximum stability degree criterion. The data-driven model identification procedure finds the mathematical model of the system from the undamped transient response of the closed-loop system. The system is approximated with an inertial model whose coefficients are calculated from the values of the critical transfer coefficient and the oscillation amplitude and period of the underdamped response of the closed-loop system. In the data driven control design, the tuning parameters of the controller are calculated from the parameters obtained in the identification step, and expressions for calculating the tuning parameters are presented. The obtained results of data-driven model identification and controller synthesis were verified by computer simulation.
Hydrocarbon production from shale has attracted much attention in recent years. When applied to this prolific and hydrocarbon rich resource play, our understanding of the complexities of the flow mechanism (the sorption process and flow behavior in complex fracture systems, induced or natural) leaves much to be desired. In this paper, we present and discuss a novel approach to modeling and history matching of hydrocarbon production from a Marcellus shale asset in southwestern Pennsylvania using advanced data mining, pattern recognition, and machine learning technologies. In this new approach, instead of imposing our understanding of the flow mechanism, the impact of multi-stage hydraulic fractures, and the production process on the reservoir model, we allow the production history, well log, completion, and hydraulic fracturing data to guide our model and determine its behavior. The uniqueness of this technology is that it incorporates so-called "hard data" directly into the reservoir model, so that the model can be used to optimize the hydraulic fracturing process. The "hard data" refers to field measurements taken during the hydraulic fracturing process, such as fluid and proppant type and amount, injection pressure and rate, and proppant concentration. This novel approach contrasts with the current industry focus on the use of "soft data" (non-measured, interpretive data such as fracture length, width, height, and conductivity) in reservoir models. The study focuses on a Marcellus shale asset that includes 135 wells with multiple pads, different landing targets, well lengths, and reservoir properties. The full field history matching process was successfully completed using this data driven approach, capturing the production behavior with acceptable accuracy for individual wells and for the entire asset.
In this article, we develop an online robust actor-critic-disturbance guidance law for a missile-target interception system with limited normal acceleration capability. First, the missile-target engagement is formulated as a zero-sum pursuit-evasion game problem. The key is to seek the saddle point solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which is generally intractable due to the nonlinearity of the problem. Then, based on the universal approximation capability of Neural Networks (NNs), we construct the critic NN, the actor NN, and the disturbance NN, respectively. The Bellman error is adjusted by the normalized least squares method. The proposed scheme is proved to be Uniformly Ultimately Bounded (UUB) stable by the Lyapunov method. Finally, the effectiveness and robustness of the developed method are illustrated through numerical simulations against different types of non-stationary targets and initial conditions.
The application scope and future development directions of machine learning models (supervised learning, transfer learning, and unsupervised learning) that have driven energy material design are discussed.
The use of data driven models has been shown to be useful for simulating complex engineering processes when the only information available consists of process data. In this study, four data-driven models, namely multiple linear regression, artificial neural network, adaptive neuro-fuzzy inference system, and K nearest neighbor models, based on a collection of 207 laboratory tests, are investigated for predicting the compressive strength of concrete at high temperature. In addition, for each model, two different sets of input variables are examined: a complete set and a parsimonious set of the involved variables. The results obtained are compared with each other and with the equations of the NIST Technical Note standard, and demonstrate the suitability of using data driven models to predict compressive strength at high temperature. The results also show that the parsimonious set of input variables is sufficient for the data driven models to produce satisfactory results.
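Two of the four model families above (multiple linear regression and K nearest neighbors) can be sketched side by side with scikit-learn. The data below are a synthetic stand-in for the 207 laboratory tests, generated from a made-up strength-temperature relationship, so the scores say nothing about the study's actual results.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in: strength decays with temperature relative to a
# room-temperature baseline strength f20 (fabricated relationship).
temp = rng.uniform(20.0, 800.0, size=(207, 1))      # exposure temperature (C)
f20 = rng.uniform(30.0, 60.0, size=(207, 1))        # baseline strength (MPa)
strength = (f20 * (1.0 - 0.0008 * temp)).ravel()

X = np.hstack([temp, f20])
X_tr, X_te, y_tr, y_te = train_test_split(X, strength, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
# Scale features for KNN so temperature's larger range doesn't dominate
# the Euclidean distances.
knn = make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)).fit(X_tr, y_tr)
r2_mlr, r2_knn = mlr.score(X_te, y_te), knn.score(X_te, y_te)
```

Here the two input columns play the role of a parsimonious variable set; a complete set would simply add further mix and test-condition columns to X.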
Cloud storage is widely used by large companies to store vast amounts of data and files, offering flexibility, financial savings, and security. However, information theft poses significant threats, potentially leading to poor performance and privacy breaches. Blockchain-based cognitive computing can help protect and maintain information security and privacy in cloud platforms, ensuring businesses can focus on business development. To ensure data security in cloud platforms, this research proposes a blockchain-based Hybridized Data Driven Cognitive Computing (HD2C) model. The proposed HD2C framework addresses breaches of the privacy information of mixed Internet of Things (IoT) participants in the cloud. HD2C is developed by combining Federated Learning (FL) with a Blockchain consensus algorithm that connects smart contracts with Proof of Authority. The "data island" problem can be solved by FL's emphasis on privacy and fast processing, while Blockchain provides a decentralized incentive structure that is impervious to poisoning. FL with Blockchain allows quick consensus through smart member selection and verification. The HD2C paradigm significantly improves the computational processing efficiency of intelligent manufacturing. Extensive analysis results derived from IIoT datasets confirm the superiority of HD2C. Compared to other consensus algorithms, the foundational cost of the Blockchain PoA is significant. The accuracy and memory utilization evaluation results predict the overall benefits of the system. Compared with the values 0.004 and 0.04, the value 0.4 achieves good accuracy. According to the experimental results, the number of transactions per second has minimal impact on memory requirements. The findings of this study resulted in the development of a brand-new IIoT framework based on blockchain technology.
In this paper, a data-driven method to model three-dimensional engineering structures under cyclic load using one-dimensional stress-strain data is proposed. In this method, one-dimensional stress-strain data obtained under uniaxial load and different loading histories are learned offline by a gated recurrent unit (GRU) network. The learned constitutive model is embedded into a general finite element framework through data expansion from one dimension to three dimensions, which enables stress updates in the three-dimensional setting. The proposed method is then adopted to drive numerical solutions of boundary value problems for engineering structures. Compared with direct numerical simulations using the J2 plasticity model, the stress-strain responses of beam structures with elastoplastic materials under forward, reverse, and cyclic loading were predicted accurately. The loading path dependent response of the structure was captured, verifying the effectiveness of the proposed method. The shortcomings of the proposed method are also discussed.
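The reason a GRU suits path-dependent constitutive behavior is that its hidden state carries the loading history forward step by step. The sketch below is a minimal, untrained GRU cell forward pass in NumPy with random placeholder weights and a made-up strain path; it illustrates the recurrence only, not the paper's trained constitutive model or its finite element embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class GRUCell:
    """Minimal GRU cell (forward pass only).

    The hidden state h accumulates the loading history, which is what makes
    a GRU able to represent path-dependent stress-strain behavior. Weights
    are random placeholders here, not a trained model.
    """
    def __init__(self, n_in, n_h):
        s = 1.0 / np.sqrt(n_h)
        self.Wz = rng.uniform(-s, s, (n_h, n_in + n_h))   # update-gate weights
        self.Wr = rng.uniform(-s, s, (n_h, n_in + n_h))   # reset-gate weights
        self.Wh = rng.uniform(-s, s, (n_h, n_in + n_h))   # candidate weights

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                          # update gate
        r = sigmoid(self.Wr @ xh)                          # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1.0 - z) * h + z * h_tilde                 # new hidden state

cell = GRUCell(n_in=1, n_h=8)
h = np.zeros(8)
for eps in np.linspace(0.0, 0.02, 50):                     # a monotone strain path
    h = cell.step(np.array([eps]), h)                      # history accumulates in h
```

In a trained model, a final linear layer would map h to the predicted stress increment; feeding a reversed strain path produces a different hidden-state trajectory, which is the mechanism that captures loading-path dependence.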
Funding: Supported by the National Natural Science Foundation of China (62272078).
Funding: Supported by the National Basic Research Program of China (973 Program) (2009CB320600), the National Natural Science Foundation of China (60828007, 60534010, 60821063), the Leverhulme Trust (F/00.120/BC) in the United Kingdom, and the 111 Project (B08015).
Funding: Supported by the National Hi-tech Research and Development Program of China (863 Program, Grant No. 2015AA042101).
基金supported by the National Key R&D Plan of China (2016YFB1001501)
文摘We develop a data-driven method (probability model) to construct a composite shape descriptor by combining a pair of scale-based shape descriptors. The selection of a pair of scale-based shape descriptors is modeled as the computation of the union of two events, i.e., retrieving similar shapes by using a single scale-based shape descriptor. The pair of scale-based shape descriptors with the highest probability forms the composite shape descriptor. Given a shape database, the composite shape descriptors for the shapes constitute a planar point set. A VoR-Tree of the planar point set is then used as an indexing structure for efficient query operation. Experiments and comparisons show the effectiveness and efficiency of the proposed composite shape descriptor.
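The union-of-two-events selection can be illustrated with a small sketch using the inclusion-exclusion identity P(A∪B) = P(A) + P(B) − P(A∩B). The descriptor names and retrieval hit sets below are hypothetical; in the paper the probabilities would be estimated from the shape database.

```python
from itertools import combinations

def union_probability(hits_a, hits_b, total):
    """P(A ∪ B) = P(A) + P(B) - P(A ∩ B), where A and B are the events of
    retrieving a similar shape with descriptor a or b respectively."""
    pa = len(hits_a) / total
    pb = len(hits_b) / total
    pab = len(hits_a & hits_b) / total
    return pa + pb - pab

def best_pair(descriptor_hits, total):
    """Pick the descriptor pair whose union event has the highest probability."""
    return max(combinations(descriptor_hits, 2),
               key=lambda pair: union_probability(descriptor_hits[pair[0]],
                                                  descriptor_hits[pair[1]],
                                                  total))

# Hypothetical retrieval hits (shape ids) for three scale-based descriptors
hits = {"d1": {1, 2, 3}, "d2": {4, 5}, "d3": {1}}
pair = best_pair(hits, total=10)
```

Here the pair with the largest combined coverage of the database forms the composite descriptor.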
基金funded by the Program for "Sanqin Scholar Innovation Teams in Shaanxi Province" (SZTZ [2018] No.34), "the Research on the Mechanism, Effect Evaluation, and Policy Support of Replacing Business Tax with VAT in Promoting the Industrial Structure Upgrade of China" funded by the Humanities and Social Science Youth Foundation of the Ministry of Education of China (18YJC790078), and "the Evaluation and Study of the Effect of Promoting Industrial Transformation and Upgrade of Shaanxi by Replacing Business Tax with Value-added Tax" funded by the Social Science Foundation Project of Shaanxi Province (2017D037)
文摘In the transition of China's economy from high-speed growth to high-quality growth in the new era, economic practices are oriented to fostering new growth drivers, developing new industries, and forming new models. Based on the data flow, big data effectively integrates technology, material, fund, and human resource flows and reveals new paths for the development of new growth drivers, new industries, and new models. Adopting an analytical framework with "macro-meso-micro" levels, this paper elaborates on the theoretical mechanisms by which big data drives high-quality growth through efficiency improvements, upgrades of industrial structures, and business model innovations. It also explores the practical foundations for big-data-driven high-quality growth, including technological advancements of big data, the development of big data industries, and the formulation of big data strategies. Finally, this paper proposes policy options for big data promoting high-quality growth in terms of developing the digital economy, consolidating the infrastructure construction of big data, expediting convergence of big data and the real economy, advocating for a big data culture, and expanding financing options for big data.
文摘Brain tissue is one of the softest parts of the human body, composed of white matter and grey matter. The mechanical behavior of brain tissue plays an essential role in regulating brain morphology and brain function. Besides, traumatic brain injury (TBI) and various brain diseases are also greatly influenced by the brain's mechanical properties. Whether white matter or grey matter, brain tissue contains multiscale structures composed of neurons, glial cells, fibers, blood vessels, etc., each with different mechanical properties. As such, brain tissue exhibits complex mechanical behavior, usually with strong nonlinearity, heterogeneity, and directional dependence. Building a constitutive law for multiscale brain tissue using traditional function-based approaches can be very challenging. Instead, this paper proposes a data-driven approach to establish the desired mechanical model of brain tissue. We focus on blood vessels with internal pressure embedded in a white or grey matter matrix material to demonstrate our approach. The matrix is described by an isotropic or anisotropic nonlinear elastic model. A representative unit cell (RUC) with blood vessels is built, which is used to generate the stress-strain data under different internal blood pressures and various proportional displacement loading paths. The generated stress-strain data is then used to train a mechanical law using artificial neural networks to predict the macroscopic mechanical response of brain tissue under different internal pressures. Finally, the trained material model is implemented into finite element software to predict the mechanical behavior of a whole brain under intracranial pressure and distributed body forces. Compared with a direct numerical simulation that employs a reference material model, our proposed approach greatly reduces the computational cost and improves modeling efficiency. The predictions made by our trained model demonstrate sufficient accuracy. Specifically, we find that the level of internal blood pressure can greatly influence the stress distribution and determine the possible related damage behaviors.
文摘Accuracy of the fluid property data plays an absolutely pivotal role in reservoir computational processes. Reliable data can be obtained through various experimental methods, but these methods are very expensive and time consuming. Alternative methods are numerical models, which use measured experimental data to develop a representative model for predicting desired parameters. In this study, several Artificial Intelligence (AI) models were developed to predict saturation pressure, oil formation volume factor, and solution gas-oil ratio. 582 reported data sets covering a wide range of fluid properties were used as a data bank. Accuracy and reliability of the models were examined by statistical parameters such as the correlation coefficient (R2), average absolute relative deviation (AARD), and root mean square error (RMSE). The results illustrated good accordance between predicted data and target values. The models were also compared with previous works and developed empirical correlations, which indicated that they are more reliable than all compared models and correlations. Finally, the relevancy factor was calculated for each input parameter to illustrate the impact of different parameters on the predicted values. The relevancy factor showed that in these models, the solution gas-oil ratio has the greatest impact on both saturation pressure and oil formation volume factor. On the other hand, saturation pressure has the greatest effect on the solution gas-oil ratio.
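The three evaluation metrics named in the abstract have standard definitions, which can be sketched directly; the sample arrays below are invented for illustration only.

```python
import math

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def aard(y_true, y_pred):
    """Average absolute relative deviation, in percent."""
    return 100 / len(y_true) * sum(abs((t - p) / t)
                                   for t, p in zip(y_true, y_pred))

def rmse(y_true, y_pred):
    """Root mean square error."""
    return math.sqrt(sum((t - p) ** 2
                         for t, p in zip(y_true, y_pred)) / len(y_true))

y_t, y_p = [100.0, 200.0, 300.0], [110.0, 190.0, 305.0]
```

For example, `r2(y_t, y_p)` evaluates to 0.98875 and `rmse(y_t, y_p)` to about 8.66 on these sample values.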
文摘Using Louisiana's Interstate system, this paper aims to demonstrate how data can be used to evaluate freight movement reliability, economy, and safety of truck freight operations to improve decision-making. Data mainly from the National Performance Management Research Data Set (NPMRDS) and the Louisiana Crash Database were used to analyze the Truck Travel Time Reliability Index, commercial vehicle User Delay Costs, and commercial vehicle safety. The results indicate that while Louisiana's Interstate system remained reliable over the years, some segments were found to be unreliable, annually amounting to less than 12% of the state's Interstate system mileage. The User Delay Costs incurred by commercial vehicles on these unreliable segments were, on average, 65.45% of the User Delay Costs by all vehicles on the Interstate highway system between 2016 and 2019, 53.10% between 2020 and 2021, and 70.36% in 2022, which are considerably high. These disproportionate ratios indicate the economic impact of the unreliability of the Interstate system on commercial vehicle operations. Additionally, though the annual crash frequencies remained relatively constant, an increasing proportion of commercial vehicles are involved in crashes, with segments (mileposts) that have high crash frequencies appearing to correspond with locations of recurring congestion on the Interstate highway system. The study highlights the potential of using data to identify areas that need improvement in transportation systems to support better decision-making.
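The Truck Travel Time Reliability index used in such analyses is commonly computed as the 95th-percentile truck travel time divided by the 50th percentile on a segment, with ratios of 1.5 or more often flagged as unreliable. A minimal sketch, with invented travel times and the interpolation method and threshold as assumptions:

```python
def percentile(values, p):
    """Linear-interpolation percentile (p in [0, 100])."""
    s = sorted(values)
    k = (len(s) - 1) * p / 100
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def tttr(travel_times):
    """Truck Travel Time Reliability index: 95th / 50th percentile."""
    return percentile(travel_times, 95) / percentile(travel_times, 50)

# Hypothetical truck travel times (minutes) observed on one segment
times = [10, 10, 11, 11, 12, 12, 13, 14, 18, 25]
ratio = tttr(times)
unreliable = ratio >= 1.5  # a commonly used flagging threshold
```

The occasional long runs (18 and 25 minutes) pull the 95th percentile far above the median, so this segment would be flagged as unreliable.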
基金Research and Application of Key Technologies for Tight Gas Production Improvement and Rehabilitation of Linxing Shenfu (YXKY-ZL-01-2021).
文摘Based on the actual data collected from the tight sandstone development zone, correlation analysis using the Spearman method was conducted to determine the main factors influencing the gas production rate of tight sandstone fracturing. An integrated model combining geological engineering and numerical simulation of fracture propagation and production was completed. Based on data analysis, the hydraulic fracture parameters were optimized to develop a differentiated fracturing treatment adjustment plan. The results indicate that the influence of geological and engineering factors in the X1 and X2 development zones in the study area differs significantly. Therefore, it is challenging to adopt a uniform development strategy to achieve a rapid production increase. The data analysis reveals that the variation in gas production rate is primarily affected by the reservoir thickness and permeability parameters as geological factors. On the other hand, the amount of treatment fluid and proppant addition significantly impact the gas production rate as engineering factors. Among these factors, the influence of geological factors is more pronounced in block X1. Therefore, the main focus should be on further optimizing the fracturing interval and adjusting the geological development well location. Given the existing well locations, there is limited potential for further optimizing fracture parameters to increase production. For block X2, the fracturing parameters should be optimized. Data screening was conducted to identify outliers in the entire dataset, and a data-driven fracturing parameter optimization method was employed to determine the basic adjustment direction for reservoir stimulation in the target block. This approach provides insights into the influence of geological, stimulation, and completion parameters on the gas production rate. Consequently, the subsequent fracturing parameter optimization design can significantly reduce the modeling and simulation workload and guide field operations to improve and optimize hydraulic fracturing efficiency.
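The Spearman screening step described above correlates ranks rather than raw values, so it captures monotone (not only linear) relations between a factor and the gas production rate. A minimal pure-Python sketch; the per-well thickness and gas-rate values are invented for illustration:

```python
def rank(values):
    """Average ranks (1-based); ties share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-well data: reservoir thickness (m) vs. gas rate
thickness = [8.2, 10.5, 6.1, 12.0, 9.3]
gas_rate = [3.1, 4.0, 2.2, 5.1, 3.5]
rho = spearman(thickness, gas_rate)
```

Factors with |rho| near 1 would be retained as main controlling factors for the subsequent parameter optimization.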
文摘In the synthesis of control algorithms for complex systems, we are often faced with imprecise or unknown mathematical models of the dynamical systems, or even with difficulties in finding a mathematical model of the system in the open loop. To tackle these difficulties, an approach of data-driven model identification and control algorithm design based on the maximum stability degree criterion is proposed in this paper. The data-driven model identification procedure finds the mathematical model of the system from the underdamped transient response of the closed-loop system. The system is approximated by an inertial model, whose coefficients are calculated from the values of the critical transfer coefficient and the oscillation amplitude and period of the underdamped response of the closed-loop system. In the data-driven control design, the tuning parameters of the controller are calculated from the parameters obtained in the identification step, and expressions for calculating the tuning parameters are presented. The obtained results of data-driven model identification and controller synthesis were verified by computer simulation.
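The abstract does not reproduce the paper's maximum-stability-degree tuning expressions, so none are shown here. As a point of comparison only, the classical Ziegler-Nichols ultimate-cycle rules compute PID settings from the same two measured quantities, the critical (ultimate) gain Ku and the sustained oscillation period Tu:

```python
def zn_pid(ku, tu):
    """Classical Ziegler-Nichols ultimate-cycle PID rules
    (Kp = 0.6*Ku, Ti = Tu/2, Td = Tu/8); NOT the paper's own
    maximum-stability-degree expressions."""
    return {"Kp": 0.6 * ku, "Ti": tu / 2, "Td": tu / 8}

# Hypothetical measurements from driving the loop to sustained oscillation
params = zn_pid(ku=4.0, tu=2.0)
```

Both approaches share the same data-driven premise: the controller is tuned from quantities measured on the closed-loop response rather than from a parametric plant model.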
基金RPSEA and the U.S. Department of Energy for partially funding this study
文摘Hydrocarbon production from shale has attracted much attention in recent years. When applied to these prolific and hydrocarbon-rich resource plays, our understanding of the complexities of the flow mechanism (sorption process and flow behavior in complex fracture systems, induced or natural) leaves much to be desired. In this paper, we present and discuss a novel approach to modeling and history matching of hydrocarbon production from a Marcellus shale asset in southwestern Pennsylvania using advanced data mining, pattern recognition and machine learning technologies. In this new approach, instead of imposing our understanding of the flow mechanism, the impact of multi-stage hydraulic fractures, and the production process on the reservoir model, we allow the production history, well log, completion and hydraulic fracturing data to guide our model and determine its behavior. The uniqueness of this technology is that it incorporates the so-called "hard data" directly into the reservoir model, so that the model can be used to optimize the hydraulic fracture process. The "hard data" refers to field measurements during the hydraulic fracturing process, such as fluid and proppant type and amount, injection pressure and rate, as well as proppant concentration. This novel approach contrasts with the current industry focus on the use of "soft data" (non-measured, interpretive data such as frac length, width, height and conductivity) in reservoir models. The study focuses on a Marcellus shale asset that includes 135 wells with multiple pads, different landing targets, well lengths and reservoir properties. The full-field history matching process was successfully completed using this data-driven approach, thus capturing the production behavior with acceptable accuracy for individual wells and for the entire asset.
基金partially supported by the National Natural Science Foundation of China (Nos.61203095, 61403407).
文摘In this article, we develop an online robust actor-critic-disturbance guidance law for a missile-target interception system with limited normal acceleration capability. Firstly, the missile-target engagement is formulated as a zero-sum pursuit-evasion game problem. The key is to seek the saddle point solution of the Hamilton-Jacobi-Isaacs (HJI) equation, which is generally intractable due to the nonlinearity of the problem. Then, based on the universal approximation capability of Neural Networks (NNs), we construct the critic NN, the actor NN and the disturbance NN, respectively. The Bellman error is adjusted by the normalized least-squares method. The proposed scheme is proved to be Uniformly Ultimately Bounded (UUB) stable by the Lyapunov method. Finally, the effectiveness and robustness of the developed method are illustrated through numerical simulations against different types of non-stationary targets and initial conditions.
基金supported by the National Key R&D Program of China (Grant No.2021YFC2100100), the National Natural Science Foundation of China (Grant No.21901157), the Shanghai Science and Technology Project of China (Grant No.21JC1403400), and the SJTU Global Strategic Partnership Fund (Grant No.2020 SJTUHUJI).
文摘The application scope and future development directions of machine learning models (supervised learning, transfer learning, and unsupervised learning) that have driven energy material design are discussed.
文摘The use of data-driven models has been shown to be useful for simulating complex engineering processes when the only information available consists of data from the process. In this study, four data-driven models, namely multiple linear regression, artificial neural network, adaptive neuro-fuzzy inference system, and K-nearest-neighbor models, based on a collection of 207 laboratory tests, are investigated for compressive strength prediction of concrete at high temperature. In addition, for each model, two different sets of input variables are examined: a complete set and a parsimonious set of involved variables. The results obtained are compared with each other and also to the equations of the NIST Technical Note standard, and demonstrate the suitability of using data-driven models to predict the compressive strength at high temperature. In addition, the results show that employing the parsimonious set of input variables is sufficient for the data-driven models to produce satisfactory results.
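Of the four models listed, K-nearest-neighbor regression is the simplest to sketch: the prediction is the average target of the k training points closest to the query. The feature choices and numbers below are invented for illustration, not the paper's 207-test dataset.

```python
import math

def knn_predict(train_x, train_y, query, k=3):
    """Predict by averaging the targets of the k nearest training
    points under Euclidean distance."""
    dists = sorted((math.dist(x, query), y)
                   for x, y in zip(train_x, train_y))
    return sum(y for _, y in dists[:k]) / k

# Hypothetical features: (temperature in deg C / 100, water-cement ratio)
X = [(0.2, 0.45), (4.0, 0.45), (6.0, 0.50), (8.0, 0.50)]
y = [42.0, 35.0, 24.0, 15.0]  # compressive strength, MPa
pred = knn_predict(X, y, query=(5.0, 0.48), k=3)
```

Note the scaled temperature feature: without such scaling the temperature axis would dominate the distance, which is one reason a parsimonious, well-conditioned input set can suffice.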
文摘Cloud storage is widely used by large companies to store vast amounts of data and files, offering flexibility, financial savings, and security. However, information shoplifting poses significant threats, potentially leading to poor performance and privacy breaches. Blockchain-based cognitive computing can help protect and maintain information security and privacy in cloud platforms, ensuring businesses can focus on business development. To ensure data security in cloud platforms, this research proposes a blockchain-based Hybridized Data Driven Cognitive Computing (HD2C) model. The proposed HD2C framework addresses breaches of the privacy information of mixed participants of the Internet of Things (IoT) in the cloud. HD2C is developed by combining Federated Learning (FL) with a Blockchain consensus algorithm to connect smart contracts with Proof of Authority. The "Data Island" problem can be solved by FL's emphasis on privacy and lightning-fast processing, while Blockchain provides a decentralized incentive structure that is impervious to poisoning. FL with Blockchain allows quick consensus through smart member selection and verification. The HD2C paradigm significantly improves the computational processing efficiency of intelligent manufacturing. Extensive analysis results derived from IIoT datasets confirm the superiority of HD2C. When compared to other consensus algorithms, the foundational cost of the Blockchain PoA is significant. The accuracy and memory utilization evaluation results predict the total benefits of the system. In comparison to the values 0.004 and 0.04, the value of 0.4 achieves good accuracy. According to the experimental results, the number of transactions per second has minimal impact on memory requirements. The findings of this study resulted in the development of a brand-new IIoT framework based on blockchain technology.
文摘In this paper, a data-driven method to model three-dimensional engineering structures under cyclic load with one-dimensional stress-strain data is proposed. In this method, one-dimensional stress-strain data obtained under uniaxial load and different loading histories are learned offline by a gated recurrent unit (GRU) network. The learned constitutive model is embedded into a general finite element framework through data expansion from one dimension to three dimensions, which enables stress updates in the three-dimensional setting. The proposed method is then adopted to drive numerical solutions of boundary value problems for engineering structures. Compared with direct numerical simulations using the J2 plasticity model, the stress-strain responses of a beam structure with elastoplastic materials under forward loading, reverse loading and cyclic loading were predicted accurately. The loading-path-dependent response of the structure was captured, and the effectiveness of the proposed method is verified. The shortcomings of the proposed method are also discussed.