In response to the lack of reliable physical parameters in process simulations of butadiene extraction, a large amount of phase equilibrium data was collected for the actual process of butadiene production by the acetonitrile method. The accuracy of five prediction methods applied to the butadiene extraction process, UNIFAC (UNIQUAC Functional-group Activity Coefficients), UNIFAC-LL, UNIFAC-LBY, UNIFAC-DMD, and COSMO-RS, was verified against part of the phase equilibrium data. The results showed that the UNIFAC-DMD method had the highest accuracy in predicting phase equilibrium data for the missing systems, and COSMO-RS showed good accuracy across multiple systems; the large number of missing phase equilibrium data points was therefore estimated with the UNIFAC-DMD and COSMO-RS methods. The predicted phase equilibrium data were checked for thermodynamic consistency. The NRTL-RK (Non-Random Two-Liquid activity coefficient model with the Redlich-Kwong equation of state) and UNIQUAC thermodynamic models were then used to correlate the phase equilibrium data. Simulations of an industrial unit were used to verify the accuracy of the thermodynamic models as applied to the butadiene extraction process. The simulation results showed that the average deviation of the values simulated with the correlated thermodynamic models from the plant values was less than 2%, far smaller than the deviation (>10%) obtained with the built-in database of the commercial simulation software Aspen Plus, indicating that the obtained phase equilibrium data are highly accurate and reliable. The best phase equilibrium data and thermodynamic model parameters for butadiene extraction are provided. This improves the accuracy and reliability of process design, optimization, and control, and provides a basis for developing a more environmentally friendly and economical butadiene extraction process. (Funding: National Natural Science Foundation of China, 22178190.)
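Since the correlated NRTL model carries the section's key result, a worked example may help. The minimal Python sketch below evaluates binary NRTL activity coefficients from the standard equations; the interaction parameters and composition are illustrative placeholders, not the values fitted in the paper.

```python
import math

def nrtl_binary(x1, tau12, tau21, alpha=0.3):
    """Activity coefficients for a binary mixture from the NRTL model.

    tau12 and tau21 are dimensionless interaction parameters and alpha is
    the non-randomness factor; the values used below are illustrative only.
    """
    x2 = 1.0 - x1
    G12 = math.exp(-alpha * tau12)
    G21 = math.exp(-alpha * tau21)
    ln_g1 = x2**2 * (tau21 * (G21 / (x1 + x2 * G21))**2
                     + tau12 * G12 / (x2 + x1 * G12)**2)
    ln_g2 = x1**2 * (tau12 * (G12 / (x2 + x1 * G12))**2
                     + tau21 * G21 / (x1 + x2 * G21)**2)
    return math.exp(ln_g1), math.exp(ln_g2)

# Example: an equimolar mixture with hypothetical parameters
g1, g2 = nrtl_binary(0.5, tau12=1.2, tau21=0.8)
print(f"gamma1 = {g1:.4f}, gamma2 = {g2:.4f}")
```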
With the development of automation and informatization in the steelmaking industry, human operators gradually fail to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new way of dealing with large amounts of data beyond production experience and metallurgical principles, and its application to the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning to steelmaking process modeling, covering hot metal pretreatment, primary steelmaking, secondary refining, and other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, the support vector machine, and case-based reasoning, accounting for 56%, 14%, and 10% of applications, respectively. Data collected in steelmaking plants are frequently faulty; thus, data processing, especially data cleaning, is crucial to the performance of machine learning models. Detection of variable importance can be used to optimize process parameters and guide production. In hot metal pretreatment modeling, machine learning is used mainly for endpoint S content prediction. Prediction of endpoint element compositions and process parameters is widely investigated in primary steelmaking. In secondary refining modeling, machine learning is used mainly for ladle furnace, Ruhrstahl-Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be achieved through additional effort on the construction of data platforms, the industrial transfer of research achievements to practical steelmaking, and the improvement of the universality of machine learning models. (Funding: National Natural Science Foundation of China, No. U1960202.)
The data processing mode is vital to the performance of an entire coalmine gas early-warning system, especially its real-time performance. Our objective was to characterize the structural features of coalmine gas data so that the data could be processed at different priority levels in C language. Two data processing models, one with priority and one without, were built based on queuing theory, and their theoretical formulas were derived from an M/M/1 model in order to calculate the average occupation time of each measuring point in an early-warning program. We validated the models with the gas early-warning system of the Huaibei Coalmine Group Corp. The results indicate that the average occupation time for gas data processing using the priority queuing model is nearly 1/30 of that of the model without priority. (Funding: National Natural Science Foundation of China, Project 70533050.)
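To make the queuing comparison concrete, here is a small Python sketch, under assumed arrival and service rates, of the standard M/M/1 sojourn time and its non-preemptive priority variant (Cobham's formula); the rates are illustrative, not the Huaibei system's measurements.

```python
def mm1_time_in_system(lam, mu):
    """Mean time in system for a plain M/M/1 queue (no priority)."""
    assert lam < mu, "queue must be stable"
    return 1.0 / (mu - lam)

def mm1_priority_times(lams, mu):
    """Mean time in system per class for a non-preemptive priority M/M/1
    queue with identical exponential service (Cobham's formula).
    lams[0] is the arrival rate of the highest-priority class."""
    total = sum(lams)
    assert total < mu, "queue must be stable"
    w0 = total / mu**2                 # mean residual service time
    times, sigma_prev = [], 0.0
    for lam in lams:
        sigma = sigma_prev + lam / mu
        times.append(w0 / ((1 - sigma_prev) * (1 - sigma)) + 1.0 / mu)
        sigma_prev = sigma
    return times

# Hypothetical rates: gas alarms (high priority) vs. routine readings
print(mm1_time_in_system(0.5, 1.0))      # single-class baseline
print(mm1_priority_times([0.3, 0.2], 1.0))  # per-class times with priority
```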
The curse of dimensionality refers to the problem of increased sparsity and computational complexity when dealing with high-dimensional data. In recent years, the types and variables of industrial data have increased significantly, making data-driven models more challenging to develop. To address this problem, data augmentation technology has been introduced as an effective tool for the sparsity problem of high-dimensional industrial data. This paper systematically explores and discusses the necessity, feasibility, and effectiveness of augmented industrial data-driven modeling in the context of the curse of dimensionality and virtual big data. The process of data augmentation modeling is then analyzed, and the concept of data boosting augmentation is proposed. Data boosting augmentation involves designing reliability-weight and actual-virtual-weight functions and developing a double weighted partial least squares model to optimize the three stages of data generation, data fusion, and modeling. This approach significantly improves the interpretability, effectiveness, and practicality of data augmentation in industrial modeling. Finally, the proposed method is verified on practical examples of industrial fault diagnosis and virtual measurement systems. The results demonstrate the effectiveness of the proposed approach in improving the accuracy and robustness of data-driven models, making them more suitable for real-world industrial applications. (Funding: National Natural Science Foundation of China, 92167106 and 61833014; Key Research and Development Program of Zhejiang Province, 2022C01206.)
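As a rough illustration of weighting actual against virtual samples, the Python sketch below generates noise-perturbed virtual samples, assigns them placeholder reliability weights, and fits a sample-weighted ridge regression. This is a simplified stand-in for the paper's double weighted partial least squares model; the weight functions here are assumptions, not the authors' definitions.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X_real = rng.normal(size=(50, 8))                  # actual process samples
y_real = X_real @ rng.normal(size=8)               # synthetic response

# Virtual samples: noise-perturbed copies of the actual ones
X_virt = X_real + rng.normal(scale=0.1, size=X_real.shape)
y_virt = y_real + rng.normal(scale=0.1, size=y_real.shape)

# Placeholder reliability weight: down-weight virtual samples by how far
# they drifted from their source; halve again as an actual-virtual weight.
dist = np.linalg.norm(X_virt - X_real, axis=1)
w = np.concatenate([np.ones(len(X_real)), 0.5 * np.exp(-dist)])

X = np.vstack([X_real, X_virt])
y = np.concatenate([y_real, y_virt])
model = Ridge(alpha=1.0).fit(X, y, sample_weight=w)  # weighted fusion + fit
```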
Quality traceability plays an essential role in the assembly and welding of offshore platform blocks, and improving the welding quality traceability system helps improve the durability of offshore platforms and the process level of the offshore industry. Currently, quality management remains at a primary level of informatization, and there is a lack of effective tracking and recording of welding quality data. When welding defects are encountered, it is difficult to rapidly and accurately determine the root cause of the problem from the complex and scattered quality data. In this paper, a composite welding quality traceability model for the offshore platform block construction process is proposed; it combines a quality early-warning method based on long short-term memory (LSTM) networks with a quality data backtracking query optimization algorithm. Through training of the early-warning model and implementation of the query optimization algorithm, the traceability model can assist enterprises in rapidly identifying and locating quality problems. Furthermore, the model and the traceability algorithm are checked against cases from actual working conditions. The verification analyses suggest that the proposed early-warning model for welding quality and the backtracking query optimization algorithm are effective and applicable to the actual construction process. (Funding: Ministry of Industry and Information Technology of the People's Republic of China, Grant No. 2018473.)
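A minimal sketch of what an LSTM-based quality early-warning classifier can look like in Keras is given below; the window length, signal count, and training data are invented placeholders rather than the paper's configuration.

```python
import numpy as np
from tensorflow.keras import layers, models

# Hypothetical shapes: windows of 20 time steps over 6 welding-quality
# signals, labelled 1 when a defect followed the window.
window, n_signals = 20, 6
X = np.random.rand(200, window, n_signals).astype("float32")
y = np.random.randint(0, 2, size=(200,)).astype("float32")

model = models.Sequential([
    layers.Input(shape=(window, n_signals)),
    layers.LSTM(32),                        # summarise the process window
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # probability of a quality deviation
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

alarm = model.predict(X[:1], verbose=0)[0, 0] > 0.5  # raise early warning if True
```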
Before the implementation of a product data management (PDM) system, a person model and an enterprise process model (EPM) must first be established. For the convenience of project management, all related users are allocated to the "Person User Role Group" net. Based on the person model and the direction of information flow, the EPM is then established. The EPM consists of several release levels, in which the access controls are defined, and the EPM procedure shows the blueprint of the workflow process structure. The establishment of the person model and the EPM in an enterprise is illustrated with an example at the end of this paper.
A large amount of information is frequently encountered when characterizing sample models in chemical processes. In this paper, a fault diagnosis method based on dynamic modeling of feature engineering is proposed to effectively remove nonlinear correlation redundancy in chemical processes. Taking a whole-process view, the method uses mutual information to select the optimal variable subset, extracting the correlations among variables in the whitening process without being limited to linear correlations. PCA (Principal Component Analysis) dimension reduction is then used to extract the feature subset before fault diagnosis. Application results on the TE (Tennessee Eastman) simulation process show that the dynamic modeling process of MIFE (Mutual Information Feature Engineering) can accurately extract the nonlinear correlations among process variables and can effectively reduce the dimension of the features used for detection in process monitoring. (Funding: National Natural Science Foundation of China, 21576143.)
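The following Python sketch shows the general two-step pattern on synthetic stand-in data: mutual-information-based variable selection followed by PCA reduction. Note that this simplified variant ranks variables by mutual information with a fault label, whereas the paper selects via mutual information among the variables themselves.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins: X has 52 variables, matching the TE process layout
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 52))
y = rng.integers(0, 2, size=500)          # fault label

# 1) rank variables by mutual information with the label, keep the top 20
mi = mutual_info_classif(X, y, random_state=0)
top = np.argsort(mi)[::-1][:20]
X_sel = X[:, top]

# 2) standardize and reduce with PCA before fault detection
X_std = StandardScaler().fit_transform(X_sel)
scores = PCA(n_components=0.9).fit_transform(X_std)  # keep 90% of variance
```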
A set of indices for performance evaluation of business processes with multiple inputs and multiple outputs, as found in machinery manufacturers, is proposed. Based on the traditional methods of data envelopment analysis (DEA) and the analytic hierarchy process (AHP), a hybrid model called the DEA/AHP model is proposed for evaluating business process performance. In the proposed method, DEA is first used to develop a pairwise comparison matrix, and AHP is then applied to evaluate the performance of the business process using that matrix. The significant advantage of this hybrid model is the use of objective data instead of subjective human judgment for performance evaluation. In the case study, a business process reengineering (BPR) project at a hydraulic machinery manufacturer is used to demonstrate the effectiveness of the DEA/AHP model. (Funding: National Natural Science Foundation of China, No. 70471009; Natural Science Foundation Project of CQ CSTC, No. 2006BA2033.)
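To illustrate the AHP half of the pipeline, the Python sketch below derives priority weights and a consistency ratio from a pairwise comparison matrix via the principal eigenvector; the matrix itself is a hypothetical example of what the DEA stage would supply.

```python
import numpy as np

def ahp_weights(A):
    """Priority weights and consistency ratio from a positive reciprocal
    pairwise comparison matrix A, via the principal eigenvector."""
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)         # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n)   # Saaty's random index
    return w, (ci / ri if ri else None)

# Hypothetical comparison of three business processes
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
w, cr = ahp_weights(A)
print(w, cr)   # weights sum to 1; CR < 0.1 is conventionally acceptable
```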
In the course of network-supported collaborative design, data processing plays a vital role. Much effort has been spent in this area, and many kinds of approaches have been proposed. Building on the relevant literature, this paper presents an extensible markup language (XML) based strategy for several important data processing problems in network-supported collaborative design, such as representing the standard for the exchange of product model data (STEP) with XML for product information expression, and managing XML documents using a relational database. The paper details how to define the mapping between the XML structure and the relational database structure, and how XML-QL queries can be translated into structured query language (SQL) queries. Finally, the structure of an XML-based data processing system is presented. (Funding: National High Technology Research and Development Program of China (863 Program), No. AA420060.)
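A toy Python sketch of the idea, with invented element and table names: shred a fragment of product-structure XML into a relational table, then answer what would be an XML path query with plain SQL.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Hypothetical product-structure fragment (names are invented)
xml_doc = """<product id="P1">
  <part id="A" name="frame"/>
  <part id="B" name="axle"/>
</product>"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE part (product_id TEXT, part_id TEXT, name TEXT)")

# Map each XML element to a relational row
root = ET.fromstring(xml_doc)
for part in root.findall("part"):
    db.execute("INSERT INTO part VALUES (?, ?, ?)",
               (root.get("id"), part.get("id"), part.get("name")))

# The path query /product[@id="P1"]/part/@name becomes plain SQL:
rows = db.execute("SELECT name FROM part WHERE product_id = 'P1'").fetchall()
print(rows)   # [('frame',), ('axle',)]
```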
The present study aims to improve the efficiency of typical procedures used for post-processing flow-field data by applying neural-network technology. Taking an aircraft design problem as the workhorse, an FCN-VGG19 regression model for processing aircraft flow data is elaborated based on VGGNet (Visual Geometry Group Net) and FCN (Fully Convolutional Network) techniques. As shown by the results, the model displays a strong fitting ability, with almost no over-fitting in training, and it has good accuracy and convergence. For different input data and different grids, the model essentially achieves convergence and shows good performance. The proposed FCN-based regression model therefore has great potential in typical problems of computational fluid dynamics (CFD) and related data processing.
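A schematic of such a fully convolutional regressor in Keras is sketched below: a VGG-style encoder followed by 1x1 convolutions and upsampling, mapping a flow-field grid to a per-cell regression output. The channel counts, input size, and loss are illustrative assumptions, not the paper's FCN-VGG19 configuration.

```python
from tensorflow.keras import layers, models

def build_fcn_regressor(h=128, w=128, c=3):
    """VGG-style encoder + 1x1 convolutions + upsampling, so the network
    maps an (h, w, c) flow-field grid to a same-resolution regression map."""
    inp = layers.Input(shape=(h, w, c))
    x = inp
    for filters in (64, 128, 256):           # VGG-like conv blocks
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D()(x)         # halves the resolution each time
    x = layers.Conv2D(256, 1, activation="relu")(x)  # "fully conv" 1x1 layer
    x = layers.UpSampling2D(size=8)(x)               # back to input resolution
    out = layers.Conv2D(1, 1)(x)                     # per-cell regression output
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_fcn_regressor()
model.summary()
```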
Sandy debris flow deposits are present in the Miocene Unit I of Gas Field A in the Baiyun Depression of the South China Sea. The paucity of well data and the great variability of the sedimentary microfacies make it difficult to identify and predict the distribution patterns of the main gas reservoir, and these factors have seriously hindered further exploration and development of the gas field. Making full use of the available seismic data is therefore extremely important for predicting the spatial distribution of sedimentary microfacies when constructing three-dimensional reservoir models. A suitable reservoir modeling strategy, or workflow, controlled by sedimentary microfacies and seismic data has been developed. Five types of seismic attributes were tested for correlation with the sand percentage, and the root mean square (RMS) amplitude performed best; the relation between the RMS amplitude and the sand percentage was used to construct a reservoir sand distribution map. Three main types of sedimentary microfacies were identified: debris channels, fan lobes, and natural levees. Using constraints from the sedimentary microfacies boundaries, a sedimentary microfacies model was constructed using the sequential indicator and assigned-value simulation methods. Finally, reservoir models of the physical properties of the sandy debris flow deposits, controlled by sedimentary microfacies and seismic inversion data, were established. Property cutoff values were adopted because the sedimentary microfacies and the reservoir properties from well-logging interpretation are intrinsically different, and selecting appropriate property cutoffs is a key step in reservoir modeling with simulation methods based on microfacies control. When the abnormal data are truncated and the reservoir property probability distribution fits a normal distribution, microfacies-controlled reservoir property models are more reliable than those obtained from the sequential Gaussian simulation method. The cutoffs for the effective porosity of the debris channel, fan lobe, and natural levee facies were 0.2, 0.09, and 0.12, respectively; the corresponding average effective porosities were 0.24, 0.13, and 0.15. The proposed modeling method makes full use of seismic attributes and seismic inversion data, and makes the property data of single-well depositional microfacies conform more closely to a normal distribution with geological significance. Thus, the method allows more reliable input data when constructing a model of a sandy debris flow. (Funding: National Natural Science Foundation of China, 41272132 and 41572080; Fundamental Research Funds for Central Universities, 2-9-2013-97; Major State Science and Technology Research Programs, 2008ZX05056-002-02-01 and 2011ZX05010-001-009.)
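The attribute-to-property step reduces to simple arithmetic; the Python sketch below computes a windowed RMS amplitude and a linear calibration of sand percentage against it, with invented numbers standing in for the well data.

```python
import numpy as np

def rms_amplitude(trace, i0, i1):
    """RMS amplitude of a seismic trace over samples [i0, i1)."""
    win = np.asarray(trace[i0:i1], dtype=float)
    return np.sqrt(np.mean(win**2))

# Hypothetical calibration: linear fit of sand percentage against RMS
# amplitude at well locations (values invented for illustration).
rms = np.array([0.12, 0.18, 0.25, 0.31, 0.40])
sand_pct = np.array([10.0, 22.0, 35.0, 48.0, 63.0])
slope, intercept = np.polyfit(rms, sand_pct, 1)

# Apply the calibration anywhere on the attribute grid
predicted = slope * 0.28 + intercept
print(f"predicted sand percentage: {predicted:.1f}%")
```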
Scientists are dedicated to studying the detection of Alzheimer's disease onset to find a cure or, at the very least, medication that can slow the progression of the disease. This article explores the effectiveness of longitudinal data analysis, artificial intelligence, and machine learning approaches based on magnetic resonance imaging and positron emission tomography neuroimaging modalities for progression estimation and the detection of Alzheimer's disease onset. The significance of feature extraction in highly complex neuroimaging data, the identification of vulnerable brain regions, and the determination of threshold values for plaques, tangles, and neurodegeneration in these regions are evaluated extensively. Developing automated methods to improve these research areas would enable specialists to determine the progression of the disease, find the links between the biomarkers, and detect Alzheimer's disease onset more accurately.
The transmission of scientific data over long distances is required to enable interplanetary science expeditions. Current approaches include transmitting all collected data, or transmitting low-resolution data to enable ground-controller review and selection of data for transmission. Model-based data transmission (MBDT) seeks to increase the amount of knowledge conveyed per unit of data transmitted by comparing high-resolution data collected in situ to a pre-existing (or potentially co-transmitted) model. This paper describes the application of MBDT to gravitational data and characterizes its utility and performance. This is done by applying the MBDT technique to a selection of gravitational data previously collected for the Earth and comparing the transmission requirements to the levels required for raw data transmission and non-application-aware compression. Transmission reductions of up to 31.8% (without maximum-error thresholding) and up to 97.17% (with maximum-error thresholding) resulted. These levels significantly exceed what is possible with non-application-aware compression.
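A minimal Python sketch of the MBDT idea with maximum-error thresholding follows: only residuals between the observation and the shared model that exceed the error bound are transmitted, and the receiver reconstructs the rest from the model. The synthetic field and threshold are illustrative, not the paper's gravity data.

```python
import numpy as np

def mbdt_encode(observed, model_pred, max_err):
    """Keep only residuals whose magnitude exceeds the error threshold;
    everything else is reconstructed from the shared model."""
    residual = observed - model_pred
    idx = np.nonzero(np.abs(residual) > max_err)[0]
    return idx, residual[idx]           # (indices, residual values) to transmit

def mbdt_decode(model_pred, idx, vals):
    rec = model_pred.copy()
    rec[idx] += vals                    # error bounded by max_err elsewhere
    return rec

# Synthetic field: the model captures the smooth trend of the observation
x = np.linspace(0, 10, 1000)
observed = np.sin(x) + 0.01 * np.random.randn(1000)
model_pred = np.sin(x)

idx, vals = mbdt_encode(observed, model_pred, max_err=0.02)
print(f"transmit {len(idx)} of {len(x)} samples")
```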
<div style="text-align:justify;"> <span style="font-family:Verdana;">Various open source software are managed by using several bug tracking systems. In particular, the open source softw...<div style="text-align:justify;"> <span style="font-family:Verdana;">Various open source software are managed by using several bug tracking systems. In particular, the open source software extends to the cloud service and edge computing. Recently, OSF Edge Computing Group is launched by OpenStack. There are big data behind the internet services such as cloud and edge computing. Then, it is important to consider the impact of big data in order to assess the reliability of open source software. Various optimal software release problems have been proposed by specific researchers. In the typical optimal software release problems, the cost parameters are defined as the known parameter. However, it is difficult to decide the cost parameter because of the uncertainty. The purpose of our research is to estimate the effort parameters included in our models. In this paper, we propose an estimation method of effort parameter by using the genetic algorithm. Then, we show the estimation method in section 3. Moreover, we analyze actual data to show numerical examples for the estimation method of effort parameter. As the research results, we found that the OSS managers would be able to comprehend the human resources required before the OSS project in advance by using our method.</span> </div>展开更多
Complex industrial processes often need multiple operation modes to meet changing production conditions, and within a given mode there are discrete samples belonging to that mode. It is therefore important to account for samples that are sparse within a mode. To solve this issue, a new approach called density-based support vector data description (DBSVDD) is proposed. In this article, an algorithm combining a Gaussian mixture model (GMM) with the DBSVDD technique is proposed for process monitoring. The GMM method is used to obtain the center of each mode and to determine the number of modes. Considering the complexity of the data distribution and the discrete samples encountered during monitoring, DBSVDD is used for process monitoring. Finally, the validity and effectiveness of the DBSVDD method are illustrated on the Tennessee Eastman (TE) process. (Funding: National Natural Science Foundation of China, No. 61374140; Youth Foundation of the National Natural Science Foundation of China, No. 61403072.)
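The two-stage structure can be sketched in a few lines of Python: a GMM locates the operating modes, then a boundary model is trained per mode. Here a standard one-class SVM (equivalent to SVDD under an RBF kernel) stands in for the paper's density-based SVDD, and the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import OneClassSVM

# Two synthetic operating modes in a 4-variable process
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 4)),    # mode 1
               rng.normal(5, 1, (200, 4))])   # mode 2

# Stage 1: GMM finds the mode centers and assigns samples to modes
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
labels = gmm.predict(X)

# Stage 2: one boundary detector per mode (one-class SVM as SVDD stand-in)
detectors = {m: OneClassSVM(nu=0.05, gamma="scale").fit(X[labels == m])
             for m in range(2)}

def monitor(x):
    """Assign a new sample to its mode, then test it against that
    mode's boundary; True means the sample is within normal operation."""
    x = x.reshape(1, -1)
    m = gmm.predict(x)[0]
    return detectors[m].predict(x)[0] == 1

print(monitor(np.array([0.1, -0.2, 0.3, 0.0])))   # expected: True
print(monitor(np.array([2.5, 2.5, 2.5, 2.5])))    # between modes: likely False
```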