Due to the data-processing demands of polar ice radar in our laboratory, a Curvelet Thresholding Neural Network (TNN) noise reduction method is proposed, and a new threshold function with an infinite-order continuous derivative is constructed. The method is based on the TNN model. In the learning process of the TNN, the gradient descent method is adopted to solve for the adaptive optimal thresholds of different scales and directions in the Curvelet domain, achieving optimal mean square error performance. In this paper, the specific implementation steps are presented, and the superiority of this method is verified by simulation. Finally, the proposed method is used to process the ice radar data obtained during the 28th Chinese National Antarctic Research Expedition in the region of Zhongshan Station, Antarctica. Experimental results show that the proposed method reduces noise effectively while preserving the edges of the ice layers.
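The core of the method above, thresholding transform coefficients with a smooth gate and tuning the threshold by gradient descent on the mean square error, can be sketched in a few lines. This is a minimal illustration on raw coefficients, not the authors' Curvelet-domain implementation; the sigmoid-based gate and the steepness constant `k` are assumptions standing in for the paper's unspecified infinitely differentiable threshold function.

```python
import math

def smooth_threshold(x, t, k=10.0):
    # Smooth, infinitely differentiable gate: coefficients well below the
    # threshold t are attenuated toward zero, large ones pass almost unchanged.
    return x / (1.0 + math.exp(-k * (abs(x) - t)))

def optimal_threshold(noisy, clean, steps=200, lr=0.01, k=10.0):
    # Gradient descent on the threshold t to minimise mean squared error
    # against a reference signal (numerical gradient, for brevity).
    def mse(tv):
        return sum((smooth_threshold(x, tv, k) - c) ** 2
                   for x, c in zip(noisy, clean)) / len(noisy)
    t, eps = 0.5, 1e-5
    for _ in range(steps):
        g = (mse(t + eps) - mse(t - eps)) / (2 * eps)
        t -= lr * g
    return t
```

In the paper this optimization runs separately for each scale and direction of the Curvelet decomposition, yielding one adaptive threshold per sub-band.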
In order to increase drilling speed in deep complicated formations in the Kela-2 gas field, Tarim Basin, Xinjiang, west China, it is important to predict the formation lithology for drill bit optimization. Based on the conventional back propagation (BP) model, an improved BP model was proposed, with the main modifications being to the back propagation of error, a self-adapting algorithm, and the activation function; a prediction program was also developed. The improved BP model was successfully applied to predicting the lithology of formations to be drilled in the Kela-2 gas field.
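The self-adapting part of an improved BP model is typically a learning-rate rule: accelerate while the epoch error falls, back off when it rises. The sketch below applies that rule to a tiny one-hidden-layer network; the exact rule, the factors (1.05/0.7), the network size, and the bias-as-extra-input trick are illustrative assumptions, not the paper's published modifications.

```python
import math, random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_bp(samples, labels, hidden=4, epochs=500, lr=0.5):
    random.seed(0)
    n = len(samples[0])
    W1 = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(hidden)]
    W2 = [random.uniform(-1, 1) for _ in range(hidden)]

    def forward(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
        return h, sigmoid(sum(w * hi for w, hi in zip(W2, h)))

    prev_err = float("inf")
    for _ in range(epochs):
        err = 0.0
        for x, y in zip(samples, labels):
            h, o = forward(x)
            err += (o - y) ** 2
            do = (o - y) * o * (1 - o)               # output-layer delta
            for j in range(hidden):
                dh = do * W2[j] * h[j] * (1 - h[j])  # hidden-layer delta
                W2[j] -= lr * do * h[j]
                for i in range(n):
                    W1[j][i] -= lr * dh * x[i]
        # Self-adapting rule: speed up while the epoch error falls,
        # back off (and stay bounded) when it rises.
        lr = min(lr * 1.05, 1.0) if err < prev_err else lr * 0.7
        prev_err = err
    return lambda x: forward(x)[1]
```

The last input of each sample is held at 1 so the network can learn a bias weight through the ordinary update rule.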
Seismic inversion and its basic theory are briefly presented, and the main idea of this method is introduced. Both a non-linear wave equation inversion technique and Complete Utilization of Samples Information (CUSI) neural network analysis are used in lithological interpretation in the Jibei coal field. The prediction results indicate that this method can provide reliable data for thin coal exploitation and promising area evaluation.
The extent of the peril associated with cancer can be perceived from the lack of treatment, ineffective early diagnosis techniques, and most importantly its fatality rate. Globally, cancer is the second leading cause of death, and among over a hundred types of cancer, lung cancer is the second most common type as well as the leading cause of cancer-related deaths. However, an accurate and timely lung cancer diagnosis can elevate the likelihood of survival by a noticeable margin, and medical imaging is a prevalent means of cancer diagnosis since it is easily accessible to people around the globe. Nonetheless, it is not eminently efficacious, considering that human inspection of medical images can yield a high false positive rate. Ineffective and inefficient diagnosis is a crucial reason for the high mortality rate of this malady. However, the conspicuous advancements in deep learning and artificial intelligence have stimulated the development of exceedingly precise diagnosis systems. The development and performance of these systems rely prominently on the data used to train them. A standard problem witnessed in publicly available medical image datasets is the severe imbalance of data between different classes. This grave imbalance can make a deep learning model biased towards the dominant class and unable to generalize. This study presents an end-to-end convolutional neural network that can accurately differentiate lung nodules from non-nodules and reduce the false positive rate to a bare minimum. To tackle the problem of data imbalance, we oversampled the data by transforming the available images in the minority class. The average false positive rate of the proposed method is a mere 1.5 percent, while the average false negative rate is 31.76 percent. The proposed neural network has 68.66 percent sensitivity and 98.42 percent specificity.
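Oversampling a minority class by transforming its images can be as simple as adding flipped and rotated copies until the class sizes match. The sketch below works on images as nested lists; the specific transforms and the round-robin fill strategy are assumptions, since the paper does not list its exact augmentations (real pipelines usually randomize the transforms to avoid exact duplicates).

```python
def augment(image):
    # Three deterministic transforms of one image (a list of rows):
    # horizontal flip, vertical flip, and 90-degree rotation.
    h_flip = [row[::-1] for row in image]
    v_flip = image[::-1]
    rot90 = [list(row) for row in zip(*image[::-1])]
    return [h_flip, v_flip, rot90]

def oversample(minority, majority):
    # Grow the minority class with transformed copies until the two
    # classes are the same size.
    out = list(minority)
    i = 0
    while len(out) < len(majority):
        for t in augment(minority[i % len(minority)]):
            if len(out) < len(majority):
                out.append(t)
        i += 1
    return out
```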
In the data retrieval process of a data recommendation system, matching prediction and similarity identification play a major role in the ontology. Several methods exist to improve the retrieval process with higher accuracy and reduced search time; however, in a data recommendation system this kind of search becomes too complex to find the best match for given query data and loses accuracy in the query recommendation process. To improve the performance of data validation, this paper proposes a novel model of data similarity estimation and clustering to retrieve the most relevant data with the best match in big data processing. An advanced model of the Logarithmic Directionality Texture Pattern (LDTP) method with a Metaheuristic Pattern Searching (MPS) system is used to estimate the similarity between the query data and the entire database, and the overall work is implemented for the data recommendation process. Records are indexed and grouped into clusters to form a paged database structure, which reduces computation time during searching. Also, with the help of a neural network, the relevance of feature attributes in the database is predicted, and the matching index is sorted to provide the recommended data for given query data. This is achieved using a Distributional Recurrent Neural Network (DRNN), an enhanced neural network model that finds relevance based on the correlation factor of the feature set. The training process of the DRNN classifier is carried out by estimating the correlation factors of the attributes of the dataset, which are formed into clusters and paged with proper indexing based on the MPS similarity metric. The overall performance of the proposed work is evaluated by varying the size of the training database over 60%, 70%, and 80%. The parameters considered for performance analysis are precision, recall, F1-score, the accuracy of data retrieval, the query recommendation output, and comparison with other state-of-the-art methods.
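Ranking records by a correlation factor between feature vectors, as the DRNN training above relies on, reduces in its simplest form to Pearson correlation plus a sort. This sketch covers only the ranking step, not the DRNN itself; the function names are illustrative.

```python
import math

def correlation(a, b):
    # Pearson correlation between two equal-length feature vectors.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def recommend(query, database, top_k=3):
    # Rank database records by correlation with the query features and
    # return the indices of the best matches, most relevant first.
    ranked = sorted(range(len(database)),
                    key=lambda i: correlation(query, database[i]),
                    reverse=True)
    return ranked[:top_k]
```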
Haze-fog, an atmospheric aerosol caused by natural or man-made factors, seriously affects the physical and mental health of human beings. PM2.5 (particulate matter with a diameter smaller than or equal to 2.5 microns) is the chief culprit of this aerosol. To forecast PM2.5 conditions, this paper uses related meteorological data and air pollutant data to predict the concentration of PM2.5. Since meteorological and air pollutant data are typical time-series data, it is reasonable to adopt a machine learning method with memory capability, the Single Hidden-Layer Long Short-Term Memory Neural Network (SSHL-LSTMNN), to implement the prediction. However, the number of neurons in the hidden layer is difficult to decide without manual testing. In order to determine the best structure of the neural network and improve prediction accuracy, this paper employs a self-organizing algorithm that uses Information Processing Capability (IPC) to adjust the number of hidden neurons automatically during the learning phase. In short, this paper proposes the SSHL-LSTMNN to predict PM2.5 concentration accurately. In the experiments, not only hourly precise prediction but also daily longer-term prediction is taken into account. The experimental results show that SSHL-LSTMNN performs the best.
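Both the hourly and the daily prediction settings above start from the same supervised framing of a time series: a window of past readings predicts a value some horizon ahead. A minimal sketch (the window and horizon sizes are arbitrary here, not the paper's settings):

```python
def make_windows(series, lookback, horizon):
    # Turn a PM2.5 reading series into supervised (input, target) pairs:
    # `lookback` consecutive past readings predict the value `horizon`
    # steps after the window ends.
    pairs = []
    for i in range(len(series) - lookback - horizon + 1):
        pairs.append((series[i:i + lookback], series[i + lookback + horizon - 1]))
    return pairs
```

With hourly data, horizon=1 gives next-hour prediction, while a larger horizon gives the longer-term forecast.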
In this paper, a variety of classical convolutional neural networks are trained on two different datasets using a transfer learning method. We demonstrate that the training dataset has a significant impact on the training results, in addition to the optimization achieved through the model structure. However, the lack of open-source agricultural data, combined with the absence of a comprehensive open-source data sharing platform, remains a substantial obstacle. This issue is closely related to the difficulty and high cost of obtaining high-quality agricultural data, the low education level of most employees, underdeveloped distributed training systems, and unsecured data. To address these challenges, this paper proposes the novel idea of constructing an agricultural data sharing platform based on a federated learning (FL) framework, aiming to overcome the deficiency of high-quality data for training in the agricultural field.
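The heart of most FL frameworks is federated averaging: clients train locally, and the server aggregates their parameters weighted by local dataset size. A sketch of that aggregation step (flat parameter lists for simplicity; the abstract does not specify the platform's aggregation rule, so FedAvg is an assumption):

```python
def fed_avg(client_weights, client_sizes):
    # Weighted average of client model parameters (FedAvg): each client's
    # update counts in proportion to its local dataset size, so no raw
    # data ever leaves the client.
    total = sum(client_sizes)
    n = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n)]
```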
Purpose - The purpose of this paper is to address the shortcomings of existing methods for the prediction of network security situations (NSS). Because conventional methods for NSS prediction, such as support vector machines and particle swarm optimization, lack accuracy, robustness and efficiency, the authors propose a new NSS prediction method based on a recurrent neural network (RNN) with gated recurrent units. Design/methodology/approach - The method first extracts internal and external information features from the original time-series network data. The extracted features are then applied to the deep RNN model for training and validation. After iteration and optimization, accurate predictions of NSS are obtained from the well-trained model, which remains robust for unstable network data. Findings - Experiments on benchmark datasets show that the proposed method obtains more accurate and robust prediction results than conventional models. Although deep RNN models require more training time, they guarantee the accuracy and robustness of the predictions in return. Originality/value - In the prediction of NSS time-series data, the proposed internal and external information features describe the original data well, and the deep RNN model outperforms state-of-the-art models.
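For readers unfamiliar with gated recurrent units, one forward step can be written out directly from the gate equations. The scalar version below is a teaching sketch, not the paper's multi-dimensional model; the convention h' = (1 - z)h + z·h̃ is one of the two equivalent forms found in the literature.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def gru_cell(x, h, p):
    # One step of a gated recurrent unit with scalar weights p:
    # update gate z blends the old state with a candidate state h_tilde,
    # which is computed on the reset-gated history r * h.
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])
    h_tilde = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1 - z) * h + z * h_tilde
```

With all weights zero the gates sit at 0.5 and the candidate at 0, so each step simply halves the state, a handy sanity check.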
Nowadays, it has become very urgent to find remaining oil under the worldwide oil shortage. However, most simple reservoirs have already been discovered, and those still undiscovered are mostly complex structural, stratigraphic and lithologic ones. Summarized in this paper is an integrated seismic processing/interpretation technique established on the basis of pre-stack AVO processing and interpretation. Information feedback between the pre-stack and post-stack processes improves the accuracy of data utilization and avoids pitfalls in seismic attributes. Through the integration of seismic data with geologic data, the parameters most essential to describing hydrocarbon characteristics were determined and comprehensively appraised, and the regularities of reservoir generation and distribution were described, so as to accurately appraise reservoirs, delineate favorable traps and pinpoint wells.
Big data analytics in business intelligence does not provide effective data retrieval methods or job scheduling, causing execution inefficiency and low system throughput. This paper aims to enhance the capability of data retrieval and job scheduling to speed up big data analytics and overcome these inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder with Elasticsearch indexing enables fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, exploiting a deep neural network to predict the approximate execution time of a job enables prioritized job scheduling based on shortest job first, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms the previous method using a deep autoencoder and Solr indexing, improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. The proposed job scheduling algorithm likewise beats both the first-in-first-out and the memory-sensitive heterogeneous earliest finish time scheduling algorithms, shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
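Shortest-job-first with predicted execution times is straightforward once the predictions exist: sort by predicted time and accumulate waiting times. A sketch (the predicted times are made-up numbers, not from the paper):

```python
def sjf_schedule(predicted_times):
    # Shortest-job-first: run jobs in increasing order of predicted
    # execution time; return the run order and the average waiting time.
    order = sorted(range(len(predicted_times)), key=lambda i: predicted_times[i])
    waiting, elapsed = [], 0.0
    for i in order:
        waiting.append(elapsed)      # job i waits for everything before it
        elapsed += predicted_times[i]
    return order, sum(waiting) / len(waiting)
```

For the four jobs in the test below, first-in-first-out order [0, 1, 2, 3] yields an average wait of 10.25, so SJF cuts it to 7.0, the mechanism behind the reported waiting-time reduction.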
In several fields such as financial dealing, industry, business, and medicine, Big Data (BD), a collection of huge amounts of data, has been utilized extensively. However, processing such massive amounts of data is highly complicated and time-consuming. Thus, to design a Distribution Preserving Framework for BD, a novel methodology is proposed utilizing Manhattan Distance-centered Partition Around Medoids (MD-PAM) along with a Conjugate Gradient Artificial Neural Network (CG-ANN), which undergoes several steps to reduce the complications of BD. Firstly, the data are processed in the pre-processing phase by mitigating data repetition through a map-reduce function; subsequently, missing data are handled by substituting or ignoring the missed values. After that, the data are transformed into a normalized form. Next, to enhance classification performance, the data's dimensionality is reduced by employing Gaussian Kernel Fisher Discriminant Analysis (GK-FDA). Afterwards, the processed data are submitted to the partitioning phase after being transformed into a structured format. In the partition phase, the data are partitioned and grouped into clusters by the MD-PAM. Lastly, the data are classified by the CG-ANN in the classification phase so that the needed data can be retrieved effortlessly by the user. To compare the outcomes of the CG-ANN with prevailing methodologies, the openly accessible NSL-KDD datasets are utilized. The experimental outcomes show that the proposed CG-ANN delivers efficient results at a reduced computation cost, and the proposed work outperforms existing systems in terms of accuracy, sensitivity and specificity.
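The MD-PAM partitioning step above is k-medoids with Manhattan distance: assign points to the nearest medoid, then move each medoid to the cluster member minimizing the within-cluster distance sum. A compact sketch (the first-k initialization and the iteration cap are simplifying assumptions):

```python
def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def pam(points, k, iters=10):
    # Partition Around Medoids with Manhattan distance: assign each point to
    # its nearest medoid, then move each medoid to the cluster member that
    # minimises the within-cluster distance sum; stop when medoids are stable.
    medoids = list(range(k))            # naive init: first k points
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for i, p in enumerate(points):
            nearest = min(range(k), key=lambda m: manhattan(p, points[medoids[m]]))
            clusters[nearest].append(i)
        new_medoids = []
        for j, members in enumerate(clusters):
            if not members:             # keep an empty cluster's old medoid
                new_medoids.append(medoids[j])
                continue
            best = min(members,
                       key=lambda c: sum(manhattan(points[c], points[i]) for i in members))
            new_medoids.append(best)
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids, clusters
```

Unlike k-means, the medoid is always a real data point, which suits distance metrics such as Manhattan where a coordinate mean is not meaningful.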
How organizations analyze and use data for decision-making has been changed by cognitive computing and artificial intelligence (AI). Cognitive computing solutions can translate enormous amounts of data into valuable insights by utilizing the power of cutting-edge algorithms and machine learning, empowering enterprises to make deft decisions quickly and efficiently. This article explores the idea of cognitive computing and AI in decision-making, emphasizing their function in converting unvalued data into valuable knowledge. It details the advantages of utilizing these technologies, such as greater productivity, accuracy, and efficiency. By understanding their capabilities and possibilities, businesses may use cognitive computing and AI to obtain a competitive edge in today's data-driven world [1].
Geologists interpret seismic data to understand subsurface properties and subsequently to locate underground hydrocarbon resources. Channels are among the most important geological features interpreters analyze to locate petroleum reservoirs. However, manual channel picking is both time-consuming and tedious. Moreover, like any other process dependent on human intervention, manual channel picking is error-prone and inconsistent. To address these issues, automatic channel detection is both necessary and important for efficient and accurate seismic interpretation. Modern systems make use of real-time image processing techniques for different tasks. Automatic channel detection combines different mathematical methods in digital image processing to identify streaks within the images, called channels, that are important to oil companies. In this paper, we propose an innovative automatic channel detection algorithm based on machine learning techniques. The new algorithm can identify channels in seismic data/images fully automatically and tremendously increases the efficiency and accuracy of the interpretation process. The algorithm uses a deep neural network to train the classifier with both channel and non-channel patches. We provide a field data example to demonstrate the performance of the new algorithm. The training phase gave a maximum accuracy of 84.6% for the classifier, and it performed even better in the testing phase, giving a maximum accuracy of 90%.
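Training a patch-based classifier as described starts with cutting the seismic section into patches. A sketch of the sliding-window extraction (patch size and stride are arbitrary; labeling patches as channel vs. non-channel is omitted):

```python
def extract_patches(image, size, stride):
    # Slide a square window across a 2-D seismic section (list of rows)
    # and collect (row, col, patch) samples for the classifier.
    patches = []
    for r in range(0, len(image) - size + 1, stride):
        for c in range(0, len(image[0]) - size + 1, stride):
            patch = [row[c:c + size] for row in image[r:r + size]]
            patches.append((r, c, patch))
    return patches
```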
This paper introduces a sentiment analysis system for film criticism based on deep learning, which contains four main processing sections. Compared with other systems, our deep-learning-based sentiment analysis system has several advantages, including a simple structure, high accuracy, and rapid encoding speed.
Data-driven modeling methods have changed the traditional modeling paradigm for generators, so conventional electromechanical transient time-domain simulation methods can no longer be applied directly to power systems under the new paradigm. This paper therefore proposes a data and physics driven time domain simulation (DPD-TDS) algorithm. In the algorithm, generator state variables and node injection currents are inferred by data-driven models, while node voltages are computed from the network equations; the two are solved alternately to complete the simulation. A preprocessing method for the network algebraic equations under the hybrid paradigm is proposed to improve the convergence of the simulation, and a heterogeneous central processing unit-neural network processing unit (CPU-NPU) computing framework is designed to accelerate it: the CPU solves the differential-algebraic equations of the physics-based models, while the NPU serves as a coprocessor performing forward inference of the data-driven models. Finally, the method is validated on the IEEE-39 and Polish-2383 systems with some or all generators replaced by data-driven models; the simulation results show that the proposed algorithm converges well, runs fast, and is accurate.
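The alternating data/physics solve described above can be reduced to a fixed-point loop: the data-driven surrogate maps voltages to injected currents, and the network equation maps currents back to voltages. The scalar sketch below is illustrative only; a real simulation solves the full network admittance system at each step, and the surrogate is a trained model rather than a closed-form function.

```python
def alternate_solve(surrogate, solve_network, v0, tol=1e-8, max_iter=100):
    # Alternating scheme from the hybrid simulation: infer injected
    # current from the voltage (data-driven step), then recompute the
    # voltage from the network equation (physics step), until convergence.
    v = v0
    for _ in range(max_iter):
        i = surrogate(v)
        v_new = solve_network(i)
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v
```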
Conventional machine learning (CML) methods have been successfully applied to gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of the sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). This model adopts an end-to-end algorithm structure to extract features directly from sensitive multicomponent seismic attributes, considerably simplifying feature optimization. The CNN is used for feature optimization to highlight sensitive gas reservoir information, and the APSO-LSSVM fully learns the relationships among the features extracted by the CNN to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of the DL and CML methods; its predictions are better than those of a single CNN model or APSO-LSSVM model. In the feature optimization of multicomponent seismic attribute data, the CNN demonstrates better gas reservoir feature extraction capability than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model learns the gas reservoir characteristics better than the LSSVM model and has higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the individual models. This method proves the effectiveness of DL technology for gas reservoir feature extraction and provides a feasible way to combine DL and CML technologies to predict gas reservoirs.
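The APSO component in the hybrid model searches LSSVM hyper-parameters; a plain (non-adaptive) particle swarm optimizer over a single parameter shows the mechanics. The inertia and acceleration constants below are conventional defaults, not the paper's adaptive scheme, and the quadratic objective in the test merely stands in for a cross-validation error.

```python
import random

def pso(objective, bounds, n_particles=10, iters=50, w=0.7, c1=1.5, c2=1.5):
    # Basic 1-D particle swarm optimisation: each particle keeps a velocity,
    # its personal best position, and is pulled toward the swarm's global best.
    random.seed(1)
    lo, hi = bounds
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]
    gbest = min(xs, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
            xs[i] = min(max(xs[i] + vs[i], lo), hi)   # clamp to the search bounds
            if objective(xs[i]) < objective(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=objective)
    return gbest
```

The adaptive variant (APSO) typically varies w, c1, and c2 over the run; the update equations stay the same.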
Funding (Curvelet TNN noise reduction study): Supported by the National High Technology Research and Development Program of China (No. 2011AA040202) and the National Natural Science Foundation of China (No. 40976114).
Funding (lung cancer diagnosis study): Supported through the National Research Foundation of Korea (NRF), funded by the Ministry of Science and ICT (2019M3F2A1073387), and by the Institute for Information & communications Technology Promotion (IITP) (No. 2022-0-00980, Cooperative Intelligence Framework of Scene Perception for Autonomous IoT Device).
Funding (agricultural data sharing study): National Key Research and Development Program of China (2021ZD0113704).
基金supported by the funds of Ningde Normal University Youth Teacher Research Program(2015Q15)The Education Science Project of the Junior Teacher in the Education Department of Fujian province(JAT160532).
文摘Purpose-The purpose of this paper is to solve the shortage of the existing methods for the prediction of network security situations(NSS).Because the conventional methods for the prediction of NSS,such as support vector machine,particle swarm optimization,etc.,lack accuracy,robustness and efficiency,in this study,the authors propose a new method for the prediction of NSS based on recurrent neural network(RNN)with gated recurrent unit.Design/methodology/approach-This method extracts internal and external information features from the original time-series network data for the first time.Then,the extracted features are applied to the deep RNN model for training and validation.After iteration and optimization,the accuracy of predictions of NSS will be obtained by the well-trained model,and the model is robust for the unstable network data.Findings-Experiments on bench marked data set show that the proposed method obtains more accurate and robust prediction results than conventional models.Although the deep RNN models need more time consumption for training,they guarantee the accuracy and robustness of prediction in return for validation.Originality/value-In the prediction of NSS time-series data,the proposed internal and external information features are well described the original data,and the employment of deep RNN model will outperform the state-of-the-arts models.
Abstract: Under the worldwide oil shortage, it has become very urgent to find remaining oil. However, most simple reservoirs have been discovered, and those undiscovered are mostly complex structural, stratigraphic, and lithologic ones. Summarized in this paper is an integrated seismic processing/interpretation technique established on the basis of pre-stack AVO processing and interpretation. Information feedback between the pre-stack and post-stack processes improves the accuracy of data utilization and avoids pitfalls in seismic attributes. Through the integration of seismic data with geologic data, the parameters most essential to describing hydrocarbon characteristics were determined and comprehensively appraised, and the regularities of reservoir generation and distribution were described so as to accurately appraise reservoirs, delineate favorable traps, and pinpoint wells.
Funding: Supported and granted by the Ministry of Science and Technology, Taiwan (MOST110-2622-E-390-001 and MOST109-2622-E-390-002-CC3).
Abstract: Big data analytics in business intelligence often lacks effective data retrieval methods and job scheduling, which causes execution inefficiency and low system throughput. This paper aims to enhance data retrieval and job scheduling to speed up big data analytics and overcome these inefficiency and low-throughput problems. First, integrating a stacked sparse autoencoder with Elasticsearch indexing enables fast data searching and distributed indexing, which reduces the search scope of the database and dramatically speeds up data searching. Next, a deep neural network predicts the approximate execution time of a job, enabling prioritized job scheduling based on shortest job first, which reduces the average waiting time of job execution. As a result, the proposed data retrieval approach outperforms the previous method using a deep autoencoder and Solr indexing, improving the speed of data retrieval by up to 53% and increasing system throughput by 53%. The proposed job scheduling algorithm also outperforms both the first-in-first-out and the memory-sensitive heterogeneous earliest finish time scheduling algorithms, shortening the average waiting time by up to 5% and the average weighted turnaround time by 19%, respectively.
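The shortest-job-first policy driven by predicted execution times can be sketched as follows; the DNN runtime estimator is replaced here by a plain list of hypothetical predicted times:

```python
def sjf_average_wait(predicted_times):
    """Non-preemptive shortest-job-first using predicted execution times.

    Returns (schedule order, average waiting time); jobs are identified
    by their index in the input list.
    """
    order = sorted(range(len(predicted_times)), key=lambda i: predicted_times[i])
    elapsed, total_wait = 0.0, 0.0
    for i in order:
        total_wait += elapsed          # a job waits for everything scheduled before it
        elapsed += predicted_times[i]
    return order, total_wait / len(predicted_times)

# Predicted runtimes (e.g., from the DNN estimator) for four queued jobs.
order, avg_wait = sjf_average_wait([6.0, 8.0, 7.0, 3.0])
print(order, avg_wait)  # [3, 0, 2, 1] 7.0
```

Running the shortest job first minimizes the average waiting time among non-preemptive schedules; first-in-first-out on the same queue would give an average wait of 10.25.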
Abstract: Big Data (BD), a collection of huge amounts of data, has been utilized extensively in fields such as financial dealing, industry, business, and medicine. However, processing a massive amount of data is highly complicated and time-consuming. Thus, to design a distribution-preserving framework for BD, a novel methodology is proposed utilizing Manhattan Distance-centered Partition Around Medoids (MD-PAM) along with a Conjugate Gradient Artificial Neural Network (CG-ANN), which undergoes various steps to reduce the complications of BD. First, the data are processed in the pre-processing phase by mitigating data repetition with the map-reduce function; subsequently, missing data are handled by substituting or ignoring the missing values. After that, the data are transformed into a normalized form. Next, to enhance classification performance, the data's dimensionality is reduced by employing Gaussian Kernel Fisher Discriminant Analysis (GK-FDA). Afterwards, the processed data are converted into a structured format and submitted to the partitioning phase, where MD-PAM partitions and groups the data into clusters. Lastly, CG-ANN classifies the data in the classification phase so that the needed data can be effortlessly retrieved by the user. To compare the outcomes of CG-ANN with prevailing methodologies, the openly accessible NSL-KDD datasets are utilized. The experimental outcomes show that the proposed CG-ANN achieves efficient results at reduced computational cost and outperforms existing systems in terms of accuracy, sensitivity, and specificity.
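The MD-PAM partitioning step can be illustrated with a minimal PAM-style k-medoids loop under Manhattan (L1) distance. Initialization and iteration details below are assumptions, since the abstract specifies only the distance and the algorithm family:

```python
import numpy as np

def md_pam(X, k, iters=20, seed=0):
    """Simple PAM-style k-medoids clustering using Manhattan (L1) distance."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(X), size=k, replace=False)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Assign each point to the nearest medoid under L1 distance.
        d = np.abs(X[:, None, :] - X[medoids][None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        # Replace each medoid with the member minimizing total L1 cost.
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            cost = np.abs(X[members][:, None, :] - X[members][None, :, :]).sum(axis=(1, 2))
            new_medoids[j] = members[cost.argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

# Two well-separated toy groups; the medoids should split them cleanly.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [10., 10.], [10., 11.], [11., 10.]])
medoids, labels = md_pam(X, k=2)
print(labels)
```

Medoids (actual data points) are preferred over centroids here because they stay meaningful after the data have been transformed into a structured format.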
Abstract: How organizations analyze and use data for decision-making has been changed by cognitive computing and artificial intelligence (AI). Cognitive computing solutions can translate enormous amounts of data into valuable insights by harnessing cutting-edge algorithms and machine learning, empowering enterprises to make sound decisions quickly and efficiently. This article explores cognitive computing and AI in decision-making, emphasizing their role in converting untapped data into valuable knowledge. It details the advantages of these technologies, such as greater productivity, accuracy, and efficiency. By understanding their capabilities and possibilities, businesses can use cognitive computing and AI to gain a competitive edge in today's data-driven world [1].
Abstract: Geologists interpret seismic data to understand subsurface properties and subsequently to locate underground hydrocarbon resources. Channels are among the most important geological features interpreters analyze to locate petroleum reservoirs. However, manual channel picking is both time-consuming and tedious. Moreover, like any other process dependent on human intervention, manual channel picking is error-prone and inconsistent. To address these issues, automatic channel detection is both necessary and important for efficient and accurate seismic interpretation. Modern systems make use of real-time image processing techniques for different tasks. Automatic channel detection combines different mathematical methods in digital image processing to identify the streaks within seismic images, called channels, that matter to oil companies. In this paper, we propose an automatic channel detection algorithm based on machine learning techniques. The new algorithm can identify channels in seismic data/images fully automatically and greatly increases the efficiency and accuracy of the interpretation process. The algorithm uses a deep neural network to train a classifier with both channel and non-channel patches. We provide a field data example to demonstrate the performance of the new algorithm. The training phase gave a maximum accuracy of 84.6% for the classifier, and it performed even better in the testing phase, with a maximum accuracy of 90%.
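The patch-based training setup can be illustrated by the sliding-window patch extraction that typically precedes such a classifier; the patch size and stride below are hypothetical, as the abstract does not give them:

```python
import numpy as np

def extract_patches(section, patch=16, stride=8):
    """Slide a window over a 2-D seismic amplitude section and collect patches.

    Each patch would then be labeled channel / non-channel by an interpreter
    and fed to the deep-network classifier for training.
    """
    h, w = section.shape
    patches, coords = [], []
    for i in range(0, h - patch + 1, stride):
        for j in range(0, w - patch + 1, stride):
            patches.append(section[i:i + patch, j:j + patch])
            coords.append((i, j))  # top-left corner, for mapping labels back
    return np.stack(patches), coords

# Toy 64 x 64 "seismic section" of random amplitudes.
section = np.random.default_rng(1).normal(size=(64, 64))
patches, coords = extract_patches(section)
print(patches.shape)  # (49, 16, 16)
```

At inference time the per-patch predictions are mapped back through `coords` to paint detected channels onto the section.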
Abstract: This paper introduces a sentiment analysis system for film criticism based on deep learning, which contains four main processing sections. Compared with other systems, our deep-learning-based sentiment analysis system has several advantages, including a simple structure, high accuracy, and fast encoding speed.
Funding: Supported by the Natural Science Foundation of Shandong Province (ZR2021MD061, ZR2023QD025), the China Postdoctoral Science Foundation (2022M721972), the National Natural Science Foundation of China (41174098), the Young Talents Foundation of Inner Mongolia University (10000-23112101/055), and the Qingdao Postdoctoral Science Foundation (QDBSH20230102094).
Abstract: Conventional machine learning (CML) methods have been successfully applied to gas reservoir prediction. Their prediction accuracy largely depends on the quality of the sample data; therefore, feature optimization of the input samples is particularly important. Commonly used feature optimization methods increase the interpretability of gas reservoirs; however, their steps are cumbersome, and the selected features cannot sufficiently guide CML models to mine the intrinsic features of the sample data efficiently. In contrast to CML methods, deep learning (DL) methods can directly extract the important features of targets from raw data. Therefore, this study proposes a feature optimization and gas-bearing prediction method based on a hybrid fusion model that combines a convolutional neural network (CNN) and an adaptive particle swarm optimization-least squares support vector machine (APSO-LSSVM). This model adopts an end-to-end algorithm structure to extract features directly from sensitive multicomponent seismic attributes, considerably simplifying the feature optimization. The CNN is used for feature optimization to highlight sensitive gas reservoir information, and the APSO-LSSVM fully learns the relationships among the CNN-extracted features to obtain the prediction results. The constructed hybrid fusion model improves gas-bearing prediction accuracy through the two processes of feature optimization and intelligent prediction, giving full play to the advantages of the DL and CML methods. The prediction results are better than those of a single CNN or APSO-LSSVM model. In the feature optimization of multicomponent seismic attribute data, the CNN demonstrates better gas reservoir feature extraction than commonly used attribute optimization methods. In the prediction process, the APSO-LSSVM model learns the gas reservoir characteristics better than the LSSVM model and has higher prediction accuracy. The constructed CNN-APSO-LSSVM model had lower errors and a better fit on the test dataset than the individual models. This method proves the effectiveness of DL technology for gas reservoir feature extraction and provides a feasible way to combine DL and CML technologies for gas reservoir prediction.