Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated causal convolution, causal convolution concentrates the receptive fields of outputs on the earlier part of the input sequence, so recent input information is severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to the difference operations. Based on SDC and DCM, we construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND reduces the prediction mean squared error by 7.3% and saves runtime compared with state-of-the-art models and the vanilla TCN.
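The difference-and-compensation idea can be illustrated with a minimal sketch: difference the input window to damp distribution shift, forecast in the differenced domain, then add the last observed level back. The window length, horizon, and the simple "add back the last value" compensation are assumptions for illustration; the paper's actual DCM may apply finer intra-window compensation.

```python
import torch

def difference_and_compensate(x, model):
    """Hedged sketch of a difference-and-compensation step (assumed form of DCM).
    x: (batch, length) input window."""
    dx = x[:, 1:] - x[:, :-1]                 # first-order difference removes the level
    dy_hat = model(dx)                        # forecast in the differenced domain
    y_hat = dy_hat.cumsum(dim=1) + x[:, -1:]  # compensate: restore the level lost by differencing
    return y_hat

# toy usage with a linear stand-in forecaster: 24-step window -> 23 differences -> 6-step forecast
toy = torch.nn.Linear(23, 6)
print(difference_and_compensate(torch.randn(8, 24), toy).shape)  # torch.Size([8, 6])
```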
The collective Unmanned Weapon System-of-Systems (UWSOS) network represents a fundamental element in modern warfare, characterized by a diverse array of unmanned combat platforms interconnected through heterogeneous network architectures. Despite its strategic importance, the UWSOS network is highly susceptible to hostile infiltrations, which significantly impede its battlefield recovery capabilities. Existing methods to enhance network resilience predominantly focus on basic graph relationships, neglecting the crucial higher-order dependencies among nodes necessary for capturing multi-hop meta-paths within the UWSOS. To address these limitations, we propose the Enhanced-Resilience Multi-Layer Attention Graph Convolutional Network (E-MAGCN), designed to augment the adaptability of the UWSOS. Our approach employs BERT for extracting semantic insights from nodes and edges, thereby refining feature representations by leveraging various node and edge categories. Additionally, E-MAGCN integrates a regularization-based multi-layer attention mechanism and a semantic node fusion algorithm within the Graph Convolutional Network (GCN) framework. Through extensive simulation experiments, our model demonstrates an enhancement in resilience performance ranging from 1.2% to 7% over existing algorithms.
Transfer learning can reduce the time and resources required to train new models and is therefore important for generalized applications of trained machine learning algorithms. In this study, a transfer learning-enhanced convolutional neural network (CNN) was proposed to identify the gross weight and the axle weights of moving vehicles on a bridge. The proposed transfer learning-enhanced CNN model was expected to weigh vehicles on different bridges based on a small amount of training data while providing high identification accuracy. First, a CNN algorithm for bridge weigh-in-motion (B-WIM) technology was proposed to identify the axle weights and the gross weight of typical two-axle, three-axle, and five-axle vehicles as they crossed the bridge with different loading routes and speeds. Then, the pre-trained CNN model was transferred by fine-tuning to weigh moving vehicles on another bridge. Finally, the identification accuracy and the amount of training data required were compared between the two CNN models. Results showed that the pre-trained CNN model using transfer learning for B-WIM technology could successfully identify the axle weights and the gross weight of moving vehicles on another bridge while reducing the training data by 63%. Moreover, the recognition accuracy of the pre-trained CNN model using transfer learning was comparable to that of the original model, showing its promising potential for practical applications.
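A minimal fine-tuning sketch of the transfer step described above: reuse a CNN trained on the source bridge, freeze its convolutional feature extractor, and retrain only the regression head on the small target-bridge dataset. The layer names, sizes, and weight file are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class BwimCNN(nn.Module):
    def __init__(self, n_outputs=3):                 # e.g. gross weight plus two axle weights
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_outputs)

    def forward(self, x):                             # x: (batch, 1, signal_length)
        return self.head(self.features(x).squeeze(-1))

model = BwimCNN()
# model.load_state_dict(torch.load("bridge_A.pt"))   # hypothetical weights from the source bridge
for p in model.features.parameters():                # freeze the convolutional feature extractor
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)  # fine-tune only the head on bridge B
```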
Deep learning, especially through convolutional neural networks (CNN) such as the U-Net 3D model, has revolutionized fault identification from seismic data, representing a significant leap over traditional methods. Our review traces the evolution of CNN, emphasizing the adaptation and capabilities of the U-Net 3D model in automating seismic fault delineation with unprecedented accuracy. We find: 1) The transition from basic neural networks to sophisticated CNN has enabled remarkable advancements in image recognition, which are directly applicable to analyzing seismic data. The U-Net 3D model, with its innovative architecture, exemplifies this progress by providing a method for detailed and accurate fault detection with reduced manual interpretation bias. 2) The U-Net 3D model has demonstrated its superiority over traditional fault identification methods in several key areas: it has enhanced interpretation accuracy, increased operational efficiency, and reduced the subjectivity of manual methods. 3) Despite these achievements, challenges such as the need for effective data preprocessing, acquisition of high-quality annotated datasets, and achieving model generalization across different geological conditions remain. Future research should therefore focus on developing more complex network architectures and innovative training strategies to further refine fault identification performance. Our findings confirm the transformative potential of deep learning, particularly CNN like the U-Net 3D model, in the geosciences, advocating for its broader integration to revolutionize geological exploration and seismic analysis.
Geopolymer concrete emerges as a promising avenue for sustainable development and offers an effective solution to environmental problems. Its attributes as a non-toxic, low-carbon, and economical substitute for conventional cement concrete, coupled with its elevated compressive strength and reduced shrinkage, position it as a pivotal material for diverse applications spanning from architectural structures to transportation infrastructure. In this context, this study uses machine learning (ML) algorithms to increase the accuracy and interpretability of predicting the compressive strength of geopolymer concrete in the civil engineering field. To achieve this goal, a new approach using convolutional neural networks (CNNs) has been adopted. The study focuses on creating a comprehensive dataset consisting of compositional and strength parameters of 162 geopolymer concrete mixes, all containing Class F fly ash. The selection of optimal input parameters is guided by two distinct criteria: the first leverages insights from previous research on the influence of individual features on compressive strength, and the second scrutinizes the impact of these features within the model's predictive framework. Key to enhancing the CNN model's performance is the careful determination of the optimal hyperparameters. Through a systematic trial-and-error process, the study ascertains the ideal number of epochs for data division and the optimal value of k for k-fold cross-validation, a technique vital to the model's robustness. The model's predictive power is rigorously assessed via a suite of performance metrics and comprehensive score analyses, and its adaptability is gauged by integrating a secondary dataset into its predictive framework, facilitating a comparative evaluation against conventional prediction methods. A loss plot is used to illustrate the CNN model's learning rate, and bivariate plots are applied to unveil trends and interactions among variables, reinforcing consistency with earlier research. The findings show that the CNN model accurately predicts the compressive strength of geopolymer concrete, with prediction accuracy promising enough to guide the development of new geopolymer concrete mixes. The outcomes underscore the significance of leveraging technology for sustainable construction practices and pave the way for innovation and efficiency in the field of civil engineering.
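The k-fold selection step can be sketched as follows: evaluate candidate values of k by cross-validated error on the tabular mix-design data. The data here are synthetic placeholders and an MLP regressor stands in for the paper's CNN; only the cross-validation mechanics are shown.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor  # stand-in for the paper's CNN regressor

X = np.random.rand(162, 8)          # mix-design features (placeholder for the 162-mix dataset)
y = np.random.rand(162) * 60        # compressive strength in MPa (placeholder)

for k in (5, 10):                   # candidate values of k for k-fold cross-validation
    rmses = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[test_idx])
        rmses.append(np.sqrt(np.mean((pred - y[test_idx]) ** 2)))
    print(f"k={k}: mean RMSE = {np.mean(rmses):.2f} MPa")
```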
To prevent possible casualties and economic loss, accurate prediction of the Remaining Useful Life (RUL) is critical in rail prognostics and health management. However, traditional neural networks struggle to capture the long-term dependencies of time series when modeling long sequences of rail damage, due to the coupling among multi-channel data from multiple sensors. In this paper, a novel RUL prediction model with an enhanced pulse separable convolution is used to solve this issue. First, a coding module based on the improved pulse separable convolutional network is established to effectively model the relationships within the data. To enhance the network, an alternate gradient back-propagation method is implemented, and an efficient channel attention (ECA) mechanism is developed to better emphasize the useful pulse characteristics. Second, an optimized Transformer encoder is designed to serve as the backbone of the model. It can efficiently capture the relationships within and between the data at each time step of long, full-life-cycle time series. More importantly, the Transformer encoder is improved by integrating pulse maximum pooling to retain more pulse timing characteristics. Finally, based on the features of the preceding layers, the final predicted RUL value is provided as an end-to-end solution. The empirical findings validate the efficacy of the suggested approach in forecasting rail RUL, surpassing various existing data-driven prognostic techniques. The proposed method also shows good generalization performance on the PHM2012 bearing dataset.
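The efficient channel attention (ECA) mechanism named above follows the well-known ECA-Net formulation: channel weights are produced by a small 1D convolution over the pooled channel descriptor, with no dimensionality reduction. A minimal sketch is shown below; the kernel size is fixed here rather than derived adaptively from the channel count.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention sketch for 1D sensor features."""
    def __init__(self, k_size=3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                 # x: (batch, channels, length)
        w = self.pool(x)                  # (B, C, 1) channel descriptor
        w = self.conv(w.transpose(1, 2))  # 1D conv across channels: (B, 1, C)
        w = torch.sigmoid(w).transpose(1, 2)
        return x * w                      # reweight channels by their attention scores

print(ECA()(torch.randn(2, 64, 128)).shape)  # torch.Size([2, 64, 128])
```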
Bleachers play a crucial role in practical engineering applications, and any damage incurred during their operation poses a significant threat to the safety of both life and property. Consequently, it becomes imperative to conduct damage diagnosis and health monitoring of bleachers. The intricate structure of bleachers, the varied types of potential damage, and the presence of similar vibration data at adjacent locations make it challenging to achieve satisfactory diagnosis accuracy through traditional time-frequency analysis methods. Furthermore, field environmental noise can adversely impact the accuracy of bleacher damage diagnosis. To enhance the accuracy and anti-noise capabilities of bleacher damage diagnosis, this paper proposes improvements to the existing Convolutional Neural Network with Training Interference (TICNN). The result is an advanced convolutional neural network model with superior accuracy and robust anti-noise capabilities, referred to as Enhanced TICNN (ETICNN). ETICNN autonomously extracts optimal damage-sensitive features from the original vibration data. To validate the superiority of the proposed ETICNN, experiments are conducted using the bleacher model from Qatar University as the subject. Comparative studies under identical experimental conditions involve TICNN, Deep Convolutional Neural Networks with wide first-layer kernels (WDCNN), and the One-Dimensional Convolutional Neural Network (1DCNN). The experimental findings demonstrate that the ETICNN model achieves the highest accuracy, approximately 99%, and exhibits robust classification abilities in both Phases I and II of the damage diagnosis experiments. Simultaneously, the ETICNN model demonstrates strong anti-noise capabilities, outperforming TICNN by 3% to 4% and surpassing other models in performance.
In light of the prevailing issue that existing convolutional neural network (CNN) methods for power quality disturbance identification can only extract single-scale features, which leads to a lack of feature information and weak anti-noise performance, a new approach for identifying power quality disturbances based on an adaptive Kalman filter (KF) and a multi-scale channel attention (MS-CAM) fused convolutional neural network is suggested. Single and composite disturbance signals are generated through simulation. The adaptive maximum likelihood Kalman filter is employed for noise reduction of the initial disturbance signal, and multi-scale features are then integrated into the conventional CNN architecture. The multi-scale features of the signal are captured by convolution kernels of different sizes so that the model can obtain diverse feature expressions. An attention mechanism (ATT) is introduced to adaptively weight the extracted features, which are then fused and selected to obtain the new main features. The Softmax classifier is employed for the classification of power quality disturbances. Finally, the recognition accuracy of the proposed method is compared with that of the convolutional neural network (CNN), the model using only the attention mechanism, the bidirectional long short-term memory network (MS-Bi-LSTM), and the multi-scale convolutional neural network (MSCNN) with the attention mechanism. The simulation results demonstrate that the accuracy of the proposed method is higher than that of CNN, MS-Bi-LSTM, and MSCNN, with an overall recognition rate exceeding 99%, showing significant classification accuracy and robust classification performance. This achievement provides a new perspective for further exploration in the field of power quality disturbance classification.
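A minimal sketch of the multi-scale extraction with channel-attention fusion described above: parallel 1D convolutions with different kernel sizes are concatenated and then reweighted by a squeeze-and-excitation style gate. The branch kernel sizes, channel counts, and gating form are illustrative assumptions, not the paper's exact MS-CAM design.

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Parallel kernels of different sizes, concatenated, then channel-reweighted."""
    def __init__(self, in_ch=1, ch=16):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv1d(in_ch, ch, k, padding=k // 2) for k in (3, 7, 15)
        )
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(3 * ch, 3 * ch), nn.Sigmoid(),
        )

    def forward(self, x):                                     # x: (batch, in_ch, length)
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (B, 3*ch, L)
        w = self.gate(feats).unsqueeze(-1)                        # per-channel attention weights
        return feats * w

print(MultiScaleBlock()(torch.randn(4, 1, 640)).shape)  # torch.Size([4, 48, 640])
```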
To enhance the accuracy and efficiency of bridge damage identification, a novel data-driven damage identification method was proposed. First, a convolutional autoencoder (CAE) was used to extract key features from the acceleration signals of the bridge structure through data reconstruction. The extreme gradient boosting tree (XGBoost) was then used to analyze the feature data to achieve damage detection with high accuracy and high performance. The proposed method was applied in a numerical simulation study on a three-span continuous girder and further validated experimentally on a scaled model of a cable-stayed bridge. The numerical simulation results show that the identification errors remain within 2.9% for six single-damage cases and within 3.1% for four double-damage cases. The experimental validation results demonstrate that when the tension in a single cable of the cable-stayed bridge decreases by 20%, the method accurately identifies damage at different cable locations using only sensors installed on the main girder, achieving identification accuracies above 95.8% in all cases. The proposed method shows high identification accuracy and generalization ability across various damage scenarios.
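The two-stage pipeline can be sketched as: a convolutional encoder compresses acceleration windows into feature codes, and a gradient-boosted tree classifier maps the codes to damage classes. Here the encoder is untrained and the data are random placeholders (in the actual method the CAE is first trained by reconstruction); architecture sizes are assumptions for illustration, and the xgboost package is assumed available.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier   # gradient-boosted trees on the learned features

encoder = nn.Sequential(             # convolutional encoder half of a CAE (untrained sketch)
    nn.Conv1d(1, 8, 7, stride=2, padding=3), nn.ReLU(),
    nn.Conv1d(8, 16, 7, stride=2, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(4), nn.Flatten(),          # 16 * 4 = 64-dimensional feature code
)

signals = torch.randn(200, 1, 1024)                  # placeholder acceleration windows
labels = np.random.randint(0, 3, 200)                # placeholder damage classes
with torch.no_grad():
    codes = encoder(signals).numpy()

clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(codes, labels)
print(clf.predict(codes[:5]))
```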
In this study, we prove the existence of solutions of a convolution Volterra integral equation in the space of Lebesgue integrable functions on the set of positive real numbers, equipped with the standard norm. An operator P was assigned to the convolution integral operator, which was later expressed in terms of the superposition operator and the nonlinear operator. Given a ball B_r belonging to the space L, it was established that the operator P maps the ball into itself. The Hausdorff measure of noncompactness was then applied by first proving that a given set M ⊆ B_r is bounded, closed, convex, and nondecreasing. Finally, the Darbo fixed point theorem was applied to the measure obtained from the set E belonging to M. From this application, it was observed that the conditions of the Darbo fixed point theorem were satisfied. This indicates the presence of at least one fixed point for the integral equation, thereby implying the existence of solutions for the integral equation.
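For concreteness, a representative form of such a convolution-type Volterra equation and the associated operator is written out below; the exact equation studied in the paper may include additional terms or different assumptions on the kernel and nonlinearity.

```latex
% Convolution Volterra equation on $L^1(\mathbb{R}_+)$ (representative form):
x(t) = g(t) + \int_0^t k(t-s)\, f\bigl(s, x(s)\bigr)\, ds, \qquad t \ge 0,
% with $g, k \in L^1(\mathbb{R}_+)$ and the associated operator
(Px)(t) = g(t) + \bigl(k * (Fx)\bigr)(t), \qquad (Fx)(s) = f\bigl(s, x(s)\bigr),
% where $F$ is the superposition (Nemytskii) operator and $P$ is shown to map
% a ball $B_r \subset L^1(\mathbb{R}_+)$ into itself before applying Darbo's theorem.
```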
Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments. However, a robust and efficient geomechanical upscaling technique for heterogeneous geological reservoirs is lacking to advance the application of three-dimensional (3D) reservoir-scale geomechanical simulation considering detailed geological heterogeneities. Here, we develop convolutional neural network (CNN) proxies that reproduce the anisotropic nonlinear geomechanical response caused by lithological heterogeneity, and compute upscaled geomechanical properties from the CNN proxies. The CNN proxies are trained using a large dataset of randomly generated, spatially correlated sand-shale realizations as inputs and simulation results of their macroscopic geomechanical response as outputs. The trained CNN models can provide the upscaled shear strength (R² > 0.949), stress-strain behavior (R² > 0.925), and volumetric strain changes (R² > 0.958) that agree closely with the numerical simulation results while saving over two orders of magnitude of computational time. This is a major advantage in computing the upscaled geomechanical properties directly from geological realizations without the need to perform local numerical simulations to obtain the geomechanical response. The proposed CNN proxy-based upscaling technique has the ability to (1) bridge the gap between fine-scale geocellular models considering geological uncertainties and computationally efficient geomechanical models used to assess the geomechanical risks of large-scale subsurface development, and (2) improve the efficiency of numerical upscaling techniques that rely on local numerical simulations, which otherwise lead to significantly increased computational time for uncertainty quantification using numerous geological realizations.
The motivation for this study is that the quality of deepfakes is constantly improving, which creates the need to develop new methods for their detection. The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The customized Convolutional Neural Network method is a data augmentation-based CNN model used to generate 'fake data' or 'fake images'. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were fake and the remaining 53 were real. Ten seconds were allotted for each video. There were 318 videos used in all, 199 of which were fake and 119 of which were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new Convolutional Neural Network (CNN) learning model that can accurately detect deepfake face photos.
This study addresses the limitations of Transformer models in image feature extraction, particularly their lack of inductive bias for visual structures. Compared to Convolutional Neural Networks (CNNs), Transformers are more sensitive to the hyperparameters of optimizers, which leads to a lack of stability and slow convergence. To tackle these challenges, we propose the Convolution-based Efficient Transformer Image Feature Extraction Network (CEFormer) as an enhancement of the Transformer architecture. Our model incorporates E-Attention, depthwise separable convolution, and dilated convolution to introduce crucial inductive biases, such as translation invariance, locality, and scale invariance, into the Transformer framework. Additionally, we implement a lightweight convolution module to process the input images, resulting in faster convergence and improved stability. The result is an efficient convolution-combined Transformer image feature extraction network. Experimental results on the ImageNet-1k dataset demonstrate that the proposed network achieves better Top-1 accuracy while maintaining high computational speed, reaching up to 85.0% accuracy across various model sizes on image classification and outperforming various baseline models. When integrated into the Mask Region-based Convolutional Neural Network (Mask R-CNN) framework as a backbone network, CEFormer outperforms other models and achieves the highest mean Average Precision (mAP) scores. This research presents a significant advancement in Transformer-based image feature extraction, balancing performance and computational efficiency.
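One of the named components, the depthwise separable convolution, factors a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution, which is how locality can be injected cheaply into a Transformer branch. The sketch below also exposes a dilation parameter; channel sizes are illustrative, not CEFormer's actual ones.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Per-channel (depthwise) spatial conv followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=1):
        super().__init__()
        pad = dilation * (kernel_size - 1) // 2      # keeps spatial size unchanged
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, padding=pad,
                                   dilation=dilation, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(1, 64, 56, 56)
print(DepthwiseSeparableConv(64, 128, dilation=2)(x).shape)  # torch.Size([1, 128, 56, 56])
```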
Dear Editor, This letter presents an organoid segmentation model based on multi-axis attention with a convolution parallel block. MACPNet adeptly captures dynamic dependencies within bright-field microscopy images, improving global modeling beyond the conventional UNet.
Aiming at the problems of low accuracy and slow convergence speed in current intrusion detection models, spiral convolution is combined with a long short-term memory (LSTM) network to construct a new intrusion detection model. The dataset is first preprocessed using one-hot encoding and normalization functions. Then the spiral convolution-LSTM model is constructed, which consists of spiral convolution, a two-layer long short-term memory network, and a classifier. Experiments show that the model is characterized by high accuracy, small computational cost, and fast convergence speed relative to previous deep learning models. The model uses a new neural network to achieve fast and accurate network traffic intrusion detection. The model in this paper achieves accuracy rates of 0.9706 and 0.8432 on the NSL-KDD dataset and the UNSW-NB15 dataset under five-class and ten-class classification, respectively.
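The preprocessing step described above (read here as one-hot encoding of categorical fields plus min-max normalization of numeric fields, a common treatment of NSL-KDD records) can be sketched as follows; the field values are made-up examples.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, OneHotEncoder

# Placeholder records: categorical protocol/service fields and numeric traffic fields.
categorical = np.array([["tcp", "http"], ["udp", "dns"], ["tcp", "ftp"]])
numeric = np.array([[181.0, 5450.0], [239.0, 486.0], [235.0, 1337.0]])

X_cat = OneHotEncoder(handle_unknown="ignore").fit_transform(categorical).toarray()
X_num = MinMaxScaler().fit_transform(numeric)     # scale numeric fields to [0, 1]
X = np.hstack([X_num, X_cat])                     # final feature matrix fed to the network
print(X.shape)                                    # (3, 2 numeric + one-hot columns)
```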
Graph Convolutional Neural Networks (GCNs) have been widely used in various fields due to their powerful capabilities in processing graph-structured data. However, GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions, resulting in substantial distortions. Moreover, most existing GCN models are shallow structures, which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures. To apply GCNs more broadly and precisely to real-world graphs exhibiting scale-free or hierarchical structures, and to utilize the multi-level aggregation of GCNs for capturing high-level information in local representations, we propose the Hyperbolic Deep Graph Convolutional Neural Network (HDGCNN), an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space. In HDGCNN, we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space. Additionally, we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework. In addition, we present a neighborhood aggregation method that combines initial structural features with hyperbolic attention coefficients. Through the above methods, HDGCNN effectively leverages both the structural features and node features of graph data, enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs. Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-of-the-art GCNs in node classification and link prediction tasks, even when utilizing low-dimensional embedding representations. Furthermore, when compared to shallow hyperbolic graph convolutional neural network models, HDGCNN exhibits notable advantages and performance enhancements.
Recommendation Information Systems (RIS) are pivotal in helping users swiftly locate desired content from the vast amount of information available on the Internet. Graph Convolution Network (GCN) algorithms have been employed to implement RIS efficiently. However, the GCN algorithm faces limitations in terms of performance enhancement owing to the embedding value-vanishing problem that occurs during the learning process. To address this issue, we propose a Weighted Forwarding method using the GCN (WF-GCN) algorithm. The proposed method involves multiplying the embedding results by different weights for each hop layer during graph learning. By applying the WF-GCN algorithm, which adjusts weights for each hop layer before forwarding to the next, nodes with many neighbors achieve higher embedding values. This approach facilitates the learning of more hop layers within the GCN framework. The efficacy of WF-GCN was demonstrated through its application to various datasets. In the MovieLens dataset, the implementation of WF-GCN in LightGCN resulted in significant performance improvements, with recall and NDCG increasing by up to +163.64% and +132.04%, respectively. Similarly, in the Last.FM dataset, LightGCN enhanced with WF-GCN showed substantial improvements, with the recall and NDCG metrics rising by up to +174.40% and +169.95%, respectively. Furthermore, the application of WF-GCN to Self-supervised Graph Learning (SGL) and Simple Graph Contrastive Learning (SimGCL) also demonstrated notable enhancements in both recall and NDCG across these datasets.
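The weighted-forwarding idea can be sketched in a LightGCN-style propagation loop: at each hop the propagated embedding is scaled by a per-layer weight before being passed to the next hop and accumulated, rather than averaged uniformly. The specific weight values and the final averaging rule are assumptions; the exact WF-GCN weighting scheme may differ.

```python
import torch

def weighted_forward_propagation(adj_norm, emb0, hop_weights):
    """Scale each hop's propagated embedding by a per-layer weight before forwarding."""
    layer_embs, e = [emb0], emb0
    for w in hop_weights:                       # e.g. [0.9, 0.6, 0.3]
        e = w * torch.sparse.mm(adj_norm, e)    # weight, then forward to the next hop
        layer_embs.append(e)
    return torch.stack(layer_embs).mean(dim=0)  # combine hop-wise embeddings

# toy usage: 4 nodes, 8-dim embeddings, identity-like normalized adjacency
adj = torch.eye(4).to_sparse()
print(weighted_forward_propagation(adj, torch.randn(4, 8), [0.9, 0.6, 0.3]).shape)
```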
In recent years, there has been growing interest in graph convolutional networks (GCN). However, existing GCN and variants are predominantly based on simple graph or hypergraph structures, which restricts their ability to handle complex data correlations in practical applications. These limitations stem from the difficulty of establishing multiple hierarchies and acquiring adaptive weights for each of them. To address this issue, this paper introduces the recent concept of complex hypergraphs and constructs a versatile high-order multi-level data correlation model. This model is realized by establishing a three-tier structure of complexes-hypergraphs-vertices. Specifically, we start by establishing hyperedge clusters on a foundational network, utilizing a second-order hypergraph structure to depict potential correlations. For this second-order structure, truncation methods are used to assess and generate a three-layer composite structure. During the construction of the composite structure, an adaptive learning strategy is implemented to merge correlations across different levels. We evaluate this model on several popular datasets and compare it with recent state-of-the-art methods. The comprehensive assessment results demonstrate that the proposed model surpasses the existing methods, particularly in modeling implicit data correlations (the node classification accuracies on the five public datasets Cora, Citeseer, Pubmed, Github Web ML, and Facebook are 86.1±0.33, 79.2±0.35, 83.1±0.46, 83.8±0.23, and 80.1±0.37, respectively). This indicates that our approach possesses advantages in handling datasets with implicit multi-level structures.
In convolutional neural networks, pooling methods are used to reduce both the size of the data and the number of parameters after the convolution layers. These methods reduce the computational cost of convolutional neural networks, making them more efficient. Maximum pooling, average pooling, and minimum pooling methods are generally used in convolutional neural networks. However, these pooling methods are not suitable for all datasets used in neural network applications. In this study, a new pooling approach is proposed to increase the efficiency and success rates of convolutional neural networks. This method, which we call MAM (Maximum Average Minimum) pooling, is more interactive than the traditional maximum pooling, average pooling, and minimum pooling methods and reduces data loss by calculating a more appropriate pixel value. The proposed MAM pooling method increases the performance of the neural network by calculating the optimal value during the training of convolutional neural networks. To determine the accuracy of the proposed MAM pooling method and compare it with other traditional pooling methods, training was carried out on the LeNet-5 model using the CIFAR-10, CIFAR-100, and MNIST datasets. According to the results obtained, the proposed MAM pooling method performed better than the maximum pooling, average pooling, and minimum pooling methods for all pool sizes on the three datasets.
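A minimal sketch of a maximum-average-minimum pooling operator is shown below, assuming the three classical pooling results are simply averaged per window; the paper's actual combination rule may be more adaptive than an equal-weight mean.

```python
import torch
import torch.nn.functional as F

def mam_pool2d(x, kernel_size=2):
    """Combine max, average, and min pooling over the same windows (sketch)."""
    p_max = F.max_pool2d(x, kernel_size)
    p_avg = F.avg_pool2d(x, kernel_size)
    p_min = -F.max_pool2d(-x, kernel_size)      # min pooling via negated max pooling
    return (p_max + p_avg + p_min) / 3.0

x = torch.randn(1, 6, 28, 28)                    # e.g. a LeNet-5 feature map
print(mam_pool2d(x).shape)                       # torch.Size([1, 6, 14, 14])
```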
The shale gas development process is complex in terms of its flow mechanisms, and the accuracy of production forecasting is influenced by geological parameters and engineering parameters. Therefore, to quantitatively evaluate the relative importance of model parameters for production forecasting performance, sensitivity analysis of the parameters is required. The parameters are ranked according to their sensitivity coefficients for subsequent optimization scheme design. A data-driven global sensitivity analysis (GSA) method using convolutional neural networks (CNN) is proposed to identify the influencing parameters in shale gas production. The CNN is trained on a large dataset, validated against numerical simulations, and utilized as a surrogate model for efficient sensitivity analysis. Our approach integrates the CNN with the Sobol' global sensitivity analysis method, presenting three key scenarios for sensitivity analysis: analysis of the production stage as a whole, analysis by fixed time intervals, and analysis by decline rate. The findings underscore the predominant influence of reservoir thickness and well length on shale gas production. Furthermore, the temporal sensitivity analysis reveals dynamic shifts in parameter importance across the distinct production stages.
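Surrogate-driven Sobol' analysis can be sketched with the SALib package (an assumption; the paper does not name its implementation): sample the parameter space with a Saltelli design, evaluate the surrogate on every sample, and compute first-order indices. A cheap analytic function stands in for the trained CNN, and the parameter list and bounds are illustrative.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["reservoir_thickness", "well_length", "fracture_half_length"],
    "bounds": [[10, 80], [1000, 3000], [50, 300]],
}

def surrogate_production(params):                # placeholder for the trained CNN proxy
    h, L, xf = params
    return 0.6 * h + 0.3 * L / 100.0 + 0.1 * xf / 10.0

X = saltelli.sample(problem, 1024)               # quasi-random Saltelli design
Y = np.apply_along_axis(surrogate_production, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], np.round(Si["S1"], 3))))   # first-order Sobol' indices
```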
基金supported by the National Key Research and Development Program of China(No.2018YFB2101300)the National Natural Science Foundation of China(Grant No.61871186)the Dean’s Fund of Engineering Research Center of Software/Hardware Co-Design Technology and Application,Ministry of Education(East China Normal University).
文摘Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated casual convolution, causal convolution leads to the receptive fields of outputs being concentrated in the earlier part of the input sequence, whereas the recent input information will be severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND can reduce prediction mean squared error by 7.3% and save runtime, compared with state-of-the-art models and vanilla TCN.
基金This research was supported by the Key Research and Development Program of Shaanxi Province(2024GX-YBXM-010)the National Science Foundation of China(61972302).
文摘The collective Unmanned Weapon System-of-Systems(UWSOS)network represents a fundamental element in modern warfare,characterized by a diverse array of unmanned combat platforms interconnected through hetero-geneous network architectures.Despite its strategic importance,the UWSOS network is highly susceptible to hostile infiltrations,which significantly impede its battlefield recovery capabilities.Existing methods to enhance network resilience predominantly focus on basic graph relationships,neglecting the crucial higher-order dependencies among nodes necessary for capturing multi-hop meta-paths within the UWSOS.To address these limitations,we propose the Enhanced-Resilience Multi-Layer Attention Graph Convolutional Network(E-MAGCN),designed to augment the adaptability of UWSOS.Our approach employs BERT for extracting semantic insights from nodes and edges,thereby refining feature representations by leveraging various node and edge categories.Additionally,E-MAGCN integrates a regularization-based multi-layer attention mechanism and a semantic node fusion algo-rithm within the Graph Convolutional Network(GCN)framework.Through extensive simulation experiments,our model demonstrates an enhancement in resilience performance ranging from 1.2% to 7% over existing algorithms.
基金the financial support provided by the National Natural Science Foundation of China(Grant No.52208213)the Excellent Youth Foundation of Education Department in Hunan Province(Grant No.22B0141)+1 种基金the Xiaohe Sci-Tech Talents Special Funding under Hunan Provincial Sci-Tech Talents Sponsorship Program(2023TJ-X65)the Science Foundation of Xiangtan University(Grant No.21QDZ23).
文摘Transfer learning could reduce the time and resources required by the training of new models and be therefore important for generalized applications of the trainedmachine learning algorithms.In this study,a transfer learningenhanced convolutional neural network(CNN)was proposed to identify the gross weight and the axle weight of moving vehicles on the bridge.The proposed transfer learning-enhanced CNN model was expected to weigh different bridges based on a small amount of training datasets and provide high identification accuracy.First of all,a CNN algorithm for bridge weigh-in-motion(B-WIM)technology was proposed to identify the axle weight and the gross weight of the typical two-axle,three-axle,and five-axle vehicles as they crossed the bridge with different loading routes and speeds.Then,the pre-trained CNN model was transferred by fine-tuning to weigh themoving vehicle on another bridge.Finally,the identification accuracy and the amount of training data required were compared between the two CNN models.Results showed that the pre-trained CNN model using transfer learning for B-WIM technology could be successfully used for the identification of the axle weight and the gross weight for moving vehicles on another bridge while reducing the training data by 63%.Moreover,the recognition accuracy of the pre-trained CNN model using transfer learning was comparable to that of the original model,showing its promising potentials in the actual applications.
文摘Deep learning, especially through convolutional neural networks (CNN) such as the U-Net 3D model, has revolutionized fault identification from seismic data, representing a significant leap over traditional methods. Our review traces the evolution of CNN, emphasizing the adaptation and capabilities of the U-Net 3D model in automating seismic fault delineation with unprecedented accuracy. We find: 1) The transition from basic neural networks to sophisticated CNN has enabled remarkable advancements in image recognition, which are directly applicable to analyzing seismic data. The U-Net 3D model, with its innovative architecture, exemplifies this progress by providing a method for detailed and accurate fault detection with reduced manual interpretation bias. 2) The U-Net 3D model has demonstrated its superiority over traditional fault identification methods in several key areas: it has enhanced interpretation accuracy, increased operational efficiency, and reduced the subjectivity of manual methods. 3) Despite these achievements, challenges such as the need for effective data preprocessing, acquisition of high-quality annotated datasets, and achieving model generalization across different geological conditions remain. Future research should therefore focus on developing more complex network architectures and innovative training strategies to refine fault identification performance further. Our findings confirm the transformative potential of deep learning, particularly CNN like the U-Net 3D model, in geosciences, advocating for its broader integration to revolutionize geological exploration and seismic analysis.
基金funded by the Researchers Supporting Program at King Saud University(RSPD2023R809).
文摘Geopolymer concrete emerges as a promising avenue for sustainable development and offers an effective solution to environmental problems.Its attributes as a non-toxic,low-carbon,and economical substitute for conventional cement concrete,coupled with its elevated compressive strength and reduced shrinkage properties,position it as a pivotal material for diverse applications spanning from architectural structures to transportation infrastructure.In this context,this study sets out the task of using machine learning(ML)algorithms to increase the accuracy and interpretability of predicting the compressive strength of geopolymer concrete in the civil engineering field.To achieve this goal,a new approach using convolutional neural networks(CNNs)has been adopted.This study focuses on creating a comprehensive dataset consisting of compositional and strength parameters of 162 geopolymer concrete mixes,all containing Class F fly ash.The selection of optimal input parameters is guided by two distinct criteria.The first criterion leverages insights garnered from previous research on the influence of individual features on compressive strength.The second criterion scrutinizes the impact of these features within the model’s predictive framework.Key to enhancing the CNN model’s performance is the meticulous determination of the optimal hyperparameters.Through a systematic trial-and-error process,the study ascertains the ideal number of epochs for data division and the optimal value of k for k-fold cross-validation—a technique vital to the model’s robustness.The model’s predictive prowess is rigorously assessed via a suite of performance metrics and comprehensive score analyses.Furthermore,the model’s adaptability is gauged by integrating a secondary dataset into its predictive framework,facilitating a comparative evaluation against conventional prediction methods.To unravel the intricacies of the CNN model’s learning trajectory,a loss plot is deployed to elucidate its learning rate.The study culminates in compelling findings that underscore the CNN model’s accurate prediction of geopolymer concrete compressive strength.To maximize the dataset’s potential,the application of bivariate plots unveils nuanced trends and interactions among variables,fortifying the consistency with earlier research.Evidenced by promising prediction accuracy,the study’s outcomes hold significant promise in guiding the development of innovative geopolymer concrete formulations,thereby reinforcing its role as an eco-conscious and robust construction material.The findings prove that the CNN model accurately estimated geopolymer concrete’s compressive strength.The results show that the prediction accuracy is promising and can be used for the development of new geopolymer concrete mixes.The outcomes not only underscore the significance of leveraging technology for sustainable construction practices but also pave the way for innovation and efficiency in the field of civil engineering.
文摘In order to prevent possible casualties and economic loss, it is critical to accurate prediction of the Remaining Useful Life (RUL) in rail prognostics health management. However, the traditional neural networks is difficult to capture the long-term dependency relationship of the time series in the modeling of the long time series of rail damage, due to the coupling relationship of multi-channel data from multiple sensors. Here, in this paper, a novel RUL prediction model with an enhanced pulse separable convolution is used to solve this issue. Firstly, a coding module based on the improved pulse separable convolutional network is established to effectively model the relationship between the data. To enhance the network, an alternate gradient back propagation method is implemented. And an efficient channel attention (ECA) mechanism is developed for better emphasizing the useful pulse characteristics. Secondly, an optimized Transformer encoder was designed to serve as the backbone of the model. It has the ability to efficiently understand relationship between the data itself and each other at each time step of long time series with a full life cycle. More importantly, the Transformer encoder is improved by integrating pulse maximum pooling to retain more pulse timing characteristics. Finally, based on the characteristics of the front layer, the final predicted RUL value was provided and served as the end-to-end solution. The empirical findings validate the efficacy of the suggested approach in forecasting the rail RUL, surpassing various existing data-driven prognostication techniques. Meanwhile, the proposed method also shows good generalization performance on PHM2012 bearing data set.
基金the Nature Science Foundation of Hebei Province Grant No.E2020402060Key Laboratory of Intelligent Industrial Equipment Technology of Hebei Province(Hebei University of Engineering)under Grant 202206.
文摘Bleachers play a crucial role in practical engineering applications, and any damage incurred during their operationposes a significant threat to the safety of both life and property. Consequently, it becomes imperative to conductdamage diagnosis and health monitoring of bleachers. The intricate structure of bleachers, the varied types ofpotential damage, and the presence of similar vibration data in adjacent locations make it challenging to achievesatisfactory diagnosis accuracy through traditional time-frequency analysis methods. Furthermore, field environmentalnoise can adversely impact the accuracy of bleacher damage diagnosis. To enhance the accuracy and antinoisecapabilities of bleacher damage diagnosis, this paper proposes improvements to the existing ConvolutionalNeural Network with Training Interference (TICNN). The result is an advanced Convolutional Neural Networkmodel with superior accuracy and robust anti-noise capabilities, referred to as Enhanced TICNN (ETICNN).ETICNN autonomously extracts optimal damage-sensitive features from the original vibration data. To validatethe superiority of the proposed ETICNN, experiments are conducted using the bleacher model from Qatar Universityas the subject. Comparative studies under identical experimental conditions involve TICNN, Deep ConvolutionalNeural Networks with wide first-layer kernels (WDCNN), and One-Dimensional ConvolutionalNeural Network (1DCNN). The experimental findings demonstrate that the ETICNN model achieves the highestaccuracy, approximately 99%, and exhibits robust classification abilities in both Phases I and II of the damagediagnosis experiments. Simultaneously, the ETICNN model demonstrates strong anti-noise capabilities, outperformingTICNN by 3% to 4% and surpassing other models in performance.
基金The project is supported by the National Natural Science Foundation of China(52067013)the Key Projects of the Natural Science Foundation of Gansu Provincial Science and Technology Department(22JR5RA318).
文摘In light of the prevailing issue that the existing convolutional neural network(CNN)power quality disturbance identification method can only extract single-scale features,which leads to a lack of feature information and weak anti-noise performance,a new approach for identifying power quality disturbances based on an adaptive Kalman filter(KF)and multi-scale channel attention(MS-CAM)fused convolutional neural network is suggested.Single and composite-disruption signals are generated through simulation.The adaptive maximum likelihood Kalman filter is employed for noise reduction in the initial disturbance signal,and subsequent integration of multi-scale features into the conventional CNN architecture is conducted.The multi-scale features of the signal are captured by convolution kernels of different sizes so that the model can obtain diverse feature expressions.The attention mechanism(ATT)is introduced to adaptively allocate the extracted features,and the features are fused and selected to obtain the new main features.The Softmax classifier is employed for the classification of power quality disturbances.Finally,by comparing the recognition accuracy of the convolutional neural network(CNN),the model using the attention mechanism,the bidirectional long-term and short-term memory network(MS-Bi-LSTM),and the multi-scale convolutional neural network(MSCNN)with the attention mechanism with the proposed method.The simulation results demonstrate that the proposed method is higher than CNN,MS-Bi-LSTM,and MSCNN,and the overall recognition rate exceeds 99%,and the proposed method has significant classification accuracy and robust classification performance.This achievement provides a new perspective for further exploration in the field of power quality disturbance classification.
基金The National Natural Science Foundation of China(No.52361165658,52378318,52078459).
文摘To enhance the accuracy and efficiency of bridge damage identification,a novel data-driven damage identification method was proposed.First,convolutional autoencoder(CAE)was used to extract key features from the acceleration signal of the bridge structure through data reconstruction.The extreme gradient boosting tree(XGBoost)was then used to perform analysis on the feature data to achieve damage detection with high accuracy and high performance.The proposed method was applied in a numerical simulation study on a three-span continuous girder and further validated experimentally on a scaled model of a cable-stayed bridge.The numerical simulation results show that the identification errors remain within 2.9%for six single-damage cases and within 3.1%for four double-damage cases.The experimental validation results demonstrate that when the tension in a single cable of the cable-stayed bridge decreases by 20%,the method accurately identifies damage at different cable locations using only sensors installed on the main girder,achieving identification accuracies above 95.8%in all cases.The proposed method shows high identification accuracy and generalization ability across various damage scenarios.
文摘In this study, we prove the of existence of solutions of a convolution Volterra integral equation in the space of the Lebesgue integrable function on the set of positive real numbers and with the standard norm defined on it. An operator P was assigned to the convolution integral operator which was later expressed in terms of the superposition operator and the nonlinear operator. Given a ball B<sub>r</sub> belonging to the space L it was established that the operator P maps the ball into itself. The Hausdorff measure of noncompactness was then applied by first proving that given a set M∈ B r the set is bounded, closed, convex and nondecreasing. Finally, the Darbo fixed point theorem was applied on the measure obtained from the set E belonging to M. From this application, it was observed that the conditions for the Darbo fixed point theorem was satisfied. This indicated the presence of at least a fixed point for the integral equation which thereby implying the existence of solutions for the integral equation.
基金financial support provided by the Future Energy System at University of Alberta and NSERC Discovery Grant RGPIN-2023-04084。
文摘Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments.However,a robust and efficient geomechanical upscaling technique for heterogeneous geological reservoirs is lacking to advance the applications of three-dimensional(3D)reservoir-scale geomechanical simulation considering detailed geological heterogeneities.Here,we develop convolutional neural network(CNN)proxies that reproduce the anisotropic nonlinear geomechanical response caused by lithological heterogeneity,and compute upscaled geomechanical properties from CNN proxies.The CNN proxies are trained using a large dataset of randomly generated spatially correlated sand-shale realizations as inputs and simulation results of their macroscopic geomechanical response as outputs.The trained CNN models can provide the upscaled shear strength(R^(2)>0.949),stress-strain behavior(R^(2)>0.925),and volumetric strain changes(R^(2)>0.958)that highly agree with the numerical simulation results while saving over two orders of magnitude of computational time.This is a major advantage in computing the upscaled geomechanical properties directly from geological realizations without the need to perform local numerical simulations to obtain the geomechanical response.The proposed CNN proxybased upscaling technique has the ability to(1)bridge the gap between the fine-scale geocellular models considering geological uncertainties and computationally efficient geomechanical models used to assess the geomechanical risks of large-scale subsurface development,and(2)improve the efficiency of numerical upscaling techniques that rely on local numerical simulations,leading to significantly increased computational time for uncertainty quantification using numerous geological realizations.
基金Science and Technology Funds from the Liaoning Education Department(Serial Number:LJKZ0104).
文摘The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.
基金Support by Sichuan Science and Technology Program(2021YFQ0003,2023YFSY 0026,2023YFH0004).
文摘This study addresses the limitations of Transformer models in image feature extraction,particularly their lack of inductive bias for visual structures.Compared to Convolutional Neural Networks(CNNs),the Transformers are more sensitive to different hyperparameters of optimizers,which leads to a lack of stability and slow convergence.To tackle these challenges,we propose the Convolution-based Efficient Transformer Image Feature Extraction Network(CEFormer)as an enhancement of the Transformer architecture.Our model incorporates E-Attention,depthwise separable convolution,and dilated convolution to introduce crucial inductive biases,such as translation invariance,locality,and scale invariance,into the Transformer framework.Additionally,we implement a lightweight convolution module to process the input images,resulting in faster convergence and improved stability.This results in an efficient convolution combined Transformer image feature extraction network.Experimental results on the ImageNet1k Top-1 dataset demonstrate that the proposed network achieves better accuracy while maintaining high computational speed.It achieves up to 85.0%accuracy across various model sizes on image classification,outperforming various baseline models.When integrated into the Mask Region-ConvolutionalNeuralNetwork(R-CNN)framework as a backbone network,CEFormer outperforms other models and achieves the highest mean Average Precision(mAP)scores.This research presents a significant advancement in Transformer-based image feature extraction,balancing performance and computational efficiency.
Funding: Supported by the Xinjiang Tianchi Talents Program (E33B9401), the Natural Science Foundation of Xinjiang Uygur Autonomous Region (2023D01E15), the National Natural Science Foundation of China (62302495), and the National Natural Science Foundation of China (62373348).
Abstract: Dear Editor, This letter presents an organoid segmentation model based on multi-axis attention with a convolution parallel block (MACPNet). MACPNet adeptly captures dynamic dependencies within bright-field microscopy images, improving global modeling beyond the conventional UNet.
Funding: The Gansu University of Political Science and Law Key Research Funding Project in 2018 (GZF2018XZDLW20) and the Gansu Provincial Science and Technology Plan Project (Technology Innovation Guidance Plan) (20CX9ZA072).
Abstract: Aiming at the low accuracy and slow convergence of current intrusion detection models, spiral convolution is combined with a Long Short-Term Memory (LSTM) network to construct a new intrusion detection model. The dataset is first preprocessed with one-hot encoding and normalization. The spiral convolution-LSTM model is then constructed from spiral convolution, a two-layer LSTM network, and a classifier. Experiments show that, compared with previous deep learning models, the model achieves high accuracy with little computation and fast convergence, using a new neural network to perform fast and accurate network traffic intrusion detection. The model achieves accuracy rates of 0.9706 on the NSL-KDD dataset (five-class classification) and 0.8432 on the UNSW-NB15 dataset (ten-class classification).
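A minimal sketch of the convolution-plus-LSTM pipeline follows; a standard 1D convolution stands in for the paper's spiral convolution, and the feature count, layer widths, and class count are assumptions:

    import torch
    import torch.nn as nn

    class ConvLSTMIDS(nn.Module):
        def __init__(self, n_features=41, n_classes=5):
            super().__init__()
            self.conv = nn.Sequential(nn.Conv1d(1, 16, kernel_size=3, padding=1),
                                      nn.ReLU())
            self.lstm = nn.LSTM(input_size=16, hidden_size=64, num_layers=2,
                                batch_first=True)        # two-layer LSTM
            self.classifier = nn.Linear(64, n_classes)

        def forward(self, x):              # x: (batch, n_features), encoded and normalized
            h = self.conv(x.unsqueeze(1))  # (batch, 16, n_features)
            out, _ = self.lstm(h.transpose(1, 2))
            return self.classifier(out[:, -1])

    model = ConvLSTMIDS()
    logits = model(torch.rand(8, 41))      # e.g. preprocessed intrusion records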
Funding: Supported by the National Natural Science Foundation of China-China State Railway Group Co., Ltd. Railway Basic Research Joint Fund (Grant No. U2268217) and the Scientific Funding for China Academy of Railway Sciences Corporation Limited (No. 2021YJ183).
Abstract: Graph Convolutional Neural Networks (GCNs) have been widely used in various fields due to their powerful capabilities in processing graph-structured data. However, GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions, resulting in substantial distortions. Moreover, most existing GCN models are shallow structures, which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures. To apply GCNs more broadly and precisely to real-world graphs exhibiting scale-free or hierarchical structures, and to exploit the multi-level aggregation of GCNs for capturing high-level information in local representations, we propose the Hyperbolic Deep Graph Convolutional Neural Network (HDGCNN), an end-to-end deep graph representation learning framework that maps scale-free graphs from Euclidean space to hyperbolic space. In HDGCNN, we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space. We further introduce a hyperbolic feature transformation method based on identity mapping, a dense connection scheme based on a novel non-local message passing framework, and a neighborhood aggregation method that combines initial structural features with hyperbolic attention coefficients. Through these methods, HDGCNN effectively leverages both the structural features and node features of graph data, enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs. Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-of-the-art GCNs in node classification and link prediction tasks, even when utilizing low-dimensional embedding representations. Furthermore, compared to shallow hyperbolic graph convolutional neural network models, HDGCNN exhibits notable advantages and performance enhancements.
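For orientation only, the sketch below shows the generic hyperbolic graph convolution recipe on the Poincaré ball (map node features to the tangent space at the origin, transform and aggregate there, map back); it is an assumption-laden simplification, not HDGCNN's actual operators:

    import torch

    def expmap0(v, c=1.0, eps=1e-6):
        norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
        return torch.tanh(c ** 0.5 * norm) * v / (c ** 0.5 * norm)

    def logmap0(y, c=1.0, eps=1e-6):
        norm = y.norm(dim=-1, keepdim=True).clamp(eps, 1 - eps)
        return torch.atanh(c ** 0.5 * norm) * y / (c ** 0.5 * norm)

    def hyperbolic_gcn_layer(x_hyp, norm_adj, weight, c=1.0):
        x_tan = logmap0(x_hyp, c) @ weight   # feature transform in tangent space
        x_agg = norm_adj @ x_tan             # neighborhood aggregation
        return expmap0(x_agg, c)

    x = expmap0(torch.randn(5, 8) * 0.1)                 # 5 nodes, 8-dim features on the ball
    adj = torch.eye(5) + torch.rand(5, 5).round()        # toy adjacency with self-loops
    h = hyperbolic_gcn_layer(x, adj / adj.sum(1, keepdim=True), torch.randn(8, 8) * 0.1)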
Funding: This work was supported by the Kyonggi University Research Grant 2022.
Abstract: Recommendation Information Systems (RIS) are pivotal in helping users swiftly locate desired content from the vast amount of information available on the Internet. Graph Convolution Network (GCN) algorithms have been employed to implement RIS efficiently. However, the GCN algorithm faces limitations in performance enhancement owing to the embedding value-vanishing problem that occurs during the learning process. To address this issue, we propose a Weighted Forwarding method using the GCN (WF-GCN) algorithm. The proposed method multiplies the embedding results by different weights for each hop layer during graph learning. By applying the WF-GCN algorithm, which adjusts the weights of each hop layer before forwarding to the next, nodes with many neighbors achieve higher embedding values. This approach facilitates the learning of more hop layers within the GCN framework. The efficacy of WF-GCN was demonstrated through its application to various datasets. On the MovieLens dataset, the implementation of WF-GCN in LightGCN resulted in significant performance improvements, with recall and NDCG increasing by up to +163.64% and +132.04%, respectively. Similarly, on the Last.FM dataset, LightGCN enhanced with WF-GCN showed substantial improvements, with the recall and NDCG metrics rising by up to +174.40% and +169.95%, respectively. Furthermore, the application of WF-GCN to Self-supervised Graph Learning (SGL) and Simple Graph Contrastive Learning (SimGCL) also demonstrated notable enhancements in both recall and NDCG across these datasets.
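A brief sketch of the weighted-forwarding idea in a LightGCN-style propagation loop (the specific weight values and the final layer averaging are illustrative assumptions):

    import torch

    def weighted_forward_propagation(emb, norm_adj, layer_weights):
        out, layer_embs = emb, [emb]
        for w in layer_weights:
            out = w * (norm_adj @ out)   # scale each hop's embeddings before forwarding
            layer_embs.append(out)
        return torch.stack(layer_embs).mean(dim=0)

    n_nodes, dim = 6, 16
    emb = torch.randn(n_nodes, dim)
    adj = torch.rand(n_nodes, n_nodes).round()
    norm_adj = adj / adj.sum(dim=1, keepdim=True).clamp_min(1)
    final_emb = weighted_forward_propagation(emb, norm_adj, layer_weights=[1.0, 0.8, 0.6])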
Funding: Project supported by the National Natural Science Foundation of China (Grant Nos. 12275179 and 11875042) and the Natural Science Foundation of Shanghai Municipality, China (Grant No. 21ZR1443900).
Abstract: In recent years, there has been growing interest in graph convolutional networks (GCN). However, existing GCN and its variants are predominantly based on simple graph or hypergraph structures, which restricts their ability to handle complex data correlations in practical applications. These limitations stem from the difficulty of establishing multiple hierarchies and acquiring adaptive weights for each of them. To address this issue, this paper introduces the recent concept of complex hypergraphs and constructs a versatile high-order, multi-level data correlation model, realized as a three-tier structure of complexes-hypergraphs-vertices. Specifically, we start by establishing hyperedge clusters on a foundational network, utilizing a second-order hypergraph structure to depict potential correlations. For this second-order structure, truncation methods are used to assess and generate a three-layer composite structure. During the construction of the composite structure, an adaptive learning strategy is implemented to merge correlations across different levels. We evaluate this model on several popular datasets and compare it with recent state-of-the-art methods. The comprehensive assessment shows that the proposed model surpasses the existing methods, particularly in modeling implicit data correlations: the node classification accuracies on the five public datasets Cora, Citeseer, Pubmed, Github Web ML, and Facebook are 86.1±0.33, 79.2±0.35, 83.1±0.46, 83.8±0.23, and 80.1±0.37, respectively. This indicates that our approach has advantages in handling datasets with implicit multi-level structures.
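As context for the hypergraph layer that underlies such models, a standard second-order hypergraph convolution (with unit hyperedge weights) can be written as follows; the three-tier complexes-hypergraphs-vertices construction and the adaptive level merging are beyond this assumed sketch:

    import torch

    def hypergraph_conv(X, H, theta):
        # X: (n_vertices, in_dim) features, H: (n_vertices, n_hyperedges) incidence matrix
        Dv = H.sum(dim=1).clamp_min(1)                    # vertex degrees
        De = H.sum(dim=0).clamp_min(1)                    # hyperedge degrees
        L = torch.diag(Dv.pow(-0.5)) @ H @ torch.diag(De.pow(-1.0)) @ H.t() @ torch.diag(Dv.pow(-0.5))
        return torch.relu(L @ X @ theta)

    X = torch.randn(6, 8)                                 # 6 vertices, 8-dim features
    H = torch.tensor([[1., 0., 1.], [1., 1., 0.], [0., 1., 1.],
                      [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
    out = hypergraph_conv(X, H, torch.randn(8, 4))        # -> (6, 4)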
Abstract: In convolutional neural networks, pooling methods are used to reduce both the size of the data and the number of parameters after the convolution layers. These methods reduce the computational cost of convolutional neural networks, making them more efficient. Maximum pooling, average pooling, and minimum pooling are generally used in convolutional neural networks; however, these pooling methods are not suitable for all datasets used in neural network applications. In this study, a pooling approach that is new to the literature is proposed to increase the efficiency and success rates of convolutional neural networks. This method, which we call MAM (Maximum Average Minimum) pooling, is more interactive than the traditional maximum, average, and minimum pooling methods and reduces data loss by calculating the more appropriate pixel value. The proposed MAM pooling method increases the performance of the neural network by calculating the optimal value during the training of convolutional neural networks. To determine the accuracy of the proposed MAM pooling method and compare it with other traditional pooling methods, training was carried out on the LeNet-5 model using the CIFAR-10, CIFAR-100, and MNIST datasets. According to the results obtained, the proposed MAM pooling method performed better than the maximum, average, and minimum pooling methods for all pool sizes on the three datasets.
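The abstract does not spell out how the "more appropriate" value is computed, so the sketch below takes one plausible reading as an assumption: a learnable softmax-weighted combination of the max-, average-, and min-pooled maps, tuned during training:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MAMPool2d(nn.Module):
        def __init__(self, kernel_size=2):
            super().__init__()
            self.k = kernel_size
            self.logits = nn.Parameter(torch.zeros(3))   # learned mixing weights

        def forward(self, x):
            mx = F.max_pool2d(x, self.k)
            av = F.avg_pool2d(x, self.k)
            mn = -F.max_pool2d(-x, self.k)                # min pooling via negation
            w = torch.softmax(self.logits, dim=0)
            return w[0] * mx + w[1] * av + w[2] * mn

    pool = MAMPool2d(kernel_size=2)
    pooled = pool(torch.randn(1, 3, 32, 32))              # -> (1, 3, 16, 16)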
Funding: Supported by the National Natural Science Foundation of China (Nos. 52274048 and 52374017), the Beijing Natural Science Foundation (No. 3222037), and the CNPC 14th five-year perspective fundamental research project (No. 2021DJ2104).
Abstract: The shale gas development process is complex in terms of its flow mechanisms, and the accuracy of production forecasting is influenced by both geological and engineering parameters. Therefore, to quantitatively evaluate the relative importance of model parameters for production forecasting performance, sensitivity analysis of the parameters is required; the parameters are then ranked according to their sensitivity coefficients for subsequent optimization scheme design. A data-driven global sensitivity analysis (GSA) method using convolutional neural networks (CNN) is proposed to identify the influential parameters in shale gas production. The CNN is trained on a large dataset, validated against numerical simulations, and utilized as a surrogate model for efficient sensitivity analysis. Our approach integrates the CNN with the Sobol' global sensitivity analysis method and presents three key scenarios for sensitivity analysis: analysis of the production stage as a whole, analysis by fixed time intervals, and analysis by decline rate. The findings underscore the predominant influence of reservoir thickness and well length on shale gas production. Furthermore, the temporal sensitivity analysis reveals dynamic shifts in parameter importance across the distinct production stages.
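A minimal sketch of surrogate-based Sobol' analysis is shown below, assuming the SALib package is available; the parameter names, bounds, and the toy predictor standing in for the trained CNN surrogate are all illustrative assumptions:

    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["reservoir_thickness", "well_length", "fracture_half_length"],
        "bounds": [[10, 80], [1000, 3000], [50, 300]],
    }

    def surrogate_predict(x):                 # placeholder for the trained CNN surrogate
        return 0.8 * x[:, 0] + 1.5e-3 * x[:, 1] + 0.2 * x[:, 2]

    params = saltelli.sample(problem, 1024)   # Saltelli sampling design
    production = surrogate_predict(params)    # cheap surrogate evaluations
    indices = sobol.analyze(problem, production)
    print(dict(zip(problem["names"], indices["ST"])))  # total-order Sobol' indices

Because the surrogate replaces the reservoir simulator, the many thousands of model evaluations required by the Sobol' sampling design become inexpensive.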