This study addresses the limitations of Transformer models in image feature extraction,particularly their lack of inductive bias for visual structures.Compared to Convolutional Neural Networks(CNNs),the Transformers a...This study addresses the limitations of Transformer models in image feature extraction,particularly their lack of inductive bias for visual structures.Compared to Convolutional Neural Networks(CNNs),the Transformers are more sensitive to different hyperparameters of optimizers,which leads to a lack of stability and slow convergence.To tackle these challenges,we propose the Convolution-based Efficient Transformer Image Feature Extraction Network(CEFormer)as an enhancement of the Transformer architecture.Our model incorporates E-Attention,depthwise separable convolution,and dilated convolution to introduce crucial inductive biases,such as translation invariance,locality,and scale invariance,into the Transformer framework.Additionally,we implement a lightweight convolution module to process the input images,resulting in faster convergence and improved stability.This results in an efficient convolution combined Transformer image feature extraction network.Experimental results on the ImageNet1k Top-1 dataset demonstrate that the proposed network achieves better accuracy while maintaining high computational speed.It achieves up to 85.0%accuracy across various model sizes on image classification,outperforming various baseline models.When integrated into the Mask Region-ConvolutionalNeuralNetwork(R-CNN)framework as a backbone network,CEFormer outperforms other models and achieves the highest mean Average Precision(mAP)scores.This research presents a significant advancement in Transformer-based image feature extraction,balancing performance and computational efficiency.展开更多
Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in t...Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated casual convolution, causal convolution leads to the receptive fields of outputs being concentrated in the earlier part of the input sequence, whereas the recent input information will be severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND can reduce prediction mean squared error by 7.3% and save runtime, compared with state-of-the-art models and vanilla TCN.展开更多
Aiming at the problems of low accuracy and slow convergence speed of current intrusion detection models,SpiralConvolution is combined with Long Short-Term Memory Network to construct a new intrusion detection model.Th...Aiming at the problems of low accuracy and slow convergence speed of current intrusion detection models,SpiralConvolution is combined with Long Short-Term Memory Network to construct a new intrusion detection model.The dataset is first preprocessed using solo thermal encoding and normalization functions.Then the spiral convolution-Long Short-Term Memory Network model is constructed,which consists of spiral convolution,a two-layer long short-term memory network,and a classifier.It is shown through experiments that the model is characterized by high accuracy,small model computation,and fast convergence speed relative to previous deep learning models.The model uses a new neural network to achieve fast and accurate network traffic intrusion detection.The model in this paper achieves 0.9706 and 0.8432 accuracy rates on the NSL-KDD dataset and the UNSWNB-15 dataset under five classifications and ten classes,respectively.展开更多
Recommendation Information Systems(RIS)are pivotal in helping users in swiftly locating desired content from the vast amount of information available on the Internet.Graph Convolution Network(GCN)algorithms have been ...Recommendation Information Systems(RIS)are pivotal in helping users in swiftly locating desired content from the vast amount of information available on the Internet.Graph Convolution Network(GCN)algorithms have been employed to implement the RIS efficiently.However,the GCN algorithm faces limitations in terms of performance enhancement owing to the due to the embedding value-vanishing problem that occurs during the learning process.To address this issue,we propose a Weighted Forwarding method using the GCN(WF-GCN)algorithm.The proposed method involves multiplying the embedding results with different weights for each hop layer during graph learning.By applying the WF-GCN algorithm,which adjusts weights for each hop layer before forwarding to the next,nodes with many neighbors achieve higher embedding values.This approach facilitates the learning of more hop layers within the GCN framework.The efficacy of the WF-GCN was demonstrated through its application to various datasets.In the MovieLens dataset,the implementation of WF-GCN in LightGCN resulted in significant performance improvements,with recall and NDCG increasing by up to+163.64%and+132.04%,respectively.Similarly,in the Last.FM dataset,LightGCN using WF-GCN enhanced with WF-GCN showed substantial improvements,with the recall and NDCG metrics rising by up to+174.40%and+169.95%,respectively.Furthermore,the application of WF-GCN to Self-supervised Graph Learning(SGL)and Simple Graph Contrastive Learning(SimGCL)also demonstrated notable enhancements in both recall and NDCG across these datasets.展开更多
Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to sca...Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions,resulting in substantial distortions.Moreover,most of the existing GCN models are shallow structures,which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures.To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations,we propose the Hyperbolic Deep Graph Convolutional Neural Network(HDGCNN),an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space.In HDGCNN,we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space.Additionally,we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework.In addition,we present a neighborhood aggregation method that combines initial structural featureswith hyperbolic attention coefficients.Through the above methods,HDGCNN effectively leverages both the structural features and node features of graph data,enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs.Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-ofthe-art GCNs in node classification and link prediction tasks,even when utilizing low-dimensional embedding representations.Furthermore,when compared to shallow hyperbolic graph convolutional neural network models,HDGCNN exhibits notable advantages and performance enhancements.展开更多
Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware reso...Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, depthwise dilated convolution in DDSC layer effectively expands the field of view of filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses parallel multi-resolution branches architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.展开更多
Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments.However,a robust and e...Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments.However,a robust and efficient geomechanical upscaling technique for heterogeneous geological reservoirs is lacking to advance the applications of three-dimensional(3D)reservoir-scale geomechanical simulation considering detailed geological heterogeneities.Here,we develop convolutional neural network(CNN)proxies that reproduce the anisotropic nonlinear geomechanical response caused by lithological heterogeneity,and compute upscaled geomechanical properties from CNN proxies.The CNN proxies are trained using a large dataset of randomly generated spatially correlated sand-shale realizations as inputs and simulation results of their macroscopic geomechanical response as outputs.The trained CNN models can provide the upscaled shear strength(R^(2)>0.949),stress-strain behavior(R^(2)>0.925),and volumetric strain changes(R^(2)>0.958)that highly agree with the numerical simulation results while saving over two orders of magnitude of computational time.This is a major advantage in computing the upscaled geomechanical properties directly from geological realizations without the need to perform local numerical simulations to obtain the geomechanical response.The proposed CNN proxybased upscaling technique has the ability to(1)bridge the gap between the fine-scale geocellular models considering geological uncertainties and computationally efficient geomechanical models used to assess the geomechanical risks of large-scale subsurface development,and(2)improve the efficiency of numerical upscaling techniques that rely on local numerical simulations,leading to significantly increased computational time for uncertainty quantification using numerous geological realizations.展开更多
The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method in...The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.展开更多
Gas chromatography-mass spectrometry(GC-MS)is an extremely important analytical technique that is widely used in organic geochemistry.It is the only approach to capture biomarker features of organic matter and provide...Gas chromatography-mass spectrometry(GC-MS)is an extremely important analytical technique that is widely used in organic geochemistry.It is the only approach to capture biomarker features of organic matter and provides the key evidence for oil-source correlation and thermal maturity determination.However,the conventional way of processing and interpreting the mass chromatogram is both timeconsuming and labor-intensive,which increases the research cost and restrains extensive applications of this method.To overcome this limitation,a correlation model is developed based on the convolution neural network(CNN)to link the mass chromatogram and biomarker features of samples from the Triassic Yanchang Formation,Ordos Basin,China.In this way,the mass chromatogram can be automatically interpreted.This research first performs dimensionality reduction for 15 biomarker parameters via the factor analysis and then quantifies the biomarker features using two indexes(i.e.MI and PMI)that represent the organic matter thermal maturity and parent material type,respectively.Subsequently,training,interpretation,and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter setting,with the mass chromatogram used as the input and the obtained MI and PMI values for supervision(label).The optimized model presents high accuracy in automatically interpreting the mass chromatogram,with R2values typically above 0.85 and0.80 for the thermal maturity and parent material interpretation results,respectively.The significance of this research is twofold:(i)developing an efficient technique for geochemical research;(ii)more importantly,demonstrating the potential of artificial intelligence in organic geochemistry and providing vital references for future related studies.展开更多
Since chemical processes are highly non-linear and multiscale,it is vital to deeply mine the multiscale coupling relationships embedded in the massive process data for the prediction and anomaly tracing of crucial pro...Since chemical processes are highly non-linear and multiscale,it is vital to deeply mine the multiscale coupling relationships embedded in the massive process data for the prediction and anomaly tracing of crucial process parameters and production indicators.While the integrated method of adaptive signal decomposition combined with time series models could effectively predict process variables,it does have limitations in capturing the high-frequency detail of the operation state when applied to complex chemical processes.In light of this,a novel Multiscale Multi-radius Multi-step Convolutional Neural Network(Msrt Net)is proposed for mining spatiotemporal multiscale information.First,the industrial data from the Fluid Catalytic Cracking(FCC)process decomposition using Complete Ensemble Empirical Mode Decomposition with Adaptive Noise(CEEMDAN)extract the multi-energy scale information of the feature subset.Then,convolution kernels with varying stride and padding structures are established to decouple the long-period operation process information encapsulated within the multi-energy scale data.Finally,a reconciliation network is trained to reconstruct the multiscale prediction results and obtain the final output.Msrt Net is initially assessed for its capability to untangle the spatiotemporal multiscale relationships among variables in the Tennessee Eastman Process(TEP).Subsequently,the performance of Msrt Net is evaluated in predicting product yield for a 2.80×10^(6) t/a FCC unit,taking diesel and gasoline yield as examples.In conclusion,Msrt Net can decouple and effectively extract spatiotemporal multiscale information from chemical process data and achieve a approximately reduction of 30%in prediction error compared to other time-series models.Furthermore,its robustness and transferability underscore its promising potential for broader applications.展开更多
With the successful application and breakthrough of deep learning technology in image segmentation,there has been continuous development in the field of seismic facies interpretation using convolutional neural network...With the successful application and breakthrough of deep learning technology in image segmentation,there has been continuous development in the field of seismic facies interpretation using convolutional neural networks.These intelligent and automated methods significantly reduce manual labor,particularly in the laborious task of manually labeling seismic facies.However,the extensive demand for training data imposes limitations on their wider application.To overcome this challenge,we adopt the UNet architecture as the foundational network structure for seismic facies classification,which has demonstrated effective segmentation results even with small-sample training data.Additionally,we integrate spatial pyramid pooling and dilated convolution modules into the network architecture to enhance the perception of spatial information across a broader range.The seismic facies classification test on the public data from the F3 block verifies the superior performance of our proposed improved network structure in delineating seismic facies boundaries.Comparative analysis against the traditional UNet model reveals that our method achieves more accurate predictive classification results,as evidenced by various evaluation metrics for image segmentation.Obviously,the classification accuracy reaches an impressive 96%.Furthermore,the results of seismic facies classification in the seismic slice dimension provide further confirmation of the superior performance of our proposed method,which accurately defines the range of different seismic facies.This approach holds significant potential for analyzing geological patterns and extracting valuable depositional information.展开更多
Extracting valuable information frombiomedical texts is one of the current research hotspots of concern to a wide range of scholars.The biomedical corpus contains numerous complex long sentences and overlapping relati...Extracting valuable information frombiomedical texts is one of the current research hotspots of concern to a wide range of scholars.The biomedical corpus contains numerous complex long sentences and overlapping relational triples,making most generalized domain joint modeling methods difficult to apply effectively in this field.For a complex semantic environment in biomedical texts,in this paper,we propose a novel perspective to perform joint entity and relation extraction;existing studies divide the relation triples into several steps or modules.However,the three elements in the relation triples are interdependent and inseparable,so we regard joint extraction as a tripartite classification problem.At the same time,fromthe perspective of triple classification,we design amulti-granularity 2D convolution to refine the word pair table and better utilize the dependencies between biomedical word pairs.Finally,we use a biaffine predictor to assist in predicting the labels of word pairs for relation extraction.Our model(MCTPL)Multi-granularity Convolutional Tokens Pairs of Labeling better utilizes the elements of triples and improves the ability to extract overlapping triples compared to previous approaches.Finally,we evaluated our model on two publicly accessible datasets.The experimental results show that our model’s ability to extract relation triples on the CPI dataset improves the F1 score by 2.34%compared to the current optimal model.On the DDI dataset,the F1 value improves the F1 value by 1.68%compared to the current optimal model.Our model achieved state-of-the-art performance compared to other baseline models in biomedical text entity relation extraction.展开更多
Dear Editor,This letter presents an organoid segmentation model based on multi-axis attention with convolution parallel block.MACPNet adeptly captures dynamic dependencies within bright-field microscopy images,improvi...Dear Editor,This letter presents an organoid segmentation model based on multi-axis attention with convolution parallel block.MACPNet adeptly captures dynamic dependencies within bright-field microscopy images,improving global modeling beyond conventional UNet.展开更多
The relationship between users and items,which cannot be recovered by traditional techniques,can be extracted by the recommendation algorithm based on the graph convolution network.The current simple linear combinatio...The relationship between users and items,which cannot be recovered by traditional techniques,can be extracted by the recommendation algorithm based on the graph convolution network.The current simple linear combination of these algorithms may not be sufficient to extract the complex structure of user interaction data.This paper presents a new approach to address such issues,utilizing the graph convolution network to extract association relations.The proposed approach mainly includes three modules:Embedding layer,forward propagation layer,and score prediction layer.The embedding layer models users and items according to their interaction information and generates initial feature vectors as input for the forward propagation layer.The forward propagation layer designs two parallel graph convolution networks with self-connections,which extract higher-order association relevance from users and items separately by multi-layer graph convolution.Furthermore,the forward propagation layer integrates the attention factor to assign different weights among the hop neighbors of the graph convolution network fusion,capturing more comprehensive association relevance between users and items as input for the score prediction layer.The score prediction layer introduces MLP(multi-layer perceptron)to conduct non-linear feature interaction between users and items,respectively.Finally,the prediction score of users to items is obtained.The recall rate and normalized discounted cumulative gain were used as evaluation indexes.The proposed approach effectively integrates higher-order information in user entries,and experimental analysis demonstrates its superiority over the existing algorithms.展开更多
The recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes.Various technologies,such as augmented reality-driven scene integrat...The recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes.Various technologies,such as augmented reality-driven scene integration,robotic navigation,autonomous driving,and guided tour systems,heavily rely on this type of scene comprehension.This paper presents a novel segmentation approach based on the UNet network model,aimed at recognizing multiple objects within an image.The methodology begins with the acquisition and preprocessing of the image,followed by segmentation using the fine-tuned UNet architecture.Afterward,we use an annotation tool to accurately label the segmented regions.Upon labeling,significant features are extracted from these segmented objects,encompassing KAZE(Accelerated Segmentation and Extraction)features,energy-based edge detection,frequency-based,and blob characteristics.For the classification stage,a convolution neural network(CNN)is employed.This comprehensive methodology demonstrates a robust framework for achieving accurate and efficient recognition of multiple objects in images.The experimental results,which include complex object datasets like MSRC-v2 and PASCAL-VOC12,have been documented.After analyzing the experimental results,it was found that the PASCAL-VOC12 dataset achieved an accuracy rate of 95%,while the MSRC-v2 dataset achieved an accuracy of 89%.The evaluation performed on these diverse datasets highlights a notably impressive level of performance.展开更多
The telecommunications industry is becoming increasingly aware of potential subscriber churn as a result of the growing popularity of smartphones in the mobile Internet era,the quick development of telecommunications ...The telecommunications industry is becoming increasingly aware of potential subscriber churn as a result of the growing popularity of smartphones in the mobile Internet era,the quick development of telecommunications services,the implementation of the number portability policy,and the intensifying competition among operators.At the same time,users'consumption preferences and choices are evolving.Excellent churn prediction models must be created in order to accurately predict the churn tendency,since keeping existing customers is far less expensive than acquiring new ones.But conventional or learning-based algorithms can only go so far into a single subscriber's data;they cannot take into consideration changes in a subscriber's subscription and ignore the coupling and correlation between various features.Additionally,the current churn prediction models have a high computational burden,a fuzzy weight distribution,and significant resource economic costs.The prediction algorithms involving network models currently in use primarily take into account the private information shared between users with text and pictures,ignoring the reference value supplied by other users with the same package.This work suggests a user churn prediction model based on Graph Attention Convolutional Neural Network(GAT-CNN)to address the aforementioned issues.The main contributions of this paper are as follows:Firstly,we present a three-tiered hierarchical cloud-edge cooperative framework that increases the volume of user feature input by means of two aggregations at the device,edge,and cloud layers.Second,we extend the use of users'own data by introducing self-attention and graph convolution models to track the relative changes of both users and packages simultaneously.Lastly,we build an integrated offline-online system for churn prediction based on the strengths of the two models,and we experimentally validate the efficacy of cloudside collaborative training and inference.In summary,the churn prediction model based on Graph Attention Convolutional Neural Network presented in this paper can effectively address the drawbacks of conventional algorithms and offer telecom operators crucial decision support in developing subscriber retention strategies and cutting operational expenses.展开更多
Multi-label image classification is recognized as an important task within the field of computer vision,a discipline that has experienced a significant escalation in research endeavors in recent years.The widespread a...Multi-label image classification is recognized as an important task within the field of computer vision,a discipline that has experienced a significant escalation in research endeavors in recent years.The widespread adoption of convolutional neural networks(CNNs)has catalyzed the remarkable success of architectures such as ResNet-101 within the domain of image classification.However,inmulti-label image classification tasks,it is crucial to consider the correlation between labels.In order to improve the accuracy and performance of multi-label classification and fully combine visual and semantic features,many existing studies use graph convolutional networks(GCN)for modeling.Object detection and multi-label image classification exhibit a degree of conceptual overlap;however,the integration of these two tasks within a unified framework has been relatively underexplored in the existing literature.In this paper,we come up with Object-GCN framework,a model combining object detection network YOLOv5 and graph convolutional network,and we carry out a thorough experimental analysis using a range of well-established public datasets.The designed framework Object-GCN achieves significantly better performance than existing studies in public datasets COCO2014,VOC2007,VOC2012.The final results achieved are 86.9%,96.7%,and 96.3%mean Average Precision(mAP)across the three datasets.展开更多
Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events,we propose an Independent Recurr...Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events,we propose an Independent Recurrent Temporal Graph Convolution Networks(IndRT-GCNets)framework to efficiently and accurately capture event attribute information.The framework models the knowledge graph sequences to learn the evolutionary represen-tations of entities and relations within each period.Firstly,by utilizing the temporal graph convolution module in the evolutionary representation unit,the framework captures the structural dependency relationships within the knowledge graph in each period.Meanwhile,to achieve better event representation and establish effective correlations,an independent recurrent neural network is employed to implement auto-regressive modeling.Furthermore,static attributes of entities in the entity-relation events are constrained andmerged using a static graph constraint to obtain optimal entity representations.Finally,the evolution of entity and relation representations is utilized to predict events in the next subsequent step.On multiple real-world datasets such as Freebase13(FB13),Freebase 15k(FB15K),WordNet11(WN11),WordNet18(WN18),FB15K-237,WN18RR,YAGO3-10,and Nell-995,the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks,which validates the effectiveness and robustness.展开更多
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software w...The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.展开更多
The shale gas development process is complex in terms of its flow mechanisms and the accuracy of the production forecasting is influenced by geological parameters and engineering parameters.Therefore,to quantitatively...The shale gas development process is complex in terms of its flow mechanisms and the accuracy of the production forecasting is influenced by geological parameters and engineering parameters.Therefore,to quantitatively evaluate the relative importance of model parameters on the production forecasting performance,sensitivity analysis of parameters is required.The parameters are ranked according to the sensitivity coefficients for the subsequent optimization scheme design.A data-driven global sensitivity analysis(GSA)method using convolutional neural networks(CNN)is proposed to identify the influencing parameters in shale gas production.The CNN is trained on a large dataset,validated against numerical simulations,and utilized as a surrogate model for efficient sensitivity analysis.Our approach integrates CNN with the Sobol'global sensitivity analysis method,presenting three key scenarios for sensitivity analysis:analysis of the production stage as a whole,analysis by fixed time intervals,and analysis by declining rate.The findings underscore the predominant influence of reservoir thickness and well length on shale gas production.Furthermore,the temporal sensitivity analysis reveals the dynamic shifts in parameter importance across the distinct production stages.展开更多
基金Support by Sichuan Science and Technology Program(2021YFQ0003,2023YFSY 0026,2023YFH0004).
文摘This study addresses the limitations of Transformer models in image feature extraction,particularly their lack of inductive bias for visual structures.Compared to Convolutional Neural Networks(CNNs),the Transformers are more sensitive to different hyperparameters of optimizers,which leads to a lack of stability and slow convergence.To tackle these challenges,we propose the Convolution-based Efficient Transformer Image Feature Extraction Network(CEFormer)as an enhancement of the Transformer architecture.Our model incorporates E-Attention,depthwise separable convolution,and dilated convolution to introduce crucial inductive biases,such as translation invariance,locality,and scale invariance,into the Transformer framework.Additionally,we implement a lightweight convolution module to process the input images,resulting in faster convergence and improved stability.This results in an efficient convolution combined Transformer image feature extraction network.Experimental results on the ImageNet1k Top-1 dataset demonstrate that the proposed network achieves better accuracy while maintaining high computational speed.It achieves up to 85.0%accuracy across various model sizes on image classification,outperforming various baseline models.When integrated into the Mask Region-ConvolutionalNeuralNetwork(R-CNN)framework as a backbone network,CEFormer outperforms other models and achieves the highest mean Average Precision(mAP)scores.This research presents a significant advancement in Transformer-based image feature extraction,balancing performance and computational efficiency.
基金supported by the National Key Research and Development Program of China(No.2018YFB2101300)the National Natural Science Foundation of China(Grant No.61871186)the Dean’s Fund of Engineering Research Center of Software/Hardware Co-Design Technology and Application,Ministry of Education(East China Normal University).
文摘Time series forecasting plays an important role in various fields, such as energy, finance, transport, and weather. Temporal convolutional networks (TCNs) based on dilated causal convolution have been widely used in time series forecasting. However, two problems weaken the performance of TCNs. One is that in dilated casual convolution, causal convolution leads to the receptive fields of outputs being concentrated in the earlier part of the input sequence, whereas the recent input information will be severely lost. The other is that the distribution shift problem in time series has not been adequately solved. To address the first problem, we propose a subsequence-based dilated convolution method (SDC). By using multiple convolutional filters to convolve elements of neighboring subsequences, the method extracts temporal features from a growing receptive field via a growing subsequence rather than a single element. Ultimately, the receptive field of each output element can cover the whole input sequence. To address the second problem, we propose a difference and compensation method (DCM). The method reduces the discrepancies between and within the input sequences by difference operations and then compensates the outputs for the information lost due to difference operations. Based on SDC and DCM, we further construct a temporal subsequence-based convolutional network with difference (TSCND) for time series forecasting. The experimental results show that TSCND can reduce prediction mean squared error by 7.3% and save runtime, compared with state-of-the-art models and vanilla TCN.
基金the Gansu University of Political Science and Law Key Research Funding Project in 2018(GZF2018XZDLW20)Gansu Provincial Science and Technology Plan Project(Technology Innovation Guidance Plan)(20CX9ZA072).
文摘Aiming at the problems of low accuracy and slow convergence speed of current intrusion detection models,SpiralConvolution is combined with Long Short-Term Memory Network to construct a new intrusion detection model.The dataset is first preprocessed using solo thermal encoding and normalization functions.Then the spiral convolution-Long Short-Term Memory Network model is constructed,which consists of spiral convolution,a two-layer long short-term memory network,and a classifier.It is shown through experiments that the model is characterized by high accuracy,small model computation,and fast convergence speed relative to previous deep learning models.The model uses a new neural network to achieve fast and accurate network traffic intrusion detection.The model in this paper achieves 0.9706 and 0.8432 accuracy rates on the NSL-KDD dataset and the UNSWNB-15 dataset under five classifications and ten classes,respectively.
基金This work was supported by the Kyonggi University Research Grant 2022.
文摘Recommendation Information Systems(RIS)are pivotal in helping users in swiftly locating desired content from the vast amount of information available on the Internet.Graph Convolution Network(GCN)algorithms have been employed to implement the RIS efficiently.However,the GCN algorithm faces limitations in terms of performance enhancement owing to the due to the embedding value-vanishing problem that occurs during the learning process.To address this issue,we propose a Weighted Forwarding method using the GCN(WF-GCN)algorithm.The proposed method involves multiplying the embedding results with different weights for each hop layer during graph learning.By applying the WF-GCN algorithm,which adjusts weights for each hop layer before forwarding to the next,nodes with many neighbors achieve higher embedding values.This approach facilitates the learning of more hop layers within the GCN framework.The efficacy of the WF-GCN was demonstrated through its application to various datasets.In the MovieLens dataset,the implementation of WF-GCN in LightGCN resulted in significant performance improvements,with recall and NDCG increasing by up to+163.64%and+132.04%,respectively.Similarly,in the Last.FM dataset,LightGCN using WF-GCN enhanced with WF-GCN showed substantial improvements,with the recall and NDCG metrics rising by up to+174.40%and+169.95%,respectively.Furthermore,the application of WF-GCN to Self-supervised Graph Learning(SGL)and Simple Graph Contrastive Learning(SimGCL)also demonstrated notable enhancements in both recall and NDCG across these datasets.
基金supported by the National Natural Science Foundation of China-China State Railway Group Co.,Ltd.Railway Basic Research Joint Fund (Grant No.U2268217)the Scientific Funding for China Academy of Railway Sciences Corporation Limited (No.2021YJ183).
文摘Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions,resulting in substantial distortions.Moreover,most of the existing GCN models are shallow structures,which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures.To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations,we propose the Hyperbolic Deep Graph Convolutional Neural Network(HDGCNN),an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space.In HDGCNN,we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space.Additionally,we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework.In addition,we present a neighborhood aggregation method that combines initial structural featureswith hyperbolic attention coefficients.Through the above methods,HDGCNN effectively leverages both the structural features and node features of graph data,enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs.Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-ofthe-art GCNs in node classification and link prediction tasks,even when utilizing low-dimensional embedding representations.Furthermore,when compared to shallow hyperbolic graph convolutional neural network models,HDGCNN exhibits notable advantages and performance enhancements.
文摘Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 employs a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn imaging features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, depthwise dilated convolution in DDSC layer effectively expands the field of view of filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses parallel multi-resolution branches architecture to process the input feature map in order to extract the multi-scale feature information of the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining the network performance compared to the MobileNetV1 baseline.
基金financial support provided by the Future Energy System at University of Alberta and NSERC Discovery Grant RGPIN-2023-04084。
文摘Geomechanical assessment using coupled reservoir-geomechanical simulation is becoming increasingly important for analyzing the potential geomechanical risks in subsurface geological developments.However,a robust and efficient geomechanical upscaling technique for heterogeneous geological reservoirs is lacking to advance the applications of three-dimensional(3D)reservoir-scale geomechanical simulation considering detailed geological heterogeneities.Here,we develop convolutional neural network(CNN)proxies that reproduce the anisotropic nonlinear geomechanical response caused by lithological heterogeneity,and compute upscaled geomechanical properties from CNN proxies.The CNN proxies are trained using a large dataset of randomly generated spatially correlated sand-shale realizations as inputs and simulation results of their macroscopic geomechanical response as outputs.The trained CNN models can provide the upscaled shear strength(R^(2)>0.949),stress-strain behavior(R^(2)>0.925),and volumetric strain changes(R^(2)>0.958)that highly agree with the numerical simulation results while saving over two orders of magnitude of computational time.This is a major advantage in computing the upscaled geomechanical properties directly from geological realizations without the need to perform local numerical simulations to obtain the geomechanical response.The proposed CNN proxybased upscaling technique has the ability to(1)bridge the gap between the fine-scale geocellular models considering geological uncertainties and computationally efficient geomechanical models used to assess the geomechanical risks of large-scale subsurface development,and(2)improve the efficiency of numerical upscaling techniques that rely on local numerical simulations,leading to significantly increased computational time for uncertainty quantification using numerous geological realizations.
基金Science and Technology Funds from the Liaoning Education Department(Serial Number:LJKZ0104).
文摘The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.
基金financially supported by China Postdoctoral Science Foundation(Grant No.2023M730365)Natural Science Foundation of Hubei Province of China(Grant No.2023AFB232)。
文摘Gas chromatography-mass spectrometry(GC-MS)is an extremely important analytical technique that is widely used in organic geochemistry.It is the only approach to capture biomarker features of organic matter and provides the key evidence for oil-source correlation and thermal maturity determination.However,the conventional way of processing and interpreting the mass chromatogram is both timeconsuming and labor-intensive,which increases the research cost and restrains extensive applications of this method.To overcome this limitation,a correlation model is developed based on the convolution neural network(CNN)to link the mass chromatogram and biomarker features of samples from the Triassic Yanchang Formation,Ordos Basin,China.In this way,the mass chromatogram can be automatically interpreted.This research first performs dimensionality reduction for 15 biomarker parameters via the factor analysis and then quantifies the biomarker features using two indexes(i.e.MI and PMI)that represent the organic matter thermal maturity and parent material type,respectively.Subsequently,training,interpretation,and validation are performed multiple times using different CNN models to optimize the model structure and hyper-parameter setting,with the mass chromatogram used as the input and the obtained MI and PMI values for supervision(label).The optimized model presents high accuracy in automatically interpreting the mass chromatogram,with R2values typically above 0.85 and0.80 for the thermal maturity and parent material interpretation results,respectively.The significance of this research is twofold:(i)developing an efficient technique for geochemical research;(ii)more importantly,demonstrating the potential of artificial intelligence in organic geochemistry and providing vital references for future related studies.
文摘Since chemical processes are highly non-linear and multiscale,it is vital to deeply mine the multiscale coupling relationships embedded in the massive process data for the prediction and anomaly tracing of crucial process parameters and production indicators.While the integrated method of adaptive signal decomposition combined with time series models could effectively predict process variables,it does have limitations in capturing the high-frequency detail of the operation state when applied to complex chemical processes.In light of this,a novel Multiscale Multi-radius Multi-step Convolutional Neural Network(Msrt Net)is proposed for mining spatiotemporal multiscale information.First,the industrial data from the Fluid Catalytic Cracking(FCC)process decomposition using Complete Ensemble Empirical Mode Decomposition with Adaptive Noise(CEEMDAN)extract the multi-energy scale information of the feature subset.Then,convolution kernels with varying stride and padding structures are established to decouple the long-period operation process information encapsulated within the multi-energy scale data.Finally,a reconciliation network is trained to reconstruct the multiscale prediction results and obtain the final output.Msrt Net is initially assessed for its capability to untangle the spatiotemporal multiscale relationships among variables in the Tennessee Eastman Process(TEP).Subsequently,the performance of Msrt Net is evaluated in predicting product yield for a 2.80×10^(6) t/a FCC unit,taking diesel and gasoline yield as examples.In conclusion,Msrt Net can decouple and effectively extract spatiotemporal multiscale information from chemical process data and achieve a approximately reduction of 30%in prediction error compared to other time-series models.Furthermore,its robustness and transferability underscore its promising potential for broader applications.
基金funded by the Fundamental Research Project of CNPC Geophysical Key Lab(2022DQ0604-4)the Strategic Cooperation Technology Projects of China National Petroleum Corporation and China University of Petroleum-Beijing(ZLZX 202003)。
文摘With the successful application and breakthrough of deep learning technology in image segmentation,there has been continuous development in the field of seismic facies interpretation using convolutional neural networks.These intelligent and automated methods significantly reduce manual labor,particularly in the laborious task of manually labeling seismic facies.However,the extensive demand for training data imposes limitations on their wider application.To overcome this challenge,we adopt the UNet architecture as the foundational network structure for seismic facies classification,which has demonstrated effective segmentation results even with small-sample training data.Additionally,we integrate spatial pyramid pooling and dilated convolution modules into the network architecture to enhance the perception of spatial information across a broader range.The seismic facies classification test on the public data from the F3 block verifies the superior performance of our proposed improved network structure in delineating seismic facies boundaries.Comparative analysis against the traditional UNet model reveals that our method achieves more accurate predictive classification results,as evidenced by various evaluation metrics for image segmentation.Obviously,the classification accuracy reaches an impressive 96%.Furthermore,the results of seismic facies classification in the seismic slice dimension provide further confirmation of the superior performance of our proposed method,which accurately defines the range of different seismic facies.This approach holds significant potential for analyzing geological patterns and extracting valuable depositional information.
基金supported by the National Natural Science Foundation of China(Nos.62002206 and 62202373)the open topic of the Green Development Big Data Decision-Making Key Laboratory(DM202003).
文摘Extracting valuable information frombiomedical texts is one of the current research hotspots of concern to a wide range of scholars.The biomedical corpus contains numerous complex long sentences and overlapping relational triples,making most generalized domain joint modeling methods difficult to apply effectively in this field.For a complex semantic environment in biomedical texts,in this paper,we propose a novel perspective to perform joint entity and relation extraction;existing studies divide the relation triples into several steps or modules.However,the three elements in the relation triples are interdependent and inseparable,so we regard joint extraction as a tripartite classification problem.At the same time,fromthe perspective of triple classification,we design amulti-granularity 2D convolution to refine the word pair table and better utilize the dependencies between biomedical word pairs.Finally,we use a biaffine predictor to assist in predicting the labels of word pairs for relation extraction.Our model(MCTPL)Multi-granularity Convolutional Tokens Pairs of Labeling better utilizes the elements of triples and improves the ability to extract overlapping triples compared to previous approaches.Finally,we evaluated our model on two publicly accessible datasets.The experimental results show that our model’s ability to extract relation triples on the CPI dataset improves the F1 score by 2.34%compared to the current optimal model.On the DDI dataset,the F1 value improves the F1 value by 1.68%compared to the current optimal model.Our model achieved state-of-the-art performance compared to other baseline models in biomedical text entity relation extraction.
基金supported by the Xinjiang Tianchi Talents Program(E33B9401)the Natural Science Foundation of Xinjiang Uygur Autonomous Region(2023D01E15)+1 种基金the National Natural Science Foundation of China(62302495)the National Natural Science Foundation of China(62373348)。
文摘Dear Editor,This letter presents an organoid segmentation model based on multi-axis attention with convolution parallel block.MACPNet adeptly captures dynamic dependencies within bright-field microscopy images,improving global modeling beyond conventional UNet.
基金supported by the Fundamental Research Funds for Higher Education Institutions of Heilongjiang Province(145209126)the Heilongjiang Province Higher Education Teaching Reform Project under Grant No.SJGY20200770.
文摘The relationship between users and items,which cannot be recovered by traditional techniques,can be extracted by the recommendation algorithm based on the graph convolution network.The current simple linear combination of these algorithms may not be sufficient to extract the complex structure of user interaction data.This paper presents a new approach to address such issues,utilizing the graph convolution network to extract association relations.The proposed approach mainly includes three modules:Embedding layer,forward propagation layer,and score prediction layer.The embedding layer models users and items according to their interaction information and generates initial feature vectors as input for the forward propagation layer.The forward propagation layer designs two parallel graph convolution networks with self-connections,which extract higher-order association relevance from users and items separately by multi-layer graph convolution.Furthermore,the forward propagation layer integrates the attention factor to assign different weights among the hop neighbors of the graph convolution network fusion,capturing more comprehensive association relevance between users and items as input for the score prediction layer.The score prediction layer introduces MLP(multi-layer perceptron)to conduct non-linear feature interaction between users and items,respectively.Finally,the prediction score of users to items is obtained.The recall rate and normalized discounted cumulative gain were used as evaluation indexes.The proposed approach effectively integrates higher-order information in user entries,and experimental analysis demonstrates its superiority over the existing algorithms.
基金supported by the MSIT(Ministry of Science and ICT),Korea,under the ICAN(ICT Challenge and Advanced Network of HRD)Program(IITP-2024-RS-2022-00156326)supervised by the IITP(Institute of Information&Communications Technology Planning&Evaluation)+2 种基金The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program Grant Code(NU/GP/SERC/13/30)funding for this work was provided by Princess Nourah bint Abdulrahman University Researchers Supporting Project Number(PNURSP2024R410)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University,Arar,KSA for funding this research work through the Project Number“NBU-FFR-2024-231-06”.
文摘The recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes.Various technologies,such as augmented reality-driven scene integration,robotic navigation,autonomous driving,and guided tour systems,heavily rely on this type of scene comprehension.This paper presents a novel segmentation approach based on the UNet network model,aimed at recognizing multiple objects within an image.The methodology begins with the acquisition and preprocessing of the image,followed by segmentation using the fine-tuned UNet architecture.Afterward,we use an annotation tool to accurately label the segmented regions.Upon labeling,significant features are extracted from these segmented objects,encompassing KAZE(Accelerated Segmentation and Extraction)features,energy-based edge detection,frequency-based,and blob characteristics.For the classification stage,a convolution neural network(CNN)is employed.This comprehensive methodology demonstrates a robust framework for achieving accurate and efficient recognition of multiple objects in images.The experimental results,which include complex object datasets like MSRC-v2 and PASCAL-VOC12,have been documented.After analyzing the experimental results,it was found that the PASCAL-VOC12 dataset achieved an accuracy rate of 95%,while the MSRC-v2 dataset achieved an accuracy of 89%.The evaluation performed on these diverse datasets highlights a notably impressive level of performance.
基金supported by National Key R&D Program of China(No.2022YFB3104500)Natural Science Foundation of Jiangsu Province(No.BK20222013)Scientific Research Foundation of Nanjing Institute of Technology(No.3534113223036)。
文摘The telecommunications industry is becoming increasingly aware of potential subscriber churn as a result of the growing popularity of smartphones in the mobile Internet era,the quick development of telecommunications services,the implementation of the number portability policy,and the intensifying competition among operators.At the same time,users'consumption preferences and choices are evolving.Excellent churn prediction models must be created in order to accurately predict the churn tendency,since keeping existing customers is far less expensive than acquiring new ones.But conventional or learning-based algorithms can only go so far into a single subscriber's data;they cannot take into consideration changes in a subscriber's subscription and ignore the coupling and correlation between various features.Additionally,the current churn prediction models have a high computational burden,a fuzzy weight distribution,and significant resource economic costs.The prediction algorithms involving network models currently in use primarily take into account the private information shared between users with text and pictures,ignoring the reference value supplied by other users with the same package.This work suggests a user churn prediction model based on Graph Attention Convolutional Neural Network(GAT-CNN)to address the aforementioned issues.The main contributions of this paper are as follows:Firstly,we present a three-tiered hierarchical cloud-edge cooperative framework that increases the volume of user feature input by means of two aggregations at the device,edge,and cloud layers.Second,we extend the use of users'own data by introducing self-attention and graph convolution models to track the relative changes of both users and packages simultaneously.Lastly,we build an integrated offline-online system for churn prediction based on the strengths of the two models,and we experimentally validate the efficacy of cloudside collaborative training and inference.In summary,the churn prediction model based on Graph Attention Convolutional Neural Network presented in this paper can effectively address the drawbacks of conventional algorithms and offer telecom operators crucial decision support in developing subscriber retention strategies and cutting operational expenses.
文摘Multi-label image classification is recognized as an important task within the field of computer vision,a discipline that has experienced a significant escalation in research endeavors in recent years.The widespread adoption of convolutional neural networks(CNNs)has catalyzed the remarkable success of architectures such as ResNet-101 within the domain of image classification.However,inmulti-label image classification tasks,it is crucial to consider the correlation between labels.In order to improve the accuracy and performance of multi-label classification and fully combine visual and semantic features,many existing studies use graph convolutional networks(GCN)for modeling.Object detection and multi-label image classification exhibit a degree of conceptual overlap;however,the integration of these two tasks within a unified framework has been relatively underexplored in the existing literature.In this paper,we come up with Object-GCN framework,a model combining object detection network YOLOv5 and graph convolutional network,and we carry out a thorough experimental analysis using a range of well-established public datasets.The designed framework Object-GCN achieves significantly better performance than existing studies in public datasets COCO2014,VOC2007,VOC2012.The final results achieved are 86.9%,96.7%,and 96.3%mean Average Precision(mAP)across the three datasets.
基金the National Natural Science Founda-tion of China(62062062)hosted by Gulila Altenbek.
文摘Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events,we propose an Independent Recurrent Temporal Graph Convolution Networks(IndRT-GCNets)framework to efficiently and accurately capture event attribute information.The framework models the knowledge graph sequences to learn the evolutionary represen-tations of entities and relations within each period.Firstly,by utilizing the temporal graph convolution module in the evolutionary representation unit,the framework captures the structural dependency relationships within the knowledge graph in each period.Meanwhile,to achieve better event representation and establish effective correlations,an independent recurrent neural network is employed to implement auto-regressive modeling.Furthermore,static attributes of entities in the entity-relation events are constrained andmerged using a static graph constraint to obtain optimal entity representations.Finally,the evolution of entity and relation representations is utilized to predict events in the next subsequent step.On multiple real-world datasets such as Freebase13(FB13),Freebase 15k(FB15K),WordNet11(WN11),WordNet18(WN18),FB15K-237,WN18RR,YAGO3-10,and Nell-995,the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks,which validates the effectiveness and robustness.
文摘The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.
基金supported by the National Natural Science Foundation of China (Nos.52274048 and 52374017)Beijing Natural Science Foundation (No.3222037)the CNPC 14th five-year perspective fundamental research project (No.2021DJ2104)。
文摘The shale gas development process is complex in terms of its flow mechanisms and the accuracy of the production forecasting is influenced by geological parameters and engineering parameters.Therefore,to quantitatively evaluate the relative importance of model parameters on the production forecasting performance,sensitivity analysis of parameters is required.The parameters are ranked according to the sensitivity coefficients for the subsequent optimization scheme design.A data-driven global sensitivity analysis(GSA)method using convolutional neural networks(CNN)is proposed to identify the influencing parameters in shale gas production.The CNN is trained on a large dataset,validated against numerical simulations,and utilized as a surrogate model for efficient sensitivity analysis.Our approach integrates CNN with the Sobol'global sensitivity analysis method,presenting three key scenarios for sensitivity analysis:analysis of the production stage as a whole,analysis by fixed time intervals,and analysis by declining rate.The findings underscore the predominant influence of reservoir thickness and well length on shale gas production.Furthermore,the temporal sensitivity analysis reveals the dynamic shifts in parameter importance across the distinct production stages.