The motivation for this study is that the quality of deep fakes is constantly improving, which leads to the need to develop new methods for their detection. The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection, which is then used as input to the CNN. The Customized Convolutional Neural Network method is a data augmentation-based CNN model that generates 'fake data' or 'fake images'. This study was carried out using Python and its libraries. We used 242 films from the dataset gathered by the Deep Fake Detection Challenge, of which 199 were fake and the remaining 53 were real. Ten seconds were allotted for each video. There were 318 videos used in all, 199 of which were fake and 119 of which were real. Our proposed method achieved a testing accuracy of 91.47%, a loss of 0.342, and an AUC score of 0.92, outperforming two alternative approaches, CNN and MLP-CNN. Furthermore, our method achieved greater accuracy than contemporary models such as XceptionNet, Meso-4, EfficientNet-B0, MesoInception-4, VGG-16, and DST-Net. The novelty of this investigation is the development of a new Convolutional Neural Network (CNN) learning model that can accurately detect deep fake face photos.
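As a rough illustration of the pipeline just described, the sketch below builds a small augmentation-based binary CNN in Keras. It assumes that face crops have already been produced by a landmark detector and resized to 128x128; the layer widths, augmentation settings, and training call are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch: a small augmentation-based CNN for real/fake face crops.
# Input size, layer widths, and augmentation settings are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.05),
    layers.RandomZoom(0.1),
])  # simple augmentation applied only during training

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),
    augment,
    layers.Rescaling(1.0 / 255),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),  # real (0) vs fake (1)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# train_ds / val_ds would be tf.data.Dataset objects of labelled face crops:
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```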
Smart industrial environments use the Industrial Internet of Things (IIoT) for their routine operations and transform their industrial operations with intelligent, data-driven approaches. However, IIoT devices are vulnerable to cyber threats and exploits due to their connectivity with the internet. Traditional signature-based IDS are effective in detecting known attacks, but they are unable to detect unknown emerging attacks. Therefore, there is a need for an IDS which can learn from data and detect new threats. Ensemble Machine Learning (ML) and individual Deep Learning (DL) based IDS have been developed; these individual models achieved low accuracy, but their performance can be improved with the ensemble stacking technique. In this paper, we have proposed a Deep Stacked Neural Network (DSNN) based IDS, which consists of two stacked Convolutional Neural Network (CNN) models as base learners and Extreme Gradient Boosting (XGB) as the meta learner. The proposed DSNN model was trained and evaluated with the next-generation dataset, TON_IoT. Several pre-processing techniques were applied to prepare the dataset for the model, including ensemble feature selection and the SMOTE technique. Accuracy, precision, recall, F1-score, and false positive rates were used to evaluate the performance of the proposed ensemble model. Our experimental results showed that the accuracy for binary classification is 99.61%, which is better than that of the baseline individual DL and ML models. In addition, the model proposed for IDS has been compared with similar models. The proposed DSNN achieved better performance metrics than the other models. The proposed DSNN model will be used to develop enhanced IDS for threat mitigation in smart industrial environments.
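The stacking idea, two CNN base learners whose predicted probabilities feed an XGBoost meta-learner after SMOTE rebalancing, can be sketched as follows. The synthetic arrays stand in for preprocessed TON_IoT features and labels, and the layer sizes, kernel widths, and split ratio are assumptions rather than the paper's exact configuration.

```python
# Hedged sketch of the stacking pipeline: SMOTE -> two 1D-CNN base learners -> XGB meta-learner.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 20))                      # stand-in for selected flow features
y = (rng.random(3000) < 0.1).astype(int)             # imbalanced attack labels (stand-in)

def make_cnn(n_features, kernel):
    m = tf.keras.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(32, kernel, activation="relu", padding="same"),
        layers.Conv1D(64, kernel, activation="relu", padding="same"),
        layers.GlobalMaxPooling1D(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    m.compile(optimizer="adam", loss="binary_crossentropy")
    return m

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)                  # rebalance the classes
X_tr, X_meta, y_tr, y_meta = train_test_split(X_bal, y_bal, test_size=0.3, random_state=0)

bases = [make_cnn(X_tr.shape[1], k) for k in (3, 5)]                     # two base CNNs
for b in bases:
    b.fit(X_tr[..., None], y_tr, epochs=3, batch_size=256, verbose=0)

meta_X = np.column_stack([b.predict(X_meta[..., None], verbose=0).ravel() for b in bases])
meta = XGBClassifier(n_estimators=200, max_depth=4).fit(meta_X, y_meta)  # meta-learner on base outputs
```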
This study assesses the suitability of convolutional neural networks (CNNs) for downscaling precipitation over East Africa in the context of seasonal forecasting. To achieve this, we design a set of experiments that compare different CNN configurations and deploy the best-performing architecture to downscale one-month lead seasonal forecasts of June–July–August–September (JJAS) precipitation from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0 (NUIST-CFS1.0) for 1982–2020. We also perform hyper-parameter optimization and introduce predictors over a larger area to include information about the main large-scale circulations that drive precipitation over the East Africa region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed precipitation extreme and spell indicator indices. The results show that the CNN-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme precipitation spatial patterns. Besides, CNN-based downscaling yields a much more accurate forecast of extreme and spell indicators and reduces the significant relative biases exhibited by the raw model predictions. Moreover, our results show that CNN-based downscaling yields better skill scores than the raw model forecasts over most portions of East Africa. The results demonstrate the potential usefulness of CNNs in downscaling seasonal precipitation predictions over East Africa, particularly in providing improved forecast products which are essential for end users.
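A minimal sketch of a downscaling CNN of this kind is shown below: coarse, multi-variable predictor fields go in and a finer precipitation grid comes out. The grid sizes, predictor count, upscaling factor, and network depth are assumptions for illustration, not the configuration selected in the study.

```python
# Hedged sketch: CNN mapping coarse forecast predictor fields to a higher-resolution precipitation grid.
import tensorflow as tf
from tensorflow.keras import layers

coarse_hw, n_predictors, scale = (24, 24), 8, 4    # assumed coarse grid, predictor channels, upscaling factor

inputs = layers.Input(shape=(*coarse_hw, n_predictors))
x = layers.Conv2D(64, 3, padding="same", activation="relu")(inputs)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.UpSampling2D(scale)(x)                  # coarse -> fine resolution
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
outputs = layers.Conv2D(1, 3, padding="same", activation="relu")(x)  # non-negative precipitation field

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mse")
# model.fit(coarse_forecast_fields, high_res_observed_precip, epochs=50, validation_split=0.2)
```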
Graph Convolutional Neural Networks (GCNs) have been widely used in various fields due to their powerful capabilities in processing graph-structured data. However, GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions, resulting in substantial distortions. Moreover, most of the existing GCN models are shallow structures, which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures. To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations, we propose the Hyperbolic Deep Graph Convolutional Neural Network (HDGCNN), an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space. In HDGCNN, we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space. Additionally, we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework. In addition, we present a neighborhood aggregation method that combines initial structural features with hyperbolic attention coefficients. Through the above methods, HDGCNN effectively leverages both the structural features and node features of graph data, enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs. Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-of-the-art GCNs in node classification and link prediction tasks, even when utilizing low-dimensional embedding representations. Furthermore, when compared to shallow hyperbolic graph convolutional neural network models, HDGCNN exhibits notable advantages and performance enhancements.
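As a worked illustration of the kind of hyperbolic operations such frameworks rely on, the numpy sketch below implements the Poincaré-ball exponential and logarithmic maps at the origin and a tangent-space linear transformation. The curvature, dimensions, and the specific transform are generic textbook forms, not HDGCNN's exact definitions.

```python
# Generic Poincaré-ball maps: lift features to the tangent space at the origin,
# apply an ordinary linear map, and project back into the ball.
import numpy as np

c = 1.0  # assumed ball curvature

def expmap0(v, c=c, eps=1e-9):
    """Tangent space at the origin -> Poincaré ball."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True) + eps
    return np.tanh(np.sqrt(c) * norm) * v / (np.sqrt(c) * norm)

def logmap0(x, c=c, eps=1e-9):
    """Poincaré ball -> tangent space at the origin."""
    norm = np.linalg.norm(x, axis=-1, keepdims=True) + eps
    return np.arctanh(np.clip(np.sqrt(c) * norm, 0.0, 1.0 - 1e-7)) * x / (np.sqrt(c) * norm)

def hyperbolic_linear(x, W):
    """Mobius-style feature transformation: expmap0(W . logmap0(x))."""
    return expmap0(logmap0(x) @ W.T)

rng = np.random.default_rng(0)
x_euclidean = rng.normal(size=(5, 16)) * 0.1      # toy node features
x_hyp = expmap0(x_euclidean)                      # map nodes into the ball
W = rng.normal(size=(8, 16)) * 0.1
h = hyperbolic_linear(x_hyp, W)                   # transformed hyperbolic features
assert np.all(np.linalg.norm(h, axis=-1) < 1.0)   # points stay inside the unit ball
```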
Transfer learning could reduce the time and resources required by the training of new models and could therefore be important for generalized applications of trained machine learning algorithms. In this study, a transfer learning-enhanced convolutional neural network (CNN) was proposed to identify the gross weight and the axle weight of moving vehicles on the bridge. The proposed transfer learning-enhanced CNN model was expected to weigh different bridges based on a small amount of training data and provide high identification accuracy. First of all, a CNN algorithm for bridge weigh-in-motion (B-WIM) technology was proposed to identify the axle weight and the gross weight of typical two-axle, three-axle, and five-axle vehicles as they crossed the bridge with different loading routes and speeds. Then, the pre-trained CNN model was transferred by fine-tuning to weigh the moving vehicle on another bridge. Finally, the identification accuracy and the amount of training data required were compared between the two CNN models. Results showed that the pre-trained CNN model using transfer learning for B-WIM technology could be successfully used for the identification of the axle weight and the gross weight of moving vehicles on another bridge while reducing the training data by 63%. Moreover, the recognition accuracy of the pre-trained CNN model using transfer learning was comparable to that of the original model, showing its promising potential in actual applications.
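A minimal sketch of the transfer step is given below: the convolutional feature extractor of a CNN trained on the source bridge is frozen and only the regression head is retrained on a small target-bridge dataset. The stand-in source model, input shape, and training settings are assumptions, not the paper's B-WIM network.

```python
# Hedged sketch of fine-tuning a pre-trained B-WIM CNN for a new bridge.
import tensorflow as tf
from tensorflow.keras import layers

def build_bwim_cnn(input_len=1000, n_sensors=4, n_outputs=3):
    """Stand-in for the source-bridge model (inputs: strain time histories; outputs: weights)."""
    return tf.keras.Sequential([
        layers.Input(shape=(input_len, n_sensors)),
        layers.Conv1D(32, 9, activation="relu", padding="same"), layers.MaxPooling1D(4),
        layers.Conv1D(64, 9, activation="relu", padding="same"), layers.MaxPooling1D(4),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(n_outputs),                   # e.g., gross weight plus axle weights
    ])

source_model = build_bwim_cnn()                    # in practice: trained on the source-bridge dataset
for layer in source_model.layers[:-2]:
    layer.trainable = False                        # keep the convolutional features fixed

source_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mse")
# source_model.fit(X_target, y_target, epochs=30, batch_size=16, validation_split=0.2)
# where X_target, y_target are the (much smaller) measurements from the new bridge
```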
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, timely and accurate defect prediction remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric or feature selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring their similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software faults are predicted with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique, with accuracy, sensitivity and specificity higher by 3%, 3%, 2% and 3%, and time and space lower by 13% and 15%, when compared with the two state-of-the-art methods.
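For readers unfamiliar with the error-minimization step mentioned above, the following is a small, generic example of the Nelder–Mead simplex method applied to a non-linear least-squares problem (here, fitting an exponential curve). The data and model are purely illustrative and are not the SQADEN training objective.

```python
# Generic Nelder-Mead example: minimize a sum of squared residuals for a toy curve fit.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
t = np.linspace(0, 4, 40)
y_obs = 2.5 * np.exp(-1.3 * t) + 0.05 * rng.normal(size=t.size)  # synthetic observations

def sse(params):
    a, b = params
    return np.sum((y_obs - a * np.exp(b * t)) ** 2)  # sum of squared residuals

result = minimize(sse, x0=[1.0, -1.0], method="Nelder-Mead")
print(result.x)  # fitted (a, b), close to (2.5, -1.3)
```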
In this paper, we utilized the deep convolutional neural network D-LinkNet, a model for semantic segmentation, to analyze the Himawari-8 satellite data captured from 16 channels at a spatial resolution of 0.5 km, with a focus on the area over the Yellow Sea and the Bohai Sea (32°-42°N, 117°-127°E). The objective was to develop an algorithm for fusing and segmenting multi-channel images from geostationary meteorological satellites, specifically for monitoring sea fog in this region. Firstly, the extreme gradient boosting algorithm was adopted to evaluate the data from the 16 channels of the Himawari-8 satellite for sea fog detection, and we found that the top three channels in order of importance were channels 3, 4, and 14, which were fused into false color daytime images, while channels 7, 13, and 15 were fused into false color nighttime images. Secondly, the simple linear iterative super-pixel clustering algorithm was used for the pixel-level segmentation of false color images, and based on super-pixel blocks, manual sea-fog annotation was performed to obtain fine-grained annotation labels. The deep convolutional neural network D-LinkNet was built on the ResNet backbone, and dilated convolutional layers with direct connections were added in the central part to form a string-and-combine structure with five branches having different depths and receptive fields. Results show that the accuracy rate of the fog area (proportion of detected real fog to detected fog) was 66.5%, the recognition rate of the fog zone (proportion of detected real fog to real fog or cloud cover) was 51.9%, and the detection accuracy rate (proportion of samples detected correctly to total samples) was 93.2%.
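The channel-ranking and fusion step can be sketched roughly as follows: a gradient-boosted classifier is trained on per-pixel samples of the 16 channels, channels are ranked by feature importance, and the top three are fused into a false-color RGB image. The synthetic arrays stand in for real labelled pixels and scenes; the training settings are assumptions.

```python
# Hedged sketch: XGBoost channel-importance ranking and false-color fusion of the top channels.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 16))                    # per-pixel values for 16 channels (stand-in)
y = rng.integers(0, 2, size=5000)                  # 1 = sea fog, 0 = other (stand-in labels)

clf = XGBClassifier(n_estimators=200, max_depth=5).fit(X, y)
ranked = np.argsort(clf.feature_importances_)[::-1]  # channel indices sorted by importance
print("top-3 channels (1-based):", ranked[:3] + 1)   # the paper reports channels 3, 4, and 14

def false_color(cube, channels):
    """cube: (H, W, 16) scene; map three chosen channels to R, G, B scaled to [0, 1]."""
    img = cube[..., list(channels)].astype(float)
    img -= img.min(axis=(0, 1), keepdims=True)
    img /= img.max(axis=(0, 1), keepdims=True) + 1e-9
    return img

scene = rng.normal(size=(128, 128, 16))            # stand-in for one satellite scene
rgb = false_color(scene, ranked[:3])               # fused image to be segmented (e.g., with SLIC)
```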
This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization. The research background is that with the increasing amount and complexity of data, traditional data analysis methods have been unable to meet the needs. Research methods include building neural networks and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that the neural network combined with Bayesian analysis and deep learning method can effectively improve the accuracy and efficiency of data visualization, and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
Audiovisual speech recognition is an emerging research topic. Lipreading is the recognition of what someone is saying using visual information, primarily lip movements. In this study, we created a custom dataset for Indian English linguistics and categorized it into three main categories: (1) audio recognition, (2) visual feature extraction, and (3) combined audio and visual recognition. Audio features were extracted using the mel-frequency cepstral coefficient, and classification was performed using a one-dimensional convolutional neural network. Visual features were extracted using Dlib, and visual speech was then classified using a long short-term memory type of recurrent neural network. Finally, integration was performed using a deep convolutional network. The audio speech of Indian English was successfully recognized with training and testing accuracies of 93.67% and 91.53%, respectively, using 200 epochs. The training accuracy for visual speech recognition using the Indian English dataset was 77.48% and the test accuracy was 76.19% using 60 epochs. After integration, the accuracies of audiovisual speech recognition using the Indian English dataset for training and testing were 94.67% and 91.75%, respectively.
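A minimal sketch of the audio branch is shown below: mel-frequency cepstral coefficients are extracted with librosa and classified with a one-dimensional CNN. The word-class count, MFCC settings, and layer sizes are assumptions for illustration.

```python
# Hedged sketch of the MFCC + 1D-CNN audio-recognition branch.
import numpy as np
import librosa
import tensorflow as tf
from tensorflow.keras import layers

def mfcc_features(path, n_mfcc=13, max_frames=100):
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)     # (n_mfcc, frames)
    mfcc = librosa.util.fix_length(mfcc, size=max_frames, axis=1)  # pad/trim to a fixed length
    return mfcc.T                                                  # (frames, n_mfcc)

n_classes = 50  # assumed number of spoken-word classes in the custom dataset
model = tf.keras.Sequential([
    layers.Input(shape=(100, 13)),
    layers.Conv1D(64, 5, activation="relu", padding="same"),
    layers.MaxPooling1D(2),
    layers.Conv1D(128, 5, activation="relu", padding="same"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```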
BACKGROUND Artificial intelligence, such as convolutional neural networks (CNNs), has been used in the interpretation of images and the diagnosis of hepatocellular cancer (HCC) and liver masses. CNN, a machine-learning algorithm similar to deep learning, has demonstrated its capability to recognise specific features that can detect pathological lesions. AIM To assess the use of CNNs in examining HCC and liver mass images in the diagnosis of cancer and to evaluate the accuracy level of CNNs and their performance. METHODS The databases PubMed, EMBASE, and the Web of Science and research books were systematically searched using related keywords. Studies analysing pathological anatomy, cellular, and radiological images on HCC or liver masses using CNNs were identified according to the study protocol to detect cancer, differentiate cancer from other lesions, or stage the lesion. The data were extracted as per a predefined extraction protocol. The accuracy level and performance of the CNNs in detecting cancer or early stages of cancer were analysed. The primary outcomes of the study were analysing the type of cancer or liver mass and identifying the type of images that showed optimum accuracy in cancer detection. RESULTS A total of 11 studies that met the selection criteria and were consistent with the aims of the study were identified. The studies demonstrated the ability to differentiate liver masses or differentiate HCC from other lesions (n=6), HCC from cirrhosis or development of new tumours (n=3), and HCC nuclei grading or segmentation (n=2). The CNNs showed satisfactory levels of accuracy. The studies aimed at detecting lesions (n=4), classification (n=5), and segmentation (n=2). Several methods were used to assess the accuracy of the CNN models used. CONCLUSION The role of CNNs in analysing images and as tools in early detection of HCC or liver masses has been demonstrated in these studies. While a few limitations have been identified in these studies, overall there was an optimal level of accuracy of the CNNs used in segmentation and classification of liver cancer images.
The development of precision agriculture demands high accuracy and efficiency of cultivated land information extraction. As a new means of monitoring the ground in recent years, the unmanned aerial vehicle (UAV) low-altitude remote sensing technique, which is flexible, efficient, low-cost, and high-resolution, is widely applied to investigating various resources. Based on this, a novel extraction method for cultivated land information based on Deep Convolutional Neural Network and Transfer Learning (DTCLE) was proposed. First, linear features (roads, ridges, etc.) were excluded based on the Deep Convolutional Neural Network (DCNN). Next, the feature extraction method learned from the DCNN was applied to cultivated land information extraction by introducing a transfer learning mechanism. Last, cultivated land information extraction results were produced by both the DTCLE and eCognition for cultivated land information extraction (ECLE). Pengzhou County and Guanghan County, Sichuan Province, were selected as the experimental areas. The experimental results showed that the overall precision for experimental images 1, 2 and 3 (of extracting cultivated land) with the DTCLE method was 91.7%, 88.1% and 88.2% respectively, and the overall precision of ECLE was 90.7%, 90.5% and 87.0%, respectively. The accuracy of DTCLE was equivalent to that of ECLE, and DTCLE also outperformed ECLE in terms of integrity and continuity.
Nowadays, the amount of web data is increasing at a rapid speed, which presents a serious challenge to web monitoring. Text sentiment analysis, an important research topic in the area of natural language processing, is a crucial task in the web monitoring area. The accuracy of traditional text sentiment analysis methods might be degraded in dealing with mass data. Deep learning has been a hot research topic in artificial intelligence in recent years. By now, several research groups have studied the sentiment analysis of English texts using deep learning methods. In contrast, relatively few works have so far considered Chinese text sentiment analysis toward this direction. In this paper, a method for analyzing Chinese text sentiment is proposed based on the convolutional neural network (CNN) in deep learning in order to improve the analysis accuracy. The feature values of the CNN after the training process are non-uniformly distributed. In order to overcome this problem, a method for normalizing the feature values is proposed. Moreover, the dimensions of the text features are optimized through simulations. Finally, a method for updating the learning rate during the training process of the CNN is presented in order to achieve better performance. Experiment results on typical datasets indicate that the accuracy of the proposed method can be improved compared with that of traditional supervised machine learning methods, e.g., the support vector machine method.
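An illustrative text-sentiment CNN in the spirit described above is sketched below: an embedding layer, a convolution-pooling block, a normalization layer over the learned feature values, and a decaying learning-rate schedule during training. Batch normalization stands in for the paper's feature-value normalization method, and the vocabulary size, sequence length, and schedule constants are assumptions.

```python
# Hedged sketch: text CNN with feature normalization and a learning-rate update rule.
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, seq_len, embed_dim = 50000, 200, 128   # assumed values for a tokenized Chinese corpus

model = tf.keras.Sequential([
    layers.Input(shape=(seq_len,)),
    layers.Embedding(vocab_size, embed_dim),
    layers.Conv1D(128, 5, activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.BatchNormalization(),                   # stand-in for normalizing non-uniform feature values
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),         # positive vs negative sentiment
])

schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=1e-3, decay_steps=1000, decay_rate=0.9)  # learning-rate update over training
model.compile(optimizer=tf.keras.optimizers.Adam(schedule),
              loss="binary_crossentropy", metrics=["accuracy"])
```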
Providing autonomous systems with an effective quantity and quality of information from a desired task is challenging. In particular, autonomous vehicles must have a reliable vision of their workspace to robustly accomplish driving functions. In machine vision, deep learning techniques, and specifically convolutional neural networks, have been proven to be the state-of-the-art technology in the field. As these networks typically involve millions of parameters and elements, designing an optimal architecture for deep learning structures is a difficult task which is globally under investigation by researchers. This study experimentally evaluates the impact of three major architectural properties of convolutional networks, including the number of layers, filters, and filter size, on their performance. In this study, several models with different properties are developed, equally trained, and then applied to an autonomous car in a realistic simulation environment. A new ensemble approach is also proposed to calculate and update weights for the models according to their mean squared error values. Based on design properties, performance results are reported and compared for further investigations. Surprisingly, the number of filters itself does not largely affect the performance efficiency. As a result, proper allocation of filters with different kernel sizes through the layers introduces a considerable improvement in the performance. Achievements of this study will provide researchers with a clear clue and direction in designing optimal network architectures for deep learning purposes.
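A small numpy sketch of the ensemble idea described above follows: each model's predictions are weighted inversely to its mean squared error on validation data, and the weights are renormalized whenever the errors are updated. The array names and toy values are assumptions.

```python
# Hedged sketch: inverse-MSE weighting of several model outputs.
import numpy as np

def inverse_mse_weights(mse_values, eps=1e-12):
    inv = 1.0 / (np.asarray(mse_values, dtype=float) + eps)
    return inv / inv.sum()                    # weights sum to 1; lower MSE -> larger weight

def ensemble_predict(predictions, weights):
    """predictions: (n_models, n_samples) individual outputs; returns the weighted average."""
    return np.average(predictions, axis=0, weights=weights)

mse = [0.020, 0.035, 0.012]                   # validation MSE of three hypothetical models
w = inverse_mse_weights(mse)
preds = np.array([[0.10, -0.05], [0.12, -0.02], [0.09, -0.04]])  # toy steering outputs
print(w, ensemble_predict(preds, w))
```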
The only known predictable aggregation of dwarf minke whales (Balaenoptera acutorostrata subsp.) occurs in the Australian offshore waters of the northern Great Barrier Reef in May-August each year. The identification of individual whales is required for research on the whales' population characteristics and for monitoring the potential impacts of tourism activities, including commercial swims with the whales. At present, it is not cost-effective for researchers to manually process and analyze the tens of thousands of underwater images collated after each observation/tourist season, and a large database of historical non-identified imagery exists. This study reports the first proof of concept for recognizing individual dwarf minke whales using Deep Learning Convolutional Neural Networks (CNNs). The "off-the-shelf" ImageNet-trained VGG16 CNN was used as the feature encoder of the per-pixel semantic segmentation Automatic Minke Whale Recognizer (AMWR). The most frequently photographed whale in a sample of 76 individual whales (MW1020) was identified in 179 images out of the total 1320 images provided. Training and image augmentation procedures were developed to compensate for the small number of available images. The trained AMWR achieved 93% prediction accuracy on the testing subset of 36 positive/MW1020 and 228 negative/not-MW1020 images, where each negative image contained at least one of the other 75 whales. Furthermore, on the test subset, AMWR achieved 74% precision, 80% recall, and a 4% false-positive rate, making the presented approach comparable to or better than other state-of-the-art individual animal recognition results.
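A brief sketch of using an "off-the-shelf" ImageNet-trained VGG16 as a frozen feature encoder, with a small per-pixel recognition head on top, is shown below. The head design and image size are illustrative assumptions, not the exact AMWR architecture, and the output mask is produced at the encoder's reduced spatial resolution.

```python
# Hedged sketch: frozen VGG16 encoder with a small per-pixel recognition head.
import tensorflow as tf
from tensorflow.keras import layers

backbone = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3))
backbone.trainable = False                         # keep the pre-trained encoder fixed

x = layers.Conv2D(64, 3, padding="same", activation="relu")(backbone.output)
mask = layers.Conv2D(1, 1, activation="sigmoid")(x)   # coarse per-pixel "is MW1020" score
model = tf.keras.Model(backbone.input, mask)
model.compile(optimizer="adam", loss="binary_crossentropy")
```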
Land cover classification provides efficient and accurate information regarding human land-use, which is crucial for monitoring urban development patterns, management of water and other natural resources, and land-use planning and regulation. However, land-use classification requires highly trained, complex learning algorithms for accurate classification. Current machine learning techniques already exist to provide accurate image recognition. This research paper develops an image-based land-use classifier using transfer learning with a pre-trained ResNet-18 convolutional neural network. Variations of the resulting approach were compared to show the relationship of training dataset size and epoch length to accuracy. Experiment results show that transfer learning is an effective way to create models that classify satellite images of land-use with good predictive performance. This approach would be beneficial to the monitoring and predicting of urban development patterns, management of water and other natural resources, and land-use planning.
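A minimal transfer-learning sketch with a pre-trained ResNet-18 is given below: the backbone is frozen and the final fully connected layer is replaced for the land-use classes. The class count and fine-tuning settings are assumptions.

```python
# Hedged sketch: ResNet-18 transfer learning for satellite land-use classification.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 10                                    # assumed number of land-use classes
model = models.resnet18(pretrained=True)          # newer torchvision versions use weights= instead
for p in model.parameters():
    p.requires_grad = False                       # keep the ImageNet features fixed
model.fc = nn.Linear(model.fc.in_features, n_classes)   # new trainable classification head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
# for images, labels in train_loader:             # batches of satellite image tensors (assumed given)
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```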
The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks. In addition, IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources. Given the unpredictable nature of network technologies and diverse intrusion methods, conventional machine-learning approaches seem to lack efficiency. Across numerous research domains, deep learning techniques have demonstrated their capability to precisely detect anomalies. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. Firstly, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality and obtain a meaningful data representation by calculating the reconstruction error. Secondly, the Convolutional Neural Network (CNN) technique is employed to create a binary classification approach. The proposed SAE-CNN approach is validated using the Bot-IoT dataset. The proposed models exceed the performance of the existing deep learning approaches in the literature with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 of 99.9%, False Positive Rate (FPR) of 0.0003, and True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicated that SAE-CNN performs better.
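An illustrative SAE-CNN pipeline is sketched below: a sparse autoencoder (with an L1 activity penalty) compresses the flow features, and a 1D CNN classifies the encoded representation as normal or attack. The feature count, bottleneck size, and layer widths are assumptions rather than the paper's exact settings.

```python
# Hedged sketch: sparse autoencoder for dimensionality reduction feeding a 1D-CNN classifier.
import tensorflow as tf
from tensorflow.keras import layers, regularizers

n_features, code_dim = 40, 16                      # assumed Bot-IoT feature and code sizes

inp = layers.Input(shape=(n_features,))
code = layers.Dense(code_dim, activation="relu",
                    activity_regularizer=regularizers.l1(1e-5))(inp)   # sparsity constraint
recon = layers.Dense(n_features, activation="linear")(code)
sae = tf.keras.Model(inp, recon)
sae.compile(optimizer="adam", loss="mse")          # trained to minimize reconstruction error
encoder = tf.keras.Model(inp, code)

classifier = tf.keras.Sequential([
    layers.Input(shape=(code_dim, 1)),
    layers.Conv1D(32, 3, activation="relu", padding="same"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(16, activation="relu"),
    layers.Dense(1, activation="sigmoid"),         # attack vs normal traffic
])
classifier.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# sae.fit(X_train, X_train, ...); Z = encoder.predict(X_train)[..., None]; classifier.fit(Z, y_train, ...)
```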
In recent years, there has been significant research on the application of deep learning (DL) in topology optimization (TO) to accelerate structural design. However, these methods have primarily focused on solving binary TO problems, and effective solutions for multi-material topology optimization (MMTO), which requires substantial computing resources, are still lacking. Therefore, this paper proposes a framework for multiphase topology optimization using deep learning to accelerate MMTO design. The framework employs a convolutional neural network (CNN) to construct a surrogate model for solving MMTO, and the obtained surrogate model can rapidly generate multi-material structure topologies in negligible time without any iterations. The performance evaluation results show that the proposed method not only outputs multi-material topologies with clear material boundaries but also reduces the calculation cost with high prediction accuracy. Additionally, in order to find a more reasonable modeling method for MMTO, this paper studies the characteristics of surrogate modeling as a regression task and as a classification task. Through the training of 297 models, our findings show that the regression task yields slightly better results than the classification task in most cases. Furthermore, the results indicate that the prediction accuracy is primarily influenced by factors such as the TO problem, material category, and data scale. Conversely, factors such as the domain size and the material properties have minimal impact on the accuracy.
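The regression-versus-classification modeling choice can be pictured with the schematic sketch below: the same CNN trunk can end in a regression head (per-element material fractions) or a classification head (per-element material labels). The grid size, material count, and the encoding of loads and boundary conditions are simplified assumptions, not the paper's surrogate model.

```python
# Hedged sketch: one CNN trunk with alternative regression and classification heads for MMTO surrogates.
import tensorflow as tf
from tensorflow.keras import layers

grid, n_materials, n_cond = (64, 64), 3, 4        # assumed design grid, phases, and condition channels

def trunk(inp):
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(64, 3, padding="same", activation="relu")(x)

inp = layers.Input(shape=(*grid, n_cond))          # encoded loads / boundary conditions (assumed)
x = trunk(inp)

regression_out = layers.Conv2D(n_materials, 1, activation="sigmoid", name="fractions")(x)
classification_out = layers.Conv2D(n_materials, 1, activation="softmax", name="labels")(x)

regressor = tf.keras.Model(inp, regression_out)
regressor.compile(optimizer="adam", loss="mse")                                 # regression task
classifier = tf.keras.Model(inp, classification_out)
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy")    # classification task
```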
Geopolymer concrete emerges as a promising avenue for sustainable development and offers an effective solution to environmental problems. Its attributes as a non-toxic, low-carbon, and economical substitute for conventional cement concrete, coupled with its elevated compressive strength and reduced shrinkage properties, position it as a pivotal material for diverse applications spanning from architectural structures to transportation infrastructure. In this context, this study sets out the task of using machine learning (ML) algorithms to increase the accuracy and interpretability of predicting the compressive strength of geopolymer concrete in the civil engineering field. To achieve this goal, a new approach using convolutional neural networks (CNNs) has been adopted. This study focuses on creating a comprehensive dataset consisting of compositional and strength parameters of 162 geopolymer concrete mixes, all containing Class F fly ash. The selection of optimal input parameters is guided by two distinct criteria. The first criterion leverages insights garnered from previous research on the influence of individual features on compressive strength. The second criterion scrutinizes the impact of these features within the model's predictive framework. Key to enhancing the CNN model's performance is the meticulous determination of the optimal hyperparameters. Through a systematic trial-and-error process, the study ascertains the ideal number of epochs for data division and the optimal value of k for k-fold cross-validation, a technique vital to the model's robustness. The model's predictive prowess is rigorously assessed via a suite of performance metrics and comprehensive score analyses. Furthermore, the model's adaptability is gauged by integrating a secondary dataset into its predictive framework, facilitating a comparative evaluation against conventional prediction methods. To unravel the intricacies of the CNN model's learning trajectory, a loss plot is deployed to elucidate its learning rate. The study culminates in compelling findings that underscore the CNN model's accurate prediction of geopolymer concrete compressive strength. To maximize the dataset's potential, the application of bivariate plots unveils nuanced trends and interactions among variables, fortifying the consistency with earlier research. Evidenced by promising prediction accuracy, the study's outcomes hold significant promise in guiding the development of innovative geopolymer concrete formulations, thereby reinforcing its role as an eco-conscious and robust construction material. The findings prove that the CNN model accurately estimated geopolymer concrete's compressive strength. The results show that the prediction accuracy is promising and can be used for the development of new geopolymer concrete mixes. The outcomes not only underscore the significance of leveraging technology for sustainable construction practices but also pave the way for innovation and efficiency in the field of civil engineering.
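A hedged sketch of k-fold cross-validation for a CNN that maps mix-design parameters to compressive strength follows. The synthetic table stands in for the 162 mix records, and k, the feature count, and the network depth are illustrative, not the paper's tuned hyperparameters.

```python
# Hedged sketch: k-fold cross-validation of a 1D-CNN strength regressor on tabular mix features.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X = rng.uniform(size=(162, 12))                    # stand-in mix-composition features
y = 20 + 40 * X[:, 0] + 5 * rng.normal(size=162)   # stand-in compressive strengths (MPa)

def build_cnn(n_features):
    m = tf.keras.Sequential([
        layers.Input(shape=(n_features, 1)),
        layers.Conv1D(32, 3, activation="relu", padding="same"),
        layers.Conv1D(64, 3, activation="relu", padding="same"),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                           # predicted compressive strength
    ])
    m.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return m

maes = []
for tr, va in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = build_cnn(X.shape[1])
    model.fit(X[tr][..., None], y[tr], epochs=50, verbose=0)
    maes.append(model.evaluate(X[va][..., None], y[va], verbose=0)[1])
print("mean cross-validated MAE:", float(np.mean(maes)))
```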
Oscillation detection has been a hot research topic in industry due to the high incidence of oscillation loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them can only address part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation. However, manual visual inspection is labor-intensive and prone to missed detection. Convolutional neural networks (CNNs), inspired by animal visual systems, have emerged with powerful feature extraction capabilities. In this work, an exploration of typical CNN models for visual oscillation detection is performed. Specifically, we tested the MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models, and found that such a visual framework is well-suited for oscillation detection. The feasibility and validity of this framework are verified using extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean-nonstationarity. In addition, this framework generalizes well and is capable of handling features that are not present in the training data, such as multiple oscillations and outliers.
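A rough sketch of this visual framing is shown below: a control-loop signal is rendered as a small image (here, a rasterized line plot) and classified by a lightweight ImageNet pre-trained CNN such as MobileNet. The rendering choice, image size, and classifier head are assumptions, not the paper's exact setup.

```python
# Hedged sketch: render a loop signal as an image and classify it with a frozen MobileNet backbone.
import io
import numpy as np
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
from PIL import Image
import tensorflow as tf
from tensorflow.keras import layers

def signal_to_image(signal, size=(224, 224)):
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)
    ax.plot(signal, linewidth=1)
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png")
    plt.close(fig)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB").resize(size), dtype=np.float32)

backbone = tf.keras.applications.MobileNet(include_top=False, weights="imagenet",
                                           input_shape=(224, 224, 3), pooling="avg")
backbone.trainable = False
model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 3)),
    layers.Rescaling(1.0 / 127.5, offset=-1.0),    # MobileNet expects inputs scaled to [-1, 1]
    backbone,
    layers.Dense(1, activation="sigmoid"),          # oscillating vs non-oscillating loop
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```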
In actual traffic scenarios, precise recognition of traffic participants, such as vehicles and pedestrians, is crucial for intelligent transportation. This study proposes an improved algorithm built on Mask-RCNN to enhance the ability of autonomous driving systems to recognize traffic participants. The algorithm incorporates long short-term memory networks and a fused attention module (GSAM, GCT, and Spatial Attention Module) to enhance the algorithm's capability to process both global and local information. Additionally, to increase the network's initial operation stability, the original network activation function was replaced with the Gaussian error linear unit. Experiments were conducted using the publicly available Cityscapes dataset. Comparing the test results, it was observed that the revised algorithm outperformed the original algorithm in terms of AP50, AP75, and other metrics, by 8.7% and 9.6% for target detection and 12.5% and 13.3% for segmentation.
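A compact sketch of a spatial attention block of the kind referenced above is given below, in the common style where channel-wise average and max maps are concatenated, convolved, and used to reweight the feature map, with GELU as the activation. This is a generic illustration; the exact GSAM/GCT modules and their placement inside Mask-RCNN are not reproduced here.

```python
# Hedged sketch: a generic spatial attention block with a GELU-activated convolution.
import tensorflow as tf
from tensorflow.keras import layers

def spatial_attention(x, kernel_size=7):
    avg_map = tf.reduce_mean(x, axis=-1, keepdims=True)     # channel-wise average map
    max_map = tf.reduce_max(x, axis=-1, keepdims=True)      # channel-wise max map
    attn = layers.Concatenate()([avg_map, max_map])
    attn = layers.Conv2D(1, kernel_size, padding="same", activation="sigmoid")(attn)
    return x * attn                                          # reweight spatial positions

inp = layers.Input(shape=(64, 64, 256))
feat = layers.Conv2D(256, 3, padding="same", activation="gelu")(inp)   # GELU activation
out = spatial_attention(feat)
model = tf.keras.Model(inp, out)
```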
基金Science and Technology Funds from the Liaoning Education Department(Serial Number:LJKZ0104).
文摘The motivation for this study is that the quality of deep fakes is constantly improving,which leads to the need to develop new methods for their detection.The proposed Customized Convolutional Neural Network method involves extracting structured data from video frames using facial landmark detection,which is then used as input to the CNN.The customized Convolutional Neural Network method is the date augmented-based CNN model to generate‘fake data’or‘fake images’.This study was carried out using Python and its libraries.We used 242 films from the dataset gathered by the Deep Fake Detection Challenge,of which 199 were made up and the remaining 53 were real.Ten seconds were allotted for each video.There were 318 videos used in all,199 of which were fake and 119 of which were real.Our proposedmethod achieved a testing accuracy of 91.47%,loss of 0.342,and AUC score of 0.92,outperforming two alternative approaches,CNN and MLP-CNN.Furthermore,our method succeeded in greater accuracy than contemporary models such as XceptionNet,Meso-4,EfficientNet-BO,MesoInception-4,VGG-16,and DST-Net.The novelty of this investigation is the development of a new Convolutional Neural Network(CNN)learning model that can accurately detect deep fake face photos.
文摘Smart Industrial environments use the Industrial Internet of Things(IIoT)for their routine operations and transform their industrial operations with intelligent and driven approaches.However,IIoT devices are vulnerable to cyber threats and exploits due to their connectivity with the internet.Traditional signature-based IDS are effective in detecting known attacks,but they are unable to detect unknown emerging attacks.Therefore,there is the need for an IDS which can learn from data and detect new threats.Ensemble Machine Learning(ML)and individual Deep Learning(DL)based IDS have been developed,and these individual models achieved low accuracy;however,their performance can be improved with the ensemble stacking technique.In this paper,we have proposed a Deep Stacked Neural Network(DSNN)based IDS,which consists of two stacked Convolutional Neural Network(CNN)models as base learners and Extreme Gradient Boosting(XGB)as the meta learner.The proposed DSNN model was trained and evaluated with the next-generation dataset,TON_IoT.Several pre-processing techniques were applied to prepare a dataset for the model,including ensemble feature selection and the SMOTE technique.Accuracy,precision,recall,F1-score,and false positive rates were used to evaluate the performance of the proposed ensemble model.Our experimental results showed that the accuracy for binary classification is 99.61%,which is better than in the baseline individual DL and ML models.In addition,the model proposed for IDS has been compared with similar models.The proposed DSNN achieved better performance metrics than the other models.The proposed DSNN model will be used to develop enhanced IDS for threat mitigation in smart industrial environments.
基金supported by the National Key Research and Development Program of China (Grant No.2020YFA0608000)the National Natural Science Foundation of China (Grant No. 42030605)the High-Performance Computing of Nanjing University of Information Science&Technology for their support of this work。
文摘This study assesses the suitability of convolutional neural networks(CNNs) for downscaling precipitation over East Africa in the context of seasonal forecasting. To achieve this, we design a set of experiments that compare different CNN configurations and deployed the best-performing architecture to downscale one-month lead seasonal forecasts of June–July–August–September(JJAS) precipitation from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0(NUIST-CFS1.0) for 1982–2020. We also perform hyper-parameter optimization and introduce predictors over a larger area to include information about the main large-scale circulations that drive precipitation over the East Africa region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed precipitation extreme and spell indicator indices. The results show that the CNN-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme precipitation spatial patterns. Besides, CNN-based downscaling yields a much more accurate forecast of extreme and spell indicators and reduces the significant relative biases exhibited by the raw model predictions. Moreover, our results show that CNN-based downscaling yields better skill scores than the raw model forecasts over most portions of East Africa. The results demonstrate the potential usefulness of CNN in downscaling seasonal precipitation predictions over East Africa,particularly in providing improved forecast products which are essential for end users.
基金supported by the National Natural Science Foundation of China-China State Railway Group Co.,Ltd.Railway Basic Research Joint Fund (Grant No.U2268217)the Scientific Funding for China Academy of Railway Sciences Corporation Limited (No.2021YJ183).
文摘Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions,resulting in substantial distortions.Moreover,most of the existing GCN models are shallow structures,which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures.To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations,we propose the Hyperbolic Deep Graph Convolutional Neural Network(HDGCNN),an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space.In HDGCNN,we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space.Additionally,we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework.In addition,we present a neighborhood aggregation method that combines initial structural featureswith hyperbolic attention coefficients.Through the above methods,HDGCNN effectively leverages both the structural features and node features of graph data,enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs.Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-ofthe-art GCNs in node classification and link prediction tasks,even when utilizing low-dimensional embedding representations.Furthermore,when compared to shallow hyperbolic graph convolutional neural network models,HDGCNN exhibits notable advantages and performance enhancements.
基金the financial support provided by the National Natural Science Foundation of China(Grant No.52208213)the Excellent Youth Foundation of Education Department in Hunan Province(Grant No.22B0141)+1 种基金the Xiaohe Sci-Tech Talents Special Funding under Hunan Provincial Sci-Tech Talents Sponsorship Program(2023TJ-X65)the Science Foundation of Xiangtan University(Grant No.21QDZ23).
文摘Transfer learning could reduce the time and resources required by the training of new models and be therefore important for generalized applications of the trainedmachine learning algorithms.In this study,a transfer learningenhanced convolutional neural network(CNN)was proposed to identify the gross weight and the axle weight of moving vehicles on the bridge.The proposed transfer learning-enhanced CNN model was expected to weigh different bridges based on a small amount of training datasets and provide high identification accuracy.First of all,a CNN algorithm for bridge weigh-in-motion(B-WIM)technology was proposed to identify the axle weight and the gross weight of the typical two-axle,three-axle,and five-axle vehicles as they crossed the bridge with different loading routes and speeds.Then,the pre-trained CNN model was transferred by fine-tuning to weigh themoving vehicle on another bridge.Finally,the identification accuracy and the amount of training data required were compared between the two CNN models.Results showed that the pre-trained CNN model using transfer learning for B-WIM technology could be successfully used for the identification of the axle weight and the gross weight for moving vehicles on another bridge while reducing the training data by 63%.Moreover,the recognition accuracy of the pre-trained CNN model using transfer learning was comparable to that of the original model,showing its promising potentials in the actual applications.
文摘The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.
基金National Key R&D Program of China(2021YFC3000905)Open Research Program of the State Key Laboratory of Severe Weather(2022LASW-B09)National Natural Science Foundation of China(42375010)。
文摘In this paper,we utilized the deep convolutional neural network D-LinkNet,a model for semantic segmentation,to analyze the Himawari-8 satellite data captured from 16 channels at a spatial resolution of 0.5 km,with a focus on the area over the Yellow Sea and the Bohai Sea(32°-42°N,117°-127°E).The objective was to develop an algorithm for fusing and segmenting multi-channel images from geostationary meteorological satellites,specifically for monitoring sea fog in this region.Firstly,the extreme gradient boosting algorithm was adopted to evaluate the data from the 16 channels of the Himawari-8 satellite for sea fog detection,and we found that the top three channels in order of importance were channels 3,4,and 14,which were fused into false color daytime images,while channels 7,13,and 15 were fused into false color nighttime images.Secondly,the simple linear iterative super-pixel clustering algorithm was used for the pixel-level segmentation of false color images,and based on super-pixel blocks,manual sea-fog annotation was performed to obtain fine-grained annotation labels.The deep convolutional neural network D-LinkNet was built on the ResNet backbone and the dilated convolutional layers with direct connections were added in the central part to form a string-and-combine structure with five branches having different depths and receptive fields.Results show that the accuracy rate of fog area(proportion of detected real fog to detected fog)was 66.5%,the recognition rate of fog zone(proportion of detected real fog to real fog or cloud cover)was 51.9%,and the detection accuracy rate(proportion of samples detected correctly to total samples)was 93.2%.
文摘This study aims to explore the application of Bayesian analysis based on neural networks and deep learning in data visualization.The research background is that with the increasing amount and complexity of data,traditional data analysis methods have been unable to meet the needs.Research methods include building neural networks and deep learning models,optimizing and improving them through Bayesian analysis,and applying them to the visualization of large-scale data sets.The results show that the neural network combined with Bayesian analysis and deep learning method can effectively improve the accuracy and efficiency of data visualization,and enhance the intuitiveness and depth of data interpretation.The significance of the research is that it provides a new solution for data visualization in the big data environment and helps to further promote the development and application of data science.
文摘Audiovisual speech recognition is an emerging research topic.Lipreading is the recognition of what someone is saying using visual information,primarily lip movements.In this study,we created a custom dataset for Indian English linguistics and categorized it into three main categories:(1)audio recognition,(2)visual feature extraction,and(3)combined audio and visual recognition.Audio features were extracted using the mel-frequency cepstral coefficient,and classification was performed using a one-dimension convolutional neural network.Visual feature extraction uses Dlib and then classifies visual speech using a long short-term memory type of recurrent neural networks.Finally,integration was performed using a deep convolutional network.The audio speech of Indian English was successfully recognized with accuracies of 93.67%and 91.53%,respectively,using testing data from 200 epochs.The training accuracy for visual speech recognition using the Indian English dataset was 77.48%and the test accuracy was 76.19%using 60 epochs.After integration,the accuracies of audiovisual speech recognition using the Indian English dataset for training and testing were 94.67%and 91.75%,respectively.
基金Supported by the College of Medicine Research Centre,Deanship of Scientific Research,King Saud University,Riyadh,Saudi Arabia
文摘BACKGROUND Artificial intelligence,such as convolutional neural networks(CNNs),has been used in the interpretation of images and the diagnosis of hepatocellular cancer(HCC)and liver masses.CNN,a machine-learning algorithm similar to deep learning,has demonstrated its capability to recognise specific features that can detect pathological lesions.AIM To assess the use of CNNs in examining HCC and liver masses images in the diagnosis of cancer and evaluating the accuracy level of CNNs and their performance.METHODS The databases PubMed,EMBASE,and the Web of Science and research books were systematically searched using related keywords.Studies analysing pathological anatomy,cellular,and radiological images on HCC or liver masses using CNNs were identified according to the study protocol to detect cancer,differentiating cancer from other lesions,or staging the lesion.The data were extracted as per a predefined extraction.The accuracy level and performance of the CNNs in detecting cancer or early stages of cancer were analysed.The primary outcomes of the study were analysing the type of cancer or liver mass and identifying the type of images that showed optimum accuracy in cancer detection.RESULTS A total of 11 studies that met the selection criteria and were consistent with the aims of the study were identified.The studies demonstrated the ability to differentiate liver masses or differentiate HCC from other lesions(n=6),HCC from cirrhosis or development of new tumours(n=3),and HCC nuclei grading or segmentation(n=2).The CNNs showed satisfactory levels of accuracy.The studies aimed at detecting lesions(n=4),classification(n=5),and segmentation(n=2).Several methods were used to assess the accuracy of CNN models used.CONCLUSION The role of CNNs in analysing images and as tools in early detection of HCC or liver masses has been demonstrated in these studies.While a few limitations have been identified in these studies,overall there was an optimal level of accuracy of the CNNs used in segmentation and classification of liver cancers images.
基金supported by the Fundamental Research Funds for the Central Universities of China(Grant No.2013SCU11006)the Key Laboratory of Digital Mapping and Land Information Application of National Administration of Surveying,Mapping and Geoinformation of China(Grant NO.DM2014SC02)the Key Laboratory of Geospecial Information Technology,Ministry of Land and Resources of China(Grant NO.KLGSIT201504)
文摘The development of precision agriculture demands high accuracy and efficiency of cultivated land information extraction. As a new means of monitoring the ground in recent years, unmanned aerial vehicle (UAV) low-height remote sensing technique, which is flexible, efficient with low cost and with high resolution, is widely applied to investing various resources. Based on this, a novel extraction method for cultivated land information based on Deep Convolutional Neural Network and Transfer Learning (DTCLE) was proposed. First, linear features (roads and ridges etc.) were excluded based on Deep Convolutional Neural Network (DCNN). Next, feature extraction method learned from DCNN was used to cultivated land information extraction by introducing transfer learning mechanism. Last, cultivated land information extraction results were completed by the DTCLE and eCognifion for cultivated land information extraction (ECLE). The location of the Pengzhou County and Guanghan County, Sichuan Province were selected for the experimental purpose. The experimental results showed that the overall precision for the experimental image 1, 2 and 3 (of extracting cultivated land) with the DTCLE method was 91.7%, 88.1% and 88.2% respectively, and the overall precision of ECLE is 9o.7%, 90.5% and 87.0%, respectively. Accuracy of DTCLE was equivalent to that of ECLE, and also outperformed ECLE in terms of integrity and continuity.
文摘Nowadays,the amount of wed data is increasing at a rapid speed,which presents a serious challenge to the web monitoring.Text sentiment analysis,an important research topic in the area of natural language processing,is a crucial task in the web monitoring area.The accuracy of traditional text sentiment analysis methods might be degraded in dealing with mass data.Deep learning is a hot research topic of the artificial intelligence in the recent years.By now,several research groups have studied the sentiment analysis of English texts using deep learning methods.In contrary,relatively few works have so far considered the Chinese text sentiment analysis toward this direction.In this paper,a method for analyzing the Chinese text sentiment is proposed based on the convolutional neural network(CNN)in deep learning in order to improve the analysis accuracy.The feature values of the CNN after the training process are nonuniformly distributed.In order to overcome this problem,a method for normalizing the feature values is proposed.Moreover,the dimensions of the text features are optimized through simulations.Finally,a method for updating the learning rate in the training process of the CNN is presented in order to achieve better performances.Experiment results on the typical datasets indicate that the accuracy of the proposed method can be improved compared with that of the traditional supervised machine learning methods,e.g.,the support vector machine method.
文摘Providing autonomous systems with an effective quantity and quality of information from a desired task is challenging. In particular, autonomous vehicles, must have a reliable vision of their workspace to robustly accomplish driving functions. Speaking of machine vision, deep learning techniques, and specifically convolutional neural networks, have been proven to be the state of the art technology in the field. As these networks typically involve millions of parameters and elements, designing an optimal architecture for deep learning structures is a difficult task which is globally under investigation by researchers. This study experimentally evaluates the impact of three major architectural properties of convolutional networks, including the number of layers, filters, and filter size on their performance. In this study, several models with different properties are developed,equally trained, and then applied to an autonomous car in a realistic simulation environment. A new ensemble approach is also proposed to calculate and update weights for the models regarding their mean squared error values. Based on design properties,performance results are reported and compared for further investigations. Surprisingly, the number of filters itself does not largely affect the performance efficiency. As a result, proper allocation of filters with different kernel sizes through the layers introduces a considerable improvement in the performance.Achievements of this study will provide the researchers with a clear clue and direction in designing optimal network architectures for deep learning purposes.
Abstract: The only known predictable aggregation of dwarf minke whales (Balaenoptera acutorostrata subsp.) occurs in the Australian offshore waters of the northern Great Barrier Reef in May-August each year. The identification of individual whales is required for research on the whales' population characteristics and for monitoring the potential impacts of tourism activities, including commercial swims with the whales. At present, it is not cost-effective for researchers to manually process and analyze the tens of thousands of underwater images collated after each observation/tourist season, and a large database of historical non-identified imagery exists. This study reports the first proof of concept for recognizing individual dwarf minke whales using deep learning convolutional neural networks (CNNs). The "off-the-shelf" ImageNet-trained VGG16 CNN was used as the feature encoder of the per-pixel semantic segmentation Automatic Minke Whale Recognizer (AMWR). The most frequently photographed whale in a sample of 76 individuals (MW1020) was identified in 179 of the 1320 images provided. Training and image augmentation procedures were developed to compensate for the small number of available images. The trained AMWR achieved 93% prediction accuracy on a testing subset of 36 positive/MW1020 and 228 negative/not-MW1020 images, where each negative image contained at least one of the other 75 whales. Furthermore, on the test subset, AMWR achieved 74% precision, 80% recall, and a 4% false-positive rate, making the presented approach comparable to or better than other state-of-the-art individual animal recognition results.
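The "feature encoder" role of VGG16 can be illustrated with a minimal sketch: the ImageNet-trained convolutional layers are kept as a frozen encoder, and their output feature maps would feed the segmentation decoder of AMWR (which is omitted here and would need to be designed and trained separately). The input size and weight variant below are illustrative assumptions.

```python
# Hedged sketch: ImageNet-trained VGG16 convolutional layers used as a fixed
# feature encoder; the per-pixel segmentation decoder is not shown.
import torch
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
encoder = vgg.features            # convolutional feature extractor only
for p in encoder.parameters():
    p.requires_grad = False

with torch.no_grad():
    image = torch.randn(1, 3, 224, 224)   # stand-in for an underwater photo
    feature_map = encoder(image)          # (1, 512, 7, 7) encoding
print(feature_map.shape)
```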
Abstract: Land cover classification provides efficient and accurate information about human land use, which is crucial for monitoring urban development patterns, managing water and other natural resources, and land-use planning and regulation. However, land-use classification requires highly trained, complex learning algorithms for accurate results. Modern machine learning techniques already provide accurate image recognition. This paper develops an image-based land-use classifier using transfer learning with a pre-trained ResNet-18 convolutional neural network. Variations of the resulting approach were compared to show a direct relationship between training dataset size, epoch length, and accuracy. Experimental results show that transfer learning is an effective way to build models that classify satellite images of land use with good predictive performance. This approach would benefit the monitoring and prediction of urban development patterns, the management of water and other natural resources, and land-use planning.
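A minimal fine-tuning sketch of the described approach is given below: a pre-trained ResNet-18 has its final layer replaced with a land-use classification head and is then trained on labeled satellite tiles. The number of classes and the optimizer settings are placeholders, not values from the paper.

```python
# Hedged sketch of ResNet-18 transfer learning for satellite land-use tiles.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)   # 10 land-use classes (placeholder)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    """loader yields (images, labels) batches of satellite tiles."""
    model.train()
    for images, labels in loader:
        loss = criterion(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```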
Funding: Researchers Supporting Project Number (RSP2024R206), King Saud University, Riyadh, Saudi Arabia.
Abstract: The rapid growth of Internet of Things (IoT) devices has brought numerous benefits to the interconnected world. However, the ubiquitous nature of IoT networks exposes them to various security threats, including anomaly intrusion attacks. In addition, IoT devices generate a high volume of unstructured data. Traditional intrusion detection systems often struggle to cope with the unique characteristics of IoT networks, such as resource constraints and heterogeneous data sources. Given the unpredictable nature of network technologies and the diversity of intrusion methods, conventional machine learning approaches lack efficiency. Across numerous research domains, deep learning techniques have demonstrated their capability to detect anomalies precisely. This study designs and enhances a novel anomaly-based intrusion detection system (AIDS) for IoT networks. First, a Sparse Autoencoder (SAE) is applied to reduce the high dimensionality and obtain a significant data representation by calculating the reconstruction error. Second, the Convolutional Neural Network (CNN) technique is employed to create a binary classification approach. The proposed SAE-CNN approach is validated using the Bot-IoT dataset. The proposed model exceeds the performance of existing deep learning approaches in the literature, with an accuracy of 99.9%, precision of 99.9%, recall of 100%, F1 of 99.9%, a False Positive Rate (FPR) of 0.0003, and a True Positive Rate (TPR) of 0.9992. In addition, alternative metrics, such as training and testing durations, indicate that SAE-CNN performs better.
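The two-stage SAE-CNN pipeline can be sketched minimally: an autoencoder with an L1 sparsity penalty compresses each network-flow record, and a small 1D CNN classifies the compressed code as normal or attack. Layer sizes, the input dimension, and the sparsity weight below are illustrative, not the paper's exact configuration.

```python
# Hedged sketch of the SAE-CNN idea: sparse autoencoder for dimensionality
# reduction, followed by a 1D CNN binary classifier on the latent code.
import torch
import torch.nn as nn

class SparseAE(nn.Module):
    def __init__(self, in_dim=40, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, code_dim), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(code_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

def sae_loss(x, recon, code, l1_weight=1e-4):
    # reconstruction error plus an L1 penalty that encourages a sparse code
    return nn.functional.mse_loss(recon, x) + l1_weight * code.abs().mean()

class CodeCNN(nn.Module):
    """1D CNN over the SAE code for binary (normal / attack) classification."""
    def __init__(self, code_dim=16):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=3, padding=1)
        self.fc = nn.Linear(8 * code_dim, 2)

    def forward(self, code):
        x = torch.relu(self.conv(code.unsqueeze(1)))   # (N, 8, code_dim)
        return self.fc(x.flatten(1))
```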
Funding: Supported in part by the National Natural Science Foundation of China under Grant Nos. 51675525, 52005505, and 62001502, and by the Post-Graduate Scientific Research Innovation Project of Hunan Province under Grant No. XJCX2023185.
Abstract: In recent years, there has been significant research on the application of deep learning (DL) to topology optimization (TO) to accelerate structural design. However, these methods have primarily focused on binary TO problems, and effective solutions for multi-material topology optimization (MMTO), which demands substantial computing resources, are still lacking. Therefore, this paper proposes a framework for multiphase topology optimization using deep learning to accelerate MMTO design. The framework employs a convolutional neural network (CNN) to construct a surrogate model for solving MMTO; the resulting surrogate can rapidly generate multi-material structure topologies in negligible time without any iterations. The performance evaluation shows that the proposed method not only outputs multi-material topologies with clear material boundaries but also reduces the computational cost while maintaining high prediction accuracy. Additionally, in order to find a more suitable modeling approach for MMTO, this paper studies the characteristics of surrogate modeling posed as a regression task versus a classification task. Through the training of 297 models, our findings show that the regression task yields slightly better results than the classification task in most cases. The results also indicate that prediction accuracy is primarily influenced by factors such as the TO problem, the material category, and the data scale, whereas factors such as the domain size and the material properties have minimal impact on accuracy.
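The regression-versus-classification distinction can be made concrete with a small sketch: the same CNN body maps problem-definition fields on the design grid to a per-pixel material layout, and only the output interpretation and loss differ. The encoder structure, channel counts, and input encoding below are assumptions, not the paper's architecture.

```python
# Hedged sketch of a CNN surrogate for multi-material topology optimization:
# regression predicts continuous per-pixel material densities, classification
# predicts a discrete material label per pixel.
import torch
import torch.nn as nn

class SurrogateCNN(nn.Module):
    def __init__(self, in_channels=3, num_materials=3, task="regression"):
        super().__init__()
        self.task = task
        self.body = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, num_materials, 1)   # one map per candidate material

    def forward(self, x):                  # x: (N, in_channels, H, W)
        logits = self.head(self.body(x))
        if self.task == "regression":
            return torch.sigmoid(logits)   # continuous densities in [0, 1]
        return logits                      # per-pixel class scores

# regression:     train against density fields with nn.MSELoss()
# classification: train against material labels with nn.CrossEntropyLoss()
```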
Funding: Funded by the Researchers Supporting Program at King Saud University (RSPD2023R809).
Abstract: Geopolymer concrete is a promising avenue for sustainable development and offers an effective solution to environmental problems. Its attributes as a non-toxic, low-carbon, and economical substitute for conventional cement concrete, coupled with its elevated compressive strength and reduced shrinkage, position it as a pivotal material for applications ranging from architectural structures to transportation infrastructure. In this context, this study applies machine learning (ML) algorithms to increase the accuracy and interpretability of predicting the compressive strength of geopolymer concrete in civil engineering, adopting an approach based on convolutional neural networks (CNNs). The study builds a comprehensive dataset consisting of the compositional and strength parameters of 162 geopolymer concrete mixes, all containing Class F fly ash. The selection of optimal input parameters is guided by two criteria: insights from previous research on the influence of individual features on compressive strength, and the impact of those features within the model's predictive framework. Key to the CNN model's performance is the careful determination of the optimal hyperparameters; through a systematic trial-and-error process, the study determines the ideal number of epochs and the optimal value of k for k-fold cross-validation, a technique vital to the model's robustness. The model's predictive ability is rigorously assessed via a suite of performance metrics and comprehensive score analyses, and its adaptability is gauged by integrating a secondary dataset into its predictive framework, enabling a comparative evaluation against conventional prediction methods. A loss plot is used to examine the model's learning behavior over training. To make full use of the dataset, bivariate plots reveal trends and interactions among variables, consistent with earlier research. The study finds that the CNN model accurately predicts the compressive strength of geopolymer concrete; the promising prediction accuracy can guide the development of new geopolymer concrete mixes, reinforcing the material's role in eco-conscious and robust construction and supporting innovation and efficiency in civil engineering.
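The k-fold cross-validation loop used to tune k and the number of epochs can be sketched as follows. The mix-design feature count, the tiny 1D CNN regressor, and the training settings are placeholders under stated assumptions, not the paper's configuration.

```python
# Hedged sketch of k-fold cross-validation for a strength-prediction CNN:
# each fold trains a fresh model and reports validation RMSE, and the
# averaged RMSE is used to compare candidate k / epoch settings.
import numpy as np
import torch
import torch.nn as nn
from sklearn.model_selection import KFold

def build_model(num_features=8):
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * num_features, 1))

def cross_validate(X, y, k=5, epochs=200):
    """X: (N, num_features) mix designs; y: (N,) compressive strengths (MPa)."""
    fold_rmse = []
    for train_idx, val_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model = build_model(X.shape[1])
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        xb = torch.tensor(X[train_idx], dtype=torch.float32).unsqueeze(1)
        yb = torch.tensor(y[train_idx], dtype=torch.float32).unsqueeze(1)
        for _ in range(epochs):
            loss = nn.functional.mse_loss(model(xb), yb)
            opt.zero_grad()
            loss.backward()
            opt.step()
        xv = torch.tensor(X[val_idx], dtype=torch.float32).unsqueeze(1)
        pred = model(xv).detach().numpy().ravel()
        fold_rmse.append(np.sqrt(np.mean((pred - y[val_idx]) ** 2)))
    return float(np.mean(fold_rmse))
```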
Funding: Supported by the National Natural Science Foundation of China (62003298, 62163036), the Major Project of Science and Technology of Yunnan Province (202202AD080005, 202202AH080009), and the Yunnan University Professional Degree Graduate Practice Innovation Fund Project (ZC-22222770).
Abstract: Oscillation detection has been a hot research topic in industry due to the high incidence of oscillating loops and their negative impact on plant profitability. Although numerous automatic detection techniques have been proposed, most of them address only part of the practical difficulties. An oscillation is heuristically defined as a visually apparent periodic variation; however, manual visual inspection is labor-intensive and prone to missed detections. Convolutional neural networks (CNNs), inspired by animal visual systems, offer powerful feature extraction capabilities. In this work, typical CNN models are explored for visual oscillation detection. Specifically, we tested MobileNet-V1, ShuffleNet-V2, EfficientNet-B0, and GhostNet models and found that such a visual framework is well suited to oscillation detection. The feasibility and validity of this framework are verified using extensive numerical and industrial cases. Compared with state-of-the-art oscillation detectors, the suggested framework is more straightforward and more robust to noise and mean nonstationarity. In addition, the framework generalizes well and is capable of handling features not present in the training data, such as multiple oscillations and outliers.
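One way to picture the "visual" framework is to render a control-loop trend as an image and score it with a lightweight CNN carrying a binary (oscillating / non-oscillating) head. The plot-based rendering below is an assumed implementation detail, and torchvision's MobileNet-V2 stands in for the MobileNet-V1 tested in the paper; the model is untrained here and would need to be fine-tuned on labeled trend images.

```python
# Hedged sketch: render a 1D process trend as an RGB image and pass it
# through a lightweight CNN whose head is replaced for binary detection.
import io
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

def trend_to_image(signal, size=224):
    """Render a 1D trend as a (3, size, size) image tensor."""
    fig, ax = plt.subplots(figsize=(2.24, 2.24), dpi=100)
    ax.plot(signal, linewidth=1.0)
    ax.axis("off")
    buf = io.BytesIO()
    fig.savefig(buf, format="png", bbox_inches="tight")
    plt.close(fig)
    buf.seek(0)
    img = Image.open(buf).convert("RGB").resize((size, size))
    return transforms.ToTensor()(img)

detector = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
detector.classifier[-1] = nn.Linear(detector.classifier[-1].in_features, 2)
detector.eval()

t = np.linspace(0, 50, 1000)
oscillating = np.sin(2 * np.pi * 0.5 * t) + 0.1 * np.random.randn(t.size)
logits = detector(trend_to_image(oscillating).unsqueeze(0))   # untrained demo pass
```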
Funding: Supported by the National Natural Science Foundation of China (52175236) and the Qingdao People's Livelihood Science and Technology Plan (19-6-1-88-nsh).
Abstract: In actual traffic scenarios, precise recognition of traffic participants, such as vehicles and pedestrians, is crucial for intelligent transportation. This study proposes an improved algorithm built on Mask R-CNN to enhance the ability of autonomous driving systems to recognize traffic participants. The algorithm incorporates long short-term memory networks and the fused attention module (GSAM, GCT, and Spatial Attention Module) to enhance its capability to process both global and local information. Additionally, to increase the network's initial operating stability, the original activation function was replaced with the Gaussian Error Linear Unit (GELU). Experiments were conducted using the publicly available Cityscapes dataset. The test results show that the revised algorithm outperformed the original in terms of AP_(50), AP_(75), and other metrics, by 8.7% and 9.6% for target detection and by 12.5% and 13.3% for segmentation.
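The activation swap mentioned above can be illustrated with a short sketch: GELU computes x·Φ(x), where Φ is the standard normal CDF, and existing ReLU modules in a network can be replaced in place. The helper that walks a network and substitutes its ReLU layers is an illustrative utility, not code from the paper.

```python
# Hedged sketch of replacing ReLU with the Gaussian Error Linear Unit (GELU).
import math
import torch
import torch.nn as nn

def gelu(x):
    """GELU: x * Phi(x), with Phi the standard normal CDF."""
    return 0.5 * x * (1.0 + torch.erf(x / math.sqrt(2.0)))

def replace_relu_with_gelu(module):
    """Recursively swap every nn.ReLU in a network for nn.GELU."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.GELU())
        else:
            replace_relu_with_gelu(child)

net = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU())
replace_relu_with_gelu(net)
print(net)   # both ReLU modules are now GELU
```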