In this paper,we utilized the deep convolutional neural network D-LinkNet,a model for semantic segmentation,to analyze the Himawari-8 satellite data captured from 16 channels at a spatial resolution of 0.5 km,with a f...In this paper,we utilized the deep convolutional neural network D-LinkNet,a model for semantic segmentation,to analyze the Himawari-8 satellite data captured from 16 channels at a spatial resolution of 0.5 km,with a focus on the area over the Yellow Sea and the Bohai Sea(32°-42°N,117°-127°E).The objective was to develop an algorithm for fusing and segmenting multi-channel images from geostationary meteorological satellites,specifically for monitoring sea fog in this region.Firstly,the extreme gradient boosting algorithm was adopted to evaluate the data from the 16 channels of the Himawari-8 satellite for sea fog detection,and we found that the top three channels in order of importance were channels 3,4,and 14,which were fused into false color daytime images,while channels 7,13,and 15 were fused into false color nighttime images.Secondly,the simple linear iterative super-pixel clustering algorithm was used for the pixel-level segmentation of false color images,and based on super-pixel blocks,manual sea-fog annotation was performed to obtain fine-grained annotation labels.The deep convolutional neural network D-LinkNet was built on the ResNet backbone and the dilated convolutional layers with direct connections were added in the central part to form a string-and-combine structure with five branches having different depths and receptive fields.Results show that the accuracy rate of fog area(proportion of detected real fog to detected fog)was 66.5%,the recognition rate of fog zone(proportion of detected real fog to real fog or cloud cover)was 51.9%,and the detection accuracy rate(proportion of samples detected correctly to total samples)was 93.2%.展开更多
This paper proposes a cascade deep convolutional neural network to address the loosening detection problem of bolts on axlebox covers.Firstly,an SSD network based on ResNet50 and CBAM module by improving bolt image fe...This paper proposes a cascade deep convolutional neural network to address the loosening detection problem of bolts on axlebox covers.Firstly,an SSD network based on ResNet50 and CBAM module by improving bolt image features is proposed for locating bolts on axlebox covers.And then,theA2-PFN is proposed according to the slender features of the marker lines for extracting more accurate marker lines regions of the bolts.Finally,a rectangular approximationmethod is proposed to regularize themarker line regions asaway tocalculate the angle of themarker line and plot all the angle values into an angle table,according to which the criteria of the angle table can determine whether the bolt with the marker line is in danger of loosening.Meanwhile,our improved algorithm is compared with the pre-improved algorithmin the object localization stage.The results show that our proposed method has a significant improvement in both detection accuracy and detection speed,where ourmAP(IoU=0.75)reaches 0.77 and fps reaches 16.6.And in the saliency detection stage,after qualitative comparison and quantitative comparison,our method significantly outperforms other state-of-the-art methods,where our MAE reaches 0.092,F-measure reaches 0.948 and AUC reaches 0.943.Ultimately,according to the angle table,out of 676 bolt samples,a total of 60 bolts are loose,69 bolts are at risk of loosening,and 547 bolts are tightened.展开更多
Enabling high mobility applications in millimeter wave(mmWave)based systems opens up a slew of new possibilities,including vehicle communi-cations in addition to wireless virtual/augmented reality.The narrow beam usag...Enabling high mobility applications in millimeter wave(mmWave)based systems opens up a slew of new possibilities,including vehicle communi-cations in addition to wireless virtual/augmented reality.The narrow beam usage in addition to the millimeter waves sensitivity might block the coverage along with the reliability of the mobile links.In this research work,the improvement in the quality of experience faced by the user for multimedia-related applications over the millimeter-wave band is investigated.The high attenuation loss in high frequencies is compensated with a massive array structure named Multiple Input and Multiple Output(MIMO)which is utilized in a hyperdense environment called heterogeneous networks(HetNet).The optimization problem which arises while maximizing the Mean Opinion Score(MOS)is analyzed along with the QoE(Quality of Experience)metric by considering the Base Station(BS)powers in addition to the needed Quality of Service(QoS).Most of the approaches related to wireless network communication are not suitable for the millimeter-wave band because of its problems due to high complexity and its dynamic nature.Hence a deep reinforcement learning framework is developed for tackling the same opti-mization problem.In this work,a Fuzzy-based Deep Convolutional Neural Net-work(FDCNN)is proposed in addition to a Deep Reinforcing Learning Framework(DRLF)for extracting the features of highly correlated data.The investigational results prove that the proposed method yields the highest satisfac-tion to the user by increasing the number of antennas in addition with the small-scale antennas at the base stations.The proposed work outperforms in terms of MOS with multiple antennas.展开更多
The development of precision agriculture demands high accuracy and efficiency of cultivated land information extraction. As a new means of monitoring the ground in recent years, unmanned aerial vehicle (UAV) low-hei...The development of precision agriculture demands high accuracy and efficiency of cultivated land information extraction. As a new means of monitoring the ground in recent years, unmanned aerial vehicle (UAV) low-height remote sensing technique, which is flexible, efficient with low cost and with high resolution, is widely applied to investing various resources. Based on this, a novel extraction method for cultivated land information based on Deep Convolutional Neural Network and Transfer Learning (DTCLE) was proposed. First, linear features (roads and ridges etc.) were excluded based on Deep Convolutional Neural Network (DCNN). Next, feature extraction method learned from DCNN was used to cultivated land information extraction by introducing transfer learning mechanism. Last, cultivated land information extraction results were completed by the DTCLE and eCognifion for cultivated land information extraction (ECLE). The location of the Pengzhou County and Guanghan County, Sichuan Province were selected for the experimental purpose. The experimental results showed that the overall precision for the experimental image 1, 2 and 3 (of extracting cultivated land) with the DTCLE method was 91.7%, 88.1% and 88.2% respectively, and the overall precision of ECLE is 9o.7%, 90.5% and 87.0%, respectively. Accuracy of DTCLE was equivalent to that of ECLE, and also outperformed ECLE in terms of integrity and continuity.展开更多
Compressive strength of concrete is a significant factor to assess building structure health and safety.Therefore,various methods have been developed to evaluate the compressive strength of concrete structures.However...Compressive strength of concrete is a significant factor to assess building structure health and safety.Therefore,various methods have been developed to evaluate the compressive strength of concrete structures.However,previous methods have several challenges in costly,time-consuming,and unsafety.To address these drawbacks,this paper proposed a digital vision based concrete compressive strength evaluating model using deep convolutional neural network(DCNN).The proposed model presented an alternative approach to evaluating the concrete strength and contributed to improving efficiency and accuracy.The model was developed with 4,000 digital images and 61,996 images extracted from video recordings collected from concrete samples.The experimental results indicated a root mean square error(RMSE)value of 3.56(MPa),demonstrating a strong feasibility that the proposed model can be utilized to predict the concrete strength with digital images of their surfaces and advantages to overcome the previous limitations.This experiment contributed to provide the basis that could be extended to future research with image analysis technique and artificial neural network in the diagnosis of concrete building structures.展开更多
In this study,we examined the efficacy of a deep convolutional neural network(DCNN)in recognizing concrete surface images and predicting the compressive strength of concrete.A digital single-lens reflex(DSLR)camera an...In this study,we examined the efficacy of a deep convolutional neural network(DCNN)in recognizing concrete surface images and predicting the compressive strength of concrete.A digital single-lens reflex(DSLR)camera and microscope were simultaneously used to obtain concrete surface images used as the input data for the DCNN.Thereafter,training,validation,and testing of the DCNNs were performed based on the DSLR camera and microscope image data.Results of the analysis indicated that the DCNN employing DSLR image data achieved a relatively higher accuracy.The accuracy of the DSLR-derived image data was attributed to the relatively wider range of the DSLR camera,which was beneficial for extracting a larger number of features.Moreover,the DSLR camera procured more realistic images than the microscope.Thus,when the compressive strength of concrete was evaluated using the DCNN employing a DSLR camera,time and cost were reduced,whereas the usefulness increased.Furthermore,an indirect comparison of the accuracy of the DCNN with that of existing non-destructive methods for evaluating the strength of concrete proved the reliability of DCNN-derived concrete strength predictions.In addition,it was determined that the DCNN used for concrete strength evaluations in this study can be further expanded to detect and evaluate various deteriorative factors that affect the durability of structures,such as salt damage,carbonation,sulfation,corrosion,and freezing-thawing.展开更多
Assessing the age of an individual via bones serves as a fool proof method in true determination of individual skills.Several attempts are reported in the past for assessment of chronological age of an individual base...Assessing the age of an individual via bones serves as a fool proof method in true determination of individual skills.Several attempts are reported in the past for assessment of chronological age of an individual based on variety of discriminative features found in wrist radiograph images.The permutation and combination of these features realized satisfactory accuracies for a set of limited groups.In this paper,assessment of gender for individuals of chronological age between 1-17 years is performed using left hand wrist radiograph images.A fully automated approach is proposed for removal of noise persisted due to non-uniform illumination during the process of radiograph acquisition process.Subsequent to this a computational technique for extraction of wrist region is proposed using operations on specific bit planes of image.A framework called GeNet of deep convolutional neural network is applied for classification of extracted wrist regions into male and female.The experimentations are conducted on the datasets of Radiological Society of North America(RSNA)of about 12442 images.Efficiency of preprocessing and segmentation techniques resulted into a correlation of about 99.09%.Performance of GeNet is evaluated on the extracted wrist regions resulting into an accuracy of 82.18%.展开更多
Graph embedding aims to map the high-dimensional nodes to a low-dimensional space and learns the graph relationship from its latent representations.Most existing graph embedding methods focus on the topological struct...Graph embedding aims to map the high-dimensional nodes to a low-dimensional space and learns the graph relationship from its latent representations.Most existing graph embedding methods focus on the topological structure of graph data,but ignore the semantic information of graph data,which results in the unsatisfied performance in practical applications.To overcome the problem,this paper proposes a novel deep convolutional adversarial graph autoencoder(GAE)model.To embed the semantic information between nodes in the graph data,the random walk strategy is first used to construct the positive pointwise mutual information(PPMI)matrix,then,graph convolutional net-work(GCN)is employed to encode the PPMI matrix and node content into the latent representation.Finally,the learned latent representation is used to reconstruct the topological structure of the graph data by decoder.Furthermore,the deep convolutional adversarial training algorithm is introduced to make the learned latent representation conform to the prior distribution better.The state-of-the-art experimental results on the graph data validate the effectiveness of the proposed model in the link prediction,node clustering and graph visualization tasks for three standard datasets,Cora,Citeseer and Pubmed.展开更多
Combining both visible and infrared object information, multispectral data is a promising source data for automatic maritime ship recognition. In this paper, in order to take advantage of deep convolutional neural net...Combining both visible and infrared object information, multispectral data is a promising source data for automatic maritime ship recognition. In this paper, in order to take advantage of deep convolutional neural network and multispectral data, we model multispectral ship recognition task into a convolutional feature fusion problem, and propose a feature fusion architecture called Hybrid Fusion. We fine-tune the VGG-16 model pre-trained on ImageNet through three channels single spectral image and four channels multispectral images, and use existing regularization techniques to avoid over-fitting problem. Hybrid Fusion as well as the other three feature fusion architectures is investigated. Each fusion architecture consists of visible image and infrared image feature extraction branches, in which the pre-trained and fine-tuned VGG-16 models are taken as feature extractor. In each fusion architecture, image features of two branches are firstly extracted from the same layer or different layers of VGG-16 model. Subsequently, the features extracted from the two branches are flattened and concatenated to produce a multispectral feature vector, which is finally fed into a classifier to achieve ship recognition task. Furthermore, based on these fusion architectures, we also evaluate recognition performance of a feature vector normalization method and three combinations of feature extractors. Experimental results on the visible and infrared ship (VAIS) dataset show that the best Hybrid Fusion achieves 89.6% mean per-class recognition accuracy on daytime paired images and 64.9% on nighttime infrared images, and outperforms the state-of-the-art method by 1.4% and 3.9%, respectively.展开更多
The novel coronavirus 2019(COVID-19)rapidly spreading around the world and turns into a pandemic situation,consequently,detecting the coronavirus(COVID-19)affected patients are now the most critical task for medical s...The novel coronavirus 2019(COVID-19)rapidly spreading around the world and turns into a pandemic situation,consequently,detecting the coronavirus(COVID-19)affected patients are now the most critical task for medical specialists.The deficiency of medical testing kits leading to huge complexity in detecting COVID-19 patients worldwide,resulting in the number of infected cases is expanding.Therefore,a significant study is necessary about detecting COVID-19 patients using an automated diagnosis method,which hinders the spreading of coronavirus.In this paper,the study suggests a Deep Convolutional Neural Network-based multi-classification framework(COV-MCNet)using eight different pre-trained architectures such as VGG16,VGG19,ResNet50V2,DenseNet201,InceptionV3,MobileNet,InceptionResNetV2,Xception which are trained and tested on the X-ray images of COVID-19,Normal,Viral Pneumonia,and Bacterial Pneumonia.The results from 4-class(Normal vs.COVID-19 vs.Viral Pneumonia vs.Bacterial Pneumonia)demonstrated that the pre-trained model DenseNet201 provides the highest classification performance(accuracy:92.54%,precision:93.05%,recall:92.81%,F1-score:92.83%,specificity:97.47%).Notably,the DenseNet201(4-class classification)pre-trained model in the proposed COV-MCNet framework showed higher accuracy compared to the rest seven models.Important to mention that the proposed COV-MCNet model showed comparatively higher classification accuracy based on the small number of pre-processed datasets that specifies the designed system can produce superior results when more data become available.The proposed multi-classification network(COV-MCNet)significantly speeds up the existing radiology based method which will be helpful for the medical community and clinical specialists to early diagnosis the COVID-19 cases during this pandemic.展开更多
In underground mining,the belt is a critical component,as its state directly affects the safe and stable operation of the conveyor.Most of the existing non-contact detection methods based on machine vision can only de...In underground mining,the belt is a critical component,as its state directly affects the safe and stable operation of the conveyor.Most of the existing non-contact detection methods based on machine vision can only detect a single type of damage and they require pre-processing operations.This tends to cause a large amount of calculation and low detection precision.To solve these problems,in the work described in this paper a belt tear detection method based on a multi-class conditional deep convolutional generative adversarial network(CDCGAN)was designed.In the traditional DCGAN,the image generated by the generator has a certain degree of randomness.Here,a small number of labeled belt images are taken as conditions and added them to the generator and discriminator,so the generator can generate images with the characteristics of belt damage under the aforementioned conditions.Moreover,because the discriminator cannot identify multiple types of damage,the multi-class softmax function is used as the output function of the discriminator to output a vector of class probabilities,and it can accurately classify cracks,scratches,and tears.To avoid the features learned incompletely,skiplayer connection is adopted in the generator and discriminator.This not only can minimize the loss of features,but also improves the convergence speed.Compared with other algorithms,experimental results show that the loss value of the generator and discriminator is the least.Moreover,its convergence speed is faster,and the mean average precision of the proposed algorithm is up to 96.2%,which is at least 6%higher than that of other algorithms.展开更多
Background:Myopic maculopathy(MM)has become a major cause of visual impairment and blindness worldwide,especially in East Asian countries.Deep learning approaches such as deep convolutional neural networks(DCNN)have b...Background:Myopic maculopathy(MM)has become a major cause of visual impairment and blindness worldwide,especially in East Asian countries.Deep learning approaches such as deep convolutional neural networks(DCNN)have been successfully applied to identify some common retinal diseases and show great potential for the intelligent analysis of MM.This study aimed to build a reliable approach for automated detection of MM from retinal fundus images using DCNN models.Methods:A dual-stream DCNN(DCNN-DS)model that perceives features from both original images and corresponding processed images by color histogram distribution optimization method was designed for classification of no MM,tessellated fundus(TF),and pathologic myopia(PM).A total of 36,515 gradable images from four hospitals were used for DCNN model development,and 14,986 gradable images from the other two hospitals for external testing.We also compared the performance of the DCNN-DS model and four ophthalmologists on 3000 randomly sampledfundus images.Results:The DCNN-DS model achieved sensitivities of 93.3%and 91.0%,specificities of 99.6%and 98.7%,areas under the receiver operating characteristic curves(AUCs)of 0.998 and 0.994 for detecting PM,whereas sensitivities of 98.8%and 92.8%,specificities of 95.6%and 94.1%,AUCs of 0.986 and 0.970 for detecting TF in two external testing datasets.In the sampled testing dataset,the sensitivities of four ophthalmologists ranged from 88.3%to 95.8%and 81.1%to 89.1%,and the specificities ranged from 95.9%to 99.2%and 77.8%to 97.3%for detecting PM and TF,respectively.Meanwhile,the DCNN-DS model achieved sensitivities of 90.8%and 97.9%and specificities of 99.1%and 94.0%for detecting PMand T,respectively.Conclusions:The proposed DCNN-DS approach demonstrated reliable performance with high sensitivity,specificity,and AUC to classify different MM levels on fundus photographs sourced from clinics.It can help identify MM automatically among the large myopic groups and show great potential for real-life applications.展开更多
Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to sca...Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions,resulting in substantial distortions.Moreover,most of the existing GCN models are shallow structures,which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures.To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations,we propose the Hyperbolic Deep Graph Convolutional Neural Network(HDGCNN),an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space.In HDGCNN,we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space.Additionally,we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework.In addition,we present a neighborhood aggregation method that combines initial structural featureswith hyperbolic attention coefficients.Through the above methods,HDGCNN effectively leverages both the structural features and node features of graph data,enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs.Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-ofthe-art GCNs in node classification and link prediction tasks,even when utilizing low-dimensional embedding representations.Furthermore,when compared to shallow hyperbolic graph convolutional neural network models,HDGCNN exhibits notable advantages and performance enhancements.展开更多
Residual magnetic error remains after standard levelling process.The weak non-geological effect,manifesting itself as streaky noise along flight lines,creates a challenge for airborne geophysical data processing and i...Residual magnetic error remains after standard levelling process.The weak non-geological effect,manifesting itself as streaky noise along flight lines,creates a challenge for airborne geophysical data processing and interpretation.Microleveling is the process to eliminate this residual noise and is now a standard areogeophysical data processing step.In this paper,we propose a two-step procedure for single aerogeophysical data microleveling:a deep convolutional network is first adopted as approximator to map the original data into a low-level part with nature geological structures and a corrugated residual which still contains high-level detail geological structures;second,the mixture of Gaussian robust principal component analysis(MoG-RPCA)is then used to separate the weak energy fine structures from the residual.The final microleveling result is the addition of low-level structures from deep convolutional network and fine structures from MoG-RPCA.The deep convolutional network does not need dataset for training and the handcrafted network serves as prior(deep image prior)to capture the low-level nature geological structures in the areogeophysical data.Experiments on synthetic data and field data demonstrate that the combination of deep convolutional network and MoG-RPCA is an effective framework for single areogeophysical data microleveling.展开更多
As a huge number of satellites revolve around the earth,a great probability exists to observe and determine the change phenomena on the earth through the analysis of satellite images on a real-time basis.Therefore,cla...As a huge number of satellites revolve around the earth,a great probability exists to observe and determine the change phenomena on the earth through the analysis of satellite images on a real-time basis.Therefore,classifying satellite images plays strong assistance in remote sensing communities for predicting tropical cyclones.In this article,a classification approach is proposed using Deep Convolutional Neural Network(DCNN),comprising numerous layers,which extract the features through a downsampling process for classifying satellite cloud images.DCNN is trained marvelously on cloud images with an impressive amount of prediction accuracy.Delivery time decreases for testing images,whereas prediction accuracy increases using an appropriate deep convolutional network with a huge number of training dataset instances.The satellite images are taken from the Meteorological&Oceanographic Satellite Data Archival Centre,the organization is responsible for availing satellite cloud images of India and its subcontinent.The proposed cloud image classification shows 94% prediction accuracy with the DCNN framework.展开更多
Watermarking is the advanced technology utilized to secure digital data by integrating ownership or copyright protection.Most of the traditional extracting processes in audio watermarking have some restrictions due to...Watermarking is the advanced technology utilized to secure digital data by integrating ownership or copyright protection.Most of the traditional extracting processes in audio watermarking have some restrictions due to low reliability to various attacks.Hence,a deep learning-based audio watermarking system is proposed in this research to overcome the restriction in the traditional methods.The implication of the research relies on enhancing the performance of the watermarking system using the Discrete Wavelet Transform(DWT)and the optimized deep learning technique.The selection of optimal embedding location is the research contribution that is carried out by the deep convolutional neural network(DCNN).The hyperparameter tuning is performed by the so-called search location optimization,which minimizes the errors in the classifier.The experimental result reveals that the proposed digital audio watermarking system provides better robustness and performance in terms of Bit Error Rate(BER),Mean Square Error(MSE),and Signal-to-noise ratio.The BER,MSE,and SNR of the proposed audio watermarking model without the noise are 0.082,0.099,and 45.363 respectively,which is found to be better performance than the existing watermarking models.展开更多
Deep Convolutional Neural Networks(CNNs)have achieved high accuracy in image classification tasks,however,most existing models are trained on high-quality images that are not subject to image degradation.In practice,i...Deep Convolutional Neural Networks(CNNs)have achieved high accuracy in image classification tasks,however,most existing models are trained on high-quality images that are not subject to image degradation.In practice,images are often affected by various types of degradation which can significantly impact the performance of CNNs.In this work,we investigate the influence of image degradation on three typical image classification CNNs and propose a Degradation Type Adaptive Image Classification Model(DTA-ICM)to improve the existing CNNs’classification accuracy on degraded images.The proposed DTA-ICM comprises two key components:a Degradation Type Predictor(DTP)and a Degradation Type Specified Image Classifier(DTS-IC)set,which is trained on existing CNNs for specified types of degradation.The DTP predicts the degradation type of a test image,and the corresponding DTS-IC is then selected to classify the image.We evaluate the performance of both the proposed DTP and the DTA-ICMon the Caltech 101 database.The experimental results demonstrate that the proposed DTP achieves an average accuracy of 99.70%.Moreover,the proposed DTA-ICM,based on AlexNet,VGG19,and ResNet152,exhibits an average accuracy improvement of 20.63%,18.22%,and 12.9%,respectively,compared with the original CNNs in classifying degraded images.It suggests that the proposed DTA-ICM can effectively improve the classification performance of existing CNNs on degraded images,which has important practical implications.展开更多
As an important component of load transfer,various fatigue damages occur in the track as the rail service life and train traffic increase gradually,such as rail corrugation,rail joint damage,uneven thermite welds,rail ...As an important component of load transfer,various fatigue damages occur in the track as the rail service life and train traffic increase gradually,such as rail corrugation,rail joint damage,uneven thermite welds,rail squats fas-tener defects,etc.Real-time recognition of track defects plays a vital role in ensuring the safe and stable operation of rail transit.In this paper,an intelligent and innovative method is proposed to detect the track defects by using axle-box vibration acceleration and deep learning network,and the coexistence of the above-mentioned typical track defects in the track system is considered.Firstly,the dynamic relationship between the track defects(using the example of the fastening defects)and the axle-box vibration acceleration(ABVA)is investigated using the dynamic vehicle-track model.Then,a simulation model for the coupled dynamics of the vehicle and track with different track defects is established,and the wavelet power spectrum(WPS)analysis is performed for the vibra-tion acceleration signals of the axle box to extract the characteristic response.Lastly,using wavelet spectrum photos as input,an automatic detection technique based on the deep convolution neural network(DCNN)is sug-gested to realize the real-time intelligent detection and identification of various track problems.Thefindings demonstrate that the suggested approach achieves a 96.72%classification accuracy.展开更多
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software w...The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.展开更多
The automatic individual identification of Amur tigers(Panthera tigris altaica)is important for population monitoring and making effective conservation strategies.Most existing research primarily relies on manual iden...The automatic individual identification of Amur tigers(Panthera tigris altaica)is important for population monitoring and making effective conservation strategies.Most existing research primarily relies on manual identifi-cation,which does not scale well to large datasets.In this paper,the deep convolution neural networks algorithm is constructed to implement the automatic individual identification for large numbers of Amur tiger images.The experimental data were obtained from 40 Amur tigers in Tieling Guaipo Tiger Park,China.The number of images collected from each tiger was approximately 200,and a total of 8277 images were obtained.The experiments were carried out on both the left and right side of body.Our results suggested that the recognition accuracy rate of left and right sides are 90.48%and 93.5%,respectively.The accuracy of our network has achieved the similar level compared to other state of the art networks like LeNet,ResNet34,and ZF_Net.The running time is much shorter than that of other networks.Consequently,this study can provide a new approach on automatic individual identification technology in the case of the Amur tiger.展开更多
基金National Key R&D Program of China(2021YFC3000905)Open Research Program of the State Key Laboratory of Severe Weather(2022LASW-B09)National Natural Science Foundation of China(42375010)。
文摘In this paper,we utilized the deep convolutional neural network D-LinkNet,a model for semantic segmentation,to analyze the Himawari-8 satellite data captured from 16 channels at a spatial resolution of 0.5 km,with a focus on the area over the Yellow Sea and the Bohai Sea(32°-42°N,117°-127°E).The objective was to develop an algorithm for fusing and segmenting multi-channel images from geostationary meteorological satellites,specifically for monitoring sea fog in this region.Firstly,the extreme gradient boosting algorithm was adopted to evaluate the data from the 16 channels of the Himawari-8 satellite for sea fog detection,and we found that the top three channels in order of importance were channels 3,4,and 14,which were fused into false color daytime images,while channels 7,13,and 15 were fused into false color nighttime images.Secondly,the simple linear iterative super-pixel clustering algorithm was used for the pixel-level segmentation of false color images,and based on super-pixel blocks,manual sea-fog annotation was performed to obtain fine-grained annotation labels.The deep convolutional neural network D-LinkNet was built on the ResNet backbone and the dilated convolutional layers with direct connections were added in the central part to form a string-and-combine structure with five branches having different depths and receptive fields.Results show that the accuracy rate of fog area(proportion of detected real fog to detected fog)was 66.5%,the recognition rate of fog zone(proportion of detected real fog to real fog or cloud cover)was 51.9%,and the detection accuracy rate(proportion of samples detected correctly to total samples)was 93.2%.
文摘This paper proposes a cascade deep convolutional neural network to address the loosening detection problem of bolts on axlebox covers.Firstly,an SSD network based on ResNet50 and CBAM module by improving bolt image features is proposed for locating bolts on axlebox covers.And then,theA2-PFN is proposed according to the slender features of the marker lines for extracting more accurate marker lines regions of the bolts.Finally,a rectangular approximationmethod is proposed to regularize themarker line regions asaway tocalculate the angle of themarker line and plot all the angle values into an angle table,according to which the criteria of the angle table can determine whether the bolt with the marker line is in danger of loosening.Meanwhile,our improved algorithm is compared with the pre-improved algorithmin the object localization stage.The results show that our proposed method has a significant improvement in both detection accuracy and detection speed,where ourmAP(IoU=0.75)reaches 0.77 and fps reaches 16.6.And in the saliency detection stage,after qualitative comparison and quantitative comparison,our method significantly outperforms other state-of-the-art methods,where our MAE reaches 0.092,F-measure reaches 0.948 and AUC reaches 0.943.Ultimately,according to the angle table,out of 676 bolt samples,a total of 60 bolts are loose,69 bolts are at risk of loosening,and 547 bolts are tightened.
文摘Enabling high mobility applications in millimeter wave(mmWave)based systems opens up a slew of new possibilities,including vehicle communi-cations in addition to wireless virtual/augmented reality.The narrow beam usage in addition to the millimeter waves sensitivity might block the coverage along with the reliability of the mobile links.In this research work,the improvement in the quality of experience faced by the user for multimedia-related applications over the millimeter-wave band is investigated.The high attenuation loss in high frequencies is compensated with a massive array structure named Multiple Input and Multiple Output(MIMO)which is utilized in a hyperdense environment called heterogeneous networks(HetNet).The optimization problem which arises while maximizing the Mean Opinion Score(MOS)is analyzed along with the QoE(Quality of Experience)metric by considering the Base Station(BS)powers in addition to the needed Quality of Service(QoS).Most of the approaches related to wireless network communication are not suitable for the millimeter-wave band because of its problems due to high complexity and its dynamic nature.Hence a deep reinforcement learning framework is developed for tackling the same opti-mization problem.In this work,a Fuzzy-based Deep Convolutional Neural Net-work(FDCNN)is proposed in addition to a Deep Reinforcing Learning Framework(DRLF)for extracting the features of highly correlated data.The investigational results prove that the proposed method yields the highest satisfac-tion to the user by increasing the number of antennas in addition with the small-scale antennas at the base stations.The proposed work outperforms in terms of MOS with multiple antennas.
基金supported by the Fundamental Research Funds for the Central Universities of China(Grant No.2013SCU11006)the Key Laboratory of Digital Mapping and Land Information Application of National Administration of Surveying,Mapping and Geoinformation of China(Grant NO.DM2014SC02)the Key Laboratory of Geospecial Information Technology,Ministry of Land and Resources of China(Grant NO.KLGSIT201504)
文摘The development of precision agriculture demands high accuracy and efficiency of cultivated land information extraction. As a new means of monitoring the ground in recent years, unmanned aerial vehicle (UAV) low-height remote sensing technique, which is flexible, efficient with low cost and with high resolution, is widely applied to investing various resources. Based on this, a novel extraction method for cultivated land information based on Deep Convolutional Neural Network and Transfer Learning (DTCLE) was proposed. First, linear features (roads and ridges etc.) were excluded based on Deep Convolutional Neural Network (DCNN). Next, feature extraction method learned from DCNN was used to cultivated land information extraction by introducing transfer learning mechanism. Last, cultivated land information extraction results were completed by the DTCLE and eCognifion for cultivated land information extraction (ECLE). The location of the Pengzhou County and Guanghan County, Sichuan Province were selected for the experimental purpose. The experimental results showed that the overall precision for the experimental image 1, 2 and 3 (of extracting cultivated land) with the DTCLE method was 91.7%, 88.1% and 88.2% respectively, and the overall precision of ECLE is 9o.7%, 90.5% and 87.0%, respectively. Accuracy of DTCLE was equivalent to that of ECLE, and also outperformed ECLE in terms of integrity and continuity.
基金This work was supported by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(NRF-2018R1A2B6007333).
文摘Compressive strength of concrete is a significant factor to assess building structure health and safety.Therefore,various methods have been developed to evaluate the compressive strength of concrete structures.However,previous methods have several challenges in costly,time-consuming,and unsafety.To address these drawbacks,this paper proposed a digital vision based concrete compressive strength evaluating model using deep convolutional neural network(DCNN).The proposed model presented an alternative approach to evaluating the concrete strength and contributed to improving efficiency and accuracy.The model was developed with 4,000 digital images and 61,996 images extracted from video recordings collected from concrete samples.The experimental results indicated a root mean square error(RMSE)value of 3.56(MPa),demonstrating a strong feasibility that the proposed model can be utilized to predict the concrete strength with digital images of their surfaces and advantages to overcome the previous limitations.This experiment contributed to provide the basis that could be extended to future research with image analysis technique and artificial neural network in the diagnosis of concrete building structures.
基金This work was supported by the National Research Foundation of Korea(NRF)grant funded by the Korea government(MSIT)(NRF-2018R1A2B6007333)This study was supported by 2018 Research Grant from Kangwon National University.
文摘In this study,we examined the efficacy of a deep convolutional neural network(DCNN)in recognizing concrete surface images and predicting the compressive strength of concrete.A digital single-lens reflex(DSLR)camera and microscope were simultaneously used to obtain concrete surface images used as the input data for the DCNN.Thereafter,training,validation,and testing of the DCNNs were performed based on the DSLR camera and microscope image data.Results of the analysis indicated that the DCNN employing DSLR image data achieved a relatively higher accuracy.The accuracy of the DSLR-derived image data was attributed to the relatively wider range of the DSLR camera,which was beneficial for extracting a larger number of features.Moreover,the DSLR camera procured more realistic images than the microscope.Thus,when the compressive strength of concrete was evaluated using the DCNN employing a DSLR camera,time and cost were reduced,whereas the usefulness increased.Furthermore,an indirect comparison of the accuracy of the DCNN with that of existing non-destructive methods for evaluating the strength of concrete proved the reliability of DCNN-derived concrete strength predictions.In addition,it was determined that the DCNN used for concrete strength evaluations in this study can be further expanded to detect and evaluate various deteriorative factors that affect the durability of structures,such as salt damage,carbonation,sulfation,corrosion,and freezing-thawing.
文摘Assessing the age of an individual via bones serves as a fool proof method in true determination of individual skills.Several attempts are reported in the past for assessment of chronological age of an individual based on variety of discriminative features found in wrist radiograph images.The permutation and combination of these features realized satisfactory accuracies for a set of limited groups.In this paper,assessment of gender for individuals of chronological age between 1-17 years is performed using left hand wrist radiograph images.A fully automated approach is proposed for removal of noise persisted due to non-uniform illumination during the process of radiograph acquisition process.Subsequent to this a computational technique for extraction of wrist region is proposed using operations on specific bit planes of image.A framework called GeNet of deep convolutional neural network is applied for classification of extracted wrist regions into male and female.The experimentations are conducted on the datasets of Radiological Society of North America(RSNA)of about 12442 images.Efficiency of preprocessing and segmentation techniques resulted into a correlation of about 99.09%.Performance of GeNet is evaluated on the extracted wrist regions resulting into an accuracy of 82.18%.
基金Supported by the Strategy Priority Research Program of Chinese Academy of Sciences(No.XDC02070600).
文摘Graph embedding aims to map the high-dimensional nodes to a low-dimensional space and learns the graph relationship from its latent representations.Most existing graph embedding methods focus on the topological structure of graph data,but ignore the semantic information of graph data,which results in the unsatisfied performance in practical applications.To overcome the problem,this paper proposes a novel deep convolutional adversarial graph autoencoder(GAE)model.To embed the semantic information between nodes in the graph data,the random walk strategy is first used to construct the positive pointwise mutual information(PPMI)matrix,then,graph convolutional net-work(GCN)is employed to encode the PPMI matrix and node content into the latent representation.Finally,the learned latent representation is used to reconstruct the topological structure of the graph data by decoder.Furthermore,the deep convolutional adversarial training algorithm is introduced to make the learned latent representation conform to the prior distribution better.The state-of-the-art experimental results on the graph data validate the effectiveness of the proposed model in the link prediction,node clustering and graph visualization tasks for three standard datasets,Cora,Citeseer and Pubmed.
文摘Combining both visible and infrared object information, multispectral data is a promising source data for automatic maritime ship recognition. In this paper, in order to take advantage of deep convolutional neural network and multispectral data, we model multispectral ship recognition task into a convolutional feature fusion problem, and propose a feature fusion architecture called Hybrid Fusion. We fine-tune the VGG-16 model pre-trained on ImageNet through three channels single spectral image and four channels multispectral images, and use existing regularization techniques to avoid over-fitting problem. Hybrid Fusion as well as the other three feature fusion architectures is investigated. Each fusion architecture consists of visible image and infrared image feature extraction branches, in which the pre-trained and fine-tuned VGG-16 models are taken as feature extractor. In each fusion architecture, image features of two branches are firstly extracted from the same layer or different layers of VGG-16 model. Subsequently, the features extracted from the two branches are flattened and concatenated to produce a multispectral feature vector, which is finally fed into a classifier to achieve ship recognition task. Furthermore, based on these fusion architectures, we also evaluate recognition performance of a feature vector normalization method and three combinations of feature extractors. Experimental results on the visible and infrared ship (VAIS) dataset show that the best Hybrid Fusion achieves 89.6% mean per-class recognition accuracy on daytime paired images and 64.9% on nighttime infrared images, and outperforms the state-of-the-art method by 1.4% and 3.9%, respectively.
文摘The novel coronavirus 2019(COVID-19)rapidly spreading around the world and turns into a pandemic situation,consequently,detecting the coronavirus(COVID-19)affected patients are now the most critical task for medical specialists.The deficiency of medical testing kits leading to huge complexity in detecting COVID-19 patients worldwide,resulting in the number of infected cases is expanding.Therefore,a significant study is necessary about detecting COVID-19 patients using an automated diagnosis method,which hinders the spreading of coronavirus.In this paper,the study suggests a Deep Convolutional Neural Network-based multi-classification framework(COV-MCNet)using eight different pre-trained architectures such as VGG16,VGG19,ResNet50V2,DenseNet201,InceptionV3,MobileNet,InceptionResNetV2,Xception which are trained and tested on the X-ray images of COVID-19,Normal,Viral Pneumonia,and Bacterial Pneumonia.The results from 4-class(Normal vs.COVID-19 vs.Viral Pneumonia vs.Bacterial Pneumonia)demonstrated that the pre-trained model DenseNet201 provides the highest classification performance(accuracy:92.54%,precision:93.05%,recall:92.81%,F1-score:92.83%,specificity:97.47%).Notably,the DenseNet201(4-class classification)pre-trained model in the proposed COV-MCNet framework showed higher accuracy compared to the rest seven models.Important to mention that the proposed COV-MCNet model showed comparatively higher classification accuracy based on the small number of pre-processed datasets that specifies the designed system can produce superior results when more data become available.The proposed multi-classification network(COV-MCNet)significantly speeds up the existing radiology based method which will be helpful for the medical community and clinical specialists to early diagnosis the COVID-19 cases during this pandemic.
基金This work was supported by the Shanxi Province Applied Basic Research Project,China(Grant No.201901D111100).Xiaoli Hao received the grant,and the URL of the sponsors’website is http://kjt.shanxi.gov.cn/.
文摘In underground mining,the belt is a critical component,as its state directly affects the safe and stable operation of the conveyor.Most of the existing non-contact detection methods based on machine vision can only detect a single type of damage and they require pre-processing operations.This tends to cause a large amount of calculation and low detection precision.To solve these problems,in the work described in this paper a belt tear detection method based on a multi-class conditional deep convolutional generative adversarial network(CDCGAN)was designed.In the traditional DCGAN,the image generated by the generator has a certain degree of randomness.Here,a small number of labeled belt images are taken as conditions and added them to the generator and discriminator,so the generator can generate images with the characteristics of belt damage under the aforementioned conditions.Moreover,because the discriminator cannot identify multiple types of damage,the multi-class softmax function is used as the output function of the discriminator to output a vector of class probabilities,and it can accurately classify cracks,scratches,and tears.To avoid the features learned incompletely,skiplayer connection is adopted in the generator and discriminator.This not only can minimize the loss of features,but also improves the convergence speed.Compared with other algorithms,experimental results show that the loss value of the generator and discriminator is the least.Moreover,its convergence speed is faster,and the mean average precision of the proposed algorithm is up to 96.2%,which is at least 6%higher than that of other algorithms.
基金The research has been supported by the Qingdao Science and Technology Demonstration and Guidance Project(Grant No.20-3-4-45-nsh)Academic Promotion Plan of Shandong First Medical University&Shandong Academy of Medical Sciences(Grant No.2019ZL001)National Science and Technology Major Project of China(Grant No.2017ZX09304010).
文摘Background:Myopic maculopathy(MM)has become a major cause of visual impairment and blindness worldwide,especially in East Asian countries.Deep learning approaches such as deep convolutional neural networks(DCNN)have been successfully applied to identify some common retinal diseases and show great potential for the intelligent analysis of MM.This study aimed to build a reliable approach for automated detection of MM from retinal fundus images using DCNN models.Methods:A dual-stream DCNN(DCNN-DS)model that perceives features from both original images and corresponding processed images by color histogram distribution optimization method was designed for classification of no MM,tessellated fundus(TF),and pathologic myopia(PM).A total of 36,515 gradable images from four hospitals were used for DCNN model development,and 14,986 gradable images from the other two hospitals for external testing.We also compared the performance of the DCNN-DS model and four ophthalmologists on 3000 randomly sampledfundus images.Results:The DCNN-DS model achieved sensitivities of 93.3%and 91.0%,specificities of 99.6%and 98.7%,areas under the receiver operating characteristic curves(AUCs)of 0.998 and 0.994 for detecting PM,whereas sensitivities of 98.8%and 92.8%,specificities of 95.6%and 94.1%,AUCs of 0.986 and 0.970 for detecting TF in two external testing datasets.In the sampled testing dataset,the sensitivities of four ophthalmologists ranged from 88.3%to 95.8%and 81.1%to 89.1%,and the specificities ranged from 95.9%to 99.2%and 77.8%to 97.3%for detecting PM and TF,respectively.Meanwhile,the DCNN-DS model achieved sensitivities of 90.8%and 97.9%and specificities of 99.1%and 94.0%for detecting PMand T,respectively.Conclusions:The proposed DCNN-DS approach demonstrated reliable performance with high sensitivity,specificity,and AUC to classify different MM levels on fundus photographs sourced from clinics.It can help identify MM automatically among the large myopic groups and show great potential for real-life applications.
基金supported by the National Natural Science Foundation of China-China State Railway Group Co.,Ltd.Railway Basic Research Joint Fund (Grant No.U2268217)the Scientific Funding for China Academy of Railway Sciences Corporation Limited (No.2021YJ183).
文摘Graph Convolutional Neural Networks(GCNs)have been widely used in various fields due to their powerful capabilities in processing graph-structured data.However,GCNs encounter significant challenges when applied to scale-free graphs with power-law distributions,resulting in substantial distortions.Moreover,most of the existing GCN models are shallow structures,which restricts their ability to capture dependencies among distant nodes and more refined high-order node features in scale-free graphs with hierarchical structures.To more broadly and precisely apply GCNs to real-world graphs exhibiting scale-free or hierarchical structures and utilize multi-level aggregation of GCNs for capturing high-level information in local representations,we propose the Hyperbolic Deep Graph Convolutional Neural Network(HDGCNN),an end-to-end deep graph representation learning framework that can map scale-free graphs from Euclidean space to hyperbolic space.In HDGCNN,we define the fundamental operations of deep graph convolutional neural networks in hyperbolic space.Additionally,we introduce a hyperbolic feature transformation method based on identity mapping and a dense connection scheme based on a novel non-local message passing framework.In addition,we present a neighborhood aggregation method that combines initial structural featureswith hyperbolic attention coefficients.Through the above methods,HDGCNN effectively leverages both the structural features and node features of graph data,enabling enhanced exploration of non-local structural features and more refined node features in scale-free or hierarchical graphs.Experimental results demonstrate that HDGCNN achieves remarkable performance improvements over state-ofthe-art GCNs in node classification and link prediction tasks,even when utilizing low-dimensional embedding representations.Furthermore,when compared to shallow hyperbolic graph convolutional neural network models,HDGCNN exhibits notable advantages and performance enhancements.
文摘Residual magnetic error remains after standard levelling process.The weak non-geological effect,manifesting itself as streaky noise along flight lines,creates a challenge for airborne geophysical data processing and interpretation.Microleveling is the process to eliminate this residual noise and is now a standard areogeophysical data processing step.In this paper,we propose a two-step procedure for single aerogeophysical data microleveling:a deep convolutional network is first adopted as approximator to map the original data into a low-level part with nature geological structures and a corrugated residual which still contains high-level detail geological structures;second,the mixture of Gaussian robust principal component analysis(MoG-RPCA)is then used to separate the weak energy fine structures from the residual.The final microleveling result is the addition of low-level structures from deep convolutional network and fine structures from MoG-RPCA.The deep convolutional network does not need dataset for training and the handcrafted network serves as prior(deep image prior)to capture the low-level nature geological structures in the areogeophysical data.Experiments on synthetic data and field data demonstrate that the combination of deep convolutional network and MoG-RPCA is an effective framework for single areogeophysical data microleveling.
文摘As a huge number of satellites revolve around the earth,a great probability exists to observe and determine the change phenomena on the earth through the analysis of satellite images on a real-time basis.Therefore,classifying satellite images plays strong assistance in remote sensing communities for predicting tropical cyclones.In this article,a classification approach is proposed using Deep Convolutional Neural Network(DCNN),comprising numerous layers,which extract the features through a downsampling process for classifying satellite cloud images.DCNN is trained marvelously on cloud images with an impressive amount of prediction accuracy.Delivery time decreases for testing images,whereas prediction accuracy increases using an appropriate deep convolutional network with a huge number of training dataset instances.The satellite images are taken from the Meteorological&Oceanographic Satellite Data Archival Centre,the organization is responsible for availing satellite cloud images of India and its subcontinent.The proposed cloud image classification shows 94% prediction accuracy with the DCNN framework.
文摘Watermarking is the advanced technology utilized to secure digital data by integrating ownership or copyright protection.Most of the traditional extracting processes in audio watermarking have some restrictions due to low reliability to various attacks.Hence,a deep learning-based audio watermarking system is proposed in this research to overcome the restriction in the traditional methods.The implication of the research relies on enhancing the performance of the watermarking system using the Discrete Wavelet Transform(DWT)and the optimized deep learning technique.The selection of optimal embedding location is the research contribution that is carried out by the deep convolutional neural network(DCNN).The hyperparameter tuning is performed by the so-called search location optimization,which minimizes the errors in the classifier.The experimental result reveals that the proposed digital audio watermarking system provides better robustness and performance in terms of Bit Error Rate(BER),Mean Square Error(MSE),and Signal-to-noise ratio.The BER,MSE,and SNR of the proposed audio watermarking model without the noise are 0.082,0.099,and 45.363 respectively,which is found to be better performance than the existing watermarking models.
基金This work was supported by Special Funds for the Construction of an Innovative Province of Hunan(GrantNo.2020GK2028)lNatural Science Foundation of Hunan Province(Grant No.2022JJ30002)lScientific Research Project of Hunan Provincial EducationDepartment(GrantNo.21B0833)lScientific Research Key Project of Hunan Education Department(Grant No.21A0592)lScientific Research Project of Hunan Provincial Education Department(Grant No.22A0663).
文摘Deep Convolutional Neural Networks(CNNs)have achieved high accuracy in image classification tasks,however,most existing models are trained on high-quality images that are not subject to image degradation.In practice,images are often affected by various types of degradation which can significantly impact the performance of CNNs.In this work,we investigate the influence of image degradation on three typical image classification CNNs and propose a Degradation Type Adaptive Image Classification Model(DTA-ICM)to improve the existing CNNs’classification accuracy on degraded images.The proposed DTA-ICM comprises two key components:a Degradation Type Predictor(DTP)and a Degradation Type Specified Image Classifier(DTS-IC)set,which is trained on existing CNNs for specified types of degradation.The DTP predicts the degradation type of a test image,and the corresponding DTS-IC is then selected to classify the image.We evaluate the performance of both the proposed DTP and the DTA-ICMon the Caltech 101 database.The experimental results demonstrate that the proposed DTP achieves an average accuracy of 99.70%.Moreover,the proposed DTA-ICM,based on AlexNet,VGG19,and ResNet152,exhibits an average accuracy improvement of 20.63%,18.22%,and 12.9%,respectively,compared with the original CNNs in classifying degraded images.It suggests that the proposed DTA-ICM can effectively improve the classification performance of existing CNNs on degraded images,which has important practical implications.
基金supported by the Doctoral Fund Project(Grant No.X22003Z).
文摘As an important component of load transfer,various fatigue damages occur in the track as the rail service life and train traffic increase gradually,such as rail corrugation,rail joint damage,uneven thermite welds,rail squats fas-tener defects,etc.Real-time recognition of track defects plays a vital role in ensuring the safe and stable operation of rail transit.In this paper,an intelligent and innovative method is proposed to detect the track defects by using axle-box vibration acceleration and deep learning network,and the coexistence of the above-mentioned typical track defects in the track system is considered.Firstly,the dynamic relationship between the track defects(using the example of the fastening defects)and the axle-box vibration acceleration(ABVA)is investigated using the dynamic vehicle-track model.Then,a simulation model for the coupled dynamics of the vehicle and track with different track defects is established,and the wavelet power spectrum(WPS)analysis is performed for the vibra-tion acceleration signals of the axle box to extract the characteristic response.Lastly,using wavelet spectrum photos as input,an automatic detection technique based on the deep convolution neural network(DCNN)is sug-gested to realize the real-time intelligent detection and identification of various track problems.Thefindings demonstrate that the suggested approach achieves a 96.72%classification accuracy.
文摘The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before the testing and to minimize the time and cost. The software with defects negatively impacts operational costs and finally affects customer satisfaction. Numerous approaches exist to predict software defects. However, the timely and accurate software bugs are the major challenging issues. To improve the timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes namely metric or feature selection and classification. First, the SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique for identifying the relevant software metrics by measuring the similarity using the dice coefficient. The feature selection process is used to minimize the time complexity of software fault prediction. With the selected metrics, software fault perdition with the help of the Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function is used to provide the final fault prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with a minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The analyzed results demonstrate the superior performance of our proposed SQADEN technique with maximum accuracy, sensitivity and specificity by 3%, 3%, 2% and 3% and minimum time and space by 13% and 15% when compared with the two state-of-the-art methods.
基金the Fundamental Research Funds for the Central Universities(2572018BC07,2572017PZ14)the Heilongjiang postdoctoral project fund project(LBH-Z18003)+2 种基金Biodiversity Survey,Monitoring and Assessment Project of Ministry of Ecology and Environment,China(2019HB2096001006)the National Natural Science Foundation of China(NSFC 31872241,31572285)the Individual Identification Technological Research on Camera-trapping images of Amur tigers(NFGA 2017).
文摘The automatic individual identification of Amur tigers(Panthera tigris altaica)is important for population monitoring and making effective conservation strategies.Most existing research primarily relies on manual identifi-cation,which does not scale well to large datasets.In this paper,the deep convolution neural networks algorithm is constructed to implement the automatic individual identification for large numbers of Amur tiger images.The experimental data were obtained from 40 Amur tigers in Tieling Guaipo Tiger Park,China.The number of images collected from each tiger was approximately 200,and a total of 8277 images were obtained.The experiments were carried out on both the left and right side of body.Our results suggested that the recognition accuracy rate of left and right sides are 90.48%and 93.5%,respectively.The accuracy of our network has achieved the similar level compared to other state of the art networks like LeNet,ResNet34,and ZF_Net.The running time is much shorter than that of other networks.Consequently,this study can provide a new approach on automatic individual identification technology in the case of the Amur tiger.