Cyberspace is extremely dynamic, with new attacks arising daily. Protecting cybersecurity controls is vital for network security. Deep Learning (DL) models find widespread use across various fields, with cybersecurity being one of the most crucial due to their rapid cyberattack detection capabilities on networks and hosts. The capabilities of DL in feature learning and analyzing extensive data volumes lead to the recognition of network traffic patterns. This study presents novel lightweight DL models, known as Cybernet models, for the detection and recognition of various cyber Distributed Denial of Service (DDoS) attacks. These models were constructed to have a reasonable number of learnable parameters, i.e., fewer than 225,000, hence the name "lightweight." This not only helps reduce the number of computations required but also results in faster training and inference times. Additionally, these models were designed to extract features in parallel from 1D Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM), which makes them unique compared to earlier architectures and results in better performance measures. To validate their robustness and effectiveness, they were tested on the CIC-DDoS2019 dataset, a large, imbalanced dataset that contains different types of DDoS attacks. Experimental results revealed that both models yielded promising results, with 99.99% for the detection model and 99.76% for the recognition model in terms of accuracy, precision, recall, and F1 score. Furthermore, they outperformed the existing state-of-the-art models proposed for the same task. Thus, the proposed models can be used in cybersecurity research domains to successfully identify different types of attacks with a high detection and recognition rate.
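The parallel feature-extraction idea above — a 1D-CNN branch and an LSTM branch run side by side and concatenated — can be sketched at toy scale. This is an illustrative NumPy sketch, not the authors' Cybernet implementation; all shapes, weights, and the global-max-pooling choice are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d_branch(x, filters):
    # x: (T, 1) signal; filters: (n_filters, k) kernels.
    # "Valid" cross-correlation per filter, then global max pooling.
    out = np.empty(filters.shape[0])
    for i, w in enumerate(filters):
        resp = np.convolve(x[:, 0], w[::-1], mode="valid")  # flip -> cross-correlation
        out[i] = resp.max()
    return out  # (n_filters,)

def lstm_branch(x, Wx, Wh, b, h_dim):
    # Minimal LSTM cell; gates stacked as [input, forget, cell, output].
    h = np.zeros(h_dim)
    c = np.zeros(h_dim)
    for t in range(x.shape[0]):
        z = Wx @ x[t] + Wh @ h + b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h  # final hidden state, (h_dim,)

T, h_dim, n_f, k = 32, 8, 4, 3
x = rng.standard_normal((T, 1))
filters = rng.standard_normal((n_f, k))
Wx = rng.standard_normal((4 * h_dim, 1))
Wh = rng.standard_normal((4 * h_dim, h_dim))
b = np.zeros(4 * h_dim)

# The two branches see the same input and their features are concatenated,
# as in the parallel design described above, before a classifier head.
features = np.concatenate([conv1d_branch(x, filters),
                           lstm_branch(x, Wx, Wh, b, h_dim)])
print(features.shape)  # (12,)
```

In the real models each branch would be a trained multi-layer stack; the point here is only the parallel-then-concatenate topology.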
Named Entity Recognition (NER) stands as a fundamental task within the field of biomedical text mining, aiming to extract specific types of entities such as genes, proteins, and diseases from complex biomedical texts and categorize them into predefined entity types. This process can provide basic support for the automatic construction of knowledge bases. In contrast to general texts, biomedical texts frequently contain numerous nested entities and local dependencies among these entities, presenting significant challenges to prevailing NER models. To address these issues, we propose a novel Chinese nested biomedical NER model based on RoBERTa and Global Pointer (RoBGP). Our model initially utilizes the RoBERTa-wwm-ext-large pretrained language model to dynamically generate word-level initial vectors. It then incorporates a Bidirectional Long Short-Term Memory network for capturing bidirectional semantic information, effectively addressing the issue of long-distance dependencies. Furthermore, the Global Pointer model is employed to comprehensively recognize all nested entities in the text. We conduct extensive experiments on the Chinese medical dataset CMeEE and the results demonstrate the superior performance of RoBGP over several baseline models. This research confirms the effectiveness of RoBGP in Chinese biomedical NER, providing reliable technical support for biomedical information extraction and knowledge base construction.
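The Global Pointer component above handles nested entities by scoring every candidate (start, end) span independently, so an outer entity and an entity nested inside it can both be kept. A minimal decoding sketch (the score matrix here is hand-set; in RoBGP it would come from the encoder):

```python
import numpy as np

def global_pointer_decode(scores, threshold=0.0):
    # scores: (L, L) matrix; scores[i, j] rates the span tokens[i..j]
    # as an entity. Only valid spans (i <= j) are considered, and every
    # span above the threshold is kept, so nested entities can coexist.
    L = scores.shape[0]
    return [(i, j) for i in range(L) for j in range(i, L)
            if scores[i, j] > threshold]

L = 6
scores = np.full((L, L), -5.0)   # background: no entity
scores[1, 4] = 2.3               # outer entity, tokens 1..4
scores[2, 3] = 1.7               # entity nested inside it
print(global_pointer_decode(scores))  # [(1, 4), (2, 3)]
```

A sequence-labeling (BIO) decoder could not emit both of those spans; span scoring is what makes nesting recoverable.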
Internet of Vehicles (IoV) is a new system that enables individual vehicles to connect with nearby vehicles, people, transportation infrastructure, and networks, thereby realizing a more intelligent and efficient transportation system. The movement of vehicles and the three-dimensional (3D) nature of the road network cause the topological structure of IoV to have high space and time complexity. Network modeling and structure recognition for 3D roads can benefit the description of topological changes for IoV. This paper proposes a 3D general road model based on discrete points of roads obtained from GIS. First, the constraints imposed by 3D roads on moving vehicles are analyzed. Then the effects of road curvature radius (Ra), longitudinal slope (Slo), and length (Len) on speed and acceleration are studied. Finally, a general 3D road network model based on road section features is established. This paper also presents intersection and road section recognition methods based on the structural features of the 3D road network model and the road features. Real GIS data from a specific region of Beijing are adopted to create the simulation scenario, and the simulation results validate the general 3D road network model and the recognition method. Therefore, this work contributes to the field of intelligent transportation by providing a comprehensive approach to modeling the 3D road network and its topological changes, helping achieve efficient traffic flow and improved road safety.
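The abstract studies how curvature radius and longitudinal slope constrain speed and acceleration. The paper's actual model is not given here, but the standard first-order physics of those two constraints can be sketched (friction coefficient and grade values below are illustrative assumptions):

```python
import math

def max_curve_speed(radius_m, mu=0.7, g=9.81):
    # Friction-limited cornering: mu * m * g >= m * v^2 / R,
    # so v_max = sqrt(mu * g * R). Larger radius -> higher safe speed.
    return math.sqrt(mu * g * radius_m)

def slope_accel(slope_pct, g=9.81):
    # Gravity component along a longitudinal grade (slope in percent).
    # Negative value: gravity decelerates a vehicle driving uphill.
    theta = math.atan(slope_pct / 100.0)
    return -g * math.sin(theta)

print(round(max_curve_speed(100.0), 1))  # ~26.2 m/s on a 100 m radius curve
print(slope_accel(5.0) < 0)              # True: a 5% uphill grade slows the vehicle
```

Any real road model would add jerk limits, superelevation, and vehicle dynamics; this only shows why Ra and Slo enter the model at all.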
Expanding photovoltaic (PV) resources in rural-grid areas is an essential means to augment the share of solar energy in the energy landscape, aligning with the "carbon peaking and carbon neutrality" objectives. However, rural power grids often lack digitalization; thus, the load distribution within these areas is not fully known. This hinders the calculation of the available PV capacity and deduction of node voltages. This study proposes a load-distribution modeling approach based on remote-sensing image recognition in pursuit of a scientific framework for developing distributed PV resources in rural grid areas. First, houses in remote-sensing images are accurately recognized using deep-learning techniques based on the YOLOv5 model. The distribution of the houses is then used to estimate the load distribution in the grid area. Next, equally spaced and clustered distribution models are used to adaptively determine the location of the nodes and load power in the distribution lines. Finally, by calculating the connectivity matrix of the nodes, a minimum spanning tree is extracted, the topology of the network is constructed, and the node parameters of the load-distribution model are calculated. The proposed scheme is implemented in a software package and its efficacy is demonstrated by analyzing typical remote-sensing images of rural grid areas. The results underscore the ability of the proposed approach to effectively discern the distribution-line structure and compute the node parameters, thereby offering vital support for determining PV access capability.
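The topology-construction step above — extract a minimum spanning tree from the node connectivity matrix — is classic Prim's algorithm. A self-contained sketch on a hypothetical four-node example (the distances are made up; the paper's nodes would come from recognized house clusters):

```python
import numpy as np

def prim_mst(dist):
    # dist: symmetric (n, n) matrix of inter-node distances.
    # Grows the tree from node 0, always adding the cheapest edge
    # that connects a tree node to a non-tree node.
    n = dist.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:
            for j in range(n):
                if j not in in_tree and (
                        best is None or dist[i, j] < dist[best[0], best[1]]):
                    best = (i, j)
        edges.append(best)
        in_tree.add(best[1])
    return edges  # n-1 edges forming the line topology

# Hypothetical pairwise distances between four load nodes.
d = np.array([[0, 1, 4, 6],
              [1, 0, 2, 5],
              [4, 2, 0, 3],
              [6, 5, 3, 0]], float)
print(prim_mst(d))  # [(0, 1), (1, 2), (2, 3)]
```

An MST is a reasonable prior for a radial rural distribution line, since such feeders are typically tree-shaped with minimal conductor length.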
Handwritten character recognition (HCR) involves identifying characters in images, documents, and various sources such as forms, surveys, questionnaires, and signatures, and transforming them into a machine-readable format for subsequent processing. Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle. The use of convolutional neural networks (CNN) in recent developments has notably advanced HCR, leveraging the ability to extract discriminative features from extensive sets of raw data. Because of the absence of pre-existing datasets in the Kurdish language, we created a Kurdish handwritten dataset called KurdSet, consisting of Kurdish characters, digits, texts, and symbols collected from 1560 participants, for a total of 45,240 characters. In this study, we used only the characters from our dataset for handwritten character recognition. The study utilizes various models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. To assess the KurdSet dataset, we compared it to the Arabic handwritten character recognition dataset (AHCD), applying the models to both datasets. The performance of the models is evaluated using test accuracy, which measures the percentage of correctly classified characters in the evaluation phase. All models performed well in the training phase; DenseNet121 exhibited the highest accuracy, achieving 99.80% on the Kurdish dataset, while the Xception model achieved 98.66% on the Arabic dataset.
In recent years, deep learning-based signal recognition technology has gained attention and emerged as an important approach for safeguarding the electromagnetic environment. However, training deep learning-based classifiers on large signal datasets with redundant samples requires significant memory and high costs. This paper proposes a support data-based core-set selection method (SD) for signal recognition, aiming to screen a representative subset that approximates the large signal dataset. Specifically, this subset can be identified by employing the labeled information during the early stages of model training, as some training samples are frequently labeled as supporting data. This support data is crucial for model training and can be found using a border sample selector. Simulation results demonstrate that the SD method minimizes the impact on model recognition performance while reducing the dataset size, and outperforms five other state-of-the-art core-set selection methods when the fraction of training samples kept is less than or equal to 0.3 on the RML2016.04C dataset or 0.5 on the RML22 dataset. The SD method is particularly helpful for signal recognition tasks with limited memory and computing resources.
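The core idea above — keep the samples that act as "supporting data" near the decision border during early training — can be sketched as a frequency count over recorded classifier margins. This is an interpretation of the described method, not the paper's exact selector; the margin threshold and keep fraction are assumed values:

```python
import numpy as np

rng = np.random.default_rng(2)

def select_core_set(margins_per_epoch, keep_fraction=0.3):
    # margins_per_epoch: (epochs, n_samples) classifier margins recorded
    # during early training. A sample counts as "support data" in an
    # epoch when its margin is small (near the border, threshold 1.0 is
    # an assumption); the most frequently supporting samples are kept.
    support_counts = (np.abs(margins_per_epoch) < 1.0).sum(axis=0)
    n_keep = max(1, int(keep_fraction * margins_per_epoch.shape[1]))
    return np.argsort(-support_counts)[:n_keep]

# Fake margins for 10 samples over 5 early epochs.
margins = rng.normal(0.0, 2.0, size=(5, 10))
subset = select_core_set(margins, keep_fraction=0.3)
print(len(subset))  # 3 retained sample indices
```

Training then proceeds on only the retained subset, which is where the memory and compute savings come from.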
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
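Of the traditional methods the review names, Dynamic Time Warping is the simplest to show concretely: it aligns two gesture sequences of different lengths before comparing them. A textbook dynamic-programming sketch on 1-D sequences (real CSLR features would be multi-dimensional joint or glove readings):

```python
import math

def dtw_distance(a, b):
    # Classic DTW: D[i][j] is the minimal cumulative cost of aligning
    # a[:i] with b[:j]; each step matches, stretches, or compresses time.
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Same gesture shape performed at different speeds aligns at zero cost.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0
```

This tolerance to timing differences is exactly why DTW suited early signer-dependent recognition, and why user independence remains harder.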
Dynamic signature is a biometric modality that recognizes an individual's anatomic and behavioural characteristics when signing their name. The rampant incidence of signature falsification (identity theft) was the key motivating factor for this study, which was necessitated by the damage and danger posed by signature forgery coupled with the intractable nature of the problem. The aim of this study is to design a proactive and responsive system that can compare two signature samples and distinguish the genuine signature from the forged one. Dynamic signature verification is an important biometric technique that aims to detect whether a given signature is genuine or forged. In this research work, Convolutional Neural Networks (CNNs, or ConvNets), a class of deep, feed-forward artificial neural networks that has been successfully applied to analysing visual imagery, was used to train the model. The signature images are stored in a file directory structure that the Keras Python library can work with, and the CNN was implemented in Python using Keras with the TensorFlow backend to learn the patterns associated with the signature. The results showed that, for the same CNN-based network, a larger training dataset yielded higher average test accuracy, although reasonable results could still be obtained when the training dataset was insufficient. The paper concludes that the trained CNN achieved 98% accuracy in classifying the biometric parameters used in the experiments.
This study aims to address the deviation in downstream tasks caused by inaccurate recognition results when applying Automatic Speech Recognition (ASR) technology in the Air Traffic Control (ATC) field. This paper presents a novel cascaded model architecture, namely Conformer-CTC/Attention-T5 (CCAT), to build a highly accurate and robust ATC speech recognition model. To tackle the challenges posed by noise and fast speech rates in ATC, the Conformer model is employed to extract robust and discriminative speech representations from raw waveforms. On the decoding side, the attention mechanism is integrated to facilitate precise alignment between input features and output characters. The Text-To-Text Transfer Transformer (T5) language model is also introduced to handle particular pronunciations and code-mixing issues, providing more accurate and concise textual output for downstream tasks. To enhance the model's robustness, transfer learning and data augmentation techniques are utilized in the training strategy. The model's performance is optimized by hyperparameter tuning, such as adjusting the number of attention heads, the number of encoder layers, and the weights of the loss function. The experimental results demonstrate the significant contributions of data augmentation, hyperparameter tuning, and error correction models to the overall model performance. On the Our ATC Corpus dataset, the proposed model achieves a Character Error Rate (CER) of 3.44%, representing a 3.64% improvement compared to the baseline model. Moreover, the effectiveness of the proposed model is validated on two publicly available datasets. On the AISHELL-1 dataset, the CCAT model achieves a CER of 3.42%, showcasing a 1.23% improvement over the baseline model. Similarly, on the LibriSpeech dataset, the CCAT model achieves a Word Error Rate (WER) of 5.27%, demonstrating a performance improvement of 7.67% compared to the baseline model. Additionally, this paper proposes an evaluation criterion for assessing the robustness of ATC speech recognition systems. In robustness evaluation experiments based on this criterion, the proposed model demonstrates a performance improvement of 22% compared to the baseline model.
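The CER metric reported above is the character-level edit distance between reference and hypothesis transcripts, normalized by reference length. A minimal sketch (the ATC phrase is an invented example, not from the corpus):

```python
def edit_distance(ref, hyp):
    # Levenshtein distance over characters: minimum number of
    # substitutions (S), deletions (D), and insertions (I).
    n, m = len(ref), len(hyp)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        D[i][0] = i
    for j in range(m + 1):
        D[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            D[i][j] = min(D[i - 1][j] + 1,        # deletion
                          D[i][j - 1] + 1,        # insertion
                          D[i - 1][j - 1] + sub)  # substitution/match
    return D[n][m]

def cer(ref, hyp):
    # Character Error Rate = (S + D + I) / N, N = reference length.
    return edit_distance(ref, hyp) / len(ref)

print(round(cer("climb to 3000", "climb too 3000"), 3))  # 0.077
```

WER is the same computation with words instead of characters as the edit units.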
Automatic speech recognition (ASR) systems have emerged as indispensable tools across a wide spectrum of applications, ranging from transcription services to voice-activated assistants. To enhance the performance of these systems, it is important to deploy efficient models capable of adapting to diverse deployment conditions. In recent years, on-demand pruning methods have gained significant attention within the ASR domain due to their adaptability to various deployment scenarios. However, these methods often confront substantial trade-offs, particularly in terms of unstable accuracy when reducing the model size. To address these challenges, this study introduces two crucial empirical findings. First, it proposes the incorporation of an online distillation mechanism during on-demand pruning training, which holds the promise of maintaining more consistent accuracy levels. Second, it proposes the Mogrifier long short-term memory (LSTM) language model (LM), an advanced iteration of the conventional LSTM LM, as an effective alternative pruning target within the ASR framework. Through rigorous experimentation on an ASR system, employing the Mogrifier LSTM LM and training it using the suggested joint on-demand pruning and online distillation method, this study provides compelling evidence: the proposed methods significantly outperform a benchmark model trained solely with on-demand pruning. Impressively, the proposed configuration reduces the parameter count by approximately 39% while minimizing trade-offs.
Congenital heart defects, accounting for about 30% of congenital defects, are the most common type. Data show that congenital heart defects have seriously affected the birth rate of healthy newborns. In fetal and neonatal cardiology, medical imaging technology (2D ultrasound, MRI) has proven helpful for detecting congenital defects of the fetal heart and assists sonographers in prenatal diagnosis. Manually recognizing the 2D fetal heart ultrasonic standard plane (FHUSP) is a highly complex task. Compared with manual identification, automatic identification through artificial intelligence can save a lot of time, ensure the efficiency of diagnosis, and improve diagnostic accuracy. In this study, a feature extraction method based on texture features (Local Binary Pattern, LBP, and Histogram of Oriented Gradients, HOG) combined with a Bag of Words (BOW) model is carried out, and feature fusion is then performed. Finally, a Support Vector Machine (SVM) is adopted to realize automatic recognition and classification of FHUSP. The data include 788 standard-plane samples and 448 normal and abnormal plane samples. Compared with other methods and single-method models, the classification accuracy of our model is clearly improved, with the highest accuracy reaching 87.35%. We also verify the performance of the model on normal and abnormal planes, where the average accuracy in classifying abnormal versus normal planes is 84.92%. The experimental results show that this method can effectively classify and predict different FHUSP and can provide assistance for sonographers in diagnosing fetal congenital heart disease.
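The LBP texture feature named above encodes each pixel by thresholding its 8 neighbours against the centre value. A minimal sketch for a single 3x3 patch (bit ordering conventions vary; the clockwise-from-top-left order here is one common choice):

```python
import numpy as np

def lbp_code(patch):
    # 8-neighbour Local Binary Pattern of a 3x3 patch: each neighbour
    # >= centre contributes one bit, clockwise from the top-left corner.
    centre = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(order):
        if patch[r, c] >= centre:
            code |= 1 << bit
    return code  # integer in 0..255

patch = np.array([[9, 1, 9],
                  [1, 5, 9],
                  [1, 9, 1]])
print(lbp_code(patch))  # 45
```

Sliding this over an image and histogramming the codes gives the texture descriptor that, fused with HOG under the BOW model, feeds the SVM classifier.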
Micro-expressions are spontaneous, unconscious movements that reveal true emotions. Accurate facial movement information and network training methods are crucial for micro-expression recognition. However, most existing micro-expression recognition technologies focus on modeling single-category micro-expression images and the neural network structure. Aiming at the problems of low recognition rates and weak model generalization in micro-expression recognition, a micro-expression recognition algorithm is proposed based on a graph convolution network (GCN) and the Transformer model. First, action unit (AU) features are detected and facial muscle nodes in the neighborhood are divided into three subsets for recognition. Then, a graph convolution layer is used to find the layout of dependencies between AU nodes for micro-expression classification. Finally, multiple attentional features of each facial action are enriched with the Transformer model to include more sequence information before calculating the overall correlation of each region. The proposed method is validated on the CASME II and CAS(ME)^2 datasets, and the recognition rate reached 69.85%.
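The graph-convolution step above propagates each AU node's features over its neighbours in the facial-muscle graph. A minimal sketch of one standard GCN layer (the chain adjacency and weights are toy values, not the paper's AU graph):

```python
import numpy as np

def gcn_layer(A, X, W):
    # One graph-convolution layer in the common Kipf-Welling form:
    # H = ReLU(D^{-1/2} (A + I) D^{-1/2} X W), so each node aggregates
    # its own and its neighbours' features before the linear map.
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    H = D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W
    return np.maximum(H, 0.0)  # ReLU

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], float)   # 3 AU nodes linked in a chain
X = np.eye(3)                      # one-hot node features
W = np.ones((3, 2))                # toy weight matrix
print(gcn_layer(A, X, W).shape)    # (3, 2)
```

Stacking such layers lets dependency information flow between AUs that are several hops apart, which is what "finding the layout of dependencies" amounts to.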
The appearance of a face is severely altered by illumination conditions, which makes automatic face recognition a challenging task. In this paper, we propose a Gaussian Mixture Models (GMM)-based human face identification technique built in the Fourier (frequency) domain that is robust to illumination changes and, unlike many existing methods, does not require "illumination normalization" (removal of illumination effects) prior to application. The importance of the Fourier-domain phase in human face identification is a well-established fact in signal processing. A maximum a posteriori (MAP) estimate based on the posterior likelihood is used to perform identification, achieving misclassification error rates as low as 2% on a database that contains images of 65 individuals under 21 different illumination conditions. Furthermore, a misclassification rate of 3.5% is observed on the Yale database with 10 people and 64 different illumination conditions. Both sets of results are significantly better than those obtained from traditional PCA and LDA classifiers. Statistical analysis pertaining to model selection is also presented.
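The MAP decision rule above picks the identity maximizing log-likelihood plus log-prior. A minimal sketch with one spherical Gaussian per class standing in for the per-class mixture (the features, means, and priors are toy values, not Fourier-phase features):

```python
import numpy as np

def map_classify(x, means, variances, priors):
    # MAP rule: argmax_k [ log p(x | k) + log pi_k ] with a spherical
    # Gaussian class-conditional density per class.
    log_post = []
    for mu, var, pi in zip(means, variances, priors):
        ll = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
        log_post.append(ll + np.log(pi))
    return int(np.argmax(log_post))

means = [np.array([0.0, 0.0]), np.array([3.0, 3.0])]
variances = [1.0, 1.0]
priors = [0.5, 0.5]
print(map_classify(np.array([2.6, 3.2]), means, variances, priors))  # 1
```

With a full GMM the per-class log-likelihood would be a log-sum over mixture components, but the argmax-over-posteriors decision is identical.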
Action recognition is an important topic in computer vision. Recently, deep learning technologies have been successfully applied to many recognition problems, including those involving video data. However, most existing deep learning-based recognition frameworks are not optimized for actions in surveillance videos. In this paper, we propose a novel method for recognizing different types of actions in outdoor surveillance videos. The proposed method first introduces motion compensation to improve the detection of human targets. Then, it uses three different types of deep models, with single and sequenced images as inputs, for the recognition of different types of actions. Finally, predictions from the different models are fused with a linear model. Experimental results show that the proposed method works well on real surveillance videos.
In many animal-related studies, a high-performance animal behavior recognition system can help researchers reduce or eliminate the limitations of human assessment and make experiments easier to reproduce. Although deep learning models currently hold state-of-the-art performance in human action recognition tasks, they are not well studied in animal behavior recognition tasks. One reason is the lack of extensive datasets required to train these deep models to good performance. In this research, we investigated two current state-of-the-art deep learning models for human action recognition, the I3D model and the R(2+1)D model, on a mouse behavior recognition task. We compared their performance with models from previous research, and the results showed that deep learning models pre-trained on human action datasets and then fine-tuned on the mouse behavior dataset can outperform those earlier models. This also shows promise for applying these deep learning models to other animal behavior recognition tasks without significant modification of the models' architecture: all that is needed is to collect proper datasets for the tasks and fine-tune the pre-trained models on the collected data.
In order to accurately calculate drilled trajectories, a method for quantitatively recognizing borehole trajectory models is provided, and a case analysis is conducted. Because measurement-while-drilling data provide measured values of the tool-face angle in addition to the inclination and azimuth angles, this paper presents a technological approach for recognizing borehole trajectory models based on the tool-face angle. A universal tool-face angle equation was established based on the directional deflection mechanism of steerable drilling tools; it can calculate the tool-face angles with the characteristic parameters of various borehole trajectory models. Then, by evaluating the error between the theoretical and measured values of the tool-face angle, the trajectory model most consistent with the actual well trajectory can be selected. Model recognition of the borehole trajectory provides a quantitative evaluation index and a selection basis for survey calculation methods, which avoids subjectively and arbitrarily selecting the survey calculation method and consequently improves the monitoring accuracy and reliability of the borehole trajectory.
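The selection step above — compare each model's theoretical tool-face angles against the measured ones and keep the best match — reduces to a minimum-error search. A sketch with RMS error as the criterion (the model names and angle values are hypothetical; the paper's universal tool-face equation would supply the predictions):

```python
import math

def select_trajectory_model(measured_toolface, model_predictions):
    # Pick the trajectory model whose theoretical tool-face angles
    # (degrees) best match the measured survey values by RMS error.
    def rms(pred):
        return math.sqrt(sum((p - m) ** 2
                             for p, m in zip(pred, measured_toolface))
                         / len(pred))
    return min(model_predictions, key=lambda name: rms(model_predictions[name]))

measured = [30.0, 32.0, 31.0, 33.0]
models = {
    # hypothetical theoretical tool-face angles per candidate model
    "cylindrical-helix": [30.5, 31.5, 31.5, 32.5],
    "natural-curve": [25.0, 27.0, 26.0, 28.0],
}
print(select_trajectory_model(measured, models))  # cylindrical-helix
```

The chosen model then dictates which survey calculation method to use, replacing a subjective choice with a quantitative one.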
The laws of functioning of discrete deterministic dynamical systems are investigated, presented in the form of automaton models defined by geometric images. Through the apparatus of geometric images of automata, developed by V. A. Tverdokhlebov, the analysis of automaton models is carried out on the basis of mathematical structures represented by geometric curves and numerical sequences. The purpose of the present research is to further develop the mathematical apparatus of geometric images of automaton models of systems, including the development of new methods for recognizing automata by their geometric images, given as either geometric curves or numerical sequences.
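A geometric image built from a numerical sequence, in the sense above, can be illustrated by running a Mealy automaton on an input word and reading off its output sequence; plotted as points (t, y_t) this sequence is the curve the recognition methods analyze. The two-state automaton below is a toy example, not one from the cited work:

```python
def geometric_image(transitions, outputs, start, word):
    # Run a Mealy automaton on an input word and return its output
    # sequence, i.e. the numerical sequence underlying its geometric image.
    state, ys = start, []
    for sym in word:
        ys.append(outputs[(state, sym)])
        state = transitions[(state, sym)]
    return ys

# Toy two-state automaton over the input alphabet {0, 1}.
T = {("A", 0): "A", ("A", 1): "B", ("B", 0): "B", ("B", 1): "A"}
O = {("A", 0): 0, ("A", 1): 1, ("B", 0): 2, ("B", 1): 3}
print(geometric_image(T, O, "A", [0, 1, 0, 1]))  # [0, 1, 2, 3]
```

Recognizing an automaton from its geometric image amounts to inferring `T` and `O` from such sequences over suitably chosen input words.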
Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, due to its inclusion of semantic features from two different modalities, i.e., audio and text. However, existing methods often fail to represent features effectively and capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, which give a more powerful representation of the original data than spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes, where a bidirectional Long Short-Term Memory (Bi-LSTM) with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of our proposed model, comprehensive experiments are conducted on three widely used benchmark datasets, including IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
Seabed sediment recognition is vital for the exploitation of marine resources. Side-scan sonar (SSS) is an excellent tool for acquiring imagery of seafloor topography. Combined with ocean surface sampling, it provides detailed and accurate images of marine substrate features. Most processing of SSS imagery works around limited sampling stations and requires manual interpretation to complete the classification of seabed sediment imagery. In complex sea areas, manual interpretation often misses small targets because of the large amount of information. To date, studies on the automatic recognition of seabed sediments are still few. This paper proposes a seabed sediment recognition method based on You Only Look Once version 5 (YOLOv5) and SSS imagery to perform real-time sediment classification and localization with high accuracy, particularly on small targets, and at faster speeds. We used methods such as changing the dataset size, epochs, and optimizer, and adding multiscale training to overcome the challenges of a small sample and low accuracy. With these methods, we improved mean average precision by 8.98% and F1 score by 11.12% compared with the original method. In addition, the detection speed was approximately 100 frames per second, faster than that of previous methods, enabling real-time seabed sediment recognition from SSS imagery.
Funding: supported by the Outstanding Youth Team Project of Central Universities (QNTD202308) and the Ant Group through the CCF-Ant Research Fund (CCF-AFSG 769498 RF20220214).
Abstract: Named Entity Recognition (NER) stands as a fundamental task within the field of biomedical text mining, aiming to extract specific types of entities such as genes, proteins, and diseases from complex biomedical texts and categorize them into predefined entity types. This process can provide basic support for the automatic construction of knowledge bases. In contrast to general texts, biomedical texts frequently contain numerous nested entities and local dependencies among these entities, presenting significant challenges to prevailing NER models. To address these issues, we propose a novel Chinese nested biomedical NER model based on RoBERTa and Global Pointer (RoBGP). Our model initially utilizes the RoBERTa-wwm-ext-large pretrained language model to dynamically generate word-level initial vectors. It then incorporates a Bidirectional Long Short-Term Memory network to capture bidirectional semantic information, effectively addressing the issue of long-distance dependencies. Furthermore, the Global Pointer model is employed to comprehensively recognize all nested entities in the text. We conduct extensive experiments on the Chinese medical dataset CMeEE, and the results demonstrate the superior performance of RoBGP over several baseline models. This research confirms the effectiveness of RoBGP in Chinese biomedical NER, providing reliable technical support for biomedical information extraction and knowledge base construction.
Funding: the National Natural Science Foundation of China (Nos. 62272063, 62072056 and 61902041); the Natural Science Foundation of Hunan Province (Nos. 2022JJ30617 and 2020JJ2029); the Open Research Fund of the Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications (No. JZNY202102); the Traffic Science and Technology Project of Hunan Province, China (No. 202042); the Hunan Provincial Key Research and Development Program (No. 2022GK2019); and the Researchers Supporting Project Number (RSPD2023R681), King Saud University, Riyadh, Saudi Arabia.
Abstract: Internet of Vehicles (IoV) is a new system that enables individual vehicles to connect with nearby vehicles, people, transportation infrastructure, and networks, thereby realizing a more intelligent and efficient transportation system. The movement of vehicles and the three-dimensional (3D) nature of the road network give the topological structure of IoV high space and time complexity. Network modeling and structure recognition for 3D roads can benefit the description of topological changes in IoV. This paper proposes a 3D general road model based on discrete road points obtained from GIS. First, the constraints imposed by 3D roads on moving vehicles are analyzed. Then the effects of road curvature radius (Ra), longitudinal slope (Slo), and length (Len) on speed and acceleration are studied. Finally, a general 3D road network model based on road section features is established. This paper also presents intersection and road section recognition methods based on the structural features of the 3D road network model and the road features. Real GIS data from a specific region of Beijing are adopted to create the simulation scenario, and the simulation results validate the general 3D road network model and the recognition method. This work therefore contributes to the field of intelligent transportation by providing a comprehensive approach to modeling the 3D road network and its topological changes, supporting efficient traffic flow and improved road safety.
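The abstract studies how curvature radius constrains speed. A standard first-order form of that constraint (assumed here; the paper's exact model is not given in the abstract) is the lateral-friction limit v_max = sqrt(mu * g * Ra), where mu is the road-tire friction coefficient:

```python
import math

# First-order speed limit on a curved road section from lateral
# friction: v_max = sqrt(mu * g * Ra). This is a textbook road-design
# approximation, assumed for illustration -- the paper's constraint
# model may differ. mu = 0.35 is a typical dry-road design value.

def max_speed(radius_m, mu=0.35, g=9.81):
    """Maximum speed (m/s) a vehicle can hold on a curve of radius Ra."""
    return math.sqrt(mu * g * radius_m)

for ra in (50, 150, 500):
    print(f"Ra={ra:4d} m -> v_max={max_speed(ra):5.1f} m/s")
```

The square-root relationship is why tight curves (small Ra) dominate the speed constraints in a 3D road section model, while long gentle sections are constrained mainly by slope.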
Funding: supported by the State Grid Science & Technology Project of China (5400-202224153A-1-1-ZN).
Abstract: Expanding photovoltaic (PV) resources in rural-grid areas is an essential means to augment the share of solar energy in the energy landscape, aligning with the "carbon peaking and carbon neutrality" objectives. However, rural power grids often lack digitalization; thus, the load distribution within these areas is not fully known. This hinders the calculation of the available PV capacity and the deduction of node voltages. This study proposes a load-distribution modeling approach based on remote-sensing image recognition in pursuit of a scientific framework for developing distributed PV resources in rural grid areas. First, houses in remote-sensing images are accurately recognized using deep-learning techniques based on the YOLOv5 model. The distribution of the houses is then used to estimate the load distribution in the grid area. Next, equally spaced and clustered distribution models are used to adaptively determine the location of the nodes and the load power in the distribution lines. Finally, by calculating the connectivity matrix of the nodes, a minimum spanning tree is extracted, the topology of the network is constructed, and the node parameters of the load-distribution model are calculated. The proposed scheme is implemented in a software package, and its efficacy is demonstrated by analyzing typical remote-sensing images of rural grid areas. The results underscore the ability of the proposed approach to effectively discern the distribution-line structure and compute the node parameters, thereby offering vital support for determining PV access capability.
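The minimum-spanning-tree step above can be sketched with Prim's algorithm over the node connectivity (distance) matrix; the four load nodes and their pairwise line lengths below are invented, and the paper's own MST procedure may differ in detail:

```python
# Prim's algorithm over a symmetric distance matrix, as a minimal
# stand-in for the MST extraction that recovers the distribution-line
# topology. Node distances are invented for illustration.

INF = float("inf")

def prim_mst(dist):
    """Return MST edges (parent, child) for a symmetric distance matrix."""
    n = len(dist)
    in_tree = [False] * n
    best = [INF] * n      # cheapest known edge cost to reach each node
    parent = [-1] * n
    best[0] = 0
    edges = []
    for _ in range(n):
        # pick the cheapest not-yet-included node
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        if parent[u] != -1:
            edges.append((parent[u], u))
        # relax edges out of u
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    return edges

# 4 hypothetical load nodes; dist[i][j] = line length between them
dist = [
    [0, 2, 9, 4],
    [2, 0, 3, 7],
    [9, 3, 0, 1],
    [4, 7, 1, 0],
]
print(prim_mst(dist))  # a tree over n nodes has exactly n-1 = 3 edges
```

A spanning tree is the natural topology for a radial rural distribution line: it connects every load node with no loops and minimal total line length.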
Abstract: Handwritten character recognition (HCR) involves identifying characters in images, documents, and various sources such as forms, surveys, questionnaires, and signatures, and transforming them into a machine-readable format for subsequent processing. Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle. The use of convolutional neural networks (CNN) in recent developments has notably advanced HCR, leveraging the ability to extract discriminative features from extensive sets of raw data. Because of the absence of pre-existing datasets in the Kurdish language, we created a Kurdish handwritten dataset called KurdSet. The dataset consists of Kurdish characters, digits, texts, and symbols, was collected from 1,560 participants, and contains 45,240 characters; in this study, we used only the characters from our dataset. The study utilizes various models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. To show the performance of the KurdSet dataset, we compared it to the Arabic handwritten character recognition dataset (AHCD), applying the models to both datasets. The performance of the models is evaluated using test accuracy, which measures the percentage of correctly classified characters in the evaluation phase. All models performed well in the training phase; DenseNet121 exhibited the highest accuracy among the models, reaching 99.80% on the Kurdish dataset, while the Xception model achieved 98.66% on the Arabic dataset.
Funding: supported by the National Natural Science Foundation of China (62371098); the Natural Science Foundation of Sichuan Province (2023NSFSC1422); the National Key Research and Development Program of China (2021YFB2900404); and the Central Universities Fund of Southwest Minzu University (ZYN2022032).
Abstract: In recent years, deep learning-based signal recognition technology has gained attention and emerged as an important approach for safeguarding the electromagnetic environment. However, training deep learning-based classifiers on large signal datasets with redundant samples requires significant memory and high costs. This paper proposes a support-data-based core-set selection method (SD) for signal recognition, aiming to screen a representative subset that approximates the large signal dataset. Specifically, this subset can be identified by employing the labeled information during the early stages of model training, as some training samples are frequently labeled as supporting data. This support data is crucial for model training and can be found using a border sample selector. Simulation results demonstrate that the SD method minimizes the impact on model recognition performance while reducing the dataset size, and outperforms five other state-of-the-art core-set selection methods when the fraction of training samples kept is less than or equal to 0.3 on the RML2016.04C dataset or 0.5 on the RML22 dataset. The SD method is particularly helpful for signal recognition tasks with limited memory and computing resources.
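The idea of identifying support data from early-training behavior can be illustrated with a toy selection rule: samples whose predictions flip between correct and wrong across early epochs lie near the decision border and are kept, while consistently easy samples are dropped as redundant. The prediction histories below are fabricated, and the paper's actual border sample selector is more involved:

```python
# Toy sketch of support-data-style core-set selection: rank samples by
# how unstable their early-epoch predictions are, keep the unstable
# (border) samples, drop the always-easy ones. Histories are invented.

history = {
    # sample_id: correct(1)/wrong(0) over the first 5 epochs
    "s0": [1, 1, 1, 1, 1],   # always easy -> redundant
    "s1": [0, 1, 0, 1, 1],   # unstable   -> likely support data
    "s2": [0, 0, 1, 0, 1],   # unstable   -> likely support data
    "s3": [1, 1, 1, 1, 1],   # always easy -> redundant
}

def select_core_set(hist, keep_fraction=0.5):
    # instability = number of correct/wrong flips between epochs
    flips = {
        k: sum(a != b for a, b in zip(v, v[1:])) for k, v in hist.items()
    }
    k = max(1, int(len(hist) * keep_fraction))
    return sorted(sorted(flips, key=flips.get, reverse=True)[:k])

print(select_core_set(history))  # the two border samples are kept
```

This captures the core intuition: the labels the model struggles with early on carry most of the training signal, so keeping only those samples shrinks the dataset with little loss in recognition performance.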
Funding: supported by the National Social Science Foundation Annual Project "Research on Evaluation and Improvement Paths of Integrated Development of Disabled Persons" (Grant No. 20BRK029); the National Language Commission's "14th Five-Year Plan" Scientific Research Plan 2023 Project "Domain Digital Language Service Resource Construction and Key Technology Research" (YB145-72); and the National Philosophy and Social Sciences Foundation (Grant No. 20BTQ065).
Abstract: Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) over the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Abstract: Dynamic signature is a biometric modality that recognizes an individual's anatomic and behavioural characteristics when signing their name. The rampant incidence of signature falsification (identity theft) was the key motivating factor for this study, which was necessitated by the damages and dangers posed by signature forgery coupled with the intractable nature of the problem. The aim of this study was to design a proactive and responsive system that could compare two signature samples and distinguish the genuine signature from the forged one. Dynamic signature verification is an important biometric technique that aims to detect whether a given signature is genuine or forged. In this research work, Convolutional Neural Networks (CNNs, or ConvNets), a class of deep, feed-forward artificial neural networks that has successfully been applied to analysing visual imagery, were used to train the model. The signature images are stored in a file directory structure that the Keras Python library can work with. The CNN was then implemented in Python using Keras with the TensorFlow backend to learn the patterns associated with the signatures. The results showed that, for the same CNN-based network, the larger the training dataset, the higher the test accuracy; however, reasonable results can still be obtained when training data are limited. The paper concluded that by training on the collected datasets with a CNN, 98% accuracy was recorded; in the experimental part, the model achieved a high degree of accuracy in classifying the biometric parameters used.
Funding: This study was co-supported by the National Key R&D Program of China (No. 2021YFF0603904), the National Natural Science Foundation of China (U1733203), and the Safety Capacity Building Project of the Civil Aviation Administration of China (TM2019-16-1/3).
Abstract: This study aims to address the deviation in downstream tasks caused by inaccurate recognition results when applying Automatic Speech Recognition (ASR) technology in the Air Traffic Control (ATC) field. This paper presents a novel cascaded model architecture, namely Conformer-CTC/Attention-T5 (CCAT), to build a highly accurate and robust ATC speech recognition model. To tackle the challenges posed by noise and fast speech rates in ATC, the Conformer model is employed to extract robust and discriminative speech representations from raw waveforms. On the decoding side, the attention mechanism is integrated to facilitate precise alignment between input features and output characters. The Text-To-Text Transfer Transformer (T5) language model is also introduced to handle particular pronunciations and code-mixing issues, providing more accurate and concise textual output for downstream tasks. To enhance the model's robustness, transfer learning and data augmentation techniques are utilized in the training strategy. The model's performance is optimized by hyperparameter tuning, such as adjusting the number of attention heads, the number of encoder layers, and the weights of the loss function. The experimental results demonstrate the significant contributions of data augmentation, hyperparameter tuning, and error correction models to the overall model performance. On our ATC corpus, the proposed model achieves a Character Error Rate (CER) of 3.44%, a 3.64% improvement over the baseline model. Moreover, the effectiveness of the proposed model is validated on two publicly available datasets: on the AISHELL-1 dataset, the CCAT model achieves a CER of 3.42%, a 1.23% improvement over the baseline model, and on the LibriSpeech dataset it achieves a Word Error Rate (WER) of 5.27%, a 7.67% improvement over the baseline model. Additionally, this paper proposes an evaluation criterion for assessing the robustness of ATC speech recognition systems; in robustness evaluation experiments based on this criterion, the proposed model demonstrates a performance improvement of 22% over the baseline model.
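The CER metric reported above is the Levenshtein edit distance between the hypothesis and reference transcripts, divided by the reference length. A minimal implementation (the example strings are illustrative, not from the paper's corpus):

```python
# Character Error Rate: edit distance between hypothesis and reference,
# normalized by reference length.

def edit_distance(ref, hyp):
    """Classic dynamic-programming Levenshtein distance."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

def cer(ref, hyp):
    return edit_distance(ref, hyp) / len(ref)

# one substituted character in an invented ATC-style utterance
print(cer("climb to flight level 90", "climb to fright level 90"))
```

The same formula over words rather than characters gives the WER figure quoted for LibriSpeech.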
Funding: supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-00377, Development of Intelligent Analysis and Classification Based Contents Class Categorization Technique to Prevent Imprudent Harmful Media Distribution).
Abstract: Automatic speech recognition (ASR) systems have emerged as indispensable tools across a wide spectrum of applications, ranging from transcription services to voice-activated assistants. To enhance the performance of these systems, it is important to deploy efficient models capable of adapting to diverse deployment conditions. In recent years, on-demand pruning methods have gained significant attention within the ASR domain due to their adaptability to various deployment scenarios. However, these methods often confront substantial trade-offs, particularly unstable accuracy when reducing the model size. To address these challenges, this study introduces two crucial empirical findings. First, it proposes the incorporation of an online distillation mechanism during on-demand pruning training, which holds the promise of maintaining more consistent accuracy levels. Second, it proposes the Mogrifier long short-term memory (LSTM) language model (LM), an advanced iteration of the conventional LSTM LM, as an effective alternative pruning target within the ASR framework. Through rigorous experimentation on an ASR system employing the Mogrifier LSTM LM and trained with the suggested joint on-demand pruning and online distillation method, this study provides compelling evidence: the proposed methods significantly outperform a benchmark model trained solely with on-demand pruning. Impressively, the proposed configuration reduces the parameter count by approximately 39% while minimizing trade-offs.
Funding: supported by the Fujian Provincial Science and Technology Major Project (No. 2020HZ02014); grants from the National Natural Science Foundation of Fujian (2021J01133, 2021J011404); and the Quanzhou Scientific and Technological Planning Projects (Nos. 2018C113R, 2019C028R, 2019C029R, 2019C076R and 2019C099R).
Abstract: Congenital heart defect, accounting for about 30% of congenital defects, is the most common birth defect. Data show that congenital heart defects have seriously affected the birth rate of healthy newborns. In fetal and neonatal cardiology, medical imaging technology (2D ultrasound, MRI) has been proven helpful for detecting congenital defects of the fetal heart and assists sonographers in prenatal diagnosis. Recognizing the 2D fetal heart ultrasound standard plane (FHUSP) manually is a highly complex task. Compared with manual identification, automatic identification through artificial intelligence can save a great deal of time, ensure the efficiency of diagnosis, and improve diagnostic accuracy. In this study, a feature extraction method based on texture features (Local Binary Pattern, LBP, and Histogram of Oriented Gradient, HOG) combined with a Bag of Words (BOW) model is carried out, and feature fusion is then performed. Finally, a Support Vector Machine (SVM) is adopted to realize automatic recognition and classification of FHUSP. The data include 788 standard-plane images and 448 normal and abnormal plane images. Compared with other methods and single-method models, the classification accuracy of our model is clearly improved, with the highest accuracy reaching 87.35%. We also verify the performance of the model on normal and abnormal planes: the average accuracy in classifying abnormal and normal planes is 84.92%. The experimental results show that this method can effectively classify and predict different FHUSP and can provide assistance for sonographers in diagnosing fetal congenital heart disease.
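The LBP half of the texture-feature pair above reduces to a simple rule: compare each of the 8 neighbors of a pixel to the center and pack the comparison bits into a code. A minimal 3x3 version (the pixel patch is invented; real input would be an ultrasound image, and the paper may use a different neighbor ordering):

```python
# Minimal 3x3 Local Binary Pattern computation, one of the two texture
# descriptors (LBP + HOG) fused before the SVM. Patch values are
# invented; the bit ordering is one common convention, assumed here.

def lbp_code(patch):
    """LBP code of the center pixel of a 3x3 patch."""
    center = patch[1][1]
    # neighbor coordinates, clockwise starting at the top-left corner
    coords = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (r, c) in enumerate(coords):
        if patch[r][c] >= center:  # neighbor >= center -> bit set
            code |= 1 << bit
    return code

patch = [
    [6, 2, 7],
    [4, 5, 9],
    [8, 3, 1],
]
print(lbp_code(patch))
```

Sliding this over the image and histogramming the codes gives the LBP feature vector that is fused with the HOG histogram in the BOW pipeline.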
Funding: supported by the Shaanxi Province Key Research and Development Project (2021GY-280) and the National Natural Science Foundation of China (Nos. 61834005, 61772417, 61802304).
Abstract: Micro-expressions are spontaneous, unconscious movements that reveal true emotions. Accurate facial movement information and network training methods are crucial for micro-expression recognition. However, most existing micro-expression recognition technologies focus on modeling a single category of micro-expression images and the neural network structure. Aiming at the problems of a low recognition rate and weak model generalization in micro-expression recognition, a micro-expression recognition algorithm is proposed based on a graph convolution network (GCN) and the Transformer model. First, action unit (AU) features are detected and the facial muscle nodes in the neighborhood are divided into three subsets for recognition. Then, a graph convolution layer is used to learn the dependencies between AU nodes for micro-expression classification. Finally, multiple attentional features of each facial action are enriched with the Transformer model to include more sequence information before calculating the overall correlation of each region. The proposed method is validated on the CASME II and CAS(ME)^2 datasets, with a recognition rate of 69.85%.
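The graph convolution layer mentioned above propagates each AU node's features to its neighbors, in its simplest form H' = D^-1 (A + I) H W. A pared-down numeric sketch over a 3-node toy AU graph (all numbers invented, no deep-learning framework assumed):

```python
# One graph-convolution propagation step, H' = D^-1 * (A + I) * H * W,
# over a toy 3-node AU graph. Adjacency, features, and weights are all
# invented to show the mechanics only.

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

# adjacency with self-loops (A + I) for 3 AU nodes: 0-1 and 1-2 connected
a_hat = [
    [1, 1, 0],
    [1, 1, 1],
    [0, 1, 1],
]
# row-normalize by degree: D^-1 * (A + I)
a_norm = [[v / sum(row) for v in row] for row in a_hat]

h = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # 2-d node features
w = [[0.5, 0.0], [0.0, 0.5]]               # "learnable" weights, fixed here

h_next = matmul(matmul(a_norm, h), w)
print(h_next)  # each node's new feature mixes in its neighbors' features
```

After one step, node 1's output already blends information from both its neighbors, which is how the layer learns the dependency layout between AU nodes.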
Abstract: The appearance of a face is severely altered by illumination conditions, which makes automatic face recognition a challenging task. In this paper we propose a Gaussian Mixture Model (GMM)-based human face identification technique built in the Fourier (frequency) domain that is robust to illumination changes and, unlike many existing methods, does not require "illumination normalization" (removal of illumination effects) prior to application. The importance of the Fourier-domain phase in human face identification is a well-established fact in signal processing. A maximum a posteriori (MAP) estimate based on the posterior likelihood is used to perform identification, achieving misclassification error rates as low as 2% on a database that contains images of 65 individuals under 21 different illumination conditions. Furthermore, a misclassification rate of 3.5% is observed on the Yale database with 10 people and 64 different illumination conditions. Both sets of results are significantly better than those obtained from traditional PCA and LDA classifiers. A statistical analysis pertaining to model selection is also presented.
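The MAP decision rule used above is simply: pick the identity maximizing log-likelihood plus log-prior. A skeleton of that step (the per-person log-likelihoods below are made up; in the paper they come from GMMs fitted to Fourier-domain features):

```python
import math

# MAP classification skeleton: choose the class with the largest
# log p(x | class) + log p(class). Likelihood values are fabricated;
# the paper computes them from per-person GMMs in the Fourier domain.

def map_classify(log_likelihoods, log_priors):
    """Return the class with the highest log-posterior."""
    return max(log_likelihoods,
               key=lambda c: log_likelihoods[c] + log_priors[c])

log_lik = {"person_a": -12.1, "person_b": -10.4, "person_c": -15.8}
log_pri = {c: math.log(1 / 3) for c in log_lik}  # uniform prior, 3 people

print(map_classify(log_lik, log_pri))
```

With a uniform prior the MAP rule reduces to maximum likelihood; a non-uniform prior (e.g., some identities appearing more often) would shift the decision accordingly.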
Abstract: Action recognition is an important topic in computer vision. Recently, deep learning technologies have been successfully used in many applications, including video data, for solving recognition problems. However, most existing deep learning-based recognition frameworks are not optimized for actions in surveillance videos. In this paper, we propose a novel method for recognizing different types of actions in outdoor surveillance videos. The proposed method first introduces motion compensation to improve the detection of human targets. It then uses three different types of deep models, with single and sequenced images as inputs, to recognize different types of actions. Finally, predictions from the different models are fused with a linear model. Experimental results show that the proposed method works well on real surveillance videos.
Abstract: In many animal-related studies, a high-performance animal behavior recognition system can help researchers reduce or remove the limitations of human assessment and make experiments easier to reproduce. Although deep learning models currently hold state-of-the-art performance in human action recognition tasks, they have not been well studied for animal behavior recognition, partly because of the lack of the extensive datasets required to train them well. In this research, we investigated two state-of-the-art deep learning models for human action recognition, the I3D model and the R(2+1)D model, on a mouse behavior recognition task. We compared their performance with models from previous research, and the results showed that deep models pre-trained on human action datasets and then fine-tuned on the mouse behavior dataset can outperform the earlier models. This also shows promise for applying these deep learning models to other animal behavior recognition tasks without significant modification of the models' architecture: all that is needed is to collect proper datasets for the task and fine-tune the pre-trained models on the collected data.
Funding: supported by the China National Science and Technology Major Project (2017ZX05005-005).
Abstract: To accurately calculate drilled trajectories, a method for quantitatively recognizing borehole trajectory models is provided, and a case analysis is conducted. Because measurement-while-drilling data provide measured values of the tool-face angle in addition to the inclination and azimuth angles, this paper presents a technological approach for recognizing borehole trajectory models based on the tool-face angle. A universal tool-face angle equation is established based on the directional deflection mechanism of steerable drilling tools; it can calculate the tool-face angles from the characteristic parameters of various borehole trajectory models. Then, by evaluating the error between the theoretical and measured values of the tool-face angle, the trajectory model most consistent with the actual well trajectory can be selected. Model recognition of the borehole trajectory provides a quantitative evaluation index and a selection basis for survey calculation methods, which avoids subjectively and arbitrarily selecting the survey calculation method and consequently improves the monitoring accuracy and reliability of the borehole trajectory.
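The selection step above reduces to: for each candidate trajectory model, compare its theoretical tool-face angles against the measured ones and keep the model with the smallest error. A skeleton with an RMS error criterion (the angle values and the two candidate "models" are invented; the paper's universal tool-face equation supplies the theoretical angles):

```python
import math

# Model-recognition rule sketch: the candidate trajectory model whose
# theoretical tool-face angles best match the measured ones wins.
# Angles (degrees) and model names below are invented for illustration.

def rms_error(theoretical, measured):
    return math.sqrt(
        sum((t - m) ** 2 for t, m in zip(theoretical, measured))
        / len(measured)
    )

measured = [30.0, 32.0, 35.0, 39.0]          # measured tool-face angles
candidates = {
    "model_a": [30.5, 31.8, 35.2, 38.7],     # close fit
    "model_b": [28.0, 34.0, 33.0, 42.0],     # poor fit
}

best = min(candidates, key=lambda name: rms_error(candidates[name], measured))
print(best)
```

Using the error as a quantitative index is exactly what removes the subjectivity the abstract criticizes: the survey calculation method is chosen by the data, not by the analyst's habit.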
Abstract: The laws of functioning of discrete deterministic dynamical systems are investigated, presented in the form of automaton models defined by geometric images. Through the apparatus of geometric images of automata, developed by V. A. Tverdokhlebov, the analysis of automaton models is carried out on the basis of mathematical structures represented by geometric curves and numerical sequences. The purpose of the present research is to further develop the mathematical apparatus of geometric images of automaton models of systems, including new methods for recognizing automata by their geometric images, given both as geometric curves and as numerical sequences.
Funding: the National Natural Science Foundation of China (No. 61872231), the National Key Research and Development Program of China (No. 2021YFC2801000), and the Major Research Plan of the National Social Science Foundation of China (No. 2000&ZD130).
Abstract: Speech emotion recognition, as an important component of human-computer interaction technology, has received increasing attention. Recent studies have treated emotion recognition of speech signals as a multimodal task, since it involves the semantic features of two different modalities, i.e., audio and text. However, existing methods often fail to effectively represent features and capture correlations. This paper presents a multi-level circulant cross-modal Transformer (MLCCT) for multimodal speech emotion recognition. The proposed model can be divided into three steps: feature extraction, interaction, and fusion. Self-supervised embedding models are introduced for feature extraction, giving a more powerful representation of the original data than spectrograms or audio features such as Mel-frequency cepstral coefficients (MFCCs) and low-level descriptors (LLDs). In particular, MLCCT contains two types of feature interaction processes: a bidirectional Long Short-Term Memory (Bi-LSTM) network with a circulant interaction mechanism is proposed for low-level features, while a two-stream residual cross-modal Transformer block is applied when high-level features are involved. Finally, we choose self-attention blocks for fusion and a fully connected layer to make predictions. To evaluate the performance of our proposed model, comprehensive experiments are conducted on three widely used benchmark datasets: IEMOCAP, MELD, and CMU-MOSEI. The competitive results verify the effectiveness of our approach.
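The "circulant" in the circulant interaction mechanism refers to circulant matrices, where each row is the previous row rotated by one position, so a single feature vector generates a full interaction matrix. A minimal construction (the vector is arbitrary; how MLCCT applies the matrix to cross-modal features is not detailed in the abstract):

```python
# Circulant matrix from a feature vector: row i is the vector rotated
# right by i positions. This is the algebraic building block behind
# "circulant interaction"; the input vector is arbitrary.

def circulant(v):
    n = len(v)
    return [[v[(j - i) % n] for j in range(n)] for i in range(n)]

c = circulant([1, 2, 3, 4])
for row in c:
    print(row)
```

The appeal of circulant structure is parameter efficiency: an n x n interaction matrix is fully determined by a single length-n vector.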
Funding: funded by the Natural Science Foundation of Fujian Province (No. 2018J01063), the Project of Deep Learning Based Underwater Cultural Relics Recognition (No. 38360041), and the Project of the State Administration of Cultural Relics (No. 2018300).
Abstract: Seabed sediment recognition is vital for the exploitation of marine resources. Side-scan sonar (SSS) is an excellent tool for acquiring imagery of seafloor topography. Combined with ocean surface sampling, it provides detailed and accurate images of marine substrate features. Most processing of SSS imagery works around limited sampling stations and requires manual interpretation to complete the classification of seabed sediment imagery. In complex sea areas, manual interpretation often loses small targets because of the large amount of information. To date, studies on the automatic recognition of seabed sediments are still few. This paper proposes a seabed sediment recognition method based on You Only Look Once version 5 (YOLOv5) and SSS imagery to perform real-time sediment classification and localization, with high accuracy, particularly on small targets, and at faster speeds. We used methods such as changing the dataset size, epochs, and optimizer, and adding multiscale training, to overcome the challenges of a small sample and low accuracy. With these methods, we improved the mean average precision by 8.98% and the F1 score by 11.12% compared with the original method. In addition, the detection speed was approximately 100 frames per second, faster than that of previous methods. This speed enabled us to achieve real-time seabed sediment recognition from SSS imagery.
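The mean-average-precision figure reported above rests on intersection-over-union (IoU) matching between predicted and ground-truth boxes. The IoU building block, with invented box coordinates:

```python
# Intersection-over-Union for two axis-aligned boxes (x1, y1, x2, y2),
# the matching criterion underlying the mAP detection metric.
# Box coordinates are invented for illustration.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # 0 if no overlap
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

pred = (10, 10, 50, 50)    # predicted sediment patch
truth = (20, 20, 60, 60)   # ground-truth patch
print(iou(pred, truth))
```

A prediction typically counts as a true positive only when its IoU with a ground-truth box exceeds a threshold (commonly 0.5); precision and recall at varying confidence cutoffs are then averaged into mAP.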