With the emergence and development of social networks, people can stay in touch with friends, family, and colleagues more quickly and conveniently, regardless of their location. This ubiquitous digital internet environment has also led to large-scale disclosure of personal privacy. Due to the complexity and subtlety of sensitive information, traditional sensitive information identification technologies cannot thoroughly address the characteristics of each piece of data and thus miss the deep connections between text and images. In this context, this paper adopts the CLIP model as a modality discriminator. Through contrastive learning between sensitive image descriptions and images, the similarity between an image and the sensitive descriptions is computed to determine whether the image contains sensitive information. This provides the basis for identifying sensitive information across different modalities. Specifically, if the images contain no sensitive information, only single-modality text-based sensitive information identification is performed; if they do, multimodal sensitive information identification is conducted. This approach allows for differentiated processing of each piece of data, thereby achieving more accurate sensitive information identification. The modality discriminator thus addresses the limitations of existing sensitive information identification technologies, making the identification of sensitive information from the original data more appropriate and precise.
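As a concrete illustration of the modality-discrimination step, the following minimal Python sketch scores an image against sensitive text descriptions using the Hugging Face transformers CLIP API. The checkpoint name, the example descriptions, and the cosine-similarity threshold are illustrative assumptions, not the paper's settings.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def image_is_sensitive(image: Image.Image, descriptions: list[str],
                       threshold: float = 0.25) -> bool:
    """True if the image matches any sensitive description above threshold."""
    txt = processor(text=descriptions, return_tensors="pt", padding=True)
    img = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        t = model.get_text_features(**txt)
        v = model.get_image_features(**img)
    # Normalize so the dot product is cosine similarity per description.
    t = t / t.norm(dim=-1, keepdim=True)
    v = v / v.norm(dim=-1, keepdim=True)
    sims = (v @ t.T).squeeze(0)
    return bool(sims.max() >= threshold)

# Hypothetical sensitive descriptions; the discriminator routes each post to
# text-only or multimodal identification based on the result.
descriptions = ["a photo showing an ID card", "a photo of a home address"]
# sensitive = image_is_sensitive(Image.open("post.jpg"), descriptions)
```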
Fine-grained recognition of ships in remote sensing images is crucial to safeguarding maritime rights and interests and maintaining national security. With the emergence of massive high-resolution multi-modality images, using multi-modality images for fine-grained recognition has become a promising technology. Fine-grained recognition of multi-modality images imposes higher requirements on dataset samples, and the key problem is how to extract and fuse the complementary features of multi-modality images to obtain more discriminative fused features. Attention mechanisms help a model pinpoint the key information in an image, yielding significant improvements in performance. In this paper, a dataset for fine-grained ship recognition based on visible and near-infrared multi-modality remote sensing images is first proposed, named the Dataset for Multimodal Fine-grained Recognition of Ships (DMFGRS). It includes 1,635 pairs of visible and near-infrared remote sensing images divided into 20 categories, collated from digital orthophoto models provided by commercial remote sensing satellites. DMFGRS provides two annotation formats, as well as segmentation mask images for the ship targets. Then, a Multimodal Information Cross-Enhancement Network (MICE-Net) that fuses features of visible and near-infrared remote sensing images is proposed. In the network, a dual-branch feature extraction and fusion module is designed to obtain more expressive features. The Feature Cross Enhancement Module (FCEM) fuses and enhances the two modal features by making channel attention and spatial attention act cross-modally on the feature maps. A benchmark is established by evaluating state-of-the-art object recognition algorithms on DMFGRS. In experiments on DMFGRS, MICE-Net reached a precision of 87%, recall of 77.1%, mAP0.5 of 83.8%, and mAP0.5:0.95 of 63.9%. Extensive experiments demonstrate that the proposed MICE-Net performs strongly on DMFGRS. Built on the lightweight YOLO network, the model generalizes well and thus has good potential for application in real-life scenarios.
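The following PyTorch sketch shows the core idea of cross-modal attention fusion in the spirit of the FCEM: channel-attention weights computed from one modality reweight the other, and vice versa. Only the channel-attention half is shown; the layer sizes and exact wiring are assumptions, and the paper's module also applies spatial attention.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        # Per-channel weights in (0, 1), shaped for broadcasting.
        return self.fc(x).unsqueeze(-1).unsqueeze(-1)

class CrossEnhance(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ca_vis = ChannelAttention(channels)
        self.ca_nir = ChannelAttention(channels)

    def forward(self, vis, nir):
        # Weights from each modality modulate the *other* modality.
        vis_out = vis * self.ca_nir(nir)
        nir_out = nir * self.ca_vis(vis)
        return torch.cat([vis_out, nir_out], dim=1)

fused = CrossEnhance(256)(torch.randn(2, 256, 32, 32), torch.randn(2, 256, 32, 32))
print(fused.shape)  # torch.Size([2, 512, 32, 32])
```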
Mining more discriminative temporal features to enrich temporal context representation is considered key to fine-grained action recognition. Previous action recognition methods use a fixed spatiotemporal window to learn local video representations. However, these methods fail to capture complex motion patterns because of their limited receptive field. To solve this problem, this paper proposes a lightweight Temporal Pyramid Excitation (TPE) module to capture short-, medium-, and long-term temporal context. The Temporal Pyramid (TP) module effectively expands the temporal receptive field of the network through multi-temporal-kernel decomposition without significantly increasing the computational cost. In addition, the Multi Excitation module emphasizes temporal importance to enhance temporal feature representation learning. TPE can be integrated into ResNet50 to build a compact video learning framework, TPENet. Extensive validation experiments on several challenging benchmark datasets (Something-Something V1, Something-Something V2, UCF-101, and HMDB51) demonstrate that our method achieves a preferable balance between computation and accuracy.
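A minimal sketch of a temporal pyramid in the spirit of the TP module follows: parallel temporal convolutions with increasing kernel sizes capture short-, medium-, and long-term context. The kernel sizes, the depthwise design, and the residual fusion rule are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TemporalPyramid(nn.Module):
    def __init__(self, channels: int, kernels=(3, 5, 7)):
        super().__init__()
        # Depthwise 1D convolutions keep the added compute small,
        # consistent with the module's lightweight goal.
        self.branches = nn.ModuleList(
            nn.Conv1d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernels
        )

    def forward(self, x):  # x: (batch, channels, time)
        # Average the multi-kernel branches and add a residual connection.
        return x + sum(b(x) for b in self.branches) / len(self.branches)

feats = torch.randn(4, 256, 16)           # features for a 16-frame clip
print(TemporalPyramid(256)(feats).shape)  # torch.Size([4, 256, 16])
```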
Human recognition technology based on biometrics has become a fundamental requirement in all aspects of life due to increased concerns about security and privacy. Biometric systems have therefore emerged as a technology capable of identifying or authenticating individuals based on their physiological and behavioral characteristics. Among the viable biometric modalities, the structure of the human ear offers unique and valuable discriminative characteristics for human recognition systems. In recent years, most traditional ear recognition systems have been designed around computer vision models and have achieved successful results. Nevertheless, such models can be sensitive to several unconstrained environmental factors, and some traits that are difficult to extract automatically can still be semantically perceived as soft biometrics. This research proposes a new group of semantic features to be used as soft ear biometrics, inspired by the conventional descriptive traits humans naturally use when identifying or describing each other. The study focuses on fusing these soft ear biometric traits with traditional (hard) ear biometric features to investigate their validity and efficacy in augmenting human identification performance. The proposed framework has two subsystems: a computer vision-based subsystem that extracts traditional (hard) ear biometric traits using principal component analysis (PCA) and local binary patterns (LBP), and a crowdsourcing-based subsystem that derives semantic (soft) ear biometric traits. Several feature-level fusion experiments were conducted on the AMI database to evaluate the proposed algorithm's performance. The results for both identification and verification showed that the proposed soft ear biometric information significantly improved the recognition performance of traditional ear biometrics, by up to 12% for LBP and 5% for PCA descriptors when fusing all three feature types (PCA, LBP, and soft traits) with a k-nearest neighbors (KNN) classifier.
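A hedged sketch of the feature-level fusion experiment follows: hard (PCA, LBP) and soft features are concatenated before KNN classification. The feature arrays here are synthetic stand-ins; real descriptors would come from the vision and crowdsourcing subsystems.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
pca_feats = rng.normal(size=(n, 40))       # stand-in PCA ear descriptors
lbp_feats = rng.normal(size=(n, 59))       # stand-in LBP histograms
soft_feats = rng.integers(0, 5, (n, 10))   # stand-in semantic soft traits
labels = np.repeat(np.arange(20), 10)      # 20 subjects, 10 samples each

# Feature-level fusion = simple concatenation before classification.
fused = np.hstack([pca_feats, lbp_feats, soft_feats])
knn = KNeighborsClassifier(n_neighbors=3)
print(cross_val_score(knn, fused, labels, cv=5).mean())
```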
Regular exercise is a crucial aspect of daily life, as it enables individuals to stay physically active, lowers the likelihood of developing illnesses, and enhances life expectancy. The recognition of workout actions in video streams holds significant importance in computer vision research, as it aims to enhance exercise adherence, enable instant recognition, advance fitness tracking technologies, and optimize fitness routines. However, existing action datasets often lack diversity and specificity for workout actions, hindering the development of accurate recognition models. To address this gap, the Workout Action Video dataset (WAVd) is introduced as a significant contribution. WAVd comprises a diverse collection of labeled workout action videos, meticulously curated to encompass various exercises performed by numerous individuals in different settings. This research proposes an innovative framework based on an Attention-driven Residual Deep Convolutional-Gated Recurrent Unit (ResDC-GRU) network for workout action recognition in video streams. Unlike image-based action recognition, videos contain spatio-temporal information, making the task more complex and challenging. While substantial progress has been made in this area, challenges persist in detecting subtle and complex actions, handling occlusions, and managing the computational demands of deep learning approaches. The proposed ResDC-GRU Attention model demonstrated exceptional classification performance with 95.81% accuracy in classifying workout action videos and outperformed various state-of-the-art models. The method also yielded 81.6%, 97.2%, 95.6%, and 93.2% accuracy on the established benchmark datasets HMDB51, YouTube Actions, UCF50, and UCF101, respectively, showcasing its superiority and robustness in action recognition. The findings suggest practical implications in real-world scenarios where precise video action recognition is paramount, addressing persistent challenges in the field. The WAVd dataset serves as a catalyst for the development of more robust and effective fitness tracking systems and ultimately promotes healthier lifestyles through improved exercise monitoring and analysis.
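The following compact sketch shows the general CNN-plus-GRU pattern underlying such video models: per-frame CNN features aggregated by a recurrent unit and then classified. A ResNet-18 backbone stands in for the residual convolutional stage, and the attention branch is omitted, so this is only the skeleton of the idea, not the ResDC-GRU model itself.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class VideoGRUClassifier(nn.Module):
    def __init__(self, num_classes: int, hidden: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()      # expose 512-d per-frame features
        self.backbone = backbone
        self.gru = nn.GRU(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):            # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).view(b, t, -1)
        _, h = self.gru(feats)           # h: (num_layers, B, hidden)
        return self.head(h[-1])

logits = VideoGRUClassifier(num_classes=20)(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 20])
```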
The infamous type Ⅳ failure within the fine-grained heat-affected zone (FGHAZ) of G115 steel weldments seriously threatens the safe operation of ultra-supercritical (USC) power plants. In this work, the traditional thermo-mechanical treatment was modified by replacing hot rolling with cold rolling, i.e., normalizing, cold rolling, and tempering (NCT), to improve the creep strength of the FGHAZ in G115 steel weldments. The NCT treatment effectively promoted the dissolution of preformed M₂₃C₆ particles and relieved the boundary segregation of C and Cr during welding thermal cycling, which accelerated the dispersed reprecipitation of M₂₃C₆ particles within the fresh reaustenitized grains during post-weld heat treatment. In addition, the precipitation of Cu-rich phases and MX particles was evidently promoted by the deformation-induced dislocations. As a result, the interactions between precipitates, dislocations, and boundaries during creep were reinforced considerably. Following this strategy, the creep rupture life of the FGHAZ in G115 steel weldments can be prolonged by 18.6%, which further supports the application of G115 steel in USC power plants.
The task of food image recognition, a nuanced subset of fine-grained image recognition, grapples with substantial intra-class variation and minimal inter-class differences. These challenges are compounded by the irregular and multi-scale nature of food images. Addressing these complexities, our study introduces an advanced model that leverages multiple attention mechanisms and multi-stage local fusion, grounded in the ConvNeXt architecture. Our model employs hybrid attention (HA) mechanisms to pinpoint critical discriminative regions within images, substantially mitigating the influence of background noise. It also introduces a multi-stage local fusion (MSLF) module, fostering long-distance dependencies between feature maps at varying stages. This approach facilitates the assimilation of complementary features across scales, significantly bolstering the model's capacity for feature extraction. In addition, we constructed a dataset named Roushi60, which consists of 60 categories of common meat dishes. Empirical evaluation on the ETH Food-101, ChineseFoodNet, and Roushi60 datasets reveals that our model achieves recognition accuracies of 91.12%, 82.86%, and 92.50%, respectively. These figures not only mark improvements of 1.04%, 3.42%, and 1.36% over the foundational ConvNeXt network but also surpass most contemporary food image recognition methods. Such advancements underscore the efficacy of the proposed model in navigating the intricate landscape of food image recognition, setting a new benchmark for the field.
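A hedged PyTorch sketch of fusing feature maps from different backbone stages follows, in the spirit of the MSLF idea: a deeper map is upsampled and concatenated with a shallower one so features at different scales interact. The real MSLF wiring inside ConvNeXt is richer; this shows only the basic mechanism, with assumed channel counts.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StageFusion(nn.Module):
    def __init__(self, c_shallow: int, c_deep: int, c_out: int):
        super().__init__()
        # 1x1 convolution projects the concatenated maps to a common width.
        self.proj = nn.Conv2d(c_shallow + c_deep, c_out, kernel_size=1)

    def forward(self, shallow, deep):
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        return self.proj(torch.cat([shallow, deep_up], dim=1))

s = torch.randn(2, 192, 28, 28)   # e.g., an earlier backbone stage
d = torch.randn(2, 384, 14, 14)   # a later, coarser stage
print(StageFusion(192, 384, 256)(s, d).shape)  # torch.Size([2, 256, 28, 28])
```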
Humans perceive our complex world through multi-sensory fusion. Under limited visual conditions, people can sense a variety of tactile signals to identify objects accurately and rapidly. However, replicating this unique capability in robots remains a significant challenge. Here, we present a new form of ultralight multifunctional tactile nano-layered carbon aerogel sensor that provides pressure, temperature, material recognition, and 3D location capabilities, combined with multimodal supervised learning algorithms for object recognition. The sensor exhibits human-like pressure (0.04–100 kPa) and temperature (21.5–66.2℃) detection, millisecond response times (11 ms), a pressure sensitivity of 92.22 kPa⁻¹, and triboelectric durability of over 6000 cycles. The devised algorithm is universal and can accommodate a range of application scenarios. The tactile system can identify common foods in a kitchen scene with 94.63% accuracy and explore the topographic and geomorphic features of a Mars scene with 100% accuracy. This sensing approach empowers robots with versatile tactile perception, advancing future society toward heightened sensing, recognition, and intelligence.
Artificial intelligence (AI) technology has become integral to medicine and healthcare, particularly in human activity recognition (HAR) applications such as fitness and rehabilitation tracking. This study introduces a robust coupling analysis framework that integrates four AI-enabled models, combining machine learning (ML) and deep learning (DL) approaches to evaluate their effectiveness in HAR. The analytical dataset comprises 561 features sourced from the UCI-HAR database, forming the foundation for training the models. Additionally, the MHEALTH database is employed to replicate the modeling process for comparison, while inclusion of the WISDM database, renowned for its challenging features, tests the framework's resilience and adaptability. The ML-based models employ the adaptive neuro-fuzzy inference system (ANFIS), support vector machine (SVM), and random forest (RF) methodologies for data training. In contrast, the DL-based model uses a one-dimensional convolutional neural network (1dCNN) to automate feature extraction. Furthermore, the recursive feature elimination (RFE) algorithm, which drives an ML-based estimator to eliminate low-participation features, helps identify the optimal features for enhancing model performance. With a meticulous feature-selection process, the best accuracies of the ANFIS, SVM, RF, and 1dCNN models reach around 90%, 96%, 91%, and 93%, respectively. Comparative analysis on the MHEALTH dataset showcases the 1dCNN model's perfect accuracy (100%), while the RF, SVM, and ANFIS models equipped with selected features achieve accuracies of 99.8%, 99.7%, and 96.5%, respectively. Finally, when applied to the WISDM dataset, the DL-based and ML-based models attain accuracies of 91.4% and 87.3%, respectively, aligning with prior research findings. In conclusion, the proposed framework yields HAR models with commendable performance metrics, exhibiting its suitability for integration into healthcare services through AI-driven applications.
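A minimal sketch of the RFE feature-selection step follows, with a random forest standing in for the ML-based estimator. The synthetic data, the number of retained features, and the elimination step size are placeholders, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 561))   # 561 features, mirroring the UCI-HAR setup
y = rng.integers(0, 6, 300)       # six activity classes as a stand-in

# RFE repeatedly drops the lowest-importance features reported by the estimator.
selector = RFE(RandomForestClassifier(n_estimators=50, random_state=0),
               n_features_to_select=100, step=20)
selector.fit(X, y)
X_reduced = selector.transform(X)
print(X_reduced.shape)  # (300, 100)
```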
The identification of intercepted radio fuze modulation types is a prerequisite for decision-making in interference systems. However, the electromagnetic environment of modern battlefields is complex, and its signal-to-noise ratio (SNR) is usually low, which makes accurate recognition of radio fuzes difficult. To solve this problem, a radio fuze automatic modulation recognition (AMR) method for low-SNR environments is proposed. First, an adaptive denoising algorithm based on data rearrangement and the two-dimensional (2D) fast Fourier transform (FFT), called DR2D, is used to reduce the noise of the intercepted radio fuze intermediate frequency (IF) signal. Then, textural features of the denoised IF signal's rearranged data matrix are extracted from the statistical indicator vectors of gray-level co-occurrence matrices (GLCMs), and support vector machines (SVMs) are used for classification. The DR2D-based adaptive denoising algorithm achieves an average correlation coefficient of more than 0.76 for ten fuze types at SNRs of −10 dB and above, which is higher than that of other typical algorithms. The trained SVM classification model achieves an average recognition accuracy of more than 96% on seven modulation types, and recognition accuracies of more than 94% on each modulation type at SNRs of −12 dB and above, representing good AMR performance for radio fuzes at low SNRs.
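A minimal sketch of the GLCM-texture-plus-SVM classification stage follows, using scikit-image and scikit-learn; the DR2D denoising step is omitted and the matrices are synthetic placeholders. Note that scikit-image renamed these functions to graycomatrix/graycoprops in recent versions; older releases spell them greycomatrix/greycoprops.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(matrix: np.ndarray) -> np.ndarray:
    """Texture statistics from an 8-bit rearranged-signal matrix."""
    glcm = graycomatrix(matrix, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
mats = rng.integers(0, 256, (60, 64, 64), dtype=np.uint8)  # stand-in matrices
X = np.stack([glcm_features(m) for m in mats])
y = rng.integers(0, 7, 60)                # seven modulation types as labels
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```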
In the field of computer vision and pattern recognition, knowledge based on images of human activity has gained popularity as a research topic. Activity recognition is the process of determining human behavior from an image. Here, we implement an Extended Kalman filter to create an activity recognition system. The proposed method first applies an HSI color transformation to improve the clarity of each image frame. To minimize noise, we use Gaussian filters. Silhouettes are then extracted using a statistical method. We use Binary Robust Invariant Scalable Keypoints (BRISK) and SIFT for feature extraction. The next step is feature discrimination using the Gray Wolf optimization algorithm. After that, the features are input into the Extended Kalman filter and classified into the relevant human activities according to their definitive characteristics. In experiments on the SUB-Interaction and HMDB51 datasets, the system achieves recognition rates of 0.88 and 0.86, respectively.
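The OpenCV sketch below illustrates the keypoint-extraction stage (Gaussian smoothing followed by BRISK and SIFT) from the pipeline above; the HSI transform, silhouette extraction, Gray Wolf selection, and EKF classification are outside this snippet, and the input frame is a synthetic stand-in.

```python
import cv2
import numpy as np

frame = (np.random.rand(240, 320) * 255).astype(np.uint8)  # stand-in frame
smoothed = cv2.GaussianBlur(frame, (5, 5), sigmaX=1.0)     # noise reduction

brisk = cv2.BRISK_create()
sift = cv2.SIFT_create()
kp_b, desc_b = brisk.detectAndCompute(smoothed, None)
kp_s, desc_s = sift.detectAndCompute(smoothed, None)
print(len(kp_b), len(kp_s))  # keypoints found by each detector
```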
This paper proposes a novel open set recognition method, the Spatial Distribution Feature Extraction Network (SDFEN), to address electromagnetic signal recognition in an open environment. The spatial distribution feature extraction layer in SDFEN replaces the network's convolutional outputs with spatial distribution features that, by incorporating class center vectors, focus more on inter-sample information. The designed hybrid loss function considers both intra-class and inter-class distance, enhancing the similarity among samples of the same class and increasing the dissimilarity between samples of different classes during training. Consequently, unknown classes can occupy a larger region of the feature space, which reduces the possibility of overlap with known-class samples and makes the boundaries between known and unknown samples more distinct. Additionally, a threshold on the feature comparator can be used to reject unknown samples. For signal open set recognition, seven methods, including the proposed one, are applied to two kinds of electromagnetic signal data: modulation signals and real-world emitters. The experimental results demonstrate that the proposed method outperforms the other six methods overall in a simulated open environment; compared with the state-of-the-art Openmax method, it achieves micro-F-measures up to 8.87% and 5.25% higher on the two data types, respectively.
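A hedged PyTorch sketch of a hybrid loss in this spirit follows: an intra-class term pulls samples toward their class center, while an inter-class term pushes centers apart. The margin, weighting, and center parameterization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class HybridCenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int, margin: float = 1.0):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.margin = margin

    def forward(self, feats, labels):
        # Intra-class: squared distance of each sample to its own center.
        intra = (feats - self.centers[labels]).pow(2).sum(1).mean()
        # Inter-class: hinge penalty on center pairs closer than the margin.
        d = torch.cdist(self.centers, self.centers)
        off_diag = d[~torch.eye(len(d), dtype=torch.bool)]
        inter = torch.relu(self.margin - off_diag).mean()
        return intra + inter

loss = HybridCenterLoss(num_classes=10, feat_dim=64)(
    torch.randn(32, 64), torch.randint(0, 10, (32,)))
print(loss.item())
```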
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Hand gestures have been used as a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI, and it is pivotal in healthcare and in communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only limited, common gestures are considered, and (b) processing multiple channels of information across a network requires substantial computation during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model, named HVCNNM, offers several benefits, notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Such models can be optimized for real-time performance, learn from large amounts of data, and scale to complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, the Massey University Dataset (MUD) and the American Sign Language Alphabet Dataset (ASLAD). On the MUD and ASLAD datasets, HVCNNM achieved accuracies of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNNs as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
Electric power training is essential for ensuring the safety and reliability of the system. In this study, we introduce a novel Abnormal Action Recognition (AAR) system that utilizes a Lightweight Pose Estimation Network (LPEN) to efficiently and effectively detect abnormal fall-down and trespass incidents in electric power training scenarios. The LPEN network, comprising three stages (MobileNet, an initial stage, and a refinement stage), is employed to swiftly extract image features, detect human keypoints, and refine them for accurate analysis. Subsequently, a Pose-aware Action Analysis Module (PAAM) captures the positional coordinates of human skeletal points in each frame. Finally, an Abnormal Action Inference Module (AAIM) evaluates whether abnormal fall-down or unauthorized trespass behavior is occurring. For fall-down recognition, three criteria are considered: falling speed, the main angles of skeletal points, and the person's bounding box. To identify unauthorized trespass, emphasis is placed on the position of the ankles. Extensive experiments validate the effectiveness and efficiency of the proposed system in ensuring the safety and reliability of electric power training.
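To make the fall-down criteria concrete, here is a hedged rule-based sketch combining the three cues named above (descent speed of a keypoint, torso angle, and bounding-box shape). The thresholds, units, and majority-vote rule are illustrative assumptions; the paper's exact inference logic may differ.

```python
def is_fall_down(hip_y_prev: float, hip_y_now: float, dt: float,
                 torso_angle_deg: float, box_w: float, box_h: float) -> bool:
    """Rule-based fall check over one frame interval (illustrative only)."""
    falling_speed = (hip_y_now - hip_y_prev) / dt   # image y grows downward
    speed_flag = falling_speed > 0.8                # assumed: box-heights per second
    angle_flag = torso_angle_deg < 45.0             # torso near horizontal
    box_flag = box_w > box_h                        # lying-down bounding box
    return sum([speed_flag, angle_flag, box_flag]) >= 2  # majority vote

print(is_fall_down(0.40, 0.65, 0.2, 30.0, box_w=120, box_h=80))  # True
```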
Although sentiment analysis is pivotal to understanding user preferences, existing models face significant challenges in handling context-dependent sentiments, sarcasm, and nuanced emotions. This study addresses these challenges by integrating ontology-based methods with deep learning models, thereby enhancing sentiment analysis accuracy in complex domains such as film reviews and restaurant feedback. The framework comprises explicit topic recognition, followed by implicit topic identification to mitigate topic interference in subsequent sentiment analysis. In the context of sentiment analysis, we develop an expanded sentiment lexicon based on domain-specific corpora by leveraging techniques such as word-frequency analysis and word embedding. Furthermore, we introduce a sentiment recognition method based on both ontology-derived sentiment features and sentiment lexicons. We evaluate the performance of our system using a dataset of 10,500 restaurant reviews, focusing on sentiment classification accuracy. The incorporation of specialized lexicons and ontology structures enables the framework to discern subtle sentiment variations and context-specific expressions, thereby improving overall sentiment-analysis performance. Experimental results demonstrate that the integration of ontology-based methods and deep learning models significantly improves sentiment analysis accuracy.
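One step of the lexicon-expansion idea can be sketched with word embeddings: nearest neighbors of seed sentiment words become lexicon candidates with inherited polarity. The toy corpus, seed words, and polarity-scaling rule below are placeholders; a real domain corpus such as restaurant reviews would be used instead.

```python
from gensim.models import Word2Vec

corpus = [
    ["the", "soup", "was", "delicious", "and", "fresh"],
    ["terrible", "service", "and", "bland", "rice"],
    ["fresh", "ingredients", "made", "it", "delicious"],
    ["bland", "and", "terrible", "overall"],
] * 50  # repeated so the toy model has enough co-occurrence counts

model = Word2Vec(corpus, vector_size=32, window=3, min_count=1, seed=0)
seeds = {"delicious": 1.0, "terrible": -1.0}  # seed polarity lexicon
for word, polarity in seeds.items():
    for neighbor, sim in model.wv.most_similar(word, topn=2):
        # Candidate entry: neighbor inherits the seed's polarity, scaled
        # by embedding similarity (an assumed weighting rule).
        print(f"{neighbor}: {polarity * sim:+.2f}")
```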
Handwritten character recognition (HCR) involves identifying characters in images and documents from sources such as forms, surveys, questionnaires, and signatures, and transforming them into a machine-readable format for subsequent processing. Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle. Recent developments using convolutional neural networks (CNNs) have notably advanced HCR by leveraging their ability to extract discriminative features from extensive sets of raw data. Because no pre-existing dataset was available for the Kurdish language, we created a Kurdish handwritten dataset called KurdSet, consisting of Kurdish characters, digits, texts, and symbols collected from 1,560 participants, for a total of 45,240 characters. In this study, we used only the characters from our dataset for handwritten character recognition. The study utilizes various models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. To gauge KurdSet, we compared it with the Arabic handwritten character recognition dataset (AHCD), applying the same models to both datasets. Model performance is evaluated using test accuracy, which measures the percentage of correctly classified characters in the evaluation phase. All models performed well in the training phase; DenseNet121 exhibited the highest accuracy, achieving 99.80% on the Kurdish dataset, while the Xception model achieved 98.66% on the Arabic dataset.
This study proposes a pose estimation-convolutional neural network-bidirectional gated recurrent unit (PSE-CNN-BiGRU) fusion model for human posture recognition, addressing the low accuracy of abnormal posture recognition caused by the loss of feature information and the degraded overall detection performance of models in complex home environments. First, a deep convolutional network is integrated with the MediaPipe framework to extract high-precision, multi-dimensional information from the keypoints of the human skeleton, yielding a human posture feature set. Then, a double-layer BiGRU extracts multi-layer, bidirectional temporal features from this feature set, and a CNN with an exponential linear unit (ELU) activation function performs deep convolution on the feature map to extract the spatial features of the human posture. Furthermore, a squeeze-and-excitation network (SENet) module is introduced to adaptively learn the importance weight of each channel, enhancing the network's focus on important features. Finally, comparative experiments are performed on available datasets, including the public human activity recognition using smartphones dataset (UCIHAR), the public human activity recognition 70-plus dataset (HAR70PLUS), and the home abnormal behavior recognition dataset (HABRD) independently developed by the authors' team. The results show that the average accuracy of the proposed PSE-CNN-BiGRU fusion model for human posture recognition is 99.56%, 89.42%, and 98.90%, respectively, which is 5.24%, 5.83%, and 3.19% higher than the average accuracy of the five models in the comparative literature, including CNN, GRU, and others. The F1-score for abnormal posture recognition reaches 98.84% (heartache), 97.18% (fall), 99.60% (bellyache), and 98.27% (climbing) on the self-built HABRD dataset, verifying the effectiveness, generalization, and robustness of the proposed model in enhancing human posture recognition.
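The skeleton-keypoint extraction step can be sketched with MediaPipe Pose as follows, producing the per-frame posture vectors that the temporal and convolutional stages would consume. The video path is a placeholder, and the flattening of 33 landmarks into a 132-dimensional vector is an assumed feature layout.

```python
import cv2
import mediapipe as mp
import numpy as np

pose = mp.solutions.pose.Pose(static_image_mode=False)
cap = cv2.VideoCapture("home_video.mp4")  # placeholder path

frames_feats = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        # 33 landmarks x (x, y, z, visibility) -> one 132-d posture vector
        frames_feats.append(np.array(
            [[lm.x, lm.y, lm.z, lm.visibility]
             for lm in results.pose_landmarks.landmark]).ravel())
cap.release()
sequence = np.stack(frames_feats) if frames_feats else np.empty((0, 132))
print(sequence.shape)  # (num_frames, 132), input to the temporal model
```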
RFID-based human activity recognition (HAR) attracts attention due to its convenience, non-invasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNNs, or LSTMs to extract features effectively, but they still have shortcomings: (1) they require complex hand-crafted data cleaning, and (2) they only address single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a time-streaming multiscale Transformer, called TransTM. The model leverages the Transformer's powerful data-fitting capability to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features, recognizing both single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, Transformer-based methods have greater data-fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human behavior-based classification tasks. Experimental results on real RFID datasets show that the model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
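The following compact sketch shows a Transformer encoder operating directly on raw RSSI streams, in the spirit of TransTM: no hand-crafted cleaning, just a linear embedding of each time step followed by self-attention and a classification head. Dimensions are placeholders, and the multiscale convolutional branch is omitted for brevity.

```python
import torch
import torch.nn as nn

class RSSITransformer(nn.Module):
    def __init__(self, num_tags: int, num_classes: int, d_model: int = 64):
        super().__init__()
        self.embed = nn.Linear(num_tags, d_model)  # one RSSI vector per step
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, rssi):                       # rssi: (B, T, num_tags)
        h = self.encoder(self.embed(rssi))
        return self.head(h.mean(dim=1))            # average-pool over time

logits = RSSITransformer(num_tags=12, num_classes=8)(torch.randn(4, 100, 12))
print(logits.shape)  # torch.Size([4, 8])
```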
The fingerprinting-based approach using the wireless local area network (WLAN) is widely used for indoor localization. However, constructing the fingerprint database is time-consuming, and updating it in real time is difficult, especially when the position of an access point (AP) or a wall changes. Location-based services (LBSs) favor an indoor localization approach that has low implementation cost, excellent real-time performance, and high localization accuracy while fully accounting for complex indoor environments. In this paper, we propose a fine-grained grid computing (FGGC) model to achieve decimeter-level localization accuracy. Reference points (RPs) are generated in the grid by the FGGC model. Then, the received signal strength (RSS) values at each RP are calculated from attenuation factors such as the frequency band, the three-dimensional propagation distance, and walls in complex environments. As a result, the fingerprint database can be established automatically without manual measurement, making the FGGC model more efficient and cheaper than previous methods. The proposed approach, which estimates the position step by step from an approximate grid location to a fine-grained location, achieves high real-time performance and localization accuracy simultaneously. Its mean error is 0.36 m, far lower than that of previous approaches, and it remains accurate and fast even with a large grid size. The results indicate that the proposed method is thus feasible for improving the efficiency and accuracy of Wi-Fi indoor localization, and it is also suitable for precision marketing, indoor navigation, and emergency rescue.
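As an illustration of computing an RSS fingerprint at a reference point from attenuation factors, the sketch below uses a standard log-distance path-loss model with a per-wall penalty, in the spirit of the FGGC signal model. The constants (transmit power, path-loss exponent, wall loss) are illustrative assumptions, not the paper's calibrated values.

```python
import math

def predicted_rss(tx_power_dbm: float, distance_m: float,
                  n_walls: int, path_loss_exp: float = 3.0,
                  wall_loss_db: float = 5.0, d0: float = 1.0) -> float:
    """RSS at a reference point: log-distance path loss plus wall penalties."""
    path_loss = 10 * path_loss_exp * math.log10(max(distance_m, d0) / d0)
    return tx_power_dbm - path_loss - n_walls * wall_loss_db

# Fill one grid cell of the fingerprint database for a single AP:
# 8.5 m of 3D propagation distance with two intervening walls.
print(predicted_rss(tx_power_dbm=-30.0, distance_m=8.5, n_walls=2))
```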
基金supported by the National Natural Science Foundation of China(No.62302540),with author Fangfang Shan for more information,please visit their website at https://www.nsfc.gov.cn/(accessed on 05 June 2024)Additionally,it is also funded by the Open Foundation of Henan Key Laboratory of Cyberspace Situation Awareness(No.HNTS2022020),where Fangfang Shan is an author.Further details can be found at http://xt.hnkjt.gov.cn/data/pingtai/(accessed on 05 June 2024)the Natural Science Foundation of Henan Province Youth Science Fund Project(No.232300420422),and for more information,you can visit https://kjt.henan.gov.cn(accessed on 05 June 2024).
文摘With the emergence and development of social networks,people can stay in touch with friends,family,and colleagues more quickly and conveniently,regardless of their location.This ubiquitous digital internet environment has also led to large-scale disclosure of personal privacy.Due to the complexity and subtlety of sensitive information,traditional sensitive information identification technologies cannot thoroughly address the characteristics of each piece of data,thus weakening the deep connections between text and images.In this context,this paper adopts the CLIP model as a modality discriminator.By using comparative learning between sensitive image descriptions and images,the similarity between the images and the sensitive descriptions is obtained to determine whether the images contain sensitive information.This provides the basis for identifying sensitive information using different modalities.Specifically,if the original data does not contain sensitive information,only single-modality text-sensitive information identification is performed;if the original data contains sensitive information,multimodality sensitive information identification is conducted.This approach allows for differentiated processing of each piece of data,thereby achieving more accurate sensitive information identification.The aforementioned modality discriminator can address the limitations of existing sensitive information identification technologies,making the identification of sensitive information from the original data more appropriate and precise.
文摘Fine-grained recognition of ships based on remote sensing images is crucial to safeguarding maritime rights and interests and maintaining national security.Currently,with the emergence of massive high-resolution multi-modality images,the use of multi-modality images for fine-grained recognition has become a promising technology.Fine-grained recognition of multi-modality images imposes higher requirements on the dataset samples.The key to the problem is how to extract and fuse the complementary features of multi-modality images to obtain more discriminative fusion features.The attention mechanism helps the model to pinpoint the key information in the image,resulting in a significant improvement in the model’s performance.In this paper,a dataset for fine-grained recognition of ships based on visible and near-infrared multi-modality remote sensing images has been proposed first,named Dataset for Multimodal Fine-grained Recognition of Ships(DMFGRS).It includes 1,635 pairs of visible and near-infrared remote sensing images divided into 20 categories,collated from digital orthophotos model provided by commercial remote sensing satellites.DMFGRS provides two types of annotation format files,as well as segmentation mask images corresponding to the ship targets.Then,a Multimodal Information Cross-Enhancement Network(MICE-Net)fusing features of visible and near-infrared remote sensing images,has been proposed.In the network,a dual-branch feature extraction and fusion module has been designed to obtain more expressive features.The Feature Cross Enhancement Module(FCEM)achieves the fusion enhancement of the two modal features by making the channel attention and spatial attention work cross-functionally on the feature map.A benchmark is established by evaluating state-of-the-art object recognition algorithms on DMFGRS.MICE-Net conducted experiments on DMFGRS,and the precision,recall,mAP0.5 and mAP0.5:0.95 reached 87%,77.1%,83.8%and 63.9%,respectively.Extensive experiments demonstrate that the proposed MICE-Net has more excellent performance on DMFGRS.Built on lightweight network YOLO,the model has excellent generalizability,and thus has good potential for application in real-life scenarios.
基金supported by the research team of Xi’an Traffic Engineering Institute and the Young and middle-aged fund project of Xi’an Traffic Engineering Institute (2022KY-02).
文摘Mining more discriminative temporal features to enrich temporal context representation is considered the key to fine-grained action recog-nition.Previous action recognition methods utilize a fixed spatiotemporal window to learn local video representation.However,these methods failed to capture complex motion patterns due to their limited receptive field.To solve the above problems,this paper proposes a lightweight Temporal Pyramid Excitation(TPE)module to capture the short,medium,and long-term temporal context.In this method,Temporal Pyramid(TP)module can effectively expand the temporal receptive field of the network by using the multi-temporal kernel decomposition without significantly increasing the computational cost.In addition,the Multi Excitation module can emphasize temporal importance to enhance the temporal feature representation learning.TPE can be integrated into ResNet50,and building a compact video learning framework-TPENet.Extensive validation experiments on several challenging benchmark(Something-Something V1,Something-Something V2,UCF-101,and HMDB51)datasets demonstrate that our method achieves a preferable balance between computation and accuracy.
基金supported and funded by KAU Scientific Endowment,King Abdulaziz University,Jeddah,Saudi Arabia.
文摘Human recognition technology based on biometrics has become a fundamental requirement in all aspects of life due to increased concerns about security and privacy issues.Therefore,biometric systems have emerged as a technology with the capability to identify or authenticate individuals based on their physiological and behavioral characteristics.Among different viable biometric modalities,the human ear structure can offer unique and valuable discriminative characteristics for human recognition systems.In recent years,most existing traditional ear recognition systems have been designed based on computer vision models and have achieved successful results.Nevertheless,such traditional models can be sensitive to several unconstrained environmental factors.As such,some traits may be difficult to extract automatically but can still be semantically perceived as soft biometrics.This research proposes a new group of semantic features to be used as soft ear biometrics,mainly inspired by conventional descriptive traits used naturally by humans when identifying or describing each other.Hence,the research study is focused on the fusion of the soft ear biometric traits with traditional(hard)ear biometric features to investigate their validity and efficacy in augmenting human identification performance.The proposed framework has two subsystems:first,a computer vision-based subsystem,extracting traditional(hard)ear biometric traits using principal component analysis(PCA)and local binary patterns(LBP),and second,a crowdsourcing-based subsystem,deriving semantic(soft)ear biometric traits.Several feature-level fusion experiments were conducted using the AMI database to evaluate the proposed algorithm’s performance.The obtained results for both identification and verification showed that the proposed soft ear biometric information significantly improved the recognition performance of traditional ear biometrics,reaching up to 12%for LBP and 5%for PCA descriptors;when fusing all three capacities PCA,LBP,and soft traits using k-nearest neighbors(KNN)classifier.
文摘Regular exercise is a crucial aspect of daily life, as it enables individuals to stay physically active, lowers thelikelihood of developing illnesses, and enhances life expectancy. The recognition of workout actions in videostreams holds significant importance in computer vision research, as it aims to enhance exercise adherence, enableinstant recognition, advance fitness tracking technologies, and optimize fitness routines. However, existing actiondatasets often lack diversity and specificity for workout actions, hindering the development of accurate recognitionmodels. To address this gap, the Workout Action Video dataset (WAVd) has been introduced as a significantcontribution. WAVd comprises a diverse collection of labeled workout action videos, meticulously curated toencompass various exercises performed by numerous individuals in different settings. This research proposes aninnovative framework based on the Attention driven Residual Deep Convolutional-Gated Recurrent Unit (ResDCGRU)network for workout action recognition in video streams. Unlike image-based action recognition, videoscontain spatio-temporal information, making the task more complex and challenging. While substantial progresshas been made in this area, challenges persist in detecting subtle and complex actions, handling occlusions,and managing the computational demands of deep learning approaches. The proposed ResDC-GRU Attentionmodel demonstrated exceptional classification performance with 95.81% accuracy in classifying workout actionvideos and also outperformed various state-of-the-art models. The method also yielded 81.6%, 97.2%, 95.6%, and93.2% accuracy on established benchmark datasets, namely HMDB51, Youtube Actions, UCF50, and UCF101,respectively, showcasing its superiority and robustness in action recognition. The findings suggest practicalimplications in real-world scenarios where precise video action recognition is paramount, addressing the persistingchallenges in the field. TheWAVd dataset serves as a catalyst for the development ofmore robust and effective fitnesstracking systems and ultimately promotes healthier lifestyles through improved exercise monitoring and analysis.
基金financially supported by the National Key R&D Program of China(No.2022YFB3705300)the National Natural Science Foundation of China(Nos.U1960204 and 51974199)the Postdoctoral Fellowship Program of CPSF(No.GZB20230515)。
文摘The infamous type Ⅳ failure within the fine-grained heat-affected zone (FGHAZ) in G115 steel weldments seriously threatens the safe operation of ultra-supercritical (USC) power plants.In this work,the traditional thermo-mechanical treatment was modified via the replacement of hot-rolling with cold rolling,i.e.,normalizing,cold rolling,and tempering (NCT),which was developed to improve the creep strength of the FGHAZ in G115 steel weldments.The NCT treatment effectively promoted the dissolution of preformed M_(23)C_(6)particles and relieved the boundary segregation of C and Cr during welding thermal cycling,which accelerated the dispersed reprecipitation of M_(23)C_(6) particles within the fresh reaustenitized grains during post-weld heat treatment.In addition,the precipitation of Cu-rich phases and MX particles was promoted evidently due to the deformation-induced dislocations.As a result,the interacting actions between precipitates,dislocations,and boundaries during creep were reinforced considerably.Following this strategy,the creep rupture life of the FGHAZ in G115 steel weldments can be prolonged by 18.6%,which can further push the application of G115 steel in USC power plants.
基金The support of this research was by Hubei Provincial Natural Science Foundation(2022CFB449)Science Research Foundation of Education Department of Hubei Province(B2020061),are gratefully acknowledged.
文摘The task of food image recognition,a nuanced subset of fine-grained image recognition,grapples with substantial intra-class variation and minimal inter-class differences.These challenges are compounded by the irregular and multi-scale nature of food images.Addressing these complexities,our study introduces an advanced model that leverages multiple attention mechanisms and multi-stage local fusion,grounded in the ConvNeXt architecture.Our model employs hybrid attention(HA)mechanisms to pinpoint critical discriminative regions within images,substantially mitigating the influence of background noise.Furthermore,it introduces a multi-stage local fusion(MSLF)module,fostering long-distance dependencies between feature maps at varying stages.This approach facilitates the assimilation of complementary features across scales,significantly bolstering the model’s capacity for feature extraction.Furthermore,we constructed a dataset named Roushi60,which consists of 60 different categories of common meat dishes.Empirical evaluation of the ETH Food-101,ChineseFoodNet,and Roushi60 datasets reveals that our model achieves recognition accuracies of 91.12%,82.86%,and 92.50%,respectively.These figures not only mark an improvement of 1.04%,3.42%,and 1.36%over the foundational ConvNeXt network but also surpass the performance of most contemporary food image recognition methods.Such advancements underscore the efficacy of our proposed model in navigating the intricate landscape of food image recognition,setting a new benchmark for the field.
基金the National Natural Science Foundation of China(Grant No.52072041)the Beijing Natural Science Foundation(Grant No.JQ21007)+2 种基金the University of Chinese Academy of Sciences(Grant No.Y8540XX2D2)the Robotics Rhino-Bird Focused Research Project(No.2020-01-002)the Tencent Robotics X Laboratory.
文摘Humans can perceive our complex world through multi-sensory fusion.Under limited visual conditions,people can sense a variety of tactile signals to identify objects accurately and rapidly.However,replicating this unique capability in robots remains a significant challenge.Here,we present a new form of ultralight multifunctional tactile nano-layered carbon aerogel sensor that provides pressure,temperature,material recognition and 3D location capabilities,which is combined with multimodal supervised learning algorithms for object recognition.The sensor exhibits human-like pressure(0.04–100 kPa)and temperature(21.5–66.2℃)detection,millisecond response times(11 ms),a pressure sensitivity of 92.22 kPa^(−1)and triboelectric durability of over 6000 cycles.The devised algorithm has universality and can accommodate a range of application scenarios.The tactile system can identify common foods in a kitchen scene with 94.63%accuracy and explore the topographic and geomorphic features of a Mars scene with 100%accuracy.This sensing approach empowers robots with versatile tactile perception to advance future society toward heightened sensing,recognition and intelligence.
基金funded by the National Science and Technology Council,Taiwan(Grant No.NSTC 112-2121-M-039-001)by China Medical University(Grant No.CMU112-MF-79).
文摘Artificial intelligence(AI)technology has become integral in the realm of medicine and healthcare,particularly in human activity recognition(HAR)applications such as fitness and rehabilitation tracking.This study introduces a robust coupling analysis framework that integrates four AI-enabled models,combining both machine learning(ML)and deep learning(DL)approaches to evaluate their effectiveness in HAR.The analytical dataset comprises 561 features sourced from the UCI-HAR database,forming the foundation for training the models.Additionally,the MHEALTH database is employed to replicate the modeling process for comparative purposes,while inclusion of the WISDM database,renowned for its challenging features,supports the framework’s resilience and adaptability.The ML-based models employ the methodologies including adaptive neuro-fuzzy inference system(ANFIS),support vector machine(SVM),and random forest(RF),for data training.In contrast,a DL-based model utilizes one-dimensional convolution neural network(1dCNN)to automate feature extraction.Furthermore,the recursive feature elimination(RFE)algorithm,which drives an ML-based estimator to eliminate low-participation features,helps identify the optimal features for enhancing model performance.The best accuracies of the ANFIS,SVM,RF,and 1dCNN models with meticulous featuring process achieve around 90%,96%,91%,and 93%,respectively.Comparative analysis using the MHEALTH dataset showcases the 1dCNN model’s remarkable perfect accuracy(100%),while the RF,SVM,and ANFIS models equipped with selected features achieve accuracies of 99.8%,99.7%,and 96.5%,respectively.Finally,when applied to the WISDM dataset,the DL-based and ML-based models attain accuracies of 91.4%and 87.3%,respectively,aligning with prior research findings.In conclusion,the proposed framework yields HAR models with commendable performance metrics,exhibiting its suitability for integration into the healthcare services system through AI-driven applications.
基金National Natural Science Foundation of China under Grant No.61973037China Postdoctoral Science Foundation 2022M720419 to provide fund for conducting experiments。
文摘The identification of intercepted radio fuze modulation types is a prerequisite for decision-making in interference systems.However,the electromagnetic environment of modern battlefields is complex,and the signal-to-noise ratio(SNR)of such environments is usually low,which makes it difficult to implement accurate recognition of radio fuzes.To solve the above problem,a radio fuze automatic modulation recognition(AMR)method for low-SNR environments is proposed.First,an adaptive denoising algorithm based on data rearrangement and the two-dimensional(2D)fast Fourier transform(FFT)(DR2D)is used to reduce the noise of the intercepted radio fuze intermediate frequency(IF)signal.Then,the textural features of the denoised IF signal rearranged data matrix are extracted from the statistical indicator vectors of gray-level cooccurrence matrices(GLCMs),and support vector machines(SVMs)are used for classification.The DR2D-based adaptive denoising algorithm achieves an average correlation coefficient of more than 0.76 for ten fuze types under SNRs of-10 d B and above,which is higher than that of other typical algorithms.The trained SVM classification model achieves an average recognition accuracy of more than 96%on seven modulation types and recognition accuracies of more than 94%on each modulation type under SNRs of-12 d B and above,which represents a good AMR performance of radio fuzes under low SNRs.
基金funded by the Open Access Initiative of the University of Bremen and the DFG via SuUB Bremen.The authors are thankful to the Deanship of Scientific Research at Najran University for funding this work under the Research Group Funding Program grant code(NU/RG/SERC/13/40).
文摘In the field of computer vision and pattern recognition,knowledge based on images of human activity has gained popularity as a research topic.Activity recognition is the process of determining human behavior based on an image.We implemented an Extended Kalman filter to create an activity recognition system here.The proposed method applies an HSI color transformation in its initial stages to improve the clarity of the frame of the image.To minimize noise,we use Gaussian filters.Extraction of silhouette using the statistical method.We use Binary Robust Invariant Scalable Keypoints(BRISK)and SIFT for feature extraction.The next step is to perform feature discrimination using Gray Wolf.After that,the features are input into the Extended Kalman filter and classified into relevant human activities according to their definitive characteristics.The experimental procedure uses the SUB-Interaction and HMDB51 datasets to a 0.88%and 0.86%recognition rate.
文摘This paper proposes a novel open set recognition method,the Spatial Distribution Feature Extraction Network(SDFEN),to address the problem of electromagnetic signal recognition in an open environment.The spatial distribution feature extraction layer in SDFEN replaces convolutional output neural networks with the spatial distribution features that focus more on inter-sample information by incorporating class center vectors.The designed hybrid loss function considers both intra-class distance and inter-class distance,thereby enhancing the similarity among samples of the same class and increasing the dissimilarity between samples of different classes during training.Consequently,this method allows unknown classes to occupy a larger space in the feature space.This reduces the possibility of overlap with known class samples and makes the boundaries between known and unknown samples more distinct.Additionally,the feature comparator threshold can be used to reject unknown samples.For signal open set recognition,seven methods,including the proposed method,are applied to two kinds of electromagnetic signal data:modulation signal and real-world emitter.The experimental results demonstrate that the proposed method outperforms the other six methods overall in a simulated open environment.Specifically,compared to the state-of-the-art Openmax method,the novel method achieves up to 8.87%and 5.25%higher micro-F-measures,respectively.
Funding: supported by the National Philosophy and Social Sciences Foundation (Grant No. 20BTQ065).
Abstract: Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR: expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
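To make the CSLR direction concrete, here is a hedged toy sketch of a Transformer encoder over per-frame features producing per-frame gloss logits, the shape a CTC-style CSLR model would need; all dimensions and the gloss count are illustrative, not drawn from any surveyed system.

```python
import torch
import torch.nn as nn

class SignTransformer(nn.Module):
    """Toy Transformer encoder for frame-wise sign sequence labeling.

    Input: a batch of per-frame feature vectors (e.g., from a CNN backbone);
    output: per-frame gloss logits suitable for CTC-style CSLR training.
    All sizes here are illustrative.
    """
    def __init__(self, feat_dim=512, num_glosses=1000, depth=4, heads=8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.head = nn.Linear(feat_dim, num_glosses)

    def forward(self, frames):          # frames: (batch, time, feat_dim)
        return self.head(self.encoder(frames))

logits = SignTransformer()(torch.randn(2, 60, 512))
print(logits.shape)                     # (2, 60, 1000)
```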
Funding: funded by Researchers Supporting Project Number (RSPD2024R947), King Saud University, Riyadh, Saudi Arabia.
Abstract: Hand gestures have been a significant mode of communication since the advent of human civilization. By facilitating human-computer interaction (HCI), hand gesture recognition (HGRoc) technology is crucial for seamless and error-free HCI. HGRoc technology is pivotal in healthcare and in communication for the deaf community. Despite significant advancements in computer vision-based gesture recognition for language understanding, two considerable challenges persist in this field: (a) only a limited set of common gestures is considered, and (b) processing multiple channels of information across a network takes huge computational time during discriminative feature extraction. Therefore, a novel hand vision-based convolutional neural network (CNN) model, named HVCNNM, is proposed, offering several benefits: notably enhanced accuracy, robustness to variations, real-time performance, reduced channels, and scalability. Such models can also be optimized for real-time performance, learn from large amounts of data, and scale to complex recognition tasks for efficient human-computer interaction. The proposed model was evaluated on two challenging datasets, namely the Massey University Dataset (MUD) and the American Sign Language (ASL) Alphabet Dataset (ASLAD), achieving accuracies of 99.23% and 99.00%, respectively. These results demonstrate the effectiveness of CNNs as a promising HGRoc approach. The findings suggest that the proposed model has potential roles in applications such as sign language recognition, human-computer interaction, and robotics.
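Because the abstract does not give the HVCNNM layout, the following is only a generic compact gesture-classification CNN for orientation; the input size, channel widths, and 26-class head (e.g., an ASL alphabet) are assumptions.

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """Compact CNN for static hand-gesture images (illustrative, not HVCNNM).

    Assumes 64x64 RGB crops and 26 gesture classes, e.g. an ASL alphabet.
    """
    def __init__(self, num_classes=26):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),        # global pooling keeps the head small
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

print(GestureCNN()(torch.randn(4, 3, 64, 64)).shape)   # (4, 26)
```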
Funding: supported by the Natural Science Foundation of Jiangsu Province (No. BK20230696).
Abstract: Electric power training is essential for ensuring the safety and reliability of the system. In this study, we introduce a novel Abnormal Action Recognition (AAR) system that utilizes a Lightweight Pose Estimation Network (LPEN) to efficiently and effectively detect abnormal fall-down and trespass incidents in electric power training scenarios. The LPEN network, comprising three stages (MobileNet, Initial Stage, and Refinement Stage), is employed to swiftly extract image features, detect human key points, and refine them for accurate analysis. Subsequently, a Pose-aware Action Analysis Module (PAAM) captures the positional coordinates of human skeletal points in each frame. Finally, an Abnormal Action Inference Module (AAIM) evaluates whether abnormal fall-down or unauthorized trespass behavior is occurring. For fall-down recognition, three criteria are considered: falling speed, the main angles of skeletal points, and the person's bounding box. To identify unauthorized trespass, emphasis is placed on the position of the ankles. Extensive experiments validate the effectiveness and efficiency of the proposed system in ensuring the safety and reliability of electric power training.
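As a hedged illustration of how those criteria could combine, the sketch below flags a fall from hip-keypoint speed, torso angle, and bounding-box aspect ratio, and flags trespass from the ankle position; every threshold is a placeholder, since the abstract gives no values.

```python
from matplotlib.path import Path

def is_fall(hip_y_prev, hip_y_curr, dt, torso_angle_deg, bbox_w, bbox_h,
            speed_thresh=1.2, angle_thresh=45.0, aspect_thresh=1.0):
    """Flag a fall from three cues: falling speed, torso angle, bounding box.

    All thresholds are illustrative placeholders. hip_y is the vertical hip
    keypoint position in image coordinates, so moving down increases y.
    """
    falling_speed = (hip_y_curr - hip_y_prev) / dt   # downward = positive
    tilted = torso_angle_deg > angle_thresh          # torso far from vertical
    lying_box = bbox_w / bbox_h > aspect_thresh      # box wider than tall
    return (falling_speed > speed_thresh) and tilted and lying_box

def is_trespass(ankle_xy, restricted_polygon):
    """Flag trespass when an ankle keypoint lies inside a restricted area."""
    return Path(restricted_polygon).contains_point(ankle_xy)

print(is_fall(0.9, 1.6, 0.5, 70.0, bbox_w=120, bbox_h=60))          # True
print(is_trespass((2.0, 2.0), [(0, 0), (5, 0), (5, 5), (0, 5)]))    # True
```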
Funding: supported by the BK21 FOUR Program of the National Research Foundation of Korea funded by the Ministry of Education (NRF5199991014091). Seok-Won Lee's work was supported by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under the Artificial Intelligence Convergence Innovation Human Resources Development grant (IITP-2024-RS-2023-00255968) funded by the Korea government (MSIT).
Abstract: Although sentiment analysis is pivotal to understanding user preferences, existing models face significant challenges in handling context-dependent sentiments, sarcasm, and nuanced emotions. This study addresses these challenges by integrating ontology-based methods with deep learning models, thereby enhancing sentiment analysis accuracy in complex domains such as film reviews and restaurant feedback. The framework comprises explicit topic recognition followed by implicit topic identification, to mitigate topic interference in subsequent sentiment analysis. For the sentiment analysis itself, we develop an expanded sentiment lexicon based on domain-specific corpora by leveraging techniques such as word-frequency analysis and word embedding. Furthermore, we introduce a sentiment recognition method based on both ontology-derived sentiment features and sentiment lexicons. We evaluate the performance of our system on a dataset of 10,500 restaurant reviews, focusing on sentiment classification accuracy. The incorporation of specialized lexicons and ontology structures enables the framework to discern subtle sentiment variations and context-specific expressions, thereby improving overall sentiment-analysis performance. Experimental results demonstrate that the integration of ontology-based methods and deep learning models significantly improves sentiment analysis accuracy.
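One way to read the lexicon-expansion step is as embedding-neighbourhood propagation of seed polarities. The sketch below is a hedged toy version of that idea; the embedding source, thresholds, and polarity-inheritance rule are all assumptions rather than the paper's procedure.

```python
import numpy as np

def expand_lexicon(seed_lexicon, embeddings, vocab, top_k=5, sim_thresh=0.6):
    """Expand a sentiment lexicon with embedding neighbours of seed words.

    seed_lexicon : dict word -> polarity (+1 / -1)
    embeddings   : dict word -> unit-normalized vector (any pretrained source)
    vocab        : candidate domain words, e.g. the most frequent terms
                   found by word-frequency analysis
    """
    expanded = dict(seed_lexicon)
    for seed, polarity in seed_lexicon.items():
        if seed not in embeddings:
            continue
        sims = [(float(embeddings[seed] @ embeddings[w]), w)
                for w in vocab if w in embeddings and w not in expanded]
        for sim, w in sorted(sims, reverse=True)[:top_k]:
            if sim >= sim_thresh:
                expanded[w] = polarity   # neighbour inherits the seed polarity
    return expanded

# Toy example with 2-D "embeddings" (real use: word2vec/GloVe vectors).
emb = {w: v / np.linalg.norm(v) for w, v in {
    "tasty": np.array([1.0, 0.1]), "delicious": np.array([0.9, 0.2]),
    "bland": np.array([-1.0, 0.3])}.items()}
print(expand_lexicon({"tasty": +1}, emb, ["delicious", "bland"]))
```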
Abstract: Handwritten character recognition (HCR) involves identifying characters in images and documents from sources such as forms, surveys, questionnaires, and signatures, and transforming them into a machine-readable format for subsequent processing. Successfully recognizing complex and intricately shaped handwritten characters remains a significant obstacle. The use of convolutional neural networks (CNNs) in recent developments has notably advanced HCR by leveraging their ability to extract discriminative features from extensive sets of raw data. Because of the absence of pre-existing datasets in the Kurdish language, we created a Kurdish handwritten dataset called KurdSet. It covers Kurdish characters, digits, texts, and symbols, was collected from 1,560 participants, and contains 45,240 characters; in this study we use only the characters. We utilize various models, including InceptionV3, Xception, DenseNet121, and a custom CNN model. To demonstrate the quality of KurdSet, we compared it to the Arabic handwritten character recognition dataset (AHCD) by applying the same models to both datasets. Model performance is evaluated using test accuracy, which measures the percentage of correctly classified characters in the evaluation phase. All models performed well in the training phase; DenseNet121 exhibited the highest accuracy among the models, achieving 99.80% on the Kurdish dataset, while the Xception model achieved 98.66% on the Arabic dataset.
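A hedged sketch of the transfer-learning setup such experiments typically use is shown below: a pretrained DenseNet121 with its classifier head replaced. The class count and input size are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical setup: fine-tune DenseNet121 for a character-class problem.
num_classes = 35                          # illustrative character-class count
model = models.densenet121(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

# Character crops are resized to 224x224 RGB to match the backbone.
logits = model(torch.randn(8, 3, 224, 224))
print(logits.shape)                       # (8, 35)
```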
Funding: funded by the Henan Provincial Science and Technology Research Project (222102210086) and the Starry Sky Creative Space Innovation Incubation Project of Zhengzhou University of Light Industry (2023ZCKJ211).
Abstract: This study proposes a pose estimation-convolutional neural network-bidirectional gated recurrent unit (PSE-CNN-BiGRU) fusion model for human posture recognition, addressing the low accuracy of abnormal posture recognition caused by the loss of feature information and degraded overall detection performance in complex home environments. Firstly, a deep convolutional network is integrated with the Mediapipe framework to extract high-precision, multi-dimensional information from the key points of the human skeleton, thereby obtaining a human posture feature set. Thereafter, a double-layer BiGRU is utilized to extract multi-layer, bidirectional temporal features from the posture feature set, and a CNN with an exponential linear unit (ELU) activation function performs deep convolution on the feature map to extract the spatial features of the human posture. Furthermore, a squeeze-and-excitation network (SENet) module is introduced to adaptively learn the importance weights of each channel, enhancing the network's focus on important features. Finally, comparative experiments are performed on available datasets, including the public human activity recognition using smartphones dataset (UCIHAR), the public human activity recognition 70-plus dataset (HAR70PLUS), and the home abnormal behavior recognition dataset (HABRD) developed independently by the authors' team. The results show that the average accuracy of the proposed PSE-CNN-BiGRU fusion model is 99.56%, 89.42%, and 98.90% on these datasets, respectively, which is 5.24%, 5.83%, and 3.19% higher than the average accuracy of the five models in the comparative literature, including CNN, GRU, and others. The F1-score for abnormal posture recognition reaches 98.84% (heartache), 97.18% (fall), 99.6% (bellyache), and 98.27% (climbing) on the self-built HABRD dataset, verifying the effectiveness, generalization, and robustness of the proposed model.
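The SENet module mentioned here follows the standard squeeze-and-excitation pattern; a minimal sketch of that pattern (with an assumed reduction ratio) is given below.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation channel attention (Hu et al., 2018).

    Squeezes each channel to a scalar by global average pooling, then
    learns per-channel importance weights through a small bottleneck MLP.
    The reduction ratio of 16 is the common default, assumed here.
    """
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                   # x: (batch, channels, H, W)
        w = self.fc(x.mean(dim=(2, 3)))     # squeeze, then excitation weights
        return x * w[:, :, None, None]      # rescale each channel

print(SEBlock(64)(torch.randn(2, 64, 8, 8)).shape)   # (2, 64, 8, 8)
```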
Funding: supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDC02040300).
Abstract: RFID-based human activity recognition (HAR) attracts attention due to its convenience, noninvasiveness, and privacy protection. Existing RFID-based HAR methods use modeling, CNNs, or LSTMs to extract features effectively, but they have two shortcomings: 1) they require complex hand-crafted data cleaning processes, and 2) they only address single-person activity recognition based on specific RF signals. To solve these problems, this paper proposes a novel device-free method based on a Time-streaming Multiscale Transformer, called TransTM. The model leverages the Transformer's powerful data-fitting capabilities to take raw RFID RSSI data as input without pre-processing. Concretely, we propose a multiscale convolutional hybrid Transformer to capture behavioral features, recognizing both single-human activities and human-to-human interactions. Compared with existing CNN- and LSTM-based methods, the Transformer-based method has greater data-fitting power, generalization, and scalability. Furthermore, using RF signals, our method achieves an excellent classification effect on human behavior-based classification tasks. Experimental results on real RFID datasets show that the model achieves a high average recognition accuracy (99.1%). The dataset we collected for detecting RFID-based indoor human activities will be published.
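A hedged sketch of this multiscale-convolution-plus-Transformer idea is given below, with parallel Conv1d branches at several kernel sizes feeding a small Transformer encoder; the tag count, kernel sizes, and model width are assumptions rather than the TransTM configuration.

```python
import torch
import torch.nn as nn

class MultiscaleConvTransformer(nn.Module):
    """Illustrative multiscale-conv + Transformer classifier for raw RSSI.

    Parallel Conv1d branches with different kernel sizes capture behaviour
    at several time scales; a Transformer encoder then models the stream.
    All sizes are assumptions, not the TransTM configuration.
    """
    def __init__(self, num_tags=12, num_classes=10, d_model=96):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(num_tags, d_model // 3, k, padding=k // 2)
            for k in (3, 7, 15)])
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, rssi):                        # (batch, time, num_tags)
        x = rssi.transpose(1, 2)                    # -> (batch, tags, time)
        x = torch.cat([b(x) for b in self.branches], dim=1)
        x = x.transpose(1, 2)                       # -> (batch, time, d_model)
        return self.head(self.encoder(x).mean(dim=1))

print(MultiscaleConvTransformer()(torch.randn(2, 128, 12)).shape)  # (2, 10)
```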
Funding: supported by the Open Project of the Sichuan Provincial Key Laboratory of Philosophy and Social Science for Language Intelligence in Special Education under Grant No. YYZN-2023-4 and the Ph.D. Fund of Chengdu Technological University under Grant No. 2020RC002.
Abstract: The fingerprinting-based approach using wireless local area networks (WLANs) is widely used for indoor localization. However, constructing the fingerprint database is quite time-consuming; in particular, when the position of an access point (AP) or a wall changes, updating the fingerprint database in real time is difficult. An indoor localization approach that has a low implementation cost, excellent real-time performance, and high localization accuracy, while fully considering complex indoor environment factors, is preferred in location-based services (LBS) applications. In this paper, we propose a fine-grained grid computing (FGGC) model to achieve decimeter-level localization accuracy. Reference points (RPs) are generated in the grid by the FGGC model. Then, the received signal strength (RSS) values at each RP are calculated with attenuation factors such as the frequency band, the three-dimensional propagation distance, and the walls in complex environments. As a result, the fingerprint database can be established automatically without manual measurement, and the efficiency and cost of building it are superior to previous methods. The proposed approach, which estimates the position step by step from the approximate grid location to the fine-grained location, achieves higher real-time performance and localization accuracy simultaneously. The mean error of the proposed model is 0.36 m, far lower than that of previous approaches. Thus, the model is feasible for improving the efficiency and accuracy of Wi-Fi indoor localization, and it maintains high accuracy with a fast running speed even with a large grid size. The results indicate that the proposed method is also suitable for precision marketing, indoor navigation, and emergency rescue.
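To illustrate how such synthetic fingerprints can be generated, the sketch below uses a log-distance path-loss model with a per-wall penalty; the exponent, wall loss, and reference power are illustrative values to be calibrated per site, and the paper's exact attenuation model may differ.

```python
import numpy as np

def predicted_rss(ap_pos, rp_pos, walls_crossed, tx_power=-30.0,
                  path_loss_exp=3.0, wall_loss_db=5.0, d0=1.0):
    """Predict RSS at a reference point with a log-distance path-loss model.

    tx_power is the RSS (dBm) at reference distance d0 (1 m); the exponent
    and per-wall loss are illustrative and must be calibrated per site.
    """
    # 3-D propagation distance between AP and reference point.
    d = max(np.linalg.norm(np.asarray(ap_pos) - np.asarray(rp_pos)), d0)
    return (tx_power
            - 10.0 * path_loss_exp * np.log10(d / d0)   # distance loss
            - wall_loss_db * walls_crossed)             # per-wall penalty

# Build a fingerprint for one grid RP against several hypothetical APs.
aps = [(0.0, 0.0, 2.5), (10.0, 4.0, 2.5), (5.0, 12.0, 2.5)]
rp = (3.0, 3.0, 1.2)
fingerprint = [predicted_rss(ap, rp, walls_crossed=w)
               for ap, w in zip(aps, [0, 1, 2])]
print(np.round(fingerprint, 1))
```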