Journal Articles
17,115 articles found
1. The chest X ray in pulmonary embolism: Westermark sign, Hampton's Hump and Palla's sign. What's the difference?
Authors: Tan Si Hong Shawn, Lim Xin Yan, Fatimah Lateef. 《Journal of Acute Disease》, 2018, No. 3, pp. 99-102 (4 pages)
Pulmonary embolism (PE), with an incidence of about 60 per 100,000 annually, can be a life-threatening disease if it is not treated promptly. It has been estimated that some 10% of PE patients die within the first hour of the event, and untreated PE has a mortality of about 30%. PE is a treatable condition if it is suspected and diagnosed early. The chest radiograph is still the first investigation ordered in patients presenting with cardiorespiratory symptoms or symptoms suggestive of PE, and the CXR is also helpful in identifying or excluding other conditions or diagnoses. Thus, knowing and understanding some of the more specific CXR signs can be useful. We suggest that physicians be aware of and utilize CXR findings such as Palla's sign, Westermark sign and Hampton's hump to help with the diagnosis of PE and to exclude other conditions that can mimic venous thromboembolism. Even though these signs are not common, their presence, even in a patient without a high pretest probability of PE, should prompt further investigations such as a D-dimer test, lung scintigraphy or computed tomography pulmonary angiography as required.
Keywords: pulmonary embolism; Palla's sign; Hampton's hump; Westermark sign
2. Source localization in signed networks with effective distance
Authors: 马志伟, 孙蕾, 丁智国, 黄宜真, 胡兆龙. 《Chinese Physics B》 (SCIE, EI, CAS, CSCD), 2024, No. 2, pp. 577-585 (9 pages)
While progress has been made in information source localization, it has overlooked the prevalent friend and adversarial relationships in social networks. This paper addresses this gap by focusing on source localization in signed network models. Leveraging the topological characteristics of signed networks and transforming the propagation probability into effective distance, we propose an optimization method for observer selection. Additionally, by using the reverse propagation algorithm we present a method for information source localization in signed networks. Extensive experimental results demonstrate that a higher proportion of positive edges within signed networks contributes to more favorable source localization, and the higher the ratio of propagation rates between positive and negative edges, the more accurate the source localization becomes. Interestingly, this aligns with our observation that, in reality, the number of friends tends to be greater than the number of adversaries, and the likelihood of information propagation among friends is often higher than among adversaries. In addition, a source located at the periphery of the network is not easy to identify. Furthermore, our proposed observer selection method based on effective distance achieves higher operational efficiency and exhibits higher accuracy in information source localization, compared with three observer selection strategies based on classical full-order neighbor coverage.
Keywords: complex networks; signed networks; source localization; effective distance
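The entry above converts propagation probabilities into an effective distance before selecting observers for localization. Below is a minimal, hypothetical sketch of that idea, assuming the common Brockmann–Helbing style mapping d_eff = 1 − ln p and illustrative edge probabilities; it is not the paper's actual algorithm or data.

```python
import math
import networkx as nx

# Toy signed network: each directed edge carries a propagation probability p.
# Positive (friend) edges are assumed to propagate more readily than negative ones.
edges = [
    ("a", "b", +1, 0.8), ("b", "c", +1, 0.7),
    ("c", "d", -1, 0.2), ("b", "d", -1, 0.3),
]

G = nx.DiGraph()
for u, v, sign, p in edges:
    # Effective distance: small for likely transmissions, large for unlikely ones.
    G.add_edge(u, v, sign=sign, eff_dist=1.0 - math.log(p))

# Observers would then be ranked by how "close" (in effective distance) they sit
# to the rest of the network; here we just print shortest effective distances.
for source in G.nodes:
    lengths = nx.single_source_dijkstra_path_length(G, source, weight="eff_dist")
    print(source, {k: round(v, 2) for k, v in lengths.items()})
```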
3. Recent Advances on Deep Learning for Sign Language Recognition
Authors: Yanqiong Zhang, Xianwei Jiang. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 6, pp. 2399-2450 (52 pages)
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: sign language recognition; deep learning; artificial intelligence; computer vision; gesture recognition
4. A Hybrid Feature Fusion Traffic Sign Detection Algorithm Based on YOLOv7
Authors: Bingyi Ren, Juwei Zhang, Tong Wang. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 7, pp. 1425-1440 (16 pages)
Autonomous driving technology has entered a period of rapid development, and traffic sign detection is one of its important tasks. Existing target detection networks struggle to adapt to scenarios where target sizes are seriously imbalanced, and traffic sign targets are small and have unclear features, which makes detection more difficult. Therefore, we propose a Hybrid Feature Fusion Traffic Sign detection algorithm based on YOLOv7 (HFFTYOLO). First, a self-attention mechanism is incorporated at the end of the backbone network to calculate feature interactions within scales. Secondly, the cross-scale fusion part of the neck introduces a bottom-up multi-path fusion method, and reuse paths are designed at the end of the neck, paying particular attention to cross-scale fusion of high-level features. In addition, we found an appropriate channel width through extensive experiments and reduced superfluous parameters. In terms of training, a new regression loss, CMPDIoU, is proposed, which not only considers the problem of loss degradation when the aspect ratio is the same but the width and height differ, but also enables the penalty term to change dynamically at different scales. Finally, our proposed improved method shows excellent results on the TT100K dataset. Compared with the baseline model, without increasing the number of parameters or the computational complexity, AP0.5 and AP increased by 2.2% and 2.7%, respectively, reaching 92.9% and 58.1%.
Keywords: small target detection; YOLOv7; traffic sign detection; regression loss
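The CMPDIoU loss above is specific to the paper and its exact form is not given in the abstract; purely as a point of reference, here is a hedged sketch of a plain IoU-based box regression term in PyTorch, onto which distance-style penalty terms (as in DIoU/MPDIoU-family losses) are typically added.

```python
import torch

def iou_loss(pred, target, eps=1e-7):
    """Plain 1 - IoU loss for axis-aligned boxes in (x1, y1, x2, y2) format.

    Generic baseline term only; the paper's CMPDIoU adds further penalties
    (e.g., for boxes with equal aspect ratio but different width and height)
    that are not reproduced here.
    """
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    return (1.0 - iou).mean()

pred = torch.tensor([[10.0, 10.0, 50.0, 50.0]])
target = torch.tensor([[12.0, 8.0, 48.0, 52.0]])
print(iou_loss(pred, target))
```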
5. Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 6, pp. 2605-2625 (21 pages)
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures; simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures. We then concatenate the critical information of the first stream and the hierarchical features of the second stream to produce multiple levels of fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our Lab JSL dataset and a publicly available Arabic sign language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL); hand gesture recognition; geometric feature; distance feature; angle feature; GoogleNet
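As a rough illustration of the fusion-then-classify pipeline described above, the sketch below concatenates hypothetical handcrafted skeleton features with deep CNN features and trains a kernel SVM; the feature names, dimensions, class count, and selection step are placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples = 200

# Placeholder features: 48-d skeleton-based (distances/angles) + 256-d deep features.
handcrafted = rng.normal(size=(n_samples, 48))
deep_feats = rng.normal(size=(n_samples, 256))
labels = rng.integers(0, 10, size=n_samples)        # 10 hypothetical JSL classes

fused = np.concatenate([handcrafted, deep_feats], axis=1)

# Scale -> keep a subset of informative dimensions -> RBF-kernel SVM.
clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=64),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
clf.fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```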
6. “Keyboard sign” and “coffee bean sign” in the prenatal diagnosis of ileal atresia: A case report
Authors: Zhi-Hui Fei, Qi-Yi Zhou, Ling Fan, Chan Yin. 《World Journal of Clinical Cases》 (SCIE), 2024, No. 24, pp. 5622-5627 (6 pages)
BACKGROUND: Ileal atresia is a congenital abnormality in which there is significant stenosis or complete absence of a portion of the ileum. The overall diagnostic accuracy of prenatal ultrasound in detecting jejunal and ileal atresia is low. We report a case of ileal atresia diagnosed prenatally by ultrasound examination with the “keyboard sign” and “coffee bean sign”. CASE SUMMARY: We report a case of ileal atresia diagnosed in utero at 31 weeks of gestation. Prenatal ultrasound examination revealed two rows of intestines arranged in an ‘S’ shape in the middle abdomen. The inner diameters were 1.7 cm and 1.6 cm, respectively. A typical “keyboard sign” was observed. The intestinal canal behind the “keyboard sign” showed an irregular strong echo, with no normal intestinal wall structure, showing a typical “coffee bean sign”. Termination of the pregnancy and autopsy findings confirmed the diagnosis. CONCLUSION: The prenatal diagnosis of ileal atresia is difficult. The sonographic features of the “keyboard sign” and “coffee bean sign” are helpful in diagnosing the location of congenital jejunal and ileal atresia.
Keywords: ileal atresia; prenatal diagnosis; keyboard sign; coffee bean sign
7. A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 7, pp. 1-40 (40 pages)
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) in the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet) and various other deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language Recognition; deep neural networks; artificial intelligence; transfer learning; hybrid network models
8. Selective His bundle pacing eliminates crochetage sign: A case report
Authors: Yan-Guang Mu, Ke-Sen Liu. 《World Journal of Clinical Cases》 (SCIE), 2024, No. 22, pp. 5276-5282 (7 pages)
BACKGROUND: Crochetage sign is a specific electrocardiographic manifestation of ostium secundum atrial septal defects (ASDs) and is associated with the severity of the left-to-right shunt. Herein, we report a case of selective His bundle pacing (S-HBP) that eliminated the crochetage sign in a patient with an ostium secundum ASD. CASE SUMMARY: A 77-year-old man was admitted with a 2-year history of chest tightness and shortness of breath. Transthoracic echocardiography revealed an ostium secundum ASD. Twelve-lead electrocardiogram revealed atrial fibrillation with a prolonged R-R interval, incomplete right bundle branch block, and crochetage sign. The patient was diagnosed with an ostium secundum ASD, atrial fibrillation with a second-degree atrioventricular block, and heart failure. The patient was treated with selective His bundle pacemaker implantation. After the procedure, the crochetage sign disappeared during His bundle pacing on the electrocardiogram. CONCLUSION: S-HBP eliminated the crochetage sign on the electrocardiogram. Crochetage sign may be a manifestation of a conduction system disorder.
Keywords: crochetage sign; atrial septal defect; pacemaker; selective His bundle pacing; case report
9. Correlation between abdominal computed tomography signs and postoperative prognosis for patients with colorectal cancer
Authors: Shao-Min Yang, Jie-Mei Liu, Rui-Ping Wen, Yu-Dong Qian, Jing-Bo He, Jing-Song Sun. 《World Journal of Gastrointestinal Surgery》 (SCIE), 2024, No. 7, pp. 2145-2156 (12 pages)
BACKGROUND: Patients with different stages of colorectal cancer (CRC) exhibit different abdominal computed tomography (CT) signs. Therefore, the influence of CT signs on CRC prognosis must be determined. AIM: To observe abdominal CT signs in patients with CRC and analyze the correlation between the CT signs and postoperative prognosis. METHODS: The clinical history and CT imaging results of 88 patients with CRC who underwent radical surgery at Xingtan Hospital Affiliated to Shunde Hospital of Southern Medical University were retrospectively analyzed. Univariate and multivariate Cox regression analyses were used to explore the independent risk factors for postoperative death in patients with CRC. The three-year survival rate was analyzed using the Kaplan-Meier curve, and the correlation between postoperative survival time and abdominal CT signs in patients with CRC was analyzed using Spearman correlation analysis. RESULTS: For patients with CRC, the three-year survival rate was 73.86%. The death group exhibited more severe characteristics than the survival group. A multivariate Cox regression model analysis showed that body mass index (BMI), degree of peri-intestinal infiltration, tumor size, and lymph node CT value were independent factors influencing postoperative death (P < 0.05 for all). Patients with characteristics typical of the death group had a low three-year survival rate (log-rank χ² = 66.487, 11.346, 12.500, and 27.672, respectively; P < 0.05 for all). The survival time of CRC patients was negatively correlated with BMI, degree of peri-intestinal infiltration, tumor size, lymph node CT value, mean tumor long-axis diameter, and mean tumor short-axis diameter (r = -0.559, 0.679, -0.430, -0.585, -0.425, and -0.385, respectively; P < 0.05 for all). BMI was positively correlated with the degree of peri-intestinal invasion, lymph node CT value, and mean tumor short-axis diameter (r = 0.303, 0.431, and 0.437, respectively; P < 0.05 for all). CONCLUSION: The degree of peri-intestinal infiltration, tumor size, and lymph node CT value are crucial for evaluating the prognosis of patients with CRC.
Keywords: colorectal cancer; abdominal; computed tomography signs; radical surgery; prognosis; correlation
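To illustrate the kind of survival analysis used above (multivariate Cox regression plus Kaplan-Meier curves), here is a minimal sketch with the lifelines library; the library's bundled demo dataset stands in for the study's covariates (BMI, peri-intestinal infiltration, tumor size, lymph node CT value), so none of the study's data or results are reproduced.

```python
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.datasets import load_rossi

# Demo dataset standing in for the study's follow-up data:
# "week" is the follow-up time, "arrest" is the event indicator.
df = load_rossi()

# Multivariate Cox proportional-hazards model, as in the study's prognosis analysis.
cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")
cph.print_summary()

# Kaplan-Meier estimate of overall survival over follow-up.
km = KaplanMeierFitter()
km.fit(df["week"], event_observed=df["arrest"])
print(km.survival_function_.tail())
```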
10. Existence of Monotone Positive Solution for a Fourth-Order Three-Point BVP with Sign-Changing Green's Function
Authors: Junrui Yue, Yun Zhang, Qingyue Bai. 《Open Journal of Applied Sciences》, 2024, No. 1, pp. 63-69 (7 pages)
This paper is concerned with a fourth-order three-point boundary value problem whose associated Green's function changes sign; we discuss the existence of monotone positive solutions by applying fixed point theory in cones and an iterative technique.
Keywords: fourth-order three-point boundary value problem; sign-changing Green's function; fixed point index; iterative technique; monotone positive solution; existence
11. Continuous Sign Language Recognition Based on Spatial-Temporal Graph Attention Network (Cited by 2)
Authors: Qi Guo, Shujun Zhang, Hui Li. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2023, No. 3, pp. 1653-1670 (18 pages)
Continuous sign language recognition (CSLR) is challenging due to the complexity of video backgrounds, hand gesture variability, and temporal modeling difficulties. This work proposes a CSLR method based on a spatial-temporal graph attention network that focuses on the essential features of the video series. The method considers local details of sign language movements by taking information on joints and bones as inputs and constructing a spatial-temporal graph to reflect inter-frame relevance and the physical connections between nodes. A graph-based multi-head attention mechanism is utilized with adjacency matrix calculation for better local-feature exploration, and short-term motion correlation modeling is completed via a temporal convolutional network. We adopted BLSTM to learn long-term dependence and connectionist temporal classification to align the word-level sequences. The proposed method achieves competitive results regarding word error rate (1.59%) on the Chinese Sign Language dataset and the mean Jaccard Index (65.78%) on the ChaLearn LAP Continuous Gesture Dataset.
Keywords: continuous sign language recognition; graph attention network; bidirectional long short-term memory; connectionist temporal classification
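The entry above aligns frame-level predictions with word-level labels using connectionist temporal classification (CTC). The sketch below shows the standard PyTorch CTC loss on dummy tensors, as a hedged illustration of that alignment step only; the graph-attention and BLSTM parts of the paper are not reproduced, and all shapes are placeholders.

```python
import torch
import torch.nn as nn

T, N, C = 50, 4, 30      # frames per clip, batch size, vocabulary size (index 0 = blank)

# Dummy frame-level scores as a stand-in for the BLSTM output.
logits = torch.randn(T, N, C)
log_probs = logits.log_softmax(dim=2)        # CTC expects log-probabilities, shape (T, N, C)

# Dummy word-level targets of varying length (indices 1..C-1; 0 is the blank symbol).
target_lengths = torch.tensor([7, 5, 9, 6])
targets = torch.randint(1, C, (int(target_lengths.sum()),))
input_lengths = torch.full((N,), T, dtype=torch.long)

ctc = nn.CTCLoss(blank=0, zero_infinity=True)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())
```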
12. Rotation, Translation and Scale Invariant Sign Word Recognition Using Deep Learning (Cited by 2)
Authors: Abu Saleh Musa Miah, Jungpil Shin, Md. Al Mehedi Hasan, Md Abdur Rahim, Yuichi Okuyama. 《Computer Systems Science & Engineering》 (SCIE, EI), 2023, No. 3, pp. 2521-2536 (16 pages)
Communication between people with disabilities and people who do not understand sign language is a growing social need and can be a tedious task. One of the main functions of sign language is to communicate through hand gestures, so recognition of hand gestures has become an important challenge for the recognition of sign language. Many existing models can produce good accuracy, but when tested with rotated or translated images they may struggle to maintain that performance. To resolve these challenges of hand gesture recognition, we propose a Rotation, Translation and Scale-invariant sign word recognition system using a convolutional neural network (CNN). We followed three steps in our work: rotated, translated and scaled (RTS) version dataset generation, gesture segmentation, and sign word classification. Firstly, we enlarged a benchmark dataset of 20 sign words by applying different amounts of rotation, translation and scaling to the original images to create the RTS version dataset. Then we applied the gesture segmentation technique. The segmentation consists of three levels: i) Otsu thresholding with YCbCr, ii) morphological analysis (dilation through opening morphology), and iii) the watershed algorithm. Finally, our designed CNN model was trained to classify the hand gesture as well as the sign word. Our model has been evaluated using the twenty sign word dataset, the five sign word dataset, and the RTS versions of these datasets. We achieved 99.30% accuracy on the twenty sign word dataset, 99.10% accuracy on its RTS version, 100% accuracy on the five sign word dataset, and 98.00% accuracy on its RTS version. Furthermore, our model achieves results competitive with state-of-the-art methods in sign word recognition.
Keywords: sign word recognition; convolutional neural network (CNN); rotation, translation and scaling (RTS); Otsu segmentation
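As a rough, hypothetical sketch of the segmentation stages named above (Otsu thresholding in YCbCr, opening morphology, watershed), the snippet below chains the corresponding OpenCV calls on an arbitrary input image; the file name, kernel size, and thresholds are illustrative, not the authors' settings.

```python
import cv2
import numpy as np

img = cv2.imread("hand_gesture.jpg")                 # any BGR gesture image (placeholder path)

# 1) Otsu thresholding on the Cr channel of the YCbCr (YCrCb in OpenCV) space.
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
cr = ycrcb[:, :, 1]
_, mask = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 2) Morphological opening (erosion then dilation) to clean up the skin mask.
kernel = np.ones((5, 5), np.uint8)
opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)

# 3) Watershed to separate the hand from touching background regions.
sure_bg = cv2.dilate(opened, kernel, iterations=3)
dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)

segmented = img.copy()
segmented[markers == -1] = (0, 0, 255)               # mark watershed boundaries in red
cv2.imwrite("segmented_gesture.png", segmented)
```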
13. Simulation based on a modified social force model for sensitivity to emergency signs in subway station (Cited by 1)
Authors: 蔡征宇, 周汝, 崔银锴, 王妍, 蒋军成. 《Chinese Physics B》 (SCIE, EI, CAS, CSCD), 2023, No. 2, pp. 175-183 (9 pages)
The subway is the primary travel tool for urban residents in China. Due to the complex structure of the subway and the high personnel density in rush hours, subway evacuation capacity is critical. The subway evacuation model is explored in this work by combining an improved social force model with a view radius using the Vicsek model. Pedestrians are divided into two categories based on different force models: the first category is sensitive pedestrians, who respond normally to emergency signs; the second category is insensitive pedestrians. By simulating different proportions of insensitive pedestrians, we find that the escape time is directly proportional to the number of insensitive pedestrians and inversely proportional to the view radius. However, when the view radius is large enough, the escape time does not change significantly, and the evacuation of people in a small-view-radius environment tends to be integrated. With the improvement of view radius conditions, the escape time changes more obviously with the proportion of insensitive pedestrians. A new emergency sign layout is proposed, and the simulations show that the proposed layout can effectively reduce the escape time in a small-view-radius environment. However, the evacuation effect of the new escape sign layout in a large-view-radius environment is not apparent. In this case, the exit setting emerges as an additional factor affecting the escape time.
Keywords: modified social force model; emergency evacuation; insensitive pedestrians; emergency signs layout
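For readers unfamiliar with the underlying model, below is a minimal sketch of one Helbing-style social-force update step in NumPy: a driving term toward a desired exit plus exponential pedestrian repulsion. The constants are arbitrary, and the paper's view-radius/Vicsek modification and sign-sensitivity split are omitted, so this is an assumption-laden simplification rather than the authors' simulation.

```python
import numpy as np

def social_force_step(pos, vel, exit_pos, dt=0.05, tau=0.5,
                      v0=1.34, A=2.0, B=0.3, radius=0.3):
    """One explicit-Euler step of a simplified Helbing-style social force model.

    pos, vel: (N, 2) arrays of pedestrian positions and velocities (SI-like units).
    Only the driving term and pairwise repulsion are modeled, in acceleration units.
    """
    # Driving term: relax toward the desired speed v0 in the direction of the exit.
    to_exit = exit_pos - pos
    desired_dir = to_exit / np.linalg.norm(to_exit, axis=1, keepdims=True)
    acc = (v0 * desired_dir - vel) / tau

    # Pairwise repulsion A * exp((2*radius - d_ij) / B) pushing pedestrians apart.
    for i in range(len(pos)):
        diff = pos[i] - np.delete(pos, i, axis=0)
        dist = np.linalg.norm(diff, axis=1, keepdims=True)
        acc[i] += np.sum(A * np.exp((2 * radius - dist) / B) * diff / dist, axis=0)

    vel = vel + acc * dt
    pos = pos + vel * dt
    return pos, vel

# Tiny demo: four pedestrians heading toward an exit at (10, 0).
pos = np.array([[0.0, 0.0], [0.5, 0.4], [1.0, -0.3], [0.2, 1.0]])
vel = np.zeros_like(pos)
for _ in range(100):
    pos, vel = social_force_step(pos, vel, exit_pos=np.array([10.0, 0.0]))
print(np.round(pos, 2))
```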
14. C2Net-YOLOv5: A Bidirectional Res2Net-Based Traffic Sign Detection Algorithm (Cited by 1)
Authors: Xiujuan Wang, Yiqi Tian, Kangfeng Zheng, Chutong Liu. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 11, pp. 1949-1965 (17 pages)
Rapid advancement of intelligent transportation systems (ITS) and autonomous driving (AD) has shown the importance of accurate and efficient detection of traffic signs. However, certain drawbacks, such as balancing accuracy and real-time performance, hinder the deployment of traffic sign detection algorithms in ITS and AD domains. In this study, a novel traffic sign detection algorithm was proposed based on the bidirectional Res2Net architecture to achieve an improved balance between accuracy and speed. An enhanced backbone network module, called C2Net, which uses an upgraded bidirectional Res2Net, was introduced to mitigate information loss in the feature extraction process and to achieve information complementarity. Furthermore, a squeeze-and-excitation attention mechanism was incorporated within the channel attention of the architecture to perform channel-level feature correction on the input feature map, which effectively retains valuable features while removing non-essential features. A series of ablation experiments were conducted to validate the efficacy of the proposed methodology. The performance was evaluated using two distinct datasets: the Tsinghua-Tencent 100K and the CSUST Chinese traffic sign detection benchmark 2021. On the TT100K dataset, the method achieves precision, recall, and mAP0.5 scores of 83.3%, 79.3%, and 84.2%, respectively. Similarly, on the CCTSDB 2021 dataset, the method achieves precision, recall, and mAP0.5 scores of 91.49%, 73.79%, and 81.03%, respectively. Experimental results revealed that the proposed method had superior performance compared to conventional models, including the faster region-based convolutional neural network, the single shot multibox detector, and you only look once version 5.
Keywords: target detection; traffic sign detection; autonomous driving; YOLOv5
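The squeeze-and-excitation (SE) channel attention mentioned above is a standard building block; here is a minimal PyTorch sketch of an SE module, offered as general background rather than the paper's exact C2Net configuration (channel counts and reduction ratio are placeholders).

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight channels using globally pooled statistics."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global spatial average
        self.fc = nn.Sequential(                       # excitation: two small FC layers
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)                    # (B, C) channel descriptor
        w = self.fc(w).view(b, c, 1, 1)                # per-channel weights in (0, 1)
        return x * w                                   # rescale the feature map

feat = torch.randn(2, 64, 40, 40)                      # e.g. a neck feature map
print(SEBlock(64)(feat).shape)                         # torch.Size([2, 64, 40, 40])
```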
15. A Light-Weight Deep Learning-Based Architecture for Sign Language Classification (Cited by 1)
Authors: M. Daniel Nareshkumar, B. Jaison. 《Intelligent Automation & Soft Computing》 (SCIE), 2023, No. 3, pp. 3501-3515 (15 pages)
With advancements in computing power and the overall quality of images captured on everyday cameras, a much wider range of possibilities has opened in various scenarios. This fact has several implications for deaf and dumb people, as they have a chance to communicate with a greater number of people much more easily. More than ever before, there is a plethora of information about sign language usage in the real world. Sign languages, and by extension the datasets available, are of two forms: isolated sign language and continuous sign language. The main difference between the two types is that in isolated sign language the hand signs cover individual letters of the alphabet, whereas in continuous sign language hand signs for entire words are used. This paper explores a novel deep learning architecture that uses recently published large pre-trained image models to quickly and accurately recognize the alphabets of American Sign Language (ASL). The study focuses on isolated sign language to demonstrate that a high level of classification accuracy can be achieved on the data, thereby showing that interpreters can be implemented in the real world. The newly proposed MobileNetV2 architecture serves as the backbone of this study. It is designed to run on end devices like mobile phones and infer signs from images in a relatively short amount of time. With the proposed architecture, a classification accuracy of 98.77% is achieved on Indian Sign Language (ISL) and American Sign Language (ASL), outperforming existing state-of-the-art systems.
Keywords: deep learning; machine learning; classification; filters; American Sign Language
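As a hedged sketch of the kind of MobileNetV2-backbone classifier described above, the following Keras snippet builds a small transfer-learning head for a hypothetical 26-class alphabet task; the input size, class count, optimizer, and training setup are placeholder assumptions, not the paper's configuration.

```python
import tensorflow as tf

NUM_CLASSES = 26          # hypothetical: one class per alphabet letter
IMG_SIZE = (224, 224, 3)

# Pre-trained MobileNetV2 backbone, frozen; only the new classification head trains.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE, include_top=False, weights="imagenet"
)
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)   # datasets not shown here
```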
16. Research on Traffic Sign Detection Based on Improved YOLOv8 (Cited by 1)
Authors: Zhongjie Huang, Lintao Li, Gerd Christian Krizek, Linhao Sun. 《Journal of Computer and Communications》, 2023, No. 7, pp. 226-232 (7 pages)
Aiming at solving the problem of missed detections and low accuracy when detecting traffic signs in the wild, an improved method based on YOLOv8 is proposed. Firstly, considering the characteristics of small target objects in real scenes, this paper adds blur and noise operations to the training data. Then, the asymptotic feature pyramid network (AFPN) is introduced to highlight the influence of key layer features after feature fusion while also enabling direct interaction between non-adjacent layers. Experimental results on the TT100K dataset show that, compared with YOLOv8, the detection accuracy and recall are higher.
Keywords: traffic sign detection; small object detection; YOLOv8; feature fusion
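To illustrate the blur-and-noise augmentation step mentioned above, here is a small OpenCV/NumPy sketch that applies Gaussian blur and additive Gaussian noise to a training image; the file names, kernel size, and noise level are arbitrary placeholders rather than the paper's values.

```python
import cv2
import numpy as np

def blur_and_noise(img: np.ndarray, ksize: int = 5, sigma: float = 10.0) -> np.ndarray:
    """Gaussian blur followed by additive Gaussian noise, clipped back to uint8."""
    blurred = cv2.GaussianBlur(img, (ksize, ksize), 0)
    noise = np.random.normal(0.0, sigma, blurred.shape)
    noisy = np.clip(blurred.astype(np.float32) + noise, 0, 255)
    return noisy.astype(np.uint8)

img = cv2.imread("traffic_sign.jpg")        # any training image (placeholder path)
aug = blur_and_noise(img)
cv2.imwrite("traffic_sign_aug.jpg", aug)
```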
17. A Novel Action Transformer Network for Hybrid Multimodal Sign Language Recognition
Authors: Sameena Javaid, Safdar Rizvi. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 1, pp. 523-537 (15 pages)
Sign language fills the communication gap for people with hearing and speaking ailments. It includes both visual modalities: manual gestures consisting of movements of the hands, and non-manual gestures incorporating body movements including the head, facial expressions, eyes, shoulder shrugging, etc. Previously, the two kinds of gestures have been detected separately; this may give better accuracy, but much communicational information is lost. A proper sign language mechanism is needed to detect manual and non-manual gestures together to convey the appropriate detailed message to others. Our novel proposed system, the Sign Language Action Transformer Network (SLATN), localizes hand, body, and facial gestures in video sequences. Here we employ a Transformer-style structural design as a “base network” to extract features from the spatiotemporal domain. The model learns to track individual persons and their action context across multiple frames. Furthermore, a “head network” emphasizes hand movement and facial expression simultaneously, which is often crucial to understanding sign language, using its attention mechanism to create tight bounding boxes around classified gestures. The model's performance is then compared with traditional activity recognition methods; it not only works faster but also achieves better accuracy. The model achieves an overall 82.66% testing accuracy with very considerable computational performance of 94.13 Giga Floating Point Operations per Second (G-FLOPS). Another contribution is a newly created dataset of Pakistan Sign Language for Manual and Non-Manual (PkSLMNM) gestures.
Keywords: sign language; gesture recognition; manual signs; non-manual signs; action transformer network
18. Constructing Representative Collective Signature Protocols Using the GOST R 34.10-1994 Standard
Authors: Tuan Nguyen Kim, Duy Ho Ngoc, Nikolay A. Moldovyan. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 1, pp. 1475-1491 (17 pages)
The representative collective digital signature, which was suggested by us, is built by combining the advantages of the group digital signature and the collective digital signature. This collective digital signature scheme helps to create a unique digital signature that represents a collective of people from different groups of signers and may also include individual signers. The advantage of the proposed collective signature is that it can be built on most of the well-known hard problems, such as the factorization problem, the discrete logarithm problem, and finding modular roots of large prime numbers, as well as on the current digital signature standards of the United States and the Russian Federation. In this paper, we use the discrete logarithm problem over prime finite fields, which is implemented in the GOST R 34.10-1994 digital signature standard, to build the proposed collective signature protocols. These protocols help to create collective signatures with guaranteed internal integrity and fixed size, independent of the number of members involved in forming the signature. The signature built in this study, consisting of three components (U, R, S), stores the information of all relevant signers in the U component, so tracking a signer, and countering a later “disclaim of liability” by a signer, is possible. The idea of hiding the signer's public key is also applied in the proposed protocols. This makes it easy for the signing group representative to specify which members are authorized to participate in the signature creation process.
Keywords: signing collective; signing group; discrete logarithm; group signature; collective signature; GOST standards
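For orientation, the sketch below implements a plain single-signer, GOST R 34.10-94-style signature over deliberately tiny toy parameters (r = (a^k mod p) mod q, s = (x·r + k·h) mod q). It is only a simplified illustration of the underlying discrete-logarithm scheme, not the collective (U, R, S) protocol proposed in the paper; the parameters are far too small for real use, and SHA-256 stands in for the GOST hash.

```python
import hashlib
import secrets

# Toy domain parameters (educational only): q divides p - 1 and a has order q mod p.
p, q, a = 23, 11, 2          # 2**11 % 23 == 1

def h(msg: bytes) -> int:
    """Message digest reduced mod q; SHA-256 stands in for GOST R 34.11."""
    e = int.from_bytes(hashlib.sha256(msg).digest(), "big") % q
    return e or 1

def keygen():
    x = secrets.randbelow(q - 1) + 1      # private key
    return x, pow(a, x, p)                # (x, y = a^x mod p)

def sign(msg: bytes, x: int):
    e = h(msg)
    while True:
        k = secrets.randbelow(q - 1) + 1
        r = pow(a, k, p) % q
        s = (x * r + k * e) % q
        if r and s:
            return r, s

def verify(msg: bytes, sig, y: int) -> bool:
    r, s = sig
    if not (0 < r < q and 0 < s < q):
        return False
    e = h(msg)
    v = pow(e, q - 2, q)                  # e^{-1} mod q (q is prime)
    z1, z2 = (s * v) % q, ((q - r) * v) % q
    u = (pow(a, z1, p) * pow(y, z2, p) % p) % q
    return u == r

x, y = keygen()
sig = sign(b"collective signature demo", x)
print(sig, verify(b"collective signature demo", sig, y))
```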
19. Traffic Sign Recognition for Autonomous Vehicle Using Optimized YOLOv7 and Convolutional Block Attention Module
Authors: P. Kuppusamy, M. Sanjay, P.V. Deepashree, C. Iwendi. 《Computers, Materials & Continua》 (SCIE, EI), 2023, No. 10, pp. 445-466 (22 pages)
The infrastructure and construction of roads are crucial for the economic and social development of a region, but traffic-related challenges like accidents and congestion persist. Artificial Intelligence (AI) and Machine Learning (ML) have been used in road infrastructure and construction, particularly with Internet of Things (IoT) devices. Object detection in computer vision also plays a key role in improving road infrastructure and addressing traffic-related problems. This study aims to use You Only Look Once version 7 (YOLOv7) with the Convolutional Block Attention Module (CBAM), an optimized object-detection configuration, to detect and identify traffic signs, and to analyze effective combinations of adaptive optimizers, namely Adaptive Moment estimation (Adam), Root Mean Squared Propagation (RMSprop) and Stochastic Gradient Descent (SGD), with YOLOv7. Using a portion of German traffic signs for training, the study investigates the feasibility of adopting smaller datasets while maintaining high accuracy. The model proposed in this study not only improves traffic safety by detecting traffic signs but also has the potential to contribute to the rapid development of autonomous vehicle systems. The study results showed an impressive accuracy of 99.7% when using a batch size of 8 and the Adam optimizer. This high level of accuracy demonstrates the effectiveness of the proposed model for the image classification task of traffic sign recognition.
Keywords: object detection; traffic sign detection; YOLOv7; convolutional block attention module; road sign detection; Adam
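The CBAM module referenced above combines channel attention with spatial attention; the following PyTorch sketch shows the widely used textbook form of CBAM as general background, without claiming it matches this paper's optimized variant (channel count, reduction ratio, and kernel size are placeholders).

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention then spatial attention."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                       # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention: shared MLP over avg- and max-pooled descriptors, then sigmoid.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention: conv over channel-wise avg and max maps, then sigmoid.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

feat = torch.randn(2, 128, 20, 20)
print(CBAM(128)(feat).shape)            # torch.Size([2, 128, 20, 20])
```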
20. A Robust Model for Translating Arabic Sign Language into Spoken Arabic Using Deep Learning
Authors: Khalid M.O. Nahar, Ammar Almomani, Nahlah Shatnawi, Mohammad Alauthman. 《Intelligent Automation & Soft Computing》 (SCIE), 2023, No. 8, pp. 2037-2057 (21 pages)
This study presents a novel and innovative approach to automatically translating Arabic Sign Language (ATSL) into spoken Arabic. The proposed solution utilizes a deep learning-based classification approach and the transfer learning technique to retrain 12 image recognition models. The image-based translation method maps sign language gestures to corresponding letters or words using distance measures and classification as a machine learning technique. The results show that the proposed model is more accurate and faster than traditional image-based models in classifying Arabic-language signs, with a translation accuracy of 93.7%. This research makes a significant contribution to the field of ATSL. It offers a practical solution for improving communication for individuals with special needs, such as the deaf and mute community. This work demonstrates the potential of deep learning techniques in translating sign language into natural language and highlights the importance of ATSL in facilitating communication for individuals with disabilities.
Keywords: sign language; deep learning; transfer learning; machine learning; automatic translation of sign language; natural language processing; Arabic sign language