Journal Articles
109,167 articles found
Research on Teaching Reform of the "MySQL Database Technology" Course Based on Cultivating Higher-Order Thinking Skills
1
Author: 江国粹. 《安徽电子信息职业技术学院学报》, 2024, No. 1, pp. 39-43.
To address problems such as college students' lack of engineering thinking, insufficient awareness of innovation, and failure to integrate theory with practice, and drawing on the key points, difficulties, and levers of "gold course" construction, this study takes the "MySQL Database Technology" course as an example and builds an immersive teaching model for the course based on cultivating higher-order thinking skills. Practice shows that this teaching model, in which students continually analyze, evaluate, and create, forming higher-order thinking through higher-order knowledge and higher-order tasks, optimizes the talent-cultivation model and promotes the development of students with higher-order thinking skills.
Keywords: higher-order thinking; MySQL database; gold course; teaching reform; innovation
A Chinese Cross-Domain NL2SQL Algorithm Enhanced with Auxiliary Tasks
2
Authors: 胡亚红, 刘亚冬, 朱正东, 刘鹏杰. 《国防科技大学学报》 (EI, CAS, CSCD, 北大核心), 2024, No. 2, pp. 197-204.
The natural language to structured query language (NL2SQL) task aims to translate a natural-language question into an executable SQL statement. This paper proposes a Chinese cross-domain NL2SQL algorithm enhanced with an auxiliary task. The core idea is to add the auxiliary task at the decoding stage and train it jointly with the original model in a multi-task fashion, improving accuracy. The auxiliary task models the database schema as a graph and predicts the dependencies between the natural-language question and the nodes of the schema graph, thereby explicitly modeling the dependency between the question and the schema. For a given question, the auxiliary task helps the model better identify which tables and columns in the schema are most useful for predicting the target SQL. Experimental results on the Chinese NL2SQL dataset DuSQL show that the algorithm with the auxiliary task outperforms the original model and handles cross-domain NL2SQL tasks better.
Keywords: artificial intelligence; deep learning; natural language processing; semantic parsing
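The input side of such an auxiliary task can be pictured with a small sketch: the schema becomes a graph of table and column nodes, and question tokens are linked to the nodes they mention. This is a naive lexical illustration, not the paper's trained dependency predictor; the schema, tokens, and matching heuristic are all invented for illustration.

```python
def build_schema_graph(tables):
    """tables: {table_name: [column_names]} -> (nodes, edges)."""
    nodes, edges = [], []
    for table, columns in tables.items():
        nodes.append(("table", table))
        for col in columns:
            nodes.append(("column", f"{table}.{col}"))
            edges.append((table, f"{table}.{col}", "has_column"))
    return nodes, edges

def link_question(tokens, nodes):
    """Naive lexical linking: mark schema nodes whose name appears in the question."""
    links = []
    for kind, name in nodes:
        surface = name.split(".")[-1]
        if surface in tokens:
            links.append((surface, kind, name))
    return links

schema = {"singer": ["name", "age"], "concert": ["year", "singer_id"]}
nodes, edges = build_schema_graph(schema)
links = link_question(["average", "age", "of", "singer"], nodes)
```

A real system would replace the string match with learned alignment scores, but the output shape, question-token-to-schema-node links, is what the auxiliary task supervises.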
A SQL Generation Method Based on a Dependency-Relation Graph Attention Network
3
Authors: 舒晴, 刘喜平, 谭钊, 李希, 万常选, 刘德喜, 廖国琼. 《浙江大学学报(工学版)》 (EI, CAS, CSCD, 北大核心), 2024, No. 5, pp. 908-917.
This work studies the generation of structured query language (SQL) from natural-language questions (Text-to-SQL). A two-stage framework is proposed to decouple schema linking from SQL generation and reduce the difficulty of generation. In the first stage, a schema linker based on a relational graph attention network identifies the database tables, columns, and values mentioned in the question, using the question's syntactic structure and the internal relations among schema items to guide the model in learning the alignment between the question and the database. When constructing the question graph, tailored to the characteristics of the Text-to-SQL task, relations irrelevant to schema linking are merged on top of the original syntactic dependency tree, and dependencies between subordinate words in coordinate structures and other sentence constituents are added, helping the model capture long-distance dependencies. In the second stage, SQL is generated by injecting the alignment information into T5's encoder and fine-tuning T5. Experiments on the Spider, Spider-DK, and Spider-Syn datasets show that the method performs well, especially on Text-to-SQL problems of medium difficulty and above.
Keywords: Text-to-SQL; natural language query; dependency parsing; relational graph attention network
A Text2SQL Method That Expands Database Schema Information with a Dictionary
4
Authors: 于晓昕, 何东, 叶子铭, 陈黎, 于中华. 《四川大学学报(自然科学版)》 (CAS, CSCD, 北大核心), 2024, No. 1, pp. 78-88.
Existing Text2SQL methods rely heavily on table and column names being explicitly mentioned in the natural-language query, and their accuracy drops sharply in real-world scenarios where the same object goes by different names. Moreover, these methods capture the domain knowledge modeled by a database only through its schema, yet a schema, as structured metadata, has very limited capacity to express domain knowledge: even an experienced programmer can hardly grasp the domain knowledge of a database from its schema alone and must rely on detailed database design documents to write SQL that correctly expresses a given query. This paper therefore proposes a Text2SQL method that expands schema information with a dictionary. Words or phrases are parsed out of table and column names, the dictionary is queried for their semantic glosses, and these glosses are treated as expanded content of the corresponding table or column names. Combined with the table names, column names, and other schema information (primary keys, foreign keys, etc.), they form the model input, allowing the model to learn the application-domain knowledge of the database more fully. Experiments on the Spider-syn and Spider datasets demonstrate the effectiveness of the proposed method: even when the table and column names used in the query differ entirely from those in the schema, the method still produces good SQL translations, clearly outperforming recently proposed methods designed to resist synonym-substitution attacks.
Keywords: database schema; semantic expansion; gloss information; Text2SQL
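The gloss-expansion idea reduces to a simple preprocessing step: serialize each schema item together with a dictionary definition of the words in its name. A minimal sketch, assuming a toy glossary (the paper's dictionary and exact serialization format are not specified here):

```python
# Illustrative mini-dictionary; a real system would query a full lexicon.
GLOSSARY = {
    "emp": "employee: a person who works for a company",
    "dept": "department: a division of an organization",
    "salary": "salary: fixed regular payment for work",
}

def expand_schema(schema):
    """schema: {table: [columns]} -> serialized model input with glosses."""
    parts = []
    for table, columns in schema.items():
        gloss = GLOSSARY.get(table, "")
        parts.append(f"table {table} ({gloss})" if gloss else f"table {table}")
        for col in columns:
            g = GLOSSARY.get(col, "")
            parts.append(f"  column {col} ({g})" if g else f"  column {col}")
    return "\n".join(parts)

serialized = expand_schema({"emp": ["salary", "dept"]})
```

Because the gloss text ("employee", "department") now sits in the input, a query that says "workers" or "staff" has lexical material to attend to even though the column is cryptically named `emp`.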
A Technical Analysis of SQL Server Performance Optimization
5
Author: 赵佳. 《中国高新科技》, 2024, No. 7, pp. 102-104, 107.
With advances in client/server technology and multi-tier architectures, system efficiency has become a widely discussed issue. A lack of familiarity with the optimization methods and tools provided by the database management system (DBMS), and gaps in the knowledge of database administrators and application developers working with Microsoft technologies, can leave both designs and running systems unoptimized. This article reviews the goals to consider when improving the performance of a SQL Server instance and describes the techniques used to optimize queries.
Keywords: query; optimization; SQL Server
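One concrete query-optimization goal the article points to, letting the planner use an index instead of scanning the whole table, can be demonstrated with the stdlib sqlite3 module; the principle carries over to SQL Server's execution plans, though the tooling there differs. The table and data below are invented for illustration.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(i, f"c{i % 100}", i * 1.5) for i in range(1000)])

def plan(sql):
    """Return SQLite's query-plan description for a statement."""
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

query = "SELECT total FROM orders WHERE customer = 'c7'"
before = plan(query)   # without an index: a full table scan
con.execute("CREATE INDEX idx_orders_customer ON orders(customer)")
after = plan(query)    # with the index: an index search
```

Reading the plan before and after adding the index makes the optimization visible without timing anything, which is the same workflow as inspecting an execution plan in SQL Server.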
A BiLSTM-Based NL2SQL Model
6
Authors: 邰伟鹏, 刘杨, 王小林, 郑啸, 钟亮. 《计算机应用与软件》 (北大核心), 2024, No. 3, pp. 34-40.
With the development of Internet technology, many applications offer quantitative financial services to the public, yet most users have no background in finance or computer science and want to query data in natural language, so natural language to SQL (NL2SQL) is urgently needed. To address this, a Chinese financial NL2SQL algorithm based on a bidirectional long short-term memory model (BiLSTM) is proposed, divided into encoding and decoding stages. In the encoding stage, BiLSTM and an attention mechanism generate feature vectors. In the decoding stage, following SQL grammar rules, SQL generation is decoupled into nine classification tasks that are learned jointly with mutual dependencies, after which complex SQL statements are generated. Beyond the model, a vector library containing financial vocabulary was trained and a financial-domain dataset was constructed. Experiments on this dataset show that the method achieves higher accuracy, effectively solves SQL generation in the financial domain, and has been deployed in a financial quantitative-analysis system.
Keywords: NL2SQL; BiLSTM; attention mechanism; vector library; dataset
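The decoding-stage decoupling can be caricatured without any model: imagine each classification head has already produced its slot value, and grammar rules assemble the slots into a SQL statement. The slot inventory below is illustrative, not the paper's actual nine tasks.

```python
# Hypothetical label inventories for two of the classification heads.
AGGS = {0: "", 1: "AVG", 2: "SUM", 3: "MAX"}
OPS = {0: "=", 1: ">", 2: "<"}

def assemble_sql(table, sel_col, agg_id, conds):
    """Assemble SQL from predicted slots: selected column, aggregate, conditions."""
    select = f"{AGGS[agg_id]}({sel_col})" if agg_id else sel_col
    sql = f"SELECT {select} FROM {table}"
    if conds:
        where = " AND ".join(f"{col} {OPS[op]} {val!r}" for col, op, val in conds)
        sql += f" WHERE {where}"
    return sql

sql = assemble_sql("fund", "return_rate", 1, [("year", 0, 2023)])
# -> "SELECT AVG(return_rate) FROM fund WHERE year = 2023"
```

The point of the decomposition is that each head solves a small classification problem (which column? which aggregate? which operator?) instead of the model emitting free-form SQL tokens.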
A Method for Evaluating the Compositional Generalization Ability of SQL-to-Text Models
7
Authors: 陈琳, 范元凯, 何震瀛, 刘晓清, 杨阳, 汤路民. 《计算机工程》 (CAS, CSCD, 北大核心), 2024, No. 3, pp. 326-335.
Translating a database's structured query language (SQL) into natural language (SQL-to-text) improves the usability of relational databases. In recent years the field has mainly been studied with machine-learning methods and has made some progress, but existing translation models are still not capable enough for practical use. Compositional generalization is necessary for SQL-to-text models to improve translation quality in practice, yet research on the compositional generalization of such models is lacking, so an evaluation method for it is proposed. Based on existing SQL-to-text datasets, a large number of SQL queries and their natural-language translations (SQL-NL pairs) are generated and split into training and test data by the number of SQL clauses each pair contains, so that the SQL clauses in the test data all appear in the training data, but in different combinations, yielding a new dataset that can evaluate compositional generalization. Evaluation shows that the method makes fuller use of query knowledge and splits the data more reasonably; the resulting dataset meets the needs of evaluating compositional generalization, is close to the models' real application scenarios, and is less constrained by the original dataset. The evaluation also confirms that the compositional generalization of existing models still needs improvement. Among them, the relation-aware graph transformer model designed specifically for the SQL-to-text task has the weakest compositional generalization, indicating that existing SQL-to-text datasets fall short in testing this ability.
Keywords: structured query language; compositional generalization; machine translation; database; long short-term memory model
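The splitting idea can be sketched in a few lines: count the clauses in each SQL query and hold out the pairs with more clauses than any training example, so every clause type is seen in training but the held-out combinations are new. The clause list and threshold are simplified assumptions, not the paper's exact procedure.

```python
CLAUSES = ("WHERE", "GROUP BY", "HAVING", "ORDER BY", "LIMIT")

def clause_count(sql):
    """Count clauses; the mandatory SELECT ... FROM counts as one."""
    return 1 + sum(1 for c in CLAUSES if c in sql.upper())

def split_by_clauses(pairs, max_train):
    """Train on pairs with <= max_train clauses, test on the rest."""
    train = [p for p in pairs if clause_count(p[0]) <= max_train]
    test = [p for p in pairs if clause_count(p[0]) > max_train]
    return train, test

pairs = [
    ("SELECT name FROM city", "List city names."),
    ("SELECT name FROM city WHERE pop > 1000000", "Cities over a million."),
    ("SELECT country FROM city GROUP BY country ORDER BY COUNT(*) DESC",
     "Countries by number of cities."),
]
train, test = split_by_clauses(pairs, max_train=2)
```

Here WHERE and GROUP BY each appear in training, but the three-clause combination appears only in the test split, which is exactly the situation a compositionally general model should handle.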
Research on Attack Techniques Exploiting SQL Injection Vulnerabilities
8
Authors: 朱振南, 金京犬. 《电脑知识与技术》, 2024, No. 1, pp. 98-100, 103.
As the most frequently occurring class of web application vulnerability, SQL injection poses an enormous threat to database security. This article examines in depth how various SQL injection attacks are carried out and, through a real attack case, provides detailed analysis and discussion. It also focuses on research into SQL injection detection techniques, offering solid support for effective detection and defense strategies and a valuable reference for security experts and researchers.
Keywords: SQL injection vulnerability; SQL injection attack; SQL injection detection; SQL injection defense
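The classic tautology-style attack and the parameterized-query defense can both be shown in a few lines with the stdlib sqlite3 module (a generic illustration, not the article's case study; the table and data are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE users (name TEXT, password TEXT)")
con.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # attacker-controlled input

# Vulnerable: input concatenated into the statement, so the quote in the
# payload escapes the literal and the OR '1'='1' tautology matches all rows.
unsafe = con.execute(
    f"SELECT name FROM users WHERE password = '{payload}'").fetchall()

# Safe: input passed as a bound parameter; it is treated as data, never
# parsed as SQL, so the tautology text matches nothing.
safe = con.execute(
    "SELECT name FROM users WHERE password = ?", (payload,)).fetchall()
```

The unsafe query returns every user despite a wrong password, while the parameterized query returns no rows, which is why parameter binding is the baseline defense any detection strategy assumes.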
A Hospital Financial Management Information Platform Based on a SQL Database
9
Author: 傅仁. 《兵工自动化》 (北大核心), 2024, No. 2, pp. 23-27.
To improve the management of hospital financial information, an integrated hospital financial information platform is built with information technology, using a unified database to store and process all financial data. The architecture of the integrated platform is designed; an online financial reimbursement system is built on structured query language (SQL) technology; the hospital's reimbursement workflow is analyzed; and, with SQL Server 2005 as the back-end database, a browser/server (B/S) reimbursement process is implemented. Experimental results show that the system's response time is under 4.96 s for multi-user concurrent single-service requests and under 65 s for multi-user concurrent mixed-service requests; data throughput is 8 Mbps with a data-processing efficiency of 87%, outperforming comparable systems in data processing and typical systems in response performance. The system meets the needs of hospital financial management and provides a reference for building an information-based hospital financial management platform.
Keywords: hospital financial management; reimbursement system; information platform; SQL database
Applications of the SQL Language in a Database Practice Course
10
Authors: 李璋, 陈龙, 陈逸凡, 程翔, 高琪媛. 《科技风》, 2024, No. 8, pp. 98-100.
This article introduces the core meaning of the SQL language and presents applications in a database practice course, such as querying a single-table company employee information table and a multi-table dataset for an eleven-a-side men's college football tournament. Through these examples, students progress from basic single-table queries to data analysis across multiple tables, mastering SQL's SELECT capabilities and the usage of the related query keywords.
Keywords: SQL language; database practice course; database operations
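The course's two exercise types, a single-table query and a multi-table join, might look like this in miniature (run with the stdlib sqlite3 module; the tables and data are made up, not the article's employee or football datasets):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE employee (id INTEGER, name TEXT, dept_id INTEGER);
CREATE TABLE dept (id INTEGER, dept_name TEXT);
INSERT INTO employee VALUES (1, 'Li', 10), (2, 'Chen', 20);
INSERT INTO dept VALUES (10, 'Sales'), (20, 'R&D');
""")

# Single-table SELECT with a WHERE filter.
names = con.execute("SELECT name FROM employee WHERE dept_id = 10").fetchall()

# Multi-table SELECT: join employees to their departments.
joined = con.execute("""
    SELECT e.name, d.dept_name
    FROM employee AS e JOIN dept AS d ON e.dept_id = d.id
    ORDER BY e.id
""").fetchall()
```

The progression mirrors the course design: the first query exercises SELECT, FROM, and WHERE on one table, and the second adds aliases, JOIN ... ON, and ORDER BY across two tables.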
Literature classification and its applications in condensed matter physics and materials science by natural language processing
11
Authors: 吴思远, 朱天念, 涂思佳, 肖睿娟, 袁洁, 吴泉生, 李泓, 翁红明. 《Chinese Physics B》 (SCIE, EI, CAS, CSCD), 2024, No. 5, pp. 117-123.
The exponential growth of literature is constraining researchers' access to comprehensive information in related fields. While natural language processing (NLP) may offer an effective solution to literature classification, it remains hindered by the lack of labelled datasets. In this article, we introduce a novel method for generating literature classification models through semi-supervised learning, which can generate labelled datasets iteratively with limited human input. We apply this method to train NLP models for classifying literature related to several research directions, i.e., battery, superconductor, topological material, and artificial intelligence (AI) in materials science. The trained NLP "battery" model, applied to a larger dataset different from the training and testing datasets, achieves an F1 score of 0.738, which indicates the accuracy and reliability of this scheme. Furthermore, our approach demonstrates that, even with insufficient data, the not-yet-well-trained model in the first few cycles can identify the relationships among different research fields and facilitate the discovery and understanding of interdisciplinary directions.
Keywords: natural language processing; text mining; materials science
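The iterative labelling loop can be caricatured in pure Python: score unlabeled titles with a trivial model, auto-accept only the confident ones into the labelled set, and leave the rest for the next round. The keyword-count scorer and threshold are invented stand-ins; the paper's classifiers are real NLP models, not keyword counts.

```python
def score(title, keywords):
    """Toy confidence: fraction of title words that are topic keywords."""
    words = title.lower().split()
    return sum(words.count(k) for k in keywords) / max(len(words), 1)

def pseudo_label_round(unlabeled, keywords, threshold=0.2):
    """One semi-supervised round: confident titles become labelled data."""
    newly_labeled, remaining = [], []
    for title in unlabeled:
        (newly_labeled if score(title, keywords) >= threshold
         else remaining).append(title)
    return newly_labeled, remaining

titles = [
    "solid state battery electrolyte design",
    "topological insulator surface states",
    "battery cathode degradation mechanisms",
]
labeled, rest = pseudo_label_round(titles, {"battery", "electrolyte"})
```

Repeating the round with a model retrained on the enlarged labelled set is what lets the scheme grow a dataset with limited human input.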
SQL Injection Vulnerability Detection Based on Taint Analysis
12
Authors: 王国峰, 唐云善, 徐立飞. 《信息技术》, 2024, No. 2, pp. 185-190.
SQL injection vulnerabilities expose the database systems behind web applications to enormous risk; once such a vulnerability is exploited, the losses can be incalculable. This paper proposes a taint-analysis-based method for detecting SQL injection vulnerabilities. Using three-address code as the intermediate representation, the method designs taint data-flow values and taint propagation rules for forward analysis according to the characteristics of SQL injection vulnerabilities; runs an iterative data-flow algorithm over the program's control-flow graph; performs security checks during the computation to obtain all sink points containing tainted data; and reports the locations of SQL injection vulnerabilities by traversing the set of tainted sink points. Comparative experiments verify the effectiveness of the method.
Keywords: SQL injection; static vulnerability detection; data-flow analysis; taint analysis
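The forward-propagation step can be sketched over a toy straight-line three-address program: sources introduce taint, assignments propagate it, and tainted data reaching the sink is reported. The statement format and the source/sink names are simplified assumptions; the paper's analysis additionally iterates over a control-flow graph to a fixed point.

```python
SOURCES = {"get_param"}   # operations that yield user-controlled data
SINK = "exec_sql"         # operation that executes a SQL statement

def find_sqli(statements):
    """statements: list of (dst, op, args) three-address-style tuples."""
    tainted, findings = set(), []
    for lineno, (dst, op, args) in enumerate(statements, 1):
        if op in SOURCES:
            tainted.add(dst)                        # taint introduced
        elif op == "assign" and any(a in tainted for a in args):
            tainted.add(dst)                        # taint propagated
        elif op == SINK and any(a in tainted for a in args):
            findings.append(lineno)                 # tainted data reaches sink
    return findings

program = [
    ("t1", "get_param", ["id"]),         # t1 = get_param("id")
    ("t2", "assign", ["t1", "prefix"]),  # t2 = prefix + t1
    ("_",  "exec_sql", ["t2"]),          # exec_sql(t2)  <- reported
]
findings = find_sqli(program)
```

A sanitizer rule (an operation that removes its destination from `tainted`) is the natural next ingredient, which is how such analyses avoid flagging properly escaped queries.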
Recent Advances on Deep Learning for Sign Language Recognition
13
Authors: Yanqiong Zhang, Xianwei Jiang. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 6, pp. 2399-2450.
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Keywords: sign language recognition; deep learning; artificial intelligence; computer vision; gesture recognition
Identification of Software Bugs by Analyzing Natural Language-Based Requirements Using Optimized Deep Learning Features
14
Authors: Qazi Mazhar ul Haq, Fahim Arif, Khursheed Aurangzeb, Noor ul Ain, Javed Ali Khan, Saddaf Rubab, Muhammad Shahid Anwar. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 3, pp. 4379-4397.
Software project outcomes heavily depend on natural language requirements, which often cause diverse interpretations and issues such as ambiguities and incomplete or faulty requirements. Researchers are exploring machine learning to predict software bugs, but a more precise and general approach is needed. Accurate bug prediction is crucial for software evolution and user training, prompting an investigation into deep and ensemble learning methods. However, existing studies are not generalized and efficient when extended to other datasets. Therefore, this paper proposes a hybrid approach combining multiple techniques to explore their effectiveness on bug identification problems. The methods involve feature selection, used to reduce the dimensionality and redundancy of features and retain only the relevant ones; transfer learning, used to train and test the model on different datasets to analyze how much of the learning carries over to other datasets; and an ensemble method, used to explore the increase in performance from combining multiple classifiers in one model. Four National Aeronautics and Space Administration (NASA) and four Promise datasets are used in the study, showing an increase in the model's performance through better Area Under the Receiver Operating Characteristic Curve (AUC-ROC) values when different classifiers were combined. The study reveals that an amalgam of techniques such as those used here (feature selection, transfer learning, and ensemble methods) helps optimize software bug prediction models and yields high-performing, useful end models.
Keywords: natural language processing; software bug prediction; transfer learning; ensemble learning; feature selection
Enhancing Relational Triple Extraction in Specific Domains: Semantic Enhancement and Synergy of Large Language Models and Small Pre-Trained Language Models
15
Authors: Jiakai Li, Jianpeng Hu, Geng Zhang. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 5, pp. 2481-2503.
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of the semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance on domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module, which significantly enhances the semantic interaction between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability to specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets that alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several respects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
Keywords: relational triple extraction; semantic interaction; large language models; data augmentation; specific domains
Evaluating Privacy Leakage and Memorization Attacks on Large Language Models (LLMs) in Generative AI Applications
16
Authors: Harshvardhan Aditya, Siddansh Chawla, Gunika Dhingra, Parijat Rai, Saumil Sood, Tanmay Singh, Zeba Mohsin Wase, Arshdeep Bahga, Vijay K. Madisetti. 《Journal of Software Engineering and Applications》, 2024, No. 5, pp. 421-447.
The recent interest in deploying Generative AI applications that use large language models (LLMs) has brought to the forefront significant privacy concerns, notably the leakage of Personally Identifiable Information (PII) and other confidential or protected information that may have been memorized during training, specifically during fine-tuning or customization. We describe different black-box attacks from potential adversaries and study their impact on the amount and type of information that may be recovered from commonly used and deployed LLMs. Our research investigates the relationship between PII leakage, memorization, and factors such as model size, architecture, and the nature of the attacks employed. The study uses two broad categories of attacks: PII leakage-focused attacks (auto-completion and extraction attacks) and memorization-focused attacks (various membership inference attacks). The findings from these investigations are quantified using an array of evaluative metrics, providing a detailed understanding of LLM vulnerabilities and the effectiveness of the different attacks.
Keywords: large language models; PII leakage; privacy; memorization; overfitting; membership inference attack (MIA)
Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
17
Authors: R. Sujatha, K. Nimala. 《Computers, Materials & Continua》 (SCIE, EI), 2024, No. 2, pp. 1669-1686.
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model is taken up here to classify sentences from a conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are used for analyzing conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus, and hyperparameter tuning is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models, and the ensemble model with fine-tuned parameters achieved an F1 score of 0.88.
Keywords: Bidirectional Encoder Representations from Transformers (BERT); conversation; ensemble model; fine-tuning; Generalized Autoregressive Pretraining for Language Understanding (XLNet); Generative Pre-trained Transformer (GPT); hyperparameter tuning; natural language processing; Robustly Optimized BERT Pretraining Approach (RoBERTa); sentence classification; transformer models
Impact of transcranial electrical stimulation on serum neurotrophic factors and language function in patients with speech disorders
18
Authors: Li Sun, Kai Xiao, Xiao-Yan Shen, Shu Wang. 《World Journal of Clinical Cases》 (SCIE), 2024, No. 10, pp. 1742-1749.
BACKGROUND: Speech disorders have a substantial impact on communication abilities and quality of life. Traditional treatments such as speech and psychological therapies frequently demonstrate limited effectiveness and patient compliance. Transcranial electrical stimulation (TES) has emerged as a promising non-invasive treatment to improve neurological functions. However, its effectiveness in enhancing language function and serum neurofactor levels in individuals with speech disorders requires further investigation. AIM: To investigate the impact of TES in conjunction with standard therapies on serum neurotrophic factor levels and language function in patients with speech disorders. METHODS: In a controlled study spanning March 2019 to November 2021, 81 patients with speech disorders were divided into a control group (n = 40) receiving standard speech stimulation and psychological intervention, and an observation group (n = 41) receiving additional TES. The study assessed serum levels of ciliary neurotrophic factor (CNTF), glial cell-derived neurotrophic factor (GDNF), brain-derived neurotrophic factor (BDNF), and nerve growth factor (NGF), as well as evaluations of motor function, language function, and development quotient scores. RESULTS: After 3 weeks of intervention, the observation group exhibited significantly higher serum levels of CNTF, GDNF, BDNF, and NGF than the control group. Improvements were also noted in motor function, cognitive function, language skills, physical abilities, and overall development quotient scores, with the observation group displaying superior performance. CONCLUSION: This retrospective study concluded that TES combined with traditional speech and psychological therapy can effectively increase serum neurotrophic factor levels and enhance language function in patients with speech disorders. These results provide a promising avenue for integrating TES into standard treatment methods for speech disorders.
Keywords: transcranial electrical stimulation; serum neurofactor levels; developmental level; language features
A Survey on Chinese Sign Language Recognition: From Traditional Methods to Artificial Intelligence
19
Authors: Xianwei Jiang, Yanqiong Zhang, Juan Lei, Yudong Zhang. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 7, pp. 1-40.
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the literature on Chinese Sign Language Recognition (CSLR) over the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various other deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated into specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
Keywords: Chinese Sign Language Recognition; deep neural networks; artificial intelligence; transfer learning; hybrid network models
Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification
20
Authors: Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki, Koki Hirooka. 《Computer Modeling in Engineering & Sciences》 (SCIE, EI), 2024, No. 6, pp. 2605-2625.
Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures, while a deep learning-based transfer learning stream simultaneously captures hierarchical representations of JSL gestures. We then concatenate the critical information of the first stream with the feature hierarchy of the second to produce multiple levels of fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features, a feature selection approach and a kernel-based support vector machine (SVM) were used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our Lab JSL dataset and a publicly available Arabic Sign Language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.
Keywords: Japanese Sign Language (JSL); hand gesture recognition; geometric features; distance features; angle features; GoogleNet