Journal Articles
31 articles found
1. Comparing Fine-Tuning, Zero and Few-Shot Strategies with Large Language Models in Hate Speech Detection in English
Authors: Ronghao Pan, José Antonio García-Díaz, Rafael Valencia-García. 《Computer Modeling in Engineering & Sciences》 SCIE EI, 2024, No. 9, pp. 2849-2868 (20 pages)
Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs, ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the encoder-decoder model called Zephyr achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language. However, the fine-tuned approach tends to produce many false positives.
Keywords: hate speech detection, zero-shot, few-shot, fine-tuning, natural language processing
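The zero- and few-shot settings described above reduce to prompt assembly: an instruction plus retrieved demonstrations, with no gradient updates. A minimal sketch; the example texts, labels, and function name are invented for illustration and are not taken from the paper:

```python
# Hypothetical labeled examples retrieved for in-context learning (illustrative only).
EXAMPLES = [
    ("You people should go back where you came from.", "hateful"),
    ("The match last night was fantastic.", "not hateful"),
]

def build_few_shot_prompt(examples, test_instance):
    """Assemble an instruction, k demonstrations, and the test instance into one prompt."""
    lines = ["Classify each text as 'hateful' or 'not hateful'.", ""]
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}\n")
    # The test instance ends with an empty label slot for the LLM to complete.
    lines.append(f"Text: {test_instance}\nLabel:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(EXAMPLES, "Everyone is welcome at our table.")
print(prompt.count("Label:"))  # → 3: two demonstrations plus the unanswered test slot
```

With zero demonstrations the same function degenerates to the zero-shot setting.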
2. Fine-tuning electronic structure of N-doped graphitic carbon-supported Co- and Fe-incorporated Mo_(2)C to achieve ultrahigh electrochemical water oxidation activity
Authors: Md. Selim Arif Sher Shah, Hyeonjung Jung, Vinod K. Paidi, Kug-Seung Lee, Jeong Woo Han, Jong Hyeok Park. 《Carbon Energy》 SCIE EI CAS CSCD, 2024, No. 7, pp. 134-149 (16 pages)
Mo_(2)C is an excellent electrocatalyst for the hydrogen evolution reaction (HER). However, Mo_(2)C is a poor electrocatalyst for the oxygen evolution reaction (OER). Herein, two different elements, namely Co and Fe, are incorporated in Mo_(2)C, which therefore has a finely tuned electronic structure not achievable by incorporating either metal alone. Consequently, the resulting electrocatalyst Co_(0.8)Fe_(0.2)-Mo_(2)C-80 displayed excellent OER catalytic performance, evidenced by a low overpotential of 214.0 (and 246.5) mV to attain a current density of 10 (and 50) mA cm^(-2), an ultralow Tafel slope of 38.4 mV dec^(-1), and long-term stability in alkaline medium. Theoretical data demonstrate that Co_(0.8)Fe_(0.2)-Mo_(2)C-80 requires the lowest overpotential (1.00 V) for OER and that the Co centers are the active sites. The ultrahigh catalytic performance of the electrocatalyst is attributed to its excellent intrinsic catalytic activity due to the high Brunauer-Emmett-Teller specific surface area, large electrochemically active surface area, small Tafel slope, and low charge-transfer resistance.
Keywords: fine-tuning electronic structures, heteronanostructures, Mo_(2)C, multimetal (Co/Fe), oxygen evolution reaction
3. Optimizing Enterprise Conversational AI: Accelerating Response Accuracy with Custom Dataset Fine-Tuning
Authors: Yash Kishore. 《Intelligent Information Management》, 2024, No. 2, pp. 65-76 (12 pages)
As the realm of enterprise-level conversational AI continues to evolve, it becomes evident that while generalized Large Language Models (LLMs) like GPT-3.5 bring remarkable capabilities, they also bring forth formidable challenges. These models, honed on vast and diverse datasets, have undoubtedly pushed the boundaries of natural language understanding and generation. However, they often stumble when faced with the intricate demands of nuanced enterprise applications. This research advocates for a strategic paradigm shift, urging enterprises to embrace a fine-tuning approach as a means to optimize conversational AI. While generalized LLMs are linguistic marvels, their inability to cater to the specific needs of businesses across various industries poses a critical challenge. This strategic shift involves empowering enterprises to seamlessly integrate their own datasets into LLMs, a process that extends beyond linguistic enhancement. The core concept of this approach centers on customization, enabling businesses to fine-tune the AI's functionality to fit precisely within their unique business landscapes. By immersing the LLM in industry-specific documents, customer interaction records, internal reports, and regulatory guidelines, the AI transcends its generic capabilities to become a sophisticated conversational partner aligned with the intricacies of the enterprise's domain. The transformative potential of this fine-tuning approach cannot be overstated. It enables a transition from a universal AI solution to a highly customizable tool. The AI evolves from being a linguistic powerhouse to a contextually aware, industry-savvy assistant. As a result, it not only responds with linguistic accuracy but also with depth, relevance, and resonance, significantly elevating user experiences and operational efficiency. In the subsequent sections, this paper delves into the intricacies of fine-tuning, exploring the multifaceted challenges and abundant opportunities it presents. It addresses the technical intricacies of data integration, ethical considerations surrounding data usage, and the broader implications for the future of enterprise AI. The journey embarked upon in this research holds the potential to redefine the role of conversational AI in enterprises, ushering in an era where AI becomes a dynamic, deeply relevant, and highly effective tool, empowering businesses to excel in an ever-evolving digital landscape.
Keywords: fine-tuning, dataset, AI, conversational, enterprise, LLM
4. Enhancing Fire Detection Performance Based on Fine-Tuned YOLOv10
Authors: Trong Thua Huynh, Hoang Thanh Nguyen, Du Thang Phu. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 11, pp. 2281-2298 (18 pages)
In recent years, early detection and warning of fires have posed a significant challenge to environmental protection and human safety. Deep learning models such as Faster R-CNN (Faster Region-based Convolutional Neural Network), YOLO (You Only Look Once), and their variants have demonstrated superiority in quickly detecting objects from images and videos, creating new opportunities to enhance automatic and efficient fire detection. The YOLO model, especially newer versions like YOLOv10, stands out for its fast processing capability, making it suitable for low-latency applications. However, when applied to real-world datasets, the accuracy of fire prediction is still not high. This study improves the accuracy of YOLOv10 for real-time applications through model fine-tuning techniques and data augmentation. The core work of the research involves creating a diverse fire image dataset specifically suited for fire detection applications in buildings and factories, freezing the initial layers of the model to retain general features learned from the dataset, applying the Squeeze-and-Excitation attention mechanism, and employing Stochastic Gradient Descent (SGD) with momentum optimization to enhance accuracy while ensuring real-time fire detection. Experimental results demonstrate the effectiveness of the proposed fire prediction approach, where the YOLOv10 small model exhibits the best balance compared to other YOLO family models such as nano, medium, and balanced. Additionally, the study provides an experimental evaluation to highlight the effectiveness of model fine-tuning compared to the YOLOv10 baseline, YOLOv8, and Faster R-CNN based on two criteria: accuracy and prediction time.
Keywords: fire detection, accuracy, prediction time, fine-tuning, real-time, YOLOv10, Faster R-CNN
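Two of the fine-tuning ingredients named above, freezing early layers and SGD with momentum, can be sketched independently of any detection framework. The parameter names (`backbone`, `head`) and toy values below are assumptions for illustration, not the paper's actual configuration:

```python
import numpy as np

def sgd_momentum_step(params, grads, velocity, frozen, lr=0.01, momentum=0.9):
    """One SGD-with-momentum update; frozen entries keep their weights fixed,
    mimicking layer freezing during fine-tuning."""
    for name in params:
        if frozen[name]:
            continue  # frozen layer: gradient ignored, weights untouched
        velocity[name] = momentum * velocity[name] - lr * grads[name]
        params[name] = params[name] + velocity[name]
    return params, velocity

params = {"backbone": np.array([1.0]), "head": np.array([1.0])}
grads = {"backbone": np.array([0.5]), "head": np.array([0.5])}
vel = {k: np.zeros_like(v) for k, v in params.items()}
frozen = {"backbone": True, "head": False}  # fine-tune only the head

params, vel = sgd_momentum_step(params, grads, vel, frozen)
print(params["backbone"][0], round(float(params["head"][0]), 4))  # → 1.0 0.995
```

The frozen backbone keeps its pre-trained weights while the head moves by `-lr * grad` on the first step (momentum only accumulates from the second step onward).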
5. Classification of Conversational Sentences Using an Ensemble Pre-Trained Language Model with the Fine-Tuned Parameter
Authors: R. Sujatha, K. Nimala. 《Computers, Materials & Continua》 SCIE EI, 2024, No. 2, pp. 1669-1686 (18 pages)
Sentence classification is the process of categorizing a sentence based on its context. Sentence categorization requires more semantic highlights than other tasks, such as dependency parsing, which requires more syntactic elements. Most existing strategies focus on the general semantics of a conversation without involving the context of the sentence, recognizing the progress and comparing impacts. An ensemble pre-trained language model was taken up here to classify the conversation sentences from the conversation corpus. The conversational sentences are classified into four categories: information, question, directive, and commission. These classification label sequences are for analyzing the conversation progress and predicting the pecking order of the conversation. An ensemble of Bidirectional Encoder Representations from Transformers (BERT), Robustly Optimized BERT Pretraining Approach (RoBERTa), Generative Pre-trained Transformer (GPT), DistilBERT, and Generalized Autoregressive Pretraining for Language Understanding (XLNet) models is trained on the conversation corpus with hyperparameters. A hyperparameter tuning approach is carried out for better performance on sentence classification. This Ensemble of Pre-trained Language Models with Hyperparameter Tuning (EPLM-HT) system is trained on an annotated conversation dataset. The proposed approach outperformed the base BERT, GPT, DistilBERT, and XLNet transformer models. The proposed ensemble model with the fine-tuned parameters achieved an F1 score of 0.88.
Keywords: bidirectional encoder for representation of transformer, conversation, ensemble model, fine-tuning, generalized autoregressive pretraining for language understanding, generative pre-trained transformer, hyperparameter tuning, natural language processing, robustly optimized BERT pretraining approach, sentence classification, transformer models
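One common way to combine several fine-tuned classifiers over the four categories above is soft voting: average the per-class probabilities and take the argmax. The abstract does not specify the ensembling rule, so this is a hedged sketch with invented toy probabilities:

```python
import numpy as np

LABELS = ["information", "question", "directive", "commission"]

def soft_vote(prob_list):
    """Average per-model class probabilities, then pick the argmax label per sentence."""
    avg = np.mean(np.stack(prob_list), axis=0)
    return [LABELS[i] for i in np.argmax(avg, axis=1)]

# Toy outputs from three hypothetical fine-tuned models, for two sentences each.
p_bert    = np.array([[0.6, 0.2, 0.1, 0.1], [0.1, 0.7, 0.1, 0.1]])
p_roberta = np.array([[0.5, 0.3, 0.1, 0.1], [0.2, 0.5, 0.2, 0.1]])
p_xlnet   = np.array([[0.4, 0.3, 0.2, 0.1], [0.1, 0.6, 0.2, 0.1]])

print(soft_vote([p_bert, p_roberta, p_xlnet]))  # → ['information', 'question']
```

Weighted averaging (e.g., by each model's validation F1) is a drop-in variant of the same idea.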
6. Rotary-scaling fine-tuning (RSFT) method for optimizing railway wheel profiles and its application to a locomotive (Cited: 9)
Authors: Yunguang Ye, Yayun Qi, Dachuan Shi, Yu Sun, Yichang Zhou, Markus Hecht. 《Railway Engineering Science》, 2020, No. 2, pp. 160-183 (24 pages)
The existing multi-objective wheel profile optimization methods mainly consist of three sub-modules: (1) wheel profile generation, (2) multi-body dynamics simulation, and (3) an optimization algorithm. For the first module, a comparably conservative rotary-scaling fine-tuning (RSFT) method, which introduces two design variables and an empirical formula, is proposed to fine-tune the traditional wheel profiles for improving their engineering applicability. For the second module, for the TRAXX locomotives serving on the Blankenburg–Rubeland line, an optimization function representing the relationship between the wheel profile and the wheel–rail wear number is established based on a Kriging surrogate model (KSM). For the third module, a method combining the regression capability of KSM with the iterative computing power of particle swarm optimization (PSO) is proposed to quickly and reliably implement the task of optimizing wheel profiles. Finally, with the RSFT–KSM–PSO method, we propose two wear-resistant wheel profiles for the TRAXX locomotives serving on the Blankenburg–Rubeland line, namely S1002-S and S1002-M. The S1002-S profile minimizes the total wear number by 30%, while the S1002-M profile makes the wear distribution more uniform through a proper sacrifice of the tread wear number, and the total wear number is reduced by 21%. The quasi-static and hunting stability tests further demonstrate that the profile designed by the RSFT–KSM–PSO method is promising for practical engineering applications.
Keywords: wheel profile optimization, wear reduction, rotary-scaling fine-tuning, particle swarm optimization, Kriging surrogate model
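The PSO component of the RSFT–KSM–PSO pipeline iteratively moves particles toward their personal bests and the global best. A minimal one-dimensional sketch on a toy objective; the inertia and acceleration coefficients and the bounds are illustrative defaults, not values from the paper:

```python
import numpy as np

def pso_minimize(f, n_particles=20, iters=100, lo=-10.0, hi=10.0, seed=0):
    """Minimal 1-D particle swarm optimization tracking personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, n_particles)   # particle positions
    v = np.zeros(n_particles)              # particle velocities
    pbest, pbest_f = x.copy(), f(x)        # personal bests
    g = pbest[np.argmin(pbest_f)]          # global best
    for _ in range(iters):
        r1, r2 = rng.random(n_particles), rng.random(n_particles)
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = f(x)
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)]
    return g

# Toy objective standing in for the KSM-predicted wear number; minimum at x = 3.
best = pso_minimize(lambda x: (x - 3.0) ** 2)
print(float(best))  # should be close to 3.0
```

In the paper's setting, the objective evaluated at each particle would be the Kriging surrogate's wear prediction rather than this toy quadratic.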
7. Railway wheel profile fine-tuning system for profile recommendation (Cited: 3)
Authors: Yunguang Ye, Jonas Vuitton, Yu Sun, Markus Hecht. 《Railway Engineering Science》, 2021, No. 1, pp. 74-93 (20 pages)
This paper develops a wheel profile fine-tuning system (WPFTS) that comprehensively considers the influence of wheel profile on wheel damage, vehicle stability, vehicle safety, and passenger comfort. WPFTS can recommend one or more optimized wheel profiles according to train operators' needs, e.g., reducing wheel wear, mitigating the development of wheel out-of-roundness (OOR), or improving the shape stability of the wheel profile. Specifically, WPFTS includes four modules: (I) a wheel profile generation module based on the rotary-scaling fine-tuning (RSFT) method; (II) a multi-objective generation module consisting of a rigid multi-body dynamics simulation (MBS) model, an analytical model, and a rigid–flexible MBS model, for generating 11 objectives related to wheel damage, vehicle stability, vehicle safety, and passenger comfort; (III) a weight assignment module consisting of an adaptive weight assignment strategy and a manual weight assignment strategy; and (IV) an optimization module based on radial basis function (RBF) and particle swarm optimization (PSO). Finally, three cases are introduced to show how WPFTS recommends a wheel profile according to train operators' needs. Among them, a wheel profile with high shape stability, a wheel profile for mitigating the development of wheel OOR, and a wheel profile considering hunting stability and derailment safety are developed, respectively.
Keywords: wheel profile fine-tuning system, optimization, recommendation, wear, contact concentration index, multi-body dynamics simulation (MBS), railway wheel
8. Fine-tuning of cortical progenitor proliferation by thalamic afferents
Authors: Katrin Gerstmann, Geraldine Zimmer. 《Neural Regeneration Research》 SCIE CAS CSCD, 2015, No. 6, pp. 887-888 (2 pages)
During cerebral cortex neurogenesis, two major types of progenitors generate a variety of morphologically and functionally diverse projection neurons destined for the different cortical layers in non-gyrified mice. Radial glia cells (RGCs) undergo mitosis in the cortical ventricular zone and exhibit an apical-basal cell polarity, whereas non-polar intermediate progenitor cells (IPCs) divide basally in the subventricular zone (Franco and Muller, 2013; Taverna et al., 2014).
Keywords: Eph, fine-tuning of cortical progenitor proliferation by thalamic afferents
9. New approach to assess sperm DNA fragmentation dynamics: Fine-tuning mathematical models
Authors: Isabel Ortiz, Jesus Dorado, Jane Morrell, Jaime Gosalvez, Francisco Crespo, Juan M. Jimenez, Manuel Hidalgo. 《Journal of Animal Science and Biotechnology》 SCIE CAS CSCD, 2017, No. 3, pp. 592-600 (9 pages)
Background: Sperm DNA fragmentation (sDF) has been proved to be an important parameter in order to predict in vitro the potential fertility of a semen sample. Colloid centrifugation could be a suitable technique to select those donkey sperm more resistant to DNA fragmentation after thawing. Previous studies have shown that to elucidate the latent damage of the DNA molecule, sDF should be assessed dynamically, where the rate of fragmentation between treatments indicates how resistant the DNA is to iatrogenic damage. The rate of fragmentation is calculated using the slope of a linear regression equation. However, it has not been studied whether sDF dynamics fit this model. The objectives of this study were to evaluate the effect of different after-thawing centrifugation protocols on sperm DNA fragmentation and to elucidate the most accurate mathematical model (linear regression, exponential, or polynomial) for DNA fragmentation over time in frozen-thawed donkey semen. Results: After submitting post-thaw semen samples to no centrifugation (UDC), sperm washing (SW), or single layer centrifugation (SLC) protocols, sDF values after 6 h of incubation were significantly lower in SLC samples than in SW or UDC. Coefficient of determination (R²) values were significantly higher for a second-order polynomial model than for linear or exponential models. The highest values for acceleration of fragmentation (aSDF) were obtained for SW, followed by SLC and UDC. Conclusion: SLC after thawing seems to preserve longer DNA longevity in comparison to UDC and SW. Moreover, the fine-tuning of models has shown that sDF dynamics in frozen-thawed donkey semen fit a second-order polynomial model, which implies that the fragmentation rate is not constant and fragmentation acceleration must be taken into account to elucidate hidden damage in the DNA molecule.
Keywords: colloid centrifugation, dynamics, fine-tuning, mathematical models, sperm DNA fragmentation
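The model comparison described above, second-order polynomial versus linear, can be reproduced on toy data by comparing coefficients of determination. The sDF values below are invented to mimic accelerating fragmentation over incubation time; they are not the study's measurements:

```python
import numpy as np

def r_squared(y, y_hat):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy sDF-like measurements (%) at incubation times in hours.
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
sdf = np.array([5.0, 5.5, 7.0, 9.8, 14.0, 19.5, 26.0])

r2 = {}
for degree in (1, 2):  # linear vs. second-order polynomial fit
    coeffs = np.polyfit(t, sdf, degree)
    r2[degree] = r_squared(sdf, np.polyval(coeffs, t))

print(r2[2] > r2[1])  # → True: the quadratic captures the acceleration term
```

The quadratic's second-derivative coefficient plays the role of the fragmentation acceleration (aSDF) that a constant-slope linear model cannot express.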
10. Fine-Tuning Bilateral Ties
Authors: Ni Yanshuo. 《ChinAfrica》, 2011, No. 2, pp. 14-17 (4 pages)
Chinese Vice Premier's visit to Africa continues to emphasize mutual cooperation, with a focus on agriculture. For many years, the Chinese Government has dispatched the minister of foreign affairs to Africa for the first official visit of a year. This year, however, that rule was broken when Hui Liangyu, Chinese Vice Premier, made the 14-day trip. On January 6-19, Hui paid official visits to Mauritius, Zambia, the Democratic Republic of Congo (DRC), Cameroon and Senegal, focusing on economic and agricultural cooperation.
Keywords: fine-tuning bilateral ties, DRC
11. Fine-tuning growth in gold nanostructures from achiral 2D to chiral 3D geometries
Authors: Lili Tan, Zhi Chen, Chengyu Xiao, Zhiyong Geng, Yinran Jin, Chaoyang Wei, Fei Teng, Wenlong Fu, Peng-peng Wang. 《Nano Research》 SCIE EI CSCD, 2024, No. 7, pp. 6654-6660 (7 pages)
Enriching the library of chiral plasmonic structures is of significant importance in advancing their applicability across diverse domains such as biosensing, nanophotonics, and catalysis. Here, employing triangle nanoplates as growth seeds, we synthesized a novel class of chiral-shaped plasmonic nanostructures through a wet chemical strategy with dipeptide as chiral inducers, including chiral tri-blade boomerangs, concave rhombic dodecahedrons, and nanoflowers. The structural diversity in chiral plasmonic nanostructures was elucidated through their continuous morphological evolution from two-dimensional to three-dimensional architectures. The fine-tuning of chiroptical properties was achieved by precisely manipulating crucial synthetic parameters such as the amount of chiral molecules, seeds, and gold precursor that significantly influenced chiral structure formation. The findings provide a promising avenue for enriching chiral materials with highly sophisticated structures, facilitating a fundamental understanding of the relationship between structural nuances and chiroptical properties.
Keywords: plasmonic nanostructures, geometric chirality, circular dichroism, fine-tuning
12. Research status and application of artificial intelligence large models in the oil and gas industry
Authors: LIU He, REN Yili, LI Xin, DENG Yue, WANG Yongtao, CAO Qianwen, DU Jinyang, LIN Zhiwei, WANG Wenjie. 《Petroleum Exploration and Development》 SCIE, 2024, No. 4, pp. 1049-1065 (17 pages)
This article elucidates the concept of large model technology, summarizes the research status of large model technology both domestically and internationally, provides an overview of the application status of large models in vertical industries, outlines the challenges and issues confronted in applying large models in the oil and gas sector, and offers prospects for the application of large models in the oil and gas industry. The existing large models can be briefly divided into three categories: large language models, visual large models, and multimodal large models. The application of large models in the oil and gas industry is still in its infancy. Based on open-source large language models, some oil and gas enterprises have released large language model products using methods like fine-tuning and retrieval augmented generation. Scholars have attempted to develop scenario-specific models for oil and gas operations by using visual/multimodal foundation models. A few researchers have constructed pre-trained foundation models for seismic data processing and interpretation, as well as core analysis. The application of large models in the oil and gas industry faces challenges such as current data quantity and quality being difficult to support the training of large models, high research and development costs, and poor algorithm autonomy and control. The application of large models should be guided by the needs of oil and gas business, taking the application of large models as an opportunity to improve data lifecycle management, enhance data governance capabilities, promote the construction of computing power, strengthen the construction of “artificial intelligence + energy” composite teams, and boost the autonomy and control of large model technology.
Keywords: foundation model, large language model, visual large model, multimodal large model, large model of oil and gas industry, pre-training, fine-tuning
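Retrieval augmented generation, mentioned above as one adaptation route alongside fine-tuning, hinges on a retrieval step that ranks domain snippets by similarity to a query embedding before they are passed to the model. A hedged sketch; the 4-dimensional "embeddings" and snippet topics are invented for illustration:

```python
import numpy as np

def top_k(query_vec, doc_vecs, k=1):
    """Rank documents by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarities
    return np.argsort(-scores)[:k]      # indices of the k best matches

# Toy embeddings for three domain snippets (hypothetical values).
docs = np.array([
    [0.9, 0.1, 0.0, 0.0],   # snippet on drilling reports
    [0.1, 0.8, 0.3, 0.0],   # snippet on seismic interpretation
    [0.0, 0.2, 0.9, 0.1],   # snippet on core analysis
])
query = np.array([0.05, 0.75, 0.35, 0.0])  # query about seismic processing

idx = top_k(query, docs, k=1)
print(int(idx[0]))  # → 1: the seismic-interpretation snippet is retrieved
```

In a real RAG pipeline, the retrieved snippet text is prepended to the prompt so the language model can ground its answer in it.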
13. A Named Entity Recognition Method for Folk Literature Texts
Authors: 黄健钰, 王笳辉, 段亮, 冉苒. 《软件导刊》, 2023, No. 10, pp. 65-72 (8 pages)
The task of named entity recognition for folk literature texts aims to identify entities in folk literature texts and assign them to predefined semantic categories, laying the foundation for the preservation and dissemination of folk literature. Folk literature differs from general Chinese corpora: its texts exhibit pronounced polysemy and contain numerous domain-specific nouns, so conventional named entity recognition methods struggle to identify the entities and their categories accurately and completely. To address this problem, a BERT-based named entity recognition model for folk literature texts, TBERT, is proposed. The model first fuses folk literature corpus features and entity type features on top of the general Chinese BERT model; it then uses a BiLSTM model to further extract sequence dependency features; finally, it combines the label constraint information obtained by a CRF model to output the globally optimal result. Experimental results show that the method performs well on a folk literature text dataset.
Keywords: folk literature texts, named entity recognition, fine-tune, TBERT-BiLSTM-CRF, feature fusion
14. Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation
Authors: 许一格, 邱锡鹏, 周浬皋, 黄萱菁. 《Journal of Computer Science & Technology》 SCIE EI CSCD, 2023, No. 4, pp. 853-866 (14 pages)
Fine-tuning pre-trained language models like BERT has become an effective way in natural language processing (NLP) and yields state-of-the-art results on many downstream tasks. Recent studies on adapting BERT to new tasks mainly focus on modifying the model structure, re-designing the pre-training tasks, and leveraging external data and knowledge. The fine-tuning strategy itself has yet to be fully explored. In this paper, we improve the fine-tuning of BERT with two effective mechanisms: self-ensemble and self-distillation. The self-ensemble mechanism utilizes the checkpoints from an experience pool to integrate the teacher model. In order to transfer knowledge from the teacher model to the student model efficiently, we further use knowledge distillation, which is called self-distillation because the distillation comes from the model itself through the time dimension. Experiments on the GLUE benchmark and the Text Classification benchmark show that our proposed approach can significantly improve the adaption of BERT without any external data or knowledge. We conduct exhaustive experiments to investigate the efficiency of the self-ensemble and self-distillation mechanisms, and our proposed approach achieves a new state-of-the-art result on the SNLI dataset.
Keywords: BERT, deep learning, fine-tuning, natural language processing (NLP), pre-training model
15. Research on Image Recognition Based on Transfer Learning (Cited: 11)
Authors: 袁文翠, 孔雪. 《微型电脑应用》, 2018, No. 7, pp. 10-12 (3 pages)
When applying deep learning to a problem, the most common obstacle is that training a model requires massive amounts of data. Although the internet generates data at the terabyte scale every day (especially image data, which has no language barrier), in a new domain only a small fraction of the data carries labels, and manually annotating all of it would consume enormous human and material resources, posing a great challenge to model training. To address this problem, this paper proposes using a fine-tune model and domain-adversarial training to transfer knowledge from existing data and train on a small-sample dataset of one's own.
Keywords: deep learning, fine-tune model, domain-adversarial training
16. A Convolutional Neural Network-Based Method for Calculating Key Foot Parameters (Cited: 3)
Authors: 梁志剑, 常力丹, 谢红宇. 《科学技术与工程》 北大核心, 2019, No. 6, pp. 190-195 (6 pages)
To address the low degree of customization in the traditional shoemaking industry and its inability to accommodate the diversity and comfort of feet, a convolutional neural network-based method for calculating key foot-shape parameters is proposed. Images are first preprocessed with perspective transformation; a fine-tune transfer learning approach is then used, modifying the fully connected classification layer of a source VGG (Visual Geometry Group) network model and fine-tuning the weights of the higher convolutional layers; the network model is optimized, and feature values are extracted and classified to recognize the foot contour from the image. Finally, the designed algorithm calculates foot-shape feature values, which are compared with actual measurements of foot length, foot waist width, foot width, and so on. Experiments show that the improved model achieves 96.8% accuracy in foot recognition, and its outputs deviate from the measured ground-truth data by no more than 3%, making it a sound basis for shoe sole manufacturing.
Keywords: foot-shape features, deep learning, convolutional neural network, transfer learning, fine-tune, morphological algorithm
17. Research on a Text Classification Method Based on the BERT-TECNN Model (Cited: 20)
Authors: 李铁飞, 生龙, 吴迪. 《计算机工程与应用》 CSCD 北大核心, 2021, No. 18, pp. 186-193 (8 pages)
Because the Bert-base, Chinese pre-trained model has a huge number of parameters, its internal parameters change only slightly during fine-tuning for classification tasks, making it prone to overfitting and weak generalization; moreover, the model is pre-trained at the character level and carries relatively little word-level information. To address these problems, the BERT-TECNN model is proposed. The model uses Bert-base, Chinese as a dynamic character-vector model, outputting character vectors that contain deep feature information; a Transformer encoder layer then performs multi-head self-attention over the data again to extract feature information and improve the model's generalization ability; a CNN layer uses convolution kernels of different sizes to capture information about words of different lengths in each sample; finally, softmax performs the classification. Compared with deep learning text classification models such as Word2Vec+CNN, Word2Vec+BiLSTM, Elmo+CNN, BERT+CNN, BERT+BiLSTM, and BERT+Transformer on three datasets, the proposed model achieved the highest accuracy, precision, recall, and F1 scores. The experiments show that the model effectively extracts character and word feature information in text, mitigates overfitting, and improves generalization.
Keywords: BERT, Transformer encoder, CNN, text classification, fine-tuning, self-attention, overfitting
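The self-attention recomputed by the Transformer encoder layer above is built from scaled dot-product attention, softmax(QKᵀ/√d_k)V. A single-head NumPy sketch, with random toy vectors standing in for BERT character embeddings (the shapes and values are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))  # shift for numerical stability
    return e / np.sum(e, axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)   # each row sums to 1
    return weights @ V, weights

# Three 4-dimensional character vectors as a toy input sequence.
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(X, X, X)  # self-attention: Q = K = V = X

print(out.shape, bool(np.allclose(w.sum(axis=1), 1.0)))  # → (3, 4) True
```

A multi-head version would project X into several smaller Q/K/V spaces, run this routine per head, and concatenate the outputs.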
18. Research on a Deep Learning-Based Object Recognition Method in Warehouse Environments (Cited: 4)
Authors: 金秋, 李天剑. 《北京信息科技大学学报(自然科学版)》, 2018, No. 1, pp. 60-65 (6 pages)
For the application scenario of object recognition by forklift robots in warehouse environments, an object recognition algorithm based on an optimized and improved Faster-RCNN is proposed. By fine-tuning the Faster-RCNN model, recognition of pallets, goods, people, forklifts, and other objects is accomplished, and the training process is optimized so that the network ultimately reaches its optimum. Comparing Faster-RCNN under different shared-convolutional-layer models shows that the optimal Faster-RCNN configuration is the ZF+RPN model. Experiments show that the improved and optimized algorithm achieves 90% accuracy for object detection in warehouse environments at a test frame rate of 33.3 fps, essentially meeting the forklift robot's requirements for real-time and accurate object detection.
Keywords: deep learning, Faster-RCNN, fine-tuning, RPN model, shared convolutional layers
19. Rolling Bearing Fault Diagnosis Based on Wavelet Packets and CNN (Cited: 10)
Authors: 许理, 李戈, 余亮, 姚毅. 《四川理工学院学报(自然科学版)》 CAS, 2018, No. 3, pp. 54-59 (6 pages)
Vibration signals of rolling bearings are strongly non-stationary. Wavelet packet (WP) time-frequency analysis can effectively extract the time-frequency features of non-stationary signals and offers fine time-frequency resolution, while the powerful feature-learning capability of convolutional neural networks (CNNs) yields fault recognition rates superior to shallow networks. To diagnose the operating state of rolling bearings more accurately, a fault diagnosis method combining wavelet packets with a CNN is proposed: wavelet packet time-frequency analysis is applied to the collected bearing vibration signals to obtain time-frequency feature maps for each signal class, and fine-tuning is used to adapt the CaffeNet CNN model, solving the problem of training a CNN with few samples; the resulting CNN model can then be used for rolling bearing fault diagnosis. Combining wavelet packets with the CNN achieves a fault recognition rate of 99.1%, higher than combining the CNN with the continuous wavelet transform (CWT) or the short-time Fourier transform (STFT). Combining principal component analysis (PCA) with a support vector machine (SVM) gives the lowest recognition rate and is clearly inadequate for compound faults.
Keywords: rolling bearing, wavelet packet, convolutional neural network, fault diagnosis, fine-tuning
20. Mining and fine-tuning sugar uptake system for titer improvement of milbemycins in Streptomyces bingchenggensis (Cited: 1)
Authors: Pinjiao Jin, Shanshan Li, Yanyan Zhang, Liyang Chu, Hairong He, Zhuoxu Dong, Wensheng Xiang. 《Synthetic and Systems Biotechnology》 SCIE, 2020, No. 3, pp. 214-221 (8 pages)
Dramatic decrease of sugar uptake is a general phenomenon in Streptomyces at stationary phase, when antibiotics are extensively produced. Milbemycins produced by Streptomyces bingchenggensis are a group of valuable macrolide biopesticides, while the low yield and titer impede their broad applications in the agricultural field. Considering that inadequate sugar uptake generally hinders titer improvement of desired products, we mined the underlying sugar uptake systems and fine-tuned their expression in this work. First, we screened the candidates at both genomic and transcriptomic level in S. bingchenggensis. Then, two ATP-binding cassette transporters named TP2 and TP5 were characterized to improve milbemycin titer and yield significantly. Next, the appropriate native temporal promoters were selected and used to tune the expression of TP2 and TP5, resulting in a maximal milbemycin A3/A4 titer increase by 36.9% to 3321 mg/L. Finally, TP2 and TP5 were broadly fine-tuned in another two macrolide biopesticide producers, Streptomyces avermitilis and Streptomyces cyaneogriseus, leading to a maximal titer improvement of 34.1% and 52.6% for avermectin B1a and nemadectin, respectively. This work provides useful transporter tools and a corresponding engineering strategy for Streptomyces.
Keywords: Streptomyces, sugar uptake system, fine-tuning, titer improvement, milbemycins, macrolide biopesticides