Abstract: Since a complete DNA chain contains a large amount of data (usually billions of nucleotides), it is challenging to determine the function of each sequence segment. Several powerful models for predicting the function of DNA sequences have been proposed, including the CNN (convolutional neural network), the RNN (recurrent neural network), and the LSTM (long short-term memory) [1]. However, each of them has flaws; for example, a plain RNN can hardly retain long-term memory. Here, we build on one of these models, DanQ, which combines a CNN and an LSTM. We extend DanQ by developing an improved DanQ model and applying it to predict the function of DNA sequences more efficiently. In the original DanQ model, the regulatory grammar is learned from the regulatory motifs captured by the convolution layer and from the long-term dependencies between those motifs captured by the recurrent layer, which increases prediction accuracy. In tests against other models, DanQ improves markedly on several metrics; for some regulatory markers, it achieves improvements of more than 50% in the area under the precision-recall curve.
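The abstract above describes DanQ's first stage: a convolution layer whose filters act as learned motif scanners, followed by pooling. A minimal sketch of that motif-scanning step is below; the sequence, the single hand-written "ACGT" filter, and the dimensions are invented for illustration only (the real DanQ model learns hundreds of filters from data and feeds the pooled output into a bidirectional LSTM).

```python
# Minimal sketch of a DanQ-style motif-scanning convolution (illustrative only).
BASES = "ACGT"

def one_hot(seq):
    """One-hot encode a DNA string as a list of 4-element columns."""
    return [[1.0 if BASES[k] == b else 0.0 for k in range(4)] for b in seq]

def scan(x, motif):
    """Valid cross-correlation of one motif filter along the sequence."""
    width = len(motif)
    scores = []
    for i in range(len(x) - width + 1):
        s = sum(x[i + j][k] * motif[j][k]
                for j in range(width) for k in range(4))
        scores.append(s)
    return scores

seq = "ACGTTGCATGCA"          # hypothetical input fragment
# A PWM-like filter that matches the motif "ACGT"; DanQ learns such
# filters from data rather than hand-coding them.
motif = [
    [1.0, 0.0, 0.0, 0.0],    # position 1 prefers A
    [0.0, 1.0, 0.0, 0.0],    # position 2 prefers C
    [0.0, 0.0, 1.0, 0.0],    # position 3 prefers G
    [0.0, 0.0, 0.0, 1.0],    # position 4 prefers T
]

scores = scan(one_hot(seq), motif)
best = max(scores)           # max-pooling keeps the strongest motif match
print(scores, best)
```

The pooled maximum (here the perfect match at position 1) is what the downstream recurrent layer would consume when learning dependencies between motifs.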
Abstract: In recent years, deep learning has been widely applied to image processing, natural language processing, speech recognition, and other fields thanks to its superior performance, delivering improvements far beyond earlier traditional methods. This paper builds the acoustic model for speech recognition using the long short-term memory (LSTM) variant of the recurrent neural network (RNN), and adds the influence of reversed temporal information on training, forming a bidirectional long short-term memory (BLSTM) model. A speech signal is a complex time-varying signal, and a BLSTM can process such time-series data while selectively retaining useful information and discarding useless information. Experiments show that the recognition rate of this method is significantly higher than that of the traditional Gaussian mixture model-hidden Markov model (GMM-HMM).
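The bidirectional wiring described above can be sketched in a few lines: the same recurrent cell is run left-to-right and right-to-left over the frames, and the two hidden states for each frame are concatenated. The toy cell below is a plain tanh recurrence, not a full LSTM (which adds input, forget, and output gates), and the frame values and weights are invented for illustration.

```python
import math

def step(h, x, w_h=0.5, w_x=1.0):
    """Toy recurrent cell; a real LSTM adds gating to this update."""
    return math.tanh(w_h * h + w_x * x)

def run(xs):
    """Run the cell over a sequence, returning the hidden state per frame."""
    h, hs = 0.0, []
    for x in xs:
        h = step(h, x)
        hs.append(h)
    return hs

frames = [0.1, -0.3, 0.8, 0.2]               # hypothetical per-frame features

forward = run(frames)                          # left-to-right context
backward = list(reversed(run(frames[::-1])))   # right-to-left context

# Each output frame now sees both past and future context.
bidirectional = [(f, b) for f, b in zip(forward, backward)]
print(bidirectional)
```

This is why a BLSTM suits acoustic modeling: the label for a frame often depends on phones that come after it, which only the backward pass can supply.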
Abstract: Bidirectional long short-term memory (BLSTM) is a special kind of recurrent neural network (RNN) that can effectively model the long-range context of speech. This paper proposes a speech-driven facial animation synthesis method based on deep BLSTM: a BLSTM-RNN is trained on a speaker's audio-visual bimodal data, an active appearance model (AAM) is used to model the face images, and the AAM parameters serve as the network output. The effects of the network structure and of different acoustic-feature inputs on the synthesized animation are studied. Experiments on the LIPS2008 benchmark show that networks with BLSTM layers clearly outperform feed-forward networks, that a three-layer BLSTM-feedforward-BLSTM structure with 256 nodes per layer (BFB256) performs best, and that combining FBank, fundamental frequency, and energy features further improves the synthesized animation.
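The feature combination reported above amounts to concatenating the per-frame streams into one input vector before the BLSTM. A minimal sketch is below; the dimensions (40-dim FBank, scalar pitch, scalar energy) and values are common choices invented for illustration, not taken from the paper.

```python
# Hypothetical per-frame acoustic features (values are made up).
fbank = [0.2] * 40      # 40-dim log mel filterbank, a common setting
f0 = [120.0]            # fundamental frequency in Hz
energy = [0.7]          # frame energy

def frame_input(fbank, f0, energy):
    """Concatenate the feature streams into one network input vector."""
    return fbank + f0 + energy

vec = frame_input(fbank, f0, energy)
print(len(vec))         # dimensionality of the BLSTM input per frame
```

Stacking such vectors over time yields the sequence the BLSTM-feedforward-BLSTM network maps to AAM parameter trajectories.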