Funding: This research was funded by the College Student Innovation and Entrepreneurship Training Program (grant numbers 2021055Z and S202110082031) and the Special Project for Cultivating Scientific and Technological Innovation Ability of College and Middle School Students in Hebei Province (grant number 2021H011404).
Abstract: To generate realistic three-dimensional animation of a virtual character, capturing real facial expressions is the primary task. Owing to diverse facial expressions and complex backgrounds, facial landmarks recognized by existing strategies suffer from deviation and low accuracy. Therefore, this paper proposes a facial expression capture method based on a two-stage neural network, which combines an improved multi-task cascaded convolutional network (MTCNN) with a high-resolution network. First, the convolution operations of the traditional MTCNN are improved: feature fusion in the first stage quickly filters face information in the input image, and Octave Convolution replaces the original convolutions in the second stage to enhance the network's feature extraction ability and reject a large number of false candidates. The model thus outputs more accurate face candidate windows, locating faces for better landmark recognition. The images cropped after face detection are then fed into the high-resolution network, where parallel multi-resolution streams realize multi-scale feature fusion and produce rich high-resolution heatmaps of the facial landmarks. Finally, changes in the recognized facial landmarks are tracked in real time; the extracted expression parameters are transmitted to the Unity3D engine to drive the virtual character's face, realizing synchronized facial expression animation. Extensive experimental results on the WFLW database demonstrate the superiority of the proposed method in accuracy and robustness, especially under diverse expressions and complex backgrounds. The method accurately captures facial expressions and generates three-dimensional animation effects, making online entertainment and social interaction more immersive in shared virtual spaces.
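As a minimal illustration of the heatmap stage described above (a sketch, not the authors' implementation): landmark coordinates are typically read off HRNet-style heatmaps by locating each map's peak response. The `decode_heatmaps` helper below is a hypothetical pure-Python version of that decoding step.

```python
def decode_heatmaps(heatmaps):
    """Read (x, y) landmark coordinates off per-landmark heatmaps.

    heatmaps: a list of 2D lists of floats, one heatmap per facial landmark.
    Returns one (x, y) peak location per heatmap -- the usual way coordinates
    are decoded from high-resolution landmark heatmaps.
    """
    points = []
    for hm in heatmaps:
        best_val, best_xy = float("-inf"), (0, 0)
        for y, row in enumerate(hm):
            for x, val in enumerate(row):
                if val > best_val:  # track the strongest response so far
                    best_val, best_xy = val, (x, y)
        points.append(best_xy)
    return points
```

In practice the integer peak is usually refined to sub-pixel precision (e.g. by a quarter-pixel offset toward the second-highest neighbor) and scaled back to the original image coordinates of the cropped face window.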
Funding: This work was funded by the National Natural Science Foundation of China (grant nos. 51625302 and 51873091) and the National Key Research and Development Program of China (2017YFC1103501).
Abstract: Molecular imprinting of proteins remains a major challenge because of two obstacles: difficulty in template removal and low imprinting efficiency. Herein, we propose a new strategy to overcome both challenges simultaneously by creating molecularly imprinted polymers (MIPs) with nanoscale shape-memorable imprint cavities. These novel MIPs were developed simply by cross-linking the polymers with a peptide cross-linker instead of commonly used cross-linkers. Owing to the unique pH-induced helix–coil transition of the peptide cross-linker, adjusting the pH from 5.5 to 7.4 expands the imprint cavities, thus facilitating template removal. Returning the pH to 5.5 restores the original size and shape of the imprint cavities through precise refolding of the peptide. A template protein can therefore be readily removed under mild conditions while simultaneously achieving a significantly improved imprinting effect.