Funding: Supported by the Centre for Digital Entertainment at Bournemouth University, the UK Engineering and Physical Sciences Research Council (EPSRC) grant EP/L016540/1, and Humain Ltd.
Abstract: Background: Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of the synthesised meshes. Despite their significance, these models have not been compared across different mesh representations or evaluated jointly with point-wise distance and perceptual metrics. Methods: We compare the influence of different mesh representation features on the spatial and perceptual fidelity of meshes reconstructed by various deep 3DMMs. This paper supports the hypothesis that building deep 3DMMs from meshes with global representations leads to lower spatial reconstruction error, measured with L1 and L2 norm metrics, but underperforms on perceptual metrics. In contrast, using differential mesh representations, which describe differential surface properties, yields lower perceptual FMPD and DAME scores but higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from the perceptual and spatial fidelity perspectives. Results: The results presented in this paper provide guidance for selecting mesh representations to build deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.
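As a rough illustration of the feature scaling this abstract compares, here is a minimal sketch of per-dimension standardisation versus min-max normalisation of mesh vertex features. The function names and the toy vertex data are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def standardise(features):
    """Zero-mean, unit-variance scaling per feature dimension (sketch)."""
    mean = features.mean(axis=0)
    std = features.std(axis=0) + 1e-8  # guard against zero variance
    return (features - mean) / std, (mean, std)

def normalise(features):
    """Min-max scaling of each feature dimension into [0, 1] (sketch)."""
    lo, hi = features.min(axis=0), features.max(axis=0)
    return (features - lo) / (hi - lo + 1e-8), (lo, hi)

# Toy "global" representation: 4 vertices with (x, y, z) coordinates
verts = np.array([[0.0, 1.0, 2.0],
                  [1.0, 3.0, 0.0],
                  [2.0, 5.0, 4.0],
                  [3.0, 7.0, 2.0]])
z, z_params = standardise(verts)   # roughly zero mean, unit variance
n, n_params = normalise(verts)     # every entry in [0, 1]
```

The stored parameters allow the inverse transform to be applied to reconstructed meshes before computing spatial error.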
Abstract: BACKGROUND: Perception is frequently impaired in patients with Alzheimer's disease (AD). Several patients exhibit visual or haptic hallucinations. CASE SUMMARY: A 71-year-old Chinese man presented with visual and haptic hallucinations he had been experiencing for 2 weeks. The clinical manifestations were the feeling of insects crawling and biting the limbs and geison. He looked for the insects while itching and scratching, which led to skin breakage on the limbs. He was treated with topical and anti-allergic drugs in several dermatology departments without any significant improvement. After admission, the patient was administered risperidone (0.5 mg) and duloxetine (2 mg/day). One week later, the dose of risperidone was increased to 2 mg/day, and that of duloxetine was increased to 60 mg/day. After 2 weeks of treatment, the patient's sensation of insects crawling and biting disappeared, and his mood stabilized. CONCLUSION: This patient manifested psychiatric and behavioral symptoms caused by AD-related brain atrophy. It was important to re-evaluate the patient's cognitive and psychological status when he repeatedly presented to the hospital for treatment. Follow-up attention to cognitive function is warranted, and perceptual deficits should be considered as possible early manifestations of AD.
Abstract: Video data are composed of multimodal information streams, including visual, auditory and textual streams, so an approach to story segmentation for news video using multimodal analysis is described in this paper. The proposed approach detects topic-caption frames and integrates them with silence-clip detection results, as well as shot segmentation results, to locate news story boundaries. The integration of audio-visual features and text information overcomes the weakness of approaches using only image analysis techniques. On test data with 135 400 frames, when the boundaries between news stories are detected, an accuracy rate of 85.8% and a recall rate of 97.5% are obtained. The experimental results show the approach is valid and robust.
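The reported accuracy and recall rates can be computed with a standard boundary-matching evaluation. The sketch below is illustrative: the tolerance window and function names are assumptions, not the paper's exact protocol.

```python
def precision_recall(detected, truth, tolerance=0):
    """Boundary-detection precision/recall with a frame tolerance window.

    detected, truth: lists of boundary frame indices.
    A detected boundary counts as correct if it lies within `tolerance`
    frames of some true boundary, and vice versa for recall.
    """
    hit = sum(any(abs(d - t) <= tolerance for t in truth) for d in detected)
    found = sum(any(abs(t - d) <= tolerance for d in detected) for t in truth)
    precision = hit / len(detected) if detected else 0.0
    recall = found / len(truth) if truth else 0.0
    return precision, recall

# Toy example: two of three detections match, two of three true boundaries found
p, r = precision_recall([10, 50, 90], [10, 50, 70])
```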
Abstract: This paper is dedicated to a thorough review of audio-visual translation studies from both home and abroad. Reviewing foreign achievements in this specific field of translation studies can shed some light on our national audio-visual translation practice and research. The review of Chinese scholars' audio-visual translation studies offers potential directions and guidelines for the field, as well as for aspects neglected so far. Based on the summary of relevant studies, possible topics for further study are proposed.
Funding: Supported by the National Natural Science Foundation of China (60905006) and the NSFC-Guangdong Joint Fund (U1035004).
Abstract: Emotion recognition has become an important task in modern human-computer interaction. A multilayer boosted HMM (MBHMM) classifier for automatic audio-visual emotion recognition is presented in this paper. A modified Baum-Welch algorithm is proposed for component HMM learning, and adaptive boosting (AdaBoost) is used to train ensemble classifiers for different layers (cues). Except for the first layer, the initial weights of training samples in the current layer are decided by the recognition results of the ensemble classifier in the layer above. Thus the training procedure using the current cue can focus more on samples that were difficult for the previous cue. Our MBHMM classifier combines these ensemble classifiers and takes advantage of the complementary information from multiple cues and modalities. Experimental results on audio-visual emotion data collected in Wizard of Oz scenarios and labeled under two types of emotion category sets demonstrate that our approach is effective and promising.
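The layer-to-layer sample reweighting the abstract describes resembles the discrete AdaBoost update, where misclassified samples gain weight before the next round. The sketch below assumes that standard update; it is not the paper's modified Baum-Welch algorithm or its exact rule.

```python
import math

def reweight(weights, correct):
    """One discrete-AdaBoost reweighting step (sketch).

    weights: current normalized sample weights.
    correct: per-sample booleans, True if the current classifier was right.
    Returns the new normalized weights and the classifier weight alpha.
    """
    err = sum(w for w, c in zip(weights, correct) if not c)
    err = min(max(err, 1e-10), 1 - 1e-10)        # clamp away from 0 and 1
    alpha = 0.5 * math.log((1 - err) / err)      # classifier confidence
    new_w = [w * math.exp(-alpha if c else alpha)
             for w, c in zip(weights, correct)]
    z = sum(new_w)                               # renormalize
    return [w / z for w in new_w], alpha
```

With four equally weighted samples and one error, the misclassified sample's weight rises to 0.5, so the next layer concentrates on it.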
Abstract: On February 10, 2019 (US Central Time), the China National Peking Opera Company (CNPOC) and the Hubei Chime Bells National Chinese Orchestra presented a fantastic audio-visual performance of Chinese Peking Opera and Chinese chime bells for the American audience at the world-class Buntrock Hall at Symphony Center.
Funding: This paper is a periodic research result of the Basic Research Project of Beijing Institute of Graphic Communication: Research on the Artistic, Modern Communication and Publishing of Dian-shi Zhai Pictorial (1884-1898) (Serial Number Eb202008).
Abstract: Mongolian audio-visual works are an important carrier for exploring the true significance of this national culture. This paper argues that the Mongolian people in Inner Mongolia constantly enhance their individual sense of identity with the overall ethnic group through the influence of film, television and music, and on this basis constantly evolve a new culture in line with modern and contemporary life to further enhance their sense of belonging to the ethnic nation.
Abstract: Based on the current situation of college audio-visual English teaching in China, this article points out that in-class avoidance is a serious phenomenon in college audio-visual English teaching. After further analysis, and in combination with the characteristics of college English audio-visual teaching in China, it proposes applying the task-based teaching method to college audio-visual English teaching and outlines its steps, attempting to alleviate students' avoidance behaviour through task-based teaching.
Abstract: Point-based rendering is a common method widely used in point cloud rendering. It realizes rendering by turning the points into base geometry. The critical step in point-based rendering is to set an appropriate rendering radius for the base geometry, usually calculated using the average Euclidean distance from the N nearest neighbouring points to the rendered point. This method effectively reduces the appearance of empty spaces between points in rendering. However, it also causes the problem that the rendering radius of outlier points far away from the central region of the point cloud sequence can be large, which impacts perceptual quality. To solve this problem, we propose an algorithm for point-based point cloud rendering that uses outlier detection to optimize the perceptual quality of rendering. The algorithm determines whether detected points are outliers using a combination of local and global geometric features. For the detected outliers, the minimum radius is used for rendering. We examine the performance of the proposed method in terms of both objective quality and perceptual quality. The experimental results show that the peak signal-to-noise ratio (PSNR) of the point cloud sequences is improved under all geometric quantization levels, and the PSNR improvement ratio is more evident in dense point clouds. Specifically, the PSNR of the point cloud sequences is improved by 3.6% on average compared with the original algorithm. The proposed method significantly improves the perceptual quality of the rendered point clouds, and the results of ablation studies prove the feasibility and effectiveness of the proposed method.
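The baseline radius computation and the outlier clamping described in this abstract can be sketched as follows. The neighbour count k, the outlier threshold, and the minimum radius are illustrative assumptions, not the paper's values, and a simple median-based rule stands in for its combined local/global feature test.

```python
import numpy as np

def render_radii(points, k=4, outlier_factor=2.0, min_radius=0.01):
    """Per-point rendering radius from the mean distance to the k nearest
    neighbours; points whose radius exceeds outlier_factor x the global
    median are treated as outliers and clamped to min_radius (sketch)."""
    # Pairwise Euclidean distances; ignore self-distance on the diagonal
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.sort(d, axis=1)[:, :k]          # k smallest distances per point
    radii = knn.mean(axis=1)                 # baseline rendering radius
    median = np.median(radii)                # crude global reference scale
    radii[radii > outlier_factor * median] = min_radius
    return radii

# Dense cluster plus one far-away outlier: the outlier gets the minimum radius
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0],
                [0.5, 0.5, 0.5], [50.0, 50.0, 50.0]])
radii = render_radii(pts, k=3)
```

Note the O(n^2) distance matrix is only viable for small clouds; a k-d tree would replace it in practice.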
Abstract: The object-based scalable coding in MPEG-4 is investigated, and a prioritized transmission scheme for MPEG-4 audio-visual objects (AVOs) over the DiffServ network with QoS guarantees is proposed. MPEG-4 AVOs are extracted and classified into different groups according to their priority values and scalable layers (visual importance). These priority values are mapped to the IP DiffServ per-hop behaviors (PHBs). This scheme can selectively discard packets of low importance in order to avoid network congestion. Simulation results show that the quality of the received video gracefully adapts to the network state, compared with the 'best-effort' manner. Also, by allowing the content provider to define the prioritization of each audio-visual object, the adaptive transmission of object-based scalable video can be customized based on the content.
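A priority-to-PHB mapping of the kind this abstract describes might look like the following sketch. The policy table and function are hypothetical; the paper's exact mapping is not given here. The DSCP code points themselves (EF = 46, AF41 = 34, AF21 = 18, best effort = 0) are standard DiffServ values.

```python
# Standard DSCP code points for a few common PHBs
DSCP = {"EF": 46, "AF41": 34, "AF21": 18, "BE": 0}

def phb_for_object(priority, layer):
    """Map an AVO's priority and scalable layer to a PHB (hypothetical policy).

    Base layers are protected more strongly than enhancement layers, so
    congestion drops hit the least visually important packets first.
    """
    if priority == "high" and layer == "base":
        return "EF"      # base layer of an important object: expedited
    if layer == "base":
        return "AF41"    # other base layers: high assured forwarding
    if priority == "high":
        return "AF21"    # enhancement layers of important objects
    return "BE"          # low-priority enhancement layers: best effort
```

Marking each packet with `DSCP[phb_for_object(...)]` lets DiffServ routers discard best-effort enhancement data first under congestion, which is the selective-discard behaviour the scheme relies on.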