Funding: This study is a phased achievement of the project "Research on Innovative Communication of Romance of the Three Kingdoms under Audio Empowerment" (No. 23ZGL16), funded by the Zhuge Liang Research Center, a key research base for social sciences in Sichuan Province.
Abstract: Visual media have dominated sensory communication for decades, and the resulting "visual hegemony" has prompted calls for an "auditory return" to restore a holistic balance in cultural reception. Romance of the Three Kingdoms, a classic work of Chinese literature, has received significant attention and promotion from leading audio platforms. However, the commercialization of digital audio publishing faces unprecedented challenges because the dissemination of long-form content on digital audio platforms is at odds with the current preference for short, fast information consumption. Drawing on the Business Model Canvas and taking Romance of the Three Kingdoms as its main case, this paper argues that a business model for the audio publishing of classical books should be built from three aspects: user evaluation of digital audio platforms, the establishment of value propositions based on the principle of "creative transformation and innovative development", and the improvement of the audio publishing infrastructure. Together, these measures can ensure the healthy operation and development of digital audio platforms, improve their current state of development, and expand the boundaries of cultural heritage.
Funding: National Key R&D Program of China (2021YFE0112300); State Scholarship Fund from the China Scholarship Council (CSC) (No. 201906865016); Special Fund for Public Welfare Scientific Institutions of Fujian Province (No. 2020R1002002).
Abstract: Rainfall data with high spatial and temporal resolution are essential for urban hydrological modeling. Ubiquitous surveillance cameras continuously record rainfall events through video and audio, so they have been recognized as potential rain gauges that can supplement professional rainfall observation networks. Because video-based rainfall estimation methods are affected by variable backgrounds and lighting conditions, audio-based approaches can serve as a supplement that does not suffer from these conditions. However, most audio-based approaches focus on rainfall-level classification rather than rainfall intensity estimation. Here, we introduce the Surveillance Audio Rainfall Intensity Dataset (SARID) and a deep learning model for estimating rainfall intensity. First, we created the dataset from audio recordings of six real-world rainfall events. The recordings are segmented into 12,066 pieces and annotated with rainfall intensity and environmental information such as underlying surfaces, temperature, humidity, and wind. Then, we developed a deep-learning baseline using Mel-Frequency Cepstral Coefficients (MFCC) and a Transformer architecture to estimate rainfall intensity from surveillance audio. Validated against ground truth data, our baseline achieves a root mean absolute error of 0.88 mm h⁻¹ and a correlation coefficient of 0.765. Our findings demonstrate the potential of surveillance-audio-based models as practical and effective tools for rainfall observation systems, opening a new chapter in rainfall intensity estimation. This approach offers a novel data source for high-resolution hydrological sensing and contributes to the broader landscape of urban sensing, emergency response, and resilience.
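To make the MFCC-plus-Transformer pipeline concrete, the sketch below shows how MFCC features extracted from a surveillance audio clip can be fed to a Transformer encoder that regresses a single rainfall intensity value. The feature dimensions, model sizes, pooling strategy, and L1 training objective are illustrative assumptions, not the exact configuration of the SARID baseline.

```python
# Minimal sketch of an MFCC + Transformer rainfall-intensity regressor.
# Hyperparameters and the L1 objective are assumptions for illustration only.
import librosa
import torch
import torch.nn as nn

def extract_mfcc(path, sr=16000, n_mfcc=40):
    """Load an audio clip and return MFCC frames shaped (n_frames, n_mfcc)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return torch.tensor(mfcc.T, dtype=torch.float32)  # (time, features)

class RainfallRegressor(nn.Module):
    """Transformer encoder over MFCC frames, pooled to one intensity value (mm/h)."""
    def __init__(self, n_mfcc=40, d_model=128, nhead=4, num_layers=3):
        super().__init__()
        self.proj = nn.Linear(n_mfcc, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, 1)

    def forward(self, x):                          # x: (batch, time, n_mfcc)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1)).squeeze(-1)  # mean-pool over time

# Example: regress intensity for a batch of fixed-length clips.
model = RainfallRegressor()
batch = torch.randn(8, 300, 40)                    # 8 clips, 300 MFCC frames each
pred_mm_per_h = model(batch)                       # shape (8,)
loss = nn.L1Loss()(pred_mm_per_h, torch.rand(8) * 10.0)  # MAE-style objective
```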
Funding: Supported by the National Natural Science Foundation of China (62277014), the National Key Research and Development Program of China (2020YFC1523100), and the Fundamental Research Funds for the Central Universities of China (PA2023GDSK0047).
Abstract: Background Considerable research has been conducted on audio-driven virtual character gestures and facial animation, with some degree of success. However, few methods exist for generating full-body animations, and the portability of virtual character gestures and facial animations has not received sufficient attention. Methods We therefore propose a deep-learning-based audio-to-animation-and-blendshape (Audio2AB) network that generates gesture animations and ARKit's 52 facial expression blendshape weights from audio, audio-corresponding text, emotion labels, and semantic relevance labels, producing parametric data for full-body animation. This parameterization can be used to drive full-body animations of virtual characters and improve their portability. In the experiment, we first downsampled the gesture and facial data so that the input, output, and facial data shared the same temporal resolution. The Audio2AB network then encoded the audio, audio-corresponding text, emotion labels, and semantic relevance labels, and fused the text, emotion labels, and semantic relevance labels into the audio to obtain better audio features. Finally, we established links between the body, gesture, and facial decoders and generated the corresponding animation sequences through our proposed GAN-GF loss function. Results Using audio, audio-corresponding text, and emotion and semantic relevance labels as input, the trained Audio2AB network could generate gesture animation data containing blendshape weights, so different 3D virtual character animations could be created through parameterization. Conclusions The experimental results showed that the proposed method could generate meaningful gestures and facial animations.
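The following sketch illustrates the kind of multi-input fusion the Audio2AB abstract describes: frame-level audio features are combined with text, emotion, and semantic-relevance embeddings and decoded into per-frame gesture parameters and 52 ARKit-style blendshape weights. All module choices, layer sizes, the additive fusion, and the 63-dimensional gesture output are illustrative assumptions; the paper's GAN-GF loss and exact decoder links are not reproduced here.

```python
# Schematic sketch of an Audio2AB-style multi-modal fusion network.
# Layer sizes, additive fusion, and output dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class Audio2ABSketch(nn.Module):
    def __init__(self, audio_dim=128, text_dim=64, n_emotions=8, n_semantic=4,
                 hidden=256, gesture_dim=63, n_blendshapes=52):
        super().__init__()
        self.audio_enc = nn.GRU(audio_dim, hidden, batch_first=True)
        self.text_enc = nn.Linear(text_dim, hidden)
        self.emotion_emb = nn.Embedding(n_emotions, hidden)
        self.semantic_emb = nn.Embedding(n_semantic, hidden)
        self.gesture_dec = nn.Linear(hidden, gesture_dim)       # e.g. joint rotations (assumed)
        self.blendshape_dec = nn.Sequential(nn.Linear(hidden, n_blendshapes),
                                            nn.Sigmoid())       # weights in [0, 1]

    def forward(self, audio, text, emotion_id, semantic_id):
        # audio: (batch, frames, audio_dim); text: (batch, frames, text_dim)
        h, _ = self.audio_enc(audio)
        fused = h + self.text_enc(text) \
                  + self.emotion_emb(emotion_id).unsqueeze(1) \
                  + self.semantic_emb(semantic_id).unsqueeze(1)
        return self.gesture_dec(fused), self.blendshape_dec(fused)

# Example: two clips of 120 frames each, with per-clip emotion and semantic labels.
model = Audio2ABSketch()
audio = torch.randn(2, 120, 128)
text = torch.randn(2, 120, 64)
gestures, blendshapes = model(audio, text,
                              emotion_id=torch.tensor([0, 3]),
                              semantic_id=torch.tensor([1, 2]))
print(gestures.shape, blendshapes.shape)   # (2, 120, 63) and (2, 120, 52)
```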
Funding: Supported by the Shandong Province Key R&D Program (No. 2021SFGC0504), the Shandong Provincial Natural Science Foundation (No. ZR2021MF079), and the Science and Technology Development Plan of Jinan (Clinical Medicine Science and Technology Innovation Plan) (No. 202225054).
Abstract: Depression is a common mental health disorder. With current depression detection methods, specialized physicians often rely on conversations and physiological examinations based on standardized scales as auxiliary measures for depression assessment. Non-biological markers, typically classified as verbal or non-verbal and regarded as crucial evaluation criteria for depression, have not been effectively utilized. Specialized physicians usually require extensive training and experience to capture changes in these features. Advances in deep learning have provided technical support for capturing non-biological markers, and several researchers have proposed automatic depression estimation (ADE) systems based on audio and video to assist physicians in capturing these features and conducting depression screening. This article summarizes commonly used public datasets and recent research on audio- and video-based ADE from three perspectives: datasets, deficiencies in existing research, and future development directions.