Funding: Supported by the Shandong Province Key R&D Program, No. 2021SFGC0504; the Shandong Provincial Natural Science Foundation, No. ZR2021MF079; and the Science and Technology Development Plan of Jinan (Clinical Medicine Science and Technology Innovation Plan), No. 202225054.
Abstract: Depression is a common mental health disorder. With current depression detection methods, specialized physicians often engage in conversations and physiological examinations based on standardized scales as auxiliary measures for depression assessment. Non-biological markers, typically classified as verbal or non-verbal and deemed crucial evaluation criteria for depression, have not been effectively utilized. Specialized physicians usually require extensive training and experience to capture changes in these features. Advances in deep learning have provided technical support for capturing non-biological markers, and several researchers have proposed automatic depression estimation (ADE) systems based on audio and video to assist physicians in capturing these features and conducting depression screening. This article summarizes commonly used public datasets and recent research on audio- and video-based ADE from three perspectives: datasets, deficiencies in existing research, and future development directions.
Funding: Project (17KJB510029) supported by the Natural Science Foundation of the Jiangsu Higher Education Institutions, China; Project (GXL2017004) supported by the Scientific Research Foundation of Nanjing Forestry University, China; Project (202102210132) supported by the Important Project of Science and Technology of Henan Province, China; Project (B2019-51) supported by the Scientific Research Foundation of Henan Polytechnic University, China; Project (51521003) supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China; Project (KQTD2016112515134654) supported by the Shenzhen Science and Technology Program, China.
Abstract: A filter algorithm based on cochlear mechanics and the neuron filter mechanism is proposed from the viewpoint of vibration. It helps address the problem that non-linear amplification is rarely considered in studies of auditory filters. A cochlear mechanical transduction model is built to illustrate how audio signals are processed in the cochlea, and the neuron filter mechanism is then modeled to indirectly obtain outputs with the cochlear properties of frequency tuning and non-linear amplification. The mathematical description of the proposed algorithm is derived from these two models. The parameter space, the parameter selection rules, and the error correction of the proposed algorithm are discussed. The unit impulse responses in the time and frequency domains are simulated and compared to probe the characteristics of the proposed algorithm. A 24-channel filter bank is then built from the proposed algorithm and applied to audio signal enhancement. Experiments and comparisons verify that the proposed algorithm can effectively divide audio signals into different frequency bands, significantly enhance the high-frequency parts, and improve speech-enhancement performance in different noise environments, especially for babble noise and Volvo noise.
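The abstract above does not give the cochlear-mechanics filter equations, so the sketch below uses a conventional stand-in: a 24-channel bank of ERB-spaced gammatone filters, a standard auditory front end. All parameter values (frequency range, filter order, impulse-response length) are assumptions for illustration, and the paper's non-linear amplification stage is not reproduced.

```python
import numpy as np

def erb_centers(f_low, f_high, n):
    """Center frequencies evenly spaced on the ERB-rate scale (Glasberg-Moore constants)."""
    c = 9.26449 * 24.7                     # EarQ * minBW
    return np.exp(np.linspace(np.log(f_low + c), np.log(f_high + c), n)) - c

def gammatone_ir(fc, fs, dur=0.030, order=4):
    """Impulse response of a 4th-order gammatone filter centered at fc (Hz)."""
    t = np.arange(int(dur * fs)) / fs
    b = 1.019 * (fc / 9.26449 + 24.7)      # bandwidth ~ 1.019 * ERB(fc)
    g = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return g / np.max(np.abs(g))

def filter_bank(signal, fs, n_channels=24, f_low=80.0, f_high=8000.0):
    """Split a signal into n_channels band-limited outputs, one per center frequency."""
    cfs = erb_centers(f_low, f_high, n_channels)
    outs = np.stack([np.convolve(signal, gammatone_ir(fc, fs), mode="same")
                     for fc in cfs])
    return outs, cfs
```

Feeding a pure tone through the bank concentrates energy in the channel whose center frequency is closest to the tone, which is the frequency-division behavior the abstract describes.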
Funding: This work was supported by the High-grade, Precision and Advanced Discipline Construction Project of Beijing Universities; the Major Projects of the National Social Science Fund of China (No. 21ZD19); and the National Culture and Tourism Technological Innovation Engineering Project of China.
Abstract: Audio mixing is a crucial part of music production. For analyzing or recreating a mix, it is important to estimate the mixing parameters used to create mixdowns from music recordings, i.e., to perform audio mixing inversion; however, approaches to audio mixing inversion are rarely explored. A method for estimating mixing parameters from raw tracks and a stereo mixdown via embodied self-supervised learning is presented. In this work, several commonly used audio effects are considered, including gain, pan, equalization, reverb, and compression. The method learns an inference neural network that takes a stereo mixdown and the raw audio sources as input and estimates the mixing parameters used to create the mixdown, by iteratively sampling and training. During the sampling step, the inference network predicts a set of mixing parameters, which is sampled and fed to an audio-processing framework to generate audio data for the training step. During the training step, the same network is optimized with the sampled data generated in the sampling step. This approach explicitly models the mixing process in an interpretable way instead of using a black-box neural network. A set of objective measures is used for evaluation, and the experimental results show that the method outperforms current state-of-the-art methods.
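The sample-render-train cycle described above can be sketched on a toy scale. The code below handles only gain and pan (not the paper's EQ, reverb, or compression), and replaces the inference neural network with a linear least-squares inverse; the function names and parameter ranges are illustrative assumptions, not the paper's implementation. The self-supervised flavor is preserved: training pairs are created by sampling random mixing parameters and rendering them through the (known, interpretable) mixing model.

```python
import numpy as np

rng = np.random.default_rng(0)

def render(sources, coeffs):
    """Stereo mixdown from per-track pan coefficients: L = a @ S, R = b @ S."""
    a, b = coeffs
    return np.stack([a @ sources, b @ sources])

def features(sources, mix):
    """Correlation of each mix channel with each raw track."""
    return (mix @ sources.T / sources.shape[1]).ravel()

def train_inverter(sources, n_samples=400):
    """Sampling step + training step: draw random gain/pan settings, render
    them, and fit a linear inverse map (a stand-in for the inference network)."""
    X, Y = [], []
    for _ in range(n_samples):
        g = rng.uniform(0.2, 1.0, len(sources))       # per-track gain
        p = rng.uniform(0.0, 1.0, len(sources))       # 0 = hard left, 1 = hard right
        a, b = g * np.cos(p * np.pi / 2), g * np.sin(p * np.pi / 2)
        X.append(features(sources, render(sources, (a, b))))
        Y.append(np.concatenate([a, b]))
    W, *_ = np.linalg.lstsq(np.array(X), np.array(Y), rcond=None)
    return W

def estimate(W, sources, mix):
    """Recover per-track gain and pan from an unseen mixdown of the same sources."""
    ab = features(sources, mix) @ W
    a, b = np.split(ab, 2)
    return np.hypot(a, b), (2 / np.pi) * np.arctan2(b, a)
```

Because constant-power panning makes the mixdown linear in the coefficients (a, b), the linear inverter recovers them essentially exactly in this toy setting; the paper's effects chain is non-linear, which is why it needs an iteratively trained network instead.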
Abstract: The recognition and retrieval of identical videos by combing through entire video files requires a great deal of time and memory. Therefore, most current video-matching methods analyze only part of each video's image-frame information. All these methods, however, share a critical problem: identical videos are erroneously categorized as different if they have merely been altered in resolution or converted with a different codec. This paper instead presents an identical-video-retrieval method using the low-peak feature of audio data, which remains relatively stable even when the bit rate or codec changes. The proposed method achieved a search success rate of 93.7% in a video-matching experiment. This approach could provide a technique for recognizing identical content on video file-sharing sites.
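The abstract does not define the low-peak feature precisely. One plausible reading, sketched below under that assumption, is a frame-wise index of the dominant low-frequency spectral peak: low-band peaks carry most of the signal energy, so their positions tend to survive re-encoding, which is the robustness property claimed above. Every parameter value here is an illustrative guess, not the paper's configuration.

```python
import numpy as np

def low_peak_signature(audio, fs, frame=2048, hop=1024, f_max=1000.0):
    """Per-frame index of the strongest FFT bin below f_max (the assumed 'low peak')."""
    n_bins = int(f_max * frame / fs)               # keep only bins below f_max
    win = np.hanning(frame)
    peaks = []
    for start in range(0, len(audio) - frame, hop):
        spec = np.abs(np.fft.rfft(audio[start:start + frame] * win))
        peaks.append(int(np.argmax(spec[:n_bins])))
    return np.array(peaks)

def match_rate(sig_a, sig_b, tol=1):
    """Fraction of frames whose peak bins agree within tol bins."""
    n = min(len(sig_a), len(sig_b))
    return float(np.mean(np.abs(sig_a[:n] - sig_b[:n]) <= tol))
```

In this sketch, a coarsely requantized copy of a recording (a crude stand-in for codec conversion) keeps almost the same signature, while a recording with different low-band content does not.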
Abstract: Recently, many audio search sites, led by Google, have used audio fingerprinting technology to search for identical audio and protect music copyright using one part of the audio data. However, if many fingerprints are generated per audio file, the amount of query data for the audio search increases. In this paper, we propose a novel method that can reduce the number of fingerprints while providing performance similar to that of existing methods. The proposed method uses the difference of Gaussians, which is often used for feature extraction in image signal processing. In our experiments, combining the proposed method with dynamic time warping, we searched for identical audio with a success rate of 90%. The proposed method can therefore be used for effective audio search.
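The two ingredients named above can be sketched together. The abstract does not specify the exact feature pipeline, so the code below makes assumptions: it applies a 1-D difference of Gaussians (the same band-pass construction used in image keypoint detection) to an audio energy envelope to form a compact fingerprint, and compares fingerprints with a textbook dynamic-time-warping distance.

```python
import numpy as np

def gaussian_smooth(x, sigma):
    """Blur a 1-D sequence with a truncated Gaussian kernel."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    return np.convolve(x, k / k.sum(), mode="same")

def dog_fingerprint(envelope, sigma=2.0, k=1.6):
    """Band-pass the envelope by subtracting two Gaussian blurs (SIFT-style DoG)."""
    return gaussian_smooth(envelope, sigma) - gaussian_smooth(envelope, k * sigma)

def dtw_distance(a, b):
    """Classic O(n*m) dynamic-time-warping distance between two sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

DTW is what lets a fingerprint still match a copy whose timing differs (e.g., a slightly stretched re-encode): a time-stretched copy scores a much smaller warping distance against the original than unrelated material does.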