Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of the language used on such platforms. Several methods exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work presented in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for neutrosophic-set classification. During training, the WOA is employed to explore and determine a strong initial set of weights, and the PSO algorithm then fine-tunes these weights to optimize the performance of the MLP. Two separate MLP models are employed: one is dedicated to predicting degrees of truth membership, while the other predicts degrees of false membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in the predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
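The two-MLP neutrosophic output described above can be sketched as follows. This is a hypothetical illustration only: the formula used for indeterminacy (1 - |T - F|, so that memberships which fail to clearly separate yield high indeterminacy) is an assumption, not the paper's exact definition, and the two classifier outputs are supplied as plain numbers rather than MLP predictions.

```python
# Hypothetical sketch: combine the outputs of two classifiers (a truth
# membership T and a falsity membership F) into a neutrosophic triple.
# The indeterminacy formula 1 - |T - F| is an illustrative assumption.

def neutrosophic_triple(truth: float, falsity: float) -> tuple[float, float, float]:
    """Return (T, F, I); I grows as T and F separate less clearly."""
    indeterminacy = 1.0 - abs(truth - falsity)
    return truth, falsity, indeterminacy

# A confident prediction: high truth, low falsity -> low indeterminacy.
t, f, i = neutrosophic_triple(0.9, 0.1)

# An ambiguous prediction: both memberships similar -> high indeterminacy.
t2, f2, i2 = neutrosophic_triple(0.5, 0.5)
```

Under this assumed formula, the confident example yields an indeterminacy near 0.2 while the ambiguous one yields 1.0, which matches the abstract's intuition that a small gap between the memberships signals an uncertain prediction.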
Purpose - Hate speech is an expression of intense hatred. Twitter has become a popular analytical tool for the prediction and monitoring of abusive behaviors. Hate speech detection with social media data has received special research attention in recent studies; hence the need to design a generic metadata architecture and an efficient feature extraction technique to enhance hate speech detection. Design/methodology/approach - This study proposes hybrid embeddings enhanced with a topic inference method and an improved cuckoo search neural network for hate speech detection in Twitter data. The proposed method uses a hybrid embeddings technique that combines Term Frequency-Inverse Document Frequency (TF-IDF) for word-level feature extraction with Long Short-Term Memory (LSTM), a variant of the recurrent neural network architecture, for sentence-level feature extraction. The extracted features from the hybrid embeddings then serve as input to the improved cuckoo search neural network for the prediction of a tweet as hate speech, offensive language, or neither. Findings - The proposed method showed better results when tested on the collected Twitter datasets compared with other related methods. To validate the performance of the proposed method, t-tests and post hoc multiple comparisons were used to compare its significance and means with those of other related methods for hate speech detection; a paired-sample t-test was also conducted. Research limitations/implications - The evaluation results showed that the proposed method outperforms other related methods with a mean F1-score of 91.3. Originality/value - The main novelty of this study is the use of an automatic topic-spotting measure based on a naïve Bayes model to improve feature representation.
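The word-level TF-IDF side of the hybrid embeddings above can be sketched in a few lines. This is a minimal stand-in, not the authors' pipeline: it omits the LSTM sentence-level features and the topic-inference step, and uses the plain tf·idf definition rather than any library's smoothed variant.

```python
import math
from collections import Counter

# Minimal TF-IDF sketch (word-level features only; the paper pairs this
# with LSTM sentence-level features, which are omitted here).
def tfidf(docs: list[list[str]]) -> list[dict[str, float]]:
    n = len(docs)
    # Document frequency: in how many documents each term appears.
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return vectors

docs = [["hate", "speech", "detection"],
        ["speech", "recognition"],
        ["topic", "detection"]]
vecs = tfidf(docs)
```

Terms shared by every document get an idf of log(1) = 0, so only discriminative words carry weight, which is the property that makes TF-IDF useful as a word-level feature.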
The hidden danger to automatic speaker verification (ASV) systems is the variety of spoofed speech. These threats can be classified into two categories, namely logical access (LA) and physical access (PA). To improve the identification capability of spoofed speech detection, this paper focuses on features. Firstly, following the idea of modifying constant-Q-based features, this work considered adding the variance or mean to the constant-Q-based cepstral domain to obtain good performance. Secondly, linear frequency cepstral coefficients (LFCCs) performed comparably with constant-Q-based features. Finally, we propose linear frequency variance-based cepstral coefficients (LVCCs) and linear frequency mean-based cepstral coefficients (LMCCs) for the identification of speech spoofing. LVCCs and LMCCs are obtained by adding the frame variance or mean to the log magnitude spectrum underlying the LFCC features. The proposed features were evaluated on the ASVspoof 2019 dataset. The experimental results show that, compared with known hand-crafted features, LVCCs and LMCCs are more effective in resisting spoofed speech attacks.
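The frame-statistic augmentation behind LVCC and LMCC can be sketched as follows. This is a hedged illustration of the general idea only: `log_spectra` stands in for LFCC-style per-frame log magnitude spectra, and the subsequent DCT/cepstral step of a real LFCC pipeline is omitted.

```python
from statistics import mean, pvariance

# Sketch of the LVCC/LMCC idea: append the per-frame variance (LVCC) or
# mean (LMCC) of the log magnitude spectrum as an extra feature before
# the cepstral transform.  Values here are toy frames, not real audio.
def augment_frames(log_spectra, mode="variance"):
    stat = pvariance if mode == "variance" else mean
    return [frame + [stat(frame)] for frame in log_spectra]

frames = [[0.0, 1.0, 2.0], [3.0, 3.0, 3.0]]
lvcc_in = augment_frames(frames, mode="variance")  # appends 2/3, then 0
lmcc_in = augment_frames(frames, mode="mean")      # appends 1, then 3
```

A flat frame contributes zero variance, so the appended statistic summarizes how spread out each frame's spectral energy is, which is the extra cue the proposed features add on top of plain LFCCs.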
To apply speech recognition systems in actual circumstances, from inspection and maintenance operations in industrial factories to recording and reporting routines at construction sites where handwriting is difficult, countermeasures against surrounding noise are indispensable. In this study, a signal detection method that removes noise from actual speech signals is proposed, using Bayesian estimation with the aid of bone-conducted speech. More specifically, by applying Bayes' theorem to observations of air-conducted speech contaminated by surrounding background noise, a new type of noise-removal algorithm is theoretically derived. In the proposed speech detection method, bone-conducted speech is utilized to obtain a precise estimate of the speech signal. The effectiveness of the proposed method is experimentally confirmed by applying it to air- and bone-conducted speech measured in a real environment in the presence of surrounding background noise.
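The Bayesian estimation idea can be illustrated with a toy scalar version. This is not the paper's derivation: it assumes a simple additive Gaussian model y = s + n for the air-conducted observation and uses the bone-conducted measurement as the centre of a Gaussian prior, evaluating the posterior on a grid. All distributions and parameter values are illustrative assumptions.

```python
import math

# Toy sketch: estimate a clean speech amplitude s from a noisy
# air-conducted observation y = s + n (Gaussian noise), with a prior
# centred on the bone-conducted measurement.  Grid-based Bayes update.
def posterior_mean(y, bone, noise_var=1.0, prior_var=0.5, grid=None):
    if grid is None:
        grid = [i / 100 for i in range(-500, 501)]  # candidate s values

    def gauss(x, mu, var):  # unnormalised Gaussian density
        return math.exp(-(x - mu) ** 2 / (2 * var))

    # Bayes: posterior(s) proportional to likelihood(y | s) * prior(s)
    weights = [gauss(y, s, noise_var) * gauss(s, bone, prior_var) for s in grid]
    z = sum(weights)
    return sum(s * w for s, w in zip(grid, weights)) / z

est = posterior_mean(y=2.0, bone=1.0)
```

For conjugate Gaussians the posterior mean has the closed form (y/noise_var + bone/prior_var) / (1/noise_var + 1/prior_var) = 4/3 here, so the grid estimate should land very close to 1.333: the bone-conducted prior pulls the noisy observation toward the cleaner measurement.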
Diagnosing a baby's feelings poses a challenge for both doctors and parents because babies cannot explain their feelings through expression or speech. Understanding the emotions of babies and their associated expressions during different sensations, such as hunger and pain, is a complicated task. In infancy, all communication and feelings are conveyed through cry-speech, a natural phenomenon. Several clinical methods can be used to diagnose a baby's diseases, but non-clinical methods of diagnosing a baby's feelings are lacking. As such, in this study, we aimed to identify babies' feelings and emotions from their cries using a non-clinical method. Our method identifies changes in the cry sound and uses them to assess the baby's feelings. We derived the frequency of the cries from the energy of the sound. The feelings represented by the infant's cry are judged to correspond to certain sensations expressed by the child, using the optimal frequency for recognizing real-world audio. We used machine learning and artificial intelligence to distinguish cry tones in real time through feature analysis. The experimental group consisted of equal numbers of male and female babies, and we assessed the relevance of the results against different parameters. The application produces results in real time after recognizing a child's cry sounds. The novelty of our work is that we, for the first time, successfully derived the feelings of young children through the cry-speech of the child, showing promise for end-user applications.
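A common first step when deriving energy-based cues from a cry recording is short-time energy over fixed frames, which can be sketched as below. The frame length and energy definition here are generic illustrations, not the settings used in the study.

```python
# Short-time energy sketch: sum of squared samples per fixed-length
# frame.  A silent frame yields 0; a loud frame yields a large value.
def short_time_energy(samples, frame_len=4):
    energies = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        energies.append(sum(x * x for x in frame))
    return energies

# One silent frame followed by one active frame.
signal = [0.0, 0.0, 0.0, 0.0, 1.0, -1.0, 1.0, -1.0]
print(short_time_energy(signal))  # [0.0, 4.0]
```

Tracking how this energy contour changes over time is one simple way an application could flag transitions in a cry signal before any higher-level classification.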
Since the performance of traditional endpoint detection algorithms degrades as the environmental noise level increases, a recursive algorithm for calculating higher-order cumulants over a sliding window is proposed and applied to speech endpoint detection. Endpoint detection is further carried out with an energy feature. Experimental results show that both the computational efficiency and the robustness against noise of the proposed algorithm are improved remarkably compared with the traditional algorithm. The average probability of correct endpoint detection (Pc-point) of the proposed voice activity detection (VAD) is 6.07% higher than that of the G.729B VAD across different noise types and signal-to-noise ratios (SNRs).
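The computational saving of a recursive sliding-window scheme comes from updating running sums instead of re-summing the whole window at each step. The sketch below shows this trick for the power sums from which higher-order moments and cumulants are built; the paper's exact cumulant formulas are not reproduced here.

```python
# When the window slides by one sample, each power sum can be updated in
# O(1) by removing the outgoing sample's contribution and adding the
# incoming one, instead of re-summing the whole window.
def slide_power_sums(sums, x_out, x_in):
    """Update [sum x, sum x^2, sum x^3, sum x^4] for a one-sample slide."""
    return [s - x_out ** (k + 1) + x_in ** (k + 1) for k, s in enumerate(sums)]

window = [1.0, 2.0, 3.0]
sums = [sum(x ** (k + 1) for x in window) for k in range(4)]  # [6, 14, 36, 98]
sums = slide_power_sums(sums, x_out=1.0, x_in=4.0)            # window is now [2, 3, 4]
```

The recursively updated sums match a direct recomputation over the new window, so per-frame cost no longer grows with the window length, which is the efficiency gain the abstract reports.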
The Perception Spectrogram Structure Boundary (PSSB) parameter is proposed for speech endpoint detection as a preprocessing step for speech or speaker recognition. First, hearing-perception-based speech enhancement is carried out. Then, two-dimensional enhancement is performed on the spectrogram, exploiting the difference between the deterministic distribution characteristic of speech and the random distribution characteristic of noise. Finally, the endpoint decision is made using the PSSB parameter. Experimental results show that, in low-SNR environments from -10 dB to 10 dB, the proposed algorithm achieves higher accuracy than existing endpoint detection algorithms. A detection accuracy of 75.2% is reached even at the extremely low SNR of -10 dB. The algorithm is therefore suitable for speech endpoint detection in low-SNR environments.
An important component of a spoken term detection (STD) system is estimating confidence measures for hypothesised detections. A potential problem with the widely used lattice-based confidence estimation, however, is that confidence scores are treated uniformly for all search terms, regardless of how much the terms may differ in phonetic or linguistic properties. This problem is particularly evident for out-of-vocabulary (OOV) terms, which tend to exhibit high intra-term diversity. To address the impact of term diversity on confidence measures, we propose in this work a term-dependent normalisation technique which compensates for term diversity in confidence estimation. We first derive an evaluation-metric-oriented normalisation that optimises the evaluation metric by compensating for the diverse occurrence rates among terms, and then propose a linear bias compensation and a discriminative compensation to deal with the bias problem that is inherent in lattice-based confidence measurement and from which the Term Specific Threshold (TST) approach suffers. We tested the proposed technique on speech data from the multi-party meeting domain with two state-of-the-art STD systems, based on phonemes and words respectively. The experimental results demonstrate that the confidence normalisation approach leads to a significant performance improvement in STD, particularly for OOV terms with phoneme-based systems.
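The shape of a term-dependent normalisation can be sketched as follows. This is purely illustrative: the scaling function, the exponent, and the idea of dividing by a power of the occurrence rate are assumptions chosen to show the mechanism (per-term rescaling so rare terms are not judged on the same absolute scale as common ones), not the paper's actual formulas.

```python
# Hypothetical term-dependent confidence normalisation: rescale each
# raw lattice confidence by a factor derived from the term's estimated
# occurrence rate.  The formula below is an illustrative assumption.
def normalise(conf, occ_rate, alpha=0.5):
    """Compensate a raw confidence for the term's occurrence rate.

    occ_rate: estimated occurrence rate of the term (0 < occ_rate <= 1).
    alpha: strength of the compensation for rare terms.
    """
    return min(1.0, conf / (occ_rate ** alpha + 1e-9))

common = normalise(0.90, occ_rate=1.00)   # common term: nearly unchanged
rare = normalise(0.05, occ_rate=0.01)     # rare (e.g. OOV) term: boosted
```

The point of the sketch is only that the mapping from raw to normalised score depends on the term, which is what allows a single global threshold to work across terms with very different occurrence rates.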
Parkinson's disease is one of the most destructive diseases of the nervous system, and speech disorder is one of its typical symptoms. Approximately 90% of Parkinson's patients develop some degree of speech disorder, which affects speech function faster than any other subsystem of the body. Screening for Parkinson's disease by sound is a very effective method that has attracted a growing number of researchers over the past decade. Patients with Parkinson's disease can be identified by recording the sound of their pronunciation of words, extracting appropriate features, and identifying the disturbances in their voices. This paper proposes an improved genetic algorithm combined with a data augmentation method for Parkinson's speech signal recognition. Specifically, the method first extracts representative speech signal features through an L1-regularized SVM and then augments the representative feature data with the SMOTE algorithm. Both the original and augmented features are used to train an SVM classifier for speech signal recognition, and an improved genetic algorithm is applied to find the optimal parameters of the SVM. The effectiveness of the proposed model is demonstrated on the Parkinson's disease audio dataset from the UCI machine learning repository; compared with the most advanced methods, the proposed method has the best performance.
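The SMOTE augmentation step mentioned above boils down to interpolating between a minority-class sample and one of its neighbours. The sketch below shows that core operation in isolation; real SMOTE additionally selects the neighbour from the k nearest minority samples, which is omitted here.

```python
import random

# Minimal SMOTE-style sketch: create a synthetic minority-class sample
# by interpolating between a sample and a given neighbour.  Real SMOTE
# picks the neighbour among the k nearest minority samples.
def smote_sample(x, neighbour, rng=random):
    gap = rng.random()  # uniform in [0, 1)
    return [a + gap * (b - a) for a, b in zip(x, neighbour)]

rng = random.Random(0)  # seeded for reproducibility
synthetic = smote_sample([0.0, 0.0], [1.0, 2.0], rng)
# Each coordinate of `synthetic` lies on the segment between the parents.
```

Because the synthetic point lies on the line segment between two real minority samples, the augmented training set fills in the minority region of feature space instead of merely duplicating examples, which is what makes SMOTE useful before training the SVM.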
Funding: National Natural Science Foundation of China (No. 62001100).
Funding: This research was funded by the Deanship of Scientific Research, Najran University, Kingdom of Saudi Arabia, grant number NU/RC/SERC/11/5.
Funding: Supported by the National Natural Science Foundation of China (61271352).
Funding: Supported by the National Natural Science Foundation of China (61071215, 61271359, 61372146).
Funding: Supported by the Youth Fund Project of the National Natural Science Foundation of China under Grant 62002038.