Abstract: To prevent possible casualties and economic loss, accurate prediction of the Remaining Useful Life (RUL) is critical in rail prognostics and health management. However, traditional neural networks struggle to capture the long-term dependencies of time series when modeling long sequences of rail damage, owing to the coupling among multi-channel data from multiple sensors. This paper proposes a novel RUL prediction model with an enhanced pulse separable convolution to address this issue. First, a coding module based on the improved pulse separable convolutional network is established to effectively model the relationships within the data. To enhance the network, an alternate gradient back-propagation method is implemented, and an efficient channel attention (ECA) mechanism is developed to better emphasize useful pulse characteristics. Second, an optimized Transformer encoder is designed to serve as the backbone of the model. It can efficiently capture the relationships within and between the data at each time step of long, full-life-cycle time series. More importantly, the Transformer encoder is improved by integrating pulse maximum pooling to retain more pulse timing characteristics. Finally, based on the features of the preceding layers, the final predicted RUL value is produced, yielding an end-to-end solution. The empirical findings validate the efficacy of the proposed approach in forecasting rail RUL, surpassing various existing data-driven prognostic techniques. The proposed method also shows good generalization performance on the PHM2012 bearing dataset.
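Since this abstract centers on separable convolution, ECA, and a Transformer encoder backbone, a minimal PyTorch sketch of those three building blocks is given below. The layer sizes, kernel widths, and overall wiring are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch: depthwise-separable 1-D convolution + ECA + Transformer encoder.
# All hyperparameters here are assumptions for illustration only.
import torch
import torch.nn as nn


class ECA1d(nn.Module):
    """Efficient Channel Attention for 1-D feature maps (B, C, T)."""

    def __init__(self, channels: int, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):
        w = self.pool(x)                      # (B, C, 1) channel descriptor
        w = self.conv(w.transpose(1, 2))      # 1-D conv across channels
        w = torch.sigmoid(w.transpose(1, 2))  # (B, C, 1) attention weights
        return x * w


class SeparableConvEncoder(nn.Module):
    """Depthwise-separable 1-D convolution, ECA, then a Transformer encoder head."""

    def __init__(self, in_ch: int = 8, hid: int = 64, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size=5, padding=2, groups=in_ch)
        self.pointwise = nn.Conv1d(in_ch, hid, kernel_size=1)
        self.eca = ECA1d(hid)
        layer = nn.TransformerEncoderLayer(d_model=hid, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(hid, 1)         # scalar RUL estimate

    def forward(self, x):                     # x: (B, C, T) multi-sensor window
        h = self.pointwise(self.depthwise(x))
        h = self.eca(h).transpose(1, 2)       # (B, T, hid)
        h = self.encoder(h)
        return self.head(h.mean(dim=1))       # pool over time, predict RUL


print(SeparableConvEncoder()(torch.randn(2, 8, 128)).shape)  # torch.Size([2, 1])
```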
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 52274159 and 52374165, received by E. Hu (https://www.nsfc.gov.cn/), and by the China National Coal Group Key Technology Project under Grant No. 20221CY001, received by Z. Guan and E. Hu (https://www.chinacoal.com/).
Abstract: In the coal mining industry, the gangue separation phase poses a key challenge due to the high visual similarity between coal and gangue. Recently, separation methods have become more intelligent and efficient, using new technologies and applying different features for recognition. One such method exploits the difference in substance density, leading to excellent coal/gangue recognition. Therefore, this study uses density differences to distinguish coal from gangue by performing volume prediction on the samples. Each training sample consists of 3-side images as input, with volume and weight as the ground truth for the classification. The prediction process relies on a convolutional neural network (CGVP-CNN) model that receives a 3-side image as input and then extracts the features needed to estimate an approximation of the volume. The classification was comparatively performed via ten different classifiers, namely, K-Nearest Neighbors (KNN), Linear Support Vector Machines (Linear SVM), Radial Basis Function (RBF) SVM, Gaussian Process, Decision Tree, Random Forest, Multi-Layer Perceptron (MLP), Adaptive Boosting (AdaBoost), Naive Bayes, and Quadratic Discriminant Analysis (QDA). After several experiments on testing and training data, the results yield classification accuracies of 100%, 92%, 95%, 96%, 100%, 100%, 100%, 96%, 81%, and 92%, respectively. The tests reveal the best timing with KNN, which maintained an accuracy level of 100%. Assessing the model's generalization capability on new data is essential to ensure its efficiency, so the model generalization was measured with a cross-validation experiment. The dataset was split based on the volume values to test generalization not only on new images within the same volume range but also on volumes outside the trained range. The predicted volume values were then passed to the group of classifiers, where the reported classification accuracies were 100%, 100%, 100%, 98%, 88%, 87%, 100%, 87%, 97%, and 100%, respectively. Although obtaining a classification with high accuracy is the main motive, this work also achieves a remarkable reduction in data preprocessing time compared to related works. The CGVP-CNN model reduces the data preprocessing time of previous works to 0.017 s while maintaining high classification accuracy using the estimated volume value.
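For the classifier-comparison stage described above, the sketch below shows how a predicted volume plus a measured weight could be fed to the same group of scikit-learn classifiers. The synthetic density-based features and all numeric values are assumptions used only to make the example runnable; they are not the paper's data.

```python
# Minimal sketch of the classifier-comparison stage on synthetic, density-like features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

rng = np.random.default_rng(0)
volume = rng.uniform(50, 500, size=400)                  # stand-in for CNN-predicted volume (cm^3)
density = np.where(rng.random(400) < 0.5, 1.4, 2.2)      # assumed coal vs. gangue densities (g/cm^3)
weight = volume * density * rng.normal(1.0, 0.03, 400)   # noisy weight measurement
X = np.column_stack([volume, weight])
y = (density > 1.8).astype(int)                          # 1 = gangue, 0 = coal

classifiers = {
    "KNN": KNeighborsClassifier(),
    "Linear SVM": SVC(kernel="linear"),
    "RBF SVM": SVC(kernel="rbf"),
    "Gaussian Process": GaussianProcessClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Random Forest": RandomForestClassifier(),
    "MLP": MLPClassifier(max_iter=2000),
    "AdaBoost": AdaBoostClassifier(),
    "Naive Bayes": GaussianNB(),
    "QDA": QuadraticDiscriminantAnalysis(),
}

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
for name, clf in classifiers.items():
    print(f"{name:>16s}: {clf.fit(X_tr, y_tr).score(X_te, y_te):.2f}")
```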
Funding: Supported by the National Natural Science Foundation of China (61903336, 61976190) and the Natural Science Foundation of Zhejiang Province (LY21F030015).
Abstract: Background: The use of remote photoplethysmography (rPPG) to estimate blood volume pulse in a non-contact manner has been an active research topic in recent years. Existing methods are primarily based on a single-scale region of interest (ROI). However, some noise signals that are not easily separated in a single-scale space can be easily separated in a multi-scale space. In addition, existing spatiotemporal networks mainly focus on local spatiotemporal information and do not emphasize temporal information, which is crucial in pulse extraction problems, resulting in insufficient spatiotemporal feature modeling. Methods: We propose a multi-scale facial video pulse extraction network based on separable spatiotemporal convolution (SSTC) and dimension separable attention (DSAT). First, to address the limitation of a single-scale ROI, we construct a multi-scale feature space for initial signal separation. Second, SSTC and DSAT are designed for efficient spatiotemporal correlation modeling, which increases the information interaction between the long-span time and space dimensions and places more emphasis on temporal features. Results: The signal-to-noise ratio (SNR) of the proposed network reaches 9.58 dB on the PURE dataset and 6.77 dB on the UBFC-rPPG dataset, outperforming state-of-the-art algorithms. Conclusions: The results show that fusing multi-scale signals yields better results than methods based only on single-scale signals. The proposed SSTC and dimension-separable attention mechanism contribute to more accurate pulse signal extraction.
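As a rough illustration of the separable spatiotemporal convolution idea, the sketch below factorizes a 3-D convolution over a video clip into a spatial step and a temporal step. The channel widths and kernel sizes are assumptions, and the multi-scale ROI construction and the DSAT module are not reproduced.

```python
# Minimal sketch of a separable spatiotemporal convolution block (assumed sizes).
import torch
import torch.nn as nn


class SeparableSpatioTemporalConv(nn.Module):
    """Factorizes a 3-D convolution into a spatial (1,k,k) and a temporal (k,1,1) step."""

    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2), bias=False)
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(k // 2, 0, 0), bias=False)
        self.norm = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):                 # x: (B, C, T, H, W) facial video clip
        return self.act(self.norm(self.temporal(self.spatial(x))))


clip = torch.randn(1, 3, 64, 36, 36)                      # 64 frames of a 36x36 face ROI
print(SeparableSpatioTemporalConv(3, 16)(clip).shape)     # torch.Size([1, 16, 64, 36, 36])
```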
Funding: Supported by the Hunan Provincial Natural Science Foundation of China (2021JJ50074), the Scientific Research Fund of Hunan Provincial Education Department (19B082), the Science and Technology Development Center of the Ministry of Education-New Generation Information Technology Innovation Project (2018A02020), the Science Foundation of Hengyang Normal University (19QD12), the Science and Technology Plan Project of Hunan Province (2016TP1020), the Subject Group Construction Project of Hengyang Normal University (18XKQ02), the Application-Oriented Special Disciplines, Double First Class University Project of Hunan Province (Xiangjiaotong [2018] 469), the Hunan Province Special Funds of Central Government for Guiding Local Science and Technology Development (2018CT5001), and the First Class Undergraduate Major in Hunan Province, Internet of Things Major (Xiangjiaotong [2020] 248, No. 288).
Abstract: The accurate and automatic segmentation of retinal vessels from fundus images is critical for the early diagnosis and prevention of many eye diseases, such as diabetic retinopathy (DR). Existing retinal vessel segmentation approaches based on convolutional neural networks (CNNs) have achieved remarkable effectiveness. Here, we extend a retinal vessel segmentation model with low complexity and high performance based on U-Net, which is one of the most popular architectures. In view of the excellent performance of depth-wise separable convolution, we introduce it to replace the standard convolutional layer, reducing the complexity of the proposed model by decreasing the number of parameters and calculations required. To ensure performance while lowering redundant parameters, we integrate the pre-trained MobileNet V2 into the encoder. Then, a feature fusion residual module (FFRM) is designed to facilitate complementary strengths by enhancing the effective fusion between adjacent levels, which alleviates the extraneous clutter introduced by direct fusion. Finally, we provide detailed comparisons between the proposed SepFE and U-Net on three mainstream retinal image datasets (DRIVE, STARE, and CHASEDB1). The results show that the number of SepFE parameters is only 3% of that of U-Net, the FLOPs are only 8% of those of U-Net, and better segmentation performance is obtained. The superiority of SepFE is further demonstrated through comparisons with other advanced methods.
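The depth-wise separable replacement for a standard convolutional layer mentioned above can be sketched as follows. The BatchNorm/ReLU ordering is an assumption rather than the exact SepFE configuration; the parameter count at the end simply illustrates why the replacement shrinks the model.

```python
# Minimal sketch of a depthwise separable 2-D convolution block (assumed norm/activation).
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


# Parameter comparison against a standard 3x3 convolution with the same shape.
std = nn.Conv2d(64, 128, 3, padding=1, bias=False)
sep = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))   # 73728 vs. 9024: roughly 8x fewer parameters
```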
Abstract: Convolutional neural networks (CNNs) are widely used in image classification tasks, but their increasing model size and computation make them challenging to implement on embedded systems with constrained hardware resources. To address this issue, the MobileNetV1 network was developed, which employs depthwise convolution to reduce network complexity. MobileNetV1 uses a stride of 2 in several convolutional layers to decrease the spatial resolution of feature maps, thereby lowering computational costs. However, this stride setting can lead to a loss of spatial information, particularly affecting the detection and representation of smaller objects or finer details in images. To maintain the trade-off between complexity and model performance, a lightweight convolutional neural network with hierarchical multi-scale feature fusion based on the MobileNetV1 network is proposed. The network consists of two main subnetworks. The first subnetwork uses a depthwise dilated separable convolution (DDSC) layer to learn image features with fewer parameters, which results in a lightweight and computationally inexpensive network. Furthermore, the depthwise dilated convolution in the DDSC layer effectively expands the receptive field of the filters, allowing them to incorporate a larger context. The second subnetwork is a hierarchical multi-scale feature fusion (HMFF) module that uses a parallel multi-resolution branch architecture to process the input feature map and extract multi-scale feature information from the input image. Experimental results on the CIFAR-10, Malaria, and KvasirV1 datasets demonstrate that the proposed method is efficient, reducing the network parameters and computational cost by 65.02% and 39.78%, respectively, while maintaining network performance compared to the MobileNetV1 baseline.
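A minimal sketch of a depthwise dilated separable convolution (DDSC) layer is given below. A single dilation rate per layer is assumed, and the HMFF module is not reproduced; the point is only that dilation widens the receptive field without adding parameters.

```python
# Minimal sketch of a depthwise dilated separable convolution (DDSC) layer (assumed rates).
import torch
import torch.nn as nn


class DDSC(nn.Module):
    """Depthwise convolution with dilation, followed by a 1x1 pointwise convolution."""

    def __init__(self, in_ch, out_ch, k=3, dilation=2):
        super().__init__()
        pad = dilation * (k - 1) // 2          # keep spatial size for odd k
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=pad,
                                   dilation=dilation, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))


x = torch.randn(1, 32, 32, 32)
print(DDSC(32, 64, dilation=2)(x).shape)   # torch.Size([1, 64, 32, 32])
# A 3x3 kernel with dilation 2 covers a 5x5 receptive field at the parameter cost of a 3x3 one.
```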
Abstract: Among the most evident clinical signs of dementia, or the Behavioral and Psychological Symptoms of Dementia (BPSD), are a lack of emotional expression, an increased frequency of negative emotions, and emotional instability. Observing the reduction of BPSD in dementia through emotions can be considered effective and is widely used in the field of non-pharmacological therapy. This article verifies whether an image recognition artificial intelligence (AI) system can correctly reflect the emotional expression of elderly people with dementia, through a questionnaire survey of three professional elderly-care nursing staff. ANOVA (sig. = 0.50) is used to determine that the judgments given by the nursing staff show no obvious deviation, and Kendall's test (0.722**) and Spearman's test (0.863**) are then used to verify that the emotion recognition system and the nursing staff judge severity consistently. This implies the usability of the tool. It can also be expected to be applied further in research related to emotion detection for elderly people with BPSD.
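The statistical checks reported above map onto standard SciPy calls, as sketched below. The rating values are made-up 5-point severity scores used only to show which functions apply; they are not the study's data.

```python
# Minimal sketch of the agreement analysis with synthetic severity ratings.
import numpy as np
from scipy import stats

system = np.array([3, 4, 2, 5, 1, 4, 3, 2, 5, 4])        # AI emotion-severity ratings (assumed)
staff_a = np.array([3, 4, 2, 5, 2, 4, 3, 2, 5, 3])
staff_b = np.array([3, 5, 2, 4, 1, 4, 3, 2, 5, 4])
staff_c = np.array([2, 4, 3, 5, 1, 4, 3, 2, 5, 4])
staff_mean = np.mean([staff_a, staff_b, staff_c], axis=0)

# One-way ANOVA across the three raters: a non-significant result suggests
# no systematic deviation between the nursing-staff judgments.
print(stats.f_oneway(staff_a, staff_b, staff_c))

# Rank correlations between the AI system and the (averaged) staff ratings.
print(stats.kendalltau(system, staff_mean))
print(stats.spearmanr(system, staff_mean))
```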
Funding: Supported by a grant from the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20060280003) and the Shanghai Leading Academic Discipline Project (Project No. T0102).
Abstract: This letter deals with the frequency-domain Blind Source Separation of Convolutive Mixtures (CMBSS). From the frequency representation of the "overlap and save" scheme, a Weighted General Discrete Fourier Transform (WGDFT) is derived to replace the traditional Discrete Fourier Transform (DFT). The mixing matrix on each frequency bin can be estimated more precisely from WGDFT coefficients than from DFT coefficients, which improves separation performance. Simulation results verify the validity of the WGDFT for frequency-domain blind source separation of convolutive mixtures.
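As background for the frequency representation mentioned above, the sketch below implements standard overlap-save block filtering with an ordinary DFT. The WGDFT weighting itself is not reproduced, so this only shows the baseline block/DFT machinery the letter refines; block and filter lengths are assumptions.

```python
# Minimal sketch of classic overlap-save block processing with an ordinary DFT.
import numpy as np


def overlap_save_filter(x, h, fft_len=256):
    """Filter x with h using overlap-save block convolution in the DFT domain."""
    m = len(h)
    hop = fft_len - m + 1                      # new samples consumed per block
    H = np.fft.fft(h, fft_len)
    pad_tail = (-len(x)) % hop                 # zero-pad so the blocks tile the signal
    x_pad = np.concatenate([np.zeros(m - 1), x, np.zeros(pad_tail)])
    y = []
    for start in range(0, len(x_pad) - fft_len + 1, hop):
        block = x_pad[start:start + fft_len]
        y_block = np.fft.ifft(np.fft.fft(block) * H)
        y.append(y_block[m - 1:].real)         # discard the aliased first m-1 samples
    return np.concatenate(y)[:len(x)]


rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = rng.standard_normal(64)
y_fast = overlap_save_filter(x, h)
y_ref = np.convolve(x, h)[:len(x)]
print(np.max(np.abs(y_fast - y_ref)))          # round-off level: block DFT filtering matches direct convolution
```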
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61001106 and the National Key Basic Research Program of China (973 Program) under Grant No. 2009CB320400.
Abstract: In this paper, a Maximum Likelihood (ML) approach, implemented by the Expectation-Maximization (EM) algorithm, is proposed for blind separation of convolutively mixed discrete sources. To carry out the expectation step of the EM algorithm with a lower computational load, an algorithm named the Iterative Maximum Likelihood algorithm (IML) is proposed to calculate the likelihood and recover the source signals. An important feature of the ML approach is its robust performance in noisy environments, achieved by treating the covariance matrix of the additive Gaussian noise as a parameter. Another striking feature of the ML approach is that it can separate more sources than sensors by exploiting the finite-alphabet property of the sources. Simulation results show that the proposed ML approach works well in both determined and underdetermined mixtures. Furthermore, the performance of the proposed ML algorithm is close to that obtained with perfect knowledge of the channel filters.
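A much-simplified illustration of the finite-alphabet ML idea follows: a known instantaneous mixing matrix and noise covariance are assumed (in place of the paper's estimated convolutive channel and EM iterations), and every candidate symbol vector is scored by its Gaussian likelihood. This is the mechanism that allows more sources than sensors to be resolved.

```python
# Simplified illustration of finite-alphabet ML decisions under a known mixing matrix.
import itertools
import numpy as np

rng = np.random.default_rng(1)
alphabet = np.array([-1.0, 1.0])                  # BPSK symbols
n_src, n_obs, n_samp = 3, 2, 200                  # more sources than sensors (underdetermined)
A = rng.standard_normal((n_obs, n_src))           # assumed-known mixing matrix
S = rng.choice(alphabet, size=(n_src, n_samp))    # true source symbols
noise_cov = 0.02 * np.eye(n_obs)                  # assumed-known noise covariance
X = A @ S + rng.multivariate_normal(np.zeros(n_obs), noise_cov, n_samp).T

# Enumerate every possible source-symbol vector and pick the one maximizing the
# Gaussian likelihood of the observation; this is what the finite alphabet buys.
candidates = np.array(list(itertools.product(alphabet, repeat=n_src))).T  # (n_src, 2**n_src)
inv_cov = np.linalg.inv(noise_cov)
S_hat = np.empty_like(S)
for t in range(n_samp):
    resid = X[:, [t]] - A @ candidates                     # residual for each hypothesis
    neg_log_lik = np.einsum("ij,ij->j", resid, inv_cov @ resid)
    S_hat[:, t] = candidates[:, np.argmin(neg_log_lik)]

print("symbol error rate:", np.mean(S_hat != S))           # small when A separates the alphabet well
```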
Funding: Supported by the Specialized Research Fund for the Doctoral Program of Higher Education of China (No. 20060280003) and the Shanghai Leading Academic Discipline Project (T0102).
Abstract: Most existing algorithms for blind source separation are limited by the assumption that the sources are statistically independent. However, in many practical applications, the source signals are nonnegative and mutually statistically dependent. When the observations are nonnegative linear combinations of nonnegative sources, the correlation coefficients of the observations are larger than those of the source signals. In this letter, a novel Nonnegative Matrix Factorization (NMF) algorithm with least-correlated component constraints for blind separation of convolutively mixed sources is proposed. The algorithm relaxes the source independence assumption and has low-complexity algebraic computations. Simulation results on blind source separation, including real face image data, indicate that the sources can be successfully recovered with the algorithm.
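For context, the sketch below runs plain scikit-learn NMF on a synthetic nonnegative mixture. It is only the unconstrained baseline under the stated assumptions; the letter's least-correlated component constraint is not included.

```python
# Minimal sketch: plain NMF on a nonnegative linear mixture (no correlation constraint).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
S = rng.random((3, 500))                   # nonnegative, possibly correlated sources
A = rng.random((6, 3))                     # nonnegative mixing matrix
X = A @ S                                  # nonnegative observations

model = NMF(n_components=3, init="nndsvda", max_iter=1000, random_state=0)
A_hat = model.fit_transform(X)             # estimated mixing matrix (6 x 3)
S_hat = model.components_                  # estimated sources (3 x 500)

print("relative reconstruction error:",
      np.linalg.norm(X - A_hat @ S_hat) / np.linalg.norm(X))
# The recovered components carry the usual permutation/scaling ambiguity, and plain
# NMF does not enforce the low-correlation constraint described in the abstract above.
```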
Abstract: Birds play a crucial role in maintaining ecological balance, making bird recognition technology a hot research topic. Traditional recognition methods have not achieved high accuracy in bird identification. This paper proposes an improved ResNet18 model to enhance the recognition rate of local bird species in Yunnan. First, a dataset containing five local bird species of Yunnan was established: C. amherstiae, T. caboti, Syrmaticus humiae, Polyplectron bicalcaratum, and Pucrasia macrolopha. The improved ResNet18 model was then used to identify these species. This method replaces traditional convolution with depthwise separable convolution and introduces an SE (Squeeze-and-Excitation) module to improve the model's efficiency and accuracy. Compared to the traditional ResNet18 model, this improved model excels in implementing a wild bird classification solution, significantly reducing computational overhead and accelerating model training on low-power, lightweight hardware. Experimental analysis shows that the improved ResNet18 model achieved an accuracy of 98.57%, compared to 98.26% for the traditional 18-layer Residual Network (ResNet18) model.
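A minimal sketch of a Squeeze-and-Excitation block of the kind inserted into the improved ResNet18 is given below. The reduction ratio of 16 and the insertion points within the residual blocks are assumptions.

```python
# Minimal sketch of a Squeeze-and-Excitation (SE) channel-attention block.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)            # global spatial average per channel
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                                  # x: (B, C, H, W)
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                       # reweight channels


feat = torch.randn(2, 64, 56, 56)
print(SEBlock(64)(feat).shape)                             # torch.Size([2, 64, 56, 56])
```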
Abstract: To improve speech separation, visual signals can be used as auxiliary information in addition to the mixed speech signal. This multimodal modeling approach, which fuses visual and audio signals, has been shown to effectively improve speech separation performance and opens new possibilities for the speech separation task. To better capture the long-term dependencies in visual and audio features and to strengthen the network's understanding of the input context, this paper proposes a time-domain audio-visual fusion speech separation model based on one-dimensional dilated convolution and the Transformer. Applying the traditional frequency-domain audio-visual fusion approach in the time domain avoids the information loss and phase reconstruction problems caused by time-frequency transforms. The proposed architecture consists of four modules: a visual feature extraction network that extracts lip embedding features from video frames; an audio encoder that converts the mixed speech into a feature representation; a multimodal separation network, composed mainly of an audio subnetwork, a video subnetwork, and a Transformer network, which uses the visual and audio features to perform speech separation; and an audio decoder that reconstructs the separated features into clean speech. Experiments use a two-speaker mixture dataset generated from the LRS2 dataset. The results show that the proposed network achieves 14.0 dB in Scale-Invariant Signal-to-Noise Ratio improvement (SI-SNRi) and 14.3 dB in Signal-to-Distortion Ratio improvement (SDRi), a clear performance gain over audio-only separation models and a general audio-visual fusion separation model.
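The time-domain separation network described above relies on 1-D dilated convolution blocks over the encoded audio. The sketch below shows one such residual block with an exponentially growing dilation stack; the channel widths, dilation schedule, and activation choices are assumed from common Conv-TasNet-style defaults rather than taken from the paper.

```python
# Minimal sketch of a residual 1-D dilated (depthwise separable) convolution block.
import torch
import torch.nn as nn


class DilatedConvBlock(nn.Module):
    """Residual 1-D block: 1x1 conv -> depthwise dilated conv -> 1x1 conv."""

    def __init__(self, channels=256, hidden=512, k=3, dilation=1):
        super().__init__()
        pad = dilation * (k - 1) // 2
        self.net = nn.Sequential(
            nn.Conv1d(channels, hidden, 1), nn.PReLU(),
            nn.Conv1d(hidden, hidden, k, padding=pad, dilation=dilation, groups=hidden),
            nn.PReLU(),
            nn.Conv1d(hidden, channels, 1),
        )

    def forward(self, x):                     # x: (B, C, T) encoded mixture features
        return x + self.net(x)                # residual connection keeps sequence length


# Stack blocks with exponentially growing dilation to widen the temporal context.
stack = nn.Sequential(*[DilatedConvBlock(dilation=2 ** i) for i in range(4)])
print(stack(torch.randn(1, 256, 400)).shape)  # torch.Size([1, 256, 400])
```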