Funding: Funded by the Natural Science Foundation of China (Grant No. 202204120017), the Autonomous Region Science and Technology Program (Grant No. 2022B01008-2), and the Autonomous Region Science and Technology Program (Grant No. 2020A02001-1).
Abstract: With the development of social media and the prevalence of mobile devices, an increasing number of people use social media platforms to express their opinions and attitudes, leading to many online controversies. These controversies can severely threaten social stability, making automatic controversy detection particularly necessary. Most current controversy detection methods focus on mining features from text semantics and propagation structures. However, these methods have two drawbacks: 1) limited ability to capture structural features and failure to learn deeper structural features, and 2) neglect of topic information and ineffective use of topic features. In light of these observations, this paper proposes a social media controversy detection method called the Dual Feature Enhanced Graph Convolutional Network (DFE-GCN). The method explores structural information at different scales from global and local perspectives to capture deeper structural features, enhancing the expressive power of structural features. Furthermore, to strengthen the influence of topic information, attention mechanisms are used to enhance topic features after each graph convolutional layer, making effective use of topic information. The method was validated on two public datasets, and the experimental results demonstrate that it achieves state-of-the-art performance compared with baseline methods. On the Weibo and Reddit datasets, accuracy improves by 5.92% and 3.32%, respectively, and the F1 score improves by 1.99% and 2.17%, demonstrating the positive impact of the enhanced structural and topic features on controversy detection.
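The core mechanism this abstract describes, a graph convolution followed by attention-based enhancement of topic features, can be sketched in plain Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the dot-product attention scoring, the matrix shapes, and the function names are all hypothetical.

```python
import math

def matmul(A, B):
    # naive dense matrix product over lists of lists
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def gcn_layer(A_hat, H, W):
    # one graph-convolution step: ReLU(A_hat · H · W),
    # where A_hat is the normalized adjacency matrix
    Z = matmul(matmul(A_hat, H), W)
    return [[max(0.0, v) for v in row] for row in Z]

def topic_attention(H, topic):
    # score each node's features against a topic vector,
    # softmax the scores, and reweight the node features
    scores = [sum(h * t for h, t in zip(row, topic)) for row in H]
    m = max(scores)
    exp_s = [math.exp(s - m) for s in scores]
    total = sum(exp_s)
    alpha = [e / total for e in exp_s]
    return [[a * v for v in row] for a, row in zip(alpha, H)]
```

Stacking `gcn_layer` and `topic_attention` alternately mirrors the "enhance topic features after each graph convolutional layer" idea at toy scale.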
Funding: This research was supported by the Key Research and Development Program of Shaanxi Province (2024GX-YBXM-010) and the National Science Foundation of China (61972302).
Abstract: The collective Unmanned Weapon System-of-Systems (UWSOS) network represents a fundamental element of modern warfare, characterized by a diverse array of unmanned combat platforms interconnected through heterogeneous network architectures. Despite its strategic importance, the UWSOS network is highly susceptible to hostile infiltrations, which significantly impede its battlefield recovery capabilities. Existing methods for enhancing network resilience focus predominantly on basic graph relationships, neglecting the crucial higher-order dependencies among nodes needed to capture multi-hop meta-paths within the UWSOS. To address these limitations, we propose the Enhanced-Resilience Multi-Layer Attention Graph Convolutional Network (E-MAGCN), designed to augment the adaptability of the UWSOS. Our approach employs BERT to extract semantic insights from nodes and edges, refining feature representations by leveraging the various node and edge categories. Additionally, E-MAGCN integrates a regularization-based multi-layer attention mechanism and a semantic node fusion algorithm within the Graph Convolutional Network (GCN) framework. In extensive simulation experiments, our model demonstrates an improvement in resilience performance ranging from 1.2% to 7% over existing algorithms.
Funding: Supported by the General Project of Philosophy and Social Science Research in Colleges and Universities in Jiangsu Province (2022SJYB0712), the Research Development Fund for Young Teachers of Chengxian College of Southeast University (z0037), and the Special Project of Ideological and Political Education Reform and Research Course (yjgsz2206).
Abstract: This study aims to reduce the interference of ambient noise in mobile communication, improve the accuracy and authenticity of information transmitted by sound, and guarantee the accuracy of voice information delivered by mobile communication. First, the principles and techniques of speech enhancement are analyzed, and a fast lateral recursive least squares (FLRLS) method is adopted to process the sound data. Then, a convolutional neural network (CNN)-based noise recognition algorithm (NR-CNN) and a speech enhancement model are proposed. Finally, experiments are designed to verify the performance of the proposed algorithm and model. The experimental results show that the noise classification accuracy of the NR-CNN algorithm is higher than 99.82%, and the recall and F1 value are both higher than 99.92. The proposed sound enhancement model can effectively enhance the original sound in the presence of noise interference. After the CNN is incorporated, the average perceptual quality evaluation score over all noisy sounds improves by over 21% compared with the traditional noise reduction method. The proposed algorithm can adapt to a variety of voice environments, can simultaneously enhance and denoise many different types of voice signals, and processes them better than traditional sound enhancement models. In addition, the sound distortion index of the proposed speech enhancement model is lower than that of the control group, indicating that adding the CNN is less likely to cause sound signal distortion across sound environments and shows superior robustness. In summary, the proposed CNN-based speech enhancement model shows significant sound enhancement effects, stable performance, and strong adaptability. This study provides a reference and basis for research applying neural networks to speech enhancement.
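The abstract names a "fast lateral recursive least squares" method without giving details; what follows is a sketch of the standard RLS filter update such a method presumably builds on, not the paper's FLRLS variant. The function name, the forgetting factor default, and the toy dimensions are assumptions.

```python
def rls_update(w, P, x, d, lam=0.99):
    """One recursive-least-squares step.

    w: current filter weights, P: inverse correlation matrix,
    x: input vector, d: desired (clean) sample, lam: forgetting factor.
    """
    n = len(x)
    # gain vector k = P x / (lam + x^T P x)
    Px = [sum(P[i][j] * x[j] for j in range(n)) for i in range(n)]
    denom = lam + sum(x[i] * Px[i] for i in range(n))
    k = [v / denom for v in Px]
    # a-priori error between desired and predicted sample
    e = d - sum(w[i] * x[i] for i in range(n))
    # weight and inverse-correlation updates
    w = [w[i] + k[i] * e for i in range(n)]
    P = [[(P[i][j] - k[i] * Px[j]) / lam for j in range(n)] for i in range(n)]
    return w, P, e
```

Repeated calls drive the prediction error toward zero on a stationary signal, which is what makes RLS attractive for tracking noise statistics.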
Funding: Supported by the National Natural Science Foundation of China (No. 91959118), the Science and Technology Program of Guangzhou, China (No. 201704020016), the SKY Radiology Department International Medical Research Foundation of China (No. Z-2014-07-1912-15), and the Clinical Research Foundation of the 3rd Affiliated Hospital of Sun Yat-Sen University (No. YHJH201901).
Abstract: BACKGROUND: The accurate classification of focal liver lesions (FLLs) is essential to properly guide treatment options and predict prognosis. Dynamic contrast-enhanced computed tomography (DCE-CT) is still the cornerstone of the exact classification of FLLs owing to its noninvasive nature, high scanning speed, and high density resolution. Since their recent development, convolutional neural network-based deep learning techniques have been recognized as having high potential for image recognition tasks. AIM: To develop and evaluate an automated multiphase convolutional dense network (MP-CDN) for classifying FLLs on multiphase CT. METHODS: A total of 517 FLLs scanned on a 320-detector CT scanner using a four-phase DCE-CT imaging protocol (precontrast, arterial, portal venous, and delayed phases) from 2012 to 2017 were retrospectively enrolled. FLLs were classified into four categories: category A, hepatocellular carcinoma (HCC); category B, liver metastases; category C, benign non-inflammatory FLLs, including hemangiomas, focal nodular hyperplasias, and adenomas; and category D, hepatic abscesses. Each category was split into a training set and a test set in an approximate 8:2 ratio. An MP-CDN classifier with sequential input of the four-phase CT images was developed to automatically classify FLLs. The classification performance of the model was evaluated on the test set; accuracy and specificity were calculated from the confusion matrix, and the area under the receiver operating characteristic curve (AUC) was calculated from the softmax probability output by the last layer of the MP-CDN. RESULTS: A total of 410 FLLs were used for training and 107 for testing. The mean classification accuracy on the test set was 81.3% (87/107). The accuracy/specificity of distinguishing each category from the others were 0.916/0.964, 0.925/0.905, 0.860/0.918, and 0.925/0.963 for HCC, metastases, benign non-inflammatory FLLs, and abscesses, respectively. The AUC (95% confidence interval) for differentiating each category from the others was 0.92 (0.837-0.992), 0.99 (0.967-1.00), 0.88 (0.795-0.955), and 0.96 (0.914-0.996) for HCC, metastases, benign non-inflammatory FLLs, and abscesses, respectively. CONCLUSION: The MP-CDN accurately classified FLLs detected on four-phase CT as HCC, metastases, benign non-inflammatory FLLs, and hepatic abscesses and may assist radiologists in identifying the different types of FLLs.
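The per-category evaluation described in METHODS, accuracy and specificity read off a one-vs-rest confusion matrix, can be sketched as follows. The function name and the toy labels are illustrative, not taken from the paper.

```python
def one_vs_rest_metrics(y_true, y_pred, cls):
    # count the one-vs-rest confusion-matrix cells for class `cls`
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    tn = sum(t != cls and p != cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, specificity
```

Running this once per category (A through D) yields the accuracy/specificity pairs reported in RESULTS.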
Abstract: Devices of the recent era generate vast amounts of digital video, and in recent years people have been forging videos for use as evidence in courts of justice. Many forensic detection studies have been presented, but they provide limited accuracy. This paper proposes a novel forgery detection technique for the image frames of videos using an enhanced Convolutional Neural Network (CNN). In the initial stage, the input video is taken from the dataset and converted into image frames. Next, pre-sampling is performed using the Adaptive Rood Pattern Search (ARPS) algorithm to discard useless frames. In the next stage, preprocessing is performed to enhance the image frames. Then, face detection is carried out on each image using the Viola-Jones algorithm. Finally, the improved Crow Search Algorithm (ICSA) is used to select the extracted features, which are input to the Enhanced Convolutional Neural Network (ECNN) classifier to detect forged image frames. In experiments, the proposed system achieved 97.21% accuracy, higher than other existing methods.
Abstract: In recent years, wearable-device-based Human Activity Recognition (HAR) models have received significant attention. Previously developed HAR models use hand-crafted features to recognize human activities, leading to the extraction of only basic features. The data captured by wearable sensors contain advanced features that can be analyzed by deep learning algorithms to enhance the detection and recognition of human actions. Poor lighting and limited sensor capabilities can impact data quality, making the recognition of human actions a challenging task. Unimodal HAR approaches are not suitable in a real-time environment. Therefore, an updated HAR model is developed using multiple types of data and an advanced deep learning approach. First, the required signals and sensor data are accumulated from standard databases. From these signals, wave features are retrieved. The extracted wave features and sensor data are then given as input for recognizing human activity. An Adaptive Hybrid Deep Attentive Network (AHDAN) is developed by combining a 1D Convolutional Neural Network (1DCNN) with a Gated Recurrent Unit (GRU) for the human activity recognition process. Additionally, the Enhanced Archerfish Hunting Optimizer (EAHO) is used to fine-tune the network parameters to enhance recognition. An experimental evaluation against various deep learning networks and heuristic algorithms confirms the effectiveness of the proposed HAR model. The EAHO-based HAR model outperforms traditional deep learning networks, with an accuracy of 95.36, a recall of 95.25, a specificity of 95.48, and a precision of 95.47. The results prove that the developed model recognizes human actions effectively while taking less time. Additionally, it reduces computational complexity and overfitting through the use of an optimization approach.
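The GRU half of the 1DCNN+GRU hybrid can be illustrated with a single scalar GRU cell. This is a toy, single-feature sketch of the standard GRU equations, not the AHDAN architecture; the weight-tuple layout is an assumption for compactness.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, Wz, Wr, Wh):
    # each W* = (input weight, recurrent weight, bias), scalar case
    z = sigmoid(Wz[0] * x + Wz[1] * h + Wz[2])          # update gate
    r = sigmoid(Wr[0] * x + Wr[1] * h + Wr[2])          # reset gate
    h_tilde = math.tanh(Wh[0] * x + Wh[1] * (r * h) + Wh[2])  # candidate state
    return (1.0 - z) * h + z * h_tilde                  # gated interpolation
```

Because the output is a convex combination of the previous state and a tanh candidate, the hidden state stays bounded in (-1, 1), which is what makes GRUs stable over long sensor sequences.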
Abstract: Facial Expression Recognition (FER) has been an interesting area of research wherever there is human-computer interaction. Human psychology, emotions, and behaviors can be analyzed through FER. Classifiers used in FER have performed well on unoccluded faces but have been found to be constrained on occluded faces. Recently, Deep Learning Techniques (DLT) have gained popularity in real-world applications, including the recognition of human emotions. The human face reflects emotional states and human intentions, and an expression is the most natural and powerful way of communicating non-verbally. Systems that mediate this communication between human and machine are termed Human Machine Interaction (HMI) systems. FER can improve HMI systems because human expressions convey useful information to an observer. This paper proposes a FER scheme called EECNN (Enhanced Convolution Neural Network with Attention mechanism) to recognize seven types of human emotion, with satisfying experimental results. The proposed EECNN achieved 89.8% accuracy in classifying the images.
Funding: Supported by the National Science Foundation under Grant No. 62066039.
Abstract: Recently, speech enhancement methods based on Generative Adversarial Networks (GANs) have achieved good performance on time-domain noisy signals. However, GAN training suffers from problems such as convergence difficulty and mode collapse. In this work, an end-to-end speech enhancement model based on the Wasserstein Generative Adversarial Network is proposed, with several improvements made to obtain faster convergence and better generated speech quality. Specifically, in the generator's encoder, each convolution layer adopts different convolution kernel sizes to obtain speech coding information at multiple scales; a gated linear unit is introduced to alleviate the vanishing gradient problem as network depth increases; the discriminator's gradient penalty is replaced with spectral normalization to accelerate the convergence of the model; and a hybrid penalty term composed of L1 regularization and a scale-invariant signal-to-distortion ratio is introduced into the generator's loss function to improve the quality of the generated speech. Experimental results on both the TIMIT corpus and a Tibetan corpus show that the proposed model significantly improves speech quality and accelerates convergence.
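The hybrid penalty term mentions a scale-invariant signal-to-distortion ratio (SI-SDR). Below is a pure-Python sketch of the standard SI-SDR definition; the paper's exact loss weighting and sign convention are not specified, so treat the function name and epsilon handling as assumptions.

```python
import math

def si_sdr(reference, estimate, eps=1e-8):
    # scale-invariant signal-to-distortion ratio in dB:
    # project the estimate onto the reference, then compare
    # target energy to residual-noise energy
    dot = sum(r * e for r, e in zip(reference, estimate))
    ref_energy = sum(r * r for r in reference) + eps
    scale = dot / ref_energy
    target = [scale * r for r in reference]
    noise = [e - t for e, t in zip(estimate, target)]
    t_energy = sum(t * t for t in target)
    n_energy = sum(n * n for n in noise) + eps
    return 10.0 * math.log10(t_energy / n_energy + eps)
```

Because the estimate is rescaled before comparison, a perfectly shaped but louder output scores just as well as an exact match, which is why SI-SDR suits generative enhancement losses.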
Abstract: To improve the accuracy of multi-view depth estimation, a multi-view depth estimation algorithm based on adaptive spatial feature enhancement is proposed. A multi-scale feature extraction module composed of an improved feature pyramid network (FPN) and adaptive space feature enhancement (ASFE) is designed to obtain multi-scale feature maps with global context and positional information. The depth map is refined through a residual learning network to prevent the blurred reconstructed edges caused by repeated convolution operations. A focal loss function is constructed from a classification perspective to strengthen the discriminative ability of the network model. Experimental results show that, compared with the CasMVSNet (Cascade MVSNet) algorithm on the DTU (Technical University of Denmark) dataset, the proposed algorithm reduces the overall accuracy error, running time, and GPU memory usage by 14.08%, 72.15%, and 4.62%, respectively. On the Tanks and Temples dataset, the model outperforms other algorithms on the overall evaluation metric Mean, demonstrating the effectiveness of the proposed multi-view depth estimation algorithm based on adaptive spatial feature enhancement.
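The focal loss this abstract builds its depth classification on can be sketched in its standard binary form. This is the generic definition, not the paper's multi-class variant, and the default gamma and alpha values are the commonly used ones, assumed here.

```python
import math

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # binary focal loss for one predicted probability p and label y in {0, 1}:
    # down-weights easy examples via the (1 - p_t)^gamma modulating factor
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(max(p_t, 1e-12))
```

With gamma = 0 the expression reduces to alpha-weighted cross-entropy; raising gamma makes confidently correct predictions contribute almost nothing, focusing training on hard pixels.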