Thoracic diseases pose significant risks to an individual's chest health and are among the most perilous medical conditions. They can impact either one or both lungs, severely impairing a person's ability to breathe normally. Notable examples include pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during the diagnostic process. Traditionally, the primary detection methods involve X-ray imaging or computed tomography (CT) scans. Nevertheless, due to the scarcity of proficient radiologists and the inherent similarities between these diseases, detection accuracy can be compromised, leading to imprecise or erroneous results. To address this challenge, scientists have turned to computer-based solutions, aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, utilizing single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. Consequently, this research adopts a multi-task deep learning approach that leverages CNNs to achieve improved classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. To assess the effectiveness of the trained model, it is rigorously validated using three different real-world datasets for training and testing.
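Of the two sharing schemes mentioned in the abstract above, hard parameter sharing is the more common: all tasks share one feature extractor and keep only small task-specific heads. A minimal NumPy sketch of the idea (layer sizes, the two-head setup, and all names are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class HardSharingNet:
    """One shared trunk, one classification head per task (hard sharing)."""
    def __init__(self, in_dim, hidden, classes_per_task, seed=0):
        rng = np.random.default_rng(seed)
        self.W_shared = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.heads = [rng.normal(0.0, 0.1, (hidden, c)) for c in classes_per_task]

    def forward(self, x):
        h = relu(x @ self.W_shared)          # parameters shared by all tasks
        return [h @ W for W in self.heads]   # task-specific logits

# Two hypothetical binary tasks, e.g. pneumonia and COVID-19 detection.
net = HardSharingNet(in_dim=64, hidden=32, classes_per_task=[2, 2])
logits = net.forward(np.ones((5, 64)))
```

In soft parameter sharing, by contrast, each task would keep its own trunk and the trunks would only be regularized toward each other.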
Building data-driven models using machine learning methods has gradually become a common approach for studying reservoir parameters. Among these methods, deep learning methods are highly effective. From the perspective of multi-task learning, this paper takes six types of logging data as inputs, namely acoustic logging (AC), gamma ray (GR), compensated neutron porosity (CNL), density (DEN), deep lateral resistivity (LLD), and shallow lateral resistivity (LLS), and three reservoir parameters as outputs, to build a porosity-saturation-permeability network (PSP-Net) that can predict porosity, saturation, and permeability values simultaneously. These logging data are obtained from 108 training wells in a medium-low permeability oilfield block in the western district of China. The PSP-Net method adopts a serial structure to realize transfer learning of reservoir-parameter characteristics. Compared with other existing methods, spanning academic exploration to simulated industrial application, the proposed method overcomes the disadvantages inherent in single-task reservoir-parameter prediction models, including a tendency to overfit and a heavy model-training workload. Additionally, the proposed method demonstrates good anti-overfitting and generalization capabilities and integrates professional knowledge and experience. In 37 test wells, compared with the existing method, the proposed method exhibited average error reductions of 10.44%, 27.79%, and 28.83% in porosity, saturation, and permeability calculation, respectively. The predicted and actual permeabilities are within one order of magnitude. Training PSP-Net is simpler and more convenient than the other single-task learning methods discussed in this paper. Furthermore, the findings of this paper can help in the re-examination of old oilfield wells and the completion of logging data.
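The serial structure described above can be pictured as a chain in which each later parameter head also consumes the earlier heads' predictions. A toy NumPy sketch (the tiny two-layer nets, sizes, and wiring are illustrative assumptions that only mirror the serial-transfer idea, not the published PSP-Net):

```python
import numpy as np

rng = np.random.default_rng(1)
N_LOGS = 6  # AC, GR, CNL, DEN, LLD, LLS

def mlp(x, W1, W2):
    return np.maximum(x @ W1, 0.0) @ W2

# Each stage's input dimension grows by one per upstream prediction it reuses.
weights = {name: (rng.normal(0.0, 0.1, (d, 16)), rng.normal(0.0, 0.1, (16, 1)))
           for name, d in [("porosity", N_LOGS),
                           ("saturation", N_LOGS + 1),
                           ("permeability", N_LOGS + 2)]}

def psp_forward(logs):
    poro = mlp(logs, *weights["porosity"])
    sat = mlp(np.hstack([logs, poro]), *weights["saturation"])
    perm = mlp(np.hstack([logs, poro, sat]), *weights["permeability"])
    return poro, sat, perm

poro, sat, perm = psp_forward(rng.normal(size=(10, N_LOGS)))
```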
A new microreactor with continuous serially connected micromixers (CSCM) was tailored for the coprecipitation process to synthesize Fe_(3)O_(4) nanoparticles. Numerical simulation reveals that the two types of CSCM microchannels (V-typed and U-typed) proposed in this work exhibited markedly better mixing performance than the zigzag and capillary microchannels due to the promotion of Dean vortices. Complete mixing was achieved in the V-typed microchannel in 2.7 s at an inlet Reynolds number of 27. Fe_(3)O_(4) nanoparticles synthesized in a planar glass microreactor with the V-typed microchannel, possessing an average size of 9.3 nm and exhibiting superparamagnetism, had markedly better dispersity and uniformity and higher crystallinity than those obtained in the capillary microreactor. The new CSCM microreactor developed in this work can act as a potent device to intensify the synthesis of similar inorganic nanoparticles via multistep chemical precipitation processes.
Deep learning-based methods have been successfully applied to semantic segmentation of optical remote sensing images. However, as more and more remote sensing data becomes available, comprehensively utilizing multi-modal remote sensing data to break through the performance bottleneck of single-modal interpretation is a new challenge. In addition, semantic segmentation and height estimation in remote sensing data are two strongly correlated tasks, but existing methods usually study each task separately, which leads to high computational resource overhead. To this end, we propose a Multi-Task learning framework for Multi-Modal remote sensing images (MM_MT). Specifically, we design a Cross-Modal Feature Fusion (CMFF) method, which aggregates complementary information from different modalities to improve the accuracy of semantic segmentation and height estimation. Besides, a dual-stream multi-task learning method is introduced for Joint Semantic Segmentation and Height Estimation (JSSHE), extracting common features in a shared network to save time and resources, and then learning task-specific features in two task branches. Experimental results on the public multi-modal remote sensing image dataset Potsdam show that, compared to training the two tasks independently, multi-task learning saves 20% of training time and achieves competitive performance, with an mIoU of 83.02% for semantic segmentation and an accuracy of 95.26% for height estimation.
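The core of any cross-modal fusion step is combining per-modality feature maps so complementary responses reinforce each other. A toy sketch under stated assumptions (sigmoid gating of each modality by the other is an illustrative rule, not the paper's CMFF design; feature shapes are arbitrary):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(feat_optical, feat_dsm):
    """Gate each modality's features by a sigmoid of the other modality,
    so a strong response in one stream amplifies the matching locations
    in the other before the shared task network."""
    return sigmoid(feat_dsm) * feat_optical + sigmoid(feat_optical) * feat_dsm

fused = fuse(np.ones((4, 8)), np.zeros((4, 8)))
```

A learned fusion would replace the fixed gates with trainable projections, but the broadcast-and-combine pattern is the same.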
Traffic characterization (e.g., chat, video) and application identification (e.g., FTP, Facebook) are two of the more crucial jobs in encrypted network traffic classification. These two activities are typically carried out separately by existing systems using separate models, significantly adding to the difficulty of network administration. Convolutional Neural Networks (CNNs) and Transformers are deep learning-based approaches for network traffic classification. A CNN is good at extracting local features while ignoring long-distance information from the network traffic sequence, and a Transformer can capture long-distance feature dependencies while ignoring local details. Based on these characteristics, a multi-task learning model that combines a Transformer and a 1D-CNN for encrypted traffic classification is proposed (MTC). To make up for the Transformer's lack of local detail feature extraction capability and the 1D-CNN's shortcoming of ignoring long-distance correlation information when processing traffic sequences, the model uses a parallel structure in which the features generated by the Transformer block and the 1D-CNN block are fused with each other by a feature fusion block. This structure improves the representation of traffic features by both blocks and allows the model to perform well with both long and short sequences. The model simultaneously handles multiple tasks, which lowers the cost of training. Experiments reveal that on the ISCX VPN-nonVPN dataset, the model achieves an average F1 score of 98.25% and an average recall of 98.30% for the task of identifying applications, and an average F1 score of 97.94% and an average recall of 97.54% for the task of traffic characterization. When advanced models on the same dataset are chosen for comparison, the model produces the best results. To prove its generalization, we applied MTC to the CICIDS2017 dataset, and our model also achieved good results.
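The parallel local/global split above can be illustrated with two stand-in branches over a 1D sequence, fused by concatenation (sliding-window means stand in for 1D convolutions and softmax pooling stands in for attention; this is a structural sketch, not the MTC blocks):

```python
import numpy as np

def cnn_branch(seq, k=3):
    """Local features: mean over a sliding window (stand-in for a 1D conv)."""
    return np.array([seq[i:i + k].mean() for i in range(len(seq) - k + 1)])

def transformer_branch(seq, k=3):
    """Global feature: softmax-weighted pooling over the whole sequence,
    broadcast to the CNN branch's length (crude Transformer stand-in)."""
    w = np.exp(seq - seq.max())
    pooled = (w / w.sum()) @ seq
    return np.full(len(seq) - k + 1, pooled)

def fuse_parallel(seq):
    # The feature-fusion block reduced to concatenating the two branches.
    return np.concatenate([cnn_branch(seq), transformer_branch(seq)])

feats = fuse_parallel(np.arange(8.0))
```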
False data injection (FDI) attacks are common in the distributed estimation of multi-task network environments, so an attack detection strategy is designed by combining the generalized maximum correntropy criterion. Based on this, we propose a diffusion least-mean-square algorithm based on the generalized maximum correntropy criterion (GMCC-DLMS) for multi-task networks. The algorithm achieves gratifying estimation results. Furthermore, compared to related work, it has better robustness as the number of attacked nodes increases. Moreover, the assumption about the number of attacked nodes is relaxed, which makes the algorithm applicable to multi-task environments. In addition, the performance of the proposed GMCC-DLMS algorithm is analyzed in the mean and mean-square senses. Finally, simulation experiments confirm the performance of the algorithm and its effectiveness against FDI attacks.
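The robustness mechanism behind correntropy-based estimation is that the update step is scaled by a kernel of the residual, so the huge residuals produced by injected false data barely move the weights. A minimal single-node sketch (the kernel parameters and the plain-LMS setting are illustrative assumptions; the actual GMCC-DLMS adds diffusion combination across neighboring nodes):

```python
import math

def gmcc_kernel(err, alpha=4.0, beta=2.0):
    """Generalized Gaussian kernel exp(-|e/beta|^alpha); alpha=2 recovers
    the ordinary (Gaussian) maximum correntropy case."""
    return math.exp(-abs(err / beta) ** alpha)

def gmcc_lms_step(w, x, d, mu=0.1, alpha=4.0, beta=2.0):
    """One LMS update with the step scaled by the kernel of the residual:
    moderate errors drive adaptation, outlier errors are down-weighted."""
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    g = gmcc_kernel(e, alpha, beta)
    return [wi + mu * g * e * xi for wi, xi in zip(w, x)]

clean_step = gmcc_lms_step([0.0, 0.0], [1.0, 1.0], 1.0)     # moderate error
attack_step = gmcc_lms_step([0.0, 0.0], [1.0, 1.0], 100.0)  # injected datum
```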
A growing amount of data is uploaded to the internet every day, and it is important to understand the volume of those data to find better schemes to process them. However, the volume of internet data is beyond the processing capabilities of the current internet infrastructure. Therefore, engineering works that use technology to organize and analyze information and extract useful information are of interest in both industry and academia. The goal of this paper is to explore entity relationships based on deep learning, introduce semantic knowledge through a pretrained language model, and develop an advanced entity relationship information extraction method that combines the Robustly Optimized BERT Approach (RoBERTa) with multi-task learning and incorporates knowledge from the field of linguistics, called Robustly Optimized BERT Approach + Multi-Task Learning (RoBERTa+MTL). To improve the effectiveness of model interaction, multi-task learning is used to exploit the supervision signals of auxiliary tasks. Experimental results show that our method achieves an accuracy of 88.95% in entity relationship extraction, and further achieves an accuracy of 86.35% after being combined with multi-task learning.
Vegetable production in the open field involves many tasks, such as soil preparation, ridging, and transplanting/sowing. Different tasks require agricultural machinery equipped with different agricultural tools to meet the needs of the operation. To address the coupled multi-task problem in the intelligent production of vegetables in the open field, a task assignment method for multiple unmanned tractors based on a consistency alliance is studied. Firstly, unmanned vegetable production in the open field is abstracted as a multi-task assignment model with constraints on task demand, task sequence, and the distance traveled by an unmanned tractor. The tight time constraints between associated tasks are transformed into time windows. Based on the driving distance of the unmanned tractor and the replacement cost of the tools, an expanded task cost function is innovatively established. The task assignment model of multiple unmanned tractors is optimized by the consensus-based bundle algorithm (CBBA) with time windows. Experiments show that the method can effectively resolve task conflicts in unmanned production and optimize task allocation. A basic model is provided for the cooperative tasking of multiple unmanned tractors for vegetable production in the open field.
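An "expanded" cost of the kind described above adds a tool-replacement penalty on top of travel distance. A toy sketch of such a route cost for one tractor (task names, coordinates, and the penalty value are illustrative assumptions; CBBA would then bid on tasks using costs like this):

```python
import math

def route_cost(route, tasks, tool_swap_cost=10.0):
    """Travel distance from the depot at (0, 0) along the route, plus a
    fixed penalty whenever consecutive tasks need different tools."""
    cost, pos, tool = 0.0, (0.0, 0.0), None
    for name in route:
        x, y, needed_tool = tasks[name]
        cost += math.dist(pos, (x, y))
        if tool is not None and needed_tool != tool:
            cost += tool_swap_cost
        pos, tool = (x, y), needed_tool
    return cost

# Two co-located tasks that need different tools: the swap penalty makes
# assigning them to different tractors potentially cheaper.
tasks = {"ridging": (3.0, 4.0, "ridger"), "sowing": (3.0, 4.0, "seeder")}
```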
Online advertising has gained much attention on various platforms as a hugely lucrative market. In promoting content and advertisements in real life, the acquisition of user target actions is usually a multi-step process, such as impression→click→conversion, i.e., the process from the delivery of the recommended item to the user's click and on to the final conversion. Due to data sparsity or sample selection bias, it is difficult for a trained model to achieve the business goal of the target campaign. Multi-task learning, a classical solution to this problem, aims to generalize better on the original task, given several related tasks, by exploiting the knowledge between tasks sharing the same feature and label space. Adaptively learned task relations bring better performance by making full use of the correlation between tasks. We train a general model capable of capturing the relationships between various tasks on all existing active tasks from a meta-learning perspective. In addition, this paper proposes a Multi-task Attention Network (MAN) to identify commonalities and differences between tasks in the feature space. Model performance is improved by explicitly learning the stacking of task relationships in the label space. To illustrate the effectiveness of our method, experiments are conducted on the Alibaba Click and Conversion Prediction (Ali-CCP) dataset. Experimental results show that the method outperforms state-of-the-art multi-task learning methods.
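Attention over task representations, as in the network above, re-expresses each task's features as a weighted mix of all tasks' features, so commonalities (e.g. between click and conversion) are shared explicitly. A minimal sketch with dot-product scoring (the scoring rule and fixed feature vectors are illustrative assumptions, not MAN's architecture):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def task_attention(task_feats, query_idx):
    """Mix all task feature vectors into the query task's representation,
    weighted by dot-product similarity to the query task."""
    q = task_feats[query_idx]
    scores = softmax(task_feats @ q)
    return scores @ task_feats

# Three hypothetical tasks with 2-dim features (e.g. impression, click,
# conversion); task 0's new representation borrows from similar tasks.
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
mixed = task_attention(feats, 0)
```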
Aspect-based sentiment analysis (ABSA) is a fine-grained process. Its fundamental subtasks are aspect term extraction (ATE) and aspect polarity classification (APC), and these subtasks are dependent and closely related. However, most existing works on Arabic ABSA address them separately, assume that aspect terms are pre-identified, or use a pipeline model. Pipeline solutions design different models for each task, and the output of the ATE model is used as the input to the APC model, which may result in error propagation between steps because APC is affected by ATE errors. These methods are impractical for real-world scenarios where the ATE task is the base task for APC and its result impacts the accuracy of APC. Thus, in this study, we focused on a multi-task learning model for Arabic ATE and APC in which the model is jointly trained on the two subtasks simultaneously in a single model. This paper integrates the multi-task model, namely Local Context Focus-Aspect Term Extraction and Polarity Classification (LCF-ATEPC), and the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) model as a shared layer for Arabic contextual text representation. The LCF-ATEPC model is based on multi-head self-attention and a local context focus (LCF) mechanism to capture the interactive information between an aspect and its context. Moreover, data augmentation techniques are proposed based on state-of-the-art augmentation techniques (word embedding substitution with constraints and contextual embedding (AraBERT)) to increase the diversity of the training dataset. This paper examined the effect of data augmentation on the multi-task model for Arabic ABSA. Extensive experiments were conducted on the original and combined datasets (merging the original and augmented datasets). Experimental results demonstrate that the proposed multi-task model outperformed existing APC techniques. Superior results were obtained by AraBERT and LCF-ATEPC with a fusion layer (AR-LCF-ATEPC-Fusion) and the proposed data augmentation word embedding-based method (FastText) on the combined dataset.
Convective storms and lightning are among the most important weather phenomena that are challenging to forecast. In this study, a novel multi-task learning (MTL) encoder-decoder U-net neural network was developed to forecast convective storms and lightning with lead times of up to 90 min, using GOES-16 geostationary satellite infrared brightness temperatures (IRBTs), lightning flashes from the Geostationary Lightning Mapper (GLM), and vertically integrated liquid (VIL) from the Next Generation Weather Radar (NEXRAD). To cope with the heavily skewed distribution of lightning data, a spatiotemporal exponent-weighted loss function and a log-transformed lightning normalization approach were developed. The effects of MTL, single-task learning (STL), and IRBTs as auxiliary input features on convection and lightning nowcasting were investigated. The results showed that normalizing the heavily skew-distributed lightning data with a log-transformation dramatically outperforms min-max normalization for nowcasting intense lightning events. The MTL model significantly outperformed the STL model for both lightning nowcasting and VIL nowcasting, particularly for intense lightning events. MTL also helped delay the decay of lightning forecast performance with lead time. Furthermore, incorporating satellite IRBTs as auxiliary input features substantially improved lightning nowcasting but made little difference in VIL forecasting. Finally, the MTL model performed better at forecasting both the lightning and VIL of organized convective storms than of isolated cells.
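The log-transform normalization above matters because flash counts are heavily right-skewed: most pixels see zero flashes while a few see hundreds, so plain min-max scaling squashes almost everything to near zero. A small NumPy comparison (the sample values are illustrative):

```python
import numpy as np

def minmax_norm(x):
    return (x - x.min()) / (x.max() - x.min())

def log_norm(x):
    """Log-transform before min-max scaling: spreads out the low-count
    bulk of a right-skewed field instead of crushing it toward zero."""
    return minmax_norm(np.log1p(x))

# Hypothetical per-pixel flash counts: mostly zeros, one intense cell.
flashes = np.array([0.0, 0.0, 0.0, 1.0, 2.0, 500.0])
```

With min-max scaling the median normalized value is about 0.001; after the log-transform it rises to about 0.06, giving the loss function a usable gradient on weak-lightning pixels.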
A human motion generation model can extract structural features from existing human motion capture data, and the generated data makes animated characters move. 3D human motion capture sequences contain complex spatial-temporal structures, and a deep learning model can fully describe the potential semantic structure of human motion. To improve the authenticity of the generated human motion sequences, we propose a multi-task motion generation model that consists of a discriminator and a generator. The discriminator classifies motion sequences into different styles according to their similarity to mean spatial-temporal templates derived from motion sequences of 17 crucial human joints, each with three degrees of freedom. Target motion sequences in these styles are then created by the generator. Unlike traditional related works, our model can handle multiple tasks, such as identifying styles and generating data. In addition, by extracting 17 crucial joints from the 29 human joints, our model avoids data redundancy and improves the accuracy of model recognition. The experimental results show that the discriminator of the model can effectively recognize diversified movements, and the generated data can correctly fit the actual data. The combination of discriminator and generator solves the problem of the low reuse rate of motion data, and the generated motion sequences are more suitable for actual movement.
Recent studies on computer vision and deep learning-based post-earthquake inspection of RC structures mainly perform well on specific tasks, while the trained models must be fine-tuned and re-trained when facing new tasks and datasets, which is inevitably time-consuming. This study proposes a multi-task learning approach that simultaneously accomplishes the semantic segmentation of seven types of structural components, three types of seismic damage, and four types of deterioration states. The proposed method contains a CNN-based encoder-decoder backbone subnetwork with skip-connection modules and a multi-head, task-specific recognition subnetwork. The backbone subnetwork is designed to extract multi-level features of post-earthquake RC structures. The multi-head, task-specific recognition subnetwork consists of three individual self-attention pipelines, each of which utilizes the extracted multi-level features from the backbone network as mutual guidance for its individual segmentation task. A synthetical loss function is designed with real-time adaptive coefficients to balance the multi-task losses and focus on the most unstably fluctuating one. Ablation experiments and comparative studies are further conducted to demonstrate their effectiveness and necessity. The results show that the proposed method can simultaneously recognize different structural components, seismic damage, and deterioration states, and that the overall performance of the three-task learning models gains a general improvement when compared to all single-task and dual-task models.
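One way to realize "real-time adaptive coefficients that focus on the most unstably fluctuating loss" is to weight each task by the recent variability of its loss. A toy sketch (measuring fluctuation by the standard deviation over a short window is an illustrative assumption, not the paper's exact rule):

```python
import statistics

def adaptive_task_weights(loss_history, window=5, eps=1e-8):
    """Return per-task coefficients, normalized to sum to 1, proportional
    to the std of each task's last few loss values: a task whose loss is
    still fluctuating gets more weight than one that has flattened out."""
    stds = [statistics.pstdev(h[-window:]) + eps for h in loss_history]
    total = sum(stds)
    return [s / total for s in stds]

weights = adaptive_task_weights([
    [0.9, 0.5, 0.8, 0.4, 0.9],       # unstable task: gets a large weight
    [0.5, 0.5, 0.5, 0.5, 0.5],       # flat task: gets almost no weight
])
```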
Prevailing linguistic steganalysis approaches focus on learning sensitive features to distinguish a particular category of steganographic texts from non-steganographic texts by performing binary classification. However, the setting where various categories of non-steganographic or steganographic texts coexist remains an unsolved problem and poses a significant threat to the security of cyberspace. In this paper, we propose a general linguistic steganalysis framework named LS-MTL, which introduces the idea of multi-task learning to deal with the classification of various categories of steganographic and non-steganographic texts. LS-MTL captures sensitive linguistic features from multiple related linguistic steganalysis tasks and can concurrently handle diverse tasks with a single constructed model. In the proposed framework, convolutional neural networks (CNNs) are utilized as private base models to extract sensitive features for each steganalysis task. Besides, a shared CNN is built to capture potential interaction information and share linguistic features among all tasks. Finally, LS-MTL incorporates the private and shared sensitive features to identify the detected text as steganographic or non-steganographic. Experimental results demonstrate that the proposed framework LS-MTL outperforms the baseline in the multi-category linguistic steganalysis task, with average Acc, Pre, and Rec increased by 0.5%, 1.4%, and 0.4%, respectively. Further ablation results show that LS-MTL with the shared module has robust generalization capability and achieves good detection performance even in the case of sparse data.
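The private-plus-shared split above differs from plain hard sharing: each task keeps its own extractor and additionally reads one extractor shared by all tasks, with the two feature sets combined before classification. A structural sketch (linear layers stand in for the CNNs; all shapes are illustrative assumptions):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def private_shared_features(x, W_private, W_shared):
    """Concatenate a task-private feature extractor's output with the
    output of an extractor shared across all steganalysis tasks."""
    return np.concatenate([relu(x @ W_private), relu(x @ W_shared)], axis=-1)

rng = np.random.default_rng(0)
feats = private_shared_features(
    rng.normal(size=(4, 10)),
    rng.normal(size=(10, 8)),   # stand-in for one task's private CNN
    rng.normal(size=(10, 8)),   # stand-in for the shared CNN
)
```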
To address the limited accuracy of industrial robots in high-end manufacturing, this paper proposes a two-step identification method for industrial robot kinematic parameters based on the product-of-exponentials (POE) model. The construction of a kinematic error model based on the POE formula is described, and a fitness function based on the POE error model is established. To achieve high-precision parameter identification, a two-step identification method is proposed: first, an improved grey wolf optimizer (IGWO) is used for coarse identification of the kinematic parameter errors, preliminarily reducing the average position error and average orientation error of a Staubli TX60 robot from (0.648 mm, 0.212°) to (0.457 mm, 0.166°); then, to further improve the robot's accuracy, the Levenberg-Marquardt (LM) algorithm is applied for fine identification of the parameter errors, finally reducing the average position error and average orientation error of the Staubli TX60 robot to (0.237 mm, 0.063°), reductions of 63.4% and 70.2%, respectively. To verify the stability of the two-step identification method, five sets of identification and validation data were randomly selected for parameter-error identification with the POE error model. The results show that the proposed two-step identification method can stably and accurately identify the kinematic parameter errors of industrial robots.
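The fine-identification stage above is a damped least-squares refinement. A generic single LM step can be sketched as follows (the residual/Jacobian pair here is a tiny linear stand-in for the robot-specific POE error model, which is beyond this sketch; the damping value is an illustrative assumption):

```python
import numpy as np

def lm_step(theta, residual_fn, jacobian_fn, lam=1e-3):
    """One Levenberg-Marquardt update: solve the damped normal equations
    (J^T J + lam*I) d = J^T r and move theta against the residual."""
    r = residual_fn(theta)
    J = jacobian_fn(theta)
    A = J.T @ J + lam * np.eye(len(theta))
    return theta - np.linalg.solve(A, J.T @ r)

# Tiny linear least-squares example standing in for the POE error model.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
theta0 = np.array([0.5, 0.5])  # e.g. output of the coarse (IGWO) stage
theta1 = lm_step(theta0, lambda t: X @ t - y, lambda t: X)
```

The damping term `lam` interpolates between Gauss-Newton (small `lam`) and gradient descent (large `lam`), which is what makes LM robust close to and far from the optimum.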
Funding for the Fe_(3)O_(4) microreactor study: the National Natural Science Foundation of China (21808059) and the Fundamental Research Funds for the Central Universities (JKA01221712).
Funding for the multi-modal remote sensing study: National Key R&D Program of China (No. 2022ZD0118401).
Funding for the encrypted traffic classification study: the People's Public Security University of China central basic scientific research business program (No. 2021JKF206).
文摘Traffic characterization(e.g.,chat,video)and application identifi-cation(e.g.,FTP,Facebook)are two of the more crucial jobs in encrypted network traffic classification.These two activities are typically carried out separately by existing systems using separate models,significantly adding to the difficulty of network administration.Convolutional Neural Network(CNN)and Transformer are deep learning-based approaches for network traf-fic classification.CNN is good at extracting local features while ignoring long-distance information from the network traffic sequence,and Transformer can capture long-distance feature dependencies while ignoring local details.Based on these characteristics,a multi-task learning model that combines Transformer and 1D-CNN for encrypted traffic classification is proposed(MTC).In order to make up for the Transformer’s lack of local detail feature extraction capability and the 1D-CNN’s shortcoming of ignoring long-distance correlation information when processing traffic sequences,the model uses a parallel structure to fuse the features generated by the Transformer block and the 1D-CNN block with each other using a feature fusion block.This structure improved the representation of traffic features by both blocks and allows the model to perform well with both long and short length sequences.The model simultaneously handles multiple tasks,which lowers the cost of training.Experiments reveal that on the ISCX VPN-nonVPN dataset,the model achieves an average F1 score of 98.25%and an average recall of 98.30%for the task of identifying applications,and an average F1 score of 97.94%,and an average recall of 97.54%for the task of traffic characterization.When advanced models on the same dataset are chosen for comparison,the model produces the best results.To prove the generalization,we applied MTC to CICIDS2017 dataset,and our model also achieved good results.
Abstract: False data injection (FDI) attacks are common in the distributed estimation of multi-task network environments, so an attack detection strategy is designed by combining the generalized maximum correntropy criterion. Based on this, we propose a diffusion least-mean-square algorithm based on the generalized maximum correntropy criterion (GMCC-DLMS) for multi-task networks. The algorithm achieves satisfactory estimation results and, compared to related work, shows better robustness as the number of attacked nodes increases. Moreover, the assumption about the number of attacked nodes is relaxed, making the algorithm applicable to multi-task environments. In addition, the performance of the proposed GMCC-DLMS algorithm is analyzed in the mean and mean-square senses. Finally, simulation experiments confirm the performance of the algorithm and its effectiveness against FDI attacks.
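The robust update at the heart of such an algorithm can be illustrated with a toy NumPy diffusion LMS loop. This is a minimal sketch, not the paper's GMCC-DLMS: the network topology, step size, and kernel parameters `alpha` and `lam` are hypothetical choices, and the attack-detection strategy is omitted; only the correntropy-weighted adapt step and the neighborhood combine step are shown.

```python
import numpy as np

rng = np.random.default_rng(1)

def gmcc_gain(e, alpha=2.0, lam=1.0):
    """Generalized-correntropy gradient factor: large errors (e.g., injected
    false data) are exponentially down-weighted, giving robustness to outliers."""
    a = abs(e)
    return lam * alpha * np.exp(-lam * a**alpha) * a**(alpha - 1) * np.sign(e)

# Toy network: 4 nodes cooperatively estimate a common 2-dim parameter w_true.
w_true = np.array([1.0, -0.5])
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3]}
W = np.zeros((4, 2))
mu = 0.1

for _ in range(400):
    # Adapt step: each node updates with its own noisy local measurement.
    psi = W.copy()
    for k in range(4):
        x = rng.normal(size=2)
        d = x @ w_true + 0.01 * rng.normal()
        psi[k] += mu * gmcc_gain(d - x @ W[k]) * x
    # Combine step: diffuse (average) estimates over each node's neighborhood.
    for k in range(4):
        W[k] = psi[neighbors[k]].mean(axis=0)

errors = np.linalg.norm(W - w_true, axis=1)
```

With `alpha = 2` the gain reduces to `2*e*exp(-e**2)`, so near-zero errors behave like standard LMS while gross errors are almost ignored.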
Abstract: A growing amount of data is uploaded to the internet every day, and it is important to understand the volume of those data in order to find better schemes to process them. However, the volume of internet data is beyond the processing capabilities of the current internet infrastructure. Therefore, engineering work that uses technology to organize and analyze information and extract useful information is of interest in both industry and academia. The goal of this paper is to explore entity relationship extraction based on deep learning, introduce semantic knowledge via a pretrained language model, and develop an advanced entity relationship extraction method that combines the Robustly Optimized BERT Approach (RoBERTa) with multi-task learning and incorporates linguistic knowledge, called Robustly Optimized BERT Approach + Multi-Task Learning (RoBERTa+MTL). To improve the effectiveness of model interaction, multi-task learning is used to incorporate the supervision information of auxiliary tasks. Experimental results show that our method achieves an accuracy of 88.95% on entity relationship extraction, and 86.35% accuracy is further achieved after combining with multi-task learning.
Funding: supported by the Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (No. 2021ZD0113604) and the China Agriculture Research System of MOF and MARA (No. CARS-23-D07).
Abstract: Vegetable production in the open field involves many tasks, such as soil preparation, ridging, and transplanting/sowing. Different tasks require agricultural machinery equipped with different agricultural tools to meet the needs of the operation. Aiming at the coupled multi-task nature of intelligent open-field vegetable production, a task assignment method for multiple unmanned tractors based on consensus alliances is studied. First, unmanned open-field vegetable production is abstracted as a multi-task assignment model with constraints on task demand, task sequence, and the distance traveled by each unmanned tractor. The tight time constraints between associated tasks are transformed into time windows. Based on the driving distance of the unmanned tractor and the replacement cost of the tools, an expanded task cost function is established. The task assignment model for multiple unmanned tractors is optimized by the consensus-based bundle algorithm (CBBA) with time windows. Experiments show that the method can effectively resolve task conflicts in unmanned production and optimize task allocation. A basic model is provided for the cooperative tasking of multiple unmanned tractors in open-field vegetable production.
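The expanded cost function (travel distance plus tool-replacement cost) and the time-window check can be illustrated with a greedy, single-round auction. This is a deliberately simplified stand-in for CBBA, not the consensus algorithm itself; all task data, tractor names, and the `TOOL_SWAP_COST` constant are hypothetical.

```python
# Each tractor bids its cost (travel distance + tool-change penalty) for each
# task; the cheapest feasible bid wins, subject to the task's time window.

tasks = {  # task: (field position, required tool, time window (start, end))
    "ridging":    (4.0, "ridger",       (0, 10)),
    "transplant": (6.0, "transplanter", (10, 20)),
    "soil_prep":  (1.0, "plough",       (0, 5)),
}
tractors = {"T1": {"pos": 0.0, "tool": "plough", "time": 0},
            "T2": {"pos": 5.0, "tool": "ridger", "time": 0}}
TOOL_SWAP_COST = 3.0

def bid(tr, task):
    pos, tool, (t0, t1) = tasks[task]
    cost = abs(tractors[tr]["pos"] - pos)           # travel distance
    if tractors[tr]["tool"] != tool:                # expanded cost: tool change
        cost += TOOL_SWAP_COST
    arrival = tractors[tr]["time"] + cost
    return cost if arrival <= t1 else float("inf")  # respect the time window

assignment = {}
for task in sorted(tasks, key=lambda t: tasks[t][2][0]):  # earliest window first
    winner = min(tractors, key=lambda tr: bid(tr, task))
    cost = bid(winner, task)
    assignment[task] = winner
    tractors[winner]["pos"] = tasks[task][0]        # update the winner's state
    tractors[winner]["tool"] = tasks[task][1]
    tractors[winner]["time"] = max(tractors[winner]["time"] + cost,
                                   tasks[task][2][0])
```

Because a tool swap costs as much as three distance units here, the tractor already carrying the right tool tends to win even from farther away, which is exactly the trade-off the expanded cost function encodes.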
Funding: Our work was supported by the research project of Yunnan University (Grant No. 2021Y274) and the Natural Science Foundation of China (Grant No. 61862064).
Abstract: Online advertising has gained much attention on various platforms as a hugely lucrative market. In promoting content and advertisements in real life, the acquisition of user target actions is usually a multi-step process, such as impression→click→conversion, i.e., the process from the delivery of the recommended item to the user's click to the final conversion. Due to data sparsity or sample selection bias, it is difficult for a trained model to achieve the business goal of the target campaign. Multi-task learning, a classical solution to this problem, aims to generalize better on the original task given several related tasks, by exploiting the knowledge between tasks that share the same feature and label space. Adaptively learned task relations make full use of the correlation between tasks and bring better performance. We train a general model capable of capturing the relationships between various tasks on all existing active tasks from a meta-learning perspective. In addition, this paper proposes a Multi-task Attention Network (MAN) to identify commonalities and differences between tasks in the feature space. Model performance is further improved by explicitly learning the stacking of task relationships in the label space. To illustrate the effectiveness of our method, experiments are conducted on the Alibaba Click and Conversion Prediction (Ali-CCP) dataset. Experimental results show that the method outperforms state-of-the-art multi-task learning methods.
Abstract: Aspect-based sentiment analysis (ABSA) is a fine-grained process. Its fundamental subtasks are aspect term extraction (ATE) and aspect polarity classification (APC), and these subtasks are dependent and closely related. However, most existing works on Arabic ABSA address them separately, assume that aspect terms are pre-identified, or use a pipeline model. Pipeline solutions design different models for each task, and the output of the ATE model is used as the input to the APC model, which may result in error propagation between steps because APC is affected by ATE errors. These methods are impractical for real-world scenarios where the ATE task is the base task for APC and its result impacts the accuracy of APC. Thus, in this study, we focused on a multi-task learning model for Arabic ATE and APC in which the two subtasks are jointly trained simultaneously in a single model. This paper integrates the multi-task model Local Context Focus-Aspect Term Extraction and Polarity Classification (LCF-ATEPC) with the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) model as a shared layer for Arabic contextual text representation. The LCF-ATEPC model is based on multi-head self-attention and a local context focus (LCF) mechanism to capture the interactive information between an aspect and its context. Moreover, data augmentation techniques are proposed based on state-of-the-art augmentation techniques (word embedding substitution with constraints and contextual embedding (AraBERT)) to increase the diversity of the training dataset. This paper examined the effect of data augmentation on the multi-task model for Arabic ABSA. Extensive experiments were conducted on the original and combined datasets (merging the original and augmented datasets). Experimental results demonstrate that the proposed multi-task model outperformed existing APC techniques. Superior results were obtained by AraBERT and LCF-ATEPC with a fusion layer (AR-LCF-ATEPC-Fusion) and the proposed word embedding-based data augmentation method (FastText) on the combined dataset.
Funding: supported by Science and Technology Grant No. 520120210003, Jibei Electric Power Company of the State Grid Corporation of China.
Abstract: Convective storms and lightning are among the most important weather phenomena that are challenging to forecast. In this study, a novel multi-task learning (MTL) encoder-decoder U-net neural network was developed to forecast convective storms and lightning at lead times of up to 90 min, using GOES-16 geostationary satellite infrared brightness temperatures (IRBTs), lightning flashes from the Geostationary Lightning Mapper (GLM), and vertically integrated liquid (VIL) from the Next Generation Weather Radar (NEXRAD). To cope with the heavily skewed distribution of lightning data, a spatiotemporal exponent-weighted loss function and a log-transformed lightning normalization approach were developed. The effects of MTL, single-task learning (STL), and IRBTs as auxiliary input features on convection and lightning nowcasting were investigated. The results showed that normalizing the heavily skewed lightning data with a log transformation dramatically outperforms min-max normalization for nowcasting intense lightning events. The MTL model significantly outperformed the STL model for both lightning and VIL nowcasting, particularly for intense lightning events. MTL also helped delay the decay of lightning forecast performance with lead time. Furthermore, incorporating satellite IRBTs as auxiliary input features substantially improved lightning nowcasting but made little difference in VIL forecasting. Finally, the MTL model performed better at forecasting both the lightning and the VIL of organized convective storms than those of isolated cells.
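The benefit of a log transform over min-max scaling for skewed data can be shown on synthetic counts. This sketch only mirrors the general idea; the paper's actual transform and the real lightning data are not reproduced, and the toy distribution below is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

# Lightning flash counts are heavily right-skewed: most pixels have zero or few
# flashes, while rare intense cells have very large values. A log transform
# compresses that range before scaling, so typical values are not squeezed into
# a sliver of [0, 1] the way plain min-max scaling squeezes them.

flashes = rng.exponential(scale=2.0, size=10_000) ** 2   # skewed toy counts

def min_max(x):
    return (x - x.min()) / (x.max() - x.min())

def log_norm(x):
    y = np.log1p(x)                  # log(1 + x) keeps zero counts at zero
    return (y - y.min()) / (y.max() - y.min())

# Where the "typical" (median) pixel lands in the normalized [0, 1] range:
mm_median = np.median(min_max(flashes))
log_median = np.median(log_norm(flashes))
```

Under min-max scaling the median sits almost at 0 (one extreme maximum dominates the denominator), while after the log transform it occupies a usable portion of the range, which is what makes intense events learnable.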
Abstract: A human motion generation model can extract structural features from existing human motion capture data, and the generated data makes animated characters move. 3D human motion capture sequences contain complex spatial-temporal structures, and deep learning models can fully describe the potential semantic structure of human motion. To improve the authenticity of the generated human motion sequences, we propose a multi-task motion generation model that consists of a discriminator and a generator. The discriminator classifies motion sequences into different styles according to their similarity to mean spatial-temporal templates built from motion sequences of 17 crucial human joints in three degrees of freedom, and the generator creates target motion sequences in these styles. Unlike traditional related works, our model can handle multiple tasks, such as identifying styles and generating data. In addition, by extracting 17 crucial joints from the 29 human joints, our model avoids data redundancy and improves the accuracy of model recognition. The experimental results show that the discriminator of the model can effectively recognize diversified movements, and the generated data correctly fit the actual data. The combination of discriminator and generator solves the problem of the low reuse rate of motion data, and the generated motion sequences are more suitable for actual movement.
Funding: National Key R&D Program of China under Grant No. 2019YFC1511005; the National Natural Science Foundation of China under Grant Nos. 51921006, 52192661 and 52008138; the China Postdoctoral Science Foundation under Grant Nos. BX20190102 and 2019M661286; the Heilongjiang Natural Science Foundation under Grant No. LH2022E070; and the Heilongjiang Province Postdoctoral Science Foundation under Grant Nos. LBH-TZ2016 and LBH-Z19064.
Abstract: Recent computer vision and deep learning-based studies on post-earthquake inspection of RC structures mainly perform well on specific tasks, while the trained models must be fine-tuned and re-trained when facing new tasks and datasets, which is inevitably time-consuming. This study proposes a multi-task learning approach that simultaneously accomplishes the semantic segmentation of seven types of structural components, three types of seismic damage, and four types of deterioration states. The proposed method contains a CNN-based encoder-decoder backbone subnetwork with skip-connection modules and a multi-head, task-specific recognition subnetwork. The backbone subnetwork is designed to extract multi-level features of post-earthquake RC structures. The multi-head, task-specific recognition subnetwork consists of three individual self-attention pipelines, each of which uses the extracted multi-level features from the backbone network as mutual guidance for its individual segmentation task. A synthetical loss function is designed with real-time adaptive coefficients to balance the multi-task losses and focus on the most unstably fluctuating one. Ablation experiments and comparative studies are conducted to demonstrate their effectiveness and necessity. The results show that the proposed method can simultaneously recognize different structural components, seismic damage, and deterioration states, and that the overall performance of the three-task learning model generally improves compared with all single-task and dual-task models.
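The idea of real-time adaptive coefficients that focus on the most unstably fluctuating task loss can be sketched as a softmax over per-task loss ratios. This weighting rule is a plausible illustration only, not the paper's exact synthetical loss; the loss values and the `temperature` parameter are hypothetical.

```python
import numpy as np

def adaptive_weights(prev_losses, curr_losses, temperature=1.0):
    """Tasks whose loss just rose (ratio > 1) get a larger coefficient,
    steering optimization toward the most unstable objective."""
    ratios = np.asarray(curr_losses) / np.asarray(prev_losses)
    z = ratios / temperature
    w = np.exp(z - z.max())          # numerically stable softmax
    return w / w.sum()               # coefficients sum to 1

prev = [0.50, 0.40, 0.30]            # component / damage / deterioration losses
curr = [0.48, 0.55, 0.29]            # the damage task just fluctuated upward

w = adaptive_weights(prev, curr)
total_loss = float(w @ np.asarray(curr))   # the combined multi-task loss
```

Recomputing `w` at every training step is what makes the coefficients "real-time": a task that stabilizes gradually hands its weight back to the others.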
Funding: This paper is partly supported by the National Natural Science Foundation of China under Grants 61972057 and 62172059; the Hunan Provincial Natural Science Foundation of China under Grants 2022JJ30623 and 2019JJ50287; and the Scientific Research Fund of the Hunan Provincial Education Department of China under Grants 21A0211 and 19A265.
Abstract: Prevailing linguistic steganalysis approaches focus on learning sensitive features that distinguish a particular category of steganographic texts from non-steganographic texts by performing binary classification. However, the setting in which various categories of non-steganographic or steganographic texts coexist remains an unsolved problem and poses a significant threat to the security of cyberspace. In this paper, we propose a general linguistic steganalysis framework named LS-MTL, which introduces the idea of multi-task learning to handle the classification of various categories of steganographic and non-steganographic texts. LS-MTL captures sensitive linguistic features from multiple related linguistic steganalysis tasks and can concurrently handle diverse tasks with a single constructed model. In the proposed framework, convolutional neural networks (CNNs) are used as private base models to extract sensitive features for each steganalysis task. In addition, a shared CNN is built to capture potential interaction information and share linguistic features among all tasks. Finally, LS-MTL combines the private and shared sensitive features to identify the detected text as steganographic or non-steganographic. Experimental results demonstrate that LS-MTL outperforms the baseline in the multi-category linguistic steganalysis task, with average Acc, Pre, and Rec increased by 0.5%, 1.4%, and 0.4%, respectively. Further ablation results show that LS-MTL with the shared module has robust generalization capability and achieves good detection performance even with sparse data.
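The private-plus-shared feature pattern can be sketched with linear maps standing in for the CNNs. This is a minimal illustration of the parameter-sharing structure only; the dimensions, weights, and function names below are hypothetical and unrelated to LS-MTL's actual layers.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each steganalysis task owns a private feature extractor; one shared extractor
# is concatenated in before the task-specific head, so cross-task linguistic
# features flow into every task's decision.

dim, n_tasks = 16, 3
shared_W = rng.normal(size=(8, dim))             # shared "CNN" (one for all tasks)
private_W = rng.normal(size=(n_tasks, 8, dim))   # one private "CNN" per task
heads = rng.normal(size=(n_tasks, 16))           # task-specific classifiers

def predict(task_id, x):
    feats = np.concatenate([private_W[task_id] @ x,   # task-sensitive features
                            shared_W @ x])            # shared features
    return 1 / (1 + np.exp(-(heads[task_id] @ feats)))  # P(steganographic)

x = rng.normal(size=dim)                         # one embedded text vector
probs = [predict(k, x) for k in range(n_tasks)]
```

During training, gradients from all tasks would update `shared_W` while each `private_W[k]` sees only its own task, which is what lets the shared module generalize when one task's data is sparse.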
Abstract: To address the limited accuracy of industrial robots in high-end manufacturing, this paper proposes a two-stage identification method for industrial robot kinematic parameters based on the product of exponentials (POE) model. A kinematic error model based on the POE formulation is constructed, and a fitness function based on this POE error model is established. To achieve high-precision parameter identification, a two-stage identification method is proposed. First, an improved grey wolf optimizer (IGWO) performs coarse identification of the kinematic parameter errors, initially reducing the average position error and average orientation error of a Staubli TX60 robot from (0.648 mm, 0.212°) to (0.457 mm, 0.166°). Then, to further improve the robot's accuracy, the Levenberg-Marquardt (LM) algorithm performs fine identification of the parameter errors, further reducing the average position and orientation errors of the Staubli TX60 to (0.237 mm, 0.063°), reductions of 63.4% and 70.2%, respectively. To verify the stability of the two-stage identification method, five groups of identification and validation datasets were randomly selected for parameter-error identification with the POE error model. The results show that the proposed two-stage method can stably and accurately identify the kinematic parameter errors of industrial robots.
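The coarse-then-fine structure of such a two-stage identification can be illustrated on a toy nonlinear least-squares problem: a broad random search stands in for the IGWO coarse stage, followed by a hand-rolled Levenberg-Marquardt refinement. The model, bounds, and iteration counts are hypothetical, and the robot's real POE error model is far richer than this two-parameter curve fit.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy identification target: y = a * exp(b * t), with true parameters unknown.
t = np.linspace(0, 1, 50)
true_p = np.array([2.0, -1.5])
y = true_p[0] * np.exp(true_p[1] * t) + 0.001 * rng.normal(size=t.size)

def residuals(p):
    return p[0] * np.exp(p[1] * t) - y

def jacobian(p):
    return np.column_stack([np.exp(p[1] * t),            # d r / d a
                            p[0] * t * np.exp(p[1] * t)]) # d r / d b

# Stage 1: coarse identification by random search over a broad box
# (a crude stand-in for the global IGWO search).
candidates = rng.uniform([-5, -5], [5, 5], size=(300, 2))
coarse = min(candidates, key=lambda p: (residuals(p) ** 2).sum())

# Stage 2: fine identification with Levenberg-Marquardt from the coarse start.
p, lam = coarse.copy(), 1e-3
for _ in range(50):
    r, J = residuals(p), jacobian(p)
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if (residuals(p + step) ** 2).sum() < (r ** 2).sum():
        p, lam = p + step, lam * 0.5   # accept: trust the Gauss-Newton step more
    else:
        lam *= 2.0                     # reject: lean back toward gradient descent

coarse_err = np.linalg.norm(coarse - true_p)
fine_err = np.linalg.norm(p - true_p)
```

The global stage only has to land in the right basin; the local LM stage then drives the error down to the noise floor, mirroring the coarse/fine division of labor between IGWO and LM.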