Funding: Funded by the Beijing University of Posts and Telecommunications-China Mobile Research Institute Joint Innovation Center.
Abstract: Direction-of-arrival (DoA) estimation is an active research area in signal processing. To address DoA estimation without prior knowledge of the number of signal sources and multipath components in millimeter-wave systems, a multi-task deep residual shrinkage network (MTDRSN) and a transfer learning-based convolutional neural network (TCNN), collectively named MDTCNet, are proposed. The sample covariance matrix of the received signal is used as the network input. A DRSN-based multi-task classification model is first introduced to estimate the numbers of signal sources and multipath components simultaneously. The DoAs of multiple signals and paths are then estimated by a regression model: the proposed CNN performs DoA estimation given the predicted numbers of sources and paths. Furthermore, model-based transfer learning is introduced into the regression model; the TCNN inherits part of the network parameters of the optimized model obtained by the CNN. Experimental results show that the MDTCNet-based DoA estimation method accurately predicts the numbers of signal sources and multipath components over a range of signal-to-noise ratios, and achieves a lower root mean square error than several existing deep learning-based and traditional methods.
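As a rough illustration of the covariance-matrix input described above, the sketch below forms the sample covariance matrix of simulated array snapshots and splits it into real and imaginary channels; the array size, snapshot count, angles, and the real/imaginary stacking are illustrative assumptions, not the paper's exact preprocessing.

```python
import numpy as np

def sample_covariance(snapshots: np.ndarray) -> np.ndarray:
    """Sample covariance matrix R = X X^H / T for an M x T snapshot matrix."""
    m, t = snapshots.shape
    return snapshots @ snapshots.conj().T / t

# Illustrative example: 8-element uniform linear array, 2 far-field sources (angles assumed).
rng = np.random.default_rng(0)
m, t = 8, 200                        # sensors, snapshots (assumed values)
angles = np.deg2rad([-10.0, 25.0])   # hypothetical DoAs
steering = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))  # half-wavelength spacing
signals = rng.standard_normal((2, t)) + 1j * rng.standard_normal((2, t))
noise = 0.1 * (rng.standard_normal((m, t)) + 1j * rng.standard_normal((m, t)))
x = steering @ signals + noise
r = sample_covariance(x)
# A real/imaginary split is one common way to feed a complex matrix into a CNN.
net_input = np.stack([r.real, r.imag])    # shape (2, M, M)
print(net_input.shape)
```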
Abstract: Thoracic diseases pose significant risks to chest health and are among the most perilous medical conditions. They can affect one or both lungs, severely impairing a person's ability to breathe normally. Notable examples include pneumonia, lung cancer, coronavirus disease 2019 (COVID-19), tuberculosis, and chronic obstructive pulmonary disease (COPD). Consequently, early and precise detection of these diseases is paramount during diagnosis. Traditionally, detection relies on X-ray imaging or computed tomography (CT) scans. However, due to the scarcity of proficient radiologists and the inherent similarities between these diseases, detection accuracy can be compromised, leading to imprecise or erroneous results. To address this challenge, scientists have turned to computer-based solutions aiming for swift and accurate diagnoses. The primary objective of this study is to develop two machine learning models, using single-task and multi-task learning frameworks, to enhance classification accuracy. Within the multi-task learning architecture, two principal approaches exist: soft parameter sharing and hard parameter sharing. This research adopts a multi-task deep learning approach that leverages CNNs to achieve improved classification performance for the specified tasks. These tasks, focusing on pneumonia and COVID-19, are processed and learned simultaneously within a multi-task model. The trained model is rigorously validated using three different real-world datasets for training and testing.
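The hard-parameter-sharing idea mentioned above can be sketched as a shared convolutional trunk with one classification head per task; the layer sizes, class counts, and the name `HardSharedCNN` are placeholders rather than the study's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HardSharedCNN(nn.Module):
    """Shared convolutional trunk with one classification head per task."""
    def __init__(self, n_pneumonia_classes: int = 2, n_covid_classes: int = 2):
        super().__init__()
        self.backbone = nn.Sequential(                     # shared ("hard") parameters
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.head_pneumonia = nn.Linear(32, n_pneumonia_classes)   # task-specific
        self.head_covid = nn.Linear(32, n_covid_classes)           # task-specific

    def forward(self, x):
        feats = self.backbone(x)
        return self.head_pneumonia(feats), self.head_covid(feats)

model = HardSharedCNN()
x = torch.randn(4, 1, 224, 224)                # dummy chest X-ray batch
logits_pneu, logits_covid = model(x)
loss = F.cross_entropy(logits_pneu, torch.randint(0, 2, (4,))) \
     + F.cross_entropy(logits_covid, torch.randint(0, 2, (4,)))
loss.backward()
```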
Funding: National Key R&D Program of China (No. 2022ZD0118401).
Abstract: Deep learning-based methods have been successfully applied to semantic segmentation of optical remote sensing images. However, as more remote sensing data become available, comprehensively utilizing multi-modal remote sensing data to break through the performance bottleneck of single-modal interpretation is a new challenge. In addition, semantic segmentation and height estimation from remote sensing data are two strongly correlated tasks, but existing methods usually study them separately, which leads to high computational overhead. To this end, we propose a multi-task learning framework for multi-modal remote sensing images (MM_MT). Specifically, we design a Cross-Modal Feature Fusion (CMFF) method, which aggregates complementary information from different modalities to improve the accuracy of semantic segmentation and height estimation. Besides, a dual-stream multi-task learning method is introduced for Joint Semantic Segmentation and Height Estimation (JSSHE), extracting common features in a shared network to save time and resources, and then learning task-specific features in two task branches. Experimental results on the public multi-modal remote sensing dataset Potsdam show that, compared with training the two tasks independently, multi-task learning saves 20% of training time and achieves competitive performance, with an mIoU of 83.02% for semantic segmentation and an accuracy of 95.26% for height estimation.
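A minimal sketch of the dual-stream idea, assuming a concatenation-based fusion of an optical image and a height/DSM channel feeding a shared encoder with a segmentation branch and a height-regression branch; all layer widths and the class count are invented for illustration and do not reproduce CMFF or JSSHE.

```python
import torch
import torch.nn as nn

class JointSegHeight(nn.Module):
    """Two modalities fused by concatenation, a shared encoder, and two task branches."""
    def __init__(self, n_classes: int = 6):
        super().__init__()
        self.fuse = nn.Conv2d(3 + 1, 16, 1)              # optical (3 ch) + DSM/height (1 ch)
        self.shared = nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_branch = nn.Conv2d(32, n_classes, 1)    # per-pixel class logits
        self.height_branch = nn.Conv2d(32, 1, 1)         # per-pixel height regression

    def forward(self, optical, dsm):
        x = self.fuse(torch.cat([optical, dsm], dim=1))  # simple cross-modal fusion
        feats = self.shared(x)
        return self.seg_branch(feats), self.height_branch(feats)

model = JointSegHeight()
seg_logits, height = model(torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128))
print(seg_logits.shape, height.shape)
```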
Funding: National Key R&D Program of China under Grant No. 2019YFC1511005; the National Natural Science Foundation of China under Grant Nos. 51921006, 52192661 and 52008138; the China Postdoctoral Science Foundation under Grant Nos. BX20190102 and 2019M661286; the Heilongjiang Natural Science Foundation under Grant No. LH2022E070; and the Heilongjiang Province Postdoctoral Science Foundation under Grant Nos. LBH-TZ2016 and LBH-Z19064.
Abstract: Recent studies on computer vision and deep learning-based post-earthquake inspection of RC structures mainly perform well for specific tasks, while the trained models must be fine-tuned and re-trained when facing new tasks and datasets, which is inevitably time-consuming. This study proposes a multi-task learning approach that simultaneously accomplishes semantic segmentation of seven types of structural components, three types of seismic damage, and four types of deterioration states. The proposed method contains a CNN-based encoder-decoder backbone subnetwork with skip-connection modules and a multi-head, task-specific recognition subnetwork. The backbone subnetwork is designed to extract multi-level features of post-earthquake RC structures. The multi-head, task-specific recognition subnetwork consists of three individual self-attention pipelines, each of which uses the extracted multi-level features from the backbone network as mutual guidance for its segmentation task. A composite loss function is designed with real-time adaptive coefficients to balance the multi-task losses and focus on the most unstably fluctuating one. Ablation experiments and comparative studies are further conducted to demonstrate their effectiveness and necessity. The results show that the proposed method can simultaneously recognize different structural components, seismic damage, and deterioration states, and that the overall performance of the three-task learning models gains a general improvement compared with all single-task and dual-task models.
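One simple way to realize real-time adaptive loss coefficients that emphasize the most unstably fluctuating task is to weight each task by the recent variability of its loss; the window length and weighting rule below are assumptions, not the paper's formula.

```python
import torch

def adaptive_weights(loss_history, window: int = 5):
    """Weight each task by the recent standard deviation of its loss (normalized to sum to 1).

    loss_history: list of 1-D tensors, one per task, holding recent loss values.
    """
    stds = torch.stack([h[-window:].std() for h in loss_history])
    return stds / stds.sum().clamp_min(1e-8)   # fluctuating tasks get larger weights

# Hypothetical running losses for three segmentation tasks.
history = [torch.tensor([0.9, 0.8, 0.85, 0.7, 0.9]),    # structural components
           torch.tensor([0.5, 0.49, 0.5, 0.51, 0.5]),   # seismic damage
           torch.tensor([1.2, 0.6, 1.1, 0.5, 1.0])]     # deterioration states
w = adaptive_weights(history)
current = torch.tensor([0.88, 0.50, 0.95])
total_loss = (w * current).sum()
print(w, total_loss)
```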
Funding: Supported by the People's Public Security University of China central basic scientific research business program (No. 2021JKF206).
Abstract: Traffic characterization (e.g., chat, video) and application identification (e.g., FTP, Facebook) are two of the most crucial jobs in encrypted network traffic classification. These two activities are typically carried out separately by existing systems using separate models, significantly adding to the difficulty of network administration. Convolutional Neural Networks (CNNs) and Transformers are deep learning-based approaches for network traffic classification. A CNN is good at extracting local features but ignores long-distance information in the traffic sequence, while a Transformer can capture long-distance feature dependencies but ignores local details. Based on these characteristics, a multi-task learning model that combines a Transformer and a 1D-CNN for encrypted traffic classification, named MTC, is proposed. To make up for the Transformer's lack of local detail extraction and the 1D-CNN's neglect of long-distance correlations when processing traffic sequences, the model uses a parallel structure in which the features generated by the Transformer block and the 1D-CNN block are fused with each other by a feature fusion block. This structure improves the representation of traffic features by both blocks and allows the model to perform well on both long and short sequences. The model also handles multiple tasks simultaneously, which lowers the cost of training. Experiments reveal that, on the ISCX VPN-nonVPN dataset, the model achieves an average F1 score of 98.25% and an average recall of 98.30% for application identification, and an average F1 score of 97.94% and an average recall of 97.54% for traffic characterization. Compared with advanced models on the same dataset, the model produces the best results. To prove its generalization, we applied MTC to the CICIDS2017 dataset, where it also achieved good results.
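The parallel Transformer/1D-CNN structure with feature fusion can be sketched as follows; the embedding size, head counts, task label counts, and the mean-pooling/concatenation fusion are stand-ins for the actual MTC design.

```python
import torch
import torch.nn as nn

class ParallelTransformerCNN(nn.Module):
    """Parallel Transformer and 1D-CNN branches whose features are fused for two tasks."""
    def __init__(self, d_model: int = 64, n_apps: int = 10, n_traffic_types: int = 6):
        super().__init__()
        self.embed = nn.Linear(1, d_model)                        # byte value -> embedding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.cnn = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.fusion = nn.Linear(2 * d_model, d_model)             # simple concatenation fusion
        self.app_head = nn.Linear(d_model, n_apps)
        self.traffic_head = nn.Linear(d_model, n_traffic_types)

    def forward(self, x):                                         # x: (batch, seq_len)
        e = self.embed(x.unsqueeze(-1))                           # (B, L, D)
        t_feat = self.transformer(e).mean(dim=1)                  # long-range view
        c_feat = self.cnn(e.transpose(1, 2)).mean(dim=2)          # local-detail view
        fused = torch.relu(self.fusion(torch.cat([t_feat, c_feat], dim=1)))
        return self.app_head(fused), self.traffic_head(fused)

model = ParallelTransformerCNN()
app_logits, traffic_logits = model(torch.rand(8, 128))
print(app_logits.shape, traffic_logits.shape)
```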
Abstract: False data injection (FDI) attacks are common in the distributed estimation of multi-task network environments, so an attack detection strategy is designed by combining the generalized maximum correntropy criterion. Based on this, we propose a diffusion least-mean-square algorithm based on the generalized maximum correntropy criterion (GMCC-DLMS) for multi-task networks. The algorithm achieves gratifying estimation results and, compared with related work, has better robustness as the number of attacked nodes increases. Moreover, the assumption about the number of attacked nodes is relaxed, which makes the method applicable to multi-task environments. In addition, the performance of the proposed GMCC-DLMS algorithm is analyzed in the mean and mean-square senses. Finally, simulation experiments confirm the performance of the algorithm and its effectiveness against FDI attacks.
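To make the correntropy idea concrete, the sketch below runs a single-node LMS-style update in which the error is weighted by a generalized correntropy kernel exp(-lam*|e|**alpha), so large attack-like errors barely move the weights; the diffusion (combine) step of GMCC-DLMS and its parameter choices are omitted, and the step sizes here are illustrative assumptions.

```python
import numpy as np

def gmcc_lms_step(w, x, d, mu=0.05, alpha=2.0, lam=1.0):
    """One LMS-style update where the error is weighted by a generalized
    correntropy kernel exp(-lam * |e|**alpha); large (attack-like) errors
    therefore contribute little to the adaptation."""
    e = d - w @ x
    kernel = np.exp(-lam * np.abs(e) ** alpha)
    grad_scale = kernel * np.abs(e) ** (alpha - 1) * np.sign(e)
    return w + mu * alpha * lam * grad_scale * x, e

rng = np.random.default_rng(1)
w_true = np.array([0.5, -1.0, 2.0])
w = np.zeros(3)
for k in range(500):
    x = rng.standard_normal(3)
    d = w_true @ x + 0.01 * rng.standard_normal()
    if k % 50 == 0:                    # occasional injected false data
        d += 10.0
    w, _ = gmcc_lms_step(w, x, d)
print(w)                                # approaches w_true despite the outliers
```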
Funding: Supported by Science and Technology Grant No. 520120210003, Jibei Electric Power Company of the State Grid Corporation of China.
Abstract: Convective storms and lightning are among the most important weather phenomena that are challenging to forecast. In this study, a novel multi-task learning (MTL) encoder-decoder U-net neural network was developed to forecast convective storms and lightning with lead times of up to 90 min, using GOES-16 geostationary satellite infrared brightness temperatures (IRBTs), lightning flashes from the Geostationary Lightning Mapper (GLM), and vertically integrated liquid (VIL) from the Next Generation Weather Radar (NEXRAD). To cope with the heavily skewed distribution of lightning data, a spatiotemporal exponent-weighted loss function and a log-transformed lightning normalization approach were developed. The effects of MTL, single-task learning (STL), and IRBTs as auxiliary input features on convection and lightning nowcasting were investigated. The results showed that normalizing the heavily skewed lightning data with a log transformation dramatically outperforms min-max normalization for nowcasting intense lightning events. The MTL model significantly outperformed the STL model for both lightning and VIL nowcasting, particularly for intense lightning events. MTL also helped delay the decay of lightning forecast performance with lead time. Furthermore, incorporating satellite IRBTs as auxiliary input features substantially improved lightning nowcasting but made little difference for VIL forecasting. Finally, the MTL model performed better at forecasting both the lightning and the VIL of organized convective storms than of isolated cells.
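The log-transformed normalization for skewed lightning counts can be illustrated as below, contrasted with min-max scaling; the normalization constant (the maximum of the logged counts) and the toy grid values are assumptions, not the paper's exact scheme.

```python
import numpy as np

def log_normalize(flash_counts: np.ndarray) -> np.ndarray:
    """Map heavily skewed lightning flash counts into [0, 1] via log(1 + x)."""
    logged = np.log1p(flash_counts)
    return logged / max(logged.max(), 1e-8)

def minmax_normalize(flash_counts: np.ndarray) -> np.ndarray:
    return flash_counts / max(flash_counts.max(), 1e-8)

# Hypothetical gridded flash counts: mostly zeros with a few intense cells.
counts = np.array([0, 0, 1, 2, 0, 5, 0, 120, 0, 800], dtype=float)
print(np.round(minmax_normalize(counts), 3))   # weak events crushed toward 0
print(np.round(log_normalize(counts), 3))      # weak events remain distinguishable
```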
Abstract: A human motion generation model can extract structural features from existing human motion capture data, and the generated data can drive animated characters. 3D human motion capture sequences contain complex spatial-temporal structures, and deep learning models can fully describe the potential semantic structure of human motion. To improve the authenticity of the generated human motion sequences, we propose a multi-task motion generation model that consists of a discriminator and a generator. The discriminator classifies motion sequences into different styles according to their similarity to mean spatial-temporal templates built from motion sequences of 17 crucial human joints with three degrees of freedom, and the generator creates target motion sequences in these styles. Unlike traditional related works, our model can handle multiple tasks, such as identifying styles and generating data. In addition, by extracting 17 crucial joints from the 29 human joints, our model avoids data redundancy and improves recognition accuracy. The experimental results show that the discriminator can effectively recognize diversified movements and that the generated data correctly fit the actual data. The combination of discriminator and generator addresses the low reuse rate of motion data, and the generated motion sequences are better suited to actual movement.
Abstract: Aspect-based sentiment analysis (ABSA) is a fine-grained process. Its fundamental subtasks are aspect term extraction (ATE) and aspect polarity classification (APC), and these subtasks are dependent and closely related. However, most existing works on Arabic ABSA address them separately, assume that aspect terms are pre-identified, or use a pipeline model. Pipeline solutions design a different model for each task, and the output of the ATE model is used as the input to the APC model, which may result in error propagation across steps because APC is affected by ATE errors. These methods are impractical for real-world scenarios where the ATE task is the base task for APC and its result impacts APC accuracy. Thus, in this study, we focus on a multi-task learning model for Arabic ATE and APC in which the model is jointly trained on the two subtasks simultaneously in a single model. This paper integrates the multi-task model Local Context Focus-Aspect Term Extraction and Polarity Classification (LCF-ATEPC) with the Arabic Bidirectional Encoder Representations from Transformers (AraBERT) model as a shared layer for Arabic contextual text representation. The LCF-ATEPC model is based on multi-head self-attention and a local context focus (LCF) mechanism to capture the interactive information between an aspect and its context. Moreover, data augmentation techniques are proposed based on state-of-the-art augmentation techniques (word embedding substitution with constraints and contextual embedding with AraBERT) to increase the diversity of the training dataset. This paper examines the effect of data augmentation on the multi-task model for Arabic ABSA. Extensive experiments were conducted on the original and combined datasets (merging the original and augmented datasets). Experimental results demonstrate that the proposed multi-task model outperforms existing APC techniques. The best results were obtained by AraBERT and LCF-ATEPC with a fusion layer (AR-LCF-ATEPC-Fusion) and the proposed word embedding-based data augmentation method (FastText) on the combined dataset.
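A minimal sketch of joint training on ATE (token-level BIO tagging) and APC (sequence-level polarity), with a generic BiLSTM encoder standing in for AraBERT; the vocabulary size, tag and label sets, and mean pooling are illustrative and do not reproduce LCF-ATEPC.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointATEAPC(nn.Module):
    """Shared encoder with a token-level head (aspect term extraction, BIO tags)
    and a sequence-level head (aspect polarity classification)."""
    def __init__(self, vocab_size=30000, d_model=128, n_tags=3, n_polarities=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)            # stand-in for AraBERT
        self.encoder = nn.LSTM(d_model, d_model, batch_first=True, bidirectional=True)
        self.ate_head = nn.Linear(2 * d_model, n_tags)            # B / I / O per token
        self.apc_head = nn.Linear(2 * d_model, n_polarities)      # pos / neg / neutral

    def forward(self, token_ids):
        h, _ = self.encoder(self.embed(token_ids))                # (B, L, 2D)
        tag_logits = self.ate_head(h)                             # per-token
        polarity_logits = self.apc_head(h.mean(dim=1))            # pooled sentence view
        return tag_logits, polarity_logits

model = JointATEAPC()
ids = torch.randint(0, 30000, (4, 20))
tag_logits, pol_logits = model(ids)
loss = F.cross_entropy(tag_logits.reshape(-1, 3), torch.randint(0, 3, (4 * 20,))) \
     + F.cross_entropy(pol_logits, torch.randint(0, 3, (4,)))
loss.backward()
```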
Funding: Supported by the Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (No. 2021ZD0113604) and the China Agriculture Research System of MOF and MARA (No. CARS-23-D07).
Abstract: Vegetable production in the open field involves many tasks, such as soil preparation, ridging, and transplanting/sowing. Different tasks require agricultural machinery equipped with different tools to meet the needs of each operation. Aiming at the coupled multi-task nature of intelligent open-field vegetable production, a task assignment method for multiple unmanned tractors based on a consistency alliance is studied. Firstly, unmanned open-field vegetable production is abstracted as a multi-task assignment model with constraints on task demand, task sequence, and the distance traveled by an unmanned tractor. The tight timing constraints between associated tasks are transformed into time windows. Based on the driving distance of the unmanned tractor and the replacement cost of the tools, an expanded task cost function is established. The task assignment model for multiple unmanned tractors is optimized by the consensus-based bundle algorithm (CBBA) with time windows. Experiments show that the method can effectively resolve task conflicts in unmanned production and optimize task allocation, providing a basic model for cooperative multi-tractor operation in open-field vegetable production.
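The expanded task cost combining driving distance and tool replacement can be sketched as a simple function over which each tractor bids, roughly as CBBA would when building its bundle; the coordinates, cost coefficients, and greedy selection below are illustrative assumptions, not the paper's exact model.

```python
from dataclasses import dataclass
import math

@dataclass
class Task:
    x: float          # field coordinates (km)
    y: float
    tool: str         # implement required, e.g. "ridger", "transplanter"

def task_cost(tractor_pos, current_tool, task: Task,
              travel_cost_per_km: float = 1.0, tool_change_cost: float = 5.0) -> float:
    """Expanded cost of taking on a task: driving distance plus a tool-replacement
    penalty when the required implement differs from the one currently mounted.
    (Cost coefficients are illustrative assumptions.)"""
    dist = math.hypot(task.x - tractor_pos[0], task.y - tractor_pos[1])
    change = 0.0 if current_tool == task.tool else tool_change_cost
    return travel_cost_per_km * dist + change

tasks = [Task(0.5, 0.2, "ridger"), Task(1.5, 0.8, "transplanter"), Task(0.1, 0.9, "ridger")]
pos, tool = (0.0, 0.0), "ridger"
# Greedy bid: the tractor bids on the cheapest task (CBBA builds bundles from such bids).
best = min(tasks, key=lambda t: task_cost(pos, tool, t))
print(best, task_cost(pos, tool, best))
```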
Abstract: A growing amount of data is uploaded to the internet every day, and it is important to understand the volume of those data in order to find a better scheme to process them. However, the volume of internet data is beyond the processing capabilities of the current internet infrastructure. Therefore, engineering works that use technology to organize and analyze information and extract useful knowledge are of interest in both industry and academia. The goal of this paper is to explore entity relationships based on deep learning, introduce semantic knowledge by using a pre-trained language model, and develop an advanced entity relationship information extraction method that combines the Robustly Optimized BERT Approach (RoBERTa) with multi-task learning and linguistic knowledge, called RoBERTa + Multi-Task Learning (RoBERTa+MTL). To improve the effectiveness of model interaction, multi-task learning is used to incorporate the supervisory information of auxiliary tasks. Experimental results show that our method achieves an accuracy of 88.95 on entity relationship extraction, and an accuracy of 86.35% after being combined with multi-task learning.
Funding: This paper is partly supported by the National Natural Science Foundation of China under Grants 61972057 and 62172059, the Hunan Provincial Natural Science Foundation of China under Grants 2022JJ30623 and 2019JJ50287, and the Scientific Research Fund of Hunan Provincial Education Department of China under Grants 21A0211 and 19A265.
Abstract: Prevailing linguistic steganalysis approaches focus on learning sensitive features to distinguish a particular category of steganographic texts from non-steganographic texts by performing binary classification. However, when various categories of non-steganographic or steganographic texts coexist, the problem remains unsolved and poses a significant threat to the security of cyberspace. In this paper, we propose a general linguistic steganalysis framework named LS-MTL, which introduces the idea of multi-task learning to deal with the classification of various categories of steganographic and non-steganographic texts. LS-MTL captures sensitive linguistic features from multiple related linguistic steganalysis tasks and can concurrently handle diverse tasks with a single constructed model. In the proposed framework, convolutional neural networks (CNNs) are used as private base models to extract sensitive features for each steganalysis task. Besides, a shared CNN is built to capture potential interaction information and share linguistic features among all tasks. Finally, LS-MTL combines the private and shared sensitive features to identify the detected text as steganographic or non-steganographic. Experimental results demonstrate that LS-MTL outperforms the baseline in the multi-category linguistic steganalysis task, with average Acc, Pre, and Rec increased by 0.5%, 1.4%, and 0.4%, respectively. Further ablation results show that LS-MTL with the shared module has robust generalization capability and achieves good detection performance even with sparse data.
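The private-plus-shared CNN layout can be sketched as one 1D text CNN per task plus a shared one whose features are concatenated before each task's binary classifier; the embedding sizes, the number of tasks, and the class name `LSMTLSketch` are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """1D convolutional feature extractor over word embeddings."""
    def __init__(self, d_embed=64, d_feat=32):
        super().__init__()
        self.conv = nn.Sequential(nn.Conv1d(d_embed, d_feat, 3, padding=1), nn.ReLU(),
                                  nn.AdaptiveMaxPool1d(1), nn.Flatten())
    def forward(self, e):                  # e: (B, L, d_embed)
        return self.conv(e.transpose(1, 2))

class LSMTLSketch(nn.Module):
    """One private CNN per steganalysis task plus one shared CNN; their features
    are concatenated before each task's binary classifier."""
    def __init__(self, n_tasks=3, vocab=20000, d_embed=64, d_feat=32):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_embed)
        self.private = nn.ModuleList(TextCNN(d_embed, d_feat) for _ in range(n_tasks))
        self.shared = TextCNN(d_embed, d_feat)
        self.heads = nn.ModuleList(nn.Linear(2 * d_feat, 2) for _ in range(n_tasks))

    def forward(self, token_ids, task: int):
        e = self.embed(token_ids)
        feats = torch.cat([self.private[task](e), self.shared(e)], dim=1)
        return self.heads[task](feats)     # steganographic vs. non-steganographic

model = LSMTLSketch()
logits = model(torch.randint(0, 20000, (4, 50)), task=1)
print(logits.shape)
```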
Funding: Our work was supported by the research project of Yunnan University (Grant No. 2021Y274) and the Natural Science Foundation of China (Grant No. 61862064).
Abstract: Online advertising has gained much attention on various platforms as a hugely lucrative market. When promoting content and advertisements in real life, the acquisition of user target actions is usually a multi-step process, such as impression→click→conversion, i.e., the process from the delivery of the recommended item to the user's click and then to the final conversion. Due to data sparsity or sample selection bias, it is difficult for the trained model to achieve the business goal of the target campaign. Multi-task learning, a classical solution to this problem, aims to generalize better on the original task given several related tasks by exploiting the knowledge between tasks that share the same feature and label space. Adaptively learned task relations make full use of the correlation between tasks and bring better performance. We train a general model capable of capturing the relationships between various tasks on all existing active tasks from a meta-learning perspective. In addition, this paper proposes a Multi-task Attention Network (MAN) to identify commonalities and differences between tasks in the feature space. Model performance is further improved by explicitly learning the stacking of task relationships in the label space. To illustrate the effectiveness of our method, experiments are conducted on the Alibaba Click and Conversion Prediction (Ali-CCP) dataset. Experimental results show that the method outperforms state-of-the-art multi-task learning methods.
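As a loose illustration only (closer to a mixture-of-experts layout than to the paper's MAN), the sketch below lets each task attend over shared expert representations before its own prediction tower; all dimensions, the number of experts, and the sigmoid click/conversion outputs are assumed.

```python
import torch
import torch.nn as nn

class TaskAttentionSketch(nn.Module):
    """Shared expert representations combined per task via learned attention weights,
    with one tower per task (e.g., click and conversion prediction)."""
    def __init__(self, d_in=32, n_experts=4, d_expert=16, n_tasks=2):
        super().__init__()
        self.experts = nn.ModuleList(nn.Sequential(nn.Linear(d_in, d_expert), nn.ReLU())
                                     for _ in range(n_experts))
        self.attn = nn.ModuleList(nn.Linear(d_in, n_experts) for _ in range(n_tasks))
        self.towers = nn.ModuleList(nn.Linear(d_expert, 1) for _ in range(n_tasks))

    def forward(self, x):
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)    # (B, E, D)
        outputs = []
        for attn, tower in zip(self.attn, self.towers):
            w = torch.softmax(attn(x), dim=1).unsqueeze(-1)              # (B, E, 1)
            task_repr = (w * expert_out).sum(dim=1)                      # attention-weighted mix
            outputs.append(torch.sigmoid(tower(task_repr)))              # pCTR / pCVR
        return outputs

model = TaskAttentionSketch()
p_click, p_conv = model(torch.randn(8, 32))
print(p_click.shape, p_conv.shape)
```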
Funding: Financial support from the National Natural Science Foundation of China (22278070, 21978047, 21776046).
Abstract: The high-throughput prediction of the thermodynamic phase behavior of active pharmaceutical ingredients (APIs) with pharmaceutically relevant excipients remains a major scientific challenge in the screening of pharmaceutical formulations. In this work, a machine-learning model was developed that efficiently predicts the solubility of APIs in polymers by learning the phase-equilibrium principle and using a few molecular descriptors. Under the few-shot learning framework, thermodynamic theory (perturbed-chain statistical associating fluid theory) was used for data augmentation, and computational chemistry was applied for screening the molecular descriptors. The results showed that the developed machine-learning model can predict the API-polymer phase diagram accurately, broaden the solubility data of APIs in polymers, and successfully reproduce the relationship between API solubility and the interaction mechanisms between API and polymer, providing efficient guidance for the development of pharmaceutical formulations.
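The few-shot-plus-thermodynamic-augmentation idea can be illustrated by mixing a small "experimental" set with theory-generated pseudo-labels and fitting a descriptor-based regressor; the descriptors, the synthetic data, the sample weighting, and the gradient-boosting model are placeholders (the paper uses PC-SAFT-based augmentation, not this toy generator).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# A handful of "experimental" API-in-polymer solubility points (descriptors are placeholders,
# e.g. molar mass, logP, melting temperature, a polymer feature).
X_exp = rng.normal(size=(12, 4))
y_exp = X_exp @ np.array([0.5, -0.2, 0.8, 0.1]) + 0.05 * rng.normal(size=12)

# Pseudo-labels generated by a thermodynamic model (stand-in for PC-SAFT calculations).
X_theory = rng.normal(size=(200, 4))
y_theory = X_theory @ np.array([0.5, -0.2, 0.8, 0.1])

# Train on the augmented set; weight the experimental points more heavily.
X = np.vstack([X_exp, X_theory])
y = np.concatenate([y_exp, y_theory])
weights = np.concatenate([np.full(len(y_exp), 5.0), np.ones(len(y_theory))])
model = GradientBoostingRegressor(random_state=0).fit(X, y, sample_weight=weights)
print(model.predict(X_exp[:3]))
```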
Funding: This work was jointly supported by the Special Fund for Transformation and Upgrade of Jiangsu Industry and Information Industry-Key Core Technologies (Equipment) Key Industrialization Projects in 2022 (No. CMHI-2022-RDG-004): "Key Technology Research for Development of Intelligent Wind Power Operation and Maintenance Mothership in Deep Sea".
Abstract: Under the influence of air humidity, dust, aerosols, etc., haze in real scenes is unevenly distributed, which degrades image quality and contrast and makes it difficult for a general-purpose detection network to detect targets in the image. Thus, a dual subnet based on multi-task collaborative training (DSMCT) is proposed in this paper. Firstly, in the training phase, the Gated Context Aggregation Network (GCANet) is used as the supervisory network of YOLOX to promote the extraction of clean information in foggy scenes; in the test phase, only the YOLOX branch needs to be activated, which preserves the detection speed of the model. Secondly, a deformable convolution module is used to improve GCANet and enhance the model's ability to capture the details of non-homogeneous fog. Finally, the Coordinate Attention mechanism is introduced into the Vision Transformer and the backbone network of YOLOX is redesigned, enhancing the network's ability to extract deep-level features. Experimental results on the artificial fog dataset FOG_VOC and the real fog dataset RTTS show that the mAP of DSMCT reaches 86.56% and 62.39%, respectively, which is 2.27% and 4.41% higher than the current most advanced detection model. The DSMCT network is highly practical and effective for target detection in real foggy scenes.
Abstract: With the focus now placed on the learner, more attention is given to his learning style, multiple intelligences, and the development of learning strategies that enable him to make sense of and use the target language appropriately in varied contexts and for different purposes. To attain this, the teacher is tasked with designing, monitoring, and processing language learning activities for students to carry out, so that in the process they learn by doing and by reflecting on the learning process they went through as they interacted socially with each other. This paper describes a task named "The Fishbowl Technique", found to be effective in large ESL classes at the secondary level in the Philippines.
Funding: Supported by the National Natural Science Foundation of China (Nos. 1187050492, 12005303, and 12175170).
Abstract: The global nuclear mass based on the macroscopic-microscopic model was studied by applying a newly designed multi-task learning artificial neural network (MTL-ANN). First, the reported nuclear binding energies of 2095 nuclei (Z ≥ 8, N ≥ 8) released in the latest Atomic Mass Evaluation (AME2020) and the deviations between the liquid drop model (LDM) fit and the AME2020 data for each nucleus were obtained. To compensate for these deviations and investigate the physics possibly ignored by the LDM, the MTL-ANN method was introduced into the model. Compared with the single-task learning (STL) method, this new network can simultaneously learn multiple nuclear properties, such as the binding energies and the single-neutron and single-proton separation energies. Moreover, it is highly effective in reducing the risk of overfitting and achieving better predictions. Consequently, good predictions can be obtained with this nuclear mass model for the training, validation, and testing datasets. In detail, the global root mean square (RMS) deviation of the binding energy is effectively reduced from approximately 2.4 MeV for the LDM to 0.2 MeV, and the RMS of S_n and S_p also reaches approximately 0.2 MeV. Moreover, compared with STL, a 3-9% improvement in the binding energy and a 20-30% improvement in S_n and S_p are achieved for the training and validation sets; for the testing sets, the reduction in deviations can even reach 30-40%, which clearly illustrates the advantage of the current MTL approach.
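A minimal sketch of a multi-task network that maps (Z, N) to residual corrections for several nuclear properties on top of an LDM baseline; the hidden sizes, activation, and the zero targets used here are placeholders, not the paper's MTL-ANN configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MassResidualMTL(nn.Module):
    """Shared hidden layers mapping (Z, N) to residual corrections for several
    nuclear properties (binding energy, S_n, S_p) at once."""
    def __init__(self, d_hidden=64):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(2, d_hidden), nn.Tanh(),
                                    nn.Linear(d_hidden, d_hidden), nn.Tanh())
        self.heads = nn.ModuleDict({
            "binding_energy": nn.Linear(d_hidden, 1),
            "S_n": nn.Linear(d_hidden, 1),
            "S_p": nn.Linear(d_hidden, 1),
        })

    def forward(self, zn):                        # zn: (batch, 2) with columns (Z, N)
        h = self.shared(zn)
        return {name: head(h).squeeze(-1) for name, head in self.heads.items()}

model = MassResidualMTL()
zn = torch.tensor([[8.0, 8.0], [50.0, 70.0], [82.0, 126.0]])
preds = model(zn)                                 # residuals to add to the LDM values
loss = sum(F.mse_loss(p, torch.zeros_like(p)) for p in preds.values())
loss.backward()
```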
Funding: Project (2012B091100444) supported by the Production, Education and Research Cooperative Program of Guangdong Province and Ministry of Education, China; Project (2013ZM0091) supported by the Fundamental Research Funds for the Central Universities of China.
Abstract: To cope with the task scheduling problem under multi-task and transportation considerations in large-scale service-oriented manufacturing systems (SOMS), a service allocation optimization mathematical model was established, and a hybrid discrete particle swarm optimization-genetic algorithm (HDPSOGA) was proposed. In SOMS, each resource involved in the whole life cycle of a product, whether provided by a piece of software or a hardware device, is encapsulated as a service. Therefore, transportation during the production of a task should be taken into account, because the selected hard services may be provided by various providers in different areas. In the service allocation optimization model, multi-task and transportation requirements were considered simultaneously. In the proposed HDPSOGA algorithm, an integer coding method is applied to establish the mapping between the particle location matrix and the service allocation scheme. The position updating process is performed according to the cognition part, the social part, and the previous velocity and position, while introducing the crossover and mutation ideas of the genetic algorithm to fit the discrete space. Finally, simulation experiments were carried out to compare the method with two previous algorithms. The results indicate the effectiveness and efficiency of the proposed hybrid algorithm.
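The discrete position update mixing PSO's cognition and social parts with GA-style mutation can be sketched on an integer-coded assignment vector; the probabilities, the toy assignment, and the function `hybrid_update` are illustrative, not the paper's exact operators.

```python
import random

def hybrid_update(particle, personal_best, global_best, p_cognitive=0.3,
                  p_social=0.3, p_mutate=0.05, n_services=5):
    """Discrete, integer-coded position update: each gene (task -> service index) is
    inherited, crossed over toward the personal/global best (cognition and social parts),
    or randomly mutated (GA-style diversification). Probabilities are illustrative."""
    new = []
    for gene, pb, gb in zip(particle, personal_best, global_best):
        r = random.random()
        if r < p_cognitive:
            new.append(pb)                              # move toward personal best
        elif r < p_cognitive + p_social:
            new.append(gb)                              # move toward global best
        elif r < p_cognitive + p_social + p_mutate:
            new.append(random.randrange(n_services))    # mutation
        else:
            new.append(gene)                            # keep previous position
    return new

random.seed(0)
particle = [0, 2, 1, 4, 3]        # service index assigned to each of 5 tasks
pbest = [0, 1, 1, 4, 2]
gbest = [3, 1, 1, 0, 2]
print(hybrid_update(particle, pbest, gbest))
```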