The manufacturing of nanomaterials by the electrospinning process requires accurate and meticulous inspection of related scanning electron microscope (SEM) images of the electrospun nanofiber, to ensure that no structural defects are produced. The presence of anomalies prevents practical application of the electrospun nanofibrous material in nanotechnology. Hence, the automatic monitoring and quality control of nanomaterials is a relevant challenge in the context of Industry 4.0. In this paper, a novel automatic classification system for homogeneous (anomaly-free) and non-homogeneous (with defects) nanofibers is proposed. The inspection procedure aims at avoiding direct processing of the redundant full SEM image. Specifically, the image to be analyzed is first partitioned into subimages (nanopatches) that are then used as input to a hybrid unsupervised and supervised machine learning system. In the first step, an autoencoder (AE) is trained with unsupervised learning to generate a code representing the input image with a vector of relevant features. Next, a multilayer perceptron (MLP), trained with supervised learning, uses the extracted features to classify non-homogeneous nanofiber (NH-NF) and homogeneous nanofiber (H-NF) patches. The resulting novel AE-MLP system is shown to outperform other standard machine learning models and other recent state-of-the-art techniques, reporting an accuracy rate of up to 92.5%. In addition, the proposed approach leads to model complexity reduction with respect to other deep learning strategies such as convolutional neural networks (CNN). The encouraging performance achieved in this benchmark study can stimulate the application of the proposed scheme in other challenging industrial manufacturing tasks.
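To make the two-stage pipeline concrete, here is a minimal PyTorch sketch of the AE-MLP idea: an autoencoder first learns patch codes from unlabeled data, then an MLP classifies the frozen codes as H-NF or NH-NF. The patch size (64x64), layer widths, and training settings are illustrative assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, patch_dim=64 * 64, code_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, 512), nn.ReLU(),
            nn.Linear(512, code_dim))
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 512), nn.ReLU(),
            nn.Linear(512, patch_dim))

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

# Step 1: unsupervised training of the AE on unlabeled nanopatches.
ae = Autoencoder()
ae_opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
patches = torch.rand(256, 64 * 64)            # stand-in for real nanopatches
for _ in range(10):
    recon, _ = ae(patches)
    loss = nn.functional.mse_loss(recon, patches)
    ae_opt.zero_grad(); loss.backward(); ae_opt.step()

# Step 2: supervised MLP on the frozen AE codes (0 = H-NF, 1 = NH-NF).
labels = torch.randint(0, 2, (256,))          # stand-in for patch labels
with torch.no_grad():
    codes = ae.encoder(patches)
mlp = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
mlp_opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
for _ in range(10):
    ce = nn.functional.cross_entropy(mlp(codes), labels)
    mlp_opt.zero_grad(); ce.backward(); mlp_opt.step()
```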
Software-Defined Networking (SDN) is now considered one of the most promising future networking technologies, and this trend can be clearly verified by observing the activities of both industry and academia. In industry, we can easily notice that many major…
Workflow scheduling is a key issue and remains a challenging problem in cloud computing. Faced with the large number of virtual machine (VM) types offered by cloud providers, cloud users need to choose the most appropriate VM type for each task. Multiple task scheduling sequences exist in a workflow application, and different task scheduling sequences have a significant impact on scheduling performance. It is not easy to determine the most appropriate set of VM types for tasks and the best task scheduling sequence. Besides, the idle time slots on VM instances should be fully used to increase resource utilization and save the execution cost of a workflow. This paper considers these three aspects simultaneously and proposes a cloud workflow scheduling approach that combines particle swarm optimization (PSO) and idle time slot-aware rules to minimize the execution cost of a workflow application under a deadline constraint. A new particle encoding is devised to represent the VM type required by each task and the scheduling sequence of tasks. An idle time slot-aware decoding procedure is proposed to decode a particle into a scheduling solution. To handle tasks' invalid priorities caused by the randomness of PSO, a repair method is used to repair those priorities and produce valid task scheduling sequences. The proposed approach is compared with state-of-the-art cloud workflow scheduling algorithms. Experiments show that the proposed approach outperforms the comparative algorithms in terms of both execution cost and the success rate in meeting the deadline.
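The particle encoding and the priority-repair step can be sketched as a small Python toy, assuming a five-task DAG of precedences. The repair-and-decode rule used here (greedily scheduling the ready task with the highest priority) is one reasonable reading of the repair method, not the paper's exact procedure, and the idle time slot-aware placement is omitted.

```python
import random

N_TASKS, N_VM_TYPES = 5, 3
# precedence[i] lists the tasks that must finish before task i starts
precedence = {0: [], 1: [0], 2: [0], 3: [1, 2], 4: [3]}

def random_particle():
    """One particle: a VM-type index and a priority value per task."""
    vm_types = [random.randrange(N_VM_TYPES) for _ in range(N_TASKS)]
    priorities = [random.random() for _ in range(N_TASKS)]
    return vm_types, priorities

def repair_and_decode(vm_types, priorities):
    """Turn raw priorities into a precedence-valid sequence: repeatedly
    pick the ready task (all predecessors scheduled) with highest priority."""
    scheduled, sequence = set(), []
    while len(sequence) < N_TASKS:
        ready = [t for t in range(N_TASKS)
                 if t not in scheduled
                 and all(p in scheduled for p in precedence[t])]
        nxt = max(ready, key=lambda t: priorities[t])
        sequence.append(nxt)
        scheduled.add(nxt)
    return [(task, vm_types[task]) for task in sequence]

print(repair_and_decode(*random_particle()))  # [(task, vm_type), ...]
```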
Code defects can lead to software vulnerability and even produce vulnerability risks. Existing research shows that code detection technology based on text analysis can, to some extent, judge whether object-oriented code files are defective. However, these detection techniques are mainly based on text features and have weak detection capabilities across programs. Compared with the uncertainty of code and text caused by developers' personalization, a programming language has a stricter logical specification, which reflects the rules and requirements of the language itself and the developer's underlying way of thinking. This article replaces text analysis with programming logic modeling, breaking through the limitation of code text analysis that relies solely on the probability of sentence/word occurrence in the code. It proposes an object-oriented programming logic construction method based on method constraint relationships, selects features through hypothesis testing, and constructs a support vector machine classifier to detect defective class files, reducing the impact of personalized programming on the detection method. In the experiments, some representative Android applications were selected to test and compare the proposed method. In terms of the accuracy of code defect detection, through cross-validation, both the proposed method and the existing leading methods reach an average of more than 90%. In terms of cross-program detection, the method proposed in this paper is superior to the other two leading methods in accuracy, recall, and F1 score.
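A hedged scikit-learn sketch of such a detection pipeline follows: univariate hypothesis-test feature selection feeding an SVM, evaluated by cross-validation. The synthetic arrays are stand-ins for real method-constraint features, and the concrete test (`f_classif`) and kernel are assumptions rather than the paper's exact choices.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))     # method-constraint feature vectors
y = rng.integers(0, 2, size=200)   # 1 = defective class file

clf = make_pipeline(
    SelectKBest(f_classif, k=10),  # hypothesis-test feature selection
    StandardScaler(),
    SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
```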
In this paper, we are interested in answering the following research question: "Is it possible to form effective groups in virtual communities by exploiting trust information without significant overhead, similarly to real user communities?" In order to answer this question, instead of adopting the widely used approach of exploiting the opinions provided by all the users of the community (called global reputation), we propose to use a particular form of reputation, called local reputation. We also propose a group formation algorithm that implements the proposed procedure to form effective groups in virtual communities. Another interesting question is how to measure the effectiveness of groups in virtual communities; to this aim, we introduce an index that measures the effectiveness of the group formation. We tested our algorithm in experimental trials on real data from the real-world EPINIONS and CIAO communities, showing the significant advantages of our procedure w.r.t. another prominent approach based on traditional global reputation.
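The local vs. global reputation distinction can be illustrated with a small Python sketch: global reputation aggregates ratings from the whole community, while local reputation aggregates only ratings coming from a user's own trust neighborhood. The averaging rule and the toy data are assumptions for illustration, not the paper's exact model.

```python
ratings = {  # rater -> {ratee: score in [0, 1]}
    "alice": {"carol": 0.9, "dave": 0.4},
    "bob":   {"carol": 0.2, "dave": 0.8},
    "erin":  {"carol": 0.95},
}
trusted_by = {"alice": {"erin"}}  # alice's trust neighborhood

def global_reputation(ratee):
    """Average over every rating the whole community gave to `ratee`."""
    scores = [r[ratee] for r in ratings.values() if ratee in r]
    return sum(scores) / len(scores)

def local_reputation(user, ratee):
    """Average only over ratings from `user`'s trusted neighbors."""
    raters = trusted_by.get(user, set())
    scores = [ratings[r][ratee] for r in raters
              if ratee in ratings.get(r, {})]
    return sum(scores) / len(scores) if scores else None

print(global_reputation("carol"))          # whole-community opinion
print(local_reputation("alice", "carol"))  # opinion of alice's trusted peers
```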
This study proposes a lightweight but high-performance convolutional network for accurately classifying five upper limb movements of the arm, involving forearm flexion and rotation, arm extension, lumbar touch, and a no-reaction state, aiming to monitor patients' rehabilitation progress and assist therapists in improving patient compliance with treatment. To achieve this goal, a lightweight convolutional neural network, TMCA-Net (Time Multiscale Channel Attention Convolutional Neural Network), is designed. It combines an attention mechanism, uses a multi-branched convolution structure to automatically extract feature information at different scales from sensor data, and filters feature information based on the attention mechanism. In particular, channel-separated convolution is used to replace traditional convolution. This reduces the computational complexity of the model and decouples the convolution operation in the time dimension from the cross-channel feature interaction, which helps the targeted optimization of feature extraction. TMCA-Net shows excellent performance on the upper limb rehabilitation gesture data, achieving 99.11% accuracy and a 99.16% F1-score for the classification and recognition of the five gestures, whereas the compared CNN and LSTM networks achieve 65.62% and 89.98% accuracy on the same task. In addition, on the UCI smartphone public dataset, with one tenth of the network parameters of current advanced models, a recognition accuracy of 95.21% is achieved, which further demonstrates the lightweight and high-performance characteristics of the model. The clinical significance of this study is to accurately monitor patients' upper limb rehabilitation gestures with an affordable intelligent model as auxiliary support for therapists' decision-making.
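The channel-separated convolution mentioned above corresponds to the familiar depthwise-separable pattern. The PyTorch sketch below, under assumed shapes (6-channel sensor windows of 128 samples), shows how the time-dimension convolution is decoupled from the cross-channel mixing; the channel counts are illustrative, not TMCA-Net's actual configuration.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size):
        super().__init__()
        # depthwise: one filter per channel, convolving over time only
        self.depthwise = nn.Conv1d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # pointwise: 1x1 convolution mixing information across channels
        self.pointwise = nn.Conv1d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

x = torch.randn(8, 6, 128)   # batch of 6-channel IMU windows, 128 samples
conv = DepthwiseSeparableConv1d(6, 32, kernel_size=5)
print(conv(x).shape)          # torch.Size([8, 32, 128])
```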
As Artificial Intelligence (AI) tools become essential across industries, distinguishing AI-generated from human-authored text is increasingly challenging. This study assesses the coherence of AI-generated titles and corresponding abstracts in anticipation of rising AI-assisted document production. Our main goal is to examine the correlation between original and AI-generated titles, emphasizing semantic depth and similarity measures, particularly in the context of Large Language Models (LLMs). We argue that LLMs have transformed research focus, dissemination, and citation patterns across five selected knowledge areas: Business Administration and Management (BAM), Computer Science and Information Technology (CS), Engineering and Material Science (EMS), Medicine and Healthcare (MH), and Psychology and Behavioral Sciences (PBS). We collected 15,000 titles and abstracts, narrowing the selection to 2,000 through a rigorous multi-stage screening process adhering to our study's criteria. Results show that there is insufficient evidence to suggest that LLMs outperform human authors in article title generation, or that articles from the LLM era demonstrate a marked difference in semantic richness and readability compared to those from the pre-LLM era. Instead, the study asserts that an LLM is a valuable tool that can assist researchers in generating titles. With an LLM's assistance, the researcher can ensure that the title reflects the finalized abstract and core research themes, potentially increasing the impact, accessibility, and readability of the academic work.
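One lightweight way to score the similarity between an original and an AI-generated title, in the spirit of the study's similarity measures, is shown below. TF-IDF cosine similarity is a simplifying stand-in for the deeper semantic measures the study uses, and the two titles are invented examples.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

original = "Influence maximization in competitive social networks"
generated = "Maximizing influence under competing information in social networks"

# Fit a shared vocabulary, then compare the two titles in TF-IDF space.
vec = TfidfVectorizer().fit([original, generated])
sim = cosine_similarity(vec.transform([original]),
                        vec.transform([generated]))
print(f"title similarity: {sim[0, 0]:.3f}")
```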
Learning the representations of nodes in a network can benefit various analysis tasks such as node classification, link prediction, clustering, and anomaly detection. Such a representation learning problem is referred to as network embedding, and it has attracted significant attention in recent years. In this article, we briefly review the existing network embedding methods under two taxonomies. The technical taxonomy focuses on the specific techniques used and divides the existing network embedding methods into two stages, i.e., context construction and objective design. The non-technical taxonomy focuses on the problem setting and categorizes existing work based on whether it preserves special network properties, considers special network types, or incorporates additional inputs. Finally, we summarize the main findings based on the two taxonomies, analyze their usefulness, and discuss future directions in this area.
Complex networks are widely used to represent an abundance of real-world relations ranging from social networks to brain networks. Inferring missing links or predicting future ones based on the currently observed network is known as the link prediction task. Recent network embedding based link prediction algorithms have demonstrated ground-breaking performance on link prediction accuracy. Those algorithms usually apply node attributes as the initial feature input to accelerate convergence during training, but they do not take full advantage of node feature information. In this paper, besides applying feature attributes as the initial input, we make better use of node attribute information by building attribute networks and plugging them into some typical link prediction algorithms; we name this algorithm Attributive Graph Enhanced Embedding (AGEE). AGEE is able to automatically learn the weighting trade-off between the structure and the attribute networks. Numerical experiments show that AGEE can improve link prediction accuracy by around 3% compared with SEAL, Variational Graph AutoEncoder (VGAE), and node2vec.
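As a toy illustration of the structure/attribute trade-off AGEE is described as learning, the sketch below blends a structural link score with an attribute-based score through a single weight chosen by validation search. The scores and the search procedure are stand-ins, not AGEE's actual learning mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)
struct_score = rng.random(100)   # e.g., from node2vec / SEAL embeddings
attr_score = rng.random(100)     # similarity of node attribute vectors
# synthetic "ground truth" links, correlated with both score sources
labels = (0.7 * struct_score + 0.3 * attr_score
          + 0.1 * rng.random(100)) > 0.6

def accuracy(w):
    """Predict a link when the w-weighted blend of scores exceeds 0.5."""
    pred = (w * struct_score + (1 - w) * attr_score) > 0.5
    return (pred == labels).mean()

best_w = max(np.linspace(0, 1, 21), key=accuracy)
print(best_w, accuracy(best_w))   # learned-by-search trade-off weight
```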
There are many studies on the flexible job shop scheduling problem with fuzzy processing times and on deteriorating scheduling, but most scholars neglect the connection between them: the purpose of both models is to simulate a more realistic factory environment. From this perspective, the solutions can be more precise and practical if both issues are considered simultaneously. Therefore, the deterioration effect is treated as a part of the fuzzy job shop scheduling problem in this paper, which means the linear increase of a certain processing time is transformed into an internal linear shift of a triangular fuzzy processing time. The other main contributions are as follows. A new algorithm called the reinforcement learning based biased bi-population evolutionary algorithm (RB2EA) is proposed, which utilizes the Q-learning algorithm to adjust the sizes of the two populations and the interaction frequency according to population quality. A local enhancement method which combines multiple local search strategies is presented. An interaction mechanism is designed to promote the convergence of the bi-population. Extensive experiments are designed to evaluate the efficacy of RB2EA, and the conclusion can be drawn that RB2EA is able to solve the energy-efficient fuzzy flexible job shop scheduling problem with deteriorating jobs (EFFJSPD) efficiently.
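The internal linear shift of a triangular fuzzy processing time can be sketched directly. In the Python toy below, the deterioration rate and the (a + 2b + c) / 4 defuzzification are assumed conventions for illustration, not necessarily those of the paper.

```python
from dataclasses import dataclass

@dataclass
class TriangularFuzzyTime:
    a: float  # optimistic duration
    b: float  # most likely duration
    c: float  # pessimistic duration

    def deteriorated(self, start_time: float, rate: float = 0.05):
        """Linear deterioration as an internal shift: a later start
        moves all three vertices of the triangle upward together."""
        shift = rate * start_time
        return TriangularFuzzyTime(self.a + shift,
                                   self.b + shift,
                                   self.c + shift)

    def defuzzify(self) -> float:
        """Common (a + 2b + c) / 4 crisp estimate of a triangular number."""
        return (self.a + 2 * self.b + self.c) / 4

p = TriangularFuzzyTime(3.0, 4.0, 6.0)
print(p.deteriorated(start_time=10).defuzzify())
```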
Online social networks are increasingly connecting people around the world. Influence maximization, which identifies influential users during information dissemination, is a key area of research in online social networks. Most existing influence maximization methods only consider transmission through a single channel, but real-world networks mostly include multiple channels of information transmission with competitive relationships. The problem of influence maximization in such a competitive environment involves selecting the seed node set for certain competitive information so that it can avoid the influence of other information and ultimately affect the largest set of nodes in the network. In this paper, the influence of nodes is calculated according to a local community discovery algorithm, which is based on community dispersion and the characteristics of dynamic community structure. Furthermore, taking two different competitive information dissemination cases as an example, a solution is designed for self-interested information based on the assumption that the seed node set of the competing information is known, and a novel influence maximization algorithm of node avoidance based on user interest is proposed. Experiments conducted on a real-world Twitter dataset demonstrate the efficiency of our proposed algorithm in terms of accuracy and time against notable influence maximization algorithms.
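A toy sketch of competition-aware seeding in this spirit: greedily pick seeds for our information while penalizing nodes adjacent to the competitor's known seeds. The degree-based influence estimate is a stand-in for the paper's community-based calculation, and the graph and penalty weight are illustrative.

```python
import networkx as nx

G = nx.karate_club_graph()
competitor_seeds = {0, 33}   # assumed-known seeds of competing information

def influence_estimate(node):
    """Stand-in score: degree, discounted for proximity to competitors."""
    penalty = sum(1 for c in competitor_seeds if G.has_edge(node, c))
    return G.degree[node] - 2 * penalty

# Avoid competitor seeds outright, then take the top-scoring candidates.
candidates = set(G.nodes) - competitor_seeds
seeds = sorted(candidates, key=influence_estimate, reverse=True)[:3]
print(seeds)
```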
Dialog State Tracking (DST) aims to extract the current state from the conversation and plays an important role in dialog systems. Existing methods usually predict the value of each slot independently and do not consider the correlations among slots, which exacerbates the data sparsity problem because of the increased number of candidate values. In this paper, we propose a multi-domain DST model that integrates slot-relevant information. In particular, certain connections may exist among slots in different domains, and their corresponding values can be obtained through explicit or implicit reasoning. Therefore, we use a graph adjacency matrix to determine the correlation between slots, so that the slots can incorporate more slot-value transfer information. Experimental results show that our approach performs well on the Multi-domain Wizard-Of-Oz (MultiWOZ) 2.0 and MultiWOZ 2.1 datasets, demonstrating the effectiveness and necessity of incorporating slot-relevant information.
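The role of the slot adjacency matrix can be sketched in a few lines: a row-normalized adjacency matrix propagates each slot's representation to its correlated slots, so related slots share evidence. The slot names, the toy matrix, and the single propagation step are illustrative assumptions, not the model's actual architecture.

```python
import numpy as np

slots = ["hotel-area", "restaurant-area", "hotel-pricerange"]
# adjacency: hotel-area and restaurant-area are correlated (often equal)
A = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1]], dtype=float)
A = A / A.sum(axis=1, keepdims=True)      # row-normalize the graph

slot_repr = np.random.default_rng(0).normal(size=(3, 8))  # per-slot vectors
mixed = A @ slot_repr  # each slot absorbs information from related slots
print(mixed.shape)      # (3, 8): same slots, correlation-aware features
```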
Due to the tremendous volume of data generated by urban surveillance systems, big-data-oriented, low-complexity automatic background subtraction techniques are in great demand. In this paper, we propose a novel automatic background subtraction algorithm for urban surveillance systems in which the computer automatically renews the background image with the current frame when no object is detected. This method is both simple and robust with respect to changes in lighting conditions.
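A minimal sketch of the renewal rule described above, assuming simple frame differencing: when the fraction of changed pixels falls below a small threshold, no object is deemed present and the current frame becomes the new background. The threshold values are illustrative assumptions.

```python
import numpy as np

def update_background(background, frame, diff_threshold=25, area_ratio=0.01):
    """Return the background; renew it only when the frame looks empty."""
    moving = np.abs(frame.astype(int) - background.astype(int)) > diff_threshold
    if moving.mean() < area_ratio:   # no object detected
        return frame.copy()          # renew background to track lighting
    return background                # keep the old background

bg = np.zeros((240, 320), dtype=np.uint8)
frame = np.full((240, 320), 5, dtype=np.uint8)   # slight lighting change
bg = update_background(bg, frame)                # background is renewed
print(bg.mean())
```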