Multimodal medical image fusion can help physicians provide more accurate treatment plans for patients, as unimodal images provide limited valid information. To address the insufficient ability of traditional medical image fusion solutions to protect image details and significant information, a new multimodal medical image fusion method (NSST-PAPCNNLatLRR) is proposed in this paper. First, the high- and low-frequency sub-band coefficients are obtained by decomposing the source image using NSST. Then, the latent low-rank representation algorithm is used to process the low-frequency sub-band coefficients; an improved PAPCNN algorithm is also proposed for the fusion of high-frequency sub-band coefficients. The improved PAPCNN model builds on automatic parameter setting, with an optimized configuration of the time decay factor αe. The experimental results show that, in comparison with five mainstream fusion algorithms, the new algorithm significantly improves the visual effect over the comparison algorithms, enhances the ability to characterize important information in images, and further improves the protection of detail information; the new algorithm ranks first in at least four of six objective indexes.
Medical image fusion has been developed as an efficient assistive technology in various clinical applications such as medical diagnosis and treatment planning. Aiming at the problem of insufficient protection of image contour and detail information by traditional image fusion methods, a new multimodal medical image fusion method is proposed. This method first uses the non-subsampled shearlet transform to decompose the source image into high- and low-frequency sub-band coefficients, then uses the latent low-rank representation algorithm to fuse the low-frequency sub-band coefficients, and applies the improved PAPCNN algorithm to fuse the high-frequency sub-band coefficients. Finally, building on the automatic setting of parameters, the time decay factor αe is configured by an optimization method. The experimental results show that the proposed method solves the problems of difficult parameter setting and insufficient detail protection in traditional PCNN-based fusion, while achieving clear improvements in visual quality and objective evaluation indicators.
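For concreteness, the sketch below shows a simplified parameter-adaptive PCNN firing loop of the kind used here to fuse high-frequency sub-bands: each coefficient drives a neuron, and the coefficient whose neuron fires more often wins. The constants (including the time decay factor αe) and the 3×3 linking weights are illustrative assumptions, not the paper's adaptive settings.

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_fire_counts(S, iterations=110, alpha_f=0.1, alpha_e=0.4,
                     beta=0.3, V_L=1.0, V_E=20.0):
    """Simplified PA-PCNN: returns per-pixel firing counts for stimulus S
    (e.g., normalized magnitudes of high-frequency sub-band coefficients)."""
    W = np.array([[0.5, 1.0, 0.5],   # linking weights (a common choice)
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])
    U = np.zeros_like(S)             # internal activity
    Y = np.zeros_like(S)             # firing output
    E = np.ones_like(S)              # dynamic threshold
    T = np.zeros_like(S)             # accumulated firing counts
    for _ in range(iterations):
        L = V_L * convolve2d(Y, W, mode="same")      # linking input
        U = np.exp(-alpha_f) * U + S * (1.0 + beta * L)
        Y = (U > E).astype(float)
        E = np.exp(-alpha_e) * E + V_E * Y           # alpha_e controls decay
        T += Y
    return T

def fuse_highfreq(C1, C2):
    """Fusion rule: keep the coefficient whose neuron fired more often."""
    T1 = pcnn_fire_counts(np.abs(C1))
    T2 = pcnn_fire_counts(np.abs(C2))
    return np.where(T1 >= T2, C1, C2)
```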
Deep learning has been a catalyst for a transformative revolution in machine learning and computer vision in the past decade. Within these research domains, methods grounded in deep learning have exhibited exceptional performance across a spectrum of tasks. The success of deep learning methods can be attributed to their capability to derive potent representations from data, integral for a myriad of downstream applications. These representations encapsulate the intrinsic structure, features, or latent variables characterising the underlying statistics of visual data. Despite these achievements, the challenge persists in effectively conducting representation learning of visual data with deep models, particularly when confronted with vast and noisy datasets. This special issue is a dedicated platform for researchers worldwide to disseminate their latest, high-quality articles, aiming to enhance readers' comprehension of the principles, limitations, and diverse applications of representation learning in computer vision.
Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models the knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. First, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events at the next step. On multiple real-world datasets such as Freebase13 (FB13), Freebase15k (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and NELL-995, the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
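The "independent recurrent neural network" used for auto-regressive modeling refers to the IndRNN family, in which the recurrent weight is element-wise rather than a dense matrix, so each hidden unit evolves independently. A minimal sketch (sizes and initialization are illustrative, not the paper's configuration):

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    """One IndRNN step: u * h_prev is element-wise, so each hidden unit
    carries its own independent recurrence (unlike a vanilla RNN)."""
    return np.maximum(0.0, x_t @ W + u * h_prev + b)   # ReLU activation

rng = np.random.default_rng(0)
d_in, d_hid, T = 16, 32, 10
W = rng.normal(0.0, 0.1, (d_in, d_hid))
u = rng.uniform(-1.0, 1.0, d_hid)     # per-unit recurrent weights
b = np.zeros(d_hid)
h = np.zeros(d_hid)
for _ in range(T):                    # auto-regressive encoding of a sequence
    h = indrnn_step(rng.normal(size=d_in), h, W, u, b)
```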
User representation learning is crucial for capturing different user preferences, but it is also critically challenging because user intentions are latent and dispersed in complex and diverse patterns of user-generated data, and thus cannot be measured directly. Text-based data models can learn user representations by mining latent semantics, which is beneficial to enhancing the semantic function of user representations. However, these techniques only extract common features from historical records and cannot represent changes in user intentions, whereas sequential features can express interests and intentions that change over time. Yet sequential recommendation results based on item-level user representations lack interpretability of the preference factors. To address these issues, we propose in this paper a novel model with Dual-Layer User Representation, named DLUR, where the user's intention is learned from two different layer representations. Specifically, the latent semantic layer adds a Transformer-based interactive layer to extract keywords and key sentences from the text, which serve as a basis for interpretation. The sequence layer uses the Transformer model to encode the user's preference intention and clarify changes in that intention. This dual-layer user model is therefore more comprehensive than a single text or sequence model and can effectively improve recommendation performance. Our extensive experiments on five benchmark datasets demonstrate DLUR's superior performance over state-of-the-art recommendation models. In addition, DLUR's ability to explain recommendation results is demonstrated through specific cases.
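A compact PyTorch sketch of the dual-layer idea follows: one Transformer encoder over text tokens for the latent semantic layer, another over the interaction sequence for the sequence layer, with the two pooled vectors fused into a single user representation. The sizes, mean pooling, and fusion-by-concatenation are illustrative assumptions, not DLUR's exact architecture.

```python
import torch
import torch.nn as nn

class DualLayerUser(nn.Module):
    """Toy dual-layer user encoder: semantic layer + sequence layer."""
    def __init__(self, vocab=10000, n_items=5000, d=64):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)
        self.item = nn.Embedding(n_items, d)
        make_enc = lambda: nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
            num_layers=2)
        self.semantic, self.sequence = make_enc(), make_enc()
        self.out = nn.Linear(2 * d, d)

    def forward(self, text_ids, item_ids):
        s = self.semantic(self.tok(text_ids)).mean(dim=1)    # semantic layer
        q = self.sequence(self.item(item_ids)).mean(dim=1)   # sequence layer
        return self.out(torch.cat([s, q], dim=-1))           # fused user vector

user = DualLayerUser()
vec = user(torch.randint(0, 10000, (2, 20)),   # 2 users x 20 text tokens
           torch.randint(0, 5000, (2, 15)))    # 2 users x 15 interacted items
```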
Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Therefore, ensuring the security and reliability of these compilers is of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by feeding random test cases into the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to use test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the test capability for the target compiler by increasing the overall predicted coverage. Nevertheless, these methods can only forecast the compiler's code coverage at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases by their code representations and select the highest-scoring test case from each cluster as a high-quality test case. This reduction in redundant test cases saves time. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities and that C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared to the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
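The selection step, cluster test cases by code representation and keep the highest bug-likelihood case per cluster, is easy to sketch; the embeddings and scores below stand in for the outputs of the two downstream models.

```python
import numpy as np
from sklearn.cluster import KMeans

def prioritize(embeddings, bug_scores, n_clusters=50):
    """embeddings: (N, d) code representations; bug_scores: (N,) predicted
    bug-triggering likelihood. Returns representative indices, best first."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    reps = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        reps.append(members[np.argmax(bug_scores[members])])  # best per cluster
    return sorted(reps, key=lambda i: -bug_scores[i])

rng = np.random.default_rng(1)
order = prioritize(rng.normal(size=(500, 32)), rng.random(500))
```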
Sparse representation is an effective data classification algorithm that depends on known training samples to categorise a test sample. It has been widely used in various image classification tasks. Sparseness in sparse representation means that only a few instances selected from all training samples can effectively convey the essential class-specific information of the test sample, which is very important for classification. For deformable images such as human faces, pixels at the same location in different images of the same subject usually have different intensities. Therefore, extracting features and correctly classifying such deformable objects is very hard. Moreover, lighting, attitude, and occlusion cause further difficulty. Considering the problems and challenges listed above, a novel image representation and classification algorithm is proposed. First, the authors' algorithm generates virtual samples by a non-linear variation method. This method can effectively extract the low-frequency information of space-domain features of the original image, which is very useful for representing deformable objects. The combination of the original and virtual samples is more beneficial for improving the classification performance and robustness of the algorithm. The authors' algorithm then calculates the expression coefficients of the original and virtual samples separately using the sparse representation principle and obtains the final score by a designed efficient score fusion scheme. The weighting coefficients in the score fusion scheme are set entirely automatically. Finally, the algorithm classifies the samples based on the final scores. The experimental results show that this method performs better classification than conventional sparse representation algorithms.
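For reference, a minimal sketch of the sparse-representation scoring step: code the test sample over the column-stacked training samples, then score each class by its reconstruction residual. The virtual-sample generation and automatic score-fusion weights described above are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_scores(D, labels, y, k=10):
    """D: (d, N) training samples as columns; labels: (N,); y: (d,) test
    sample. Returns per-class residuals (lower = better class match)."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    coef = omp.fit(D, y).coef_
    scores = {}
    for c in np.unique(labels):
        part = np.where(labels == c, coef, 0.0)    # keep class-c coefficients
        scores[c] = np.linalg.norm(y - D @ part)   # class-wise residual
    return scores                                  # predict argmin residual
```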
Prior studies have demonstrated that deep learning-based approaches can enhance the performance of source code vulnerability detection by training neural networks to learn vulnerability patterns in code representations. However, due to limitations in code representation and neural network design, the validity and practicality of such models still need improvement. Additionally, because of differences between programming languages, most methods lack cross-language detection generality. To address these issues, in this paper, we analyze the shortcomings of previous code representations and neural networks. We propose a novel hierarchical code representation that combines Concrete Syntax Trees (CST) with Program Dependence Graphs (PDG). Furthermore, we introduce a Tree-Graph-Gated-Attention (TGGA) network based on gated recurrent units and attention mechanisms to build a Hierarchical Code Representation learning-based Vulnerability Detection (HCRVD) system. This system enables cross-language vulnerability detection at the function level. The experiments show that HCRVD surpasses many competitors in vulnerability detection capability. It benefits from the hierarchical code representation learning method and outperforms the baseline in cross-language vulnerability detection by 9.772% and 11.819% on the C/C++ and Java datasets, respectively. Moreover, HCRVD has a certain ability to detect vulnerabilities in unknown programming languages and is useful in real open-source projects. HCRVD shows good validity, generality, and practicality.
Recent research advances in implicit neural representation have shown that a wide range of video data distributions can be fitted by sharing model weights in Neural Representation for Videos (NeRV). While explicit methods exist for accurately embedding ownership or copyright information in video data, the nascent NeRV framework has yet to address this issue comprehensively. In response, this paper introduces MarkINeRV, a scheme designed to embed watermarking information into video frames using an invertible neural network watermarking approach to protect the copyright of NeRV. It models the embedding and extraction of watermarks as a pair of inverse processes of a reversible network and employs the same network for both, with the information flow simply running in opposite directions. Additionally, a video frame quality enhancement module is incorporated to mitigate watermarking information losses in the rendering process and the possibility of malicious attacks during transmission, ensuring the accurate extraction of watermarking information through the invertible network's inverse process. This paper evaluates the accuracy, robustness, and invisibility of MarkINeRV on multiple video datasets. The results demonstrate its efficacy in extracting watermarking information for copyright protection of NeRV. MarkINeRV represents a pioneering investigation into copyright issues surrounding NeRV.
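The core idea, embedding and extraction as the forward and inverse passes of one reversible network, can be shown with a single additive coupling layer; this is a simplified stand-in for the paper's invertible watermarking network, not its actual design.

```python
import numpy as np

def f(x):                        # any transform works here; invertibility of
    return np.tanh(x)            # the coupling does not depend on f itself

def embed(frame, watermark):     # forward pass: hide the watermark
    return frame, watermark + f(frame)

def extract(stego_a, stego_b):   # inverse pass: same network, reversed flow
    return stego_b - f(stego_a)

frame, mark = np.random.rand(8, 8), np.random.rand(8, 8)
a, b = embed(frame, mark)
assert np.allclose(extract(a, b), mark)   # exact recovery by construction
```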
The systematic method for constructing Lewis representations is a method for representing chemical bonds between atoms in a molecule. It uses symbols to represent the valence electrons of the atoms involved in the bond. Applying a number of rules in a defined order, it is often better suited to complicated cases than the Lewis representation of atoms. This method allows us to determine the formal charge and oxidation number of each atom in the structure more efficiently than other methods.
This study introduces a pre-orthogonal adaptive Fourier decomposition (POAFD) to obtain approximations and numerical solutions to the fractional Laplacian initial value problem and the extension problem of Caffarelli and Silvestre (generalized Poisson equation). As a first step, the method expands the initial data function into a sparse series of the fundamental solutions with fast convergence, and, as a second step, makes use of the semigroup or the reproducing kernel property of each of the expanding entries. Experiments show the effectiveness and efficiency of the proposed series solutions.
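In spirit, the method greedily expands a target over a parameterized dictionary, pre-orthogonalizing each candidate against the already-selected span before measuring its energy. A discretized NumPy sketch of that selection loop (a generic stand-in, not the paper's reproducing-kernel construction):

```python
import numpy as np

def poafd_like(f, D, n_terms=5):
    """Greedy expansion of vector f over unit-norm dictionary columns D,
    with Gram-Schmidt pre-orthogonalization of each candidate."""
    basis, residual = [], f.astype(float).copy()
    for _ in range(n_terms):
        best, best_gain = None, -1.0
        for k in range(D.shape[1]):
            e = D[:, k].copy()
            for b in basis:                    # orthogonalize the candidate
                e -= (b @ e) * b
            nrm = np.linalg.norm(e)
            if nrm < 1e-10:
                continue                       # candidate already in the span
            e /= nrm
            gain = abs(e @ residual)           # maximal selection principle
            if gain > best_gain:
                best, best_gain = e, gain
        basis.append(best)
        residual -= (best @ residual) * best
    return basis, residual

rng = np.random.default_rng(2)
D = rng.normal(size=(64, 200))
D /= np.linalg.norm(D, axis=0)
basis, r = poafd_like(rng.normal(size=64), D)
```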
Based on Yan Fu’s translation norms of “faithfulness, expressiveness, and elegance” and Liu Miqing’s concept of aesthetic representation in translation, the present study employed a combined method of qualitative and quantitative analysis to investigate the linguistic styles employed by Zhu Ziqing in his renowned prose Beiying. Then, using relevant corpora and self-designed Python software, we investigated whether Zhang Peiji, as a translator, successfully reproduced the simple, emotional, and realistic linguistic characteristics of Zhu Ziqing’s prose from the perspectives of “faithfulness, expressiveness, and elegance.” The findings indicate that, by employing a dynamic imitative translation approach, Zhang Peiji successfully enhanced the linguistic aesthetic qualities of the source text, striving to reflect the distinctive linguistic style of Zhu Ziqing.
This paper addresses the complex and challenging problem of disturbance localization in the current power system operation environment by proposing a disturbance localization method for power systems based on group sparse representation and the entropy weight method. Three different electrical quantities are selected as observations in the compressed sensing algorithm. The entropy weight method is employed to calculate the weights of the different observations based on their relative disturbance levels. Subsequently, by leveraging the topological information of the power system and pre-designing an overcomplete dictionary of disturbances based on the corresponding system parameter variations caused by disturbances, an improved Joint Generalized Orthogonal Matching Pursuit (J-GOMP) algorithm is utilized for reconstruction. The reconstructed sparse vectors are divided into three parts. If at least two parts have consistent node identifiers, the node is identified as the disturbance node. If the node identifiers in all three parts are inconsistent, further analysis is conducted considering the weights to determine the disturbance node. Simulation results based on the IEEE 39-bus system model demonstrate that the proposed method, utilizing electrical quantity information from only 8 measurement points, effectively locates disturbance positions and is applicable to various disturbance types with strong noise resistance.
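The entropy weight step is standard and compact: indicators whose observations are more dispersed (lower entropy) receive larger weights. A NumPy sketch under the usual formulation:

```python
import numpy as np

def entropy_weights(obs):
    """obs: (n_samples, m_indicators), non-negative. Returns m weights
    summing to 1; lower column entropy => higher weight."""
    P = obs / obs.sum(axis=0, keepdims=True)     # column-wise proportions
    P = np.where(P > 0, P, 1.0)                  # convention: 0 * log 0 = 0
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(obs))
    d = 1.0 - e                                  # degree of divergence
    return d / d.sum()

# e.g., 30 snapshots of the 3 selected electrical quantities
w = entropy_weights(np.abs(np.random.randn(30, 3)))
```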
To conveniently calculate the Wigner function of the optical cumulant operator and its dissipation evolution in a thermal environment, in this paper, the thermo-entangled state representation is introduced to derive the general evolution formula of the Wigner function, and its relation to Weyl correspondence is also discussed. The method of integration within the ordered product of operators is essential to our discussion.
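For reference, the Wigner function of a density operator ρ in the coordinate representation (with ħ = 1, up to sign convention) is

$$
W(q,p)=\frac{1}{2\pi}\int_{-\infty}^{\infty}\Big\langle q+\tfrac{v}{2}\Big|\,\rho\,\Big|q-\tfrac{v}{2}\Big\rangle\,e^{-\mathrm{i}pv}\,\mathrm{d}v ,
$$

and the paper's contribution is evaluating this quantity, and its thermal-dissipation evolution, through the thermo-entangled state representation rather than from this integral directly.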
Due to the presence of a large amount of personal sensitive information in social networks, privacy preservation issues in social networks have attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm in the biological immune system, the negative representation of information exhibits features such as simplicity and efficiency, which makes it very suitable for preserving social network privacy. Therefore, we propose a method to preserve the topology privacy and node attribute privacy of attribute social networks, called AttNetNRI. Specifically, a negative survey-based method is developed to disturb the relationships between nodes in the social network so that the topology structure can be kept private. Moreover, a negative database-based method is proposed to hide node attributes, so that the privacy of node attributes can be preserved while supporting similarity estimation between different node attributes, which is crucial to the analysis of social networks. To evaluate the performance of AttNetNRI, empirical studies have been conducted on various attribute social networks and compared with several state-of-the-art methods tailored to preserving the privacy of social networks. The experimental results show the superiority of the developed method in preserving the privacy of attribute social networks and demonstrate the effectiveness of the topology-disturbing and attribute-hiding components.
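The negative-survey mechanism admits a very small sketch: each respondent reports one category they do not belong to, and the collector recovers approximate true counts from the negative counts via the classic estimator. AttNetNRI's graph- and attribute-specific machinery is beyond this illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
c, N = 4, 10000                          # categories, respondents
true = rng.integers(0, c, size=N)        # sensitive attribute values

# each respondent uniformly reports a category that is NOT theirs
neg = np.array([rng.choice([k for k in range(c) if k != t]) for t in true])

neg_counts = np.bincount(neg, minlength=c)
est = N - (c - 1) * neg_counts           # estimated true counts per category
print(np.bincount(true, minlength=c))    # ground truth (never transmitted)
print(est)                               # close for large N; clip negatives
```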
Background Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems and medical imaging. These applications require high spatial and perceptual quality of synthesised meshes. Despite their significance, these models have not been compared across different mesh representations or evaluated jointly with point-wise distance and perceptual metrics. Methods We compare the influence of different mesh representation features on the spatial and perceptual fidelity of meshes reconstructed by various deep 3DMMs. This paper proves the hypothesis that building deep 3DMMs from meshes with global representations leads to lower spatial reconstruction error, measured with L1 and L2 norm metrics, but underperforms on perceptual metrics. In contrast, using differential mesh representations, which describe differential surface properties, yields lower perceptual FMPD and DAME scores and higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from perceptual and spatial fidelity perspectives. Results The results presented in this paper provide guidance for selecting mesh representations to build deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs that improve either the perceptual or the spatial fidelity of existing methods.
Although the classical spectral representation method (SRM) has been widely used to generate spatially varying ground motions, challenges remain in efficiently simulating non-stationary stochastic vector processes in practice. The first problem is the inherent limitation and inflexibility of the deterministic time/frequency modulation function. Another difficulty is the estimation of the evolutionary power spectral density (EPSD) from only a few samples. To tackle these problems, the wavelet packet transform (WPT) algorithm is utilized to build a time-varying spectrum of the seed recording, which describes the energy distribution in the time-frequency domain. The time-varying spectrum is proven to preserve the time and frequency marginal properties, as the theoretical EPSD does for the stationary process. For the simulation of spatially varying ground motions, the auto-EPSD for all locations is directly estimated using the time-varying spectrum of the seed recording rather than by matching predefined EPSD models. The constructed spectral matrix is then incorporated in SRM to simulate spatially varying non-stationary ground motions using efficient Cholesky decomposition techniques. In addition to a good match with the target coherency model, two numerical examples indicate that the generated time histories retain the physical properties of the prescribed seed recording, including waveform, temporal/spectral non-stationarity, normalized energy buildup, and significant duration.
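As background, the classical stationary, single-variate SRM synthesizes a sample as a sum of cosines with random phases against a target power spectral density; the paper's method replaces the predefined EPSD with the wavelet-packet time-varying spectrum and extends to vector processes through Cholesky decomposition of the spectral matrix. A stationary NumPy sketch:

```python
import numpy as np

def srm_sample(S, omega, t):
    """Classical SRM: x(t) = sqrt(2) * sum_k sqrt(2*S(w_k)*dw)
    * cos(w_k*t + phi_k), with phi_k i.i.d. uniform on [0, 2*pi)."""
    dw = omega[1] - omega[0]
    phi = np.random.uniform(0.0, 2.0 * np.pi, size=len(omega))
    A = np.sqrt(2.0 * S * dw)
    return np.sqrt(2.0) * (A * np.cos(np.outer(t, omega) + phi)).sum(axis=1)

omega = np.linspace(0.1, 20.0, 400)     # rad/s
S = 1.0 / (1.0 + omega**2)              # illustrative one-sided target PSD
x = srm_sample(S, omega, np.linspace(0.0, 30.0, 3000))
```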
The research consistently highlights the gender disparity in cybersecurity leadership roles, necessitating targeted interventions. Biased recruitment practices, limited STEM education opportunities for girls, and workplace culture contribute to this gap. Proposed solutions include addressing biased recruitment through gender-neutral language and blind processes, promoting STEM education for girls to increase qualified female candidates, and fostering inclusive workplace cultures with mentorship and sponsorship programs. Gender parity is crucial for the industry’s success, as embracing diversity enables the cybersecurity sector to leverage various perspectives, drive innovation, and effectively combat cyber threats. Achieving this balance is not just about fairness but also a strategic imperative. By embracing concerted efforts towards gender parity, we can create a more resilient and impactful cybersecurity landscape, benefiting industry and society.
Classical localization methods use Cartesian or polar coordinates, which require a priori range information to determine whether to estimate position or to only find bearings. The modified polar representation (MPR) unifies near-field and far-field models, alleviating the thresholding effect. Current localization methods in MPR based on angle of arrival (AOA) and time difference of arrival (TDOA) measurements resort to semidefinite relaxation (SDR) and Gauss-Newton iteration, which are computationally complex and may diverge. This paper formulates a pseudo-linear equation between the measurements and the unknown MPR position, which leads to a closed-form solution for the hybrid TDOA-AOA localization problem, namely hybrid constrained optimization (HCO). HCO attains Cramér-Rao bound (CRB)-level accuracy for mild Gaussian noise. Compared with the existing closed-form solutions for the hybrid TDOA-AOA case, HCO provides performance comparable to the hybrid generalized trust region subproblem (HGTRS) solution and is better than the hybrid successive unconstrained minimization (HSUM) solution in the large-noise region. Its computational complexity is lower than that of HGTRS. Simulations validate that HCO achieves the CRB attained by the maximum likelihood estimator (MLE) when the noise is small, while the MLE deviates from the CRB earlier.
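The pseudo-linear trick is easy to show in the pure-AOA planar case: each bearing from a sensor gives one linear equation in the source position, so a closed-form least-squares estimate follows without iteration. This is a textbook illustration of the formulation, not the HCO estimator itself.

```python
import numpy as np

def aoa_pseudolinear(sensors, bearings):
    """sensors: (n, 2) positions; bearings: (n,) angles to the source.
    Each bearing th gives  [sin th, -cos th] @ u = [sin th, -cos th] @ s."""
    A = np.stack([np.sin(bearings), -np.cos(bearings)], axis=1)
    b = (A * sensors).sum(axis=1)
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(4)
u = np.array([50.0, 30.0])                       # true source position
S = rng.uniform(-100.0, 100.0, size=(6, 2))      # sensor positions
th = np.arctan2(u[1] - S[:, 1], u[0] - S[:, 0]) + rng.normal(0.0, 1e-3, 6)
print(aoa_pseudolinear(S, th))                   # close to (50, 30)
```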
With the increasing demand for electrical services, wind farm layout optimization has become one of the biggest challenges we face. Despite the promising performance of heuristic algorithms on the route network design problem, their expressive capability and search performance on multi-objective problems remain unexplored. In this paper, the wind farm layout optimization problem is defined. Then, a multi-objective algorithm based on a Graph Neural Network (GNN) and the Variable Neighborhood Search (VNS) algorithm is proposed. The GNN provides the base representations for the subsequent search algorithm, so that the expressiveness and search accuracy of the algorithm can be improved. The multi-objective VNS algorithm is constructed by combining VNS with a multi-objective optimization algorithm to handle multiple objectives. The proposed algorithm is applied to an 18-node simulation example to evaluate the feasibility and practicality of the developed optimization strategy. The experiment on the simulation example shows that the proposed algorithm yields a 6.1% reduction at the Point of Common Coupling (PCC) over the current state-of-the-art algorithm, meaning that the designed layout improves the quality of the power supply by 6.1% at the same cost. The ablation experiments show that the proposed algorithm improves power quality by more than 8.6% and 7.8% compared to the original VNS algorithm and the multi-objective VNS algorithm, respectively.
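The VNS half of the method follows the standard shake-and-improve template over nested neighbourhoods; below is a generic single-objective skeleton (the GNN-derived representations and the multi-objective handling are omitted).

```python
import random

def vns(x0, cost, neighborhoods, iters=200):
    """Basic VNS: shake in neighborhood k, locally descend, then move
    (and reset k) on improvement, or widen k otherwise. Each entry of
    `neighborhoods` maps a solution to a random neighbor, in increasing
    perturbation order."""
    x, fx = x0, cost(x0)
    for _ in range(iters):
        k = 0
        while k < len(neighborhoods):
            z = neighborhoods[k](x)            # shake
            fz = cost(z)
            for _ in range(50):                # simple local descent
                w = neighborhoods[0](z)
                fw = cost(w)
                if fw < fz:
                    z, fz = w, fw
            if fz < fx:
                x, fx, k = z, fz, 0            # move and restart
            else:
                k += 1                         # widen the neighborhood
    return x, fx

# toy usage: minimize a separable quadratic over integer vectors
cost = lambda v: sum((vi - 3) ** 2 for vi in v)
nbhds = [lambda v, s=s: [vi + random.randint(-s, s) for vi in v]
         for s in (1, 2, 4)]
best, fbest = vns([0] * 5, cost, nbhds)
```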