Deep learning has been a catalyst for a transformative revolution in machine learning and computer vision in the past decade. Within these research domains, methods grounded in deep learning have exhibited exceptional performance across a spectrum of tasks. The success of deep learning methods can be attributed to their capability to derive potent representations from data, integral for a myriad of downstream applications. These representations encapsulate the intrinsic structure, features, or latent variables characterising the underlying statistics of visual data. Despite these achievements, the challenge persists in effectively conducting representation learning of visual data with deep models, particularly when confronted with vast and noisy datasets. This special issue is a dedicated platform for researchers worldwide to disseminate their latest, high-quality articles, aiming to enhance readers' comprehension of the principles, limitations, and diverse applications of representation learning in computer vision.
Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models the knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. First, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events in the subsequent time step. On multiple real-world datasets such as Freebase13 (FB13), Freebase15k (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and Nell-995, the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
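For readers unfamiliar with the independent recurrent unit named above, the following minimal NumPy sketch combines the two ingredients the abstract mentions: a per-period graph convolution feeding an independent recurrent (element-wise, rather than fully connected, hidden-to-hidden) update. All sizes, the toy adjacency, and the random features are illustrative stand-ins, not the IndRT-GCNets architecture.

```python
import numpy as np

def graph_conv(A, X, W):
    """One graph-convolution step: aggregate neighbour features, then project.
    A: (n, n) normalised adjacency, X: (n, d_in), W: (d_in, d_out)."""
    return np.tanh(A @ X @ W)

def indrnn_step(x_t, h_prev, W_in, u, b):
    """Independent recurrent update: each hidden unit has its own scalar
    recurrent weight (u), so h_prev enters element-wise, not via a full matrix."""
    return np.tanh(x_t @ W_in + u * h_prev + b)

rng = np.random.default_rng(0)
n, d = 5, 8                      # 5 entities, 8-dim features
A = np.ones((n, n)) / n          # toy normalised adjacency for one period
W_gc = rng.normal(size=(d, d))
W_in = rng.normal(size=(d, d))
u, b = rng.uniform(-1, 1, d), np.zeros(d)

h = np.zeros((n, d))
for _ in range(4):               # four consecutive periods of the graph sequence
    X = rng.normal(size=(n, d))  # per-period entity features (stand-ins)
    h = indrnn_step(graph_conv(A, X, W_gc), h, W_in, u, b)
print(h.shape)                   # (5, 8): evolutionary entity representations
```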
This work uses a refined first-order shear theory to analyze the free vibration and transient responses of double-curved sandwich two-layer shells made of an auxetic honeycomb core and laminated three-phase polymer/GNP/fiber surfaces subjected to blast load. Each of the two layers that make up the double-curved shell structure consists of an auxetic honeycomb core and two laminated sheets of three-phase polymer/GNP/fiber. The exterior is supported by a Kerr elastic foundation with three characteristics. The key innovation of the proposed theory is that the transverse shear stresses are zero at the two free surfaces of each layer; in contrast to previous first-order shear deformation theories, no shear correction factor is required. Navier's exact solution is used to treat the double-curved shell problem with a fully simply supported boundary, while the finite element technique with an eight-node quadrilateral element is used to address the other boundary conditions. To ensure accuracy, the results are verified through a thorough comparison with credible published results, which the special cases of the problem model make possible. The study's findings may be used in the post-construction evaluation of military and civil structures for their ability to sustain explosive loads. In addition, they also provide an important basis for the calculation and design of shell structures made of smart materials subjected to shock waves or explosive loads.
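The abstract does not reproduce the kinematics, but the free-surface property it highlights is conventionally obtained with a transverse-shear shape function whose derivative vanishes at the layer surfaces. The following display is one standard, illustrative choice of that kind, not the paper's own field:

```latex
% Illustrative refined shear-deformation kinematics (not taken from the paper):
% in-plane displacement with a transverse-shear shape function f(z)
u(x,z) = u_0(x) - z\,\frac{dw}{dx} + f(z)\,\varphi(x),
\qquad f(z) = z\left(1 - \frac{4z^{2}}{3h^{2}}\right).
% The transverse shear strain is \gamma_{xz} = f'(z)\,\varphi(x); since
% f'(\pm h/2) = 0, the shear stress vanishes on both free surfaces of a layer
% of thickness h, so no shear correction factor is needed.
```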
User representation learning is crucial for capturing different user preferences, but it is also critically challenging because user intentions are latent and dispersed in complex and varied patterns of user-generated data, and thus cannot be measured directly. Text-based data models can learn user representations by mining latent semantics, which is beneficial to enhancing the semantic function of user representations. However, these techniques only extract common features from historical records and cannot represent changes in user intentions, whereas sequential features can express the user's interests and intentions as they change over time. Yet sequential recommendation results based on item-level user representations lack interpretability of the preference factors. To address these issues, we propose in this paper a novel model with Dual-Layer User Representation, named DLUR, where the user's intention is learned from two different layer representations. Specifically, the latent semantic layer adds an interactive layer based on the Transformer to extract keywords and key sentences in the text, which serve as a basis for interpretation. The sequence layer uses the Transformer model to encode the user's preference intention to clarify changes in the user's intention. This dual-layer user model is therefore more comprehensive than a single text or sequence model and can effectively improve the performance of recommendations. Our extensive experiments on five benchmark datasets demonstrate DLUR's superior performance over state-of-the-art recommendation models. In addition, DLUR's ability to explain recommendation results is also demonstrated through specific cases.
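As a rough illustration of the dual-layer idea (and only that; the layer sizes, vocabularies, mean-pooling, and concatenation fusion below are guesses, not the DLUR design), one can pair a semantic Transformer encoder over text tokens with a sequential Transformer encoder over interaction IDs:

```python
import torch
import torch.nn as nn

class DualLayerUser(nn.Module):
    """Minimal sketch of a dual-layer user representation."""
    def __init__(self, vocab=1000, n_items=500, d=32):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab, d)     # review/text tokens
        self.item_emb = nn.Embedding(n_items, d)  # interaction sequence
        def enc():
            return nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
                num_layers=1)
        self.semantic, self.sequence = enc(), enc()
        self.fuse = nn.Linear(2 * d, d)

    def forward(self, text_ids, item_ids):
        sem = self.semantic(self.tok_emb(text_ids)).mean(dim=1)   # latent semantics
        seq = self.sequence(self.item_emb(item_ids)).mean(dim=1)  # intention drift
        return self.fuse(torch.cat([sem, seq], dim=-1))           # user vector

model = DualLayerUser()
u = model(torch.randint(0, 1000, (2, 20)), torch.randint(0, 500, (2, 10)))
print(u.shape)  # torch.Size([2, 32])
```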
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variable Mask. However, approximating the sparse distribution of the real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal the correlation between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find mask combinations that can obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
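The mining step can be pictured with a toy sketch: collect the non-zero positions of the best particles' masks and keep position pairs that co-occur often. TELSO's actual procedure is more elaborate; the thresholds and data below are made up.

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(masks, scores, top_k=3, min_support=2):
    """Toy analogue of mining frequent non-zero-position itemsets from the
    best particles' binary masks (lower score = better objective value)."""
    best = sorted(zip(scores, masks))[:top_k]          # keep the best particles
    counts = Counter()
    for _, m in best:
        nz = [i for i, bit in enumerate(m) if bit]
        counts.update(combinations(nz, 2))             # co-occurring positions
    return [pair for pair, c in counts.items() if c >= min_support]

masks = [[1, 0, 1, 0, 1], [1, 0, 1, 0, 0], [0, 1, 1, 0, 1], [1, 1, 0, 0, 0]]
scores = [0.2, 0.3, 0.25, 0.9]                         # made-up objective values
print(frequent_pairs(masks, scores))                   # [(0, 2), (2, 4)]
```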
Effective small object detection is crucial in various applications including urban intelligent transportation and pedestrian detection. However, small objects are difficult to detect accurately because they contain less information. Many current methods, particularly those based on the Feature Pyramid Network (FPN), address this challenge by leveraging multi-scale feature fusion. However, existing FPN-based methods often suffer from inadequate feature fusion due to varying resolutions across different layers, leading to suboptimal small object detection. To address this problem, we propose the Two-layer Attention Feature Pyramid Network (TA-FPN), featuring two key modules: the Two-layer Attention Module (TAM) and the Small Object Detail Enhancement Module (SODEM). TAM uses attention to focus the network on the semantic information of the object and fuses it to the lower layers, so that each layer contains similar semantic information, alleviating the problem of small object information being submerged by the semantic gaps between layers. Meanwhile, SODEM is introduced to strengthen the local features of the object, suppress background noise, and enhance the detail information of small objects, fusing the enhanced features to the other feature layers so that each layer is rich in small object information, thereby improving small object detection accuracy. Our extensive experiments on challenging datasets such as Microsoft Common Objects in Context (MS COCO) and Pattern Analysis, Statistical Modelling and Computational Learning Visual Object Classes (PASCAL VOC) demonstrate the validity of the proposed method. Experimental results show a significant improvement in small object detection accuracy compared to state-of-the-art detectors.
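A minimal PyTorch sketch of attention-gated top-down fusion conveys the flavour of such modules; the channel count and the sigmoid gating form are illustrative assumptions, not the TAM/SODEM definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFuse(nn.Module):
    """Minimal sketch of attention-gated top-down FPN fusion."""
    def __init__(self, channels=64):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, high, low):
        # Upsample the semantically strong (low-resolution) map to the size
        # of the detail-rich (high-resolution) map, then gate and add.
        up = F.interpolate(low, size=high.shape[-2:], mode="nearest")
        attn = torch.sigmoid(self.gate(up))   # where the semantics say "look"
        return high * attn + up               # fused layer keeps both signals

fuse = AttentionFuse()
p3 = torch.randn(1, 64, 80, 80)   # high-resolution pyramid level
p5 = torch.randn(1, 64, 20, 20)   # low-resolution, semantically richer level
print(fuse(p3, p5).shape)         # torch.Size([1, 64, 80, 80])
```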
Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Therefore, ensuring the security and reliability of these compilers is of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by inputting random test cases into the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to utilize test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the test capability for the target compiler by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the code coverage of the compiler at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases according to their code representations and select the highest-scoring test case from each cluster as a high-quality test case. This reduction in redundant test cases leads to time savings. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities, and that C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) value by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared to the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
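The cluster-then-select step is easy to picture in isolation. The sketch below assumes the representation vectors and bug-trigger probabilities are already produced by the trained models and only shows the grouping and picking; all numbers are stand-ins.

```python
import numpy as np
from sklearn.cluster import KMeans

def prioritise(reprs, bug_probs, n_clusters=3):
    """Group test cases by their code representations and keep the most
    bug-prone case per cluster, ordered riskiest-first."""
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(reprs)
    keep = []
    for c in range(n_clusters):
        members = np.flatnonzero(labels == c)
        keep.append(members[np.argmax(bug_probs[members])])
    return sorted(keep, key=lambda i: -bug_probs[i])

rng = np.random.default_rng(0)
reprs = rng.normal(size=(30, 16))      # stand-in code-representation vectors
bug_probs = rng.uniform(size=30)       # stand-in bug-trigger probabilities
print(prioritise(reprs, bug_probs))    # three representative test-case indices
```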
Sparse representation is an effective data classification algorithm that depends on known training samples to categorise a test sample. It has been widely used in various image classification tasks. Sparseness in sparse representation means that only a few of the instances selected from all training samples can effectively convey the essential class-specific information of the test sample, which is very important for classification. For deformable images such as human faces, pixels at the same location in different images of the same subject usually have different intensities. Therefore, extracting features from and correctly classifying such deformable objects is very hard. Moreover, lighting, attitude, and occlusion cause further difficulty. Considering the problems and challenges listed above, a novel image representation and classification algorithm is proposed. First, the algorithm generates virtual samples by a non-linear variation method. This method can effectively extract the low-frequency information of the space-domain features of the original image, which is very useful for representing deformable objects. The combination of the original and virtual samples is beneficial to improving the classification performance and robustness of the algorithm. The algorithm then calculates the expression coefficients of the original and virtual samples separately using the sparse representation principle and obtains the final score by a designed efficient score fusion scheme, whose weighting coefficients are set entirely automatically. Finally, the algorithm classifies the samples based on the final scores. The experimental results show that the method performs better classification than conventional sparse representation algorithms.
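A compact sketch of the score-fusion idea follows, using an l2-regularised (collaborative-representation) surrogate for the l1 sparse coding step and a fixed fusion weight; the paper's actual coding, virtual-sample generation, and automatic weighting differ.

```python
import numpy as np

def class_residuals(train, labels, test, lam=0.01):
    """Represent the test sample over all training samples (an l2-regularised
    surrogate for l1 sparse coding) and score each class by its reconstruction
    residual: smaller residual = better class fit."""
    G = train.T @ train + lam * np.eye(train.shape[1])
    coef = np.linalg.solve(G, train.T @ test)
    return {c: np.linalg.norm(test - train[:, labels == c] @ coef[labels == c])
            for c in np.unique(labels)}

rng = np.random.default_rng(1)
train = rng.normal(size=(50, 8))            # 8 training samples as columns
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
virtual = np.tanh(train)                    # stand-in "virtual sample" variation
test = train[:, 1] + 0.1 * rng.normal(size=50)

r_orig = class_residuals(train, labels, test)
r_virt = class_residuals(virtual, labels, np.tanh(test))
w = 0.7                                     # fusion weight (set automatically in the paper)
fused = {c: w * r_orig[c] + (1 - w) * r_virt[c] for c in r_orig}
print(min(fused, key=fused.get))            # predicted class: 0
```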
Recent research advances in implicit neural representation have shown that a wide range of video data distributions can be fitted by sharing model weights in Neural Representation for Videos (NeRV). While explicit methods exist for accurately embedding ownership or copyright information in video data, the nascent NeRV framework has yet to address this issue comprehensively. In response, this paper introduces MarkINeRV, a scheme designed to embed watermarking information into video frames using an invertible neural network watermarking approach to protect the copyright of NeRV. It models the embedding and extraction of watermarks as a pair of inverse processes of a reversible network and employs the same network for both, with the information simply flowing in opposite directions. Additionally, a video frame quality enhancement module is incorporated to mitigate the loss of watermarking information in the rendering process and the possibility of malicious attacks during transmission, ensuring the accurate extraction of watermarking information through the invertible network's inverse process. This paper evaluates the accuracy, robustness, and invisibility of MarkINeRV on multiple video datasets. The results demonstrate its efficacy in extracting watermarking information for copyright protection of NeRV. MarkINeRV represents a pioneering investigation into copyright issues surrounding NeRV.
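The "same network, opposite information flow" property is exactly the invertibility of coupling layers. A toy additive coupling in NumPy makes the point; MarkINeRV's invertible network and conditioning are of course richer than this.

```python
import numpy as np

def f(x, W):                       # small shared sub-network (a stand-in)
    return np.tanh(x @ W)

def couple_forward(cover, mark, W):
    """Additive coupling: one pass embeds the watermark into the cover signal."""
    return cover, mark + f(cover, W)

def couple_inverse(cover, stego, W):
    """The same network, run with the information flow reversed, extracts it."""
    return stego - f(cover, W)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4))
cover, mark = rng.normal(size=4), rng.normal(size=4)
_, stego = couple_forward(cover, mark, W)
print(np.allclose(couple_inverse(cover, stego, W), mark))  # True: exact recovery
```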
Prior studies have demonstrated that deep learning-based approaches can enhance the performance of source code vulnerability detection by training neural networks to learn vulnerability patterns in code representations. However, due to limitations in code representation and neural network design, the validity and practicality of such models still need to be improved. Additionally, due to differences between programming languages, most methods lack cross-language detection generality. To address these issues, in this paper, we analyze the shortcomings of previous code representations and neural networks. We propose a novel hierarchical code representation that combines Concrete Syntax Trees (CST) with Program Dependence Graphs (PDG). Furthermore, we introduce a Tree-Graph-Gated-Attention (TGGA) network based on gated recurrent units and attention mechanisms to build a Hierarchical Code Representation learning-based Vulnerability Detection (HCRVD) system. This system enables cross-language vulnerability detection at the function level. The experiments show that HCRVD surpasses many competitors in vulnerability detection capability. It benefits from the hierarchical code representation learning method and outperforms the baseline in cross-language vulnerability detection by 9.772% and 11.819% on the C/C++ and Java datasets, respectively. Moreover, HCRVD has a certain ability to detect vulnerabilities in unknown programming languages and is useful in real open-source projects. HCRVD thus shows good validity, generality, and practicality.
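One building block such gated-attention readouts typically use can be sketched in a few lines; the dimensions and single-layer form below are illustrative, not the TGGA network.

```python
import torch
import torch.nn as nn

class GatedAttentionPool(nn.Module):
    """Sketch of a gated-attention readout over node embeddings
    (e.g. CST or PDG nodes) into one function-level vector."""
    def __init__(self, d=32):
        super().__init__()
        self.score = nn.Linear(d, 1)   # attention: which nodes matter
        self.gate = nn.Linear(d, d)    # gate: which features pass through

    def forward(self, nodes):          # nodes: (n_nodes, d)
        a = torch.softmax(self.score(nodes), dim=0)      # (n_nodes, 1)
        g = torch.sigmoid(self.gate(nodes))              # (n_nodes, d)
        return (a * g * nodes).sum(dim=0)                # function-level vector

pool = GatedAttentionPool()
print(pool(torch.randn(17, 32)).shape)  # torch.Size([32])
```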
The systematic method for constructing Lewis representations is a method for representing chemical bonds between atoms in a molecule. It uses symbols to represent the valence electrons of the atoms involved in the bond. Applying a number of rules in a defined order, it is often better suited to complicated cases than the Lewis representation of atoms, and it allows the formal charge and oxidation number of each atom in the structure to be determined more efficiently than other methods.
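The formal-charge rule the method relies on is simple enough to state as code (FC = V − N − B/2):

```python
def formal_charge(valence_electrons, lone_pair_electrons, bonding_electrons):
    """Standard formal-charge rule: FC = V - N - B/2, where V is the number of
    valence electrons of the free atom, N the non-bonding (lone-pair)
    electrons, and B the electrons shared in bonds."""
    return valence_electrons - lone_pair_electrons - bonding_electrons // 2

# Example: the nitrogen atom in the ammonium ion NH4+
# N has 5 valence electrons, no lone pairs, and 4 single bonds (8 electrons).
print(formal_charge(5, 0, 8))   # 1  -> the +1 charge sits formally on N
```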
This study introduces a pre-orthogonal adaptive Fourier decomposition (POAFD) to obtain approximations and numerical solutions to the fractional Laplacian initial value problem and the extension problem of Caffarelli and Silvestre (generalized Poisson equation). As a first step, the method expands the initial data function into a sparse series of the fundamental solutions with fast convergence, and, as a second step, makes use of the semigroup or the reproducing kernel property of each of the expanding entries. Experiments show the effectiveness and efficiency of the proposed series solutions.
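The greedy, residual-driven selection at the heart of adaptive decompositions can be imitated on a finite discrete dictionary; POAFD's pre-orthogonalised maximal selection over reproducing-kernel parameters is continuous and more subtle, so the sketch below only conveys the expand-and-reorthogonalise idea.

```python
import numpy as np

def greedy_expand(signal, dictionary, n_terms=5):
    """Toy greedy expansion: repeatedly pick the dictionary atom best matched
    to the residual, orthogonalise it against the kept atoms, and update."""
    residual, basis, coeffs = signal.copy(), [], []
    for _ in range(n_terms):
        atom = dictionary[np.argmax(np.abs(dictionary @ residual))].copy()
        for q in basis:                       # Gram-Schmidt against kept atoms
            atom -= (q @ atom) * q
        atom /= np.linalg.norm(atom)
        basis.append(atom)
        coeffs.append(atom @ signal)
        residual = signal - sum(c * q for c, q in zip(coeffs, basis))
    return coeffs, basis, np.linalg.norm(residual)

rng = np.random.default_rng(0)
D = rng.normal(size=(50, 200))
D /= np.linalg.norm(D, axis=1, keepdims=True)
x = 2 * D[3] - 1.5 * D[10] + 0.01 * rng.normal(size=200)
print(greedy_expand(x, D)[2])    # residual norm shrinks fast for sparse signals
```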
Based on Yan Fu's translation norms of "faithfulness, expressiveness, and elegance" and Liu Miqing's concept of aesthetic representation in translation, the present study employed a combined method of qualitative and quantitative analysis to investigate the linguistic styles employed by Zhu Ziqing in his renowned prose Beiying. Then, using relevant corpora and self-designed Python software, we investigated whether Zhang Peiji, as a translator, successfully reproduced the simple, emotional, and realistic linguistic characteristics of Zhu Ziqing's prose from the perspectives of "faithfulness, expressiveness, and elegance." The findings indicate that by employing a dynamic imitative translation approach, Zhang Peiji has successfully enhanced the linguistic aesthetic qualities of the source text, striving to reflect the distinctive linguistic style of Zhu Ziqing.
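The paper's own software is not described in detail; as an example of the kind of corpus metric such studies compute, here are two crude style proxies:

```python
import re

def style_profile(text):
    """Average sentence length and type-token ratio as rough proxies for a
    'simple, plain' style (examples only; not the paper's actual metrics)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {"avg_sentence_len": len(words) / len(sentences),
            "type_token_ratio": len(set(words)) / len(words)}

sample = ("I saw his back. Tears came to my eyes. "
          "I quickly wiped them away, afraid that he might see.")
print(style_profile(sample))
```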
This paper addresses the complex and challenging problem of disturbance localization in the current power system operation environment by proposing a disturbance localization method for power systems based on group sparse representation and the entropy weight method. Three different electrical quantities are selected as observations in the compressed sensing algorithm. The entropy weight method is employed to calculate the weights of the different observations based on their relative disturbance levels. Subsequently, by leveraging the topological information of the power system and pre-designing an overcomplete dictionary of disturbances based on the corresponding system parameter variations caused by disturbances, an improved Joint Generalized Orthogonal Matching Pursuit (J-GOMP) algorithm is utilized for reconstruction. The reconstructed sparse vectors are divided into three parts. If at least two parts have consistent node identifiers, the node is identified as the disturbance node. If the node identifiers in all three parts are inconsistent, further analysis is conducted considering the weights to determine the disturbance node. Simulation results based on the IEEE 39-bus system model demonstrate that the proposed method, utilizing electrical quantity information from only 8 measurement points, effectively locates disturbance positions, is applicable to various disturbance types, and offers strong noise resistance.
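The entropy weight method itself is standard and compact: indicators that vary more across observations carry more information and receive larger weights. A NumPy sketch (with made-up disturbance levels):

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method. X: (n_samples, m_indicators), non-negative.
    Columns whose values vary more across rows get larger weights."""
    P = X / X.sum(axis=0)                        # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)   # 0 * log 0 := 0
    e = -(P * logP).sum(axis=0) / np.log(X.shape[0])   # entropy per indicator
    d = 1 - e                                    # degree of diversification
    return d / d.sum()

# Three observation types measured at four nodes (made-up disturbance levels):
X = np.array([[0.9, 0.2, 0.5],
              [0.1, 0.2, 0.5],
              [0.1, 0.2, 0.5],
              [0.1, 0.2, 0.5]])
print(entropy_weights(X))   # the first, most discriminative quantity dominates
```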
Due to the presence of a large amount of personal sensitive information in social networks, privacy preservation issues in social networks have attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm in the biological immune system, the negative representation of information offers features such as simplicity and efficiency, which makes it very suitable for preserving social network privacy. We therefore propose a method to preserve the topology privacy and node attribute privacy of attribute social networks, called AttNetNRI. Specifically, a negative survey-based method is developed to disturb the relationships between nodes in the social network so that the topology structure can be kept private. Moreover, a negative database-based method is proposed to hide node attributes, so that the privacy of node attributes can be preserved while still supporting similarity estimation between different node attributes, which is crucial to the analysis of social networks. To evaluate the performance of AttNetNRI, empirical studies have been conducted on various attribute social networks, comparing it with several state-of-the-art methods tailored to preserving the privacy of social networks. The experimental results show the superiority of the developed method in preserving the privacy of attribute social networks and demonstrate the effectiveness of the topology-disturbing and attribute-hiding components.
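The negative-survey primitive behind the topology-disturbing part is worth seeing concretely: each participant reports a category that is not its true one, and the true distribution is still recoverable in aggregate. A minimal sketch (the AttNetNRI perturbation is adapted to network structure, which this toy omits):

```python
import numpy as np

def negative_survey(true_answers, n_categories, rng):
    """Each participant reports a category chosen uniformly at random from the
    categories that are NOT true for them: individual answers reveal nothing,
    while aggregates stay estimable."""
    out = np.empty_like(true_answers)
    for i, t in enumerate(true_answers):
        others = [c for c in range(n_categories) if c != t]
        out[i] = rng.choice(others)
    return out

def estimate_distribution(negatives, n_categories):
    """Unbiased reconstruction: n_true[c] = N - (K - 1) * n_negative[c]."""
    counts = np.bincount(negatives, minlength=n_categories)
    return len(negatives) - (n_categories - 1) * counts

rng = np.random.default_rng(0)
truth = rng.choice(3, size=9000, p=[0.6, 0.3, 0.1])   # hidden attribute values
neg = negative_survey(truth, 3, rng)
print(estimate_distribution(neg, 3) / 9000)            # ~ [0.6, 0.3, 0.1]
```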
To conveniently calculate the Wigner function of the optical cumulant operator and its dissipation evolution in a thermal environment, in this paper, the thermo-entangled state representation is introduced to derive the general evolution formula of the Wigner function, and its relation to Weyl correspondence is also discussed. The method of integration within the ordered product of operators is essential to our discussion.
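For orientation, dissipation formulas of the kind the paper derives take the form of a Gaussian smoothing of the initial Wigner function; a standard thermal-channel example (not necessarily the paper's exact general formula) reads:

```latex
% A standard thermal-dissipation evolution of this type (for orientation only):
% with decay rate \kappa and mean thermal photon number \bar{n}, writing
% T = e^{-2\kappa t},
W(\alpha,t) = \frac{2}{\pi(2\bar{n}+1)(1-T)}
\int d^{2}\beta\, \exp\!\left[-\frac{2\left|\alpha-\sqrt{T}\,\beta\right|^{2}}
{(2\bar{n}+1)(1-T)}\right] W(\beta,0).
% The initial Wigner function is smoothed by a Gaussian whose width grows as
% the state decays towards thermal equilibrium.
```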
Although the classical spectral representation method (SRM) has been widely used in the generation of spatially varying ground motions, there are still challenges in the efficient simulation of non-stationary stochastic vector processes in practice. The first problem is the inherent limitation and inflexibility of the deterministic time/frequency modulation function. Another difficulty is the estimation of the evolutionary power spectral density (EPSD) from a limited number of samples. To tackle these problems, the wavelet packet transform (WPT) algorithm is utilized to build a time-varying spectrum of the seed recording which describes the energy distribution in the time-frequency domain. The time-varying spectrum is proven to preserve the time and frequency marginal properties, as the theoretical EPSD does for the stationary process. For the simulation of spatially varying ground motions, the auto-EPSD for all locations is directly estimated using the time-varying spectrum of the seed recording rather than by matching predefined EPSD models. The constructed spectral matrix is then incorporated in SRM to simulate spatially varying non-stationary ground motions using efficient Cholesky decomposition techniques. In addition to a good match with the target coherency model, two numerical examples indicate that the generated time histories retain the physical properties of the prescribed seed recording, including waveform, temporal/spectral non-stationarity, normalized energy buildup, and significant duration.
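The SRM core that the estimated spectra feed into is the classical cosine-series synthesis with random phases. A single-process NumPy sketch with a made-up evolutionary spectrum:

```python
import numpy as np

def srm_nonstationary(S, omegas, times, rng):
    """Classical spectral representation method for one process:
    X(t) = sum_k sqrt(2 * S(w_k, t) * dw) * cos(w_k * t + phi_k),
    with i.i.d. uniform random phases. S is a time-varying (evolutionary)
    spectrum on a (freq, time) grid, e.g. one estimated by wavelet packets."""
    dw = omegas[1] - omegas[0]
    phi = rng.uniform(0, 2 * np.pi, size=len(omegas))
    amp = np.sqrt(2 * S * dw)                       # (n_freq, n_time)
    return (amp * np.cos(np.outer(omegas, times) + phi[:, None])).sum(axis=0)

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 2001)
w = np.linspace(0.5, 25, 120)
# Made-up evolutionary spectrum: energy concentrated early and at low frequency.
S = np.exp(-0.3 * t)[None, :] * np.exp(-((w[:, None] - 5) / 4) ** 2)
x = srm_nonstationary(S, w, t, rng)
print(x.shape, float(np.abs(x).max()))              # one non-stationary sample
```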
Background: Deep 3D morphable models (deep 3DMMs) play an essential role in computer vision. They are used in facial synthesis, compression, reconstruction and animation, avatar creation, virtual try-on, facial recognition systems, and medical imaging. These applications require high spatial and perceptual quality of the synthesised meshes. Despite their significance, these models have not been compared across different mesh representations and evaluated jointly with point-wise distance and perceptual metrics. Methods: We compare the influence of different mesh representation features on the spatial and perceptual fidelity of meshes reconstructed by various deep 3DMMs. The paper confirms the hypothesis that building deep 3DMMs from meshes in global representations leads to lower spatial reconstruction error measured with L1 and L2 norm metrics but underperforms on perceptual metrics; in contrast, using differential mesh representations, which describe differential surface properties, yields lower perceptual FMPD and DAME scores and higher spatial fidelity error. The influence of mesh feature normalisation and standardisation is also compared and analysed from the perceptual and spatial fidelity perspectives. Results: The results presented in this paper provide guidance in selecting mesh representations to build deep 3DMMs according to spatial and perceptual quality objectives, and propose combinations of mesh representations and deep 3DMMs which improve either the perceptual or the spatial fidelity of existing methods.
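The point-wise side of the evaluation is straightforward to reproduce; the sketch below computes per-vertex L1/L2 errors (the perceptual metrics named in the paper, FMPD and DAME, require surface differential quantities and are not reproduced here):

```python
import numpy as np

def mesh_fidelity(v_ref, v_rec):
    """Point-wise spatial fidelity of a reconstructed mesh against the
    ground truth: mean per-vertex L1 and L2 errors."""
    diff = v_rec - v_ref                       # (n_vertices, 3)
    return {"L1": np.abs(diff).sum(axis=1).mean(),
            "L2": np.linalg.norm(diff, axis=1).mean()}

rng = np.random.default_rng(0)
verts = rng.normal(size=(5023, 3))             # stand-in facial mesh vertices
recon = verts + 0.01 * rng.normal(size=verts.shape)
print(mesh_fidelity(verts, recon))
```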
The research consistently highlights the gender disparity in cybersecurity leadership roles, necessitating targeted interventions. Biased recruitment practices, limited STEM education opportunities for girls, and workplace culture contribute to this gap. Proposed solutions include addressing biased recruitment through gender-neutral language and blind processes, promoting STEM education for girls to increase qualified female candidates, and fostering inclusive workplace cultures with mentorship and sponsorship programs. Gender parity is crucial for the industry’s success, as embracing diversity enables the cybersecurity sector to leverage various perspectives, drive innovation, and effectively combat cyber threats. Achieving this balance is not just about fairness but also a strategic imperative. By embracing concerted efforts towards gender parity, we can create a more resilient and impactful cybersecurity landscape, benefiting industry and society.
Classical localization methods use Cartesian or polar coordinates, which require a priori range information to determine whether to estimate position or to only find bearings. The modified polar representation (MPR) unifies the near-field and far-field models, alleviating the thresholding effect. Current localization methods in MPR based on angle of arrival (AOA) and time difference of arrival (TDOA) measurements resort to semidefinite relaxation (SDR) and Gauss-Newton iteration, which are computationally complex and may diverge. This paper formulates a pseudo-linear equation between the measurements and the unknown MPR position, which leads to a closed-form solution for the hybrid TDOA-AOA localization problem, namely hybrid constrained optimization (HCO). HCO attains Cramér-Rao bound (CRB)-level accuracy for mild Gaussian noise. Compared with the existing closed-form solutions for the hybrid TDOA-AOA case, HCO provides performance comparable to the hybrid generalized trust region subproblem (HGTRS) solution and better than the hybrid successive unconstrained minimization (HSUM) solution in the large-noise region, with computational complexity lower than that of HGTRS. Simulations validate that HCO achieves the CRB when the noise is small, as the maximum likelihood estimator (MLE) does, but the MLE deviates from the CRB earlier.
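The pseudo-linear trick is easiest to see in the plain AOA, Cartesian case, where each bearing yields one linear equation in the source position; HCO builds an analogous constrained hybrid TDOA-AOA system in MPR, which this sketch does not attempt.

```python
import numpy as np

def pseudo_linear_aoa(sensors, bearings):
    """Pseudo-linear idea on the simplest (pure-AOA, 2-D Cartesian) case:
    each bearing theta_i from sensor s_i gives one linear equation
    x*sin(theta_i) - y*cos(theta_i) = s_ix*sin(theta_i) - s_iy*cos(theta_i),
    solved in closed form by least squares."""
    A = np.column_stack([np.sin(bearings), -np.cos(bearings)])
    b = np.sum(A * sensors, axis=1)
    return np.linalg.lstsq(A, b, rcond=None)[0]

src = np.array([40.0, 25.0])
sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 80.0], [60.0, 90.0]])
truth = np.arctan2(src[1] - sensors[:, 1], src[0] - sensors[:, 0])
bearings = truth + 0.002 * np.random.default_rng(0).normal(size=4)
print(pseudo_linear_aoa(sensors, bearings))   # ~ [40, 25]
```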