Deep learning has been a catalyst for a transformative revolution in machine learning and computer vision in the past decade. Within these research domains, methods grounded in deep learning have exhibited exceptional performance across a spectrum of tasks. The success of deep learning methods can be attributed to their capability to derive potent representations from data, integral for a myriad of downstream applications. These representations encapsulate the intrinsic structure, features, or latent variables characterising the underlying statistics of visual data. Despite these achievements, the challenge persists in effectively conducting representation learning of visual data with deep models, particularly when confronted with vast and noisy datasets. This special issue is a dedicated platform for researchers worldwide to disseminate their latest, high-quality articles, aiming to enhance readers' comprehension of the principles, limitations, and diverse applications of representation learning in computer vision.
Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models the knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. First, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events at the next time step. On multiple real-world datasets such as Freebase13 (FB13), Freebase15k (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and Nell-995, the results of multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
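The independent recurrent network mentioned above keeps a per-neuron recurrent weight, so each hidden unit evolves independently of the others. A minimal sketch of such an IndRNN-style recurrence (not the authors' exact implementation; the weights and sizes here are illustrative):

```python
def indrnn_step(x, h_prev, W, u, b):
    """One IndRNN step: h_t = relu(W x_t + u * h_{t-1} + b).

    Unlike a vanilla RNN, the recurrent weight u is a per-unit
    vector (elementwise product), not a full matrix, so each
    hidden unit has its own independent temporal dynamics.
    """
    n_hidden = len(u)
    h = []
    for i in range(n_hidden):
        pre = sum(W[i][j] * x[j] for j in range(len(x))) + u[i] * h_prev[i] + b[i]
        h.append(max(0.0, pre))  # ReLU non-linearity
    return h

# Toy run: 2 inputs, 2 hidden units, 3 time steps.
W = [[0.5, -0.2], [0.1, 0.3]]
u = [0.9, 0.5]          # per-unit recurrent weights
b = [0.0, 0.0]
h = [0.0, 0.0]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    h = indrnn_step(x, h, W, u, b)
```

Because the recurrence is elementwise, the per-unit weight u directly controls how much history each unit retains, which is what makes auto-regressive modeling over long event sequences tractable.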
Capturing elaborated flow structures and phenomena is required for well-solved numerical flows. The finite difference methods allow simple discretization of mesh and model equations. However, they need simpler meshes, e.g., rectangular ones. The inverse Lax-Wendroff (ILW) procedure can handle complex geometries on rectangular meshes. High-resolution and high-order methods can capture elaborated flow structures and phenomena. They also have strong mathematical and physical backgrounds, such as positivity-preserving, jump conditions, and wave propagation concepts. We perceive an effort toward direct numerical simulation, for instance, regarding weighted essentially non-oscillatory (WENO) schemes. Thus, we propose to solve a challenging engineering application without turbulence models. We aim to verify and validate recent high-resolution and high-order methods. To check the solver accuracy, we solved vortex and Couette flows. Then, we solved inviscid and viscous nozzle flows for a conical profile. We employed the finite difference method, positivity-preserving Lax-Friedrichs splitting, high-resolution viscous terms discretization, fifth-order multi-resolution WENO, ILW, and third-order strong stability preserving Runge-Kutta. We showed that the solver is high-order and captures elaborated flow structures and phenomena. One can see oblique shocks in both nozzle flows. In the viscous flow, we also captured a free-shock separation, recirculation, entrainment region, Mach disk, and the diamond-shaped pattern of nozzle flows.
Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Therefore, ensuring the security and reliability of these compilers is of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by feeding random test cases to the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to utilize test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize the test capability for the target compiler by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the code coverage of the compiler at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one for predicting the likelihood of triggering a bug and another for identifying code representations associated with bugs. Subsequently, we cluster the test cases according to their code representations and select the highest-scoring test case from each cluster as a high-quality test case. This reduction in redundant test cases leads to time savings. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities, and that C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared with the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
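The final prioritization step described above — cluster by code representation, then keep the top-scoring case per cluster — can be sketched as follows. The cluster labels and bug-likelihood scores are assumed to come from the upstream models; the data here is illustrative:

```python
def select_per_cluster(cases):
    """Keep the highest-scoring test case from each cluster.

    `cases` is a list of (case_id, cluster_label, score) triples,
    where the cluster label comes from clustering the learned code
    representations and the score is the predicted bug likelihood.
    """
    best = {}
    for case_id, cluster, score in cases:
        if cluster not in best or score > best[cluster][1]:
            best[cluster] = (case_id, score)
    # Prioritize the selected representatives by descending score.
    return [cid for cid, _ in sorted(best.values(), key=lambda t: -t[1])]

cases = [
    ("t1", 0, 0.91), ("t2", 0, 0.40),   # cluster 0
    ("t3", 1, 0.75), ("t4", 1, 0.80),   # cluster 1
    ("t5", 2, 0.10),                    # cluster 2
]
priority = select_per_cluster(cases)  # one representative per cluster
```

Cases within a cluster are assumed to exercise similar compiler behavior, so running only one representative per cluster is what yields the reported time savings.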
Sparse representation is an effective data classification algorithm that depends on the known training samples to categorise the test sample. It has been widely used in various image classification tasks. Sparseness in sparse representation means that only a few instances selected from all training samples can effectively convey the essential class-specific information of the test sample, which is very important for classification. For deformable images such as human faces, pixels at the same location of different images of the same subject usually have different intensities. Therefore, extracting features and correctly classifying such deformable objects is very hard. Moreover, the lighting, attitude and occlusion cause more difficulty. Considering the problems and challenges listed above, a novel image representation and classification algorithm is proposed. First, the authors' algorithm generates virtual samples by a non-linear variation method. This method can effectively extract the low-frequency information of space-domain features of the original image, which is very useful for representing deformable objects. The combination of the original and virtual samples is more beneficial for improving the classification performance and robustness of the algorithm. Thereby, the authors' algorithm calculates the expression coefficients of the original and virtual samples separately using the sparse representation principle and obtains the final score by a designed efficient score fusion scheme. The weighting coefficients in the score fusion scheme are set entirely automatically. Finally, the algorithm classifies the samples based on the final scores. The experimental results show that our method performs better classification than conventional sparse representation algorithms.
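The two-branch scoring idea above — one per-class residual from the original samples, one from the virtual samples, combined by a fusion weight — can be sketched generically. The residuals and the fixed weight w below are illustrative placeholders, not the paper's automatically determined values:

```python
def fuse_and_classify(d_orig, d_virt, w=0.6):
    """Fuse per-class distance scores from the original-sample branch
    (d_orig) and the virtual-sample branch (d_virt), then pick the
    class with the smallest fused score.

    Each input maps class label -> residual of the test sample from
    that class's sparse reconstruction; smaller is better.
    """
    fused = {c: w * d_orig[c] + (1.0 - w) * d_virt[c] for c in d_orig}
    return min(fused, key=fused.get), fused

# Illustrative residuals for three classes:
d_orig = {"A": 0.42, "B": 0.35, "C": 0.50}
d_virt = {"A": 0.30, "B": 0.44, "C": 0.55}
label, fused = fuse_and_classify(d_orig, d_virt, w=0.6)
```

Note how the virtual-sample branch can overturn the original branch's ranking: class B wins on d_orig alone, but the fused score favors class A.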
Prior studies have demonstrated that deep learning-based approaches can enhance the performance of source code vulnerability detection by training neural networks to learn vulnerability patterns in code representations. However, due to limitations in code representation and neural network design, the validity and practicality of the model still need to be improved. Additionally, due to differences in programming languages, most methods lack cross-language detection generality. To address these issues, in this paper, we analyze the shortcomings of previous code representations and neural networks. We propose a novel hierarchical code representation that combines Concrete Syntax Trees (CST) with Program Dependence Graphs (PDG). Furthermore, we introduce a Tree-Graph-Gated-Attention (TGGA) network based on gated recurrent units and attention mechanisms to build a Hierarchical Code Representation learning-based Vulnerability Detection (HCRVD) system. This system enables cross-language vulnerability detection at the function level. The experiments show that HCRVD surpasses many competitors in vulnerability detection capabilities. It benefits from the hierarchical code representation learning method, and outperforms the baseline in cross-language vulnerability detection by 9.772% and 11.819% on the C/C++ and Java datasets, respectively. Moreover, HCRVD has a certain ability to detect vulnerabilities in unknown programming languages and is useful in real open-source projects. HCRVD shows good validity, generality and practicality.
The systematic method for constructing Lewis representations is a method for representing chemical bonds between atoms in a molecule. It uses symbols to represent the valence electrons of the atoms involved in the bond. Using a number of rules in a defined order, it is often better suited to complicated cases than the Lewis representation of atoms. This method allows us to determine the formal charge and oxidation number of each atom in the structure more efficiently than other methods.
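The formal charge mentioned above follows the standard bookkeeping rule FC = V − N − B/2, where V is the number of valence electrons of the free atom, N the number of non-bonding electrons, and B the number of bonding (shared) electrons. A small illustration:

```python
def formal_charge(valence, nonbonding, bonding):
    """Formal charge = valence electrons of the free atom
    minus non-bonding electrons minus half the bonding electrons."""
    return valence - nonbonding - bonding // 2

# Nitrogen in the ammonium ion NH4+: 5 valence electrons,
# no lone pairs, four single bonds (8 bonding electrons).
fc_n = formal_charge(5, 0, 8)   # +1

# Each oxygen in carbon dioxide O=C=O: 6 valence electrons,
# two lone pairs (4 electrons), one double bond (4 electrons).
fc_o = formal_charge(6, 4, 4)   # 0
```

Summing the formal charges of all atoms in a structure must reproduce the overall charge of the species, which is a quick consistency check on a proposed Lewis structure.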
The near-seabed multichannel seismic exploration systems have yielded remarkable successes in marine geological disaster assessment, marine gas hydrate investigation, and deep-sea mineral exploration owing to their high vertical and horizontal resolution. However, the quality of deep-towed seismic imaging hinges on accurate source-receiver positioning information. In light of existing technical problems, we propose a novel array geometry inversion method tailored for high-resolution deep-towed multichannel seismic exploration systems. This method is independent of the attitude and depth sensors along a deep-towed seismic streamer, accounting for variations in seawater velocity and seabed slope angle. Our approach decomposes the towed line array into multiple line segments and characterizes its geometric shape using the line segment distance and pitch angle. Introducing optimization parameters for seawater velocity and seabed slope angle, we establish an objective function based on the model, yielding results that align with objective reality. Employing the particle swarm optimization algorithm enables synchronous acquisition of optimized inversion results for array geometry and seawater velocity. Experimental validation using theoretical models and practical data verifies that our approach effectively enhances source and receiver positioning inversion accuracy. The algorithm exhibits robust stability and reliability, addressing uncertainties in seismic traveltime picking and complex seabed topography conditions.
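Particle swarm optimization, used above to fit the array geometry and seawater velocity jointly, is a population-based search in which each particle is pulled toward its own best position and the swarm's best. A minimal generic sketch on a toy objective; the objective, swarm size, and coefficients are illustrative, not the paper's configuration:

```python
import random

def pso(objective, dim, n_particles=20, iters=60, seed=1):
    """Minimal particle swarm optimization: each particle tracks its
    personal best, the swarm tracks a global best, and velocities mix
    inertia with pulls toward both bests."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5   # inertia, cognitive, social weights
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy misfit: squared distance to the point (1, -2); in the paper the
# objective would be a traveltime misfit over geometry parameters.
best, best_val = pso(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2, dim=2)
```

Because PSO is derivative-free, the traveltime objective can remain a black box, which suits inversion problems with noisy picks.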
BACKGROUND Intracranial atherosclerosis, a leading cause of stroke, involves arterial plaque formation. This study explores the link between plaque remodelling patterns and diabetes using high-resolution vessel wall imaging (HR-VWI). AIM To investigate the factors of intracranial atherosclerotic remodelling patterns and the relationship between intracranial atherosclerotic remodelling and diabetes mellitus using HR-VWI. METHODS Ninety-four patients diagnosed with middle cerebral artery or basilar artery INTRODUCTION Intracranial atherosclerotic disease is one of the main causes of ischaemic stroke in the world, accounting for approximately 10% of transient ischaemic attacks and 30%-50% of ischaemic strokes [1]. It is the most common factor among Asian people [2]. The adaptive changes in the structure and function of blood vessels that can adapt to changes in the internal and external environment are called vascular remodelling, which is a common and important pathological mechanism in atherosclerotic diseases, and the remodelling mode of atherosclerotic plaques is closely related to the occurrence of stroke. Positive remodelling (PR) is an outward compensatory remodelling in which the arterial wall grows outward in an attempt to maintain a constant lumen diameter. For a long time, it was believed that the degree of stenosis can accurately reflect the risk of ischaemic stroke [3-5]. Previous studies have revealed that lesions without significant luminal stenosis can also lead to acute events [6,7], as summarized in a recent meta-analysis in which approximately 50% of acute/subacute ischaemic events were due to this type of lesion [6]. Research [8,9] has pointed out that the PR of plaques is more dangerous and more likely to cause acute ischaemic stroke. Previous studies [10-13] have found that there are specific vascular remodelling phenomena in the coronary and carotid arteries of diabetic patients. However, due to the deep location and small lumen of intracranial arteries and limitations of imaging techniques, the relationship between intracranial arterial remodelling and diabetes is still unclear. In recent years, with the development of magnetic resonance technology and the emergence of high-resolution (HR) vascular wall imaging, a clear and multidimensional display of the intracranial vascular wall has been achieved. Therefore, in this study, HR vessel wall imaging (HR-VWI) was used to display the remodelling characteristics of bilateral middle cerebral arteries and basilar arteries and to explore the factors of intracranial vascular remodelling and its relationship with diabetes.
This study introduces a pre-orthogonal adaptive Fourier decomposition (POAFD) to obtain approximations and numerical solutions to the fractional Laplacian initial value problem and the extension problem of Caffarelli and Silvestre (generalized Poisson equation). As a first step, the method expands the initial data function into a sparse series of the fundamental solutions with fast convergence, and, as a second step, makes use of the semigroup or the reproducing kernel property of each of the expanding entries. Experiments show the effectiveness and efficiency of the proposed series solutions.
BACKGROUND No studies have yet been conducted on changes in the microcirculatory hemodynamics of colorectal adenomas in vivo under endoscopy. The microcirculation of colorectal adenomas could be observed in vivo by a novel high-resolution magnification endoscopy with blue laser imaging (BLI), thus providing new insight into the microcirculation of early colon tumors. AIM To observe the superficial microcirculation of colorectal adenomas using the novel magnifying colonoscope with BLI and to quantitatively analyze the changes in hemodynamic parameters. METHODS From October 2019 to January 2020, 11 patients were screened for colon adenomas with the novel high-resolution magnification endoscope with BLI. Video images were recorded and processed with Adobe Premiere, Adobe Photoshop and Image-Pro Plus software. Four microcirculation parameters: microcirculation vessel density (MVD), mean vessel width (MVW) with width standard deviation (WSD), and blood flow velocity (BFV), were calculated for adenomas and the surrounding normal mucosa. RESULTS A total of 16 adenomas were identified. Compared with the normal surrounding mucosa, the superficial vessel density in the adenomas was decreased (MVD: 0.95±0.18 vs 1.17±0.28 μm/μm², P<0.05). MVW (5.11±1.19 vs 4.16±0.76 μm, P<0.05) and WSD (11.94±3.44 vs 9.04±3.74, P<0.05) were both increased. BFV slowed in the adenomas (709.74±213.28 vs 1256.51±383.31 μm/s, P<0.05). CONCLUSION The novel high-resolution magnification endoscope with BLI can be used for in vivo study of adenoma superficial microcirculation. Superficial vessel density was decreased and more irregular, with slower blood flow.
Based on Yan Fu’s translation norms of “faithfulness, expressiveness, and elegance” and Liu Miqing’s concept of aesthetic representation in translation, the present study employed a combined method of qualitative and quantitative analysis to investigate the linguistic styles employed by Zhu Ziqing in his renowned prose Beiying. Then, using relevant corpora and self-designed Python software, we investigated whether Zhang Peiji, as a translator, has successfully reproduced the simplistic, emotional, and realistic linguistic characteristics of Zhu Ziqing’s prose from the perspectives of “faithfulness, expressiveness, and elegance.” The findings of the research indicate that by employing a dynamic imitative translation approach, Zhang Peiji has successfully enhanced the linguistic aesthetic qualities of the source text, striving to reflect the distinctive linguistic style of Zhu Ziqing.
Due to the presence of a large amount of personal sensitive information in social networks, privacy preservation issues in social networks have attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm in the biological immune system, the negative representation of information exhibits features such as simplicity and efficiency, which makes it very suitable for preserving social network privacy. Therefore, we suggest a method to preserve the topology privacy and node attribute privacy of attribute social networks, called AttNetNRI. Specifically, a negative survey-based method is developed to disturb the relationship between nodes in the social network so that the topology structure can be kept private. Moreover, a negative database-based method is proposed to hide node attributes, so that the privacy of node attributes can be preserved while supporting the similarity estimation between different node attributes, which is crucial to the analysis of social networks. To evaluate the performance of AttNetNRI, empirical studies have been conducted on various attribute social networks and compared with several state-of-the-art methods tailored to preserve the privacy of social networks. The experimental results show the superiority of the developed method in preserving the privacy of attribute social networks and demonstrate the effectiveness of the topology-disturbing and attribute-hiding components.
To conveniently calculate the Wigner function of the optical cumulant operator and its dissipation evolution in a thermal environment, in this paper, the thermo-entangled state representation is introduced to derive the general evolution formula of the Wigner function, and its relation to Weyl correspondence is also discussed. The method of integration within the ordered product of operators is essential to our discussion.
Classical localization methods use Cartesian or polar coordinates, which require a priori range information to determine whether to estimate position or to only find bearings. The modified polar representation (MPR) unifies near-field and far-field models, alleviating the thresholding effect. Current localization methods in MPR based on the angle of arrival (AOA) and time difference of arrival (TDOA) measurements resort to semidefinite relaxation (SDR) and Gauss-Newton iteration, which are computationally complex and face a possible divergence problem. This paper formulates a pseudo-linear equation between the measurements and the unknown MPR position, which leads to a closed-form solution for the hybrid TDOA-AOA localization problem, namely hybrid constrained optimization (HCO). HCO attains Cramér-Rao bound (CRB)-level accuracy for mild Gaussian noise. Compared with the existing closed-form solutions for the hybrid TDOA-AOA case, HCO provides comparable performance to the hybrid generalized trust region subproblem (HGTRS) solution and is better than the hybrid successive unconstrained minimization (HSUM) solution in the large-noise region. Its computational complexity is lower than that of HGTRS. Simulations validate that HCO achieves the CRB attained by the maximum likelihood estimator (MLE) when the noise is small, whereas the MLE deviates from the CRB earlier.
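The pseudo-linear idea above — rewriting nonlinear measurements as equations linear in the unknown position so that a closed form drops out — can be illustrated with the classic bearing-only case. This is a generic AOA sketch in Cartesian coordinates, not the paper's MPR formulation: each bearing θ from a sensor at (sx, sy) gives sin θ · x − cos θ · y = sin θ · sx − cos θ · sy, and stacking the equations yields a least-squares problem solved by the normal equations.

```python
import math

def aoa_pseudolinear(sensors, bearings):
    """Closed-form bearing-only localization via the pseudo-linear
    equations sin(t)*x - cos(t)*y = sin(t)*sx - cos(t)*sy,
    solved through the 2x2 normal equations (A^T A) p = A^T b."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (sx, sy), t in zip(sensors, bearings):
        r1, r2 = math.sin(t), -math.cos(t)      # one row of A
        rhs = r1 * sx + r2 * sy                 # matching entry of b
        a11 += r1 * r1; a12 += r1 * r2; a22 += r2 * r2
        b1 += r1 * rhs; b2 += r2 * rhs
    det = a11 * a22 - a12 * a12                 # assumes non-collinear bearings
    x = (a22 * b1 - a12 * b2) / det
    y = (a11 * b2 - a12 * b1) / det
    return x, y

# Two sensors with noise-free bearings to the target (4, 3):
sensors = [(0.0, 0.0), (10.0, 0.0)]
bearings = [math.atan2(3 - sy, 4 - sx) for sx, sy in sensors]
x, y = aoa_pseudolinear(sensors, bearings)  # recovers (4, 3)
```

With noise-free measurements the estimate is exact; with noisy bearings the same closed form gives a least-squares estimate without any iteration, which is the practical appeal over Gauss-Newton.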
The research consistently highlights the gender disparity in cybersecurity leadership roles, necessitating targeted interventions. Biased recruitment practices, limited STEM education opportunities for girls, and workplace culture contribute to this gap. Proposed solutions include addressing biased recruitment through gender-neutral language and blind processes, promoting STEM education for girls to increase qualified female candidates, and fostering inclusive workplace cultures with mentorship and sponsorship programs. Gender parity is crucial for the industry’s success, as embracing diversity enables the cybersecurity sector to leverage various perspectives, drive innovation, and effectively combat cyber threats. Achieving this balance is not just about fairness but also a strategic imperative. By embracing concerted efforts towards gender parity, we can create a more resilient and impactful cybersecurity landscape, benefiting industry and society.
With the increasing demand for electrical services, wind farm layout optimization has been one of the biggest challenges that we have to deal with. Despite the promising performance of heuristic algorithms on the route network design problem, the expressive capability and search performance of such algorithms on multi-objective problems remain unexplored. In this paper, the wind farm layout optimization problem is defined. Then, a multi-objective algorithm based on Graph Neural Networks (GNN) and the Variable Neighborhood Search (VNS) algorithm is proposed. The GNN provides the basis representations for the following search algorithm so that the expressiveness and search accuracy of the algorithm can be improved. The multi-objective VNS algorithm is put forward by combining it with the multi-objective optimization algorithm to solve the problem with multiple objectives. The proposed algorithm is applied to an 18-node simulation example to evaluate the feasibility and practicality of the developed optimization strategy. The experiment on the simulation example shows that the proposed algorithm yields a reduction of 6.1% in Point of Common Coupling (PCC) over the current state-of-the-art algorithm, which means that the proposed algorithm designs a layout that improves the quality of the power supply by 6.1% at the same cost. The ablation experiments show that the proposed algorithm improves the power quality by more than 8.6% and 7.8% compared to the original VNS algorithm and the multi-objective VNS algorithm, respectively.
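Variable neighborhood search, the backbone of the method above, systematically switches between neighborhood sizes: shake the incumbent in neighborhood k, run a local search, and reset to k = 1 only on improvement. A minimal single-objective sketch on a toy bit-flip problem; the objective and neighborhoods here are illustrative, not the wind-farm model:

```python
import random

def vns(objective, x0, k_max=3, iters=100, seed=0):
    """Minimal variable neighborhood search over bit vectors.
    Neighborhood N_k = 'flip k random bits'; local search is
    first-improvement single-bit-flip descent."""
    rng = random.Random(seed)

    def local_search(x):
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                y = x[:]; y[i] ^= 1
                if objective(y) < objective(x):
                    x, improved = y, True
        return x

    best = local_search(x0[:])
    for _ in range(iters):
        k = 1
        while k <= k_max:
            y = best[:]
            for i in rng.sample(range(len(y)), k):  # shake: flip k bits
                y[i] ^= 1
            y = local_search(y)
            if objective(y) < objective(best):
                best, k = y, 1   # improvement: restart from N_1
            else:
                k += 1           # no luck: enlarge the neighborhood
    return best

# Toy objective: Hamming distance to a hidden target layout.
target = [1, 0, 1, 1, 0, 0, 1, 0]
f = lambda x: sum(a != b for a, b in zip(x, target))
best = vns(f, [0] * 8)
```

The multi-objective variant in the paper replaces the scalar comparison with a dominance check over the objective vector; the shake/descend/reset skeleton stays the same.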
To realize high-resolution digital beamforming (DBF) of ultra-wideband (UWB) signals, we propose a DBF method based on the Carathéodory representation for delay compensation and array extrapolation. Delay compensation by the Carathéodory representation can achieve high interpolation accuracy while using the single-channel sampling technique. Array extrapolation by the Carathéodory representation reformulates and extends each snapshot, consequently extending the aperture of the original uniform linear array (ULA) by several times and providing better real-time performance than the existing aperture extrapolation utilizing vector extrapolation based on the two-dimensional autoregressive (2-D AR) model. The UWB linear frequency modulated (LFM) signal is used for simulation analysis. Simulation results demonstrate that the proposed method is featured by a much higher spatial resolution than traditional DBF methods and lower sidelobes than using Lagrange fractional filters.
Sparse representation plays an important role in the research of face recognition. As a deformable sample classification task, face recognition is often used to test the performance of classification algorithms. In face recognition, differences in expression, angle, posture, and lighting conditions have become key factors that affect recognition accuracy. Essentially, there may be significant differences between different image samples of the same face, which makes image classification very difficult. Therefore, how to build a robust virtual image representation becomes a vital issue. To solve the above problems, this paper proposes a novel image classification algorithm. First, to better retain the global features and contour information of the original sample, the algorithm uses an improved non-linear image representation method to highlight the low-intensity and high-intensity pixels of the original training sample, thus generating a virtual sample. Second, by the principle of sparse representation, the linear expression coefficients of the original sample and the virtual sample are calculated, respectively. After obtaining these two types of coefficients, the distance between the original sample and the test sample and the distance between the virtual sample and the test sample are calculated. These two distances are converted into distance scores. Finally, a simple and effective weight fusion scheme is adopted to fuse the classification scores of the original image and the virtual image. The fused score determines the final classification result. The experimental results show that the proposed method outperforms other typical sparse representation classification methods.
Object detection based on deep learning now tries different strategies: it trains networks with less data to achieve the effect of large-dataset training. However, the existing methods usually do not achieve a balance between network parameters and training data. This makes the information provided by a small amount of picture data insufficient to optimize model parameters, resulting in unsatisfactory detection results. To improve the accuracy of few-shot object detection, this paper proposes a network based on the transformer and high-resolution feature extraction (THR). High-resolution feature extraction maintains the resolution representation of the image. Channel and spatial attention are used to make the network focus on features that are more useful to the object. In addition, the recently popular transformer is used to fuse the features of the existing objects. This compensates for the failure of previous networks by making full use of existing object features. Experiments on the Pascal VOC and MS-COCO datasets prove that the THR network achieves better results than previous mainstream few-shot object detection methods.
Funding: the National Natural Science Foundation of China (62062062), hosted by Gulila Altenbek.
Abstract: Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models the knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. Firstly, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed for auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is used to predict events at the next time step. On multiple real-world datasets, including Freebase13 (FB13), Freebase15K (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and Nell-995, multiple evaluation indicators show that the proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, validating its effectiveness and robustness.
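The independent recurrent modeling mentioned above can be illustrated with an IndRNN-style update, in which the recurrent weight is a per-neuron vector rather than a full matrix. The sketch below is illustrative only (the weights, dimensions, and activation are invented for the example), not the authors' implementation:

```python
def indrnn_step(x, h_prev, W, u, b):
    """One IndRNN step: h_t = relu(W @ x_t + u * h_{t-1} + b).

    Unlike a vanilla RNN, the recurrent weight u is a per-neuron
    vector (element-wise product), so each neuron's state evolves
    independently of the others."""
    hidden = len(u)
    h = []
    for i in range(hidden):
        pre = sum(W[i][j] * x[j] for j in range(len(x)))
        pre += u[i] * h_prev[i] + b[i]
        h.append(max(0.0, pre))  # ReLU activation
    return h

# Unroll a short, made-up input sequence
W = [[0.5, -0.2], [0.1, 0.3]]
u = [0.9, -0.4]
b = [0.0, 0.1]
h = [0.0, 0.0]
for x in [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]:
    h = indrnn_step(x, h, W, u, b)
# h now holds the final hidden state of the sequence
```

In the full framework this recurrence would run over per-period graph embeddings produced by the temporal graph convolution module.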
Funding: supported by the AFOSR grant FA9550-20-1-0055 and the NSF grant DMS-2010107.
Abstract: Capturing elaborate flow structures and phenomena is required for well-resolved numerical flows. Finite difference methods allow simple discretization of the mesh and model equations, but they require simpler meshes, e.g., rectangular ones. The inverse Lax-Wendroff (ILW) procedure can handle complex geometries on rectangular meshes. High-resolution, high-order methods can capture elaborate flow structures and phenomena; they also have strong mathematical and physical foundations, such as positivity preservation, jump conditions, and wave-propagation concepts. We perceive an effort toward direct numerical simulation, for instance with weighted essentially non-oscillatory (WENO) schemes. We therefore propose to solve a challenging engineering application without turbulence models, aiming to verify and validate recent high-resolution, high-order methods. To check the solver's accuracy, we solved vortex and Couette flows; we then solved inviscid and viscous nozzle flows for a conical profile. We employed the finite difference method, positivity-preserving Lax-Friedrichs splitting, high-resolution discretization of the viscous terms, fifth-order multi-resolution WENO, ILW, and third-order strong-stability-preserving Runge-Kutta. We showed that the solver is high-order and captures elaborate flow structures and phenomena. Oblique shocks appear in both nozzle flows. In the viscous flow, we also captured free-shock separation, recirculation, an entrainment region, a Mach disk, and the diamond-shaped pattern of nozzle flows.
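The positivity-preserving Lax-Friedrichs splitting mentioned above can be sketched for the 1-D Burgers flux; this is the generic textbook splitting, not the paper's solver:

```python
def lax_friedrichs_split(u, alpha):
    """Split the Burgers flux f(u) = u^2/2 into f+ and f-:
    f±(u) = (f(u) ± alpha*u) / 2, with alpha >= max|f'(u)| = max|u|,
    so that df+/du >= 0 and df-/du <= 0 (each part can be upwinded
    in a fixed direction)."""
    f = 0.5 * u * u
    return 0.5 * (f + alpha * u), 0.5 * (f - alpha * u)

# Sanity checks on a set of cell-average values
cells = [-1.0, -0.5, 0.0, 0.5, 1.0]
alpha = max(abs(v) for v in cells)  # global wave-speed bound
splits = [lax_friedrichs_split(v, alpha) for v in cells]
```

A WENO reconstruction would then be applied separately to the f+ and f- sequences before differencing.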
Abstract: Edge devices, due to their limited computational and storage resources, often require the use of compilers for program optimization. Ensuring the security and reliability of these compilers is therefore of paramount importance in the emerging field of edge AI. One widely used testing method for this purpose is fuzz testing, which detects bugs by feeding random test cases to the target program. However, this process consumes significant time and resources. To improve the efficiency of compiler fuzz testing, it is common practice to use test case prioritization techniques. Some researchers use machine learning to predict the code coverage of test cases, aiming to maximize testing capability by increasing the overall predicted coverage of the test cases. Nevertheless, these methods can only forecast the compiler's code coverage at a specific optimization level, potentially missing many optimization-related bugs. In this paper, we introduce C-CORE (short for Clustering by Code Representation), the first framework to prioritize test cases according to their code representations, which are derived directly from the source code. This approach avoids being limited to specific compiler states and extends to a broader range of compiler bugs. Specifically, we first train a scaled pre-trained programming language model to capture as many common features as possible from the test cases generated by a fuzzer. Using this pre-trained model, we then train two downstream models: one to predict the likelihood of triggering a bug and another to identify code representations associated with bugs. Subsequently, we cluster the test cases by their code representations and select the highest-scoring test case from each cluster as a high-quality test case. This reduction in redundant test cases saves time. Comprehensive evaluation results reveal that code representations are better at distinguishing test capabilities and that C-CORE significantly enhances testing efficiency. Across four datasets, C-CORE increases the average percentage of faults detected (APFD) by 0.16 to 0.31 and reduces test time by over 50% in 46% of cases. Compared with the best results from approaches using predicted code coverage, C-CORE improves the APFD value by 1.1% to 12.3% and achieves an overall time saving of 159.1%.
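The cluster-then-select step can be sketched as follows. The tiny vectors stand in for learned code representations and the clustering is a crude nearest-center assignment, so everything here is illustrative rather than C-CORE's actual pipeline:

```python
def select_per_cluster(cases, centers):
    """cases: list of (representation_vector, bug_score) pairs.
    Assign each case to its nearest center, keep only the
    highest-scoring case per cluster, and order the picks by
    descending bug score."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    best = {}
    for vec, score in cases:
        c = min(range(len(centers)), key=lambda i: sqdist(vec, centers[i]))
        if c not in best or score > best[c][1]:
            best[c] = (vec, score)
    return sorted(best.values(), key=lambda p: -p[1])

# Two obvious clusters; one representative survives from each
cases = [([0.0, 0.0], 0.9), ([0.1, 0.0], 0.5),
         ([5.0, 5.0], 0.7), ([5.0, 5.1], 0.8)]
picked = select_per_cluster(cases, centers=[[0.0, 0.0], [5.0, 5.0]])
```

Dropping the low-scoring near-duplicates is precisely where the reported time savings come from.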
Abstract: Sparse representation is an effective data classification algorithm that depends on known training samples to categorize a test sample. It has been widely used in various image classification tasks. Sparseness in sparse representation means that only a few instances selected from all training samples can effectively convey the essential class-specific information of the test sample, which is very important for classification. For deformable images such as human faces, pixels at the same location in different images of the same subject usually have different intensities, so extracting features and correctly classifying such deformable objects is very hard; lighting, pose, and occlusion add further difficulty. Considering the problems and challenges listed above, a novel image representation and classification algorithm is proposed. First, the authors' algorithm generates virtual samples by a non-linear variation method, which effectively extracts the low-frequency information of the space-domain features of the original image and is very useful for representing deformable objects. The combination of the original and virtual samples improves the classification performance and robustness of the algorithm. The authors' algorithm therefore calculates the expression coefficients of the original and virtual samples separately using the sparse representation principle and obtains the final score via an efficient score fusion scheme whose weighting coefficients are set entirely automatically. Finally, the algorithm classifies the samples based on the final scores. The experimental results show that the method performs better than conventional sparse representation classification algorithms.
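The score-fusion step can be sketched as below. The paper sets its weighting coefficients automatically, whereas this sketch takes a fixed illustrative weight and invented per-class residuals:

```python
def fuse_and_classify(res_orig, res_virt, w=0.6):
    """res_orig / res_virt: per-class reconstruction residuals from the
    original and virtual samples (smaller = better fit). Fuse the two
    residuals per class and classify to the minimizer. The weight w is
    illustrative; the authors' scheme derives it automatically."""
    fused = {c: w * res_orig[c] + (1 - w) * res_virt[c] for c in res_orig}
    label = min(fused, key=fused.get)
    return label, fused

# Hypothetical residuals for two classes
label, fused = fuse_and_classify({"A": 0.2, "B": 0.5}, {"A": 0.4, "B": 0.1})
```

The same fusion pattern applies unchanged with any number of classes.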
Funding: funded by the Major Science and Technology Projects in Henan Province, China, Grant No. 221100210600.
Abstract: Prior studies have demonstrated that deep learning-based approaches can enhance the performance of source code vulnerability detection by training neural networks to learn vulnerability patterns in code representations. However, due to limitations in code representation and neural network design, the validity and practicality of such models still need improvement. Additionally, because of differences between programming languages, most methods lack cross-language detection generality. To address these issues, in this paper we analyze the shortcomings of previous code representations and neural networks. We propose a novel hierarchical code representation that combines Concrete Syntax Trees (CST) with Program Dependence Graphs (PDG). Furthermore, we introduce a Tree-Graph-Gated-Attention (TGGA) network based on gated recurrent units and attention mechanisms to build a Hierarchical Code Representation learning-based Vulnerability Detection (HCRVD) system. This system enables cross-language vulnerability detection at the function level. The experiments show that HCRVD surpasses many competitors in vulnerability detection. It benefits from the hierarchical code representation learning method and outperforms the baseline in cross-language vulnerability detection by 9.772% and 11.819% on the C/C++ and Java datasets, respectively. Moreover, HCRVD has some ability to detect vulnerabilities in unknown programming languages and is useful in real open-source projects. HCRVD shows good validity, generality, and practicality.
Abstract: The systematic method for constructing Lewis representations represents the chemical bonds between atoms in a molecule, using symbols for the valence electrons of the atoms involved in each bond. Applying a number of rules in a defined order, it is often better suited to complicated cases than the Lewis representation of atoms, and it determines the formal charge and oxidation number of each atom in the edifice more efficiently than other methods.
Funding: supported by the special funds of Laoshan Laboratory (No. LSKJ202203604) and the National Key Research and Development Program of China (No. 2016YFC0303901).
Abstract: Near-seabed multichannel seismic exploration systems have yielded remarkable successes in marine geological disaster assessment, marine gas hydrate investigation, and deep-sea mineral exploration owing to their high vertical and horizontal resolution. However, the quality of deep-towed seismic imaging hinges on accurate source-receiver positioning information. In light of existing technical problems, we propose a novel array geometry inversion method tailored for high-resolution deep-towed multichannel seismic exploration systems. The method is independent of the attitude and depth sensors along a deep-towed seismic streamer and accounts for variations in seawater velocity and seabed slope angle. Our approach decomposes the towed line array into multiple line segments and characterizes its geometric shape using segment lengths and pitch angles. Introducing optimization parameters for seawater velocity and seabed slope angle, we establish a model-based objective function that yields results aligned with physical reality. Employing the particle swarm optimization algorithm enables synchronous acquisition of optimized inversion results for array geometry and seawater velocity. Experimental validation using theoretical models and field data verifies that our approach effectively enhances source and receiver positioning accuracy. The algorithm exhibits robust stability and reliability, addressing uncertainties in seismic traveltime picking and complex seabed topography.
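Particle swarm optimization, named above as the inversion engine, can be sketched generically. A simple quadratic misfit stands in for the real traveltime objective, and all parameter values are illustrative:

```python
import random

def pso(objective, dim, n_particles=20, iters=150, lo=-5.0, hi=5.0, seed=1):
    """Textbook PSO: each particle tracks its personal best, the swarm
    shares a global best, and velocities mix inertia with cognitive
    and social pulls. Not the paper's tuned implementation."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Stand-in misfit: squared distance to a hypothetical "true" position
misfit = lambda p: (p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2
best, best_val = pso(misfit, dim=2)
```

In the paper's setting, the decision vector would hold the segment pitch angles plus the seawater-velocity and slope-angle parameters, and the objective would be the traveltime residual.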
Funding: Supported by the National Natural Science Foundation of China, No. 82071871; Guangdong Basic and Applied Basic Research Foundation, No. 2021A1515220131; Guangdong Medical Science and Technology Research Fund Project, No. 2022111520491834; and the Clinical Research Project of Shenzhen Second People's Hospital, No. 20223357022.
Abstract: BACKGROUND: Intracranial atherosclerosis, a leading cause of stroke, involves arterial plaque formation. This study explores the link between plaque remodelling patterns and diabetes using high-resolution vessel wall imaging (HR-VWI). AIM: To investigate the factors behind intracranial atherosclerotic remodelling patterns and the relationship between intracranial atherosclerotic remodelling and diabetes mellitus using HR-VWI. METHODS: Ninety-four patients diagnosed with middle cerebral artery or basilar artery INTRODUCTION: Intracranial atherosclerotic disease is one of the main causes of ischaemic stroke worldwide, accounting for approximately 10% of transient ischaemic attacks and 30%-50% of ischaemic strokes [1]. It is the most common factor among Asian people [2]. The adaptive changes in the structure and function of blood vessels in response to changes in the internal and external environment are called vascular remodelling, a common and important pathological mechanism in atherosclerotic diseases; the remodelling mode of atherosclerotic plaques is closely related to the occurrence of stroke. Positive remodelling (PR) is an outward compensatory remodelling in which the arterial wall grows outwards in an attempt to maintain a constant lumen diameter. For a long time, it was believed that the degree of stenosis accurately reflects the risk of ischaemic stroke [3-5]. Previous studies have revealed that lesions without significant luminal stenosis can also lead to acute events [6,7], as summarized in a recent meta-analysis in which approximately 50% of acute/subacute ischaemic events were due to this type of lesion [6]. Research [8,9] has pointed out that PR of plaques is more dangerous and more likely to cause acute ischaemic stroke. Previous studies [10-13] have found specific vascular remodelling phenomena in the coronary and carotid arteries of diabetic patients. However, owing to the deep location and small lumen of intracranial arteries and the limitations of imaging techniques, the relationship between intracranial arterial remodelling and diabetes remains unclear. In recent years, with the development of magnetic resonance technology and the emergence of high-resolution (HR) vascular wall imaging, a clear and multidimensional display of the intracranial vascular wall has been achieved. Therefore, in this study, HR wall imaging (HR-VWI) was used to display the remodelling characteristics of the bilateral middle cerebral arteries and basilar arteries and to explore the factors of intracranial vascular remodelling and its relationship with diabetes.
Funding: supported by the Science and Technology Development Fund of Macao SAR (FDCT0128/2022/A, 0020/2023/RIB1, 0111/2023/AFJ, 005/2022/ALC), the Shandong Natural Science Foundation of China (ZR2020MA004), the National Natural Science Foundation of China (12071272), the MYRG 2018-00168-FST, and the Zhejiang Provincial Natural Science Foundation of China (LQ23A010014).
Abstract: This study introduces a pre-orthogonal adaptive Fourier decomposition (POAFD) to obtain approximations and numerical solutions to the fractional Laplacian initial value problem and the extension problem of Caffarelli and Silvestre (the generalized Poisson equation). As a first step, the method expands the initial data function into a sparse series of the fundamental solutions with fast convergence; as a second step, it makes use of the semigroup or the reproducing kernel property of each expanding entry. Experiments show the effectiveness and efficiency of the proposed series solutions.
Ethics approval: This study was approved by the Medical Ethics Committee of Beijing Tsinghua Changgung Hospital (20002-0-02).
Abstract: BACKGROUND: No studies have yet been conducted on changes in the microcirculatory hemodynamics of colorectal adenomas in vivo under endoscopy. The microcirculation of colorectal adenomas can be observed in vivo with a novel high-resolution magnification endoscope with blue laser imaging (BLI), providing new insight into the microcirculation of early colon tumors. AIM: To observe the superficial microcirculation of colorectal adenomas using the novel magnifying colonoscope with BLI and to quantitatively analyze the changes in hemodynamic parameters. METHODS: From October 2019 to January 2020, 11 patients were screened for colon adenomas with the novel high-resolution magnification endoscope with BLI. Video images were recorded and processed with Adobe Premiere, Adobe Photoshop, and Image-Pro Plus software. Four microcirculation parameters were calculated for adenomas and the surrounding normal mucosa: microcirculation vessel density (MVD), mean vessel width (MVW) with width standard deviation (WSD), and blood flow velocity (BFV). RESULTS: A total of 16 adenomas were identified. Compared with the normal surrounding mucosa, superficial vessel density in the adenomas was decreased (MVD: 0.95 ± 0.18 vs 1.17 ± 0.28 μm/μm², P < 0.05). MVW (5.11 ± 1.19 vs 4.16 ± 0.76 μm, P < 0.05) and WSD (11.94 ± 3.44 vs 9.04 ± 3.74, P < 0.05) were both increased. BFV slowed in the adenomas (709.74 ± 213.28 vs 1256.51 ± 383.31 μm/s, P < 0.05). CONCLUSION: The novel high-resolution magnification endoscope with BLI can be used for in vivo study of adenoma superficial microcirculation. Superficial vessel density was decreased, vessels were more irregular, and blood flow was slower.
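The four reported parameters can be computed from segmented vessel measurements as follows. The formulas are chosen to match the units in the abstract (μm/μm² for MVD, μm for MVW, μm/s for BFV), but the input numbers are invented and the real measurements come from the processed video frames:

```python
import math

def microcirculation_metrics(vessel_lengths_um, vessel_widths_um,
                             field_area_um2, displacement_um, elapsed_s):
    """MVD: total vessel length per unit mucosal area (um/um^2).
    MVW / WSD: mean and population standard deviation of vessel widths (um).
    BFV: tracked blood-cell displacement over elapsed time (um/s)."""
    mvd = sum(vessel_lengths_um) / field_area_um2
    mvw = sum(vessel_widths_um) / len(vessel_widths_um)
    wsd = math.sqrt(sum((w - mvw) ** 2 for w in vessel_widths_um)
                    / len(vessel_widths_um))
    bfv = displacement_um / elapsed_s
    return mvd, mvw, wsd, bfv

# Hypothetical measurements from one field of view
mvd, mvw, wsd, bfv = microcirculation_metrics(
    [100.0, 150.0, 50.0], [4.0, 6.0], 400.0, 500.0, 0.5)
```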
Abstract: Based on Yan Fu's translation norms of "faithfulness, expressiveness, and elegance" and Liu Miqing's concept of aesthetic representation in translation, the present study employed a combined qualitative and quantitative analysis to investigate the linguistic styles of Zhu Ziqing in his renowned prose Beiying. Then, using relevant corpora and self-designed Python software, we investigated whether Zhang Peiji, as a translator, successfully reproduced the simple, emotional, and realistic linguistic characteristics of Zhu Ziqing's prose from the perspectives of "faithfulness, expressiveness, and elegance." The findings indicate that, by employing a dynamic imitative translation approach, Zhang Peiji successfully enhanced the linguistic aesthetic qualities of the source text, striving to reflect Zhu Ziqing's distinctive linguistic style.
Funding: supported by the National Natural Science Foundation of China (Nos. 62006001, 62372001) and the Natural Science Foundation of Chongqing City (Grant No. CSTC2021JCYJ-MSXMX0002).
Abstract: Because social networks contain a large amount of personal sensitive information, privacy preservation in social networks has attracted the attention of many scholars. Inspired by the self-nonself discrimination paradigm in the biological immune system, the negative representation of information offers simplicity and efficiency, making it well suited to preserving social network privacy. We therefore propose a method to preserve both the topology privacy and the node attribute privacy of attribute social networks, called AttNetNRI. Specifically, a negative survey-based method is developed to disturb the relationships between nodes in the social network so that the topology structure is kept private. Moreover, a negative database-based method is proposed to hide node attributes, so that the privacy of node attributes is preserved while still supporting similarity estimation between different nodes' attributes, which is crucial to social network analysis. To evaluate the performance of AttNetNRI, empirical studies were conducted on various attribute social networks and compared with several state-of-the-art methods tailored to preserving social network privacy. The experimental results show the superiority of the developed method in preserving the privacy of attribute social networks and demonstrate the effectiveness of the topology-disturbing and attribute-hiding components.
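The negative-survey idea behind the topology-disturbing part can be sketched as follows: each participant reports a category they do not belong to, and the true histogram is recovered from the negative counts alone. This is the generic negative-survey estimator, not AttNetNRI itself:

```python
import random

def negative_survey(true_cats, n_cats, seed=7):
    """Each participant reports a uniformly random category that is NOT
    their own, so the raw reports never expose the sensitive value."""
    rng = random.Random(seed)
    return [rng.choice([c for c in range(n_cats) if c != t])
            for t in true_cats]

def reconstruct(reports, n_cats):
    """Unbiased estimate of the true counts: n_i ~= N - (c-1) * r_i,
    where r_i is how often category i was reported negatively
    (since E[r_i] = (N - n_i) / (c - 1) under uniform choice)."""
    n = len(reports)
    return [n - (n_cats - 1) * reports.count(c) for c in range(n_cats)]

# Hypothetical sensitive attribute with three categories
true_cats = [0] * 60 + [1] * 30 + [2] * 10
reports = negative_survey(true_cats, 3)
estimate = reconstruct(reports, 3)
```

The estimate is noisy for any single run but unbiased in expectation, which is the privacy/utility trade-off the abstract alludes to.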
Funding: Project supported by the Foundation for Young Talents in College of Anhui Province, China (Grant Nos. gxyq2021210 and gxyq2019077) and the Natural Science Foundation of the Anhui Higher Education Institutions, China (Grant Nos. 2022AH051580 and 2022AH051586).
Abstract: To conveniently calculate the Wigner function of the optical cumulant operator and its dissipative evolution in a thermal environment, this paper introduces the thermo-entangled state representation to derive the general evolution formula of the Wigner function, and its relation to the Weyl correspondence is also discussed. The method of integration within the ordered product of operators is essential to our discussion.
Funding: supported by the National Natural Science Foundation of China (62101359), the Sichuan University and Yibin Municipal People's Government University and City Strategic Cooperation Special Fund Project (2020CDYB-29), the Science and Technology Plan Transfer Payment Project of Sichuan Province (2021ZYSF007), and the Key Research and Development Program of the Science and Technology Department of Sichuan Province (2020YFS0575, 2021KJT0012-2021YFS-0067).
Abstract: Classical localization methods use Cartesian or polar coordinates and require a priori range information to decide whether to estimate position or only bearings. The modified polar representation (MPR) unifies the near-field and far-field models, alleviating the thresholding effect. Current localization methods in MPR based on angle of arrival (AOA) and time difference of arrival (TDOA) measurements resort to semidefinite relaxation (SDR) and Gauss-Newton iteration, which are computationally complex and may diverge. This paper formulates a pseudo-linear equation between the measurements and the unknown MPR position, which leads to a closed-form solution for the hybrid TDOA-AOA localization problem, namely hybrid constrained optimization (HCO). HCO attains Cramér-Rao bound (CRB)-level accuracy for mild Gaussian noise. Compared with existing closed-form solutions for the hybrid TDOA-AOA case, HCO performs comparably to the hybrid generalized trust region subproblem (HGTRS) solution and better than the hybrid successive unconstrained minimization (HSUM) solution in the large-noise region, with lower computational complexity than HGTRS. Simulations validate that HCO achieves the CRB attained by the maximum likelihood estimator (MLE) when the noise is small, while the MLE deviates from the CRB earlier.
Abstract: The research consistently highlights the gender disparity in cybersecurity leadership roles, necessitating targeted interventions. Biased recruitment practices, limited STEM education opportunities for girls, and workplace culture contribute to this gap. Proposed solutions include addressing biased recruitment through gender-neutral language and blind processes, promoting STEM education for girls to increase qualified female candidates, and fostering inclusive workplace cultures with mentorship and sponsorship programs. Gender parity is crucial for the industry's success, as embracing diversity enables the cybersecurity sector to leverage various perspectives, drive innovation, and effectively combat cyber threats. Achieving this balance is not just about fairness but also a strategic imperative. By embracing concerted efforts towards gender parity, we can create a more resilient and impactful cybersecurity landscape, benefiting industry and society.
Funding: supported by the Natural Science Foundation of Zhejiang Province (LY19A020001).
Abstract: With the increasing demand for electrical services, wind farm layout optimization has become one of the biggest challenges we face. Despite the promising performance of heuristic algorithms on the route network design problem, their expressive capability and search performance on multi-objective problems remain unexplored. In this paper, the wind farm layout optimization problem is defined, and a multi-objective algorithm based on a Graph Neural Network (GNN) and the Variable Neighborhood Search (VNS) algorithm is proposed. The GNN provides the base representations for the subsequent search so that the expressiveness and search accuracy of the algorithm are improved. The multi-objective VNS algorithm is derived by combining VNS with a multi-objective optimization algorithm to handle multiple objectives. The proposed algorithm is applied to an 18-node simulation example to evaluate the feasibility and practicality of the developed optimization strategy. The experiment on the simulation example shows that the proposed algorithm yields a 6.1% reduction at the Point of Common Coupling (PCC) over the current state-of-the-art algorithm, meaning it designs a layout that improves power quality by 6.1% at the same cost. Ablation experiments show that the proposed algorithm improves power quality by more than 8.6% and 7.8% over the original VNS algorithm and the multi-objective VNS algorithm, respectively.
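The VNS component can be sketched generically: shake the incumbent in a size-k neighborhood, run a simple local search, and widen k when no improvement is found. The toy bit-vector objective below is a stand-in for the real layout cost, so everything here is illustrative:

```python
import random

def vns(cost, x0, max_k=3, iters=30, seed=3):
    """Basic VNS over binary vectors: shake by flipping k random bits,
    improve with 1-bit-flip local search, restart at k=1 on improvement,
    otherwise enlarge the neighborhood."""
    rng = random.Random(seed)

    def local_search(x):
        x = x[:]
        improved = True
        while improved:
            improved = False
            for i in range(len(x)):
                y = x[:]
                y[i] ^= 1  # flip one bit
                if cost(y) < cost(x):
                    x, improved = y, True
        return x

    best = local_search(x0)
    for _ in range(iters):
        k = 1
        while k <= max_k:
            shaken = best[:]
            for i in rng.sample(range(len(best)), k):  # shake: flip k bits
                shaken[i] ^= 1
            cand = local_search(shaken)
            if cost(cand) < cost(best):
                best, k = cand, 1  # improvement: restart neighborhoods
            else:
                k += 1
    return best

# Toy objective: reach the alternating pattern 1,0,1,0,...
target = [1, 0] * 4
cost = lambda x: sum(a != b for a, b in zip(x, target))
best = vns(cost, [0] * 8)
```

In the paper's multi-objective variant, the scalar `cost` would be replaced by a dominance comparison over the objective vector.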
Funding: supported by the National Natural Science Foundation of China (61271331, 61571229).
Abstract: To realize high-resolution digital beamforming (DBF) of ultra-wideband (UWB) signals, we propose a DBF method based on the Carathéodory representation for delay compensation and array extrapolation. Delay compensation by the Carathéodory representation achieves high interpolation accuracy while using a single-channel sampling technique. Array extrapolation by the Carathéodory representation reformulates and extends each snapshot, consequently extending the aperture of the original uniform linear array (ULA) several-fold and providing better real-time performance than the existing aperture extrapolation using vector extrapolation based on the two-dimensional autoregressive (2-D AR) model. The UWB linear frequency modulated (LFM) signal is used for simulation analysis. Simulation results demonstrate that the proposed method features much higher spatial resolution than traditional DBF methods and lower sidelobes than Lagrange fractional filters.
Funding: supported by the Research Foundation for Advanced Talents of Guizhou University under Grant (2016) No. 49, Key Disciplines of Guizhou Province Computer Science and Technology (ZDXK[2018]007), and Research Projects of the Innovation Group of Education (QianJiaoHeKY[2021]022); also supported by the National Natural Science Foundation of China (62062023).
Abstract: Sparse representation plays an important role in face recognition research. As a deformable-sample classification task, face recognition is often used to test the performance of classification algorithms. In face recognition, differences in expression, angle, posture, and lighting conditions are key factors affecting recognition accuracy. Essentially, different image samples of the same face may differ significantly, which makes image classification very difficult; how to build a robust virtual image representation therefore becomes a vital issue. To solve these problems, this paper proposes a novel image classification algorithm. First, to better retain the global features and contour information of the original sample, the algorithm uses an improved non-linear image representation method to highlight the low-intensity and high-intensity pixels of the original training sample, thus generating a virtual sample. Second, by the principle of sparse representation, the linear expression coefficients of the original sample and the virtual sample are calculated separately. After obtaining these two types of coefficients, the distances between the original sample and the test sample and between the virtual sample and the test sample are computed and converted into distance scores. Finally, a simple and effective weight fusion scheme fuses the classification scores of the original and virtual images, and the fused score determines the final classification result. The experimental results show that the proposed method outperforms other typical sparse representation classification methods.
Funding: the National Natural Science Foundation of China under Grants 62172059 and 62072055, the Hunan Provincial Natural Science Foundation of China under Grant 2020JJ4626, the Scientific Research Fund of Hunan Provincial Education Department of China under Grant 19B004, the "Double First-class" International Cooperation and Development Scientific Research Project of Changsha University of Science and Technology under Grant 2018IC25, and the Young Teacher Growth Plan Project of Changsha University of Science and Technology under Grant 2019QJCZ076.
Abstract: Object detection based on deep learning now tries different strategies, using fewer data to train networks while aiming for the effect of large-dataset training. However, existing methods usually do not achieve a balance between network parameters and training data, so the information provided by a small amount of image data is insufficient to optimize the model parameters, resulting in unsatisfactory detection results. To improve the accuracy of few-shot object detection, this paper proposes a network based on the transformer and high-resolution feature extraction (THR). High-resolution feature extraction maintains the resolution representation of the image, and channel and spatial attention are used to make the network focus on the features that are most useful for the object. In addition, the recently popular transformer is used to fuse the features of the existing objects, compensating for previous network failures by making full use of existing object features. Experiments on the Pascal VOC and MS-COCO datasets show that the THR network achieves better results than previous mainstream few-shot object detection methods.
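Channel attention of the squeeze-and-excitation flavor can be sketched as below. THR's actual attention blocks have learned excitation weights, which this parameter-free toy omits, so it only illustrates the squeeze-gate-rescale pattern:

```python
import math

def channel_attention(fmap):
    """fmap: list of channels, each a 2-D grid of activations.
    Squeeze each channel by global average pooling, gate it with a
    sigmoid of the pooled value, and rescale the whole channel.
    Learned excitation weights are deliberately omitted."""
    out = []
    for ch in fmap:
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gate = 1.0 / (1.0 + math.exp(-avg))  # sigmoid gate in (0, 1)
        out.append([[v * gate for v in row] for row in ch])
    return out

fmap = [[[0.0, 0.0], [0.0, 0.0]],   # inactive channel -> gate 0.5
        [[2.0, 2.0], [2.0, 2.0]]]   # strong channel   -> gate ~0.88
scaled = channel_attention(fmap)
```

Spatial attention follows the same pattern with the pooling taken across channels per pixel instead of across pixels per channel.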