The existing problems of deep learning frameworks for the detection and segmentation of electrical equipment are dominantly related to low precision. Because deep learning-based video surveillance provides a reliable, safe and easy-to-operate technology for unmanned inspection of electrical equipment, this paper uses the bottleneck attention module (BAM) attention mechanism to improve the Solov2 model and proposes a new electrical equipment segmentation model. Firstly, the BAM attention mechanism is integrated into the feature extraction network to adaptively learn the correlation between feature channels, thereby improving the expression ability of the feature map; secondly, the weighted sum of cross-entropy loss and Dice loss is designed as the mask loss to improve the segmentation accuracy and robustness of the model; finally, the non-maximum suppression (NMS) algorithm is refined to better handle the overlap problem in instance segmentation. Experimental results show that the proposed method achieves an average segmentation accuracy (mAP) of 80.4% on a dataset of three types of electrical equipment, including transformers, insulators and voltage transformers, which improves the detection accuracy by more than 5.7% compared with the original Solov2 model. The proposed segmentation model can provide a practical technical means for the intelligent management of power systems.
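The weighted mask loss described above can be sketched in a few lines; the weighting coefficient `alpha` and the plain NumPy formulation are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss on a predicted probability mask vs. a binary target mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

def bce_loss(pred, target, eps=1e-7):
    """Pixel-wise binary cross-entropy, averaged over the mask."""
    p = np.clip(pred, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

def mask_loss(pred, target, alpha=0.5):
    """Weighted sum of cross-entropy and Dice loss; alpha is a hypothetical weight."""
    return alpha * bce_loss(pred, target) + (1 - alpha) * dice_loss(pred, target)

# A perfect prediction drives both terms toward zero.
target = np.array([[1.0, 0.0], [1.0, 1.0]])
print(mask_loss(target, target) < 1e-3)
```

Blending the two terms lets the pixel-wise cross-entropy handle per-pixel calibration while the region-level Dice term counteracts foreground/background imbalance.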
Weighted total least squares (WTLS) has been regarded as the standard tool for the errors-in-variables (EIV) model, in which all the elements in the observation vector and the coefficient matrix are contaminated with random errors. However, in many geodetic applications, some elements are error-free and some random observations appear repeatedly in different positions in the augmented coefficient matrix. This is called the linear structured EIV (LSEIV) model. Two kinds of methods are proposed for the LSEIV model, based on functional and stochastic modifications. On the one hand, the functional part of the LSEIV model is modified into the errors-in-observations (EIO) model. On the other hand, the stochastic model is modified by applying the Moore-Penrose inverse of the cofactor matrix. The algorithms are derived through the Lagrange multipliers method and linear approximation. The estimation principles and iterative formulas of the parameters are proven to be consistent. The first-order approximate variance-covariance matrix (VCM) of the parameters is also derived. A numerical example is given to compare the performance of the three proposed algorithms with the STLS approach. Afterwards, the least squares (LS), total least squares (TLS) and linear structured weighted total least squares (LSWTLS) solutions are compared, and the accuracy evaluation formula is proven to be feasible and effective. Finally, the LSWTLS is applied to the field of deformation analysis, where it yields a better result than the traditional LS and TLS estimations.
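For contrast with the weighted and structured variants discussed above, the basic (unweighted, unstructured) total least squares solution can be computed from the SVD of the augmented matrix [A b]; this baseline sketch is not the LSWTLS algorithm itself, and the noise-free check is an assumption for illustration:

```python
import numpy as np

def tls(A, b):
    """Basic total least squares via SVD: errors allowed in both A and b.
    The solution comes from the right singular vector of [A b] with the
    smallest singular value."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T
    return -V[:n, n:] / V[n, n]

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 2))
x_true = np.array([[2.0], [-1.0]])
b = A @ x_true
# With noise-free data, LS and TLS coincide with the true parameters.
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
x_tls = tls(A, b)
print(np.allclose(x_tls, x_true), np.allclose(x_ls, x_true))
```

The WTLS/LSWTLS methods in the paper generalize this by weighting the residuals and respecting the repeated-element structure, which the plain SVD solution ignores.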
The selection and scaling of ground motion records is a primary and essential task in performing structural analysis and design. Conventional methods involve using ground motion models and a conditional spectrum to select ground motion records based on the target spectrum. This research demonstrates the influence of adopting different weighting factors for various period ranges when matching selected ground motions to the target hazard spectrum. The event data from the Next Generation Attenuation West 2 (NGA-West2) database is used as the basis for ground motion selection, and hazard de-aggregation is conducted to estimate the event parameters of interest, which are then used to construct the target intensity measure (IM). The target IMs are then used to select ground motion records with different weighted vector-valued objective functions. The weights are altered to account for the relative importance of each IM in accordance with the structural analysis application of steel moment resisting frame (SMRF) buildings. Instead of an ordinary objective function for spectrum matching, a novel model is introduced and compared with the conventional cost function. The results indicate that the new cost function for ground motion selection places higher demands on structures than the conventional cost function. Moreover, assigning more weight to the first-mode period of a structure increases the engineering demand parameters. The findings demonstrate that weighting factors allocated to different period ranges can successfully account for period elongation and higher-mode effects.
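The idea of a weighted spectral-matching objective can be illustrated with a minimal sketch; the log-space residual, the period grid, and the specific weight vectors are assumptions for illustration, not the paper's cost function:

```python
import numpy as np

def weighted_misfit(record_sa, target_sa, weights):
    """Weighted sum-of-squares misfit between a record's response spectrum
    and the target spectrum, computed in log space (a common convention)."""
    r = np.log(record_sa) - np.log(target_sa)
    return float(np.sum(weights * r**2))

periods = np.array([0.1, 0.5, 1.0, 2.0])   # s (illustrative grid)
target = np.array([0.8, 0.6, 0.3, 0.1])    # g (illustrative target spectrum)
record = np.array([0.7, 0.65, 0.28, 0.12])

# Emphasizing a hypothetical first-mode period T1 = 1.0 s with a larger weight:
w_uniform = np.ones_like(periods)
w_t1 = np.array([1.0, 1.0, 4.0, 1.0])
print(weighted_misfit(record, target, w_t1) > weighted_misfit(record, target, w_uniform))
```

Changing the weight vector changes which records rank best, which is exactly the effect the study examines for SMRF applications.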
This paper is devoted to studying the existence of solutions for the following logarithmic Schrödinger problem: −div(a(x)∇u) + V(x)u = u log u² + k(x)|u|^{q₁−2}u + h(x)|u|^{q₂−2}u, x ∈ ℝ^N. (1) We first prove that the corresponding functional I belongs to C¹(H¹_V(ℝ^N), ℝ). Furthermore, by using the variational method, we prove the existence of a sign-changing solution to problem (1).
As the differences in sensors' precision and some random factors are difficult to control, the actual measurement signals are far from the target signals, which affects the reliability and precision of rotating machinery fault diagnosis. Traditional signal processing methods, such as classical inference and the weighted averaging algorithm, usually lack dynamic adaptability, which easily causes faults to be misjudged or missed. To enhance the measuring veracity and precision of vibration signals in multi-sensor fault diagnosis of rotating machinery, a novel data-level fusion approach is presented on the basis of correlation function analysis to quickly determine the weighted values of multi-sensor vibration signals. The approach does not require prior information about the sensors, and the weighted value of each sensor can be confirmed from the correlation measure of the real-time data tested in the data-level fusion process. It gives a greater weighted value to a sensor signal with a greater correlation measure, and vice versa. The approach can effectively suppress large errors and can even still fuse data in the case of sensor failures, because it takes full advantage of the sensors' own information to determine the weighted values. Moreover, it has good anti-jamming performance, because the correlation measures between noise and effective signals are usually small. Through the simulation of typical signals collected from multiple sensors, a comparative analysis of dynamic adaptability and fault tolerance between the proposed approach and the traditional weighted averaging approach is carried out. Finally, a rotor dynamics and integrated fault simulator is taken as an example to verify the feasibility and advantages of the proposed approach. It is shown that multi-sensor data-level fusion based on the correlation function weighted approach is better than the traditional weighted averaging approach with respect to fusion precision and dynamic adaptability. Meanwhile, the approach is adaptable and easy to use, and can be applied to other areas of vibration measurement.
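A minimal sketch of the correlation-based weighting idea, assuming the correlation measure is taken as the mean absolute correlation of each sensor's signal with the others (the paper's exact correlation function may differ):

```python
import numpy as np

def correlation_weights(signals):
    """Weight each sensor by the mean absolute correlation of its signal
    with the other sensors' signals; a sensor that agrees with the others
    gets a large weight, an outlier or failed sensor gets a small one."""
    C = np.abs(np.corrcoef(signals))
    np.fill_diagonal(C, 0.0)
    m = C.mean(axis=1)
    return m / m.sum()

def fuse(signals):
    """Data-level fusion: correlation-weighted average of the sensor signals."""
    return correlation_weights(signals) @ signals

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 500)
clean = np.sin(2 * np.pi * 5 * t)
good1 = clean + 0.05 * rng.normal(size=t.size)
good2 = clean + 0.05 * rng.normal(size=t.size)
bad = rng.normal(size=t.size)                 # failed sensor: pure noise

signals = np.vstack([good1, good2, bad])
w = correlation_weights(signals)
fused = fuse(signals)
print(w.round(3))  # the uncorrelated sensor receives the smallest weight
```

Because the weights come from the live data rather than fixed prior assumptions, the scheme keeps adapting when a sensor drifts or fails, which is the dynamic adaptability claimed above.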
We obtain several estimates of the essential norms of the products of differentiation operators and weighted composition operators between weighted Banach spaces of analytic functions with general weights. As applications, we also give estimates of the essential norms of weighted composition operators between weighted Banach spaces of analytic functions and Bloch-type spaces.
Purpose: (1) To test basic assumptions underlying frequency-weighted citation analysis: (a) uni-citations correspond to citations that are nonessential to the citing papers; (b) the influence of a cited paper on the citing paper increases with the frequency with which it is cited in the citing paper. (2) To explore the degree to which citation location may be used to help identify nonessential citations. Design/methodology/approach: Each of the in-text citations in all research articles published in Issue 1 of the Journal of the Association for Information Science and Technology (JASIST) 2016 was manually classified into one of five categories: Applied, Contrastive, Supportive, Reviewed, and Perfunctory. The distributions of citations across in-text frequencies and locations in the text were analyzed by these functions. Findings: Filtering out nonessential citations before assigning weight is important for frequency-weighted citation analysis. For this purpose, removing citations by location is more effective than re-citation analysis, which simply removes uni-citations. Removing all citation occurrences in the Background and Literature Review sections and uni-citations in the Introduction section appears to provide a good balance between filtration and error rates. Research limitations: This case study suffers from limited scalability and generalizability. We took careful measures to reduce the impact of other limitations of the data collection approach used. Relying on the researcher's judgment to attribute citation functions, this approach is unobtrusive but speculative, and can suffer from a low degree of confidence, thus creating reliability concerns. Practical implications: Weighted citation analysis promises to improve citation analysis for research evaluation, knowledge network analysis, knowledge representation, and information retrieval. The present study showed the importance of filtering out nonessential citations before assigning weight in a weighted citation analysis, which may be a significant step toward realizing these promises. Originality/value: Weighted citation analysis has long been proposed as a theoretical solution to the problem that citation analysis treats all citations equally, and it has attracted increasing research interest in recent years. The present study showed, for the first time, the importance of filtering out nonessential citations in weighted citation analysis, pointing research in this area in a new direction.
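The location-based filtering rule suggested by the findings can be expressed as a small predicate; the section names and the tuple representation are illustrative assumptions:

```python
# In-text citations represented as (cited_id, section, in_text_frequency).
citations = [
    ("A", "Introduction", 1),
    ("B", "Introduction", 3),
    ("C", "Literature Review", 2),
    ("D", "Background", 1),
    ("E", "Methods", 4),
]

def keep(cited_id, section, freq):
    """Filtering rule sketched from the findings: drop all citations in the
    Background and Literature Review sections, and uni-citations (freq == 1)
    in the Introduction; keep the rest for frequency weighting."""
    if section in ("Background", "Literature Review"):
        return False
    if section == "Introduction" and freq == 1:
        return False
    return True

essential = [c for c in citations if keep(*c)]
print([c[0] for c in essential])  # ['B', 'E']
```

Only the surviving citations would then be assigned frequency-based weights, which is the filtration-before-weighting step the study argues for.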
This paper is devoted to studying the commutators of multilinear singular integral operators with non-smooth kernels and weighted Lipschitz functions. Some mapping properties for two types of commutators on weighted Lebesgue spaces, which extend and generalize some previous results, are obtained.
As an important type of polynomial approximation, approximation of functions by Bernstein operators is an important topic in approximation theory and computational theory. This paper gives global and pointwise estimates for weighted approximation of functions with singularities by Bernstein operators. The main results are Jackson-type estimates for functions f ∈ (W_w^λ)² and f ∈ C_w, which extend the result of (Della Vecchia et al., 2004).
Orthomorphic permutations have good characteristics in cryptosystems. In this paper, using knowledge about the relation between orthomorphic permutations and multi-output functions, and the conceptions of the generalized Walsh spectrum and the auto-correlation function of multi-output functions, we investigate the Walsh spectral characteristics and the auto-correlation function characteristics of orthomorphic permutations, and several results are obtained.
In this paper, we obtain weak-type estimates of intrinsic square functions, including the Lusin area integral, the Littlewood-Paley g-function and the g*_λ-function, on the weighted Morrey spaces L^{1,k}(w) for 0 < k < 1 and w ∈ A₁.
In this article, we provide estimates for the degree of V-bilipschitz determinacy of weighted homogeneous function germs defined on a weighted homogeneous analytic variety V satisfying a convenient Łojasiewicz condition. The result gives an explicit order such that the geometrical structure of a weighted homogeneous polynomial function germ is preserved after higher-order perturbations.
Role-based network embedding aims to embed role-similar nodes into a similar embedding space, and is widely used in graph mining tasks such as role classification and detection. Roles are sets of nodes in graph networks with similar structural patterns and functions. However, role-similar nodes may be far away from or even disconnected from each other. Meanwhile, neighborhood node features and noise also affect the result of role-based network embedding; these are challenges for current network embedding work. In this paper, we propose Role-based network Embedding via Quantum walk with weighted Features fusion (REQF), which simultaneously considers the influence of global and local role information, node features, and noise. Firstly, we capture the global role information of nodes via a quantum walk, based on its superposition property, and emphasize the local role information via a biased quantum walk. Secondly, we utilize the quantum-walk-weighted characteristic function to extract and fuse the features of nodes and their neighborhoods from different distributions, which contain role information implicitly. Finally, we leverage a Variational Auto-Encoder (VAE) to reduce the effect of noise. We conduct extensive experiments on seven real-world datasets, and the results show that REQF is more effective at capturing role information in the network, outperforming the best baseline by up to 14.6% in role classification and 23% in role detection on average.
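As a minimal illustration of the quantum-walk primitive that REQF builds on (not REQF itself), a discrete-time coined quantum walk with a Hadamard coin can be simulated directly; the cycle graph and the initial coin state are assumptions for the sketch:

```python
import numpy as np

def quantum_walk_on_cycle(n, steps):
    """Discrete-time coined quantum walk on an n-cycle with a Hadamard coin.
    The walker's state is a complex amplitude per (coin, position) pair."""
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    psi = np.zeros((2, n), dtype=complex)        # amplitude[coin, position]
    psi[:, 0] = np.array([1, 1j]) / np.sqrt(2)   # symmetric initial coin state
    for _ in range(steps):
        psi = H @ psi                            # coin flip (superposition)
        psi[0] = np.roll(psi[0], 1)              # coin state 0 steps right
        psi[1] = np.roll(psi[1], -1)             # coin state 1 steps left
    return np.sum(np.abs(psi) ** 2, axis=0)      # position distribution

p = quantum_walk_on_cycle(21, 10)
print(abs(p.sum() - 1.0) < 1e-9)  # unitary evolution conserves probability
```

The superposition over positions is what lets a quantum walk probe structurally similar but distant nodes, which classical random-walk embeddings struggle with.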
Given an admissible weight w and 0 < p < ∞, the estimate ∫_D |f(z)|^p w(z) dm(z) ≍ |f(0)|^p + ∫_D |f′(z)|^p ψ^p(z) w(z) dm(z) is valid for all holomorphic functions f in the unit disc D. Here, ψ(r) = (∫_r^1 w(t) dt)/w(r) is the distortion of w. As an application of the above estimate, it is proved that the Cesàro operator C[·] is bounded on the weighted Bergman spaces L^p_{a,w}(D).
In recent years, functional data has been widely used in finance, medicine, biology and other fields. Current clustering analysis can solve problems in finite-dimensional space, but it is difficult to apply directly to the clustering of functional data. In this paper, we propose a new unsupervised clustering algorithm based on adaptive weights. In the absence of initialization parameters, we use entropy-type penalty terms and a fuzzy partition matrix to find the optimal number of clusters. At the same time, we introduce a measure based on adaptive weights to reflect the difference in information content between different clustering metrics. Simulation experiments show that the proposed algorithm achieves higher purity than some existing algorithms.
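One standard way to realize entropy-penalized adaptive weights, which may differ from the paper's exact scheme, is the closed-form minimizer of a linear cost plus an entropy term:

```python
import numpy as np

def entropy_weights(dispersions, gamma=1.0):
    """Closed-form minimizer of  sum(w * d) + gamma * sum(w * log w)
    subject to sum(w) = 1:  w_j ∝ exp(-d_j / gamma).
    A clustering metric with smaller dispersion d_j gets a larger weight;
    gamma controls how sharply the weights concentrate."""
    w = np.exp(-np.asarray(dispersions, dtype=float) / gamma)
    return w / w.sum()

d = [0.2, 1.5, 0.9]   # hypothetical per-metric dispersions
w = entropy_weights(d)
print(w.round(3))      # most informative metric dominates
```

The entropy term keeps every metric's weight strictly positive, so no clustering metric is discarded outright, only down-weighted.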
The classification of functional data has drawn much attention in recent years. The main challenge is representing infinite-dimensional functional data by finite-dimensional features while utilizing those features to achieve better classification accuracy. In this paper, we propose a mean-variance-based (MV) feature weighting method for classifying functional data or functional curves. In the feature extraction stage, each sample curve is approximated by B-splines to transfer features to the coefficients of the spline basis. After that, a feature weighting approach based on statistical principles is introduced by comprehensively considering the between-class differences and within-class variations of the coefficients. We also introduce a scaling parameter to adjust the gap between the weights of features. The new feature weighting approach can adaptively enhance noteworthy local features while mitigating the impact of confusing features. Algorithms for feature-weighted K-nearest-neighbor and support vector machine classifiers are both provided. Moreover, the new approach can be well integrated into existing functional data classifiers, such as the generalized functional linear model and functional linear discriminant analysis, resulting in more accurate classification. The performance of the mean-variance-based classifiers is evaluated by simulation studies and real data. The results show that the new feature weighting approach significantly improves classification accuracy for complex functional data.
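A plausible sketch of mean-variance feature weighting on already-extracted basis coefficients; the exact score and the scaling convention are assumptions, not the paper's formulas:

```python
import numpy as np

def mv_weights(X, y, scale=1.0):
    """Hypothetical mean-variance weighting: for each feature (e.g. a B-spline
    coefficient), score = between-class mean gap / within-class variance,
    then soften or sharpen the gap between weights with a scaling parameter."""
    classes = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in classes])
    within = np.mean([X[y == c].var(axis=0) for c in classes], axis=0)
    between = means.max(axis=0) - means.min(axis=0)
    score = between / (within + 1e-12)
    w = score ** scale
    return w / w.sum()

rng = np.random.default_rng(2)
X0 = rng.normal([0, 0], [1, 1], size=(50, 2))
X1 = rng.normal([3, 0], [1, 1], size=(50, 2))   # feature 0 separates the classes
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)
w = mv_weights(X, y)
print(w[0] > w[1])  # the discriminative coefficient gets the larger weight
```

These weights can then rescale the coefficient space before a K-nearest-neighbor or SVM classifier, so distances are dominated by the discriminative coefficients.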
A novel scheme to construct a hash function based on a weighted complex dynamical network (WCDN) generated from an original message is proposed in this paper. First, the original message is divided into blocks. Then, each block is divided into components, and the nodes and weighted edges are defined from these components and their relations. Namely, the WCDN closely related to the original message is established. Furthermore, the node dynamics of the WCDN are chosen as a chaotic map. After chaotic iterations, quantization and exclusive-or operations, the fixed-length hash value is obtained. This scheme has the property that any tiny change in the message diffuses rapidly through the WCDN, leading to very different hash values. Analysis and simulation show that the scheme possesses good statistical properties, excellent confusion and diffusion, strong collision resistance and high efficiency.
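A toy illustration of the iterate-quantize-XOR pipeline using a single logistic map; this is not the paper's WCDN construction and is not cryptographically secure:

```python
def chaotic_hash(message: bytes, n_bits: int = 128) -> str:
    """Drive a logistic map with message bytes, then quantize and XOR the
    chaotic orbit into a fixed-length hex digest. Illustrative only."""
    x, digest = 0.5, 0
    mask = (1 << n_bits) - 1
    for i, byte in enumerate(message):
        x = (x + byte / 255.0) % 1.0 or 0.3   # perturb state; avoid the fixed point 0
        for _ in range(4):
            x = 3.99 * x * (1.0 - x)          # chaotic logistic iterations
        digest = (digest ^ (int(x * mask) << (i % 8))) & mask
    return f"{digest:0{n_bits // 4}x}"

h1 = chaotic_hash(b"power grid")
h2 = chaotic_hash(b"power grie")  # one-character change in the message
print(h1 != h2)
```

Even this toy version shows the diffusion property: a one-character change perturbs the chaotic orbit, and the quantize-and-XOR step spreads that perturbation across the whole digest.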
Let (Ω, A, P) be a probability space, X(t, ω) a random function continuous in probability for t ∈ [0, +∞) or (−∞, +∞) (ω ∈ Ω), and F(t) a positive function continuous for t ∈ [0, +∞) or (−∞, +∞). If X(t, ω) and F(t) satisfy certain conditions, then there exists a sequence {Q_n(t, ω)} of random polynomials such that, almost surely, for t ∈ [0, +∞) or (−∞, +∞), lim_{n→∞} |X(t, ω) − Q_n(t, ω)|/F(t) = 0.
In this paper, a sufficient and necessary condition for quick trickle permutations is given from the point of view of inverse permutations. A bridge is built between quick trickle permutations and m-value logic functions. By using the Chrestenson spectrum of m-value logic functions and the auto-correlation function of m-value logic functions to investigate the Chrestenson spectral characteristics and the auto-correlation function characteristics of the inverse permutations of quick trickle permutations, a determination algorithm for quick trickle permutations is given. Using these results, it becomes easy to judge by computer whether or not a permutation is a quick trickle permutation. This gives a new pathway for studying the constructions and enumerations of quick trickle permutations.
A new kind of combination forecasting model based on the generalized weighted functional proportional mean is proposed, and a parameter estimation method for its weighting coefficients by means of quadratic programming is given. This model is broadly representative and constitutes a new kind of aggregative method for group forecasting. By taking a suitable combining form of the forecasting models and seeking the optimal parameters, the optimal combining form can be obtained and the forecasting accuracy can be improved. The effectiveness of the model is demonstrated by an example.
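A related, standard instance of choosing combining weights by quadratic programming is the minimum-variance combination with a sum-to-one constraint, which has a closed-form Lagrange-multiplier solution; this is an illustration, not the paper's generalized weighted functional proportional mean model:

```python
import numpy as np

def optimal_combining_weights(errors):
    """Minimum-variance forecast combination: solve  min_w w' S w  s.t. sum(w) = 1,
    where S is the covariance of past forecast errors (rows = models).
    Closed-form solution via Lagrange multipliers: w = S^{-1} 1 / (1' S^{-1} 1)."""
    S = np.cov(errors)
    ones = np.ones(S.shape[0])
    w = np.linalg.solve(S, ones)
    return w / w.sum()

rng = np.random.default_rng(3)
e = np.vstack([0.2 * rng.normal(size=200),   # accurate model: small errors
               1.0 * rng.normal(size=200)])  # noisy model: large errors
w = optimal_combining_weights(e)
print(w.round(3))  # most of the weight goes to the accurate model
```

More general combining forms, like the weighted functional proportional mean above, keep the same recipe — pick weights by optimizing a quadratic criterion under constraints — but change the aggregation function.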
Funding (electrical equipment segmentation study): Jilin Science and Technology Development Plan Project (No. 20200403075SF); Doctoral Research Start-Up Fund of Northeast Electric Power University (No. BSJXM-2018202).
Funding (LSEIV/WTLS study): National Natural Science Foundation of China (Grant Nos. 42074016, 42104025, 42274057 and 41704007); Hunan Provincial Natural Science Foundation of China (Grant No. 2021JJ30244); Scientific Research Fund of Hunan Provincial Education Department (Grant No. 22B0496).
Funding (ground motion selection study): financial support from Teesside University for the Ph.D. program of the first author.
Funding (multi-sensor data fusion study): National Hi-tech Research and Development Program of China (863 Program, Grant No. 2007AA04Z433); Hunan Provincial Natural Science Foundation of China (Grant No. 09JJ8005); Scientific Research Foundation of the Graduate School of Beijing University of Chemical Technology, China (Grant No. 10Me002).
文摘As the differences of sensor's precision and some random factors are difficult to control,the actual measurement signals are far from the target signals that affect the reliability and precision of rotating machinery fault diagnosis.The traditional signal processing methods,such as classical inference and weighted averaging algorithm usually lack dynamic adaptability that is easy for trends to cause the faults to be misjudged or left out.To enhance the measuring veracity and precision of vibration signal in rotary machine multi-sensor vibration signal fault diagnosis,a novel data level fusion approach is presented on the basis of correlation function analysis to fast determine the weighted value of multi-sensor vibration signals.The approach doesn't require knowing the prior information about sensors,and the weighted value of sensors can be confirmed depending on the correlation measure of real-time data tested in the data level fusion process.It gives greater weighted value to the greater correlation measure of sensor signals,and vice versa.The approach can effectively suppress large errors and even can still fuse data in the case of sensor failures because it takes full advantage of sensor's own-information to determine the weighted value.Moreover,it has good performance of anti-jamming due to the correlation measures between noise and effective signals are usually small.Through the simulation of typical signal collected from multi-sensors,the comparative analysis of dynamic adaptability and fault tolerance between the proposed approach and traditional weighted averaging approach is taken.Finally,the rotor dynamics and integrated fault simulator is taken as an example to verify the feasibility and advantages of the proposed approach,it is shown that the multi-sensor data level fusion based on correlation function weighted approach is better than the traditional weighted average approach with respect to fusion precision and dynamic adaptability.Meantime,the 
approach is adaptable and easy to use,can be applied to other areas of vibration measurement.
文摘We obtain several estimates of the essential norms of the products of differen- tiation operators and weighted composition operators between weighted Banach spaces of analytic functions with general weights. As applications, we also give estimates of the es- sential norms of weighted composition operators between weighted Banach space of analytic functions and Bloch-type spaces.
文摘Purpose: (1) To test basic assumptions underlying frequency-weighted citation analysis: (a) Uni-citations correspond to citations that are nonessential to the citing papers; (b) The influence of a cited paper on the citing paper increases with the frequency with which it is cited in the citing paper. (2) To explore the degree to which citation location may be used to help identify nonessential citations. Design/methodology/approach: Each of the in-text citations in all research articles published in Issue 1 of the Journal of the Association for Information Science and Technology (JASIST) 2016 was manually classified into one of these five categories: Applied, Contrastive, Supportive, Reviewed, and Perfunctory. The distributions of citations at different in-text frequencies and in different locations in the text by these functions were analyzed. Findings: Filtering out nonessential citations before assigning weight is important for frequency-weighted citation analysis. For this purpose, removing citations by location is more effective than re-citation analysis that simply removes uni-citations. Removing all citation occurrences in the Background and Literature Review sections and uni-citations in the Introduction section appears to provide a good balance between filtration and error rates. Research limitations: This case study suffers from the limitation of scalability and generalizability. We took careful measures to reduce the impact of other limitations of the data collection approach used. Relying on the researcher's judgment to attribute citation functions, this approach is unobtrusive but speculative, and can suffer from a low degree of confidence, thus creating reliability concerns. Practical implications: Weighted citation analysis promises to improve citation analysis for research evaluation, knowledge network analysis, knowledge representation, and information retrieval. 
The present study showed the importance of filtering out nonessential citations before assigning weight in a weighted citation analysis, which may be a significant step forward to realizing these promises. Originality/value: Weighted citation analysis has long been proposed as a theoretical solution to the problem of citation analysis that treats all citations equally, and has attracted increasing research interest in recent years. The present study showed, for the first time, the importance of filtering out nonessential citations in weighted citation analysis, pointing research in this area in a new direction.
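The location-and-frequency filtering rule suggested by the findings above can be sketched as a small routine. This is a hypothetical illustration: the section names, data layout, and function name below are assumptions for demonstration, not part of the study's published tooling.

```python
from collections import Counter

def weighted_citation_counts(citations):
    """citations: list of (ref_id, section) pairs, one per in-text occurrence.
    Returns frequency weights for citations kept as essential."""
    total = Counter(ref for ref, _ in citations)  # in-text frequency per reference
    kept = []
    for ref, section in citations:
        if section in ("Background", "Literature Review"):
            continue                      # location-based filter: drop all occurrences
        if section == "Introduction" and total[ref] == 1:
            continue                      # drop uni-citations in the Introduction
        kept.append(ref)
    return Counter(kept)                  # remaining occurrences become the weights
```

For example, a reference cited once only in the Background is filtered out entirely, while a reference cited in both the Introduction and Methods retains both occurrences as weight.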
Funding: Supported by the National Natural Science Foundation of China (10771054, 11071200) and the NSF of Fujian Province of China (No. 2010J01013)
Abstract: This paper is devoted to studying the commutators of multilinear singular integral operators with non-smooth kernels and weighted Lipschitz functions. Some mapping properties for two types of commutators on weighted Lebesgue spaces, which extend and generalize some previous results, are obtained.
Abstract: As an important type of polynomial approximation, approximation of functions by Bernstein operators is an important topic in approximation theory and computational theory. This paper gives global and pointwise estimates for the weighted approximation of functions with singularities by Bernstein operators. The main results are Jackson-type estimates for functions f ∈ (W_w^λ)_2 and f ∈ C_w, which extend the result of Della Vecchia et al. (2004).
Funding: Supported by the State Key Laboratory of Information Security Opening Foundation (01-02).
Abstract: Orthomorphic permutations have good characteristics in cryptosystems. In this paper, using the known relation between orthomorphic permutations and multi-output functions, together with the concepts of the generalized Walsh spectrum and the auto-correlation function of multi-output functions, the Walsh spectral characteristics and the auto-correlation function characteristics of orthomorphic permutations are investigated, and several results are obtained.
Abstract: In this paper, we obtain weak-type estimates for intrinsic square functions, including the Lusin area integral, the Littlewood-Paley g-function, and the g_λ^*-function, on the weighted Morrey spaces L^{1,k}(w) for 0 < k < 1 and w ∈ A_1.
Funding: Supported by the National Natural Science Foundation of China (10671009, 60534080, 10871149)
Abstract: In this article, we provide estimates for the degree of V-bilipschitz determinacy of weighted homogeneous function germs defined on a weighted homogeneous analytic variety V satisfying a convenient Lojasiewicz condition. The result gives an explicit order such that the geometrical structure of weighted homogeneous polynomial function germs is preserved after perturbations of higher order.
Funding: Supported in part by the National Natural Science Foundation of China (Grant 62172065) and the Natural Science Foundation of Chongqing (Grant cstc2020jcyjmsxmX0137).
Abstract: Role-based network embedding aims to embed role-similar nodes into a similar embedding space, and is widely used in graph mining tasks such as role classification and role detection. Roles are sets of nodes in graph networks with similar structural patterns and functions. However, role-similar nodes may be far away from, or even disconnected from, each other. Meanwhile, neighborhood node features and noise also affect the result of role-based network embedding, which remain challenges for current network embedding work. In this paper, we propose Role-based network Embedding via Quantum walk with weighted Features fusion (REQF), which simultaneously considers the influence of global and local role information, node features, and noise. Firstly, we capture the global role information of nodes via a quantum walk, exploiting its superposition property, and emphasize local role information via a biased quantum walk. Secondly, we utilize the quantum-walk-weighted characteristic function to extract and fuse the features of nodes and their neighborhoods through their distributions, which contain role information implicitly. Finally, we leverage a Variational Auto-Encoder (VAE) to reduce the effect of noise. We conduct extensive experiments on seven real-world datasets, and the results show that REQF is more effective at capturing role information in the network, outperforming the best baseline by up to 14.6% in role classification and 23% in role detection on average.
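One ingredient of the pipeline, summarizing a node's neighborhood feature distribution through a characteristic function, can be illustrated with a plain classical stand-in. The sketch below is hypothetical: it replaces the quantum walk, its bias, and the VAE stage with a simple adjacency-based neighborhood, and the evaluation points `ts` are arbitrary choices.

```python
import numpy as np

def ecf_node_embedding(adj, features, ts=(0.5, 1.0, 2.0)):
    """Embed each node by the empirical characteristic function (ECF) of the
    feature values in its neighborhood: phi(t) = mean_j exp(i * t * x_j).
    A classical stand-in for the quantum-walk-weighted version in REQF."""
    n = adj.shape[0]
    emb = np.zeros((n, 2 * len(ts) * features.shape[1]))
    for v in range(n):
        nbrs = np.flatnonzero(adj[v])
        vals = features[nbrs] if len(nbrs) else features[[v]]  # fall back to self
        parts = []
        for t in ts:
            phi = np.exp(1j * t * vals).mean(axis=0)  # ECF at scale t, per feature
            parts.extend([phi.real, phi.imag])        # real/imag parts as coordinates
        emb[v] = np.concatenate(parts)
    return emb
```

Nodes whose neighborhoods carry the same feature distribution receive the same embedding even if they are far apart in the graph, which is the role-based (rather than proximity-based) behavior the abstract describes.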
Funding: Supported by the 151 Project and the Natural Science Foundation of Zhejiang Province (M103104)
Abstract: Given an admissible weight w and 0 < p < ∞, the estimate ∫_D |f(z)|^p w(z) dm(z) ≍ |f(0)|^p + ∫_D |f′(z)|^p ψ^p(z) w(z) dm(z) is valid for all holomorphic functions f in the unit disc D. Here, ψ(r) = (∫_r^1 w(t) dt) / w(r) is the distortion of w. As an application of the above estimate, it is proved that the Cesàro operator C[·] is bounded on the weighted Bergman spaces L^p_{a,w}(D).
Abstract: In recent years, functional data have been widely used in finance, medicine, biology, and other fields. Existing clustering analysis can solve problems in finite-dimensional spaces, but is difficult to apply directly to the clustering of functional data. In this paper, we propose a new unsupervised clustering algorithm based on adaptive weights. In the absence of an initialization parameter, we use entropy-type penalty terms and a fuzzy partition matrix to find the optimal number of clusters. At the same time, we introduce a measure based on adaptive weights to reflect the difference in information content between different clustering metrics. Simulation experiments show that the proposed algorithm achieves higher purity than some existing algorithms.
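The core of entropy-penalized fuzzy clustering admits a closed-form membership update, sketched below under simplifying assumptions: the cluster count k is fixed (the adaptive cluster-number selection above is not reproduced), the initialization is a deterministic spread over the data, and the penalty weight `lam` and iteration count are illustrative.

```python
import numpy as np

def entropy_fuzzy_cluster(X, k, lam=1.0, iters=100):
    """Fuzzy clustering with an entropy-type penalty: minimize
    sum_ik u_ik * d_ik + lam * sum_ik u_ik * log u_ik over fuzzy partitions U."""
    # deterministic spread-out initialization (a simplifying assumption)
    centers = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # squared distances
        u = np.exp(-d / lam)                 # closed-form membership update
        u /= u.sum(axis=1, keepdims=True)    # each row lies on the probability simplex
        centers = (u.T @ X) / u.sum(axis=0)[:, None]  # membership-weighted centroids
    return u, centers
```

Smaller `lam` sharpens the memberships toward hard assignments; larger `lam` pushes them toward the uniform distribution, which is how the entropy term regularizes the partition.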
Funding: The National Social Science Foundation of China (Grant No. 22BTJ035).
Abstract: The classification of functional data has drawn much attention in recent years. The main challenge is representing infinite-dimensional functional data by finite-dimensional features while utilizing those features to achieve better classification accuracy. In this paper, we propose a mean-variance-based (MV) feature weighting method for classifying functional data or functional curves. In the feature extraction stage, each sample curve is approximated by B-splines to transfer the features to the coefficients of the spline basis. After that, a feature weighting approach based on statistical principles is introduced by comprehensively considering the between-class differences and within-class variations of the coefficients. We also introduce a scaling parameter to adjust the gap between the weights of features. The new feature weighting approach can adaptively enhance noteworthy local features while mitigating the impact of confusing features. Algorithms for feature-weighted K-nearest neighbor and support vector machine classifiers are both provided. Moreover, the new approach can be well integrated into existing functional data classifiers, such as the generalized functional linear model and functional linear discriminant analysis, resulting in more accurate classification. The performance of the mean-variance-based classifiers is evaluated by simulation studies and real data. The results show that the new feature weighting approach significantly improves the classification accuracy for complex functional data.
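The between-class/within-class weighting step can be sketched on generic coefficient vectors (the B-spline fitting stage is omitted). This is a minimal sketch, not the paper's exact construction: the Fisher-style score, the epsilon guard, and the use of `gamma` as the gap-scaling parameter are illustrative assumptions.

```python
import numpy as np

def mv_feature_weights(X, y, gamma=1.0):
    """Per-feature weights from between-class mean spread over within-class
    variance; gamma adjusts the gap between large and small weights."""
    classes = np.unique(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2   # class-mean spread
        within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)     # within-class scatter
    score = between / (within + 1e-12)   # discriminative power of each feature
    w = score ** gamma                   # gamma > 1 widens the gap, < 1 narrows it
    return w / w.sum()                   # normalize to sum to one
```

The resulting weights can then scale each coordinate in a weighted distance for K-nearest neighbor, or rescale features before fitting a support vector machine.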
Funding: Project supported by the Natural Science Foundation of Jiangsu Province, China (Grant No. BK2010526), the Specialized Research Fund for the Doctoral Program of Higher Education of China (Grant No. 20103223110003), and the Ministry of Education Research in the Humanities and Social Sciences Planning Fund, China (Grant No. 12YJAZH120).
Abstract: A novel scheme to construct a hash function based on a weighted complex dynamical network (WCDN) generated from the original message is proposed in this paper. First, the original message is divided into blocks. Then, each block is divided into components, and the nodes and weighted edges are defined from these components and their relations; that is, a WCDN closely related to the original message is established. Furthermore, the node dynamics of the WCDN are chosen as a chaotic map. After chaotic iterations, quantization, and exclusive-or operations, the fixed-length hash value is obtained. The scheme has the property that any tiny change in the message diffuses rapidly through the WCDN, leading to very different hash values. Analysis and simulation show that the scheme possesses good statistical properties, excellent confusion and diffusion, strong collision resistance, and high efficiency.
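The block/iterate/quantize/XOR pipeline can be illustrated with a toy stand-in. Everything below is a hypothetical sketch, not the paper's actual construction: the logistic map, the edge-weight rule, the padding, and all constants are illustrative assumptions, and the sketch carries none of the paper's security claims.

```python
def chaotic_hash(message: bytes, digest_len: int = 16, rounds: int = 32) -> bytes:
    """Toy chaotic-map hash: message bytes seed node states, byte relations
    give edge weights, states evolve under a coupled logistic map, and
    quantized states are XOR-ed into a fixed-length digest."""
    n = digest_len
    # pad so short messages still seed every node
    padded = message + bytes((len(message) + i) % 251 for i in range(n))
    states = [(padded[i % len(padded)] + 1) / 257.0 for i in range(n)]   # in (0, 1)
    # weighted edges from adjacent byte pairs (a toy stand-in for the WCDN)
    weights = [((padded[i] ^ padded[(i + 1) % len(padded)]) + 1) / 257.0
               for i in range(n)]
    digest = bytearray(n)
    for _ in range(rounds):
        for i in range(n):
            # couple each node to a neighbor via its edge weight, then apply
            # the logistic map x -> 3.99 * x * (1 - x)
            x = (1 - weights[i]) * states[i] + weights[i] * states[(i + 1) % n]
            states[i] = 3.99 * x * (1 - x)
        for i in range(n):
            digest[i] ^= int(states[i] * 256) % 256  # quantize and diffuse by XOR
    return bytes(digest)
```

Because the logistic map is chaotic, a one-byte change in the message perturbs the seeded states and rapidly diverges over the rounds, which is the diffusion property the scheme relies on.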
Abstract: Let (Ω, A, P) be a probability space, X(t, ω) a random function continuous in probability for t ∈ [0, +∞) or (−∞, +∞) (ω ∈ Ω), and F(t) a positive function continuous for t ∈ [0, +∞) or (−∞, +∞). If X(t, ω) and F(t) satisfy certain conditions, then there exists a sequence {Q_n(t, ω)} of random polynomials such that, almost surely, for t ∈ [0, +∞) or (−∞, +∞), lim_{n→∞} |X(t, ω) − Q_n(t, ω)|/F(t) = 0.
Funding: The Opening Foundation of the State Key Laboratory of Information Security (20050102)
Abstract: In this paper, a sufficient and necessary condition for quick trickle permutations is given from the viewpoint of inverse permutations. A bridge is built between quick trickle permutations and m-valued logic functions. Using the Chrestenson spectrum and the auto-correlation function of m-valued logic functions to investigate the Chrestenson spectral characteristics and the auto-correlation function characteristics of the inverse permutations of quick trickle permutations, a determination algorithm for quick trickle permutations is given. With these results, it becomes easy to judge by computer whether or not a permutation is a quick trickle permutation. This provides a new pathway to the study of constructions and enumerations of quick trickle permutations.
Abstract: A new kind of combining forecasting model based on the generalized weighted functional proportional mean is proposed, and a parameter estimation method for its weighting coefficients by means of quadratic programming is given. The model is broadly representative: it is a new kind of aggregative method for group forecasting. By taking a suitable combining form of the forecasting models and seeking the optimal parameters, the optimal combining form can be obtained and the forecasting accuracy can be improved. The effectiveness of the model is demonstrated by an example.
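The weight-estimation step can be sketched as the quadratic program min ||Fw − y||² over the probability simplex, where the columns of F are the individual forecasts and y the observed values. This is a hypothetical sketch: it uses a simple linear combination rather than the generalized weighted functional proportional mean, and solves the program by projected gradient descent instead of a dedicated QP solver.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto {w : w >= 0, sum w = 1} (sort-based rule)."""
    u = np.sort(v)[::-1]
    cssv = np.cumsum(u) - 1
    rho = np.nonzero(u > cssv / (np.arange(len(v)) + 1))[0][-1]
    theta = cssv[rho] / (rho + 1)
    return np.maximum(v - theta, 0)

def combine_weights(F, y, iters=500):
    """min_w ||F w - y||^2 s.t. w on the simplex, by projected gradient descent."""
    m = F.shape[1]
    w = np.full(m, 1 / m)                              # start from equal weights
    step = 1 / (np.linalg.norm(F, 2) ** 2 + 1e-12)     # safe step from the spectral norm
    for _ in range(iters):
        w = project_simplex(w - step * F.T @ (F @ w - y))  # gradient step + projection
    return w
```

With historical forecasts in F and actuals in y, the returned w gives the combining coefficients; a more accurate individual model receives a larger weight.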