For the composition analysis and identification of ancient glass products, L1 regularization, K-Means cluster analysis, the elbow rule, and other methods were used to build logistic regression, cluster analysis, and hyper-parameter test models, and SPSS, Python, and other tools were used to obtain the classification rules of glass products under different fluxes, the sub-classification under different chemical compositions, and a test and rationality analysis of the hyper-parameter K value. The research can provide theoretical support for the protection and restoration of ancient glass relics.
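As a sketch of the clustering step, the elbow rule can be illustrated with a plain Lloyd's-algorithm K-Means on toy two-dimensional data; the data and parameters below are illustrative stand-ins, not the paper's glass-composition measurements.

```python
# Sketch of K-Means plus the elbow rule on toy data (not the paper's dataset).
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm; returns (centroids, labels, inertia)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    inertia = ((X - centroids[labels]) ** 2).sum()
    return centroids, labels, inertia

# Elbow rule: compute within-cluster inertia for each K and look for the bend.
X = np.vstack([np.random.default_rng(1).normal(m, 0.3, size=(30, 2))
               for m in ((0, 0), (4, 0), (0, 4))])
inertias = {k: kmeans(X, k)[2] for k in range(1, 7)}
```

Plotting `inertias` against K should show a clear bend near K = 3 for these three well-separated blobs; that bend is the "elbow" used to pick K.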
The properties of generalized flip Markov chains on connected regular digraphs are discussed. The 1-Flipper operation on Markov chains for undirected graphs is generalized to multi-digraphs. The generalized 1-Flipper operation preserves the regularity and weak connectivity of multi-digraphs and is proved to be symmetric. Moreover, it is shown that a series of random generalized 1-Flipper operations eventually leads to a uniform probability distribution over all connected d-regular multi-digraphs without loops.
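The degree-preserving arc swap underlying the 1-Flipper can be sketched as follows; the exact operation in the paper may differ, so treat this as an assumed form of the exchange (two arcs (u,v) and (x,y) are replaced by (u,y) and (x,v), skipping swaps that would create loops or change nothing).

```python
# Sketch of one degree-preserving arc exchange on a multi-digraph
# stored as a list of arcs; an assumed form of the 1-Flipper step.
import random
from collections import Counter

def flip_step(arcs, rng):
    """Replace arcs (u,v),(x,y) by (u,y),(x,v). In- and out-degrees of
    every node are preserved; swaps that would create a loop or leave
    the arcs unchanged are skipped."""
    i, j = rng.sample(range(len(arcs)), 2)
    (u, v), (x, y) = arcs[i], arcs[j]
    if u == x or v == y or u == y or x == v:
        return arcs
    new = list(arcs)
    new[i], new[j] = (u, y), (x, v)
    return new

def degrees(arcs):
    return (Counter(u for u, _ in arcs), Counter(v for _, v in arcs))

rng = random.Random(0)
arcs = [(0, 1), (1, 2), (2, 0), (0, 2), (2, 1), (1, 0)]  # 2-regular, no loops
for _ in range(50):
    arcs = flip_step(arcs, rng)
```

After any number of steps the multi-digraph stays 2-regular and loop-free, which is the invariance the chain relies on.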
Bioluminescence tomography (BLT) is an important noninvasive optical molecular imaging modality in preclinical research. To improve image quality, reconstruction algorithms have to deal with the inherent ill-posedness of the BLT inverse problem. The sparse spatial distribution of bioluminescent sources has been widely exploited in BLT, and many L1-regularized methods have been investigated due to the sparsity-inducing properties of the L1 norm. In this paper, we present a reconstruction method based on L_(1/2) regularization to enhance the sparsity of the BLT solution, and solve the nonconvex L_(1/2) norm problem by converting it into a series of weighted L1 homotopy minimization problems with iteratively updated weights. To assess the performance of the proposed reconstruction algorithm, simulations on a heterogeneous mouse model are designed to compare it with three representative sparse reconstruction algorithms: the weighted interior-point method, L1 homotopy, and the Stagewise Orthogonal Matching Pursuit algorithm. Simulation results show that the proposed method yields stable reconstructions under different noise levels. Quantitative comparisons demonstrate that the proposed algorithm outperforms the competitor algorithms in location accuracy, multiple-source resolving, and image quality.
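The conversion of the nonconvex L_(1/2) problem into a sequence of weighted L1 problems can be sketched on a toy compressive-sensing system; the ISTA inner solver, the weight update w_i = 1/(|x_i| + eps)^(1/2), and all problem sizes below are illustrative choices, not the paper's BLT pipeline.

```python
# Sketch of L1/2-by-reweighting: solve weighted-L1 problems whose
# weights come from the previous iterate (toy system, not BLT data).
import numpy as np

def weighted_l1_ista(A, b, w, lam, steps=500):
    """Minimize ||Ax - b||^2/2 + lam*sum(w_i*|x_i|) by ISTA."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(steps):
        z = x - A.T @ (A @ x - b) / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
    return x

def l_half_via_reweighting(A, b, lam=0.05, outer=5, eps=1e-3):
    w = np.ones(A.shape[1])
    for _ in range(outer):
        x = weighted_l1_ista(A, b, w, lam)
        w = 1.0 / np.sqrt(np.abs(x) + eps)  # L1/2-style weight update
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]      # 3-sparse ground truth
b = A @ x_true
x_hat = l_half_via_reweighting(A, b)
```

The reweighting drives small coefficients toward zero faster than plain L1, which is the sparsity-enhancing behavior the abstract refers to.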
Tomographic synthetic aperture radar (TomoSAR) imaging exploits antenna array measurements taken at different elevation apertures to recover the reflectivity function along the elevation direction. In recent years, for sparse elevation distributions, compressive sensing (CS) has developed into a favorable technique for high-resolution elevation reconstruction in TomoSAR by solving an L_(1) regularization problem. However, because the elevation distribution in forested areas is nonsparse, using CS in the recovery requires some basis, such as a wavelet basis, in which the elevation reflectivity function has a sparse representation. This paper presents a novel wavelet-based L_(1/2) regularization CS-TomoSAR imaging method for forested areas. In the proposed method, we first construct a wavelet basis that can sparsely represent the elevation reflectivity function of the forested area, and then reconstruct the elevation distribution using the L_(1/2) regularization technique. Compared to wavelet-based L_(1) regularization TomoSAR imaging, the proposed method improves the quality of the recovered elevation efficiently.
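The key representation step can be sketched with a Haar basis (an assumed stand-in for the paper's wavelet construction): a profile that is nonsparse point-wise becomes sparse after the wavelet analysis transform, which is what makes CS recovery applicable.

```python
# Sketch: an orthonormal Haar analysis of a piecewise-constant
# "elevation profile"; the profile is a made-up stand-in, not real
# forested-area reflectivity data.
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar analysis matrix for n a power of two."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])            # coarse (averaging) rows
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])  # finest-scale wavelets
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
W = haar_matrix(n)                  # rows form an orthonormal basis
x = np.ones(n)
x[20:] = 0.3
x[45:] = 0.8                        # piecewise-constant profile, 3 levels
coeffs = W @ x                      # analysis: most entries are exactly zero
```

Only the coefficients whose wavelet support crosses one of the two jumps (plus a few coarse-scale terms) are nonzero, so the 64-sample profile is represented by roughly a dozen coefficients.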
Hepatocyte nuclear factor 1 alpha (HNF1A), hepatocyte nuclear factor 4 alpha (HNF4A), and forkhead box protein A2 (FOXA2) are key transcription factors that regulate a complex gene network in the liver, creating a regulatory transcriptional loop. The ENCODE and ChIP-Atlas databases identify the recognition sites of these transcription factors in many glycosyltransferase genes. Our in silico analysis of HNF1A, HNF4A, and FOXA2 binding to the ten candidate glyco-genes studied in this work confirms a significant enrichment of these transcription factors specifically in the liver. Our previous studies identified HNF1A as a master regulator of fucosylation, glycan branching, and galactosylation of plasma glycoproteins. Here, we aimed to functionally validate the role of the three transcription factors on downstream glyco-gene transcriptional expression and the possible effect on the glycan phenotype. We used the state-of-the-art clustered regularly interspaced short palindromic repeats/dead Cas9 (CRISPR/dCas9) molecular tool for the downregulation of the HNF1A, HNF4A, and FOXA2 genes in HepG2 cells, a human liver cancer cell line. The results show that downregulation of all three genes, individually and in pairs, affects the transcriptional activity of many glyco-genes, although downregulation of glyco-genes was not always followed by an unambiguous change in the corresponding glycan structures. The effect is better seen as an overall change in the total HepG2 N-glycome, primarily due to the extension of biantennary glycans. We propose an alternative way to evaluate the N-glycome composition via estimating the overall complexity of the glycome by quantifying the number of monomers in each glycan structure. We also propose a model showing feedback loops with the mutual activation of HNF1A-FOXA2 and HNF4A-FOXA2 affecting glyco-genes and protein glycosylation in HepG2 cells.
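A minimal sketch of the proposed complexity read-out, with invented placeholder structures and abundances (not data from the HepG2 experiments): weight each glycan's monomer count by its relative abundance and sum.

```python
# Toy sketch of an abundance-weighted glycome complexity score;
# the profile below is a hypothetical stand-in for a measured N-glycome.
def glycome_complexity(glycans):
    """glycans: list of (monomer_count, relative_abundance) pairs.
    Returns the abundance-weighted mean number of monomers per glycan."""
    total = sum(ab for _, ab in glycans)
    return sum(n * ab for n, ab in glycans) / total

profile = [(7, 0.5), (9, 0.3), (12, 0.2)]   # hypothetical structures
complexity = glycome_complexity(profile)     # 7*0.5 + 9*0.3 + 12*0.2
```

A shift toward extended (larger) structures, such as the biantennary extension described above, raises this single-number summary even when individual glyco-gene changes are ambiguous.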
Bayesian empirical likelihood is a semiparametric method that combines parametric priors and nonparametric likelihoods; that is, it replaces the parametric likelihood function in Bayes' theorem with a nonparametric empirical likelihood function, which can be used without assuming the distribution of the data. It can effectively avoid the problems caused by model misspecification. In variable selection based on Bayesian empirical likelihood, the penalty term is introduced into the model in the form of a parameter prior. In this paper, we propose a novel variable selection method: L_(1/2) regularization based on Bayesian empirical likelihood. The L_(1/2) penalty is introduced into the model through a scale mixture of uniforms representation of the generalized Gaussian prior, and the posterior distribution is then sampled using an MCMC method. Simulations demonstrate that the proposed method can have better predictive ability when the error violates the zero-mean normality assumption of the standard parametric model, and can perform variable selection.
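The shrinkage effect of the L_(1/2)-inducing prior, p(β) ∝ exp(−λ|β|^(1/2)), can be illustrated with a toy random-walk Metropolis sampler; the Gaussian likelihood here stands in for the empirical likelihood, so this is only a sketch of the prior's behavior, not the paper's sampler.

```python
# Toy Metropolis sampler for a 1-D posterior with the generalized
# Gaussian prior exp(-lam*|beta|^(1/2)); likelihood is a stand-in.
import math
import random

def log_post(beta, y_bar, n, lam, sigma=1.0):
    loglik = -n * (beta - y_bar) ** 2 / (2 * sigma ** 2)
    logprior = -lam * math.sqrt(abs(beta))   # L1/2-type penalty as a prior
    return loglik + logprior

def metropolis(y_bar, n, lam, iters=20000, step=0.5, seed=0):
    rng = random.Random(seed)
    beta, lp = 0.0, log_post(0.0, y_bar, n, lam)
    draws = []
    for _ in range(iters):
        prop = beta + rng.gauss(0.0, step)
        lp_prop = log_post(prop, y_bar, n, lam)
        if math.log(rng.random()) < lp_prop - lp:   # accept/reject
            beta, lp = prop, lp_prop
        draws.append(beta)
    return draws

draws = metropolis(y_bar=2.0, n=50, lam=1.0)
post_mean = sum(draws[5000:]) / len(draws[5000:])   # discard burn-in
```

With 50 observations the likelihood dominates, so the posterior mean sits near 2.0 with only slight shrinkage toward zero from the prior.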
For complex flows in compressors involving flow separation and adverse pressure gradients, numerical simulation results based on Reynolds-averaged Navier-Stokes (RANS) models often deviate from experimental measurements. To improve the prediction accuracy and reduce the difference between RANS predictions and experimental measurements, an experimental data-driven flow field prediction method based on deep learning and l_(1) regularization is proposed and applied to a compressor cascade flow field. The inlet boundary conditions and turbulence model parameters are calibrated to obtain high-fidelity flow fields. The Spalart-Allmaras and SST turbulence models are used independently for mutual validation. The contributions of the key modified parameters are also analyzed via sensitivity analysis. The results show that the prediction error can be reduced by nearly 70% with the proposed algorithm. The flow fields predicted by the two calibrated turbulence models are almost identical and nearly independent of the turbulence model. The corrections to the inlet boundary conditions reduce the error in the first half of the chord, while the turbulence model calibrations fix the overprediction of flow separation on the suction surface near the trailing edge.
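The calibration idea can be sketched as an l1-penalized least-squares fit of parameter corrections on a toy linearized model; the sensitivity matrix and data below are invented stand-ins for the RANS/cascade setting, where the l1 penalty keeps most parameters at their baseline values.

```python
# Sketch: fit sparse parameter corrections d minimizing
# ||r0 + J d||^2 + lam*||d||_1 by proximal gradient (toy problem).
import numpy as np

def calibrate(J, r0, lam, steps=300):
    """J: sensitivities of residuals to parameters; r0: baseline residual."""
    d = np.zeros(J.shape[1])
    L = np.linalg.norm(J, 2) ** 2           # Lipschitz constant of gradient
    for _ in range(steps):
        z = d - J.T @ (r0 + J @ d) / L
        d = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return d

rng = np.random.default_rng(3)
J = rng.normal(size=(40, 8))            # 40 measurements, 8 parameters
d_true = np.zeros(8)
d_true[2] = 1.0                         # only one parameter is truly off
r0 = -J @ d_true                        # baseline model-vs-data residual
d_hat = calibrate(J, r0, lam=0.1)
```

The recovered correction concentrates on the one influential parameter, mirroring how sensitivity analysis plus l1 regularization isolates the key modified parameters.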
By defining fuzzy valued simple functions and giving L1(μ) approximations of fuzzy valued integrably bounded functions by such simple functions, the paper analyses, in the L1(μ)-norm, the approximation capability of four-layer feedforward regular fuzzy neural networks for fuzzy valued integrably bounded functions F : R^n → F_c^0(R). That is, if the transfer function σ : R → R is non-polynomial and integrable on each finite interval, then F may be approximated in norm, to any degree of accuracy, by the fuzzy valued functions so defined. Finally, some real examples demonstrate the conclusions.
Seismic data regularization is an important preprocessing step in seismic signal processing. Traditional seismic acquisition methods follow the Shannon-Nyquist sampling theorem, whereas compressive sensing (CS) provides a fundamentally new paradigm for overcoming limitations in data acquisition. Besides the sparse representation of the seismic signal in some transform domain and the 1-norm reconstruction algorithm, the seismic data regularization quality of CS-based techniques strongly depends on the random undersampling scheme. For 2D seismic data, discrete uniform-based methods have been investigated, in which seismic traces are randomly sampled with equal probability. However, in theory and practice, sampling seismic traces with different probabilities is required to satisfy the assumptions of CS. Therefore, designing new undersampling schemes is imperative. We propose a Bernoulli-based random undersampling scheme and its jittered version to determine the regular traces that are randomly sampled with different probabilities, with both schemes complying with the Bernoulli process distribution. We performed experiments using the Fourier and curvelet transforms and the spectral projected gradient reconstruction algorithm for the 1-norm (SPGL1), with ten different random seeds. According to the signal-to-noise ratio (SNR) between the original and reconstructed seismic data, detailed experimental results on 2D numerical and physical simulation data show that the proposed schemes perform better overall than the discrete uniform schemes.
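One way to sketch the two mask generators (an assumed reading of the schemes, not the paper's exact construction): plain Bernoulli sampling keeps each trace independently with probability p, while the jittered variant makes one Bernoulli draw per small block so that kept traces cannot cluster as badly.

```python
# Sketch of Bernoulli and jittered-Bernoulli trace-selection masks;
# block size and probabilities are illustrative choices.
import numpy as np

def bernoulli_mask(n_traces, p, rng):
    """Keep each trace independently with probability p."""
    return rng.random(n_traces) < p

def jittered_bernoulli_mask(n_traces, block, p, rng):
    """One Bernoulli draw per block; on success keep one random (jittered)
    trace inside that block, spreading the kept traces out."""
    mask = np.zeros(n_traces, dtype=bool)
    for start in range(0, n_traces, block):
        stop = min(start + block, n_traces)
        if rng.random() < p:
            mask[rng.integers(start, stop)] = True
    return mask

rng = np.random.default_rng(0)
m1 = bernoulli_mask(200, 0.3, rng)
m2 = jittered_bernoulli_mask(200, 4, 0.9, rng)
```

The jittered mask keeps at most one trace per block, bounding the largest gap between sampled traces, which is the property that helps the SPGL1 reconstruction.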
The paper discusses the core parameters of 3D and 4D variational merging based on L1 norm regularization, namely the optimal characteristic correlation length of the background error covariance matrix and the regularization parameter. Classical 3D/4D variational merging is based on the theory that the error follows a Gaussian distribution. It involves the solution of the objective functional gradient in the minimization iteration, which requires the data to be continuous and differentiable. The classical 3D/4D variational merging method was extended, with the L1 norm used as a constraint coupled to the classical variational merging model. Experiments were carried out using a linear advection-diffusion equation as the four-dimensional prediction model, and the parameter optimization of the method is discussed. Considering the strong temporal and spatial variation of water vapor, the method is further applied to precipitable water vapor (PWV) merging by calculating reanalysis data and GNSS retrievals. Parameters were adjusted gradually to analyze the influence of the background field on the merging result, and the experimental results show that the mathematical algorithm adopted in this paper is feasible.
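The L1-constrained variational merge can be sketched on a one-dimensional toy field; the observation operator, error variances, and the first-difference regularizer below are illustrative assumptions, with the L1 term handled by a subgradient (the nondifferentiability the abstract refers to).

```python
# Toy variational merge: minimize
#   J(x) = ||x - xb||^2/(2*sb2) + ||H x - y||^2/(2*so2) + lam*||D x||_1
# by (sub)gradient descent; xb = background, y = observations.
import numpy as np

def merge(xb, H, y, sb2, so2, lam, steps=2000, lr=0.05):
    x = xb.copy()
    D = np.diff(np.eye(len(xb)), axis=0)      # first-difference operator
    for _ in range(steps):
        grad = (x - xb) / sb2 + H.T @ (H @ x - y) / so2 \
               + lam * D.T @ np.sign(D @ x)   # subgradient of the L1 term
        x -= lr * grad
    return x

n = 20
xb = np.zeros(n)                  # background field says the value is 0
H = np.eye(n)[::2]                # observe every other grid point
y = np.ones(H.shape[0])           # observations say the field is about 1
x_a = merge(xb, H, y, sb2=4.0, so2=0.25, lam=0.01)
```

Because the observation variance (0.25) is much smaller than the background variance (4.0), the analysis moves close to the observations at observed points while the L1 term only weakly couples the unobserved neighbors.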
L(d, 1)-labeling is a kind of graph coloring problem arising from frequency assignment in radio networks, in which adjacent nodes must receive colors that are at least d apart, while nodes at distance two from each other must receive different colors. We focus on L(d, 1)-labeling of regular tilings for d ≥ 3, since the cases d = 0, 1, and 2 have been studied by Calamoneri and Petreschi. For all three kinds of regular tilings, we give their L(d, 1)-labeling numbers for any integer d ≥ 3. Therefore, combined with the results of Calamoneri and Petreschi, the L(d, 1)-labeling numbers of regular tilings may be determined completely for any nonnegative integer d.
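The L(d, 1) conditions themselves can be checked mechanically; the following sketch verifies a labeling of a small square-tiling fragment with d = 3 (the labels are a hand-made example, not an optimal labeling from the paper).

```python
# Sketch of an L(d,1)-labeling checker: adjacent nodes differ by >= d,
# nodes at distance two get distinct labels.
def is_ld1_labeling(adj, labels, d):
    for u in adj:
        for v in adj[u]:
            if abs(labels[u] - labels[v]) < d:   # adjacency condition
                return False
        # nodes at distance exactly two from u
        two_away = {w for v in adj[u] for w in adj[v]} - set(adj[u]) - {u}
        if any(labels[u] == labels[w] for w in two_away):
            return False
    return True

# 2x3 fragment of the square tiling, nodes numbered row-major:
#   0-1-2
#   |   |
#   3-4-5
adj = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5],
       3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
labels = {0: 0, 1: 3, 2: 6, 3: 5, 4: 8, 5: 1}
ok = is_ld1_labeling(adj, labels, d=3)
```

The span of labels used here (0 through 8) illustrates the quantity the L(d, 1)-labeling number measures on the full tiling.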
Let us consider the following elliptic systems of second order:

-D_α(A_i^α(x, u, Du)) = B_i(x, u, Du), i = 1, …, N, x ∈ Ω ⊂ R^n, n ≥ 3, (1)

and suppose:
ⅰ) |A_i^α(x, u, Du)| ≤ L(1 + |Du|);
ⅱ) (1 + |p|)^(-1) A_i^α(x, u, p) are Hölder-continuous functions with some exponent δ on Ω̄ × R^N, uniformly with respect to p;
ⅲ) A_i^α(x, u, p) are differentiable functions in p with bounded and continuous derivatives;
ⅳ) an ellipticity condition (with constant λ) holds;
ⅴ) for all u ∈ H_loc^1(Ω, R^N) ∩ L^(n(γ-1)/(2-γ))(Ω, R^N), B(x, u, Du) is measurable and |B(x, u, p)| ≤ a(|p|^γ + |u|^τ) + b(x), where 1 + 2/n < γ < 2, τ ≤ max((n+2)/(n-2), (γ-1)/(2-γ) - ε) for all ε > 0, and b(x) ∈ L^(2n/(n+2), n²/(n+2)+ε)(Ω) for all ε > 0.

Remarks. Only bounded open sets Ω will be considered in this paper; for all p ≥ 1, λ ≥ 0, L^(p,λ)(Ω) denotes the corresponding Morrey space.

Under assumptions ⅰ)-ⅳ), Giaquinta and Modica have proved the regularity of the H^1 weak solutions of (1) under the controllable growth condition |B| ≤ a(|p|^γ + |u|^((n+2)/(n-2))) + b, 0 < γ ≤ 1 + 2/n, and of the H^1 ∩ L^∞ weak solutions of (1) under the natural growth condition |B| ≤ a|p|^2 + b with the smallness condition 2aM < λ (|u| ≤ M), which implies that the H^1 ∩ L^∞ weak solutions have the same regularity in the case 1 + 2/n < γ < 2. In the case γ = 2, many counterexamples (see [2]) showed that u must be in H^1 ∩ L^∞, while in the case 1 + 2/n < γ < 2 we consider the H^1 ∩ L^(n(γ-1)/(2-γ)) weak solutions of (1), weaken the integrability conditions upon them (from L^∞ to L^(n(γ-1)/(2-γ))), and obtain the same regularity results. Finally, we show that the exponent n(γ-1)/(2-γ) cannot be decreased any further while preserving the regularity results.

Definition 1. We call u ∈ H^1 ∩ L^(n(γ-1)/(2-γ))(Ω, R^N) a weak solution of (1) provided that the corresponding integral identity holds for all smooth test functions with compact support. We use the convention that repeated indices are summed; i, j go from 1 to N and α, β from 1 to n.
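The integral identity in Definition 1 was lost in extraction; it presumably takes the standard weak form of system (1). A sketch, assuming the usual test space:

```latex
% Standard weak formulation of system (1); summation over repeated
% indices, with test functions \varphi \in C_0^\infty(\Omega,\mathbb{R}^N).
\int_\Omega A_i^\alpha(x,u,Du)\, D_\alpha \varphi^i \,dx
  \;=\; \int_\Omega B_i(x,u,Du)\, \varphi^i \,dx ,
\qquad \forall\, \varphi \in C_0^\infty(\Omega,\mathbb{R}^N).
```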
Funding (flip Markov chains): National Natural Science Foundation of China (No. 11671258).
Funding (BLT reconstruction): supported by the National Natural Science Foundation of China (Nos. 61401264, 11574192), the Natural Science Research Plan Program in Shaanxi Province of China (No. 2015JM6322), and the Fundamental Research Funds for the Central Universities (No. GK201603025).
Funding (TomoSAR imaging): This work was supported by the Fundamental Research Funds for the Central Universities (NE2020004), the National Natural Science Foundation of China (61901213), the Natural Science Foundation of Jiangsu Province (BK20190397), the Aeronautical Science Foundation of China (201920052001), the Young Science and Technology Talent Support Project of the Jiangsu Science and Technology Association, and the Foundation of the Graduate Innovation Center at Nanjing University of Aeronautics and Astronautics (kfjj20200419).
Funding (HNF1A/HNF4A/FOXA2 study): the European Structural and Investment Funded Grant "CardioMetabolic" (#KK.01.2.1.02.0321), the Croatian National Centre of Research Excellence in Personalized Healthcare Grant (#KK.01.1.1.01.0010), the European Regional Development Fund Grant, project "CRISPR/Cas9-CasMouse" (#KK.01.1.1.04.0085), and the European Structural and Investment Funded Project of the Centre of Competence in Molecular Diagnostics (#KK.01.2.2.03.0006).
Funding (compressor flow field prediction): the support of the National Natural Science Foundation of China (Nos. 52106053, 92152301).
Funding (fuzzy neural networks): Supported by the National Natural Science Foundation of China (No. 69872039).
Funding (seismic data regularization): financially supported by the 2011 Prospective Research Project of SINOPEC (P11096).
Funding (variational merging): Supported by the Open Foundation Project of the Shenyang Institute of Atmospheric Environment, China Meteorological Administration (2016SYIAE14), the Natural Science Foundation of Anhui Province, China (1708085QD89), and the National Natural Science Foundation of China (41805080).
Funding (elliptic systems): This work is supported in part by the Foundation of Zhongshan University, Advanced Research Center.