Recent seismic events have raised concerns over the safety and vulnerability of reinforced concrete moment-resisting frame (RC-MRF) buildings. The seismic response of such buildings is greatly dependent on the computational tools used and the inherent assumptions in the modelling process. Thus, it is essential to investigate the sensitivity of the response demands to the corresponding modelling assumptions. Many parameters and assumptions must be justified to generate effective structural finite element (FE) models of buildings that simulate lateral behaviour and evaluate seismic design demands. As such, the present study focuses on the development of reliable FE models with various levels of refinement. The effects of the FE modelling assumptions on the seismic response demands used in the design of buildings are investigated. Because the predictive ability of an FE model is tied to the accuracy of numerical analysis, a numerical analysis is performed for a series of symmetric buildings in active seismic zones. The results of the seismic response demands are presented in a comparative format to confirm drift and strength limit requirements. A proposed model is formulated based on a simplified modelling approach, where the most refined model is used to calibrate the simplified model.
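A drift-limit check of the kind mentioned above can be sketched as follows. The storey heights, floor displacements, and the 2% drift limit below are illustrative assumptions, not data from the study; in practice the displacements would come from the FE analysis.

```python
# Illustrative interstorey drift check for a frame model (hypothetical values).

def interstorey_drift_ratios(displacements, storey_heights):
    """Drift ratio of each storey: relative displacement / storey height."""
    drifts = []
    for i in range(1, len(displacements)):
        rel = displacements[i] - displacements[i - 1]
        drifts.append(rel / storey_heights[i - 1])
    return drifts

# Assumed lateral displacements at each floor level (m), ground floor first.
disp = [0.0, 0.010, 0.022, 0.031]
heights = [3.0, 3.0, 3.0]          # storey heights (m), assumed
limit = 0.02                       # e.g. a 2% code drift limit (assumed)

ratios = interstorey_drift_ratios(disp, heights)
ok = all(r <= limit for r in ratios)
print([round(r, 4) for r in ratios], "within limit:", ok)
```

The same loop applies unchanged to any number of storeys; only the displacement profile from the analysis changes.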
Background: Familiarity with a simulation platform can seduce modellers into accepting untested assumptions for convenience of implementation. These assumptions may have consequences greater than commonly suspected, and it is important that modellers remain mindful of assumptions and remain diligent with sensitivity testing. Methods: Familiarity with a technique can lead to complacency, and alternative approaches and software can reveal untested assumptions. Visual modelling environments based on system dynamics may help to make critical assumptions more evident by offering an accessible visual overview and empowering a focus on representational rather than computational efficiency. This capacity is illustrated using a cohort-based forest growth model developed for mixed-species forest. Results: The alternative model implementation revealed that untested assumptions in the original model could have substantial influence on simulated outcomes. Conclusions: An important implication is that modellers should remain conscious of all assumptions, consider alternative implementations that reveal assumptions more clearly, and conduct sensitivity tests to inform decisions.
The article is devoted to a hitherto never undertaken application of an almost unknown logically formalized axiomatic epistemology-and-axiology system called "Sigma-V" to Newton's Third Law of mechanics. The author has continued investigating the extraordinary (paradigm-breaking) hypothesis of a formal-axiological interpretation of Newton's mathematical principles of natural philosophy and has thus arrived at a discrete mathematical model of a system of formal axiology of nature, by extracting and systematically studying its proper algebraic aspect. Along with the proper algebraic machinery, the axiomatic (hypothetico-deductive) method is exploited systematically in this investigation. The research results are the following. 1) Newton's Third Law of mechanics has been modeled by a formal-axiological equation of a two-valued algebraic system of metaphysics as formal axiology. (A precise definition of the algebraic system is provided.) The formal-axiological equation has been established (and examined) in this algebraic system by accurately computing compositions of the relevant evaluation-functions. Precise tabular definitions of the evaluation-functions are given. 2) The remarkable formula representing Newton's Third Law (in the relevant physical interpretation of the formal theory Sigma-V) has been derived logically in Sigma-V from the presumption of the a-priori-ness of knowledge. A precise axiomatic definition of the nontrivial notion "a-priori-ness of knowledge" is given. The formal derivation is implemented in strict accordance with the rigor standard of D. Hilbert's formalism; hence, checking the formal derivation submitted in this article is not a difficult task.
With respect to proper theoretical physics, the formal inference is a nontrivial scientific novelty which has not yet been discussed or published elsewhere.
This study is focused on the mineralogical and chemical compositions, depositional environment, and mechanism of formation of the sediments of the Kyzyltokoy basin. By an interpretation of formation, the sedimentation environment of the basin was separated into three general conditions: a condition where the glauconitization process was interrupted, where the process reached completion, and where decay of glauconites occurred, i.e., the beginning and interruption in the middle of glauconitization, completion of the
Recently, many bit commitment schemes have been presented. This paper presents a new practical bit commitment scheme based on Schnorr's one-time knowledge proof scheme, where the use of the cut-and-choose method and many random exam candidates in the protocols are replaced by a single challenge number. Therefore the proposed bit commitment scheme is more efficient and practical than the previous schemes. In addition, the security of the proposed scheme under the factoring assumption is proved, thus clarifying the cryptographic basis of the proposed scheme.
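To make the commit/open structure concrete, here is a sketch of a Pedersen-style commitment; this is NOT the paper's Schnorr-based scheme, only a minimal illustration of the hiding/binding idea, and the tiny parameters are deliberately insecure.

```python
# Pedersen-style commitment sketch (not the paper's scheme):
# commit(m, r) = g^m * h^r mod p.  Tiny parameters for illustration only;
# real deployments need large, properly generated groups.
import random

p = 1019          # small prime (insecure, illustrative)
q = 509           # (p - 1) // 2, order of the subgroup of squares
g = 4             # generator of the order-q subgroup (4 = 2^2 is a square)
h = 9             # second generator (3^2); log_g(h) must remain unknown

def commit(m, r):
    return (pow(g, m, p) * pow(h, r, p)) % p

def verify(c, m, r):
    return c == commit(m, r)

r = random.randrange(q)
c = commit(7, r)           # commit to the value 7
print(verify(c, 7, r))     # opening with the committed value succeeds
print(verify(c, 8, r))     # opening with a different value fails
```

The commitment hides m because h^r randomizes it, and binds the committer because opening to two values would reveal the discrete log of h.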
Based on the general theory of elastic plates which abandons the Kirchhoff-Love assumption of the classical theory, this paper establishes a first-order approximation theory of elastic circular plates with a non-Kirchhoff-Love assumption, and presents an analytic solution to the axisymmetric problem of elastic circular plates with clamped boundary under uniformly distributed load. By comparison with the classical solution for thin circular plates, it is verified that the new solution is closer to the experimental results than the classical solution. By virtue of the new theory, the influence of the diameter-to-thickness ratio upon the precision of the classical theory is examined.
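For reference, the classical Kirchhoff-Love benchmark against which such refined solutions are compared is the clamped circular plate of radius a under uniform load q:

```latex
w(r) = \frac{q}{64 D}\left(a^{2} - r^{2}\right)^{2},
\qquad
D = \frac{E h^{3}}{12\left(1 - \nu^{2}\right)},
\qquad
w_{\max} = w(0) = \frac{q a^{4}}{64 D},
```

where E is Young's modulus, h the thickness, and ν Poisson's ratio. Refined (non-Kirchhoff-Love) theories add thickness-dependent corrections to this deflection, which is why the discrepancy grows as the diameter-to-thickness ratio decreases.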
SUMMARY Linear regression is widely used in biomedical and psychosocial research. A critical assumption that is often overlooked is homoscedasticity. Unlike normality, the other assumption on data distribution, homoscedasticity is often taken for granted when fitting linear regression models. However, contrary to popular belief, this assumption actually has a bigger impact on the validity of linear regression results than normality. In this report, we use Monte Carlo simulation studies to investigate and compare their effects on the validity of inference.
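A Monte Carlo check in the spirit described can be sketched as follows; the sample size, variance pattern, and replication count are arbitrary illustrative choices, not the report's settings.

```python
# Monte Carlo sketch: effect of heteroscedasticity on the type-I error of the
# OLS slope test.  All settings (n, variance pattern, reps) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 2000
x = np.linspace(1.0, 10.0, n)

def slope_t(y):
    """t statistic for the slope in a simple OLS regression of y on x."""
    xc = x - x.mean()
    b1 = (xc @ (y - y.mean())) / (xc @ xc)
    resid = y - y.mean() - b1 * xc
    se = np.sqrt((resid @ resid) / (n - 2) / (xc @ xc))
    return b1 / se

def rejection_rate(sigma):
    """Share of simulations rejecting H0: slope = 0 (which is true here)."""
    hits = 0
    for _ in range(reps):
        y = rng.normal(0.0, sigma)          # true slope is 0
        hits += abs(slope_t(y)) > 1.96
    return hits / reps

homo = rejection_rate(np.full(n, 3.0))      # constant error variance
hetero = rejection_rate(0.6 * x)            # error sd grows with x
print(f"type-I error  homoscedastic: {homo:.3f}  heteroscedastic: {hetero:.3f}")
```

Under constant variance the empirical rejection rate stays near the nominal 5%, while variance growing with x inflates it, which is the distortion of inference the report examines.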
The design and analysis of authenticated key exchange protocols is an important problem in the information security area. At present, the extended Canetti-Krawczyk (eCK) model provides the strongest definition of security for two-party key agreement protocols; however, most current secure protocols cannot be proved secure without the Gap assumption. To avoid this, using twinning-key technology we propose a new two-party key agreement protocol, TUP, obtained by modifying the UP protocol; then, in conjunction with the trapdoor test, we prove strictly that the new protocol is secure in the eCK model. Compared with previous protocols, the security assumption of the new proposal is more standard and weaker, and it also solves an open problem posed in ProvSec'09.
The classical small-deflection theory of elastic plates is based on the Kirchhoff-Love assumptions [1, 2]. These are used on the basis of the thinness of the plate and the smallness of the deflection. In terms of Cartesian tensor coordinates x_i (i = 0, 1, 2), these basic assumptions are: (1) the transversal normal strain may be neglected, i.e. e_00 = 0; (2) the transversal shear strain may be neglected, i.e. e_0α = 0 (α = 1, 2); (3) the transversal normal stress may be neglected, i.e. σ_00 = 0. In the classical theory of elastic plates, the strain-displacement relations and the corresponding stress-displacement relations are established on the basis of these assumptions, and the equations of the classical theory for a set of undetermined quantities defined on the middle surface are established by integrating the three-dimensional equations of equilibrium of stress over the thickness. In the previous papers [3, 4, 5], an approximation theory is given on the basis of the three-dimensional theory of elastic plates without using the Kirchhoff-Love assumptions. However, no uniqueness study is given, and the boundary conditions have never been studied. In this paper, the same problems are studied on the basis of the generalized variational principle of the three-dimensional theory of elastic bodies [6]. The stationary conditions of variation give a unique and complete set of field equations and the related boundary conditions for the approximation theory. In this paper, the first-order approximation theory is studied in detail.
To understand any statistical tool requires not only an understanding of the relevant computational procedures but also an awareness of the assumptions upon which the procedures are based, and the effects of violations of these assumptions. In our earlier articles (Laverty, Miket, & Kelly [1]) and (Laverty & Kelly [2] [3]) we used Microsoft Excel to simulate both a Hidden Markov model and heteroskedastic models, showing different realizations of these models and the performance of the techniques for identifying the underlying hidden states using simulated data. The advantage of using Excel is that the simulations are regenerated when the spreadsheet is recalculated, allowing the user to observe the performance of the statistical technique under different realizations of the data. In this article we will show how to use Excel to generate data from a one-way ANOVA (Analysis of Variance) model and how the statistical methods behave both when the fundamental assumptions of the model hold and when these assumptions are violated. The purpose of this article is to provide tools for individuals to gain an intuitive understanding of these violations using this readily available program.
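The same experiment can be reproduced outside Excel; below is a minimal sketch of the one-way ANOVA data-generating model and a hand-computed F statistic. The group means, variances, and sizes are invented for illustration, not taken from the article.

```python
# One-way ANOVA sketch: generate data from y_ij = mu_i + e_ij and compute
# the F statistic by hand.  Group settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def f_statistic(groups):
    """Classical one-way ANOVA F = (SSB / (k-1)) / (SSW / (N-k))."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    ssb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)   # between
    ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)        # within
    return (ssb / (k - 1)) / (ssw / (n_total - k))

# Assumptions hold: equal variances, normal errors, common mean (null true).
equal = [rng.normal(10, 2, 30) for _ in range(3)]
# Violated: same means but very unequal variances and group sizes.
unequal = [rng.normal(10, s, n) for s, n in [(1, 10), (2, 30), (8, 5)]]

print(f"F (assumptions hold): {f_statistic(equal):.2f}")
print(f"F (variances unequal): {f_statistic(unequal):.2f}")
```

Regenerating the samples repeatedly, as a recalculating spreadsheet does, shows how the distribution of F drifts away from its nominal behaviour once the equal-variance assumption fails.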
One of the key assumptions in respondent-driven sampling (RDS) analysis, called the "random selection assumption," is that respondents randomly recruit their peers from their personal networks. The objective of this study was to verify this assumption in empirical data on egocentric networks. Methods: We conducted an egocentric network study among young drug users in China, in which RDS was used to recruit this hard-to-reach population. If the random recruitment assumption holds, the RDS-estimated population proportions should be similar to the actual population proportions. Following this logic, we first calculated the population proportions of five visible variables (gender, age, education, marital status, and drug use mode) among the total drug-use alters from which the RDS sample was drawn, and then estimated the RDS-adjusted population proportions and their 95% confidence intervals in the RDS sample. Theoretically, if the random recruitment assumption holds, the 95% confidence intervals estimated in the RDS sample should include the population proportions calculated from the total drug-use alters. Results: The evaluation of the RDS sample indicated its success in reaching convergence of the RDS compositions and including a broad cross-section of the hidden population. Findings demonstrate that the random selection assumption holds for three group traits, but not for the other two. Specifically, egos randomly recruited subjects of different age groups, marital status, or drug use modes from their network alters, but not of different genders or education levels. Conclusions: This study demonstrates the occurrence of non-random recruitment, indicating that the recruitment of subjects in this RDS study was not completely at random.
Future studies are needed to assess the extent to which the population proportion estimates can be biased when the violation of the assumption occurs in some group traits in RDS samples.
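The consistency check described above reduces to asking whether each confidence interval covers the known alter-population proportion. A minimal sketch follows, with made-up numbers rather than the study's data; note that actual RDS estimators reweight by network degree, whereas this sketch uses the naive sample proportion.

```python
# Sketch of the verification logic: does the 95% CI of a sample-based
# proportion estimate cover the known population proportion?
# All numbers are hypothetical, not the study's data.
import math

def prop_ci(p_hat, n, z=1.96):
    """Normal-approximation 95% confidence interval for a proportion."""
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return (p_hat - z * se, p_hat + z * se)

def assumption_holds(pop_p, p_hat, n):
    """True if the CI around the estimate covers the population value."""
    lo, hi = prop_ci(p_hat, n)
    return lo <= pop_p <= hi

# Hypothetical traits: (proportion among all alters, sample estimate, n).
traits = {"male": (0.70, 0.81, 400), "married": (0.30, 0.33, 400)}
for name, (pop_p, p_hat, n) in traits.items():
    print(name, "consistent with random recruitment:",
          assumption_holds(pop_p, p_hat, n))
```

Here the "married" trait passes (the CI covers 0.30) while the "male" trait fails, mirroring the paper's finding that the assumption holds for some visible traits but not others.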
In this paper, the theory of elastic circular plates without the classical Kirchhoff-Love assumptions is established on the basis of a previous paper. In this theory, no classical Kirchhoff-Love assumptions are pre-assumed, and the axisymmetric analytic solution of a fixed circular plate under the action of uniform pressure is obtained. Comparison of this solution with the known classical solution shows that the new solution agrees better with the experimental measurements than the classical solution. This also gives the quantitative effect of thickness on the deflection of circular plates of moderate thickness.
At present, although knowledge graphs have been widely used in various fields such as recommendation systems, question answering systems, and intelligent search, there are persistent quality problems such as knowledge omissions and errors. Quality assessment and control, as an important means to ensure the quality of knowledge, can make applications based on knowledge graphs more complete and more accurate by reasonably assessing the knowledge graphs while fixing and improving the quality problems. Therefore, as an indispensable part of the knowledge graph construction process, the results of quality assessment and control determine the usefulness of the knowledge graph. Among them, the assessment and enhancement of completeness, as an important part of the assessment and control phase, determine whether the knowledge graph can fully reflect objective phenomena and reveal potential connections among entities. In this paper, we review specific techniques of completeness assessment and classify them in terms of closed world assumptions, open world assumptions, and partial completeness assumptions. The purpose of this paper is to further promote the development of knowledge graph quality control and to lay the foundation for subsequent research on the completeness assessment of knowledge graphs by reviewing and classifying completeness assessment techniques.
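The three assumptions differ in how an absent triple is judged; a toy contrast (entities and triples invented for illustration):

```python
# Toy contrast of completeness assumptions for a knowledge graph.
# CWA: an absent triple is false.  OWA: an absent triple is unknown.
# PCA: an absent (s, p, o) is false only if some (s, p, o') is present.
kg = {("alice", "born_in", "paris"), ("alice", "works_at", "acme")}

def cwa(triple):
    return triple in kg                      # absent => False

def owa(triple):
    return True if triple in kg else None    # absent => unknown (None)

def pca(triple):
    s, p, o = triple
    if triple in kg:
        return True
    populated = any(s2 == s and p2 == p for (s2, p2, o2) in kg)
    return False if populated else None      # false only if (s, p) is populated

q1 = ("alice", "born_in", "lyon")   # (s, p) populated with another object
q2 = ("alice", "knows", "bob")      # (s, p) never observed
print(cwa(q1), owa(q1), pca(q1))    # False None False
print(cwa(q2), owa(q2), pca(q2))    # False None None
```

Completeness assessment techniques inherit their behaviour from this choice: a CWA-based metric counts every missing triple as an error, while PCA-based metrics only penalize gaps in predicates the graph already covers for that subject.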
This paper presents a thorough study of the effect of the Constant Eddy Viscosity (CEV) assumption on the optimization of a discrete adjoint-based design optimization system. First, the algorithms of the adjoint methods with and without the CEV assumption are presented, followed by a discussion of the two methods' solution stability. Second, the sensitivity accuracy, adjoint solution stability, and Root Mean Square (RMS) residual convergence rates at both design and off-design operating points are compared between the CEV and full-viscosity adjoint methods in detail. Finally, a multi-point steady aerodynamic and a multi-objective unsteady aerodynamic and aeroelastic coupled design optimization are performed to study the impact of the CEV assumption on optimization. Two gradient-based optimizers, the Sequential Least-Square Quadratic Programming (SLSQP) method and the Steepest Descent Method (SDM), are respectively used to draw a firm conclusion. The results from the transonic NASA Rotor 67 show that the CEV assumption can deteriorate RMS residual convergence rates and even lead to solution instability, especially at a near-stall point. Compared with the steady cases, the effect of the CEV assumption on unsteady sensitivity accuracy is much stronger. Nevertheless, the CEV adjoint solver is still capable of achieving optimization goals to some extent, particularly if the flow under consideration is benign.
Graph Neural Networks (GNNs) play a significant role in tasks related to homophilic graphs. Traditional GNNs, based on the assumption of homophily, employ low-pass filters over neighboring nodes to achieve information aggregation and embedding. However, in heterophilic graphs, nodes from different categories often establish connections, while nodes of the same category are located further apart in the graph topology. This characteristic poses challenges to traditional GNNs, leading to the issues of "distant node modeling deficiency" and "failure of the homophily assumption". In response, this paper introduces the Spatial-Frequency domain Adaptive Heterophilic Graph Neural Network (SFA-HGNN), which integrates adaptive embedding mechanisms for both the spatial and frequency domains to address the aforementioned issues. Specifically, for the first problem, we propose the "Distant Spatial Embedding Module", aiming to select and aggregate distant nodes through high-order random-walk transition probabilities to enhance modeling capabilities. For the second issue, we design the "Proximal Frequency Domain Embedding Module", constructing adaptive filters to separate the high- and low-frequency signals of nodes, and introduce frequency-domain guided attention mechanisms to fuse the relevant information, thereby reducing the noise introduced by the failure of the homophily assumption. We deploy SFA-HGNN on six publicly available heterophilic networks, achieving state-of-the-art results on four of them. Furthermore, we elaborate on the hyperparameter selection mechanism and validate the performance of each module through experimentation, demonstrating a positive correlation between "node structural similarity", "node attribute vector similarity", and "node homophily" in heterophilic networks.
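The low-/high-pass split mentioned above can be written with the normalized adjacency matrix; here is a minimal numpy sketch on a 4-node path graph. This illustrates only the generic filtering idea, not the SFA-HGNN architecture itself.

```python
# Low- vs high-pass graph filtering sketch.  With normalized adjacency
# A_hat = D^{-1/2} A D^{-1/2}, a low-pass step (I + A_hat)/2 @ x smooths a
# node signal x, while a high-pass step (I - A_hat)/2 @ x sharpens it.
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4-node path graph
d = A.sum(axis=1)                           # node degrees
A_hat = A / np.sqrt(np.outer(d, d))         # D^{-1/2} A D^{-1/2}
I = np.eye(4)

x = np.array([1.0, -1.0, 1.0, -1.0])        # alternating "heterophilic" signal
low = 0.5 * (I + A_hat) @ x                 # low-pass: averages neighbors
high = 0.5 * (I - A_hat) @ x                # high-pass: differences neighbors

print("low-pass :", np.round(low, 3))
print("high-pass:", np.round(high, 3))
```

On this alternating signal (neighbors always disagree, as in a heterophilic graph), the low-pass filter nearly cancels the features while the high-pass filter preserves them, which is why heterophilic GNNs need the high-frequency branch.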
In contrast to the solutions of applied mathematics to Zeno's paradoxes, I focus on the concept of motion and show that, by distinguishing two different forms of motion, Zeno's apparent paradoxes are not paradoxical at all. Zeno's paradoxes indirectly prove that distances are not composed of extensionless points and, in general, that a higher dimension cannot be completely composed of lower ones. Conversely, lower dimensions can be understood as special cases of higher dimensions. To illustrate this approach, I consider Cantor's only apparent proof that the real numbers are uncountable. However, his widely accepted indirect proof has the disadvantage that it depends on whether there is another way to make the real numbers countable. Cantor rightly assumes that there can be no smallest number between 0 and 1, and therefore no beginning of counting. For this reason he arbitrarily lists the real numbers in order to show with his diagonal method that this list can never be complete. The situation is different if we start with the largest number between 0 and 1 (0.999…) and use the method of an inverted triangle, which can be understood as a special fractal form. Here we can construct a vertical and a horizontal stratification with which it is actually possible to construct all real numbers between 0 and 1 without exception. Each column is infinite, and each number in that column is the starting point of a new triangle, while each row is finite. Even in a simple sine curve, we experience finiteness with respect to the y-axis and infinity with respect to the x-axis. The first parts of this article show that Zeno's assumptions contradict the concept of motion as such, so it is not surprising that this misconstruction leads to contradictions.
In the last part, I discuss Cantor's diagonal method and explain the method of an inverted triangle that is internally structured like a fractal by repeating this inverted triangle at each column. The consequence is that we encounter two very different methods of counting. Vertically it is continuous, horizontally it is discrete. While Frege, Tarski, Cantor, Gödel and the Vienna Circle tried to derive the higher dimension from the lower, a procedure that always leads to new contradictions and antinomies (Tarski, Russell), I take the opposite approach here, in which I derive the lower dimension from the higher. This perspective seems to fail because Tarski, Russell, Wittgenstein, and especially the Vienna Circle have shown that the completeness of the absolute itself is logically contradictory. For this reason, we agree with Hegel in assuming that we can never fully comprehend the Absolute, but only its particular manifestations—otherwise we would be putting ourselves in the place of the Absolute, or even God. Nevertheless, we can understand the Absolute in its particular expressions, as I will show with the modest example of the triangle proof of the combined horizontal and vertical countability of the real numbers, which I developed in rejection of Cantor's diagonal proof.
Inspired by the framework of Boyen, in this paper an attribute-based signature (ABS) scheme from a lattice assumption is proposed. In this attribute-based signature scheme, an entity's attribute set corresponds to the concatenation of a lattice matrix with the sum of some random matrices, and the signature vector is generated by using the Preimage Sampling algorithm. Compared with current attribute-based signature schemes, this scheme can resist quantum attacks and enjoys a shorter public key, a smaller signature size, and higher efficiency.
In this work, the problem of the dependency of predicted rainfall upon the grid size in mesoscale numerical weather prediction models is addressed. We argue that this problem is due to (i) the violation of the quasi-equilibrium assumption, which underlies most existing convective parameterization schemes and states that convective activity may be considered in instantaneous equilibrium with the larger-scale forcing; and (ii) the violation of the hydrostatic approximation, made in most mesoscale models, which would induce too strong a large-scale circulation in the occurrence of strong convection. On the contrary, meso-β and meso-α scale models, i.e. models with horizontal grid sizes ranging from 10 to 100 km, have the capacity to resolve motions with characteristic scales close to those of the convective motions. We hypothesize that a possible way to eliminate this problem is (i) to take a prognostic approach to the parameterization of deep convection, whereby the quantities that describe the activity of convection are no longer diagnosed from the instantaneous value of the large-scale forcing, but predicted by time-dependent equations that integrate the large-scale forcing over time; and (ii) to introduce a mesoscale parameter which varies systematically with the grid size of the numerical model, in order to damp the large-scale circulation usually over-induced when the grid size becomes smaller (from 100 km to 10 km). We propose an implementation of this idea in the frame of one existing scheme, already tested and used for a long time at the French Weather Service. The results of the test through one-dimensional experiments with the Phase III GATE data are reported in this paper; those of its implementation in the three-dimensional model with the OSCAR data will be reported in a companion paper.
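The prognostic idea in (i), replacing an instantaneous balance by a time-integrated one, is commonly written as a relaxation equation. The following is an illustrative form with assumed symbols, not the scheme's actual variables:

```latex
% Diagnostic (quasi-equilibrium) closure: the convective activity A is
% assumed to balance the large-scale forcing F instantaneously,
A(t) = A_{\mathrm{eq}}\bigl(F(t)\bigr).
% Prognostic closure: A adjusts toward equilibrium over a timescale \tau,
% so that it effectively integrates the forcing over time,
\frac{\mathrm{d}A}{\mathrm{d}t}
  = \frac{A_{\mathrm{eq}}\bigl(F(t)\bigr) - A(t)}{\tau}.
```

With a finite adjustment timescale τ, rapid grid-scale fluctuations of the forcing are smoothed out of the convective response, which is the mechanism the paper invokes to weaken the grid-size dependency.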
A tightly secure cryptographic scheme refers to a construction with a tight security reduction to a hardness assumption, where the reduction loss is a small constant. A scheme with tight security is preferred in practice since it can be implemented using smaller parameters to improve efficiency. Recently, Bader et al. (EUROCRYPT 2016) proposed a comprehensive study of the impossibility of tight security reductions for certain (e.g., key-unique) public-key cryptographic schemes in the multi-user setting with adaptive corruptions (MU-C), built upon non-interactive assumptions. Assumptions of the one-more version, such as one-more computational Diffie-Hellman (n-CDH), are variants of the standard assumptions and have found various applications. However, whether it is possible to have tightly secure key-unique schemes from the one-more assumptions, or whether the impossibility results also hold for these assumptions, remained unknown. In this paper, we give affirmative answers to the above question, i.e., we can have efficient key-unique public-key cryptographic schemes with tight security built upon the one-more assumptions. Specifically, we propose a digital signature scheme and an encryption scheme, both of which are key-unique and have tight MU-C security under the one-more computational Diffie-Hellman (n-CDH) assumption. Our results also reflect, from another aspect, that there indeed exists a gap between the standard assumptions and their one-more counterparts.
In this paper, the relationship between argumentation and closed world reasoning for disjunctive information is studied. In particular, the authors propose a simple and intuitive generalization of the closed world assumption (CWA) for general disjunctive deductive databases (with default negation). This semantics, called DCWA, allows a natural argumentation-based interpretation and can be used to represent reasoning about disjunctive information. We compare DCWA with GCWA and prove that DCWA extends Minker's GCWA to the class of disjunctive databases with default negation. We also compare our semantics with some related approaches. In addition, the computational complexity of DCWA is investigated.
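For the disjunctive case, Minker's GCWA treats an atom as false exactly when it appears in no minimal model. A brute-force sketch for a tiny propositional disjunctive database follows; the example clauses are invented, and real deductive databases use first-order rules rather than this exhaustive enumeration.

```python
# Brute-force GCWA sketch: an atom may be assumed false iff it belongs to no
# minimal model of the (propositional, disjunctive) database.
from itertools import chain, combinations

atoms = ["a", "b", "c"]
# Disjunctive clauses: each is a set of atoms, at least one must be true.
db = [{"a", "b"}]                    # the single clause "a or b"; c unmentioned

def models(db, atoms):
    """All truth assignments (sets of true atoms) satisfying every clause."""
    subsets = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if all(clause & set(s) for clause in db)]

def minimal_models(db, atoms):
    ms = models(db, atoms)
    return [m for m in ms if not any(other < m for other in ms)]

def gcwa_false(atom, db, atoms):
    return all(atom not in m for m in minimal_models(db, atoms))

print("minimal models:", minimal_models(db, atoms))   # {a} and {b}
for x in atoms:
    print(f"GCWA assumes not-{x}:", gcwa_false(x, db, atoms))
```

For "a or b", naive CWA would derive both not-a and not-b (neither atom is provable), contradicting the clause; GCWA derives only not-c, since a and b each occur in some minimal model. DCWA is designed to extend exactly this behaviour to databases with default negation.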
Funding (RC-MRF seismic response study): Scientific Research Deanship, Taibah University, Grant No. 6363/436
文摘Recent seismic events have raised concerns over the safety and vulnerability of reinforced concrete moment resisting frame "RC-MRF" buildings. The seismic response of such buildings is greatly dependent on the computational tools used and the inherent assumptions in the modelling process. Thus, it is essential to investigate the sensitivity of the response demands to the corresponding modelling assumption. Many parameters and assumptions are justified to generate effective structural finite element(FE) models of buildings to simulate lateral behaviour and evaluate seismic design demands. As such, the present study focuses on the development of reliable FE models with various levels of refinement. The effects of the FE modelling assumptions on the seismic response demands on the design of buildings are investigated. the predictive ability of a FE model is tied to the accuracy of numerical analysis; a numerical analysis is performed for a series of symmetric buildings in active seismic zones. The results of the seismic response demands are presented in a comparative format to confirm drift and strength limits requirements. A proposed model is formulated based on a simplified modeling approach, where the most refined model is used to calibrate the simplified model.
文摘Background:Familiarity with a simulation platform can seduce modellers into accepting untested assumptions for convenience of implementation.These assumptions may have consequences greater than commonly suspected,and it is important that modellers remain mindful of assumptions and remain diligent with sensitivity testing.Methods:Familiarity with a technique can lead to complacency,and alternative approaches and software can reveal untested assumptions.Visual modelling environments based on system dynamics may help to make critical assumptions more evident by offering an accessible visual overview and empowering a focus on representational rather than computational efficiency.This capacity is illustrated using a cohort-based forest growth model developed for mixed species forest.Results:The alternative model implementation revealed that untested assumptions in the original model could have substantial influence on simulated outcomes.Conclusions:An important implication is that modellers should remain conscious of all assumptions,consider alternative implementations that reveal assumptions more clearly,and conduct sensitivity tests to inform decisions.
文摘The article is devoted to hitherto never undertaken applying an almost unknown logically formalized axiomatic epistemology-and-axiology system called “Sigma-V” to the Third Newton’s Law of mechanics. The author has continued investigating the extraordinary (paradigm-breaking) hypothesis of formal-axiological interpreting Newton’s mathematical principles of natural philosophy and, thus, has arrived to discrete mathematical modeling a system of formal axiology of nature by extracting and systematical studying its proper algebraic aspect. Along with the proper algebraic machinery, the axiomatic (hypothetic-deductive) method is exploited in this investigation systematically. The research results are the followings. 1) The Third Newton’s Law of mechanics has been modeled by a formal-axiological equation of two-valued algebraic system of metaphysics as formal axiology. (Precise defining the algebraic system is provided.) The formal-axiological equation has been established (and examined) in this algebraic system by accurate computing compositions of relevant evaluation-functions. Precise tabular definitions of the evaluation-functions are given. 2) The wonderful formula representing the Third Newton’s Law (in the relevant physical interpretation of the formal theory Sigma-V) has been derived logically in Sigma-V from the presumption of a-priori-ness of knowledge. A precise axiomatic definition of the nontrivial notion “a-priori-ness of knowledge” is given. The formal derivation is implemented in strict accordance with the rigor standard of D. Hilbert’s formalism;hence, checking the formal derivation submitted in this article is not a difficult task. With respect to proper theoretical physics, the formal inference is a nontrivial scientific novelty which has not been discussed and published elsewhere yet.
Abstract: This study focuses on the mineralogical and chemical compositions, depositional environment, and mechanism of formation of the sediments of the Kyzyltokoy basin. From an interpretation of the formation, the sedimentation environment of the basin was divided into three general conditions: one in which the glauconitization process was interrupted, one in which the process reached completion, and one in which decay of glauconites occurred, i.e., the beginning and mid-course interruption of glauconitization, completion of the
Funding: Supported by the National Natural Science Foundation of China (Nos. 69772035, 69882002) and the "863" Programme.
Abstract: Recently, many bit commitment schemes have been presented. This paper presents a new practical bit commitment scheme based on Schnorr's one-time knowledge proof scheme, where the cut-and-choose method and the many random exam candidates in the protocols are replaced by a single challenge number. The proposed bit commitment scheme is therefore more efficient and practical than the previous schemes. In addition, the security of the proposed scheme under the factoring assumption is proved, thus clarifying the cryptographic basis of the proposed scheme.
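The abstract above concerns a Schnorr-based construction; as a hedged illustration of the commit/reveal interface that any bit commitment scheme must provide, the following sketch uses a simple hash-based construction instead (all function names are ours, and SHA-256 stands in for the number-theoretic machinery of the actual paper):

```python
import hashlib
import secrets

def commit(bit):
    """Commit to a bit. Hiding rests on the 32 random bytes r;
    binding rests on the collision resistance of SHA-256."""
    r = secrets.token_bytes(32)
    opening = bytes([bit]) + r
    return hashlib.sha256(opening).digest(), opening

def verify(commitment, opening):
    """Check that an opening (bit || randomness) matches the commitment."""
    return hashlib.sha256(opening).digest() == commitment

c, opening = commit(1)
assert verify(c, opening)                       # honest reveal accepted
assert not verify(c, bytes([0]) + opening[1:])  # flipped bit rejected
```

The committer publishes only `c`; revealing `opening` later both discloses the bit and proves it was not changed after the fact.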
Abstract: Based on the general theory of elastic plates, which abandons the Kirchhoff-Love assumption of the classical theory, this paper establishes a first-order approximation theory of elastic circular plates without the Kirchhoff-Love assumption, and presents an analytic solution to the axisymmetric problem of elastic circular plates with a clamped boundary under uniformly distributed load. By comparison with the classical solution for thin circular plates, it is verified that the new solution is closer to the experimental results than the classical solution. By virtue of the new theory, the influence of the diameter-to-thickness ratio upon the precision of the classical theory is examined.
Abstract: Linear regression is widely used in biomedical and psychosocial research. A critical assumption that is often overlooked is homoscedasticity. Unlike normality, the other assumption on the data distribution, homoscedasticity is often taken for granted when fitting linear regression models. However, contrary to popular belief, this assumption actually has a bigger impact on the validity of linear regression results than normality. In this report, we use Monte Carlo simulation studies to investigate and compare their effects on the validity of inference.
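The mechanism behind this claim can be sketched with a minimal Monte Carlo experiment (not the authors' simulation design; the model and numbers below are our own): when the error standard deviation grows with x, the classical standard error of the slope understates the estimator's true variability.

```python
import math
import random

random.seed(0)

def ols_slope_and_se(xs, ys):
    """Simple-regression slope and its classical (homoscedastic) standard error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b0 = my - b1 * mx
    rss = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
    return b1, math.sqrt(rss / (n - 2) / sxx)

xs = [i / 10 for i in range(1, 101)]
slopes, ses = [], []
for _ in range(2000):
    ys = [2 + 3 * x + random.gauss(0, x) for x in xs]  # error SD grows with x
    b1, se = ols_slope_and_se(xs, ys)
    slopes.append(b1)
    ses.append(se)

mean_b1 = sum(slopes) / len(slopes)
emp_sd = math.sqrt(sum((b - mean_b1) ** 2 for b in slopes) / (len(slopes) - 1))
avg_se = sum(ses) / len(ses)
# Under heteroscedasticity the classical SE understates the true variability:
print(f"empirical SD {emp_sd:.3f} vs mean classical SE {avg_se:.3f}")
```

The slope estimate itself stays unbiased; it is the inference (standard errors, hence tests and intervals) that breaks, which is the report's point.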
Abstract: The design and analysis of authenticated key exchange protocols is an important problem in the information security area. At present, the extended Canetti-Krawczyk (eCK) model provides the strongest definition of security for two-party key agreement protocols; however, most current secure protocols cannot be proved secure without the Gap assumption. To avoid this, we use a twinning-key technique to propose a new two-party key agreement protocol, TUP, obtained by modifying the UP protocol; then, in conjunction with the trapdoor test, we strictly prove that the new protocol is secure in the eCK model. Compared with previous protocols, the security assumption of the new proposal is more standard and weaker, and it also solves an open problem from ProvSec'09.
Abstract: The classical small-deflection theory of elastic plates is based on the Kirchhoff-Love assumptions [1,2]. These are adopted on the basis of the thinness of the plate and the smallness of the deflection. In terms of Cartesian tensor coordinates x_i (i = 0, 1, 2), these basic assumptions are: (1) the transversal normal strain may be neglected, i.e., e_00 = 0; (2) the transversal shear strains may be neglected, i.e., e_0α = 0 (α = 1, 2); (3) the transversal normal stress may be neglected, i.e., σ_00 = 0. In the classical theory of elastic plates, the strain-displacement relations and the corresponding stress-displacement relations are established on the basis of these assumptions, and the equations of the classical theory, for a set of undetermined quantities defined on the middle surface, are established by integrating the three-dimensional equations of equilibrium of stress over the thickness. In previous papers [3,4,5], an approximation theory was given on the basis of the three-dimensional theory of elastic plates without using the Kirchhoff-Love assumptions. However, no uniqueness study was given, and the boundary conditions were never studied. In this paper, the same problems are studied on the basis of the generalized variational principle of the three-dimensional theory of elastic bodies [6]. The stationary conditions of variation give a unique and complete set of field equations and the related boundary conditions for the approximation theory. In this paper, the first-order approximation theory is studied in detail.
Abstract: To understand any statistical tool requires not only an understanding of the relevant computational procedures but also an awareness of the assumptions upon which the procedures are based, and of the effects of violations of these assumptions. In our earlier articles (Laverty, Miket, & Kelly [1]; Laverty & Kelly [2] [3]) we used Microsoft Excel to simulate both a Hidden Markov model and heteroskedastic models, showing different realizations of these models and the performance of the techniques for identifying the underlying hidden states using simulated data. The advantage of using Excel is that the simulations are regenerated when the spreadsheet is recalculated, allowing the user to observe the performance of the statistical technique under different realizations of the data. In this article we show how to use Excel to generate data from a one-way ANOVA (Analysis of Variance) model and how the statistical methods behave both when the fundamental assumptions of the model hold and when these assumptions are violated. The purpose of this article is to provide tools for individuals to gain an intuitive understanding of these violations using this readily available program.
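The article above works in Excel; the same F statistic it simulates can be computed directly, as in this short sketch (our own worked example, not taken from the article):

```python
def one_way_anova_F(groups):
    """One-way ANOVA F statistic: between-group over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ssb / (k - 1)) / (ssw / (n - k))

print(one_way_anova_F([[1, 2, 3], [2, 3, 4], [3, 4, 5]]))  # → 3.0
```

Here SSB = 6 with 2 degrees of freedom and SSW = 6 with 6 degrees of freedom, so F = 3/1 = 3.0; feeding in simulated groups with unequal variances shows how the statistic's behaviour departs from the nominal F distribution.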
Abstract: One of the key assumptions in respondent-driven sampling (RDS) analysis, called the "random selection assumption," is that respondents randomly recruit their peers from their personal networks. The objective of this study was to verify this assumption in empirical data on egocentric networks. Methods: We conducted an egocentric network study among young drug users in China, in which RDS was used to recruit this hard-to-reach population. If the random recruitment assumption holds, the RDS-estimated population proportions should be similar to the actual population proportions. Following this logic, we first calculated the population proportions of five visible variables (gender, age, education, marital status, and drug use mode) among the total drug-use alters from which the RDS sample was drawn, and then estimated the RDS-adjusted population proportions and their 95% confidence intervals in the RDS sample. Theoretically, if the random recruitment assumption holds, the 95% confidence intervals estimated in the RDS sample should include the population proportions calculated in the total drug-use alters. Results: The evaluation of the RDS sample indicated its success in reaching convergence of the RDS compositions and in including a broad cross-section of the hidden population. The findings demonstrate that the random selection assumption holds for three group traits, but not for the other two. Specifically, egos randomly recruited subjects across age groups, marital statuses, and drug use modes from their network alters, but not across genders or education levels. Conclusions: This study demonstrates the occurrence of non-random recruitment, indicating that the recruitment of subjects in this RDS study was not completely random. Future studies are needed to assess the extent to which population proportion estimates can be biased when the assumption is violated for some group traits in RDS samples.
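The study's check reduces to asking whether a known population proportion falls inside a sample-based confidence interval. The sketch below illustrates that logic with a plain normal-approximation interval and hypothetical numbers; the paper itself uses RDS-adjusted estimators, which additionally weight by respondents' network degrees.

```python
import math

def prop_ci(successes, n, z=1.96):
    """Normal-approximation 95% confidence interval for a sample proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

# Hypothetical numbers: 62 of 150 recruits are female, while the proportion
# among all drug-use alters (the "population") is 0.48.
lo, hi = prop_ci(62, 150)
print(round(lo, 3), round(hi, 3), lo <= 0.48 <= hi)
```

If the interval covers the population value, the trait is consistent with random recruitment; repeated non-coverage across traits is the study's evidence of non-random recruitment.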
Abstract: In this paper, the theory of elastic circular plates without the classical Kirchhoff-Love assumptions is established on the basis of a previous paper. In this theory, no classical Kirchhoff-Love assumptions are pre-assumed, and the axisymmetric analytic solution of a fixed circular plate under the action of uniform pressure is obtained. Comparison of this solution with the known classical solution shows that the new solution agrees better with the experimental measurements than the classical solution. It also gives quantitatively the effect of the thickness on the deflection of circular plates of moderate thickness.
Funding: Supported by the National Key Laboratory for Complex Systems Simulation Foundation (6142006190301).
Abstract: At present, although knowledge graphs have been widely used in fields such as recommendation systems, question answering systems, and intelligent search, quality problems such as knowledge omissions and errors persist. Quality assessment and control, as an important means of ensuring the quality of knowledge, can make applications based on knowledge graphs more complete and more accurate by reasonably assessing the knowledge graphs while fixing and improving the quality problems. Therefore, as an indispensable part of the knowledge graph construction process, the results of quality assessment and control determine the usefulness of the knowledge graph. In particular, the assessment and enhancement of completeness, as an important part of the assessment and control phase, determine whether the knowledge graph can fully reflect objective phenomena and reveal potential connections among entities. In this paper, we review specific techniques of completeness assessment and classify them in terms of the closed-world assumption, the open-world assumption, and partial completeness assumptions. The purpose of this paper is to further promote the development of knowledge graph quality control and to lay the foundation for subsequent research on the completeness assessment of knowledge graphs by reviewing and classifying completeness assessment techniques.
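A minimal sketch of what a completeness measure looks like in practice (our own toy example, not a technique from the survey): count how many (entity, predicate) slots are filled, noting that treating an empty slot as missing is itself a closed-world reading.

```python
# A toy knowledge graph as (subject, predicate, object) triples.
kg = {
    ("alice", "born_in", "paris"),
    ("alice", "works_for", "acme"),
    ("bob", "born_in", "lyon"),
}

def completeness(kg, entities, predicates):
    """Fraction of (entity, predicate) slots filled by at least one triple.
    Counting an empty slot as missing is a closed-world reading; under the
    open-world assumption the fact might be true but simply unrecorded."""
    filled = sum(
        1
        for e in entities
        for p in predicates
        if any(s == e and pr == p for s, pr, _ in kg)
    )
    return filled / (len(entities) * len(predicates))

print(completeness(kg, ["alice", "bob"], ["born_in", "works_for"]))  # → 0.75
```

Partial completeness assumptions sit between the two extremes: only selected (entity, predicate) pairs are declared complete, and the missing-value reading applies just to those.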
Funding: Supported by the National Science and Technology Major Project, China (No. 2017-II-0009-0023), China's 111 Project (No. B17037), and the Innovation Foundation for Doctoral Dissertations of Northwestern Polytechnical University, China.
Abstract: This paper presents a thorough study of the effect of the Constant Eddy Viscosity (CEV) assumption on optimization in a discrete adjoint-based design optimization system. First, the algorithms of the adjoint methods with and without the CEV assumption are presented, followed by a discussion of the two methods' solution stability. Second, the sensitivity accuracy, adjoint solution stability, and Root Mean Square (RMS) residual convergence rates at both design and off-design operating points are compared in detail between the CEV and full-viscosity adjoint methods. Finally, a multi-point steady aerodynamic design optimization and a multi-objective unsteady coupled aerodynamic-aeroelastic design optimization are performed to study the impact of the CEV assumption on optimization. Two gradient-based optimizers, the Sequential Least-Squares Quadratic Programming (SLSQP) method and the Steepest Descent Method (SDM), are used to draw a firm conclusion. The results for the transonic NASA Rotor 67 show that the CEV assumption can degrade RMS residual convergence rates and even lead to solution instability, especially at a near-stall point. Compared with the steady cases, the effect of the CEV assumption on unsteady sensitivity accuracy is much stronger. Nevertheless, the CEV adjoint solver is still capable of achieving the optimization goals to some extent, particularly if the flow under consideration is benign.
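Of the two optimizers named above, the Steepest Descent Method is simple enough to sketch. In the paper the gradient comes from the adjoint solver; in this toy (our own, with an analytic gradient and a made-up objective) the descent loop itself is the point:

```python
def steepest_descent(grad, x0, step=0.1, iters=200):
    """Fixed-step steepest descent: move against the gradient each iteration."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Toy objective f(x, y) = (x - 1)^2 + 4 * (y + 2)^2, minimised at (1, -2).
grad = lambda v: [2 * (v[0] - 1), 8 * (v[1] + 2)]
x_opt = steepest_descent(grad, [0.0, 0.0])
print([round(c, 4) for c in x_opt])  # → [1.0, -2.0]
```

An inaccurate gradient, such as one computed under the CEV assumption, shifts the search direction at every iteration, which is why the paper tests whether the CEV adjoint still reaches the optimization goals.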
Funding: Supported by the Fundamental Research Funds for the Central Universities (Grant No. 2022JKF02039).
Abstract: Graph Neural Networks (GNNs) play a significant role in tasks on homophilic graphs. Traditional GNNs, based on the homophily assumption, employ low-pass filters over neighboring nodes to achieve information aggregation and embedding. However, in heterophilic graphs, nodes from different categories often establish connections, while nodes of the same category are located further apart in the graph topology. This characteristic poses challenges to traditional GNNs, leading to the issues of "distant node modeling deficiency" and "failure of the homophily assumption". In response, this paper introduces the Spatial-Frequency domain Adaptive Heterophilic Graph Neural Network (SFA-HGNN), which integrates adaptive embedding mechanisms for both the spatial and frequency domains to address these issues. Specifically, for the first problem, we propose the "Distant Spatial Embedding Module", which selects and aggregates distant nodes via high-order random-walk transition probabilities to enhance modeling capability. For the second issue, we design the "Proximal Frequency Domain Embedding Module", constructing adaptive filters to separate the high- and low-frequency signals of nodes, and introduce frequency-domain-guided attention mechanisms to fuse the relevant information, thereby reducing the noise introduced by the failure of the homophily assumption. We deploy SFA-HGNN on six publicly available heterophilic networks, achieving state-of-the-art results on four of them. Furthermore, we elaborate on the hyperparameter selection mechanism, validate the performance of each module through experiments, and demonstrate a positive correlation between "node structural similarity", "node attribute vector similarity", and "node homophily" in heterophilic networks.
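The low-pass versus high-pass distinction driving this design can be shown on a four-node toy graph (our own illustration, far simpler than the paper's learned adaptive filters): mean-neighbor aggregation smooths away a node that disagrees with its neighborhood, while the deviation from the neighborhood mean preserves exactly that disagreement.

```python
# A tiny undirected graph (adjacency lists) with a scalar signal per node;
# node 3 disagrees with its neighborhood, mimicking a heterophilic edge.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
x = {0: 1.0, 1: 1.0, 2: 1.0, 3: -1.0}

def low_pass(adj, x):
    """Mean-neighbor aggregation, the low-pass filtering of a vanilla GNN."""
    return {v: sum(x[u] for u in adj[v]) / len(adj[v]) for v in adj}

def high_pass(adj, x):
    """Deviation from the neighborhood mean: retains high-frequency signal."""
    lp = low_pass(adj, x)
    return {v: x[v] - lp[v] for v in adj}

print(low_pass(adj, x)[3])   # → 1.0  (node 3's own signal is smoothed away)
print(high_pass(adj, x)[3])  # → -2.0 (its disagreement is preserved)
```

A heterophily-aware model keeps both channels and learns how to weight them per node, rather than committing to the low-pass channel alone.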
Abstract: In contrast to the solutions of applied mathematics to Zeno's paradoxes, I focus on the concept of motion and show that, by distinguishing two different forms of motion, Zeno's apparent paradoxes are not paradoxical at all. Zeno's paradoxes indirectly prove that distances are not composed of extensionless points and, in general, that a higher dimension cannot be completely composed of lower ones. Conversely, lower dimensions can be understood as special cases of higher dimensions. To illustrate this approach, I consider Cantor's only apparent proof that the real numbers are uncountable. His widely accepted indirect proof, however, has the disadvantage that it depends on whether there is another way to make the real numbers countable. Cantor rightly assumes that there can be no smallest number between 0 and 1, and therefore no beginning of counting. For this reason he lists the real numbers arbitrarily in order to show with his diagonal method that this list can never be complete. The situation is different if we start with the largest number between 0 and 1 (0.999…) and use the method of an inverted triangle, which can be understood as a special fractal form. Here we can construct a vertical and a horizontal stratification with which it is actually possible to construct all real numbers between 0 and 1 without exception. Each column is infinite, and each number in that column is the starting point of a new triangle, while each row is finite. Even in a simple sine curve, we experience finiteness with respect to the y-axis and infinity with respect to the x-axis. The first parts of this article show that Zeno's assumptions contradict the concept of motion as such, so it is not surprising that this misconstruction leads to contradictions. In the last part, I discuss Cantor's diagonal method and explain the method of an inverted triangle that is internally structured like a fractal by repeating this inverted triangle at each column.
The consequence is that we encounter two very different methods of counting: vertically it is continuous, horizontally it is discrete. While Frege, Tarski, Cantor, Gödel, and the Vienna Circle tried to derive the higher dimension from the lower, a procedure that always leads to new contradictions and antinomies (Tarski, Russell), I take the opposite approach here and derive the lower dimension from the higher. This perspective seems to fail because Tarski, Russell, Wittgenstein, and especially the Vienna Circle have shown that the completeness of the absolute is itself logically contradictory. For this reason, we agree with Hegel in assuming that we can never fully comprehend the Absolute, but only its particular manifestations; otherwise we would be putting ourselves in the place of the Absolute, or even God. Nevertheless, we can understand the Absolute in its particular expressions, as I show with the modest example of the triangle proof of the combined horizontal and vertical countability of the real numbers, which I developed in rejection of Cantor's diagonal proof.
Funding: Supported by the National Natural Science Foundation of China (61173151, 61472309).
Abstract: Inspired by the framework of Boyen, this paper proposes an attribute-based signature (ABS) scheme from a lattice assumption. In this attribute-based signature scheme, an entity's attribute set corresponds to the concatenation of a lattice matrix with the sum of some random matrices, and the signature vector is generated using the Preimage Sampling algorithm. Compared with current attribute-based signature schemes, this scheme can resist quantum attacks and enjoys a shorter public key, smaller signature size, and higher efficiency.
Abstract: In this work, the problem of the dependency of predicted rainfall upon the grid size in mesoscale numerical weather prediction models is addressed. We argue that this problem is due to (i) the violation of the quasi-equilibrium assumption, which underlies most existing convective parameterization schemes and states that convective activity may be considered in instantaneous equilibrium with the larger-scale forcing; and (ii) the violation of the hydrostatic approximation, made in most mesoscale models, which would induce too strong a large-scale circulation in the presence of strong convection. On the contrary, meso-β and meso-α scale models, i.e., models with horizontal grid sizes ranging from 10 to 100 km, have the capacity to resolve motions with characteristic scales close to those of the convective motions. We hypothesize that a possible way to eliminate this problem is (i) to take a prognostic approach to the parameterization of deep convection, whereby the quantities that describe the activity of convection are no longer diagnosed from the instantaneous value of the large-scale forcing, but predicted by time-dependent equations that integrate the large-scale forcing over time; and (ii) to introduce a mesoscale parameter which varies systematically with the grid size of the numerical model in order to damp the large-scale circulation that is usually over-induced when the grid size becomes smaller (from 100 km to 10 km). We propose an implementation of this idea in the framework of one existing scheme, already tested and used for a long time at the French Weather Service. The results of testing it in one-dimensional experiments with the Phase III GATE data are reported in this paper; the results of its implementation in the three-dimensional model with the OSCAR data will be reported in a companion paper.
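The diagnostic-versus-prognostic contrast can be illustrated with a one-line relaxation equation (a toy of our own, not the scheme used at the French Weather Service): instead of setting the convective activity A equal to the forcing F at every step, A is advanced in time toward F with a relaxation timescale tau.

```python
def prognostic_activity(forcing, tau, dt):
    """Integrate dA/dt = (F - A) / tau with forward Euler, so the convective
    activity A lags and smooths the forcing F instead of equalling it
    instantaneously, as the quasi-equilibrium assumption would imply."""
    a, history = 0.0, []
    for f in forcing:
        a += dt * (f - a) / tau
        history.append(a)
    return history

forcing = [0.0] * 5 + [1.0] * 20   # large-scale forcing switches on at step 5
a = prognostic_activity(forcing, tau=5.0, dt=1.0)
print(round(a[-1], 3))  # → 0.988 (relaxes toward F = 1 over ~tau steps)
```

A diagnostic scheme would jump to A = 1 the instant the forcing switches on; the prognostic version integrates the forcing over time, which is the behaviour hypothesized to reduce the grid-size dependency.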
Funding: This work was supported by the National Natural Science Foundation of China under Grant Nos. 61672289, 61972094, 61802195, and 61902191, the Natural Science Foundation of Jiangsu Province under Grant No. BK20190696, and the Purple Mountain Laboratories.
Abstract: A tightly secure cryptographic scheme is a construction with a tight security reduction to a hardness assumption, where the reduction loss is a small constant. A scheme with tight security is preferred in practice, since it can be implemented with smaller parameters to improve efficiency. Recently, Bader et al. (EUROCRYPT 2016) proposed a comprehensive study of the impossibility of tight security reductions for certain (e.g., key-unique) public-key cryptographic schemes in the multi-user setting with adaptive corruptions (MU-C), built upon non-interactive assumptions. One-more assumptions, such as one-more computational Diffie-Hellman (n-CDH), are variants of the standard assumptions and have found various applications. However, whether it is possible to have tightly secure key-unique schemes from one-more assumptions, or whether the impossibility results also hold for these assumptions, has remained unknown. In this paper, we give affirmative answers to the above question: we can have efficient key-unique public-key cryptographic schemes with tight security built upon one-more assumptions. Specifically, we propose a digital signature scheme and an encryption scheme, both of which are key-unique and have tight MU-C security under the one-more computational Diffie-Hellman (n-CDH) assumption. Our results also reflect, from another angle, that there indeed exists a gap between the standard assumptions and their one-more counterparts.
Funding: Supported by the National Natural Science Foundation of China (Nos. 69883008, 69773027) and in part by the NKBRSF of China (No. 1999032704).
Abstract: In this paper, the relationship between argumentation and closed-world reasoning for disjunctive information is studied. In particular, the authors propose a simple and intuitive generalization of the closed world assumption (CWA) for general disjunctive deductive databases (with default negation). This semantics, called DCWA, allows a natural argumentation-based interpretation and can be used to represent reasoning about disjunctive information. We compare DCWA with GCWA and prove that DCWA extends Minker's GCWA to the class of disjunctive databases with default negation. We also compare our semantics with some related approaches. In addition, the computational complexity of DCWA is investigated.
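For readers unfamiliar with the baseline being generalized, here is naive CWA on a definite (non-disjunctive) database, as a sketch of our own rather than the paper's DCWA semantics:

```python
def cwa_negations(facts, atoms):
    """Naive closed-world assumption for a definite database: every atom that
    is not provable (here, simply not listed as a fact) is assumed false."""
    return {a for a in atoms if a not in facts}

facts = {"bird(tweety)", "flies(tweety)"}
atoms = {"bird(tweety)", "flies(tweety)", "penguin(tweety)"}
print(cwa_negations(facts, atoms))  # → {'penguin(tweety)'}
```

The trouble with disjunctions motivates GCWA and DCWA: from a disjunctive fact p ∨ q, neither p nor q is provable on its own, so naive CWA would assert both ¬p and ¬q and contradict the disjunction, which is exactly the inconsistency the refined semantics are designed to avoid.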