Accurate cropland information is critical for agricultural planning and production, especially in food-stressed countries like China. Although widely used medium-to-high-resolution satellite-based cropland maps have been developed from various remotely sensed data sources over the past few decades, considerable discrepancies exist among these products in both the total area and the spatial distribution of croplands, impeding further applications of these datasets. The factors driving their inconsistency also remain unknown. In this study, we evaluated the consistency and accuracy of six cropland maps widely used in China circa 2020, including three state-of-the-art 10-m products (Google Dynamic World, ESRI Land Cover, and ESA WorldCover) and three 30-m products (GLC_FCS30, GlobeLand30, and CLCD). We also investigated the effects of landscape fragmentation, climate, and agricultural management. Validation against a ground-truth sample revealed that the 10-m-resolution WorldCover provided the highest accuracy (92.3%). These maps collectively overestimated Chinese cropland area by up to 56%. Up to 37% of the land showed spatial inconsistency among the maps, concentrated mainly in mountainous regions and attributed to the varying accuracy of the cropland maps, cropland fragmentation, and management practices such as irrigation. Our work sheds light on how future cropland mapping efforts can be improved, especially in highly inconsistent regions.
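A minimal sketch of the kind of per-pixel consistency analysis described above, with randomly generated stand-in masks rather than the actual products; the product names are only labels and the agreement rule is an illustrative assumption:

```python
# Stack binary cropland masks from several products and count, per pixel,
# how many products agree that the pixel is cropland. Masks here are random
# stand-ins, not the real 10-m / 30-m datasets.
import numpy as np

def cropland_agreement(masks: dict) -> np.ndarray:
    """masks: {product_name: 2-D boolean array} -> per-pixel agreement count."""
    stack = np.stack(list(masks.values()), axis=0)
    return stack.sum(axis=0)            # 0 = no product maps cropland, N = all agree

rng = np.random.default_rng(0)
masks = {name: rng.random((100, 100)) > 0.5
         for name in ["WorldCover", "DynamicWorld", "ESRI", "GLC_FCS30", "GlobeLand30", "CLCD"]}
agreement = cropland_agreement(masks)
fully_consistent = (agreement == 0) | (agreement == len(masks))
print(f"spatially consistent pixels: {fully_consistent.mean():.1%}")
```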
This paper presents a new method that uses a convolutional neural network (CNN) to identify brand consistency from product appearance variation. In Experiment 1, we collected fifty mouse devices from the past thirty-five years from a renowned company to build a dataset consisting of product pictures with pre-defined design features of their appearance and functions. Results show that distinguishing periods for the subtle evolution of the mouse devices is a challenge for traditional methods such as time series analysis and principal component analysis (PCA). In Experiment 2, we applied deep learning to predict the extent of product appearance variation among mouse devices of various brands. The investigation collected 6,042 images of mouse devices and divided them into the Early Stage and the Late Stage. Results show the highest accuracy of 81.4% with the CNN model, and the evaluation score of brand style consistency is 0.36, implying that the brand consistency score converted from the CNN accuracy rate is not always perfect in the real world. The relationship between product appearance variation, brand style consistency, and evaluation score is beneficial for predicting new product styles and future product style roadmaps. In addition, the CNN heat maps highlight the critical areas of design features of different styles, providing alternative clues related to the blurred boundary between them. The study provides insights into practical problems for designers, manufacturers, and marketers in product design. It not only contributes to the scientific understanding of design development but also provides industry professionals with practical tools and methods to improve the design process and maintain brand consistency. Designers can use these techniques to find features that influence brand style, capture those features as innovative design elements, and maintain core brand values.
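As a rough illustration of the classification setup (not the paper's actual architecture, data, or accuracy-to-score conversion), a small PyTorch CNN separating product images into the two stages might look like this; the class name, layer sizes, and image size are assumptions:

```python
# Minimal two-class CNN sketch: classifies product images into "Early Stage"
# vs "Late Stage". Architecture and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class StageClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = StageClassifier()
logits = model(torch.randn(4, 3, 224, 224))   # batch of 4 RGB images
print(logits.shape)                            # torch.Size([4, 2])
```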
With the explosive growth of false information on social media platforms, the automatic detection of multimodal false information has received increasing attention. Recent research has contributed significantly to multimodal information exchange and fusion, with many methods attempting to integrate unimodal features to generate multimodal news representations. However, these methods have yet to fully explore the hierarchical and complex semantic correlations between different modal contents, which severely limits their performance in detecting multimodal false information. This work proposes a two-stage detection framework for multimodal false information, called ASMFD, which uses image aesthetic similarity to separate and explore the consistency and inconsistency features of images and texts. Specifically, we first use the Contrastive Language-Image Pre-training (CLIP) model to learn the relationship between text and images through label awareness, and train an image aesthetic attribute scorer on an aesthetic attribute dataset. Then, we calculate the aesthetic similarity between the image and related images and use this similarity as a threshold to divide the multimodal correlation matrix into consistency and inconsistency matrices. Finally, a fusion module is designed to identify the essential features for detecting multimodal false information. In extensive experiments on four datasets, ASMFD outperforms state-of-the-art baseline methods.
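One possible reading of the thresholding step, sketched below with a random stand-in correlation matrix; the function name and the exact splitting rule are assumptions, not the authors' implementation:

```python
# Split an image-text correlation matrix into a "consistency" part and an
# "inconsistency" part using an aesthetic-similarity value as the threshold.
import numpy as np

def split_by_aesthetic_similarity(corr: np.ndarray, aesthetic_sim: float):
    """Return (consistency, inconsistency) matrices of the same shape as corr."""
    mask = corr >= aesthetic_sim               # entries at or above the threshold
    consistency = np.where(mask, corr, 0.0)
    inconsistency = np.where(~mask, corr, 0.0)
    return consistency, inconsistency

corr = np.random.rand(4, 4)                    # stand-in multimodal correlation matrix
cons, incons = split_by_aesthetic_similarity(corr, aesthetic_sim=0.6)
print(cons.shape, incons.shape)
```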
Objective To observe the value of grey-level histogram analysis based on T2WI for differentiating the consistency of meningioma. Methods Data of 109 patients with meningioma were retrospectively analyzed. The patients were divided into a hard group (n=71) and a soft group (n=38) according to the consistency of the tumors. The tumor ROI was outlined on the axial T2WI showing the largest tumor section, grey levels were extracted, and histogram analysis was performed. The values of each histogram parameter were compared between groups. Receiver operating characteristic curves were then drawn, and the area under the curve (AUC) was calculated to evaluate the efficiency for differentiating soft and hard meningioma. Results P1, P10, P50, P90, P99, and the mean grey levels on T2WI in the soft group were all higher than those in the hard group (all P<0.05), while the variance, kurtosis, and skewness were not significantly different between groups (all P>0.05). The differentiating efficiency of P1, P10, P50, P90, P99, and the mean grey levels on T2WI was good, with AUC of 0.774 to 0.833, and no significant difference was found among them (all P>0.05). Conclusion Grey-level histogram parameters such as P1, P10, P50, P90, P99, and the mean value based on T2WI are all valuable for differentiating soft and hard meningioma.
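A minimal sketch of the histogram analysis pipeline on simulated grey levels (not patient data); the group means, spreads, and sizes below are illustrative only:

```python
# Extract grey levels inside an ROI, compute percentile parameters (P1, P10,
# P50, P90, P99), mean, variance, skewness, kurtosis, and evaluate one
# parameter with ROC AUC.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.metrics import roc_auc_score

def histogram_features(roi_values: np.ndarray) -> dict:
    p = np.percentile(roi_values, [1, 10, 50, 90, 99])
    return {
        "P1": p[0], "P10": p[1], "P50": p[2], "P90": p[3], "P99": p[4],
        "mean": roi_values.mean(), "variance": roi_values.var(),
        "skewness": skew(roi_values), "kurtosis": kurtosis(roi_values),
    }

# Simulated example: soft tumors tend to show higher T2WI grey levels.
rng = np.random.default_rng(0)
soft = rng.normal(620, 60, size=38)
hard = rng.normal(540, 60, size=71)
labels = np.r_[np.ones(38), np.zeros(71)]
scores = np.r_[soft, hard]                     # here the "feature" is the grey level itself
print(histogram_features(soft)["P50"], roc_auc_score(labels, scores))
```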
As one of the major threats to the current DeFi (Decentralized Finance) ecosystem, the reentrant attack induces data inconsistency in the victim smart contract, enabling attackers to steal on-chain assets from DeFi projects and badly damaging the confidence of blockchain investors. However, protecting DeFi projects from the reentrant attack is very difficult, since generating a call loop within the highly automated DeFi ecosystem is quite practicable. Existing research mainly focuses on detecting reentrant vulnerabilities during code testing, and no method can guarantee the absence of reentrant vulnerabilities. In this paper, we introduce a database lock mechanism to isolate the correlated smart contract states from other operations in the same contract, so that attackers are prevented from abusing an inconsistent smart contract state. Compared to the existing resolutions of front-running, code audit, and modifier, our method guarantees protection with better flexibility. We further evaluate our method on a number of de facto reentrant attacks observed from Etherscan. The results prove that our method can efficiently prevent the reentrant attack with low running cost.
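The lock idea can be illustrated conceptually in Python (not Solidity, and not the paper's mechanism itself); the Vault class, its balances, and the evil_send callback are invented for the demonstration:

```python
# State touched by a withdrawal is locked for the duration of the call, so a
# re-entrant call arriving through the external transfer cannot observe or
# modify the inconsistent intermediate state.
class VaultError(Exception):
    pass

class Vault:
    def __init__(self):
        self.balances = {"alice": 100}
        self._locked = False           # plays the role of the state lock

    def withdraw(self, user: str, amount: int, send):
        if self._locked:
            raise VaultError("re-entrant call rejected: state is locked")
        self._locked = True
        try:
            if self.balances.get(user, 0) < amount:
                raise VaultError("insufficient balance")
            send(user, amount)                   # external call happens first...
            self.balances[user] -= amount        # ...state is updated only afterwards
        finally:
            self._locked = False

vault = Vault()

def evil_send(user, amount):
    # A malicious callback that tries to re-enter withdraw() during the transfer.
    vault.withdraw(user, amount, evil_send)

try:
    vault.withdraw("alice", 50, evil_send)
except VaultError as e:
    print(e)                                     # re-entrant call rejected
print(vault.balances)                            # state remains consistent: {'alice': 100}
```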
Domain adaptation (DA) aims to find a subspace where the discrepancies between the source and target domains are reduced. Based on this subspace, the classifier trained by the labeled source samples can classify unlabeled target samples well. Existing approaches leverage graph embedding learning to explore such a subspace. Unfortunately, due to 1) the interaction of the consistency and specificity between samples, and 2) the joint impact of degenerated features and incorrect labels in the samples, the existing approaches might assign unsuitable similarity, which restricts their performance. In this paper, we propose an approach called adaptive graph embedding with consistency and specificity (AGE-CS) to cope with these issues. AGE-CS consists of two methods, i.e., graph embedding with consistency and specificity (GECS) and adaptive graph embedding (AGE). GECS jointly learns the similarity of samples under the geometric distance and semantic similarity metrics, while AGE adaptively adjusts the relative importance between the geometric distance and semantic similarity during the iterations. By AGE-CS, neighborhood samples with the same label are rewarded, while neighborhood samples with different labels are punished. As a result, compact structures are preserved, and advanced performance is achieved. Extensive experiments on five benchmark datasets demonstrate that the proposed method performs better than other graph embedding methods.
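An illustrative reading of the balance between geometric and semantic similarity, with a scalar alpha standing in for the adaptively learned weight; this is a sketch of the general idea, not the AGE-CS algorithm itself:

```python
# Combine a geometric-distance similarity and a semantic (label-based)
# similarity with a weight that AGE would re-estimate at each iteration.
import numpy as np

def combined_similarity(X: np.ndarray, labels: np.ndarray, alpha: float) -> np.ndarray:
    """alpha in [0, 1] trades off geometric vs. semantic similarity."""
    # Geometric similarity: Gaussian kernel on pairwise squared distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    geo = np.exp(-d2 / (2 * d2.mean() + 1e-12))
    # Semantic similarity: 1 for same (pseudo-)label, 0 otherwise.
    sem = (labels[:, None] == labels[None, :]).astype(float)
    return alpha * geo + (1 - alpha) * sem

X = np.random.randn(6, 4)
labels = np.array([0, 0, 1, 1, 2, 2])
S = combined_similarity(X, labels, alpha=0.7)   # alpha would be adapted over iterations
print(S.shape)
```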
System-wide information management (SWIM) is a complex distributed information transfer and sharing system for the next generation of the Air Transportation System (ATS). In response to the growing volume of civil aviation air operations, users accessing different authentication domains in the SWIM system face problems with the validity, security, and privacy of SWIM-shared data. To solve these problems, this paper proposes a SWIM cross-domain authentication scheme based on a consistent hashing algorithm on a consortium blockchain and designs a blockchain certificate format for SWIM cross-domain authentication. The scheme uses a consistent hashing algorithm with virtual nodes, in combination with a cluster of authentication centers in the SWIM consortium blockchain architecture, to synchronize the user's authentication mapping relationships between authentication domains. The virtual authentication nodes are mapped separately using the different services provided by SWIM to guarantee the partitioning of the consistent hash ring on the consortium blockchain. According to the dynamic change of users' authentication requests, virtual service authentication nodes can be added and deleted to realize dynamic load balancing of cross-domain authentication for different services. Security analysis shows that the protocol can resist network attacks such as man-in-the-middle attacks, replay attacks, and Sybil attacks. Experiments show that the scheme reduces redundant authentication operations on identity information and solves the problems of traditional cross-domain authentication, namely single-point collapse, difficulty in expansion, and uneven load. At the same time, it offers better security of information storage and can meet the cross-domain authentication requirements of SWIM users with low communication costs and system overhead.
Keywords: system-wide information management (SWIM); consortium blockchain; consistent hash; cross-domain authentication; load balancing.
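A minimal sketch of consistent hashing with virtual nodes, the core mechanism the scheme above builds on for spreading authentication requests across authentication-center nodes; the node and service names are made up for illustration:

```python
# Consistent hash ring with virtual nodes: each physical node is placed on
# the ring many times, so keys are spread evenly and adding/removing a node
# only remaps a small fraction of keys.
import bisect
import hashlib

class ConsistentHashRing:
    def __init__(self, nodes, vnodes_per_node=100):
        self._ring = []                           # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes_per_node):
                h = self._hash(f"{node}#vn{i}")
                bisect.insort(self._ring, (h, node))

    @staticmethod
    def _hash(key: str) -> int:
        return int(hashlib.sha256(key.encode()).hexdigest(), 16)

    def get_node(self, key: str) -> str:
        h = self._hash(key)
        idx = bisect.bisect_right(self._ring, (h, "")) % len(self._ring)
        return self._ring[idx][1]                 # first node clockwise from the key

ring = ConsistentHashRing(["auth-center-A", "auth-center-B", "auth-center-C"])
print(ring.get_node("user-42:flight-data-service"))   # routed to one authentication center
```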
Recently, the convolutional neural network (CNN) has been dominant in studies on interpreting remote sensing images (RSI). However, training optimization strategies appear to have received less attention in relevant research. To examine this problem, the author proposes a novel algorithm named the Fast Training CNN (FST-CNN). To verify the algorithm's effectiveness, twenty methods, including six classic models and thirty architectures from previous studies, are included in a performance comparison. The overall accuracy (OA) trained by the FST-CNN algorithm on the same model architecture and dataset is treated as an evaluation baseline. Results show a maximal OA gap of 8.35% between the FST-CNN and the methods in the literature, which means a 10% margin in performance. Meanwhile, all those complex roadmaps, e.g., deep feature fusion, model combination, model ensembles, and human feature engineering, are not as effective as expected. This reveals systemic suboptimal performance in previous studies. Most of the CNN-based methods proposed in previous studies share a consistent mistake, which has made the models' accuracy lower than their potential value. The most important reasons seem to be an inappropriate training strategy and the shift in data distribution introduced by data augmentation (DA). As a result, most of the performance evaluation was conducted on an inaccurate, suboptimal, and unfair basis, which makes most of the previous research findings questionable to some extent. However, these confusing results also demonstrate the effectiveness of FST-CNN. The novel algorithm is model-agnostic and can be employed on any image classification model to potentially boost performance. In addition, the results show that a standardized training strategy is indeed very meaningful for RSI-SC research tasks.
The study explores the asymptotic consistency of the James-Stein shrinkage estimator obtained by shrinking a maximum likelihood estimator. We use Hansen's approach to show that the James-Stein shrinkage estimator converges asymptotically to a multivariate normal distribution with shrinkage effect values. We establish that the rate of convergence is of order n^(-1/2), i.e., rate √n, hence the James-Stein shrinkage estimator is √n-consistent. We then visualise its consistency by studying the asymptotic behaviour with simulation plots in R of the mean squared error of the maximum likelihood estimator and the shrinkage estimator; the latter graphically shows a lower mean squared error than that of the maximum likelihood estimator.
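A small Monte Carlo sketch (in Python rather than the R used in the study) comparing the mean squared error of the MLE and the positive-part James-Stein estimator; the dimension, true mean, and replication count are arbitrary choices:

```python
# For X ~ N(theta, I_p) with p >= 3, the (positive-part) James-Stein
# estimator shrinks X toward the origin and typically achieves lower total
# MSE than the MLE (which is X itself for a single observation).
import numpy as np

rng = np.random.default_rng(1)
p, n_rep = 10, 5000
theta = np.full(p, 0.5)                                   # true mean vector (illustrative)

X = rng.normal(theta, 1.0, size=(n_rep, p))               # one observation per replication: MLE = X
shrink = np.maximum(1 - (p - 2) / (X ** 2).sum(axis=1), 0.0)   # positive-part shrinkage factor
js = shrink[:, None] * X

mse_mle = ((X - theta) ** 2).sum(axis=1).mean()
mse_js = ((js - theta) ** 2).sum(axis=1).mean()
print(f"MSE(MLE) = {mse_mle:.3f},  MSE(James-Stein) = {mse_js:.3f}")   # JS is typically smaller
```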
Functional brain networks (FBN) based on resting-state functional magnetic resonance imaging (rs-fMRI) have become an important tool for exploring the underlying organization patterns of the brain, which can provide an objective basis for studying brain disorders such as autism spectrum disorder (ASD). Due to its importance, researchers have proposed a number of FBN estimation methods. However, most existing methods model only one type of functional connection relationship between brain regions of interest (ROIs), such as partial correlation or full correlation, which makes it difficult to fully capture the subtle connections among ROIs, since these connections are extremely complex. Motivated by multi-view learning, in this study we propose a novel Consistent and Specific Multi-view FBNs Fusion (CSMF) approach. Concretely, we first construct multi-view FBNs (i.e., multiple types of FBNs modelling various relationships among ROIs); these FBNs are then decomposed into a consistent representation matrix and their own specific matrices, which capture their common and unique information, respectively. Lastly, to obtain a better brain representation, we fuse the consistent and specific representation matrices in the latent representation spaces of the FBNs rather than directly fusing the original FBNs, which potentially makes it easier to find comprehensive brain connections. The experimental results of ASD identification on the ABIDE datasets validate the effectiveness of our proposed method compared to several state-of-the-art methods; the proposed CSMF method achieved 72.8% and 76.67% classification performance on the ABIDE datasets.
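A minimal sketch of the multi-view construction step, building two FBN views (full correlation and partial correlation) from simulated ROI time series; the decomposition and fusion stages of CSMF are not shown:

```python
# Two views of a functional brain network estimated from the same ROI time
# series: full (Pearson) correlation, and partial correlation obtained from
# the precision matrix. Data are random stand-ins, not rs-fMRI.
import numpy as np

def multiview_fbn(ts: np.ndarray):
    """ts: (timepoints, n_rois) ROI time series -> (full_corr, partial_corr)."""
    full_corr = np.corrcoef(ts, rowvar=False)
    prec = np.linalg.pinv(np.cov(ts, rowvar=False))
    d = np.sqrt(np.outer(np.diag(prec), np.diag(prec)))
    partial_corr = -prec / d
    np.fill_diagonal(partial_corr, 1.0)
    return full_corr, partial_corr

ts = np.random.randn(200, 16)          # simulated series: 200 timepoints, 16 ROIs
fc, pc = multiview_fbn(ts)
print(fc.shape, pc.shape)
```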
“The Fundamental Rights and Obligations of Citizens”, the title of Chapter II of the current Constitution of the PRC, and the stipulation that citizens must fulfill certain obligations while enjoying rights have triggered many debates. Considering the historical origin, constitutional philosophy, and the text and structure of the Constitution, the special provisions of the current Constitution are influenced by the principle of consistency of rights and obligations. The principle of consistency of rights and obligations in the Constitution has a complex connotation. Although the principle effectively connects the public and private spheres, it ignores the diversity and differences of the interests and elements contained in the Constitution, the asymmetry of the normative status of fundamental rights and fundamental obligations, and the right of citizens to self-determination of personal interests. The principle of consistency of rights and obligations should therefore be purposefully narrowed and concretized: in the context of public-private integration and risk-society prevention, the principle can be used as a supplement to the functional system of the Constitution; in the field of fundamental political obligations, the principle should be in line with the requirement that the state respect and protect human rights; in the field of fundamental social obligations, the exercise of fundamental rights by individuals is protected by the Constitution as long as they comply with the law and do not infringe upon the interests of the social community. The principle of consistency of rights and obligations is used only as a negative element in the determination of rights and as the basis for the third-party effect of fundamental rights.
Infinitely many conservation laws for some (1+1)-dimensional soliton hierarchies with self-consistent sources are constructed directly from their corresponding Lax pairs. Three examples are given. In addition, infinitely many conservation laws for the Kadomtsev-Petviashvili (KP) hierarchy with self-consistent sources are obtained from the pseudo-differential operator and the Lax pair.
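For background only, and not specific to the hierarchies with self-consistent sources treated here, the isospectral Lax formulation from which conserved quantities are commonly read off is:

```latex
% Generic isospectral Lax equation: evolution by a commutator conserves the
% traces of the powers of L (by cyclicity of the trace).
\[
\frac{\partial L}{\partial t} = [A, L] = AL - LA
\quad\Longrightarrow\quad
\frac{d}{dt}\,\operatorname{tr}\!\left(L^{n}\right)
= n\,\operatorname{tr}\!\left(L^{n-1}[A,L]\right) = 0,
\qquad n = 1, 2, \dots
\]
```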
An appropriate coupled cohesive law for predicting mixed-mode failure is established by combining the normal and tangential separations of surfaces in the cohesive zone model (CZM) and the cohesive element method. The Xu-Needleman exponential cohesive law with the fully shear failure mechanism is one of the most popular models. Based on the proposed consistently coupled rule/principle, the Xu-Needleman law with the fully shear failure mechanism is proved to be a non-consistently coupled cohesive law by analyzing the surface separation work. It is shown that the Xu-Needleman law is only valid in mixed-mode fracture when the normal separation work equals the tangential separation work. Based on the consistently coupled principle and a modification of the Xu-Needleman law, a consistently coupled cohesive (CCC) law is given. It is shown that the proposed CCC law overcomes the non-consistency defect of the Xu-Needleman law and shows great promise in mixed-mode analyses.
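For orientation, the purely normal (mode I) part of the exponential cohesive law takes the familiar form below; the coupled mixed-mode Xu-Needleman potential and the proposed CCC modification are given in the paper itself:

```latex
% Normal traction of the uncoupled exponential cohesive law (mode I only),
% with phi_n the normal work of separation and delta_n the characteristic length.
\[
T_n(\Delta_n) = \frac{\phi_n}{\delta_n}\,\frac{\Delta_n}{\delta_n}\,
\exp\!\left(-\frac{\Delta_n}{\delta_n}\right),
\qquad
\phi_n = \int_0^{\infty} T_n(\Delta_n)\, d\Delta_n .
\]
```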
The classical propositional calculus (often also called “zero-order logic”) is the most fundamental two-valued logical system. It is needed to construct the classical calculus of quantifiers (often also called the “classical calculus of predicates” or “first-order logic”), which in turn is needed to construct the classical functional calculus. The latter is used for the formalization of the Arithmetic System. At the beginning of this paper, we introduce notation and recall certain well-known notions (among others, the notions of the operation of consequence, a system, consistency in the traditional sense, and consistency in the absolute sense) and certain well-known theorems. Next, we establish that the classical propositional calculus is an inconsistent theory.
In this paper, we discuss a random censoring test with incomplete information and prove that, in the case of the exponential distribution, the maximum likelihood estimator (MLE) of the parameter based on randomly censored data with incomplete information is strongly consistent.
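A small simulation sketch of the standard randomly censored exponential setting (without the paper's incomplete-information complication), illustrating how the MLE stabilizes around the true rate; the rates and sample sizes below are arbitrary:

```python
# Under random censoring of exponential lifetimes, the MLE of the rate is
# (# uncensored failures) / (total observed time); it approaches the true
# rate as the sample size grows, illustrating consistency.
import numpy as np

rng = np.random.default_rng(2)
true_rate = 0.5
for n in (100, 1000, 10000):
    lifetimes = rng.exponential(1 / true_rate, n)
    censors = rng.exponential(3.0, n)              # independent censoring times
    observed = np.minimum(lifetimes, censors)      # T_i = min(X_i, C_i)
    uncensored = lifetimes <= censors              # delta_i
    mle = uncensored.sum() / observed.sum()
    print(n, round(mle, 4))                        # approaches 0.5
```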
To improve inconsistency in the analytic hierarchy process (AHP), a new method based on marginal optimization theory is proposed. During the improvement process, the reduction of the consistency ratio (CR) is regarded as benefit and the maximum modification relative to the original pairwise comparison matrix (PCM) as cost, so that improving consistency is transformed into a benefit/cost analysis problem. According to the maximal marginal effect principle, the elements of the PCM are modified by a fixed increment (or decrement) step by step until the consistency ratio becomes acceptable, which ensures minimum adjustment to the original PCM so that the decision makers' judgment is preserved as much as possible. The correctness of the proposed method is proved mathematically. First, the marginal benefit/cost ratio is calculated for each single element of the PCM when it has been modified by a fixed increment (or decrement). Then, the modification to the element with the maximum marginal benefit/cost ratio is accepted. Next, the marginal benefit/cost ratio is calculated again on the revised matrix, followed by choosing the modification to the element with the maximum marginal benefit/cost ratio. The process of calculating the marginal effect and choosing the best element to modify is repeated for each revised matrix until acceptable consistency is reached, i.e., CR < 0.1. Finally, illustrative examples show that the proposed method is more effective and better at preserving the original comparison information than existing methods.
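A minimal sketch of the consistency check that the method iterates on, computing Saaty's consistency ratio CR = CI/RI with CI = (λmax − n)/(n − 1); the marginal benefit/cost search over candidate element modifications is not reproduced here:

```python
# Consistency ratio of a pairwise comparison matrix: CR < 0.1 is the usual
# acceptability threshold. RI values are Saaty's random indices.
import numpy as np

RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def consistency_ratio(pcm: np.ndarray) -> float:
    n = pcm.shape[0]
    lambda_max = np.max(np.linalg.eigvals(pcm).real)
    ci = (lambda_max - n) / (n - 1)
    return ci / RI[n]

pcm = np.array([[1, 3, 5],
                [1/3, 1, 2],
                [1/5, 1/2, 1]], dtype=float)
print(round(consistency_ratio(pcm), 4))   # acceptable when CR < 0.1
```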
The consistency of the LS estimate of the simple linear EV model is studied. It is shown that under some common assumptions of the model, weak and strong consistency of the estimate are equivalent, but this is not the case for quadratic-mean consistency.
Using the transformation relations between complementary judgement matrices and reciprocal judgement matrices, this paper proposes two methods for improving the consistency of a complementary judgement matrix and gives two simple, practical iterative algorithms. These two algorithms are easy to implement on a computer, and the modified complementary judgement matrices retain most of the information that the original matrix contains. The methods thus supplement and develop the theory and methodology for improving the consistency of complementary judgement matrices.
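A minimal sketch of one simple complementary-to-reciprocal transformation, b_ij = a_ij/a_ji; the paper's own transformation relations and iterative consistency-improvement algorithms may differ:

```python
# For a complementary (fuzzy) judgement matrix A with a_ij + a_ji = 1, the
# element-wise ratio b_ij = a_ij / a_ji yields a reciprocal matrix B, since
# b_ij * b_ji = 1. This is an illustrative transform, not necessarily the
# one used by the authors.
import numpy as np

def complementary_to_reciprocal(A: np.ndarray, eps: float = 1e-9) -> np.ndarray:
    return A / (A.T + eps)

A = np.array([[0.5, 0.7, 0.8],
              [0.3, 0.5, 0.6],
              [0.2, 0.4, 0.5]])                   # complementary: A + A.T is all ones
B = complementary_to_reciprocal(A)
print(np.allclose(B * B.T, 1.0, atol=1e-6))       # True: B is (approximately) reciprocal
```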
Image segmentation is a key and fundamental problem in image processing, computer graphics, and computer vision. Level-set-based methods for image segmentation are widely used for their topological flexibility and proper mathematical formulation. However, the poor performance of existing level set models on noisy images and weak boundaries limits their application in image segmentation. In this paper, we present a region consistency constraint term to measure the regional consistency on both sides of the boundary; this term confines the boundary of the image within a range and hence increases the stability of the level set model. The term enables existing level set models to segment images with noise and weak boundaries much more effectively. Furthermore, this constraint term allows edge-based level set models to overcome their sensitivity to the initial contour. The experimental results show that our algorithm is efficient for image segmentation and outperforms existing state-of-the-art methods on images with noise and weak boundaries.
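A minimal sketch of a region-based fitting term of the kind such constraints build on, in the spirit of a Chan-Vese energy; the paper's exact region consistency constraint differs:

```python
# Given a level set function phi, measure how consistent the image is inside
# (phi > 0) and outside (phi <= 0) the contour: lower energy means the two
# regions are each closer to their own mean intensity.
import numpy as np

def region_fitting_energy(image: np.ndarray, phi: np.ndarray) -> float:
    inside, outside = phi > 0, phi <= 0
    c1 = image[inside].mean() if inside.any() else 0.0
    c2 = image[outside].mean() if outside.any() else 0.0
    return float(((image - c1) ** 2)[inside].sum() + ((image - c2) ** 2)[outside].sum())

# Toy image: bright square on a dark background, with a circular initial contour.
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
yy, xx = np.mgrid[:64, :64]
phi = 20.0 - np.sqrt((yy - 32) ** 2 + (xx - 32) ** 2)   # signed distance to a circle
print(region_fitting_energy(img, phi))                   # lower is more region-consistent
```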