Leveraging the extraordinary phenomena of quantum superposition and quantum correlation, quantum computing offers unprecedented potential for addressing challenges beyond the reach of classical computers. This paper tackles two pivotal challenges in quantum computing: first, the development of an effective encoding protocol for translating classical data into quantum states, a critical step for any quantum computation, since different encoding strategies can significantly influence quantum computer performance; second, the need to counteract the inevitable noise that can hinder quantum acceleration. Our primary contribution is a novel variational data encoding method, grounded in quantum regression algorithm models. By adapting the learning concept from machine learning, we render data encoding a learnable process, which allows us to study the role of quantum correlation in data encoding. Through numerical simulations of various regression tasks, we demonstrate the efficacy of our variational data encoding, particularly after learning from instructional data. Moreover, we examine the role of quantum correlation in enhancing task performance, especially in noisy environments. Our findings underscore the critical role of quantum correlation not only in bolstering performance but also in mitigating noise interference, thus advancing the frontier of quantum computing.
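To make the idea of "learnable encoding" concrete, here is a minimal numpy sketch: a classical feature x is mapped to a single-qubit rotation angle theta = w*x + b, and the trainable parameters (w, b) are fitted so that the qubit's ⟨Z⟩ expectation matches a regression target. This illustrates the concept only; the paper's actual circuits, ansatz, and quantum-correlation structure are not reproduced here, and all names and values are illustrative.

```python
# A minimal sketch of variational (learnable) angle encoding, assuming a toy
# single-qubit model: RY(theta)|0> has <Z> = cos(theta), so the readout of the
# encoded state is cos(w*x + b), and (w, b) are trained by gradient descent.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)                # toy inputs
y = np.cos(2.0 * x + 0.5)                  # toy regression targets

w, b, lr = 0.1, 0.0, 0.5                   # trainable encoding parameters
for _ in range(2000):
    theta = w * x + b                      # encoding: RY(theta)|0>
    pred = np.cos(theta)                   # <Z> of RY(theta)|0> is cos(theta)
    err = pred - y
    # analytic gradients of the MSE loss through the encoding
    w -= lr * np.mean(2 * err * (-np.sin(theta)) * x)
    b -= lr * np.mean(2 * err * (-np.sin(theta)))

print(f"learned w={w:.3f}, b={b:.3f}, mse={np.mean((np.cos(w*x+b)-y)**2):.2e}")
```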
This study proposes a novel particle encoding mechanism that seamlessly incorporates the quantum properties of particles, with a specific emphasis on constituent quarks. The primary objective of this mechanism is to facilitate the digital registration and identification of a wide range of particle information. Its design ensures easy integration with the different event generators and digital simulations commonly used in high-energy experiments. Moreover, this innovative framework can be easily expanded to encode complex multi-quark states comprising up to nine valence quarks and accommodating an angular momentum of up to 99/2. This versatility and scalability make it a valuable tool.
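As a rough illustration of what such a digit-based encoding can look like, the sketch below packs up to nine valence-quark flavour digits plus a two-digit field for twice the total angular momentum (so J up to 99/2 fits) into one integer. The field layout is invented for illustration and is not the paper's actual scheme.

```python
# A hypothetical digit-packing scheme in the spirit of the abstract above:
# one decimal digit per valence quark, two trailing digits for 2J.
QUARKS = {"d": 1, "u": 2, "s": 3, "c": 4, "b": 5, "t": 6}

def encode(quarks: list[str], two_J: int) -> int:
    """Pack quark content and 2J into a single integer code."""
    assert 1 <= len(quarks) <= 9 and 0 <= two_J <= 99
    code = 0
    for q in quarks:                 # one decimal digit per valence quark
        code = code * 10 + QUARKS[q]
    return code * 100 + two_J        # last two digits carry 2J

def decode(code: int) -> tuple[list[str], int]:
    two_J, digits = code % 100, code // 100
    names = {v: k for k, v in QUARKS.items()}
    quarks = [names[int(d)] for d in str(digits)]
    return quarks, two_J

print(encode(["u", "u", "d"], 1))    # proton-like uud with J=1/2 -> 22101
print(decode(22101))
```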
Increasing research has focused on semantic communication, the goal of which is to accurately convey the meaning rather than merely transmitting symbols from the sender to the receiver. In this paper, we design a novel encoding and decoding semantic communication framework, which exploits the semantic information and the contextual correlations between items to optimize the performance of a communication system over various channels. On the sender side, the average semantic loss caused by wrong detection is defined, and a semantic source encoding strategy is developed to minimize this loss. To further improve communication reliability, a decoding strategy that utilizes the semantic and context information to recover messages is proposed for the receiver. Extensive simulation results validate the superior performance of our strategies over state-of-the-art semantic coding and decoding policies on different communication channels.
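The "average semantic loss" quantity admits a compact toy illustration: with message priors p(m), a channel confusion matrix P(detect m' | sent m), and a semantic distance d(m, m'), the expected loss of an encoding is the prior-weighted sum of P(m'|m)·d(m, m'). The matrices below are invented, and the paper's actual loss definition and optimization are not reproduced.

```python
# Toy numpy computation of an average-semantic-loss-style objective:
# loss = sum_m p(m) * sum_m' P(m'|m) * d(m, m').
import numpy as np

p = np.array([0.5, 0.3, 0.2])              # message priors
P = np.array([[0.9, 0.08, 0.02],           # row m: detection probabilities
              [0.1, 0.85, 0.05],
              [0.05, 0.1, 0.85]])
d = np.array([[0.0, 0.2, 1.0],             # semantic distance between items
              [0.2, 0.0, 0.8],
              [1.0, 0.8, 0.0]])

avg_semantic_loss = float(p @ (P * d).sum(axis=1))
print(f"average semantic loss: {avg_semantic_loss:.4f}")
```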
The consensus of the automotive industry and traffic management authorities is that autonomous vehicles must follow the same traffic laws as human drivers. Using formal or digital methods, natural language traffic rules can be translated into machine language and used by autonomous vehicles. In this paper, a translation flow is designed. Beyond the translation itself, deeper examination is required, because the semantics of natural languages are rich and complex and frequently contain hidden assumptions. How to ensure that digital rules are accurate and consistent with the original intent of the traffic rules they represent is a significant and unresolved issue. In response, we propose a formal verification method that combines equivalence verification with model checking. Reasonable and reassuring digital traffic rules can be obtained by applying the proposed traffic rule digitization flow and verification method. In addition, we offer a number of simulation applications that employ digital traffic rules to assess vehicle violations. The experimental findings indicate that our digital rules, written in metric temporal logic (MTL), can be easily incorporated into simulation platforms and autonomous driving systems (ADS).
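To show what checking an MTL-style digital rule against a vehicle trace involves, here is a minimal sketch. The rule encoded below, "whenever the light is red, the vehicle must stop (speed == 0) within 2 seconds," is an invented example, not one of the paper's digitized rules.

```python
# A minimal discrete-time evaluator for a rule of the shape G (red -> F_[0,2] stopped).
def holds_within(trace, start, horizon, pred):
    """F_[0,horizon] pred, evaluated at `start` over a list of (t, state)."""
    t0 = trace[start][0]
    return any(pred(s) for t, s in trace[start:] if t - t0 <= horizon)

def check_rule(trace):
    """Returns the first violating timestamp, or None if the rule holds."""
    for i, (t, state) in enumerate(trace):
        if state["red"] and not holds_within(trace, i, 2.0, lambda s: s["speed"] == 0):
            return t
    return None

trace = [(0.0, {"red": False, "speed": 12}),
         (1.0, {"red": True,  "speed": 8}),
         (2.0, {"red": True,  "speed": 3}),
         (3.0, {"red": True,  "speed": 0})]   # stops 2 s after red -> satisfied
print(check_rule(trace))                       # None means no violation
```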
Traditional clustering algorithms often struggle to produce satisfactory results when dealing with datasets with uneven density. Additionally, they incur substantial computational costs when applied to high-dimensional data due to calculating similarity matrices. To alleviate these issues, we employ a KD-Tree to partition the dataset and compute the K-nearest-neighbors (KNN) density for each point, thereby avoiding the computation of similarity matrices. Moreover, we apply the rules of voting elections, treating each data point as a voter and casting a vote for the point with the highest density among its KNN. By utilizing the vote counts of each point, we develop the strategy for classifying noise points and potential cluster centers, allowing the algorithm to identify clusters with uneven density and complex shapes. Additionally, we define the concept of "adhesive points" between two clusters to merge adjacent clusters that have similar densities. This process helps us identify the optimal number of clusters automatically. Experimental results indicate that our algorithm not only improves the efficiency of clustering but also increases its accuracy.
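The voting step lends itself to a short sketch: build a KD-tree, derive a KNN density per point, let each point vote for the densest point among its neighbors, and treat high-vote points as candidate centers. The density formula, thresholds, and the adhesive-point merging stage of the full algorithm are simplified or omitted here.

```python
# A simplified sketch of KNN-density voting, assuming an inverse-mean-distance
# style density; not the paper's exact definitions.
import numpy as np
from scipy.spatial import cKDTree

def vote_centers(X, k=10, center_quantile=0.95):
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k + 1)            # idx[:, 0] is the point itself
    density = k / (dist[:, 1:].sum(axis=1) + 1e-12)
    votes = np.zeros(len(X), dtype=int)
    for i in range(len(X)):
        neigh = idx[i]                            # the point and its K neighbours
        votes[neigh[np.argmax(density[neigh])]] += 1   # vote for densest neighbour
    centers = np.where(votes >= np.quantile(votes, center_quantile))[0]
    return votes, centers

X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (100, 2)),
               np.random.default_rng(2).normal(3, 0.6, (100, 2))])
votes, centers = vote_centers(X)
print(f"{len(centers)} candidate centres, max votes = {votes.max()}")
```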
Improving the cooperative scheduling efficiency of equipment is key for automated container terminals to cope with the development trend of large-scale ships. In order to improve the solution efficiency of the existing space-time network (STN) model for the cooperative scheduling problem of yard cranes (YCs) and automated guided vehicles (AGVs) and to extend its application scenarios, two improved STN models are proposed. The flow balance constraints in the original model are decomposed, and the trajectory constraints of YCs and AGVs are added to obtain the model STN_A. The coupling constraint in STN_A is updated, and buffer constraints are added to STN_A to build the model STN_B. As the size of the problem increases, the solution speed of CPLEX becomes the bottleneck, so a heuristic method containing three groups of heuristic rules is designed to obtain a near-optimal solution quickly. Experimental results show that the computation time of STN_A is shortened by 49.47% on average and the gap is reduced by 1.69% on average compared with the original model. The gap between the solution of the heuristic rules and the solution of CPLEX is less than 3.50%, and the solution time of the heuristic rules is on average 99.85% less than that of CPLEX. Compared with STN_A, the computation time for solving STN_B increases by 58.93% on average.
Imbalanced datasets are common in practical applications, and oversampling methods using fuzzy rules have been shown to enhance the classification performance of imbalanced data by taking into account the relationships between data attributes. However, the creation of fuzzy rules typically depends on expert knowledge, which may not fully leverage the label information in training data and may be subjective. To address this issue, a novel fuzzy rule oversampling approach is developed based on the learning vector quantization (LVQ) algorithm. In this method, the label information of the training data is used to determine the antecedent part of If-Then fuzzy rules by dynamically dividing attribute intervals using LVQ. Fuzzy rules are then generated and adjusted to calculate rule weights. The number of new samples to be synthesized for each rule is computed, and samples from the minority class are synthesized based on the newly generated fuzzy rules, establishing a fuzzy rule oversampling method based on LVQ. To evaluate the effectiveness of this method, comparative experiments are conducted on 12 publicly available imbalanced datasets against five other sampling techniques in combination with the support function machine. The experimental results demonstrate that the proposed method significantly enhances the classification algorithm across seven performance indicators, including a boost of 2.15% to 12.34% in accuracy, 6.11% to 27.06% in G-mean, and 4.69% to 18.78% in AUC, showing that it can more efficiently improve the classification performance of imbalanced data.
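The synthesis step can be sketched in a few lines: given each rule's antecedent intervals (stand-ins below for the LVQ-derived intervals) and its weight, a rule's share of new minority samples is drawn from inside its intervals. The LVQ training and the rule-weight calculation are not reproduced, and all values are invented.

```python
# A minimal sketch of per-rule minority-sample synthesis, assuming hard-coded
# intervals and weights in place of the LVQ-derived ones.
import numpy as np

rng = np.random.default_rng(0)

rules = [  # (per-attribute antecedent intervals, rule weight) — illustrative
    ({"age": (30, 45), "glucose": (120, 160)}, 0.7),
    ({"age": (50, 70), "glucose": (90, 130)}, 0.3),
]

def synthesize(rules, n_new):
    total_w = sum(w for _, w in rules)
    samples = []
    for intervals, w in rules:
        n_rule = round(n_new * w / total_w)       # rule's share of new samples
        for _ in range(n_rule):
            samples.append({a: rng.uniform(lo, hi) for a, (lo, hi) in intervals.items()})
    return samples

print(len(synthesize(rules, 20)), "synthetic minority samples")
```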
Traditional large-scale multi-objective optimization algorithms (LSMOEAs) encounter difficulties when dealing with sparse large-scale multi-objective optimization problems (SLMOPs), where most decision variables are zero. As a result, many algorithms use a two-layer encoding approach to optimize the binary variable Mask and the real variable Dec separately. Nevertheless, existing optimizers often focus on locating non-zero variable positions to optimize the binary variable Mask; however, approximating the sparse distribution of real Pareto optimal solutions does not necessarily mean that the objective function is optimized. In data mining, it is common to mine frequent itemsets appearing together in a dataset to reveal correlations between data. Inspired by this, we propose a novel two-layer encoding learning swarm optimizer based on frequent itemsets (TELSO) to address these SLMOPs. TELSO mines the frequent items of multiple particles with better objective values to find Mask combinations that obtain better objective values for fast convergence. Experimental results on five real-world problems and eight benchmark sets demonstrate that TELSO outperforms existing state-of-the-art sparse large-scale multi-objective evolutionary algorithms (SLMOEAs) in terms of performance and convergence speed.
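The itemset-mining idea reduces to counting which non-zero Mask positions co-occur among the better particles and reusing frequent combinations. Below is a toy sketch of that counting step only; the full TELSO optimizer is not reproduced, and the itemset size and support threshold are illustrative.

```python
# A toy sketch of mining frequent position pairs from the binary Masks of
# better-scoring particles.
from collections import Counter
from itertools import combinations

best_masks = [  # binary Mask vectors of the better particles (toy data)
    [1, 0, 1, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1, 0],
]

pair_counts = Counter()
for mask in best_masks:
    items = [i for i, b in enumerate(mask) if b]      # non-zero positions
    pair_counts.update(combinations(items, 2))

min_support = 3
frequent = [pair for pair, c in pair_counts.items() if c >= min_support]
print("frequent position pairs:", frequent)           # here: (0, 2) and (2, 4)
```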
The objective principles of shiology are mainly reflected in three fields: food acquisition, eaters' health, and shiance order. Most of the objective principles in the field of food acquisition have been revealed by agronomy and foodstuff science. This research focuses on 10 principles in the fields of eaters' health and shiance order, together with five lemmas that extend from these principles. The 10 principles are the core theory of the shiology knowledge system; they play an important role among the objective principles revealed by human beings and constitute one of the basic principles of human civilization. Compared with the scientific principles of mathematics, physics, chemistry, and economics, the principles of shiology have three characteristics: popularity, practicability, and survivability. The principles of shiology in the field of eaters' health are all around us, and everyone can understand and master them; using them can improve the healthy life span of 8 billion people. The principles of shiology in the field of shiance order are an important tool of social governance, which can reduce human social conflicts, reduce social involution, improve the overall efficiency of social operation, and maintain the sustainable development of human beings.
BACKGROUND: It is increasingly common to find patients affected by a combination of type 2 diabetes mellitus (T2DM) and coronary artery disease (CAD), and studies correlate their relationship with available biological and clinical evidence. The aim of the current study was to apply association rule mining (ARM) to discover whether there are consistent patterns of clinical features relevant to these diseases. ARM leverages clinical and laboratory data to extract meaningful patterns for diabetic CAD by harnessing the power of data-driven algorithms to optimise decision-making in patient care. AIM: To reinforce the evidence of the T2DM-CAD interplay and demonstrate the ability of ARM to provide new insights into multivariate pattern discovery. METHODS: This cross-sectional study was conducted at the Department of Biochemistry in a specialized tertiary care centre in Delhi, involving a total of 300 consented subjects categorized into three groups of 100 subjects each: CAD with diabetes, CAD without diabetes, and healthy controls. The participants were enrolled from the Cardiology IPD & OPD for sample collection. The study employed the ARM technique to extract meaningful patterns and relationships from the clinical data at its original values. RESULTS: The clinical dataset comprised 35 attributes from the enrolled subjects. The analysis produced rules with a maximum branching factor of 4 and a rule length of 5, requiring a 1% probability increase for enhancement. Prominent patterns emerged, highlighting strong links between health indicators and diabetes likelihood, particularly elevated HbA1C and random blood sugar levels. The ARM technique identified that individuals with a random blood sugar level > 175 and HbA1C > 6.6 are likely to fall in the "CAD-with-diabetes" group, offering valuable insights into health indicators and the factors influencing disease outcomes. CONCLUSION: The application of this method holds promise for healthcare practitioners, offering valuable insights for enhancing the treatment of patients with specific subtypes of CAD with diabetes. By applying artificial intelligence techniques to medical data, we have shown the potential for personalized healthcare and the development of user-friendly applications aimed at improving cardiovascular health outcomes for this high-risk population.
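A rule of the kind reported above is characterized by its support and confidence. The sketch below checks "RBS > 175 and HbA1C > 6.6 ⇒ CAD with diabetes" on a handful of invented records; the study's dataset and mining parameters are not reproduced.

```python
# Toy support/confidence computation for one association rule.
records = [  # (random blood sugar, HbA1C, group) — invented data
    (190, 7.1, "CAD_DM"), (182, 6.9, "CAD_DM"), (210, 8.0, "CAD_DM"),
    (140, 5.6, "CAD"),    (120, 5.2, "healthy"), (178, 6.8, "CAD_DM"),
    (160, 6.0, "CAD"),    (110, 5.0, "healthy"),
]

antecedent = [r for r in records if r[0] > 175 and r[1] > 6.6]
both = [r for r in antecedent if r[2] == "CAD_DM"]

support = len(both) / len(records)        # fraction satisfying rule and outcome
confidence = len(both) / len(antecedent)  # P(outcome | antecedent)
print(f"support={support:.2f}, confidence={confidence:.2f}")  # 0.50 and 1.00 here
```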
Objective: To apply and verify the intelligent audit rules for urine analysis proposed by Cui et al. Method: A total of 1139 urine samples from hospitalized patients in Tai'an Central Hospital from September 2021 to November 2021 were randomly selected, and all samples were manually examined under the microscope after detection on the UN9000 urine analysis line. The intelligent audit rules (including the microscopic review rules and manual verification rules) were validated against the manual microscopic examination and manual audit, and the rules were adjusted to fit our laboratory. The laboratory turnaround time (TAT) before and after the application of the intelligent audit rules was compared. Result: The microscopic review rate of the intelligent rules was 25.63% (292/1139); the true positive, false positive, true negative, and false negative rates were 27.66% (315/1139), 6.49% (74/1139), 62.34% (710/1139), and 3.51% (40/1139), respectively. The approval consistency rate of the manual verification rules was 84.92% (727/856), the approval inconsistency rate was 0% (0/856), the interception consistency rate was 12.61% (108/856), and the interception inconsistency rate was 0% (0/856). Conclusion: The intelligent audit rules for urine analysis by Cui et al. have good clinical applicability in our laboratory.
Network arbitration cases arising from online lending disputes are pouring into the courts in large numbers; it is reported that the network arbitration systems of some arbitration institutions even "can accept more than 10,000 cases every day." While online lending is booming, it has also caused many contradictions and disputes, and traditional dispute resolution methods have failed to respond effectively to the need for efficient and convenient resolution of online lending disputes. This paper studies the arbitral awards of online loans and proposes the construction of implementation review rules.
Cropland elevation uplift (CLEU) has recently become a new challenge for agricultural modernization, food security, and sustainable cropland use in China. Uncovering the rules of CLEU is of great theoretical and practical significance for China's sustainable agricultural development and rural revitalization strategy. However, existing studies lack in-depth disclosure of multi-scale CLEU evolution rules, making it difficult to support the formulation of specific cropland protection policies. We analyzed the spatio-temporal evolution and multi-scale CLEU in China from 1980 to 2020 using the Lorenz curve, the gravity center model, hotspot analysis, and the cropland elevation spectrum. The results indicate that the center of gravity of cropland moved to the northeast from 1980 to 2000 and then shifted to the northwest. The spatial distribution of cropland became increasingly imbalanced from 1980 to 2000. The change hotspots clustered in the northwest and the northeast, whereas cold spots were mainly in southeastern China. The average elevation of cropland increased by 17.38 m, and the elevation uplift rule differed evidently across regions and scales. From 1980 to 2000, all provinces except Xinjiang, Inner Mongolia, Gansu, and Yunnan exhibited CLEU, with Qinghai, Tibet, Beijing, and Guangdong showing the most noticeable uplift. CLEU can alleviate the shortage of cropland to some extent; however, without planning constraints, it will increase ecological risk and food security risk.
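The gravity center model referenced above reduces to an area-weighted centroid: X = Σ(aᵢxᵢ)/Σaᵢ and Y = Σ(aᵢyᵢ)/Σaᵢ over spatial units. Below is a short sketch with invented toy values; the study's actual units and data are not reproduced.

```python
# Area-weighted cropland gravity centre on toy coordinates.
import numpy as np

lon = np.array([102.0, 110.5, 116.2, 126.8])   # unit centroids (deg E)
lat = np.array([25.3, 34.1, 39.9, 45.7])       # unit centroids (deg N)
area = np.array([1.2, 3.4, 2.8, 2.1])          # cropland area per unit

cx = (area * lon).sum() / area.sum()
cy = (area * lat).sum() / area.sum()
print(f"cropland gravity centre: ({cx:.2f} E, {cy:.2f} N)")
```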
The multi-skill resource-constrained project scheduling problem (MS-RCPSP) is a significant management science problem that extends the resource-constrained project scheduling problem (RCPSP) and is integrated with a real project and production environment. An efficient way to solve the MS-RCPSP is to use dispatching rules combined with a parallel scheduling mechanism to generate a scheduling scheme. This paper proposes an improved gene expression programming (IGEP) approach to discover new dispatching rules that can broadly solve the MS-RCPSP. A new backward traversal decoding mechanism and several neighborhood operators are applied in IGEP. The backward traversal decoding mechanism dramatically reduces the space complexity of the decoding process and improves the algorithm's performance, while the neighborhood operators improve the exploration of the potential search space. The experiments take the intelligent multi-objective project scheduling environment (iMOPSE) benchmark dataset as the training and testing sets of IGEP. Ten new dispatching rules are discovered and extracted by IGEP, and eight of the ten are superior to other typical dispatching rules.
Information security has emerged as a key problem in encryption because of the rapid evolution of the internet and networks; thus, the progress of image encryption techniques is an increasingly serious and considerable issue. The small key space, low confidentiality, low key sensitivity, and easy exploitability of existing image encryption techniques that integrate chaotic systems and DNA computing are the main problems motivating the new encryption technique proposed in this study. In our proposed scheme, a three-dimensional Chen's map and a one-dimensional logistic map are employed to construct a double-layer image encryption scheme. In the confusion stage, scrambling operations related to the original plain-image pixels are designed using Chen's map: a stream pixel scrambling operation related to the plain image is constructed, and then a block scrambling-based encryption of the stream-pixel-scrambled image is designed. In the diffusion stage, two rounds of pixel diffusion are generated from the confused image for intra-image diffusion. Chen's map, the logistic map, and DNA computing are employed to construct the diffusion operations. A reverse complementary rule is applied to obtain a new form of DNA: Chen's map produces a pseudorandom DNA sequence, and another DNA form is constructed from the reversed pseudorandom DNA sequence. Finally, the XOR operation is performed multiple times to obtain the encrypted image. According to the simulation experiments and security analysis, this approach extends the key space, has great sensitivity, and is able to withstand various typical attacks. The proposed algorithm achieves an adequate encryption effect: it decreases the correlation between adjacent pixels to near zero, increases the information entropy, and yields a number of pixels change rate (NPCR) and a unified average changing intensity (UACI) both very near their optimal values.
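The confusion-then-diffusion structure can be illustrated in miniature: a chaotic sequence drives a pixel permutation (confusion) and a chaotic keystream is XORed with the pixels (diffusion). The sketch below uses only a 1-D logistic map; Chen's map and the DNA operations of the full scheme are omitted, and the parameters are illustrative.

```python
# A heavily simplified two-stage chaotic image cipher sketch.
import numpy as np

def logistic_stream(x0, n, r=3.99):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)       # logistic map iteration
        xs[i] = x
    return xs

img = np.random.default_rng(0).integers(0, 256, 64, dtype=np.uint8)  # toy "image"

perm = np.argsort(logistic_stream(0.3141, img.size))   # confusion: chaotic permutation
keystream = (logistic_stream(0.2718, img.size) * 256).astype(np.uint8)
cipher = img[perm] ^ keystream                          # diffusion: XOR with keystream

# decryption reverses the two stages
plain = np.empty_like(img)
plain[perm] = cipher ^ keystream
assert np.array_equal(plain, img)
```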
An information system is a type of knowledge representation, and attribute reduction is crucial in big data, machine learning, data mining, and intelligent systems. There are several approaches to attribute reduction problems, but they all require a common categorization. The selection of features is a challenge in most scientific studies. When working with huge datasets, selecting all available attributes is not an option because it frequently complicates the study and decreases performance; on the other side, neglecting some attributes might jeopardize data accuracy. In this case, rough set theory provides a useful approach for identifying superfluous attributes that may be ignored without sacrificing any significant information; nonetheless, investigating all available combinations of attributes results in computational problems. Furthermore, because attribute reduction is primarily a mathematical issue, technical progress in reduction depends on the advancement of mathematical models. Because the focus of this study is on the mathematical side of attribute reduction, we propose methods to compute reductions of information systems based on classical rough set theory, the strength of rules, and the similarity matrix; we apply the proposed methods to several examples and calculate the reduction in each case. These methods expand the attribute reduction options available to researchers.
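In the classical rough-set sense, an attribute is dispensable if dropping it leaves every indiscernibility class consistent with the decision. The sketch below greedily drops attributes from a toy decision table; the paper's strength-of-rules and similarity-matrix methods are not reproduced.

```python
# A small sketch of greedy attribute reduction on a toy decision table.
from itertools import groupby

table = [  # ((condition attributes a1, a2, a3), decision d)
    ((1, 0, 1), "yes"), ((1, 0, 0), "yes"),
    ((0, 1, 1), "no"),  ((0, 1, 0), "no"),
    ((1, 1, 1), "yes"),
]

def consistent(table, keep):
    """True if rows equal on the kept attributes always share a decision."""
    key = lambda row: tuple(row[0][i] for i in keep)
    rows = sorted(table, key=key)
    return all(len({d for _, d in grp}) == 1
               for _, grp in groupby(rows, key=key))

keep = [0, 1, 2]
for a in [2, 1, 0]:                      # greedily try to drop each attribute
    trial = [i for i in keep if i != a]
    if trial and consistent(table, trial):
        keep = trial
print("reduct:", keep)                   # [0] here — a1 alone decides d
```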
Water exchange between the different compartments of a heterogeneous specimen can be characterized via diffusion magnetic resonance imaging (dMRI). Many analysis frameworks using dMRI data have been proposed to describe exchange, often using a double diffusion encoding (DDE) stimulated-echo sequence. Techniques such as diffusion exchange weighted imaging (DEWI) and the filter-exchange and rapid-exchange models use a specific subset of the full-space DDE signal. In this work, a general representation of the DDE signal was employed with different sampling schemes (namely constant b1, diagonal, and anti-diagonal) from the data reduction models to estimate exchange. A near-uniform sampling scheme was proposed and compared with the other sampling schemes. The filter-exchange and rapid-exchange models were also applied to estimate exchange with their own subsampling schemes. These subsampling schemes and models were compared on both simulated data and experimental data acquired with a benchtop MR scanner. In synthetic data, the diagonal and near-uniform sampling schemes performed best, their estimates being most consistent with the ground truth. In experimental data, the shifted-diagonal and near-uniform sampling schemes outperformed the others, yielding estimates most consistent with the full-space estimation. The results suggest the feasibility of measuring exchange using a general representation of the DDE signal along with variable sampling schemes. In future studies, algorithms could be further developed to optimize sampling schemes and to incorporate additional properties, such as geometry and diffusion anisotropy, into exchange frameworks.
The Red-Thai Binh River system is an important water resource for the Northern Delta, serving the development of agriculture, people's livelihoods, and other economic sectors through its upstream reservoirs and a system of water abstraction works along the rivers. However, due to the impact of climate change and pressure from socio-economic development, the operation of the reservoir system according to Decision No. 740/QD-TTg, issued on June 17, 2019 by the Prime Minister to promulgate the Red-Thai Binh River system inter-reservoir operation rules (Operation Rules 740), has shortcomings that need adjustment for higher water use efficiency while meeting downstream water demand and power generation benefits. Based on the results of water balance calculations and an analysis of the economic benefits of water use scenarios, this research proposes adjustments to the inter-reservoir operation during the dry season in the Red River system. The results show that an average water level of 1.0 - 1.7 m should be maintained at Hanoi during the increased release period.
Tea has a history of thousands of years in China, and it plays an important role in people's working and daily lives. Tea culture, rich in connotation, is an important part of Chinese traditional culture, and its existence and development are also of great significance to the diversified development of world culture. Based on Stuart Hall's encoding/decoding theory, this paper analyzes the problems in the spread of Chinese tea culture inside and outside the country and provides solutions from the perspectives of encoding, communication, and decoding. It is expected to provide a reference for the domestic and international dissemination of Chinese tea culture.
According to a World Health Organization report released in 2019, diabetes claimed the lives of approximately 1.5 million individuals globally in 2019, and around 450 million people are affected by diabetes worldwide. Diabetes is thus rampant across the world, with a large share of the population affected by it. Among diabetics, a large number of people fail to identify their disease in its initial stage, and hence the disease level moves from Type-1 to Type-2. To avoid this situation, we propose a new fuzzy-logic-based neural classifier for early detection of diabetes. A set of new neuro-fuzzy rules with time constraints is introduced and applied for the first-level classification. These levels are further refined by using Fuzzy Cognitive Maps (FCM) with time intervals for making the final decision over the classification process. The main objective of the proposed model is to detect the diabetes level based on time. The set of neuro-fuzzy rules is also used for selecting the most contributing values in the decision-making process of diabetes prediction. The proposed model proved its efficiency in experiments conducted not only on repository data but also against the standard diabetes detection models available in the market.
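A time-constrained fuzzy rule of the kind described can be sketched simply: a glucose reading has a triangular membership in "high," and the rule only fires fully when the fasting duration is long enough. The membership shapes, thresholds, and the rule itself below are invented for illustration; the paper's neuro-fuzzy rule base and FCM stage are not reproduced.

```python
# A toy time-constrained fuzzy rule for first-level diabetes screening.
def tri(x, a, b, c):
    """Triangular membership function peaking at b over [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def diabetes_risk(glucose_mg_dl, hours_fasting):
    high = tri(glucose_mg_dl, 100, 160, 220)
    # time constraint: the rule fires fully only on a genuine fasting measurement
    time_ok = 1.0 if hours_fasting >= 8 else hours_fasting / 8
    return min(high, time_ok)            # fuzzy AND via minimum

print(diabetes_risk(150, 10))            # fasting reading -> meaningful risk
print(diabetes_risk(150, 2))             # recent meal -> rule attenuated
```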