With its tamper-proof and traceable properties, blockchain technology has been widely used in the field of data sharing. How to preserve individual privacy while enabling efficient data queries is one of the primary issues in secure data sharing. In this paper, we study verifiable keyword frequency (KF) queries with local differential privacy in blockchain. Data objects carry both numerical and keyword attributes; the latter are sensitive and require privacy protection. However, prior blockchain studies face a trilemma in privacy protection and are unable to handle KF queries. We propose an efficient framework that protects data owners' privacy on keyword attributes while enabling quick and verifiable processing of KF queries. The framework computes an estimate of a keyword's frequency and is efficient in both query time and verification object (VO) size. A utility-optimized local differential privacy technique is used for privacy protection. The data owner adds noise to the data locally, based on local differential privacy, so that an attacker cannot infer the owner of the keywords while the difference in the probability distribution of the KF stays within the privacy budget. We propose the VB-cm tree as the authenticated data structure (ADS). The VB-cm tree combines the Verkle tree and the Count-Min sketch (CM-sketch) to lower the VO size and query time. It uses vector commitments to verify query results, while the fixed-size CM-sketch, which summarizes the frequencies of multiple keywords, estimates the KF via hashing operations. We conduct an extensive evaluation of the proposed framework. The experimental results show that, compared to the Merkle B+ tree, the query time is reduced by 52.38% and the VO size is reduced by more than one order of magnitude.
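The VB-cm tree, Verkle commitments, and LDP mechanism described above are specific to the paper and are not reproduced here, but the Count-Min sketch building block is standard. The minimal Python sketch below (width, depth, and keywords are illustrative assumptions, not the paper's parameters) shows how a fixed-size CM-sketch summarizes keyword frequencies via hashing and returns a one-sided estimate:

```python
import hashlib

class CountMinSketch:
    """Fixed-size frequency summary: estimates counts with one-sided error."""
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _index(self, keyword, row):
        # One hash function per row, derived from a row-salted SHA-256 digest.
        digest = hashlib.sha256(f"{row}:{keyword}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def add(self, keyword, count=1):
        for row in range(self.depth):
            self.table[row][self._index(keyword, row)] += count

    def estimate(self, keyword):
        # The minimum over rows upper-bounds the true count from above.
        return min(self.table[row][self._index(keyword, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for kw in ["diabetes", "diabetes", "hypertension", "asthma"]:
    cms.add(kw)
print(cms.estimate("diabetes"))      # prints 2 here; never undercounts
print(cms.estimate("hypertension"))  # prints 1 here
```

Because several keywords can hash to the same counters, the estimate may overcount but never undercounts, which is why the abstract speaks of an estimate of a keyword's frequency rather than an exact count.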
Query processing in distributed database management systems (DBMSs) faces more challenges than in a single-node DBMS, where query optimization is already an NP-hard problem: there are more operators and more factors in the cost models and metadata. Learned query optimizers (mainly in single-node DBMSs) have received attention due to their capability to capture data distributions and their flexibility in avoiding hand-crafted rules during refinement and adaptation to new hardware. In this paper, we focus on extending learned query optimizers to distributed DBMSs. Specifically, we propose one possible but general architecture of a learned query optimizer in the distributed context and highlight the differences from learned optimizers in single-node systems. In addition, we discuss the challenges and possible solutions.
With the rapid development of artificial intelligence, large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation. These models have great potential to enhance database query systems, enabling more intuitive and semantic query mechanisms. Our model leverages an LLM's deep learning architecture to interpret natural language queries and translate them into accurate database queries. The system integrates an LLM-powered semantic parser that translates user input into structured queries that the database management system can understand. First, the user query is pre-processed: the text is normalized and ambiguity is removed. Next comes semantic parsing, where the LLM interprets the pre-processed text and identifies key entities and relationships. Query generation then converts the parsed information into a structured query format tailored to the target database schema. Finally, the resulting query is executed on the database and the results are returned to the user. The system also provides feedback mechanisms to improve and optimize future query interpretations. Using advanced LLMs for model implementation and fine-tuning on diverse datasets, the experimental results show that the proposed method significantly improves the accuracy and usability of database queries, making data retrieval easy for users without specialized knowledge.
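As a rough illustration of the four-stage pipeline (pre-processing, semantic parsing and query generation, execution, feedback), the sketch below stubs out the model call; `call_llm`, the prompt wording, and the `employees` schema are hypothetical placeholders, not the paper's implementation:

```python
import sqlite3

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; in practice this would hit a hosted or local model."""
    # Stubbed so the sketch runs end-to-end without an external service.
    return "SELECT name, salary FROM employees WHERE department = 'Sales';"

def preprocess(user_text: str) -> str:
    # Normalize whitespace and casing; a real system also resolves ambiguity here.
    return " ".join(user_text.lower().split())

def generate_sql(user_text: str, schema: str) -> str:
    # Semantic parsing + query generation, conditioned on the target schema.
    prompt = (f"Database schema:\n{schema}\n"
              f"Translate this request into one SQL query:\n{preprocess(user_text)}")
    return call_llm(prompt)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, department TEXT, salary REAL)")
conn.execute("INSERT INTO employees VALUES ('Ann', 'Sales', 50000), ('Bo', 'IT', 60000)")

sql = generate_sql("show me the names and salaries of everyone in sales",
                   "employees(name TEXT, department TEXT, salary REAL)")
print(sql)
print(conn.execute(sql).fetchall())  # results returned to the user; feedback can refine later prompts
```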
The query optimizer uses cost-based optimization to create the execution plan with the least cost, which also consumes the least amount of resources. Query optimization for relational database systems is a combinatorial optimization problem, which renders exhaustive search impossible as query sizes rise. In recent decades, increases in CPU performance have outpaced main-memory and disk access speeds, making data compression a practical strategy for improving database system performance. Compression and query optimization are the two most important factors for performance enhancement: compression reduces the volume of data, whereas query optimization minimizes execution time. Compressing the database reduces the memory requirement, data takes less time to load into memory, fewer buffer misses occur, and intermediate results are smaller. This paper performs query optimization on a graph database in a cloud dew environment by considering which approach requires less time to execute a query. Together, compression and query optimization improve database performance. This research compares the performance of MySQL and Neo4j databases in terms of memory usage and execution time when running on cloud dew servers.
The advantage of recursive programming is that it is very easy to write and it only requires very few lines of code if done correctly. Structured query language (SQL) is a database language and is used to manipulate data. In Microsoft SQL Server 2000, recursive queries are implemented to retrieve data which is presented in a hierarchical format, but this way has its disadvantages. The common table expression (CTE) construct introduced in Microsoft SQL Server 2005 provides the significant advantage of being able to reference itself to create a recursive CTE. Hierarchical data structures, organizational charts and other parent-child table relationship reports can easily benefit from the use of recursive CTEs. The recursive query is illustrated and implemented on some simple hierarchical data. In addition, one business case study is brought forward and the solution using a recursive query based on a CTE is shown. At the same time, stored procedures are programmed to do the recursion in SQL. Test results show that recursive queries based on CTEs give us the chance to create much more complex queries while retaining a much simpler syntax.
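A minimal runnable illustration of a recursive CTE over parent-child data follows. It uses Python's built-in sqlite3 driver for self-containment; SQLite spells the construct `WITH RECURSIVE`, whereas SQL Server 2005+ uses the same anchor/recursive-member pattern without the `RECURSIVE` keyword. The table and rows are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT, manager_id INTEGER);
INSERT INTO employee VALUES
  (1, 'CEO', NULL), (2, 'VP Sales', 1), (3, 'VP Eng', 1),
  (4, 'Sales Rep', 2), (5, 'Engineer', 3);
""")

# The anchor member selects the root; the recursive member joins children to the
# rows produced so far, carrying a depth counter for the org-chart level.
rows = conn.execute("""
WITH RECURSIVE org(id, name, depth) AS (
    SELECT id, name, 0 FROM employee WHERE manager_id IS NULL
    UNION ALL
    SELECT e.id, e.name, org.depth + 1
    FROM employee e JOIN org ON e.manager_id = org.id
)
SELECT name, depth FROM org ORDER BY depth, name;
""").fetchall()

for name, depth in rows:
    print("  " * depth + name)   # indented organizational chart
```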
Visible-infrared person re-identification (VIPR) is a cross-modal retrieval task that searches for a target in a gallery captured by cameras of different spectra. The severe challenge for VIPR is the large intra-class variation caused by the modal discrepancy between visible and infrared images. To address this, this paper proposes a query related cluster (QRC) method for VIPR. Firstly, an attention mechanism is used to calculate the similarity relation between a visible query and the infrared images with the same identity in the gallery. Secondly, those infrared images with the same identity as the query image are aggregated using the similarity relation to form a dynamic clustering center corresponding to the query image. Thirdly, a QRC loss function is designed to enlarge the similarity between the query image and its dynamic cluster center, so as to achieve query related clustering and compact the intra-class variations. Consequently, in the proposed QRC method, each query has its own dynamic clustering center, which can well characterize intra-class variations in VIPR. Experimental results demonstrate that the proposed QRC method is superior to many state-of-the-art approaches, acquiring a 90.77% rank-1 identification rate on the RegDB dataset.
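The exact QRC loss is defined in the paper; the toy NumPy sketch below (random features, cosine similarity, softmax attention) only illustrates the general shape of the idea: build a query-specific dynamic center from same-identity gallery features and penalize the gap between the query and that center:

```python
import numpy as np

def qrc_style_loss(query, gallery_same_id):
    """Toy sketch: attention-weighted dynamic center, then a pull-to-center loss."""
    # Cosine similarities between the query and same-identity infrared features.
    q = query / np.linalg.norm(query)
    g = gallery_same_id / np.linalg.norm(gallery_same_id, axis=1, keepdims=True)
    sims = g @ q
    # Softmax attention weights build a query-specific (dynamic) cluster center.
    weights = np.exp(sims) / np.exp(sims).sum()
    center = weights @ gallery_same_id
    center /= np.linalg.norm(center)
    # The loss decreases as the query moves toward its dynamic center.
    return 1.0 - float(center @ q)

rng = np.random.default_rng(0)
query = rng.normal(size=128)
gallery = rng.normal(size=(5, 128))   # infrared features sharing the query identity
print(qrc_style_loss(query, gallery))
```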
To solve the query processing correctness problem for semantic-based relational data integration, the semantics of SPARQL (simple protocol and RDF query language) queries is defined. In the course of query rewriting, all relevant tables are found and decomposed into minimal connectable units. Minimal connectable units are joined according to the semantic queries to produce semantically correct query plans. Algorithms for query rewriting and transforming are presented, and their computational complexity is discussed. In the worst case, the query decomposing algorithm finishes in O(n²) time and the query rewriting algorithm requires O(nm) time. The performance of the algorithms is verified by experiments, and the experimental results show that when the length of a query is less than 8, the query processing algorithms provide satisfactory performance.
Query expansion with a thesaurus is one of the useful techniques in modern information retrieval (IR). In this paper, a method of query expansion for Chinese IR using a decaying co-occurrence model is proposed and realized. The model is an extension of the traditional co-occurrence model, adding a decaying factor that decreases the mutual information as the distance between the terms increases. Experimental results on TREC-9 collections show that this query expansion method yields significant improvements over IR without query expansion.
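The paper defines the decay on mutual information; the toy sketch below applies the same idea to raw co-occurrence counts (the decay factor, window size, and documents are illustrative assumptions), showing how closer term pairs contribute more weight to expansion candidates:

```python
import math
from collections import defaultdict

def decayed_cooccurrence(docs, decay=0.8, window=5):
    """Toy sketch: co-occurrence weight that shrinks as term distance grows."""
    weights = defaultdict(float)
    for tokens in docs:
        for i, t1 in enumerate(tokens):
            for j in range(i + 1, min(i + 1 + window, len(tokens))):
                # The contribution decays geometrically with the gap between terms.
                contribution = decay ** (j - i - 1)
                weights[(t1, tokens[j])] += contribution
                weights[(tokens[j], t1)] += contribution
    return weights

docs = [["query", "expansion", "improves", "retrieval"],
        ["retrieval", "benefits", "from", "query", "expansion"]]
w = decayed_cooccurrence(docs)
# Candidate expansion terms for "query", strongest decayed association first.
candidates = sorted(((t2, s) for (t1, t2), s in w.items() if t1 == "query"),
                    key=lambda x: -x[1])
print(candidates)
```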
Aiming at the problem that only some types of SPARQL (simple protocol and RDF query language) queries can be answered by the current resource description framework link traversal based query execution (RDF-LTE) approach, this paper discusses how the execution order of triple patterns affects the query results and cost, based on concrete SPARQL queries, and analyzes two properties of the web of linked data: missing backward links and missing contingency solutions. Then three heuristic principles for logical query plan optimization, namely the filtered basic graph pattern (FBGP) principle, the triple pattern chain principle and the seed URIs principle, are proposed. The three principles help decrease the intermediate solutions and increase the types of queries that can be answered. The effectiveness and feasibility of the proposed approach are evaluated. The experimental results show that more query results can be returned at less cost, thus enabling users to develop the full potential of the web of linked data.
Reverse k nearest neighbor (RNNk) is a generalization of the reverse nearest neighbor problem and has recently received increasing attention in spatial data indexing and querying. An RNNk query retrieves all the data points that use the query point as one of their k nearest neighbors. To answer RNNk queries efficiently, the properties of the Voronoi cell and the space-dividing regions are applied. The RNNk of a given point can be found without computing its nearest neighbors every time by using the rank Voronoi cell. With the elementary RNNk query result, the candidate reverse nearest neighbor points can be further limited by the approximation with a sweepline and the partial extension of the query region Q. The approximate minimum average distance (AMAD) can be calculated by the approximate RNNk without the restriction of k. Experimental results indicate the efficiency and effectiveness of the algorithm and the approximate method in three varied data distribution spaces. The approximate query and calculation method achieve high precision and accurate recall by filtering data and pruning the search space.
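The paper's Voronoi-cell technique avoids recomputing nearest neighbors; as a point of reference only, the brute-force baseline below simply checks, for every data point, whether the query appears among its k nearest neighbors (data, dimensionality, and k are arbitrary assumptions):

```python
import numpy as np

def rnn_k(points, query, k):
    """Naive baseline: indices of points that have `query` among their k nearest neighbors."""
    data = np.vstack([points, query])        # treat the query as one more point
    q_idx = len(points)
    result = []
    for i in range(len(points)):
        dists = np.linalg.norm(data - data[i], axis=1)
        dists[i] = np.inf                     # a point is not its own neighbor
        knn = np.argsort(dists)[:k]
        if q_idx in knn:
            result.append(i)
    return result

rng = np.random.default_rng(1)
pts = rng.random((200, 2))
q = np.array([0.5, 0.5])
print(rnn_k(pts, q, k=3))
```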
In order to narrow the semantic gap existing in content-based image retrieval (CBIR), a novel retrieval technology called auto-extended multi query examples (AMQE) is proposed. It expands the single query image used in traditional image retrieval into multiple query examples so as to include more image features related to semantics. By retrieving images for each of the multiple query examples and integrating the retrieval results, more relevant images can be obtained. The property of the recall-precision curve of a general retrieval algorithm and the K-means clustering method are used to realize the expansion according to the distance between the image features of the initially retrieved images. The experimental results demonstrate that the AMQE technology can greatly improve the recall and precision of the original algorithms.
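One plausible reading of the expansion step, sketched below with scikit-learn's KMeans on synthetic features, is to cluster the top-ranked results of the initial retrieval and treat the cluster centroids as additional query examples; the feature dimensions, cluster count, and ranking cut-off are assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.cluster import KMeans

def amqe_style_expand(query_feat, gallery_feats, top_n=30, n_examples=3):
    """Toy sketch: derive extra query examples from clusters of the top-ranked results."""
    # Rank the gallery by feature distance to the original query image.
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    top = gallery_feats[np.argsort(dists)[:top_n]]
    # Cluster the top results; each cluster centroid acts as an extra query example.
    km = KMeans(n_clusters=n_examples, n_init=10, random_state=0).fit(top)
    return np.vstack([query_feat, km.cluster_centers_])

rng = np.random.default_rng(2)
gallery = rng.random((1000, 64))
query = rng.random(64)
queries = amqe_style_expand(query, gallery)
print(queries.shape)   # (1 + n_examples, 64): retrieve with each, then merge the result lists
```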
An approximate approach of querying between heterogeneous ontology-based information systems based on an association matrix is proposed. First, the association matrix is defined to describe relations between concepts in two ontologies. Then, a method of rewriting queries based on the association matrix is presented to solve the ontology heterogeneity problem. It rewrites the queries in one ontology to approximate queries in another ontology based on the subsumption relations between concepts. The method also uses vectors to represent queries, and then computes the vectors with the association matrix; the disjoint relations between concepts can be considered by the results. It can get better approximations than the methods currently in use, which do not consider disjoint relations. The method can be processed by machines automatically. It is simple to implement and expected to run quite fast.
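A small NumPy sketch of the vector-times-matrix rewriting step follows; the two concept vocabularies, the association weights, and the acceptance threshold are all hypothetical:

```python
import numpy as np

# Hypothetical concept vocabularies for two ontologies.
onto_a = ["Car", "Truck", "Vehicle"]
onto_b = ["Automobile", "Lorry", "MotorVehicle"]

# Association matrix M[i, j]: assumed strength of the subsumption/equivalence
# relation between concept i of ontology A and concept j of ontology B.
M = np.array([[0.9, 0.0, 0.3],
              [0.0, 0.8, 0.3],
              [0.2, 0.2, 1.0]])

# A query over ontology A is represented as a 0/1 vector over its concepts.
query_a = np.array([1, 0, 0])             # "find all Cars"

# Rewriting: multiply by the association matrix and keep sufficiently related concepts.
scores = query_a @ M
rewritten = [c for c, s in zip(onto_b, scores) if s >= 0.3]
print(rewritten)                           # approximate query in ontology B
```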
This article presents an innovative approach to automatic rule discovery for data transformation tasks leveraging XGBoost, a machine learning algorithm renowned for its efficiency and performance. The framework proposed herein utilizes the fusion of diversified feature formats, specifically metadata, textual, and pattern features. The goal is to enhance the system's ability to discern and generalize transformation rules from source to destination formats in varied contexts. Firstly, the article delves into the methodology for extracting these distinct features from raw data and the pre-processing steps undertaken to prepare the data for the model. Subsequent sections expound on the mechanism of feature optimization using Recursive Feature Elimination (RFE) with linear regression, aiming to retain the most contributive features and eliminate redundant or less significant ones. The core of the research revolves around the deployment of the XGBoost model for training, using the prepared and optimized feature sets. The article presents a detailed overview of the mathematical model and algorithmic steps behind this procedure. Finally, the process of rule discovery (prediction phase) by the trained XGBoost model is explained, underscoring its role in real-time, automated data transformations. By employing machine learning, and particularly the XGBoost model, in the context of Business Rule Engine (BRE) data transformation, the article underscores a paradigm shift towards more scalable, efficient, and less human-dependent data transformation systems. This research opens doors for further exploration into automated rule discovery systems and their applications in various sectors.
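A condensed sketch of the two-stage pipeline (RFE with linear regression for feature selection, then XGBoost for rule prediction) is shown below, assuming the scikit-learn and xgboost packages are installed; the synthetic features and labels stand in for the metadata, textual, and pattern features described above:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))                    # stand-ins for metadata/text/pattern features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # stand-in transformation-rule label

# Stage 1: RFE with linear regression keeps the most contributive features.
selector = RFE(estimator=LinearRegression(), n_features_to_select=4)
selector.fit(X, y)
X_selected = X[:, selector.support_]

# Stage 2: train XGBoost on the reduced feature set to predict the rule.
model = XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_selected, y)
preds = model.predict(X_selected)

print("kept features:", np.flatnonzero(selector.support_))
print("train accuracy:", (preds == y).mean())
```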
In a cloud environment, outsourced graph data is widely used by companies, enterprises, medical institutions, and so on. Data owners and users can save costs and improve efficiency by storing large amounts of graph data on cloud servers. However, servers on cloud platforms are usually subject to subjective or objective attacks, which leave the outsourced graph data in an insecure state. The issue of privacy data protection has become an important obstacle to data sharing and usage, and how to query outsourced graph data safely and effectively has become a research focus. Adjacency queries are a basic and frequently used operation on graphs, and supporting multi-keyword fuzzy search at the same time effectively extends the query range and query capability. This work proposes to protect the privacy of outsourced graph data by encryption, mainly studies the problem of multi-keyword fuzzy adjacency queries, and puts forward a solution. In our scheme, we use a Bloom filter and an encryption mechanism to build a secure index and query tokens, and adjacency queries are executed through the indexes and query tokens on the cloud server. Our proposed scheme is proved by formal analysis, and the performance and effectiveness of the scheme are illustrated by experimental analysis. The research results of this work will provide solid theoretical and technical support for the further popularization and application of encrypted graph data processing technology.
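The scheme's full encryption and token construction are given in the paper; the sketch below only illustrates the Bloom-filter-index idea with HMAC-keyed hashing, so that the server matches tokens against per-vertex bit arrays without ever seeing plaintext keywords (the key, filter size, and keywords are illustrative assumptions):

```python
import hashlib, hmac

KEY = b"data-owner-secret"   # shared only between the data owner and authorized users
M, K = 128, 3                # Bloom filter bits and hash count (illustrative)

def keyed(keyword: str) -> bytes:
    # The owner/user applies an HMAC so the cloud server only sees pseudorandom tokens.
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

def positions(token: bytes):
    # Derive K bit positions from index-salted digests of the keyed token.
    return [int(hashlib.sha256(bytes([i]) + token).hexdigest(), 16) % M for i in range(K)]

def build_index(neighbor_keywords):
    bits = [0] * M
    for kw in neighbor_keywords:
        for p in positions(keyed(kw)):
            bits[p] = 1
    return bits              # this bit array is what gets outsourced to the cloud server

def server_match(index_bits, query_token) -> bool:
    # The server tests membership using only the token, never the keyword itself.
    return all(index_bits[p] for p in positions(query_token))

index_v1 = build_index(["cardiology", "oncology"])   # keywords on v1's adjacent vertices
print(server_match(index_v1, keyed("oncology")))     # True
print(server_match(index_v1, keyed("neurology")))    # False (up to Bloom-filter false positives)
```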