In the context of big data, many large-scale knowledge graphs have emerged to organize the explosive growth of web data on the Internet. To select suitable knowledge graphs from among the many available, quality assessment is particularly important. As an important aspect of quality assessment, completeness generally refers to the ratio of the current data volume to the total data volume. When evaluating the completeness of a knowledge graph, it is often necessary to refine the completeness dimension by defining different completeness metrics, so as to produce more complete and understandable evaluation results. However, lack of awareness of requirements is the most problematic quality issue: in the actual evaluation process, completeness metrics must take the actual application into account. Therefore, to accurately recommend suitable knowledge graphs to users, it is particularly important to develop relevant measurement metrics and formulate measurement schemes for completeness. In this paper, we first clarify the concept of completeness, establish each completeness metric, and finally design a measurement proposal for the completeness of knowledge graphs.
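The ratio-based notion of completeness described above can be illustrated with a minimal sketch; the metric names and counts below are hypothetical, not taken from the paper:

```python
def completeness(current_count: int, total_count: int) -> float:
    """Completeness as the ratio of current data volume to total data volume."""
    if total_count == 0:
        return 0.0
    return current_count / total_count

# Hypothetical per-metric counts for one knowledge graph.
metrics = {
    "entity_completeness": (9_000, 10_000),     # entities present vs. expected
    "property_completeness": (42_000, 60_000),  # property values present vs. expected
}

scores = {name: completeness(cur, total) for name, (cur, total) in metrics.items()}
print(scores)  # {'entity_completeness': 0.9, 'property_completeness': 0.7}
```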
At present, although knowledge graphs have been widely used in various fields such as recommendation systems, question answering systems, and intelligent search, quality problems such as knowledge omissions and errors persist. Quality assessment and control, as an important means of ensuring the quality of knowledge, can make applications based on knowledge graphs more complete and more accurate by reasonably assessing the knowledge graphs and, at the same time, fixing and improving the quality problems found. Therefore, as an indispensable part of the knowledge graph construction process, the results of quality assessment and control determine the usefulness of the knowledge graph. Within this process, the assessment and enhancement of completeness, as an important part of the assessment and control phase, determine whether the knowledge graph can fully reflect objective phenomena and reveal potential connections among entities. In this paper, we review specific techniques of completeness assessment and classify them in terms of the closed-world assumption, the open-world assumption, and the partial-completeness assumption. The purpose of this paper is to further promote the development of knowledge graph quality control and, by reviewing and classifying completeness assessment techniques, to lay the foundation for subsequent research on the completeness assessment of knowledge graphs.
To solve the problem of long response times when users obtain suitable cutting parameters through an Internet-based platform, a case-based reasoning framework is proposed. Specifically, a combined Hamming- and Euclidean-distance method is designed to measure the similarity of case features that have both numeric and categorical properties. In addition, AHP (Analytic Hierarchy Process) and the entropy weight method are integrated to provide feature weights, taking into account both user preferences and the comprehensive impact of each index. Grey relation analysis is used to obtain the similarity between a new problem and alternative cases. Finally, a platform is developed on Visual Studio 2015, and a case study demonstrates the practicality and efficiency of the proposed method. The method obtains suitable cutting parameters without iterative calculation, and compared with traditional PSO (particle swarm optimization) and GA (genetic algorithm) approaches, it achieves a faster response. This method provides ideas for selecting processing parameters in industrial production: while guaranteeing that the characteristic information is similar, it selects the processing parameters most appropriate for the production process and saves a great deal of time.
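The combined distance idea can be sketched as follows; the feature names, weights, and normalisation convention are illustrative assumptions, not the platform's actual configuration:

```python
import math

def mixed_distance(case_a: dict, case_b: dict, numeric: list, categorical: list,
                   weights: dict) -> float:
    """Weighted distance: Euclidean over numeric features (assumed pre-normalised
    to [0, 1]), Hamming (0/1 mismatch count) over categorical features."""
    d_num = math.sqrt(sum(weights[f] * (case_a[f] - case_b[f]) ** 2 for f in numeric))
    d_cat = sum(weights[f] * (case_a[f] != case_b[f]) for f in categorical)
    return d_num + d_cat

# Illustrative cutting-parameter cases with mixed feature types.
new_case = {"hardness": 0.8, "depth": 0.3, "material": "steel", "tool": "carbide"}
old_case = {"hardness": 0.6, "depth": 0.3, "material": "steel", "tool": "HSS"}

d = mixed_distance(new_case, old_case,
                   numeric=["hardness", "depth"],
                   categorical=["material", "tool"],
                   weights={"hardness": 0.4, "depth": 0.2, "material": 0.2, "tool": 0.2})
print(round(d, 4))  # 0.3265
```

A smaller distance means a more similar historical case, so the retrieved case is the one minimising this score over the case base.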
Text classification, by automatically categorizing texts, is one of the foundational elements of natural language processing applications. This study investigates how text classification performance can be improved through the integration of entity-relation information obtained from the Wikidata database and BERT-based pre-trained Named Entity Recognition (NER) models. Focusing on a significant challenge in the field of natural language processing (NLP), the research evaluates the potential of using entity and relational information to extract deeper meaning from texts. The adopted methodology encompasses a comprehensive approach that includes text preprocessing, entity detection, and the integration of relational information. Experiments conducted on text datasets in both Turkish and English assess the performance of various classification algorithms, such as Support Vector Machine, Logistic Regression, Deep Neural Network, and Convolutional Neural Network. The results indicate that the integration of entity-relation information can significantly enhance algorithm performance in text classification tasks and offer new perspectives for information extraction and semantic analysis in NLP applications. Contributions of this work include the utilization of distantly supervised entity-relation information in Turkish text classification, the development of a Turkish relational text classification approach, and the creation of a relational database. By demonstrating potential performance improvements through the integration of distantly supervised entity-relation information into Turkish text classification, this research aims to support the effectiveness of text-based artificial intelligence (AI) tools. Additionally, it makes significant contributions to the development of multilingual text classification systems by adding deeper meaning to text content, thereby providing a valuable addition to current NLP studies and setting an important reference point for future research.
In the economic development of Beijing, although agriculture accounts for a relatively low share of the overall economy, it has an important impact on residents' daily life, social stability, and the development of other industries. Changping District, as an important agricultural production base of Beijing, has agricultural development of indispensable strategic significance for the stability and growth of the entire regional economy. It is therefore important to study the agricultural industrial structure of Changping District. Based on a detailed analysis of that structure, this paper uses grey relation theory to analyze the different industries within it, including planting, forestry, animal husbandry, fishery, and agricultural and forestry services, in order to reveal the impact of these industries on the agricultural industrial structure of Changping District. The study puts forward specific and feasible suggestions for optimizing the agricultural industrial structure of Changping District and provides a valuable reference for agricultural development in other areas of Beijing.
In the gas metal arc welding (GMAW) process, the short-circuit transition is the most typical transition observed in molten metal droplets. This paper used orthogonal tests to explore the coupled effect of welding process parameters on weld-forming quality under short-circuit transition. A 3-factor, 3-level design of nine orthogonal test groups took welding current, welding voltage, and welding speed as input parameters, with effective area ratio, humps, actual linear power density, aspect ratio, and Vickers hardness as output parameters (response targets). Range analysis and trend charts visually depict the relationship between the input parameters and each single output parameter, determining the optimal process parameters for each single output index. Grey theory was then used to transform the response targets into a single grey relational grade (GRG) for analysis; the optimal combination of weld morphology parameters was welding current 100 A, welding voltage 25 V, and welding speed 30 cm/min. Finally, validation experiments were conducted, and the results showed that the error between the grey relational grade and the predicted value was 2.74%. The effective area ratio of the response target improved significantly, validating the reliability of the orthogonal grey relational method.
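Fusing several response targets into a single grey relational grade can be sketched as below. The response values are made up for illustration, the data are assumed pre-normalised to [0, 1] with larger-is-better orientation, and the distinguishing coefficient 0.5 is the conventional choice rather than the paper's stated value:

```python
def grey_relational_grade(series: list, reference: list, rho: float = 0.5) -> list:
    """Average grey relational coefficient of each comparison series
    against the ideal reference series."""
    deltas = [[abs(r - x) for r, x in zip(reference, s)] for s in series]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    coeff = [[(d_min + rho * d_max) / (d + rho * d_max) for d in row] for row in deltas]
    return [sum(row) / len(row) for row in coeff]

# Three normalised response targets for two hypothetical parameter combinations.
ref = [1.0, 1.0, 1.0]                    # ideal series
runs = [[0.9, 0.7, 1.0], [0.6, 1.0, 0.8]]
grades = grey_relational_grade(runs, ref)
best = max(range(len(runs)), key=lambda i: grades[i])
print(grades, "best run:", best)
```

Ranking runs by their grade turns a multi-response optimisation into a single-objective comparison, which is exactly the role the GRG plays in the orthogonal analysis above.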
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of the semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance on domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module, which significantly enhances the semantic interaction between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to reevaluate challenging samples, thereby improving the model's adaptability in specific domains. Additionally, we explore the use of LLMs for data augmentation, generating domain-specific datasets to alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several aspects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
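One plausible shape for such an SLM/LLM voting step is sketched below; the confidence threshold, the number of LLM samples, and the majority-vote tie handling are all assumptions for illustration, not the paper's exact scheme:

```python
def reevaluate(slm_pred: str, slm_conf: float, llm_preds: list) -> str:
    """Keep the fine-tuned SLM's triple prediction when it is confident;
    otherwise fall back to a majority vote over several LLM samples,
    counting the SLM prediction as one additional ballot."""
    CONF_THRESHOLD = 0.9  # assumed cutoff for "easy" samples
    if slm_conf >= CONF_THRESHOLD:
        return slm_pred
    ballots = llm_preds + [slm_pred]
    return max(set(ballots), key=ballots.count)

# A hypothetical challenging sample: low SLM confidence, three LLM samples agree.
result = reevaluate("(A, works_for, B)", 0.55,
                    ["(A, member_of, B)", "(A, member_of, B)", "(A, member_of, B)"])
print(result)  # (A, member_of, B)
```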
This article broadens terminology and approaches that continue to advance time modelling within a relationalist framework. Time is modeled as a single dimension, flowing continuously through independent privileged points. Introduced as absolute point-time, abstract continuous time is a backdrop for concrete relational-based time that is finite and discrete, bound to the limits of a real-world system. We discuss how discrete signals at a point are used to temporally anchor zero-temporal points [t = 0] in linear time. Object-oriented temporal line elements, flanked by temporal point elements, have a proportional geometric identity quantifiable by a standard unit system and can be mapped on a natural number line. Durations, line elements, are divisible into ordered unit ratio elements using ancient timekeeping formulas. The divisional structure provides temporal classes for rotational (Rt24t) and orbital (Rt18) sample periods, as well as a more general temporal class (Rt12) applicable to either sample or frame periods. We introduce notation for additive cyclic counts of sample periods, including divisional units, for calendar-like formatting. For system modelling, unit structures with dihedral symmetry, group order, and numerical order are shown to be applicable to Euclidean modelling. We introduce new functions for bijective and non-bijective mapping, modular arithmetic for cyclic-based time counts, and a novel formula relating to a subgroup of Pythagorean triples, preserving dihedral n-polygon symmetries. This article presents a new approach to modelling time in a relationalist framework.
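The additive cyclic counting of sample periods lends itself naturally to modular arithmetic. This sketch, using an assumed 24-unit rotational class (the specific cycle length is illustrative), shows calendar-like formatting of a running period count:

```python
def cyclic_count(total_periods: int, units_per_cycle: int) -> tuple:
    """Split a running count of sample periods into
    (completed cycles, position within the current cycle)."""
    return divmod(total_periods, units_per_cycle)

# Assumed 24-unit rotational class: 50 sample periods = 2 full cycles + 2 units.
cycles, position = cyclic_count(50, 24)
print(f"cycle {cycles}, unit {position}")  # cycle 2, unit 2
```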
[Objective] The aim was to explore the effects of environmental factors on the content of chlorophyll a in Shahu Lake. [Method] Based on data from Shahu Lake from November 2007 to September 2008, the relationship between chlorophyll a and environmental factors such as water temperature, pH, Secchi depth (SD), total nitrogen, total phosphorus, and potassium permanganate index was studied by the grey relational analysis method. [Result] The main environmental factors affecting the content of chlorophyll a in Shahu Lake were, in order, water temperature > potassium permanganate index > total nitrogen > pH > total phosphorus > SD. [Conclusion] The research provides a reference for the control of eutrophication and the reasonable development and utilization of Shahu Lake.
Utilising dissolved gas analysis, a new insulation fault diagnosis method for power transformers is proposed, based on the group grey relational grade analysis method. First, according to the fault type and the grey reference sequence structure, typical fault samples are divided into several sets of grey reference sequences, which are structured as one grey reference sequence group. Secondly, according to a new calculation method for the grey relational coefficient, the individual relational coefficient and grade are computed. Then, according to the given calculation method for the group grey relational grade, the group grey relational grade is computed and the group grey relational grade matrix is structured. Finally, according to the relational sequence, the insulation fault of the power transformer is identified. The results of a large number of instance analyses show that the proposed method has higher diagnosis accuracy and reliability than the three-ratio method and the traditional grey relational method, with good classified diagnosis ability and reliability.
The necessity and feasibility of introducing attribute weights into a digital fingerprinting system are given, and a weighted algorithm for fingerprinting relational databases for traitor tracing is proposed. Higher weights are assigned to more significant attributes, so important attributes are fingerprinted more frequently than others. Finally, the robustness of the proposed algorithm, such as its performance against collusion attacks, is analyzed. Experimental results prove the superiority of the algorithm.
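The core idea, that higher-weight attributes are fingerprinted more frequently, can be sketched as weight-proportional attribute selection. The attribute names and weights are illustrative, and the actual embedding of mark bits into tuples is omitted:

```python
import random

def pick_attribute(weights: dict, rng: random.Random) -> str:
    """Select an attribute to mark with probability proportional to its weight."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

# Assumed significance weights for a hypothetical table.
weights = {"salary": 0.5, "age": 0.3, "zipcode": 0.2}
rng = random.Random(42)  # seeded for reproducibility
picks = [pick_attribute(weights, rng) for _ in range(1000)]
# The most significant attribute ends up marked most often.
print({n: picks.count(n) for n in weights})
```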
To solve the query processing correctness problem for semantic-based relational data integration, the semantics of SPARQL (SPARQL Protocol and RDF Query Language) queries is defined. In the course of query rewriting, all relevant tables are found and decomposed into minimal connectable units, which are then joined according to the semantic queries to produce semantically correct query plans. Algorithms for query rewriting and transforming are presented, and their computational complexity is discussed. In the worst case, the query decomposing algorithm finishes in O(n²) time and the query rewriting algorithm requires O(nm) time. The performance of the algorithms is verified by experiments, whose results show that when the length of the query is less than 8, the query processing algorithms provide satisfactory performance.
This paper focuses on exporting relational data into Extensible Markup Language (XML). First, the characteristics of relational schemas represented by E-R diagrams and of XML document type definitions (DTDs) are analyzed. Secondly, the corresponding mapping rules are proposed. Finally, an algorithm based on edge tables is presented. There are two key points in the algorithm. One is that the edge table stores the information of the relational dictionary, which makes the algorithm efficient. The other is that structural information can be obtained from the resulting DTDs, so other applications can use this structural information to optimize their query processing.
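The export direction described above, relational rows to XML, can be illustrated with a minimal standard-library sketch. The table and column names are hypothetical, and the edge-table bookkeeping and DTD generation of the actual algorithm are omitted:

```python
import xml.etree.ElementTree as ET

def rows_to_xml(table: str, rows: list) -> ET.Element:
    """Map each relational tuple to a child element; columns become subelements."""
    root = ET.Element(table)
    for row in rows:
        rec = ET.SubElement(root, "row")
        for col, val in row.items():
            ET.SubElement(rec, col).text = str(val)
    return root

rows = [{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
xml_bytes = ET.tostring(rows_to_xml("employee", rows))
print(xml_bytes.decode())
# <employee><row><id>1</id><name>Alice</name></row><row><id>2</id><name>Bob</name></row></employee>
```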
Traumatic brain injury involves complex pathophysiological mechanisms, among which oxidative stress contributes significantly to secondary injury. In this study, we evaluated hypidone hydrochloride (YL-0919), a self-developed antidepressant with selective sigma-1 receptor agonist properties, and its associated mechanisms and targets in traumatic brain injury. Behavioral experiments to assess functional deficits were followed by assessment of neuronal damage through histological analyses and examination of blood-brain barrier permeability and brain edema. Next, we investigated the antioxidative effects of YL-0919 by assessing the levels of traditional markers of oxidative stress in vivo in mice and in vitro in HT22 cells. Finally, the targeted action of YL-0919 was verified by employing a sigma-1 receptor antagonist (BD-1047). Our findings demonstrated that YL-0919 markedly improved deficits in motor function and spatial cognition on day 3 post traumatic brain injury, while also decreasing neuronal mortality and reversing blood-brain barrier disruption and brain edema. Furthermore, YL-0919 effectively combated oxidative stress both in vivo and in vitro, and its protective effects were partially inhibited by BD-1047. These results indicate that YL-0919 relieved impairments in motor and spatial cognition by restraining oxidative stress, a neuroprotective effect that was partially reversed by the sigma-1 receptor antagonist BD-1047. YL-0919 may therefore have potential as a new treatment for traumatic brain injury.
BACKGROUND Gallbladder cancer (GBC) is the most common and aggressive subtype of biliary tract cancer (BTC) and has a poor prognosis. A newly developed regimen of gemcitabine, cisplatin, and durvalumab shows promise for the treatment of advanced BTC. However, the efficacy of this treatment for GBC remains unclear. CASE SUMMARY In this report, we present a case in which the triple-drug regimen exhibited marked effectiveness in treating locally advanced GBC, leading to a long-term survival benefit. A 68-year-old man was diagnosed with locally advanced GBC, which rendered him ineligible for curative surgery. Following three cycles of therapy, a partial response was observed. After one year of combined therapy, a clinical complete response was achieved. Subsequent maintenance therapy with durvalumab monotherapy resulted in a disease-free survival of 9 months. Throughout the treatment, the patient experienced only tolerable adverse events, namely reversible grade 2 nausea and fatigue. CONCLUSION The combination of gemcitabine and cisplatin chemotherapy with durvalumab proved to be an effective treatment approach for advanced GBC, with manageable adverse events. Further research is warranted to substantiate the effectiveness of the combined regimen in GBC.
Funding: supported by the National Key Laboratory for Complex Systems Simulation Foundation (6142006190301).
Funding: supported by the National Key Laboratory for Complex Systems Simulation Foundation (6142006190301).
Funding: the Sichuan Science and Technology Program (Nos. 23ZHCG0049, 2023YFG0078, 23ZHCG0030, 2021ZDZX0007) and the SCU-SUINING Project (2022CDSN-14).
Funding: supported by Major Special Projects of Science and Technology in Fujian Province (Grant No. 2020HZ03018) and the Natural Science Foundation of Fujian Province (Grant No. 2020J01873).
Funding: Science and Technology Innovation 2030 Major Project of "New Generation Artificial Intelligence", granted by the Ministry of Science and Technology, Grant Number 2020AAA0109300.
Abstract: This article broadens the terminology and approaches that continue to advance time modelling within a relationalist framework. Time is modelled as a single dimension, flowing continuously through independent privileged points. Introduced as absolute point-time, abstract continuous time is a backdrop for concrete relation-based time that is finite and discrete, bound to the limits of a real-world system. We discuss how discrete signals at a point are used to temporally anchor zero-temporal points [t = 0] in linear time. Object-oriented temporal line elements, flanked by temporal point elements, have a proportional geometric identity quantifiable by a standard unit system and can be mapped onto a natural number line. Durations (line elements) are divisible into ordered unit-ratio elements using ancient timekeeping formulas. The divisional structure provides temporal classes for rotational (Rt24) and orbital (Rt18) sample periods, as well as a more general temporal class (Rt12) applicable to either sample or frame periods. We introduce notation for additive cyclic counts of sample periods, including divisional units, for calendar-like formatting. For system modelling, unit structures with dihedral symmetry, group order, and numerical order are shown to be applicable to Euclidean modelling. We introduce new functions for bijective and non-bijective mapping, modular arithmetic for cyclic time counts, and a novel formula relating to a subgroup of Pythagorean triples that preserves dihedral n-polygon symmetries, presenting a new approach to modelling time in a relationalist framework.
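The additive cyclic counts of sample periods mentioned above can be sketched with ordinary modular arithmetic: a running linear count of base periods is folded into nested cyclic units via repeated division. The unit sizes below (24 divisions of 60 sub-units each, loosely in the spirit of the rotational Rt24 class) are illustrative assumptions, not the article's exact definitions.

```python
# Sketch: fold a linear count of base sample periods into calendar-like
# cyclic counts using divmod. Unit sizes are illustrative assumptions.

def cyclic_count(samples, units=(24, 60, 60)):
    """Fold a linear count into (completed cycles, u1, u2, u3)."""
    counts = []
    for size in reversed(units):        # innermost (smallest) unit first
        samples, rem = divmod(samples, size)
        counts.append(rem)
    counts.append(samples)              # completed outer cycles remain
    return tuple(reversed(counts))

# 90061 base periods -> 1 full cycle, plus 1 of each nested unit
print(cyclic_count(90061))  # (1, 1, 1, 1)
```

Because divmod is exact, the mapping from linear count to cyclic count is bijective for non-negative counts, matching the article's interest in bijective mappings onto the natural number line.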
Funding: Supported by the Natural Science Foundation of Ningxia (NZ0829).
Abstract: [Objective] The aim was to explore the effects of environmental factors on the content of chlorophyll a in Shahu Lake. [Method] Based on data collected in Shahu Lake from November 2007 to September 2008, the relationship between chlorophyll a and environmental factors such as water temperature, pH, Secchi depth (SD), total nitrogen, total phosphorus, and potassium permanganate index was studied by the grey relational analysis method. [Result] The main environmental factors affecting the content of chlorophyll a in Shahu Lake were, in order, water temperature > potassium permanganate index > total nitrogen > pH > total phosphorus > SD. [Conclusion] The research provides a reference for the control of eutrophication and for the reasonable development and utilization of Shahu Lake.
Abstract: Utilising dissolved gas analysis, a new insulation fault diagnosis method for power transformers is proposed, based on the group grey relational grade analysis method. First, according to fault type and the structure of grey reference sequences, typical fault samples are divided into several sets of grey reference sequences, which together form a grey reference sequence group. Secondly, using a new calculation method for the grey relational coefficient, the individual relational coefficients and grades are computed. Then, according to the given calculation method for the group grey relational grade, the group grey relational grade is computed and the group grey relational grade matrix is constructed. Finally, according to the relational ordering, the insulation fault of the power transformer is identified. Results from a large number of case analyses show that the proposed method has higher diagnostic accuracy and reliability than the three-ratio method and the traditional grey relational method, with good classification ability and reliability.
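The group-based diagnosis described above can be sketched as follows: a test gas sample is compared against each fault type's set of reference sequences, per-sequence grades are averaged into a group grey relational grade, and the fault with the highest grade is reported. This is a generic sketch of the group GRG idea, not the paper's new coefficient formula; the reference values and fault names are made up for illustration, not real dissolved-gas signatures.

```python
# Sketch of group grey relational diagnosis. rho = 0.5 and all reference
# sequences below are illustrative assumptions.

def diagnose(test, groups, rho=0.5):
    """groups: {fault_name: [reference sequences]} -> (best fault, grades)."""
    # Deltas for every reference sequence, pooled for global min/max
    all_refs = [(fault, ref) for fault, refs in groups.items() for ref in refs]
    deltas = [[abs(t - r) for t, r in zip(test, ref)] for _, ref in all_refs]
    flat = [d for row in deltas for d in row]
    d_min, d_max = min(flat), max(flat)
    per_fault = {}
    for (fault, _), row in zip(all_refs, deltas):
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        per_fault.setdefault(fault, []).append(sum(coeffs) / len(coeffs))
    # Average per-sequence grades into one group grey relational grade
    group_grades = {f: sum(g) / len(g) for f, g in per_fault.items()}
    return max(group_grades, key=group_grades.get), group_grades

# Illustrative normalized gas-concentration references per fault type
groups = {
    "overheating": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "discharge":   [[0.1, 0.9, 0.8], [0.2, 0.8, 0.9]],
}
fault, grades = diagnose([0.85, 0.15, 0.15], groups)
# The sample is closest to the overheating reference group
```

Averaging over a whole group of reference sequences, rather than matching one prototype, is what gives the group method its robustness to variation within a fault class.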
Abstract: The necessity and feasibility of introducing attribute weights into a digital fingerprinting system are discussed, and a weighted fingerprinting algorithm for tracing traitors of relational databases is proposed. Higher weights are assigned to more significant attributes, so important attributes are fingerprinted more frequently than others. Finally, the robustness of the proposed algorithm, such as its performance against collusion attacks, is analyzed. Experimental results prove the superiority of the algorithm.
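The weight-driven attribute selection step can be sketched as below: for each tuple, a keyed hash of its primary key pseudo-randomly picks the attribute to carry a fingerprint bit, with probability proportional to the attribute's weight, so significant attributes are marked more often while the choice stays repeatable for detection. The attribute names, weights, and secret key are illustrative assumptions, not the paper's scheme.

```python
# Sketch of weighted attribute selection for relational-database
# fingerprinting. All names, weights, and the key are assumptions.
import hashlib

def pick_attribute(primary_key, attrs, weights, secret_key="K"):
    # Keyed hash of the primary key: deterministic but unpredictable
    digest = hashlib.sha256(f"{secret_key}:{primary_key}".encode()).digest()
    r = int.from_bytes(digest[:8], "big") % sum(weights)
    # Walk cumulative weights to pick an attribute proportionally
    acc = 0
    for attr, w in zip(attrs, weights):
        acc += w
        if r < acc:
            return attr

attrs = ["salary", "age", "zipcode"]
weights = [5, 3, 1]   # salary is most significant -> marked most often
counts = {a: 0 for a in attrs}
for pk in range(9000):
    counts[pick_attribute(pk, attrs, weights)] += 1
# Observed selection frequencies track the 5:3:1 weights, and the choice
# is deterministic per key, so detection can recompute it exactly.
```

Determinism is the key design point: at detection time the owner re-derives, from the secret key and each tuple's primary key alone, exactly which attribute was marked.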
Funding: Weaponry Equipment Pre-Research Foundation of the PLA Equipment Ministry (No. 9140A06050409JB8102) and the Pre-Research Foundation of PLA University of Science and Technology (No. 2009JSJ11).
Abstract: To solve the query processing correctness problem for semantic-based relational data integration, the semantics of SPARQL (SPARQL Protocol and RDF Query Language) queries is defined. In the course of query rewriting, all relevant tables are found and decomposed into minimal connectable units, which are then joined according to the semantic queries to produce semantically correct query plans. Algorithms for query rewriting and transforming are presented, and their computational complexity is discussed: in the worst case, the query decomposing algorithm finishes in O(n^2) time and the query rewriting algorithm requires O(nm) time. The performance of the algorithms is verified by experiments, whose results show that when the length of a query is less than 8, the query processing algorithms provide satisfactory performance.
Abstract: This paper focuses on exporting relational data into Extensible Markup Language (XML). First, the characteristics of relational schemas represented by E-R diagrams and of XML document type definitions (DTDs) are analyzed. Secondly, the corresponding mapping rules are proposed. Finally, an algorithm based on edge tables is presented. There are two key points in the algorithm. One is that the edge table stores the information of the relational dictionary, which accounts for the algorithm's efficiency. The other is that structural information can be obtained from the resulting DTDs, so other applications can use it to optimize their query processing.
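The edge-table mapping described above can be sketched as follows: each (parent, child) foreign-key edge becomes element nesting in the DTD, and each column becomes a #PCDATA child element. The table and column names are illustrative assumptions; the sketch shows the shape of the mapping, not the paper's exact algorithm.

```python
# Sketch: derive a DTD from a relational schema, where an edge table of
# foreign-key links (parent, child) defines element nesting.

def schema_to_dtd(tables, edges, root):
    """tables: {name: [columns]}; edges: [(parent, child)] FK links."""
    children = {}
    for parent, child in edges:
        children.setdefault(parent, []).append(child)

    lines = []
    def emit(table):
        cols = tables[table]
        kids = children.get(table, [])
        # Columns first, then zero-or-more nested child tables
        content = ", ".join(cols + [f"{k}*" for k in kids])
        lines.append(f"<!ELEMENT {table} ({content})>")
        for col in cols:
            lines.append(f"<!ELEMENT {col} (#PCDATA)>")
        for k in kids:
            emit(k)
    emit(root)
    return "\n".join(lines)

# Illustrative schema: each dept nests its emp rows
tables = {"dept": ["dname"], "emp": ["ename", "salary"]}
edges = [("dept", "emp")]   # emp's FK to dept becomes nesting
dtd = schema_to_dtd(tables, edges, "dept")
print(dtd)
```

The generated `<!ELEMENT dept (dname, emp*)>` line is exactly the kind of structural information the abstract says downstream applications can exploit for query optimization.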
Funding: Supported by the National Natural Science Foundation of China, Nos. 82204360 (to HM) and 82270411 (to GW), and the National Science and Technology Innovation 2030 Major Program, No. 2021ZD0200900 (to YL).
Abstract: Traumatic brain injury involves complex pathophysiological mechanisms, among which oxidative stress contributes significantly to secondary injury. In this study, we evaluated hypidone hydrochloride (YL-0919), a self-developed antidepressant with selective sigma-1 receptor agonist properties, and its associated mechanisms and targets in traumatic brain injury. Behavioral experiments to assess functional deficits were followed by histological analyses of neuronal damage and examination of blood-brain barrier permeability and brain edema. Next, we investigated the antioxidative effects of YL-0919 by assessing traditional markers of oxidative stress in vivo in mice and in vitro in HT22 cells. Finally, the targeted action of YL-0919 was verified with a sigma-1 receptor antagonist (BD-1047). Our findings demonstrated that YL-0919 markedly improved deficits in motor function and spatial cognition on day 3 post traumatic brain injury, while also decreasing neuronal mortality and reversing blood-brain barrier disruption and brain edema. Furthermore, YL-0919 effectively combated oxidative stress both in vivo and in vitro, and these protective effects were partially inhibited by BD-1047. These results indicate that YL-0919 relieved impairments in motor and spatial cognition by restraining oxidative stress, a neuroprotective effect partially reversed by the sigma-1 receptor antagonist BD-1047. YL-0919 may therefore have potential as a new treatment for traumatic brain injury.
Funding: Supported by the General Project of the Natural Science Foundation of Chongqing, China, No. cstc2021jcyj-msxmX0604, and the Chongqing Doctoral "Through Train" Research Program, China, No. CSTB2022BSXM-JCX0045.
Abstract: BACKGROUND Gallbladder cancer (GBC) is the most common and aggressive subtype of biliary tract cancer (BTC) and has a poor prognosis. A newly developed regimen of gemcitabine, cisplatin, and durvalumab shows promise for the treatment of advanced BTC; however, its efficacy for GBC remains unclear. CASE SUMMARY In this report, we present a case in which the triple-drug regimen was markedly effective in treating locally advanced GBC, leading to a long-term survival benefit. A 68-year-old man was diagnosed with locally advanced GBC, which rendered him ineligible for curative surgery. Following three cycles of therapy, a partial response was observed, and after one year of combined therapy a clinical complete response was achieved. Subsequent maintenance therapy with durvalumab monotherapy resulted in a disease-free survival of 9 months. Throughout treatment, the patient experienced only tolerable toxicities of reversible grade 2 nausea and fatigue. CONCLUSION The combination of gemcitabine and cisplatin chemotherapy with durvalumab proved to be an effective treatment approach for advanced GBC, with manageable adverse events. Further research is warranted to substantiate the effectiveness of this combined regimen in GBC.