Remote sensing data plays an important role in natural disaster management. However, with the increase in the variety and quantity of remote sensors, the problem of “knowledge barriers” arises when data users in the disaster field retrieve remote sensing data. To address this problem, this paper proposes an ontology- and rule-based retrieval (ORR) method for disaster remote sensing data. The method introduces ontology technology to express earthquake disaster and remote sensing knowledge and, on this basis, realizes task-suitability reasoning over earthquake disaster remote sensing data, mining the semantic relationships between remote sensing metadata and disasters. A prototype system built according to the ORR method is compared with the traditional method; retrieving disaster remote sensing data with the ORR method reduces the knowledge requirements placed on data users during retrieval and improves data retrieval efficiency.
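The suitability reasoning described above can be illustrated with a minimal sketch: task-specific rules are evaluated against sensor metadata to select suitable data sources. The metadata fields, thresholds, sensor names, and task names below are hypothetical illustrations, not the paper's actual ontology or SWRL rules.

```python
# Toy rule-based suitability reasoning over remote sensing metadata,
# in the spirit of the ORR method. All names and thresholds are invented.
SENSORS = [
    {"name": "SensorA", "resolution_m": 0.5, "bands": {"optical"}, "revisit_days": 5},
    {"name": "SensorB", "resolution_m": 10.0, "bands": {"sar"}, "revisit_days": 1},
]

# Each rule links a disaster task to metadata constraints, standing in for
# ontology rules over earthquake-disaster and remote-sensing knowledge.
RULES = {
    "building_damage_assessment":
        lambda s: s["resolution_m"] <= 1.0 and "optical" in s["bands"],
    "rapid_cloud_penetrating_survey":
        lambda s: "sar" in s["bands"] and s["revisit_days"] <= 2,
}

def suitable_sensors(task: str) -> list[str]:
    """Return names of sensors whose metadata satisfies the task's rule."""
    rule = RULES[task]
    return [s["name"] for s in SENSORS if rule(s)]

print(suitable_sensors("building_damage_assessment"))  # ['SensorA']
```

The point of the sketch is that the data user only names the task; the rules encode the sensor knowledge that would otherwise form the “knowledge barrier”.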
In the process of constructing domain-specific knowledge graphs, the task of relational triple extraction plays a critical role in transforming unstructured text into structured information. Existing relational triple extraction models face multiple challenges when processing domain-specific data, including insufficient utilization of the semantic interaction information between entities and relations, difficulties in handling challenging samples, and the scarcity of domain-specific datasets. To address these issues, our study introduces three innovative components: relation semantic enhancement, data augmentation, and a voting strategy, all designed to significantly improve the model's performance on domain-specific relational triple extraction tasks. We first propose an innovative attention interaction module, which significantly enhances the semantic interaction between entities and relations by integrating semantic information from relation labels. Second, we propose a voting strategy that effectively combines the strengths of large language models (LLMs) and fine-tuned small pre-trained language models (SLMs) to re-evaluate challenging samples, thereby improving the model's adaptability to specific domains. Additionally, we explore the use of LLMs for data augmentation, aiming to generate domain-specific datasets that alleviate the scarcity of domain data. Experiments conducted on three domain-specific datasets demonstrate that our model outperforms existing comparative models in several respects, with F1 scores exceeding the state-of-the-art models by 2%, 1.6%, and 0.6%, respectively, validating the effectiveness and generalizability of our approach.
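The voting idea above can be sketched in a few lines: the fine-tuned SLM handles confident predictions, while low-confidence ("challenging") samples are re-scored by a vote among LLM samples. Both models are stubbed with fixed outputs here, and the confidence threshold and interfaces are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of the SLM/LLM voting strategy for challenging samples.
# The model outputs and the 0.6 threshold are invented for illustration.
def slm_predict(sample):
    # Stub for the fine-tuned small model: (predicted_relation, confidence).
    return {"s1": ("founded_by", 0.95), "s2": ("located_in", 0.40)}[sample]

def llm_votes(sample):
    # Stub for several sampled LLM answers on the same input.
    return {"s2": ["part_of", "part_of", "part_of"]}[sample]

def predict(sample, threshold=0.6):
    label, conf = slm_predict(sample)
    if conf >= threshold:
        return label                      # trust the confident SLM directly
    votes = llm_votes(sample) + [label]   # the SLM gets one vote too
    return max(set(votes), key=votes.count)

print(predict("s1"), predict("s2"))  # founded_by part_of
```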
With the increasing number of remote sensing satellites, the diversification of observation modalities, and the continuous advancement of artificial intelligence algorithms, historic opportunities have arisen for Earth observation and information retrieval applications, including climate change monitoring, natural resource investigation, ecological environment protection, and territorial space planning. Over the past decade, artificial intelligence technology, represented by deep learning, has made significant contributions to the field of Earth observation. This review therefore focuses on the bottlenecks and development process of deep learning methods for land use/land cover mapping of the Earth's surface. First, it introduces the basic framework of semantic segmentation network models for land use/land cover mapping. Next, we summarize the development of semantic segmentation models in the geographical field, focusing on spatial and semantic feature extraction, context relationship perception, multi-scale effects modelling, and the transferability of models across geographical differences. We then review the application of semantic segmentation models in agricultural management, building boundary extraction, single-tree segmentation, and inter-species classification. Finally, we discuss the future development prospects of deep learning technology in the context of remote sensing big data.
One major task in improving China's international communication capabilities is to tell China's stories well and make China's voice heard, so as to communicate a comprehensive, multi-dimensional view of China in the international arena. International Chinese language education bears the dual responsibility of spreading the Chinese language and promoting Chinese culture. How to shape global narratives for telling China's stories and improve China's international communication capabilities has become an important issue for international Chinese language education. Chinese idioms, which are hidden treasures of the Chinese language, can play a unique role in this regard. It is therefore necessary to review the content and methods of including idioms in international Chinese language education. In this paper, instruction in idioms is approached through a corpus-based comparative analysis from the theoretical perspectives of equivalence, cognitive metaphor, and second language acquisition. Qualitative and quantitative methods are combined to analyze idioms to be added to the vocabulary of international Chinese language education along three dimensions: word frequency, semantic transparency, and functional equivalence. This paper aims to explore a new approach to the dissemination of Chinese culture through instruction in idioms in international Chinese language education.
This work continues previous related work based on an experiment to improve the intelligence of robotic systems, with the aim of achieving richer linguistic communication between humans and robots. In this paper, the authors attempt an algorithmic approach to natural language generation through hole semantics, applying the OMAS-III computational model as a grammatical formalism. In the original work, a technical language was used; in the later works, this was replaced by a limited Greek natural language dictionary. This particular effort was made to give the evolving system the ability to ask questions, and the authors also developed an initial dialogue system using these techniques. The results show that the techniques the authors apply can yield a more sophisticated dialogue system in the future.
The promotion of the national common language and writing system stands as a cornerstone in cultivating a sense of community among the Chinese nation. The national common language and writing system, serving as a potent vehicle for cultural dissemination and linguistic communication, acts as a crucial bridge fostering unity, mutual support, cultural exchange, and cohesion among diverse ethnic groups. Its widespread promotion not only reinforces the cohesion of the Chinese nation but also plays a pivotal role in fortifying national and cultural identity. The advocacy and dissemination of the national common language and writing system contribute significantly to enhancing the sense of community within the Chinese nation. This concerted effort serves as an internal driving force, creating a positive feedback loop that strengthens both national identity and the cultural bonds shared by the Chinese people. The reciprocal relationship between the promotion and popularization of the national common language and writing system reinforces a mutual sense of assistance and reciprocity. This bidirectional dynamic propels a synergistic interplay, fostering a beneficial cycle in the promotion and widespread adoption of the national common language and writing system. In essence, the promotion and popularization of the national common language and writing system not only contribute to its advancement but also serve to deepen the sense of community within the Chinese nation. This reciprocal interaction between the two elements establishes a robust foundation for nurturing a strong and cohesive Chinese community.
Monitoring of multiple physical parameters in a moderately seismic area of Western Piedmont (NW Italy), together with simultaneous observation of the behaviour of numerous species of domestic and wild animals over a period of more than twenty years, made it possible to distinguish unusual animal behaviours due to local earthquake nucleation from those with other causes. In particular, observation of the body and vocal language of dogs (Canis familiaris) in the same area has permitted not only the specification of the different meanings of vocal language in connection with body language, but also the classification of the minimum elements of a vocal language that is linked together by tonal and rhythmical sequences of sounds forming a semantic lexicon. The usage of the same tonal and rhythmical vocal sequences in similar or identical situations, experienced by different groups of dogs, induces us to verify whether it could be possible to link particular vocal sequences to precise physical anomalies before earthquakes. The individuation of physical anomalies due to earthquake nucleation or hydro-geological destabilization is possible thanks to continuous long-term monitoring of some parameters. Moreover, the complexity of the vocal language of dogs increases if the dogs live in an area with a low population density. The correlation between some vocal sequences and some seismic precursors is therefore better if dogs live free in yards or on farms, if they are in good health, and if they can establish strong social relations within a group.
When dogs live enclosed in the yards of houses that are far apart, they communicate with each other using a remarkable vocal language, full of questions and answers, imitations of sequences, and information about situations that may be harmful to them.
The concept of language sense has never failed to arouse interest among scholars at home and abroad in recent decades. Many scholars point out that language sense is an important competence that helps facilitate learning a language; it bears a close connection with learners' acquisition of a language. Another concept, implicit learning, which has proved effective and has been applied in second language acquisition (SLA), is consistent with language sense in terms of its learning mechanism. In this sense, the cultivation of English language sense can be theoretically supported by implicit learning, and pedagogical implications can be derived accordingly.
This research reports a study of language decay from the perspective of semantics: explicitly, from the perspectives of the relationship between the signifier and the signified, Saussure's most important theory; the relationship between denotation and connotation; conversational implicature; and the relationship between language and culture. Through this, readers can gain a better understanding of the nature of language decay. Finally, the writer briefly discusses some negative consequences of language decay.
Winds stampeding the fields (Ted Hughes)
(1) He sang his didn't he danced his did (E. E. Cummings)
(2) Now as I was young and easy under the apple boughs / About the lilting house and happy as the grass was green (Dylan Thomas)
(3) That spanieled me heels, to whom I gave their wishes. (William Shakespeare, Anthony
Semantic segmentation of remote sensing images is one of the core tasks of remote sensing image interpretation. With the continuous development of artificial intelligence technology, the use of deep learning methods for interpreting remote sensing images has matured. However, existing neural networks disregard the spatial relationships between targets in remote sensing images, and semantic segmentation models that combine convolutional neural networks (CNNs) and graph convolutional neural networks (GCNs) suffer from a lack of feature boundary information, which leads to unsatisfactory segmentation of target feature boundaries. In this paper, we propose a new semantic segmentation model for remote sensing images (called DGCN hereinafter), which combines deep semantic segmentation networks (DSSN) and GCNs. In the GCN module, a loss function for boundary information is employed to optimize the learning of spatial relationship features between target features. A hierarchical fusion method is utilized for feature fusion and classification to optimize the spatial relationship information in the original feature information. Extensive experiments on the ISPRS 2D and DeepGlobe semantic segmentation datasets show that, compared with existing semantic segmentation models for remote sensing images, DGCN significantly improves the segmentation of feature boundaries, effectively reduces noise in the segmentation results, and improves segmentation accuracy, demonstrating the advancement of our model.
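The kind of boundary-information loss term mentioned above can be illustrated with a toy example: segmentation error is up-weighted on pixels that lie on label boundaries. The tiny masks, 0/1 error, and weighting factor are assumptions for illustration, not the DGCN loss.

```python
# Toy boundary-weighted segmentation error on small 2D label masks.
def is_boundary(mask, i, j):
    """A pixel is a boundary pixel if any 4-neighbour has a different label."""
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < len(mask) and 0 <= nj < len(mask[0]) and mask[ni][nj] != mask[i][j]:
            return True
    return False

def weighted_error(pred, target, boundary_weight=3.0):
    """Mean 0/1 error, with boundary pixels up-weighted."""
    total = weight_sum = 0.0
    for i in range(len(target)):
        for j in range(len(target[0])):
            w = boundary_weight if is_boundary(target, i, j) else 1.0
            total += w * (pred[i][j] != target[i][j])
            weight_sum += w
    return total / weight_sum

target = [[0, 0, 1], [0, 0, 1], [0, 0, 1]]
pred   = [[0, 0, 1], [0, 1, 1], [0, 0, 0]]
print(round(weighted_error(pred, target), 3))  # 0.286
```

Both mistakes in `pred` sit on the class boundary, so they cost three times as much as interior mistakes would, which is the intuition behind optimizing boundary quality directly.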
One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) over the last two decades has been the development of techniques for text representation that solve the so-called curse of dimensionality, a problem which plagues NLP in general given that the feature set for learning starts as a function of the size of the language in question, typically upwards of hundreds of thousands of terms. As such, much of the research and development in NLP over this period has gone into finding and optimizing solutions to this problem, effectively into feature selection for NLP. This paper looks at the development of these various techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, namely the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey paper we examine the development of some of the most popular of these techniques from a mathematical as well as a data-structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants, typically referred to as word embeddings. In this review of algorithms such as Word2Vec, GloVe, ELMo, and BERT, we explore the idea of semantic spaces more generally, beyond their applicability to NLP.
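The distributional hypothesis the survey rests on can be demonstrated in miniature: represent each word by its co-occurrence counts within a context window and compare words by cosine similarity. The one-sentence corpus below is a made-up example, far from a real embedding pipeline.

```python
# Tiny demonstration of the distributional hypothesis:
# words in similar contexts get similar co-occurrence vectors.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the dog sat on the rug".split()

def cooccurrence_vectors(tokens, window=2):
    """Map each word to a Counter of words seen within +/- `window` positions."""
    vecs = defaultdict(Counter)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                vecs[w][tokens[j]] += 1
    return vecs

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in near-identical contexts, so they end up closer
# to each other than to a function word like "on".
print(cosine(vecs["cat"], vecs["dog"]) > cosine(vecs["cat"], vecs["on"]))  # True
```

Methods like LSA, Word2Vec, and GloVe can be read as increasingly sophisticated compressions of exactly this kind of co-occurrence information.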
In recent years, China has been deepening reform and opening up to the outside world, especially after its successful accession to the WTO. In this context, legal communication plays an important part in the intercourse between different countries, and the basic features of legal language can be observed in specific situations of legal communication. In this thesis, the lexical, semantic, grammatical, and structural features of legal language are introduced.
Process inference cannot be achieved effectively by traditional expert systems, while ontology and semantic technology can provide a better solution for the knowledge acquisition and intelligent inference of an expert system. The application of ontology and semantic technology to process parameter recommendation is investigated here. First, ontology, Semantic Web Rule Language (SWRL) rules, and the relevant inference engine are introduced. Then, an inference method for processes based on ontology technology and SWRL rules is proposed. The construction method for the process ontology base and the writing criteria for SWRL rules are described, and finally the inference results are presented. The proposed mode offers a reference for the construction of process knowledge bases as well as reusable process rule libraries for expert systems.
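The rule-based inference described above can be sketched as a tiny forward-chaining step: facts are (subject, predicate, object) triples, and one SWRL-style rule infers a recommended process parameter. The property names, hardness threshold, and recommended value are hypothetical, not the paper's actual ontology.

```python
# Minimal forward-chaining sketch of a SWRL-style process rule.
facts = {
    ("part1", "hasMaterial", "steel45"),
    ("steel45", "hasHardness", 62),
}

def get(subject, predicate):
    """Look up the object of the first matching triple."""
    return next(o for s, p, o in facts if s == subject and p == predicate)

def apply_rules():
    """Rule (SWRL-like): hasMaterial(?p,?m) ^ hasHardness(?m,?h) ^
    greaterThan(?h, 60) -> recommendedSpindleSpeed(?p, 'low')."""
    new = set()
    for s, p, o in facts:
        if p == "hasMaterial" and get(o, "hasHardness") > 60:
            new.add((s, "recommendedSpindleSpeed", "low"))
    return new

print(apply_rules())  # {('part1', 'recommendedSpindleSpeed', 'low')}
```

In a real system the rule body would live in SWRL and be evaluated by an inference engine over the process ontology rather than by hand-written Python.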
Purpose: This work aims to normalize the NLPCONTRIBUTIONS scheme (henceforward, NLPCONTRIBUTIONGRAPH) to structure, directly from article sentences, the contributions information in Natural Language Processing (NLP) scholarly articles via a two-stage annotation methodology: 1) a pilot stage to define the scheme (described in prior work); and 2) an adjudication stage to normalize the graphing model (the focus of this paper).
Design/methodology/approach: We re-annotate, a second time, the contributions-pertinent information across 50 previously annotated NLP scholarly articles in terms of a data pipeline comprising contribution-centered sentences, phrases, and triple statements. Specifically, care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme.
Findings: The application of NLPCONTRIBUTIONGRAPH to the 50 articles resulted in a dataset of 900 contribution-focused sentences, 4,702 contribution-information-centered phrases, and 2,980 surface-structured triples. The intra-annotation agreement between the first and second stages, in terms of F1 score, was 67.92% for sentences, 41.82% for phrases, and 22.31% for triple statements, indicating that the annotation decision variance grows with the granularity of the information.
Research limitations: NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM (Science, Technology, Engineering, and Medicine) scholarly knowledge at large. Further, the annotation scheme in this work is designed by only an intra-annotator consensus: a single annotator first annotated the data to propose the initial scheme, after which the same annotator re-annotated the data to normalize the annotations in an adjudication stage. However, the expected goal of this work is to achieve a standardized retrospective model for capturing NLP contributions from scholarly articles. This would entail a larger initiative enlisting multiple annotators to accommodate different worldviews into a "single" set of structures and relationships as the final scheme. Given that this is the first proposal of the scheme, and given the complexity of the annotation task within a realistic timeframe, our intra-annotation procedure is well suited; nevertheless, the model proposed in this work is presently limited since it does not incorporate multiple annotator worldviews. Producing a robust model is planned as future work.
Practical implications: We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph (ORKG), a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge, as a viable aid to assist researchers in their day-to-day tasks.
Originality/value: NLPCONTRIBUTIONGRAPH is a novel scheme to annotate research contributions from NLP articles and integrate them into a knowledge graph, which to the best of our knowledge does not exist in the community. Furthermore, our quantitative evaluations of the two-stage annotation tasks offer insights into task difficulty.
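The intra-annotation agreement numbers above are F1 scores between the two annotation stages; the computation can be sketched directly, with one stage treated as the reference. The example annotation sets are invented for illustration.

```python
# F1 agreement between two annotation sets (stage 2 treated as reference).
def agreement_f1(stage1: set, stage2: set) -> float:
    overlap = len(stage1 & stage2)
    if not overlap:
        return 0.0
    precision = overlap / len(stage1)   # fraction of stage-1 items confirmed
    recall = overlap / len(stage2)      # fraction of stage-2 items recovered
    return 2 * precision * recall / (precision + recall)

pilot = {"sent-1", "sent-2", "sent-3", "sent-4"}
adjudicated = {"sent-2", "sent-3", "sent-4", "sent-5", "sent-6"}
print(round(agreement_f1(pilot, adjudicated), 4))  # 0.6667
```

Applying the same measure at sentence, phrase, and triple granularity is what yields the decreasing 67.92% / 41.82% / 22.31% figures: finer-grained units leave more room for the two stages to disagree.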
Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research endeavors utilize weakly supervised methods; these approaches aim to reduce annotation expenses by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network has a three-branch architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the long-standing flaw of pseudo-labels that are not updated as the network trains, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution operates efficiently by simultaneously considering both edge-mask aid and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
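The dynamic pseudo-label idea can be illustrated in miniature: instead of freezing pseudo-labels, each training round blends the current prediction into the stored pseudo-label, mixup-style. The blending coefficient and the toy per-pixel "predictions" below are illustrative assumptions, not the WSSE-net implementation.

```python
# Toy mixup-style pseudo-label refresh: lam * old + (1 - lam) * new.
def update_pseudo_labels(pseudo, prediction, lam=0.7):
    """Blend the latest predictions into the stored soft pseudo-labels."""
    return [lam * p + (1 - lam) * q for p, q in zip(pseudo, prediction)]

pseudo = [1.0, 0.0, 0.5]  # current soft pseudo-labels for three pixels
for prediction in ([0.9, 0.2, 0.8], [0.8, 0.1, 0.9]):  # two training rounds
    pseudo = update_pseudo_labels(pseudo, prediction)

print([round(p, 3) for p in pseudo])  # [0.919, 0.072, 0.683]
```

The pseudo-labels drift toward what the improving network currently predicts instead of staying fixed at their possibly noisy initial values.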
The detection of crack defects on the walls of road tunnels is a crucial step in ensuring travel safety and performing routine tunnel maintenance. The automatic and accurate detection of cracks on the surface of road tunnels is the key to improving their maintenance efficiency. Machine vision technology combined with a deep neural network model is an effective means of localizing and identifying crack defects on the surface of road tunnels. We propose a complete set of automatic inspection methods for identifying cracks on the walls of road tunnels as a solution to the difficulty of identifying cracks during manual maintenance. First, a set of equipment for the real-time acquisition of high-definition images of walls in road tunnels is designed. Images of walls in road tunnels are acquired with this equipment, and images containing crack defects are manually identified and selected. Subsequently, the training and validation sets used to construct the crack inspection model are obtained from the acquired images, with the regions containing cracks and the pixels of the cracks finely labeled. After that, a crack area sensing module is designed based on the proposed you-only-look-once version 7 model combined with a coordinate attention mechanism (CA-YOLO V7) to locate the crack regions in road tunnel surface images. Only subimages containing cracks are acquired and sent to the multiscale semantic segmentation module, which extracts the pixels belonging to the cracks based on the DeepLab V3+ network. The precision and recall of crack region localization on the surface of a road tunnel with our proposed method are 82.4% and 93.8%, respectively.
Moreover, the mean intersection over union (MIoU) and pixel accuracy (PA) values for pixel-level detection accuracy are 76.84% and 78.29%, respectively. The experimental results on the dataset show that our proposed two-stage detection method outperforms other state-of-the-art models in crack region localization and detection. Based on our proposed method, images captured on the surface of a road tunnel can undergo crack detection at a speed of ten frames per second, and the detection accuracy can reach 0.25 mm, which meets the maintenance requirements of an actual project. The designed CA-YOLO V7 network enables precise localization of the area to which a crack belongs in images acquired under different environmental and lighting conditions in road tunnels. The improved, lightweight DeepLab V3+ network is able to extract crack morphology in a given region more quickly while maintaining segmentation accuracy. The established model combines defect localization and segmentation models for the first time, realizing pixel-level defect localization and extraction on the surface of road tunnels in complex environments, and is capable of determining the actual size of cracks based on the physical coordinate system after camera calibration. The trained model has high accuracy and can be extended and applied to embedded computing devices for the assessment and repair of damaged areas in different types of road tunnels.
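The two pixel-level metrics reported above, MIoU and PA, can be sketched on toy flattened label lists rather than real segmentation masks; the labels below are invented.

```python
# MIoU and pixel accuracy on flat per-pixel class labels.
def pixel_accuracy(pred, target):
    """Fraction of pixels whose predicted class matches the reference."""
    return sum(p == t for p, t in zip(pred, target)) / len(target)

def mean_iou(pred, target, num_classes=2):
    """Average, over classes, of intersection-over-union for that class."""
    ious = []
    for c in range(num_classes):
        inter = sum(p == c and t == c for p, t in zip(pred, target))
        union = sum(p == c or t == c for p, t in zip(pred, target))
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)

target = [0, 0, 1, 1, 1, 0]
pred   = [0, 1, 1, 1, 0, 0]
print(round(pixel_accuracy(pred, target), 3), mean_iou(pred, target))  # 0.667 0.5
```

MIoU is the stricter of the two because every false positive and false negative enlarges the union for its class, which is why the paper reports both.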
In this paper, we investigate the predicate transformer semantics of the contract language introduced by Back and von Wright in their book “Refinement Calculus: A Systematic Introduction” (Springer-Verlag, New York, 1998) in the framework of fuzziness. To define the fuzzy operations, i.e., fuzzy logic connectives, we take into account the implicator → and its associated t-norm, based on residuated lattice theory. On top of these basic fuzzy operations, we introduce the angelic and demonic updates of fuzzy relations. They are the basis of fuzzy predicate transformers, in the sense that any strongly monotone fuzzy predicate transformer can be represented as the sequential composition of an angelic and a demonic update. Together with the standard strong negation, we set up the duality between the angel and the demon. The fuzzy predicate transformer semantics of contract statements is established, and a simple example of contract statements is given.
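The angelic and demonic updates can be made concrete on a tiny finite state space, using the Gödel t-norm (min) and its residual implicator as one instance of the residuated-lattice operations; the states and membership degrees are invented, and the paper works abstractly over any residuated lattice.

```python
# Angelic and demonic updates of a fuzzy relation over two states,
# instantiated with the Godel t-norm and its residual implicator.
STATES = ["s0", "s1"]
R = {("s0", "s0"): 0.2, ("s0", "s1"): 0.9,
     ("s1", "s0"): 0.6, ("s1", "s1"): 0.4}  # fuzzy relation degrees

def implies(a, b):
    """Godel residual implicator: a -> b = 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def angelic(q):
    """<R>(q)(x) = sup_y min(R(x,y), q(y)): some successor may satisfy q."""
    return {x: max(min(R[(x, y)], q[y]) for y in STATES) for x in STATES}

def demonic(q):
    """[R](q)(x) = inf_y (R(x,y) -> q(y)): every successor must satisfy q."""
    return {x: min(implies(R[(x, y)], q[y]) for y in STATES) for x in STATES}

q = {"s0": 0.3, "s1": 0.8}  # a fuzzy predicate over states
print(angelic(q), demonic(q))
```

The angel maximizes the degree to which the postcondition can be met, while the demon guarantees the degree met under any resolution of the relation, mirroring the classical angelic/demonic nondeterminism of the refinement calculus.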
With the rapid development of artificial intelligence, large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation. These models have great potential to enhance database query systems, enabling more intuitive and semantic query mechanisms. Our model leverages the LLM's deep learning architecture to interpret and process natural language queries and translate them into accurate database queries. The system integrates an LLM-powered semantic parser that translates user input into structured queries that can be understood by the database management system. First, the user query is pre-processed: the text is normalized and ambiguity is removed. This is followed by semantic parsing, where the LLM interprets the pre-processed text and identifies key entities and relationships. Next comes query generation, which converts the parsed information into a structured query format tailored to the target database schema. Finally, there is query execution and feedback, where the resulting query is executed on the database and the results are returned to the user. The system also provides feedback mechanisms to improve and optimize future query interpretations. By using advanced LLMs for model implementation and fine-tuning on diverse datasets, the experimental results show that the proposed method significantly improves the accuracy and usability of database queries, making data retrieval easy for users without specialized knowledge.
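The four pipeline stages described above (pre-processing, semantic parsing, query generation, execution) can be sketched end to end, with the LLM parser replaced by a trivial keyword matcher so the example runs standalone. The table and column names are hypothetical, and a real system would call an LLM in `parse`.

```python
# Schematic natural-language-to-SQL pipeline with a stubbed "parser".
import sqlite3

def preprocess(text):
    """Normalize the user query: lowercase, trim, drop the question mark."""
    return text.lower().strip().rstrip("?")

def parse(text):
    """Stand-in for the LLM semantic parser: extract entity and filter value."""
    return {"table": "employees", "city": text.split()[-1]}

def generate_sql(parsed):
    """Convert parsed information into a parameterized SQL query."""
    return f"SELECT name FROM {parsed['table']} WHERE city = ?", (parsed["city"],)

def execute(sql, params):
    """Run the query against a toy in-memory database."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE employees (name TEXT, city TEXT)")
    db.executemany("INSERT INTO employees VALUES (?, ?)",
                   [("Ana", "berlin"), ("Bo", "paris")])
    return [row[0] for row in db.execute(sql, params)]

query = "Which employees work in Berlin?"
sql, params = generate_sql(parse(preprocess(query)))
print(execute(sql, params))  # ['Ana']
```

Parameterized queries keep the generated SQL safe even when the filter value comes from untrusted natural-language input, which matters once an LLM is producing the structured query.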
Funding: Supported by the National Key Research and Development Program of China (2020YFC1512304).
Funding: Science and Technology Innovation 2030 “New Generation Artificial Intelligence” Major Project, granted by the Ministry of Science and Technology, Grant Number 2020AAA0109300.
Funding: National Natural Science Foundation of China (Nos. 42371406, 42071441, 42222106, 61976234).
Abstract: With the increasing number of remote sensing satellites, the diversification of observation modalities, and the continuous advancement of artificial intelligence algorithms, historic opportunities have arisen for Earth observation and information retrieval applications, including climate change monitoring, natural resource investigation, ecological environment protection, and territorial space planning. Over the past decade, artificial intelligence technology represented by deep learning has made significant contributions to the field of Earth observation. This review therefore focuses on the bottlenecks and development of deep learning methods for land use/land cover mapping of the Earth's surface. First, it introduces the basic framework of semantic segmentation network models for land use/land cover mapping. It then summarizes the development of semantic segmentation models in the geographical field, focusing on spatial and semantic feature extraction, context relationship perception, multi-scale effect modelling, and the transferability of models across geographical differences. Applications of semantic segmentation models in agricultural management, building boundary extraction, single-tree segmentation, and inter-species classification are then reviewed. Finally, we discuss the future development prospects of deep learning technology in the context of remote sensing big data.
Abstract: One major task in improving China's international communication capabilities is to tell China's stories well and make China's voice heard, so as to communicate a comprehensive, multi-dimensional view of China in the international arena. International Chinese language education bears the dual responsibility of spreading the Chinese language and promoting Chinese culture. How to shape global narratives for telling China's stories and improve China's international communication capabilities has become an important issue for international Chinese language education. Chinese idioms, which are hidden treasures of the Chinese language, can play a unique role in this regard. It is therefore necessary to review the content and methods of including idioms in international Chinese language education. In this paper, instruction in idioms is approached through a corpus-based comparative analysis from the theoretical perspectives of equivalence, cognitive metaphor, and second language acquisition. Qualitative and quantitative methods are combined to analyze idioms to be added to the vocabulary of international Chinese language education in three dimensions: word frequency, semantic transparency, and functional equivalence. This paper aims to explore a new approach to the dissemination of Chinese culture through instruction in idioms in international Chinese language education.
Abstract: This work reports progress on previous related work, based on an experiment to improve the intelligence of robotic systems with the aim of achieving richer linguistic communication between humans and robots. In this paper, the authors attempt an algorithmic approach to natural language generation through hole semantics, applying the OMAS-III computational model as a grammatical formalism. In the original work a technical language was used, while in later works this was replaced by a limited Greek natural-language dictionary. This effort was made to give the evolving system the ability to ask questions, and the authors also developed an initial dialogue system using these techniques. The results show that the techniques applied can yield a more sophisticated dialogue system in the future.
Funding: National Common Language Education Practice and Training Innovation Team (Science and Technology Leading Talents and Innovation Team Building, Project Number: ZSLJ202201).
Abstract: The promotion of the national common language and writing system is a cornerstone in cultivating a sense of community among the Chinese nation. As a potent vehicle for cultural dissemination and linguistic communication, the national common language and writing system acts as a crucial bridge fostering unity, mutual support, cultural exchange, and cohesion among diverse ethnic groups. Its widespread promotion not only reinforces the cohesion of the Chinese nation but also plays a pivotal role in fortifying national and cultural identity. Advocacy and dissemination of the national common language and writing system thus contribute significantly to enhancing the sense of community within the Chinese nation, serving as an internal driving force that creates a positive feedback loop strengthening both national identity and the cultural bonds shared by the Chinese people. The relationship between promotion and popularization is reciprocal: each reinforces the other, propelling a synergistic interplay and a beneficial cycle in the promotion and widespread adoption of the national common language and writing system. In essence, this promotion and popularization not only advance the language and writing system itself but also deepen the sense of community within the Chinese nation, establishing a robust foundation for nurturing a strong and cohesive Chinese community.
Abstract: Monitoring of multiple physical parameters in a moderately seismic area of Western Piedmont (NW Italy), together with simultaneous observation of the behaviour of numerous species of domestic and wild animals over a period of more than twenty years, made it possible to distinguish unusual animal behaviours due to local earthquake nucleation from those with other causes. In particular, observation of the body and vocal language of dogs (Canis familiaris) in the same area has permitted not only specification of the different meanings of vocal language in connection with body language, but also classification of the minimum elements of a vocal language that is linked together by tonal and rhythmical sequences of sounds forming a semantic lexicon. The usage of the same tonal and rhythmical vocal sequences in similar or identical situations, experienced by different groups of dogs, led us to verify whether particular vocal sequences could be linked to precise physical anomalies before earthquakes. The identification of physical anomalies due to earthquake nucleation or hydro-geological destabilization is possible thanks to continuous long-term monitoring of certain parameters. Moreover, the complexity of the vocal language of dogs increases if the dogs live in an area with a low population density. The correlation between some vocal sequences and some seismic precursors is therefore better if dogs live free in yards or on farms, if they are in good health, and if they can establish strong social group relations. When dogs live enclosed in the yards of houses that are far apart, they communicate with each other through an amazing vocal language, full of questions and answers, imitations of sequences, and information about situations that may be harmful to them.
Abstract: The concept of language sense has never failed to arouse interest among scholars at home and abroad in recent decades. Many scholars point out that language sense is an important competence that helps facilitate language learning and bears a close connection with learners' acquisition of a language. Another concept, implicit learning, which has proved effective and has been applied in second language acquisition (SLA), is consistent with language sense in terms of its learning mechanism. In this sense, the cultivation of English language sense can be theoretically supported by implicit learning, and pedagogical implications can be derived accordingly.
Abstract: This research reports a study of language decay from the perspective of semantics, specifically from the perspectives of the relationship between the signifier and the signified (Saussure's most important theory), the relationship between denotation and connotation, conversational implicature, and the relationship between language and culture. Through this, readers can gain a better understanding of the nature of language decay. Finally, the writer briefly discusses some negative consequences of language decay.
Abstract: Winds stampeding the fields (Ted Hughes) (1) He sang his didn't he danced his did (E. E. Cummings) (2) Now as I was young and easy under the apple boughs About the lilting house and happy as the grass was green (Dylan Thomas) (3) That spaniel'd me at heels, to whom I gave their wishes. (William Shakespeare, Anthony
Funding: This work was funded by the Major Scientific and Technological Innovation Project of Shandong Province, Grant No. 2022CXGC010609.
Abstract: Semantic segmentation of remote sensing images is one of the core tasks of remote sensing image interpretation. With the continuous development of artificial intelligence technology, the use of deep learning methods for interpreting remote sensing images has matured. Existing neural networks disregard the spatial relationship between two targets in remote sensing images, and semantic segmentation models that combine convolutional neural networks (CNNs) and graph convolutional networks (GCNs) suffer from a lack of feature-boundary information, which leads to unsatisfactory segmentation of target feature boundaries. In this paper, we propose a new semantic segmentation model for remote sensing images (called DGCN hereinafter), which combines deep semantic segmentation networks (DSSN) and GCNs. In the GCN module, a loss function for boundary information is employed to optimize the learning of spatial relationship features between targets and their relationships. A hierarchical fusion method is utilized for feature fusion and classification to optimize the spatial relationship information in the original feature information. Extensive experiments on the ISPRS 2D and DeepGlobe semantic segmentation datasets show that, compared with existing semantic segmentation models for remote sensing images, DGCN significantly optimizes the segmentation of feature boundaries, effectively reduces noise in the segmentation results, and improves segmentation accuracy, demonstrating the advancement of our model.
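A prerequisite for any boundary-aware loss term like the one described above is identifying which pixels lie on a boundary between classes in the label map. The toy function below extracts such a mask with a 4-neighbourhood check; it is a deliberate simplification, unrelated to the DGCN's actual GCN formulation, meant only to make the idea of "weighting boundary information" concrete.

```python
# Toy boundary extraction from a label grid (4-neighbourhood):
# a pixel is a boundary pixel if any neighbour has a different class.
# This simplification only illustrates where a boundary loss would apply.

def boundary_mask(labels):
    h, w = len(labels), len(labels[0])
    mask = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] != labels[i][j]:
                    mask[i][j] = 1
                    break
    return mask

grid = [[0, 0, 1],
        [0, 0, 1],
        [0, 0, 1]]
print(boundary_mask(grid))  # [[0, 1, 1], [0, 1, 1], [0, 1, 1]]
```

A loss that up-weights the pixels where this mask is 1 penalizes blurry class transitions, which is the intuition behind optimizing feature boundaries.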
Abstract: One of the critical hurdles, and breakthroughs, in the field of Natural Language Processing (NLP) over the last two decades has been the development of techniques for text representation that solve the so-called curse of dimensionality, a problem which plagues NLP in general, given that the feature set for learning starts as a function of the size of the language in question, typically upwards of hundreds of thousands of terms. As such, much of the research and development in NLP over this period has been devoted to finding and optimizing solutions to this problem, effectively to feature selection in NLP. This paper looks at the development of these various techniques, which leverage a variety of statistical methods resting on linguistic theories advanced in the middle of the last century, namely the distributional hypothesis, which suggests that words found in similar contexts generally have similar meanings. In this survey paper we examine the development of some of the most popular of these techniques from a mathematical as well as a data-structure perspective, from Latent Semantic Analysis to Vector Space Models to their more modern variants, typically referred to as word embeddings. In reviewing algorithms such as Word2Vec, GloVe, ELMo, and BERT, we explore the idea of semantic spaces more generally, beyond their applicability to NLP.
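The distributional hypothesis underlying all of the methods surveyed above can be demonstrated in a few lines: build co-occurrence count vectors from a toy corpus and compare words by cosine similarity. This is a bare-bones precursor of LSA-style and word2vec-style representations, with an invented three-sentence corpus and an arbitrary context window of two.

```python
# Minimal demonstration of the distributional hypothesis:
# words in similar contexts get similar co-occurrence vectors.
import math
from collections import Counter, defaultdict

corpus = ["the cat sat on the mat",
          "the dog sat on the rug",
          "the cat chased the dog"]

# Count context words within a +/-2 window around each token.
cooc = defaultdict(Counter)
for sent in corpus:
    words = sent.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 2), min(len(words), i + 3)):
            if j != i:
                cooc[w][words[j]] += 1

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv)

# "cat" and "dog" occur in near-identical contexts, so they score high.
print(round(cosine(cooc["cat"], cooc["dog"]), 2))
```

Modern embeddings replace raw counts with dense learned vectors, but the similarity-from-shared-context principle is the same.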
Abstract: In recent years, China has been deepening reform and opening up to the outside world, especially after China's successful accession to the WTO. In this respect, legal communication plays an important part in intercourse between different countries, and the basic features of legal language can be observed in specific situations of legal communication. In this thesis, the lexical, semantic, grammatical, and structural features of legal language are introduced.
Funding: Supported by the National Science Foundation of China (No. 51575264), the Jiangsu Province Science Foundation for Excellent Youths under Grant BK20121011, and the Fundamental Research Funds for the Central Universities (No. NS2015050).
Abstract: Process inference cannot be achieved effectively by traditional expert systems, while ontology and semantic technology can provide a better solution for the knowledge acquisition and intelligent inference of expert systems. The application mode of ontology and semantic technology for process parameter recommendation is mainly investigated. First, ontology, Semantic Web Rule Language (SWRL) rules, and the related inference engine are introduced. Then, an inference method for processes based on ontology technology and SWRL rules is proposed. The construction method of the process ontology base and the writing criteria for SWRL rules are described later. Finally, the inference results are obtained. The proposed mode can serve as a reference for the construction of process knowledge bases as well as an expert system's reusable process rule library.
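An SWRL rule of the general shape `Part(?p) ^ hasMaterial(?p, Steel) ^ hasFeature(?p, Hole) -> recommendedProcess(?p, Drilling)` can be imitated by a single forward-chaining step over fact triples. The vocabulary below is invented for illustration and is not the paper's actual ontology; a real system would run such rules in an inference engine (e.g., over an OWL ontology) rather than in Python.

```python
# Imitation of one SWRL-style rule as forward chaining over triples.
# Predicate and individual names are invented for illustration.

facts = {("part1", "hasMaterial", "Steel"),
         ("part1", "hasFeature", "Hole"),
         ("part2", "hasMaterial", "Aluminium")}

def apply_rule(facts):
    """hasMaterial(?p, Steel) ^ hasFeature(?p, Hole)
       -> recommendedProcess(?p, Drilling)"""
    derived = set()
    for s, p, o in facts:
        if p == "hasMaterial" and o == "Steel" and (s, "hasFeature", "Hole") in facts:
            derived.add((s, "recommendedProcess", "Drilling"))
    return derived

print(apply_rule(facts))  # {('part1', 'recommendedProcess', 'Drilling')}
```

The value of the ontology-based approach is precisely that such rules live in a declarative, reusable rule library rather than being hard-coded into the expert system.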
Funding: This work was co-funded by the European Research Council for the project ScienceGRAPH (Grant agreement ID: 819536) and by the TIB Leibniz Information Centre for Science and Technology.
Abstract: Purpose: This work aims to normalize the NLPCONTRIBUTIONS scheme (henceforward, NLPCONTRIBUTIONGRAPH) to structure, directly from article sentences, the contributions information in Natural Language Processing (NLP) scholarly articles via a two-stage annotation methodology: 1) pilot stage, to define the scheme (described in prior work); and 2) adjudication stage, to normalize the graphing model (the focus of this paper). Design/methodology/approach: We re-annotate, a second time, the contributions-pertinent information across 50 prior-annotated NLP scholarly articles in terms of a data pipeline comprising contribution-centered sentences, phrases, and triple statements. To this end, care was taken in the adjudication annotation stage to reduce annotation noise while formulating the guidelines for our proposed novel NLP contributions structuring and graphing scheme. Findings: The application of NLPCONTRIBUTIONGRAPH to the 50 articles finally resulted in a dataset of 900 contribution-focused sentences, 4,702 contribution-information-centered phrases, and 2,980 surface-structured triples. The intra-annotation agreement between the first and second stages, in terms of F1-score, was 67.92% for sentences, 41.82% for phrases, and 22.31% for triple statements, indicating that with increased granularity of the information, the annotation decision variance is greater. Research limitations: NLPCONTRIBUTIONGRAPH has limited scope for structuring scholarly contributions compared with STEM (Science, Technology, Engineering, and Medicine) scholarly knowledge at large. Further, the annotation scheme in this work is designed by only an intra-annotator consensus: a single annotator first annotated the data to propose the initial scheme, following which the same annotator re-annotated the data to normalize the annotations in an adjudication stage. However, the expected goal of this work is to achieve a standardized retrospective model of capturing NLP contributions from scholarly articles. This would entail a larger initiative of enlisting multiple annotators to accommodate different worldviews into a "single" set of structures and relationships as the final scheme. Given that the initial scheme is proposed here for the first time, and the complexity of the annotation task within a realistic timeframe, our intra-annotation procedure is well suited. Nevertheless, the model proposed in this work is presently limited, since it does not incorporate multiple annotator worldviews; this is planned as future work to produce a robust model. Practical implications: We demonstrate NLPCONTRIBUTIONGRAPH data integrated into the Open Research Knowledge Graph (ORKG), a next-generation KG-based digital library with intelligent computations enabled over structured scholarly knowledge, as a viable aid to assist researchers in their day-to-day tasks. Originality/value: NLPCONTRIBUTIONGRAPH is a novel scheme to annotate research contributions from NLP articles and integrate them into a knowledge graph, which to the best of our knowledge does not exist in the community. Furthermore, our quantitative evaluations over the two-stage annotation tasks offer insights into task difficulty.
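The intra-annotation F1 figures reported above can be understood by treating one annotation stage as "gold" and the other as "predictions": F1 is then the harmonic mean of precision and recall over exact matches. The sketch below illustrates this on two invented triple sets; it is only an illustration of the metric, not of the actual NLPCONTRIBUTIONGRAPH data.

```python
# F1 agreement between two annotation stages on exact-match items.
# The example triples are invented for illustration.

def f1(stage1, stage2):
    overlap = len(set(stage1) & set(stage2))
    if overlap == 0:
        return 0.0
    precision = overlap / len(set(stage2))
    recall = overlap / len(set(stage1))
    return 2 * precision * recall / (precision + recall)

s1 = {("model", "achieves", "sota"), ("task", "is", "ner")}
s2 = {("model", "achieves", "sota"), ("task", "is", "tagging")}
print(round(f1(s1, s2), 2))  # 0.5
```

Because a triple must match in all three slots to count, agreement naturally drops as granularity increases, which matches the sentence > phrase > triple ordering of the reported scores.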
Funding: National Natural Science Foundation of China (Nos. 42001408, 61806097).
Abstract: Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research endeavors utilize weakly supervised methods. These approaches aim to reduce the expense of annotation by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network has a three-branch architecture, whereby each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One of the branches is dedicated to generating edge masks using edge detection algorithms and optimizing road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the historical flaw that pseudo-labels, once created, are not updated with network training, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution operates efficiently by simultaneously considering both edge-mask aid and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.
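The dynamic pseudo-label update can be pictured as a convex (mixup-style) blend of the current network prediction with the previous pseudo-label, so the labels drift along with training rather than staying frozen. The function below is a minimal per-pixel sketch; the mixing weight `lam` and the two-class probabilities are assumptions for illustration, not the paper's actual schedule.

```python
# Sketch of dynamic pseudo-label updating via a mixup-style blend.
# lam (the mixing weight) is an assumed hyperparameter.

def update_pseudo_label(prev_label, prediction, lam=0.7):
    """Convex combination of per-class probabilities:
    new = lam * prediction + (1 - lam) * previous pseudo-label."""
    return [lam * p + (1 - lam) * q for p, q in zip(prediction, prev_label)]

prev = [0.9, 0.1]   # old pseudo-label (road vs. background)
pred = [0.5, 0.5]   # current network prediction
new = update_pseudo_label(prev, pred)
print([round(x, 2) for x in new])  # [0.62, 0.38]
```

Blending rather than replacing keeps the pseudo-labels stable early in training while still letting them track the improving network.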
Funding: Supported by the Changsha Science and Technology Plan (2004081), in part by the Science and Technology Program of the Hunan Provincial Department of Transportation (202117), and in part by the Science and Technology Research and Development Program Project of China Railway Group Limited (2021-Special-08).
Abstract: The detection of crack defects on the walls of road tunnels is a crucial step in ensuring travel safety and performing routine tunnel maintenance. The automatic and accurate detection of cracks on the surface of road tunnels is the key to improving the maintenance efficiency of road tunnels. Machine vision technology combined with a deep neural network model is an effective means of localizing and identifying crack defects on the surface of road tunnels. We propose a complete set of automatic inspection methods for identifying cracks on the walls of road tunnels, as a solution to the difficulty of identifying cracks during manual maintenance. First, a set of equipment for the real-time acquisition of high-definition images of walls in road tunnels is designed. Images of walls in road tunnels are acquired with the designed equipment, and images containing crack defects are manually identified and selected. Subsequently, the training and validation sets used to construct the crack inspection model are obtained from the acquired images, with the regions containing cracks and the pixels of the cracks finely labeled. After that, a crack area sensing module is designed based on the proposed you-only-look-once version 7 model combined with a coordinate attention mechanism (CA-YOLO V7) network to locate the crack regions in road tunnel surface images. Only subimages containing cracks are acquired and sent to the multiscale semantic segmentation module for extraction of the pixels belonging to the cracks, based on the DeepLab V3+ network. The precision and recall of crack region localization on the surface of a road tunnel based on our proposed method are 82.4% and 93.8%, respectively. Moreover, the mean intersection over union (MIoU) and pixel accuracy (PA) values for pixel-level detection accuracy are 76.84% and 78.29%, respectively. The experimental results on the dataset show that our proposed two-stage detection method outperforms other state-of-the-art models in crack region localization and detection. Based on our proposed method, the images captured on the surface of a road tunnel can undergo crack detection at a speed of ten frames per second, and the detection accuracy can reach 0.25 mm, which meets the requirements for the maintenance of an actual project. The designed CA-YOLO V7 network enables precise localization of the area to which a crack belongs in images acquired under different environmental and lighting conditions in road tunnels. The improved DeepLab V3+ network, based on lightweighting, is able to extract crack morphology in a given region more quickly while maintaining segmentation accuracy. The established model combines defect localization and segmentation models for the first time, realizing pixel-level defect localization and extraction on the surface of road tunnels in complex environments, and is capable of determining the actual size of cracks based on the physical coordinate system after camera calibration. The trained model has high accuracy and can be extended and applied to embedded computing devices for the assessment and repair of damaged areas in different types of road tunnels.
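The step of determining the actual crack size after camera calibration boils down to converting a pixel measurement to millimetres. The sketch below uses the simplest pinhole-camera relation (object size = pixel size × working distance / focal length); every number in it is made up for illustration and is not taken from the paper's calibration.

```python
# Back-of-the-envelope pixel-to-millimetre conversion (pinhole model).
# All numeric values are invented for illustration.

def crack_width_mm(width_px, sensor_px_size_mm, focal_mm, distance_mm):
    """Object size = pixel extent on sensor * distance / focal length."""
    return width_px * sensor_px_size_mm * distance_mm / focal_mm

# A 2-pixel-wide crack, 0.005 mm pixels, 50 mm lens, 2.5 m to the wall:
print(round(crack_width_mm(2, 0.005, 50, 2500), 2))  # 0.5
```

Under assumptions like these, sub-millimetre accuracy such as the reported 0.25 mm corresponds to resolving roughly a single pixel at the working distance, which is why calibration is essential.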
Funding: This work was partially supported by the China National Social Science Foundation (No. 08BYY046), the Social Science Foundation of the Chinese Ministry of Education (No. 06JJD740007), and the Shandong Social Science Fund Project (No. 07CWXJ03).
Abstract: In this paper, we investigate the predicate transformer semantics of the contract language introduced by Back and von Wright in their book "Refinement Calculus: A Systematic Introduction" (Springer-Verlag, New York, 1998) in the framework of fuzziness. In order to define fuzzy operations, i.e., fuzzy logic connectives, we take into account the implicator → and its associated product operator, based on residuated lattice theory. Using these basic fuzzy operations, we introduce the angelic and demonic updates of fuzzy relations. They are the basis of fuzzy predicate transformers, in the sense that any strongly monotone fuzzy predicate transformer can be represented as the sequential composition of the angelic and demonic updates. Together with the standard strong negation, we set up the duality between the angel and the demon. The fuzzy predicate transformer semantics of contract statements is established, and a simple example of contract statements is given.
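For a concrete feel of the two updates, one common choice of residuated-lattice operations is the Gödel pair: min for the conjunction and the implicator a → b = 1 if a ≤ b, else b. Under that assumption (which may differ from the paper's chosen connectives), the angelic update takes a supremum over successor states and the demonic update an infimum, as sketched numerically below over a tiny finite state space.

```python
# Numeric sketch of angelic/demonic updates over a fuzzy relation,
# assuming Goedel operations; states are indices 0 and 1.

def goedel_impl(a, b):
    """Goedel implicator: a -> b = 1 if a <= b, else b."""
    return 1.0 if a <= b else b

def angelic(R, q):
    """<R>q(s) = sup_t min(R(s,t), q(t)) -- some successor satisfies q."""
    return [max(min(Rst, q[t]) for t, Rst in enumerate(row)) for row in R]

def demonic(R, q):
    """[R]q(s) = inf_t (R(s,t) -> q(t)) -- every successor satisfies q."""
    return [min(goedel_impl(Rst, q[t]) for t, Rst in enumerate(row)) for row in R]

R = [[1.0, 0.4],   # fuzzy transition degrees from state 0
     [0.2, 0.9]]   # and from state 1
q = [0.5, 0.3]     # fuzzy postcondition

print(angelic(R, q))  # [0.5, 0.3]
print(demonic(R, q))  # [0.3, 0.3]
```

As in the crisp refinement calculus, the angelic value is never below the demonic one, reflecting the angel's freedom to pick a favourable successor.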
Abstract: With the rapid development of artificial intelligence, large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation. These models have great potential to enhance database query systems, enabling more intuitive and semantic query mechanisms. Our model leverages the LLM's deep learning architecture to interpret and process natural language queries and translate them into accurate database queries. The system integrates an LLM-powered semantic parser that translates user input into structured queries that can be understood by the database management system. First, the user query is pre-processed: the text is normalized and ambiguity is removed. This is followed by semantic parsing, where the LLM interprets the pre-processed text and identifies key entities and relationships. Next comes query generation, which converts the parsed information into a structured query format tailored to the target database schema. Finally, there is query execution and feedback, where the resulting query is executed on the database and the results are returned to the user. The system also provides feedback mechanisms to improve and optimize future query interpretations. By implementing the model with advanced LLMs and fine-tuning on diverse datasets, the experimental results show that the proposed method significantly improves the accuracy and usability of database queries, making data retrieval easy for users without specialized knowledge.
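The four-stage pipeline described above (pre-process, semantically parse, generate the query, execute) can be sketched end to end with a keyword stub standing in for the LLM. Everything here is an assumption for illustration: the table `orders`, its columns, and the keyword heuristic replace what a real system would obtain by prompting an LLM with the schema and the question.

```python
# Toy version of the NL-to-SQL pipeline; a keyword stub replaces the LLM.
# Table/column names and the parsing heuristic are invented assumptions.

def preprocess(text):
    """Normalize: lowercase and collapse whitespace."""
    return " ".join(text.lower().split())

def semantic_parse(text):
    """Stand-in for the LLM: map keywords to a structured intent."""
    intent = {"table": "orders", "columns": ["customer", "total"]}
    if "over" in text:
        amount = next(tok for tok in text.split() if tok.isdigit())
        intent["where"] = f"total > {amount}"
    return intent

def generate_sql(intent):
    """Render the structured intent against the assumed schema."""
    sql = f"SELECT {', '.join(intent['columns'])} FROM {intent['table']}"
    if "where" in intent:
        sql += f" WHERE {intent['where']}"
    return sql

q = "Show orders over 100"
print(generate_sql(semantic_parse(preprocess(q))))
# SELECT customer, total FROM orders WHERE total > 100
```

The design point the abstract makes is that only the middle stage needs the LLM; pre-processing and SQL rendering stay deterministic, which keeps generated queries constrained to the known schema.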