Agroecological practices are promoted as a more proactive approach than conventional agriculture to achieving a collective global response to climate change and variability while building robust and resilient agricultural systems to meet food needs and protect the integrity of ecosystems. There is relatively limited evidence on the key traditional agroecological knowledge and practices adopted by smallholder farmers, the factors that influence smallholder farmers' decision to adopt these practices, and the opportunities they present for building resilient agricultural systems. Using a multi-scale mixed-method approach, we conducted key informant interviews (n=12), focus group discussions (n=5), and questionnaire surveys (N=220) to explore the traditional agroecological knowledge and practices, the influencing factors, and the opportunities smallholder farmers identified for achieving resilient agricultural systems. Our findings suggest that smallholder farmers employ a suite of traditional agroecological knowledge and practices to enhance food security, combat climate change, and build resilient agricultural systems. The most important traditional agroecological knowledge and practices in the study area comprise cultivating leguminous crops, mixed crop-livestock systems, and crop rotation, with Relative Importance Index (RII) values of 0.710, 0.708, and 0.695, respectively. Farmers report that their choice of these practices is influenced by their own farming experience; access to markets, local resources, information, and expertise; and the perceived risk of climate change. Moreover, the results show that improving household food security and nutrition, improving soil quality, controlling pest and disease infestation, and support from Non-Governmental Organizations (NGOs) and local authorities are opportunities for smallholder farmers in adopting traditional agroecological knowledge and practices to achieve resilient agricultural systems. The findings underscore the need for stakeholders and policy-makers at all levels to build capacity and raise awareness of traditional agroecological knowledge and practices as mechanisms to ensure resilient agricultural systems for sustainable food security.
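The abstract reports RII values without spelling out the computation. As a point of reference, the Relative Importance Index is commonly computed as RII = ΣW / (A × N), where W is the Likert weight each respondent assigns to a practice, A is the highest weight on the scale, and N is the number of respondents. The sketch below assumes this standard definition and uses hypothetical ratings; it is illustrative, not the authors' code.

```python
from typing import List

def relative_importance_index(weights: List[int], max_weight: int) -> float:
    """RII = sum(W) / (A * N): the sum of respondent weights divided by
    the highest possible weight times the number of respondents."""
    return sum(weights) / (max_weight * len(weights))

# Hypothetical 5-point Likert ratings from ten respondents for one practice.
ratings = [4, 3, 5, 4, 3, 4, 2, 4, 3, 4]
print(f"RII = {relative_importance_index(ratings, max_weight=5):.3f}")  # RII = 0.720
```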
The performance of state-of-the-art deep reinforcement learning algorithms such as Proximal Policy Optimization (PPO), Twin Delayed Deep Deterministic Policy Gradient (TD3), and Soft Actor-Critic (SAC) for generating a quadruped walking gait in a virtual environment was presented in previous research work titled "A Comparison of PPO, TD3, and SAC Reinforcement Algorithms for Quadruped Walking Gait Generation". We demonstrated that the Soft Actor-Critic algorithm had the best performance generating the walking gait for a quadruped under certain sensor configurations in the virtual environment. In this work, we present a performance analysis of the same deep reinforcement learning algorithms for quadruped walking gait generation in a physical environment. Performance in the physical environment is determined by transfer learning augmented with real-time reinforcement learning for gait generation on a physical quadruped. The quadruped is equipped with a range of sensors: position tracking using a stereo camera, contact sensing on each robot leg through force-resistive sensors, and proprioceptive information on the robot body and legs from nine inertial measurement units. The performance comparison uses the metrics associated with the walking gait: average forward velocity (m/s), average forward velocity variance, average lateral velocity (m/s), average lateral velocity variance, and quaternion root mean square deviation. The strengths and weaknesses of each algorithm for the given task on the physical quadruped are discussed.
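For readers who want to reproduce this kind of comparison, the listed gait metrics are straightforward to compute from logged trajectories. The NumPy sketch below is a hedged illustration: the function name and data layout are our own, and quaternion RMSD is taken here as the root-mean-square elementwise deviation from a target orientation, which may differ from the paper's exact definition.

```python
import numpy as np

def gait_metrics(vel_xy: np.ndarray, quats: np.ndarray, target_quat: np.ndarray) -> dict:
    """vel_xy: (T, 2) forward/lateral velocities in m/s.
    quats: (T, 4) measured body orientations; target_quat: (4,) desired orientation."""
    fwd, lat = vel_xy[:, 0], vel_xy[:, 1]
    # Elementwise quaternion RMSD against the target orientation (one common choice).
    q_rmsd = np.sqrt(np.mean((quats - target_quat) ** 2))
    return {
        "avg_forward_velocity": fwd.mean(),
        "forward_velocity_variance": fwd.var(),
        "avg_lateral_velocity": lat.mean(),
        "lateral_velocity_variance": lat.var(),
        "quaternion_rmsd": q_rmsd,
    }

# Hypothetical 100-step rollout: steady 0.3 m/s forward, small lateral drift, level body.
rng = np.random.default_rng(0)
vel = np.column_stack([0.3 + 0.02 * rng.standard_normal(100), 0.01 * rng.standard_normal(100)])
quat_log = np.tile([1.0, 0.0, 0.0, 0.0], (100, 1)) + 0.01 * rng.standard_normal((100, 4))
print(gait_metrics(vel, quat_log, np.array([1.0, 0.0, 0.0, 0.0])))
```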
The meta-heuristic algorithm with local search is an excellent choice for the job-shop scheduling problem (JSP). However, due to the unique nature of the JSP, local search may generate infeasible neighbourhood solutions. In the existing literature, although some domain knowledge of the JSP can be used to avoid infeasible solutions, the constraint conditions in this domain knowledge are sufficient but not necessary: they may exclude many feasible solutions and leave the local search inadequate. By analysing the causes of infeasible neighbourhood solutions, this paper further explores the domain knowledge contained in the JSP and proposes sufficient and necessary constraint conditions that find all feasible neighbourhood solutions, allowing the local search to be carried out thoroughly. With the proposed conditions, a new neighbourhood structure is designed. Then, a fast calculation method for all feasible neighbourhood solutions is provided, significantly reducing the calculation time compared with ordinary methods. A set of standard benchmark instances is used to evaluate the performance of the proposed neighbourhood structure and calculation method. The experimental results show that the calculation method is effective and that the new neighbourhood structure is more reliable and superior: 90% of its results are the best when compared with three other well-known and influential neighbourhood structures. Finally, the result of a tabu search algorithm with the new neighbourhood structure is compared with the current best results, demonstrating the superiority of the proposed neighbourhood structure.
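The feasibility issue is easiest to see in the disjunctive-graph view of the JSP: a neighbourhood move such as swapping two operations on one machine yields a feasible schedule if and only if the graph of job-precedence and machine-order arcs remains acyclic. The sketch below, with a hypothetical 2-job, 2-machine encoding, checks a candidate neighbour by cycle detection; the paper's contribution is a sufficient and necessary condition that avoids such per-move checks, which we do not reproduce here.

```python
from collections import defaultdict, deque

def is_feasible(n_ops: int, job_arcs, machine_seqs) -> bool:
    """A schedule is feasible iff job-precedence arcs plus machine-order arcs
    form an acyclic graph (checked here via Kahn's algorithm)."""
    adj, indeg = defaultdict(list), [0] * n_ops
    arcs = list(job_arcs) + [(s[i], s[i + 1]) for s in machine_seqs for i in range(len(s) - 1)]
    for u, v in arcs:
        adj[u].append(v)
        indeg[v] += 1
    q = deque(i for i in range(n_ops) if indeg[i] == 0)
    seen = 0
    while q:
        u = q.popleft()
        seen += 1
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                q.append(v)
    return seen == n_ops  # all operations ordered -> no cycle

# Hypothetical 2-job x 2-machine instance; operations numbered 0..3.
job_arcs = [(0, 1), (2, 3)]                        # operation order within each job
print(is_feasible(4, job_arcs, [[0, 3], [2, 1]]))  # True: a feasible neighbour
print(is_feasible(4, job_arcs, [[3, 0], [1, 2]]))  # False: the swap creates a cycle
```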
To address the difficulty of training high-quality models in specific domains that lack fine-grained annotation resources, we propose a knowledge-integrated cross-domain data generation method for unsupervised domain adaptation tasks. Specifically, we extract domain features and lexical and syntactic knowledge from source-domain and target-domain data, and use a masking model with an extended masking strategy and a re-masking strategy to obtain data from which domain-specific features have been removed. Finally, we improve the sequence generation model BART and use it to generate high-quality target-domain data for the task of aspect and opinion co-extraction in the target domain. Experiments on three conventional English datasets from different domains show that our method generates more accurate and diverse target-domain data, with the best results compared to previous methods.
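As a rough illustration of the masking-then-generation idea, off-the-shelf BART can already infill masked spans. The sketch below uses the public facebook/bart-base checkpoint from the transformers library; the example sentence and the choice of which words to mask are hypothetical, and the paper's extended masking, re-masking, and BART modifications are not reproduced.

```python
from transformers import BartTokenizer, BartForConditionalGeneration

# Off-the-shelf BART can infill <mask> tokens; the paper modifies BART further.
tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# A source-domain review with (hypothetically identified) domain-specific
# aspect words masked out, to be re-filled in the style of another domain.
masked = "The <mask> was excellent and the <mask> was very attentive."
inputs = tok(masked, return_tensors="pt")
out = model.generate(**inputs, max_length=32, num_beams=4)
print(tok.decode(out[0], skip_special_tokens=True))
```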
Text event mining, as an indispensable method of text mining, has attracted extensive attention from researchers. A modeling method for knowledge graphs of events based on mutual information among neighbor domains and sparse representation, UKGE-MS, is proposed in this paper. Specifically, UKGE-MS improves the ability of existing text mining technology to understand and discover high-dimensional unlabeled information, and it addresses the problem of traditional unsupervised feature selection methods, which select features only from a global perspective while ignoring the local connections among samples. Firstly, considering the influence of local sample information in feature correlation evaluation, a feature clustering algorithm based on average neighborhood mutual information is proposed, yielding feature clusters with a certain event correlation. Secondly, an unsupervised feature selection method based on the high-order correlation of multi-dimensional statistical data is designed by combining the dimension-reduction advantage of the locally linear embedding algorithm with the feature selection ability of sparse representation, so as to enhance the generalization ability of the selected feature items. Finally, the event knowledge graph is constructed by means of sparse representation and the l1 norm. Extensive experiments are carried out on five real datasets and synthetic datasets, and UKGE-MS is compared with five corresponding algorithms. The experimental results show that UKGE-MS is better than traditional methods in event clustering and feature selection, and it has advantages over other methods in text event recognition and discovery.
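The first step, grouping features by mutual information, can be illustrated compactly. The sketch below clusters features whose pairwise mutual information is high, using scikit-learn; it uses plain pairwise MI rather than the paper's average neighborhood mutual information, and the data are synthetic, so treat it as a sketch of the idea only.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.cluster import AgglomerativeClustering

# Synthetic data: 6 discrete features over 200 samples; features 0-2 and 3-5
# are internally correlated, so MI-based clustering should group them.
rng = np.random.default_rng(1)
a = rng.integers(0, 3, 200)
b = rng.integers(0, 3, 200)
X = np.column_stack([a, (a + rng.integers(0, 2, 200)) % 3, a,
                     b, (b + rng.integers(0, 2, 200)) % 3, b])

n = X.shape[1]
# Pairwise mutual information between features, turned into a distance matrix.
mi = np.array([[mutual_info_score(X[:, i], X[:, j]) for j in range(n)] for i in range(n)])
dist = mi.max() - mi  # high MI -> small distance
labels = AgglomerativeClustering(n_clusters=2, metric="precomputed",
                                 linkage="average").fit_predict(dist)
print(labels)  # expected: features {0,1,2} and {3,4,5} in separate clusters
```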
Many companies in industries such as credit cards, insurance, banking, and retail require direct marketing. Data mining can help those institutions set marketing goals: data mining techniques can identify promising target audiences and improve the likelihood of response. In this work we investigate two data mining techniques: the Naive Bayes and C4.5 decision tree algorithms. The goal is to predict whether a client will subscribe to a term deposit. We also conduct a comparative study of the performance of the two algorithms. Publicly available UCI data is used to train and test them. In addition, we extract actionable knowledge from the decision tree to support interesting and important business decisions.
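A minimal scikit-learn version of this comparison is sketched below. Note two assumptions: scikit-learn ships CART rather than C4.5, so a DecisionTreeClassifier with the entropy criterion stands in for C4.5, and synthetic data stands in for the UCI Bank Marketing dataset.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the UCI Bank Marketing data (age, balance, duration, ...).
X, y = make_classification(n_samples=2000, n_features=16, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Naive Bayes": GaussianNB(),
    # Entropy-based tree as a rough stand-in for C4.5 (sklearn implements CART).
    "Decision tree (entropy)": DecisionTreeClassifier(criterion="entropy", random_state=0),
}
for name, clf in models.items():
    clf.fit(X_tr, y_tr)
    print(name, accuracy_score(y_te, clf.predict(X_te)))
```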
Research papers in the field of SLA published between 2009 and 2019 are analyzed in terms of the research status of domestic SLA researchers, research institutions, and the research frontiers and hotspots in the papers, and the knowledge domains of SLA research are mapped. The data are retrieved from 10 core journals of linguistics via the CNKI journal database. By means of CiteSpace 5.3, an analysis of the overall trend of studies on SLA in China is made.
AIM: To track the knowledge structure, topics in focus, and trends in emerging research on pterygium in the past 20 y. METHODS: Based on the Web of Science Core Collection (WoSCC), studies related to pterygium published in the past 20 y (2000-2019) were included. With the help of VOSviewer software, a knowledge map was constructed and the distribution of countries, institutions, journals, and authors in the field of pterygium noted. Meanwhile, using co-citation analysis of references and co-occurrence analysis of keywords, we identified the knowledge base and hotspots, thereby obtaining an overview of the field. RESULTS: The search retrieved 1516 publications on pterygium from WoSCC published between 2000 and 2019. In the past two decades, the annual number of publications has risen with small fluctuations. The most productive institutions are from Singapore, but the most prolific and active country is the United States. The journal Cornea published the most articles, and Coroneo MT contributed the most publications on pterygium. From co-occurrence analysis, the keywords formed 3 clusters: 1) surgical therapeutic techniques and adjuvants for pterygium, 2) occurrence process and pathogenesis of pterygium, and 3) epidemiology and etiology of pterygium formation. These three clusters were consistent with the clustering in co-citation analysis, in which Cluster 1 contained the most references (74 publications, 47.74%), Cluster 2 contained 53 publications, accounting for 34.19%, and Cluster 3 focused on epidemiology with 18.06% of the total 155 co-citation publications. CONCLUSION: This study demonstrates that research on pterygium is gradually attracting the attention of scholars and researchers. Interaction between authors, institutions, and countries is still lacking. Even so, the research hotspots, distribution, and research status of pterygium presented in this study could provide valuable information for scholars and researchers.
Thermoelectric and thermal materials are essential in achieving carbon neutrality. However, the high cost of lattice thermal conductivity calculations and the limited applicability of classical physical models have led to the inefficient development of thermoelectric materials. In this study, we propose a two-stage machine learning framework with physical interpretability that incorporates domain knowledge to calculate high/low thermal conductivity rapidly. Specifically, a crystal graph convolutional neural network (CGCNN) is constructed to predict the fundamental physical parameters related to lattice thermal conductivity. Based on these physical parameters, an interpretable machine learning model, the sure independence screening and sparsifying operator (SISSO), is trained to predict the lattice thermal conductivity. We have predicted the lattice thermal conductivity of all available materials in the Open Quantum Materials Database (OQMD) (https://www.oqmd.org/). The proposed approach guides the next step of searching for materials with ultra-high or ultra-low lattice thermal conductivity and promotes the development of new thermal insulation materials and thermoelectric materials.
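The two-stage structure (first predict physical descriptors, then map those descriptors to thermal conductivity with an interpretable model) can be sketched with generic scikit-learn regressors. Everything below is a stand-in: a random forest replaces CGCNN, a linear model replaces SISSO, and both the descriptors and the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Stage 1 stand-in (CGCNN in the paper): predict physical descriptors
# (e.g., bulk modulus, Debye temperature) from structure encodings.
rng = np.random.default_rng(0)
X_struct = rng.standard_normal((500, 32))                     # hypothetical structure encodings
descriptors = X_struct[:, :2] @ rng.standard_normal((2, 3))   # 3 synthetic descriptors
stage1 = RandomForestRegressor(random_state=0).fit(X_struct, descriptors)

# Stage 2 stand-in (SISSO in the paper): an interpretable model maps the
# predicted descriptors to log lattice thermal conductivity.
log_kappa = descriptors @ np.array([0.8, -0.5, 0.3]) + 0.01 * rng.standard_normal(500)
stage2 = LinearRegression().fit(stage1.predict(X_struct), log_kappa)

X_new = rng.standard_normal((5, 32))            # unseen hypothetical materials
print(stage2.predict(stage1.predict(X_new)))    # fast screening of predicted kappa
```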
Recently, deep convolutional neural networks (DCNNs) have achieved remarkable results in image classification tasks. Despite convolutional networks' great successes, their training process relies on a large amount of data prepared in advance, which is often challenging in real-world applications such as streaming data and concept drift. For this reason, incremental learning (continual learning) has attracted increasing attention from scholars. However, incremental learning faces the challenge of catastrophic forgetting: performance on previous tasks drastically degrades after learning a new task. In this paper, we propose a new strategy to alleviate catastrophic forgetting when neural networks are trained on continual domains. Specifically, two components are applied: data translation based on transfer learning and knowledge distillation. The former translates a portion of new data to reconstruct the partial data distribution of the old domain; the latter uses an old model as a teacher to guide a new model. Experimental results on three datasets show that combining these two methods effectively alleviates catastrophic forgetting.
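The knowledge-distillation component follows a well-established recipe: the student is trained against both the hard labels and the teacher's temperature-softened outputs. The PyTorch sketch below shows that standard loss; the weighting alpha and temperature T are illustrative, and the paper's data-translation component is not shown.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Hinton-style distillation: blend hard-label cross-entropy with a
    temperature-softened KL term that pulls the student toward the teacher."""
    hard = F.cross_entropy(student_logits, labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-loss gradients match the hard-loss magnitude
    return alpha * hard + (1 - alpha) * soft

# Hypothetical batch: 8 samples, 10 classes; the "old model" plays teacher.
student = torch.randn(8, 10, requires_grad=True)
teacher = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels))
```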
In this paper we investigate the effectiveness of ensemble-based learners for web robot session identification from web server logs. We also perform multi-fold robot session labeling to improve learner performance. We conduct a comparative study of various ensemble methods (Bagging, Boosting, and Voting) against simple classifiers from a classification perspective, and we evaluate the effectiveness of these classifiers (both ensemble and simple) on five different datasets of varying session length. At present, the results of web server log analyzers are not very reliable because the input log files are highly inflated by sessions of automated web traversal software, known as web robots. The presence of web robot traffic entries in web server log repositories poses a great challenge to extracting any actionable and usable knowledge about the browsing behavior of actual visitors. Web robot sessions therefore need accurate and fast detection in web server log repositories to extract knowledge about genuine visitors and to produce correct log analyzer results.
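A compact scikit-learn rendition of the Bagging/Boosting/Voting comparison is sketched below. The session features are synthetic stand-ins (real robot-detection features would include request rate, HEAD-request ratio, robots.txt accesses, and similar), and the classifier settings are illustrative rather than the paper's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier, AdaBoostClassifier, VotingClassifier

# Synthetic stand-in for session features (requests/s, HEAD ratio, robots.txt hits, ...).
X, y = make_classification(n_samples=1500, n_features=12, random_state=0)

base = DecisionTreeClassifier(max_depth=5, random_state=0)
candidates = {
    "Bagging": BaggingClassifier(estimator=base, n_estimators=50, random_state=0),
    "Boosting": AdaBoostClassifier(n_estimators=50, random_state=0),
    "Voting": VotingClassifier([("dt", base), ("nb", GaussianNB())], voting="soft"),
    "Single tree": base,
}
for name, clf in candidates.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean().round(3))
```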
Side-scan sonar (SSS) is now a prevalent instrument for large-scale seafloor topography measurements, deployable on an autonomous underwater vehicle (AUV) to execute fully automated underwater acoustic scanning imaging along a predetermined trajectory. However, SSS images often suffer from speckle noise caused by mutual interference between echoes, and limited AUV computational resources further hinder noise suppression. Existing approaches for SSS image processing and speckle noise reduction rely heavily on complex network structures and fail to combine the benefits of deep learning and domain knowledge. To address this problem, RepDNet, a novel and effective despeckling convolutional neural network, is proposed. RepDNet introduces two re-parameterized blocks, the Pixel Smoothing Block (PSB) and the Edge Enhancement Block (EEB), preserving edge information while attenuating speckle noise. During training, PSB and EEB manifest as double-layered multi-branch structures integrating first-order and second-order derivatives and smoothing functions. During inference, the branches are re-parameterized into a single 3×3 convolution, enabling efficient inference without sacrificing accuracy. RepDNet comprises three computational operations: 3×3 convolution, element-wise summation, and Rectified Linear Unit activation. Evaluations on benchmark datasets, a real SSS dataset, and data collected at Lake Mulan establish RepDNet as a well-balanced network that meets AUV computational constraints in terms of performance and latency.
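The re-parameterization step relies on convolution being linear in its weights: parallel branches can be folded into one kernel exactly, so the inference network keeps the training network's accuracy. The PyTorch sketch below demonstrates this identity for two plain 3×3 branches; RepDNet's derivative-based branches and smoothing functions are not reproduced.

```python
import torch
import torch.nn as nn

# Two parallel 3x3 conv branches (training-time structure) can be re-parameterized
# into a single 3x3 conv for inference, since convolution is linear in its weights:
# conv_a(x) + conv_b(x) == conv_merged(x) with W = W_a + W_b and b = b_a + b_b.
conv_a = nn.Conv2d(8, 8, 3, padding=1)
conv_b = nn.Conv2d(8, 8, 3, padding=1)

merged = nn.Conv2d(8, 8, 3, padding=1)
with torch.no_grad():
    merged.weight.copy_(conv_a.weight + conv_b.weight)
    merged.bias.copy_(conv_a.bias + conv_b.bias)

x = torch.randn(1, 8, 16, 16)
print(torch.allclose(conv_a(x) + conv_b(x), merged(x), atol=1e-5))  # True
```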
Purpose: The evolution of the socio-cognitive structure of the field of knowledge management (KM) during the period 1986-2015 is described. Design/methodology/approach: Records retrieved from Web of Science were submitted to author co-citation analysis (ACA) from a longitudinal perspective across the following time slices: 1986-1996, 1997-2006, and 2007-2015. The top 10% of most-cited first authors by sub-period were mapped in bibliometric networks in order to interpret the communities formed and their relationships. Findings: KM is a homogeneous field, as indicated by the network results. Nine classical authors are identified, since they are highly co-cited in each sub-period, with Ikujiro Nonaka standing out as the most influential author in the field. The most significant communities in KM are devoted to strategic management, KM foundations, organisational learning and behaviour, and organisational theories. Major trends in the evolution of the intellectual structure of KM evidence a technological influence in 1986-1996, a strategic influence in 1997-2006, and finally a sociological influence in 2007-2015. Research limitations: Describing a field from a single database can introduce biases in output coverage. Likewise, conference proceedings and books were not used, and the analysis was based only on first authors. However, the results obtained can be very useful for understanding the evolution of KM research. Practical implications: These results might be useful for managers and academicians to understand the evolution of the KM field and to (re)define research activities and organisational projects. Originality/value: The novelty of this paper lies in using ACA as a bibliometric technique to study KM research. In addition, our investigation has wider time coverage than earlier articles.
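The mechanics of ACA are simple to state: two first authors are co-cited whenever they appear together in the reference list of the same citing paper, and the resulting counts form the symmetric matrix that is mapped as a network. The sketch below computes such counts from hypothetical reference lists (the author names are illustrative).

```python
from collections import Counter
from itertools import combinations

# Hypothetical reference lists of three citing papers (first authors only,
# as in the study); each pair appearing together is one co-citation.
reference_lists = [
    ["Nonaka", "Grant", "Davenport"],
    ["Nonaka", "Grant"],
    ["Nonaka", "Alavi", "Davenport"],
]

cocitation = Counter()
for refs in reference_lists:
    for a, b in combinations(sorted(set(refs)), 2):
        cocitation[(a, b)] += 1

# The resulting counts form the symmetric co-citation matrix behind the maps.
for pair, count in cocitation.most_common():
    print(pair, count)
```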
Software requirements engineering deals with the elicitation, specification, and validation of software requirements. Furthermore, there is a need to facilitate collaboration among stakeholders and analysts, yet few efforts have been deployed to support them in performing their jobs on a day-to-day basis. To solve this problem we apply knowledge management to software requirements engineering. This paper proposes a knowledge management framework, based on the SECI model of knowledge creation, aimed at exploiting tacit and explicit knowledge related to software requirements within a given software project. The core of the proposed framework is a set of four subsystems ("Socializer", "Externalizer", "Combiner", and "Internalizer") attached to a pair of domain ontologies and a set of knowledge assets. Indeed, we aim to facilitate a semantics-based interpretation of knowledge assets related to software requirements by constraining their interpretation through the application domain and software requirements ontologies. We anticipate that this framework will be very helpful for stakeholders as well as analysts to exchange and manage their knowledge within a given software project. In the case study, through a virtual payroll project using a two-step approach (domain-level requirements plus design-level requirements), we show how the key SRE elicitation techniques are used during the first phase of domain requirements elicitation through the four subsystems of our framework.
Objective: Nurse practitioners (NPs) have drawn significant attention recently and play a major role in healthcare. We aim to find the evaluation of NPs through published studies and then visualize the research status and hotspots in this field. Methods: All data came from the Web of Science Core Collection (WoSCC), and the data were counted and entered into Excel 2016. Key document nodes were excavated by analyzing the knowledge network map using CiteSpace V software. In this study, the nodes "author, country, institution, keyword, co-citation (reference, cited author, cited journal), and grant" were harvested for analysis and comparison. Results: A total of 4912 records pertaining to NPs, published between 2007 and 2018 in a variety of languages, were retrieved from the WoSCC database. The total and annual numbers of publications and citations increased continually over time, with the most publications in 2018 (618 records). This study involved 8241 authors located in 98 countries and 4557 institutions in total. The United States (2737 records) and the University of Michigan (90 records) dominated in publication frequency. In all, 902 journals and 2449 articles with funding support were analyzed. Most articles were from JAMA: The Journal of the American Medical Association (1386, IF=47.661), followed by the Journal of Advanced Nursing (1359, IF=2.267) and The New England Journal of Medicine (1109, IF=79.258). The reference "The Role of Nurse Practitioners in Reinventing Primary Care" was co-cited most frequently, marking it as the landmark article on NPs. Apart from "Nurse practitioner", which has an ultra-high frequency, the top-ranked keyword was "Care", and some of the other high-frequency keywords represent significant directions in NP research. Conclusions: NPs are at the crux of health-care delivery and play an important role in providing high-quality nursing. Publications on NPs in WoSCC have increased notably in recent years and have appeared in some journals with high impact factors. The research frontiers and developmental trends revealed by this analysis can be used to forecast future research developments in NPs and serve as a reference for subsequent researchers in choosing the right directions. However, grant support from administrations and research institutions still needs improvement, and the scope of NP research should be broadened in the future.
BACKGROUND: In the rapidly evolving landscape of psychiatric research, 2023 marked another year of significant progress globally, with the World Journal of Psychiatry (WJP) experiencing notable expansion and influence. AIM: To conduct a comprehensive visualization and analysis of the articles published in the WJP throughout 2023. By delving into these publications, the aim is to identify valuable insights that can illuminate pathways for future research endeavors in the field of psychiatry. METHODS: A selection process led to the inclusion of 107 papers from the WJP published in 2023, forming the dataset for the analysis. Employing advanced visualization techniques, this study mapped the knowledge domains represented in these papers. RESULTS: The findings revealed a prevalent focus on key topics such as depression, mental health, anxiety, schizophrenia, and the impact of coronavirus disease 2019. Additionally, keyword clustering made it evident that these papers predominantly explored mental health disorders, depression, anxiety, schizophrenia, and related factors. Noteworthy contributions hailed from authors in regions such as China, the United Kingdom, the United States, and Turkey. Notably, one paper garnered the highest number of citations, while the American Psychiatric Association was the most cited reference. CONCLUSION: It is recommended that the WJP continue its efforts to enhance the quality of papers published in the field of psychiatry. Additionally, there is a pressing need to explore the potential applications of digital interventions and artificial intelligence within the discipline.
Alzheimer’s disease is the most common cause of dementia. It is an increasingly serious global health problem and has a significant impact on individuals and society. However, the precise cause of Alzheimer’s disease is still unknown. In this study, 11,748 Web-of-Science-indexed manuscripts regarding Alzheimer’s disease, all published from 2015 to 2019, and their 693,938 references were analyzed. A document co-citation network map was drawn using CiteSpace software. Research frontiers and development trends were determined by retrieving subject headings with apparent changing word frequency trends, which can be used to forecast future research developments in Alzheimer’s disease.