The user’s intent to seek online information has been an active area of research in user profiling. User profiling considers user characteristics, behaviors, activities, and preferences to sketch user intentions, interests, and motivations. Determining user characteristics can help capture implicit and explicit preferences and intentions for effective user-centric and customized content presentation. The user’s complete online experience in seeking information is a blend of activities such as searching, verifying, and sharing it on social platforms. However, a combination of multiple behaviors in profiling users has yet to be considered. This research takes a novel approach and explores user intent types based on multidimensional online behavior in information acquisition. It explores information search, verification, and dissemination behavior, and identifies diverse types of users based on their online engagement using machine learning. The research proposes a generic user profile template that explains user characteristics based on internet experience and uses it as ground truth for data annotation. User feedback on online behavior and practices was collected using a survey method; the participants include both males and females from different occupational sectors and age groups. The collected data is subject to feature engineering, and the significant features are presented to unsupervised machine learning methods to identify user intent classes, or profiles, and their characteristics. Different techniques are evaluated, and the K-Means clustering method successfully generates five user groups exhibiting different characteristics, with an average silhouette score of 0.36 and a distortion score of 1136. Feature averages are computed to identify the characteristics of each user intent type. The user intent classes are then further generalized to create a user intent template with an Inter-Rater Reliability of 75%. This research successfully extracts different user types based on their preferences in online content, platforms, criteria, and frequency. The study also validates the proposed template on user feedback data through an Inter-Rater Agreement process using an external human rater.
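The clustering workflow described above can be sketched with scikit-learn. The survey data itself is not public, so the feature matrix below is a synthetic stand-in, and the scores it produces will not match the paper’s reported values (silhouette 0.36, distortion 1136).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

# Synthetic stand-in for the survey feature matrix:
# 200 respondents, 6 engineered behavioral features.
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(40, 6)) for c in range(5)])

# Five clusters, mirroring the five user groups reported in the study.
km = KMeans(n_clusters=5, n_init=10, random_state=42).fit(X)

sil = silhouette_score(X, km.labels_)  # cohesion vs. separation, in [-1, 1]
distortion = km.inertia_               # sum of squared distances to centroids

# Per-cluster feature averages characterize each intent type.
profiles = np.array([X[km.labels_ == k].mean(axis=0) for k in range(5)])
print(sil, distortion, profiles.shape)
```

On the synthetic blobs the silhouette will be far higher than 0.36; the point is the pipeline (cluster, score, then average features per cluster), not the numbers.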
Purpose: Recently, global science has shown an increasing trend toward openness; however, the research-integrity characteristics of open access (OA) publications have rarely been studied. The aim of this study is to compare the characteristics of retracted articles across different OA levels and discover whether OA level influences those characteristics. Design/methodology/approach: The research conducted an analysis of 6,005 publications retracted between 2001 and 2020, drawn from the Web of Science and Retraction Watch databases. These publications were categorized by OA level: Gold OA, Green OA, and non-OA. The study explored retraction rates, time lags, and reasons within these categories. Findings: The findings revealed distinct patterns in retraction rates among the OA levels. Gold OA publications demonstrated the highest retraction rate, followed by Green OA and non-OA. A comparison of retraction reasons between the Gold OA and non-OA categories indicated similar proportions, while Green OA exhibited a higher proportion of falsification and manipulation issues, along with fewer plagiarism and authorship issues. The retraction time lag was shortest for Gold OA, followed by non-OA, and longest for Green OA; the prolonged retraction time for Green OA could be attributed to its atypical distribution of retraction reasons. Research limitations: The study does not explore a wider range of OA levels, such as Hybrid OA and Bronze OA. Practical implications: The outcomes of this study suggest the need for increased attention to research integrity within OA publications. The occurrences of falsification, manipulation, and ethical concerns within Green OA publications warrant attention from the scientific community. Originality/value: This study contributes to the understanding of research integrity in the realm of OA publications, shedding light on retraction patterns and reasons across different OA levels.
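The retraction-rate comparison reduces to simple per-category arithmetic. The counts below are hypothetical, chosen only to illustrate the computation, not the paper’s actual figures.

```python
import pandas as pd

# Hypothetical counts per OA level (illustrative only, not the study's data).
df = pd.DataFrame({
    "oa_level": ["Gold OA", "Green OA", "non-OA"],
    "retracted": [300, 120, 200],               # assumed retraction counts
    "published": [200_000, 150_000, 400_000],   # assumed publication totals
})

# Retraction rate expressed per 10,000 published articles.
df["rate_per_10k"] = df["retracted"] / df["published"] * 10_000
print(df)
```

With these made-up counts, Gold OA comes out highest (15 per 10,000), mirroring the ordering the study reports.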
Fair exchange protocols play a critical role in enabling two mutually distrustful entities to conduct electronic data exchanges in a fair and secure manner. These protocols are widely used in electronic payment systems and electronic contract signing, ensuring the reliability and security of network transactions. To address the limitations of current research methods and enhance the analytical capabilities for fair exchange protocols, this paper proposes a formal model for analyzing such protocols. The proposed model begins with a thorough analysis of fair exchange protocols, followed by a formal definition of fairness that accurately captures their inherent requirements. Building upon event logic, the model incorporates the time factor into predicates and introduces knowledge-set axioms. This enhancement empowers the improved logic to describe the state and knowledge of protocol participants at different points in time, facilitating reasoning about their acquired knowledge. To maximize the intruder’s capabilities, channel errors are translated into intruder behaviors, and participants are categorized into honest and malicious participants, enabling a comprehensive evaluation of the intruder’s potential impact. Using a typical fair exchange protocol as an illustrative example, the paper demonstrates the detailed steps of applying the proposed model to protocol analysis. The entire process of protocol execution under attack scenarios is presented, shedding light on the underlying causes of the attacks and proposing corresponding countermeasures. The developed model enhances the ability to reason about and evaluate the security properties of fair exchange protocols, thereby contributing to the advancement of secure network transactions.
Knowledge graph (KG) serves as a specialized semantic network that encapsulates intricate relationships among real-world entities within a structured framework. This framework facilitates a transformation in information retrieval, transitioning it from mere string matching to far more sophisticated entity matching, invigorating the advancement of artificial intelligence and intelligent information services. Meanwhile, machine learning methods play an important role in KG construction and have already achieved initial success. This article embarks on a comprehensive journey through the latest strides in the field of KG via machine learning. Drawing on cutting-edge machine learning research, it systematically explores KG construction methods across three distinct phases: entity learning, ontology learning, and knowledge reasoning. In particular, a meticulous dissection of machine-learning-driven algorithms is conducted, spotlighting their contributions to critical facets such as entity extraction, relation extraction, entity linking, and link prediction. Moreover, the article analyzes the unresolved challenges and emerging trajectories in the expansive application of machine-learning-fueled, large-scale KG construction.
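As one concrete illustration of the link-prediction facet, the classic TransE scoring function treats a relation as a translation in embedding space. The toy embeddings below are random placeholders rather than trained vectors, so the resulting ranking is only illustrative of the mechanism, not a meaningful prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
# Toy embeddings for entities and one relation. In practice these are
# learned by minimizing a margin-based ranking loss over known triples.
entities = {name: rng.normal(size=dim)
            for name in ["Paris", "France", "Berlin", "Germany"]}
relation = rng.normal(size=dim)  # hypothetical "capital_of" relation

def transe_score(h, r, t):
    """TransE plausibility: lower ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t)

# Rank candidate tails for the query (Paris, capital_of, ?).
scores = {t: transe_score(entities["Paris"], relation, entities[t])
          for t in ["France", "Germany"]}
best = min(scores, key=scores.get)
print(scores, best)
```

Link prediction then amounts to scoring every candidate tail entity and keeping the lowest-distance one.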
Purpose: The notable increase in retracted papers has attracted considerable attention from diverse stakeholders. Various sources now offer information related to research integrity, including concerns voiced on social media, disclosed lists of paper mills, and retraction notices accessible through journal websites. However, despite the availability of such resources, there remains no unified platform to consolidate this information, hindering efficient searching and cross-referencing. It is therefore imperative to develop a comprehensive platform for retracted papers and related concerns. This article introduces “Amend,” a platform designed to integrate information on research integrity from diverse sources. Design/methodology/approach: The Amend platform consolidates concerns and lists of problematic articles sourced from social media platforms (e.g., PubPeer, For Better Science), retraction notices from journal websites, and citation databases (e.g., Web of Science, CrossRef). Moreover, Amend includes investigation and punishment announcements released by administrative agencies (e.g., NSFC, MOE, MOST, CAS). Each related paper is marked and can be traced back to its information source via a provided link. Furthermore, the Amend database incorporates various attributes of retracted articles, including citation topics, funding details, open access status, and more. The reasons for retraction are identified and classified as either academic misconduct or honest error, with detailed subcategories provided for further clarity. Findings: Within the Amend platform, a total of 32,515 retracted papers indexed in SCI, SSCI, and ESCI between 1980 and 2023 were identified. Of these, 26,620 (81.87%) were associated with academic misconduct. The retraction rate stands at 6.64 per 10,000 articles. Notably, the retraction rate for non-gold open access articles differs significantly from that for gold open access articles, with the disparity progressively widening over the years. Furthermore, the reasons for retraction have shifted from traditional individual behaviors such as falsification, fabrication, plagiarism, and duplication to more organized, large-scale fraudulent practices, including paper mills, fake peer review, and Artificial Intelligence Generated Content (AIGC). Research limitations: The Amend platform may not fully capture all retracted and concerning papers, which affects its comprehensiveness. Additionally, inaccuracies in retraction notices may lead to errors in the tagged reasons. Practical implications: Amend provides an integrated platform for stakeholders to enhance the monitoring, analysis, and study of academic misconduct. Ultimately, the Amend database can contribute to upholding scientific integrity. Originality/value: This study introduces a globally integrated platform for retracted and concerning papers, along with a preliminary analysis of evolutionary trends in retracted papers.
The α-universal triple I (α-UTI) method is a recognized scheme in the field of fuzzy reasoning, proposed previously by our research group. The robustness of fuzzy reasoning largely determines the quality of reasoning algorithms; it is quantified by calculating the disparity between the output of fuzzy reasoning with interference and the output without interference. Therefore, in this study, the interval robustness (embodied as interval stability) of the α-UTI method is explored in the interval-valued fuzzy environment. To begin with, the stability of the α-UTI method is explored for the case of an individual rule, and the upper and lower bounds of its results are estimated using four kinds of unified interval implications (the R-interval implication, the S-interval implication, the QL-interval implication, and the interval t-norm implication). The analysis shows that the α-UTI method exhibits good interval stability for an individual rule. Moreover, the stability of the α-UTI method is revealed for the case of multiple rules, and the upper and lower bounds of its outcomes are estimated. The results show that the α-UTI method is stable for multiple rules when each of the four kinds of unified interval implications is used. Lastly, the α-UTI reasoning chain method is presented, which contains a chain structure with multiple layers. The corresponding solutions and their interval perturbations are investigated, and the α-UTI reasoning chain method is found to be stable in the case of chain reasoning. Two application examples in affective computing are given to verify the stability of the α-UTI method. In summary, through theoretical proof and example verification, the α-UTI method is found to have good interval robustness with four kinds of unified interval implications, covering the situations of an individual rule, multiple rules, and reasoning chains.
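The flavor of the interval-stability question can be illustrated with one concrete R-implication (Łukasiewicz). This is not the paper’s α-UTI derivation, only a minimal example of how an interval-valued input perturbation propagates through an implication’s corner bounds.

```python
def luk_imp(a, b):
    """Łukasiewicz R-implication: I(a, b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def interval_imp(a_lo, a_hi, b_lo, b_hi):
    """Interval extension of I. Since I is decreasing in its first argument
    and increasing in its second, the output bounds are attained at corners."""
    return luk_imp(a_hi, b_lo), luk_imp(a_lo, b_hi)

# Perturb crisp inputs a = 0.6, b = 0.3 by +/- 0.02 each.
lo, hi = interval_imp(0.58, 0.62, 0.28, 0.32)
print(lo, hi)  # output interval around I(0.6, 0.3) = 0.7
```

Here the output interval width (0.08) equals the sum of the input widths, i.e., small input perturbations yield comparably small output intervals, which is the kind of behavior the paper establishes rigorously for the α-UTI method.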
Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. First, using the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed for auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is used to predict events in the next step. On multiple real-world datasets, including Freebase13 (FB13), Freebase15K (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and NELL-995, multiple evaluation metrics show that the proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, validating its effectiveness and robustness.
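A minimal sketch of the independent recurrent unit (IndRNN) used for the auto-regressive modeling: each hidden dimension carries its own scalar recurrent weight, so units evolve independently over time. The dimensions and weights below are arbitrary placeholders, not the framework’s actual configuration.

```python
import numpy as np

def indrnn_step(x_t, h_prev, W, u, b):
    """One IndRNN step: h_t = ReLU(W x_t + u * h_{t-1} + b), where u is an
    elementwise (per-unit) recurrent weight rather than a full matrix."""
    return np.maximum(0.0, W @ x_t + u * h_prev + b)

rng = np.random.default_rng(1)
d_in, d_hid, T = 4, 6, 5                     # placeholder sizes
W = rng.normal(scale=0.3, size=(d_hid, d_in))
u = rng.uniform(-0.9, 0.9, size=d_hid)       # per-unit recurrent weights
b = np.zeros(d_hid)

# Roll the cell over a short sequence of (synthetic) event feature vectors.
h = np.zeros(d_hid)
for t in range(T):
    h = indrnn_step(rng.normal(size=d_in), h, W, u, b)
print(h.shape)
```

Keeping the recurrent weight diagonal (elementwise) is what lets the units be trained over long sequences without the gradient coupling of a full recurrent matrix.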
In Chinese medicine, practitioners assess patients’ complaints, analyze their underlying problems, identify causes, and arrive at a diagnosis, which then directs treatment. What is not obvious, and not recorded in a consultation, is the clinical reasoning process that practitioners use. This research filmed three practitioners in the UK while they conducted a consultation and treatment with new patients. The practitioners and researchers viewed the films and used them as aide-memoires while discussing the reasoning process throughout. To determine the pattern, practitioners used the four examinations to gather information from the patient in an iterative process; their aesthetic reasoning was highly developed. Through triangulation, they checked the information they received against a detailed understanding of the qi-dynamic. They used highly analytical strategies of forward (inductive) and backward (deductive) reasoning against prototypes of the signs and symptoms that indicate a specific Zheng. This was achieved through an abductive process that linked description with explanation and causal factors with pathological mechanisms. The feedback loop with the patient continued through the consultation and into the treatment. A process of translation and interpretation was needed to turn the patient’s story into the practitioner’s story of qi-dynamics, which then directed the treatment. Awareness of our clinical reasoning process will mitigate biases, improve our diagnoses and treatment choices, and support the training of students.
The disposal of contaminated water from Japan’s Fukushima nuclear power plant is a significant international nuclear safety issue with considerable cross-border implications. This matter requires compliance not only with the law of the sea but also with the principles of nuclear safety under international law. These principles serve as the overarching tenet of international and Chinese domestic nuclear law, applicable to nuclear facilities and activities. The principle of safety in nuclear activities is fully recognized in international and domestic law, carrying broad legal binding force. Japan’s discharge of nuclear-contaminated water into the sea violates its obligations under this principle, including the commitments to optimum protection, to keeping exposure as low as reasonably practicable, and to prevention. The Japanese government and the International Atomic Energy Agency (IAEA) have breached the obligation of optimum protection by restricting the scope of assessments, substituting core concepts, and shielding dissenting views. In the absence of clear radiation standards, they have acted unilaterally without fulfilling the as-low-as-reasonably-practicable obligation. The discharge of Fukushima nuclear-contaminated water poses an imminent and unpredictable risk to all countries worldwide, including Japanese residents. Japan and the IAEA should fulfill their obligations under international law regarding disposal, adhering to the principles of nuclear safety, including optimum protection, the as-low-as-reasonably-practicable obligation, and prevention through multilateral cooperation. Specifically, the obligation to provide optimum protection should be implemented by re-evaluating the most reliable disposal technologies and methods currently available and comprehensively assessing the various options. The as-low-as-reasonably-practicable standard requires that minimizing negative impacts on human health, livelihoods, and the environment not be subordinated to considerations of cutting costs and expenses. Multilateral cooperation should be promoted through the establishment of sound multilateral long-term monitoring mechanisms for the discharge of nuclear-contaminated water, notification and consultation obligations, and periodic assessments; such obligations under international law were fulfilled after the accidents at the Three Mile Island and Chernobyl nuclear power plants. The implications of the principles of nuclear safety align with the concept of building a community of shared future for nuclear safety advocated by China. In cases where violations of international law regarding the disposal of nuclear-contaminated water jeopardize this concept, China can also rely on its own strength to promote the implementation of due obligations through self-help.
ChatGPT, one of the leading Large Language Models (LLMs), has acquired linguistic capabilities such as text comprehension and logical reasoning, enabling it to engage in natural conversations with humans.
With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new method, beyond production experience and metallurgical principles, for dealing with large amounts of data, and its application in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning to steelmaking process modeling, covering hot metal pretreatment, primary steelmaking, secondary refining, and other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, support vector machine, and case-based reasoning, accounting for 56%, 14%, and 10% of applications, respectively. Data collected in steelmaking plants are frequently faulty; thus, data processing, especially data cleaning, is crucial to the performance of machine learning models. The detection of variable importance can be used to optimize process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. Prediction of endpoint element compositions and process parameters is widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnace, Ruhrstahl–Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in constructing data platforms, industrially transferring research achievements to practical steelmaking, and improving the universality of machine learning models.
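Since the artificial neural network is the most used algorithm in this survey (56%), an endpoint-prediction model of the kind discussed can be sketched as a small regressor. The process features and target below are synthetic stand-ins, not real plant data, and the feature names in the comment are only hypothetical examples.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for process data: blow parameters -> endpoint composition.
rng = np.random.default_rng(7)
X = rng.uniform(size=(300, 5))   # e.g. oxygen volume, lance height, ... (assumed)
y = 0.04 + 0.02 * X[:, 0] - 0.01 * X[:, 1] + rng.normal(scale=0.002, size=300)

# Feature scaling matters for neural-network training.
Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(Xs[:250], y[:250])
r2 = model.score(Xs[250:], y[250:])  # R^2 on a held-out slice
print(round(r2, 3))
```

In practice the data-cleaning step stressed above (outlier removal, handling faulty sensor readings) would precede any such fit.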
Recently, pre-trained language representation models such as bidirectional encoder representations from transformers (BERT) have performed well in commonsense question answering (CSQA). However, these models do not directly use explicit information from external knowledge sources. To augment this, additional methods such as the knowledge-aware graph network (KagNet) and the multi-hop graph relation network (MHGRN) have been proposed. In this study, we propose using the latest pre-trained language model, a lite bidirectional encoder representations from transformers (ALBERT), with a knowledge graph information extraction technique. We also propose applying a novel method, schema graph expansion, to recent language models. We then analyze the effect of applying knowledge-graph-based knowledge extraction techniques to recent pre-trained language models and confirm that schema graph expansion is effective to some extent. Furthermore, we show that our proposed model achieves better performance than the existing KagNet and MHGRN models on the CommonsenseQA dataset.
Cross-document relation extraction (RE), as an extension of information extraction, requires integrating information from multiple documents retrieved from open domains containing a large number of irrelevant or confusing noisy texts. Previous studies focus on the attention mechanism to construct connections between different text features through semantic similarity. However, similarity-based methods cannot distinguish valid information well among highly similar retrieved documents. How to design an effective algorithm that implements aggregated reasoning over confusing information with similar features remains an open issue. To address this problem, we design a novel local-to-global causal reasoning (LGCR) network for cross-document RE, which enables efficient distinguishing, filtering, and global reasoning over complex information from a causal perspective. Specifically, we propose a local causal estimation algorithm to estimate the causal effect, which is the first attempt to use causal reasoning, independent of feature similarity, to distinguish between confusing and valid information in cross-document RE. Furthermore, based on the causal effect, we propose a causality-guided global reasoning algorithm to filter out confusing information and achieve global reasoning. Experimental results under both the closed and open settings of the large-scale CodRED dataset demonstrate that our LGCR network significantly outperforms state-of-the-art methods and validate the effectiveness of causal reasoning in processing confusing information.
Soft computing (SC) refers to the ability of a digital computer or robot to perform functions that are normally associated with intelligent individuals, such as reasoning and problem-solving. An example would be a project aimed at creating systems capable of reasoning, discovering meaning, generalizing, or learning from past experience. Science and engineering problems that are both non-linear and complex can be solved using these methodologies, and it has been proven that these algorithms can solve numerous real-world problems. The techniques outlined can be used to increase the accuracy of existing models or equations, or to propose a new model that addresses the problem.
Objective: The objective of this narrative review was to search the existing literature for studies reporting measures to minimize radiation use during endoscopic management of stone disease and to present ways of reducing the exposure of both patients and operating room staff. Methods: A literature review in PubMed was performed to identify studies describing protocols or measures to reduce radiation received during endourological procedures from January 1970 to August 2022. Eligible studies were those reporting outcomes for ureteroscopy or percutaneous nephrolithotripsy regarding measures to minimize intraoperative radiation doses, performed either in real-life theatres or using phantoms. Both comparative and non-comparative studies were deemed eligible. Results: Protection can be achieved initially at the level of diagnosis and follow-up, which should follow an algorithm and favor more conservative imaging methods. Protocols following the principles of minimized fluoroscopy use should be implemented, and urologists as well as operating room staff should be continuously trained regarding radiation damage and protection measures. Wearing protective lead equipment remains a cornerstone of personnel protection, while configuring the operating room and adjusting X-ray machine settings can also significantly reduce radiation energy. Conclusion: There are specific measures that can be implemented to reduce radiation exposure. These include avoiding excessive use of computed tomography scans and X-rays during the diagnosis and follow-up of urolithiasis patients. Intraoperative protocols with minimal fluoroscopy use can be employed. Staff training regarding the dangers of radiation also plays a major role. Use and maintenance of protective equipment and proper setup of the operating room serve the same goal. Finally, machine settings can be customized appropriately, and continuous monitoring of exposure with dosimeters can be adopted.
Funding: National Social Science Foundation of China (No. 22CTQ032).
Funding: the National Natural Science Foundation of China (Nos. 61562026, 61962020); Academic and Technical Leaders of Major Disciplines in Jiangxi Province (No. 20172BCB22015); Special Fund Project for Postgraduate Innovation in Jiangxi Province (No. YC2020-B1141); Jiangxi Provincial Natural Science Foundation (No. 20224ACB202006).
Abstract: Fair exchange protocols play a critical role in enabling two distrustful entities to conduct electronic data exchanges in a fair and secure manner. These protocols are widely used in electronic payment systems and electronic contract signing, ensuring the reliability and security of network transactions. In order to address the limitations of current research methods and enhance the analytical capabilities for fair exchange protocols, this paper proposes a formal model for analyzing such protocols. The proposed model begins with a thorough analysis of fair exchange protocols, followed by a formal definition of fairness. This definition accurately captures the inherent requirements of fair exchange protocols. Building upon event logic, the model incorporates the time factor into predicates and introduces knowledge set axioms. This enhancement empowers the improved logic to describe the state and knowledge of protocol participants at different time points, facilitating reasoning about their acquired knowledge. To maximize the intruder's capabilities, channel errors are translated into behaviors of the intruder. The participants are further categorized into honest and malicious participants, enabling a comprehensive evaluation of the intruder's potential impact. Using a typical fair exchange protocol as an illustrative example, this paper demonstrates the detailed steps of applying the proposed model to protocol analysis. The entire process of protocol execution under attack scenarios is presented, shedding light on the underlying reasons for the attacks and proposing corresponding countermeasures. The developed model enhances the ability to reason about and evaluate the security properties of fair exchange protocols, thereby contributing to the advancement of secure network transactions.
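The abstract's core idea of knowledge sets that grow monotonically over time points can be illustrated with a minimal sketch (the three-step message trace and all item names below are hypothetical, not the paper's actual protocol or logic):

```python
# Minimal sketch: each participant's knowledge set grows monotonically as
# messages are observed at discrete time points (hypothetical protocol).

def run_exchange(events):
    """events: list of (time, receiver, item) tuples, assumed time-ordered.
    Returns each participant's knowledge set after the final event."""
    knowledge = {}
    for t, receiver, item in events:
        # Knowledge-set axiom (informal): once learned, an item is never lost.
        knowledge.setdefault(receiver, set()).add(item)
    return knowledge

# A toy exchange between Alice and Bob: commitments first, then items.
trace = [
    (1, "Bob",   "commit_A"),   # Alice commits her item
    (2, "Alice", "commit_B"),   # Bob commits his item
    (3, "Bob",   "item_A"),     # exchange completes
    (3, "Alice", "item_B"),
]
final = run_exchange(trace)
# Fairness check (informal): both parties end up with the other's item.
fair = "item_B" in final["Alice"] and "item_A" in final["Bob"]
```

A fairness violation would show up here as a trace in which only one of the two membership tests holds at the final time point.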
Funding: supported in part by the Beijing Natural Science Foundation under Grants L211020 and M21032; in part by the National Natural Science Foundation of China under Grants U1836106 and 62271045; in part by the Scientific and Technological Innovation Foundation of Foshan under Grants BK21BF001 and BK20BF010.
Abstract: A knowledge graph (KG) serves as a specialized semantic network that encapsulates intricate relationships among real-world entities within a structured framework. This framework facilitates a transformation in information retrieval, transitioning it from mere string matching to far more sophisticated entity matching. In this transformative process, the advancement of artificial intelligence and intelligent information services is invigorated. Meanwhile, machine learning methods play an important role in the construction of KGs, and these techniques have already achieved initial success. This article embarks on a comprehensive journey through the latest strides in the field of KG construction via machine learning. With a profound amalgamation of cutting-edge research in machine learning, this article undertakes a systematic exploration of KG construction methods in three distinct phases: entity learning, ontology learning, and knowledge reasoning. In particular, a meticulous dissection of machine learning-driven algorithms is conducted, spotlighting their contributions to critical facets such as entity extraction, relation extraction, entity linking, and link prediction. Moreover, this article provides an analysis of the unresolved challenges and emerging trajectories within the expansive application of machine learning-fueled, large-scale KG construction.
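As one concrete instance of the link-prediction task mentioned above, the classic TransE scoring idea can be sketched: a triple (head, relation, tail) is plausible when the vector head + relation lands close to tail. The 2-D embeddings below are toy hand-set values for illustration, not trained ones:

```python
# TransE-style link-prediction sketch: score a triple (h, r, t) by how
# close h + r is to t. Embeddings are toy 2-D hand-set vectors.

def transe_score(h, r, t):
    """Negative L1 distance between h + r and t; higher = more plausible."""
    return -sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

emb = {  # hypothetical entity/relation embeddings
    "Paris":      (1.0, 0.0),
    "Berlin":     (3.0, 0.0),
    "France":     (1.0, 1.0),
    "capital_of": (0.0, 1.0),
}

good = transe_score(emb["Paris"], emb["capital_of"], emb["France"])
bad = transe_score(emb["Berlin"], emb["capital_of"], emb["France"])
# The true triple (Paris, capital_of, France) scores higher than the
# corrupted one (Berlin, capital_of, France).
```

In practice the embeddings would be learned by minimizing a margin loss over true and corrupted triples; the ranking step shown here is what link prediction then uses.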
Funding: NSFC (No. 71974017); LIS Outstanding Talents Introducing Program, Bureau of Development and Planning of CAS (2022).
Abstract: Purpose: The notable increase in retracted papers has attracted considerable attention from diverse stakeholders. Various sources now offer information related to research integrity, including concerns voiced on social media, disclosed lists of paper mills, and retraction notices accessible through journal websites. However, despite the availability of such resources, there remains a lack of a unified platform to consolidate this information, hindering efficient searching and cross-referencing. Thus, it is imperative to develop a comprehensive platform for retracted papers and related concerns. This article introduces "Amend," a platform designed to integrate information on research integrity from diverse sources. Design/methodology/approach: The Amend platform consolidates concerns and lists of problematic articles sourced from social media platforms (e.g., PubPeer, For Better Science), retraction notices from journal websites, and citation databases (e.g., Web of Science, CrossRef). Moreover, Amend includes investigation and punishment announcements released by administrative agencies (e.g., NSFC, MOE, MOST, CAS). Each related paper is marked and can be traced back to its information source via a provided link. Furthermore, the Amend database incorporates various attributes of retracted articles, including citation topics, funding details, open access status, and more. The reasons for retraction are identified and classified as either academic misconduct or honest error, with detailed subcategories provided for further clarity. Findings: Within the Amend platform, a total of 32,515 retracted papers indexed in SCI, SSCI, and ESCI between 1980 and 2023 were identified. Of these, 26,620 (81.87%) were associated with academic misconduct. The retraction rate stands at 6.64 per 10,000 articles. Notably, the retraction rate for non-gold open access articles differs significantly from that for gold open access articles, with the disparity progressively widening over the years. Furthermore, the reasons for retractions have shifted from traditional individual behaviors such as falsification, fabrication, plagiarism, and duplication to more organized, large-scale fraudulent practices, including paper mills, fake peer review, and Artificial Intelligence Generated Content (AIGC). Research limitations: The Amend platform may not fully capture all retracted and concerning papers, impacting its comprehensiveness. Additionally, inaccuracies in retraction notices may lead to errors in tagged reasons. Practical implications: Amend provides an integrated platform for stakeholders to enhance monitoring, analysis, and research on academic misconduct. Ultimately, the Amend database can contribute to upholding scientific integrity. Originality/value: This study introduces a globally integrated platform for retracted and concerning papers, along with a preliminary analysis of the evolutionary trends in retracted papers.
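The "per 10,000 articles" rate reported above is a simple ratio; a minimal sketch (the counts below are invented for illustration and are not the Amend figures):

```python
def retraction_rate_per_10k(retracted, total_published):
    """Retractions per 10,000 published articles."""
    return retracted / total_published * 10_000

# Hypothetical counts: 332 retractions among 500,000 indexed articles.
rate = retraction_rate_per_10k(332, 500_000)  # -> 6.64
```

Computing this rate separately for each OA category (gold vs. non-gold) and per year is what exposes the widening disparity the abstract describes.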
Funding: the National Natural Science Foundation of China under Grants 62176083, 62176084, 61877016, and 61976078; the Key Research and Development Program of Anhui Province under Grant 202004d07020004; the Natural Science Foundation of Anhui Province under Grant 2108085MF203.
Abstract: The α-universal triple I (α-UTI) method is a recognized scheme in the field of fuzzy reasoning, which was proposed by our research group previously. The robustness of fuzzy reasoning determines the quality of reasoning algorithms to a large extent; it is quantified by calculating the disparity between the output of fuzzy reasoning with interference and the output without interference. Therefore, in this study, the interval robustness (embodied as interval stability) of the α-UTI method is explored in the interval-valued fuzzy environment. To begin with, the stability of the α-UTI method is explored for the case of an individual rule, and the upper and lower bounds of its results are estimated, using four kinds of unified interval implications (the R-interval implication, the S-interval implication, the QL-interval implication, and the interval t-norm implication). Through analysis, it is found that the α-UTI method exhibits good interval stability for an individual rule. Moreover, the stability of the α-UTI method is revealed in the case of multiple rules, and the upper and lower bounds of its outcomes are estimated. The results show that the α-UTI method is stable for multiple rules when each of the four kinds of unified interval implications is used. Lastly, the α-UTI reasoning chain method is presented, which contains a chain structure with multiple layers. The corresponding solutions and their interval perturbations are investigated, and the α-UTI reasoning chain method is found to be stable in the case of chain reasoning. Two application examples in affective computing are given to verify the stability of the α-UTI method. In summary, through theoretical proof and example verification, the α-UTI method is shown to have good interval robustness with four kinds of unified interval implications in the situations of an individual rule, multiple rules, and a reasoning chain.
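The interval bounds discussed above can be illustrated for one concrete R-implication, the Łukasiewicz implication I(a, b) = min(1, 1 − a + b). This is a generic sketch of interval extension only, not the paper's full α-UTI construction: since I is decreasing in its first argument and increasing in its second, the tightest result interval for I([a1, a2], [b1, b2]) is [I(a2, b1), I(a1, b2)]:

```python
def luk_imp(a, b):
    """Lukasiewicz R-implication on [0, 1]: I(a, b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def interval_imp(a, b):
    """Interval extension of luk_imp for a = (a1, a2), b = (b1, b2).
    luk_imp is decreasing in the antecedent and increasing in the
    consequent, so the lower bound pairs (a2, b1) and the upper (a1, b2)."""
    (a1, a2), (b1, b2) = a, b
    return (luk_imp(a2, b1), luk_imp(a1, b2))

lo, hi = interval_imp((0.6, 0.8), (0.3, 0.5))
# lo ~ 0.5 and hi ~ 0.9: a small interval perturbation of the inputs
# yields a correspondingly bounded interval of outputs.
```

Interval stability in the abstract's sense amounts to showing that the width hi − lo stays controlled by the widths of the input intervals.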
Funding: the National Natural Science Foundation of China (62062062), hosted by Gulila Altenbek.
Abstract: Due to the structural dependencies among concurrent events in the knowledge graph and the substantial amount of sequential correlation information carried by temporally adjacent events, we propose an Independent Recurrent Temporal Graph Convolution Networks (IndRT-GCNets) framework to efficiently and accurately capture event attribute information. The framework models knowledge graph sequences to learn the evolutionary representations of entities and relations within each period. Firstly, by utilizing the temporal graph convolution module in the evolutionary representation unit, the framework captures the structural dependency relationships within the knowledge graph in each period. Meanwhile, to achieve better event representation and establish effective correlations, an independent recurrent neural network is employed to implement auto-regressive modeling. Furthermore, static attributes of entities in the entity-relation events are constrained and merged using a static graph constraint to obtain optimal entity representations. Finally, the evolution of entity and relation representations is utilized to predict events in the next step. On multiple real-world datasets, such as Freebase13 (FB13), Freebase15k (FB15K), WordNet11 (WN11), WordNet18 (WN18), FB15K-237, WN18RR, YAGO3-10, and NELL-995, the results on multiple evaluation indicators show that our proposed IndRT-GCNets framework outperforms most existing models on knowledge reasoning tasks, which validates its effectiveness and robustness.
Funding: This research was self-funded as part of an Education Doctorate at the Institute of Education, University College London.
Abstract: In Chinese medicine, practitioners assess patients' complaints, analyze their underlying problems, identify causes, and come to a diagnosis, which then directs treatment. What is not obvious, and not recorded in a consultation, is the clinical reasoning process that practitioners use. The research filmed three practitioners in the UK while they conducted a consultation and treatment on new patients. The practitioners and researchers viewed the films and used them as aide-memoires while discussing the reasoning process throughout. To determine the pattern, practitioners used the four examinations to gather information from the patient in an iterative process; their aesthetic reasoning was highly developed. Through triangulation they checked the information they received against a detailed understanding of the qi-dynamic. They used highly analytical strategies of forward (inductive) and backward (deductive) reasoning against the prototypes of the signs and symptoms that indicate a specific Zheng. This was achieved through an abductive process that linked description with explanation and causal factors with pathological mechanisms. The feedback loop with the patient continued through the consultation and into the treatment. A process of translation and interpretation was needed to turn the patient's story into the practitioner's story of qi-dynamics, which then directed the treatment. Awareness of our clinical reasoning process will mitigate biases, improve our diagnoses and treatment choices, and support the training of students.
Funding: funded by the Research on National Greenhouse Gas Emission Reduction Obligations under the Carbon Peak and Carbon Neutral Commitment, General Program of Humanities and Social Sciences, Ministry of Education of China [Grant No. 21YJA820010].
Abstract: The disposal of contaminated water from Japan's Fukushima nuclear power plant is a significant international nuclear safety issue with considerable cross-border implications. This matter requires compliance not only with the law of the sea but also with the principles of nuclear safety under international law. These principles serve as the overarching tenet of international and Chinese domestic nuclear law, applicable to nuclear facilities and activities. The principle of safety in nuclear activities is fully recognized in international and domestic law, carrying broad legal binding force. Japan's discharge of nuclear-contaminated water into the sea violates its obligations under the principle of safety in nuclear activities, including its commitments to optimum protection, to keeping risks as low as reasonably practicable, and to prevention. The Japanese government and the International Atomic Energy Agency (IAEA) have breached the obligation of optimum protection by restricting the scope of assessments, substituting core concepts, and shielding dissenting views. In the absence of clear radiation standards, they have acted unilaterally without fulfilling the as-low-as-reasonably-practicable obligation. The discharge of Fukushima nuclear-contaminated water poses an imminent and unpredictable risk to all countries worldwide, including Japanese residents. Japan and the IAEA should fulfill their obligations under international law regarding disposal, adhering to the principles of nuclear safety, including optimum protection, the as-low-as-reasonably-practicable obligation, and prevention through multilateral cooperation. Specifically, the obligation to provide optimum protection should be implemented by re-evaluating the most reliable disposal technologies and methods currently available and comprehensively assessing various options. The as-low-as-reasonably-practicable standard requires that the minimization of negative impacts on human health, livelihoods, and the environment not be subordinated to considerations of cutting costs and expenses. Multilateral cooperation should be promoted through the establishment of sound multilateral long-term monitoring mechanisms for the discharge of nuclear-contaminated water, notification and consultation obligations, and periodic assessments. Such obligations under international law were fulfilled after the accidents at the Three Mile Island and Chernobyl nuclear power plants. The implications of the principles of nuclear safety align with the concept of building a community of shared future for nuclear safety advocated by China. In cases where violations of international law regarding the disposal of nuclear-contaminated water jeopardize the concept of a community of shared future for nuclear safety, China can also rely on its own strength to promote the implementation of due obligations through self-help.
Funding: supported in part by the Skywork Intelligence Culture and Technology LTD; the Science and Technology Development Fund, Macao Special Administrative Region (SAR) (0050/2020/A1); the National Natural Science Foundation of China (61533019).
Abstract: ChatGPT, one of the leading Large Language Models (LLMs), has acquired linguistic capabilities such as text comprehension and logical reasoning, enabling it to engage in natural conversations with humans.
Funding: supported by the National Natural Science Foundation of China (No. U1960202).
Abstract: With the development of automation and informatization in the steelmaking industry, the human brain gradually fails to cope with the increasing amount of data generated during the steelmaking process. Machine learning technology provides a new method, beyond production experience and metallurgical principles, for dealing with large amounts of data. The application of machine learning in the steelmaking process has become a research hotspot in recent years. This paper provides an overview of the applications of machine learning to steelmaking process modeling, covering hot metal pretreatment, primary steelmaking, secondary refining, and some other aspects. The three most frequently used machine learning algorithms in steelmaking process modeling are the artificial neural network, support vector machine, and case-based reasoning, accounting for 56%, 14%, and 10% of studies, respectively. Data collected in steelmaking plants are frequently faulty; thus, data processing, especially data cleaning, is crucially important to the performance of machine learning models. The detection of variable importance can be used to optimize process parameters and guide production. Machine learning is used in hot metal pretreatment modeling mainly for endpoint S content prediction. The prediction of endpoint element compositions and process parameters is widely investigated in primary steelmaking. Machine learning is used in secondary refining modeling mainly for ladle furnace, Ruhrstahl-Heraeus, vacuum degassing, argon oxygen decarburization, and vacuum oxygen decarburization processes. Further development of machine learning in steelmaking process modeling can be realized through additional efforts in the construction of data platforms, the industrial transfer of research achievements to practical steelmaking processes, and the improvement of the universality of machine learning models.
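Of the three algorithm families the survey counts, case-based reasoning is the simplest to sketch: predict the endpoint sulfur content of a new heat as the average over its most similar past heats. The features and all numbers below are invented for illustration, not plant data:

```python
import math

def predict_endpoint_s(cases, query, k=2):
    """Case-based reasoning sketch: retrieve the k past heats whose process
    features are closest (Euclidean distance) to the query heat and average
    their endpoint S content. `cases` is a list of (features, endpoint_s)."""
    ranked = sorted(cases, key=lambda c: math.dist(c[0], query))
    nearest = ranked[:k]
    return sum(s for _, s in nearest) / k

# Hypothetical past heats:
# (hot-metal temperature in degC, Mg injected in kg/t) -> endpoint S in wt%
history = [
    ((1350.0, 0.50), 0.004),
    ((1340.0, 0.55), 0.003),
    ((1300.0, 0.20), 0.012),
]
# The query heat resembles the first two cases, so their S values are averaged.
pred = predict_endpoint_s(history, (1345.0, 0.52), k=2)
```

In a real deployment the features would be cleaned and scaled first, which is exactly the data-processing step the survey flags as critical.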
Funding: supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. 2020R1G1A1100493).
Abstract: Recently, pre-trained language representation models such as bidirectional encoder representations from transformers (BERT) have been performing well in commonsense question answering (CSQA). However, these models do not directly use the explicit information of external knowledge sources. To augment this, additional methods such as the knowledge-aware graph network (KagNet) and the multi-hop graph relation network (MHGRN) have been proposed. In this study, we propose to use the latest pre-trained language model, a lite bidirectional encoder representations from transformers (ALBERT), with a knowledge graph information extraction technique. We also propose applying a novel method, schema graph expansion, to recent language models. We then analyze the effect of applying knowledge graph-based knowledge extraction techniques to recent pre-trained language models and confirm that schema graph expansion is effective to some extent. Furthermore, we show that our proposed model can achieve better performance than the existing KagNet and MHGRN models on the CommonsenseQA dataset.
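The core step behind schema graph expansion, growing the subgraph hop by hop around the concepts mentioned in the question and answer, can be sketched as follows (the toy concept graph and hop policy below are illustrative assumptions, not the authors' implementation):

```python
def expand_schema_graph(graph, seed_concepts, hops=1):
    """Expansion sketch: starting from the concepts found in the
    question/answer, add every directly connected neighbor, repeated
    `hops` times. `graph` maps a concept to its adjacent concepts."""
    nodes = set(seed_concepts)
    for _ in range(hops):
        frontier = set()
        for node in nodes:
            frontier.update(graph.get(node, ()))
        nodes |= frontier
    return nodes

# Toy ConceptNet-style graph (hypothetical edges).
graph = {
    "bird": ["fly", "wing"],
    "fly": ["air"],
    "wing": ["feather"],
}
one_hop = expand_schema_graph(graph, {"bird"}, hops=1)   # adds fly, wing
two_hop = expand_schema_graph(graph, {"bird"}, hops=2)   # also adds air, feather
```

The expanded node set is then what a graph reasoning module (KagNet- or MHGRN-style) would encode alongside the language model's representation.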
Funding: supported in part by the National Key Research and Development Program of China (2022ZD0116405); the Strategic Priority Research Program of the Chinese Academy of Sciences (XDA27030300); the Key Research Program of the Chinese Academy of Sciences (ZDBS-SSW-JSC006).
Abstract: Cross-document relation extraction (RE), as an extension of information extraction, requires integrating information from multiple documents retrieved from open domains, which contain a large number of irrelevant or confusing noisy texts. Previous studies focus on the attention mechanism to construct connections between different text features through semantic similarity. However, similarity-based methods cannot distinguish valid information well within highly similar retrieved documents. How to design an effective algorithm that implements aggregated reasoning over confusing information with similar features remains an open issue. To address this problem, we design a novel local-to-global causal reasoning (LGCR) network for cross-document RE, which enables efficient distinguishing, filtering, and global reasoning over complex information from a causal perspective. Specifically, we propose a local causal estimation algorithm to estimate the causal effect, which is the first attempt to use causal reasoning, independent of feature similarity, to distinguish between confusing and valid information in cross-document RE. Furthermore, based on the causal effect, we propose a causality-guided global reasoning algorithm to filter the confusing information and achieve global reasoning. Experimental results under the closed and open settings of the large-scale dataset CodRED demonstrate that our LGCR network significantly outperforms the state-of-the-art methods and validate the effectiveness of causal reasoning in processing confusing information.
Abstract: Soft computing (SC) refers to the ability of a digital computer or robot to perform functions normally associated with intelligent individuals, such as reasoning and problem-solving. An example would be a project aimed at creating systems capable of reasoning, discovering meaning, generalising, or learning from past experience. Science and engineering problems that are both non-linear and complex can be solved using these methodologies, and it has been proven that such algorithms can solve numerous real-world problems. The techniques outlined can be used to increase the accuracy of existing models and equations, or to propose a new model that addresses the problem.
Abstract: Objective: The objective of this narrative review was to search the existing literature for studies reporting measures to minimize radiation use during the endoscopic management of stone disease and to present ways of reducing the exposure of both patients and operating room staff. Methods: A literature review in PubMed was performed to identify studies describing protocols or measures to reduce radiation received during endourological procedures from January 1970 to August 2022. Eligible studies were those reporting outcomes for ureteroscopy or percutaneous nephrolithotripsy regarding measures to minimize radiation doses used intraoperatively, performed either in real-life theatres or using phantoms. Both comparative and non-comparative studies were deemed eligible. Results: Protection can be achieved initially at the level of diagnosis and follow-up, which should follow an algorithm and favour more conservative imaging methods. Protocols built on principles of minimized fluoroscopy use should be implemented, and urologists as well as operating room staff should be continuously trained regarding radiation damage and protection measures. Wearing protective lead equipment remains a cornerstone of personnel protection, while configuring the operating room and adjusting X-ray machine settings can also significantly reduce radiation energy. Conclusion: There are specific measures that can be implemented to reduce radiation exposure. These include avoiding excessive use of computed tomography scans and X-rays during the diagnosis and follow-up of urolithiasis patients. Intraoperative protocols with minimal fluoroscopy use can be employed. Staff training regarding the dangers of radiation also plays a major role. The use and maintenance of protective equipment and the proper setup of the operating room also serve this goal. Machine settings can be customized appropriately and, finally, continuous monitoring of exposure with dosimeters can be adopted.