Large Language Models (LLMs) are increasingly demonstrating their ability to understand natural language and solve complex tasks, especially through text generation. One of the relevant capabilities is contextual learning, which involves the ability to receive instructions in natural language or task demonstrations to generate expected outputs for test instances without the need for additional training or gradient updates. In recent years, the popularity of social networking has provided a medium through which some users can engage in offensive and harmful online behavior. In this study, we investigate the ability of different LLMs under approaches ranging from zero-shot and few-shot learning to fine-tuning. Our experiments show that LLMs can identify sexist and hateful online texts using zero-shot and few-shot approaches through information retrieval. Furthermore, it is found that the Zephyr model achieves the best results with the fine-tuning approach, scoring 86.811% on the Explainable Detection of Online Sexism (EDOS) test set and 57.453% on the Multilingual Detection of Hate Speech Against Immigrants and Women in Twitter (HatEval) test set. Finally, it is confirmed that the evaluated models perform well in hate text detection, as they beat the best result on the HatEval task leaderboard. The error analysis shows that contextual learning had difficulty distinguishing between types of hate speech and figurative language, whereas the fine-tuned approach tends to produce many false positives.
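To make the zero-shot and few-shot setup concrete, the sketch below builds a classification prompt from optional retrieved demonstrations and passes it to a generic text-generation callable. It is a minimal illustration, not the paper's actual pipeline: the `generate` callable, the label set, and the demonstration format are all assumptions.

```python
# Minimal sketch of zero-/few-shot hate-speech classification with an LLM.
# `generate(prompt) -> str` is a placeholder for whatever completion client
# is available; it is NOT a specific library API.

LABELS = ("sexist", "hateful", "neither")  # assumed label set

def build_prompt(text: str, demonstrations=()) -> str:
    """Compose an instruction, optional k-shot demonstrations, and the test instance."""
    lines = [
        "Classify the following text as one of: " + ", ".join(LABELS) + ".",
        "Answer with the label only.",
    ]
    for demo_text, demo_label in demonstrations:  # few-shot examples (empty => zero-shot)
        lines += [f"Text: {demo_text}", f"Label: {demo_label}"]
    lines += [f"Text: {text}", "Label:"]
    return "\n".join(lines)

def classify(text, demonstrations=(), generate=lambda prompt: "neither"):
    answer = generate(build_prompt(text, demonstrations)).strip().lower()
    return answer if answer in LABELS else "neither"  # fall back on unparseable output
```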
Mo₂C is an excellent electrocatalyst for the hydrogen evolution reaction (HER). However, Mo₂C is a poor electrocatalyst for the oxygen evolution reaction (OER). Herein, two different elements, namely Co and Fe, are incorporated in Mo₂C that, therefore, has a finely tuned electronic structure, which is not achievable by incorporation of either metal alone. Consequently, the resulting electrocatalyst Co₀.₈Fe₀.₂-Mo₂C-80 displayed excellent OER catalytic performance, which is evidenced by a low overpotential of 214.0 (and 246.5) mV to attain a current density of 10 (and 50) mA cm⁻², an ultralow Tafel slope of 38.4 mV dec⁻¹, and long-term stability in alkaline medium. Theoretical data demonstrate that Co₀.₈Fe₀.₂-Mo₂C-80 requires the lowest overpotential (1.00 V) for OER and that the Co centers are the active sites. The ultrahigh catalytic performance of the electrocatalyst is attributed to its excellent intrinsic catalytic activity, owing to the high Brunauer-Emmett-Teller specific surface area, large electrochemically active surface area, small Tafel slope, and low charge-transfer resistance.
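As a small illustration of how a reported Tafel slope relates overpotential and current density, the snippet below fits η = a + b·log₁₀(j) to example polarization data with NumPy. The data points are made up, chosen only to be roughly consistent with a slope near 38 mV dec⁻¹; they are not measurements from this work.

```python
import numpy as np

# Illustrative polarization data (NOT from the paper):
# current density j in mA cm^-2 and overpotential eta in mV.
j = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
eta = np.array([176.0, 188.0, 203.0, 214.0, 226.0, 241.0])

# Tafel analysis: eta = a + b * log10(j); the slope b is the Tafel slope (mV per decade).
b, a = np.polyfit(np.log10(j), eta, 1)
print(f"Tafel slope ~ {b:.1f} mV dec^-1, intercept a ~ {a:.1f} mV")
```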
As the realm of enterprise-level conversational AI continues to evolve, it becomes evident that while generalized Large Language Models (LLMs) like GPT-3.5 bring remarkable capabilities, they also bring forth formidable challenges. These models, honed on vast and diverse datasets, have undoubtedly pushed the boundaries of natural language understanding and generation. However, they often stumble when faced with the intricate demands of nuanced enterprise applications. This research advocates for a strategic paradigm shift, urging enterprises to embrace a fine-tuning approach as a means to optimize conversational AI. While generalized LLMs are linguistic marvels, their inability to cater to the specific needs of businesses across various industries poses a critical challenge. This strategic shift involves empowering enterprises to seamlessly integrate their own datasets into LLMs, a process that extends beyond linguistic enhancement. The core concept of this approach centers on customization, enabling businesses to fine-tune the AI’s functionality to fit precisely within their unique business landscapes. By immersing the LLM in industry-specific documents, customer interaction records, internal reports, and regulatory guidelines, the AI transcends its generic capabilities to become a sophisticated conversational partner aligned with the intricacies of the enterprise’s domain. The transformative potential of this fine-tuning approach cannot be overstated. It enables a transition from a universal AI solution to a highly customizable tool. The AI evolves from being a linguistic powerhouse to a contextually aware, industry-savvy assistant. As a result, it not only responds with linguistic accuracy but also with depth, relevance, and resonance, significantly elevating user experiences and operational efficiency. In the subsequent sections, this paper delves into the intricacies of fine-tuning, exploring the multifaceted challenges and abundant opportunities it presents. It addresses the technical intricacies of data integration, ethical considerations surrounding data usage, and the broader implications for the future of enterprise AI. The journey embarked upon in this research holds the potential to redefine the role of conversational AI in enterprises, ushering in an era where AI becomes a dynamic, deeply relevant, and highly effective tool, empowering businesses to excel in an ever-evolving digital landscape.
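The sketch below shows one common way such domain fine-tuning is done in practice: continued causal-language-model training on a corpus of in-house documents with the Hugging Face `transformers` Trainer. The base model name, data path, and hyperparameters are placeholders; this is a generic recipe under those assumptions, not the approach evaluated in the paper.

```python
# Minimal sketch: fine-tuning a causal LM on enterprise documents.
# Model name, data path, and hyperparameters are illustrative placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import load_dataset

base_model = "gpt2"  # stand-in for whatever base LLM is licensed for tuning
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# One document snippet per line: manuals, tickets, policies, regulatory text, ...
dataset = load_dataset("text", data_files={"train": "enterprise_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="ft-enterprise", num_train_epochs=1,
                           per_device_train_batch_size=2, learning_rate=2e-5),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```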
In recent years, early detection and warning of fires have posed a significant challenge to environmental protection and human safety. Deep learning models such as Faster R-CNN (Faster Region-based Convolutional Neural Network), YOLO (You Only Look Once), and their variants have demonstrated superiority in quickly detecting objects from images and videos, creating new opportunities to enhance automatic and efficient fire detection. The YOLO model, especially newer versions like YOLOv10, stands out for its fast processing capability, making it suitable for low-latency applications. However, when applied to real-world datasets, the accuracy of fire prediction is still not high. This study improves the accuracy of YOLOv10 for real-time applications through model fine-tuning techniques and data augmentation. The core work of the research involves creating a diverse fire image dataset specifically suited for fire detection applications in buildings and factories, freezing the initial layers of the model to retain the general features learned from the dataset, applying the Squeeze-and-Excitation attention mechanism, and employing Stochastic Gradient Descent (SGD) with momentum to enhance accuracy while ensuring real-time fire detection. Experimental results demonstrate the effectiveness of the proposed fire prediction approach, where the YOLOv10 small model exhibits the best balance compared to other YOLO family models such as nano, medium, and balanced. Additionally, the study provides an experimental evaluation to highlight the effectiveness of model fine-tuning compared to the YOLOv10 baseline, YOLOv8, and Faster R-CNN based on two criteria: accuracy and prediction time.
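As an illustration of the freezing-plus-SGD recipe described above, the sketch below freezes the first few blocks of a generic PyTorch backbone and optimizes only the remaining parameters with momentum SGD. The stand-in backbone, the number of frozen blocks, and the hyperparameters are assumptions; this is not the authors' exact YOLOv10 configuration.

```python
import torch
from torch import nn

def freeze_early_layers(model: nn.Module, num_frozen: int) -> None:
    """Freeze the first `num_frozen` top-level children so they keep pretrained features."""
    for i, child in enumerate(model.children()):
        if i < num_frozen:
            for p in child.parameters():
                p.requires_grad = False

# Stand-in backbone; a real detector would be loaded from pretrained weights.
backbone = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
freeze_early_layers(backbone, num_frozen=4)  # keep the low-level feature layers fixed

# Train only the parameters that still require gradients, using momentum SGD.
optimizer = torch.optim.SGD(
    [p for p in backbone.parameters() if p.requires_grad],
    lr=1e-2, momentum=0.9, weight_decay=5e-4,
)
```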
Cancer patients are at high risk of malnutrition, which can lead to adverse health outcomes such as prolonged hospitalization, increased complications, and increased mortality. Accurate and timely nutritional assessment plays a critical role in effectively managing malnutrition in these patients. However, while many tools exist to assess malnutrition, there is no universally accepted standard. Although different tools have their own strengths and limitations, there is a lack of narrative reviews on nutritional assessment tools for cancer patients. To address this knowledge gap, we conducted a non-systematic literature search using PubMed, Embase, Web of Science, and the Cochrane Library from their inception until May 2023. A total of 90 studies met our selection criteria and were included in our narrative review. We evaluated the applications, strengths, and limitations of 4 commonly used nutritional assessment tools for cancer patients: the Subjective Global Assessment (SGA), Patient-Generated Subjective Global Assessment (PG-SGA), Mini Nutritional Assessment (MNA), and Global Leadership Initiative on Malnutrition (GLIM). Our findings revealed that malnutrition was associated with adverse health outcomes. Each of these 4 tools has its applications, strengths, and limitations. Our findings provide medical staff with a foundation for choosing the optimal tool to rapidly and accurately assess malnutrition in cancer patients. It is essential for medical staff to be familiar with these common tools to ensure effective nutritional management of cancer patients.
Machine tools, often referred to as the “mother machines” of the manufacturing industry, are crucial in developing smart manufacturing and are increasingly becoming more intelligent. Digital twin technology can promote machine tool intelligence and has attracted considerable research interest. However, there is a lack of clear and systematic analyses on how digital twin technology enables machine tool intelligence. Herein, digital twin modeling was identified as an enabling technology for machine tool intelligence based on a comparative study of the characteristics of machine tool intelligence and the digital twin. The review then delves into state-of-the-art digital twin modeling-enabled machine tool intelligence, examining it from the aspects of data-based modeling and mechanism-data dual-driven modeling. Additionally, it highlights three bottleneck issues facing the field. Considering these problems, the architecture of a digital twin machine tool (DTMT) is proposed, and three key technologies are expounded in detail: data perception and fusion technology; mechanism-data-knowledge hybrid-driven digital twin modeling and virtual-real synchronization technology; and dynamic optimization and collaborative control technology for multilevel parameters. Finally, future research directions for the DTMT are discussed. This work provides a foundation for the research and implementation of digital twin modeling-enabled machine tool intelligence, making it significant for the development of intelligent machine tools.
Ceramic cutting inserts are a type of cutting tool commonly used in high-speed metal cutting applications. However, the wear of these inserts caused by friction between the workpiece and the cutting inserts limits their overall effectiveness. In order to improve tool life and reduce wear, this study introduces an emerging method called magnetic field-assisted batch polishing (MABP) for simultaneously polishing multiple ceramic cutting inserts. Several polishing experiments were conducted under different conditions, and the wear characteristics were clarified by cutting S136H steel. The results showed that after 15 min of polishing, the surface roughness at the flank face, edge, and nose of the inserts was reduced to below 2.5 nm, 6.25 nm, and 45.8 nm, respectively. Furthermore, the nose radii of the inserts did not change significantly, and there were no significant changes in the weight percentage of elements before and after polishing. Additionally, the tool life of the batch-polished inserts was found to be up to 1.75 times longer than that of unpolished inserts. These findings suggest that the MABP method is an effective way to mass polish ceramic cutting inserts, resulting in significantly reduced tool wear. Furthermore, this novel method offers new possibilities for polishing other tools.
Magnesium alloys have many advantages as lightweight materials for engineering applications, especially in the automotive and aerospace fields. They undergo extensive cutting or machining while making products out of them. Dry cutting, a sustainable machining method, causes more friction and adhesion at the tool-chip interface. One of the promising solutions to this problem is cutting tool surface texturing, which can reduce tool wear and friction in dry cutting and improve machining performance. This paper aims to investigate the impact of dimple textures (made on the flank face of cutting inserts) on tool wear and chip morphology in the dry machining of AZ31B magnesium alloy. The results show that the cutting speed was the most significant factor affecting tool flank wear, followed by feed rate and cutting depth. The tool wear mechanism was examined using scanning electron microscope (SEM) images and energy-dispersive X-ray spectroscopy (EDS) analysis reports, which showed that at low cutting speed the main wear mechanism was abrasion, while at high speed it was adhesion. The chips are discontinuous at low cutting speeds and continuous at high cutting speeds. The dimple-textured flank face cutting tools facilitate the dry machining of AZ31B magnesium alloy and contribute to ecological benefits.
Laser tracers are three-dimensional coordinate measurement systems that are widely used in industrial measurement. We propose a geometric error identification method based on multi-station synchronization laser tracers to enable the rapid and high-precision measurement of geometric errors for gantry-type computer numerical control (CNC) machine tools. This method also addresses the measurement efficiency issues of the existing single-base-station measurement method and the multi-base-station time-sharing measurement method. We consider a three-axis gantry-type CNC machine tool, and the geometric error mathematical model is derived and established based on the combination of screw theory and a topological analysis of the machine kinematic chain. The positions of the four laser tracer stations and the measurement points are determined based on the multi-point positioning principle. A self-calibration algorithm is proposed for the coordinate calibration process of a laser tracer using the Levenberg-Marquardt nonlinear least squares method, and the geometric error is solved using Taylor's first-order linearization iteration. The experimental results show that the geometric error calculated based on this modeling method is comparable to the results from the Etalon laser tracer. For a volume of 800 mm × 1000 mm × 350 mm, the maximum differences of the linear, angular, and spatial position errors were 2.0 μm, 2.7 μrad, and 12.0 μm, respectively, which verifies the accuracy of the proposed algorithm. This research proposes a modeling method for the precise measurement of errors in machine tools, and the applied nature of this study also makes it relevant both to researchers and to those in the industrial sector.
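To illustrate the multi-point positioning idea, the sketch below recovers a point's coordinates from distance measurements taken by four stations using nonlinear least squares (SciPy's Levenberg-Marquardt solver). The station coordinates and measurements are synthetic, and the paper's full self-calibration of station positions is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic station positions (mm) and a "true" target point used to fake measurements.
stations = np.array([[0.0, 0.0, 0.0],
                     [800.0, 0.0, 50.0],
                     [800.0, 1000.0, 0.0],
                     [0.0, 1000.0, 80.0]])
true_point = np.array([400.0, 500.0, 175.0])
measured = np.linalg.norm(stations - true_point, axis=1) + np.random.normal(0, 1e-3, 4)

def residuals(p):
    """Difference between modeled and measured station-to-point distances."""
    return np.linalg.norm(stations - p, axis=1) - measured

# Levenberg-Marquardt refinement from a rough initial guess.
sol = least_squares(residuals, x0=np.array([300.0, 400.0, 100.0]), method="lm")
print("estimated point (mm):", sol.x)
```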
With the rise of blockchain technology, the security issues of smart contracts have become increasingly critical. Despite the availability of numerous smart contract vulnerability detection tools, many face challenges such as slow updates, usability issues, and limited installation methods. These challenges hinder the adoption and practicality of these tools. This paper examines smart contract vulnerability detection tools from 2016 to 2023, sourced from the Web of Science (WOS) and Google Scholar. By systematically collecting, screening, and synthesizing relevant research, 38 open-source tools that provide installation methods were selected for further investigation. From a developer's perspective, this paper offers a comprehensive survey of these 38 open-source tools, discussing their operating principles, installation methods, environmental dependencies, update frequencies, and installation challenges. Based on this, we propose an Ethereum smart contract vulnerability detection framework. This framework enables developers to easily utilize various detection tools and accurately analyze contract security issues. To validate the framework's stability, over 1700 h of testing were conducted. Additionally, a comprehensive performance test was performed on the mainstream detection tools integrated within the framework, assessing their hardware requirements and vulnerability detection coverage. Experimental results indicate that the Slither tool demonstrates satisfactory performance in terms of system resource consumption and vulnerability detection coverage. This study represents the first performance evaluation of testing tools in this domain, providing significant reference value.
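As a sketch of how a framework might drive one of the surveyed detectors programmatically, the snippet below invokes Slither's command-line interface on a contract and parses its JSON report. It assumes Slither is installed and on the PATH and that `--json -` writes the report to stdout; both the flag behavior and the report structure should be checked against the installed version, and the contract filename is a placeholder.

```python
import json
import subprocess

def run_slither(contract_path: str) -> list:
    """Run Slither on a Solidity file and return its detector findings (possibly empty)."""
    proc = subprocess.run(
        ["slither", contract_path, "--json", "-"],  # '-' asks Slither to emit JSON on stdout
        capture_output=True, text=True,
    )
    try:
        report = json.loads(proc.stdout)
    except json.JSONDecodeError:
        return []  # compilation failure or unexpected output
    return report.get("results", {}).get("detectors", [])

if __name__ == "__main__":
    for finding in run_slither("MyContract.sol"):
        print(finding.get("check"), "-", finding.get("impact"))
```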
In community planning, due to the lack of evidence regarding the selection of media tools, this study examines how a common but differentiated ideal speech situation can be created, as well as how more appropriate media tools can be defined and selected in the community planning process. First, this study describes the concept and theoretical basis of media used in community planning from the perspective of the multiple effects of media evolution on communicative planning. Second, the classification criteria and typical characteristics of media tools used to support community planning are clarified along three dimensions: acceptability, cost-effectiveness, and applicability. Third, strategies for applying media tools in the four phases of communicative planning, namely state analysis, problem identification, contradiction resolution, and optimization, are described. Finally, trends in the development of media tools for community planning are explored in terms of multi-stakeholder engagement, support for scientific decision-making, and multiple-type media integration. The results provide a reference for developing more inclusive, effective, and appropriate media tools for enhancing decision-making capacity and modernizing governance in community planning and policy-making processes.
This paper introduced the content, compilation process, reliability and validity, and scoring methods of evaluation tools for patients' medication compliance at home and abroad, and reviewed the research progress on these tools. The evaluation methods, dimensions, scoring methods, evaluation content, and application scope of the tools were compared, so as to provide a reference for nurses to comprehensively and accurately evaluate patients' medication status.
We have developed a protein array system, named "Phospho-Totum", which reproduces the phosphorylation state of a sample on the array. The protein array contains 1471 proteins from 273 known signaling pathways. According to the activation degrees of tyrosine kinases in the sample, the corresponding groups of substrate proteins on the array are phosphorylated under the same conditions. In addition to measuring the phosphorylation levels of the 1471 substrates, we have developed and applied artificial intelligence-assisted tools to further characterize the phosphorylation state and estimate pathway activation, tyrosine kinase activation, and a list of kinase inhibitors that produce phosphorylation states similar to that of the sample. The Phospho-Totum system, which seamlessly links and interrogates the measurements and analyses, has the potential not only to elucidate pathophysiological mechanisms in diseases by reproducing the phosphorylation state of samples, but also to be useful for drug discovery, particularly for screening targeted kinases for potential drug kinase inhibitors.
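The snippet below sketches one simple way such an analysis could rank reference kinase-inhibitor signatures against a sample's phosphorylation profile, using cosine similarity over the substrate-level measurements. It is an illustrative analysis on made-up data and is not the actual Phospho-Totum software or its matching algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
n_substrates = 1471  # number of substrate proteins on the array

# Made-up phosphorylation profiles: one sample and a small library of inhibitor signatures.
sample = rng.random(n_substrates)
inhibitor_signatures = {f"inhibitor_{i}": rng.random(n_substrates) for i in range(5)}

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank inhibitors by how closely their signature matches the sample's profile.
ranking = sorted(inhibitor_signatures.items(),
                 key=lambda kv: cosine(sample, kv[1]), reverse=True)
for name, signature in ranking:
    print(f"{name}: similarity {cosine(sample, signature):.3f}")
```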
The laser powder bed fusion (LPBF) process can integrally form geometrically complex and high-performance metallic parts and has therefore attracted much interest, especially in the molds industry. The advent of LPBF makes it possible to design and produce complex conformal cooling channel systems in molds. Thus, LPBF-processed tool steels have attracted more and more attention. The complex thermal history in the LPBF process makes the microstructural characteristics and properties different from those of conventionally manufactured tool steels. This paper provides an overview of LPBF-processed tool steels by describing the physical phenomena, the microstructural characteristics, and the mechanical/thermal properties, including tensile properties, wear resistance, and thermal properties. The microstructural characteristics are presented from a multiscale perspective, ranging from densification, meso-structure, microstructure, and substructure in grains, to nanoprecipitates. Finally, a summary of these tool steels together with their challenges and outlook is provided.
The 21st century has started with several innovations in the medical sciences, with wide applications in health care management. This development has taken place in the field of medicines (newer drugs/molecules) and in various tools and technologies, which have completely changed patient management, including abdominal surgery. Surgery for abdominal diseases has moved from maximally invasive to minimally invasive (laparoscopic and robotic) surgery. Some of the newer medicines have an impact on the need for surgical intervention. This article focuses on the development of these emerging molecules, tools, and technologies and their impact on present surgical practice, as well as their future effects on surgical intervention in gastroenterological diseases.
The wear of metal cutting tools will progressively rise as the cutting time goes on. Heavy wear on the tool will generate significant noise and vibration, negatively impacting the accuracy of the forming and the surface integrity of the workpiece. Hence, during the cutting process, it is imperative to continually monitor the tool wear state and promptly replace any heavily worn tools to guarantee the quality of the cutting. The conventional tool wear monitoring models, which are based on machine learning, are specifically built for the intended cutting conditions. However, these models require retraining when the cutting conditions undergo any changes. This method has no application value if the cutting conditions frequently change. This manuscript proposes a method for monitoring tool wear based on unsupervised deep transfer learning. Due to the similarity of the tool wear process under varying working conditions, a tool wear recognition model that can adapt to both current and previous working conditions has been developed by utilizing historical cutting monitoring data. To extract and classify cutting vibration signals, the unsupervised deep transfer learning network comprises a one-dimensional (1D) convolutional neural network (CNN) with a multi-layer perceptron (MLP). To achieve distribution alignment of deep features through the maximum mean discrepancy algorithm, a domain adaptive layer is embedded in the penultimate layer of the network. A platform for monitoring tool wear during end milling has been constructed. The proposed method was verified through the execution of a full life test of end milling under multiple working conditions with a Cr12MoV steel workpiece. Our experiments demonstrate that the transfer learning model maintains a classification accuracy of over 80%. In comparison with the most advanced tool wear monitoring methods, the presented model guarantees superior performance in the target domains.
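To make the domain-adaptation idea concrete, the sketch below pairs a small 1D-CNN feature extractor with a Gaussian-kernel maximum mean discrepancy (MMD) penalty between source-domain and target-domain features. The network sizes, kernel bandwidth, number of wear classes, and loss weighting are illustrative guesses rather than the paper's architecture.

```python
import torch
from torch import nn

class FeatureExtractor(nn.Module):
    """Tiny 1D CNN that maps a vibration window to a feature vector."""
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, feat_dim), nn.ReLU(),
        )
    def forward(self, x):  # x: (batch, 1, signal_length)
        return self.net(x)

def gaussian_mmd(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD between two feature batches using a Gaussian kernel."""
    def kernel(x, y):
        d2 = torch.cdist(x, y).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(a, a).mean() + kernel(b, b).mean() - 2 * kernel(a, b).mean()

# Toy batches: labeled source-domain signals and unlabeled target-domain signals.
extractor, classifier = FeatureExtractor(), nn.Linear(64, 3)  # e.g. 3 wear states (assumed)
src, src_labels = torch.randn(8, 1, 1024), torch.randint(0, 3, (8,))
tgt = torch.randn(8, 1, 1024)

src_feat, tgt_feat = extractor(src), extractor(tgt)
loss = nn.functional.cross_entropy(classifier(src_feat), src_labels) \
       + 0.5 * gaussian_mmd(src_feat, tgt_feat)  # 0.5 = assumed trade-off weight
loss.backward()
```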
The dimensional accuracy of machined parts is strongly influenced by the thermal behavior of machine tools (MT). Minimizing this influence represents a key objective for any modern manufacturing industry. Thermally induced positioning error compensation remains the most effective and practical method in this context. However, the efficiency of the compensation process depends on the quality of the model used to predict the thermal errors. The model should consistently reflect the relationships between the temperature distribution in the MT structure and the thermally induced positioning errors. A judicious choice of the number and location of temperature-sensitive points to represent heat distribution is a key factor for robust thermal error modeling. Therefore, in this paper, the temperature-sensitive points are selected following a structured thermomechanical analysis carried out to evaluate the effects of various temperature gradients on MT structure deformation intensity. The MT thermal behavior is first modeled using the finite element method and validated against various experimentally measured temperature fields obtained with temperature sensors and thermal imaging. The thermal behavior validation shows a maximum error of less than 10% when comparing the numerical estimations with the experimental results, even under changing operating conditions. The numerical model is then used in several series of simulations carried out under varied working conditions to explore possible relationships between the temperature distribution and the thermal deformation characteristics, and to select the most appropriate temperature-sensitive points to be considered for building an empirical prediction model for thermal errors as a function of the MT thermal state. Validation tests performed using an artificial neural network based simplified model confirmed the efficiency of the proposed temperature-sensitive points, allowing the prediction of the thermally induced errors with an accuracy greater than 90%.
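The sketch below shows the kind of simplified data-driven model the validation step describes: a small neural-network regressor (scikit-learn's MLPRegressor here) mapping readings from the selected temperature-sensitive points to a thermally induced positioning error. The data are synthetic and the network size is a guess; this only illustrates the modeling pattern, not the authors' trained model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Synthetic dataset: 6 temperature-sensitive points (deg C) -> positioning error (um).
n_samples, n_sensors = 500, 6
temps = 20.0 + 15.0 * rng.random((n_samples, n_sensors))
weights = np.array([1.2, -0.4, 0.9, 0.3, -0.7, 0.5])
errors = temps @ weights + 0.05 * (temps[:, 0] - 20.0) ** 2 + rng.normal(0, 0.5, n_samples)

X_train, X_test, y_train, y_test = train_test_split(temps, errors, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```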