Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) over the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional recognition methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet), and various other deep neural networks have emerged. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized recognition methods. Sign language data collection includes acquiring data from data gloves, sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
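To make one of the traditional techniques named above concrete, the sketch below implements classic dynamic time warping (DTW) for aligning two gesture sequences of different lengths; the toy hand-trajectory coordinates and the Euclidean frame distance are illustrative assumptions, not data or code from the surveyed systems.

```python
import math

def dtw_distance(seq_a, seq_b):
    """Classic O(len(a) * len(b)) DTW with a Euclidean frame distance."""
    n, m = len(seq_a), len(seq_b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])  # frame-level distance
            cost[i][j] = d + min(cost[i - 1][j],       # insertion
                                 cost[i][j - 1],       # deletion
                                 cost[i - 1][j - 1])   # match
    return cost[n][m]

# Two toy hand-trajectory sequences (x, y) of different lengths.
template = [(0.0, 0.0), (0.2, 0.1), (0.5, 0.4), (0.9, 0.8)]
observed = [(0.0, 0.1), (0.3, 0.2), (0.6, 0.5), (0.8, 0.7), (0.9, 0.8)]
print("DTW distance:", dtw_distance(template, observed))
```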
Based on Conceptual Metaphor Theory (CMT), this paper builds a small corpus of ChatGPT-written speeches. Employing a corpus-driven approach, the study analyzes the identification and use of conceptual metaphors in artificial intelligence (AI) language. The AI demonstrated its capacity to use metaphors in the metaphoric corpora, displaying diversity, non-arbitrariness, repetition, and intersectionality in its selection of source domains. It often uses vocabulary combinations with clear similarities to establish metaphorical meaning. At the literal level, the outcomes of metaphor identification by artificial intelligence differ significantly from those of humans. Therefore, there is a need to develop advanced automatic metaphor identification models in order to consistently enhance the precision of metaphor identification.
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLM) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly and introduces abilities, such as in-context learning, that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLM) to Large Multimodal Models (LMM). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. It then turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
This opinion paper explores the transformative potential of large language models (LLMs) in laparoscopic surgery and argues for their integration to enhance surgical education, decision support, reporting, and patient care. LLMs can revolutionize surgical education by providing personalized learning experiences and accelerating skill acquisition. Intelligent decision support systems powered by LLMs can assist surgeons in making complex decisions, optimizing surgical workflows, and improving patient outcomes. Moreover, LLMs can automate surgical reporting and generate personalized patient education materials, streamlining documentation and improving patient engagement. However, challenges such as data scarcity, surgical semantic capture, real-time inference, and integration with existing systems need to be addressed for successful LLM integration. The future of laparoscopic surgery lies in the seamless integration of LLMs, enabling autonomous robotic surgery, predictive surgical planning, intraoperative decision support, virtual surgical assistants, and continuous learning. By harnessing the power of LLMs, laparoscopic surgery can be transformed, empowering surgeons and ultimately benefiting patients.
Sign language, a visual-gestural language used by the deaf and hard-of-hearing community, plays a crucial role in facilitating communication and promoting inclusivity. Sign language recognition (SLR), the process of automatically recognizing and interpreting sign language gestures, has gained significant attention in recent years due to its potential to bridge the communication gap between the hearing impaired and the hearing world. The emergence and continuous development of deep learning techniques have provided inspiration and momentum for advancing SLR. This paper presents a comprehensive and up-to-date analysis of the advancements, challenges, and opportunities in deep learning-based sign language recognition, focusing on the past five years of research. We explore various aspects of SLR, including sign data acquisition technologies, sign language datasets, evaluation methods, and different types of neural networks. Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) have shown promising results in fingerspelling and isolated sign recognition. However, the continuous nature of sign language poses challenges, leading to the exploration of advanced neural network models such as the Transformer model for continuous sign language recognition (CSLR). Despite significant advancements, several challenges remain in the field of SLR. These challenges include expanding sign language datasets, achieving user independence in recognition systems, exploring different input modalities, effectively fusing features, modeling co-articulation, and improving semantic and syntactic understanding. Additionally, developing lightweight network architectures for mobile applications is crucial for practical implementation. By addressing these challenges, we can further advance the field of deep learning for sign language recognition and improve communication for the hearing-impaired community.
Users of social networks can readily express their thoughts on websites like Twitter (now X), Facebook, and Instagram. The volume of textual data flowing from users has greatly increased with the advent of social media in comparison to traditional media. Using natural language processing (NLP) methods, for instance, social media can be leveraged to obtain crucial information on the present situation during disasters. In this work, tweets on the Uttarakhand flash flood are analyzed using a hybrid NLP model. The investigation employed sentiment analysis (SA) to determine the negative attitudes people expressed regarding the disaster. We apply a machine learning algorithm and evaluate its performance using standard metrics, namely root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). Our random forest (RF) classifier outperforms comparable works with an accuracy of 98.10%. The study shows how Twitter (now X) data and machine learning (ML) techniques can be used to analyze public discourse and sentiment regarding disasters, comparing positive and negative comments in order to develop strategies for dealing with public sentiment on disasters.
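As a rough sketch of this kind of pipeline (not the authors' exact features, dataset, or hyperparameters), tweets can be vectorised with TF-IDF, classified with a random forest, and scored with accuracy and one of the reported error metrics; the six example tweets and their labels below are invented.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score, mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical tweets and sentiment labels (1 = positive, 0 = negative).
tweets = [
    "rescue teams reached the village in time",
    "roads washed away and still no help",
    "relief camps are well organised and stocked",
    "no power or clean water for three days",
    "grateful for the volunteers working overnight",
    "bridge collapse has cut off the whole valley",
]
labels = [1, 0, 1, 0, 1, 0]

X = TfidfVectorizer().fit_transform(tweets)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.5, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))
print("MAE:", mean_absolute_error(y_te, pred))  # one of the error metrics reported
```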
This letter evaluates the article by Gravina et al on ChatGPT's potential in providing medical information for inflammatory bowel disease patients. While promising, it highlights the need for advanced techniques like reasoning+action and retrieval-augmented generation to improve accuracy and reliability. Emphasizing that simple question-and-answer testing is insufficient, it calls for more nuanced evaluation methods to truly gauge large language models' capabilities in clinical applications.
This study presents results from sentiment analysis of dynamic message sign (DMS) message content, focusing on messages that include numbers of road fatalities. As a traffic management tool, DMS plays a role in influencing driver behavior and assisting transportation agencies in achieving safe and efficient traffic movement. However, the psychological and behavioral effects of displaying fatality numbers on DMS remain poorly understood; hence, it is important to know the potential impacts of displaying such messages. The Iowa Department of Transportation displays the number of fatalities on a first screen, followed by a supplemental message intended to promote safe driving; an example is “19 TRAFFIC DEATHS THIS YEAR IF YOU HAVE A SUPER BOWL DON’T DRIVE HIGH.” We employ natural language processing to decode the sentiment and undertone of the supplementary message and investigate how they influence driving speeds. According to the results of a mixed-effects model, drivers reduced speeds marginally upon encountering DMS fatality text with a positive sentiment and a neutral undertone. This category had the largest associated speed reduction, while messages with a negative sentiment and a negative undertone had the second largest, greater than other combinations, including positive sentiment with a positive undertone.
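A mixed-effects analysis of speed change against message sentiment and undertone could be set up roughly as follows; the simulated data, column names, and the choice of DMS site as the random-effects grouping variable are assumptions for illustration, not the study's actual dataset or model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Purely synthetic observations; real work would pair DMS message logs with
# observed speed data.
rng = np.random.default_rng(0)
n = 120
data = pd.DataFrame({
    "sentiment": rng.choice(["positive", "negative"], n),
    "undertone": rng.choice(["positive", "neutral", "negative"], n),
    "site_id": rng.integers(1, 9, n),            # hypothetical DMS site identifier
    "speed_change": rng.normal(-0.5, 0.4, n),    # synthetic speed change (mph)
})

# Fixed effects for sentiment x undertone, random intercept per DMS site.
model = smf.mixedlm("speed_change ~ sentiment * undertone", data, groups=data["site_id"])
print(model.fit().summary())
```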
This editorial explores the transformative potential of artificial intelligence (AI) in identifying conflicts of interest (COIs) within academic and scientific research. By harnessing advanced data analysis, pattern recognition, and natural language processing techniques, AI offers innovative solutions for enhancing transparency and integrity in research. This editorial discusses how AI can automatically detect COIs, integrate data from various sources, and streamline reporting processes, thereby maintaining the credibility of scientific findings.
BACKGROUND: Medication errors, especially in dosage calculation, pose risks in healthcare. Artificial intelligence (AI) systems like ChatGPT and Google Bard may help reduce errors, but their accuracy in providing medication information remains to be evaluated. AIM: To evaluate the accuracy of AI systems (ChatGPT 3.5, ChatGPT 4, Google Bard) in providing drug dosage information per Harrison's Principles of Internal Medicine. METHODS: A set of natural language queries mimicking real-world medical dosage inquiries was presented to the AI systems. Responses were analyzed using a 3-point Likert scale. The analysis, conducted with Python and its libraries, focused on basic statistics, overall system accuracy, and disease-specific and organ-system accuracies. RESULTS: ChatGPT 4 outperformed the other systems, showing the highest rate of correct responses (83.77%) and the best overall weighted accuracy (0.6775). Disease-specific accuracy varied notably across systems, with some diseases being accurately recognized, while others demonstrated significant discrepancies. Organ-system accuracy also showed variable results, underscoring system-specific strengths and weaknesses. CONCLUSION: ChatGPT 4 demonstrates superior reliability in medical dosage information, yet variations across diseases emphasize the need for ongoing improvements. These results highlight AI's potential in aiding healthcare professionals, urging continuous development for dependable accuracy in critical medical situations.
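As an illustration of how Likert-graded responses can be summarised into figures of this kind, the short sketch below computes a fully-correct rate and a weighted accuracy; the scores and the credit assigned to each Likert level are assumptions, since the paper's exact weighting scheme is not reproduced here.

```python
from collections import Counter

# Hypothetical 3-point Likert grades for a batch of AI responses.
scores = [2, 2, 1, 0, 2, 2, 1, 2, 0, 2]          # 2 = correct, 1 = partial, 0 = incorrect
weights = {2: 1.0, 1: 0.5, 0: 0.0}               # assumed credit per Likert level

counts = Counter(scores)
correct_rate = counts[2] / len(scores)
weighted_accuracy = sum(weights[s] for s in scores) / len(scores)
print(f"fully correct: {correct_rate:.2%}, weighted accuracy: {weighted_accuracy:.4f}")
```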
Sentiment analysis, or opinion mining (OM), has become familiar due to advances in networking technologies and social media. Recently, a massive amount of text has been generated over the Internet daily, which makes pattern recognition and decision-making difficult. Since OM is useful in business sectors for improving the quality of products as well as services, machine learning (ML) and deep learning (DL) models can be taken into account. Besides, the hyperparameters involved in DL models require a proper adjustment process to boost classification. Therefore, in this paper, a new Artificial Fish Swarm Optimization with Bidirectional Long Short Term Memory (AFSO-BLSTM) model has been developed for the OM process. The major intention of the AFSO-BLSTM model is to effectively mine the opinions present in textual data. In addition, the AFSO-BLSTM model undergoes pre-processing and a TF-IDF-based feature extraction process. Besides, the BLSTM model is employed for the effectual detection and classification of opinions. Finally, the AFSO algorithm is utilized for the effective hyperparameter adjustment of the BLSTM model, which shows the novelty of the work. A complete simulation study of the AFSO-BLSTM model is validated using a benchmark dataset, and the obtained experimental values revealed the high potential of the AFSO-BLSTM model for mining opinions.
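A minimal Keras sketch of a bidirectional LSTM opinion classifier of the general kind described is shown below; the vocabulary size, sequence length, and layer sizes are placeholder assumptions, and the AFSO hyperparameter search itself is not implemented.

```python
import tensorflow as tf

vocab_size, seq_len = 10000, 100   # assumed outputs of tokenisation/pre-processing

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len,)),
    tf.keras.layers.Embedding(vocab_size, 128),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # positive vs. negative opinion
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
# In the paper, AFSO would search hyperparameters such as the number of LSTM
# units and the learning rate before the final training run.
```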
The large language model ChatGPT has drawn extensive attention because of its human-like expression and reasoning abilities. In this study, we investigate the feasibility of using ChatGPT to translate radiology reports into plain language for patients and healthcare providers, so that they are educated for improved healthcare. Radiology reports from 62 low-dose chest computed tomography lung cancer screening scans and 76 brain magnetic resonance imaging metastasis screening scans were collected in the first half of February for this study. According to the evaluation by radiologists, ChatGPT can successfully translate radiology reports into plain language, with an average score of 4.27 on a five-point scale, 0.08 places of missing information, and 0.07 places of misinformation. The suggestions provided by ChatGPT are generally relevant, such as keeping up with follow-up visits to doctors and closely monitoring any symptoms, and for about 37% of the 138 cases in total, ChatGPT offers specific suggestions based on findings in the report. ChatGPT also exhibits some randomness in its responses, with occasionally over-simplified or neglected information, which can be mitigated by using a more detailed prompt. Furthermore, ChatGPT's results are compared with those of the newly released larger model GPT-4, showing that GPT-4 can significantly improve the quality of translated reports. Our results show that it is feasible to utilize large language models in clinical education, and further efforts are needed to address limitations and maximize their potential.
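For readers who want to reproduce this kind of experiment programmatically rather than through the chat interface, a hedged sketch using the OpenAI Python SDK follows; the prompt wording, model name, and report snippet are assumptions, not the prompts or data used in the study.

```python
from openai import OpenAI   # OpenAI Python SDK v1+; requires OPENAI_API_KEY to be set

client = OpenAI()
report = ("Low-dose chest CT: 4 mm noncalcified nodule in the right upper lobe. "
          "No mediastinal lymphadenopathy.")   # invented report snippet
prompt = (
    "Translate the following radiology report into plain language for a patient, "
    "then list any suggested follow-up actions:\n\n" + report
)
response = client.chat.completions.create(
    model="gpt-4",                                   # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```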
We study the short-term memory capacity of ancient readers of the original New Testament written in Greek, of its translations to Latin and to modern languages. To model it, we consider the number of words between any two contiguous interpunctions, I_P, because this parameter can model how the human mind memorizes “chunks” of information. Since I_P can be calculated for any alphabetical text, we can perform experiments, otherwise impossible, with ancient readers by studying the literary works they used to read. The “experiments” compare the I_P of texts of a language/translation to those of another language/translation by measuring the minimum average probability of finding joint readers (those who can read both texts because of similar short-term memory capacity) and by defining an “overlap index”. We also define the population of universal readers, people who can read any New Testament text in any language. Future work is vast, with many research tracks, because alphabetical literatures are very large and allow many experiments, such as comparing authors, translations or even texts written by artificial intelligence tools.
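Since I_P is simply the number of words between consecutive punctuation marks, it can be computed for any alphabetical text in a few lines of code; the sketch below uses an illustrative sentence and an assumed set of interpunction characters.

```python
import re

def word_intervals(text, punctuation=".,;:?!"):
    """Return word counts between consecutive punctuation marks (interpunctions)."""
    segments = re.split("[" + re.escape(punctuation) + "]", text)
    return [len(seg.split()) for seg in segments if seg.strip()]

sample = "In the beginning was the Word, and the Word was with God, and the Word was God."
ip_values = word_intervals(sample)
print(ip_values, "mean I_P =", sum(ip_values) / len(ip_values))
```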
In recent years, neural networks (NNs) have received increasing attention from both academia and industry. So far, significant diversity among existing NNs as well as their hardware platforms makes NN programming a daunting task. In this paper, a domain-specific language (DSL) for NNs, the neural network language (NNL), is proposed to deliver productive NN programming and portable performance of NN execution on different hardware platforms. The productivity and flexibility of NN programming are enabled by abstracting NNs as a directed graph of blocks. The language is used to describe four representative and widely used NNs and to run them on three different hardware platforms (CPU, GPU, and an NN accelerator). Experimental results show that NNs written in the proposed language perform, on average, 14.5% better than the baseline implementations across these three platforms. Moreover, compared with the Caffe framework, which specifically targets the GPU platform, the code can achieve similar performance.
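To make the "directed graph of blocks" abstraction concrete, here is an illustrative Python stand-in (not actual NNL syntax, which the paper defines) in which each block records its operation, its parameters, and the upstream blocks it consumes.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    name: str
    op: str                                       # e.g. "conv", "relu", "fc"
    params: dict = field(default_factory=dict)
    inputs: list = field(default_factory=list)    # names of upstream blocks

# A tiny LeNet-style fragment; a compiler back end could map each block to
# kernels for a CPU, GPU, or NN accelerator.
graph = [
    Block("conv1", "conv", {"out_channels": 6, "kernel": 5}, inputs=["image"]),
    Block("relu1", "relu", inputs=["conv1"]),
    Block("fc1", "fc", {"out_features": 10}, inputs=["relu1"]),
]
for block in graph:
    print(block.name, "<-", block.inputs)
```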
Objective: Natural language processing (NLP) was used to mine and visualize the core content of syndrome element syndrome differentiation (SESD). Methods: The first step was to build a text mining and analysis environment based on the Python language and to build a corpus based on the core chapters of SESD. The second step was to digitize the corpus. The main steps included word segmentation, information cleaning and merging, construction of a document-term matrix, dictionary compilation, and information conversion. The third step was to mine and display the internal information of the SESD corpus by means of word clouds, keyword extraction, and visualization. Results: NLP played a positive role in computer recognition and comprehension of SESD. Different chapters had different keywords and weights. Deficiency syndrome elements were an important component of SESD, such as "Qi deficiency", "Yang deficiency", and "Yin deficiency". The important excess-type syndrome elements included "Blood stasis", "Qi stagnation", etc. Core syndrome elements were closely related. Conclusions: Syndrome differentiation and treatment is the core of SESD. Using NLP to mine syndrome differentiation could help reveal the internal relationships within syndrome differentiation and provide a basis for artificial intelligence to learn syndrome differentiation.
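A hedged sketch of the digitalisation steps described (word segmentation, a document-term matrix, and keyword weighting) is given below; the jieba segmenter and the two invented sentences stand in for the actual SESD chapters and custom dictionary.

```python
import jieba                                          # common Chinese segmentation library
from collections import Counter
from sklearn.feature_extraction.text import TfidfVectorizer

# Two invented sentences standing in for SESD chapter text.
chapters = ["气虚证见神疲乏力，少气懒言。", "血瘀证见刺痛固定，舌质紫暗。"]
tokenised = [" ".join(jieba.lcut(doc)) for doc in chapters]   # word segmentation

vectoriser = TfidfVectorizer()                        # document-term matrix with TF-IDF weights
matrix = vectoriser.fit_transform(tokenised)
terms = vectoriser.get_feature_names_out()

for row, doc in zip(matrix.toarray(), chapters):
    weights = dict(zip(terms, row))
    print(doc, "->", Counter(weights).most_common(3))  # top weighted candidate keywords
```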
Artificial intelligence and machine learning are widely applied across application domains, including computer vision and natural language processing (NLP). We briefly discuss the development of edge detection, which plays an important role in representing salient features in a wide range of computer vision applications. Meanwhile, transformer-based deep models facilitate NLP applications. We introduce two ongoing research projects for the pharmaceutical industry and business negotiation. We also selected five papers in the related areas for this journal issue.
The increasing ownership of mobile phones and the advancement of mobile applications have enlarged the practicality and popularity of their use for learning purposes among Chinese university students. However, even though innovative functions of these applications are increasingly reported in education research, little research has examined their application to spoken English. This paper examined the effect of using a Mobile-Assisted Language Learning (MALL) application, "IELTS Liulishuo" (speaking English fluently in the IELTS test), as a unit of analysis to improve the English-speaking production of university students in China. The effectiveness, validity, and reliability of this mobile application were measured using criteria across seven dimensions. Although some technical and pedagogical issues challenge the adoption of MALL in some less-developed regions of China, the study showed positive effects of using a MALL oral English assessment application characterised by an Automatic Speech Recognition (ASR) system on the complexity, accuracy, and fluency of English learners in China's colleges.