Journal Articles
631 articles found
1. Cultivation of Translators for Multimodal Translation in the New Media Age
Authors: Minghui Long. Journal of Contemporary Educational Research, 2024, Issue 5, pp. 37-45 (9 pages)
In the 21st century, the development of digital and new media technologies has ushered in an age of pervasive multimodal communication, which has significantly amplified the role of multimodal translation in facilitating cross-cultural exchanges. Despite the profound impact of these developments, the prevailing translation pedagogy remains predominantly focused on the enhancement of linguistic translation skills, with noticeable neglect of the imperative to cultivate students' competencies in multimodal translation. Based on the distinctive characteristics and challenges that multimodal translation presents in the context of new media, this study delves into the formulation of educational objectives and curriculum design for the training of multimodal translators. The intent is to propose a framework that can guide the preparation of translators who are adept and equipped to navigate the complexities and demands of the contemporary age.
Keywords: multimodal communication; multimodal translation; multimodal translation competencies; visual literacy; ability to integrate verbal and non-verbal modes; curriculum design
2. Multimodal Social Media Fake News Detection Based on Similarity Inference and Adversarial Networks (Cited: 1)
Authors: Fangfang Shan, Huifang Sun, Mengyi Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 4, pp. 581-605 (25 pages)
As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection mechanisms, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news, which relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and Text Convolutional Neural Network (Text-CNN) for extracting textual features while utilizing the pre-trained Visual Geometry Group 19-layer (VGG-19) network to extract visual features. Subsequently, the model establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. This paper validates the proposed model using publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that the proposed approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models. On the Weibo dataset, the model likewise surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks significantly enhances multimodal fake news detection effectiveness. However, the current work is limited to fusing only text and image modalities; future research should integrate features from additional modalities to comprehensively represent the multifaceted information of fake news.
Keywords: fake news detection; attention mechanism; image-text similarity; multimodal feature fusion
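As a rough illustration of the image-text similarity idea in the abstract above, the sketch below weights fused features by the cosine similarity of hypothetical, already-extracted text and image vectors; the paper's actual similarity-reasoning and adversarial modules are considerably more elaborate:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fuse_features(text_feat, image_feat):
    """Concatenate text and image features, scaled by their similarity,
    so semantically consistent pairs contribute more to the classifier input.
    (Illustrative scheme only, not the paper's learned fusion.)"""
    sim = cosine_similarity(text_feat, image_feat)
    weight = (sim + 1.0) / 2.0  # map [-1, 1] to [0, 1]
    return np.concatenate([text_feat, image_feat]) * weight, sim
```

In practice the two feature vectors would come from BERT/Text-CNN and VGG-19 projections into a shared space; here they are stand-ins.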
3. Design of AI-Enhanced and Hardware-Supported Multimodal E-Skin for Environmental Object Recognition and Wireless Toxic Gas Alarm
Authors: Jianye Li, Hao Wang, Yibing Luo, Zijing Zhou, He Zhang, Huizhi Chen, Kai Tao, Chuan Liu, Lingxing Zeng, Fengwei Huo, Jin Wu. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2024, Issue 12, pp. 1-22 (22 pages)
Post-earthquake rescue missions are full of challenges due to the unstable structure of ruins and successive aftershocks. Most of the current rescue robots lack the ability to interact with environments, leading to low rescue efficiency. The proposed multimodal electronic skin (e-skin) not only reproduces the pressure, temperature, and humidity sensing capabilities of natural skin but also develops sensing functions beyond it, perceiving object proximity and NO2 gas. Its multilayer stacked structure based on Ecoflex and organohydrogel endows the e-skin with mechanical properties similar to natural skin. Rescue robots integrated with multimodal e-skin and artificial intelligence (AI) algorithms show strong environmental perception capabilities and can accurately distinguish objects and identify human limbs through grasping, laying the foundation for automated post-earthquake rescue. Besides, the combination of e-skin and NO2 wireless alarm circuits allows robots to sense toxic gases in the environment in real time, thereby adopting appropriate measures to protect trapped people from the toxic environment. Multimodal e-skin powered by AI algorithms and hardware circuits exhibits powerful environmental perception and information processing capabilities, which, as an interface for interaction with the physical world, dramatically expands intelligent robots' application scenarios.
Keywords: stretchable hydrogel sensors; multimodal e-skin; artificial intelligence; post-earthquake rescue; wireless toxic gas alarm
4. Multimodal fusion recognition for digital twin
Authors: Tianzhe Zhou, Xuguang Zhang, Bing Kang, Mingkai Chen. Digital Communications and Networks (SCIE, CSCD), 2024, Issue 2, pp. 337-346 (10 pages)
The digital twin is a concept that transcends reality: reverse feedback from the real physical space to the virtual digital space. People hold great prospects for this emerging technology. In order to realize the upgrading of the digital twin industrial chain, it is urgent to introduce more modalities, such as vision, haptics, hearing, and smell, into the virtual digital space, which assists physical entities and virtual objects in creating a closer connection. Therefore, perceptual understanding and object recognition have become an urgent hot topic in the digital twin. Existing surface material classification schemes often achieve recognition through machine learning or deep learning in a single modality, ignoring the complementarity between multiple modalities. To overcome this dilemma, we propose a multimodal fusion network that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network makes full use of the potential correlations between multiple modalities to deeply mine the modal semantics and complete the data mapping. On the other hand, the network is extensible and can be used as a universal architecture to include more modalities. Experiments show that the constructed multimodal fusion network can achieve 99.42% classification accuracy while reducing complexity.
Keywords: digital twin; multimodal fusion; object recognition; deep learning; transfer learning
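A minimal late-fusion baseline conveys the flavor of combining visual and haptic predictions for surface material recognition; this is an illustrative sketch with a made-up blending weight, not the paper's learned fusion network, which fuses intermediate representations:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def late_fusion_predict(visual_logits, haptic_logits, w_visual=0.5):
    """Blend class probabilities from two modality-specific classifiers
    and return the predicted class and fused distribution."""
    p = w_visual * softmax(visual_logits) + (1 - w_visual) * softmax(haptic_logits)
    return int(np.argmax(p)), p
```

Representation-level fusion, as in the paper, typically outperforms this kind of decision-level averaging because it can exploit cross-modal correlations before classification.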
5. Multimodal Machine Learning Guides Low Carbon Aeration Strategies in Urban Wastewater Treatment
Authors: Hong-Cheng Wang, Yu-Qi Wang, Xu Wang, Wan-Xin Yin, Ting-Chao Yu, Chen-Hao Xue, Ai-Jie Wang. Engineering (SCIE, EI, CAS, CSCD), 2024, Issue 5, pp. 51-62 (12 pages)
The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
Keywords: wastewater treatment; multimodal machine learning; deep learning; aeration control; interpretable machine learning
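The two headline metrics quoted in this abstract, mean absolute percentage error (4.4%) and the coefficient of determination (0.948), are standard regression measures; a minimal sketch of how they are computed:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent.
    Assumes no zeros in y_true."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_true - y_pred) / y_true)) * 100)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1 - ss_res / ss_tot)
```

Equivalent implementations exist in scikit-learn (`mean_absolute_percentage_error`, `r2_score`); the abstract does not state which implementation the authors used.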
6. Multiobjective Differential Evolution for Higher-Dimensional Multimodal Multiobjective Optimization
Authors: Jing Liang, Hongyu Lin, Caitong Yue, Ponnuthurai Nagaratnam Suganthan, Yaonan Wang. IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2024, Issue 6, pp. 1458-1475 (18 pages)
In multimodal multiobjective optimization problems (MMOPs), there are several Pareto optimal solutions corresponding to the identical objective vector. This paper proposes a new differential evolution algorithm to solve MMOPs with higher-dimensional decision variables. Due to the increase in the dimensions of decision variables in real-world MMOPs, it is difficult for current multimodal multiobjective optimization evolutionary algorithms (MMOEAs) to find multiple Pareto optimal solutions. The proposed algorithm adopts a dual-population framework and an improved environmental selection method. It utilizes a convergence archive to help the first population improve the quality of solutions. The improved environmental selection method enables the other population to search the remaining decision space and reserve more Pareto optimal solutions through the information of the first population. The combination of these two strategies helps to effectively balance and enhance convergence and diversity performance. In addition, to study the performance of the proposed algorithm, a novel set of multimodal multiobjective optimization test functions with extensible decision variables is designed. The proposed MMOEA is certified to be effective through comparison with six state-of-the-art MMOEAs on the test functions.
Keywords: benchmark functions; diversity measure; evolutionary algorithms; multimodal multiobjective optimization
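Two building blocks common to MMOEAs of this kind are the Pareto dominance test and, for differential evolution, the DE/rand/1 mutation; the sketch below shows both in generic form (this is not the paper's dual-population algorithm, and the parameter names are generic):

```python
import numpy as np

def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization):
    no worse in every objective and strictly better in at least one."""
    f_a, f_b = np.asarray(f_a), np.asarray(f_b)
    return bool(np.all(f_a <= f_b) and np.any(f_a < f_b))

def de_rand_1(population, i, f_scale=0.5, rng=None):
    """Classic DE/rand/1 mutant vector for individual i:
    x_r1 + F * (x_r2 - x_r3) with three distinct random peers."""
    if rng is None:
        rng = np.random.default_rng(0)
    idx = [j for j in range(len(population)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    return population[r1] + f_scale * (population[r2] - population[r3])
```

In a multimodal setting, the selection step would additionally preserve solutions that are equivalent in objective space but distant in decision space, which is where the paper's archive and environmental selection come in.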
7. A deep multimodal fusion and multitasking trajectory prediction model for typhoon trajectory prediction to reduce flight scheduling cancellation
Authors: TANG Jun, QIN Wanting, PAN Qingtao, LAO Songyang. Journal of Systems Engineering and Electronics (SCIE, CSCD), 2024, Issue 3, pp. 666-678 (13 pages)
Natural events have had a significant impact on overall flight activity, and the aviation industry plays a vital role in helping society cope with the impact of these events. As typhoon season, one of the most impactful weather periods, appears and continues, airlines operating in threatened areas and passengers with travel plans during this period pay close attention to the development of tropical storms. This paper proposes a deep multimodal fusion and multitasking trajectory prediction model that can improve the reliability of typhoon trajectory prediction and reduce the number of flight scheduling cancellations. The deep multimodal fusion module is formed by deep fusion of the features output by multiple submodal fusion modules, and the multitask generation module uses longitude and latitude as two related tasks for simultaneous prediction. With more dependable data accuracy, problems can be analyzed rapidly and more efficiently, enabling better decision-making with a proactive rather than reactive posture. When multiple modalities coexist, features can be extracted from them simultaneously to supplement each other's information. An actual case study, the typhoon Lichma that swept China in 2019, has demonstrated that the algorithm can effectively reduce the number of unnecessary flight cancellations compared to existing flight scheduling and assist the new generation of flight scheduling systems under extreme weather.
Keywords: flight scheduling optimization; deep multimodal fusion; multitasking trajectory prediction; typhoon weather; flight cancellation; prediction reliability
8. Conditional selection with CNN augmented transformer for multimodal affective analysis
Authors: Jianwen Wang, Shiping Wang, Shunxin Xiao, Renjie Lin, Mianxiong Dong, Wenzhong Guo. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 4, pp. 917-931 (15 pages)
The attention mechanism has been a successful method for multimodal affective analysis in recent years. Despite the advances, several significant challenges remain in fusing language and its nonverbal context information. One is to generate sparse attention coefficients associated with acoustic and visual modalities, which helps locate critical emotional semantics. The other is fusing complementary cross-modal representations to construct optimal salient feature combinations of multiple modalities. A Conditional Transformer Fusion Network is proposed to handle these problems. Firstly, the authors equip the transformer module with CNN layers to enhance the detection of subtle signal patterns in nonverbal sequences. Secondly, sentiment words are utilised as context conditions to guide the computation of cross-modal attention. As a result, the located nonverbal features are not only salient but also directly complementary to sentiment words. Experimental results show that the authors' method achieves state-of-the-art performance on several multimodal affective analysis datasets.
Keywords: affective computing; data fusion; information fusion; multimodal approaches
9. Audio-Text Multimodal Speech Recognition via Dual-Tower Architecture for Mandarin Air Traffic Control Communications
Authors: Shuting Ge, Jin Ren, Yihua Shi, Yujun Zhang, Shunzhi Yang, Jinfeng Yang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 3215-3245 (31 pages)
In air traffic control communications (ATCC), misunderstandings between pilots and controllers could result in fatal aviation accidents. Fortunately, advanced automatic speech recognition technology has emerged as a promising means of preventing miscommunications and enhancing aviation safety. However, most existing speech recognition methods merely incorporate external language models on the decoder side, leading to insufficient semantic alignment between speech and text modalities during the encoding phase. Furthermore, it is challenging to model acoustic context dependencies over long distances because speech sequences are longer than text, especially for the extended ATCC data. To address these issues, we propose a speech-text multimodal dual-tower architecture for speech recognition. It employs cross-modal interactions to achieve close semantic alignment during the encoding stage and strengthen its capabilities in modeling auditory long-distance context dependencies. In addition, a two-stage training strategy is elaborately devised to derive semantics-aware acoustic representations effectively. The first stage focuses on pre-training the speech-text multimodal encoding module to enhance inter-modal semantic alignment and aural long-distance context dependencies. The second stage fine-tunes the entire network to bridge the input modality variation gap between the training and inference phases and boost generalization performance. Extensive experiments demonstrate the effectiveness of the proposed speech-text multimodal speech recognition method on the ATCC and AISHELL-1 datasets. It reduces the character error rate to 6.54% and 8.73%, respectively, and exhibits substantial performance gains of 28.76% and 23.82% compared with the best baseline model. The case studies indicate that the obtained semantics-aware acoustic representations aid in accurately recognizing terms with similar pronunciations but distinctive semantics. The research provides a novel modeling paradigm for semantics-aware speech recognition in air traffic control communications, which could contribute to the advancement of intelligent and efficient aviation safety management.
Keywords: speech-text multimodal; automatic speech recognition; semantic alignment; air traffic control communications; dual-tower architecture
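The character error rate (CER) figures quoted above (6.54% and 8.73%) are conventionally computed from Levenshtein edit distance between the reference and hypothesis transcripts; a minimal sketch of the standard calculation:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, via dynamic programming."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[m][n]

def cer(ref, hyp):
    """Character error rate: total edits divided by reference length."""
    return edit_distance(ref, hyp) / len(ref)
```

For Mandarin ATCC transcripts the unit is the character, which is why CER rather than word error rate is the headline metric.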
10. A Robust Framework for Multimodal Sentiment Analysis with Noisy Labels Generated from Distributed Data Annotation
Authors: Kai Jiang, Bin Cao, Jing Fan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 6, pp. 2965-2984 (20 pages)
Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions, and voice to detect people's attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) will overfit noisy labels, leading to poor performance of the DNNs. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis to resist noisy labels and correlate distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse different modalities and improve the quality of the multimodal data features for label correction and network training. Besides, a multiple meta-learner (label corrector) strategy is proposed to enhance the label correction approach and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines.
Keywords: distributed data collection; multimodal sentiment analysis; meta learning; learning with noisy labels
11. Evolution and Prospects of Foundation Models: From Large Language Models to Large Multimodal Models
Authors: Zheyi Chen, Liuchang Xu, Hongting Zheng, Luyao Chen, Amr Tolba, Liang Zhao, Keping Yu, Hailin Feng. Computers, Materials & Continua (SCIE, EI), 2024, Issue 8, pp. 1753-1808 (56 pages)
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities like context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLMs) to Large Multimodal Models (LMMs). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. Then, it turns to the discussion of LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Keywords: artificial intelligence; large language models; large multimodal models; foundation models
12. FusionNN: A Semantic Feature Fusion Model Based on Multimodal for Web Anomaly Detection
Authors: Li Wang, Mingshan Xia, Hao Hu, Jianfang Li, Fengyao Hou, Gang Chen. Computers, Materials & Continua (SCIE, EI), 2024, Issue 5, pp. 2991-3006 (16 pages)
With the rapid development of mobile communication and the Internet, previous web anomaly detection and identification models were built relying on security experts' empirical knowledge and attack features. Although this approach can achieve higher detection performance, it requires huge human labor and resources to maintain the feature library. In contrast, semantic feature engineering can dynamically discover new semantic features and optimize feature selection by automatically analyzing the semantic information contained in the data itself, thus reducing dependence on prior knowledge. However, current semantic features still have the problem of semantic expression singularity, as they are extracted from a single semantic mode such as word segmentation, character segmentation, or arbitrary semantic feature extraction. This paper extracts features of web requests at dual semantic granularity and proposes a semantic feature fusion method to solve the above problems. The method first preprocesses web requests, then extracts word-level and character-level semantic features of URLs via convolutional neural networks (CNNs), respectively. Three loss functions are constructed to reduce the losses between features, labels, and categories. Experiments on the HTTP CSIC 2010, Malicious URLs, and HttpParams datasets verify the proposed method. Results show that, compared with machine learning methods, deep learning methods, and the BERT model, the proposed method has better detection performance, achieving the best detection rate of 99.16% on the HttpParams dataset.
Keywords: feature fusion; web anomaly detection; multimodal; convolutional neural network (CNN); semantic feature extraction
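The dual-granularity idea, a word-level and a character-level view of the same web request, can be illustrated with a toy tokenizer (the delimiter set here is a hypothetical choice, not the paper's preprocessing; in the paper, such sequences feed parallel CNN branches whose features are then fused):

```python
import re

def word_tokens(url):
    """Word-level view: split a URL on common structural delimiters."""
    return [t for t in re.split(r"[/?&=.:_-]+", url) if t]

def char_tokens(url):
    """Character-level view: the raw character sequence,
    which preserves obfuscation patterns word splitting can miss."""
    return list(url)
```

Character-level features tend to catch encoded or obfuscated payloads, while word-level features capture parameter and path semantics, which is the complementarity the fusion model exploits.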
13. Enhancing Cross-Lingual Image Description: A Multimodal Approach for Semantic Relevance and Stylistic Alignment
Authors: Emran Al-Buraihy, Dan Wang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 3913-3938 (26 pages)
Cross-lingual image description, the task of generating image captions in a target language from images and descriptions in a source language, is addressed in this study through a novel approach that combines neural network models and semantic matching techniques. Experiments conducted on the Flickr8k and AraImg2k benchmark datasets, featuring images and descriptions in English and Arabic, showcase remarkable performance improvements over state-of-the-art methods. Our model, equipped with the Image & Cross-Language Semantic Matching module and the Target Language Domain Evaluation module, significantly enhances the semantic relevance of generated image descriptions. For English-to-Arabic and Arabic-to-English cross-language image description, our approach achieves CIDEr scores of 87.9% for English and 81.7% for Arabic, emphasizing the substantial contributions of our methodology. Comparative analyses with previous works further affirm the superior performance of our approach, and visual results underscore that our model generates image captions that are both semantically accurate and stylistically consistent with the target language. In summary, this study advances the field of cross-lingual image description, offering an effective solution for generating image captions across languages, with the potential to impact multilingual communication and accessibility. Future research directions include expanding to more languages and incorporating diverse visual and textual data sources.
Keywords: cross-language image description; multimodal deep learning; semantic matching; reward mechanisms
14. Multimodal imaging diagnosis and analysis of prognostic factors in patients with adult-onset Coats disease
Authors: Wei Zhou, Hui Zhou, Yuan-Yuan Liu, Meng-Xuan Li, Xiao-Han Wu, Jiao Liang, Jing Hao, Sheng-Nan Liu, Chun-Jie Jin. International Journal of Ophthalmology (English edition) (SCIE, CAS), 2024, Issue 8, pp. 1469-1476 (8 pages)
AIM: To describe the multimodal imaging features, treatment, and outcomes of patients diagnosed with adult-onset Coats disease.
METHODS: This retrospective study included patients first diagnosed with Coats disease at ≥18 years of age between September 2017 and September 2021. Some patients received anti-vascular endothelial growth factor (VEGF) therapy (conbercept, 0.5 mg) as the initial treatment, combined with laser photocoagulation as needed. All the patients underwent best corrected visual acuity (BCVA) and intraocular pressure examinations, fundus color photography, spontaneous fluorescence tests, fundus fluorescein angiography, optical coherence tomography (OCT), OCT angiography, and other examinations. BCVA alterations and multimodal image findings in the affected eyes following treatment were compared, and the prognostic factors were analyzed.
RESULTS: The study included 15 patients who were aged 24-72 (57.33±12.61) years at presentation. Systemic hypertension was the most common associated systemic condition, occurring in 13 (86.7%) patients. Baseline BCVA ranged from 2.0 to 5.0 (4.0±1.1), which showed improvement following treatment (4.2±1.0). Multimodal imaging revealed retinal telangiectasis in 13 patients (86.7%), patchy hemorrhage in 5 patients (33.3%), and stage 2B disease (Shield's staging criteria) in 11 patients (73.3%). OCT revealed that the baseline central macular thickness (CMT) ranged from 129 to 964 μm (473.0±230.1 μm), with 13 patients (86.7%) exhibiting a baseline CMT exceeding 250 μm. Furthermore, 8 patients (53.3%) presented with an epiretinal membrane at baseline or during follow-up. Hyper-reflective scars were observed on OCT in five patients (33.3%) with poor visual prognosis. Vision deteriorated in one patient who did not receive treatment. Final vision was stable in three patients who received laser treatment, whereas improvement was observed in one of two patients who received anti-VEGF therapy alone. In addition, 8 of 9 patients (88.9%) who received laser treatment and conbercept exhibited stable or improved BCVA.
CONCLUSION: Multimodal imaging can help diagnose adult-onset Coats disease. Anti-VEGF treatment combined with laser therapy can be an option for improving or maintaining BCVA and resolving macular edema. The final visual outcome depends on macular involvement and the disease stage.
Keywords: adult-onset Coats disease; multimodal imaging; anti-vascular endothelial growth factor; conbercept
15. An Immune-Inspired Approach with Interval Allocation in Solving Multimodal Multi-Objective Optimization Problems with Local Pareto Sets
Authors: Weiwei Zhang, Jiaqiang Li, Chao Wang, Meng Li, Zhi Rao. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4237-4257 (21 pages)
In practical engineering, multi-objective optimization often encounters situations where multiple Pareto sets (PSs) in the decision space correspond to the same Pareto front (PF) in the objective space, known as Multi-Modal Multi-Objective Optimization Problems (MMOPs). Locating multiple equivalent global PSs poses a significant challenge in real-world applications, especially considering the existence of local PSs; effectively identifying and locating both global and local PSs is a major challenge. To tackle this issue, we introduce an immune-inspired reproduction strategy designed to produce more offspring in less crowded, promising regions and regulate the number of offspring in areas that have been thoroughly explored. This approach achieves a balanced trade-off between exploration and exploitation. Furthermore, we present an interval allocation strategy that adaptively assigns fitness levels to each antibody. This strategy ensures a broader survival margin for solutions in their initial stages and progressively amplifies the differences in individual fitness values as the population matures, thus fostering better population convergence. Additionally, we incorporate a multi-population mechanism that precisely manages each subpopulation through the interval allocation strategy, ensuring the preservation of both global and local PSs. Experimental results on 21 test problems, encompassing both global and local PSs, are compared with eight state-of-the-art multimodal multi-objective optimization algorithms. The results demonstrate the effectiveness of the proposed algorithm in simultaneously identifying global PSs and locally high-quality PSs.
Keywords: multimodal multi-objective optimization problem; local PSs; immune-inspired reproduction
16. Effect of different anesthetic modalities with multimodal analgesia on postoperative pain level in colorectal tumor patients
Authors: Ji-Chun Tang, Jia-Wei Ma, Jin-Jin Jian, Jie Shen, Liang-Liang Cao. World Journal of Gastrointestinal Oncology (SCIE), 2024, Issue 2, pp. 364-371 (8 pages)
BACKGROUND: According to clinical data, a significant percentage of patients experience pain after surgery, highlighting the importance of alleviating postoperative pain. The current approach involves intravenous patient-controlled analgesia, often utilizing opioid analgesics such as morphine, sufentanil, and fentanyl. Surgery for colorectal cancer typically involves general anesthesia; therefore, optimizing anesthetic management and postoperative analgesic programs can effectively reduce perioperative stress and enhance postoperative recovery. This study analyzes the impact of different anesthesia modalities with multimodal analgesia on patients' postoperative pain.
AIM: To explore the effects of different anesthesia methods coupled with multimodal analgesia on postoperative pain in patients with colorectal cancer.
METHODS: Following the inclusion and exclusion criteria, a total of 126 patients with colorectal cancer admitted to our hospital from January 2020 to December 2022 were included: 63 received general anesthesia with multimodal analgesia and were set as the control group, and 63 received general anesthesia combined with epidural anesthesia and multimodal analgesia and were set as the research group. After data collection, the effects of postoperative analgesia, sedation, and recovery were compared.
RESULTS: Compared to the control group, the research group had shorter recovery times for orientation, extubation, eye-opening, and spontaneous respiration (P<0.05). The research group also showed lower visual analog scale scores at 24 h and 48 h, higher Ramsay scores at 6 h and 12 h, and improved cognitive function at 24 h, 48 h, and 72 h (P<0.05). Additionally, interleukin-6 and interleukin-10 levels were significantly reduced at various time points in the research group compared to the control group (P<0.05). Levels of CD3+, CD4+, and CD4+/CD8+ were also lower in the research group at multiple time points (P<0.05).
CONCLUSION: For patients with colorectal cancer, general anesthesia combined with epidural anesthesia and multimodal analgesia can achieve better postoperative analgesia and sedation, promote postoperative rehabilitation, improve inflammatory stress and immune status, and offer higher safety.
Keywords: multimodal analgesia; anesthesia; colorectal cancer; postoperative pain
Multimodal emotion recognition in the metaverse era:New needs and transformation in mental health work
17
Authors: Yan Zeng, Jun-Wen Zhang, Jian Yang. World Journal of Clinical Cases (SCIE), 2024, No. 34, pp. 6674-6678.
This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms in recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Keywords: multimodal emotion recognition; metaverse; needs; transformation; mental health
Multimodal imaging in the diagnosis of bone giant cell tumors:A retrospective study
18
Authors: Ming-Qing Kou, Bing-Qiang Xu, Hui-Tong Liu. World Journal of Clinical Cases (SCIE), 2024, No. 16, pp. 2722-2728.
BACKGROUND: Giant cell tumor of bone is a locally aggressive and rarely metastasizing tumor, and also a potentially malignant tumor that may develop into a primary malignant giant cell tumor.
AIM: To evaluate the role of multimodal imaging in the diagnosis of giant cell tumors of bone.
METHODS: The data of 32 patients with giant cell tumor of bone confirmed by core-needle biopsy or surgical pathology at our hospital between March 2018 and March 2023 were retrospectively selected. All patients were examined by X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), and 7 of them were also examined by positron emission tomography (PET)-CT.
RESULTS: X-ray imaging can provide overall information on giant cell tumor lesions. CT and MRI can reveal the internal structure of the tumor as well as its adjacent relationships, and these methods have unique advantages for diagnosing tumors and determining the scope of surgery. PET-CT can detect small lesions and is highly valuable for distinguishing benign from malignant tumors, aiding in the early diagnosis of metastasis.
CONCLUSION: Multimodal imaging plays an important role in the diagnosis of giant cell tumor of bone and can provide a reference for treatment.
Keywords: giant cell tumor of bone; multimodal imaging; computed tomography; magnetic resonance imaging; positron emission tomography-computed tomography
Large multimodal models assist in psychiatry disorders prevention and diagnosis of students
19
Authors: Xin-Qiao Liu, Xin Wang, Hui-Rui Zhang. World Journal of Psychiatry (SCIE), 2024, No. 10, pp. 1415-1421.
Students are considered one of the groups most affected by psychological problems. Given the highly dangerous nature of mental illnesses and the increasingly serious state of global mental health, it is imperative to explore new methods and approaches for the prevention and treatment of mental illnesses. Large multimodal models (LMMs), as the most advanced artificial intelligence models (e.g., ChatGPT-4), have brought new hope for the accurate prevention, diagnosis, and treatment of psychiatric disorders. The assistance of these models in the promotion of mental health is critical, as the latter necessitates a strong foundation of medical knowledge and professional skills, emotional support, stigma mitigation, the encouragement of more honest patient self-disclosure, reduced health care costs, improved medical efficiency, and greater mental health service coverage. However, these models must simultaneously address challenges related to health, safety, hallucinations, and ethics. In the future, we should address these challenges by developing relevant usage manuals, accountability rules, and legal regulations; implementing a human-centered approach; and intelligently upgrading LMMs through the deep optimization of such models, their algorithms, and other means. This effort will thus substantially contribute not only to the maintenance of students' health but also to the achievement of global sustainable development goals.
Keywords: large multimodal models; ChatGPT; psychiatric disorders; mental health; students
Multimodal imaging for the diagnosis of oligodendroglioma associated with arteriovenous malformation:A case report
20
Authors: Peng Guo, Wei Sun, Ling-Xie Song, Wen-Yu Cao, Jin-Ping Li. World Journal of Radiology, 2024, No. 8, pp. 348-355.
BACKGROUND: The co-occurrence of oligodendroglioma and arteriovenous malformation (AVM) in the same intracranial location is rare.
CASE SUMMARY: Such a case, in a 61-year-old man presenting with progressive headaches, is described in this study. Preoperative multimodal imaging techniques (computed tomography, magnetic resonance imaging, magnetic resonance spectroscopy, digital subtraction angiography, and computed tomography angiography) were employed to detect hemorrhage, cystic and solid lesions, and arteriovenous shunting in the right temporal lobe. The patient underwent right temporal craniotomy for lesion removal, and postoperative pathological analysis confirmed the presence of oligodendroglioma (World Health Organization grade II, not otherwise specified) and AVM.
CONCLUSION: Preoperative multimodal imaging can help clinicians reduce the likelihood of misdiagnosis or oversight of these conditions and provides important information for subsequent treatment. This case supports the feasibility of craniotomy for the removal of glioma with AVM.
Keywords: oligodendroglioma; arteriovenous malformation; angioglioma; multimodal imaging; case report