Multimodal sensor fusion can make full use of the advantages of various sensors, make up for the shortcomings of a single sensor, achieve information verification or information security through information redundancy, and improve the reliability and safety of the system. Artificial intelligence (AI), referring to the simulation of human intelligence in machines that are programmed to think and learn like humans, represents a pivotal frontier in modern scientific research. With the continuous development and promotion of AI technology in the Sensor 4.0 age, multimodal sensor fusion is becoming more intelligent and automated, and is expected to go further in the future. Within this context, this review article takes a comprehensive look at recent progress on AI-enhanced multimodal sensors and their integrated devices and systems. Based on the concepts and principles of sensor technologies and AI algorithms, the theoretical underpinnings, technological breakthroughs, and pragmatic applications of AI-enhanced multimodal sensors in various fields such as robotics, healthcare, and environmental monitoring are highlighted. Through a comparative study of dual/tri-modal sensors with and without AI technologies (especially machine learning and deep learning), AI-enhanced multimodal sensors highlight the potential of AI to improve sensor performance, data processing, and decision-making capabilities. Furthermore, the review analyzes the challenges and opportunities afforded by AI-enhanced multimodal sensors and offers a prospective outlook on forthcoming advancements.
BACKGROUND Pancreatic cancer involving the pancreas neck and body often invades the retroperitoneal vessels, making its radical resection challenging. Multimodal treatment strategies, including neoadjuvant therapy, surgery, and postoperative adjuvant therapy, are contributing to a paradigm shift in the treatment of pancreatic cancer. This strategy is also promising in the treatment of pancreatic neck-body cancer. AIM To evaluate the feasibility and effectiveness of a multimodal strategy for the treatment of borderline/locally advanced pancreatic neck-body cancer. METHODS From January 2019 to December 2021, we reviewed the demographic characteristics, neoadjuvant and adjuvant treatment data, intraoperative and postoperative variables, and follow-up outcomes of patients who underwent multimodal treatment for pancreatic neck-body cancer in a prospectively collected database of our hospital. This investigation was reported in line with the Preferred Reporting of Case Series in Surgery criteria. RESULTS A total of 11 patients with pancreatic neck-body cancer were included in this study, of whom 6 were borderline resectable and 5 were locally advanced. Through multidisciplinary team discussion, all patients received neoadjuvant therapy, of whom 8 (73%) achieved a partial response and 3 maintained stable disease. After multidisciplinary team reassessment, all patients underwent laparoscopic subtotal distal pancreatectomy with portal vein reconstruction and achieved R0 resection. Postoperatively, two patients (18%) developed ascites and two (18%) developed pancreatic fistulae. The median length of stay was 11 days (range: 10-15 days). All patients received postoperative adjuvant therapy. During follow-up, three patients experienced tumor recurrence, with a median disease-free survival of 13.3 months and a median overall survival of 20.5 months. CONCLUSION A multimodal treatment strategy combining neoadjuvant therapy, laparoscopic subtotal distal pancreatectomy, and adjuvant therapy is safe and feasible in patients with pancreatic neck-body cancer.
Thunderstorm wind gusts are small in scale, typically occurring within a range of a few kilometers. It is extremely challenging to monitor and forecast thunderstorm wind gusts using only automatic weather stations. Therefore, it is necessary to establish thunderstorm wind gust identification techniques based on multisource high-resolution observations. This paper introduces a new algorithm, the thunderstorm wind gust identification network (TGNet), which leverages multimodal feature fusion to combine the temporal and spatial features of thunderstorm wind gust events. The shapelet transform is first used to extract temporal features of wind speed from automatic weather stations, aiming to distinguish thunderstorm wind gusts from gusts caused by synoptic-scale systems or typhoons. Then an encoder, structured upon the U-shaped network (U-Net) and incorporating recurrent residual convolutional blocks (R2U-Net), is employed to extract the corresponding spatial convective characteristics from satellite, radar, and lightning observations. Finally, a multimodal deep fusion module based on multi-head cross-attention incorporates the temporal wind-speed features at each automatic weather station into the spatial features to obtain a 10-minute classification of thunderstorm wind gusts. TGNet products have high accuracy, with a critical success index reaching 0.77. Compared with U-Net and R2U-Net, the false alarm rate of TGNet products decreases by 31.28% and 24.15%, respectively. The new algorithm provides gridded thunderstorm wind gust products with a spatial resolution of 0.01°, updated every 10 minutes. The results are finer and more accurate, helping to improve the accuracy of operational warnings for thunderstorm wind gusts.
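As a rough illustration of the fusion step described above, the sketch below (not the authors' implementation; the feature width, single attention head, and random stand-in features are assumptions) shows how a station's temporal wind-speed feature can attend over gridded spatial features via scaled dot-product cross-attention:

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, keys, values):
    """Scaled dot-product cross-attention: the station's temporal
    feature (query) attends over spatial grid features (keys/values)."""
    d = query.shape[-1]
    weights = softmax(query @ keys.T / np.sqrt(d))  # (1, n_cells)
    return weights @ values                          # (1, d) fused feature

rng = np.random.default_rng(1)
d = 64                                   # assumed feature width
temporal = rng.normal(size=(1, d))       # shapelet-based wind-speed feature
spatial = rng.normal(size=(100, d))      # e.g. a 10x10 grid of radar/satellite features

fused = cross_attention(temporal, spatial, spatial)
```

In the actual TGNet this would be repeated per head and per station, with learned query/key/value projections.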
BACKGROUND Recent advancements in artificial intelligence (AI) have significantly enhanced the capabilities of endoscopic-assisted diagnosis for gastrointestinal diseases. AI has shown great promise in clinical practice, particularly for diagnostic support, offering real-time insights into complex conditions such as esophageal squamous cell carcinoma. CASE SUMMARY In this study, we introduce a multimodal AI system that successfully identified and delineated a small, flat carcinoma during esophagogastroduodenoscopy, highlighting its potential for early detection of malignancies. The lesion was confirmed as high-grade squamous intraepithelial neoplasia, with pathology results supporting the AI system's accuracy. The multimodal AI system offers an integrated solution that provides real-time, accurate diagnostic information directly within the endoscopic device interface, allowing single-monitor use without disrupting the endoscopist's workflow. CONCLUSION This work underscores the transformative potential of AI to enhance endoscopic diagnosis by enabling earlier, more accurate interventions.
As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news that relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) to extract textual features, while utilizing the pre-trained Visual Geometry Group 19-layer network (VGG-19) to extract visual features. Subsequently, the model establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. This paper validates the proposed model using publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that the proposed approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models; on the Weibo dataset, it surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks significantly enhances multimodal fake news detection. However, the current research is limited to the fusion of text and image modalities; future work should integrate features from additional modalities to comprehensively represent the multifaceted information of fake news.
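The similarity-learning idea summarized above can be sketched minimally as follows. This is a hypothetical illustration, not the paper's trained model: the projection weights are random, the shared-space size (128) is an assumption, and the 768-d/4096-d vectors are stand-ins for real BERT/Text-CNN and VGG-19 outputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def project(x, W, b):
    """Map a modality-specific feature into a shared space; tanh keeps
    activations bounded, a common choice for similarity learning."""
    return np.tanh(x @ W + b)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Stand-ins for a 768-d text vector and a 4096-d VGG-19 image vector.
text_feat = rng.normal(size=768)
img_feat = rng.normal(size=4096)

d = 128  # assumed shared-space dimensionality
Wt = rng.normal(scale=0.02, size=(768, d))
Wi = rng.normal(scale=0.02, size=(4096, d))

t = project(text_feat, Wt, np.zeros(d))
v = project(img_feat, Wi, np.zeros(d))
sim = cosine_similarity(t, v)             # low similarity can flag a mismatched
fused = np.concatenate([t, v, t * v])     # text/image pair; fused goes to the classifier
```

In practice the projections are learned end-to-end, and the fused vector feeds the detector and the adversarial event discriminator.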
Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based Pre-trained Language Models (PLMs) have excelled in Natural Language Processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities such as in-context learning that smaller models lack. The advancement in Large Language Models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from Large Language Models (LLMs) to Large Multimodal Models (LMMs). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. It then turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
The attention mechanism has been a successful method for multimodal affective analysis in recent years. Despite the advances, several significant challenges remain in fusing language with its nonverbal context information. One is generating sparse attention coefficients associated with the acoustic and visual modalities, which helps locate critical emotional semantics. The other is fusing complementary cross-modal representations to construct optimal salient feature combinations of multiple modalities. A Conditional Transformer Fusion Network is proposed to handle these problems. Firstly, the authors equip the transformer module with CNN layers to enhance the detection of subtle signal patterns in nonverbal sequences. Secondly, sentiment words are utilised as context conditions to guide the computation of cross-modal attention. As a result, the located nonverbal features are not only salient but also directly complementary to the sentiment words. Experimental results show that the authors' method achieves state-of-the-art performance on several multimodal affective analysis datasets.
Multimodal freight transportation emerges as the go-to strategy for cost-effectively and sustainably moving goods over long distances. In a multimodal freight system, where a single contract covers various transportation methods, businesses aiming for economic success must make well-informed decisions about which modes of transport to use. These decisions prioritize secure deliveries, competitive cost advantages, and the minimization of environmental footprints associated with transportation-related pollution. Within the dynamic landscape of logistics innovation, various multicriteria decision-making (MCDM) approaches empower businesses to evaluate freight transport options thoroughly. In this study, we use a case study to demonstrate the application of the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) to freight mode selection. We further enhance the TOPSIS framework by integrating the entropy weight coefficient method, which assigns precise weights to each criterion involved in mode selection and leads to a more reliable decision-making process. The proposed model provides cost-effective and timely deliveries, minimizing environmental footprint while meeting consumers' needs. Our findings reveal that freight carbon footprint is the primary concern, followed by freight cost, time sensitivity, and service reliability. The study identifies the Rail/Truck combination as the ideal mode of transport and containers in flat cars (COFC) as the next best option for the selected case. The proposed algorithm, incorporating the enhanced TOPSIS framework, benefits companies navigating the complexities of multimodal transport: it empowers them to make more strategic and informed transportation decisions, which will be increasingly valuable as trade within global supply chains continues to grow.
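The entropy-weighted TOPSIS procedure described above can be sketched directly. The decision matrix below is illustrative only, not the paper's case-study data; mode names and criterion values are assumptions:

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: criteria whose values vary more across
    alternatives carry more information and thus receive more weight."""
    P = X / X.sum(axis=0)                       # column-wise proportions
    m = X.shape[0]
    E = -(P * np.log(P)).sum(axis=0) / np.log(m)  # per-criterion entropy
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()

def topsis(X, weights, benefit):
    """Rank alternatives by relative closeness to the ideal solution.
    benefit[j] is True for criteria to maximize, False to minimize."""
    R = X / np.sqrt((X ** 2).sum(axis=0))       # vector normalization
    V = R * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)              # closeness coefficient in [0, 1]

# Hypothetical matrix: rows = transport options, columns =
# (carbon footprint, cost, transit time, reliability).
X = np.array([[120.0, 900.0, 48.0, 0.95],   # Rail/Truck
              [200.0, 700.0, 36.0, 0.90],   # Truck only
              [150.0, 800.0, 60.0, 0.97]])  # COFC
w = entropy_weights(X)
scores = topsis(X, w, benefit=np.array([False, False, False, True]))
ranking = np.argsort(-scores)               # best alternative first
```

With real criterion data, the row with the highest closeness coefficient is the recommended mode.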
The digital twin is a concept that transcends reality: reverse feedback from the real physical space into the virtual digital space. People hold great prospects for this emerging technology. To upgrade the digital twin industrial chain, it is urgent to introduce more modalities, such as vision, haptics, hearing, and smell, into the virtual digital space, helping physical entities and virtual objects create a closer connection. Perceptual understanding and object recognition have therefore become urgent hot topics in the digital twin field. Existing surface material classification schemes often achieve recognition through machine learning or deep learning in a single modality, ignoring the complementarity between multiple modalities. To overcome this dilemma, we propose a multimodal fusion network that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network makes full use of the potential correlations between modalities to deeply mine modal semantics and complete the data mapping. On the other hand, the network is extensible and can serve as a universal architecture accommodating more modalities. Experiments show that the constructed multimodal fusion network achieves 99.42% classification accuracy while reducing complexity.
In multimodal multiobjective optimization problems (MMOPs), several Pareto optimal solutions correspond to the identical objective vector. This paper proposes a new differential evolution algorithm to solve MMOPs with higher-dimensional decision variables. Due to the increase in the dimensionality of decision variables in real-world MMOPs, it is difficult for current multimodal multiobjective optimization evolutionary algorithms (MMOEAs) to find multiple Pareto optimal solutions. The proposed algorithm adopts a dual-population framework and an improved environmental selection method. It utilizes a convergence archive to help the first population improve the quality of its solutions. The improved environmental selection method enables the other population to search the remaining decision space and preserve more Pareto optimal solutions using information from the first population. The combination of these two strategies effectively balances and enhances convergence and diversity. In addition, to study the performance of the proposed algorithm, a novel set of multimodal multiobjective optimization test functions with extensible decision variables is designed. The proposed MMOEA is shown to be effective through comparison with six state-of-the-art MMOEAs on the test functions.
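For readers unfamiliar with the reproduction operator underlying such algorithms, a generic DE/rand/1/bin step is sketched below. This is the textbook operator, not the paper's dual-population algorithm; population size, dimensionality, F, and CR are illustrative:

```python
import numpy as np

def de_rand_1_bin(pop, F=0.5, CR=0.9, rng=None):
    """DE/rand/1/bin: for each target vector, build a mutant from three
    distinct random individuals, then binomially cross with the target."""
    rng = rng or np.random.default_rng()
    n, dim = pop.shape
    trial = pop.copy()
    for i in range(n):
        candidates = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])   # differential mutation
        mask = rng.random(dim) < CR                  # binomial crossover
        mask[rng.integers(dim)] = True               # at least one gene crosses
        trial[i, mask] = mutant[mask]
    return trial

pop = np.random.default_rng(2).random((20, 10))  # 20 individuals, 10-d decision space
offspring = de_rand_1_bin(pop)
```

A multimodal multiobjective DE wraps this operator with niching and environmental selection so that equivalent Pareto sets in different decision-space regions all survive.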
The fracture toughness of extruded Mg-1Zn-2Y (at.%) alloys, featuring a multimodal microstructure containing fine dynamically recrystallized (DRXed) grains with random crystallographic orientation and coarse worked grains with a strong fiber texture, was investigated. The DRXed grains comprised randomly oriented equiaxed α-Mg grains. In contrast, the worked grains included α-Mg and long-period stacking ordered (LPSO) phases that extended in the extrusion direction (ED). Both types displayed a strong texture, aligning the ⟨10-10⟩ direction parallel to the ED. The volume fractions of the DRXed and worked grains were controlled by adjusting the extrusion temperature. In the longitudinal-transverse (L-T) orientation, where the loading direction was parallel to the ED, the conditional fracture toughness, KQ, tended to increase as the volume fraction of worked grains increased. However, the KQ values in the T-L orientation, where the loading direction was perpendicular to the ED, decreased with an increase in the volume fraction of worked grains. This indicates strong anisotropy in the fracture toughness of specimens with a high volume fraction of worked grains, relative to the test direction. The worked grains, which included the LPSO phase and were elongated perpendicular to the initial crack plane, suppressed straight crack extension, causing crack deflection and generating secondary cracks. Thus, these worked grains contributed significantly to the fracture toughness of the extruded Mg-1Zn-2Y alloys in the L-T orientation.
BACKGROUND According to clinical data, a significant percentage of patients experience pain after surgery, highlighting the importance of alleviating postoperative pain. The current approach involves intravenous patient-controlled analgesia, often utilizing opioid analgesics such as morphine, sufentanil, and fentanyl. Surgery for colorectal cancer typically involves general anesthesia. Therefore, optimizing anesthetic management and postoperative analgesic programs can effectively reduce perioperative stress and enhance postoperative recovery. This study analyzes the impact of different anesthesia modalities combined with multimodal analgesia on patients' postoperative pain. AIM To explore the effects of different anesthesia methods coupled with multimodal analgesia on postoperative pain in patients with colorectal cancer. METHODS Following the inclusion and exclusion criteria, a total of 126 patients with colorectal cancer admitted to our hospital from January 2020 to December 2022 were included: 63 received general anesthesia with multimodal analgesia and were set as the control group, and 63 received general anesthesia combined with epidural anesthesia and multimodal analgesia and were set as the research group. After data collection, the effects of postoperative analgesia, sedation, and recovery were compared. RESULTS Compared to the control group, the research group had shorter recovery times for orientation, extubation, eye opening, and spontaneous respiration (P<0.05). The research group also showed lower Visual Analog Scale scores at 24 h and 48 h, higher Ramsay scores at 6 h and 12 h, and improved cognitive function at 24 h, 48 h, and 72 h (P<0.05). Additionally, interleukin-6 and interleukin-10 levels were significantly reduced at various time points in the research group compared to the control group (P<0.05). Levels of CD3+, CD4+, and CD4+/CD8+ were also lower in the research group at multiple time points (P<0.05). CONCLUSION For patients with colorectal cancer, general anesthesia combined with epidural anesthesia and multimodal analgesia achieves better postoperative analgesia and sedation, promotes postoperative rehabilitation, improves inflammatory stress and immune status, and has higher safety.
User identity linkage (UIL) refers to identifying user accounts belonging to the same identity across different social media platforms. Most current research is based on text analysis, which fails to fully explore the rich image resources generated by users; existing attempts touch on the multimodal domain but still face the challenge of semantic differences between text and images. Given this, we investigate the UIL task across different social media platforms based on multimodal user-generated contents (UGCs). We introduce the efficient user identity linkage via aligned multimodal features and temporal correlation (EUIL) approach. The method first generates captions for user-posted images with the BLIP model, alleviating the problem of missing textual information. Subsequently, we extract aligned text and image features with the CLIP model, which closely aligns the two modalities and significantly reduces the semantic gap. Accordingly, we construct a set of adapter modules to integrate the multimodal features. Furthermore, we design a temporal weight assignment mechanism to incorporate the temporal dimension of user behavior. We evaluate the proposed scheme on the real-world social dataset TWIN; the results show that our method reaches 86.39% accuracy, demonstrating its excellence in handling multimodal data and providing strong algorithmic support for UIL.
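One minimal sketch of what a temporal weight assignment mechanism could look like is shown below. The exponential half-life decay, the 30-day half-life, and the 512-d stand-in features are assumptions for illustration, not the EUIL design:

```python
import numpy as np

def temporal_weights(timestamps_days, half_life=30.0):
    """Exponential-decay weights: more recent posts contribute more to
    the user representation (half_life is an assumed hyperparameter)."""
    age = np.max(timestamps_days) - np.asarray(timestamps_days, dtype=float)
    w = 0.5 ** (age / half_life)
    return w / w.sum()

def user_embedding(post_feats, timestamps_days):
    """Time-weighted average of per-post aligned (CLIP-like) features."""
    w = temporal_weights(timestamps_days)
    return w @ post_feats

feats = np.random.default_rng(3).normal(size=(5, 512))  # 5 posts, 512-d stand-ins
emb = user_embedding(feats, timestamps_days=[0, 10, 20, 40, 60])
```

Two accounts can then be compared by the similarity of their time-weighted embeddings, so a user's recent behavior dominates the match.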
In practical engineering, multi-objective optimization often encounters situations where multiple Pareto sets (PSs) in the decision space correspond to the same Pareto front (PF) in the objective space, known as Multi-Modal Multi-Objective Optimization Problems (MMOPs). Locating multiple equivalent global PSs poses a significant challenge in real-world applications, especially considering the existence of local PSs; effectively identifying and locating both global and local PSs is a major challenge. To tackle this issue, we introduce an immune-inspired reproduction strategy designed to produce more offspring in less crowded, promising regions and to regulate the number of offspring in areas that have been thoroughly explored, achieving a balanced trade-off between exploration and exploitation. Furthermore, we present an interval allocation strategy that adaptively assigns fitness levels to each antibody. This strategy ensures a broader survival margin for solutions in their initial stages and progressively amplifies the differences in individual fitness values as the population matures, fostering better population convergence. Additionally, we incorporate a multi-population mechanism that precisely manages each subpopulation through the interval allocation strategy, ensuring the preservation of both global and local PSs. Experimental results on 21 test problems, encompassing both global and local PSs, are compared with eight state-of-the-art multimodal multi-objective optimization algorithms. The results demonstrate the effectiveness of the proposed algorithm in simultaneously identifying global Pareto sets and locally high-quality PSs.
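One plausible way to realize such a crowding-aware reproduction strategy is sketched below. This is an assumption-laden illustration (k-nearest-neighbour sparsity as the crowding estimate, multinomial sampling of quotas), not the paper's exact operator:

```python
import numpy as np

def offspring_quota(pop, total_offspring, k=3, rng=None):
    """Immune-inspired allocation: individuals in sparse regions of the
    decision space receive proportionally more offspring (clones)."""
    rng = rng or np.random.default_rng()
    # Crowding estimate: mean distance to the k nearest neighbours.
    d = np.linalg.norm(pop[:, None, :] - pop[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # ignore self-distance
    k = min(k, len(pop) - 1)
    sparsity = np.sort(d, axis=1)[:, :k].mean(axis=1)
    p = sparsity / sparsity.sum()               # sparser -> higher probability
    return rng.multinomial(total_offspring, p)  # per-individual offspring counts

pop = np.random.default_rng(4).random((10, 2))  # 10 individuals, 2-d decision space
quota = offspring_quota(pop, total_offspring=50)
```

Individuals in well-explored (dense) regions thus reproduce less, which is the exploration/exploitation balance the strategy aims for.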
Cross-lingual image description, the task of generating image captions in a target language from images and descriptions in a source language, is addressed in this study through a novel approach that combines neural network models and semantic matching techniques. Experiments conducted on the Flickr8k and AraImg2k benchmark datasets, featuring images and descriptions in English and Arabic, showcase remarkable performance improvements over state-of-the-art methods. Our model, equipped with the Image & Cross-Language Semantic Matching module and the Target Language Domain Evaluation module, significantly enhances the semantic relevance of generated image descriptions. For English-to-Arabic and Arabic-to-English cross-language image description, our approach achieves CIDEr scores of 87.9% and 81.7% for English and Arabic, respectively, emphasizing the substantial contributions of our methodology. Comparative analyses with previous works further affirm the superior performance of our approach, and visual results underscore that our model generates image captions that are both semantically accurate and stylistically consistent with the target language. In summary, this study advances the field of cross-lingual image description, offering an effective solution for generating image captions across languages, with the potential to impact multilingual communication and accessibility. Future research directions include expanding to more languages and incorporating diverse visual and textual data sources.
This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms of recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
AIM: To describe the multimodal imaging features, treatment, and outcomes of patients diagnosed with adult-onset Coats disease. METHODS: This retrospective study included patients first diagnosed with Coats disease at ≥18 years of age between September 2017 and September 2021. Some patients received anti-vascular endothelial growth factor (VEGF) therapy (conbercept, 0.5 mg) as the initial treatment, combined with laser photocoagulation as needed. All patients underwent best corrected visual acuity (BCVA) and intraocular pressure examinations, fundus color photography, autofluorescence tests, fundus fluorescein angiography, optical coherence tomography (OCT), OCT angiography, and other examinations. BCVA alterations and multimodal imaging findings in the affected eyes following treatment were compared, and prognostic factors were analyzed. RESULTS: The study included 15 patients aged 24-72 (57.33±12.61) years at presentation. Systemic hypertension was the most common associated systemic condition, occurring in 13 (86.7%) patients. Baseline BCVA ranged from 2.0 to 5.0 (4.0±1.1) and showed improvement following treatment (4.2±1.0). Multimodal imaging revealed retinal telangiectasia in 13 patients (86.7%), patchy hemorrhage in 5 (33.3%), and stage 2B disease (Shields' staging criteria) in 11 (73.3%). OCT revealed a baseline central macular thickness (CMT) of 129 to 964 μm (473.0±230.1 μm), with 13 patients (86.7%) exhibiting a baseline CMT exceeding 250 μm. Furthermore, 8 patients (53.3%) presented with an epiretinal membrane at baseline or during follow-up. Hyper-reflective scars were observed on OCT in five patients (33.3%) with poor visual prognosis. Vision deteriorated in the one patient who did not receive treatment. Final vision was stable in the three patients who received laser treatment, whereas improvement was observed in one of the two patients who received anti-VEGF therapy alone. In addition, 8 of 9 patients (88.9%) who received both laser treatment and conbercept exhibited stable or improved BCVA. CONCLUSION: Multimodal imaging can help diagnose adult-onset Coats disease. Anti-VEGF treatment combined with laser therapy can be an option for improving or maintaining BCVA and resolving macular edema. The final visual outcome depends on macular involvement and disease stage.
Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions, and voice to detect people's attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) overfit noisy labels, leading to poor performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis that resists noisy labels and correlates distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse the different modalities and improve the quality of multimodal data features for label correction and network training. Besides, a multiple meta-learner (label corrector) strategy is proposed to enhance label correction and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines.
The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared to traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
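The two headline metrics the abstract reports, mean absolute percentage error (MAPE) and the coefficient of determination (R²), can be computed in a few lines of plain Python. A minimal sketch; the aeration quantities below are hypothetical placeholders, not data from the paper:

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical aeration quantities (m^3/h): observed vs. model forecast
obs  = [120.0, 135.0, 150.0, 142.0, 128.0]
pred = [118.0, 138.0, 146.0, 145.0, 126.0]
print(round(mape(obs, pred), 2), round(r2(obs, pred), 3))  # prints: 2.05 0.923
```

A MAPE of 4.4% and R² of 0.948, as reported for the random-forest multimodal model, would correspond to forecasts with roughly twice the relative error of this toy series.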
Post-earthquake rescue missions are full of challenges due to the unstable structure of ruins and successive aftershocks. Most current rescue robots lack the ability to interact with environments, leading to low rescue efficiency. The proposed multimodal electronic skin (e-skin) not only reproduces the pressure, temperature, and humidity sensing capabilities of natural skin but also develops sensing functions beyond it, perceiving object proximity and NO2 gas. Its multilayer stacked structure based on Ecoflex and organohydrogel endows the e-skin with mechanical properties similar to natural skin. Rescue robots integrated with multimodal e-skin and artificial intelligence (AI) algorithms show strong environmental perception capabilities and can accurately distinguish objects and identify human limbs through grasping, laying the foundation for automated post-earthquake rescue. Besides, the combination of e-skin and NO2 wireless alarm circuits allows robots to sense toxic gases in the environment in real time, thereby adopting appropriate measures to protect trapped people from the toxic environment. Multimodal e-skin powered by AI algorithms and hardware circuits exhibits powerful environmental perception and information processing capabilities, which, as an interface for interaction with the physical world, dramatically expands intelligent robots' application scenarios.
Funding: supported by the National Natural Science Foundation of China (No. 62404111), the Natural Science Foundation of Jiangsu Province (No. BK20240635), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (No. 24KJB510025), the Natural Science Research Start-up Foundation of Recruiting Talents of Nanjing University of Posts and Telecommunications (Nos. NY223157 and NY223156), and the Opening Project of the Advanced Integrated Circuit Package and Testing Research Center of Jiangsu Province (No. NTIKFJJ202303).
Abstract: Multimodal sensor fusion can make full use of the advantages of various sensors, make up for the shortcomings of a single sensor, achieve information verification or information security through information redundancy, and improve the reliability and safety of the system. Artificial intelligence (AI), referring to the simulation of human intelligence in machines that are programmed to think and learn like humans, represents a pivotal frontier in modern scientific research. With the continuous development and promotion of AI technology in the Sensor 4.0 age, multimodal sensor fusion is becoming increasingly intelligent and automated, and is expected to go further in the future. Within this context, this review article takes a comprehensive look at recent progress on AI-enhanced multimodal sensors and their integrated devices and systems. Based on the concepts and principles of sensor technologies and AI algorithms, the theoretical underpinnings, technological breakthroughs, and pragmatic applications of AI-enhanced multimodal sensors in fields such as robotics, healthcare, and environmental monitoring are highlighted. Through a comparative study of dual/tri-modal sensors with and without AI technologies (especially machine learning and deep learning), AI-enhanced multimodal sensors highlight the potential of AI to improve sensor performance, data processing, and decision-making capabilities. Furthermore, the review analyzes the challenges and opportunities afforded by AI-enhanced multimodal sensors, and offers a prospective outlook on forthcoming advancements.
Funding: Supported by the Hunan Province Clinical Medical Technology Innovation Guidance Project, No. 2020SK50912; the Annual Scientific Research Plan Project of the Hunan Provincial Health Commission, No. C2019057; and the Hunan Provincial Natural Science Foundation of China, No. 2023JJ40381.
Abstract: BACKGROUND: Pancreatic cancer involving the pancreas neck and body often invades the retroperitoneal vessels, making its radical resection challenging. Multimodal treatment strategies, including neoadjuvant therapy, surgery, and postoperative adjuvant therapy, are contributing to a paradigm shift in the treatment of pancreatic cancer. This strategy is also promising in the treatment of pancreatic neck-body cancer. AIM: To evaluate the feasibility and effectiveness of a multimodal strategy for the treatment of borderline/locally advanced pancreatic neck-body cancer. METHODS: From January 2019 to December 2021, we reviewed the demographic characteristics, neoadjuvant and adjuvant treatment data, intraoperative and postoperative variables, and follow-up outcomes of patients who underwent multimodal treatment for pancreatic neck-body cancer in a prospectively collected database of our hospital. This investigation was reported in line with the Preferred Reporting of Case Series in Surgery criteria. RESULTS: A total of 11 patients with pancreatic neck-body cancer were included in this study, of whom 6 were borderline resectable and 5 were locally advanced. Through multidisciplinary team discussion, all patients received neoadjuvant therapy, of whom 8 (73%) achieved a partial response and 3 maintained stable disease. After multidisciplinary team reassessment, all patients underwent laparoscopic subtotal distal pancreatectomy and portal vein reconstruction and achieved R0 resection. Postoperatively, two patients (18%) developed ascites, and two patients (18%) developed pancreatic fistulae. The median length of stay was 11 days (range: 10-15 days). All patients received postoperative adjuvant therapy. During the follow-up, three patients experienced tumor recurrence, with a median disease-free survival time of 13.3 months and a median overall survival time of 20.5 months. CONCLUSION: A multimodal treatment strategy combining neoadjuvant therapy, laparoscopic subtotal distal pancreatectomy, and adjuvant therapy is safe and feasible in patients with pancreatic neck-body cancer.
Funding: supported by the National Key Research and Development Program of China (Grant No. 2022YFC3004104), the National Natural Science Foundation of China (Grant No. U2342204), the Innovation and Development Program of the China Meteorological Administration (Grant No. CXFZ2024J001), the Open Research Project of the Key Open Laboratory of Hydrology and Meteorology of the China Meteorological Administration (Grant No. 23SWQXZ010), the Science and Technology Plan Project of Zhejiang Province (Grant No. 2022C03150), the Open Research Fund Project of the Anyang National Climate Observatory (Grant No. AYNCOF202401), and the Open Bidding for Selecting the Best Candidates Program (Grant No. CMAJBGS202318).
Abstract: Thunderstorm wind gusts are small in scale, typically occurring within a range of a few kilometers. It is extremely challenging to monitor and forecast thunderstorm wind gusts using only automatic weather stations. Therefore, it is necessary to establish thunderstorm wind gust identification techniques based on multisource high-resolution observations. This paper introduces a new algorithm, called the thunderstorm wind gust identification network (TGNet). It leverages multimodal feature fusion to fuse the temporal and spatial features of thunderstorm wind gust events. The shapelet transform is first used to extract the temporal features of wind speeds from automatic weather stations, aiming to distinguish thunderstorm wind gusts from those caused by synoptic-scale systems or typhoons. Then, the encoder, structured upon the U-shaped network (U-Net) and incorporating recurrent residual convolutional blocks (R2U-Net), is employed to extract the corresponding spatial convective characteristics of satellite, radar, and lightning observations. Finally, using a multimodal deep fusion module based on multi-head cross-attention, the temporal features of wind speed at each automatic weather station are incorporated into the spatial features to obtain 10-min classifications of thunderstorm wind gusts. TGNet products have high accuracy, with a critical success index reaching 0.77. Compared with those of U-Net and R2U-Net, the false alarm rate of TGNet products decreases by 31.28% and 24.15%, respectively. The new algorithm provides grid products of thunderstorm wind gusts with a spatial resolution of 0.01°, updated every 10 minutes. The results are finer and more accurate, thereby helping to improve the accuracy of operational warnings for thunderstorm wind gusts.
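The shapelet transform that TGNet applies to station wind speeds reduces, at its core, to the minimum Euclidean distance between a candidate subsequence (shapelet) and every equal-length window of a time series; series whose distance to a gust-shaped shapelet is small are flagged as convective. A minimal sketch with made-up wind-speed values, not data from the paper:

```python
import math

def shapelet_distance(series, shapelet):
    """Minimum Euclidean distance between a shapelet and all
    equal-length sliding windows of the series (core of the shapelet transform)."""
    m = len(shapelet)
    best = math.inf
    for i in range(len(series) - m + 1):
        d = math.sqrt(sum((series[i + j] - shapelet[j]) ** 2 for j in range(m)))
        best = min(best, d)
    return best

# Hypothetical 1-min wind speeds (m/s): a sharp gust ramp vs. a flat record
gust_shapelet = [4.0, 8.0, 14.0, 9.0]          # convective gust signature
stormy = [3.0, 4.0, 8.0, 14.0, 9.0, 5.0]       # contains the ramp
calm   = [3.0, 3.5, 3.0, 3.2, 3.1, 3.0]        # no ramp
print(shapelet_distance(stormy, gust_shapelet) < shapelet_distance(calm, gust_shapelet))  # True
```

In the full method these distances become input features for the classifier rather than a hard threshold, and shapelets are learned from labeled gust events rather than hand-written.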
Funding: Supported by the 135 High-end Talent Project of West China Hospital, Sichuan University, No. ZYDG23029.
Abstract: BACKGROUND: Recent advancements in artificial intelligence (AI) have significantly enhanced the capabilities of endoscopic-assisted diagnosis for gastrointestinal diseases. AI has shown great promise in clinical practice, particularly for diagnostic support, offering real-time insights into complex conditions such as esophageal squamous cell carcinoma. CASE SUMMARY: In this study, we introduce a multimodal AI system that successfully identified and delineated a small and flat carcinoma during esophagogastroduodenoscopy, highlighting its potential for early detection of malignancies. The lesion was confirmed as high-grade squamous intraepithelial neoplasia, with pathology results supporting the AI system's accuracy. The multimodal AI system offers an integrated solution that provides real-time, accurate diagnostic information directly within the endoscopic device interface, allowing for single-monitor use without disrupting the endoscopist's workflow. CONCLUSION: This work underscores the transformative potential of AI to enhance endoscopic diagnosis by enabling earlier, more accurate interventions.
Funding: the National Natural Science Foundation of China (No. 62302540), with author F.F.S. (https://www.nsfc.gov.cn/); the Open Foundation of the Henan Key Laboratory of Cyberspace Situation Awareness (No. HNTS2022020), where F.F.S. is an author (http://xt.hnkjt.gov.cn/data/pingtai/); the Natural Science Foundation of Henan Province Youth Science Fund Project (No. 232300420422) (https://kjt.henan.gov.cn/2022/09-02/2599082.html); and the Natural Science Foundation of Zhongyuan University of Technology (No. K2023QN018), where F.F.S. is an author (https://www.zut.edu.cn/).
Abstract: As social networks become increasingly complex, contemporary fake news often includes textual descriptions of events accompanied by corresponding images or videos. Fake news in multiple modalities is more likely to create a misleading perception among users. While early research primarily focused on text-based features for fake news detection mechanisms, there has been relatively limited exploration of learning shared representations in multimodal (text and visual) contexts. To address these limitations, this paper introduces a multimodal model for detecting fake news, which relies on similarity reasoning and adversarial networks. The model employs Bidirectional Encoder Representations from Transformers (BERT) and a Text Convolutional Neural Network (Text-CNN) for extracting textual features while utilizing the pre-trained Visual Geometry Group 19-layer network (VGG-19) to extract visual features. Subsequently, the model establishes similarity representations between the textual features extracted by Text-CNN and the visual features through similarity learning and reasoning. Finally, these features are fused to enhance the accuracy of fake news detection, and adversarial networks are employed to investigate the relationship between fake news and events. This paper validates the proposed model using publicly available multimodal datasets from Weibo and Twitter. Experimental results demonstrate that our proposed approach achieves superior performance on Twitter, with an accuracy of 86%, surpassing traditional unimodal models and existing multimodal models. Likewise, the overall performance of our model on the Weibo dataset surpasses the benchmark models across multiple metrics. The application of similarity reasoning and adversarial networks significantly enhances multimodal fake news detection effectiveness in this paper. However, current research is limited to the fusion of only text and image modalities. Future research directions should aim to further integrate features from additional modalities to comprehensively represent the multifaceted information of fake news.
Funding: We acknowledge funding from NSFC Grant 62306283.
Abstract: Since the 1950s, when the Turing Test was introduced, there has been notable progress in machine language intelligence. Language modeling, crucial for AI development, has evolved from statistical to neural models over the last two decades. Recently, transformer-based pre-trained language models (PLMs) have excelled in natural language processing (NLP) tasks by leveraging large-scale training corpora. Increasing the scale of these models enhances performance significantly, introducing abilities, such as in-context learning, that smaller models lack. The advancement in large language models, exemplified by the development of ChatGPT, has made significant impacts both academically and industrially, capturing widespread societal interest. This survey provides an overview of the development and prospects from large language models (LLMs) to large multimodal models (LMMs). It first discusses the contributions and technological advancements of LLMs in the field of natural language processing, especially in text generation and language understanding. It then turns to LMMs, which integrate various data modalities such as text, images, and sound, demonstrating advanced capabilities in understanding and generating cross-modal content and paving new pathways for the adaptability and flexibility of AI systems. Finally, the survey highlights the prospects of LMMs in terms of technological development and application potential, while also pointing out challenges in data integration and cross-modal understanding accuracy, providing a comprehensive perspective on the latest developments in this field.
Funding: National Key Research and Development Plan of China, Grant/Award Number: 2021YFB3600503; National Natural Science Foundation of China, Grant/Award Numbers: 62276065, U21A20472.
Abstract: The attention mechanism has been a successful method for multimodal affective analysis in recent years. Despite the advances, several significant challenges remain in fusing language and its nonverbal context information. One is to generate sparse attention coefficients associated with acoustic and visual modalities, which helps locate critical emotional semantics. The other is fusing complementary cross-modal representations to construct optimal salient feature combinations of multiple modalities. A Conditional Transformer Fusion Network is proposed to handle these problems. Firstly, the authors equip the transformer module with CNN layers to enhance the detection of subtle signal patterns in nonverbal sequences. Secondly, sentiment words are utilised as context conditions to guide the computation of cross-modal attention. As a result, the located nonverbal features are not only salient but also directly complementary to sentiment words. Experimental results show that the authors' method achieves state-of-the-art performance on several multimodal affective analysis datasets.
Abstract: Multimodal freight transportation emerges as the go-to strategy for cost-effectively and sustainably moving goods over long distances. In a multimodal freight system, where a single contract includes various transportation methods, businesses aiming for economic success must make well-informed decisions about which modes of transport to use. These decisions prioritize secure deliveries, competitive cost advantages, and the minimization of environmental footprints associated with transportation-related pollution. Within the dynamic landscape of logistics innovation, various multicriteria decision-making (MCDM) approaches empower businesses to evaluate freight transport options thoroughly. In this study, we utilize a case study to demonstrate the application of the Technique for Order Preference by Similarity to the Ideal Solution (TOPSIS) algorithm for MCDM in freight mode selection. We further enhance the TOPSIS framework by integrating the entropy weight coefficient method. This enhancement aids in assigning precise weights to each criterion involved in mode selection, leading to a more reliable decision-making process. The proposed model provides cost-effective and timely deliveries, minimizing environmental footprint and meeting consumers' needs. Our findings reveal that freight carbon footprint is the primary concern, followed by freight cost, time sensitivity, and service reliability. The study identifies the combination of Rail/Truck as the ideal mode of transport and containers on flat cars (COFC) as the next best option for the selected case. The proposed algorithm, incorporating the enhanced TOPSIS framework, benefits companies navigating the complexities of multimodal transport, empowering them to make more strategic and informed transportation decisions. This demonstration will be increasingly valuable as companies navigate ever-growing trade within global supply chains.
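The entropy-weighted TOPSIS procedure described above can be sketched compactly: objective criterion weights are derived from the dispersion of the decision matrix itself, then alternatives are ranked by relative closeness to the ideal solution. A minimal sketch; the mode and criterion values below are illustrative placeholders, not figures from the case study:

```python
import math

def entropy_weights(matrix):
    """Objective criterion weights from Shannon entropy of the decision matrix."""
    m = len(matrix)
    k = 1.0 / math.log(m)
    weights = []
    for col in zip(*matrix):
        s = sum(col)
        p = [x / s for x in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        weights.append(1.0 - e)               # degree of divergence
    total = sum(weights)
    return [w / total for w in weights]

def topsis(matrix, weights, benefit):
    """Closeness of each alternative to the ideal solution.
    benefit[j] is True for criteria to maximize, False to minimize."""
    norms = [math.sqrt(sum(x * x for x in col)) for col in zip(*matrix)]
    v = [[w * x / n for x, w, n in zip(row, weights, norms)] for row in matrix]
    vcols = list(zip(*v))
    ideal = [max(c) if b else min(c) for c, b in zip(vcols, benefit)]
    worst = [min(c) if b else max(c) for c, b in zip(vcols, benefit)]
    scores = []
    for row in v:
        dp = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        dm = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(dm / (dp + dm))
    return scores

# Hypothetical modes x criteria: [cost $/ton, transit days, CO2 kg/ton, reliability %]
modes = [[95.0, 6.0, 40.0, 92.0],    # Rail/Truck
         [80.0, 9.0, 55.0, 85.0],    # COFC
         [140.0, 3.0, 120.0, 97.0]]  # Air/Truck
w = entropy_weights(modes)
scores = topsis(modes, w, benefit=[False, False, False, True])
print([round(s, 3) for s in scores])
```

The highest-scoring alternative is the recommended mode; the entropy step replaces subjective expert weighting, which is the enhancement the study attributes to the entropy weight coefficient method.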
Funding: the National Natural Science Foundation of China (62001246, 62001248, 62171232); the Key R&D Program of Jiangsu Province, Key Project and Topics, under Grant BE2021095; the Natural Science Foundation of Jiangsu Province Higher Education Institutions (20KJB510020); the Future Network Scientific Research Fund Project (FNSRFP-2021-YB-16); the open research fund of the Key Lab of Broadband Wireless Communication and Sensor Network Technology (JZNY202110); and the NUPTSF under Grant NY220070.
Abstract: The digital twin is the concept of transcending reality, i.e., reverse feedback from real physical space to virtual digital space. People hold great prospects for this emerging technology. To realize the upgrading of the digital twin industrial chain, it is urgent to introduce more modalities, such as vision, haptics, hearing, and smell, into the virtual digital space, which helps physical entities and virtual objects create a closer connection. Therefore, perceptual understanding and object recognition have become an urgent hot topic in the digital twin. Existing surface material classification schemes often achieve recognition through machine learning or deep learning in a single modality, ignoring the complementarity between multiple modalities. To overcome this dilemma, we propose a multimodal fusion network in this article that combines two modalities, visual and haptic, for surface material recognition. On the one hand, the network makes full use of the potential correlations between multiple modalities to deeply mine the modal semantics and complete the data mapping. On the other hand, the network is extensible and can be used as a universal architecture to include more modalities. Experiments show that the constructed multimodal fusion network can achieve 99.42% classification accuracy while reducing complexity.
Funding: supported in part by the National Natural Science Foundation of China (62106230, U23A20340, 62376253, 62176238), the China Postdoctoral Science Foundation (2023M743185), and the Key Laboratory of Big Data Intelligent Computing, Chongqing University of Posts and Telecommunications, Open Foundation (BDIC-2023-A-007).
Abstract: In multimodal multiobjective optimization problems (MMOPs), several Pareto optimal solutions correspond to the identical objective vector. This paper proposes a new differential evolution algorithm to solve MMOPs with higher-dimensional decision variables. Due to the increase in the dimensions of decision variables in real-world MMOPs, it is difficult for current multimodal multiobjective optimization evolutionary algorithms (MMOEAs) to find multiple Pareto optimal solutions. The proposed algorithm adopts a dual-population framework and an improved environmental selection method. It utilizes a convergence archive to help the first population improve the quality of solutions. The improved environmental selection method enables the other population to search the remaining decision space and reserve more Pareto optimal solutions through the information of the first population. The combination of these two strategies helps to effectively balance and enhance convergence and diversity performance. In addition, to study the performance of the proposed algorithm, a novel set of multimodal multiobjective optimization test functions with extensible decision variables is designed. The proposed MMOEA is certified to be effective through comparison with six state-of-the-art MMOEAs on the test functions.
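The Pareto optimality notion underlying MMOPs can be illustrated with a tiny dominance check. Note how two identical objective vectors, which in an MMOP may come from distinct decision vectors, both survive non-dominated filtering; the numbers are illustrative, not from the paper:

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): no worse on every objective,
    strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Keep only solutions not dominated by any other point (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical objective vectors (f1, f2), both minimized. The repeated
# (2.0, 2.0) stands in for two distinct decision vectors with equal objectives.
objs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0), (2.0, 2.0)]
print(nondominated(objs))  # (3.0, 3.0) is dominated; both (2.0, 2.0) copies remain
```

An MMOEA must preserve such equivalent solutions in decision space, which is why the dual-population framework above devotes one population to convergence and the other to retaining additional Pareto optimal solutions.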
Funding: supported by the JST CREST Research Area "Nanomechanics" [JPMJCR2094], the JSPS KAKENHI for Scientific Research B [JP21H01673], and the AMADA Foundation [AF-2023044-C2].
Abstract: The fracture toughness of extruded Mg-1Zn-2Y (at.%) alloys, featuring a multimodal microstructure containing fine dynamically recrystallized (DRXed) grains with random crystallographic orientation and coarse worked grains with a strong fiber texture, was investigated. The DRXed grains comprised randomly oriented equiaxed α-Mg grains. In contrast, the worked grains included α-Mg and long-period stacking ordered (LPSO) phases that extended in the extrusion direction (ED). Both types displayed a strong texture, aligning the (10.10) direction parallel to the ED. The volume fractions of the DRXed and worked grains were controlled by adjusting the extrusion temperature. In the longitudinal-transverse (L-T) orientation, where the loading direction was aligned parallel to the ED, the conditional fracture toughness, KQ, tended to increase as the volume fraction of the worked grains increased. However, the KQ values in the T-L orientation, where the loading direction was perpendicular to the ED, decreased with an increase in the volume fraction of the worked grains. This suggests strong anisotropy in the fracture toughness of specimens with a high volume fraction of worked grains, relative to the test direction. The worked grains, which included the LPSO phase and were elongated perpendicular to the initial crack plane, suppressed straight crack extension, causing crack deflection and generating secondary cracks. Thus, these worked grains significantly contributed to the fracture toughness of the extruded Mg-1Zn-2Y alloys in the L-T orientation.
Abstract: BACKGROUND: According to clinical data, a significant percentage of patients experience pain after surgery, highlighting the importance of alleviating postoperative pain. The current approach involves intravenous self-controlled analgesia, often utilizing opioid analgesics such as morphine, sufentanil, and fentanyl. Surgery for colorectal cancer typically involves general anesthesia. Therefore, optimizing anesthetic management and postoperative analgesic programs can effectively reduce perioperative stress and enhance postoperative recovery. This study analyzes the impact of different anesthesia modalities with multimodal analgesia on patients' postoperative pain. AIM: To explore the effects of different anesthesia methods coupled with multimodal analgesia on postoperative pain in patients with colorectal cancer. METHODS: Following the inclusion and exclusion criteria, a total of 126 patients with colorectal cancer admitted to our hospital from January 2020 to December 2022 were included, of whom 63 received general anesthesia coupled with multimodal analgesia and were set as the control group, and 63 received general anesthesia combined with epidural anesthesia coupled with multimodal analgesia and were set as the research group. After data collection, the effects of postoperative analgesia, sedation, and recovery were compared. RESULTS: Compared to the control group, the research group had shorter recovery times for orientation, extubation, eye-opening, and spontaneous respiration (P<0.05). The research group also showed lower visual analog scale scores at 24 h and 48 h, higher Ramsay scores at 6 h and 12 h, and improved cognitive function at 24 h, 48 h, and 72 h (P<0.05). Additionally, interleukin-6 and interleukin-10 levels were significantly reduced at various time points in the research group compared to the control group (P<0.05). Levels of CD3+, CD4+, and CD4+/CD8+ were also lower in the research group at multiple time points (P<0.05). CONCLUSION: For patients with colorectal cancer, general anesthesia coupled with epidural anesthesia and multimodal analgesia can achieve better postoperative analgesia and sedation effects, promote postoperative rehabilitation, improve inflammatory stress and immune status, and have higher safety.
Abstract: User identity linkage (UIL) refers to identifying user accounts belonging to the same identity across different social media platforms. Most current research is based on text analysis, which fails to fully explore the rich image resources generated by users; existing attempts that touch on the multimodal domain still face the challenge of semantic differences between text and images. Given this, we investigate the UIL task across different social media platforms based on multimodal user-generated contents (UGCs). We innovatively introduce the efficient user identity linkage via aligned multimodal features and temporal correlation (EUIL) approach. The method first generates captions for user-posted images with the BLIP model, alleviating the problem of missing textual information. Subsequently, we extract aligned text and image features with the CLIP model, which closely aligns the two modalities and significantly reduces the semantic gap. Accordingly, we construct a set of adapter modules to integrate the multimodal features. Furthermore, we design a temporal weight assignment mechanism to incorporate the temporal dimension of user behavior. We evaluate the proposed scheme on the real-world social dataset TWIN, and the results show that our method reaches 86.39% accuracy, which demonstrates its excellence in handling multimodal data and provides strong algorithmic support for UIL.
Funding: supported in part by the Science and Technology Project of Yunnan Tobacco Industrial Company under Grant JB2022YL02, in part by the Natural Science Foundation of Henan Province of China under Grant 242300421413, and in part by the Henan Province Science and Technology Research Projects under Grants 242102110334 and 242102110375.
Abstract: In practical engineering, multi-objective optimization often encounters situations where multiple Pareto sets (PSs) in the decision space correspond to the same Pareto front (PF) in the objective space, known as multimodal multi-objective optimization problems (MMOPs). Locating multiple equivalent global PSs poses a significant challenge in real-world applications, especially considering the existence of local PSs. Effectively identifying and locating both global and local PSs is a major challenge. To tackle this issue, we introduce an immune-inspired reproduction strategy designed to produce more offspring in less crowded, promising regions and regulate the number of offspring in areas that have been thoroughly explored. This approach achieves a balanced trade-off between exploration and exploitation. Furthermore, we present an interval allocation strategy that adaptively assigns fitness levels to each antibody. This strategy ensures a broader survival margin for solutions in their initial stages and progressively amplifies the differences in individual fitness values as the population matures, thus fostering better population convergence. Additionally, we incorporate a multi-population mechanism that precisely manages each subpopulation through the interval allocation strategy, ensuring the preservation of both global and local PSs. Experimental results on 21 test problems, encompassing both global and local PSs, are compared with eight state-of-the-art multimodal multi-objective optimization algorithms. The results demonstrate the effectiveness of the proposed algorithm in simultaneously identifying global PSs and locating high-quality local PSs.
Abstract: Cross-lingual image description, the task of generating image captions in a target language from images and descriptions in a source language, is addressed in this study through a novel approach that combines neural network models and semantic matching techniques. Experiments conducted on the Flickr8k and AraImg2k benchmark datasets, featuring images and descriptions in English and Arabic, showcase remarkable performance improvements over state-of-the-art methods. Our model, equipped with the Image & Cross-Language Semantic Matching module and the Target Language Domain Evaluation module, significantly enhances the semantic relevance of generated image descriptions. For English-to-Arabic and Arabic-to-English cross-language image description, our approach achieves CIDEr scores of 87.9% and 81.7% for English and Arabic, respectively, emphasizing the substantial contributions of our methodology. Comparative analyses with previous works further affirm the superior performance of our approach, and visual results underscore that our model generates image captions that are both semantically accurate and stylistically consistent with the target language. In summary, this study advances the field of cross-lingual image description, offering an effective solution for generating image captions across languages, with the potential to impact multilingual communication and accessibility. Future research directions include expanding to more languages and incorporating diverse visual and textual data sources.
Funding: Supported by the Education and Teaching Reform Project of the First Clinical College of Chongqing Medical University, No. CMER202305, and the Natural Science Foundation of Tibet Autonomous Region, No. XZ2024ZR-ZY100(Z).
Abstract: This editorial comments on an article recently published by López del Hoyo et al. The metaverse, hailed as "the successor to the mobile Internet", is undoubtedly one of the most fashionable terms in recent years. Although metaverse development is a complex and multifaceted evolutionary process influenced by many factors, it is almost certain that it will significantly impact our lives, including mental health services. Like any other technological advancement, the metaverse era presents a double-edged sword for mental health work, which must clearly understand the needs and transformations of its target audience. In this editorial, our primary focus is to contemplate potential new needs and transformations in mental health work during the metaverse era from the perspective of multimodal emotion recognition.
Abstract: AIM: To describe the multimodal imaging features, treatment, and outcomes of patients diagnosed with adult-onset Coats disease. METHODS: This retrospective study included patients first diagnosed with Coats disease at ≥18 years of age between September 2017 and September 2021. Some patients received anti-vascular endothelial growth factor (VEGF) therapy (conbercept, 0.5 mg) as the initial treatment, combined with laser photocoagulation as needed. All patients underwent best corrected visual acuity (BCVA) and intraocular pressure examinations, fundus color photography, spontaneous fluorescence tests, fundus fluorescein angiography, optical coherence tomography (OCT), OCT angiography, and other examinations. BCVA alterations and multimodal imaging findings in the affected eyes following treatment were compared, and prognostic factors were analyzed. RESULTS: The study included 15 patients aged 24-72 (57.33±12.61) years at presentation. Systemic hypertension was the most common associated systemic condition, occurring in 13 (86.7%) patients. Baseline BCVA ranged from 2.0 to 5.0 (4.0±1.1) and improved following treatment (4.2±1.0). Multimodal imaging revealed retinal telangiectasis in 13 patients (86.7%), patchy hemorrhage in 5 patients (33.3%), and stage 2B disease (Shields' staging criteria) in 11 patients (73.3%). OCT revealed that baseline central macular thickness (CMT) ranged from 129 to 964 μm (473.0±230.1 μm), with 13 patients (86.7%) exhibiting a baseline CMT exceeding 250 μm. Furthermore, 8 patients (53.3%) presented with an epiretinal membrane at baseline or during follow-up. Hyper-reflective scars were observed on OCT in five patients (33.3%) with poor visual prognosis. Vision deteriorated in one patient who did not receive treatment. Final vision was stable in three patients who received laser treatment, whereas improvement was observed in one of two patients who received anti-VEGF therapy alone. In addition, 8 of 9 patients (88.9%) who received laser treatment and conbercept exhibited stable or improved BCVA. CONCLUSION: Multimodal imaging can help diagnose adult-onset Coats disease. Anti-VEGF treatment combined with laser therapy can be an option for improving or maintaining BCVA and resolving macular edema. The final visual outcome depends on macular involvement and the disease stage.
基金supported by STI 2030-Major Projects 2021ZD0200400National Natural Science Foundation of China(62276233 and 62072405)Key Research Project of Zhejiang Province(2023C01048).
Abstract: Multimodal sentiment analysis utilizes multimodal data such as text, facial expressions, and voice to detect people's attitudes. With the advent of distributed data collection and annotation, we can easily obtain and share such multimodal data. However, due to professional discrepancies among annotators and lax quality control, noisy labels might be introduced. Recent research suggests that deep neural networks (DNNs) overfit noisy labels, leading to poor DNN performance. To address this challenging problem, we present a Multimodal Robust Meta Learning framework (MRML) for multimodal sentiment analysis that resists noisy labels and correlates distinct modalities simultaneously. Specifically, we propose a two-layer fusion net to deeply fuse different modalities and improve the quality of the multimodal data features for label correction and network training. Besides, a multiple meta-learner (label corrector) strategy is proposed to enhance the label correction approach and prevent models from overfitting to noisy labels. We conducted experiments on three popular multimodal datasets to verify the superiority of our method by comparing it with four baselines.
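The abstract does not detail the meta-learner label corrector. As a minimal sketch of the general idea behind label correction, assuming nothing about MRML's actual architecture, one common simplification is to blend the (possibly noisy) one-hot annotation with the model's own prediction; the function name and the blending weight `alpha` are hypothetical:

```python
import numpy as np

def correct_labels(noisy_labels, model_probs, alpha=0.5):
    """Soft label correction: blend one-hot noisy labels with model predictions.

    A simplified stand-in for a learned label corrector; `alpha` weights
    trust in the annotated label versus the model's current belief.
    """
    n_classes = model_probs.shape[1]
    one_hot = np.eye(n_classes)[noisy_labels]      # annotations as one-hot rows
    return alpha * one_hot + (1.0 - alpha) * model_probs

# Two samples, two sentiment classes; the first annotation disagrees
# strongly with the model and gets pulled toward the prediction.
probs = np.array([[0.8, 0.2], [0.3, 0.7]])
labels = np.array([1, 1])
corrected = correct_labels(labels, probs, alpha=0.5)  # [[0.4, 0.6], [0.15, 0.85]]
```

In meta-learning approaches such as the one the paper describes, `alpha` (or the whole correction function) would itself be learned on a small clean validation set rather than fixed.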
Funding: Supported by the National Natural Science Foundation of China (52230004 and 52293445), the Key Research and Development Project of Shandong Province (2020CXGC011202-005), and the Shenzhen Science and Technology Program (KCXFZ20211020163404007 and KQTD20190929172630447).
Abstract: The potential for reducing greenhouse gas (GHG) emissions and energy consumption in wastewater treatment can be realized through intelligent control, with machine learning (ML) and multimodality emerging as a promising solution. Here, we introduce an ML technique based on multimodal strategies, focusing specifically on intelligent aeration control in wastewater treatment plants (WWTPs). The generalization of the multimodal strategy is demonstrated on eight ML models. The results demonstrate that this multimodal strategy significantly enhances model indicators for ML in environmental science and the efficiency of aeration control, exhibiting exceptional performance and interpretability. Integrating random forest with visual models achieves the highest accuracy in forecasting aeration quantity among the multimodal models, with a mean absolute percentage error of 4.4% and a coefficient of determination of 0.948. Practical testing in a full-scale plant reveals that the multimodal model can reduce operation costs by 19.8% compared with traditional fuzzy control methods. The potential application of these strategies in critical water science domains is discussed. To foster accessibility and promote widespread adoption, the multimodal ML models are freely available on GitHub, thereby eliminating technical barriers and encouraging the application of artificial intelligence in urban wastewater treatment.
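The two accuracy metrics the abstract reports (mean absolute percentage error of 4.4% and coefficient of determination of 0.948) follow standard definitions and are straightforward to compute from predicted versus observed aeration quantities. A minimal sketch with toy data (not the paper's measurements):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, reported in percent."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, float)
    y_pred = np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy observed vs. forecast aeration quantities (arbitrary units).
y_true = [100.0, 120.0, 80.0]
y_pred = [98.0, 125.0, 78.0]
err = mape(y_true, y_pred)        # ~2.89 %
fit = r_squared(y_true, y_pred)   # 0.95875
```

Note that MAPE is undefined when any observed value is zero, which matters for intermittently aerated basins.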
Funding: Supported by the National Natural Science Foundation of China (61801525), the independent fund of the State Key Laboratory of Optoelectronic Materials and Technologies (Sun Yat-sen University) under Grant No. OEMT-2022-ZRC-05, the Opening Project of the State Key Laboratory of Polymer Materials Engineering (Sichuan University) (Grant No. sklpme2023-3-5), the Foundation of the State Key Laboratory of Transducer Technology (No. SKT2301), the Shenzhen Science and Technology Program (JCYJ20220530161809020 and JCYJ20220818100415033), the Young Top Talent of the Fujian Young Eagle Program of Fujian Province, the Natural Science Foundation of Fujian Province (2023J02013), and the National Key R&D Program of China (2022YFB2802051).
Abstract: Post-earthquake rescue missions are full of challenges due to the unstable structure of ruins and successive aftershocks. Most current rescue robots lack the ability to interact with environments, leading to low rescue efficiency. The multimodal electronic skin (e-skin) proposed here not only reproduces the pressure, temperature, and humidity sensing capabilities of natural skin but also develops sensing functions beyond it, perceiving object proximity and NO2 gas. Its multilayer stacked structure based on Ecoflex and organohydrogel endows the e-skin with mechanical properties similar to natural skin. Rescue robots integrated with the multimodal e-skin and artificial intelligence (AI) algorithms show strong environmental perception capabilities and can accurately distinguish objects and identify human limbs through grasping, laying the foundation for automated post-earthquake rescue. Besides, the combination of the e-skin and NO2 wireless alarm circuits allows robots to sense toxic gases in the environment in real time, thereby adopting appropriate measures to protect trapped people from the toxic environment. Multimodal e-skin powered by AI algorithms and hardware circuits exhibits powerful environmental perception and information processing capabilities, which, as an interface for interaction with the physical world, dramatically expands intelligent robots' application scenarios.