Abstract: With the rise of internet-based social media, emoji, which convey emotion quickly and accurately in graphical form, have become image-text symbols widely used in everyday communication. Prior work has shown that incorporating emoji information into text-based emotion recognition models plays an important role in improving model performance. However, most existing emotion recognition models that consider emoji information learn emoji representations with word-embedding models; the resulting emoji vectors lack a direct link to the target emotions and therefore carry little information useful for emotion recognition. To address this problem, this paper uses soft labels to construct, for each emoji, an emotion distribution vector directly associated with the target emotions, combines this emoji emotion distribution information with text semantic information from a pre-trained model, and proposes EIFER (Emoji Emotion Distribution Information Fusion for Multi-label Emotion Recognition). Building on the classic binary cross-entropy loss, EIFER introduces a label-correlation-aware loss to model the correlations among emotion labels and thereby improve multi-label emotion recognition performance. The EIFER model consists of a semantic information module, an emoji information module, and a multi-loss prediction module, and is trained end-to-end. Comparative emotion prediction experiments on the SemEval-2018 English dataset show that the proposed EIFER method outperforms existing emotion recognition methods.
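The abstract does not give the exact form of the label-correlation-aware loss, so the following is only a minimal sketch, assuming a commonly used pairwise formulation that penalizes negative labels scoring above positive ones, combined with binary cross-entropy via a hypothetical mixing weight `alpha`:

```python
# Minimal sketch (assumption, not the paper's exact formulation) of combining
# binary cross-entropy with a label-correlation-aware (LCA) term for
# multi-label emotion recognition, in the spirit of EIFER's multi-loss module.
import torch
import torch.nn.functional as F


def lca_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Penalize samples where an absent label scores higher than a present one.

    logits:  (batch, n_labels) raw scores
    targets: (batch, n_labels) multi-hot ground truth
    """
    losses = []
    for s, y in zip(logits, targets):
        pos = s[y == 1]          # scores of labels that are present
        neg = s[y == 0]          # scores of labels that are absent
        if len(pos) == 0 or len(neg) == 0:
            continue             # no usable (negative, positive) pair
        # exp(neg - pos) over all (negative, positive) pairs, averaged
        pairwise = torch.exp(neg.unsqueeze(1) - pos.unsqueeze(0))
        losses.append(pairwise.mean())
    if not losses:
        return logits.new_zeros(())
    return torch.stack(losses).mean()


def multi_loss(logits, targets, alpha: float = 0.2):
    """Weighted mix of BCE and the label-correlation term.

    alpha is a hypothetical mixing weight; the abstract does not specify one.
    """
    bce = F.binary_cross_entropy_with_logits(logits, targets.float())
    return (1 - alpha) * bce + alpha * lca_loss(logits, targets)
```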
Abstract: The developed system for eye and face detection using Convolutional Neural Network (CNN) models, followed by eye classification and voice-based assistance, has shown promising potential for enhancing accessibility for individuals with visual impairments. The modular approach implemented in this research allows information and assistance to flow seamlessly between the different components of the system. The research contributes to accessibility technology by integrating computer vision, natural language processing, and voice technologies, offering a practical and efficient solution for assisting blind individuals. The modular design ensures flexibility, scalability, and ease of integration with existing assistive technologies. However, further research and improvements are needed to enhance the system's accuracy and usability. Fine-tuning the CNN models and expanding the training dataset can improve eye and face detection as well as eye classification, while incorporating real-time responses through more sophisticated natural language understanding and expanding the ChatGPT knowledge base can help the system provide more comprehensive and accurate answers. Overall, this research paves the way for more advanced and robust systems for assisting visually impaired individuals. By integrating cutting-edge technologies into a modular framework, it contributes to a more inclusive and accessible society. Future work can focus on refining the system, addressing its limitations, and conducting user studies to evaluate its effectiveness and impact in real-world scenarios.
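The abstract describes the pipeline only at the module level, so the sketch below is an illustration of that modular flow (detection, classification, voice feedback), not the authors' implementation. The model file names, class labels, and thresholds are hypothetical placeholders, and the ChatGPT integration is omitted:

```python
# Minimal sketch of the modular pipeline described above
# (face/eye detection -> eye-state classification -> voice feedback).
import cv2                      # pip install opencv-python
import numpy as np
import pyttsx3                  # pip install pyttsx3 (offline text-to-speech)
from tensorflow.keras.models import load_model  # pip install tensorflow

detector = load_model("face_eye_detector.h5")        # hypothetical detection CNN
classifier = load_model("eye_state_classifier.h5")   # hypothetical open/closed CNN
tts = pyttsx3.init()


def preprocess(frame: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Resize and scale a BGR frame into the input tensor the CNNs expect."""
    img = cv2.resize(frame, size).astype("float32") / 255.0
    return img[np.newaxis, ...]  # add batch dimension


def describe(frame: np.ndarray) -> str:
    """Detection module, then classification module; build a spoken message."""
    face_prob = float(detector.predict(preprocess(frame))[0][0])
    if face_prob < 0.5:          # hypothetical decision threshold
        return "No face detected in front of the camera."
    eye_state = classifier.predict(preprocess(frame))[0]
    state = "open" if eye_state.argmax() == 0 else "closed"
    return f"A face is detected and the eyes appear to be {state}."


def assist(frame: np.ndarray) -> None:
    """Voice-assistance module: speak the description aloud."""
    tts.say(describe(frame))
    tts.runAndWait()
```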
Funding: Supported by the Science Foundation of China University of Petroleum, Beijing (No. 2462023YXZZ006) and the Undergraduate Key Teaching Reform Project (30GK2312).
Abstract: In the digital era, emojis have enriched the way people communicate, and research on emojis has grown explosively in recent years. However, few studies have examined their functions from a neurocognitive perspective, especially their similarities to and differences from facial expressions in traditional face-to-face communication. To fill this gap, we conducted a meta-analysis of 25 independent effect sizes from previous experimental studies. The results show that emojis have a slight advantage in processing efficiency, which might be attributed to their simplicity of design, namely the omission of complex facial features, but the difference between emoji and face processing is not significant. In addition, emotional valence and experimental method have no significant influence, suggesting that emojis are as effective as human faces for emotional expression. This research contributes to our understanding of digital communication and the crucial role emojis play in it.
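The abstract does not report the pooling model used, so the following is only an illustrative sketch of how independent effect sizes are typically combined in such a meta-analysis, using a standard DerSimonian-Laird random-effects estimator; the effect sizes and variances shown are made-up placeholders, not data from the study:

```python
# Illustration only: DerSimonian-Laird random-effects pooling of effect sizes.
import numpy as np
from scipy import stats   # pip install scipy


def random_effects_pool(effects: np.ndarray, variances: np.ndarray):
    """Return the pooled effect, its standard error, and a two-sided p-value."""
    k = len(effects)
    w = 1.0 / variances                          # fixed-effect weights
    fixed_mean = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed_mean) ** 2)  # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)           # between-study variance
    w_star = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    p = 2 * (1 - stats.norm.cdf(abs(pooled / se)))
    return pooled, se, p


# Hypothetical example with made-up effect sizes (Hedges' g) and variances.
g = np.array([0.12, -0.05, 0.20, 0.08, 0.01])
v = np.array([0.02, 0.03, 0.025, 0.04, 0.015])
print(random_effects_pool(g, v))
```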
Abstract: Drawing on multimodal metaphor theory, this article studies multimodal metaphors of emotion. Emotions can be divided into positive and negative emotions: positive emotion metaphors include happiness and love metaphors, while negative emotion metaphors include anger, fear, and sadness metaphors. These metaphors intuitively represent the source domain through physical signs, sensory effects, orientational dynamics, and physical presentation close to real life, and the emotional multimodal metaphors in emojis serve narrative and social functions.