Abstract: Automatic image classification is the first step toward semantic understanding of an object in the computer vision area. The key challenge for accurate object recognition is the ability to extract robust features from images taken at various viewpoints and to rapidly calculate the similarity between features in an image database or video stream. To solve these problems, an effective and rapid image classification method was presented for object recognition based on a video learning technique. The optical-flow and RANSAC algorithms were used to acquire scene images from each video sequence. After the selection of scene images, the local maximum points on the corners of the object within a local area were found using the Harris corner detection algorithm, and several attributes of the local block around each feature point were calculated using the scale invariant feature transform (SIFT) to extract local descriptors. Finally, the extracted local descriptors were learned with a three-dimensional pyramid match kernel. Experimental results show that the method can extract features from multi-viewpoint images of a query video and calculate the similarity between a query image and images in the database.
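A minimal sketch of the corner-plus-descriptor step described above, using OpenCV: corners are selected with a Harris-based response and SIFT describes the local block around each one. The function name, patch size, and thresholds are illustrative assumptions rather than the authors' implementation, and the optical-flow/RANSAC scene selection and pyramid match kernel stages are omitted.

```python
# Illustrative sketch (not the paper's code): detect Harris-style corners
# and compute a SIFT descriptor on the local block around each one.
import cv2
import numpy as np

def extract_local_descriptors(gray, max_corners=200, patch_size=16):
    # Local maxima of the Harris corner response.
    corners = cv2.goodFeaturesToTrack(
        gray, maxCorners=max_corners, qualityLevel=0.01,
        minDistance=8, useHarrisDetector=True, k=0.04)
    if corners is None:
        return np.empty((0, 128), dtype=np.float32)
    # Wrap each corner as a keypoint so SIFT describes the surrounding patch.
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_size)
                 for x, y in corners.reshape(-1, 2)]
    _, descriptors = cv2.SIFT_create().compute(gray, keypoints)
    return descriptors if descriptors is not None else np.empty((0, 128), np.float32)

# Usage on one scene image selected from the video:
# gray = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2GRAY)
# desc = extract_local_descriptors(gray)   # one 128-D SIFT vector per corner
```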
Funding: Supported by the Future Network Scientific Research Fund Project of Jiangsu Province (No. FNSRFP2021YB26), the Jiangsu Key R&D Fund on Social Development (No. BE2022789), and the Science Foundation of Nanjing Institute of Technology (No. ZKJ202003).
Abstract: Facial expression recognition (FER) in video has attracted increasing interest and many approaches have been proposed. The crucial problem in classifying a given video sequence into several basic emotions is how to fuse the facial features of individual frames. In this paper, a frame-level attention module is integrated into an improved VGG-based framework and a lightweight facial expression recognition method is proposed. The proposed network takes a sub-video clipped from an experimental video sequence as its input and generates a fixed-dimension representation. The VGG-based network with an enhanced branch embeds face images into feature vectors. The frame-level attention module learns weights that are used to adaptively aggregate the feature vectors into a single discriminative video representation. Finally, a regression module outputs the classification results. Experimental results on the CK+ and AFEW databases show that the recognition rates of the proposed method reach state-of-the-art performance.
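As an illustration of the frame-level attention idea, the sketch below is a hedged PyTorch example, not the paper's network: per-frame feature vectors are aggregated with learned softmax weights into one video representation. The feature dimension, number of emotion classes, and the plain linear classifier are assumptions standing in for the VGG-based embedder, enhanced branch, and regression module.

```python
# Illustrative sketch of frame-level attention pooling (assumed shapes).
import torch
import torch.nn as nn

class FrameAttentionPool(nn.Module):
    """Aggregate per-frame features into one video representation
    using learned attention weights, then classify emotions."""
    def __init__(self, feat_dim=512, num_classes=7):
        super().__init__()
        self.attn = nn.Linear(feat_dim, 1)        # one score per frame
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, frame_feats):               # (batch, frames, feat_dim)
        scores = self.attn(frame_feats)           # (batch, frames, 1)
        weights = torch.softmax(scores, dim=1)    # normalize over frames
        video_feat = (weights * frame_feats).sum(dim=1)  # weighted sum of frames
        return self.classifier(video_feat)        # emotion logits

# Usage with features from a (hypothetical) VGG-style frame embedder:
# feats = torch.randn(4, 16, 512)        # 4 clips, 16 frames, 512-D embeddings
# logits = FrameAttentionPool()(feats)   # (4, 7) class scores
```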
Funding: Supported by the National Natural Science Foundation of China (Grant No. 60773004) and the Natural Science Foundation of Shanxi Province of China (Grant No. 2006011030).
Abstract: Existing text feature extraction algorithms based on term frequency inverse document frequency (TFIDF), as well as their improved variants, fail to consider the semantic associations between words within a category; when semantics are ignored, the extracted features cannot characterize the content of a document well. To extract features accurately, a semantic-association factor for words is added on the basis of information entropy and information gain, so that feature extraction incorporates semantic information, and an improved TFIDF algorithm combining semantics and information gain is proposed. This algorithm remedies the drawback that purely statistical methods lose semantic information. Experimental results show that the algorithm effectively improves the precision of text classification.
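To make the combination concrete, the sketch below is an illustrative Python example, not the paper's algorithm: a standard TF-IDF score is scaled by a per-term information gain computed from class labels. The semantic-association factor is omitted and all helper names are hypothetical.

```python
# Illustrative sketch: TF-IDF weighted by per-term information gain.
import math
from collections import Counter

def information_gain(term, docs, labels):
    """IG of a term over a labeled corpus: H(C) - H(C | term present/absent)."""
    def entropy(lbls):
        n = len(lbls)
        return -sum((c / n) * math.log2(c / n) for c in Counter(lbls).values()) if n else 0.0
    with_t = [l for d, l in zip(docs, labels) if term in d]
    without_t = [l for d, l in zip(docs, labels) if term not in d]
    n = len(labels)
    cond = (len(with_t) / n) * entropy(with_t) + (len(without_t) / n) * entropy(without_t)
    return entropy(labels) - cond

def tfidf_ig(term, doc, docs, labels):
    """Smoothed TF-IDF score of a term in one document, scaled by its IG."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in docs if term in d)
    idf = math.log((1 + len(docs)) / (1 + df)) + 1
    return tf * idf * information_gain(term, docs, labels)

# Usage on a toy tokenized corpus:
# docs = [["cat", "pet"], ["dog", "pet"], ["stock", "market"]]
# labels = ["animal", "animal", "finance"]
# score = tfidf_ig("pet", docs[0], docs, labels)
```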