Journal articles
11 articles found
Human-computer interactions for virtual reality
1
Author: Feng TIAN. Virtual Reality & Intelligent Hardware, 2019, Issue 3, pp. I0001-I0002 (2 pages)
Human-computer interactions constitute an important subject for the development and popularization of information technologies, as they are not only an important frontier technology in computer science but also an important auxiliary technology in virtual reality (VR). In recent years, Chinese researchers have made significant advances in human-computer interactions. To systematically display China's latest advances in human-computer interactions and thus provide an impetus for the development of VR and other related fields, we have solicited articles for this special issue from experts in this area to participate in the review process. The following articles have been selected for publication in this special issue.
Keywords: computer, human, frontier
Gesture interaction in virtual reality (Cited by 9)
2
Authors: Yang LI, Jin HUANG, Feng TIAN, Hong-An WANG, Guo-Zhong DAI. Virtual Reality & Intelligent Hardware, 2019, Issue 1, pp. 84-112 (29 pages)
With the development of virtual reality (VR) and human-computer interaction technology, how to use natural and efficient interaction methods in the virtual environment has become a hot research topic. Gesture is one of the most important communication channels of human beings and can effectively express users' demands. In the past few decades, gesture-based interaction has made significant progress. This article focuses on gesture interaction technology and discusses the definition and classification of gestures, input devices for gesture interaction, and gesture recognition technology. The application of gesture interaction technology in virtual reality is reviewed, the existing problems in current gesture interaction are summarized, and future developments are outlined.
Keywords: virtual reality, gesture interaction, gesture recognition
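As a companion to the recognition techniques this survey covers, here is a minimal sketch of one classic approach: template matching of 2D gesture traces with dynamic time warping (DTW). The traces and template labels are hypothetical, and the snippet is illustrative only, not a method taken from the article.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 2D point sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def classify_gesture(trace, templates):
    """Return the template label with the smallest DTW distance to the trace."""
    return min(templates, key=lambda label: dtw_distance(trace, templates[label]))

# Hypothetical 2D traces (e.g., fingertip positions over time).
templates = {
    "swipe_right": np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float),
    "swipe_up":    np.array([[0, 0], [0, 1], [0, 2], [0, 3]], dtype=float),
}
trace = np.array([[0, 0], [0.9, 0.1], [2.1, 0.0], [3.2, -0.1]])
print(classify_gesture(trace, templates))  # expected: swipe_right
```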
A survey of eye-tracking data visualization (Cited by 29)
3
Authors: 程时伟, 孙凌云. Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), EI, CSCD, PKU Core, 2014, Issue 5, pp. 698-707 (10 pages)
With the spread of eye-tracking technology in practical applications, large volumes of eye-movement data need to be processed and analyzed with appropriate visualization methods; against this background, eye-movement data visualization has developed rapidly in its basic theory, methods, and applications. This paper summarizes preprocessing and parameterization methods for eye-movement data, and on that basis introduces the basic framework of eye-movement data visualization and four main visualization methods: the scan-path method, the heat-map method, the area-of-interest method, and the 3D-space method. It then presents application examples of eye-movement data visualization in areas such as user-interface usability evaluation, and finally discusses future research trends in eye-movement data visualization.
Keywords: eye tracking, visualization, heat map, scan path, visual analytics, human-computer interaction
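The heat-map method listed among the four visualization approaches can be illustrated with a minimal sketch: each fixation is splatted as a 2D Gaussian weighted by its duration and accumulated into an intensity map. The fixation data and parameter values here are made up for illustration.

```python
import numpy as np

def fixation_heatmap(fixations, width, height, sigma=30.0):
    """Accumulate duration-weighted Gaussian kernels around each fixation point.

    fixations: iterable of (x, y, duration_ms) tuples in pixel coordinates.
    Returns a (height, width) intensity map normalized to [0, 1].
    """
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=float)
    for x, y, duration in fixations:
        heat += duration * np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return heat / heat.max() if heat.max() > 0 else heat

# Hypothetical fixations on a 640x480 stimulus: (x, y, duration in ms).
fixations = [(100, 120, 250), (110, 130, 400), (480, 300, 180)]
heatmap = fixation_heatmap(fixations, width=640, height=480)
print(heatmap.shape, round(float(heatmap.max()), 2))
```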
An eye-tracking method for human-computer interaction on mobile devices (Cited by 17)
4
Authors: 程时伟, 孙志强. Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), EI, CSCD, PKU Core, 2014, Issue 8, pp. 1354-1361 (8 pages)
Traditional eye-tracking devices are structurally complex, bulky, and heavy, and are usually restricted to fixed desktop use, so they cannot support mobile interaction in ubiquitous-computing environments. To address this, an eye-tracking method oriented toward mobile interaction is proposed, organized into four layers: eye-image processing, eye-movement feature detection, eye-movement data computation, and eye-movement interaction applications. First, the infrared eye image is filtered and binarized; then, based on the pupil-corneal reflection method, the pupil is detected by combining two-stage localization with an improved ellipse-fitting method, and a scale-factor-driven template-matching method is designed to detect the Purkinje spot. On this basis, eye-movement data such as gaze-point coordinates are computed. Finally, a head-mounted eye-tracking prototype system based on a single infrared camera is designed and implemented. User tests show that the system offers moderate comfort with high accuracy and robustness, verifying the feasibility and effectiveness of the proposed method for mobile interaction.
Keywords: eye tracking, pupil-corneal reflection method, Purkinje spot, mobile devices, human-computer interaction
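A rough sketch of the pupil-detection step described in the abstract (filtering, binarization, and ellipse fitting on an infrared eye image), using OpenCV. The threshold value is a hypothetical placeholder, and the snippet omits the paper's two-stage localization and Purkinje-spot template matching.

```python
import cv2  # OpenCV 4.x API assumed

def detect_pupil(eye_gray):
    """Rough pupil localization: blur, threshold the dark region, fit an ellipse.

    eye_gray: 8-bit grayscale infrared eye image.
    Returns ((cx, cy), (major, minor), angle) of the fitted ellipse, or None.
    """
    blurred = cv2.GaussianBlur(eye_gray, (7, 7), 0)
    # Under IR illumination the pupil is the darkest region; threshold it out.
    _, binary = cv2.threshold(blurred, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Take the largest dark blob as the pupil candidate.
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:  # fitEllipse needs at least 5 contour points
        return None
    return cv2.fitEllipse(pupil)

# Usage on a hypothetical captured frame:
# frame = cv2.imread("eye_ir.png", cv2.IMREAD_GRAYSCALE)
# ellipse = detect_pupil(frame)
```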
An eye-tracking method for multi-device interaction (Cited by 9)
5
Authors: 程时伟, 孙志强, 陆煜华. Journal of Computer-Aided Design & Computer Graphics (计算机辅助设计与图形学学报), EI, CSCD, PKU Core, 2016, Issue 7, pp. 1094-1104 (11 pages)
Currently, more and more human-computer interaction applications rely on multiple devices working together, and traditional eye-tracking methods designed for a single device can hardly meet the needs of multi-device interaction. An eye-tracking method for multi-device interaction is therefore proposed. To handle the impact of markedly larger eye-movement amplitudes on image recognition, the pupil is identified by combining candidate pupil regions with pupil-center recognition; at the same time, the position of the Purkinje spot is predicted so that Purkinje spots lost during recognition can be interpolated, and on this basis the pupil-Purkinje-spot reflection vector is established. In addition, device screens are identified with an edge-detection method, and different devices are distinguished by building a list of screen corner positions for each device and comparing screen shapes and areas. Gaze-point coordinates are then computed by fitting against the pupil-Purkinje-spot reflection vector, and a head-movement error-compensation method is applied to improve the accuracy of gaze-point computation across devices. Finally, a head-mounted eye-tracking system, MultiGaze, was designed and implemented. User tests show that the proposed method effectively improves gaze-point accuracy in multi-device interaction environments.
Keywords: eye tracking, gaze point, multi-device, human-computer interaction
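The screen-identification step (edge detection plus comparison of screen shapes and areas to tell devices apart) could look roughly like the following OpenCV sketch; the parameters are hypothetical, and this is not the authors' actual pipeline.

```python
import cv2  # OpenCV 4.x API assumed

def find_screen_quads(scene_gray, min_area=5000):
    """Find quadrilateral regions (candidate device screens) in a scene image.

    Returns a list of (area, corners) sorted by area so that screens of
    different sizes can be told apart, as in the multi-device setting.
    """
    edges = cv2.Canny(scene_gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    quads = []
    for c in contours:
        approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
        if len(approx) == 4 and cv2.contourArea(approx) >= min_area:
            quads.append((cv2.contourArea(approx), approx.reshape(4, 2)))
    return sorted(quads, key=lambda q: q[0], reverse=True)

# Usage on a hypothetical scene-camera frame:
# scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)
# screens = find_screen_quads(scene)
```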
Influence of multi-modality on moving target selection in virtual reality (Cited by 1)
6
Authors: Yang LI, Dong WU, Jin HUANG, Feng TIAN, Hong'an WANG, Guozhong DAI. Virtual Reality & Intelligent Hardware, 2019, Issue 3, pp. 303-315 (13 pages)
Background: Owing to recent advances in virtual reality (VR) technologies, effective user interaction with dynamic content in 3D scenes has become a research hotspot. Moving target selection is a basic interactive task, and research on user performance in such tasks is significant for user interface design in VR. Unlike the well-studied case of static target selection, moving target selection in VR is affected by changes in target speed, angle, and size, and several of these key factors remain under-researched. Methods: This study designs an experimental scenario in which users play badminton in VR. By varying seven kinds of modality clues (vision, audio, haptics, and their combinations), five moving speeds, and four serving angles, the effect of these factors on performance and subjective experience in moving target selection in VR is studied. Results: The results show that the moving speed of the shuttlecock has a significant impact on user performance. The serving angle has a significant impact on the hit rate but no significant impact on the hitting distance. Under combined modalities, acquisition of the moving target is mainly driven by vision, and adding additional modalities can improve user performance. Although the hitting distance increases in the trimodal condition, the hit rate decreases. Conclusion: This study analyses the results on user performance and subjective perception, and provides suggestions on combining modality clues in different scenarios.
Keywords: multimodal, moving target selection, virtual reality
Trajectory prediction model for crossing-based target selection
7
Authors: Hao ZHANG, Jin HUANG, Feng TIAN, Guozhong DAI, Hongan WANG. Virtual Reality & Intelligent Hardware, 2019, Issue 3, pp. 330-340 (11 pages)
Background: Crossing-based target selection may attain lower error rates and higher interaction speed in some cases. Most research in the target selection field focuses on analyzing interaction results. Moreover, as trajectories play a much more important role in crossing-based target selection than in other interaction techniques, an ideal model of trajectories can help designers predict interaction results during the process of target selection rather than only at the end of the whole process. Methods: In this paper, a trajectory prediction model for crossing-based target selection tasks is proposed, drawing on dynamic model theory. Results: Simulation results demonstrate that the model performs well in predicting trajectories, endpoints, and hitting time for target-selection motion, with average errors of 17.28%, 2.73 mm, and 11.50%, respectively.
Keywords: target selection, crossing-based selection, trajectory prediction
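The abstract does not give the model's equations, so the following is only a sketch of the kind of second-order dynamic (spring-damper) reaching model that trajectory prediction of this sort can build on: the cursor is assumed to be attracted toward the crossing goal. The gains and the goal position are hypothetical, and this is not the authors' model.

```python
import numpy as np

def predict_trajectory(p0, v0, goal, k=30.0, b=9.0, dt=0.01, steps=120):
    """Roll out a damped-spring model pulling the cursor toward the goal.

    p0, v0, goal: 2D start position, start velocity, and crossing-goal position.
    k, b: spring stiffness and damping (hypothetical values).
    Returns an array of predicted positions, one per time step.
    """
    p, v = np.asarray(p0, float), np.asarray(v0, float)
    goal = np.asarray(goal, float)
    trajectory = [p.copy()]
    for _ in range(steps):
        a = k * (goal - p) - b * v      # acceleration toward the goal
        v = v + a * dt
        p = p + v * dt
        trajectory.append(p.copy())
    return np.array(trajectory)

# Hypothetical crossing task: start at the origin, goal 100 mm to the right.
path = predict_trajectory(p0=(0, 0), v0=(0, 0), goal=(100, 0))
print(path[-1])   # should end near the goal
```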
Activity Recognition Based on RFID Object Usage for Smart Mobile Devices (Cited by 2)
8
Authors: Jaeyoung Yang, Joonwhan Lee, Joongmin Choi. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2011, Issue 2, pp. 239-246 (8 pages)
Activity recognition is a core aspect of ubiquitous computing applications. In order to deploy activity recognition systems in the real world, we need simple sensing systems with lightweight computational modules to accurately analyze sensed data. In this paper, we propose a simple method to recognize human activities using simple object information involved in activities. We apply activity theory for representing complex human activities and propose a penalized naive Bayes classifier for performing activity recognition. Our results show that our method reduces computation up to an order of magnitude in both learning and inference without penalizing accuracy, when compared to hidden Markov models and conditional random fields.
Keywords: activity recognition, activity theory, context-awareness, RFID
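The abstract names a penalized naive Bayes classifier over object-usage information but does not give its exact form; the sketch below is one plausible reading, where objects never observed with an activity during training contribute a fixed log-likelihood penalty. The activities, objects, and penalty value are hypothetical.

```python
import math
from collections import Counter, defaultdict

class PenalizedNaiveBayes:
    """Naive Bayes over object-usage counts with a penalty for unseen objects.

    The penalty (a fixed negative log-likelihood for objects never observed
    with an activity) is a guess at the flavor of "penalized"; the paper's
    exact formulation may differ.
    """
    def __init__(self, penalty=-5.0, alpha=1.0):
        self.penalty = penalty
        self.alpha = alpha

    def fit(self, samples):
        """samples: list of (activity_label, [names of objects used])."""
        self.priors = Counter(label for label, _ in samples)
        self.counts = defaultdict(Counter)
        self.vocab = set()
        for label, objects in samples:
            self.counts[label].update(objects)
            self.vocab.update(objects)
        self.total = sum(self.priors.values())

    def predict(self, objects):
        best, best_score = None, float("-inf")
        for label, prior in self.priors.items():
            score = math.log(prior / self.total)
            denom = sum(self.counts[label].values()) + self.alpha * len(self.vocab)
            for obj in objects:
                c = self.counts[label][obj]
                if c == 0:
                    score += self.penalty   # penalize never-seen object usage
                else:
                    score += math.log((c + self.alpha) / denom)
            if score > best_score:
                best, best_score = label, score
        return best

# Hypothetical RFID-tagged object-usage traces.
train = [("make_tea", ["kettle", "cup", "tea_box"]),
         ("brush_teeth", ["toothbrush", "toothpaste", "cup"])]
model = PenalizedNaiveBayes()
model.fit(train)
print(model.predict(["kettle", "cup"]))   # expected: make_tea
```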
Non-Frontal Facial Expression Recognition Using a Depth-Patch Based Deep Neural Network (Cited by 2)
9
Authors: Nai-Ming Yao, Hui Chen, Qing-Pei Guo, Hong-An Wang. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2017, Issue 6, pp. 1172-1185 (14 pages)
The challenge of coping with non-frontal head poses during facial expression recognition results in considerable reduction of accuracy and robustness when capturing expressions that occur during natural communications. In this paper, we attempt to recognize facial expressions under poses with large rotation angles from 2D videos. A depth-patch based 4D expression representation model is proposed. It was reconstructed from 2D dynamic images for delineating continuous spatial changes and temporal context under non-frontal cases. Furthermore, we present an effective deep neural network classifier, which can accurately capture pose-variant expression features from the depth patches and recognize non-frontal expressions. Experimental results on the BU-4DFE database show that the proposed method achieves a high recognition accuracy of 86.87% for non-frontal facial expressions within a range of head rotation angles of up to 52°, outperforming existing methods. We also present a quantitative analysis of the components contributing to the performance gain through tests on the BU-4DFE and Multi-PIE datasets.
Keywords: facial expression recognition, non-frontal head pose, depth, spatial-temporal convolutional neural network
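The network architecture is not detailed in the abstract; as a rough illustration of a spatio-temporal classifier over depth-patch sequences, here is a minimal 3D-convolutional model in PyTorch. All layer sizes, the patch resolution, and the number of expression classes are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class DepthPatchNet(nn.Module):
    """Tiny spatio-temporal CNN over sequences of depth patches.

    Input: (batch, 1, frames, height, width), e.g., 16 frames of 32x32 patches.
    The layer sizes are hypothetical; the paper's actual network differs.
    """
    def __init__(self, num_expressions=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(16, num_expressions)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Hypothetical batch: 4 sequences of 16 depth patches, each 32x32.
model = DepthPatchNet()
logits = model(torch.randn(4, 1, 16, 32, 32))
print(logits.shape)   # torch.Size([4, 6])
```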
Flexible computational photodetectors for self-powered activity sensing (Cited by 1)
10
Authors: Dingtian Zhang, Canek Fuentes-Hernandez, Raaghesh Vijayan, Yang Zhang, Yunzhi Li, Jung Wook Park, Yiyang Wang, Yuhui Zhao, Nivedita Arora, Ali Mirzazadeh, Youngwook Do, Tingyu Cheng, Saiganesh Swaminathan, Thad Starner, Trisha L. Andrew, Gregory D. Abowd. npj Flexible Electronics, SCIE, 2022, Issue 1, pp. 45-52 (8 pages)
Conventional vision-based systems, such as cameras, have demonstrated their enormous versatility in sensing human activities and developing interactive environments. However, these systems have long been criticized for incurring privacy, power, and latency issues due to their underlying structure of pixel-wise analog signal acquisition, computation, and communication. In this research, we overcome these limitations by introducing in-sensor analog computation through the distribution of interconnected photodetectors in space, having a weighted responsivity, to create what we call a computational photodetector. Computational photodetectors can be used to extract mid-level vision features as a single continuous analog signal measured via a two-pin connection. We develop computational photodetectors using thin and flexible low-noise organic photodiode arrays coupled with a self-powered wireless system to demonstrate a set of designs that capture position, orientation, direction, speed, and identification information, in a range of applications from explicit interactions on everyday surfaces to implicit activity detection.
Keywords: everyday, overcome, photodetector
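The core idea, spatially weighted photodetector responsivities that reduce a scene to a single analog signal encoding a mid-level feature, can be simulated conceptually in a few lines. The scene, the ramp-shaped weight map, and the position-readout trick below are illustrative assumptions, not the fabricated device.

```python
import numpy as np

def computational_photodetector(scene, weight_map):
    """Simulate a single two-pin readout: weighted sum of incident intensity.

    scene, weight_map: arrays of the same shape; the weight map plays the role
    of the spatially varying responsivity of the interconnected photodiodes.
    """
    return float(np.sum(scene * weight_map))

# Hypothetical 64x64 scene with a bright blob, and a responsivity ramp along x
# so that the single output roughly encodes the blob's horizontal position.
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]
scene = np.exp(-((xs - 48) ** 2 + (ys - 20) ** 2) / (2 * 4.0 ** 2))
weight_map = xs / (w - 1)                     # responsivity grows left to right
signal = computational_photodetector(scene, weight_map)
total = computational_photodetector(scene, np.ones((h, w)))
print(signal / total * (w - 1))               # approximately 48, the blob's x position
```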
EmotionMap: Visual Analysis of Video Emotional Content on a Map
11
Authors: Cui-Xia Ma, Jian-Cheng Song, Qian Zhu, Kevin Maher, Ze-Yuan Huang, Hong-An Wang. Journal of Computer Science & Technology, SCIE, EI, CSCD, 2020, Issue 3, pp. 576-591 (16 pages)
Emotion plays a crucial role in gratifying users' needs during their experience of movies and TV series, and may be underutilized as a framework for exploring video content and analysis. In this paper, we present EmotionMap, a novel way of presenting emotion for daily users in 2D geography, fusing spatio-temporal information with emotional data. The interface is composed of novel visualization elements interconnected to facilitate video content exploration, understanding, and searching. EmotionMap allows understanding of the overall emotion at a glance while also giving a rapid understanding of the details. Firstly, we develop EmotionDisc, which is an effective tool for collecting audiences' emotion based on emotion representation models. We collect audience and character emotional data, and then integrate the metaphor of a map to visualize video content and emotion in a hierarchical structure. EmotionMap combines sketch interaction, providing a natural approach for users' active exploration. The novelty and the effectiveness of EmotionMap have been demonstrated by the user study and experts' feedback.
Keywords: video visualization, emotion analysis, visual analysis, sketch interaction