Funding: This work was supported in part by EPSRC (No. GR/M69333/01(P)).
Abstract: Multi-modal information presentation, integrated into the virtual environment (VE), has the potential to stimulate different senses, improve the user's impression of immersion, and increase the amount of information that is accepted and processed by the user's perceptual system. An increase in useful feedback information may reduce the user's cognitive load, thus enhancing the user's efficiency and performance while interacting with VEs. This paper presents our creation of a multi-sensory virtual assembly environment (VAE) and an evaluation of the effects of multi-sensory feedback on its usability. The VAE brings together complex technologies such as constraint-based assembly simulation, optical motion tracking, and real-time 3D sound generation around a virtual reality workbench and a common software platform. The usability evaluation covers three attributes: efficiency of use, user satisfaction, and reliability. These are addressed using task completion times (TCTs), questionnaires, and human performance error rates (HPERs), respectively. Two assembly tasks were used to perform the experiments, with sixteen participants. The outcomes showed that multi-sensory feedback could improve usability. They also indicated that the integrated feedback offered better usability than either form of feedback used in isolation. Most participants preferred the integrated feedback to a single form of feedback (visual or auditory) or no feedback. The participants' comments demonstrated that unrealistic or inappropriate feedback had negative effects on usability and easily made them feel frustrated. The possible reasons behind these outcomes are also analysed using a unifying human-computer interaction framework. The implications drawn from the outcomes of this work can serve as useful guidelines for improving VE system design and implementation.
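The two quantitative usability measures named above, TCT and HPER, are simple to compute. The following is an illustrative sketch only, not the paper's actual analysis code; all participant data below are invented for demonstration.

```python
# Illustrative sketch: comparing task completion times (TCTs) and a
# human performance error rate (HPER) across hypothetical feedback
# conditions. All numbers are invented, not the study's data.
from statistics import mean

def hper(errors, attempts):
    """Human performance error rate: errors per attempt."""
    return errors / attempts

# Hypothetical per-participant TCTs (seconds) for two conditions.
tct_visual_only = [95.2, 101.4, 88.7, 110.3]
tct_integrated  = [78.1, 84.6, 73.9, 90.2]

mean_visual = mean(tct_visual_only)
mean_integrated = mean(tct_integrated)

# A lower mean TCT under integrated feedback would point toward better
# efficiency of use, one of the three usability attributes evaluated.
improvement = (mean_visual - mean_integrated) / mean_visual
print(f"mean TCT, visual only: {mean_visual:.1f} s")
print(f"mean TCT, integrated:  {mean_integrated:.1f} s")
print(f"relative improvement:  {improvement:.1%}")
print(f"HPER example (3 errors in 20 attempts): {hper(3, 20):.2f}")
```

In the actual study such comparisons would of course be accompanied by statistical tests across the sixteen participants.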
Funding: National Natural Science Foundation of China (No. 50775108); Priority Academic Program Development of Jiangsu Higher Education Institutions, China (PAPD).
Abstract: In order to lessen the adverse influence of excessive evaluative indicators in the initial set used for multi-sensory evaluation, a 2-tuple and rough set based reduction model is built to simplify the initial set of evaluative indicators. The model also takes into consideration the wide variety of descriptive forms used in multi-sensory evaluation. As a result, the method proves effective in reducing redundant indicators and minimizing indicator overlap without compromising the integrity of the evaluation system. Applying the model to a multi-sensory evaluation of community public information service facilities, the research shows that the results are satisfactory when a genetic-algorithm-optimized BP neural network is used as the calculation tool. The reduced and simplified set of indicators has better prediction performance than the initial set, and the 2-tuple and rough set based model offers an efficient way to reduce indicator redundancy and improve the prediction capability of the evaluation model.
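The core rough-set idea behind indicator reduction is finding a minimal attribute subset (a reduct) that preserves the decision-making power of the full set. The sketch below shows only that classical mechanism on a toy decision table; the paper's actual model additionally handles 2-tuple linguistic descriptions, which are not reproduced here.

```python
# Minimal rough-set reduct sketch: drop indicators whose removal does
# not shrink the positive region (the set of consistently classified
# objects). Toy data; not the paper's 2-tuple extension.
from itertools import combinations

def partition(rows, attrs):
    """Group row indices into indiscernibility blocks on the given attributes."""
    blocks = {}
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        blocks.setdefault(key, set()).add(i)
    return list(blocks.values())

def positive_region(rows, attrs, decision):
    """Indices whose indiscernibility block is consistent on the decision."""
    pos = set()
    for block in partition(rows, attrs):
        if len({rows[i][decision] for i in block}) == 1:
            pos |= block
    return pos

def reduct(rows, cond_attrs, decision):
    """Smallest attribute subset preserving the full positive region."""
    full = positive_region(rows, cond_attrs, decision)
    for k in range(1, len(cond_attrs) + 1):
        for subset in combinations(cond_attrs, k):
            if positive_region(rows, list(subset), decision) == full:
                return list(subset)
    return list(cond_attrs)

# Toy decision table: indicator c2 duplicates c0, so c0 alone suffices.
table = [
    {"c0": 1, "c1": 0, "c2": 1, "d": "good"},
    {"c0": 1, "c1": 1, "c2": 1, "d": "good"},
    {"c0": 0, "c1": 0, "c2": 0, "d": "poor"},
    {"c0": 0, "c1": 1, "c2": 0, "d": "poor"},
]
print(reduct(table, ["c0", "c1", "c2"], "d"))  # ['c0']
```

The exhaustive subset search is exponential; this is why heuristic tools such as the genetic algorithm mentioned in the abstract are used on realistically sized indicator sets.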
Abstract: Urban green space is an effective site for psychological restoration. However, Adelaide's urban green space tends to focus on visual design while neglecting the non-visual senses. Sensory influence is interactive and has profound and subtle effects on visitors. The purpose of this study was to investigate, from a multi-sensory perspective, the relationship between the auditory and tactile qualities of urban green space and psychological restoration. The study used quantitative research methods to identify auditory- and tactile-related landscape factors that have a positive effect on restoration. It is expected that this study will contribute to the high-quality restoration offered by urban green space and promote human health.
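A typical quantitative step in this kind of study is relating questionnaire ratings of a sensory factor to self-reported restoration scores. The following is a purely hypothetical illustration with invented Likert-scale data, not the study's analysis or results.

```python
# Hypothetical sketch: Pearson correlation between a tactile-comfort
# rating and a self-reported restoration score. All data invented.
from math import sqrt

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented questionnaire responses (1-5 Likert scale).
tactile_comfort = [2, 3, 3, 4, 5, 4]
restoration     = [2, 3, 4, 4, 5, 4]
print(f"r = {pearson(tactile_comfort, restoration):.2f}")
```

A strong positive r would be the kind of evidence used to flag a landscape factor as restoration-supporting, subject to the usual significance testing.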
Funding: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (2018R1D1A1B07042967), and by the Soonchunhyang University Research Fund.
Abstract: Sensor-based Human Activity Recognition (HAR) has numerous applications in eHealth, sports, fitness assessment, ambient assisted living (AAL), human-computer interaction, and more. Human physical activity can be monitored using wearable sensors or external devices. The use of external devices has disadvantages in terms of cost, hardware installation, storage, computational time, and dependence on lighting conditions. Therefore, most researchers use smart devices such as smartphones, smart bands, and smartwatches, which contain various sensors (accelerometer, gyroscope, GPS, etc.) and adequate processing capability. For the recognition task, human activities can be broadly categorized as basic or complex. Recognition of complex activities has received far less attention from researchers, owing to the difficulty of the problem when using either smartphones or smartwatches alone. Other reasons include the lack of sensor-based labeled datasets covering many complex daily-life activities. Some researchers have used a smartphone's inertial sensors to perform human activity recognition, while a few have used both pocket and wrist positions. In this research, we propose a novel framework capable of recognizing both basic and complex human activities using the built-in sensors of a smartphone and a smartwatch. We consider 25 physical activities, including 20 complex ones, using these devices' built-in sensors. To the best of our knowledge, the existing literature considers only up to 15 activities of daily life.
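A common preprocessing step in sensor-based HAR, regardless of the specific framework, is segmenting the raw inertial stream into fixed-size overlapping windows and extracting per-window statistical features for the classifier. The sketch below shows that generic step; the window size, overlap, and feature set are illustrative choices, not the authors' settings.

```python
# Generic HAR preprocessing sketch: sliding windows over a raw
# accelerometer trace, with simple per-window statistical features.
# Window size, overlap, and features are illustrative assumptions.
from statistics import mean, stdev

def windows(signal, size, overlap=0.5):
    """Yield fixed-size sliding windows with the given fractional overlap."""
    step = max(1, int(size * (1 - overlap)))
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

def features(window):
    """Per-window features: mean, standard deviation, min, max."""
    return {
        "mean": mean(window),
        "std": stdev(window),
        "min": min(window),
        "max": max(window),
    }

# Toy one-axis accelerometer trace (m/s^2); real input would be
# three-axis data sampled at e.g. 50 Hz from a phone or watch.
trace = [9.8, 9.9, 10.4, 11.0, 10.2, 9.7, 9.5, 9.9]
for w in windows(trace, size=4):
    print(features(w))
```

Complex activities typically need longer windows or sequence models than this, since their signature spans multiple basic movements.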
Abstract: We advance here a novel methodology for robust intelligent biometric information management, with inferences and predictions made using randomness and complexity concepts. Intelligence refers to learning, adaptation, and functionality; robustness refers to the ability to handle incomplete and/or corrupt adversarial information on one side, and image and/or device variability on the other. The proposed methodology is model-free and non-parametric. It draws support from discriminative methods using likelihood ratios to link biometrics and forensics at the conceptual level. At the modeling and implementation level, it further links the Bayesian framework, statistical learning theory (SLT) using transduction and semi-supervised learning, and information theory (IT) using mutual information. The key concepts supporting the proposed methodology are: (a) local estimation to facilitate learning and prediction using both labeled and unlabeled data; (b) similarity metrics based on regularity of patterns, randomness deficiency, and Kolmogorov complexity (similar to MDL), using strangeness/typicality and ranking of p-values; and (c) the Cover-Hart theorem on the asymptotic performance of k-nearest neighbors approaching the optimal Bayes error. Several topics in biometric inference and prediction are described here using an integrated approach, related to (1) multi-level and multi-layer data fusion, including quality and multi-modal biometrics; (2) score normalization and revision theory; (3) face selection and tracking; and (4) identity management. The approach includes transduction and boosting for ranking and sequential fusion/aggregation, respectively, on one side, and active learning and change/outlier/intrusion detection realized using information gain and martingales, respectively, on the other. The proposed methodology can be mapped to additional types of information beyond biometrics.
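The strangeness/typicality machinery named in point (b) can be illustrated with the standard transductive conformal construction: a k-NN strangeness score (distance to nearest same-class examples over distance to nearest other-class examples) ranked into a p-value. This sketch uses toy one-dimensional data and k = 1; it shows the general technique, not the paper's biometric implementation.

```python
# Transductive p-value from k-NN strangeness (conformal prediction
# style). Toy 1-D data and k=1 are illustrative choices only.
def strangeness(idx, points, labels, k=1):
    """Ratio of distance mass to nearest same-class vs. other-class points."""
    same = sorted(abs(points[idx] - points[j]) for j in range(len(points))
                  if j != idx and labels[j] == labels[idx])
    diff = sorted(abs(points[idx] - points[j]) for j in range(len(points))
                  if labels[j] != labels[idx])
    return sum(same[:k]) / (sum(diff[:k]) or 1e-12)

def p_value(test_point, test_label, points, labels, k=1):
    """Add the test example with a tentative label, then rank its strangeness:
    the p-value is the fraction of examples at least as strange as it."""
    pts = points + [test_point]
    lbs = labels + [test_label]
    alphas = [strangeness(i, pts, lbs, k) for i in range(len(pts))]
    return sum(a >= alphas[-1] for a in alphas) / len(alphas)

# Toy data: class "A" clusters near 0, class "B" near 10.
pts = [0.0, 0.5, 1.0, 9.0, 9.5, 10.0]
lbs = ["A", "A", "A", "B", "B", "B"]
print(p_value(0.4, "A", pts, lbs))  # plausible label: high p-value
print(p_value(0.4, "B", pts, lbs))  # implausible label: low p-value
```

A low p-value for every candidate label is exactly the "none of the above" signal that matters for open-set biometric identification and intrusion detection.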
Abstract: Context cognition involves abstractly deriving meaning from situational information in the world and is an important psychological function of higher cognition. However, due to the complexity of contextual information processing, along with the lack of relevant technical tools, little is known about the neural mechanisms and behavioral regulation of context cognition. At present, behavioral training of rodents using virtual reality techniques is considered a potential key to uncovering the neurobiological mechanisms of context cognition. Although virtual reality technology has been preliminarily applied to the study of context cognition in recent years, existing virtual scenarios still lack the integration of multi-sensory information, and there is a need for convenient experimental design platforms for researchers with little programming experience. Therefore, to address problems related to the authenticity, immersion, interaction, and flexibility of rodent virtual reality systems, an immersive virtual reality system based on visual programming was constructed in this study. The system can flexibly modulate interactive 3D dynamic experimental environments for rodents. It comprises a central control unit, a virtual perception unit, a virtual motion unit, a virtual vision unit, and a video recording unit. Neural circuit mechanisms in various environments can be studied effectively by combining two-photon imaging with other neural activity recording methods. In addition, to verify the proposed system's performance, licking experiments were conducted with mice. The results demonstrated that the system can provide a new method and tool for analyzing the neural circuits underlying higher cognitive functions in rodents.
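The closed-loop logic of such a rig, in which treadmill motion drives the virtual position and licks inside a reward zone trigger reward delivery, can be sketched in a few lines. This is purely illustrative; the function name, zone bounds, and event stream below are invented and do not reflect the authors' software.

```python
# Illustrative closed-loop sketch for a rodent VR licking task:
# "move" events advance the virtual position; "lick" events are
# rewarded only inside a reward zone. All details are assumptions.
def run_trial(events, reward_zone=(80, 100)):
    """events: list of ('move', dx) or ('lick',) tuples; returns reward count."""
    position, rewards = 0.0, 0
    for event in events:
        if event[0] == "move":
            position += event[1]          # virtual motion unit: advance scene
        elif event[0] == "lick":
            lo, hi = reward_zone
            if lo <= position <= hi:      # lick inside the zone earns reward
                rewards += 1
    return rewards

# Simulated session: the animal licks before, inside, and past the zone.
session = [("move", 50), ("lick",), ("move", 35), ("lick",),
           ("move", 30), ("lick",)]
print(run_trial(session))  # licks at positions 50, 85, 115 -> 1 reward
```

In a real system this loop would run against live sensor input and synchronize with the imaging and video recording units rather than a prebuilt event list.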