Funding: The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-017.
Abstract: The problem of producing a natural language description of an image to describe its visual content has gained increasing attention in natural language processing (NLP) and computer vision (CV). It is driven by applications such as image retrieval and indexing, virtual assistants, image understanding, and support for visually impaired people (VIPs). Although VIPs use other senses, such as touch and hearing, to recognize objects and events, their quality of life remains below the standard level. Automatic image captioning generates captions that can be read aloud to VIPs, helping them understand what is happening around them. This article introduces a Red Deer Optimization with Artificial Intelligence Enabled Image Captioning System (RDOAI-ICS) for visually impaired people. The presented RDOAI-ICS technique aids in generating image captions for VIPs. It utilizes a neural architecture search network (NASNet) model to produce image representations and a radial basis function neural network (RBFNN) to generate the textual description. To enhance performance, parameter optimization is carried out using the Red Deer Optimization (RDO) algorithm for NASNet and the butterfly optimization algorithm (BOA) for the RBFNN model, which constitutes the novelty of the work. The RDOAI-ICS method was evaluated experimentally on a benchmark dataset, and the outcomes show its improvements over other recent image captioning approaches.
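The RBFNN component described above can be illustrated with a minimal sketch: a Gaussian radial basis hidden layer over an image feature vector, feeding a linear readout that scores output words. All names here (`TinyRBFNN`, the stand-in 4-dimensional "embedding") are hypothetical, and the feature vector is a toy stand-in for real NASNet features; this is not the paper's implementation.

```python
import math
import random

def rbf_activations(x, centers, gamma):
    # Gaussian radial basis activations: exp(-gamma * ||x - c||^2) per center.
    return [math.exp(-gamma * sum((xi - ci) ** 2 for xi, ci in zip(x, c)))
            for c in centers]

class TinyRBFNN:
    """Toy radial basis function network: an RBF hidden layer feeding a
    linear readout that produces unnormalised scores over candidate words."""
    def __init__(self, centers, n_out, gamma=0.5, seed=0):
        rng = random.Random(seed)
        self.centers = centers          # prototype vectors in feature space
        self.gamma = gamma
        self.weights = [[rng.gauss(0, 0.1) for _ in range(n_out)]
                        for _ in centers]

    def forward(self, x):
        h = rbf_activations(x, self.centers, self.gamma)
        n_out = len(self.weights[0])
        return [sum(h[j] * self.weights[j][k] for j in range(len(h)))
                for k in range(n_out)]

# Stand-in for a NASNet image embedding (real features would be ~1000-d).
feat = [0.2, -0.1, 0.5, 0.3]
centers = [[0.0, 0.0, 0.0, 0.0], [0.2, -0.1, 0.5, 0.3]]
net = TinyRBFNN(centers, n_out=5)
scores = net.forward(feat)
```

An input lying exactly on a center produces the maximum activation of 1, which is the locality property that makes RBF networks attractive as lightweight decoders over fixed image representations.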
Funding: The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-030.
Abstract: Visual impairment is one of the major problems affecting people of all age groups across the globe. Visually Impaired Persons (VIPs) require help from others to carry out their day-to-day tasks. Since they experience several problems in their daily lives, technical intervention can help them resolve these challenges. In this background, an automatic object detection tool is needed to empower VIPs with safe navigation, and recent advances in the Internet of Things (IoT) and Deep Learning (DL) make it possible. The current study proposes an IoT-assisted Transient Search Optimization with Lightweight RetinaNet-based object detection (TSOLWR-ODVIP) model to help VIPs. The primary aim of the presented TSOLWR-ODVIP technique is to identify the different objects surrounding VIPs and to convey this information to them via audio messages. IoT devices are used for data acquisition. Then, the Lightweight RetinaNet (LWR) model is applied to detect objects accurately. Next, the Transient Search Optimization (TSO) algorithm is employed to fine-tune the hyperparameters of the LWR model. Finally, a Long Short-Term Memory (LSTM) model is exploited to classify objects. The performance of the proposed TSOLWR-ODVIP technique was evaluated on a set of objects, and the results were examined from distinct aspects. The comparison study outcomes confirmed that the TSOLWR-ODVIP model could effectively detect and classify objects, enhancing the quality of life of VIPs.
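The hyperparameter fine-tuning step can be sketched as a generic population-based search: a pool of candidate hyperparameter vectors is perturbed toward the best candidate found so far. This is a simplified stand-in, not the published TSO update equations, and the "validation loss" objective is a made-up toy function for illustration only.

```python
import random

def tune_hyperparameters(objective, bounds, n_agents=8, n_iter=30, seed=1):
    """Generic population search loop (a stand-in for the TSO metaheuristic):
    maintain candidate hyperparameter vectors, perturb them around the best
    one found so far, keep improvements, and return the overall best."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_agents)]
    best = min(pop, key=objective)
    for _ in range(n_iter):
        for i, agent in enumerate(pop):
            # Gaussian perturbation around the current best, clamped to bounds.
            cand = [min(hi, max(lo, b + rng.gauss(0, 0.1 * (hi - lo))))
                    for b, (lo, hi) in zip(best, bounds)]
            if objective(cand) < objective(agent):
                pop[i] = cand
        best = min(pop + [best], key=objective)
    return best

# Toy objective: pretend validation loss is minimised at lr=0.01, momentum=0.9.
loss = lambda p: (p[0] - 0.01) ** 2 + (p[1] - 0.9) ** 2
best = tune_hyperparameters(loss, [(0.0001, 0.1), (0.5, 0.99)])
```

In the paper's pipeline the objective would instead be a validation metric of the LWR detector under the candidate hyperparameters, which makes each evaluation expensive; that cost is what motivates using a derivative-free metaheuristic in the first place.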
Abstract: Vision impairment is a latent problem that affects numerous people across the globe. Technological advancements, particularly the rise of the computing power behind Deep Learning (DL) models and the emergence of wearables, pave the way for assisting visually impaired persons. Models developed earlier specifically for visually impaired people work effectively on single-object detection in unconstrained environments, but in real-time scenarios these systems are inconsistent in providing effective guidance. In addition to object detection, extra information about the location of objects in the scene is essential for visually impaired people. Keeping this in mind, the current research work presents an Efficient Object Detection Model with Audio Assistive System (EODM-AAS) using a DL-based YOLO v3 model for visually impaired people. The aim of the research article is to construct a model that can provide a detailed description of the objects around visually impaired people. The presented model involves a DL-based YOLO v3 model for multi-label object detection, determines the position of each object in the scene, and finally generates an audio signal to notify visually impaired people. In order to validate the detection performance of the presented method, a detailed simulation analysis was conducted on four datasets. The simulation results established that the presented model produces better outcomes than existing methods.
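The "position of the object in the scene" step can be sketched as mapping each detector bounding box to a coarse spatial phrase and joining the phrases into a sentence for text-to-speech. The thresholds, function names, and example detections below are illustrative assumptions, not the paper's actual post-processing.

```python
def describe_position(box, frame_width):
    """Map a detection's bounding-box centre to a coarse spatial phrase,
    the kind of location cue the audio assistive step needs."""
    x1, y1, x2, y2 = box
    cx = (x1 + x2) / 2 / frame_width   # normalised horizontal centre, 0..1
    if cx < 1 / 3:
        return "on your left"
    elif cx < 2 / 3:
        return "ahead of you"
    return "on your right"

def audio_message(detections, frame_width):
    # detections: list of (label, (x1, y1, x2, y2)) pairs from the detector.
    parts = [f"{label} {describe_position(box, frame_width)}"
             for label, box in detections]
    return "; ".join(parts) if parts else "no objects detected"

msg = audio_message([("chair", (50, 200, 150, 400)),
                     ("door", (500, 0, 630, 480))], 640)
```

The resulting string (here, "chair on your left; door on your right") would then be handed to a speech synthesizer to produce the audio notification.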
Abstract: The pattern password method is among the most attractive authentication methods; it involves drawing a pattern, which is seen as easier than typing a password. However, since people with visual impairments have been increasing their usage of smart devices, this method is inaccessible to them as it requires selecting points on a touch screen. Therefore, this paper exploits haptic technology by introducing a vibration-based pattern password approach in which vibration feedback plays an important role. This approach allows visually impaired people to use a pattern password through two kinds of vibration feedback: pulses, which are counted by the user, and duration, which has to be estimated by the user. To make the proposed approach capable of preventing shoulder-surfing attacks, a camouflage pattern approach is applied. An experimental study was conducted to evaluate the proposed approach; the results show that the vibration pulses feedback is usable and resistant to shoulder-surfing attacks.
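The pulse-based feedback can be sketched as a simple encoding: each pattern point on a 3x3 grid is identified by a pulse count that the user counts. The row-major 1..9 numbering and function names below are assumptions for illustration; the paper's actual encoding and the duration-based variant are not reproduced here.

```python
def pulses_for_point(row, col, grid_size=3):
    """Encode one pattern grid point as a countable pulse burst: points are
    numbered 1..9 row-major on a 3x3 grid, and the device emits that many
    short vibration pulses."""
    if not (0 <= row < grid_size and 0 <= col < grid_size):
        raise ValueError("point outside grid")
    return row * grid_size + col + 1

def encode_pattern(points, grid_size=3):
    # A pattern is a sequence of grid points; each becomes one pulse burst.
    return [pulses_for_point(r, c, grid_size) for r, c in points]

bursts = encode_pattern([(0, 0), (1, 1), (2, 2)])  # a top-left-to-bottom-right diagonal
```

Because the user perceives each point as a discrete count rather than a visible touch location, an observer watching the screen gains no information, which is the intuition behind the approach's shoulder-surfing resistance.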
Funding: Partially supported by the National Natural Science Foundation of China under Grant No. 61228206 and the Grant-in-Aid for Scientific Research of Japan under Grant Nos. 23300048 and 25330241.
Abstract: Large displays have become ubiquitous in our everyday lives, but these displays are designed for sighted people. This paper addresses the need for visually impaired people to access targets on large wall-mounted displays. We developed an assistive interface which exploits mid-air gesture input and haptic feedback, and examined its potential for pointing and steering tasks in human-computer interaction (HCI). In two experiments, blind and blindfolded users performed target acquisition tasks using mid-air gestures and two different kinds of feedback (i.e., haptic feedback and audio feedback). Our results show that participants perform faster in Fitts' law pointing tasks using the haptic feedback interface rather than the audio feedback interface. Furthermore, a regression analysis between movement time (MT) and the index of difficulty (ID) demonstrates that the Fitts' law model and the steering law model are both effective for the evaluation of assistive interfaces for the blind. Our work and findings serve as an initial step toward helping visually impaired people easily access the information they need on large public displays using haptic interfaces.
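The MT-versus-ID regression mentioned above can be sketched in a few lines: compute the index of difficulty for each target condition and fit the linear model MT = a + b * ID by ordinary least squares. The Shannon formulation of ID is assumed here, and the trial data below are made-up numbers for illustration, not the paper's measurements.

```python
import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, in bits.
    return math.log2(distance / width + 1)

def fit_fitts(ids, mts):
    """Ordinary least squares fit of MT = a + b * ID; returns (a, b)."""
    n = len(ids)
    mean_x = sum(ids) / n
    mean_y = sum(mts) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(ids, mts))
         / sum((x - mean_x) ** 2 for x in ids))
    a = mean_y - b * mean_x
    return a, b

# Illustrative (made-up) conditions: target distance and width in px,
# with hypothetical mean movement times in seconds.
trials = [(200, 50), (400, 50), (400, 25), (800, 25)]
ids = [index_of_difficulty(d, w) for d, w in trials]
mts = [0.62, 0.81, 0.98, 1.24]
a, b = fit_fitts(ids, mts)
```

The slope b (seconds per bit) is the quantity of interest when comparing feedback conditions: a smaller slope for the haptic interface than for the audio interface would express the paper's "faster pointing" finding in model terms.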
Funding: Supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001, the National Council for Scientific and Technological Development (CNPq), and the Foundation for Research Support of the State of Rio Grande do Sul (FAPERGS).
Abstract: Children's playgrounds are open spaces and the basis for children's recreation; they are important for the inclusion and mobility of visually impaired children in the social environment, through inclusive urban facilities that stimulate new experiences for their cognitive development. In this context, the use of co-design with visually impaired people in the design processes of children's playgrounds is important for an inclusive project grounded in their experiences. Thus, this work aimed to promote a joint design project that provides more comfort and safety to users. Its main results identify the colors, materials, and types of toys that allow children with visual impairment to be competent in a playground, derived from the application of methods, tools, and resources in the co-design process.