Vision impairment is a latent problem that affects numerous people across the globe. Technological advancements, particularly the rise of computing capabilities such as Deep Learning (DL) models and the emergence of wearables, pave the way for assisting visually impaired persons. Earlier models developed specifically for visually impaired people work effectively on single-object detection in unconstrained environments; however, in real-time scenarios, these systems are inconsistent in providing effective guidance. In addition to object detection, information about the location of objects in the scene is essential for visually impaired people. With this in mind, the current research work presents an Efficient Object Detection Model with Audio Assistive System (EODM-AAS) using a DL-based YOLO v3 model for visually impaired people. The aim of the research article is to construct a model that can provide a detailed description of the objects around visually impaired people. The presented model involves a DL-based YOLO v3 model for multi-label object detection. Besides, it determines the position of each object in the scene and finally generates an audio signal to notify the user. To validate the detection performance of the presented method, a detailed simulation analysis was conducted on four datasets. The simulation results established that the presented model produces better outcomes than existing methods.
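The last step the abstract describes, turning detections and their positions into a spoken notification, can be sketched as below. This is a minimal illustration, not the paper's implementation: the detection tuples and the left/center/right partition of the frame are assumptions, standing in for the bounding boxes a YOLO v3 detector would return.

```python
def describe_detections(detections, frame_width):
    """Map detections to a spoken-style description of their positions.

    `detections` is a list of (label, x_min, x_max) tuples, a simplified
    stand-in for YOLO-style bounding boxes. The frame is split into
    thirds to decide whether an object is left, ahead, or right.
    """
    messages = []
    for label, x_min, x_max in detections:
        center = (x_min + x_max) / 2.0
        if center < frame_width / 3:
            position = "on your left"
        elif center < 2 * frame_width / 3:
            position = "in front of you"
        else:
            position = "on your right"
        messages.append(f"{label} {position}")
    return ", ".join(messages)

print(describe_detections([("chair", 0, 100), ("door", 500, 640)], 640))
# → chair on your left, door on your right
```

The resulting string would then be fed to a text-to-speech engine to produce the audio signal.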
The problem of producing a natural language description of an image to describe its visual content has gained attention in natural language processing (NLP) and computer vision (CV). It is driven by applications such as image retrieval and indexing, virtual assistants, image understanding, and support of visually impaired people (VIP). Though VIPs use other senses, such as touch and hearing, to recognize objects and events, their quality of life remains below the standard level. Automatic image captioning generates captions that can be read aloud to VIPs, making them aware of what is happening around them. This article introduces a Red Deer Optimization with Artificial Intelligence Enabled Image Captioning System (RDOAI-ICS) for visually impaired people. The presented RDOAI-ICS technique aids in generating image captions for VIPs. It utilizes a neural architecture search network (NASNet) model to produce image representations and a radial basis function neural network (RBFNN) to generate the textual description. To enhance the performance of the RDOAI-ICS method, parameter optimization takes place using the RDO algorithm for NASNet and the butterfly optimization algorithm (BOA) for the RBFNN model, showing the novelty of the work. The RDOAI-ICS method was experimentally evaluated on a benchmark dataset, and the outcomes show its improvements over other recent image captioning approaches.
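The RBFNN component mentioned above computes Gaussian radial basis activations over hidden-unit centers, followed by a linear readout. The sketch below shows only that generic forward pass; the centers, weights, and width are placeholders, not the paper's trained parameters.

```python
import numpy as np

def rbf_activations(x, centers, sigma=1.0):
    # Gaussian RBF hidden layer: phi_i(x) = exp(-||x - c_i||^2 / (2 sigma^2))
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def rbfnn_forward(x, centers, weights, sigma=1.0):
    # Linear readout over the hidden RBF activations
    return rbf_activations(x, centers, sigma) @ weights
```

An input that coincides with a center activates that hidden unit maximally (activation 1), which is what makes RBF networks behave like smooth local interpolators over the encoder's image representations.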
The pattern password method is among the most attractive authentication methods: it involves drawing a pattern, which is seen as easier than typing a password. However, as people with visual impairments increasingly use smart devices, this method is inaccessible to them because it requires selecting points on a touch screen. Therefore, this paper exploits haptic technology by introducing a vibration-based pattern password approach in which vibration feedback plays an important role. The approach allows visually impaired people to use a pattern password through two developed vibration feedback modes: pulses, which are counted by the user, and duration, which has to be estimated by the user. To make the proposed approach capable of preventing shoulder-surfing attacks, a camouflage pattern approach is applied. An experimental study was conducted to evaluate the proposed approach; the results show that the vibration pulses feedback is usable and resistant to shoulder-surfing attacks.
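The pulses mode described above can be sketched as a vibration schedule of alternating vibrate/pause intervals, where the user counts the pulses. The function name and the millisecond timings are illustrative assumptions, not the paper's parameters.

```python
def pulse_schedule(value, pulse_ms=200, gap_ms=150):
    """Encode a pattern-node value as a vibration schedule.

    Returns alternating durations in ms: `value` vibration pulses that the
    user counts (pulses-feedback mode), separated by silent gaps.
    """
    schedule = []
    for i in range(value):
        schedule.append(pulse_ms)       # vibrate
        if i < value - 1:
            schedule.append(gap_ms)     # silent gap between pulses
    return schedule

print(pulse_schedule(3))  # [200, 150, 200, 150, 200]
```

Such a schedule maps directly onto mobile vibration APIs that accept on/off duration patterns; the duration-feedback mode would instead vary a single pulse length the user estimates.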
Visual impairment is one of the major problems among people of all age groups across the globe. Visually Impaired Persons (VIPs) require help from others to carry out their day-to-day tasks. Since they experience several problems in their daily lives, technical intervention can help them resolve these challenges. Against this background, an automatic object detection tool is needed to empower VIPs with safe navigation, and recent advances in the Internet of Things (IoT) and Deep Learning (DL) techniques make it possible. The current study proposes an IoT-assisted Transient Search Optimization with Lightweight RetinaNet-based object detection (TSOLWR-ODVIP) model to help VIPs. The primary aim of the presented TSOLWR-ODVIP technique is to identify different objects surrounding VIPs and to convey the information to them via audio messages. IoT devices are used for data acquisition. Then, the Lightweight RetinaNet (LWR) model is applied to detect objects accurately. Next, the TSO algorithm is employed to fine-tune the hyperparameters of the LWR model. Finally, a Long Short-Term Memory (LSTM) model is exploited to classify the objects. The performance of the proposed TSOLWR-ODVIP technique was evaluated using a set of objects, and the results were examined from distinct aspects. The comparison study outcomes confirmed that the TSOLWR-ODVIP model can effectively detect and classify objects, enhancing the quality of life of VIPs.
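The hyperparameter fine-tuning step can be illustrated with a generic metaheuristic loop: sample candidate hyperparameters within bounds and keep the best-scoring one. Note this sketch uses simple random search as a stand-in, not the actual Transient Search Optimization update rules, and the bound names are assumptions.

```python
import random

def tune_hyperparameters(objective, bounds, iterations=50, seed=0):
    """Random-search stand-in for the TSO fine-tuning step.

    `objective` maps a hyperparameter dict to a loss to minimize;
    `bounds` maps each hyperparameter name to a (low, high) range.
    """
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(iterations):
        candidate = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        score = objective(candidate)
        if score < best_score:
            best, best_score = candidate, score
    return best, best_score
```

A real TSO run would replace the independent uniform sampling with the algorithm's exploitation/exploration transitions around the best solution found so far.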
Today, eMarketplaces play a significant role in contemporary life by providing income and business opportunities to people and organizations throughout the world. Despite innovations in the field of IT, many eMarketplaces lack the ability to provide appropriate services for people with special needs, especially the blind. Therefore, this paper focuses on incorporating an interface for blind people to participate in the business of eMarketplaces. A model of a voice-based eMarketplace is introduced using voice recognition technology. Blind users of the system are uniquely identified via voice recognition, enabling them to access the eMarketplace in a secure manner. Further work on this project involves building such a module on an existing eMarketplace.
Indoor scene understanding and indoor object detection are complex high-level tasks for automated systems applied to natural environments. Indeed, such tasks require large numbers of annotated indoor images to train and test intelligent computer vision applications. One of the challenging questions is how to adopt and enhance technologies that assist indoor navigation for visually impaired people (VIP) and thus improve their daily quality of life. This paper presents a new labeled indoor object dataset elaborated with the goal of indoor object detection (useful for indoor localization and navigation tasks). The dataset consists of 8000 indoor images containing 16 different indoor landmark object classes. The originality of the annotations comes from two new aspects taken into account: (1) the spatial relationships between objects present in the scene and (2) the actions that can be applied to those objects (relationships between a VIP and an object). The collected dataset presents many specifications and strengths, covering varied data under various lighting conditions and complex image backgrounds to ensure more robustness when training and testing object detectors. The proposed dataset, ready for use, provides 16 vital indoor object classes in order to contribute to indoor assisted navigation for VIP.
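The two annotation novelties described above, spatial relations between objects and VIP-applicable actions, can be pictured as an annotation record like the one below. The field names and values are hypothetical illustrations, not the dataset's actual schema.

```python
# Hypothetical annotation record for one image, illustrating the two
# originalities of the dataset: spatial relations and possible actions.
annotation = {
    "image": "kitchen_0042.jpg",
    "objects": [
        {"class": "door", "bbox": [120, 40, 380, 460]},
        {"class": "chair", "bbox": [400, 260, 560, 470]},
    ],
    # (1) spatial relationships between objects in the scene
    "spatial_relations": [("chair", "right_of", "door")],
    # (2) actions a VIP could apply to each object
    "possible_actions": {"door": ["open", "close"], "chair": ["sit"]},
}
```

Carrying relations and actions alongside the usual class/bounding-box labels is what lets a detector trained on the dataset support navigation-oriented guidance rather than bare detection.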
AIM: To evaluate the knowledge, attitudes, and practices regarding eye-care seeking among visually impaired adults in Yueqing, a rural area, and to explore factors influencing their behavior.
METHODS: A stratified sampling method was used to select 48 villages in Yueqing, from which 2400 people were selected to receive vision screenings conducted by oculists during a household visit. Those presenting visual acuity ≥0.5 logMAR in either eye completed a self-designed questionnaire investigating their knowledge about medical eye-care seeking, attitudes about eye health, and eye-care-seeking behavior.
RESULTS: In total, 165 people with moderate-to-severe visual impairment were identified (6.9%, 165/2400), and 146 eligible participants were recruited (response rate: 88.4%; mean age: 68.6±15.0y), among whom 88 (60.3%) were female. Monocular and binocular visual impairment accounted for 82 (56.2%) and 64 (43.8%) cases, respectively. A total of 67 (45.9%) subjects demonstrated a high knowledge level about medical eye-care seeking, and 88 (60.3%) had self-rated poor vision, with 23 (15%) receiving regular vision checks. A total of 105 (71.9%) subjects had never been to a hospital for an eye examination. "No need" and "schedule conflicts" were the main reasons for not seeking eye care. Having extensive knowledge of medical eye-care seeking was positively associated with high education levels (OR=3.73, P=0.045) and negatively correlated with older age (OR=0.97, P=0.043). Both the self-perceived vision condition (OR=2.59, P=0.03) and regular vision check behavior (OR=6.50, P<0.01) were associated with seeking eye-care services.
CONCLUSION: In rural Yueqing, intervention is required to increase public knowledge about seeking medical eye care among people with moderate-to-severe visual impairment, especially for the elderly and poorly educated. Regular vision checks may be useful to promote their medical eye-care utilization.
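For readers unfamiliar with the odds ratios (OR) reported above, an OR compares the odds of an outcome between two groups from a 2×2 contingency table. The sketch below uses illustrative counts only, not the study's data.

```python
def odds_ratio(exposed_yes, exposed_no, unexposed_yes, unexposed_no):
    """OR = (a/b) / (c/d) = (a*d) / (b*c) for a 2x2 table, where
    a,b = outcome yes/no in the exposed group and
    c,d = outcome yes/no in the unexposed group."""
    return (exposed_yes * unexposed_no) / (exposed_no * unexposed_yes)

# Illustrative counts only (not from the study):
print(odds_ratio(30, 10, 20, 40))  # 6.0
```

An OR above 1 (e.g. the reported OR=6.50 for regular vision checks) indicates higher odds of seeking eye care in the exposed group; an OR below 1 (e.g. OR=0.97 per year of age) indicates lower odds.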
Funding: The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-017.
Funding: The authors extend their appreciation to the King Salman Center for Disability Research for funding this work through Research Group no. KSRG-2022-030.
Funding: Supported by the Science and Technology Benefiting Program of Zhejiang Province (No. 2014H01007) and the Zhejiang Medical Science and Technology Program (No. 2018KY543).