Localization for visually impaired people in dynamically changing environments with unexpected hazards and obstacles is a current need. Many techniques have been discussed in the literature with respect to location-based services and the positioning of devices. Time difference of arrival (TDOA), time of arrival (TOA) and received signal strength (RSS) have been widely used for positioning, but narrowband signals such as Bluetooth cannot efficiently use TDOA or TOA. The received signal strength indicator (RSSI), which measures RSS, has been found to be more reliable, although RSSI estimates depend heavily on environmental interference. RSSI measurements of Bluetooth systems can be improved either by improving the existing measurement methodologies or by using fusion techniques that employ Kalman filters to combine more than one RSSI method and significantly improve the results. This paper focuses on improving the existing methodology of measuring RSSI by proposing a new method that uses trilateration for the localization of Bluetooth devices for visually impaired people. To validate the new method, class 2 Bluetooth devices (BlueGiga WT-12) were used with an evaluation board, the required software was developed in National Instruments LabVIEW, and a PCB was designed and manufactured. Experiments were then conducted, and surface plots of the Bluetooth modules were obtained to show signal interference and other environmental effects. Finally, the results were discussed and relevant conclusions drawn.
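As a rough illustration of the trilateration step described above, the sketch below (not the paper's LabVIEW implementation) converts RSSI readings to distances with a log-distance path-loss model and solves for position by least squares; the beacon coordinates, the reference RSSI at 1 m, and the path-loss exponent are illustrative assumptions.

```python
# Minimal sketch of RSSI-based trilateration, assuming three Bluetooth beacons at
# known positions. Distances are estimated from RSSI with a log-distance path-loss
# model, then a least-squares fit recovers the receiver position.
import numpy as np
from scipy.optimize import least_squares

def rssi_to_distance(rssi_dbm, rssi_at_1m=-59.0, path_loss_exp=2.0):
    """Estimate distance (m) from RSSI using the log-distance path-loss model."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

def trilaterate(beacons, distances):
    """Solve for (x, y) minimizing the mismatch between measured and modeled ranges."""
    def residuals(p):
        return [np.hypot(p[0] - bx, p[1] - by) - d
                for (bx, by), d in zip(beacons, distances)]
    return least_squares(residuals, x0=np.mean(beacons, axis=0)).x

beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]   # assumed Bluetooth module positions (m)
rssi = [-63.0, -71.0, -68.0]                      # smoothed RSSI readings (dBm), illustrative
print(trilaterate(beacons, [rssi_to_distance(r) for r in rssi]))
```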
Traditional image encryption algorithms transform a plain image into a noise-like image. To lower the chances of the encrypted image being detected by an attacker during transmission, a visually meaningful image encryption scheme hides the encrypted image using another carrier image. This paper proposes a visually meaningful encrypted image algorithm that hides a secret image together with a digital signature, providing both authenticity and confidentiality. The recovered digital signature is used for identity authentication, while the secret image is encrypted to protect its confidentiality. A Least Significant Bit (LSB) method is designed to embed the signature in the encrypted image, and a Lifting Wavelet Transform (LWT) is used to generate the visually meaningful encrypted image. The proposed algorithm has a key space of 139.5 bits, a Normalized Correlation (NC) value of 0.9998, which is close to 1, and a Peak Signal-to-Noise Ratio (PSNR) greater than 50 dB. Further analyses are performed on the proposed algorithm using different images. The experimental results show that the proposed scheme has high key sensitivity and strong robustness against salt-and-pepper and cropping attacks. Moreover, the histogram analysis shows that the original carrier image and the final visual image are very similar.
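As a small illustration of the LSB embedding step the scheme relies on (independent of the paper's full LWT-based algorithm), the sketch below hides the bits of a payload, such as a digital signature, in the least significant bits of a carrier image and recovers them again; the carrier array and signature bytes are stand-ins.

```python
# Minimal LSB embed/extract sketch: each payload bit replaces the least significant
# bit of a successive carrier pixel, so the carrier changes imperceptibly.
import numpy as np

def lsb_embed(carrier: np.ndarray, payload: bytes) -> np.ndarray:
    """Embed payload bits into the LSBs of a uint8 carrier image."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = carrier.flatten()
    if bits.size > flat.size:
        raise ValueError("payload too large for carrier")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(carrier.shape)

def lsb_extract(stego: np.ndarray, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the LSBs of the stego image."""
    bits = stego.flatten()[: n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

carrier = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # stand-in carrier image
signature = b"demo-signature"                                    # stand-in signature bytes
assert lsb_extract(lsb_embed(carrier, signature), len(signature)) == signature
```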
Vision impairment is a latent problem that affects numerous people across the globe. Technological advancements, particularly the rise of computer processing abilities such as Deep Learning (DL) models and the emergence of wearables, pave the way for assisting visually impaired persons. Models developed earlier specifically for visually impaired people work effectively on single-object detection in unconstrained environments, but in real-time scenarios these systems are inconsistent in providing effective guidance. In addition to object detection, extra information about the location of objects in the scene is essential for visually impaired people. Keeping this in mind, the current research work presents an Efficient Object Detection Model with Audio Assistive System (EODM-AAS) using a DL-based YOLO v3 model for visually impaired people. The aim of the article is to construct a model that can provide a detailed description of the objects around a visually impaired person. The presented model uses a DL-based YOLO v3 model for multi-label object detection, determines the position of each object in the scene, and finally generates an audio signal to notify the visually impaired user. To validate the detection performance of the presented method, a detailed simulation analysis was conducted on four datasets. The simulation results establish that the presented model produces better outcomes than existing methods.
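A minimal sketch of the object-position-to-audio step described above follows (an illustration, not the authors' code): the horizontal centre of each detected bounding box is mapped to a coarse direction and spoken with an off-the-shelf TTS engine; pyttsx3 and the dummy detections are illustrative choices.

```python
# Map YOLO-style (label, box) detections to spoken directions relative to the camera frame.
import pyttsx3

def describe(label: str, box: tuple, frame_width: int) -> str:
    """Map a (x_min, y_min, x_max, y_max) box to 'label on your left/ahead/right'."""
    cx = (box[0] + box[2]) / 2.0
    if cx < frame_width / 3:
        side = "on your left"
    elif cx > 2 * frame_width / 3:
        side = "on your right"
    else:
        side = "ahead of you"
    return f"{label} {side}"

detections = [("chair", (40, 120, 180, 300)), ("person", (420, 60, 600, 460))]  # dummy detector output
engine = pyttsx3.init()
for label, box in detections:
    engine.say(describe(label, box, frame_width=640))
engine.runAndWait()
```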
Objective: Aggression is one of the common social disorders in adolescence. Blindness is a disability that can lead to immature and inappropriate behaviors in children and increase aggression in teenagers. The present study was conducted to investigate the effect of music on aggressive behavior in visually impaired students. Methods: This research was an experimental pretest-posttest study with a control group and was conducted in 2012. The study population was teenagers with visual impairments in Bojnord, northeast of Iran. The Buss and Perry aggression questionnaire and the Rutter behavior questionnaire for teachers were used. Twelve music therapy sessions were held, each lasting 90 minutes. T-tests and analysis of covariance (ANCOVA) were used for data analysis. Results: There were no significant differences between the two groups regarding age, socioeconomic status, or education level of parents, as ascertained prior to the pretest. In the intervention group, the decline in aggression scores was statistically significant, and there were significant differences between the posttest results of the intervention and control groups. Conclusion: Music therapy reduces aggression in teenagers with blindness and can be used as a non-pharmacological intervention to reduce such emotional states in this group.
The problem of producing a natural language description of an image to describe its visual content has gained attention in natural language processing (NLP) and computer vision (CV). It is driven by applications such as image retrieval and indexing, virtual assistants, image understanding, and support for visually impaired people (VIP). Although VIP use other senses, such as touch and hearing, to recognize objects and events, their quality of life is lower than the standard level. Automatic image captioning generates captions that can be read aloud to VIP, helping them realize what is happening around them. This article introduces a Red Deer Optimization with Artificial Intelligence Enabled Image Captioning System (RDOAI-ICS) for visually impaired people. The presented RDOAI-ICS technique aids in generating image captions for VIP. It utilizes a Neural Architecture Search Network (NASNet) model to produce image representations and a Radial Basis Function Neural Network (RBFNN) to generate the textual description. To enhance performance, parameter optimization is carried out with the RDO algorithm for NASNet and the Butterfly Optimization Algorithm (BOA) for the RBFNN model, showing the novelty of the work. The RDOAI-ICS method was evaluated on a benchmark dataset, and the outcomes show its enhancements over other recent image captioning approaches.
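As a small illustration of the image-representation stage described above (not the full RDOAI-ICS pipeline), the sketch below uses a pretrained NASNet backbone from tf.keras to encode an image into a feature vector that a downstream decoder (an RBFNN in the paper) would turn into caption words; the input size and dummy image are illustrative assumptions.

```python
# Encode an image with a pretrained NASNetMobile backbone (ImageNet weights) and
# global average pooling, yielding one feature vector per image for a caption decoder.
import numpy as np
from tensorflow.keras.applications import NASNetMobile
from tensorflow.keras.applications.nasnet import preprocess_input

encoder = NASNetMobile(weights="imagenet", include_top=False, pooling="avg",
                       input_shape=(224, 224, 3))

image = np.random.rand(1, 224, 224, 3).astype("float32") * 255.0  # stand-in image batch
features = encoder.predict(preprocess_input(image))
print(features.shape)   # (1, 1056): feature vector handed to the caption decoder
```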
The pattern password method is among the most attractive authentication methods and involves drawing a pattern; this is seen as easier than typing a password. However, since people with visual impairments have been increasing their usage of smart devices, this method is inaccessible to them because it requires selecting points on the touch screen. Therefore, this paper exploits haptic technology by introducing a vibration-based pattern password approach in which vibration feedback plays an important role. This approach allows visually impaired people to use a pattern password through two developed forms of vibration feedback: pulses, which are counted by the user, and duration, which has to be estimated by the user. To make the proposed approach capable of preventing shoulder-surfing attacks, a camouflage pattern approach is applied. An experimental study was conducted to evaluate the proposed approach, and its results show that the vibration pulse feedback is usable and resistant to shoulder-surfing attacks.
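A minimal, platform-neutral sketch of the pulse-feedback idea described above follows (an illustration, not the paper's implementation): each node of the pattern grid is announced as a group of short vibration pulses that the user counts, with timed print statements standing in for a real vibration API; the timings are illustrative assumptions.

```python
# Simulate pulse-count feedback for a pattern password on a 3x3 grid.
import time

PULSE_ON, PULSE_GAP, NODE_GAP = 0.15, 0.15, 0.8   # illustrative timings in seconds

def vibrate_pulses(count: int) -> None:
    """Emit `count` short pulses; a real device would call its vibration API here."""
    for _ in range(count):
        print("buzz", end=" ", flush=True)
        time.sleep(PULSE_ON + PULSE_GAP)
    print()

def present_pattern(pattern: list) -> None:
    """Announce each grid node (1-9) of the pattern as a countable pulse group."""
    for node in pattern:
        vibrate_pulses(node)
        time.sleep(NODE_GAP)

present_pattern([3, 1, 5, 9])   # example pattern
```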
Artificial Intelligence (AI) and Computer Vision (CV) advancements have led to many useful methodologies in recent years, particularly for helping visually challenged people. Object detection involves a variety of challenges, for example, handling multi-class images and images that get augmented when captured by a camera, and the test images include all of these variants. Such detection models alert visually challenged users about their surroundings when they want to walk independently. This study compares four CNN-based pretrained models predominantly used in image recognition applications: Residual Network (ResNet-50), Inception v3, Dense Convolutional Network (DenseNet-121), and SqueezeNet. Based on the analysis performed on the test images, the study infers that Inception v3 outperformed the other pretrained models in terms of accuracy and speed. To further improve the performance of the Inception v3 model, the Thermal Exchange Optimization (TEO) algorithm is applied to tune the hyperparameters (number of epochs, batch size, and learning rate), showing the novelty of the work. Better accuracy was achieved owing to the inclusion of an auxiliary classifier as a regularizer, the hyperparameter optimizer, and a factorization approach. Additionally, Inception v3 can handle images of different sizes, which makes it the optimum model for assisting visually challenged people in real-world communication when integrated with Internet of Things (IoT)-based devices.
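As a rough illustration of how such a comparison can be set up (not the study's benchmark harness), the sketch below loads the four pretrained backbones named above from torchvision and times a single forward pass on a dummy image; the input size and timing method are illustrative choices.

```python
# Load the four ImageNet-pretrained models and time one inference pass each.
import time
import torch
from torchvision import models

candidates = {
    "ResNet-50": models.resnet50(weights="DEFAULT"),
    "Inception v3": models.inception_v3(weights="DEFAULT"),
    "DenseNet-121": models.densenet121(weights="DEFAULT"),
    "SqueezeNet": models.squeezenet1_0(weights="DEFAULT"),
}

x = torch.rand(1, 3, 299, 299)   # dummy image; 299x299 also satisfies Inception v3
for name, model in candidates.items():
    model.eval()
    start = time.perf_counter()
    with torch.no_grad():
        scores = model(x)
    print(f"{name}: top class {scores.argmax().item()} in {time.perf_counter() - start:.3f}s")
```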
Visual impairment is one of the major problems among people of all age groups across the globe. Visually Impaired Persons (VIPs) require help from others to carry out their day-to-day tasks, and since they experience several problems in their daily lives, technical intervention can help them resolve these challenges. In this background, an automatic object detection tool is needed to empower VIPs with safe navigation, and recent advances in the Internet of Things (IoT) and Deep Learning (DL) techniques make this possible. The current study proposes an IoT-assisted Transient Search Optimization with Lightweight RetinaNet-based object detection (TSOLWR-ODVIP) model to help VIPs. The primary aim of the presented TSOLWR-ODVIP technique is to identify different objects surrounding VIPs and to convey this information to them via audio messages. IoT devices are used for data acquisition. The Lightweight RetinaNet (LWR) model is then applied to detect objects accurately, the TSO algorithm is employed to fine-tune the hyperparameters of the LWR model, and finally a Long Short-Term Memory (LSTM) model is exploited to classify objects. The performance of the proposed TSOLWR-ODVIP technique was evaluated using a set of objects, and the results were examined under distinct aspects. The comparison study outcomes confirm that the TSOLWR-ODVIP model can effectually detect and classify objects, enhancing the quality of life of VIPs.
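To make the hyperparameter fine-tuning step concrete, the sketch below uses plain random search as a simplified stand-in for the TSO algorithm named above; the search space and the `evaluate_detector` function are hypothetical placeholders for training and validating the detector.

```python
# Random-search skeleton standing in for metaheuristic hyperparameter tuning (TSO in the paper).
import random

def evaluate_detector(learning_rate: float, batch_size: int) -> float:
    """Hypothetical placeholder: train/evaluate the detector and return a validation score."""
    return random.random()   # replace with real training + evaluation

def random_search(n_trials: int = 20, seed: int = 0):
    random.seed(seed)
    best_params, best_score = None, -1.0
    for _ in range(n_trials):
        params = {
            "learning_rate": 10 ** random.uniform(-5, -2),
            "batch_size": random.choice([8, 16, 32, 64]),
        }
        score = evaluate_detector(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

print(random_search())
```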
Text-to-Speech (TTS) is a speech processing tool that is highly helpful for visually challenged people; it transforms text into human-like speech. However, it is highly challenging to accomplish good TTS outcomes for non-diacritized Arabic text, since the language has multiple unique features and rules. Special characters such as gemination and diacritic signs, which respectively indicate consonant doubling and short vowels, greatly impact the precise pronunciation of Arabic, but such signs are not frequently used in written Arabic because speakers and readers can infer them from context. In this background, the current research article introduces an Optimal Deep Learning-driven Arabic Text-to-Speech Synthesizer (ODLD-ATSS) model to help visually challenged people in the Kingdom of Saudi Arabia. The prime aim of the presented ODLD-ATSS model is to convert text into speech signals for visually challenged people. To attain this, the model first designs a Gated Recurrent Unit (GRU)-based prediction model for diacritic and gemination signs, and the Buckwalter code is utilized to capture, store and display the Arabic texts. To improve the TTS performance of the GRU method, the Aquila Optimization Algorithm (AOA) is used, which shows the novelty of the work. Further experimental analyses illustrate the enhanced performance of the proposed ODLD-ATSS model: it achieved a maximum accuracy of 96.35%, and the experimental outcomes indicate its improved performance over other DL-based TTS models.
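A minimal sketch of a GRU-based diacritic tagger of the kind described above follows (an illustration, not the paper's model): each input character, for example in Buckwalter transliteration, is embedded and a bidirectional GRU predicts one diacritic class per character; the vocabulary and class sizes are illustrative assumptions.

```python
# Character-level sequence labeling: one diacritic/gemination class per input character.
from tensorflow.keras import layers, models

NUM_CHARS = 60        # assumed size of the Buckwalter character inventory
NUM_DIACRITICS = 16   # assumed number of diacritic/gemination classes (incl. "none")

model = models.Sequential([
    layers.Input(shape=(None,)),                          # variable-length character ids
    layers.Embedding(NUM_CHARS, 64, mask_zero=True),
    layers.Bidirectional(layers.GRU(128, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(NUM_DIACRITICS, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # trained on (character-id sequence, diacritic-id sequence) pairs
```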
Objective: To compare the learning of visually impaired individuals after the use of the educational game “Drugs: playing it clean”. Method: Quasi-experimental, comparative, before-after study. Results: The participants’ mean age in Brazil was lower than in Portugal. A significant difference in information acquisition was found between the pre- and post-test for the low-complexity questions (Brazil p = 0.018 and Portugal p = 0.002), with no difference in the number of correct answers for the medium/high-complexity questions between the two countries (p = 0.655 and p = 0.0792). When comparing the number of correct answers before and after the game intervention, an increase was found in Brazil and Portugal, respectively (21.8% to 61.1%; 11.2% to 38.9%). A significant difference was found in the number of correct answers between the low and medium/high-complexity questions (p = 0.030). Conclusion: The educational game permits information access and can be used as a teaching-learning strategy.
Objective: To validate the educational board game “Drugs: playing fair” for visually impaired people in Brazil and Portugal. Methods: Study of apparent validation carried out in two associations for visually impaired people in Fortaleza, Brazil, and in Porto, Portugal. Thirty-six visually impaired people, 18 from each country, participated in the study. An evaluation tool with 23 items on the specifications, content and motivation of the game was applied. Results: The scores awarded in both countries were excellent, with means varying from 9.0 to 9.6 in Brazil and from 8.4 to 9.2 in Portugal. As for the categories and subcategories, the best means in Brazil were content (9.5), theoretical and methodological consistency (9.6) and concepts/information (9.5). In Portugal, the best means were concepts/information (9.2) and curiosity (9.2). Only two items showed a significant difference: “it allows interaction” (p = 0.024) and “compatible degree of difficulty” (p = 0.012). Conclusion: The educational game on drugs was validated in Brazil and Portugal.
According to figures from the Department of Statistics, 29,000 elderly people had registered for the Visually Impaired Card in 2014, accounting for 51.2% of all visually impaired people; with an annual growth rate of 1.56%, this number increases by about 1,100 people per year. According to the WHO, 285 million people worldwide were estimated to be visually impaired in 2012, which is 4.24% of the overall population. From the age distribution point of view, most visually impaired people, accounting for 2.76% of the overall population, were above age 50; on average, there are 7 visually impaired people in every 100 people above 50 years old. In Taiwan, 549,000 people over age 50 are estimated to be visually impaired, so a large population of visually impaired people can be expected. The main purpose of this research is to collect, via a questionnaire survey, adaptability information on the cognitive model of senior visually impaired people regarding their work status, social participation, and leisure activities. Furthermore, descriptive video service is used as an accessible long-term care application for visually impaired senior users, aiming to address disability circumstances and improve home care quality through a visually impaired APP proof of concept (PoC). The results of this research will serve as an important basis for other studies in related fields and as a reference for practical application, encouraging profit-seeking enterprises to design user-oriented products, and may even become an opening for a mobile accessibility services benchmark in technology-based social care for disabled and senior users.
Today eMarketplaces play a significant role in contemporary life by providing many income and business opportunities to people and organizations throughout the world. Despite innovations in the field of IT, many eMarketplaces lack the ability to provide appropriate services for people with special needs, especially the blind. Therefore, this paper focuses on incorporating an interface for blind people to participate in the business of eMarketplaces. A model of a voice-based eMarketplace is proposed using voice recognition technology: individual blind users of the system are uniquely identified through voice recognition, enabling them to access the eMarketplace in a secure manner. Further work on this project involves building such a module on an existing eMarketplace.
Visually impaired persons have difficulty in transactions that involve banknotes. This paper proposes a Malaysian banknote detection system for the visually impaired that uses image processing technology and a fuzzy logic algorithm. The Malaysian banknote reader first captures an image of the inserted banknote and sends it to a cloud server for image processing over Wi-Fi. The cloud server receives the banknote image from the reader, processes it using perceptual-hashing-based image searching and a fuzzy logic algorithm, and returns the detected banknote value to the reader. The reader announces the result as a voice message played on an attached mini speaker, allowing visually impaired persons to know the banknote's value. This hardware arrangement reduces the size and cost of the banknote reader carried by visually impaired persons. Experimental results showed that the system reached an accuracy beyond 95% in tests on 600 different worn, torn and new Malaysian banknotes. After the banknote image is taken by the reader's camera, the system is able to detect the banknote value in about 480 to 560 milliseconds for single-sided banknote recognition. The detection speed is also comparable with that of human observers reading banknotes, with only a slight difference against their response time of about 1.09 seconds per banknote. The IoT and image processing concepts were successfully blended, providing an alternative to aid visually impaired persons in their daily business transaction activities.
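As a small illustration of the perceptual-hashing lookup described above (not the paper's server code), the sketch below reduces each reference banknote image to a perceptual hash and matches a captured note to the reference with the smallest Hamming distance; the file names, denominations, and distance threshold are hypothetical examples.

```python
# Perceptual-hash matching of a captured banknote against reference images.
from PIL import Image
import imagehash

REFERENCES = {          # hypothetical reference images for each denomination
    "RM1": "rm1_front.png",
    "RM5": "rm5_front.png",
    "RM10": "rm10_front.png",
}

def build_index(refs: dict) -> dict:
    """Precompute a perceptual hash for every reference banknote image."""
    return {label: imagehash.phash(Image.open(path)) for label, path in refs.items()}

def classify(capture_path: str, index: dict, max_distance: int = 12) -> str:
    """Return the denomination whose hash is closest to the captured note, or 'unknown'."""
    captured = imagehash.phash(Image.open(capture_path))
    label, distance = min(((l, captured - h) for l, h in index.items()), key=lambda t: t[1])
    return label if distance <= max_distance else "unknown"

index = build_index(REFERENCES)
print(classify("captured_note.jpg", index))   # hypothetical captured image
```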
Cosmetics are used to improve physical appearance, but the benefits may be limited to people without visual impairment, and the importance of attractiveness among blind persons has not been assessed. We investigated the influence of makeup on the brain activity of blind persons using functional magnetic resonance imaging (fMRI). Participants were 7 blind females (BFs) who had learned to fully apply makeup and 9 mostly age-matched normally sighted females (NSFs). Brain activity was measured using fMRI before and after the application of makeup and during a makeup image task in each state. In the default mode network at rest, there was no difference between the BFs and NSFs. However, a lateral visual network to the opposite side was observed in the NSFs, whereas no such network was noted in the BFs; only a weak network was noted in the BFs in the occipital fusiform gyrus and temporal occipital fusiform cortex, along with an extensive visual area network defect. In addition, activity after makeup application was significantly higher in the nucleus accumbens, pallidum, and hippocampus. Activity in the right middle cingulate gyrus, right cerebral white matter, and right anterior cingulate gyrus was higher before makeup in both BFs and NSFs, and this activity was significantly higher and more extensive in the BFs. In conclusion, applying makeup is a personally rewarding activity, even for BFs, as it strongly activates the reward system and the reward/memory system network, even in the absence of a visual area network.
Purpose: The research aims to investigate the information needs of visually impaired library users in China in order to increase our understanding of these users and help Chinese public libraries improve their services for them. Design/methodology/approach: A questionnaire survey was used to study the library users’ information needs. Eleven large public libraries in different areas of China, which were pioneers in implementing services for visually impaired people, were chosen to conduct the survey. Data analysis was based on 97 valid questionnaires retrieved. Findings: Radio and television were still the preferred sources of information for visually impaired users. In information seeking, they had a strong preference for obtaining information in the most convenient way and for accessing a vast amount of information that is updated quickly. They paid more attention to information closely related to their work and life, and their purposes for seeking information were mainly learning, relaxation and intercommunication. Visually impaired users felt some barriers in their access to library services, such as a lack of time or of a sighted companion who could come along for the trip to the library. Moreover, it was difficult for them to use the Internet to search for information, because many websites do not support the auxiliary software designed especially for visually impaired users or offer only a paid subscriber service. Research limitations: A majority of the respondents were young and middle-aged people engaged in work. The sample size needs to be enlarged, and different groups of users such as older people and students should be included to yield more useful results. Practical implications: The survey results provide insights into the information needs of Chinese visually impaired library users. Meanwhile, the research can serve as a reference source for future studies of services for visually impaired library users. Originality/value: So far, few studies have been conducted to investigate the information needs of visually impaired library users in China.
The National University Corporation Tsukuba University of Technology (NTUT) is the only institute of higher education for the hearing impaired and the visually impaired in Japan. In our university, hearing or visually impaired students study to become technicians after graduation, working toward social independence. Previous experience in higher education for students with disabilities shows that learning effects increase when modeling is used by the teachers involved in professional education. In the Mechanical Engineering Course, we use modeling to help beginning students match drawings with shapes; it supports enhancing their spatial view and draws out their ability in the basics of mechanical engineering. For students studying Mechanical Design and Drawing, models of a gear pump, a jack and a globe valve are easily related to the drawings and the operation of each mechanism through sample drawings in the textbook, giving students an opportunity to think about machine mechanisms, as shown by the students' works. Assembling a model makes students recognize the need for form accuracy in order to achieve a working function, and improves the quality of learning. A three-dimensional molding machine makes experiential learning with models possible, including modeling from dimensional numerical data; this is also embodied in three-dimensional modeling produced from image processing programs written by the students, with the quality of the resulting shape confirming the improvement of the program. In the Department of Synthetic Design, students have the chance to realize and self-evaluate their designs, for example a lamp shade with a complicated shape. In the Faculty of Health Science, Department of Health, high-quality teaching of visually impaired students in medical-related courses has become possible through the use of bone model teaching materials.