Funding: This work was supported by the Competitive Research Fund of The University of Aizu, Japan.
Abstract: Communication between people with disabilities and people who do not understand sign language is a growing social need and can be a tedious task. One of the main functions of sign language is to communicate with each other through hand gestures, so recognition of hand gestures has become an important challenge for sign language recognition. Many existing models achieve good accuracy, but when tested on rotated or translated images they may struggle to maintain that performance. To resolve these challenges of hand gesture recognition, we propose a rotation-, translation- and scale-invariant sign word recognition system using a convolutional neural network (CNN). Our work follows three steps: rotated, translated and scaled (RTS) version dataset generation, gesture segmentation, and sign word classification. Firstly, we enlarge a benchmark dataset of 20 sign words by applying different amounts of rotation, translation and scaling to the original images to create the RTS version dataset. Then we apply the gesture segmentation technique, which consists of three levels: i) Otsu thresholding with YCbCr, ii) morphological analysis (dilation through opening morphology), and iii) the watershed algorithm. Finally, our designed CNN model is trained to classify the hand gesture as well as the sign word. Our model has been evaluated on the twenty sign word dataset, the five sign word dataset, and the RTS versions of these datasets. We achieved 99.30% accuracy on the twenty sign word dataset, 99.10% accuracy on its RTS version, 100% accuracy on the five sign word dataset, and 98.00% accuracy on its RTS version. Furthermore, our model achieves results competitive with state-of-the-art methods in sign word recognition.
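The three-level segmentation described in this abstract can be sketched with OpenCV as follows. This is a minimal illustration, not the authors' exact pipeline: the choice of the Cr channel, the kernel size, the iteration counts and the distance-transform threshold are assumptions made for the example.

```python
# Sketch of the described segmentation: YCbCr + Otsu thresholding,
# morphological opening/dilation, then the watershed algorithm.
import cv2
import numpy as np

def segment_hand(bgr_image):
    # i) Otsu thresholding on a YCbCr channel (Cr is assumed here,
    #    since it tends to separate skin tones well)
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    cr = ycrcb[:, :, 1]
    _, mask = cv2.threshold(cr, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # ii) Morphological analysis: opening to remove noise, then dilation
    kernel = np.ones((3, 3), np.uint8)
    opened = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
    sure_bg = cv2.dilate(opened, kernel, iterations=3)

    # iii) Watershed: markers built from the distance transform of the opened mask
    dist = cv2.distanceTransform(opened, cv2.DIST_L2, 5)
    _, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)
    sure_fg = sure_fg.astype(np.uint8)
    unknown = cv2.subtract(sure_bg, sure_fg)
    _, markers = cv2.connectedComponents(sure_fg)
    markers = markers + 1            # reserve label 1 for sure background
    markers[unknown == 255] = 0      # unknown region gets label 0
    markers = cv2.watershed(bgr_image, markers)

    segmented = bgr_image.copy()
    segmented[markers <= 1] = 0      # keep only the segmented gesture region
    return segmented
```

The resulting gesture mask would then be fed to the CNN classifier in place of the raw frame.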
Abstract: Purpose: We explored parents' perceptions and judgment formation processes concerning their infants' health-related quality of life (HRQOL). Method: The PedsQL(TM) Infant Scales, an instrument specifically designed for infants aged 1-24 months, were translated into Japanese. Forward and backward translations were performed, evaluating semantic and conceptual equivalence. Parents with infants younger than two years old were recruited and interviewed using think-aloud and probing techniques. Participants completed the questionnaire while speaking aloud about what came to their mind, what they thought each question meant, and how they reached each answer. Results: Seven mothers and three fathers participated. The median age was 33.4 (28-43) years. Four had infants younger than six months old. All infants were healthy. Parents' perceptions of their infants' HRQOL varied with the infants' ages. Some parents of infants younger than six months had difficulty discussing "emotional functioning" and "cognitive functioning" because their infants were too young to show the actions mentioned in the items; in those cases, the parents responded "never a problem". Seventy-five percent of parents recalled their infants' daily "physical functioning", while only 58% recalled "physical symptoms". Some parents' judgments were compromised by their own perceptions; for example, they answered "often a problem" when the items were problematic for themselves rather than for their child. However, many distinguished their infants' HRQOL from their own perceptions, indicating that they understood the intention of the questionnaire. Conclusion: Parents' judgment formation may be compromised by their own perceptions. The results of this study will be helpful in improving healthcare communication and in interpreting parents' judgments of their infants' HRQOL in future studies.
Abstract: Some estimates concerning translation, the scaling transform and the pointwise product within the Meyer-Yan framework are obtained. Applications of these results to the Wiener product and Gateaux differentiation are also proposed.
Abstract: This paper proposes a new set of 3D rotation, scaling and translation invariants of 3D radially shifted Legendre moments. We aim to develop two kinds of transformed shifted Legendre moments: 3D substituted radial shifted Legendre moments (3DSRSLMs) and 3D weighted radial ones (3DWRSLMs). Both are centered on two types of polynomials. In the first case, a new 3D radial complex moment is proposed. In the second case, new 3D substituted/weighted radial shifted Legendre moments (3DSRSLMs/3DWRSLMs) are introduced using a spherical representation of the volumetric image. In the third case, 3D invariants are derived from the suggested 3D radial shifted Legendre moments. To confirm the proposed approach, we have addressed three issues: rotation, scaling and translation invariance. The experimental results show that the 3DSRSLMs and 3DWRSLMs perform better than the 3D radial complex moments, both with and without noise. At the same time, the reconstruction converges rapidly to the original image using the 3DSRSLMs and 3DWRSLMs, and the test 3D images are clearly recognized from a set of images available in the Princeton Shape Benchmark (PSB) database for 3D images.
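For orientation, the sketch below computes the simpler standard (Cartesian) 3D shifted Legendre moments of a sampled volume; it is not the radial/spherical formulation proposed in this abstract. The grid mapping to the unit cube and the (2n+1) normalisation factor follow the usual textbook definition and are assumptions of the example.

```python
# Cartesian 3D shifted Legendre moments of a volume sampled on the unit cube,
# using SciPy's shifted Legendre polynomials P*_n(x) = P_n(2x - 1).
import numpy as np
from scipy.special import eval_sh_legendre

def shifted_legendre_moments_3d(volume, max_order):
    """Return the (max_order+1)^3 tensor of 3D shifted Legendre moments."""
    N = volume.shape[0]                       # assume a cubic N x N x N volume
    x = (np.arange(N) + 0.5) / N              # voxel centres mapped into [0, 1]
    # P[n, i] = (2n + 1) * P*_n(x_i); (2n + 1) is the standard normalisation factor
    P = np.stack([(2 * n + 1) * eval_sh_legendre(n, x)
                  for n in range(max_order + 1)])
    dv = (1.0 / N) ** 3                       # volume element of one voxel
    # lambda_{lmn} = sum_{ijk} P*_l(x_i) P*_m(y_j) P*_n(z_k) f(i, j, k) * dv
    return np.einsum('li,mj,nk,ijk->lmn', P, P, P, volume) * dv

# Tiny usage example on a random volume
vol = np.random.rand(32, 32, 32)
moments = shifted_legendre_moments_3d(vol, max_order=4)
print(moments.shape)   # (5, 5, 5)
```

The radial variants in the abstract instead expand the volume in spherical coordinates and combine the resulting moments into rotation-, scaling- and translation-invariant descriptors.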