Communication between people with disabilities and people who do not understand sign language is a growing social need and can be a tedious task. One of the main functions of sign language is to communicate through hand gestures, and hand gesture recognition has therefore become an important challenge in sign language recognition. Many existing models achieve good accuracy, but when tested with rotated or translated images they may struggle to maintain that performance. To address these challenges, we propose a rotation, translation and scale-invariant sign word recognition system using a convolutional neural network (CNN). Our work follows three steps: generation of a rotated, translated and scaled (RTS) version of the dataset, gesture segmentation, and sign word classification. First, we enlarge a benchmark dataset of 20 sign words by applying different amounts of rotation, translation and scaling to the original images to create the RTS dataset. We then apply a gesture segmentation technique consisting of three levels: i) Otsu thresholding in the YCbCr color space, ii) morphological analysis (dilation through opening morphology) and iii) the watershed algorithm. Finally, our CNN model is trained to classify the segmented hand gesture as the corresponding sign word. The model has been evaluated on the twenty-sign-word dataset, a five-sign-word dataset, and the RTS versions of both. We achieved 99.30% accuracy on the twenty-sign-word dataset, 99.10% on its RTS version, 100% on the five-sign-word dataset, and 98.00% on its RTS version. Furthermore, our model achieves results competitive with state-of-the-art methods in sign word recognition.
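The first segmentation level described above, Otsu thresholding applied in the YCbCr color space, can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: the BT.601 conversion coefficients, the choice of the Cr channel for skin segmentation, and the toy image are assumptions for demonstration.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H, W, 3, uint8) to YCbCr (ITU-R BT.601)."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def otsu_threshold(channel):
    """Return the threshold maximizing between-class variance (Otsu's method)."""
    hist, _ = np.histogram(channel, bins=256, range=(0, 256))
    total = channel.size
    mu_total = (hist * np.arange(256)).sum() / total
    best_t, best_var = 0, -1.0
    cum_w, cum_mu = 0.0, 0.0
    for t in range(256):
        cum_w += hist[t] / total        # cumulative weight of the background class
        cum_mu += t * hist[t] / total   # cumulative first moment of the background
        if cum_w in (0.0, 1.0):
            continue                    # one class empty: variance undefined
        mu_b = cum_mu / cum_w
        mu_f = (mu_total - cum_mu) / (1.0 - cum_w)
        var_between = cum_w * (1.0 - cum_w) * (mu_b - mu_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Toy "hand" patch: a skin-toned square on a dark background.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2:6, 2:6] = [200, 120, 100]         # hypothetical skin-like RGB value
cr = rgb_to_ycbcr(img)[..., 2]          # Cr channel separates skin from background
t = otsu_threshold(cr)
mask = (cr > t).astype(np.uint8)
print(mask.sum())                       # 16 foreground pixels (the 4x4 square)
```

In a full pipeline this binary mask would then feed the morphological opening/dilation stage and the watershed step mentioned in the abstract.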
In dynamic CT, the acquired projections are corrupted by the strongly dynamic nature of the object, for example the lungs or heart. In this paper, we present a fan-beam reconstruction algorithm without position-dependent backprojection weight that compensates for the time-dependent translational, uniform-scaling and rotational deformations occurring in the object of interest during data acquisition. We also compare the computational cost of the proposed reconstruction algorithm with the existing one that uses a position-dependent weight. To accomplish this, we first formulate admissibility conditions on the deformations that are required to exactly reconstruct the object from the acquired sequence of deformed projections, and then derive a reconstruction algorithm that compensates the above deformations while satisfying the admissibility conditions. For this, a 2-D time-dependent deformation model is incorporated into the fan-beam FBP reconstruction algorithm with no backprojection weight, assuming the motion parameters are known. Finally, the proposed reconstruction algorithm is evaluated on motion-corrupted projection data simulated on a computer.
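The deformation class handled above (time-dependent translation, uniform scaling and rotation) can be written as a coordinate map Γ_t(x) = s(t)·R(θ(t))·x + d(t); its inverse is what a motion-compensating backprojection evaluates at each time sample. The sketch below only illustrates this map and its closed-form inverse; the particular parameter functions are hypothetical, and it is not the paper's reconstruction algorithm.

```python
import numpy as np

def rotation(theta):
    """2-D rotation matrix R(theta)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def deform(points, t, scale_fn, angle_fn, shift_fn):
    """Time-dependent map Gamma_t(x) = s(t) R(theta(t)) x + d(t), on row vectors."""
    return scale_fn(t) * points @ rotation(angle_fn(t)).T + shift_fn(t)

def undeform(points, t, scale_fn, angle_fn, shift_fn):
    """Inverse map Gamma_t^{-1}(y) = R(theta(t))^T (y - d(t)) / s(t);
    this is what compensated backprojection would evaluate per view time t."""
    return ((points - shift_fn(t)) @ rotation(angle_fn(t))) / scale_fn(t)

# Hypothetical motion parameters: slow zoom, rotation, and drift over time t.
s_fn = lambda t: 1.0 + 0.1 * t
a_fn = lambda t: 0.05 * t
d_fn = lambda t: np.array([0.2 * t, -0.1 * t])

pts = np.array([[1.0, 2.0], [-0.5, 0.3]])
moved = deform(pts, 2.0, s_fn, a_fn, d_fn)
back = undeform(moved, 2.0, s_fn, a_fn, d_fn)
print(np.allclose(back, pts))  # True: the map round-trips exactly
```

The round-trip check mirrors the admissibility requirement that the deformation be exactly invertible at every acquisition time.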
This paper proposes a new set of 3D rotation, scaling and translation invariants of 3D radially shifted Legendre moments. We develop two kinds of transformed shifted Legendre moments: 3D substituted radial shifted Legendre moments (3DSRSLMs) and 3D weighted radial shifted Legendre moments (3DWRSLMs), each built on a different type of polynomial. In the first case, a new 3D radial complex moment is proposed. In the second case, the new 3DSRSLMs/3DWRSLMs are introduced using a spherical representation of the volumetric image. In the third case, 3D invariants are derived from the suggested 3D radial shifted Legendre moments. To validate the proposed approach, we address three issues: rotation, scaling and translation invariance. Experimental results show that the 3DSRSLMs and 3DWRSLMs outperform the 3D radial complex moments both with and without noise. At the same time, reconstruction converges rapidly to the original image using the 3DSRSLMs and 3DWRSLMs, and test 3D images are clearly recognized from a set of images available in the Princeton Shape Benchmark (PSB) database.
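As a rough illustration of the moment family involved, the following computes shifted Legendre polynomials on [0, 1] via the standard three-term recurrence and a simple separable 3D Legendre moment of a sampled volume. This is a generic Cartesian sketch, not the paper's radial (spherical) formulation; the normalization constant and midpoint sampling are assumptions.

```python
import numpy as np

def shifted_legendre(n, x):
    """Shifted Legendre polynomial P~_n(x) = P_n(2x - 1) on [0, 1], via the
    three-term recurrence (k+1) P_{k+1}(u) = (2k+1) u P_k(u) - k P_{k-1}(u)."""
    u = 2.0 * np.asarray(x, dtype=float) - 1.0
    p_prev, p = np.ones_like(u), u
    if n == 0:
        return p_prev
    for k in range(1, n):
        p_prev, p = p, ((2 * k + 1) * u * p - k * p_prev) / (k + 1)
    return p

def legendre_moment_3d(vol, order):
    """Separable 3D shifted Legendre moment of a volume sampled on [0, 1]^3
    at cell midpoints (a generic Cartesian moment, not the radial variant)."""
    p, q, r = order
    nx, ny, nz = vol.shape
    xs, ys, zs = [(np.arange(s) + 0.5) / s for s in vol.shape]
    Px = shifted_legendre(p, xs)
    Py = shifted_legendre(q, ys)
    Pz = shifted_legendre(r, zs)
    norm = (2 * p + 1) * (2 * q + 1) * (2 * r + 1) / (nx * ny * nz)
    return norm * np.einsum('i,j,k,ijk->', Px, Py, Pz, vol)

vol = np.ones((4, 4, 4))
print(legendre_moment_3d(vol, (0, 0, 0)))  # 1.0 for a constant unit volume
```

The radial variants in the paper replace the Cartesian product basis with shifted Legendre polynomials in the radial coordinate of a spherical representation, which is what makes rotation invariants straightforward to derive.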
Funding: This work was supported by the Competitive Research Fund of The University of Aizu, Japan.