Search results
1 – 10 of over 5,000
Kinjal Bhargavkumar Mistree, Devendra Thakor and Brijesh Bhatt
Abstract
Purpose
According to the Indian Sign Language Research and Training Centre (ISLRTC), India has approximately 300 certified human interpreters to help people with hearing loss. This paper aims to address the issue of Indian Sign Language (ISL) sentence recognition and translation into semantically equivalent English text in a signer-independent mode.
Design/methodology/approach
This study presents an approach that translates ISL sentences into English text using the MobileNetV2 model and Neural Machine Translation (NMT). The authors have created an ISL corpus from the Brown corpus using ISL grammar rules to perform machine translation. The authors’ approach converts ISL videos of the newly created dataset into ISL gloss sequences using the MobileNetV2 model and the recognized ISL gloss sequence is then fed to a machine translation module that generates an English sentence for each ISL sentence.
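The corpus-creation step above (deriving ISL glosses from English sentences via ISL grammar rules) can be illustrated with a toy sketch. The stopword list, verb set and subject-object-verb reordering below are illustrative assumptions for demonstration only, not the grammar rules actually used by the authors:

```python
# Toy sketch of rule-based English-to-ISL-gloss conversion.
# The DROP list, the caller-supplied verb set and the SOV
# reordering are illustrative assumptions, not the paper's rules.

DROP = {"a", "an", "the", "is", "am", "are", "was", "were", "of", "to"}

def english_to_isl_gloss(sentence: str, verbs: set) -> str:
    """Drop function words, move verbs to the end (SOV order)
    and uppercase the remaining words as glosses."""
    words = [w for w in sentence.lower().split() if w not in DROP]
    rest = [w for w in words if w not in verbs]
    verbs_found = [w for w in words if w in verbs]
    return " ".join(w.upper() for w in rest + verbs_found)

print(english_to_isl_gloss("the boy is eating an apple", {"eating"}))
# → BOY APPLE EATING
```

In the paper's pipeline this runs in the opposite direction at inference time: the recognised gloss sequence is fed to the NMT module, which generates the English sentence.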
Findings
As per the experimental results, the pretrained MobileNetV2 model proved best suited to recognising ISL sentences, and NMT provided better results than Statistical Machine Translation (SMT) for converting ISL text into English text. Automatic and human evaluation of the proposed approach yielded accuracies of 83.3% and 86.1%, respectively.
Research limitations/implications
The neural machine translation system occasionally produced translations that repeated already-translated words, produced strange translations when the number of words per sentence increased and sometimes inserted one or more terms with no relation to the source text. The most common type of error was the mistranslation of places, numbers and dates. Although this has little effect on the overall structure of the translated sentence, it indicates that the embeddings learned for these few words could be improved.
Originality/value
Sign language recognition and translation is a crucial step toward improving communication between the deaf community and the rest of society. Because of the shortage of human interpreters, an alternative approach is needed to help people achieve smooth communication with the deaf. To motivate research in this field, the authors generated an ISL corpus of 13,720 sentences and a dataset of 47,880 ISL videos. As no public dataset is available for ISL videos incorporating the signs released by ISLRTC, the authors created a new video dataset and ISL corpus.
Jestin Joy, Kannan Balakrishnan and Sreeraj M.
Abstract
Purpose
Vocabulary learning is a difficult task for children without hearing ability. Absence of enough learning centers and effective learning tools aggravate the problem. Modern technology can be utilized fruitfully to find solutions to the learning difficulties experienced by the deaf. The purpose of this paper is to present SiLearn – a novel technology based tool for teaching/learning sign vocabulary.
Design/methodology/approach
The proposed mobile application can act as a visual dictionary for deaf people. SiLearn is equipped with features that can automatically detect both text and physical objects and convert them to their corresponding signs. For testing the effectiveness of the proposed mobile application quantitative analyses were done. Quantitative analysis is based on testing a class of 28 students belonging to St Clare Oral School for the Deaf, Kerala, India. This group consisted of 17 boys and 11 girls. Analysis was also done through questionnaire. Questionnaires were given to teachers, parents of deaf students learning sign language and other sign language learners.
Findings
Results indicate that SiLearn is very effective in sign vocabulary development and can enhance the vocabulary learning rate considerably.
Originality/value
This is the first time that artificial intelligence (AI)-based techniques have been used for early-stage sign language learning. SiLearn can equally be used by children, parents and teachers for learning sign language.
Ali Abbas, Summaira Sarfraz and Umbreen Tariq
Abstract
Purpose
The current study aims to determine the viability of the tool developed by Abbas and Sarfraz (2018) to translate English speech and text to Pakistan Sign Language (PSL) with bilingual subtitles.
Design/methodology/approach
Focus group interviews were conducted with 30 teachers at a Pakistani private university who used the PSL translation tool in their classrooms for lecture delivery and communication with deaf students.
Findings
The findings of the study determined the viability of the developed tool and showed that it is helpful in teaching deaf students efficiently. With the availability of this tool, teachers are not dependent on human sign language (SL) interpreters in their classrooms.
Originality/value
Overall, this tool is an effective addition to educational technology for special education. Owing to the lack of sign language (SL) understanding, learning resources and human SL interpreters in Pakistan, institutions struggle to educate deaf students in the classroom. Hearing people, and especially teachers, find it difficult both to communicate with deaf people and to arrange an interpreter for students in multiple classes at the same time, which creates a communication gap between teachers and deaf students.
Abstract
The purpose of this paper is to identify and describe the reference works useful for finding written information on the North American Indian (that is, Indians presently and in the past living in what is now the United States and Canada).
Arup Varma, Parth Patel, Verma Prikshat, Deepak Hota and Vijay Pereira
Abstract
Purpose
Given that the policy is rather comprehensive and detailed, this paper aims to identify some of the key features and discuss the mechanisms by which the benefits of the policy might reach all sections of society.
Design/methodology/approach
In this paper, we analyse India’s new education policy (NEP) and discuss how it might impact education and employment in India and the neighbourhood.
Findings
The authors argue that the NEP (2020) is likely to alter the educational landscape of India and make education accessible to all sections of society. In addition, the impact of the policy will be felt in the Indian workplace.
Research limitations/implications
The authors urge policymakers, educationists and corporate leaders to research the benefits of the NEP in two phases. In the short run, they could study its implementation; in the long run, all three stakeholders should track changes in the quality of graduates produced under the new policy.
Originality/value
This is the first known critique of the NEP (2020) written by five Indian-origin academics and practitioners, offering insight into the policy for scholars and practitioners.
Heini Utunen, Ranil Appuhamy, Melissa Attias, Ngouille Ndiaye, Richelle George, Elham Arabi and Anna Tokar
Abstract
Purpose
OpenWHO is the World Health Organization's online learning platform that was launched in 2017. The COVID-19 pandemic led to massive growth in the number of courses, enrolments and reach of the platform. The platform is built on a stable and scalable basis that can host a large volume of learners. The authors aim to identify key factors that led to this growth.
Design/methodology/approach
In this research paper, the authors examined OpenWHO metadata, end-of-course surveys and internal processes using a quantitative approach.
Findings
OpenWHO metadata showed that the platform has hosted over 190 health courses in 65 languages and recorded over seven million course enrolments. Since the onset of the pandemic, more women, older people and people from middle-income countries have accessed courses than before. Analysis of the platform metadata and course production process identified several key factors behind the platform's growth. First, OpenWHO has a standardised course production pathway that ensures efficiency, consistency and quality. Second, providing courses in different languages extended its reach to a variety of populations throughout the world; this multi-language translation is achieved through a network of translators and an automated system that ensures the efficient translation of learning products. Lastly, access for learners with disabilities was promoted by optimising accessibility in course production. Learner feedback surveys for selected courses showed that the courses were well received: learners found it useful that courses were self-paced and flexible, and preferred learning methods included videos, downloadable documents, slides, quizzes and learning exercises.
Originality/value
Lessons learnt from the WHO's learning response will help prepare researchers for the next health emergency to ensure timely, equitable access to quality health knowledge for everyone. Findings of this study will provide valuable insights for educators, policymakers and researchers in the field who intend to use online learning to optimise knowledge acquisition and performance.
Prema P. Nedungadi, Rajani Menon, Georg Gutjahr, Lynnea Erickson and Raghu Raman
Abstract
Purpose
The purpose of this paper is to illustrate an Inclusive Digital Literacy Framework for vulnerable populations in rural areas under the Digital India program. Key challenges include addressing multiple literacies such as health literacy, financial literacy and eSafety for low-literate learners in low-resource settings with low internet bandwidth, lack of ICT facilities and intermittent electricity.
Design/methodology/approach
This research implemented an educational model based on the proposed framework to train over 1,000 indigenous people using an integrated curriculum for digital literacies at remote settlements. The model uses mobile technology adapted for remote areas, context enabled curriculum, along with flexible learning schedules.
Findings
The education model exemplifies a viable strategy to overcome persistent challenges by taking tablet-based digital literacies directly to communities. It engages different actors such as existing civil societies, schools and government organizations to provide digital literacy and awareness thereby improving both digital and life skills. It demonstrates the potential value of a comprehensive Digital Literacy framework as a powerful lever for Digital Inclusion.
Practical implications
Policy makers can use this transformational model to extend the reach and effectiveness of Digital Inclusion through the last mile enhancing existing training and service centers that offer the traditional model of Digital Literacy Education.
Originality/value
This innovative mobile learning model based on the proposed Digital Framework for Inclusion instilled motivation, interest and confidence while providing effective digital training and conducting exams directly in the tribal settlements for low-literate learners in remote settings. Through incorporating multiple literacies, this model serves to empower learners, enhance potential, improve well-being and reduce the risk of exploitation.
Neethu P.S., Suguna R. and Palanivel Rajan S.
Abstract
Purpose
This paper aims to propose a novel methodology for classifying the gestures using support vector machine (SVM) classification method. Initially, the Red Green Blue color hand gesture image is converted into YCbCr image in preprocessing stage and then palm with finger region is segmented by threshold process. Then, distance transformation method is applied on the palm with finger segmented image. Further, the center point (centroid) of palm region is detected and the fingertips are detected using SVM classification algorithm based on the detected centroids of the detected palm region.
Design/methodology/approach
A gesture is a physical indication of the body to convey information. Though any bodily movement can be considered a gesture, it generally originates from the movement of the hand, the face or a combination of both. Combined gestures are quite complex and difficult for a machine to classify. This paper proposes a novel methodology for classifying gestures using the SVM classification method. Initially, the color hand gesture image is converted into a YCbCr image in the preprocessing stage, and the palm-with-finger region is then segmented by thresholding. The distance transformation method is then applied to the segmented image. Further, the center point of the palm region is detected, and the fingertips are detected using the SVM classification algorithm. The proposed hand gesture image classification system is applied and tested on the "Jochen Triesch," "Sebastien Marcel" and "11Khands" hand gesture data sets to evaluate its efficiency. The performance of the proposed system is analyzed with respect to sensitivity, specificity, accuracy and recognition rate, and the simulation results on these data sets are compared with conventional methods.
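The preprocessing steps described here (RGB-to-YCbCr conversion, threshold segmentation and palm center detection) can be sketched in plain NumPy. The BT.601 conversion coefficients and the Cb/Cr skin-tone thresholds below are commonly used defaults taken as assumptions, since the paper's exact values are not given; the distance transform and the SVM fingertip classifier are omitted from this sketch:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB uint8 image to YCbCr (BT.601, full range)."""
    r, g, b = [img[..., i].astype(np.float64) for i in range(3)]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def segment_skin(img, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold the Cb/Cr planes to a binary skin mask.
    The ranges are widely used defaults, assumed here."""
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))

def centroid(mask):
    """Center point (row, col) of the segmented palm region."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

# Synthetic example: black background, skin-colored square.
img = np.zeros((40, 40, 3), dtype=np.uint8)
img[10:30, 10:30] = (200, 120, 90)   # approximate skin tone
mask = segment_skin(img)
print(centroid(mask))                # → (19.5, 19.5), the square's center
```

In the full pipeline, a distance transform of this mask would locate the palm center more robustly, and an SVM would then classify fingertip candidates relative to that centroid.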
Findings
This paper proposes a novel methodology for classifying gestures using the SVM classification method, with the distance transform method used to detect the center point of the segmented palm region. The proposed hand gesture detection methodology achieves 96.5% sensitivity, 97.1% specificity, 96.9% accuracy and a 99.3% recognition rate on the "Jochen Triesch" data set; 94.6% sensitivity, 95.4% specificity, 95.3% accuracy and a 97.8% recognition rate on the "Sebastien Marcel" data set; and 97% sensitivity, 98% specificity, 98.1% accuracy and a 98.8% recognition rate on the "11Khands" data set. Recognition times are 0.52 s per image on "Jochen Triesch", 0.71 s on "Sebastien Marcel" and 0.22 s on "11Khands". The methodology thus requires the least recognition time on the "11Khands" data set, making that data set well suited to real-time hand gesture applications with multi-background environments.
Originality/value
The modern world requires more automated systems to carry out daily routine activities efficiently. Present-day technology offers touch-screen methods for operating many devices and machines, with or without wired connections, and also enables automated vehicles that can be operated without direct driver interaction. This is made possible through hand gesture recognition systems, which capture real-time hand gestures, the physical movements of the human hand, as digital images and recognize them against a pre-stored set of hand gestures.