Crowdsourced real-time captioning of sign language by deaf and hard-of-hearing people
International Journal of Pervasive Computing and Communications
ISSN: 1742-7371
Article publication date: 3 April 2017
Abstract
Purpose
The purpose of this paper is to explore the issues involved in achieving crowdsourced real-time captioning of sign language by deaf and hard-of-hearing (DHH) people: how the system structure should be designed, how a continuous sign language captioning task should be divided into microtasks and how many DHH people are required to maintain high-quality real-time captioning.
Design/methodology/approach
The authors first propose a system structure, including a new design of worker roles, task division and task assignment. Then, based on an implemented prototype, the authors analyze the settings necessary for achieving crowdsourced real-time captioning of sign language, test the feasibility of the proposed system and explore its robustness and improvability through four experiments.
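As a rough illustration of the kind of task division and assignment the proposed structure implies, the following sketch splits a continuous sign language video into fixed-length microtasks and assigns them in rotation to groups of DHH workers. It is a hypothetical sketch only: the names (Microtask, divide_into_microtasks, assign_round_robin), the 10-second segment length and the three-group example are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical sketch: divide a continuous captioning task into
# fixed-length microtasks and assign them round-robin to worker groups.
# Segment length and class names are assumptions, not the paper's design.

@dataclass
class Microtask:
    task_id: int
    start_sec: float   # start of the video segment to caption
    end_sec: float     # end of the video segment to caption

def divide_into_microtasks(total_sec: float, segment_sec: float = 10.0) -> List[Microtask]:
    """Split a continuous video of total_sec seconds into segments of segment_sec."""
    tasks: List[Microtask] = []
    start, task_id = 0.0, 0
    while start < total_sec:
        end = min(start + segment_sec, total_sec)
        tasks.append(Microtask(task_id, start, end))
        task_id += 1
        start = end
    return tasks

def assign_round_robin(tasks: List[Microtask], n_groups: int) -> Dict[int, List[Microtask]]:
    """Assign microtasks to n_groups worker groups in rotation, so each group
    has idle time between the segments it must caption in real time."""
    assignment: Dict[int, List[Microtask]] = {g: [] for g in range(n_groups)}
    for task in tasks:
        assignment[task.task_id % n_groups].append(task)
    return assignment

if __name__ == "__main__":
    tasks = divide_into_microtasks(total_sec=60.0)   # e.g. a 60-second clip
    groups = assign_round_robin(tasks, n_groups=3)   # e.g. three worker groups
    for g, ts in groups.items():
        print(f"group {g}: {[(t.start_sec, t.end_sec) for t in ts]}")
```

How many such groups, and how many workers per group, are actually needed to sustain real-time quality is exactly what Experiment 1 in the paper investigates.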
Findings
The results of Experiment 1 revealed the optimal method of task division, the minimum number of groups required and the minimum number of workers required per group. The results of Experiment 2 verified the feasibility of crowdsourced real-time captioning of sign language by DHH people. The results of Experiments 3 and 4 showed the robustness and improvability of the captioning system.
Originality/value
Although some crowdsourcing-based systems have been developed for captioning voice to text, the authors address the captioning of sign language to text, for which the existing approaches do not work well because of the unique properties of sign language. Moreover, DHH people are generally regarded as those who receive support from others, but this proposal helps them become the ones who offer support to others.
Acknowledgements
Research funding: This work was partially supported by JSPS KAKENHI Grant Numbers #25240012, #26870090, #15K01056, #16K16460, Collaborative Research Program at NII, Expense for Strengthening Functions in NTUT Budgetary Request for Fiscal 2016 and Promotional Projects for Advanced Education and Research in NTUT and JST CREST.
Citation
Shiraishi, Y., Zhang, J., Wakatsuki, D., Kumai, K. and Morishima, A. (2017), "Crowdsourced real-time captioning of sign language by deaf and hard-of-hearing people", International Journal of Pervasive Computing and Communications, Vol. 13 No. 1, pp. 2-25. https://doi.org/10.1108/IJPCC-02-2017-0014
Publisher
Emerald Publishing Limited
Copyright © 2017, Emerald Publishing Limited