Mobile app communication aid for Cypriot deaf people

Katerina Pieri (School of Computer Science, University of Nottingham, Nottingham, UK)
Sue Valerie Gray Cobb (Human Factors Research Group, University of Nottingham, Nottingham, UK)

Journal of Enabling Technologies

ISSN: 2398-6263

Article publication date: 19 July 2019

Issue publication date: 19 August 2019

Abstract

Purpose

People with severe or profound hearing loss face daily communication problems, mainly due to the language barrier between themselves and the hearing community. Their hearing deficiency, as well as their use of sign language, often makes it difficult for them to use and understand spoken language. Cyprus is amongst the top five European countries with a relatively high proportion of registered deaf people (0.12 per cent of the population: GUL, 2010). However, the lack of technological and financial support for the Deaf Community of Cyprus leaves Cypriot deaf people unsupported and marginalised. The paper aims to discuss this issue.

Design/methodology/approach

This study implemented user-centred design methods to explore the communication needs and requirements of Cypriot deaf people and develop a functional prototype of a mobile app to help them to communicate more effectively with hearing people. A total of 76 deaf adults were involved in various stages of the research. This paper presents the participatory design activities (N=8) and results of usability testing (N=8).

Findings

The study found that users were completely satisfied with the mobile app. In particular, they liked the use of Cypriot Sign Language (CSL) videos of a real person interpreting hearing people’s speech in real time, and the custom onscreen keyboard allowing faster text input.

Originality/value

Despite advances in communication aid technologies, there is currently no technology available that supports CSL or real-time speech to sign language conversion for the deaf people of Cyprus.

Citation

Pieri, K. and Cobb, S.V.G. (2019), "Mobile app communication aid for Cypriot deaf people", Journal of Enabling Technologies, Vol. 13 No. 2, pp. 70-81. https://doi.org/10.1108/JET-12-2018-0058

Publisher: Emerald Publishing Limited

Copyright © 2019, Emerald Publishing Limited


1. Introduction

Trying to perceive the speech of a hearing person is often hard and exhausting for people with severe or profound hearing loss (Munoz-Baell and Ruiz, 2000; Kuenburg et al., 2016). Even if the hearing person articulates well, it is still difficult for a deaf person to maintain effective communication; deaf people often have to depend on hearing family members or friends who take the role of interpreter whenever they have to communicate with hearing people (Kyle and Cain, 2015; Hadjikakou et al., 2009). The communication problems faced by deaf people are primarily associated with the use of language: their mother tongue may be sign language (SL), whereas the first language of hearing people is a spoken language. In the general community, hearing people are unlikely to know SL, and deaf people cannot always understand the spoken language. This inability to communicate effectively is an obstacle for deaf people not only in their social life but also in their employment, education and healthcare (Compton, 1993; Luft, 2000; Nunes et al., 2001; Ubido et al., 2002; Sirch et al., 2016).

According to the World Health Organization, almost 5.3 per cent of the world’s population have a hearing loss greater than 40 decibels and, specifically within Cyprus, there are more than 1,000 people with severe or profound deafness (CDF, 2017). Cyprus ranks amongst the top five European countries with regard to the percentage of the population who are deaf, with 0.12 per cent of the population registered as deaf (GUL, 2010). Although Cyprus is a developed country, the Government of Cyprus provides no financial support to the Deaf Community of Cyprus (Hadjikakou et al., 2009). Furthermore, because Cypriot Sign Language (CSL) was only recently recognised as the official SL of the deaf of Cyprus, it is not supported by any of the current communication aid technologies.

1.1 Existing real-time speech-to-text (STT) and speech-to-sign-language (STSL) mobile technologies for deaf users

In recent years, various technologies have emerged to support communication between deaf and hearing people. One of the most recent commercially available communication aid technologies for the deaf is “Uni”, a portable device that converts American SL to speech and spoken English to text, and enables users to create and upload custom signs (Epstain, 2015). Key advantages of the device are that it can be used on the move and offers real-time interpretation of SL and spoken language to hearing and deaf individuals, respectively (MotionSavvy, 2017). However, it requires a tablet device that is too big for the user to hold with one hand while using the other hand to communicate, and it does not include facial movements or expressions, which are important elements for the interpretation of signs (Woll, 2001; Elliot and Jacobs, 2013).

ASRAR (Mirzaei et al., 2012) is a mobile system that recognises the speech of a hearing person and converts it into text for the deaf user to read. It also includes a camera and face recognition software, which are used to capture the face of the speaker and display it on the device’s screen. In addition, the deaf user can type text using an integrated keyboard to communicate with the hearing person. ASRAR proved to be over 85 per cent accurate in its speech-to-text conversion, and the majority of users expressed interest in using the ASRAR system as a communication aid. However, the system assumes that deaf users do not experience reading and writing difficulties: it does not present content in SL, requiring deaf users to read text, which is often inaccessible to the majority of deaf people (Wauters et al., 2006).

A mobile-based application supporting communication between deaf patients and pharmacists allows the pharmacist to input text, which is then translated into SL (Motlhabi et al., 2013). The information given by the pharmacist is presented on the mobile screen via video, enabling the deaf person to receive detailed medical instructions and to take the correct medicine. However, the application does not enable the deaf user to communicate back to the pharmacist.

Ava is a mobile app that enables deaf people to host group conversations with others and offers real-time captioning for each member of the conversation. It can also be used in a one-to-one conversation, in which the speech of the deaf user’s interlocutor is converted to text (Ava, 2017). However, it does not present the content in signing, requiring deaf users to read in English, and it does not enable them to respond, either by interpreting their SL or by letting them compose text with a custom keyboard. Furthermore, it requires all members of a group conversation to install the application, even if they are not deaf, and they must pay a subscription to use the application effectively and be invited into a group conversation.

A more recent mobile application called LifeKey was designed to enable deaf people to report an emergency event quickly, without relying on a hearing person. A custom onscreen keyboard allows the deaf user to summarise the emergency by tapping through different categories (type of incident and exact location within the area). Users can also set up a custom introduction to state their identity and send it automatically along with their location. LifeKey also supports voice-to-text and text-to-voice conversion for communication between the deaf person and the emergency staff (Slyper et al., 2016). This, however, requires deaf users to be capable of reading and understanding possibly complicated text content.

It is apparent that deaf people have limited support in their everyday communication with hearing people. None of the existing mobile apps offers comprehensive real-time speech-to-SL conversion: deaf users must either read text content, which is often inaccessible to many of them (Wauters et al., 2006), or interpret SL presented by 3D animated hands rather than a real interpreter, which is not the ideal way of presenting SL. Moreover, neither the Greek language nor CSL is supported by any of the existing mobile apps. Consequently, the development of a mobile app that can act as a portable deaf interpreter and offer comprehensive support anytime and anywhere is indispensable for the deaf people of Cyprus. Such an app could contribute greatly to their social life as well as their independence, since it would enable them to engage better in communication within society.

1.2 Objectives and requirement considerations

The purpose of this study was to explore the communication needs and requirements of deaf adults in Cyprus and to identify how a mobile app could help them to communicate more effectively with hearing people. During initial discussions, members of the Cyprus Deaf Federation (CDF) described communication as a barrier to their social life and healthcare, and commented that they generally experience a lot of impatience from hearing people when trying to communicate with them. Since mobile phones are ubiquitous and allow users to perform tasks in a mobile context, it was proposed that a mobile app could act as an affordable portable interpreter, converting spoken language into text and SL and, conversely, text into speech.

For the development of a functional prototype, the needs and requirements of deaf users were taken into consideration and informed the design and evaluation process. The characteristics and unique competencies of users were studied through both primary and secondary research. One of the major problems the mobile app would have to address is the weak writing and reading comprehension skills of many deaf people (Marschark and Harris, 1996; Kyle and Cain, 2015). To address this, a custom onscreen keyboard was developed that includes pre-defined phrases, words, personal information and other categories of text, allowing the deaf user to construct sentences more easily than with a traditional keyboard. Additionally, a dictionary of SL videos was developed to enable speech-to-SL conversion so that deaf users could better perceive spoken language.

1.3 Ethical considerations

Pre-lingually deaf people often have considerably limited comprehension of written words and sentences. To ensure that the participants were fully aware of the purpose of the study, the data captured, the use of the data and their rights, an interpreter was present throughout the study to explain and clarify every detail and to enable the deaf participants to ask questions if necessary. The presence of an interpreter also made communication with the participants more accessible and helped them to express their opinions and views about the mobile app. To minimise participants’ strain, the questionnaire was formulated with brief, concrete questions, and open questions were avoided as much as possible. Two interpreters and two post-lingually deaf members were consulted while developing the questionnaire, the information sheets and the prototypes, in order to avoid overly complicated and abstract language. Ethical approval for all studies conducted was granted by the University of Nottingham School of Computer Science Ethics Committee.

2. App development and outcomes

2.1 Approach

Both qualitative and quantitative methods were used in the research. Action Research aimed at addressing the social problems identified by the user community was combined with a user-centred design approach to involve users in the design and evaluation of the app through all stages of an iterative product development process. This approach was taken as both methodologies focus on users, their involvement in an iterative process and the evaluation of actions/design solutions by the users themselves (O’Brien, 2001; W3C, 2004). Figure 1 illustrates the entire process, participant involvement in each stage and outcomes generated to inform the product development. Due to space limitations, only the participatory design and usability testing stages are detailed here.

2.2 Participatory design

Upon specifying the user groups and their requirements through data gathering, the project moved to its second phase: the creation of design solutions. Before developing the initial prototype, design principles and guidelines were considered and the list of requirements was updated accordingly (Benyon, 2014). A low fidelity prototype was then created and discussed informally with a deaf user. This paper-based representation included draft sketches of the app’s navigation and layout (Figure 2, left). Feedback from the deaf user was then incorporated into a medium-fidelity prototype showing the look and feel of the app (Figure 2, right). Two participatory design sessions were then conducted through which pairs of users commented on design features, sequencing and overall appearance of the mobile app.

2.2.1 Participants

For the design and evaluation of the medium-fidelity prototypes, two participatory design sessions were conducted, each with four participants selected through convenience sampling on the basis of availability. Two couples took part in the first session: two participants represented User Group 1 (deaf users who have reading and writing difficulties) and two represented User Group 2 (deaf users who do not experience such difficulties). Two were aged between 29 and 39 years and the other two between 40 and 50; two were male and two female; one participant was severely deaf, whereas the others were profoundly deaf.

For the second participatory design session, a family of three pre-lingually deaf members and one hard-of-hearing member was invited. In this group, two were aged between 40 and 50 years and the other two between 18 and 28; one was male and three were female; three were severely deaf, while the other had moderate hearing loss. An interpreter was present in both sessions to maintain effective communication between the participants and the researcher.

2.2.2 Materials

Medium-fidelity prototypes were created using Adobe Illustrator software and printed out at the size of a mobile phone device. They conveyed all of the app’s characteristics such as fonts, colours, arrangement, structure, content, etc. In total, 16 empty mobile screens were printed (one for each page) and displayed on large sheets of white paper using re-usable adhesive to allow participants to change the sequence order of pages. User comments and required changes were noted next to the corresponding page (see Plate 1).

2.2.3 Procedure

The four most critical pages were reduced to separate design elements (icons, boxes, pictures, inputs, etc.). The participants were then given the elements of each page one by one and were asked to assemble them on an empty mobile screen. They were then asked to put all the pages in order, beginning from the landing page, to create a possible user journey. Finally, they were encouraged to share their thoughts and opinions on how the pages should be ordered and explain their reasoning. The comments and changes derived from the first participatory design session were implemented in the app design prior to the second session.

2.2.4 Results

The changes requested by the participants were mainly non-functional and fed directly into the design and development of the high-fidelity prototype. One non-functional requirement derived from the first participatory design session was to change the main colour of the app to blue (the colour of the international symbol of the deaf). Other changes related to the sequence order of inputs in the login pages and the background colour of the SL videos; the users requested light blue instead of black. The requirements specification document was updated according to the new requirements.

2.2.5 Functional prototypes

Based on the updated requirements specification document, high-fidelity and final prototypes were created using the Apache Cordova (2017) framework. The prototypes are hybrid mobile apps (web apps packaged in a native wrapper), designed and tested on a Samsung Galaxy S5 Android device. An example is shown in Figure 3.
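Because Cordova packages a web app inside a native shell, the prototype’s logic runs as JavaScript in a WebView. The snippet below is a minimal, illustrative sketch (not the authors’ actual code) of the standard Cordova entry point: initialisation waits for the `deviceready` event before any native device features, such as the microphone, are used.

```typescript
// Illustrative sketch only: standard Apache Cordova entry point.
// Cordova fires the "deviceready" event once its native bridge is available;
// device features (microphone, file storage, etc.) must not be used earlier.
document.addEventListener(
  "deviceready",
  () => {
    // Safe to initialise the app here, e.g. the "Hear" and "Speak"
    // sections described below.
    console.log("Cordova is ready");
  },
  false
);
```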

The mobile app includes three major features requested by deaf people through the data collection process:

  1. The “Hear” section where speech is recorded and converted into either text only, SL only or both text and SL.

  2. The “Speak” section, where a custom keyboard enables the deaf user to type texts using pre-defined words and phrases, which can then be converted into speech.

  3. Text storage, allowing users to create texts using the custom keyboard and save them for future use. This enables users to prepare what they want to communicate in advance and use it whenever they want to.

The custom keyboard through which text is composed includes eight different categories of text options (letters, yes/no, phrases, words, personal details, address details, days and months, and food and drinks), six of which were operational in the final prototype, allowing users to type faster; a sketch of how such a keyboard might be modelled follows.
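The sketch below makes the keyboard design concrete in the web technologies a Cordova app runs on. The category names follow the list above, but the `PhraseCategory` type, the example Greek options, the use of `localStorage` for saved texts and the Web Speech synthesis call are illustrative assumptions, not details reported in the paper.

```typescript
// Illustrative sketch of a category-based onscreen keyboard model.
// All names and example entries are hypothetical, not the authors' code.

interface PhraseCategory {
  id: string;        // e.g. "phrases", "food-drinks"
  label: string;     // tab label shown on the keyboard
  options: string[]; // pre-defined words/phrases selected with one tap
}

// The eight categories described above; the contents are invented examples.
const categories: PhraseCategory[] = [
  { id: "yes-no", label: "Yes/No", options: ["Ναι", "Όχι"] },
  { id: "phrases", label: "Phrases", options: ["Θα ήθελα να παραγγείλω"] },
  { id: "words", label: "Words", options: ["νερό", "λογαριασμός"] },
  // ... letters, personal details, address details, days/months, food/drinks
];

// Sentence assembly: each tap appends a whole word or phrase, so a single
// tap replaces many keystrokes for users with writing difficulties.
const message: string[] = [];
function tapOption(option: string): void {
  message.push(option);
}

// Text storage: save a composed text for future use. localStorage is
// available inside a Cordova WebView (a hypothetical storage choice here).
function saveText(key: string, text: string): void {
  localStorage.setItem(`saved-text:${key}`, text);
}

// "Speak": convert composed text to Greek speech via the standard Web Speech
// synthesis API, where the WebView supports it.
function speak(text: string): void {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.lang = "el-GR"; // request a Greek voice
  window.speechSynthesis.speak(utterance);
}
```

Because each tap inserts a whole pre-defined word or phrase rather than a single letter, text entry can be substantially faster than letter-by-letter typing, which is consistent with the faster composition reported in the usability results below.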

2.3 Usability testing

Usability testing was conducted on the final prototype: users were asked to perform five tasks using the application and then complete a short rating-scale questionnaire. The time taken to complete each task, the number of tasks completed and the number of errors were recorded. Users were also observed during testing to gain insights into their behaviour and difficulties.

2.3.1 Participants

Eight users were recruited to take part in the usability testing. Four of the participants were considered “design-aware users”, since they had taken part in the first participatory design session; the other four were novice users. Both groups contained equal numbers of users representing target User Group 1 (deaf users who experience reading and writing difficulties) and target User Group 2 (deaf users who do not). Equal numbers of female and male users were selected.

2.3.2 Materials

A task sheet comprising five evaluation tasks was created, and a data collection sheet was used to note the time taken for task completion, the number of errors in each task and additional comments where needed. A timer was used to measure the time taken per task, and a video camera recorded the hand movements of the users and the mobile screen. A semi-standardised questionnaire was developed for measuring six subjective usability metrics (appearance, terminology, app capabilities, usefulness, ease of use and satisfaction); it consisted of a series of seven-point Likert scale questions and two open questions about the most liked and disliked aspects of the app.

2.3.3 Evaluation tasks

The users were asked to complete the following tasks using the final prototype:

  • sign up to Ubider (the prototype app);

  • communicate with the waiter to order a meal;

  • add details to your profile;

  • write and save text for future use; and

  • use the saved text to communicate with a clerk in the Social Insurance office.

2.3.4 Results

All participants successfully completed all tasks, with only non-critical errors (Table III). Some tasks were inherently more difficult to complete than others, as reflected by the average total time on task (Table I). Task 2 took the longest and had the most errors, as it required the participants to communicate with the waiter, order food and confirm the order. Some participants wrote faster than others, as seen by comparing the Task 2 times of P3 (more than 4 min) and P4 (less than 2 min). P3 was comparatively slower than the others; however, this was not due to errors but because she typed at a slower pace than the rest of the participants. She was also a user representing deaf people who experience reading and writing difficulties and reported that German was her first spoken language, so an option for German in the app would have made it easier for her.

All of the errors were non-critical, since they did not prevent successful completion of the scenarios. P4 and P6 were confused by the terminology used for the two main tabs at the bottom of the screen, which caused most of their errors (see Table II). Moreover, P6 reported difficulties in using the onscreen keyboard because he was used to a keyboard consisting of nine large buttons (Tables I–III).

After the task session, participants rated the app on six overall measures derived from the USE and QUIS questionnaires (Lund, 2001; Chin et al., 1988). Each measure included a number of seven-point Likert scale questions (0 = strongly disagree, 6 = strongly agree). High positive ratings were reported on all usability dimensions, with scores of 5 (agree) or 6 (strongly agree) given by at least seven of the eight participants on all measures (Table IV). Generally, all participants agreed that the app is useful and satisfying and that it is designed for all levels of users; however, some suggested simplifying certain features and providing more than one way to find things within the app.

3. Discussion

Advances in smartphone technology enable automatic speech recognition powered by deep neural networks (Google, 2017), which can be used for speech transcription and automatic SL presentation to allow deaf users to understand the speech of the people around them. The interactive prototype of the proposed mobile app utilises technologies such as the Google Speech API (Google, 2017) to recognise voice and convert it into text. An algorithm was then developed to convert that text into synthesised SL videos, to help deaf people with low literacy skills to perceive someone else’s speech and communicate with them. An internet connection (Wi-Fi or mobile data) is required for the mobile app to work effectively. Slow connectivity can degrade the user experience; in such situations, the simpler STT feature can be used to reduce data usage.
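To illustrate the pipeline just described, the sketch below recognises Greek speech with the Google Cloud Speech-to-Text REST API and maps the transcript onto a dictionary of pre-recorded CSL video clips. The REST endpoint and response shape are genuine; the `signDictionary` contents, video file names and fallback rule are assumptions, as the paper does not describe the authors’ actual conversion algorithm.

```typescript
// Sketch of a speech -> text -> CSL pipeline. The REST call targets Google
// Cloud Speech-to-Text v1; everything downstream is a hypothetical design.

async function recognise(base64Audio: string, apiKey: string): Promise<string> {
  const response = await fetch(
    `https://speech.googleapis.com/v1/speech:recognize?key=${apiKey}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        config: { languageCode: "el-GR" }, // Greek speech
        audio: { content: base64Audio },   // base64-encoded recording
      }),
    }
  );
  const data = await response.json();
  // Return the top transcription alternative, or an empty string.
  return data.results?.[0]?.alternatives?.[0]?.transcript ?? "";
}

// Hypothetical dictionary mapping Greek words to pre-recorded CSL clips.
const signDictionary: Record<string, string> = {
  "νερό": "videos/nero.mp4",
  "καλημέρα": "videos/kalimera.mp4",
  // ...
};

// Convert a transcript into an ordered playlist of CSL videos. Words without
// a clip fall back to on-screen text, since the CSL lexicon is smaller than
// the Greek one.
function textToSignPlaylist(transcript: string): string[] {
  return transcript
    .toLowerCase()
    .split(/\s+/)
    .filter((word) => word.length > 0)
    .map((word) => signDictionary[word] ?? `text:${word}`);
}
```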

3.1 Attitudes of deaf users

Through the earlier data collection in the project, 36 of 40 deaf participants (90 per cent) expressed a need for a mobile app as a personal interpreter; only 10 per cent preferred a real interpreter to help them communicate with hearing people. This preference, however, could also reflect a lack of trust in technology for communicating with hearing people. Similarly, the interpreter who was present throughout the study commented that, in her experience, deaf people would not easily trust a mobile app for communication with hearing people without someone next to them to reassure them that what is transcribed is correct. Consequently, it is important to test the app in various contexts and with more users before making it generally available.

The eight deaf users who took part in the evaluation of the interactive prototypes exhibited positive attitudes towards the app. They expressed considerable interest in its overall utility and offered a number of recommendations for improvement. This was their first time using a mobile app that converts speech into CSL and text into spoken Greek, and they were particularly excited by it. Their responses reinforce the potential value of such an app as a positive benefit to users and confirm their need for communication support.

3.2 Usability metrics

In order to ensure that a product or system is usable, the aspects of effectiveness, efficiency and satisfaction need to be measured and examined (ISO, 1998). The results of the usability testing showed that the app is effective and that participants were completely satisfied. However, further evaluations should be carried out with more participants and in different settings to confirm overall usability.

3.2.1 Effectiveness

Effectiveness was assessed by task completion. All users completed all of the tasks they were asked to perform with the mobile app. The characteristics of users were varied to ensure that the app is usable by different types of users. In addition, the complexity of the tasks was varied to enable users to review and use as many features of the app as possible.

3.2.2 Efficiency

To assess efficiency, the time taken by users to complete each task as well as their errors was measured (Tables I and II). P4 made the second most errors but was one of the fastest users: he corrected his mistakes very quickly, demonstrating that the app enables easy and quick correction of errors. P3 and P8 were native German speakers who had lived in Cyprus for many years and learned to use the Greek language and CSL as their primary languages. However, it was difficult for them to use the app in Greek and they would have preferred to use it in German. Their results showed that they took a long time on tasks relative to the number of errors they made, implying that they spent more time writing and reading text but did not necessarily find the mobile app complicated. Tasks 2 and 5 had the most errors and the longest times, since they required the user to communicate with a hearing individual based on a given scenario. It can be inferred that the majority of users were either slow in writing content or confused by the two sections “Speak” and “Hear” (see Figure 4). Finally, it was observed that participants did not use the shortcuts for their personal information and short answers; instead, they used the letter keyboard to compose their text from scratch, which could also have increased task times.

3.2.3 Satisfaction

A semi-standardised questionnaire was developed and given to the participants upon completion of the user testing. The questionnaire included attitude rating scales derived from the USE and QUIS questionnaires (Lund, 2001; Chin et al., 1988). The majority of participants (n=7) agreed with all of the statements apart from one, which received six agreements and two neutral responses. These results indicate that the app has great potential to be used by deaf users as a personal interpreter. Participants were amazed when they saw speech being converted into CSL and were excited to use the app on their own devices. However, it is important for the app to be evaluated in different settings in order to validate its effectiveness, efficiency and user satisfaction.

3.3 Most significant features of the mobile app

Evaluation of interactive prototypes of the mobile app with deaf users identified that the most useful feature of the app was the conversion of speech to SL. Users with reading difficulties required such a feature to help them better understand a hearing person’s spoken language. Further development of this feature to provide streaming speech recognition, giving continuous speech-to-SL conversion while the hearing person is still speaking, would be beneficial to users; a sketch of one possible approach follows.
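One possible direction is sketched below, under the assumption that audio is captured in short segments and recognised incrementally; a production implementation would more likely use Cloud Speech’s gRPC streaming API, which delivers interim results while the speaker is still talking. The function names are hypothetical and refer to the earlier pipeline sketch.

```typescript
// Hypothetical chunked-recognition loop (illustrative only): short recorded
// segments are recognised one after another so CSL clips can start playing
// before the speaker finishes. True streaming would use the gRPC API.
async function streamingLoop(
  nextChunk: () => Promise<string | null>,       // next base64 audio segment, or null when done
  recognise: (audio: string) => Promise<string>, // e.g. the recognise() sketch above
  toPlaylist: (text: string) => string[],        // e.g. textToSignPlaylist() above
  onSigns: (playlist: string[]) => void          // queues CSL clips for playback
): Promise<void> {
  for (let chunk = await nextChunk(); chunk !== null; chunk = await nextChunk()) {
    const transcript = await recognise(chunk);
    if (transcript) onSigns(toPlaylist(transcript));
  }
}
```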

Another feature considered important and valuable to the users was the custom onscreen keyboard with categories of text. Users were amazed when they saw their personal information, and other categories of words and phrases, ready to be used in the keyboard. Users with writing difficulties described the keyboard as particularly helpful. They were also excited that they could compose text using the custom keyboard and save it for future use. This feature would allow them to communicate faster and could eliminate the need for back-and-forth writing with pen and paper when communicating with others.

Finally, another important feature was the presentation of synonyms within the speech transcription, allowing deaf users to understand the meaning of an unknown word used by the hearing person. The CSL lexicon is not as rich as that of the Greek language; therefore, deaf users will inevitably encounter words that they do not recognise. Such a feature could enable them not only to understand the meaning of those words but also to learn them; a minimal sketch follows.
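The synonym feature could be as simple as a lookup table keyed by less common Greek words; the entries below are invented examples for illustration, not the app’s actual data.

```typescript
// Hypothetical synonym table: when a transcribed word is likely unknown to
// the user (e.g. it has no CSL clip), show a more common synonym beside it.
const synonyms: Record<string, string[]> = {
  "εξοφλήσετε": ["πληρώσετε"], // "settle (a bill)" -> "pay"
  // ...
};

function annotateWord(word: string): string {
  const alternatives = synonyms[word];
  return alternatives ? `${word} (${alternatives.join(", ")})` : word;
}
```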

3.4 Research limitations

Although the mobile app appears to offer great potential to enhance communication between deaf and hearing people, there were several limitations to the study. The first limitation concerned the sample: the eight participants in the usability testing of the final interactive prototype constitute a small sample, so usability problems may remain undiscovered. Second, a limited number of tasks were given to users for the evaluation of the final prototype, mainly due to resource and time limitations; the interactive prototypes required weeks of development, leaving limited time for large-scale evaluations. More complex tasks will need to be evaluated in order to test how difficult it is for deaf users with a low level of education to complete them.

4. Conclusion

The study applied a user-centred design approach to the development and evaluation of a mobile app to support communication between deaf and hearing people. Members of the CDF took part in data collection, iterative design and evaluation of app prototypes, from low-fidelity and medium-fidelity versions through to a functioning demo. The project cycle was iterated four times and five prototypes were created in total. The integration of multiple methods enabled the researchers to gain an in-depth understanding of the research problem, user characteristics and requirements, and direct feedback from target end users. Further development and testing of the mobile app is required to ensure validity and reliability of the results. More experiments should be carried out to test how the mobile app performs in noisy environments and to identify any other functional issues affecting its usefulness. However, it is concluded that such an app has the potential to offer greater independence to deaf users by reducing their reliance on someone acting as an interpreter in everyday activities. Initial user interaction performance, satisfaction ratings and usability feedback indicate high potential for this technology to support the deaf people of Cyprus to engage more effectively in communication within society and achieve social acceptance.

Figures

Figure 1: Overview diagram of the product development process, including user involvement at different stages and the methods used for the data collection, design and evaluation of the prototypes

Figure 2: Prototype development of the mobile app: lo-fidelity prototype with comments about the structure and the elements (left); initial medium-fidelity prototype used in the first participatory design session (right)

Figure 3: Example pages of the final interactive prototype used for the usability testing and the experiment

Figure 4: The two sections of “Hear” (left picture) and “Speak” (right picture) of the final interactive prototype

Plate 1: The outputs of the two participatory design sessions: first session (left); second session (right)

Table I: Time (minutes) on task scenario

Task     P1    P2    P3    P4    P5    P6    P7    P8    Average total
Task 1   0.4   0.55  2.07  1.03  0.4   1.2   2     1.2   1.11
Task 2   3.33  2.55  4.15  1.45  2.19  4.23  4.1   3.05  3.13
Task 3   0.44  0.25  1.19  0.51  0.2   1.05  1.03  0.54  0.65
Task 4   1.47  1.51  3.33  1.53  1.03  3.14  2.57  2.35  2.12
Task 5   2.28  3.02  4.19  3.37  2.4   3.54  2.03  1.12  2.74

Table II: Errors on task scenario

Task     P1  P2  P3  P4  P5  P6  P7  P8  Total
Task 1   0   0   0   0   0   0   1   0   1
Task 2   2   3   2   1   0   4   4   0   17
Task 3   0   1   1   2   0   2   1   1   8
Task 4   0   0   1   4   0   4   0   1   10
Task 5   2   1   2   4   0   3   2   0   14

Table III: Average usability measures per task

Task     Task completion  Number of errors  Average time on task (min)
Task 1   8                1                 1.11
Task 2   8                17                3.13
Task 3   8                8                 0.65
Task 4   8                10                2.12
Task 5   8                14                2.74

Table IV: Descriptive statistics for six overall usability measures

Subjective usability metric                        Mean rating  SD (n−1)  Average per cent agree (a)
Screens, layout and appearance (6 questions)       5.89         0.144     97.8
Terminology and system information (6 questions)   5.85         0.228     95.8
App’s capabilities (4 questions)                   5.78         0.275     96.9
App’s usefulness (8 questions)                     5.98         0.042     100.0
App’s ease of use (10 questions)                   5.78         0.311     97.5
Overall user’s satisfaction (7 questions)          5.98         0.045     100.0

Notes: (a) Per cent agree = agree, moderately agree and strongly agree responses combined
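For clarity, the two derived statistics in Table IV can be computed from raw 0–6 ratings as below; this is a sketch assuming the three “agree” responses in the note correspond to ratings 4, 5 and 6.

```typescript
// Sketch: compute a Table IV row from raw 0-6 Likert ratings.
// Assumes "per cent agree" counts ratings of 4 (moderately agree),
// 5 (agree) and 6 (strongly agree), per the note above.
function likertStats(ratings: number[]): { mean: number; sd: number; percentAgree: number } {
  const n = ratings.length;
  const mean = ratings.reduce((sum, r) => sum + r, 0) / n;
  const variance = ratings.reduce((sum, r) => sum + (r - mean) ** 2, 0) / (n - 1); // SD (n-1)
  const agreeCount = ratings.filter((r) => r >= 4).length;
  return { mean, sd: Math.sqrt(variance), percentAgree: (100 * agreeCount) / n };
}
```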

References

Apache Cordova (2017), “Mobile apps with HTML, CSS & JS”, available at: https://cordova.apache.org/ (accessed 22 April 2017).

Ava (2017), “Communicate beyond barriers”, available at: www.ava.me/ (accessed 22 April 2017).

Benyon, D. (2014), Designing Interactive Systems: A Comprehensive Guide to HCI, UX and Interaction Design, Pearson, London.

CDF (2017), “Deaf Associations (Σωματία Κωφών)”, available at: http://eid-scholi-kofon-lef.schools.ac.cy/index.php?id=somateia-kopon (accessed 26 June 2017).

Chin, J.P., Diehl, V.A. and Norman, K.L. (1988), “Development of an instrument measuring user satisfaction of the human-computer interface”, SIGCHI Conference on Human Factors in Computing Systems, ACM, Washington, DC, pp. 213-18, doi: 10.1145/57167.57203.

Compton, C. (1993), “Status of deaf employees in the federal government”, The Volta Review, Vol. 95 No. 4, pp. 379-90.

Elliot, E.A. and Jacobs, A.M. (2013), “Facial expressions, emotions, and sign languages”, Frontiers in Psychology, Vol. 4 No. 115, pp. 1-4, doi: 10.3389/fpsyg.2013.00115.

Epstain, Z. (2015), “New device gives voice to the deaf by translating sign language to speech in real time”, available at: http://bgr.com/2015/12/04/sign-language-translator-uni-deaf/ (accessed 20 April 2017).

Google (2017), “Google Speech API”, available at: https://cloud.google.com/speech/?gclid=CN2czo-Tl9ICFe4Q0wodP9wKuQ (accessed 30 July 2017).

GUL (2010), “Deaf populations overseas”, available at: http://libguides.gallaudet.edu/content.php?pid=119476&sid=1061103 (accessed 24 July 2017).

Hadjikakou, K., Christodoulou, D., Hadjidemetri, E., Konidari, M. and Nicolaou, N. (2009), “The experiences of Cypriot hearing adults with deaf parents in family, school, and society”, Journal of Deaf Studies and Deaf Education, Vol. 14 No. 4, pp. 486-502, available at: https://doi.org/10.1093/deafed/enp011

ISO (1998), “Ergonomic requirements for office work with visual display terminals (VDTs) – Part 11: guidance on usability”, ISO 9241-11:1998, International Organization for Standardization, available at: www.iso.org/standard/16883.html (accessed 27 July 2017).

Kuenburg, A., Fellinger, P. and Fellinger, J. (2016), “Health care access among deaf people”, Journal of Deaf Studies and Deaf Education, Vol. 21 No. 1, pp. 1-10, doi: 10.1093/deafed/env042.

Kyle, F.E. and Cain, K. (2015), “A comparison of deaf and hearing children’s reading comprehension profiles”, Topics in Language Disorders, Vol. 35 No. 2, pp. 144-56, doi: 10.1097/TLD.0000000000000053.

Luft, P. (2000), “Communication barriers for deaf employees: needs assessment and problem-solving strategies”, Work, Vol. 14 No. 1, pp. 51-9.

Lund, A. (2001), “Measuring usability with the USE Questionnaire”, Usability and User Experience Newsletter of the STC Usability SIG, Vol. 8 No. 2.

Marschark, M. and Harris, M. (1996), “Success and failure in learning to read: the special case of deaf children”, in Coronoldi, C. and Oakhill, J. (Eds), Reading Comprehension Difficulties: Process and Intervention, Lawrence Erlbaum Associates Publishers, Mahwah, NJ, pp. 279-300.

Mirzaei, M.R., Ghorshi, S. and Mortazavi, M.M. (2012), “Helping deaf and hard-of-hearing people by combining augmented reality and speech technologies”, in Sharkey, P.M. and Klinger, E. (Eds), Proceedings of the 9th International Conference on Disability, Virtual Reality and Associated Technologies, ISBN 9780704915459, Laval, 10–12 September, pp. 149-58.

MotionSavvy (2017), “UNI”, available at: www.motionsavvy.com/ (accessed 30 July 2017).

Motlhabi, B.M., Glaser, M. and Tucker, D.W. (2013), “SignSupport: a limited communication domain mobile aid for a Deaf patient at the pharmacy”, in Volkwyn, R. (Ed.), Proceedings of the Southern African Telecommunication Networks and Applications Conference, Telkom, Stellenbosch, pp. 173-8.

Munoz-Baell, M.I. and Ruiz, M.T. (2000), “Empowering the deaf: let the deaf be deaf”, Journal of Epidemiology Community Health, Vol. 54 No. 1, pp. 40-4.

Nunes, T., Pretzlik, U. and Olsson, J. (2001), “Deaf children’s social relationships in mainstream schools”, Deafness & Education International, Vol. 3 No. 3, pp. 123-36, doi: 10.1179/146431501790560972.

O’Brien, R. (2001), “Um exame da abordagem metodológica da pesquisa ação (An overview of the methodological approach of action research)”, Teoria e Prática da Pesquisa Ação (Theory and practice of action research), Universidade Federal da Paraíba, João Pessoa, available at: www.web.ca/~robrien/papers/arfinal.html (accessed 28 June 2017).

Sirch, L., Salvador, L. and Palese, A. (2016), “Communication difficulties experienced by deaf male patients during their in-hospital stay: findings from a qualitative descriptive study”, Scandinavian Journal of Caring Sciences, Vol. 31 No. 2, pp. 368-77, doi: 10.1111/scs.12356.

Slyper, L., Ko, Y., Kim., K.M. and Sobek, I. (2016), “LifeKey: emergency communication tool for the deaf”, CHI’16 Extended Abstracts, ACM, San Jose, CA, pp. 62-7, doi: 10.1145/2851581.2890629.

Ubido, J., Huntington, J. and Warburton, D. (2002), “Inequalities in access to healthcare faced by women who are deaf”, Health and Social Care in the Community, Vol. 10 No. 4, pp. 247-53.

W3C (2004), “Notes on user centered design process (UCD)”, available at: www.w3.org/WAI/redesign/ucd (accessed 28 June 2017).

Wauters, L.N., van Bon, W.H.J. and Tellings, A.E.J.M. (2006), “Reading comprehension of Dutch deaf children”, Reading and Writing, Vol. 19 No. 1, pp. 49-76.

Woll, B. (2001), “The sign that dares to speak its name: echo phonology in British Sign Language”, in Boyes-Braem, P. and Sutton-Spence, R.L. (Eds), The Hands are the Head of the Mouth, Signum-Verlag, Hamburg, pp. 87-98.

Further reading

Liddell, S.K. (1983), “Review: American Sign Language Syntax”, Language, Vol. 59 No. 1, pp. 221-4, doi: 10.2307/414075.

Wauters, L.N. (2005), “Reading comprehension in deaf children: the impact of the mode of acquisition”, unpublished doctoral dissertation, Radboud Universiteit, Nijmegen.

WHO (2017), “Deafness and hearing loss”, available at: www.who.int/mediacentre/factsheets/fs300/en/ (accessed 27 August 2017).

Acknowledgements

The authors would like to thank the Deaf Community of Cyprus and the Cyprus Deaf Federation for their participation in the study. The authors are particularly grateful for the support and interest shown by all deaf people who took part in the data collection and the evaluations of the mobile app. In addition, the authors would like to give special thanks to the official deaf interpreter of the Deaf Community in Cyprus, Ms Panayiota Themistokleous, who was present in all meetings and sessions and helped with the meeting arrangements and the recording of the sign language videos.

Corresponding author

Katerina Pieri is the corresponding author and can be contacted at: kv.pieri@gmail.com

About the authors

Katerina Pieri is Head of UX Design at co-hire, a simple communication platform that helps people looking for work to speak to people in start-ups who matter. She graduated with honours from the University of Nottingham with an MSc Degree in Human–Computer Interaction in 2017, where the School of Computer Science awarded her the HCI Dissertation Prize 2016–2017. Katerina has solid experience in user experience design, front-end development and interaction design, and deep expertise in user-centred design and usability.

Dr Sue Valerie Gray Cobb is Associate Professor and Head of the Human Factors Research Group, University of Nottingham. She has over 20 years’ experience in user-centred design research with specific interest in user involvement in development and application of products and novel interactive technologies for special education and rehabilitation.