Definitive guidelines toward effective mobile devices crowdtesting methodology

Qamar Naith (Department of Computer Science, University of Sheffield, Sheffield, UK)
Fabio Ciravegna (Department of Computer Science, University of Sheffield, Sheffield, UK)

International Journal of Crowd Science

ISSN: 2398-7294

Article publication date: 28 April 2020

Issue publication date: 8 June 2020


Abstract

Purpose

This paper aims to gauge developers’ perspectives on the participation of public and anonymous crowd testers worldwide with a range of experience levels. It also aims to gather the requirements that could reduce developers’ concerns about dealing with public crowd testers and increase the uptake of crowdtesting platforms.

Design/methodology/approach

An online exploratory survey was conducted to gather information from 50 mobile application developers from various countries, with diverse experience across the Android and iOS mobile platforms.

Findings

The findings revealed that a significant proportion (90%) of developers are potentially willing to have testing performed by public crowd testers worldwide, on condition that several fundamental features are available that enable more realistic tests, without artificial environments, on large numbers of devices. The results also demonstrated that a group of developers does not consider testing a serious job that they have to pay for, which can affect the gig economy and the global market.

Originality/value

This paper provides new insights for future research on how acceptable it is to work with public and anonymous crowd workers, with varying levels of experience, to perform tasks in different domains, not only in software testing. In addition, it will assist individual developers or small development teams who have limited resources, or who do not have thousands of testers in a private testing community, to perform large-scale testing of their products.

Citation

Naith, Q. and Ciravegna, F. (2020), "Definitive guidelines toward effective mobile devices crowdtesting methodology", International Journal of Crowd Science, Vol. 4 No. 2, pp. 209-228. https://doi.org/10.1108/IJCS-01-2020-0002

Publisher


Emerald Publishing Limited

Copyright © 2020, Qamar Naith and Fabio Ciravegna.

License

Published in International Journal of Crowd Science. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) license. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this license may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

In recent years, numerous mobile apps have been developed for different purposes, such as social, tourism, health, education, fitness, business and other domains (Holzer and Ondrus, 2011). However, testing these mobile apps to ensure their quality remains challenging due to the diversity of mobile devices and operating system (OS) versions (Huang, 2014). Research into mobile device compatibility testing is still a hot topic and in its early stages. This is due to a significant barrier to large-scale deployment of apps by small- and medium-sized enterprises (SMEs), resulting from their inability to test on a large number of devices. In addition, the dearth of methods or tools for large-scale mobile device compatibility testing creates another barrier (Almeida et al., 2018). Currently, developers mostly use automated testing tools to perform tests; otherwise, they may spend several hours manually running the app to better understand a variety of compatibility issues (Almeida et al., 2018; Onwuzurike and De Cristofaro, 2015). In the literature, several solutions have been proposed to address the issue of testing apps on a range of mobile device models and OS versions (Prathibhan et al., 2014; Kaasila et al., 2012; Huang and Gong, 2012; Bayley et al., 2012; Huang, 2014; Starov, 2013). These solutions fall into the following two categories:

  1. Automated testing frameworks or tools over a cloud environment, such as MAT (Prathibhan et al., 2014), Testdroid (Kaasila et al., 2012) and MobiTest (Bayley et al., 2012). These allow developers to remotely access a pool of real mobile devices connected to one central online server over a cloud (covering only one geographical location).

  2. Cloud-based testing service tools, such as RTMS (Huang and Gong, 2012), AppACTS (Huang, 2014) and CTOMS (Starov, 2013). These, by comparison, allow developers to access a larger number of devices through different pools of real mobile devices connected to several online servers over many clouds (covering a limited set of geographical locations).

Most of these solutions have tried to address this testing issue. However, several challenges remain that hamper full success from a developer’s perspective: the lack of realistic tests that cover most real-life scenarios or capture all aspects of real mobile devices (Knott, 2015); the lack of a complete set of devices and OS versions to test against; a limited number of users; restricted geographical locations; and limited user behaviors. Unfortunately, these limitations have led developers to spend several hours manually running the app to better understand a variety of compatibility issues (Almeida et al., 2018; Onwuzurike and De Cristofaro, 2015). As a result, developers later moved to cooperate with traditional testing organizations in order to involve more humans in testing their mobile apps. Examples of these traditional testing organizations are uTest[1], 99tests[2], Mob4Hire[3], Applause[4], Passbrains[5], Global App Testing[6] and BugFinders[7].

These testing organizations have helped minimize the issue of mobile device compatibility testing by leveraging the power of a specific group of testers who belong to their particular communities and have excellent testing experience. However, they may still be unable to cover the full breadth of mobile devices that a startup’s target market would use. As crowd size and diversity of experience are two critical attributes for the success of a crowdsourcing process (Robert and Romero, 2015), developers are keen to test their apps on a larger number of devices, with several external users exhibiting different behaviors, to ensure the quality of their apps. The authors of this paper have recently found that several crowdtesting organizations have increased their crowd community size by recruiting more crowd testers with a similar level of experience. Although this might be useful on one side, these organizations may still face obstacles in testing apps and delivering high-quality, error-free apps, due to the lack of:

  • the ability to take into account the large diversity of crowd testers’ experiences, which would help find more issues, as opposed to focusing only on expert crowd testers;

  • consideration of the different behaviors and interactions between end-users and apps, including their scope of work (SOW) and lifestyle; and

  • the facilities to cover a larger geographical distribution of crowd testers.

To improve on the solutions of the crowdtesting organizations and address these limitations, the authors in Naith and Ciravegna (2018a) proposed a new solution based on open crowdtesting with the participation of public and anonymous testers from around the world. It is expected that this solution will allow a larger number of users (crowd testers), with different behaviors and testing experiences, to participate and perform tests on their own devices; thus, developers could cover more devices. To explore how developers view this solution and to understand their perception of the matter, two research questions were asked:

RQ1.

To what extent are mobile app developers keen to work with public and anonymous crowd testers with various levels of experience to perform their testing tasks?

RQ2.

What are the mobile app developers’ recommendations for using a large-scale and public crowdtesting methodology?

To the best of the authors’ knowledge, this is the first work to address these two research questions and provide meaningful indications in that respect. In this paper, an online exploratory survey (questionnaire) was carried out and shared on Twitter, LinkedIn and Facebook. The questionnaire involved the participation of 50 Android and iOS mobile app developers. A mixed qualitative and quantitative analysis was conducted on the collected data.

The findings show that more than half of the developers had never used crowdtesting before, while a minority had little experience with it. Regarding their willingness to use the open crowdtesting method, 90% of developers agreed to have their tests performed by public crowd testers worldwide. Seventy per cent of the developers stressed that direct interaction with crowd testers is more important than communicating with them through a mediator such as a crowd manager or crowd leader, while 86% agreed that the ease or difficulty of the test-result reporting mechanism would affect the willingness of public crowd testers to participate and use the crowdtesting platform. Fifty-seven per cent of developers thought that a critical mass of testing could be achieved if public crowdtesting were used under several conditions. Sixty-eight per cent confirmed that they would trust crowd testers from any part of the world if serious measures were taken to ensure the reliability and accuracy of results. Eighty-three per cent stressed the importance of providing textual explanations of the entire testing process as evidence of the accuracy of results. The findings also clarify the developers’ requirements for using public crowdtesting, including:

  • the most critical search elements that developers focus on when they seek a solution to programming problems or specific testing issues;

  • the essential pieces of information that must be exchanged between both developers and crowd testers for achieving a correct crowdtesting process;

  • the list of potential rewards that developers are willing to offer to crowd testers.

The rest of this paper is organized as follows. Section 2 describes the main contributions of this paper. The methodology of designing the survey study is presented in Section 3. Section 4 presents the results of the survey analysis. Section 5 provides a discussion of the main findings. A list of recommendations is presented in Section 6. Finally, Section 7 concludes the paper.

2. Contributions

This paper provides several useful insights and contributions for developers and researchers in the field of crowdtesting and mobile app development. These insights and contributions are listed as follows:

  • To the best of the authors’ knowledge, this is the first study providing an in-depth investigation of developers’ opinions, measuring their willingness to work with members of the public as crowd testers for compatibility testing of their apps with mobile devices.

  • It bridges the gap between developers and crowd testers during software testing processes.

  • It identifies the ultimate requirements for public crowdtesting, from developers’ perspectives, that must be included in a public and large-scale crowdtesting platform, including the factors that will reduce developers’ concerns when dealing with public and anonymous crowd testers and increase their confidence in them.

  • It helps developers understand how to deal with large-scale crowd testers in terms of motivation and trust methods, the way of defining and submitting tasks, and the information required during the testing process.

3. Methodology

3.1 Study design

This exploratory study used a mix of qualitative and quantitative descriptive design methods (Bell et al., 2018) to examine data collected from the online questionnaire (built with Google Forms) and to provide a clear answer to the two research questions mentioned in this paper. The questionnaire was released online to the public for over two months. A cross-sectional design was used to collect the data from participants at a single point in time, rather than over separate time periods (Bell et al., 2018). Participants’ responses were considered consent to their participation in this research study. The following sections briefly describe the protocol used in this exploratory study.

3.2 Population and sampling strategy

As the target population for this study is mobile app developers, the authors sampled the population by applying random sampling (Bryman and Burgess, 2002). Sampling was carried out via a questionnaire randomly distributed to a large variety of mobile app development groups on Twitter, LinkedIn and Facebook. The online survey was fully completed by 50 random developers from different countries (such as Saudi Arabia, Egypt, UK, Canada, Germany, Singapore, Sweden, Romania, United Arab Emirates and USA) who have diverse experience in Android and/or iOS, as shown in Figure 1. The random nature of participant sampling assists in generalizing the gathered information into a representation of the entire population without significant discrepancies (Bryman and Burgess, 2002; Mathers et al., 1998).

3.3 Questionnaire design structure

The questionnaire was designed to be strictly anonymous in terms of demographic and educational information. Therefore, it was impossible to identify the details of participants or link any answers to a specific person. From the study of related literature, gaps were discovered in previous studies, and the authors designed the questionnaire with the research questions discussed above in mind. The questionnaire included a mixture of close-ended questions (multiple choice, ranked and rating questions) and open-ended questions (Denscombe, 2014). It involved 15 questions, split into sections covering the three main challenges that must be considered when developing a crowdtesting methodology: trustworthiness, motivation and job evaluation (Muntés-Mulero et al., 2012; Sánchez-Charles et al., 2014). Figure 2 shows the whole structure of the questionnaire, including its sections and the questions within each section.

As spreadsheets are suitable for summarizing and analyzing survey responses to obtain more insight (Mazumdar et al., 2017), the participants’ responses were collected and stored in a Google Spreadsheet in a structured manner. For response analysis, the authors followed the same quantitative and qualitative analysis method used in Naith and Ciravegna (2019c).
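To make this analysis step concrete, a minimal sketch of how such a spreadsheet export could be tallied in Python is shown below. It is an illustration only, not the authors’ actual analysis script; the file and column names are hypothetical.

```python
# Minimal sketch of tallying survey responses exported from Google Forms;
# file and column names are hypothetical, not from the study.
import pandas as pd

responses = pd.read_csv("survey_responses.csv")  # one row per participant

# Quantitative pass: percentage breakdown of a close-ended question,
# e.g. "How often do you use crowdtesting platforms?" (cf. Q1 / Table 1).
q1_share = responses["q1_crowdtesting_platforms"].value_counts(normalize=True)
print((q1_share * 100).round(1))  # e.g. Never 58.0, Sometimes 26.0, ...

# Qualitative pass: count open-ended answers per manually coded theme
# (themes would be assigned by the researchers beforehand, cf. Q4).
theme_counts = responses["q4_desired_outcome_theme"].value_counts()
print(theme_counts)
```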

4. Results

This section summarizes the main findings of the survey, illustrating the extent to which mobile app developers agree to work with public and anonymous crowd testers directly, without the need for a manager or leader as in most of the crowdtesting methods used by testing organizations. In addition, it describes the requirements that need to be included to achieve an effective crowdtesting process in terms of test completion, crowd motivation, job evaluation and ensuring the reliability of the crowd. The participants’ responses were highly insightful, highlighting a number of guidelines that must be considered in the proposed public crowdtesting solution to enhance trust in crowd testers. The survey findings are discussed in more detail in the next subsections.

4.1 Developers’ experiences with crowdsourcing

Question (Q1) in the survey aimed to gain more knowledge about mobile app developers’ experience with crowdsourcing platforms. Table 1 shows how frequently crowdtesting platforms and other testing methods, such as private testing companies and automated/cloud testing tools, are used by mobile app developers. As observed, over half (58%) of the developers never used crowdtesting platforms and 14% rarely used them. Meanwhile, 26% of the participants said they sometimes used such platforms, and a small minority (2%) said they often used them. Examples of these platforms include uTest[8], MyCrowd QA[9], 99tests[10], Mob4Hire[11], BugFinders[12] and TestIO[13]. The data in Table 1 also show how often developers used automated tools or cloud testing services: a good proportion (34%) of developers sometimes used them, while only 6% always used them; 24% never used them and 22% rarely did; and a small minority (14%) often used them. The table also shows that 34% of developers always used testing companies and a similar proportion used them sometimes; only 6% rarely used them, while 4% never did.

Similar to (Q1), participants were asked another question (Q2) about whether they used crowdsourced programming websites such as Stack Overflow, GitHub and Stack Exchange to search for and solve their programming issues. As can be seen from Table 2, more than half of the participants indicated that they always use Stack Overflow to search for programming issues and their solutions; interestingly, none of them said they do not. A further 26% indicated that they use it very much and 12% moderately, while only 8% said they use it somewhat. GitHub is another crowdsourced programming platform, used slightly less than Stack Overflow: 34% of participants mentioned that they use GitHub all the time, while around 24% use it very much. Notably, none of the participants said they never use GitHub, while 26% use it moderately and 16% somewhat. The least popular website among the developers was Stack Exchange, with 18% using it all the time and only 8% indicating that they use it very much, while 32% mentioned that they have never used it.

4.2 General expectations and/or desired outcomes from the public crowdtesting process

4.2.1 The expectation of reaching enough critical mass.

For the development of new crowdtesting methods, reaching a critical mass of testers must be seen as a fundamental aim of any constructed crowd-based platform. Marwell et al. (1988) stated that crowdsourcing methods rely on voluntary participation, and there is no guarantee that a critical mass of tester contributions will be reached. In response to the survey question (Q3) of whether a critical mass of testing could be achieved if public crowdtesting were used, interestingly, none of the participants answered a definite “No”: 57% of the developers believed this was possible, and 43% were not quite sure and answered “Maybe.” For further exploration, the participants who answered yes or maybe were asked what benefits they would expect when dealing with public crowd testers. Most shared a positive outlook on the use of this method. The responses showed that most participants agreed that distributing tests to the public, and involving public testers with different levels of experience and from different backgrounds and environments, would help cover more mobile devices and discover more issues faster than traditional crowdtesting methods. The majority of participants highlighted that this method would provide more results and better feedback more rapidly, consequently reducing the time needed to finish the testing process. A few participants indicated that this way of testing would also enable the study of more human behaviors, according to the particular behavior pattern of each crowd tester. Even fewer participants reported that testing by public crowd testers could give more useful results than individual testers and would improve developers’ skills based on the collected feedback. Two participants believed that public crowdtesting would help in performing testing many times in the early stages of the mobile app development life-cycle. One respondent stated that

Because the test would be opened to whole testers in the world, this gives more variety in testing scenarios, techniques, and different tools in testing, which reduces the need for testing apps by companies.

4.2.2 Desired outcomes from the public crowdtesting process.

The majority of participants (96%) responded to question (Q4), which asked about their desired outcomes when using the public crowdtesting method. The responses displayed a broad set of desires. The most common was the ability to execute more realistic tests with more testers, rather than using artificial environments, and to find more issues in a short time. This demonstrates that large-scale test distribution and time are crucial factors for most of the participants. Some participants mentioned that they hope public crowdtesting can enhance testers’ and developers’ skills by providing more testing information and knowledge, and by storing the testing scenarios and cases performed by public crowd testers for later use. Furthermore, they highlighted the importance of improving communication between mobile app developers and testers in industry and academia and of exchanging their experience. A few participants indicated that the ability to distribute a test on a large scale, covering a variety of devices and operating system (OS) versions, is one of the critical and hoped-for outcomes of public crowdtesting. Another group of participants expressed a strong desire to obtain useful testing reports covering all possible issues, exceptions or non-logical operations, as well as detailed information on each crowd tester who performed the test. One participant said that

[…] provide a secure way to test apps with protection for identity, especially for app ideas, is a significant factor in using crowdtesting.

Another participant said

I hope that testers have a good technical/programming background as this may lead us to perform Gray-box Testing, which is better than Black-box testing.

Surprisingly, one participant highlighted the importance and necessity that public crowdtesting be unbiased toward any particular group of testers for any reason. The last desire extracted from the responses was the possibility of providing a dashboard displaying a good sampling of data and metrics.

4.3 The essential requirements for public crowdtesting methodology

4.3.1 Typical starting keywords of the search for issues.

The responses to question (Q5) revealed the elements of mobile devices that developers most use when searching for a solution to any issue they face during mobile app development. The mobile device model represented the highest percentage (45%), followed by OS version (29%), mobile platform (e.g. iOS or Android) (16%) and brand/manufacturer (10%). It is clear that the device model is the first and most important element that developers look for during the development and testing of mobile apps, while the brand is the least searched-for element.
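As an illustration of how a search mechanism could expose these four elements as criteria, the sketch below filters a hypothetical store of device issues; the data structure and field names are assumptions, not part of the survey.

```python
# Illustrative sketch of a multi-criteria issue search; the DeviceIssue
# structure and field names are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class DeviceIssue:
    model: str        # e.g. "Galaxy S9" -- the most-searched element (45%)
    os_version: str   # e.g. "Android 9" (29%)
    platform: str     # "Android" or "iOS" (16%)
    brand: str        # e.g. "Samsung" -- the least-searched element (10%)
    summary: str

def search(issues, model=None, os_version=None, platform=None, brand=None):
    """Return the issues matching every criterion that was supplied."""
    def matches(issue: DeviceIssue) -> bool:
        return all([
            model is None or issue.model == model,
            os_version is None or issue.os_version == os_version,
            platform is None or issue.platform == platform,
            brand is None or issue.brand == brand,
        ])
    return [issue for issue in issues if matches(issue)]

# Example: all known issues for one device model, regardless of OS version.
# results = search(issue_store, model="Galaxy S9")
```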

4.3.2 The preferred method for posting or defining issues.

Defining issues is simple, and any misunderstandings that arise are likely due to unclear explanations. In fact, issues can be defined correctly in many ways. The responses to (Q6) show that 74% of participants prefer to use a title and general description, similar to Stack Overflow, to define their tasks and problems, while 26% found a structured form (divided into sections, e.g. payment method) more suitable.

4.3.3 Bridging the gap between developers and crowd testers during software testing processes.

Direct interaction between developers and crowd testers is vital to performing an effective crowdtesting process on a large scale. In this study, participants were asked (Q7) whether they considered direct interaction between developers and public crowd testers important during the testing process. None of the participating developers answered “Not important” or “Slightly important”, while 70% agreed that this was very important, 20% answered that it was important and 10% fairly important. Overall, all of the surveyed developers agreed on the importance of direct interaction between testers and developers, rather than going through a middleman such as a crowd manager or leader during the testing process.

4.3.4 Issues’ reporting method.

The next question (Q8) asked whether a difficult reporting system would negatively affect public crowd testers’ contributions. The responses revealed that 86% of participants agreed that difficulty in using the test-result reporting form would significantly affect the enthusiasm of public crowd testers to participate and use the crowdtesting methodology: 55% strongly agreed and 31% agreed. On the contrary, 4% disagreed that this would have any effect, and the remaining 10% were neutral as to whether it might negatively impact the participation of public crowd testers.

4.4 Measuring level of trust in the public crowd testers

This section presents the responses to question (Q9), in which mobile app developers were asked about their confidence in having their apps tested by public and anonymous crowd testers from any country in the world. The responses showed that the proportion of developers who trusted crowd testers from any part of the world (68%) is significantly higher than the proportion who did not (32%). About 69% of the developers who replied negatively to this question explained why, and their responses covered a range of reasons. Most mentioned that their lack of trust is linked to the security of data (lack of identity and guarantees), as one of the developers said:

An idea could easily be stolen and published before finishing the app development process.

Other participants’ reasoning was linked to the level of education and technological development in some countries. A few participants indicated that their main reason for distrust was that public crowd testers might participate only to make money. Only one developer had a somewhat neutral response, mentioning that trust in public crowd testers from different countries depends fundamentally on the specific region of the world that the mobile app targets; in that case, the developer does not trust testers from outside that particular region.

The 68% of participants who gave positive responses, indicating that they trust crowd testers from any country in the world, were asked a further question about how much they would trust the information provided by these public and anonymous crowd testers. Surprisingly, the responses split evenly between “moderately” (50%) and “very much” (50%), while none answered “a little.”

4.5 Evaluation of the performance and quality of work

4.5.1 Ensuring the correctness of the way the test was performed.

The responses to question (Q10) discussed how mobile app developers would know that public crowd testers had actually completed the test, and they produced several possible solutions. Most of the participants mentioned that they could know this through the detailed description of the testing plans, test cases or testing scenarios reported by public crowd testers. Additionally, almost half of the participants mentioned that repeating the testing steps implemented by public crowd testers, to reproduce the same issues, could also be a possible solution. A minority of the participants indicated that they would integrate a tracking tool to capture and record the testing results, processes and activities carried out by crowd testers. Three participants believed that the backgrounds or practical experience of the developers might help in that situation. Two participants considered that intentionally seeding one or more issues in the app could be one of the best ways of measuring whether the crowd testers actually executed the test. Interestingly, one developer pointed out that asking one or two precise questions at the last stage of the testing process is an accurate method of measuring whether the crowd testers actually conducted the testing process with integrity.
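The seeded-issue idea suggested by two participants can be expressed as a simple check, sketched below under the assumption that seeded issues carry known identifiers; the names and the acceptance rule are illustrative, not taken from the paper.

```python
# Hedged sketch of the "seeded issue" verification idea: the developer plants
# known issues and accepts a report only if the tester found them.
# Identifiers and the acceptance threshold are illustrative assumptions.
SEEDED_ISSUE_IDS = {"seeded-crash-on-rotate", "seeded-broken-login-button"}

def tester_ran_the_test(reported_issue_ids: set, required_recall: float = 1.0) -> bool:
    """Accept the report only if enough of the seeded issues appear in it."""
    found = SEEDED_ISSUE_IDS & reported_issue_ids
    return len(found) / len(SEEDED_ISSUE_IDS) >= required_recall

# A report that caught only one of the two seeded issues fails the check:
print(tester_ran_the_test({"seeded-crash-on-rotate", "real-bug-42"}))  # False
```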

4.5.2 Evidence on the validity and accuracy of results.

Subsequently, question (Q11) asked participants how they would want crowd testers to prove that their results are correct; several different possible solutions were suggested, including the provision of images, video recordings, textual reports and automatic reports (e.g. log files). From the data presented in Table 3, most participants’ preferred solution required a textual explanation as evidence of accurate results. However, a good proportion of participants mentioned screenshots of issues as evidence, and another group required video recordings. The remaining possible solution, indicated by a minority of participants, was the use of automatic reports to prove that the results are correct, while a small number (6) of participants did not provide any solution.

4.6 The incentives and motivation for crowd testers and developers

4.6.1 The attractive elements to work with the public crowd testers.

Ninety per cent of the participants responded to the open question (Q12) about the features that would attract and encourage them to work with public crowd testers to execute their tests, and that would make them stop working with testing companies. The responses covered a broad range of views, organized into six categories:

  1. Better quality: Most of the participants agreed that obtaining fast and accurate testing results could be the main reason to deal with the public crowd testers.

  2. Lower cost: Another group of participants mentioned that lower payment costs would be another reason.

  3. Flexibility: Only two participants referred to the flexibility of repeating the test more than once, at any time during the development process, as another reason that would motivate them to work with the public crowd testers.

  4. Diversity: This relates to the need to cover a wide variety of environments, cultures, processes and steps for testing mobile apps. From the participants’ responses belonging to this category, four sub-categories were identified:

    • Test diversity: The majority of participants mentioned the need to use a diverse set of real-world testing scenarios, test cases, techniques and steps for testing mobile apps.

    • Hardware resource diversity: Other participants pointed out that the ability to cover a large variety of mobile device models and OS versions is another reason to deal with the public crowd testers.

    • Human knowledge diversity: Two participants considered access to various levels of crowd testers’ experience to be an essential factor. Another participant said that

      “the ability to find crowd testers adapted to many different functions or activities is really important.”

    • Human behavioral diversity: Only one participant considered the possibility of covering a large variety of end-user behaviors as an important feature.

  5. Organization and user-friendliness: A small number of participants expressed that good organization of the testing processes, issue-reporting mechanisms and support for free automated testing tools were also important features that may motivate them to leave testing companies and start working with the public crowd testers. As one participant said, “using TFS tools to list issues, in turn, making developers aware of the complete problem is important.” Three participants mentioned the importance of the ease of use of the crowdtesting platform, and two indicated the need for a good communication method between public crowd testers and developers.

  6. Other responses: There were two interesting responses; the first response was

    “The patience in repeating questions and frequent communication without increasing service charges or feeling bored is considered one of the significant reasons to work with crowd testers.”

    The second response was a neutral “choosing the testing method between either companies or crowdsourcing depends on the type of the app itself, whether it is allowed to be tested by the crowd”.

4.6.2 Possible incentives that could be offered to the public crowd testers.

To acquire further knowledge, the participants were asked another open question (Q13) about the incentives they would be willing to offer public crowd testers to motivate them and increase the participation rate. The responses covered a vast array of ideas for encouraging crowd testers to work sufficiently well in their testing role and, in return, providing adequate recognition for good work. The most common themes extracted from the responses included gift cards and/or vouchers (e.g. restaurants, shopping, buying electronic devices, traveling, Amazon, etc.), providing money, and providing free apps or allowing the use of a paid version of the app. Less significant incentives included invitations to training courses, providing certifications and providing more knowledge related to testing scenarios and activities. Table 4 summarizes all the potential incentives that developers can provide to public crowd testers and the percentage of each one.

4.7 Required information for effective crowdtesting process

4.7.1 The required test information for defining testing tasks.

Participants’ responses to the open question (Q14) concerned the important information that developers must provide to crowd testers to define testing tasks clearly, which can assist in achieving a correct crowdtesting process. Only 88% of the participants responded to this question; 5% indicated that the testing requirements are important, without any explanation, while the remaining 83% provided interesting responses. From these responses, eight primary pieces of information were identified (a schema sketch follows the list):

  1. Functional behaviors: Most of the participants mentioned the functional requirements of the app, its components and the expected behaviors (outputs) as important information, and insisted on defining them when announcing any testing task.

  2. Mobile device information: Most of the participants mentioned the need to provide details of the mobile platforms, models and OS versions against which the app needs to be tested.

  3. Timing information: A small number of participants emphasized the importance of the estimated time needed for a single test cycle, alongside the deadline for submitting the complete test reports and obtaining fast results.

  4. App information: Other participants indicated that the type of app and its URL are necessary to provide, as many apps have been launched recently and some even share the same name. Interestingly, only one participant said that “logo or image of the app is important for crowd testers to know which app they need to test.”

  5. Test information: This relates to the need to provide a full description of the app and the testing scenarios or test cases. A large proportion of participants specified the need for a complete description of the whole app and the test instructions. A minimal number of participants indicated that testing scenarios or test cases should be provided by developers, rather than always being created by crowd testers; this might be useful for beginner crowd testers to perform an accurate test.

  6. Issues solved: Surprisingly, a minority of participants highlighted the importance of clarifying the issues that have been discovered and previously solved, in addition to the parts of the app that may be influenced by the resulting amendments, as valuable information to present when defining testing tasks.

  7. App development information: Only two participants mentioned the importance of providing the source code of the app in the definitions of specific types of testing tasks (if needed), to decide whether issues stem from the mobile device’s characteristics or from the code itself.

  8. User characteristics: Interestingly, very few (3) participants indicated that the target users and their characteristics (e.g. location, language, age, working domain, etc.) are also important information when defining the task.
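To visualize how these eight pieces of information could fit together, the sketch below maps them onto a hypothetical task-definition structure in Python. The field names and types are assumptions made for illustration; the paper lists the information itself, not any particular format.

```python
# A hypothetical schema for announcing a testing task, covering the eight
# pieces of information above; an illustrative sketch, not a prescribed format.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TestingTask:
    functional_requirements: list[str]          # 1. functional behaviors
    expected_outputs: list[str]
    target_devices: list[dict]                  # 2. e.g. {"platform": "Android", "model": "...", "os_version": "..."}
    estimated_cycle_minutes: int                # 3. timing information
    report_deadline: str                        #    ISO date, e.g. "2020-06-08"
    app_name: str                               # 4. app information
    app_url: str
    app_logo_url: Optional[str] = None
    app_description: str = ""                   # 5. test information
    test_cases: list[str] = field(default_factory=list)    # optional; helps beginners
    solved_issues: list[str] = field(default_factory=list)  # 6. previously solved issues
    source_code_url: Optional[str] = None       # 7. only for specific task types
    target_users: dict = field(default_factory=dict)        # 8. e.g. {"location": "...", "language": "..."}
```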

4.7.2 The required test information within test summary report.

Regarding the main information needed for submitting useful testing reports, the same proportion of participants (88%) that answered (Q14) also answered the open question (Q15) on the information crowd testers must consider in their reports to provide high-quality testing results. Seven per cent of participants did not provide clear information, while 81% provided enlightening responses. Among the responses provided, the following primary pieces of information were identified (a report-schema sketch follows the list):

  • Testing environment: The details of the mobile devices used in testing (platform, model, OS version) and their characteristics were classified as vital for inclusion when submitting reports, as suggested by the majority of participants.

  • Tester information: A small minority of participants mentioned that, because they would be dealing with public and anonymous crowd testers, personal information (including name and contact information) and geographical information would be beneficial to include in submitted reports.

  • Execution information: This relates to the need to submit information about the testing process that was performed: the test cases or scenarios used (84%), a clear description of the steps that crowd testers followed (69%) and error messages (37%), all of which could enhance the quality of the report. Interestingly, only two participants mentioned the importance of providing the number of test repetitions and the time taken for each test cycle.

  • Issue information: Most of the participants highlighted the importance of receiving a clear description of issues within submitted reports, including issue ID, issue name, category or type of issue, priority, severity and actual results. A minority of them (22%) mentioned that videos or screenshots of the issues are key and must be included in the submitted testing reports.

  • Supplementary information: Twenty-five per cent of participants stated that receiving additional information such as solutions or suggestions for solving issues or expected causes of issues within submitted reports would be significant.
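Mirroring the task-definition sketch above, a submitted report could gather these pieces of information as in the following hypothetical structure; again, the names and types are illustrative assumptions rather than a format prescribed by the study.

```python
# A hypothetical schema for a crowd tester's report, covering the pieces of
# information above; an illustrative sketch, not a prescribed format.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IssueRecord:
    issue_id: str
    name: str
    category: str                  # e.g. "UI", "crash", "compatibility"
    priority: str
    severity: str
    actual_result: str
    screenshot_urls: list[str] = field(default_factory=list)  # asked for by 22%

@dataclass
class TestReport:
    platform: str                  # testing environment
    device_model: str
    os_version: str
    tester_name: str               # tester information
    tester_contact: str
    tester_location: str
    test_cases_used: list[str]     # execution information (84%)
    steps_followed: list[str]      # (69%)
    error_messages: list[str] = field(default_factory=list)   # (37%)
    repetitions: int = 1
    cycle_minutes: Optional[int] = None
    issues: list[IssueRecord] = field(default_factory=list)   # issue information
    suggested_fixes: list[str] = field(default_factory=list)  # supplementary (25%)
```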

5. Main findings and discussion

Based on the qualitative and quantitative data collected from the participants, the authors were able to draw some initial conclusions to address the selected research questions discussed in Section 1.

RQ1.

To what extent do mobile app developers agree to work with public and anonymous crowd testers with various levels of experience to perform their testing tasks?

The responses to (Q1), presented in Table 1, were not expected: it was highly surprising that more than half of the participants had never used crowdtesting platforms. It is therefore highly likely that several of the developers do not possess sufficient knowledge of crowdsourcing, given that many testing companies have only recently adopted the crowdtesting method, using their own crowd tester communities to perform testing. This is supported by Guaiani and Muccini (2015), who state that most of the big and common testing companies, such as Clariter, uTest, Telcom Italia, Pass Brains and Bug Finders, follow the crowdtesting approach in their usual testing method. This lack of knowledge is possibly why 58% of participants in the survey had never used crowdsourcing before and 34% always used testing companies. This probably means that 92% have had some previous exposure to a crowdtesting method, perhaps indirectly through such testing companies, and only 24% have never used one.

Responses to (Q2) further evidenced the participants’ lack of knowledge regarding crowdsourcing. From Table 2, it was clear that all the participants are inclined to use programming websites such as Stack Overflow, GitHub and Stack Exchange. Moreover, the data show that Stack Overflow is the most commonly used platform, as indicated by 56% of participants, followed by GitHub with 34%. As these websites are public crowdsourcing platforms for programming/coding that deal with public and anonymous crowd programmers (Vasilescu et al., 2013), the participants’ acceptance of working with public crowd testers is demonstrated. Besides this, none of the participants indicated that they had never used them before, which indicates how much mobile app developers agree with the idea of working with anonymous crowd workers, not only for programming but also for testing.

Despite this lack of knowledge, the authors noted that all participants provided a positive outlook regarding reaching a sufficient critical mass when using public crowdtesting [answer to (Q3)], as well as a point of view concerning the future expectations and benefits that could result from using the public crowdtesting method [answer to (Q3)]. In addition, they expressed their desires for this novel crowdtesting method [answer to (Q4)], which were grouped into five main categories:

  1. wider distribution of the test;

  2. reduced testing time;

  3. broader understanding of issues;

  4. increases in knowledge and experience; and

  5. enhanced social networking, interconnection and cooperation between experts in industry and academia. As a result, developers and testers will have the opportunity to get to know each other, increasing social networking among mobile app developers and testers and encouraging the sharing of more knowledge and insight.

Although a significant proportion (90%) of participants showed willingness to move to and work with public crowdtesting platforms, many stressed the need to include the fundamental testing features [answer to (Q12)]. Still, it was surprising that a group of participants (32%) had concerns and hesitations about using this public crowdtesting method. Their concerns covered a set of reasons [answer to (Q9)]: the highest percentage was linked to the security of data (lack of identity and guarantees), followed by the level of education and technological development in some countries, as well as random execution of the test by public crowd testers for the sole purpose of making money.

However, the majority of participants (68%) placed considerable trust in crowd testers from any part of the world [answer to (Q9)]. Many possible solutions were presented for establishing trust in public and anonymous crowd testers [answer to (Q10)], for example, through analysis of the data that crowd testers have submitted in their reports, or through other methods such as deliberate mistakes or in-depth questions related to one of the test steps. Likewise, several possible ways were suggested [answer to (Q11)] that can serve as a guide for public crowd testers to prove the validity of their results. The combination of a very high percentage (90%) of participants willing to move, together with their possible solutions and desires, strongly suggests a generally positive outlook on testing mobile apps by public and anonymous crowd testers around the world. This is a good indication and clear evidence of how much mobile app developers accept the concept of public crowdtesting and of their willingness to engage with public and anonymous crowd testers.

RQ2.

What are the mobile app developers’ recommendations for using a large-scale and public crowdtesting methodology?

The most important topics discussed in this survey are diversity in search criteria, the style of defining tasks and issues, direct interaction, the difficulty level of the system, the variety of incentives and the necessary information to be provided by both testers and developers. Based on the answers to (Q5), the authors found that different groups of participants search for their testing issues via different elements of the mobile device; interestingly, the highest percentage was for the mobile device model. A possible reason is the model’s ability to indicate the brand and platform simultaneously. Therefore, building a search mechanism with diverse search criteria is of great importance to developers who seek solutions for a wide variety of mobile phone models. Diverse criteria in the search mechanism may reduce the time required to search for complex problems at the testing and programming stages. Similarly, diverse criteria may surface a broad set of solutions that do not immediately appear, further enhancing the ability to reach target solutions easily.

The general description was considered the most preferred method for developers to define their testing issues, due to the flexibility it gives to describe the issues well enough to be understood by crowd testers or other developers [answer to (Q6)]; a small group of participants considered the reduction in typing time more important and hence preferred a structured form (divided into sections, e.g. payment method). The majority of the surveyed developers agreed on the importance of direct interaction between testers and developers [answer to (Q7)]. This is probably because they will deal with public and anonymous testers from different societies and geographical locations around the world. Consequently, the developers may need to:

  • recognize who performed the tests;

  • understand more information about the results and issues found (Naith and Ciravegna, 2018a);

  • understand that the raising and explanation of some testing issues are occasionally shaped by differences in culture and spoken language.

From the percentages associated with participants’ responses to (Q7), the authors concluded that direct interaction between public crowd testers and developers will lead to faster tests compared to existing crowdtesting methods. As highlighted by crowd testers in Guaiani and Muccini (2015), delays can happen because of the managers or leaders who organize and lead the test. Additionally, building an online space for developers and public testers to share and discuss testing results will lead to a seamless environment and a better understanding of the results (including the causes of the problems that occurred) (Naith and Ciravegna, 2018a). This would enhance and facilitate the development process and deliver better-accepted apps of high quality. As the public testing method deals with crowd testers from the public, who represent the end-users, the optimistic view of direct interactions coincides with the literature: Alvertis et al. (2016) state that direct communication and collaboration between software developers and end-users during the software development process is important for developing better-accepted software.

Besides this, the difficulty of the issue-reporting system was indicated as one of the major obstacles faced by public crowd testers with various levels of experience, which can negatively affect their enthusiasm to participate and use the crowdtesting platform, as agreed by 86% of the participants in (Q8). This is due to the different behaviors of, and interactions with, the reporting system: each crowd tester behaves and interacts differently from other crowd testers with the same system. This is supported by Rosson et al. (2002), who showed that complicated system interfaces can reduce the use of a system by its target users and, at the same time, breed dissatisfaction with continuing to use it. Methods of motivating crowd testers are considered an important factor in crowdtesting systems. Although different types of incentives were suggested in (Q13), a small group of participants interestingly recommended diversity in the incentives. They mentioned that the type of incentive to provide should be based on the type of test performed, the kind of app, the complexity level of the test, and the crowd testers’ efforts and performance in finding more issues. They believe this could be much better than what is done by existing crowdtesting companies, which typically provide only one type of incentive (money) and pay only the testers who discover critical issues or who discover issues first (Naith and Ciravegna, 2018a). Such incentives may reduce the motivation of the crowd, as the effort behind each crowd tester’s performance must be considered and rewarded. Nevertheless, the underpinning factor that needs consideration is the extent to which testers are actually satisfied with the rewards offered for motivation purposes. Among all the types of incentives collected from (Q13), vouchers, free apps, points and reputation were the most preferred, apart from money. Unfortunately, this implies that some groups of developers think that testing is not a serious primary job, but only simple cooperation.

Consequently, developers preferred to provide goods instead of payment to testers. Unfortunately, this is not always suitable, as testers who need to earn a living may not be interested when rewarded with free access to the app, gift cards, the ability to work on other apps, etc. Therefore, developers should be mindful of cost with regard to the gig economy and globalization, as well as of paying testers according to the standard of living in their countries. Testing is not easy; like any other job, it requires considerable experience and qualifications to be of a professional standard. Hence, this raises a question for future researchers about the significance of studying the nature of the testing process depending on what testers do. The authors found that money needs to be the main reward, following Brabham (2008) and Kaufmann et al. (2011), who showed that money is the most dominant motivation for workers. The value of the money could be based on the type of test performed, the kind of app, the complexity level of the test, and the crowd testers’ efforts in finding issues, as mentioned by the developers; the other incentives listed in (Q13) could then act as additional motivators (bonuses) to motivate crowd testers further. In conclusion, these responses draw attention to the significance of providing additional incentives to retain reliable, good crowd testers and to encourage those who deliver lower-quality results to work harder for better ones.
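To make the value-setting idea above concrete, the toy sketch below scales a base payment by test type, app kind and test complexity and adds a per-issue bonus for effort. Every weight, name and number in it is invented purely for illustration; the paper proposes the factors, not their values.

```python
# A toy sketch of effort-sensitive rewards; all factor tables and weights
# are invented for illustration, not proposed by the paper.
TEST_TYPE_FACTOR = {"functional": 1.0, "compatibility": 1.2, "security": 1.5}
APP_KIND_FACTOR = {"utility": 1.0, "finance": 1.3, "health": 1.3}

def reward(base_pay: float, test_type: str, app_kind: str,
           complexity: float, issues_found: int, issue_bonus: float = 2.0) -> float:
    """Monetary reward reflecting test type, app kind, complexity and effort."""
    scale = TEST_TYPE_FACTOR.get(test_type, 1.0) * APP_KIND_FACTOR.get(app_kind, 1.0)
    return base_pay * scale * complexity + issues_found * issue_bonus

# Example: a compatibility test of a finance app, 3 issues reported.
print(reward(base_pay=10.0, test_type="compatibility",
             app_kind="finance", complexity=1.5, issues_found=3))  # ~29.4
```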

The introduction of suitable incentives, the facilitation of public crowdtesting activities and the provision of sufficient, clear information to crowd testers together enable the test to be executed correctly and accurate, high-quality reports to be produced. According to Guaiani and Muccini (2015), a good number of crowd testers indicated that the amount of information provided to them by testing companies is not sufficient to carry out the test to the necessary standard: crowd testers only receive information about the app itself, such as test scenarios, information about specific inputs and, occasionally, information about the devices to be tested. This study addresses this issue by providing a broader view of the information that developers need to provide to public crowd testers, as well as the information that must be returned within the reports.

For the developers: To ensure that the testing task is defined in a clear and structured way that crowd testers can understand, more information about the results, including bugs or defects (Naith and Ciravegna, 2018b), in addition to all the pieces of information reported in (Q14), must be included. Such information will assist crowd testers with different levels of experience to clearly understand the requirements and to perform an accurate and effective testing process. For example, providing the details of the mobile devices required for testing will save time and effort and avoid the time-consuming execution of test cases on devices that are not required (Afzal, 2007). Because each app includes several functionalities and a test case needs to be created for each function, providing and standardizing the test cases or scenarios among crowd testers with limited experience might be useful for enhancing their testing knowledge and experience. Even where beginner crowd testers lack comprehensive knowledge and understanding of the app involved, these standardized tests help them obtain a broader background in how to generate better scenarios and/or test cases in the future, which leads to more accurate testing processes. On the other hand, standardized tests could help developers to evaluate crowd testers’ performance and knowledge, and to find the gaps in the testing techniques they follow, in order to improve their experience.

A general description of the whole app could help testers obtain a broader background of the app’s purposes and functionalities, so they can exercise the app from different angles, which leads to a higher probability of finding more testing issues. In addition, providing clear information about the issues that have been solved will assist public crowd testers in performing regression tests on all the changes that have occurred, to make sure they do not negatively affect other functions, thereby helping maintain the quality of the app (Afzal, 2007). Therefore, the authors emphasise the importance of providing this information so the testing process can be completed easily, accurately and with less effort.

For the public crowd testers: To understand more about the testing tasks and to ensure the testing report fully includes all the important details of a task’s results (so the report will not be rejected), all the pieces of information reported in (Q15) must be included. The authors see these pieces of information as the key factors for measuring the quality of the testing reports collected from crowd testers and, hence, for providing suitable incentives. A clear description of the steps followed by crowd testers might help developers reproduce issues, enhancing the quality of the report. Besides this, the additional (optional) information provided by crowd testers, especially the expected causes of the issues they found in the test, might help developers understand the causes of the issues and fix them easily.

In summary of the discussion, the authors deem diversity in search criteria, free space to define the issue, direct interaction between testers and developers, ease of use of the system interfaces, variety in incentives and the necessary information provided by both testers and developers to be the key recommendations for encouraging developers to work with public and anonymous crowd testers on their testing tasks.

6. Recommendations

Based on the results discussed above, the authors feel comfortable providing the following recommendations:

  • The difficulty of distributing the test and covering as many old and modern mobile devices as possible, with different OS versions and different hardware characteristics, was indicated by a large number of participants in many questions (Q2, Q4 and Q12). Therefore, researchers must look for innovative solutions that help, as far as possible, to address these two challenges and facilitate the testing process at the lowest possible cost.

  • Direct interaction between mobile app developers and crowd testers, without the need for an intermediary leader or manager, is perceived positively, as in (Q7). Evidence needs to be provided showing the benefits that individual developers or small development teams may gain from using the methodology. Therefore, a new implementation of this methodology should be proposed and empirically validated.

  • The need for open communication and social collaboration among developers, testers and even customers is evident from participants’ responses to (Q4). In addition, the desire to obtain more knowledge and experience in mobile app testing is evident in a number of participants’ answers, such as in (Q3, Q4, Q5, Q12 and Q15). Therefore, an online space for communication and work collaboration needs to be available to both developers and public testers, to share more knowledge about apps’ behavior and to discuss useful testing information and results.

  • As this methodology allows the participation of public crowd testers with different levels of experience, some crowd testers may find it difficult to execute specific types of tasks because of their limited experience. This draws attention to the paramount need to design easy and clear methods for announcing the task (Q6), accessing its requirements and reporting issues (Q8), including the information testers need to provide (Q14 and Q15), so that crowd testers can use them easily and efficiently and so that more crowd testers with different levels of experience are attracted.

  • The desire for a knowledge base of mobile device compatibility testing issues, together with a mechanism for searching those issues by different criteria, is also highlighted by a number of participants (Q5). Therefore, a knowledge base supporting different search criteria needs to be available to developers and testers, storing all testing issues, including results related to the internal complexity and architecture of mobile devices (a minimal sketch of such a store appears after this list).

  • Motivational factors and low cost are other elements reported in (Q13). Therefore, a new motivational mechanism with payment schemes that are accepted by crowd testers needs to be developed. This mechanism needs to take into consideration both reducing the financial budget compared with a testing organization and increasing the participation of public crowd testers.

  • The strong desire to understand more about the behavior of apps, and how users with different backgrounds and from different demographic regions interact with them, is also shown in a number of participants’ answers (Q3 and Q12). Therefore, a new methodology needs to be available to address this desire, which will help developers to build higher-quality apps that work well for different groups and demographics of users.
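Returning to the knowledge-base recommendation above, the following minimal sketch, assuming an in-memory store and invented field names, illustrates how compatibility issues could be recorded and then searched by several optional criteria; a real system would, of course, sit on a proper database:

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class CompatibilityIssue:
        # Hypothetical record of one compatibility-testing issue.
        issue_id: str
        device_model: str
        os_version: str
        app_category: str       # e.g. "health", "tourism"
        summary: str

    class IssueKnowledgeBase:
        # In-memory stand-in for the shared issue store.
        def __init__(self) -> None:
            self._issues: List[CompatibilityIssue] = []

        def add(self, issue: CompatibilityIssue) -> None:
            self._issues.append(issue)

        def search(self, device_model: Optional[str] = None,
                   os_version: Optional[str] = None,
                   app_category: Optional[str] = None) -> List[CompatibilityIssue]:
            # Each criterion is optional, so developers and testers can mix and match filters.
            return [i for i in self._issues
                    if (device_model is None or i.device_model == device_model)
                    and (os_version is None or i.os_version == os_version)
                    and (app_category is None or i.app_category == app_category)]

    # Example usage: filter by any combination of criteria.
    kb = IssueKnowledgeBase()
    kb.add(CompatibilityIssue("I-17", "Pixel 3", "Android 10", "health",
                              "Layout clipped on devices with a display notch"))
    android10_issues = kb.search(os_version="Android 10")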

Based on all these recommendations, the authors highlight the necessity of developing a new informative, communicative and practical crowdtesting methodology to serve individual developers or small development teams, one that takes the aforementioned recommendations into consideration. Furthermore, once constructed, this methodology needs to be empirically validated, including the recommendations mentioned above.

7. Conclusions and future work

Although crowdsourcing has gained much attention among mobile app developers, as demonstrated in this study, much still needs to be done to change the perspectives of participants who remain concerned about using public crowdtesting for mobile app testing. This paper has presented an exploratory study investigating developers’ points of view on agreeing to work with public and anonymous crowd testers in crowdtesting processes for mobile apps. In addition, it has identified the desirable features or properties that developers require in order to use a public crowdtesting methodology for testing their apps. Further, it has provided information on how developers can ensure the reliability of the public crowd, motivate them and evaluate their work. In total, the authors analyzed responses from 50 mobile app developers, with different levels of experience in iOS and Android, from different countries around the world.

The results demonstrated that app developers are willing to use the public crowdtesting methodology if certain challenges can be addressed and the factors detailed in the study are fully met. Further, the study concludes that direct interaction and the development of trust between public crowd testers and mobile app developers are key to performing an effective testing process and to establishing a long-term working relationship between the two groups. The external validity of this study is supported by the participation of mobile app developers from different countries around the world. In addition, this paper has discussed the results in detail and has given several recommendations that software developers need to consider when developing a new crowdtesting methodology. This study has helped to clarify the key requirements and issues that concern the participants in the public crowdtesting method. However, there is still a need for further research, as the links used to share this single survey may not have reached a large number of potential participants with significant experience in this field.

In future work, the authors plan to provide an empirical evaluation of the use of the public crowdtesting method with the participation of a larger and more representative set of mobile app developers. The authors will study the nature of app testing by crowd testers with various levels of experience, to provide an effective motivation mechanism that does not harm the gig economy and the global market. The authors expect that some developers will favor such a public crowdtesting methodology over dealing with traditional crowdtesting organizations. The authors would also like to examine some of the issues raised in this research study, such as the effect of providing different types of incentives on the crowd, and the impact of difficult system interfaces on crowd participation.

Figures

Figure 1. The proportion of participating developers who have working experience with Android and iOS mobile platforms

Figure 2. Structural design of the questionnaire

The proportional use of mobile app testing methods from the participants’ perspective

Testing method Never (%) Rarely (%) Sometimes (%) Often (%) Always (%)
Crowd-testing platforms 58 14 26 2 0
Automated/Cloud testing tools 24 22 34 14 6
Testing company 4 6 34 22 34

The proportional use of the three crowdsourced programming websites Stack Overflow, GitHub and Stack Exchange from the participants’ perspective

Programming platform Never (%) Somewhat (%) Moderately (%) Very much (%) Always (%)
Stack Overflow 0 8 12 26 56
GitHub 0 16 26 24 34
Stack Exchange 32 22 20 8 18

Important factors used as evidence of the accuracy of results

A detailed report (including a full issue description, steps to reproduce the issue and the test cases used) 83%
Screenshots of issues 42%
Video recording of testing process 25%
An automated testing report for each test cases (similar to Google Analytics or Fabric Crashlytics reports) 16%
Automatic log files 8%

Respondents’ perspective on possible incentives that can be offered to crowd testers

Vouchers/Gift cards 56%
Some money 41%
Free app 38%
Points and reputation 33%
Job offer 24%
Priority to work on next project 19%
Invitation to attend events, conference or workshops 8%
Certifications 4%
More knowledge related to testing scenarios and activities 2%

References

Afzal, W. (2007), “Metrics in software test planning and test design processes”.

Almeida, M., Bilal, M., Finamore, A., Leontiadis, I., Grunenberger, Y., Varvello, M. and Blackburn, J. (2018), “Chimp: crowdsourcing human inputs for mobile phones”, in Proceedings of the 2018 World Wide Web Conference, International World Wide Web Conferences Steering Committee, pp. 45-54.

Alvertis, I., Koussouris, S., Papaspyros, D., Arvanitakis, E., Mouzakitis, S., Franken, S., Kolvenbach, S. and Prinz, W. (2016), “User involvement in software development processes”, Procedia Computer Science, Vol. 97, pp. 73-83.

Bayley, I., Flood, D., Harrison, R. and Martin, C. (2012), “Mobitest: a cross-platform tool for testing mobile applications”.

Bell, E., Bryman, A. and Harley, B. (2018), Business Research Methods, Oxford University Press.

Brabham, D.C. (2008), “Moving the crowd at iStockphoto: the composition of the crowd and motivations for participation in a crowdsourcing application”, First Monday, Vol. 13 No. 6.

Bryman, A. and Burgess, B. (2002), Analyzing Qualitative Data, Routledge.

Denscombe, M. (2014), The Good Research Guide: For Small-Scale Social Research Projects, McGraw-Hill Education.

Guaiani, F. and Muccini, H. (2015), “Crowd and laboratory testing, can they co-exist? An exploratory study”, 2015 IEEE/ACM 2nd International Workshop on CrowdSourcing in Software Engineering, IEEE, pp. 32-37.

Holzer, A. and Ondrus, J. (2011), “Mobile application market: a developers perspective”, Telematics and Informatics, Vol. 28 No. 1, pp. 22-31.

Huang, J-F. (2014), “Appacts: mobile app automated compatibility testing service”, 2014 2nd IEEE International Conference on Mobile Cloud Computing, Services, and Engineering (MobileCloud), IEEE, pp. 85-90.

Huang, J-F. and Gong, Y-Z. (2012), “Remote mobile test system: a mobile phone cloud for application testing”, in 2012 IEEE 4th International Conference on Cloud Computing Technology and Science (CloudCom), IEEE, pp. 1-4.

Kaasila, J., Ferreira, D., Kostakos, V. and Ojala, T. (2012), “Testdroid: automated remote UI testing on Android”, Proceedings of the 11th International Conference on Mobile and Ubiquitous Multimedia, ACM, p. 28.

Kaufmann, N., Schulze, T. and Veit, D. (2011), “More than fun and money. Worker motivation in crowdsourcing – a study on Mechanical Turk”, AMCIS, Vol. 11, Detroit, pp. 1-11.

Knott, D. (2015), Hands-on Mobile App Testing, Pearson Education Inc.

Marwell, G., Oliver, P.E. and Prahl, R. (1988), “Social networks and collective action: a theory of the critical mass. III”, American Journal of Sociology, Vol. 94 No. 3, pp. 502-534.

Mathers, N.J., Fox, N.J. and Hunn, A. (1998), Surveys and Questionnaires, NHS Executive, Trent.

Mazumdar, S., Wrigley, S. and Ciravegna, F. (2017), “Citizen science and crowdsourcing for earth observations: an analysis of stakeholder opinions on the present and future”, Remote Sensing, Vol. 9 No. 1, p. 87.

Muntés-Mulero, V., Paladini, P., Manzoor, J., Gritti, A., Larriba-Pey, J.-L. and Mijnhardt, F. (2012), “Crowdsourcing for industrial problems”, International Workshop on Citizen in Sensor Networks, Springer, pp. 6-18.

Naith, Q. and Ciravegna, F. (2018a), “Mobile devices compatibility testing strategy via crowdsourcing”, International Journal of Crowd Science, Vol. 2 No. 3, pp. 225-246.

Naith, Q. and Ciravegna, F. (2018b), “Hybrid crowd-powered approach for compatibility testing of mobile devices and applications”, Proceedings of the 3rd International Conference on Crowd Science and Engineering, ACM, p. 1.

Naith, Q. and Ciravegna, F. (2019c), “The key considerations in building a crowd-testing platform for software developers”, Proceedings of the 4th International Conference on Crowd Science and Engineering, pp. 50-57.

Onwuzurike, L. and De Cristofaro, E. (2015), “Danger is my middle name: experimenting with SSL vulnerabilities in android apps”, Proceedings of the 8th ACM Conference on Security and Privacy in Wireless and Mobile Networks, ACM, p. 15.

Prathibhan, M., Malini, A., Venkatesh, N. and Sundarakantham, K. (2014), “An automated testing framework for testing android mobile applications in the cloud”, 2014 International Conference on Advanced Communication Control and Computing Technologies (ICACCCT), IEEE, pp. 1216-1219.

Robert, L. and Romero, D.M. (2015), “Crowd size, diversity and performance”, in Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, ACM, pp. 1379-1382.

Rosson, M.B., Carroll, J.M. and Hill, N. (2002), Usability Engineering: Scenario Based Development of Human-Computer Interaction, Morgan Kaufmann.

Sánchez-Charles, D., Nin, J., Solé, M. and Muntés-Mulero, V. (2014), “Worker ranking determination in crowdsourcing platforms using aggregation functions”, 2014 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), IEEE, pp. 1801-1808.

Starov, O. (2013), “Cloud platform for research crowdsourcing in mobile testing”.

Vasilescu, B., Filkov, V. and Serebrenik, A. (2013), “StackOverflow and GitHub: associations between software development and crowdsourced knowledge”, 2013 International Conference on Social Computing, IEEE, pp. 188-195.

Corresponding author

Qamar Naith can be contacted at: qhnaith1@sheffield.ac.uk
