Abstract
Purpose
Service innovation is a key source of competitiveness for service enterprises. With the emergence of crowdsourcing platforms, consumers are frequently involved in the process of service innovation. In this paper, the authors describe the crowdsourcing ideation website MyStarbucksIdea.com and investigate the motivations behind customer-involved service innovation.
Design/methodology/approach
Using a rich data set obtained from the website MyStarbucksIdea.com, a dynamic structural model is proposed to illuminate the learning process of consumers.
Findings
The results indicate that individuals initially tend to underestimate the firm's cost of implementing their ideas while overestimating the value of those ideas. By observing peer votes and official feedback, individuals gradually learn the true value of their ideas as well as the firm's cost structure. Overall, the authors find that the cumulative feedback rate and the average potential of ideas first increase and then decline.
Originality/value
First, previous research on crowdsourcing shows that the implementation rate of ideas is low and the number of creative ideas declines over time, yet few scholars have studied the causes behind these problems. Second, the data used in this paper are authentic and valid, and such data are now difficult to obtain; they provide strong empirical support for the model proposed in this paper. Third, combining the customer learning mechanism with heterogeneity theory to explain the declining creativity and low implementation rate on crowdsourcing platforms is relatively novel, and the results can serve as a useful reference for the development of this industry.
Keywords
Citation
Cui, L., Liang, Y. and Li, Y. (2020), "The study of customer involved service innovation under the crowdsourcing : A case study of MyStarbucksIdea.com", Journal of Industry - University Collaboration, Vol. 2 No. 1, pp. 22-33. https://doi.org/10.1108/JIUC-12-2019-0018
Publisher
Emerald Publishing Limited
Copyright © 2020, Lixin Cui, Yibao Liang and Yiling Li
License
Published in Journal of Industry-University Collaboration. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
Service innovation is a source of business competitiveness. Recently, along with the development of information technology, crowdsourcing has been gaining popularity in various fields. Howe (2006) defined "crowdsourcing" as "the new pool of cheap labor: everyday people using their spare cycles to create content, solve problems, even do corporate R&D." Such crowdsourcing initiatives provide a platform for everyone to post their own ideas, and these ideas are usually generated from direct or indirect service experience. The customer group is therefore a rich source of preference information. A typical crowdsourcing platform allows customers to support or oppose others' ideas, so that a preliminary assessment of existing ideas can be obtained. Through such initiatives, a firm can acquire a great number of innovative and beneficial ideas. However, arguments about the real utility of crowdsourcing have never been settled. In fact, many crowdsourcing platforms are experiencing a decrease in the number of newly posted ideas, and the feedback rate (the percentage of ideas with official feedback among all existing ideas) remains low. Nevertheless, systematic and in-depth research on these problems is still scarce.
Most research on crowdsourcing focuses on crowdsourcing contests, in which customers post ideas to compete for an award (Archak and Sundararajan, 2009; DiPalantino and Vojnovic, 2009; Mo et al., 2011; Terwiesch and Xu, 2008). Unlike crowdsourcing contests, in a permanent and open idea solicitation such as MyStarbucksIdea.com there is no competition among idea contributors; instead, they help each other evaluate ideas. Unfortunately, only a few studies have examined this type of crowdsourcing initiative. Using a reduced-form approach, Bayus (2013) finds that individual creativity is positively related to current effort but negatively related to past success. Di Gangi et al. (2010) identify two factors that determine whether an individual's idea will be adopted: the firm's ability to understand the idea's technical requirements and its ability to respond to the community's concerns about the idea. Lu et al. (2011) find that in crowdsourcing ideation initiatives, complementarities and customer support give people opportunities to learn; this mechanism lets people discover the problems that other customers have encountered and helps them come up with more ideas worth implementing. Yan et al. (2014) study the crowdsourcing platform IdeaStorm.com, which is run by Dell. Their work pioneered a research direction of structurally investigating new product ideas, as well as their development process, on the basis of real crowdsourcing data.
2. Data collection and analysis
Our data are collected from the crowdsourcing website MyStarbucksIdea.com, which is run by Starbucks. This online community was established in March 2008; it is dedicated to sharing and discussing ideas and lets people see how Starbucks puts top ideas into action.
The structure of MyStarbucksIdea.com is simple but quite efficient. Anyone (not only Starbucks customers) can register at the website and become a member of the online community. Any member can then post ideas on the website. Starbucks classifies all ideas into three categories: product ideas, experience ideas, and involvement ideas. Before posting an idea, an individual must select the category to which the idea belongs. Once an idea is posted, other members can vote on it. A member who supports an idea can submit a positive vote, which adds 10 points to the idea; a member who opposes it can submit a negative vote, which deducts 10 points. On the website, however, only the cumulative score is visible, while the specific numbers of positive and negative votes are not open to the public. Moreover, registered members can write comments under an idea to explain their detailed thoughts.
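The scoring rule described above can be sketched as follows. This is an illustrative helper, not the site's actual code; the function name is hypothetical:

```python
def cumulative_score(positive_votes: int, negative_votes: int) -> int:
    """Publicly visible score of an idea: each positive vote adds
    10 points and each negative vote deducts 10 points. Only this
    cumulative score is shown; the two vote counts stay hidden."""
    return 10 * positive_votes - 10 * negative_votes
```

Note that the public score is ambiguous: an idea with 5 positive and 2 negative votes shows the same 30 points as one with 3 positive and 0 negative votes.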
Typically, an idea's status passes through the following stages. Once an idea is posted, the voting and commenting functions become available to the public. The review team screens all ideas according to their scores and delivers the good ones to the decision-makers; at this moment, the status of these ideas changes to "under review." Ideas that have been reviewed are marked "reviewed." Next, ideas deemed worth implementing are selected, and their status becomes "coming soon." Finally, when an idea is completely implemented, its status changes to "launched." In our paper, customers' learning process advances gradually on the basis of two information sources: the scores and the status of ideas.
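The life cycle above can be sketched as a simple forward-only pipeline. The stage labels mirror the site's status names; the assumption that ideas advance one stage at a time is ours, based on the description:

```python
# Stages an idea passes through on MyStarbucksIdea.com, in order.
STAGES = ["posted", "under review", "reviewed", "coming soon", "launched"]

def next_stage(current: str) -> str:
    """Advance an idea to the following stage; 'launched' is terminal."""
    i = STAGES.index(current)
    return STAGES[min(i + 1, len(STAGES) - 1)]
```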
MyStarbucksIdea.com offers a rich data set containing detailed information about both ideas and members. We collected the public data on the website and divided it into two groups: idea profile data and member profile data. We acquired 96,793 records of member profile data and selected the 21,305 individuals who posted two or more ideas (we call them "selected members"). The data span January 2009 to December 2015. We found a similar distribution between ideas contributed by selected members and those contributed by the whole member group, so the selected member group is representative.
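The selection step above amounts to a simple frequency filter over the member profile data; a minimal sketch, assuming records are (member_id, idea_id) pairs:

```python
from collections import Counter

def select_members(idea_records):
    """Return the set of members who posted two or more ideas
    (the 'selected members' in the text). idea_records is an
    iterable of (member_id, idea_id) pairs."""
    counts = Counter(member for member, _ in idea_records)
    return {m for m, n in counts.items() if n >= 2}
```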
In Figure 1, we present the relationship between the cumulative feedback rate of the three categories and time.
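The cumulative feedback rate plotted in Figure 1 follows the definition given earlier (the share of ideas with official feedback among all ideas posted so far). A minimal sketch of that running computation, assuming ideas are sorted by posting time:

```python
def cumulative_feedback_rate(ideas):
    """ideas: list of (post_time, has_feedback) pairs sorted by
    post_time. Returns the running share of ideas posted so far
    that have received official feedback, one value per idea."""
    rates = []
    total = with_feedback = 0
    for _, has_feedback in ideas:
        total += 1
        with_feedback += int(has_feedback)
        rates.append(with_feedback / total)
    return rates
```

In practice one would compute this separately for each of the three idea categories to reproduce the three curves in the figure.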
3. Model construction
Taking Yan et al. (2014) as a reference, we modify their structural model to describe the decision-making process of customers and further explain the data-generation process. By explicitly modeling individuals' utility function, we can use the data to empirically recover the parameters of the analytical model.
In each time period of our model, every member decides whether to post an idea in a certain category, and this decision is driven by the corresponding utility expectation. Thus, we first explain the utility function.
Suppose that individuals are indexed by i, categories by j (j = 1, 2, 3), and time by t. The utility function consists of four factors. The first and second factors relate to benefit: if an idea is implemented, the contributor obtains a better service experience, a higher online reputation, or even job opportunities. We use the parameter ri to represent the reputation gain. The third factor is the cost of posting an idea, including thinking up, writing, and posting it; the whole cost is denoted ci. Finally, the fourth factor involves discontent: for example, if an idea is neither accepted nor responded to, the contributor becomes discontented. In the model, whether individual i is discontented in period t is denoted by Dit (a binary variable: Dit = 1 means the person is discontented, while Dit = 0 means content), and the degree of such discontent is measured by the parameter di.
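The four components can be combined in a simple additive sketch. The paper's exact functional form is not reproduced here, so the additive combination below is our assumption; only the components (reputation gain ri, posting cost ci, discontent penalty di·Dit) come from the description above:

```python
def posting_utility(r_i: float, c_i: float, d_i: float,
                    implemented: bool, discontented: bool) -> float:
    """Hedged additive sketch of the utility of posting an idea:
    reputation gain r_i accrues only if the idea is implemented,
    posting cost c_i is always paid, and the discontent penalty
    d_i applies when D_it = 1. The actual model may differ."""
    benefit = r_i if implemented else 0.0
    discontent_cost = d_i if discontented else 0.0
    return benefit - c_i - discontent_cost
```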
Hence, the utility function is given by the following equation:
When an individual posts an idea, he or she holds a belief, based on the existing information, about whether the idea will be accepted, that is, an expectation of the utility function, which is represented as
Every member holds an expectation of the cost and value of their ideas and updates this expectation using information from the website. In doing so, they learn about the firm's cost structure and the true value of their ideas.
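Such belief updating is commonly modeled as a conjugate normal-normal Bayesian update, where a noisy signal (for instance, a log voting score) is averaged with the prior belief, each weighted by its precision. The sketch below is a generic illustration under that assumption, not the paper's exact specification:

```python
def normal_update(prior_mean: float, prior_var: float,
                  signal: float, signal_var: float):
    """One conjugate normal-normal Bayesian update: combine a prior
    belief N(prior_mean, prior_var) with a noisy signal of known
    variance signal_var. The posterior mean is a precision-weighted
    average; the posterior variance always shrinks."""
    w = signal_var / (prior_var + signal_var)  # weight on the prior
    post_mean = w * prior_mean + (1.0 - w) * signal
    post_var = prior_var * signal_var / (prior_var + signal_var)
    return post_mean, post_var
```

Applied repeatedly, this update pulls a member's perceived idea value toward the values implied by observed votes, which is the mechanism behind the "excessive early optimism, later correction" pattern found in Section 4.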
Suppose that the cost for implementing an idea in category
We allow
In a certain period of time, there could be more than one idea implemented. If there are
We allow
The prior in period
We calculate the voting scores that one's ideas receive in different categories and find no significant difference. Additionally, we have verified that one's ability to come up with new ideas is not influenced by a learning-curve effect. Therefore, we can assume that the value of one's ideas remains similar and does not change over time. As soon as individual
Moreover, the voting score is an excellent measurement. Suppose that the natural logarithm of the voting score
For individual
We use the parameter
Note that members will update their perception of their ideas' value through voting scores; suppose that the natural logarithm of the voting score that a specific idea receives is
Here,
Similarly, we define
Respectively, we have
In addition, the prior values in period
As a result, the probability of implementing a specific idea can be written as
During the decision-making process, only the firm knows exactly about
Thus, in terms of the community members, the probability of implementing an idea with the value
Let
As mentioned above, members refer to their utility function when deciding whether to post an idea. Suppose that they make their decisions independently and without the influence of idea category. Besides, they know that the firm will take both the cost and the value into consideration. Then, the
Note that
We assume that the parameter
4. Model estimation and result analysis
4.1 Parameter estimates and analysis of cost and value
Taking both quantity and validity into consideration, we select a group of individuals who proposed three or more ideas between January 2009 and February 2016 to serve as model samples. Compared with other members, the selected individuals tend to behave more actively and learn faster from a variety of signals; therefore, they fit the model well (details are explained in the following sections). Our sample contains 371 members and 2,466 posted ideas. Next, we estimate the model parameters in MATLAB, and the results are summarized in Tables I and II. The values of the parameters that are held constant across individuals (pooled parameters) are set in Table I, while the estimates of the other pooled parameters are presented in Table II.
Comparing the estimates of
In Table II, we see that
In Figure 2, we plot histograms of the distribution of the individual-level parameters shown in Table III.
From Table III, we see that the average of
We also observe that the average of
4.2 Analysis of parameters in utility gain
To explore the relationship between the individual mean value of ideas
As shown in Figure 3(a), most points gather in the region of
4.3 The filtering process of the crowdsourcing platform
Our estimates of the model parameters can explicitly represent the filtering-out of idea contributors, especially marginal idea contributors, whose ideas are worse than the overall level. In the following analysis, we study the members who posted two or more ideas during the first and the last 20 months, respectively. Figures 4(a) and 4(b) illustrate the distributions of average value in the two subgroups.
To illustrate the difference between new members and early members in their ability to post high-value ideas, we present in Figure 5 the relationship between the time when an individual first posted an idea and the average value of his or her ideas. From the clustering of the data points, we observe a great difference in average value among individuals who posted their first idea during the first 40 months (we call them "early members"); these points are scattered across the coordinate plane, and several individuals posted ideas better than the average level. By contrast, members who posted their first idea during the last 50 months (we call them "new members") tend to submit ideas close to the overall mean value, and no obvious high-value proposer is found among them. In other words, good idea contributors among the early members gradually become inactive, while new members do not contribute enough ideas as good as those of the previous contributors.
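The cohort comparison above can be sketched as a split on the month of a member's first post. The 40-month cutoff follows the text; the data layout (pairs of first-post month and average idea value) is our assumption:

```python
def mean_value_by_cohort(members, cutoff_month: int = 40):
    """members: list of (first_post_month, avg_idea_value) pairs.
    Split members into 'early' (first post in month <= cutoff_month)
    and 'new' cohorts, and return each cohort's mean idea value."""
    early = [v for month, v in members if month <= cutoff_month]
    new = [v for month, v in members if month > cutoff_month]

    def mean(xs):
        return sum(xs) / len(xs) if xs else float("nan")

    return mean(early), mean(new)
```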
5. Conclusion
We modify an existing structural model to study customers' dynamic learning process using actual data from MyStarbucksIdea.com, and we investigate the efficiency of crowdsourcing initiatives in the context of customer-involved service innovation.
The results in our paper show that in the early stage of the website, members of the online community not only overestimate the value of their ideas but also underestimate the firm's cost of implementing them. Therefore, members tend to be excessively optimistic and post a large number of new ideas with little value. Through the learning process, individuals gradually recognize the true value of their ideas and come to understand the firm's cost structure. For marginal idea contributors, the expected utility of posting new ideas starts to drop. As a result, customers' learning process acts as a self-selection mechanism that filters out marginal idea contributors, leading to a decrease in the number of ideas after the website reaches a stable stage.
However, after the website reaches the stable stage (between the 30th and the 40th month in our model), the mean value of ideas and the cumulative feedback rate start to fall. The reason is that only a few early members remain active; most individuals, including marginal contributors, gradually fade out. In addition, new members are not able to submit enough high-value ideas, so the vacancy left by good contributors is not filled.
Figures
Setting value of the pooled parameters
Notation | Setting value
---|---
 | −6
 | 50
 | 50
 | 50
 | 50
 | 1
 | 50
 | 10
Estimates of the pooled parameters
Notation | Parameter estimates | Standard deviation
---|---|---
 | −7.12 | 8.59
 | −8.55 | 2.46
 | −14.32 | 10.13
 | 1.82 | 1.21
 | 2.46 | 0.46
Estimates of the individual-level parameters
Notation | Parameter estimates | Standard deviation
---|---|---
 | 0.79 | 52.78
 | 52.00 | 116.93
 | −1.22 | 0.86
 | 0.05 | 0.04
 | 0.08 | 0.50
 | 0.01 | 0.47
 | 0.03 | 0.55
Note(s): For each individual, the individual-level parameters change over time, so each has a mean and a variance. Table III, however, reports only the mean and variance at the sample-group level
References
Archak, N. and Sundararajan, A. (2009), “Optimal design of crowdsourcing contests”, ICIS 2009 Proceedings Paper 200, available at: http://aisel.aisnet.org/icis2009/200.
Bayus, B.L. (2013), “Crowdsourcing new product ideas over time: an analysis of the Dell IdeaStorm community”, Management Science, Vol. 59 No. 1, pp. 226-244.
DiPalantino, D. and Vojnovic, M. (2009), “Crowdsourcing and all-pay auctions”, Proceedings of the 10th ACM Conference on Electronic Commerce, ACM, pp. 119-128.
Di Gangi, P.M., Wasko, M.M. and Hooker, R.E. (2010), "Getting customers' ideas to work for you: learning from Dell how to succeed with online user innovation communities", MIS Quarterly Executive, Vol. 9 No. 4, pp. 213-228.
Howe, J. (2006), “The rise of crowdsourcing”, Wired Magazine, Vol. 14 No. 6, pp. 1-5.
Lu, Y., Singh, P.V. and Srinivasan, K. (2011), “How to retain smart customers in crowdsourcing efforts? A dynamic structural analysis of crowdsourcing customer support and ideation”, Conference of Information Systems and Technology, Charlotte.
Mo, J., Zheng, Z. and Geng, X. (2011), “Winning crowdsourcing contests: a micro-structural analysis of multi-relational social network”, Workshop on Information Management (CSWIM).
Terwiesch, C. and Xu, Y. (2008), “Innovation contests, open innovation, and multiagent problem solving”, Management Science, Vol. 54 No. 9, pp. 1529-1543.
Yan, H., Param, V.S. and Kanan, S. (2014), “Crowdsourcing new product ideas under consumer learning”, Management Science, Vol. 60 No. 9, pp. 2138-2159.