Search results
1 – 10 of 640
Thorsten Roser, Robert DeFillippi and Alain Samson
Abstract
Purpose
The purpose of this paper is to make a contribution to co‐creation theory by integrating conceptual insights from the management and marketing literatures that are both concerned with co‐creation phenomena. It aims to develop a reference model for comparing how different organizations organize and manage their co‐creation ventures. It also aims to apply the authors' framework to four distinct cases that illustrate the differences in co‐creation practice within different co‐creation environments.
Design/methodology/approach
The authors compare four different companies based on case profiles. Each company is employing its own distinct approach to co‐creating. The authors employ a method mix including literature analysis, structured interviews, document and web site analysis, as well as participation.
Findings
The reference model offers a set of useful dimensions for case‐based inquiry. The case comparisons show how firms may decide to systematise and manage a mix of co‐creation activities within B2B versus B2C contexts, utilising either crowd‐sourced or non‐crowd‐sourced approaches. Further, the case comparisons suggest that there are fewer differences between B2B and B2C co‐creation than between crowd‐sourced and non‐crowd‐sourced approaches. Ultimately, implementation decisions in one dimension of co‐creation design (e.g. whom to involve in co‐creation) will affect other dimensions of implementation and governance (e.g. how much intimacy) and thus how co‐creation needs to be managed.
Originality/value
The paper presents case comparisons utilising B2B versus B2C, as well as crowd versus non‐crowd‐sourcing examples of co‐creation and an original decision support framework for assessing and comparing co‐creation choices.
Details
Keywords
Sharmistha Chatterjee, Jukka K. Nurminen and Matti Siekkinen
Abstract
Purpose
Detecting and tracking the position of a mobile user has become one of the important subjects in many mobile applications. Such applications use location based services (LBS) for learning and training user movements in different places (cities, markets, airports, stations) along different modes of transport (bus, car, cycle, walk). To date, GPS is the key solution to all LBS but repeated GPS querying is not economical in terms of the battery life of the mobile phone. The purpose of this paper is to study how cheap and energy‐efficient air pressure sensors measuring the altitude could be used, as a complement to the dominant GPS system. The location detection and route tracking task is then accomplished by matching the collected altitude traces with the altitude curves of stored data to find the best matching routes.
Design/methodology/approach
The cornerstone of the authors' approach is that a huge amount of route data, collected with GPS devices, is available in various cloud services. In order to evaluate the mechanism of matching routes with altitude data, the authors build a prototype system of crowd‐sourced database containing only altitude data of different routes along different modes of transport. How accurately this stored altitude data could be matched with the collected altitude traces is the key question of this study.
Findings
Results show that, within a certain level of accuracy, older repeated routes can be detected from newly tracked altitude traces. Further, the level of accuracy varies depending on the length of path traversed, route curvature, speed of travel and sensor used for tracking.
Originality/value
The new contribution in this paper is to propose an alternative route detection mechanism which minimizes the use of GPS query. This concept will help in retrieving the GPS coordinates of already traversed routes stored in a large database by matching them with currently tracked altitude curves.
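The matching mechanism described above can be sketched roughly as follows. The abstract does not specify the distance measure used, so this illustration assumes altitude curves are linearly resampled to a common length and compared by mean squared error; the function names and the toy routes are hypothetical.

```python
def resample(curve, n):
    """Linearly resample an altitude curve to n points."""
    if len(curve) == n:
        return list(curve)
    out = []
    for i in range(n):
        pos = i * (len(curve) - 1) / (n - 1)
        lo = int(pos)
        hi = min(lo + 1, len(curve) - 1)
        frac = pos - lo
        out.append(curve[lo] * (1 - frac) + curve[hi] * frac)
    return out

def best_matching_route(trace, stored_routes, n=50):
    """Return the name of the stored route whose altitude curve
    is closest (in mean squared error) to the tracked trace."""
    t = resample(trace, n)
    def dist(name):
        r = resample(stored_routes[name], n)
        return sum((a - b) ** 2 for a, b in zip(t, r)) / n
    return min(stored_routes, key=dist)
```

A real system would additionally normalize for travel speed and sensor offset before comparing curves, which this sketch omits.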
Details
Keywords
Ilija Subasic, Nebojsa Gvozdenovic and Kris Jack
Abstract
Purpose
The purpose of this paper is to describe a large-scale algorithm for generating a catalogue of scientific publication records (citations) from crowd-sourced data, demonstrate how to learn an optimal combination of distance metrics for duplicate detection and introduce a parallel duplicate clustering algorithm.
Design/methodology/approach
The authors developed the algorithm and compared it with state-of-the-art systems tackling the same problem. The authors used benchmark data sets (3k data points) to test the effectiveness of the algorithm and a real-life data set (>90 million records) to test its efficiency and scalability.
Findings
The authors show that duplicate detection can be improved by an additional step they call duplicate clustering. The authors also show how to improve the efficiency of the map/reduce similarity calculation algorithm by introducing a sampling step. Finally, the authors find that the system is comparable to state-of-the-art systems for duplicate detection, and that it can scale to deal with hundreds of millions of data points.
Research limitations/implications
Academic researchers can use this paper to understand some of the issues of transitivity in duplicate detection, and its effects on digital catalogue generations.
Practical implications
Industry practitioners can use this paper as a use case study for generating a large-scale real-life catalogue generation system that deals with millions of records in a scalable and efficient way.
Originality/value
In contrast to other similarity calculation algorithms developed for map/reduce (m/r) frameworks, the authors present a specific variant of similarity calculation that is optimized for duplicate detection of bibliographic records by extending the previously proposed e-algorithm based on inverted index creation. In addition, the authors are concerned with more than duplicate detection, and investigate how to group detected duplicates. The authors develop distinct algorithms for duplicate detection and duplicate clustering and use the canopy clustering idea for multi-pass clustering. The work extends the current state of the art by including the duplicate clustering step and demonstrates new strategies for speeding up m/r similarity calculations.
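The canopy clustering idea referenced above can be sketched as follows: a cheap distance function first groups records into overlapping canopies, so that the expensive pairwise comparison only needs to run inside each canopy. The cheap distance function and the two thresholds below are illustrative assumptions, not the authors' actual parameters.

```python
def canopy_clusters(items, cheap_dist, t_loose, t_tight):
    """One-pass canopy formation (requires t_tight < t_loose).

    Every item within t_loose of a canopy center joins that canopy;
    items within t_tight of the center are removed from the candidate
    pool so they cannot become centers of later canopies.
    """
    canopies = []
    remaining = list(items)
    while remaining:
        center = remaining.pop(0)
        canopy = [center]
        still_candidates = []
        for x in remaining:
            d = cheap_dist(center, x)
            if d < t_loose:
                canopy.append(x)       # cheap match: compare expensively later
            if d >= t_tight:
                still_candidates.append(x)  # far enough to seed its own canopy
        remaining = still_candidates
        canopies.append(canopy)
    return canopies
```

Because canopies may overlap, a record can appear in more than one canopy; the expensive duplicate-detection step then resolves the final grouping.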
Details
Keywords
Abstract
Purpose
After leading more than thirty co-creation projects, and observing more than 200 others, the author can offer a view on why co-creation with stakeholders is becoming a cornerstone of the creative economy and suggest how the most popular approaches contribute to helping firms gain a competitive advantage through connections that enable continuous innovation.
Design/methodology/approach
To tackle large, complex problems, co-creation, in its most generic form, requires adopting five processes that each represent a potential source of competitive advantage; an approach can draw on each process to a greater or lesser extent. A co-creation strategy will be most powerful when all five processes are used in combination.
Findings
Leading theorists are predicting that in the foreseeable future the co-creation model will become a primary source of the firm's competitive advantage.
Practical implications
Opening up the traditional value chain to stakeholders could precipitate a race to co-creation, as every firm tries to connect each function and process to the relevant ecosystem and attract the best external players as partners.
Originality/value
Building on leading theorists' prediction that the co-creation model will become a primary source of the firm's competitive advantage, the article lays out five approaches.
Details
Keywords
Daniel Taylor, Sebastian Brockhaus, A. Michael Knemeyer and Paul Murphy
Abstract
Purpose
Since the emergence of e-commerce uprooted traditional brick-and-mortar retail in the early 2000s, many retailers have reacted by first independently servicing both the online and in-store channels (multichannel retailing) and subsequently integrating both channels to provide a seamless front-end customer interface (omnichannel retailing). Accordingly, firms had to adjust their logistics and supply chain management (SCM) processes from fulfilling orders for each channel separately to integrating channels on the back-end (omnichannel fulfillment). This development is mirrored by an emerging stream of academic publications. The purpose of this paper is to provide a snapshot of the current state of omnichannel fulfillment research via a systematic literature review (SLR) in order to identify omnichannel fulfillment strategies and to establish an agenda for future inquiry.
Design/methodology/approach
This SLR is based on 104 papers published in peer-reviewed journals through December 2018. It employs a six-step process, from research question to the presentation of the insights.
Findings
All selected manuscripts are categorized based on demographics such as publication date, outlet and methodology. Analysis of the manuscripts suggests that the integration of fulfillment channel inventory and resources is becoming an important objective of fulfillment management. Appropriate omnichannel strategies based on retailer attributes are not well understood. Industry-specific research has been conducted, necessitating extension to generalize across retailers. These findings provide a clear opportunity for the academic community to take more of a lead in knowledge creation by proposing paths for industry pursuit of channel integration to successfully implement omnichannel fulfillment. Opportunities for future inquiry are highlighted.
Originality/value
This manuscript proposes a definition of omnichannel fulfillment strategies and identifies fulfillment links that are used interchangeably across channels as the key delimiter between omnichannel fulfillment strategies and related concepts. Six omnichannel fulfillment strategies from the extant literature are identified and conceptualized. Future research opportunities around omnichannel fulfillment, potential interdependencies between the established strategies and their impact on related SCM issues such as distribution and reverse logistics are detailed.
Details
Keywords
Sabina Seran (Potra) and Monica Izvercian
Abstract
Purpose
The purpose of this paper is to develop a new, enriched approach to the prosumer concept and a framework for efficient managerial decision-making that makes use of prosumers' innovative potential.
Design/methodology/approach
Based on relevant literature sources, this paper takes the prosumer concept one step further than usual interpretations, highlighting its innovative potential for companies that adequately address this issue. Depending on their own objectives, the domain limitations and the creativity they allow in specific activities or campaigns, companies can open up and develop prosumer co-creation strategies.
Findings
The authors develop a new prosumer understanding of value co-creation and design prosumer-oriented marketing strategies as a starting point for important decision making and complex marketing campaign creation in an ever-changing environment.
Originality/value
The research contributes to the existing knowledge on prosumerism, while also being valuable for managers, especially in the marketing domain. Marketing corporate specialists do not have guidelines on how to understand, relate to and engage these new consumers in corporate activities and therefore lose a potential creative external partner and a significant competitive advantage.
Details
Keywords
Abstract
Innovation has been considered necessary for solving every problem today. In any business meeting, conference or media outlet, innovation is discussed and desired. However, key challenges in realizing innovation are a well-understood framework and an infrastructure for its successful deployment. Current frameworks, including Open Innovation, Crowd Sourcing and many others, do not address corporate needs for credibility and a body of knowledge for developing competency. Having established a credible and teachable framework for innovation, the author realized that organization leadership is unable to drive innovation due to misunderstanding of innovation principles and change management at the leadership level. Recognizing the success of Prof. Kotter's Leading Change model, the author adapts the model to managing the innovation change [1]. This paper presents the application of Leading Change to innovation management, providing guidance to organization leadership for innovation deployment. Innovation deployment has become a necessity for organizations in the knowledge age for achieving competitive edge and sustaining profitable growth.
Johannes Braun, Jochen Hausler and Wolfgang Schäfers
Abstract
Purpose
The purpose of this paper is to use a text-based sentiment indicator to explain variations in direct property market liquidity in the USA.
Design/methodology/approach
By means of an artificial neural network, market sentiment is extracted from 66,070 US real estate market news articles from the S&P Global Market Intelligence database. For training of the network, a distant supervision approach utilizing 17,822 labeled investment ideas from the crowd-sourced investment advisory platform Seeking Alpha is applied.
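As a rough illustration of the distant supervision idea above (training labels borrowed from an already-labeled proxy source rather than from hand annotation of the target corpus), the sketch below substitutes a minimal naive Bayes classifier for the paper's artificial neural network; the function names and the toy data are hypothetical.

```python
from collections import Counter
import math

def train_nb(labeled_docs):
    """labeled_docs: (tokens, label) pairs, where the labels come from a
    distant source (e.g. crowd-labeled investment ideas standing in for
    hand-labeled news articles)."""
    word_counts = {}
    label_counts = Counter()
    for tokens, label in labeled_docs:
        label_counts[label] += 1
        word_counts.setdefault(label, Counter()).update(tokens)
    return word_counts, label_counts

def classify(model, tokens):
    """Assign the label with the highest Laplace-smoothed log-probability."""
    word_counts, label_counts = model
    total = sum(label_counts.values())
    vocab = set().union(*word_counts.values())
    def score(label):
        wc = word_counts[label]
        n = sum(wc.values())
        s = math.log(label_counts[label] / total)
        for t in tokens:
            s += math.log((wc[t] + 1) / (n + len(vocab)))
        return s
    return max(label_counts, key=score)
```

The distant-supervision step is entirely in where the training pairs come from; once trained, the classifier is applied unchanged to the unlabeled target corpus.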
Findings
According to the results of autoregressive distributed lag models including contemporaneous and lagged sentiment as independent variables, the derived textual sentiment indicator is not only significantly linked to the depth and resilience dimensions of market liquidity (proxied by Amihud’s (2002) price impact measure), but also to the breadth dimension (proxied by transaction volume).
Practical implications
These results suggest an intertemporal effect of sentiment on liquidity for the direct property market. Market participants should account for this effect in terms of their investment decisions, and also when assessing and pricing liquidity risk.
Originality/value
This paper not only extends the literature on text-based sentiment indicators in real estate, but is also the first to apply artificial intelligence for sentiment extraction from news articles in a market liquidity setting.
Details
Keywords
Arash Joorabchi, Michael English and Abdulhussain E. Mahdi
Abstract
Purpose
The use of social media and in particular community Question Answering (Q & A) websites by learners has increased significantly in recent years. The vast amounts of data posted on these sites provide an opportunity to investigate the topics under discussion and those receiving most attention. The purpose of this paper is to automatically analyse the content of a popular computer programming Q & A website, StackOverflow (SO), determine the exact topics of posted Q & As, and narrow down their categories to help determine subject difficulties of learners. By doing so, the authors have been able to rank identified topics and categories according to their frequencies, and therefore, mark the most asked about subjects and, hence, identify the most difficult and challenging topics commonly faced by learners of computer programming and software development.
Design/methodology/approach
In this work the authors have adopted a heuristic research approach combined with a text mining approach to investigate the topics and categories of Q & A posts on the SO website. Almost 186,000 Q & A posts were analysed and their categories refined using Wikipedia as a crowd-sourced classification system. After identifying and counting the occurrence frequency of all the topics and categories, their semantic relationships were established. These data were then presented as a rich graph which could be visualized using graph visualization software such as Gephi.
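The counting and graph-building steps described above can be sketched as follows; how the authors actually weight semantic relationships is not stated in the abstract, so this illustration simply uses topic frequencies as node weights and co-occurrence counts within a post as edge weights (both assumptions).

```python
from collections import Counter
from itertools import combinations

def topic_stats(posts):
    """posts: one list of topic labels (extracted from Wikipedia links)
    per Q&A post. Returns per-topic frequencies (graph node weights)
    and per-pair co-occurrence counts (graph edge weights)."""
    freq = Counter()
    edges = Counter()
    for topics in posts:
        uniq = sorted(set(topics))      # count each topic once per post
        freq.update(uniq)
        edges.update(combinations(uniq, 2))
    return freq, edges
```

The resulting node and edge weights could then be exported to a format such as GEXF for visualization in Gephi.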
Findings
The reported results and corresponding discussion give an indication that the insight gained from the process can be further refined and potentially used by instructors, teachers and educators to pay more attention to and focus on the commonly occurring topics/subjects when designing their course material, delivery and teaching methods.
Research limitations/implications
The proposed approach limits the scope of the analysis to a subset of Q & As which contain one or more links to Wikipedia. Therefore, developing more sophisticated text mining methods capable of analysing a larger portion of available data would improve the accuracy and generalizability of the results.
Originality/value
The application of text mining and data analytics technologies in education has created a new interdisciplinary field of research between the education and information sciences, called Educational Data Mining (EDM). The work presented in this paper falls under this field of research; and it is an early attempt at investigating the practical applications of text mining technologies in the area of computer science (CS) education.
Details
Keywords
David Nickell, Minna Rollins and Karl Hellman
Abstract
Purpose
This study aims to investigate the marketing actions that companies performed during the Great Recession, and the resulting effect on firms' performance. The purpose is to discover what marketing actions companies performed, what the impact on the firm was, and why the actions taken led them to excel, simply survive or cease to exist.
Design/methodology/approach
The study uses a discovery‐oriented approach, consisting of a pilot study, a survey, field interviews and a focus group interview.
Findings
The findings suggest that successful companies invest in current customer relationships by strengthening their key account teams and by working with their clients who are suffering financial difficulties. Successful firms also began implementing new marketing techniques such as social media and crowd‐sourcing.
Originality/value
This study contributes to previous research in marketing that focuses on marketing activities during recessions.
Details