Search results

1 – 10 of 573
Article
Publication date: 23 August 2013

Chao‐Ching Wei, Iuan‐Yuan Lu, Tsuang Kuo and Sheng‐Chan Chiu


Abstract

Purpose

This study examined the differences between brand-name and bandit (copycat) technology companies in terms of operating models. Furthermore, it investigated the origin and developmental model of the bandit business.

Design/methodology/approach

This study compared the open innovation approach with a closed one from the perspective of competitive strategies. It used content analysis and a qualitative system dynamics (QSD) approach to explore the competitive strategies and inhibitory factors of the bandit business model, and subsequently presented a causal-loop model of business operations development at different stages.

Findings

The bandit business model can be divided into three stages according to business operations development, namely "growth, inhibition and re-growth". When a bandit business expands to a certain level, it often faces obstacles stemming from disadvantageous conditions that limit the enterprise's development. A bandit business cannot remove such obstacles unless it builds impregnable core values.

Originality/value

This study contributes an early attempt to use the qualitative system dynamics (QSD) approach to explore the competitive strategies and inhibitory factors of an emerging business model, the "bandit" model. Because bandit products generate high utility for consumers and easily penetrate a demand-driven market, Chinese bandit economic behavior (including copycat business practices), omnipresent in the market, has triggered controversy across society. The findings on the bandit business model uncover the reasons that encourage practitioners to found small start-up firms, and point to an unexplored path for future research on business and innovation models.

Details

Chinese Management Studies, vol. 7 no. 3
Type: Research Article
ISSN: 1750-614X


Article
Publication date: 15 January 2021

Chiara Giachino, Luigi Bollani, Alessandro Bonadonna and Marco Bertetti


Abstract

Purpose

The aim of the paper is to test and demonstrate the potential benefits of applying reinforcement learning, instead of traditional methods, to optimize the content of a company's mobile application so as to best help travellers find their ideal flights. To this end, two approaches were considered and compared via simulation: standard randomized experiments (A/B testing) and multi-armed bandits.
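The comparison of the two approaches can be sketched in a few lines of Python (the paper itself uses R). The conversion rates, traffic volume and exploration rate below are illustrative assumptions, not Skyscanner's figures:

```python
import random

def simulate(n_users=20_000, rates=(0.05, 0.08), epsilon=0.1, seed=3):
    """Contrast a fixed 50/50 A/B split with an epsilon-greedy bandit
    on two hypothetical app-content variants."""
    rng = random.Random(seed)

    # A/B test: traffic is split evenly for the whole experiment.
    ab = sum(rng.random() < rates[rng.randrange(2)] for _ in range(n_users))

    # Epsilon-greedy bandit: explore a random variant with probability
    # epsilon, otherwise show the variant with the best observed rate.
    shows, hits, bandit = [0, 0], [0, 0], 0
    for _ in range(n_users):
        if rng.random() < epsilon or 0 in shows:
            arm = rng.randrange(2)
        else:
            arm = 0 if hits[0] / shows[0] >= hits[1] / shows[1] else 1
        shows[arm] += 1
        if rng.random() < rates[arm]:
            hits[arm] += 1
            bandit += 1
    return ab, bandit

ab, bandit = simulate()
print(f"A/B conversions: {ab}, bandit conversions: {bandit}")
```

The fixed split keeps sending half the traffic to the weaker variant for the entire experiment, whereas the bandit shifts traffic toward the better-converting variant as evidence accumulates.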

Design/methodology/approach

The paper illustrates, using R software, a simulation of the two approaches as applied by Skyscanner to optimize the content of its mobile application and, consequently, increase flight conversions.

Findings

The first result concerns the comparison between the two approaches – A/B testing and multi-armed bandits – to identify which achieves better results for the company. The second is the experience and suggestions gained from applying the two approaches, which are useful for other industries and companies.

Research limitations/implications

The case study demonstrated, via simulation, the potential benefits of applying reinforcement learning in a company. The multi-armed bandit was subsequently implemented in the company, but the period covered by the available data was limited and, owing to its strategic relevance, the company cannot disclose all the findings.

Practical implications

The right algorithm varies with the situation and industry, but it can greatly improve the company's ability to surface content that is more relevant to users and help improve the experience for travellers. The study shows how to manage complexity and data to achieve good results.

Originality/value

The paper describes the approach used by a leading European company in the travel sector to adapt reinforcement learning to its strategic goals. It presents a real case study and a simulation of A/B testing and multi-armed bandits at Skyscanner; moreover, it highlights practical suggestions useful to other companies.

Details

Industrial Management & Data Systems, vol. 121 no. 6
Type: Research Article
ISSN: 0263-5577


Article
Publication date: 1 April 2005

Dmitriy V. Chulkov and Mayur S. Desai



Abstract

Purpose

This paper seeks to apply results from the study of bandit processes to cases of information technology (IT) project failures.

Design/methodology/approach

This paper examines three published case studies, and discusses whether managerial actions are in accordance with the predictions of bandit process studies.

Findings

Bandits are a class of decision-making problems that involve choosing one action from a set. In terms of project management, the firm selects from several alternative IT projects, each with its own distribution of risks and rewards. The firm investigates technologies one by one, and keeps only the best-performing technology. The bandit perspective implies that managers who choose a risky IT project with a high potential reward before safer ones are behaving optimally: it is in the firm's interest to resolve the uncertainty about the innovative project first, since in case of failure the firm can still choose the safer technology later. Adopting a high proportion of risky projects, however, leads to a high number of project failures.
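The risky-first logic can be made concrete with a small worked example; the success probabilities and payoffs below are hypothetical numbers chosen for illustration:

```python
def expected_value(first, second):
    """Expected payoff when the firm tries `first`, keeps it on success,
    and falls back to `second` on failure. Each project is a pair
    (success_probability, payoff)."""
    p1, r1 = first
    p2, r2 = second
    return p1 * r1 + (1 - p1) * p2 * r2

risky = (0.3, 10.0)  # innovative project: low success odds, big payoff
safe = (0.9, 2.0)    # established technology: near-certain, modest payoff

risky_first = expected_value(risky, safe)  # 0.3*10 + 0.7*0.9*2 = 4.26
safe_first = expected_value(safe, risky)   # 0.9*2 + 0.1*0.3*10 = 2.10
print(risky_first, safe_first)
```

Trying the risky project first maximizes expected value even though 70% of such attempts fail at the first step: the failures are the price of resolving the uncertainty while the safe fallback remains available, which is exactly why many observed failures can be optimal choices.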

Practical implications

The bandit approach supports studies that advocate evaluating decision makers on the optimality of their decision process, rather than specific outcomes.

Originality/value

This paper demonstrates how insights from the bandit problem are relevant to studies of IT project failures. Whilst choosing high‐risk, high‐reward projects may be in a firm's interest, some observed project failures are optimal choices that do not work out.

Details

Information Management & Computer Security, vol. 13 no. 2
Type: Research Article
ISSN: 0968-5227


Book part
Publication date: 3 August 2015

Alexander W. Salter and Abigail R. Hall


Abstract

This paper applies the logic of economic calculation to the actions of autocrats. We model autocrats as stationary bandits who use profit-and-loss calculations to select institutions that maximize their extraction rents. We find that, in many cases, autocrats achieve rent maximization by creating and protecting private property rights. This in turn yields high levels of production, with expropriation kept low enough to incentivize continued high production. Importantly, while this leads to increasing quantities of available goods and services over time, it does not lead to true development, that is, the coordination of consumer demand with producer supply by directing resources to their highest-valued uses. We apply our model to the authoritarian governments of Singapore and the United Arab Emirates, showing how they function as quasi-corporate governance organizations in the business of maximizing appropriable rents.

Details

New Thinking in Austrian Political Economy
Type: Book
ISBN: 978-1-78560-137-8


Expert briefing
Publication date: 21 October 2021

This is part of an escalating crisis of insecurity in the region which has seen hundreds of kidnappings -- particularly of schoolchildren -- by armed bandits looking for ransom…

Details

DOI: 10.1108/OXAN-DB264897

ISSN: 2633-304X

Expert briefing
Publication date: 10 June 2022

The attacks were the bloodiest and highest-profile acts of violence in Plateau State in recent months and reflect a trend of growing rural violence in the north-central states…

Details

DOI: 10.1108/OXAN-DB270753

ISSN: 2633-304X

Open Access
Article
Publication date: 5 August 2021

Rui Qiu and Wen Ji


Abstract

Purpose

Many recommender systems are unable to provide accurate recommendations to users with limited interaction history, a difficulty known as the cold-start problem. It can be mitigated by trivial approaches that recommend random items or the most popular ones to new users, but these methods perform poorly in many cases. This paper aims to explore how to make accurate recommendations for new users in cold-start scenarios.

Design/methodology/approach

In this paper, the authors propose the embedded-bandit method, inspired by the Word2Vec technique and contextual bandit algorithms. They describe user contextual information with item embedding features constructed by Word2Vec. In addition, based on the intelligence measurement model in Crowd Science, the authors propose a new evaluation method to measure the utility of recommendations.
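As a rough illustration of the embedded-bandit idea, a cold-start user's few clicks can be averaged into a dense context vector and combined with an exploration bonus. The toy three-dimensional embeddings, the scoring rule and the constant below are invented for this sketch (a real system would learn the embeddings with Word2Vec) and are not the authors' exact algorithm:

```python
import math

# Toy stand-ins for Word2Vec item embeddings; in the paper these would
# be learned from user interaction sequences.
ITEM_EMBEDDINGS = {
    "news": [0.9, 0.1, 0.0],
    "sports": [0.1, 0.9, 0.0],
    "music": [0.0, 0.2, 0.9],
}

def user_context(clicked_items, dim=3):
    """Average the embeddings of clicked items: a new user's handful of
    clicks becomes a dense context vector."""
    if not clicked_items:
        return [0.0] * dim
    vecs = [ITEM_EMBEDDINGS[i] for i in clicked_items]
    return [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]

def recommend(context, pull_counts, total_pulls, c=0.5):
    """Score each arm by context affinity (dot product) plus a UCB-style
    exploration bonus, and recommend the highest-scoring item."""
    def score(item):
        affinity = sum(a * b for a, b in zip(context, ITEM_EMBEDDINGS[item]))
        bonus = c * math.sqrt(math.log(total_pulls + 1) / (pull_counts[item] + 1))
        return affinity + bonus
    return max(ITEM_EMBEDDINGS, key=score)

ctx = user_context(["news", "sports"])
counts = {"news": 5, "sports": 5, "music": 0}
print(recommend(ctx, counts, total_pulls=10))  # the unexplored arm wins here
```

Even though the user's context points toward news and sports, the never-shown "music" arm wins this round on its exploration bonus; as counts accumulate, the affinity term comes to dominate.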

Findings

The authors introduce the Word2Vec technique for constructing user contextual features, which improves the accuracy of recommendations compared to traditional multi-armed bandit approaches. Measured with this study's intelligence measurement model, the proposed method also achieves superior utility.

Practical implications

Improving the accuracy of recommendations during the cold-start phase can greatly raise user stickiness and increase user favorability, which in turn contributes to the commercialization of the app.

Originality/value

The algorithm proposed in this paper shows that user contextual features can be represented by the embedding vectors of clicked items.

Details

International Journal of Crowd Science, vol. 5 no. 3
Type: Research Article
ISSN: 2398-7294


Article
Publication date: 11 September 2023

Usman Adekunle Ojedokun, Olufikayo K. Oyelade, Adebimpe A. Adenugba and Olajide O. Akanji


Abstract

Purpose

Banditry is a major social problem in Nigeria that has over time defied a series of intervention measures introduced by the federal and state governments. Therefore, this study aims to investigate the counter-banditry strategies of affected communities in Oyo State, Nigeria.

Design/methodology/approach

The research was exploratory and cross-sectional in design. Situational crime prevention theory was used as a conceptual guide. Data were elicited from community leaders, community members and local security guards using in-depth interviews, key-informant interviews and focus group discussions.

Findings

The results showed that communities affected by the banditry problem were adopting different internal and external interventions to combat the criminal act. Although the counter-banditry strategies of the affected communities have brought about a reduction in its occurrence, the problem is yet to be totally eliminated, as people still get victimised.

Originality/value

This research expands the frontiers of knowledge by focusing on the counter-banditry strategies of communities affected by banditry, and suggests relevant practical steps that can be taken to further strengthen the existing security architecture in such locations.

Details

Safer Communities, vol. 23 no. 1
Type: Research Article
ISSN: 1757-8043


Executive summary
Publication date: 7 January 2022

NIGERIA: Airstrikes against bandits may backfire

Details

DOI: 10.1108/OXAN-ES266542

ISSN: 2633-304X

Article
Publication date: 8 June 2010

Ole‐Christoffer Granmo


Abstract

Purpose

The two-armed Bernoulli bandit (TABB) problem is a classical optimization problem in which an agent sequentially pulls one of two arms attached to a gambling machine, with each pull resulting either in a reward or a penalty. The reward probabilities of the arms are unknown, so one must balance exploiting existing knowledge about the arms against obtaining new information. The purpose of this paper is to report research into a completely new family of solution schemes for the TABB problem: the Bayesian learning automaton (BLA) family.

Design/methodology/approach

Although computationally intractable in many cases, Bayesian methods provide a standard for optimal decision making. The BLA avoids computational intractability by not explicitly performing the Bayesian computations. Rather, it is based merely on counting rewards and penalties, combined with random sampling from a pair of twin Beta distributions. This is intuitively appealing, since the Bayesian conjugate prior for a binomial parameter is the Beta distribution.
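The counting-plus-sampling scheme, essentially what is now widely called Thompson sampling, fits in a few lines of Python; the arm reward probabilities below are illustrative:

```python
import random

def bla_pull(rewards, penalties, rng):
    """One BLA decision: sample once from each arm's Beta posterior
    Beta(rewards+1, penalties+1) and pull the arm with the larger draw.
    Only counting and random sampling, no explicit Bayesian integral."""
    draws = [rng.betavariate(rewards[i] + 1, penalties[i] + 1)
             for i in range(len(rewards))]
    return draws.index(max(draws))

def run_tabb(arm_probs=(0.4, 0.6), n_pulls=5_000, seed=1):
    """Simulate the two-armed Bernoulli bandit and count pulls per arm."""
    rng = random.Random(seed)
    rewards, penalties, pulls = [0, 0], [0, 0], [0, 0]
    for _ in range(n_pulls):
        arm = bla_pull(rewards, penalties, rng)
        pulls[arm] += 1
        if rng.random() < arm_probs[arm]:
            rewards[arm] += 1
        else:
            penalties[arm] += 1
    return pulls

print(run_tabb())  # pulls concentrate on arm 1, the better arm
```

Because a Beta(r+1, p+1) draw for a rarely pulled arm is highly variable, exploration happens automatically, with no external learning-speed parameter to tune, which is the property the paper emphasizes.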

Findings

The BLA is proven to be instantaneously self-correcting, and it converges to pulling only the optimal arm with probability as close to unity as desired. Extensive experiments demonstrate that the BLA does not rely on external learning speed/accuracy control. It also outperforms established non-Bayesian top performers for the TABB problem. Finally, the BLA provides superior performance in a distributed application, namely the Goore game (GG).

Originality/value

The value of this paper is threefold. First, the reported BLA takes advantage of the Bayesian perspective for tackling TABBs, yet avoids the computational complexity inherent in Bayesian approaches. Second, the improved performance offered by the BLA opens up increased accuracy in a number of TABB-related applications, such as the GG. Third, the reported results form the basis for a new avenue of research, even for cases where the reward/penalty distribution is not Bernoulli. Indeed, the paper advocates a Bayesian methodology used in conjunction with the corresponding appropriate conjugate prior.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 3 no. 2
Type: Research Article
ISSN: 1756-378X

