Search results

1 – 10 of 463
Open Access
Article
Publication date: 21 November 2018

Varsha Jain, Meetu Chawla, B.E. Ganesh and Christopher Pich

This study aims to examine brand personality and its application to political branding. This study focuses on the brand personality of a political leader from the BJP Party brand…

Abstract

Purpose

This study aims to examine brand personality and its application to political branding. This study focuses on the brand personality of a political leader from the BJP Party brand (Bharatiya Janata Party). The development of a strong political brand personality is crucial for success at the polls. Little research has been dedicated to this phenomenon, particularly beyond Western political and post-election contexts.

Design/methodology/approach

The scope and development of the study required a qualitative approach. The theoretical frameworks of the study acted as its deductive base, while the insights of the respondents formed its inductive base. Semi-structured interviews were conducted with external stakeholders [voters]. In addition, semi-structured interviews were used to capture the branding activities of internal stakeholders [BJP].

Findings

The brand personality dimensions of sincerity, agreeableness, competence, energy, openness, conscientiousness and emotional stability were clearly associated with a political leader. Negative qualities such as dictatorial attitudes and arrogance affected the political leader's brand personality. Religious partisanship was another strong negative trait affecting the brand personality of the political leader.

Originality/value

The study offers an actionable framework for political brand personality in the post-election context. It identifies negative qualities to be avoided in the development of the leader's political brand personality. It also offers insights into the leader's political brand personality as perceived by young, digitally savvy voters.

Purpose

This study examines the application of brand personality to the field of political marketing and political personal branding. Specifically, it focuses on the brand personality of a political leader from the Bharatiya Janata Party (BJP). The development of a strong political personal brand is crucial for electoral success. To date, few studies have focused on this phenomenon beyond the Western political context.

Design/methodology/approach

The scope and development of the study required the adoption of a qualitative approach. The theoretical framework served as the deductive base, while the interviews conducted served as the inductive base. These interviews were semi-structured and addressed to the BJP's external stakeholders (voters). In addition, further semi-structured interviews were conducted to capture the branding activities carried out by internal stakeholders (party candidates, politicians, workers and managers).

Findings

The brand personality dimensions of sincerity, competence, energy, emotional stability, openness and conscientiousness are clearly associated with a political leader. By contrast, negative traits such as arrogant and dictatorial attitudes damage the leader's brand personality, above all religious partisanship.

Originality/value

The study provides an actionable framework for political personal branding in a post-election context. It indicates the negative traits and qualities that should be avoided when developing a political leader's personal brand. It also offers evidence on the brand personality that a leader must develop for the most dynamic and digital voters.

Open Access
Article
Publication date: 17 July 2020

Sheryl Brahnam, Loris Nanni, Shannon McMurtrey, Alessandra Lumini, Rick Brattin, Melinda Slack and Tonya Barrier

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex…

Abstract

Diagnosing pain in neonates is difficult but critical. Although approximately thirty manual pain instruments have been developed for neonatal pain diagnosis, most are complex, multifactorial, and geared toward research. The goals of this work are twofold: 1) to develop a new video dataset for automatic neonatal pain detection called iCOPEvid (infant Classification Of Pain Expressions videos), and 2) to present a classification system that sets a challenging comparison performance on this dataset. The iCOPEvid dataset contains 234 videos of 49 neonates experiencing a set of noxious stimuli, a period of rest, and an acute pain stimulus. From these videos, 20-second segments are extracted and grouped into two classes: pain (49) and nopain (185), with the nopain video segments handpicked to produce a highly challenging dataset. An ensemble of twelve global and local descriptors with a Bag-of-Features approach is utilized to improve the performance of some new descriptors based on Gaussian of Local Descriptors (GOLD). The basic classifier used in the ensembles is the Support Vector Machine, and decisions are combined by the sum rule. These results are compared with standard methods, some deep learning approaches, and 185 human assessments. Our best machine learning methods are shown to outperform the human judges.
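
As an illustration of the score-level fusion described above, the sketch below trains one SVM per descriptor representation and combines their posterior scores by the sum rule. The feature matrices, shapes and random data are stand-ins, not the iCOPEvid descriptors themselves.

```python
# Minimal sketch of sum-rule fusion of per-descriptor SVMs; features are illustrative.
import numpy as np
from sklearn.svm import SVC

def train_descriptor_svms(descriptor_train_sets, labels):
    """Fit one SVM per descriptor representation."""
    models = []
    for X in descriptor_train_sets:
        clf = SVC(kernel="rbf", probability=True)
        clf.fit(X, labels)
        models.append(clf)
    return models

def sum_rule_predict(models, descriptor_test_sets):
    """Combine per-descriptor posterior scores by summing them (sum rule)."""
    score_sum = None
    for clf, X in zip(models, descriptor_test_sets):
        scores = clf.predict_proba(X)          # shape: (n_samples, 2)
        score_sum = scores if score_sum is None else score_sum + scores
    return score_sum.argmax(axis=1)            # 0 = no-pain, 1 = pain

# Example with random stand-in features for two descriptors:
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 60)
train_sets = [rng.normal(size=(60, 128)), rng.normal(size=(60, 64))]
test_sets = [rng.normal(size=(10, 128)), rng.normal(size=(10, 64))]
models = train_descriptor_svms(train_sets, y)
print(sum_rule_predict(models, test_sets))
```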

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 5 December 2023

Manuel J. Sánchez-Franco and Sierra Rey-Tienda

This research proposes to organise and distil the massive amount of online guest review data, making it easier to understand. Using data mining, machine learning techniques and visual approaches…

Abstract

Purpose

This research proposes to organise and distil the massive amount of online guest review data, making it easier to understand. Using data mining, machine learning techniques and visual approaches, researchers and managers can extract valuable insights (on guests' preferences) and convert them into strategic thinking based on exploration and predictive analysis. Consequently, this research aims to assist hotel managers in making informed decisions, thus improving the overall guest experience and increasing competitiveness.

Design/methodology/approach

This research employs natural language processing techniques, data visualisation proposals and machine learning methodologies to analyse unstructured guest service experience content. In particular, this research (1) applies data mining to evaluate the role and significance of critical terms and semantic structures in hotel assessments; (2) identifies salient tokens to depict guests' narratives based on term frequency and the information quantity they convey; and (3) tackles the challenge of managing extensive document repositories through automated identification of latent topics in reviews by using machine learning methods for semantic grouping and pattern visualisation.
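
A minimal sketch of the kind of pipeline described here, not the authors' exact implementation: TF-IDF weights to surface salient tokens and latent Dirichlet allocation for semantic grouping of reviews, using scikit-learn. The sample reviews and parameter choices are illustrative assumptions.

```python
# Illustrative review-mining sketch: salient terms via TF-IDF, latent topics via LDA.
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "Great location and friendly staff, breakfast was excellent",
    "Room was noisy and the wifi kept dropping",
    "Perfect for a business trip, fast check-in and quiet rooms",
]

# 1) Salient tokens by overall TF-IDF weight
tfidf = TfidfVectorizer(stop_words="english")
weights = tfidf.fit_transform(reviews)
terms = tfidf.get_feature_names_out()
top = weights.toarray().sum(axis=0).argsort()[::-1][:5]
print("salient terms:", [terms[i] for i in top])

# 2) Latent topics for semantic grouping of reviews
counts = CountVectorizer(stop_words="english").fit(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topics = lda.fit_transform(counts.transform(reviews))
print("review-topic mixture:\n", doc_topics.round(2))
```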

Findings

The findings (1) identify the critical features and topics that guests highlight during their hotel stays, (2) visually explore the relationships between these features and the differences among diverse types of travellers through online hotel reviews and (3) determine predictive power. These findings are crucial for the hospitality domain, as they provide real-time insights into guests' perceptions and business performance and are essential for making informed decisions and staying competitive.

Originality/value

This research seeks to minimise the cognitive processing costs of the enormous amount of content published by users through a better organisation of hotel service reviews and their visualisation. Likewise, it proposes a methodology and method available to tourism organisations for obtaining truly usable knowledge to design the hotel offer and its value propositions.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Open Access
Article
Publication date: 22 September 2020

Hung T. Nguyen

While there exist many surveys on the use of stochastic frontier analysis (SFA), many important issues and techniques in SFA were not well elaborated in the previous surveys, namely…

Abstract

Purpose

While there exist many surveys on the use of stochastic frontier analysis (SFA), many important issues and techniques in SFA were not well elaborated in previous surveys, namely, regular models, copula modeling, nonparametric estimation by Grenander's method of sieves, empirical likelihood and causality issues in SFA using regression discontinuity design (RDD) (sharp and fuzzy RDD). The purpose of this paper is to encourage more research in these directions.
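
For readers unfamiliar with the baseline model the survey builds on, the sketch below estimates the canonical normal/half-normal stochastic frontier y = Xb + v - u by maximum likelihood with SciPy. It illustrates plain SFA only; the simulated data and parameterization are assumptions, and none of the survey's extensions (copulas, sieves, RDD) are shown.

```python
# Minimal normal/half-normal stochastic frontier: y = Xb + v - u,
# v ~ N(0, s_v^2), u ~ |N(0, s_u^2)|, estimated by maximum likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
v = rng.normal(scale=0.3, size=n)
u = np.abs(rng.normal(scale=0.5, size=n))          # one-sided inefficiency
y = X @ np.array([1.0, 0.8]) + v - u

def neg_loglik(theta):
    b0, b1, log_sv, log_su = theta
    sv, su = np.exp(log_sv), np.exp(log_su)
    sigma = np.hypot(sv, su)
    lam = su / sv
    eps = y - X @ np.array([b0, b1])
    ll = (np.log(2.0 / sigma)
          + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(neg_loglik, x0=np.zeros(4), method="BFGS")
print("beta:", res.x[:2], "sigma_v, sigma_u:", np.exp(res.x[2:]))
```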

Design/methodology/approach

A literature survey.

Findings

While there are many useful applications of SFA to econometrics, there are also many important open problems.

Originality/value

This is the first survey of SFA in econometrics that emphasizes important issues and techniques such as copulas.

Details

Asian Journal of Economics and Banking, vol. 4 no. 3
Type: Research Article
ISSN: 2615-9821

Open Access
Article
Publication date: 21 May 2024

Vinicius Muraro and Sergio Salles-Filho

Currently, foresight studies have been adapted to incorporate new techniques based on big data and machine learning (BDML), which has led to new approaches and conceptual changes…

Abstract

Purpose

Currently, foresight studies have been adapted to incorporate new techniques based on big data and machine learning (BDML), which has led to new approaches and conceptual changes regarding uncertainty and how to prospect the future. The purpose of this study is to explore the effects of BDML on foresight practice and on conceptual changes in uncertainty.

Design/methodology/approach

The methodology is twofold: a bibliometric analysis of BDML-supported foresight studies collected from Scopus up to 2021 and a survey analysis with 479 foresight experts to gather opinions and expectations from academics and practitioners related to BDML in foresight studies. These approaches provide a comprehensive understanding of the current landscape and future paths of BDML-supported foresight research, using quantitative analysis of literature and qualitative input from experts in the field, and discuss potential theoretical changes related to uncertainty.

Findings

The number of prospective studies that use BDML techniques is still incipient but increasing, and these techniques are often integrated into traditional foresight methodologies. Although BDML is expected to boost data analysis, there are concerns regarding possibly biased results. Data literacy will be required from the foresight team to leverage the potential and mitigate the risks. The article also discusses the extent to which BDML is expected to affect uncertainty, both theoretically and in foresight practice.

Originality/value

This study contributes to the conceptual debate on decision-making under uncertainty and raises public understanding of the opportunities and challenges of using BDML for foresight and decision-making.

Details

foresight, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-6689

Open Access
Article
Publication date: 29 July 2020

Mahmood Al-khassaweneh and Omar AlShorman

In the big data era, image compression is of significant importance. Importantly, compression of large-sized images is required for everyday tasks, including…

Abstract

In the big data era, image compression is of significant importance. Importantly, compression of large-sized images is required for everyday tasks, including electronic data communications and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and the Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Those blocks with the highest energy are replaced by a single value that represents the average value of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image. Additionally, it enhances the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor. The goal of using RLE is to enhance the compression factor without adding any distortion to the resultant decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
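
A rough sketch of the two stages under simplifying assumptions: the energy captured by the flat ("average") component of each 3 × 3 block decides whether the block is replaced by its mean, and the result is then run-length encoded. The threshold and the plain RLE shown here are illustrative stand-ins, not the paper's exact Frei-Chen projection or Modified RLE.

```python
# Illustrative block-averaging + run-length encoding sketch (not the paper's algorithm).
import numpy as np

def block_average(img, threshold=0.9):
    out = img.astype(float).copy()
    h, w = img.shape
    for i in range(0, h - h % 3, 3):
        for j in range(0, w - w % 3, 3):
            block = out[i:i+3, j:j+3]
            mean = block.mean()
            # share of the block's energy captured by its flat (average) component
            energy = (mean ** 2 * 9) / max((block ** 2).sum(), 1e-12)
            if energy >= threshold:
                out[i:i+3, j:j+3] = mean
    return out

def rle_encode(values):
    """Simple run-length encoding: list of (value, run_length) pairs."""
    runs, prev, count = [], values[0], 1
    for v in values[1:]:
        if v == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = v, 1
    runs.append((prev, count))
    return runs

img = np.tile(np.arange(6), (6, 1)) * 10       # toy 6x6 "image"
flat = block_average(img).astype(int).ravel().tolist()
print(rle_encode(flat))
```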

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 4 August 2020

Alaa Tharwat

Independent component analysis (ICA) is a widely used blind source separation technique. ICA has been applied in many fields. ICA is usually utilized as a black box, without…

Abstract

Independent component analysis (ICA) is a widely used blind source separation technique. ICA has been applied in many fields. ICA is usually utilized as a black box, without understanding its internal details. Therefore, in this paper, the basics of ICA are provided to show how it works, so that the paper can serve as a comprehensive source for researchers who are interested in this field. This paper starts by introducing the definition and underlying principles of ICA. Additionally, different numerical examples are demonstrated in a step-by-step approach to explain the preprocessing steps of ICA and the mixing and unmixing processes. Moreover, different ICA algorithms, challenges, and applications are presented.
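
A minimal sketch of the mixing and unmixing processes mentioned above, using scikit-learn's FastICA on two synthetic sources; the signals and mixing matrix are made up for illustration and do not correspond to any example in the paper.

```python
# Mix two known sources with an "unknown" matrix, then recover them with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                         # source 1: sinusoid
s2 = np.sign(np.sin(3 * t))                # source 2: square wave
S = np.column_stack([s1, s2])
S += 0.05 * rng.normal(size=S.shape)       # small observation noise

A = np.array([[1.0, 0.5],                  # mixing matrix (treated as unknown)
              [0.4, 1.0]])
X = S @ A.T                                # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)               # recovered sources (up to scale/order)
print("estimated mixing matrix:\n", ica.mixing_)
```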

Details

Applied Computing and Informatics, vol. 17 no. 2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 2 December 2016

Juan Aparicio

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The…

Abstract

Purpose

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.

Design/methodology/approach

DEA is a methodology based on mathematical programming for the assessment of relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function and presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and outputs produced; it may be easily used with multiple inputs and outputs; a single score of efficiency for each assessed organization is obtained; this technique ranks organizations based on relative efficiency; and finally, it yields benchmarking information. DEA models provide both benchmarking information and efficiency scores for each of the evaluated units when applied to a dataset of observations and variables (inputs and outputs). Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA).

Technical inefficiency is typically measured in DEA as the distance between the observed unit and a “benchmarking” target on the estimated piece-wise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets that are determined by the “furthest” efficient projection from the evaluated DMU. The projected point on the efficient frontier obtained in this way may not be a representative projection for the judged unit, and consequently, some authors in the literature have suggested determining closest targets instead. The general argument behind this idea is that closer targets suggest directions of enhancement for the inputs and outputs of the inefficient units that may lead them to efficiency with less effort. Indeed, authors like Aparicio et al. (2007) have shown, in an application on airlines, that it is possible to find substantial differences between the targets provided by applying the criterion used by the traditional DEA models and those obtained when the criterion of closeness is utilized for determining projection points on the efficient frontier.

The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models associated with minimizing some type of distance (e.g. Euclidean). In this particular respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where technical inefficiency with respect to the “weakly” efficient frontier is formally defined through mathematical distances.
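
For context, the sketch below solves the standard input-oriented CCR envelopment model (one of the traditional formulations discussed above) as a linear programme with SciPy. The data are invented, and the least-distance variants the survey focuses on are not implemented here.

```python
# Input-oriented CCR envelopment model: min theta s.t. X @ lam <= theta * x_o, Y @ lam >= y_o.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 3.0, 6.0, 5.0],        # inputs  (m x n): m inputs, n DMUs
              [4.0, 2.0, 8.0, 6.0]])
Y = np.array([[1.0, 1.0, 2.0, 1.5]])       # outputs (s x n)

def ccr_efficiency(o):
    m, n = X.shape
    s = Y.shape[0]
    # decision variables: z = [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0
    # sum_j lambda_j * x_ij - theta * x_io <= 0   (input constraints)
    A_in = np.hstack([-X[:, [o]], X])
    # -sum_j lambda_j * y_rj <= -y_ro             (output constraints)
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.fun                          # efficiency score theta*

print([round(ccr_efficiency(o), 3) for o in range(X.shape[1])])
```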

Findings

All the interesting features of the determination of closest targets from a benchmarking point of view have, in recent times, generated increasing interest among researchers in the calculation of the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). So, in this paper, we present a general classification of published contributions, mainly from a methodological perspective, and additionally, we indicate avenues for further research on this topic. The approaches that we cite in this paper differ in the way that the idea of similarity is made operative. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures. In turn, the aim is to globally minimize DEA model slacks to determine the closest efficient targets. However, as we will show later in the text, minimizing a mathematical distance in DEA is not an easy task, as it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity justifies the existence of different alternatives for solving these types of models.

Originality/value

As far as we are aware, this is the first survey on this topic.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599

Open Access
Article
Publication date: 5 April 2022

Yixiang Jiang

At airport security checkpoints, baggage screening aims to prevent the transportation of prohibited and potentially dangerous items. Observing the projection images generated by…

Abstract

Purpose

At airport security checkpoints, baggage screening aims to prevent the transportation of prohibited and potentially dangerous items. Observing the projection images generated by X-ray scanners is a critical method. However, when multiple objects are stacked on top of each other, distinguishing them from a single two-dimensional picture is difficult, which creates demand for more precise imaging technology. Reconstructing 3D computed tomography (CT) volumes from 2D X-ray images is a reliable solution.

Design/methodology/approach

To more accurately distinguish the specific contour shapes of stacked items, a multi-information fusion network (MFCT-GAN) based on the generative adversarial network (GAN) and U-like network (U-NET) is proposed to reconstruct 3D CT volumes from two biplanar orthogonal X-ray projections. The authors use three modules to enhance the qualitative and quantitative reconstruction results compared with the original network. The skip connection modification (SCM) and the multi-channels residual dense block (MRDB) enable the network to extract more feature information and learn deeper representations with high efficiency; the introduction of a subjective loss enables the network to focus on the structural similarity (SSIM) of images during training.
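
A hedged sketch of how an SSIM-based "subjective" term can be combined with a voxel-wise loss in PyTorch. The global (non-windowed) SSIM, the L1 term and the weighting are illustrative assumptions, not the exact loss used in MFCT-GAN.

```python
# Toy combined objective: voxel-wise L1 plus (1 - simplified global SSIM).
import torch
import torch.nn.functional as F

def global_ssim(x, y, c1=0.01**2, c2=0.03**2):
    """Single-window SSIM over each whole volume in the batch (simplified)."""
    dims = tuple(range(1, x.dim()))
    mx, my = x.mean(dim=dims), y.mean(dim=dims)
    vx, vy = x.var(dim=dims, unbiased=False), y.var(dim=dims, unbiased=False)
    shape = (-1,) + (1,) * (x.dim() - 1)
    cov = ((x - mx.view(shape)) * (y - my.view(shape))).mean(dim=dims)
    ssim = ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))
    return ssim.mean()

def reconstruction_loss(pred, target, ssim_weight=0.5):
    """L1 term plus an SSIM penalty; lower is better for both."""
    return F.l1_loss(pred, target) + ssim_weight * (1.0 - global_ssim(pred, target))

# toy volumes: batch of 2, single channel, 16^3 voxels
pred = torch.rand(2, 1, 16, 16, 16, requires_grad=True)
target = torch.rand(2, 1, 16, 16, 16)
loss = reconstruction_loss(pred, target)
loss.backward()
print(float(loss))
```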

Findings

Owing to the fusion of multiple sources of information, MFCT-GAN significantly improves the quantitative indexes and explicitly distinguishes the contours of different targets. In particular, SCM makes features more reasonable and accurate when they are expanded into three dimensions. The application of MRDB alleviates the problem of slow optimization during the late training period and reduces the computational cost. The introduction of the subjective loss guides the network to retain more high-frequency information, which makes the rendered CT volumes clearer in their details.

Originality/value

The authors' proposed MFCT-GAN is able to restore the 3D shapes of different objects accurately from biplanar projections. This is helpful at security checkpoints, where the presence of prohibited objects must be detected in X-ray images of stacked objects. The authors adopt three new modules, SCM, MRDB and the subjective loss, and analyze the roles these modules play in 3D reconstruction. Results show a significant improvement in reconstruction, in both objective and subjective terms.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. 3 no. 1
Type: Research Article
ISSN: 2633-6596

Open Access
Article
Publication date: 20 October 2022

Anna-Greta Nyström and Valtteri Kaartemo

The purpose of this paper is to develop Delphi methodology toward a holistic method for forecasting market change. Delphi methodology experienced its culmination in marketing…

Abstract

Purpose

The purpose of this paper is to develop Delphi methodology toward a holistic method for forecasting market change. Delphi methodology experienced its culmination in marketing research during the 1970s–1980s, but still has much to offer to both marketing scholars and practitioners in contexts where future market changes are associated with ambiguity and uncertainty.

Design/methodology/approach

This study revives the Delphi methodology by exemplifying how a recently developed framework on market change can be combined with the Delphi technique for data collection to support forecasting activities and research. The authors demonstrate the benefits of the improved methodology in an empirical study on the impact of the fifth generation of wireless communications technologies (5G) on the Finnish media market.

Findings

The developed methodological approach aids marketing scholars in categorizing and analyzing the data collected for capturing market change and in better guiding experts/respondents to provide holistic projections of future market change. The authors show that using a predefined theoretical framework in combination with the Delphi method for data collection and analysis is beneficial for studying future market change.

Originality/value

This paper develops Delphi methodology and contributes with a novel methodological approach to assessing market change.

Details

Journal of Business & Industrial Marketing, vol. 37 no. 13
Type: Research Article
ISSN: 0885-8624
