Search results

1 – 10 of 745
Open Access
Article
Publication date: 13 March 2024

Tjaša Redek and Uroš Godnov

Abstract

Purpose

The Internet has changed consumer decision-making and influenced business behaviour. User-generated product information is abundant and readily available. This paper argues that user-generated content can be efficiently utilised for business intelligence using data science and develops an approach to demonstrate the methods and benefits of the different techniques.

Design/methodology/approach

Using Python Selenium, Beautiful Soup and various text mining approaches in R to access, retrieve and analyse user-generated content, we argue that (1) companies can extract information about the product attributes that matter most to consumers and (2) user-generated reviews enable the use of text mining results in combination with other demographic and statistical information (e.g. ratings) as an efficient input for competitive analysis.
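
To make the retrieval step concrete, the following is a minimal Python sketch in the spirit of the approach, assuming a hypothetical review page URL and CSS class names; the authors' actual pipeline combines Selenium for dynamic pages with text mining in R.

```python
# Minimal sketch: fetch a review page and surface frequent terms that
# may point to the product attributes consumers mention most.
# The URL and the "review-text" class are hypothetical placeholders.
from collections import Counter
import re

import requests
from bs4 import BeautifulSoup

url = "https://example.com/product/reviews"  # placeholder URL
html = requests.get(url, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

# Assumed markup: each review body sits in <div class="review-text">.
reviews = [div.get_text(" ", strip=True)
           for div in soup.find_all("div", class_="review-text")]

# Crude term-frequency pass over the pooled review text.
tokens = re.findall(r"[a-z']+", " ".join(reviews).lower())
stopwords = {"the", "a", "and", "is", "it", "to", "of", "this", "for"}
freq = Counter(t for t in tokens if t not in stopwords and len(t) > 2)
print(freq.most_common(20))  # candidate attributes worth inspecting
```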

Findings

The paper shows that combining different types of data (textual and numerical data) and applying and combining different methods can provide organisations with important business information and improve business performance.

Originality/value

The study makes several contributions to the marketing and management literature, mainly by illustrating the methodological advantages of text mining and accompanying statistical analysis, the different types of distilled information and their use in decision-making.

Details

Kybernetes, vol. 53 no. 13
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 18 January 2024

Yahan Xiong and Xiaodong Fu

Abstract

Purpose

Users often struggle to choose among similar online services. To help them make informed decisions, it is important to establish a service reputation measurement mechanism. User-provided feedback ratings serve as a primary source of information for this mechanism, and ensuring the credibility of user feedback is crucial for a reliable reputation measurement. Most of the previous studies use passive detection to identify false feedback without creating incentives for honest reporting. Therefore, this study aims to develop a reputation measure for online services that can provide incentives for users to report honestly.

Design/methodology/approach

In this paper, the authors present a method that uses a peer prediction mechanism to evaluate user credibility, scoring each user's reports with a strictly proper scoring rule. Considering the heterogeneity among users, the authors measure user similarity, identify similar users as peers to assess credibility and calculate service reputation using an improved expectation-maximization algorithm based on user credibility.
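
As a rough illustration of the incentive idea, here is a minimal Python sketch of scoring a report against a peer's report with the quadratic (Brier-type) scoring rule, which is strictly proper; the conditional prior and the pairing logic are invented stand-ins, not the paper's exact model.

```python
# Minimal sketch of peer-prediction scoring with the quadratic (Brier)
# scoring rule for binary feedback ratings. The conditional prior and
# the peer pairing below are illustrative assumptions, not the paper's
# exact model.
import numpy as np

# Assumed common prior: P(peer reports 1 | my report), e.g. estimated
# from historical reports of similar users. Values are hypothetical.
P_PEER_GIVEN_MINE = {0: 0.3, 1: 0.8}

def quadratic_score(p_one: float, peer_report: int) -> float:
    """Quadratic scoring rule: 2*p(observed) - sum_k p(k)^2.
    Strictly proper, so expected score is maximised by honest reports."""
    p = np.array([1.0 - p_one, p_one])
    return 2.0 * p[peer_report] - float(p @ p)

def credibility(my_report: int, peer_report: int) -> float:
    # A user's report induces a prediction about a similar peer's
    # report; the score rewards predictions that match the peer.
    return quadratic_score(P_PEER_GIVEN_MINE[my_report], peer_report)

print(credibility(my_report=1, peer_report=1))  # rewarded on agreement
print(credibility(my_report=1, peer_report=0))  # penalised on mismatch
```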

Findings

Theoretical analysis and experimental results verify that the proposed method motivates truthful reporting, effectively identifies malicious users and achieves high service rating accuracy.

Originality/value

The proposed method has significant practical value in evaluating the authenticity of user feedback and promoting honest reporting.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 6 February 2024

Somayeh Tamjid, Fatemeh Nooshinfard, Molouk Sadat Hosseini Beheshti, Nadjla Hariri and Fahimeh Babalhavaeji

Abstract

Purpose

The purpose of this study is to develop a domain-independent, cost-effective, time-saving and semi-automated ontology generation framework that can extract taxonomic concepts from an unstructured text corpus. In the human disease domain, ontologies are found to be extremely useful for managing the diversity of technical expressions in favour of information retrieval objectives. The boundaries of these domains are expanding so fast that it is essential to continuously develop new ontologies or upgrade available ones.

Design/methodology/approach

This paper proposes a semi-automated approach that extracts entities/relations via text mining of scientific publications. A code named text mining-based ontology (TmbOnt) is generated to assist a user in capturing, processing and establishing ontology elements. This code takes a pile of unstructured text files as input and projects them into high-value entities or relations as output. As a semi-automated approach, a user supervises the process, filters meaningful predecessor/successor phrases and finalizes the desired ontology-taxonomy. To verify the practical capabilities of the scheme, a case study was performed to derive a glaucoma ontology-taxonomy. For this purpose, text files containing 10,000 records were collected from PubMed.
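
As a loose approximation of the extraction step, the following Python sketch mines candidate is-a pairs from raw text with a Hearst-style pattern; TmbOnt's actual predecessor/successor processing is more elaborate, and the sample sentence is invented.

```python
# Loose sketch of mining candidate is-a pairs from raw text with a
# Hearst-style "X is a Y" pattern; the sample sentence is invented and
# a real pipeline would use POS tagging and noun-phrase chunking.
import re

text = ("Glaucoma is a group of eye diseases. Open-angle glaucoma, "
        "a common form of glaucoma, progresses slowly.")

pattern = re.compile(r"(\w[\w -]*?)\s+is\s+an?\s+([\w -]+?)[.,]")

for child, parent in pattern.findall(text):
    print(f"candidate: {child!r} is-a {parent!r}")
# A curator then keeps or discards each candidate, matching the
# semi-automated, user-supervised workflow described above.
```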

Findings

The proposed approach processed over 3.8 million tokenized terms from those records and yielded the resultant glaucoma ontology-taxonomy. The TmbOnt-derived taxonomy demonstrated a 60%–100% coverage ratio against well-known medical thesauruses and ontology-taxonomies, such as the Human Disease Ontology, Medical Subject Headings and the National Cancer Institute Thesaurus, with an average of 70% additional terms recommended for ontology development.

Originality/value

According to the literature, the proposed scheme demonstrated novel capability in expanding the ontology-taxonomy structure with a semi-automated text mining approach, aiming for future fully-automated approaches.

Details

The Electronic Library, vol. 42 no. 2
Type: Research Article
ISSN: 0264-0473

Open Access
Article
Publication date: 9 February 2024

Martin Novák, Berenika Hausnerova, Vladimir Pata and Daniel Sanetrnik

Abstract

Purpose

This study aims to enhance the merging of additive manufacturing (AM) techniques with powder injection molding (PIM). In this way, prototypes could be 3D-printed and mass production implemented using PIM. Thus, the surface properties and mechanical performance of parts produced using powder/polymer binder feedstocks [material extrusion (MEX) and PIM] were investigated and compared with powder manufacturing based on direct metal laser sintering (DMLS).

Design/methodology/approach

PIM parts were manufactured from 17-4PH stainless steel PIM-quality powder and powder intended for powder bed fusion compounded with a recently developed environmentally benign binder. Rheological data obtained at the relevant temperatures were used to set up the process parameters of injection molding. The tensile and yield strengths as well as the strain at break were determined for PIM sintered parts and compared to those produced using MEX and DMLS. Surface properties were evaluated through a 3D scanner and analyzed with advanced statistical tools.

Findings

Advanced statistical analyses of the surface properties showed the proximity between the surfaces created via PIM and MEX. The tensile and yield strengths, as well as the strain at break, suggested that DMLS provides sintered samples with the highest strength and ductility; however, PIM parts made from environmentally benign feedstock may successfully compete with this manufacturing route.

Originality/value

This study addresses the issues connected to the merging of two environmentally efficient processing routes. The literature survey included in the study shows that, so far, no study has systematically compared AM and PIM techniques on a fixed part shape and dimensions using advanced statistical tools to derive the proximity of the investigated processing routes.

Article
Publication date: 16 October 2023

Miguel Calvo and Marta Beltrán

Abstract

Purpose

This paper aims to propose a new method to derive custom dynamic cyber risk metrics based on the well-known Goal, Question, Metric (GQM) approach. A framework that complements the method and makes it much easier to use is proposed as well. Both the method and the framework have been validated within two challenging application domains: continuous risk assessment within a smart farm and risk-based adaptive security to reconfigure a Web application firewall.

Design/methodology/approach

The authors have identified a problem and provided motivation. They have developed their theory and engineered a new method and a framework to complement it. They have demonstrated that the proposed method and framework work by validating them in two real use cases.

Findings

The GQM method, often applied within the software quality field, is a good basis for proposing a method to define new tailored cyber risk metrics that meet the requirements of current application domains. A comprehensive framework that formalises possible goals and questions translated to potential measurements can greatly facilitate the use of this method.
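
To illustrate the shape of such a framework, here is a minimal Python sketch of a GQM hierarchy with an invented cyber risk goal, question and metric; the authors' catalogue of goals and questions is not reproduced here.

```python
# Minimal sketch of a Goal-Question-Metric hierarchy for cyber risk.
# The goal, question and metric are invented examples, not entries
# from the authors' framework catalogue.
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    compute: callable  # maps raw measurements to a metric value

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)

@dataclass
class Goal:
    purpose: str
    questions: list = field(default_factory=list)

goal = Goal(
    purpose="Continuously assess intrusion risk for a web application",
    questions=[Question(
        text="How exposed is the login endpoint right now?",
        metrics=[Metric(
            name="failed_login_rate",
            compute=lambda failed, total: failed / total if total else 0.0,
        )],
    )],
)

# Walk the hierarchy and evaluate each metric from current measurements.
for q in goal.questions:
    for m in q.metrics:
        print(m.name, "=", m.compute(42, 1000))  # hypothetical counts
```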

Originality/value

The proposed method enables the application of the GQM approach to cyber risk measurement. The proposed framework allows new cyber risk metrics to be inferred by choosing between suggested goals and questions and measuring the relevant elements of probability and impact. The authors’ approach proves to be generic and flexible enough to allow very different organisations with heterogeneous requirements to derive tailored metrics useful for their particular risk management processes.

Details

Information & Computer Security, vol. 32 no. 2
Type: Research Article
ISSN: 2056-4961

Open Access
Article
Publication date: 8 February 2024

Joseph F. Hair, Pratyush N. Sharma, Marko Sarstedt, Christian M. Ringle and Benjamin D. Liengaard

Abstract

Purpose

The purpose of this paper is to assess the appropriateness of equal weights estimation (sumscores) and the application of the composite equivalence index (CEI) vis-à-vis differentiated indicator weights produced by partial least squares structural equation modeling (PLS-SEM).

Design/methodology/approach

The authors rely on prior literature as well as empirical illustrations and a simulation study to assess the efficacy of equal weights estimation and the CEI.
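
In the spirit of such an empirical illustration, the following Python sketch contrasts an equal-weights composite with differentiated indicator weights on simulated data; the loadings are invented, and PLS-SEM estimates its weights iteratively from the full model rather than from loadings alone.

```python
# Simulated contrast between an equal-weights composite (sumscore) and
# a differentiated-weights composite. Loadings are invented; PLS-SEM
# derives its weights iteratively from the full structural model.
import numpy as np

rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=n)  # the construct the indicators measure

# Three indicators of unequal quality (one barely loads on the construct).
loadings = np.array([0.9, 0.8, 0.1])
X = latent[:, None] * loadings + rng.normal(scale=0.5, size=(n, 3))
y = 0.8 * latent + rng.normal(scale=0.5, size=n)  # outcome variable

sumscore = X.mean(axis=1)      # equal weights for every indicator
w = loadings / loadings.sum()  # differentiated (quality-based) weights
weighted = X @ w

# With unequal indicator quality, differentiated weights typically track
# the outcome at least as well as the sumscore.
for name, comp in [("equal weights", sumscore), ("differentiated", weighted)]:
    print(f"{name:>15}: corr with outcome = {np.corrcoef(comp, y)[0, 1]:.3f}")
```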

Findings

The results show that the CEI lacks discriminatory power and that its use can lead to major differences in structural model estimates, conceal measurement model issues and almost always produce inferior out-of-sample predictive accuracy compared to differentiated weights produced by PLS-SEM.

Research limitations/implications

In light of its manifold conceptual and empirical limitations, the authors advise against the use of the CEI. Its adoption and the routine use of equal weights estimation could adversely affect the validity of measurement and structural model results and understate structural model predictive accuracy. Although this study shows that the CEI is an unsuitable metric to decide between equal weights and differentiated weights, it does not propose another means for such a comparison.

Practical implications

The results suggest that researchers and practitioners should prefer differentiated indicator weights such as those produced by PLS-SEM over equal weights.

Originality/value

To the best of the authors’ knowledge, this study is the first to provide a comprehensive assessment of the CEI’s usefulness. The results provide guidance for researchers considering using equal indicator weights instead of PLS-SEM-based weighted indicators.

Details

European Journal of Marketing, vol. 58 no. 13
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 3 October 2023

Renan Ribeiro Do Prado, Pedro Antonio Boareto, Joceir Chaves and Eduardo Alves Portela Santos

Abstract

Purpose

The aim of this paper is to explore the possibility of using the Define-Measure-Analyze-Improve-Control (DMAIC) cycle, process mining (PM) and multi-criteria decision methods in an integrated way, so that these three elements combined result in a methodology called the Agile DMAIC cycle, which brings more agility and reliability to the execution of the Six Sigma process.

Design/methodology/approach

The approach taken by the authors in this study was to analyze the studies arising from this union of concepts, to use PM tools where appropriate to accelerate the DMAIC cycle by improving its first two steps, and to test the analytic hierarchy process (AHP) as a decision-making tool, bringing greater reliability to the definition of indicators.
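
For readers unfamiliar with the AHP step, the following Python sketch derives priority weights from a pairwise comparison matrix via the principal eigenvector; the matrix values are hypothetical, not taken from the study.

```python
# Minimal sketch of deriving indicator priorities with the analytic
# hierarchy process (AHP). The pairwise comparison matrix is a
# hypothetical example, not data from the study.
import numpy as np

# Saaty-scale judgements among three candidate indicators:
# A[i, j] says how much more important indicator i is than indicator j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
])

# The principal eigenvector of A gives the priority weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio checks that the judgements are coherent (CR < 0.1).
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)
RI = 0.58  # Saaty's random index for n = 3
print("weights:", np.round(w, 3), " CR:", round(CI / RI, 3))
```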

Findings

The results indicated a gain from the indicators and process maps generated by PM, and the AHP brought greater accuracy to determining the importance of the indicators.

Practical implications

Through the results and findings of this study, more organizations can understand the potential of integrating Six Sigma and PM. The methodology was developed only for the first two steps of the DMAIC cycle, and it is replicable for any Six Sigma project where data acquisition through mining is possible.

Originality/value

The authors develop a fully applicable and understandable methodology which can be replicated in other settings and expanded in future research.

Details

International Journal of Lean Six Sigma, vol. 15 no. 3
Type: Research Article
ISSN: 2040-4166

Article
Publication date: 17 April 2024

Jahanzaib Alvi and Imtiaz Arif

Abstract

Purpose

The crux of this paper is to unveil efficient features and practical tools that can predict credit default.

Design/methodology/approach

Annual data of non-financial listed companies were taken from 2000 to 2020, along with 71 financial ratios. The dataset was bifurcated into three panels with three default assumptions. Logistic regression (LR) and k-nearest neighbor (KNN) binary classification algorithms were used to estimate credit default in this research.
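
A minimal sketch of this kind of LR-versus-KNN comparison, using simulated data in place of the study's financial-ratio panels:

```python
# Minimal sketch of the LR-versus-KNN comparison on synthetic data that
# stands in for the financial-ratio panels; the simulated features and
# default labels are not the study's dataset.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# 71 features to mirror the 71 financial ratios; defaults kept rare.
X, y = make_classification(n_samples=2000, n_features=71, n_informative=10,
                           weights=[0.9], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

models = [
    ("LR", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("KNN", make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "test accuracy:", round(model.score(X_te, y_te), 3))
```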

Findings

The study’s findings revealed that the features used in Model 3 (Case 3) were comparatively the most efficient. The results also showed that KNN achieved higher accuracy than LR, demonstrating the superiority of KNN over LR for this task.

Research limitations/implications

Using only two classifiers limits the comprehensiveness of the comparison, and this research was based only on financial data, leaving sizeable room for including non-financial parameters in default estimation. Both limitations may be a direction for future research in this domain.

Originality/value

This study introduces efficient features and tools for credit default prediction using financial data, demonstrating KNN’s superior accuracy over LR and suggesting future research directions.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 6 March 2023

Punsara Hettiarachchi, Subodha Dharmapriya and Asela Kumudu Kulatunga

Abstract

Purpose

This study aims to minimize the transportation-related cost in distribution while utilizing a heterogeneous fixed fleet to deliver distinct demands at different geographical locations with a proper workload balancing approach, a problem that has not gained adequate attention in the literature. Increased distribution cost is a major problem for many companies due to the absence of efficient planning methods to overcome operational challenges in distinct distribution networks.

Design/methodology/approach

This study formulated the transportation problem as a vehicle routing problem with a heterogeneous fixed fleet and workload balancing, a combinatorial optimization problem of the NP-hard category. The model was solved using both simulated annealing (SA) and a genetic algorithm (GA), adopting distinct local search operators, with a greedy approach used to generate the initial solution for both algorithms. The paired t-test was used to select the better-performing algorithm. Through a number of scenarios, the baseline conditions of the problem were further tested by investigating alternative compositions of the heterogeneous fleet. Results were analyzed using analysis of variance (ANOVA) and Hsu’s MCB method to identify the best scenario.
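
As a sketch of the algorithm-selection step, the following applies a paired t-test to hypothetical route costs produced by the two metaheuristics on the same problem instances:

```python
# Paired t-test over hypothetical route costs from simulated annealing
# (SA) and the genetic algorithm (GA) on the same problem instances;
# the numbers are invented stand-ins for the study's runs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
instances = 30
sa_cost = 1000 + rng.normal(0, 25, instances)      # SA route costs
ga_cost = sa_cost - rng.normal(12, 10, instances)  # GA slightly cheaper

t, p = stats.ttest_rel(sa_cost, ga_cost)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05 and ga_cost.mean() < sa_cost.mean():
    print("GA yields significantly lower distribution cost on these runs.")
```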

Findings

The solutions generated by both algorithms were subjected to the paired t-test, and the results revealed that the GA produced better-quality solutions when planning a heterogeneous fleet for distribution with load balancing. The scenario analysis, evaluated using ANOVA and Hsu’s MCB method, found that removing the lowest-capacity trucks enhances average vehicle utilization and reduces travel distance.

Research limitations/implications

The developed model considers both the planning of a heterogeneous fleet and the requirement of workload balancing, which are common industry needs that have not been adequately addressed, either individually or collectively, in the literature. The adopted solution methodology for the NP-hard distribution problem, combining metaheuristics, statistical analysis and scenario analysis, is another significant contribution. Through the scenario analysis, the planning of distribution operations addresses not only operational-level but also strategic-level decisions.

Originality/value

The planning of distribution operations addresses not only operational-level decisions but also, through a scenario analysis, strategic-level decisions.

Details

Journal of Global Operations and Strategic Sourcing, vol. 17 no. 2
Type: Research Article
ISSN: 2398-5364

Article
Publication date: 24 July 2023

Mark R. Mallon and Stav Fainshmidt

Abstract

Purpose

Because family businesses are highly complex enterprises, researchers need appropriate theoretical and methodological tools to study them. The neoconfigurational perspective and its accompanying method, qualitative comparative analysis, are particularly well suited to phenomena characterized by complex causality, but their uptake in family business research has been slow and fragmented. To remedy this, the authors highlight their unique ability to address research questions for which other approaches are not well suited and discuss how they might be applied to family business phenomena.

Design/methodology/approach

The authors introduce the core tenets of the neoconfigurational perspective and how its set-theoretic epistemology differs from traditional approaches to theorizing and analysis. The authors then use a dataset of family firms to present a primer on conducting qualitative comparative analysis and interpreting the results.
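
To illustrate the core of the method, here is a minimal Python sketch of a crisp-set QCA truth table built from invented family-firm data; real analyses typically rely on dedicated QCA software and calibrated (often fuzzy) set memberships.

```python
# Minimal crisp-set QCA truth table: group cases by their configuration
# of conditions and compute consistency with the outcome. The
# family-firm data below are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "family_ceo":   [1, 1, 0, 1, 0, 1, 0, 1],
    "slack":        [1, 0, 1, 1, 0, 1, 1, 0],
    "network_ties": [1, 1, 0, 1, 1, 0, 0, 0],
    "survival":     [1, 1, 0, 1, 0, 1, 0, 0],  # outcome
})

truth_table = (
    df.groupby(["family_ceo", "slack", "network_ties"])["survival"]
      .agg(n="size", consistency="mean")  # share of cases with outcome
      .reset_index()
)
# Rows with consistency >= 0.8 would go to Boolean minimisation;
# equifinality appears as several distinct sufficient configurations.
print(truth_table)
```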

Findings

The authors find that family firm resources can be combined in multiple ways to affect business survival, suggesting that resources are substitutable and complementary. The authors discuss how the unique features of the neoconfigurational approach, namely equifinality, conjunctural causation and causal asymmetry, can be fruitfully applied to break new ground in scholarly understanding of family businesses.

Originality/value

This article allows family business researchers to apply the neoconfigurational approach without first having to consult multiple and disparate sources often written for other disciplines. This article explicates how to leverage the theoretical and empirical advantages of the neoconfigurational approach in the context of family businesses, supporting a more widespread adoption of the neoconfigurational perspective in family business research.
