Search results
1 – 10 of over 5,000 results
Michael Wayne Davidson, John Parnell and Shaun Wesley Davenport
Abstract
Purpose
The purpose of this study is to address a critical gap in the enterprise resource planning (ERP) implementation process for small and medium-sized enterprises (SMEs) by acknowledging and countering cognitive biases through a cognitive bias awareness matrix model. Cognitive biases such as temporal discounting and optimism bias often skew decision-making, leading SMEs to prioritize short-term benefits over long-term sustainability or to underestimate the challenges involved in ERP implementation. These biases can result in costly missteps, underutilized ERP systems and project failure. This study enhances decision-making in ERP adoption by introducing a matrix that allows SMEs to self-assess their level of awareness and proactivity when addressing cognitive biases in decision-making.
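The self-assessment idea can be illustrated with a minimal sketch. The two axes (awareness and proactivity) come from the abstract; the 1–10 scale, the cut-off point and the quadrant labels below are purely illustrative assumptions, not the authors' actual matrix.

```python
def bias_awareness_quadrant(awareness, proactivity):
    """Place a pair of 1-10 self-assessment scores into a quadrant.

    The axes (awareness, proactivity) follow the abstract; the cut-off
    of 5 and the quadrant names are illustrative assumptions only.
    """
    high_a = awareness > 5
    high_p = proactivity > 5
    if high_a and high_p:
        return "aware and proactive"
    if high_a:
        return "aware but reactive"
    if high_p:
        return "proactive but unaware"
    return "unaware and reactive"

print(bias_awareness_quadrant(8, 3))  # aware but reactive
```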
Design/methodology/approach
The design and methodology of this research involves a structured approach using the problem-intervention-comparison-outcome-context (PICOC) framework to systematically explore the influence of cognitive biases on ERP decision-making in SMEs. The study integrates a comprehensive literature review, empirical data analysis and case studies to develop the Cognitive Bias Awareness Matrix. This matrix enables SMEs to self-assess their susceptibility to biases like temporal discounting and optimism bias, promoting proactive strategies for more informed ERP decision-making. The approach is designed to enhance SMEs’ awareness and management of cognitive biases, aiming to improve ERP implementation success rates and operational efficiency.
Findings
The findings underscore the profound impact of cognitive biases and information asymmetry on ERP system selection and implementation in SMEs. Temporal discounting often leads decision-makers to favor immediate cost-saving solutions, potentially resulting in higher long-term expenses due to a lack of scalability. Optimism bias tends to cause decision-makers to underestimate risks and overestimate benefits, leading to insufficient planning and resource allocation. Furthermore, information asymmetry between ERP vendors and SME decision-makers exacerbates these biases, steering choices toward options that may not fully align with the SME’s long-term interests.
Research limitations/implications
The study’s primary limitation is its concentrated focus on temporal discounting and optimism bias, potentially overlooking other cognitive biases that could impact ERP decision-making in SMEs. The PICOC framework, while structuring the research effectively, may restrict the exploration of broader organizational and technological factors influencing ERP success. Future research should expand the range of cognitive biases and explore additional variables within the ERP implementation process. Incorporating a broader array of behavioral economic principles and conducting longitudinal studies could provide a more comprehensive understanding of the challenges and dynamics in ERP adoption and utilization in SMEs.
Practical implications
The practical implications of this study are significant for SMEs implementing ERP systems. By adopting the Cognitive Bias Awareness Matrix, SMEs can identify and mitigate cognitive biases like temporal discounting and optimism bias, leading to more rational and effective decision-making. This tool enables SMEs to shift focus from short-term gains to long-term strategic benefits, improving ERP system selection, implementation and utilization. Regular use of the matrix can help prevent costly implementation errors and enhance operational efficiency. Additionally, training programs designed around the matrix can equip SME personnel with the skills to recognize and address biases, fostering a culture of informed decision-making.
Social implications
The study underscores significant social implications by enhancing decision-making within SMEs through cognitive bias awareness. By mitigating biases like temporal discounting and optimism bias, SMEs can make more socially responsible decisions, aligning their business practices with long-term sustainability and ethical standards. This shift improves operational outcomes and promotes a culture of accountability and transparency. The widespread adoption of the Cognitive Bias Awareness Matrix can lead to a more ethical business environment, where decisions are made with a deeper understanding of their long-term impacts on employees, customers and the broader community, fostering trust and sustainability in the business ecosystem.
Originality/value
This research introduces the original concept of the Cognitive Bias Awareness Matrix, a novel tool designed specifically for SMEs to evaluate and mitigate cognitive biases in ERP decision-making. This matrix fills a critical gap in the existing literature by providing a structured, actionable framework that effectively empowers SMEs to recognize and address biases such as temporal discounting and optimism bias. Its practical application promises to enhance decision-making processes and increase the success rates of ERP implementations. This contribution is valuable to behavioral economics and information systems, offering a unique approach to integrating cognitive insights into business technology strategies.
Details
Keywords
Pedro Brinca, Nikolay Iskrev and Francesca Loria
Abstract
Since its introduction by Chari, Kehoe, and McGrattan (2007), Business Cycle Accounting (BCA) exercises have become widespread. Much attention has been devoted to the results of such exercises and to methodological departures from the baseline methodology. Little attention has been paid to identification issues within these classes of models. In this chapter, the authors investigate whether such issues are of concern in the original methodology and in an extension proposed by Šustek (2011) called Monetary Business Cycle Accounting. The authors resort to two types of identification tests in population. One concerns strict identification as theorized by Komunjer and Ng (2011) while the other deals both with strict and weak identification as in Iskrev (2010). Most importantly, the authors explore the extent to which these weak identification problems affect the main economic takeaways and find that the identification deficiencies are not relevant for the standard BCA model. Finally, the authors compute some statistics of interest to practitioners of the BCA methodology.
Details
Keywords
Abstract
A matrix is a
Details
Keywords
Abstract
Purpose
The purpose of this paper is to establish and implement a direct topological reanalysis algorithm for general successive structural modifications, based on the updating matrix triangular factorization (UMTF) method for non-topological modification proposed by Song et al. [Computers and Structures, 143(2014):60-72].
Design/methodology/approach
In this method, topological modifications are viewed as a union of symbolic and numerical changes to the structural matrices. The numerical change is handled by UMTF, which directly updates the matrix triangular factors. For the symbolic change, an integral structure consisting of all potential nodes/elements is introduced to avoid side effects on efficiency during successive modifications. Necessary pre- and post-processing steps are also developed for memory-economic matrix manipulation.
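The idea of updating triangular factors instead of refactorizing can be illustrated, under simplifying assumptions, by the classic rank-one Cholesky update for a symmetric positive definite matrix. This is a generic sketch of factor updating, not the UMTF algorithm of Song et al.

```python
import numpy as np

def cholesky_rank1_update(L, v):
    """Given lower-triangular L with A = L @ L.T, return the Cholesky
    factor of A + v @ v.T without refactorizing from scratch.

    Standard O(n^2) rank-one update; a sketch of factor updating in
    general, not the UMTF method itself."""
    L = L.copy()
    v = v.astype(float).copy()
    n = v.size
    for k in range(n):
        r = np.hypot(L[k, k], v[k])          # rotate (L[k,k], v[k]) onto r
        c, s = r / L[k, k], v[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            L[k + 1:, k] = (L[k + 1:, k] + s * v[k + 1:]) / c
            v[k + 1:] = c * v[k + 1:] - s * L[k + 1:, k]
    return L

# usage: update the factor after a symmetric modification A -> A + v v^T
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)                  # symmetric positive definite
L = np.linalg.cholesky(A)
v = rng.standard_normal(5)
L_new = cholesky_rank1_update(L, v)
assert np.allclose(L_new @ L_new.T, A + np.outer(v, v))
```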
Findings
The new reanalysis algorithm is applicable to successive general structural modifications of arbitrary amplitudes and locations. It explicitly updates the factor matrices of the modified structure and thus guarantees the same accuracy as a full direct analysis while greatly enhancing efficiency.
Practical implications
Examples including evolutionary structural optimization and sequential construction analysis show the capability and efficiency of the algorithm.
Originality/value
This paper makes direct topological reanalysis applicable to successive structural modifications in many different areas.
Details
Keywords
M. Neumayer, T. Suppan, T. Bretterklieber, H. Wegleiter and Colin Fox
Abstract
Purpose
Nonlinear solution approaches for inverse problems require fast simulation techniques for the underlying sensing problem. In this work, the authors investigate finite element (FE) based sensor simulations for the inverse problem of electrical capacitance tomography. Two known computational bottlenecks are the assembly of the FE equation system as well as the computation of the Jacobian. Here, existing computation techniques like adjoint field approaches require additional simulations. This paper aims to present fast numerical techniques for the sensor simulation and computations with the Jacobian matrix.
Design/methodology/approach
For the FE equation system, a solution strategy based on Green’s functions is derived. Its relation to the solution of a standard FE formulation is discussed. A fast stiffness matrix assembly based on an eigenvector decomposition is shown. Based on the properties of the Green’s functions, Jacobian operations are derived, which allow the computation of matrix vector products with the Jacobian for free, i.e. no additional solves are required. This is demonstrated by a Broyden–Fletcher–Goldfarb–Shanno-based image reconstruction algorithm.
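The "matrix–vector products with the Jacobian for free" idea can be contrasted with the generic matrix-free alternative: approximating Jacobian–vector products by finite differences of the forward map. The sketch below shows only that generic technique, not the paper's Green's-function-based derivation, and the forward map F is a made-up stand-in for a sensor simulation.

```python
import numpy as np

def jacobian_vector_product(F, x, v, eps=1e-7):
    """Approximate J(x) @ v by a forward finite difference of F.

    Generic matrix-free technique: one extra evaluation of F per
    product, and the Jacobian matrix is never formed explicitly."""
    return (F(x + eps * v) - F(x)) / eps

# hypothetical forward map standing in for a sensor simulation
def F(x):
    return np.array([x[0] ** 2 + x[1], np.sin(x[1]) + x[0]])

x = np.array([1.0, 2.0])
v = np.array([1.0, 0.0])
# analytic Jacobian at x: [[2*x0, 1], [1, cos(x1)]]
J = np.array([[2 * x[0], 1.0], [1.0, np.cos(x[1])]])
print(jacobian_vector_product(F, x, v))   # close to J @ v = [2, 1]
```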
Findings
MATLAB-based time measurements of the new methods show a significant acceleration of all calculation steps compared to reference implementations with standard methods. For example, for the Jacobian operations, improvement factors of well over 100 were found.
Originality/value
The paper presents new methods for known computational tasks in solving inverse problems. A particular advantage is the coherent derivation and elaboration of the results. The approaches may also be applicable to other inverse problems.
Details
Keywords
Abstract
Purpose
The author studies forms over finite fields obtained as the determinant of Hermitian matrices and uses these determinantal forms to define and study the base polynomial of a square matrix over a finite field.
Design/methodology/approach
The author gives full proofs of the new results, citing previous work by other authors within the proofs; related references are cited in the introduction.
Findings
The author obtains several theorems, mainly describing the monic polynomials that arise as base polynomials of square matrices.
Originality/value
As far as the author knows, all the results are new, and the approach is also new.
Details
Keywords
Abstract
Purpose
The purpose of this study is to present and explain a new customer segmentation approach inspired by failure mode and effect analysis (FMEA) which can help classify customers into more accurate segments.
Design/methodology/approach
The present study offers a look at the three most commonly used approaches to assessing customer loyalty: net promoter score, loyalty ladder and loyalty matrix. A survey on the quality of restaurant services compares the results of categorizing customers according to these three approaches.
Findings
A new way of categorizing customers through a loyalty priority number (LPN) is proposed. The LPN was designed as a major segmentation criterion consisting of customer loyalty rate, frequency of purchase of products or services and value of purchases. The proposed approach allows customers to be categorized into four more comprehensive groups: random, bronze, silver and gold, according to their loyalty and value to the organization.
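In FMEA, the risk priority number is the product of three ratings; by analogy, an LPN could combine the three criteria named above in the same way. The sketch below assumes a product of 1–10 ratings and invented threshold bands; the abstract does not specify the paper's actual formula or cut-offs.

```python
def loyalty_priority_number(loyalty, frequency, value):
    """Combine three 1-10 ratings into an LPN, by analogy with the
    FMEA risk priority number (severity x occurrence x detection).
    The product form is an assumption; the paper's formula may differ."""
    return loyalty * frequency * value

def segment(lpn):
    # threshold bands are invented for illustration only
    if lpn >= 512:
        return "gold"
    if lpn >= 216:
        return "silver"
    if lpn >= 64:
        return "bronze"
    return "random"

print(segment(loyalty_priority_number(9, 8, 8)))  # gold
```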
Practical implications
The survey brings a more accurate way of categorizing customers, even in sectors where transaction data are not available. More accurate customer categorization will enable organizations to use targeting tools more effectively and improve product positioning.
Originality/value
The most commonly used categorization approaches such as net promoter score, loyalty ladder or loyalty matrix offer relatively general information about customer groups. The present study combines the benefits of these approaches with the principles of FMEA. The case study not only made it possible to offer a view of the real application of the proposed approach but also made it possible to make a uniform comparison of the accuracy of customer categorization.
Details
Keywords
Linzi Wang, Qiudan Li, Jingjun David Xu and Minjie Yuan
Abstract
Purpose
Mining user-concerned, actionable and interpretable hot topics will help management departments fully grasp the latest events and make timely decisions. Existing topic models primarily integrate word embedding and matrix decomposition, which only generates keyword-based hot topics with weak interpretability, making it difficult to meet the specific needs of users. Mining phrase-based hot topics with syntactic dependency structure has been proven to model structure information effectively. A key challenge lies in the effective integration of the above information into the hot topic mining process.
Design/methodology/approach
This paper proposes the nonnegative matrix factorization (NMF)-based hot topic mining method, semantics syntax-assisted hot topic model (SSAHM), which combines semantic association and syntactic dependency structure. First, a semantic–syntactic component association matrix is constructed. Then, the matrix is used as a constraint condition to be incorporated into the block coordinate descent (BCD)-based matrix decomposition process. Finally, a hot topic information-driven phrase extraction algorithm is applied to describe hot topics.
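As a baseline for the NMF step, the following sketch factorizes a toy term–document matrix with standard multiplicative updates and checks the reconstruction. This is plain unconstrained NMF only; the semantic–syntactic constraint matrix and the BCD solver of SSAHM are omitted.

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Plain NMF via Lee-Seung multiplicative updates: V ~ W @ H.
    A baseline sketch only; SSAHM adds a semantic-syntactic constraint
    matrix to the decomposition, which is omitted here."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    eps = 1e-10                                # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy term-document matrix: 4 terms x 6 documents, two latent topics
V = np.array([[3, 2, 0, 0, 3, 0],
              [2, 3, 0, 1, 2, 0],
              [0, 0, 3, 2, 0, 3],
              [0, 1, 2, 3, 0, 2]], dtype=float)
W, H = nmf(V, k=2)
print(np.linalg.norm(V - W @ H))  # small residual relative to ||V||
```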
Findings
The efficacy of the developed model is demonstrated on two real-world datasets, and the effects of dependency structure information on different topics are compared. The qualitative examples further explain the application of the method in real scenarios.
Originality/value
Most prior research focuses on keyword-based hot topics. The literature is thus advanced by mining phrase-based hot topics with syntactic dependency structure, which can effectively analyze the semantics. The development of a syntactic dependency structure combining word order and part-of-speech (POS) is a step forward, as word order and POS are utilized only separately in the prior literature. Ignoring this synergy may miss important information, such as grammatical structure coherence and logical relations between syntactic components.
Details
Keywords
Hedi Khedhiri and Taher Mkademi
Abstract
Purpose
In this paper, we discuss complex matrix quaternions (biquaternions) and deal with some abstract methods in complex matrix analysis.
Design/methodology/approach
We introduce and investigate the complex space
Findings
We develop on
Originality/value
We give sufficient and necessary conditions in terms of Cauchy–Riemann type quaternionic differential equations for holomorphicity of a function of one complex matrix variable
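The link between quaternions and complex matrices that underlies biquaternion methods can be illustrated by the standard embedding of the quaternions into 2 × 2 complex matrices; the sketch below is generic background, not the construction of this paper.

```python
import numpy as np

def quat_to_matrix(a, b, c, d):
    """Standard embedding of q = a + bi + cj + dk into M_2(C).
    The map is multiplicative, and det of the image equals |q|^2.
    Generic background, not this paper's construction."""
    return np.array([[a + b * 1j, c + d * 1j],
                     [-c + d * 1j, a - b * 1j]])

def quat_mul(q1, q2):
    """Hamilton product of two quaternions given as (a, b, c, d)."""
    a1, b1, c1, d1 = q1
    a2, b2, c2, d2 = q2
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

q1, q2 = (1.0, 2.0, 3.0, 4.0), (0.5, -1.0, 2.0, 0.0)
lhs = quat_to_matrix(*quat_mul(q1, q2))
rhs = quat_to_matrix(*q1) @ quat_to_matrix(*q2)
assert np.allclose(lhs, rhs)   # the embedding respects multiplication
```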
Details
Keywords
Petar Jackovich, Bruce Cox and Raymond R. Hill
Abstract
Purpose
This paper aims to partition the class of fragment constructive heuristics used to compute feasible solutions for the traveling salesman problem (TSP) into edge-greedy and vertex-greedy subclasses. As these subclasses of heuristics can create subtours, two known methodologies for subtour elimination on symmetric instances are reviewed and are expanded to cover asymmetric problem instances. This paper introduces a third, novel subtour elimination methodology, the greedy tracker (GT), and compares it to both known methodologies.
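Edge-greedy construction and subtour prevention can be illustrated with the classic greedy-edge heuristic, where a disjoint-set (union-find) structure rejects edges that would close a premature cycle. This is a generic sketch of the idea on a symmetric instance; it is not the greedy tracker (GT) methodology introduced in the paper.

```python
def greedy_edge_tour(dist):
    """Greedy-edge TSP construction on a symmetric distance matrix.

    Edges are taken cheapest-first; an edge is rejected if it would give a
    vertex degree 3, or close a cycle before all vertices are joined
    (union-find check). Generic sketch, not the paper's greedy tracker."""
    n = len(dist)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    degree = [0] * n
    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    tour_edges = []
    for d, i, j in edges:
        if degree[i] == 2 or degree[j] == 2:
            continue                          # would give degree 3
        ri, rj = find(i), find(j)
        if ri == rj and len(tour_edges) < n - 1:
            continue                          # would create a subtour
        tour_edges.append((i, j))
        degree[i] += 1
        degree[j] += 1
        parent[ri] = rj
        if len(tour_edges) == n:
            break
    return tour_edges

# usage: 5 points on a line; every vertex must end up on exactly two edges
pts = [0.0, 1.0, 2.0, 4.0, 7.0]
dist = [[abs(a - b) for b in pts] for a in pts]
edges = greedy_edge_tour(dist)
print(len(edges))  # 5 edges -> one Hamiltonian cycle on 5 vertices
```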
Design/methodology/approach
Computational results for all three subtour elimination methodologies are generated across 17 symmetric instances ranging in size from 29 vertices to 5,934 vertices, as well as 9 asymmetric instances ranging in size from 17 to 443 vertices.
Findings
The results demonstrate the GT is the fastest method for preventing subtours for instances below 400 vertices. Additionally, a distinction between fragment constructive heuristics and the subtour elimination methodology used to ensure the feasibility of resulting solutions enables the introduction of a new vertex-greedy fragment heuristic called ordered greedy.
Originality/value
This research has two main contributions: first, it introduces a novel subtour elimination methodology. Second, it introduces the concept of ordered lists, which remaps the TSP into a new space, with promising initial computational results.
Details