Search results

1 – 10 of over 8000
Article
Publication date: 28 March 2022

Debaditya Mohanti and Souvik Banerjee

Abstract

Purpose

The present study aims to evaluate the aggregate consumption function from the perspective of the Euler equation using Indian macroeconomic data. Further, to examine the robustness of the findings for India, other developing nations are also studied.

Design/methodology/approach

Quarterly time-series data on consumption and income in India from 1996:1 to 2020:3 are used to evaluate the alternative model proposed by Campbell and Mankiw (1989). The alternative hypotheses in the present study are tested by estimating the models with an instrumental variable approach, in which lagged changes in the quarterly average of 91-day Treasury bill yields serve as instruments for the nominal interest rate, alongside other lagged instrumental variables.
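For reference, the Campbell and Mankiw (1989) relation being estimated is usually written as Δc_t = μ + λΔy_t + θr_t + ε_t, where λ is read as the share of income accruing to rule-of-thumb consumers and the regressors are instrumented with lagged variables. The sketch below is a generic two-stage least squares illustration in Python using simulated placeholder series; it is not the authors' code or data.

```python
import numpy as np

# Generic 2SLS sketch for a Campbell-Mankiw style regression
#   dc_t = mu + lam * dy_t + theta * r_t + e_t,
# where dc and dy are consumption and income growth and r is an interest
# rate.  All series below are simulated placeholders, not the paper's data.

def two_sls(y, X, Z):
    """Two-stage least squares; a constant is added to regressors and instruments."""
    T = len(y)
    Xc = np.column_stack([np.ones(T), X])
    Zc = np.column_stack([np.ones(T), Z])
    # First stage: project the regressors on the instrument set.
    X_hat = Zc @ np.linalg.lstsq(Zc, Xc, rcond=None)[0]
    # Second stage: regress y on the fitted regressors.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta  # [mu, lam, theta]

rng = np.random.default_rng(0)
T = 100
dy = np.zeros(T)
r = np.zeros(T)
for t in range(1, T):                                  # persistent series so that
    dy[t] = 0.6 * dy[t - 1] + rng.normal(0, 0.02)      # lagged values are relevant
    r[t] = 0.8 * r[t - 1] + rng.normal(0, 0.005)       # instruments
dc = 0.002 + 0.5 * dy + 0.1 * r + rng.normal(0, 0.01, T)

X = np.column_stack([dy[1:], r[1:]])       # contemporaneous regressors
Z = np.column_stack([dy[:-1], r[:-1]])     # lagged instruments
print(two_sls(dc[1:], X, Z))               # roughly recovers [0.002, 0.5, 0.1]
```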

Findings

The evidence presented in this study suggests that aggregate consumption is better explained when the permanent income model incorporates rule-of-thumb consumers, that is, individuals who consume their current income along with those who consume their permanent income.

Practical implications

The new rule-of-thumb framework better explains some of the observed phenomena, such as why the expected changes in consumption are related to the expected changes in income, why the expected changes in consumption are unrelated to real interest rates (i.e. why the intertemporal elasticity of substitution is near zero) and why a high consumption/income ratio is usually followed by an increase in income growth.
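For context, the benchmark against which these observations are judged is the log-linearized consumption Euler equation, stated below in its textbook form (not reproduced from the paper):

```latex
% Log-linearized Euler equation for consumption growth; sigma is the
% intertemporal elasticity of substitution (IES).
\begin{equation}
  \mathbb{E}_t \, \Delta c_{t+1} = \mu + \sigma \, \mathbb{E}_t r_{t+1}
\end{equation}
% A sigma near zero means expected consumption growth barely responds to the
% real interest rate, which is the observation the rule-of-thumb framework
% helps to rationalize.
```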

Originality/value

This study adds to the limited literature on the Euler-based consumption function in developing economies.

Details

Indian Growth and Development Review, vol. 15 no. 2/3
Type: Research Article
ISSN: 1753-8254

Keywords

Article
Publication date: 1 September 1996

Richard A. Bernardi and Karen V. Pincus

Abstract

Researchers and practitioners have long debated the arguments in favor of and against providing specific mathematical materiality guidelines in auditing standards. Yet, there is little empirical evidence about the relationship between materiality thresholds and audit risk judgments in the absence of such guidelines. In this study, 152 Big Six managers evaluated materiality and risk for an audit simulation based on an actual case where material fraud was undetected. The auditor subjects were allowed to choose the evidence they would examine before reaching a decision. The major findings of the study are that while auditor materiality judgments differ, these differences were not statistically significantly related to either fraud risk judgments or the amount of evidence the auditors chose to examine before rendering their judgments. This empirical evidence does not support the need for specific quantitative guidance in accounting standards related to materiality. However, other considerations (such as concern for legal liability) could also have an impact on the advisability of providing specific quantitative guidance for setting materiality thresholds.

Details

Managerial Finance, vol. 22 no. 9
Type: Research Article
ISSN: 0307-4358

Article
Publication date: 5 October 2012

Susanne Engström and Erika Hedgren

Abstract

Purpose

Humans tend to rely on beliefs, assumptions and cognitive rules‐of‐thumb for making judgments and are biased against taking more uncertain alternatives. Such inertia has implications for client organizations' decision making about innovations, which are inherently more uncertain than conventional alternatives. The purpose of this paper is to contribute to furthering the understanding of barriers to overcoming inertia in client decision making in new‐build.

Design/methodology/approach

A descriptive behavioural decision‐making perspective is combined with an organizational information‐processing perspective. To identify and discuss individual and organizational barriers that potentially distort clients' decision making on innovation, the analysis addresses aggregated data from four studies. The analysis focuses on inferences and interpretations made by decision makers in Swedish client organizations, their information‐processing practices and the subsequent impacts on perceived meanings and judgments about industrialized multi‐storey, timber‐framed building innovations, which are perceived by Swedish clients as new and different building alternatives.

Findings

Cognitive and organizational barriers maintain status‐quo decisions. Clients are inclined to make biased judgments about industrialized‐building alternatives because non‐applicable cognitive rules‐of‐thumb, based on their experiences of conventional‐building alternatives, are used. Furthermore, client organizations' information‐processing practices do not allow different meanings to surface, interact and potentially suggest different conclusions, at odds with established beliefs.

Originality/value

The paper's conclusions highlight how inertia is sustained in client decision making in new‐build. They illustrate the limitations of a common engineering approach, i.e. supporting decision making about innovations by focusing on providing more information to the decision maker in order to reduce uncertainty, as well as managing multiple meanings by reductionism.

Details

Construction Innovation, vol. 12 no. 4
Type: Research Article
ISSN: 1471-4175

Keywords

Article
Publication date: 4 January 2008

Nina Cole

Abstract

Purpose

This study aims to examine the question of how long a behavioral skills training program should be in order to result in measurable behavioral change.

Design/methodology/approach

An empirical field study was conducted to compare two different lengths of time for a managerial skills training program aimed at achieving behavioral change. The training time for the first training condition was based on “rules‐of‐thumb” found in the literature. The training time was increased in an “extended” training condition that covered the same material but permitted more time for lecture, role‐playing and discussion.

Findings

Results showed that, relative to a control group, participants in the “extended” training condition exhibited behavioral change, but those in the “rules‐of‐thumb” training condition did not. Self‐efficacy increased significantly for trainees in both training conditions.

Practical implications

More attention needs to be paid to the length of training programs as they are designed, especially if behavioral change is a goal of the training. Using rules‐of‐thumb regarding training length may be insufficient for bringing about behavioral change. More importantly, the need for more effective management skills will not be met, and organizational performance outcomes may be jeopardized.

Originality/value

The results of this research have the potential to be broadly applicable to management training and may possibly generalize to training in other disciplines where the training is intended to effect behavioral change.

Details

Journal of Workplace Learning, vol. 20 no. 1
Type: Research Article
ISSN: 1366-5626

Keywords

Article
Publication date: 23 January 2020

Dario Pontiggia

Abstract

Purpose

The purpose of this paper is to study the optimal long-run rate of inflation in the presence of a hybrid Phillips curve, which nests a purely backward-looking Phillips curve and the purely forward-looking New Keynesian Phillips curve (NKPC) as special limiting cases.
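As a point of reference for the nesting described above, a hybrid Phillips curve is commonly written in the following textbook form; the paper's exact coefficients, which follow from its rule-of-thumb price-setting microfoundations, may differ:

```latex
% Generic hybrid Phillips curve; gamma_b and gamma_f depend on the share of
% rule-of-thumb price setters and the degree of price stickiness.
\begin{equation}
  \pi_t = \gamma_b \, \pi_{t-1} + \gamma_f \, \mathbb{E}_t \pi_{t+1} + \kappa \, x_t ,
\end{equation}
% where \pi_t is inflation and x_t the output gap.  Setting \gamma_b = 0
% recovers the purely forward-looking NKPC; setting \gamma_f = 0 recovers the
% purely backward-looking Phillips curve.
```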

Design/methodology/approach

This paper derives the long-run rate of inflation in a basic New Keynesian (NK) model, characterized by sticky prices and rule-of-thumb behavior by price setters. The monetary authority possesses commitment and its objective function stems from an approximation to the utility of the representative household.

Findings

The commitment solution for the monetary authority leads to steady-state outcomes in which inflation, albeit small, is positive. Rising from zero under the purely forward-looking NKPC, the optimal long-run rate of inflation reaches its maximum under the purely backward-looking Phillips curve. In that case, an inflation bias arises, while, under the hybrid Phillips curve, positive long-run inflation is associated with an output gain.

Research limitations/implications

This paper serves as a clarification against the misperception that log-linearized models take the steady-state inflation rate as given rather than being capable of determining it. The analysis is sensitive to the basic NK setting, with its assumed rule-of-thumb behavior by price setters and price staggering.

Originality/value

The results are the first to quantify the optimal long-run rate of inflation in a fully microfounded model that nests different Phillips curves.

Details

Journal of Economic Studies, vol. 47 no. 1
Type: Research Article
ISSN: 0144-3585

Keywords

Abstract

Details

Quality Control Procedure for Statutory Financial Audit
Type: Book
ISBN: 978-1-78714-226-8

Abstract

Details

Economic Complexity
Type: Book
ISBN: 978-0-44451-433-2

Article
Publication date: 21 June 2013

Hendry Raharjo

Abstract

Purpose

This paper aims to investigate the need for normalizing the relationship matrix in quality function deployment (QFD), especially when it leads to rank reversal, and to provide a guideline on when normalization should be done.
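To make the rank-reversal issue concrete, the following sketch contrasts raw and column-normalised technical priorities for a toy relationship matrix; the weights and matrix entries are illustrative placeholders, not taken from the paper.

```python
import numpy as np

# Toy illustration of rank reversal when a QFD relationship matrix is
# column-normalised.  Rows = customer requirements, columns = technical
# characteristics; w holds customer importance weights.  Placeholder values.
w = np.array([1.0, 5.0])
R = np.array([[9.0, 1.0, 3.0],
              [3.0, 3.0, 3.0]])

raw = w @ R                                    # raw technical priorities
R_norm = R / R.sum(axis=0, keepdims=True)      # divide each column by its sum
normalised = w @ R_norm                        # normalised priorities

print("raw        :", raw, "ranking:", np.argsort(-raw))
print("normalised :", normalised, "ranking:", np.argsort(-normalised))
# Here the raw ranking is TC0 > TC2 > TC1, while the normalised ranking is
# TC1 > TC2 > TC0, i.e. normalisation has reversed the ranking.
```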

Design/methodology/approach

The research was carried out based on some empirical observations and previous research data.

Findings

A rule of thumb is proposed for determining when the rank reversal that results from normalizing the QFD relationship matrix is desirable or undesirable.

Research limitations/implications

Since the rule of thumb is derived on an empirical basis, it might not work perfectly for every case, especially for large‐sized QFD matrices. Hence, this opens up a new challenge for future research to complement the current findings.

Practical implications

This paper shows that any QFD practitioner should be aware that normalization of the QFD relationship matrix is not a trivial issue, especially when it causes rank reversal. Ignoring normalization may produce misleading results; however, using normalization does not always guarantee reliable results.

Originality/value

There are two novel findings in this paper. The first is the exposition of the pros and cons of normalization in the QFD relationship matrix. The second is the proposed rule of thumb, which may serve as an important guideline for any QFD practitioner dealing with the relationship matrix.

Details

International Journal of Quality & Reliability Management, vol. 30 no. 6
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 8 March 2022

Usman Ayub, Umara Noreen, Uzma Qaddus, Attayah Shafique and Imran Abbas Jadoon

Abstract

Purpose

Heuristics offer a simpler, more comprehensible route to straightforward, astute and quick decision-making. The purpose of this study is to develop a rule of thumb, called the “Crocodile rule”, based on downside risk.

Design/methodology/approach

The Crocodile rule is developed and tested in two steps using stock-portfolio data from the Pakistan Stock Exchange covering January 2000 to November 2017. In the first phase of the study, the researchers forecast the probabilities; in the second phase, these probabilities are used to test the Crocodile rule.

Findings

The findings show acceptance of the null hypothesis regarding forecasting error for all categories of stocks in the first phase. The results also show that the minimum recovery chance is 58% and the maximum is 81%, with an overall average recovery chance of 69%. All recovery probabilities are above 50% for all portfolios, which is particularly notable for a volatile market such as Pakistan's.

Research limitations/implications

The study also suggests using another performance measure, such as value-at-risk, and comparing it with the present results to yield better outcomes. Furthermore, other categories of stocks, such as profitability and growth, could be tested as well.

Practical implications

The practical application of this rule involves a choice between a buy-and-hold strategy at one extreme and myopic behavior at the other.

Originality/value

This pioneering research focuses on the development of the “Crocodile rule”, using lower partial moments as a proxy for downside risk. It adds value to the existing literature on performance measures. Furthermore, it indicates which strategy investors should use when market trends are falling.
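Lower partial moments, the downside-risk proxy named above, are conventionally defined as LPM_n(τ) = E[max(τ − R, 0)^n]. The sketch below computes this standard measure for a placeholder return series; it is not the authors' implementation of the Crocodile rule.

```python
import numpy as np

def lower_partial_moment(returns, target=0.0, order=2):
    """Standard lower partial moment LPM_n(tau) = E[max(tau - R, 0)**n]."""
    shortfall = np.maximum(target - np.asarray(returns, dtype=float), 0.0)
    return float(np.mean(shortfall ** order))

# Hypothetical return series, not the paper's Pakistan Stock Exchange data.
r = np.array([0.04, -0.02, 0.01, -0.07, 0.03, -0.01])
print(lower_partial_moment(r, target=0.0, order=1))  # mean shortfall below 0
print(lower_partial_moment(r, target=0.0, order=2))  # squared-shortfall measure
```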

Details

Journal of Modelling in Management, vol. 18 no. 3
Type: Research Article
ISSN: 1746-5664

Keywords

Article
Publication date: 24 April 2019

Dan Asher and Micha Popper

Abstract

Purpose

This paper aims to clarify the term “tacit knowledge” and suggests the “onion model” as a way to explore conceptually linked layers of tacit knowledge. The model allows the application of different methodologies to elicit tacit knowledge in each layer, the ability to infer tacit knowledge in one layer from knowledge gained in another, and the exploration of the dynamics of tacit knowledge among the various layers presented in the model. Conceptual and practical advantages compared to prior works on tacit knowledge are discussed.

Design/methodology/approach

The main theoretical and methodological dilemmas discussed in the literature regarding tacit knowledge are reviewed. The “onion model” presented in this paper suggests an approach and methodologies that address the issues raised in the literature. The different layers of the model are demonstrated by prior research studies.

Findings

The “onion model” discussed in this study points to various layers of tacit knowledge and the links among them, allowing a research-based approach, as well as various practices.

Research limitations/implications

This paper discusses different layers of tacit knowledge relying on previous works that have dealt with these layers independently. The model as a whole and the dynamics among the layers are yet to be empirically investigated.

Practical implications

The “onion model” provides a conceptual framework that can be used for research and diagnosis aimed at exploring tacit knowledge that can serve individual and organizational development.

Originality/value

The approach discussed in this paper addresses some major problems discussed in the literature on tacit knowledge.
