Search results

1 – 10 of over 100,000
Book part
Publication date: 12 September 2003

Joel A.C. Baum and Theresa K. Lant

Abstract

Organizations create their environments by constructing interpretations and then acting on them as if they were true. This study examines the cognitive spatial boundaries that managers of Manhattan hotels impose on their competitive environment. We derive and estimate a model that specifies how the attributes of managers’ own hotels and potential rival hotels influence their categorization of competing and non-competing hotels. We show that similarity in geographic location, price, and size is central to managers’ beliefs about the identity of their competitors, but that the weights they assign to these dimensions when categorizing competitors diverge from the dimensions’ influence on competitive outcomes, indicating an overemphasis on geographic proximity. Although such categorization is commonly conceived as a rational process based on the assessment of similarities and differences, we suggest that significant distortions can occur in the categorization process, and we examine empirically how factors including managers’ attribution errors, cognitive limitations, and (in)experience lead them to make Type I and Type II competitor categorization errors and to frame competitive environments that are incomplete, erroneous, or even superstitious. Our findings suggest that understanding inter-firm competition may require greater attention to the cognitive foundations of competition.

Details

Geography and Strategy
Type: Book
ISBN: 978-0-76231-034-0

Article
Publication date: 18 November 2020

Stewart Li, Richard Fisher and Michael Falta

Abstract

Purpose

Auditors are required to perform analytical procedures during the planning and concluding phases of the audit. Such procedures typically use data aggregated at a high level. The authors investigate whether artificial neural networks, a more sophisticated technique for analytical review than those typically used by auditors, may be effective when applied to such high-level data.

Design/methodology/approach

Data from companies operating in the dairy industry were used to train an artificial neural network. Data with and without material seeded errors were used to test alternative techniques.

Findings

Results suggest that the artificial neural network approach was not significantly more effective (taking into account both Type I and Type II errors) than traditional ratio and regression analysis, and that none of the three approaches was more effective overall than a purely random procedure. However, the artificial neural network approach did yield considerably fewer Type II errors than the other methods, which suggests artificial neural networks could be a candidate for improving the performance of analytical procedures in circumstances where Type II error rates are the auditor’s primary concern.

Originality/value

The authors extend the work of Coakley and Brown (1983) by investigating the application of artificial neural networks as an analytical procedure using aggregated data. Furthermore, the authors examine multiple companies from one industry and supplement financial information with both exogenous industry and macro-economic data.

Details

Meditari Accountancy Research, vol. 29 no. 6
Type: Research Article
ISSN: 2049-372X

Keywords

Article
Publication date: 9 April 2018

Silvana Maria R. Watson, João Lopes, Célia Oliveira and Sharon Judge

Abstract

Purpose

The purpose of this descriptive study is to investigate why some elementary children have difficulties mastering addition and subtraction calculation tasks.

Design/methodology/approach

The researchers examined the types of errors in addition and subtraction calculation made by 697 Portuguese students in the elementary grades. Each student completed a written assessment of mathematical knowledge. A coding system (e.g. FR = failure to regroup) was used to grade the tests. A reliability check was performed on a randomly selected 65 per cent of the exams.

Findings

Data frequency analyses reveal that the most common type of error was miscalculation, for both addition (n = 164; 38.6 per cent) and subtraction (n = 180; 21.7 per cent). The second most common error type was failure to regroup, in both addition (n = 74; 17.5 per cent) and subtraction (n = 139; 16.3 per cent). The frequency of error types by grade level is provided. Findings from the hierarchical regression analyses indicate that differences in students’ performance emerged as a function of error type, which in turn indicated the kinds of difficulties students experienced.

Research limitations/implications

This study has several limitations: the use of a convenience sample; the location of all schools in the northern region of Portugal; the limited number of problems; and the timing of the assessment within the school year.

Practical implications

Students’ errors suggested that their performance in calculation tasks is related to conceptual and procedural knowledge and skills. Error analysis allows teachers to better understand the individual performance of a diverse group and to tailor instruction to ensure that all students have an opportunity to succeed in mathematics.

Social implications

Error analysis helps teachers uncover individual students’ difficulties and deliver meaningful instruction to all students.

Originality/value

This paper adds to the international literature on error analysis and reinforces its value in diagnosing students’ type and severity of math difficulties.

Details

Journal for Multicultural Education, vol. 12 no. 1
Type: Research Article
ISSN: 2053-535X

Keywords

Article
Publication date: 1 September 2004

Shee Boon Law and Roger Willett

Abstract

To provide further evidence on the effectiveness of analytical procedures (APs) used in auditing. Computer simulation experiments are used to examine the error detection ability of a set of APs. Two different types of errors are examined and compared on the basis of the Type I and Type II errors they produce. The results of the experiments support earlier performance assessments of APs based upon simulated data. Higher noise levels reduce performance but a more detailed modeling of the process generating the data appears to produce a compensatory increase in performance. Contrary to earlier findings, some annual APs performed better than their related monthly counterparts. Case study and experimental results are better reconciled than in previous studies. The findings are based upon simulated data and deal with two types of error only. The experiments model the data generating process underlying accounting numbers but are simplifications of the real situation. Future research based upon the same approach but using more sophisticated experimental models and dealing with a wider class of errors would be useful. The findings echo earlier recommendations that APs should not be relied upon as lone, substantive testing devices for error and fraud. The simulation experiments use Statistical Activity Cost Theory to generate accounting numbers from specified, underlying stochastic processes. This allows errors to be related to transactions, i.e. the level at which they typically occur, whereas in prior experimental work errors have only been related to accounts.

Details

Managerial Auditing Journal, vol. 19 no. 7
Type: Research Article
ISSN: 0268-6902

Keywords

Article
Publication date: 1 April 2001

Clarence N.W. Tan and Herlina Dihardjo

Abstract

Outlines previous research on company failure prediction and discusses some of the methodological issues involved. Extends an earlier study (Tan 1997) using artificial neural networks (ANN) to predict financial distress in Australian credit unions by extending the forecast period of the models, presents the results and compares them with probit model results. Finds the ANN models generally at least as good as the probit, although both types improved their accuracy rates (for Type I and Type II errors) when early warning signals were included. Believes ANN “is a promising technique” although more research is required, and suggests some avenues for this.

Details

Managerial Finance, vol. 27 no. 4
Type: Research Article
ISSN: 0307-4358

Keywords

Article
Publication date: 1 April 1990

B. Kirwan, B. Martin, H. Rycraft and A. Smith

Abstract

Human error data in the form of human error probabilities should ideally form the corner‐stone of human reliability theory and practice. In the history of human reliability assessment, however, the collection and generation of valid and usable data have been remarkably elusive. In part the problem appears to stem from the requirement for a technique to assemble the data into meaningful assessments. There have been attempts to achieve this, THERP being one workable example of a (quasi) database which enables the data to be used meaningfully. However, in recent years more attention has been focused on the Performance-Shaping Factors (PSF) associated with human reliability. A “database for today” should therefore be developed in terms of PSF, as well as task/behavioural descriptors, and possibly even psychological error mechanisms. However, this presumes that data on incidents and accidents are collected and categorised in terms of the PSF contributing to the incident, and such classification systems are rare in practice. The collection and generation of a small working database, based on incident records, are outlined. This has been possible because the incident‐recording system at BNFL Sellafield does give information on PSF. Furthermore, the data have been integrated into the Human Reliability Management System, which is a PSF‐based human reliability assessment system. Some of the data generated are presented, together with the PSF associated with them, and an outline of the incident collection system is given. Lastly, aspects of human common mode failure or human dependent failures, particularly at the lower end of the human error probability range, are discussed, as these are unlikely to be elicited from data collection studies, yet are important in human reliability assessment. One possible approach to the treatment of human dependent failures, the utilisation of human performance‐limiting values, is described.

Details

International Journal of Quality & Reliability Management, vol. 7 no. 4
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 5 September 2016

Ying (Jessica) Cao, Calum Turvey, Jiujie Ma, Rong Kong, Guangwen He and Jubo Yan

Abstract

Purpose

The purpose of this paper is to investigate whether negative incentives in the pay-for-performance mechanism would trigger loan officers to strategically reject potentially good loans and, if so, what feasible solution might alleviate the problem.

Design/methodology/approach

A framed field experiment was conducted to test loan decision behaviours using loan officers from Rural Credit Cooperatives in Shandong, China. A 2 × 2 between-subjects design was adopted to generate variation in incentives and in prior information about credit risks.

Findings

Results showed that loan officers did ration credit, rejecting more loans when facing a risk of personal income loss. However, providing risk information about the application pool boosted the approval rate and offset the behavioral responses by roughly the same magnitude.

Research limitations/implications

Findings in this study suggest that certain institutional settings can result in credit rationing via strategic loan misclassification. Further, information can sometimes generate effects similar to those of costly incentives or mechanisms that are not implementable in practice.

Originality/value

This study adopted an innovative monetized experimental design that allows researchers to examine the (otherwise unobservable) trade-offs between Type I and Type II errors in loan misclassification as incentives change. In addition, an anchoring prior-information treatment is used to elicit the relative power of almost costless information versus costly monetary incentives, and to point towards a potentially feasible solution.

Details

Agricultural Finance Review, vol. 76 no. 3
Type: Research Article
ISSN: 0002-1466

Keywords

Article
Publication date: 1 May 1998

Erkki K. Laitinen and Teija Laitinen

Abstract

In this study the factors behind decision‐makers’ erroneous judgements regarding failure prediction (the classification of firms as bankrupt or non‐bankrupt) are analysed. The purpose is to identify the factors causing incorrect responses, i.e. the cases in which the decision‐maker is for some reason incapable of using the given information to arrive at the correct classification. Five possible sources of disturbance in this decision‐making were hypothesized: firm‐specific factors, data, decision‐maker‐specific factors, external factors, and the failure process. In further analysis these factors were empirically operationalized and their significance was tested by applying logistic (logit) analysis separately to the Type I and Type II classification errors identified in a human information processing (HIP) study. The results indicated that the effect of all five hypothesized factors on misclassifications is statistically significant. The inconsistency of the cues (firm‐specific factors) may be the main factor causing errors in evaluation; the failure process is another important factor (Type I errors). Thus, human bankruptcy prediction can be improved mainly by checking the consistency of financial statements (that they give a true view of the firm’s economic status) and by paying special attention to timely identification of a possible failure process. To maintain validity, future HIP studies on bankruptcy prediction and other economic events should take care to control for the kinds of sources of disturbance identified in this study.

Details

Accounting, Auditing & Accountability Journal, vol. 11 no. 2
Type: Research Article
ISSN: 0951-3574

Keywords

Article
Publication date: 17 March 2020

Lola García-Santiago and María-Dolores Olvera-Lobo

Abstract

Purpose

This paper presents an exploratory study on the accessibility of Spanish World Heritage website home pages in the Spanish language.

Design/methodology/approach

The study sample comprised 78 home pages from the institutional websites of the 47 Spanish cultural, natural and mixed assets designated as World Heritage by the United Nations Educational, Scientific and Cultural Organization (UNESCO). These home pages were analysed using online accessibility validator tools, following the Web Content Accessibility Guidelines (WCAG) 2.0 recommendation at its different priority levels. The compiled data were employed in a quantitative study of adherence to the WCAG guidelines. Furthermore, the types of errors made were identified from the perspective of accessibility and usability, and the rate of application of the accessibility guidelines was calculated according to the type of entity managing the websites and pages.

Findings

The results show that more than 25 percent of the cases analysed had ten accessibility errors or fewer. Moreover, in close to 40 percent of them it was only necessary to correct one or two types of errors. The paper concludes that, despite technological and legislative advances in making public entity websites accessible, there is still much to do before complete web accessibility and usability at the AA and AAA levels can be achieved.

Practical implications

Identifying accessibility problems on institutional websites constitutes the first step towards creating web content that is easy to access and manage for users with disabilities. In this regard, this study contributes to improving web content according to objective guidelines such as those encouraged by the WCAG 2.0.

Originality/value

This article provides information on how accessibility and usability guidelines are implemented by the institutional websites of especially important Cultural Heritage sites. This issue has significant implications for users, yet there is a lack of prior studies on it; herein lie the value and originality of this paper.

Details

Library Hi Tech, vol. 39 no. 1
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 5 March 2018

Veronika Anselmann and Regina H. Mulder

Abstract

Purpose

The study pursues two goals: first, as a replication study, to test a model of learning from errors in the insurance industry; and second, to increase insight into learning from errors by focussing on different types of errors.

Design/methodology/approach

The authors conducted a cross-sectional survey in the insurance industry (N = 206) and used structural equation modelling and path modelling to analyse the data. To be able to analyse different types of errors, the authors used the Critical Incident Technique and asked participants to describe error situations.

Findings

The model of learning from errors could be partly replicated. The results indicate that a non-punitive orientation towards errors is an important factor in reducing insurance agents’ tendency to cover up knowledge- and rule-based errors. In situations involving slips and lapses, error strain has a negative influence on trust and non-punitive orientation, which in turn both reduce the tendency to cover up errors.

Research limitations/implications

A limitation is the small sample size. By using the Critical Incident Technique, the authors were able to analyse authentic error situations. The results imply the importance of an error-friendly climate in organisations.

Originality/value

Replication studies are important for generalising results to different domains. To increase insight into learning from errors, the authors analysed influencing factors with regard to different types of errors.

Details

Journal of Management Development, vol. 37 no. 2
Type: Research Article
ISSN: 0262-1711

Keywords
