Search results

1–10 of over 16,000 results
Book part
Publication date: 1 December 2016

Raffaella Calabrese and Johan A. Elkink

Abstract

The most commonly used spatial regression models for a binary dependent variable employ a symmetric link function, such as the logit or probit. When the dependent variable represents a rare event, a symmetric link function can underestimate the probability that the rare event occurs. Following Calabrese and Osmetti (2013), we suggest the quantile function of the generalized extreme value (GEV) distribution as the link function in a spatial generalized linear model, and we call this model the spatial GEV (SGEV) regression model. To estimate the parameters of this model, a modified version of the Gibbs sampling method of Wang and Dey (2010) is proposed. We analyze the performance of our model through Monte Carlo simulations and evaluate its prediction accuracy on empirical data on state failure.
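The asymmetry the abstract exploits is easy to see by comparing the two inverse links directly. The sketch below is only illustrative: the function names and the shape-parameter value xi = 0.25 are our own choices, not taken from the chapter.

```python
import math

def gev_inverse_link(eta, xi=0.25):
    """Map a linear predictor eta to a probability via the GEV cdf
    exp(-(1 + xi*eta)^(-1/xi)); defined only where 1 + xi*eta > 0."""
    t = 1.0 + xi * eta
    if t <= 0:
        return 0.0  # outside the support the event probability vanishes
    return math.exp(-t ** (-1.0 / xi))

def logit_inverse_link(eta):
    """Symmetric logistic inverse link, for comparison."""
    return 1.0 / (1.0 + math.exp(-eta))

# At eta = 0 the logit link gives 0.5, whereas the GEV link gives
# exp(-1), roughly 0.37: the GEV response curve is asymmetric, which
# is what helps when the modelled event is rare.
```

The practical point is that the GEV link approaches 1 more slowly on one side than the logit does, so rare positive outcomes are not forced toward 0.5 probabilities near the centre of the predictor range.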

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Keywords

Article
Publication date: 5 June 2017

Guoquan Chen, Qiwei Zhou and Wei Liu

Abstract

Purpose

Based on a review of previous research on organizational learning from experience, this paper aims to point out the notable gaps and unresolved issues in the research area and to propose a “multilevel integrated model of learning from experience”, which integrates current research findings and can serve as the theoretical framework for further investigation.

Design/methodology/approach

This paper is a theoretical review.

Findings

This study reviews previous research on organizational learning from experience in detail at the individual, team, organizational and multiple levels, ordered by outcome (success vs failure), and points out limitations of that research, including its fragmentation, its limited exploration of underlying motivations and the lack of an overall framework. The study then proposes the “multilevel integrated model of learning from experience”, which provides a systematic and fine-grained framework for studies in this field.

Research limitations/implications

This paper emphasizes that the true underlying motivations driving learning from experience should be identified and that exploration of its antecedents should be deepened. The study also shows that various factors shape the process and outcome of learning from experience through both subjective perception and objective experience; future research should therefore distinguish the influence of learning from experience on “knowing” from its influence on “doing”.

Originality/value

This study reviews and integrates current research on learning from experience at multiple levels and further differentiates the influences of different experience outcomes (success vs failure). The proposed theoretical model provides clear suggestions of where future research should be directed.

Details

Nankai Business Review International, vol. 8 no. 2
Type: Research Article
ISSN: 2040-8749

Keywords

Abstract

Details

Transport Survey Methods
Type: Book
ISBN: 978-1-84-855844-1

Article
Publication date: 11 November 2020

Komal

Abstract

Purpose

In recent years, the application of robots in industrial sectors such as nuclear power generation, construction, automobiles, firefighting and medicine has been increasing day by day. In large industrial plants, humans and robots generally work together to accomplish tasks, which raises problems of safety and reliability, because any malfunction of a robot may cause human injury or even death. To assess the reliability of a robot, a sufficient amount of failure data is required, which is often very difficult to collect because robot failures are rare events. The variety of failure patterns adds further difficulty, leading to the problem of uncertainty. To overcome these difficulties, this paper presents a case study that applies fuzzy fault tree analysis (FFTA) to control robot-related accidents and provide a safe working environment for humans in an industrial plant.

Design/methodology/approach

The presented FFTA method uses different fuzzy membership functions to quantify uncertainty factors and applies alpha-cut coupled, weakest t-norm (Tω)-based approximate fuzzy arithmetic operations to obtain the fuzzy failure probability of the robot–human interaction fault event, which is the main contribution of the paper.
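The alpha-cut arithmetic this builds on can be illustrated with plain interval operations on triangular fuzzy numbers; the weakest t-norm (Tω) variant used in the paper yields narrower spreads than this standard product. The probabilities below are made up for illustration, not taken from the study.

```python
def alpha_cut(tfn, alpha):
    """Alpha-cut of a triangular fuzzy number (low, mode, high),
    returned as a crisp interval [lower, upper]."""
    low, mode, high = tfn
    return (low + alpha * (mode - low), high - alpha * (high - mode))

def and_gate(cut_a, cut_b):
    """Interval product for an AND gate (both basic events must fail);
    endpoint-wise multiplication is valid because probabilities are
    non-negative."""
    return (cut_a[0] * cut_b[0], cut_a[1] * cut_b[1])

# Fuzzy failure probabilities of two basic events (illustrative values).
p1 = (0.01, 0.02, 0.03)
p2 = (0.10, 0.20, 0.30)

cut = and_gate(alpha_cut(p1, 1.0), alpha_cut(p2, 1.0))
# At alpha = 1 the interval collapses to the modal product 0.02 * 0.20.
```

Sweeping alpha from 0 to 1 traces out the fuzzy failure probability of the gate; a full fault tree repeats this gate-by-gate up to the top event.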

Findings

The result obtained from the presented FFTA method is compared with other existing approaches. Critical basic events are also ranked using the V-index to support a suitable action plan for controlling robot-related accidents. The study indicates that the presented FFTA is a good alternative method for analyzing faults in robot–human interaction and providing a safe working environment in an industrial plant.

Originality/value

Existing fuzzy reliability assessment techniques designed for robots mainly use triangular fuzzy numbers (TFNs), triangular vague sets (TVS) or triangular intuitionistic fuzzy sets (IFS) to quantify data uncertainty. The present study overcomes this shortcoming and generalizes fuzzy reliability assessment for robots by adopting different IFS to control robot-related accidents and provide a safe working environment for humans, which is the main contribution of the paper.

Details

International Journal of Quality & Reliability Management, vol. 38 no. 6
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 14 November 2016

Eli Rohn, Gilad Sabari and Guy Leshem

Abstract

Purpose

This study aims to investigate information technology security practices of very small enterprises.

Design/methodology/approach

The authors perform a formal information security field study using a representative sample. Using the Control Objectives for IT (COBIT) framework, the authors evaluate 67 information security controls and perform 206 related tests. The authors state six hypotheses about the findings and accept or reject those using inferential statistics. The authors explain findings using the social comparison theory and the rare events bias theory.

Findings

Only one-third of the controls examined were designed properly and operated as expected. About half of the controls were either ill-designed or did not operate as intended. The social comparison theory and the rare events bias theory explain managers’ reliance on small experience samples, which in turn leads to an erroneous comprehension of their business environment as it relates to information security.

Practical implications

This information is valuable to executive branch policy makers striving to reduce information security vulnerability on local and national levels and small business organizations providing information and advice to their members.

Originality/value

Information security surveys are usually over-optimistic and avoid self-incrimination, yielding results that are less accurate than field work. To obtain grounded facts, the authors used the field research approach to gather qualitative and quantitative data by physically visiting active organizations, interviewing managers and staff, observing processes and reviewing written materials such as policies, procedures and logs, in accordance with common practices of security audits.

Details

Information & Computer Security, vol. 24 no. 5
Type: Research Article
ISSN: 2056-4961

Keywords

Open Access
Article
Publication date: 14 August 2018

Yiming Xu, Yajie Zou and Jian Sun

Abstract

Purpose

It would take billions of miles of field road testing to demonstrate that the safety of automated vehicles is statistically significantly higher than that of human drivers, because vehicle accidents are rare events.

Design/methodology/approach

This paper proposes an accelerated testing method for automated vehicle safety evaluation based on improved importance sampling (IS) techniques. Taking the typical cut-in scenario as an example, the proposed method extracts the critical variables of the scenario and statistically fits their distributions. A genetic algorithm is used to calculate the optimal IS parameters by solving an optimization problem. Considering the error of distribution fitting, the result is modified so that it accurately reveals the safety benefits of automated vehicles in the real world.
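The core idea of IS-based acceleration, sampling from a shifted proposal and reweighting by the likelihood ratio, can be sketched on a toy rare event. The Gaussian tail below merely stands in for a critical cut-in variable; none of the specifics (distribution, threshold, shift) come from the paper.

```python
import math
import random

def tail_prob_is(threshold, shift, n=200_000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0, 1) by sampling from the
    shifted proposal N(shift, 1) and reweighting each hit with the
    likelihood ratio exp(shift^2 / 2 - shift * x)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(0.5 * shift * shift - shift * x)
    return total / n

# Exact tail probability for comparison: P(X > 4) = 0.5 * erfc(4 / sqrt(2)),
# about 3.2e-5, i.e. crude Monte Carlo would need ~30,000 samples per hit.
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))
estimate = tail_prob_is(4.0, shift=4.0)
```

Centering the proposal at the threshold makes almost half the samples land in the rare region, which is the source of the acceleration; the paper's contribution is choosing such shift parameters automatically with a genetic algorithm.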

Findings

Based on naturalistic driving data from Shanghai, the proposed method is validated by simulation. The results show that, compared with existing methods, the proposed method improves test efficiency by 35 per cent and increases the accuracy of the accelerated test result by 23 per cent.

Originality/value

This paper makes three contributions. First, a genetic algorithm is used to calculate the IS parameters, which improves test efficiency. Second, the test result is modified by an error correction parameter, which improves its accuracy. Third, typical high-risk cut-in scenarios in China are analyzed, and the proposed method is validated by simulation.

Details

Journal of Intelligent and Connected Vehicles, vol. 1 no. 1
Type: Research Article
ISSN: 2399-9802

Keywords

Article
Publication date: 20 November 2019

Rémi Boivin and Silas Nogueira de Melo

Abstract

Purpose

The purpose of this paper is to analyze the spatial patterns of different phenomena in the same geographical space. Andresen’s spatial point pattern test computes a global index (the S-index) that indicates the similarity or dissimilarity of two spatial patterns. This paper proposes a generalized S-index that allows perfect similarity and dissimilarity in all situations.
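In simplified form, the S-index is the share of areal units whose patterns are judged similar across the two datasets. The sketch below replaces the bootstrap confidence intervals of the full test with a crude tolerance rule, so it is a placeholder for the idea, not Andresen's actual criterion or the generalized version proposed here.

```python
def s_index(counts_a, counts_b, tol=0.5):
    """Toy S-index: the proportion of areas where the two phenomena
    hold similar shares of their respective event totals.  The real
    test replaces `tol` with area-level bootstrap confidence intervals."""
    total_a, total_b = sum(counts_a), sum(counts_b)
    similar = 0
    for a, b in zip(counts_a, counts_b):
        pa, pb = a / total_a, b / total_b
        hi = max(pa, pb)
        # Similar if both shares are zero, or the smaller share is
        # within `tol` of the larger one.
        if hi == 0 or min(pa, pb) >= (1 - tol) * hi:
            similar += 1
    return similar / len(counts_a)
```

Identical patterns give S = 1 and fully displaced patterns give S = 0; the zero-count areas handled in the `hi == 0` branch are exactly where the original index and its robust variant diverge, which motivates the generalization.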

Design/methodology/approach

The relevance of the generalized S-index is illustrated with police data from the San Francisco Police Department. In all cases, the original S-index, its robust version – which excludes zero-crime areas – and the generalized alternative were computed.

Findings

In the first example, the number of crimes greatly exceeds the number of areas and there are no zero-value areas. A key feature of the second example is that most street segments were free of any criminal activity in both patterns. Finally, in the third case, one type of event is considerably rarer than the other. The original S-index is equal to the generalized index (Case 1) or theoretically irrelevant (Cases 2 and 3). Furthermore, the robust index is unnecessary and potentially biased when the number of events of at least one phenomenon being compared is lower than the number of areas under study. Thus, this study suggests replacing the S-index with its generalized version.

Originality/value

The generalized S-index is relevant for situations where events are relatively rare – as is the case with crime – and the unit of analysis is small but plentiful – such as addresses or street segments.

Details

Policing: An International Journal, vol. 42 no. 6
Type: Research Article
ISSN: 1363-951X

Keywords

Article
Publication date: 1 April 1990

B. Kirwan, B. Martin, H. Rycraft and A. Smith

Abstract

Human error data, in the form of human error probabilities, should ideally form the cornerstone of human reliability theory and practice. In the history of human reliability assessment, however, the collection and generation of valid and usable data have been remarkably elusive. In part the problem appears to stem from the requirement for a technique to assemble the data into meaningful assessments. There have been attempts to achieve this, THERP being one workable example of a (quasi) database which enables the data to be used meaningfully. In recent years, however, more attention has been focused on the Performance Shaping Factors (PSF) associated with human reliability. A “database for today” should therefore be developed in terms of PSF, as well as task/behavioural descriptors, and possibly even psychological error mechanisms. This presumes, however, that data on incidents and accidents are collected and categorised in terms of the PSF contributing to the incident, and such classification systems are rare in practice. The collection and generation of a small working database based on incident records are outlined; this has been possible because the incident-recording system at BNFL Sellafield does give information on PSF. Furthermore, the data have been integrated into the Human Reliability Management System, a PSF-based human reliability assessment system. Some of the data generated are presented, together with the PSF associated with them, and an outline of the incident collection system is given. Lastly, aspects of human common mode failure, or human dependent failures, particularly in the lower human error probability range, are discussed, as these are unlikely to be elicited from data collection studies yet are important in human reliability assessment. One possible approach to the treatment of human dependent failures, the use of human performance-limiting values, is described.
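The dependent-failure problem the abstract raises is commonly handled with THERP's conditional-probability equations for its five dependence levels (Swain and Guttmann). The sketch below reproduces those standard textbook formulas only; it does not use the BNFL Sellafield data or the HRMS treatment described here.

```python
# THERP conditional human error probabilities for a task, given that
# the preceding task failed, at the five standard dependence levels.
THERP_DEPENDENCE = {
    "zero":     lambda p: p,
    "low":      lambda p: (1 + 19 * p) / 20,
    "moderate": lambda p: (1 + 6 * p) / 7,
    "high":     lambda p: (1 + p) / 2,
    "complete": lambda p: 1.0,
}

def conditional_hep(basic_hep, level):
    """Conditional HEP given failure of the previous, dependent task."""
    return THERP_DEPENDENCE[level](basic_hep)

# A basic HEP of 0.01 rises to 0.0595 under low dependence and to
# 0.505 under high dependence, which is why dependent failures
# dominate at the low-probability end and why performance-limiting
# values are applied there.
```

The key property is that each formula is bounded below by a constant (1/20, 1/7, 1/2) regardless of how small the basic HEP is, mirroring the abstract's point that very low joint error probabilities cannot be claimed once dependence is present.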

Details

International Journal of Quality & Reliability Management, vol. 7 no. 4
Type: Research Article
ISSN: 0265-671X

Keywords

Abstract

Details

Travel Survey Methods
Type: Book
ISBN: 978-0-08-044662-2

Book part
Publication date: 18 December 2003

Arthur De Vany

Abstract

Details

Economics of Art and Culture: Invited Papers at the 12th International Conference of the Association of Cultural Economics International
Type: Book
ISBN: 978-0-44450-995-6
