Search results

1 – 10 of over 9000
Article
Publication date: 1 February 2005

K. Lawson

This paper compares and contrasts two approaches to the treatment of pipeline corrosion “risk” – the probabilistic approach and the more traditional, deterministic approach. The…


Abstract

Purpose

This paper compares and contrasts two approaches to the treatment of pipeline corrosion “risk” – the probabilistic approach and the more traditional, deterministic approach. The paper aims to discuss the merits and potential pitfalls of each approach.

Design/methodology/approach

Provides an outline of each approach. The probabilistic approach to the assessment of pipeline corrosion risk deals with many of the uncertainties that are common to the input data and to the predictive models employed. Rather than treating each input parameter as an average value, the approach represents the inputs as a series of probability density functions; their collective use during the assessment yields a risk profile that is quantified on the basis of uncertain data. This differs from the traditional deterministic assessment in that the output is a curve showing how the “risk” of failure increases with time: the pipeline operator simply chooses the level of risk that is acceptable and then devises a strategy to deal with those risks. The traditional (deterministic) approach merely segments the output risks as “high”, “medium” or “low”; a management strategy is then devised by selecting an inspection interval that gives a reasonable prospect of detecting deterioration before the pipeline corrosion allowance is exceeded or the line no longer complies with code. Both approaches are applied to the case of a 16.1 km long, 14 in. main export line in the North Sea.
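
As a concrete illustration of the probabilistic treatment described above, the following minimal Python sketch (the distributions, corrosion allowance and rates are invented for illustration and are not the paper's data) samples a corrosion-rate probability density function and produces a failure-probability-versus-time curve, alongside the single remaining-life estimate that a deterministic assessment based on the average rate would give. An operator would read off the year at which the curve crosses the chosen target probability.

import numpy as np

# Illustrative inputs (hypothetical values, not the paper's data)
corrosion_allowance_mm = 6.0       # wall thickness margin before the failure criterion
mean_rate_mm_per_yr = 0.25         # mean corrosion rate
rate_cov = 0.4                     # coefficient of variation of the rate
n_samples = 100_000
years = np.arange(1, 31)

rng = np.random.default_rng(0)
# Lognormal probability density function for the rate instead of a single average value
sigma = np.sqrt(np.log(1 + rate_cov**2))
mu = np.log(mean_rate_mm_per_yr) - 0.5 * sigma**2
rates = rng.lognormal(mu, sigma, n_samples)

# Probabilistic output: probability that cumulative metal loss exceeds the allowance
prob_failure = [(rates * t > corrosion_allowance_mm).mean() for t in years]

# Deterministic output: single year at which the average rate consumes the allowance
deterministic_life = corrosion_allowance_mm / mean_rate_mm_per_yr

for t, p in zip(years, prob_failure):
    if t % 5 == 0:
        print(f"year {t:2d}: P(failure) = {p:.4f}")
print(f"deterministic remaining life: {deterministic_life:.1f} years")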

Findings

The deterministic assessment yielded a worst-case failure probability of “medium” with a corresponding consequence of “high”; classifications that are clearly subjective. The probabilistic assessment quantified the pipeline failure probabilities, although more effort was required to perform it. Using target probabilities for “high” and “normal” consequence pipeline segments, the target (predicted) failure probabilities were indicated to be reached within 8.5 to 13 years, depending on how effective corrosion mitigation activities are in practice. Basing pipeline inspections on the outputs of the deterministic assessment would therefore be conservative in this instance, but this may not always be so. That the probabilistic assessment indicates inspections may justifiably be extended beyond those suggested by the deterministic assessment is a clear benefit, in that it affords the opportunity to defer expenditure on pipeline inspections to a later date, although the converse may equally be required. It may be argued, therefore, that probabilistic assessment provides a superior basis for driving pipeline corrosion management activities, given that the approach deals with the uncertainties in the basic input data.

Originality/value

A probabilistic assessment approach that effectively mirrors pipeline operations provides a superior basis upon which to manage risk and would therefore be likely to maximize both safety and business performance.

Details

Anti-Corrosion Methods and Materials, vol. 52 no. 1
Type: Research Article
ISSN: 0003-5599

Keywords

Article
Publication date: 7 October 2019

Mario Ordaz, Mario Andrés Salgado-Gálvez, Benjamín Huerta, Juan Carlos Rodríguez and Carlos Avelar

The development of multi-hazard risk assessment frameworks has gained momentum in the recent past. Nevertheless, the common practice with openly available risk data sets, such as…

Abstract

Purpose

The development of multi-hazard risk assessment frameworks has gained momentum in the recent past. Nevertheless, the common practice with openly available risk data sets, such as the ones derived from the United Nations Office for Disaster Risk Reduction Global Risk Model, has been to assess risk individually for each peril and afterwards, when possible, to aggregate the results. Although this approach is sufficient for perils that do not interact, where such interaction exists and losses can be assumed to occur simultaneously, losses may be underestimated. The paper aims to discuss these issues.

Design/methodology/approach

This paper summarizes a methodology to integrate simultaneous losses caused by earthquakes and tsunamis, with a peril-agnostic approach that can be expanded to other hazards. The methodology is applied in two relevant locations in Latin America, Acapulco (Mexico) and Callao (Peru), considering in each case building-by-building exposure databases with portfolios of different characteristics, and the results obtained with the proposed approach are compared against those obtained after the direct aggregation of individual losses.
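
A minimal sketch of the event-level aggregation idea follows (the event rate, loss distributions and exposure value are invented and this is not the paper's model): for events that trigger both an earthquake and a tsunami, summing the two losses within each event before counting exceedances captures tail losses that are missed when each peril's exceedance rate is computed separately and the rates are simply added.

import numpy as np

rng = np.random.default_rng(1)
exposure = 100.0          # total exposed value (illustrative units)
event_rate = 0.05         # average number of damaging offshore events per year
n_years = 200_000         # length of the simulated event catalogue

n_events = rng.poisson(event_rate * n_years)
# Per event: earthquake and tsunami loss fractions (hypothetical distributions)
eq_loss = exposure * rng.beta(2, 8, n_events)
ts_loss = exposure * rng.beta(2, 10, n_events)
joint_loss = np.minimum(eq_loss + ts_loss, exposure)   # simultaneous losses, capped at exposure

threshold = 40.0  # loss level of interest (a tail value here)

# Peril-by-peril practice: compute each exceedance rate separately, then add them
rate_separate = (np.sum(eq_loss > threshold) + np.sum(ts_loss > threshold)) / n_years
# Event-level integration: sum the simultaneous losses first, then count exceedances
rate_joint = np.sum(joint_loss > threshold) / n_years

print(f"annual exceedance rate, perils aggregated after the fact: {rate_separate:.5f}")
print(f"annual exceedance rate, losses summed per event:          {rate_joint:.5f}")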

Findings

The fully probabilistic risk assessment framework used herein is the same as that of the global risk model, but applied at a much higher resolution of the hazard and exposure data sets, showing its scalability and the opportunities to refine certain inputs in order to move forward into decision-making activities related to disaster risk management and reduction.

Originality/value

This paper applies for the first time the proposed methodology in a high-resolution multi-hazard risk assessment for earthquake and tsunami in two major coastal cities in Latin America.

Details

Disaster Prevention and Management: An International Journal, vol. 28 no. 6
Type: Research Article
ISSN: 0965-3562

Keywords

Article
Publication date: 7 January 2019

Ravinder Singh and Kuldeep Singh Nagla

An efficient perception of the complex environment is the foremost requirement in mobile robotics. At present, the use of glass in glass walls and automated transparent…

Abstract

Purpose

An efficient perception of the complex environment is the foremost requirement in mobile robotics. At present, the use of glass in glass walls and automated transparent doors has become a highlight feature of modern interior design, and it results in incorrect perception of the environment by various range sensors. The perception generated by multi-data sensor fusion (MDSF) of sonar and laser is fairly consistent in detecting glass, but it is still affected by issues such as sensor inaccuracies, sensor reliability, scan mismatching due to glass, the sensor model, probabilistic approaches for sensor fusion, sensor registration, etc. The paper aims to discuss these issues.

Design/methodology/approach

This paper presents a modified framework – the Advanced Laser and Sonar Framework (ALSF) – to fuse the sensory information of a laser scanner and sonar and reduce the uncertainty caused by glass in an environment by selecting the optimal range information according to a selected threshold value. In the proposed approach, the conventional sonar sensor model is also modified to reduce the erroneous perception that arises in sonar from its diverse range measurements. The laser scan matching algorithm is likewise modified by removing small clusters of laser points (with respect to range information) to obtain an efficient perception.
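
The selection rule below is only a schematic reading of this fusion idea; the threshold value, field names and the preference for the sonar return in front of glass are assumptions made for illustration, not the authors' published algorithm. Where the laser and sonar ranges disagree by more than a threshold, the shorter sonar return is taken as the more trustworthy reading, since a laser beam tends to pass through glass.

from dataclasses import dataclass

@dataclass
class RangePair:
    laser_m: float   # range reported by the laser scanner
    sonar_m: float   # range reported by the sonar along (roughly) the same bearing

def fuse_range(pair: RangePair, threshold_m: float = 0.5) -> float:
    """Pick one range per bearing for the occupancy grid update.

    If the two sensors agree within the threshold, prefer the laser
    (better angular resolution).  A large disagreement with the laser
    reading far beyond the sonar reading is treated as a transparent
    surface (e.g. glass), so the sonar range is used instead.
    """
    if abs(pair.laser_m - pair.sonar_m) <= threshold_m:
        return pair.laser_m
    if pair.laser_m > pair.sonar_m:        # laser likely shot through glass
        return pair.sonar_m
    return pair.laser_m                    # sonar artefact (e.g. specular reflection)

# Example: laser sees 6.2 m through a glass door, sonar echoes back at 1.8 m
print(fuse_range(RangePair(laser_m=6.2, sonar_m=1.8)))   # -> 1.8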

Findings

With the modified sonar sensor model, the occupancy probabilities of cells become consistent across diverse sonar range measurements. The scan matching technique is also modified to reduce the uncertainty caused by glass and the high computational load, giving efficient and fast pose estimation of the laser sensor/mobile robot for robust mapping. These modifications are combined in the proposed ALSF technique to reduce the uncertainty caused by glass, the inconsistent probabilities and the heavy computation involved in generating occupancy grid maps with MDSF. Various real-world experiments are performed with the proposed approach on a mobile robot fitted with laser and sonar, and the results are compared qualitatively and quantitatively with conventional approaches.

Originality/value

The proposed ALSF approach generates an efficient perception of complex environments containing glass and can be implemented in various robotics applications.

Details

International Journal of Intelligent Unmanned Systems, vol. 7 no. 1
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 7 January 2022

Ramon Swell Gomes Rodrigues Casado, Maisa Mendonca Silva and Lucio Camara Silva

The paper aims to propose a multi-criteria model for risk prioritisation associated with supply chain management involving multiple decision-makers.

Abstract

Purpose

The paper aims to propose a multi-criteria model for risk prioritisation associated with supply chain management involving multiple decision-makers.

Design/methodology/approach

The model integrates the composition of probabilistic preferences (CPP) with the criteria of failure mode and effects analysis (FMEA). First, the authors carried out a probabilistic transformation of the numerical evaluations given by multiple decision-makers on the FMEA criteria for the internal risks affecting the clothing pole supply chain in the Agreste region of Pernambuco. Then, the authors proposed the use of Kendall's concordance coefficient W to aggregate these evaluations.
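
Kendall's W itself is a standard statistic; the sketch below (with illustrative rankings, not the paper's data, and without the CPP transformation step) shows how the agreement among several decision-makers' FMEA-based risk rankings could be quantified with it.

import numpy as np

def kendalls_w(rank_matrix: np.ndarray) -> float:
    """Kendall's coefficient of concordance W.

    rank_matrix has one row per decision-maker and one column per risk,
    each row containing a ranking 1..n (1 = highest-priority risk).
    """
    m, n = rank_matrix.shape                 # m raters, n risks
    column_sums = rank_matrix.sum(axis=0)    # total rank received by each risk
    s = np.sum((column_sums - column_sums.mean()) ** 2)
    return 12.0 * s / (m ** 2 * (n ** 3 - n))

# Three decision-makers ranking five internal supply chain risks (hypothetical)
ranks = np.array([
    [1, 2, 3, 4, 5],
    [2, 1, 3, 5, 4],
    [1, 3, 2, 4, 5],
])
print(f"W = {kendalls_w(ranks):.3f}")   # close to 1 means strong agreement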

Findings

Contrary to expectations, the two main risks that the model suggested should be investigated were related to supply chain suppliers rather than to raw material costs. In addition, a simulation with the traditional FMEA was carried out; comparing it with the model's result highlights seven consistent differences between the two rankings.

Research limitations/implications

The focus was restricted to internal supply chain risks.

Practical implications

The proposed model can contribute to the improvement of the decisions within organisations that make up the chains, thus guaranteeing a better quality in risk management.

Originality/value

Establishing a more effective representation of the uncertain information involved in traditional FMEA with multiple decision-makers means identifying potential risks in advance, providing better supply chain control.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 3
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 3 April 2017

Pawel D. Domanski and Mateusz Gintrowski

This paper aims to present the results of the comparison between different approaches to the prediction of electricity prices. It is well-known that the properties of the data…

Abstract

Purpose

This paper aims to present the results of a comparison between different approaches to the prediction of electricity prices. It is well known that the properties of the data generation process may favour some modeling methods over others. Data originating in social or market processes are characterized by an unexpectedly wide realization space, resulting in long tails in the probability density function. Such data may not be easy to predict as time series using standard approaches based on normal distribution assumptions. Electricity prices on the deregulated market fall into this category.

Design/methodology/approach

The paper presents alternative approaches, i.e. memory-based prediction and a fractal approach, compared with the established nonlinear method of neural networks. The appropriate interpretation of results is supported with statistical data analysis and data conditioning. These algorithms have been applied to the problem of energy price prediction on the deregulated electricity market with data from the Polish and Austrian energy stock exchanges.
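
Memory-based (analogue, nearest-neighbour) prediction can be sketched roughly as follows; the window length, number of neighbours and the synthetic price series are illustrative assumptions, not the authors' specific history-searching routine.

import numpy as np

def memory_based_forecast(history: np.ndarray, window: int = 24, k: int = 5) -> float:
    """Predict the next value by searching the history for the k past windows
    most similar to the most recent one and averaging what followed them."""
    recent = history[-window:]
    candidates = []
    for start in range(len(history) - window - 1):
        segment = history[start:start + window]
        distance = np.linalg.norm(segment - recent)
        candidates.append((distance, history[start + window]))   # value that followed
    candidates.sort(key=lambda c: c[0])
    return float(np.mean([nxt for _, nxt in candidates[:k]]))

# Synthetic hourly price series with a daily cycle plus heavy-tailed noise
rng = np.random.default_rng(2)
t = np.arange(24 * 60)
prices = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.standard_cauchy(t.size).clip(-5, 5)

print(f"next-hour forecast: {memory_based_forecast(prices):.2f}")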

Findings

The first outcome of the analysis is that there are several situations in time series prediction when the standard modeling approach – based on the assumption that each change is independent of the last and follows a random Gaussian bell pattern – may not hold true. In this paper such a case was considered: price data from energy markets. Electricity price data are biased by human nature. It is shown that the Cauchy probability distribution was more relevant to the data properties. Results have shown that alternative approaches may be used, and for both data sets the memory-based approach gave the best prediction performance.

Research limitations/implications

“Personalization” of the model is a crucial aspect of the whole methodology. All available knowledge about the forecast phenomenon should be used and incorporated into the model. In the case of memory-based modeling, this takes the form of a specific design of the history-searching routine that uses an understanding of the process features. The emphasis should shift towards methodology structure design and algorithm customization, and only then to parameter estimation. Such a modeling approach may be more descriptive for the user, enabling understanding of the process and further iterative improvement in a continuous striving for perfection.

Practical implications

Memory-based modeling can be applied in practice. These models have large potential that is worth exploiting. One disadvantage of this modeling approach is the large computational effort connected with the need for constant evaluation of large data sets. It was shown that a graphics processing unit (GPU) approach, through parallel calculation on graphics cards, can improve this dramatically.

Social implications

The modeling of electricity prices has a big impact on the daily operation of electricity traders and distributors. Appropriate modeling can improve performance, mitigating the risks associated with the process. Thus, end users should receive a higher quality of service, ultimately with lower prices and a minimized risk of energy loss incidents.

Originality/value

The use of alternative approaches, such as memory-based reasoning or fractals, is very rare in the field of electricity price forecasting. Thus, it gives a new impetus to further research, enabling the development of better solutions that incorporate all available process knowledge and customized hybrid algorithms.

Details

International Journal of Energy Sector Management, vol. 11 no. 1
Type: Research Article
ISSN: 1750-6220

Keywords

Article
Publication date: 13 September 2011

Imoh Antai

The purpose of this paper is to propose a conceptualization of supply chain vs supply chain competition using the ecological niche approach. It suggests a probabilistic…


Abstract

Purpose

The purpose of this paper is to propose a conceptualization of supply chain vs supply chain competition using the ecological niche approach. It suggests a probabilistic methodology for evaluating competition from time series data, using overlap in the utilization of services provided by critical providers as a source of competition.

Design/methodology/approach

Literature on ecological niche theory and competition is explored and, given the uncertainty that surrounds the operation and management of supply chains, a probabilistic approach to the analysis of supply chain vs supply chain competition (via Bayesian inference) is advocated. Simulated data are used to illustrate the methodology.
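
One common way to quantify the overlap that this kind of analysis builds on is Pianka's niche overlap index; the sketch below (the choice of index and the utilization data are illustrative assumptions, not the author's exact formulation) computes overlap between two supply chains from the proportions in which they use a set of critical service providers.

import numpy as np

def pianka_overlap(p: np.ndarray, q: np.ndarray) -> float:
    """Pianka's symmetric niche overlap index between two 'species'
    (here: supply chains), given their utilization proportions of the
    same set of resources (critical service providers)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * q) / np.sqrt(np.sum(p ** 2) * np.sum(q ** 2)))

# Share of demand each chain places on four critical providers (hypothetical)
chain_a = np.array([0.50, 0.30, 0.15, 0.05])
chain_b = np.array([0.40, 0.35, 0.20, 0.05])
print(f"niche overlap: {pianka_overlap(chain_a, chain_b):.3f}")   # 1.0 = identical niches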

Findings

Should an area of overlap be identified, ecological niche theory provides a sensible approach to identifying the nature and extent of competition between supply chains. Applicability of the methodology is not limited to supply chain vs supply chain competition.

Research limitations/implications

The data used for the analysis of competition between supply chains are computer generated and use a single niche dimension. Although this was done to merely test/validate the proposed model, the approach is somewhat oversimplified. However, the model is readily extendable to multiple niche dimensions.

Originality/value

The proposed approach offers a simple and straightforward method of estimating competition in general, and supply chain vs supply chain competition in particular. Attempts to use the niche theory of competition in this context have so far been inconspicuous. Hence, approaching competition in this way contributes to furthering our understanding of competitive interaction, especially in supply chains, where its prospects have yet to be pointed out in the literature.

Article
Publication date: 14 April 2022

Nadeeshani Wanigarathna, Keith Jones, Federica Pascale, Mariantonietta Morga and Abdelghani Meslem

Recent earthquake-induced liquefaction events and associated losses have increased researchers’ interest in liquefaction risk reduction interventions. To the best of the…

Abstract

Purpose

Recent earthquake-induced liquefaction events and associated losses have increased researchers’ interest in liquefaction risk reduction interventions. To the best of the authors’ knowledge, there was no scholarly literature on the economic appraisal of these risk reduction interventions. The purpose of this paper is to investigate the issues in applying cost–benefit analysis (CBA) principles to the evaluation of technical mitigations to reduce earthquake-induced liquefaction risk.

Design/methodology/approach

CBA has been used substantially for risk mitigation option appraisal for a number of hazard threats. Previous literature in the form of systematic reviews, individual research and case studies, together with liquefaction risk and loss modelling literature, was used to develop a theoretical CBA model for earthquake-induced liquefaction mitigation interventions. The model was tested using a scenario in a two-day workshop.
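
For orientation, a CBA of a mitigation intervention typically compares the present value of avoided expected losses with the intervention cost; the figures below are invented for illustration and are not from the paper's workshop scenario.

def benefit_cost_ratio(intervention_cost: float,
                       annual_loss_without: float,
                       annual_loss_with: float,
                       discount_rate: float,
                       horizon_years: int) -> float:
    """Ratio of the present value of avoided expected annual losses
    to the up-front cost of the mitigation intervention."""
    avoided_annual = annual_loss_without - annual_loss_with
    pv_factor = sum(1.0 / (1.0 + discount_rate) ** t
                    for t in range(1, horizon_years + 1))
    return avoided_annual * pv_factor / intervention_cost

# Hypothetical ground-improvement scheme for a liquefaction-prone site
bcr = benefit_cost_ratio(intervention_cost=2_000_000,
                         annual_loss_without=120_000,    # expected annual loss, do nothing
                         annual_loss_with=30_000,        # expected annual loss, mitigated
                         discount_rate=0.04,
                         horizon_years=50)
print(f"benefit-cost ratio: {bcr:.2f}")   # > 1 suggests the intervention is worthwhile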

Findings

Because liquefaction risk reduction techniques are relatively new, there is limited damage modelling and cost data available for use within CBAs. As such, end users need to make significant assumptions when linking the results of technical investigations of damage to built-asset performance and probabilistic loss modelling, with the result that many potential interventions are not cost-effective for low-impact disasters. This study questions whether a probabilistic approach should really be applied to localised rapid-onset events like liquefaction, arguing that a deterministic approach based on localised knowledge and context would be a better basis for assessing the cost-effectiveness of mitigation interventions.

Originality/value

This paper makes an original contribution to the literature through a critical review of CBA approaches applied to disaster mitigation interventions. Further, it identifies challenges and limitations of applying probabilistic CBA models to localised rapid-onset disaster events where human losses are minimal and historical data are sparse, challenging researchers to develop new deterministic approaches that use localised knowledge and context to evaluate the cost-effectiveness of mitigation interventions.

Details

Journal of Financial Management of Property and Construction, vol. 28 no. 2
Type: Research Article
ISSN: 1366-4387

Keywords

Article
Publication date: 23 August 2018

Giovanni Falsone and Rossella Laudani

This paper aims to present an approach for the probabilistic characterization of the response of linear structural systems subjected to random time-dependent non-Gaussian actions.

Abstract

Purpose

This paper aims to present an approach for the probabilistic characterization of the response of linear structural systems subjected to random time-dependent non-Gaussian actions.

Design/methodology/approach

Its fundamental property is that it works directly on the probability density functions of the actions and responses. This avoids passing through the evaluation of the response statistical moments or cumulants, reducing the computational effort considerably.
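
The essence of working directly with probability density functions can be illustrated with the elementary change-of-variables rule for a linear response. The one-variable static sketch below is only an illustration of that general idea (not the authors' dynamic formulation): it maps the density of a non-Gaussian action X straight onto the density of the response Y = a*X + b and checks the result against Monte Carlo samples.

import numpy as np

def response_pdf(f_x, a: float, b: float):
    """Density of Y = a*X + b obtained directly from the density of X:
    f_Y(y) = f_X((y - b) / a) / |a|."""
    return lambda y: f_x((y - b) / a) / abs(a)

# Non-Gaussian action: standard Laplace density
f_x = lambda x: 0.5 * np.exp(-np.abs(x))
a, b = 2.5, 1.0                 # linear system: Y = 2.5 X + 1
f_y = response_pdf(f_x, a, b)

# Cross-check the transformed density against Monte Carlo samples
rng = np.random.default_rng(3)
samples = a * rng.laplace(size=1_000_000) + b
hist, edges = np.histogram(samples, bins=50, range=(-10, 12), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
print(f"max |analytic - empirical| = {np.max(np.abs(f_y(centres) - hist)):.4f}")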

Findings

The method is efficient, in terms of both computational effort and accuracy, above all when the input and output processes are strongly non-Gaussian.

Originality/value

This approach can be considered as a dynamic generalization of the probability transformation method recently used for static applications.

Details

Engineering Computations, vol. 35 no. 5
Type: Research Article
ISSN: 0264-4401

Keywords

Article
Publication date: 5 January 2021

Xu Xiuqin, Xie Jialiang, Yue Na and Wang Honghui

The purpose of this paper is to develop a probabilistic uncertain linguistic (PUL) TODIM method based on the generalized Choquet integral, with respect to the interdependencies…

Abstract

Purpose

The purpose of this paper is to develop a probabilistic uncertain linguistic (PUL) TODIM method based on the generalized Choquet integral, with respect to the interdependencies between criteria, for the selection of the best alternative in the context of multiple criteria group decision-making (MCGDM).

Design/methodology/approach

Because decision-makers (DMs) are not always completely rational and may exhibit bounded rational behavior, the result of the MCGDM may be affected. At the same time, criteria interaction is a central issue in MCGDM. Hence, a novel TODIM method based on the generalized Choquet integral selects the best alternative using PUL evaluations, where the generalized Choquet integral is used to calculate the weight of each criterion. The generalized PUL distance between two probabilistic uncertain linguistic elements (PULEs) is calculated and the perceived dominance degree matrices of each alternative relative to the others are obtained. The comprehensive perceived dominance degree of each alternative is then calculated to obtain the ranking.
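
For readers unfamiliar with the dominance-degree idea behind TODIM, the sketch below implements only the classical crisp form of the method (not the PUL or Choquet-integral variant proposed here); the score matrix, weights and loss-attenuation factor are invented for illustration.

import numpy as np

def todim_ranking(scores: np.ndarray, weights: np.ndarray, theta: float = 1.0) -> np.ndarray:
    """Classical (crisp) TODIM: pairwise dominance degrees aggregated into a
    normalized global value per alternative.  scores[i, c] is the performance
    of alternative i on (benefit) criterion c."""
    weights = weights / weights.sum()
    w_ref = weights / weights.max()          # weights relative to the reference criterion
    n, m = scores.shape
    delta = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            for c in range(m):
                diff = scores[i, c] - scores[j, c]
                if diff > 0:                                  # gain for alternative i
                    delta[i, j] += np.sqrt(w_ref[c] * diff / w_ref.sum())
                elif diff < 0:                                # loss, attenuated by theta
                    delta[i, j] -= np.sqrt(w_ref.sum() * (-diff) / w_ref[c]) / theta
    totals = delta.sum(axis=1)
    return (totals - totals.min()) / (totals.max() - totals.min())

# Four alternatives rated on three criteria (hypothetical, normalized to [0, 1])
scores = np.array([[0.7, 0.4, 0.9],
                   [0.5, 0.8, 0.6],
                   [0.9, 0.3, 0.5],
                   [0.6, 0.6, 0.7]])
weights = np.array([0.5, 0.3, 0.2])
print(todim_ranking(scores, weights))   # 1.0 marks the best alternative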

Findings

Potential application of the PUL-TODIM method is demonstrated through an evaluation example with sensitivity and comparative analysis.

Originality/value

To the best of the authors' knowledge, there are no TODIM methods using probabilistic uncertain linguistic term sets (PULTSs) to solve MCGDM problems under uncertainty. Compared with the results of existing methods, the final judgment values of the alternatives obtained with the extended TODIM methodology are highly corroborated, which demonstrates its potential for solving MCGDM problems in qualitative and quantitative environments.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 2
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 17 October 2008

Li‐Ping He and Fu‐Zheng Qu

To survey the approaches to design optimization based on possibility theory and evidence theory comparatively, as well as their prominent characteristics mainly for epistemic…

Abstract

Purpose

To comparatively survey approaches to design optimization based on possibility theory and evidence theory, as well as their prominent characteristics, mainly for epistemic uncertainty.

Design/methodology/approach

Owing to the uncertainties encountered in engineering design problems and the limitations of the conventional probabilistic approach in handling imprecise data or knowledge, possibility-based design optimization (PBDO), evidence-based design optimization (EBDO) and their integrated approaches are investigated from the viewpoints of computational development and performance improvement. The paper then discusses fusion technologies and an example of an integrated approach to reliability to reveal their qualitative value and efficiency.
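
As a reminder of what evidence theory adds over a single probability number, the sketch below (the intervals and masses are invented for illustration, not taken from the paper) computes the belief and plausibility that an imprecisely known design parameter stays below a limit, given a basic probability assignment over focal intervals.

from typing import List, Tuple

def belief_plausibility(focal: List[Tuple[float, float, float]], limit: float) -> Tuple[float, float]:
    """Belief and plausibility that a variable is <= limit, given focal
    intervals (lo, hi, mass) forming a basic probability assignment."""
    belief = sum(m for lo, hi, m in focal if hi <= limit)        # interval entirely below the limit
    plausibility = sum(m for lo, hi, m in focal if lo <= limit)  # interval at least partly below it
    return belief, plausibility

# Expert-elicited intervals for an imprecise load factor (hypothetical)
bpa = [(0.8, 1.0, 0.3),
       (0.9, 1.2, 0.5),
       (1.1, 1.4, 0.2)]
bel, pl = belief_plausibility(bpa, limit=1.2)
print(f"Bel = {bel:.2f}, Pl = {pl:.2f}")   # a conservative design would work with Bel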

Findings

It is recognized that more conservative results are obtained with both PBDO and EBDO, which may be appropriate for design against catastrophic failure, compared with probability-based design. Furthermore, the EBDO design may be less conservative than the PBDO design.

Research limitations/implications

How to perfect existing integration approaches within a more general analytical framework is still an active domain of research.

Practical implications

The paper is a holistic reference for design engineers and industry managers.

Originality/value

The paper is focused on decomposition strategies and fusion technologies, especially addressing epistemic uncertainty for large‐scale and complex systems when statistical data are scarce or incomplete.

Details

Kybernetes, vol. 37 no. 9/10
Type: Research Article
ISSN: 0368-492X

Keywords
