Search results

1 – 10 of over 11000
Article
Publication date: 1 December 2003

Da Ruan, Jun Liu and Roland Carchon

Abstract

A flexible and realistic linguistic assessment approach is developed to provide a mathematical tool for the synthesis and evaluation of nuclear safeguards indicator information. This symbolic approach, which computes directly on linguistic terms, is established on the basis of fuzzy set theory. More specifically, a lattice‐valued linguistic algebra model, based on the logical algebraic structure of lattice implication algebra, is applied to represent imprecise information and to deal with both comparable and incomparable linguistic terms (i.e. non‐ordered linguistic values). Within this framework, some weighted aggregation functions introduced by Yager are analyzed and extended to treat this kind of lattice‐valued linguistic information. The application of these linguistic aggregation operators to managing nuclear safeguards indicator information is successfully demonstrated.
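As a rough illustration of symbolic computation on linguistic terms, the sketch below aggregates assessments over a totally ordered term set by taking a weighted average of term indices and rounding back to a term. This is a minimal, hypothetical example: the paper's lattice‐valued model additionally handles incomparable terms, which a simple chain like this cannot represent.

```python
# Totally ordered linguistic term set (a chain). The paper's lattice model
# also admits incomparable terms, which this sketch deliberately omits.
TERMS = ["very low", "low", "medium", "high", "very high"]

def aggregate(assessments, weights):
    """Weighted symbolic aggregation: convex combination of term indices,
    rounded back to the nearest term in the chain."""
    if len(assessments) != len(weights):
        raise ValueError("one weight per assessment")
    total = sum(weights)
    idx = sum(TERMS.index(a) * w for a, w in zip(assessments, weights)) / total
    return TERMS[round(idx)]
```

For example, two equally weighted assessments of "low" and "high" aggregate to "medium".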

Details

Logistics Information Management, vol. 16 no. 6
Type: Research Article
ISSN: 0957-6053

Details

The Theory of Monetary Aggregation
Type: Book
ISBN: 978-0-44450-119-6

Open Access
Article
Publication date: 25 February 2020

Zsolt Tibor Kosztyán, Tibor Csizmadia, Zoltán Kovács and István Mihálcz

Abstract

Purpose

The purpose of this paper is to generalize traditional risk evaluation methods and to specify a multi-level risk evaluation framework, in order to support customized risk evaluation and to enable the effective integration of the elements of risk evaluation.

Design/methodology/approach

A real case study of an electric motor manufacturing company is presented to illustrate the advantages of this new framework compared to the traditional and fuzzy failure mode and effect analysis (FMEA) approaches.
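For context, the traditional FMEA baseline mentioned above scores each failure mode by severity, occurrence and detection on 1–10 scales and multiplies them into a risk priority number (RPN). The sketch below shows that standard calculation only; it is not the paper's TREF, and the example failure modes in the usage note are invented.

```python
def rpn(severity, occurrence, detection):
    """Classic FMEA risk priority number: product of three 1-10 ratings."""
    for v in (severity, occurrence, detection):
        if not 1 <= v <= 10:
            raise ValueError("factors are rated on a 1-10 scale")
    return severity * occurrence * detection

def rank_failure_modes(modes):
    """Order (name, severity, occurrence, detection) tuples, highest RPN first."""
    return sorted(modes, key=lambda m: rpn(*m[1:]), reverse=True)
```

For instance, a hypothetical "bearing wear" mode rated (9, 2, 8) yields RPN 144 and outranks a "seal leak" rated (7, 4, 3) with RPN 84.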

Findings

The essence of the proposed total risk evaluation framework (TREF) is its flexible approach that enables the effective integration of firms’ individual requirements by developing tailor-made organizational risk evaluation.

Originality/value

Increasing product/service complexity has led to increasingly complex yet unique organizational operations; as a result, risk evaluation is a very challenging task. Distinct structures, characteristics and processes within and between organizations require a flexible yet robust approach to evaluating risks efficiently. Most recent risk evaluation approaches are inadequate because they lack the flexibility and structure needed to address unique organizational demands and contextual factors. This study addresses that challenge by taking a crucial step toward the customization of risk evaluation.

Details

International Journal of Quality & Reliability Management, vol. 37 no. 4
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 19 June 2009

Chantola Kit, Toshiyuki Amagasa and Hiroyuki Kitagawa

Abstract

Purpose

The purpose of this paper is to propose efficient algorithms for structural grouping over Extensible Markup Language (XML) data, called TOPOLOGICAL ROLLUP (T‐ROLLUP), which are to compute aggregation functions based on XML data with multiple hierarchical levels. They play important roles in the online analytical processing of XML data, called XML‐OLAP, with which complex analysis over XML can be performed to discover valuable information from XML.

Design/methodology/approach

Several variations of algorithms are proposed for efficient T‐ROLLUP computation. First, two basic algorithms, the top‐down algorithm (TDA) and the bottom‐up algorithm (BUA), are presented, in which well‐known structural‐join algorithms are used. The paper then proposes more efficient algorithms, called single‐scan by preorder number and single‐scan by postorder number (SSC‐Pre/Post), which are also based on structural joins but modify the basic algorithms so that multiple levels of grouping are computed within a single scan over the node lists. In addition, the paper adapts the algorithms for parallel execution in multi‐core environments.
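The preorder/postorder numbering underlying structural joins can be sketched briefly: node a is an ancestor of node d exactly when pre(a) < pre(d) and post(a) > post(d). The minimal Python sketch below (function names are hypothetical) shows only the numbering and the containment test, not the T‐ROLLUP algorithms themselves.

```python
def number_tree(root, children):
    """Assign preorder/postorder numbers by depth-first traversal.
    With these numbers, a is an ancestor of d iff
    pre[a] < pre[d] and post[a] > post[d]."""
    pre, post, counters = {}, {}, [0, 0]

    def dfs(node):
        pre[node] = counters[0]
        counters[0] += 1
        for child in children.get(node, []):
            dfs(child)
        post[node] = counters[1]
        counters[1] += 1

    dfs(root)
    return pre, post

def is_ancestor(a, d, pre, post):
    """Constant-time ancestor test using the pre/post numbering."""
    return pre[a] < pre[d] and post[a] > post[d]
```

This constant-time containment test is what lets structural-join-based algorithms group nodes by hierarchy level in a single scan over node lists.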

Findings

Several experiments are conducted with XMark and synthetic XML data to show the effectiveness of the proposed algorithms. The experiments show that the proposed algorithms perform much better than a naïve implementation. In particular, SSC‐Pre and SSC‐Post outperform TDA and BUA in all cases. Beyond that, the parallel single‐scan algorithm also outperforms the ordinary basic algorithm.

Research limitations/implications

This paper focuses on the T‐ROLLUP operation for XML data analysis. Other operations related to XML‐OLAP, such as CUBE, WINDOWING and RANKING, remain to be investigated.

Originality/value

The paper presents an extended version of one of the award winning papers at iiWAS2008.

Details

International Journal of Web Information Systems, vol. 5 no. 2
Type: Research Article
ISSN: 1744-0084

Details

Understanding Mattessich and Ijiri: A Study of Accounting Thought
Type: Book
ISBN: 978-1-78714-841-3

Content available
Book part
Publication date: 2 July 2004

Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Article
Publication date: 1 June 2010

Bent Helge Nystad and Magnus Rasmussen

Abstract

Purpose

The purpose of this paper is to predict the remaining useful life of a natural gas export compressor, in order to assist decision making of the next planned work order.

Design/methodology/approach

Extraction and aggregation of information from rapid developing condition‐monitoring systems has given rise to the Technical Condition Index (TCI) methodology. The trends of aggregated TCIs at compressor level and historical work orders were used as the basis for remaining useful life estimation.
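A common convention for aggregating component‐level TCIs to equipment level is a weighted average on a 0–100 condition scale; the sketch below assumes that convention, which is illustrative rather than taken from the paper.

```python
def aggregate_tci(component_tcis, weights):
    """Weighted average of component-level technical condition indices
    (0 = failed, 100 = as-new) into one equipment-level TCI.
    The weights, reflecting component criticality, are an assumption."""
    if len(component_tcis) != len(weights):
        raise ValueError("one weight per component")
    return sum(t * w for t, w in zip(component_tcis, weights)) / sum(weights)
```

Trending such an aggregated index over time, together with historical work orders, is the kind of input the paper uses for remaining-useful-life estimation.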

Findings

The model merges several condition‐related measurements and quantifies belief in aging versus belief in condition monitoring. This is important information in, for example, maintenance policy selection and the choice of a remaining useful life approach.

Practical implications

The model requires historical failure data and well documented condition‐related measurements. Investigation of the physics of failure at the component level also seems important for prognostic theory development.

Originality/value

The proposed methodology combines the TCI methodology, the survival analysis (PHM) methodology, and the general maximum‐likelihood theory to estimate and validate parameters and remaining useful life.

Details

Journal of Quality in Maintenance Engineering, vol. 16 no. 2
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 1 May 2006

Giancarlo Barbiroli, Giovanni Casalicchio and Andrea Raggi

Abstract

Purpose

An extensive knowledge of soil characteristics and an awareness of the type and amount of pollutants present are essential to evaluate the utilization potential of soil resources for various land uses. On the other hand, for management and decision‐making purposes, concise indices are needed to adequately express the overall quality of soil resources. The aim of this study was to introduce a flexible soil quality index system, based mainly on fertility and the presence of pollutants.

Design/methodology/approach

A tree‐structured approach was adopted, leading from various intermediate sub‐indices to a concise final index. More specifically, a number of physical, chemical and biological parameters lead to an agronomic quality index (AQI). Another group of parameters, referring to polluting substances of various origins, is combined to give a multifunctional quality index (MQI). AQI and MQI are then coupled into an overall general quality index. The proposed model was implemented using a set of data from various specific sites throughout Italy.
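The tree‐structured aggregation can be sketched as weighted means for the sub‐indices plus a coupling rule for the final index. The minimum rule used below for coupling AQI and MQI (letting pollution cap overall quality regardless of fertility) is an illustrative assumption, not the paper's actual combination rule.

```python
def sub_index(scores, weights):
    """Weighted mean of normalized parameter scores (0-1 scale);
    used here for both the AQI and the MQI branches of the tree."""
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

def overall_index(aqi, mqi):
    """Couple the agronomic and multifunctional indices. Taking the
    minimum is one illustrative choice: heavy pollution (low MQI)
    then caps overall quality even on fertile soil."""
    return min(aqi, mqi)
```

Other coupling rules (e.g. a geometric mean) trade off the sub-indices differently; the tree structure itself is what keeps the final index traceable to its sources.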

Findings

The proposed methodology proved useful in providing valuable and quick information about the overall soil quality performance and its main sources, and in helping to balance the contrasting needs of obtaining concise quantitative information, on the one hand, and of minimizing the inevitable loss of information inherent in every process of synthesis, on the other hand.

Originality/value

This paper presents an innovative quality index structure for the environmental and multifunctional management of soil, whose main value is related to its flexibility (e.g. different number and kind of parameters, various levels of aggregation) which makes it applicable to various contexts.

Details

Management of Environmental Quality: An International Journal, vol. 17 no. 3
Type: Research Article
ISSN: 1477-7835

Article
Publication date: 23 November 2010

Nils Hoeller, Christoph Reinke, Jana Neumann, Sven Groppe, Christian Werner and Volker Linnemann

Abstract

Purpose

In the last decade, XML has become the de facto standard for data exchange on the world wide web (WWW). Its support for system and software heterogeneity at the application level and its easy WWW integration make XML an ideal data format for many other application and network scenarios, such as wireless sensor networks (WSNs). Moreover, using XML encourages standardized techniques such as SOAP for adapting the service‐oriented paradigm to sensor network engineering. Nevertheless, integrating XML into WSN data management is constrained by low hardware resources, which call for efficient XML data management strategies capable of bridging this resource gap. The purpose of this paper is to present two separate strategies for integrating XML data management in WSNs.

Design/methodology/approach

The paper presents two separate strategies for integrating XML data management in WSNs, both of which have been implemented and run on today's sensor node platforms. The paper shows how XML data can be processed and how XPath queries can be evaluated dynamically. In an extended evaluation, the memory and energy efficiency of both strategies are compared, and both solutions are shown to have application domains fully covered by today's sensor node products.

Findings

This work shows that dynamic XML data management and query evaluation is possible on sensor nodes with strict limitations in terms of memory, processing power and energy supply.

Originality/value

The paper presents an optimized stream‐based XML compression technique and shows how XML queries can be evaluated on compressed XML bit streams using generic pushdown automata. To the best of the authors' knowledge, this is the first complete approach on integrating dynamic XML data management into WSNs.
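In the spirit of the pushdown‐automaton approach, a simple absolute child‐axis path such as /a/b can be matched against a stream of start/end element events with nothing more than a stack of open element names. The sketch below works on plain event streams; the paper's contribution is evaluating such queries directly on compressed XML bit streams, which this sketch does not attempt.

```python
def match_path(events, path):
    """Count elements matching a simple absolute child-axis path
    (e.g. ["a", "b"] for /a/b) over a stream of
    ("start", name) / ("end", name) events."""
    stack, hits = [], 0
    for kind, name in events:
        if kind == "start":
            stack.append(name)
            if stack == path:  # current root-to-node path equals the query
                hits += 1
        else:
            stack.pop()
    return hits
```

Because the only state is the stack of open elements, memory use is bounded by document depth rather than document size, which is what makes this style of evaluation feasible on resource-constrained sensor nodes.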

Details

International Journal of Web Information Systems, vol. 6 no. 4
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 11 November 2014

Mihaela Dinsoreanu and Rodica Potolea

Abstract

Purpose

The purpose of this paper is to address the challenge of opinion mining in text documents to perform further analysis such as community detection and consistency control. More specifically, we aim to identify and extract opinions from natural language documents and to represent them in a structured manner to identify communities of opinion holders based on their common opinions. Another goal is to rapidly identify similar or contradictory opinions on a target issued by different holders.

Design/methodology/approach

For the opinion extraction problem we opted for a supervised approach focusing on the feature selection problem to improve our classification results. On the community detection problem, we rely on the Infomap community detection algorithm and the multi-scale community detection framework used on a graph representation based on the available opinions and social data.

Findings

The classification performance in terms of precision and recall was significantly improved by adding a set of “meta-features” based on grouping rules for certain parts of speech (POS) instead of the actual words. For the evaluation of the community detection feature, two quality metrics were used: network modularity and normalized mutual information (NMI). We evaluated seven one-target similarity functions and ten multi-target aggregation functions and concluded that linear functions perform poorly for data sets with multiple targets, while functions that calculate the average similarity have greater resilience to noise.
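The contrast between linear and averaging multi‐target aggregation can be illustrated with a toy similarity measure: averaging over shared targets keeps scores comparable across holders with different numbers of targets, whereas a plain sum grows with the target count. The polarity‐based similarity and all names below are hypothetical, not the paper's actual functions.

```python
def target_similarity(p1, p2):
    """Toy one-target similarity of two opinion polarities in [-1, 1]:
    1.0 for identical polarity, 0.0 for fully opposed."""
    return 1.0 - abs(p1 - p2) / 2.0

def average_similarity(holder1, holder2):
    """Average similarity over the targets two opinion holders share.
    Averaging (rather than summing per-target similarities) keeps the
    score in [0, 1] regardless of how many targets are shared."""
    shared = set(holder1) & set(holder2)
    if not shared:
        return 0.0
    return sum(target_similarity(holder1[t], holder2[t])
               for t in shared) / len(shared)
```

Two holders who agree on "phone" (both +1.0) but fully disagree on "camera" score (1.0 + 0.0) / 2 = 0.5 here, independently of how many other targets either holder rated.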

Originality/value

Although our solution relies on existing approaches, we adapted and integrated them in an efficient manner. Building on the initial experimental results, we integrated original enhancements that improve performance.

Details

International Journal of Web Information Systems, vol. 10 no. 4
Type: Research Article
ISSN: 1744-0084
