Search results

1 – 10 of over 11000
Article
Publication date: 26 August 2014

Florian Johannsen, Susanne Leist and Reinhold Tausch

Abstract

Purpose

The purpose of this paper is to specify the decomposition conditions of Wand and Weber for the Business Process Model and Notation (BPMN). To this end, an interpretation of the conditions for BPMN is derived and compared to a specification of the conditions for enhanced Event-Driven Process Chains (eEPCs). Based on these results, guidelines for checking the conformance of BPMN and eEPC models with the decomposition conditions are shown. Further, decomposition guidelines are formulated for BPMN models. The usability of the decomposition guidelines is tested with modelling experts.

Design/methodology/approach

An approach building on a representational mapping is used to specify the decomposition conditions: ontological constructs of the Bunge-Wand-Weber ontology are mapped to corresponding modelling constructs, and an interpretation of the decomposition conditions for BPMN is derived. Guidelines for a conformance check are then defined and, based on these results, decomposition guidelines are formulated. Their usability is tested in interviews.

Findings

The research shows that the decomposition conditions stemming from the information systems discipline can be transferred to business process modelling. However, the interpretation of the decomposition conditions depends on specific characteristics of a modelling language. Based on a thorough specification of the conditions, it is possible to derive guidelines for a conformance check of process models with the conditions. In addition, guidelines for decomposition are developed and tested. In the study, these are perceived as understandable and helpful by experts.

Research limitations/implications

Research approaches based on representational mappings are subject to subjectivity. However, having three researchers perform the mapping independently mitigates this subjectivity. Further, only ten experts participated in the usability test, which should therefore be considered a first step towards a more comprehensive evaluation.

Practical implications

This paper provides the process modeller with guidelines enabling a conformance check of BPMN and eEPC process models with the decomposition conditions. Further, guidelines for decomposing BPMN models are introduced.

Originality/value

This paper is the first to specify Wand and Weber's decomposition conditions for process modelling with BPMN. A comparison to eEPCs shows that ontological expressiveness influences the interpretation of the conditions. Further, guidelines for decomposing BPMN models, as well as for checking their adherence to the decomposition conditions, are presented.

Details

Business Process Management Journal, vol. 20 no. 5
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 1 January 1990

Mohamed E. Ibrahim, Saad A. Metawae and Ibrahim M. Aly

Abstract

In recent years, a sizeable amount of research in finance and accounting has been devoted to the issue of bond ratings and bond rating changes. A major thrust of these research efforts was to develop and test prediction‐based models using mainly financial ratios and their trends. This paper tests the ability of statistical decomposition analysis of financial statements to predict bond rating changes. The results show that decomposition analysis scarcely outperforms the a priori probability model and is no better than multiple discriminant analysis using simple financial ratios.

One important piece of information for participants in debt markets is the assessment of the relative risk associated with a particular bond issue, commonly known as its bond rating. These ratings, however, are not usually fixed for the life of the issue. From time to time, the rating agencies review their ratings of outstanding bond issues and make changes (either upward or downward) when needed. Over the years, researchers have attempted to develop and test prediction‐based models to predict bond ratings or bond rating changes. These prediction models employ variables assumed to reflect the rating agencies' decision‐making. Although the rating process is complicated and based mainly on judgmental considerations, Hawkins, Brown and Campbell (1983, p. 95) reported that academic research strongly suggests a reliable estimate of a potential bond rating or rating change can be determined from a few key financial ratios. Information theory decomposition measures have received considerable attention in recent years as a potential tool for predicting corporate events, notably corporate bankruptcy (e.g., Lev 1970; Moyer 1977; Walker, Stowe and Moriarity 1979; Booth 1983).
The underlying proposition in these studies is that corporate failure, as an event, is expected to be preceded by significant changes in the company's assets and liabilities structure. Although the event of bond rating changes is different from the bankruptcy event in terms of consequences, one can still propose that a bond rating change, as a corporate event, is also expected to be preceded by some significant changes in the company's assets and liabilities structure. Therefore, the decomposition analysis may have a predictive ability in the case of bond rating changes. The purpose of this paper is to empirically test and compare the classification and predictive accuracy of the decomposition analysis with the performance of a multiple discriminant model that uses financial ratios and their trends in the context of bond rating changes.
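
The information-theoretic decomposition measure underlying this line of work has a standard form: with p_i and q_i denoting an item's share of the balance-sheet total in two consecutive periods, the measure is I = Σ q_i ln(q_i / p_i), which is zero when the structure is unchanged and grows with structural shifts. A minimal sketch, illustrative rather than the paper's exact implementation:

```python
import math

def decomposition_measure(prior, current):
    """Information-theoretic balance-sheet decomposition measure:
    I = sum(q_i * ln(q_i / p_i)), where p_i and q_i are each item's
    share of the balance-sheet total in the prior and current period."""
    p = [x / sum(prior) for x in prior]
    q = [x / sum(current) for x in current]
    # Terms with q_i == 0 are skipped (the q*ln(q) limit is zero).
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

# Identical structure in both periods -> measure is 0 (no structural change).
print(decomposition_measure([40, 30, 30], [80, 60, 60]))  # 0.0
```

Larger values signal larger shifts in the assets-and-liabilities structure, which is precisely the signal these prediction studies exploit.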

Details

Managerial Finance, vol. 16 no. 1
Type: Research Article
ISSN: 0307-4358

Article
Publication date: 1 September 2005

C.K. Chan and S.T. Tan

Abstract

Purpose

This paper reports on the work done to decompose a large sized solid model into smaller solid components for rapid prototyping technology. The target geometric domain of the solid model includes quadrics and free form surfaces.

Design/methodology/approach

The decomposition criteria are based on the manufacturability of the model against a user‐defined manufacturing chamber size and the maintenance of geometrical information of the model. In the proposed algorithm, two types of manufacturing chamber are considered: cylindrical shape and rectangular shape. These two types of chamber shape are commonly implemented in rapid prototyping machines.

Findings

The proposed method uses a combination of the regular decomposition (RD)‐method and irregular decomposition (ID)‐method to split a non‐producible solid model into smaller producible subparts. In the ID‐method, the producible feature group decomposition (PFGD)‐method focuses on decomposition by recognising producible feature groups. In the decomposition process, less additional geometrical and topological information is created. The RD‐method focuses on splitting non‐producible sub‐parts that cannot be further decomposed by the PFGD‐method. Different types of regular split tool surface are studied.

Originality/value

The combination of the RD‐method and the ID‐method makes up the proposed volume decomposition process. The user can also manually define the sequence and priority of these methods to achieve different decomposition patterns. The proposed idea is also applicable to other decomposition algorithms. Some implementation details and the corresponding problems of the proposed methods are also discussed.

Details

Rapid Prototyping Journal, vol. 11 no. 4
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 15 March 2011

Yi‐Hui Liang

Abstract

Purpose

The purpose of this study is to propose the time series decomposition approach to analyze and predict the failure data of the repairable systems.

Design/methodology/approach

This study employs a non-homogeneous Poisson process (NHPP) to model the failure data. First, Nelson's graph method is employed to estimate the mean number of repairs and the MCRF value for the repairable system. Second, the time series decomposition approach is employed to predict the mean number of repairs and MCRF values.
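
Nelson's graphical method centres on the mean cumulative function for repairs: at each repair time, the average cumulative number of repairs per system. A minimal sketch, under the simplifying assumptions that all systems share one observation window and that ties and staggered censoring are ignored (the function name is illustrative):

```python
def mean_cumulative_function(repair_times_per_system):
    """Nonparametric estimate of the mean cumulative function (MCF):
    at each repair time, the average cumulative repair count per system.
    Assumes every system is observed over the same window."""
    n = len(repair_times_per_system)
    events = sorted(t for times in repair_times_per_system for t in times)
    mcf, cum = [], 0
    for t in events:
        cum += 1
        mcf.append((t, cum / n))  # (time, estimated MCF at that time)
    return mcf

# Two systems: repairs at {100, 250} hours and {180} hours.
for t, m in mean_cumulative_function([[100, 250], [180]]):
    print(t, m)  # 100 0.5 / 180 1.0 / 250 1.5
```

Plotting these step values against time gives the graph from which trend behaviour of the repair process is read off.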

Findings

The proposed method can analyze and predict the reliability for repairable systems. It can analyze the combined effect of trend‐cycle components and the seasonal component of the failure data.
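
The trend-cycle/seasonal split referred to above can be illustrated with a classical additive decomposition, a generic textbook procedure rather than the authors' exact model:

```python
def classical_decompose(series, period):
    """Classical additive decomposition: trend-cycle via a centered
    moving average of length `period`, seasonal component as the mean
    detrended value at each seasonal position (centered to sum to zero)."""
    n, half = len(series), period // 2
    trend = [None] * n
    for i in range(half, n - half):
        w = series[i - half : i + half + 1]
        if period % 2 == 0:  # even period: half-weight the window endpoints
            trend[i] = (w[0] / 2 + sum(w[1:-1]) + w[-1] / 2) / period
        else:
            trend[i] = sum(w) / period
    seasonal = []
    for k in range(period):
        vals = [series[i] - trend[i] for i in range(k, n, period)
                if trend[i] is not None]
        seasonal.append(sum(vals) / len(vals))
    mean_s = sum(seasonal) / period  # center the seasonal indices
    return trend, [s - mean_s for s in seasonal]

# Linear trend plus a period-4 seasonal pattern [2, -1, -2, 1].
series = [i + [2, -1, -2, 1][i % 4] for i in range(12)]
trend, seasonal = classical_decompose(series, 4)
print([round(s, 6) for s in seasonal])  # [2.0, -1.0, -2.0, 1.0]
```

The moving average absorbs the trend-cycle component, so the averaged residuals recover the seasonal pattern exactly in this constructed example.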

Research limitations/implications

This study only adopts simulated data to verify the proposed method; future research may use real products' failure data for verification. The proposed method is superior to ARIMA and neural network prediction techniques in predicting the reliability of repairable systems.

Practical implications

Results in this study can provide a valuable reference for engineers when constructing quality feedback systems for assessing current quality conditions, providing logistical support, correcting product design, facilitating optimal component‐replacement and maintenance strategies, and ensuring that products meet quality requirements.

Originality/value

The time series decomposition approach was used to model and analyze software aging and software failure in 2007, but it has rarely been used for modeling and analyzing the failure data of repairable systems. This study proposes the time series decomposition approach to analyze and predict the failure data of repairable systems; the proposed method outperforms the ARIMA model and neural networks in predictive accuracy.

Details

International Journal of Quality & Reliability Management, vol. 28 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 9 February 2024

Chengpeng Zhang, Zhihua Yu, Jimin Shi, Yu Li, Wenqiang Xu, Zheyi Guo, Hongshi Zhang, Zhongyuan Zhu and Sheng Qiang

Abstract

Purpose

Hexahedral meshing is one of the most important steps in performing an accurate simulation using the finite element analysis (FEA). However, the current hexahedral meshing method in the industry is a nonautomatic and inefficient method, i.e. manually decomposing the model into suitable blocks and obtaining the hexahedral mesh from these blocks by mapping or sweeping algorithms. The purpose of this paper is to propose an almost automatic decomposition algorithm based on the 3D frame field and model features to replace the traditional time-consuming and laborious manual decomposition method.

Design/methodology/approach

The proposed algorithm is based on the 3D frame field and features, where features are used to construct feature-cutting surfaces and the 3D frame field is used to construct singular-cutting surfaces. The feature-cutting surfaces constructed from concave features first reduce the complexity of the model and decompose it into some coarse blocks. Then, an improved 3D frame field algorithm is performed on these coarse blocks to extract the singular structure and construct singular-cutting surfaces to further decompose the coarse blocks. In most modeling examples, the proposed algorithm uses both types of cutting surfaces to decompose models fully automatically. In a few examples with special requirements for hexahedral meshes, the algorithm requires manual input of some user-defined cutting surfaces and constructs different singular-cutting surfaces to ensure the effectiveness of the decomposition.

Findings

Benefiting from the feature decomposition and the 3D frame field algorithm, the output blocks of the proposed algorithm have no inner singular structure and are suitable for the mapping or sweeping algorithm. The introduction of internal constraints makes 3D frame field generation more robust in this paper, and it can automatically correct some invalid 3–5 singular structures. In a few examples with special requirements, the proposed algorithm successfully generates valid blocks even though the singular structure of the model is modified by user-defined cutting surfaces.

Originality/value

The proposed algorithm takes the advantage of feature decomposition and the 3D frame field to generate suitable blocks for a mapping or sweeping algorithm, which saves a lot of simulation time and requires less experience. The user-defined cutting surfaces enable the creation of special hexahedral meshes, which was difficult with previous algorithms. An improved 3D frame field generation method is proposed to correct some invalid singular structures and improve the robustness of the previous methods.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 18 January 2024

Jing Tang, Yida Guo and Yilin Han

Abstract

Purpose

Coal is a critical global energy source, and fluctuations in its price significantly impact related enterprises' profitability. This study aims to develop a robust model for predicting the coal price index to enhance coal purchase strategies for coal-consuming enterprises and provide crucial information for global carbon emission reduction.

Design/methodology/approach

The proposed coal price forecasting system combines data decomposition, semi-supervised feature engineering, ensemble learning and deep learning. It addresses the challenge of merging low-resolution and high-resolution data by adaptively combining both types of data and filling in missing gaps, using interpolation for internal missing data and self-supervision for initial/terminal missing data. The system employs self-supervised learning to complete the filling of complex missing data.
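
The interpolation step for internal gaps can be sketched as a plain linear fill between the nearest known neighbours; this is a generic technique, and the function name and the use of None as the gap marker are illustrative rather than the authors' implementation:

```python
def fill_internal_gaps(values):
    """Linearly interpolate internal runs of None. Leading/trailing
    None values are left untouched, since gaps at the start or end of
    the series need a different (e.g. self-supervised) treatment."""
    out = list(values)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        step = (out[b] - out[a]) / (b - a)
        for i in range(a + 1, b):
            out[i] = out[a] + step * (i - a)
    return out

print(fill_internal_gaps([None, 10.0, None, None, 16.0, None]))
# [None, 10.0, 12.0, 14.0, 16.0, None]
```

Internal gaps are bracketed by known values on both sides, which is why interpolation suffices there but not at the series boundaries.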

Findings

The ensemble model, which combines long short-term memory, XGBoost and support vector regression, demonstrated the best prediction performance among the tested models. It exhibited superior accuracy and stability across multiple indices in two datasets, namely the Bohai-Rim steam-coal price index and coal daily settlement price.

Originality/value

The proposed coal price forecasting system stands out as it integrates data decomposition, semi-supervised feature engineering, ensemble learning and deep learning. Moreover, the system pioneers the use of self-supervised learning for filling in complex missing data, contributing to its originality and effectiveness.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 1 March 2001

Feng Lin, Yongnian Yan and Wei Sun

Abstract

A mathematical model describing the principle of layered manufacturing and layered fabrication error is presented in this paper. In this model, the layered manufacturing process is characterized by model decomposition and material accumulation. A 3D design model is represented by a set of points with sequence functions that correlate the layered processing information. Iso‐sequence planes are defined as the processing layers, collecting points with the same processing sequence and defining the material accumulation along the gradient direction. Examples of using the proposed model to describe the layered manufacturing of flat and non‐flat surfaces, together with a description of the layered processing error, are also presented.

Details

Rapid Prototyping Journal, vol. 7 no. 1
Type: Research Article
ISSN: 1355-2546

Details

Handbook of Microsimulation Modelling
Type: Book
ISBN: 978-1-78350-570-8

Article
Publication date: 1 June 2004

Anas N. Al‐Rabadi and Martin Zwick

Abstract

A novel many‐valued decomposition within the framework of lossless reconstructability analysis (RA) is presented. In previous work, modified reconstructability analysis (MRA) was applied to Boolean functions, where it was shown that most Boolean functions not decomposable using conventional reconstructability analysis (CRA) are decomposable using MRA. Also, it was previously shown that whenever decomposition exists in both MRA and CRA, MRA yields simpler or equal complexity decompositions. In this paper, MRA is extended to many‐valued logic functions, and logic structures that correspond to such decomposition are developed. It is shown that many‐valued MRA can decompose many‐valued functions when CRA fails to do so. Since real‐life data are often many‐valued, this new decomposition can be useful for machine learning and data mining. Many‐valued MRA can also be applied for the decomposition of relations.

Details

Kybernetes, vol. 33 no. 5/6
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 9 March 2010

Le Ma and Chunlu Liu

Abstract

Purpose

This paper develops a new method for decomposing housing market variations to analyse the housing dynamics of the eight Australian capital cities.

Design/methodology/approach

This study reviews the prior research on analysing the housing market variations and classifies the previous methods into four main models. Based on this, the study develops a new decomposition of the variations, which is made up of regional information, home‐market information and time information. The panel data regression method, unit root test and F test are adopted to construct the model and interpret the housing market variations of the Australian capital cities.

Findings

This paper suggests that Australian home‐market information has the same elasticity with respect to housing market variations across cities and time. In contrast, the elasticities of the regional information differ, although similarities exist between the west and north of Australia and between the south and east. The time information contributes differently over the observation period, although similarities are found in certain periods.

Originality/value

This paper introduces variation decomposition into the research on housing markets and develops a model based on this new decomposition method.

Details

International Journal of Housing Markets and Analysis, vol. 3 no. 1
Type: Research Article
ISSN: 1753-8270
