Search results

1–10 of over 101,000
Article
Publication date: 27 January 2012

Bokyoung Kang, Dongsoo Kim and Suk‐Ho Kang

Downloads
1230

Abstract

Purpose

The purpose of this paper is to provide industrial managers with insight into the real‐time progress of running processes. The authors formulated a periodic performance prediction algorithm for use in a proposed novel approach to real‐time business process monitoring.

Design/methodology/approach

In the course of process execution, the final performance is predicted probabilistically from partial information. An imputation method generates probable progressions of the ongoing process, and a support vector machine classifies their performances. These procedures are iterated periodically as the process advances in real time, so as to describe its ongoing status.
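The paper's exact algorithm is not reproduced in this listing, so the following is only a minimal Python sketch of the idea under stated assumptions: scikit-learn's SimpleImputer stands in for the imputation step, an SVM with probability outputs for the classifier, and the per-period feature encoding, data and warning threshold are all hypothetical.

```python
# Sketch of periodic performance prediction: impute the not-yet-observed
# periods of a running instance, then classify the completed vector.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.svm import SVC

# Historical log: rows are finished instances, columns are per-period
# performance measurements; y holds final outcome labels (hypothetical).
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 10))
y_hist = (X_hist.sum(axis=1) > 0).astype(int)

imputer = SimpleImputer(strategy="mean").fit(X_hist)
clf = SVC(probability=True).fit(X_hist, y_hist)

def predict_outcome(partial):
    """Outcome probabilities for a partially observed instance.
    `partial` holds NaN for monitoring periods that have not elapsed."""
    completed = imputer.transform(partial.reshape(1, -1))
    return clf.predict_proba(completed)[0]

# Re-run at every monitoring period as new measurements arrive.
running = np.full(10, np.nan)
running[:3] = [0.2, -0.1, 0.5]            # observed so far
probs = predict_outcome(running)
if probs[1] > 0.8:                        # illustrative warning threshold
    print("proactive warning: undesired termination likely")
```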

Findings

The proposed approach describes the ongoing status as the probability that the process, if it continues executing, will terminate with a given result. Furthermore, a proactive warning can be issued before the actual occurrence, as implicit notification of an eventuality, whenever the probability of the given outcome exceeds a threshold.

Research limitations/implications

The performance of the proactive warning strategy was evaluated only for accuracy and proactiveness. The approach could be improved by additionally considering the opportunity costs and benefits associated with the actual termination types and with warning errors.

Originality/value

Whereas conventional monitoring approaches only classify, deterministically, the already-realized result of a terminated instance, the proposed approach predicts the possible results of an ongoing instance probabilistically over the entire monitoring period. As such, it can provide a real-time indicator describing the current capability of an ongoing process.

Article
Publication date: 24 April 2007

Machiko Louhisuo, Teppo Veijonen, Jussi Ahola and Toshikazu Morohoshi

Downloads
1280

Abstract

Purpose

This paper aims to present a disaster information and monitoring system for utilizing earth observation data in the operative process of early warning, mitigation and management of natural disasters. The system integrates earth observation data analysis with modern ICTs, including GIS, grid, mobile communication and web technologies, to support disaster monitoring and to share disaster information during a crisis.

Design/methodology/approach

System development began with outlining an operative disaster monitoring and management process, derived from the actual practices, suggestions and needs of the different user groups involved in disaster management. After investigating state-of-the-art ICTs and reviewing the existing tools and databases, a suitable system architecture was designed and a prototype system was implemented, following a proven software development process.

Findings

The prototype system implementation demonstrated how satellite-based data can support disaster management processes. Disaster monitoring requires an information system infrastructure that enables communication and integrates various distributed information sources and services.

Originality/value

The results give ideas for establishing an operative disaster management process involving local authorities, disaster analysts and the public. The process integrates earth observation data analysis with modern ICTs and improves early warning methods. The developed concept can serve as the basis for future development of automated real-time disaster monitoring.

Details

Management of Environmental Quality: An International Journal, vol. 18 no. 3
Type: Research Article
ISSN: 1477-7835

Book part
Publication date: 8 March 2018

Miklos A. Vasarhelyi, Michael G. Alles and Alexander Kogan

Abstract

The advent of new enabling technologies and the surge in corporate scandals have combined to increase the supply of, demand for, and development of enabling technologies for a new system of continuous assurance and measurement. This paper positions continuous assurance (CA) as a methodology for the analytic monitoring of corporate business processes, taking advantage of the automation and integration of business processes brought about by information technologies. Continuous analytic monitoring-based assurance will change the objectives, timing, processes, tools, and outcomes of the assurance process.

The objectives of assurance will expand to encompass a wide set of qualitative and quantitative management reports. The nature of this assurance will be closer to supervisory activities and will involve intensive interchange with more of the firm's stakeholders than just its shareholders. The timing of the audit process will be very close to the event, automated, and will conform to the natural life cycle of the underlying business processes. The processes of assurance will change dramatically to being meta-supervisory in nature, intrusive with the potential of process interruption, and focused on very different forms of evidential matter than the traditional audit. The tools of the audit will expand considerably with the emergence of major new auditing methods relying heavily on an integrated set of automated information technology (IT) and analytical tools. These will include automatic confirmations (confirmatory extranets), control tags (transparent tagging), continuity equations, and time-series and cross-sectional analytics. Finally, the outcomes of the continuous assurance process will entail an expanded set of assurances, evergreen opinions, some future assurances, some improvement of control processes (through incorporated CA tests), and improved data integrity.

A continuous audit is a methodology that enables independent auditors to provide written assurance on a subject matter, for which an entity’s management is responsible, using a series of auditors’ reports issued virtually simultaneously with, or a short period of time after, the occurrence of events underlying the subject matter.

  • CICA/AICPA Research Study on Continuous Auditing (1999)

Companies must disclose certain information on a current basis.

  • Corporate and Auditing Accountability, Responsibility, and Transparency (Sarbanes-Oxley) Act (2002)
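As a concrete illustration of one of the tools named above, here is a hedged Python sketch of a continuity-equation test; the quantities, tolerance and figures are hypothetical, not drawn from the chapter. The idea is that flows through a business process should balance period over period, and a persistent residual is what a continuous-assurance monitor would flag.

```python
# Continuity equation for a process stage:
#   beginning + received - processed - ending == 0 (up to a tolerance)
def continuity_residual(beginning, received, processed, ending):
    return beginning + received - processed - ending

periods = [
    # (beginning, received, processed, ending) -- hypothetical figures
    (100, 40, 35, 105),
    (105, 50, 60, 95),
    (95, 30, 25, 90),    # residual of 10: units are unaccounted for
]
TOLERANCE = 5
for t, (b, r, p, e) in enumerate(periods):
    res = continuity_residual(b, r, p, e)
    if abs(res) > TOLERANCE:
        print(f"period {t}: residual {res} exceeds tolerance, raise an alarm")
```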

Details

Continuous Auditing
Type: Book
ISBN: 978-1-78743-413-4

Book part
Publication date: 23 December 2005

Giuseppe Labianca and James F. Fairbank

Abstract

Researchers have traditionally investigated aspects of the interorganizational monitoring process in piecemeal fashion. This conceptual piece argues that juxtaposing the categorization process with interorganizational emulation, imitation, and competition brings focus to organizations' attempts to acquire information from other organizations, signal internal and external constituencies, and ultimately change. We argue that the depth or intensity with which the monitoring process is pursued, as well as the breadth or degree of overlap in the sets of organizations chosen for monitoring, determines the volume and diversity of information acquired, the strength of the signal sent to constituent groups, and the amount and type of change likely to emerge from the process. All of these factors ultimately affect the firm's future performance.

Details

Strategy Process
Type: Book
ISBN: 978-1-84950-340-2

Article
Publication date: 25 July 2019

Yinhua Liu, Rui Sun and Sun Jin

Abstract

Purpose

Driven by developments in sensing techniques and information and communications technology, and by their applications in manufacturing systems, data-driven quality control methods play an essential role in the quality improvement of assembly products. This paper aims to review the development of data-driven modeling methods for process monitoring and fault diagnosis in multi-station assembly systems. Furthermore, the authors discuss applications of the methods proposed and present suggestions for future studies in data mining for quality control in product assembly.

Design/methodology/approach

This paper provides an outline of data-driven process monitoring and fault diagnosis methods for variation reduction. The development of statistical process monitoring techniques and of diagnosis methods such as pattern matching, estimation-based analysis and artificial intelligence-based diagnostics is introduced.
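As one concrete instance of the statistical process monitoring techniques introduced here, the sketch below computes Hotelling's T² statistic on multivariate station measurements; the data are simulated and the control limit is illustrative (in practice it would be derived from an F-distribution).

```python
# Hotelling's T^2 monitoring on simulated multi-station measurements.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.normal(size=(500, 4))     # in-control training sample
mu = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference, rowvar=False))

def t_squared(x):
    """Mahalanobis-type distance of a sample from the in-control mean."""
    d = x - mu
    return float(d @ cov_inv @ d)

LIMIT = 18.0                                             # illustrative limit
sample = rng.normal(size=4) + np.array([0, 0, 2.5, 0])   # drifted station
t2 = t_squared(sample)
print(f"T^2 = {t2:.1f}, out-of-control alarm: {t2 > LIMIT}")
```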

Findings

A classification structure for data-driven process control techniques and the limitations of their applications in multi-station assembly processes are discussed. From the perspective of the engineering requirements of real, dynamic, nonlinear and uncertain assembly systems, future trends in sensing system location, data mining and data fusion techniques for variation reduction are suggested.

Originality/value

This paper reveals the development of process monitoring and fault diagnosis techniques, and their applications in variation reduction in multi-station assembly.

Details

Assembly Automation, vol. 39 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 27 July 2012

Anupam Das, J. Maiti and R.N. Banerjee

Downloads
1511

Abstract

Purpose

Monitoring of a process, leading to the detection of faults and the determination of their root causes, is essential for the production of consistently good-quality end products with improved yield. The history of process monitoring and fault detection (PMFD) strategies can be traced back to the 1930s. Since then, various tools, techniques and approaches have been developed and applied in diversified fields. The purpose of this paper is to review, categorize, describe and compare the various PMFD strategies.

Design/methodology/approach

A taxonomy was developed to categorize PMFD strategies, based on the type of techniques employed in devising them. The PMFD strategies are then discussed in detail, with emphasis on their areas of application, and comparatively evaluated against a set of commonly identified issues. A general framework common to all PMFD strategies is presented, and the paper closes with a discussion of the future scope of research.

Findings

The techniques employed for PMFD are primarily of three types: data-driven techniques, such as statistical model-based and artificial intelligence-based techniques; a priori knowledge-based techniques; and hybrid models, with a heavy dominance of the first type. The factors to consider in developing a PMFD strategy are ease of development, diagnostic ability, fault detection speed, robustness to noise, generalization capability, and handling of nonlinearity. The review reveals that no single strategy can address all aspects of process monitoring and fault detection efficiently, and that techniques from the various PMFD strategies need to be meshed to devise a more efficient one.
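To ground the dominant, data-driven category in something executable, here is a minimal sketch of PCA-based fault detection with a squared prediction error (SPE) chart, one of the classic statistical model-based PMFD techniques; the data and the 99th-percentile threshold are illustrative.

```python
# PCA-based fault detection: model normal variation with a few principal
# components and flag samples with a large residual (SPE) off that model.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))                 # normal-operation data
X = (X - X.mean(axis=0)) / X.std(axis=0)      # standardize
_, _, Vt = np.linalg.svd(X, full_matrices=False)
P = Vt[:2].T                                  # retain 2 components

def spe(x):
    """Squared prediction error: squared distance from the PCA subspace."""
    residual = x - P @ (P.T @ x)
    return float(residual @ residual)

threshold = np.quantile([spe(row) for row in X], 0.99)
new_sample = rng.normal(size=6) + 3.0         # deliberately shifted sample
print("fault" if spe(new_sample) > threshold else "normal")
```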

Research limitations/implications

The review documents the existing strategies for PMFD with an emphasis on the nature of the strategies, their data requirements, model-building steps, applicability and scope for amalgamation. It helps future researchers and practitioners choose appropriate techniques for PMFD studies in a given situation, and gives future researchers a comprehensive yet precise report on the PMFD strategies available in the literature to date.

Originality/value

The review starts by identifying key indicators of PMFD, and a taxonomy is proposed. An analysis of the pattern of published articles on PMFD is then conducted, followed by a review of the evolution of PMFD strategies. Finally, a general framework for PMFD strategies is given for future researchers and practitioners.

Details

International Journal of Quality & Reliability Management, vol. 29 no. 7
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 24 May 2011

Bokyoung Kang, Jae‐Yoon Jung, Nam Wook Cho and Suk‐Ho Kang

Downloads
1708

Abstract

Purpose

The purpose of this paper is to help industrial managers monitor and analyze critical performance indicators in real time during the execution of business processes by proposing a visualization technique using an extended formal concept analysis (FCA). The proposed approach monitors the current progress of ongoing processes and periodically predicts their probable routes and performances.

Design/methodology/approach

FCA is utilized to analyze relations among patterns of events in historical process logs, and this method of data analysis visualizes the relations in a concept lattice. To apply FCA to real‐time business process monitoring, the authors extended the conventional concept lattice into a reachability lattice, which enables managers to recognize reachable patterns of events in specific instances of business processes.
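The reachability lattice itself is specific to this paper, but the FCA step beneath it can be sketched: objects are process instances, attributes are the events observed in them, and each formal concept pairs a set of cases with exactly the events they all share. The brute-force enumeration below uses hypothetical event names and is only practical for toy logs.

```python
# Enumerate all formal concepts of a tiny event-log context.
from itertools import combinations

log = {  # case id -> set of observed events (hypothetical)
    "case1": {"register", "check", "approve"},
    "case2": {"register", "check", "reject"},
    "case3": {"register", "approve"},
}
events = set().union(*log.values())

def extent(attrs):
    """Cases whose event sets contain every event in attrs."""
    return {c for c, evs in log.items() if attrs <= evs}

concepts = set()
for r in range(len(events) + 1):
    for attrs in combinations(sorted(events), r):
        ext = extent(set(attrs))
        # intent = closure: all events shared by every case in the extent
        intent = set.intersection(*(log[c] for c in ext)) if ext else events
        concepts.add((frozenset(ext), frozenset(intent)))

for ext, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(ext), "<->", sorted(intent))
```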

Findings

By using a reachability lattice, expected values of a target key performance indicator are predicted and traced along with probable outcomes. Analysis is conducted periodically as the monitoring time elapses over the course of business processes.

Practical implications

The proposed approach focuses on the visualization of probable event occurrences on the basis of historical data. Such visualization can be utilized by industrial managers to evaluate the status of any given instance during business processes and to easily predict possible subsequent states for purposes of effective and efficient decision making. The proposed method was developed in a prototype system for proof of concept and has been illustrated using a simplified real‐world example of a business process in a telecommunications company.

Originality/value

The main contribution of this paper lies in the development of a real‐time monitoring approach of ongoing processes. The authors have provided a new data structure, namely a reachability lattice, which visualizes real‐time progress of ongoing business processes. As a result, current and probable next states can be predicted graphically using periodically conducted analysis during the processes.

Details

Industrial Management & Data Systems, vol. 111 no. 5
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 5 August 2021

Youn Ji Lee, Hyuk Jun Kwon, Yujin Seok and Sang Jeen Hong

Abstract

Purpose

The purpose of this paper is to demonstrate an industrial Internet of Things (IIoT) solution that simultaneously improves equipment condition monitoring, using equipment status data, and process condition monitoring, using plasma optical emission spectroscopy data. The suggested research contributes an e-maintenance capability through real-time remote monitoring.

Design/methodology/approach

Semiconductor processing equipment consists of more than a thousand components, and an unreliable equipment part can lead to the failure of wafer production. This study presents a web-based remote monitoring system for physical vapor deposition (PVD) systems using a programmable logic controller (PLC) and the Modbus protocol. A method of obtaining electron temperature and electron density in the plasma through optical emission spectroscopy (OES) is proposed to monitor the plasma process. Through this system, parts that affect equipment and processes can be controlled and properly managed, which helps improve manufacturing yield by reducing errors arising from equipment parts.
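The paper's own implementation details are not in this abstract, so the following is only a hedged sketch of the PLC polling step, assuming the third-party pymodbus package (3.x API); the host address, register layout and scaling are hypothetical.

```python
# Poll hypothetical PVD equipment registers over Modbus TCP.
from pymodbus.client import ModbusTcpClient

PLC_HOST = "192.0.2.10"          # placeholder address (documentation range)
CHAMBER_PRESSURE_REG = 0         # hypothetical holding-register layout
GAS_FLOW_REG = 1

client = ModbusTcpClient(PLC_HOST)
if client.connect():
    rr = client.read_holding_registers(address=CHAMBER_PRESSURE_REG, count=2)
    if not rr.isError():
        pressure_raw, flow_raw = rr.registers
        # scale raw counts to engineering units (scaling is equipment-specific)
        print(f"pressure={pressure_raw / 100:.2f} Torr, flow={flow_raw} sccm")
    client.close()
```

A web dashboard would periodically run such a poll and push the readings to the browser; that layer is omitted here.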

Findings

A web-based remote monitoring system offers equipment engineers substantial benefits, providing equipment data for maintenance even when they are physically away from the equipment. The results confirm the usefulness of IIoT for e-maintenance in the semiconductor manufacturing domain with in situ monitoring of plasma parameters. The authors found that the average electron temperature, which represents the kinetic energy of electrons, decreased gradually with increasing Ar carrier gas flow, owing to the increased atomic collisions in the PVD process; a large carrier gas flow, 90 sccm in this experiment, decreased the electron temperature dramatically.
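The abstract does not state how electron temperature is extracted from the OES data, so the sketch below shows one standard approach, a Boltzmann plot over several emission lines, with synthetic line intensities generated from an assumed temperature; the line constants are hypothetical, not the paper's.

```python
# Boltzmann plot: the slope of ln(I*lambda/(g*A)) versus upper-level energy
# equals -1/(k*T), giving an excitation temperature from OES intensities.
import numpy as np

K_B_EV = 8.617e-5                 # Boltzmann constant in eV/K
T_TRUE = 11600.0                  # assumed plasma temperature (~1 eV)

# hypothetical lines: (wavelength nm, g*A in 1e6 s^-1, upper energy eV)
lines = np.array([
    (750.4, 178.0, 13.48),
    (811.5, 165.0, 13.08),
    (763.5, 122.0, 13.17),
    (772.4,  58.7, 13.15),
])
lam, gA, E_up = lines.T
# synthesize intensities from the Boltzmann relation I ~ (gA/lam)*exp(-E/kT)
I = gA / lam * np.exp(-E_up / (K_B_EV * T_TRUE))

y = np.log(I * lam / gA)          # Boltzmann-plot ordinate
slope, _ = np.polyfit(E_up, y, 1) # slope = -1/(k*T)
print(f"recovered temperature: {-1 / (K_B_EV * slope):.0f} K")
```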

Research limitations/implications

Semiconductor industries require a high level of data security to protect their intellectual property, and this extends to equipment operational conditions. Data security over Internet communication is not considered in this research, but existing technology could easily be adopted as an add-on feature.

Practical implications

The findings indicate that, among the many equipment parameters, the crucial ones are the carrier gas flow rate and the chamber pressure. These also affect the plasma parameters of electron temperature and electron density, which directly determine the quality of the metal deposition result on the wafer. Increasing the gas flow rate beyond a certain limit can cause a loss of electron temperature and an undesired process result.

Originality/value

Several studies on data mining with semiconductor equipment data have been reported, but an actual demonstration of a data acquisition system with real-time plasma monitoring data has not. The suggested research is also valuable given the high cost and complexity of semiconductor manufacturing equipment.

Details

Journal of Quality in Maintenance Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 31 December 2018

Domenico Piatti and Peter Cincinelli

Abstract

Purpose

The purpose of this paper is to investigate whether the quality of the credit process is sensitive to reaching a particular threshold level of non-performing loans (NPLs) and, more importantly, whether higher NPL ratios could make the monitoring activity ineffective.

Design/methodology/approach

The empirical design comprises two steps. In the first step, the authors introduce a monitoring performance indicator (MPI) of the credit process by combining the non-parametric technique data envelopment analysis (DEA) with a set of financial ratios adopted as input and output variables. In the second step, the authors apply a threshold panel regression model to a sample of 298 Italian banks over the period 2006–2014 and investigate whether the quality of the credit process is sensitive to reaching a particular threshold level of NPLs.
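The paper's threshold panel regression is more elaborate, but the core grid-search idea can be sketched as follows; the data are simulated, and the MPI here is just a stand-in efficiency score, not the DEA-based indicator the authors construct.

```python
# Search for the NPL threshold that best splits a panel into two regimes
# for a regression of loan quality on a monitoring performance indicator.
import numpy as np

rng = np.random.default_rng(3)
npl = rng.uniform(0.01, 0.30, size=1000)     # simulated NPL ratios
mpi = rng.uniform(0.2, 1.0, size=1000)       # stand-in monitoring scores
# simulate a regime change in the MPI effect at NPL = 0.15
y = np.where(npl < 0.15, -0.5, 0.8) * mpi + rng.normal(0, 0.1, size=1000)

def ssr_at(tau):
    """Total squared residuals of regime-wise OLS of y on mpi."""
    total = 0.0
    for mask in (npl < tau, npl >= tau):
        X = np.column_stack([np.ones(mask.sum()), mpi[mask]])
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        total += float(((y[mask] - X @ beta) ** 2).sum())
    return total

grid = np.linspace(0.05, 0.25, 81)
tau_hat = grid[np.argmin([ssr_at(t) for t in grid])]
print(f"estimated NPL threshold: {tau_hat:.3f}")  # close to the true 0.15
```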

Findings

This paper finds, first, that when the NPL ratio remains below the endogenously estimated threshold value, an increase in the quality of monitoring has a positive impact on the NPL ratio. Second, if the NPL ratio exceeds the estimated threshold, the relationship between the NPL ratio and the quality of monitoring assumes a positive value and is statistically significant.

Research limitations/implications

Owing to a lack of data, it was not possible to investigate NPLs in the Italian industry across loan types in combination with the monitoring effort of banks' management. The authors plan to investigate this topic in future studies.

Practical implications

The identification of the threshold has a double operational valence. The first concerns the supervisory authority: the threshold approach could be used as an early warning for introducing active control strategies, based on requests for additional information or on-site inspections. The second concerns individual banks: monitoring of credit control quality, if objective and comparable, could facilitate the emergence of best practices among banks.

Social implications

A high NPL ratio requires greater loan provisions, which reduces the capital resources available for lending and dents bank profitability. Moreover, structural weaknesses still persist on banks' balance sheets, particularly in relation to inadequate internal governance structures. This means that bank management must be able to recognise early warning signals in advance, by providing prudent measurement together with an in-depth valuation of the loan portfolio.

Originality/value

The originality of the paper is twofold: the authors introduce a new proxy for credit monitoring, called the MPI, and they provide empirical proof of Diamond's (1991) economic intuition that, for riskier borrowers, monitoring is an inappropriate instrument owing to the borrowers' poor reputational quality.

Details

Managerial Finance, vol. 45 no. 2
Type: Research Article
ISSN: 0307-4358

1–10 of over 101,000