Search results

1 – 10 of over 214000
Article
Publication date: 1 June 2001

Terrence Perera and Kapila Liyanage


Abstract

In recent years, computer simulation has become a mainstream decision support tool in manufacturing industry. In order to maximise the benefits of using simulation within businesses, simulation models should be designed, developed and deployed in a shorter time span. A number of factors, such as excessive model details, inefficient data collection, lengthy model documentation and poorly planned experiments, increase the overall lead time of simulation projects. Among these factors, input data modelling is seen as a major obstacle. Input data identification, collection, validation, and analysis typically take more than one‐third of project time. This paper presents an IDEF (Integrated computer‐aided manufacturing DEFinition) based approach to accelerate identification and collection of input data. The use of the methodology is presented through its application in batch manufacturing environments. A functional module library and a reference data model, both developed using the IDEF family of constructs, are the core elements of the methodology. The paper also identifies the major causes behind the inefficient collection of data.
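
The abstract does not list the input data items themselves. Purely as a hypothetical illustration of what the identification and collection effort targets in a batch manufacturing study (the field names are invented and are not the paper's IDEF functional module library or reference data model), a minimal sketch of a simulation input-data record:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class MachineData:
    """Hypothetical input-data record for one workstation in a batch-manufacturing model."""
    name: str
    cycle_time_min: float   # processing time per batch (minutes)
    setup_time_min: float   # changeover time between product types
    mtbf_hours: float       # mean time between failures
    mttr_hours: float       # mean time to repair

@dataclass
class SimulationInputs:
    """Container a project team could populate during input data collection."""
    machines: List[MachineData] = field(default_factory=list)
    batch_sizes: dict = field(default_factory=dict)   # product -> units per batch
    arrival_rate_per_hour: float = 0.0

inputs = SimulationInputs(
    machines=[MachineData("CNC-01", cycle_time_min=12.5, setup_time_min=30.0,
                          mtbf_hours=120.0, mttr_hours=1.5)],
    batch_sizes={"ProductA": 50},
    arrival_rate_per_hour=4.0,
)
```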

Details

Integrated Manufacturing Systems, vol. 12 no. 3
Type: Research Article
ISSN: 0957-6061

Keywords

Article
Publication date: 1 June 2004

R.H. Khatibi, R. Lincoln, D. Jackson, S. Surendran, C. Whitlow and J. Schellekens


Abstract

With the diversification of modelling activities encouraged by versatile modelling tools, handling their datasets has become a formidable problem. A further impetus stems from the emergence of the real‐time forecasting culture, transforming data embedded in computer programs of one‐off modelling activities of the 1970s‐1980s into dataset assets, an important feature of modelling since the 1990s, where modelling has emerged as a practice with a pivotal role in data transactions. The scope for data is now vast, but in legacy data management practices datasets are fragmented, not transparent outside their native software systems, and normally “monolithic”. Emerging initiatives on published interfaces will make datasets transparent outside their native systems but will not solve the fragmentation and monolithic problems. These problems signify a lack of a science base in data management, and as such it is necessary to unravel the inherent generic structures in data. This paper outlines root causes for these problems and presents a tentative solution referred to as “systemic data management”, which is capable of solving the above problems through the assemblage of packaged data. Categorisation is presented as a packaging methodology and the various sources contributing to the generic structure of data are outlined, e.g. modelling techniques, modelling problems, application areas and application problems. The opportunities offered by systemic data management include: promoting transparency among datasets of different software systems; exploiting inherent synergies within data; and treating data as assets with a long‐term view on reuse of these assets in an integrated capability.

Details

Management of Environmental Quality: An International Journal, vol. 15 no. 3
Type: Research Article
ISSN: 1477-7835

Keywords

Article
Publication date: 1 April 2003

David Cranage


Abstract

One of the most basic pieces of information useful to hospitality operations is gross sales, and the ability to forecast them is strategically important. These forecasts could provide powerful information to cut costs, increase efficient use of resources, and improve the ability to compete in a constantly changing environment. This study tests sophisticated, yet simple‐to‐use time series models to forecast sales. The results show that, with slight re‐arrangement of historical sales data, easy‐to‐use time series models can accurately forecast gross sales.
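
The specific models tested are not named in this abstract. As one example of the kind of easy-to-use time series model it refers to, the following sketch fits a Holt-Winters exponential-smoothing model to invented monthly gross-sales figures using statsmodels; the data and the 12-month seasonal period are assumptions for illustration only, not results from the study.

```python
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Hypothetical monthly gross sales over three years (numbers invented for illustration).
sales = pd.Series(
    [200, 190, 210, 230, 250, 280, 320, 330, 290, 260, 240, 300] * 3,
    index=pd.date_range("2000-01-01", periods=36, freq="MS"),
)

# Holt-Winters exponential smoothing with additive trend and 12-month seasonality.
model = ExponentialSmoothing(sales, trend="add", seasonal="add", seasonal_periods=12)
fit = model.fit()

print(fit.forecast(6))  # gross-sales forecast for the next six months
```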

Details

International Journal of Contemporary Hospitality Management, vol. 15 no. 2
Type: Research Article
ISSN: 0959-6119

Keywords

Open Access
Article
Publication date: 9 October 2023

Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng


Abstract

Purpose

Data protection requirements have increased heavily due to the rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current survey works on databases and data security only consider authorization and access control in a very general way and do not address most of today’s sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and the current state of the art.

Design/methodology/approach

This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting with research on survey works on authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements, which are generally applicable to any database model. This paper then discusses and compares current database models based on these requirements.

Findings

As no survey works so far consider requirements for authorization and access control across different database models, the authors define their own requirements. Furthermore, the authors discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models in comparison to the defined requirements.
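
The requirements themselves are not reproduced in this abstract. Purely as a generic illustration of the kind of fine-grained, field-level authorization such requirements concern (a hypothetical policy structure, not the API of any surveyed relational or NoSQL product), a minimal sketch:

```python
# Generic, hypothetical role definition in the style used by several document
# databases: privileges are expressed per resource (database/collection) and
# per action. Names and structure are illustrative only.
analyst_role = {
    "role": "patient_analyst",
    "privileges": [
        {
            "resource": {"db": "clinic", "collection": "patients"},
            "actions": ["find"],              # read-only access
            "field_restrictions": ["ssn"],    # fields the role must not see
        }
    ],
}

def is_allowed(role: dict, db: str, collection: str, action: str, field: str) -> bool:
    """Check a single (resource, action, field) request against the role."""
    for p in role["privileges"]:
        r = p["resource"]
        if (r["db"], r["collection"]) == (db, collection) and action in p["actions"]:
            return field not in p.get("field_restrictions", [])
    return False

print(is_allowed(analyst_role, "clinic", "patients", "find", "name"))  # True
print(is_allowed(analyst_role, "clinic", "patients", "find", "ssn"))   # False
```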

Originality/value

This paper focuses on authorization and access control for various database models, not concrete products. This paper identifies today’s sophisticated – yet general – requirements from the literature and compares them with research results and access control features of current products for the relational and NoSQL database models.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 22 May 2023

Edmund Baffoe-Twum, Eric Asa and Bright Awuku


Abstract

Background: Geostatistics focuses on spatial or spatiotemporal datasets. Geostatistics was initially developed to generate probability distribution predictions of ore grade in the mining industry; however, it has been successfully applied in diverse scientific disciplines. The technique includes univariate and multivariate methods, as well as simulations. Kriging geostatistical methods (simple, ordinary and universal kriging) are not multivariate models in the usual statistical sense. Notwithstanding, simple, ordinary and universal kriging techniques utilize random function models that include unlimited random variables while modeling one attribute. The coKriging technique is a multivariate estimation method that simultaneously models two or more attributes defined over the same domain as a coregionalization.
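
For readers unfamiliar with the estimators mentioned, the standard textbook forms are sketched below (the notation is not taken from this paper): ordinary kriging predicts the primary attribute from its own samples, while ordinary coKriging adds weighted samples of a secondary attribute (here, population); the weights come from fitted variogram and cross-variogram models.

```latex
% Ordinary kriging: predict primary attribute Z at unsampled location s_0
\hat{Z}(s_0) = \sum_{i=1}^{n} \lambda_i \, Z(s_i), \qquad \sum_{i=1}^{n} \lambda_i = 1

% Ordinary coKriging: add a secondary attribute Y (e.g. population) with weights \mu_j
\hat{Z}(s_0) = \sum_{i=1}^{n} \lambda_i \, Z(s_i) + \sum_{j=1}^{m} \mu_j \, Y(s_j),
    \qquad \sum_{i=1}^{n} \lambda_i = 1, \quad \sum_{j=1}^{m} \mu_j = 0
```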

Objective: This study investigates the impact of population, treated as an additional variable, on traffic volumes. The additional variable determines the strength or accuracy gained when data integration is adopted, with the aim of improving the estimation of annual average daily traffic (AADT).

Methods, procedures, process: The investigation adopts the coKriging (CK) technique with AADT data from 2009 to 2016 from Montana, Minnesota and Washington as the primary attribute and population as a controlling factor (secondary variable). CK is implemented for this study after reviewing the literature and completed work comparing it with other geostatistical methods.

Results, observations, and conclusions: The investigation employed two variables. The data integration methods employed in CK yield more reliable models because their strength is drawn from multiple variables. The cross-validation results of the model types explored with the CK technique successfully evaluate the interpolation technique's performance and help select optimal models for each state. The results from Montana and Minnesota models accurately represent the states' traffic and population density. The Washington model had a few exceptions. However, the secondary attribute helped yield an accurate interpretation. Consequently, the impact of tourism, shopping, recreation centers, and possible transiting patterns throughout the state is worth exploring.

Details

Emerald Open Research, vol. 1 no. 5
Type: Research Article
ISSN: 2631-3952

Keywords

Article
Publication date: 6 July 2015

Andrew Whyte and James Donaldson


Abstract

Purpose

The use of digital-models to communicate civil-engineering design continues to generate debate; this pilot-work reviews technology uptake towards data repurposing and assesses digital (vs traditional) design-preparation timelines and fees for infrastructure. The paper aims to discuss these issues.

Design/methodology/approach

Extending (building-information-modelling) literature, distribution-impact is investigated across: quality-management, technical-applications and contractual-liability. Project case-study scenarios were developed and validated with resultant modelling-application timeline/fees examined, in conjunction with qualitative semi-structured interviews with 11 prominent stakeholder companies.

Findings

Results generated to explore digital-model data-distribution/usage identify: an 8 per cent time/efficiency improvement at the design-phase, and a noteworthy cost-saving of 0.7 per cent overall. Fragmented opinion regarding modelling utilisation exists across supply-chains, with concerns over liability, quality-management and the lack of Australian-Standard contract-clause(s) dealing directly with digital-model document hierarchy/clarification/reuse.

Research limitations/implications

Representing a small-scale/snapshot industrial-study, findings suggest that (model-distribution) must emphasise checking-procedures within quality-systems and seek precedence clarification for dimensioned documentation. Similarly, training in specific file-formatting (digital-model-addenda) techniques, CAD-file/hard-copy continuity, and digital-visualisation software can better regulate model dissemination/reuse. Time/cost savings through digital-model data-distribution in civil-engineering contracts are available to enhance provision of society’s infrastructure.

Originality/value

This work extends knowledge of 3D-model distribution for roads/earthworks/drainage, and presents empirical evidence that (alongside appropriate consideration of general-conditions-of-contract and specific training to address revision-document continuity), industry may achieve tangible benefits from digital-model data as a means to communicate civil-engineering design.

Details

Built Environment Project and Asset Management, vol. 5 no. 3
Type: Research Article
ISSN: 2044-124X

Keywords

Open Access
Article
Publication date: 26 July 2021

Weifei Hu, Tongzhou Zhang, Xiaoyu Deng, Zhenyu Liu and Jianrong Tan


Abstract

Digital twin (DT) is an emerging technology that enables sophisticated interaction between physical objects and their virtual replicas. Although DT has recently gained significant attention in both industry and academia, there is no systematic understanding of DT from its development history to its different concepts and applications in disparate disciplines. The majority of DT literature focuses on the conceptual development of DT frameworks for a specific implementation area. Hence, this paper provides a state-of-the-art review of DT history, different definitions and models, and six types of key enabling technologies. The review also provides a comprehensive survey of DT applications from two perspectives: (1) applications in four product-lifecycle phases, i.e. product design, manufacturing, operation and maintenance, and recycling and (2) applications in four categorized engineering fields, including aerospace engineering, tunneling and underground engineering, wind engineering and Internet of things (IoT) applications. DT frameworks, characteristic components, key technologies and specific applications are extracted for each DT category in this paper. A comprehensive survey of the DT references reveals the following findings: (1) the majority of existing DT models only involve one-way data transfer from physical entities to virtual models and (2) there is a lack of consideration of environmental coupling, which results in inaccurate representation of the virtual components in existing DT models. Thus, this paper highlights the role of environmental factors in DT enabling technologies and in categorized engineering applications. In addition, the review discusses the key challenges and provides future work for constructing DTs of complex engineering systems.
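
As a minimal, generic illustration of the one-way versus two-way data-transfer distinction raised in the findings (the class names, the blending rule and the setpoint logic are invented for illustration, not a framework from the surveyed literature), the sketch below updates a toy virtual model from a physical measurement plus an environmental input and returns a command to the physical asset:

```python
class VirtualModel:
    """Toy virtual replica: tracks the estimated temperature of a physical asset."""
    def __init__(self):
        self.temperature = 20.0

    def update(self, measured_temp: float, ambient_temp: float) -> None:
        # Environmental coupling: blend the measurement with the ambient condition.
        self.temperature = 0.9 * measured_temp + 0.1 * ambient_temp

    def recommend_setpoint(self) -> float:
        # Feedback path that makes the twin two-way rather than monitor-only.
        return 20.0 if self.temperature > 25.0 else 22.0


def sync_step(twin: VirtualModel, sensor_reading: float, ambient: float) -> float:
    """One physical-to-virtual-to-physical cycle."""
    twin.update(sensor_reading, ambient)   # physical -> virtual
    return twin.recommend_setpoint()       # virtual -> physical (command)


twin = VirtualModel()
setpoint = sync_step(twin, sensor_reading=27.3, ambient=18.0)
print(twin.temperature, setpoint)
```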

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. 2 no. 1
Type: Research Article
ISSN: 2633-6596

Keywords

Article
Publication date: 24 April 2020

Juan Manuel Davila Delgado and Lukumon O. Oyedele


Abstract

Purpose

The purpose of this paper is to review and provide recommendations to extend the current open standard data models for describing monitoring systems and circular economy precepts for built assets. Open standard data models enable robust and efficient data exchange which underpins the successful implementation of a circular economy. One of the largest opportunities to reduce the total life cycle cost of a built asset is to use the building information modelling (BIM) approach during the operational phase because it represents the largest share of the entire cost. BIM models that represent the actual conditions and performance of the constructed assets can boost the benefits of the installed monitoring systems and reduce maintenance and operational costs.

Design/methodology/approach

This paper presents a horizontal investigation of current BIM data models and their use for describing circular economy principles and performance monitoring of built assets. Based on the investigation, an extension to the industry foundation classes (IFC) specification, together with recommendations and guidelines, is presented that enables circular economy principles and asset monitoring to be described using IFC.

Findings

Current open BIM data models are not sufficiently mature yet. This limits the interoperability of the BIM approach and the implementation of circular economy principles. An overarching approach to extend the current standards is necessary, which considers aspects related to not only modelling the monitoring system but also data management and analysis.

Originality/value

To the authors’ best knowledge, this is the first study that identifies requirements for data model standards in the context of a circular economy, a response to the current linear economic model of making, using and disposing, which is growing unsustainably far beyond the finite limits of the planet. The results of this study set the basis for the extension of current standards required to apply the circular economy precepts.

Details

Journal of Engineering, Design and Technology, vol. 18 no. 5
Type: Research Article
ISSN: 1726-0531

Keywords

Article
Publication date: 18 October 2021

Anna Jurek-Loughrey


Abstract

Purpose

In the world of big data, data integration technology is crucial for maximising the capability of data-driven decision-making. Integrating data from multiple sources drastically expands the power of information and allows us to address questions that are impossible to answer using a single data source. Record Linkage (RL) is the task of identifying and linking records from multiple sources that describe the same real-world object (e.g. person), and it plays a crucial role in the data integration process. RL is challenging, as it is uncommon for different data sources to share a unique identifier. Hence, the records must be matched based on the comparison of their corresponding values. Most of the existing RL techniques assume that records across different data sources are structured and represented by the same scheme (i.e. set of attributes). Given the increasing number of heterogeneous data sources, those assumptions are rather unrealistic. The purpose of this paper is to propose a novel RL model for unstructured data.

Design/methodology/approach

In the previous work (Jurek-Loughrey, 2020), the authors proposed a novel approach to linking unstructured data based on the application of the Siamese Multilayer Perceptron model. It was demonstrated that the method performed on par with other approaches that make constraining assumptions regarding the data. This paper expands the previous work originally presented at iiWAS2020 [16] by exploring new architectures of the Siamese Neural Network, which improves the generalisation of the RL model and makes it less sensitive to parameter selection.
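
The exact architectures are described in the cited papers; the sketch below only illustrates the underlying Siamese idea they build on, i.e. two records encoded by one weight-shared multilayer perceptron and compared by a distance between embeddings. The layer sizes and the hashed character-trigram featurisation are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class SiameseMLP(nn.Module):
    """Weight-shared encoder; similarity of two records = distance of their embeddings."""
    def __init__(self, in_dim: int, hidden: int = 64, emb: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, emb),
        )

    def forward(self, x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
        e1, e2 = self.encoder(x1), self.encoder(x2)   # same weights for both records
        return torch.norm(e1 - e2, dim=1)             # small distance => likely a match


def featurise(record: str, dim: int = 256) -> torch.Tensor:
    """Crude hashed character-trigram bag-of-features for an unstructured record."""
    v = torch.zeros(dim)
    for i in range(len(record) - 2):
        v[hash(record[i:i + 3]) % dim] += 1.0
    return v


model = SiameseMLP(in_dim=256)
a = featurise("John Smith, 12 High St, Belfast").unsqueeze(0)
b = featurise("Smith John, 12 High Street, Belfast").unsqueeze(0)
print(model(a, b))  # untrained distance; training would use matched/unmatched record pairs
```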

Findings

The experimental results confirm that the new Autoencoder-based architecture of the Siamese Neural Network obtains better results in comparison to the Siamese Multilayer Perceptron model proposed in (Jurek et al., 2020). Better results have been achieved in three out of four data sets. Furthermore, it has been demonstrated that the second proposed (hybrid) architecture, based on integrating the Siamese Autoencoder with a Multilayer Perceptron model, makes the model more stable in terms of parameter selection.

Originality/value

To address the problem of unstructured RL, this paper presents a new deep learning-based approach to improve the generalisation of the Siamese Multilayer Perceptron model and make it less sensitive to parameter selection.

Details

International Journal of Web Information Systems, vol. 17 no. 6
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 25 July 2019

Yinhua Liu, Rui Sun and Sun Jin


Abstract

Purpose

Driven by developments in sensing techniques and information and communications technology, and their applications in manufacturing systems, data-driven quality control methods play an essential role in the quality improvement of assembly products. This paper aims to review the development of data-driven modeling methods for process monitoring and fault diagnosis in multi-station assembly systems. Furthermore, the authors discuss the applications of the methods proposed and present suggestions for future studies in data mining for quality control in product assembly.

Design/methodology/approach

This paper provides an outline of data-driven process monitoring and fault diagnosis methods for reduction in variation. The development of statistical process monitoring techniques and diagnosis methods, such as pattern matching, estimation-based analysis and artificial intelligence-based diagnostics, is introduced.
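
As a deliberately simple, concrete instance of the statistical process monitoring techniques surveyed (the measurements and the three-sigma rule are illustrative, not taken from the paper), the sketch below computes Shewhart-style control limits from in-control reference data and flags out-of-control readings at one assembly station:

```python
import numpy as np

# Phase I: hypothetical in-control reference measurements (mm) from one station.
reference = np.array([10.02, 9.98, 10.01, 10.03, 9.97, 10.00, 10.01, 9.99, 10.02, 9.98])
mean, sigma = reference.mean(), reference.std(ddof=1)
ucl, lcl = mean + 3 * sigma, mean - 3 * sigma   # classic three-sigma limits

# Phase II: new measurements monitored against the fixed limits.
new = np.array([10.00, 10.03, 9.99, 10.21, 10.01])
alarms = np.where((new > ucl) | (new < lcl))[0]

print(f"UCL={ucl:.3f}  LCL={lcl:.3f}")
print("out-of-control sample indices:", alarms)   # the 10.21 reading should trip the chart
```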

Findings

A classification structure for data-driven process control techniques and the limitations of their applications in multi-station assembly processes are discussed. From the perspective of the engineering requirements of real, dynamic, nonlinear and uncertain assembly systems, future trends in sensing system location, data mining and data fusion techniques for variation reduction are suggested.

Originality/value

This paper reveals the development of process monitoring and fault diagnosis techniques, and their applications in variation reduction in multi-station assembly.

Details

Assembly Automation, vol. 39 no. 4
Type: Research Article
ISSN: 0144-5154

Keywords
