Search results
1 – 10 of over 4,000
Abstract
Purpose
Motivated by recent research indicating that an enterprise's operational performance can be enhanced by building a supporting data-driven environment in which to operate, this paper presents a simulation framework that enables an examination of the effects of applying smart manufacturing principles to conventional production systems, with a view to transitioning to digital platforms.
Design/methodology/approach
To investigate the extent to which conventional production systems can be transformed into novel data-driven environments, the well-known constant work-in-process (CONWIP) production system and production sequencing assignments in flowshops were studied. As a result, a novel data-driven priority heuristic, Net-CONWIP, was designed and studied. It is based on the ability to collect real-time information about customer demand and work-in-process inventory, and it was applied as part of a distributed and decentralised production sequencing analysis. Heuristics like Net-CONWIP are only possible through the ability to collect and use the real-time data offered by a data-driven system. A four-stage application framework was created to assist practitioners in applying the proposed model.
Findings
To assess the robustness of the Net-CONWIP heuristic under the simultaneous effects of different levels of demand, different levels of demand variability and the presence of bottlenecks, its performance was compared with that of conventional CONWIP systems using the first come, first served (FCFS) priority rule. The results show that the Net-CONWIP priority rule significantly reduced customer wait time in all cases relative to FCFS.
Originality/value
Previous research suggests there is considerable value in creating data-driven environments. This study provides a simulation framework that guides the construction of a digital transformation environment. The suggested framework facilitates the inclusion and analysis of relevant smart manufacturing principles in production systems and enables the design and testing of new heuristics that employ real-time data to improve operational performance. An approach that can guide the structuring of data-driven environments in production systems is currently lacking. This paper bridges this gap by proposing a framework to facilitate the design of digital transformation activities, explore their impact on production systems and improve their operational performance.
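The abstract does not publish Net-CONWIP's exact formula. As a hedged illustration only, the contrast between FCFS release and a net-requirement priority in a CONWIP backlog might be sketched as follows; all product names, demand figures and the `demand - wip` scoring are invented stand-ins, not the authors' actual rule:

```python
from collections import deque

# Hypothetical sketch: in a CONWIP loop, a freed card authorises the next
# backlog job. FCFS releases the oldest job; a "net" rule (our illustrative
# stand-in for a Net-CONWIP-style priority) releases the product whose open
# demand is least covered by current work-in-process, i.e. max(demand - wip).

def next_job_fcfs(backlog):
    """backlog: deque of product ids in arrival order."""
    return backlog[0]

def next_job_net(backlog, demand, wip):
    """Pick the backlog job with the largest uncovered demand."""
    return max(backlog, key=lambda p: demand[p] - wip[p])

backlog = deque(["A", "B", "C"])        # arrival order
demand  = {"A": 2, "B": 9, "C": 4}      # open customer orders per product
wip     = {"A": 1, "B": 3, "C": 3}      # units already inside the loop

print(next_job_fcfs(backlog))               # A (oldest arrival)
print(next_job_net(backlog, demand, wip))   # B (net requirement 9 - 3 = 6)
```

A rule of this shape is only computable when demand and WIP are visible in real time, which is the data-driven capability the paper argues for.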
Georg Grossmann, Alice Beale, Harkaran Singh, Ben Smith and Julie Nichols
Abstract
Cultural heritage archiving has experienced an increase in the digitalisation of artefacts over the last 15 years. The reason behind this trend is a demand for providing information about artefacts in a way that is more accessible to the audience, for example, through online delivery or virtual reality. Other reasons might be to simplify and automate the management of artefacts. Having a ‘digital copy’ of artefacts allows one to search an archive and plan its storage and dissemination in a comprehensive manner. With increased digitalisation comes an increased use of artificial intelligence [AI] applications. AI can be very beneficial in classifying artefacts automatically through machine learning [ML] and natural language processing [NLP]. For example, an algorithm can identify the source and age of artefacts based on an image and can do this much faster for a large collection of photos than a human. Although AI provides many benefits, it also presents challenges: sophisticated AI techniques require certain insights into how they work, need specialists to customise a solution, and require an existing large dataset to train an algorithm. Another challenge is that typical AI techniques are regarded as black boxes: they make decisions, but it is not obvious why a particular decision has been made. This chapter describes a project in collaboration with the South Australian Museum [SAM] on the application of AI to extract material lists from descriptions of artefacts. A large dataset to train an algorithm did not exist, and hence a customised approach was required. The outcome of the project was the application of NLP in combination with easy-to-customise rules that can be applied by non-IT specialists. The resulting prototype extracted materials from a large list of artefacts within seconds and offers a flexible solution that can be applied to other collections in the future.
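The chapter's rule-based approach is not reproduced here; as a minimal sketch of what "easy-to-customise rules that non-IT specialists can maintain" could look like, a small editable term list matched against free-text descriptions (all material terms and the sample description are invented):

```python
import re

# Illustrative sketch (not the SAM project's actual code): a small, editable
# gazetteer of material terms, applied to free-text artefact descriptions
# with case-insensitive whole-word matching (optional plural "s").
MATERIALS = ["wood", "ochre", "shell", "fibre", "resin", "bone"]

def extract_materials(description: str) -> list[str]:
    found = []
    for term in MATERIALS:
        if re.search(rf"\b{term}s?\b", description, re.IGNORECASE):
            found.append(term)
    return found

desc = "Spear-thrower of hard wood with resin handle and shell inlay."
print(extract_materials(desc))  # ['wood', 'shell', 'resin']
```

Because the rules are a plain word list rather than a trained model, a curator can extend them without retraining, and a run over a large collection is a fast linear scan.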
Laura Lucantoni, Sara Antomarioni, Filippo Emanuele Ciarapica and Maurizio Bevilacqua
Abstract
Purpose
The Overall Equipment Effectiveness (OEE) is considered a standard for measuring equipment productivity in terms of efficiency. Still, Artificial Intelligence solutions are rarely used for analyzing OEE results and identifying corrective actions. Therefore, the approach proposed in this paper aims to provide a new rule-based Machine Learning (ML) framework for OEE enhancement and the selection of improvement actions.
Design/methodology/approach
Association Rules (ARs) are used as a rule-based ML method for extracting knowledge from large datasets. First, the dominant loss class is identified, and traditional methodologies are used with ARs for anomaly classification and prioritization. Once priority anomalies have been selected, a detailed analysis is conducted to investigate their influence on the OEE loss factors using ARs and Network Analysis (NA). A Deming Cycle is then used as a roadmap for applying the proposed methodology, testing and implementing proactive actions while monitoring the OEE variation.
Findings
The proposed method was tested in an automotive company for framework validation and impact measurement. In particular, the results highlighted that the rule-based ML methodology for OEE improvement addressed seven anomalies within a year through appropriate proactive actions: on average, each action ensured an OEE gain of 5.4%.
Originality/value
The originality is related to the dual application of association rules in two different ways for extracting knowledge from the overall OEE. In particular, the co-occurrences of priority anomalies and their impact on asset Availability, Performance and Quality are investigated.
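The paper's mining pipeline is not given here; as a hedged sketch of the underlying support/confidence mechanics of association rules over loss records (the records, anomaly names and thresholds below are all invented):

```python
from itertools import combinations

# Sketch of association-rule mining over (invented) per-shift loss records.
# Each record lists an observed anomaly and the dominant OEE loss class.
records = [
    {"jam", "availability_loss"},
    {"jam", "availability_loss"},
    {"misfeed", "quality_loss"},
    {"jam", "speed_loss"},
    {"misfeed", "quality_loss"},
]

def support(itemset):
    """Fraction of records containing every item of the itemset."""
    return sum(itemset <= r for r in records) / len(records)

def rules(min_support=0.3, min_confidence=0.6):
    items = set().union(*records)
    out = []
    for a, b in combinations(sorted(items), 2):
        for lhs, rhs in ((a, b), (b, a)):
            s = support({lhs, rhs})
            if s >= min_support and support({lhs}) > 0:
                conf = s / support({lhs})
                if conf >= min_confidence:
                    out.append((lhs, rhs, round(s, 2), round(conf, 2)))
    return out

for lhs, rhs, s, c in rules():
    print(f"{lhs} -> {rhs}  support={s} confidence={c}")
```

Rules of the form "jam -> availability_loss" are the kind of co-occurrence the paper then prioritizes and maps onto the Availability, Performance and Quality loss factors.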
Yong Gui and Lanxin Zhang
Abstract
Purpose
Influenced by the constantly changing manufacturing environment, no single dispatching rule (SDR) can consistently obtain better scheduling results than other rules for the dynamic job-shop scheduling problem (DJSP). Although the dynamic SDR selection classifier (DSSC) mined by the traditional data-mining-based scheduling method shows some improvement over an SDR, the enhancement is not significant since the rule selected by the DSSC is still an SDR.
Design/methodology/approach
This paper presents a novel data-mining-based scheduling method for the DJSP with machine failure aiming at minimizing the makespan. Firstly, a scheduling priority relation model (SPRM) is constructed to determine the appropriate priority relation between two operations based on the production system state and the difference between their priority values calculated using multiple SDRs. Subsequently, a training sample acquisition mechanism based on the optimal scheduling schemes is proposed to acquire training samples for the SPRM. Furthermore, feature selection and machine learning are conducted using the genetic algorithm and extreme learning machine to mine the SPRM.
Findings
Results from numerical experiments demonstrate that the SPRM, mined by the proposed method, not only achieves better scheduling results in most manufacturing environments but also maintains a higher level of stability in diverse manufacturing environments than an SDR and the DSSC.
Originality/value
This paper constructs an SPRM and mines it using data mining technologies, obtaining better results than an SDR and the DSSC in various manufacturing environments.
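The SPRM's exact features and learner are not reproduced here. As a hedged sketch of the pairwise-priority idea, one can build, for two candidate operations, a feature vector from the system state plus the differences of their priority values under several SDRs, and let a trained model decide which goes first; the SDRs (SPT, EDD), the linear stand-in for the learned model, and all numbers below are illustrative assumptions:

```python
# Sketch of the pairwise-priority idea behind an SPRM (details invented):
# features = production-system state + differences of the two operations'
# priority values under multiple single dispatching rules (SDRs).

def sdr_priorities(op):
    # Two classic SDRs as illustrative priority values (higher = sooner):
    # SPT (shorter processing time first) and EDD (earlier due date first).
    return {"SPT": -op["proc_time"], "EDD": -op["due_date"]}

def pairwise_features(op_a, op_b, queue_len):
    pa, pb = sdr_priorities(op_a), sdr_priorities(op_b)
    return [queue_len] + [pa[r] - pb[r] for r in ("SPT", "EDD")]

def predict_first(features):
    # Stand-in for the mined model (the paper uses an extreme learning
    # machine); here, a fixed linear score over the features.
    weights = [0.0, 0.7, 0.3]
    score = sum(w * f for w, f in zip(weights, features))
    return "A" if score >= 0 else "B"

op_a = {"proc_time": 3, "due_date": 10}
op_b = {"proc_time": 5, "due_date": 8}
feats = pairwise_features(op_a, op_b, queue_len=4)
print(feats, predict_first(feats))  # [4, 2, -2] A
```

Because the model ranks pairs rather than selecting one fixed SDR, its decision can change with the system state, which is the source of the robustness the Findings describe.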
Abstract
Purpose
This study develops a model and algorithm to solve the decentralized resource-constrained multi-project scheduling problem (DRCMPSP) and provides a suitable priority rule (PR) for coordinating global resource conflicts among multiple projects.
Design/methodology/approach
This study addresses the DRCMPSP, which respects the information privacy requirements of project agents; that is, there is no single manager centrally in charge of generating the multi-project schedule. Accordingly, a three-stage model was proposed for the decentralized management of multiple projects. To solve this model, a three-stage solution approach with a repeated negotiation mechanism was proposed.
Findings
The experimental results obtained using the Multi-Project Scheduling Problem LIBrary confirm that our approach outperforms existing methods, regardless of the average utilization factor (AUF). Comparative analysis revealed that delaying activities in the project with the lower makespan produces a lower average project delay. Furthermore, the new PR, LMS, performed better in problem subsets with AUF < 1 and in large-scale subsets with AUF > 1.
Originality/value
A solution approach with a repeated-negotiation mechanism suitable for the DRCMPSP and a new PR for coordinating global resource allocation are proposed.
Ran Wang, Yunbao Xu and Qinwen Yang
Abstract
Purpose
This paper intends to construct a new adaptive grey seasonal model (AGSM) to promote the application of the grey forecasting model in quarterly GDP.
Design/methodology/approach
Firstly, this paper constructs a new accumulation operation that embodies the new information priority by using a hyperparameter. Then, a new AGSM is constructed by using a new grey action quantity, nonlinear Bernoulli operator, discretization operation, moving average trend elimination method and the proposed new accumulation operation. Subsequently, the marine predators algorithm is used to quickly obtain the hyperparameters used to build the AGSM. Finally, comparative analysis experiments and ablation experiments based on China's quarterly GDP confirm the validity of the proposed model.
Findings
The AGSM reduces to several classical grey prediction models for particular settings of its structural parameters. The proposed accumulation operation satisfies the new information priority rule. In the comparative analysis experiments, the AGSM shows better prediction performance than the other competitive algorithms, and the proposed accumulation operation also outperforms existing accumulation operations. Ablation experiments show that each component of the AGSM is effective in enhancing the model's predictive performance.
Originality/value
A new AGSM with new information priority accumulation operation is proposed.
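The paper's exact operator is not given in this abstract. A common form of new-information-priority accumulation, shown here as a hedged sketch, geometrically down-weights older observations through a hyperparameter lam, so that newer data dominate the accumulated series; the GDP-like numbers are invented:

```python
# Sketch of a new-information-priority accumulation (the AGSM's actual
# operator may differ): with hyperparameter 0 < lam <= 1,
#   y_k = sum_{i=1..k} lam**(k - i) * x_i,
# so lam = 1 recovers the ordinary first-order accumulation (1-AGO),
# while lam < 1 gives newer observations higher effective weight.

def nip_accumulate(x, lam):
    y = []
    acc = 0.0
    for value in x:
        acc = lam * acc + value   # recursive form of the weighted sum
        y.append(acc)
    return y

series = [10.0, 12.0, 11.0, 13.0]
print(nip_accumulate(series, lam=1.0))  # [10.0, 22.0, 33.0, 46.0]
print(nip_accumulate(series, lam=0.5))  # [10.0, 17.0, 19.5, 22.75]
```

In the paper, such a hyperparameter is tuned by the marine predators algorithm rather than set by hand.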
Abstract
Purpose
The purpose of this study is to automatically generate a construction schedule by extracting data from the BIM (Building Information Modeling) model and combining an ontology constraint rule and a genetic algorithm (GA).
Design/methodology/approach
This study developed a feasible multi-phase framework that generates the construction schedule automatically by extracting information from the BIM model, utilizing the ontology constraint rule to capture the relationships between all the components, and finally applying the GA to generate the schedule.
Findings
To present the functionality of the framework, a prototype case is adopted to show the whole procedure, and the results show that the scheme designed in this study can quickly generate the schedule and ensure that it can satisfy the requirements of logical constraints and time parameter constraints.
Practical implications
Proper utilization of the conceptual framework can contribute to the automatic generation of construction schedules and significantly reduce manual errors in the Architectural, Engineering, and Construction (AEC) industry. Moreover, a scheme of BIM-based ontology and GA for construction schedule generation may reduce additional manual work and improve schedule management performance.
Social implications
The hybrid approach combining the ontology constraint rule and the GA proposed in this study is an effective attempt to generate the construction schedule, and it provides a direct indicator for the schedule control of the project.
Originality/value
In this study, the data application process of the BIM model is divided into four modules: extraction, processing, optimization, and output. The key technologies including secondary development, ontology theory, and GA are introduced to develop a multi-phase framework for the automatic generation of the construction schedule and to realize the schedule prediction under logical constraints and duration interference.
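The paper's framework is not reproduced here; as a hedged sketch of the optimization step only, ontology constraints can be reduced to precedence pairs, a GA can search over activity priority orders, and a simple schedule generator with a limited number of crews can decode each order into start times. The activities, durations, precedences, crew count and GA settings below are all invented:

```python
import random

# Sketch (invented data): precedence pairs standing in for ontology
# constraints, a GA over activity priority orders, and a greedy schedule
# generator with a limited number of crews as the fitness decoder.
DUR = {"A": 3, "B": 2, "C": 4, "D": 2, "E": 3}
PRED = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["C", "D"]}
CREWS = 2

def makespan(order):
    finish, crew_free = {}, [0] * CREWS
    remaining = list(order)
    while remaining:
        # first activity in priority order whose predecessors are finished
        act = next(a for a in remaining if all(p in finish for p in PRED[a]))
        remaining.remove(act)
        ready = max([finish[p] for p in PRED[act]], default=0)
        i = min(range(CREWS), key=lambda c: crew_free[c])
        start = max(ready, crew_free[i])
        finish[act] = start + DUR[act]
        crew_free[i] = finish[act]
    return max(finish.values())

def ga(pop_size=20, generations=40, seed=1):
    rng = random.Random(seed)
    acts = list(DUR)
    pop = [rng.sample(acts, len(acts)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=makespan)           # elitist selection
        parents = pop[: pop_size // 2]
        children = []
        for p in parents:
            child = p[:]
            i, j = rng.sample(range(len(acts)), 2)   # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = parents + children
    return min(pop, key=makespan)

best = ga()
print(best, makespan(best))
```

For this toy network the critical path A-C-E gives a lower bound of 10, and the GA converges to an order achieving it; a real BIM-derived network would add time-parameter constraints on top of the logical ones.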
Abstract
Purpose
Integrating the Chat Generative Pre-Trained Transformer-type (ChatGPT-type) model with government services has great development prospects. Applying this model improves service efficiency but has certain risks, thus having a dual impact on the public. For a responsible and democratic government, it is necessary to fully understand the factors influencing public acceptance and their causal relationships to truly encourage the public to accept and use government ChatGPT-type services.
Design/methodology/approach
This study used the Latent Dirichlet allocation (LDA) model to analyze comment texts and summarize 15 factors that affect public acceptance. Multiple related matrices were established using the grey decision-making trial and evaluation laboratory (grey-DEMATEL) method to reveal causal relationships among the factors. From the two opposite extraction rules of result priority and cause priority, the authors obtained an antagonistic topological model with comprehensive influence values using the total adversarial interpretive structure model (TAISM).
Findings
Fifteen factors were categorized in terms of cause and effect, and the antagonistic topological model with comprehensive influence values was also analyzed. The analysis showed that perceived risk, trust and meeting demand were the three most critical factors of public acceptance. Meanwhile, perceived risk and trust directly affected public acceptance and were affected by other factors. Supervision and accountability had the highest driving power and acted as the causal factor to influence other factors.
Originality/value
This study identified the factors affecting public acceptance of integrating the ChatGPT-type model with government services. It analyzed the relationship between the factors to provide a reference for decision-makers. This study introduced TAISM to form the LDA-grey-DEMATEL-TAISM method to provide an analytical paradigm for studying similar influencing factors.
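The paper uses a grey (interval-valued) DEMATEL; as a hedged sketch, the crisp DEMATEL core is shown below with interval arithmetic omitted. The pairwise direct-influence scores are invented, and only three of the paper's named factors are used: normalize the direct matrix, compute the total-relation matrix T = N(I - N)^-1, then classify each factor as cause or effect from its row and column sums:

```python
# Sketch of the crisp DEMATEL core (the paper's grey variant adds interval
# arithmetic). Direct-influence scores among three factors are invented.
factors = ["perceived risk", "trust", "supervision and accountability"]
D = [[0, 3, 1],
     [2, 0, 1],
     [3, 3, 0]]

def normalise(D):
    s = max(max(sum(row) for row in D),
            max(sum(col) for col in zip(*D)))
    return [[v / s for v in row] for row in D]

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def mat_inv(A):
    # Gauss-Jordan inversion for a small matrix.
    n = len(A)
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for i in range(n):
        pivot = M[i][i]
        M[i] = [v / pivot for v in M[i]]
        for k in range(n):
            if k != i:
                M[k] = [vk - M[k][i] * vi for vk, vi in zip(M[k], M[i])]
    return [row[n:] for row in M]

N = normalise(D)
I = [[float(i == j) for j in range(3)] for i in range(3)]
IminusN = [[I[i][j] - N[i][j] for j in range(3)] for i in range(3)]
T = mat_mul(N, mat_inv(IminusN))           # total-relation matrix
for i, name in enumerate(factors):
    r = sum(T[i])                          # influence given
    c = sum(T[k][i] for k in range(3))     # influence received
    role = "cause" if r - c > 0 else "effect"
    print(f"{name}: prominence={r + c:.2f}, relation={r - c:.2f} ({role})")
```

With these invented scores, "supervision and accountability" comes out as a cause factor and "trust" as an effect factor, mirroring the direction of the paper's reported finding, though the numbers themselves carry no evidential weight.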
Prime ministers in the Gulf Cooperation Council (GCC) states are part of the ruling families. They often combine the role with that of crown prince or sovereign ministerial…