Search results
1 – 10 of 910
Pierre Jouan and Pierre Hallot
Abstract
Purpose
The purpose of this paper is to address the challenging issue of developing a quantitative approach for the representation of cultural significance data in heritage information systems (HIS). The authors propose to provide experts in the field with a dedicated framework to structure and integrate targeted data about historical objects' significance in such environments.
Design/methodology/approach
This research seeks to identify key indicators that better inform decision-makers about cultural significance. Identified concepts are formalized in a data structure through conceptual data modeling, taking advantage of the unified modeling language (UML). The design science research (DSR) method is implemented to facilitate the development of the data model.
Findings
This paper proposes a practical solution for the formalization of data related to the significance of objects in HIS. The authors end up with a data model which enables multiple knowledge representations through data analysis and information retrieval.
Originality/value
The framework proposed in this article supports a more sustainable vision of heritage preservation as the framework enhances the involvement of all stakeholders in the conservation and management of historical sites. The data model supports explicit communications of the significance of historical objects and strengthens the synergy between the stakeholders involved in different phases of the conservation process.
Hsing-Hua Chang, Chen-Hsin Lai, Kuen-Liang Lin and Shih-Kuei Lin
Abstract
Factor investment is booming in global asset management, especially environmental, social, and governance (ESG), dividend yield, and volatility factors. In this chapter, we use data from the US securities market from 2003 to 2019 to predict dividends and volatility factors through machine learning and historical data–based methods. After that, we utilize particle swarm optimization to construct the Markowitz portfolio with limits on the number of assets and weight restrictions. The empirical results show that the prediction ability using XGBoost is superior to the historical factor investment method. Moreover, the investment performance of our portfolio with ESG, high-yield, and low-volatility factors outperforms baseline methods, especially the S&P 500 ETF.
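The portfolio step the chapter describes, particle swarm optimization of a Markowitz portfolio under a cardinality limit and weight caps, can be sketched as follows. This is a minimal illustration with synthetic returns; the penalty weights, swarm parameters and asset universe are assumptions, not the authors' settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_portfolio(mu, cov, k_max, w_max, n_particles=40, iters=200,
                  risk_aversion=5.0, penalty=10.0):
    """Minimize risk_aversion * variance - expected return, penalizing
    portfolios holding more than k_max assets or breaching the cap w_max."""
    n = len(mu)

    def cost(w):
        c = risk_aversion * (w @ cov @ w) - mu @ w
        c += penalty * max(0, np.count_nonzero(w > 1e-3) - k_max)
        c += penalty * np.maximum(w - w_max, 0).sum()
        return c

    def normalize(x):
        # Project back onto the simplex-like feasible set: long-only, sum to 1.
        x = np.clip(x, 0, None)
        s = x.sum()
        return x / s if s > 0 else np.ones(n) / n

    pos = rng.dirichlet(np.ones(n), n_particles)   # feasible starting weights
    vel = np.zeros_like(pos)
    best, best_cost = pos.copy(), np.array([cost(w) for w in pos])
    g = best[best_cost.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (best - pos) + 1.5 * r2 * (g - pos)
        pos = np.array([normalize(p) for p in pos + vel])
        c = np.array([cost(w) for w in pos])
        improved = c < best_cost
        best[improved], best_cost[improved] = pos[improved], c[improved]
        g = best[best_cost.argmin()].copy()
    return g

# Synthetic example: 8 assets, at most 3 held, each capped at 50%.
mu = rng.normal(0.08, 0.03, 8)
cov = np.diag(rng.uniform(0.01, 0.05, 8))
w = pso_portfolio(mu, cov, k_max=3, w_max=0.5)
```

The penalty terms are what let a continuous optimizer like PSO handle the cardinality constraint, which makes the exact Markowitz problem combinatorial.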
Abstract
This study investigates Rokkan's research programme in the light of the differences between case- and variables-based methodologies. Three phases of the research process are distinguished. Studying the way Rokkan actually proceeded in the research within his Europe project, we find that he follows the protocols of case-methodologies such as grounded theory. In the second phase of the research process, however, he constructs variables-based models as tools for his macro-historical comparisons. To get to variables from the sensitizing concepts coded in the first phase, Rokkan defines his variables as close to cases as possible: variables as nominal level typologies, types as variable values. He thus faces two interrelated dilemmas. First, a philosophy of science dissonance: he legitimates his research only with reference to a variable-methodology, while his research is thoroughly case based. Second, a paradox of double coding: using variable-based models in the second phase, the status of the knowledge available in the first phase memos is degraded. Rokkan cannot decide between the two main solutions to these dilemmas: The first solution is to discard his heterogeneous data, instead working only with homogeneous data that opens up to more consistently variables-oriented research. The second solution is to replace the notion of variables/variable values with typology/types, thereby returning to cases, pursuing comparative case reconstructions in the third phase of research. The study concludes in favour of the second solution.
Gang Yu, Zhiqiang Li, Ruochen Zeng, Yucong Jin, Min Hu and Vijayan Sugumaran
Abstract
Purpose
Accurate prediction of the structural condition of urban critical infrastructure is crucial for predictive maintenance. However, existing prediction methods lack precision due to limitations in utilizing heterogeneous sensing data and domain knowledge, as well as insufficient generalizability resulting from limited data samples. This paper integrates implicit, qualitative expert knowledge into quantifiable values for tunnel condition assessment and proposes a tunnel structure prediction algorithm that augments a state-of-the-art attention-based long short-term memory (LSTM) model with expert rating knowledge, achieving robust predictions that support the reasonable allocation of maintenance resources.
Design/methodology/approach
Through formalizing domain experts' knowledge into quantitative tunnel condition index (TCI) with analytic hierarchy process (AHP), a fusion approach using sequence smoothing and sliding time window techniques is applied to the TCI and time-series sensing data. By incorporating both sensing data and expert ratings, an attention-based LSTM model is developed to improve prediction accuracy and reduce the uncertainty of structural influencing factors.
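The AHP step described here, turning expert pairwise comparisons into numeric weights for a tunnel condition index (TCI), is a standard eigenvector computation. Below is a minimal sketch; the example comparison matrix and indicator ratings are hypothetical, not taken from the paper.

```python
import numpy as np

def ahp_weights(pairwise, iters=100):
    """Principal-eigenvector priority weights of a reciprocal
    pairwise-comparison matrix (the standard AHP procedure)."""
    A = np.asarray(pairwise, dtype=float)
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):          # power iteration toward the eigenvector
        w = A @ w
        w /= w.sum()
    return w

def consistency_ratio(pairwise, weights):
    """CR = CI / RI; values below 0.1 are conventionally acceptable."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    lam = ((A @ weights) / weights).mean()   # estimate of lambda_max
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]      # random-index table (partial)
    return ci / ri

# Hypothetical expert judgements over three condition indicators,
# e.g. crack width vs. leakage vs. settlement.
P = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(P)
tci = w @ np.array([0.8, 0.6, 0.9])   # weighted ratings -> scalar index
```

A TCI series computed this way per inspection period is what can then be smoothed and windowed alongside the time-series sensing data.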
Findings
The empirical experiment in Dalian Road Tunnel in Shanghai, China showcases the effectiveness of the proposed method, which can comprehensively evaluate the tunnel structure condition and significantly improve prediction performance.
Originality/value
This study proposes a novel structure condition prediction algorithm that augments a state-of-the-art attention-based LSTM model with expert rating knowledge for robust prediction of structure condition of complex projects.
Elisa Gonzalez Santacruz, David Romero, Julieta Noguez and Thorsten Wuest
Abstract
Purpose
This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated quality 4.0 framework (IQ4.0F) for quality improvement (QI) based on Six Sigma and machine learning (ML) techniques towards ZDM. The IQ4.0F aims to contribute to the advancement of defect prediction approaches in diverse manufacturing processes. Furthermore, the work enables a comprehensive analysis of process variables influencing product quality with emphasis on the use of supervised and unsupervised ML techniques in Six Sigma’s DMAIC (Define, Measure, Analyze, Improve and Control) cycle stage of “Analyze.”
Design/methodology/approach
The research methodology employed a systematic literature review (SLR) based on PRISMA guidelines to develop the integrated framework, followed by a real industrial case study set in the automotive industry to fulfill the objectives of verifying and validating the proposed IQ4.0F with primary data.
Findings
This research work demonstrates the value of a “stepwise framework” to facilitate a shift from conventional quality management systems (QMSs) to QMSs 4.0. It uses the IDEF0 modeling methodology and Six Sigma’s DMAIC cycle to structure the steps to be followed to adopt the Quality 4.0 paradigm for QI. It also proves the worth of integrating Six Sigma and ML techniques into the “Analyze” stage of the DMAIC cycle for improving defect prediction in manufacturing processes and supporting problem-solving activities for quality managers.
Originality/value
This research paper introduces a first-of-its-kind Quality 4.0 framework – the IQ4.0F. Each step of the IQ4.0F was verified and validated in an original industrial case study set in the automotive industry. It is the first Quality 4.0 framework, according to the SLR conducted, to utilize the principal component analysis technique as a substitute for “Screening Design” in the Design of Experiments phase and K-means clustering technique for multivariable analysis, identifying process parameters that significantly impact product quality. The proposed IQ4.0F not only empowers decision-makers with the knowledge to launch a Quality 4.0 initiative but also provides quality managers with a systematic problem-solving methodology for quality improvement.
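The substitution described above, PCA in place of a "Screening Design" followed by K-means for multivariable analysis, can be sketched in a few lines. The numpy-only implementations below stand in for the usual library calls, and the process data is synthetic: six measured parameters driven by two latent factors, which is the situation PCA screening is meant to uncover.

```python
import numpy as np

rng = np.random.default_rng(2)

def pca(X, n_components):
    """Project centered data onto its top principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm; returns cluster labels and centroids."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic process data: 6 parameters, 2 latent factors, small noise.
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + rng.normal(0, 0.1, (200, 6))

scores = pca(X, n_components=2)          # screening: keep 2 components
labels, _ = kmeans(scores, k=3)          # group process settings
```

Clustering in the reduced score space, rather than over all raw parameters, is what lets the analysis point at the parameter combinations that actually move product quality.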
Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel
Abstract
Purpose
This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.
Design/methodology/approach
A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Using logistic regression and random forest classification models, quantitative data (historical schedule transactions and internal data) enables the testing of the predictive difference of variables under various planning horizons and inaccuracy levels.
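The quantitative step, classifying schedule lines as accurate or inaccurate from explanatory variables, can be sketched with a plain logistic-regression fit. The two feature names and the synthetic data below are illustrative stand-ins for the OEM's historical schedule transactions, not the study's actual variables.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Batch gradient descent on the logistic loss; returns weights, bias."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * (p - y).mean()
    return w, b

def predict(X, w, b):
    return (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)

# Synthetic data: two illustrative variables -- product complexity and
# order life-cycle stage -- driving the odds of an inaccurate line.
n = 500
X = rng.normal(size=(n, 2))
y = (0.8 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)

w, b = fit_logistic(X, y)
accuracy = (predict(X, w, b) == y).mean()
```

Fitting the same data with a random forest and comparing coefficient signs and feature importances across planning horizons mirrors the predictive-difference testing the abstract describes.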
Findings
The effects on delivery schedule inaccuracies are contingent on a decoupling point, and a variable may have a combined amplifying (complexity generating) and stabilizing (complexity absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable with predictive differences that vary. Decoupling management is identified as a mechanism for generating complexity absorption capabilities contributing to delivery schedule accuracy.
Practical implications
The findings provide guidelines for exploring and finding patterns in specific variables to reduce material delivery schedule inaccuracies and serve as input to predictive forecasting models.
Originality/value
The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects and conceptualizing features that cause and moderate inaccuracies in relation to decoupling management and complexity theory literature.
Ruchi Kejriwal, Monika Garg and Gaurav Sarin
Abstract
Purpose
The stock market has always been lucrative for investors, but its speculative nature makes price movements difficult to predict. Investors use both fundamental and technical analysis to predict prices: fundamental analysis studies a company's structured data, while technical analysis studies price trends. The increasing, easy availability of unstructured data has made it important to study market sentiment as well, since sentiment has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.
Design/methodology/approach
The research includes text mining and then creating various models for classification. The accuracy of these models is checked using confusion matrix.
Findings
Out of the six machine learning techniques used to create the classification model, kernel support vector machine gave the highest accuracy of 68%. This model can be now used to analyse the tweets, news and various other unstructured data to predict the price movement.
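The evaluation step the study relies on, checking each classifier with a confusion matrix, reduces to a small computation. The three sentiment labels below mirror the study's classes, while the example predictions are made up for illustration.

```python
LABELS = ["positive", "negative", "neutral"]

def confusion_matrix(y_true, y_pred, labels=LABELS):
    """Rows are true classes, columns are predicted classes."""
    idx = {c: i for i, c in enumerate(labels)}
    m = [[0] * len(labels) for _ in labels]
    for t, p in zip(y_true, y_pred):
        m[idx[t]][idx[p]] += 1
    return m

def accuracy_from(matrix):
    total = sum(sum(row) for row in matrix)
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    return correct / total

# Hypothetical tweet-level predictions from a sentiment classifier.
y_true = ["positive", "negative", "neutral", "positive", "negative", "neutral"]
y_pred = ["positive", "negative", "positive", "positive", "neutral", "neutral"]
cm = confusion_matrix(y_true, y_pred)
acc = accuracy_from(cm)   # 4 of 6 correct
```

Beyond overall accuracy, the off-diagonal cells show which sentiment classes the model confuses, which matters when "neutral" tweets dominate the stream.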
Originality/value
This study will help investors classify a news or a tweet into “positive”, “negative” or “neutral” quickly and determine the stock price trends.
Nanda Kumar Karippur, Pushpa Rani Balaramachandran and Elvin John
Abstract
Purpose
This paper aims at identifying the key factors influencing the adoption intention of data analytics for predictive maintenance (PdM) from the lens of the Technology–Organization–Environment (TOE) framework in the Singapore Process Industries context. The research model aids practitioners and researchers in developing a holistic maintenance strategy for large-scale asset-heavy process industries.
Design/methodology/approach
The TOE framework has been used in this study to consider a wide set of TOE factors and develop a research model with the support of literature. A survey is undertaken and the structural equation modelling (SEM) technique is adopted to test the hypotheses of the proposed model.
Findings
This research highlights the significant roles of digital infrastructure readiness, security and privacy, top management support, organizational competence, partnership with external consultants and government support in influencing adoption intention of data analytics for PdM. Perceived challenges related to organizational restructuring and process automation are not found significant in influencing the adoption intention.
Practical implications
This paper reports valuable insights on adoption intention of data analytics for PdM with relevant implications for the various stakeholders such as the leaders and senior managers of process manufacturing industry companies, government agencies, technology consultants and service providers.
Originality/value
This research uniquely validates the model for the adoption of data analytics for PdM in the process industries using the TOE framework. It reveals the significant technology, organizational and environmental factors influencing the adoption intention and highlights the relevant insights and implications for stakeholders.
Mike Brookbanks and Glenn C. Parry
Abstract
Purpose
This study aims to examine the effect of Industry 4.0 technology on resilience in established cross-border supply chain(s) (SC).
Design/methodology/approach
A literature review provides insight into the resilience capabilities of cross-border SC. The research uses a case study of an operational international SC: the producers, importers, logistics companies and UK Government (UKG) departments. Semi-structured interviews determine the resilience capabilities and approaches of participants within the cross-border SC and how implementing an Industry 4.0 Internet of Things (IoT) and distributed ledger (blockchain) based technology platform changes SC resilience capabilities and approaches.
Findings
A blockchain-based platform introduces common assured data, reducing data duplication. When combined with IoT technology, the platform improves end-to-end SC visibility and information sharing. Industry 4.0 technology builds collaboration, trust, improved agility, adaptability and integration. It enables common resilience capabilities and approaches that reduce the de-coupling between government agencies and participants of cross-border SC.
Research limitations/implications
The case study presents challenges specific to UKG’s customs border operations; research needs to be repeated in different contexts to confirm findings are generalisable.
Practical implications
Operational SC and UKG customs and excise departments must align their resilience strategies to gain full advantage of Industry 4.0 technologies.
Originality/value
Case study research shows how Industry 4.0 technology reduces the de-coupling between the SC and UKG, enhancing common resilience capabilities within established cross-border operations. Improved information sharing and SC visibility provided by IoT and blockchain technologies support the development of resilience in established cross-border SC and enhance interactions with UKG at the customs border.
Rosemarie Santa González, Marilène Cherkesly, Teodor Gabriel Crainic and Marie-Eve Rancourt
Abstract
Purpose
This study aims to deepen the understanding of the challenges and implications entailed by deploying mobile clinics in conflict zones to reach populations affected by violence and cut off from health-care services.
Design/methodology/approach
This research combines an integrated literature review and an instrumental case study. The literature review comprises two targeted reviews to provide insights: one on conflict zones and one on mobile clinics. The case study describes the process and challenges faced throughout a mobile clinic deployment during and after the Iraq War. The data was gathered using mixed methods over a two-year period (2017–2018).
Findings
Armed conflicts directly impact the populations’ health and access to health care. Mobile clinic deployments are often used and recommended to provide health-care access to vulnerable populations cut off from health-care services. However, there is a dearth of peer-reviewed literature documenting decision support tools for mobile clinic deployments.
Originality/value
This study highlights the gaps in the literature and provides direction for future research to support the development of valuable insights and decision support tools for practitioners.