Search results
1 – 10 of over 2,000
Abdulmohsen S. Almohsen, Naif M. Alsanabani, Abdullah M. Alsugair and Khalid S. Al-Gahtani
Abstract
Purpose
The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. The study aims to enhance the quality of the owner's estimation for predicting precisely the contract cost at the pre-tendering phase and avoiding future issues that arise through the construction phase.
Design/methodology/approach
This paper integrated artificial neural networks (ANN), deep neural networks (DNN) and time series (TS) techniques to accurately estimate the ratio of the low bid to the OEC (R) for contracts of different sizes and three contract types (building, electrical and mechanical), based on 94 contracts from King Saud University. The ANN and DNN models were evaluated using the mean absolute percentage error (MAPE), mean sum square error (MSSE) and root mean sum square error (RMSSE).
Findings
The main finding is that the ANN provides high accuracy, with MAPE, MSSE and RMSSE values of 2.94%, 0.0015 and 0.039, respectively. The DNN's precision was also high, with an average RMSSE of 0.15.
Practical implications
The owner and consultant are expected to use the study's findings to improve the accuracy of the owner's estimate and to narrow the difference between the owner's estimate and the lowest submitted offer, supporting better decision-making.
Originality/value
This study fills the knowledge gap by developing an ANN model to handle missing TS data and forecasting the difference between a low bid and an OEC at the pre-tendering phase.
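As a quick illustration of the three evaluation metrics named in this abstract, the sketch below uses their standard definitions (the paper's exact formulas are not given here, so these definitions, and the sample R values, are assumptions):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100

def msse(actual, predicted):
    """Mean sum square error: the mean of the squared residuals."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    return np.mean((actual - predicted) ** 2)

def rmsse(actual, predicted):
    """Root mean sum square error."""
    return np.sqrt(msse(actual, predicted))

# Illustrative values of R (ratio of the low bid to the owner's estimate)
r_actual = np.array([0.95, 1.02, 0.88, 1.10])
r_predicted = np.array([0.97, 1.00, 0.90, 1.05])
print(mape(r_actual, r_predicted))   # percentage error across the contracts
```

Because MAPE is scale-free, it allows error comparisons across contracts of very different sizes, which is presumably why it accompanies the squared-error measures here.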
Noemi Manara, Lorenzo Rosset, Francesco Zambelli, Andrea Zanola and America Califano
Abstract
Purpose
In the field of heritage science, especially as applied to buildings and artefacts made of organic hygroscopic materials, analyzing the microclimate has always been of extreme importance. In particular, in many cases, knowledge of the outdoor/indoor microclimate may support the decision process in the conservation and preservation of historic buildings. This knowledge is often gained through long and time-consuming monitoring campaigns that collect atmospheric and climatic data.
Design/methodology/approach
The collected time series may sometimes be corrupted, incomplete and/or affected by sensor errors because of the remoteness of the historic building's location, the natural aging of the sensors or the lack of continuous checks on the data-downloading process. For this reason, this work proposes an innovative approach for reconstructing the indoor microclimate of heritage buildings from knowledge of the outdoor one alone. The methodology is based on machine learning tools known as variational autoencoders (VAEs), which are able to reconstruct time series and/or fill data gaps.
Findings
The proposed approach is implemented using data collected in Ringebu Stave Church, a Norwegian medieval wooden heritage building. A realistic time series of the church's natural internal climate was successfully reconstructed for the vast majority of the year.
Originality/value
The novelty of this work is discussed in the framework of the existing literature. The work explores the potential of machine learning tools compared to traditional ones, providing a method that can reliably fill missing data in time series.
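The abstract does not describe the VAE architecture, but two ingredients it relies on can be sketched: sampling a latent code via the reparameterization trick, and scoring reconstructions only against observed points so that gaps never pollute the training signal. Everything below is illustrative; the synthetic series and the polynomial "decoder" stand-in are not the paper's data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "indoor temperature" series with a gap (illustrative data only,
# not the Ringebu Stave Church measurements).
t = np.linspace(0.0, 2.0 * np.pi, 100)
series = 15.0 + 5.0 * np.sin(t) + rng.normal(0.0, 0.2, t.size)
mask = np.ones(t.size, dtype=bool)
mask[40:55] = False                  # simulate a sensor outage

def reparameterize(mu, log_var, rng):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, 1)."""
    eps = rng.normal(size=np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def masked_reconstruction_loss(x, x_hat, mask):
    """Score the reconstruction only on observed points, so the gap
    never contributes to the training signal."""
    return np.mean((x[mask] - x_hat[mask]) ** 2)

# Sampling a latent code from a hypothetical encoder's (mu, log_var) output:
z = reparameterize(np.array([0.1, -0.3]), np.array([-1.0, -1.0]), rng)

# Stand-in "decoder": a smooth fit through the observed points only;
# its output also covers, and thereby fills, the gap.
coeffs = np.polyfit(t[mask], series[mask], deg=5)
x_hat = np.polyval(coeffs, t)
print(masked_reconstruction_loss(series, x_hat, mask))
```

A trained VAE replaces the polynomial with an encoder/decoder pair optimized on the masked loss plus a KL regularizer; the masking logic, however, is the same.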
Thomas Quincy Wilmore, Ana Kriletic, Daniel J. Svyantek and Lilah Donnelly
Abstract
Purpose
This study investigates the validity of Ferreira et al.’s (2020) Organizational Bullshit Perception Scale by examining its distinctiveness from similar constructs (perceptions of organizational politics, organizational cynicism, procedural justice) and its predictive validity through its relations with important organizational attitudes (organizational identification) and behaviors (counterproductive work behavior and organizational citizenship behavior). This study also examines the moderating effects of honesty–humility on the relations between organizational bullshit perception and the outcomes of counterproductive work behavior, organizational citizenship behavior and organizational identification. Finally, this study examines the incremental validity of organizational bullshit perception in predicting counterproductive work behavior, organizational citizenship behavior and organizational identification above and beyond similar constructs in an exploratory fashion.
Design/methodology/approach
Survey data were collected from a sample of working adults online via Amazon’s Mechanical Turk platform across two waves (final N = 323 for wave 1 and 174 for wave 2), one month apart.
Findings
The results indicate that organizational bullshit perception, as measured by Ferreira et al.’s (2020) scale, represents a distinct construct that has statistically significant relations with counterproductive work behavior, organizational citizenship behavior and organizational identification, even after controlling for procedural justice, organizational cynicism and perceptions of organizational politics. The results, however, showed no support for honesty–humility as a moderator.
Practical implications
These findings suggest that organizations can benefit from assessing and working to alleviate their employees’ perceptions of organizational bullshit. This construct predicts behaviors and attitudes important for organizational functioning.
Originality/value
This study adds to Ferreira et al.’s (2020) original work by demonstrating organizational bullshit perception’s distinctiveness from existing constructs in the literature and its implications for organizations and their employees.
Manuel Rossetti, Juliana Bright, Andrew Freeman, Anna Lee and Anthony Parrish
Abstract
Purpose
This paper is motivated by the need to assess the risk profiles associated with the substantial number of items within military supply chains. The scale of supply chain management processes creates difficulties both in the complexity of the analysis and in performing risk assessments that rely on manual (human analyst) methods. Thus, analysts require methods that can be automated and that can incorporate ongoing operational data on a regular basis.
Design/methodology/approach
The approach taken to address the identification of supply chain risk within an operational setting is based on aspects of multiobjective decision analysis (MODA). The approach constructs a risk and importance index for supply chain elements based on operational data. These indices are commensurate in value, leading to interpretable measures for decision-making.
Findings
Risk and importance indices were developed for the analysis of items within an example supply chain. Using the data on items, individual MODA models were formed and demonstrated using a prototype tool.
Originality/value
To better prepare risk mitigation strategies, analysts require the ability to identify potential sources of risk, especially in times of disruption such as natural disasters.
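MODA-style indices are commonly built as weighted additive value models over normalized attribute scores, which is what makes the risk and importance indices "commensurate in value." A minimal sketch of that idea follows; the attribute names, scales and weights are hypothetical, not the paper's:

```python
import numpy as np

def normalize(x, lo, hi):
    """Map a raw operational measure onto a common 0-1 value scale so that
    indices stay commensurate across attributes with different units."""
    return float(np.clip((float(x) - lo) / (hi - lo), 0.0, 1.0))

def additive_index(scores, weights):
    """Weighted additive value model: index = sum_i w_i * v_i(x_i),
    with the weights renormalized to sum to 1."""
    w = np.asarray(weights, float)
    return float(np.asarray(scores, float) @ (w / w.sum()))

# Hypothetical item with three risk attributes (names and scales invented):
scores = [
    normalize(45, 0, 60),    # lead-time variability, days
    normalize(1, 0, 1),      # sole-source dependence flag
    normalize(0.3, 0, 1),    # demand volatility (coefficient of variation)
]
weights = [0.5, 0.3, 0.2]
print(additive_index(scores, weights))   # composite risk index, ~0.735
```

Because every item's index lands on the same 0-1 scale, items can be ranked or screened automatically as new operational data arrive.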
Victoria Delaney and Victor R. Lee
Abstract
Purpose
With increased focus on data literacy and data science education in K-12, little is known about what makes a data set preferable for use by classroom teachers. Given that educational designers often privilege authenticity, the purpose of this study is to examine how teachers use features of data sets to determine their suitability for authentic data science learning experiences with their students.
Design/methodology/approach
Interviews with 12 practicing high school mathematics and statistics teachers were conducted and video-recorded. Teachers were given two different data sets about the same context and asked to explain which one would be better suited for an authentic data science experience. Following knowledge analysis methods, the teachers’ responses were coded and iteratively reviewed to find themes that appeared across multiple teachers related to their aesthetic judgments.
Findings
Three aspects of authenticity for data sets for this task were identified: thinking of authentic data sets as being “messy,” as requiring more work from the student or analyst to pore through than other data sets, and as involving computation.
Originality/value
Analysis of teachers’ aesthetics of data sets is a new direction for work on data literacy and data science education. The findings invite the field to think critically about how to help teachers develop new aesthetics and to provide data sets in curriculum materials that are suited for classroom use.
Elisa Gonzalez Santacruz, David Romero, Julieta Noguez and Thorsten Wuest
Abstract
Purpose
This research paper aims to analyze the scientific and grey literature on Quality 4.0 and zero-defect manufacturing (ZDM) frameworks to develop an integrated Quality 4.0 framework (IQ4.0F) for quality improvement (QI) based on Six Sigma and machine learning (ML) techniques towards ZDM. The IQ4.0F aims to contribute to the advancement of defect prediction approaches in diverse manufacturing processes. Furthermore, the work enables a comprehensive analysis of process variables influencing product quality, with emphasis on the use of supervised and unsupervised ML techniques in the “Analyze” stage of Six Sigma’s DMAIC (Define, Measure, Analyze, Improve and Control) cycle.
Design/methodology/approach
The research methodology employed a systematic literature review (SLR) based on PRISMA guidelines to develop the integrated framework, followed by a real industrial case study set in the automotive industry to fulfill the objectives of verifying and validating the proposed IQ4.0F with primary data.
Findings
This research work demonstrates the value of a “stepwise framework” to facilitate a shift from conventional quality management systems (QMSs) to QMSs 4.0. It uses the IDEF0 modeling methodology and Six Sigma’s DMAIC cycle to structure the steps to be followed to adopt the Quality 4.0 paradigm for QI. It also proves the worth of integrating Six Sigma and ML techniques into the “Analyze” stage of the DMAIC cycle for improving defect prediction in manufacturing processes and supporting problem-solving activities for quality managers.
Originality/value
This research paper introduces a first-of-its-kind Quality 4.0 framework – the IQ4.0F. Each step of the IQ4.0F was verified and validated in an original industrial case study set in the automotive industry. It is the first Quality 4.0 framework, according to the SLR conducted, to utilize the principal component analysis technique as a substitute for “Screening Design” in the Design of Experiments phase and K-means clustering technique for multivariable analysis, identifying process parameters that significantly impact product quality. The proposed IQ4.0F not only empowers decision-makers with the knowledge to launch a Quality 4.0 initiative but also provides quality managers with a systematic problem-solving methodology for quality improvement.
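The pairing of PCA (as a substitute for a screening design) with K-means clustering can be sketched on synthetic process data. The dimensions, the plain-NumPy implementations and the two-latent-factor structure below are illustrative assumptions, not the paper's tooling or case-study data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical process data: 200 production runs x 6 process parameters,
# driven by two latent factors (all dimensions invented).
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + rng.normal(0.0, 0.1, size=(200, 6))

# PCA via SVD on centered data: components with large singular values
# play the screening role of flagging influential parameter combinations.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)
scores = Xc @ Vt[:2].T               # project runs onto the top 2 components

def kmeans(P, k, iters=50):
    """Plain k-means for grouping runs with similar process signatures."""
    centers = P[rng.choice(len(P), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((P[:, None, :] - centers) ** 2).sum(-1), axis=1)
        new_centers = []
        for j in range(k):
            members = P[labels == j]
            new_centers.append(members.mean(axis=0) if len(members) else centers[j])
        centers = np.array(new_centers)
    return labels, centers

labels, centers = kmeans(scores, k=3)
print(explained[:2].sum())           # variance share of the top 2 components
```

Parameters loading heavily on the dominant components are candidates for tighter control, while the clusters group runs with similar multivariate signatures for follow-up analysis.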
Oscar F. Bustinza, Luis M. Molina Fernandez and Marlene Mendoza Macías
Abstract
Purpose
Machine learning (ML) analytical tools are increasingly being considered as an alternative quantitative methodology in management research. This paper proposes a new approach for uncovering the antecedents behind product and product–service innovation (PSI).
Design/methodology/approach
The ML approach is novel in the field of innovation antecedents at the country level. A sample from the Ecuadorian National Survey on Technology and Innovation, consisting of more than 6,000 firms, is used to rank the antecedents of innovation.
Findings
The analysis reveals that the antecedents of product and PSI are distinct, yet rooted in the principles of open innovation and competitive priorities.
Research limitations/implications
The analysis is based on a sample of Ecuadorian firms; the objective is to show how ML techniques are suitable for testing the antecedents of innovation in any other context.
Originality/value
The novel ML approach, in contrast to traditional quantitative analysis of the topic, can consider the full set of antecedent interactions to each of the innovations analyzed.
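One standard way an ML model can rank antecedents is permutation importance: fit a model, then measure how much predictive accuracy drops when each predictor is shuffled. The sketch below uses a hand-rolled logistic regression on synthetic firm data; the antecedent names, effect sizes and model choice are invented for illustration and are not the survey's:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical firm-level data: three candidate antecedents and a binary
# innovation outcome (names and effect sizes are invented).
n = 1000
X = rng.normal(size=(n, 3))          # [R&D intensity, collaboration, priorities]
logits = 0.8 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2]
y = (logits + rng.normal(0.0, 0.5, n) > 0).astype(float)

# Logistic regression fit by gradient descent (a dependency-free stand-in
# for the ML models a study like this would use).
w = np.zeros(3)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / n

def accuracy(X, y, w):
    return float(np.mean(((X @ w) > 0) == (y > 0.5)))

# Permutation importance: the accuracy drop when one antecedent is
# shuffled, breaking its link with the outcome.
base = accuracy(X, y, w)
importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(base - accuracy(Xp, y, w))

ranking = np.argsort(importance)[::-1]   # most influential antecedent first
print(ranking)
```

Unlike a single regression coefficient, this kind of score reflects the model's full set of learned interactions, which is the advantage the abstract highlights.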
Luuk Mandemakers, Eva Jaspers and Tanja van der Lippe
Abstract
Purpose
Employees facing challenges in their careers – i.e. female, migrant, elderly and lower-educated employees – might expect job searches to have a low likelihood of success and might therefore more often stay in unsatisfactory positions. The goal of this study is to discover inequalities in job mobility for these employees.
Design/methodology/approach
We rely on a large sample of Dutch public sector employees (N = 30,709) and study whether employees with challenges in their careers are hampered in translating job dissatisfaction into job searches. Additionally, we assess whether this is due to their perceptions of labor market alternatives.
Findings
Findings show that non-Western migrant, elderly and lower-educated employees are less likely to act on job dissatisfaction than their advantaged counterparts, whereas women are more likely than men to do so. Additionally, we find that although they perceive labor market opportunities as limited, this does not affect their propensity to search for different jobs.
Originality/value
This paper is novel in discovering inequalities in job mobility by analyzing whether employees facing challenges in their careers are less likely to act on job dissatisfaction and therefore more likely to remain in unsatisfactory positions.
Efrosini Siougle, Sophia Dimelis and Nikolaos Malevris
Abstract
Purpose
This study explores the link between ISO 9001 certification, personal data protection and firm performance using financial balance sheet and survey data. The security aspect of data protection is analyzed based on the major requirements of the General Data Protection Regulation and mapped to the relevant controls of the ISO/IEC 27001/27002 standards.
Design/methodology/approach
The research analysis is based on 96 ISO 9001–certified and non-certified publicly traded manufacturing and service firms that responded to a structured questionnaire. The authors develop and empirically test their theoretical model using the structural equation modeling technique and follow a difference-in-differences econometric modeling approach to estimate financial performance differences between certified and non-certified firms accounting for the level of data protection.
Findings
The estimates indicate three core dimensions in the areas of “policies, procedures and responsibilities,” “access control management” and “risk-reduction techniques” as desirable components in establishing the concept of data security. The estimates also suggest that the data protection level has significantly impacted the performance of certified firms relative to the non-certified. Controlling for the effect of industry-level factors reveals a positive relationship between data security and high-technological intensity.
Practical implications
The results imply that improving the level of compliance to data protection enhances the link between certification and firm performance.
Originality/value
This study fills a gap in the literature by empirically testing the influence of data protection on the relationship between quality certification and firm performance.
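At its core, the difference-in-differences approach mentioned above contrasts the before/after change for certified firms with the change for non-certified firms, netting out the common time trend. The numbers below are hypothetical, purely to show the arithmetic:

```python
# Hypothetical group means of a performance measure (e.g. return on
# assets, %) before and after certification; all numbers are invented.
certified_pre, certified_post = 4.0, 6.5
control_pre, control_post = 4.2, 5.0

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate: the treated group's change minus the control group's
    change, removing the time trend common to both groups."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

effect = diff_in_diff(certified_pre, certified_post, control_pre, control_post)
print(effect)   # ~1.7 percentage points associated with certification
```

In the paper's regression setting, the same contrast is obtained from the coefficient on the interaction of a certification indicator with a post-period indicator, with covariates such as the data protection level added as controls.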
Camillia Matuk, Ralph Vacca, Anna Amato, Megan Silander, Kayla DesPortes, Peter J. Woods and Marian Tes
Abstract
Purpose
Arts-integration is a promising approach to building students’ abilities to create and critique arguments with data, also known as informal inferential reasoning (IIR). However, differences in disciplinary practices and routines, as well as school organization and culture, can pose barriers to subject integration. The purpose of this study is to describe synergies and tensions between data science and the arts, and how these can create or constrain opportunities for learners to engage in IIR.
Design/methodology/approach
The authors co-designed and implemented four arts-integrated data literacy units with 10 teachers of arts and mathematics in middle school classrooms from four different schools in the USA. The data include student-generated artwork and their written rationales, and interviews with teachers and students. Through maximum variation sampling, the authors identified examples from the data to illustrate disciplinary synergies and tensions that appeared to support different IIR processes among students.
Findings
Aspects of artistic representation, including embodiment, narrative and visual image; and aspects of the culture of arts, including an emphasis on personal experience, the acknowledgement of subjectivity and considerations for the audience’s perspective, created synergies and tensions that both offered and hindered opportunities for IIR (i.e. going beyond data, using data as evidence and expressing uncertainty).
Originality/value
This study answers calls for humanistic approaches to data literacy education. It contributes an interdisciplinary perspective on data literacy that complements other context-oriented perspectives on data science. This study also offers recommendations for how designers and educators can capitalize on synergies and mitigate tensions between domains to promote successful IIR in arts-integrated data literacy education.