Search results

1 – 10 of 237
Article
Publication date: 13 September 2023

Arti Sahu and S. Shanmugapriya

This research proposes a viable method of slab and shore load computation for the partial striking technique utilized in high-rise construction projects to optimize the use of…

Abstract

Purpose

This research proposes a viable method of slab and shore load computation for the partial striking technique utilized in high-rise construction projects to optimize the use of horizontal formwork. The proposed Partial Striking Simplified Method (PSSM) is designed to be utilized by industry practitioners to schedule the construction operations of casting floors in order to control the formwork costs incurred throughout the completion of a project.

Design/methodology/approach

The article presents the PSSM for calculating slab and shore loads in multi-story building construction. It introduces the concept of “clearing before striking,” where shore supports are partially removed after a few days of pouring fresh concrete. The PSSM procedure is validated through numerical analysis and compared to other simplified approaches. Additionally, a user-friendly Python program based on the PSSM procedure is developed to explore the capability of the PSSM procedure and is used to study the variations in slab load, shoring level, concrete grade and cycle time.

Findings

The study developed a more efficient and reliable method for estimating the loads on shores and slabs when partial striking techniques are used in multi-story building construction. Compared to other simplified approaches, the PSSM procedure is simpler and more precise, as demonstrated through numerical analysis. The mean shore and slab load ratios are 1.08 and 1.07, respectively, with modest standard deviations of 0.29 and 0.21 relative to 3D numerical analysis. The Python program developed for load estimation is effective in exploring the capability of the proposed PSSM procedure. Its ability to identify the floor under maximum load and determine the specific construction stage provides valuable insights for multi-story construction, enabling informed decision-making and optimization of construction methods.

Practical implications

High-rise construction in Indian cities is booming, though this trend is not shared by all the country's major metropolitan areas. The growing construction sector in urban cities demands rapid construction and efficient utilization of formwork to control project costs. The proposed procedure helps optimize the formwork construction cost, the construction cycle time, the choice of a suitable formwork system, and the concrete grade for the adopted level of shoring in partial striking.

Originality/value

The proposed PSSM reduces the calculation complexity of the existing simplified method. This is done by assuming identical slab stiffness and an identical shore layout for uniform load distribution throughout the structure. The procedure uses a two-step load distribution calculation for the clearing phase. First, 66% of the prop load at the highest floor level is distributed uniformly over the lower interconnected slabs. Second, the total prop load is removed equally from all slabs below it. This makes the load distribution user-friendly for industry experts.
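The two-step clearing-phase calculation can be sketched in a few lines of Python (a simplified illustration only, not the authors' program; the 10 kN prop load and the three lower slabs in the example are hypothetical values):

```python
def distribute_clearing_load(prop_load, n_slabs):
    """Net load change per lower slab in the clearing phase (simplified sketch).

    Step 1: 66% of the prop load at the highest floor level is spread
    uniformly over the n_slabs lower interconnected slabs.
    Step 2: the total prop load is removed equally from all slabs below it.
    """
    added = 0.66 * prop_load / n_slabs   # step 1: uniform addition per slab
    removed = prop_load / n_slabs        # step 2: uniform removal per slab
    return added - removed               # negative: slabs are relieved overall

# Hypothetical example: a 10 kN prop load redistributed over 3 lower slabs
net = distribute_clearing_load(10.0, 3)
```

Because only 66% of the prop load is added back while 100% is removed, the net effect on each lower slab is a relief, which is the point of clearing before striking.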

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988


Article
Publication date: 6 March 2024

Ahmed EL Hana, Ahmed Hader, Jaouad Ait Lahcen, Salma Moushi, Yassine Hariti, Iliass Tarras, Rachid Et Touizi and Yahia Boughaleb

The purpose of the paper is to conduct a numerical and experimental investigation into the properties of nanofluids containing spherical nanoparticles of random sizes flowing…

Abstract

Purpose

The purpose of the paper is to conduct a numerical and experimental investigation into the properties of nanofluids containing spherical nanoparticles of random sizes flowing through a porous medium. The study aims to understand how the thermophysical properties of the nanofluid are affected by factors such as nanoparticle volume fraction, permeability of the porous medium, and pore size. The paper provides insights into the behavior of nanofluids in complex environments and explores the impact of varying conditions on key properties such as thermal conductivity, density, viscosity, and specific heat. Ultimately, the research contributes to the broader understanding of nanofluid dynamics and has potential implications for engineering and industrial applications in porous media.

Design/methodology/approach

This paper investigates nanofluids with spherical nanoparticles in a porous medium, exploring thermal conductivity, density, specific heat, and dynamic viscosity. Studying three compositions, the analysis employs the classical Maxwell model and Koo and Kleinstreuer’s approach for thermal conductivity, considering particle shape and temperature effects. Density and specific heat are defined based on mass and volume ratios. Dynamic viscosity models, including Brinkman’s and Gherasim et al.'s, are discussed. Numerical simulations, implemented in Python using the Langevin model, yield results processed in Origin Pro. This research enhances understanding of nanofluid behavior, contributing valuable insights to porous media applications.
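The classical mixture models named above (Maxwell for thermal conductivity, Brinkman for dynamic viscosity, volume-weighted rules for density and specific heat) can be sketched as follows; the water/Cu property values in the example are illustrative, not taken from the paper:

```python
def nanofluid_properties(phi, k_f, k_p, mu_f, rho_f, rho_p, cp_f, cp_p):
    """Effective thermophysical properties of a nanofluid (classical models).

    phi: nanoparticle volume fraction; _f = base fluid, _p = particle.
    """
    # Maxwell model for effective thermal conductivity
    k_nf = k_f * (k_p + 2 * k_f + 2 * phi * (k_p - k_f)) \
               / (k_p + 2 * k_f - phi * (k_p - k_f))
    # Brinkman model for dynamic viscosity
    mu_nf = mu_f / (1.0 - phi) ** 2.5
    # Volume-weighted mixture rules for density and specific heat
    rho_nf = (1.0 - phi) * rho_f + phi * rho_p
    cp_nf = ((1.0 - phi) * rho_f * cp_f + phi * rho_p * cp_p) / rho_nf
    return k_nf, mu_nf, rho_nf, cp_nf

# Hypothetical water/Cu example at 2% particle volume fraction
k, mu, rho, cp = nanofluid_properties(
    phi=0.02, k_f=0.613, k_p=400.0, mu_f=1.0e-3,
    rho_f=997.0, rho_p=8933.0, cp_f=4179.0, cp_p=385.0)
```

Consistent with the findings reported below, conductivity, density and viscosity rise with the volume fraction while specific heat falls relative to the base fluid.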

Findings

This study involves a numerical examination of nanofluid properties, featuring spherical nanoparticles of varying sizes suspended in a base fluid of known density, flowing through a porous medium. Experimental findings reveal a notable increase in thermal conductivity, density and viscosity as the volume fraction of particles rises. Conversely, specific heat decreases with higher particle volume concentration.

The influence of permeability and pore size on particle volume fraction variation is a key focus. Interestingly, the permeability of the medium has a significant effect, with the particle volume fraction observed to increase with permeability. This underscores the role of the medium's nature in altering the thermophysical properties of nanofluids.

Originality/value

This paper presents a novel numerical study on nanofluids with randomly sized spherical nanoparticles flowing in a porous medium. It explores the impact of porous medium properties on nanofluid thermophysical characteristics, emphasizing the significance of permeability and pore size. The inclusion of random nanoparticle sizes adds practical relevance. Contrasting trends are observed, where thermal conductivity, density, and viscosity increase with particle volume fraction, while specific heat decreases. These findings offer valuable insights for engineering applications, providing a deeper understanding of nanofluid behavior in porous environments and guiding the design of efficient systems in various industrial contexts.

Details

Multidiscipline Modeling in Materials and Structures, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1573-6105


Article
Publication date: 26 December 2023

Farshad Peiman, Mohammad Khalilzadeh, Nasser Shahsavari-Pour and Mehdi Ravanshadnia

Earned value management (EVM)–based models for estimating project actual duration (AD) and cost at completion using various methods are continuously developed to improve the…

Abstract

Purpose

Earned value management (EVM)–based models for estimating project actual duration (AD) and cost at completion using various methods are continuously developed to improve the accuracy and actualization of predicted values. This study primarily aimed to examine natural gradient boosting (NGBoost-2020) with the classification and regression trees (CART) base model (base learner). To the best of the authors' knowledge, this concept has never been applied to the EVM AD forecasting problem. Consequently, the authors compared this method to the single K-nearest neighbor (KNN) method, the ensemble method of extreme gradient boosting (XGBoost-2016) with the CART base model and the optimal equation of EVM, the earned schedule (ES) equation with the performance factor equal to 1 (ES1). The paper also sought to determine the extent to which the World Bank's two legal factors affect countries and how these two legal causes of delay (related to institutional flaws) influence AD prediction models.

Design/methodology/approach

In this paper, data from 30 construction projects of various building types in Iran, Pakistan, India, Turkey, Malaysia and Nigeria (chosen for the high number of delayed projects and the detrimental effects of these delays in these countries) were used to develop three models. The target variable of the models was a dimensionless output, the ratio of estimated duration to completion (ETC(t)) to planned duration (PD). A total of 426 tracking periods were used to build the three models, with 353 samples and 23 projects in the training set and 73 samples (17% of the total) and six projects (21% of the total) in the testing set. Seventeen dimensionless input variables were used, including ten based on the main variables and performance indices of EVM and several others detailed in the study. The three models were subsequently created using Python and several GitHub-hosted codes.
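The ES1 benchmark against which the learning models are compared follows standard earned-schedule formulas, which can be sketched as below; the paper's NGBoost, XGBoost and KNN models are not reproduced here, and the planned-value curve and progress figures in the example are hypothetical:

```python
import numpy as np

def earned_schedule(pv, ev):
    """Earned schedule: the time at which planned value equalled current EV.

    pv: planned-value curve sampled at periods 0..n; ev: current earned value.
    Linear interpolation between periods (standard ES definition).
    """
    t = int(np.searchsorted(pv, ev, side="right") - 1)  # last period with PV <= EV
    if t >= len(pv) - 1:
        return float(len(pv) - 1)
    return t + (ev - pv[t]) / (pv[t + 1] - pv[t])

def es1_duration_forecast(at, pd_, es):
    """ES1: estimated duration at completion with performance factor 1,
    i.e. AT + (PD - ES)."""
    return at + (pd_ - es)

# Hypothetical project, planned duration 9 periods, behind schedule at AT = 5
pv = np.array([0, 10, 25, 40, 55, 70, 82, 90, 96, 100], dtype=float)
es = earned_schedule(pv, ev=48.0)          # falls between periods 3 and 4
eac_t = es1_duration_forecast(at=5.0, pd_=9.0, es=es)
```

Since the earned schedule lags the actual time here, the forecast duration exceeds the planned duration, flagging a likely delay.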

Findings

For the testing set of the optimal model (NGBoost), the better percentage mean (better%) of the prediction error (based on projects with a lower error percentage) of the NGBoost compared to two KNN and ES1 single models, as well as the total mean absolute percentage error (MAPE) and mean lags (MeLa) (indicating model stability) were 100, 83.33, 5.62 and 3.17%, respectively. Notably, the total MAPE and MeLa for the NGBoost model testing set, which had ten EVM-based input variables, were 6.74 and 5.20%, respectively. The ensemble artificial intelligence (AI) models exhibited a much lower MAPE than ES1. Additionally, ES1 was less stable in prediction than NGBoost. The possibility of excessive and unusual MAPE and MeLa values occurred only in the two single models. However, on some data sets, ES1 outperformed AI models. NGBoost also outperformed other models, especially single models for most developing countries, and was more accurate than previously presented optimized models. In addition, sensitivity analysis was conducted on the NGBoost predicted outputs of 30 projects using the SHapley Additive exPlanations (SHAP) method. All variables demonstrated an effect on ETC(t)/PD. The results revealed that the most influential input variables in order of importance were actual time (AT) to PD, regulatory quality (RQ), earned duration (ED) to PD, schedule cost index (SCI), planned complete percentage, rule of law (RL), actual complete percentage (ACP) and ETC(t) of the ES optimal equation to PD. The probabilistic hybrid model was selected based on the outputs predicted by the NGBoost and XGBoost models and the MAPE values from three AI models. The 95% prediction interval of the NGBoost–XGBoost model revealed that 96.10 and 98.60% of the actual output values of the testing and training sets are within this interval, respectively.

Research limitations/implications

Due to the use of projects performed in different countries, it was not possible to distribute the questionnaire to the managers and stakeholders of 30 projects in six developing countries. Due to the low number of EVM-based projects in various references, it was unfeasible to utilize other types of projects. Future prospects include evaluating the accuracy and stability of NGBoost for timely and non-fluctuating projects (mostly in developed countries), considering a greater number of legal/institutional variables as input, using legal/institutional/internal/inflation inputs for complex projects with extremely high uncertainty (such as bridge and road construction) and integrating these inputs and NGBoost with new technologies (such as blockchain, radio frequency identification (RFID) systems, building information modeling (BIM) and Internet of things (IoT)).

Practical implications

The legal/institutional recommendations made to governments are strict control of prices, adequate supervision, removal of additional rules, removal of unfair regulations, clarification of the future trend of law changes, strict monitoring of property rights, simplification of the processes for obtaining permits and elimination of unnecessary changes, particularly in developing countries and at the onset of irregular projects with limited information and numerous uncertainties. Furthermore, the managers and stakeholders of this group of projects were informed, at an early stage, of the significance of seven construction variables (institutional/legal external risks, internal factors and inflation), the use of time-series (dynamic) models to predict AD, accurate calculation of progress-percentage variables, the effect of building type in non-residential projects, regular updating of inflation during implementation, the effect of employer type in the early stage of public projects and the late stage of private projects, and the allocation of a reserve duration (buffer) to respond to institutional/legal risks.

Originality/value

Ensemble methods were optimized in 70% of references. To the authors' knowledge, NGBoost, from the set of ensemble methods, has not previously been used to estimate construction project duration and delays. NGBoost is an effective method for considering uncertainties in irregular projects and is often implemented in developing countries. Furthermore, existing AD estimation models fail to incorporate RQ and RL from the World Bank's worldwide governance indicators (WGI) as risk-based inputs. In addition, the various WGI, EVM and inflation variables have not been combined with substantial degrees of institutional delay risk as inputs. Consequently, due to the existence of critical and complex risks in different countries, it is vital to consider legal and institutional factors. This is especially recommended if an in-depth, accurate and reality-based method like SHAP is used for analysis.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988


Article
Publication date: 28 July 2023

Zahra Karparvar, Mahdieh Mirzabeigi and Ghasem Salimi

The process of knowledge creation is recognized as an essential process for organizational learning and innovation. Creating knowledge to solve the problems and complexities of…

Abstract

Purpose

The process of knowledge creation is recognized as essential for organizational learning and innovation. Creating knowledge to solve the problems and complexities of today's world is like opening a black box. Hence, higher education systems and universities are exploring ways to overcome these complexities and cope with global changes. In this regard, interdisciplinary collaborations and activities are crucial in creating knowledge and innovation to counter these changes. This study aimed to explore the experiences of Shiraz University interdisciplinary researchers in the field of humanities and to design and explain a conceptual model of knowledge creation in interdisciplinary research teams in the humanities.

Design/methodology/approach

In this qualitative research, grounded theory was implemented based on Strauss and Corbin's systematic approach. The sampling method was purposeful, and the participants included sixteen faculty members of Shiraz University who had at least one experience of performing an interdisciplinary activity in one of the humanities fields. The first participant was selected as a pilot, and the rest were selected by snowball sampling. Semi-structured interviews were used to collect data and continued until theoretical saturation was attained. After collecting the available information and interviewing the participants, the data were organized and analyzed in three stages (open coding, axial coding and selective coding) using the framework proposed by Strauss and Corbin. Finally, the researcher reached a final and meaningful categorization.

Findings

In this research, the results were presented as a paradigm model of knowledge creation in the interdisciplinary research teams in the field of humanities. The paradigm model of the study consists of causal factors (internal and external factors), main categories (specialized competencies, scientific discourse, understanding of knowledge domains), strategies (structuring and synchronizing), context (individual and organizational), interfering factors (leadership, industry, and society), and consequences (individual and group achievement).

Originality/value

The present study aimed to explore the experiences of researchers in the interdisciplinary humanities research teams on knowledge creation in qualitative research. The study used Strauss and Corbin's systematic approach to recognize the causal factors of knowledge creation and the contexts. Discovering the main category of knowledge creation in interdisciplinary research teams, the authors analyze the strategies and consequences of knowledge creation.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806


Open Access
Article
Publication date: 13 March 2024

Tjaša Redek and Uroš Godnov

The Internet has changed consumer decision-making and influenced business behaviour. User-generated product information is abundant and readily available. This paper argues that…

Abstract

Purpose

The Internet has changed consumer decision-making and influenced business behaviour. User-generated product information is abundant and readily available. This paper argues that user-generated content can be efficiently utilised for business intelligence using data science and develops an approach to demonstrate the methods and benefits of the different techniques.

Design/methodology/approach

Using Python Selenium, Beautiful Soup and various text mining approaches in R to access, retrieve and analyse user-generated content, we argue that (1) companies can extract information about the product attributes that matter most to consumers and (2) user-generated reviews enable the use of text mining results in combination with other demographic and statistical information (e.g. ratings) as an efficient input for competitive analysis.
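The retrieval-and-parsing step can be sketched with Beautiful Soup on a static HTML fragment (a minimal illustration; real pages would first be fetched with Selenium, and the review markup and class names here are hypothetical, not from any actual retailer):

```python
from bs4 import BeautifulSoup

# Hypothetical review-page fragment; a live page would be retrieved with Selenium
html = """
<div class="review"><span class="rating">4</span>
  <p class="text">Great battery life, camera could be better.</p></div>
<div class="review"><span class="rating">2</span>
  <p class="text">Stopped working after a month.</p></div>
"""

soup = BeautifulSoup(html, "html.parser")
# Extract a rating (numerical) and review text (textual) per review block
reviews = [
    {"rating": int(div.select_one(".rating").get_text(strip=True)),
     "text": div.select_one(".text").get_text(strip=True)}
    for div in soup.select("div.review")
]
```

The resulting records pair numerical ratings with free text, which is exactly the combination of data types the analysis above exploits.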

Findings

The paper shows that combining different types of data (textual and numerical data) and applying and combining different methods can provide organisations with important business information and improve business performance.

Research limitations/implications

The paper shows that combining different types of data (textual and numerical data) and applying and combining different methods can provide organisations with important business information and improve business performance.

Originality/value

The study makes several contributions to the marketing and management literature, mainly by illustrating the methodological advantages of text mining and accompanying statistical analysis, the different types of distilled information and their use in decision-making.

Details

Kybernetes, vol. 53 no. 13
Type: Research Article
ISSN: 0368-492X


Book part
Publication date: 25 October 2023

Md Aminul Islam and Md Abu Sufian

This research navigates the confluence of data analytics, machine learning, and artificial intelligence to revolutionize the management of urban services in smart cities. The…

Abstract

This research navigates the confluence of data analytics, machine learning and artificial intelligence to revolutionize the management of urban services in smart cities. The study used advanced tools to scrutinize key performance indicators integral to the functioning of smart cities, thereby enhancing leadership and decision-making strategies. Our work involves the implementation of various machine learning models, such as Logistic Regression, Support Vector Machine, Decision Tree, Naive Bayes and Artificial Neural Networks (ANN), on the data. Notably, the Support Vector Machine and Bernoulli Naive Bayes models exhibit robust performance, with an accuracy rate of 70%. In particular, the study underscores the employment of an ANN model on our existing dataset, optimized using the Adam optimizer. Although the model yields an overall accuracy of 61% and a precision score of 58%, implying correct predictions for the positive class 58% of the time, a comprehensive performance assessment using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) metric was necessary. This evaluation results in a score of 0.475 at a threshold of 0.5, indicating room for model enhancement. These models and their performance metrics serve as a key cog in our data analytics pipeline, providing decision-makers and city leaders with actionable insights that can steer urban service management decisions. Through real-time data availability and intuitive visualization dashboards, these leaders can promptly comprehend the current state of their services, pinpoint areas requiring improvement and make informed decisions to bolster these services. This research illuminates the potential for data analytics, machine learning and AI to significantly upgrade urban service management in smart cities, fostering sustainable and livable communities. Moreover, our findings contribute valuable knowledge to other cities aiming to adopt similar strategies, thus aiding the continued development of smart cities globally.
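A model-comparison pipeline of this kind can be sketched with scikit-learn; the synthetic dataset below stands in for the non-public smart-city KPI data, so the scores it produces are not the ones reported above:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
from sklearn.svm import SVC
from sklearn.metrics import precision_score, roc_auc_score

# Synthetic stand-in for the smart-city KPI dataset (original data not public)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

results = {}
for name, model in [("logreg", LogisticRegression(max_iter=1000)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("bernoulli_nb", BernoulliNB())]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]        # positive-class probability
    results[name] = {"precision": precision_score(y_te, model.predict(X_te)),
                     "auc": roc_auc_score(y_te, proba)}
```

Reporting both precision and AUC-ROC, as the study does, guards against a model that looks good on one metric only.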

Details

Technology and Talent Strategies for Sustainable Smart Cities
Type: Book
ISBN: 978-1-83753-023-6


Open Access
Article
Publication date: 28 September 2023

Jonas Bundschuh, M. Greta Ruppert and Yvonne Späck-Leigsnering

The purpose of this paper is to present the freely available finite element simulation software Pyrit.

Abstract

Purpose

The purpose of this paper is to present the freely available finite element simulation software Pyrit.

Design/methodology/approach

First, the design principles and objectives of the software project are defined. Then, the software's structure is established: the software is organized in packages, for which an overview is given. The structure follows the typical steps of a simulation workflow, i.e. problem definition, problem-solving and post-processing. State-of-the-art software engineering principles are applied to ensure high code quality at all times. Finally, the modeling and simulation workflow of Pyrit is demonstrated with three examples.

Findings

Pyrit is a field simulation software based on the finite element method written in Python to solve coupled systems of partial differential equations. It is designed as a modular software that is easily modifiable and extendable. The framework can, therefore, be adapted to various activities, i.e., research, education and industry collaboration.

Research limitations/implications

The focus of Pyrit is static and quasistatic electromagnetic problems as well as (coupled) heat conduction problems. It allows for both time-domain and frequency-domain simulations.

Originality/value

In research, problem-specific modifications and direct access to the source code of simulation tools are essential. With Pyrit, the authors present a computationally efficient and platform-independent simulation software for various electromagnetic and thermal field problems.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 42 no. 5
Type: Research Article
ISSN: 0332-1649


Open Access
Article
Publication date: 31 July 2023

Daniel Šandor and Marina Bagić Babac

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning…


Abstract

Purpose

Sarcasm is a linguistic expression that usually carries the opposite meaning of what is being said by words, thus making it difficult for machines to discover the actual meaning. It is mainly distinguished by the inflection with which it is spoken, with an undercurrent of irony, and is largely dependent on context, which makes it a difficult task for computational analysis. Moreover, sarcasm expresses negative sentiments using positive words, allowing it to easily confuse sentiment analysis models. This paper aims to demonstrate the task of sarcasm detection using the approach of machine and deep learning.

Design/methodology/approach

For the purpose of sarcasm detection, machine and deep learning models were used on a data set consisting of 1.3 million social media comments, including both sarcastic and non-sarcastic comments. The data set was pre-processed using natural language processing methods, and additional features were extracted and analysed. Several machine learning models, including logistic regression, ridge regression, linear support vector and support vector machines, along with two deep learning models based on bidirectional long short-term memory and one bidirectional encoder representations from transformers (BERT)-based model, were implemented, evaluated and compared.
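The classical machine learning baseline can be sketched as a TF-IDF plus logistic regression pipeline; the four comments below are hypothetical stand-ins for the 1.3 million-comment data set, so no meaningful accuracy can be read off this toy example:

```python
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny hypothetical sample; the study used 1.3 million social media comments
comments = ["Oh great, another Monday. Just what I needed.",
            "Wow, you really nailed it this time. Genius.",
            "The battery lasts two full days on one charge.",
            "This tutorial explains the setup step by step."]
labels = [1, 1, 0, 0]  # 1 = sarcastic, 0 = not sarcastic

# Word and bigram TF-IDF features feeding a linear classifier
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(comments, labels)
pred = clf.predict(["Sure, because that worked so well last time."])
```

A BERT-based model would replace the TF-IDF features with contextual embeddings, which is why it copes better with the context-dependence of sarcasm noted above.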

Findings

The performance of machine and deep learning models was compared on the task of sarcasm detection, and possible ways of improvement were discussed. Deep learning models showed more promise, performance-wise, for this type of task. Specifically, a state-of-the-art model in natural language processing, namely a BERT-based model, outperformed the other machine and deep learning models.

Originality/value

This study compared the performance of the various machine and deep learning models in the task of sarcasm detection using the data set of 1.3 million comments from social media.

Details

Information Discovery and Delivery, vol. 52 no. 2
Type: Research Article
ISSN: 2398-6247


Open Access
Article
Publication date: 5 February 2024

Krištof Kovačič, Jurij Gregorc and Božidar Šarler

This study aims to develop an experimentally validated three-dimensional numerical model for predicting different flow patterns produced with a gas dynamic virtual nozzle (GDVN).

Abstract

Purpose

This study aims to develop an experimentally validated three-dimensional numerical model for predicting different flow patterns produced with a gas dynamic virtual nozzle (GDVN).

Design/methodology/approach

The physical model is posed in the mixture formulation and copes with the unsteady, incompressible, isothermal, Newtonian, low turbulent two-phase flow. The computational fluid dynamics numerical solution is based on the half-space finite volume discretisation. The geo-reconstruct volume-of-fluid scheme tracks the interphase boundary between the gas and the liquid. To ensure numerical stability in the transition regime and adequately account for turbulent behaviour, the k-ω shear stress transport turbulence model is used. The model is validated by comparison with the experimental measurements on a vertical, downward-positioned GDVN configuration. Three different combinations of air and water volumetric flow rates have been solved numerically in the range of Reynolds numbers for airflow 1,009–2,596 and water 61–133, respectively, at Weber numbers 1.2–6.2.
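The dimensionless groups quoted above follow their standard definitions, Re = ρvd/μ and We = ρv²d/σ, sketched below; the water-jet values in the example are illustrative only, chosen to land inside the reported water ranges:

```python
def reynolds(rho, v, d, mu):
    """Reynolds number Re = rho * v * d / mu (inertial vs viscous forces)."""
    return rho * v * d / mu

def weber(rho, v, d, sigma):
    """Weber number We = rho * v**2 * d / sigma (inertia vs surface tension)."""
    return rho * v ** 2 * d / sigma

# Hypothetical water jet at GDVN-like scale (illustrative values only)
re_w = reynolds(rho=998.0, v=1.0, d=1.0e-4, mu=1.0e-3)   # inside 61-133
we_w = weber(rho=998.0, v=1.0, d=1.0e-4, sigma=0.072)    # inside 1.2-6.2
```

At Weber numbers of order one, surface tension and inertia compete, which is the regime where the dripping-to-jetting transition occurs.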

Findings

The half-space symmetry allows the numerical reconstruction of the dripping, jetting and indication of the whipping mode. The kinetic energy transfer from the gas to the liquid is analysed, and locations with locally increased gas kinetic energy are observed. The calculated jet shapes reasonably well match the experimentally obtained high-speed camera videos.

Practical implications

The model is used for the virtual studies of new GDVN nozzle designs and optimisation of their operation.

Originality/value

To the best of the authors’ knowledge, the developed model numerically reconstructs all three GDVN flow regimes for the first time.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 4
Type: Research Article
ISSN: 0961-5539


Article
Publication date: 19 October 2022

Isaac Chairez, Israel Alejandro Guarneros-Sandoval, Vlad Prud, Olga Andrianova, Sleptsov Ernest, Viktor Chertopolokhov, Grigory Bugriy and Arthur Mukhamedov

There are common problems in the identification of uncertain nonlinear systems, nonparametric approximation, state estimation, and automatic control. Dynamic neural network (DNN…


Abstract

Purpose

There are common problems in the identification of uncertain nonlinear systems, nonparametric approximation, state estimation, and automatic control. Dynamic neural network (DNN) approximation can simplify the development of all the aforementioned problems in either continuous or discrete systems. A DNN is represented by a system of differential or recurrent equations defined in the space of vector activation functions with weights and offsets that are functionally associated with the input data.
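A dynamic neural network of the kind described, a system of differential equations over vector activation functions, can be sketched generically as follows; this is not the toolbox's API, and the matrices, input signal and step size are all hypothetical:

```python
import numpy as np

def dnn_step(x_hat, u, A, W, B, dt):
    """One explicit-Euler step of a continuous-time dynamic neural network:
        dx_hat/dt = A @ x_hat + W @ sigma(x_hat) + B @ u
    with sigma = tanh as the vector activation function."""
    dx = A @ x_hat + W @ np.tanh(x_hat) + B @ u
    return x_hat + dt * dx

# Hypothetical 2-state identifier with a stable (Hurwitz) linear part A
A = np.array([[-1.0, 0.0], [0.0, -2.0]])
W = 0.1 * np.ones((2, 2))   # weights on the activation term
B = np.eye(2)               # input matrix
x_hat = np.zeros(2)
for _ in range(100):
    x_hat = dnn_step(x_hat, u=np.array([1.0, 0.5]), A=A, W=W, B=B, dt=0.01)
```

In an identification setting, W (and possibly A) would be adapted from input-output data rather than fixed, which is the role the toolbox's learning loop plays.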

Design/methodology/approach

This study describes a version of the toolbox that can be used to identify black-box dynamics and recover the laws underlying a system from known inputs and outputs. Depending on the completeness of the information, the toolbox allows users to change the DNN structure to suit specific tasks.

Findings

The toolbox consists of three main components: user layer, network manager, and network instance. The user layer provides high-level control and monitoring of system performance. The network manager serves as an intermediary between the user layer and the network instance, and allows the user layer to start and stop learning, providing an interface to indirectly access the internal data of the DNN.

Research limitations/implications

Control capability is limited to adjusting a small number of numerical parameters and selecting functional parameters from a predefined list.

Originality/value

The key feature of the toolbox is the possibility of developing an algorithmic semi-automatic selection of activation function parameters based on optimization problem solutions.

Details

Kybernetes, vol. 52 no. 9
Type: Research Article
ISSN: 0368-492X

