Search results

1 – 10 of 222
Article
Publication date: 20 March 2024

Malav R. Sanghvi, Karan W. Chugh and S.T. Mhaske

Abstract

Purpose

This study aims to synthesize Prussian blue {FeIII4[FeII(CN)6]3} pigment by reacting ferric chloride with different ferrocyanides through the same procedure. The influence of the ferrocyanide used on resulting pigment properties is studied.

Design/methodology/approach

Prussian blue is commonly synthesized by direct or indirect methods, through iron salt and ferrocyanide/ferricyanide reactions. In this study, the direct, single-step process was pursued by dropwise addition of the ferrocyanide into ferric chloride (both as aqueous solutions). Two batches – (K-PB) and (Na-PB) – were prepared by using potassium ferrocyanide and sodium ferrocyanide, respectively. The development of pigment was confirmed by an identification test and characterized by spectroscopic techniques. Pigment properties were determined, and light fastness was observed for acrylic emulsion films incorporating dispersed pigment.
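
For orientation, the direct route with potassium ferrocyanide follows the standard overall stoichiometry below (a textbook balance, not taken from the article; the sodium ferrocyanide batch is analogous, with NaCl as the by-product):

$$4\,\mathrm{FeCl_3} + 3\,\mathrm{K_4[Fe(CN)_6]} \;\longrightarrow\; \mathrm{Fe_4[Fe(CN)_6]_3} + 12\,\mathrm{KCl}$$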

Findings

The two pigments differed mainly in elemental detection, owing to the dissimilar ferrocyanides used; in IR spectroscopy, where only (Na-PB) showed peaks indicating water molecules; and in bleeding tendency, where (K-PB) was water soluble whereas (Na-PB) was not. The pigments exhibited a remarkable blue colour and good bleeding resistance in several solvents and showed no fading after 24 h of light exposure, though oil absorption values were high.

Originality/value

This article is a comparative study of Prussian blue pigment properties obtained using different ferrocyanides. The dissimilarity in the extent of water solubility will influence potential applications as a colourant in paints and inks. K-PB would be advantageous in aqueous formulations to confer a blue colour without any dispersing aid but unfavourable in systems where other coats are water-based, due to its bleeding tendency.

Details

Pigment & Resin Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0369-9420

Article
Publication date: 19 May 2023

Michail Katsigiannis, Minas Pantelidakis and Konstantinos Mykoniatis

Abstract

Purpose

With hybrid simulation techniques becoming popular for systems improvement in multiple fields, this study aims to provide insight into the use of hybrid simulation to assess the effect of lean manufacturing (LM) techniques on manufacturing facilities and the transition of a mass production (MP) facility toward incorporating LM techniques.

Design/methodology/approach

In this paper, the authors apply a hybrid simulation approach to improve an educational automotive assembly line and provide guidelines for implementing different LM techniques. Specifically, the authors describe the design, development, verification and validation of a hybrid discrete-event and agent-based simulation model of a LEGO® car assembly line to analyze, improve and assess the system’s performance. The simulation approach examines the base model (MP) and an alternative scenario (just-in-time [JIT] with Heijunka).
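
As a rough illustration of the hybrid discrete-event/agent-based idea, the sketch below combines SimPy workstations (discrete-event queues) with a simple car agent class; the number of stations, processing times and arrival rate are illustrative assumptions, not the authors' LEGO® line parameters.

```python
# Minimal hybrid simulation sketch: SimPy discrete-event workstations plus a
# simple agent class that carries per-car state. All times are assumptions.
import random
import simpy

PROCESS_TIMES = [4.0, 5.0, 3.5]   # assumed mean minutes per workstation

class CarAgent:
    """Agent-based side: each car tracks its own timestamps (state)."""
    def __init__(self, name):
        self.name = name
        self.start = None
        self.finish = None

def assemble(env, car, stations):
    car.start = env.now
    for station, mean_t in zip(stations, PROCESS_TIMES):
        with station.request() as req:           # queue for the workstation
            yield req
            yield env.timeout(random.expovariate(1.0 / mean_t))
    car.finish = env.now

def source(env, stations, cars, interarrival=4.0):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / interarrival))
        i += 1
        car = CarAgent(f"car-{i}")
        cars.append(car)
        env.process(assemble(env, car, stations))

random.seed(42)
env = simpy.Environment()
stations = [simpy.Resource(env, capacity=1) for _ in PROCESS_TIMES]
cars = []
env.process(source(env, stations, cars))
env.run(until=480)                               # one 8-hour shift

done = [c for c in cars if c.finish is not None]
lead_times = [c.finish - c.start for c in done]
print(f"throughput: {len(done)} cars, "
      f"mean lead time: {sum(lead_times) / len(lead_times):.1f} min")
```

In a study like the one above, the MP baseline and the JIT/Heijunka scenario would be modelled as two such configurations and compared on lead time, throughput and work-in-progress.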

Findings

The hybrid simulation approach effectively models the facility. The alternative simulation scenario (implementing JIT and Heijunka LM techniques) improved all examined performance metrics. In more detail, the system’s lead time was reduced by 47.37%, the throughput increased by 5.99% and the work-in-progress for workstations decreased by up to 56.73%.

Originality/value

This novel hybrid simulation approach provides insight and can be potentially extrapolated to model other manufacturing facilities and evaluate transition scenarios from MP to LM.

Details

International Journal of Lean Six Sigma, vol. 15 no. 2
Type: Research Article
ISSN: 2040-4166

Article
Publication date: 9 April 2024

Shola Usharani, R. Gayathri, Uday Surya Deveswar Reddy Kovvuri, Maddukuri Nivas, Abdul Quadir Md, Kong Fah Tee and Arun Kumar Sivaraman

Abstract

Purpose

Automated detection of cracked surfaces on buildings or in industrially manufactured products is an emerging need. Detecting cracked surfaces is a challenging task for inspectors, and image-based automatic inspection of cracks can be far more effective than inspection by the human eye. With advances in deep learning techniques, these methods can be used to automate such inspection work across various industries.

Design/methodology/approach

In this study, an upgraded convolutional neural network-based crack detection method has been proposed. The dataset consists of 3,886 images, including cracked and non-cracked images, which were split into training and validation data. To inspect the cracks more accurately, data augmentation was performed on the dataset, and regularization techniques were used to reduce overfitting. In this work, the VGG19, Xception, Inception V3 and ResNet50 V2 CNN architectures were used to train on the data.
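
For illustration, a minimal transfer-learning setup of the kind described (Xception backbone, light augmentation, dropout regularization) might look like the following sketch in tf.keras; the image size, split, directory layout and hyperparameters are assumptions rather than the authors' configuration.

```python
# Minimal transfer-learning sketch for binary crack classification.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False                       # freeze backbone; fine-tune later if needed

inputs = layers.Input(shape=IMG_SIZE + (3,))
x = augment(inputs)                          # data augmentation (training only)
x = tf.keras.applications.xception.preprocess_input(x)
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.5)(x)                   # regularization against overfitting
outputs = layers.Dense(1, activation="sigmoid")(x)   # cracked vs non-cracked

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Assumed directory layout: data/{cracked,non_cracked}/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=1,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=1,
    image_size=IMG_SIZE, batch_size=32, label_mode="binary")
model.fit(train_ds, validation_data=val_ds, epochs=10)
```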

Findings

A comparison between the trained models was performed, and the results show that Xception performs better than the other algorithms, with 99.54% test accuracy. The results also show that detection of cracked regions and of firm, non-cracked regions is handled very efficiently by the Xception algorithm.

Originality/value

The proposed method can be applied to the automatic inspection of cracks in buildings with different design patterns, such as decorated historical monuments.

Details

International Journal of Structural Integrity, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 28 July 2023

Mohammad Omar Aburumman, Rateb Sweis and Ghaleb J. Sweis

Abstract

Purpose

The construction industry is developing rapidly, especially with the increasing pace of the Fourth Industrial Revolution in the sector. Construction projects can benefit from greater integration and collaboration between their technologies and processes to reap these advantages and keep pace with recent significant technological and managerial developments. Therefore, this study aims to investigate building information modeling (BIM) and Lean Construction (L.C.), concentrating on the potential synergy and integration of BIM–lean interactions in the Jordanian construction industry.

Design/methodology/approach

This study is exploratory in nature, follows a deductive research approach and is designed as a mono-quantitative research methodology. Moreover, the sampling technique is non-probability convenience sampling, and the research strategy is implemented through a questionnaire analyzed with the Statistical Package for the Social Sciences (SPSS) to conduct descriptive and inferential statistical analysis and to verify reliability and validity through appropriate tests.

Findings

The findings on BIM–lean interaction synergy and integration revealed that the lean construction principles of eliminating waste (time, cost, resources), promoting continuous improvement (Kaizen) and standardization are the most significant and most agreed upon for achieving BIM–lean synergy. On the other hand, “High 3D Visualization Modelling” was the most significant BIM function, followed by “Rapid and Auto-Generation of Documents and Multiple Design Alternatives” and “Maintenance of Information and Design Model Integrity.” Moreover, based on the relative importance index (RII) values, “Lack of Technical Expertise in BIM-LEAN” is the most significant challenge with an RII of 0.89, followed by “Lack of Government Direction and Standard Guidelines” with an RII of 0.88 and “Financial considerations” with an RII of 0.83.
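
For reference, the relative importance index cited above is conventionally computed as follows (the survey's exact weighting scheme is not stated in the abstract, so this is the standard textbook form):

$$\mathrm{RII} = \frac{\sum_{i=1}^{N} W_i}{A \times N}, \qquad 0 \le \mathrm{RII} \le 1,$$

where $W_i$ is the rating assigned by respondent $i$, $A$ is the highest value on the rating scale and $N$ is the number of respondents.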

Originality/value

This study will help provide a new detailed overview that investigates the effects and expected benefits of integrating BIM processes and technological functionalities with lean construction principles within a synergetic environment. Moreover, the study will increase the awareness of using new technologies and management approaches in the architectural, engineering and construction industry, seeking to achieve integration between these technologies to reach ideal results in terms of the outputs of construction operations.

Details

International Journal of Lean Six Sigma, vol. 15 no. 2
Type: Research Article
ISSN: 2040-4166

Article
Publication date: 24 October 2022

Priyanka Chawla, Rutuja Hasurkar, Chaithanya Reddy Bogadi, Naga Sindhu Korlapati, Rajasree Rajendran, Sindu Ravichandran, Sai Chaitanya Tolem and Jerry Zeyu Gao

Abstract

Purpose

The study aims to propose an intelligent real-time traffic model to address the traffic congestion problem. The proposed model assists the urban population in their everyday lives by assessing the probability of road accidents and providing accurate traffic information predictions. It also helps reduce overall carbon dioxide emissions and improves overall transportation quality.

Design/methodology/approach

This study offered a real-time traffic model based on the analysis of data from numerous sensors. Real-time traffic prediction systems can identify and visualize current traffic conditions on a particular lane. The proposed model incorporated data from road sensors as well as a variety of other sources. Capturing and processing large amounts of sensor data in real time is difficult, so sensor data are consumed by streaming analytics platforms that use big data technologies and are then processed using a range of deep learning and machine learning techniques.
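
As a simplified illustration of the prediction stage, the sketch below trains a tree-ensemble regressor on a few assumed features (time of day, weather, incident flag); the file name and column names are hypothetical placeholders, not the PeMS schema or the authors' actual pipeline.

```python
# Toy traffic-speed regression over assumed sensor/weather/incident features.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Hypothetical merged table of sensor, weather and incident records
df = pd.read_csv("traffic_sensor_data.csv")
features = ["hour", "day_of_week", "precip_mm", "temp_c", "incident_nearby"]

# Keep chronological order so the test split represents future periods
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["speed_mph"], test_size=0.2, shuffle=False)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("MAE (mph):", mean_absolute_error(y_test, model.predict(X_test)))
```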

Findings

The study provided in this paper would fill a gap in the data analytics sector by delivering a more accurate and trustworthy model that uses internet of things sensor data and other data sources. This method can also assist organizations such as transit agencies and public safety departments in making strategic decisions by incorporating it into their platforms.

Research limitations/implications

A notable limitation is that the model's predictions for the period following January 2020 are not particularly accurate. This, however, is not a shortcoming of the model itself but a consequence of the Covid-19 pandemic, which disrupted traffic patterns and produced erratic data for the period after February 2020. Once circumstances return to normal, the authors are confident in the model's ability to produce accurate forecasts.

Practical implications

To help users choose when to travel, this study aimed to pinpoint the causes of traffic congestion on Bay Area highways and to forecast real-time traffic speeds. To determine the attributes that most influence traffic speed, the authors obtained data from the Caltrans performance measurement system (PeMS), reviewed it and applied multiple models. The resulting model can forecast traffic speed while accounting for external variables such as weather and incident data, with decent accuracy and generalizability. To assist users in determining traffic congestion at a certain location on a specific day, the forecast method uses a graphical user interface, which has been designed to be readily extended as the project's scope and usefulness grow. The authors' Web-based traffic speed prediction platform is useful for both municipal planners and individual travellers. The authors obtained excellent results by using five years of data (2015–2019) to train the models and forecasting outcomes for 2020; when tested on data from January 2020, the algorithm produced highly accurate predictions. The benefits of this model include accurate traffic speed forecasts for California's four main freeways (Freeway 101, I-680, 880 and 280) for a specific place on a certain date. The scalable model performs better than the vast majority of earlier models created by other scholars in the field. If this programme were extended across the entire state of California, the government would benefit from better planning and execution of new transportation projects.

Social implications

To estimate traffic congestion, the proposed model takes into account a variety of data sources, including weather and incident data. According to traffic congestion statistics, “bottlenecks” account for 40% of traffic congestion, “traffic incidents” account for 25% and “work zones” account for 10% (Traffic Congestion Statistics). As a result, incident data must be considered for analysis. The study uses traffic, weather and event data from the previous five years to estimate traffic congestion in any given area. As a result, the results predicted by the proposed model would be more accurate, and commuters who need to schedule ahead of time for work would benefit greatly.

Originality/value

The proposed work allows the user to choose the optimum time and mode of transportation for them. The underlying idea behind this model is that if a car spends more time on the road, it will cause traffic congestion. The proposed system encourages users to arrive at their location in a short period of time. Congestion is an indicator that public transportation needs to be expanded. The optimum route is compared to other kinds of public transit using this methodology (Greenfield, 2014). If the commute time is comparable to that of private car transportation during peak hours, consumers should take public transportation.

Details

World Journal of Engineering, vol. 21 no. 1
Type: Research Article
ISSN: 1708-5284

Article
Publication date: 15 February 2024

Ganesh Narkhede

Abstract

Purpose

Efforts to implement supplier selection and order allocation (SSOA) approaches in small and medium-sized enterprises (SMEs) are quite restricted due to the lack of affordable and simple-to-use strategies. Although there is a huge amount of literature on SSOA techniques, very few studies have attempted to address the issues faced by SMEs and develop strategies from their point of view. The purpose of this study is to provide an effective, practical, and time-tested integrated SSOA framework for evaluating the performance of suppliers and allocating orders to them that can improve the efficiency and competitiveness of SMEs.

Design/methodology/approach

This study was conducted in two stages. First, an integrated supplier selection approach was designed, which consists of the analytic hierarchy process and the newly developed measurement alternatives and ranking using compromise solution method to evaluate and rank supplier performance. Second, the Wagner-Whitin algorithm is used to determine optimal order quantities and to optimize inventory carrying and ordering costs. The joint impact of quantity discounts is also evaluated at the end.
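
Since the Wagner-Whitin algorithm carries the order-allocation stage, a minimal dynamic-programming sketch of it is shown below; the demand series and cost figures are illustrative, and quantity discounts are omitted for brevity.

```python
# Wagner-Whitin dynamic lot sizing: minimize ordering + holding costs.
def wagner_whitin(demand, order_cost, hold_cost):
    """Return (minimum total cost, list of (order period, order quantity))."""
    T = len(demand)
    F = [0.0] + [float("inf")] * T           # F[t]: min cost covering periods 1..t
    best_j = [0] * (T + 1)
    for t in range(1, T + 1):
        for j in range(1, t + 1):             # last order placed in period j
            # Units for period i, ordered in period j, are held (i - j) periods
            holding = sum(hold_cost * (i - j) * demand[i - 1] for i in range(j, t + 1))
            cost = F[j - 1] + order_cost + holding
            if cost < F[t]:
                F[t], best_j[t] = cost, j
    # Recover the ordering plan by walking back through the chosen order periods
    plan, t = [], T
    while t > 0:
        j = best_j[t]
        plan.append((j, sum(demand[j - 1:t])))   # (order period, quantity)
        t = j - 1
    return F[T], list(reversed(plan))

# Illustrative example: 6 periods of demand, ordering cost 100, holding cost 1/unit/period
print(wagner_whitin([20, 50, 10, 50, 50, 10], order_cost=100, hold_cost=1.0))
```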

Findings

Insights derived from the case study proved that the proposed approach is capable of assisting purchase managers in the SSOA decision-making process. In addition, this case study resulted in 10.89% total cost savings and fewer stock-out situations.

Research limitations/implications

The criteria selected in this study are based on the advice of managers in the selected manufacturing organizations, so the methods applied are limited to manufacturing SMEs. There were some aspects of the supplier selection process that this study could not explore. The development of an effective, reliable supplier selection procedure is a continuous process, and it is certainly possible that other aspects of supplier selection are more crucial but are not considered in the proposed approach.

Practical implications

Purchase managers working in SMEs will be the primary beneficiaries of the developed approach. The suggested integrated approach can make a strategic difference in the working of SMEs.

Originality/value

A practical SSOA framework is developed for professionals working in SMEs. This approach will help SMEs to manage their operations effectively.

Details

Journal of Global Operations and Strategic Sourcing, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2398-5364

Article
Publication date: 14 December 2023

Huaxiang Song, Chai Wei and Zhou Yong

Abstract

Purpose

The paper aims to tackle the classification of Remote Sensing Images (RSIs), which presents a significant challenge for computer algorithms due to the inherent characteristics of clustered ground objects and noisy backgrounds. Recent research typically leverages larger-volume models to achieve advanced performance. However, remote sensing operating environments commonly cannot provide unconstrained computational and storage resources, so lightweight algorithms with exceptional generalization capabilities are required.

Design/methodology/approach

This study introduces an efficient knowledge distillation (KD) method to build a lightweight yet precise convolutional neural network (CNN) classifier. This method also aims to substantially decrease the training time expenses commonly linked with traditional KD techniques. This approach entails extensive alterations to both the model training framework and the distillation process, each tailored to the unique characteristics of RSIs. In particular, this study establishes a robust ensemble teacher by independently training two CNN models using a customized, efficient training algorithm. Following this, this study modifies a KD loss function to mitigate the suppression of non-target category predictions, which are essential for capturing the inter- and intra-similarity of RSIs.
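
For context, a minimal logit-based distillation loss of the classic temperature-scaled form is sketched below; the authors' modified loss, which avoids suppressing non-target category predictions, is not reproduced here, and the temperature and blending weight are assumptions.

```python
# Standard Hinton-style knowledge distillation loss plus hard-label cross-entropy.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend a soft-target KL term (teacher -> student) with hard-label CE."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale soft-target gradients
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def ensemble_logits(model_a, model_b, images):
    """Ensemble teacher: average the logits of two independently trained CNNs."""
    with torch.no_grad():
        return 0.5 * (model_a(images) + model_b(images))
```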

Findings

This study validated the student model, termed KD-enhanced network (KDE-Net), obtained through the KD process on three benchmark RSI data sets. The KDE-Net surpasses 42 other state-of-the-art methods in the literature published from 2020 to 2023. Compared to the top-ranked method’s performance on the challenging NWPU45 data set, KDE-Net demonstrated a noticeable 0.4% increase in overall accuracy with a significant 88% reduction in parameters. Meanwhile, this study’s reformed KD framework significantly enhances the knowledge transfer speed by at least three times.

Originality/value

This study illustrates that the logit-based KD technique can effectively develop lightweight CNN classifiers for RSI classification without substantial sacrifices in computation and storage costs. Compared to neural architecture search or other methods aiming to provide lightweight solutions, this study’s KDE-Net, based on the inherent characteristics of RSIs, is currently more efficient in constructing accurate yet lightweight classifiers for RSI classification.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 16 April 2024

Kunpeng Shi, Guodong Jin, Weichao Yan and Huilin Xing

Abstract

Purpose

Accurately evaluating fluid flow behaviors and determining permeability for deforming porous media is time-consuming and remains challenging. This paper aims to propose a novel machine-learning method for the rapid estimation of permeability of porous media at different deformation stages constrained by hydro-mechanical coupling analysis.

Design/methodology/approach

A convolutional neural network (CNN) is proposed in this paper, guided by the results of a finite element coupling analysis of the equilibrium equation for mechanical deformation and the Boltzmann equation for fluid dynamics during the hydro-mechanical coupling process [denoted as the finite element lattice Boltzmann model (FELBM) in this paper]. The FELBM performs the lattice Boltzmann analysis of coupled fluid flow on an unstructured mesh that varies with the nodal displacements resulting from mechanical deformation, providing reliable label data for permeability estimation at different deformation stages using the CNN.
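
As a schematic of the estimation step, the sketch below shows a small CNN regressor mapping a 2D pore-structure image to a scalar permeability, trained against FELBM-style labels; the architecture, image size and synthetic tensors are illustrative assumptions, not the authors' network.

```python
# Minimal CNN regressor: pore-structure image -> scalar permeability.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PermeabilityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)            # scalar permeability output

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = PermeabilityCNN()
images = torch.rand(8, 1, 128, 128)             # stand-in for deformed pore images
labels = torch.rand(8, 1)                       # stand-in for FELBM permeability labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

optimizer.zero_grad()
loss = F.mse_loss(model(images), labels)        # regression against coupled-analysis labels
loss.backward()
optimizer.step()
```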

Findings

The proposed CNN can rapidly and accurately estimate the permeability of deformable porous media, significantly reducing processing time. The application studies demonstrate high accuracy in predicting the permeability of deformable porous media for both the test and validation sets. The corresponding correlation coefficient (R²) is 0.93 for the validation set, and the R² values for test set A and test set B are 0.93 and 0.94, respectively.

Originality/value

This study proposes an innovative approach with the CNN to rapidly estimate permeability in porous media under dynamic deformations, guided by FELBM coupling analysis. The fast and accurate performance of CNN underscores its promising potential for future applications.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 28 November 2023

Tingting Tian, Hongjian Shi, Ruhui Ma and Yuan Liu

Abstract

Purpose

For privacy protection, federated learning based on data separation allows machine learning models to be trained on remote devices or in isolated data devices. However, due to the limited resources such as bandwidth and power of local devices, communication in federated learning can be much slower than in local computing. This study aims to improve communication efficiency by reducing the number of communication rounds and the size of information transmitted in each round.

Design/methodology/approach

In this paper, each user node performs multiple rounds of local training and then uploads its local model parameters to a central server, which updates the global model parameters by weighted averaging of the parameter information. Before uploading, user nodes cluster the parameter information and replace each value with the mean of its cluster. Considering the asymmetry of the federated learning framework, the method adaptively selects the optimal number of clusters required to compress the model information.
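
A minimal sketch of the uplink-compression idea follows: each client clusters its parameter values, replaces them with their cluster means and the server aggregates by weighted averaging. The fixed cluster count stands in for the paper's adaptive selection, and all names and sizes are illustrative.

```python
# Cluster-based weight compression plus FedAvg-style weighted aggregation.
import numpy as np
from sklearn.cluster import KMeans

def compress(params, k=16):
    """Replace each weight with the mean (centroid) of its 1-D cluster."""
    flat = params.reshape(-1, 1)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(flat)
    return km.cluster_centers_[km.labels_].reshape(params.shape)

def federated_average(client_params, client_sizes):
    """Server-side weighted averaging of the (reconstructed) client updates."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Toy example: two clients with random "model parameters" and unequal data sizes
rng = np.random.default_rng(0)
clients = [rng.normal(size=(64, 32)) for _ in range(2)]
compressed = [compress(p) for p in clients]
global_params = federated_average(compressed, client_sizes=[1000, 400])
print(global_params.shape)
```

In practice only the k centroids and the cluster assignments would travel uplink, which is what reduces the per-round communication volume.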

Findings

While maintaining a loss convergence rate similar to that of federated averaging, the proposed method does not significantly decrease test accuracy.

Originality/value

By compressing uplink traffic, the work can improve communication efficiency on dynamic networks with limited resources.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 3 August 2023

Yandong Hou, Zhengbo Wu, Xinghua Ren, Kaiwen Liu and Zhengquan Chen

Abstract

Purpose

High-resolution remote sensing images possess a wealth of semantic information. However, these images often contain objects of different sizes and distributions, which make the semantic segmentation task challenging. In this paper, a bidirectional feature fusion network (BFFNet) is designed to address this challenge, aiming to increase the accurate recognition of surface objects and effectively classify special features.

Design/methodology/approach

There are two crucial elements in BFFNet. First, the mean-weighted module (MWM) is used to obtain the key features in the main network. Second, the proposed polarization-enhanced branch network performs feature extraction simultaneously with the main network to obtain different feature information. The authors then fuse these two feature sets in both directions while applying a cross-entropy loss function to monitor the network training process. Finally, BFFNet is validated on two publicly available datasets, Potsdam and Vaihingen.
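
As a rough sketch of fusing two feature streams in both directions, the snippet below combines a main-network feature map with a branch feature map using 1×1 convolutions; the channel sizes and fusion operator are assumptions, not the actual BFFNet design.

```python
# Toy bidirectional fusion of two feature maps with 1x1 convolutions.
import torch
import torch.nn as nn

class BidirectionalFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.main_from_branch = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.branch_from_main = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, main_feat, branch_feat):
        # Each stream is refined with the other's features, then the two are combined
        fused_main = self.main_from_branch(torch.cat([main_feat, branch_feat], dim=1))
        fused_branch = self.branch_from_main(torch.cat([branch_feat, main_feat], dim=1))
        return fused_main + fused_branch        # combined map for the segmentation head

fusion = BidirectionalFusion(channels=64)
a = torch.rand(1, 64, 128, 128)                 # main-network features
b = torch.rand(1, 64, 128, 128)                 # polarization-enhanced branch features
print(fusion(a, b).shape)                       # torch.Size([1, 64, 128, 128])
```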

Findings

Quantitative analysis of the experimental results on the two datasets shows that the proposed network outperforms other mainstream segmentation networks by 2–6%. Complete ablation experiments are also conducted to demonstrate the effectiveness of the elements in the network. In summary, BFFNet has proven to be effective in achieving accurate identification of small objects and in reducing the effect of shadows on the segmentation process.

Originality/value

The originality of the paper is the proposal of a BFFNet based on multi-scale and multi-attention strategies to improve the ability to accurately segment high-resolution and complex remote sensing images, especially for small objects and shadow-obscured objects.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X
