Search results

1 – 10 of 131
Article
Publication date: 22 July 2022

Ying Tao Chai and Ting-Kwei Wang

Defects inevitably recur on concrete surfaces during construction and must be checked and accepted during construction and completion. Traditional manual inspection…

Abstract

Purpose

Defects inevitably recur on concrete surfaces during construction and must be checked and accepted during construction and at completion. Traditional manual inspection of surface defects requires inspectors to judge, evaluate and make decisions, which demands considerable experience and is time-consuming and labor-intensive, and the expertise involved cannot be effectively preserved and transferred. In addition, different inspectors do not apply identical evaluation standards, which may cause discrepancies in inspection results. Although computer vision can achieve defect recognition, there is a gap between the low-level semantics acquired by computer vision and the high-level semantics that humans understand from images. Therefore, computer vision and ontology are combined to achieve intelligent evaluation and decision-making and to bridge this gap.

Design/methodology/approach

Combining ontology and computer vision, this paper establishes an evaluation and decision-making framework for concrete surface quality. By building a concrete surface quality ontology model and a defect identification and quantification model, ontology reasoning is used to realize concrete surface quality evaluation and decision-making.

Findings

Computer vision can identify and quantify defects to obtain low-level image semantics, while ontology can structurally express expert knowledge in the defect domain. The proposed framework can automatically identify and quantify defects and infer their causes, responsibility, severity and repair methods. Case analyses of various scenarios show that the proposed evaluation and decision-making framework is feasible.

Originality/value

This paper establishes an evaluation and decision-making framework for concrete surface quality, improving the standardization and intelligence of surface defect inspection and potentially providing reusable knowledge for inspecting concrete surface quality. The research results can be used to inspect concrete surface quality, reduce the subjectivity of evaluation and improve inspection efficiency. In addition, the proposed framework enriches the application scenarios of ontology and computer vision and, to a certain extent, bridges the gap between the image features extracted by computer vision and the information that people obtain from images.

Details

Engineering, Construction and Architectural Management, vol. 30 no. 10
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 19 January 2024

Prihana Vasishta, Navjyoti Dhingra and Seema Vasishta

This research aims to analyse the current state of research on the application of Artificial Intelligence (AI) in libraries by examining document type, publication year, keywords…

Abstract

Purpose

This research aims to analyse the current state of research on the application of Artificial Intelligence (AI) in libraries by examining document type, publication year, keywords, country and research methods. The overarching aim is to enrich the existing knowledge of AI-powered libraries by identifying the prevailing research gaps, providing direction for future research and deepening the understanding needed for effective policy development.

Design/methodology/approach

This study used advanced tools such as bibliometric and network analysis, taking the existing literature from the SCOPUS database extending to the year 2022. This study analysed the application of AI in libraries by identifying and selecting relevant keywords, extracting the data from the database, processing the data using advanced bibliometric visualisation tools and presenting and discussing the results. For this comprehensive research, the search strategy was approved by a panel of computer scientists and librarians.
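The keyword-based bibliometric workflow described above can be sketched in miniature. The records and keywords below are invented for illustration; an actual study of this kind would process SCOPUS exports with dedicated visualisation tools rather than a script like this:

```python
from collections import Counter
from itertools import combinations

# Toy records standing in for SCOPUS export rows (keywords are invented)
records = [
    {"year": 2021, "keywords": ["artificial intelligence", "libraries", "machine learning"]},
    {"year": 2022, "keywords": ["artificial intelligence", "libraries", "big data"]},
    {"year": 2022, "keywords": ["machine learning", "libraries"]},
]

def cooccurrence(records):
    """Count how often each pair of keywords appears in the same record."""
    pairs = Counter()
    for rec in records:
        for a, b in combinations(sorted(set(rec["keywords"])), 2):
            pairs[(a, b)] += 1
    return pairs

edges = cooccurrence(records)
# The heaviest edges form the clusters a network-analysis tool would draw
strongest = max(edges, key=edges.get)
```

The pair counts are the edge weights of the co-occurrence network; clustering those edges is what surfaces the thematic gaps (Digital Humanities, Robotics, Big Data) noted in the findings.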

Findings

The majority of research concerning the application of AI in libraries has been conducted in the last three years, likely driven by the fourth industrial revolution. Results show that highly cited articles were published by Emerald Group Holdings Ltd. However, the application of AI in libraries is a developing field, and the study highlights the need for more research in areas such as Digital Humanities, Machine Learning, Robotics, Data Mining and Big Data in Academic Libraries.

Research limitations/implications

This study excluded papers written in languages other than English, as well as papers addressing domains beyond libraries, such as medicine, health, education, science and technology.

Practical implications

This article offers insight for managers and policymakers looking to implement AI in libraries. By identifying clusters and themes, the article would empower managers to plan ahead, mitigate potential drawbacks and seize opportunities for sustainable growth.

Originality/value

Previous studies on the application of AI in libraries have taken a broad approach, but this study narrows its focus to research published explicitly in Library and Information Science (LIS) journals. This makes it unique compared to previous research in the field.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Keywords

Open Access
Article
Publication date: 1 December 2023

Francois Du Rand, André Francois van der Merwe and Malan van Tonder

This paper aims to discuss the development of a defect classification system that can be used to detect and classify powder bed surface defects from captured layer images without…

Abstract

Purpose

This paper aims to discuss the development of a defect classification system that can be used to detect and classify powder bed surface defects from captured layer images without the need for specialised computational hardware. The idea is to develop this system by making use of more traditional machine learning (ML) models instead of using computationally intensive deep learning (DL) models.

Design/methodology/approach

The approach that is used by this study is to use traditional image processing and classification techniques that can be applied to captured layer images to detect and classify defects without the need for DL algorithms.
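As an illustration of the traditional image-processing route (the paper's actual pipeline is not reproduced here), a captured layer image can be thresholded and defect regions counted with a simple connected-component pass, with no DL model or specialised hardware involved. The grid and threshold below are invented:

```python
# Toy "layer image" as a grid of brightness values; pixels above the
# threshold are treated as candidate defect pixels (illustrative only)
image = [
    [0, 0, 9, 9, 0],
    [0, 0, 9, 0, 0],
    [0, 0, 0, 0, 8],
    [7, 0, 0, 0, 8],
]

def count_defects(img, threshold=5):
    """Count 4-connected components of above-threshold pixels."""
    rows, cols = len(img), len(img[0])
    seen = set()
    blobs = 0
    for r in range(rows):
        for c in range(cols):
            if img[r][c] > threshold and (r, c) not in seen:
                blobs += 1
                stack = [(r, c)]  # flood-fill one component
                while stack:
                    y, x = stack.pop()
                    if (y, x) in seen:
                        continue
                    seen.add((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and img[ny][nx] > threshold and (ny, nx) not in seen):
                            stack.append((ny, nx))
    return blobs
```

Features of the detected blobs (area, shape, position) would then feed a lightweight classifier, which is what keeps the per-layer processing fast enough for closed-loop control.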

Findings

The study proved that a defect classification algorithm could be developed by making use of traditional ML models with a high degree of accuracy and the images could be processed at higher speeds than typically reported in literature when making use of DL models.

Originality/value

This paper addresses a need that has been identified for a high-speed defect classification algorithm that can detect and classify defects without the need for specialised hardware that is typically used when making use of DL technologies. This is because when developing closed-loop feedback systems for these additive manufacturing machines, it is important to detect and classify defects without inducing additional delays to the control system.

Details

Rapid Prototyping Journal, vol. 29 no. 11
Type: Research Article
ISSN: 1355-2546

Keywords

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have…

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
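The wrapper idea behind this kind of hybrid can be sketched minimally: a population of binary feature masks is scored by a classifier, and new candidates are biased toward the best mask found so far. This is a toy stand-in only — the paper uses Gray Wolf Optimization proper (with alpha/beta/delta leaders) and an MLP, neither of which is reproduced here; the data, nearest-centroid scorer and update rule below are invented for illustration:

```python
import random

random.seed(0)

# Synthetic data: 6 features, only the first two carry signal (illustrative)
X = [[random.random() for _ in range(6)] for _ in range(80)]
y = [1 if x[0] + x[1] > 1.0 else 0 for x in X]

def fitness(mask, X, y):
    """Accuracy of a nearest-centroid classifier restricted to masked features."""
    feats = [i for i, m in enumerate(mask) if m]
    if not feats:
        return 0.0
    cent = {}
    for label in (0, 1):
        rows = [x for x, t in zip(X, y) if t == label]
        cent[label] = [sum(r[i] for r in rows) / len(rows) for i in feats]
    hits = 0
    for x, t in zip(X, y):
        d = {label: sum((x[i] - c) ** 2 for i, c in zip(feats, cent[label]))
             for label in (0, 1)}
        hits += (min(d, key=d.get) == t)
    return hits / len(y)

def select_features(X, y, pop=8, iters=20):
    """Population search over masks biased toward the current leader —
    a crude stand-in for GWO's alpha-guided position update."""
    n = len(X[0])
    wolves = [[1] * n] + [[random.randint(0, 1) for _ in range(n)]
                          for _ in range(pop - 1)]
    best = max(wolves, key=lambda m: fitness(m, X, y))
    for _ in range(iters):
        # each bit copies the leader with high probability, else flips
        wolves = [[b if random.random() < 0.7 else 1 - b for b in best]
                  for _ in range(pop)]
        cand = max(wolves, key=lambda m: fitness(m, X, y))
        if fitness(cand, X, y) > fitness(best, X, y):
            best = cand
    return best

mask = select_features(X, y)
```

In the paper's pipeline, the surviving features would be fed to an MLP rather than the simple centroid scorer used here; the wrapper structure (search over masks, classifier as fitness) is the same.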

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 27 September 2023

Veera Harsha Vardhan Jilludimudi, Daniel Zhou, Eric Rubstov, Alexander Gonzalez, Will Daknis, Erin Gunn and David Prawel

This study aims to collect real-time, in situ data from polymer melt extrusion (ME) 3D printing and use only the collected data to non-destructively identify printed parts that…

Abstract

Purpose

This study aims to collect real-time, in situ data from polymer melt extrusion (ME) 3D printing and use only the collected data to non-destructively identify printed parts that contain defects.

Design/methodology/approach

A set of sensors was created to collect real-time, in situ data from polymer ME 3D printing. A variance analysis was completed to identify an “acceptable” range for filament diameter on a popular desktop 3D printer. These data were used as the basis of a quality evaluation process to non-destructively identify spatial regions of printed parts in multi-part builds that contain defects.
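The "acceptable range" idea can be illustrated with standard control limits — the mean of the in situ filament-diameter readings plus or minus three standard deviations. The readings below are invented, and the paper's actual variance analysis may define the band differently:

```python
from statistics import mean, stdev

# Invented in situ filament-diameter readings in mm (nominal 1.75 mm filament)
readings = [1.74, 1.75, 1.76, 1.75, 1.74, 1.75, 1.76, 1.75]

mu, sigma = mean(readings), stdev(readings)
lower, upper = mu - 3 * sigma, mu + 3 * sigma

def is_acceptable(diameter):
    """Flag a reading that falls outside the +/- 3 sigma band."""
    return lower <= diameter <= upper
```

Spatial regions of a part printed while readings fell outside the band are the ones flagged as potentially defective, without destroying the part.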

Findings

Anomalous parts were correctly identified non-destructively using only in situ collected data.

Research limitations/implications

This methodology was developed by varying the filament diameter, one of the most common reasons for print failure in ME. Numerous other printing parameters are known to create faults in melt extruded parts, and this methodology can be extended to analyze other parameters.

Originality/value

To the best of the authors’ knowledge, this is the first report of a non-destructive evaluation of 3D-printed part quality using only in situ data in ME. The value is in improving part quality and reliability in ME, thereby reducing 3D printing part errors, plastic waste and the associated cost of time and material.

Article
Publication date: 19 May 2023

Michail Katsigiannis, Minas Pantelidakis and Konstantinos Mykoniatis

With hybrid simulation techniques getting popular for systems improvement in multiple fields, this study aims to provide insight on the use of hybrid simulation to assess the…

Abstract

Purpose

With hybrid simulation techniques getting popular for systems improvement in multiple fields, this study aims to provide insight on the use of hybrid simulation to assess the effect of lean manufacturing (LM) techniques on manufacturing facilities and the transition of a mass production (MP) facility to incorporating LM techniques.

Design/methodology/approach

In this paper, the authors apply a hybrid simulation approach to improve an educational automotive assembly line and provide guidelines for implementing different LM techniques. Specifically, the authors describe the design, development, verification and validation of a hybrid discrete-event and agent-based simulation model of a LEGO® car assembly line to analyze, improve and assess the system’s performance. The simulation approach examines the base model (MP) and an alternative scenario (just-in-time [JIT] with Heijunka).

Findings

The hybrid simulation approach effectively models the facility. The alternative simulation scenario (implementing JIT and Heijunka LM techniques) improved all examined performance metrics. In more detail, the system’s lead time was reduced by 47.37%, the throughput increased by 5.99% and the work-in-progress for workstations decreased by up to 56.73%.

Originality/value

This novel hybrid simulation approach provides insight and can be potentially extrapolated to model other manufacturing facilities and evaluate transition scenarios from MP to LM.

Details

International Journal of Lean Six Sigma, vol. 15 no. 2
Type: Research Article
ISSN: 2040-4166

Keywords

Open Access
Article
Publication date: 19 August 2022

Nina Jamar

The purpose of the research was to find out if there are any differences in the readability score between abstracts published in scientific journals from library and information…


Abstract

Purpose

The purpose of the research was to find out if there are any differences in the readability score between abstracts published in scientific journals from library and information science with and without an impact factor. Therefore, the author made a comparison between the readability of abstracts from one journal with (Journal of Documentation) and one journal without (Knjižnica or Library) an impact factor.

Design/methodology/approach

As a measure of readability, the Flesch Reading Ease Readability Formula was used. Then, with the help of statistical experts, a comparison of the readability scores between the abstracts of two selected journals was performed.
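The Flesch Reading Ease score is FRE = 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words); higher scores mean easier text. A minimal implementation with a naive vowel-group syllable counter (real readability tools count syllables more carefully, so scores will differ slightly):

```python
import re

def count_syllables(word):
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    """FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / sentences)
            - 84.6 * (syllables / len(words)))
```

Comparing the mean FRE of abstracts from the two journals, alongside word and sentence counts, is the kind of comparison the study performs.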

Findings

The results showed statistically significant differences between the abstracts published in the Journal of Documentation and Knjižnica. Significant differences were found in the number of words and sentences in the abstracts and in their readability. It can therefore be said that a statistically significant difference exists between abstracts from journals with and without an impact factor.

Originality/value

The primary purpose was to find out whether there is a statistically significant difference in the readability scores of abstracts from journals with and without an impact factor in the field of library and information science. Similar studies have been conducted in other scientific fields.

Details

Journal of Documentation, vol. 79 no. 7
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 27 March 2024

Temesgen Agazhie and Shalemu Sharew Hailemariam

This study aims to quantify and prioritize the main causes of lean wastes and to apply reduction methods by employing better waste cause identification methodologies.

Abstract

Purpose

This study aims to quantify and prioritize the main causes of lean wastes and to apply reduction methods by employing better waste cause identification methodologies.

Design/methodology/approach

We employed the fuzzy technique for order preference by similarity to ideal solution (FTOPSIS), the fuzzy analytic hierarchy process (FAHP) and failure mode and effects analysis (FMEA) to determine the causes of defects. Time studies, checklists and process flow charts were employed to establish the current defect-cause identification procedures. The study focuses on the sewing department of a clothing manufacturer in Addis Ababa, Ethiopia.

Findings

These techniques outperform conventional techniques and offer a better solution for challenging decision-making situations. Each lean waste’s FMEA criteria, such as severity, occurrence, and detectability, were examined. A pairwise comparison revealed that defect has a larger effect than other lean wastes. Defects were mostly caused by inadequate operator training. To minimize lean waste, prioritizing their causes is crucial.
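FMEA ranks failure causes by a risk priority number, RPN = severity × occurrence × detectability, with each factor typically scored 1–10. A plain-FMEA sketch with invented scores; the paper's fuzzy FAHP/FTOPSIS weighting of these factors is not reproduced here:

```python
# Invented severity/occurrence/detectability scores (1-10) for lean-waste causes
causes = {
    "inadequate operator training": (8, 7, 6),
    "machine misalignment":         (6, 5, 4),
    "poor fabric handling":         (5, 6, 3),
}

def rpn(scores):
    """Risk priority number: severity * occurrence * detectability."""
    severity, occurrence, detectability = scores
    return severity * occurrence * detectability

# Highest RPN first: the causes to attack first when reducing lean waste
ranking = sorted(causes, key=lambda c: rpn(causes[c]), reverse=True)
```

Here 8 × 7 × 6 = 336 puts operator training at the top of the list, mirroring the study's finding that it was the dominant defect cause.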

Research limitations/implications

The research focuses on a single case company, and the results cannot be generalized to the whole industry.

Practical implications

The study used quantitative approaches to quantify and prioritize the causes of lean waste in the garment industry and provides insight for industrialists to focus on the waste causes to improve their quality performance.

Originality/value

Integrating FMEA with FAHP and FTOPSIS is the new contribution, yielding a better solution for the decision variables by considering the severity, occurrence and detectability of the causes of waste. The data collection approach was based on experts' focus-group discussions rating the main causes of defects, which could provide optimal values for defect-cause prioritization.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X

Keywords

Open Access
Article
Publication date: 8 August 2023

Elisa Verna, Gianfranco Genta and Maurizio Galetto

The purpose of this paper is to investigate and quantify the impact of product complexity, including architectural complexity, on operator learning, productivity and quality…

Abstract

Purpose

The purpose of this paper is to investigate and quantify the impact of product complexity, including architectural complexity, on operator learning, productivity and quality performance in both assembly and disassembly operations. This topic has not been extensively investigated in previous research.

Design/methodology/approach

An extensive experimental campaign involving 84 operators was conducted to repeatedly assemble and disassemble six different products of varying complexity to construct productivity and quality learning curves. Data from the experiment were analysed using statistical methods.

Findings

The human learning factor of productivity increases superlinearly with the increasing architectural complexity of products, i.e. from centralised to distributed architectures, both in assembly and disassembly, regardless of the level of overall product complexity. On the other hand, the human learning factor of quality performance decreases superlinearly as the architectural complexity of products increases. The intrinsic characteristics of product architecture are the reasons for this difference in learning factor.

Practical implications

The results of the study suggest that considering product complexity, particularly architectural complexity, in the design and planning of manufacturing processes can optimise operator learning, productivity and quality performance, and inform decisions about improving manufacturing operations.

Originality/value

While previous research has focussed on the effects of complexity on process time and defect generation, this study is amongst the first to investigate and quantify the effects of product complexity, including architectural complexity, on operator learning using an extensive experimental campaign.

Details

Journal of Manufacturing Technology Management, vol. 34 no. 9
Type: Research Article
ISSN: 1741-038X

Keywords

Article
Publication date: 24 January 2024

Nirmal Singh, Harmanjit Singh Banga, Jaswinder Singh and Rajnish Sharma

This paper aims to prompt ideas amongst readers (especially librarians) about how they can become active partners in knowledge dissemination amongst concerned user groups by…

Abstract

Purpose

This paper aims to prompt ideas amongst readers (especially librarians) about how they can become active partners in knowledge dissemination amongst concerned user groups by implementing 3D printing technology under the “Makerspace.”

Design/methodology/approach

The paper provides a brief account of the various tools and techniques used by veterinary and animal sciences institutions for information dissemination amongst stakeholders and the associated challenges, with a focus on the use of 3D printing technology to overcome the bottlenecks. An overview of 3D printing technology is provided, followed by instances of its use in veterinary and animal sciences. An initiative of the University Library, Guru Angad Dev Veterinary and Animal Sciences University, Ludhiana, to harness the potential of this technology in disseminating information amongst livestock stakeholders is discussed.

Findings

3D printing has the potential to enhance learning in veterinary and animal sciences by providing hands-on exposure to various anatomical structures, such as bones, organs and blood vessels, without the need for a cadaver. This approach enhances students’ spatial understanding and helps them better understand anatomical concepts. Libraries can enhance their visibility and can contribute actively to knowledge dissemination beyond traditional library services.

Originality/value

The ideas about how to harness the potential of 3D printing in knowledge dissemination amongst livestock-sector stakeholders are elaborated. This promotes creativity amongst librarians, enabling them to think outside the box about how they can engage in knowledge dissemination.

Details

Library Hi Tech News, vol. 41 no. 2
Type: Research Article
ISSN: 0741-9058

Keywords
