Search results

1 – 10 of 181
Article
Publication date: 20 March 2024

Ziming Zhou, Fengnian Zhao and David Hung

Abstract

Purpose

Higher energy conversion efficiency of an internal combustion engine can be achieved with optimal control of the unsteady in-cylinder flow fields inside a direct-injection (DI) engine. However, it remains a daunting task to predict the nonlinear and transient in-cylinder flow motion, which is highly complex and changes in both space and time. Recently, machine learning methods have demonstrated great promise in inferring relatively simple temporal flow field development. This paper aims to feature a physics-guided machine learning approach to realize high-accuracy, generalizable prediction of complex swirl-induced flow field motions.

Design/methodology/approach

To achieve high-fidelity time-series prediction of unsteady engine flow fields, this work features an automated machine learning framework with the following objectives: (1) the spatiotemporal physical constraints of the flow field structure are transferred to the machine learning structure; (2) the ML inputs and targets are designed efficiently to ensure high model convergence with limited sets of experiments; and (3) the prediction results are optimized by an ensemble learning mechanism within the automated machine learning framework.
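The ensemble step in objective (3) can be illustrated with a minimal sketch: several base learners' field predictions are combined by validation-error-weighted averaging. The inverse-error weighting rule and all names here are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ensemble_predict(predictions, val_errors):
    """predictions: (n_models, H, W) predicted field per model;
    val_errors: per-model validation RMSE used to weight the average."""
    w = 1.0 / (np.asarray(val_errors) + 1e-12)   # lower error -> higher weight
    w = w / w.sum()
    return np.tensordot(w, predictions, axes=1)  # weighted mean field

# toy example: two "models" predicting a 2x2 velocity-magnitude field
preds = np.array([[[1.0, 2.0], [3.0, 4.0]],
                  [[3.0, 2.0], [1.0, 0.0]]])
field = ensemble_predict(preds, val_errors=[1.0, 1.0])  # equal weights
```

With equal validation errors this reduces to a plain mean of the two fields; skewed errors shift the combined field toward the better model.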

Findings

The proposed data-driven framework is proven effective across different time periods and different extents of unsteadiness of the flow dynamics, and the predicted flow fields are highly similar to the target fields under various complex flow patterns. Among the described framework designs, the utilization of the spatial flow field structure is the featured improvement to the time-series flow field prediction process.

Originality/value

The proposed flow field prediction framework could be generalized to different crank angle periods, cycles and swirl ratio conditions, which could greatly promote real-time flow control and reduce experiments on in-cylinder flow field measurement and diagnostics.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 21 December 2023

Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye

Abstract

Purpose

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Deep learning approaches have gained popularity over traditional methods in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as advanced mechanisms that dynamically refine and amplify model features to further elevate diagnostic capabilities. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.

Design/methodology/approach

To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
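As a rough illustration of what CBAM adds on top of a backbone such as ResNet50, the sketch below applies the two attention stages — channel, then spatial — to a feature map in plain NumPy. The tiny shared-MLP weights and the simplified spatial gate (a fixed sum standing in for CBAM's 7×7 convolution over pooled maps) are placeholders, not the paper's trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w1, w2):
    # squeeze spatial dims two ways, pass both through a shared MLP, sum
    avg = x.mean(axis=(1, 2))            # (C,)
    mx = x.max(axis=(1, 2))              # (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None]        # rescale each channel

def spatial_attention(x):
    # pool across channels; a fixed sum stands in for the learned conv
    avg = x.mean(axis=0)                 # (H, W)
    mx = x.max(axis=0)                   # (H, W)
    att = sigmoid(avg + mx)
    return x * att[None, :, :]           # rescale each spatial location

C, H, W = 4, 5, 5
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((2, C))         # C -> C/2 channel reduction
w2 = rng.standard_normal((C, 2))
y = spatial_attention(channel_attention(x, w1, w2))
```

Because both gates lie in (0, 1), the module can only suppress or pass features, which is how it "selectively emphasizes relevant features while suppressing noise".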

Findings

The ResNet50-CBAM outperformed existing deep learning classification methods such as the plain convolutional neural network (CNN), achieving a superior performance of 99.43%, 99.01%, 98.7% and 99.25% in accuracy, recall, precision and AUC, respectively, when compared to the existing classification methods on the same dataset.

Practical implications

Since the ResNet50-CBAM fusion can capture spatial context while enhancing feature representation, it can be integrated into brain classification software platforms for physicians toward enhanced clinical decision-making and improved brain tumor classification.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 22 March 2022

Shiva Sumanth Reddy and C. Nandini

Abstract

Purpose

The present research work is carried out to determine haemoprotozoan diseases in cattle and breast cancer in humans at an early stage. The combination of LeNet and bidirectional long short-term memory (Bi-LSTM) models is used for the classification of haemoprotozoan samples into three classes: theileriosis, babesiosis and anaplasmosis. Also, BreaKHis dataset image samples are classified into two major classes, malignant and benign. Hyperparameter optimization is used for selecting the prominent features. The main objective of this approach is to overcome the manual identification and classification of samples into different haemoprotozoan diseases in cattle. The traditional laboratory approach to identification is time-consuming and requires human expertise. The proposed methodology will help to identify and classify haemoprotozoan disease at an early stage without much human involvement.

Design/methodology/approach

A LeNet-based Bi-LSTM model is used for the classification of pathology images into babesiosis, anaplasmosis and theileriosis, and of breast images into malignant or benign. An optimization-based superpixel clustering algorithm is used for segmentation once the normalization of the histopathology images is conducted. The edge information in the normalized images is considered for identifying the irregularly shaped regions of the images, which are structurally meaningful. This is also compared with another segmentation approach, the circular Hough transform (CHT), which is used to separate nuclei from non-nuclei. Canny edge detection and a Gaussian filter are used for extracting the edges before passing them to the CHT.
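The CHT step can be sketched directly: each edge point votes for candidate circle centres one radius away, and the accumulator peak recovers a nucleus centre. This toy version uses a single known radius and a synthetic edge ring in place of Canny output; the function name and sizes are illustrative.

```python
import numpy as np

def cht_accumulate(edge_points, shape, radius, n_angles=64):
    """Vote for candidate circle centres at a fixed radius."""
    acc = np.zeros(shape)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    for y, x in edge_points:
        ys = np.round(y - radius * np.sin(thetas)).astype(int)
        xs = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (ys >= 0) & (ys < shape[0]) & (xs >= 0) & (xs < shape[1])
        np.add.at(acc, (ys[ok], xs[ok]), 1)   # unbuffered accumulation
    return acc

# synthetic edge ring of radius 10 centred at (25, 25)
t = np.linspace(0, 2 * np.pi, 120, endpoint=False)
edges = list(zip(np.round(25 + 10 * np.sin(t)).astype(int),
                 np.round(25 + 10 * np.cos(t)).astype(int)))
acc = cht_accumulate(edges, (50, 50), radius=10)
centre = np.unravel_index(acc.argmax(), acc.shape)
```

In practice the radius is also swept, producing a 3D accumulator; the fixed-radius version above is the core voting idea.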

Findings

Existing methods such as the artificial neural network (ANN), convolutional neural network (CNN), recurrent neural network (RNN), LSTM and Bi-LSTM models have been compared with the proposed hyperparameter optimization approach with LeNet and Bi-LSTM. The proposed hyperparameter optimization-Bi-LSTM model achieved an accuracy of 98.99%, compared to existing models such as the Ensemble of Deep Learning Models at 95.29% and the Modified ReliefF Algorithm at 95.94%.

Originality/value

In contrast to earlier research using Modified ReliefF, the suggested LeNet with Bi-LSTM model improves accuracy, precision and F-score significantly. A real-time dataset is used for the haemoprotozoan disease samples. Also, for anaplasmosis and babesiosis, a second set of datasets was used: coloured datasets obtained by adding acetone and stain.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 21 June 2023

Margarita Ntousia, Ioannis Fudos, Spyridon Moschopoulos and Vasiliki Stamati

Abstract

Purpose

Objects fabricated using additive manufacturing (AM) technologies often suffer from dimensional accuracy issues and other part-specific problems. This study aims to present a framework for estimating the printability of a computer-aided design (CAD) model that expresses the probability that the model is fabricated correctly via an AM technology for a specific application.

Design/methodology/approach

This study predicts the dimensional deviations of the manufactured object per vertex and per part using a machine learning approach. The input to the error prediction artificial neural network (ANN) is per-vertex information extracted from the mesh of the model to be manufactured. The output of the ANN is the estimated average per-vertex error for the fabricated object. This error is then used, along with other global and per-part information, in a framework for estimating the printability of the model, that is, the probability of it being fabricated correctly with a certain AM technology for a specific application domain.
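The per-vertex regression idea can be sketched with a tiny fully connected network that maps per-vertex features to a predicted dimensional deviation. The architecture, the synthetic features and the plain gradient-descent loop below are placeholders for illustration, not the paper's ANN.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_forward(X, W1, b1, W2, b2):
    h = np.maximum(X @ W1 + b1, 0)   # ReLU hidden layer
    return h @ W2 + b2               # one predicted deviation per vertex

n_vertices, n_features, n_hidden = 100, 6, 8
X = rng.standard_normal((n_vertices, n_features))  # per-vertex mesh features
y = 0.1 * X[:, :1]                                 # synthetic target deviations

W1 = 0.1 * rng.standard_normal((n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, 1));          b2 = np.zeros(1)

mse_before = float(np.mean((mlp_forward(X, W1, b1, W2, b2) - y) ** 2))

# plain gradient descent on mean squared error
for _ in range(200):
    h = np.maximum(X @ W1 + b1, 0)
    g = 2 * (h @ W2 + b2 - y) / n_vertices        # dMSE/dpred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (h > 0)                     # backprop through ReLU
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1

mse_after = float(np.mean((mlp_forward(X, W1, b1, W2, b2) - y) ** 2))
```

The predicted per-vertex errors would then feed the downstream printability estimate together with the global and per-part information.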

Findings

A thorough experimental evaluation was conducted on binder jetting technology for both the error prediction approach and the printability estimation framework.

Originality/value

This study presents a method for predicting dimensional errors with high accuracy and a completely novel approach for estimating the probability of a CAD model to be fabricated without significant failures or errors that make it inappropriate for a specific application.

Details

Rapid Prototyping Journal, vol. 29 no. 9
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 18 October 2021

John Peter Cooney, David Oloke and Louis Gyoh

Abstract

Purpose

This study aims to demonstrate the possibility of showing the functionality of complex microbial groups, within ancient structures within a process of refurbishment on a heritage building information modelling (BIM) platform.

Design/methodology/approach

Both qualitative and quantitative research methods will be used throughout, as observational and scientific results will be obtained and collated, the path being: phenomena – acquisition tools – storage – analysis tools – literature. Using this methodology, one pilot study within the scope of demolition and refurbishment, using suitable methods of collecting and managing data (structural or otherwise), will be used and generated by various software and applications. The principal method used for the identification of such micro-organisms will incorporate the polymerase chain reaction (PCR) method, to amplify DNA and to identify any or all spores present. The BIM/historical BIM (HBIM) process will be used to create a remotely based survey to obtain and collate data, using a laser scanner to produce a three-dimensional point cloud model to evaluate and deduce the condition, make-up and stature of the monument. A documentation management system will be devised to enable the development of plain language questions and an exchange information requirement, to identify the documentation required to enable safe refurbishment and to give health and safety guidance. Four data sampling extractions, two for each site, will be conducted within the research for each of the periods being assessed, namely the Norman and Tudor areas of the monument.

Findings

From laboratory PCR analysis, results show a conclusive presence of micro-organism groups, which will be represented within a hierarchical classification from kingdom to species.

Originality/value

The BIM/HBIM process will highlight results in graphical form to show the data collected, particularly within the PCR application. It will also create standardisation and availability for such data from ancient monuments, making all stored data available, as such analysis becomes substantially important for producing data sets for comparison within the framework of this research.

Details

Journal of Engineering, Design and Technology, vol. 21 no. 4
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 11 July 2023

Abhinandan Chatterjee, Pradip Bala, Shruti Gedam, Sanchita Paul and Nishant Goyal

Abstract

Purpose

Depression is a mental health problem characterized by a persistent sense of sadness and loss of interest. EEG signals are regarded as the most appropriate instruments for diagnosing depression because they reflect the operating status of the human brain. The purpose of this study is the early detection of depression among people using EEG signals.

Design/methodology/approach

(i) Artifacts are removed by filtering, and linear and non-linear features are extracted; (ii) feature scaling is done using a standard scaler, while principal component analysis (PCA) is used for feature reduction; (iii) the linear features, the non-linear features and the combination of both (only those with the highest accuracy) are taken for further analysis, where several ML and DL classifiers are applied for the classification of depression; and (iv) in this study, a total of 15 distinct ML and DL methods, including KNN, SVM, bagging SVM, RF, GB, Extreme Gradient Boosting, MNB, Adaboost, Bagging RF, BootAgg, Gaussian NB, RNN, 1DCNN, RBFNN and LSTM, have been utilized as classifiers.
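Step (ii) — standard scaling followed by PCA — can be sketched without any ML library using an SVD; the synthetic matrix below merely stands in for the extracted EEG features.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10)) * np.arange(1, 11)  # 200 epochs, 10 features

# standard scaling: zero mean, unit variance per feature
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

def pca_reduce(X, k):
    """Project onto the top-k right singular vectors of the centred data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

Z = pca_reduce(Xs, k=3)   # reduced feature matrix fed to the classifiers
```

A property worth noting is that the resulting components are mutually uncorrelated, which is what makes PCA a reasonable feature-reduction step ahead of the classifiers.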

Findings

1. Among linear features, alpha, alpha asymmetry, gamma and gamma asymmetry give the best results, while RWE, DFA, CD and AE give the best results among non-linear features. 2. Among the linear features, gamma and alpha asymmetry gave 99.98% accuracy with Bagging RF, while gamma asymmetry gave 99.98% accuracy with BootAgg. 3. For non-linear features, 99.84% accuracy was achieved for RWE and DFA with RF, 99.97% for DFA with XGBoost and 99.94% for RWE with BootAgg. 4. Using DL with linear features, gamma asymmetry gave more than 96% accuracy with RNN and 91% with LSTM; for non-linear features, 89% accuracy was achieved for CD and AE with LSTM. 5. Combining linear and non-linear features, the highest accuracy was achieved with Bagging RF (98.50%) for gamma asymmetry + RWE. In DL, alpha + RWE, gamma asymmetry + CD and gamma asymmetry + RWE achieved 98% accuracy with LSTM.

Originality/value

A novel dataset was collected from the Central Institute of Psychiatry (CIP), Ranchi, recorded using a 128-channel system, whereas major previous studies used fewer channels; the details of the study participants are summarized and a model is developed for statistical analysis using N-way ANOVA; artifacts are removed by high- and low-pass filtering of epoch data followed by re-referencing and independent component analysis for noise removal; linear features, namely, band power and interhemispheric asymmetry, and non-linear features, namely, relative wavelet energy, wavelet entropy, approximate entropy, sample entropy, detrended fluctuation analysis and correlation dimension, are extracted; this model utilizes 213,072 epochs of 5 s EEG data, which allows the model to train for longer, thereby increasing the efficiency of the classifiers. Feature scaling is done using a standard scaler rather than normalization because it helps increase the accuracy of the models (especially for deep learning algorithms), while PCA is used for feature reduction; the linear features, the non-linear features and their combination are taken for extensive analysis in conjunction with ML and DL classifiers for the classification of depression. The combination of linear and non-linear features (only those with the highest accuracy) is used for the best detection results.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 5 January 2024

Ah Lam Lee and Hyunsook Han

Abstract

Purpose

The main issue in the mass customization of apparel products is how to efficiently produce products of various sizes. A parametric pattern-making system is one notable way to rectify this issue, but there is a lack of information on parametric design itself and its application to the apparel industry. This study compares and analyzes three types of parametric clothing pattern CAD (P-CAD) software currently in use to identify the characteristics of each and to suggest a basic guideline for efficient and adaptable P-CAD software in the apparel industry.

Design/methodology/approach

This study compared three types of P-CAD software with different characteristics: SuperALPHA: PLUS (also known as YUKA), GRAFIS and Seamly2D. The authors analyzed the types and management methodologies of each software according to the three essential components identified in previous studies of parametric design systems: entities, constraints and parameters.
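The three components the comparison is organized around can be made concrete with a toy parametric block: parameters (named sizes), entities (points and lines computed from them) and constraints (relations that must hold after any parameter edit). The rectangular-block example and its rules are invented for illustration.

```python
def build_block(params):
    """Entities of a rectangular pattern block derived from parameters."""
    w, h = params["width"], params["height"]
    points = {"A": (0, 0), "B": (w, 0), "C": (w, h), "D": (0, h)}
    lines = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]
    return points, lines

def check_constraints(points, params):
    """Constraint: edge lengths must track the parameters after any edit."""
    ax, _ = points["A"]; bx, by = points["B"]
    _, cy = points["C"]
    return (bx - ax == params["width"]) and (cy - by == params["height"])

pts, _ = build_block({"width": 50, "height": 70})
ok = check_constraints(pts, {"width": 50, "height": 70})
```

Changing a parameter regenerates the entities while the constraints keep the pattern consistent — the property that makes parametric systems attractive for mass customization of sizes.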

Findings

The results demonstrated the advantages and disadvantages of methodology in terms of three essential components of each software. Based on the results, the authors proposed five strategies for P-CAD development that can be applied to the mass customization of clothing.

Originality/value

This study is meaningful in that it consolidates and organizes information about P-CAD software that was previously scattered. The framework used in this study has academic value, suggesting guidelines for analyzing P-CAD systems.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 1
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 2 February 2023

Ahmed Eslam Salman and Magdy Raouf Roman

Abstract

Purpose

The study proposed a human–robot interaction (HRI) framework to enable operators to communicate remotely with robots in a simple and intuitive way. The study focused on situations in which operators with no programming skills have to accomplish teleoperated tasks dealing with randomly localized, different-sized objects in an unstructured environment. The purpose of this study is to reduce stress on operators, increase accuracy and reduce the time of task accomplishment. A particular application of the proposed system is in radioactive isotope production factories. The approach combines the reactivity of the operator's direct control with the powerful tools of vision-based object classification and localization.

Design/methodology/approach

Perceptive real-time gesture control predicated on a Kinect sensor is formulated by information fusion between human intuitiveness and an augmented reality-based vision algorithm. Objects are localized using a developed feature-based vision algorithm, in which the homography is estimated and the Perspective-n-Point problem is solved. The 3D object position and orientation are stored in the robot end-effector memory for final mission adjustment, awaiting a gesture control signal to autonomously pick/place an object. The object classification process uses a one-shot Siamese neural network (NN) to train a proposed deep NN; other well-known models are also used for comparison. The system was contextualized in one of the nuclear industry applications, radioactive isotope production, and its validation was performed through a user study in which 10 participants of different backgrounds were involved.
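The one-shot classification idea can be sketched with a Siamese-style scoring step: each class is represented by a single support example, and a query is assigned to the class whose support embedding is nearest under an L1 distance. The random linear "embedding" below stands in for the trained network, and the class names are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
embed_W = rng.standard_normal((16, 64))     # placeholder learned embedding

def embed(x):
    return np.tanh(embed_W @ x)

def one_shot_classify(query, support):
    """support: {class_name: single example vector per class}"""
    q = embed(query)
    dists = {c: np.abs(q - embed(s)).sum() for c, s in support.items()}
    return min(dists, key=dists.get)        # nearest support class wins

support = {"vial": rng.standard_normal(64), "capsule": rng.standard_normal(64)}
query = support["vial"] + 0.01 * rng.standard_normal(64)  # near-duplicate
label = one_shot_classify(query, support)
```

Because only one labelled example per class is needed, new object types can be added to the teleoperation system without retraining, which is the practical appeal of the one-shot setup.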

Findings

The results revealed the effectiveness of the proposed teleoperation system and demonstrated its potential for use by users with no robotics experience to effectively accomplish remote robot tasks.

Social implications

The proposed system reduces risk and increases the level of safety when applied in hazardous environments such as nuclear facilities.

Originality/value

The contribution and uniqueness of the presented study lie in the development of a well-integrated HRI system that can tackle the four aforementioned circumstances in an effective and user-friendly way. High operator–robot reactivity is kept by using the direct control method, while much cognitive stress is removed by using the elective/flapped autonomous mode to manipulate randomly localized objects of different configurations. This necessitates building an effective deep learning algorithm (compared against well-known methods) to recognize objects under different conditions: illumination levels, shadows and different postures.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 7 November 2022

Buddhini Ginigaddara, Srinath Perera, Yingbin Feng, Payam Rahnamayiezekavat and Mike Kagioglou

Abstract

Purpose

Industry 4.0 is intensifying the need for offsite construction (OSC) adoption, and this rapid transformation is pushing the boundaries of construction skills towards extensive modernisation. The adoption of this modern production strategy by the construction industry would redefine the position of OSC. This study aims to examine whether the existing skills are capable of satisfying the needs of different OSC types.

Design/methodology/approach

A critical literature review evaluated the impact of transformative technology on OSC skills. An existing industry-standard OSC skill classification was used as the basis to develop a master list that recognises emerging and diminishing OSC skills. The master list recognises 67 OSC skills under six skill categories: managers; professionals; technicians and trade workers; clerical and administrative workers; machinery operators and drivers; and labourers. The skills data were extracted from a series of 13 case studies using document reviews and semi-structured interviews with project stakeholders.

Findings

The multiple case study evaluation recognised 13 redundant skills and 16 emerging OSC skills, such as architects with building information modelling and design-for-manufacture-and-assembly knowledge, architects specialised in design and logistics integration, advanced OSC technical skills, factory operators, OSC estimators, technicians for three-dimensional visualisation and computer numerical control operators. Interview findings assessed the current state and future directions of OSC skills development. Findings indicate that the prevailing skills are not adequate to readily relocate construction activities from onsite to offsite.

Originality/value

To the best of the authors’ knowledge, this research is one of the first studies that recognises the major differences in skill requirements for non-volumetric and volumetric OSC types.

Details

Construction Innovation, vol. 24 no. 3
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 17 October 2023

Hatzav Yoffe, Noam Raanan, Shaked Fried, Pnina Plaut and Yasha Jacob Grobman

Abstract

Purpose

This study uses computer-aided design to improve the ecological and environmental sustainability of early-stage landscape designs. Urban expansion onto open land and natural habitats has led to a decline in biodiversity and increased climate change impacts, affecting urban inhabitants' quality of life and well-being. While sustainability indicators have been employed to assess the performance of buildings and neighbourhoods, the ecological and environmental sustainability of landscape designs has received comparatively less attention, particularly in the early design stages where applying sustainability approaches is most impactful.

Design/methodology/approach

The authors propose a computation framework for evaluating key landscape sustainability indicators and providing real-time feedback to designers. The method integrates spatial indicators with widely recognized sustainability rating system credits. A specialized tool was developed for measuring biomass optimization, precipitation management and urban heat mitigation, and a proof-of-concept experiment tested the tool's effectiveness on three Mediterranean neighbourhood-level designs.
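A toy version of the kind of grid-based spatial indicator the tool evaluates: given a land-cover raster for a design, compute a canopy-cover fraction (a biomass proxy) and a permeable-surface fraction (a precipitation-management proxy). The cell codes and the two indicators are invented for illustration, not the tool's actual credit formulas.

```python
import numpy as np

CANOPY, GRASS, PAVED, WATER = 0, 1, 2, 3   # illustrative land-cover codes

def landscape_indicators(grid):
    """Fraction-of-area indicators over a land-cover raster."""
    cells = grid.size
    canopy = float((grid == CANOPY).sum()) / cells
    permeable = float(np.isin(grid, [CANOPY, GRASS]).sum()) / cells
    return {"canopy_cover": canopy, "permeable_fraction": permeable}

design = np.array([[CANOPY, CANOPY, PAVED, GRASS],
                   [GRASS,  PAVED,  PAVED, WATER]])
ind = landscape_indicators(design)
```

Recomputing such fractions as the designer edits the raster is what makes real-time feedback in early design stages feasible.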

Findings

The results show a clear connection between the applied design strategy and the indicator behaviour. This connection enhances the ability to establish sustainability benchmarks for different types of landscape developments using parametric design.

Practical implications

The study allows non-expert designers to measure and embed landscape sustainability early in the design stages, thus lowering the entry barrier for incorporating biodiversity enhancement and climate mitigation approaches.

Originality/value

This study expands the parametric vocabulary for measuring landscape sustainability by introducing spatial ecosystem services and architectural sustainability indicators on a unified platform, enabling the integration of critical climate and biodiversity-loss solutions earlier in the development process.

Details

Archnet-IJAR: International Journal of Architectural Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2631-6862
