Search results

1 – 10 of 945
Open Access
Article
Publication date: 3 August 2020

Rajashree Dash, Rasmita Rautray and Rasmita Dash

Abstract

Over the last few decades, Artificial Neural Networks have attracted a large number of researchers working on diverse problem domains. Owing to distinguishing features such as generalization ability, robustness and a strong capacity to tackle nonlinear problems, they are especially popular in financial time series modeling and prediction. In this paper, a Pi-Sigma Neural Network is designed for forecasting future currency exchange rates over different prediction horizons. The unknown parameters of the network are estimated by a hybrid learning algorithm termed Shuffled Differential Evolution (SDE). The main motivation of this study is to integrate the partitioning and random shuffling scheme of the Shuffled Frog Leaping algorithm with the evolutionary steps of a Differential Evolution technique, to obtain an optimal solution with an accelerated convergence rate. The efficiency of the proposed predictor model is demonstrated by predicting the exchange rate of the US dollar against the Swiss Franc (CHF) and the Japanese Yen (JPY), using data accumulated over the same period of time.
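The listing does not give the network configuration or the SDE control parameters, but the two ingredients the abstract names are standard. A minimal sketch under illustrative assumptions (the parameter encoding, F, CR and the memeplex count are placeholders, not the authors' settings):

```python
import numpy as np

def pi_sigma_forward(x, W, b):
    """Pi-Sigma output: sigmoid of the product of K linear summing units."""
    sums = W @ x + b                      # K linear combinations of the inputs
    return 1.0 / (1.0 + np.exp(-np.prod(sums)))

def mse_fitness(theta, X, y, n_in, k):
    """Mean squared prediction error for the network encoded by vector theta."""
    W, b = theta[: k * n_in].reshape(k, n_in), theta[k * n_in :]
    preds = np.array([pi_sigma_forward(x, W, b) for x in X])
    return float(np.mean((preds - y) ** 2))

def sde_generation(pop, fit, f_obj, F=0.5, CR=0.9, n_mplx=4, rng=None):
    """Rank the population, deal it round-robin into memeplexes (the SFLA
    shuffling scheme), then apply DE/rand/1/bin inside each memeplex."""
    rng = rng or np.random.default_rng()
    order = np.argsort(fit)                          # best individuals first
    for m in range(n_mplx):
        idx = order[m::n_mplx]                       # one memeplex
        for i in idx:
            others = idx[idx != i]
            a, b, c = rng.choice(others, 3, replace=False)
            mutant = pop[a] + F * (pop[b] - pop[c])
            mask = rng.random(pop.shape[1]) < CR
            trial = np.where(mask, mutant, pop[i])
            ft = f_obj(trial)
            if ft < fit[i]:                          # greedy DE selection
                pop[i], fit[i] = trial, ft
    return pop, fit

# usage sketch: pop = rng.uniform(-1, 1, (40, dim)), with 4 memeplexes of 10
```

Calling this once per generation re-ranks the whole population before dealing it into memeplexes, which is the shuffling step contributed by SFLA; the inner loop is plain DE.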

Details

Applied Computing and Informatics, vol. 19 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 15 December 2022

Maria Ferrer-Estévez and Ricardo Chalmeta

Abstract

Purpose

Sustainable customer relationship management (SCRM) is a combination of business strategy, customer-oriented business processes and computer systems that seeks to integrate sustainability into customer relationship management. The purpose of this paper is to contribute to the body of knowledge of the marketing, business management and computer systems research domains by classifying the current state of knowledge on SCRM into research categories, by analysing the major research streams and by identifying a future research agenda in each research category.

Design/methodology/approach

To identify, select, collect, synthesise, analyse and evaluate all research published on SCRM, and thus provide complete insight into this research area, the PRISMA methodology, content analysis and bibliometric tools are used.
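The listing names PRISMA, content analysis and bibliometric tools without detailing the toolchain. A minimal sketch of the descriptive bibliometrics such a review reports (papers per year, citation totals, top sources and countries); the records table below is a placeholder, not the authors' PRISMA-selected corpus:

```python
import pandas as pd

# placeholder records: one row per included paper
records = pd.DataFrame({
    "year":      [2015, 2018, 2018, 2021, 2021, 2021],
    "citations": [40, 12, 30, 5, 8, 2],
    "source":    ["JCLP", "MIP", "JCLP", "Sustainability", "MIP", "JCLP"],
    "country":   ["ES", "ES;UK", "DE", "CN", "ES", "UK"],
})

papers_per_year = records.groupby("year").size()
citations_per_year = records.groupby("year")["citations"].sum()
top_sources = records["source"].value_counts().head(10)
top_countries = (records["country"].str.split(";").explode()
                 .str.strip().value_counts().head(10))

print(papers_per_year, top_sources, top_countries, sep="\n\n")
```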

Findings

In total, 139 papers were analysed to assess the trend of the number of papers published and the number of citations of these papers; to identify the top contributing countries, authors, institutions and sources; to reveal the findings of the major research streams; to develop a classification framework composed of seven research categories (CRM as a key factor for enterprise sustainability, SCRM frameworks, SCRM computer tools and methods, case studies, SCRM and sustainable supply chain management, sustainable marketing and knowledge management) in which academics could expand SCRM research; and to establish future research challenges.

Social implications

This paper has important positive social and environmental implications, because it can lead to an increase in the number of green and socially conscious customers who behave ethically, while also transforming business processes, products and services to make them more sustainable.

Originality/value

Customer relationship management in the age of sustainable development is a growing research area. Nevertheless, to the authors' knowledge, there are no systematic literature reviews that identify the major research streams, develop a classification framework, analyse the evolution of this research field and propose a future research agenda.

Details

Marketing Intelligence & Planning, vol. 41 no. 2
Type: Research Article
ISSN: 0263-4503

Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart

Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many application areas, such as healthcare systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a single model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (based on the Gaussian, lognormal and gamma distributions) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as PH2, the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Data Set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmark data sets.

Design/methodology/approach

The proposed PPSM combines three benchmark distribution thresholding techniques (based on the Gaussian, lognormal and gamma distributions) to estimate an optimum threshold value that leads to optimum extraction of the segmented region.
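The listing does not reproduce the thresholding math, but minimum cross-entropy thresholding (MCET), which the findings pair with the PPSM, has a standard histogram form (Li's criterion). A sketch of that baseline criterion only; the distribution-specific Gaussian/lognormal/gamma estimates and the parallel boosting step of the paper are not reproduced:

```python
import numpy as np

def mcet_threshold(image):
    """Li's minimum cross-entropy threshold over a 0..255 greyscale image."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    g = np.arange(1, 257, dtype=float)   # grey levels shifted by 1 to avoid log(0)
    h = hist.astype(float)
    best_t, best_eta = 1, np.inf
    for t in range(2, 255):
        h1, h2 = h[:t], h[t:]
        g1, g2 = g[:t], g[t:]
        if h1.sum() == 0 or h2.sum() == 0:
            continue                     # skip empty classes
        m1 = (g1 * h1).sum() / h1.sum()  # mean grey level below the threshold
        m2 = (g2 * h2).sum() / h2.sum()  # mean grey level above the threshold
        eta = ((g1 * h1 * np.log(g1 / m1)).sum()
               + (g2 * h2 * np.log(g2 / m2)).sum())
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t - 1                    # map back to the 0..255 grey scale
```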

Findings

On the basis of the achieved results, it can be observed that the proposed PPSM with minimum cross-entropy thresholding (PPSM–MCET) is a robust, accurate and highly consistent segmentation method with strong performance.

Originality/value

A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of MCET computation. The proposed model might be used as a valuable tool in many application areas, such as healthcare systems, pattern recognition, traffic control and surveillance systems.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 15 March 2022

Mehrshad Mehrpouya, Daniel Tuma, Tom Vaneker, Mohamadreza Afrasiabi, Markus Bambach and Ian Gibson

Abstract

Purpose

This study aims to provide a comprehensive overview of the current state of the art in powder bed fusion (PBF) techniques for additive manufacturing of multiple materials. It reviews the emerging technologies in PBF multimaterial printing and summarizes the latest simulation approaches for modeling them. The topic of multimaterial PBF techniques is still very new and underdeveloped, and it is of interest to academia and industry on many levels.

Design/methodology/approach

This is a review paper. The study approach was to carefully search for and investigate notable works and peer-reviewed publications concerning multimaterial three-dimensional printing using PBF techniques. The current methodologies, as well as their advantages and disadvantages, are cross-compared through a systematic review.

Findings

The results show that the development of multimaterial PBF techniques is still in its infancy, as many fundamental research questions have yet to be addressed before production use. Experimentation has many limitations and is costly; modeling and simulation can therefore be very helpful, but they are heavily dependent on material data and computational power and need further development in future studies.
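To make the material-data dependence concrete, here is an illustrative sketch (not taken from the review) of a one-dimensional explicit finite-difference conduction step with a per-cell material index, showing where the property data of each powder enters a multimaterial thermal model; all values are placeholders:

```python
import numpy as np

# two hypothetical powders, properties indexed by material id
k   = np.array([20.0, 120.0])     # thermal conductivity, W/(m K)
rho = np.array([7800.0, 2700.0])  # density, kg/m^3
cp  = np.array([500.0, 900.0])    # specific heat, J/(kg K)

n, dx = 200, 1e-5                 # 2 mm domain, 10 um cells
mat = np.zeros(n, dtype=int); mat[n // 2 :] = 1   # bimaterial interface
# per-cell diffusivity (a simplification: interface flux averaging omitted)
alpha = k[mat] / (rho[mat] * cp[mat])
dt = 0.4 * dx**2 / alpha.max()    # explicit stability limit

T = np.full(n, 300.0); T[0] = 1800.0              # laser-heated boundary
for _ in range(500):
    T[1:-1] += dt * alpha[1:-1] * (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
```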

Originality/value

This work investigates multimaterial PBF techniques and discusses the novel printing methods with practical examples. Our literature survey revealed that the number of accounts of predictive modeling of stresses and of optimizing laser scan strategies in multimaterial PBF is low, with a very limited range of applications. To facilitate future developments in this direction, the key information on the simulation efforts and the state-of-the-art computational models of multimaterial PBF is provided.

Details

Rapid Prototyping Journal, vol. 28 no. 11
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 10 January 2020

Slawomir Koziel and Anna Pietrenko-Dabrowska

Abstract

Purpose

This study aims to propose a computationally efficient framework for multi-objective optimization (MO) of antennas based on nested kriging modeling technology. The technique is demonstrated through a two-objective optimization of a planar Yagi antenna and a three-objective design of a compact wideband antenna.

Design/methodology/approach

The keystone of the proposed approach is the use of the recently introduced nested kriging modeling to identify the design space region containing the Pareto front and to construct a fast surrogate model for the MO algorithm. Surrogate-assisted design refinement is applied to improve the accuracy of the Pareto set determination. Consequently, the Pareto set is obtained cost-efficiently, even though the optimization process relies solely on high-fidelity electromagnetic (EM) analysis.
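A minimal sketch of the general surrogate-assisted MO pattern described here, with ordinary kriging (Gaussian process) surrogates; the nested domain-confinement step of the paper is not reproduced, and em_simulate() is a hypothetical stand-in for the high-fidelity EM solver:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def em_simulate(x):
    """Placeholder for one high-fidelity EM analysis returning 2 objectives."""
    return np.array([np.sum((x - 0.3) ** 2), np.sum((x + 0.3) ** 2)])

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, (60, 5))              # initial training designs
Y = np.array([em_simulate(x) for x in X])        # expensive evaluations

# one kriging surrogate per objective
gps = [GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
       .fit(X, Y[:, j]) for j in range(Y.shape[1])]

cand = rng.uniform(-1.0, 1.0, (5000, 5))         # cheap surrogate screening
pred = np.column_stack([gp.predict(cand) for gp in gps])

def nondominated(F):
    """Boolean mask of the Pareto-optimal rows of objective matrix F."""
    keep = np.ones(len(F), dtype=bool)
    for i, f in enumerate(F):
        if keep[i]:
            keep &= ~(np.all(F >= f, axis=1) & np.any(F > f, axis=1))
    return keep

pareto_designs = cand[nondominated(pred)]
# refinement: re-evaluate pareto_designs with em_simulate and refit the GPs
```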

Findings

The optimization cost is dramatically reduced for the proposed framework compared to other state-of-the-art frameworks. The initial Pareto set is identified more precisely (its span is wider and of better quality), as a result of the considerably smaller domain of the nested kriging model and the better predictive power of the surrogate.

Research limitations/implications

The proposed technique can be generalized to accommodate low- and high-fidelity EM simulations in a straightforward manner. Future work will incorporate variable-fidelity simulations to further reduce the cost of training data acquisition.

Originality/value

A fast MO procedure using nested kriging modeling to approximate the Pareto set has been proposed, and its superiority over state-of-the-art surrogate-assisted procedures has been demonstrated. To the best of the authors' knowledge, this approach to multi-objective antenna optimization is novel and enables optimal designs to be obtained cost-effectively, even in relatively high-dimensional spaces (considering typical antenna design setups) and within wide parameter ranges.

Details

Engineering Computations, vol. 37 no. 4
Type: Research Article
ISSN: 0264-4401

Open Access
Article
Publication date: 30 August 2021

Kailun Feng, Shiwei Chen, Weizhuo Lu, Shuo Wang, Bin Yang, Chengshuang Sun and Yaowu Wang

Abstract

Purpose

Simulation-based optimisation (SO) is a popular optimisation approach for building and civil engineering construction planning. However, in the SO framework, the simulation is continuously invoked along the optimisation trajectory, which raises the computational load to levels unrealistic for timely construction decisions. Modifying the optimisation settings, for example by reducing the search ability, is a popular way to address this challenge, but it also reduces the quality of the optimal decisions obtained, termed the optimisation quality. Therefore, this study aims to develop an optimisation approach for construction planning that reduces the high computational load of SO while simultaneously providing reliable optimisation quality.

Design/methodology/approach

This study proposes an optimisation approach that modifies the SO framework by establishing an embedded connection between simulation and optimisation technologies. The approach reduces the computational load and preserves the optimisation quality of the conventional SO approach by accurately learning knowledge from construction simulations using embedded ensemble learning algorithms, which automatically provide efficient and reliable fitness evaluations for the optimisation iterations.
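A minimal sketch of the embedded idea, under illustrative assumptions: run_simulation() stands in for the construction simulation, an ensemble regressor learns its response, and the optimiser then queries only the cheap model:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def run_simulation(plan):
    """Placeholder for one expensive construction-simulation run."""
    return float(np.sum(plan ** 2) + 0.1 * np.sin(10.0 * plan).sum())

rng = np.random.default_rng(1)
plans = rng.uniform(0.0, 1.0, (300, 8))               # sampled plans
costs = np.array([run_simulation(p) for p in plans])  # expensive evaluations

surrogate = RandomForestRegressor(n_estimators=200, random_state=1)
surrogate.fit(plans, costs)

# simple hill-climbing search driven by surrogate fitness only
best = plans[np.argmin(costs)].copy()
best_pred = surrogate.predict(best[None, :])[0]
for _ in range(2000):
    trial = np.clip(best + rng.normal(0.0, 0.05, 8), 0.0, 1.0)
    trial_pred = surrogate.predict(trial[None, :])[0]
    if trial_pred < best_pred:
        best, best_pred = trial, trial_pred
print("verified cost of candidate plan:", run_simulation(best))
```

The point of the pattern is that the simulation budget (here 300 training runs plus one verification) is decoupled from the number of optimisation iterations.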

Findings

A large-scale project application shows that the proposed approach reduced the computational load of SO by approximately 90%. Meanwhile, the proposed approach outperformed SO in terms of optimisation quality when the optimisation had limited search ability.

Originality/value

The core contribution of this research is an innovative method that simultaneously improves the efficiency and ensures the effectiveness of the well-known SO approach in construction applications. The proposed method is an alternative to SO that can run on standard computing platforms and support nearly real-time on-site construction decision-making.

Details

Engineering, Construction and Architectural Management, vol. 30 no. 1
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 1 December 2023

Francois Du Rand, André Francois van der Merwe and Malan van Tonder

Abstract

Purpose

This paper aims to discuss the development of a defect classification system that can be used to detect and classify powder bed surface defects from captured layer images without the need for specialised computational hardware. The idea is to develop this system by making use of more traditional machine learning (ML) models instead of using computationally intensive deep learning (DL) models.

Design/methodology/approach

This study applies traditional image processing and classification techniques to captured layer images to detect and classify defects without the need for DL algorithms.
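A minimal sketch of this route, under illustrative assumptions: the feature set, the classifier choice and the synthetic stand-in images below are not the paper's pipeline or defect classes.

```python
import numpy as np
from sklearn.svm import SVC

def layer_features(img):
    """Cheap global features from a greyscale layer image (2-D uint8 array)."""
    g = img.astype(float)
    gy, gx = np.gradient(g)                       # edge energy picks up streaks
    hist, _ = np.histogram(img, bins=32, range=(0, 256), density=True)
    return np.concatenate(([g.mean(), g.std(),
                            np.abs(gx).mean(), np.abs(gy).mean()], hist))

rng = np.random.default_rng(2)
imgs = rng.integers(0, 256, (40, 64, 64), dtype=np.uint8)  # stand-in layers
labels = rng.integers(0, 2, 40)                  # stand-in ok/defect labels

X = np.array([layer_features(im) for im in imgs])
clf = SVC(kernel="rbf", C=10.0).fit(X, labels)
print(clf.predict(X[:5]))                        # classify new layers the same way
```

Because feature extraction is a handful of array passes, inference runs comfortably on a CPU, which is the hardware argument the paper makes.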

Findings

The study showed that a defect classification algorithm with a high degree of accuracy could be developed using traditional ML models, and that images could be processed at higher speeds than typically reported in the literature for DL models.

Originality/value

This paper addresses an identified need for a high-speed defect classification algorithm that can detect and classify defects without the specialised hardware typically required by DL technologies. When developing closed-loop feedback systems for these additive manufacturing machines, it is important to detect and classify defects without introducing additional delays into the control system.

Details

Rapid Prototyping Journal, vol. 29 no. 11
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 2 December 2016

Juan Aparicio

Abstract

Purpose

The purpose of this paper is to provide an outline of the major contributions in the literature on the determination of the least distance in data envelopment analysis (DEA). The focus herein is primarily on methodological developments. Specifically, attention is mainly paid to modeling aspects, computational features, the satisfaction of properties and duality. Finally, some promising avenues of future research on this topic are stated.

Design/methodology/approach

DEA is a methodology based on mathematical programming for assessing the relative efficiency of a set of decision-making units (DMUs) that use several inputs to produce several outputs. DEA is classified in the literature as a non-parametric method because it does not assume a particular functional form for the underlying production function, and it presents, in this sense, some outstanding properties: the efficiency of firms may be evaluated independently of the market prices of the inputs used and the outputs produced; it may easily be used with multiple inputs and outputs; a single efficiency score is obtained for each assessed organization; the technique ranks organizations on the basis of relative efficiency; and, finally, it yields benchmarking information. When applied to a dataset of observations and variables (inputs and outputs), DEA models provide both benchmarking information and efficiency scores for each of the evaluated units. Without a doubt, this benchmarking information gives DEA a distinct advantage over other efficiency methodologies, such as stochastic frontier analysis (SFA).

Technical inefficiency is typically measured in DEA as the distance between the observed unit and a "benchmarking" target on the estimated piecewise linear efficient frontier. The choice of this target is critical for assessing the potential performance of each DMU in the sample, as well as for providing information on how to increase its performance. However, traditional DEA models yield targets determined by the "furthest" efficient projection from the evaluated DMU. The projected point on the efficient frontier obtained in this way may not be a representative projection for the evaluated unit, and consequently some authors in the literature have suggested determining the closest targets instead. The general argument behind this idea is that closer targets suggest directions of enhancement for the inputs and outputs of inefficient units that may lead them to efficiency with less effort. Indeed, authors such as Aparicio et al. (2007) have shown, in an application on airlines, that substantial differences can be found between the targets provided by the criterion used in traditional DEA models and those obtained when the criterion of closeness is used to determine projection points on the efficient frontier.

The determination of closest targets is connected to the calculation of the least distance from the evaluated unit to the efficient frontier of the reference technology. In fact, the former is usually computed by solving mathematical programming models associated with minimizing some type of distance (e.g. Euclidean). In this particular respect, the main contribution in the literature is the paper by Briec (1998) on Hölder distance functions, where technical inefficiency with respect to the "weakly" efficient frontier is formally defined through mathematical distances.
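To fix notation for the contrast drawn here, a sketch in standard DEA terms: the classical radial (furthest-type) envelopment model versus a least-distance problem in the spirit of Briec's Hölder-norm definition.

```latex
% Radial (CCR, input-oriented) projection of DMU (x_0, y_0):
\min_{\theta,\,\lambda}\ \theta
\quad \text{s.t.} \quad
\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta x_{i0} \ \ (i = 1,\dots,m), \qquad
\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{r0} \ \ (r = 1,\dots,s), \qquad
\lambda_j \ge 0.

% Least-distance (Holder) inefficiency: the closest point of the weakly
% efficient frontier \partial^{w}(T) of technology T, in some l_p norm:
d_p(x_0, y_0) \;=\; \min_{(x,y)\,\in\,\partial^{w}(T)}
\bigl\lVert (x_0, y_0) - (x, y) \bigr\rVert_p .
```

The second problem is the hard one the survey discusses: the feasible set is the complement of a polyhedral set, so the minimization is non-convex.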

Findings

The interesting features of the determination of closest targets from a benchmarking point of view have, in recent times, generated increasing interest among researchers in the calculation of the least distance to evaluate technical inefficiency (Aparicio et al., 2014a). In this paper, we therefore present a general classification of the published contributions, mainly from a methodological perspective, and we additionally indicate avenues for further research on this topic. The approaches cited in this paper differ in the way the idea of similarity is made operative. Similarity is, in this sense, implemented as the closeness between the values of the inputs and/or outputs of the assessed units and those of the obtained projections on the frontier of the reference production possibility set. Similarity may be measured through multiple distances and efficiency measures. In turn, the aim is to globally minimize the DEA model slacks in order to determine the closest efficient targets. However, as we show later in the text, minimizing a mathematical distance in DEA is not an easy task, as it is equivalent to minimizing the distance to the complement of a polyhedral set, which is not a convex set. This complexity justifies the existence of different alternatives for solving these types of models.

Originality/value

To the best of our knowledge, this is the first survey on this topic.

Details

Journal of Centrum Cathedra, vol. 9 no. 2
Type: Research Article
ISSN: 1851-6599

Open Access
Article
Publication date: 24 October 2022

Babak Lotfi and Bengt Ake Sunden

Abstract

Purpose

This study aims to use computational numerical simulations to clarify and explore the influence of periodic cellular lattice (PCL) morphological parameters, such as lattice structure topology (simple cubic, body-centered cubic, z-reinforced body-centered cubic [BCCZ], face-centered cubic and z-reinforced face-centered cubic [FCCZ] lattice structures) and porosity value, on the thermal-hydraulic characteristics of the novel trussed fin-and-elliptical tube heat exchanger (FETHX), leading to a deeper understanding of the superior heat transfer enhancement ability of the PCL structure.

Design/methodology/approach

A three-dimensional computational fluid dynamics (CFD) model is proposed in this paper to provide a better understanding of the fluid flow and heat transfer behavior of the PCL structures in trussed FETHXs with different structure topologies and high porosities. The governing flow equations of the trussed FETHX are solved with the CFD software ANSYS CFX®, using the Menter SST turbulence model to accurately predict the flow characteristics in the fluid flow region.

Findings

The thermal-hydraulic performance benchmark analyses conducted during this research, such as field synergy performance and performance evaluation criteria, demonstrate that the best thermal-hydraulic performance is obtained when the porosity of the PCL structures is decreased to 92%. Overall, according to the obtained outcomes, the trussed FETHX benefits from using the BCCZ lattice structure at 92% porosity, which shows the best thermal-hydraulic performance among all the investigated PCL structures.
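The listing does not state the exact criterion used; a common constant-pumping-power form of the performance evaluation criterion (PEC) in the heat-transfer literature is sketched below, and it is an assumption that the authors use this form:

```python
# PEC = (Nu/Nu0) / (f/f0)**(1/3): heat-transfer gain (Nusselt number ratio)
# discounted by the pressure-drop penalty (friction-factor ratio), relative
# to a reference configuration (Nu0, f0). Values below are illustrative.

def pec(nu, f, nu0, f0):
    """Thermal-hydraulic performance relative to a reference configuration."""
    return (nu / nu0) / (f / f0) ** (1.0 / 3.0)

# e.g. a lattice fin that raises Nu by 60% while tripling the friction factor:
print(pec(nu=1.6, f=3.0, nu0=1.0, f0=1.0))   # ~1.11, still a net gain
```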

Originality/value

To the best of the authors’ knowledge, this paper is one of the first in the literature that provides thorough thermal-hydraulic characteristics of a novel trussed FETHX with high-porosity PCL structures.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 3
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 6 June 2018

Stefan Hartman

Abstract

Purpose

Tourism areas are challenged to become adaptive areas in the context of a dynamic networked society and globalizing economy. The purpose of this paper is to contribute to an enhanced understanding and conceptualization of adaptive tourism areas by drawing attention to “fitness landscapes,” a metaphor that is used in complexity theories to visualize development trajectories of adaptive systems.

Design/methodology/approach

Fitness landscapes, and their underlying theories, are useful for conceptualizing tourism area development as a stepwise movement through a dynamic landscape of peaks and valleys. Doing so highlights why adaptation is a crucial property for tourism areas embedded in dynamic contexts, and offers a frame of thought for how tourism areas can be managed.
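For readers who want the metaphor's formal root: it comes from complexity theory, for example Kauffman's NK model (not part of this paper's method), in which stepwise adaptive walks climb until they stall on a local peak. A toy sketch with placeholder parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 12, 3                                     # N traits, each coupled to K others
tables = rng.random((N, 2 ** (K + 1)))           # random fitness-contribution tables
neigh = np.array([np.arange(i, i + K + 1) % N for i in range(N)])

def fitness(s):
    """Mean contribution over all traits, each depending on K neighbours."""
    idx = (s[neigh] * (2 ** np.arange(K + 1))).sum(axis=1)
    return tables[np.arange(N), idx].mean()

s = rng.integers(0, 2, N)                        # a random starting profile
improved = True
while improved:                                  # greedy one-trait adaptive walk
    improved = False
    for i in range(N):
        t = s.copy(); t[i] ^= 1                  # flip one trait
        if fitness(t) > fitness(s):
            s, improved = t, True
print("local peak fitness:", round(fitness(s), 3))
```

The walk reliably stalls on a local peak rather than the global one, which is the intuition behind the paper's argument that adaptive capacity, not one-off optimization, matters.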

Findings

The article draws attention to a set of factors and conditions that support tourism planners and managers in enhancing the capacity of tourism areas to respond adaptively to changing circumstances.

Originality/value

Introducing fitness landscapes contributes to the discussion on adaptive capacity building, a topic that supports the management of uncertain futures and is likely to gain importance in an increasingly dynamic society. Moreover, it stimulates tourism scholars to develop this topic further. Finally, it helps tourism planners to build adaptive capacity in practice.

Details

Journal of Tourism Futures, vol. 4 no. 2
Type: Research Article
ISSN: 2055-5911
