Search results

1 – 10 of 335
Article
Publication date: 10 November 2023

Yonghong Zhang, Shouwei Li, Jingwei Li and Xiaoyu Tang

Abstract

Purpose

This paper aims to develop a novel grey Bernoulli model with memory characteristics, which is designed to dynamically choose the optimal memory kernel function and the length of memory dependence period, ultimately enhancing the model's predictive accuracy.

Design/methodology/approach

This paper enhances the traditional grey Bernoulli model by introducing memory-dependent derivatives, resulting in a novel memory-dependent derivative grey model. Additionally, fractional-order accumulation is employed for preprocessing the original data. The length of the memory dependence period for memory-dependent derivatives is determined through grey correlation analysis. Furthermore, the whale optimization algorithm is utilized to optimize the cumulative order, power index and memory kernel function index of the model, enabling adaptability to diverse scenarios.
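The fractional-order accumulation used for preprocessing is a standard grey-modelling operation and can be sketched as follows. The paper gives no code, so the function name `frac_accumulate` and the gamma-function form of the binomial weights are illustrative assumptions, not the authors' implementation:

```python
from math import gamma

def frac_accumulate(x, r):
    # r-order fractional accumulation (FAGO); r = 1 reduces to the
    # ordinary cumulative sum used by classical grey models
    n = len(x)
    out = []
    for k in range(1, n + 1):
        # binomial weight C(k - i + r - 1, k - i) via the gamma function
        s = sum(gamma(r + k - i) / (gamma(k - i + 1) * gamma(r)) * x[i - 1]
                for i in range(1, k + 1))
        out.append(s)
    return out
```

With r = 1 this reproduces classical first-order accumulation; in the paper, the whale optimization algorithm searches over the cumulative order rather than fixing it.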

Findings

Selecting appropriate memory kernel functions and memory-dependence lengths improves prediction performance. The proposed model selects both adaptively, and its performance is better than that of the comparison models.

Research limitations/implications

The model presented in this article has some limitations. The grey model is inherently suited to small-sample data, and memory-dependent derivatives consider the memory effect only over a fixed length. The model is therefore mainly applicable to forecasting data with short-term memory effects and is of limited use for time series with long-term memory.

Practical implications

In practical systems, memory effects typically exhibit a decaying pattern, which is effectively characterized by the memory kernel function. The model in this study skillfully determines the appropriate kernel functions and memory dependency lengths to capture these memory effects, enhancing its alignment with real-world scenarios.

Originality/value

Based on the memory-dependent derivative method, a memory-dependent derivative grey Bernoulli model that more accurately reflects the actual memory effect is constructed and applied to power generation forecasting in China, South Korea and India.

Details

Grey Systems: Theory and Application, vol. 14 no. 1
Type: Research Article
ISSN: 2043-9377

Open Access
Article
Publication date: 29 July 2020

Abdullah Alharbi, Wajdi Alhakami, Sami Bourouis, Fatma Najar and Nizar Bouguila

Abstract

We propose in this paper a novel reliable detection method to recognize forged inpainting images. Detecting potential forgeries and authenticating the content of digital images is extremely challenging and important for many applications. The proposed approach involves developing new probabilistic support vector machine (SVM) kernels from a flexible generative statistical model named the “bounded generalized Gaussian mixture model”. The developed learning framework has the advantage of properly combining the benefits of both discriminative and generative models and of including prior knowledge about the nature of the data. It can effectively recognize whether an image has been tampered with and distinguish forged from authentic images. The obtained results confirmed that the developed framework performs well on numerous inpainted images.
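The idea of deriving an SVM kernel from a generative model can be illustrated with a kernel over posterior responsibilities. Note that this sketch substitutes scikit-learn's ordinary `GaussianMixture` for the authors' bounded generalized Gaussian mixture model, and the toy data stands in for real image features:

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# toy stand-ins for feature vectors of authentic (0) vs inpainted (1) images
X = np.vstack([rng.normal(0, 1, (60, 4)), rng.normal(3, 1, (60, 4))])
y = np.array([0] * 60 + [1] * 60)

# generative stage: fit a mixture model to all feature vectors
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)

def responsibility_kernel(A, B):
    # discriminative stage: inner product of posterior responsibility
    # vectors, a simple generative-model-derived kernel
    return gmm.predict_proba(A) @ gmm.predict_proba(B).T

clf = SVC(kernel=responsibility_kernel).fit(X, y)
```

The appeal of this combination, as the abstract notes, is that the kernel encodes prior knowledge about the data distribution while the SVM supplies the discriminative decision boundary.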

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 11 March 2024

Vipin Gupta, Barak M.S. and Soumik Das

Abstract

Purpose

This paper addresses a significant research gap in the study of Rayleigh surface wave propagation within a piezoelectric medium characterized by piezoelectric properties, thermal effects and voids. Previous research has often overlooked the crucial aspects related to voids. This study aims to provide analytical solutions for Rayleigh waves propagating through a medium consisting of a nonlocal piezo-thermo-elastic material with voids under the Moore–Gibson–Thompson thermo-elasticity theory with memory dependencies.

Design/methodology/approach

The analytical solutions are derived using a wave-mode method, and roots are computed from the characteristic equation using the Durand–Kerner method. These roots are then filtered based on the decay condition of surface waves. The analysis pertains to a medium subjected to stress-free and isothermal boundary conditions.
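The Durand–Kerner iteration used to solve the characteristic equation can be sketched as follows; this is a generic implementation, not the authors' code, and the starting points and iteration count are conventional choices:

```python
import numpy as np

def durand_kerner(coeffs, iters=100):
    # find all roots of a polynomial simultaneously; coeffs are ordered
    # from the highest-degree term down
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                       # make the polynomial monic
    n = len(c) - 1
    z = (0.4 + 0.9j) ** np.arange(n)   # standard non-real starting points
    for _ in range(iters):
        for i in range(n):
            num = np.polyval(c, z[i])
            den = np.prod([z[i] - z[j] for j in range(n) if j != i])
            z[i] -= num / den
    return z
```

In the wave problem, the computed roots would then be filtered by the surface-wave decay condition, as the abstract describes.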

Findings

Computational simulations are performed to determine the attenuation coefficient and phase velocity of Rayleigh waves. This investigation goes beyond mere calculations and examines particle motion to gain deeper insights into Rayleigh wave propagation. Furthermore, it examines how the kernel function and nonlocal parameters influence these wave phenomena.

Research limitations/implications

The results of this study reveal several unique cases that significantly contribute to the understanding of Rayleigh wave propagation within this intricate material system, particularly in the presence of voids.

Practical implications

This investigation provides valuable insights into the synergistic dynamics among piezoelectric constituents, void structures and Rayleigh wave propagation, enabling advancements in sensor technology, augmented energy harvesting methodologies and pioneering seismic monitoring approaches.

Originality/value

This study formulates a novel governing equation for a nonlocal piezo-thermo-elastic medium with voids, highlighting the significance of Rayleigh waves and investigating the impact of memory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 4
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 6 January 2023

Hanieh Javadi Khasraghi, Isaac Vaghefi and Rudy Hirschheim

Abstract

Purpose

The research study intends to gain a better understanding of members' behaviors in the context of crowdsourcing contests. The authors examined the key factors that can motivate or discourage contributing to a team and within the community.

Design/methodology/approach

The authors conducted 21 semi-structured interviews with Kaggle.com members and analyzed the data to capture individual members' contributions and emerging determinants that play a role during this process. The authors adopted a qualitative approach and used standard thematic coding techniques to analyze the data.

Findings

The analysis revealed two processes underlying contribution to the team and community and the decision-making involved in each. Accordingly, a set of key factors affecting each process were identified. Using Holbrook's (2006) typology of value creation, these factors were classified into four types, namely extrinsic and self-oriented (economic value), extrinsic and other-oriented (social value), intrinsic and self-oriented (hedonic value), and intrinsic and other-oriented (altruistic value). Three propositions were developed, which can be tested in future research.

Research limitations/implications

The study has a few limitations, which point to areas for future research on this topic. First, the authors only assessed the behaviors of individuals who use the Kaggle platform. Second, the findings of this study may not be generalizable to other crowdsourcing platforms such as Amazon Mechanical Turk, where there is no competition, and participants cannot meaningfully contribute to the community. Third, the authors collected data from a limited (yet knowledgeable) number of interviewees. It would be useful to use bigger sample sizes to assess other possible factors that did not emerge from the authors' analysis. Finally, the authors presented a set of propositions for individuals' contributory behavior in crowdsourcing contest platforms but did not empirically test them. Future research is necessary to validate these hypotheses, for instance, by using quantitative methods (e.g. surveys or experiments).

Practical implications

The authors offer recommendations for implementing appropriate mechanisms for contribution to crowdsourcing contests and platforms. Practitioners should design architectures to minimize the effect of factors that reduce the likelihood of contributions and maximize the factors that increase contribution in order to manage the tension of simultaneously encouraging contribution and competition.

Social implications

The research study makes key theoretical contributions. First, the results help explain individuals' contributory behavior in crowdsourcing contests from two aspects: joining and selecting a team, and content contribution to the community. Second, the findings suggest a revised and extended model of value co-creation, one that integrates this study's findings with those of Nov et al. (2009), Lakhani and Wolf (2005), Wasko and Faraj (2000), Chen et al. (2018), Hahn et al. (2008), Dholakia et al. (2004) and Teichmann et al. (2015). Third, using direct accounts collected through first-hand interviews with crowdsourcing contest members, this study provides an in-depth understanding of individuals' contributory behavior. Methodologically, the authors' approach was distinct from common approaches in this research domain, which rely on secondary datasets (e.g. the content of forum discussions, survey data) and quantitative techniques for analyzing collaboration and contribution behavior (e.g. see Lakhani and Wolf, 2005; Nov et al., 2009).

Originality/value

The authors advance the broad field of crowdsourcing by extending the literature on value creation in the online community, particularly as it relates to the individual participants. The study advances the theoretical understanding of contribution in crowdsourcing contests by focusing on the members' point of view, which reveals both the determinants and the process for joining teams during crowdsourcing contests as well as the determinants of contribution to the content distributed in the community.

Details

Information Technology & People, vol. 37 no. 1
Type: Research Article
ISSN: 0959-3845

Book part
Publication date: 5 April 2024

Taining Wang and Daniel J. Henderson

Abstract

A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation to prevent induced non-negligible estimation bias. Second, the model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables, and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions are eschewed on both stochastic noise and inefficiency for model identification. Instead, only the conditional mean of the inefficiency is assumed, which depends on related determinants with a wide range of choice, via a positive parametric function. As a result, technical efficiency is constructed without relying on an assumed distribution on the composite error. The model provides flexible structures on both the production frontier and inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves a series-based nonlinear least squares estimation for the unknown parameters and a kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries from 1970 to 2019.
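The chapter's starting point, estimating a CES frontier in levels rather than in logs, can be sketched with nonlinear least squares. The data and parameter values below are invented for illustration, and the chapter's semiparametric technology function, single-index structure, and inefficiency term are all omitted:

```python
import numpy as np
from scipy.optimize import curve_fit

def ces(X, A, delta, rho):
    # CES production function in levels; fitting it directly avoids the
    # bias induced by log-transformation
    K, L = X
    return A * (delta * K**(-rho) + (1 - delta) * L**(-rho)) ** (-1 / rho)

rng = np.random.default_rng(1)
K = rng.uniform(1, 10, 200)
L = rng.uniform(1, 10, 200)
y = ces((K, L), 2.0, 0.4, 0.6)   # noiseless data from known parameters

params, _ = curve_fit(ces, (K, L), y, p0=[1.0, 0.5, 0.5])
```

On noiseless data the fit recovers the generating parameters; the chapter's estimator additionally lets the technology vary with environment variables rather than treating A, delta, and rho as constants.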

Article
Publication date: 14 February 2024

Huiyu Cui, Honggang Guo, Jianzhou Wang and Yong Wang

Abstract

Purpose

With the rise in wine consumption, accurate wine price forecasts have significantly impacted restaurant and hotel purchasing decisions and inventory management. This study aims to develop a precise and effective wine price point and interval forecasting model.

Design/methodology/approach

The proposed forecast model uses an improved hybrid kernel extreme learning machine with an attention mechanism and a multi-objective swarm intelligent optimization algorithm to produce more accurate price estimates. To the best of the authors’ knowledge, this is the first attempt at applying artificial intelligence techniques to improve wine price prediction. Additionally, an effective method for predicting price intervals was constructed by leveraging the characteristics of the error distribution. This approach facilitates quantifying the uncertainty of wine price fluctuations, thus rendering decision-making by relevant practitioners more reliable and controllable.
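The kernel extreme learning machine at the core of the hybrid has a closed-form solution that can be sketched as follows. This minimal version omits the paper's attention mechanism and multi-objective swarm optimization, and the sine series is an invented stand-in for wine price data:

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of samples
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KELM:
    # kernel extreme learning machine: a ridge-regularized
    # closed-form solution in the kernel feature space
    def fit(self, X, y, C=1e3):
        self.X = X
        K = rbf(X, X)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / C, y)
        return self

    def predict(self, Xnew):
        return rbf(Xnew, self.X) @ self.beta

X = np.linspace(0, 3, 40).reshape(-1, 1)
prices = np.sin(X).ravel()           # invented stand-in for a price series
model = KELM().fit(X, prices)
```

In the paper, the swarm optimizer would tune hyperparameters such as the kernel width and regularization constant, which are fixed here.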

Findings

The empirical findings indicated that the proposed forecast model provides accurate wine price predictions and reliable uncertainty analysis results. Compared with the benchmark models, the proposed model exhibited superiority in both one-step- and multi-step-ahead forecasts. Meanwhile, the model provides new evidence from artificial intelligence to explain wine prices and understand their driving factors.

Originality/value

This study is a pioneering attempt to evaluate the applicability and effectiveness of advanced artificial intelligence techniques in wine price forecasts. The proposed forecast model not only provides useful options for wine price forecasting but also introduces an innovative addition to existing forecasting research methods and literature.

Details

International Journal of Contemporary Hospitality Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-6119

Book part
Publication date: 5 April 2024

Feng Yao, Qinling Lu, Yiguo Sun and Junsen Zhang

Abstract

The authors propose to estimate a varying coefficient panel data model with different smoothing variables and fixed effects using a two-step approach. The pilot step estimates the varying coefficients by a series method. The authors then use the pilot estimates to perform a one-step backfitting through local linear kernel smoothing, which is shown to be oracle efficient in the sense of being asymptotically equivalent to the estimate knowing the other components of the varying coefficients. In both steps, the authors remove the fixed effects through properly constructed weights. The authors obtain the asymptotic properties of both the pilot and efficient estimators. The Monte Carlo simulations show that the proposed estimator performs well. The authors illustrate the method's applicability by estimating a varying coefficient production frontier using panel data, without assuming distributions of the efficiency and error terms.

Details

Essays in Honor of Subal Kumbhakar
Type: Book
ISBN: 978-1-83797-874-8

Article
Publication date: 2 January 2024

Xiumei Cai, Xi Yang and Chengmao Wu

Abstract

Purpose

Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of these algorithms are lacking in robustness. The purpose of this paper is to investigate a new algorithm that can segment the image better and retain as much detailed information about the image as possible when segmenting noisy images.

Design/methodology/approach

The authors present a novel multi-view fuzzy c-means (FCM) clustering algorithm that includes an automatic view-weight learning mechanism. Firstly, this algorithm introduces a view-weight factor that can automatically adjust the weight of different views, thereby allowing each view to obtain the best possible weight. Secondly, the algorithm incorporates a weighted fuzzy factor, which serves to obtain local spatial information and local grayscale information to preserve image details as much as possible. Finally, in order to weaken the effects of noise and outliers in image segmentation, this algorithm employs the kernel distance measure instead of the Euclidean distance.
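The kernel-distance substitution in the last step can be sketched for plain (single-view) fuzzy c-means. The view-weight and weighted fuzzy factors of the proposed algorithm are omitted, and the deterministic initialization is an illustrative choice:

```python
import numpy as np

def kernel_fcm(X, c=2, m=2.0, sigma=2.0, iters=50):
    # fuzzy c-means with the Gaussian-kernel-induced distance
    # d^2(x, v) = 2 * (1 - K(x, v)), which is bounded and therefore
    # less sensitive to outliers than the squared Euclidean distance
    V = X[np.linspace(0, len(X) - 1, c).astype(int)]  # spread-out initial centers
    for _ in range(iters):
        K = np.exp(-((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) / sigma**2)
        d2 = np.maximum(2 * (1 - K), 1e-12)
        w = d2 ** (-1 / (m - 1))
        U = w / w.sum(axis=1, keepdims=True)          # membership update
        G = (U ** m) * K
        V = (G.T @ X) / G.sum(axis=0)[:, None]        # kernel-weighted centers
    return U, V

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(5, 0.1, (10, 2))])
U, V = kernel_fcm(X)
```

Because the kernel-induced distance saturates at 2 for far-away points, outliers exert a bounded pull on the cluster centers, which is the robustness property the abstract exploits.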

Findings

The authors added different kinds of noise to images and conducted a large number of experimental tests. The results show that the proposed algorithm performs better and is more accurate than previous multi-view fuzzy clustering algorithms in solving the problem of noisy image segmentation.

Originality/value

Most of the existing multi-view clustering algorithms are for multi-view datasets, and the multi-view fuzzy clustering algorithms are unable to eliminate noise points and outliers when dealing with noisy images. The algorithm proposed in this paper has stronger noise immunity and can better preserve the details of the original image.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 25 December 2023

Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan

Abstract

Purpose

Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.

Design/methodology/approach

A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
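The first feature-extraction route, per-level DWT statistics, can be sketched with a hand-rolled Haar transform. The paper does not specify the wavelet, so Haar is an illustrative choice; the min/max summary echoes the features the SHAP analysis later singles out:

```python
import numpy as np

def haar_dwt_minmax(signal, levels=4):
    # multi-level Haar DWT; returns (min, max) of the detail
    # coefficients at each level as flow-distinguishing features
    a = np.asarray(signal, dtype=float)
    feats = []
    for _ in range(levels):
        a = a[: len(a) // 2 * 2]              # drop an odd trailing sample
        detail = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)  # approximation passed down a level
        feats.append((detail.min(), detail.max()))
    return feats

t = np.linspace(0, 1, 256)
pressure = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 40 * t)
features = haar_dwt_minmax(pressure)
```

Each decomposition level isolates a different frequency band of the pressure signal, so the extrema at different levels respond to different flow-regime dynamics.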

Findings

The results highlighted that the DWT + KFDA method exhibited the highest testing and training accuracy at 95.2% and 88.8%, respectively. Results also include a virtual flow regime map to facilitate the visualization of features in two dimensions. Finally, SHAP analysis showed that the minimum and maximum values extracted at the fourth and second signal decomposition levels of the DWT are the best flow-distinguishing features.

Practical implications

This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.

Originality/value

This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Book part
Publication date: 5 April 2024

Christine Amsler, Robert James, Artem Prokhorov and Peter Schmidt

Abstract

The traditional predictor of technical inefficiency proposed by Jondrow, Lovell, Materov, and Schmidt (1982) is a conditional expectation. This chapter explores whether, and by how much, the predictor can be improved by using auxiliary information in the conditioning set. It considers two types of stochastic frontier models. The first type is a panel data model where composed errors from past and future time periods contain information about contemporaneous technical inefficiency. The second type is when the stochastic frontier model is augmented by input ratio equations in which allocative inefficiency is correlated with technical inefficiency. Compared to the standard kernel-smoothing estimator, a newer estimator based on a local linear random forest helps mitigate the curse of dimensionality when the conditioning set is large. Besides numerous simulations, there is an illustrative empirical example.
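For the baseline normal/half-normal case, the JLMS conditional-expectation predictor has a closed form, sketched below; the chapter's improvements (conditioning on panel information or input ratios, and the local linear random forest estimator) are not shown:

```python
import numpy as np
from scipy.stats import norm

def jlms(eps, sigma_u, sigma_v):
    # E[u | eps] for the normal/half-normal production frontier,
    # with composed error eps = v - u (Jondrow et al., 1982)
    s2 = sigma_u**2 + sigma_v**2
    sig_star = sigma_u * sigma_v / np.sqrt(s2)
    z = (-eps * sigma_u**2 / s2) / sig_star
    return sig_star * (norm.pdf(z) / norm.cdf(z) + z)
```

A more negative residual signals more inefficiency, so the predictor is decreasing in eps; the chapter asks how much this prediction improves when the conditioning set is enlarged beyond the contemporaneous composed error.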
