Search results

1 – 10 of 195
Article
Publication date: 10 November 2023

Yonghong Zhang, Shouwei Li, Jingwei Li and Xiaoyu Tang

Abstract

Purpose

This paper aims to develop a novel grey Bernoulli model with memory characteristics, which is designed to dynamically choose the optimal memory kernel function and the length of memory dependence period, ultimately enhancing the model's predictive accuracy.
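For context, the memory-dependent derivative underlying such models is conventionally defined as a kernel-weighted average of the ordinary derivative over a sliding window (the abstract does not restate the definition, so take the notation below as illustrative):

$$ D_{\omega} f(t) = \frac{1}{\omega} \int_{t-\omega}^{t} K(t-s)\, f'(s)\, \mathrm{d}s, $$

where $\omega$ is the memory-dependence length and $K(\cdot)$ is the memory kernel function; these are exactly the two quantities the proposed model chooses dynamically.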

Design/methodology/approach

This paper enhances the traditional grey Bernoulli model by introducing memory-dependent derivatives, resulting in a novel memory-dependent derivative grey model. Additionally, fractional-order accumulation is employed for preprocessing the original data. The length of the memory dependence period for memory-dependent derivatives is determined through grey correlation analysis. Furthermore, the whale optimization algorithm is utilized to optimize the cumulative order, power index and memory kernel function index of the model, enabling adaptability to diverse scenarios.
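As a concrete illustration of the fractional-order accumulation preprocessing step, here is a minimal sketch of the standard r-order accumulation used in grey modelling (the function name and the use of SciPy's gamma function are my own; this is not the paper's code):

```python
import numpy as np
from scipy.special import gamma

def frac_accumulate(x, r):
    # r-order fractional accumulation, the usual grey-model preprocessing:
    # x_r[k] = sum_{i <= k} C(k-i+r-1, k-i) * x[i], with the generalized
    # binomial coefficient computed via gamma functions.
    x = np.asarray(x, dtype=float)
    out = np.zeros(len(x))
    for k in range(len(x)):
        for i in range(k + 1):
            w = gamma(k - i + r) / (gamma(r) * gamma(k - i + 1))
            out[k] += w * x[i]
    return out

# r = 1 recovers the classical first-order accumulation (cumulative sum).
print(frac_accumulate([1.0, 2.0, 3.0], 1.0))  # -> [1. 3. 6.]
```

In the paper, the accumulation order r is one of the quantities tuned by the whale optimization algorithm.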

Findings

Selecting appropriate memory kernel functions and memory-dependence lengths improves prediction performance. The proposed model can select both adaptively, and it outperforms the comparison models.

Research limitations/implications

The model presented in this article has some limitations. The grey model is itself suited to small-sample data, and memory-dependent derivatives consider the memory effect only over a fixed length. The model is therefore mainly applicable to forecasting data with short-term memory effects and is of limited use for time series with long-term memory.

Practical implications

In practical systems, memory effects typically exhibit a decaying pattern, which is effectively characterized by the memory kernel function. The model in this study skillfully determines the appropriate kernel functions and memory dependency lengths to capture these memory effects, enhancing its alignment with real-world scenarios.

Originality/value

Based on the memory-dependent derivative method, a memory-dependent derivative grey Bernoulli model that more accurately reflects the actual memory effect is constructed and applied to power generation forecasting in China, South Korea and India.

Details

Grey Systems: Theory and Application, vol. 14 no. 1
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 11 March 2024

Vipin Gupta, Barak M.S. and Soumik Das

Abstract

Purpose

This paper addresses a significant research gap in the study of Rayleigh surface wave propagation within a medium characterized by piezoelectric properties, thermal effects and voids. Previous research has often overlooked the crucial aspects related to voids. This study aims to provide analytical solutions for Rayleigh waves propagating through a medium consisting of a nonlocal piezo-thermo-elastic material with voids under the Moore–Gibson–Thompson thermo-elasticity theory with memory dependencies.

Design/methodology/approach

The analytical solutions are derived using a wave-mode method, and roots are computed from the characteristic equation using the Durand–Kerner method. These roots are then filtered based on the decay condition of surface waves. The analysis pertains to a medium subjected to stress-free and isothermal boundary conditions.
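The Durand–Kerner iteration itself is a standard simultaneous root-finder; a minimal sketch for a general polynomial (starting values and tolerances here are the customary illustrative choices, not taken from the paper) looks like this:

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=500):
    # Simultaneously approximate all complex roots of the polynomial whose
    # coefficients (highest degree first) are given in coeffs.
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                       # normalise to a monic polynomial
    n = len(c) - 1
    z = (0.4 + 0.9j) ** np.arange(n)   # customary distinct starting guesses
    for _ in range(max_iter):
        p = np.polyval(c, z)
        z_new = z.copy()
        for i in range(n):
            others = np.delete(z, i)
            z_new[i] = z[i] - p[i] / np.prod(z[i] - others)
        if np.max(np.abs(z_new - z)) < tol:
            return z_new
        z = z_new
    return z

# Roots of z^2 - 3z + 2 are 1 and 2.
print(np.sort_complex(durand_kerner([1, -3, 2])))
```

In the paper's setting, the computed roots of the characteristic equation are then filtered by the surface-wave decay condition.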

Findings

Computational simulations are performed to determine the attenuation coefficient and phase velocity of Rayleigh waves. The investigation goes beyond these calculations and examines particle motion to gain deeper insights into Rayleigh wave propagation. Furthermore, it examines how the kernel function and nonlocal parameters influence these wave phenomena.

Research limitations/implications

The results of this study reveal several unique cases that significantly contribute to the understanding of Rayleigh wave propagation within this intricate material system, particularly in the presence of voids.

Practical implications

This investigation provides valuable insights into the synergistic dynamics among piezoelectric constituents, void structures and Rayleigh wave propagation, enabling advancements in sensor technology, augmented energy harvesting methodologies and pioneering seismic monitoring approaches.

Originality/value

This study formulates a novel governing equation for a nonlocal piezo-thermo-elastic medium with voids, highlighting the significance of Rayleigh waves and investigating the impact of memory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 4
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 6 January 2023

Hanieh Javadi Khasraghi, Isaac Vaghefi and Rudy Hirschheim

Abstract

Purpose

The research study intends to gain a better understanding of members' behaviors in the context of crowdsourcing contests. The authors examined the key factors that can motivate or discourage contribution to a team and to the wider community.

Design/methodology/approach

The authors conducted 21 semi-structured interviews with Kaggle.com members and analyzed the data to capture individual members' contributions and emerging determinants that play a role during this process. The authors adopted a qualitative approach and used standard thematic coding techniques to analyze the data.

Findings

The analysis revealed two processes underlying contribution to the team and community and the decision-making involved in each. Accordingly, a set of key factors affecting each process was identified. Using Holbrook's (2006) typology of value creation, these factors were classified into four types, namely extrinsic and self-oriented (economic value), extrinsic and other-oriented (social value), intrinsic and self-oriented (hedonic value), and intrinsic and other-oriented (altruistic value). Three propositions were developed, which can be tested in future research.

Research limitations/implications

The study has a few limitations, which point to areas for future research on this topic. First, the authors only assessed the behaviors of individuals who use the Kaggle platform. Second, the findings of this study may not be generalizable to other crowdsourcing platforms such as Amazon Mechanical Turk, where there is no competition and participants cannot meaningfully contribute to the community. Third, the authors collected data from a limited (yet knowledgeable) number of interviewees. It would be useful to use larger sample sizes to assess other possible factors that did not emerge from the authors' analysis. Finally, the authors presented a set of propositions for individuals' contributory behavior on crowdsourcing contest platforms but did not empirically test them. Future research is necessary to validate these propositions, for instance, by using quantitative methods (e.g. surveys or experiments).

Practical implications

The authors offer recommendations for implementing appropriate mechanisms for contribution to crowdsourcing contests and platforms. Practitioners should design architectures to minimize the effect of factors that reduce the likelihood of contributions and maximize the factors that increase contribution in order to manage the tension of simultaneously encouraging contribution and competition.

Social implications

The research study makes key theoretical contributions. First, the results of this study help explain individuals' contributory behavior in crowdsourcing contests from two aspects: joining and selecting a team, and contributing content to the community. Second, the findings suggest a revised and extended model of value co-creation, one that integrates this study's findings with those of Nov et al. (2009), Lakhani and Wolf (2005), Wasko and Faraj (2000), Chen et al. (2018), Hahn et al. (2008), Dholakia et al. (2004) and Teichmann et al. (2015). Third, using direct accounts collected through first-hand interviews with crowdsourcing contest members, this study provides an in-depth understanding of individuals' contributory behavior. Methodologically, the authors' approach is distinct from common approaches in this research domain, which rely on secondary datasets (e.g. the content of forum discussions, survey data; see Lakhani and Wolf, 2005; Nov et al., 2009) and quantitative techniques for analyzing collaboration and contribution behavior.

Originality/value

The authors advance the broad field of crowdsourcing by extending the literature on value creation in the online community, particularly as it relates to the individual participants. The study advances the theoretical understanding of contribution in crowdsourcing contests by focusing on the members' point of view, which reveals both the determinants and the process for joining teams during crowdsourcing contests as well as the determinants of contribution to the content distributed in the community.

Details

Information Technology & People, vol. 37 no. 1
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 14 February 2024

Huiyu Cui, Honggang Guo, Jianzhou Wang and Yong Wang

Abstract

Purpose

With the rise in wine consumption, accurate wine price forecasts have significantly impacted restaurant and hotel purchasing decisions and inventory management. This study aims to develop a precise and effective wine price point and interval forecasting model.

Design/methodology/approach

The proposed forecast model uses an improved hybrid kernel extreme learning machine with an attention mechanism and a multi-objective swarm intelligent optimization algorithm to produce more accurate price estimates. To the best of the authors’ knowledge, this is the first attempt at applying artificial intelligence techniques to improve wine price prediction. Additionally, an effective method for predicting price intervals was constructed by leveraging the characteristics of the error distribution. This approach facilitates quantifying the uncertainty of wine price fluctuations, thus rendering decision-making by relevant practitioners more reliable and controllable.
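For readers unfamiliar with the base learner, a bare-bones kernel extreme learning machine regressor (without the paper's attention mechanism, hybrid kernel or multi-objective optimization; class and parameter names here are illustrative) can be sketched as:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

class KernelELM:
    # Bare-bones kernel extreme learning machine regressor:
    # beta = (K + I/C)^{-1} y and f(x) = k(x, X_train) @ beta,
    # with ridge parameter C and kernel width gamma.
    def __init__(self, C=10.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        K = rbf_kernel(self.X, self.X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(self.X)) / self.C, y)
        return self

    def predict(self, Xnew):
        Xnew = np.asarray(Xnew, dtype=float)
        return rbf_kernel(Xnew, self.X, self.gamma) @ self.beta

X = np.linspace(0.0, 1.0, 50)[:, None]
y = np.sin(4.0 * X[:, 0])
model = KernelELM(C=100.0, gamma=50.0).fit(X, y)
print(model.predict([[0.5]]))  # should be close to sin(2.0) ~ 0.909
```

A simple analogue of the paper's interval step, under the assumption that it exploits the empirical error distribution, would be to form prediction intervals from quantiles of in-sample residuals.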

Findings

The empirical findings indicated that the proposed forecast model provides accurate wine price predictions and reliable uncertainty analysis results. Compared with the benchmark models, the proposed model exhibited superiority in both one-step- and multi-step-ahead forecasts. Meanwhile, the model provides new evidence from artificial intelligence to explain wine prices and understand their driving factors.

Originality/value

This study is a pioneering attempt to evaluate the applicability and effectiveness of advanced artificial intelligence techniques in wine price forecasts. The proposed forecast model not only provides useful options for wine price forecasting but also introduces an innovative addition to existing forecasting research methods and literature.

Details

International Journal of Contemporary Hospitality Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-6119

Book part
Publication date: 5 April 2024

Taining Wang and Daniel J. Henderson

Abstract

A semiparametric stochastic frontier model is proposed for panel data, incorporating several flexible features. First, a constant elasticity of substitution (CES) production frontier is considered without log-transformation, avoiding the non-negligible estimation bias such a transformation induces. Second, model flexibility is improved via semiparameterization, where the technology is an unknown function of a set of environment variables. The technology function accounts for latent heterogeneity across individual units, which can be freely correlated with inputs, environment variables and/or inefficiency determinants. Furthermore, the technology function incorporates a single-index structure to circumvent the curse of dimensionality. Third, distributional assumptions on both stochastic noise and inefficiency are eschewed for model identification. Instead, only the conditional mean of the inefficiency is assumed, which depends on related determinants, with a wide range of choice, via a positive parametric function. As a result, technical efficiency is constructed without relying on an assumed distribution for the composite error. The model provides flexible structures on both the production frontier and inefficiency, thereby alleviating the risk of model misspecification in production and efficiency analysis. The estimator involves series-based nonlinear least squares estimation for the unknown parameters and kernel-based local estimation for the technology function. Promising finite-sample performance is demonstrated through simulations, and the model is applied to investigate productive efficiency among OECD countries over 1970–2019.
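To fix ideas, a generic two-input CES frontier of the kind described (the notation here is mine, a simplification of the chapter's specification) can be written as

$$ y_{it} = g\!\left(z_{it}^{\top}\gamma\right)\left[\delta\, x_{1,it}^{\rho} + (1-\delta)\, x_{2,it}^{\rho}\right]^{\nu/\rho} + \varepsilon_{it}, \qquad \varepsilon_{it} = v_{it} - u_{it}, \qquad \mathbb{E}\left[u_{it} \mid w_{it}\right] = h\!\left(w_{it}^{\top}\theta\right) > 0, $$

where $g(\cdot)$ is the unknown technology function with single-index argument $z_{it}^{\top}\gamma$ in the environment variables, $h(\cdot)$ is the positive parametric function of the inefficiency determinants $w_{it}$, and the frontier is estimated in levels rather than logs.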

Book part
Publication date: 5 April 2024

Feng Yao, Qinling Lu, Yiguo Sun and Junsen Zhang

Abstract

The authors propose to estimate a varying coefficient panel data model with different smoothing variables and fixed effects using a two-step approach. The pilot step estimates the varying coefficients by a series method. The authors then use the pilot estimates to perform a one-step backfitting through local linear kernel smoothing, which is shown to be oracle efficient in the sense of being asymptotically equivalent to the estimate obtained knowing the other components of the varying coefficients. In both steps, the authors remove the fixed effects through properly constructed weights. The authors obtain the asymptotic properties of both the pilot and efficient estimators. Monte Carlo simulations show that the proposed estimator performs well. The authors illustrate its applicability by estimating a varying coefficient production frontier using panel data, without assuming distributions of the efficiency and error terms.
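The local linear kernel smoothing used in the backfitting step is the textbook estimator; a one-dimensional sketch (Gaussian kernel, illustrative bandwidth, and ignoring the fixed-effect weights of the chapter) is:

```python
import numpy as np

def local_linear(x0, x, y, h):
    # Local linear kernel estimate of E[y | x = x0] with a Gaussian kernel
    # and bandwidth h: weighted least squares of y on (1, x - x0).
    u = (x - x0) / h
    w = np.exp(-0.5 * u ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0]  # the intercept is the fitted value at x0

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * np.random.default_rng(0).normal(size=200)
print(local_linear(0.5, x, y, h=0.05))  # approximately sin(pi) = 0
```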

Details

Essays in Honor of Subal Kumbhakar
Type: Book
ISBN: 978-1-83797-874-8

Article
Publication date: 2 January 2024

Xiumei Cai, Xi Yang and Chengmao Wu

Abstract

Purpose

Multi-view fuzzy clustering algorithms are not widely used in image segmentation, and many of them lack robustness. The purpose of this paper is to investigate a new algorithm that segments noisy images more accurately while retaining as much image detail as possible.

Design/methodology/approach

The authors present a novel multi-view fuzzy c-means (FCM) clustering algorithm that includes an automatic view-weight learning mechanism. Firstly, this algorithm introduces a view-weight factor that can automatically adjust the weight of different views, thereby allowing each view to obtain the best possible weight. Secondly, the algorithm incorporates a weighted fuzzy factor, which serves to obtain local spatial information and local grayscale information to preserve image details as much as possible. Finally, in order to weaken the effects of noise and outliers in image segmentation, this algorithm employs the kernel distance measure instead of the Euclidean distance.
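To make the kernel-distance idea concrete, here is a minimal single-view kernel fuzzy c-means (Gaussian kernel); the paper's view-weight factor and weighted fuzzy factor are omitted, and all names are illustrative:

```python
import numpy as np

def kernel_fcm(X, c, m=2.0, gamma=1.0, iters=100, seed=0):
    # Single-view kernel fuzzy c-means, where the kernel-induced distance
    # replaces the Euclidean one:
    # d^2(x, v) = 2 * (1 - K(x, v)),  K(x, v) = exp(-gamma * ||x - v||^2).
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=c, replace=False)]   # initial centers
    for _ in range(iters):
        K = np.exp(-gamma * ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1))
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships
        W = (U ** m) * K                               # kernel-weighted u^m
        V = (W.T @ X) / W.sum(axis=0)[:, None]         # center update
    return U, V

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(1.0, 0.1, (20, 2))])
U, V = kernel_fcm(X, c=2)
print(np.round(V, 2))   # two centers, near (0, 0) and (1, 1)
```

The kernel distance down-weights points far from a center, which is what gives such algorithms their robustness to noise and outliers.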

Findings

The authors added different kinds of noise to images and conducted a large number of experimental tests. The results show that the proposed algorithm is more accurate than previous multi-view fuzzy clustering algorithms in solving the problem of noisy image segmentation.

Originality/value

Most existing multi-view clustering algorithms are designed for general multi-view datasets, and current multi-view fuzzy clustering algorithms are unable to eliminate noise points and outliers when dealing with noisy images. The algorithm proposed in this paper has stronger noise immunity and better preserves the details of the original image.

Details

Engineering Computations, vol. 41 no. 1
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 5 April 2024

Christine Amsler, Robert James, Artem Prokhorov and Peter Schmidt

Abstract

The traditional predictor of technical inefficiency proposed by Jondrow, Lovell, Materov, and Schmidt (1982) is a conditional expectation. This chapter explores whether, and by how much, the predictor can be improved by using auxiliary information in the conditioning set. It considers two types of stochastic frontier models. The first type is a panel data model where composed errors from past and future time periods contain information about contemporaneous technical inefficiency. The second type is when the stochastic frontier model is augmented by input ratio equations in which allocative inefficiency is correlated with technical inefficiency. Compared to the standard kernel-smoothing estimator, a newer estimator based on a local linear random forest helps mitigate the curse of dimensionality when the conditioning set is large. Besides numerous simulations, there is an illustrative empirical example.
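For reference, the Jondrow et al. (1982) predictor in the canonical normal/half-normal case (the chapter's extensions condition on richer information sets; this baseline, with notation of my choosing, is the usual starting point) is:

```python
import numpy as np
from scipy.stats import norm

def jlms(eps, sigma_u, sigma_v):
    # Jondrow-Lovell-Materov-Schmidt (1982) predictor E[u | eps] for the
    # normal / half-normal stochastic frontier with composed error
    # eps = v - u, v ~ N(0, sigma_v^2), u ~ |N(0, sigma_u^2)|.
    sigma = np.sqrt(sigma_u**2 + sigma_v**2)
    sigma_star = sigma_u * sigma_v / sigma
    a = eps * (sigma_u / sigma_v) / sigma
    return sigma_star * (norm.pdf(a) / (1.0 - norm.cdf(a)) - a)

# More negative residuals imply higher predicted inefficiency.
print(jlms(np.array([-0.5, 0.0, 0.5]), sigma_u=0.3, sigma_v=0.2))
```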

Article
Publication date: 20 November 2023

Chao Zhang, Fang Wang, Yi Huang and Le Chang

Abstract

Purpose

This paper aims to reveal the interdisciplinarity of information science (IS) from the perspective of the evolution of theory application.

Design/methodology/approach

Eight representative IS journals were selected as data sources, the theories mentioned in the full texts of their research papers were extracted, and the annual interdisciplinarity of IS was then measured through theory co-occurrence network analysis, diversity measurement and evolution analysis.
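The abstract does not name its diversity measure; one common choice in interdisciplinarity studies is Rao–Stirling diversity, sketched here purely for illustration:

```python
import numpy as np

def rao_stirling(p, d):
    # Rao-Stirling diversity: sum over pairs i != j of p_i * p_j * d_ij,
    # where p[i] is the share of theory mentions from source discipline i
    # and d[i, j] is a dissimilarity between disciplines i and j.
    p = np.asarray(p, dtype=float)
    d = np.array(d, dtype=float)   # copy, so the caller's matrix is kept
    np.fill_diagonal(d, 0.0)       # exclude the i == j terms
    return float(p @ d @ p)

# Two equally used, maximally distant disciplines give diversity 0.5.
print(rao_stirling([0.5, 0.5], [[0.0, 1.0], [1.0, 0.0]]))
```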

Findings

As a young and vibrant discipline, IS has continuously absorbed and internalized external theoretical knowledge and has thus formed a high degree of interdisciplinarity. With the continued application of some kernel theories, the interdisciplinarity of IS appears to be decreasing, gradually converging on a few neighboring disciplines. Influenced by big data and artificial intelligence, the research paradigm of IS is shifting from a theory-centered one to a technology-centered one.

Research limitations/implications

This study helps to understand the evolution of the interdisciplinarity of IS over the past 21 years. The main limitation is that the data were collected from eight journals indexed by the Social Sciences Citation Index, so a small number of theories might have been omitted.

Originality/value

This study identifies the kernel theories in IS research, measures the interdisciplinarity of IS based on the evolution of the co-occurrence network of theory source disciplines and reveals the paradigm shift taking place in IS.

Details

Journal of Documentation, vol. 80 no. 2
Type: Research Article
ISSN: 0022-0418

Open Access
Article
Publication date: 28 November 2022

Ruchi Kejriwal, Monika Garg and Gaurav Sarin

Abstract

Purpose

The stock market has always been lucrative for investors but, because of its speculative nature, price movements are difficult to predict. Investors have been using both fundamental and technical analysis to predict prices: fundamental analysis studies a company's structured data, while technical analysis studies price trends. The increasing and easy availability of unstructured data has made it important to study market sentiment, which has a major impact on prices in the short run. Hence, the purpose is to gauge market sentiment in a timely and effective manner.

Design/methodology/approach

The research involves text mining and then building various classification models. The accuracy of these models is checked using a confusion matrix.
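A minimal version of such a pipeline (toy data; TF-IDF vectorisation is an assumption, as the paper does not specify its feature extraction) might look like:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix

texts = ["shares rally on strong quarterly results",
         "stock plunges after weak guidance",
         "record profits lift the share price",
         "heavy losses drag the index down"]
labels = ["positive", "negative", "positive", "negative"]  # toy labels

X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.5, stratify=labels, random_state=0)

vec = TfidfVectorizer()
clf = SVC(kernel="rbf")  # a kernel support vector machine
clf.fit(vec.fit_transform(X_tr), y_tr)
pred = clf.predict(vec.transform(X_te))

# Rows are true classes, columns are predicted classes.
print(confusion_matrix(y_te, pred, labels=["positive", "negative"]))
```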

Findings

Out of the six machine learning techniques used to create the classification model, the kernel support vector machine gave the highest accuracy, at 68%. This model can now be used to analyse tweets, news and other unstructured data to predict price movements.

Originality/value

This study will help investors quickly classify a news item or a tweet as “positive”, “negative” or “neutral” and determine stock price trends.

Details

Vilakshan - XIMB Journal of Management, vol. 21 no. 1
Type: Research Article
ISSN: 0973-1954

1 – 10 of 195