Search results

1 – 10 of over 7000
Article
Publication date: 14 September 2023

Cheng Liu, Yi Shi, Wenjing Xie and Xinzhong Bao

Abstract

Purpose

This paper aims to provide a complete analysis framework and prediction method for the construction of the patent securitization (PS) basic asset pool.

Design/methodology/approach

This paper proposes an integrated classification method based on a genetic algorithm and the random forest algorithm. First, the patent value evaluation model and the SME credit evaluation model are considered together to determine 17 indicators measuring patent value and SME credit. Second, classification labels for high-quality basic assets are established. Then, the genetic algorithm and random forest model are used to predict and screen high-quality basic assets. Finally, the performance of the model is evaluated.
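
As a rough illustration of how a genetic algorithm and a random forest can be combined for this kind of screening task, the following Python sketch uses the genetic algorithm to select a subset of 17 indicators, with a random forest's cross-validated accuracy as the fitness score. It runs on synthetic data; the indicator set, labels and hyperparameters are placeholders, not the authors' implementation.

```python
# Sketch only: a genetic algorithm selects a subset of 17 indicators and a
# random forest's cross-validated accuracy serves as the fitness function.
# Synthetic data; hyperparameters are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=17, n_informative=8,
                           random_state=0)  # stand-in for the 17 indicators

def fitness(mask):
    """Cross-validated accuracy of a random forest on the selected indicators."""
    if mask.sum() == 0:
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

def evolve(pop_size=16, n_gen=10, p_mut=0.1):
    pop = rng.integers(0, 2, size=(pop_size, X.shape[1]))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argmax(scores)].copy()
        # size-2 tournament selection
        idx = [max(rng.choice(pop_size, 2), key=lambda i: scores[i])
               for _ in range(pop_size)]
        parents = pop[idx]
        # uniform crossover with the next parent in line
        cross = rng.random(parents.shape) < 0.5
        children = np.where(cross, parents, np.roll(parents, 1, axis=0))
        # bit-flip mutation, then keep the best individual unchanged (elitism)
        children = children ^ (rng.random(children.shape) < p_mut).astype(int)
        children[0] = elite
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)], scores.max()

best_mask, best_cv_accuracy = evolve()
print("selected indicators:", np.flatnonzero(best_mask))
print("cross-validated accuracy:", round(best_cv_accuracy, 3))
```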

Findings

The machine learning model proposed in this study is mainly used to solve the screening problem of high-quality patents that constitute the underlying asset pool of PS. The empirical research shows that the integrated classification method based on genetic algorithm and random forest has good performance and prediction accuracy, and is superior to the single method that constitutes it.

Originality/value

The main contributions of the article are twofold: first, the proposed machine learning model establishes the standards for high-quality basic assets; second, the article addresses the screening of basic assets in PS.

Details

Kybernetes, vol. 53 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 5 May 2023

Nguyen Thi Dinh, Nguyen Thi Uyen Nhi, Thanh Manh Le and Thanh The Van

Abstract

Purpose

The problem of image retrieval and image description exists in various fields. In this paper, a model of content-based image retrieval and image content extraction based on the KD-Tree structure was proposed.

Design/methodology/approach

A Random Forest structure was built to classify the objects in each image on the basis of the balanced multibranch KD-Tree structure. For that purpose, a KD-Tree structure was generated by the Random Forest to retrieve a set of images similar to an input image. A KD-Tree structure is then applied to determine a relationship word at the leaves, extracting the relationships between objects in an input image. The content of an input image is described based on the class names and the relationships between objects.
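
The following Python sketch conveys the general idea of a forest of KD-trees used for retrieval and classification: several KD-trees are built on subsamples of image feature vectors, the neighbours returned by all trees are pooled and ranked by distance, and a class is assigned by majority vote. Random vectors stand in for real image descriptors, and the paper's balanced multibranch construction and relationship extraction are not reproduced.

```python
# Illustrative sketch only: a "forest" of KD-trees over image feature vectors,
# used for nearest-neighbour retrieval and majority-vote classification.
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.default_rng(1)
features = rng.normal(size=(1000, 64))      # placeholder image descriptors
labels = rng.integers(0, 10, size=1000)     # placeholder object classes

n_trees, sample_size = 8, 600
trees = []
for _ in range(n_trees):
    idx = rng.choice(len(features), size=sample_size, replace=False)
    trees.append((KDTree(features[idx]), idx))  # subsampled tree + original indices

def retrieve(query, k=5):
    """Collect k neighbours from every tree, then rank them all by distance."""
    candidates = []
    for tree, idx in trees:
        dist, ind = tree.query(query.reshape(1, -1), k=k)
        candidates.extend(zip(dist[0], idx[ind[0]]))
    candidates.sort(key=lambda c: c[0])
    top = [int(i) for _, i in candidates[:k]]
    votes = np.bincount(labels[top], minlength=10)
    return top, int(votes.argmax())          # similar images + predicted class

similar, predicted_class = retrieve(features[0])
print(similar, predicted_class)
```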

Findings

A model of image retrieval and image content extraction was proposed on the basis of the theoretical framework; experiments were conducted on multi-object image datasets, including Microsoft COCO and Flickr, with average image retrieval precisions of 0.9028 and 0.9163, respectively. The experimental results were compared with those of other works on the same image datasets to demonstrate the effectiveness of the proposed method.

Originality/value

A balanced multibranch KD-Tree structure was built, extending the original KD-Tree structure, and applied to relationship classification. A KD-Tree Random Forest was then built to improve classifier performance and retrieve a set of images similar to an input image. Concurrently, the image content was described by combining class names and relationships between objects.

Details

Data Technologies and Applications, vol. 57 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 25 April 2024

H.G. Di, Pingbao Xu, Quanmei Gong, Huiji Guo and Guangbei Su

Abstract

Purpose

This study establishes a method for predicting ground vibrations caused by railway tunnels in unsaturated soils with spatial variability.

Design/methodology/approach

First, an improved 2.5D finite element method–perfectly matched layer (FEM-PML) model is proposed. The Galerkin method is used to derive the finite element expression in the ub-pl-pg format for unsaturated soil. Unlike the ub-v-w format, which has nine degrees of freedom per node, the ub-pl-pg format has only five degrees of freedom per node, which significantly enhances the calculation efficiency. The stretching function of the PML is adopted to handle the unbounded domain. Additionally, the 2.5D FEM-PML model couples the tunnel, vehicle and track structures. Next, the spatial variability of the soil parameters is simulated by random fields using the Monte Carlo method. By incorporating random fields of soil parameters into the 2.5D FEM-PML model, the effect of soil spatial variability on ground vibrations is demonstrated through a case study.
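
The 2.5D FEM-PML solver itself is far too large to sketch here, but the Monte Carlo random-field step can be illustrated. The Python snippet below generates lognormal realisations of a soil parameter (for example, a compressibility modulus) along depth with an exponential autocorrelation via Cholesky decomposition; this is a standard construction, and the mean, coefficient of variation and correlation length are assumed values rather than those of the case study.

```python
# Hedged sketch of the Monte Carlo random-field step only. Each realisation of
# the spatially variable soil parameter would then feed one FEM run.
import numpy as np

depth = np.linspace(0.0, 30.0, 61)       # m, discretised soil column
mean, cov, corr_len = 40e6, 0.3, 5.0     # assumed mean (Pa), CoV, correlation length (m)

# Exponential autocorrelation matrix and its Cholesky factor
tau = np.abs(depth[:, None] - depth[None, :])
C = np.exp(-2.0 * tau / corr_len)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(depth)))

# Lognormal transformation parameters
sigma_ln = np.sqrt(np.log(1.0 + cov**2))
mu_ln = np.log(mean) - 0.5 * sigma_ln**2

rng = np.random.default_rng(42)
n_realisations = 1000
fields = np.exp(mu_ln + sigma_ln * (L @ rng.standard_normal((len(depth), n_realisations))))

# Each column of `fields` is one spatially variable parameter profile; mapping
# it onto the FEM mesh and re-solving yields a distribution of ground vibrations.
print(fields.shape, round(fields.mean() / 1e6, 2), round(fields.std() / 1e6, 2))
```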

Findings

The spatial variability of the soil parameters primarily affected the vibration acceleration amplitude but had a minor effect on its spatial distribution and attenuation over time. In addition, ground vibration acceleration was more affected by the spatial variability of the soil bulk modulus of compressibility than by that of saturation.

Originality/value

Using the 2.5D FEM-PML model in the ub-pl-pg format of unsaturated soil enhances the computational efficiency. On this basis, with the random fields established by Monte Carlo simulation, the model can calculate the reliability of soil dynamics, which was rarely considered by previous models.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 17 March 2023

Stewart Jones

Abstract

Purpose

This study updates the literature review of Jones (1987) published in this journal. The study pays particular attention to two important themes that have shaped the field over the past 35 years: (1) the development of a range of innovative new statistical learning methods, particularly advanced machine learning methods such as stochastic gradient boosting, adaptive boosting, random forests and deep learning, and (2) the emergence of a wide variety of bankruptcy predictor variables extending beyond traditional financial ratios, including market-based variables, earnings management proxies, auditor going concern opinions (GCOs) and corporate governance attributes. Several directions for future research are discussed.
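
As a toy illustration of the model families the review covers, the Python sketch below fits gradient boosting and a random forest to a synthetic firm-failure dataset whose features stand in for a leverage ratio, profitability, a market-based excess return and a going-concern opinion flag. The data-generating process and all parameters are invented for illustration only.

```python
# Toy illustration of two of the reviewed model families on synthetic data.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 2000
X = pd.DataFrame({
    "leverage": rng.normal(0.5, 0.2, n),
    "roa": rng.normal(0.03, 0.05, n),
    "excess_return": rng.normal(0.0, 0.3, n),
    "gco": rng.integers(0, 2, n),          # auditor going-concern opinion flag
})
# Synthetic failure indicator loosely driven by the features
logit = 3 * X["leverage"] - 8 * X["roa"] - X["excess_return"] + 1.5 * X["gco"] - 2
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

for name, model in [("gradient boosting", GradientBoostingClassifier()),
                    ("random forest", RandomForestClassifier(n_estimators=200))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.3f}")
```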

Design/methodology/approach

This study provides a systematic review of the corporate failure literature over the past 35 years with a particular focus on the emergence of new statistical learning methodologies and predictor variables. This synthesis of the literature evaluates the strengths and limitations of different modelling approaches under different circumstances and provides an overall evaluation of the relative contribution of alternative predictor variables. The study aims to provide a transparent, reproducible and interpretable review of the literature. The literature review also takes a theme-centric rather than author-centric approach and focuses on structured themes that have dominated the literature since 1987.

Findings

There are several major findings of this study. First, advanced machine learning methods appear to have the most promise for future firm failure research. Not only do these methods predict significantly better than conventional models, but they also possess many appealing statistical properties. Second, there is now a much wider range of variables being used to model and predict firm failure. However, the literature needs to be interpreted with some caution given the many mixed findings. Finally, a number of unresolved methodological issues arising from the Jones (1987) study still require research attention.

Originality/value

The study explains the connections and derivations between a wide range of firm failure models, from simpler linear models to advanced machine learning methods such as gradient boosting, random forests, adaptive boosting and deep learning. The paper highlights the most promising models for future research, particularly in terms of their predictive power, underlying statistical properties and issues of practical implementation. The study also draws together an extensive literature on alternative predictor variables and provides insights into the role and behaviour of alternative predictor variables in firm failure research.

Details

Journal of Accounting Literature, vol. 45 no. 2
Type: Research Article
ISSN: 0737-4607

Book part
Publication date: 23 October 2023

Morten I. Lau, Hong Il Yoo and Hongming Zhao

Abstract

We evaluate the hypothesis of temporal stability in risk preferences using two recent data sets from longitudinal lab experiments. Both experiments included a combination of decision tasks that allows one to identify a full set of structural parameters characterizing risk preferences under Cumulative Prospect Theory (CPT), including loss aversion. We consider temporal stability in those structural parameters at both population and individual levels. The population-level stability pertains to whether the distribution of risk preferences across individuals in the subject population remains stable over time. The individual-level stability pertains to within-individual correlation in risk preferences over time. We embed the CPT structure in a random coefficient model that allows us to evaluate temporal stability at both levels in a coherent manner, without having to switch between different sets of models to draw inferences at a specific level.
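
For readers unfamiliar with the structural parameters mentioned above, the following Python sketch evaluates a simple mixed lottery under the standard Tversky and Kahneman (1992) CPT functional forms, including the loss aversion coefficient. The parameter values are conventional illustrative estimates, and the chapter's random coefficient estimation is not reproduced here.

```python
# Worked sketch of the standard Tversky-Kahneman (1992) CPT functional forms
# for a simple mixed lottery; parameter values are illustrative only.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Power value function with loss aversion coefficient lam."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

def weight(p, gamma=0.61):
    """Inverse-S probability weighting function."""
    return p**gamma / (p**gamma + (1 - p)**gamma)**(1 / gamma)

def cpt_binary(gain, loss, p_gain, gamma_gain=0.61, gamma_loss=0.69):
    """CPT evaluation of a lottery paying `gain` with prob p_gain, else `loss` (< 0)."""
    return (weight(p_gain, gamma_gain) * value(gain)
            + weight(1 - p_gain, gamma_loss) * value(loss))

# A 50/50 gamble: win 100 or lose 100. Loss aversion makes its CPT value negative.
print(cpt_binary(100, -100, 0.5))
```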

Details

Models of Risk Preferences: Descriptive and Normative Challenges
Type: Book
ISBN: 978-1-83797-269-2

Article
Publication date: 19 April 2023

Rachana Jaiswal, Shashank Gupta and Aviral Kumar Tiwari

Abstract

Purpose

Amidst the turbulent tides of geopolitical uncertainty and pandemic-induced economic disruptions, the information technology industry grapples with alarming attrition and aggravating talent gaps, spurring a surge in demand for specialized digital proficiencies. In response to this imperative, firms seek to attract and retain top-tier talent through generous compensation packages. This study introduces a holistic theoretical framework that integrates machine learning models to develop a compensation model, interrogating the multifaceted factors that shape pay determination.

Design/methodology/approach

Drawing upon a stratified sample of 2,488 observations, this study determines whether compensation can be accurately predicted via constructs derived from the integrated theoretical framework, employing various cutting-edge machine learning models. Following a series of comprehensive robustness checks, the study culminates in a random forest model exhibiting 99.6% accuracy and a mean absolute error of 0.08.
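
The snippet below is a minimal sketch, on synthetic data, of the kind of random forest compensation model described above: compensation is predicted from experience, education, skills and company-size features, and the mean absolute error and feature importances are reported. The study's actual feature set, sample and accuracy figures are not reproduced.

```python
# Minimal sketch on synthetic data; feature names and coefficients are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 2488
X = pd.DataFrame({
    "experience_years": rng.integers(0, 25, n),
    "education_level": rng.integers(1, 5, n),      # 1 = bachelor ... 4 = doctorate
    "specialised_skills": rng.integers(0, 10, n),  # count of in-demand skills
    "company_size": rng.integers(1, 4, n),
})
# Synthetic log-compensation driven mainly by experience, education and skills
y = (10 + 0.05 * X["experience_years"] + 0.15 * X["education_level"]
     + 0.08 * X["specialised_skills"] + rng.normal(0, 0.1, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE:", round(mean_absolute_error(y_te, pred), 3),
      "R2:", round(r2_score(y_te, pred), 3))
print(dict(zip(X.columns, model.feature_importances_.round(3))))
```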

Findings

The empirical findings of this study have revealed critical determinants of compensation, including but not limited to experience level, educational background, and specialized skill-set. The research also elucidates that gender does not play a role in pay disparity, while company size and type hold no consequential sway over individual compensation determination.

Practical implications

The research underscores the importance of equitable compensation to foster technological innovation and encourage the retention of top talent, emphasizing the significance of human capital. Furthermore, the model presented in this study empowers individuals to negotiate their compensation more effectively and supports enterprises in crafting targeted compensation strategies, thereby facilitating sustainable economic growth and helping to attain various Sustainable Development Goals.

Originality/value

The cardinal contribution of this research lies in the inception of an inclusive theoretical framework that persuasively explicates the intricacies of a machine learning-driven remuneration model, ennobled by the synthesis of diverse management theories to capture the complexity of compensation determination. However, the generalizability of the findings to other sectors is constrained as this study is exclusively limited to the IT sector.

Details

Management Decision, vol. 61 no. 8
Type: Research Article
ISSN: 0025-1747

Article
Publication date: 25 April 2022

Yu Zhang, Arnab Rahman and Eric Miller

Abstract

Purpose

The purpose of this paper is to model housing price temporal variations and to predict price trends within the context of land use–transportation interactions using machine learning methods based on longitudinal observation of housing transaction prices.

Design/methodology/approach

This paper examines three machine learning (ML) algorithms (linear regression, random forest and decision trees) applied to housing price trends from 2001 to 2016 in the Greater Toronto and Hamilton Area, with particular interest in the role of accessibility in modelling housing prices. It compares the performance of the ML algorithms with traditional temporal lagged regression models.
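
The comparison described above can be sketched as follows on synthetic data: the three model families (linear regression, decision tree, random forest) are each given a one-period lagged price alongside physical, socio-economic and accessibility proxies, and their cross-validated R2 scores are compared. Feature names and coefficients are placeholders, not the GTHA data.

```python
# Hedged sketch on synthetic data of the three model families compared in the paper.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
n = 3000
df = pd.DataFrame({
    "lag_price": rng.normal(500, 100, n),    # previous-period transaction price (k$)
    "floor_area": rng.normal(150, 40, n),    # physical condition proxy
    "income": rng.normal(80, 20, n),         # socio-economic proxy
    "transit_access": rng.random(n),         # accessibility proxy
})
df["price"] = (0.8 * df["lag_price"] + 0.5 * df["floor_area"]
               + 0.6 * df["income"] + 30 * df["transit_access"]
               + rng.normal(0, 20, n))

X, y = df.drop(columns="price"), df["price"]
for name, model in [("linear regression", LinearRegression()),
                    ("decision tree", DecisionTreeRegressor(max_depth=8)),
                    ("random forest", RandomForestRegressor(n_estimators=200))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R2 = {r2:.3f}")
```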

Findings

The empirical results show that the ML algorithms achieve good accuracy (R2 of 0.873 after cross-validation), and the temporal regression produces competitive results (R2 of 0.876). Temporal lag effects are found to play a key role in housing price modelling, along with physical conditions and socio-economic factors. Accessibility effects on housing prices differ by mode and activity type.

Originality/value

Housing prices have been extensively modelled through hedonic-based spatio-temporal regression and ML approaches. However, the mutually dependent relationship between transportation and land use makes price determination a complex process, and the comparison of different longitudinal analysis methods is rarely considered. The findings present the longitudinal dynamics of housing market variation to housing planners.

Details

International Journal of Housing Markets and Analysis, vol. 16 no. 4
Type: Research Article
ISSN: 1753-8270

Book part
Publication date: 28 September 2023

M Anand Shankar Raja, Keerthana Shekar, B Harshith and Purvi Rastogi

Abstract

The COVID-19 pandemic has recently had an impact on stock markets all over the globe. A thorough review of the literature, including the most cited articles and articles from well-known databases, revealed that earlier research in the field had not specifically addressed how the BRIC stock markets responded to the COVID-19 pandemic. The data regarding COVID-19 were collected from the World Health Organization (WHO) website, and the stock market data were collected from Yahoo Finance and the respective country's stock exchange. A random forest regression algorithm takes the closing price of the respective stock indices as the target variable and COVID-19 variables as input variables. Using this algorithm, a model is fit to the data and visualised using line plots. This study's findings highlight a relationship between the COVID-19 variables and the stock market indices. In addition, the stock markets of the BRIC countries showed high correlations, especially with the Shanghai Composite Stock Index, with correlation values of 0.7 and above. Brazil took the worst hit over the studied period, declining approximately 45.99%, followed by India at 37.76%. Finally, the model fit on the data set, which employed the random forest machine learning method, produced R2 values of 0.972, 0.005, 0.997, and 0.983 and mean percentage errors of 1.4, 0.8, 0.9, and 0.8 for Brazil, Russia, India, and China (BRIC), respectively. Even now, two years after the coronavirus pandemic started, the Brazilian stock index has not yet returned to its pre-pandemic level.
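
A minimal sketch of the modelling step described above, using synthetic series in place of the WHO and Yahoo Finance data: a random forest regression maps daily COVID-19 variables to an index closing price, the fit is summarised by R2 and mean absolute percentage error, and actual versus fitted values are drawn as a line plot.

```python
# Sketch only: synthetic series stand in for the WHO and Yahoo Finance data.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error, r2_score

rng = np.random.default_rng(11)
days = 500
covid = pd.DataFrame({
    "new_cases": rng.gamma(2.0, 5000.0, days),
    "new_deaths": rng.gamma(2.0, 100.0, days),
})
# Synthetic closing price, loosely negatively related to the COVID-19 variables
close = (60000 - 0.5 * covid["new_cases"] / 100 - 2 * covid["new_deaths"]
         + rng.normal(0, 300, days))

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(covid, close)
fitted = model.predict(covid)
print("R2:", round(r2_score(close, fitted), 3),
      "MAPE %:", round(100 * mean_absolute_percentage_error(close, fitted), 2))

# Line plot of actual vs fitted closing prices
plt.plot(close.values, label="actual close")
plt.plot(fitted, label="random forest fit")
plt.xlabel("trading day"); plt.ylabel("index close"); plt.legend()
plt.show()
```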

Details

Digital Transformation, Strategic Resilience, Cyber Security and Risk Management
Type: Book
ISBN: 978-1-83797-009-4

Open Access
Article
Publication date: 11 July 2023

Issam Tlemsani, Robin Matthews and Mohamed Ashmel Mohamed Hashim

Abstract

Purpose

This empirical research examined the factors and conditions that contribute to the success of international strategic learning alliances. The study aimed to provide organisations with evidence-based insights and recommendations that can help them to create more effective and sustainable partnerships and to leverage collaborative learning to drive innovation and growth. The examination is performed using game theory as a mathematical framework to analyse the interaction of the decision-makers, where one alliance's decision is contingent on the decision made by others in the partnership. There are 20 possible games out of 120 outcomes that can be grouped into four different types; each type has been divided into several categories.

Design/methodology/approach

The research methodology included secondary and primary data collection using empirical data, the Delphi technique for obtaining qualitative data, a research questionnaire for collecting quantitative data and computer simulation (1,000 cases, network resources and cooperative game theory). The key variables collected and measured when analysing a strategic alliance were identified, grouped and mapped into the developed model.

Findings

Most respondents ranked reputation and mutual benefits in Type 1 games relatively high, averaging 4.1 and 3.85 out of a possible 5. That is significantly higher than net transfer benefits, ranked at 0.61. The a priori model demonstrates that Type 1 games are the most common in cooperative games and in the game distribution, accounting for 40% of all four game types; this is also confirmed by the random landscape model, at approximately 50%. The empirical results on the payoff characteristics of Type 1 games show that joint and reputation benefits are critical for the success of cooperation.

Practical implications

Research on cross-border learning alliances has several implications. Managerially, it can help managers to understand the challenges and benefits of engaging in these activities and to develop strategies that improve the effectiveness of their cross-border learning alliances. Practically, the game theory and cross-border models developed here can be applied to effective decision-making in a variety of complex contexts. Learning alliances also have important policy implications, particularly in trade, investment and innovation: policymakers must consider the potential benefits and risks of these collaborations and develop policies that encourage and support them while mitigating potential negative impacts.

Originality/value

International learning alliances have become a popular strategy for firms seeking to gain access to new knowledge, capabilities and markets in foreign countries. The originality of this research lies in its ability to contribute to the understanding of the dynamics and outcomes of these complex relationships in a novel and meaningful way.

Details

Journal of Work-Applied Management, vol. 15 no. 2
Type: Research Article
ISSN: 2205-2062

Article
Publication date: 8 August 2022

Ean Zou Teoh, Wei-Chuen Yau, Thian Song Ong and Tee Connie

Abstract

Purpose

This study aims to develop a regression-based machine learning model to predict housing prices and to determine and interpret the factors that contribute to them, using different publicly available data sets. The significant determinants that affect housing prices are first identified using multinomial logistic regression (MLR), based on their level of relative importance. A comprehensive study is then conducted using SHapley Additive exPlanations (SHAP) analysis to examine the features that cause the major changes in housing prices.

Design/methodology/approach

Predictive analytics is an effective way to deal with uncertainties in process modelling and to improve decision-making for housing price prediction. The focus of this paper is twofold. First, the authors apply regression analysis to investigate how well the housing independent variables contribute to housing price prediction. Two data sets are used for this study, namely the Ames Housing dataset and the Melbourne Housing dataset; for both, random forest regression performs best, achieving an average R2 of 86% for the Ames dataset and 85% for the Melbourne dataset. Second, multinomial logistic regression is adopted to investigate and identify the factor determinants of housing sales price. For the Ames dataset, the authors find that the top three most significant variables determining the housing price are the general living area, basement size and age of remodelling. For the Melbourne dataset, properties having more rooms/bathrooms, larger land size and a closer distance to the central business district (CBD) are higher priced. This is followed by a comprehensive analysis of how these determinants contribute to the predictability of the selected regression model, using explainable SHAP values. These prominent factors can be used to determine the optimal price range of a property, which is useful for decision-making by both buyers and sellers.
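
The explainability step can be sketched as follows: a random forest regressor is fitted to synthetic housing features and explained with the shap package's TreeExplainer, whose mean absolute SHAP values give a global importance ranking. The feature names are placeholders rather than the Ames or Melbourne columns, and the coefficients in the synthetic data-generating process are arbitrary.

```python
# Hedged sketch of the SHAP step on synthetic housing data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n = 1500
X = pd.DataFrame({
    "living_area": rng.normal(150, 40, n),
    "basement_size": rng.normal(60, 25, n),
    "years_since_remodel": rng.integers(0, 60, n),
    "rooms": rng.integers(2, 8, n),
    "distance_to_cbd": rng.gamma(2.0, 5.0, n),
})
y = (2000 * X["living_area"] + 800 * X["basement_size"]
     - 1500 * X["years_since_remodel"] + 10000 * X["rooms"]
     - 3000 * X["distance_to_cbd"] + rng.normal(0, 20000, n))

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature ~ global importance ranking
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
# shap.summary_plot(shap_values, X)  # beeswarm plot of per-sample contributions
```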

Findings

By using the combination of MLR and SHAP analysis, it is noticeable that general living area, basement size and age of remodelling are the top three most important variables in determining the house’s price in the Ames dataset, while properties with more rooms/bathrooms, larger land area and closer proximity to the CBD or to the South of Melbourne are more expensive in the Melbourne dataset. These important factors can be used to estimate the best price range for a housing property for better decision-making.

Research limitations/implications

A limitation of this study is that the distribution of housing prices is highly skewed, although it is normal for property prices to be clustered at the lower end with only a few highly priced houses. As mentioned before, MLR can effectively help in evaluating the likelihood ratio of each variable with respect to these categories. However, housing price is originally continuous and must be converted to a categorical variable, and the most effective method for categorizing the data remains an open question.

Originality/value

The key contribution of this paper is the use of an explainable machine learning approach to identify the prominent factors of housing price determination, which could be used to determine the optimal price range of a property and thus support decision-making by both buyers and sellers.

Details

International Journal of Housing Markets and Analysis, vol. 16 no. 5
Type: Research Article
ISSN: 1753-8270
