Search results
1 – 10 of 355 results
Abstract
Purpose
Feature extraction from 3D datasets is a current problem. Machine learning is an important tool for the classification of complex 3D datasets. Machine learning classification techniques are widely used in various fields, such as text classification, pattern recognition and medical disease analysis. The aim of this study is to apply the most popular classification and regression methods to geodesic-based datasets and determine which of them performs best.
Design/methodology/approach
The feature vector is determined by the unit normal vector and the unit principal vector at each point of the 3D surface along with the point coordinates themselves. Moreover, different examples are compared according to the classification methods in terms of accuracy and the regression algorithms in terms of R-squared value.
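As a sketch of how such a per-point feature vector could be assembled (the 9-dimensional layout, function names and sample values below are illustrative assumptions, not taken from the paper), each surface point contributes its coordinates plus the normalized normal and principal directions:

```python
import math

def unit(v):
    """Normalize a 3D vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def feature_vector(point, normal, principal):
    """Concatenate point coordinates with the unit normal and the
    unit principal direction into a 9-dimensional feature vector."""
    return tuple(point) + unit(normal) + unit(principal)

# Example: a point on the unit sphere, where the outward normal is
# parallel to the position vector (hypothetical sample values).
p = (0.0, 0.0, 1.0)
fv = feature_vector(p, normal=(0.0, 0.0, 2.0), principal=(3.0, 0.0, 0.0))
print(fv)  # (0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0, 0.0)
```

Stacking one such row per surface point yields the dataset that the classification and regression algorithms consume.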
Findings
Several surface examples are analyzed for the feature vector using classification (31 methods) and regression (23 methods) machine learning algorithms. In addition, two ensemble methods, XGBoost and LightGBM, are used for classification and regression. The scores for each surface example are also compared.
Originality/value
To the best of the author’s knowledge, this is the first study to analyze datasets based on geodesics using machine learning algorithms for classification and regression.
Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan
Abstract
Purpose
Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.
Design/methodology/approach
A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
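The DWT feature-extraction step can be illustrated with a minimal Haar-wavelet sketch (the paper does not state which wavelet family was used, so the Haar filter, helper names and four-level depth here are assumptions); it collects the per-level minimum and maximum detail coefficients of the kind the SHAP analysis later ranks:

```python
def haar_dwt(signal):
    """One level of the orthonormal Haar discrete wavelet transform:
    returns (approximation, detail) coefficient lists."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def dwt_minmax_features(signal, levels=4):
    """Decompose a pressure signal and collect the (min, max) of the
    detail coefficients at each level as classifier features."""
    features = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        features.append((min(detail), max(detail)))
    return features
```

A call such as `dwt_minmax_features(pressure_signal)` would then feed a dimensionality-reducing classifier such as KFDA.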
Findings
The results highlighted that the DWT + KFDA method exhibited the highest testing and training accuracy, at 95.2% and 88.8%, respectively. The results also include a virtual flow regime map to facilitate the visualization of features in two dimensions. Finally, SHAP analysis showed that the minimum and maximum values extracted at the fourth and second signal decomposition levels of the DWT are the best flow-distinguishing features.
Practical implications
This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.
Originality/value
This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.
Ruchi Kejriwal, Monika Garg and Gaurav Sarin
Abstract
Purpose
The stock market has always been lucrative for various investors but, because of its speculative nature, it is difficult to predict price movements. Investors have been using both fundamental and technical analysis to predict prices. Fundamental analysis helps to study a company’s structured data, while technical analysis helps to study price trends. The increasing and easy availability of unstructured data has made it important to study market sentiment, which has a major impact on prices in the short run. Hence, the purpose is to understand market sentiment in a timely and effective manner.
Design/methodology/approach
The research includes text mining and then creating various models for classification. The accuracy of these models is checked using confusion matrix.
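The accuracy check from a confusion matrix reduces to trace-over-total; a minimal illustration (the 3×3 counts below are hypothetical, not the study's results):

```python
def accuracy(confusion):
    """Accuracy from a square confusion matrix: correct predictions on
    the diagonal divided by all predictions (rows = true classes,
    columns = predicted classes)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

# Hypothetical 3-class matrix (positive / negative / neutral).
cm = [[50, 5, 5],
      [4, 40, 6],
      [8, 2, 30]]
print(accuracy(cm))  # 0.8
```

The same matrix also yields per-class precision and recall by normalizing over columns and rows, respectively.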
Findings
Out of the six machine learning techniques used to create the classification model, the kernel support vector machine gave the highest accuracy, of 68%. This model can now be used to analyse tweets, news and various other unstructured data to predict price movements.
Originality/value
This study will help investors classify a news item or a tweet into “positive”, “negative” or “neutral” quickly and determine stock price trends.
Abstract
Purpose
This study aims to explore the mediating effect of digital options on the relationship between emerging information technology investments (ITIs) and firm performance (FP). In particular, it analyses the performance impacts of investments in five emerging technologies of IT or non-IT firms.
Design/methodology/approach
Secondary data are collected from Chinese A-share listed companies from 2010 to 2018. The authors propose an econometric model focusing on the impact of ITIs on a firm’s market value and profit. A propensity score matching model is applied to control endogeneity.
Findings
The ITIs’ effect on FP is found to be completely mediated by digital options: the reach of digital options plays a more positive role in the relationship between ITIs and Tobin’s Q, whereas the richness of digital options plays a stronger role in the relationship between ITIs and return on net assets (ROE). The group study shows that process technologies such as cloud computing and the Internet of Things have a more profound impact on Tobin’s Q, whereas the knowledge technologies represented by artificial intelligence, blockchain and big data strongly affect ROE. In addition, the positive relationship between ITIs and FP holds regardless of whether the firm is an IT or non-IT firm.
Research limitations/implications
First, the data are based on 219 publicly announced emerging ITIs in China and thus may not be generalizable to other cultural/national contexts. Second, there is a lack of a large sample data set of emerging ITI information in China, and the study period is constrained by the relatively recent rise of emerging technologies.
Practical implications
This study provides firm decision-makers with practical implications. The results imply that the effect of ITIs on FP depends on digital options, so both IT firms (e.g., Big Tech giants) and non-IT firms (e.g., incumbents) should discover how to balance firm value and profit in their management of emerging technology investment projects with digital options thinking.
Originality/value
To the best of the authors’ knowledge, this is the first empirical study to investigate the relationship between ITIs and FP from the perspective of digital options, exploring five emerging technologies and considering firm life, size, and state ownership in a sample of Chinese listed firms.
Abstract
Purpose
The purpose of this paper is to develop a new and robust method to forecast the long-term extreme dynamic responses of wave energy converters (WECs).
Design/methodology/approach
A new adaptive binned kernel density estimation (KDE) methodology is first proposed in this paper.
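The adaptive binned variant itself is the paper's contribution; as context, the classical fixed-bandwidth Gaussian KDE on which such methods build can be sketched as follows (the function name, sample values and bandwidth are illustrative assumptions):

```python
import math

def gaussian_kde(samples, x, bandwidth):
    """Classical fixed-bandwidth Gaussian kernel density estimate at x.
    An adaptive binned KDE instead pre-bins the samples and varies the
    bandwidth per bin; this sketch shows only the baseline estimator."""
    n = len(samples)
    norm = 1.0 / (n * bandwidth * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / bandwidth) ** 2)
                      for s in samples)

# Hypothetical significant-wave-height samples (metres).
waves = [1.2, 1.5, 1.7, 2.1, 2.4, 3.0]
density = gaussian_kde(waves, x=2.0, bandwidth=0.4)
```

Fixed-bandwidth estimators tend to oversmooth sparse tail regions, which is the shortcoming the adaptive binned scheme targets.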
Findings
By examining the calculation results, the authors have found that in the tail region the proposed new adaptive binned KDE distribution curve becomes very smooth and fits quite well with the histogram of the measured ocean wave dataset at National Data Buoy Center (NDBC) station 46059. Careful study of the calculation results also reveals that the 50-year extreme power-take-off heaving force forecasted from the environmental contour derived using the new method is 3,572,600 N, which is much larger than the value of 2,709,100 N forecasted via the Rosenblatt-inverse second-order reliability method (ISORM) contour method.
Research limitations/implications
The proposed method overcomes the disadvantages of all the existing nonparametric and parametric methods for predicting the tail region probability density values of the sea state parameters.
Originality/value
It is concluded that the proposed new adaptive binned KDE method is robust and can forecast well the 50-year extreme dynamic responses for WECs.
Djordje Cica, Branislav Sredanovic, Sasa Tesic and Davorin Kramar
Abstract
Sustainable manufacturing is one of the most important and most challenging issues in the present industrial scenario. With the intention of diminishing the negative effects associated with cutting fluids, the machining industries are continuously developing technologies and systems for cooling/lubricating the cutting zone while maintaining machining efficiency. In the present study, three regression-based machine learning techniques, namely polynomial regression (PR), support vector regression (SVR) and Gaussian process regression (GPR), were developed to predict machining force, cutting power and cutting pressure in the turning of AISI 1045. In the development of the predictive models, the machining parameters of cutting speed, depth of cut and feed rate were considered as control factors. Since cooling/lubricating techniques significantly affect machining performance, prediction models of the quality characteristics were developed under minimum quantity lubrication (MQL) and high-pressure coolant (HPC) cutting conditions. The prediction accuracy of the developed models was evaluated by statistical error analysis methods. The results of the regression-based machine learning techniques were also compared with those of probably the most frequently used machine learning method, artificial neural networks (ANN). Finally, a metaheuristic approach based on a neural network algorithm was utilized to perform an efficient multi-objective optimization of process parameters for both cutting environments.
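The simplest member of the PR family is an ordinary least squares line in one predictor; a sketch with made-up feed-rate and force values (not the study's data, which uses three predictors and higher polynomial degrees):

```python
def linear_fit(x, y):
    """Ordinary least squares fit y ≈ a + b*x: the degree-1,
    single-predictor case of polynomial regression (PR)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical (feed rate [mm/rev], machining force [N]) samples.
feed = [0.1, 0.2, 0.3, 0.4]
force = [210.0, 420.0, 630.0, 840.0]
a, b = linear_fit(feed, force)  # a ≈ 0.0, b ≈ 2100.0 for this linear data
```

Higher-degree PR, SVR and GPR generalize this by enriching the basis or replacing it with kernels, at the cost of more hyperparameters to tune.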
Feng Yao, Qinling Lu, Yiguo Sun and Junsen Zhang
Abstract
The authors propose to estimate a varying coefficient panel data model with different smoothing variables and fixed effects using a two-step approach. The pilot step estimates the varying coefficients by a series method. The authors then use the pilot estimates to perform a one-step backfitting through local linear kernel smoothing, which is shown to be oracle efficient in the sense of being asymptotically equivalent to the estimate obtained knowing the other components of the varying coefficients. In both steps, the authors remove the fixed effects through properly constructed weights. The authors obtain the asymptotic properties of both the pilot and efficient estimators. Monte Carlo simulations show that the proposed estimator performs well. The authors illustrate the method's applicability by estimating a varying coefficient production frontier using panel data, without assuming distributions of the efficiency and error terms.
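The local linear kernel smoothing used in the backfitting step can be sketched in its simplest univariate form (Gaussian kernel; the function and bandwidth below are generic illustrations, not the authors' estimator, which additionally handles the panel structure and fixed-effect weights):

```python
import math

def local_linear(x0, xs, ys, h):
    """Local linear kernel estimate of E[y | x = x0] with a Gaussian
    kernel of bandwidth h: weighted least squares of y on (1, x - x0),
    returning the fitted intercept."""
    w = [math.exp(-0.5 * ((xi - x0) / h) ** 2) for xi in xs]
    s0 = sum(w)
    s1 = sum(wi * (xi - x0) for wi, xi in zip(w, xs))
    s2 = sum(wi * (xi - x0) ** 2 for wi, xi in zip(w, xs))
    t0 = sum(wi * yi for wi, yi in zip(w, ys))
    t1 = sum(wi * (xi - x0) * yi for wi, xi, yi in zip(w, xs, ys))
    det = s0 * s2 - s1 * s1
    return (s2 * t0 - s1 * t1) / det
```

A useful sanity check of the local linear form is that it reproduces exactly linear data exactly, unlike the local constant (Nadaraya–Watson) estimator.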
Pouya Bolourchi and Mohammadreza Gholami
Abstract
Purpose
The purpose of this paper is to achieve high accuracy in forecasting generation reliability by accurately evaluating the reliability of power systems. This study uses the RTS-79 reliability test system to measure the method’s effectiveness, using mean absolute percentage error as the performance metric. Accurate reliability predictions can inform critical decisions related to system design, expansion and maintenance, making this study relevant to power system planning and management.
Design/methodology/approach
This paper proposes a novel approach that uses a radial basis kernel function-based support vector regression method to accurately evaluate the reliability of power systems. The approach selects relevant system features and computes loss of load expectation (LOLE) and expected energy not supplied (EENS) using the analytical unit additional algorithm. The proposed method is evaluated under two scenarios, with changes applied to the load demand side or both the generation system and load profile.
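The analytical targets can be illustrated with a toy loss-of-load computation (this brute-force state enumeration stands in for the unit addition algorithm, which builds the capacity outage table incrementally; the two-unit system and loads are hypothetical):

```python
from itertools import product

def lole(units, loads):
    """Loss of load expectation: expected number of load periods in which
    available generation falls short of demand. Each unit is a
    (capacity, forced_outage_rate) pair; all up/down states are enumerated."""
    total = 0.0
    for load in loads:
        for states in product([0, 1], repeat=len(units)):
            prob, cap = 1.0, 0.0
            for up, (c, forced_out) in zip(states, units):
                prob *= (1.0 - forced_out) if up else forced_out
                cap += c if up else 0.0
            if cap < load:
                total += prob
    return total

# Hypothetical system: two 50 MW units, 10% forced outage rate each.
units = [(50.0, 0.1), (50.0, 0.1)]
print(lole(units, loads=[60.0, 40.0]))  # ≈ 0.2 loss-of-load periods
```

In the paper's setting, values like these (and the corresponding EENS) become the regression targets that the RBF-kernel SVR learns to predict from system features.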
Findings
The proposed method predicts LOLE and EENS with high accuracy, especially in the first scenario. The results demonstrate the method’s effectiveness in forecasting generation reliability. Accurate reliability predictions can inform critical decisions related to system design, expansion and maintenance. Therefore, the findings of this study have significant implications for power system planning and management.
Originality/value
What sets this approach apart is the extraction of several features from both the generation and load sides of the power system, representing a unique contribution to the field.
Ellen A. Donnelly, Madeline Stenger, Daniel J. O'Connell, Adam Gavnik, Jullianne Regalado and Laura Bayona-Roman
Abstract
Purpose
This study explores the determinants of police officer support for pre-arrest/booking deflection programs that divert people presenting with substance use and/or mental health disorder symptoms out of the criminal justice system and connect them to supportive services.
Design/methodology/approach
This study analyzes responses from 254 surveys fielded to police officers in Delaware. Questionnaires asked about views on leadership, approaches toward crime, training, occupational experience and officers’ personal characteristics. The study applies a new machine learning method called kernel-based regularized least squares (KRLS), which accounts for non-linearities and interactions among independent variables. Estimates from the KRLS model are compared with those from an ordinary least squares (OLS) regression model.
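A bare-bones sketch of the KRLS fit (Gaussian kernel ridge solution; the kernel width, regularization value and helper names are illustrative assumptions, and a real analysis would standardize inputs and tune both hyperparameters):

```python
import math

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel between two scalar observations."""
    return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def krls_fit(xs, ys, lam=0.1):
    """KRLS coefficients c solving (K + lam*I) c = y, with
    K[i][j] = k(x_i, x_j)."""
    K = [[rbf(a, b) + (lam if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    return solve(K, ys)

def krls_predict(xs, coefs, x):
    """Prediction as a kernel-weighted sum over training points."""
    return sum(c * rbf(xi, x) for c, xi in zip(coefs, xs))
```

After `coefs = krls_fit(xs, ys)`, calls to `krls_predict(xs, coefs, x_new)` trace out the fitted surface whose pointwise slopes KRLS interprets as heterogeneous marginal effects.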
Findings
Support for diversion is positively associated with leadership endorsing diversion and thinking of new ways to solve problems. Tough-on-crime attitudes diminish programmatic support. Tenure becomes less predictive of police attitudes in the KRLS model, suggesting interactions with other factors. The KRLS model explains a larger proportion of the variance in officer attitudes than the traditional OLS model.
Originality/value
The study demonstrates the usefulness of the KRLS method for practitioners and scholars seeking to illuminate patterns in police attitudes. It further underscores the importance of agency leadership in legitimizing deflection as a pathway to addressing behavioral health challenges in communities.
Abstract
Purpose
This study aimed to explore the spatial accessibility dynamics of urban parks and their driving forces from 1901 to 2010 in terms of the dynamic relationships between spatial morphology and road networks, taking Nanjing City as an example.
Design/methodology/approach
This study mapped and examined the spatiotemporal distribution of urban parks and road networks in Nanjing at four time points, the 1910s, 1930s, 1960s and 2010s, using spatial design network analysis, kernel density estimation and buffer analysis. Two approaches, spatial overlaying and data analysis, were adopted to investigate the accessibility dynamics. The spatial overlaying compared the parks’ layout with the road networks’ core, subcore and noncore accessible areas; the data analysis clarified the average city-wide and local-scale values of the road networks within the park buffer zones.
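The buffer step reduces, in its simplest planar form, to a distance filter; a sketch with invented coordinates and a 500 m radius (real analyses operate on projected GIS geometries rather than centroid points):

```python
import math

def within_buffer(park, points, radius):
    """Planar buffer analysis: keep road-network points whose Euclidean
    distance from the park centroid is within the buffer radius."""
    px, py = park
    return [p for p in points
            if math.hypot(p[0] - px, p[1] - py) <= radius]

# Hypothetical coordinates (metres in a local projected system).
park = (0.0, 0.0)
roads = [(100.0, 0.0), (300.0, 400.0), (900.0, 0.0)]
inside = within_buffer(park, roads, radius=500.0)  # keeps the first two points
```

Averaging network metrics over the points returned for each park's buffer gives the per-park accessibility figures that are then compared across the four periods.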
Findings
The analysis of the changing relationships between urban parks and the spatial morphology of road networks showed that the accessibility of urban parks has generally improved. This was influenced by six main factors: planning implementation, political policies, natural resources, historical heritage and cultural and economic levels.
Social implications
The results provide a reference for achieving spatial equity, improving urban park accessibility and supporting sustainable urban park planning.
Originality/value
An increasing number of studies have explored the spatial accessibility of urban parks through the relationships between their spatial distribution and road networks. However, few studies have investigated the dynamic changes in accessibility over time. Discussing parks’ accessibility over relatively long time scales has practical, innovative and theoretical value because it can reveal correlational laws and internal influences that are not apparent in the short term and provide references and implications for parks’ spatial equity.