Search results
1 – 10 of 186
Kala Nisha Gopinathan, Punniyamoorthy Murugesan and Joshua Jebaraj Jeyaraj
Abstract
Purpose
This study aims to provide the best estimate of a stock's next day's closing price for a given day with the help of the hidden Markov model–Gaussian mixture model (HMM-GMM). The results were compared with Hassan and Nath’s (2005) study using HMM and artificial neural network (ANN).
Design/methodology/approach
The study adopted an initialization approach wherein the hidden states of the HMM are modelled as a GMM using two different approaches. Training of the HMM-GMM model is carried out using two methods. Prediction was performed by taking, as the next day's closing price, the closing price of the past day whose log-likelihood is closest (within the tolerance range) to that of the present day. Mean absolute percentage error (MAPE) has been used to compare the proposed GMM-HMM model against the models of the research study (Hassan and Nath, 2005).
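The prediction rule described above (use the past day whose log-likelihood is closest, within a tolerance, to that of the present day) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the mixture parameters `weights`, `means` and `stds` are hypothetical stand-ins for a fitted HMM-GMM, and each day is reduced to a scalar closing price.

```python
import numpy as np

def gmm_loglik(x, weights, means, stds):
    """Log-likelihood of a scalar observation x under a 1-D Gaussian mixture."""
    comps = weights * np.exp(-0.5 * ((x - means) / stds) ** 2) / (stds * np.sqrt(2 * np.pi))
    return np.log(comps.sum())

def predict_next_close(closes, weights, means, stds, tol=0.5):
    """Forecast the next close: find the past day whose log-likelihood is
    closest to today's (within `tol`) and return that day's next close."""
    today_ll = gmm_loglik(closes[-1], weights, means, stds)
    best, best_gap = None, np.inf
    for t in range(len(closes) - 1):          # candidate past days (exclude today)
        gap = abs(gmm_loglik(closes[t], weights, means, stds) - today_ll)
        if gap < best_gap and gap <= tol:
            best, best_gap = closes[t + 1], gap
    return best if best is not None else closes[-1]   # fall back to last close
```

With a single-component mixture centred at 10, a present close of 12 matches the past close of 8 exactly (symmetric likelihood), so the forecast is that day's successor.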
Findings
Comparing this study with Hassan and Nath (2005) reveals that the proposed model outperformed theirs in 66 of the 72 test cases. The results affirm that the model can be used for more accurate time series prediction. Further, the proposed HMM-GMM model outperformed the ANN model from Hassan's study in 24 of the 36 test cases.
Originality/value
The study introduces a novel initialization and two training/prediction approaches for the HMM-GMM model, resulting in a GMM-HMM-based closing price estimator for stock price prediction. The proposed method of forecasting stock prices using GMM-HMM is explainable and has a solid statistical foundation.
Qian Tang, Yuzhuo Qiu and Lan Xu
Abstract
Purpose
This paper investigates the demand for the cold chain logistics of agricultural products through demand forecasting and provides targeted suggestions and countermeasures.
Design/methodology/approach
A Markov-optimised mean GM (1, 1) model is proposed to forecast the demand for the cold chain logistics of agricultural products. The mean GM (1, 1) model was used to forecast the demand trend, and the Markov chain model was used for optimisation. Considering Guangxi province as an example, the feasibility and effectiveness of the proposed method were verified, and relevant suggestions are made.
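The mean GM (1, 1) step of the approach can be sketched as below. This is a generic grey-model sketch under the standard formulation, not the paper's exact variant; the Markov chain correction of residual states described above is omitted, and the function name is illustrative.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Mean GM(1,1) grey model: fit the series x0 and forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                        # 1-AGO accumulated series
    z = 0.5 * (x1[1:] + x1[:-1])              # mean (background) series
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]   # grey parameters

    def x1_hat(k):                            # fitted accumulated value at step k
        return (x0[0] - b / a) * np.exp(-a * k) + b / a

    n = len(x0)
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
```

On a near-exponential demand series such as 100, 110, 121, 133.1, the model extrapolates a next value close to the geometric continuation (about 146.3).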
Findings
Compared with other models, the Markov-optimised mean GM (1, 1) model forecasts the demand for the cold chain logistics of agricultural products more effectively: it is closer to the actual values, with better accuracy and smaller error. This shows that demand forecasting can provide specific suggestions and theoretical support for the development of cold chain logistics.
Originality/value
This study evaluated the development trend of the cold chain logistics of agricultural products from the research horizon of demand forecasting for cold chain logistics. A Markov-optimised mean GM (1, 1) model is proposed to overcome the poor prediction of series with considerable fluctuation in the modelling process and to improve prediction accuracy. Through empirical analysis, the study identifies a path to promote the development of cold chain logistics and gives relevant suggestions based on the obtained results.
Abstract
A zero-day vulnerability is a complimentary ticket for attackers to gain entry into the network. Thus, there is a necessity to devise appropriate threat detection systems and establish an innovative and safe solution that prevents unauthorised intrusions for defending various components of cybersecurity. We present a survey of recent Intrusion Detection Systems (IDS) in detecting zero-day vulnerabilities based on the following dimensions: types of cyber-attacks, datasets used and kinds of network detection systems.
Purpose: The study focuses on presenting an exhaustive review on the effectiveness of the recent IDS with respect to zero-day vulnerabilities.
Methodology: Systematic exploration was done at IEEE, Elsevier, Springer, RAID, ESORICS, Google Scholar and other relevant platforms for studies published in English between 2015 and 2021, using keywords and combinations of relevant terms.
Findings: It is possible to train IDS for zero-day attacks. The existing IDS have strengths that make them capable of effective detection against zero-day attacks. However, they display certain limitations that reduce their credibility. Novel strategies like deep learning, machine learning, fuzzing technique, runtime verification technique, and Hidden Markov Models can be used to design IDS to detect malicious traffic.
Implication: This paper explored and highlighted the advantages and limitations of existing IDS, enabling the selection of the best possible IDS to protect the system. Moreover, the comparison between signature-based and anomaly-based IDS exemplifies that one viable approach to accurately detect zero-day vulnerabilities would be the integration of a hybrid mechanism.
Sou-Sen Leu, Yen-Lin Fu and Pei-Lin Wu
Abstract
Purpose
This paper aims to develop a dynamic civil facility degradation prediction model to forecast the reliability performance tendency and remaining useful life under imperfect maintenance based on the inspection records and the maintenance actions.
Design/methodology/approach
A real-time hidden Markov model (HMM) is proposed in this paper to predict the reliability performance tendency and remaining useful life under imperfect maintenance based on rare failure events. The model assumes a Poisson arrival pattern for the occurrence of facility failure events. The HMM is further adopted to establish the transition probabilities among stages. Finally, simulation inference is conducted using a particle filter (PF) to estimate the most probable model parameters. Water seals at the spillway hydraulic gate of a reservoir in Taiwan are used to examine the appropriateness of the approach.
Findings
The results of defect probabilities tendency from the real-time HMM model are highly consistent with the real defect trend pattern of civil facilities. The proposed facility degradation prediction model can provide the maintenance division with early warning of potential failure to establish a proper proactive maintenance plan, even under the condition of rare defects.
Originality/value
This model is a new method of civil facility degradation prediction under imperfect maintenance, even with rare failure events. It overcomes several limitations of classical failure pattern prediction approaches and can reliably simulate the occurrence of rare defects under imperfect maintenance and the effect of inspection reliability caused by human error. Based on the degradation trend pattern prediction, effective maintenance management plans can be practically implemented to minimize the frequency of the occurrence and the consequence of civil facility failures.
Jitendra Gaur, Kumkum Bharti and Rahul Bajaj
Abstract
Purpose
Allocation of the marketing budget has become increasingly challenging due to the diverse channel exposure to customers. This study aims to enhance global marketing knowledge by introducing an ensemble attribution model to optimize marketing budget allocation for online marketing channels. As empirical research, this study demonstrates the supremacy of the ensemble model over standalone models.
Design/methodology/approach
The transactional data set for car insurance from an Indian insurance aggregator is used in this empirical study. The data set contains information from more than three million platform visitors. A robust ensemble model is created by combining results from two probabilistic models, namely, the Markov chain model and the Shapley value. These results are compared and validated with heuristic models. Also, the performances of online marketing channels and attribution models are evaluated based on the devices used (i.e. desktop vs mobile).
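A common way to score channels with a first-order Markov chain model is the removal effect: the drop in conversion probability when a channel is deleted from all journeys. The sketch below is a minimal illustration of that general idea, not the authors' implementation; `paths` is a hypothetical list of (channel sequence, converted?) journeys.

```python
import numpy as np

def conversion_prob(paths, removed=None):
    """P(conversion) under a first-order Markov model of the journeys in
    `paths`; with `removed`, journeys are cut to 'null' at that channel."""
    channels = {s for p, _ in paths for s in p}
    states = ["start"] + sorted(channels) + ["conv", "null"]
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for p, converted in paths:
        seq = ["start"] + list(p) + ["conv" if converted else "null"]
        if removed in p:                       # simulate removing the channel
            seq = seq[:seq.index(removed)] + ["null"]
        for a, b in zip(seq, seq[1:]):
            counts[idx[a], idx[b]] += 1
    row = counts.sum(axis=1, keepdims=True)
    T = np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)
    T[idx["conv"], idx["conv"]] = T[idx["null"], idx["null"]] = 1.0  # absorbing
    v = np.zeros(len(states)); v[idx["start"]] = 1.0
    for _ in range(100):                       # iterate until mass is absorbed
        v = v @ T
    return v[idx["conv"]]

def removal_effect(paths, channel):
    """Share of conversion probability lost when `channel` is removed."""
    return 1.0 - conversion_prob(paths, removed=channel) / conversion_prob(paths)
```

For three toy journeys A→B (converted), A (not converted) and B (converted), removing B destroys all conversions (effect 1.0), while removing A halves them (effect 0.5).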
Findings
Channel importance charts for desktop and mobile devices are analyzed to understand the top contributing online marketing channels. Customer relationship management emailers and Google cost-per-click (paid advertising) are identified as the top two marketing channels for desktop and mobile devices. The research reveals that the ensemble model's accuracy is better than that of the standalone models, that is, the Markov chain model and the Shapley value.
Originality/value
To the best of the authors’ knowledge, the current research is the first of its kind to introduce ensemble modeling for solving attribution problems in online marketing. A comparison with heuristic models using different devices (desktop and mobile) offers insights into the results with heuristic models.
Yang Li, Jinke Gao, Jianing Zhou, Tong Zhu and Zhilei Jiang
Abstract
Purpose
Cutting force prediction is important for manufacturing management. Thus, the purpose of this paper is to obtain the cutting force of the machining process with high efficiency and low cost. A method based on an improved auto-regressive moving average (ARMA) model is proposed for cutting force prediction in the milling process.
Design/methodology/approach
First, the initial cutting force data are classified and normalized. Second, the cutting force sequences are compressed, with singular and invalid values removed. Finally, the improved ARMA model is used to fit and extrapolate the cutting force, taking the time-domain characteristics into account.
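As a rough sketch of the fit-and-extrapolate step, the block below fits only the autoregressive part of an ARMA model by ordinary least squares and extrapolates it forward. The paper's improvements and the moving-average terms are omitted, and the function name is illustrative.

```python
import numpy as np

def ar_fit_forecast(x, p=2, steps=3):
    """Least-squares AR(p) fit (the AR part of an ARMA model) followed by
    multi-step extrapolation of the series."""
    x = np.asarray(x, dtype=float)
    # Design matrix: row t holds [x[t-1], x[t-2], ..., x[t-p]]
    X = np.column_stack([x[p - i - 1:len(x) - i - 1] for i in range(p)])
    coef = np.linalg.lstsq(X, x[p:], rcond=None)[0]
    hist, out = list(x), []
    for _ in range(steps):
        nxt = float(np.dot(coef, hist[-1:-p - 1:-1]))  # newest lag first
        hist.append(nxt)
        out.append(nxt)
    return out
```

A linear ramp satisfies x[t] = 2x[t-1] - x[t-2] exactly, so an AR(2) fit recovers those coefficients and extrapolates the ramp.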
Findings
A series of cutting force experiments with a spindle speed of 595 r/min was carried out in the research. The mean absolute percentage error of the cutting force extrapolation based on the improved model is smaller, at approximately 5.80%. The root mean square error is only 72.49, which is smaller than that of other traditional methods, such as the hidden Markov model. The extrapolation results of the proposed model showed good consistency and accuracy in terms of peaks, valleys and volatility compared with the experimental results.
Originality/value
The proposed method based on the improved ARMA model can be used for cutting force prediction conveniently, and the predictions can be used to improve quality in the milling process.
Abstract
Purpose
This paper undertakes an extensive and systematic review of the literature on earnings management (EM) over the past three decades (1992–2022). Furthermore, the study identifies emerging research themes and proposes future avenues for further investigation in the realm of EM.
Design/methodology/approach
For this study, a comprehensive collection of 2,775 articles on EM published between 1992 and 2022 was extracted from the Scopus database. The author employed various tools, including Microsoft Excel, RStudio, Gephi and the visualization of similarities (VOS) viewer, to conduct bibliometric, content, thematic and cluster analyses. Additionally, the study examined the literature across three distinct periods: prior to the enactment of the Sarbanes-Oxley Act (1992–2001), subsequent to the implementation of the Sarbanes-Oxley Act (2002–2012), and after the adoption of International Financial Reporting Standards (2013–2022) to draw more inferences and insights on EM research.
Findings
The study identifies three major themes, namely the operationalization of EM constructs, the trade-off between EM tools (accrual EM, real EM and classification shifting) and the role of corporate governance in mitigating EM in emerging markets. Existing literature in these areas presents mixed and inconclusive findings, suggesting the need for further theoretical development. Further, the study's findings reveal a shift in research focus over time: initially, understanding manipulation techniques, then evaluating regulatory measures, and more recently, investigating the impact of global accounting standards. Several emerging research themes (technology advancements, cross-cultural and cross-national studies, sustainability, behavioral aspects and non-financial indicators of EM) have been identified. The study's subsequent analysis reveals an evolving EM landscape, with researchers from disciplines like data science, computer science and engineering applying their analytical expertise to detect EM anomalies. Furthermore, this study offers significant insights into sophisticated EM techniques such as neural networks, machine learning techniques and hidden Markov models, among others, as well as relevant theories including dynamic capabilities theory, learning curve theory, psychological contract theory and normative institutional theory. These techniques and theories demonstrate the need for further advancement in the field of EM. Lastly, the findings shed light on prominent EM journals, authors and countries.
Originality/value
This study conducts quantitative bibliometric and thematic analyses of the existing literature on EM while identifying areas that require further development to advance EM research.
Yiqi Li, Nathan Bartley, Jingyi Sun and Dmitri Williams
Abstract
Purpose
Team social capital (TSC) has been attracting increasing research attention aiming to explore team effectiveness through within- and cross-team resource conduits. This study bridges two disconnected theories – TSC and evolutionary theory – to examine gaming clans and analyzes mechanisms of the clans' TSC building from an evolutionary perspective.
Design/methodology/approach
This research draws on longitudinal data from a sample of gaming teams (N = 1,267), built from anonymized player data for the game World of Tanks spanning 32 months. The authors explored teams' evolutionary patterns using hidden Markov models and applied longitudinal multilevel modeling to test hypotheses.
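Hidden Markov models of the kind used here classify each observation period into a latent state; given fitted parameters, the most likely state sequence can be decoded with the Viterbi algorithm. The sketch below is a generic decoder, not the authors' pipeline, and the two-state parameters in the usage example are purely illustrative.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for discrete observations `obs`, given
    start probabilities pi, transition matrix A and emission matrix B."""
    n, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])       # log delta at t = 0
    back = np.zeros((T, n), dtype=int)             # backpointers
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)         # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                  # trace the best path back
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

With sticky transitions and reliable emissions, a run of observations 0,0,1,1,1 decodes to one state switch, mirroring how a team's trajectory would be segmented into evolutionary phases.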
Findings
The results showed that teams of different sizes and levels of evolutionary fitness vary in team closure and bridging social capital. The authors also found that larger teams are more effective than smaller ones. The positive association between team-bridging social capital and effectiveness is more substantial for smaller teams.
Originality/value
This research advances the theoretical development of TSC by including the constructs of teams' evolutionary status when analyzing strategic social capital building. Adding to existing literature studying the outcome of TSC, this research also found a moderating effect of team size between TSC and effectiveness. Finally, this study also contributes to a longitudinal view of TSC and found significant evolutionary patterns of teams' membership, TSC, and effectiveness.
Bin Wang, Huifeng Li, Le Tong, Qian Zhang, Sulei Zhu and Tao Yang
Abstract
Purpose
This paper aims to address the following issues: (1) most existing methods are based on recurrent networks, which are time-consuming to train on long sequences because they do not allow full parallelism; (2) personalized preferences are generally not adequately considered; (3) existing methods have rarely systematically studied how to efficiently utilize various auxiliary information (e.g. user ID and time stamp) in trajectory data and the spatiotemporal relations among nonconsecutive locations.
Design/methodology/approach
The authors propose a novel self-attention network–based model named SanMove to predict the next location via capturing the long- and short-term mobility patterns of users. Specifically, SanMove uses a self-attention module to capture each user's long-term preference, which can represent her personalized location preference. Meanwhile, the authors use a spatial-temporal guided noninvasive self-attention (STNOVA) module to exploit auxiliary information in the trajectory data to learn the user's short-term preference.
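The self-attention building block underlying modules like the ones described here can be sketched as a single scaled dot-product attention layer. This is a generic layer, not the paper's architecture; the weight matrices `Wq`, `Wk` and `Wv` are hypothetical learned parameters.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence embedding X
    (one row per trajectory step)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])          # pairwise similarities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V                              # weighted mix of values
```

Because every step attends to every other step in one matrix product, the whole trajectory is processed in parallel, which is the efficiency advantage over recurrent models noted above.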
Findings
The authors evaluate SanMove on two real-world datasets. The experimental results demonstrate that SanMove is not only faster than state-of-the-art recurrent neural network (RNN)-based prediction models but also outperforms the baselines for next location prediction.
Originality/value
The authors propose a self-attention-based sequential model named SanMove to predict the user's trajectory, which comprises long-term and short-term preference learning modules. SanMove allows full parallel processing of trajectories to improve processing efficiency. The authors propose an STNOVA module to capture the sequential transitions of current trajectories. Moreover, the self-attention module is used to process historical trajectory sequences in order to capture the personalized location preference of each user. The authors conduct extensive experiments on two check-in datasets. The experimental results demonstrate that the model has a fast training speed and excellent performance compared with existing RNN-based methods for next location prediction.
Kinjal Bhargavkumar Mistree, Devendra Thakor and Brijesh Bhatt
Abstract
Purpose
According to the Indian Sign Language Research and Training Centre (ISLRTC), India has approximately 300 certified human interpreters to help people with hearing loss. This paper aims to address the issue of Indian Sign Language (ISL) sentence recognition and translation into semantically equivalent English text in a signer-independent mode.
Design/methodology/approach
This study presents an approach that translates ISL sentences into English text using the MobileNetV2 model and Neural Machine Translation (NMT). The authors have created an ISL corpus from the Brown corpus using ISL grammar rules to perform machine translation. The authors' approach converts ISL videos of the newly created dataset into ISL gloss sequences using the MobileNetV2 model, and the recognized ISL gloss sequence is then fed to a machine translation module that generates an English sentence for each ISL sentence.
Findings
As per the experimental results, the pretrained MobileNetV2 model proved best suited for the recognition of ISL sentences, and NMT provided better results than Statistical Machine Translation (SMT) for converting ISL text into English text. The automatic and human evaluation of the proposed approach yielded accuracies of 83.3% and 86.1%, respectively.
Research limitations/implications
The neural machine translation system occasionally produced translations with repetitions of already-translated words, strange translations as the total number of words per sentence increased, and one or more unexpected terms that had no relation to the source text. The most common type of error is the mistranslation of places, numbers and dates. Although this has little effect on the overall structure of the translated sentence, it indicates that the embeddings learned for these few words could be improved.
Originality/value
Sign language recognition and translation is a crucial step toward improving communication between the deaf and the rest of society. Because of the shortage of human interpreters, an alternative approach is desired to help people achieve smooth communication with the deaf. To motivate research in this field, the authors generated an ISL corpus of 13,720 sentences and a video dataset of 47,880 ISL videos. As there is no public dataset available for ISL videos incorporating signs released by ISLRTC, the authors created a new video dataset and ISL corpus.