Search results

1 – 10 of 99
Article
Publication date: 29 November 2018

Manex Bule Yonis, Tassew Woldehanna and Wolday Amha

The effectiveness of any government interventions to support small firms is always a concern in achieving improvements in enterprise performances. The purpose of this paper is to…

Abstract

Purpose

The effectiveness of any government interventions to support small firms is always a concern in achieving improvements in enterprise performances. The purpose of this paper is to evaluate thoroughly the impact of micro and small enterprises’ (MSEs’) support programs on core intermediate and final outcomes of interest.

Design/methodology/approach

The impact evaluation employs a non-parametric matching procedure, the propensity score matching (PSM) method, for the outcome analysis. Aiming at a doubly robust evaluation process, the study applies parametric analyses alongside non-parametric permutation-based tests to investigate the causal effects of the public intervention.
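The matching step behind a PSM evaluation can be sketched minimally as follows. This is an illustration only: it assumes propensity scores have already been estimated (e.g. by logistic regression), and the units, scores and outcomes are invented rather than drawn from the study's MSE data.

```python
# Hypothetical nearest-neighbour propensity score matching (PSM) sketch.
# Each unit is a (propensity_score, outcome) pair; scores are assumed
# to be pre-estimated. All numbers are illustrative.

def att_by_psm(treated, controls):
    """Match each treated unit to the control with the closest propensity
    score (with replacement) and return the average treatment effect on
    the treated (ATT)."""
    effects = []
    for score, outcome in treated:
        # nearest-neighbour match on the propensity score
        match = min(controls, key=lambda c: abs(c[0] - score))
        effects.append(outcome - match[1])
    return sum(effects) / len(effects)

# (propensity_score, outcome) pairs for beneficiaries and non-beneficiaries
treated = [(0.8, 12.0), (0.6, 10.0), (0.72, 11.0)]
controls = [(0.79, 9.0), (0.61, 9.5), (0.40, 8.0)]

print(att_by_psm(treated, controls))  # mean outcome gap over matched pairs
```

A doubly robust analysis would additionally model the outcome on the matched sample, so that either the matching model or the outcome model being correct is enough for consistency.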

Findings

The study reveals that the public intervention encouraged MSEs to develop innovative business practices and improve their human capital development process. Moreover, the intervention had a positive effect on expanding employment opportunities in urban areas. Conversely, the study shows that support beneficiaries are not at an advantage in investment intensity. The lower level of investment intensity in fixed capital resulted in inefficiency among the recipients. Moreover, the intervention had no effect on changing recipients' net assets over time.

Practical implications

This study implies that the support programs need to be dynamic and should also target the creation of innovative, high-growth MSEs.

Originality/value

This paper is fairly original and provides policy makers and MSE promoters/facilitators with evidence-based information on the effectiveness of the support services, based on firm-level analysis.

Details

International Journal of Emerging Markets, vol. 13 no. 5
Type: Research Article
ISSN: 1746-8809

Keywords

Article
Publication date: 5 September 2017

Yenny Villuendas-Rey, Carmen Rey-Benguría, Miltiadis Lytras, Cornelio Yáñez-Márquez and Oscar Camacho-Nieto

The purpose of this paper is to improve the classification of families having children with affective-behavioral maladies, and thus giving the families a suitable orientation.

Abstract

Purpose

The purpose of this paper is to improve the classification of families having children with affective-behavioral maladies, and thus giving the families a suitable orientation.

Design/methodology/approach

The proposed methodology includes three steps. Step 1 addresses initial data preprocessing, by noise filtering or data condensation. Step 2 performs a multiple feature sets selection, by using genetic algorithms and rough sets. Finally, Step 3 merges the candidate solutions and obtains the selected features and instances.
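The genetic search over feature subsets in Step 2 can be sketched roughly as below. The fitness function here is a toy stand-in that rewards an assumed "informative" feature set and penalises subset size; the paper instead scores subsets with rough-set measures and classification accuracy. All parameters are illustrative.

```python
import random

# Toy genetic algorithm for feature subset selection. Individuals are
# bitmasks over the features; selection is elitist, with one-point
# crossover and bit-flip mutation. The fitness is a stand-in, not the
# paper's rough-set criterion.

random.seed(42)
N_FEATURES = 8
INFORMATIVE = {0, 2, 5}          # assumed ground truth for the toy fitness

def fitness(mask):
    selected = {i for i, bit in enumerate(mask) if bit}
    hits = len(selected & INFORMATIVE)
    return hits - 0.1 * len(selected)   # accuracy proxy minus a size penalty

def ga_select(pop_size=20, generations=40, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = survivors + children
    best = max(pop, key=fitness)
    return {i for i, bit in enumerate(best) if bit}

print(ga_select())  # indices of the selected features
```

Step 3's merging of candidate solutions could then intersect or vote over subsets produced by several such runs.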

Findings

The new proposal shows very good results on the family data (100 percent correct classification). It also obtained accurate results over a variety of repository data sets. The proposed approach is suitable for dealing with non-symmetric similarity functions, as well as with high-dimensional mixed and incomplete data.

Originality/value

Previous work in the state of the art only considers instance selection to preprocess the data from schools for children with affective-behavioral maladies. This paper explores a new combined instance and feature selection technique to select relevant instances and features, leading to better classification and to a simplification of the data.

Details

Program, vol. 51 no. 3
Type: Research Article
ISSN: 0033-0337

Keywords

Article
Publication date: 30 September 2020

Li Xiaoling

In order to improve the weak recognition accuracy and robustness of the classification algorithm for brain-computer interface (BCI), this paper proposed a novel classification…

Abstract

Purpose

To address the weak recognition accuracy and robustness of classification algorithms for brain-computer interfaces (BCI), this paper proposed a novel classification algorithm for motor imagery based on temporal and spatial characteristics extracted using a convolutional neural network (TS-CNN) model.

Design/methodology/approach

According to the proposed algorithm, a five-layer neural network model was constructed to classify the electroencephalogram (EEG) signals. Firstly, the author designed a motor imagery-based BCI experiment, and four subjects were recruited to participate in the experiment for the recording of EEG signals. Then, after the EEG signals were preprocessed, the temporal and spatial characteristics of EEG signals were extracted by longitudinal convolutional kernel and transverse convolutional kernels, respectively. Finally, the classification of motor imagery was completed by using two fully connected layers.
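The two kernel orientations described above can be illustrated on a tiny EEG matrix (channels × time samples) in plain Python, without a CNN framework: a longitudinal kernel slides along the time axis of one channel (temporal features), while a transverse kernel spans all channels at one time point (spatial features). All numbers are made up.

```python
# Illustrative sketch of temporal vs spatial convolution over EEG data.

eeg = [  # 3 channels x 5 time samples (invented values)
    [1.0, 2.0, 3.0, 4.0, 5.0],
    [0.5, 1.0, 1.5, 2.0, 2.5],
    [2.0, 1.0, 2.0, 1.0, 2.0],
]

def temporal_conv(signal, kernel):
    """1-D convolution (valid mode) along the time axis of one channel."""
    k = len(kernel)
    return [sum(signal[t + j] * kernel[j] for j in range(k))
            for t in range(len(signal) - k + 1)]

def spatial_conv(frame, kernel):
    """Weighted combination across all channels at one time point."""
    return sum(c * w for c, w in zip(frame, kernel))

t_kernel = [0.5, 0.5]            # smooths adjacent time samples
s_kernel = [1.0, -1.0, 0.0]      # contrasts channel 0 against channel 1

temporal_maps = [temporal_conv(ch, t_kernel) for ch in eeg]
spatial_map = [spatial_conv([ch[t] for ch in eeg], s_kernel)
               for t in range(len(eeg[0]))]

print(temporal_maps[0])  # [1.5, 2.5, 3.5, 4.5]
print(spatial_map)       # [0.5, 1.0, 1.5, 2.0, 2.5]
```

In the actual model these feature maps would feed the two fully connected layers that produce the motor imagery classification.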

Findings

To validate the classification performance and efficiency of the proposed algorithm, comparative experiments with state-of-the-art algorithms were conducted. Experimental results show that the proposed TS-CNN model achieves the best performance and efficiency in the classification of motor imagery, as reflected in the accuracy, precision, recall, ROC curve and F-score indexes.

Originality/value

The proposed TS-CNN model accurately recognized the EEG signals for different tasks of motor imagery, and provided theoretical basis and technical support for the application of BCI control system in the field of rehabilitation exoskeleton.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Keywords

Open Access
Article
Publication date: 16 August 2021

Bo Qiu and Wei Fan

Metropolitan areas suffer from frequent road traffic congestion not only during peak hours but also during off-peak periods. Different machine learning methods have been used in…

Abstract

Purpose

Metropolitan areas suffer from frequent road traffic congestion, not only during peak hours but also during off-peak periods. Different machine learning methods have been used in travel time prediction; however, such methods often face the problem of overfitting. Tree-based ensembles have been applied in various prediction fields and usually produce high prediction accuracy by aggregating and averaging individual decision trees. These approaches not only yield better prediction results but also offer a good bias-variance trade-off that helps avoid overfitting. However, the application of tree-based ensemble algorithms in traffic prediction is still limited. This study aims to improve the accuracy and interpretability of travel time models by using random forest (RF) to analyze and model travel time on freeways.

Design/methodology/approach

As traffic conditions often change greatly, prediction results are often unsatisfactory. To improve the accuracy of short-term travel time prediction in the freeway network, a practically feasible and computationally efficient RF prediction method for real-world freeways using probe traffic data was developed. In addition, the variables' relative importance was ranked, which provides a platform for better understanding how different contributing factors affect travel time on freeways.
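The random-forest idea can be sketched in a deliberately tiny form: bootstrap the training rows, fit a one-split regression stump per tree on a randomly chosen feature, and average the trees' predictions. The rows below are invented stand-ins for probe data; a real model would use a full tree implementation from a library.

```python
import random

# Toy random-forest regressor built from one-split stumps.

random.seed(0)

def fit_stump(rows):
    """Pick a random feature, split at its median, store the leaf means."""
    feat = random.randrange(len(rows[0][0]))
    vals = sorted(x[feat] for x, _ in rows)
    thresh = vals[len(vals) // 2]
    left = [y for x, y in rows if x[feat] <= thresh] or [0.0]
    right = [y for x, y in rows if x[feat] > thresh] or [0.0]
    return feat, thresh, sum(left) / len(left), sum(right) / len(right)

def fit_forest(rows, n_trees=25):
    forest = []
    for _ in range(n_trees):
        boot = [random.choice(rows) for _ in rows]   # bootstrap sample
        forest.append(fit_stump(boot))
    return forest

def predict(forest, x):
    preds = [(lm if x[f] <= t else rm) for f, t, lm, rm in forest]
    return sum(preds) / len(preds)            # average over all trees

# features: [travel time 15 min before, hour of day]; target: travel time
rows = [([10.0, 7], 11.0), ([12.0, 8], 13.0), ([30.0, 17], 32.0),
        ([28.0, 18], 30.0), ([11.0, 9], 12.0), ([29.0, 16], 31.0)]
forest = fit_forest(rows)
print(predict(forest, [27.0, 17]))  # prediction for an evening observation
```

Counting how often each feature is chosen, weighted by its error reduction, is the basis of the variable-importance ranking the study reports.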

Findings

The parameters of the RF model were estimated using the training sample set. After the parameter tuning process was completed, the proposed RF model was developed. The features' relative importance showed that the travel time 15 min earlier and the time of day (TOD) contribute the most to the predicted travel time. The model's performance was also evaluated against the extreme gradient boosting method, and the results indicated that the RF consistently produces more accurate travel time predictions.

Originality/value

This research developed an RF method to predict the freeway travel time by using the probe vehicle-based traffic data and weather data. Detailed information about the input variables and data pre-processing were presented. To measure the effectiveness of proposed travel time prediction algorithms, the mean absolute percentage errors were computed for different observation segments combined with different prediction horizons ranging from 15 to 60 min.

Details

Smart and Resilient Transportation, vol. 3 no. 2
Type: Research Article
ISSN: 2632-0487

Keywords

Article
Publication date: 4 April 2016

He-Boong Kwon, James Jungbae Roh and Nicholas Miceli

The purpose of this paper is to develop an artificial neural network (ANN) based prediction model via integration with data envelopment analysis (DEA) to provide the means of…

Abstract

Purpose

The purpose of this paper is to develop an artificial neural network (ANN) based prediction model via integration with data envelopment analysis (DEA) to provide the means of predicting incremental performance goals. The findings confirm the usefulness of the herein developed prediction approach, based on the results of analyses of time series data from the smartphone industry.

Design/methodology/approach

A two-stage hybrid model was developed, incorporating sequential measurement and prediction capability. In the first stage, a Charnes, Cooper, and Rhodes DEA model is the preprocessor, generating efficiency scores (ES) of decision-making units (DMUs). In the second or follow-on stage, the ANN prediction module utilizes knowledge variables and ES to predict the change in performance needed for a desired level of improvement.
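The two-stage idea can be sketched for the special case of one input and one output, where the CCR efficiency score reduces to each unit's output/input ratio normalised by the best ratio (the general multi-input case requires solving a linear programme). Stage 2 here is a closed-form stand-in for the ANN module: it inverts the ES formula to get the output needed for a target score. All DMU figures are invented.

```python
# Sketch: DEA-style efficiency scores plus a performance-target step.

dmus = {  # DMU name: (input, output), e.g. (operating cost, revenue)
    "A": (10.0, 20.0),
    "B": (8.0, 24.0),
    "C": (12.0, 18.0),
}

# Stage 1: efficiency scores (single-input, single-output CCR special case)
ratios = {name: out / inp for name, (inp, out) in dmus.items()}
best = max(ratios.values())
scores = {name: r / best for name, r in ratios.items()}

# Stage 2: output a DMU needs to reach a desired ES, holding input fixed
# (closed form here; the paper trains an ANN for this mapping)
def output_for_target(name, target_es):
    inp, _ = dmus[name]
    return target_es * best * inp

print(scores)                       # the frontier unit gets ES = 1.0
print(output_for_target("C", 0.9))  # output C needs for an ES of 0.9
```

The ANN becomes worthwhile precisely when this mapping is no longer closed-form, i.e. with multiple inputs, outputs and knowledge variables.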

Findings

This combined approach effectively captured the information contained in the industry’s turbulent characteristics, and subsequently demonstrated an adaptive prediction capability. The back propagating neural network successfully predicted the incremental performance targets of DMUs, which translated the desired improvement levels into actionable performance goals, e.g., revenue and operating income.

Originality/value

This paper presents an incremental prediction approach that supports better practice benchmarking. This study differentiates itself from previous research by introducing an adaptive prediction method which generates relevant quantity outputs based upon desired improvement levels. The proposed modeling approach integrates performance measurement with a prediction framework and advances benchmarking practices to enable better performance prediction.

Details

Benchmarking: An International Journal, vol. 23 no. 3
Type: Research Article
ISSN: 1463-5771

Keywords

Article
Publication date: 3 February 2020

Shahidha Banu S. and Maheswari N.

Background modelling has played an imperative role in the moving object detection as the progress of foreground extraction during video analysis and surveillance in many real-time…

Abstract

Purpose

Background modelling has played an imperative role in moving object detection, as the basis of foreground extraction in video analysis and surveillance for many real-time applications. It is usually done by background subtraction, a method based on a mathematical model with a fixed static background: the background image is fixed, with the foreground object moving over it. This image is taken as the background model and compared against every new frame of the input video sequence. In this paper, the authors present a renewed background modelling method for foreground segmentation. The principal objective of the work is to perform foreground object detection only in a premeditated region of interest (ROI). The ROI is calculated using the proposed algorithm, reducing and raising by half (RRH), in which the coordinates of a circle with the frame width as its diameter are traversed to find the pixel difference. A change in pixel intensity is considered to be the foreground object, and its position is determined from the pixel location. Most techniques apply their updates to the pixels of the complete frame, which may increase the false rate; the proposed system addresses this flaw by restricting processing to the ROI (the only region where background subtraction is performed), and thus extracts a correct foreground by exactly categorizing the pixels as foreground and extracting the precise foreground object. The broad experimental results and evaluation parameters of the proposed approach were compared against the most recent background subtraction approaches. Moreover, the efficiency of the authors' method is analyzed in different situations to show that it is suitable for real-time videos as well as for videos from the 2014 change detection challenge data set.

Design/methodology/approach

In this paper, the authors present a fresh background modelling method for foreground segmentation. The main objective of the work is to perform foreground object detection only on the premeditated ROI, which is calculated using the proposed RRH algorithm. Most techniques apply their updates to the pixels of the complete frame, which may increase the false rate; the most challenging case is a slow-moving object that is absorbed into the background too quickly for the foreground region to be detected. The proposed system addresses this flaw by restricting processing to the ROI (the only region where background subtraction is performed), and thus extracts a correct foreground by exactly categorizing the pixels as foreground and extracting the precise foreground object.
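The core mechanism, background subtraction restricted to a region of interest, can be sketched as below. The RRH circular-traversal details are simplified to a fixed circular ROI here, and the frames are tiny invented grey-level grids.

```python
# Rough sketch: flag pixels inside a circular ROI whose intensity differs
# from a static background model by more than a threshold.

def in_roi(x, y, cx, cy, radius):
    return (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2

def foreground_pixels(background, frame, cx, cy, radius, thresh=10):
    """Return (x, y) positions inside the circular ROI whose intensity
    differs from the background model by more than `thresh`."""
    fg = []
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if (in_roi(x, y, cx, cy, radius)
                    and abs(value - background[y][x]) > thresh):
                fg.append((x, y))
    return fg

background = [[50] * 5 for _ in range(5)]   # static background model
frame = [row[:] for row in background]
frame[2][2] = 200                            # moving object inside the ROI
frame[0][4] = 200                            # change outside the ROI: ignored

print(foreground_pixels(background, frame, cx=2, cy=2, radius=1))  # [(2, 2)]
```

Because pixels outside the ROI are never compared, changes there (the second altered pixel above) cannot raise the false detection rate.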


Originality/value

The algorithm used in this work was proposed by the authors and is used for the experimental evaluations.

Article
Publication date: 9 April 2024

Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen

With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although…

Abstract

Purpose

With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although some existing models handle such problems well, shortcomings remain in some aspects. The purpose of this paper is to improve the accuracy of credit assessment models.

Design/methodology/approach

In this paper, three different stages are used to improve the classification performance of LSTM so that financial institutions can more accurately identify borrowers at risk of default. The first stage uses the K-Means-SMOTE algorithm to eliminate the class imbalance. In the second stage, ResNet is used for feature extraction, and a two-layer LSTM is then used for learning, strengthening the neural network's ability to mine and utilize deep information. Finally, model performance is improved by using the IDWPSO algorithm for optimization when tuning the neural network.
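The imbalance-correction idea in the first stage can be illustrated with a plain SMOTE-style sketch: new minority-class samples are synthesised by interpolating between a minority sample and one of its nearest minority neighbours. The K-Means grouping step and the later ResNet-LSTM and IDWPSO stages are omitted, and the data points are invented.

```python
import random

# SMOTE-style oversampling sketch for a 2-D minority class.

random.seed(1)

def smote(minority, n_new, k=2):
    synthetic = []
    for _ in range(n_new):
        a = random.choice(minority)
        # k nearest minority neighbours of a (excluding a itself)
        neighbours = sorted(
            (p for p in minority if p != a),
            key=lambda p: sum((u - v) ** 2 for u, v in zip(a, p)))[:k]
        b = random.choice(neighbours)
        gap = random.random()                 # interpolation factor in [0, 1)
        synthetic.append(tuple(u + gap * (v - u) for u, v in zip(a, b)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3), (1.1, 1.1)]
new_points = smote(minority, n_new=4)
print(new_points)  # four synthetic points inside the minority region
```

K-Means-SMOTE refines this by clustering first and oversampling within sparse minority clusters, which avoids generating points in regions dominated by the majority class.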

Findings

On two unbalanced datasets (category ratios of 700:1 and 3:1 respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. It was demonstrated that the multi-stage improved model showed a more significant advantage in evaluating the imbalanced credit dataset.

Originality/value

In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Keywords

Abstract

Details

Rutgers Studies in Accounting Analytics: Audit Analytics in the Financial Industry
Type: Book
ISBN: 978-1-78743-086-0

Article
Publication date: 17 October 2019

Huaqing Hu, Ketai He, Tianlin Zhong and Yili Hong

This paper aims to propose a method to diagnose fused deposition modeling (FDM) printing faults caused by the variation of temperature field and establish a fault knowledge base…

Abstract

Purpose

This paper aims to propose a method to diagnose fused deposition modeling (FDM) printing faults caused by the variation of temperature field and establish a fault knowledge base, which helps to study the generation mechanism of FDM printing faults.

Design/methodology/approach

Based on the Spearman rank correlation analysis, four relative temperature parameters are selected as the input data to train the SVM-based multi-classes classification model, which further serves as a method to diagnose the FDM printing faults.
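The screening step, Spearman rank correlation between a candidate temperature parameter and the outcome, can be computed with the standard library alone. The series below are invented stand-ins for a temperature parameter and a print-quality label, not the paper's measurements.

```python
# Stdlib Spearman rank correlation: Pearson correlation of the ranks.

def ranks(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1                # mean of the tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(xs, ys):
    rx, ry = ranks(xs), ranks(ys)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

temperature = [210, 215, 220, 225, 230]
quality = [1, 2, 3, 4, 5]
print(spearman(temperature, quality))  # 1.0 for a perfectly monotone relation
```

Parameters whose correlation magnitude passes a chosen cutoff would then form the four-dimensional input to the SVM classifier.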

Findings

It is found that FDM parts may be in several printing states as the temperature field on the surface of the parts varies. Theoretical dividing lines between the different FDM printing states are put forward by traversing all four-dimensional input parameter combinations. The relationship between the relative mean temperature and the theoretical dividing lines is found to be close and is analyzed qualitatively.

Originality/value

The multi-class classification model, embedded in FDM printers as an adviser, can be used to prevent waste products and relieve much of the labor involved in monitoring.

Article
Publication date: 27 August 2014

Yuangen Lai and Jianxun Zeng

The purpose of this paper is to discuss issues related to customer churn behavior in digital libraries (DLs) and demonstrate the successful application of Survival Analysis for…


Abstract

Purpose

The purpose of this paper is to discuss issues related to customer churn behavior in digital libraries (DLs) and demonstrate the successful application of Survival Analysis for understanding customer churn status and relationship duration distribution between customers and libraries.

Design/methodology/approach

The study applies non-parametric methods of Survival Analysis to analyze the churn behaviors of 8,054 customers from a well-known Chinese digital library, and a clustering method to segment customers according to their behavioral features.
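The standard non-parametric survival estimate behind such an analysis is the Kaplan-Meier estimator: each record is (months until churn or censoring, churned?), and the estimator multiplies conditional survival probabilities at each observed churn time. The records below are invented, not the study's 8,054-customer data.

```python
# Minimal Kaplan-Meier sketch for customer churn.

def kaplan_meier(records):
    """Return [(time, survival probability)] at each observed churn time."""
    s, curve = 1.0, []
    for t in sorted({u for u, churned in records if churned}):
        at_risk = sum(1 for u, _ in records if u >= t)
        churned = sum(1 for u, c in records if u == t and c)
        s *= 1 - churned / at_risk
        curve.append((t, s))
    return curve

# (months observed, True if the customer churned / False if censored)
records = [(1, True), (2, True), (2, False), (3, True), (5, False), (5, False)]
print(kaplan_meier(records))  # survival probability drops at each churn time
```

Censored customers (still active when observation ends) stay in the at-risk count up to their last observed month, which is what distinguishes survival analysis from a naive churn-rate calculation.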

Findings

The customer churn rate of the given library is very high, as is the churn hazard in the first three months after a customer registers on the library's website. There are clear differences in both customer survival time and churn hazard among customer groups. It is necessary to strengthen customer churn analysis and customer relationship management (CRM) for DLs.

Research limitations/implications

The studied samples are mainly based on customers from one digital library, and some hypotheses have not been strictly proven due to the absence of relevant empirical research.

Practical implications

This study provides a reasonable basis for decision making about CRM in DLs.

Originality/value

Most previous research on information behavior concentrates on information-seeking behavior in DLs and seldom discusses customer switching behavior. The paper discusses issues related to customer churn analysis and illustrates the application of Survival Analysis to understand customer churn status and relationship duration distribution in DLs.

Details

Program, vol. 48 no. 4
Type: Research Article
ISSN: 0033-0337

Keywords
