Search results
1–10 of 35

Annarita Colamatteo, Marcello Sansone and Giuliano Iorio
Abstract
Purpose
This paper aims to examine the impact of the COVID-19 pandemic on private label food products, specifically assessing the stability of, and changes in, the factors influencing purchasing decisions by comparing pre-pandemic and post-pandemic datasets.
Design/methodology/approach
The study employs the Extra Tree Classifier method, a robust quantitative approach, to analyse data collected from questionnaires distributed among two distinct consumer samples. This methodological choice is explicitly adopted to provide a clear classification of factors influencing consumer preferences for private label products, surpassing conventional qualitative methods.
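The Extra Trees approach described above can be sketched with scikit-learn's `ExtraTreesClassifier`; the factor names and synthetic survey responses below are illustrative assumptions, not the authors' questionnaire data.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

rng = np.random.default_rng(0)

# Hypothetical survey responses: each row is a respondent, each column a
# purchase factor rated 1-5 (factor names are illustrative, not the study's).
factors = ["quality", "price", "health", "taste", "packaging", "communication"]
X = rng.integers(1, 6, size=(500, len(factors)))
# Synthetic label: 1 = prefers private label, driven here by quality and health.
y = (X[:, 0] + X[:, 2] + rng.normal(0, 1, 500) > 6).astype(int)

# Extra Trees randomizes split thresholds as well as feature choices, and its
# impurity-based importances give a simple ranking of influencing factors.
clf = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(factors, clf.feature_importances_), key=lambda t: -t[1])
for name, importance in ranking:
    print(f"{name}: {importance:.3f}")
```

The importance ranking is the classifier's view of which factors most influence the (synthetic) preference label, mirroring how the study ranks purchase factors.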
Findings
Despite the profound disruptions caused by the COVID-19 pandemic, this research underscores the persistent hierarchy of factors shaping consumer choices in the private label food market, showing an overall stability in consumer behaviour. At the same time, the analysis of individual variables highlights an increase in those related to product quality, health, taste and communication.
Research limitations/implications
The use of online surveys for data collection may introduce a self-selection bias, and the non-probabilistic sampling method could limit the generalizability of the results.
Practical implications
Practical implications suggest that managers in the private label industry should prioritize enhancing quality control, ensuring effective communication, and dynamically adapting strategies to meet evolving consumer preferences, with a particular emphasis on quality and health attributes.
Originality/value
This study contributes to the existing body of literature by providing insights into the profound transformations induced by the COVID-19 pandemic on consumer behaviour, specifically in relation to their preferences for private label food products.
Jahanzaib Alvi and Imtiaz Arif
Abstract
Purpose
The crux of this paper is to unveil efficient features and practical tools that can predict credit default.
Design/methodology/approach
Annual data of non-financial listed companies from 2000 to 2020 were taken, along with 71 financial ratios. The dataset was bifurcated into three panels under three default assumptions. Logistic regression (LR) and k-nearest neighbor (KNN) binary classification algorithms were used to estimate credit default in this research.
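A minimal sketch of the LR-versus-KNN comparison is shown below; the synthetic features stand in for the 71 financial ratios, which are not reproduced here, and the 90/10 class weighting is an assumed default rate.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the panel of financial ratios (the study used 71
# ratios for non-financial listed companies over 2000-2020).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Financial ratios live on very different scales, so standardize before the
# distance-based KNN (and for well-conditioned LR optimization).
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

lr_acc = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
knn_acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
print(f"LR accuracy:  {lr_acc:.3f}")
print(f"KNN accuracy: {knn_acc:.3f}")
```

On real, imbalanced default data, accuracy alone can be misleading; precision and recall on the default class are worth reporting alongside it.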
Findings
The study’s findings revealed that the features used in Model 3 (Case 3) were comparatively the most efficient. The results also showed that KNN achieved higher accuracy than LR, demonstrating KNN’s superiority over LR for this task.
Research limitations/implications
Using only two classifiers limits the comprehensiveness of the comparison, and the research was based solely on financial data, leaving sizeable room for including non-financial parameters in default estimation. Both limitations suggest directions for future research in this domain.
Originality/value
This study introduces efficient features and tools for credit default prediction using financial data, demonstrating KNN’s superior accuracy over LR and suggesting future research directions.
Oscar F. Bustinza, Luis M. Molina Fernandez and Marlene Mendoza Macías
Abstract
Purpose
Machine learning (ML) analytical tools are increasingly being considered as an alternative quantitative methodology in management research. This paper proposes a new approach for uncovering the antecedents behind product and product–service innovation (PSI).
Design/methodology/approach
The ML approach is novel in the field of innovation antecedents at the country level. A sample from the Ecuadorian National Survey on Technology and Innovation, consisting of more than 6,000 firms, is used to rank the antecedents of innovation.
Findings
The analysis reveals that the antecedents of product and PSI are distinct, yet rooted in the principles of open innovation and competitive priorities.
Research limitations/implications
The analysis is based on a sample of Ecuadorian firms; the objective, however, is to show that ML techniques are suitable for testing the antecedents of innovation in any other context.
Originality/value
The novel ML approach, in contrast to traditional quantitative analysis of the topic, can consider the full set of antecedent interactions for each of the innovations analyzed.
S. Thavasi and T. Revathi
Abstract
Purpose
With so many placement opportunities arising in their final or pre-final year, students start to feel the strain of the season. They need to know where they stand and how to increase their chances of being hired. A system that guides their careers is therefore one of the needs of the day.
Design/methodology/approach
The job role prediction system utilizes machine learning techniques such as Naïve Bayes, k-nearest neighbor, support vector machines (SVM) and artificial neural networks (ANN) to suggest a student’s job role based on their academic performance and course outcomes (CO); of these, ANN performs best. The system uses the Mepco Schlenk Engineering College curriculum, placement and student assessment datasets, in which the COs and syllabus are used to determine the skills a student has gained from their courses. The skills necessary for a job position are then extracted from job advertisements, and the system compares the student’s skills with those required for the role based on the placement prediction result.
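The ANN step can be sketched with scikit-learn's `MLPClassifier`; the skill areas, role labels and scores below are invented for illustration and are not the Mepco Schlenk dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical student profiles: course-outcome attainment scores in [0, 1]
# for five skill areas (illustrative names, not the study's data).
skills = ["programming", "databases", "networks", "maths", "communication"]
X = rng.random((600, len(skills)))
# Synthetic job-role label: 0 = developer, 1 = data analyst, 2 = network
# engineer, assigned from the student's strongest relevant skill.
y = np.argmax(X[:, [0, 3, 2]], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
ann = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=1).fit(X_tr, y_tr)
print(f"test accuracy: {ann.score(X_te, y_te):.2f}")
```

Comparing a student's skill vector against the skill profile of the predicted role would then surface the strengths and weaknesses the abstract mentions.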
Findings
The system predicts placement possibilities with an accuracy of 93.33% and a precision of 98%. The skill analysis also informs students of the strengths and weaknesses of their skill sets.
Research limitations/implications
For skill-set analysis, only the direct assessment of students is considered. Indirect assessment shall also be considered in future work.
Practical implications
The model is adaptable and flexible (customizable) to any type of academic institution or university.
Social implications
The research will be very useful for the student community in bridging the gap between academic and industrial needs.
Originality/value
Several works address career guidance for students. However, these methodologies are designed using only the curriculum and students’ basic personal information. The proposed system also considers students’ academic performance through direct assessment, along with their curriculum and basic personal information.
Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen
Abstract
Purpose
With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although there are models that handle such problems well, they still have shortcomings in some respects. The purpose of this paper is to improve the accuracy of credit assessment models.
Design/methodology/approach
In this paper, three different stages are used to improve the classification performance of LSTM so that financial institutions can more accurately identify borrowers at risk of default. The first stage uses the K-Means-SMOTE algorithm to eliminate the class imbalance. In the second stage, ResNet is used for feature extraction, and a two-layer LSTM then learns from the extracted features to strengthen the network’s ability to mine and use deep information. Finally, model performance is improved by using the IDWPSO algorithm to optimize the neural network during tuning.
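The first stage can be illustrated with a simplified K-Means-SMOTE sketch in NumPy and scikit-learn: cluster the minority class, then synthesize new samples by interpolation inside each cluster. The data are invented, and the published algorithm also filters clusters by local imbalance and density, which this sketch omits.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_smote(X_min, n_new, n_clusters=3, seed=0):
    """Simplified K-Means-SMOTE sketch: cluster the minority class, then
    synthesize points by interpolating between members of the same cluster.
    (The published algorithm additionally filters clusters by imbalance
    and density; this sketch omits that step.)"""
    rng = np.random.default_rng(seed)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(X_min)
    synthetic = []
    for _ in range(n_new):
        c = rng.integers(n_clusters)
        members = X_min[labels == c]
        a, b = members[rng.integers(len(members), size=2)]
        synthetic.append(a + rng.random() * (b - a))  # random point on segment a-b
    return np.array(synthetic)

rng = np.random.default_rng(0)
X_minority = rng.normal(size=(30, 5))        # 30 minority-class borrowers
X_syn = kmeans_smote(X_minority, n_new=100)  # oversample toward balance
print(X_syn.shape)
```

Clustering first keeps synthetic defaults inside dense minority regions, avoiding the noisy between-cluster interpolations plain SMOTE can produce.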
Findings
On two unbalanced datasets (category ratios of 700:1 and 3:1 respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. It was demonstrated that the multi-stage improved model showed a more significant advantage in evaluating the imbalanced credit dataset.
Originality/value
In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.
Rita Sleiman, Quoc-Thông Nguyen, Sandra Lacaze, Kim-Phuc Tran and Sébastien Thomassey
Abstract
Purpose
We propose a machine learning based methodology to deal with data collected from a mobile application asking users their opinion regarding fashion products. Based on different machine learning techniques, the proposed approach relies on the data value chain principle to enrich data into knowledge, insights and learning experience.
Design/methodology/approach
Online interaction and the usage of social media have dramatically altered both consumers’ behaviors and business practices. Companies invest in social media platforms and digital marketing in order to increase their brand awareness and boost their sales. For fashion retailers especially, understanding consumers’ behavior before launching a new collection is crucial to reducing overstock situations. In this study, we aim to help retailers better understand consumers’ different assessments of newly introduced products.
Findings
By creating new product-related and user-related attributes, the proposed prediction model attains an average accuracy of 70.15% when evaluating the potential success of new future products during the design process of the collection. Results showed that by harnessing artificial intelligence techniques along with social media data and mobile apps, new ways of interacting with clients and understanding their preferences can be established.
Practical implications
From a practical point of view, the proposed approach helps businesses better target their marketing campaigns, localize their potential clients and adjust manufactured quantities.
Originality/value
The originality of the proposed approach lies in (1) the implementation of the data value chain principle to enhance the information of raw data collected from mobile apps and improve the prediction model performance, and (2) the combination of consumer and product attributes to provide an accurate prediction of new fashion products.
Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam
Abstract
Purpose
Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we propose a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.
Design/methodology/approach
The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
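A minimal sketch of the GWOFS-MLP idea is given below: a small binary Grey Wolf Optimization loop searches feature masks, scoring each by the cross-validated accuracy of an MLP. The dataset is synthetic, the pack size and iteration count are kept deliberately tiny, and this is a generic GWO sketch rather than the authors' exact implementation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic stand-in for a software-defect dataset (1 = defective module).
X, y = make_classification(n_samples=200, n_features=12, n_informative=4,
                           random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated MLP accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

# Minimal binary Grey Wolf Optimization: each wolf is a point in [0, 1]^d,
# thresholded at 0.5 to yield a feature mask; the pack moves toward the three
# best wolves (alpha, beta, delta) following the canonical GWO update.
n_wolves, d, n_iter = 5, X.shape[1], 4
pack = rng.random((n_wolves, d))
for t in range(n_iter):
    scores = np.array([fitness(w > 0.5) for w in pack])
    alpha, beta, delta = pack[np.argsort(scores)[::-1][:3]]
    a = 2 - 2 * t / n_iter                     # exploration coefficient decays
    for i in range(n_wolves):
        new = np.zeros(d)
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(d), rng.random(d)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - pack[i])
        pack[i] = np.clip(new / 3, 0, 1)

best_mask = alpha > 0.5
print("selected features:", np.flatnonzero(best_mask))
print(f"CV accuracy with selected subset: {fitness(best_mask):.3f}")
```

The wrapper structure is the key point: the optimizer explores mask combinations, while the MLP's validation accuracy is the fitness signal that guides the pack.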
Findings
The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.
Originality/value
Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.
Chong Wu, Xiaofang Chen and Yongjie Jiang
Abstract
Purpose
While the Chinese securities market is booming, the phenomenon of listed companies falling into financial distress is also emerging, which affects the operation and development of enterprises and also jeopardizes the interests of investors. Therefore, it is important to understand how to accurately and reasonably predict the financial distress of enterprises.
Design/methodology/approach
In the present study, ensemble feature selection (EFS) and improved stacking were used for financial distress prediction (FDP). Mutual information, analysis of variance (ANOVA), random forest (RF), genetic algorithms and recursive feature elimination (RFE) were chosen as the EFS selectors. Since information may be lost when the base learners’ outputs are fed directly into the meta-learner, the features with high importance were fed into the meta-learner as well. A screening layer was added to select the best-performing meta-learner. Finally, Optuna was used for hyperparameter tuning of the learners.
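The EFS-plus-stacking pipeline can be sketched with scikit-learn: average each feature's rank across several selectors, keep the top-ranked subset, then train a stacked model on it. The data, the choice of base learners and the cutoff of eight features are illustrative assumptions, and the genetic-algorithm and RFE selectors used in the study are omitted for brevity.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, f_classif
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the A-share financial-distress dataset.
X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=7)

# Ensemble feature selection: average each feature's rank across several
# selectors (the study also used genetic algorithms and RFE).
scores = [
    mutual_info_classif(X, y, random_state=7),
    f_classif(X, y)[0],
    RandomForestClassifier(n_estimators=50,
                           random_state=7).fit(X, y).feature_importances_,
]
ranks = np.mean([np.argsort(np.argsort(-s)) for s in scores], axis=0)
top = np.argsort(ranks)[:8]          # keep the eight best-ranked features

# Stacking: tree-based base learners feed a logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50, random_state=7)),
                ("dt", DecisionTreeClassifier(random_state=7))],
    final_estimator=LogisticRegression(max_iter=1000))
acc = cross_val_score(stack, X[:, top], y, cv=3).mean()
print(f"stacked CV accuracy on selected features: {acc:.3f}")
```

Rank averaging makes the selection robust to any single selector's bias, which is the motivation behind ensemble feature selection.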
Findings
An empirical study was conducted with a sample of A-share listed companies in China. The F1-score of the model constructed using the features screened by EFS reached 84.55%, representing an improvement of 4.37% compared to the original features. To verify the effectiveness of improved stacking, benchmark model comparison experiments were conducted. Compared to the original stacking model, the accuracy of the improved stacking model was improved by 0.44%, and the F1-score was improved by 0.51%. In addition, the improved stacking model had the highest area under the curve (AUC) value (0.905) among all the compared models.
Originality/value
Compared to previous models, the proposed FDP model has better performance, thus bridging the research gap of feature selection. The present study provides new ideas for stacking improvement research and a reference for subsequent research in this field.
Nehal Elshaboury, Tarek Zayed and Eslam Mohammed Abdelkader
Abstract
Purpose
Water pipes degrade over time owing to a variety of pipe-related, soil-related, operational and environmental factors. Hence, municipalities need to implement effective maintenance and rehabilitation strategies for water pipes based on reliable deterioration models and cost-effective inspection programs. In light of the foregoing, the paramount objective of this research study is to develop condition assessment and deterioration prediction models for saltwater pipes in Hong Kong.
Design/methodology/approach
As a prerequisite to the development of the condition assessment models, the spherical fuzzy analytic hierarchy process (SFAHP) is harnessed to analyze the relative importance weights of the deterioration factors. Afterward, these weights, coupled with the factors’ effective values, are leveraged using the measurement of alternatives and ranking according to the compromise solution (MARCOS) algorithm to analyze the performance condition of the water pipes. A condition rating system is then designed based on the generalized entropy-based probabilistic fuzzy C-means (GEPFCM) algorithm. A set of fourth-order multiple regression functions is constructed to capture the degradation trends in pipeline condition over time, covering their disparate characteristics.
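As an illustration of the MARCOS step, here is a minimal NumPy sketch for benefit-type criteria: alternatives are scored by their utility relative to an ideal and an anti-ideal solution. The decision matrix and weights are invented; in the study, the weights would come from the SFAHP analysis.

```python
import numpy as np

def marcos(X, w):
    """Minimal MARCOS sketch for benefit-type criteria: rank alternatives
    by utility relative to the ideal and anti-ideal solutions."""
    ideal, anti = X.max(axis=0), X.min(axis=0)
    ext = np.vstack([anti, X, ideal])      # extended decision matrix
    norm = ext / ideal                     # benefit-type normalization
    S = (norm * w).sum(axis=1)             # weighted sums
    S_aai, S_ai = S[0], S[-1]
    K_minus, K_plus = S[1:-1] / S_aai, S[1:-1] / S_ai   # utility degrees
    f_plus = K_minus / (K_plus + K_minus)
    f_minus = K_plus / (K_plus + K_minus)
    # Final utility function of each alternative.
    return (K_plus + K_minus) / (1 + (1 - f_plus) / f_plus
                                 + (1 - f_minus) / f_minus)

# Hypothetical condition scores of four pipes on three deterioration criteria.
X = np.array([[0.7, 0.5, 0.9],
              [0.4, 0.8, 0.6],
              [0.9, 0.9, 0.8],
              [0.3, 0.2, 0.4]])
w = np.array([0.5, 0.3, 0.2])              # assumed SFAHP-style weights
utility = marcos(X, w)
print(np.argsort(-utility))                # best-to-worst ranking
```

The resulting utilities order the pipes from best to worst condition, which is the input the condition rating step would then discretize.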
Findings
Analytical results demonstrated that the top five influential deterioration factors comprise age, material, traffic, soil corrosivity and material. In addition, it was derived that developed deterioration models accomplished correlation coefficient, mean absolute error and root mean squared error of 0.8, 1.33 and 1.39, respectively.
Originality/value
It can be argued that generated deterioration models can assist municipalities in formulating accurate and cost-effective maintenance, repair and rehabilitation programs.
Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman
Abstract
Purpose
In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.
Design/methodology/approach
On a sample of over 230,000 records with close to 12,000 distinct DDC classes, the open source tool Annif, developed by the National Library of Finland, was applied in the following implementations: a lexical algorithm, a support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted on a sample of 60 records to investigate the value of, and inter-rater agreement on, automatically assigned classes.
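The ensemble idea can be illustrated with a soft-voting classifier over TF-IDF features; the toy titles and three-digit DDC labels below are invented, and this generic scikit-learn sketch only loosely mirrors Annif's ensemble over lexical and statistical backends.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import VotingClassifier
from sklearn.pipeline import make_pipeline

# Toy bibliographic titles with hypothetical three-digit DDC labels; the
# study itself used ~230,000 Swedish union catalogue records.
titles = ["introduction to algebra", "linear algebra for engineers",
          "history of medieval europe", "european history after 1500",
          "python programming basics", "advanced programming in python"]
ddc = ["512", "512", "940", "940", "005", "005"]

# Soft voting averages the class probabilities of the member classifiers,
# loosely analogous to Annif combining several backends into one ensemble.
model = make_pipeline(
    TfidfVectorizer(),
    VotingClassifier(estimators=[("nb", MultinomialNB()),
                                 ("lr", LogisticRegression(max_iter=1000))],
                     voting="soft"))
model.fit(titles, ddc)
print(model.predict(["algebra for beginners"]))
```

In a semi-automated workflow, the top-ranked classes would be offered to cataloguers as suggestions rather than assigned outright.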
Findings
The best results were achieved using the ensemble approach that achieved 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.
Originality/value
The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.