Search results

1 – 10 of 947
Article
Publication date: 1 May 2006

Rajugan Rajagopalapillai, Elizabeth Chang, Tharam S. Dillon and Ling Feng

In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources…

Abstract

In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Meanwhile, since its introduction, the eXtensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, so supporting views over semi-structured data models proves to be a difficult task. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work is three-fold: first, we propose an approach that separates the implementation and conceptual aspects of views, providing a clear separation of concerns and thus allowing the analysis and design of views to be decoupled from their implementation. Secondly, we define representations to express and construct these views at the conceptual level. Thirdly, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. Also, to validate and apply the LVM concepts, methods and transformations developed, we propose a view-driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
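
As a minimal illustration of the kind of view query expression such a transformation could target (this example is not from the paper), the sketch below materializes a simple XML view with an XPath expression using Python's standard library; the element and attribute names are hypothetical.

```python
# Hypothetical sketch: realizing a conceptual "order view" as an XPath
# query expression over an XML document. Element and attribute names
# are illustrative, not taken from the paper.
import xml.etree.ElementTree as ET

SOURCE_XML = """
<orders>
  <order id="1" status="shipped"><customer>Ana</customer><total>120.50</total></order>
  <order id="2" status="open"><customer>Bo</customer><total>75.00</total></order>
</orders>
"""

def materialize_view(xml_text, status):
    """Build a view document containing only orders with the given status."""
    root = ET.fromstring(xml_text)
    view = ET.Element("order-view", {"status": status})
    # The view query expression: select matching <order> elements.
    for order in root.findall(f"./order[@status='{status}']"):
        view.append(order)
    return view

view = materialize_view(SOURCE_XML, "shipped")
print(ET.tostring(view, encoding="unicode"))
```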

Details

International Journal of Web Information Systems, vol. 2 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 6 September 2018

Pengfei Zhao, Ji Wu, Zhongsheng Hua and Shijian Fang

The purpose of this paper is to identify electronic word-of-mouth (eWOM) customers from customer reviews. Thus, firms can precisely leverage eWOM customers to increase their…

Abstract

Purpose

The purpose of this paper is to identify electronic word-of-mouth (eWOM) customers from customer reviews. Thus, firms can precisely leverage eWOM customers to increase their product sales.

Design/methodology/approach

This research proposed a framework to analyze the content of consumer-generated product reviews. Specific algorithms were used to identify potential eWOM reviewers, and then an evaluation method was used to validate the relationship between product sales and the eWOM reviewers identified by the authors’ proposed method.
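
A hedged sketch of the evaluation idea described above, under the assumption that review activity from each reviewer group is compared against product sales via a simple goodness-of-fit measure; the data and the R-squared criterion are illustrative, not the authors' actual algorithm or data.

```python
# Hedged sketch (not the authors' algorithm): compare how well review
# volume from two reviewer groups predicts product sales, using the
# coefficient of determination of a simple linear fit. All data below
# is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
weeks = 52
sales = 100 + 5 * np.arange(weeks) + rng.normal(0, 20, weeks)

# Hypothetical weekly review counts from the two groups.
ewom_reviews = 0.2 * sales + rng.normal(0, 10, weeks)      # tracks sales
non_ewom_reviews = rng.normal(50, 15, weeks)               # unrelated noise

def r_squared(x, y):
    """R^2 of a one-variable least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

print("eWOM group R^2:    ", round(r_squared(ewom_reviews, sales), 3))
print("non-eWOM group R^2:", round(r_squared(non_ewom_reviews, sales), 3))
```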

Findings

The results corroborate that online product reviews made by the eWOM customers identified by the authors' proposed method are more strongly related to product sales than reviews made by non-eWOM customers, and that the predictive power of reviews generated by eWOM customers is significantly higher than that of reviews generated by non-eWOM customers.

Research limitations/implications

The proposed method was validated on a data set covering a single type of product; its validity for other products must still be tested. Moreover, customers identified as eWOM customers in the past may have no significant influence on product sales in the future, so the proposed method should be re-tested as the market environment changes.

Practical implications

By combining this method with the previous customer segmentation method, a new customer segmentation framework is proposed to help firms assess customer value more precisely.

Originality/value

This study is the first to identify eWOM customers from online reviews and to evaluate the relationship between reviewers and product sales.

Details

Industrial Management & Data Systems, vol. 119 no. 1
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 13 August 2020

Chandra Sekhar Kolli and Uma Devi Tatavarthi

Fraud transaction detection has become a significant factor in the communication technologies and electronic commerce systems, as it affects the usage of electronic payment. Even…

Abstract

Purpose

Fraud transaction detection has become a significant factor in the communication technologies and electronic commerce systems, as it affects the usage of electronic payment. Although various fraud detection methods have been developed, improving the performance of electronic payment by detecting fraudsters remains a great challenge in bank transactions.

Design/methodology/approach

This paper aims to design a fraud detection mechanism using the proposed Harris water optimization-based deep recurrent neural network (HWO-based deep RNN). The proposed fraud detection strategy includes three phases, namely pre-processing, feature selection and fraud detection. Initially, the input transactional data is subjected to the pre-processing phase, where it is transformed using the Box-Cox transformation to remove redundant and noisy values. The pre-processed data is passed to the feature selection phase, where the essential and suitable features are selected using the wrapper model; the selected features enable the classifier to achieve better detection performance. Finally, the selected features are fed to the detection phase, where a deep recurrent neural network classifier carries out the fraud detection, with the training of the classifier performed by the proposed Harris water optimization algorithm, an integration of water wave optimization and Harris hawks optimization.
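
A hedged sketch of the first two phases described above (Box-Cox pre-processing and wrapper-based feature selection), with a logistic-regression stand-in for the HWO-trained deep RNN, which is not reproduced here; all data is synthetic.

```python
# Hedged sketch (not the paper's implementation): Box-Cox pre-processing
# followed by a wrapper-style forward feature selection. A logistic
# regression stands in for the HWO-trained deep RNN; all data is synthetic.
import numpy as np
from scipy.stats import boxcox
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.gamma(shape=2.0, scale=3.0, size=(500, 8))        # positive-valued features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 2, 500) > 9).astype(int)

# Box-Cox requires strictly positive inputs; transform each feature column.
X_bc = np.column_stack([boxcox(X[:, j])[0] for j in range(X.shape[1])])

def wrapper_forward_selection(X, y, max_features=4):
    """Greedy wrapper: repeatedly add the feature that most improves CV accuracy."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        scores = {j: cross_val_score(LogisticRegression(max_iter=1000),
                                     X[:, selected + [j]], y, cv=5).mean()
                  for j in remaining}
        best = max(scores, key=scores.get)
        selected.append(best)
        remaining.remove(best)
    return selected

print("selected feature indices:", wrapper_forward_selection(X_bc, y))
```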

Findings

The proposed HWO-based deep RNN obtained better performance in terms of accuracy, sensitivity and specificity, with values of 0.9192, 0.7642 and 0.9943, respectively.

Originality/value

An effective fraud detection method named HWO-based deep RNN is designed to detect frauds in bank transactions. The optimal features selected using the wrapper model enable the classifier to find fraudulent activities more efficiently. Furthermore, detection accuracy is driven by the optimization model's fitness measure, in which the solution with the minimal error value is declared the best, as it yields better detection results.

Article
Publication date: 25 January 2022

Tobias Mueller, Alexander Segin, Christoph Weigand and Robert H. Schmitt

In the determination of the measurement uncertainty, the GUM procedure requires the building of a measurement model that establishes a functional relationship between the…

Abstract

Purpose

In the determination of the measurement uncertainty, the GUM procedure requires the building of a measurement model that establishes a functional relationship between the measurand and all influencing quantities. Since the effort of modelling, as well as of quantifying the measurement uncertainties, depends on the number of influencing quantities considered, the aim of this study is to determine relevant influencing quantities and to remove irrelevant ones from the dataset.

Design/methodology/approach

In this work, it was investigated whether the effort of modelling for the determination of measurement uncertainty can be reduced by the use of feature selection (FS) methods. For this purpose, 9 different FS methods were tested on 16 artificial test datasets, whose properties (number of data points, number of features, complexity, features with low influence and redundant features) were varied via a design of experiments.
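
A hedged sketch in the spirit of this setup, using a synthetic dataset with known relevant features and two illustrative FS methods (mutual-information ranking and recursive feature elimination) rather than the nine methods and sixteen datasets of the study.

```python
# Hedged sketch in the spirit of the study's setup: generate a synthetic
# dataset with known relevant features, apply two feature-selection methods,
# and measure how many truly relevant features each recovers. The specific
# methods and metric here are illustrative, not those of the paper.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import mutual_info_regression, RFE
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=300, n_features=12, n_informative=4,
                       noise=5.0, shuffle=False, random_state=0)
truly_relevant = set(range(4))          # first 4 features are informative

# Method 1: rank by mutual information, keep the top 4.
mi = mutual_info_regression(X, y, random_state=0)
mi_selected = set(np.argsort(mi)[-4:])

# Method 2: recursive feature elimination with a linear model.
rfe = RFE(LinearRegression(), n_features_to_select=4).fit(X, y)
rfe_selected = set(np.where(rfe.support_)[0])

for name, sel in [("mutual info", mi_selected), ("RFE", rfe_selected)]:
    recovered = len(sel & truly_relevant)
    print(f"{name}: recovered {recovered} of 4 relevant features")
```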

Findings

Based on a success metric together with the stability, universality and complexity of each method, two FS methods were identified that reliably distinguish relevant from irrelevant influencing quantities for a measurement model.

Originality/value

For the first time, FS methods were applied to datasets with properties of classical measurement processes. The simulation-based results serve as a basis for further research in the field of FS for measurement models. The identified algorithms will be applied to real measurement processes in the future.

Details

International Journal of Quality & Reliability Management, vol. 40 no. 3
Type: Research Article
ISSN: 0265-671X

Article
Publication date: 12 June 2017

Taehoon Ko, Je Hyuk Lee, Hyunchang Cho, Sungzoon Cho, Wounjoo Lee and Miji Lee

Quality management of products is an important part of the manufacturing process. One way to manage and assure product quality is to use machine learning algorithms based on…

Abstract

Purpose

Quality management of products is an important part of the manufacturing process. One way to manage and assure product quality is to use machine learning algorithms based on relationships among various process steps. The purpose of this paper is to integrate manufacturing, inspection and after-sales service data to make full use of machine learning algorithms for estimating product quality in a supervised fashion. The proposed frameworks and methods are applied to actual data associated with heavy machinery engines.

Design/methodology/approach

By following Lenzerini's formula, manufacturing, inspection and after-sales service data from various sources are integrated. The after-sales service data are used to label each engine as normal or abnormal. In this study, one-class classification algorithms are used due to the class imbalance problem. To address the multi-dimensionality of the time series data, the symbolic aggregate approximation (SAX) algorithm is used for data segmentation. Then, a binary genetic algorithm-based wrapper approach is applied to the segmented data to find the optimal feature subset.
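
A hedged sketch of two of the steps named above, a simplified SAX-style discretization and a one-class model producing an anomaly score per engine; the data-integration and genetic-algorithm stages are omitted and all data is synthetic.

```python
# Hedged sketch of two steps named above: a simplified SAX-style
# discretization of a sensor time series, and a one-class model that
# assigns an anomaly score per engine. Data and parameters are synthetic;
# the data integration and genetic-algorithm steps are omitted.
import numpy as np
from sklearn.svm import OneClassSVM

def sax_symbols(series, n_segments=8, alphabet="abcd"):
    """Piecewise-aggregate the series, then map segment means to symbols."""
    series = (series - series.mean()) / (series.std() + 1e-12)
    segments = np.array_split(series, n_segments)
    means = np.array([seg.mean() for seg in segments])
    # Breakpoints for a 4-symbol alphabet under a standard normal assumption.
    breakpoints = np.array([-0.67, 0.0, 0.67])
    return "".join(alphabet[np.searchsorted(breakpoints, m)] for m in means)

rng = np.random.default_rng(2)
normal_engines = rng.normal(0, 1, size=(100, 64))        # healthy sensor traces
test_engine = np.concatenate([rng.normal(0, 1, 32), rng.normal(3, 1, 32)])

print("SAX word for test engine:", sax_symbols(test_engine))

# One-class SVM trained on healthy engines only; decision_function gives
# an anomaly score (lower = more anomalous).
features = np.array([[seg.mean() for seg in np.array_split(t, 8)] for t in normal_engines])
model = OneClassSVM(nu=0.05, gamma="scale").fit(features)
test_features = [[seg.mean() for seg in np.array_split(test_engine, 8)]]
print("anomaly score:", model.decision_function(test_features)[0])
```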

Findings

By employing machine learning-based anomaly detection models, an anomaly score for each engine is calculated. Experimental results show that the proposed method can detect defective engines with a high probability before they are shipped.

Originality/value

Through data integration, the actual customer-perceived quality from after-sales service is linked to data from manufacturing and inspection process. In terms of business application, data integration and machine learning-based anomaly detection can help manufacturers establish quality management policies that reflect the actual customer-perceived quality by predicting defective engines.

Details

Industrial Management & Data Systems, vol. 117 no. 5
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 26 November 2021

Bouslah Ayoub and Taleb Nora

Parkinson's disease (PD) is a well-known complex neurodegenerative disease. Typically, its identification is based on motor disorders, while the computer estimation of its main…

Abstract

Purpose

Parkinson's disease (PD) is a well-known, complex neurodegenerative disease. Typically, its identification is based on motor disorders, while computational estimation of its main symptoms with machine learning (ML) has received considerable research attention. Nevertheless, ML approaches first require their parameters to be refined before working with the best model generated, a process that often requires an expert user to oversee the performance of the algorithm. Therefore, attention is required towards new approaches that offer better forecasting accuracy.

Design/methodology/approach

To provide an accessible identification model for Parkinson's disease as an auxiliary tool for clinicians, the authors suggest a new evolutionary classification model. The core of the prediction model is a fast learning network (FLN) optimized by a genetic algorithm (GA). To obtain a better subset of features and parameters, a new coding architecture is introduced to improve the GA so that it yields an optimal FLN model.
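
A hedged sketch of the joint coding idea: one chromosome encodes a binary feature mask plus a hyperparameter, and a small genetic algorithm searches for the best cross-validated combination. An MLP classifier stands in for the FLN, for which no off-the-shelf implementation is assumed here.

```python
# Hedged sketch of the joint coding idea: a chromosome encodes a binary
# feature mask plus one hyperparameter (hidden-layer size), and a small
# genetic algorithm searches for the best cross-validated combination.
# An MLP classifier stands in for the fast learning network (FLN).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
n_features = X.shape[1]
rng = np.random.default_rng(3)

def fitness(chrom):
    """Cross-validated accuracy of the model encoded by the chromosome."""
    mask, hidden = chrom[:n_features].astype(bool), int(chrom[-1])
    if not mask.any():
        return 0.0
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(hidden,),
                                      max_iter=200, random_state=0))
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

def random_chrom():
    return np.concatenate([rng.integers(0, 2, n_features), [rng.integers(5, 30)]])

population = [random_chrom() for _ in range(8)]
for generation in range(3):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:4]                          # truncation selection
    children = []
    for _ in range(4):
        a, b = rng.choice(4, 2, replace=False)
        cut = int(rng.integers(1, n_features))    # one-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        flip = int(rng.integers(0, n_features))   # single-bit mutation of the mask
        child[flip] = 1 - child[flip]
        children.append(child)
    population = parents + children

best = max(population, key=fitness)
print("features kept:", int(best[:n_features].sum()), "| hidden nodes:", int(best[-1]))
```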

Findings

The proposed model is intensively evaluated through a series of experiments based on the Speech and HandPD benchmark datasets. Very popular wrapper induction models, such as the support vector machine (SVM) and K-nearest neighbors (KNN), were tested under the same conditions. The results support that the proposed model achieves the best performance in terms of accuracy and g-mean.

Originality/value

A novel, efficient PD detection model called A-W-FLN is proposed. The A-W-FLN utilizes the FLN as the base classifier in order to exploit its high generalization ability, and an identification capability is embedded to discover the most suitable feature model in the detection process. Moreover, the proposed method automatically optimizes the FLN's architecture towards a smaller number of hidden nodes and solid connecting weights. This helps the network train on complex PD datasets with non-linear features and yields superior results.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 11 October 2011

Jiang Shu, Layne T. Watson, Naren Ramakrishnan, Frederick A. Kamke and Shubhangi Deshpande

This paper describes a practical approach to implement computational steering for problem solving environments (PSEs) by using WBCSim as an example. WBCSim is a Web based…

Abstract

Purpose

This paper describes a practical approach to implement computational steering for problem solving environments (PSEs) by using WBCSim as an example. WBCSim is a Web based simulation system designed to increase the productivity of wood scientists conducting research on wood‐based composites manufacturing processes. WBCSim serves as a prototypical example for the design, construction, and evaluation of small‐scale PSEs.

Design/methodology/approach

Various changes have been made to support computational steering across the three layers – client, server, developer – comprising the WBCSim system. A detailed description of the WBCSim system architecture is presented, along with a typical scenario of computational steering usage.

Findings

The set of changes and components is as follows: design and add a very simple steering module at the legacy simulation code level; provide a way to monitor simulation execution (alerting users when it is time to steer); and add an interface to access and visualize simulation results and, perhaps, to compare intermediate results across multiple steering attempts. These simple changes and components have a relatively low cost in terms of added software complexity.
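
A hedged sketch of what such a minimal steering module might look like (not WBCSim's actual code): at each checkpoint the simulation publishes its status and picks up parameter changes written by the client; file-based exchange is an assumption for illustration.

```python
# Hedged sketch of a minimal steering hook of the kind described above:
# at each checkpoint the simulation reports progress and picks up any
# parameter changes written by the client. File-based exchange is an
# assumption; WBCSim's actual mechanism is not reproduced here.
import json, os, time

PARAMS_FILE = "steer_params.json"     # written by the client/user interface
STATUS_FILE = "steer_status.json"     # read by the monitoring interface

def steering_checkpoint(step, params):
    """Publish progress and merge in any steering updates from the client."""
    with open(STATUS_FILE, "w") as f:
        json.dump({"step": step, "params": params}, f)
    if os.path.exists(PARAMS_FILE):
        with open(PARAMS_FILE) as f:
            params.update(json.load(f))   # user-steered overrides
        os.remove(PARAMS_FILE)
    return params

params = {"press_temperature": 180.0, "press_time": 300}
for step in range(1, 6):
    # ... one block of the legacy simulation would run here ...
    params = steering_checkpoint(step, params)
    time.sleep(0.1)
print("final parameters:", params)
```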

Originality/value

The novelty lies in designing and implementing a practical approach to enable computational steering capability for PSEs embedded with legacy simulation code.

Article
Publication date: 31 December 2006

Akio Sashima, Noriaki Izumi and Koichi Kurumatani

In the vision of pervasive computing, numerous heterogeneous devices, various information services, and users performing daily activities are physically co‐located in a…

Abstract

In the vision of pervasive computing, numerous heterogeneous devices, various information services, and users performing daily activities are physically co-located in an environment. How can we coordinate the services and devices to assist a particular user in receiving a particular service so as to maximize the user's satisfaction? To solve this human-centered coordination issue, we propose an agent-based service coordination framework for pervasive computing, called the location-aware middle agent framework. The middle agent takes account of the user's location in a cognitive way (based on a location ontology) and determines the best-matched services for the user. Based on this coordination framework, we have developed a multi-agent architecture for pervasive computing called CONSORTS (Coordination System of Real-world Transaction Services). In this paper, we first outline some requirements of human-centered service coordination in pervasive computing. Secondly, we describe the location-aware middle agent framework that fulfils these requirements. Lastly, we outline CONSORTS, a prototype of the location-aware middle agent framework, and two applications of CONSORTS: location-aware information assistance services in a museum and wireless-LAN-based location systems on FIPA agent networks.
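
A hedged sketch of the location-aware matching idea (not CONSORTS itself): a toy location ontology expressed as a containment hierarchy and a middle agent that returns services whose coverage contains the user's location; all names are illustrative.

```python
# Hedged sketch of the location-aware matching idea (not CONSORTS itself):
# a toy location ontology expressed as a containment hierarchy, and a
# middle agent that matches a user to services whose coverage area
# contains the user's current location. All names are illustrative.
CONTAINS = {                      # child -> parent containment relation
    "gallery-3": "east-wing",
    "east-wing": "museum",
    "cafe": "museum",
}

def enclosing_areas(location):
    """All areas that contain the given location, including itself."""
    areas = [location]
    while location in CONTAINS:
        location = CONTAINS[location]
        areas.append(location)
    return areas

SERVICES = {
    "exhibit-guide": "gallery-3",
    "wing-navigator": "east-wing",
    "museum-info": "museum",
}

def match_services(user_location):
    """Middle agent: return services whose coverage contains the user."""
    areas = set(enclosing_areas(user_location))
    return [name for name, coverage in SERVICES.items() if coverage in areas]

print(match_services("gallery-3"))   # ['exhibit-guide', 'wing-navigator', 'museum-info']
```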

Details

International Journal of Pervasive Computing and Communications, vol. 2 no. 2
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 9 March 2022

G.L. Infant Cyril and J.P. Ananth

The bank is an imperative part of the market economy. The failure or success of an institution relies on the industry's ability to compute credit risk. The…

Abstract

Purpose

The bank is an imperative part of the market economy. The failure or success of an institution relies on the industry's ability to compute credit risk. A loan eligibility prediction model uses analysis methods that draw on the past and current information of a credit user to make predictions. However, precise loan prediction with risk and assessment analysis remains a major challenge in loan eligibility prediction.

Design/methodology/approach

The aim of this research is to present a new method, namely a Social Border Collie Optimization (SBCO)-based deep neuro fuzzy network, for loan eligibility prediction. In this method, the Box-Cox transformation is employed on the input loan data to make the data apt for further processing. The transformed data then undergoes wrapper-based feature selection to choose suitable features and boost the performance of loan eligibility calculation. Once the features are chosen, naive Bayes (NB) is adopted for feature fusion. During NB training, the classifier builds a probability index table from the input data features and group values. The NB classifier is then tested using the posterior probability ratio, considering the conditional probability of the normalization constant with the class evidence. Finally, loan eligibility prediction is achieved by a deep neuro fuzzy network, which is trained with the designed SBCO. Here, the SBCO is devised by combining the social ski driver (SSD) algorithm and Border Collie Optimization (BCO) to produce the most precise result.
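
A hedged sketch of the naive Bayes step described above: a probability index table built from categorical features and a record scored with the posterior probability ratio of the two classes; the data, feature names and smoothing are illustrative assumptions.

```python
# Hedged sketch of the naive Bayes step described above: build a
# probability index table from categorical features and score a record
# with the posterior probability ratio of the two classes. Data, feature
# names and the smoothing constant are illustrative assumptions.
from collections import defaultdict

records = [
    ({"employment": "salaried", "collateral": "yes"}, 1),
    ({"employment": "salaried", "collateral": "no"},  1),
    ({"employment": "self",     "collateral": "no"},  0),
    ({"employment": "self",     "collateral": "yes"}, 1),
    ({"employment": "none",     "collateral": "no"},  0),
]

# Probability index table: counts of (feature, value) per class.
table = defaultdict(lambda: defaultdict(int))
class_counts = defaultdict(int)
feature_values = defaultdict(set)
for features, label in records:
    class_counts[label] += 1
    for feat, value in features.items():
        table[(feat, value)][label] += 1
        feature_values[feat].add(value)

def posterior_ratio(features, alpha=1.0):
    """P(eligible | x) / P(not eligible | x) under the naive Bayes assumption."""
    ratio = class_counts[1] / class_counts[0]            # prior ratio
    for feat, value in features.items():
        k = len(feature_values[feat])                    # Laplace smoothing
        p1 = (table[(feat, value)][1] + alpha) / (class_counts[1] + alpha * k)
        p0 = (table[(feat, value)][0] + alpha) / (class_counts[0] + alpha * k)
        ratio *= p1 / p0
    return ratio

print(posterior_ratio({"employment": "salaried", "collateral": "yes"}))
```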

Findings

The analysis is conducted using the accuracy, sensitivity and specificity parameters. The designed method achieves the highest accuracy of 95%, with sensitivity and specificity of 95.4% and 97.3%, when compared to existing methods such as the fuzzy neural network (Fuzzy NN), multiple partial least squares regression model (Multi_PLS), instance-based entropy fuzzy support vector machine (IEFSVM), deep recurrent neural network (Deep RNN) and whale social optimization algorithm-based deep RNN (WSOA-based Deep RNN).

Originality/value

This paper devises an SBCO-based deep neuro fuzzy network for predicting loan eligibility. Here, the deep neuro fuzzy network is trained with the proposed SBCO, which is devised by combining the SSD and BCO to produce the most precise result for loan eligibility prediction.

Details

Kybernetes, vol. 52 no. 8
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 17 January 2022

Syed Haroon Abdul Gafoor and Padma Theagarajan

Conventional diagnostic techniques may be prone to subjectivity, since they depend on the assessment of motions that are often subtle to individual eyes and hence…

Abstract

Purpose

Conventional diagnostic techniques may be prone to subjectivity, since they depend on the assessment of motions that are often subtle to individual eyes and hence hard to classify, potentially resulting in misdiagnosis. Meanwhile, early non-motor signs of Parkinson's disease (PD) can be mild and may be due to a variety of other conditions. As a result, these signs are usually ignored, making early PD diagnosis difficult. Machine learning approaches for classifying PD patients versus healthy controls or individuals with similar medical symptoms (for example, movement disorders or other Parkinsonian syndromes) have been introduced to solve these problems and to enhance the diagnostic and assessment processes of PD.

Design/methodology/approach

Medical observations and evaluation of medical symptoms, including characterization of a wide range of motor indications, are commonly used to diagnose PD. The quantity of data being processed has grown in the last five years, so feature selection has become a prerequisite before any classification. This study introduces a feature selection method based on the score-based artificial fish swarm algorithm (SAFSA) to overcome this issue.

Findings

This study improves the accuracy of PD identification by reducing the number of chosen vocal features while using the most recent and largest publicly accessible database. Feature subset selection in PD detection techniques starts by eliminating features that are irrelevant or redundant. According to a few objective functions, the chosen feature subset should provide the best performance.

Research limitations/implications

In many situations, this is a nondeterministic polynomial time (NP-hard) problem. This method enhances the PD detection rate by selecting the most essential features from the database. To begin, the data set's dimensionality is reduced using the singular value decomposition (SVD) technique. Next, Biogeography-Based Optimization (BBO) is used for feature selection; the weight value is a vital parameter for finding the best features in PD classification.

Originality/value

PD classification is performed using ensemble learning classification approaches, namely a hybrid classifier of fuzzy K-nearest neighbor, kernel support vector machines, a fuzzy convolutional neural network and random forest. The suggested classifiers are trained using data from the UCI ML repository, and their results are verified using leave-one-person-out cross-validation. The measures employed to assess classifier efficiency include accuracy, F-measure and Matthews correlation coefficient.
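
A hedged sketch of the evaluation protocol named above, a soft-voting ensemble scored with leave-one-person-out cross-validation; standard classifiers stand in for the paper's fuzzy components and the grouped data is synthetic.

```python
# Hedged sketch of the evaluation protocol named above: a voting ensemble
# scored with leave-one-person-out cross-validation. Standard classifiers
# stand in for the paper's fuzzy KNN / fuzzy CNN components, and the data
# (recordings grouped by subject) is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n_subjects, recs_per_subject, n_feats = 20, 6, 10
groups = np.repeat(np.arange(n_subjects), recs_per_subject)      # subject IDs
y = np.repeat(rng.integers(0, 2, n_subjects), recs_per_subject)  # diagnosis per subject
X = rng.normal(0, 1, (n_subjects * recs_per_subject, n_feats)) + y[:, None]

ensemble = VotingClassifier([
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", SVC(kernel="rbf", probability=True)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
], voting="soft")

scores = cross_val_score(ensemble, X, y, cv=LeaveOneGroupOut(), groups=groups)
print("leave-one-person-out accuracy:", scores.mean().round(3))
```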

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 4
Type: Research Article
ISSN: 1756-378X
