Search results
1 – 10 of 294
Chaw Thet Zan and Hayato Yamana
Abstract
Purpose
The paper aims to estimate the segment size and alphabet size of Symbolic Aggregate approXimation (SAX). In SAX, time series data are divided into a set of equal-sized segments. Each segment is represented by its mean value and mapped to a symbol of an alphabet, where the number of adopted symbols is called the alphabet size. Both parameters control the data compression ratio and the accuracy of time series mining tasks. Moreover, optimal parameter selection depends strongly on the application and data set. In practice, these parameters are selected iteratively by analyzing entire data sets, which limits the handling of large volumes of time series and reduces the applicability of SAX.
Design/methodology/approach
The segment size is estimated based on the Shannon sampling theorem (autoSAXSD_S) and on adaptive hierarchical segmentation (autoSAXSD_M). The alphabet size is estimated from how the mean values of all the segments are distributed: a small alphabet size is set for a wide distribution, so that differences among segments remain easy to distinguish.
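For reference, the standard SAX pipeline that these schemes parameterise can be sketched in a few lines: z-normalise, take equal-sized segment means (PAA), then map each mean to a symbol via Gaussian breakpoints. The breakpoints below are hard-coded for alphabet sizes 3 and 4 purely for illustration; the paper's schemes estimate `segments` and `alphabet_size` automatically rather than taking them as inputs.

```python
import numpy as np

def sax(series, segments, alphabet_size):
    """Minimal SAX transform (assumes len(series) is divisible by segments).

    Each equal-sized segment is reduced to its mean (PAA), then each mean
    is mapped to a letter via breakpoints of the standard normal
    distribution, as in the original SAX scheme.
    """
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / x.std()                 # z-normalise the series
    paa = x.reshape(segments, -1).mean(axis=1)   # equal-sized segment means
    # Gaussian breakpoints, hard-coded here for alphabet sizes 3 and 4;
    # a full implementation derives them from the inverse normal CDF.
    breakpoints = {3: [-0.43, 0.43], 4: [-0.67, 0.0, 0.67]}[alphabet_size]
    symbols = np.searchsorted(breakpoints, paa)  # breakpoint interval index
    return "".join(chr(ord("a") + s) for s in symbols)
```

For example, a linearly increasing series of length 16 with four segments and a four-letter alphabet maps to `"abcd"`, one symbol per quartile of the normalised range.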
Findings
Experimental evaluation using University of California Riverside (UCR) data sets shows that the proposed schemes select the parameters well, achieving high classification accuracy and comparable efficiency with respect to the state-of-the-art methods SAX and auto_iSAX.
Originality/value
The originality of this paper lies in how the optimal parameters of SAX are found using the proposed estimation schemes. The first parameter, segment size, is estimated automatically by two approaches, and the second parameter, alphabet size, is estimated from the most frequent average (mean) value among segments.
Taehoon Ko, Je Hyuk Lee, Hyunchang Cho, Sungzoon Cho, Wounjoo Lee and Miji Lee
Abstract
Purpose
Quality management of products is an important part of the manufacturing process. One way to manage and assure product quality is to use machine learning algorithms based on the relationships among various process steps. The purpose of this paper is to integrate manufacturing, inspection and after-sales service data to make full use of machine learning algorithms for estimating product quality in a supervised fashion. The proposed frameworks and methods are applied to actual data associated with heavy machinery engines.
Design/methodology/approach
By following Lenzerini’s formula, manufacturing, inspection and after-sales service data from various sources are integrated. The after-sales service data are used to label each engine as normal or abnormal. In this study, one-class classification algorithms are used due to the class imbalance problem. To address the multi-dimensionality of time series data, the symbolic aggregate approximation algorithm is used for data segmentation. Then, a binary genetic algorithm-based wrapper approach is applied to the segmented data to find the optimal feature subset.
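The anomaly-scoring idea behind one-class classification can be illustrated with a minimal sketch (not the paper's exact pipeline, which combines SAX segmentation and genetic-algorithm feature selection with its chosen one-class classifiers): fit a statistical profile on normal engines only, then score any engine by its Mahalanobis distance from that profile.

```python
import numpy as np

def fit_normal_profile(X_normal):
    """Fit a one-class model from feature vectors of normal engines only.

    Returns the mean vector and the inverse covariance matrix; a small
    ridge term keeps the covariance invertible.
    """
    mu = X_normal.mean(axis=0)
    cov = np.cov(X_normal, rowvar=False) + 1e-6 * np.eye(X_normal.shape[1])
    return mu, np.linalg.inv(cov)

def anomaly_score(x, mu, cov_inv):
    """Mahalanobis distance from the normal profile; higher = more anomalous."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

Because the model is trained on normal engines alone, no labelled defective examples are needed; a threshold on the score then flags likely-defective engines before shipment.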
Findings
By employing machine learning-based anomaly detection models, an anomaly score for each engine is calculated. Experimental results show that the proposed method can detect defective engines with a high probability before they are shipped.
Originality/value
Through data integration, the actual customer-perceived quality from after-sales service is linked to data from manufacturing and inspection process. In terms of business application, data integration and machine learning-based anomaly detection can help manufacturers establish quality management policies that reflect the actual customer-perceived quality by predicting defective engines.
Naga Jyothi P., Rajya Lakshmi D. and Rama Rao K.V.S.N.
Abstract
Purpose
Analyzing Medicare data is a role undertaken by the government and commercial companies when accepting appeals and sanctioning the claims of those insured under Medicare. As Medicare data are voluminous and made up of heterogeneously typed columns, traditional approaches involve a laborious and time-consuming process. Understanding and processing such data sets and determining the role of each attribute in data analysis are tricky tasks that this research attempts to ease. The paper aims to discuss these issues.
Design/methodology/approach
This paper proposes Hierarchical Grouping (HG) with an experimental model to handle complex data and to analyze categorical data consisting of heterogeneously typed columns. The HG methodology starts with feature subset selection. HG then builds a structure by quantitatively estimating similarities and forming groups of features in the data. This is carried out by applying operations such as decomposition, which splits the data set and helps to analyze it thoroughly under different labels with different selected attributes of the Medicare data. The fixed-regression method includes re-indexing and grouping operations, which work well for the multiple keys (multi-index) of categorical data. The final stage applies multiple aggregation functions to each attribute for quantitative computation.
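The multi-key grouping and multiple-aggregation stage can be sketched as follows. The field names (`state`, `type`, `amount`) are hypothetical placeholders for illustration, not taken from the Medicare schema, and the sketch shows only the re-indexing/grouping/aggregation idea, not the full HG pipeline.

```python
from collections import defaultdict

def hierarchical_group(rows, keys, aggregations):
    """Group records by multiple categorical keys (a multi-index) and
    apply several aggregation functions to a numeric attribute.

    rows: list of dicts; keys: attribute names forming the group label;
    aggregations: mapping of result name -> function over a list of values.
    """
    groups = defaultdict(list)
    for row in rows:
        label = tuple(row[k] for k in keys)      # multi-key re-indexing
        groups[label].append(row["amount"])
    return {label: {name: fn(vals) for name, fn in aggregations.items()}
            for label, vals in groups.items()}
```

For example, grouping hypothetical claims by `("state", "type")` with `{"total": sum, "count": len}` yields one row of aggregates per multi-index label in a single pass over the data.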
Findings
The data are analyzed quantitatively with the HG mechanism. The results reported in this paper incur less computation cost and time than are usually incurred on the publicly available data sets.
Practical implications
The motive of this paper is to provide supporting work for tasks such as outlier detection, prediction, decision making and prescriptive tasks on multi-dimensional data.
Originality/value
It provides a new, efficient approach to analyzing Medicare data sets.
Luis Acedo, Marta Botella, Juan Carlos Cortés, J. Ignacio Hidalgo, Esther Maqueda and Rafael Jacinto Villanueva
Abstract
Purpose
The purpose of this paper is to study insulin pump therapy and accurate monitoring of glucose levels in diabetic patients, which are current research trends in diabetology. Both problems have a wide margin for improvement and promising applications in the control of parameters and levels involved.
Design/methodology/approach
The authors have registered data for the levels of glucose in diabetic patients throughout a day with a temporal resolution of 5 minutes, the amount and time of insulin administered and time of ingestion. The estimated quantity of carbohydrates is also monitored. A mathematical model for Type 1 patients was fitted piecewise to these data and the evolution of the parameters was analyzed.
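The piecewise-fitting strategy can be sketched in a few lines. The paper fits a physiological Type 1 diabetes model, which is not reproduced here; a plain linear model per time window stands in for it purely to show how parameters are re-estimated window by window, so that abrupt intra-day changes appear as jumps in the fitted values.

```python
import numpy as np

def piecewise_fit(t, glucose, n_windows):
    """Fit a simple model (here linear: G ~ a*t + b) separately on each
    time window and return the per-window parameters (a, b).

    The point is the piecewise strategy itself: each window gets its own
    least-squares fit, so parameter drift across the day is visible as
    differences between consecutive windows.
    """
    t = np.asarray(t, dtype=float)
    g = np.asarray(glucose, dtype=float)
    edges = np.linspace(t.min(), t.max(), n_windows + 1)
    params = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (t >= lo) & (t <= hi)
        a, b = np.polyfit(t[mask], g[mask], 1)   # least-squares line
        params.append((a, b))
    return params
```

With 5-minute sampling, one day gives 288 points, so even short windows leave enough samples for a stable per-window fit.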
Findings
The authors found that the model parameters change abruptly throughout a day for the same patient, but each set of parameters accounts precisely for the evolution of the glucose levels in the test patients. This fitting technique could be used to personalize treatments for specific patients and to predict glucose-level variations over hours or even shorter periods of time. In this way, more effective insulin pump therapies could be developed.
Originality/value
The proposed model could allow for the development of improved schedules on insulin pump therapies.
Hafiz Muhammad Athar Farid, Harish Garg, Muhammad Riaz and Gustavo Santos-García
Abstract
Purpose
Single-valued neutrosophic sets (SVNSs) are efficient models for addressing complex problems through three components, namely indeterminacy, truth and falsity. Taking advantage of SVNSs, this paper introduces some new aggregation operators (AOs) for information fusion of single-valued neutrosophic numbers (SVNNs) to meet multi-criteria group decision-making (MCGDM) challenges.
Design/methodology/approach
Einstein operators are well-known AOs for smooth approximation, and prioritized operators are suitable for exploiting prioritized relationships among multiple criteria. Motivated by the features of these operators, new hybrid aggregation operators are proposed, named the “single-valued neutrosophic Einstein prioritized weighted average (SVNEPWA) operator” and the “single-valued neutrosophic Einstein prioritized weighted geometric (SVNEPWG) operator.” These hybrid aggregation operators are more efficient and reliable for information aggregation.
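The Einstein-operation-based weighted average of SVNNs, on which the prioritized variants build, can be sketched as follows. This is a commonly used formulation of the SVN Einstein weighted average, not necessarily the exact operator variant of the paper; in the prioritized version the weights would additionally be derived from the priority ordering of the criteria rather than given directly.

```python
import math

def svnewa(svnns, weights):
    """Einstein weighted average of single-valued neutrosophic numbers.

    Each SVNN is a (truth, indeterminacy, falsity) triple in [0, 1];
    weights are nonnegative and sum to 1. Truth is aggregated with the
    Einstein t-conorm, indeterminacy and falsity with the Einstein t-norm.
    """
    pt = math.prod((1 + t) ** w for (t, _, _), w in zip(svnns, weights))
    qt = math.prod((1 - t) ** w for (t, _, _), w in zip(svnns, weights))
    ri = math.prod(i ** w for (_, i, _), w in zip(svnns, weights))
    si = math.prod((2 - i) ** w for (_, i, _), w in zip(svnns, weights))
    rf = math.prod(f ** w for (_, _, f), w in zip(svnns, weights))
    sf = math.prod((2 - f) ** w for (_, _, f), w in zip(svnns, weights))
    return ((pt - qt) / (pt + qt),
            2 * ri / (si + ri),
            2 * rf / (sf + rf))
```

A quick sanity check is idempotency: aggregating identical SVNNs returns the same SVNN, which any well-formed averaging operator should satisfy.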
Findings
A robust approach for MCGDM problems is developed to take advantage of newly developed hybrid operators. The effectiveness of the proposed MCGDM method is demonstrated by numerical examples. Moreover, a comparative analysis and authenticity analysis of the suggested MCGDM approach with existing approaches are offered to examine the practicality, validity and superiority of the proposed operators.
Originality/value
The study reveals that choosing a suitable AO, according to the expert’s preference, provides a wide range of compromise solutions for the decision-maker.
Abstract
Purpose
Enhancing the innovative behaviour of knowledge workers is a main task in knowledge management. The pay-for-performance policy is one of the management practices for innovative behaviour enhancement and has been gaining popularity in the knowledge-intensive context. However, it is still uncertain whether such practice really enhances the innovative behaviour of knowledge workers. To address this issue, this paper aims to propose and verify a conceptual framework incorporating kernel notions of social exchange, psychological empowerment and work engagement rooted in the social cognition paradigm.
Design/methodology/approach
The current study conducts a survey on 608 knowledge workers and their supervisors, validating the model structure and causal path pattern of the proposed framework. The causality is delineated from social exchange attributes of financial incentive, psychological empowerment and work engagement to innovative behaviour of knowledge workers.
Findings
Perceived organisational support and perceived pay equity are primary antecedents of symbolic incentive meaning reflected in the financial incentive of the pay-for-performance policy. Symbolic incentive meaning comprising dimensions of relative position, control and personal importance relates positively to innovative behaviour of knowledge workers. Psychological empowerment and work engagement are partial mediators of the positive relationship.
Originality/value
The current study explicates why and how social exchange attributes of the financial incentive provided by the pay-for-performance policy may enhance innovative behaviour of knowledge workers. Implications are supplied to knowledge management scholars and practitioners to optimise the pay-for-performance policy for innovative behaviour enhancement.
Abstract
Purpose
This paper aims to study the symbolic categorisations management accountants produce. It examines the categories they use to describe their work and analyses the meanings they attach to such categories. The aim is to explain how management accountants can follow a common occupational orientation despite the need to adjust their practices to the specificities of their local and organisational context. The author’s argument is that management accountants build symbolic categories to create a bridge between what they do and who they are. The author further argues that symbolic categories are needed to make sense of a practice in tension between a common aspirational orientation and heterogeneous local contexts.
Design/methodology/approach
This paper draws on a multiple case field study conducted by observation and interviews in a range of organisations.
Findings
This paper examines the empirical diversity of management accountants’ practices and perceptions through the symbolic categories they produce. The author finds that categorisation work constitutes a central mechanism to build a shared narrative despite heterogeneous situations. The author further shows that through symbolic categorisation work, a variety of activities ranging from bookkeeping through managerial support to hierarchical surveillance and challenge in the name of the shareholder are subsumed under stable labels. This, he argues, serves to mask financial accountability, shareholder orientation and hierarchical control behind a narrative of “support” and “partnership”.
Originality/value
This paper contributes to literature on management accountants’ identity by showing the central role played by symbolic categorisations. It also contributes to literature in accounting more generally by showing how symbolic categorisation work blurs the lines between “operational support” and “shareholder value creation”. The same words are used to refer to activities that managers consider helpful to make operational decisions and other activities that increase shareholder control and surveillance and encourage managers to internalise the frames and objectives of shareholder value creation. Symbolic categories that include hierarchical financial accountability within a narrative of “support” and “partnership” mask “financialisation” behind a rhetoric of “business orientation”.
Abstract
Purpose
To test empirical relationships between export market information use and export knowledge and export performance.
Design/methodology/approach
Confirmatory factor analysis, using LISREL 8.50, based on a postal survey. The setting selected was the Norwegian seafood industry, mainly consisting of a number of small and medium‐sized firms with a strong export dependency.
Findings
The results indicate that “instrumental/conceptual” use of information positively affects both export knowledge and export performance, while “symbolic” use does not affect either. Export knowledge is found to have no direct influence on export performance in this study.
Research limitations/implications
For generalisation purposes, longitudinal studies in multiple settings would be preferable to this cross‐sectional survey in a specific setting.
Practical implications
Firms accumulate knowledge and expertise by integrating and incorporating information that has been processed, interpreted and used. This study underscores the importance, for success in export markets, of a commitment to systematically generating, disseminating and responding to export market information. There are clear implications for the management of market intelligence and planning, to enhance the firm's performance.
Originality/value
Provides a better understanding of export market information use and its consequences, by integrating it with the concepts of export knowledge and export performance, and testing their structural relations.
Da Ruan, Jun Liu and Roland Carchon
Abstract
A flexible and realistic linguistic assessment approach is developed to provide a mathematical tool for synthesis and evaluation analysis of nuclear safeguards indicator information. This symbolic approach, which acts by direct computation on linguistic terms, is established based on fuzzy set theory. More specifically, a lattice‐valued linguistic algebra model, which is based on a logical algebraic structure of the lattice implication algebra, is applied to represent imprecise information and to deal with both comparable and incomparable linguistic terms (i.e. non‐ordered linguistic values). Within this framework, some weighted aggregation functions introduced by Yager are analyzed and extended to treat these kinds of lattice‐valued linguistic information. The application of these linguistic aggregation operators for managing nuclear safeguards indicator information is successfully demonstrated.
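Yager's ordinary ordered weighted averaging (OWA) operator, which the lattice-valued extension builds on, can be stated in a few lines. This is the standard numeric form, not the lattice-valued extension developed in the paper.

```python
def owa(values, weights):
    """Yager's ordered weighted averaging (OWA) operator.

    Weights are applied to the arguments after sorting them in
    descending order, so each weight attaches to a rank position,
    not to a particular source. Weights should sum to 1.
    """
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))
```

By choosing the weight vector, OWA interpolates between max (all weight on the first position), min (all weight on the last) and the plain arithmetic mean, which is what makes it attractive for synthesising indicator assessments.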
Gabriella Vindigni, Marco A. Janssen and Wander Jager
Abstract
An approach is introduced to combine survey data with multi‐agent simulation models of consumer behaviour to study the diffusion process of organic food consumption. This methodology is based on rough set theory, which is able to translate survey data into behavioural rules. The topic of rule induction has been extensively investigated in other fields, in particular in machine learning, where several efficient algorithms have been proposed. However, the peculiarity of the rough set approach is that the inconsistencies in a data set about consumer behaviour are not aggregated or corrected, since lower and upper approximations are computed. Thus, we expect that rough set theory is suitable to extract knowledge in the form of rules within a consistent theoretical framework of consumer behaviour.
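The lower and upper approximations that preserve such inconsistencies follow directly from the definitions and can be sketched as below: respondents are partitioned into equivalence classes by their observable attributes, and a class that only partly matches the target behaviour lands in the upper but not the lower approximation.

```python
from collections import defaultdict

def approximations(universe, indiscernible, target):
    """Lower and upper approximations of a concept (rough set theory).

    universe: iterable of objects; indiscernible: function mapping an
    object to its equivalence-class key (e.g. a tuple of attribute
    values); target: set of objects exhibiting the concept.
    """
    classes = defaultdict(set)
    for x in universe:
        classes[indiscernible(x)].add(x)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= target:       # class entirely inside the concept
            lower |= c
        if c & target:        # class at least touches the concept
            upper |= c
    return lower, upper
```

The boundary region (upper minus lower) holds exactly the inconsistent respondents, which is the information the rough set approach deliberately keeps rather than averages away.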