Search results

1 – 10 of over 12000
Article
Publication date: 19 September 2008

George Menexes and Stamatis Angelopoulos

The aim of the study is to propose certain agricultural policy measures for the financing and development of Greek farms, established by young farmers, based on the results of a…

Abstract

Purpose

The aim of the study is to propose certain agricultural policy measures for the financing and development of Greek farms, established by young farmers, based on the results of a clustering method suitable for handling socio‐economic categorical data.

Design/methodology/approach

The clustering method was applied to categorical data collected from 110 randomly selected investment plans of Greek agricultural farms. The investment plans were submitted to the “Region of Central Macedonia” administrative office, in the framework of the Operational Programme “Agricultural Development – Reform of the Countryside 2000‐2006”, and refer to agricultural investments by “Young Farmers”, according to the terms and conditions of Priority Axis III: “Improvement of the Age Composition of the Agricultural Population”. The input variables for the analyses were the farmers' gender, age class, education level and permanent place of residence, and the farms' agricultural activity, Human Labour Units (HLU) and viability level. All these variables were measured on nominal or ordinal scales. The available data were analyzed by means of a hierarchical cluster analysis method applied to the rows of an appropriate matrix in complete disjunctive form with 0/1 dummy coding. The similarities were measured through Benzécri's χ² distance (metric), while Ward's method was used as the criterion for cluster formation.
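The pipeline described above can be sketched as follows. This is a minimal illustration with randomly generated stand-in data, not the study's actual investment plans; the variable counts and levels are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Hypothetical stand-in for the 110 coded investment plans:
# 4 nominal variables, each with 3 levels, in complete disjunctive (one-hot) form
rng = np.random.default_rng(0)
levels = rng.integers(0, 3, size=(110, 4))
X = np.zeros((110, 12))
for j in range(4):
    X[np.arange(110), 3 * j + levels[:, j]] = 1.0

# Benzécri's chi-square distance: Euclidean distance between row profiles
# after rescaling each column by the square root of its mass
profiles = X / X.sum(axis=1, keepdims=True)
col_mass = X.sum(axis=0) / X.sum()
scaled = profiles / np.sqrt(col_mass)

Z = linkage(scaled, method="ward")               # Ward's criterion for cluster formation
labels = fcluster(Z, t=5, criterion="maxclust")  # cut the tree into five clusters
print(len(set(labels)))
```

Rescaling the profiles first lets `linkage`'s default Euclidean metric reproduce the chi-square distance, which Ward's method requires to behave sensibly.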

Findings

Five clusters of farms emerged, with statistically significant diverse socio‐economic profiles. The most important impact on the formation of the groups of farms was found to be related to the number of HLU, the farmers' level of education and gender. This derived typology allows for the determination of a flexible development and funding policy for the agricultural farms, based on the socio‐economic profile of the formulated clusters.

Research limitations/implications

One limitation of the current study derives from the fact that the clustering method used is suitable only for categorical, non‐metric data. Another comes from the relatively small number of investment plans used in the analysis. A larger sample covering other geographical regions is needed in order to confirm the current results and make nation‐wide comparisons and “tailor‐made” proposals for financing and development. Finally, it would be interesting to conduct longitudinal surveys in order to evaluate the effectiveness of the funding policy of the corresponding programme.

Originality/value

The study's results could be useful to practitioners and academics because certain agricultural policy measures for the financing and development of Greek farms established by young farmers are proposed. Additionally, the data analysis method used in this study offers an alternative way for clustering categorical data.

Details

EuroMed Journal of Business, vol. 3 no. 3
Type: Research Article
ISSN: 1450-2194

Keywords

Article
Publication date: 7 November 2019

Andika Rachman and R.M. Chandima Ratnayake

Corrosion loop development is an integral part of the risk-based inspection (RBI) methodology. The corrosion loop approach allows a group of piping to be analyzed simultaneously…

Abstract

Purpose

Corrosion loop development is an integral part of the risk-based inspection (RBI) methodology. The corrosion loop approach allows a group of piping to be analyzed simultaneously, thus reducing non-value-adding activities by eliminating repetitive degradation mechanism assessment for piping with similar operational and design characteristics. However, the development of the corrosion loop requires a rigorous process that involves a considerable amount of engineering man-hours. Moreover, the corrosion loop development process is a type of knowledge-intensive work that involves engineering judgement and intuition, causing the output to have high variability. The purpose of this paper is to reduce the amount of time and the output variability of the corrosion loop development process by utilizing machine learning and the group technology method.

Design/methodology/approach

To achieve the research objectives, k-means clustering and a non-hierarchical classification model are utilized to construct an algorithm that allows automation and a more effective and efficient corrosion loop development process. A case study is provided to demonstrate the functionality and performance of the corrosion loop development algorithm on an actual piping data set.
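The k-means grouping step can be sketched as follows. The attribute names and values here are illustrative assumptions, not the paper's piping data set:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical piping attributes: operating temperature (deg C),
# operating pressure (bar), fluid corrosivity index
rng = np.random.default_rng(42)
piping = rng.normal(loc=[120.0, 20.0, 0.5], scale=[40.0, 8.0, 0.2], size=(200, 3))

X = StandardScaler().fit_transform(piping)  # scale so no attribute dominates the distance
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Each label is a candidate corrosion loop: piping items with similar
# operational and design characteristics can be assessed together
print(np.bincount(km.labels_))
```

Standardizing before clustering matters because k-means uses Euclidean distance, and pressure in bar and temperature in degrees Celsius live on very different scales.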

Findings

The results show that corrosion loops generated by the algorithm have lower variability and higher coherence than corrosion loops produced by manual work. Additionally, the utilization of the algorithm simplifies the corrosion loop development workflow, which potentially reduces the amount of time required to complete the development. The application of corrosion loop development algorithm is expected to generate a “leaner” overall RBI assessment process.

Research limitations/implications

Although the algorithm allows a part of the corrosion loop development workflow to be automated, it is still deemed necessary to allow the incorporation of the engineer’s expertise, experience and intuition into the algorithm outputs in order to capture tacit knowledge and refine insights generated by the algorithm intelligence.

Practical implications

This study shows that the advancement of Big Data analytics and artificial intelligence can promote the substitution of machines for human labor in highly complex tasks requiring high qualifications and cognitive skills, including the inspection and maintenance management area.

Originality/value

This paper discusses a novel way of developing a corrosion loop. The development of the corrosion loop is an integral part of the RBI methodology, but it has received less attention among scholars in inspection and maintenance-related subjects.

Details

Journal of Quality in Maintenance Engineering, vol. 26 no. 3
Type: Research Article
ISSN: 1355-2511

Keywords

Open Access
Article
Publication date: 17 October 2019

Petros Maravelakis

The purpose of this paper is to review some of the statistical methods used in the field of social sciences.


Abstract

Purpose

The purpose of this paper is to review some of the statistical methods used in the field of social sciences.

Design/methodology/approach

A review is provided of some of the statistical methodologies used in areas such as survey methodology, official statistics, sociology, psychology, political science, criminology, public policy, marketing research, demography, education and economics.

Findings

Several areas are presented such as parametric modeling, nonparametric modeling and multivariate methods. Focus is also given to time series modeling, analysis of categorical data and sampling issues and other useful techniques for the analysis of data in the social sciences. Indicative references are given for all the above methods along with some insights for the application of these techniques.

Originality/value

This paper reviews some statistical methods that are used in the social sciences and draws researchers' attention to less popular methods. The purpose is not to give technical details, nor to refer to all the existing techniques or to all the possible areas of statistics. The focus is mainly on the applied aspect of the techniques, and the authors give insights about techniques that can be used to answer problems in the abovementioned areas of research.

Details

Journal of Humanities and Applied Social Sciences, vol. 1 no. 2
Type: Research Article
ISSN:

Keywords

Book part
Publication date: 15 July 2019

David E. Caughlin and Talya N. Bauer

Data visualizations in some form or another have served as decision-support tools for many centuries. In conjunction with advancements in information technology, data…

Abstract

Data visualizations in some form or another have served as decision-support tools for many centuries. In conjunction with advancements in information technology, data visualizations have become more accessible and more efficient to generate. In fact, virtually all enterprise resource planning and human resource (HR) information system vendors offer off-the-shelf data visualizations as part of decision-support dashboards as well as stand-alone images and displays for reporting. Plus, advances in programing languages and software such as Tableau, Microsoft Power BI, R, and Python have expanded the possibilities of fully customized graphics. Despite the proliferation of data visualization, relatively little is known about how to design data visualizations for displaying different types of HR data to different user groups, for different purposes, and with the overarching goal of improving the ways in which users comprehend and interpret data visualizations for decision-making purposes. To understand the state of science and practice as they relate to HR data visualizations and data visualizations in general, we review the literature on data visualizations across disciplines and offer an organizing framework that emphasizes the roles data visualization characteristics (e.g., display type, features), user characteristics (e.g., experience, individual differences), tasks, and objectives (e.g., compare values) play in user comprehension, interpretation, and decision-making. Finally, we close by proposing future directions for science and practice.

Details

Research in Personnel and Human Resources Management
Type: Book
ISBN: 978-1-78973-852-0

Keywords

Article
Publication date: 22 February 2013

Abhilash Ponnam and Jagrook Dawra

There is a lack of a framework that explicates how to determine the benefits that consumers desire from a product. The purpose of this article is to formulate a scientific…


Abstract

Purpose

There is a lack of a framework that explicates how to determine the benefits that consumers desire from a product. The purpose of this article is to formulate a scientific procedure for discerning the benefits that consumers seek from a product. The authors term this procedure visual thematic analysis (VTA). The VTA procedure is illustrated by discerning the benefits of mainstream (non‐financial) English newspapers.

Design/methodology/approach

The focus group method was used to collect data. These data were analyzed using visual thematic analysis, which involves multiple investigators and multi-dimensional scaling techniques applied in stages.
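The multi-dimensional scaling step can be sketched as follows. The dissimilarity matrix here is randomly generated for illustration; in the study it would come from investigator judgements about the newspaper attributes:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical dissimilarities among six newspaper attributes, e.g. derived from
# how often investigators judged two attributes to serve the same benefit
rng = np.random.default_rng(7)
sim = rng.random((6, 6))
sim = (sim + sim.T) / 2.0      # make the matrix symmetric
dissim = 1.0 - sim
np.fill_diagonal(dissim, 0.0)  # an attribute is identical to itself

# Project the attributes into 2-D; nearby points suggest a shared benefit theme
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
print(coords.shape)
```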

Findings

A total of 26 newspaper attributes combined to form eight distinct newspaper benefits namely ease of comprehension, journalistic values, critical insights, general news, entertainment, well‐being, classifieds and offers.

Practical implications

Obtained results may be used further: to segment the newspaper market based upon benefits sought, to position newspapers within the desired segment(s) and to fashion product mix in a way that appeals to the targeted segment(s).

Originality/value

This paper proposes a new method called “visual thematic analysis” for data reduction. One such application of VTA, “discerning product benefits”, is discussed in detail. Other applications of this technique mentioned in the paper are in the areas of data reduction when the researcher confronts a small sample size, data reduction of categorical variables, and scale development.

Article
Publication date: 14 October 2019

Naga Jyothi P., Rajya Lakshmi D. and Rama Rao K.V.S.N.

Analyzing Medicare data is a role undertaken by the government and commercial companies for accepting the appeals and sanctioning the claims of those insured under Medicare. As…

Abstract

Purpose

Analyzing Medicare data is a role undertaken by the government and commercial companies for accepting the appeals and sanctioning the claims of those insured under Medicare. As Medicare data are voluminous and made up of heterogeneously typed columns, traditional approaches involve a laborious and time-consuming process. Understanding and processing such data sets, and finding the role of each attribute in data analysis, are tricky tasks that this research attempts to ease. The paper aims to discuss these issues.

Design/methodology/approach

This paper proposes Hierarchical Grouping (HG) with an experimental model to handle complex data and the analysis of categorical data consisting of heterogeneously typed columns. The HG methodology starts with feature subset selection. HG forms a structure by quantitatively estimating similarities and forming groups of the features of the data. This is carried out by applying operations such as decomposition, which splits the data set and helps to analyze it thoroughly under different labels with different selected attributes of the Medicare data. The method of fixed regression includes re-indexing and grouping, which works well for multiple keys (a multi-index) of categorical data. The final stage of the structure applies multiple aggregation functions to each attribute for quantitative computation.
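The grouping, re-indexing and per-attribute aggregation described above can be sketched with pandas. The column names and values are illustrative assumptions, not the paper's Medicare data set:

```python
import pandas as pd

# Hypothetical Medicare-style table with heterogeneously typed columns
claims = pd.DataFrame({
    "state":    ["CA", "CA", "NY", "NY", "NY", "TX"],
    "provider": ["A", "B", "A", "A", "B", "B"],
    "claims":   [10, 4, 7, 3, 6, 9],
    "payment":  [1200.0, 450.0, 800.0, 300.0, 650.0, 980.0],
})

# Group on multiple categorical keys (a multi-index) and apply
# several aggregation functions, one per attribute, in a single pass
summary = claims.groupby(["state", "provider"]).agg(
    total_claims=("claims", "sum"),
    mean_payment=("payment", "mean"),
)
print(summary)
```

The resulting frame is indexed by the (state, provider) multi-key, so each group of rows can then be analyzed under its own label.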

Findings

The data are analyzed quantitatively with the HG mechanism. The results shown in this paper incurred lower computation cost and time than are usually incurred on the publicly available data sets.

Practical implications

The motive of this paper is to provide supporting groundwork for tasks such as outlier detection, prediction, decision making and prescriptive tasks for multi-dimensional data.

Originality/value

The paper provides a new, efficient approach to analyzing Medicare data sets.

Details

International Journal of Intelligent Unmanned Systems, vol. 8 no. 1
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 3 September 2024

Biplab Bhattacharjee, Kavya Unni and Maheshwar Pratap

Product returns are a major challenge for e-businesses as they involve huge logistical and operational costs. Therefore, it becomes crucial to predict returns in advance. This…

Abstract

Purpose

Product returns are a major challenge for e-businesses as they involve huge logistical and operational costs. Therefore, it becomes crucial to predict returns in advance. This study aims to evaluate different genres of classifiers for product return chance prediction and further to optimize the best-performing model.

Design/methodology/approach

An e-commerce data set having categorical attributes has been used for this study. Feature selection based on chi-square provides a reduced feature set which is used as input for model building. Predictive models are attempted using individual classifiers, ensemble models and deep neural networks. For performance evaluation, 75:25 train/test split and 10-fold cross-validation strategies are used. To improve the predictability of the best performing classifier, hyperparameter tuning is performed using different optimization methods such as random search, grid search, the Bayesian approach and evolutionary models (genetic algorithm, differential evolution and particle swarm optimization).
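The evaluation protocol described above can be sketched as follows. The data are synthetic stand-ins for the (anonymized) returns data set, and scikit-learn's GradientBoostingClassifier stands in for the XGBoost model used in the study:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for the e-commerce data set (label 1 = order returned)
X, y = make_classification(n_samples=400, n_features=12, n_informative=5,
                           random_state=0)
X = MinMaxScaler().fit_transform(X)                 # chi2 requires non-negative features

X_sel = SelectKBest(chi2, k=6).fit_transform(X, y)  # chi-square feature selection

# 75:25 holdout split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
holdout_acc = clf.score(X_te, y_te)

# 10-fold cross-validation on the same selected features
cv_acc = cross_val_score(clf, X_sel, y, cv=10).mean()
print(round(holdout_acc, 2), round(cv_acc, 2))
```

Hyperparameter tuning (random search, grid search, Bayesian or evolutionary) would then wrap the classifier; scikit-learn's `GridSearchCV` and `RandomizedSearchCV` cover the first two directly.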

Findings

A comparison of F1-scores revealed that the Bayesian approach outperformed all other optimization approaches in terms of accuracy. The predictability of the Bayesian-optimized model is further compared with that of other classifiers using experimental analysis. The Bayesian-optimized XGBoost model possessed superior performance, with accuracies of 77.80% and 70.35% for holdout and 10-fold cross-validation methods, respectively.

Research limitations/implications

Given the anonymized data, the effects of individual attributes on outcomes could not be investigated in detail. The Bayesian-optimized predictive model may be used in decision support systems, enabling real-time prediction of returns and the implementation of preventive measures.

Originality/value

There are very few reported studies on predicting the chance of order return in e-businesses. To the best of the authors’ knowledge, this study is the first to compare different optimization methods and classifiers, demonstrating the superiority of the Bayesian-optimized XGBoost classification model for returns prediction.

Details

Journal of Systems and Information Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1328-7265

Keywords

Abstract

Details

Machine Learning and Artificial Intelligence in Marketing and Sales
Type: Book
ISBN: 978-1-80043-881-1

Article
Publication date: 14 March 2016

Fatima Isiaka, Kassim S Mwitondi and Adamu M Ibrahim

The purpose of this paper is to propose a forward search algorithm for detecting and identifying natural structures arising in human-computer interaction (HCI) and human…

Abstract

Purpose

The purpose of this paper is to propose a forward search algorithm for detecting and identifying natural structures arising in human-computer interaction (HCI) and human physiological response (HPR) data.

Design/methodology/approach

The paper portrays aspects that are essential to modelling and to precision in detection. The method involves a developed algorithm for detecting outliers in data so as to recognise natural patterns in continuously arriving data such as HCI-HPR data. The detected categorical data are simultaneously labelled, based on the data's reliance on parametric rules, for the predictive models used in classification algorithms. Data were also simulated by a multivariate normal distribution method and used to compare against and validate the original data.

Findings

Results show that the forward search method provides robust features that are capable of repelling over-fitting in physiological and eye movement data.

Research limitations/implications

One limitation of the robust forward search algorithm is that when the residual values have more digits than the expected stack size, it normally yields an error warning; to counter this, the data sets are normally standardized by taking the logarithm of the model values before running the algorithm.

Practical implications

The authors conducted some of the experiments at individual residences, which may introduce environmental constraints.

Originality/value

The novelty of this method is the detection of outliers in HCI and HPR data sets based on Mahalanobis distances; it can also handle large data sets with p possible parameters. The improvement made to the algorithm is the application of more graphical display and the rendering of the residual plot.
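The Mahalanobis-distance flagging that underlies the forward search can be sketched as follows. This is not the authors' algorithm, only the basic distance test it builds on; the measurement names and values are illustrative assumptions:

```python
import numpy as np
from scipy.stats import chi2

# Synthetic stand-in for HCI/HPR measurements:
# fixation duration (ms), pupil size (mm), heart rate (bpm)
rng = np.random.default_rng(1)
data = rng.multivariate_normal(mean=[250.0, 3.5, 70.0],
                               cov=np.diag([900.0, 0.25, 25.0]), size=300)
data[:5] += np.array([400.0, 2.0, 30.0])   # inject five gross outliers

# Squared Mahalanobis distance of every observation from the sample centroid
mu = data.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(data, rowvar=False))
diff = data - mu
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Under multivariate normality, d2 is approximately chi-square with p degrees
# of freedom, so a high quantile gives a flagging cutoff
cutoff = chi2.ppf(0.999, df=data.shape[1])
outliers = np.flatnonzero(d2 > cutoff)
print(outliers)
```

A forward search refines this by starting from a small outlier-free subset and growing it, re-estimating the centroid and covariance at each step so that the outliers cannot mask themselves.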

Details

International Journal of Intelligent Computing and Cybernetics, vol. 9 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Open Access
Article
Publication date: 2 September 2019

Pedro Albuquerque, Gisela Demo, Solange Alfinito and Kesia Rozzett

Factor analysis is the most used tool in organizational research, and its widespread use in scale validations contributes to decision-making in management. However, standard factor…


Abstract

Purpose

Factor analysis is the most used tool in organizational research, and its widespread use in scale validations contributes to decision-making in management. However, standard factor analysis is not always applied correctly, mainly due to the misuse of ordinal data as interval data and the inadequacy of the former for classical factor analysis. The purpose of this paper is to present and apply Bayesian factor analysis for mixed data (BFAMD) in the context of empirical research, using the Bayesian paradigm for the construction of scales.

Design/methodology/approach

Ignoring the categorical nature of some variables often used in management studies, such as the popular Likert scale, may result in a model with false accuracy and possibly biased estimates. To address this issue, Quinn (2004) proposed a Bayesian factor analysis model for mixed data, which is capable of modeling ordinal (qualitative) and continuous (quantitative) data jointly and allows the inclusion of qualitative information through prior distributions for the model's parameters. This model, adopted here, presents considerable advantages and allows the estimation of the posterior distribution for the estimated latent variables, making the process of inference easier.
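A sketch of the model structure, following the general form of Quinn's mixed-data factor model (the notation here is illustrative, not the paper's):

```latex
% Latent response for observation i on variable j, with factor scores \phi_i
x^*_{ij} = \lambda_j' \phi_i + \epsilon_{ij}, \qquad \epsilon_{ij} \sim N(0, \psi_j)
% Continuous variables are observed directly:
y_{ij} = x^*_{ij}
% Ordinal variables are observed only through thresholds \gamma_{j,c}:
y_{ij} = c \iff \gamma_{j,c-1} < x^*_{ij} \le \gamma_{j,c}
% Prior distributions on \lambda_j (and on the thresholds) carry the
% analyst's a priori information into the posterior
```

Treating the ordinal responses as censored versions of a continuous latent variable is what lets the model handle Likert-type items without pretending they are interval data.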

Findings

The results show that BFAMD is an effective approach for scale validation in management studies, making both exploratory and confirmatory analyses possible for the estimated factors and also allowing analysts to insert a priori information regardless of the sample size, either by using the credible intervals for factor loadings or by conducting specific hypothesis tests. The flexibility of the Bayesian approach presented is counterbalanced by the fact that the main estimates used in factor analysis, such as uniqueness and communalities, commonly lose their usual interpretation due to the choice of using prior distributions.

Originality/value

Considering that the development of scales through factor analysis aims to contribute to appropriate decision-making in management, and given the increasing misuse of ordinal scales as interval scales in organizational studies, this proposal seems to be effective for mixed data analyses. The findings presented here are not intended to be conclusive or limiting but offer a useful starting point from which further theoretical and empirical research on Bayesian factor analysis can be built.

Details

RAUSP Management Journal, vol. 54 no. 4
Type: Research Article
ISSN: 2531-0488

Keywords
