Search results

1 – 10 of over 3000
Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems…

Abstract

Purpose

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Drawing on ideas from statistics, system theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification”. It clarifies the classification standards and data quality characteristics of index data and establishes a data-quality diagnosis system of “data review – data cleaning – data conversion – data inspection”. Using a decision tree, an explanatory structural model, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, a scientific and standardized classification and hierarchical design of the index system can be realized.
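
As a rough illustration of the indicator classification step described above, the following Python sketch groups candidate indicators with K-means; the indicator matrix, preprocessing and number of clusters are assumptions made for illustration, not the authors' actual data or pipeline.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))   # rows = observations, columns = candidate indicators (synthetic)

# Cluster the indicators themselves (not the observations), so standardize and
# transpose: each indicator becomes one sample described by its observed values.
Z = StandardScaler().fit_transform(X).T              # shape: (n_indicators, n_obs)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(Z)

# Indicators grouped into the same cluster carry similar information, so one
# representative per group can be kept to reduce redundancy.
for cluster_id in range(4):
    members = np.where(kmeans.labels_ == cluster_id)[0]
    print(f"indicator group {cluster_id}: indicators {members.tolist()}")
```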

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed based on systematic thinking about the whole and the individual: to ensure the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Open Access
Article
Publication date: 11 April 2023

Wenhao Yi, Mingnian Wang, Jianjun Tong, Siguang Zhao, Jiawang Li, Dengbin Gui and Xiao Zhang

The purpose of the study is to quickly identify significant heterogeneity of surrounding rock of tunnel face that generally occurs during the construction of large-section rock…

Abstract

Purpose

The purpose of the study is to quickly identify the significant heterogeneity of the surrounding rock at the tunnel face that commonly occurs during the construction of large-section rock tunnels for high-speed railways.

Design/methodology/approach

Relying on a support vector machine (SVM)-based classification model, the nominal classification of blastholes and nominal zoning and classification terms were used to demonstrate the heterogeneity identification method for the surrounding rock of the tunnel face, and the identification calculation was carried out for five test tunnels. Suggestions for the local optimization of the support structures of large-section rock tunnels were then put forward.
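
For readers unfamiliar with the underlying technique, the sketch below shows a generic SVM classification workflow with cross-validation in Python; the drilling-parameter features, rock-class labels and kernel settings are placeholders, not the paper's data or configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))      # placeholder drilling-parameter features
y = rng.integers(0, 3, size=120)   # placeholder nominal rock-mass classes

# An RBF-kernel SVM with standardized inputs; SVMs often behave well when the
# number of labeled samples is small, as the paper reports.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=5)
print("cross-validated accuracy:", scores.mean())
```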

Findings

The results show that, compared with the two classification models based on neural networks, the SVM-based classification model has a higher classification accuracy when the sample size is small, with an average accuracy of 87.9%. After the samples are replaced, the SVM-based classification model still reaches the same accuracy, indicating stronger generalization ability.

Originality/value

By applying the identification method described in this paper, the significant heterogeneity characteristics of the surrounding rock during two rounds of blasting were identified. The identification results are basically consistent with the actual condition of the tunnel face at the end of blasting and can provide a basis for the local optimization of support parameters.

Details

Railway Sciences, vol. 2 no. 1
Type: Research Article
ISSN: 2755-0907

Open Access
Article
Publication date: 23 July 2020

Rami Mustafa A. Mohammad

Spam email classification using data mining and machine learning approaches has attracted researchers' attention due to its obvious positive impact in protecting internet…

Abstract

Spam email classification using data mining and machine learning approaches has attracted researchers' attention due to its obvious positive impact in protecting internet users. Several features can be used for creating data mining and machine learning based spam classification models. Yet, spammers know that the longer they use the same set of features for tricking email users, the more likely it is that anti-spam parties will develop tools for combating this kind of annoying email message. Spammers therefore adapt by continuously reforming the group of features utilized for composing spam emails. For that reason, even though traditional classification methods achieve sound classification results, they are ineffective for the lifelong classification of spam emails because they are prone to the so-called “concept drift”. In the current study, an enhanced model is proposed for ensuring lifelong spam classification. For evaluation purposes, the overall performance of the suggested model is contrasted against various other stream mining classification techniques. The results prove the success of the suggested model as a lifelong spam email classification method.
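
As context for the stream mining comparison, the following Python sketch shows a generic test-then-train (prequential) loop with an incrementally updated classifier under a simulated concept drift; the features, drift point and model are illustrative assumptions, not the proposed model.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)
clf = SGDClassifier(random_state=0)              # linear model trained incrementally
classes = np.array([0, 1])                       # 0 = ham, 1 = spam

correct = total = 0
for t in range(5000):
    # Synthetic feature vector; the informative feature switches halfway
    # through, simulating spammers changing the features they rely on
    # (a simple stand-in for concept drift).
    x = rng.normal(size=(1, 10))
    y = int(x[0, 0] > 0) if t < 2500 else int(x[0, 1] > 0)

    if t > 0:                                    # test-then-train (prequential)
        correct += int(clf.predict(x)[0] == y)
        total += 1
    clf.partial_fit(x, [y], classes=classes)

print("prequential accuracy:", correct / total)
```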

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 8 March 2021

Mamdouh Abdel Alim Saad Mowafy and Walaa Mohamed Elaraby Mohamed Shallan

Heart diseases have become one of the leading causes of death among Egyptians. With 500 deaths per 100,000 occurring annually in Egypt, it has been noticed that medical data faces a…

Abstract

Purpose

Heart diseases have become one of the leading causes of death among Egyptians. With 500 deaths per 100,000 occurring annually in Egypt, it has been noticed that medical data face a high-dimensionality problem that leads to a decrease in the classification accuracy of heart data. The purpose of this study is therefore to improve the classification accuracy of heart disease data, helping doctors diagnose heart disease efficiently by using a hybrid classification technique.

Design/methodology/approach

This paper used a new approach based on the integration of dimensionality reduction techniques, namely multiple correspondence analysis (MCA) and principal component analysis (PCA), with fuzzy c-means (FCM) and then with both a multilayer perceptron (MLP) and radial basis function networks (RBFN), which separate patients into different categories based on their diagnosis results. A comparative study of performance was carried out across six structures (MLP, RBFN, MLP via FCM–MCA, MLP via FCM–PCA, RBFN via FCM–MCA and RBFN via FCM–PCA) to identify the best classifier.
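
The sketch below illustrates the general shape of such a hybrid pipeline (dimensionality reduction, then clustering, then a neural-network classifier) in Python. Since fuzzy c-means and MCA are not available in scikit-learn, K-means stands in for FCM and PCA handles the reduction step, and the patient data are synthetic; this is not the authors' exact configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))                      # placeholder patient features
y = (X[:, 0] + X[:, 1] > 0).astype(int)             # placeholder diagnosis label

X_red = PCA(n_components=5).fit_transform(X)        # dimensionality reduction
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_red)
X_aug = np.column_stack([X_red, clusters])          # append cluster membership

X_tr, X_te, y_tr, y_te = train_test_split(X_aug, y, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
mlp.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, mlp.predict(X_te)))
```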

Findings

The results show that the MLP via FCM–MCA classifier structure has the highest classification accuracy and performs best among the compared methods, and that smoking was the most important factor causing heart disease.

Originality/value

This paper shows the importance of integrating statistical methods in increasing the classification accuracy of heart disease data.

Details

Review of Economics and Political Science, vol. 6 no. 3
Type: Research Article
ISSN: 2356-9980

Open Access
Article
Publication date: 21 December 2023

Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Over the…

Abstract

Purpose

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Deep learning approaches have gained popularity over traditional methods in automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced approach that dynamically refines and amplifies model features to further elevate diagnostic capabilities. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.

Design/methodology/approach

To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
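
For orientation, a minimal PyTorch sketch of a CBAM block (channel attention followed by spatial attention) is given below; the reduction ratio, kernel size and the way the block is attached to ResNet50 are assumptions, not the paper's exact setup.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                 # global average pool branch
        mx = self.mlp(x.amax(dim=(2, 3)))                  # global max pool branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1) * x

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                  # channel-wise mean map
        mx = x.amax(dim=1, keepdim=True)                   # channel-wise max map
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1))) * x

class CBAM(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.channel = ChannelAttention(channels)
        self.spatial = SpatialAttention()

    def forward(self, x):
        return self.spatial(self.channel(x))

# Example: refine a ResNet50 final feature map of shape (batch, 2048, 7, 7).
features = torch.randn(2, 2048, 7, 7)
print(CBAM(2048)(features).shape)
```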

Findings

The ResNet50-CBAM model outperformed existing deep learning classification methods such as the convolutional neural network (CNN), achieving 99.43% accuracy, 99.01% recall, 98.7% precision and 99.25% AUC when compared with the existing classification methods on the same dataset.

Practical implications

Since the ResNet-CBAM fusion can capture spatial context while enhancing feature representation, it can be integrated into brain tumor classification software platforms for physicians, supporting enhanced clinical decision-making and improved brain tumor classification.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 14 December 2021

Mariam Elhussein and Samiha Brahimi

This paper aims to propose a novel way of using textual clustering as a feature selection method. It is applied to identify the most important keywords in the profile…

Abstract

Purpose

This paper aims to propose a novel way of using textual clustering as a feature selection method. It is applied to identify the most important keywords in profile classification. The method is demonstrated through the problem of sick-leave promoters on Twitter.

Design/methodology/approach

Four machine learning classifiers were used on a total of 35,578 tweets posted on Twitter. The data were manually labeled into two categories: promoter and nonpromoter. Classification performance was compared when the proposed clustering feature selection approach and the standard feature selection were applied.
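
To make the clustering-as-feature-selection idea concrete, the following Python sketch clusters TF-IDF terms and keeps one representative keyword per cluster before training a random forest; the example tweets, labels, cluster count and selection rule are illustrative assumptions, not the study's data or exact procedure.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

texts = ["sick leave certificate for sale", "buy medical report today",
         "lovely weather this morning", "watching the football match"] * 25
labels = [1, 1, 0, 0] * 25                      # 1 = promoter, 0 = nonpromoter

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(texts)

# Cluster the *terms* (columns of the TF-IDF matrix), then keep the term with
# the highest total weight from each cluster as a selected keyword feature.
term_vectors = X.T.toarray()                    # shape: (n_terms, n_docs)
term_clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(term_vectors)

keep = []
for k in range(4):
    members = np.where(term_clusters == k)[0]
    weights = term_vectors[members].sum(axis=1)          # total weight per term
    keep.append(members[np.argmax(weights)])             # keep the heaviest term

vocab = tfidf.get_feature_names_out()
print("selected keywords:", [vocab[i] for i in keep])

X_sel = X[:, keep].toarray()
clf = RandomForestClassifier(random_state=0).fit(X_sel, labels)
print("training accuracy:", clf.score(X_sel, labels))
```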

Findings

Random forest achieved the highest accuracy, 95.91%, which is higher than that reported in comparable work. Furthermore, using clustering as a feature selection method improved the sensitivity of the model from 73.83% to 98.79%. Sensitivity (recall) is the most important measure of classifier performance when detecting promoter accounts that exhibit spam-like behavior.

Research limitations/implications

The method applied is novel; more testing is needed on other datasets before generalizing its results.

Practical implications

The model applied can be used by Saudi authorities to report accounts that sell sick leaves online.

Originality/value

The research proposes a new way in which textual clustering can be used for feature selection.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 15 July 2021

Kalervo Järvelin and Pertti Vakkari

This paper analyses the research in Library and Information Science (LIS) and reports on (1) the status of LIS research in 2015 and (2) the evolution of LIS research…

Abstract

Purpose

This paper analyses the research in Library and Information Science (LIS) and reports on (1) the status of LIS research in 2015 and (2) the evolution of LIS research longitudinally from 1965 to 2015.

Design/methodology/approach

The study employs a quantitative intellectual content analysis of articles published in 30+ scholarly LIS journals, following the design by Tuomaala et al. (2014). In the content analysis, we classify articles along eight dimensions covering topical content and methodology.

Findings

The topical findings indicate that the earlier strong LIS emphasis on L&I services has declined notably, while scientific and professional communication has become the most popular topic. Information storage and retrieval has lost its earlier strong position towards the end of the period analyzed. Individuals are increasingly the units of observation. End-users' and developers' viewpoints have strengthened at the cost of the intermediaries' viewpoint. LIS research is methodologically increasingly scattered, since surveys, scientometric methods, experiments, case studies and qualitative studies have all gained in popularity. Consequently, LIS may have become more versatile in the analysis of its research objects during the years analyzed.

Originality/value

Among quantitative intellectual content analyses of LIS research, the study is unique in its scope: length of analysis period (50 years), width (8 dimensions covering topical content and methodology) and depth (the annual batch of 30+ scholarly journals).

Open Access
Article
Publication date: 5 June 2023

Elias Shohei Kamimura, Anderson Rogério Faia Pinto and Marcelo Seido Nagano

This paper aims to present a literature review of the most recent optimisation methods applied to Credit Scoring Models (CSMs).

Abstract

Purpose

This paper aims to present a literature review of the most recent optimisation methods applied to Credit Scoring Models (CSMs).

Design/methodology/approach

The research methodology employed technical procedures based on bibliographic and exploratory analyses. A traditional investigation was carried out using the Scopus, ScienceDirect and Web of Science databases. The selection and classification of papers took place in three steps, considering only studies written in English and published in electronic journals from 2008 to 2022. The investigation led to the selection of 46 publications (10 presenting literature reviews and 36 proposing CSMs).

Findings

The findings showed that CSMs are usually formulated using Financial Analysis, Machine Learning, Statistical Techniques, Operational Research and Data Mining Algorithms. The main databases used by the researchers were banks and the University of California, Irvine. The analyses identified 48 methods used by CSMs, the main ones being Logistic Regression (13%), Naive Bayes (10%) and Artificial Neural Networks (7%). The authors conclude that advances in credit scoring studies will require new hybrid approaches capable of integrating Big Data and Deep Learning algorithms into CSMs. These algorithms should consider practical issues in order to improve the level of adaptation and performance demanded of CSMs.
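
As a pointer for readers, the sketch below shows the most frequently identified technique, a logistic-regression credit scorer, in Python; the applicant features, labels and approval threshold are synthetic placeholders, not data from any of the surveyed studies.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 5))   # placeholder applicant features (income, debt ratio, ...)
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)  # 1 = repaid

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scorer = LogisticRegression().fit(X_tr, y_tr)
prob_repay = scorer.predict_proba(X_te)[:, 1]       # estimated repayment probability
print("AUC:", roc_auc_score(y_te, prob_repay))
print("approve first 5 applicants:", prob_repay[:5] > 0.5)
```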

Practical implications

The results of this study may have considerable practical implications for the application of CSMs. As the study aimed to demonstrate the application of optimisation methods, it is important that legal and ethical issues be better addressed in CSMs. It is also suggested that studies focused on micro and small companies, covering instalment sales and commercial credit, be advanced through improved or new CSMs.

Originality/value

The economic reality surrounding credit granting has made risk management a complex decision-making issue increasingly supported by CSMs. Therefore, this paper fills an important gap in the literature by presenting an analysis of recent advances in optimisation methods applied to CSMs. The main contribution of this paper consists of presenting the evolution of the state of the art and future trends in studies aimed at proposing better CSMs.

Details

Journal of Economics, Finance and Administrative Science, vol. 28 no. 56
Type: Research Article
ISSN: 2077-1886

Open Access
Article
Publication date: 6 August 2019

Anton Wiberg, Johan Persson and Johan Ölvander

This paper aims to review recent research in design for additive manufacturing (DfAM), including additive manufacturing (AM) terminology, trends, methods, classification of DfAM…

Abstract

Purpose

This paper aims to review recent research in design for additive manufacturing (DfAM), including additive manufacturing (AM) terminology, trends, methods, classification of DfAM methods and software. The focus is on the design engineer's role in the DfAM process, covering the design methods and tools that exist to aid the design process, including methods, guidelines and software to achieve design optimization and, in further steps, to increase the level of design automation for metal AM techniques. The research has a special interest in structural optimization and the coupling between topology optimization and AM.

Design/methodology/approach

The method used in the review consists of six rounds in which literature was sequentially collected, sorted and removed. A full presentation of the method used can be found in the paper.

Findings

Existing DfAM research has been divided into three main groups – component, part and process design – and based on the review of existing DfAM methods, a proposal for a DfAM process has been compiled. Design support suitable for use by design engineers is linked to each step in the compiled DfAM process. Finally, the review suggests a possible new DfAM process that allows a higher degree of design automation than today’s process. Furthermore, research areas that need to be further developed to achieve this framework are pointed out.

Originality/value

The review maps existing research in design for additive manufacturing and compiles a proposed design method. For each step in the proposed method, existing methods and software are coupled. This type of overall methodology, with connected methods and software, did not previously exist. The work also contributes a discussion regarding the future design process and automation.

Details

Rapid Prototyping Journal, vol. 25 no. 6
Type: Research Article
ISSN: 1355-2546

Open Access
Article
Publication date: 9 April 2018

Maheshwaran Gopalakrishnan and Anders Skoogh

The purpose of this paper is to identify the productivity improvement potentials from maintenance planning practices in manufacturing companies. In particular, the paper aims at…

Abstract

Purpose

The purpose of this paper is to identify the productivity improvement potentials from maintenance planning practices in manufacturing companies. In particular, the paper aims at understanding the connection between machine criticality assessment and maintenance prioritization in industrial practice, as well as providing the improvement potentials.

Design/methodology/approach

An explanatory mixed method research design was used in this study. Data from literature analysis, a web-based questionnaire survey, and semi-structured interviews were gathered and triangulated. Additionally, simulation experimentation was used to evaluate the productivity potential.

Findings

The connection between machine criticality and maintenance prioritization is assessed in an industrial set-up. The empirical findings show that maintenance prioritization is not based on machine criticality, as criticality assessment is non-factual, static and lacks a system view. It is with respect to these findings that ways to increase system productivity and future directions are charted.

Originality/value

In addition to the empirical results showing productivity improvement potentials, the paper emphasizes the need for a systems view when solving maintenance problems, i.e. solving maintenance problems for the whole factory. This contribution is equally important for industry and academics, as the maintenance organization needs to solve this problem with the help of the right decision support.

Details

International Journal of Productivity and Performance Management, vol. 67 no. 4
Type: Research Article
ISSN: 1741-0401

1 – 10 of over 3000