Search results

1 – 10 of 50
Article
Publication date: 30 November 2004

S B Kotsiantis and P E Pintelas

Abstract

Machine learning algorithms fed with data sets that include attendance records, test scores and other student information can provide tutors with powerful tools for decision‐making. Until now, much of the research has been limited to the relation between single variables and student performance; combining multiple variables as possible predictors of dropout has generally been overlooked. The aim of this work is to present a high‐level architecture and a case study for a prototype machine learning tool that can automatically recognize dropout‐prone students in university‐level distance learning classes. Tracking student progress is a time‐consuming job that can be handled automatically by such a tool. While tutors will still have an essential role in monitoring and evaluating student progress, the tool can compile the data required for reasonable and efficient monitoring. What is more, the application of the tool is not restricted to predicting dropout‐prone students: it can also be used to predict students' marks, to predict how many students will submit a written assignment, and so on. It can also help tutors explore data and build models for prediction, forecasting and classification. Finally, the underlying architecture is independent of the data set and as such can be used to develop other similar tools.
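The classification idea in this abstract can be illustrated with a toy sketch: a learner fed with attendance and test-score records flags students who resemble past dropouts. The features, data and k-nearest-neighbour learner below are illustrative assumptions, not the authors' actual algorithm or data set.

```python
# Minimal sketch of a dropout-risk classifier over attendance/performance
# attributes, in the spirit of the tool described above. Hypothetical data.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote of its k nearest training rows.
    Each training row is (features, label); label 1 = dropout-prone."""
    nearest = sorted(train, key=lambda row: euclidean(row[0], query))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

# (attendance fraction, mean test score); 1 = dropped out in a past cohort
history = [
    ((0.95, 82), 0), ((0.90, 75), 0), ((0.85, 68), 0),
    ((0.40, 35), 1), ((0.55, 40), 1), ((0.30, 50), 1),
]

at_risk = knn_predict(history, (0.45, 38))   # low attendance, low scores
safe    = knn_predict(history, (0.92, 80))   # high attendance, high scores
print(at_risk, safe)
```

A real tool would of course train on far richer student records, but the shape of the task (labelled past cohorts, a prediction for each current student) is the same.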

Details

Interactive Technology and Smart Education, vol. 1 no. 4
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 18 October 2018

Kalyan Nagaraj, Biplab Bhattacharjee, Amulyashree Sridhar and Sharvani GS

Abstract

Purpose

Phishing is one of the major threats affecting businesses worldwide in current times. Organizations and customers face the hazards arising out of phishing attacks because of anonymous access to vulnerable details. Such attacks often result in substantial financial losses. Thus, there is a need for effective intrusion detection techniques to identify and possibly nullify the effects of phishing. Classifying phishing and non-phishing web content is a critical task in information security protocols, and foolproof mechanisms have yet to be implemented in practice. The purpose of the current study is to present an ensemble machine learning model for classifying phishing websites.

Design/methodology/approach

A publicly available data set comprising 10,068 instances of phishing and legitimate websites was used to build the classifier model. Feature extraction was performed by deploying a group of methods, and the relevant features extracted were used to build the model. A twofold ensemble learner was developed by feeding the results of a random forest (RF) classifier into a feedforward neural network (NN). The performance of the ensemble classifier was validated using k-fold cross-validation. The twofold ensemble learner was implemented as a user-friendly, interactive decision support system for classifying websites as phishing or legitimate.
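The twofold (stacked) structure described here, where a first-stage learner produces scores that a second-stage network combines into the final decision, can be sketched in miniature. The tiny "forest" of one-rule stumps, the single logistic neuron, the fixed weights and the URL-style feature names below are all illustrative stand-ins, not the authors' RF/NN models.

```python
# Structural sketch of a twofold ensemble: stage-1 votes feed a stage-2 neuron.
import math

def stump(feature_index, threshold):
    """A one-rule tree: score 1.0 if the feature exceeds the threshold."""
    return lambda x: 1.0 if x[feature_index] > threshold else 0.0

# Stage 1: a tiny "forest" voting on hypothetical normalized URL features
# (has_ip_address, url_length, num_subdomains).
forest = [stump(0, 0.5), stump(1, 0.7), stump(2, 0.6)]

def stage1_scores(x):
    return [tree(x) for tree in forest]

# Stage 2: a single logistic neuron over the forest's votes.
# Weights are fixed here for illustration; in practice they are learned.
weights, bias = [1.5, 1.0, 1.2], -1.8

def stage2(scores):
    z = sum(w * s for w, s in zip(weights, scores)) + bias
    return 1 / (1 + math.exp(-z))      # probability of "phishing"

def classify(x):
    return "phishing" if stage2(stage1_scores(x)) > 0.5 else "legitimate"

print(classify([1.0, 0.9, 0.8]))   # all red flags raised
print(classify([0.0, 0.1, 0.2]))   # no red flags
```

The design point is that stage 2 learns how much to trust each stage-1 vote, rather than simply averaging them.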

Findings

Experimental simulations were performed to assess and compare the performance of the ensemble classifiers. The statistical tests estimated that the RF_NN model gave superior performance, with an accuracy of 93.41 per cent and a minimal mean squared error of 0.000026.

Research limitations/implications

The research data set used in this study is publicly available and easy to analyze. Comparative analysis with other recent real-time data sets must be performed to ensure the model generalizes against various security breaches. Different variants of phishing threats should also be detected, rather than focusing solely on phishing website detection.

Originality/value

To the best of the authors' knowledge, the twofold ensemble model has not been applied to the classification of phishing websites in any previous study.

Details

Journal of Systems and Information Technology, vol. 20 no. 3
Type: Research Article
ISSN: 1328-7265

Book part
Publication date: 30 September 2020

Madhulika Bhatia, Shubham Sharma, Madhurima Hooda and Narayan C. Debnath

Abstract

Recent research advances in artificial intelligence, machine learning and neural networks are becoming essential tools for building a wide range of intelligent applications. Moreover, machine learning helps to automate analytical model building. Machine-learning-based frameworks and approaches support well-informed, intelligent choices for improving daily eating habits and extending a healthy lifestyle. This book chapter presents a new machine learning approach for classifying meals and assessing their nutrient values based on weather conditions, along with new and innovative ideas for further study and research on health-care-related applications.

Details

Big Data Analytics and Intelligence: A Perspective for Health Care
Type: Book
ISBN: 978-1-83909-099-8

Article
Publication date: 8 May 2019

Dinara Davlembayeva, Savvas Papagiannidis and Eleftherios Alamanos

Abstract

Purpose

The sharing economy is a socio-economic system in which individuals acquire and distribute goods and services among each other for free or for compensation through internet platforms. The sharing economy has attracted the interest of the academic community, which examined the phenomenon from the economic, social and technological perspectives. The paper aims to discuss this issue.

Design/methodology/approach

Given the lack of an overarching analysis of the sharing economy, this paper employs a quantitative content analysis approach to explore and synthesise relevant findings to facilitate the understanding of this emerging phenomenon.

Findings

The paper identified and grouped findings under four themes: collaborative consumption practices, resources, drivers of user engagement and impacts. Each theme is discussed in relation to the three main research streams, with the aim of comparing findings and then putting forward an agenda for further research.

Originality/value

The paper offers a balanced analysis of the building blocks of the sharing economy, to identify emerging themes within each stream, to discuss any contextual differences from a multi-stakeholder perspective and to propose directions for future studies.

Details

Information Technology & People, vol. 33 no. 3
Type: Research Article
ISSN: 0959-3845

Article
Publication date: 17 October 2017

Xiling Yao, Seung Ki Moon and Guijun Bi

Abstract

Purpose

This paper aims to present a hybrid machine learning algorithm for additive manufacturing (AM) design feature recommendation during the conceptual design phase.

Design/methodology/approach

In the proposed hybrid machine learning algorithm, hierarchical clustering is performed on coded AM design features and target components, resulting in a dendrogram. Existing industrial application examples are used to train a supervised classifier that determines the final sub-cluster within the dendrogram containing the recommended AM design features.
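The two-step idea described here, clustering coded design features into a hierarchy and then classifying a target component into one sub-cluster, can be sketched in a few lines. The 2-D feature codes, single-linkage merging and nearest-centroid assignment below are illustrative simplifications, not the authors' actual coding scheme or supervised classifier.

```python
# Toy sketch: agglomerative (single-linkage) clustering of coded AM design
# features, then assignment of a target component to one sub-cluster.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def single_linkage(points, n_clusters):
    """Repeatedly merge the two closest clusters until n_clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: min(dist(a, b) for a in clusters[ij[0]] for b in clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)
    return clusters

def centroid(cluster):
    return tuple(sum(c) / len(cluster) for c in zip(*cluster))

# Hypothetical 2-D codes for AM design features (e.g. lattice, topology-
# optimized and conformal-channel feature families).
features = [(0.1, 0.2), (0.2, 0.1), (0.8, 0.9), (0.9, 0.8), (0.5, 0.9)]
clusters = single_linkage(features, n_clusters=2)

# Stand-in for the supervised step: map a target component's code to the
# nearest sub-cluster and recommend that cluster's design features.
target = (0.85, 0.85)
recommended = min(clusters, key=lambda c: dist(centroid(c), target))
print(sorted(recommended))
```

In the paper the final sub-cluster is chosen by a classifier trained on industrial application examples; the nearest-centroid rule here just stands in for that learned mapping.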

Findings

Through a case study of designing additive manufactured R/C car components, the proposed hybrid machine learning method was proven useful in providing feasible conceptual design solutions for inexperienced designers by recommending appropriate AM design features.

Originality/value

The proposed method helps inexperienced designers who are newly exposed to AM capabilities explore and utilize AM design knowledge computationally.

Details

Rapid Prototyping Journal, vol. 23 no. 6
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 25 September 2007

Joanna Jedrzejowicz and Jakub Neumann

Abstract

Purpose

This paper seeks to describe XML technologies and to show how they can be applied for developing web‐based courses and supporting authors who do not have much experience with the preparation of web‐based courses.

Design/methodology/approach

When developing online courses, academic staff have to address the following problem: how to keep pace with ever-changing technology. Using XML technologies helps to develop a learning environment that academics can use when designing web-based courses, preparing the materials and then reusing them.

Findings

The paper discusses the benefits of using XML for developing computer-based courses. Introducing a new version of an existing course can be reduced to editing the appropriate XML files, without any program changes, and an author can perform this task easily from any computer connected to the internet. What is more, using XML makes it possible to reuse data in different teaching situations.
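The separation described here, course content in XML and rendering logic in unchanged code, can be illustrated with Python's standard library. The `course`/`module`/`title` element names are hypothetical, not the authors' actual schema.

```python
# Sketch: a new course version means editing XML, not changing program code.
import xml.etree.ElementTree as ET

course_xml = """
<course title="Intro to Programming">
  <module id="m1"><title>Variables</title></module>
  <module id="m2"><title>Loops</title></module>
</course>
"""

root = ET.fromstring(course_xml)

# "Editing the XML file": revise one module for the new course version.
root.find("module[@id='m2']/title").text = "Loops and Iteration"

# The same rendering code serves every version of the course.
toc = [m.find("title").text for m in root.findall("module")]
print(toc)
```

In practice the XML would live in files under version control and be transformed (e.g. via XSLT) into web pages, but the reuse argument is the same: content edits never touch the rendering program.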

Research limitations/implications

The environment has only been used for two years and further research is needed on how user‐friendly the system really is and how it can still be improved.

Practical implications

The paper describes the environment which can be used to develop and reuse online materials, courses, metadata etc.

Originality/value

The paper offers practical help to academics interested in web‐based teaching.

Details

Interactive Technology and Smart Education, vol. 4 no. 2
Type: Research Article
ISSN: 1741-5659

Article
Publication date: 25 June 2019

Valery Gitis and Alexander Derendyaev

Abstract

Purpose

The purpose of this paper is to offer two Web-based platforms for systematic analysis of seismic processes. Both platforms are designed to analyze and forecast the state of the environment and, in particular, the level of seismic hazard. The first platform analyzes the fields representing the properties of the seismic process; the second platform forecasts strong earthquakes. Earthquake forecasting is based on a new one-class classification method.

Design/methodology/approach

The paper suggests an approach to the systematic forecasting of earthquakes and examines the results of tests. The approach is based on a new machine learning method, called the method of the minimum area of alarm. The method constructs a forecast rule that optimizes the probability of detecting target earthquakes in a learning sample set, provided that the area of the alarm zone does not exceed a predetermined limit.
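The constraint described here, maximizing detected target earthquakes while the alarm zone's area stays within a budget, can be sketched with a toy greedy rule over grid cells. The cells, scores and greedy ranking below are an illustration of the optimization shape only, not the authors' actual minimum-area-of-alarm algorithm.

```python
# Toy sketch: pick high-scoring cells for the alarm zone under an area budget.

# Hypothetical grid cells: (cell_id, precursor_score, n_target_quakes)
cells = [
    ("a", 0.9, 2), ("b", 0.8, 1), ("c", 0.4, 0),
    ("d", 0.7, 1), ("e", 0.2, 1), ("f", 0.1, 0),
]
area_per_cell = 1.0
max_alarm_area = 3.0   # the predetermined area limit

alarm_zone, area = [], 0.0
for cell_id, score, quakes in sorted(cells, key=lambda c: -c[1]):
    if area + area_per_cell > max_alarm_area:
        break
    alarm_zone.append(cell_id)
    area += area_per_cell

detected = sum(q for cid, _, q in cells if cid in alarm_zone)
total = sum(q for _, _, q in cells)
print(alarm_zone, f"{detected}/{total} target earthquakes inside alarm zone")
```

The quality trade-off is visible even in the toy: a larger area budget catches more target events at the cost of a bigger alarm zone.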

Findings

The paper presents the two platforms alongside the method of analysis. It was shown that these platforms can be used for the systematic analysis of seismic processes. Testing the earthquake forecasting method in several regions showed that the method of the minimum area of alarm has satisfactory forecast quality.

Originality/value

The described technology has two advantages: simplicity of configuration for a new problem area, and a combination of easy interactive analysis (supported by intuitive operations and a simplified user interface) with the detailed, comprehensive analysis of spatio-temporal processes intended for specialists. The method of the minimum area of alarm solves the problem of one-class classification. The method is original: it trains on precedents of anomalous objects and statistically takes normal objects into account.

Article
Publication date: 16 March 2010

Cataldo Zuccaro

Abstract

Purpose

The purpose of this paper is to discuss and assess the structural characteristics (conceptual utility) of the most popular classification and predictive techniques employed in customer relationship management and customer scoring and to evaluate their classification and predictive precision.

Design/methodology/approach

A sample of customers' credit ratings and socio‐demographic profiles is employed to evaluate the analytic and classification properties of discriminant analysis, binary logistic regression, artificial neural networks, the C5 algorithm, and regression trees employing the Chi‐squared Automatic Interaction Detector (CHAID).

Findings

With regard to interpretability and the conceptual utility of the parameters generated by the five techniques, logistic regression provides easily interpretable parameters through its logits. The logits can be interpreted in the same way as regression slopes. In addition, the logits can be converted to odds, providing a common-sense evaluation of the relative importance of each independent variable. The technique also provides robust statistical tests to evaluate the model parameters. Finally, both CHAID and the C5 algorithm provide visual tools (regression trees) and semantic rules (rule sets for classification) that facilitate the interpretation of model parameters. These can be highly desirable properties when the researcher attempts to explain the conceptual and operational foundations of the model.
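The logit-to-odds reading described above follows from exponentiating a fitted slope: exp(b) is the multiplicative change in the odds of the outcome per unit of the predictor. The coefficients and predictor names below are hypothetical, chosen only to illustrate the conversion.

```python
# Converting logistic-regression slopes (logits) to odds ratios.
import math

# Hypothetical fitted model: logit(p) = b0 + 0.25*income_10k - 0.80*late_payments
coefs = {"income_10k": 0.25, "late_payments": -0.80}

odds_ratios = {name: math.exp(b) for name, b in coefs.items()}
for name, oratio in odds_ratios.items():
    print(f"{name}: each unit multiplies the odds of a good rating by {oratio:.2f}")
```

A ratio above 1 means the predictor raises the odds of the outcome; below 1, it lowers them, which is the "common-sense evaluation" the abstract refers to.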

Originality/value

Most treatments of complex classification procedures have been undertaken idiosyncratically, that is, evaluating only one technique. This paper evaluates and compares the conceptual utility and predictive precision of five different classification techniques on a moderate sample size and provides clear guidelines in technique selection when undertaking customer scoring and classification.

Details

Journal of Modelling in Management, vol. 5 no. 1
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 3 July 2020

Azra Nazir, Roohie Naaz Mir and Shaima Qureshi

Abstract

Purpose

The trend of “Deep Learning for Internet of Things (IoT)” has gained fresh momentum, with enormous upcoming applications employing these models as their processing engine and the Cloud as their resource giant. But this picture leads to underutilization of the ever-increasing pool of IoT devices, which had already passed the 15 billion mark in 2015. Thus, it is high time to explore a different approach to tackle this issue, keeping in view the characteristics and needs of the two fields. Processing at the Edge can boost applications with real-time deadlines while complementing security.

Design/methodology/approach

This review paper contributes towards three cardinal directions of research in the field of deep learning (DL) for IoT. The first covers the categories of IoT devices and how the Fog can aid in overcoming the underutilization of millions of devices, forming the realm of the things for IoT. The second handles the immense computational requirements of DL models by uncovering specific compression techniques. An appropriate combination of these techniques, including regularization, quantization and pruning, can aid in building an effective compression pipeline for establishing DL models for IoT use cases. The third incorporates both these views and introduces a novel parallelization approach for setting up a distributed-systems view of DL for IoT.
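Two of the compression steps named here can be sketched on a toy weight vector: magnitude pruning (zero out the smallest weights) followed by uniform 8-bit quantization. This is illustrative only; real pipelines operate on trained deep models, usually with retraining between steps, and the weights below are made up.

```python
# Sketch of a two-step compression pipeline: prune, then quantize.

weights = [0.82, -0.05, 0.40, 0.01, -0.77, 0.03, -0.33, 0.60]

# Pruning: drop the 50% of weights with the smallest magnitude.
threshold = sorted(abs(w) for w in weights)[len(weights) // 2]
pruned = [w if abs(w) >= threshold else 0.0 for w in weights]

# Quantization: map surviving weights to 256 uniform levels over [-1, 1].
scale = 1.0 / 127
quantized = [round(w / scale) for w in pruned]     # int8-range codes
dequantized = [q * scale for q in quantized]       # values seen at inference

nonzero = sum(1 for w in pruned if w != 0.0)
print(f"{nonzero}/{len(weights)} weights kept", quantized)
```

Sparse storage of the pruned weights plus one byte per surviving value is what shrinks the model's memory and bandwidth footprint for Edge deployment.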

Findings

DL models are growing deeper with every passing year. Well-coordinated distributed execution of such models using the Fog suggests a promising future for the IoT application realm. It is realized that a vertically partitioned compressed deep model can handle the trade-off between size, accuracy, communication overhead, bandwidth utilization and latency, but at the expense of a considerably larger memory footprint. To reduce the memory budget, we propose to exploit Hashed Nets as potentially favorable candidates for distributed frameworks. However, the critical point between accuracy and size for such models needs further investigation.
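The Hashed Nets idea mentioned above rests on the hashing trick: a large "virtual" weight matrix shares a small pool of real parameters, with a hash function mapping each matrix position to a pool slot. The pool size, hash choice and values below are arbitrary illustrations of that mechanism, not a trained model.

```python
# Minimal sketch of hashed weight sharing: 256 virtual weights, 8 stored.
import zlib

POOL_SIZE = 8
pool = [0.1 * k for k in range(POOL_SIZE)]   # the only stored parameters

def virtual_weight(i, j):
    """Hash the (row, col) position into one of the shared pool slots."""
    slot = zlib.crc32(f"{i},{j}".encode()) % POOL_SIZE
    return pool[slot]

# A 16x16 virtual matrix (256 weights) backed by just 8 stored values.
matrix = [[virtual_weight(i, j) for j in range(16)] for i in range(16)]
distinct = {w for row in matrix for w in row}
print(len(distinct), "distinct values back", 16 * 16, "virtual weights")
```

Because the mapping is a deterministic hash, no index table needs storing, which is why the memory budget shrinks to the pool itself.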

Originality/value

To the best of our knowledge, no study has explored the inherent parallelism in deep neural network architectures for their efficient distribution over the Edge-Fog continuum. Besides covering techniques and frameworks that have tried to bring inference to the Edge, the review uncovers significant issues and possible future directions for endorsing deep models as processing engines for real-time IoT. The study is directed to both researchers and industrialists to take on various applications to the Edge for better user experience.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 12 April 2022

Monica Puri Sikka, Alok Sarkar and Samridhi Garg

Abstract

Purpose

This review discusses how, with the help of basic physics, computer algorithms in the form of recent advances such as machine learning and neural networks are applied in the textile industry. Scientists have linked these methods to the underlying structural and chemical science of textile materials and discovered several strategies for completing some of the most time-consuming tasks with ease and precision. Since the 1980s, computer algorithms and machine learning have been used to aid the majority of the textile testing process. With the rise in demand for automation, deep learning and neural networks now handle the majority of testing and quality control operations in the form of image processing.

Design/methodology/approach

The state-of-the-art of artificial intelligence (AI) applications in the textile sector is reviewed in this paper. Based on several research problems and AI-based methods, the current literature is evaluated. The research issues are categorized into three categories based on the operation processes of the textile industry, including yarn manufacturing, fabric manufacture and coloration.

Findings

AI-assisted automation has improved not only machine efficiency but also overall industry operations. AI's fundamental concepts have been examined against real-world challenges. Several scientists conducted the majority of the case studies, and they confirmed that image analysis, backpropagation and neural networks may be used as testing techniques for textile materials. AI can be used to automate processes in various circumstances.
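The image-analysis style of testing mentioned here can be sketched at its simplest: flag a fabric patch as defective when too many pixels deviate strongly from the patch's mean intensity. The patch data and thresholds below are hypothetical; real systems use learned models over camera images rather than a fixed rule.

```python
# Toy fabric-defect check over grayscale pixel intensities (0-255).

def defect_check(patch, deviation=60, tolerance=0.05):
    """Flag the patch if the share of anomalous pixels exceeds `tolerance`."""
    pixels = [p for row in patch for p in row]
    mean = sum(pixels) / len(pixels)
    anomalous = sum(1 for p in pixels if abs(p - mean) > deviation)
    return anomalous / len(pixels) > tolerance

uniform = [[120, 122, 119], [121, 120, 118], [119, 121, 120]]
flawed  = [[120, 122, 119], [121, 255, 118], [119, 121, 250]]

print(defect_check(uniform), defect_check(flawed))
```

Neural-network approaches replace the hand-set deviation rule with features learned from labelled defect images, but the input/output contract (patch in, pass/fail out) is the same.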

Originality/value

This research conducts a thorough analysis of artificial neural network applications in the textile sector.

Details

Research Journal of Textile and Apparel, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1560-6074
