Search results

1 – 10 of over 1000
Article
Publication date: 5 September 2016

Djamel Guessoum, Moeiz Miraoui and Chakib Tadj

Abstract

Purpose

The prediction of a context, especially of a user’s location, is a fundamental task in the field of pervasive computing. Such predictions open up a new and rich field of proactive adaptation for context-aware applications. This study aims to propose a methodology that predicts a user’s location on the basis of the user’s mobility history.

Design/methodology/approach

Contextual information is used to find the points of interest that a user visits frequently and to determine the sequence of these visits with the aid of spatial clustering, temporal segmentation and speed filtering.
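
A minimal sketch of the spatial-clustering and speed-filtering steps, assuming DBSCAN over GPS fixes as the clustering method (the abstract does not name a specific algorithm); the coordinates and thresholds below are illustrative only.

```python
# Hypothetical sketch: filter out high-speed GPS fixes, then cluster the remaining
# fixes into points of interest with DBSCAN. Data and thresholds are placeholders.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: latitude, longitude, speed in km/h (toy trajectory)
fixes = np.array([
    [45.501, -73.567, 1.2],
    [45.502, -73.566, 0.8],
    [45.530, -73.620, 0.5],
    [45.531, -73.621, 2.0],
    [45.515, -73.590, 45.0],   # fast-moving fix: in transit, not a visit
])

stationary = fixes[fixes[:, 2] < 5.0]                      # speed filtering
labels = DBSCAN(eps=0.005, min_samples=2).fit_predict(stationary[:, :2])
print(labels)                                              # cluster id per fix, -1 = noise
```

The resulting clusters stand in for points of interest; their visit order over time then provides the sequences used in the supervised prediction step.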

Findings

The proposed method was tested on a real data set using several supervised classification algorithms and yielded promising prediction results.

Originality/value

The method uses contextual information (current position, day of the week, time and speed) that can be acquired easily and accurately with the help of common sensors such as GPS.

Details

International Journal of Pervasive Computing and Communications, vol. 12 no. 3
Type: Research Article
ISSN: 1742-7371

Book part
Publication date: 18 July 2022

Yakub Kayode Saheed, Usman Ahmad Baba and Mustafa Ayobami Raji

Abstract

Purpose: This chapter aims to examine machine learning (ML) models for predicting credit card fraud (CCF).

Need for the study: With the advance of technology, the world increasingly relies on credit cards rather than cash in daily life. This creates a slew of new opportunities for fraudsters to abuse these cards. As of December 2020, global card losses reached $28.65 billion, up 2.9% from $27.85 billion in 2018, according to the Nilson 2019 report. To safeguard credit card users, issuers should provide a service that protects customers from potential risks. CCF has become a severe threat as internet shopping has grown. To this end, studies on automatic and real-time fraud detection are required. The most recent of these employ a variety of ML algorithms and techniques, owing to their advantageous properties, to construct well-fitting models for detecting fraudulent transactions. Because credit card risk data are huge and high-dimensional, feature selection (FS) is critical for improving classification accuracy and fraud detection.

Methodology/design/approach: The objective of this chapter is to construct a new model for credit card fraud detection (CCFD) based on principal component analysis (PCA) for FS and supervised ML techniques, namely K-nearest neighbour (KNN), ridge classifier, gradient boosting, quadratic discriminant analysis, AdaBoost and random forest, for classifying fraudulent and legitimate transactions. Compared with earlier experiments, the suggested approach demonstrates a high capacity for detecting fraudulent transactions. More precisely, the model’s resilience is built by integrating the power of PCA for determining the most useful predictive features. The experimental analysis was performed on the German and Taiwan credit card data sets.
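
As a rough illustration of this kind of pipeline (with placeholder data and settings, not the chapter's), the sketch below chains PCA-based feature reduction with a KNN classifier in scikit-learn and reports the three metrics quoted in the findings.

```python
# Hypothetical sketch of a PCA + KNN pipeline for fraud classification.
# The toy data below stand in for the German/Taiwan credit card data sets.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))              # transaction features
y = rng.integers(0, 2, size=1000)            # 1 = fraudulent, 0 = legitimate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Scale, keep the leading principal components, then classify with KNN.
model = make_pipeline(StandardScaler(), PCA(n_components=10),
                      KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

print(accuracy_score(y_te, pred), recall_score(y_te, pred), precision_score(y_te, pred))
```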

Findings: The experimental findings revealed that KNN achieved an accuracy of 96.29%, recall of 100% and precision of 96.29%, making it the best performing model on the German data set. The ridge classifier was the best performing model on the Taiwan credit data, with an accuracy of 81.75%, recall of 34.89% and precision of 66.61%.

Practical implications: The poor performance of the models on the Taiwan data revealed that it is an imbalanced credit card data set. The comparison of our proposed models with state-of-the-art credit card ML models showed that our results were competitive.

Open Access
Article
Publication date: 10 May 2023

Marko Kureljusic and Erik Karger

Abstract

Purpose

Accounting information systems are mainly rule-based, and data are usually available and well-structured. However, many accounting systems are yet to catch up with current technological developments. Thus, artificial intelligence (AI) in financial accounting is often applied only in pilot projects. Using AI-based forecasts in accounting enables proactive management and detailed analysis. However, thus far, there is little knowledge about which prediction models have already been evaluated for accounting problems. Given this lack of research, our study aims to summarize existing findings on how AI is used for forecasting purposes in financial accounting. Therefore, the authors aim to provide a comprehensive overview and agenda for future researchers to gain more generalizable knowledge.

Design/methodology/approach

The authors identify existing research on AI-based forecasting in financial accounting by conducting a systematic literature review. For this purpose, the authors used Scopus and Web of Science as scientific databases. The data collection resulted in a final sample size of 47 studies. These studies were analyzed regarding their forecasting purpose, sample size, period and applied machine learning algorithms.

Findings

The authors identified three application areas and presented details regarding the accuracy and AI methods used. Our findings show that sociotechnical and generalizable knowledge is still missing. Therefore, the authors also develop an open research agenda that future researchers can address to enable the more frequent and efficient use of AI-based forecasts in financial accounting.

Research limitations/implications

Owing to the rapid development of AI algorithms, our results can only provide an overview of the current state of research. Therefore, it is likely that new AI algorithms will be applied, which have not yet been covered in existing research. However, interested researchers can use our findings and future research agenda to develop this field further.

Practical implications

Given the high relevance of AI in financial accounting, our results have several implications and potential benefits for practitioners. First, the authors provide an overview of AI algorithms used in different accounting use cases. Based on this overview, companies can evaluate the AI algorithms that are most suitable for their practical needs. Second, practitioners can use our results as a benchmark of what prediction accuracy is achievable and should strive for. Finally, our study identified several blind spots in the research, such as ensuring employee acceptance of machine learning algorithms in companies; addressing these is necessary to implement AI in financial accounting successfully.

Originality/value

To the best of our knowledge, no study has yet been conducted that provided a comprehensive overview of AI-based forecasting in financial accounting. Given the high potential of AI in accounting, the authors aimed to bridge this research gap. Moreover, our cross-application view provides general insights into the superiority of specific algorithms.

Details

Journal of Applied Accounting Research, vol. 25 no. 1
Type: Research Article
ISSN: 0967-5426

Article
Publication date: 22 September 2021

Samar Ali Shilbayeh and Sunil Vadera

Abstract

Purpose

This paper aims to describe the use of a meta-learning framework for recommending cost-sensitive classification methods with the aim of answering an important question that arises in machine learning, namely, “Among all the available classification algorithms, and in considering a specific type of data and cost, which is the best algorithm for my problem?”

Design/methodology/approach

The framework is based on the idea of applying machine learning techniques to discover knowledge about the performance of different machine learning algorithms. It includes components that repeatedly apply different classification methods to data sets and measure their performance. The characteristics of the data sets, combined with the algorithms and their performance, provide the training examples. A decision tree algorithm is applied to the training examples to induce the knowledge, which can then be used to recommend algorithms for new data sets. The paper contributes to both meta-learning and cost-sensitive machine learning; neither field is new, but the contribution lies in building a recommender that recommends the optimal cost-sensitive approach for a given data problem. The proposed solution is implemented in WEKA, and, unlike the systems it is compared with, it takes the misclassification cost into consideration during the learning process.
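
A minimal sketch of the meta-learning idea described above, assuming simple meta-features (rows, columns, class count) and a scikit-learn decision tree as the meta-learner; the actual framework is implemented in WEKA, uses richer data-set characteristics and also accounts for misclassification cost, none of which is reflected in this illustration.

```python
# Hypothetical sketch of a meta-learner that recommends a classifier for a data set.
# Candidate algorithms and meta-features are illustrative, not those of the paper.
import numpy as np
from sklearn.datasets import load_breast_cancer, load_iris, load_wine
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

candidates = {"knn": KNeighborsClassifier(), "nb": GaussianNB()}

def meta_features(X, y):
    return [X.shape[0], X.shape[1], len(np.unique(y))]   # rows, columns, classes

meta_X, meta_y = [], []
for loader in (load_iris, load_wine, load_breast_cancer):
    X, y = loader(return_X_y=True)
    # Run every candidate on the data set and record which one performs best.
    scores = {name: cross_val_score(clf, X, y, cv=3).mean()
              for name, clf in candidates.items()}
    meta_X.append(meta_features(X, y))
    meta_y.append(max(scores, key=scores.get))

# Induce the recommendation knowledge with a decision tree over the meta-examples.
recommender = DecisionTreeClassifier().fit(meta_X, meta_y)
print(recommender.predict([meta_features(*load_iris(return_X_y=True))]))
```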

Findings

The proposed solution was evaluated by applying it to different data sets and comparing the results with existing studies available in the literature. The results show that the developed meta-learning solution produces better results than METAL, a well-known meta-learning system.

Originality/value

Although meta-learning work has been done before, this paper presents a new meta-learning framework that is cost-sensitive.

Details

Journal of Modelling in Management, vol. 17 no. 3
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 16 January 2017

Shervan Fekriershad and Farshad Tajeripour

Abstract

Purpose

The purpose of this paper is to propose a color-texture classification approach that uses color sensor information and texture features jointly. High accuracy, low noise sensitivity and low computational complexity are the stated aims of the proposed approach.

Design/methodology/approach

Local binary patterns (LBP) are among the most efficient texture analysis operators. The proposed approach includes two steps. First, a noise-resistant version of color LBP is proposed to decrease its sensitivity to noise. This step combines color sensor information using an AND operation. In the second step, a significant-points selection algorithm is proposed to select the significant LBPs. This phase decreases the final computational complexity while increasing the accuracy rate.
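
For readers unfamiliar with the underlying operator, the sketch below computes the basic 3 x 3 grayscale LBP code per pixel; the noise-resistant color variant and the significant-points selection proposed in the paper build on this idea, and the input array here is purely illustrative.

```python
# Hypothetical sketch of the basic 3x3 LBP operator (grayscale, not the paper's HCLBP).
import numpy as np

def basic_lbp(img):
    """Return one 8-bit LBP code per interior pixel by thresholding its 8 neighbours."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                if img[i + di, j + dj] >= img[i, j]:
                    code |= 1 << bit
            codes[i - 1, j - 1] = code
    return codes

img = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)   # toy grayscale patch
print(basic_lbp(img))
```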

Findings

The proposed approach is evaluated using the Vistex, Outex and KTH-TIPS-2a data sets and compared with several state-of-the-art methods. It is experimentally demonstrated that the proposed approach achieves the highest accuracy. Two further experiments show the low noise sensitivity and low computational complexity of the proposed approach in comparison with previous versions of LBP. Rotation invariance, multi-resolution analysis and general usability are other advantages of the proposed approach.

Originality/value

In the present paper, a new version of LBP, called hybrid color local binary patterns (HCLBP), is proposed. HCLBP can be used in many image processing applications to extract color/texture features jointly. A significant-points selection algorithm is also proposed, for the first time, to select the key points of images.

Article
Publication date: 14 May 2019

Georgia Boskou, Efstathios Kirkos and Charalambos Spathis

Abstract

Purpose

This paper aims to assess internal audit quality (IAQ) by using automated textual analysis of disclosures of internal audit mechanisms in annual reports.

Design/methodology/approach

This paper uses seven text mining techniques to construct classification models that predict whether companies listed on the Athens Stock Exchange are audited by a Big 4 firm, an auditor selection that prior research finds is associated with higher IAQ. The classification accuracy of the models is compared to predictions based on financial indicators.
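
As a rough, self-contained illustration of this kind of text-based classification (not a reproduction of the paper's seven techniques), the sketch below turns disclosure text into term and n-gram features and fits a logistic regression model to predict a Big 4 auditor; all texts and labels are invented placeholders.

```python
# Hypothetical sketch: n-gram features from annual-report disclosures predicting
# whether a company is audited by a Big 4 firm. Texts and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

disclosures = [
    "the internal audit function reports to the audit committee quarterly",
    "internal control procedures are reviewed by management annually",
    "the audit committee oversees the independence of the internal auditors",
    "no dedicated internal audit department has been established",
]
big4 = [1, 0, 1, 0]   # 1 = audited by a Big 4 firm (toy labels)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(disclosures, big4)
print(model.predict(["internal audit reports directly to the audit committee"]))
```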

Findings

The results show that classification models developed using text analysis can be a promising alternative proxy for assessing IAQ. Terms, n-grams and financial indicators of a company, as presented in the annual reports, can provide information on IAQ.

Practical implications

This study offers a novel approach to assessing the IAQ by applying textual analysis techniques. These findings are important for those who oversee internal audit activities, assess internal audit performance or want to improve or evaluate internal audit systems, such as managers or audit committees. Practitioners, regulators and investors may also extract useful information on internal audit and internal auditors by using textual analysis. The insights are also relevant for external auditors who are required to consider various aspects of corporate governance, including IAQ.

Originality/value

IAQ has been the subject of thorough examination. However, this study is the first attempt, to the authors’ knowledge, to introduce an innovative text mining approach utilizing unstructured textual disclosure from annual reports to develop a proxy for IAQ. It contributes to the internal audit field literature by further exploring concerns relevant to IAQ.

Details

Managerial Auditing Journal, vol. 34 no. 8
Type: Research Article
ISSN: 0268-6902

Article
Publication date: 29 April 2021

Riyaz Abdullah Sheikh, Surbhi Bhatia, Sujit Gajananrao Metre and Ali Yahya A. Faqihi

Abstract

Purpose

In spite of the popularity of learning analytics (LA) in higher education institutions (HEIs), the success rate of LA projects and the value gained from them remain limited and unclear. The existing research on LA focuses more on tactical capabilities than on their effect on organizational value. The key questions are: what are the expected benefits for the institution, and how can the investment in LA bring tangible value? In this research, the authors propose a value realization framework for LA that extends the existing framework of information technology value.

Design/methodology/approach

The study includes a detailed literature review focusing on the importance of LA, existing frameworks and LA adoption challenges. Based on the identified research gap, a new framework is designed. The framework depicts several constructs and their relationships, focusing on strategic value realization. Furthermore, the study includes three case studies to validate the framework.

Findings

The framework suggests that leveraging LA for strategic value demands adequate investment not only in data infrastructure and analytics but also in staff skill training and development and in strategic planning. Universities need to assess the strategic role of LA and invest wisely in quality data, analytical tools and skilled staff who are aware of the latest technologies and of data-driven opportunities for continuous improvement in learning.

Originality/value

The framework permits education leaders to design better strategies for attaining excellence in learning and teaching and to furnish learners with new data for making the best possible decisions about their learning. The authors believe that the adoption of this framework, together with consistent and efficient investment in learning analytics by the higher education sector, will lead to better outcomes for learners, institutions and wider society. The research also proposes two approaches and eleven research agendas for future research based on the framework. The first approach is based on the constructs and their relationships in LA value creation, whereas the second focuses on identifying the problems associated with it.

Details

Journal of Applied Research in Higher Education, vol. 14 no. 2
Type: Research Article
ISSN: 2050-7003

Article
Publication date: 3 November 2020

Femi Emmanuel Ayo, Olusegun Folorunso, Friday Thomas Ibharalu and Idowu Ademola Osinuga

Abstract

Purpose

Hate speech is an expression of intense hatred. Twitter has become a popular source of data for the prediction and monitoring of abusive behaviors. Hate speech detection with social media data has received special research attention in recent studies; hence, there is a need to design a generic metadata architecture and an efficient feature extraction technique to enhance hate speech detection.

Design/methodology/approach

This study proposes hybrid embeddings enhanced with a topic inference method and an improved cuckoo search neural network for hate speech detection in Twitter data. The proposed method uses a hybrid embeddings technique that includes term frequency-inverse document frequency (TF-IDF) for word-level feature extraction and long short-term memory (LSTM), a variant of the recurrent neural network architecture, for sentence-level feature extraction. The extracted features from the hybrid embeddings then serve as input to the improved cuckoo search neural network for the prediction of a tweet as hate speech, offensive language or neither.
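
A minimal sketch of the hybrid-embeddings idea, assuming TF-IDF word-level features concatenated with an LSTM sentence embedding; a plain softmax layer stands in for the paper's improved cuckoo search neural network, and all data and hyperparameters below are placeholders.

```python
# Hypothetical sketch: hybrid word-level (TF-IDF) and sentence-level (LSTM) features
# for three-class tweet classification. Toy data; not the paper's architecture.
import numpy as np
import tensorflow as tf
from sklearn.feature_extraction.text import TfidfVectorizer

tweets = ["i really despise that group", "have a nice day everyone", "what an awful comment"]
labels = np.array([0, 2, 1])    # 0 = hate speech, 1 = offensive language, 2 = neither

# Word-level features: TF-IDF vectors.
x_tfidf = TfidfVectorizer().fit_transform(tweets).toarray().astype("float32")

# Sentence-level features: padded token ids fed to an LSTM.
tok = tf.keras.preprocessing.text.Tokenizer(num_words=1000)
tok.fit_on_texts(tweets)
x_seq = tf.keras.preprocessing.sequence.pad_sequences(tok.texts_to_sequences(tweets), maxlen=10)

seq_in = tf.keras.Input(shape=(10,), dtype="int32")
tfidf_in = tf.keras.Input(shape=(x_tfidf.shape[1],))
sent = tf.keras.layers.LSTM(16)(tf.keras.layers.Embedding(1000, 16)(seq_in))
hybrid = tf.keras.layers.Concatenate()([sent, tfidf_in])        # hybrid embedding
out = tf.keras.layers.Dense(3, activation="softmax")(hybrid)    # stand-in classifier

model = tf.keras.Model([seq_in, tfidf_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit([x_seq, x_tfidf], labels, epochs=2, verbose=0)
print(model.predict([x_seq, x_tfidf]).argmax(axis=1))
```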

Findings

The proposed method showed better results when tested on the collected Twitter data sets compared with other related methods. To validate this performance, a paired-sample t-test and post hoc multiple comparisons were used to compare the means of the proposed method with those of other related methods for hate speech detection.

Research limitations/implications

Finally, the evaluation results showed that the proposed method outperforms other related methods, with a mean F1-score of 91.3.

Originality/value

The main novelty of this study is the use of an automatic topic spotting measure based on naïve Bayes model to improve features representation.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 4 October 2021

Guang-Yih Sheu and Chang-Yu Li

Abstract

Purpose

In a classroom, a support vector machines model with a linear kernel, a neural network and the k-nearest neighbors algorithm failed to detect simulated money laundering accounts generated from the Panama papers data set of the offshore leak database. This study aims to resolve this failure.

Design/methodology/approach

The authors build a graph attention network with three modules as a new money laundering detection tool. A feature extraction module encodes the input data into a weighted graph structure in which directed edges and their end vertices denote financial transactions. Each directed edge carries weights storing the frequency of money transactions and other significant features. Social network metrics serve as node features characterizing an account’s roles in a money laundering typology. A graph attention module implements a self-attention mechanism for highlighting target nodes. A classification module further filters out such targets using the biased rectified linear unit function.
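
A minimal numpy sketch of the self-attention step in a single graph-attention layer (not the authors' three-module network); the toy graph, weights and dimensions are assumptions made for illustration only.

```python
# Hypothetical sketch of one graph-attention layer: each node attends over its
# in-neighbours and aggregates their transformed features. Toy graph and weights.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, f_in, f_out = 4, 3, 2
H = rng.normal(size=(n_nodes, f_in))            # node (account) features
edges = [(0, 1), (2, 1), (1, 3), (3, 3)]        # directed transactions i -> j
W = rng.normal(size=(f_in, f_out))              # shared linear transform
a = rng.normal(size=(2 * f_out,))               # attention vector

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

Z = H @ W
H_out = np.zeros((n_nodes, f_out))
for j in range(n_nodes):
    nbrs = [i for i, k in edges if k == j]      # in-neighbours of node j
    if not nbrs:
        continue
    scores = np.array([leaky_relu(a @ np.concatenate([Z[j], Z[i]])) for i in nbrs])
    alpha = np.exp(scores) / np.exp(scores).sum()          # softmax over neighbours
    H_out[j] = sum(w * Z[i] for w, i in zip(alpha, nbrs))  # attention-weighted sum

print(H_out)
```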

Findings

Owing to the highlighting of nodes by the self-attention mechanism, the proposed graph attention network outperforms a Naïve Bayes classifier, the random forest method and a support vector machines model with a radial kernel in detecting money laundering accounts. The Naïve Bayes classifier produces the second most accurate classifications.

Originality/value

This paper develops a new money laundering detection tool that outperforms existing methods. The new tool produces more accurate detections of money laundering, gives more precise warnings of money laundering accounts or links and processes financial transaction records efficiently regardless of their volume.

Details

Journal of Money Laundering Control, vol. 25 no. 3
Type: Research Article
ISSN: 1368-5201

Book part
Publication date: 19 November 2014

Daniel Felix Ahelegbey and Paolo Giudici

Abstract

The latest financial crisis has stressed the need to understand the world financial system as a network of interconnected institutions, in which financial linkages play a fundamental role in the spread of systemic risks. In this paper we propose to enrich the topological perspective of network models with a more structured statistical framework, that of Bayesian Gaussian graphical models. From a statistical viewpoint, we propose a new class of hierarchical Bayesian graphical models that can split correlations between institutions into country-specific and idiosyncratic ones, in a way that parallels the decomposition of returns in the well-known Capital Asset Pricing Model. From a financial economics viewpoint, we suggest a way to model systemic risk that can explicitly take into account frictions between different financial markets, which is particularly suited to studying the ongoing banking union process in Europe. From a computational viewpoint, we develop a novel Markov chain Monte Carlo algorithm based on Bayes factor thresholding.
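
As a rough illustration of the CAPM-style decomposition the authors allude to (assumed notation, not their hierarchical Bayesian model), a country-level factor structure splits each institution's return, and hence the correlations, into a country-specific and an idiosyncratic component:

```latex
% r_{i,t}: return of institution i, r_{c(i),t}: return of its country index,
% \varepsilon_{i,t}: idiosyncratic component (uncorrelated with the country factor).
r_{i,t} = \alpha_i + \beta_i \, r_{c(i),t} + \varepsilon_{i,t},
\qquad
\operatorname{Cov}\!\left(r_{i,t}, r_{j,t}\right)
  = \beta_i \beta_j \operatorname{Var}\!\left(r_{c,t}\right)
  + \operatorname{Cov}\!\left(\varepsilon_{i,t}, \varepsilon_{j,t}\right)
  \quad \text{for } c(i) = c(j) = c .
```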
