Search results
Zhuoxuan Jiang, Chunyan Miao and Xiaoming Li
Abstract
Purpose
Recent years have witnessed the rapid development of massive open online courses (MOOCs). With more and more courses being produced by instructors and taken by learners all over the world, unprecedented massive educational resources are aggregated. These resources include videos, subtitles, lecture notes, quizzes, etc. on the teaching side, and forum contents, Wikis, logs of learning behavior, logs of homework, etc. on the learning side. However, the data are both unstructured and diverse. To facilitate knowledge management and mining on MOOCs, extracting keywords from the resources is important. This paper aims to adapt state-of-the-art techniques to MOOC settings and evaluate their effectiveness on real data. In terms of practice, this paper also tries to answer, for the first time, to what extent MOOC resources can support keyword extraction models, and how much human effort is required to make the models work well.
Design/methodology/approach
Based on which side generates the data, i.e. instructors or learners, the data are classified into teaching resources and learning resources, respectively. The approach used on teaching resources is based on supervised machine learning models with labels, while the approach used on learning resources is based on an unsupervised graph model without labels.
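The abstract does not give the graph model's details; as a rough illustration, a TextRank-style ranking over a word co-occurrence graph, a common label-free keyword extraction technique, can be sketched as follows (the window size, damping factor and toy token list are illustrative assumptions, not the paper's settings):

```python
from collections import defaultdict

def textrank_keywords(tokens, window=3, damping=0.85, iters=50, top_k=5):
    """Rank words by a PageRank-style score over a co-occurrence graph."""
    # Build an undirected co-occurrence graph within a sliding window.
    # (A real system would also filter stopwords and non-content words.)
    graph = defaultdict(set)
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + window, len(tokens))):
            if tokens[i] != tokens[j]:
                graph[tokens[i]].add(tokens[j])
                graph[tokens[j]].add(tokens[i])
    # Iterate the PageRank update a fixed number of times.
    score = {w: 1.0 for w in graph}
    for _ in range(iters):
        score = {
            w: (1 - damping) + damping * sum(
                score[n] / len(graph[n]) for n in graph[w])
            for w in graph
        }
    return [w for w, _ in sorted(score.items(), key=lambda kv: -kv[1])[:top_k]]

tokens = ("machine learning course covers machine learning models "
          "and graph models for course forums").split()
top = textrank_keywords(tokens, top_k=3)
print(top)
```

Words that co-occur with many other well-connected words accumulate the highest scores, which is why frequent domain terms tend to surface without any labeled data.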
Findings
From the teaching resources, the authors' methods can accurately extract keywords with only 10 per cent labeled data. The authors find that resources of various forms, e.g. subtitles and PPTs, should be considered separately because they differ in how well they support the models. From the learning resources, the keywords extracted from MOOC forums are not as domain-specific as those extracted from teaching resources, but they can reflect the topics that are actively discussed in forums, giving instructors useful feedback. The authors implement two applications with the extracted keywords: generating a concept map and generating a learning path. The visual demos show that these have the potential to improve learning efficiency when integrated into a real MOOC platform.
Research limitations/implications
Conducting keyword extraction on MOOC resources is quite difficult because teaching resources are hard to obtain due to copyright. Also, getting labeled data is tough because domain expertise is usually required.
Practical implications
The experimental results show that MOOC resources are good enough for building keyword extraction models, and that an acceptable balance between human effort and model accuracy can be achieved.
Originality/value
This paper presents a pioneering study on keyword extraction from MOOC resources and reports several new findings.
Eric Pettersson Ruiz and Jannis Angelis
Abstract
Purpose
This study aims to explore how to deanonymize cryptocurrency money launderers with the help of machine learning (ML). Money is laundered through cryptocurrencies by distributing funds to multiple accounts and then exchanging the crypto back. This process of exchanging currencies is done through cryptocurrency exchanges. Current preventive efforts are outdated, and ML may provide novel ways to identify illicit currency movements. Hence, this study investigates the applicability of ML for combating money laundering activities that use cryptocurrency.
Design/methodology/approach
Four supervised-learning algorithms were compared using the Bitcoin Elliptic Dataset. The method covered a quantitative analysis of algorithmic performance, capturing differences in three key evaluation metrics: F1-score, precision and recall. Two complementary qualitative interviews were performed at cryptocurrency exchanges to assess the fit and applicability of the algorithms.
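The abstract does not name the four algorithms; the comparison protocol can be sketched with scikit-learn on synthetic stand-in data for the Elliptic dataset (the four classifiers and all data below are assumptions, chosen only to illustrate the F1/precision/recall comparison, not to reproduce the paper's results):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the Elliptic dataset: imbalanced binary labels
# (licit = 0, illicit = 1) over numeric transaction features.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
    "knn": KNeighborsClassifier(),
}
scores = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    p, r, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
    scores[name] = (p, r, f1)
    print(f"{name:8s} precision={p:.2f} recall={r:.2f} f1={f1:.2f}")
```

Because illicit transactions are rare, reporting all three metrics rather than plain accuracy is what makes the comparison meaningful.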
Findings
The study results show that the ML tools currently implemented for preventing money laundering at cryptocurrency exchanges are all too slow and need to be optimized for the task. The results also show that while no single algorithm is universally best at detecting transactions related to money laundering, the decision tree algorithm is the most suitable for adoption by cryptocurrency exchanges.
Originality/value
Given the growth of cryptocurrency use, this study explores the newly developed field of algorithmic tools to combat illicit currency movement, in particular in the growing arena of cryptocurrencies. The study results provide new insights into the applicability of ML as a tool to combat money laundering using cryptocurrency exchanges.
Xuhui Ye, Gongping Wu, Fei Fan, XiangYang Peng and Ke Wang
Abstract
Purpose
An accurate detection of the overhead ground wire in open surroundings with varying illumination is the premise of reliable line grasping with the off-line arm when the inspection robot crosses obstacles automatically. This paper aims to propose an improved approach, called adaptive homomorphic filter and supervised learning (AHSL), for overhead ground wire detection.
Design/methodology/approach
First, to decrease the influence of the varying illumination caused by the open working environment of the inspection robot, an adaptive homomorphic filter is introduced to compensate for the changing illumination. Second, to represent the ground wire more effectively and to extract more powerful and discriminative information for building a binary classifier, a global and local feature fusion method, followed by the supervised learning method support vector machine (SVM), is proposed.
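The adaptive filter's parameters are not given in the abstract; a generic, non-adaptive homomorphic high-emphasis filter, the building block such an approach rests on, can be sketched with NumPy (gamma_l, gamma_h, c and d0 are illustrative values, not the paper's):

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=30.0):
    """Suppress slowly varying illumination and boost reflectance detail."""
    z = np.log1p(img.astype(np.float64))      # multiplicative -> additive
    Z = np.fft.fftshift(np.fft.fft2(z))       # move DC to the center
    rows, cols = img.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    d2 = u[:, None] ** 2 + v[None, :] ** 2    # squared distance from DC
    # Gaussian high-emphasis transfer function: attenuates low frequencies
    # (illumination) toward gamma_l, amplifies high ones toward gamma_h.
    H = gamma_l + (gamma_h - gamma_l) * (1 - np.exp(-c * d2 / d0 ** 2))
    z_f = np.fft.ifft2(np.fft.ifftshift(Z * H)).real
    return np.expm1(z_f)

# A flat scene under a strong horizontal illumination gradient.
x = np.linspace(1, 5, 64)
img = np.tile(x, (64, 1))
out = homomorphic_filter(img)
print(img.std(), out.std())  # the illumination gradient's spread shrinks
```

An adaptive variant, as the paper's name suggests, would tune these parameters from the image content rather than fixing them globally.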
Findings
Experimental results on two self-built testing data sets A and B, which contain relatively older ground wires and relatively newer ground wires, respectively, and on field ground wires show that the adaptive homomorphic filter and the global and local feature fusion method can improve the detection accuracy of the ground wire effectively. The result of the proposed method lays a solid foundation for the inspection robot grasping the ground wire by visual servoing.
Originality/value
The AHSL method achieves 80.8 per cent detection accuracy on data set A, which contains relatively older ground wires, and 85.3 per cent detection accuracy on data set B, which contains relatively newer ground wires, and the field experiment shows that the robot can detect the ground wire accurately. The performance achieved by the proposed method is the state of the art for open environments with varying illumination.
Marko Kureljusic and Erik Karger
Abstract
Purpose
Accounting information systems are mainly rule-based, and data are usually available and well-structured. However, many accounting systems are yet to catch up with current technological developments. Thus, artificial intelligence (AI) in financial accounting is often applied only in pilot projects. Using AI-based forecasts in accounting enables proactive management and detailed analysis. However, thus far, there is little knowledge about which prediction models have already been evaluated for accounting problems. Given this lack of research, our study aims to summarize existing findings on how AI is used for forecasting purposes in financial accounting. Therefore, the authors aim to provide a comprehensive overview and agenda for future researchers to gain more generalizable knowledge.
Design/methodology/approach
The authors identify existing research on AI-based forecasting in financial accounting by conducting a systematic literature review. For this purpose, the authors used Scopus and Web of Science as scientific databases. The data collection resulted in a final sample size of 47 studies. These studies were analyzed regarding their forecasting purpose, sample size, period and applied machine learning algorithms.
Findings
The authors identified three application areas and presented details regarding the accuracy and AI methods used. Our findings show that sociotechnical and generalizable knowledge is still missing. Therefore, the authors also develop an open research agenda that future researchers can address to enable the more frequent and efficient use of AI-based forecasts in financial accounting.
Research limitations/implications
Owing to the rapid development of AI algorithms, our results can only provide an overview of the current state of research. Therefore, it is likely that new AI algorithms will be applied, which have not yet been covered in existing research. However, interested researchers can use our findings and future research agenda to develop this field further.
Practical implications
Given the high relevance of AI in financial accounting, our results have several implications and potential benefits for practitioners. First, the authors provide an overview of AI algorithms used in different accounting use cases. Based on this overview, companies can evaluate the AI algorithms that are most suitable for their practical needs. Second, practitioners can use our results as a benchmark of what prediction accuracy is achievable and should strive for. Finally, our study identified several blind spots in the research, such as ensuring employee acceptance of machine learning algorithms in companies. However, companies should consider this to implement AI in financial accounting successfully.
Originality/value
To the best of our knowledge, no study has yet been conducted that provided a comprehensive overview of AI-based forecasting in financial accounting. Given the high potential of AI in accounting, the authors aimed to bridge this research gap. Moreover, our cross-application view provides general insights into the superiority of specific algorithms.
Martin Jullum, Anders Løland, Ragnar Bang Huseby, Geir Ånonsen and Johannes Lorentzen
Abstract
Purpose
The purpose of this paper is to develop, describe and validate a machine learning model for prioritising which financial transactions should be manually investigated for potential money laundering. The model is applied to a large data set from Norway’s largest bank, DNB.
Design/methodology/approach
A supervised machine learning model is trained by using three types of historic data: “normal” legal transactions; those flagged as suspicious by the bank’s internal alert system; and potential money laundering cases reported to the authorities. The model is trained to predict the probability that a new transaction should be reported, using information such as background information about the sender/receiver, their earlier behaviour and their transaction history.
Findings
The paper demonstrates that the common approach of not using non-reported alerts (i.e. transactions that are investigated but not reported) in the training of the model can lead to sub-optimal results. The same applies to the use of normal (un-investigated) transactions. Our developed method outperforms the bank’s current approach in terms of a fair measure of performance.
Originality/value
This research study is one of very few published anti-money laundering (AML) models for suspicious transactions that have been applied to a realistically sized data set. The paper also presents a new performance measure specifically tailored to compare the proposed method to the bank’s existing AML system.
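The tailored measure itself is not spelled out in the abstract; the general idea, scoring how many truly reported cases a model ranks within a fixed manual-investigation budget k, can be sketched as follows (the function name and data are illustrative, not the paper's actual measure):

```python
def recall_at_k(scores, reported, k):
    """Fraction of truly reported cases found among the k highest-scored
    transactions, i.e. under a fixed manual-investigation budget."""
    ranked = sorted(range(len(scores)), key=lambda i: -scores[i])
    top_k = set(ranked[:k])
    hits = sum(1 for i, flag in enumerate(reported) if flag and i in top_k)
    total = sum(reported)
    return hits / total if total else 0.0

# Model scores for six transactions; two were actually reported.
scores = [0.9, 0.1, 0.8, 0.3, 0.2, 0.4]
reported = [1, 0, 0, 1, 0, 0]
r2 = recall_at_k(scores, reported, k=2)
print(r2)  # 0.5: one of the two reported cases sits in the top 2
```

A budgeted measure like this reflects the operational reality that only a limited number of alerts can be manually investigated per day, which is why it is fairer than raw accuracy for comparing against an existing alert system.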
Reema Khaled AlRowais and Duaa Alsaeed
Abstract
Purpose
Automatically extracting stance information from natural language texts is a significant research problem with various applications, particularly after the recent explosion of data on the internet via platforms like social media sites. A stance detection system helps determine whether the author agrees with, is against or holds a neutral opinion on a given target. Most research on stance detection focuses on the English language, while little research has been conducted on the Arabic language.
Design/methodology/approach
This paper aims to address stance detection on Arabic tweets by building and comparing different stance detection models using four transformers, namely Araelectra, MARBERT, AraBERT and Qarib. Using different weights for these transformers, the authors performed extensive experiments fine-tuning them for the task of stance detection on Arabic tweets.
Findings
The results showed that the AraBERT model performed better than the other three models, with a 70% F1 score, followed by the Qarib model with a 68% F1 score.
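F1 on a multi-class stance task (agree/against/neutral) is typically macro-averaged; a minimal sketch of that computation, with invented labels for illustration:

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = ["agree", "against", "neutral", "agree", "against", "neutral"]
y_pred = ["agree", "against", "agree",  "agree", "neutral", "neutral"]
score = macro_f1(y_true, y_pred, ["agree", "against", "neutral"])
print(round(score, 3))  # 0.656 on this toy example
```

Macro-averaging weights each stance class equally, which matters here because, as the limitations note, the dataset is imbalanced.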
Research limitations/implications
A limitation of this study is the imbalanced dataset and the limited availability of annotated stance detection datasets in Arabic.
Originality/value
The paper provides a comprehensive overview of the current resources for stance detection in the literature, including the datasets and machine learning methods used. The authors also examined the models to analyze and comprehend the obtained findings in order to recommend the best-performing models for the stance detection task.
Eva Davidsson and Martin Stigmar
Abstract
Purpose
Previous research has pointed to a lack of studies concerning supervision training courses. Consequently, the literature has little to suggest, and the research field is underexplored, so questions around the content and design of supervision training courses remain unanswered and need to be addressed systematically. The main aim of the present study is to explore and map whether shared content and design exist in supervisor training courses across different vocations.
Design/methodology/approach
A syllabus analysis is used in order to investigate characteristic features in supervisor training courses related to the professions of dentist, doctor, psychologist, police officer and teacher.
Findings
The results point to the existence of shared content in the different courses, such as an emphasis on learning and supervision theories, feedback, ethics, assessment and communication. Furthermore, the results show similarities in the design of the courses, such as a problem-based approach, seminars, lectures and homework. Thus, there are common theoretical approaches to important supervisory competences.
Practical implications
Our results intend to offer possibilities to learn from different professions when improving supervisor training courses but may also constitute a starting point for developing a shared model of interprofessional supervisor competences. Furthermore, the results may support possible cooperation in interprofessional courses. This could include arranging interprofessional courses, where one part is shared for participants from the included professions and another part is profession-specific.
Originality/value
We seek to contribute to the research field of supervision at workplaces with knowledge and ideas about how to learn from different professions when developing and improving supervisor training courses.
Kiran Fahd, Shah Jahan Miah and Khandakar Ahmed
Abstract
Purpose
Student attrition in tertiary educational institutes may play a significant role in achieving core values leading towards the strategic mission and financial well-being. Analysis of data generated from student interaction with learning management systems (LMSs) in blended learning (BL) environments may assist with the identification of students at risk of failing, but to what extent this is possible is unknown. Moreover, existing studies have not addressed the issue at a significant scale.
Design/methodology/approach
This study develops a new approach harnessing machine learning (ML) models on a publicly available dataset relevant to student attrition to identify students potentially at risk. The dataset consists of the data generated by the interaction of students with the LMS in their BL environment.
Findings
Identifying students at risk through an innovative approach will promote timely intervention in the learning process, such as improving student academic progress. To evaluate the performance of the proposed approach, its accuracy is compared with that of other representative ML methods.
Originality/value
The best-performing ML algorithm, random forest with 85% accuracy, is selected to support educators in implementing various pedagogical practices to improve students' learning.
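As a minimal sketch of such a setup, a random forest can be trained on LMS-interaction features with scikit-learn (the feature names, data and risk rule below are invented for illustration; the paper uses a public LMS dataset):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 500
# Invented per-student LMS features: logins, resource views, forum posts.
X = rng.poisson(lam=(20, 80, 5), size=(n, 3)).astype(float)
# Synthetic rule: low engagement raises the risk of failing (label 1).
y = ((X[:, 0] < 15) & (X[:, 1] < 75)).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
print(f"cross-validated accuracy: {acc:.2f}")
```

Because at-risk students are a small minority, in practice one would also inspect per-class recall, not just the headline accuracy figure.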
Armin Mahmoodi, Leila Hashemi, Milad Jasemi, Jeremy Laliberté, Richard C. Millar and Hamed Noshadi
Abstract
Purpose
In this research, the main purpose is to use a suitable structure to predict the trading signals of the stock market with high accuracy. For this purpose, two models based on technical analysis were used in this study.
Design/methodology/approach
A support vector machine (SVM) is used with particle swarm optimization (PSO), where PSO serves as a fast and accurate method to search the problem-solving space for the classifier, and finally the results are compared with the performance of a neural network.
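The paper's SVM-PSO pipeline is not detailed in the abstract; the PSO search mechanism itself can be sketched in a few lines, here minimizing a toy objective that stands in for, e.g., a classifier's validation error over its hyperparameters (all names and settings are illustrative):

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
                 seed=0):
    """Minimal particle swarm optimization over a box-bounded search space."""
    rnd = random.Random(seed)
    dim = len(bounds)
    pos = [[rnd.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rnd.random(), rnd.random()
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for, e.g., SVM validation error over (C, gamma).
best, val = pso_minimize(lambda p: (p[0] - 3) ** 2 + (p[1] + 1) ** 2,
                         bounds=[(-5, 5), (-5, 5)])
print(best, val)
```

The swarm converges on the objective's minimum without gradients, which is what makes PSO attractive for searching over non-differentiable model configurations.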
Findings
Based on the results, the authors can say that both new models are trustworthy over a six-day horizon; however, SVM-PSO performs better than the basic research. The hit rate of SVM-PSO is 77.5%, while the hit rate of the neural network (basic research) is 74.2%.
Originality/value
In this research, two approaches, raw-based and signal-based, have been developed to generate input data for the model. For comparison, the hit rate is defined as the percentage of correct predictions over 16 days.
Jingfeng Xie, Jun Huang, Lei Song, Jingcheng Fu and Xiaoqiang Lu
Abstract
Purpose
The typical approach to modeling the aerodynamics of an aircraft is to develop a complete database through testing or computational fluid dynamics (CFD). Such a database will be huge if it has a reasonable resolution, requiring an unacceptable CFD effort during conceptual design. Therefore, this paper aims to reduce the computing effort required by establishing a general aerodynamic model that needs only a small number of parameters.
Design/methodology/approach
The model structure was a preconfigured polynomial model, and the parameters were estimated with a recursive method to further reduce the calculation effort. To uniformly disperse the sample points at each step, a unique recursive sampling method based on Voronoi diagrams was presented. In addition, a multivariate orthogonal function approach was used.
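The abstract does not specify the recursion; one standard choice for estimating polynomial-model parameters incrementally as samples arrive is recursive least squares, sketched below with NumPy (the regressors and coefficients are invented for illustration, not taken from the paper):

```python
import numpy as np

class RecursiveLeastSquares:
    """Update polynomial-model coefficients one sample at a time."""
    def __init__(self, n_params, p0=1e6):
        self.theta = np.zeros(n_params)    # coefficient estimates
        self.P = np.eye(n_params) * p0     # estimate covariance

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        # Gain vector: how much this sample should move the estimate.
        k = self.P @ phi / (1.0 + phi @ self.P @ phi)
        self.theta += k * (y - phi @ self.theta)  # correct by the residual
        self.P -= np.outer(k, phi @ self.P)
        return self.theta

# Recover a known polynomial C(a) = 0.1 + 2.0*a - 0.5*a**2 from samples.
rls = RecursiveLeastSquares(3)
rng = np.random.default_rng(1)
for alpha in rng.uniform(-1, 1, 200):
    phi = [1.0, alpha, alpha ** 2]
    rls.update(phi, 0.1 + 2.0 * alpha - 0.5 * alpha ** 2)
print(rls.theta.round(3))  # approximately [0.1, 2.0, -0.5]
```

Each update costs only a small matrix-vector product, so the estimate can be refined after every new CFD sample instead of refitting the whole model, which matches the abstract's goal of reducing calculation effort.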
Findings
A case study of a flying wing aircraft demonstrated that generating a model with acceptable precision (0.01 absolute error or 5% relative error) costs only 1/54 of the effort of creating a full database. A series of six-degrees-of-freedom flight simulations shows that the model's predictions are accurate.
Originality/value
This method offers a new way to simplify the model and the sampling through recursion. It is a low-cost way of obtaining high-fidelity models during preliminary design, allowing for more precise flight dynamics analysis.