Search results

1 – 10 of over 2000
Book part
Publication date: 31 January 2015

Davy Janssens and Geert Wets

Abstract

Several activity-based transportation models are now becoming operational and are entering the stage of application for the modelling of travel demand. In our application, we use decision rules to support the decision-making of the model instead of principles of utility maximization, which means our work can be interpreted as an application of the concept of bounded rationality in the transportation domain. In this chapter we explore the novel idea of combining decision trees and Bayesian networks so that decision-making is improved while the potential advantages of both techniques are retained. The results of this study suggest that integrated Bayesian networks and decision trees can be used to model the different choice facets of a travel demand model with better predictive power than CHAID decision trees. There are also initial indications that this new way of integrating decision trees and Bayesian networks produces decision trees that are structurally more stable.
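
The abstract does not spell out the integration mechanism, so the following is only a minimal sketch of one generic way to couple the two techniques: a decision tree's leaf assignments are handed to a simple Bayesian (naive Bayes) classifier as an extra discrete variable. The synthetic data, the scikit-learn CART tree (standing in for CHAID) and the naive Bayes model (standing in for a full Bayesian network) are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' model): couple a decision tree
# with a simple Bayesian classifier by handing the tree's leaf index to the
# Bayesian model as an additional discrete variable. scikit-learn's CART
# tree stands in for CHAID, and naive Bayes stands in for a full Bayesian
# network; all data below are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical discrete activity-diary attributes (e.g. age band, household
# size, car ownership) and a discrete choice facet (e.g. transport mode).
X = rng.integers(0, 4, size=(1000, 3))
y = rng.integers(0, 3, size=1000)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

def with_leaf(X, tree):
    """Append the tree's leaf index as an extra categorical feature."""
    return np.hstack([X, tree.apply(X).reshape(-1, 1)])

bayes = CategoricalNB().fit(with_leaf(X_train, tree), y_train)
print("tree only :", tree.score(X_test, y_test))
print("tree+Bayes:", bayes.score(with_leaf(X_test, tree), y_test))
```

Using the leaf index as an additional variable is one simple way to let the Bayesian model exploit the interactions that the tree has already discovered.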

Details

Bounded Rational Choice Behaviour: Applications in Transport
Type: Book
ISBN: 978-1-78441-071-1

Article
Publication date: 14 November 2016

Konstantinos Domdouzis, Babak Akhgar, Simon Andrews, Helen Gibson and Laurence Hirsch

Abstract

Purpose

A number of crisis situations, such as natural disasters, have affected the planet over the past decade. The outcomes of such disasters are catastrophic for the infrastructures of modern societies. Furthermore, after large disasters, societies come face-to-face with serious issues, such as the loss of human lives, missing persons and an increase in the crime rate, and on many occasions they seem unprepared to face them. This paper aims to present an automated social media and crowdsourcing data mining system that synchronizes police and law enforcement agencies in order to prevent criminal activities during and after a large crisis situation.

Design/methodology/approach

The paper undertakes qualitative research in the form of a literature review. This review focuses on the necessity of using social media and crowdsourcing data mining techniques, in combination with advanced Web technologies, to provide solutions to problems related to criminal activities caused during and after a crisis. The paper presents the ATHENA crisis management system, which uses a number of data mining techniques to collect and analyze crisis-related data from social media for the purpose of crime prevention.

Findings

Conclusions are drawn on the significance of social media and crowdsourcing data mining techniques for the resolution of problems related to large crisis situations, with emphasis on the ATHENA system.

Originality/value

The paper shows how the integrated use of social media and data mining algorithms can contribute to the resolution of problems that develop during and after a large crisis.

Details

Journal of Systems and Information Technology, vol. 18 no. 4
Type: Research Article
ISSN: 1328-7265

Article
Publication date: 14 August 2018

Waqas Khalid and Zaza Nadja Lee Herbert-Hansen

Abstract

Purpose

This paper aims to investigate the application of unsupervised machine learning to the international location decision (ILD). It addresses the need for a fast, quantitative and dynamic location decision framework.

Design/methodology/approach

An unsupervised machine learning technique, k-means clustering, is used to carry out the analysis. In total, 24 different indicators for 94 countries, categorized into five groups, are used in the analysis. After clustering, the clusters are compared and scored to select the feasible countries.
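
As a rough illustration of the cluster-and-score procedure described above, the sketch below standardizes a table of country-level indicators, clusters the countries with k-means and ranks the clusters by a simple mean score. The file name, column layout and scoring rule are assumptions made for illustration, not the paper's exact setup.

```python
# Minimal sketch (assumed data file and scoring rule, not the paper's setup):
# standardize country-level indicators, cluster countries with k-means and
# rank the clusters, mirroring the "cluster, compare, score" steps above.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical indicator table: one row per country, one column per indicator.
df = pd.read_csv("country_indicators.csv", index_col="country")  # assumed file

X = StandardScaler().fit_transform(df.values)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=42).fit(X)
df["cluster"] = kmeans.labels_

# Score each cluster by the mean of its standardized indicators (higher is
# assumed to be more attractive under the chosen sign convention).
cluster_scores = (
    pd.DataFrame(X, index=df.index).groupby(df["cluster"]).mean().mean(axis=1)
)
shortlist = df[df["cluster"] == cluster_scores.idxmax()]
print(shortlist.index.tolist())  # candidate countries to examine further
```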

Findings

A new framework based on k-means clustering is developed that can be used in ILD. This method provides a quantitative output without personal subjectivity, and indicators can easily be added or removed based on the preferences of the decision-makers. Hence, unsupervised machine learning, in the form of k-means clustering, was found to be a fast and flexible decision support framework that can be used in ILD.

Research limitations/implications

Limitations include the generality of the selected indicators and of the clustering algorithm used. The use of other methods and parameters may lead to different results.

Originality/value

The framework developed through this research is intended to assist decision-makers in choosing facility locations. It can be used in both international and national domains, and it provides a quantitative, fast and flexible way to shortlist potential locations. Other methods can then be used to decide on the specific location.

Details

Journal of Global Operations and Strategic Sourcing, vol. 11 no. 3
Type: Research Article
ISSN: 2398-5364

Article
Publication date: 27 September 2021

S. Prathiba and Sharmila Sankar

Abstract

Purpose

The purpose of this paper is to provide energy-efficient task scheduling and resource allocation (RA) in cloud data centers (CDC).

Design/methodology/approach

Task scheduling and RA for the cloud environment are proposed in this paper, scheduling users' seasonal requests and allocating resources in an optimized manner. The proposed study performs the following operations: data collection, feature extraction, feature reduction and RA. Initially, the online streaming data of the seasonal requests of multiple users are gathered. After that, features are extracted based on the user requests along with the cloud server, and the extracted features are reduced using modified principal component analysis. For RA, the split data of the user request are identified and pre-processed by computing closed frequent itemsets along with entropy values. The user requests are then scheduled using the normalized K-means algorithm (NKMA) centered on the entropy values. Finally, appropriate resources are allotted to the scheduled tasks using the Cauchy mutation-genetic algorithm (CM-GA). The experimental results show that the proposed approach outperforms existing algorithms with respect to response time, execution time, clustering accuracy, precision and recall.
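
The abstract names a Cauchy mutation-genetic algorithm (CM-GA) for the allocation step. The sketch below shows what a Cauchy mutation operator typically looks like: offspring genes are perturbed with heavy-tailed Cauchy noise, so the search occasionally makes large jumps and can escape local optima. The encoding, mutation rate and bounds are illustrative assumptions rather than the authors' parameters.

```python
# Illustrative Cauchy mutation operator for a genetic algorithm; the
# representation and parameters are assumed, not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)

def cauchy_mutate(individual, rate=0.1, scale=0.05):
    """Perturb each gene with probability `rate` by Cauchy-distributed noise."""
    individual = np.asarray(individual, dtype=float)
    mask = rng.random(individual.shape) < rate
    noise = rng.standard_cauchy(individual.shape) * scale
    mutated = np.where(mask, individual + noise, individual)
    return np.clip(mutated, 0.0, 1.0)  # keep genes in a normalized range

# Example: a candidate resource-allocation vector (hypothetically, the
# fraction of each server's capacity assigned to a scheduled task batch).
parent = rng.random(8)
child = cauchy_mutate(parent)
print(parent.round(3), child.round(3), sep="\n")
```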

Findings

The performance of the proposed NKMA and CM-GA techniques is analyzed by comparing them with existing techniques. The NKMA is compared with the KMA and Fuzzy C-means with regard to precision (Prc), recall (Rca), F-measure (Fms), accuracy (Acr) and clustering time (Ct). The performance is compared for up to about 500 tasks. For all task counts, the NKMA provides the highest values for Prc, Rca, Fms and Acr and takes the lowest time (Ct) for clustering the data. The CM-GA optimization for optimally allocating resources in the cloud is then contrasted with the GA and particle swarm optimization with respect to response time (Rt), process time (Pt), average waiting time (Awt), average turnaround time (Atat), latency (Lcy) and throughput (Tp). For all task counts, the proposed CM-GA gives the lowest values for Rt, Pt, Awt, Atat and Lcy and the highest values for Tp. The results therefore show that the proposed technique for RA of seasonal requests works well and optimally allocates resources in the cloud.

Originality/value

The proposed approach provides energy-efficient task scheduling and RA and it paves the way for the development of effective CDC.

Article
Publication date: 3 August 2021

Chuanming Yu, Haodong Xue, Manyi Wang and Lu An

Abstract

Purpose

Owing to the uneven distribution of annotated corpus among different languages, it is necessary to bridge the gap between low resource languages and high resource languages. From the perspective of entity relation extraction, this paper aims to extend the knowledge acquisition task from a single language context to a cross-lingual context, and to improve the relation extraction performance for low resource languages.

Design/methodology/approach

This paper proposes a cross-lingual adversarial relation extraction (CLARE) framework, which decomposes cross-lingual relation extraction into parallel corpus acquisition and adversarial adaptation relation extraction. Based on the proposed framework, this paper conducts extensive experiments in two tasks, i.e. the English-to-Chinese and the English-to-Arabic cross-lingual entity relation extraction.
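
The abstract does not detail the adversarial adaptation component. A common way to implement such a component is a gradient reversal layer that trains a language discriminator against a shared encoder so that the learned features become language-invariant; the PyTorch sketch below illustrates that generic building block under this assumption and is not the CLARE code. The dimensions and module names are made up.

```python
# Generic gradient-reversal layer for adversarial (language) adaptation in
# PyTorch. This is a common ingredient of adversarial adaptation frameworks,
# shown here as an assumption-laden sketch, not the authors' CLARE code.
import torch
from torch import nn
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients so the shared encoder learns to fool
        # the language discriminator, yielding language-invariant features.
        return -ctx.lambd * grad_output, None

class LanguageDiscriminator(nn.Module):
    def __init__(self, dim=256, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.clf = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, features):
        reversed_feats = GradReverse.apply(features, self.lambd)
        return self.clf(reversed_feats)  # logits: source vs target language
```

In training, the discriminator's cross-entropy loss on shared sentence encodings would be added to the relation-classification loss, so that improving relation extraction and confusing the language discriminator are optimized jointly.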

Findings

The Macro-F1 values of the optimal models in the two tasks are 0.8801 and 0.7899, respectively, indicating that the proposed CLARE framework can significantly improve the effectiveness of entity relation extraction for low resource languages. The experimental results suggest that the proposed framework can effectively transfer the corpus as well as the annotated tags from English to Chinese and Arabic. This study reveals that the proposed approach requires less human labour and is more effective in cross-lingual entity relation extraction than the manual method, and that it generalizes well across different languages.

Originality/value

The research results are of great significance for improving the performance of cross-lingual knowledge acquisition. Cross-lingual transfer may greatly reduce the time and cost of manually constructing a multi-lingual corpus. The work sheds light on knowledge acquisition and organization from unstructured text in the era of big data.

Details

The Electronic Library, vol. 39 no. 3
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 11 December 2023

Chi-Un Lei, Wincy Chan and Yuyue Wang

Abstract

Purpose

Higher education plays an essential role in achieving the United Nations sustainable development goals (SDGs). However, there are only scattered studies on monitoring how universities promote SDGs through their curricula. The purpose of this study is to investigate the connection of existing common core courses in a university to SDG education. In particular, this study examines how common core courses can be classified according to SDGs using a machine-learning approach.

Design/methodology/approach

In this report, the authors used machine learning techniques to tag the 166 common core courses in a university with SDGs and then analyzed the results using visualizations. The training data set comes from the OSDG public community data set, which has been verified by the community, while key descriptions of the common core courses were used for the classification. The study used the multinomial logistic regression algorithm for the classification. Descriptive analyses at the course, theme and curriculum levels are included to illustrate the functions of the proposed approach.
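
A minimal sketch of the classification step described above is given below: TF-IDF text features feed a multinomial logistic regression classifier that is trained on the OSDG community data set and applied to course descriptions. The file names and column names are assumptions made for illustration.

```python
# Sketch of SDG tagging with TF-IDF features and multinomial logistic
# regression; file and column names are assumed for illustration only.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

osdg = pd.read_csv("osdg_community_dataset.csv")   # assumed: columns "text", "sdg"
courses = pd.read_csv("common_core_courses.csv")   # assumed: column "description"

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2, stop_words="english"),
    LogisticRegression(multi_class="multinomial", max_iter=1000),
)
model.fit(osdg["text"], osdg["sdg"])

# Tag each course description with its most likely SDG.
courses["predicted_sdg"] = model.predict(courses["description"])
print(courses[["description", "predicted_sdg"]].head())
```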

Findings

The results indicate that the machine-learning classification approach can significantly accelerate the SDG classification of courses. However, currently, it cannot replace human classification due to the complexity of the problem and the lack of relevant training data.

Research limitations/implications

The study could achieve more accurate model training by adopting advanced machine learning algorithms (e.g. deep learning or multioutput multiclass algorithms); developing a more effective test data set by extracting more relevant information from syllabi and learning materials; expanding the training data for SDGs that currently have insufficient records (e.g. SDG 12); and replacing the existing OSDG training data set with authentic education-related documents (such as course syllabi) carrying SDG classifications. The performance of the algorithm should also be compared with other computer-based and human-based SDG classification approaches, using a systematic evaluation framework, to cross-check the results. The study could further be extended by circulating the results to students and examining how they would interpret and use them when choosing courses. In addition, the study mainly focused on classifying the topics taught in courses and cannot measure the effectiveness of the pedagogies, assessment strategies and competency development strategies adopted in those courses. Analysis could also be conducted on the assessment tasks and rubrics of courses to see whether the assessment tasks help students understand and take action on SDGs.

Originality/value

The proposed approach explores the possibility of using machine learning for SDG classification at scale.

Details

International Journal of Sustainability in Higher Education, vol. 25 no. 4
Type: Research Article
ISSN: 1467-6370

Open Access
Article
Publication date: 21 January 2020

Martin Jullum, Anders Løland, Ragnar Bang Huseby, Geir Ånonsen and Johannes Lorentzen

Abstract

Purpose

The purpose of this paper is to develop, describe and validate a machine learning model for prioritising which financial transactions should be manually investigated for potential money laundering. The model is applied to a large data set from Norway’s largest bank, DNB.

Design/methodology/approach

A supervised machine learning model is trained by using three types of historic data: “normal” legal transactions; those flagged as suspicious by the bank’s internal alert system; and potential money laundering cases reported to the authorities. The model is trained to predict the probability that a new transaction should be reported, using information such as background information about the sender/receiver, their earlier behaviour and their transaction history.
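
The abstract does not name the exact learner, so the sketch below uses scikit-learn's gradient boosting as a stand-in: a supervised classifier is trained to score transactions for manual review, and investigated-but-not-reported alerts are kept in the training data as negatives, in line with the paper's point about not discarding them. The file name and feature names are hypothetical.

```python
# Sketch of an AML prioritisation model; data, features and the learner are
# assumptions for illustration, not DNB's data or the paper's exact model.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import average_precision_score

tx = pd.read_csv("transactions.csv")  # assumed file
# "reported" = 1 for transactions reported to the authorities; 0 for both
# normal transactions and alerts that were investigated but not reported.
features = ["amount", "sender_tenure_days", "n_prior_alerts", "cross_border"]  # assumed columns
X_train, X_test, y_train, y_test = train_test_split(
    tx[features], tx["reported"], stratify=tx["reported"], random_state=0
)

clf = GradientBoostingClassifier().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]  # priority score for manual review
print("average precision:", average_precision_score(y_test, scores))
```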

Findings

The paper demonstrates that the common approach of not using non-reported alerts (i.e. transactions that are investigated but not reported) in the training of the model can lead to sub-optimal results. The same applies to the use of normal (un-investigated) transactions. Our developed method outperforms the bank’s current approach in terms of a fair measure of performance.

Originality/value

This research study is one of very few published anti-money laundering (AML) models for suspicious transactions that have been applied to a realistically sized data set. The paper also presents a new performance measure specifically tailored to compare the proposed method to the bank’s existing AML system.

Details

Journal of Money Laundering Control, vol. 23 no. 1
Type: Research Article
ISSN: 1368-5201

Article
Publication date: 27 February 2023

Dilawar Ali, Kenzo Milleville, Steven Verstockt, Nico Van de Weghe, Sally Chambers and Julie M. Birkholz

Abstract

Purpose

Historical newspaper collections provide a wealth of information about the past. Although the digitization of these collections significantly improves their accessibility, a large portion of digitized historical newspaper collections, such as those of KBR, the Royal Library of Belgium, are not yet searchable at article-level. However, recent developments in AI-based research methods, such as document layout analysis, have the potential for further enriching the metadata to improve the searchability of these historical newspaper collections. This paper aims to discuss the aforementioned issue.

Design/methodology/approach

In this paper, the authors explore how existing computer vision and machine learning approaches can be used to improve access to digitized historical newspapers. To do this, the authors propose a workflow, using computer vision and machine learning approaches to (1) provide article-level access to digitized historical newspaper collections using document layout analysis, (2) extract specific types of articles (e.g. feuilletons – literary supplements from Le Peuple from 1938), (3) conduct image similarity analysis using (un)supervised classification methods and (4) perform named entity recognition (NER) to link the extracted information to open data.
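
As an illustration of step (4), the sketch below runs off-the-shelf named entity recognition with spaCy's French pipeline on a snippet of extracted article text; the recognized entities could then be linked to open data in a separate resolution step. The model name and example sentence are assumptions, not the authors' pipeline.

```python
# Illustrative NER step using spaCy's small French pipeline (assumes the
# model has been downloaded, e.g. via `python -m spacy download fr_core_news_sm`).
import spacy

nlp = spacy.load("fr_core_news_sm")
article_text = "Le feuilleton publié dans Le Peuple en 1938 se déroule à Bruxelles."
doc = nlp(article_text)

for ent in doc.ents:
    # Persons, places and organisations found here could be linked to open
    # data sources such as Wikidata in a subsequent resolution step.
    print(ent.text, ent.label_)
```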

Findings

The results show that the proposed workflow improves the accessibility and searchability of digitized historical newspapers, and also contributes to the building of corpora for digital humanities research. The AI-based methods enable automatic extraction of feuilletons, clustering of similar images and dynamic linking of related articles.

Originality/value

The proposed workflow enables automatic extraction of articles, including detection of a specific type of article, such as a feuilleton or literary supplement. This is particularly valuable for humanities researchers as it improves the searchability of these collections and enables corpora to be built around specific themes. Article-level access to, and improved searchability of, KBR's digitized newspapers are demonstrated through the online tool (https://tw06v072.ugent.be/kbr/).

Book part
Publication date: 18 January 2024

Tulsi Pawan Fowdur, Satyadev Rosunee, Robert T. F. Ah King, Pratima Jeetah and Mahendra Gooroochurn

Abstract

In this chapter, a general introduction to artificial intelligence (AI) is given, together with an overview of the advances of AI in different engineering disciplines, including its effectiveness in driving the United Nations Sustainable Development Goals (UN SDGs). The chapter begins with fundamental definitions and concepts of AI and machine learning (ML), followed by a classification of the different categories of ML algorithms. A general overview is then given of the impact that different engineering disciplines, such as Civil, Chemical, Mechanical, Electrical and Telecommunications Engineering, have on the UN SDGs. The application of AI and ML to enhance processes in these disciplines is also briefly explained. The chapter concludes with a brief description of the UN SDGs and how AI can positively influence the attainment of these goals by the target year of 2030.

Details

Artificial Intelligence, Engineering Systems and Sustainable Development
Type: Book
ISBN: 978-1-83753-540-8

Book part
Publication date: 7 May 2019

Nikolaos Dimisianos

Abstract

This chapter examines the ways social media, analytics, and disruptive technologies are combined and leveraged by political campaigns to increase the probability of victory through micro-targeting, voter engagement, and public relations. More specifically, the importance of community detection, social influence, natural language processing and text analytics, machine learning, and predictive analytics is assessed and reviewed in relation to political campaigns. In this context, data processing is examined through the lens of the General Data Protection Regulation (GDPR), effective as of May 25, 2018. It is concluded that while data processing during political campaigns does not violate the GDPR, electoral campaigns engage in surveillance, thereby violating Articles 12 and 19 of the 1948 Universal Declaration of Human Rights, which concern respect for private life and freedom of expression, respectively.

Details

Politics and Technology in the Post-Truth Era
Type: Book
ISBN: 978-1-78756-984-3
