Search results

1 – 10 of over 40000
Book part
Publication date: 25 March 2021

Tayfun Kasapoglu and Anu Masso

Abstract

Purpose: This study explores the perspectives of data experts (DXs) and refugees on the algorithms used by law enforcement officers and focuses on emerging insecurities. The authors take police risk-scoring algorithms (PRSA) as a proxy to examine perceptions of algorithms that make or assist sensitive decisions affecting people’s lives.

Methodology/approach: In-depth interviews were conducted with DXs (24) in Estonia and refugees (19) in Estonia and Turkey. Using projective techniques, the interviewees were provided with a simple definition of PRSA and a photo to encourage them to share their perspectives. The authors applied thematic analysis to the data, combining manual and computer-assisted techniques in the MAXQDA software.

Findings: The study revealed that perspectives on PRSA may change depending on the individual’s position relative to the double security paradox surrounding refugees. Using algorithms for a matter as sensitive as security raises concerns about potential social outcomes, the intentions of authorities and the fairness of the algorithms. The algorithms are perceived to construct further social borders in society and to justify existing ideas about marginalized groups.

Research limitations: The study used a small sample and aimed to explore the perspectives of refugees and DXs, taking PRSA as the case, without targeting representativeness.

Originality/value: The study is based on a double security paradox where refugees who escape their homelands due to security concerns are also considered to be national security threats. DXs, on the other hand, represent a group that takes an active role in decisions about who is at risk and who is risky. The study provides insights on two groups of people who are engaged with algorithms in different ways.

Details

Theorizing Criminality and Policing in the Digital Media Age
Type: Book
ISBN: 978-1-83909-112-4

Article
Publication date: 19 December 2023

Susan Gardner Archambault

Abstract

Purpose

Research shows that postsecondary students are largely unaware of the impact of algorithms on their everyday lives. Moreover, most non-computer science students are not taught about algorithms as part of the regular curriculum. This exploratory, qualitative study examines subject-matter experts’ insights and perceptions of the knowledge components, coping behaviors and pedagogical considerations that can aid faculty in teaching algorithmic literacy to postsecondary students.

Design/methodology/approach

Eleven semistructured interviews and one focus group were conducted with scholars and teachers of critical algorithm studies and related fields. A content analysis was performed manually on the transcripts using a mixture of deductive and inductive coding. Data analysis was aided by the coding software Dedoose (2021), which was used to determine how often each code occurred across all participants and how many times individual participants mentioned it. The findings were then organized around three themes: knowledge components, coping behaviors and pedagogy.

Findings

The findings suggested a set of 10 knowledge components that would contribute to students’ algorithmic literacy along with seven behaviors that students could use to help them better cope with algorithmic systems. A set of five teaching strategies also surfaced to help improve students’ algorithmic literacy.

Originality/value

This study contributes to improved pedagogy surrounding algorithmic literacy and validates existing multi-faceted conceptualizations and measurements of algorithmic literacy.

Details

Information and Learning Sciences, vol. 125 no. 1/2
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 23 June 2021

Serkan Altuntas, Türkay Dereli and Zülfiye Erdoğan

Abstract

Purpose

This study aims to propose a service quality evaluation model for health-care services.

Design/methodology/approach

In this study, a service quality evaluation model is proposed based on the service quality measurement (SERVQUAL) scale and a machine learning algorithm. First, the items that affect service quality are determined based on the SERVQUAL scale. Next, a service quality assessment model is generated to efficiently manage the resources allocated to improvement activities. Finally, a sample classification model is built using machine learning algorithms.
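
The gap-scoring step at the heart of a SERVQUAL-based evaluation can be sketched in a few lines; the dimension names and 1–5 scores below are invented for illustration and are not the study’s data:

```python
def servqual_gaps(expectations, perceptions):
    """Return {dimension: gap} where gap = mean perception - mean expectation."""
    gaps = {}
    for dim in expectations:
        e = sum(expectations[dim]) / len(expectations[dim])
        p = sum(perceptions[dim]) / len(perceptions[dim])
        gaps[dim] = round(p - e, 2)
    return gaps

def prioritize(gaps):
    """Dimensions sorted most-negative gap first (largest shortfall)."""
    return sorted(gaps, key=gaps.get)

# invented 1-5 survey scores for three classic SERVQUAL dimensions
expectations = {
    "tangibles":      [5, 5, 4, 5],
    "reliability":    [5, 4, 5, 5],
    "responsiveness": [4, 4, 5, 4],
}
perceptions = {
    "tangibles":      [3, 2, 3, 3],
    "reliability":    [4, 4, 5, 4],
    "responsiveness": [4, 4, 4, 4],
}

gaps = servqual_gaps(expectations, perceptions)
print(prioritize(gaps))  # → ['tangibles', 'reliability', 'responsiveness']
```

Ranking dimensions by their most negative gap is what allows an evaluation model of this kind to prioritize where improvement resources should go.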

Findings

The proposed evaluation model addresses the following questions: What are the practical impact levels of the service quality dimensions on overall service quality? How should the dimensions be prioritized, and which should be improved first? A real-life case study in a public hospital is carried out to show how the proposed model works. The results obtained from the case study show that the proposed model can easily be applied in practice. A remarkably high service gap is also found in the public hospital where the case study was conducted, regarding general physical conditions and food services.

Originality/value

The contribution of this study is threefold: the proposed evaluation model determines the impact levels of the service quality dimensions on service quality in practice; it prioritizes the dimensions in terms of their significance; and it identifies which dimensions should be improved first.

Details

Kybernetes, vol. 51 no. 2
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 19 December 2019

Waqar Ahmed Khan, S.H. Chung, Muhammad Usman Awan and Xin Wen

Abstract

Purpose

The purpose of this paper is to conduct a comprehensive review of the noteworthy contributions made in the area of the feedforward neural network (FNN) to improve its generalization performance and convergence rate (learning speed); to identify new research directions that will help researchers design new, simple and efficient algorithms and users implement optimally designed FNNs for solving complex problems; and to explore the wide applications of the reviewed FNN algorithms in solving real-world management, engineering and health sciences problems, demonstrating the advantages of these algorithms in enhancing decision-making for practical operations.

Design/methodology/approach

The FNN has gained much popularity during the last three decades, so the authors focused on algorithms proposed during that period. The selected databases were searched with the popular keywords “generalization performance,” “learning rate,” “overfitting” and “fixed and cascade architecture.” Combinations of the keywords were also used to obtain more relevant results. Duplicate articles, non-English articles and articles that matched the keywords but fell outside the scope were discarded.

Findings

The authors studied a total of 80 articles and classified them into six categories according to the nature of the algorithms they propose for improving the generalization performance and convergence rate of FNNs. Because reviewing all six categories would make the paper too long, the authors divided them into two parts (Part I and Part II). The current paper, Part I, investigates the two categories that focus on learning algorithms (i.e. gradient learning algorithms for network training and gradient-free learning algorithms). The remaining four categories, which mainly explore optimization techniques, are reviewed in Part II (i.e. optimization algorithms for the learning rate, bias and variance (underfitting and overfitting) minimization algorithms, constructive topology neural networks and metaheuristic search algorithms). For simplicity, the paper entitled “Machine learning facilitated business intelligence (Part II): Neural networks optimization techniques and applications” is referred to as Part II. This divides the 80 articles into 38 for Part I and 42 for Part II. After discussing the FNN algorithms, with their technical merits and limitations and their real-world management, engineering and health sciences applications for each category, the authors suggest seven new future directions (three in Part I and four in Part II) that can help strengthen the literature.
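
The first category Part I reviews, gradient learning algorithms for network training, rests on backpropagation; the minimal sketch below (fixed toy weights, one hidden layer, sigmoid units, squared error) illustrates a single gradient step of that kind, not any specific algorithm from the reviewed articles:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(w_hidden, w_out, x):
    """Forward pass: hidden activations, then a single sigmoid output."""
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    y = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)))
    return h, y

def train_step(w_hidden, w_out, x, target, lr=0.5):
    """One backpropagation step on the squared error 0.5*(y - target)^2."""
    h, y = forward(w_hidden, w_out, x)
    delta_out = (y - target) * y * (1 - y)  # error signal at the output unit
    new_w_out = [wo - lr * delta_out * hi for wo, hi in zip(w_out, h)]
    new_w_hidden = []
    for j, w in enumerate(w_hidden):
        # chain rule: propagate the output error back through w_out[j]
        delta_h = delta_out * w_out[j] * h[j] * (1 - h[j])
        new_w_hidden.append([wj - lr * delta_h * xi for wj, xi in zip(w, x)])
    return new_w_hidden, new_w_out

w_hidden = [[0.2, -0.4], [0.7, 0.1]]  # 2 inputs -> 2 hidden units
w_out = [0.6, -0.3]                   # 2 hidden units -> 1 output
x, target = [1.0, 0.5], 1.0

_, y0 = forward(w_hidden, w_out, x)
w_hidden, w_out = train_step(w_hidden, w_out, x, target)
_, y1 = forward(w_hidden, w_out, x)
print(abs(y1 - target) < abs(y0 - target))  # the step moves the output toward the target
```

The generalization and convergence-rate issues the review surveys arise from how such steps are scheduled, scaled and regularized over many iterations.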

Research limitations/implications

The FNN contributions are numerous and cannot be covered in a single study. The authors remain focused on learning algorithms and optimization techniques, along with their application to real-world problems, that aim to improve the generalization performance and convergence rate of FNNs by computing optimal hyperparameters, connection weights and hidden units, selecting an appropriate network architecture rather than relying on trial-and-error approaches, and avoiding overfitting.

Practical implications

This study will help researchers and practitioners understand in depth the merits and limitations of existing FNN algorithms, the research gaps, the application areas and how research in this field has changed over the last three decades. Moreover, users who gain in-depth knowledge of how the algorithms are applied in the real world can select appropriate FNN algorithms to obtain optimal results for their specific problems in the shortest possible time and with less effort.

Originality/value

Existing literature surveys are limited in scope: they compare algorithms, study application areas or focus on specific techniques, that is, they examine particular algorithms or their applications (e.g. pruning algorithms or constructive algorithms). In this work, the authors propose a comprehensive review of the different categories, along with their real-world applications, that may affect FNN generalization performance and convergence rate. This makes the classification scheme novel and significant.

Details

Industrial Management & Data Systems, vol. 120 no. 1
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 28 February 2023

Meltem Aksoy, Seda Yanık and Mehmet Fatih Amasyali

Abstract

Purpose

When a large number of project proposals are evaluated to allocate available funds, grouping them based on their similarities is beneficial. Current approaches to group proposals are primarily based on manual matching of similar topics, discipline areas and keywords declared by project applicants. When the number of proposals increases, this task becomes complex and requires excessive time. This paper aims to demonstrate how to effectively use the rich information in the titles and abstracts of Turkish project proposals to group them automatically.

Design/methodology/approach

This study proposes a model that effectively groups Turkish project proposals by combining word embedding, clustering and classification techniques. The proposed model uses FastText, BERT and term frequency/inverse document frequency (TF/IDF) word-embedding techniques to extract terms from the titles and abstracts of project proposals in Turkish. The extracted terms were grouped using both the clustering and classification techniques. Natural groups contained within the corpus were discovered using k-means, k-means++, k-medoids and agglomerative clustering algorithms. Additionally, this study employs classification approaches to predict the target class for each document in the corpus. To classify project proposals, various classifiers, including k-nearest neighbors (KNN), support vector machines (SVM), artificial neural networks (ANN), classification and regression trees (CART) and random forest (RF), are used. Empirical experiments were conducted to validate the effectiveness of the proposed method by using real data from the Istanbul Development Agency.
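
Of the three embedding techniques, TF/IDF is simple enough to sketch directly; the toy English “proposals” below are invented stand-ins for the Turkish titles and abstracts:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Return one {term: tf-idf weight} dict per tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "smart traffic control with deep learning".split(),
    "deep learning for urban traffic prediction".split(),
    "restoration of a historic city library".split(),
]
vecs = tfidf_vectors(docs)
# the two transport proposals should be more alike than either is to the third
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # → True
```

The resulting vectors can then feed a clustering algorithm such as k-means or a classifier such as SVM, as in the proposed model; FastText and BERT replace this frequency-based step with learned dense embeddings.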

Findings

The results show that the generated word embeddings can effectively represent proposal texts as vectors, and can be used as inputs for clustering or classification algorithms. Using clustering algorithms, the document corpus is divided into five groups. In addition, the results demonstrate that the proposals can easily be categorized into predefined categories using classification algorithms. SVM-Linear achieved the highest prediction accuracy (89.2%) with the FastText word embedding method. A comparison of manual grouping with automatic classification and clustering results revealed that both classification and clustering techniques have a high success rate.

Research limitations/implications

The proposed model automatically benefits from the rich information in project proposals and significantly reduces numerous time-consuming tasks that managers must perform manually. Thus, it eliminates the drawbacks of the current manual methods and yields significantly more accurate results. In the future, additional experiments should be conducted to validate the proposed method using data from other funding organizations.

Originality/value

This study presents the application of word embedding methods to make effective use of the rich information in the titles and abstracts of Turkish project proposals. Existing research on the automatic grouping of proposals relies on traditional frequency-based word embedding methods for feature extraction. Unlike previous research, this study employs two high-performing neural network-based textual feature extraction techniques to obtain terms representing the proposals: BERT as a contextual word embedding method and FastText as a static word embedding method. Moreover, to the best of the authors’ knowledge, no research has been conducted on the grouping of project proposals in Turkish.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 3 December 2019

Masoud Kavoosi, Maxim A. Dulebenets, Olumide Abioye, Junayed Pasha, Oluwatosin Theophilus, Hui Wang, Raphael Kampmann and Marko Mikijeljević

Abstract

Purpose

Marine transportation has faced an increasing demand for containerized cargo during the past decade. Marine container terminals (MCTs), as the facilities connecting seaborne and inland transportation, are expected to handle the increasing number of containers delivered by vessels. Berth scheduling plays an important role in the total throughput of MCTs as well as in the overall effectiveness of MCT operations. This study aims to propose a novel island-based metaheuristic algorithm to solve the berth scheduling problem and minimize the total cost of serving the arriving vessels at the MCT.

Design/methodology/approach

A universal island-based metaheuristic algorithm (UIMA) was proposed in this study, aiming to solve the spatially constrained berth scheduling problem. The UIMA population was divided into four sub-populations (i.e. islands). Unlike the canonical island-based algorithms that execute the same metaheuristic on each island, four different population-based metaheuristics are adopted within the developed algorithm to search the islands, including the following: evolutionary algorithm (EA), particle swarm optimization (PSO), estimation of distribution algorithm (EDA) and differential evolution (DE). The adopted population-based metaheuristic algorithms rely on different operators, which facilitate the search process for superior solutions on the UIMA islands.
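
The island model itself can be sketched independently of the four specific metaheuristics; in this simplified illustration, two toy mutation strategies stand in for the EA/PSO/EDA/DE searchers, and a generic objective replaces the berth-scheduling cost:

```python
import random

def sphere(x):
    """Stand-in objective (minimum 0 at the origin), not the berth cost."""
    return sum(v * v for v in x)

def mutate_small(x, rng):  # fine local perturbation
    return [v + rng.uniform(-0.1, 0.1) for v in x]

def mutate_large(x, rng):  # coarse exploratory perturbation
    return [v + rng.uniform(-1.0, 1.0) for v in x]

def island_search(generations=200, migrate_every=20, seed=7):
    rng = random.Random(seed)
    # two islands of 10 individuals each, 3-dimensional solutions
    islands = [[[rng.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
               for _ in range(2)]
    ops = [mutate_small, mutate_large]  # a different operator per island
    start_best = min(sphere(x) for isl in islands for x in isl)
    for gen in range(generations):
        for isl, op in zip(islands, ops):
            for i, x in enumerate(isl):
                child = op(x, rng)
                if sphere(child) < sphere(x):  # greedy replacement
                    isl[i] = child
        if gen % migrate_every == 0:  # exchange best solutions between islands
            best = [min(isl, key=sphere) for isl in islands]
            islands[0][-1], islands[1][-1] = best[1][:], best[0][:]
    final_best = min(sphere(x) for isl in islands for x in isl)
    return start_best, final_best

start, final = island_search()
print(final < start)  # the island model improves on the best initial solution
```

The design choice the abstract highlights is visible here: each island searches with a different operator, and migration lets a coarse explorer’s discoveries be refined by a fine-grained one.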

Findings

The conducted numerical experiments demonstrated that the developed UIMA algorithm returned near-optimal solutions for the small-size problem instances. As for the large-size problem instances, UIMA was found to be superior to the EA, PSO, EDA and DE algorithms, which were executed in isolation, in terms of the obtained objective function values at termination. Furthermore, the developed UIMA algorithm outperformed various single-solution-based metaheuristic algorithms (including variable neighborhood search, tabu search and simulated annealing) in terms of the solution quality. The maximum UIMA computational time did not exceed 306 s.

Research limitations/implications

Some of the previous berth scheduling studies modeled uncertain vessel arrival times and/or handling times, while this study assumed the vessel arrival and handling times to be deterministic.

Practical implications

The developed UIMA algorithm can be used by the MCT operators as an efficient decision support tool and assist with a cost-effective design of berth schedules within an acceptable computational time.

Originality/value

A novel island-based metaheuristic algorithm is designed to solve the spatially constrained berth scheduling problem. The proposed island-based algorithm adopts several types of metaheuristic algorithms, relying on different operators, to cover different areas of the search space. This feature is expected to facilitate the search for superior solutions.

Article
Publication date: 16 October 2023

Maedeh Gholamazad, Jafar Pourmahmoud, Alireza Atashi, Mehdi Farhoudi and Reza Deljavan Anvari

Abstract

Purpose

A stroke is a serious, life-threatening condition that occurs when the blood supply to a part of the brain is cut off. The earlier a stroke is treated, the less damage is likely to occur. One of the methods that can lead to faster treatment is timely and accurate prediction and diagnosis. This paper aims to compare the binary integer programming-data envelopment analysis (BIP-DEA) model and the logistic regression (LR) model for diagnosing and predicting the occurrence of stroke in Iran.

Design/methodology/approach

In this study, the BIP-DEA and LR algorithms were introduced and the key risk factors leading to stroke were extracted.
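
The LR side of the comparison is ordinary logistic regression; a minimal sketch fitted by gradient descent on synthetic stand-in data (not the study’s clinical records) looks like this:

```python
import math

def predict(w, x):
    """Sigmoid of the linear score; w = [bias, w1, w2]."""
    z = w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
    return 1.0 / (1.0 + math.exp(-z))

def fit(data, lr=0.5, epochs=200):
    """Per-sample gradient descent on the log-loss."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            grad = predict(w, x) - y  # derivative of log-loss w.r.t. the score
            w[0] -= lr * grad
            for i, xi in enumerate(x):
                w[i + 1] -= lr * grad * xi
    return w

# invented "risk factor" data: two features, binary stroke outcome
data = [([0.1, 0.2], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.9], 1)]
w = fit(data)
print([round(predict(w, x)) for x, _ in data])  # → [0, 0, 1, 1]
```

In the study, coefficients fitted this way indicate which risk factors matter; the BIP-DEA alternative replaces this likelihood-based fit with a binary integer programming formulation over DEA efficiency scores.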

Findings

The study population consisted of 2,100 samples (patients) divided into six subsamples of different sizes. The classification table of each algorithm showed that the BIP-DEA model produced more reliable results than LR for small sample sizes. After running each algorithm, the BIP-DEA and LR algorithms identified eight and five factors, respectively, as the more influential risk factors and causes of stroke. Finally, predictive models using the important risk factors were proposed.

Originality/value

The main objective of this study is to provide the integrated BIP-DEA algorithm as a fast, easy and suitable tool for evaluation and prediction. In fact, the BIP-DEA algorithm can be used as an alternative tool to the LR model when the sample size is small. These algorithms can be used in various fields, including the health-care industry, to predict and prevent various diseases before the patient’s condition becomes more dangerous.

Details

Journal of Modelling in Management, vol. 19 no. 2
Type: Research Article
ISSN: 1746-5664

Article
Publication date: 28 February 2023

Lin-Lin Xie, Yajiao Chen, Sisi Wu, Rui-Dong Chang and Yilong Han

Abstract

Purpose

Project scheduling plays an essential role in the implementation of a project because of the limited resources in practical projects. However, existing research tends to focus on finding suitable algorithms to solve various scheduling problems and fails to uncover the potential scheduling rules in the resulting optimal or near-optimal solutions, that is, the possible intrinsic relationships between attributes related to the scheduling of activity sequences. Data mining (DM) is used to analyze and interpret data to obtain valuable information stored in large-scale data. The goal of this paper is to use DM to discover scheduling concepts and obtain a set of rules that approximate effective solutions to resource-constrained project scheduling problems. These rules require no search or simulation, have extremely low time complexity and support real-time decision-making to improve planning and scheduling.

Design/methodology/approach

The resource-constrained project scheduling problem can be described as scheduling a group of interrelated activities to optimize the project completion time and other objectives while satisfying the activity precedence relationships and resource constraints. This paper proposes a new approach that combines DM technology and the genetic algorithm (GA) to solve the problem. More specifically, the GA is used to generate various optimal project scheduling schemes, after which a C4.5 decision tree (DT) is adopted to extract valuable knowledge from these schemes for predicting and solving new scheduling problems.

Findings

In this study, the authors use the GA and DM technology to analyze and extract knowledge from a large number of scheduling schemes and determine the scheduling rule set that minimizes the completion time. To verify the proposed DT classification model, the J30, J60 and J120 datasets in PSPLIB are used to test the validity of the scheduling rules. The results show that the DT can readily duplicate the excellent performance of the GA on scheduling problems of different scales. In addition, the DT prediction model is applied to a high-rise residential project consisting of 117 activities. Compared with the completion time obtained by the GA, the DT model enables rapid adjustment of the project schedule to cope with dynamic environmental interference. In short, the data-based approach is feasible, practical and effective: it captures the knowledge contained in known optimal scheduling schemes and provides a flexible scheduling decision-making approach for project implementation.

Originality/value

This paper proposes a novel knowledge-based project scheduling approach. In previous studies, intelligent optimization algorithms were often used to solve the project scheduling problem. Although these algorithms can generate a set of effective solutions for problem instances, they cannot explain the decision-making process or identify the characteristics of the good scheduling decisions the optimization produces. Moreover, their calculation is slow and complex, which is unsuitable for planning and scheduling complex projects. In this study, the set of effective solutions of problem instances serves as the training dataset for the DM algorithm, and the extracted scheduling rules support the prediction and solution of new scheduling problems. The proposed method focuses on identifying the key parameters of a specific dynamic scheduling environment; it not only reproduces the scheduling performance of the original algorithm well but also supports quick decisions under dynamic construction interference. It helps project managers respond quickly to construction emergencies, which is of great practical significance for improving the flexibility and efficiency of construction projects.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 30 December 2021

Mohammad Hossein Saraei, Ayyoob Sharifi and Mohsen Adeli

Abstract

Purpose

The purpose of this study is to optimize the location of hospitals in Gorgan, Iran, to provide desirable services to citizens in the event of an earthquake crisis.

Design/methodology/approach

In terms of its aim, this study is applied and developmental; in terms of method, it is descriptive and analytical; and in terms of data collection, it is documentary and survey-based. The study combines the capabilities of the genetic algorithm and the imperialist competitive algorithm in the MATLAB environment with the capabilities of GIS. Specifically, inputs such as route blocking, network analysis and vulnerability rasters were derived in GIS from current-status data, and this output was then fed as non-random heuristic information into the genetic algorithm and the imperialist competitive algorithm in MATLAB.

Findings

After spatial optimization, the hospital service process became more favorable, and the average transfer cost and distance from hospitals to citizens decreased significantly. By establishing hospitals in the proposed locations, a larger population of citizens can access relief services in less time.

Originality/value

Spatial optimization of relief centers, including hospitals, is an issue of significant importance, especially in the event of an earthquake crisis. The findings of the present study, together with the originality, efficiency and innovation of the methods used, can provide a favorable theoretical framework for the success of earthquake crisis management projects.

Details

International Journal of Disaster Resilience in the Built Environment, vol. 14 no. 3
Type: Research Article
ISSN: 1759-5908

Article
Publication date: 8 November 2018

Mahmood Kasravi, Amin Mahmoudi and Mohammad Reza Feylizadeh

Abstract

Purpose

Construction project managers do their best to keep projects on plan, always attempting to complete them on time and within the predetermined budgets. Among the many problems in project planning, the most critical and well-known is the resource-constrained project scheduling problem (RCPSP). The purpose of this paper is to solve the RCPSP using the hybrid ICA/PSO algorithm.

Design/methodology/approach

Because the problem can be formulated in various ways, with diverse constraints and objective functions, a myriad of research studies have been conducted in this area. Since most of these problems are NP-hard, heuristic and metaheuristic methods are used to solve them. In this research, a novel hybrid method composed of the metaheuristics particle swarm optimization (PSO) and the imperialist competitive algorithm (ICA) is used to solve the RCPSP. Finally, the approach is examined on a real-world railway project.
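
The PSO half of the hybrid can be sketched by its core update rule, v ← w·v + c1·r1·(pbest − x) + c2·r2·(gbest − x); the coefficients and the toy continuous objective below are illustrative defaults, not the paper’s settings (an RCPSP application would additionally map particle positions to activity priorities):

```python
import random

def pso(objective, dim=2, particles=15, iters=100, seed=3,
        w=0.7, c1=1.5, c2=1.5):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [x[:] for x in xs]                 # each particle's best position
    gbest = min(pbest, key=objective)[:]       # swarm-wide best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull (pbest) + social pull (gbest)
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - x[d])
                            + c2 * r2 * (gbest[d] - x[d]))
                x[d] += vs[i][d]
            if objective(x) < objective(pbest[i]):
                pbest[i] = x[:]
                if objective(x) < objective(gbest):
                    gbest = x[:]
    return gbest

best = pso(lambda x: sum(v * v for v in x))    # minimize a toy quadratic
print(sum(v * v for v in best) < 0.1)          # the swarm closes in on the optimum
```

Hybridizing with ICA, as the paper does, adds a second population dynamic (empires assimilating colonies) on top of this velocity-driven search.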

Findings

According to the results of the case study, the hybrid ICA/PSO algorithm produces better results than ICA and PSO individually.

Practical implications

The ICA/PSO algorithm could also be used to solve problems with multi-mode activities or with additional resource constraints, such as the presence of both renewable and non-renewable resources. Based on the construction project case study, the ICA/PSO algorithm finds better solutions than PSO and ICA.

Originality/value

In this study, combining the PSO and ICA algorithms into a new hybrid algorithm achieved better solutions to the RCPSP. To validate the method, the standard problems available in the PSPLIB library were used.

Details

Journal of Advances in Management Research, vol. 16 no. 2
Type: Research Article
ISSN: 0972-7981
