Search results
1 – 10 of over 1000

Zabih Ghelichi, Monica Gentili and Pitu Mirchandani
Abstract
Purpose
This paper aims to propose a simulation-based performance evaluation model for the drone-based delivery of aid items to disaster-affected areas. The objective of the model is to enable analytical studies, evaluate the performance of drone delivery systems for humanitarian logistics and support decision-making on the operational design of the system: where to locate drone take-off points and how to assign and schedule delivery tasks to drones.
Design/methodology/approach
This simulation model captures the dynamics and variabilities of the drone-based delivery system, including demand rates, location of demand points, time-dependent parameters and possible failures of drones’ operations. An optimization model integrated with the simulation system can update the optimality of drones’ schedules and delivery assignments.
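The interplay between stochastic service times and delivery scheduling can be pictured with a toy event-driven loop. This is a minimal sketch, not the authors' model: the function name, the greedy earliest-available assignment rule and the exponential flight-time assumption are all ours.

```python
import heapq
import random

def simulate_deliveries(num_drones, demands, mean_flight_time, seed=0):
    """Toy event-driven sketch: greedily assign each demand to the
    drone that becomes available first; return the makespan."""
    rng = random.Random(seed)
    # Heap of (time a drone becomes free, drone id).
    free_at = [(0.0, d) for d in range(num_drones)]
    heapq.heapify(free_at)
    finish = 0.0
    for _ in range(demands):
        t, drone = heapq.heappop(free_at)
        # Stochastic round-trip time stands in for travel, setup and loading variability.
        trip = rng.expovariate(1.0 / mean_flight_time)
        finish = max(finish, t + trip)
        heapq.heappush(free_at, (t + trip, drone))
    return finish

print(simulate_deliveries(num_drones=3, demands=20, mean_flight_time=15.0))
```

A full model in the paper's spirit would also re-run an optimization step at each update interval to revise assignments as new demand information arrives.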
Findings
An extensive set of experiments was performed to evaluate alternative strategies and demonstrate the effectiveness of the proposed optimization/simulation system. In the first set of experiments, the authors use the simulation-based evaluation tool for a case study for Central Florida. The goal of this set of experiments is to show how the proposed system can be used for decision-making and decision support. The second set of experiments presents a series of numerical studies for a set of randomly generated instances.
Originality/value
The goal is to develop a simulation system that allows one to evaluate the performance of drone-based delivery systems, accounting for uncertainties through simulations of real-life drone delivery flights. The proposed simulation model captures the variations in different system parameters, including the interval at which the system is updated after receiving new information; demand parameters (the demand rate and the spatial distribution of demand points); service-time parameters (travel times, setup and loading times, payload drop-off times and repair times); and drone energy level (the battery's energy is depleted while flying and requires battery change or recharging).
Yuho Okita, Takao Kaneko, Hiroaki Imai, Monique Nair and Kounosuke Tomori
Abstract
Purpose
Goal setting is a crucial aspect of client-centered practice in occupational therapy (OT) for mental health conditions. However, it remains to be seen how goal setting has been delivered in mental health, particularly within the OT process. The purpose of this scoping review was to explore the nature and extent of goal setting delivered in mental health and informed OT practice.
Design/methodology/approach
The authors followed the guidelines of Arksey and O’Malley (2005) and searched three databases using key search terms: “mental disorder,” “goal setting,” and “occupational therapy” and their synonyms.
Findings
After excluding duplicate records, the authors screened 883 records, which resulted in 20 records in total after the screening process. Most of the identified articles used goal setting delivered by both a health professional and a client (n = 14) and focused on people with schizophrenia or schizoaffective disorder (n = 13), but only three interventions were delivered by occupational therapists. Further research is needed on goal setting in mental health OT, exploring the reliability and validity of different goal-setting strategies and investigating the effectiveness of goal setting for promoting behavior change and client engagement across various mental health conditions and settings.
Research limitations/implications
The scoping review has some limitations, such as not investigating the validity and reliability of the goal-setting strategies identified, and excluding conference papers and non-English articles.
Originality/value
This scoping review presents a mapping of how goal-setting has been delivered in mental health and informed OT practice. The findings suggest limited research in OT and highlight the need for more studies to address the evidence gap in individualized client-centered OT.
Koraljka Golub, Osma Suominen, Ahmed Taiye Mohammed, Harriet Aagaard and Olof Osterman
Abstract
Purpose
In order to estimate the value of semi-automated subject indexing in operative library catalogues, the study aimed to investigate five different automated implementations of an open source software package on a large set of Swedish union catalogue metadata records, with Dewey Decimal Classification (DDC) as the target classification system. It also aimed to contribute to the body of research on aboutness and related challenges in automated subject indexing and evaluation.
Design/methodology/approach
On a sample of over 230,000 records with close to 12,000 distinct DDC classes, the open source tool Annif, developed by the National Library of Finland, was applied in the following implementations: a lexical algorithm, a support vector classifier, fastText, Omikuji Bonsai and an ensemble approach combining the former four. A qualitative study involving two senior catalogue librarians and three students of library and information studies was also conducted on a sample of 60 records to investigate the value and inter-rater agreement of automatically assigned classes.
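The ensemble step can be pictured as a weighted average of per-class confidence scores produced by the individual backends. This is a generic sketch of score-fusion, not Annif's actual API; the function name, score dictionaries and DDC class labels below are illustrative.

```python
def ensemble_predict(score_dicts, weights=None):
    """Combine per-class confidence scores from several backends by
    weighted averaging and return classes ranked by combined score."""
    if weights is None:
        weights = [1.0] * len(score_dicts)
    combined = {}
    for scores, w in zip(score_dicts, weights):
        for cls, s in scores.items():
            combined[cls] = combined.get(cls, 0.0) + w * s
    total = sum(weights)
    ranked = sorted(((s / total, c) for c, s in combined.items()), reverse=True)
    return [c for _, c in ranked]

# A lexical backend and a statistical backend disagree; the ensemble arbitrates.
lexical = {"004": 0.9, "020": 0.3}
svc = {"020": 0.8, "004": 0.4}
print(ensemble_predict([lexical, svc]))
```

Averaging complementary backends in this way is a common reason ensembles outperform any single algorithm, as the Findings below report.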
Findings
The best results were achieved using the ensemble approach that achieved 66.82% accuracy on the three-digit DDC classification task. The qualitative study confirmed earlier studies reporting low inter-rater agreement but also pointed to the potential value of automatically assigned classes as additional access points in information retrieval.
Originality/value
The paper presents an extensive study of automated classification in an operative library catalogue, accompanied by a qualitative study of automated classes. It demonstrates the value of applying semi-automated indexing in operative information retrieval systems.
Jie Ma, Zhiyuan Hao and Mo Hu
Abstract
Purpose
The density peak clustering algorithm (DP) is proposed to identify cluster centers by two parameters, i.e. the ρ value (local density) and the δ value (the distance between a point and another point with a higher ρ value). According to the center-identifying principle of the DP, potential cluster centers should have both a higher ρ value and a higher δ value than other points. However, this principle may prevent the DP from identifying categories with multiple centers or centers located in lower-density regions. In addition, the DP's improper assignment strategy can produce wrong assignments for non-center points. This paper aims to address these issues and improve the clustering performance of the DP.
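The ρ/δ computation that the DP's center-identifying principle rests on can be sketched in a few lines. This is the classic cutoff-density formulation on toy 2D points, purely for illustration; it is not the paper's TMsDP variant.

```python
def density_peaks(points, cutoff):
    """Per point: rho = number of neighbours within the cutoff distance,
    delta = distance to the nearest point with a strictly higher rho."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    n = len(points)
    rho = [sum(1 for j in range(n) if j != i and dist(points[i], points[j]) < cutoff)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist(points[i], points[j]) for j in range(n) if rho[j] > rho[i]]
        # By convention a global density maximum gets its largest pairwise distance.
        delta.append(min(higher) if higher else max(dist(points[i], points[j])
                                                    for j in range(n) if j != i))
    return rho, delta

# Two tight groups plus one isolated point: the isolated point has low rho
# but large delta, exactly the pattern the center-identifying principle misreads.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (9, 9)]
rho, delta = density_peaks(pts, cutoff=0.5)
```

The outlier at (9, 9) illustrates the limitation discussed above: a point in a low-density region scores a high δ but a low ρ, so the plain DP decision rule struggles with it.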
Design/methodology/approach
First, to identify as many potential cluster centers as possible, the authors construct a point-domain by introducing the pinhole imaging strategy to extend the searching range of the potential cluster centers. Second, they design different novel calculation methods for calculating the domain distance, point-domain density and domain similarity. Third, they adopt domain similarity to achieve the domain merging process and optimize the final clustering results.
Findings
The experimental results on analyzing 12 synthetic data sets and 12 real-world data sets show that two-stage density peak clustering based on multi-strategy optimization (TMsDP) outperforms the DP and other state-of-the-art algorithms.
Originality/value
The authors propose a novel DP-based clustering method, i.e. TMsDP, and transform the relationship between points into that between domains to ultimately further optimize the clustering performance of the DP.
Diego Camara Sales, Leandro Buss Becker and Cristian Koliver
Abstract
Purpose
Managing components' resources plays a critical role in the success of system architectures designed for cyber–physical systems (CPS). Selecting candidate components to meet a specific application's needs also involves identifying the relationships among the architectural components, the network and the physical process, as the system's characteristics and properties are interrelated.
Design/methodology/approach
A Model-Driven Engineering (MDE) approach is therefore a valuable asset. Within this context, the authors present the so-called Systems Architecture Ontology (SAO), which allows the representation of a system architecture (SA), as well as the relationships, characteristics and properties of a CPS application.
Findings
SAO uses a common vocabulary inspired by the Architecture Analysis and Design Language (AADL) standard. To demonstrate SAO's applicability, this paper presents its use as an MDE approach combined with ontology-based modeling through the Ontology Web Language (OWL). From OWL models based on SAO, the authors propose a model transformation tool that extracts data related to architectural modeling into AADL code, allowing the creation of a component library and a property set model. Besides saving design time by automatically generating many lines of code, the generated code is less error-prone, that is, free of inconsistencies.
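Such model-to-text transformation can be pictured as template-based generation of AADL declarations from component data. The component fields, names and output shape below are illustrative assumptions, not the SAO tool's actual format.

```python
def to_aadl_system(name, components):
    """Emit a minimal AADL-style system skeleton from a component list.
    Identifiers here are hypothetical, for illustration only."""
    lines = [f"system {name}"]
    lines.append(f"end {name};")
    lines.append(f"system implementation {name}.impl")
    lines.append("  subcomponents")
    for comp in components:
        lines.append(f"    {comp['id']}: {comp['category']} {comp['type']};")
    lines.append(f"end {name}.impl;")
    return "\n".join(lines)

print(to_aadl_system("FlightCtrl", [
    {"id": "imu", "category": "device", "type": "IMU_Sensor"},
    {"id": "cpu", "category": "processor", "type": "ARM_A53"},
]))
```

Generating the text from a single validated model is what makes the output consistent: every declaration is derived from the same source of truth rather than typed by hand.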
Originality/value
To illustrate the proposal, the authors present a case study in the aerospace domain with the application of SAO and its transformation tool. As a result, a library containing 74 components and a related set of properties are automatically generated to support architectural design and evaluation.
Maria Angela Butturi, Francesco Lolli and Rita Gamberini
Abstract
Purpose
This study presents the development of a supply chain (SC) observatory, which is a benchmarking solution to support companies within the same industry in understanding their positioning in terms of SC performance.
Design/methodology/approach
A case study is used to demonstrate the set-up of the observatory. Twelve experts on automatic equipment for the wrapping and packaging industry were asked to select a set of performance criteria taken from the literature and evaluate their importance for the chosen industry using multi-criteria decision-making (MCDM) techniques. To handle the high number of criteria without demanding time-consuming effort from decision-makers (DMs), five subjective, parsimonious methods for criteria weighting are applied and compared.
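For the AHP family of weighting methods mentioned, criterion weights are typically derived from a reciprocal pairwise comparison matrix. A common approximation is the row geometric mean; this is a generic sketch of that step, not the paper's parsimonious variant, and the example judgments are invented.

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP criterion weights from a reciprocal pairwise
    comparison matrix using the row geometric mean method."""
    n = len(pairwise)
    gmeans = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Three criteria: the first is judged 3x as important as the second
# and 5x as important as the third (Saaty-style 1-9 judgments).
matrix = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]
weights = ahp_weights(matrix)
```

The parsimonious methods the study compares exist precisely because filling in a full n×n matrix like this requires n(n-1)/2 judgments, which quickly becomes burdensome for DMs.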
Findings
A benchmarking methodology is presented and discussed, aimed at DMs in the considered industry. Ten companies were ranked with regard to SC performance. The ranking was on average robust, since its general structure was very similar across all five weighting methodologies. The simplified analytic hierarchy process (AHP), however, was the method with the greatest ability to discriminate between the criteria of importance, and it was faster to carry out and more quickly understood by the DMs.
Originality/value
Developing an SC observatory usually requires managing a large number of alternatives and criteria. The developed methodology uses parsimonious weighting methods, providing DMs with an easy-to-use and time-saving tool. A future research step will be to complete the methodology by defining the minimum variation required for one or more criteria to reach a specific position in the ranking through the implementation of a post-fact analysis.
Abstract
Purpose
This study focuses on the classification of targets with varying shapes using radar cross section (RCS), which is influenced by the target’s shape. This study aims to develop a robust classification method by considering an incident angle with minor random fluctuations and using a physical optics simulation to generate data sets.
Design/methodology/approach
The approach involves several supervised machine learning and classification methods, including traditional algorithms and a deep neural network classifier. It uses histogram-based definitions of the RCS for feature extraction, with an emphasis on resilience against noise in the RCS data. Data enrichment techniques are incorporated, including the use of noise-impacted histogram data sets.
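Histogram-based feature extraction followed by a K-nearest-neighbour vote, one of the traditional baselines studied, can be sketched as below. The bin count, RCS value range and target labels are illustrative assumptions, not the paper's configuration.

```python
from collections import Counter

def rcs_histogram(samples, bins, lo, hi):
    """Fixed-range histogram of RCS values, normalised to sum to 1."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for s in samples:
        idx = min(bins - 1, max(0, int((s - lo) / width)))
        counts[idx] += 1
    total = len(samples)
    return [c / total for c in counts]

def knn_predict(train, query, k=3):
    """k-nearest-neighbour vote on histogram feature vectors (squared L2)."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(h, query)), label)
                   for h, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical targets: low-RCS "sphere" returns vs high-RCS "plate" returns.
train = [
    (rcs_histogram([0.10, 0.12, 0.15], 4, 0, 1), "sphere"),
    (rcs_histogram([0.11, 0.14, 0.13], 4, 0, 1), "sphere"),
    (rcs_histogram([0.85, 0.90, 0.92], 4, 0, 1), "plate"),
]
query = rcs_histogram([0.12, 0.13, 0.14], 4, 0, 1)
print(knn_predict(train, query))
```

Using the histogram rather than raw RCS samples is what gives the features their resilience to noise: small per-sample fluctuations mostly stay within the same bin.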
Findings
The classification algorithms are extensively evaluated, highlighting their efficacy in feature extraction from RCS histograms. Among the studied algorithms, the K-nearest neighbour is found to be the most accurate of the traditional methods, but it is surpassed in accuracy by a deep learning network classifier. The results demonstrate the robustness of the feature extraction from the RCS histograms, motivated by mm-wave radar applications.
Originality/value
This study presents a novel approach to target classification that extends beyond traditional methods by integrating deep neural networks and focusing on histogram-based methodologies. It also incorporates data enrichment techniques to enhance the analysis, providing a comprehensive perspective for target detection using RCS.
Henriett Primecz and Jasmin Mahadevan
Abstract
Purpose
Using intersectionality and introducing newer developments from critical cross-cultural management studies, this paper aims to discuss how diversity is applicable to changing cultural contexts.
Design/methodology/approach
The paper is a conceptual paper built upon relevant empirical research findings from critical cross-cultural management studies.
Findings
By applying intersectionality as a conceptual lens, this paper underscores the practical and conceptual limitations of the business case for diversity, in particular in a culturally diverse international business (IB) setting. Introducing newer developments from critical cross-cultural management studies, the authors identify the need to investigate and manage diversity across distinct categories, and as intersecting with culture, context and power.
Research limitations/implications
This paper builds on previous empirical research in critical cross-cultural management studies using intersectionality as a conceptual lens and draws implications for diversity management in an IB setting from there. The authors add to the critique of the business case by showing its failure to identify and, consequently, manage diversity, equality/equity and inclusion (DEI) in IB settings.
Practical implications
Organizations (e.g. MNEs) are enabled to clearly see the limitations of the business case and provided with a conceptual lens for addressing DEI issues in a more contextualized and intersectional manner.
Originality/value
This paper introduces intersectionality, as discussed and applied in critical cross-cultural management studies, as a conceptual lens for outlining the limitations of the business case for diversity and for promoting DEI in an IB setting in more complicated, realistic and relevant ways.
Shruti Garg, Rahul Kumar Patro, Soumyajit Behera, Neha Prerna Tigga and Ranjita Pandey
Abstract
Purpose
The purpose of this study is to propose an alternative efficient 3D emotion recognition model for variable-length electroencephalogram (EEG) data.
Design/methodology/approach
The classical AMIGOS data set, which comprises multimodal records of varying lengths on mood, personality and other physiological aspects of emotional response, is used for the empirical assessment of the proposed overlapping sliding window (OSW) modelling framework. Two features are extracted using Fourier and wavelet transforms: normalised band power (NBP) and normalised wavelet energy (NWE), respectively. The arousal, valence and dominance (AVD) emotions are predicted using one-dimensional (1D) and two-dimensional (2D) convolutional neural networks (CNNs) for both single and combined features.
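The core of an overlapping sliding window scheme, producing fixed-length samples from variable-length signals, reduces to a few lines. Window length and step size below are arbitrary choices for illustration, not the paper's settings.

```python
def sliding_windows(signal, win_len, step):
    """Cut a variable-length 1D signal into fixed-length, overlapping
    windows; trailing samples shorter than win_len are dropped."""
    return [signal[i:i + win_len]
            for i in range(0, len(signal) - win_len + 1, step)]

eeg = list(range(10))          # stand-in for one EEG channel
segs = sliding_windows(eeg, win_len=4, step=2)
```

Because a step smaller than the window length makes consecutive windows overlap, longer recordings yield proportionally more equal-length samples, which is one way to counter the imbalance and variable length the abstract describes.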
Findings
The two-dimensional convolutional neural network (2D CNN) outcomes on the EEG signals of the AMIGOS data set are observed to yield the highest accuracy, that is, 96.63%, 95.87% and 96.30% for AVD, respectively, at least 6% higher than the other available competitive approaches.
Originality/value
The present work is focussed on the less explored, complex AMIGOS (2018) data set, which is imbalanced and of variable length, whereas EEG emotion recognition work is widely available on simpler data sets. The following challenges of the AMIGOS data set are addressed in the present work: handling data in tensor form; proposing an efficient method for generating sufficient equal-length samples from imbalanced, variable-length data; selecting a suitable machine learning/deep learning model; and improving the accuracy of the applied model.
Abstract
Purpose
Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many oriented applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc. However, accurate segmentation is a critical task, since finding a correct model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines the three benchmark distribution thresholding techniques, Gaussian, lognormal and gamma distributions, to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Benchmark Data set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmark data sets.
Design/methodology/approach
The proposed PPSM combines the three benchmark distribution thresholding techniques to estimate an optimum threshold value that leads to optimum extraction of the segmented region: Gaussian, lognormal and gamma distributions.
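Minimum cross-entropy thresholding (MCET), which the PPSM builds on, selects the grey level minimising Li's cross-entropy criterion over the image histogram. Below is a plain exhaustive-search sketch of that criterion on a toy bimodal histogram; the parallelisation and the three-distribution combination of the PPSM itself are not shown.

```python
import math

def mcet_threshold(hist):
    """Exhaustive minimum cross-entropy threshold (Li's criterion) over a
    grey-level histogram; returns the grey level minimising the criterion."""
    best_t, best_eta = None, float("inf")
    for t in range(1, len(hist)):
        n1 = sum(hist[:t])
        n2 = sum(hist[t:])
        if n1 == 0 or n2 == 0:
            continue
        # First moments and class means below/above the candidate threshold.
        m1 = sum(g * h for g, h in enumerate(hist[:t]))
        m2 = sum(g * h for g, h in enumerate(hist[t:], start=t))
        mu1, mu2 = m1 / n1, m2 / n2
        if mu1 <= 0 or mu2 <= 0:
            continue
        eta = -(m1 * math.log(mu1) + m2 * math.log(mu2))
        if eta < best_eta:
            best_eta, best_t = eta, t
    return best_t

# Bimodal toy histogram: dark peak around level 2, bright peak around level 7.
hist = [1, 5, 9, 5, 1, 1, 5, 9, 5, 1]
print(mcet_threshold(hist))
```

The loop over candidate thresholds is embarrassingly parallel, which is what a parallel boosting scheme like the PPSM's can exploit to cut the computational cost of MCET computing.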
Findings
On the basis of the achieved results, it can be observed that the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is a robust, accurate and highly consistent method with high performance.
Originality/value
A novel hybrid segmentation model is constructed exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, and to provide an accurate and high-performance thresholding with minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort in MCET computing. The proposed model might be used as a valuable tool in many oriented applications such as health-care systems, pattern recognition, traffic control, surveillance systems, etc.