Search results

1 – 10 of 45
Open Access
Article
Publication date: 15 December 2020

Soha Rawas and Ali El-Zaart

Abstract

Purpose

Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many applied domains such as health-care systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a correct model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Data Set (BSDS) and Common Objects in COntext (COCO). The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmark data sets.

Design/methodology/approach

The proposed PPSM combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma) to estimate an optimum threshold value that leads to optimum extraction of the segmented region.
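
As a rough illustration of the criterion involved, the following is a minimal sketch of minimum cross-entropy thresholding over a gray-level histogram (the classic Li and Lee criterion). It covers only a single-distribution baseline; the PPSM's Gaussian, lognormal and gamma combination and its parallel boosting are not reproduced, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def mcet_threshold(image, n_bins=256):
    """Brute-force minimum cross-entropy threshold (Li and Lee criterion).

    Tries every candidate threshold t and returns the one minimizing the
    cross entropy between the image and its two-class segmentation. A
    single-distribution baseline only; the PPSM additionally fits
    lognormal and gamma models and parallelizes the computation.
    """
    hist, _ = np.histogram(image, bins=n_bins, range=(0, n_bins))
    levels = np.arange(n_bins)
    best_t, best_eta = 1, np.inf
    for t in range(1, n_bins):
        lo_h, lo_i = hist[:t], levels[:t]
        hi_h, hi_i = hist[t:], levels[t:]
        if lo_h.sum() == 0 or hi_h.sum() == 0:
            continue  # one class is empty: no valid two-class split here
        mu1 = (lo_h * lo_i).sum() / lo_h.sum()  # lower-class mean
        mu2 = (hi_h * hi_i).sum() / hi_h.sum()  # upper-class mean (always > 0)
        eta = (hi_h * hi_i * np.log(hi_i / mu2)).sum()
        if mu1 > 0:  # lower class not concentrated at gray level 0
            nz = lo_i > 0  # level 0 contributes nothing; skip log(0)
            eta += (lo_h[nz] * lo_i[nz] * np.log(lo_i[nz] / mu1)).sum()
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t
```

A binary segmentation would then follow from `image >= mcet_threshold(image)`.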

Findings

On the basis of the achieved results, it can be observed that the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is a robust, accurate and highly consistent method with high performance.

Originality/value

A novel hybrid segmentation model is constructed, exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate, high-performance thresholding at minimum computational cost, the proposed PPSM uses a parallel processing method to minimize the computational effort of MCET computation. The proposed model might be used as a valuable tool in many applied domains such as health-care systems, pattern recognition, traffic control and surveillance systems.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 26 November 2018

Zhishuo Liu, Qianhui Shen, Jingmiao Ma and Ziqi Dong

Abstract

Purpose

This paper aims to extract the comment targets on Chinese online shopping platforms.

Design/methodology/approach

The authors first collect the comment texts and perform word segmentation, part-of-speech (POS) tagging and two rounds of feature word extraction. They then cluster the evaluation sentences and find the association rules between the evaluation words and the evaluation objects, establishing an association rule table. Finally, the authors mine the evaluation object of a comment sentence according to its evaluation word and the association rule table. They obtain comment data from Taobao and demonstrate by experiment that the method proposed in this paper is effective.
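
To make the first preprocessing steps concrete, here is a minimal sketch of Chinese word segmentation and POS tagging using the jieba library; the paper does not name its tooling, so both the library choice and the simple noun/adjective heuristic are assumptions, and the clustering and association-rule steps are omitted.

```python
import jieba.posseg as pseg  # Chinese word segmentation with POS tags

def extract_candidates(comment):
    """Segment a Chinese comment and collect nouns as candidate comment
    targets and adjectives as candidate evaluation words. This sketches
    only the preprocessing; clustering and association-rule mining
    between evaluation words and objects come afterwards."""
    targets, opinions = [], []
    for token in pseg.cut(comment):
        if token.flag.startswith("n"):    # noun tags: candidate targets
            targets.append(token.word)
        elif token.flag.startswith("a"):  # adjective tags: opinion words
            opinions.append(token.word)
    return targets, opinions

# e.g. extract_candidates("物流很快，包装也很好")
# might return (['物流', '包装'], ['快', '好'])
```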

Findings

The comment target extraction method the authors propose in this paper is effective.

Research limitations/implications

First, implicit features are extracted from review clauses without considering context information, which may affect the accuracy of the feature extraction to a certain degree. Second, low-frequency feature words are not considered when extracting feature words, although some low-frequency feature words also contain effective information.

Practical implications

Because of the mass of online review data, reading every comment one by one is impossible. Therefore, research on handling product comments and presenting useful or interesting comments to clients is important.

Originality/value

The comment target extraction method the authors propose in this paper is effective.

Details

International Journal of Crowd Science, vol. 2 no. 3
Type: Research Article
ISSN: 2398-7294

Open Access
Article
Publication date: 29 July 2020

T. Mahalingam and M. Subramoniam

Abstract

Surveillance is an emerging concept in current technology, as it plays a vital role in monitoring key activities around the world. Identifying and tracking moving objects by means of computer vision techniques is a major part of surveillance, and moving object detection is the initial step of video analysis in various computer applications. The main drawback of existing object tracking methods is that they are time-consuming when the video contains a high volume of information. Certain issues arise in choosing the optimum tracking technique for this huge volume of data, and the situation becomes worse when the tracked object changes orientation over time; predicting multiple objects at the same time is also difficult. To overcome these issues, this paper proposes a robust video object detection and tracking technique. The proposed technique is divided into three phases, namely a detection phase, a tracking phase and an evaluation phase, where the detection phase comprises foreground segmentation and noise reduction. A Mixture of Adaptive Gaussians (MoAG) model is proposed to achieve efficient foreground segmentation, and a fuzzy morphological filter is implemented to remove the noise present in the foreground-segmented frames. Moving object tracking is achieved by blob detection, which falls under the tracking phase. Finally, the evaluation phase covers feature extraction and classification: texture-based and quality-based features are extracted from the processed frames and passed to a classifier, J48, i.e. a decision-tree-based classifier. The performance of the proposed technique is analyzed against the existing techniques k-NN and MLP in terms of precision, recall, F-measure and ROC.
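
A minimal sketch of the detection front end described above, using OpenCV's MOG2 background subtractor as a stand-in for the proposed MoAG model and a plain morphological opening in place of the fuzzy morphological filter; the video file name and blob-area threshold are hypothetical, and the feature extraction and J48 classification stages are omitted.

```python
import cv2

# Detection phase: adaptive Gaussian-mixture foreground segmentation
# (MOG2 here approximates the paper's MoAG model) plus noise reduction.
subtractor = cv2.createBackgroundSubtractorMOG2(history=300, detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                         # foreground segmentation
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # remove speckle noise

    # Tracking phase: blob detection via connected components.
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    for i in range(1, n):                                  # label 0 is background
        x, y, w, h, area = stats[i]
        if area > 200:                                     # ignore tiny blobs
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```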

Details

Applied Computing and Informatics, vol. 17 no. 1
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 24 June 2021

Bo Wang, Guanwei Wang, Youwei Wang, Zhengzheng Lou, Shizhe Hu and Yangdong Ye

Abstract

Purpose

Vehicle fault diagnosis is a key factor in ensuring the safe and efficient operation of the railway system. Due to the numerous vehicle categories and different fault mechanisms, there is an unbalanced fault category problem. Most current methods to solve this problem have complex algorithm structures and low efficiency, and they require prior knowledge. This study aims to propose a new method with a simple structure that does not require any prior knowledge to achieve a fast diagnosis of unbalanced vehicle faults.

Design/methodology/approach

This study proposes a novel feature learning K-means-improved cluster-center selection (FKM-ICS) method, which comprises the ICS and the FKM. Specifically, in the ICS, this study defines a cluster-center approximation to select the initial cluster centers. In the FKM, this study uses an improved term frequency-inverse document frequency (TF-IDF) to measure and adjust the feature word weights in each cluster, retains the top τ feature words with the highest weights in each cluster and performs the clustering process again. With the FKM-ICS method, clustering performance for unbalanced vehicle fault diagnosis can be significantly enhanced.
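
A rough scikit-learn sketch of the FKM idea under stated assumptions: cluster TF-IDF vectors, keep the top-τ highest-weight feature words per cluster, then re-cluster on the reduced vocabulary. The ICS initialization and the paper's improved TF-IDF weighting are not reproduced, and all names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def fkm_sketch(fault_texts, n_clusters, tau=50, seed=0):
    """Cluster fault texts, retain the top-tau feature words per cluster
    by mean TF-IDF weight, then cluster again on the reduced vocabulary.
    Standard TF-IDF and k-means++ stand in for the paper's improved
    TF-IDF and ICS center selection."""
    X = TfidfVectorizer().fit_transform(fault_texts)
    labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X)

    keep = set()
    for c in range(n_clusters):
        weights = np.asarray(X[labels == c].mean(axis=0)).ravel()
        keep.update(np.argsort(weights)[-tau:])  # top-tau words in cluster c

    X_reduced = X[:, sorted(keep)]  # drop non-discriminative feature words
    return KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(X_reduced)
```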

Findings

This study finds that the FKM-ICS method achieves a fast diagnosis of vehicle faults on the vehicle fault text (VFT) data set, collected from a railway station in 2017. The experimental results on VFT indicate that the proposed method outperforms several state-of-the-art methods.

Originality/value

This is the first effort to address the vehicle fault diagnosis problem in this manner, and the proposed method performs effectively and efficiently. The ICS enables the FKM-ICS method to exclude the effect of outliers and addresses the fact that fault text data contain a certain amount of noisy data, which effectively enhances the method's stability. The FKM sharpens the distribution of feature words that discriminate between different fault categories and reduces the number of feature words, making the FKM-ICS method faster and a better clusterer for unbalanced vehicle fault diagnosis.

Details

Smart and Resilient Transportation, vol. 3 no. 2
Type: Research Article
ISSN: 2632-0487

Open Access
Article
Publication date: 18 January 2022

Srinimalan Balakrishnan Selvakumaran and Daniel Mark Hall

Abstract

Purpose

The purpose of this paper is to investigate the feasibility of an end-to-end simplified and automated reconstruction pipeline for digital building assets using the design science research approach. Current methods to create digital assets by capturing the state of existing buildings can provide high accuracy but are time-consuming, expensive and difficult.

Design/methodology/approach

Using design science research, this research identifies the need for a crowdsourced and cloud-based approach to reconstruct digital building assets. The research then develops and tests a fully functional smartphone application prototype. The proposed end-to-end smartphone workflow begins with data capture and ends with user applications.

Findings

The resulting implementation can achieve a realistic three-dimensional (3D) model characterized by different typologies, minimal trade-off in accuracy and low processing costs. By crowdsourcing the images, the proposed approach can reduce costs for asset reconstruction by an estimated 93% compared to manual modeling and 80% compared to locally processed reconstruction algorithms.

Practical implications

The resulting implementation achieves “good enough” reconstruction of as-is 3D models with minimal trade-offs in accuracy compared to automated approaches and a 15× cost saving compared to a manual approach. Potential facility management use cases include issue and information tracking, 3D mark-up and multi-model configurators.

Originality/value

Through user engagement, development, testing and validation, this work demonstrates the feasibility and impact of a novel crowdsourced and cloud-based approach for the reconstruction of digital building assets.

Details

Journal of Facilities Management, vol. 20 no. 3
Type: Research Article
ISSN: 1472-5967

Open Access
Article
Publication date: 4 August 2020

Alaa Tharwat

Abstract

Independent component analysis (ICA) is a widely used blind source separation technique that has been applied in many fields. However, ICA is usually utilized as a black box, without understanding of its internal details. Therefore, in this paper, the basics of ICA are provided to show how it works, serving as a comprehensive source for researchers who are interested in this field. The paper starts by introducing the definition and underlying principles of ICA. Additionally, different numerical examples are demonstrated in a step-by-step approach to explain the preprocessing steps of ICA and the mixing and unmixing processes in ICA. Moreover, different ICA algorithms, challenges and applications are presented.
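
To make the mixing and unmixing processes concrete, here is a minimal scikit-learn illustration in the spirit of the paper's step-by-step examples (not taken from it): two known sources are linearly mixed, and FastICA recovers them up to order, scale and sign.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)

# Two independent sources: a sine wave and a square wave, lightly noised
S = np.c_[np.sin(2 * t), np.sign(np.sin(3 * t))]
S += 0.05 * rng.standard_normal(S.shape)

# Mixing process: the observations are linear mixtures X = S A^T
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])  # mixing matrix
X = S @ A.T

# Unmixing process: estimate the sources and mixing matrix from X alone
ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)  # recovered sources (order/scale/sign ambiguous)
A_est = ica.mixing_           # estimated mixing matrix
```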

Details

Applied Computing and Informatics, vol. 17 no. 2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 7 June 2021

Flavia I. Gonsales

Abstract

Purpose

The paper aims to introduce social marketing (SM) as a tool to overcome low cultural participation, a problem of the arts and culture sector that has worsened in the post-pandemic scenario.

Design/methodology/approach

The study uses a multidisciplinary literature review (SM, museum marketing, museology and cultural policy) to address the problem of museums and other cultural heritage institutions, at both the macro-level (prevailing cultural policies and antecedents, barriers and consequences to cultural participation) and micro-level (challenges faced by museums in the 21st century and marketing as a management instrument).

Findings

The downstream, midstream and upstream approaches can be used to design and implement SM interventions intended to address the problem of low cultural participation in museums. The three approaches should be considered holistically, with their synergetic and recursive effects.

Research limitations/implications

Due to its introductory and conceptual nature, the study provides a comprehensive intervention framework to be used as a platform for future theoretical and empirical research. Further investigations may expand on the specificities of each approach (down, mid and upstream) and extend the framework to other nonprofit cultural institutions beyond museums, such as libraries and archives, cultural heritage sites and theater, music and dance companies.

Practical implications

The paper proposes a comprehensive SM intervention framework that integrates three interdependent approaches (downstream, midstream and upstream).

Originality/value

The paper provides a starting point for the holistic application of SM in the arts and culture sector. It also encourages researchers, cultural policymakers and cultural heritage professionals to investigate, design and implement SM programs that better understand, expand and diversify the audience and strengthen the legitimacy and relevance of cultural actors and activities to transform them into inclusive, accessible and sustainable institutions.

Open Access
Article
Publication date: 20 September 2022

Joo Hun Yoo, Hyejun Jeong, Jaehyeok Lee and Tai-Myoung Chung

Abstract

Purpose

This study aims to summarize the critical issues in medical federated learning and applicable solutions. Detailed explanations of how federated learning techniques can be applied to the medical field are also presented. About 80 reference studies in the field were reviewed, and the federated learning framework currently being developed by the research team is described. This paper will help researchers to build an actual medical federated learning environment.

Design/methodology/approach

Since machine learning techniques emerged, more efficient analysis has become possible with large amounts of data. However, data regulations have been tightened worldwide, and the use of centralized machine learning methods has become almost infeasible. Federated learning techniques have been introduced as a solution. Even with their powerful structural advantages, unsolved challenges remain for federated learning in a real medical data environment. This paper summarizes these by category and presents possible solutions.
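
As one concrete instance of the structural advantage described here, the following is a minimal sketch of FedAvg-style aggregation, the baseline scheme in most federated learning systems; it is a generic illustration under assumed data structures, not the research team's framework.

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """Average per-layer model parameters across clients, weighted by
    each client's local data set size. Raw medical records never leave
    the clients; only these parameter arrays are shared and aggregated.

    client_params: list over clients, each a list of per-layer ndarrays.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    n_layers = len(client_params[0])
    return [
        sum(p[k] * (n / total) for p, n in zip(client_params, client_sizes))
        for k in range(n_layers)
    ]

# e.g. two clients holding a single 2x2 weight matrix each:
# fed_avg([[np.eye(2)], [np.zeros((2, 2))]], [300, 100]) -> [0.75 * identity]
```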

Findings

This paper provides four critical categorized issues to be aware of when applying the federated learning technique to the actual medical data environment, then provides general guidelines for building a federated learning environment as a solution.

Originality/value

Existing studies have dealt with issues such as heterogeneity problems in the federated learning environment itself, but they were lacking in how these issues cause problems in actual working tasks. Therefore, this paper helps researchers understand federated learning issues through examples from actual medical machine learning environments.

Details

International Journal of Web Information Systems, vol. 18 no. 2/3
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 5 August 2022

Philippos Nikiforou, Thomas Dimopoulos and Petros Sivitanides

Abstract

Purpose

The purpose of this study is to investigate how the degree of overpricing (DOP) and other variables are associated with the time on the market (TOM) and the final selling price (SP) for residential properties in the Paphos urban area.

Design/methodology/approach

The hedonic pricing model was used to examine the association of TOM and SP with various factors. The association of the independent variable DOP, along with other independent variables, with the two dependent variables TOM and SP was investigated via ordinary least squares (OLS) regression models: in the first set of models the dependent variable was TOM, and in the second set the dependent variable was SP. A sample of N = 538 completed transactions from Q1 2008 to Q2 2019 was used to estimate the optimum DOP that a seller must apply to the current market value of a property in order to achieve the highest SP in the shortest TOM.
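
A minimal statsmodels sketch of the two OLS model sets just described; the file name and the hedonic control variables are hypothetical stand-ins, since the abstract does not list the exact specification.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data set of the N = 538 completed transactions
df = pd.read_csv("paphos_transactions.csv")

# Hedonic regressors: DOP plus assumed property characteristics
X = sm.add_constant(df[["DOP", "floor_area", "plot_size", "bedrooms"]])

tom_model = sm.OLS(df["TOM"], X).fit()  # first set: time on the market
sp_model = sm.OLS(df["SP"], X).fit()    # second set: final selling price
print(tom_model.summary())
print(sp_model.summary())
```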

Findings

The results of this study suggest that the degree of overpricing in thin and less transparent markets is higher than in transparent markets with high property transaction volumes. In mature markets like the USA and the UK, where actual sold prices are published, the DOP is around 1.5%, which is much lower than the 11% DOP identified in this study.

Practical implications

It was found that buyers are willing to pay more for the same house on a bigger plot than for a bigger house on the same plot. The outcome is that smaller houses sell faster and at a higher price per square meter than larger houses. Smaller houses are also more affordable than larger houses.

Social implications

There is a larger pool of buyers for smaller houses than for bigger houses. Higher demand for smaller houses results in a higher price per square meter than for bigger houses. Correspondingly, the TOM for smaller houses is shorter than the TOM for bigger houses.

Originality/value

The database used is unique, coming from an estate agent located in Paphos that has sold more than 27,000 properties in 20 years. This data set is the most accurate source of information on Cyprus' property transactions.

Details

Journal of European Real Estate Research, vol. 15 no. 3
Type: Research Article
ISSN: 1753-9269

Open Access
Article
Publication date: 5 September 2019

Matilde Lafuente-Lechuga, Ursula Faura-Martínez and Olga García-Luque

Abstract

Purpose

This paper studies social inequality in the vital field of employment in Spain during the crisis period 2009-2014.

Design/methodology/approach

Factor analysis is used to build a synthetic index of employment exclusion. The starting information matrix collects data from a wide set of employment variables for all 17 Spanish autonomous communities and the autonomous cities of Ceuta and Melilla. Based on this information, four factors are extracted which explain employment exclusion in different situations of vulnerability, such as unemployment, temporary employment, poverty or low pay.
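
A minimal sketch of building such a synthetic index with scikit-learn's FactorAnalysis; the input file, the indicator columns and the equal-weight aggregation of factor scores are assumptions, since the abstract does not specify the rotation or weighting used.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Hypothetical matrix: rows are the 17 autonomous communities plus
# Ceuta and Melilla; columns are employment indicators.
df = pd.read_csv("employment_indicators.csv", index_col="territory")
Z = StandardScaler().fit_transform(df)  # standardize the indicators

fa = FactorAnalysis(n_components=4, random_state=0)  # four factors, as in the paper
scores = fa.fit_transform(Z)

# One simple synthetic index: the sum of the four factor scores,
# then a ranking from lowest to highest employment exclusion.
index = scores.sum(axis=1)
ranking = pd.Series(index, index=df.index).sort_values()
print(ranking)
```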

Findings

In the territorial ranking, Madrid, the Basque Country, Aragon and Catalonia show the lowest risk of employment exclusion, whereas Ceuta, Andalusia, Extremadura and the Canary Islands show the highest.

Originality/value

The main value of this research is that it confirms the need for coordination of public policies in order to foster social and territorial cohesion in Spain.

Details

Applied Economic Analysis, vol. 27 no. 80
Type: Research Article
ISSN: 2632-7627
