Search results

1 – 8 of 8
Open Access
Article
Publication date: 22 May 2023

Mirjana Pejić Bach, Berislav Žmuk, Tanja Kamenjarska, Maja Bašić and Bojan Morić Milovanović

Abstract

Purpose

This paper aims to explore and analyse stakeholders’ perceptions of the development priorities and suggests more effective strategies to assist sustainable economic growth in the United Arab Emirates (UAE).

Design/methodology/approach

The authors use the World Bank data set, which collects various stakeholders’ opinions on the development of the UAE. First, exploratory factor analysis has been applied to detect the main groups of development priorities. Second, fuzzy cluster analysis has been conducted to detect the groups of stakeholders with different attitudes towards the importance of the extracted groups of priorities. Third, clusters have been compared according to demographics, media usage and shared prosperity goals.
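
As an illustration of this two-step design, here is a minimal sketch pairing scikit-learn's FactorAnalysis with the fuzzy c-means implementation in scikit-fuzzy; the survey matrix, item count and all parameter values are illustrative assumptions, not the World Bank data.

```python
# Two-step sketch: EFA to extract priority groups, then fuzzy c-means to
# cluster stakeholders by their attitudes towards those groups.
# The Likert-style survey matrix and all parameters are illustrative.
import numpy as np
import skfuzzy as fuzz
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(200, 10)).astype(float)  # 200 stakeholders x 10 items

# Step 1: two latent priority groups (cf. economic vs. sustainability).
fa = FactorAnalysis(n_components=2, random_state=0)
scores = fa.fit_transform(responses)                 # (200, 2) factor scores

# Step 2: fuzzy c-means on the factor scores (skfuzzy wants features x samples).
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
    scores.T, c=4, m=2.0, error=1e-5, maxiter=1000, seed=0)
membership = u.argmax(axis=0)                        # hardened cluster per stakeholder
print("cluster sizes:", np.bincount(membership), "FPC:", round(fpc, 3))
```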

Findings

The two main groups of development priorities have been extracted by the exploratory factor analysis: economic priorities and sustainability priorities. Four clusters have been detected according to the level of motivation when it comes to the economic and sustainability priorities: Cluster 1 (High economic – High sustainability), Cluster 2 (High economic – Medium sustainability), Cluster 3 (High economic – Low sustainability) and Cluster 4 (Low economic – Low sustainability). Members of the cluster that prefer a high level of economic and sustainability priorities (Cluster 1) also prefer more diversified economic growth providing better employment opportunities and better education and training for young people in the UAE.

Research limitations/implications

Limitations stem from the survey being conducted on a relatively small sample using the data collected by the World Bank; however, this data set allowed a comparison of various stakeholders. Future research should consider a broader sample approach, e.g. exploring and comparing all of the Gulf Cooperation Council (GCC) countries; investigating the opinions of the expatriate managers living in the UAE that are not from GCC countries; and/or including other various groups that are lagging, such as female entrepreneurs.

Practical implications

Several practical implications were identified regarding education and media coverage. Since respondents prioritize the economic development factors over sustainability factors, a media campaign could be developed and executed to increase sustainability awareness. A campaign could target especially male citizens since the analysis indicates that males are more likely to affirm high economic and low sustainability priorities than females. There is no need for further diversification of media campaigns according to age since the analysis did not reveal relevant differences in age groups, implying there is no inter-generational gap between respondents.

Originality/value

This paper contributes to the literature by comparing the perceived importance of various development goals in the UAE, such as development priorities and shared prosperity indicators. The fuzzy cluster analysis has been used as a novel approach to detect the relevant groups of stakeholders in the UAE and their developmental priorities. The issue of media usage and demographic characteristics in this context has also been discussed.

Details

Journal of Enterprising Communities: People and Places in the Global Economy, vol. 17 no. 5
Type: Research Article
ISSN: 1750-6204

Open Access
Article
Publication date: 24 May 2024

Long Li, Binyang Chen and Jiangli Yu

Abstract

Purpose

The selection of sensitive temperature measurement points is the premise of thermal error modeling and compensation. However, most of the sensitive temperature measurement point selection methods do not consider the influence of the variability of thermal sensitive points on thermal error modeling and compensation. This paper considers the variability of thermal sensitive points, and aims to propose a sensitive temperature measurement point selection method and thermal error modeling method that can reduce the influence of thermal sensitive point variability.

Design/methodology/approach

Taking the truss robot as the experimental object, the finite element method is used to construct a simulation model of the truss robot, and the temperature measurement point layout scheme is designed based on the simulation model to collect temperature and thermal error data. After the temperature measurement point data have been clustered, an improved attention mechanism is used to extract the temperature data at key time steps for each category of measurement points for thermal error modeling.
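
A compact sketch of the prediction stage under stated assumptions: an LSTM with a soft attention layer that weights time steps, pooling the hidden states before regressing the thermal error. Layer sizes and the attention form are guesses; the paper's GA-LSTM architecture is not reproduced here.

```python
# Minimal PyTorch sketch of an LSTM with temporal attention for thermal error
# prediction, in the spirit of the GA-LSTM described here. Layer sizes and
# the attention form are assumptions; the paper's exact architecture differs.
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    def __init__(self, n_sensors: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.score = nn.Linear(hidden, 1)        # attention score per time step
        self.head = nn.Linear(hidden, 1)         # thermal error (one value)

    def forward(self, x):                        # x: (batch, time, n_sensors)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        w = torch.softmax(self.score(h), dim=1)  # weights over time steps
        context = (w * h).sum(dim=1)             # attention-pooled summary
        return self.head(context).squeeze(-1)

model = AttentionLSTM(n_sensors=6)
temps = torch.randn(8, 50, 6)                    # 8 samples, 50 steps, 6 clustered sensors
print(model(temps).shape)                        # torch.Size([8])
```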

Findings

Compared with conventional thermal error modeling based on fixed sensitive temperature measurement points, the proposed method is shown to be more flexible in handling sensitive temperature measurement points and more stable in prediction accuracy.

Originality/value

The Grey Attention-Long Short Term Memory (GA-LSTM) thermal error prediction model proposed in this paper can reduce the influence of the variability of thermal sensitive points on the accuracy of thermal error modeling in long-term processing, and improve the accuracy of thermal error prediction model, which has certain application value. It has guiding significance for thermal error compensation prediction.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 23 July 2020

Rami Mustafa A. Mohammad

Abstract

Spam email classification using data mining and machine learning approaches has enticed researchers' attention due to its obvious positive impact in protecting internet users. Several features can be used for creating data mining and machine learning based spam classification models. Yet, spammers know that the longer they use the same set of features for tricking email users, the more likely it is that anti-spam parties will develop tools for combating this kind of annoying email message. Spammers therefore adapt by continuously reforming the group of features utilized for composing spam emails. For that reason, even though traditional classification methods achieve sound classification results, they are ineffective for lifelong classification of spam emails because they are prone to the so-called “Concept Drift”. In the current study, an enhanced model is proposed for ensuring lifelong spam classification. For evaluation purposes, the overall performance of the suggested model is contrasted against various other stream mining classification techniques. The results proved the success of the suggested model as a lifelong spam email classification method.
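
For context, a minimal sketch of a test-then-train stream setup of the kind the model is contrasted against, assuming a toy email stream: an incremental classifier is rebuilt whenever its windowed accuracy collapses, a crude stand-in for drift handling. The window size and threshold are hypothetical; this is not the paper's model.

```python
# Test-then-train sketch of lifelong spam classification with a crude drift
# check: when windowed accuracy collapses, the model is rebuilt. This
# illustrates the concept-drift issue; it is not the paper's actual model.
from collections import deque
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

def make_clf():
    return SGDClassifier(loss="log_loss")

vec = HashingVectorizer(n_features=2**18, alternate_sign=False)
clf, recent = make_clf(), deque(maxlen=200)          # 1 = correct, 0 = miss

stream = [("win a free prize now", 1), ("meeting agenda attached", 0)] * 150  # toy stream
for text, label in stream:
    X = vec.transform([text])
    if hasattr(clf, "coef_"):                        # model has been trained
        recent.append(int(clf.predict(X)[0] == label))
        if len(recent) == recent.maxlen and sum(recent) / len(recent) < 0.7:
            clf, recent = make_clf(), deque(maxlen=200)   # drift: start over
    clf.partial_fit(X, [label], classes=[0, 1])      # incremental update
print("windowed accuracy:", sum(recent) / max(len(recent), 1))
```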

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 15 December 2023

Nicola Castellano, Roberto Del Gobbo and Lorenzo Leto

Abstract

Purpose

The concept of productivity is central to performance management and decision-making, although it is complex and multifaceted. This paper aims to describe a methodology based on the use of Big Data in a cluster analysis combined with a data envelopment analysis (DEA) that provides accurate and reliable productivity measures in a large network of retailers.

Design/methodology/approach

The methodology is described using a case study of a leading kitchen furniture producer. More specifically, Big Data is used in a two-step analysis prior to the DEA to automatically cluster a large number of retailers into groups that are homogeneous in terms of structural and environmental factors and assess a within-the-group level of productivity of the retailers.
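
A rough sketch of the pipeline under stated assumptions: K-means groups retailers on structural and environmental factors, then an input-oriented CCR DEA model is solved within each group by linear programming. The data, factor count and cluster count are illustrative, and the paper's factorial step is not reproduced.

```python
# Sketch of the two-step idea: cluster retailers on structural/environmental
# factors, then run an input-oriented CCR DEA within each cluster.
# Data and dimensions are illustrative; the paper's factors differ.
import numpy as np
from scipy.optimize import linprog
from sklearn.cluster import KMeans

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                        # minimise theta
    A, b = [], []
    for i in range(m):                                # sum_j lam_j x_ji <= theta * x_oi
        A.append(np.r_[-X[o, i], X[:, i]])
        b.append(0.0)
    for r in range(s):                                # sum_j lam_j y_jr >= y_or
        A.append(np.r_[0.0, -Y[:, r]])
        b.append(-Y[o, r])
    res = linprog(c, A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(0, None)] * (n + 1))
    return res.fun                                    # theta in (0, 1]

rng = np.random.default_rng(1)
env = rng.normal(size=(60, 4))                        # structural/environmental factors
inputs = rng.uniform(1, 10, size=(60, 2))             # e.g. staff, floor space
outputs = rng.uniform(1, 10, size=(60, 1))            # e.g. sales

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(env)
for k in range(3):                                    # within-group DEA
    idx = np.where(labels == k)[0]
    effs = [ccr_efficiency(inputs[idx], outputs[idx], j) for j in range(len(idx))]
    print(k, np.round(effs, 3))
```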

Findings

The proposed methodology helps reduce the heterogeneity among the units analysed, which is a major concern in DEA applications. The data-driven factorial and clustering technique allows for maximum within-group homogeneity and between-group heterogeneity while reducing the subjective bias and high dimensionality inherent in the use of Big Data.

Practical implications

The use of Big Data in clustering applied to productivity analysis can provide managers with data-driven information about the structural and socio-economic characteristics of retailers' catchment areas, which is important in establishing potential productivity performance and optimizing resource allocation. The improved productivity indexes enable the setting of targets that are coherent with retailers' potential, which increases motivation and commitment.

Originality/value

This article proposes an innovative technique to enhance the accuracy of productivity measures through the use of Big Data clustering and DEA. To the best of the authors’ knowledge, no attempts have been made to benefit from the use of Big Data in the literature on retail store productivity.

Details

International Journal of Productivity and Performance Management, vol. 73 no. 11
Type: Research Article
ISSN: 1741-0401

Open Access
Article
Publication date: 8 December 2023

Armin Mahmoodi, Leila Hashemi, Amin Mahmoodi, Benyamin Mahmoodi and Milad Jasemi

Abstract

Purpose

This study aims to predict stock market signals by designing an accurate model. To this end, the stock market is analysed using the technical analysis of Japanese candlesticks, combined with a support vector machine (SVM) and the following meta-heuristic algorithms: particle swarm optimization (PSO), imperialist competition algorithm (ICA) and genetic algorithm (GA).

Design/methodology/approach

In addition, among the developed algorithms, the most effective one is chosen to determine probable sell and buy signals. Moreover, the authors provide comparative results that validate the designed model against the basic models of three earlier articles. In the first model, PSO is used as a classification method to search the solution space thoroughly and at high running speed. In the second model, SVM is combined with ICA, where ICA tunes the SVM parameters. Finally, in the third model, SVM is combined with GA, where GA acts as optimizer and feature selection agent.
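
To make the third model's idea concrete, a sketch in which a tiny genetic algorithm evolves (C, gamma) pairs for an SVM scored by cross-validated accuracy. The toy data, population size, generation count and mutation scale are assumptions, not the paper's settings.

```python
# Tiny genetic algorithm tuning an SVM's C and gamma by cross-validated
# accuracy, sketching the SVM-GA idea. All settings here are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

def fitness(genome):                      # genome = (log10 C, log10 gamma)
    clf = SVC(C=10 ** genome[0], gamma=10 ** genome[1])
    return cross_val_score(clf, X, y, cv=3).mean()

pop = rng.uniform(-3, 3, size=(12, 2))    # 12 candidate (C, gamma) pairs
for _ in range(15):                       # generations
    scores = np.array([fitness(g) for g in pop])
    elite = pop[np.argsort(scores)[-6:]]  # keep the best half
    parents = elite[rng.integers(0, 6, size=(12, 2))]          # pick parent pairs
    pop = parents.mean(axis=1) + rng.normal(0, 0.3, (12, 2))   # crossover + mutation

best = max(pop, key=fitness)
print("C=%.3g gamma=%.3g" % (10 ** best[0], 10 ** best[1]))
```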

Findings

The results indicate that the prediction accuracy of all the new models is high for only six days. With respect to the confusion matrix results, however, the SVM-GA and SVM-ICA models correctly predicted more sell signals, while the SVM-PSO model correctly predicted more buy signals. Overall, SVM-ICA performed better than the other models in terms of execution.

Research limitations/implications

The long time span of the analysed data (2013–2021) makes the input data analysis challenging; the inputs must be adjusted with respect to changing market conditions.

Originality/value

In this study, two approaches have been developed within a candlestick model: a raw-based and a signal-based approach, in which the hit rate is determined by the percentage of correct evaluations of the stock market over a 16-day period.

Details

Journal of Capital Markets Studies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-4774

Open Access
Article
Publication date: 18 October 2023

Mohammad Rahiminia, Jafar Razmi, Sareh Shahrabi Farahani and Ali Sabbaghnia

Abstract

Purpose

Supplier segmentation provides companies with suitable policies to control each segment, thereby saving time and resources. Sustainability has become a mandatory requirement in competitive business environments. This study aims to develop a clustering-based approach to sustainable supplier segmentation.

Design/methodology/approach

The characteristics of the suppliers and the aspects of the purchased items were considered simultaneously. The weights of the sub-criteria were determined using the best-worst method. Then, the K-means clustering algorithm was applied to all company suppliers based on four criteria. The proposed model is applied to a real case study to test the performance of the proposed approach.
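
A minimal sketch of the weighting-plus-clustering step, assuming the best-worst method has already produced the criterion weights (hard-coded below); the supplier scores and weight vector are illustrative.

```python
# Sketch of the segmentation step: scale supplier scores on the four criteria,
# apply criterion weights (here hard-coded; the paper derives them with the
# best-worst method), then run K-means. Data and weights are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
suppliers = rng.uniform(0, 10, size=(40, 4))     # 40 suppliers x 4 criteria
weights = np.array([0.40, 0.25, 0.20, 0.15])     # assumed BWM output, sums to 1

scaled = MinMaxScaler().fit_transform(suppliers)
segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    scaled * weights)                            # weighted feature space
print(np.bincount(segments))                     # suppliers per segment
```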

Findings

The results prove that supplier segmentation is more efficient when using clustering algorithms, and the best criteria are selected for sustainable supplier segmentation and managing supplier relationships.

Originality/value

This study integrates sustainability considerations into the supplier segmentation problem using a hybrid approach. The proposed sustainable supplier segmentation is a practical tool that reduces complexity and is convenient to execute. The proposed method helps business owners to sharpen their sustainability insights.

Details

Modern Supply Chain Research and Applications, vol. 5 no. 3
Type: Research Article
ISSN: 2631-3871

Open Access
Article
Publication date: 29 July 2020

Mahmood Al-khassaweneh and Omar AlShorman

Abstract

In the big data era, image compression is of significant importance. Compression of large images is required for everyday tasks, including electronic data communication and internet transactions. However, two important measures should be considered for any compression algorithm: the compression factor and the quality of the decompressed image. In this paper, we use the Frei-Chen bases technique and a Modified Run Length Encoding (RLE) to compress images. The Frei-Chen bases technique is applied in the first stage, in which the average subspace is applied to each 3 × 3 block. Blocks with the highest energy are replaced by a single value that represents the average value of the pixels in the corresponding block. Even though the Frei-Chen bases technique provides lossy compression, it maintains the main characteristics of the image and enhances the compression factor, making it advantageous to use. In the second stage, RLE is applied to further increase the compression factor. The goal of using RLE is to enhance the compression factor without adding any distortion to the resultant decompressed image. Integrating RLE with the Frei-Chen bases technique, as described in the proposed algorithm, ensures high-quality decompressed images and a high compression rate. The results of the proposed algorithm are shown to be comparable in quality and performance with other existing methods.
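
A toy sketch of the two stages under stated assumptions: each 3 × 3 block is projected onto the Frei-Chen average basis (the constant 1/3 mask), blocks whose energy is dominated by that subspace are flattened to their mean, and the pixel stream is then run-length encoded. The 0.95 energy threshold is an assumed value, not the paper's.

```python
# Sketch of the two-stage scheme: project each 3x3 block onto the Frei-Chen
# average basis, flatten blocks dominated by that subspace to their mean,
# then run-length encode. The 0.95 threshold is an assumption.
import numpy as np

AVG = np.full((3, 3), 1.0 / 3.0)                 # Frei-Chen average basis

def flatten_smooth_blocks(img, thresh=0.95):
    out = img.astype(float).copy()
    for r in range(0, img.shape[0] - 2, 3):
        for c in range(0, img.shape[1] - 2, 3):
            blk = out[r:r + 3, c:c + 3]
            coef = (blk * AVG).sum()             # projection coefficient
            energy = (blk ** 2).sum() or 1.0     # guard all-zero block
            if coef ** 2 / energy >= thresh:     # average subspace dominates
                blk[:] = blk.mean()              # lossy: keep one value
    return out

def rle(seq):
    """Run-length encode a 1-D sequence into (value, run_length) pairs."""
    runs, prev, n = [], seq[0], 1
    for v in seq[1:]:
        if v == prev:
            n += 1
        else:
            runs.append((prev, n))
            prev, n = v, 1
    runs.append((prev, n))
    return runs

img = np.full((9, 9), 7.0)                       # toy grayscale image
img[:3, :3] = np.arange(9).reshape(3, 3)         # one textured block, rest smooth
compressed = rle(flatten_smooth_blocks(img).ravel().tolist())
print(compressed[:5])
```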

Details

Applied Computing and Informatics, vol. 20 no. 1/2
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 9 November 2023

Abdulmohsen S. Almohsen, Naif M. Alsanabani, Abdullah M. Alsugair and Khalid S. Al-Gahtani

Abstract

Purpose

The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. This study aims to enhance the quality of the owner's estimate so that the contract cost is predicted precisely at the pre-tendering phase, avoiding issues that would otherwise arise during the construction phase.

Design/methodology/approach

This paper integrated artificial neural networks (ANN), deep neural networks (DNN) and time series (TS) techniques to accurately estimate the ratio of the low bid to the OEC (R) for contracts of different sizes and three contract types (building, electrical and mechanical), based on 94 contracts from King Saud University. The ANN and DNN models were evaluated using the mean absolute percentage error (MAPE), mean sum square error (MSSE) and root mean sum square error (RMSSE).
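
A minimal sketch of the ANN step on synthetic data, since the 94-contract data set is not public: a small multilayer perceptron predicts the ratio R, and the three reported metrics are computed by hand. The features and target are placeholders.

```python
# Sketch of the ANN step: a small MLP predicts the low-bid/OEC ratio R, and
# the three reported metrics (MAPE, MSSE, RMSSE) are computed by hand.
# The contract features and target here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(94, 5))                      # e.g. size, type, duration...
R = 0.8 + 0.4 * X[:, 0] + rng.normal(0, 0.02, 94)  # synthetic ratio target

X_tr, X_te, y_tr, y_te = train_test_split(X, R, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)
pred = ann.predict(X_te)

mape = np.mean(np.abs((y_te - pred) / y_te)) * 100
msse = np.mean((y_te - pred) ** 2)                 # mean sum square error
rmsse = np.sqrt(msse)
print(f"MAPE={mape:.2f}%  MSSE={msse:.4f}  RMSSE={rmsse:.4f}")
```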

Findings

The main finding is that the ANN provides high accuracy, with a MAPE, MSSE and RMSSE of 2.94%, 0.0015 and 0.039, respectively. The DNN's precision was also high, with an RMSSE of 0.15 on average.

Practical implications

The owner and consultant are expected to use the study's findings to improve the accuracy of the owner's estimate and decrease the difference between the owner's estimate and the lowest submitted offer, enabling better decision-making.

Originality/value

This study fills the knowledge gap by developing an ANN model to handle missing TS data and forecast the difference between a low bid and the OEC at the pre-tendering phase.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 13
Type: Research Article
ISSN: 0969-9988
