Search results

1 – 10 of over 3000
Article
Publication date: 31 January 2020

Metin Vatansever, İbrahim Demir and Ali Hepşen


Abstract

Purpose

The main purpose of this study is to detect homogeneous housing market areas among the 196 districts of five major cities of Turkey in terms of house sale price indices. The second purpose is to forecast these 196 house sale price indices.

Design/methodology/approach

In this paper, the authors use the monthly house sale price indices of the 196 districts of five major cities of Turkey. The authors propose an autoregressive (AR) model-based fuzzy clustering approach to detect homogeneous housing market areas and to forecast house price indices.
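
As a rough illustration of this kind of approach (not the authors' exact implementation), one can represent each district by its fitted AR coefficients and then fuzzily cluster those coefficient vectors. The synthetic data, lag order and hand-rolled fuzzy c-means routine below are all assumptions.

```python
# Sketch: AR-coefficient-based fuzzy clustering of house price index series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
series = rng.normal(size=(196, 120)).cumsum(axis=1)   # placeholder monthly indices

# 1. Represent each series by its fitted AR(p) coefficients.
p = 3
features = np.array([AutoReg(s, lags=p).fit().params for s in series])

# 2. Fuzzy c-means on the coefficient vectors (m is the fuzzifier).
def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))         # membership matrix
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None] - centers[None], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)              # rows sum to one
    return U, centers

U, centers = fuzzy_cmeans(features, c=3)
print(U.argmax(axis=1)[:10])   # hard cluster assignment of first 10 districts
```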

Findings

The AR model-based fuzzy clustering approach detects three homogeneous property market areas among the 196 districts of five major cities of Turkey in which house sale prices move together (i.e. share similar sales dynamics). The approach also provides better forecasting results than standard AR models, with higher data efficiency and lower model validation and maintenance effort.

Research limitations/implications

In this study, the authors could not use district-based socioeconomic and consumption-behavior indicators or discrete geographical and property characteristics because of data limitations.

Practical implications

The findings of this study would help property investors establish more effective property management strategies that take different geographical location conditions into account.

Social implications

From the government side, knowing the future rises, falls and turning points of property prices in different locations would allow the government to monitor property price changes and control speculative activities that cause dramatic changes in the market.

Originality/value

No previous research has focused on neighborhood-based clustering and forecasting of house sale price indices in Turkey; this is the first academic study to do so.

Details

International Journal of Housing Markets and Analysis, vol. 13 no. 4
Type: Research Article
ISSN: 1753-8270


Article
Publication date: 22 April 2024

Ruoxi Zhang and Chenhan Ren


Abstract

Purpose

This study aims to construct a sentiment series generation method for danmu comments based on deep learning, and explore the features of sentiment series after clustering.

Design/methodology/approach

This study consisted of two main parts: danmu comment sentiment series generation and clustering. In the first part, the authors proposed a sentiment classification model based on BERT fine-tuning to quantify the sentiment polarity of danmu comments, and smoothed the resulting sentiment series using methods such as comprehensive weighting. In the second part, the shape-based distance (SBD) K-shape method was used to cluster the actual collected data.
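
A minimal sketch of the clustering stage, assuming the tslearn library (not named in the abstract) for its k-shape implementation, which uses shape-based distance internally; the sentiment curves here are synthetic placeholders for the BERT-derived series.

```python
# Sketch: k-shape clustering of per-video sentiment series.
import numpy as np
from tslearn.clustering import KShape
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

rng = np.random.default_rng(42)
curves = rng.normal(size=(50, 200))          # 50 videos, 200 time bins each

X = TimeSeriesScalerMeanVariance().fit_transform(curves)   # z-normalise series
ks = KShape(n_clusters=4, random_state=0)    # four curve types, as in the paper
labels = ks.fit_predict(X)
print(np.bincount(labels))                   # cluster sizes
```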

Findings

The filtered sentiment series, or curves, of the microfilms on the Bilibili website could be divided into four major categories. The first three types of sentiment curves each show an apparently stable time interval, while the fourth type fluctuates clearly throughout. In addition, "disputed points" or "highlights" are likely to appear at the beginning and at the climax of films, producing significant changes in the sentiment curves. The clustering results also show a significant difference in user participation, with the second type prevailing over the others.

Originality/value

The authors' sentiment classification model based on BERT fine-tuning outperformed the traditional sentiment lexicon method, which provides a reference for using deep learning and transfer learning for danmu comment sentiment analysis. The combined BERT fine-tuning and SBD-K-shape algorithm can weaken the effect of non-regular noise and the temporal phase shift of danmu text.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 29 July 2014

Yong Liu and Huan-huan Zhao


Abstract

Purpose

The purpose of this paper is to construct a dynamic information aggregation decision-making model based on variable precision rough set.

Design/methodology/approach

To deal with dynamic decision-making problems, grey relational analysis, grey fixed weight clustering based on the centre-point triangular whitenization weight function, and the maximum entropy principle are used to establish a dynamic information aggregation decision-making model based on variable precision rough set. The method proceeds as follows. To begin with, grey relational analysis is used to determine the attribute weights of each stage. Then, taking into account both the proximity of the attribute measurement values to the positive and negative desired effect values and the uncertainty of the time weights, a multi-objective optimisation model based on the maximum entropy principle is established and solved with the Lagrange multiplier method, yielding the time weight expressions. Next, the decision-making attribute is obtained by grey fixed weight clustering based on the centre-point triangular whitenization weight function, so that a multi-stage decision-making table with dynamic characteristics is established; probabilistic decision rules are then derived from this multi-criteria decision table by applying variable precision rough set. Finally, an example validates the feasibility and effectiveness of the model.
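
As an illustration of the first step only, here is a minimal sketch of textbook grey relational analysis for deriving stage-wise attribute weights, with the usual distinguishing coefficient rho = 0.5; the data matrix is an assumption.

```python
# Sketch: grey relational analysis to derive attribute weights for one stage.
import numpy as np

X = np.array([[0.72, 0.55, 0.81],
              [0.64, 0.70, 0.77],
              [0.90, 0.48, 0.69]])           # alternatives x attributes (normalised)

ref = X.max(axis=0)                          # ideal reference sequence
delta = np.abs(X - ref)                      # deviations from the reference
rho = 0.5                                    # distinguishing coefficient
xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())

grades = xi.mean(axis=0)                     # grey relational grade per attribute
weights = grades / grades.sum()              # normalised attribute weights
print(weights)
```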

Findings

The results show that the proposed model can effectively aggregate multi-stage dynamic decision-making information and extract decision-making rules.

Research limitations/implications

The method presented in the paper can be used for decision-making problems with multi-stage dynamic characteristics, in which the decision-making attributes contain noisy data and the attribute values are interval grey numbers.

Originality/value

The paper succeeds in realising both the aggregation of dynamic decision-making information and the extraction of decision-making rules.

Details

Grey Systems: Theory and Application, vol. 4 no. 2
Type: Research Article
ISSN: 2043-9377


Article
Publication date: 16 November 2021

Medhat Abd el Azem El Sayed Rostum, Hassan Mohamed Mahmoud Moustafa, Ibrahim El Sayed Ziedan and Amr Ahmed Zamel


Abstract

Purpose

The current challenge in forecasting smart meter electricity consumption lies in the uncertainty and volatility of load profiles. Moreover, forecasting the electricity consumption of all meters requires an enormous amount of time. Most papers avoid this complexity by forecasting electricity consumption at an aggregated level. This paper aims to forecast the electricity consumption of all smart meters at an individual level and, for the first time, takes into account the computational time for training and forecasting across all the meters.

Design/methodology/approach

A novel hybrid autoregressive-statistical equations idea model aided by clustering and the whale optimization algorithm (ARSEI-WOA) is proposed in this paper to forecast the electricity consumption of all the meters with the best performance in terms of computational time and prediction accuracy.

Findings

The proposed model was tested on realistic Irish smart meter energy data, and its performance was compared with nine regression methods: autoregressive integrated moving average, partial least squares regression, conditional inference tree, the M5 rule-based model, k-nearest neighbor, multilayer perceptron, RandomForest, RPART and support vector regression. The results show that ARSEI-WOA is an efficient model able to achieve accurate predictions with low computational time.

Originality/value

This paper presents a new hybrid ARSEI model that performs smart meter load forecasting at an individual level instead of an aggregated one. With the help of a clustering technique, similar meters are grouped into a few clusters, which reduces the computational time of the training and forecasting process. In addition, WOA improves the prediction accuracy of each meter by finding an optimal factor between the average electricity consumption values of each cluster and the consumption values of each of its meters.
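
A rough sketch of this scaling idea, with scipy's minimize_scalar standing in for the whale optimization algorithm and k-means standing in for the unspecified clustering technique; the load data and cluster count are assumptions.

```python
# Sketch: group meters by k-means, then find each meter's optimal scaling
# factor against its cluster-average profile.
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
loads = rng.gamma(2.0, 1.0, size=(500, 336))     # 500 meters, half-hourly fortnight

k = 8
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(loads)

for c in range(k):
    members = loads[labels == c]
    avg = members.mean(axis=0)                   # cluster-average profile
    meter = members[0]                           # one meter per cluster, for brevity
    err = lambda a: np.mean((meter - a * avg) ** 2)
    factor = minimize_scalar(err, bounds=(0.1, 10.0), method="bounded").x
    print(c, round(factor, 3))   # a cluster-level forecast would be scaled by this
```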

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 41 no. 1
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 1 February 2016

Sifeng Liu, Yingjie Yang, Naiming Xie and Jeffrey Forrest



Abstract

Purpose

The purpose of this paper is to summarize the progress in grey system research during 2000-2015, so as to present some important new concepts, models, methods and a new framework of grey system theory.

Design/methodology/approach

The new thinking, new models and new methods of grey system theory, together with their applications, are presented in this paper. These include: the algorithm rules of grey numbers based on the "kernel" and the degree of greyness of grey numbers, the concept of general grey numbers, and the synthesis axiom of the degree of greyness of grey numbers and its operations; the general form of buffer operators among grey sequence operators; the four basic forms of the grey model GM(1,1), namely even GM, original difference GM, even difference GM and discrete GM, together with the sequence type suited to each basic form and the suitable ranges of the most commonly used grey forecasting models; the similarity degree, the closeness degree and the three-dimensional absolute degree of grey incidence among grey incidence analysis models; the grey cluster model based on centre-point and end-point mixed triangular whitenization weight functions; the multi-attribute intelligent grey target decision model and the two-stage decision model with grey synthetic measure among grey decision models; grey game models and grey input-output models among grey combined models; and the problems of robust stability for grey stochastic time-delay systems of neutral type, distributed-delay type and neutral distributed-delay type in grey control. The new framework of grey system theory is given as well.
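
To make one of these building blocks concrete, here is a minimal sketch of the even GM(1,1) forecasting model in its standard textbook form; the raw sequence is illustrative and not tied to this paper.

```python
# Sketch: even GM(1,1) grey forecasting model, textbook form.
import numpy as np

x0 = np.array([2.87, 3.28, 3.34, 3.73, 3.82, 4.01])   # raw sequence (illustrative)
x1 = np.cumsum(x0)                                    # 1-AGO sequence
z1 = 0.5 * (x1[1:] + x1[:-1])                         # background values

# Least-squares estimate of development coefficient a and grey input b,
# from the whitenization equation x0[k] + a*z1[k] = b.
B = np.column_stack([-z1, np.ones_like(z1)])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

def x1_hat(k):                                        # time-response function
    return (x0[0] - b / a) * np.exp(-a * k) + b / a

n = len(x0)
fit = np.diff([x1_hat(k) for k in range(n + 2)])      # restored values + 2-step forecast
print(np.round(fit, 3))
```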

Findings

The problems that remain for further study are discussed at the end of each section. The paper gives the reader a general picture of the research and development trends of grey system theory.

Practical implications

A lot of successful practical applications of the new models to solve various problems have been found in many different areas of natural science, social science and engineering, including spaceflight, civil aviation, information, metallurgy, machinery, petroleum, chemical industry, electrical power, electronics, light industries, energy resources, transportation, medicine, health, agriculture, forestry, geography, hydrology, seismology, meteorology, environment protection, architecture, behavioral science, management science, law, education, military science, etc. These practical applications have brought forward definite and noticeable social and economic benefits. It demonstrates a wide range of applicability of grey system theory, especially in the situation where the available information is incomplete and the collected data are inaccurate.

Originality/value

The reader is given a general picture of grey system theory as a new model system and a new framework for studying problems in which only partial information is known, especially uncertain systems with few data points and poor information. The problems remaining for further study are identified at the end of each section.

Details

Grey Systems: Theory and Application, vol. 6 no. 1
Type: Research Article
ISSN: 2043-9377


Book part
Publication date: 30 June 2004

Patrick M. Crowley


Abstract

This paper attempts to evaluate whether the NAFTA countries (the U.S., Canada and Mexico) should adopt the same currency. The theoretical basis for the paper is optimal currency area theory, which suggests that countries or regions experiencing similar business cycles can gain advantages from adopting the same currency. The statistical methodology used to evaluate whether states or provinces have similar business cycle correlations is model-based cluster analysis, a recently developed grouping method from the applied statistics literature.
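
Model-based cluster analysis typically means fitting Gaussian mixture models and selecting the number of components by an information criterion; the sketch below uses scikit-learn and synthetic features as illustrative assumptions, since the abstract does not specify software.

```python
# Sketch: model-based clustering of regional business-cycle features via
# Gaussian mixtures, selecting the component count by BIC.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
features = rng.normal(size=(60, 4))   # e.g. cycle correlations per state/province

fits = [GaussianMixture(n_components=k, random_state=0).fit(features)
        for k in range(1, 7)]
best = min(fits, key=lambda g: g.bic(features))       # lowest BIC wins
print(best.n_components, best.predict(features)[:10])
```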

Details

North American Economic and Financial Integration
Type: Book
ISBN: 978-0-76231-094-4

Article
Publication date: 7 August 2017

Jing Ye and Yaoguo Dang


Abstract

Purpose

Evaluation objects are becoming more and more complicated, and interval grey numbers can express them more accurately. However, the information distribution of interval grey numbers is not balanced. The purpose of this paper is to introduce the central-point triangular whitenization weight function to handle the clustering of this kind of number.

Design/methodology/approach

A new expression of the central-point triangular whitenization weight function is presented in this paper for the grey clustering problem based on interval grey numbers. By establishing the integral mean value function on the set of interval grey numbers, the application range of the grey clustering model is extended to the interval grey number category; in this way, the grey fixed weight clustering model based on interval grey numbers is obtained.
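
A minimal sketch of a central-point triangular whitenization weight function and one grey fixed weight clustering step for crisp values (the paper's actual extension integrates these functions over interval grey numbers); the class centres, attribute weights and observations below are assumptions.

```python
# Sketch: centre-point triangular whitenization weight functions plus a
# fixed-weight grey clustering step for crisp observations.
import numpy as np

centres = [2.0, 5.0, 8.0]                    # grey-class centre points (assumed)

def tri(x, j):
    """Triangular membership of value x in grey class j."""
    left = centres[j - 1] if j > 0 else -np.inf
    right = centres[j + 1] if j < len(centres) - 1 else np.inf
    c = centres[j]
    if x <= left or x >= right:
        return 0.0
    if x <= c:                               # rising edge (saturates at the ends)
        return 1.0 if left == -np.inf else (x - left) / (c - left)
    return 1.0 if right == np.inf else (right - x) / (right - c)

weights = np.array([0.4, 0.6])               # fixed attribute weights (assumed)
obs = np.array([4.2, 6.9])                   # one object's two attribute values

sigma = [sum(w * tri(x, j) for w, x in zip(weights, obs))
         for j in range(len(centres))]       # clustering coefficient per class
print(int(np.argmax(sigma)))                 # grey class assigned to the object
```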

Findings

The model is verified by a case study, which demonstrates its high distinguishability, validity and practicability.

Practical implications

This model can be used in many fields, such as agriculture, economy, geology and medical science, and provides a feasible method for evaluation schemes in performance evaluation, scheme selection, risk evaluation and so on.

Originality/value

The central-point triangular whitenization weight function is introduced. The method reflects the principle of "making full use of the information" in grey system theory, further enriches grey clustering theory and expands the application scope of the grey clustering method.

Details

Grey Systems: Theory and Application, vol. 7 no. 2
Type: Research Article
ISSN: 2043-9377


Article
Publication date: 21 August 2017

Nadi Serhan Aydın


Abstract

Purpose

This paper aims to introduce a model-based stress-testing methodology for Islamic finance products. The importance of stress testing was clearly underlined by the adverse developments in the global finance industry, one of whose key takeaways was the need to strengthen the coverage of the capital framework. Cognisant of this fact, Basel III encapsulates provisions to enhance the financial sector's ability to withstand shocks arising from possible stress events, thereby reducing adverse spillovers into the real economy. Similarly, the Islamic Financial Services Board requires Islamic financial institutions to run stress tests as part of their capital planning.

Design/methodology/approach

The authors perform thorough backtests on Islamic and conventional portfolios under widely used risk models, characterised by their underlying conditional volatility frameworks and distributions, to identify the most suitable risk model specification. Using an appropriate initial shock and estimation window size, the paper also conducts a model-based stress test to examine whether the stress losses estimated by the selected models compare favourably with the historical shocks.
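
A minimal sketch of the backtesting idea: roll a window over a return series, estimate a one-day value-at-risk (here by plain historical simulation rather than the conditional models the paper tests) and count exceedances; the data and parameters are assumptions.

```python
# Sketch: rolling one-day 99% VaR backtest by historical simulation.
import numpy as np

rng = np.random.default_rng(3)
returns = rng.standard_t(df=4, size=2000) * 0.01       # placeholder daily returns

window, alpha = 500, 0.99
breaches = 0
for t in range(window, len(returns)):
    var = -np.quantile(returns[t - window:t], 1 - alpha)   # VaR as a positive loss
    breaches += returns[t] < -var                          # count exceedances

expected = (len(returns) - window) * (1 - alpha)
print(breaches, "observed vs", expected, "expected exceedances")
```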

Findings

The results suggest that the model-based framework, when combined with an appropriate risk model and distribution, can successfully reproduce past stress periods. The conditional empirical risk model is the most effective one in both long and short portfolio cases – particularly when combined with a long-enough estimation window. The relative performance of normal vs heavy-tailed distributions and symmetric vs asymmetric risk models, on the other hand, is highly dependent on whether the portfolio is long or short. Finally, the authors find that the Islamic portfolio is generally associated with lower historical stress losses as compared to the conventional portfolio.

Originality/value

The model-based framework eliminates some of the key problems associated with traditional scenario-based approaches and is easily adaptable to Islamic finance.

Details

International Journal of Islamic and Middle Eastern Finance and Management, vol. 10 no. 3
Type: Research Article
ISSN: 1753-8394


Article
Publication date: 7 April 2015

Zhou Cheng and Tao Juncheng


Abstract

Purpose

Accurately forecasting logistics freight volume plays a vital part in a country's rational planning. The purpose of this paper is to develop a novel combination forecasting model to predict China's logistics freight volume, in which an improved PSO-BP neural network is proposed to determine the combination weights.

Design/methodology/approach

Since a BP neural network has the ability to learn, store and recall the information given by individual forecasting models, it is effective in determining the combination weights of a combination forecasting model. First, an improved PSO based on a simulated annealing method and a space-time adjustment strategy (SAPSO) is proposed to determine the connection weights of the BP neural network, overcoming the problems of local optimum traps, low precision and poor convergence during BP neural network training. Then, a novel combination forecasting model based on the SAPSO-BP neural network is established.
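
A rough sketch of the combination-weighting idea, with a plain particle swarm plus a simulated-annealing-style acceptance rule standing in for SAPSO; the BP network layer and the space-time adjustment strategy are omitted, and all data are synthetic.

```python
# Sketch: particle swarm search for convex combination weights over several
# individual forecasts, with an annealing acceptance rule on personal bests.
import numpy as np

rng = np.random.default_rng(5)
actual = rng.uniform(80.0, 120.0, size=60)               # synthetic freight volumes
models = actual + rng.normal(0.0, [[3.0], [5.0], [8.0]], size=(3, 60))

def mape(w):
    w = np.abs(w) / np.abs(w).sum()                      # normalise to convex weights
    combo = w @ models
    return np.mean(np.abs((actual - combo) / actual))

n_particles, dim, T = 20, 3, 1.0
pos = rng.random((n_particles, dim))
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([mape(p) for p in pos])
gbest = pbest[pcost.argmin()]

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([mape(p) for p in pos])
    # Annealing-style rule: occasionally accept worse personal bests early on.
    accept = (cost < pcost) | (rng.random(n_particles) < np.exp(-(cost - pcost) / T))
    pbest[accept], pcost[accept] = pos[accept], cost[accept]
    gbest = pbest[pcost.argmin()]
    T *= 0.98                                            # cool the temperature
print("best MAPE:", round(mape(gbest), 4))
```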

Findings

Simulation tests show that the proposed SAPSO has better convergence performance and greater stability. Combination forecasting models based on three types of BP neural networks are also developed, which rank as SAPSO-BP, PSO-BP and BP according to mean absolute percentage error (MAPE) and convergence speed. The proposed combination model based on SAPSO-BP also shows its superiority over several other combination weight assignment methods.

Originality/value

The SAPSO-BP neural network is an original contribution to the combination weight assignment methods of combination forecasting models, with better convergence performance and greater stability.

Article
Publication date: 12 June 2017

Kehe Wu, Yayun Zhu, Quan Li and Ziwei Wu


Abstract

Purpose

The purpose of this paper is to propose a data prediction framework for scenarios with forecasting demands for large-scale data sources, e.g. sensor networks, securities exchanges, electric power secondary systems, etc. Concretely, the proposed framework should handle several difficult requirements, including the management of gigantic data sources, a fast self-adaptive algorithm, relatively accurate prediction of multiple time series and real-time operation.

Design/methodology/approach

First, the autoregressive integrated moving average (ARIMA)-based prediction algorithm is introduced. Second, the processing framework is designed, which includes a time-series data storage model based on HBase and a real-time distributed prediction platform based on Storm. Then, the working principle of this platform is described. Finally, a proof-of-concept testbed is illustrated to verify the proposed framework.
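
A minimal sketch of the prediction step alone, using the ARIMA implementation from statsmodels (the abstract names the algorithm but no library); the HBase storage and Storm topology are out of scope here, and the series names and model order are assumptions.

```python
# Sketch: per-series ARIMA forecasting, the algorithmic core of the framework;
# in the paper this would run inside a Storm topology over HBase-stored series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(9)
series = {f"sensor-{i}": rng.normal(size=300).cumsum() for i in range(3)}

for name, y in series.items():
    fitted = ARIMA(y, order=(1, 1, 1)).fit()     # order chosen for illustration
    forecast = fitted.forecast(steps=10)         # ten-step-ahead prediction
    print(name, np.round(forecast[:3], 2))
```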

Findings

Several tests based on power grid monitoring data are provided for the proposed framework. The experimental results indicate that the predicted data are largely consistent with the actual data, processing efficiency is relatively high and resource consumption is reasonable.

Originality/value

This paper provides a distributed real-time data prediction framework for large-scale time-series data that meets the requirements of effective management, prediction efficiency, accuracy and high concurrency for massive data sources.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 10 no. 2
Type: Research Article
ISSN: 1756-378X

