Search results

1 – 10 of over 2000
Open Access
Article
Publication date: 10 April 2017

Allard C.R. van Riel, Jörg Henseler, Ildikó Kemény and Zuzana Sasovova

Many important constructs of business and social sciences are conceptualized as composites of common factors, i.e. as second-order constructs composed of reflectively measured…


Abstract

Purpose

Many important constructs of business and social sciences are conceptualized as composites of common factors, i.e. as second-order constructs composed of reflectively measured first-order constructs. Current approaches to modeling this type of second-order construct provide inconsistent estimates and lack a model test that helps assess the existence and/or usefulness of a second-order construct. The purpose of this paper is to present a novel three-stage approach to model, estimate, and test second-order constructs composed of reflectively measured first-order constructs.

Design/methodology/approach

The authors compare the efficacy of the proposed three-stage approach with that of the dominant extant approaches, i.e. the repeated indicator approach, the two-stage approach, and the hybrid approach by means of simulated data whose underlying population model is known. Moreover, the authors apply the three-stage approach to a real research setting in business research.

Findings

The study based on simulated data illustrates that the three-stage approach is Fisher-consistent, whereas the dominant extant approaches are not. The study based on real data shows that the three-stage approach is meaningfully applicable in typical research settings of business research. Its results can differ substantially from those of the extant approaches.

Research limitations/implications

Analysts aiming to model composites of common factors should apply the proposed procedure in order to test the existence and/or usefulness of a second-order construct and to obtain consistent estimates.

Originality/value

The three-stage approach is the only consistent approach for modeling, estimating, and testing composite second-order constructs made up of reflectively measured first-order constructs.

Details

Industrial Management & Data Systems, vol. 117 no. 3
Type: Research Article
ISSN: 0263-5577

Keywords

Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems…

Abstract

Purpose

The purposes of this research are to study the theory and method of multi-attribute index system design and to establish a systematic, standardized, scientific set of index systems for the design, optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and to provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on ideas from statistics, system theory, machine learning and data mining, the present research focuses on "data quality diagnosis" and "index classification and stratification", clarifying the classification standards and data quality characteristics of index data; a data-quality diagnosis system of "data review – data cleaning – data conversion – data inspection" is established. Using decision trees, interpretive structural models, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, a scientific and standardized classification and hierarchical design of the index system can be realized.
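
The abstract does not include the authors' implementation; as a rough, hypothetical sketch of the unsupervised stratification idea only (K-means grouping indicators by their correlation profiles, with all data invented here), one might write:

```python
import numpy as np

def kmeans(X, k=2, iters=50):
    """Plain k-means with a deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Invented data: 200 observations of 6 indicators, where indicators 0-2
# and 3-5 form two largely redundant groups.
rng = np.random.default_rng(1)
base1, base2 = rng.normal(size=(2, 200))
X = np.column_stack([base1 + 0.1 * rng.normal(size=200) for _ in range(3)]
                    + [base2 + 0.1 * rng.normal(size=200) for _ in range(3)])

# Cluster the indicators by their correlation profiles; each cluster is a
# candidate stratum whose members carry largely redundant information.
labels = kmeans(np.corrcoef(X.T), k=2)
print(labels)
```

A cluster whose members are nearly interchangeable signals redundancy, so one representative indicator per stratum can be retained.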

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed, based on systematic thinking about the whole and the individual. Aiming at the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X

Keywords

Open Access
Article
Publication date: 20 August 2021

Daniel Hofer, Markus Jäger, Aya Khaled Youssef Sayed Mohamed and Josef Küng

Log files are a crucial source of information for computer security experts. The time domain is especially important because in most cases…


Abstract

Purpose

Log files are a crucial source of information for computer security experts. The time domain is especially important because, in most cases, timestamps are the only links between events (caused by attackers, faulty systems or simple errors) and their corresponding entries in log files. With the idea of storing and analyzing this log information in graph databases, we need a suitable model to store and connect timestamps and their events. This paper aims to identify and evaluate different approaches to storing timestamps in graph databases, along with their individual benefits and drawbacks.

Design/methodology/approach

We analyse three different approaches to representing and storing timestamp information in graph databases. To check the models, we set up four typical questions that are important for log file analysis and tested each of them against each model. During the evaluation, we used performance and other properties as metrics of how suitable each model is for representing the log files' timestamp information. In the last part, we try to improve one promising-looking model.
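
The paper's concrete models target a specific graph database; purely as a hypothetical, database-agnostic sketch of two common ways to attach timestamps to events (a timestamp property on the event node versus a year/month/day "time tree"), with invented events:

```python
from datetime import datetime, timezone

# Model A: the timestamp is a plain property on each event node.
events_a = [
    {"id": 1, "msg": "login failed", "ts": datetime(2021, 3, 1, 12, 0, tzinfo=timezone.utc)},
    {"id": 2, "msg": "login ok",     "ts": datetime(2021, 3, 2, 9, 30, tzinfo=timezone.utc)},
]

def range_query_a(events, start, end):
    """Half-open range scan over event properties."""
    return [e["id"] for e in events if start <= e["ts"] < end]

# Model B: a "time tree" -- year/month/day nodes with events attached to
# day nodes, so range queries become tree traversals.
time_tree = {}  # (year, month, day) -> list of event ids
for e in events_a:
    key = (e["ts"].year, e["ts"].month, e["ts"].day)
    time_tree.setdefault(key, []).append(e["id"])

def range_query_b(tree, start, end):
    """Day-granularity range scan over the time tree."""
    out = []
    for (y, m, d), ids in sorted(tree.items()):
        if start.date() <= datetime(y, m, d).date() < end.date():
            out.extend(ids)
    return out

start = datetime(2021, 3, 1, tzinfo=timezone.utc)
end = datetime(2021, 3, 2, tzinfo=timezone.utc)
print(range_query_a(events_a, start, end))  # [1]
```

In a real graph database the trade-off is between index-backed property scans (model A) and extra traversal structure (model B); the paper evaluates such trade-offs empirically.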

Findings

We come to the conclusion that the simplest model, using the fewest graph database-specific concepts, is also the one yielding the simplest and fastest queries.

Research limitations/implications

This research is limited in that only one graph database was studied; improvements to the query engine might also change future results.

Originality/value

In the study, we addressed the issue of storing timestamps in graph databases in a meaningful, practical and efficient way. The results can be used as a pattern for similar scenarios and applications.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 18 August 2023

Lindokuhle Talent Zungu and Lorraine Greyling

This study aims to test the validity of the Rajan theory in South Africa and other selected emerging markets (Chile, Peru and Brazil) during the period 1975–2019.


Abstract

Purpose

This study aims to test the validity of the Rajan theory in South Africa and other selected emerging markets (Chile, Peru and Brazil) during the period 1975–2019.

Design/methodology/approach

In this study, the researchers used time-series data to estimate a Bayesian vector autoregression (BVAR) model with hierarchical priors. The BVAR technique has the advantage of accommodating a wide cross-section of variables without running out of degrees of freedom. It can also deal with dense parameterization by imposing structure on model coefficients via prior information and an optimal choice of the degree of informativeness.
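
A minimal sketch of the shrinkage idea behind such priors, not the authors' estimator: a VAR(1) posterior mean under a conjugate normal prior centered on a random walk, with all data simulated here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 2-variable VAR(1): y_t = A y_{t-1} + noise (invented numbers).
A_true = np.array([[0.6, 0.1], [0.0, 0.5]])
y = np.zeros((200, 2))
for t in range(1, 200):
    y[t] = A_true @ y[t - 1] + rng.normal(size=2)

X, Y = y[:-1], y[1:]

# Conjugate normal prior centered on a random walk (coefficients = I), with
# tightness lam: a smaller lam means a more informative prior (more shrinkage).
lam = 0.2
B_prior = np.eye(2)
V_inv = np.eye(2) / lam**2                      # prior precision
B_post = np.linalg.solve(X.T @ X + V_inv,
                         X.T @ Y + V_inv @ B_prior)  # posterior mean of B in Y = X B
print(np.round(B_post, 2))
```

The posterior mean is a precision-weighted blend of the OLS estimate and the prior, which is how informative priors keep dense parameterizations estimable.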

Findings

The results for all countries except Peru confirmed the Rajan hypothesis, indicating that inequality contributes to high indebtedness, resulting in financial fragility. For Peru, however, the results contradict the theory. After controlling for monetary policy shocks, this study found that the results differ across countries.

Originality/value

The findings suggest that an escalating level of inequality leads to financial fragility. Policymakers therefore ought to be cautious of excessive inequality when endeavouring to contain the risk of financial fragility, implementing sound structural reform policies that aim to attract investment consistent with job creation, development and growth in these countries. Policymakers should also be cautious when implementing policy tools (redistributive policies, a sound monetary policy), as these can increase the risk of excessive credit growth and financial fragility, and they need to treat income inequality as an important factor relevant to macroeconomic aggregates and financial fragility.

Details

International Journal of Emerging Markets, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1746-8809

Keywords

Content available
Article
Publication date: 15 June 2017

Jianfeng Zheng, Cong Fu and Haibo Kuang

This paper aims to investigate the location of regional and international hub ports in liner shipping by proposing a hierarchical hub location problem.


Abstract

Purpose

This paper aims to investigate the location of regional and international hub ports in liner shipping by proposing a hierarchical hub location problem.

Design/methodology/approach

This paper develops a mixed-integer linear programming model for the authors’ proposed problem. Numerical experiments based on a realistic Asia-Europe-Oceania liner shipping network are carried out to account for the effectiveness of this model.
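
The authors' mixed-integer model itself is not reproduced in the abstract; for intuition only, a tiny invented hub-location instance (single allocation, discounted inter-hub legs reflecting scale economies in ship size) can be solved by brute force:

```python
import itertools
import numpy as np

# Invented instance: 5 ports with random coordinates and cargo flows.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 10, size=(5, 2))
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
flow = rng.integers(1, 10, size=(5, 5))
alpha = 0.6  # inter-hub discount reflecting scale economies in ship size

def cost(hubs):
    # Single allocation: each port routes all its cargo via its nearest hub.
    assign = [min(hubs, key=lambda h: dist[i, h]) for i in range(5)]
    return sum(flow[i, j] * (dist[i, assign[i]]
                             + alpha * dist[assign[i], assign[j]]
                             + dist[assign[j], j])
               for i in range(5) for j in range(5))

# Enumerate every pair of candidate hubs and keep the cheapest.
best = min(itertools.combinations(range(5), 2), key=cost)
print("best hub pair:", best)
```

Real instances such as the Asia-Europe-Oceania network require the MILP formulation and a solver; enumeration only works at toy scale.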

Findings

The results show that one international hub port (i.e. Rotterdam) and one regional hub port (i.e. Zeebrugge) are opened in Europe. Two international hub ports (i.e. Sokhna and Salalah) are located in Western Asia, where no regional hub port is established. One international hub port (i.e. Colombo) and one regional hub port (i.e. Cochin) are opened in Southern Asia. One international hub port (i.e. Singapore) and one regional hub port (i.e. Jakarta) are opened in Southeastern Asia and Australia. Three international hub ports (i.e. Hong Kong, Shanghai and Yokohama) and two regional hub ports (i.e. Qingdao and Kwangyang) are opened in Eastern Asia.

Originality/value

This paper proposes a hierarchical hub location problem, in which the authors distinguish between regional and international hub ports in liner shipping. Moreover, scale economies in ship size are considered. Furthermore, the proposed problem introduces the main ports.

Details

Maritime Business Review, vol. 2 no. 2
Type: Research Article
ISSN: 2397-3757

Keywords

Open Access
Article
Publication date: 30 September 2019

Joseph F. Hair Jr. and Luiz Paulo Fávero

This paper aims to discuss multilevel modeling for longitudinal data, clarifying the circumstances in which it can be used.


Abstract

Purpose

This paper aims to discuss multilevel modeling for longitudinal data, clarifying the circumstances in which it can be used.

Design/methodology/approach

The authors estimate three-level models with repeated measures, offering conditions for their correct interpretation.

Findings

From the concepts and techniques presented, the authors propose models in which it is possible to identify the fixed and random effects on the dependent variable, understand the variance decomposition of multilevel random effects, test alternative covariance structures to account for heteroskedasticity, and calculate and interpret the intraclass correlations of each analysis level.
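
As an illustration of one of these quantities, a minimal sketch (simulated two-level data, not the authors' models) of the one-way ANOVA estimator of the intraclass correlation:

```python
import numpy as np

rng = np.random.default_rng(3)

# 30 groups (level 2), 20 repeated measures each (level 1).
G, n = 30, 20
group_eff = rng.normal(0, 2.0, size=G)               # between-group sd = 2
y = group_eff[:, None] + rng.normal(0, 1.0, (G, n))  # within-group sd = 1

# One-way ANOVA estimator of the intraclass correlation:
# share of total variance attributable to the grouping level.
grand = y.mean()
msb = n * ((y.mean(axis=1) - grand) ** 2).sum() / (G - 1)
msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (G * (n - 1))
icc = (msb - msw) / (msb + (n - 1) * msw)
print(round(icc, 2))  # close to the true value 4 / (4 + 1) = 0.8
```

A high ICC indicates that observations within a group are strongly dependent, which is precisely when multilevel models are needed instead of pooled OLS.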

Originality/value

Understanding how nested data structures and data with repeated measures work enables researchers and managers to define several types of constructs from which multilevel models can be used.

Details

RAUSP Management Journal, vol. 54 no. 4
Type: Research Article
ISSN: 2531-0488

Keywords

Open Access
Article
Publication date: 2 May 2017

Berna Keskin, Richard Dunning and Craig Watkins

This paper aims to explore the impact of recent earthquake activity on house prices and their spatial distribution in the Istanbul housing market.


Abstract

Purpose

This paper aims to explore the impact of recent earthquake activity on house prices and their spatial distribution in the Istanbul housing market.

Design/methodology/approach

The paper uses a multi-level approach within an event study framework to model changes in the pattern of house prices in Istanbul. The model allows the isolation of the effects of earthquake risk and explores the differential impact in different submarkets in two study periods, one before (2007) and one after (2012) recent earthquake activity in the Van region, which, although in Eastern Turkey, served to alter perceptions of risk throughout the wider geographic region.
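
The paper's models are richer than this, but the core event-study idea can be sketched with invented data: regress log price on a risk-zone dummy, a post-event dummy and their interaction, where the interaction captures the differential discount:

```python
import numpy as np

rng = np.random.default_rng(4)
N = 1000

# Invented data: log price driven by floor area, a high-risk-zone dummy,
# a post-event period dummy and their interaction.
area = rng.uniform(50, 200, N)
risk = rng.integers(0, 2, N)
post = rng.integers(0, 2, N)
logp = (10 + 0.005 * area - 0.05 * risk - 0.02 * post
        - 0.10 * risk * post + 0.05 * rng.normal(size=N))

# Hedonic regression; the interaction coefficient is the extra post-event
# discount suffered by the high-risk submarket.
X = np.column_stack([np.ones(N), area, risk, post, risk * post])
beta, *_ = np.linalg.lstsq(X, logp, rcond=None)
print(round(beta[4], 2))  # recovers roughly -0.10
```

The multilevel extension in the paper lets this discount vary by submarket rather than forcing a single city-wide coefficient.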

Findings

The analysis shows that price discounts vary in size across submarkets, resulting from the differential influence of recent earthquake activity on the perceived risk of damage. The model results show that the spatial impacts of these changes are not transmitted evenly across the study area. Rather, it is clear that submarkets at the cheaper end of the market suffer proportionately larger negative impacts on real estate values.

Research limitations/implications

The robustness of the models would be enhanced by the addition of further spatial levels and larger data sets.

Practical implications

The methods introduced in this study can be used by real estate agents, valuers and insurance companies to help them more accurately assess the likely impacts of changes in the perceived risk of earthquake activity (or other environmental events such as flooding) on the formation of house prices in different market segments.

Social implications

The application of these methods is intended to inform a fairer approach to setting insurance premiums and a better basis for determining policy interventions and public investment designed to mitigate potential earthquake risk.

Originality/value

The paper represents an attempt to develop a novel extension of the standard use of hedonic models in event studies to investigate the impact of natural disasters on real estate values. The value of the approach is that it is able to better capture the granularity of the spatial effects of environmental events than the standard approach.

Details

Journal of European Real Estate Research, vol. 10 no. 1
Type: Research Article
ISSN: 1753-9269

Keywords

Open Access
Article
Publication date: 13 November 2018

Apostolos Giovanis and Pinelopi Athanasopoulou

The purpose of this study is to develop and empirically test a lovemark measure that can be used to identify how brands of wireless-enabled computing devices are classified based…


Abstract

Purpose

The purpose of this study is to develop and empirically test a lovemark measure that can be used to identify how brands of wireless-enabled computing devices are classified based on customers’ respect and love toward them.

Design/methodology/approach

Using evidence drawn from 1,016 consumers of wireless-enabled computing devices (e.g. netbooks and tablets) in Greece, the partial least squares method is used to test the validity of the proposed hierarchical model.

Findings

Results show that a lovemark measure can be conceptualized as a third-order reflective construct having respect and love as its second-order dimensions. In turn, respect reflects on brand performance, trust and reputation, and love reflects on brand commitment, intimacy and passion. The proposed measure exhibits very good external validity, as it explains large portions of the variance in consumer responses, including repurchase intentions, positive WOM and willingness to pay a price premium. Finally, the proposed measure is used to classify eight well-known devices as products, fads, brands or lovemarks and to identify the love styles associated with brand relationships.
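
The classification step can be illustrated with a hypothetical quadrant rule on the love/respect grid (scores and cut-off invented here, not the paper's data):

```python
def classify(respect, love, cut=0.5):
    """Lovemark grid: position a brand by respect and love scores in [0, 1]."""
    if respect >= cut and love >= cut:
        return "lovemark"   # high respect, high love
    if respect >= cut:
        return "brand"      # high respect, low love
    if love >= cut:
        return "fad"        # low respect, high love
    return "product"        # low respect, low love

# Hypothetical device scores (respect, love) -- not the paper's eight devices.
devices = {"A": (0.8, 0.9), "B": (0.7, 0.3), "C": (0.2, 0.8), "D": (0.3, 0.2)}
for name, (r, l) in devices.items():
    print(name, classify(r, l))
```

In practice the two scores would come from the validated second-order respect and love dimensions rather than raw averages.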

Originality/value

This paper provides empirical evidence for measuring and identifying lovemarks using a hierarchical model, which can be further used to develop a more effective strategy for managing the functional and emotional aspects of brands to strengthen consumer-brand relationships.

Purpose

The objective of this study is the methodological development and empirical validation of a scale for classifying technology product brands along the two dimensions that characterize lovemark brands: respect and love.

Design/methodology/approach

Using data collected from a sample of 1,016 consumers of technology products (e.g. tablets and small laptops) in Greece, PLS is used to test the validity of the proposed hierarchical model.

Findings

The results show that the lovemark concept can be conceptualized as a three-dimensional reflective construct with brand respect and brand love as its second-order dimensions. In turn, brand respect reflects the brand's performance, trust and reputation, while love is reflected in concepts such as commitment, intimacy and passion. The proposed measure presents acceptable external validity, as it explains a large percentage of the variance in purchase intentions, positive word of mouth and willingness to pay a higher price for the brand. Finally, the usefulness of the proposed measure is demonstrated by classifying eight well-known brands according to the levels of love and respect consumers express toward them, as well as by identifying the love styles associated with the relationships consumers maintain with these brands.

Originality/value

This work offers empirical evidence for measuring and identifying lovemarks using a hierarchical model, which can subsequently be used to develop a more effective strategy for managing the functional and emotional aspects of brands as a means of strengthening consumer-brand relationships.

Open Access
Article
Publication date: 30 April 2012

Afzal Mohammad Khaled and Yong Jin Kim

Logistical facility location decisions can make a crucial difference in the success or failure of a company. Geographical Information Systems (GIS) have recently become a very…

Abstract

Logistical facility location decisions can make a crucial difference in the success or failure of a company. Geographical information systems (GIS) have recently become a very popular decision support system for facility location problems. However, until recently, GIS methodologies had not been fully embraced as a way to deal with new facility location problems in business logistics. This research develops a framework for categorizing empirical facility location problems based on the intensity of the involvement of GIS methodologies in decision making. The framework was built by analyzing facility location models and GIS methodologies. The results reveal the extent to which GIS methodologies have been embraced in logistics for new facility location decisions. In new facility location decisions, spatial data inputs are almost always coupled with visualization of the problems and solutions. However, using GIS capability alone (i.e. suitability analysis) for problem solving has not been embraced to the same degree. In most cases, suitability analysis is used together with special optimization models for choosing among multiple alternatives.

Details

Journal of International Logistics and Trade, vol. 10 no. 1
Type: Research Article
ISSN: 1738-2122

Keywords

Open Access
Article
Publication date: 17 December 2019

Yin Kedong, Shiwei Zhou and Tongtong Xu

To construct a scientific and reasonable indicator system, it is necessary to design a standardized process for the primary selection and optimization inspection of indicators. The…


Abstract

Purpose

To construct a scientific and reasonable indicator system, it is necessary to design a standardized process for the primary selection and optimization inspection of indicators. The purpose of this paper is to provide theoretical guidance and reference standards for the indicator system design process, laying a solid foundation for the application of the indicator system, by systematically exploring expert evaluation methods to optimize the index system: enhancing its credibility and reliability, improving its resolution and accuracy, and reducing its subjectivity and randomness.

Design/methodology/approach

The paper is based on system theory and statistics, and it follows the main line of "relevant theoretical analysis – identification of indicators – expert assignment and quality inspection" to achieve the design and optimization of the indicator system. First, theoretical basis analysis, relevant factor analysis and physical process description are used to clarify the comprehensive evaluation problem and the correlation mechanism. Second, system structure analysis, hierarchical decomposition and indicator set identification are used to complete the initial establishment of the indicator system. Third, based on expert assignment methods such as Delphi, statistical analysis, t-tests and non-parametric tests are used to diagnose the expert-assignment quality of single indicators; reliability and validity tests are used for single-indicator assignment correction; and the Kendall coordination (concordance) coefficient and F-test are used for consistency testing in multi-indicator expert-assignment quality diagnosis.
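
As a small illustration of one of the named diagnostics, Kendall's coordination (concordance) coefficient W for a hypothetical expert-by-indicator rank matrix (data invented here):

```python
import numpy as np

def kendalls_w(ranks):
    """Kendall's W for an (m experts x n items) matrix of untied ranks."""
    m, n = ranks.shape
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (m**2 * (n**3 - n))

# Hypothetical rankings of 4 indicators by 3 experts (1 = most important).
ranks = np.array([[1, 2, 3, 4],
                  [1, 3, 2, 4],
                  [2, 1, 3, 4]])
print(round(kendalls_w(ranks), 2))  # → 0.78
```

W near 1 indicates strong agreement among the experts' assignments; low values would flag the assignment round for revision before the indicator weights are used.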

Findings

Compared with traditional index system construction methods, the optimization process used in this study standardizes the process of index establishment, reduces subjectivity and randomness, and enhances objectivity and scientific rigour.

Originality/value

The innovation and value of the paper are embodied in three aspects. First, a systematic design process for the combined indicator system is established, with multi-dimensional index screening and system optimization carried out to ensure that the index system is scientific, reasonable and comprehensive. Second, the experts' backgrounds are comprehensively evaluated, and the objectivity and reliability of expert assignment are analyzed and improved relative to traditional methods. Third, to ensure the quality of expert assignment, t-tests and non-parametric tests are conducted on single indicators, and coordination and importance tests on multiple indicators, which enhances the practicality of expert assignment and ensures its quality.

Details

Marine Economics and Management, vol. 2 no. 1
Type: Research Article
ISSN: 2516-158X

Keywords
