Search results

1 – 10 of 322
Content available
Article
Publication date: 1 June 2003

Jon Rigelsford

Abstract

Details

Sensor Review, vol. 23 no. 2
Type: Research Article
ISSN: 0260-2288

Keywords

Open Access
Article
Publication date: 2 June 2021

Maria Gabriella Ceravolo, Vincenzo Farina, Lucrezia Fattobene, Elvira Anna Graziano, Lucia Leonelli and GianMario Raggetti

Abstract

Purpose

This study investigates whether the colors red and blue in financial disclosure documents (Key Investor Information Documents – KIIDs) affect the distribution of attention toward the visual stimulus and the perceived financial attractiveness of the products.

Design/methodology/approach

To observe and measure financial consumers' visual attention, the unobtrusive methodology of eye-tracking is applied to a sample of nonprofessional investors, using an ecological protocol within a cross-sectional design.

Findings

Financial information processing and the distribution of visual attention are influenced by the color of the KIID document: red appears to attract more attention, proxied by gazing behavior, than blue. Compared with blue, red is also observed to lead investors to rate the products as less financially attractive, especially when the product's Risk Reward Profile is high.

Practical implications

The findings highlight the role of the basic visual properties of documents conveying financial information, prompting further investigation of the unconscious and automatic mechanisms of individuals' attention and their influence on decision making.

Originality/value

Using the eye-tracking tool, this study bridges neuroscience, color research, marketing and finance and provides new knowledge on the underlying neural mechanisms of financial consumers' behavior.

Details

International Journal of Bank Marketing, vol. 39 no. 7
Type: Research Article
ISSN: 0265-2323

Keywords

Open Access
Article
Publication date: 16 March 2021

Bayu Adi Nugroho

Abstract

Purpose

It is crucial to find a better portfolio optimization strategy that accounts for cryptocurrencies' asymmetric volatilities. Hence, this research aimed to present dynamic optimization of the minimum variance portfolio (MVP), equal risk contribution (ERC) and most diversified portfolio (MDP) strategies.

Design/methodology/approach

This study applied dynamic covariances from a multivariate GARCH(1,1) model with a Student's t-distribution. As a comparison, it also constructed static optimizations of the conventional MVP, ERC and MDP. Moreover, the optimization accounted for transaction costs and used out-of-sample analysis based on the rolling-windows method. The sample consisted of ten major cryptocurrencies.
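
To make the three optimization targets concrete, here is a minimal sketch, not the paper's code, of how MVP, ERC and MDP weights can be computed from a single covariance estimate with NumPy/SciPy. In the dynamic variant described above, the covariance matrix would be the time-varying forecast from the DCC/ADCC-GARCH fit (not implemented here); all numbers are illustrative.

```python
# Minimal sketch: MVP, ERC and MDP weights from one covariance estimate.
# In the dynamic setting, `cov` would come from a DCC/ADCC-GARCH(1,1) fit
# with a multivariate t-distribution (not shown).
import numpy as np
from scipy.optimize import minimize

def _solve(objective, n):
    """Long-only, fully invested weights minimizing `objective`."""
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    w0 = np.full(n, 1.0 / n)                      # naive 1/N starting point
    return minimize(objective, w0, bounds=bounds, constraints=cons).x

def mvp_weights(cov):
    # Minimum variance portfolio: minimize w' Sigma w.
    return _solve(lambda w: w @ cov @ w, cov.shape[0])

def erc_weights(cov):
    # Equal risk contribution: contributions w_i * (Sigma w)_i should be equal,
    # so minimize the dispersion of the contributions.
    def obj(w):
        rc = w * (cov @ w)
        return np.sum((rc[:, None] - rc[None, :]) ** 2)
    return _solve(obj, cov.shape[0])

def mdp_weights(cov):
    # Most diversified portfolio: maximize (w' sigma) / sqrt(w' Sigma w).
    vol = np.sqrt(np.diag(cov))
    return _solve(lambda w: -(w @ vol) / np.sqrt(w @ cov @ w), cov.shape[0])

# Toy 3-asset covariance matrix (illustrative numbers only).
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.08, 0.03],
                [0.01, 0.03, 0.12]])
print(mvp_weights(cov), erc_weights(cov), mdp_weights(cov))
```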

Findings

Dynamic optimization enhanced risk-adjusted returns. Moreover, dynamic MDP and ERC outperformed the naïve (1/N) strategy under various estimation windows and forecast lengths when transaction costs ranged from 10 bps to 50 bps. As a robustness test, the author also used another researcher's sample; the findings showed that dynamic optimization (MDP and ERC) outperformed the benchmark.

Practical implications

Sophisticated investors may use dynamic ERC and MDP to optimize their cryptocurrency portfolios.

Originality/value

To the best of the author's knowledge, this is the first paper to study dynamic optimization of MVP, ERC and MDP using DCC- and ADCC-GARCH with a multivariate t-distribution and the rolling-windows method.

Details

Journal of Capital Markets Studies, vol. 5 no. 1
Type: Research Article
ISSN: 2514-4774

Keywords

Content available
Article
Publication date: 13 July 2022

Ady Milman and Asli D.A. Tasci

Abstract

Purpose

The purpose of this study is to identify the influence of perceived brand color emotions on perceived brand creativity, assess the influence of perceived brand creativity on utilitarian and hedonic values, measure the impact of hedonic and utilitarian values on brand loyalty and evaluate the role of different theme park color schemes in influencing these relationships.

Design/methodology/approach

The study modeled the proposed relationships by analyzing data from an online survey using partial least squares structural equation modeling. Respondents were presented with different color schemes to induce certain emotions before answering questions.

Findings

The results showed that the valence and arousal of emotions incited by various colors lead to a perception of creativity for theme park products, which in turn influences both utilitarian and hedonic values and thus brand loyalty. When the model was compared across seven different color schemes for a theme park brand, differences appeared sporadic rather than systematic.

Research limitations/implications

The online nature and timing of the study may have inhibited authentic reactions from consumers, as the US theme park industry is currently in recovery mode.

Practical implications

While the results did not identify a specific preferred color scheme, theme park executives should continue using a variety of color combinations to generate visitor perceptions of novelty and creativity that would affect their perceived hedonic and utilitarian values.

Originality/value

The study empirically tests color influences on a brand’s perceived creativity and its consequences on a brand’s utilitarian and hedonic values and brand loyalty.

Details

Consumer Behavior in Tourism and Hospitality, vol. 17 no. 4
Type: Research Article
ISSN: 2752-6666

Keywords

Open Access
Article
Publication date: 18 October 2021

Ruhao Zhao, Xiaoping Ma, He Zhang, Honghui Dong, Yong Qin and Limin Jia

Abstract

Purpose

This paper aims to propose an enhanced densely connected dehazing network suited to the features of railway scenes, improving visual quality degraded by haze and fog.

Design/methodology/approach

It is an end-to-end network based on DenseNet. The authors design enhanced dense blocks and fuse them in a pyramid pooling module to capture the local and global features of visual data. Multiple ablation studies were conducted to show the effect of each module proposed in this paper.
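
As an illustration of these two ingredients, the sketch below combines a DenseNet-style dense block with a simple pyramid pooling module in PyTorch. The paper's exact layer configuration is not given in the abstract, so all sizes, channel counts and names here are assumptions, not the authors' architecture.

```python
# Illustrative sketch only: a DenseNet-style dense block whose output is fused
# in a pyramid pooling module, mapping a hazy RGB image to a dehazed RGB image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenseBlock(nn.Module):
    def __init__(self, in_ch, growth=16, layers=4):
        super().__init__()
        self.convs = nn.ModuleList()
        ch = in_ch
        for _ in range(layers):
            self.convs.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=1),
                nn.ReLU(inplace=True)))
            ch += growth                      # dense connectivity: concat features
        self.out_ch = ch

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class PyramidPooling(nn.Module):
    """Pool at several scales, project, upsample and concatenate (global context)."""
    def __init__(self, in_ch, sizes=(1, 2, 4, 8)):
        super().__init__()
        self.sizes = sizes
        self.stages = nn.ModuleList(
            nn.Conv2d(in_ch, in_ch // len(sizes), 1) for _ in sizes)
        self.fuse = nn.Conv2d(in_ch + in_ch // len(sizes) * len(sizes), 3, 3, padding=1)

    def forward(self, x):
        h, w = x.shape[2:]
        pyramids = [x]
        for size, proj in zip(self.sizes, self.stages):
            p = proj(F.adaptive_avg_pool2d(x, size))
            pyramids.append(F.interpolate(p, size=(h, w), mode="bilinear",
                                          align_corners=False))
        return self.fuse(torch.cat(pyramids, dim=1))

# Toy end-to-end pass: hazy RGB image in, dehazed RGB image out.
block = DenseBlock(in_ch=3)
ppm = PyramidPooling(block.out_ch)
hazy = torch.rand(1, 3, 128, 128)
print(ppm(block(hazy)).shape)   # torch.Size([1, 3, 128, 128])
```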

Findings

The authors compared the dehazed results of the proposed network with those of state-of-the-art dehazing networks on real hazy images and hazy railway images, in terms of data quality. Finally, an object-detection test was conducted to judge how well edge information is preserved after haze removal. All results demonstrate that the proposed dehazing network performs better in railway scenes.

Originality/value

This study provides a new method for image enhancement in railway monitoring systems.

Details

Smart and Resilient Transportation, vol. 3 no. 3
Type: Research Article
ISSN: 2632-0487

Keywords

Content available
Book part
Publication date: 29 October 2018

Abstract

Details

The Work-Family Interface: Spillover, Complications, and Challenges
Type: Book
ISBN: 978-1-78769-112-4

Open Access
Article
Publication date: 19 August 2021

Linh Truong-Hong, Roderik Lindenbergh and Thu Anh Nguyen

Abstract

Purpose

Terrestrial laser scanning (TLS) point clouds have been widely used in deformation measurement for structures. However, the reliability and accuracy of the resulting deformation estimation strongly depend on the quality of each step of the workflow, which has not been fully addressed. This study aims to give insight into the errors of these steps, and its results serve as guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. Thus, the main contributions of the paper are investigating the point cloud registration error affecting the resulting deformation estimation, identifying an appropriate segmentation method for extracting data points of a deformed surface, investigating a methodology to determine an un-deformed or reference surface for estimating deformation, and proposing a methodology to minimize the impact of outliers, noisy data and/or mixed pixels on deformation estimation.

Design/methodology/approach

In practice, the quality of the point clouds and of the surface extraction strongly affects deformation estimation based on laser scanning point clouds, which can lead to an incorrect decision on the state of the structure when uncertainty is present. To gain more comprehensive insight into those impacts, this study addresses four issues: data errors due to registration of data from multiple scanning stations (Issue 1), methods used to extract point clouds of structure surfaces (Issue 2), selection of the reference surface Sref used to measure deformation (Issue 3), and the presence of outliers and/or mixed pixels (Issue 4). The investigation is demonstrated by estimating the deformation of a bridge abutment, a building and an oil storage tank.

Findings

The study shows that both random sample consensus (RANSAC) and region growing-based methods [cell-based/voxel-based region growing (CRG/VRG)] can extract data points of surfaces, but RANSAC is only applicable to a primary primitive surface (e.g. a plane in this study) subjected to small deformation (case studies 2 and 3) and cannot eliminate mixed pixels. On the other hand, CRG and VRG are suitable methods for deformed, free-form surfaces. In addition, in practice, a reference surface of a structure is mostly not available. The use of a fitting plane based on a point cloud of the current surface would yield unrealistic and inaccurate deformation, because outlier data points and data points of damaged areas affect the accuracy of the fitting plane. This study therefore recommends the use of a reference surface determined from a design concept/specification. A smoothing method with a spatial interval can effectively minimize the negative impact of outliers, noisy data and/or mixed pixels on deformation estimation.
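
For readers unfamiliar with the plane-extraction step, here is a minimal RANSAC plane-fitting sketch in Python; the authors' CRG/VRG region-growing methods are not reproduced, and the threshold and toy data are illustrative assumptions only.

```python
# Minimal RANSAC plane extraction on an N x 3 array of TLS points.
# Deformation can then be measured as the signed point-to-plane distance.
import numpy as np

def ransac_plane(points, n_iter=1000, dist_thresh=0.01, seed=None):
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iter):
        p1, p2, p3 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                     # degenerate (collinear) sample
            continue
        normal /= norm
        d = -normal @ p1
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (normal, d)
    return best_plane, best_inliers

# Toy usage: a noisy, nearly horizontal surface with simulated outliers.
pts = np.random.rand(2000, 3)                    # x, y in [0, 1)
pts[:, 2] = 0.005 * np.random.randn(2000)        # near-planar z with noise
pts[:50, 2] += 0.5                               # outliers / mixed pixels
(plane_normal, plane_d), inliers = ransac_plane(pts, dist_thresh=0.02)
deformation = pts[inliers] @ plane_normal + plane_d   # signed distances to plane
print(inliers.sum(), deformation.std())
```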

Research limitations/implications

Owing to logistical difficulties, an independent measurement could not be established to assess the accuracy of the deformation estimated from the TLS point clouds in the case studies of this research. However, common laser scanners using the time-of-flight or phase-shift principle provide point clouds with accuracy in the order of 1–6 mm, while the point clouds of triangulation scanners have sub-millimetre accuracy.

Practical implications

This study aims to give insight into the errors of these steps, and the results of the study serve as guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds.

Social implications

The results of this study provide guidelines for the practical community to either develop a new workflow or refine an existing one for deformation estimation based on TLS point clouds. A low-cost method can be applied for deformation analysis of structures.

Originality/value

Although a large number of studies have used laser scanning to measure structural deformation over the last two decades, the methods applied mainly measured the change between two states (or epochs) of the structure surface and focused on quantifying deformation based on TLS point clouds. Those studies proved that a laser scanner can be an alternative instrument for acquiring spatial information for deformation monitoring. However, there are still challenges in establishing an appropriate procedure to collect high-quality point clouds and in developing methods to interpret the point clouds to obtain reliable and accurate deformation when uncertainty, including data quality and reference information, is present. Therefore, this study demonstrates the impact on deformation estimation of data quality in terms of point cloud registration error, of the methods selected for extracting point clouds of surfaces, of the identification of reference information, and of outliers, noisy data and/or mixed pixels.

Details

International Journal of Building Pathology and Adaptation, vol. 40 no. 3
Type: Research Article
ISSN: 2398-4708

Keywords

Open Access
Article
Publication date: 11 March 2022

Edmund Baffoe-Twum, Eric Asa and Bright Awuku

Abstract

Background: Annual average daily traffic (AADT) data from road segments are critical for roadway projects, especially for decision-making processes about operations, travel demand, safety-performance evaluation, and maintenance. Regular updates help to determine traffic patterns for decision-making. Unfortunately, having permanent recorders on all road segments, especially low-volume roads, is virtually impossible. Consequently, insufficient AADT information is acquired for planning and new developments. A growing number of statistical, mathematical, and machine-learning algorithms have helped estimate AADT values with reasonable accuracy at both sampled and unsampled locations on low-volume roadways. In some cases, roads with no representative AADT data are handled using information from roadways with similar traffic patterns.

Methods: This study adopted an integrative approach with a combined systematic literature review (SLR) and meta-analysis (MA) to identify and to evaluate the performance, the sources of error, and possible advantages and disadvantages of the techniques utilized most for estimating AADT data. As a result, an SLR of various peer-reviewed articles and reports was completed to answer four research questions.

Results: The study showed that the techniques most frequently utilized to estimate AADT data on low-volume roadways were regression, artificial neural-network techniques, travel-demand models, the traditional factor approach, and spatial-interpolation techniques. The performance of these AADT estimation methods was subjected to meta-analysis. Three analyses were completed, based on R-squared, root mean square error, and mean absolute percentage error. The meta-analysis results indicated a mixed summary effect: (1) all studies were equal; (2) all studies were not comparable. However, the integrated qualitative and quantitative approach indicated that spatial-interpolation (kriging) methods outperformed the others.
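
As an illustration of the spatial-interpolation approach that performed best, the sketch below applies ordinary kriging to toy count-station data using the PyKrige package. The package choice, coordinates, counts and variogram model are assumptions for illustration, not taken from the reviewed studies.

```python
# Minimal sketch: estimate AADT at unsampled locations from sampled count
# stations with ordinary kriging (PyKrige). All values are made up.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Known count stations: projected x/y coordinates (e.g. km) and AADT values.
x = np.array([0.0, 2.0, 3.5, 5.0, 7.5, 9.0])
y = np.array([1.0, 4.0, 2.5, 6.0, 3.0, 5.5])
aadt = np.array([420.0, 610.0, 380.0, 900.0, 450.0, 720.0])

ok = OrdinaryKriging(x, y, aadt, variogram_model="spherical")

# Unsampled low-volume road segments where AADT estimates are needed.
xq = np.array([1.5, 4.2, 8.0])
yq = np.array([2.0, 5.0, 4.0])
aadt_hat, kriging_var = ok.execute("points", xq, yq)

# kriging_var is the estimation variance at each prediction point, giving an
# uncertainty measure alongside the interpolated AADT values.
print(np.round(aadt_hat, 1), np.round(np.sqrt(kriging_var), 1))
```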

Conclusions: Spatial-interpolation methods may be selected over others by practitioners at all levels to generate accurate AADT data for decision making. In addition, the resulting cross-validation statistics provide performance measures comparable to those of the other methods.

Details

Emerald Open Research, vol. 1 no. 5
Type: Research Article
ISSN: 2631-3952

Keywords

Content available
Article
Publication date: 9 June 2023

Wahib Saif and Adel Alshibani

Abstract

Purpose

This paper aims to present a highly accessible and affordable tracking model for earthmoving operations in an attempt to overcome some of the limitations of current tracking models.

Design/methodology/approach

The proposed methodology involves four main processes: acquiring onsite terrestrial images, processing the images into scaled 3D point cloud data, extracting volumetric measurements and crew productivity estimates from multiple point clouds using Delaunay triangulation, and conducting earned value/schedule analysis and forecasting the remaining scope of work based on the estimated performance. For validation, the tracking model was compared with an observation-based tracking approach on a backfilling site. It was also used to track a coarse base aggregate inventory for a road construction project.
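
To make the volumetric step concrete, the following is a minimal sketch, under stated assumptions rather than the authors' implementation, of estimating a volume from a scaled point cloud by Delaunay triangulation of the XY footprint and summation of vertical prisms.

```python
# Minimal sketch of the volumetric step: triangulate the scaled point cloud in
# the XY plane (Delaunay) and sum the vertical prisms between the surface and
# a base elevation. Grid data and base_z are illustrative assumptions.
import numpy as np
from scipy.spatial import Delaunay

def volume_above_base(points, base_z=0.0):
    """points: (N, 3) scaled cloud; returns volume between surface and base_z."""
    xy, z = points[:, :2], points[:, 2]
    tri = Delaunay(xy)
    volume = 0.0
    for simplex in tri.simplices:             # each simplex is a surface triangle
        a, b, c = xy[simplex]
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))   # footprint area
        mean_height = z[simplex].mean() - base_z             # average prism height
        volume += area * mean_height
    return volume

# Toy usage: a 1 m-high surface sampled on a 10 m x 10 m grid.
gx, gy = np.meshgrid(np.linspace(0, 10, 30), np.linspace(0, 10, 30))
gz = np.ones_like(gx)
cloud = np.column_stack([gx.ravel(), gy.ravel(), gz.ravel()])
print(volume_above_base(cloud))   # ~100 m^3 for this footprint and height
```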

Findings

The presented model has proved to be a practical and accurate tracking approach that algorithmically estimates and forecasts all performance parameters from the captured data.

Originality/value

The proposed model is unique in extracting accurate volumetric measurements directly from multiple point clouds in developed code using Delaunay triangulation, instead of extracting them from textured models in modelling software, which is neither automated nor time-effective. Furthermore, the presented model uses a self-calibration approach that aims to eliminate the pre-calibration procedure otherwise required for each camera before image capture. Thus, any worker onsite can capture the required images with an easily accessible camera (e.g. a handheld camera or a smartphone), and the images can be sent to any processing device via e-mail, cloud-based storage or any communication application (e.g. WhatsApp).

Content available
Book part
Publication date: 30 July 2018

Abstract

Details

Marketing Management in Turkey
Type: Book
ISBN: 978-1-78714-558-0
