Search results
1 – 10 of 10
Abstract
The author develops a bilateral Nash bargaining model under value uncertainty and private/asymmetric information, combining ideas from axiomatic and strategic bargaining theory. The solution to the model leads organically to a two-tier stochastic frontier (2TSF) setup with intra-error dependence. The author presents two different statistical specifications to estimate the model: one that accounts for regressor endogeneity using copulas, the other able to identify the bargaining power and the private information effects separately at the individual level. An empirical application using a matched employer–employee data set (MEEDS) from Zambia and a second using one from Ghana showcase the applied potential of the approach.
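The two-tier error structure can be made concrete with a small simulation. The sketch below is illustrative only: the exponential/normal distributions, the scales and the bargaining interpretation are assumptions about a generic 2TSF setup, and the components are drawn independently here, whereas the paper allows intra-error dependence:

```python
import random

random.seed(1)

# Generic two-tier stochastic frontier (2TSF) composed error:
#   outcome = x'beta + w - u + v
# where w >= 0 (e.g. one side's bargaining surplus) pushes the outcome up,
# u >= 0 (e.g. the other side's informational advantage) pushes it down,
# and v is symmetric noise. Distributions and scales are illustrative only.
def composed_error():
    w = random.expovariate(1 / 0.6)  # one-sided, mean 0.6
    u = random.expovariate(1 / 0.4)  # one-sided, mean 0.4
    v = random.gauss(0.0, 0.2)       # two-sided noise
    return w - u + v

errors = [composed_error() for _ in range(100_000)]
mean_error = sum(errors) / len(errors)  # should be near E[w] - E[u] = 0.2
```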
Abstract
Purpose
In order to solve the decision-making problem in which the attributive weight and attributive value are both interval grey numbers, this paper constructs a multi-attribute grey decision-making model based on the generalized greyness of interval grey numbers.
Design/methodology/approach
Firstly, based on the properties of the generalized greyness of interval grey numbers, the generalized weighted greyness distance between interval grey numbers is given, and the transformation relationship between greyness distance and real-number distance is analyzed. Then, a multi-attribute grey decision-making model is constructed by minimizing the sum of squared generalized weighted greyness distances from each decision scheme to the best and worst schemes, and the simplified form of the model is given. Finally, the proposed grey decision-making model is applied to the evaluation of the technological innovation capability of six provinces in China to verify its effectiveness.
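The best-scheme/worst-scheme construction can be sketched in a TOPSIS-style form. The endpoint-based interval distance and closeness ratio below are illustrative stand-ins, not the paper's generalized weighted greyness distance or objective function, and all attributes are assumed benefit-type:

```python
import math

def interval_dist(a, b):
    # distance between two interval grey numbers a = (aL, aU), b = (bL, bU),
    # taken here as a simple endpoint-based Euclidean distance (illustrative)
    return math.sqrt(((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / 2)

def closeness_scores(schemes, weights):
    # schemes: list of alternatives, each a list of (lower, upper) intervals
    # per attribute; the best/worst schemes are taken endpoint-wise
    m = len(schemes[0])
    best = [(max(s[j][0] for s in schemes), max(s[j][1] for s in schemes))
            for j in range(m)]
    worst = [(min(s[j][0] for s in schemes), min(s[j][1] for s in schemes))
             for j in range(m)]
    scores = []
    for s in schemes:
        d_best = math.sqrt(sum(weights[j] * interval_dist(s[j], best[j]) ** 2
                               for j in range(m)))
        d_worst = math.sqrt(sum(weights[j] * interval_dist(s[j], worst[j]) ** 2
                                for j in range(m)))
        scores.append(d_worst / (d_best + d_worst))  # higher = closer to best
    return scores
```

A scheme that dominates on every attribute gets a score of 1; a fully dominated one gets 0.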
Findings
The results show that the proposed grey decision-making model has a strict mathematical foundation, clear physical meaning, simple calculation and easy programming. The application example shows that the model is feasible and effective. The research results not only enrich grey system theory but also provide a new way to handle decision-making problems in which the attributive weights and attributive values are interval grey numbers.
Practical implications
The decision-making model proposed in this paper does not need to seek the optimal solution of the attributive weight and the attributive value, saving labor and capital investment in decision-making. The model is also suitable for decision-making problems in which interval grey numbers and real numbers coexist.
Originality/value
The paper succeeds in realizing the multi-attribute grey decision-making model based on generalized greyness and its simplified forms, which provide a new method for grey decision analysis.
Ziwen Gao, Steven F. Lehrer, Tian Xie and Xinyu Zhang
Abstract
Motivated by empirical features that characterize cryptocurrency volatility data, the authors develop a forecasting strategy that can account for both model uncertainty and heteroskedasticity of unknown form. The theoretical investigation establishes the asymptotic optimality of the proposed heteroskedastic model averaging heterogeneous autoregressive (H-MAHAR) estimator under mild conditions. The authors additionally examine the convergence rate of the estimated weights of the proposed H-MAHAR estimator. This analysis sheds new light on the asymptotic properties of the least squares model averaging estimator under alternative complicated data generating processes (DGPs). To examine the performance of the H-MAHAR estimator, the authors conduct an out-of-sample forecasting application involving 22 different cryptocurrency assets. The results emphasize the importance of accounting for both model uncertainty and heteroskedasticity in practice.
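The heterogeneous autoregressive (HAR) component underlying the estimator regresses next-period realized volatility on daily, weekly and monthly averages. A minimal sketch of the feature construction, assuming the conventional 1/5/22-day horizons; the model-averaging and heteroskedasticity-robust weighting steps of H-MAHAR are not sketched here:

```python
def har_features(rv):
    # rv: chronological list of daily realized volatilities
    # returns rows (RV_daily, RV_weekly, RV_monthly, target) for a HAR regression
    rows = []
    for t in range(22, len(rv)):
        rv_d = rv[t - 1]                   # yesterday
        rv_w = sum(rv[t - 5:t]) / 5        # past-week average
        rv_m = sum(rv[t - 22:t]) / 22      # past-month average
        rows.append((rv_d, rv_w, rv_m, rv[t]))
    return rows
```

Candidate models built from subsets of such regressors would then be combined with estimated weights.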
Chon Van Le and Uyen Hoang Pham
Abstract
Purpose
This paper aims mainly at introducing applied statisticians and econometricians to the current research methodology with non-Euclidean data sets. Specifically, it provides the basis and rationale for statistics in Wasserstein space, where the metric on probability measures is taken as a Wasserstein metric arising from optimal transport theory.
Design/methodology/approach
The authors spell out the basis and rationale for using Wasserstein metrics on the data space of (random) probability measures.
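In one dimension, the Wasserstein distance between two empirical measures with equal-size supports reduces to matching order statistics. A minimal sketch of the empirical 1-Wasserstein (earth mover's) distance, under the equal-size assumption:

```python
def wasserstein_1d(xs, ys):
    # empirical 1-Wasserstein distance between two equal-size samples:
    # in 1-D the optimal transport plan pairs sorted values, so the distance
    # is the mean absolute difference of order statistics
    assert len(xs) == len(ys), "equal-size samples assumed in this sketch"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```

Shifting a sample by a constant c moves it a distance of exactly c, while permuting a sample leaves the distance at zero, which is the metric behaviour that motivates its use on spaces of probability measures.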
Findings
In elaborating the new statistical analysis of non-Euclidean data sets, the paper illustrates the generalization of traditional aspects of statistical inference following Fréchet's program.
Originality/value
Besides the elaboration of research methodology for a new data analysis, the paper discusses the applications of Wasserstein metrics to the robustness of financial risk measures.
Yonghong Zhang, Shouwei Li, Jingwei Li and Xiaoyu Tang
Abstract
Purpose
This paper aims to develop a novel grey Bernoulli model with memory characteristics, which is designed to dynamically choose the optimal memory kernel function and the length of memory dependence period, ultimately enhancing the model's predictive accuracy.
Design/methodology/approach
This paper enhances the traditional grey Bernoulli model by introducing memory-dependent derivatives, resulting in a novel memory-dependent derivative grey model. Additionally, fractional-order accumulation is employed for preprocessing the original data. The length of the memory dependence period for memory-dependent derivatives is determined through grey correlation analysis. Furthermore, the whale optimization algorithm is utilized to optimize the cumulative order, power index and memory kernel function index of the model, enabling adaptability to diverse scenarios.
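The fractional-order accumulation preprocessing step can be sketched directly: the r-order accumulated series uses generalized binomial weights, and r = 1 recovers the ordinary cumulative sum. The grey Bernoulli dynamics and the whale-optimization search are not sketched here:

```python
from math import gamma

def frac_accumulate(x, r):
    # r-order fractional accumulation (r-AGO): the k-th accumulated value is
    # sum_i C(k-i+r-1, k-i) * x_i, with the generalized binomial coefficient
    # written via the gamma function
    out = []
    for k in range(1, len(x) + 1):
        acc = 0.0
        for i in range(1, k + 1):
            w = gamma(k - i + r) / (gamma(r) * gamma(k - i + 1))
            acc += w * x[i - 1]
        out.append(acc)
    return out
```

In the paper's setting, the accumulation order r is one of the parameters tuned by the optimizer rather than fixed in advance.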
Findings
Selecting appropriate memory kernel functions and memory-dependence lengths improves prediction performance. The model can adaptively select the memory kernel function and memory-dependence length, and its performance is better than that of the comparison models.
Research limitations/implications
The model presented in this article has some limitations. The grey model is itself suited to small-sample data, and memory-dependent derivatives mainly consider the memory effect over a fixed length. Therefore, this model is mainly applicable to predicting data with short-term memory effects and has certain limitations for time series with long-term memory.
Practical implications
In practical systems, memory effects typically exhibit a decaying pattern, which is effectively characterized by the memory kernel function. The model in this study skillfully determines the appropriate kernel functions and memory dependency lengths to capture these memory effects, enhancing its alignment with real-world scenarios.
Originality/value
Based on the memory-dependent derivative method, a memory-dependent derivative grey Bernoulli model that more accurately reflects the actual memory effect is constructed and applied to power generation forecasting in China, South Korea and India.
Siti Hafsah Zulkarnain and Abdol Samad Nawi
Abstract
Purpose
The purpose of this study is to analyse numerous aspects affecting residential property price in Malaysia against macroeconomic issues such as gross domestic product (GDP), exchange rate, unemployment and wage.
Design/methodology/approach
The hedonic pricing model has been adopted as the econometric model for this research to investigate the relationship between residential property price and macroeconomic indicators. The data for residential property price and macroeconomic variables were collected from 1991 to 2019. Multiple linear regression was used to find the relationship between the dependent and independent variables.
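The estimation step is an ordinary least-squares fit of price on the macroeconomic regressors. A minimal sketch with synthetic data; the column labels, sample size and coefficient values are invented for illustration and are not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 29  # e.g. annual observations, 1991-2019

# columns: intercept, GDP, exchange rate, unemployment, wage (synthetic data)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])
beta_true = np.array([1.0, 0.8, 0.1, -0.5, -0.3])  # illustrative signs only
y = X @ beta_true + rng.normal(scale=0.05, size=n)

# multiple linear regression via least squares
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With low noise and exogenous regressors, the fitted coefficients recover the true ones closely; significance testing would then look at each coefficient's standard error.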
Findings
The results show that GDP has a significant positive impact on residential property price, while the exchange rate has a positive but insignificant impact. The unemployment rate has a significant negative impact on residential property price, and wage similarly shows a negative relationship with residential property prices. Moreover, during the COVID-19 pandemic in Malaysia, this research offers a more transparent view of the relationship between residential property price and the macroeconomic issues of GDP, exchange rate, unemployment and wage.
Originality/value
The findings of this research suggest that macroeconomic issues cannot be eliminated, as Malaysia is a developing country and such issues will always arise, but they can be mitigated to maximise the advantages; for example, during COVID-19, the measures taken to fight the pandemic were crucial and weakened the macroeconomic pressures.
Sarah Amber Evans, Lingzi Hong, Jeonghyun Kim, Erin Rice-Oyler and Irhamni Ali
Abstract
Purpose
Data literacy empowers college students, equipping them with essential skills necessary for their personal lives and careers in today’s data-driven world. This study aims to explore how community college students evaluate their data literacy and further examine demographic and educational/career advancement disparities in their self-assessed data literacy levels.
Design/methodology/approach
An online survey presenting a data literacy self-assessment scale was distributed and completed by 570 students at four community colleges. Statistical tests were performed between the data literacy factor scores and students’ demographic and educational/career advancement variables.
Findings
Male students rated their data literacy skills higher than female students did. The 18–19 age group reported relatively lower confidence in their data literacy than other age groups. High school graduates did not feel proficient in data literacy to the level required for college and the workplace. Full-time employed students demonstrated more confidence in their data literacy than part-time and non-employed students.
Originality/value
Given the lack of research on community college students’ data literacy, the findings of this study can be valuable in designing and implementing data literacy training programs for different groups of community college students.
Huimin Jing and Yixin Zhu
Abstract
Purpose
This paper aims to explore the impact of cycle superposition on bank liquidity risk under different levels of financial openness so that banks can better manage their liquidity risk. Meanwhile, it can also provide some ideas for banks in other emerging economies to better cope with the shocks of the global financial cycle.
Design/methodology/approach
Employing monthly data on 16 commercial banks in China from 2005 to 2021 and the time-varying parameter vector autoregression with stochastic volatility (TVP-SV-VAR) model, the authors first examine whether cycle superposition can magnify the impact of China's financial cycle on bank liquidity risk. Subsequently, the authors investigate the impact of different levels of financial openness on this amplification. Finally, the shock of the financial cycles of the world's major economies on the liquidity risk of Chinese banks is also empirically analyzed.
Findings
Cycle superposition can magnify the impact of China's financial cycle on bank liquidity risk. However, there are significant differences across levels of financial openness: compared with periods of low financial openness, in periods of high financial openness the magnifying effect of cycle superposition is strengthened in the short term but clearly weakened in the long run. In addition, the authors' findings demonstrate that although the United States is the main shock country, the influence of other developed economies, such as Japan and the Eurozone countries, cannot be ignored.
Originality/value
Firstly, the cycle superposition index is constructed. Secondly, the authors supplement the literature by providing evidence that the association between cycle superposition and bank liquidity risk also depends on financial openness. Finally, the dominant countries of the global financial cycle are reassessed.
Divya Surendran Nair and Seema Bhandare
Abstract
Purpose
The purpose of this study was to examine how well a strength-based program grounded in positive psychology principles can advance the practical critical thinking skills of those pursuing the teacher training course.
Design/methodology/approach
This study used a single-group pre-test post-test design with 35 teacher-trainees from the Bachelor of Education course. The two-and-a-half-week strength-based program used the Values in Action survey to identify strengths. Pre- and post-test scores, measured with the Cornell Critical Thinking Test – Level Z, were analysed in SPSS (Statistical Package for the Social Sciences), including paired-samples t-tests for the subcomponents and the overall composite.
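The pre/post comparison reduces to a paired-samples t-test on the difference scores. A minimal sketch of the statistic; the four score pairs below are invented for illustration, not the study's data:

```python
from math import sqrt

def paired_t(pre, post):
    # paired-samples t statistic on post-minus-pre differences
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance
    return mean_d / sqrt(var_d / n), n - 1               # (t statistic, df)

t_stat, df = paired_t([10, 12, 9, 11], [11, 14, 12, 15])
```

The t statistic is then compared against the t distribution with n − 1 degrees of freedom to obtain a p-value.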
Findings
Analysis of the pre- and post-test scores demonstrated a statistically significant improvement in the critical thinking scores obtained by the teacher-trainees. Post-test scores were consistently significant. Among the elements of critical thinking, induction, meaning, observation and credibility were more prominent; deduction and assumption identification also showed a significant effect.
Originality/value
Most critical thinking programs focus on evaluating specific teaching methods for improving critical thinking skills. In education, positive psychology studies often center on students’ well-being, attention spans and academic success, aligning with wellness programs. Despite the importance of strengths in positive psychology, there is a lack of research on using a strength-based approach to boost critical thinking skills. This study aims to enhance teacher-trainees’ critical thinking by leveraging their individual strengths, moving away from traditional instructional strategies.