Search results
1 – 10 of over 2000
Abstract
Purpose
Considering the continuous rise in the public debt stock of developing countries (particularly Ghana), the unstable economic growth rates of the past decades and the recent borrowing driven by the impact of COVID-19, this paper aims to examine the causal relationships between public debt and economic growth over time.
Design/methodology/approach
The paper uses a dynamic multivariate autoregressive distributed lag (ARDL)-based Granger-causality model to test the causal relationships between public debt and economic growth [gross domestic product (GDP)]. Annual time-series data spanning 1978–2018 were sourced from the World Bank Development Indicators database, the IMF Fiscal Affairs Department database and the World Economic Outlook (WEO) database.
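The abstract does not give the authors' exact ARDL specification, but the core of a Granger-causality test (does adding lags of public debt improve a regression of GDP on its own lags?) can be sketched with a plain F-test; the synthetic series, lag order and noise level below are illustrative stand-ins, not the paper's data.

```python
import numpy as np

def granger_f_test(y, x, lags=2):
    """Test whether lags of x help predict y beyond y's own lags.

    Returns the F statistic for the joint null that all x-lag
    coefficients are zero (larger F = stronger evidence of causality).
    """
    T = len(y)
    rows = T - lags
    Y = y[lags:]
    # Restricted model: y on its own lags; unrestricted adds lags of x.
    own = np.column_stack([y[lags - k:T - k] for k in range(1, lags + 1)])
    cross = np.column_stack([x[lags - k:T - k] for k in range(1, lags + 1)])
    const = np.ones((rows, 1))
    Xr = np.hstack([const, own])
    Xu = np.hstack([const, own, cross])
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    df_u = rows - Xu.shape[1]
    return ((rss_r - rss_u) / lags) / (rss_u / df_u)

# Synthetic example: GDP driven by lagged debt, so causality should show up.
rng = np.random.default_rng(0)
debt = rng.normal(size=200).cumsum()
gdp = 0.5 * np.roll(debt, 1) + rng.normal(scale=0.3, size=200)
print(granger_f_test(gdp[1:], debt[1:], lags=2))
```

A large F relative to the F(lags, df) critical value rejects the null of no Granger causality; the published model additionally handles cointegration and multiple regressors.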
Findings
The results reveal that public debt has no causal relationship with GDP in the short run, but there is unidirectional Granger causality running from public debt to GDP in the long run. Again, investment spending has a negative bi-directional causal relationship with GDP in the short run but a positive bi-directional causal relationship in the long run. Conversely, no short-run causal relationship exists between government consumption expenditure and GDP, but long-run Granger causality runs from government consumption expenditure to GDP. Finally, public debt has a positive impact on the inflation rate in the short run.
Practical implications
The findings imply that governments must ensure strong fiscal discipline as a precursor to the effective and efficient use of recent borrowing; that is, the loans should be used for highly prioritized, well-evaluated and self-sustaining projects (preferably investment spending) that add positively to GDP.
Originality/value
This paper provides contemporary findings that augment the extant literature on public debt and economic growth by using variables and empirical models that prior studies have not sufficiently covered from a developing-country perspective, and it affirms that public debt contributes to GDP only in the long run.
Details
Keywords
Yen Sun, Citra Amanda and Berty Caroline Centana
Abstract
Purpose
This research aims to determine the factors that affected Bitcoin price return in the period before and during the COVID-19 pandemic.
Design/methodology/approach
The independent variables used in this study are hashrate, transaction volume, social media and some macroeconomic variables. The data are processed using the vector error correction model (VECM) to determine the short-term and long-term relationships between variables.
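A full VECM estimation is involved; as a simpler stand-in, a single-equation Engle–Granger error-correction sketch shows how short-term dynamics and a long-term (cointegrating) relationship are separated. The series below are hypothetical synthetic data, not the paper's Bitcoin variables.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
# Hypothetical cointegrated pair: a "fundamental" driver and a price
# that error-corrects toward 2x the driver each period.
driver = rng.normal(size=n).cumsum()
price = np.empty(n)
price[0] = 2 * driver[0]
for t in range(1, n):
    # short-run shock plus correction of last period's disequilibrium
    price[t] = price[t - 1] - 0.3 * (price[t - 1] - 2 * driver[t - 1]) \
               + rng.normal(scale=0.5)

# Step 1: long-run (cointegrating) regression price_t = b * driver_t + e_t
b = np.linalg.lstsq(driver[:, None], price, rcond=None)[0][0]
ect = price - b * driver            # error-correction term (disequilibrium)

# Step 2: short-run regression of d(price) on the lagged ECT and d(driver);
# the ECT coefficient is the speed of adjustment back to equilibrium.
dp, dd = np.diff(price), np.diff(driver)
X = np.column_stack([np.ones(n - 1), ect[:-1], dd])
alpha = np.linalg.lstsq(X, dp, rcond=None)[0][1]
print(round(b, 2), round(alpha, 2))
```

Here b recovers the long-run relationship (about 2) and alpha the adjustment speed (about -0.3); a VECM generalizes this to a system of equations with jointly estimated cointegrating vectors.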
Findings
The research shows that (1) Twitter and Gold significantly affected Bitcoin in the short term before the COVID-19 pandemic; (2) hashrate, transaction volume, Twitter and the financial stress index had a significant effect on Bitcoin in the long term before the COVID-19 pandemic; (3) the volatility index had a significant effect on Bitcoin in the short term during the COVID-19 pandemic; and (4) hashrate, transaction volume, Twitter and CHF/USD had a significant effect on Bitcoin in the long term during the COVID-19 pandemic.
Research limitations/implications
This research explains the factors affecting Bitcoin so that investors and regulators can pay closer attention, prepare for potential risks and gain a good understanding of market conditions for greater crypto adoption in the future.
Originality/value
The novelty of this study is that the various factors driving the Bitcoin price were analyzed both before and during the COVID-19 pandemic, including social media; interestingly, sentiment turns out to have predictive power for Bitcoin price returns.
Details
Keywords
Annye Braca and Pierpaolo Dondio
Abstract
Purpose
Prediction is a critical task in targeted online advertising, where predictions better than random guessing can translate to real economic return. This study aims to use machine learning (ML) methods to identify individuals who respond well to certain linguistic styles/persuasion techniques based on Aristotle’s means of persuasion, rhetorical devices, cognitive theories and Cialdini’s principles, given their psychometric profile.
Design/methodology/approach
A total of 1,022 individuals took part in the survey; participants were asked to fill out the ten-item personality measure questionnaire to capture personality traits and the dysfunctional attitude scale (DAS) to measure dysfunctional beliefs and cognitive vulnerabilities. ML classification models using participant profiling information as input were developed to predict the extent to which an individual was influenced by statements that contained different linguistic styles/persuasion techniques. Several ML algorithms were used, including support vector machine, LightGBM and Auto-Sklearn, to predict the effect of each technique given each individual's profile (personality, belief system and demographic data).
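The authors' exact pipeline is not reproduced here, but the general shape of such a profile-to-influence classifier can be sketched with scikit-learn; the features, target construction and labels below are synthetic stand-ins for the survey data, not the study's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
n = 1022  # same sample size as the survey

# Hypothetical profile features: 10 personality items plus a DAS-style score.
personality = rng.normal(size=(n, 10))
das = rng.normal(size=(n, 1))
X = np.hstack([personality, das])

# Hypothetical binary target ("influenced by this technique") driven by two
# personality traits and the DAS score, so the label is genuinely learnable.
logits = 1.2 * X[:, 0] - 0.8 * X[:, 3] + 1.0 * X[:, 10]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In the study, one such model would be trained per persuasion technique and the algorithms (SVM, LightGBM, Auto-Sklearn) compared on held-out accuracy.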
Findings
The findings highlight the importance of incorporating emotion-based variables as model input in predicting the influence of textual statements with embedded persuasion techniques. Across all investigated models, the influence effect could be predicted with an accuracy ranging from 53% to 70%, indicating the importance of testing multiple ML algorithms in the development of a persuasive communication (PC) system. The classification ability of the models was highest when predicting the response to statements using rhetorical devices and flattery persuasion techniques. By contrast, techniques such as authority or social proof were less predictable. Adding DAS scale features improved model performance, suggesting they may be important in modelling persuasion.
Research limitations/implications
In this study, the survey was limited to English-speaking countries and largely Western society values. More work is needed to ascertain the efficacy of the models for other populations, cultures and languages. Most PC efforts are targeted at groups such as users, clients, shoppers and voters, whereas this study was set in the communication context of education; further research is required to explore the capability of predictive ML models in other contexts. Finally, long self-reported psychological questionnaires may not be suitable for real-world deployment and could be subject to bias, so a simpler method needs to be devised to gather user profile data, such as using a subset of the most predictive features.
Practical implications
The findings of this study indicate that leveraging richer profiling data in conjunction with ML approaches may assist in the development of enhanced persuasive systems. There are many applications such as online apps, digital advertising, recommendation systems, chatbots and e-commerce platforms which can benefit from integrating persuasion communication systems that tailor messaging to the individual – potentially translating into higher economic returns.
Originality/value
This study integrates sets of features that have heretofore not been used together in developing ML-based predictive models of PC. DAS scale data, which relate to dysfunctional beliefs and cognitive vulnerabilities, were assessed for their importance in identifying effective persuasion techniques. Additionally, the work compares a range of persuasion techniques that thus far have only been studied separately. This study also demonstrates the application of various ML methods in predicting the influence of linguistic styles/persuasion techniques within textual statements and shows that a robust methodology comparing a range of ML algorithms is important in the discovery of a performant model.
Details
Keywords
Zhiwei Zeng, Chunyan Miao, Cyril Leung and Zhiqi Shen
Abstract
Purpose
This paper aims to adapt and computerize the Trail Making Test (TMT) to support long-term self-assessment of cognitive abilities.
Design/methodology/approach
The authors propose a divide-and-combine (DAC) approach for generating different instances of TMT that can be used in repeated assessments with nearly no discernible practice effects. In the DAC approach, partial trails are generated separately in different layers and then combined to form a complete TMT trail.
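One way to read the DAC idea in code is the loose sketch below: partial trails are generated separately, one per layer, and concatenated into a complete labelled trail, so each run yields a fresh instance. The layer geometry, point counts and layout rules are assumptions for illustration, not the authors' algorithm.

```python
import random

def generate_trail(n_points=25, n_layers=5, width=100, height=100, seed=None):
    """Generate one TMT-style trail via a divide-and-combine scheme:
    partial trails are laid out separately in horizontal layers,
    then combined into a single sequence of labelled points."""
    rng = random.Random(seed)
    per_layer = n_points // n_layers
    band = height / n_layers
    trail = []
    label = 1
    for layer in range(n_layers):
        # Partial trail: consecutive labels confined to one horizontal band,
        # so every instance shares the same coarse structure (difficulty)
        # while the exact point positions differ.
        for _ in range(per_layer):
            x = rng.uniform(0, width)
            y = rng.uniform(layer * band, (layer + 1) * band)
            trail.append((label, (round(x, 1), round(y, 1))))
            label += 1
    return trail

trail_a = generate_trail(seed=1)
trail_b = generate_trail(seed=2)
print(trail_a[:3])
```

Because every generated instance shares the layer structure but not the exact positions, repeated testing sees a new trail each time, which is the property the authors use to suppress practice effects.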
Findings
The proposed approach was implemented in a computerized test application called iTMT. A pilot study was conducted to evaluate iTMT. The results show that the instances of TMT generated by the DAC approach had an adequate level of difficulty. iTMT also achieved stronger construct validity, higher test–retest reliability and significantly smaller practice effects than existing computerized tests.
Originality/value
The preliminary results suggest that iTMT is suitable for long-term monitoring of cognitive abilities. By supporting self-assessment, iTMT can also help crowdsource the assessment processes, which conventionally need to be administered by healthcare professionals, to the patients themselves.
Details
Keywords
Warisa Thangjai and Sa-Aat Niwitpong
Abstract
Purpose
Confidence intervals play a crucial role in economics and finance, providing a credible range of values for an unknown parameter along with a corresponding level of certainty. Their applications encompass economic forecasting, market research, financial forecasting, econometric analysis, policy analysis, financial reporting, investment decision-making, credit risk assessment and consumer confidence surveys. Signal-to-noise ratio (SNR) finds applications in economics and finance across various domains such as economic forecasting, financial modeling, market analysis and risk assessment. A high SNR indicates a robust and dependable signal, simplifying the process of making well-informed decisions. On the other hand, a low SNR indicates a weak signal that could be obscured by noise, so decision-making procedures need to take this into serious consideration. This research focuses on the development of confidence intervals for functions derived from the SNR and explores their application in the fields of economics and finance.
Design/methodology/approach
The construction of the confidence intervals involved the application of various methodologies. For the SNR, confidence intervals were formed using the generalized confidence interval (GCI), large sample and Bayesian approaches. The difference between SNRs was estimated through the GCI, large sample, method of variance estimates recovery (MOVER), parametric bootstrap and Bayesian approaches. Additionally, confidence intervals for the common SNR were constructed using the GCI, adjusted MOVER, computational and Bayesian approaches. The performance of these confidence intervals was assessed using coverage probability and average length, evaluated through Monte Carlo simulation.
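As an illustration of the evaluation design, the sketch below builds a large-sample (Wald-type) confidence interval for the SNR of a normal sample, one of the simpler approaches the paper compares, and checks its coverage probability and average length by Monte Carlo simulation. The parameter values are illustrative, not the paper's simulation settings.

```python
import numpy as np

def snr_ci_large_sample(x, level=0.95):
    """Large-sample (Wald) confidence interval for SNR = mean/sd of a
    normal sample, using the delta-method variance (1 + snr^2/2)/n."""
    n = len(x)
    snr = x.mean() / x.std(ddof=1)
    z = 1.959963984540054  # 97.5% normal quantile for a 95% interval
    half = z * np.sqrt((1 + snr ** 2 / 2) / n)
    return snr - half, snr + half

# Monte Carlo check of coverage probability and average length,
# mirroring the paper's evaluation criteria.
rng = np.random.default_rng(7)
mu, sigma, n, reps = 2.0, 1.0, 100, 2000
true_snr = mu / sigma
hits, lengths = 0, []
for _ in range(reps):
    lo, hi = snr_ci_large_sample(rng.normal(mu, sigma, n))
    hits += lo <= true_snr <= hi
    lengths.append(hi - lo)
coverage = hits / reps
print(f"coverage={coverage:.3f}, avg length={np.mean(lengths):.3f}")
```

The GCI, MOVER and Bayesian intervals the paper studies would slot into the same loop in place of `snr_ci_large_sample`, with the interval achieving nominal coverage at the shortest average length preferred.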
Findings
The GCI approach demonstrated superior performance over other approaches in terms of both coverage probability and average length for the SNR and the difference between SNRs. Hence, employing the GCI approach is advised for constructing confidence intervals for these parameters. As for the common SNR, the Bayesian approach exhibited the shortest average length. Consequently, the Bayesian approach is recommended for constructing confidence intervals for the common SNR.
Originality/value
This research presents confidence intervals for functions of the SNR to assess SNR estimation in the fields of economics and finance.
Details
Keywords
Noha Hesham Ghazy, Hebatallah Ghoneim and Dimitrios Paparas
Abstract
Purpose
One of the main theories regarding the relationship between government expenditure and gross domestic product (GDP) is Wagner’s law. This law was developed in the late-19th century by Adolph Wagner (1835–1917), a prominent German economist, and depicts that an increase in government expenditure is a feature often associated with progressive states. This paper aims to examine the validity of Wagner’s law in Egypt for 1960–2018. The relationship between real government expenditure and real GDP is tested using three versions of Wagner’s law.
Design/methodology/approach
To test the validity of Wagner's law in Egypt, time-series analysis is used. The methodology of this paper comprises unit-root tests for stationarity, the Johansen cointegration approach, an error-correction model and Granger causality.
Findings
The results provide strong evidence of a long-term relationship between GDP and government expenditure. Moreover, the causal relationship is found to be bi-directional. Hence, this study provides support for Wagner's law in the examined context.
Research limitations/implications
It should be noted, however, that there are some limitations to this study. For instance, the government's size was measured through government consumption expenditure rather than total government expenditure due to data availability, which does not fully capture the size of government. Moreover, the available data were limited and do not fully cover the earliest stages of industrialization and urbanization in Egypt. Furthermore, although time-series analysis provides more contextualized results and conclusions, the obtained conclusions suffer from limited generalizability.
Originality/value
This paper makes a specific contribution to the empirical literature on Wagner's law by testing Egyptian data using time-series econometric techniques over the longest time period examined so far, 1960–2018.
Details
Keywords
Roberto De Luca, Antonino Ferraro, Antonio Galli, Mosè Gallo, Vincenzo Moscato and Giancarlo Sperlì
Abstract
Purpose
The recent innovations of Industry 4.0 have made it possible to easily collect data related to a production environment. In this context, information about industrial equipment, gathered by proper sensors, can be profitably used for supporting predictive maintenance (PdM) through the application of data-driven analytics based on artificial intelligence (AI) techniques. Although deep learning (DL) approaches have proven to be quite effective solutions to the problem, one open research challenge remains: the design of PdM methods that are computationally efficient and, most importantly, applicable in real-world internet of things (IoT) scenarios, where they are required to be executable directly on the limited hardware of the devices.
Design/methodology/approach
In this paper, the authors propose a DL approach to the PdM task, based on a particular and very efficient architecture. The major novelty of the proposed framework is leveraging a multi-head attention (MHA) mechanism to obtain both high accuracy in remaining useful life (RUL) estimation and low model storage requirements, providing the basis for a possible implementation directly on the equipment hardware.
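The specific architecture is not described in the abstract, but the MHA building block the framework leverages can be sketched in NumPy; the dimensions, random projections and the sensor-window framing below are assumptions for illustration, not the authors' model.

```python
import numpy as np

def multi_head_attention(x, Wq, Wk, Wv, Wo, n_heads):
    """Scaled dot-product multi-head attention over a sensor window.

    x: (seq_len, d_model) embedded sensor readings. Returns a
    (seq_len, d_model) context representation from which an RUL
    regression head could be fed."""
    seq_len, d_model = x.shape
    d_head = d_model // n_heads
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    # split projections into heads: (n_heads, seq_len, d_head)
    split = lambda m: m.reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    q, k, v = split(q), split(k), split(v)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)           # softmax
    heads = weights @ v                                      # (h, seq, d_head)
    concat = heads.transpose(1, 0, 2).reshape(seq_len, d_model)
    return concat @ Wo

rng = np.random.default_rng(3)
d_model, seq_len, n_heads = 32, 30, 4
W = [rng.normal(scale=0.1, size=(d_model, d_model)) for _ in range(4)]
out = multi_head_attention(rng.normal(size=(seq_len, d_model)), *W, n_heads)
print(out.shape)
```

An attention layer of this kind needs only four d_model-by-d_model weight matrices, which is one reason an MHA-based model can be far smaller than an LSTM of comparable capacity.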
Findings
The experimental results achieved on the NASA dataset show that the authors' approach outperforms the majority of the most widespread state-of-the-art techniques in terms of both effectiveness and efficiency.
Research limitations/implications
A comparison of the spatial and temporal complexity with a typical long short-term memory (LSTM) model and the state-of-the-art approaches was also carried out on the NASA dataset. Despite achieving effectiveness similar to that of other approaches, the authors' model has a significantly smaller number of parameters, a smaller storage volume and a lower training time.
Practical implications
The proposed approach aims to find a compromise between effectiveness and efficiency, which is crucial in the industrial domain, where it is important to maximize the relationship between performance attained and resources allocated. The overall accuracy is also on par with the best methods described in the literature.
Originality/value
The proposed approach satisfies the requirements of modern embedded AI applications (reliability, low power consumption, etc.) by finding a compromise between efficiency and effectiveness.
Details
Keywords
Ann-Zofie Duvander and Ida Viklund
Abstract
Purpose
Parental leave in Sweden can be taken both as paid and as unpaid leave, and parents often mix these forms in a very flexible way. Therefore, multiple methodological issues arise regarding how to most accurately measure leave length. The purpose of this paper is to review the somewhat complex legislation and the possible ways of using parental leave before presenting a successful attempt at a more precise way of measuring leave lengths, including paid and unpaid days, for mothers and fathers.
Design/methodology/approach
The study makes use of administrative data for a complete cohort of parents of first born children in 2009 in Sweden. The authors examine what characteristics are associated with the use of paid and unpaid leave for mothers and fathers during the first two years of the child’s life, focusing particularly on how individual and household income is associated with leave patterns.
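Measuring leave length from administrative records reduces to aggregating paid and unpaid days per parent. The minimal sketch below runs on hypothetical spell records; the record layout and the paid/unpaid flag are assumptions for illustration, not the Swedish register format.

```python
from datetime import date

# Hypothetical administrative records: one row per leave spell, with a
# flag distinguishing paid (benefit days claimed) from unpaid leave.
spells = [
    {"parent": "mother", "start": date(2009, 3, 1),  "end": date(2009, 9, 30),  "paid": True},
    {"parent": "mother", "start": date(2009, 10, 1), "end": date(2009, 12, 20), "paid": False},
    {"parent": "father", "start": date(2010, 1, 10), "end": date(2010, 3, 10),  "paid": True},
]

def leave_days(spells):
    """Total paid and unpaid leave days per parent, counting both
    endpoints of each spell (inclusive)."""
    totals = {}
    for s in spells:
        days = (s["end"] - s["start"]).days + 1
        key = (s["parent"], "paid" if s["paid"] else "unpaid")
        totals[key] = totals.get(key, 0) + days
    return totals

print(leave_days(spells))
```

A combined measure then sums the paid and unpaid totals per parent, which is the quantity the paper argues should be compared across income groups rather than paid days alone.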
Findings
Among mothers, low income is associated with many paid leave days, whereas middle income is associated with the most unpaid days; high-income mothers use shorter leaves. Among fathers, it is those at both ends of the household income distribution, both high and low, who use the most paid and unpaid leave.
Practical implications
A measure that includes unpaid parental leave is important so as not to underestimate parental leave lengths and to prevent faulty comparisons between groups by gender and by socioeconomic status.
Originality/value
A measure of parental leave including both paid and unpaid leave will also facilitate international comparisons of leave length.
Details
Keywords
Weiwei Zhu, Jinglin Wu, Ting Fu, Junhua Wang, Jie Zhang and Qiangqiang Shangguan
Abstract
Purpose
Efficient traffic incident management is needed to alleviate the negative impact of traffic incidents. Accurate and reliable estimation of traffic incident duration is of great importance for traffic incident management. Previous studies have proposed models for traffic incident duration prediction; however, most of these studies focus on the total duration and cannot update prediction results in real time. From a traveler's perspective, the relevant factor is the residual duration of the impact of the traffic incident. Besides, few (if any) studies have used dynamic traffic flow parameters in the prediction models. This paper aims to propose a framework to fill these gaps.
Design/methodology/approach
This paper proposes a framework based on the multi-layer perceptron (MLP) and long short-term memory (LSTM) models. The proposed methodology integrates traffic incident-related factors and real-time traffic flow parameters to predict the residual traffic incident duration. To validate the effectiveness of the framework, traffic incident data and traffic flow data from the Shanghai Zhonghuan Expressway are used for model training and testing.
Findings
Results show that the model with a 30-min time window, taking both traffic volume and speed as inputs, performed best. The area-under-the-curve values exceed 0.85 and the prediction accuracies exceed 0.75. These indicators demonstrate that the model is appropriate for this study context. The model provides new insights into traffic incident duration prediction.
Research limitations/implications
The incident samples used in this study may not be sufficient, and the variables are not abundant. The number of injuries and casualties, a more detailed description of the incident location and other variables could be used to characterize traffic incidents more comprehensively. The framework needs to be further validated with a sufficiently large number of variables and locations.
Practical implications
Once implemented in intelligent transport systems and traffic management systems in future practical applications, the framework can help reduce the impact of incidents on the safety and efficiency of road traffic.
Originality/value
This study uses two artificial neural network methods, MLP and LSTM, to establish a framework aiming at providing accurate and time-efficient information on traffic incident duration in the future for transportation operators and travelers. This study will contribute to the deployment of emergency management and urban traffic navigation planning.
Details
Keywords
Craig M. Reddock, Elena M. Auer and Richard N. Landers
Abstract
Purpose
Branched situational judgment tests (BSJTs) are an increasingly common employee selection method, yet there is no theory and very little empirical work explaining the designs and impacts of branching. To encourage additional research on BSJTs, and to provide practitioners with a common language to describe their current and future practices, we sought to develop a theory of BSJTs.
Design/methodology/approach
Given the absence of theory on branching, we utilized a grounded theory qualitative research design, conducting interviews with 25 BSJT practitioner subject matter experts.
Findings
Our final theory consists of three components: (1) a taxonomy of BSJT branching features (contingency, parallelism, convergence, and looping) and options within those features (which vary), (2) a causal theoretical model describing impacts of branching in general on applicant reactions via proximal effects on face validity, and (3) a causal theoretical model describing impacts on applicant reactions among branching designs via proximal effects on consistency of administration and opportunity to perform.
Originality/value
Our work provides the first theoretical foundation on which future confirmatory research in the BSJT domain can be built. It also gives both researchers and practitioners a common language for describing branching features and their options. Finally, it reveals BSJTs as the results of a complex set of interrelated design features, discouraging the oversimplified contrasting of “branching” vs “not branching.”
Details