Search results
Mark J. Holmes and Nabil Maghrebi
Abstract
Purpose
The purpose of this study is to investigate non-linearities in the behaviour of investment expenditure. Conventional wisdom suggests that Tobin's Q criterion is an important explanation of investment behaviour that bridges the financial and real sides of the economy. However, the empirical evidence in support of Q as a means of explaining aggregate business investment is rather weak. We answer a number of questions about the relationship between investment expenditure and Q. In particular, is the relationship governed by non-linearities? If so, what is the nature of the non-linearities present?
Design/methodology/approach
The rationale for paying closer attention to non-linearities is based on the presence of information asymmetries and possible dependence of adjustments on non-linearities with respect to factors such as fixed costs, threshold effects and irreversibility, which are entertained in the investment literature. Using the non-linear vector error-correction model procedure advocated by Hansen and Seo, we show that in the context of the US economy, investment has a long-run relationship with Q that is based on threshold error correction.
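The regime-dependent error correction described above can be illustrated with a minimal sketch (not the authors' estimation code): investment adjusts toward an assumed long-run relationship with Q only in one regime, and the parameter values below are hypothetical.

```python
# Illustrative two-regime threshold error-correction rule. The long-run
# relationship investment = alpha + beta * q, the threshold, and the
# adjustment speeds are all hypothetical placeholders, not estimates.

def ecm_step(investment, q, alpha=1.0, beta=0.5,
             threshold=0.0, speed_regime1=0.4, speed_regime2=0.0):
    """Return the error-correction adjustment to investment for one period.

    disequilibrium = investment - (alpha + beta * q) is the deviation
    from the assumed long-run relationship; the adjustment speed is
    regime-specific, as in a Hansen-Seo threshold VECM.
    """
    disequilibrium = investment - (alpha + beta * q)
    if disequilibrium > threshold:
        # Regime 1: e.g. high investment relative to a weak stock market
        return -speed_regime1 * disequilibrium
    # Regime 2: no significant error correction
    return -speed_regime2 * disequilibrium

# Investment above its long-run level corrects downward in regime 1,
# but shows no correction in regime 2.
print(ecm_step(investment=3.0, q=2.0))
print(ecm_step(investment=1.5, q=2.0))
```

The point of the sketch is the asymmetry: the same disequilibrium term enters both regimes, but only one regime carries a non-zero adjustment speed.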
Findings
There are asymmetries present with respect to error correction or the speed of adjustment towards long-run equilibrium. We find that investment expenditure only responds significantly to long-run disequilibrium from Q during a particular regime. Such a regime is characterised by long-run disequilibrium based on high or rising investment expenditure compared with a relatively weak stock market.
Originality/value
The authors provide new insights into the relationship between Tobin's Q and real investment. In contrast to previous work, they find that error correction based on the adjustment of real investment is regime-specific and a function of the size of departures from long-run equilibrium. The tests also allow for the identification of periods when error correction has occurred. Not only are these insights significant for future research on financial crises, market volatility and the impact of debt, but for policymaking purposes as well.
Guellil Imane, Darwish Kareem and Azouaou Faical
Abstract
Purpose
This paper aims to propose an approach to automatically annotate a large corpus in Arabic dialect. This corpus is used to analyse the sentiments of Arabic users on social media. It focuses on the Algerian dialect, which is a sub-dialect of Maghrebi Arabic. Although Algerian is spoken by roughly 40 million speakers, few studies address its automated processing in general and sentiment analysis in particular.
Design/methodology/approach
The approach is based on the construction and use of a sentiment lexicon to automatically annotate a large corpus of Algerian text extracted from Facebook. This approach allows the size of the training corpus to be increased significantly without resorting to manual annotation. The annotated corpus is then vectorized using document embedding (doc2vec), which is an extension of word embeddings (word2vec). For sentiment classification, the authors used different classifiers such as support vector machines (SVM), Naive Bayes (NB) and logistic regression (LR).
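The lexicon-based annotation step can be sketched as follows. This is a hedged illustration with a toy English lexicon and a simple mean-score rule, both my assumptions; the paper's Algerian-dialect lexicon and exact scoring scheme are not reproduced here.

```python
# Hypothetical mini-lexicon: word -> polarity score.
LEXICON = {"good": 1.0, "great": 1.0, "bad": -1.0, "awful": -1.0}

def annotate(message, threshold=0.6):
    """Automatically label a message 'pos'/'neg' when the mean lexicon
    score of its matched tokens is decisive, else return None so the
    message is excluded from the training corpus. The 0.6 default
    mirrors the threshold value the authors report works best."""
    tokens = message.lower().split()
    scores = [LEXICON[t] for t in tokens if t in LEXICON]
    if not scores:
        return None  # no lexicon coverage: cannot auto-annotate
    mean = sum(scores) / len(scores)
    if mean >= threshold:
        return "pos"
    if mean <= -threshold:
        return "neg"
    return None  # mixed / weak sentiment: discard

print(annotate("this phone is great"))   # decisive positive
print(annotate("good but also awful"))   # mixed, discarded
```

Raising the threshold trades corpus size for label precision, which is why the choice of threshold noticeably affects recall and precision downstream.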
Findings
The results suggest that the NB and SVM classifiers generally led to the best results and the multilayer perceptron (MLP) generally had the worst results. Further, the threshold that the authors use in selecting messages for the training set had a noticeable impact on recall and precision, with a threshold of 0.6 producing the best results. Using PV-DBOW led to slightly higher results than using PV-DM. Combining PV-DBOW and PV-DM representations led to slightly lower results than using PV-DBOW alone. The best results were obtained by the NB classifier with F1 up to 86.9 per cent.
Originality/value
The principal originality of this paper is to determine the right parameters for automatically annotating an Algerian dialect corpus. This annotation is based on a sentiment lexicon that was also constructed automatically.
Abstract
Purpose
This paper proposes a multi-facet sentiment analysis system.
Design/methodology/approach
This paper uses multidomain resources to build a sentiment analysis system. Features extracted from a manually built lexicon are fed into a machine learning classifier so that their performance can be compared. Because constructing the manual lexicon is time-consuming, it is replaced with a custom bag-of-words (BOW) representation. To make the system run faster and the model interpretable, the feature set is reduced using existing and custom approaches such as term occurrence, information gain, principal component analysis, semantic clustering and part-of-speech (POS) tagging filters.
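One of the filtering steps named above, information gain, can be sketched on a toy bag-of-words corpus. This uses the standard information-gain formulation for a binary "term present" feature; it is not code from the paper, and the example documents are invented.

```python
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def information_gain(docs, labels, term):
    """Information gain of the binary feature 'term appears in doc'
    with respect to the class labels."""
    with_term = [l for d, l in zip(docs, labels) if term in d]
    without = [l for d, l in zip(docs, labels) if term not in d]
    n = len(labels)
    remainder = sum(len(part) / n * entropy(part)
                    for part in (with_term, without) if part)
    return entropy(labels) - remainder

# Toy corpus: documents as token sets, with sentiment labels.
docs = [{"great", "phone"}, {"awful", "phone"},
        {"great", "screen"}, {"awful", "screen"}]
labels = ["pos", "neg", "pos", "neg"]

# 'great' perfectly separates the classes; 'phone' carries no information.
print(information_gain(docs, labels, "great"))  # 1.0
print(information_gain(docs, labels, "phone"))  # 0.0
```

Ranking terms by this score and keeping only the top-scoring ones is what shrinks the BOW feature space, which is what makes the resulting model both faster and easier to interpret.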
Findings
The proposed system, which features automated lexicon extraction and feature-set size optimization, proved its efficiency when applied to multidomain and benchmark datasets, reaching 93.59 per cent accuracy, which makes it competitive with state-of-the-art systems.
Originality/value
The originality of this work lies in the construction of a custom BOW representation and in the optimization of features based on existing and custom feature selection and clustering approaches.