Search results
1 – 10 of over 10,000

Yousra Trichilli, Mouna Boujelbène Abbes and Sabrine Zouari
Abstract
Purpose
This paper examines the impact of political instability on the investors' behavior, measured by Google search queries, and on the dynamics of stock market returns.
Design/methodology/approach
First, by using the DCC-GARCH model, the authors examine the effect of investor sentiment on the Tunisian stock market return. Second, the authors employ the fully modified ordinary least squares (FMOLS) method to estimate the long-term relationship between investor sentiment and the Tunisian stock market return. Finally, the authors use the wavelet coherence model to test the co-movement between investor sentiment, measured by Google Trends, and the Tunisian stock market return.
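As a rough illustration of the dynamic-correlation idea behind the DCC analysis, the sketch below computes a rolling correlation between a synthetic search-interest series and synthetic returns. This is a simplified stand-in, not the authors' DCC-GARCH estimation; all series, names and the window length are illustrative.

```python
# Simplified stand-in for the dynamic conditional correlation idea:
# a rolling-window correlation between search interest and returns.
import math
import random

def pearson(x, y):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def rolling_correlation(svi, returns, window=12):
    """Correlation of search volume vs returns over a moving window."""
    out = []
    for t in range(window, len(svi) + 1):
        out.append(pearson(svi[t - window:t], returns[t - window:t]))
    return out

random.seed(0)
svi = [random.gauss(50, 10) for _ in range(60)]          # synthetic search index
rets = [0.05 * s + random.gauss(0, 1) for s in svi]       # returns co-move with SVI
corrs = rolling_correlation(svi, rets, window=12)
print(len(corrs))
```

A full DCC-GARCH estimation would additionally model time-varying volatilities; the rolling correlation only conveys the time-varying-dependence intuition.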
Findings
Using the dynamic conditional correlation (DCC) model, the authors find that the Google search queries index is able to reflect political events, especially the Tunisian revolution. In addition, empirical results of the fully modified ordinary least squares (FMOLS) method reveal that the Google search queries index has a slightly stronger effect on Tunindex returns after the Tunisian revolution than before it. Furthermore, the wavelet coherence model shows strong co-movement between the Google search queries index and the return index during the politically unstable period of the Tunisian revolution. In the frequency domain, strong coherence is found at horizons below four months and at 16–32 months during the Tunisian revolution, which shows that the Google search queries measure was leading Tunindex returns. The wavelet coherence analysis thus confirms the DCC result that the Google search queries index can detect the behavior of Tunisian investors, especially during periods of political instability.
Research limitations/implications
This study provides empirical evidence that portfolio managers may use the Google search queries index as a robust measure of investor sentiment to select suitable investments and make optimal investment decisions.
Originality/value
The important research question of how political instability affects stock market dynamics has been largely neglected by scholars. This paper attempts to fill this void by investigating the time-varying interactions between market returns, volatility and a Google-search-based index, especially during the Tunisian revolution.
Abstract
Purpose
Recent research has found significant relationships between internet search volume and real estate markets. This paper aims to examine whether Google search volume data can serve as a leading sentiment indicator and are able to predict turning points in the US housing market. One of the main objectives is to find a model based on internet search interest that generates reliable real-time forecasts.
Design/methodology/approach
Starting from seven individual real-estate-related Google search volume indices, a multivariate probit model is derived by following a selection procedure. The best model is then tested for its in- and out-of-sample forecasting ability.
Findings
The results show that the model predicts the direction of monthly price changes correctly, with over 89 per cent in-sample and just above 88 per cent in one to four-month out-of-sample forecasts. The out-of-sample tests demonstrate that although the Google model is not always accurate in terms of timing, the signals are always correct when it comes to foreseeing an upcoming turning point. Thus, as signals are generated up to six months early, it functions as a satisfactory and timely indicator of future house price changes.
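Direction-of-change forecasts of this kind are typically scored by their hit rate, the share of periods in which the predicted sign matches the realized one. A minimal sketch with made-up data, not the paper's series:

```python
# Score a binary (up/down) forecast by its hit rate. +1 = price rose, -1 = fell.
# The two series below are invented for illustration.
def hit_rate(predicted, actual):
    """Share of periods where the predicted direction matches the actual one."""
    hits = sum(1 for p, a in zip(predicted, actual) if p == a)
    return hits / len(actual)

actual = [+1, +1, -1, -1, +1, -1, +1, +1]
predicted = [+1, +1, -1, +1, +1, -1, +1, -1]
print(hit_rate(predicted, actual))  # 0.75: six of eight directions match
```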
Practical implications
The results suggest that Google data can serve as an early market indicator and that the application of this data set in binary forecasting models can produce useful predictions of changes in upward and downward movements of US house prices, as measured by the Case–Shiller 20-City House Price Index. This implies that real estate forecasters, economists and policymakers should consider incorporating this free and very current data set into their market forecasts or when performing plausibility checks for future investment decisions.
Originality/value
This is the first paper to apply Google search query data as a sentiment indicator in binary forecasting models to predict turning points in the housing market.
Marian Alexander Dietzel, Nicole Braun and Wolfgang Schäfers
Abstract
Purpose
The purpose of this paper is to examine internet search query data provided by “Google Trends”, with respect to its ability to serve as a sentiment indicator and improve commercial real estate forecasting models for transactions and price indices.
Design/methodology/approach
Internet search query data provided by “Google Trends” are combined with macroeconomic variables to build augmented forecasting models for commercial real estate transactions and price indices, which are then compared against baseline models without search data.
Findings
The empirical results show that all models augmented with Google data, combining both macro and search data, significantly outperform baseline models that omit internet search data. Models based on Google data alone outperform the baseline models in all cases. The augmented models reduce the mean squared forecasting error for transactions and prices by up to 35 and 54 per cent, respectively, relative to the baseline models.
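The reductions quoted are relative decreases in mean squared forecasting error (MSE) versus the baseline. A minimal sketch of that comparison, using synthetic forecast errors rather than the paper's:

```python
# Relative MSE reduction of an augmented model over a baseline model.
# The error series are invented for illustration.
def mse(errors):
    """Mean squared error from a sequence of forecast errors."""
    return sum(e * e for e in errors) / len(errors)

baseline_errors = [2.0, -1.5, 1.0, -2.5]   # baseline forecast errors (invented)
google_errors = [1.0, -0.5, 1.5, -1.0]     # Google-augmented errors (invented)

reduction = 1 - mse(google_errors) / mse(baseline_errors)
print(f"MSE reduction: {reduction:.0%}")
```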
Practical implications
The results suggest that Google data can serve as an early market indicator. The findings of this study suggest that the inclusion of Google search data in forecasting models can improve forecast accuracy significantly. This implies that commercial real estate forecasters should consider incorporating this free and timely data set into their market forecasts or when performing plausibility checks for future investment decisions.
Originality/value
This is the first paper applying Google search query data to the commercial real estate sector.
A. Hossein Farajpahlou and Faeze Tabatabai
Abstract
Purpose
The aim of this paper is to examine the indexing quality and ranking of XML content objects containing Dublin Core and MARC 21 metadata elements in dynamic online information environments by general search engines such as Google and Yahoo!
Design/methodology/approach
In total, 100 XML content objects were divided into two groups: those with DCXML elements and those with MARCXML elements. Both groups were published on the web site www.marcdcmi.ir in late July 2009 and remained online until June 2010. The web site was submitted to the Google and Yahoo! search engines, and the indexing quality, indexing capability and ranking of the metadata elements embedded in the content objects in a dynamic online information environment were then compared and examined.
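For illustration only (the record below is invented, not one of the study's objects), a content object carrying Dublin Core elements in XML can be built as follows:

```python
# Build a small XML record with Dublin Core (DCXML) elements using the
# standard DCMI element namespace. The record content is made up.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

record = ET.Element("record")
for name, value in [("title", "Sample object"),
                    ("creator", "Example Author"),
                    ("date", "2009-07")]:
    el = ET.SubElement(record, f"{{{DC}}}{name}")  # namespace-qualified element
    el.text = value

xml_text = ET.tostring(record, encoding="unicode")
print(xml_text)
```

A MARCXML object would follow the same pattern with the MARC 21 XML schema's `record`/`datafield`/`subfield` elements instead.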
Findings
The Google search engine was able to retrieve all the content objects in full through their Dublin Core and MARC 21 metadata elements; the Yahoo! search engine, however, did not respond at all. The results showed that all Dublin Core and MARC 21 metadata elements were indexed by Google. No difference was observed between the indexing quality and ranking of DCXML metadata elements and those of MARCXML. The results also revealed that neither the XML-based Dublin Core Metadata Initiative nor MARC 21 shows any advantage regarding access in dynamic online information environments through the Google search engine.
Practical implications
The findings can provide useful information for search engine designers.
Originality/value
The present study was conducted for the first time in dynamic environments using XML‐based metadata elements. It can provide grounds for further studies of this kind.
Ying Liu, Geng Peng, Lanyi Hu, Jichang Dong and Qingqing Zhang
Abstract
Purpose
With the ascendance of information technology, particularly through the internet, external information sources and their impacts can be readily transferred to influence the performance of financial markets within a short period of time. The purpose of this paper is to investigate how incidents affect stock prices and volatility using vector error correction and autoregressive-generalized autoregressive conditional heteroskedasticity (AR-GARCH) models, respectively.
Design/methodology/approach
To characterize the investors’ responses to incidents, the authors introduce indices derived using search volumes from Google Trends and the Baidu Index.
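Google Trends reports search interest on a 0–100 scale relative to the peak within the query window. A minimal sketch of that normalization, with made-up volumes:

```python
# Rescale raw search volumes to a 0-100 attention index, peak = 100,
# in the spirit of Google Trends normalization. Volumes are invented.
def scale_to_100(volumes):
    """Rescale raw volumes so the window's peak equals 100."""
    peak = max(volumes)
    return [round(100 * v / peak) for v in volumes]

raw_weekly_volumes = [120, 300, 150, 600, 90]
index = scale_to_100(raw_weekly_volumes)
print(index)  # [20, 50, 25, 100, 15]
```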
Findings
The empirical results indicate that an outbreak of disasters can increase volatility temporarily and exert significant negative effects on stock prices over a relatively long period. In addition, indices derived from different search engines show differentiation, with the Google Trends search index, which mainly represents international investors, appearing more significant and persistent.
Originality/value
This study contributes to the existing literature by incorporating open-source data to analyze how catastrophic events affect financial markets and how persistent those effects are.
Philipp Mayr and Anne‐Kathrin Walter
Abstract
Purpose
The purpose of this paper is to discuss the new scientific search service Google Scholar (GS), which is intended exclusively for searching scholarly documents, and to empirically test its most important functionality. The focus is on an exploratory study investigating the coverage of scientific serials in GS.
Design/methodology/approach
The study is based on queries against different journal lists: international scientific journals from Thomson Scientific (SCI, SSCI, AH), open access journals from the DOAJ list and journals from the German social sciences literature database SOLIS as well as the analysis of result data from GS. All data gathering took place in August 2006.
Findings
The study shows deficiencies in the coverage and up-to-dateness of the GS index. Furthermore, it points out which web servers are the most important data providers for this search service and which information sources are highly represented. The paper shows that there is a relatively large gap in Google Scholar's coverage of German literature, as well as weaknesses in the accessibility of open access content. Major commercial academic publishers are currently the main data providers.
Research limitations/implications
Five different journal lists were analysed, including approximately 9,500 single titles. The lists are from different fields and of various sizes. This limits comparability. There were also some problems matching the journal titles of the original lists to the journal title data provided by Google Scholar. The study was only able to analyse the top 100 Google Scholar hits per journal.
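Matching journal titles across lists of this kind is commonly done with fuzzy string comparison. A minimal sketch of one such approach using Python's difflib; the titles and cutoff are illustrative, not the authors' actual procedure:

```python
# Pair a journal title from a source list with the closest title returned
# by a search service, tolerating abbreviation and re-punctuation.
import difflib

def best_match(title, candidates, cutoff=0.8):
    """Return the closest candidate above a similarity cutoff, or None."""
    matches = difflib.get_close_matches(title, candidates, n=1, cutoff=cutoff)
    return matches[0] if matches else None

candidates = ["Journal of Documentation",
              "Online Information Review",
              "J. of Documentation"]
print(best_match("Journal of Documentation", candidates))
```

In practice a title that matches no candidate above the cutoff would be flagged for manual review, which is where matching problems like those reported tend to surface.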
Practical implications
The paper concludes that Google Scholar has some interesting pros (such as citation analysis and free materials) but the service cannot be seen as a substitute for the use of special abstracting and indexing databases and library catalogues due to various weaknesses (such as transparency, coverage and up‐to‐dateness).
Originality/value
The authors do not know of any other study using such a brute force approach and such a large empirical basis. The study can be considered “brute force” in the sense that it gathered a large amount of data from Google and then analysed it in a macroscopic way.
Karim Rochdi and Marian Dietzel
Abstract
Purpose
The purpose of this paper is to investigate whether there is a relationship between asset-specific online search interest and movements in the US REIT market.
Design/methodology/approach
The authors collect search volume (SV) data from “Google Trends” for a set of keywords representing the information demand of real estate (equity) investors. On this basis, the authors test hypothetical investment strategies based on changes in internet SV, to anticipate REIT market movements.
Findings
The results reveal that people’s information demand can indeed serve as a successful predictor for the US REIT market. Among other findings, evidence is provided that there is a significant relationship between asset-specific keywords and the US REIT market. Specifically, investment strategies based on weekly changes in Google SV would have outperformed a buy-and-hold strategy (0.1 percent p.a.) for the Morgan Stanley Capital International US REIT Index by a remarkable 15.4 percent p.a. between 2006 and 2013. Furthermore, the authors find that real-estate-related terms are more suitable than rather general, finance-related terms for predicting REIT market movements.
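A highly simplified sketch of the kind of rule described: invest in a week only if search volume (SV) rose the week before, then compare cumulative growth against buy-and-hold. All series below are synthetic; this is not the authors' backtest.

```python
# Toy backtest: hold the index in week t only if SV rose from week t-2 to t-1.
# The SV and return series are invented for illustration.
def strategy_growth(sv, returns):
    """Cumulative growth when invested only in weeks following an SV rise."""
    growth = 1.0
    for t in range(2, len(returns)):
        if sv[t - 1] > sv[t - 2]:          # search volume rose last week
            growth *= 1 + returns[t]
    return growth

def buy_and_hold_growth(returns):
    """Cumulative growth of staying fully invested throughout."""
    growth = 1.0
    for r in returns:
        growth *= 1 + r
    return growth

sv = [10, 12, 11, 13, 15, 14, 16]
returns = [0.00, 0.01, 0.02, -0.03, 0.02, 0.01, -0.01]
print(strategy_growth(sv, returns), buy_and_hold_growth(returns))
```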
Practical implications
The findings should be of particular interest for REIT market investors, as the established relationships can potentially be utilized to anticipate short-term REIT market movements.
Originality/value
This is the first paper which applies Google search query data to the REIT market.
Abstract
Purpose
The purpose of this paper is to use data mined from Google Trends, in order to predict the unemployment rate prevailing among Canadians between 25 and 44 years of age.
Design/methodology/approach
Based on a theoretical framework, this study argues that the intensity of online leisure activities is likely to improve the predictive power of unemployment forecasting models.
Findings
Mining the corresponding data from Google Trends, the analysis indicates that prediction models including variables which reflect online leisure activities outperform those solely based on the intensity of online job search. The paper also outlines the most propitious ways of mining data from Google Trends. The implications for research and policy are discussed.
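Model comparisons of this kind come down to scoring competing forecasts against realized values. A minimal sketch using mean absolute error and invented numbers, not the paper's models or data:

```python
# Compare an unemployment forecast with and without a leisure-activity
# search variable by mean absolute error (MAE). All numbers are invented.
def mae(actual, forecast):
    """Mean absolute forecast error."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

actual = [6.1, 6.3, 6.0, 5.8]            # realized unemployment rates
job_search_only = [6.4, 6.0, 6.3, 6.1]   # baseline model forecasts
with_leisure = [6.2, 6.2, 6.1, 5.9]      # leisure-augmented model forecasts

print(mae(actual, job_search_only), mae(actual, with_leisure))
```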
Originality/value
This paper, for the first time, augments the forecasting models with data on the intensity of online leisure activities, in order to predict the Canadian unemployment rate.
Sayyed Mahdi Taheri and Nadjla Hariri
Abstract
Purpose
The purpose of this research was to assess and compare the indexing and ranking of XML‐based content objects containing MARCXML and XML‐based Dublin Core (DCXML) metadata elements by general search engines (Google and Yahoo!), in a comparative analytical study.
Design/methodology/approach
One hundred XML content objects in two groups were analyzed: those with MARCXML elements (50 records) and those with DCXML elements (50 records), published on two web sites (www.dcmixml.islamicdoc.org and www.marcxml.islamicdoc.org). The web sites were then introduced to the Google and Yahoo! search engines.
Findings
The indexing of the metadata records and the differences in their indexing and ranking were examined using descriptive statistics and a non-parametric Mann-Whitney U test. The findings show that the content objects were visible through all their metadata elements. There was no significant difference between the two groups' indexing, but a difference was observed in terms of ranking.
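The Mann-Whitney U statistic used here compares the rank distributions of two independent samples. A minimal stdlib sketch with illustrative data:

```python
# Mann-Whitney U statistic for two independent samples, assigning average
# ranks to ties. The sample values below are invented for illustration.
def mann_whitney_u(a, b):
    """Return the U statistic for sample a versus sample b."""
    values = sorted(a + b)
    ranks = {}
    i = 0
    while i < len(values):
        j = i
        while j < len(values) and values[j] == values[i]:
            j += 1
        ranks[values[i]] = (i + 1 + j) / 2  # average of 1-based ranks i+1 .. j
        i = j
    rank_sum_a = sum(ranks[v] for v in a)
    n1 = len(a)
    return rank_sum_a - n1 * (n1 + 1) / 2

group_a = [3, 4, 2, 6]
group_b = [9, 7, 5, 8]
print(mann_whitney_u(group_a, group_b))
```

The statistic is then referred to its null distribution (or a normal approximation) to judge significance, as the study does.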
Practical implications
The findings of this research can help search engine designers in the optimum use of metadata elements to improve their indexing and ranking process with the aim of increasing availability. The findings can also help web content object providers in the proper and efficient use of metadata systems.
Originality/value
This is the first research to examine the interoperability between XML‐based metadata and web search engines, and compares the MARC format and DCMI in a research approach.
Violina P. Rindova, Luis L. Martins and Adrian Yeow
Abstract
Strategic management research has shown growing interest in understanding the dynamic resource reconfiguration processes through which firms grow, evolve, and sustain profitability. The goal of our study is to understand how dynamic resource reconfigurations enable firms to pursue growth opportunities. We use the methods of inductive theory building from case studies to elaborate current theoretical understanding about how firms draw on both internal and external resources in the pursuit of growth. We examine the patterns of resource reconfigurations through which Yahoo and Google powered their early growth strategies in their first 10 years of existence. We analyze a total of 192 new product launches in 43 markets by the two firms to capture how they reconfigured resources dynamically. Our analysis reveals that both firms developed highly dynamic strategies exhibiting both surprising similarities and differences. These similarities and differences provided the basis for our theoretical insights about the development of what we term “dynamic resource platforms,” comprising (a) dynamic resource shifts; (b) targeted resource orchestration; and (c) complementary processes balancing dynamism and capability development. These ideas contribute novel theoretical insights to current strategic management research on dynamic capabilities and on resource reconfiguration and redeployment.