Search results
1–10 of over 16,000
Dinda Thalia Andariesta and Meditya Wasesa
Abstract
Purpose
This research presents machine learning models for predicting international tourist arrivals in Indonesia during the COVID-19 pandemic using multisource Internet data.
Design/methodology/approach
To develop the prediction models, this research utilizes multisource Internet data from the TripAdvisor travel forum and Google Trends. Temporal factors, posts and comments, a search query index and previous tourist arrival records are set as predictors. Four sets of predictors and three distinct data compositions were used to train the machine learning models, namely artificial neural networks (ANNs), support vector regression (SVR) and random forest (RF). To evaluate the models, this research uses three accuracy metrics, namely root mean square error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE).
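The three evaluation metrics named above are standard and can be sketched directly; the illustrative arrays below are made up for demonstration, not the paper's tourist-arrival data:

```python
import numpy as np

def rmse(actual, pred):
    """Root mean square error."""
    return float(np.sqrt(np.mean((actual - pred) ** 2)))

def mae(actual, pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(actual - pred)))

def mape(actual, pred):
    """Mean absolute percentage error (actuals must be non-zero)."""
    return float(np.mean(np.abs((actual - pred) / actual)) * 100)

# Hypothetical monthly arrival counts (thousands) and model predictions.
actual = np.array([120.0, 80.0, 100.0])
pred = np.array([110.0, 90.0, 100.0])
print(rmse(actual, pred), mae(actual, pred), mape(actual, pred))
```

RMSE penalizes large errors more heavily than MAE, while MAPE expresses error relative to the actual level, which is why abstracts in this area typically report all three.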
Findings
Prediction models trained using multisource Internet data predictors have better accuracy than those trained using single-source Internet data or other predictors. In addition, using more training sets that cover the phenomenon of interest, such as COVID-19, will enhance the prediction model's learning process and accuracy. The experiments show that the RF models have better prediction accuracy than the ANN and SVR models.
Originality/value
First, this study pioneers the practice of a multisource Internet data approach in predicting tourist arrivals amid the unprecedented COVID-19 pandemic. Second, the use of multisource Internet data to improve prediction performance is validated with real empirical data. Finally, this is one of the few papers to provide perspectives on the current dynamics of Indonesia's tourism demand.
Abstract
Purpose
The purpose of this paper is to analyze the effect of investor sentiment, measured with Google internet search data, on volatility forecasts of the US REIT market.
Design/methodology/approach
The author uses the S&P US REIT index and collects search volume data from Google Trends for all US REITs. Two different Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models are then estimated, namely, the baseline model and the Google-augmented model. Using these models, one-step-ahead forecasts are conducted and the forecast accuracies of both models are subsequently compared.
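A GARCH(1,1) variance recursion, with an optional exogenous search-volume term standing in for the "Google-augmented" idea, can be sketched as follows. The paper's exact specification is not given in the abstract, so the `gamma * exog` term and all parameter values here are assumptions for illustration:

```python
import numpy as np

def garch11_variances(returns, omega, alpha, beta, gamma=0.0, exog=None):
    """Filter conditional variances under a GARCH(1,1) recursion:
        sigma2[t+1] = omega + alpha*r[t]**2 + beta*sigma2[t] (+ gamma*exog[t])
    The last element is the one-step-ahead variance forecast."""
    n = len(returns)
    g = np.zeros(n) if exog is None else np.asarray(exog, dtype=float)
    sigma2 = np.empty(n + 1)
    sigma2[0] = np.var(returns)  # a common initialization choice
    for t in range(n):
        sigma2[t + 1] = (omega + alpha * returns[t] ** 2
                         + beta * sigma2[t] + gamma * g[t])
    return sigma2

# Baseline vs. augmented one-step-ahead variance forecast (toy numbers).
r = np.array([1.0, 2.0])
base = garch11_variances(r, 0.1, 0.2, 0.3)
aug = garch11_variances(r, 0.1, 0.2, 0.3, gamma=0.5, exog=[1.0, 1.0])
print(base[-1], aug[-1])
```

In practice the parameters would be estimated by maximum likelihood (e.g. with the `arch` package) rather than fixed by hand as here.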
Findings
The empirical results reveal that search volume data can be used to predict volatility on the REIT market. Especially in periods of high volatility, Google augmented models outperform the baseline model.
Practical implications
The results imply that Google data can be used on the REIT market as a market indicator. Investors could use Google as an early warning system, especially in periods of high volatility.
Originality/value
This is the first paper to use Google search query data for volatility forecasts of the REIT market.
Abstract
Purpose
The purpose of this paper is to study the impact of transparency on the political budget cycle (PBC) over time and across countries. So far, the literature on electoral cycles finds evidence that cycles depend on an economy's stage of development. However, the author shows, for the first time, that the budget cycle depends on transparency. The author uses a new data set consisting of 99 developing and 34 Organization for Economic Cooperation and Development (OECD) countries. First, the author develops a model and demonstrates that transparency mitigates political cycles. Second, the author confirms this proposition through an econometric assessment. Using time series data from 1970 to 2014, the author finds smaller cycles in countries with higher transparency, especially G8 countries.
Design/methodology/approach
A mathematical model and a corresponding econometric test of its propositions.
Findings
First, the author shows in the theoretical model that higher transparency mitigates the PBC. Second, the author confirms the theoretical proposition through the econometric model: countries with higher transparency have smaller budget cycles. Technically, the hypothesis that budget cycles differ with transparency cannot be rejected.
Research limitations/implications
As explained in the paper, one issue is data limitations with respect to the transparency measures: Google data are only available since 2004, and broadband-subscription data are only available at annual frequency. Both limitations can be tackled in the future, so the findings serve as first evidence and a benchmark for future studies.
Practical implications
First, higher public transparency implies smaller budget cycles, which ultimately enhances the stability of economic and fiscal policy. Second, policy-makers have to consider the impact of higher transparency on future election pledges: in a more transparent world, voters can easily check whether previous election pledges were kept.
Social implications
Transparency helps to improve democracy and thus enhances political credibility, because it allows voters to check whether elected policy-makers keep their commitments.
Originality/value
First, the author shows, for the first time, that the budget cycle depends on transparency. Second, the author is the first to build a theoretical model that extends the existing literature with respect to transparency and the size of the budget cycle. Third, the author is the first in this literature to use new internet-based data such as broadband subscriptions and Google search data. Fourth, the author empirically confirms the new hypothesis using these new data sources.
Mark A. Harris and Amita G. Chin
Abstract
Purpose
This paper aims to investigate Google’s top developers’ apps with trust badges to see if they warrant an additional level of trust and confidence from consumers, as stated by Google.
Design/methodology/approach
Risky app permissions and in-app purchases (IAP) from Google’s top developers and traditional developers were investigated in several Google Play top app categories, including Editor’s Choice apps. Analysis was performed between categories and developer types.
Findings
Overall, Google’s top developers’ apps request more risky permissions and IAP than do traditional developers. Other results indicate that free apps are more dangerous than paid apps and star ratings do not signify safe apps.
Research limitations/implications
Because of a limited number of Google’s top developers and Editor’s Choice apps, conclusions are drawn from a small sample of apps and not the entire market.
Practical implications
Google’s top developers’ apps are suited well for increasing revenue for Google and developers at the consumer’s expense. Consumers should be wary of top developer trust badges.
Social implications
As the lure of "top free" and "top developer" software is strong among consumers, this research contributes to societal welfare by making consumers aware that Google top developer apps and free apps are more dangerous than traditional developer and paid apps, as they request risky permissions at a much higher frequency. Consumers should therefore be very careful when downloading apps advertised as "top free" or "top developer".
Originality/value
Google’s top developers’ apps and Editor’s Choice apps have not previously been investigated from the perspective of permissions and IAP.
Abstract
Purpose
This paper aims to investigate the determinants of global interest in central bank digital currency (CBDC). It assessed whether global interest in sustainable development and cryptocurrency are determinants of global interest in CBDC.
Design/methodology/approach
Google Trends data were analyzed using two-stage least square regression estimation.
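Two-stage least squares itself is mechanical; a minimal numpy sketch follows. The variable names are illustrative, not the paper's actual regressors or instruments:

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: (1) regress the possibly endogenous regressors X on the
    instruments Z; (2) regress y on the first-stage fitted values."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]  # first stage
    beta = np.linalg.lstsq(X_hat, y, rcond=None)[0]   # second stage
    return beta

# Sanity check: with exogenous regressors (Z = X), 2SLS reduces to OLS.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0])
print(two_stage_least_squares(y, X, X))
```

In applied work one would typically use a packaged estimator (e.g. `linearmodels.IV2SLS`) to obtain proper standard errors; the sketch shows only the point-estimate mechanics.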
Findings
Global interest in sustainable development has a significant positive relationship with global interest in CBDC. Global interest in cryptocurrency has a significant positive relationship with global interest in the Nigerian eNaira CBDC. Global interest in CBDC has a significant negative relationship with global interest in the eNaira CBDC, but a significant positive relationship with global interest in the Chinese eCNY. Finally, global interest in cryptocurrency has a significant negative relationship with global interest in the Sand Dollar and DCash.
Originality/value
The literature has not empirically examined whether global interest in sustainable development and cryptocurrency motivates global interest in CBDC. This study fills that gap.
Nhung Thi Nguyen, An Tuan Nguyen and Dinh Trung Nguyen
Abstract
Purpose
This paper aims to examine the effects of investor sentiment on the development of the real estate corporate bond market in Vietnam.
Design/methodology/approach
The research uses an autoregressive distributed lag (ARDL) model with quarterly data. Additionally, the study employs Google Trends search data (GVSI) related to topics such as “Real Estate” and “Corporate Bond” to construct a sentiment index.
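The abstract does not spell out how the GVSI series are combined into a sentiment index; one common construction, shown here purely as an assumption, is to z-score each topic's search-volume series and average across topics:

```python
import numpy as np

def search_sentiment_index(series_list):
    """Combine several Google Trends search-volume series (e.g. for the
    topics "Real Estate" and "Corporate Bond") into a single index by
    z-scoring each series and averaging across topics."""
    z = [(s - s.mean()) / s.std() for s in series_list]
    return np.mean(z, axis=0)

# Toy quarterly search volumes for two topics (illustrative values).
real_estate = np.array([1.0, 2.0, 3.0])
corp_bond = np.array([2.0, 4.0, 6.0])
print(search_sentiment_index([real_estate, corp_bond]))
```

Z-scoring puts series with different search-volume scales on a common footing before averaging, which is why it is a frequent default in this literature.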
Findings
The empirical outcomes reveal that real estate market sentiment improves the growth of the real estate corporate bond market, while stock market sentiment reduces it. Also, there is evidence of a long-run negative effect of corporate bond market sentiment on the total value of real estate bond issuance. Further empirical research evidences the short-term effect of sentiment and economic factors on corporate bond development in the real estate industry.
Research limitations/implications
Due to difficulties in collecting data, this paper is limited to a sample of 54 valid quarterly observations. Moreover, the sentiment index based on Google search volume data only reflects investors' level of interest, not their attitudes.
Practical implications
These results yield important implications for policymakers in respect of strengthening the corporate bond market platform and maintaining stability in macroeconomic and monetary policies in order to promote efficient and sustainable market development.
Social implications
The study offers some suggestions for regulators and governments to improve the real estate corporate bond market.
Originality/value
This is the first quantitative study to examine the effect of sentiment factors on real estate corporate bond development in Vietnam.
Marian Alexander Dietzel, Nicole Braun and Wolfgang Schäfers
Abstract
Purpose
The purpose of this paper is to examine internet search query data provided by “Google Trends”, with respect to its ability to serve as a sentiment indicator and improve commercial real estate forecasting models for transactions and price indices.
Design/methodology/approach
Internet search query data from “Google Trends” are incorporated into forecasting models for commercial real estate transactions and price indices, and the augmented models are benchmarked against baseline models without search data.
Findings
The empirical results show that all models augmented with Google data, combining both macro and search data, significantly outperform baseline models that omit internet search data. Models based on Google data alone outperform the baseline models in all cases. The augmented models reduce the mean squared forecasting error for transactions and prices by up to 35 and 54 per cent, respectively, relative to the baseline models.
Practical implications
The results suggest that Google data can serve as an early market indicator. The findings of this study suggest that the inclusion of Google search data in forecasting models can improve forecast accuracy significantly. This implies that commercial real estate forecasters should consider incorporating this free and timely data set into their market forecasts or when performing plausibility checks for future investment decisions.
Originality/value
This is the first paper applying Google search query data to the commercial real estate sector.
Dylan A. Cooper, Taylan Yalcin, Cristina Nistor, Matthew Macrini and Ekin Pehlivan
Abstract
Purpose
Privacy considerations have become a topic with increasing interest from academics, industry leaders and regulators. In response to consumers’ privacy concerns, Google announced in 2020 that Chrome would stop supporting third-party cookies in the near future. At the same time, advertising technology companies are developing alternative solutions for online targeting and consumer privacy controls. This paper aims to explore privacy considerations related to online tracking and targeting methods used for programmatic advertising (i.e. third-party cookies, Privacy Sandbox, Unified ID 2.0) for a variety of stakeholders: consumers, AdTech platforms, advertisers and publishers.
Design/methodology/approach
This study analyzes the topic of internet user privacy concerns, through a multi-pronged approach: industry conversations to collect information, a comprehensive review of trade publications and extensive empirical analysis. This study uses two methods to collect data on consumer preferences for privacy controls: a survey of a representative sample of US consumers and field data from conversations on web-forums created by tech professionals.
Findings
The results suggest that there are four main segments in the US internet user population. The first segment, consisting of 26% of internet users, is driven by a strong preference for relevant ads and includes consumers who accept the premises of both Privacy Sandbox and Unified ID (UID) 2.0. The second segment (26%) includes consumers who are ambivalent about both sets of premises. The third segment (34%) is driven by a need for relevant ads and a strong desire to prevent advertisers from aggressively collecting data, with consumers who accept the premises of Privacy Sandbox but reject the premises of UID 2.0. The fourth segment (15% of consumers) rejected both sets of premises about privacy control. Text analysis results suggest that the conversation around UID 2.0 is still nascent. Google Sandbox associations seem nominally positive, with sarcasm being an important factor in the sentiment analysis results.
Originality/value
The value of this paper lies in its multi-method examination of online privacy concerns in light of the recent regulatory legislation (i.e. General Data Protection Regulation and California Consumer Privacy Act) and changes for third-party cookies in browsers such as Firefox, Safari and Chrome. Two alternatives proposed to replace third-party cookies (Privacy Sandbox and Unified ID 2.0) are in the proposal and prototype stage. The elimination of third-party cookies will affect stakeholders, including different types of players in the AdTech industry and internet users. This paper analyzes how two alternative proposals for privacy control align with the interests of several stakeholders.
Nikolaos Askitas and Klaus F. Zimmermann
Abstract
Purpose
The purpose of this paper is to recommend the use of internet data for social sciences with a special focus on human resources issues. It discusses the potentials and challenges of internet data for social sciences. The authors present a selection of the relevant literature to establish the wide spectrum of topics, which can be reached with this type of data, and link them to the papers in this International Journal of Manpower special issue.
Design/methodology/approach
Internet data increasingly capture a large part of everyday life that cannot be measured otherwise. The information is timely, often available daily as events unfold. It typically involves large numbers of observations and allows for flexible conceptual forms and experimental settings.
Findings
Internet data can successfully be applied to a very wide range of human resource issues including forecasting (e.g. of unemployment, consumption goods, tourism, festival winners and the like), nowcasting (obtaining relevant information much earlier than through traditional data collection techniques), detecting health issues and well-being (e.g. flu, malaise and ill-being during economic crises), documenting the matching process in various parts of individual life (e.g. jobs, partnership, shopping), and measuring complex processes where traditional data have known deficits (e.g. international migration, collective bargaining agreements in developing countries). Major problems in data analysis are still unsolved and more research on data reliability is needed.
Research limitations/implications
The data in the reviewed literature are still largely unexplored and underused, and the available methods face both known and new challenges. Current research is highly original but also exploratory and at an early stage.
Originality/value
The paper reviews the current attempts in the literature to incorporate internet data into the mainstream of scholarly empirical research and guides the reader through this Special Issue. The authors provide some insights and a brief overview of the current state of research.
Abstract
Purpose
The purpose of this paper is to use data mined from Google Trends, in order to predict the unemployment rate prevailing among Canadians between 25 and 44 years of age.
Design/methodology/approach
Based on a theoretical framework, this study argues that the intensity of online leisure activities is likely to improve the predictive power of unemployment forecasting models.
Findings
Mining the corresponding data from Google Trends, the analysis indicates that prediction models including variables which reflect online leisure activities outperform those solely based on the intensity of online job search. The paper also outlines the most propitious ways of mining data from Google Trends. The implications for research and policy are discussed.
Originality/value
This paper, for the first time, augments unemployment forecasting models with data on the intensity of online leisure activities in order to predict the Canadian unemployment rate.