Abstract
This chapter investigates the behavior of users of Reddit’s news subreddit and the relationship between their sentiment and exchange rates. Using graphical models and natural language processing, hidden online communities among Reddit users are discovered. The data set used in this project is a mixture of text and categorical data from Reddit’s news subreddit, comprising the titles of the news pages and users’ comments, together with a few user characteristics. This data set is an excellent resource for studying user reaction to news, since the comments are directly linked to the webpage contents. The model considered in this chapter is a hierarchical mixture model: a generative model that detects overlapping networks using the sentiment of the user-generated content. Its advantage is that the communities (or groups) are assumed to follow a Chinese restaurant process, so the number of communities need not be fixed in advance and the model can detect and cluster them automatically. The hidden variables and the hyperparameters of this model are obtained using Gibbs sampling.
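The Chinese restaurant process that governs the community assignments above can be sketched in a few lines; the function name and the concentration parameter `alpha` are illustrative, not taken from the chapter, and this omits the sentiment likelihood and the Gibbs updates.

```python
import random

def crp_assignments(n_customers, alpha, seed=0):
    """Sample cluster (table) assignments from a Chinese restaurant process.

    Customer i joins an existing table with probability proportional to its
    current size, or opens a new table with probability proportional to
    alpha, so the number of clusters is not fixed in advance.
    """
    rng = random.Random(seed)
    tables = []            # tables[k] = number of customers seated at table k
    assignments = []
    for i in range(n_customers):
        weights = tables + [alpha]   # existing tables, then a brand-new one
        total = i + alpha            # the seated customers sum to i
        r = rng.uniform(0, total)
        cum = 0.0
        for k, w in enumerate(weights):
            cum += w
            if r <= cum:
                break
        if k == len(tables):
            tables.append(1)         # open a new table (new community)
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments

parts = crp_assignments(100, alpha=1.0)
print(len(set(parts)))  # number of communities discovered, data-driven
```

A full sampler would alternate such seating updates with draws of the community-level sentiment parameters.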
Alejandro Villagran and Gabriel Huerta
Abstract
The problem of model mixing in time series, in which the interest lies in the estimation of stochastic volatility, is addressed using the approach known as Mixture-of-Experts (ME). Specifically, this work proposes an ME model whose experts are defined through ARCH, GARCH and EGARCH structures. Estimates of the predictive distribution of volatilities are obtained using a full Bayesian approach. The methodology is illustrated with an analysis of a section of US dollar/German mark exchange rates and a study of the Mexican stock market index, using the Dow Jones Industrial index as a covariate.
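As a rough illustration of one of the expert structures named above, the GARCH(1,1) conditional-variance recursion can be written directly; the parameter names are generic, and this deterministic filter is not the authors' fully Bayesian estimation procedure.

```python
def garch11_variances(returns, omega, alpha, beta):
    """Conditional variances under a GARCH(1,1) recursion:

        sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1],

    initialized at the unconditional variance omega / (1 - alpha - beta),
    which requires alpha + beta < 1 (covariance stationarity).
    """
    sigma2 = [omega / (1.0 - alpha - beta)]
    for t in range(1, len(returns)):
        sigma2.append(omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1])
    return sigma2
```

In an ME setting, each expert (ARCH, GARCH, EGARCH) produces such a variance path and a gating function mixes their predictive distributions.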
Muneza Kagzi, Sayantan Khanra and Sanjoy Kumar Paul
Abstract
Purpose
From a technological determinist perspective, machine learning (ML) may significantly contribute towards sustainable development. The purpose of this study is to synthesize prior literature on the role of ML in promoting sustainability and to encourage future inquiries.
Design/methodology/approach
This study conducts a systematic review of 110 papers that demonstrate the utilization of ML in the context of sustainable development.
Findings
ML techniques may play a vital role in enabling sustainable development by leveraging data to uncover patterns and facilitate the prediction of various variables, thereby aiding in decision-making processes. Through the synthesis of findings from prior research, it is evident that ML may help in achieving many of the United Nations’ sustainable development goals.
Originality/value
This study represents one of the first investigations to comprehensively examine the literature on ML’s contribution to sustainability. The analysis revealed that the research domain is still in its early stages, indicating a need for further exploration.
Thomas J. Adler, Colin Smith and Jeffrey Dumont
Abstract
Discrete choice models are widely used for estimating the effects of changes in attributes on a given product's likely market share. These models can be applied directly to situations in which the choice set is constant across the market of interest or in which the choice set varies systematically across the market. In both of these applications, the models are used to determine the effects of different attribute levels on market shares among the available alternatives, given predetermined choice sets, or of varying the choice set in a straightforward way.
Discrete choice models can also be used to identify the “optimal” configuration of a product or service in a given market. This can be computationally challenging when preferences vary with respect to the ordering of levels within an attribute as well as the strengths of preferences across attributes. However, this type of optimization can be a relatively straightforward extension of the typical discrete choice model application.
In this paper, we describe two applications that use discrete choice methods to provide a more robust metric for use in Total Unduplicated Reach and Frequency (TURF) applications: apparel and food products. Both applications involve products for which there is a high degree of heterogeneity in preferences among consumers.
We further discuss a significant challenge in using TURF: with multi-attributed products the method can become computationally intractable. We describe a heuristic approach to support food and apparel applications, and conclude with a summary of the challenges in these applications that are yet to be addressed.
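The kind of heuristic alluded to above can be illustrated with a greedy, set-cover-style sketch; the data structures and names here are hypothetical, not the authors' implementation. Each step adds the product that reaches the most consumers not yet covered, avoiding the exhaustive search over all product subsets that makes exact TURF intractable for multi-attributed products.

```python
def greedy_turf(reach_sets, k):
    """Greedy TURF heuristic: choose up to k products to maximize
    unduplicated reach.

    reach_sets maps each product to the set of consumers it reaches.
    At each step, pick the product covering the most consumers not yet
    reached by the products already chosen.
    """
    chosen, covered = [], set()
    remaining = dict(reach_sets)
    for _ in range(k):
        if not remaining:
            break
        best = max(remaining, key=lambda p: len(remaining[p] - covered))
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered
```

Greedy selection is not guaranteed optimal, but for coverage-style objectives it is a standard, fast approximation.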
Hedibert Freitas Lopes and Esther Salazar
Abstract
In this paper, we propose a Bayesian approach to model the level and the variance of (financial) time series by the special class of nonlinear time series models known as the logistic smooth transition autoregressive models, or simply the LSTAR models. We first propose a Markov chain Monte Carlo (MCMC) algorithm for the levels of the time series and then adapt it to model the stochastic volatilities. The LSTAR order is selected by three information criteria: the well-known AIC and BIC, and the deviance information criterion (DIC). We apply our algorithm to a synthetic data set and to two real time series, namely the Canadian lynx data and the S&P 500 returns.
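The LSTAR mechanics can be sketched for order one. The parameterization below, with a logistic weight moving the autoregressive coefficient smoothly between two regimes, is the standard textbook form; the names `gamma` (transition speed) and `c` (threshold) are illustrative, and the noise term and MCMC estimation are omitted.

```python
import math

def lstar_step(y_lag, phi1, phi2, gamma, c):
    """Conditional mean of an LSTAR(1) model:

        y_t = (phi1 + phi2 * G(y_{t-1})) * y_{t-1} + e_t,
        G(z) = 1 / (1 + exp(-gamma * (z - c)))

    As gamma grows, G approaches a step function and the model approaches
    a threshold (TAR) model; small gamma gives a smooth regime transition.
    """
    G = 1.0 / (1.0 + math.exp(-gamma * (y_lag - c)))
    return (phi1 + phi2 * G) * y_lag
```

The same logistic-transition device, applied to the log-variance instead of the level, gives the smooth-transition stochastic volatility variant discussed in the paper.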
Stefano Bresciani, Alberto Ferraris, Marco Romano and Gabriele Santoro
Abstract
Purpose
This study proposes a more effective and efficient analytic methodology based on within-site clickstream data, combined with path visualization, to explore the channel dependence of consumers' latent shopping intent and the related behaviors, and in turn to gain insight into the interactivity between webpages.
Design/methodology/approach
The primary intention of the research is to design and develop a more effective and efficient approach for exploring consumers' latent shopping intent and the related behaviors from clickstream data. The proposed methodology uses a text-mining pipeline that combines hierarchical recurrent neural networks with a Hopfield-like neural network, equipped with Laplacian-based graph visualization, to visualize consumers' browsing patterns. Based on the observed interactivity between webpages, consumers' latent shopping intent and the related behaviors can be understood.
Findings
The key finding is that consumers' latent shopping intent and related behaviors within a website depend on the channels through which the consumers arrive. Consumers accessing the site through paid search and through display advertising are identified and categorized as goal-directed and exploratory, respectively. The results also indicate that the effect of webpage content on a consumer's purchase intent varies with the channel, implying that website optimization and the attribution of online advertising should also be channel-dependent.
Practical implications
The study has two main managerial and theoretical implications. First, uncovering the channel dependence of consumers' latent shopping intent and browsing behaviors helps attribute online advertising for sales promotion. Second, webmasters have previously made reorganization decisions purely from users' browsing paths (sequential page views), without appraising the psychological perspective, that is, users' latent shopping intent.
Originality/value
This study is the first to explore the channel dependence of consumers' latent shopping intent and the related browsing behaviors through within-site clickstream data associated with path visualization. The findings are helpful for the attribution of online advertising for sales promotion and useful for webmasters seeking to optimize the effectiveness and usability of their websites and, in turn, promote purchase decisions.
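As one plausible reading of the Laplacian-based visualization step, a page-transition graph and its unnormalized Laplacian can be built from raw clickstreams as follows. This is a sketch under assumed data shapes (sessions as ordered page lists), not the authors' pipeline, and the neural-network components are omitted.

```python
def clickstream_laplacian(sessions, pages):
    """Build the unnormalized graph Laplacian L = D - A from within-site
    clickstreams, where A counts symmetrized page-to-page transitions and
    D is the diagonal degree matrix.

    Eigenvectors of L are the usual starting point for Laplacian-based
    layout/visualization of browsing patterns.
    """
    n = len(pages)
    idx = {p: i for i, p in enumerate(pages)}
    A = [[0.0] * n for _ in range(n)]
    for session in sessions:
        for a, b in zip(session, session[1:]):   # consecutive page views
            i, j = idx[a], idx[b]
            if i != j:
                A[i][j] += 1.0
                A[j][i] += 1.0
    # L[i][i] = degree(i); L[i][j] = -A[i][j] off the diagonal
    return [[(sum(A[i]) if i == j else 0.0) - A[i][j] for j in range(n)]
            for i in range(n)]
```

Each row of the Laplacian sums to zero; pages with strong mutual traffic end up close together in the resulting spectral layout.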
Nayanthara De Silva, Malik Ranasinghe and C.R. De Silva
Abstract
Purpose
Artificial neural networks (ANNs) have been used for risk analysis in various applications such as engineering, finance and facilities management. However, a single network becomes less accurate when the problem is complex, with a large number of variables to be considered. An ensemble neural network (ENN) architecture has been proposed to overcome these difficulties. An ENN consists of many small “expert networks” that learn small parts of the complex problem, established by decomposing it into its sub-levels. This paper seeks to address these issues.
Design/methodology/approach
An ENN model was developed to analyze risks in the maintainability of buildings, which is known to be a complex problem with a large number of risk variables. The model comprised four expert networks representing the building components of roof, façade, internal areas and basement. The accuracy of the model was tested using two error terms: network error and generalization error.
Findings
The results showed that the ENN performed well in solving complex problems by decomposing the problem into its sub-levels.
Originality/value
The application of ensemble networks offers a new way of analyzing complex risk analysis problems. The study also provides a useful tool for designers, clients, facilities/maintenance managers and users to analyze the maintainability risks of buildings at early stages.
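The decompose-and-combine idea behind the ENN can be sketched as a weighted combination of per-component expert models. The function and weight names are illustrative, and the experts here are simple callables rather than the trained networks used in the paper.

```python
def ensemble_predict(expert_fns, component_inputs, weights=None):
    """Combine the outputs of small 'expert' models, one per building
    component (e.g. roof, facade, internal areas, basement), into a single
    risk score via a weighted average.

    Each expert sees only its own component's inputs, mirroring the
    decomposition of the complex problem into sub-levels.
    """
    if weights is None:
        weights = [1.0 / len(expert_fns)] * len(expert_fns)
    scores = [f(x) for f, x in zip(expert_fns, component_inputs)]
    return sum(w * s for w, s in zip(weights, scores))
```

In practice the weights could themselves be learned, or replaced by a gating network as in mixture-of-experts architectures.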