Do sensory reviews make more sense? The mediation of objective perception in online review helpfulness

Alberto Lopez (Tecnologico de Monterrey, Business School, Monterrey, Mexico)
Ricardo Garza (Softtek, Corporate Innovation and Emerging Technologies, Monterrey, Mexico)

Journal of Research in Interactive Marketing

ISSN: 2040-7122

Article publication date: 12 October 2021

Issue publication date: 12 July 2022

Abstract

Purpose

Do consumers rate reviews describing other consumers' sensory experience of a product (touch, smell, sight, hearing and taste) as more helpful, or do they rate reviews describing more practical properties (product performance and characteristics/features) as more helpful? What is the effect of review helpfulness on purchase intention? Furthermore, why do consumers perceive sensory and non-sensory reviews differently? This study answers these questions.

Design/methodology/approach

The authors analyze 447,792 Amazon reviews and perform a topic modeling analysis to extract the main topics that consumers express in their reviews. The topics are then used as regressors to predict the number of consumers who found each review helpful. Finally, a lab experiment replicates the results in a more controlled environment and tests the serial mediation effect.

Findings

Contrary to the overwhelming evidence supporting the positive effects of sensory elicitation in marketing, this study shows that sensory reviews are less likely to be helpful than non-sensory reviews. Moreover, a key reason why sensory reviews are less effective is that they reduce the perceived objectivity of the review; a less objective review is judged less helpful, which in turn decreases purchase intention.

Originality/value

This study contributes to the interactive marketing field by investigating customer behavior and interactivity on online shopping sites, and to the sensory marketing literature by identifying a boundary condition: the authors' data suggest that sensory elicitations might not be processed positively by consumers when they are not directly experienced but instead communicated by another consumer. Moreover, this study indicates how companies can encourage consumers to share more effective and helpful reviews.

Citation

Lopez, A. and Garza, R. (2022), "Do sensory reviews make more sense? The mediation of objective perception in online review helpfulness", Journal of Research in Interactive Marketing, Vol. 16 No. 3, pp. 438-456. https://doi.org/10.1108/JRIM-04-2021-0121

Publisher

Emerald Publishing Limited

Copyright © 2021, Alberto Lopez and Ricardo Garza

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode.


1. Introduction

Consumers write reviews about products they buy and transmit credibility to other consumers through these reviews (Kim et al., 2020). Furthermore, consumers rely on these reviews to a great extent in their buying decisions (Chakraborty, 2019; Huang and Pape, 2020). Studies have found that nearly 90% of consumers read online reviews before purchasing (Trustpilot, 2020). Therefore, consumer reviews have a great impact not only on sales but also on post-purchase evaluations (Jacobsen, 2018), brand extensions (Liu et al., 2017) and the company's share price (Chen et al., 2012).

The ecosystem of the interactive marketing field involves many digital platforms, which allow multi-sided network interactions (Wang, 2021) and a co-creation process (Nettelhorst et al., 2020). Reviews in online sites are a mechanism to encourage customer participation and engagement, which is a key aspect of the interactive marketing field (Wang, 2021), while also allowing multi-sided interactions by inspiring communications among consumers without the direct intervention of the company.

Due to the relevance of consumers' reviews, we investigate how reviews influence consumers and the extent to which consumers find different types of reviews helpful. We analyze more than 400,000 Amazon reviews across product categories and perform a topic modeling analysis to extract the main topics that consumers express in their reviews. After identifying the main topics, we use them as regressors to predict the number of consumers who found each review helpful and were likely influenced by it in making their consumption decisions.

The data reveal that consumers review a product based on their sensory experience (touch, smell, sight, hearing and taste); we call these sensory reviews. The data also reveal that consumers review a product based on more practical properties, such as the product's performance, characteristics/features and the installation/set-up process; we call these non-sensory reviews.

Our analyses show that consumers find non-sensory reviews significantly more helpful than sensory reviews. The mediation analysis reveals that an underlying mechanism driving this effect is that consumers perceive sensory reviews as less objective, which decreases review helpfulness and purchase intention.

From an interactive marketing perspective, this paper contributes to the field by investigating customer behavior and interaction in electronic platforms (Wang, 2021); specifically in online shopping platforms that allow consumers to post and share personal reviews. This paper proposes that consumers engage in the active behavior of writing reviews that mainly concern the sensory and non-sensory characteristics of a product and those types of reviews have a great impact on this interactivity through “word-of-click” (Swani et al., 2013; Wang, 2021), such as likes or votes.

These findings also contribute to the literature on sensory marketing. There is a vast amount of research in the marketing literature supporting the idea that sensory stimuli are effective because they create subconscious triggers that characterize consumer perceptions of abstract notions of the product (Krishna, 2012). For instance, advertising messages that create a sensory elicitation in consumers' minds increase ad effectiveness (Haase et al., 2018). Previous research has focused on sensory elicitations directly communicated from the company/brand to the consumer (Krishna et al., 2016) and neglected the role of sensory elicitations not experienced directly by the consumer but described by a stranger through online reviews.

Furthermore, we also investigate a relevant underlying mechanism in the sensory marketing field. Previous literature argues that, given the great number of explicit marketing messages directed at consumers every day, subconscious sensory triggers that appeal to the basic senses may be a more efficient way to engage consumers (Krishna, 2012). However, our data show that these sensory triggers must be perceived as objective to be processed positively by consumers. This is a relevant phenomenon worth studying for both theoretical advancement and managerial importance. Sensory marketing researchers benefit from our findings through the identification of a boundary condition: our data suggest that sensory elicitations might not be processed positively by consumers when they are not directly experienced but instead communicated by another consumer. On the other hand, by relying on our findings, practitioners can encourage consumers to share more effective and helpful reviews and thereby increase review helpfulness and, in turn, purchase intention.

Finally, we also contribute to the field by introducing a methodology to study consumers in online settings. We employ a novel natural language processing (NLP) technique—topic modeling—to study latent topics in texts freely shared by consumers.

2. Theoretical background and hypotheses development

2.1 Consumers write and read reviews on electronic shopping platforms

Consumers seek information when making their consumption decisions. Online consumer reviews are an important resource for consumers seeking information about products they plan to buy (Zhu and Zhang, 2010).

The subject of online consumer reviews has been studied extensively in the last decade. Most of these studies focused on the characteristics of a review that increase or decrease its effectiveness. For a recent review of the literature on consumer reviews, please refer to Cioppi et al. (2019).

Although personal reviews allow multi-sided network interaction among consumers and establish an evolving landscape of interactive marketing by moving from a business-initiated marketing toward truly dynamic interaction (Wang, 2021), previous research has not investigated the types of reviews consumers share online. In this research, we start by investigating the main topics that consumers discuss when reviewing a product online. We answer the following question: “What are the main ways that consumers tend to describe products in their online reviews?” To achieve this objective, we employ a novel NLP technique called topic modeling.

2.2 Topic modeling to reveal phenomenon-based constructs in textual data

Recently, researchers have started employing topic modeling in management research as a tool to reveal constructs and render new theory (Hannigan et al., 2019). In this paper, we employ this method to identify the main types of reviews that consumers write when reviewing a product online. Then, we use those constructs to propose and test our hypotheses.

A topic is a label for a collection of words that often occur together across documents. For example, the words "rain," "storm," "snow," "winds" and "ice" would likely indicate a latent topic in the data about weather. Topic models are a way to identify what a set of documents is about more quickly and objectively than human coding (Blei et al., 2003).

Topic modeling is a machine learning technique that automatically analyzes text data to determine clusters of words across a set of documents. It allows us to identify topics that are embedded in textual data. It is assumed that each text document comprises one or more latent topics. The topic modeling algorithm extracts these topics from multiple text documents.

This analysis technique, which was developed in the field of computer science and NLP, has been widely adopted by scholars to study the comments consumers share on social media about brands and products (Ryoo et al., 2021).

2.3 Sensory and non-sensory reviews

We start by applying topic modeling to an Amazon review dataset in order to identify the main themes around which consumers write online reviews. As explained in study 1A, we find that consumers mainly write reviews in terms of their sensory experience or in terms of more objective properties of the product; we call the latter non-sensory reviews. We define both types next.

Sensory reviews are those that describe and evaluate the product in terms of the consumer's sensory experience with it (touch, smell, sight, hearing and taste). This is consistent with previous findings in the literature that consumers are greatly influenced by their senses in their perception, judgment and behavior (Krishna, 2012).

On the other hand, non-sensory reviews are those that describe and evaluate the product in terms of non-sensory properties of the product, such as product performance, product characteristics and features, and the installation process. This supports previous research, which found that consumers greatly base their consumption decisions on more practical aspects of the product (Huang and Liang, 2021). See Table 1 for example sensory and non-sensory reviews identified in study 1A.

Having identified the main topics (sensory and non-sensory) that consumers use to write their online reviews, we propose relationships among these constructs and render new theory in the field of interactive marketing, as detailed in the following sections.

2.4 “Word-of-click” through helpful votes

An important shift in the interactive marketing field is the move from word-of-mouth to “word-of-click” (Wang, 2021). Previous research has investigated how the number of likes, reactions, shares and votes exerts an important impact on the effectiveness of an online post (Swani et al., 2013). These behaviors, which include clicking like, recommend, thumbs up and other forms of positive reaction are referred to as “word-of-click” and are implemented in a great variety of digital platforms, social media and online shopping sites.

Companies benefit when consumers interact with their content through “word-of-click” because it encourages multi-sided interactions among consumers and the firm in a low cognition process compared to other social interactions that often require high cognition (Swani et al., 2013). Previous research in this domain has mainly focused on “word-of-click” on social media platforms (i.e. Facebook likes, Twitter shares). This study focuses on “word-of-click” in online shopping sites through the number of helpful votes a review received by other consumers.

Review helpfulness refers to the degree to which consumers perceive a product review to be helpful in their own purchasing decision-making (Wu et al., 2021). Due to the high volume of reviews that might be posted for a single product, it can be difficult for consumers to locate the most helpful reviews when making purchase decisions. Previous research in this field has determined that reviews that receive more helpful votes are more likely to be read and considered by other consumers (Ghose and Ipeirotis, 2011). Furthermore, perceived review helpfulness has been established as leading to higher purchase behavior (Chen et al., 2008; Mariani and Borghi, 2020).

2.5 Consumers anticipate the subjective nature of sensory reviews and its downstream effect on review helpfulness and purchase intention

There is a vast body of literature on how consumers' senses of haptics, olfaction, audition, taste and vision greatly affect consumer perception, judgment and behavior. Please refer to Krishna (2012) for a complete review of the literature in this field.

However, there is scant research on the topic of why the senses are sometimes not effective in influencing consumer behavior; this is known as non-diagnostic sensory input and refers to when the senses do not influence consumer perception, judgment or behavior (Krishna and Morrin, 2008). In this research, we aim to investigate the role of sensory inputs that are not directly experienced by the consumer but instead narrated by another consumer through online reviews.

Previous research has found that consumers prefer advisors and are more willing to accept others' perspectives as decision input when they believe that others' judgment is objective rather than subjective (Dai et al., 2020).

Objective perception is defined to be the formal structure that envelops the perceiver, the act of perceiving, that which is perceived, and their interrelationships (Haugeland, 1996). Every perceptual experience has an objective and a subjective side. In this research, we argue that sensory reviews are perceived as less objective than non-sensory reviews, as explained next.

Building on this notion, researchers have found that time perception is greatly influenced by the senses and the interaction between them (van Wassenhove et al., 2008). Because people experience their senses differently, sensory input is inherently subjective; therefore, what consumers experience through their senses might be received differently by others.

However, there are more objective inputs, such as the quality (Dai et al., 2020), features, description and technical details of the product (Dash et al., 2021). Research has found that consumer reviews that are more concrete are perceived as more helpful by consumers (Li et al., 2013).

Notably, previous studies have found that review attributes that increase consumer acceptance are mainly objective, including products' concrete characteristics and features (Huang and Liang, 2021). In contrast, others have argued that consumers do not always trust their senses in forming perceptions, judgment and behavior (Sato and Kording, 2014). We propose that consumers anticipate the subjective nature of sensory reviews, which decreases the level of review helpfulness and downstream purchase intention.

Based on the previous arguments, we propose that consumers are more likely to perceive reviews that describe non-sensory properties of a product, such as product performance, characteristics and features, and installation process, as more helpful through “word-of-click.” Thus, we derive our first hypothesis:

H1.

Reviews describing consumers' sensory experience of a product are less likely to be helpful than reviews describing non-sensory properties of the product.

Besides describing the phenomenon of review type (sensory vs non-sensory) on the helpfulness of reviews, we also investigate the underlying mechanism driving this effect on review helpfulness and purchase intention. We argue that the content of a review (sensory vs non-sensory reviews) influences the degree to which consumers perceive the review as objective or subjective.

Building on the above arguments, we propose that sensory reviews decrease the objective perception of the review, a less objective review then decreases the level of helpfulness, which then decreases purchase intention. Thus, we propose a serial mediation effect:

H2.

Objective perception of a review (proximal mediator) and review helpfulness (distal mediator) mediate the relationship between review type and purchase intention.

3. Study 1A: topic modeling to extract review topics

3.1 Data

We employed the Amazon review dataset constructed by Ni et al. (2019). We relied on a subset of the complete Amazon review dataset comprising 447,792 reviews of all product categories. The reviews were published by consumers in 2018 on the Amazon website. Each review comprises a rating score from 1 to 5, the number of helpfulness votes given by consumers, and the review text written by the consumer.

Before performing topic modeling, we pre-process the data. Following recent recommendations on how to deal with unstructured text data and topic modeling algorithms, we apply the following steps: (1) cleaning the text data, (2) lowercasing all text, (3) normalization, (4) stemming, (5) removing stop words and (6) tokenizing (Qomariyah et al., 2019).

First, we cleaned the data by removing white space, punctuation, emoticons, hashtags, usernames and Uniform Resource Locators (URLs). Second, we converted all characters to lowercase letters. Third, we eliminated all text that is not letters, i.e. "a" to "z." Fourth, basic words, called stems, were generated by removing affixes or changing verbs to nouns. The stem or root of a word is the basic word that remains after removing prefixes, suffixes, insertions and their combinations. For example, "opportunity" and "opportunities" would both be reduced to the same stem, which makes the data analysis easier (Banks et al., 2018). Fifth, stop words are words that occur so frequently that they do not add value to identifying topics; examples of stop words are prepositions, conjunctions and pronouns. We eliminated stop words from the dataset. Finally, tokenizing was done to break each sentence into pieces; the sentences are broken down into words, and each word is called a token (Banks et al., 2018; Qomariyah et al., 2019).
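To make the pipeline concrete, the following is a minimal sketch of these pre-processing steps in Python using NLTK. The variable `reviews` (a list of raw review texts) and the exact regular expressions are illustrative assumptions, not the authors' code.

```python
import re
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download("punkt")
nltk.download("stopwords")

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(review):
    """Clean, lowercase, normalize, stem, remove stop words and tokenize one review."""
    text = review.lower()                                  # (2) lowercase all text
    text = re.sub(r"http\S+|@\w+|#\w+", " ", text)         # (1) strip URLs, usernames, hashtags
    text = re.sub(r"[^a-z\s]", " ", text)                  # (3) keep only letters a-z
    tokens = word_tokenize(text)                           # (6) tokenize into words
    tokens = [t for t in tokens if t not in stop_words]    # (5) remove stop words
    return [stemmer.stem(t) for t in tokens]               # (4) reduce words to stems

# `reviews` is assumed to be a list of raw Amazon review strings
tokenized_reviews = [preprocess(r) for r in reviews]
```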

After the data pre-processing, we conducted an exploratory analysis of word frequencies and omitted from the topic modeling words that occur too much or too little across documents. Words that occur too little—sparse terms—are deleted to reduce noise and increase computational speed (Banks et al., 2018; Li et al., 2017). The lower limit for sparse words is 1%; our dataset contains 447,792 reviews, so to be part of the topic modeling, a word must appear in at least 4,477 reviews. Conversely, a word that appears in too many documents does not add value to identifying the topics, so an upper limit of 10% was set.
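Under the same illustrative assumptions, these 1% lower and 10% upper document-frequency bounds can be applied when building the vocabulary, for example with gensim's `Dictionary.filter_extremes`:

```python
from gensim.corpora import Dictionary

dictionary = Dictionary(tokenized_reviews)

# Keep words appearing in at least 1% of reviews (sparse-term floor)
# and in at most 10% of reviews (overly frequent ceiling).
n_docs = len(tokenized_reviews)
dictionary.filter_extremes(no_below=int(0.01 * n_docs), no_above=0.10)

# Bag-of-words corpus used as input to the topic model
corpus = [dictionary.doc2bow(doc) for doc in tokenized_reviews]
```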

3.2 Identifying review topics

The next step is the topic modeling algorithm of the data. Several models can be used to perform topic modeling. We employed the probabilistic model called Latent Dirichlet allocation (LDA), which was first introduced by Blei et al. (2003); it is one of the most used and reliable models in recent NLP studies (Sutherland et al., 2020; Xue et al., 2020).

Since we do not know the target variable, we use LDA, an unsupervised machine learning method. LDA treats each document as a bag of words and assumes that each document contains a set of topics that follow a multinomial distribution. However, the probabilities of the multinomial model are not fixed across documents; LDA assumes that the distribution of these probabilities follows a Dirichlet distribution. Thus, for each document, the topics are randomly drawn from a multinomial distribution (for a full mathematical treatment, please refer to Blei et al., 2003).

Since this analysis assumes that, for each word in each document, there is a latent (i.e. unobserved) variable indicating the topic from which that word is drawn, we must decide the number of topics to extract beforehand (Chen and Doss, 2019). We employ the topic coherence score proposed by Mimno et al. (2011). This score measures the semantic coherence of the topics, i.e. the topic quality; scores closer to zero indicate higher semantic coherence. Figure 1 shows the coherence score for each number of topics returned by the LDA model. We selected five topics because this number yielded the coherence score closest to zero.
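A sketch of this model-selection step, assuming the gensim implementations of LDA and of the UMass coherence measure of Mimno et al. (2011); the candidate range of topic counts is an assumption for illustration:

```python
from gensim.models import LdaModel, CoherenceModel

coherence_by_k = {}
for k in range(2, 11):  # candidate numbers of topics (illustrative range)
    lda = LdaModel(corpus=corpus, id2word=dictionary,
                   num_topics=k, random_state=42, passes=5)
    cm = CoherenceModel(model=lda, corpus=corpus,
                        dictionary=dictionary, coherence="u_mass")
    coherence_by_k[k] = cm.get_coherence()  # UMass scores are negative; closer to 0 is better

best_k = max(coherence_by_k, key=coherence_by_k.get)  # the paper selects k = 5
```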

We analyze the Amazon review text data with the five topics using the Gibbs sampling method (Porteous et al., 2008). When displaying topics, each topic is generally presented as a list of the most probable words in that topic in descending order of their topic-specific probabilities (Mimno et al., 2011). Table 1 presents the results of the five salient topics, the most popular words within each topic and the number of reviews under each topic.
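A minimal sketch of this final step with the gensim model used above. Note that gensim fits LDA with variational inference rather than the collapsed Gibbs sampler of Porteous et al. (2008), so this is only an approximation of the authors' procedure:

```python
# Fit the final five-topic model and list the most probable words per topic
lda5 = LdaModel(corpus=corpus, id2word=dictionary,
                num_topics=5, random_state=42, passes=10)

for topic_id, words in lda5.show_topics(num_topics=5, num_words=10, formatted=False):
    # words is a list of (stem, probability) pairs in descending order of probability
    print(topic_id, [w for w, p in words])
```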

3.3 Discussion of the review topics

Although the five topics are open to different interpretations, the authors labeled them as follows: (1) reviews about the senses of touch, smell and sight; (2) reviews about the sense of hearing; (3) reviews about the set-up process; (4) reviews about product performance and characteristics; and (5) reviews about the sense of taste. The words in Table 1 may appear misspelled because they were reduced to their stems (e.g. "qualiti" instead of "quality").

Our analyses show that consumers mainly write reviews about these five topics, and across them we find evidence for all the senses. Topics 1, 2 and 5 capture consumers' reviews describing their sensory experience of a product, while topics 3 and 4 capture non-sensory reviews. The five topics are fairly evenly distributed across the 447,792 reviews.

4. Study 1B: empirical analysis of the effect of sensory and non-sensory reviews on review helpfulness

Study 1B tests H1 to determine whether consumers find sensory reviews to be less helpful than non-sensory reviews. We employ the Amazon dataset used for study 1A and the five topics identified.

4.1 Measures

For each review on Amazon, consumers were asked, “Was this review helpful to you?” Following previous research, we employ the number of helpfulness votes of each review as a measure of how helpful the review was to consumers (Dai et al., 2020). This measure is our dependent variable.

We employed the probabilities of each topic as regressors. The LDA model calculates the probability for each review to be part of each topic. These probabilities are calculated based on the semantic approximation of each review's words to the words associated with each topic. For example, topic 1 is about the sense of touch, smell and sight, therefore a review that contains a lot of words that are semantically related to scents, fragrances, colors, textures, etc. would have a high probability in topic 1 and a low probability in topics 2–5. By contrast, a review that contains words related to product characteristics and features would have a high probability in topic 4.
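A minimal sketch of how these per-review topic probabilities could be extracted from the fitted model above; the helper function and variable names are illustrative assumptions, and `minimum_probability=0.0` forces all five probabilities to be returned so each row sums to 1:

```python
import numpy as np

def topic_probabilities(model, bow, num_topics=5):
    """Return the full topic-probability vector for one review (bag-of-words)."""
    probs = np.zeros(num_topics)
    for topic_id, p in model.get_document_topics(bow, minimum_probability=0.0):
        probs[topic_id] = p
    return probs

# Regressor matrix: one row per review, one column per topic
X_topics = np.vstack([topic_probabilities(lda5, bow) for bow in corpus])
```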

Following previous research on online consumer reviews, we controlled for the review length in words, the title length in words, the star rating (1–5 stars) and whether the review is verified (Dai et al., 2020; Lantzy et al., 2021). This dataset did not contain information on the specific product category, a limitation that we address in study 2.

4.2 Data analysis

Table 2 displays the summary statistics of and correlations between the variables in the model (helpfulness votes, topic 1 (sense of touch, smell and sight), topic 2 (sense of hearing), topic 3 (set-up process), topic 4 (product performance and characteristics) and topic 5 (sense of taste), rating stars, review length, title length and whether the review is verified).

Since approximately 50% of the data contain zero helpfulness votes, we needed a regression model that fits this type of data. We conducted a zero-inflated Poisson (ZIP) regression, which is a model for count data with excess zeros; ZIP regression models are not only easy to interpret but also lead to more refined data analyses when the data contains a large number of zeros (Lambert, 1992).

ZIP models assume that some zeros occurred in a Poisson process, while others were not even eligible to have the event occur. The ZIP model requires that it be theoretically plausible for some individuals to be ineligible for a count. For our study subject, we argue that the process producing zeros ineligible for a count results from the excessive number of reviews on the Amazon platform: consumers are overwhelmed and read and vote on only some reviews as helpful. This process leads to the excess zeros in our dependent variable, the number of helpfulness votes. Furthermore, recent research has employed ZIP models to investigate the phenomenon of online consumer reviews (Lantzy et al., 2021).

Since the sum (by row) of topics 1–5 is 1, we need to omit one variable from the model to avoid singularity. We omitted topic 1 from the model and used it as the base against which to compare the effects of the coefficients. The dependent variable is the number of helpful votes a review received. The explanatory variables include topics 2–5 and the control variables: rating stars, review length, title length and whether the review is verified. Since the correlations among the independent variables presented in Table 2 are small, we can discard any multicollinearity concerns.
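A sketch of this specification, assuming statsmodels' ZeroInflatedPoisson and a hypothetical data frame `df` with one row per review (topic probabilities, helpfulness votes and controls); this is not the authors' code:

```python
import statsmodels.api as sm
from statsmodels.discrete.count_model import ZeroInflatedPoisson

# Topic 1 is omitted as the baseline; the remaining topics and controls enter the count model.
predictors = ["topic2", "topic3", "topic4", "topic5",
              "rating_stars", "review_length", "title_length", "verified"]
X = sm.add_constant(df[predictors])

# Count part: Poisson; inflation part: intercept-only logit (statsmodels default)
zip_fit = ZeroInflatedPoisson(endog=df["helpful_votes"], exog=X,
                              inflation="logit").fit(method="bfgs", maxiter=500)
print(zip_fit.summary())
```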

4.3 Results and discussion

Table 3 presents models 1 and 2. Model 1 contains only the variables in H1, and model 2 contains the control variables. The results do not significantly change when the controls are added. The main model is neither dependent on nor qualitatively altered by including covariates; thus, we focus on the interpretation of model 1.

In model 1, topics 2 and 5 significantly decrease the number of helpfulness votes. Topic 2 has a negative effect (β = −0.0045, Z = −2.84, p < 0.01) on the number of helpful votes. Topic 5 also has a negative effect (β = −0.0155, Z = −8.83, p < 0.001). Since topic 1 is the base of the model, it also has a lower effect than topics 3 and 4. Topic 3 has a significant positive effect (β = 0.0356, Z = 23.06, p < 0.001) on the number of helpful votes. Topic 4 also has a positive effect (β = 0.0718, Z = 47.74, p < 0.001) on the helpfulness of the review.

In conclusion, sensory reviews, which describe consumers' sensory experience of the product, receive fewer helpfulness votes than non-sensory reviews. This analysis strongly supports H1, which states that reviews about the senses are less likely to be helpful to consumers than reviews about the product's set-up process, performance, and characteristics and features.

4.4 Robustness check

As a robustness check, we combined topics 1, 2 and 5 into a single variable called sensory reviews and topics 3 and 4 into another variable called non-sensory reviews. We then conducted the regression model with these aggregated variables. Since the sum of the probabilities is 1, we need to omit one variable from the model to avoid singularity. We included the aggregated topics of sensory reviews in the model and the aggregated topics of non-sensory reviews served as the base to compare the effect.
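Continuing the same illustrative data frame, the aggregation itself is a simple sum of topic probabilities before re-estimating the model:

```python
# Aggregate topic probabilities: sensory (topics 1, 2, 5) vs non-sensory (topics 3, 4)
df["sensory"] = df[["topic1", "topic2", "topic5"]].sum(axis=1)
df["non_sensory"] = df[["topic3", "topic4"]].sum(axis=1)   # serves as the omitted baseline

# Only the sensory aggregate enters the model; non-sensory is the reference category
X_agg = sm.add_constant(df[["sensory"]])
zip_agg_fit = ZeroInflatedPoisson(endog=df["helpful_votes"], exog=X_agg).fit(method="bfgs")
```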

Table 4 shows the models conducted; model 1 includes only the aggregated sensory variable (topics 1, 2 and 5), and model 2 also includes the controls. The results do not significantly change when the controls are added. Thus, we focus on the interpretation of model 1.

The results replicate the previous findings. Sensory reviews have a negative effect (β = −0.0604, Z = −59.13, p < 0.001) on the number of helpful votes compared to non-sensory reviews. This robustness check offers further statistical confirmation of our hypothesis that sensory reviews are less likely to be considered helpful by consumers than non-sensory reviews.

5. Study 2: objective perception and review helpfulness as the underlying mechanism driving purchase intention

5.1 Overview and method

The goal of study 2 is to replicate our previous findings and test H2, which proposes that the objective perception of the review (proximal mediator) and review helpfulness (distal mediator) mediate the relationship between review type (sensory vs non-sensory) and purchase intention. For parsimony, we focus on aggregated sensory and non-sensory reviews instead of treating them as five separate conditions. Furthermore, to address study 1B's limitation of not controlling for product category, we included five product categories chosen for their variability in sensory and non-sensory features.

Study 2 employed a 2 (review type: sensory, non-sensory) × 5 (product category: T-shirt, headphones, suitcase, bed sheets, instant pot) experimental design, with review type and product category as between-subjects independent variables and review helpfulness as dependent variable. See Table 5 for the conditions and reviews employed in study 2.

5.1.1 Participants

We recruited 1,012 panelists from Amazon MTurk (53% female, Mage = 31.79 years); they logged onto the website and completed the study for monetary compensation.

5.1.2 Procedures and materials

The participants received a link to the website. After providing informed consent to a protocol approved by the institution's ethics committee, the participants were randomly assigned to a condition.

At the beginning of the study, the participants were asked to assume that they were looking to buy a product on Amazon; they were then shown a sensory or non-sensory review depending on the assigned condition. See Table 5 for the reviews and conditions employed in study 2.

Finally, the participants were asked to respond to the review helpfulness and purchase intention scales and to provide some demographic information.

5.1.3 Measures

Following previous research (Dai et al., 2020), we employed a two-item scale to measure review helpfulness on a seven-point Likert scale from 1 (Not at all) to 7 (Extremely). The items administered are “How helpful the review was” and “How useful the review was” (Pearson r = 0.66).

Objective perception of the review was measured with a single item on a seven-point Likert scale from 1 (Totally subjective) to 7 (Totally objective). Purchase intention was also measured with a single item, “How likely would you be to purchase this product after reading the review?” on a seven-point Likert scale from 1 (Not at all) to 7 (Extremely). The control variables are age, gender, academic level and occupation.

5.2 Analysis and results

Data were submitted to an analysis of variance, with review type and product category as between-subjects independent variables and review helpfulness as the dependent variable. The results are consistent with those of study 1B: there is a significant direct effect of review type on review helpfulness (F(1, 1,002) = 104.58, p < 0.001, ηp2 = 0.09). Neither the direct effect of product category (F(4, 1,002) = 0.31, p = 0.87) nor the interaction term (F(4, 1,002) = 0.57, p = 0.68) is significant. The product category did not significantly affect the results, so it is not discussed further. None of the covariates had a significant effect.
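A minimal sketch of this two-way between-subjects ANOVA, assuming a hypothetical participant-level data frame `exp` with columns for review type, product category and helpfulness:

```python
import statsmodels.api as sm
from statsmodels.formula.api import ols

# helpfulness ~ review type x product category, both between-subjects factors
anova_model = ols("helpfulness ~ C(review_type) * C(product_category)", data=exp).fit()
print(sm.stats.anova_lm(anova_model, typ=2))  # F-tests for main effects and interaction
```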

Replicating the results of study 1B, the participants who read a sensory review reported a lower level of helpfulness (M = 4.25; SD = 1.39) than those who read a non-sensory review (M = 5.12; SD = 1.27; t = 10.31, p < 0.001). Figure 2 illustrates the results.

Study 2 was also conducted to test H2, which proposes that the effect of review type on purchase intention is serially mediated by the objective perception of the review (proximal mediator) and the review's helpfulness (distal mediator). A serial mediation analysis with 10,000 bootstrap samples (model 6; Hayes, 2017), with review type as the independent variable (sensory review = 1, non-sensory review = 0), objective perception of the review as the proximal mediator, review helpfulness as the distal mediator and purchase intention as the dependent variable, revealed a significant negative effect of review type on objective perception of the review (β = −1.58, SE = 0.10; t = −16.25, p < 0.001; R2 = 0.21). When we controlled for review type, objective perception of the review significantly increased review helpfulness (β = 0.23, SE = 0.03; t = 8.85, p < 0.001; R2 = 0.16). Finally, controlling for review type and objective perception, review helpfulness had a positive effect on purchase intention (β = 0.09, SE = 0.03; t = 2.92, p < 0.01; R2 = 0.32). The confidence interval for the indirect effect through objective perception and review helpfulness did not include zero, indicating a significant negative indirect effect (βindirect = −0.03, SE = 0.01; 95% CI = [−0.0562, −0.0088]). Figure 3 shows the model plot and the results.

In order to compute the effect sizes for this serial mediation effect, we followed the guidelines proposed by Preacher and Kelley (2011). We computed the partially standardized indirect effect (abps), which is the ratio of the indirect effect to the standard deviation of the dependent variable. This index represents the size of the indirect effect in terms of standard deviation units. For the present research, the partially standardized indirect effect of review type on purchase intention through objective perception and review helpfulness is abps = −0.36, implying that purchase intention is expected to decrease by 0.36 standard deviations (SD purchase intention = 1.44) when the review is sensory (vs non-sensory) indirectly via objective perception and review helpfulness.
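The serial mediation (model 6) and the partially standardized indirect effect can be approximated outside the PROCESS macro with three OLS regressions and a percentile bootstrap. The sketch below assumes the same hypothetical data frame `exp` and variable coding (sensory = 1, non-sensory = 0); it is an illustration, not the authors' analysis script.

```python
import numpy as np
import statsmodels.api as sm

def serial_indirect(data):
    """Indirect effect: review_type -> objectivity (M1) -> helpfulness (M2) -> purchase_intention (Y)."""
    a1 = sm.OLS(data["objectivity"],
                sm.add_constant(data[["review_type"]])).fit().params["review_type"]
    d21 = sm.OLS(data["helpfulness"],
                 sm.add_constant(data[["review_type", "objectivity"]])).fit().params["objectivity"]
    b2 = sm.OLS(data["purchase_intention"],
                sm.add_constant(data[["review_type", "objectivity", "helpfulness"]])).fit().params["helpfulness"]
    return a1 * d21 * b2

point = serial_indirect(exp)
boot = np.array([serial_indirect(exp.sample(frac=1, replace=True)) for _ in range(10_000)])
ci = np.percentile(boot, [2.5, 97.5])            # 95% percentile bootstrap CI for the indirect effect
abps = point / exp["purchase_intention"].std()   # partially standardized indirect effect (Preacher and Kelley, 2011)
```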

5.3 Discussion

As Figure 3 shows, the regression betas support our serial mediation hypothesis. This implies that sensory reviews significantly decrease the level of review helpfulness (H1). This effect also holds when controlling for product category. Moreover, sensory reviews cause consumers to perceive the review's subjective nature, decreasing the level of objective perception of the review. Perceptions of objectivity increase the level of review helpfulness, which raises the intent to purchase a product.

6. General discussion

In summary, based on an archival study of 447,792 Amazon reviews from different product categories and a controlled lab experiment, this research reveals that consumers' online product reviews can be grouped into five main categories: (1) reviews about the senses of touch, smell and sight; (2) reviews about the sense of hearing; (3) reviews about the set-up process; (4) reviews about product performance and characteristics; and (5) reviews about the sense of taste. We further categorize them into two broad types, sensory and non-sensory reviews.

Contrary to the overwhelming evidence supporting the positive effects of sensory elicitation in marketing, we find that reviews describing consumers' sensory experience of the product are less likely to be helpful than reviews describing the product's objective properties. Moreover, a key reason why sensory reviews are less helpful is that consumers are aware of their subjective nature. Our data show that consumers do not rely on other people's senses in making their consumption decisions, since they might believe their own sensory experience would be different.

6.1 Contributions

Our research contributes to the interactive marketing field by investigating customer behavior and interactivity in electronic platforms (Wang, 2021); specifically, in online shopping sites that allow consumers to post and share personal reviews. Previous research categorized online reviews into two broad types, positive and negative reviews (Bi et al., 2019). Our research takes a step forward, proposing that consumers exhibit active behavior when writing reviews mainly concerning the sensory and non-sensory characteristics of a product, and those types of reviews have a great impact on the interactivity through “word-of-click” (Swani et al., 2013; Wang, 2021). Our findings demonstrate that non-sensory reviews significantly increase consumer interactivity in electronic platforms, a key aspect of the interactive marketing field (Wang, 2021), while also allowing multi-sided interaction by encouraging communication among consumers without a company's direct intervention.

Second, our results contribute to the nascent literature on the concept of "word-of-click" (Wang, 2021) by confirming that non-sensory reviews increase the likelihood that consumers interact in such environments by clicking on helpful, with downstream effects on purchase intention. Previous research has proposed that "word-of-click" is a low-cognition process (Swani et al., 2013); our research extends this literature by offering evidence of a more complete and complex cognitive process. We propose that a major driver of this phenomenon is the perception of a review's objectivity: the higher the objective perception, the higher the "word-of-click" through helpfulness votes. Our research also contributes to previous research by replicating prior results on the downstream effects of "word-of-click" on purchase intention.

Third, our findings contribute to the literature on sensory marketing by studying it from a different perspective. Previous research has not examined the negative aspects of engaging consumers' senses (Krishna, 2012). Contrary to the previous literature, which identified that sensory marketing has positive effects on brand loyalty, sales, profits and even market share (Hussain, 2019), our studies reveal that when it comes to online reviews, consumers do not trust other people's senses when making their consumption decisions. Since people experience things differently through their senses, it might be that consumers prefer to base their consumption decisions on more objective and concrete recommendations from other people (Huang and Liang, 2021). This research identified a relevant boundary condition to the sensory marketing literature by investigating this counter-intuitive effect.

Recent research has started to explore the role of sensory marketing in digital environments (Petit et al., 2019). Our research extends this line of research by identifying that the sensory triggers communicated to consumers in digital environments must be directly experienced by the consumer and perceived as objective stimuli.

Fourth, the methodology of our research contributes to the field of consumer behavior in virtual environments. We propose that since the amount of information available in social media and virtual environments goes beyond the processing capacities of humans, topic modeling is a valuable NLP technique to uncover general topics in a large body of unstructured documents (e.g. online reviews, social media comments and brand posts).

6.2 Managerial implications

Our findings are also useful for marketing practitioners. Our data suggest that, in order to increase interactivity through "word-of-click" and purchase intention, companies should encourage their consumers to write online reviews based on more objective aspects and to avoid reviews describing their sensory experience of products. Companies can ask consumers to write reviews about their products in a way that encourages them to employ more objective words. For instance, certain companies ask their clients about their experience of using the company's product to prompt them to write an online review; this type of message leads consumers to describe their sensory experience of the product. Instead of such broad prompts, companies can employ other encouraging messages, such as "How was the performance of the product?", "What do you think about the quality of the product?", "Did you like the product's features?" and "Was it easy to set up and install the product?" By employing these types of prompts, consumers are more likely to narrate their review using more concrete and objective words, which will be more beneficial to other consumers and to the company.

Since the perceived objectivity of a review is an important underlying mechanism that drives the effect of review type on "word-of-click" through helpfulness votes, as well as on purchase intention, another managerial application might be to include an "objective opinion" vote button on online shopping platforms. This way, consumers would be able to vote for the reviews they perceive as most objective.

6.3 Limitations and directions for future research

Our research focused on a limited number of product categories. Even though our data did not reveal any differences by product category, we acknowledge that sensory reviews might not always be less helpful for consumers. Future research can explore boundary conditions to the effect we describe in this paper. For instance, sensory reviews might be more helpful for some products such as food, cosmetics or modeling clay.

In this research, we did not account for the sentiment of the review. Previous research has identified that the sentiments expressed in online comments play a major role in their effect (Lopez et al., 2020; Rambocas and Pacheco, 2018). Future research can explore how the sentiments expressed by consumers in their reviews influence objective perception and "word-of-click."

Previous research has found that the valence of the review, how positive or negative it is, significantly influences its effects (Bi et al., 2019). However, our research does not address the valence of the review in terms of the type of review (sensory vs non-sensory) and its effect on “word-of-click” through helpful votes.

Recent research in the field of electronic word-of-mouth (eWOM) in the form of customer reviews has found that culture greatly influences eWOM motivation, quality, and effectiveness (Chan and Yang, 2021). Culture has also been shown to influence perceived sensory experiences (Swallow and Wang, 2020). An interesting avenue for future research would be investigating how consumers perceive sensory reviews across different cultures.

Our research did not examine the role of brand image, symbolism, positioning, or personality. For instance, previous research has found that functional and symbolic brands are evaluated differently by consumers (Liao and Wang, 2020; Liu et al., 2017). We employed Amazon data, which mainly contain functional brands, so future research could incorporate the role that prestige and luxury brands play in this field.

There are many digital platforms in which brands and consumers interact (Wang, 2021). Our research focused exclusively on online shopping platforms. Future research can build on these results and extend the contributions to additional digital platforms.

Figures

Figure 1. Coherence score for the number of topics to select by the LDA algorithm

Figure 2. Review helpfulness by review type

Figure 3. Model plot for the sequential mediation analysis

Identified salient topics, most probable words and example reviews

Topic | Name | Most frequent words within topic | Example review | Number of reviews
1 | Sense of touch, smell and sight | skin, feel, color, hair, smell, scent, face, dri, eye, smooth | "Xen-tan makes the best self tanners. They smell great, and if you select the right color for your skin they produce a beautiful natural tint. This weekly med/dark mousse smells of yummy vanilla (which dissipates quickly) and is the perfect color for my fair, untanned skin that has an olive undertone" | 81,900
2 | Sense of hearing | music, listen, sound, voic, heard | "I loved this album, it was very catchy and featured a mix of new sounds to music that sounded similar to songs she did in 'Title'. I just loved this album. Here are my reviews of each song" | 115,373
3 | Set-up process | instal, easi, time, includ, set, version, featur, recommend, start | "This printer is my first foray into 3D printing and I am seriously impressed. When you order, go to the Monoprice website and print out the owner's manual. It's not included in the box and it's a helpful, good reference. Also, I suggest watching a few of the many, good videos on YouTube for setting it up" | 73,703
4 | Product performance/characteristics | qualiti, nice, size, plastic, power, brand, small, perfect, pretti, expect | "This is my favorite tape measures. It is by far the heaviest tape measure I have ever owned. The case is fairly rugged plastic but the extra weight is from the tape itself- it is at least twice as thick as the tapes in my other tape measures. It is also more rigid when extended-- I was able to extend this tape measure over 7 feet without bending" | 76,544
5 | Sense of taste | taste, flavor, snack, eat, delici, sweet, fresh | "When Crystal Light first came out on the market, this was the first flavor that I tried and fell in love with. I purchase 4 containers per week because the rest of my family really enjoys the taste. Even after they introduced a variety of flavors, this was still my most favorite. I purchase the others but, I am always stocked with the Lemon Iced Tea" | 100,272

Descriptive statistics and correlations (Study 1)

Variable | Mean | SD | Min | Max | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
1. Helpfulness votes | 17.63 | 53.32 | 0 | 295 | | | | | | | |
2. Topic 1 sense of touch, smell and sight | 0.18 | 0.29 | 0 | 1 | 0.0339 | | | | | | |
3. Topic 2 sense of hearing | 0.26 | 0.34 | 0 | 1 | −0.0754 | −0.3039 | | | | | |
4. Topic 3 set-up process | 0.16 | 0.27 | 0 | 1 | 0.0567 | −0.2061 | −0.2577 | | | | |
5. Topic 4 product performance | 0.17 | 0.28 | 0 | 1 | 0.0881 | −0.1958 | −0.2774 | −0.1856 | | | |
6. Topic 5 sense of taste | 0.22 | 0.32 | 0 | 1 | −0.0754 | −0.2492 | −0.3403 | −0.2285 | −0.2306 | | |
7. Rating stars | 4.52 | 0.94 | 1 | 5 | −0.1248 | −0.0424 | 0.0989 | −0.0858 | −0.0206 | 0.0239 | |
8. Review length | 51.94 | 105.18 | 1 | 485,820 | 0.3532 | 0.0397 | −0.0148 | 0.1080 | 0.0517 | −0.1591 | −0.1281 |
9. Title length | 4.32 | 3.62 | 0 | 56 | 0.1685 | 0.0585 | −0.0756 | 0.0381 | 0.0680 | −0.0644 | −0.1142 | 0.2771
10. Verified | 0.80 | | | | | | | | | | |

Note(s): All correlations are significant at the 0.01 level (2-tailed)

Effects of topics and controls on the number of helpfulness votes

Predictor | Model 1 | Model 2
Topic 2 sense of hearing | −0.0045 (0.0016) | −0.0087 (0.0016)
Topic 3 set-up process | 0.0356 (0.0015) | 0.0187 (0.0016)
Topic 4 product performance | 0.0718 (0.0015) | 0.0689 (0.0015)
Topic 5 sense of taste | −0.0155 (0.0017) | −0.0068 (0.0018)
Rating stars | | −0.0104 (0.0003)
Review length | | −1.019e−02 (7.617e−05)
Title length | | 0.0024 (1.471e−03)
Verified | | −0.0149 (0.0009)
Intercept | 4.9915 (0.0011) | 5.0277 (0.0019)

Note(s): All coefficients are significant at the 0.01 level

Cells report coefficients for each predictor; standard errors are reported in parentheses

Model specification zero-inflated Poisson regression

Robustness check, effects of aggregated sensory topics and controls on the number of helpfulness votes

Predictor | Model 1 | Model 2
Sensory reviews (topics 1, 2 and 5) | −0.0604 (0.0010) | −0.0502 (0.0017)
Rating stars | | −0.009 (0.0003)
Review length | | 3.82e−05 (9.13e−07)
Title length | | 0.0026 (9.76e−05)
Verified | | −0.0082 (0.0009)
Intercept | 5.0462 (0.0007) | 5.0630 (0.0017)

Note(s): All coefficients are significant at the 0.01 level

Cells report coefficients for each predictor; standard errors are reported in parentheses

Model specification zero-inflated Poisson regression

Conditions and reviews employed for study 2

Condition | Title | Review
T-shirt, sensory review | "Fabric super soft and beautiful color!" | "I have to say I really like this T-shirt. It's actually fitted to my body which is what I wanted. The fabric is super soft, and the color looks beautiful. Overall, I really like the T-shirt"
T-shirt, non-sensory review | "Fabric super high quality and durable!" | "I have to say I really like this T-shirt. It's actually fitted to my body which is what I wanted. The fabric is super high quality and durable. Overall, I really like the T-shirt"
Headphones, sensory review | "Comfortable and soft for your ears!" | "I have to say I really like these headphones. They are very soft and comfortable for your ears, which is what I wanted. The sound is great, and the color looks beautiful. Overall, I really like the headphones"
Headphones, non-sensory review | "High quality and durable materials!" | "I have to say I really like these headphones. They have excellent sound quality and noise cancellation. The materials are super high quality and durable. Overall, I really like the headphones"
Suitcase, sensory review | "Beautiful color and design!" | "I have to say I really like this suitcase. The wheels run smoothly and silently. The color looks great, and the design is beautiful. Overall, I really like this suitcase"
Suitcase, non-sensory review | "High quality materials, it is built to last!" | "I have to say I really like this suitcase. The wheels and case are easy to set-up and it has a great maneuverability. The materials are super high quality and durable. Overall, I really like this suitcase"
Bed sheets, sensory review | "Fabric super soft and great to snuggle!" | "I have to say I really like these bed sheets. They fit perfectly to my bed and are great to snuggle with at night. The fabric is super soft, and the color looks beautiful. Overall, I really like these bed sheets"
Bed sheets, non-sensory review | "Fabric super high quality and durable!" | "I have to say I really like these bed sheets. They fit perfectly to my bed and are easy to put on. The fabric is super high quality and durable. Overall, I really like these bed sheets"
Instant pot, sensory review | "Delicious dishes!" | "I have to say I really like this instant pot. You can prepare many delicious dishes. The food tastes, looks, and smells great. Overall, I really like this instant pot"
Instant pot, non-sensory review | "High quality materials and easy to set-up!" | "I have to say I really like this instant pot. It has many features to prepare different recipes and is easy to install and clean. The materials are super high quality and durable. Overall, I really like this instant pot"

References

Banks, G.C., Woznyj, H.M., Wesslen, R.S. and Ross, R.L. (2018), “A review of best practice recommendations for text analysis in R (and a user-friendly app)”, Journal of Business and Psychology, Springer New York LLC, Vol. 33 No. 4, pp. 445-459.

Bi, N.C., Zhang, R. and Ha, L. (2019), “Does valence of product review matter?: the mediating role of self-effect and third-person effect in sharing YouTube word-of-mouth (vWOM)”, Journal of Research in Interactive Marketing, Emerald Group Publishing, Vol. 13 No. 1, pp. 79-95.

Blei, D.M., Ng, A.Y. and Jordan, M.I. (2003), "Latent Dirichlet allocation", Journal of Machine Learning Research, Vol. 3, pp. 993-1022.

Chakraborty, U. (2019), “The impact of source credible online reviews on purchase intention: the mediating roles of brand equity dimensions”, Journal of Research in Interactive Marketing, Emerald Group Publishing, Vol. 13 No. 2, pp. 142-161.

Chan, H. and Yang, M.X. (2021), “Culture and electronic word of mouth: a synthesis of findings and an agenda for research”, Journal of Global Marketing, Routledge, Vol. 34 No. 3, doi: 10.1080/08911762.2021.1903642.

Chen, Z. and Doss, H. (2019), “Inference for the number of topics in the latent dirichlet allocation model via Bayesian mixture modeling”, Journal of Computational and Graphical Statistics, American Statistical Association, Vol. 28 No. 3, pp. 567-585.

Chen, P.-Y., Dhanasobhon, S. and Smith, M.D. (2008), “All reviews are not created equal: the disaggregate impact of reviews and reviewers at Amazon.Com”, SSRN Electronic Journal, Elsevier BV, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=918083, doi: 10.2139/ssrn.918083 (accessed 16 September 2019).

Chen, Y., Liu, Y. and Zhang, J. (2012), “When do third-party product reviews affect firm value and what can firms do? The case of media critics and professional movie reviews”, Journal of Marketing, SAGE Publications, Los Angeles, CA, Vol. 76 No. 2, pp. 116-134.

Cioppi, M., Curina, I., Forlani, F. and Pencarelli, T. (2019), “Online presence, visibility and reputation: a systematic literature review in management studies”, Journal of Research in Interactive Marketing, Vol. 13 No. 4, pp. 547-577.

Dai, H., Chan, C. and Mogilner, C. (2020), “People rely less on consumer reviews for experiential than material purchases”, in Dahl, D.W., Campbell, M.C. and Lamberton, C. (Eds), Journal of Consumer Research, Oxford University Press, Vol. 46 No. 6, pp. 1052-1075.

Dash, A., Zhang, D. and Zhou, L. (2021), “Personalized ranking of online reviews based on consumer preferences in product features”, International Journal of Electronic Commerce, Routledge, Vol. 25 No. 1, pp. 29-50.

Ghose, A. and Ipeirotis, P.G. (2011), “Estimating the helpfulness and economic impact of product reviews: mining text and reviewer characteristics”, IEEE Transactions on Knowledge and Data Engineering, Vol. 23 No. 10, pp. 1498-1512.

Haase, J., Wiedmann, K.-P. and Bettels, J. (2018), "Sensory imagery in advertising: how the senses affect perceived product design and consumer attitude", Journal of Marketing Communications, Routledge, Vol. 26 No. 5, pp. 475-487, doi: 10.1080/13527266.2018.1518257.

Hannigan, T.R., Haans, R.F., Vakili, K., Tchalian, H., Glaser, V.L., Wang, M.S., Kaplan, S. and Jennings, P.D. (2019), “Topic modeling in management research: rendering new theory from textual data”, Academy of Management Annals, Vol. 13 No. 2, pp. 586-632, doi: 10.5465/Annals.2017.0099.

Haugeland, J. (1996), “Objective perception”, Perception, No. 5, p. 268.

Hayes, A.F. (2017), Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach, Guilford Publications, New York.

Huang, G. and Liang, H. (2021), “Uncovering the effects of textual features on trustworthiness of online consumer reviews: a computational-experimental approach”, Journal of Business Research, Elsevier, Vol. 126, pp. 1-11.

Huang, M. and Pape, A.D. (2020), “The impact of online consumer reviews on online sales: the case-based decision theory approach”, Journal of Consumer Policy, Springer, Vol. 43 No. 3, pp. 463-490.

Hussain, S. (2019), “Sensory marketing strategies and consumer behavior: sensible selling using all five senses”, IUP Journal of Business Strategy, Vol. 16 No. 3, pp. 34-44.

Jacobsen, S. (2018), “Why did I buy this?: the effect of WOM and online reviews on post purchase attribution for product outcomes”, Journal of Research in Interactive Marketing, Emerald Group Publishing, Vol. 12 No. 3, pp. 370-395.

Kim, J.M., Kim, M. and Key, S. (2020), “When profile photos matter: the roles of reviewer profile photos in the online review generation and consumption processes”, Journal of Research in Interactive Marketing, Emerald Group Holdings, Vol. 14 No. 4, pp. 391-412.

Krishna, A. (2012), “An integrative review of sensory marketing: engaging the senses to affect perception, judgment and behavior”, Journal of Consumer Psychology, Vol. 22 No. 3, doi: 10.1016/j.jcps.2011.08.003.

Krishna, A. and Morrin, M. (2008), “Does touch affect taste? The perceptual transfer of product container haptic cues”, Journal of Consumer Research, Vol. 34 No. 6, pp. 807-818.

Krishna, A., Cian, L. and Sokolova, T. (2016), “The power of sensory marketing in advertising”, Current Opinion in Psychology, Elsevier, Vol. 10, pp. 142-147.

Lambert, D. (1992), “Zero-inflated Poisson regression, with an application to defects in manufacturing”, Technometrics, Vol. 34 No. 1, pp. 1-14.

Lantzy, S., Hamilton, R.W., Chen, Y.-J. and Stewart, K. (2021), “Online reviews of credence service providers: what do consumers evaluate, do other consumers believe the reviews, and are interventions needed?”, Journal of Public Policy and Marketing, SAGE Publications, Vol. 40 No. 1, pp. 27-44.

Li, M., Huang, L., Tan, C.H. and Wei, K.K. (2013), “Helpfulness of online product reviews as seen by consumers: source and content features”, International Journal of Electronic Commerce, M.E. Sharpe, Vol. 17 No. 4, pp. 101-136.

Li, C., Duan, Y., Wang, H., Zhang, Z., Sun, A. and Ma, Z. (2017), “Enhancing topic modeling for short texts with auxiliary word embeddings”, ACM Transactions on Information Systems, Association for Computing Machinery, Vol. 36 No. 2, pp. 1-30.

Liao, J. and Wang, D. (2020), “When does an online brand community backfire? An empirical study”, Journal of Research in Interactive Marketing, Emerald Group Holdings, Vol. 14 No. 4, pp. 413-430.

Liu, X., Hu, J. and Xu, B. (2017), “Does eWOM matter to brand extension?”, Journal of Research in Interactive Marketing, Vol. 11 No. 3, pp. 232-245.

Lopez, A., Guerra, E., Gonzalez, B. and Madero, S. (2020), “Consumer sentiments toward brands: the interaction effect between brand personality and sentiments on electronic word of mouth”, Journal of Marketing Analytics, Palgrave Macmillan, Vol. 8 No. 4, pp. 203-223.

Mariani, M.M. and Borghi, M. (2020), “Online review helpfulness and firms' financial performance: an empirical study in a service industry”, International Journal of Electronic Commerce, Routledge, Vol. 24 No. 4, pp. 421-449.

Mimno, D., Wallach, H.M., Talley, E., Leenders, M. and McCallum, A. (2011), “Optimizing semantic coherence in topic models”, Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics, Edinburgh.

Nettelhorst, S., Brannon, L., Rose, A. and Whitaker, W. (2020), “Online viewers' choices over advertisement number and duration”, Journal of Research in Interactive Marketing, Emerald Group Publishing, Vol. 14 No. 2, pp. 215-238.

Ni, J., Li, J. and McAuley, J. (2019), “Justifying recommendations using distantly-labeled reviews and fine-grained aspects”, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), doi: 10.18653/v1/d19-1018.

Petit, O., Velasco, C. and Spence, C. (2019), “Digital sensory marketing: integrating new technologies into multisensory online experience”, Journal of Interactive Marketing, Elsevier, Vol. 45, pp. 42-61.

Porteous, I., Newman, D., Ihler, A., Asuncion, A., Smyth, P. and Welling, M. (2008), “Fast collapsed Gibbs sampling for latent Dirichlet allocation”, Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM Press, New York, NY, pp. 569-577.

Preacher, K.J. and Kelley, K. (2011), “Effect size measures for mediation models: quantitative strategies for communicating indirect effects”, Psychological Methods, Vol. 16 No. 2, pp. 93-115.

Qomariyah, S., Iriawan, N. and Fithriasari, K. (2019), “Topic modeling Twitter data using latent Dirichlet allocation and latent semantic analysis”, Proceedings, Vol. 2194, p. 20042.

Rambocas, M. and Pacheco, B.G. (2018), “Online sentiment analysis in marketing research: a review”, Journal of Research in Interactive Marketing, Emerald Group Publishing, Vol. 12 No. 2, pp. 146-163.

Ryoo, J.H., Wang, X. and Lu, S. (2021), “Do spoilers really spoil? Using topic modeling to measure the effect of spoiler reviews on box office revenue”, Journal of Marketing, SAGE Publications, Vol. 85 No. 2, pp. 70-88.

Sato, Y. and Kording, K.P. (2014), “How much to trust the senses: likelihood learning”, Journal of Vision, Association for Research in Vision and Ophthalmology, Vol. 14 No. 13, p. 13.

Sutherland, I., Sim, Y., Lee, S.K., Byun, J. and Kiatkawsin, K. (2020), “Topic modeling of online accommodation reviews via latent Dirichlet allocation”, Sustainability, MDPI AG, Vol. 12 No. 5, p. 1821.

Swallow, K.M. and Wang, Q. (2020), “Culture influences how people divide continuous sensory experience into events”, Cognition, Elsevier B.V., Vol. 205, p. 104450.

Swani, K., Milne, G. and Brown, B.P. (2013), “Spreading the word through likes on Facebook: evaluating the message strategy effectiveness of Fortune 500 companies”, Journal of Research in Interactive Marketing, Emerald Group Publishing, Vol. 7 No. 4, pp. 269-294.

Trustpilot (2020), “The critical role of reviews in Internet trust”, Trustpilot Business Blog, available at: https://business.trustpilot.com/guides-reports/build-trusted-brand/the-critical-role-of-reviews-in-internet-trust#downloadreport (accessed 21 March 2021).

van Wassenhove, V., Buonomano, D.V., Shimojo, S. and Shams, L. (2008), “Distortions of subjective time perception within and across senses”, PLoS ONE, Public Library of Science, Vol. 3 No. 1, e1437.

Wang, C.L. (2021), “New frontiers and future directions in interactive marketing: inaugural Editorial”, Journal of Research in Interactive Marketing, Emerald Publishing, Vol. 15 No. 1, pp. 1-9.

Wu, R., Wu, H.-H. and Wang, C.L. (2021), “Why is a picture ‘worth a thousand words’? Pictures as information in perceived helpfulness of online reviews”, International Journal of Consumer Studies, John Wiley & Sons, Vol. 45 No. 3, pp. 364-378.

Xue, J., Chen, J., Chen, C., Zheng, C., Li, S. and Zhu, T. (2020), “Public discourse and sentiment during the COVID-19 pandemic: using latent Dirichlet allocation for topic modeling on Twitter”, PLOS ONE, Public Library of Science, Vol. 15 No. 9, e0239441.

Zhu, F. and Zhang, X. (Michael) (2010), “Impact of online consumer reviews on sales: the moderating role of product and consumer characteristics”, Journal of Marketing, SAGE Publications, Vol. 74 No. 2, pp. 133-148.

Acknowledgements

The authors are grateful for the helpful and constructive comments made by the Editor, Cheng Wang, the Associate Editor, Morgan Yang, and the three anonymous reviewers.

Corresponding author

Alberto Lopez is the corresponding author and can be contacted at: alberto_lopez@tec.mx

About the authors

Alberto Lopez is a professor of marketing and business analytics at Tecnologico de Monterrey, Mexico. His research focuses on children's consumer behavior, branding and marketing analytics. He has published scientific articles in the Journal of Consumer Marketing, the Journal of Experimental Psychology and Marketing Intelligence & Planning, among other journals.

Ricardo Garza is the Chief Technology Officer at Softtek, where he leads the innovation team. His research interests focus on innovation culture and on machine learning algorithms for studying and predicting behavior.
