Search results

1 – 10 of over 68000
Book part
Publication date: 13 March 2023

Jochen Hartmann and Oded Netzer

Abstract

The increasing importance and proliferation of text data provide a unique opportunity and novel lens to study human communication across a myriad of business and marketing applications. For example, consumers compare and review products online, individuals interact with their voice assistants to search, shop, and express their needs, investors seek to extract signals from firms' press releases to improve their investment decisions, and firms analyze sales call transcripts to increase customer satisfaction and conversions. However, extracting meaningful information from unstructured text data is a nontrivial task. In this chapter, we review established natural language processing (NLP) methods for traditional tasks (e.g., LDA for topic modeling and lexicons for sentiment analysis and writing style extraction) and provide an outlook into the future of NLP in marketing, covering recent embedding-based approaches, pretrained language models, and transfer learning for novel tasks such as automated text generation and multi-modal representation learning. These emerging approaches allow the field to improve its ability to perform certain tasks that we have been using for more than a decade (e.g., text classification). But more importantly, they unlock entirely new types of tasks that bring about novel research opportunities (e.g., text summarization, and generative question answering). We conclude with a roadmap and research agenda for promising NLP applications in marketing and provide supplementary code examples to help interested scholars to explore opportunities related to NLP in marketing.
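As a flavor of the lexicon-based sentiment analysis the chapter reviews, here is a minimal sketch; the tiny word lists are invented for illustration, whereas real studies would use an established dictionary such as LIWC or VADER:

```python
# Minimal lexicon-based sentiment scorer: counts positive and negative
# cue words and returns their normalized difference in [-1, 1].
import re

# Toy lexicon for illustration only; real work would use an established
# sentiment dictionary such as VADER or LIWC.
POSITIVE = {"great", "love", "excellent", "good", "amazing"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "awful"}

def sentiment_score(text: str) -> float:
    tokens = re.findall(r"[a-z']+", text.lower())
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment_score("Great product, I love it"))     # 1.0
print(sentiment_score("Terrible battery, bad value"))  # -1.0
```

Texts with no lexicon hits score 0.0, one reason embedding-based classifiers the chapter surveys tend to outperform pure lexicons.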

Article
Publication date: 31 October 2023

Hong Zhou, Binwei Gao, Shilong Tang, Bing Li and Shuyu Wang

Abstract

Purpose

The number of construction dispute cases has grown rapidly in recent years. Effective exploration and management of construction contract risk can directly improve performance across the project life cycle. Missing clauses may cause a contract to deviate from standard templates, and if a contract modified by the owner omits key clauses, the resulting disputes may force contractors to pay substantial compensation. Identification of missing clauses in construction project contracts has so far relied on manual review, which is inefficient and highly dependent on reviewer experience, while existing intelligent tools support only contract query and storage. Contract clause management therefore urgently needs a higher level of intelligence. This paper aims to propose an intelligent method for detecting missing clauses in construction project contracts based on natural language processing (NLP) and deep learning technology.

Design/methodology/approach

A complete NLP-based classification scheme for contract clauses is designed. First, construction contract texts are pre-processed and converted from unstructured natural language into structured vector form. Following this initial categorization, a multi-label classifier for long-text contract clauses preliminarily identifies whether clause labels are missing. After this multi-label detection step, the authors implement a clause similarity algorithm that combines MatchPyramid, a model borrowed from image matching, with BERT to identify missing substantive content within the contract clauses.
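The final step of such a pipeline, flagging clause labels absent from a contract, reduces to a set comparison once each clause has been tagged. In this hedged sketch the label names and keyword rules are invented stand-ins for the paper's BERT-based classifier:

```python
# Sketch of missing-clause detection: a (mock) classifier assigns labels
# to contract clauses; required labels never assigned are reported missing.
REQUIRED_LABELS = {"payment", "schedule", "liability", "dispute_resolution"}

KEYWORD_RULES = {  # hypothetical stand-in for the trained BERT classifier
    "payment": ["payment", "invoice"],
    "schedule": ["completion date", "milestone"],
    "liability": ["liable", "damages"],
    "dispute_resolution": ["arbitration", "dispute"],
}

def classify_clause(clause: str) -> set[str]:
    text = clause.lower()
    return {label for label, kws in KEYWORD_RULES.items()
            if any(kw in text for kw in kws)}

def missing_labels(contract_clauses: list[str]) -> set[str]:
    found = set()
    for clause in contract_clauses:
        found |= classify_clause(clause)
    return REQUIRED_LABELS - found

clauses = [
    "The contractor shall submit an invoice for each payment milestone.",
    "Either party may refer a dispute to arbitration.",
]
print(sorted(missing_labels(clauses)))  # ['liability']
```

The paper's second stage then checks whether clauses that *are* present still omit substantive content, which keyword matching alone cannot do.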

Findings

1,322 construction project contracts were tested. Results showed that multi-label classification reached 93% accuracy, similarity matching reached 83% accuracy, and both achieved recall and mean F1 scores above 0.7. The experimental results verify, to some extent, the feasibility of intelligently detecting contract risk with the NLP-based method.

Originality/value

NLP is adept at recognizing textual content and has shown promising results in some contract processing applications. However, the approaches most commonly used for risk detection in construction contract clauses are rule-based and struggle with intricate, lengthy engineering contracts. This paper introduces a deep-learning-based NLP technique that reduces manual intervention and can autonomously identify and tag types of contractual deficiency, aligning with the evolving complexity anticipated in future construction contracts. Moreover, the method can process extended clause texts. Finally, the approach is versatile: users need only adjust parameters such as segmentation for the language in question to detect omissions in contract clauses written in other languages.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 1 May 1991

Ye‐Sho Chen

Abstract

A major difficulty in continuous speech recognition research is the lack of effective and objective evaluation of the statistical models of text. Herbert Simon's view for evaluating theories is here applied to the statistical modelling of text. Three significant contributions can be identified. First, a time‐series representation of text is used to identify three well‐known empirical laws of text generation. These laws provide an effective and objective approach for evaluating four leading statistical models of text. Second, it is shown that the Simon‐Yule model of text provides a constructive mechanism for those laws. Third, based on Simon's explanatory processes of imitation and association, an adaptive framework for continuous speech recognition is suggested.

Details

Kybernetes, vol. 20 no. 5
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 4 June 2021

Lixue Zou, Xiwen Liu, Wray Buntine and Yanli Liu

Abstract

Purpose

Full text of a document is a rich source of information that can be used to provide meaningful topics. The purpose of this paper is to demonstrate how to use citation context (CC) in the full text to identify the cited topics and citing topics efficiently and effectively by employing automatic text analysis algorithms.

Design/methodology/approach

The authors present two novel topic models, Citation-Context-LDA (CC-LDA) and Citation-Context-Reference-LDA (CCRef-LDA). CC is leveraged to extract the citing text from the full text, which makes it possible to discover topics with accuracy. CC-LDA incorporates CC, citing text, and their latent relationship, while CCRef-LDA incorporates CC, citing text, their latent relationship and reference information in CC. Collapsed Gibbs sampling is used to achieve an approximate estimation. The capacity of CC-LDA to simultaneously learn cited topics and citing topics together with their links is investigated. Moreover, a topic influence measure method based on CC-LDA is proposed and applied to create links between the two-level topics. In addition, the capacity of CCRef-LDA to discover topic influential references is also investigated.

Findings

The results indicate CC-LDA and CCRef-LDA achieve improved or comparable performance in terms of both perplexity and symmetric Kullback–Leibler (sKL) divergence. Moreover, CC-LDA is effective in discovering the cited topics and citing topics with topic influence, and CCRef-LDA is able to find the cited topic influential references.
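The symmetric Kullback–Leibler divergence used as an evaluation metric here is the sum of the two directed divergences between topic distributions; a minimal sketch with toy distributions:

```python
import math

def kl(p, q):
    # Directed Kullback-Leibler divergence D(p || q); assumes p and q are
    # strictly positive distributions over the same vocabulary.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def skl(p, q):
    # Symmetric KL divergence: D(p || q) + D(q || p).
    return kl(p, q) + kl(q, p)

# Toy topic-word distributions over a three-word vocabulary.
p = [0.5, 0.3, 0.2]
q = [0.4, 0.4, 0.2]
print(skl(p, p))             # 0.0 -- identical topics
print(round(skl(p, q), 4))   # small positive value -- similar topics
```

Lower sKL between matched cited and citing topics indicates the two topic levels align well, which is how the metric is typically read in topic-model evaluation.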

Originality/value

The automatic method provides novel knowledge for cited topics and citing topics discovery. Topic influence learnt by our model can link two-level topics and create a semantic topic network. The method can also use topic specificity as a feature to rank references.

Details

Library Hi Tech, vol. 39 no. 4
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 4 November 2014

Sangkil Moon, Yoonseo Park and Yong Seog Kim

Abstract

Purpose

The aim of this research is to theorize and demonstrate that analyzing consumers’ text product reviews using text mining can enhance the explanatory power of a product sales model, particularly for hedonic products, which tend to generate emotional and subjective product evaluations. Previous research in this area has been more focused on utilitarian products.

Design/methodology/approach

Our text clustering-based procedure segments text reviews into multiple clusters in association with consumers’ numeric ratings, addressing both consumer heterogeneity in taste preferences and quality valuations and the J-shaped distribution of numeric product ratings. The approach is novel in combining text clustering with numeric product ratings to capture consumers’ subjective product evaluations.
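The core idea, partitioning reviews in association with their numeric ratings and profiling each segment's vocabulary, can be sketched with a simple rating split; the fixed threshold and toy reviews below stand in for the authors' text-clustering procedure:

```python
from collections import Counter
import re

reviews = [  # (numeric rating, review text) -- toy data
    (5, "Loved the plot and the acting was brilliant"),
    (4, "Great soundtrack and a brilliant cast"),
    (1, "Boring plot and terrible pacing"),
    (2, "Terrible dialogue, fell asleep"),
]

STOPWORDS = {"the", "and", "a", "was"}

def top_terms(texts, k=3):
    # Most frequent non-stopword terms across a segment of reviews.
    counts = Counter(t for text in texts
                     for t in re.findall(r"[a-z]+", text.lower())
                     if t not in STOPWORDS)
    return [w for w, _ in counts.most_common(k)]

# A fixed rating threshold stands in for the paper's text-based clustering.
high = [text for rating, text in reviews if rating >= 4]
low = [text for rating, text in reviews if rating < 4]
print(top_terms(high))  # 'brilliant' recurs in the positive segment
print(top_terms(low))   # 'terrible' recurs in the negative segment
```

Per-segment term profiles like these are what let the sales model use *why* viewers liked or disliked a film, not just the numeric rating.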

Findings

Using the movie industry as our empirical application, we find that our approach of making use of product text reviews can improve the explanatory power and predictive validity of the box-office sales model.

Research limitations/implications

Marketing scholars have actively investigated the impact of consumers’ online product reviews on product sales, primarily focusing on consumers’ numeric product ratings. Recently, studies have also examined user-generated content. Similarly, this study looks into users’ textual product reviews to explain product sales. It remains to be seen how generalizable our empirical results are beyond our movie application.

Practical implications

Whereas numeric ratings can indicate how much viewers liked products, consumers’ reviews can convey why viewers liked or disliked them. Therefore, our review analysis can help marketers understand what factors make new products succeed or fail.

Originality/value

Our approach is primarily suited to subjectively evaluated products, mostly hedonic ones. It accounts for the consumer heterogeneity contained in reviews through review clusters that have divergent impacts on sales.

Details

European Journal of Marketing, vol. 48 no. 11/12
Type: Research Article
ISSN: 0309-0566

Article
Publication date: 3 November 2020

Jagroop Kaur and Jaswinder Singh

Abstract

Purpose

Normalization is an important step in all natural language processing applications that handle social media text. Text from social media poses problems that are not present in regular text. Recently, a considerable amount of work has been done in this direction, but mostly for the English language. People who do not speak English code-mix text with their native language and post it on social media using the Roman script, which further aggravates the normalization problem. This paper aims to discuss the concept of normalization with respect to code-mixed social media text, and a model is proposed to normalize such text.

Design/methodology/approach

The system is divided into two phases: candidate generation and most-probable-sentence selection. Candidate generation is treated as a machine translation task in which Roman text is the source language and Gurmukhi text is the target language, and a character-based translation system is proposed to generate candidate tokens. Once candidates are generated, the second phase uses beam search to select the most probable sentence based on a hidden Markov model.
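The second phase, beam search over candidate transliterations under a Markov language model, can be sketched with toy emission and bigram probabilities; the candidate tables below are invented, not the paper's Gurmukhi model:

```python
import math

# Toy transliteration candidates per Roman token (made-up emission
# probabilities) and a toy bigram transition model over target tokens.
candidates = {
    "mera": [("A", 0.7), ("B", 0.3)],
    "ghar": [("C", 0.6), ("D", 0.4)],
}
bigram = {("<s>", "A"): 0.5, ("<s>", "B"): 0.5,
          ("A", "C"): 0.8, ("A", "D"): 0.2,
          ("B", "C"): 0.3, ("B", "D"): 0.7}

def beam_search(tokens, beam_width=2):
    # Each beam entry: (cumulative log probability, chosen sequence).
    beams = [(0.0, ["<s>"])]
    for tok in tokens:
        nxt = []
        for logp, seq in beams:
            for cand, emit in candidates[tok]:
                trans = bigram.get((seq[-1], cand), 1e-6)
                nxt.append((logp + math.log(emit * trans), seq + [cand]))
        beams = sorted(nxt, reverse=True)[:beam_width]  # keep best paths
    _, seq = beams[0]
    return seq[1:]  # drop the start-of-sentence marker

print(beam_search(["mera", "ghar"]))  # ['A', 'C']
```

Note the selected path need not chain locally best candidates: the transition model can overturn a high emission score, which is exactly why sentence-level selection beats per-token lookup.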

Findings

Character error rate (CER) and bilingual evaluation understudy (BLEU) score are reported. The proposed system was compared with the Akhar software and the RB\_R2G system, which can also transliterate Roman text to Gurmukhi, and it outperforms Akhar. For ill-formed text, the CER and BLEU scores are 0.268121 and 0.6807939, respectively.

Research limitations/implications

It was observed that the system produces dialectal variations of a word, or words with minor errors such as missing diacritics. A spell checker could improve the output by correcting these minor errors. Extensive experimentation is needed to optimize the language identifier, which would further improve the output. The language model also merits further exploration; the inclusion of wider context, particularly from social media text, is an important area for further investigation.

Practical implications

The practical implications of this study are: (1) development of a parallel dataset containing Roman and Gurmukhi text; (2) development of a dataset annotated with language tags; (3) development of the normalization system, which is the first of its kind, proposes a translation-based solution for normalizing noisy social media text from Roman to Gurmukhi script, and can be extended to any pair of scripts; and (4) the proposed system can be used for better analysis of social media text. Theoretically, the study aids understanding of text normalization in a social media context and opens the door to further research in multilingual social media text normalization.

Originality/value

Existing research focuses on normalizing monolingual text. This study contributes towards the development of a normalization system for multilingual text.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 13 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 19 January 2023

Peter Organisciak, Michele Newman, David Eby, Selcuk Acar and Denis Dumas

Abstract

Purpose

Most educational assessments tend to be constructed in a closed-ended format, which is easier to score consistently and more affordable. However, recent work has leveraged computational text methods from the information sciences to make open-ended measurement more effective and reliable for older students. The purpose of this study is to determine whether the models used by computational text mining applications need to be adapted when used with samples of elementary-aged children.

Design/methodology/approach

This study introduces domain-adapted semantic models for child-specific text analysis, allowing better educational assessment of elementary-aged children. A corpus compiled from a multimodal mix of spoken and written child-directed sources is presented and used to train a children’s language model, which is evaluated against standard non-age-specific semantic models.

Findings

Child-oriented language is found to differ in vocabulary and word sense use from general English, while exhibiting lower gender and race biases. The model is evaluated in an educational application of divergent thinking measurement and shown to improve on generalized English models.

Research limitations/implications

The findings demonstrate the need for age-specific language models in the growing domain of automated divergent thinking measurement and, by showing a measurable difference in the language of children, strongly encourage the same for other educational uses of computational text analysis.

Social implications

Understanding children’s language more representatively in automated educational assessment allows for more fair and equitable testing. Furthermore, child-specific language models have fewer gender and race biases.

Originality/value

Research in computational measurement of open-ended responses has thus far used models of language trained on general English sources or domain-specific sources such as textbooks. To the best of the authors’ knowledge, this paper is the first to study age-specific language models for educational assessment. In addition, while there have been several targeted, high-quality corpora of child-created or child-directed speech, the corpus presented here is the first developed with the breadth and scale required for large-scale text modeling.

Details

Information and Learning Sciences, vol. 124 no. 1/2
Type: Research Article
ISSN: 2398-5348

Article
Publication date: 15 October 2022

Lauren N. Irwin and Julie R. Posselt

Abstract

Developing leaders for a diverse democracy is an increasingly important aim of higher education and social justice is ever more a goal of leadership education efforts. Accordingly, it is important to explore how dominant leadership models, as blueprints for student leadership development, account for and may unwittingly reinforce systems of domination, like racism. This critical discourse analysis, rooted in racialization and color-evasiveness, examines three prominent college student leadership development models to examine how leaders and leadership are racialized. We find that all three leadership texts frame leaders and leadership in color-evasive ways. Specifically, the texts’ discourses reveal three mechanisms for evading race in leadership: focusing on individual identities, emphasizing universality, and centering collaboration. Implications for race in leadership development, the social construction of leadership more broadly, and future scholarship are discussed.

Details

Journal of Leadership Education, vol. 21 no. 4
Type: Research Article
ISSN: 1552-9045

Article
Publication date: 29 April 2021

Heng-Yang Lu, Yi Zhang and Yuntao Du

Abstract

Purpose

Topic models have been widely applied to discover important information in vast amounts of unstructured data. Traditional long-text topic models such as Latent Dirichlet Allocation may suffer from a sparsity problem when dealing with short texts, which mostly come from the Web, and from a readability problem when displaying the discovered topics. The purpose of this paper is to propose a novel model, the Sense Unit based Phrase Topic Model (SenU-PTM), that addresses both the sparsity and readability problems.

Design/methodology/approach

SenU-PTM is a novel phrase-based short-text topic model built on a two-phase framework. The first phase introduces a phrase-generation algorithm that exploits word embeddings to generate phrases from the original corpus. The second phase introduces a new concept, the sense unit, which consists of a set of semantically similar tokens for modeling topics with the token vectors generated in the first phase. Finally, SenU-PTM infers topics based on these two phases.
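Grouping semantically similar tokens into sense units can be illustrated with cosine similarity over toy two-dimensional vectors; the greedy thresholding below is an invented stand-in for the paper's actual procedure:

```python
import math

# Toy 2-D token vectors; real work would use trained word embeddings
# with hundreds of dimensions.
vectors = {
    "cheap": (0.9, 0.1), "inexpensive": (0.85, 0.2),
    "fast": (0.1, 0.95), "quick": (0.15, 0.9),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def sense_units(vocab, threshold=0.95):
    # Greedy grouping: a token joins the first unit whose seed token it
    # resembles closely enough, otherwise it starts a new unit.
    units = []
    for tok in vocab:
        for unit in units:
            if cosine(vectors[tok], vectors[unit[0]]) >= threshold:
                unit.append(tok)
                break
        else:
            units.append([tok])
    return units

print(sense_units(list(vectors)))  # [['cheap', 'inexpensive'], ['fast', 'quick']]
```

Because each unit pools the counts of several near-synonyms, a short text contributes evidence to far fewer, denser modeling units, which is how sense units mitigate sparsity.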

Findings

Experimental results on two real-world and publicly available datasets show the effectiveness of SenU-PTM from the perspectives of topical quality and document characterization. It reveals that modeling topics on sense units can solve the sparsity of short texts and improve the readability of topics at the same time.

Originality/value

The originality of SenU-PTM lies in the new procedure of modeling topics on the proposed sense units with word embeddings for short-text topic discovery.

Details

Data Technologies and Applications, vol. 55 no. 5
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 21 June 2023

Parvin Reisinezhad and Mostafa Fakhrahmad

Abstract

Purpose

Questionnaire studies of knowledge, attitude and practice (KAP) are an effective form of health research but have many shortcomings. The purpose of this research is to propose an automatic, questionnaire-free method based on deep learning techniques that addresses these shortcomings, and to apply the proposed method to public comments on Twitter to identify gaps in people’s KAP regarding COVID-19.

Design/methodology/approach

In this paper, two models are proposed to achieve the mentioned purposes, the first one for attitude and the other for people’s knowledge and practice. First, the authors collect some tweets from Twitter and label them. After that, the authors preprocess the collected textual data. Then, the text representation vector for each tweet is extracted using BERT-BiGRU or XLNet-GRU. Finally, for the knowledge and practice problem, a multi-label classifier with 16 classes representing health guidelines is proposed. Also, for the attitude problem, a multi-class classifier with three classes (positive, negative and neutral) is proposed.

Findings

Labeling quality directly affects the performance of the final model, so the authors calculated inter-rater reliability using Krippendorff’s alpha coefficient, which confirms the reliability of the assessment in both problems: agreement reached 87% for the knowledge-and-practice problem and 95% for the attitude problem. This high agreement indicates the reliability of the dataset and warrants the assessment. Evaluated on several metrics, both proposed models perform better than common methods. The analyses for KAP are also more efficient than questionnaire methods, resolving many of their shortcomings; most importantly, the method increases the speed of evaluation, enlarges the studied population and gathers reliable opinions, yielding accurate results.
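Krippendorff's alpha for the two-coder, nominal-label case can be computed directly from the coincidence matrix; a minimal sketch (real studies would use a robust implementation that also handles missing data and more raters):

```python
from collections import Counter

def krippendorff_alpha_nominal(rater_a, rater_b):
    # Krippendorff's alpha for exactly two raters, nominal labels, no
    # missing data, via the coincidence-matrix formulation. Assumes at
    # least two distinct labels occur (otherwise alpha is undefined).
    o = Counter()  # coincidence matrix o[(c, k)]
    for a, b in zip(rater_a, rater_b):
        o[(a, b)] += 1
        o[(b, a)] += 1
    n_c = Counter()  # marginal totals per label
    for (c, _), count in o.items():
        n_c[c] += count
    n = sum(n_c.values())  # total number of pairable values
    d_obs = sum(count for (c, k), count in o.items() if c != k)
    d_exp = sum(n_c[c] * n_c[k] for c in n_c for k in n_c if c != k)
    return 1.0 - (n - 1) * d_obs / d_exp

print(krippendorff_alpha_nominal(list("AABA"), list("AABB")))  # ~0.533
```

Perfect agreement yields alpha = 1.0, while alpha near 0 indicates agreement no better than chance; the paper's 87% and 95% figures are percent agreement, which alpha corrects for chance.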

Research limitations/implications

This research is based on social network datasets, and these data cannot definitively reveal users’ public profile information. Addressing this limitation would involve considerable complexity with little certainty, so the final analysis is presented independent of users’ public profile information.

Practical implications

Combining recurrent neural networks with attention-based methods improves model performance and reduces the need for large training datasets. These methods are also effective in improving the implementation of KAP research and eliminating its shortcomings, and the results can carry over to other text-processing tasks. The analysis of people’s attitude, practice and knowledge regarding the health guidelines supports effective planning and implementation of health decisions, interventions and required training by health institutions. The results show an effective relationship between attitude, practice and knowledge: people are better at following health guidelines than at being aware of COVID-19, and despite many tensions during the epidemic, most people still discuss the issue with a positive attitude.

Originality/value

To the best of our knowledge, so far, no text processing-based method has been proposed to perform KAP research. Also, our method benefits from the most valuable data of today’s era (i.e. social networks), which is the expression of people’s experiences, facts and free opinions. Therefore, our final analysis provides more realistic results.

Details

Kybernetes, vol. 52 no. 7
Type: Research Article
ISSN: 0368-492X
