Search results
1 – 10 of over 15,000
Bijitaswa Chakraborty and Titas Bhattacharjee
Abstract
Purpose
The purpose of this paper is to provide a comprehensive review and synthesis of automated textual analysis of corporate disclosure, showing how the accuracy of disclosure tone has improved with the evolution of the automated methods used to calculate tone in prior studies.
Design/methodology/approach
This study surveys the literature on automated textual analysis of corporate disclosure and its impact, searching the Google Scholar and Scopus databases for papers published after 2000. After classifying the prior literature into dictionary-based and machine learning-based approaches, the study further sub-classifies those papers along two other dimensions, namely, the information sources of disclosure and the impact of tone on the market.
Findings
This study found literature on how the value relevance of tone varies with the automated method and information source used, as well as literature on the impact of such tone on the market. These findings support investors’ decision-making and researchers’ prediction of earnings and returns. The survey shows that a research gap lies in developing methodologies that calculate tone more accurately. The study also shows how different information sources and methodologies can change the disclosure tone measured for the same firm, which, in turn, may change market performance. A further research gap lies in identifying the determinants of disclosure tone using large-scale data.
Originality/value
After reviewing papers based on automated textual analysis of corporate disclosure, this study shows how the accuracy of results has improved with the evolution of automated methodology. Apart from the methodological research gaps, the study also identifies other gaps related to the determinants of tone (corporate governance, firm-level and macroeconomic factors, etc.) and the transparency or credibility of disclosure, which could stimulate new research agendas in automated textual analysis of corporate disclosure.
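The dictionary-based approach this review classifies can be sketched as simple word counting. A minimal illustration follows; the word lists here are invented for demonstration, whereas the reviewed studies typically rely on domain-specific lexicons:

```python
import re

# Toy word lists, illustrative only -- prior studies use domain-specific
# dictionaries for financial disclosure, not these few words.
POSITIVE = {"growth", "profit", "strong", "improve", "gain"}
NEGATIVE = {"loss", "decline", "weak", "risk", "impairment"}

def tone(text: str) -> float:
    """Net tone in [-1, 1]: (pos - neg) / (pos + neg); 0.0 if no hits."""
    words = re.findall(r"[a-z']+", text.lower())
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)

print(tone("Strong profit growth offset a small impairment loss."))  # -> 0.2
```

Machine learning-based approaches replace the fixed word lists with classifiers trained on labelled disclosures, which is how the surveyed literature reports accuracy gains.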
Details
Keywords
Abstract
Purpose
Access to high-quality data is a challenge for humanitarian logistics researchers. However, humanitarian organizations publish large quantities of documents for various stakeholders. Researchers can use these as secondary data, but interpreting big volumes of text is time consuming. The purpose of this paper is to present an automated quantitative content analysis (AQCA) approach that allows researchers to analyze such documents quickly and reliably.
Design/methodology/approach
Content analysis is a method to facilitate a systematic description of documents. This paper builds on an existing content analysis method, to which it adds automated steps for processing large quantities of documents. It also presents different measures for quantifying the content of documents.
Findings
The AQCA approach has been applied successfully in four papers. For example, it can identify the main theme in a document, categorize documents along different dimensions, or compare the use of a theme in different documents. This paper also identifies several limitations of content analysis in the field of humanitarian logistics research and suggests ways to mitigate them.
Research limitations/implications
The AQCA approach does not provide an exhaustive qualitative analysis of documents. Instead, it aims to analyze documents quickly and reliably to extract the contents’ quantifiable aspects.
Originality/value
Although content analysis has been used in humanitarian logistics research before, no paper has yet proposed an automated, step-by-step approach that researchers can use. This is also the first study to discuss specific limitations of content analysis in the context of humanitarian logistics.
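One quantifiable measure the AQCA paper mentions, identifying the main theme of a document, can be sketched as dictionary-based frequency counting. The theme dictionaries below are hypothetical stand-ins; a real study would derive them from the humanitarian-logistics literature:

```python
import re
from collections import Counter

# Hypothetical theme dictionaries for illustration only.
THEMES = {
    "logistics": {"transport", "warehouse", "fleet", "delivery", "supply"},
    "funding":   {"donor", "budget", "funding", "grant", "appeal"},
    "health":    {"clinic", "vaccine", "medicine", "nutrition", "water"},
}

def theme_counts(text: str) -> Counter:
    """Count dictionary hits per theme in one document."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(t for w in words
                   for t, vocab in THEMES.items() if w in vocab)

def main_theme(text: str) -> str:
    """Return the most frequent theme, or 'none' if no dictionary word hits."""
    counts = theme_counts(text)
    return counts.most_common(1)[0][0] if counts else "none"

print(main_theme("The donor budget covers vaccine delivery"))  # -> funding
```

The same counts can categorize documents along dimensions or compare a theme's prevalence across documents, the other uses the paper describes.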
Details
Keywords
Tamer Elshandidy, Philip J. Shrives, Matt Bamber and Santhosh Abraham
Abstract
This paper provides a wide-ranging and up-to-date (1997–2016) review of the archival empirical risk-reporting literature. The reviewed papers are classified into two principal themes: the incentives for and/or informativeness of risk reporting. Our review demonstrates areas of significant divergence in the literature, specifically: mandatory versus voluntary risk reporting, manual versus automated content analysis, within-country versus cross-country variations in risk reporting, and risk reporting in financial versus non-financial firms. Our paper identifies a number of issues which require further research. In particular, we draw attention to two: first, a lack of clarity and consistency around the conceptualization of risk; and second, the potential costs and benefits of standard-setters’ involvement.
Details
Keywords
Jeannette Paschen, Matthew Wilson and Karen Robson
Abstract
Purpose
This study aims to investigate motivations and human values of everyday consumers who participate in the annual day of consumption restraint known as Buy Nothing Day (BND). In addition, this study demonstrates a hybrid content analysis method in which artificial intelligence and human contributions are used in the data analysis.
Design/methodology/approach
This research uses a hybrid method of content analysis of a large Twitter data set spanning three years.
Findings
Consumer motivations are categorized as relating to consumerism, personal welfare, wastefulness, environment, inequality, anti-capitalism, financial responsibility, financial necessity, health, ethics and resistance to American culture. Of these, consumerism and personal welfare are the most common. Moreover, human values related to “openness to change” and “self-transcendence” were prominent in the BND tweets.
Research limitations/implications
This research demonstrates the effectiveness of a hybrid content analysis methodology and uncovers the motivations and human values that average consumers (as opposed to consumer activists) have for restraining their consumption.
Practical implications
This research provides insight for firms wishing to better understand and respond to consumption restraint.
Originality/value
The question of why everyday consumers engage in consumption restraint has received little attention in the scholarly discourse; this research provides insight into “everyday” consumer motivations for engaging in restraint using a hybrid content analysis of a large data set spanning over three years.
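The hybrid workflow described here, machine coding combined with human contribution, can be sketched as rule-based auto-labelling that routes ambiguous items to a human coder. The categories and keywords below are hypothetical, not the paper's actual codebook:

```python
# Hypothetical motivation keywords for illustration only; substring
# matching is deliberately crude -- a sketch, not a production classifier.
KEYWORDS = {
    "consumerism":     {"consumerism", "ads", "materialism", "shopping"},
    "environment":     {"planet", "waste", "climate", "recycle"},
    "anti-capitalism": {"capitalism", "corporations", "profiteering"},
}

def auto_label(tweet):
    """Return a category when exactly one matches; otherwise defer to a human."""
    text = tweet.lower()
    hits = [cat for cat, words in KEYWORDS.items()
            if any(w in text for w in words)]
    return hits[0] if len(hits) == 1 else None  # None -> human review queue

tweets = ["Skip the shopping, save the planet", "No ads today"]
labels = [auto_label(t) or "needs human coder" for t in tweets]
print(labels)  # -> ['needs human coder', 'consumerism']
```

Routing only the ambiguous minority to humans is what lets a hybrid design scale to a multi-year Twitter data set while preserving coding quality.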
Details
Keywords
Suzan Abed, Basil Al-Najjar and Clare Roberts
Abstract
Purpose
This paper aims to investigate empirically the common alternative methods of measuring annual report narratives. Five alternative methods are employed: weighted and un-weighted disclosure indices and three textual coding systems measuring the amount of space devoted to relevant disclosures.
Design/methodology/approach
The authors investigate the forward-looking voluntary disclosures of 30 UK non-financial companies. They employ descriptive analysis, a correlation matrix, mean-comparison t-tests, rankings and multiple regression analysis of disclosure measures against the determinants of corporate voluntary reporting.
Findings
The results reveal that while the alternative measures of forward-looking voluntary disclosure are highly correlated, important differences nevertheless emerge. In particular, it appears important to measure volume rather than simply the existence or non-existence of each type of disclosure. Overall, we find that the optimal method is content analysis by text unit rather than by sentence.
Originality/value
This paper contributes to the extant literature in forward-looking disclosure by reporting important differences among alternative content analyses. However, the decision regarding whether this should be a computerised or a manual content analysis appears not to be driven by differences in the resulting measures. Rather, the choice is the outcome of a trade-off between the time involved in setting up coding rules for computerised analysis versus the time saved undertaking the analysis itself.
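The contrast the paper draws, existence versus volume and sentence versus text-unit coding, can be illustrated by computing three measures over one narrative. The forward-looking keyword list is hypothetical:

```python
import re

# Hypothetical forward-looking keywords; a real study would use a
# validated list, and would likely stem or lemmatize.
FORWARD = {"expect", "forecast", "anticipate", "next", "plan"}

def measures(narrative: str) -> dict:
    """Three disclosure measures: existence, sentence count, text-unit count."""
    sentences = [s for s in re.split(r"[.!?]+", narrative) if s.strip()]
    words = re.findall(r"[a-z]+", narrative.lower())
    hits = [w for w in words if w in FORWARD]
    return {
        "exists": int(bool(hits)),             # un-weighted index item (0/1)
        "sentences": sum(any(k in s.lower() for k in FORWARD)
                         for s in sentences),  # sentence-level coding
        "text_units": len(hits),               # text-unit (word) coding
    }

print(measures("We expect growth. We plan to expand and anticipate higher margins."))
```

On this example the binary index scores 1, sentence coding scores 2, and text-unit coding scores 3, which is exactly the kind of divergence that makes volume measures more informative than existence measures.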
Details
Keywords
Annamaria Tuan, Daniele Dalli, Alessandro Gandolfo and Anastasia Gravina
Abstract
Purpose
The authors have systematically reviewed 534 corporate social responsibility communication (CSRC) papers, updating the current debate about the ontological and epistemological paradigms that characterize the field, and providing evidence of the interactions between these paradigms and the related methodological choices. The purpose of this paper is to provide theoretical and methodological implications for future research in the CSRC research domain.
Design/methodology/approach
The authors used the Scopus database to search titles, abstracts and related keywords with two query sets relating to corporate social responsibility (e.g. corporate ethical, corporate environmental, social responsibility, corporate accountability) and CSRC (e.g. reporting, disclosure, dialogue, sensemaking). The authors identified 534 empirical papers (2000–2016), which they coded manually to identify the research methods and research designs (Creswell, 2013). The authors then developed an ad hoc dictionary whose keywords relate to the three primary CSRC approaches (instrumental, normative and constitutive). Using the software Linguistic Inquiry and Word Count, the authors undertook an automated content analysis in order to measure these approaches’ relative popularity and compare the methods employed in empirical research.
Findings
The authors found that the instrumental approach, which belongs to the functionalist paradigm, dominates the CSRC literature, with its relative weight constant over time. The normative approach also belongs to the functionalist paradigm, but plays a minor yet enduring role. The constitutive approach belongs to the interpretive paradigm and grew slightly over time, but still lags well behind the instrumental approach. In the instrumental approach, many papers report descriptive empirical analyses. In the constitutive approach, theory-method relationships are in line with the various paradigmatic traits, while the normative approach presents critical issues. Methodologically, the review underlines three major limitations that characterize the existing empirical evidence and provides avenues for future research. While multi-paradigmatic research is promoted in the CSRC literature (Crane and Glozer, 2016; Morsing, 2017; Schoeneborn and Trittin, 2013), the authors found no empirical evidence of it.
Originality/value
This is the first paper to systematically review empirical research in the CSRC field and is also the first to address the relationship between research paradigms, theoretical approaches, and methods. Further, the authors suggest a novel way to develop systematic reviews (i.e. via quantitative, automated content analysis), which can now also be applied in other literature streams and in other contexts.
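The quantitative review technique the authors propose, measuring the relative popularity of approaches via an ad hoc dictionary, can be sketched as counting dictionary hits across a corpus of abstracts. The keyword sets below are invented stand-ins for the authors' dictionary:

```python
import re
from collections import Counter

# Invented keyword sets standing in for an ad hoc dictionary of
# instrumental, normative and constitutive CSRC vocabulary.
APPROACHES = {
    "instrumental": {"performance", "reputation", "value", "stakeholder"},
    "normative":    {"duty", "ethics", "legitimacy", "accountability"},
    "constitutive": {"sensemaking", "dialogue", "negotiation", "identity"},
}

def approach_shares(abstracts):
    """Share of dictionary hits per approach across a corpus of texts."""
    hits = Counter()
    for text in abstracts:
        for w in re.findall(r"[a-z]+", text.lower()):
            for name, vocab in APPROACHES.items():
                if w in vocab:
                    hits[name] += 1
    total = sum(hits.values()) or 1  # avoid division by zero
    return {name: hits[name] / total for name in APPROACHES}

corpus = ["Stakeholder value and reputation", "Ethics and dialogue"]
print(approach_shares(corpus))
```

Tracking these shares across publication years is what lets the authors say the instrumental approach's weight is constant while the constitutive approach grows slightly.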
Details
Keywords
Birte Fähnrich, Jens Vogelgesang and Michael Scharkow
Abstract
Purpose
This study is dedicated to universities' strategic social media communication and focuses on the fan engagement triggered by Facebook postings. The study contributes to a growing body of knowledge on the strategic communication of universities, which have thus far hardly dealt with questions of resonance and the evaluation of their social media messages.
Design/methodology/approach
Using the Facebook Graph API, the authors collected posts from the official Facebook fan pages of the universities listed on Shanghai Ranking's Top 50 of 2015. Specifically, the authors retrieved all posts in a three-year range from October 2012 to September 2015. After downloading the Facebook posts, the authors used tools for automated content analysis to investigate the features of the post messages.
Findings
Overall, the median number of likes per 10,000 fans was 4.6, while the median numbers of comments (MD = 0.12) and shares (MD = 0.40) were considerably lower. The average Facebook Like Ratio of universities per 10,000 fans was 17.93%, the average Comment Ratio (CR) was 0.56% and the average Share Ratio (SR) was 2.82%. Comparing the universities' average Like Ratio (17.93%) and Share Ratio (2.82%) with the respective Like Ratio (5.90%) and Share Ratio (0.45%) of global brands per 10,000 fans, universities are three times (likes) and six times (shares) as successful as global brands in triggering engagement among their fan bases.
Research limitations/implications
The content analysis was based solely on the publicly observable Facebook communication of the universities listed in the Shanghai Ranking's Top 50, with posts sampled between October 2012 and September 2015. Moreover, the authors focused on a single social media channel (Facebook), which might restrict the generalizability of the findings. These limitations notwithstanding, university communicators are invited to take advantage of the study's insights to become more successful in generating fan engagement.
Practical implications
First, posts published on the weekend generate significantly more engagement than those published on workdays. Second, the findings suggest that posts published in the evening generate more engagement than those published during other times of day. Third, research-related posts trigger a certain number of shares, but at the same time these posts tend to lower engagement with regard to liking and commenting.
Originality/value
To the authors’ best knowledge, this automated content analysis of 72,044 Facebook posts of universities listed in the Top 50 of the Shanghai Ranking is the first large-scale longitudinal investigation of a social media channel of higher education institutions.
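The engagement metrics reported above reduce to simple normalizations by fan count. A sketch of two plausible definitions follows; the paper's exact ratio formulas may differ, so treat these as assumptions:

```python
def per_10k(count: int, fans: int) -> float:
    """Interactions (likes, comments or shares) per 10,000 page fans."""
    return count / fans * 10_000

def ratio_pct(count: int, fans: int) -> float:
    """Interaction count as a percentage of the fan base."""
    return count / fans * 100

# Hypothetical example: a post with 90 likes on a page of 200,000 fans.
print(per_10k(90, 200_000))    # -> 4.5 likes per 10,000 fans
print(ratio_pct(90, 200_000))  # -> 0.045 percent of the fan base
```

Normalizing by fan count is what makes engagement comparable across pages of very different sizes, e.g. between universities and global brands.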
Details
Keywords
Saif Mir, Shih-Hao Lu, David Cantor and Christian Hofer
Abstract
Purpose
Content analysis is a methodology that has been used in many academic disciplines as a means to extract quantitative measures from textual information. The purpose of this paper is to document the use of content analysis in the supply chain literature. The authors also discuss opportunities for future research.
Design/methodology/approach
The authors conduct a literature review of 13 leading supply chain journals to assess the state of the content analysis-based literature and identify opportunities for future research. Additionally, the authors provide a general schema for and illustration of the use of content analysis.
Findings
The findings suggest that content analysis for quantitative studies and hypothesis testing purposes has rarely been used in the supply chain discipline. The research also suggests that in order to fully realize the potential of content analysis, future content analysis research should conduct more hypothesis testing, employ diverse data sets, utilize state-of-the-art content analysis software programs, and leverage multi-method research designs.
Originality/value
The current research synthesizes the use of content analysis methods in the supply chain domain and promotes the need to capitalize on the advantages offered by this research methodology. The paper also presents several topics for future research that can benefit from the content analysis method.
Details
Keywords
Klaus Weber, Hetal Patel and Kathryn L. Heinze
Abstract
Much of contemporary institutional theory rests on the identification of structured, coherent, and encompassing logics, and from there proceeds to examine multilevel dynamics or the relationship between logics in a field. Less research directly studies the internal properties and dynamics of logics and how they are structured over time. In this paper, we propose a method for understanding the content and organization of logics over time. We advocate for an analysis of logics that is grounded in a repertoire view of culture (Swidler, 1986; Weber, 2005). This approach involves identifying the set of cultural categories that can make up logics, and measuring empirically the dimensions that mark a cultural system as more or less logic-like. We discuss several text analytic approaches suitable for discourse data, and outline a seven-step method for describing the internal organization of a cultural repertoire in terms of its “logic-ness.” We provide empirical illustrations from a historical analysis of the field of alternative livestock agriculture. Our approach provides an integrated theoretical and methodological framework for the analysis of logics across a range of settings.
Details
Keywords