Search results
1 – 10 of 794
Abstract
The main objective of this study is to investigate whether the adoption of International Financial Reporting Standards (IFRS) improves the quality of financial reporting in Nigeria. Financial reporting quality was measured in terms of the fundamental qualitative characteristics, relevance and faithful representation, and the enhancing qualitative characteristics, understandability, comparability, verifiability, and timeliness, as contained in the conceptual framework. The study was conducted on a sample of 162 companies listed on the Nigerian Stock Exchange. A compound measurement tool in the form of an index was developed to comprehensively assess the quality of financial reporting based on information disclosed in the financial statements of the selected companies. From both univariate and multivariate analysis, I found strong evidence suggesting that the accounting standards used in the preparation of financial statements have a significant influence on the quality of the reporting entity's financial reports. The result persists for all three models (overall financial reporting quality, fundamental, and enhancing qualitative characteristics) tested in this analysis. The results also revealed that, apart from firm age and firm growth, most of the firm-specific variables investigated have a statistically significant influence on financial reporting quality.
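A compound index of the kind described is typically a disclosure-scoring scheme: each qualitative characteristic is operationalised as a checklist of items, scored per report, and aggregated. The sketch below is only illustrative — the item lists, scores, and equal weighting are hypothetical assumptions, not the study's actual instrument.

```python
# Minimal sketch of a disclosure-quality index: each qualitative
# characteristic is scored per item (0 = absent, 1 = present) and the
# per-characteristic averages are averaged again into one index in [0, 1].
# Items and scores below are hypothetical, not the study's instrument.

def quality_index(scores: dict) -> float:
    """Average item scores within each characteristic, then across them."""
    per_characteristic = {
        name: sum(items) / len(items) for name, items in scores.items()
    }
    return sum(per_characteristic.values()) / len(per_characteristic)

# Hypothetical item-level scores for one company's annual report.
example = {
    "relevance":               [1, 1, 0],
    "faithful_representation": [1, 0, 1],
    "understandability":       [1, 1, 1],
    "comparability":           [1, 0],
    "verifiability":           [0, 1],
    "timeliness":              [1],
}

print(round(quality_index(example), 3))  # prints 0.722
```

Equal weighting across characteristics is one design choice; a study could equally weight by item count or by expert-assigned importance.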
Details
Keywords
Dineshwar Ramdhony, Mohamed Omran and Khaled Hussainey
Abstract
Purpose
This paper aims to answer whether board attributes affect corporate social responsibility disclosure quality (CSRDQ) and whether these findings are sensitive to CSRDQ measurement.
Design/methodology/approach
The authors use the content analysis method to measure CSRDQ in annual report narratives of 41 Mauritian-listed companies for 2008–2019. The system generalized method of moments (GMM) is used to test the research hypotheses.
Findings
The analysis shows that board attributes affect CSRDQ. It also shows that the impact of board attributes on CSRDQ is sensitive to CSRDQ measurement.
Practical implications
This study informs stakeholders on the drivers of CSRDQ. Mauritius authorities could revise the corporate governance code to enhance CSRDQ, and the Stock Exchange of Mauritius could also provide regulations/guidance to listed companies to improve their CSRDQ.
Originality/value
This study brings new insights by viewing CSRDQ based on verifiability, as verifiable CSR reporting improves the fairness of information disclosed by management.
Details
Keywords
Sagar Dua, Mohita Gangwar Sharma, Vinaytosh Mishra and Sourabh Devidas Kulkarni
Abstract
Purpose
Blockchain has been considered a disrupting technology that can add value in various supply chains differently. The provenance framework matches the four blockchain capabilities of traceability, certifiability, trackability and verifiability to the five generic risks, namely, the financial risk, psychological risk, social risk, physical risk and performance risk. This will help in uncovering which specific risk is mitigated by the use of blockchain in a specific supply chain.
Design/methodology/approach
This study illustrates four supply chains, namely, the pharmaceutical, fast-moving consumer goods, precious metals and automotive industries, and maps the risks associated with them to the provenance framework, wherein the applicability of blockchain is mapped. The fuzzy analytic hierarchy process (F-AHP) is used to rank the risks in the supply chain.
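F-AHP ranks alternatives from fuzzy pairwise comparisons. A common variant (Buckley's fuzzy geometric-mean method, used here only as an illustrative stand-in — the paper does not specify its exact F-AHP variant) is sketched below; the triangular comparison values for the five generic risks are invented for demonstration, not taken from the study.

```python
# Illustrative F-AHP sketch (Buckley's geometric-mean method) ranking five
# hypothetical supply chain risks. Comparison values are invented.

RISKS = ["financial", "psychological", "social", "physical", "performance"]

# Triangular fuzzy number (TFN): (low, mid, high).
ONE = (1.0, 1.0, 1.0)

def tfn_mul(a, b):
    return (a[0] * b[0], a[1] * b[1], a[2] * b[2])

def tfn_pow(a, p):
    return (a[0] ** p, a[1] ** p, a[2] ** p)

def recip(a):
    # Reciprocal of a TFN reverses the bounds: 1/(l, m, h) = (1/h, 1/m, 1/l).
    return (1.0 / a[2], 1.0 / a[1], 1.0 / a[0])

def centroid(a):
    return sum(a) / 3.0

# Hypothetical fuzzy pairwise comparisons (upper triangle only; the lower
# triangle is filled with reciprocals, the diagonal with ONE).
upper = {
    (0, 1): (2, 3, 4), (0, 2): (1, 2, 3), (0, 3): (2, 3, 4), (0, 4): (1, 1, 2),
    (1, 2): (1, 1, 2), (1, 3): (1, 2, 3), (1, 4): (1/3, 1/2, 1),
    (2, 3): (1, 2, 3), (2, 4): (1/4, 1/3, 1/2),
    (3, 4): (1/3, 1/2, 1),
}

n = len(RISKS)
M = [[ONE] * n for _ in range(n)]
for (i, j), v in upper.items():
    M[i][j] = v
    M[j][i] = recip(v)

# Buckley: fuzzy geometric mean of each row, then defuzzify and normalise.
geo = []
for i in range(n):
    g = ONE
    for j in range(n):
        g = tfn_mul(g, M[i][j])
    geo.append(tfn_pow(g, 1.0 / n))

crisp = [centroid(g) for g in geo]
total = sum(crisp)
weights = [c / total for c in crisp]

ranking = sorted(zip(RISKS, weights), key=lambda x: -x[1])
for name, w in ranking:
    print(f"{name:14s} {w:.3f}")
```

With these invented comparisons, financial risk receives the largest weight; in practice the comparisons would come from expert judgements elicited per supply chain.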
Findings
Blockchain capabilities can elevate provenance knowledge, leading to assurance in terms of origin, authenticity, custody and integrity to mitigate supply chain risks. The present work highlights the thrust areas across various supply chains and identifies priority risks aligned with the contextual supply chain risks. This study has covered five major risk perceptions, and it contributes to the literature on blockchain, customer perceived risk, provenance and supply chains.
Practical implications
This methodology can be adopted to understand and market the application of blockchain in a supply chain. It brings marketers and a marketing perspective to the supply chain. A more exhaustive set of risk perceptions can be included to obtain more comprehensive data on mapping the risks along different supply chains. Vertical extensions of this work could consider other supply chains, including dairy, fruits and vegetables, and electronics and component assemblies, to derive a comprehensive framework for mapping risk perceptions and thereby mitigating supply chain risk through blockchain technology.
Originality/value
This linkage between blockchain, perceived risk, applications in the supply chain and a tool to convince customers about the applicability of blockchain has not been discussed in the literature. Adopting the multi-criteria decision-making F-AHP approach, this study attempts to rank the risks and stimulate conversations around a common framework for multiple sectors.
Details
Keywords
John J. Wild and Jonathan M. Wild
Abstract
Purpose
This study aims to examine several hypotheses, in conjunction with fundamental accounting concepts, to explain variations in the explanatory power of earnings for returns.
Design/methodology/approach
The authors explore three factors for their impact on the explanatory power of earnings. First, the accounting period preceding the earnings report is characterized by distinct intratemporal subperiod behavior. Recognizing this intratemporal nonstationarity is hypothesized to increase the explanatory power of earnings. Second, disaggregation of earnings into operating components is hypothesized to increase the explanatory power of earnings. Moreover, joint consideration of these first two factors is investigated. Third, the authors hypothesize that recognizing fundamental accounting concepts such as timeliness, predictive value, objectivity and verifiability offers key insights into the explanatory power of earnings.
Findings
The authors explore a sample of firms with management forecasts, which yields natural intratemporal subperiods – preforecast, forecast and realization periods – to generate hypotheses rooted in fundamental accounting concepts. The empirical evidence shows that recognition of nonstationary intratemporal behavior and earnings disaggregation yields a significant increase in the explanatory power of earnings for returns. These findings are linked to fundamental concepts of accounting information.
Originality/value
This study is unique as it examines the joint role of nonstationarity and disaggregation in assessing the information conveyed in earnings. Importantly, results on these factors are linked to fundamental accounting concepts of timeliness, predictive value, objectivity and verifiability, along with their inherent trade-offs.
Details
Keywords
Wolter Pieters and Robert van Haren
Abstract
Purpose
The aim of the research described was to identify reasons for differences between discourses on electronic voting in the UK and The Netherlands, from a qualitative point of view.
Design/methodology/approach
From both countries, eight e‐voting experts were interviewed on their expectations, risk estimations, cooperation and learning experiences. The design was based on the theory of strategic niche management. A qualitative analysis of the data was performed to refine the main variables and identify connections.
Findings
The results show that differences in these variables can partly explain the variations in the embedding of e‐voting in the two countries, from a qualitative point of view. Key differences include the goals of introducing e‐voting, concerns in relation to verifiability and authenticity, the role of the Electoral Commissions and a focus on learning versus a focus on phased introduction.
Research limitations/implications
The current study was limited to two countries. More empirical data could reveal other relevant subvariables and contribute to a framework that improves our understanding of the challenges of electronic voting.
Originality/value
This study shows the context‐dependent character of discussions on information security. It can be informative for actors involved in e‐voting in the UK and The Netherlands, and other countries using or considering electronic voting.
Details
Keywords
Abstract
Purpose
To elaborate the nature of fact-checking in the domain of political information by examining how fact-checkers assess the validity of claims concerning the Russo-Ukrainian conflict and how they support their assessments by drawing on evidence acquired from diverse sources of information.
Design/methodology/approach
Descriptive quantitative and qualitative content analysis of 128 reports written by the fact-checkers of Snopes – an established fact-checking organisation – during the period 24 February 2022 to 28 June 2023. For the analysis, nine evaluation grounds were identified, most of them inductively from the empirical material. The analysis examined how the fact-checkers employed these grounds while assessing the validity of claims and how the assessments were bolstered by evidence acquired from information sources such as newspapers.
Findings
Of the 128 reports, the share of assessments indicative of the invalidity of the claims was 54.7%, while the share of positive ratings was 26.7% and the share of mixed assessments was 15.6%. In the fact-checking, two evaluation grounds – the correctness of information and the verifiability of an event presented in a claim – formed the basis for the assessment. Depending on the topic of the claim, grounds such as temporal and spatial compatibility, as well as comparison by similarity and difference, occupied a central role. The most popular sources of information offering evidence for the assessments included statements of government representatives, videos and photographs shared on social media, newspapers and television programmes.
Research limitations/implications
As the study concentrated on fact-checking dealing with political information about a specific issue, the findings cannot be extended to concern the fact-checking practices in other contexts.
Originality/value
The study is among the first to characterise how fact-checkers employ evaluation grounds of diverse kind while assessing the validity of political information.
Details
Keywords
Abstract
The world's most popular noncommercial website is built on five pillars, which include an assumption of good faith and ensuring all points of view are included in every encyclopedia article. How does this pan out in the day-to-day reality of fake news and the ever-growing climate of post-truth? How apt are mechanisms established by Wikipedia over a decade ago in the face of unreliable news sources and beliefs based on gut feelings and emotions rather than verifiable evidence? Active editors of Wikipedia firmly believe that this open online encyclopedia and other wikis operating under the same value system are lifeboats for truth seekers in a post-truth society. The mechanisms established over many years for sharing open knowledge through this online platform are even more useful now than they may have been in previous times, even though this too is understandably debatable.
Details
Keywords
Abstract
Purpose
The aim of this paper is to explore how trustworthy knowledge claims in Wikipedia are constructed by focusing on the everyday practices of Wikipedia editors. The paper seeks to focus particularly on the role of references to external sources for the stabilisation of knowledge in Wikipedia.
Design/methodology/approach
The study is inspired by online ethnography. It includes 11 Wikipedia editors, together with the sociotechnical resources in Wikipedia. The material was collected through interviews, online observations, web documents and discussions, and e‐mail questions. The analysis was carried out from a perspective of science and technology studies (STS).
Findings
Wikipedia can be regarded as a laboratory for knowledge construction in which the already published is being recycled. The references to external sources anchor the participatory encyclopaedia in the ecology of established media and attribute trust to the knowledge published. The policy on Verifiability is analysed as an obligatory passage point to which all actors have to adjust. Active Wikipedia editors can be seen as being akin to janitors of knowledge, as they are those who, through their hands‐on activities, keep Wikipedia stable.
Originality/value
The study develops an innovative understanding of the knowledge construction culture in one of the most popular sources for information on the internet. By highlighting the ways in which trust is established in Wikipedia, a more reflexive use of the participatory encyclopaedia is made possible. This is of value for information literacy training.
Details
Keywords
Abstract
Purpose
Librarians have long been part of a group of professionals that took responsibility for the reliability of information and protected their users from the bad epistemic consequences caused by inaccurate information. Now users are acquiring information from the internet and using it to make important decisions. This method of acquisition is threatening the epistemological protection librarians have provided. The problem is one of verifiability: users do not have a way to verify whether information is accurate or inaccurate. Verification is even more difficult with disinformation. The purpose of this paper is to explore possible alternatives to this problem and recommend a new multi‐literacy instructional method as the solution.
Design/methodology/approach
A review of the current literature confirmed the problem of disinformation. This paper examines possible solutions for controlling disinformation and suggests how librarians can use instruction to protect internet users from the harmful effects of false information.
Findings
Research found that disinformation is a widespread problem and its use has epistemic consequences that are harmful to internet users. The paper proposes a new method of instruction using a combination of learning paradigms to help users protect themselves from disinformation.
Originality/value
The paper presents a new instructional method that may help in identifying disinformation and help internet users avoid the bad epistemic consequences of using disinformation.
Details
Keywords
Lei Li, Chengzhi Zhang, Daqing He and Jia Tina Du
Abstract
Purpose
Through a two-stage survey, this paper examines how researchers judge the quality of answers on ResearchGate Q&A, an academic social networking site.
Design/methodology/approach
In the first-stage survey, 15 researchers from Library and Information Science (LIS) judged the quality of 157 answers to 15 questions and reported the criteria that they had used. The content of their reports was analyzed, and the results were merged with relevant criteria from the literature to form the second-stage survey questionnaire. This questionnaire was then completed by researchers recognized as accomplished at identifying high-quality LIS answers on ResearchGate Q&A.
Findings
Most of the identified quality criteria for academic answers—such as relevance, completeness, and verifiability—have previously been found applicable to generic answers. The authors also found other criteria, such as comprehensiveness, the answerer's scholarship, and value-added. Providing opinions was found to be the most important criterion, followed by completeness and value-added.
Originality/value
The findings here show the importance of studying the quality of answers on academic social Q&A platforms and reveal unique considerations for the design of such systems.
Details