Abstract
Purpose
Two major issues exist in measuring social impact within the value chain: trade-offs between data accessibility and the qualitative characteristics of data, and shared accountability for digital data. This study aims to investigate their interconnectedness to identify tensions between impact measurement and accountability conditions and to examine how these tensions align with the qualitative characteristics of data. The main objective is to develop a framework for identifying these tensions downstream, which can offer valuable insights into the challenges hindering accurate social impact measurement.
Design/methodology/approach
Participatory action research was conducted in an IT company to ensure grounding in practitioners’ experiences with social accounting. Since the use of primary data was prohibited, this paper gathered impact indicators from a variety of secondary sources, including document and literature reviews, interviews, focus groups and a survey. Through inductive analysis of this data, this paper uncovered tensions in the measurement of social impact, which were then further examined using the international financial reporting standards (IFRS) conceptual framework and the five conditions of accountability.
Findings
Five categories of tensions were identified that hinder accurate measurement of the technologies’ social impacts. Using the IFRS conceptual framework and the five conditions of accountability, this paper shows that these tensions relate to trade-offs between data qualitative characteristics and can lead to incomplete accountability of the company for its impact on the downstream value chain.
Originality/value
The originality of this study lies in demonstrating how the challenges of measuring technologies’ social impact are linked to the conditions under which IT companies can be held accountable for their activities and those of their customers.
Keywords
Citation
Anarbaeva, A. and Garst, J. (2024), "Accounting for downstream value chain: examining the accountability for social impact of digitalisation", Meditari Accountancy Research, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/MEDAR-02-2024-2387
Publisher
Emerald Publishing Limited
Copyright © 2024, Akylai Anarbaeva and Jilde Garst.
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
After decades in which reporting on social impact was a voluntary activity, an increasing number of countries are now enacting laws that make it mandatory. With the European Union (EU) Corporate Sustainability Reporting Directive (CSRD) 2022/2464 and the European Sustainability Reporting Standards (ESRS) (EFRAG, 2024) as a notable example, these regulations follow in the footsteps of social accounting scholars and aim to address the externalities generated by corporate activities (Gray et al., 1997). However, while accurately mapping the social impacts of a company’s operations is already a significant challenge (Unerman et al., 2018), these regulations introduce an additional complexity by extending the scope beyond the company’s boundaries (EFRAG, 2024). Social impacts are not confined to a company’s physical premises or legal boundaries. Decisions made within a company reverberate throughout its value chain, creating ripple effects that impact various stakeholders (Pitta and Laric, 2004; Porter and Millar, 1985). Given that social accounting aims to increase transparency and accountability for these externalities (Gray et al., 1997), the question arises: how do social impact measurements capture the full scope of these ripple effects (Bennett and Grabs, 2024; Butollo et al., 2022; Fearne et al., 2012; Hellin and Meijer, 2006)?
This question can potentially be addressed by the digitalisation processes that nowadays encompass various value chains. Deployed digital technologies and information systems facilitate data sharing across company boundaries and reduce the information asymmetry between value chain actors (Spanò and Ginesti, 2022; Tiwari and Khan, 2020; De Santis and Presti, 2018). As these technologies can connect multiple companies’ information systems, the social impact on stakeholders upstream and downstream could be mapped and measured (EFRAG, 2024). Enhancing social impact measurement is particularly important in the downstream value chain, where technology users and vulnerable local communities, directly and indirectly, experience the companies’ impacts (EFRAG, 2024; Porter, 1985; Porter and Millar, 1985). However, despite the potential of technologies to bridge these gaps, measuring social impacts on these stakeholders is easier said than done. They are often more difficult to identify and engage with – unlike more visible stakeholders such as the company’s suppliers – making it challenging to accurately capture their experiences and needs (Butollo et al., 2022; Garst et al., 2021; Freeman et al., 2010; Hellin and Meijer, 2006). Nonetheless, failure to engage with these difficult-to-reach or voiceless stakeholders can significantly limit the accountability of companies for their social impacts (Garst et al., 2021; Fearne et al., 2012; Maas and Liket, 2011).
The technologies’ ability to measure social impact across value chains, however, is hindered by the restricted access to data. Much of the primary [1] data collected by technologies is owned by users and therefore, remains either inaccessible or subject to legal restrictions (Motti and Berkovsky, 2022; Barocas and Nissenbaum, 2014). These restrictions have emerged due to ethical concerns surrounding privacy and unauthorised data usage, which have gained prominence over the past decade (McGraw and Mandl, 2021). For instance, by applying data mining techniques on readily available data, companies may identify the likelihood of a particular medical condition among users and exploit this information for commercial purposes (Barocas and Nissenbaum, 2014). Several regulations – the Artificial Intelligence (AI) Act 2021/0106 and the General Data Protection Regulation (GDPR) 2016/679 – therefore, aim to set clear guidelines for primary data usage. While secondary or aggregated data sources may offer alternatives, they often come with trade-offs in terms of data qualitative characteristics such as accuracy and relevance (Reimsbach et al., 2020; Unerman et al., 2018; De Santis and Presti, 2018).
In addition to data accessibility challenges, digital data sharing is often seen as aggravating issues related to shared accountability – commonly known as the “problem of many hands”. As technology development and engineering processes involve multiple stakeholders, assigning responsibility and liability for the outcomes of technology deployment becomes increasingly complex (Doorn, 2012; Nissenbaum, 1996). The widespread use of digital technologies and the application of AI and machine learning (ML) algorithms on big data have further blurred the accountability within value chains, making it more ambiguous (Butollo et al., 2022; Dranove and Garthwaite, 2022).
The two issues related to social impact measurements in the value chain – a) trade-offs between data accessibility and data qualitative characteristics, and b) shared accountability for digital data – are often discussed separately. However, in practice, these issues frequently overlap and interconnect. To address this complexity and investigate both issues simultaneously, we conduct an in-depth case study that pursues a twofold aim:
to identify tensions between social impact measurement and accountability conditions; and
to explore how these tensions relate to the qualitative characteristics outlined in accounting standards.
The objective of our research is to present a framework for identifying the tensions between impact measurement practices and accountability conditions for social impact. Notably, to outline the research scope, we focus on the “S” (social) aspect of corporate impact, while acknowledging the equal importance of the “E” (environmental) dimension. The social impact will be investigated in the downstream value chain where it is particularly difficult to engage voiceless stakeholders – technology end-users and local communities.
As such, this study is guided by the following research question:
Which accountability tensions arise when developing measurements for the social impact of technologies in the downstream value chain?
To ground our framework in practical experience, we conducted participatory action research (PAR) in a case study (Greco et al., 2023; Reason and Bradbury, 2012). Unlike prior studies that mainly examine technology deployers, this research investigates an information technology (IT) company. Arguably, rapid and widespread digitalisation has made IT companies more active players in value chains, which underscores the importance of scrutinising them (Motti and Berkovsky, 2022; McGraw and Mandl, 2021). Our case study, GPI S.p.A. (Gruppo Per l'Informatica; hereafter GPI), is a representative company that develops and provides various technologies, particularly for the health-care sector. This PAR sought to identify indicators for measuring GPI’s social impacts on voiceless stakeholders, the retrospective analysis of which further unveiled multiple tensions associated with managing primary and secondary data (including privacy-sensitive data) for social accounting. Furthermore, the collected empirical evidence shows how IT companies, especially those in the EU, navigate social accounting regulations (e.g. the CSRD) alongside data privacy laws (e.g. the AI Act and GDPR).
The PAR was conducted from May 2021 to September 2023 using a multi-method approach to data collection, comprising document analysis, semi-structured interviews, focus groups, an online survey and a literature review. The collected data included 57 internal documents, 23 semi-structured interviews with 24 managers, an online survey of 236 employees, 14 discussions and two informal meetings with various organisational members, and 73 publications retrieved from the literature analysis. The PAR was structured in three phases to:
identify indicators with secondary data sources;
assess the usability of these indicators with organisational members; and
retrospectively determine qualitative characteristics of social impact indicators and existing tensions.
The remainder of this paper is organised as follows. The second section explores how the tensions between shared accountability in the value chain and data qualitative characteristics relate to the context of health-care technologies. The third section details the PAR design and the fourth section, Findings, presents tensions in measuring the impact of technologies, which are classified into five retrospectively determined categories. Then, in the fifth section, Discussion, we link these five categories to the data qualitative characteristics outlined by the international financial reporting standards (IFRS) and the five conditions of accountability identified in applied ethics (Doorn, 2012). The paper concludes with the sixth section that summarises key insights.
2. Measuring social impact down the value chain
2.1 Social impact measurement: accountability and qualitative characteristics of data
The field of social accounting has long focussed on how to account for the social impacts of an entity’s activities (e.g. a company) that are borne by others – in other words, how to internalise the externalities of corporate activities (Unerman et al., 2018; Gray et al., 1997). While two important concepts in social accounting – value and impact – have many meanings (Garst et al., 2021; De Santis and Presti, 2018), scholars often link them to decision-making that favours the most powerful stakeholders, thereby neglecting the needs of voiceless stakeholders (Fearne et al., 2012; Freeman et al., 2010). The tendency to prioritise the interests of more powerful or influential stakeholders, often at the expense of less powerful groups, leads to a narrow definition of “value” and “impact” as short-term economic effects or gains (Van Der Linden et al., 2024; Unerman et al., 2018). To understand a company’s externalities holistically and to prevent companies from being held accountable to only a few stakeholders, social accounting has introduced alternative frameworks that promote a more inclusive and pluralistic perspective on value creation and impact measurement (Van der Linden et al., 2024; Cho, 2020; Gray et al., 1997).
One such framework discussed in the literature is full cost accounting (FCA), introduced by Bebbington et al. (2001). Many scholars and companies have experimented with the four steps of FCA (Unerman et al., 2018). Even the CSRD and the ESRS are influenced by FCA, as evidenced by the similarity between the double materiality assessment and FCA’s second and third steps (EFRAG, 2024; Renes and Garst, 2023). These experiments with FCA suggest that what appears to be a straightforward framework involves highly complex and extensive data collection and analysis. To connect our paper with the extant literature, we focus here on the tensions present in the FCA’s third step: identifying and measuring the externalities in “physical terms”.
Concerning this step, Unerman et al. (2018) highlight two significant issues encountered during the FCA experiments: (1) the confidentiality of results due to accountability concerns; and (2) the inconsistent definition and application of data qualitative characteristics by both standard-setters and practitioners.
The first issue is that the results of FCA experiments often remain confidential due to concerns that they may fuel debates surrounding the accountability of the companies being evaluated (Unerman et al., 2018). To unravel this argument, it is important to understand the conditions of accountability. In applied ethics, accountability can be defined by five conditions (Doorn, 2012):
agents possess moral agency to act;
they have the freedom to act;
they are aware of the consequences of their actions;
they play a role in the cause-effect chain of those consequences; and
they know they have violated a norm.
While the first condition – moral agency – is often assumed in FCA (Moore, 1999), meaning that the company is seen to have the moral agency to act upon the social impact measurement, the other four conditions can still create tensions. These tensions are particularly relevant in relation to the impacts generated in the downstream value chain. An example of such tensions can be provided by using Porter’s (2008, 1979) Five Forces framework: how much freedom does a company truly have when its bargaining power is constrained by reliance on a single customer, while many competitors offer a similar product? Should the company’s limited bargaining power be considered when determining its accountability for the social impact generated for the customer? As Gray et al. (1997, p. 328) note, “almost any system of social accounting of which one can conceive will involve transfers of power”. Another example arises from the third condition: How much effort is required from a company to understand the consequences of its actions before it can be accused of negligence through ignorance? Understanding these tensions in accountability for social impact can offer valuable insights into how standards for social accounting should be developed.
The second issue is related to the inconsistencies in the use of qualitative characteristics of data – specifically, what qualifies as a “good” measurement based on “good” data. The overview of Unerman et al. (2018) shows that, across the six main reporting standards for externalities information, only four out of 18 characteristics are applied consistently: a) materiality, b) completeness, c) neutrality and d) accuracy. This conceptual dissonance and inconsistent application of qualitative characteristics in social impact data lead to confusion during the data collection and analysis stages of social impact assessments (Unerman et al., 2018). As Bebbington et al. (2001, p. 73) pointed out, FCA results are “likely to emerge from an interaction of data availability and the ‘story’ that the FCA exercise is attempting to tell”.
These two issues are often investigated separately: one literature stream discusses the accountability challenges in social impact measurements and another discusses data availability and quality. However, their interdependency became evident during our case study on the social impact of IT companies and their products. For example, IT companies often face difficulties in measuring their social impacts while simultaneously adhering to privacy regulations in the health-care sector – the topic of the following subsection. The reviewed literature provides the backbone for our case study, which aims to build a framework on the interconnectedness of accountability issues for social impact and data qualitative characteristics.
2.2 Technologies in the health-care value chain
The health-care value chain inherently includes patients, care providers, pharmacies, insurers and government agencies (Maas et al., 2016; Pitta and Laric, 2004; Walters and Jones, 2001). Several prior studies have recognised the rising importance of another actor – IT companies that drive digitalisation by offering technologies with various configurations (Dranove and Garthwaite, 2022; Motti and Berkovsky, 2022; McGraw and Mandl, 2021). While these technologies – such as digital diagnostics, wearables and clinical decision support systems – have the potential to enhance health-care outcomes, their social impacts remain underexplored, with mixed findings in the literature (Martin et al., 2020; Zehrouni et al., 2019; Kidholm et al., 2012).
Proponents of health technologies highlight their value for patients and medical professionals (Spanò and Ginesti, 2022; Tiwari and Khan, 2020; De Santis and Presti, 2018; Flott et al., 2016). For example, Spanò and Ginesti (2022) argue that big data can enhance performance management by optimising clinical outcomes. Other studies emphasise the potential for real-time patient monitoring, data collection and improved communication among care providers, which could alleviate resource constraints (Kidholm et al., 2012; Fitterer et al., 2011). However, scholars with a sceptical perspective question the empirical evidence behind these claims, pointing out that the link between technology and improved clinical outcomes or cost-effectiveness has not been conclusively proven (Black et al., 2011). Likewise, Martin et al. (2020) point out that many factors, including organisational quality, complicate the ability to establish a direct causal relationship between technologies and care improvements.
In addition to these operational concerns, technologies in the health-care value chain raise significant ethical concerns such as unauthorised usage of the primary users’ data for commercial purposes (Dranove and Garthwaite, 2022; Motti and Berkovsky, 2022; McGraw and Mandl, 2021). To mitigate these issues, several regulations have been set, for example, to govern the processing of personal data – GDPR – and to regulate how developers and deployers can use AI – the AI Act. However, Barocas and Nissenbaum (2014) argue that regulations and even advanced technical measures may not fully ensure users’ data privacy. As technologies continue to evolve, traditional methods of ensuring informed consent or anonymisation may struggle to keep up. Another ethical concern stems from the “many hands problem”, to which technologies are particularly vulnerable. Since technologies are developed by multiple teams and companies, it becomes challenging to determine who is responsible for the resulting harm or malfunction (Doorn, 2012). As such, by obscuring clear responsibility among involved actors, the “many hands problem” can undermine the concept of accountability (Doorn, 2012; Nissenbaum, 1996).
Given this mixed evidence regarding the implications of health technologies and arising ethical concerns, measuring social impact is especially critical for IT companies. However, even when they are motivated to act upon their moral agency, these companies face challenges in balancing compliance with both social accounting regulations and privacy norms, particularly in the highly regulated health-care sector. While IT companies are legally obligated to protect primary data under GDPR and the AI Act, they are also required by CSRD to disclose the social impacts of their technologies. This creates an ethical dilemma: although primary data could provide valuable insights into the social impacts of technology deployment, access to this data is legally restricted (Motti and Berkovsky, 2022; De Santis and Presti, 2018; Langer, 2017; Fitterer et al., 2011). Therefore, some scholars suggest introducing more nuanced regulations that allow selective access to data for research purposes, which could help generate insights into public health while addressing privacy concerns (McGraw and Mandl, 2021). These data-driven insights could also serve as a secondary source of information, enabling IT companies to navigate ethical challenges while providing transparent accounts of their technologies’ broader social impact.
Despite the increasing attention on social accounting, questions persist about how IT companies can provide transparent accounts of their value chains without violating privacy regulations (McGraw and Mandl, 2021; Fitterer et al., 2011). Although the existing literature advocates for holistic frameworks to capture broader externalities affecting various stakeholder groups and societies (Bennett and Grabs, 2024; Fasan and Mio, 2017; Fearne et al., 2012; Walters and Jones, 2001), there is still no consensus on how to effectively measure social impacts (Butollo et al., 2022; Martin et al., 2020; Serafeim et al., 2019).
In this section, we reviewed relevant studies from social accounting and IT literature to synthesise academic insights on legislation, implications arising from digitalisation and the ethical concerns surrounding the use of primary and secondary data. Given the lack of formal standards for measuring the social impact of technologies, the “neglected status of accountability for the impacts of computing” is likely to persist (Nissenbaum, 1996, p. 26). For IT companies, measuring their technologies’ impacts is constrained by privacy regulations. Although technologies facilitate data-sharing processes, these regulations prevent transmitting users’ data collected in health-care facilities to third parties. Consequently, these companies must either violate privacy norms or rely on data that lacks accuracy and completeness, thereby hindering their accountability. This example underscores the need for further research to identify the tensions between impact measurement practices and accountability conditions in the downstream value chain (i.e. after-sale activities).
3. Research design
The study was initiated to identify indicators for measuring social impact using secondary data sources, given the issues surrounding the use of primary versus secondary data. This section outlines the process of how these indicators were collected, discussed and retrospectively analysed.
3.1 Participatory action research methodology
As the research objectives required the scrutiny of a real-life problem, the methodology needed to effectively capture practitioners’ experiences while fostering active collaboration between researchers and organisational members. As PAR emphasises both “participation” and “action” (Meyer, 2000), it was selected as the most suitable approach to meet our research aims. By using the PAR methodology, we also sought to bridge theory and practice. This was encouraged by social accounting scholars, including Gray et al. (1997, p. 326):
While the theoretical critiques of accounting and new accountings must, in the interests of scholarship, be fully engaged with, this is not enough. Practice must be encouraged, and we must find ways to develop that practice in a manner which is potentially emancipatory – not repressive.
Their perspective aligns with our goal of fostering practical, meaningful advancements in measuring social impacts.
PAR is a qualitative and systematic research method that evolved from action research (Lewin, 1997), designed to find practical solutions to real-world problems (Greco et al., 2023; Adams and McNicholas, 2007). Unlike other research methods, PAR operates on the democratic principle, fostering equality in the collaboration between researchers and organisational members. It asserts that no single perspective holds more authority than another (Reason and Bradbury, 2012). In line with this principle, the present research sought to generate practical solutions by involving stakeholders who are directly engaged with the research context (Greco et al., 2023).
Indeed, the active participation of organisational members in problem exploration and decision-making is crucial as they possess insider knowledge of the research setting and can assess whether the developed solutions are feasible (Adams and McNicholas, 2007; Adelman, 1993). The integration of the democratic principle, through both participation and action, fostered mutual learning and knowledge creation during our researcher-practitioner collaboration. This participatory process not only enhanced our understanding of the research phenomenon but also ensured that the research outcomes were grounded in the real needs of the case study (Reason and Bradbury, 2012; Meyer, 2000).
3.2 Organisational background and research problem identification
The case study is GPI, an IT company specialising in technologies for the health-care sector. Founded in 1988 in Italy, GPI started as a family business and has since grown into a multinational corporation with 7,217 employees operating in over 70 countries (Gpi Group website, 2024). The company is structured into five strategic business units, each focusing on different technology groups. The Care unit manages health booking services and virtual care (telemedicine), while the Software unit develops blood management systems, remote care sensors and AI-based technologies. The Automation unit specialises in pharmaceutical warehouse solutions, and the Information and Communication Technology (ICT) unit provides IT systems and cybersecurity services.
The downstream value chain of GPI includes its direct customers such as private hospitals and public health-care authorities, with whom it has contractual obligations. These contractual relationships empower customers to request social accounts from the company and also allow GPI to involve customers in its impact measurement practices. However, as is common for companies with a business-to-business (B2B) model, the actual technology users – patients, medical professionals and broader society – are not direct stakeholders. Although end-users are the ultimate value receivers, GPI lacks contractual relationships with them and thus cannot directly engage them in social impact measurement. Stakeholder engagement in this scenario requires authorisation from GPI’s customers, making direct involvement unfeasible. Consequently, no established dataflows or communication exchanges exist between GPI and technology users. Additionally, stringent regulations protecting primary data further complicate GPI’s efforts to establish effective dataflows and communication channels with end-users.
Despite these challenges, GPI is required to comply with the CSRD and measure its social impact across the value chain. However, the lack of direct engagement with voiceless and difficult-to-reach stakeholders downstream creates an accountability gap, complicating GPI’s ability to comprehensively map the impact of its products throughout the value chain. To address this issue, a research project was initiated to derive indicators from secondary data sources rather than relying on primary data, and to uncover tensions that hinder the measurement of social impact on these difficult-to-reach stakeholders.
3.3 Participatory action research: data sources and methods of analysis
This PAR at GPI spanned from May 2021 to September 2023 as part of the main author’s doctoral research. The study was divided into three phases, each with its own objectives and research methods (see Table 1). Following the approach of Adams and McNicholas (2007) and Meyer (2000), the PAR method involved iterative cycles of data collection, analysis, feedback from the case company, and subsequent actions based on this feedback. This collaborative process enriched the data and provided additional insights as organisational members contributed their perspectives.
3.3.1 Participatory action research – Phase I.
The first phase focused on identifying indicators from multiple secondary data sources outlined in Table 1. Publicly available and internally circulating documents were gathered to better understand the company’s stakeholders and technologies.
As the research progressed and the gap between researchers and organisational members narrowed, the main researcher gained access to informal interactions with staff in Italy and abroad. This also facilitated the administration of 23 interviews that were conducted through face-to-face interaction, phone calls and video meetings (de Villiers et al., 2022; Farooq and de Villiers, 2017). A semi-structured interview guide was developed by following Roulston (2010). All interviews for which consent was given were recorded and transcribed, while written notes were taken for those where recording was not permitted (see Table A1 in the Appendix). These transcriptions and notes were further analysed using a coding process following Corbin and Strauss (2008).
Moreover, to expand the data collection, an online survey was administered in December 2022. The questionnaire was prepared in both English and Italian, and its clarity was tested by two scholars and two GPI managers. A formal invitation was sent to 1,000 randomly selected employees, resulting in 236 completed responses (see Table A2 in the Appendix). During the analysis process, all identifying personal information from the survey was encrypted.
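For illustration only, the following minimal sketch shows one way identifying fields in survey responses could be pseudonymised before analysis. The field names, the keyed-hash approach and the key handling are assumptions made for this example and do not describe the actual procedure applied in this study.

```python
import hmac
import hashlib

# Illustrative only: the field names and key handling below are assumptions,
# not the survey schema or encryption procedure used in the study.
SECRET_KEY = b"replace-with-a-securely-stored-key"
IDENTIFYING_FIELDS = {"name", "email", "employee_id"}

def pseudonymise(response: dict) -> dict:
    """Replace identifying fields with keyed hashes so responses can be
    linked across analyses without exposing respondents' identities."""
    cleaned = {}
    for field, value in response.items():
        if field in IDENTIFYING_FIELDS:
            digest = hmac.new(SECRET_KEY, str(value).encode(), hashlib.sha256)
            cleaned[field] = digest.hexdigest()
        else:
            cleaned[field] = value
    return cleaned

# The analyst works with a stable pseudonym instead of the identity.
print(pseudonymise({"employee_id": "E123", "unit": "Care", "answer_q1": 4}))
```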
However, the direct involvement of difficult-to-reach stakeholders in this study was not feasible, necessitating the use of additional secondary data sources to complement the collected data. Therefore, an extensive literature review was conducted by searching relevant keywords across academic databases, performing a bibliographic analysis and reviewing grey literature. A total of 4,419 publications were identified, which were then narrowed down by applying exclusion criteria: (1) absence of robust empirical evidence; (2) theoretical or opinion-based studies; (3) focus on a single disease; (4) studies related to health determinants; (5) studies focused on mental health or pharmacology; and (6) closed access publications.
Figure 1 illustrates the stepwise reduction of the initial pool of publications, ultimately narrowing down to 73 relevant records.
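As a purely illustrative aid, the screening can be thought of as a sequence of exclusion filters applied to the pool of records. The sketch below uses hypothetical record fields mirroring the six criteria; it is not the screening protocol followed in this study.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """Hypothetical representation of a retrieved publication."""
    title: str
    has_empirical_evidence: bool
    is_theoretical_or_opinion: bool
    single_disease_focus: bool
    health_determinants_focus: bool
    mental_health_or_pharmacology: bool
    open_access: bool

# Each predicate mirrors one of the six exclusion criteria listed above.
EXCLUSION_CRITERIA = [
    lambda r: not r.has_empirical_evidence,
    lambda r: r.is_theoretical_or_opinion,
    lambda r: r.single_disease_focus,
    lambda r: r.health_determinants_focus,
    lambda r: r.mental_health_or_pharmacology,
    lambda r: not r.open_access,
]

def screen(records: list[Record]) -> list[Record]:
    """Keep only records that trigger none of the exclusion criteria."""
    return [r for r in records if not any(crit(r) for crit in EXCLUSION_CRITERIA)]
```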
The data collected during this phase was then used to derive 384 indicators measuring technologies’ impacts on stakeholders. These indicators were related to diagnostic quality (e.g. diagnostic errors), hospital efficiency (e.g. financial savings and number of hospitalisations), and patient outcomes (e.g. quality of life and treatment satisfaction) (Benson, 2020; Kidholm et al., 2012; Fitterer et al., 2011).
3.3.2 Participatory action research – Phase II.
The second phase aimed to reduce the initial list of 384 indicators to a manageable set for the case study. This was achieved through three rounds of interviews and focus groups. Participants were selected from different departments and subsidiaries depending on their expertise and knowledge in health technologies, the health-care system and health data (see Table A3 in the Appendix). During the 14 interviews and focus groups, 22 participants initially reviewed the list of indicators and then provided their feedback. This process was repeated over three rounds, gradually refining the list, which was ultimately reduced to 21 indicators (see Figure 2).
To discuss the final indicators, two meetings were organised: firstly, with the company CEO, the research and development (R&D) director and the communication and media manager; and secondly, with the GPI environmental, social, governance (ESG) committee and two external stakeholders from a local agency specialising in social accounting.
3.3.3 Participatory action research – Phase III.
The research team retrospectively analysed the results of previous PAR phases, comparing the selected indicators with those that were not chosen to understand the reasoning behind these decisions. The main researcher reviewed the empirical material collected through the multi-method approach to identify hidden tensions. Additionally, a value chain analysis (Porter and Millar, 1985) was applied to explore how actors in this specific value chain exchange digital data in ways that could potentially contribute to the exclusion of certain voices in the context of social impact measurement. This analysis resulted in the creation of an initial framework outlining categories of tensions. A second researcher independently reviewed the data and identified additional categories. Both researchers then discussed the tensions that technology-driven companies should address in their social impact measurement and finalised the framework collaboratively.
4. Findings
This section first presents the identified scenarios related to dataflow and communication exchanges relevant to the IT company’s primary activities in the downstream value chain, followed by a discussion of the revealed tensions.
4.1 Dataflow scenarios in the impact measurement
The dataflow within the value chain, particularly in the after-sale services, can be seen as a process where input data is transformed into output data. This process starts with medical professionals entering input data into the information system, which is then processed to generate output data. The output data refers to primary information that not only holds valuable insights about users but also can disclose how technologies create value for them. Based on the stakeholders for whom this output data is most relevant, we define four distinct scenarios (see Figure 3).
In Scenarios A and B, output data, such as treatment information and clinical results, is shared with patients and medical professionals. In Scenario C, the output data is aggregated and provided to health-care public authorities, enabling them to assess the effectiveness and quality of care services (Pitta and Laric, 2004). Typically, the literature focuses on these scenarios, as scholars are primarily concerned with the impacts of technologies on the quality of care that involves patients, medical professionals and the health-care system (Benson, 2020; Flott et al., 2016). However, the case study analysis reveals the existence of Scenario D, which shows that primary data is also important for the IT company. This can be observed in the following interview extract:
When Lombardy [region in Italy] had a problem with the COVID-positive cases, [our technologies] monitored 100,000 patients […] But it is not easy to get data, especially data on how [our] services were delivered. How can we measure the impact of our technological solutions? (Project Manager, R&D)
This interview extract shows that although the IT company needs primary data to evaluate the quality of its services, this data was found to be difficult to collect. As a result, much of the information on how technologies create value for the downstream stakeholders remains uncaptured and undisclosed by the company. By applying Porter’s value chain framework to the IT company’s business operations and dataflow scenarios, we could further pinpoint where this uncaptured value is located.
However, before adopting Porter’s traditional model, our analysis indicated that several modifications were necessary to adapt it to the IT company’s context. Firstly, we added “technology development” to the “operations” category of primary activities, reflecting GPI’s core activities of developing, testing, configuring and implementing digital technologies and information systems. Secondly, in the secondary activities, we replaced “technology development” with “research and innovation”, as this area more accurately supports the primary activities of the IT company (see Figure 4).
The above-described scenarios, complemented with Porter’s value chain analysis, suggest that Scenario D is positioned in the “post-sale services”, where direct data exchange between the IT company and stakeholders (i.e. end-users and local communities) is missing. Since dataflow and communication exchange are lacking between them, the company cannot engage these stakeholders and the latter remain voiceless in the social accounting practices. This issue likely persists because, unlike in Scenarios A, B and C – where relations exist between the IT company and customers (i.e. hospitals or health-care authorities) – there is no direct contractual relationship linking the IT company, for example, to end-users. Consequently, a significant portion of the value generated downstream remains uncaptured, leading to an incomplete and inaccurate representation of social impacts.
4.2 Framework of categories of tensions
To further investigate the issue described above, we identified recurring themes highlighted by organisational members, which led to the discovery of several tensions within GPI’s social accounting practices. The themes are categorised into five groups pointing to the tensions associated with accuracy, data availability, digital data ownership, information relevance and accountability (see Table 2).
4.2.1 Accuracy.
The first category of tensions, which we named accuracy, emerged as the most prominent for the IT company. Accurate indicators can evaluate how technologies reduce computational errors or alert end-users about potential allergic reactions to prescribed medications. However, participants could not easily identify accurate indicators, as many factors can influence clinical outcomes:
We need KPIs [Key Performance Indicators] showing how we change the life of a person and how we help the healthcare sector to sustain its activities, especially nowadays when many problems emerge from the population ageing. In my opinion, we should compare allocated resources before and after the implementation of the solution. Also, we have to find a [impact] dimension to measure. […] But [population] ageing has multiple impact dimensions. […] (Project Manager, R&D).
This interviewee notes that measuring technologies’ impacts on health is not straightforward, as health phenomena (e.g. population ageing) are complex and multidimensional. In other words, the accuracy of impact measurement does not solely depend on medical professionals entering correct information or the technology’s ability to identify errors. A similar message comes from another participant:
I do not know [how to measure the impact of technologies on pharmaceutical prescription errors] because, especially in pharmaceutical prescription, it is not easy: everyone is different, so one can have an adverse reaction. It is not simple with the KPIs or algorithms to say that this is a problem of the software. (Director, Software Unit).
Indeed, with technological advances and by training AI and ML algorithms, we can flag human errors and make technologies more reliable. For example, technologies can inform about discrepancies in the input data (e.g. weight and age), but other outcomes cannot be easily achieved – like preventing adverse reactions to medications – since they are beyond the algorithms’ capabilities and influenced by the human body’s uniqueness. This underscores the need to accurately distinguish the impact dimensions that actually depend on technological factors, ensuring that social accounting focuses on the true drivers of impact within the technological context.
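To make this distinction concrete, the following minimal sketch illustrates the kind of plausibility check a technology can perform on input data, flagging implausible values for review rather than correcting them. The field names and ranges are assumptions for the example, not rules implemented in GPI’s products.

```python
# Illustrative plausibility ranges; field names and bounds are assumptions.
PLAUSIBLE_RANGES = {
    "age_years": (0, 120),
    "weight_kg": (0.5, 400),
}

def flag_discrepancies(entry: dict) -> list[str]:
    """Return warnings for missing or implausible input values."""
    warnings = []
    for field, (low, high) in PLAUSIBLE_RANGES.items():
        value = entry.get(field)
        if value is None:
            warnings.append(f"{field}: missing value")
        elif not (low <= value <= high):
            warnings.append(f"{field}: {value} outside plausible range [{low}, {high}]")
    return warnings

# The weight entry is flagged for human review, not automatically corrected.
print(flag_discrepancies({"age_years": 34, "weight_kg": 950}))
```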
In scenarios where created value is not directly related to patient health, participants have suggested that accuracy should reflect how well the data aligns with actual events. For example, to measure users’ satisfaction with technologies (Flott et al., 2016; Kidholm et al., 2012), indicators should rely on validated, self-reported data using clear and consistent questions. Or in the case of cybersecurity protection, an indicator can track the number of cybersecurity audits (Langer, 2017) that have been recorded by employees in the information system.
4.2.2 Data availability.
Tensions with data availability are a major concern for the IT company due to the ethical dilemma surrounding access to primary data. Interviewees note that, although GPI’s technologies store vast amounts of information that can demonstrate the company’s contributions to cost reduction, health-care performance and organisational efficiency, much of this data is inaccessible. One interviewee highlights the challenge of configuring technologies to accurately assess population health needs, as this would require dividing users into groups:
Since we manage the booking of [care] services, we can understand the level of [population] segmentation. But again, we are not able to use this data because it [belongs to] people. So here, data privacy […] hinders most of the contact possibilities [with users] (Project Manager, R&D).
Alternatively, participants have suggested using secondary data as a substitute for primary data. However, not all secondary data sources are suitable for use by IT companies. While publicly available data such as research publications or national statistics is generally safe to use, these sources are limited in providing valuable information on the full spectrum of impacts within a specific company’s value chain. Additionally, other secondary data sources may raise ethical concerns. For example, Patient-Reported Outcome Measures could offer valuable insights from end-users and can be collected from GPI’s technologies, but participants highlight challenges in administering such surveys. Firstly, GPI must obtain authorisation from its customers to invite end-users to participate in the survey, and secondly, not all results can be disclosed due to privacy regulations, further limiting the utility of this data:
I don't know if we can measure the satisfaction of people about technology because in this situation we have to administer a survey [….] not all customers want this. Obviously, we provide the service to customers, so if they are interested in [receiving feedback] from citizens, they will activate it [the survey]. (Data Analyst, Care Unit).
This category of tensions highlights the qualitative characteristics of data and the potential trade-offs between primary and secondary data.
4.2.3 Digital data ownership.
The use of digital data in social accounting is closely intertwined with the tensions arising from data ownership considerations. For example, sensors for virtual care continuously collect patients’ physiological data in real-time, which could potentially prevent heart failure incidents. Here, a “good” indicator would be the number of heart failures that have been prevented thanks to the telemedicine deployment. However, GPI cannot provide this information since the relevant data is managed and owned by hospitals that deploy sensors and telemedicine applications (see Table 2).
Therefore, GPI is forced to rely on secondary data that, according to participants, can be collected in two ways. In the first scenario, GPI gathers contact details directly from its technology users, for example, through a survey of these stakeholders. However, before administering it, the IT company must inform its customers. In the second scenario, which is more complex, GPI requests authorisation from its customers to access certain aggregated data for social accounting purposes that would not violate users’ privacy. In this case, customers must secure consent from the users to share their data with third parties like GPI and provide a clear explanation of how their data will be used. Yet, while the second scenario is theoretically feasible, it requires the IT company to navigate stringent regulations, address potential reluctance from customers and bear the costs of maintaining high-security standards. In both scenarios, data encryption is essential to protect user privacy.
As this category of tensions is critical due to its implications, such as breaches of user privacy and risks of data exposure, IT companies need to balance compliance with regulations against the necessity of disclosing their social impacts.
4.2.4 Relevance of information.
The category of tensions termed “relevance of information” pertains to the IT company’s motivation to measure and disclose the most important aspects of corporate operations. This category has emerged from discussions about whether indicators reflect value dimensions that significantly affect stakeholders’ decision-making process. For instance, one interviewee notes that digital technologies supporting virtual care (e.g. telemedicine) enable medical professionals to better understand patients’ needs, monitor vital physiological parameters in real-time and manage treatments more efficiently (see Table 2). An indicator measuring these values could influence hospitals’ decisions to invest in technologies or motivate medical professionals to integrate technologies into their daily routines. Similarly, customer complaints about cybersecurity issues (Langer, 2017) could provide potential customers (i.e. health-care authorities) with insights into cybersecurity incidents, thereby addressing their concerns about technology safety in health-care settings.
This category is equally relevant for patients, as it supports their decision-making by determining whether technologies offer tangible benefits. For example, indicators measuring heart failure incidents after technology deployment may affect users’ trust in the secure handling of their personal information (Fitterer et al., 2011) and their confidence in receiving therapy through technology (Kidholm et al., 2012). Hence, these indicators help patients assess the implications of using home care devices or wearables and evaluate the safety, reliability and overall value of these technologies in managing their health.
4.2.5 Accountability.
The final category of tensions arises from discussions about the causal link between the IT company’s actions and the potential harm or benefits experienced by end-users. Indicators in this category emphasise GPI’s moral agency to act upon its accountability for the consequences of its actions or inactions. However, assigning responsibility becomes complex when an accountability vacuum is created by other actors in the value chain, or when the technology reveals inefficiencies within the health-care system. In such cases, participants have expressed reluctance to assume responsibility.
For instance, as technologies analyse large sets of health data, they can uncover systemic inefficiencies such as long waiting times for medical visits. While these insights could potentially improve hospitals’ performance, the IT company is not in a position to directly address these issues. As noted by one participant, “[…] it is not appropriate for us to manage these statistics” (Director, Care Unit). GPI, therefore, supports indicators that assign its responsibility for the direct consequences arising from its technologies such as the effectiveness of cybersecurity mechanisms (Langer, 2017) and the potential harm to patients (Kidholm et al., 2012).
These five categories of tensions have created a challenging situation for GPI. On the one hand, the IT company cannot use primary data for social accounting due to data ownership and privacy protection regulations. On the other hand, it is unable to directly engage with stakeholders to collect information on how its products impact them, further complicating efforts to measure the social impact. To illustrate these tensions and how technology-driven companies can address them effectively, Figure 5 provides a comprehensive summary and guiding questions.
5. Discussion and research avenues
Given the lack of standardised indicators for technology-driven sectors, our research identified multiple tensions in measuring societal value in the downstream value chain. These tensions were grouped into five categories: a) accuracy, b) data availability, c) digital data ownership, d) information relevance and e) accountability. As expected, these tensions relate to the qualitative characteristics commonly observed in accounting (Unerman et al., 2018). To show this relation, we compare the identified tensions to the latest version of the conceptual framework of the IFRS (2018).
The tensions in social impact measurement are more than just a result of conceptual confusion over these qualitative characteristics. As highlighted in our literature review, they also stem from the undefined boundaries of accountability that companies hold for both their positive and negative social impacts. To demonstrate this connection, we apply four conditions of accountability outlined by Doorn (2012), excluding the condition of agency (see Table 3).
The first category of tensions observed in our data was accuracy issues. While the risk of inaccuracy is higher for technologies whose input data is based on users’ self-assessment (e.g. mobile apps for recording a patient’s adherence to medicine or dietary intake), technologies recording physical responses directly (e.g. blood pressure monitors) can also be inaccurate due to technical malfunctions, insensitivities or user errors (e.g. a wrongly placed sensor). In such scenarios, the measurements are thus an incorrect representation of reality. In the IFRS (2018) conceptual framework, these issues are covered by the principle of faithful representation, in particular the “free from error” characteristic. If a measurement is inaccurate, its purpose of showing norm adherence or transgression can be questioned. Therefore, inaccurate measurements weaken accountability claims (Doorn, 2012). Correcting these inaccuracies downstream of the value chain depends on their origin:
correcting technical errors requires technological advancement and access to the product for updates; and
correcting user errors requires access to the end-users both to observe their behaviour and to provide improved instructions.
However, if access to users is restricted by privacy norms or by bargaining power that limits the freedom to act, user-dependent inaccuracies might not be easily solved. Further research into strategies for solving user-accessibility issues is required to support accurate social impact measurement in downstream value chains.
In the category of data availability tensions, the IT company could not obtain the social impact data. While its products might impact the well-being of a patient, the data to show this impact was not collected by its technologies. Other information systems of customers might collect this data, but these systems are not accessible to the IT company. IFRS (2018) mentions these data availability issues when discussing the application of the fundamental qualitative characteristics. Without this data, the IT company cannot know the consequences of its technologies, limiting the accountability claim (Doorn, 2012). To avoid accusations of negligence through ignorance, GPI tried using secondary data sources to infer the impact of its technologies. These strategies provide knowledge of the consequences, but the data often does not allow companies to isolate the impact of technologies, thus limiting claims of causality.
In our third category of data ownership tensions, the IT company cannot use social impact data, because the data owners are restricted from sharing it due to the contractual conditions of their ownership (e.g. patient-doctor confidentiality). While its products collect data to support customers’ decision-making, the IT company lacks authorisation to use it for social impact measurements. These issues are described by the IFRS (2018) as issues in applying qualitative characteristics. They hinder the company’s access to the knowledge of the consequences and weaken accountability claims (Doorn, 2012). Three strategies are available for data ownership issues (De Santis and Presti, 2018). Firstly, the IT company could request that its customers have each technology user sign an informed consent form. However, this solution raises practical challenges and concerns about transparency, as users may not fully understand what they consent to (Barocas and Nissenbaum, 2014). Secondly, pseudonymised data can be used, but individuals may still be identifiable, undermining privacy protection (Barocas and Nissenbaum, 2014). Finally, collecting group-level impact data could require fewer permissions. However, this consolidated data may obscure causality, making it harder to attribute the impact directly to the technology and thus the IT company (Doorn, 2012). The effect of using consolidated data on the causality claims for social impact is an avenue for future research.
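To illustrate the third strategy, the sketch below shows how patient-level records could be reported only as group-level indicators, with small groups suppressed to limit re-identification risk. The record fields, indicators and suppression threshold are assumptions chosen for the example, not GPI’s data model.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical patient-level records; all field names are assumptions.
records = [
    {"patient_id": "p1", "region": "north", "readmitted": 0, "satisfaction": 4},
    {"patient_id": "p2", "region": "north", "readmitted": 1, "satisfaction": 3},
    {"patient_id": "p3", "region": "south", "readmitted": 0, "satisfaction": 5},
]

def aggregate_by_group(rows, group_key, min_group_size=2):
    """Report group-level indicators only, suppressing groups that are
    too small to protect against re-identification."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[group_key]].append(row)
    report = {}
    for group, members in groups.items():
        if len(members) < min_group_size:
            continue  # suppress small cells
        report[group] = {
            "n": len(members),
            "readmission_rate": mean(m["readmitted"] for m in members),
            "mean_satisfaction": mean(m["satisfaction"] for m in members),
        }
    return report

# Only the "north" group is reported; the single-patient "south" group is suppressed.
print(aggregate_by_group(records, "region"))
```

As the example suggests, aggregation eases the permission problem but also removes the individual-level variation needed to attribute an impact to a specific technology, which is the causality limitation noted above.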
For the tensions in the relevance category, the technologies’ data supported the decision-making of the customer groups but was not appropriate for social impact measurements by the IT company. These issues fall under the IFRS (2018) principle of relevance – divided into materiality, predictive and confirmatory value – and depend on the audience of the social impact data (Johnson et al., 2018; Maas et al., 2016; Maas and Liket, 2011). While materiality determines which social impacts the audience wants to be measured (Garst et al., 2021; Fasan and Mio, 2017), predictive and confirmatory value determine which indicator represents the audience’s information needs (IFRS, 2018; Maas et al., 2016). Some audiences want to know about changes in impact over time, while others want to compare the social impacts of companies – by IFRS referred to as the enhancing characteristic of comparability (IFRS, 2018; Maas and Liket, 2011). These information needs can be seen as a reflection of how norm transgression is defined: does norm transgression happen at an absolute threshold (e.g. no data security breaches) or does transgression depend on a benchmark value relative to the performance of others (e.g. long waiting time)? Irrelevant data can be caused by a misalignment between the data and the user’s information needs to determine norm adherence or transgression (Doorn, 2012). To avoid irrelevance, the IT company needs to define the audience of the social impact measurement and the definition of norm transgression. However, for complex societal issues, views of norm transgression can be very diverse. IFRS indicates that information completeness is important for faithful representation but a report cannot “provide all the information that every user finds relevant” (IFRS, 2018, p. A27). Possible strategies to prioritise these divergent perspectives and avoid an endless list of indicators for social impact have been proposed but require further development (Van der Linden et al., 2024).
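The contrast between these two definitions of norm transgression can be made explicit with a small illustrative sketch; the indicator values, threshold and peer benchmark below are assumptions chosen only to show the logic.

```python
def transgresses_absolute(value: float, threshold: float) -> bool:
    """Absolute norm: any value above the threshold is a transgression
    (e.g. more than zero data security breaches)."""
    return value > threshold

def transgresses_relative(value: float, peer_values: list[float]) -> bool:
    """Relative norm: transgression means performing worse than a peer
    benchmark (e.g. longer waiting times than comparable providers)."""
    benchmark = sum(peer_values) / len(peer_values)
    return value > benchmark

# Illustrative values only.
security_breaches = 1
waiting_time_days = 42
peer_waiting_times = [30, 35, 50]

print(transgresses_absolute(security_breaches, threshold=0))         # True
print(transgresses_relative(waiting_time_days, peer_waiting_times))  # True: 42 > 38.3
```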
The final category of tensions relates to the role of the IT company as a knowledge creator, enabling the customer’s accountability. Its technologies create knowledge of the consequences of its customer’s actions, fulfilling this condition for the customer’s accountability. Tensions arise when the customer wants to stay ignorant to avoid accountability (Nissenbaum, 1996). This situation could occur when existing technologies detect consequences that were not part of the intended objective of the product, but also when products are customised to the information requirements of the customer. Is the IT company responsible for discussing ignorance by negligence with the customer or even reporting their negligence to other stakeholders or authorities? And if the customer’s negligence creates negative impacts, is the IT company accountable for the negative impact and thus required to report it? Literature on technology governance refers to this dilemma as the “problem of many hands” (Doorn, 2012; Nissenbaum, 1996). Based on the IFRS’ definition of the boundaries of an entity, one could argue that the customer’s decisions are not part of the reporting entity and thus not relevant to the report (IFRS, 2018). However, following the conditions of accountability, the IT company is accountable when its technology becomes a part of the causal chain of events (Doorn, 2012). When solely providing knowledge to a customer for decision-making, the role of the IT company in the cause-effect chain can be debated. However, AI-driven technology goes beyond knowledge provision by independently analysing and interpreting data. While the customer still has the discretion to dismiss this interpretation, the technology has partly taken over the decision-making, leading to a stronger argument for causality. How this role of technology in the cause-effect chain of social impact should be considered in measuring and reporting social impact is a question for both accounting scholars and standard-setters of impact assessments and reporting.
One condition of accountability has not yet been mentioned, as it is relevant to every tension category: the freedom to act. The IT company’s freedom to act is limited by international and local regulations (e.g. privacy laws) but also by its bargaining power in negotiations with its customers. While the Five Forces framework is usually applied at the industry or market level (Porter, 1979, 2008), our case shows that the bargaining power between technology providers and customers depends on the generalisability of the product design. If the IT company makes a general product for multiple customers, it has more freedom to decide on the configurations, as the numerous customers give it bargaining power. However, when the IT company starts customising the product to a customer’s needs, the product becomes less attractive to other customers and the IT company loses bargaining power. This loss of bargaining power can restrict the freedom to act on the previously mentioned tensions. A customer with more bargaining power can block the IT company’s access to the product to correct for inaccuracy or irrelevance, or its access to additional data and end-users to measure social impact. While the IT company can make contract conditions and default configurations non-negotiable, tensions related to social impact that were not anticipated and only emerge during implementation cannot be prevented in this way. To what extent the bargaining power of the IT company should be considered in social impact measurements is an issue to be investigated and discussed among accounting scholars and standard-setters.
6. Conclusions
The paper addresses the challenges of measuring social impacts across value chains, particularly when stringent privacy regulations and data ownership concerns limit access to primary data. While digital technologies and information systems facilitate the collection of vast amounts of data that could provide a detailed understanding of companies’ social impacts on voiceless and difficult-to-reach stakeholders, this data is often owned by end-users, making it inaccessible for social accounting purposes. This creates significant tensions and dilemmas for companies, particularly in the technology-driven sector, which must comply with privacy regulations while attempting to demonstrate their social impact.
The study specifically examined two key issues: (1) the trade-offs between data accessibility and the qualitative characteristics of data; and (2) the shared responsibility for managing digital data.
Through an in-depth PAR (Greco et al., 2023; Reason and Bradbury, 2012) conducted at GPI, we:
(1) identified five categories of tensions between social impact measurement and accountability requirements; and
(2) further explored how these tensions align with the data qualitative characteristics outlined in accounting standards.
Given that the case company operates in a privacy-sensitive context – the health-care value chain – the PAR revealed significant trade-offs between primary and secondary data usage, as well as various digital data management scenarios. The primary outcome of this PAR is a framework for recognising and addressing the tensions between impact measurement practices and accountability conditions, which offers valuable insights into how companies can navigate these challenges.
As is common in IT companies, neither the voiceless stakeholders (e.g. technology end-users) nor primary data was accessible for this research (Johnson et al., 2018). To mitigate this methodological constraint, the PAR consolidated indicators of social impact from secondary data sources (i.e. literature analysis, a survey and interviews with organisational members). The research team then retrospectively analysed the collected empirical material to uncover existing tensions within digital data management and to synthesise the findings into a single framework.
The study identifies a rarely examined scenario in digital data management (Scenario D in Figures 3 and 4), where companies are unable to rely on the primary big data collected by technologies, despite its critical role in measuring social impact. This limitation highlights a key challenge in corporate social accounting – when data availability and data ownership are tightly intertwined, companies must rely on secondary data sources to support their impact measurement efforts. Additionally, we present a framework that encompasses five categories of tensions – a) accuracy, b) data availability, c) digital data ownership, d) information relevance and e) accountability.
The contribution of this framework is twofold. Firstly, the framework shows how these tensions are related to the data qualitative characteristics outlined in accounting frameworks (e.g. IFRS, 2018) and to the conditions under which a company can be held accountable (Doorn, 2012). For example, it shows how bargaining power between IT companies and their customers can influence a company’s ability to access critical data and to address inaccuracies in social impact measurements (Porter, 2008).
Secondly, the study advances applied ethics by contributing to the ongoing ethical discourse in impact measurement vis-a-vis data ownership (De Santis and Presti, 2018; Doorn, 2012). It explores the dilemmas companies face in accessing and using data for social accounting, identifying tensions created by privacy laws (e.g. GDPR), sustainability directives (e.g. CSRD) and informed consent in a B2B model, where technology providers are often distanced from end-users (Johnson et al., 2018). Moreover, the research highlights how accountability is complicated by the “problem of many hands” and the blurred responsibility among actors (Doorn, 2012; Nissenbaum, 1996). This advances the discussion on how ethical concerns and legal constraints shape the scope and accuracy of social impact measurement.
The practical implications of this research lie in providing insights into how IT companies, particularly in highly regulated sectors like health-care, navigate the intersection of social accounting regulations and data privacy laws. Through the experience of GPI, the research shows that these two key issues are indeed intertwined in practice. By examining them together, the presented framework shows how limitations in data access affect a company’s ability to account for its social impact, particularly when measuring impacts on difficult-to-reach and voiceless stakeholders downstream.
The study underscores the need for further research into the role of technology in the value chain and how it affects accountability in social accounting practices. However, this research is not without limitations. Firstly, the generalisability of the findings may be limited, as the case company operates in the health-care value chain. Nevertheless, the framework addresses tensions that are common across companies developing and providing technologies, regardless of their operational contexts and industries. Secondly, the literature review may have overlooked some relevant studies on digital technologies and information systems. This limitation was mitigated by complementing the review process with Google searches and bibliography analyses.
Figures
Lifecycle of PAR
PAR | Phase I (May 2021 – July 2022) | Phase II (August 2022 – September 2023) |
---|---|---|
Objectives | To collect indicators measuring social impacts on difficult-to-reach stakeholders | To select indicators in a collaborative setting |
Data sources | 57 documents; 23 interviews and 236 survey responses; 73 scientific articles and policy reports | 14 individual and group discussions administered over three rounds; two concluding discussions with internal and external stakeholders |
Main outcomes | A database of 384 indicators (362 extracted from the literature and 22 proposed by organisational members) | 21 indicators |
Phase III includes a retrospective analysis of previous phases and hence, is not shown in this table
Source: Authors’ own work
Process of identifying the categories of tensions
Recurring themes/category of tensions | Evidence from interviews |
---|---|
Accuracy | “We work to reduce this kind of risk [i.e., allergic reactions] in the prescription. And you ask us what indicator we use to measure the number of occurred errors. I don’t know, […] it could occur because of the doctor and/or software. It’s not simple […] to say that this is a problem of the software. The only thing we can do is to measure how many cases we intercept with possible interaction of the allergy [a.k.a. the positive impact and not the missing cases]”. (Director, Software Unit) |
 | “There could be errors, for example, in data entry, that can also be made by the software. You work with a range, so the weight of a person cannot be 1,000 kg and less than zero; age – it is impossible for a 40-year-old patient to weigh 15 kg”. (Data Analyst, Software Unit) |
Data availability | “The FHIR [the fast healthcare interoperability resources] is a way to collect structured information about a patient’s treatment, problem, diagnosis when a patient is inside the hospital and after being discharged”. (Director, Software Unit) |
 | “When we manage a project, these are exactly the indicators [patients’ costs and savings]. You deliver a product and have benefits. But […] it is not simple to obtain this information [on benefits]”. (Data Analyst, Software Unit) |
 | “I don’t know if we can measure the satisfaction of people with technology because […] we have to administer a survey”. (Data Analyst, Care Unit) |
Digital data ownership | “Patient is not our customer. We provide a service, but we cannot oblige customers to [collect data from patients]. We collect consent from the [person who uses booking services] but privacy is granted to the healthcare provider, not to us. We are just intermediaries of a service”. (Data Analyst, Care Unit) |
 | “We can measure and report data but with the agreement, of course. This is because the data is the property of the hospital”. (Director, Software Unit) |
 | From the discussion of whether biomedical devices help prevent heart failures: “– Data analyst: We don’t have this data available due to the data privacy. It is a measurable indicator for the healthcare professional, but for us, we cannot provide those numbers. – Biomedical engineering: to give you more context why this is a problem. The information of the patient is hosted not by us but at the server, which is locked within the country. So, patients in Italy, their data remains in Italy. And beyond that, the data is encrypted, so from our end, what we see is gibberish if I may say. Because it’s encrypted. – Biomedical doctor: What system does, it sends alerts but mostly it sends to the physicians and the doctors or the healthcare professionals, whoever is in charge. And it’s sensitive: data is linked to the biomedical data of patients”. (Malta subsidiary) |
Relevance of information | “We can manage a lot of information, and we can help our customers in a very pragmatic way to determine the number of slots [for medical visits] and decrease the gap between demand and supply of care services”. (Director, Care Unit) |
 | “We give the doctor the opportunity to manage patients better, to know exactly what the patient needs. Augmented telemedicine gives such advantages”. (Director, R&D) |
Accountability | From the discussion of the causal connection between GPI’s actions and (potential) damages or benefits for stakeholders: “Now the problem in the public system is if you have an unnecessary visit, the waiting time between the prescription time and appointment is very long. But it’s not appropriate [for us] to manage these statistics because it’s our customers’ organisations. […] In my role as the supplier, I can give this information to the customer, but I am not managing the public health system”. (Director, Care Unit) |
 | “If we have to provide this information [patients’ sensitive data], we might have GDPR application”. (Engineering, Malta subsidiary) |
 | “It is not easy [to measure technologies’ impact], we cannot measure [used resources] before the implementation because […] local authorities do not share with us how it was before”. (Project Manager, R&D) |
Source: Authors’ own work
Comparison of the five categories of tensions with the IFRS conceptual framework (2018) and four conditions of accountability
Tensions identified in case study | Concepts by IFRS (2018) | Accountability issues causing data issues (Doorn, 2012) | |
---|---|---|---|
Accuracy | Faithful representation → free from error | Transgressing a norm: inaccurate data does not capture the information needed to distinguish between norm adherence and transgression | Freedom to act: with low bargaining power in negotiations with customers, the IT company is limited in its freedom to act upon the identified tensions (this condition applies to all five tension categories) |
Data availability | Application of qualitative characteristics | Knowledge of the consequences: without data on the social impact, the IT company cannot know all products’ consequences. Causality: when using secondary data, the impact cannot be directly linked to the technology’s role in the cause-effect chain | |
Digital data ownership | Application of qualitative characteristics | Knowledge of the consequences: without data on the impact, the IT company cannot know all products’ consequences. Causality: when using consolidated data, the impact cannot be directly linked to the technology’s role in the cause-effect chain | |
Information relevance | Relevance → materiality, predictive and confirmatory value | Transgressing a norm: irrelevant data does not capture the information needed to assess whether the IT company has transgressed a norm | |
Accountability | Boundaries of the entity | Causality: without information on the function of its technology in decision-making, the causal role of the IT company cannot be determined | |
Sources: Authors’ own work; Doorn (2012)
Profile of interviewees – PAR Phase I
Department and position | Date | Duration (min) |
---|---|---|
Administration and control manager | 21.06.2021 | 65 |
General manager | 23.06.2021 | 80 |
Project manager | 28.06.2021 | 70 |
Consolidation manager | 29.06.2021 | 15 |
Human resources manager | 30.06.2021 | 80 |
Communication and media manager | 02.06.2021 | 105 |
Director of marketing, communications and investor department | 09.07.2021 | 45 |
Director business unit ICT and software | 12.07.2021 | 65 |
Director of business unit care | 12.07.2021 | 90 |
Director of compliance department | 12.07.2021 | 40 |
Director of business unit automation | 13.07.2021 | 55 |
Director of business unit care (follow-up) | 14.07.2021 | 35 |
Director of R&D department | 16.07.2021 | 45 |
Director of commerce, sales and legal department | 19.07.2021 | 70 |
Procurement department: (1) director, (2) specialist A, (3) specialist B | 19.07.2021 | 60 |
Member of the board of directors | 21.07.2021 | 35 |
CEO and founder | 21.07.2021 | 55 |
Subsidiary 1 (Austria): director | 16.07.2021 | 22 |
Subsidiary 2 (France): (1) president, (2) quality control manager | 21.09.2021 | 60 |
Subsidiary 1 (Austria): human resources manager | 04.02.2022 | 71 |
Subsidiary 3 (USA): human resources manager | 07.02.2022 | 72 |
Director of IT department | 09.03.2022 | 40 |
Director of business unit automation (follow-up) | 29.04.2022 | 60 |
Source: Authors’ own work
Profile of survey respondents – PAR Phase I
Respondent profiles | Response categories | n | % |
---|---|---|---|
Location | Italy | 194 | 82 |
Abroad | 42 | 18 | |
Work experience at GPI group (years) | <1 | 44 | 19 |
1–3 | 62 | 26 | |
4–6 | 70 | 30 | |
7–10 | 28 | 11 | |
>10 | 32 | 14 | |
Total | 236 | 100 |
Source: Authors’ own work
Profile of participants – PAR Phase II
Department and position | Date | Duration (min) |
---|---|---|
1) General manager, 2) director of marketing, communications and investor department | 29.09.2022 | 60 |
Director of marketing, communications and investor department | 07.10.2022 | 60 |
1) General manager, 2) director of marketing, communications and investor department | 14.10.2022 | 60 |
Business unit care (booking services): 1) director, 2) data analyst A | 19.10.2022 | 100 |
General manager | 28.10.2022 | 60 |
Subsidiary (2) France: 1) quality control manager, 2) technical manager, 3) operation manager, 4) commerce manager, 5) business developer | 03.11.2022 | 120 |
Business unit software and ICT: 1) software unit director, 2) data analyst A | 03.11.2022 | 105 |
1) Director of R&D department, 2) director of marketing, communications and investor department | 04.11.2022 | 30 |
Subsidiary (4) Malta: 1) head of operations, 2) head of quality assurance, 3) technical manager, 4) biomedical technology, 5) head of human resources, 6) medical doctor | 09.11.2022 | 120 |
1) Director of R&D department, 2) director of marketing, communications and investor department, 3) communication and media manager | 10.11.2022 | 40 |
Business unit care: 1) director, 2) data analyst A | 04.11.2022 | 80 |
Subsidiary (2) France: president | 10.11.2022 | |
Business unit software: software unit director | 16.11.2022 | 30 |
Subsidiary (4) Malta: president | 22.11.2022 | 30 |
Business unit software: data analyst B | 31.01.2023 | 60 |
Source: Authors’ own work
Note
We categorise these types of data based on their sources: primary data is raw, unprocessed data originally generated by technologies (e.g. patient-reported outcome measures and physiological data), while secondary data comes from external sources such as published studies or third-party interviews.
Appendix 1
Appendix 2
Appendix 3
References
Adams, C. and McNicholas, P. (2007), “Making a difference: sustainability reporting, accountability and organizational change”, Accounting, Auditing and Accountability Journal, Vol. 20 No. 3, pp. 382-402.
Adelman, C. (1993), “Kurt Lewin and the origins of action research, educational action research”, Educational Action Research, Vol. 1 No. 1, pp. 7-24, doi: 10.1080/0965079930010102.
Barocas, S. and Nissenbaum, H. (2014), “Big data’s end run around anonymity and consent”, in Lane, J., Stodden, V., Bender, S. and Nissenbaum, H. (Eds), Privacy, Big Data, and the Public Good, Cambridge University Press, pp. 44-75, doi: 10.1017/CBO9781107590205.004.
Bebbington, J., Gray, R., Hibbitt, C. and Kirk, E. (2001), Full Cost Accounting: An Agenda for Action, Certified Accountants Educational Trust, London.
Bennett, E.A. and Grabs, J. (2024), “How can sustainable business models distribute value more equitably in global value chains? Introducing ‘value chain profit sharing’ as an emerging alternative to fair trade, direct trade, or solidarity trade”, Business Ethics, the Environment and Responsibility, doi: 10.1111/beer.12666.
Benson, T. (2020), “Measure what we want: a taxonomy of short generic person-reported outcome and experience measures (PROMs and PREMs)”, BMJ Open Quality, Vol. 9 No. 1, p. e000789.
Black, A.D., Car, J., Pagliari, C., Anandan, C., Cresswell, K., Bokun, T., McKinstry, B., Procter, R., Majeed, A. and Sheikh, A. (2011), “The impact of Ehealth on the quality and safety of health care: a systematic overview”, PLoS Medicine, Vol. 8 No. 1, p. e1000387.
Butollo, F., Gereffi, G., Yang, C. and Krzywdzinski, M. (2022), “Digital transformation and value chains: introduction”, Global Networks, Vol. 22 No. 4, pp. 585-594.
Cho, C.H. (2020), “CSR accounting ‘new wave’ researchers: ‘step up to the plate’… or ‘stay out of the game’”, Journal of Accounting and Management Information Systems, Vol. 19 No. 4, pp. 626-650, doi: 10.24818/jamis.2020.04001.
Corbin, J. and Strauss, A. (2008), Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 3rd ed. Sage Publications.
De Santis, F. and Presti, C. (2018), “The relationship between intellectual capital and big data: a review”, Meditari Accountancy Research, Vol. 26 No. 3, pp. 361-380.
de Villiers, C., Farooq, M.B. and Molinari, M. (2022), “Qualitative research interviews using online video technology – challenges and opportunities”, Meditari Accountancy Research, Vol. 30 No. 6, pp. 1764-1782, doi: 10.1108/MEDAR-03-2021-1252.
Doorn, N. (2012), “Responsibility ascriptions in technology development and engineering: three perspectives”, Science and Engineering Ethics, Vol. 18 No. 1, pp. 69-90.
Dranove, D. and Garthwaite, C. (2022), “Artificial intelligence, the evolution of the healthcare value chain, and the future of the physician”, Working Paper No. w30607, National Bureau of Economic Research.
European Financial Reporting Advisory Group (EFRAG) (2024), “Value chain implementation guidance”, available at: www.efrag.org/Assets/Download?assetUrl=%2Fsites%2Fwebpublishing%2FSiteAssets%2FEFRAG%2520IG%25202%2520Value%2520Chain_final.pdf
Farooq, M.B. and de Villiers, C. (2017), “Telephonic qualitative research interviews: when to consider them and how to do them”, Meditari Accountancy Research, Vol. 25 No. 2, pp. 291-316, doi: 10.1108/MEDAR-10-2016-0083.
Fasan, M. and Mio, C. (2017), “Fostering stakeholder engagement: the role of materiality disclosure in integrated reporting”, Business Strategy and the Environment, Vol. 26 No. 3, pp. 288-305.
Fearne, A., Garcia Martinez, M. and Dent, B. (2012), “Dimensions of sustainable value chains: implications for value chain analysis”, Supply Chain Management: An International Journal, Vol. 17 No. 6, pp. 575-581.
Fitterer, R., Mettler, T., Rohner, P. and Winter, R. (2011), “Taxonomy for multi-perspective assessment of the value of health information systems”, International Journal of Healthcare Technology and Management, Vol. 12 No. 1, pp. 45-61.
Flott, K., Callahan, R., Darzi, A. and Mayer, E. (2016), “A patient-centered framework for evaluating digital maturity of health services: a systematic review”, Journal of Medical Internet Research, Vol. 18 No. 4, p. e75.
Freeman, R.E., Harrison, J.S., Wicks, A.C., Parmar, B.L. and de Colle, S. (2010), Stakeholder Theory: The State of the Art, Cambridge University Press, Cambridge.
Garst, J., Blok, V., Branzei, O., Jansen, L. and Omta, O.S.W.F. (2021), “Toward a Value-Sensitive absorptive capacity framework: navigating intervalue and intravalue conflicts to answer the societal call for health”, Business and Society, Vol. 60 No. 6, pp. 1349-1386, doi: 10.1177/0007650319876108.
GPI Group website (2024), available at: www.gpigroup.com/en/
Gray, R., Dey, C., Owen, D., Evans, R. and Zadek, S. (1997), “Struggling with the praxis of social accounting: stakeholders, accountability, audits and procedures”, Accounting, Auditing and Accountability Journal, Vol. 10 No. 3, pp. 325-364, doi: 10.1108/09513579710178106.
Greco, A., Nielsen, R. and Eikelenboom, M. (2023), “Fostering sustainability and entrepreneurship through action research: the role of value reciprocity and impact temporality”, in de Jong, G., Faber, N., Folmer, E., Long, T. and Ünal, B. (Eds), Handbook of Sustainable Entrepreneurship Research, De Gruyter, Berlin, Boston, pp. 45-62, doi: 10.1515/9783110756159-004.
Hellin, J. and Meijer, M. (2006), “Guidelines for value chain analysis”, Food and Agriculture Organization (FAO), UN Agricultural Development Economics Division.
International Financial Reporting Standards Foundation (IFRS) (2018), “Conceptual framework for financial reporting”, IFRS Foundation, available at: www.ifrs.org/content/dam/ifrs/publications/pdf-standards/english/2021/issued/part-a/conceptual-framework-for-financial-reporting.pdf
Johnson, M., Redlbacher, F. and Schaltegger, S. (2018), “Stakeholder engagement for corporate sustainability: a comparative analysis of B2C and B2B companies”, Corporate Social Responsibility and Environmental Management, Vol. 25 No. 4, pp. 659-673.
Kidholm, K., Ekeland, A.G., Jensen, L.K., Rasmussen, J., Pedersen, C.D., Bowes, A., Flottorp, S.A. and Bech, M. (2012), “A model for assessment of telemedicine applications: MAST”, International Journal of Technology Assessment in Health Care, Vol. 28 No. 1, pp. 44-51, doi: 10.1017/S0266462311000638.
Langer, S.G. (2017), “Cyber-security issues in healthcare information technology”, Journal of Digital Imaging, Vol. 30 No. 1, pp. 117-125.
Lewin, K. (1997), Resolving Social Conflicts and Field Theory in Social Science, American Psychological Association.
McGraw, D. and Mandl, K.D. (2021), “Privacy protections to encourage use of health-relevant digital data in a learning health system”, npj Digital Medicine, Vol. 4 No. 1, p. 2, doi: 10.1038/s41746-020-00362-8.
Maas, K. and Liket, K. (2011), “Social impact measurement: classification of methods”, in Burritt, R., Schaltegger, S., Bennett, M., Pohjola, T. and Csutora, M. (Eds), Environmental Management Accounting and Supply Chain Management, Springer Netherlands, pp. 171-202, doi: 10.1007/978-94-007-1390-1_8.
Maas, K., Schaltegger, S. and Crutzen, N. (2016), “Integrating corporate sustainability assessment, management accounting, control, and reporting”, Journal of Cleaner Production, Vol. 136 No. A, SI, pp. 237-248, doi: 10.1016/j.jclepro.2016.05.008.
Martin, G., Arora, S., Shah, N., King, D. and Darzi, A. (2020), “A regulatory perspective on the influence of health information technology on organisational quality and safety in England”, Health Informatics Journal, Vol. 26 No. 2, pp. 897-910.
Meyer, J. (2000), “Using qualitative methods in health related action research”, BMJ, Vol. 320 No. 7228, pp. 178-181.
Moore, G. (1999), “Corporate moral agency: review and implications”, Journal of Business Ethics, Vol. 21 No. 4, pp. 329-343.
Motti, V.G. and Berkovsky, S. (2022), “Healthcare privacy”, in Knijnenburg, B.P., Page, X., Wisniewski, P., Lipford, H.R., Proferes, N. and Romano, J. (Eds), Modern Socio-Technical Perspectives on Privacy, Springer, Cham, doi: 10.1007/978-3-030-82786-1_10.
Nissenbaum, H. (1996), “Accountability in a computerized society”, Science and Engineering Ethics, Vol. 2 No. 1, pp. 25-42, doi: 10.1007/BF02639315.
Pitta, D.A. and Laric, M.V. (2004), “Value chains in health care”, Journal of Consumer Marketing, Vol. 21 No. 7, pp. 451-464, doi: 10.1108/07363760410568671.
Porter, M.E. (1979), “How competitive forces shape strategy”, Harvard Business Review, available at: www.hbr.org
Porter, M.E. (1985), Competitive Advantage: Creating and Sustaining Superior Performance, Free Press, New York, NY.
Porter, M.E. (2008), “The five competitive forces that shape strategy”, Harvard Business Review, Vol. 86 No. 12, p. 143, available at: www.hbr.org
Porter, M.E. and Millar, V.E. (1985), “How information gives you competitive advantage”, Harvard Business Review, available at: https://hbr.org/1985/07
Reason, P. and Bradbury, H. (2012), The SAGE Handbook of Action Research: Participative Inquiry and Practice, Sage Publications, London.
Reimsbach, D., Schiemann, F., Hahn, R. and Schmiedchen, E. (2020), “In the eyes of the beholder: experimental evidence on the contested nature of materiality in sustainability reporting”, Organization and Environment, Vol. 33 No. 4, pp. 624-651, doi: 10.1177/1086026619875436.
Renes, S. and Garst, J. (2023), “Double-entry bookkeeping for non-financial performance-CO2 emissions integrated in bookkeeping”, Academy of Management Proceedings, Vol. 2023 No. 1, p. 14651.
Roulston, K. (2010), Reflective Interviewing: A Guide to Theory and Practice, SAGE Publications, pp. 1-216.
Serafeim, G., Zochowski, T.R. and Downing, J. (2019), “Impact-weighted financial accounts: the missing piece for an impact economy”, Harvard Business School, available at: www.hbs.edu/impact-weighted-accounts/Documents/Impact-Weighted-Accounts-Report-2019.pdf
Spanò, R. and Ginesti, G. (2022), “Fostering performance management in healthcare: insights into the role of big data”, Meditari Accountancy Research, Vol. 30 No. 4, pp. 941-963.
Tiwari, K. and Khan, M.S. (2020), “Sustainability accounting and reporting in the industry 4.0”, Journal of Cleaner Production, Vol. 258, p. 120783.
Unerman, J., Bebbington, J. and O’Dwyer, B. (2018), “Corporate reporting and accounting for externalities”, Accounting and Business Research, Vol. 48 No. 5, pp. 497-522, doi: 10.1080/00014788.2018.1470155.
Van Der Linden, B., Wicks, A.C. and Freeman, R.E. (2024), “How to assess multiple-value accounting narratives from a value pluralist perspective? Some metaethical criteria”, Journal of Business Ethics, Vol. 192 No. 2, pp. 243-259, doi: 10.1007/s10551-023-05385-1.
Walters, D. and Jones, P. (2001), “Value and value chains in healthcare: a quality management perspective”, The TQM Magazine, Vol. 13 No. 5, pp. 319-335.
Zehrouni, A., Augusto, V., Xie, X. and Duong, T.A. (2019), “Assessment of the impact of teledermatology using discrete event simulation”, 2019 Winter Simulation Conference (WSC), pp. 1255-1266.
Acknowledgements
The authors express their deepest gratitude to GPI S.p.A., with special thanks to all employees for their invaluable contributions and active participation in this study. The authors further extend their sincere appreciation to the Guest Editor and the anonymous reviewers for their constructive feedback and insightful guidance.