Search results
1 – 10 of 209

Adrian Burton, Hylke Koers, Paolo Manghi, Sandro La Bruzzo, Amir Aryani, Michael Diepenbroek and Uwe Schindler
Abstract
Purpose
Research data publishing is today widely regarded as crucial for reproducibility, proper assessment of scientific results, and as a way for researchers to get proper credit for sharing their data. However, several challenges need to be solved to fully realize its potential, one of them being the development of a global standard for links between research data and literature. Current linking solutions are mostly based on bilateral, ad hoc agreements between publishers and data centers. These operate in silos so that content cannot be readily combined to deliver a network graph connecting research data and literature in a comprehensive and reliable way. The Research Data Alliance (RDA) Publishing Data Services Working Group (PDS-WG) aims to address this issue of fragmentation by bringing together different stakeholders to agree on a common infrastructure for sharing links between datasets and literature. The paper aims to discuss these issues.
Design/methodology/approach
This paper presents the synergic effort of the RDA PDS-WG and the OpenAIRE infrastructure toward enabling a common infrastructure for exchanging data-literature links by realizing and operating the Data-Literature Interlinking (DLI) Service. The DLI Service populates and provides access to a graph of data set-literature links (at the time of writing close to five million, and growing) collected from a variety of major data centers, publishers, and research organizations.
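The link graph described above can be sketched as a collection of simple link records aggregated into an adjacency map. This is a hypothetical illustration only: the field names, identifiers and provider below are assumptions for the sketch, not the DLI Service's actual exchange schema.

```python
from collections import defaultdict

# One data-literature link record, loosely modeled on a Scholix-style
# information package; all field names and identifiers are illustrative.
link = {
    "source": {"identifier": "10.5061/dryad.example", "type": "dataset"},
    "target": {"identifier": "10.1000/article.example", "type": "literature"},
    "relationship": "isReferencedBy",
    "provider": "Example Data Center",
}

def build_graph(links):
    """Aggregate link records into an adjacency map: source id -> set of target ids."""
    graph = defaultdict(set)
    for record in links:
        graph[record["source"]["identifier"]].add(record["target"]["identifier"])
    return graph

graph = build_graph([link])
```

Because the graph is keyed only by identifier, records from different providers that use the same identifiers merge automatically, which is the point of agreeing on a common exchange format.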
Findings
To achieve its objectives, the Service proposes an interoperable exchange data model and format, based on which it collects and publishes links, thereby offering the opportunity to validate such a common approach in real-case scenarios, with real providers and consumers. Feedback from these actors will drive continuous refinement of both the data model and exchange format, supporting the further development of the Service to become an essential part of a universal, open, cross-platform, cross-discipline solution for collecting and sharing data set-literature links.
Originality/value
This realization of the DLI Service is the first technical, cross-community, collaborative effort toward establishing a common infrastructure for facilitating the exchange of data set-literature links. As a result of its operation and underlying community effort, a new activity, named Scholix, has been initiated, involving technology-level stakeholders such as DataCite and CrossRef.
Abstract
Purpose
The purpose of this paper is to present a process-theory-based model of big data value creation in a business context. The authors approach the topic from the viewpoint of a single firm.
Design/methodology/approach
The authors reflect current big data literature in two widely used value creation frameworks and arrange the results according to a process theory perspective.
Findings
The model, consisting of four probabilistic processes, provides a “recipe” for converting big data investments into firm performance. The recipe helps practitioners understand the ingredients and complexities that may promote or hinder the performance impact of big data in a business context.
Practical implications
The model acts as a framework which helps to understand the necessary conditions and their relationships in the conversion process. This helps to focus on success factors which promote positive performance.
Originality/value
Using well-established frameworks and process components, the authors synthesize big data value creation-related papers into a holistic model that explains how big data investments translate into economic performance, and why the conversion sometimes fails. While the authors rely on existing theories and frameworks, they claim that the arrangement and application of the elements to the big data context is novel.
Abstract
Purpose
The purpose of this paper is to review the literature on prevention in adult safeguarding and to identify the themes that emerge, with particular reference to personalisation and the views of service users.
Design/methodology/approach
This is primarily a brief literature review. The review began with a scoping of data, literature, and best practice in relation to prevention in adult safeguarding. Using reference harvesting and expert recommendations, the project manager identified further material, arriving at a final list of 52 documents.
Findings
There are many factors that may contribute to preventing abuse in the context of adult safeguarding. However, it is difficult to demonstrate that abuse has been or is being prevented with any certainty. The views of service users consulted for the review of No Secrets are that they would prefer to be empowered to make their own decisions with regard to safeguarding – and not to have all of the decisions made for them in an overly protective or risk‐averse approach to safeguarding. It is recommended that local authorities consider risk enablement for service users as a parallel process to adult safeguarding.
Practical implications
There are some practical suggestions for how local authorities who are tasked with co‐ordinating adult safeguarding can work to prevent abuse within different communities.
Originality/value
Prevention of abuse has not always been high on the adult safeguarding agenda; this article and the accompanying material now on the SCIE website seek to redress the balance.
Anneke Zuiderwijk and Mark de Reuver
Abstract
Purpose
Existing overviews of barriers for openly sharing and using government data are often conceptual or based on a limited number of cases. Furthermore, it is unclear what categories of barriers are most obstructive for attaining open data objectives. This paper aims to categorize and prioritize barriers for openly sharing and using government data based on many existing Open Government Data Initiatives (OGDIs).
Design/methodology/approach
This study analyzes 171 survey responses concerning existing OGDIs worldwide.
Findings
The authors found that the most critical OGDI barrier categories concern (in order of most to least critical): functionality and support; inclusiveness; economy, policy and process; data interpretation; data quality and resources; legislation and access; and sustainability. Policymakers should prioritize solving functionality and support barriers and inclusiveness barriers because the authors found that these are the most obstructive in attaining OGDI objectives.
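The prioritization behind these findings can be illustrated with a small sketch: barrier categories are ranked by their mean obstruction score across survey respondents. The category names follow the paper; the scores below are invented for illustration and are not the study's data.

```python
from statistics import mean

# Hypothetical per-respondent obstruction scores (1 = minor, 5 = critical)
# for a subset of the barrier categories named in the paper.
responses = {
    "functionality and support": [5, 4, 5],
    "inclusiveness": [4, 4, 5],
    "economy, policy and process": [3, 4, 4],
    "data interpretation": [3, 3, 4],
}

# Order categories from most to least obstructive by mean score.
ranked = sorted(responses, key=lambda c: mean(responses[c]), reverse=True)
```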
Practical implications
The prioritization of open data barriers calls for three main actions by practitioners to reduce the barrier impact: open data portal developers should develop advanced tools to support data search, analysis, visualization, interpretation and interaction; open data experts and teachers should train potential users, and especially those currently excluded from OGDIs because of a lack of digital skills; and government agencies that provide open data should put user-centered design and the user experience central to better support open data users.
Originality/value
This study contributes to the open data literature by proposing a new, empirically based barrier categorization and prioritization derived from a large number of existing OGDIs.
Hugues Seraphin, Vanessa Gowreesunkar, Mustafeed Zaman and Thierry Lorey
Abstract
Purpose
Many tourism destinations are now facing the problem of overtourism, and destination management organisations (DMOs) are in search of an effective and sustainable solution. With this as a foundation, the purpose of this study is to identify factors causing overtourism at popular tourism destinations and to propose an alternative solution to overcome this phenomenon.
Design/methodology/approach
The research design is based on an inductive and a deductive approach. The paper draws its conclusions from secondary and tertiary data (literature review and online research).
Findings
The study shows that Trexit (tourism exit) is not a sustainable solution to overtourism and that an alternative strategy may be adopted to tackle this phenomenon. The overall outcome shows that if sociological, business, technological and economic factors are addressed, the effect of overtourism may be managed and controlled.
Practical implications
The findings point to a Just-in-Time strategy for managing overtourism. They could be useful to practitioners, as the study proposes an alternative means to overcome overtourism and manage destinations without affecting visitor flow and profitability.
Originality/value
This research fills an existing research gap by proposing an alternative solution to tackle overtourism. The proposed model also provides broader insight into the dynamics surrounding overtourism at tourism destinations. In so doing, it advances the existing body of knowledge by providing new inputs on a topic that has not been discussed, namely, Trexit or tourism exit.
Mahdi M. Najafabadi and Felippe A. Cronemberger
Abstract
Purpose
This paper aims to explore the open government data initiative in the Food Protection program area within the New York State’s Department of Health to assess the impacts of opening data in terms of data quality and public value. An ecosystem lens is used to explore the dynamics of actors and their interactions, the processes involved in the program and the consequences such interplay brought forth to data quality.
Design/methodology/approach
The data were collected through 15 semistructured interviews with multiple stakeholders from different sectors, such as county officials, administrators and technicians, food sanitarians, data journalists and restaurant owners. At the analysis stage, the ecosystem perspective helped to capture the big picture of the open data actor interrelationships within this community regarding the food service inspections datasets.
Findings
Prior research suggests that open data initiatives enhance data quality. However, this study shows how opening data can adversely affect the quality of data. Results are explained by competing dynamics and conflicting interests among open data actors, undermining the expected public value from open data initiatives.
Research limitations/implications
The findings contrast with the mainstream open data literature and help open data scholars anticipate some currently unexpected results of open data initiatives. Limitations include potential biases associated with the interpretation of interview data and the fact that the results are based on a single case study.
Practical implications
This study alerts governments and policymakers to the possibility of similar open data byproducts and unwanted outcomes, and helps them design more effective open data policies, gaining greater economic advantage while lowering the costs of open data initiatives.
Originality/value
Detailed open data case studies viewed through the ecosystem perspective are still scarce and can enrich discussions about open data policy design and refinement in the public sector. The data used for this research have not been used in any prior papers, and to the best of the authors’ knowledge, this is the first study to report such adverse effects on data quality.
Samuel Fosso Wamba, Shahriar Akter, Laura Trinchera and Marc De Bourmont
Abstract
Purpose
Big data analytics (BDA) increasingly provide value to firms for robust decision making and solving business problems. The purpose of this paper is to explore information quality dynamics in big data environment linking business value, user satisfaction and firm performance.
Design/methodology/approach
Drawing on the appraisal-emotional response-coping framework, the authors propose a theory on information quality dynamics that helps in achieving business value, user satisfaction and firm performance with big data strategy and implementation. Information quality from BDA is conceptualized as the antecedent to the emotional response (e.g. value and satisfaction) and coping (performance). Proposed information quality dynamics are tested using data collected from 302 business analysts across various organizations in France and the USA.
Findings
The findings suggest that information quality in BDA reflects four significant dimensions: completeness, currency, format and accuracy. The overall information quality has a significant, positive impact on firm performance, which is mediated by business value (e.g. transactional, strategic and transformational) and user satisfaction.
Research limitations/implications
On the one hand, this paper shows how to operationalize information quality, business value, satisfaction and firm performance in BDA using PLS-SEM. On the other hand, it proposes a REBUS-PLS algorithm to automatically detect three groups of users sharing the same behaviors when determining the information quality perceptions of BDA.
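The idea of detecting respondent groups can be illustrated with a deliberately simplified sketch. This is not the REBUS-PLS algorithm (which iteratively reassigns units to groups based on PLS model residuals); it only shows the segmentation intuition of splitting respondents into groups according to how far their responses deviate from a pooled model.

```python
# Simplified stand-in for the REBUS-PLS segmentation idea: respondents
# are split into three groups by their (hypothetical) model residuals.
def tertile_groups(residuals):
    """Return three lists of respondent indices, ordered by residual value."""
    order = sorted(range(len(residuals)), key=lambda i: residuals[i])
    n = len(order)
    return [order[: n // 3], order[n // 3 : 2 * n // 3], order[2 * n // 3 :]]

# Invented residuals for six respondents.
groups = tertile_groups([0.1, -0.8, 0.5, 0.0, 1.2, -0.3])
```

The real algorithm alternates between estimating a PLS model per group and reassigning units to the group whose model fits them best; this sketch keeps only the final partitioning step.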
Practical implications
The study offers a set of determinants for information quality and business value in BDA projects, in order to support managers in their decision to enhance user satisfaction and firm performance.
Originality/value
The paper extends big data literature by offering an appraisal-emotional response-coping framework that is well fitted for information quality modeling on firm performance. The methodological novelty lies in embracing REBUS-PLS to handle unobserved heterogeneity in the sample.
Abstract
Purpose
Big data clearly represent an important advance in information systems theory, but to describe it as “revolutionary” is premature. Similar technological breakthroughs, from online databases to ERP, were clearly modulated by advances in the organizational domain, including matters of structure, strategy and culture and arguably big data will be similar. The purpose of this paper is to encourage discussion of the wider implications of big data for the theory and practice of knowledge management.
Design/methodology/approach
This is a conceptual study based on critical analysis of the relevant literatures including those of organizational studies and management, big data and knowledge management.
Findings
The literature of big data emphasizes the application of algorithms to pattern analysis and prediction, resulting in data-driven decision-making, with data being the creator of value in organizations and societies. This would appear to render obsolete previous depictions of the “data-information-knowledge” relationship and, in effect, spell the end of knowledge management. However, big data literature largely ignores the organizational dimension and, significantly, the importance of frameworks, strategies and cultures for big data. As all of these are present in the literature of knowledge management, it would seem that big data have a long way to go to catch up and qualify even as a sub-discipline. Indeed, on the evidence, big data may well have a future as a contributor to and/or an element of knowledge management. Even for this to happen, however, major advances are required across the spectrum of big data technologies.
Research limitations/implications
This is a position paper written as the precursor for an empirical study.
Originality/value
The paper offers a critical literature-based and knowledge management perspective on big data while pointing out the common thread that runs through decades of advances in information systems technologies.
Stéphane Bourliataux-Lajoinie, Frederic Dosquet and Josep Lluís del Olmo Arriaga
Abstract
Purpose
This study aims to offer a three-pronged reflection on overtourism in large cities such as Barcelona. The objective is to outline how technology can impact on overtourism and eventually, how to tackle the problem using technology.
Design/methodology/approach
The research design is based on secondary data (literature and online reviews) and a case study of Barcelona.
Findings
The most significant aspect is the rapid spread of comments and reviews about attractions and venues. Despite the benefits of widespread ICT adoption, these new technologies have a dark side: closely linked to fashion trends, some tourist destinations find themselves rapidly overbooked.
Originality/value
Unlike other studies, this paper reveals a dark side of technology and attempts to use technology to mitigate the impacts of overtourism.
Charanjit Singh and Wangwei Lin
Abstract
Purpose
Artificial intelligence has had a major impact on organisations, from banking through to law firms. The rate at which technology has developed to handle tasks that are complex, technical and time-consuming has been astounding. The purpose of this paper is to explore the solutions that AI, RegTech and CharityTech provide to charities in navigating the vast amount of anti-money laundering and counter-terror finance legislation in the UK, so that they comply with the requirements, mitigate the potential risk they face and develop a more coherent and streamlined set of actions.
Design/methodology/approach
The subject is approached through the analysis of data, literature, and domestic and international regulation. The first part of the paper explores the current obligations and risks charities face; these are then, in the second part, set against an examination of potential technological solutions as of August 2020.
Findings
It is suggested that charities underestimate the nature and size of the threat posed to them; this is significant given the sector’s growing size and impact, as demonstrated. Technological solutions are suggested to combat the issues charities face.
Originality/value
The study is original because it is the first to create the notion of CharityTech and to specifically explore what technological advances can assist charities in meeting the regulatory compliance challenge.