Search results
1 – 10 of over 12,000
Johann Eder and Vladimir A. Shekhovtsov
Abstract
Purpose
Medical research requires biological material and data collected through biobanks in reliable processes with quality assurance. Medical studies based on data of unknown or questionable quality are useless or even dangerous, as evidenced by recent examples of withdrawn studies. Medical data sets consist of highly sensitive personal data, which has to be protected carefully and is available for research only after the approval of ethics committees. The purpose of this research is to propose an architecture that supports researchers in efficiently and effectively identifying relevant collections of material and data with documented quality for their research projects while observing strict privacy rules.
Design/methodology/approach
Following a design science approach, this paper develops a conceptual model for capturing and relating metadata of medical data in biobanks to support medical research.
Findings
This study describes the landscape of biobanks as federated medical data lakes, such as the collections of samples and their annotations in the European federation of biobanks (Biobanking and Biomolecular Resources Research Infrastructure – European Research Infrastructure Consortium, BBMRI-ERIC), and develops a conceptual model capturing schema information with quality annotations. This paper discusses the quality dimensions of data sets for medical research in depth and proposes representations of both the metadata and the data quality documentation, with the aim of supporting researchers in effectively and efficiently identifying suitable data sets for medical studies.
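The conceptual model itself is not reproduced in the abstract, but the core idea — schema metadata carrying attribute-level quality annotations, shared without exposing the sensitive data — can be sketched with hypothetical entities. All class, attribute and dimension names below are illustrative, not the paper's actual model:

```python
from dataclasses import dataclass, field

@dataclass
class QualityAnnotation:
    # Hypothetical quality dimensions for a documented attribute
    completeness: float      # fraction of non-missing values, 0..1
    accuracy: str            # e.g. "validated", "self-reported"
    provenance: str          # how the values were collected

@dataclass
class AttributeMetadata:
    name: str
    datatype: str
    quality: QualityAnnotation

@dataclass
class CollectionMetadata:
    # Only schema and quality metadata are shared with researchers;
    # the sensitive data itself stays inside the biobank.
    biobank: str
    collection_id: str
    attributes: list = field(default_factory=list)

    def find(self, attr_name, min_completeness):
        """Attributes matching a name with documented minimum quality."""
        return [a for a in self.attributes
                if a.name == attr_name
                and a.quality.completeness >= min_completeness]

coll = CollectionMetadata("BB-1", "C-42")
coll.attributes.append(AttributeMetadata(
    "hba1c", "float", QualityAnnotation(0.93, "validated", "lab assay")))
print(len(coll.find("hba1c", 0.9)))  # prints 1
```

A researcher would query such metadata records across the federation to shortlist collections before ever requesting ethics approval for the underlying data.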
Originality/value
This novel conceptual model for metadata for medical data lakes has a unique focus on the high privacy requirements of the data sets contained in medical data lakes and also stands out in the detailed representation of data quality and metadata quality of medical data sets.
Susanne Leitner-Hanetseder and Othmar M. Lehner
Abstract
Purpose
With the help of “self-learning” algorithms and high computing power, companies are transforming Big Data into artificial intelligence (AI)-powered information and gaining economic benefits. AI-powered information and Big Data (simply data henceforth) have quickly become some of the most important strategic resources in the global economy. However, their value is not (yet) formally recognized in financial statements, which leads to a growing gap between book and market values and thus limited decision usefulness of the underlying financial statements. The objective of this paper is to identify ways in which the value of data can be reported to improve decision usefulness.
Design/methodology/approach
Based on the authors' experience as both long-term practitioners and theoretical accounting scholars, the authors conceptualize and draw up a potential data value chain and show the transformation from raw Big Data to business-relevant AI-powered information during its process.
Findings
Analyzing current International Financial Reporting Standards (IFRS) regulations and their applicability, the authors show that current regulations are insufficient to provide useful information on the value of data. Following this, the authors propose a Framework for AI-powered Information and Big Data (FAIIBD) Reporting. This framework also provides insights into the (good) governance of data, with the purpose of increasing decision usefulness, and connects to existing frameworks. In the conclusion, the authors raise questions concerning this framework that may be worthy of discussion in the scholarly community.
Research limitations/implications
Scholars and practitioners alike are invited to follow up on the conceptual framework from many perspectives.
Practical implications
The framework can serve as a guide towards a better understanding of how to recognize and report AI-powered information and by that (a) limit the valuation gap between book and market value and (b) enhance decision usefulness of financial reporting.
Originality/value
This article proposes a conceptual framework within IFRS to help regulators better deal with the value of AI-powered information and improve the good governance of (Big) data.
Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv
Abstract
Purpose
The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.
Design/methodology/approach
Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.
Findings
Based on the ideas of statistics, system theory, machine learning and data mining, the present research focuses on "data quality diagnosis" and "index classification and stratification", clarifying the classification standards and data quality characteristics of index data. A data-quality diagnosis system of "data review – data cleaning – data conversion – data inspection" is established. Using decision trees, interpretive structural modeling, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, the scientific and standardized classification and stratification of the index system can be realized.
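As a toy illustration of the clustering step named above, the sketch below groups indicators into strata with a plain K-means implementation. The indicator names and their two-dimensional feature profiles are invented for the example; the paper's actual features and methods are richer:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-means; returns a cluster label per point."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center (squared distance)
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2
                                        for a, b in zip(p, centers[c])))
                  for p in points]
        # move each center to the mean of its members
        for c in range(k):
            members = [p for p, lbl in zip(points, labels) if lbl == c]
            if members:
                centers[c] = tuple(sum(xs) / len(members)
                                   for xs in zip(*members))
    return labels

# Hypothetical indicators, each described by a 2-D feature profile
indicators = {
    "gdp_growth":    (0.9, 0.1),
    "export_index":  (0.8, 0.2),
    "pm25_level":    (0.1, 0.9),
    "water_quality": (0.2, 0.8),
}
names = list(indicators)
labels = kmeans([indicators[n] for n in names], k=2)
strata = {}
for n, lbl in zip(names, labels):
    strata.setdefault(lbl, []).append(n)
# two strata emerge: economic vs environmental indicators
print([sorted(v) for v in strata.values()])
```

Grouping correlated indicators this way is one concrete route to the redundancy reduction the abstract describes: within a stratum, near-duplicate indicators can be merged or pruned.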
Originality/value
The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed based on systematic thinking about the whole and the individual. Aiming at the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the design of the index system and enhances its objectivity and rationality.
Orlando Troisi, Anna Visvizi and Mara Grimaldi
Abstract
Purpose
Digitalization accelerates the need for tourism and hospitality ecosystems to reframe business models in line with a data-driven orientation that can foster value creation and innovation. Since the question of data-driven business models (DDBMs) in hospitality remains underexplored, this paper aims to (1) reveal the key dimensions of the data-driven redefinition of business models in smart hospitality ecosystems and (2) conceptualize the key drivers underlying the emergence of innovation in these ecosystems.
Design/methodology/approach
The empirical research is based on semi-structured interviews with a sample of hospitality managers employed in three different accommodation services, i.e. hotels, bed and breakfasts (B&Bs) and guesthouses, to explore the data-driven strategies and practices employed on site.
Findings
The findings make it possible to devise a conceptual framework that classifies the enabling dimensions of DDBMs in smart hospitality ecosystems. Here, the centrality of strategy conducive to the development of data-driven innovation is stressed.
Research limitations/implications
The study developed a conceptual framework that will serve as a tool to examine the impact of digitalization in other service industries. This study will also be useful for small and medium-sized enterprise (SME) managers who seek to understand the possibilities that data-driven management strategies offer for stimulating innovation in their companies.
Originality/value
The paper reinterprets value creation practices in business models through the lens of data-driven approaches. In this way, this paper offers a new (conceptual and empirical) perspective to investigate how the hospitality sector at large can use the massive amounts of data available to foster innovation in the sector.
Ilpo Helén and Hanna Lehtimäki
Abstract
Purpose
The paper contributes to the discussion on valuation in organization studies and strategic management literature. The nascent literature on valuation practices has examined established markets where producers and consumers are known and rivalry in the market is a given. Furthermore, previous research has operated with a narrow meaning of value as either a financial profit or a subjective consumer preference. Such a narrow view on value is problematic and insufficient for studying the interlacing of innovation and value creation in emerging technoscientific business domains.
Design/methodology/approach
The authors present an empirical study about value creation in an emerging technoscience business domain formed around personalized medicine and digital health data.
Findings
The results of this analysis show that in a technoscientific domain, the valuation of innovations is multiple and malleable, entails pursuing attractiveness in collaborations and partnerships and is performative; owing to an emphatic future orientation, values are indefinite and promissory.
Research limitations/implications
As research implications, this study shows that valuation practices in an emerging technoscience business domain focus on defining the potential economic value in the future and attracting partners as probable future beneficiaries. Commercial value upon innovation in an embryonic business milieu is created and situated in valuation practices that constitute the prospective market, the prevalent economic discourse, and rationale. This is in contrast to an established market, where valuation practices are determined at the intersection of customer preferences and competitive arenas where suppliers, producers, service providers and new entrants to the market present value propositions.
Practical implications
The study findings extend discussion on valuation from established business domains to emerging technoscience business domains which are in a “pre-competition” phase where suppliers, customers, producers and their collaborative and competitive relations are not yet established.
Social implications
As managerial implications, this study provides health innovation stakeholders in the public, private and academic sectors with insights into the ecosystem dynamics of technoscientific innovation. Such insight is useful in strategic decision-making about ecosystem strategy and ecosystem business models for value proposition, value creation and value capture in an emerging innovation domain characterized by collaborative and competitive relations among stakeholders. To policy makers, who make governance decisions to guide and control the development of medical innovation using digital health data, this study provides an in-depth analysis of the overall business ecosystem of an emerging technoscience business, together with insights into the various dimensions of valuation, which can help propel financial investment in the field.
Originality/value
This study's results expand previous theorizing on valuation by showing that in technoscientific innovation all types of value created – scientific, clinical, social or economic – are predominantly promissory. This study complements the nascent theorizing on value creation and valuation practices of technoscientific innovation.
Xiu Susie Fang, Quan Z. Sheng, Xianzhi Wang, Anne H.H. Ngu and Yihong Zhang
Abstract
Purpose
This paper aims to propose a system for generating actionable knowledge from Big Data and use this system to construct a comprehensive knowledge base (KB), called GrandBase.
Design/methodology/approach
In particular, this study extracts new predicates from four types of data sources, namely, Web texts, Document Object Model (DOM) trees, existing KBs and query streams, to augment the ontology of the existing KB (i.e. Freebase). In addition, a graph-based approach to conduct better truth discovery for multi-valued predicates is proposed.
Findings
Empirical studies demonstrate the effectiveness of the approaches presented in this study and the potential of GrandBase. Future research directions regarding GrandBase construction and extension are also discussed.
Originality/value
To revolutionize our modern society using the wisdom of Big Data, numerous knowledge bases (KBs) have been constructed to feed massive knowledge-driven applications with Resource Description Framework (RDF) triples. The important challenges for KB construction include extracting information from large-scale, possibly conflicting and differently structured data sources (the knowledge extraction problem) and reconciling the conflicts that reside in those sources (the truth discovery problem). Tremendous research efforts have been devoted to both problems. However, existing KBs are far from comprehensive and accurate: first, existing knowledge extraction systems retrieve data from limited types of Web sources; second, existing truth discovery approaches commonly assume that each predicate has only one true value. In this paper, the focus is on the problem of generating actionable knowledge from Big Data. A system consisting of two phases, namely, knowledge extraction and truth discovery, is proposed to construct a broader KB, called GrandBase.
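The second challenge — truth discovery when a predicate may have several true values — can be illustrated with a minimal iterative scheme. This is a generic sketch, not the paper's graph-based algorithm: source trust and value scores reinforce each other, and every value backed by a majority of the total trust is accepted, so a predicate can retain multiple true values. The sources, entities and claims are invented:

```python
# claims[source][entity] = set of values the source asserts
# (a multi-valued predicate, e.g. the authors of a paper)
claims = {
    "src_a": {"paper1": {"alice", "bob"}},
    "src_b": {"paper1": {"alice", "bob", "mallory"}},
    "src_c": {"paper1": {"alice"}},
}

def truth_discovery(claims, rounds=10):
    trust = {s: 1.0 for s in claims}          # start with uniform trust
    entities = {e for ent in claims.values() for e in ent}
    truths = {}
    for _ in range(rounds):
        # 1. score each candidate value by the trust of its supporters
        truths = {}
        for e in entities:
            score, total = {}, 0.0
            for s, ent in claims.items():
                if e in ent:
                    total += trust[s]
                    for v in ent[e]:
                        score[v] = score.get(v, 0.0) + trust[s]
            # multi-valued: accept every value backed by a majority of trust
            truths[e] = {v for v, sc in score.items() if sc >= total / 2}
        # 2. re-estimate trust as mean Jaccard overlap with accepted truths
        for s, ent in claims.items():
            overlaps = [len(vals & truths[e]) / len(vals | truths[e])
                        for e, vals in ent.items()]
            trust[s] = sum(overlaps) / len(overlaps)
    return truths, trust

truths, trust = truth_discovery(claims)
print(sorted(truths["paper1"]))   # ['alice', 'bob']
```

Note how a single-true-value assumption would have been forced to discard either "alice" or "bob"; the majority-of-trust threshold keeps both while still rejecting the outlier claim.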
Chiehyeon Lim, Min-Jun Kim, Ki-Hun Kim, Kwang-Jae Kim and Paul Maglio
Abstract
Purpose
The proliferation of customer-related data provides companies with numerous service opportunities to create customer value. The purpose of this study is to develop a framework to use this data to provide services.
Design/methodology/approach
This study conducted four action research projects on the use of customer-related data for service design with industry and government. Based on these projects, a practical framework was designed, applied, and validated, and was further refined by analyzing relevant service cases and incorporating the service and operations management literature.
Findings
The proposed customer process management (CPM) framework suggests steps a service provider can take when providing information to its customers to improve their processes and create more value-in-use by using data related to their processes. The applicability of this framework is illustrated using real examples from the action research projects and relevant literature.
Originality/value
“Using data to advance service” is a critical and timely research topic in the service literature. This study develops an original, specific framework for a company’s use of customer-related data to advance its services and create customer value. Moreover, the four projects with industry and government are early CPM case studies with real data.
Tao Xu, Hanning Shi, Yongjiang Shi and Jianxin You
Abstract
Purpose
The purpose of this paper is to explore the concept of data assets and how companies can assetize their data. Using the literature review methodology, the paper first summarizes the conceptual controversies over data assets in the existing literature. Subsequently, the paper defines the concept of data assets. Finally, keywords from the existing research literature are presented visually and a foundational framework for achieving data assetization is proposed.
Design/methodology/approach
This paper uses a systematic literature review approach to discuss the conceptual evolution and strategic imperatives of data assets. To establish a robust research methodology, this paper takes two main aspects into account. First, it conducts a comprehensive review of the existing literature on digital technology and data assets, which enables the derivation of an evolutionary path of data assets and the development of a clear and concise definition of the concept. Second, the paper uses CiteSpace, a widely used software for literature review, to examine the research framework of enterprise data assetization.
Findings
The paper offers pivotal insights into the realm of data assets. It highlights the changing perceptions of data assets with digital progression and addresses debates on data asset categorization, value attributes and ownership. The study introduces a definitive concept of data assets as electronically recorded data resources with real or potential value under legal parameters. Moreover, it delineates strategic imperatives for harnessing data assets, presenting a practical framework that charts the stages of “resource readiness, capacity building, and data application”, guiding businesses in optimizing their data throughout its lifecycle.
Originality/value
This paper comprehensively explores the issue of data assets, clarifying controversial concepts and categorizations and filling gaps in the existing literature. It introduces a clear conceptualization of data assets, bridging the gap between academia and practice. In addition, the study proposes a strategic framework for data assetization. This study not only helps to promote a unified understanding among academics and professionals but also helps businesses to understand the process of data assetization.
Walaa M. El-Sayed, Hazem M. El-Bakry and Salah M. El-Sayed
Abstract
Wireless sensor networks (WSNs) periodically collect data through randomly dispersed sensors (motes), which typically consume high energy in radio communication, mainly for data transmission within the network. Furthermore, the dissemination mode in a WSN usually produces noisy values, incorrect measurements or missing information that affect its behaviour. In this article, a Distributed Data Predictive Model (DDPM) is proposed to extend the network lifetime by decreasing the energy consumption of sensor nodes. It is built upon a distributive clustering model for predicting dissemination faults in WSNs. The proposed model was developed using a recursive least squares (RLS) adaptive filter integrated with a finite impulse response (FIR) filter to remove unwanted reflections and noise accompanying the signals transferred among the sensors, aiming to minimize the size of transferred data and thereby provide energy efficiency. The experimental results demonstrate that DDPM reduced the rate of data transmission to ∼20%. It also decreased energy consumption to 95% throughout the dataset sample and upgraded the performance of the sensory network by about 19.5%, thus prolonging the lifetime of the network.
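The dual-prediction idea behind such models can be sketched as follows: the sensor node and the sink run identical RLS predictors, and a reading is transmitted only when it deviates from the shared prediction by more than a threshold, which bounds the reconstruction error at the sink by that threshold. This is a generic sketch, not the DDPM implementation; the filter order, forgetting factor and threshold are arbitrary choices for illustration:

```python
class RLSPredictor:
    """Order-p recursive least squares one-step-ahead predictor."""
    def __init__(self, p=2, lam=0.98, delta=100.0):
        self.p, self.lam = p, lam
        self.w = [0.0] * p                       # filter weights
        self.P = [[delta if i == j else 0.0 for j in range(p)]
                  for i in range(p)]             # inverse correlation matrix
        self.x = [0.0] * p                       # last p samples

    def predict(self):
        return sum(wi * xi for wi, xi in zip(self.w, self.x))

    def update(self, sample):
        p, lam, x, P = self.p, self.lam, self.x, self.P
        Px = [sum(P[i][j] * x[j] for j in range(p)) for i in range(p)]
        g = lam + sum(x[i] * Px[i] for i in range(p))
        k = [v / g for v in Px]                  # gain vector
        err = sample - self.predict()
        self.w = [wi + ki * err for wi, ki in zip(self.w, k)]
        xP = [sum(x[i] * P[i][j] for i in range(p)) for j in range(p)]
        self.P = [[(P[i][j] - k[i] * xP[j]) / lam for j in range(p)]
                  for i in range(p)]
        self.x = [sample] + x[:-1]               # shift in the new sample

readings = [20.0 + 0.1 * t for t in range(100)]  # slowly drifting signal
eps = 0.5                                        # tolerated error bound
node, sink = RLSPredictor(), RLSPredictor()
sent, recovered = 0, []
for r in readings:
    if abs(r - node.predict()) > eps:
        sent += 1
        value = r                 # radio transmission to the sink
    else:
        value = sink.predict()    # sink reuses its own prediction
    # both filters update with the same value, so they stay in sync
    node.update(value)
    sink.update(value)
    recovered.append(value)
print(f"transmitted {sent} of {len(readings)} readings")
```

Because the node suppresses exactly those readings the sink can predict within `eps`, the recovered series never deviates from the true one by more than the threshold, while most radio transmissions are avoided.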
Zhiwen Pan, Jiangtian Li, Yiqiang Chen, Jesus Pacheco, Lianjun Dai and Jun Zhang
Abstract
Purpose
The General Social Survey (GSS) is a government-funded survey that examines the socio-economic status, quality of life and structure of contemporary society. The GSS data set is regarded as one of the authoritative sources for government and organization practitioners to make data-driven policies. Previous analytic approaches for GSS data sets were designed by combining expert knowledge and simple statistics. By utilizing emerging data mining algorithms, we propose a comprehensive data management and data mining approach for GSS data sets.
Design/methodology/approach
The approach is designed to operate in a two-phase manner: a data management phase, which improves the quality of GSS data by performing attribute pre-processing and filter-based attribute selection, and a data mining phase, which extracts hidden knowledge from the data set through prediction analysis, classification analysis, association analysis and clustering analysis.
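The filter-based attribute selection step can be illustrated with a small mutual-information filter: each attribute is scored by how much information it carries about the target variable, independently of any downstream model, and only the top-ranked attributes are kept. The attributes, records and target below are invented toy data, not the actual GSS schema:

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    """I(X; Y) in bits, estimated from paired observations."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# toy survey records: (education, region, noise) -> life_satisfaction
rows = [("high", "urban", 0, "high"), ("high", "rural", 1, "high"),
        ("low",  "urban", 0, "low"),  ("low",  "rural", 1, "low"),
        ("high", "urban", 1, "high"), ("low",  "rural", 0, "low")]
attrs = ["education", "region", "noise"]
label = [r[3] for r in rows]

scores = {a: mutual_information([r[i] for r in rows], label)
          for i, a in enumerate(attrs)}
selected = sorted(attrs, key=scores.get, reverse=True)[:1]
print(selected)   # ['education']
```

Here "education" fully determines the toy label (one full bit of mutual information) while the other attributes score near zero, so the filter keeps it and discards the rest — the same mechanism that, on real survey data, removes attributes that only add noise to the later classification and clustering analyses.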
Findings
According to the experimental evaluation results, the paper has the following findings: performing attribute selection on the GSS data set increases the performance of both classification analysis and clustering analysis; all the data mining analyses can effectively extract hidden knowledge from the GSS data set; and the knowledge generated by the different analyses can, to some extent, cross-validate each other.
Originality/value
By leveraging the power of data mining techniques, the proposed approach can explore knowledge in a fine-grained manner with minimal human interference. Experiments on the Chinese General Social Survey data set are conducted to evaluate the performance of the approach.