Search results

1 – 10 of over 85,000
Article
Publication date: 9 February 2022

Sena Başak, İzzet Kılınç and Aslıhan Ünal

Abstract

Purpose

The purpose of this paper is to examine the contribution of big data to the process of transforming an IT firm into a learning organization.

Design/methodology/approach

The authors adopted a qualitative research approach to define and interpret the ideas and experiences of the IT firm's employees and to present them to readers directly. For this purpose, they followed a single-case study design, studying a small and medium-sized enterprise operating in the IT sector in Düzce province, Turkey. Semi-structured interviews and document analysis served as the data collection methods. In all, eight interviews were conducted with employees, and the organization's brochures and website were used as data sources for the document analysis.

Findings

As a result of the in-depth interviews and document analysis, the authors formed five main themes that describe the perception of the big data and learning organization concepts, the methods and practices adopted in the transformation process, the areas in which big data is used in the organization, and how the sample organization uses big data as a learning organization. The findings show that the sample organization is a learning IT firm that has used big data both in transforming into a learning organization and in maintaining its learning culture.

Research limitations/implications

The findings contribute to the literature as one of the first studies to examine the influence of big data on an IT firm's transformation into a learning organization. They reveal that IT firms benefit from big data solutions while learning. However, as the research follows a single-case design, the findings may be specific to the sample organization. Future studies should examine the subject with different samples and research designs.

Originality/value

In the literature, research on how IT firms' managers and employees use big data in the organizational learning process is limited. The authors expect this paper to shed light on future research examining the effect of big data on the learning process of the organization.

Details

VINE Journal of Information and Knowledge Management Systems, vol. 54 no. 3
Type: Research Article
ISSN: 2059-5891

Article
Publication date: 15 February 2008

Richard S. Segall, Gauri S. Guha and Sarath A. Nonis

Abstract

Purpose

This paper seeks to present a complete set of graphical and numerical outputs of data mining performed for microarray databases of plant data as described in earlier research by the authors. A brief description of data mining is also presented, as well as a brief background of previous research.

Design/methodology/approach

The paper applies data mining, using SAS Enterprise Miner Version 4, to plant data from the Osmotic Stress Microarray Information Database (OSMID), which is available on the web, for both normalized and log(2)-transformed data.
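
As a loose illustration of the clustering step the paper runs in SAS Enterprise Miner, the sketch below applies the same log(2) transform and a simple k-means clustering to a synthetic expression matrix; the data, the library (scikit-learn rather than SAS) and the cluster count are stand-ins, not the authors' actual workflow.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for an OSMID-style expression matrix (genes x conditions).
rng = np.random.default_rng(42)
expr = rng.lognormal(mean=2.0, sigma=1.0, size=(500, 12))   # raw intensities
log2_expr = np.log2(expr + 1)                               # log(2) transform, as in the paper

# Cluster genes by expression profile, roughly analogous to the cluster analysis module.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(log2_expr)
for k in range(4):
    print(f"cluster {k}: {(km.labels_ == k).sum()} genes")
```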

Findings

This paper illustrates that useful information about the effects of environmental stress tolerances (ESTs) on plants can be obtained by using data mining.

Research limitations/implications

Use of SAS Enterprise Miner was very effective for performing data mining of microarray databases with its modules of cluster analysis, decision trees, and descriptive and visual statistics.

Practical implications

The data used from the OSMID database are considered representative of those that could be used for biotech applications such as the manufacture of plant-made pharmaceuticals and genetically modified foods.

Originality/value

This paper contributes to the discussion on the use of data mining for microarray databases and specifically for studying the effects of ESTs on plants.

Details

Kybernetes, vol. 37 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 12 March 2018

Momotaz Begum and Tadashi Dohi

Abstract

Purpose

The purpose of this paper is to present a novel method to estimate the optimal software testing time that minimizes the relevant expected software cost via a refined neural network approach with grouped data, where multi-stage look-ahead prediction is carried out with a simple three-layer perceptron neural network with multiple outputs.

Design/methodology/approach

To analyze software fault count data that follow a Poisson process with an unknown mean value function, the authors transform the underlying Poisson count data into Gaussian data by means of one of three data transformation methods and predict the cost-optimal software testing time via a neural network.
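
A minimal sketch of this idea, under stated assumptions: the fault counts below are invented, the square-root transform is only one common Poisson-to-Gaussian variance stabilizer (the paper compares three transformation methods), and a scikit-learn multi-output MLP stands in for the authors' three-layer perceptron performing multi-stage look-ahead prediction.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative grouped (e.g. weekly) software fault counts, not the paper's data sets.
faults = np.array([12, 10, 9, 11, 7, 8, 6, 5, 5, 4, 3, 3, 2, 2, 1, 1], dtype=float)

# One common Poisson-to-Gaussian variance stabilizer.
z = np.sqrt(faults + 3.0 / 8.0)

# Sliding windows: predict the next 3 transformed counts from the previous 4.
lag, horizon = 4, 3
X = np.array([z[i:i + lag] for i in range(len(z) - lag - horizon + 1)])
Y = np.array([z[i + lag:i + lag + horizon] for i in range(len(z) - lag - horizon + 1)])
mlp = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, Y)

# Multi-stage look-ahead prediction, back-transformed to the count scale; a cost model
# would then be evaluated over such predictions to pick the testing time.
pred = mlp.predict(z[-lag:].reshape(1, -1)) ** 2 - 3.0 / 8.0
print("predicted future fault counts:", np.clip(pred, 0, None).round(2))
```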

Findings

In numerical examples with two actual software fault count data sets, the authors compare the neural network approach with common non-homogeneous Poisson process-based software reliability growth models. The proposed method is shown to provide more accurate and more flexible decision making than the common stochastic modeling approach.

Originality/value

It is shown that the neural network approach can be used to predict the optimal software testing time more accurately.

Details

Journal of Quality in Maintenance Engineering, vol. 24 no. 1
Type: Research Article
ISSN: 1355-2511

Article
Publication date: 23 March 2021

Hendri Murfi

Abstract

Purpose

The aim of this research is to develop an eigenspace-based fuzzy c-means method for scalable topic detection.

Design/methodology/approach

The eigenspace-based fuzzy c-means (EFCM) method combines representation learning and clustering. The textual data are transformed into a lower-dimensional eigenspace using truncated singular value decomposition, and fuzzy c-means is performed on the eigenspace to identify the centroid of each cluster. The topics are obtained by transforming the centroids back into the nonnegative subspace of the original space. In this paper, we extend the EFCM method for scalability using two approaches, i.e. single-pass and online processing, and call the resulting topic detection methods spEFCM and oEFCM, respectively.
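
A minimal sketch of the EFCM pipeline described above, with assumptions: the corpus (20 Newsgroups), the number of eigen-dimensions and topics, and the hand-rolled fuzzy c-means loop are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on the rows of X; returns the c cluster centroids."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)                 # fuzzy membership matrix
    for _ in range(n_iter):
        W = U ** m
        centroids = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centroids[None], axis=2) + 1e-9
        U = 1.0 / d ** (2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centroids

docs = fetch_20newsgroups(subset="train").data[:2000]
vec = TfidfVectorizer(max_features=5000, stop_words="english")
A = vec.fit_transform(docs)                           # document-term matrix
svd = TruncatedSVD(n_components=50, random_state=0)
E = svd.fit_transform(A)                              # lower-dimensional eigenspace
C = fuzzy_cmeans(E, c=10)                             # cluster centroids in eigenspace
topics = np.clip(svd.inverse_transform(C), 0, None)   # back to the nonnegative word space
words = np.array(vec.get_feature_names_out())
print(words[np.argsort(-topics, axis=1)[:, :10]])     # top 10 words per topic
```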

Findings

Our simulations show that both oEFCM and spEFCM provide faster running times than EFCM for data sets that do not fit in memory, although with a decrease in the average coherence score. For data sets that both fit and do not fit in memory, oEFCM offers a better trade-off between running time and coherence score than spEFCM.

Originality/value

This research produces a scalable topic detection method. Besides scalability, the developed method also provides a faster running time for data sets that fit in memory.

Details

Data Technologies and Applications, vol. 55 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 5 October 2022

Michael DeBellis and Biswanath Dutta

Abstract

Purpose

The purpose of this paper is to describe the CODO ontology (COviD-19 Ontology) that captures epidemiological data about the COVID-19 pandemic in a knowledge graph that follows the FAIR principles. This study took information from spreadsheets and integrated it into a knowledge graph that could be queried with SPARQL and visualized with the Gruff tool in AllegroGraph.
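
A small sketch of querying such a knowledge graph with SPARQL from Python using rdflib; the file name and the codo: class and property names in the query are assumptions for illustration, not details taken from the paper (which queries the graph in AllegroGraph).

```python
from rdflib import Graph

g = Graph()
g.parse("codo.owl", format="xml")   # local copy of the CODO knowledge graph (path assumed)

# Hypothetical query: patients and the places they are associated with.
q = """
# The namespace and the class/property names below are assumed, not confirmed from the paper.
PREFIX codo: <http://www.isibang.ac.in/ns/codo#>
SELECT ?patient ?place WHERE {
  ?patient a codo:Patient ;
           codo:hasPlace ?place .
}
LIMIT 10
"""
for row in g.query(q):
    print(row.patient, row.place)
```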

Design/methodology/approach

The knowledge graph was designed with the Web Ontology Language. The methodology was a hybrid approach, integrating the YAMO methodology for ontology design with Agile methods to define iterations and the approach to requirements, testing and implementation.

Findings

The hybrid approach demonstrated that Agile can bring the same benefits to knowledge graph projects as it has to other projects. The two-person team went from an ontology to a large knowledge graph with approximately 5 M triples in a few months. The authors gathered useful real-world experience on how to most effectively transform “from strings to things.”

Originality/value

This study is the only FAIR model (to the best of the authors’ knowledge) to address epidemiology data for the COVID-19 pandemic. It also brought to light several practical issues that generalize to other studies wishing to go from an ontology to a large knowledge graph. This study is one of the first studies to document how the Agile approach can be used for knowledge graph development.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 7 March 2016

Benedict M. Uzochukwu, Silvanus J. Udoka and Femi Balogun

Abstract

Purpose

Managing product life cycle data is important for achieving design excellence, continued product operational performance, customer satisfaction and sustainment. As a result, it is important to develop a sustainment simulator that transforms life cycle data into actionable design metrics. Currently, there is an apparent lack of technologies and tools to synthesize product lifetime data. The purpose of this paper is to describe how a product sustainment simulator was developed using a fuzzy cognitive map (FCM). As a proof of concept, and to demonstrate the utility of the simulator, an implementation example utilizing product lifetime data as input is presented.

Design/methodology/approach

The sustainment simulator was developed using Visual Basic, and the simulation experiment was accomplished using an FCM. The Statistical Analytical Software tool was used to run structural equation model programs that provided the initial input into the FCM and the simulator, and product life data were used as input to the simulator.
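
A minimal sketch of how an FCM iteration works, assuming a sigmoid activation update; the four concepts and the weight matrix below are invented for illustration and are not the paper's model or its Visual Basic implementation.

```python
import numpy as np

def fcm_simulate(W, a0, n_iter=50, lam=1.0):
    """Iterate a fuzzy cognitive map: a <- sigmoid(a + a @ W)."""
    a = np.asarray(a0, dtype=float)
    for _ in range(n_iter):
        a = 1.0 / (1.0 + np.exp(-lam * (a + a @ W)))
    return a

# Hypothetical sustainment concepts and causal weights (row concept influences column concept).
concepts = ["design quality", "maintainability", "life cycle cost", "sustainment"]
W = np.array([
    [0.0,  0.6, -0.4,  0.5],
    [0.0,  0.0, -0.5,  0.6],
    [0.0,  0.0,  0.0, -0.7],
    [0.0,  0.0,  0.0,  0.0],
])
state = fcm_simulate(W, a0=[0.8, 0.5, 0.5, 0.5])
print(dict(zip(concepts, state.round(2))))
```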

Findings

There is an apparent lack of technologies and tools to synthesize product lifetime data, which constitutes an impediment to designing the next generation of sustainable products. Modern tools, technologies and techniques must be used if the goal of removing product design and sustainment disablers is to be achieved. Product sustainment can, therefore, be achieved using the simulator.

Research limitations/implications

The sustainment simulator is a tool that demonstrates in a practical way how product lifetime data can be transformed into actionable design parameters. This paper includes analysis of a sample generated using random numbers; the lack of an actual data set is primarily due to organizations' reluctance to make actual product lifetime data publicly available. Nevertheless, the paper provides a good demonstration of how product lifetime data can be transformed to ensure product sustainment.

Practical implications

The technique used in this paper would be very useful to product designers, engineers, and research and development teams in developing data manipulation tools to improve product operational and sustainable life cycle performance. Sustainment-conscious organizations will, no doubt, gain a strong comparative and competitive advantage over rivals.

Originality/value

Utilizing the simulator to transform product lifetime data into actionable design metrics, with the help of an efficient decision support tool like the FCM, constitutes a step toward supporting product life cycle management. The outcome of this paper alerts product designers to the parameters that should be taken into account when designing a new generation of a given product.

Details

Benchmarking: An International Journal, vol. 23 no. 2
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 1 June 2004

R.H. Khatibi, R. Lincoln, D. Jackson, S. Surendran, C. Whitlow and J. Schellekens

Abstract

With the diversification of modelling activities encouraged by versatile modelling tools, handling their datasets has become a formidable problem. A further impetus stems from the emergence of the real-time forecasting culture, transforming data embedded in computer programs of one-off modelling activities of the 1970s-1980s into dataset assets, an important feature of modelling since the 1990s, when modelling emerged as a practice with a pivotal role in data transactions. The scope for data is now vast, but in legacy data management practices datasets are fragmented, not transparent outside their native software systems, and normally "monolithic". Emerging initiatives on published interfaces will make datasets transparent outside their native systems but will not solve the fragmentation and monolithic problems. These problems signify a lack of science base in data management, and as such it is necessary to unravel inherent generic structures in data. This paper outlines the root causes of these problems and presents a tentative solution referred to as "systemic data management", which is capable of solving the above problems through the assemblage of packaged data. Categorisation is presented as a packaging methodology, and the various sources contributing to the generic structure of data are outlined, e.g. modelling techniques, modelling problems, application areas and application problems. The opportunities offered by systemic data management include: promoting transparency among datasets of different software systems; exploiting inherent synergies within data; and treating data as assets with a long-term view on the reuse of these assets in an integrated capability.

Details

Management of Environmental Quality: An International Journal, vol. 15 no. 3
Type: Research Article
ISSN: 1477-7835

Article
Publication date: 18 November 2013

Daniel Vila-Suero and Asunción Gómez-Pérez

Abstract

Purpose

Linked data is gaining great interest in the cultural heritage domain as a new way of publishing, sharing and consuming data. The paper aims to provide a detailed method, and MARiMbA, a tool, for publishing linked data from library catalogues in the MARC 21 format, along with their application to the catalogue of the National Library of Spain in the datos.bne.es project.

Design/methodology/approach

First, the background of the case study is introduced. Second, the method and the process of its application are described. Third, each of the activities and tasks is defined, and a discussion of their application to the case study is provided.

Findings

The paper shows that the FRBR model can be applied to MARC 21 records following linked data best practices, that librarians can successfully participate in the process of linked data generation by following a systematic method, and that data source quality can be improved as a result of the process.
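
As a rough illustration of turning a catalogue record into RDF, the sketch below maps a hand-written MARC 21-style record to triples with rdflib; the record, the URI pattern and the use of Dublin Core terms are simplifications for brevity, whereas the paper maps real BNE records to the FRBR model with MARiMbA.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

# A MARC 21 record reduced to a dict (tag 100$a = main author, 245$a = title);
# in practice a parser such as pymarc would read the catalogue file.
record = {"id": "bimo0000001",
          "100a": "Cervantes Saavedra, Miguel de",
          "245a": "Don Quijote de la Mancha"}

BNE = Namespace("http://datos.bne.es/resource/")      # datos.bne.es-style URIs (illustrative)
g = Graph()
res = URIRef(BNE[record["id"]])
g.add((res, RDF.type, DCTERMS.BibliographicResource))
g.add((res, DCTERMS.title, Literal(record["245a"])))
g.add((res, DCTERMS.creator, Literal(record["100a"])))
print(g.serialize(format="turtle"))
```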

Originality/value

The paper proposes a detailed method for publishing and linking linked data from MARC 21 records, provides practical examples, and discusses the main issues found in its application to a real case. It also proposes the integration of a data curation activity and the participation of librarians in the linked data generation process.

Details

Library Hi Tech, vol. 31 no. 4
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 1 March 2003

Jia‐Lang Seng, Yu Lin, Jessie Wang and Jing Yu

Abstract

XML is emerging and evolving quickly as Web and wireless technology penetrates further into the consumer marketplace. Database technology faces new challenges: it has to change to play a supportive role. Web and wireless applications lead the technology paradigm shift, so XML and database connectivity and transformation become critical, and heterogeneity and interoperability must be tackled directly. In this paper, we provide an in-depth technical review of XML and XML database technology. An analytic and comparative framework is developed, formulated around storage method, mapping technique and transformation paradigm. We collect and compile the IBM, Oracle, Sybase and Microsoft XML database products and use the framework to analyze each of these XML database techniques. The comparison and contrast aims to provide insight into the structural and methodological paradigm shift in XML database technology.
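
As a generic illustration of one storage/mapping technique such reviews cover ("shredding" XML into relational tables), the sketch below parses a toy XML document and loads it into SQLite; it is not tied to any of the vendor products compared in the paper.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Toy order document; commercial products map XML to tables with their own tooling.
xml_doc = """<orders>
  <order id="1"><customer>Acme</customer><total>120.50</total></order>
  <order id="2"><customer>Globex</customer><total>89.00</total></order>
</orders>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
for o in ET.fromstring(xml_doc).findall("order"):
    conn.execute(
        "INSERT INTO orders VALUES (?, ?, ?)",
        (int(o.get("id")), o.findtext("customer"), float(o.findtext("total"))),
    )
print(conn.execute("SELECT * FROM orders").fetchall())
```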

Details

Industrial Management & Data Systems, vol. 103 no. 2
Type: Research Article
ISSN: 0263-5577

Book part
Publication date: 19 July 2022

Aradhana Rana, Rajni Bansal and Monica Gupta

Abstract

Introduction: Big data is a disruptive force that affects businesses, industries and the economy. In 2021, insurance analytics will include more than simply analysing statistics. According to current trends, new insurance big data analytics (BDA) methods will enable firms to do more with their data. The insurance business has traditionally been conservative, but adopting new technology is no longer just a trend; it is necessary to remain competitive. Big data technologies aid in processing huge amounts of data, improve workflow efficiency and lower operating costs.

Purpose: This chapter discusses some of the most recent developments in big data for insurance and how insurers may use this information to stay ahead of their competitors. Its prime purpose is to analyse how artificial intelligence (AI), blockchain and mobile technology change the outlook and working of the insurance sector.

Methodology: To achieve our research purpose, we analyse case studies and literature that emphasise how BDA revolutionises the insurance market. For this purpose, various articles and studies on BDA in the insurance market will be selected and studied.

Findings: From the analysis, we find that the use of big data in the insurance business is growing. The development of BDA has proven to be a game-changing technology in insurance, with a slew of benefits. The insurance sector is now grappling with the risks and opportunities that modern technology presents. Big data offers opportunities that every company must avail itself of, and we can safely argue that big data has transformed the insurance sector for the better. The consequences of BDA have enabled insurers to target clients more accurately. This chapter highlights that new big data tools and technologies in the insurance market are increasing. AI is emerging as a powerful technology that can alter the entire insurance value stream. The transmission of any type of digital proof for underwriting, including the use of digital health data such as electronic health records (EHRs), might be a blockchain use case. As digital forensics becomes easier to include in underwriting, insurers should expect price and product design changes in the future. In the future, the internet of things (IoT) and AI will combine to automate insurance processes, causing the sector to transform dramatically. We highlight that these technologies have transformed insurance practices and revolutionised the insurance market.

Details

Big Data: A Game Changer for Insurance Industry
Type: Book
ISBN: 978-1-80262-606-3
