Search results

1 – 10 of over 1000
Article
Publication date: 15 August 2016

Takahiro Komamizu, Toshiyuki Amagasa and Hiroyuki Kitagawa

Abstract

Purpose

Linked data (LD) promotes publishing information on the Web and linking the published information. There is an increasing number of LD datasets containing numerical data, such as statistics. For this reason, analyzing numerical facts on LD has attracted attention from diverse domains. This paper aims to support analytical processing of LD.

Design/methodology/approach

This paper proposes a framework called H-SPOOL, which provides a series of SPARQL (SPARQL Protocol and RDF Query Language) queries that extract objects and attributes from LD data sets, converts them into star/snowflake schemas, and materializes the relevant triples as fact and dimension tables for online analytical processing (OLAP).
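
As a rough illustration of the kind of extraction query such a framework might generate (H-SPOOL's actual query templates are not shown here; the property URIs and the template below are hypothetical):

```python
# Sketch of generating a SPARQL SELECT that pulls one numerical measure
# together with its dimension attributes, in the spirit of the extraction
# step described above. Property URIs and the template are assumptions.

def build_fact_query(measure_prop, dim_props):
    """Build a SPARQL query extracting a measure and its dimensions."""
    dim_vars = [f"?dim{i}" for i in range(len(dim_props))]
    select = "SELECT ?entity ?measure " + " ".join(dim_vars)
    patterns = [f"?entity <{measure_prop}> ?measure ."]
    patterns += [f"?entity <{p}> {v} ." for p, v in zip(dim_props, dim_vars)]
    return f"{select}\nWHERE {{\n  " + "\n  ".join(patterns) + "\n}"

query = build_fact_query(
    "http://example.org/population",  # hypothetical measure property
    ["http://example.org/year", "http://example.org/region"],
)
print(query)
```

The generated query could then be sent to a SPARQL endpoint, and each result row becomes one fact-table row with its dimension keys.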

Findings

The applicability of H-SPOOL is evaluated using existing LD data sets on the Web, and H-SPOOL successfully performs ETL (Extract, Transform, and Load) processing of these data sets for OLAP. In addition, experiments show that H-SPOOL reduces the number of downloaded triples compared with an existing approach.

Originality/value

H-SPOOL is the first work to extract OLAP-related information from SPARQL endpoints, and it drastically reduces the number of downloaded triples.

Details

International Journal of Web Information Systems, vol. 12 no. 3
Type: Research Article
ISSN: 1744-0084

Book part
Publication date: 30 September 2020

Bhawna Suri, Shweta Taneja and Hemanpreet Singh Kalsi

Abstract

This chapter discusses the twofold role of business intelligence (BI) in healthcare: strategic decision making by the organization and by its stakeholders. Visualization techniques from data mining are applied for early and correct diagnosis of disease and for assessing the patient satisfaction quotient, and they also help the hospital identify its best-performing departments.

In this chapter, the usefulness of BI is shown at two levels: the doctor level and the hospital level. As a case study, a hospital is considered that deals with three different diseases: breast cancer, diabetes and liver disorder. BI can be applied to take better strategic decisions in the context of the hospital and the growth of its departments. At the doctor level, on the basis of the various symptoms of a disease, the doctor can advise suitable treatment for the patient. At the hospital level, the best department among all can be identified. Metrics such as a patient's type of admission, whether patients continued their treatment with the hospital and the patient satisfaction quotient can also be calculated. The authors use methods such as correlation matrices, decision trees and mosaic plots to conduct this analysis.
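
The correlation analysis mentioned above can be sketched with a toy example; the patient attributes and figures below are invented, not the chapter's data:

```python
# Minimal Pearson correlation between two hypothetical patient attributes,
# the kind of pairwise computation a correlation matrix is built from.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical columns: fasting glucose vs. a diabetes risk score.
glucose = [90, 110, 130, 150, 170]
risk = [1.0, 1.4, 2.1, 2.6, 3.2]
r = pearson(glucose, risk)
print(round(r, 3))
```

A full correlation matrix simply repeats this computation for every pair of attributes.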

Details

Big Data Analytics and Intelligence: A Perspective for Health Care
Type: Book
ISBN: 978-1-83909-099-8

Article
Publication date: 1 January 2006

Ranjit Bose

Abstract

Purpose

Managing enterprise performance is an important yet difficult process owing to its complexity. The process involves monitoring the strategic focus of an enterprise, whose performance is measured by analyzing data generated from a wide range of interrelated business activities performed at different levels within the enterprise. This study aims to investigate data warehousing and online analytic processing technologies in terms of how they are used and the issues related to their effective management within the broader context of enterprise performance management (EPM).

Design/methodology/approach

A range of recently published research literature on data warehousing, online analytic processing and EPM is reviewed to explore the current state of these technologies and the issues and challenges learned from practice.

Findings

The findings of the study are reported in two parts. The first part discusses current business practices involving these technologies, and the second identifies and discusses the issues and challenges that business managers face in using these technologies to gain competitive advantage.

Originality/value

The study findings are intended to help business managers effectively understand the issues and technologies behind EPM implementation.

Details

Industrial Management & Data Systems, vol. 106 no. 1
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 May 2006

Chan‐Chine Chang and Ruey‐Shun Chen

Abstract

Purpose

Traditional library catalogs have become inefficient and inconvenient for library users. Readers may spend a lot of time searching for library materials via printed catalogs, and they need an intelligent, innovative solution to overcome this problem. The paper examines data mining technology as an approach to fulfilling readers' requirements.

Design/methodology/approach

Data mining is considered to be the non-trivial extraction of implicit, previously unknown and potentially useful information from data. This paper analyzes readers' borrowing records using the techniques of data analysis, data warehouse construction and data mining.

Findings

The paper finds that, after mining the data, readers can be classified into different groups according to the publications in which they are interested. Some people on campus also show a greater preference for multimedia materials.

Originality/value

The data mining results show that all readers can be categorized into five clusters, each with its own characteristics. Graduates and associate researchers borrow multimedia materials much more frequently, which indicates a higher acceptance of digitized publications among these readers. The number of readers borrowing multimedia materials has also increased over the years, a trend suggesting that readers' preferences are gradually shifting towards digital publications.
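
A clustering step of the kind that produced these reader groups might look like the following sketch; the features, data and choice of k are hypothetical, not the paper's actual pipeline:

```python
# Toy k-means clustering of readers by borrowing behaviour. Feature tuples
# are (books borrowed per year, multimedia items borrowed per year); the
# records are invented for illustration.
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means returning final centers and the grouped points."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # recompute centers as group means (keep old center if group empty)
        centers = [
            tuple(sum(col) / len(g) for col in zip(*g)) if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups

readers = [(30, 1), (28, 2), (5, 20), (6, 18), (15, 8), (14, 9)]
centers, groups = kmeans(readers, k=3)
print(len(centers))
```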

Details

The Electronic Library, vol. 24 no. 3
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 1 January 2004

Details

Kybernetes, vol. 33 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 17 April 2020

Houda Chakiri, Mohammed El Mohajir and Nasser Assem

Abstract

Purpose

Most local governance assessment tools are entirely or partially based on stakeholder surveys, focus groups and benchmarks of different local governments around the world, and they remain a subjective means of evaluating local governance. To measure the performance of local good governance with an unbiased assessment technique, the authors have developed a framework that helps automate the design of a data warehouse (DW), providing local and central decision-makers with factual, measurable and accurate local government data. The purpose of this paper is to propose extracting the DW schema through a mixed approach that adopts both the i* framework for requirements-based representation and domain ontologies for data-source representation, in order to extract the multi-dimensional (MD) elements. The data were collected from various sources and information systems (ISs) deployed in different municipalities.

Design/methodology/approach

The authors present a framework for the design and implementation of a DW for local good-governance assessment. The facts and dimensions of the DW's MD schema are extracted using a hybrid approach: the requirement-based and source-based DW schemas are extracted in parallel and then reconciled to obtain the final design of the good-governance assessment DW.

Findings

The authors developed a novel framework to design and implement a DW for local good-governance assessment. The framework enables extraction of the DW's MD schema, using domain ontologies to capture semantic artifacts and minimize misconceptions and misunderstandings among stakeholders. The use of domain ontologies during the design process serves the framework's generalization and automation goals.
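
The reconciliation of requirement-based and source-based schemas described above can be sketched as simple set operations over MD elements; the element names here are invented:

```python
# Illustrative reconciliation step: MD elements extracted from requirements
# and from data sources are merged, keeping elements supported by both
# views and flagging the rest for analyst review.

requirement_schema = {
    "facts": {"ServiceRequest"},
    "dimensions": {"Citizen", "Municipality", "Time", "ServiceType"},
}
source_schema = {
    "facts": {"ServiceRequest"},
    "dimensions": {"Citizen", "Municipality", "Time", "Channel"},
}

def reconcile(req, src):
    """Split MD elements into agreed-upon and to-be-reviewed sets."""
    final, pending = {}, {}
    for kind in ("facts", "dimensions"):
        final[kind] = req[kind] & src[kind]    # supported by both views
        pending[kind] = req[kind] ^ src[kind]  # present in only one view
    return final, pending

final, pending = reconcile(requirement_schema, source_schema)
print(sorted(final["dimensions"]), sorted(pending["dimensions"]))
```

In the actual framework the reconciliation is semantic (ontology-assisted) rather than purely name-based, but the split into agreed and disputed elements is the same idea.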

Research limitations/implications

The research faced two main limitations. The first is that the design process of the DW is not yet fully automated. The second, and most important, is that access to local government data remains limited because of the lack of digitally stored data in municipalities, especially in developing countries, and because of regulatory constraints and bureaucracy.

Practical implications

The local government environment is among the public administrations most subject to change-averse cultures, where high levels of resistance and significant difficulties can arise during the implementation of decision support systems, despite the commitment of decision-makers. Access to data sources stored in different ISs can also be challenging. The municipalities were approached for data access within the framework of a research project at one of the most renowned universities in the country, which lent credibility and trust to the research team. Further testing of the framework is also needed to reveal its scalability and performance characteristics.

Originality/value

Compared with other ad hoc local government assessment tools, which are partially or entirely based on subjectively collected data, the framework provides a basis for the automated design of a comprehensive local government DW. It uses e-government domain ontologies for data-source representation, coupled with goal, rationale and business process diagrams for user-requirements representation, thus enabling extraction of the final DW MD schema.

Details

Transforming Government: People, Process and Policy, vol. 14 no. 2
Type: Research Article
ISSN: 1750-6166

Article
Publication date: 17 February 2012

Jing‐Shiuan Hua, Shi‐Ming Huang and David C. Yen

Abstract

Purpose

As business globalisation and internet usage continue to grow, the internet-based version of data warehouse systems (DWS) is expected to improve on traditional DWS. However, applying web-based interfaces to client-server-based DWS structures may cause problems such as inflexibility, inefficiency, loss of scalability and threats to security, which arise from the complexity of manipulating and managing heterogeneous data across various categories of decisional tasks. This paper seeks to develop a flexible mechanism that applies Extensible Markup Language (XML) as the foundation for an internet-based DWS and overcomes the weaknesses of a solely client-server-based DWS architecture.

Design/methodology/approach

For better control and security, the proposed architecture utilises an embedded pull-push mechanism to propagate distributed decision information. The research also demonstrates the feasibility of the proposed mechanism by implementing a prototype, evaluating its performance and conducting a real business case study.
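
A minimal sketch of packaging decision information as XML for a push step, as the architecture suggests; the message format and element names below are assumptions, not the paper's actual design:

```python
# Build an XML message carrying summarized decision data to a subscriber,
# then parse it back as the receiving side would. Element names are
# illustrative only.
import xml.etree.ElementTree as ET

def build_push_message(subscriber, rows):
    """Serialize (dimension, measure) rows as an XML push message."""
    root = ET.Element("pushMessage", target=subscriber)
    for dim, measure in rows:
        row = ET.SubElement(root, "row")
        ET.SubElement(row, "dimension").text = dim
        ET.SubElement(row, "measure").text = str(measure)
    return ET.tostring(root, encoding="unicode")

msg = build_push_message("branch-42", [("2021-Q1", 1050), ("2021-Q2", 1210)])
parsed = ET.fromstring(msg)  # the subscriber parses the pushed message
print(parsed.get("target"), len(parsed.findall("row")))
```

Because the payload is plain XML, the same message can be consumed by heterogeneous clients, which is the portability argument the paper makes for an XML foundation.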

Findings

The results indicate that the mechanism can not only improve DWS scalability and efficiency, but also enhance security.

Originality/value

The proposed architecture provides a support mechanism for business intelligence to efficiently and flexibly help companies make the right decisions in real time, grasp business opportunities and gain competitive advantage.

Article
Publication date: 23 August 2011

Amir Albadvi and Monireh Hosseini

Abstract

Purpose

This paper's main purpose is to provide a systematic approach for mapping value exchange in B2B relationship marketing. This approach affords a preliminary analysis that distinguishes business customers' different value dimensions (tangible and intangible) and aims to determine suitable metrics for evaluating and quantifying the value of each customer.
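
The paper derives its metrics qualitatively, but one standard way to quantify a customer's value, customer lifetime value (CLV), can be sketched as a discounted sum of expected margins; the formula choice and all figures below are illustrative, not the paper's:

```python
# Finite-horizon CLV: expected margin per period, decayed by the retention
# probability and discounted back to present value. All inputs are
# hypothetical.

def clv(margin_per_period, retention_rate, discount_rate, periods):
    """Discounted expected margin over a finite horizon."""
    return sum(
        margin_per_period * (retention_rate ** t) / ((1 + discount_rate) ** t)
        for t in range(periods)
    )

value = clv(margin_per_period=1000, retention_rate=0.8,
            discount_rate=0.1, periods=5)
print(round(value, 2))
```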

Design/methodology/approach

The paper uses a combination of qualitative research approaches, namely an exploratory case study, in‐depth interviews, and consensus expert opinion. The empirical study took place over three months to maximize the proposed approach's expediency in the practitioners' B2B environment and to increase the validity of the research findings.

Findings

In addition to developing a new framework, originating in the value network approach, for mapping, modeling and analyzing the business customers' value network (BCVN), the findings offer a systematic approach for practitioners and marketing scholars to scrutinize the multi-dimensional values of business relationship marketing.

Practical implications

For companies and their business customers alike, the benefit of the systematic approach proposed in this paper is an efficient analytical system that gives B2B marketers and managers the opportunity to understand their business customers' network in detail.

Originality/value

The implicit concept of maximizing customer lifetime value within the business customers' network calls for an applied approach to better understand and analyze the real value of business customers in order to retain them.

Details

Journal of Business & Industrial Marketing, vol. 26 no. 7
Type: Research Article
ISSN: 0885-8624

Article
Publication date: 16 May 2016

Mohammad A. Rob and Floyd J. Srubar

Abstract

Purpose

The purpose of this study is to demonstrate how the large volumes of crime data held by big-city law enforcement agencies can be converted into significantly useful information using readily available data warehouse and OLAP technologies. In the post-9/11 era, criminal data collection by law enforcement agencies has received significant attention across the world. Rapid technological advances have helped in collecting and storing these data in large volumes, but the data often go unanalyzed owing to improper formats and a lack of technological knowledge and time. Data warehousing (DW) and online analytical processing (OLAP) tools can be used to organize and present these data in a form strategically meaningful to the general public. In this study, the authors took a seven-month sample of crime data from the City of Houston Police Department's website and cleaned and organized it into a data warehouse with the aim of answering common questions about crime statistics in a big city in the USA.

Design/methodology/approach

The raw data for the seven-month period were collected from the website as a Microsoft Excel spreadsheet for each month. The data were then cleaned, described, renamed, formatted and imported into a compiled Access database, with facts and dimensions defined using a star schema. The data were then transferred to a Microsoft SQL Server data warehouse, and SQL Server Analysis Services together with the Visual Studio business intelligence tooling was used to create a data cube for OLAP analysis of the summarized data.
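
The ETL and cube steps can be miniaturized in plain code; the sketch below uses Python dictionaries in place of Access/SQL Server, and the sample records are invented:

```python
# Tiny end-to-end sketch: raw incident rows become a fact table keyed by
# dimension values, then an OLAP-style roll-up counts offenses during
# working hours. Field names echo the study's time/offense/premises
# dimensions; the data are made up.
from collections import Counter

raw_rows = [
    {"offense": "theft",    "hour": 8,  "premise": "street"},
    {"offense": "theft",    "hour": 9,  "premise": "street"},
    {"offense": "burglary", "hour": 22, "premise": "residence"},
    {"offense": "theft",    "hour": 11, "premise": "store"},
]

# "Fact table": one tuple per incident, referencing its dimension values.
fact_table = [(r["offense"], r["hour"], r["premise"]) for r in raw_rows]

# Roll-up: incidents by offense type during working hours (7 a.m.-12 p.m.).
working_hours = Counter(
    offense for offense, hour, _ in fact_table if 7 <= hour < 12
)
print(dict(working_hours))
```

A real cube precomputes such aggregates over every dimension combination; the roll-up here is the kind of slice the sample queries in the findings answer.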

Findings

To demonstrate the usefulness of the DW and OLAP cube, the authors present a few sample queries displaying the number and types of crimes as a function of time of day, location, premises, etc. For example, 98 crimes occurred on one major street in the city during the early working hours (between 7 a.m. and 12 p.m.), when virtually nobody was at home, and roughly two-thirds of those crimes were thefts. Such summarized information is significantly useful to the general public and to law enforcement agencies.

Research limitations/implications

The authors' research is limited to one city's crime data, whose data set might differ from those of other cities. Besides the volume of data and the lack of descriptions, the major limitation encountered was the lack of major neighborhood names and their relation to streets; other government agencies provide data to this effect, and a standard data set would facilitate the process. The authors also looked at data for a nine-month period only; analyzing data over many years would reveal the time trend of crime statistics over a longer period.

Practical implications

Many federal, state and local law enforcement agencies are rapidly embracing technology to publish crime data through their websites, but more attention needs to be paid to the quality and utility of this information for the general public. At present, no compiled source exists for crime data or its trends as a function of time, crime type, location and premises. A coherent system is needed that allows an average citizen to obtain this information in a more consumable package, and DW and OLAP tools can provide that package.

Social implications

Having the crime data of a big city in a consumable form is immensely useful for all segments of the constituency that government agencies serve, and it will become a service these offices are expected to deliver on demand. The information could also be useful to many decision makers, from those seeking to start a business to those seeking a place to live who may not know which neighborhoods or parts of the city are more prone to criminal activity than others.

Originality/value

While there have been a few reports of possible uses of DW and OLAP technologies to study criminal data, the authors found that few of them used actual crime data, the data sets and formats differ in each case, results are often not presented and the vendor technologies implemented can differ as well. In this paper, the authors show how DW and OLAP tools readily available in most enterprises can be used to analyze publicly available criminal data sets and convert them into meaningful information that is valuable not only to law enforcement agencies but to the public at large.

Details

Transforming Government: People, Process and Policy, vol. 10 no. 2
Type: Research Article
ISSN: 1750-6166

Article
Publication date: 1 May 2006

Rajugan Rajagopalapillai, Elizabeth Chang, Tharam S. Dillon and Ling Feng

Abstract

In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from stored data sources. Meanwhile, since its introduction, Extensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user-defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, so supporting views over semi-structured data models proves difficult. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work is threefold: first, we propose an approach that separates the implementation and conceptual aspects of views, providing a clear separation of concerns and thus allowing the analysis and design of views to be separated from their implementation; second, we define representations to express and construct these views at the conceptual level; third, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. To validate and apply the LVM concepts, methods and transformations developed, we also propose a view-driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
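
The idea of compiling a conceptual view specification into a query expression can be illustrated in miniature; the LVM's actual transformation rules are far richer, and the spec format below is invented:

```python
# Toy view derivation over an XML document: a conceptual view definition
# (element name plus an attribute predicate) is compiled into an XPath
# expression and evaluated with the standard library.
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<catalog>
  <book format="print"><title>A</title></book>
  <book format="digital"><title>B</title></book>
  <book format="digital"><title>C</title></book>
</catalog>""")

def compile_view(element, attr, value):
    """Turn a simple conceptual view spec into an XPath query string."""
    return f".//{element}[@{attr}='{value}']"

xpath = compile_view("book", "format", "digital")
digital_titles = [b.find("title").text for b in doc.findall(xpath)]
print(digital_titles)
```

Separating the view spec from the generated query mirrors the paper's separation of the conceptual level from the implementation level: the same spec could target a different query language by swapping the compiler.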

Details

International Journal of Web Information Systems, vol. 2 no. 2
Type: Research Article
ISSN: 1744-0084
