
Search results 1 – 10 of over 2,000
Book part
Publication date: 14 December 2023

Steven A. Harrast, Lori Olsen and Yan (Tricia) Sun


Abstract

Prior research (Harrast, Olsen, & Sun, 2023) analyzes the eight emerging topics to be included in future CPA exams and discusses their importance to career success and their appropriate teaching locus in light of survey evidence. Harrast, Olsen, and Sun find that the general topic of data analytics is the most important of the eight emerging topics. To further understand the topics most important to career success, this study analyzes the subtopics underlying the eight emerging topics. The results show that, according to professionals, advanced Excel analysis tools, data visualization, and data extraction, transformation, and loading (ETL) are the most important data analytics subskills for career success, and that these topics should be both introduced and emphasized in the accounting curriculum. The results help educators prioritize general emerging topics and specific subtopics in the accounting curriculum according to the most pressing needs of the profession.

Book part
Publication date: 1 November 2007

Irina Farquhar and Alan Sorkin


Abstract

This study proposes targeted modernization of the Department of Defense's (DoD's) Joint Forces Ammunition Logistics information system by implementing an optimized, innovative information technology open architecture design and integrating Radio Frequency Identification (RFID) data technologies and real-time optimization and control mechanisms as the critical technology components of the solution. The innovative information technology, which pursues focused logistics, will be deployed in 36 months at an estimated cost of $568 million in constant dollars. We estimate that the Systems, Applications, Products (SAP)-based enterprise integration solution that the Army currently pursues will cost another $1.5 billion through the year 2014; however, it is unlikely to deliver the intended technical capabilities.

Details

The Value of Innovation: Impact on Health, Life Quality, Safety, and Regulatory Research
Type: Book
ISBN: 978-1-84950-551-2

Article
Publication date: 17 April 2020

Houda Chakiri, Mohammed El Mohajir and Nasser Assem


Abstract

Purpose

Most local governance assessment tools are entirely or partially based on stakeholder surveys, focus groups and benchmarks of different local governments around the world, and so remain subjective means of evaluating local governance. To measure the performance of local good governance with an unbiased assessment technique, the authors have developed a framework that helps automate the design process of a data warehouse (DW), providing local and central decision-makers with factual, measurable and accurate local government data with which to assess local government performance. The purpose of this paper is to propose the extraction of the DW schema based on a mixed approach, adopting both the i* framework for requirements-based representation and domain ontologies for data source representation, in order to extract the multi-dimensional (MD) elements. The data was collected from various sources and information systems (ISs) deployed in different municipalities.

Design/methodology/approach

The authors present a framework for the design and implementation of a DW for local good-governance assessment. The extraction of facts and dimensions of the DW’s MD schema is done using a hybrid approach, where the extraction of requirement-based DW schema and source-based DW schema are done in parallel followed by the reconciliation of the obtained schemas to obtain the good-governance assessment DW final design.
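A minimal sketch of the reconciliation step this abstract describes, assuming the two intermediate schemas can be represented as sets of facts and dimensions; all element names here are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch: reconciling a requirement-driven MD schema (from i*
# goal models) with a source-driven MD schema (from ontology-annotated
# municipal information systems) into one final DW design.

def reconcile(requirement_schema, source_schema):
    """Keep MD elements demanded by requirements AND available in sources;
    report requirement elements that no data source can supply."""
    facts = requirement_schema["facts"] & source_schema["facts"]
    dims = requirement_schema["dimensions"] & source_schema["dimensions"]
    unsupported = (requirement_schema["facts"] | requirement_schema["dimensions"]) \
                  - (source_schema["facts"] | source_schema["dimensions"])
    return {"facts": facts, "dimensions": dims, "unsupported": unsupported}

requirements = {  # illustrative elements derived from user requirements
    "facts": {"service_delivery", "budget_execution"},
    "dimensions": {"municipality", "time", "department"},
}
sources = {  # illustrative elements derived from data source representation
    "facts": {"service_delivery", "budget_execution", "payroll"},
    "dimensions": {"municipality", "time"},
}

final = reconcile(requirements, sources)
print(final["facts"])        # both requirement facts are source-supported
print(final["unsupported"])  # 'department' has no backing data source
```

The reconciliation surfaces mismatches explicitly, so designers can either drop unsupported elements or go looking for additional data sources.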

Findings

The authors developed a novel framework to design and implement a DW for local good-governance assessment. The framework enables the extraction of the DW MD schema by using domain ontologies to help capture semantic artifacts and minimize misconceptions and misunderstandings between different stakeholders. The introduction and use of domain ontologies during the design process serves the generalization and automation purpose of the framework.

Research limitations/implications

The research faced two main limitations. The first is the full automation of the DW design process. The second, and most important, is access to local government data, which remains limited because of the lack of digitally stored data in municipalities, especially in developing countries, and because regulatory constraints and bureaucracy make the data difficult to access.

Practical implications

The local government environment is among the public administrations most subject to change-averse cultures, where the authors can face high levels of resistance and significant difficulties during the implementation of decision support systems, despite the commitment of decision-makers. Access to data sources stored by different ISs might be challenging. The municipalities were approached for data access within the framework of a research project at one of the most renowned universities in the country, which lent the research team credibility and trust. There is also a need for further testing of the framework to reveal its scalability and performance characteristics.

Originality/value

Compared to other local government assessment ad hoc tools that are partially or entirely based on subjectively collected data, the framework provides a basis for automated design of a comprehensive local government DW using e-government domain ontologies for data source representation coupled with the goal, rationale and business process diagrams for user requirements representations, thus enabling the extraction of the final DW MD schema.

Details

Transforming Government: People, Process and Policy, vol. 14 no. 2
Type: Research Article
ISSN: 1750-6166


Article
Publication date: 11 September 2007

Ruey‐Kei Chiu, S.C. Lenny Koh and Chi‐Ming Chang


Abstract

Purpose

The purpose of this paper is to provide a data framework to support the incremental aggregation of, and an effective data refresh model to maintain the data consistency in, an aggregated centralized database.

Design/methodology/approach

It is based on a case study of enterprise distributed databases aggregation for Taiwan's National Immunization Information System (NIIS). Selective data replication aggregated the distributed databases to the central database. The data refresh model assumed heterogeneous aggregation activity within the distributed database systems. The algorithm of the data refresh model followed a lazy replication scheme but update transactions were only allowed on the distributed databases.
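A toy sketch of the refresh scheme the abstract outlines: updates are allowed only on the distributed sites, which log their transactions as standardized XML; the central database then replays the log lazily. The XML layout, field names and data are assumptions for illustration, not the NIIS schema.

```python
# Illustrative lazy-replication refresh: distributed sites log update
# transactions in XML; the central store applies them asynchronously,
# oldest timestamp first.
import xml.etree.ElementTree as ET

site_log = """<transactions>
  <tx id="t1" ts="2007-01-05" site="clinic-A">
    <record patient="P001" vaccine="MMR" dose="1"/>
  </tx>
  <tx id="t2" ts="2007-01-03" site="clinic-B">
    <record patient="P002" vaccine="DTP" dose="2"/>
  </tx>
</transactions>"""

def refresh(central, xml_log):
    """Apply pending site transactions to the central store, oldest first."""
    txs = ET.fromstring(xml_log)
    for tx in sorted(txs, key=lambda t: t.get("ts")):
        rec = tx.find("record")
        key = (rec.get("patient"), rec.get("vaccine"))
        central[key] = int(rec.get("dose"))  # last writer wins on replay
    return central

central_db = {}
refresh(central_db, site_log)
print(len(central_db))  # two aggregated immunization records
```

Replaying in timestamp order is what keeps the aggregated central database consistent even though sites ship their logs at different times.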

Findings

It was found that data refreshment for the aggregation of heterogeneous distributed databases can be achieved more effectively through the design of a refresh algorithm and the standardization of message exchange between the distributed and central databases.

Research limitations/implications

The transaction records are stored and transferred in standardized XML format. Record transformation and interpretation are more time-consuming, but XML offers higher transportability and compatibility across platforms in data refreshment, with equal performance. The distributed database designer should manage these issues as well as assure quality.

Originality/value

The data system model presented in this paper may be applied to other similar implementations because its approach is not restricted to a specific database management system and it uses standardized XML messages for transaction exchange.

Details

Journal of Manufacturing Technology Management, vol. 18 no. 7
Type: Research Article
ISSN: 1741-038X


Book part
Publication date: 30 September 2020

Bhawna Suri, Shweta Taneja and Hemanpreet Singh Kalsi


Abstract

This chapter discusses the twofold role of business intelligence (BI) in healthcare: strategic decision making by the organization and by its stakeholders. Data mining visualization techniques are applied for the early and correct diagnosis of disease and for measuring the patient satisfaction quotient, and they also help the hospital identify its best performers.

In this chapter, the usefulness of BI is shown at two levels: the doctor level and the hospital level. As a case study, a hospital is taken that deals with three different diseases: breast cancer, diabetes, and liver disorder. BI can be applied to make better strategic decisions in the context of the hospital and its departments' growth. At the doctor level, on the basis of the various symptoms of a disease, the doctor can advise suitable treatment for the patient. At the hospital level, the best department among all can be identified. A patient's type of admission, whether patients continued their treatment with the hospital, the patient satisfaction quotient, etc., can also be calculated. The authors have used methods such as correlation matrices, decision trees, and mosaic plots to conduct this analysis.
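As a flavour of the correlation-matrix style of analysis mentioned above, here is a small doctor-level sketch: correlating one clinical measurement with a diagnostic label. The data values and the diabetes framing are invented for illustration, not taken from the chapter's case study.

```python
# Hedged sketch: Pearson correlation between a measurement (fasting
# glucose) and a binary diagnosis label, computed from first principles.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy patient records: (fasting glucose, diabetic? 1/0) -- illustrative only.
glucose = [85, 90, 140, 160, 95, 150]
diabetic = [0, 0, 1, 1, 0, 1]

r = pearson(glucose, diabetic)
print(round(r, 2))  # strong positive association in this toy sample
```

A full correlation matrix simply repeats this computation for every pair of attributes, which is what lets a doctor see at a glance which symptoms move together with a diagnosis.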

Details

Big Data Analytics and Intelligence: A Perspective for Health Care
Type: Book
ISBN: 978-1-83909-099-8


Article
Publication date: 24 October 2008

Jayanthi Ranjan


Abstract

Purpose

The paper intends to identify the business justifications and requirements for incorporating business intelligence (BI) in organizations, because many organizations that already have systems in place to collect data and gather information often find themselves with no tools or roadmaps to put their vast data and information to use for strategic decision making.

Design/methodology/approach

This paper explains BI and the growing potential for implementing it, and presents a checklist for implementing BI.

Findings

During the last ten years, the approach to business management across the globe has changed profoundly. Firms have understood the importance of enforcing achievement of their strategic goals through metrics-driven management. Firms are evolving into new forms based on knowledge and networks in response to an environment characterized by indistinct organizational boundaries and fast-paced change. New and complex changes are emerging that will force enterprises to operate in entirely new ways. Understanding data, and transforming and shaping them into networked marketplaces, is a key strategy for any organization seeking competitive advantage. The business success factor for any enterprise is finding ways to bring together the vast amount of data flowing within and across its business processes and to make sense of them. Business intelligence includes extraction, transformation and loading (ETL), data warehousing, database query and reporting, multidimensional/online analytical processing (OLAP) data analysis, data mining and visualization.
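Of the BI components the abstract lists, ETL is the one most easily shown in miniature. The sketch below extracts rows from an imagined operational system, transforms them (type cleaning plus aggregation to the warehouse grain), and loads them into an in-memory stand-in for a fact table; every field name and value is invented.

```python
# Minimal ETL illustration: extract -> transform -> load.

raw_rows = [  # extract: rows as pulled from an operational source system
    {"date": "2008-03-01", "region": "north", "amount": "120.50"},
    {"date": "2008-03-01", "region": "south", "amount": "80.00"},
    {"date": "2008-03-02", "region": "north", "amount": "42.25"},
]

def transform(rows):
    """Clean types and aggregate to the grain of the warehouse fact table."""
    facts = {}
    for row in rows:
        key = (row["date"], row["region"])  # dimensional key (date, region)
        facts[key] = facts.get(key, 0.0) + float(row["amount"])
    return facts

warehouse_fact = {}                     # load target: in-memory stand-in
warehouse_fact.update(transform(raw_rows))
print(warehouse_fact[("2008-03-01", "north")])  # 120.5
```

OLAP query and reporting tools then run against structures like `warehouse_fact`, sliced by the dimensional keys, rather than against the raw operational rows.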

Originality/value

The paper provides useful information on business justifications and requirements for incorporating business intelligence in organizations.

Details

VINE, vol. 38 no. 4
Type: Research Article
ISSN: 0305-5728


Article
Publication date: 6 September 2016

Ignacio Traverso-Ribón, Antonio Balderas-Alberico, Juan-Manuel Dodero, Ivan Ruiz-Rube and Manuel Palomo-Duarte


Abstract

Purpose

In a project-based learning experience, detailed monitoring of the activities in which team members participate can be useful to evaluate their work. However, project activity produces a large amount of data that can hardly be assessed by a single project supervisor. This poses a scalability issue as the number of users or the size of projects increases. In this vein, the purpose of this paper is to make the assessment of online learning experiences more sustainable.

Design/methodology/approach

This paper describes a learning-oriented collaborative assessment method, supported by an open data framework. It also presents an architecture for extracting different indicators to facilitate the assessment process.
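A hypothetical sketch of the indicator-extraction idea: raw events from external tools (a version control system, a wiki) are aggregated non-intrusively into per-student counts that a supervisor can review. The event fields and the counting metric are assumptions, not the authors' actual indicators.

```python
# Aggregate raw tool events into per-student assessment indicators.

events = [  # illustrative event log harvested from external tools
    {"student": "ana", "tool": "git", "action": "commit"},
    {"student": "ana", "tool": "wiki", "action": "edit"},
    {"student": "ben", "tool": "git", "action": "commit"},
    {"student": "ana", "tool": "git", "action": "commit"},
]

def indicators(log):
    """Count each student's actions per tool, from the log alone."""
    table = {}
    for ev in log:
        key = (ev["student"], ev["tool"])
        table[key] = table.get(key, 0) + 1
    return table

ind = indicators(events)
print(ind[("ana", "git")])  # 2
```

Because the indicators come from logs the tools already produce, the supervisor's grading effort no longer scales with the raw volume of project activity.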

Findings

The assessment method and the open data framework were applied to a project-based course on web engineering. This experience provided positive evidence, because the grade measurement was backed up with assessment evidence and calculated with less effort.

Research limitations/implications

At the moment, results indicate that there is apparently no significant evidence against the sustainable evaluation practices for students' summative evaluation. Nevertheless, when more data become available, a more statistically significant analysis could determine the influence of the assessment practices on the final result of the evaluated skills.

Originality/value

In contrast to various existing proposals for e-assessment, the strategy focuses on assessing learning experiences in software development projects. Also, the approach is based on the reuse of information from external process supporting tools by integrating a number of metrics in a non-intrusive way.

Book part
Publication date: 14 December 2023


Details

Advances in Accounting Education: Teaching and Curriculum Innovations
Type: Book
ISBN: 978-1-83797-172-5

Article
Publication date: 2 November 2023

Julaine Clunis


Abstract

Purpose

This paper aims to delve into the complexities of terminology mapping and annotation, particularly within the context of the COVID-19 pandemic. It underscores the criticality of harmonizing clinical knowledge organization systems (KOS) through a cohesive clinical knowledge representation approach. Central to the study is the pursuit of a novel method for integrating emerging COVID-19-specific vocabularies with existing systems, focusing on simplicity, adaptability and minimal human intervention.

Design/methodology/approach

A design science research (DSR) methodology is used to guide the development of a terminology mapping and annotation workflow. The KNIME data analytics platform is used to implement and test the mapping and annotation techniques, leveraging its powerful data processing and analytics capabilities. The study incorporates specific ontologies relevant to COVID-19, evaluates mapping accuracy and tests performance against a gold standard.
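The paper builds its workflow in KNIME, a graphical platform, so nothing below is the authors' implementation. As a greatly simplified stand-in for the core mapping step, this sketch matches incoming COVID-19-era terms against an existing vocabulary by string similarity; the terms, vocabulary and cutoff are all illustrative.

```python
# Toy terminology mapping: best string-similarity match per source term.
from difflib import get_close_matches

target_vocabulary = [  # stand-in for an existing clinical KOS
    "severe acute respiratory syndrome",
    "coronavirus infection",
    "pneumonia",
    "fever",
]

def map_terms(source_terms, vocabulary, cutoff=0.6):
    """Return the best vocabulary match per term, or None below cutoff."""
    mapping = {}
    for term in source_terms:
        hits = get_close_matches(term.lower(), vocabulary, n=1, cutoff=cutoff)
        mapping[term] = hits[0] if hits else None
    return mapping

new_terms = ["Coronavirus infections", "COVID-19 fever"]
result = map_terms(new_terms, target_vocabulary)
print(result["Coronavirus infections"])  # near-exact match found
print(result["COVID-19 fever"])          # below cutoff: needs human review
```

Real clinical mapping needs far more than lexical similarity (synonyms, semantics, context), which is exactly the gap the ontology-informed workflow in the paper is meant to close; the point here is only the shape of the match-or-flag step.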

Findings

The study demonstrates the potential of the developed solution to map and annotate specific KOS efficiently. This method effectively addresses the limitations of previous approaches by providing a user-friendly interface and streamlined process that minimizes the need for human intervention. Additionally, the paper proposes a reusable workflow tool that can streamline the mapping process. It offers insights into semantic interoperability issues in health care as well as recommendations for work in this space.

Originality/value

The originality of this study lies in its use of the KNIME data analytics platform to address the unique challenges posed by the COVID-19 pandemic in terminology mapping and annotation. The novel workflow developed in this study addresses known challenges by combining mapping and annotation processes specifically for COVID-19-related vocabularies. The use of DSR methodology and relevant ontologies with the KNIME tool further contribute to the study’s originality, setting it apart from previous research in the terminology mapping and annotation field.

Details

The Electronic Library, vol. 41 no. 6
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 5 October 2010

Jiann‐Cherng Shieh


Abstract

Purpose

For library service, bibliomining is concisely defined as the application of data mining techniques to extract patterns of behaviour-based artifacts from library systems. The bibliomining process includes identifying topics, creating a data warehouse, refining data, exploring data and evaluating results. Practical implementations and applications in different areas have shown that a sufficiently complete and consolidated data warehouse is the critical prerequisite for successful data mining applications. However, creating a data warehouse from various data sources clearly hampers librarians in applying bibliomining to improve their services and operations. Moreover, most commercial data mining tools are too complex for librarians to adopt for bibliomining. The purpose of this paper is to propose a practical application model for librarian bibliomining, and then to develop a corresponding data processing prototype system to help guarantee the success of applying data mining in libraries.
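To make the "patterns of behaviour-based artifacts" concrete, here is a toy bibliomining example with invented circulation data: mining loan records for subject areas frequently borrowed by the same patron, one of the simplest patterns the explore-data step of the process could surface.

```python
# Toy bibliomining: co-borrowed subject pairs from refined loan records.
from itertools import combinations

loans = [  # refined data, as if drawn from the library data warehouse
    {"patron": "u1", "subject": "data mining"},
    {"patron": "u1", "subject": "databases"},
    {"patron": "u2", "subject": "data mining"},
    {"patron": "u2", "subject": "databases"},
    {"patron": "u2", "subject": "poetry"},
]

def co_borrowed(records):
    """Count how often two subjects appear in the same patron's history."""
    by_patron = {}
    for r in records:
        by_patron.setdefault(r["patron"], set()).add(r["subject"])
    pairs = {}
    for subjects in by_patron.values():
        for a, b in combinations(sorted(subjects), 2):
            pairs[(a, b)] = pairs.get((a, b), 0) + 1
    return pairs

patterns = co_borrowed(loans)
print(patterns[("data mining", "databases")])  # 2 patrons share this pair
```

A librarian could use such pair counts to inform shelving, acquisitions or recommendation services, which is the kind of service improvement the abstract has in view.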

Design/methodology/approach

The rapid prototyping software development method was applied to design a prototype bibliomining system. To evaluate the effectiveness of the system, a comparison experiment was conducted in which 15 librarians accomplished an assigned task.

Findings

With the results of the system usability scale (SUS) comparison and a turn-around time analysis, it was established that the proposed model and the developed prototype system can genuinely help librarians handle bibliomining applications better.

Originality/value

The proposed novel bibliomining application model and the integrated system developed from it were shown to be effective and efficient by the task-oriented experiment and the SUS administered to 15 librarians. Comparing turn-around times for the assigned task, about 35 per cent of the time was saved. Librarians require an appropriate integration tool to assist them in successful bibliomining applications.

Details

The Electronic Library, vol. 28 no. 5
Type: Research Article
ISSN: 0264-0473

