Search results

1–10 of over 28,000
Article
Publication date: 1 January 2006

Ranjit Bose


Abstract

Purpose

Managing enterprise performance is an important yet difficult process due to its complexity. The process involves monitoring the strategic focus of an enterprise, whose performance is measured by analyzing data generated from a wide range of interrelated business activities performed at different levels within the enterprise. This study aims to investigate management data systems technologies in terms of how they are used and the issues related to their effective management within the broader context of enterprise performance management (EPM).
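
A minimal sketch of the kind of multi-level rollup such monitoring implies (not from the paper; the divisions, departments and figures below are invented for illustration):

```python
# Hypothetical illustration: rolling business-activity data up through
# enterprise levels, the sort of aggregation an EPM dashboard built on a
# data warehouse and OLAP would perform.
import pandas as pd

activity = pd.DataFrame({
    "division":   ["North", "North", "South", "South"],
    "department": ["Sales", "Support", "Sales", "Support"],
    "quarter":    ["Q1", "Q1", "Q1", "Q1"],
    "revenue":    [120.0, 30.0, 95.0, 25.0],
    "cost":       [80.0, 20.0, 70.0, 15.0],
})

# Roll up from department level to division level (one OLAP "drill-up").
division_kpis = activity.groupby(["division", "quarter"])[["revenue", "cost"]].sum()
division_kpis["margin_pct"] = (
    100 * (division_kpis["revenue"] - division_kpis["cost"]) / division_kpis["revenue"]
)
print(division_kpis)
```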

Design/methodology/approach

A range of recently published research literature on data warehousing, online analytic processing and EPM is reviewed to explore their current state and the issues and challenges learned from practice.

Findings

The findings of the study are reported in two parts. The first part discusses the current business practices of these technologies, and the second part identifies and discusses the issues and challenges that business managers face in using these technologies to gain competitive advantage for their businesses.

Originality/value

The study findings are intended to help business managers effectively understand the issues and technologies behind EPM implementation.

Details

Industrial Management & Data Systems, vol. 106 no. 1
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 September 2004

A.D. Songer, B. Hays and C. North

Abstract

The construction industry produces voluminous quantitative data. Much of this data is created during the controls phase of projects and relates to cost, schedule, and administrative information. Recent advances in computer storage and processing, as well as the display capabilities afforded by computer graphics, create the opportunity to monitor projects in ways fundamentally different from existing project control systems. However, project control methods have been slow to change. The lack of a fundamental model of project control data representation contributes to the inadequate application and implementation of visual tools in project control methods. Difficulties with the graphical representation of data can be traced to the diversity of skills required to create visual information displays. Because not all engineers and constructors possess these skills in great strength, streamlining the process of visualizing data is important. Visual representations of data hold great potential for reducing the communication difficulties fostered by industry fragmentation. Without information structure, organization, and visual explanations, however, the massive amount of data available to project managers results in information overload. Improved information displays are therefore needed to match the volume of available data to the capabilities of human perception. This paper discusses research to create a framework for visual representation of construction project data. The underlying visualization theory, the visual framework, and a detailed implementation are provided.
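
As a hedged illustration of the kind of visual display the paper argues for (not the authors' framework; the figures are invented, and the earned-value labels are a common project-controls convention rather than the paper's):

```python
# Hypothetical sketch: a planned-vs-actual cumulative cost curve, one common
# visual display that condenses voluminous cost/schedule records into a
# single project-control view.
import matplotlib.pyplot as plt

weeks   = list(range(1, 9))
planned = [10, 25, 45, 70, 100, 135, 165, 190]   # cumulative budgeted cost (toy data)
actual  = [12, 30, 55, 85, 120, 150, 180, 205]   # cumulative actual cost (toy data)

plt.plot(weeks, planned, linestyle="--", label="Planned (BCWS)")
plt.plot(weeks, actual, label="Actual (ACWP)")
plt.xlabel("Week")
plt.ylabel("Cumulative cost (k$)")
plt.title("Project control: planned vs actual cost")
plt.legend()
plt.show()
```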

Details

Construction Innovation, vol. 4 no. 3
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 27 July 2018

Evangelia Triperina, Georgios Bardis, Cleo Sgouropoulou, Ioannis Xydas, Olivier Terraz and Georgios Miaoulis

Abstract

Purpose

The purpose of this paper is to introduce a novel framework for visual-aided ontology-based multidimensional ranking and to demonstrate a case study in the academic domain.

Design/methodology/approach

The paper presents a method for applying semantic web technologies to multiple criteria decision-making algorithms in order to endow them with dynamic characteristics. It also showcases how visual analytics enhance the decision-making process.
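
For intuition, a minimal sketch of one multiple criteria decision-making step, a plain weighted sum; the paper's framework drives such rankings from ontologies and visual analytics, and the entities, criteria and weights below are invented:

```python
# Hypothetical sketch: weighted-sum ranking over normalised criterion scores,
# one of the simplest MCDM methods.
criteria_weights = {"publications": 0.5, "citations": 0.3, "funding": 0.2}

entities = {
    "Dept A": {"publications": 0.8, "citations": 0.6, "funding": 0.9},
    "Dept B": {"publications": 0.7, "citations": 0.9, "funding": 0.5},
}  # criterion scores normalised to [0, 1]

def score(profile):
    return sum(criteria_weights[c] * profile[c] for c in criteria_weights)

for name, s in sorted(((n, score(p)) for n, p in entities.items()),
                      key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {s:.2f}")
```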

Findings

The semantically enhanced ranking method enables reproducible and transparent ranking results, while the visual representation of this information further helps decision makers draw well-informed and insightful conclusions about the problem.

Research limitations/implications

This approach is suitable for application domains that are ranked on the basis of multiple criteria.

Originality/value

The discussed approach provides a dynamic ranking methodology instead of focusing on only one application field or one multiple criteria decision-making method. It proposes a framework that allows the integration of multidimensional, domain-specific information and produces complex ranking results in both textual and visual form.

Details

Data Technologies and Applications, vol. 52 no. 3
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 7 June 2022

Ana Gutiérrez, Jose Aguilar, Ana Ortega and Edwin Montoya


Abstract

Purpose

The authors propose the concept of an "Autonomic Cycle for innovation processes," which defines a set of data analysis tasks whose objective is to improve the innovation process in micro-, small and medium-sized enterprises (MSMEs).

Design/methodology/approach

The authors design autonomic cycles in which the data analysis tasks interact with each other and play different roles: some observe the innovation process, others analyze and interpret what happens in it, and others make decisions in order to improve the innovation process.
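
A toy sketch of these three roles arranged as a loop (not the paper's ACODAT implementation; the indicator and threshold are invented):

```python
# Hypothetical sketch of observe / analyze / decide roles in an autonomic,
# MAPE-like cycle over an innovation process.
def observe(process_log):
    """Observation task: collect raw indicators from the innovation process."""
    return {"ideas_submitted": len(process_log), "ideas_adopted": sum(process_log)}

def analyze(indicators):
    """Analysis task: interpret what is happening in the process."""
    adopted, submitted = indicators["ideas_adopted"], indicators["ideas_submitted"]
    return adopted / submitted if submitted else 0.0

def decide(adoption_rate, threshold=0.3):
    """Decision task: act to improve the innovation process."""
    return ("rework idea-selection criteria"
            if adoption_rate < threshold else "keep current process")

log = [1, 0, 0, 1, 0]   # 1 = idea adopted, 0 = rejected (toy data)
print(decide(analyze(observe(log))))
```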

Findings

In this article, the authors identify three innovation sub-processes to which autonomic cycles can be applied, allowing the actors of innovation processes (data, people, things and services) to interoperate. These autonomic cycles define an innovation problem, specify innovation requirements, and evaluate the results of the innovation process, respectively. Finally, the authors instantiate the autonomic cycle of data analysis tasks to determine the innovation problem in the textile industry.

Research limitations/implications

It is necessary to implement all autonomic cycles of data analysis tasks (ACODATs) in a real scenario to verify their functionalities. It is also important to determine the most important knowledge models required in the ACODAT for the definition of the innovation problem. Once these are determined, the relevant "everything mining" techniques required for their implementation, such as service and process mining tasks, must be defined.

Practical implications

The ACODAT for the definition of the innovation problem is essential in an innovation process because it allows the organization to identify opportunities for improvement.

Originality/value

The main contributions of this work are as follows. The ACODATs of an innovation process are specified in order to manage it. A multidimensional data model for the management of an innovation process is defined, which stores the required information about the organization and its context. The ACODAT for the definition of the innovation problem is detailed and instantiated in the textile industry. Finally, the artificial intelligence (AI) techniques required for this ACODAT are specified, in order to obtain the knowledge models (prediction and diagnosis) for managing the innovation process in MSMEs of the textile industry.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Article
Publication date: 20 June 2008

Nikolaos Fousteris, Manolis Gergatsoulis and Yannis Stavrakas

Abstract

Purpose

In a wide spectrum of applications, it is desirable to manipulate semistructured information that may present variations according to different circumstances. Multidimensional XML (MXML) is an extension of XML suitable for representing data that assume different facets, having different value and/or structure under different contexts. The purpose of this paper is to develop techniques for updating MXML documents.

Design/methodology/approach

Updating XML has been studied in the past; however, updating MXML must take into account the additional features that stem from incorporating context into MXML. This paper investigates the problem of updating MXML at two levels: at the graph level, i.e. in an implementation-independent way, and at the relational storage level.

Findings

The paper introduces six basic update operations, which are capable of expressing any possible change. Those operations are specified in an implementation-independent way, and their effect is explained through examples. Algorithms are given that implement those operations using SQL on a specific storage method that employs relational tables for storing MXML. An overview is given of multidimensional XPath (MXPath), an extension of XPath that incorporates context, and it is shown how to translate MXPath queries into "equivalent" SQL queries.
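
A hedged sketch of the general idea: context-dependent facets kept in a relational table and updated via SQL. The table layout below is invented and far simpler than the paper's storage method:

```python
# Hypothetical sketch: a toy relational encoding of context-dependent
# (MXML-style) values, with one update touching only the facet whose
# context matches.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE mxml_facet (
    node_path TEXT,      -- e.g. /book/price
    context   TEXT,      -- e.g. edition=greek
    value     TEXT)""")
db.execute("INSERT INTO mxml_facet VALUES ('/book/price', 'edition=greek',   '15')")
db.execute("INSERT INTO mxml_facet VALUES ('/book/price', 'edition=english', '18')")

# An 'update value' operation scoped by context:
db.execute("""UPDATE mxml_facet SET value = '16'
              WHERE node_path = '/book/price' AND context = 'edition=greek'""")
print(db.execute("SELECT * FROM mxml_facet").fetchall())
```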

Research limitations/implications

Though the proposed operations solve the problem of updating MXML documents, several problems, such as formally defining MXPath and its translation to SQL, remain to be investigated in order to implement a system that stores, queries and updates MXML documents through a relational database infrastructure.

Practical implications

MXML is suitable for representing, in a compact way, data that assume different facets, having different value or structure, under different contexts. In order for MXML to be applicable in practice, it is vital to develop techniques and tools for storing, updating and querying MXML documents. The techniques proposed in this paper form a significant step in this direction.

Originality/value

This paper presents a novel approach for updating MXML documents by proposing update operations at both the graph level and the (relational) storage level.

Details

International Journal of Web Information Systems, vol. 4 no. 2
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 17 April 2020

Houda Chakiri, Mohammed El Mohajir and Nasser Assem

Abstract

Purpose

Most local governance assessment tools are entirely or partially based on stakeholder surveys, focus groups and benchmarks of different local governments around the world, and thus remain a subjective way of evaluating local governance. To measure the performance of local good governance with an unbiased assessment technique, the authors have developed a framework that helps automate the design process of a data warehouse (DW), providing local and central decision-makers with factual, measurable and accurate local government data for assessing local government performance. The purpose of this paper is to propose the extraction of the DW schema based on a mixed approach that adopts both the i* framework for requirements representation and domain ontologies for data source representation to extract the multi-dimensional (MD) elements. The data was collected from various sources and information systems (ISs) deployed in different municipalities.

Design/methodology/approach

The authors present a framework for the design and implementation of a DW for local good-governance assessment. The facts and dimensions of the DW's MD schema are extracted using a hybrid approach: the requirements-based and source-based DW schemas are extracted in parallel and then reconciled to obtain the final design of the good-governance assessment DW.
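
For illustration, a toy star schema of the kind such a reconciliation step could produce; the fact and dimension tables below are invented examples, not the schema derived in the paper:

```python
# Hypothetical sketch: a minimal multi-dimensional (star) schema for a
# local-governance indicator, with one fact table and two dimensions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE dim_municipality (municipality_id INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE dim_period       (period_id INTEGER PRIMARY KEY, year INTEGER, quarter INTEGER);
CREATE TABLE fact_service_delivery (
    municipality_id     INTEGER REFERENCES dim_municipality(municipality_id),
    period_id           INTEGER REFERENCES dim_period(period_id),
    requests_received   INTEGER,
    requests_resolved   INTEGER,
    avg_resolution_days REAL
);
""")
# A governance indicator then becomes a simple aggregate over the fact
# table, e.g. resolution rate grouped by region and year.
```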

Findings

The authors developed a novel framework to design and implement a DW for local good-governance assessment. The framework enables the extraction of the DW MD schema by using domain ontologies, which help capture semantic artifacts and minimize misconceptions and misunderstandings between stakeholders. The introduction and use of domain ontologies during the design process serves the framework's generalization and automation goals.

Research limitations/implications

This research faced two main limitations. The first is the full automation of the DW design process. The second, and most important, is access to local government data, which remains limited because of the lack of digitally stored data in municipalities, especially in developing countries, and because of regulatory constraints and bureaucracy.

Practical implications

The local government environment is among the public administrations most subject to change-averse cultures, where high levels of resistance and significant difficulties can arise during the implementation of decision support systems despite the commitment of decision-makers. Access to the data sources stored by different ISs can also be challenging. The municipalities were approached for data access within the framework of a research project at one of the most renowned universities in the country, which lent the research team credibility and trust. Further testing of the framework is also needed to reveal its scalability and performance characteristics.

Originality/value

Compared to other ad hoc local government assessment tools that are partially or entirely based on subjectively collected data, the framework provides a basis for the automated design of a comprehensive local government DW, using e-government domain ontologies for data source representation coupled with goal, rationale and business process diagrams for user requirements representation, thus enabling the extraction of the final DW MD schema.

Details

Transforming Government: People, Process and Policy, vol. 14 no. 2
Type: Research Article
ISSN: 1750-6166

Open Access
Article
Publication date: 2 July 2024

Qingyun Fu, Shuxin Ding, Tao Zhang, Rongsheng Wang, Ping Hu and Cunlai Pu

Abstract

Purpose

To optimize train operations, dispatchers currently rely on experience to make quick adjustments when delays occur, and delay predictions often amount to imprecise shifts of known delay times. Real-time, accurate train delay predictions from data-driven neural network models can significantly reduce dispatcher stress and improve adjustment plans. Leveraging current train operation data, these models enable swift and precise predictions, addressing the challenges posed by train delays in high-speed rail networks during unforeseen events.

Design/methodology/approach

This paper proposes CBLA-net, a neural network architecture for predicting late arrival times. It combines a CNN, a Bi-LSTM and attention mechanisms to extract features, handle time series data and enhance information utilization. Trained on operational data from the Beijing-Tianjin line, it predicts the late arrival time of a target train at the next station using multidimensional input data from the target train and preceding trains.
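
A minimal PyTorch sketch of a CNN + Bi-LSTM + attention pipeline in the spirit of this description; the layer sizes, feature count and sequence length are invented, and this is not the authors' implementation:

```python
# Hypothetical sketch: convolutional feature extraction, bidirectional LSTM
# over the time series, and an attention-weighted summary feeding a
# regression head that outputs a delay in minutes.
import torch
import torch.nn as nn

class CBLASketch(nn.Module):
    def __init__(self, n_features=8, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(n_features, 16, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(16, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)   # per-step attention scores
        self.head = nn.Linear(2 * hidden, 1)   # regression: delay in minutes

    def forward(self, x):                      # x: (batch, seq_len, n_features)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, seq_len, 16)
        h, _ = self.bilstm(h)                  # (batch, seq_len, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1) # attention weights over time steps
        ctx = (w * h).sum(dim=1)               # attention-weighted summary
        return self.head(ctx).squeeze(-1)      # predicted late-arrival time

model = CBLASketch()
fake_batch = torch.randn(4, 10, 8)  # 4 samples, 10 time steps, 8 features
print(model(fake_batch).shape)      # torch.Size([4])
```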

Findings

This study evaluates the model's predictive performance using two data approaches: one considering the full data and another focusing only on late arrivals. Results show precise and rapid predictions. Training with the full data achieves an MAE of approximately 0.54 minutes and an RMSE of approximately 0.65 minutes, surpassing the model trained solely on delay data (MAE of about 1.02 minutes, RMSE of about 1.52 minutes). Despite superior overall performance with the full data, the model trained exclusively on late arrivals excels at predicting delays exceeding 15 minutes. For better adaptability to real-world train operations, training with the full data is recommended.

Originality/value

This paper introduces a novel neural network model, CBLA-net, for predicting train delay times. It innovatively compares and analyzes the model's performance using both full data and delay data formats. Additionally, the evaluation of the network's predictive capabilities considers different scenarios, providing a comprehensive demonstration of the model's predictive performance.

Article
Publication date: 1 March 1999

Vijayan Sugumaran and Ranjit Bose


Abstract

There is a tremendous explosion in the amount of data that organizations generate, collect and store. Managers are beginning to recognize the value of this asset, and are increasingly relying on intelligent systems to access, analyze, summarize, and interpret information from large and multiple data sources. These systems help them make critical business decisions faster or with a greater degree of confidence. Data mining is a promising new technology that helps bring business intelligence into these systems. While there is a plethora of data mining techniques and tools available, they present inherent problems for end-users, such as complexity, required technical expertise, and lack of flexibility and interoperability. These problems can be mitigated by deploying software agents to assist end-users in their problem-solving endeavors. This paper presents the design and development of an intelligent software agent-based data analysis and mining environment called IDM, which is used in decision-making activities.
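
A toy sketch of the division of labour such an agent-based environment implies; the agents and the trivial "mining" step below are invented for illustration, and IDM's actual agents are far more capable:

```python
# Hypothetical sketch: one agent fetches data, one runs a mining step, one
# summarises the result for the decision maker.
from collections import Counter

class DataAgent:
    def fetch(self):
        # stand-in for access to large, multiple data sources
        return ["churn", "upsell", "churn", "renew", "churn"]

class MiningAgent:
    def mine(self, records):
        # stand-in for a real mining technique (rules, clustering, ...)
        return Counter(records).most_common(1)[0]

class InterfaceAgent:
    def report(self, pattern):
        label, count = pattern
        return f"Dominant pattern: '{label}' ({count} occurrences)"

records = DataAgent().fetch()
print(InterfaceAgent().report(MiningAgent().mine(records)))
```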

Details

Industrial Management & Data Systems, vol. 99 no. 2
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 1 February 2004

Margo Hanna


Abstract

Higher education (HE) is becoming a big business, with huge investments in IT supporting online learning. With awareness of the knowledge economy has come a growing consciousness that HE constitutes a large industry, or economic sector, in its own right. In marketing terms, we understand that some customers present much greater profit potential than others. But how do we find those high-potential customers in a database that contains hundreds of data items for each of millions of customers? Data mining software can help find the "high-profit" gems buried in mountains of information. However, merely identifying the best prospects is not enough to improve customer value; the data mining results must be fitted into the execution of a content management system that enhances the profitability of customer relationships. Data mining has not yet been integrated into e-learning systems. This paper describes how we can profit from integrating data mining with e-learning technology.
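
As a hedged illustration of the scoring step described here (the features, data and model choice are invented, not the paper's method):

```python
# Hypothetical sketch: scoring records to surface "high-potential"
# customers (here: learners) with a simple classifier.
from sklearn.linear_model import LogisticRegression

# toy features: [logins_per_week, modules_completed]
X = [[1, 0], [2, 1], [6, 4], [7, 5], [3, 1], [8, 6]]
y = [0, 0, 1, 1, 0, 1]          # 1 = became a high-value enrolee (toy labels)

model = LogisticRegression().fit(X, y)
prospects = [[5, 3], [1, 1]]
for features, p in zip(prospects, model.predict_proba(prospects)[:, 1]):
    print(features, f"high-potential probability: {p:.2f}")
```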

Details

Campus-Wide Information Systems, vol. 21 no. 1
Type: Research Article
ISSN: 1065-0741

Book part
Publication date: 5 October 2018

Nicolás Marín Ruiz, María Martínez-Rojas, Carlos Molina Fernández, José Manuel Soto-Hidalgo, Juan Carlos Rubio-Romero and María Amparo Vila Miranda

Abstract

The construction sector has significantly evolved in recent decades, in parallel with a huge increase in the amount of data generated and exchanged in any construction project. These data need to be managed in order to complete a successful project in terms of quality, cost and schedule, in the context of a safe project environment, while appropriately organising the many construction documents.

However, the origin of these data is very diverse, mainly due to the sector’s characteristics. Moreover, these data are affected by uncertainty, complexity and diversity due to the imprecise nature of the many factors involved in construction projects. As a result, construction project data are associated with large, irregular and scattered datasets.

The objective of this chapter is to introduce an approach based on a fuzzy multi-dimensional model and online analytical processing (OLAP) operations in order to manage construction data and support decision-making based on previous experiences. On the one hand, the proposal allows for the integration of data in a common repository accessible to users throughout the project's life cycle. On the other hand, it allows for more flexible structures for representing the data of the main tasks in the construction project management domain. The incorporation of this fuzzy framework allows for the management of imprecision in construction data and provides easy and intuitive access so that users can make more reliable decisions.
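
A small sketch of the fuzzy side of such a model: mapping an imprecise cost deviation to linguistic labels with triangular membership functions before any OLAP aggregation. The labels and breakpoints are invented for illustration:

```python
# Hypothetical sketch: fuzzification of a cost-deviation percentage into
# linguistic labels, each with a degree of membership in [0, 1].
def triangular(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify_deviation(pct):
    return {
        "on_budget": triangular(pct, -5.0, 0.0, 5.0),
        "overrun":   triangular(pct, 0.0, 10.0, 20.0),
        "severe":    triangular(pct, 10.0, 25.0, 40.0),
    }

print(fuzzify_deviation(8.0))   # an 8% deviation is mostly 'overrun' (0.8)
```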

Details

Fuzzy Hybrid Computing in Construction Engineering and Management
Type: Book
ISBN: 978-1-78743-868-2
