Search results

1 – 10 of over 180000
Article
Publication date: 30 September 2014

Jing Guo, Qinling Huang and Jiayi Chen

Abstract

Purpose

The purpose of this paper is to put forward an Ultra-high Frequency Radio Frequency Identification (UHF-RFID) data model construction scheme for university libraries, with the aim of realizing open, uniform, compatible and interoperable RFID applications across different libraries and manufacturers.

Design/methodology/approach

This article uses the practical application needs of university libraries as the starting point, and proposes the UHF-RFID data model construction scheme for university libraries based on the study of applicable standards, such as ISO 28560.

Findings

Based on the practical application demands of university libraries and relevant international standards, the paper puts forward a UHF-RFID data model construction scheme for university libraries. First, the scheme explains and defines six user data elements that differ from ISO 28560: version, owner library identifiers, temporary item location, subject, International Standard Serial Number (ISSN) and International Standard Book Number (ISBN). Furthermore, different encoding rules are designed for the electronic product code (EPC) data area and the user data area to achieve maximum work efficiency.
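
To make the idea of user data elements concrete, here is a toy Python sketch of packing a few of them into a tag's user memory. The field choice, order and fixed widths below are illustrative assumptions of ours, not the encoding rules defined in the paper.

```python
# Toy sketch of packing user data elements for a tag's user memory.
# Field choice, order and fixed widths are illustrative assumptions,
# not the encoding rules defined in the paper.

def encode_user_data(version, owner_library, isbn):
    """Pack version (2 chars), owner library identifier (8 chars)
    and ISBN (13 chars) into one fixed-width ASCII record."""
    record = f"{version:>2}{owner_library:>8}{isbn:>13}"
    if len(record) != 23:
        raise ValueError("a field exceeds its fixed width")
    return record.encode("ascii")

def decode_user_data(blob):
    """Inverse of encode_user_data: split the record back into fields."""
    text = blob.decode("ascii")
    return text[:2].strip(), text[2:10].strip(), text[10:].strip()
```

A fixed-width layout like this is what makes the user data area cheap to parse on reader hardware; the trade-off is that every party must agree on the widths in advance, which is exactly the kind of agreement a shared data model standard provides.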

Practical implications

This paper proposes a set of reference UHF-RFID data model standards for university libraries. Hopefully, this standard will offer a uniform data model for university libraries to comply with, consolidate a disordered market and make open, unified, compatible and interoperable RFID applications possible.

Originality/value

Although several RFID standard documents have been formally published, they are primarily designed for high-frequency RFID technology. For UHF-RFID technology, there is still no internationally unified data model standard. Hence, this paper brings forward a UHF-RFID data model construction scheme for university libraries.

Details

The Electronic Library, vol. 32 no. 5
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 24 April 2020

Juan Manuel Davila Delgado and Lukumon O. Oyedele

Abstract

Purpose

The purpose of this paper is to review and provide recommendations to extend the current open standard data models for describing monitoring systems and circular economy precepts for built assets. Open standard data models enable robust and efficient data exchange which underpins the successful implementation of a circular economy. One of the largest opportunities to reduce the total life cycle cost of a built asset is to use the building information modelling (BIM) approach during the operational phase because it represents the largest share of the entire cost. BIM models that represent the actual conditions and performance of the constructed assets can boost the benefits of the installed monitoring systems and reduce maintenance and operational costs.

Design/methodology/approach

This paper presents a horizontal investigation of current BIM data models and their use for describing circular economy principles and performance monitoring of built assets. Based on the investigation, an extension to the Industry Foundation Classes (IFC) specification, along with recommendations and guidelines, is presented, which makes it possible to describe circular economy principles and asset monitoring using IFC.

Findings

Current open BIM data models are not sufficiently mature yet. This limits the interoperability of the BIM approach and the implementation of circular economy principles. An overarching approach to extend the current standards is necessary, which considers aspects related to not only modelling the monitoring system but also data management and analysis.

Originality/value

To the authors’ best knowledge, this is the first study that identifies requirements for data model standards in the context of a circular economy, at a time when the current linear economic model of making, using and disposing is growing unsustainably far beyond the finite limits of the planet. The results of this study set the basis for the extension of current standards required to apply circular economy precepts.

Details

Journal of Engineering, Design and Technology, vol. 18 no. 5
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 1 March 2002

M. Alshawi and I. Faraj

Abstract

There have been major efforts to develop the technology for integrated construction environments and the mechanisms needed to improve collaboration between construction professionals. Evidently, the development and usage of such an environment is a complicated task. Two issues are among the main contributors to this: the development of the technology and its effective implementation. These two issues are addressed separately in this paper. The paper first explains the approaches to sharing project information, followed by a review of a recent project in this area, the result of which is a distributed integrated construction environment based on the Industry Foundation Classes (IFC), capable of supporting a number of construction applications. This environment enables a construction team to work collaboratively over the internet. The paper then discusses the difficulties facing the successful implementation of such environments in construction organisations. This is addressed within the context of two management models for the effective implementation of IT: the resource-based model and the Nolan model.

Details

Construction Innovation, vol. 2 no. 1
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 24 October 2023

Hasan Tutar, Mehmet Şahin and Teymur Sarkhanov

Abstract

Purpose

The lack of a definite standard for determining sample size in qualitative research leaves the process to the researcher's initiative, and this casts doubt on the scientific rigour of the research. The primary purpose of this research is to propose a model that addresses the problem of determining sample size, one of the essential issues in qualitative research. A fuzzy logic model is proposed to determine the sample size in qualitative research.

Design/methodology/approach

Given the structure of the problem in the present study, the proposed fuzzy logic model will contribute to both the literature and practical applications. In this context, ten variables, namely scope of research, data quality, participant genuineness, duration of the interview, number of interviews, homogeneity, information strength, drilling ability, triangulation and research design, are used as inputs. A total of 20 different scenarios were created to demonstrate the applicability of the proposed model and how it works.
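
To make the approach concrete, here is a minimal fuzzy-inference sketch in Python. It uses only two of the ten inputs (data quality and information strength, both scored 0–10), and the membership functions, rules and output anchors are invented for illustration; the actual variables, rules and defuzzification are the authors' own.

```python
# Minimal two-input fuzzy-inference sketch for suggesting a sample size.
# Membership functions, rules and output anchors are illustrative
# assumptions, not the authors' actual model.

def low(x):
    """Degree to which a 0-10 score is 'low' (1 at 0, falling to 0 at 5)."""
    return max(0.0, min(1.0, (5.0 - x) / 5.0))

def high(x):
    """Degree to which a 0-10 score is 'high' (0 up to 5, rising to 1 at 10)."""
    return max(0.0, min(1.0, (x - 5.0) / 5.0))

def sample_size(data_quality, info_strength):
    """Weighted-average defuzzification over two rules:
    weak evidence -> large sample (30); strong evidence -> small sample (8)."""
    w_large = max(low(data_quality), low(info_strength))   # OR of the weak cases
    w_small = min(high(data_quality), high(info_strength)) # AND of the strong cases
    if w_large + w_small == 0.0:
        return 15  # neutral fallback when neither rule fires
    return round((30 * w_large + 8 * w_small) / (w_large + w_small))
```

The appeal of the fuzzy formulation is visible even in this toy: intermediate scores yield intermediate sample sizes instead of a hard threshold, which is what lets the model replace an all-or-nothing judgment by the researcher.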

Findings

The authors report the results of each scenario and present the resulting sample-size values for qualitative studies in Table 4. The results show that the proposed model produces values consistent with the literature, and that it is possible to develop a model using the laws of fuzzy logic to determine the sample size in qualitative research.

Originality/value

The model developed in this research can contribute to the literature; in any case, it can be argued that it determines the sample size far more effectively and functionally than leaving the decision to the researcher's initiative.

Details

Qualitative Research Journal, vol. 24 no. 3
Type: Research Article
ISSN: 1443-9883

Article
Publication date: 1 September 2002

Ghassan Aouad, Ming Sun and Ishan Faraj

Abstract

This paper presents an argument for automating data representations within the construction sector. It questions whether full automation and integration are feasible and achievable, considering the complexity of the industry and its supply chain problems. The paper starts by reviewing research in the areas of information automation, modelling and integration. A research prototype, GALLICON, is used as an example to demonstrate the levels of integration and automation that may be achieved with the current generation of technology.

Details

Construction Innovation, vol. 2 no. 3
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 31 May 2024

Farzaneh Zarei and Mazdak Nik-Bakht

Abstract

Purpose

This paper aims to enrich 3D urban models with data contributed by citizens to support data-driven decision-making in urban infrastructure projects. We introduced a new application domain extension to CityGML (social-input ADE) to enable citizens to store, classify and exchange comments regarding infrastructure elements. The main goal of the social-input ADE is to add citizens’ feedback as semantic objects to the CityGML model.

Design/methodology/approach

Firstly, we identified the key functionalities of the suggested ADE and how to integrate them with existing 3D urban models. Next, we developed a high-level conceptual design outlining the main components and interactions within the social-input ADE. We then proposed a package diagram for the social-input ADE to illustrate the organization of model elements and their dependencies. We also provide a detailed discussion of the functionality of the different modules in the social-input ADE.

Findings

As a result of this research, it was seen that mining the stored data generates informative streams of information. The proposed ADE links information about the built environment to the knowledge of end-users and enables countless socially driven innovative solutions.

Originality/value

This work aims to provide a digital platform for aggregating, organizing and filtering the distributed end-users’ inputs and integrating them within the city’s digital twins to enhance city models. To create a data standard for integrating attributes of city physical elements and end-users’ social information and inputs in the same digital ecosystem, the open data model CityGML has been used.

Book part
Publication date: 16 December 2009

Jeffrey S. Racine

Abstract

The R environment for statistical computing and graphics (R Development Core Team, 2008) offers practitioners a rich set of statistical methods, ranging from random number generation and optimization through regression, panel data and time series methods. The standard R distribution (base R) comes preloaded with a rich variety of functionality useful for applied econometricians, and this functionality is enhanced by user-supplied packages made available via R servers mirrored around the world. Of interest in this chapter are methods for estimating nonparametric and semiparametric models. We summarize many of the facilities in R and consider tools for those who wish to work with nonparametric methods but want to avoid programming in C or Fortran while retaining the speed of compiled code, as opposed to interpreted code such as Gauss or Matlab. We encourage those working in the field to consider implementing their methods in the R environment, thereby making their work accessible to the widest possible audience via an open collaborative forum.
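
The chapter itself works in R; as a language-neutral illustration of the kind of estimator involved, here is a Nadaraya–Watson kernel regression sketch in Python. The Gaussian kernel and fixed bandwidth are our choices for the example, not the chapter's.

```python
import math

# Nadaraya-Watson kernel regression: the estimate at a query point is a
# locally weighted average of the responses, with weights given by a
# kernel centred on the query point. In practice the bandwidth h would
# be chosen by cross-validation rather than fixed by hand.

def gaussian(u):
    """Unnormalized Gaussian kernel (the constant cancels in the ratio)."""
    return math.exp(-0.5 * u * u)

def nw_estimate(xs, ys, x0, h):
    """Estimate E[y | x = x0] from samples (xs, ys) with bandwidth h."""
    weights = [gaussian((x - x0) / h) for x in xs]
    return sum(w * y for w, y in zip(weights, ys)) / sum(weights)
```

Packages on the R side wrap exactly this kind of computation in compiled code, which is the speed advantage the chapter alludes to; the estimator itself is only a weighted average.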

Details

Nonparametric Econometric Methods
Type: Book
ISBN: 978-1-84950-624-3

Article
Publication date: 9 August 2022

Theocharis Moysiadis, Konstantina Spanaki, Ayalew Kassahun, Sabine Kläser, Nicolas Becker, George Alexiou, Nikolaos Zotos and Iliada Karali

Abstract

Purpose

Traceability of food is of paramount importance to increasingly sustainability-conscious consumers. Several tracking and tracing systems have been developed in the AgriFood sector to show consumers the origins and processing of food products. Critical challenges in realizing food traceability include cooperating with multiple actors on common data-sharing standards and data models.

Design/methodology/approach

This research applies a design science approach to showcase traceability that includes preharvest activities and conditions in a case study. The authors demonstrate how existing data sharing standards can be applied in combination with new data models suitable for capturing transparency information about plant production.

Findings

Together with existing studies on farm-to-fork transparency, our results demonstrate how to realize transparency from field to fork and enable producers to show a complete bill of sustainability.

Originality/value

The existing standards and data models address transparency challenges in AgriFood chains from the moment of harvest up to retail (farm-to-fork) relatively well, but not what happens before harvest. To address sustainability concerns, data about production activities related to product quality and sustainability must be collected before harvesting and shared downstream in the supply chain. The ability to gather data on sustainability practices, such as reduced pesticide, herbicide, fertilizer and water use, is a crucial requirement for producers to market their produce as quality, sustainable products.

Details

Benchmarking: An International Journal, vol. 30 no. 9
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 1 September 2001

James R. Otto, James H. Cook and Q.B. Chung

Abstract

Explores the use of extensible markup language (XML) to both store and enforce organizational data definitions, thus providing a synergistic framework for leveraging the potential of knowledge management (KM) tools. XML provides a flexible markup standard for representing data models; KM provides IT processes for capturing, maintaining and using information. While the processes that comprise KM and the mechanisms that form XML differ greatly in concept, both deal in a fundamental way with information. XML maintains the context of data (i.e. the data model), which enables data to represent information, and KM provides the framework for managing this information. The paper explores the vital role that XML can play in supporting an efficient corporate KM strategy.
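
A minimal sketch of the storing-and-enforcing idea, using only the Python standard library: a data definition is kept as XML and records are checked against it. The `dataModel` schema, field names and type vocabulary here are our invention for illustration, not anything defined in the article.

```python
import xml.etree.ElementTree as ET

# A hypothetical organizational data definition, stored as XML.
DEFINITION = """
<dataModel name="customer">
  <field name="id" type="int" required="true"/>
  <field name="email" type="str" required="true"/>
  <field name="nickname" type="str" required="false"/>
</dataModel>
"""

TYPES = {"int": int, "str": str}

def validate(record):
    """Check a record (a dict) against the XML data definition and
    return a list of violation messages (empty if the record conforms)."""
    root = ET.fromstring(DEFINITION)
    errors = []
    for field in root.findall("field"):
        name = field.get("name")
        if name not in record:
            if field.get("required") == "true":
                errors.append(f"missing required field: {name}")
            continue
        if not isinstance(record[name], TYPES[field.get("type")]):
            errors.append(f"wrong type for field: {name}")
    return errors
```

Because the definition is itself data, it can be versioned, shared and queried like any other organizational information, which is the synergy with KM that the article argues for.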

Details

Journal of Knowledge Management, vol. 5 no. 3
Type: Research Article
ISSN: 1367-3270

Open Access
Article
Publication date: 22 November 2022

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv

Abstract

Purpose

The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.

Design/methodology/approach

Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.

Findings

Based on the ideas of statistics, system theory, machine learning and data mining, the present research focuses on “data quality diagnosis” and “index classification and stratification”, clarifying the classification standards and data quality characteristics of index data; a data-quality diagnosis system of “data review – data cleaning – data conversion – data inspection” is established. Using decision trees, interpretive structural modelling, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, the scientific and standardized classification and hierarchical design of the index system can be realized.
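
As a toy illustration of the clustering step, here is a one-dimensional K-means in Python using only the standard library. Grouping indicators by a single numeric feature (say, each indicator's missing-data rate) is our simplification; the paper clusters much richer indicator data.

```python
# Toy one-dimensional K-means (standard library only). The single-feature
# setup, data and initial centers are illustrative assumptions, not the
# paper's actual indicator data.

def kmeans_1d(values, centers, rounds=20):
    """Cluster scalar values around the given initial centers.
    Returns the final centers and each value's cluster index."""
    labels = [0] * len(values)
    for _ in range(rounds):
        # Assignment step: each value joins its nearest center.
        labels = [min(range(len(centers)), key=lambda i: abs(v - centers[i]))
                  for v in values]
        # Update step: each center moves to the mean of its members
        # (a center with no members is left where it is).
        for i in range(len(centers)):
            members = [v for v, lab in zip(values, labels) if lab == i]
            if members:
                centers[i] = sum(members) / len(members)
    return centers, labels
```

The resulting cluster labels give a data-driven grouping of indicators, replacing the subjective grouping by the designer that the paper sets out to reduce.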

Originality/value

The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the multi-source, heterogeneous and mixed-frequency data of the index system. Second, a systematic quality-inspection process for missing data is designed based on systematic thinking about the whole and the individual; aiming at the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on inversion thinking and a unified representation method for data fusion based on a tensor model are proposed. Third, modern unsupervised learning methods are used to classify and stratify the index system, reducing the subjectivity and randomness of its design and enhancing its objectivity and rationality.

Details

Marine Economics and Management, vol. 5 no. 2
Type: Research Article
ISSN: 2516-158X
