Search results

1 – 10 of over 1000
Article
Publication date: 1 January 1983

K.T. Fung

Abstract

The relationship between the search problem and entropy identified by Pierce is used to show that the entropy maximization technique can also be applied to the data distribution problem. An immediate result is that a previously validated entropy model for the data distribution problem can now be more formally developed.

Details

Kybernetes, vol. 12 no. 1
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 6 July 2015

Andrew Whyte and James Donaldson

Abstract

Purpose

The use of digital-models to communicate civil-engineering design continues to generate debate; this pilot-work reviews technology uptake towards data repurposing and assesses digital (vs traditional) design-preparation timelines and fees for infrastructure. The paper aims to discuss these issues.

Design/methodology/approach

Extending (building-information-modelling) literature, distribution-impact is investigated across: quality-management, technical-applications and contractual-liability. Project case-study scenarios were developed and validated with resultant modelling-application timeline/fees examined, in conjunction with qualitative semi-structured interviews with 11 prominent stakeholder companies.

Findings

Results generated to explore digital-model data-distribution/usage identify: an 8 per cent time/efficiency improvement at the design-phase, and a noteworthy cost-saving of 0.7 per cent overall. Fragmented opinion regarding modelling utilisation exists across supply-chains, with concerns over liability, quality-management and the lack of Australian-Standard contract-clause(s) dealing directly with digital-model document hierarchy/clarification/reuse.

Research limitations/implications

Representing a small-scale/snapshot industrial-study, findings suggest that (model-distribution) must emphasise checking-procedures within quality-systems and seek precedence clarification for dimensioned documentation. Similarly, training in specific file-formatting (digital-model-addenda) techniques, CAD-file/hard-copy continuity, and digital-visualisation software can better regulate model dissemination/reuse. Time/cost savings through digital-model data-distribution in civil-engineering contracts are available to enhance provision of society’s infrastructure.

Originality/value

This work extends knowledge of 3D-model distribution for roads/earthworks/drainage, and presents empirical evidence that (alongside appropriate consideration of general-conditions-of-contract and specific training to address revision-document continuity), industry may achieve tangible benefits from digital-model data as a means to communicate civil-engineering design.

Details

Built Environment Project and Asset Management, vol. 5 no. 3
Type: Research Article
ISSN: 2044-124X

Article
Publication date: 1 May 2003

Sugjoon Yoon and Hyunjoo Kang

Abstract

Various parameter values are provided in the form of data tables, where the data keys are ordered and, in general, unevenly spaced, for real-time simulation of dynamic systems. However, most parameter values required for simulation do not explicitly exist in the data tables. Thus, the unit intervals containing the parameter values are searched for, rather than the data keys themselves. Since the real-time constraint enforces the use of a fixed step size in the integration of the system differential equations, because of the inherent nature of input from and output to real hardware, the worst case of iterated probes in a searching algorithm is the core measure for comparison. The worst case is expressed in Big O notation. In this study, conventional bisection, interpolation, and fast searches are analyzed and compared in Big O terms, as are the newly developed searching algorithms: modified fast search and modified regula falsi search. If the criterion is the actual execution time required for searching, most numerical tests in this paper show that bisection search is superior to the others. Interpolation search and its variations show better performance than bisection search in the case of linear or near-linear data distributions. The numerical tests show that modified regula falsi search is faster than the other interpolation searches in both expected and worst cases. Given parameter tables should be carefully examined for their data distribution in order to determine the most appropriate searching algorithm for the application.
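The unit-interval search this abstract describes can be sketched with the bisection variant, whose worst case is O(log n) probes regardless of key spacing. This is a generic illustration, not the paper's code; the function names and the linear-interpolation step are assumptions:

```python
def find_interval(keys, x):
    """Bisection search for the unit interval [keys[i], keys[i+1]] containing x.

    keys must be sorted; spacing may be uneven. Worst case O(log n) probes,
    which suits a fixed-step real-time loop.
    """
    if not (keys[0] <= x <= keys[-1]):
        raise ValueError("x outside table range")
    lo, hi = 0, len(keys) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if keys[mid] <= x:
            lo = mid
        else:
            hi = mid
    return lo  # keys[lo] <= x <= keys[lo + 1]

def table_lookup(keys, values, x):
    """Linear interpolation within the located unit interval."""
    i = find_interval(keys, x)
    t = (x - keys[i]) / (keys[i + 1] - keys[i])
    return values[i] + t * (values[i + 1] - values[i])
```

An interpolation (regula falsi-style) search would instead choose the probe index from the key values themselves, which pays off when the keys are nearly linearly distributed.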

Details

Engineering Computations, vol. 20 no. 3
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 June 1990

Tony Lancaster

Abstract

As industrial monitor and control systems grow in both size and sophistication, the demands placed upon the man‐machine interface (MMI) increase to the point at which there are significant advantages to be gained from separating the data distribution and display function from the rest of the system. One of the main benefits of such an approach is the elimination of ‘knock‐on’ effects from changes in the way the data is distributed, manipulated and displayed. In a system where data distribution and display are an integral part of the total monitor and control system, changes to one part of the system can have unexpected and far-reaching effects on the whole system. As a result, changes which would improve the MMI aspects of the system are often not implemented.

Details

Aircraft Engineering and Aerospace Technology, vol. 62 no. 6
Type: Research Article
ISSN: 0002-2667

Article
Publication date: 10 March 2023

Jingyi Li and Shiwei Chao

Abstract

Purpose

Binary classification on imbalanced data is a challenge; owing to the class imbalance, the minority class is easily masked by the majority class. Moreover, most existing classifiers are better at identifying the majority class, thereby ignoring the minority class, which leads to classifier degradation. To address this, this paper proposes a twin-support vector machine for binary classification on imbalanced data.

Design/methodology/approach

In the proposed method, the authors construct two support vector machines to focus on majority classes and minority classes, respectively. In order to promote the learning ability of the two support vector machines, a new kernel is derived for them.

Findings

(1) A novel twin-support vector machine is proposed for binary classification on imbalanced data, and new kernels are derived. (2) For imbalanced data, the complexity of the data distribution has negative effects on classification results; however, improved classification results can be gained and the desired boundaries learned by using the optimized kernels. (3) Classifiers based on twin architectures have more advantages than those based on a single architecture for binary classification on imbalanced data.

Originality/value

For imbalanced data, the complexity of the data distribution has negative effects on classification results; however, improved classification results can be gained and the desired boundaries learned by using the optimized kernels.

Details

Data Technologies and Applications, vol. 57 no. 3
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 18 February 2021

Wenguang Yang, Lianhai Lin and Hongkui Gao

Abstract

Purpose

To solve the problem of simulation evaluation with small samples, a fresh grey estimation approach is presented based on classical statistical theory and grey system theory. The purpose of this paper is to make full use of the differences in the data distribution and to avoid ignoring marginal data.

Design/methodology/approach

Based upon the grey distribution characteristics of small-sample data, a new concept, the grey relational similarity measure, is defined. At the same time, the concept of sample weight is proposed according to the grey relational similarity measure. Based on the new definition of grey weight, grey point estimation and the grey confidence interval are studied. Then improved Bootstrap resampling, designed around a uniform distribution and randomness, is introduced as an important supplement to the grey estimation. In addition, the accuracy of grey bilateral and unilateral confidence intervals is examined using the new grey relational similarity measure approach.
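The plain Bootstrap resampling step that the authors improve upon can be sketched generically; the grey relational weighting and the uniform-distribution refinement are specific to the paper and not reproduced here. A minimal percentile-bootstrap sketch (function name, defaults and seed are assumptions):

```python
import random
import statistics

def bootstrap_ci(sample, stat=statistics.mean, n_resamples=2000,
                 alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for a small sample.

    Resamples the data with replacement, computes the statistic on each
    resample, and reads the interval off the sorted estimates. No assumption
    about the underlying probability distribution is needed.
    """
    rng = random.Random(seed)
    estimates = sorted(
        stat([rng.choice(sample) for _ in sample])
        for _ in range(n_resamples)
    )
    lo = estimates[int((alpha / 2) * n_resamples)]
    hi = estimates[int((1 - alpha / 2) * n_resamples) - 1]
    return lo, hi
```

With very small samples the resamples cluster heavily, which is exactly the "excessive concentration of data" the grey weighting in the paper is designed to counteract.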

Findings

The new small-sample evaluation method can realize the effective expansion and enrichment of the data and avoid excessive concentration of the data. This method is an organic fusion of grey estimation and the improved Bootstrap method. Several examples are used to demonstrate the feasibility and validity of the proposed methods in assessing the credibility of simulation data, with no need to know the probability distribution of the small samples.

Originality/value

This research combines grey estimation with the improved Bootstrap, making more reasonable use of the value of different data than the unimproved method.

Details

Grey Systems: Theory and Application, vol. 12 no. 2
Type: Research Article
ISSN: 2043-9377

Article
Publication date: 1 November 2021

Vishakha Pareek, Santanu Chaudhury and Sanjay Singh

Abstract

Purpose

The electronic nose is an array of chemical or gas sensors coupled with a pattern-recognition framework competent in identifying and classifying odorant or non-odorant and simple or complex gases. Despite more than 30 years of research, robust e-nose devices are still limited. Most of the challenges towards reliable e-nose devices are associated with the non-stationary environment and non-stationary sensor behaviour. The data distribution of the sensor array response evolves with time, which is referred to as non-stationarity. The purpose of this paper is to provide a comprehensive introduction to challenges related to non-stationarity in e-nose design and to review the existing literature from an application, system and algorithm perspective to provide an integrated and practical view.

Design/methodology/approach

The authors discuss non-stationary data in general and the challenges related to a non-stationary environment or non-stationary sensor behaviour in e-nose design. The challenges are categorised and discussed from the perspective of learning with data obtained from the sensor systems. The e-nose technology is then reviewed from the system, application and algorithmic points of view to assess its current status.

Findings

The discussed challenges in e-nose design will benefit researchers as well as practitioners, as the paper presents a comprehensive view of multiple aspects of non-stationary learning, systems, algorithms and applications for the e-nose. The paper reviews pattern-recognition techniques and public data sets commonly referred to in olfactory research. Generic techniques for learning in non-stationary environments are also presented. The authors discuss future research directions and major open problems related to handling non-stationarity in e-nose design.

Originality/value

The authors are the first to review the existing literature on learning with an e-nose in a non-stationary environment alongside generic pattern-recognition algorithms for learning in non-stationary environments, bridging the gap between the two. The authors also present details of publicly available sensor array data sets, which will benefit upcoming researchers in this field. The authors further emphasise several open problems and future directions, which should be considered to provide efficient solutions that can handle non-stationarity and make the e-nose the next everyday device.

Article
Publication date: 1 March 2013

Ren Liyong, Lei Ming and Zhao Di

Abstract

Purpose

The purpose of this paper is to improve data transmission efficiency in a P2P system.

Design/methodology/approach

The mechanism does not require changing the existing mesh topology; only the necessary peer-evaluation mechanisms need to be introduced.

Findings

This paper presents a data reservation mechanism that transmits data via the push and pull model on a P2P live-streaming system with a mesh topology. Experiments in a simulation environment show that the mechanism is an effective way of reducing data transmission delay, cutting it by 40 per cent.

Originality/value

To improve data transmission efficiency in a P2P system, this paper presents a data reservation mechanism that transmits data via the push and pull model on a mesh-topology P2P live-streaming system.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 32 no. 2
Type: Research Article
ISSN: 0332-1649

Open Access
Article
Publication date: 20 September 2022

Joo Hun Yoo, Hyejun Jeong, Jaehyeok Lee and Tai-Myoung Chung

Abstract

Purpose

This study aims to summarize the critical issues in medical federated learning and applicable solutions. Detailed explanations of how federated learning techniques can be applied to the medical field are also presented. About 80 reference studies in the field were reviewed, and the federated learning framework currently being developed by the research team is described. This paper will help researchers to build an actual medical federated learning environment.

Design/methodology/approach

Since machine learning techniques emerged, more efficient analysis has been possible with large amounts of data. However, data regulations have been tightened worldwide, and the use of centralized machine learning methods has become almost infeasible. Federated learning techniques have been introduced as a solution. Even with their powerful structural advantages, unsolved challenges remain for federated learning in real medical data environments. This paper aims to summarize those challenges by category and present possible solutions.
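The core aggregation idea behind federated learning can be sketched with the standard federated averaging (FedAvg) update, in which only model parameters, never raw patient records, leave the clients. This is a generic illustration under that assumption, not the authors' framework:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine client model parameters into a global
    model, weighting each client by its local dataset size.

    client_weights: one parameter vector (list of floats) per client
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(client_weights, client_sizes):
        for j in range(dim):
            global_w[j] += (n / total) * w[j]
    return global_w
```

The heterogeneity issues the abstract alludes to arise precisely here: when clients' data distributions differ (non-IID medical data), this simple size-weighted average can pull the global model away from any single client's optimum.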

Findings

This paper provides four critical categorized issues to be aware of when applying the federated learning technique to the actual medical data environment, then provides general guidelines for building a federated learning environment as a solution.

Originality/value

Existing studies have dealt with issues such as heterogeneity problems within the federated learning environment itself, but they lacked discussion of how these issues cause problems in actual working tasks. This paper therefore helps researchers understand federated learning issues through examples from actual medical machine learning environments.

Details

International Journal of Web Information Systems, vol. 18 no. 2/3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 March 2013

Hongyu Zhao, Zhelong Wang, Hong Shang, Weijian Hu and Gao Qin

Abstract

Purpose

The purpose of this paper is to reduce the calculation burden and speed up the estimation process of Allan variance method while ensuring the exactness of the analysis results.

Design/methodology/approach

A series of six-hour static tests was carried out at room temperature, and the static measurements were collected from a MEMS IMU. In order to characterize the various types of random noise terms for the IMU, the basic definition and main procedure of the Allan variance method are investigated. Unlike the normal Allan variance method, which must process large data sets and requires long computation time, a modified Allan variance method is proposed based on the features of the data distribution in the log-log plot of the Allan standard deviation versus the averaging time.
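The baseline the authors modify can be sketched as the standard non-overlapping Allan variance at a single averaging time; the paper's time-controllable refinement is not reproduced here, and the function name is an assumption:

```python
def allan_variance(samples, m, dt=1.0):
    """Non-overlapping Allan variance at averaging time tau = m * dt.

    samples: static sensor output (e.g. gyro rate) at fixed sample period dt
    m:       cluster size in samples
    """
    n_clusters = len(samples) // m
    # average each cluster of m consecutive samples
    means = [sum(samples[i * m:(i + 1) * m]) / m for i in range(n_clusters)]
    # Allan variance: half the mean squared difference of successive averages
    diffs = [(means[k + 1] - means[k]) ** 2 for k in range(n_clusters - 1)]
    return sum(diffs) / (2 * (n_clusters - 1))
```

Repeating this over many cluster sizes m yields the log-log plot of Allan standard deviation versus averaging time from which the noise coefficients are read; the cost of sweeping every m over a six-hour record is what motivates the modified method.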

Findings

Experiment results demonstrate that the modified Allan variance method can effectively estimate the noise coefficients for MEMS IMU, with controllable computation time and acceptable estimation accuracy.

Originality/value

This paper proposes a time‐controllable Allan variance method which can quickly and accurately identify different noise terms imposed by the stochastic fluctuations.

Details

Industrial Robot: An International Journal, vol. 40 no. 2
Type: Research Article
ISSN: 0143-991X
