Search results
1 – 10 of over 3000

Kedong Yin, Yun Cao, Shiwei Zhou and Xinman Lv
Abstract
Purpose
The purposes of this research are to study the theory and method of multi-attribute index system design and establish a set of systematic, standardized, scientific index systems for the design optimization and inspection process. The research may form the basis for a rational, comprehensive evaluation and provide the most effective way of improving the quality of management decision-making. It is of practical significance to improve the rationality and reliability of the index system and provide standardized, scientific reference standards and theoretical guidance for the design and construction of the index system.
Design/methodology/approach
Using modern methods such as complex networks and machine learning, a system for the quality diagnosis of index data and the classification and stratification of index systems is designed. This guarantees the quality of the index data, realizes the scientific classification and stratification of the index system, reduces the subjectivity and randomness of the design of the index system, enhances its objectivity and rationality and lays a solid foundation for the optimal design of the index system.
Findings
Based on the ideas of statistics, system theory, machine learning and data mining, the present research focuses on "data quality diagnosis" and "index classification and stratification," clarifying the classification standards and data-quality characteristics of index data. A data-quality diagnosis system of "data review – data cleaning – data conversion – data inspection" is established. Using decision trees, interpretive structural modeling, cluster analysis, K-means clustering and other methods, a classification and stratification method system for indicators is designed to reduce the redundancy of indicator data and improve the quality of the data used. Finally, a scientific, standardized classification and hierarchical design of the index system can be realized.
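The classification step described above relies on unsupervised clustering. As an illustration only (the data and settings below are invented, not taken from the paper), a minimal pure-Python K-means sketch might look like:

```python
def kmeans(points, k, iters=20):
    """Plain k-means on 2-D points; a toy stand-in for clustering
    indicator data into classes (the data here is made up)."""
    centers = points[:k]  # deterministic init for reproducibility
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # assign each point to its nearest centre
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: (p[0] - centers[c][0]) ** 2
                                + (p[1] - centers[c][1]) ** 2)
            groups[j].append(p)
        # move each centre to the mean of its group
        centers = [
            (sum(p[0] for p in g) / len(g), sum(p[1] for p in g) / len(g))
            if g else centers[j]
            for j, g in enumerate(groups)
        ]
    return centers, groups

# two well-separated clusters of hypothetical indicator scores
pts = [(0.1, 0.2), (5.0, 5.1), (0.0, 0.1), (0.2, 0.0), (5.2, 4.9), (4.9, 5.0)]
centers, groups = kmeans(pts, k=2)
```

In practice the indicators would be vectors of many attributes and k would be chosen by validation; this sketch only shows the mechanics.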
Originality/value
The innovative contributions and research value of the paper are reflected in three aspects. First, a method system for index data quality diagnosis is designed, and multi-source data fusion technology is adopted to ensure the quality of the index system's multi-source, heterogeneous and mixed-frequency data. Second, a systematic quality-inspection process for missing data is designed, based on systems thinking about the whole and the individual: targeting the accuracy, reliability and feasibility of the patched data, a quality-inspection method for patched data based on the idea of inversion and a unified representation method for data fusion based on a tensor model are proposed. Third, the modern method of unsupervised learning is used to classify and stratify the index system, which reduces the subjectivity and randomness of the index system design and enhances its objectivity and rationality.
Abstract
Purpose
Missing travel time data for roads is a common problem encountered by traffic management departments. Tensor decomposition, one of the most widely used methods for completing missing traffic data, plays a significant role in intelligent transportation systems (ITS). However, existing tensor decomposition methods focus on the global data structure, resulting in relatively low accuracy in fibrosis (fiber-like) missing scenarios. This paper therefore proposes a novel tensor decomposition model that further considers local spatiotemporal similarity for fibrosis missing data, to improve travel-time completion accuracy.
Design/methodology/approach
The proposed model can aggregate road sections with similar physical attributes by spatial clustering, and then it calculates the temporal association of road sections by the dynamic longest common subsequence. A similarity relationship matrix in the temporal dimension is constructed and incorporated into the tensor completion model, which can enhance the local spatiotemporal relationship of the missing parts of the fibrosis type.
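The temporal association via the longest common subsequence can be illustrated with the classic LCS dynamic program; the discretized travel-time sequences below are hypothetical, and the paper's dynamic LCS variant may differ in detail:

```python
def lcs_len(a, b):
    """Length of the longest common subsequence, by the classic
    dynamic program."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity(a, b):
    """Normalized LCS similarity in [0, 1] between two discretized
    travel-time sequences."""
    return lcs_len(a, b) / max(len(a), len(b))

# travel times of two road sections discretized to low/medium/high
road_a = "LLMHHML"
road_b = "LMHHMLL"
s = similarity(road_a, road_b)
```

Entries of such a pairwise similarity matrix can then be incorporated into the completion model as the temporal relationship term.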
Findings
The experiment shows that this method is superior and robust. Compared with other baseline models, this method has the smallest error and maintains good completion results despite high missing rates.
Originality/value
This model achieves higher accuracy for fibrosis missing data and converges well even at high missing rates.
Assad Mehmood, Kashif Zia, Arshad Muhammad and Dinesh Kumar Saini
Abstract
Purpose
Participatory wireless sensor networks (PWSN) are an emerging paradigm that leverages existing sensing and communication infrastructures for the sensing task. PWSN can enable monitoring applications for various environmental phenomena P – such as noise pollution and road traffic – that require spatio-temporal data samples of P (to capture its variations and construct its profile) in the region of interest. Because of the irregular distribution and uncontrollable mobility of people (with mobile phones), and their varying willingness to participate, complete spatio-temporal (CST) coverage of P may not be ensured. Therefore, unobserved data values must be estimated for CST profile construction of P, as presented in this paper.
Design/methodology/approach
This paper discusses the estimation of these missing data samples in both the spatial and temporal dimensions and shows that a non-parametric technique – kernel regression – provides better spatial estimation than parametric regression techniques in the PWSN context. Furthermore, preliminary results for estimation in the temporal dimension are provided. Deterministic and stochastic approaches to estimation in the context of PWSN are also discussed.
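Kernel regression here refers to the Nadaraya-Watson estimator: a locally weighted average of observed samples. A minimal sketch (the sensor positions, readings and bandwidth below are made up):

```python
import math

def kernel_regression(xs, ys, x, h=1.0):
    """Nadaraya-Watson estimator with a Gaussian kernel: the
    estimate at x is a weighted average of the observed samples,
    with weights decaying smoothly with distance."""
    w = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in xs]
    return sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)

# hypothetical noise-level readings (dB) at known sensor positions
xs = [0.0, 1.0, 2.0, 3.0]
ys = [50.0, 55.0, 60.0, 65.0]
est = kernel_regression(xs, ys, 1.5, h=0.5)  # estimate at an unobserved point
```

By symmetry of this toy data the estimate at 1.5 is 57.5; the bandwidth h controls how local the averaging is.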
Findings
For the task of spatial profile reconstruction, it is shown that the non-parametric estimation technique (kernel regression) gives a better estimate of the unobserved data points. For temporal estimation, a few preliminary techniques have been studied, and the results show that further investigation is required to find the estimation technique(s) that approximate the missing observations (temporally) with considerably less error.
Originality/value
This study addresses the environmental informatics issues related to deterministic and stochastic approaches using PWSN.
Walaa M. El-Sayed, Hazem M. El-Bakry and Salah M. El-Sayed
Wireless sensor networks (WSNs) are periodically collecting data through randomly dispersed sensors (motes), which typically consume high energy in radio communication that mainly…
Abstract
Wireless sensor networks (WSNs) periodically collect data through randomly dispersed sensors (motes), which typically consume high energy in radio communication, mainly due to data transmission within the network. Furthermore, the dissemination mode in WSNs usually produces noisy values, incorrect measurements or missing information that affect the behaviour of the network. In this article, a Distributed Data Predictive Model (DDPM) is proposed to extend the network lifetime by decreasing the energy consumption of sensor nodes. It is built upon a distributive clustering model for predicting dissemination faults in WSNs. The proposed model integrates a recursive least squares (RLS) adaptive filter with a finite impulse response (FIR) filter to remove unwanted reflections and noise accompanying the signals transferred among the sensors, aiming to minimize the size of the transferred data and provide energy efficiency. The experimental results demonstrate that DDPM reduced the rate of data transmission to ∼20%. It also decreased the energy consumption to 95% throughout the dataset sample and improved the performance of the sensory network by about 19.5%, thus prolonging the lifetime of the network.
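The RLS component can be sketched in scalar form as a one-tap predictor x[n] ≈ w·x[n−1], updated recursively. This is a generic textbook RLS recursion on a synthetic signal, not the article's full DDPM filter:

```python
def rls_one_tap(signal, lam=0.99, delta=100.0):
    """Scalar recursive least squares: learns a one-tap linear
    predictor x[n] ≈ w * x[n-1]. A generic textbook RLS sketch,
    not the article's full DDPM filter."""
    w, P = 0.0, delta                    # weight and inverse-correlation term
    for n in range(1, len(signal)):
        x, d = signal[n - 1], signal[n]
        k = P * x / (lam + x * P * x)    # gain
        e = d - w * x                    # a priori prediction error
        w += k * e
        P = (P - k * x * P) / lam
    return w

# synthetic noiseless AR(1) signal: x[n] = 0.9 * x[n-1]
sig = [1.0]
for _ in range(50):
    sig.append(0.9 * sig[-1])
w = rls_one_tap(sig)
```

Once the predictor tracks the signal well, a node can skip transmitting samples the sink can predict, which is the mechanism behind the reported reduction in transmitted data.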
Sebastian Drexel, Susanne Zimmermann-Janschitz and Robert J. Koester
Abstract
Purpose
A search and rescue incident is ultimately all about the location of the missing person; hence, geotechnical tools are critical in assisting search planners. One critical role of Geographic Information Systems (GIS) is to delineate the boundaries that define the search area. The literature mostly focuses on ring- and area-based methods but lacks a linear/network approach. The purpose of this paper is to present a novel network approach that benefits search planners by saving time, requiring fewer data layers and providing better results.
Design/methodology/approach
The paper compares two existing models (the Ring Model and the Travel Time Cost Surface Model (TTCSM)) against a new network model (the Travel Time Network Model) using a case study from a mountainous area in Austria. The newest data from the International Search and Rescue Incident Database are used for all three models, and the advantages and disadvantages of each model are evaluated.
Findings
Network analyses offer a fruitful alternative to the Ring Model and the TTCSM for estimating search areas, especially for regions with comprehensive trail/road networks. Furthermore, only a few basic data layers are needed for quick calculation.
Practical implications
The paper supports GIS network analyses for wildland search and rescue operations to raise the survival chances of missing persons by optimizing search area estimation.
Originality/value
The paper demonstrates the value of the novel network approach, which requires fewer GIS layers and less time to generate a solution. Furthermore, the paper provides a comparison between all three potential models.
Abdulmohsen S. Almohsen, Naif M. Alsanabani, Abdullah M. Alsugair and Khalid S. Al-Gahtani
Abstract
Purpose
The variance between the winning bid and the owner's estimated cost (OEC) is one of the construction management risks in the pre-tendering phase. The study aims to enhance the quality of the owner's estimate, predicting the contract cost precisely at the pre-tendering phase and avoiding future issues arising during the construction phase.
Design/methodology/approach
This paper integrates artificial neural networks (ANN), deep neural networks (DNN) and time series (TS) techniques to accurately estimate the ratio of the low bid to the OEC (R) for contracts of different sizes and three contract types (building, electrical and mechanical), based on 94 contracts from King Saud University. The ANN and DNN models were evaluated using the mean absolute percentage error (MAPE), the mean sum square error (MSSE) and the root mean sum square error (RMSSE).
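The reported error metrics are straightforward to compute; the sketch below assumes MSSE denotes the mean of squared errors and RMSSE its square root, and the R ratios are invented for illustration:

```python
import math

def mape(actual, pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a) for a, p in zip(actual, pred)) / len(actual)

def msse(actual, pred):
    """Mean sum square error, assumed here to be the mean of squared errors."""
    return sum((a - p) ** 2 for a, p in zip(actual, pred)) / len(actual)

def rmsse(actual, pred):
    """Root of the MSSE."""
    return math.sqrt(msse(actual, pred))

# invented R ratios (low bid / owner's estimate) and model predictions
actual = [0.95, 1.10, 1.02, 0.88]
pred = [0.93, 1.12, 1.00, 0.90]
```

With these toy numbers every error is 0.02, so MSSE is 0.0004 and RMSSE is 0.02.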
Findings
The main finding is that the ANN provides high accuracy, with MAPE, MSSE and RMSSE of 2.94%, 0.0015 and 0.039, respectively. The DNN's precision was also high, with an RMSSE of 0.15 on average.
Practical implications
The owner and consultant are expected to use the study's findings to improve the accuracy of the owner's estimate and decrease the difference between the owner's estimate and the lowest submitted offer, for better decision-making.
Originality/value
This study fills the knowledge gap by developing an ANN model to handle missing TS data and forecasting the difference between a low bid and an OEC at the pre-tendering phase.
P. C. Parida, Arup Mitra and Kailash Ch. Pradhan
Abstract
Purpose
This study attempts to examine the missing middle (MM) phenomenon in the context of the Indian manufacturing sector using unit-level information from the database of the Ministry of Corporate Affairs, Government of India.
Design/methodology/approach
Unlike previous studies, the present study first bifurcates the missing enterprises into two categories, "permanently dropped" and "reappeared," in order to pursue a meaningful analysis and derive conclusions with policy insights. Various financial indicators were used, in a logistic framework, to explain the causes of the MM phenomenon between 2009–2010 and 2016–2017.
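A logistic framework of this kind relates a binary drop-out indicator to financial ratios. A minimal gradient-descent sketch on made-up data (the real study uses firm-level financial indicators over several years):

```python
import math

def fit_logistic(X, y, lr=0.02, epochs=30000):
    """Logistic regression via plain gradient descent; a minimal
    sketch of a logit model relating drop-out status to a financial
    indicator (all data below is made up)."""
    nfeat = len(X[0])
    w = [0.0] * nfeat
    b = 0.0
    for _ in range(epochs):
        gw = [0.0] * nfeat
        gb = 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))   # predicted drop-out probability
            for j in range(nfeat):
                gw[j] += (p - yi) * xi[j]
            gb += p - yi
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, gw)]
        b -= lr * gb / len(X)
    return w, b

# hypothetical profit margins (%) and drop-out labels (1 = dropped out)
X = [[2.0], [3.0], [5.0], [15.0], [18.0], [20.0]]
y = [1, 1, 1, 0, 0, 0]
w, b = fit_logistic(X, y)

def predict(margin):
    """Estimated drop-out probability for a given profit margin."""
    return 1.0 / (1.0 + math.exp(-(b + w[0] * margin)))
```

On this toy data the fitted coefficient on profit margin is negative: lower margins imply a higher estimated probability of dropping out, mirroring the sign of the relationship the study examines.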
Findings
The study found that the profit margin ratio is higher for the group of medium-sized enterprises that continued than for the units that dropped out permanently. The same holds for the investment turnover ratio. The econometric results, however, suggest that the relationship between a firm's chances of dropping out and the financial indicators is weak, as the coefficients of the various financial indicators are statistically significant only for a few years.
Originality/value
The study suggests that the missing middle phenomenon is not a myth in India, as a very large number of medium-sized firms have been disappearing from the market over the years. Based on firm-level data, it identifies the factors behind this phenomenon.
Laurens Swinkels and Thijs Markwat
Abstract
Purpose
The purpose is to better understand the impact of the choice of carbon data provider on estimated portfolio emissions across four asset classes. This is important, as prior literature has suggested that Environmental, Social and Governance scores have low correlation across providers.
Design/methodology/approach
The authors compare carbon data from four data providers for developed and emerging equity markets and investment grade and high-yield corporate bond markets.
Findings
Data on scope 1 and scope 2 emissions are similar across the four data providers, but scope 3 differences can be substantial. Carbon emissions data have become more consistent across providers over time.
Research limitations/implications
The authors examine the impact of different carbon data providers at the asset class level. Portfolios that invest only in a subset of the asset class may be affected differently. Because “true” carbon emissions are not known, the authors cannot investigate which provider has the most accurate carbon data.
Practical implications
The impact of choosing a carbon data provider is limited for scope 1 and scope 2 data for equity markets. Differences are larger for corporate bonds and scope 3 emissions.
Originality/value
The authors compare carbon accounting metrics on scopes 1, 2 and 3 of corporate greenhouse gas emissions from multiple data providers for developed and emerging equity and investment grade and high-yield investment portfolios. Moreover, the authors show the impact of filling missing data points, which is especially relevant for corporate bond markets, where data coverage tends to be lower.
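Missing data points can be filled in many ways; one generic approach is imputing with the median of sector peers. The sketch below is illustrative only, with invented figures, and is not the providers' estimation methodology:

```python
def fill_missing(values, groups):
    """Fill missing entries (None) with the median of the observed
    values in the same group (e.g. sector peers); purely illustrative."""
    by_group = {}
    for v, g in zip(values, groups):
        if v is not None:
            by_group.setdefault(g, []).append(v)

    def median(xs):
        xs = sorted(xs)
        n = len(xs)
        return xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2

    return [v if v is not None else median(by_group[g])
            for v, g in zip(values, groups)]

# hypothetical scope 1 intensities; one utilities issuer is unreported
vals = [120.0, None, 130.0, 30.0, 28.0]
sects = ["utilities", "utilities", "utilities", "tech", "tech"]
filled = fill_missing(vals, sects)  # the gap becomes the utilities median
```

Because coverage is lower for corporate bonds, the share of such imputed values is higher there, which is why the choice of filling method matters more for that asset class.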
Paul Soper, Alex G. Stewart, Rajan Nathan, Sharleen Nall-Evans, Rachel Mills, Felix Michelet and Sujeet Jaydeokar
Abstract
Purpose
This study aims to evaluate the quality of transition from child and adolescent services to adult intellectual disability services, using the relevant National Institute for Health and Care Excellence (NICE) standard (QS140). In addition, this study also identifies any differences in transition quality between those young people with intellectual disability with and without autism.
Design/methodology/approach
Using routinely collected clinical data, this study identifies demographic and clinical characteristics of, and contextual complexities experienced by, young people in transition between 2017 and 2020. Compliance with the quality standard was assessed by applying dedicated search terms to the records.
Findings
The study highlighted poor recording of data with only 22% of 306 eligible cases having sufficient data recorded to determine compliance with the NICE quality standard. Available data indicated poor compliance with the standard. Child and adolescent mental health services, generally, did not record mental health co-morbidities. Compliance with three out of the five quality statements was higher for autistic young people, but this only reached statistical significance for one of those statements (i.e. having a named worker, p = 0.02).
Research limitations/implications
Missing data included basic clinical characteristics such as the level of intellectual disability and the presence of autism. This required adult services to duplicate assessment procedures, potentially delaying clinical outcomes. This study highlights that poor compliance may reflect inaccurate recording, which needs to be addressed through training and the introduction of shared protocols.
Originality/value
To the best of the authors’ knowledge, this is the first study to examine the transition process between children’s and adults’ intellectual disability health services using NICE quality standard 140.
Noemi Manara, Lorenzo Rosset, Francesco Zambelli, Andrea Zanola and America Califano
Abstract
Purpose
In the field of heritage science, especially as applied to buildings and artefacts made of organic hygroscopic materials, analyzing the microclimate has always been of extreme importance. In particular, in many cases, knowledge of the outdoor/indoor microclimate may support the decision process in the conservation and preservation of historic buildings. This knowledge is often gained through long, time-consuming monitoring campaigns that collect atmospheric and climatic data.
Design/methodology/approach
Sometimes the collected time series may be corrupted, incomplete and/or affected by sensor errors because of the remoteness of the historic building, the natural aging of the sensors or the lack of a continuous check on the data downloading process. For this reason, this work proposes an innovative approach to reconstructing the indoor microclimate of heritage buildings from knowledge of the outdoor one alone. The methodology is based on machine learning tools known as variational autoencoders (VAEs), which are able to reconstruct time series and/or fill data gaps.
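As background, a VAE learns an encoder $q_\phi(z \mid x)$ and a decoder $p_\theta(x \mid z)$ by maximizing the evidence lower bound (ELBO); in a gap-filling setting the reconstruction term is typically evaluated only on the observed entries (a standard formulation, not necessarily the exact loss used by the authors):

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x_{\mathrm{obs}})}\!\left[\log p_\theta(x_{\mathrm{obs}} \mid z)\right] - D_{\mathrm{KL}}\!\left(q_\phi(z \mid x_{\mathrm{obs}}) \,\middle\|\, p(z)\right)$$

Missing entries are then filled by encoding the observed part of the series and decoding the full series from the latent code $z$.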
Findings
The proposed approach is implemented using data collected in Ringebu Stave Church, a Norwegian medieval wooden heritage building. A realistic time series of the Church's natural internal climate was successfully reconstructed for the vast majority of the year.
Originality/value
The novelty of this work is discussed in the framework of the existing literature. The work explores the potential of machine learning tools compared to traditional ones, providing a method that can reliably fill missing data in time series.