Search results
1 – 10 of over 3,000 results

Ming-min Liu, L.Z. Li and Jun Zhang
Abstract
Purpose
The purpose of this paper is to discuss a data interpolation method of curved surfaces from the point of dimension reduction and manifold learning.
Design/methodology/approach
Instead of transmitting data of curved surfaces in 3D space directly, the method transmits the data by unfolding the 3D curved surfaces into 2D planes with manifold learning algorithms. The similarity between surface unfolding and manifold learning is discussed, and the ability of several manifold learning algorithms to unfold curved surfaces is investigated. The algorithms’ efficiency and their influence on the accuracy of data transmission are examined through three examples.
Findings
It is found that data interpolation using the manifold learning algorithms locally linear embedding (LLE), Hessian LLE (HLLE) and local tangent space alignment (LTSA) is efficient and accurate.
Originality/value
The method can improve the accuracies of coupling data interpolation and fluid-structure interaction simulation involving curved surfaces.
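The unfolding step this abstract describes can be sketched with scikit-learn's manifold module; the swiss-roll surface and all parameters below are illustrative stand-ins, not the paper's own data:

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import LocallyLinearEmbedding

# Sample a curved surface (the classic "swiss roll") embedded in 3D space.
X, _ = make_swiss_roll(n_samples=1000, random_state=0)

# Unfold the 3D surface into a 2D plane with LLE; method="hessian" (HLLE)
# and method="ltsa" are drop-in alternatives named in the abstract.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2,
                             method="standard", random_state=0)
Y = lle.fit_transform(X)

print(X.shape, Y.shape)  # (1000, 3) (1000, 2)
```

Data attached to the surface points can then be interpolated in the flat 2D coordinates rather than in 3D.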
Wenfeng Zhang, Ming K. Lim, Mei Yang, Xingzhi Li and Du Ni
Abstract
Purpose
As the supply chain is a highly integrated infrastructure in modern business, risks in the supply chain are becoming highly contagious among target companies. This motivates researchers to continuously add new features to the datasets used for credit risk prediction (CRP). However, adding new features can easily lead to missing data.
Design/methodology/approach
Based on the gaps summarized from the literature in CRP, this study first introduces the approaches to the building of datasets and the framing of the algorithmic models. Then, this study tests the interpolation effects of the algorithmic model in three artificial datasets with different missing rates and compares its predictability before and after the interpolation in a real dataset with the missing data in irregular time-series.
Findings
The time-decayed long short-term memory (TD-LSTM) model proposed in this study can handle missing data in irregular time series by capturing more and better time-series information and interpolating the missing values efficiently. Moreover, a deep neural network model can be used for CRP on datasets with missing data in irregular time series after interpolation by the TD-LSTM.
Originality/value
This study fully validates the interpolation effects of the TD-LSTM and demonstrates that the predictability of the dataset improves after interpolation. Accurate and timely CRP can undoubtedly help a target company avoid losses: identifying credit risks and taking preventive measures ahead of time, especially in the case of public emergencies, can minimize those losses.
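The abstract does not spell out the TD-LSTM itself, but the time-decay idea behind it, weighting past observations by how long ago they occurred, can be illustrated in a few lines. The series, the decay constant `tau` and the exponential kernel are all assumptions for illustration, not the authors' model:

```python
import numpy as np

def decay_impute(t_obs, y_obs, t_query, tau=5.0):
    """Estimate values at t_query as a time-decay weighted average of
    irregularly spaced observations: w_i = exp(-|t - t_i| / tau)."""
    t_obs, y_obs = np.asarray(t_obs, float), np.asarray(y_obs, float)
    out = []
    for t in np.atleast_1d(t_query):
        w = np.exp(-np.abs(t - t_obs) / tau)
        out.append(np.sum(w * y_obs) / np.sum(w))
    return np.array(out)

# Irregular time series with a gap between t=4 and t=12.
t = [0, 1, 4, 12, 13]
y = [1.0, 1.2, 1.9, 4.1, 4.0]
print(decay_impute(t, y, [8.0]))  # estimate inside the gap
```

The TD-LSTM learns such decay behaviour from data instead of fixing the kernel by hand.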
Eric Ghysels and J. Isaac Miller
Abstract
We analyze the sizes of standard cointegration tests applied to data subject to linear interpolation, discovering evidence of substantial size distortions induced by the interpolation. We propose modifications to these tests to effectively eliminate size distortions from such tests conducted on data interpolated from end-of-period sampled low-frequency series. Our results generally do not support linear interpolation when alternatives such as aggregation or mixed-frequency-modified tests are possible.
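The interpolation step whose side effects the authors study can be reproduced in one line with NumPy; the annual series below is invented for illustration:

```python
import numpy as np

# An end-of-period sampled low-frequency (annual) series.
years = np.array([2000.0, 2001.0, 2002.0, 2003.0])
annual = np.array([100.0, 104.0, 103.0, 110.0])

# Linearly interpolate onto a quarterly grid, as a practitioner might
# before running a cointegration test on the "high-frequency" data.
quarters = np.arange(2000, 2003.25, 0.25)
quarterly = np.interp(quarters, years, annual)
print(quarterly[:5])  # [100. 101. 102. 103. 104.]
```

The artificially smooth within-year dynamics this creates are exactly what induces the size distortions the paper documents.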
Liang Zhao, Wen Tao, Guangwen Wang, Lida Wang and Guichang Liu
Abstract
Purpose
The paper aims to develop an intelligent anti-corrosion expert system based on browser/server (B/S) architecture to realize an intelligent corrosion management system.
Design/methodology/approach
The system is built on the Java EE technology platform and the model-view-controller (MVC) three-tier architecture development model. The authors used an extended three-dimensional interpolation model to predict corrosion rate, and the model is verified by cross-validation. Additionally, MySQL is used for comprehensive data management.
Findings
The proposed anti-corrosion system combines full use of corrosion data, relevant corrosion prediction and efficient corrosion management in one system. It can therefore provide accurate prediction of corrosion rate, risk evaluation, risk alerts and expert suggestions for equipment in petrochemical plants.
Originality/value
Collectively, the present study has important ramifications for more efficient and scientific management of corrosion data in enterprises and for expert guidance in controlling corrosion status. At the same time, the digital management of corrosion data can provide data support for related theoretical research in the corrosion field, and the intelligent system offers an example for improving systems in other fields through intelligent methods.
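The paper's extended three-dimensional interpolation model is not specified in the abstract; a plain trilinear lookup over a regular grid, as SciPy provides, conveys the basic idea. The axis names, grid values and synthetic rate formula below are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical corrosion-rate lookup over temperature, concentration and
# flow velocity (axes and values are illustrative, not from the paper).
temp = np.array([20.0, 40.0, 60.0])
conc = np.array([0.1, 0.5, 1.0])
flow = np.array([0.0, 1.0, 2.0])
T, C, F = np.meshgrid(temp, conc, flow, indexing="ij")
rate = 0.01 * T + 0.2 * C + 0.05 * F  # synthetic rates, mm/year

interp = RegularGridInterpolator((temp, conc, flow), rate)
print(interp([[30.0, 0.3, 0.5]]))  # trilinear interpolation inside the grid
```

Cross-validation, as in the paper, would repeatedly hold out grid points and compare predicted against measured rates.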
Edmund Baffoe-Twum, Eric Asa and Bright Awuku
Abstract
Background: The annual average daily traffic (AADT) data from road segments are critical for roadway projects, especially in decision-making about operations, travel demand, safety-performance evaluation and maintenance. Regular updates help to determine traffic patterns for decision-making. Unfortunately, permanent recorders on all road segments, especially low-volume roads, are virtually impossible to maintain. Consequently, insufficient AADT information is acquired for planning and new developments. A growing number of statistical, mathematical and machine-learning algorithms have helped estimate AADT values accurately, to some extent, at both sampled and unsampled locations on low-volume roadways. In some cases, roads with no representative AADT data are resolved with information from roadways with similar traffic patterns.
Methods: This study adopted an integrative approach with a combined systematic literature review (SLR) and meta-analysis (MA) to identify and to evaluate the performance, the sources of error, and possible advantages and disadvantages of the techniques utilized most for estimating AADT data. As a result, an SLR of various peer-reviewed articles and reports was completed to answer four research questions.
Results: The study showed that the techniques most frequently used to estimate AADT data on low-volume roadways were regression, artificial neural networks, travel-demand models, the traditional factor approach and spatial interpolation techniques. The performance of these AADT estimation methods was subjected to meta-analysis. Three meta-analyses were completed, based on R-squared, root mean square error (RMSE) and mean absolute percentage error (MAPE). The meta-analysis results indicated a mixed summary effect: by one measure all studies were equal, while by another they were not comparable. However, the integrated qualitative and quantitative approach indicated that spatial interpolation (kriging) methods outperformed the others.
Conclusions: Practitioners at all levels may select spatial interpolation methods over the others to generate accurate AADT data for decision-making. In addition, the resulting cross-validation statistics provide performance measures comparable to those of the other methods.
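Kriging itself requires fitting a variogram model; the simpler inverse-distance weighting below sketches the general spatial-interpolation idea of predicting AADT at unsampled locations from nearby count stations. Station coordinates and counts are invented:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse-distance-weighted spatial interpolation: nearer stations
    get larger weights, w_i = 1 / d_i**power."""
    xy_known = np.asarray(xy_known, float)
    z_known = np.asarray(z_known, float)
    preds = []
    for q in np.atleast_2d(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if np.any(d == 0):                 # query sits on a station
            preds.append(z_known[np.argmin(d)])
            continue
        w = 1.0 / d ** power
        preds.append(np.sum(w * z_known) / np.sum(w))
    return np.array(preds)

# AADT counts (vehicles/day) at four count stations (x, y in km).
stations = [(0, 0), (10, 0), (0, 10), (10, 10)]
aadt = [400, 650, 500, 900]
print(idw(stations, aadt, [(5, 5)]))  # equidistant point -> plain mean, [612.5]
```

Kriging additionally yields a prediction variance at each location, which is the source of the cross-validation statistics the conclusions mention.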
Rainald Löhner, Harbir Antil, Hamid Tamaddon-Jahromi, Neeraj Kavan Chakshu and Perumal Nithiarasu
Abstract
Purpose
The purpose of this study is to compare interpolation algorithms and deep neural networks for inverse transfer problems with linear and nonlinear behaviour.
Design/methodology/approach
A series of runs were conducted for a canonical test problem. These were used as databases or “learning sets” for both interpolation algorithms and deep neural networks. A second set of runs was conducted to test the prediction accuracy of both approaches.
Findings
The results indicate that interpolation algorithms outperform deep neural networks in accuracy for linear heat conduction, while the reverse is true for nonlinear heat conduction problems. For heat convection problems, both methods offer similar levels of accuracy.
Originality/value
This is the first time such a comparison has been made.
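The interpolation side of such a comparison can be sketched with SciPy's radial basis function interpolator: fit it to a "learning set" of runs, then score it on a second set. The smooth analytic target below is a stand-in for the canonical heat-transfer problem, which the abstract does not specify:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

# "Learning set": 200 runs mapping 2 input parameters to a scalar output.
params = rng.uniform(-1, 1, size=(200, 2))
outputs = np.sin(np.pi * params[:, 0]) * np.cos(np.pi * params[:, 1])

interp = RBFInterpolator(params, outputs, kernel="thin_plate_spline")

# Second set of runs: measure prediction accuracy on unseen parameters.
test = rng.uniform(-0.9, 0.9, size=(50, 2))
truth = np.sin(np.pi * test[:, 0]) * np.cos(np.pi * test[:, 1])
pred = interp(test)
print(np.max(np.abs(pred - truth)))  # small for this smooth target
```

A deep neural network trained on the same learning set would replace `RBFInterpolator` here; the paper's finding is that which of the two wins depends on whether the underlying problem is linear or nonlinear.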
João Gabriel Ribeiro and Sônia Maria de Stefano Piedade
Abstract
Purpose
The state of Mato Grosso is the largest producer and exporter of soybeans in Brazil. Given this importance, this study proposes the use of a univariate imputation tool for time series, based on spline interpolation, in 46 of its municipalities that had missing data in the variables soybean production (thousand tons), production value and soy derivatives value (R$ thousand), and also assesses, in each of these municipalities, the differences between the observed series and those with imputed values.
Design/methodology/approach
The proposed methodology was based on the use of the univariate imputation method through the application of cubic spline interpolation in each of the 46 municipalities, for each of the 3 variables. Then, for each municipality, the original series were compared with each observed series plus the values imputed in these variables by the Quenouille test of correlation of time series.
Findings
It was observed that, after imputation, all series compared with the observed ones were equal by the Quenouille test in the 46 municipalities analyzed, and the Wilcoxon test also showed equality for the accumulated totals of the three variables involved in soybean production. After imputation, accumulated state totals across the 46 municipalities increased by 5.92% for soy production, 3.58% for soy production value and 2.84% for soy derivatives value.
Originality/value
The present research and its results facilitate the process of estimates and monitoring the total soy production in the state of Mato Grosso and its municipalities from 1990 to 2018.
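The cubic spline imputation the authors apply per municipality can be sketched with SciPy; the production figures and missing years below are invented for illustration:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Annual soybean production (thousand tons, illustrative), with 1994 and
# 1995 missing, mimicking the univariate imputation setting of the paper.
years = np.array([1990, 1991, 1992, 1993, 1996, 1997, 1998], dtype=float)
prod = np.array([120.0, 135.0, 150.0, 148.0, 190.0, 210.0, 225.0])

# Fit a cubic spline through the observed years only, then evaluate it
# at the missing years to impute the gap.
spline = CubicSpline(years, prod)
missing = np.array([1994.0, 1995.0])
imputed = spline(missing)
print(dict(zip(missing, np.round(imputed, 1))))
```

The Quenouille and Wilcoxon tests in the paper then check that the series with these imputed values is statistically indistinguishable from the observed one.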
Doaa Salaheldin Ismail Elsayed
Abstract
Purpose
Aleppo city in Syria has witnessed severe bombardment since the 2011 war, affecting its landscape heritage and causing explicit geomorphological changes of an anthropogenic character. The research aims to log observations on the patterns of bombardment craters and investigates their key role in guiding post-war recovery plans. Currently, the interpretation of war scars is not considered in the reconstruction plans proposed by local administrations, and herein lies the importance of the research.
Design/methodology/approach
The study investigates the geomorphological transformations along the southern citadel perimeter in old Aleppo. Digital tools now facilitate data prediction in conflict areas. The research employs an empirical method for inhabiting war craters based on both qualitative and quantitative approaches. The former utilizes satellite images to define the geographical changes of landscape heritage; the latter applies geostatistical data analysis, validation, interpolation and simulation to multi-temporal Google Earth maps. The study uses Surfer 13 software to localize and measure the preserved craters.
Findings
The research employs the generated models in a landscape design proposal examining the method's applicability. Finally, it offers a methodological toolkit guiding post-war landscape recovery toward the interpretation of conflict geography.
Practical implications
The paper enables a practical understanding of the contemporaneity of landscape heritage recovery as an action between sustainable development and conservation.
Social implications
The paper integrates the conflict geographies to the people's commemoration of places and events.
Originality/value
The article offers an insight into the rehabilitation of war landscapes focusing on land craters, exploiting geostatistical data prediction methods.
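The quantitative branch of the method, interpolating scattered elevation measurements onto a regular grid before contouring or simulation, can be sketched with SciPy; the study itself used Surfer 13, and the crater geometry below is synthetic:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Scattered (x, y) sample points with measured elevations (illustrative).
pts = rng.uniform(0, 100, size=(80, 2))
# Synthetic terrain with a "crater": a dip centred at (50, 50).
elev = 10.0 - 6.0 * np.exp(-((pts[:, 0] - 50) ** 2
                             + (pts[:, 1] - 50) ** 2) / 200.0)

# Interpolate onto a regular grid, the form contouring tools expect.
gx, gy = np.meshgrid(np.linspace(20, 80, 61), np.linspace(20, 80, 61))
grid = griddata(pts, elev, (gx, gy), method="linear")

print(grid.shape, np.nanmin(grid))  # lowest values near the crater centre
```

Localizing and measuring craters then reduces to finding local minima and their extents on this grid.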
Zhiwen Pan, Wen Ji, Yiqiang Chen, Lianjun Dai and Jun Zhang
Abstract
Purpose
Disability datasets contain information on disabled populations. By analyzing these datasets, professionals who work with disabled populations can better understand their inherent characteristics, so that working plans and policies that effectively help these populations can be made accordingly.
Design/methodology/approach
In this paper, the authors propose a big data management and analytics approach for disability datasets.
Findings
By using a set of data mining algorithms, the proposed approach provides the following services. The data management scheme improves the quality of disability data by estimating missing attribute values and detecting anomalous and low-quality data instances. The data mining scheme explores useful patterns that reflect the correlation, association and interaction between disability data attributes. Experiments based on a real-world dataset are conducted to prove the effectiveness of the approach.
Originality/value
The proposed approach can enable data-driven decision-making for professionals who work with disabled populations.
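The two data-management services the abstract names, estimating missing attribute values and flagging anomalous instances, can be sketched with scikit-learn; mean imputation and Isolation Forest are stand-ins here, since the abstract does not name the specific algorithms used:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic records with two numeric attributes; some values are missing.
X = rng.normal(50, 10, size=(200, 2))
X[rng.integers(0, 200, 20), 0] = np.nan   # knock out some attribute values
X[0] = [500.0, -500.0]                    # one clearly anomalous instance

# Service 1: estimate missing attribute values (mean imputation here).
X_filled = SimpleImputer(strategy="mean").fit_transform(X)

# Service 2: flag anomalous / low-quality instances (-1 = anomaly).
labels = IsolationForest(contamination=0.05, random_state=0).fit_predict(X_filled)
print(np.isnan(X_filled).sum(), labels[0])  # no missing values left; row 0 flagged
```

In the paper's setting, the cleaned dataset then feeds the pattern-mining scheme.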
Sanat Agrawal, Deon J. de Beer and Yashwant Kumar Modi
Abstract
Purpose
This paper aims to convert surface data directly to a three-dimensional (3D) stereolithography (STL) part. The Geographic Information Systems (GIS) data available for a terrain describe only its surface and carry no information for a solid model. The data therefore need to be converted into a 3D solid model for making physical models by additive manufacturing (AM).
Design/methodology/approach
A methodology has been developed that makes the wall and base of the part and tessellates the part with triangles. A program has been written that outputs the part in STL file format. The elevation data are interpolated and any singularities present are removed. Extensive search techniques are used.
Findings
AM technologies are increasingly being used for terrain modeling. However, little work has been done on converting surface data into 3D solid models. The present work aids in this area.
Practical implications
The methodology removes data loss associated with intermediate file formats. Terrain models can be created in less time and less cost. Intricate geometries of terrain can be created with ease and great accuracy.
Social implications
The terrain models can be used for GIS education, educating the community for catchment management, conservation management, etc.
Originality/value
The work allows direct and automated conversion of GIS surface data into a 3D STL part. It removes intermediate steps and any data loss associated with intermediate file formats.
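The core triangulation-and-export step can be sketched in pure Python: tessellate each grid cell with two triangles and write ASCII STL facets. A printable terrain part would also need the walls and base the paper describes, and real elevation data in place of the toy grid below:

```python
import numpy as np

def heightmap_to_stl(z, path):
    """Tessellate a 2D elevation grid with triangles (two per cell) and
    write the top surface as an ASCII STL file. Normals are left as +z;
    most STL consumers recompute them from the vertex winding."""
    rows, cols = z.shape
    with open(path, "w") as f:
        f.write("solid terrain\n")
        for i in range(rows - 1):
            for j in range(cols - 1):
                a = (i, j, z[i, j])
                b = (i + 1, j, z[i + 1, j])
                c = (i, j + 1, z[i, j + 1])
                d = (i + 1, j + 1, z[i + 1, j + 1])
                for tri in ((a, b, c), (b, d, c)):
                    f.write("  facet normal 0 0 1\n    outer loop\n")
                    for v in tri:
                        f.write("      vertex %f %f %f\n" % v)
                    f.write("    endloop\n  endfacet\n")
        f.write("endsolid terrain\n")

z = np.array([[0.0, 1.0], [1.0, 2.0]])  # 2x2 elevation grid -> one cell, 2 triangles
heightmap_to_stl(z, "terrain.stl")
print(open("terrain.stl").read().count("facet normal"))  # 2
```

Writing STL directly like this is what removes the intermediate file formats, and their data loss, that the abstract mentions.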