Search results

1–10 of over 16,000
Book part
Publication date: 25 October 2023

Md Aminul Islam and Md Abu Sufian

Abstract

This research navigates the confluence of data analytics, machine learning, and artificial intelligence to revolutionize the management of urban services in smart cities. The study uses advanced tools to scrutinize key performance indicators integral to the functioning of smart cities, thereby enhancing leadership and decision-making strategies. Our work involves applying various machine learning models, such as Logistic Regression, Support Vector Machine, Decision Tree, Naive Bayes, and Artificial Neural Networks (ANN), to the data. Notably, the Support Vector Machine and Bernoulli Naive Bayes models exhibit robust performance, each achieving a precision score of 70%. In particular, the study underscores the employment of an ANN model on our existing dataset, optimized using the Adam optimizer. Although the model yields an overall accuracy of 61% and a precision score of 58%, implying correct predictions for the positive class 58% of the time, a comprehensive performance assessment using the Area Under the Receiver Operating Characteristic Curve (AUC-ROC) metric was necessary. This evaluation results in a score of 0.475 at a threshold of 0.5, indicating that there is room for model enhancement. These models and their performance metrics serve as a key cog in our data analytics pipeline, providing decision-makers and city leaders with actionable insights that can steer urban service management decisions. Through real-time data availability and intuitive visualization dashboards, these leaders can promptly comprehend the current state of their services, pinpoint areas requiring improvement, and make informed decisions to bolster these services. This research illuminates the potential for data analytics, machine learning, and AI to significantly upgrade urban service management in smart cities, fostering sustainable and livable communities. Moreover, our findings contribute valuable knowledge to other cities aiming to adopt similar strategies, thus aiding the continued development of smart cities globally.
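
As an illustration of the model-comparison step described in this abstract, the following is a minimal sketch assuming a generic tabular KPI dataset with a binary target; the features, labels and threshold are placeholders, not the authors' data or code.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.naive_bayes import BernoulliNB
from sklearn.metrics import precision_score, roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                          # placeholder KPI features
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)   # placeholder labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

for name, model in [("SVM", SVC(probability=True)), ("BernoulliNB", BernoulliNB())]:
    model.fit(X_tr, y_tr)
    proba = model.predict_proba(X_te)[:, 1]
    pred = (proba >= 0.5).astype(int)                  # threshold of 0.5, as in the abstract
    print(name,
          "precision:", round(precision_score(y_te, pred), 3),
          "AUC-ROC:", round(roc_auc_score(y_te, proba), 3))
```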

Details

Technology and Talent Strategies for Sustainable Smart Cities
Type: Book
ISBN: 978-1-83753-023-6

Article
Publication date: 31 October 2023

Yangze Liang and Zhao Xu

Abstract

Purpose

Monitoring of the quality of precast concrete (PC) components is crucial for the success of prefabricated construction projects. Currently, quality monitoring of PC components during the construction phase is predominantly done manually, resulting in low efficiency and hindering the progress of intelligent construction. This paper presents an intelligent inspection method for assessing the appearance quality of PC components, utilizing an enhanced you only look once (YOLO) model and multi-source data. The aim of this research is to achieve automated management of the appearance quality of precast components in the prefabricated construction process through digital means.

Design/methodology/approach

The paper begins by establishing an improved YOLO model and an image dataset for evaluating appearance quality. Through object detection in the images, a preliminary and efficient assessment of the precast components' appearance quality is achieved. Moreover, the detection results are mapped onto the point cloud for high-precision quality inspection. In the case of precast components with quality defects, precise quality inspection is conducted by combining the three-dimensional model data obtained from forward design conversion with the captured point cloud data through registration. Additionally, the paper proposes a framework for an automated inspection platform dedicated to assessing appearance quality in prefabricated buildings, encompassing the platform's hardware network.
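
To make the detection-to-point-cloud mapping concrete, here is a minimal sketch assuming a pinhole camera with known intrinsics and extrinsics; the box, calibration values and cloud are hypothetical, the boxes would come from the paper's enhanced YOLO model, and the registration step is not shown.

```python
import numpy as np

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # camera intrinsics
R, t = np.eye(3), np.zeros(3)                                # camera extrinsics
box = (150, 100, 300, 220)                                   # x1, y1, x2, y2 from the detector

points = np.random.rand(10000, 3) * [2, 2, 5] + [0, 0, 1]    # placeholder cloud (metres)

cam = (R @ points.T).T + t            # world frame -> camera frame
uv = (K @ cam.T).T                    # project through the intrinsics
uv = uv[:, :2] / uv[:, 2:3]           # perspective divide to pixel coordinates

x1, y1, x2, y2 = box
mask = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
        (uv[:, 1] >= y1) & (uv[:, 1] <= y2) & (cam[:, 2] > 0))
defect_points = points[mask]          # cloud subset for high-precision inspection
print(len(defect_points), "points fall inside the detected defect region")
```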

Findings

The improved YOLO model achieved a best mean average precision of 85.02% on the VOC2007 dataset, surpassing the performance of most similar models. After targeted training, the model exhibits excellent recognition capabilities for the four common appearance quality defects. When mapped onto the point cloud, the accuracy of quality inspection based on point cloud data and forward design is within 0.1 mm. The appearance quality inspection platform enables feedback and optimization of quality issues.

Originality/value

The proposed method in this study enables high-precision, visualized and automated detection of the appearance quality of PC components. It effectively meets the demand for quality inspection of precast components on construction sites of prefabricated buildings, providing technological support for the development of intelligent construction. The design of the appearance quality inspection platform's logic and framework facilitates the integration of the method, laying the foundation for efficient quality management in the future.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Open Access
Article
Publication date: 9 October 2023

Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng

Abstract

Purpose

Data protection requirements have increased heavily due to the rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current survey works on databases and data security only consider authorization and access control in a very general way and do not address most of today's sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and the current state of the art.

Design/methodology/approach

This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting with a review of survey works on authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements, which are generally applicable to any database model. This paper then discusses and compares current database models based on these requirements.

Findings

As no survey works so far consider requirements for authorization and access control across different database models, the authors define their own requirements. Furthermore, the authors discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models in comparison to the defined requirements.
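
As a toy illustration of the kind of fine-grained, attribute-based rule such requirements cover in a document-based model, consider the sketch below; the policy structure, roles and attribute names are invented for exposition and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Request:
    role: str
    department: str
    action: str

def allowed(req: Request, doc: dict) -> bool:
    # Rule 1: admins may perform any action.
    if req.role == "admin":
        return True
    # Rule 2: analysts may read documents of their own department only.
    if req.role == "analyst" and req.action == "read":
        return doc.get("department") == req.department
    return False

doc = {"id": 42, "department": "finance", "body": "..."}
print(allowed(Request("analyst", "finance", "read"), doc))  # True
print(allowed(Request("analyst", "hr", "read"), doc))       # False
```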

Originality/value

This paper focuses on authorization and access control for various database models, not concrete products. This paper identifies today’s sophisticated – yet general – requirements from the literature and compares them with research results and access control features of current products for the relational and NoSQL database models.

Details

International Journal of Web Information Systems, vol. 20 no. 1
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 22 May 2023

Edmund Baffoe-Twum, Eric Asa and Bright Awuku

Abstract

Background: Geostatistics focuses on spatial or spatiotemporal datasets. Geostatistics was initially developed to generate probability distribution predictions of ore grade in the mining industry; however, it has been successfully applied in diverse scientific disciplines. Its techniques include univariate and multivariate methods as well as simulations. Kriging geostatistical methods (simple, ordinary, and universal kriging) are not multivariate models in the usual statistical sense; notwithstanding, they utilize random function models that include unlimited random variables while modeling one attribute. The coKriging technique is a multivariate estimation method that simultaneously models two or more attributes defined over the same domain as a coregionalization.
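
For intuition, here is a compact ordinary-kriging sketch in covariance form; coKriging, as used in this study, extends the same linear system with cross-covariances between the primary attribute (AADT) and the secondary one (population). The coordinates, values and variogram parameters below are invented for illustration.

```python
import numpy as np

def cov(h, sill=1.0, rng_=10.0):
    return sill * np.exp(-h / rng_)            # exponential covariance model

pts = np.array([[0.0, 0.0], [5.0, 1.0], [2.0, 7.0], [8.0, 6.0]])
z = np.array([100.0, 140.0, 90.0, 120.0])      # e.g. observed AADT values
target = np.array([4.0, 4.0])                  # unsampled location

d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
n = len(pts)
A = np.ones((n + 1, n + 1)); A[:n, :n] = cov(d); A[n, n] = 0.0
b = np.ones(n + 1); b[:n] = cov(np.linalg.norm(pts - target, axis=1))

w = np.linalg.solve(A, b)                      # kriging weights + Lagrange multiplier
estimate = w[:n] @ z
print(f"kriged estimate at {target}: {estimate:.1f}")
```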

Objective: This study investigates the impact of populations on traffic volumes as a variable. The additional variable determines the strength or accuracy obtained when data integration is adopted. In addition, this is to help improve the estimation of annual average daily traffic (AADT).

Methods, procedures, process: The investigation adopts the coKriging (CK) technique with AADT data from 2009 to 2016 from Montana, Minnesota, and Washington as primary attributes and population as a controlling factor (second variable). CK is implemented for this study after a review of the literature and of completed work, and it is compared with other geostatistical methods.

Results, observations, and conclusions: The investigation employed two variables. The data integration methods employed in CK yield more reliable models because their strength is drawn from multiple variables. The cross-validation results of the model types explored with the CK technique successfully evaluate the interpolation technique's performance and help select optimal models for each state. The results from the Montana and Minnesota models accurately represent the states' traffic and population density. The Washington model had a few exceptions; however, the secondary attribute helped yield an accurate interpretation. Consequently, the impact of tourism, shopping, recreation centers, and possible transiting patterns throughout the state is worth exploring.

Details

Emerald Open Research, vol. 1 no. 5
Type: Research Article
ISSN: 2631-3952

Article
Publication date: 20 March 2024

Gang Yu, Zhiqiang Li, Ruochen Zeng, Yucong Jin, Min Hu and Vijayan Sugumaran

Abstract

Purpose

Accurate prediction of the structural condition of urban critical infrastructure is crucial for predictive maintenance. However, the existing prediction methods lack precision due to limitations in utilizing heterogeneous sensing data and domain knowledge, as well as insufficient generalizability resulting from limited data samples. This paper integrates implicit and qualitative expert knowledge into quantifiable values in tunnel condition assessment and proposes a tunnel structure prediction algorithm that augments a state-of-the-art attention-based long short-term memory (LSTM) model with expert rating knowledge. The algorithm achieves robust prediction results that support the reasonable allocation of maintenance resources.

Design/methodology/approach

Through formalizing domain experts' knowledge into a quantitative tunnel condition index (TCI) with the analytic hierarchy process (AHP), a fusion approach using sequence smoothing and sliding time window techniques is applied to the TCI and time-series sensing data. By incorporating both sensing data and expert ratings, an attention-based LSTM model is developed to improve prediction accuracy and reduce the uncertainty of structural influencing factors.
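
To make the AHP step concrete, the following is a minimal sketch of deriving priority weights from a pairwise comparison matrix and composing them into a condition index; the three factors, judgments and scores are hypothetical stand-ins, not the paper's expert ratings.

```python
import numpy as np

# Pairwise comparisons among, say, crack width, leakage and settlement.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # AHP priority weights (principal eigenvector)

# Consistency check: CI = (lambda_max - n) / (n - 1), compared to a random index.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
CR = CI / 0.58                        # 0.58 is Saaty's random index for n = 3
print("weights:", np.round(w, 3), "consistency ratio:", round(CR, 3))

# TCI as the weighted sum of normalized condition scores for each factor.
scores = np.array([0.8, 0.6, 0.9])    # hypothetical normalized ratings
print("TCI:", float(w @ scores))
```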

Findings

The empirical experiment in Dalian Road Tunnel in Shanghai, China showcases the effectiveness of the proposed method, which can comprehensively evaluate the tunnel structure condition and significantly improve prediction performance.

Originality/value

This study proposes a novel structure condition prediction algorithm that augments a state-of-the-art attention-based LSTM model with expert rating knowledge for robust prediction of structure condition of complex projects.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 15 March 2024

Florian Rupp, Benjamin Schnabel and Kai Eckert

Abstract

Purpose

The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.

Design/methodology/approach

In this extended paper, the authors build on three modeling use cases published in a previous paper: (1) provide provenance information, (2) maintain backwards compatibility for existing models, and (3) reduce the complexity of a data model. The authors present two scenarios where they implement the use of the meta-level to extend a data model with meta-information.

Findings

The authors present three abstract patterns for actively using the meta-level in data modeling. The authors showcase the implementation of the meta-level through two scenarios from our research project: (1) the authors introduce a workflow for triple annotation that uses the meta-level to enable users to comment on individual statements, such as for reporting errors or adding supplementary information. (2) The authors demonstrate how adding meta-information to a data model can accommodate highly specialized data while maintaining the simplicity of the underlying model.
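
To illustrate the triple-annotation workflow, the sketch below mimics RDF-star's statements-about-statements idea with plain Python structures; the resources and vocabulary are invented for exposition and are not the authors' data model.

```python
# In RDF-star, a triple can itself be the subject of another triple.
# Here a plain tuple stands in for a quoted triple.
statement = ("ex:painting42", "ex:creator", "ex:rembrandt")

# Meta-level: users attach comments or error reports to the statement itself.
annotations = {
    statement: [
        ("ex:comment", "Attribution disputed; see catalogue."),
        ("ex:reportedBy", "ex:user17"),
    ]
}

# In Turtle-star syntax the same idea reads roughly:
#   << ex:painting42 ex:creator ex:rembrandt >> ex:comment "Attribution disputed..." .
for prop, value in annotations[statement]:
    print(statement, prop, value)
```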

Practical implications

Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.

Originality/value

With RDF-star being a very new extension to RDF, to the best of the authors’ knowledge, they are among the first to relate it to other meta-level approaches and demonstrate its application in real-world scenarios.

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 21 December 2023

Majid Rahi, Ali Ebrahimnejad and Homayun Motameni

Abstract

Purpose

Given the current human need for agricultural produce such as rice, which requires water for growth, the optimal consumption of this valuable resource is important. Unfortunately, the traditional use of water by humans for agricultural purposes contradicts the concept of optimal consumption. Therefore, designing and implementing a mechanized irrigation system is of the highest importance. Such a system includes hardware equipment, such as liquid altimeter sensors, valves and pumps, for which failure is an integral phenomenon that causes faults in the system. Naturally, these faults occur at probable time intervals, and a probability function with an exponential distribution is used to simulate these intervals. Thus, before the implementation of such a high-cost system, its evaluation during the design phase is essential.

Design/methodology/approach

The proposed approach includes two main phases: offline and online. The offline phase includes the simulation of the studied system (i.e. the irrigation system of paddy fields) and the acquisition of a data set for training machine learning algorithms, such as decision trees, to detect, locate (classify) and evaluate faults. In the online phase, C5.0 decision trees trained in the offline phase are used on a stream of data generated by the system.
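
The following is a minimal sketch of this offline/online split under stated assumptions: simulated sensor readings with faults injected at exponentially distributed intervals, and a scikit-learn decision tree standing in for the C5.0 trees used in the paper. All signals and parameters are invented.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
n = 2000
level = rng.normal(50, 5, n)                   # liquid-altimeter reading (cm)
valve = rng.normal(1.0, 0.1, n)                # valve state signal
pump = rng.normal(0.8, 0.1, n)                 # pump load signal

# Inject faults whose inter-arrival times follow an exponential distribution,
# as in the abstract; each fault corrupts the altimeter for 10 ticks.
y = np.zeros(n, dtype=int)
fault_times = np.cumsum(rng.exponential(scale=200, size=20)).astype(int)
for ft in fault_times[fault_times < n - 10]:
    y[ft:ft + 10] = 1
    level[ft:ft + 10] += rng.normal(20, 2, 10)

X = np.column_stack([level, valve, pump])
clf = DecisionTreeClassifier(max_depth=5).fit(X[:1500], y[:1500])  # offline phase

# Online phase: apply the trained tree to the remaining stream of readings.
print("held-out accuracy:", round(clf.score(X[1500:], y[1500:]), 3))
```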

Findings

The proposed approach is a comprehensive online component-oriented method, which is a combination of supervised machine learning methods to investigate system faults. Each of these methods is considered a component determined by the dimensions and complexity of the case study (to discover, classify and evaluate fault tolerance). These components are placed together in the form of a process framework so that the appropriate method for each component is obtained based on comparison with other machine learning methods. As a result, depending on the conditions under study, the most efficient method is selected in the components. Before the system implementation phase, its reliability is checked by evaluating the predicted faults (in the system design phase). Therefore, this approach avoids the construction of a high-risk system. Compared to existing methods, the proposed approach is more comprehensive and has greater flexibility.

Research limitations/implications

By expanding the dimensions of the problem, the model verification space grows exponentially using automata.

Originality/value

Unlike the existing methods that only examine one or two aspects of fault analysis such as fault detection, classification and fault-tolerance evaluation, this paper proposes a comprehensive process-oriented approach that investigates all three aspects of fault analysis concurrently.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Open Access
Article
Publication date: 22 November 2023

En-Ze Rui, Guang-Zhi Zeng, Yi-Qing Ni, Zheng-Wei Chen and Shuo Hao

Abstract

Purpose

Current methods for flow field reconstruction mainly rely on data-driven algorithms which require an immense amount of experimental or field-measured data. Physics-informed neural network (PINN), which was proposed to encode physical laws into neural networks, is a less data-demanding approach for flow field reconstruction. However, when the fluid physics is complex, it is tricky to obtain accurate solutions under the PINN framework. This study aims to propose a physics-based data-driven approach for time-averaged flow field reconstruction which can overcome the hurdles of the above methods.

Design/methodology/approach

A multifidelity strategy leveraging PINN and a nonlinear information fusion (NIF) algorithm is proposed. Plentiful low-fidelity data are generated from the predictions of a PINN which is constructed purely using Reynolds-averaged Navier–Stokes equations, while sparse high-fidelity data are obtained by field or experimental measurements. The NIF algorithm is performed to elicit a multifidelity model, which blends the nonlinear cross-correlation information between low- and high-fidelity data.
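
To convey the multifidelity idea, here is a toy one-dimensional sketch under stated assumptions: analytic functions stand in for the PINN predictions and the sparse measurements, and a small neural network learning the map from (x, y_low) to y_high stands in for the NIF algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def y_low(x):                                  # plentiful low-fidelity surrogate
    return np.sin(2 * np.pi * x)

def y_high(x):                                 # sparse high-fidelity "measurements"
    return np.sin(2 * np.pi * x) * x + 0.2 * x

x_hi = np.linspace(0.0, 1.0, 12)               # only a few measurement points
F = np.column_stack([x_hi, y_low(x_hi)])       # fusion inputs: (x, y_low(x))
fusion = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                      random_state=0).fit(F, y_high(x_hi))

x = np.linspace(0.0, 1.0, 200)                 # reconstruct the whole domain
y_rec = fusion.predict(np.column_stack([x, y_low(x)]))
print("max abs reconstruction error:", float(np.abs(y_rec - y_high(x)).max()))
```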

Findings

Two experimental cases are used to verify the capability and efficacy of the proposed strategy through comparison with other widely used strategies. It is revealed that the missing flow information within the whole computational domain can be favorably recovered by the proposed multifidelity strategy with use of sparse measurement/experimental data. The elicited multifidelity model inherits the underlying physics inherent in low-fidelity PINN predictions and rectifies the low-fidelity predictions over the whole computational domain. The proposed strategy is much superior to other contrastive strategies in terms of the accuracy of reconstruction.

Originality/value

In this study, a physics-informed data-driven strategy for time-averaged flow field reconstruction is proposed which extends the applicability of the PINN framework. In addition, embedding physical laws when training the multifidelity model leads to less data demand for model development compared to purely data-driven methods for flow field reconstruction.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 1
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 8 September 2023

Xiancheng Ou, Yuting Chen, Siwei Zhou and Jiandong Shi

Abstract

Purpose

With the continuous growth of online education, the quality issue of online educational videos has become increasingly prominent, leaving students in online learning facing the dilemma of knowledge confusion. The existing mechanisms for controlling the quality of online educational videos suffer from subjectivity and low timeliness. An important aspect of monitoring the quality of online educational videos is the analysis of metadata features and log data. With the development of artificial intelligence technology, deep learning techniques with strong predictive capabilities can provide new methods for predicting the quality of online educational videos, effectively overcoming the shortcomings of existing methods. The purpose of this study is to find a deep neural network that can model the dynamic and static features of the video itself, as well as the relationships between videos, to achieve dynamic monitoring of the quality of online educational videos.

Design/methodology/approach

The quality of a video cannot be directly measured. According to previous research, the authors use engagement to represent the level of video quality. Engagement is the normalized participation time, which represents the degree to which learners tend to participate in the video. Based on existing public data sets, this study designs an online educational video engagement prediction model based on dynamic graph neural networks (DGNNs). The model is trained based on the video’s static features and dynamic features generated after its release by constructing dynamic graph data. The model includes a spatiotemporal feature extraction layer composed of DGNNs, which can effectively extract the time and space features contained in the video's dynamic graph data. The trained model is used to predict the engagement level of learners with the video on day T after its release, thereby achieving dynamic monitoring of video quality.
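
The following is a minimal sketch of the dynamic-graph construction step, assuming each video is represented by a feature vector per day and edges connect videos whose cosine similarity exceeds a threshold; the features and the 0.5 cut-off are illustrative only, not the paper's settings.

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(2)
num_videos, feat_dim, days = 6, 4, 3
threshold = 0.5                                       # illustrative cut-off

snapshots = []
for t in range(days):
    feats = rng.normal(size=(num_videos, feat_dim))   # day-t video features
    sim = cosine_similarity(feats)
    adj = (sim > threshold).astype(int)
    np.fill_diagonal(adj, 0)                          # no self-loops
    snapshots.append((feats, adj))                    # one graph snapshot per day

# A DGNN would consume this sequence of (features, adjacency) snapshots to
# predict day-T engagement; here we only report how the edge set evolves.
for t, (_, adj) in enumerate(snapshots):
    print(f"day {t}: {adj.sum() // 2} edges")
```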

Findings

Models with spatiotemporal feature extraction layers consisting of four types of DGNNs can accurately predict the engagement level of online educational videos. Of these, the model using the temporal graph convolutional neural network has the smallest prediction error. In dynamic graph construction, using cosine similarity and Euclidean distance functions with reasonable threshold settings can construct a structurally appropriate dynamic graph. In the training of this model, the amount of historical time series data used will affect the model’s predictive performance. The more historical time series data used, the smaller the prediction error of the trained model.

Research limitations/implications

A limitation of this study is that not all video data in the data set was used to construct the dynamic graph due to memory constraints. In addition, the DGNNs used in the spatiotemporal feature extraction layer are relatively conventional.

Originality/value

In this study, the authors propose an online educational video engagement prediction model based on DGNNs, which can achieve the dynamic monitoring of video quality. The model can be applied as part of a video quality monitoring mechanism for various online educational resource platforms.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 12 October 2023

Xiaoli Su, Lijun Zeng, Bo Shao and Binlong Lin

Abstract

Purpose

The production planning problem with fine-grained information has hardly been considered in practice. The purpose of this study is to investigate the data-driven production planning problem when a manufacturer can observe historical demand data with high-dimensional mixed-frequency features, which provides fine-grained information.

Design/methodology/approach

In this study, a two-step data-driven optimization model is proposed to examine production planning with the exploitation of mixed-frequency demand data. First, an Unrestricted MIxed DAta Sampling approach that imposes a Group LASSO Penalty (GP-U-MIDAS) is proposed. The use of high-frequency massive demand information is analytically justified to significantly improve predictive ability without sacrificing goodness-of-fit. Then, integrated with the GP-U-MIDAS approach, the authors develop a multiperiod production planning model with a rolling cycle. The performance is evaluated by forecasting outcomes, production planning decisions, service levels and total cost.
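
To sketch the unrestricted-MIDAS idea, the example below unrolls a high-frequency (weekly) series into lag columns aligned with a low-frequency (monthly) demand target; a plain Lasso stands in for the group LASSO penalty of GP-U-MIDAS, which scikit-learn does not provide, and all data are synthetic.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
months, weeks_per_month = 60, 4
weekly = rng.normal(size=months * weeks_per_month)    # one high-frequency series

# Unrestricted MIDAS: each month's regressors are that month's weekly lags.
X = weekly.reshape(months, weeks_per_month)
demand = X @ np.array([0.5, 0.3, 0.0, 0.0]) + rng.normal(0, 0.1, months)

model = Lasso(alpha=0.05).fit(X[:48], demand[:48])    # rolling-style train split
print("selected lag weights:", np.round(model.coef_, 2))
print("out-of-sample R^2:", round(model.score(X[48:], demand[48:]), 3))
```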

Findings

Numerical results show that the key variables influencing market demand can be completely recognized through the GP-U-MIDAS approach; in particular, the selection accuracy of crucial features exceeds 92%. Furthermore, the proposed approach performs well regarding both in-sample fitting and out-of-sample forecasting throughout most of the horizons. Taking the total cost and service level obtained under the actual demand as the benchmark, the mean deviations of the service level and total cost are reduced to less than 2.4%. This indicates that when faced with fluctuating demand, the manufacturer can adopt the proposed model to effectively manage total costs and experience an enhanced service level.

Originality/value

Compared with previous studies, the authors develop a two-step data-driven optimization model by directly incorporating a potentially large number of features; the model can help manufacturers effectively identify the key features of market demand, improve the accuracy of demand estimations and make informed production decisions. Moreover, demand forecasting and optimal production decisions behave robustly with shifting demand and different cost structures, which can provide manufacturers an excellent method for solving production planning problems under demand uncertainty.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X
