Search results

1 – 10 of over 20000
Article
Publication date: 23 May 2022

Ting Wang, Xiaoling Shao and Xue Yan

In intelligent scheduling, parallel batch processing can reasonably allocate production resources and reduce the production cost per unit product. Hence, the research on a…

Abstract

Purpose

In intelligent scheduling, parallel batch processing can reasonably allocate production resources and reduce the production cost per unit product. Hence, the research on a parallel batch scheduling problem (PBSP) with uncertain job sizes is of great significance for realizing flexible production and the mass customization of personalized products.

Design/methodology/approach

The authors propose a robust formulation in which the job size is defined by a budget-constrained support. To obtain a robust solution of the robust PBSP, the authors propose an exact algorithm based on a branch-and-price framework, where the pricing subproblem reduces to a robust shortest path problem with resource constraints. The robust subproblem is transformed into a deterministic mixed-integer program by duality. From this program, a series of deterministic shortest path problems with resource constraints is derived, for which the authors design an efficient label-setting algorithm with a strong dominance rule.
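
The abstract does not spell out the pricing routine; purely as a hedged illustration, a generic label-setting algorithm for a resource-constrained shortest path with dominance pruning might look like the sketch below. The graph encoding, the single-resource capacity model and all names are assumptions for this example, not the authors' formulation (which additionally handles robust costs).

```python
import heapq

def label_setting(graph, source, sink, capacity):
    """Cheapest source->sink path whose total resource use stays
    within `capacity` (non-negative edge costs assumed).

    graph: {node: [(successor, cost, resource), ...]}
    """
    # A label is (cost, resource, node, path); the heap pops cheapest first.
    heap = [(0, 0, source, (source,))]
    frontier = {source: [(0, 0)]}   # non-dominated (cost, resource) per node

    def dominated(node, cost, res):
        # A label is dominated if some known label at the same node is
        # no worse in both cost and resource consumption.
        return any(c <= cost and r <= res for c, r in frontier.get(node, []))

    while heap:
        cost, res, node, path = heapq.heappop(heap)
        if node == sink:
            return cost, path        # first sink label popped is optimal
        for succ, c, r in graph.get(node, []):
            ncost, nres = cost + c, res + r
            if nres > capacity or dominated(succ, ncost, nres):
                continue             # prune by capacity or by dominance
            frontier.setdefault(succ, []).append((ncost, nres))
            heapq.heappush(heap, (ncost, nres, succ, path + (succ,)))
    return None                      # no feasible path

# Tiny illustrative instance: s->a->t is cheaper but resource-infeasible.
graph = {"s": [("a", 1, 2), ("b", 4, 1)], "a": [("t", 1, 2)], "b": [("t", 1, 1)]}
print(label_setting(graph, "s", "t", capacity=3))   # (5, ('s', 'b', 't'))
```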

Findings

The authors test the performance of the proposed algorithm on extensions of benchmark instances from the literature and compare the infeasibility rates of robust and deterministic solutions in simulated scenarios. The results show the efficiency of the algorithm and the importance of incorporating uncertainty into the problem.

Originality/value

This work is the first to study the PBSP with uncertain job sizes. To solve this problem, the authors design an efficient exact algorithm based on Dantzig–Wolfe decomposition. This not only enriches intelligent manufacturing theory related to parallel batch scheduling but also offers practical guidance for enterprises facing similar problems.

Details

Industrial Management & Data Systems, vol. 122 no. 10
Type: Research Article
ISSN: 0263-5577

Keywords

Open Access
Article
Publication date: 4 August 2020

Kanak Meena, Devendra K. Tayal, Oscar Castillo and Amita Jain

The scalability of similarity joins is threatened by the unexpected data characteristic of data skewness. This is a pervasive problem in scientific data. Due to skewness, the…

Abstract

The scalability of similarity joins is threatened by the unexpected data characteristic of data skewness. This is a pervasive problem in scientific data. Due to skewness, the distribution of attributes becomes uneven, which can cause a severe load imbalance problem, and the skew is amplified when database join operations are applied to such datasets. The join algorithms developed to date are highly skew-sensitive. This paper presents a new approach for handling data skewness in a character-based string similarity join using the MapReduce framework. No prior work handles data skewness in character-based string similarity joins, although work exists for set-based string similarity joins. The proposed work is divided into three stages, and every stage is further divided into mapper and reducer phases dedicated to a specific task. The first stage finds the lengths of the strings in the dataset. For valid candidate-pair generation, the MR-Pass Join framework is adopted in the second stage. In the third stage, which is further divided into four MapReduce phases, MRFA concepts are incorporated for the string similarity join, named "MRFA-SSJ" (MapReduce Frequency Adaptive – String Similarity Join). Hence, MRFA-SSJ is proposed to handle skewness in the string similarity join. The experiments were carried out on three datasets, namely DBLP, a query log and a real dataset of IP addresses and cookies, deployed on the Hadoop framework. The proposed algorithm was compared with three known algorithms; all of them fail when data is highly skewed, whereas the proposed method handles highly skewed data without any problem. A 15-node cluster was used in the experiments, and the Zipf distribution law was followed for the analysis of the skewness factor. A comparison among existing and proposed techniques is also presented: existing techniques survive up to a Zipf factor of 0.5, whereas the proposed algorithm survives up to a Zipf factor of 1. Hence the proposed algorithm is skew-insensitive and ensures scalability with a reasonable query-processing time for string similarity database joins. It also ensures the even distribution of attributes.
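
As a rough illustration only, the staged map/reduce structure can be mimicked with plain Python generators. The sketch below simulates the first stage (keying strings by length) and a simplified candidate-pair reducer that exploits the length filter for edit distance; it is not the paper's MR-Pass Join or MRFA-SSJ logic, and all function names are invented for the example.

```python
from itertools import groupby
from operator import itemgetter

def map_length(record_id, s):
    # Stage-1 mapper: key each string by its length
    yield len(s), (record_id, s)

def reduce_candidates(values):
    # Simplified stage-2 reducer: strings whose lengths differ by more
    # than the edit-distance threshold can never match, so grouping by
    # length prunes the candidate space. (A full implementation would
    # also pair strings from neighbouring length groups.)
    items = list(values)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            yield items[i], items[j]

def run(dataset):
    # Simulate MapReduce's shuffle phase with an in-memory sort
    mapped = [kv for rid, s in dataset for kv in map_length(rid, s)]
    mapped.sort(key=itemgetter(0))
    for _, group in groupby(mapped, key=itemgetter(0)):
        yield from reduce_candidates(v for _, v in group)

print(list(run([(1, "data"), (2, "date"), (3, "skew")])))
```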

Details

Applied Computing and Informatics, vol. 18 no. 1/2
Type: Research Article
ISSN: 2634-1964

Keywords

Article
Publication date: 1 October 2006

T. Yuge, K. Tagami and S. Yanagi

Calculating the exact top event probability of fault trees is an important analysis in quantitative risk assessments. However, it is a difficult problem for trees with complex…

Abstract

Purpose

Calculating the exact top event probability of fault trees is an important analysis in quantitative risk assessments. However, it is a difficult problem for trees with a complex structure. The paper therefore aims to provide an efficient method for calculating the exact top event probability of a fault tree with many repeated events when the minimal cut sets of the tree model are given.

Design/methodology/approach

The method is based on the inclusion-exclusion method. In general, the inclusion-exclusion method runs into computational difficulties for large-scale fault trees. The computation time is reduced by enumerating only non-canceling terms.
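
For context, a direct inclusion-exclusion computation over minimal cut sets looks like the sketch below, assuming independent basic events; it enumerates all 2^n - 1 terms, which is exactly the blow-up the paper avoids by enumerating only non-canceling terms. The encoding is illustrative, not the authors' implementation.

```python
from itertools import combinations
from math import prod

def top_event_probability(cut_sets, p):
    """Exact top event probability by brute-force inclusion-exclusion.

    cut_sets: list of minimal cut sets, each a set of basic-event names
    p: {event: failure probability}, basic events assumed independent
    """
    total = 0.0
    for k in range(1, len(cut_sets) + 1):
        sign = 1.0 if k % 2 else -1.0       # alternate +/- with subset size
        for combo in combinations(cut_sets, k):
            union = set().union(*combo)     # repeated events counted once
            total += sign * prod(p[e] for e in union)
    return total

# Example with the repeated event B: P(AB) + P(BC) - P(ABC) = 0.074
print(top_event_probability([{"A", "B"}, {"B", "C"}],
                            {"A": 0.1, "B": 0.2, "C": 0.3}))
```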

Findings

The method calculates the probability more quickly than the conventional method. The gain increases as the number of repeated events increases, that is, as the tree structure becomes more complex. The method can also be applied to obtain lower and upper bounds on the top event probability easily.

Originality/value

The paper expresses the top event probability using only non-canceling terms. This is the first such application in fault tree analysis.

Details

Journal of Quality in Maintenance Engineering, vol. 12 no. 4
Type: Research Article
ISSN: 1355-2511

Keywords

Article
Publication date: 1 March 1996

Chakib Kara‐Zaitri

Presents a new qualitative fault tree evaluation algorithm based on bit manipulation techniques for the identification of the largest independent sub‐trees and the subsequent…

Abstract

Presents a new qualitative fault tree evaluation algorithm based on bit manipulation techniques for the identification of the largest independent sub‐trees and the subsequent determination of all minimal cut sets of large and complex fault trees. The methodology developed is validated by direct application to a complex fault tree taken from the literature. Results obtained are compared with those available in the literature. Shows that the use of the algorithm (FTABMT) developed results in significant savings in both computer time and storage requirements.
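
A bitmask representation makes the subset tests behind minimal cut set reduction cheap. The sketch below shows only that minimality check, not FTABMT's identification of independent sub-trees; the encoding and names are assumptions for illustration.

```python
def minimal_cut_sets(cut_sets, n_events):
    """Keep only minimal cut sets using bitmask subset tests.

    cut_sets: iterable of sets of event indices (0..n_events-1)
    """
    # One integer per cut set; bit e is set if event e is in the cut set.
    masks = sorted({sum(1 << e for e in cs) for cs in cut_sets},
                   key=lambda m: bin(m).count("1"))
    minimal = []
    for m in masks:
        # m is non-minimal if some already-kept mask is a subset of it;
        # `kept & m == kept` is a single-instruction subset test.
        if not any(kept & m == kept for kept in minimal):
            minimal.append(m)
    return [{e for e in range(n_events) if mask >> e & 1}
            for mask in minimal]

print(minimal_cut_sets([{0, 1}, {0, 1, 2}, {2}], n_events=3))
# [{2}, {0, 1}] -- {0, 1, 2} is dropped as a superset of {2}
```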

Details

International Journal of Quality & Reliability Management, vol. 13 no. 2
Type: Research Article
ISSN: 0265-671X

Keywords

Article
Publication date: 14 March 2019

Hailiang Su, Fengchong Lan, Yuyan He and Jiqing Chen

The meta-model method has been widely used in structural reliability optimization design. The main limitation of this method is that it is difficult to quantify the error caused by…

Abstract

Purpose

The meta-model method has been widely used in structural reliability optimization design. The main limitation of this method is that it is difficult to quantify the error caused by the meta-model approximation, which leads to inaccurate reliability evaluations in the optimization results. Exploiting the local efficiency of the proxy model, this paper aims to propose a locally effective constrained response surface method (LEC-RSM) based on a meta-model.

Design/methodology/approach

The operating mechanism of LEC-RSM is to calculate an index of local relative importance based on numerical theory and capture the most effective area in the entire design space, as well as to select important analysis domains for sample changes. To improve the efficiency of the algorithm, the constrained efficient set algorithm (ESA) is introduced, in which sample-point validity is identified based on the reliability information obtained in the previous cycle, and boundary sampling points that violate the constraint conditions are then ignored or eliminated.
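
The paper's local-importance index and constrained ESA are not reproduced here. As a generic building block only, a quadratic response surface surrogate fitted by least squares might look like the following sketch, where the basis construction and all names are assumptions for illustration.

```python
import numpy as np

def fit_quadratic_rs(X, y):
    """Fit a full quadratic response surface g(x) ~ y by least squares.

    X: (n, d) array of sample points, y: (n,) limit-state evaluations
    Returns a callable surrogate g.
    """
    def basis(x):
        # Columns: constant, linear terms, and all quadratic cross terms
        x = np.atleast_2d(x)
        cross = [x[:, i] * x[:, j]
                 for i in range(x.shape[1]) for j in range(i, x.shape[1])]
        return np.column_stack([np.ones(len(x)), x, *cross])

    coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)
    return lambda x: basis(x) @ coef

# Example: surrogate for g(x) = x0^2 + x1 from a handful of samples
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
g = fit_quadratic_rs(X, X[:, 0] ** 2 + X[:, 1])
print(g([[0.5, 0.2]]))   # close to 0.45
```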

Findings

The computational power of the proposed method is demonstrated by solving two mathematical problems and an actual engineering optimization problem involving a car collision. LEC-RSM reaches the optimal performance more easily, with fewer function evaluations and fewer algorithm iterations.

Originality/value

This paper proposes a new RSM technique based on a proxy model to complete the reliability design. The originality of this paper lies in increasing the sampling points by identifying the local importance of the analysis domain and in introducing the constrained ESA to improve the efficiency of the algorithm.

Article
Publication date: 13 October 2021

Liang Su, Zhenpo Wang and Chao Chen

The purpose of this study is to propose a torque vectoring control system for improving the handling stability of distributed drive electric buses under complicated driving…

Abstract

Purpose

The purpose of this study is to propose a torque vectoring control system for improving the handling stability of distributed drive electric buses under complicated driving conditions. The energy crisis and environmental pollution are two key pressing issues facing mankind, and pure electric buses are recognized as an effective way to address them. Distributed drive electric buses (DDEBs), an emerging form of pure electric bus, are attracting intense research interest around the world. Compared with centrally driven electric buses, a DDEB can control the driving and braking torque of each wheel individually and accurately, which can significantly enhance handling stability. Therefore, a torque vectoring control (TVC) system is proposed to allocate the driving torque among the four wheels reasonably and thereby improve the handling stability of DDEBs.

Design/methodology/approach

The proposed TVC system is designed as a hierarchical control. The upper layer is a direct yaw moment controller based on feedforward and feedback control. The feedforward control algorithm calculates the desired steady-state yaw moment from the steering wheel angle and the longitudinal velocity. The feedback control is an anti-windup sliding mode control algorithm that takes the error between the actual and reference yaw rates as its control variable. The lower layer is a torque allocation controller, comprising an economical torque allocation control algorithm and an optimal torque allocation control algorithm.
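
As a rough sketch of such an upper layer, a steady-state bicycle-model reference plus a saturated sliding-mode correction could be written as below. Every parameter and gain here is an illustrative assumption; the paper's feedforward map and anti-windup design are not reproduced.

```python
def desired_yaw_rate(delta, v, wheelbase=6.0, understeer_k=0.002):
    # Steady-state bicycle-model reference yaw rate (illustrative params):
    # steering angle delta [rad], longitudinal velocity v [m/s]
    return v * delta / (wheelbase * (1.0 + understeer_k * v ** 2))

def sat(x, limit=1.0):
    # Saturation replaces the discontinuous sign() to reduce chattering
    return max(-limit, min(limit, x))

def yaw_moment(delta, v, r_actual, k_ff=5000.0, k_sm=8000.0, phi=0.05):
    r_ref = desired_yaw_rate(delta, v)
    m_ff = k_ff * r_ref                  # feedforward from steering/velocity
    s = r_actual - r_ref                 # sliding surface: yaw-rate error
    m_fb = -k_sm * sat(s / phi)          # saturated switching feedback
    return m_ff + m_fb                   # corrective yaw moment [N m]

# Example: understeering bus (actual yaw rate below reference)
print(yaw_moment(delta=0.05, v=15.0, r_actual=0.08))
```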

Findings

A steady-state circular test was carried out to demonstrate the effectiveness of the proposed TVC system. In field experiments comparing the tested bus with and without the TVC system, the slip angle with TVC is much smaller, and the actual yaw rate with TVC tracks the reference yaw rate closely. The experimental results demonstrate that the TVC system performs remarkably well in practice and effectively improves handling stability.

Originality/value

In view of the large load transfer and the strong coupling among the tires, the suspension and the steering system during coach cornering, the vehicle reference steering characteristics are defined considering vehicle nonlinear characteristics, and the feedforward term of torque vectoring control at different steering angles and speeds is designed. Meanwhile, to improve the robustness of the controller, an anti-integral-saturation sliding mode variable structure control algorithm is proposed as the feedback term of the torque vectoring control.

Article
Publication date: 1 March 1997

H. Daiguji, X. Yuan and S. Yamamoto

Proposes a measure to stabilize the fourth(fifth)‐order high resolution schemes for the compressible Navier‐Stokes equations. Solves the N‐S equations of the volume fluxes and the…

Abstract

Proposes a measure to stabilize the fourth(fifth)‐order high resolution schemes for the compressible Navier‐Stokes equations. Solves the N‐S equations of the volume fluxes and the low‐Reynolds number k‐ε turbulence model in general curvilinear co‐ordinates by delta‐form implicit finite difference methods. Notes that, in order to simulate flows containing weak discontinuities accurately, it is very effective to use higher‐order TVD upstream‐difference schemes on the right‐hand side of the equations of these methods; however, the higher‐order correction terms of such schemes in general amplify numerical disturbances. Therefore, restricts these terms here by applying minmod functions to the curvatures so as to suppress the occurrence of new inflection points. Computes an unsteady transonic turbine cascade flow in which vortex streets occur from the trailing edge of the blades and interact with shock waves. Finds that the stabilization measure improves not only the computational results but also the convergence for such a complicated flow problem.
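
The limiting idea can be illustrated in one dimension: apply minmod to neighbouring second differences so that the higher-order correction cannot introduce a new inflection point. This scalar sketch is a simplification of the scheme described above, with invented names.

```python
def minmod(a, b):
    """Minmod limiter: the smaller-magnitude argument when the signs
    agree, zero otherwise (so opposing slopes contribute nothing)."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_curvatures(u):
    """Second differences of a 1-D profile, each limited against its
    neighbour so the higher-order correction stays non-oscillatory."""
    curv = [u[i - 1] - 2.0 * u[i] + u[i + 1] for i in range(1, len(u) - 1)]
    return [minmod(curv[i], curv[i + 1]) for i in range(len(curv) - 1)]

u = [0.0, 0.1, 0.9, 1.0, 1.0]      # smeared step profile
print(limited_curvatures(u))        # opposing curvatures are zeroed out
```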

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 7 no. 2/3
Type: Research Article
ISSN: 0961-5539

Keywords

Details

Developing an Effective Model for Detecting Trade-based Market Manipulation
Type: Book
ISBN: 978-1-80117-397-1

Article
Publication date: 12 June 2020

Sandeepkumar Hegde and Monica R. Mundada

According to the World Health Organization, by 2025, chronic diseases are expected to account for 73% of all deaths and 60% of the global burden of…

Abstract

Purpose

According to the World Health Organization, by 2025, chronic diseases are expected to account for 73% of all deaths and 60% of the global burden of disease. These diseases persist for a long duration, are almost incurable and can only be controlled. Cardiovascular disease, chronic kidney disease (CKD) and diabetes mellitus are considered the three major chronic diseases that increase risk among adults as they get older, with CKD regarded as the most significant among them. Overall, 10% of the world's population is affected by CKD, and this figure is likely to double by 2030. The paper aims to propose a novel feature selection approach, combined with a machine-learning algorithm, that can predict chronic disease early and with high accuracy. Hence, a novel adaptive probabilistic divergence-based feature selection (APDFS) algorithm is proposed in combination with a hyper-parameterized logistic regression model (HLRM) for the early prediction of chronic disease.

Design/methodology/approach

A novel APDFS feature selection algorithm is proposed that explicitly handles the features associated with the class label through relevance and redundancy analysis. The algorithm applies statistical divergence-based information theory to identify the relationships between distant features of the chronic disease data sets. The data sets used in the experiments were obtained from several medical labs and hospitals in India. The HLRM is used as the machine-learning classifier. The predictive ability of the framework is compared with various algorithms and across various chronic disease data sets. The experimental results illustrate that the proposed framework is efficient and achieves competitive results compared with existing work in most cases.
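
The paper defines APDFS and the HLRM precisely; as a stand-in only, a divergence-based relevance score feeding a hyper-parameter-searched logistic regression could be sketched as follows. The symmetric KL score, the histogram binning and the parameter grid are all assumptions for illustration, not the APDFS or HLRM definitions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

def kl_relevance(x, y, bins=10):
    """Score one feature by the symmetric KL divergence between its
    class-conditional histograms (binary labels y in {0, 1})."""
    edges = np.histogram_bin_edges(x, bins=bins)
    eps = 1e-9                                   # avoid log(0)
    p = np.histogram(x[y == 0], bins=edges)[0] + eps
    q = np.histogram(x[y == 1], bins=edges)[0] + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def select_and_fit(X, y, top_k=8):
    # Keep the top_k most class-discriminative features, then tune a
    # regularized logistic regression over an illustrative C grid.
    scores = [kl_relevance(X[:, j], y) for j in range(X.shape[1])]
    keep = np.argsort(scores)[-top_k:]
    grid = GridSearchCV(LogisticRegression(max_iter=1000),
                        {"C": [0.01, 0.1, 1, 10]})
    return grid.fit(X[:, keep], y), keep
```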

Findings

The performance of the proposed framework is validated using metrics such as recall, precision, F1 measure and ROC. The predictive performance is analyzed on data sets for various chronic diseases such as CKD, diabetes and heart disease. The diagnostic ability of the proposed approach is demonstrated by comparing its results with existing algorithms. The experimental figures illustrate that the proposed framework performs exceptionally well in the early prediction of CKD, with an accuracy of 91.6%.

Originality/value

The capability of machine-learning algorithms depends on feature selection (FS) algorithms to identify the relevant traits in the data set, which affects the predictive result. FS is the process of choosing the relevant features from the data set by removing redundant and irrelevant ones. Although many approaches have already been proposed toward this objective, they are computationally complex because they follow a one-step scheme in selecting the features. In this paper, a novel APDFS algorithm is proposed that explicitly handles the features associated with the class label through relevance and redundancy analysis. The proposed algorithm handles feature selection in two separate indices; hence, the computational complexity of the algorithm is reduced to O(nk+1). The algorithm applies statistical divergence-based information theory to identify the relationships between distant features of the chronic disease data sets. The data sets used in the experiments were obtained from several medical labs and hospitals of Karkala taluk, India. The HLRM is used as the machine-learning classifier. The predictive ability of the framework is compared with various algorithms and across various chronic disease data sets. The experimental results illustrate that the proposed framework is efficient and achieves competitive results compared with existing work in most cases.

Details

International Journal of Pervasive Computing and Communications, vol. 17 no. 1
Type: Research Article
ISSN: 1742-7371

Keywords

Article
Publication date: 12 May 2023

Chang-Sup Park

This paper studies keyword search over graph-structured data used in various fields such as the semantic web, linked open data and social networks. This study aims to propose an…

Abstract

Purpose

This paper studies keyword search over graph-structured data used in various fields such as the semantic web, linked open data and social networks. This study aims to propose an efficient keyword search algorithm on graph data that finds the top-k answers that are most relevant to the query and have diverse content nodes for the input keywords.

Design/methodology/approach

Based on an aggregative measure of the diversity of an answer set, this study proposes an approach to searching for the top-k diverse answers to a query on graph data, which finds a set of most relevant answer trees whose average dissimilarity is no lower than a given threshold. This study defines a diversity constraint that must be satisfied for a subset of answer trees to be included in the solution. Then, an enumeration algorithm and a heuristic search algorithm are proposed to find an optimal solution efficiently based on the diversity constraint and an A* heuristic. This study also provides strategies for improving the performance of the heuristic search method.
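
For intuition only, a greedy relaxation of the average-dissimilarity constraint might look like the sketch below. The paper's enumeration and A*-guided algorithms are exact, whereas this greedy pass is merely illustrative, and the names are invented.

```python
def diverse_top_k(answers, dissim, k, tau):
    """Greedy sketch: pick up to k most relevant answers whose average
    pairwise dissimilarity stays at or above the threshold tau.

    answers: list of (relevance, answer) pairs
    dissim:  function(answer, answer) -> dissimilarity in [0, 1]
    """
    chosen = []
    for _, a in sorted(answers, key=lambda t: -t[0]):   # most relevant first
        trial = chosen + [a]
        if len(trial) < 2:
            avg = 1.0                                   # trivially diverse
        else:
            pairs = [(x, y) for i, x in enumerate(trial)
                     for y in trial[i + 1:]]
            avg = sum(dissim(x, y) for x, y in pairs) / len(pairs)
        if avg >= tau:                                  # diversity constraint
            chosen.append(a)
        if len(chosen) == k:
            return chosen
    return chosen   # fewer than k answers satisfy the constraint

# Example with set-valued answers and Jaccard distance as dissimilarity
jac = lambda a, b: 1 - len(a & b) / len(a | b)
print(diverse_top_k([(0.9, {1, 2}), (0.8, {1, 2, 3}), (0.7, {4, 5})],
                    jac, k=2, tau=0.5))
```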

Findings

The results of experiments using a real data set demonstrate that the proposed search algorithm can find top-k diverse and relevant answers to a query on large-scale graph data efficiently and outperforms the previous methods.

Originality/value

This study proposes a new keyword search method for graph data that finds an optimal solution with diverse and relevant answers to the query. It can provide users with query results that satisfy their various information needs on large graph data.

Details

International Journal of Web Information Systems, vol. 19 no. 1
Type: Research Article
ISSN: 1744-0084

Keywords
