Search results
1 – 10 of over 3,000 results

Zibo Li, Zhengxiang Yan, Shicheng Li, Guangmin Sun, Xin Wang, Dequn Zhao, Yu Li and Xiucheng Liu
Abstract
Purpose
The purpose of this paper is to overcome the application limitations of other polynomial-based multi-variable regression methods, which arise from their huge memory and time costs.
Design/methodology/approach
In this paper, based on the ideas of feature selection and cascaded regression, two strategies, Laguerre polynomials and manifold optimization, are proposed to enhance the accuracy of multi-variable regression. Laguerre polynomials are combined with a genetic algorithm to enhance the approximation capacity of the polynomials, and a manifold optimization method is introduced to solve the correlated optimization problem.
Findings
Two multi-variable Laguerre polynomial regression methods are designed. First, Laguerre polynomials are combined with a feature selection method. Second, manifold component analysis is adopted in a cascaded Laguerre polynomial regression method. Both methods enhance the accuracy of multi-variable regression.
Research limitations/implications
As the number of variables in the regression problem increases, the manifold-based optimization method might not maintain stable accuracy. Moreover, the methods described in this paper are not suitable for classification problems.
Originality/value
Experiments are conducted on three types of datasets to evaluate the performance of the proposed regression methods. The best accuracy was achieved by the combination of cascading, manifold optimization and Chebyshev polynomials, which implies that manifold optimization contributes more than the genetic algorithm and Laguerre polynomials.
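The first strategy, fitting a regression on Laguerre basis features, can be sketched minimally with NumPy. This is an illustration under assumptions, not the authors' implementation: the genetic-algorithm feature selection and the cascade are omitted, an additive model (no cross terms) is assumed, and the data and degree below are made up.

```python
import numpy as np
from numpy.polynomial import laguerre

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0, size=(200, 3))          # three input variables
y = np.exp(-X[:, 0]) + 0.5 * X[:, 1] - 0.2 * X[:, 2] ** 2

def laguerre_features(X, degree):
    """Per-variable Laguerre basis values L_0..L_degree, concatenated
    (an additive model: no cross terms between variables)."""
    return np.hstack([laguerre.lagvander(X[:, j], degree)
                      for j in range(X.shape[1])])

Phi = laguerre_features(X, degree=3)              # shape (200, 12)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)
mse = float(np.mean((Phi @ coef - y) ** 2))
print(f"mse = {mse:.2e}")
```

A feature-selection wrapper (the paper uses a genetic algorithm) would then search over subsets of the columns of `Phi` rather than using all of them.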
Yajing Gu, Hongyan Yan and Yuanguo Zhu
Abstract
Purpose
The purpose of this paper is to propose an iterative Legendre technique to deal with a continuous optimal control problem (OCP).
Design/methodology/approach
For the system in the considered problem, the control variable is a function of the state variables and their derivatives. The state variables are approximated by Legendre expansions as functions of time t, and a constant matrix expresses the derivatives of the state variables, so the control variables can also be described as functions of time t. The OCP is thereby converted into an unconstrained optimization problem whose decision variables are the unknown coefficients of the Legendre expansions.
Findings
The convergence of the proposed algorithm is proved. Experimental results, including the controlled Duffing oscillator problem, demonstrate that the proposed technique is faster than existing methods.
Originality/value
Experimental results, including the controlled Duffing oscillator problem, demonstrate that the proposed technique can be faster while preserving accuracy.
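The "constant matrix" that expresses the derivatives of the state variables can be made concrete: for a Legendre expansion with coefficient vector c, there is a fixed matrix D such that D @ c gives the coefficients of the derivative. A minimal NumPy illustration (not the authors' code; the dimension and coefficients are arbitrary):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_diff_matrix(n):
    """Fixed matrix D with D @ c = Legendre coefficients of the derivative
    of the expansion with coefficient vector c (length n)."""
    D = np.zeros((n, n))
    for k in range(n):
        e = np.zeros(n)
        e[k] = 1.0
        d = legendre.legder(e)        # derivative of the k-th basis function
        D[: len(d), k] = d
    return D

n = 6
c = np.arange(1.0, n + 1.0)           # arbitrary expansion coefficients
D = legendre_diff_matrix(n)
t = np.linspace(-1.0, 1.0, 7)
# derivative via the constant matrix agrees with direct differentiation
same = np.allclose(legendre.legval(t, D @ c),
                   legendre.legval(t, legendre.legder(c)))
print(same)
```

Because differentiation becomes the linear map D, substituting the expansions into the dynamics leaves an unconstrained problem in the coefficients c alone, which is the conversion the abstract describes.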
Jianteng Xu, Qingpu Zhang and Qingguo Bai
Abstract
Purpose
The purpose of this paper is to find the best approximation algorithm for solving the more general case of the single-supplier multi-retailer capacitated economic lot-sizing (SM-CELS) problem in deterministic inventory theory, which is NP-hard.
Design/methodology/approach
Since few theoretical results have been published on polynomial-time approximation algorithms for SM-CELS problems, this paper develops a fully polynomial time approximation scheme (FPTAS) for the problem with monotone production and holding-backlogging cost functions. First, the optimal solution of a rounded problem is taken as the approximate solution, and a straightforward dynamic-programming (DP) algorithm for it is given. The DP algorithm is then converted into an FPTAS by exploiting combinatorial properties of the recursive function.
Findings
An FPTAS is designed for the SM-CELS problem with monotone cost functions, which is the strongest polynomial-time approximation result.
Research limitations/implications
The main limitation is that, when the model is applied, the supplier only manufactures and does not hold any products.
Practical implications
The paper presents the best result for the SM‐CELS problem in deterministic inventory theory.
Originality/value
The LP-rounding technique, an effective approach to designing approximation algorithms for NP-hard problems, is successfully applied to the SM-CELS problem in this paper.
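The "straightforward DP" starting point can be illustrated on the simplest single-retailer special case: a zero-inventory-ordering lot-sizing recursion. This is only a sketch under assumptions; the paper's SM-CELS setting, the rounding step, backlogging and the FPTAS conversion are beyond it, and the costs below are made up.

```python
def lot_sizing(demand, setup, hold):
    """F[t] = min cost to serve demands of periods 1..t, where an order
    placed in period s covers the consecutive demands of periods s..t."""
    n = len(demand)
    INF = float("inf")
    F = [0.0] + [INF] * n
    for t in range(1, n + 1):
        for s in range(1, t + 1):     # period of the last order
            # holding cost of carrying period-j demand from period s to j
            carry = sum(hold * (j - s) * demand[j - 1]
                        for j in range(s, t + 1))
            F[t] = min(F[t], F[s - 1] + setup + carry)
    return F[n]

# optimal plan: order in period 1 covering periods 1-2, order again in period 3
print(lot_sizing([10, 20, 30], setup=40.0, hold=1.0))
```

An FPTAS in the paper's spirit would round the cost values to a coarse grid before running such a recursion, trading a (1 + ε) factor in cost for a polynomial bound on the DP state space.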
Abstract
Purpose
Concurrency is a desirable property that enhances workflow efficiency. The purpose of this paper is to propose six polynomial-time algorithms that collectively maximize control flow concurrency for Business Process Model and Notation (BPMN) workflow models. The proposed algorithms perform model-level transformations on a BPMN model during the design phase of the model, thereby improving the workflow model’s execution efficiency.
Design/methodology/approach
The approach is similar to source code optimization, which works solely by syntactic means. The first step makes implicit synchronizations of interdependent concurrent control flows explicit by adding parallel gateways. After that, every control flow can proceed asynchronously. The next step then generates an equivalent sequence of execution hierarchies for every control flow such that they collectively provide maximum concurrency for the control flow. As a whole, the proposed algorithms add a valuable feature to a BPMN modeling tool to maximize control flow concurrency.
Findings
This paper also introduces the concept of control flow independence, a user-determined semantic property of BPMN models that cannot be obtained by any syntactic means. If control flow independence holds in a BPMN model, however, the model's determinism is guaranteed, and the proposed algorithms output a model that can be proved to be equivalent to the original model.
Originality/value
This paper adds value to BPMN modeling tools by providing polynomial-time algorithms that collectively maximize control flow concurrency in a BPMN model during the design phase of the model. As a result, the model's execution efficiency will increase. Similar to source code optimization, these algorithms perform model-level transformations on a BPMN model through syntactic means, and the transformations performed on each control flow are guaranteed to be equivalent to the original control flow. Furthermore, a case study on a real-life new employee preparation process demonstrates the proposed algorithms' usefulness in increasing the process's execution efficiency.
Maulin Patel, S. Venkateson and R. Chandrasekaran
Abstract
A critical issue in the design of routing protocols for wireless sensor networks is the efficient utilization of resources such as scarce bandwidth and limited energy supply. Many routing schemes proposed in the literature try to minimize the energy consumed in routing or to maximize the lifetime of the sensor network without taking into consideration the limited capacities of nodes and wireless links. This can lead to congestion, increased delay, packet losses and ultimately to retransmission of packets, which wastes a considerable amount of energy. This paper presents a Minimum-cost Capacity-constrained Routing (MCCR) protocol which minimizes the total energy consumed in routing while guaranteeing that the total load on each sensor node and on each wireless link does not exceed its capacity. The protocol is derived from polynomial-time minimum-cost flow algorithms and is therefore simple and scalable. The paper improves the routing protocol in (1) to incorporate integrality, node capacity and link capacity constraints. This improved protocol is called Maximum Lifetime Capacity-constrained Routing (MLCR). The objective of the MLCR protocol is to maximize the time until the first battery drains its energy, subject to the node capacity and link capacity constraints. A strongly polynomial time algorithm is proposed for a special case of the MLCR problem in which the energy consumed in transmission by a sensor node is constant. Simulations are performed to analyze the performance of the proposed protocols.
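The core of MCCR, routing a required amount of flow at minimum total cost while respecting link capacities, can be sketched with a textbook successive-shortest-path min-cost-flow routine. This is illustrative only: the paper derives its protocol from such algorithms, and the tiny network below is invented.

```python
def min_cost_flow(n, edges, src, dst, amount):
    """edges: (u, v, capacity, cost) per directed link.
    Returns (flow_sent, total_cost) via successive shortest paths."""
    graph = [[] for _ in range(n)]
    to, cap, cost = [], [], []

    def add(u, v, c, w):              # forward edge plus zero-capacity reverse
        graph[u].append(len(to)); to.append(v); cap.append(c); cost.append(w)
        graph[v].append(len(to)); to.append(u); cap.append(0); cost.append(-w)

    for u, v, c, w in edges:
        add(u, v, c, w)
    INF = float("inf")
    flow = total = 0
    while flow < amount:
        dist = [INF] * n; dist[src] = 0; parent = [-1] * n
        for _ in range(n - 1):        # Bellman-Ford on the residual graph
            for u in range(n):
                if dist[u] == INF:
                    continue
                for e in graph[u]:
                    if cap[e] > 0 and dist[u] + cost[e] < dist[to[e]]:
                        dist[to[e]] = dist[u] + cost[e]; parent[to[e]] = e
        if dist[dst] == INF:
            break                     # demand exceeds network capacity
        push, v = amount - flow, dst
        while v != src:               # bottleneck capacity along the path
            e = parent[v]; push = min(push, cap[e]); v = to[e ^ 1]
        v = dst
        while v != src:               # augment along the path
            e = parent[v]; cap[e] -= push; cap[e ^ 1] += push; v = to[e ^ 1]
        flow += push; total += push * dist[dst]
    return flow, total

# tiny "sensor network": node 0 routes 3 units to sink 3 over capacitated links
links = [(0, 1, 2, 1), (0, 2, 1, 2), (1, 3, 1, 3), (1, 2, 1, 1), (2, 3, 2, 1)]
print(min_cost_flow(4, links, 0, 3, 3))
```

MLCR's lifetime objective would replace the fixed per-link costs with battery-aware terms, which this sketch does not attempt.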
Mingang Gao, Hong Chi, Baoguang Xu and Ruo Ding
Abstract
Purpose
The purpose of this paper is to focus on disruption management in response to large-area flight delays (LFD). It is urgent for airlines to reschedule the disrupted flights so as to relieve the negative influence and minimize losses. The authors try to reduce the risk to the airline's credit and its economic losses by rescheduling flights with mathematical models and algorithms.
Design/methodology/approach
All flights are prioritized based on classifications of their real-time statuses and on priority indicators. In this paper, two mathematical programming models of flight rescheduling are proposed, and for the second model an optimal polynomial-time algorithm is designed.
Findings
In practice, when LFD happens, it is very important for the airline to pay attention to the real-time statuses of all flights. At the same time, disruption management should consider not only the economic loss but also non-quantitative losses such as passengers' satisfaction.
Originality/value
In this paper, two mathematical programming models of flight rescheduling are built. An algorithm is designed and proved to be an optimal polynomial-time algorithm, and a case study is given to illustrate it. The paper provides theoretical support for airlines to reduce the risk brought by LFD.
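The prioritization step can be given a hypothetical shape: rank flights first by real-time status, then by a priority indicator, and hand out slots in that order. Everything below, the status categories, the priority values and the ranking rule itself, is invented for illustration and is not taken from the paper.

```python
# Hypothetical flight records; "status" and "priority" are assumed fields.
flights = [
    {"id": "CA101", "status": "airborne", "priority": 2},
    {"id": "CA205", "status": "boarding", "priority": 1},
    {"id": "CA330", "status": "scheduled", "priority": 3},
    {"id": "CA412", "status": "boarding", "priority": 3},
]
status_rank = {"airborne": 0, "boarding": 1, "scheduled": 2}  # assumed order

# rank by status first, then by higher priority within the same status
order = sorted(flights, key=lambda f: (status_rank[f["status"]], -f["priority"]))
slots = {f["id"]: slot for slot, f in enumerate(order)}       # earliest first
print([f["id"] for f in order])
```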
Krish Sethanand, Thitivadee Chaiyawat and Chupun Gowanit
Abstract
Purpose
This paper presents a systematic process framework for developing suitable crop insurance for each farming region, which has individual differences in the associated crop and climate conditions, as well as in the technology applicable to crop insurance practice. This paper also studies the adoption of the new insurance scheme, assessing the willingness to join a crop insurance program.
Design/methodology/approach
Crop insurance development is performed through the IDDI conceptual framework to illustrate a specific crop insurance diagram. Area-yield insurance, a type of index-based insurance, has the advantage of reducing basis risk, adverse selection and moral hazard. This paper therefore develops area-yield crop insurance at the provincial level, focusing on a rice insurance scheme for protection against flood. The diagram demonstrates the structure of area-yield rice insurance combined with a selected machine learning algorithm to evaluate indemnity payments and assess premiums, applicable to Jasmine 105 rice farming in Ubon Ratchathani province. The technology acceptance model (TAM) is used to test adoption of the new insurance.
Findings
The framework produces a visibly informative structure for crop insurance. Random Forest is the algorithm that gives the highest accuracy on the data collected for rice farming in Ubon Ratchathani province, evaluating rice production to calculate indemnity payments. TAM shows that the level of adoption is high.
Originality/value
This paper originates a framework for generating viable crop insurance suited to individual farming conditions and contributes the idea of implementing technology in a new crop insurance service.
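The indemnity evaluation can be sketched with the standard area-yield index rule: a payout triggers when the area yield falls below a coverage-level share of the expected yield. The figures, coverage level, price and units below are invented, not from the paper; in the paper's scheme, a Random Forest model would supply the predicted area yield.

```python
def indemnity(expected_yield, predicted_yield, coverage, price, area):
    """Standard area-yield index payout: price * area * max(0, trigger - yield),
    where trigger = coverage * expected area yield."""
    trigger = coverage * expected_yield
    shortfall = max(0.0, trigger - predicted_yield)
    return shortfall * price * area

# e.g. expected 450 kg/rai, model predicts 300 kg/rai after a flood
print(indemnity(450.0, 300.0, coverage=0.9, price=12.0, area=10.0))
```

Because the payout depends on the area yield rather than the individual farm's yield, this rule is what gives area-yield insurance its resistance to adverse selection and moral hazard.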
Krzysztof J. Cios and Ning Liu
Abstract
Presents an inductive machine learning algorithm called CLILP2 that learns multiple covers for a concept from positive and negative examples. Although inductive learning is an error-prone process, CLILP2 uses multiple-meaning interpretation of the examples to compensate for the narrowness of induction. The algorithm is tested on data sets representing three different domains. Analyses the complexity of the algorithm and compares the results with those obtained by others. Employs measures of specificity, sensitivity and predictive accuracy, which are not usually used in presenting machine learning results, and shows that they better evaluate the "correctness" of the learned concepts. The study is published in two parts: I – the CLILP2 algorithm; II – experimental results and conclusions.
Abstract
In this study, we analyze the power of the individual return-to-volatility security performance heuristic (ri/stdi) to simplify the identification of securities to buy and, consequently, to form optimal no-short-sales mean–variance portfolios. The heuristic ri/stdi is powerful enough to identify the long and short sets, owing to the positive definiteness of the variance–covariance matrix; the key is to use the heuristic sequentially. At the investor level, the heuristic helps investors decide which securities to consider first. At the portfolio level, it may help us find out whether it is a good idea to invest in equity to begin with. Our research may also help integrate individual security analysis into portfolio optimization through improved security rankings.
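The ranking itself is simple to sketch: score each security by mean return over volatility and sort. The return series below are made up, and the paper's sequential use of the ranking to split the long set from the rest is not reproduced here.

```python
import statistics

# Invented return histories for three hypothetical securities.
returns = {
    "AAA": [0.02, 0.03, 0.01, 0.04],
    "BBB": [0.05, -0.04, 0.06, -0.02],
    "CCC": [-0.01, 0.00, -0.02, 0.01],
}

# r_i / std_i: mean return over sample standard deviation
score = {s: statistics.mean(r) / statistics.stdev(r) for s, r in returns.items()}
ranking = sorted(score, key=score.get, reverse=True)
print(ranking)
```

In the paper's procedure, such a ranking tells the investor which securities to consider first; the portfolio weights themselves still come from the mean–variance optimization.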
Ryma Zineb Badaoui, Mourad Boudhar and Mohammed Dahane
Abstract
Purpose
This paper studies the preemptive scheduling problem of independent jobs on identical machines. The purpose of this paper is to minimize the makespan under the imposed constraints, namely the transportation delays required to transport a preempted job from one machine to another. This study considers the case in which the transportation delays are variable.
Design/methodology/approach
The contribution is twofold. First, this study proposes a new linear programming formulation in real and binary decision variables. Then, it proposes and implements a two-stage solution strategy. The goal of the first stage is to obtain the best machine order using a local search strategy; the second stage determines the best possible sequence of jobs. To solve the preemptive scheduling problem with transportation delays, this study proposes a heuristic and two metaheuristics (simulated annealing and variable neighborhood search), each with two modes of evaluation.
Findings
Computational experiments on randomly generated instances are presented and discussed.
Practical implications
The study has implications in various industrial environments when the preemption of jobs is allowed.
Originality/value
This study proposes a new linear programming formulation for the problem with variable transportation delays as well as a corresponding heuristic and metaheuristics.
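A generic simulated-annealing skeleton over job sequences gives the flavor of the second stage. It is purely illustrative: the evaluator below is plain list scheduling on identical machines, preemption and transportation delays are ignored, and the processing times are made up.

```python
import math
import random

def makespan(seq, times, machines):
    """List-schedule the jobs in the given order, each on the
    currently least-loaded machine; return the resulting makespan."""
    loads = [0] * machines
    for j in seq:
        i = loads.index(min(loads))
        loads[i] += times[j]
    return max(loads)

def anneal(times, machines, iters=2000, t0=10.0, seed=1):
    """Simulated annealing over job sequences with a swap neighborhood."""
    rng = random.Random(seed)
    cur = list(range(len(times)))
    best = cur[:]
    for k in range(iters):
        t = t0 * (1 - k / iters) + 1e-9           # linear cooling schedule
        cand = cur[:]
        a, b = rng.sample(range(len(cand)), 2)
        cand[a], cand[b] = cand[b], cand[a]       # swap two jobs
        delta = makespan(cand, times, machines) - makespan(cur, times, machines)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            cur = cand                            # accept (maybe uphill) move
        if makespan(cur, times, machines) < makespan(best, times, machines):
            best = cur[:]
    return best, makespan(best, times, machines)

times = [4, 7, 2, 9, 3, 5]
_, value = anneal(times, machines=2)
print(value)
```

The paper's metaheuristics would replace `makespan` with an evaluation that accounts for preemption and the variable transportation delays, which is where the two modes of evaluation come in.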