Search results
1 – 10 of 457
Petar Jackovich, Bruce Cox and Raymond R. Hill
Abstract
Purpose
This paper aims to divide the class of fragment constructive heuristics used to compute feasible solutions for the traveling salesman problem (TSP) into edge-greedy and vertex-greedy subclasses. As these subclasses of heuristics can create subtours, two known methodologies for subtour elimination on symmetric instances are reviewed and expanded to cover asymmetric problem instances. This paper also introduces a third, novel subtour elimination methodology, the greedy tracker (GT), and compares it to both known methodologies.
Design/methodology/approach
Computational results for all three subtour elimination methodologies are generated across 17 symmetric instances ranging in size from 29 vertices to 5,934 vertices, as well as 9 asymmetric instances ranging in size from 17 to 443 vertices.
Findings
The results demonstrate that the GT is the fastest method for preventing subtours on instances below 400 vertices. Additionally, distinguishing between a fragment constructive heuristic and the subtour elimination methodology used to ensure the feasibility of its solutions enables the introduction of a new vertex-greedy fragment heuristic called ordered greedy.
Originality/value
This research has two main contributions: first, it introduces a novel subtour elimination methodology; second, it introduces the concept of ordered lists, which remaps the TSP into a new space, with promising initial computational results.
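The edge-greedy idea behind such fragment heuristics can be illustrated with a short sketch: edges are added in order of increasing cost, and a union-find structure rejects any edge that would close a subtour early. This is the standard union-find check, not the paper's greedy tracker, and is shown only to make the subtour problem concrete:

```python
# Edge-greedy TSP construction with subtour prevention via union-find.
# A generic sketch, not the paper's greedy tracker (GT) methodology.

def edge_greedy_tour(dist):
    """dist: symmetric cost matrix (list of lists). Returns a tour as a vertex list."""
    n = len(dist)
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    degree = [0] * n
    adj = [[] for _ in range(n)]
    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    added = 0
    for w, i, j in edges:
        if added == n - 1:
            break
        # Reject edges that would give a vertex degree 3, or that would
        # connect two vertices already joined by a fragment (a subtour).
        if degree[i] == 2 or degree[j] == 2 or find(i) == find(j):
            continue
        parent[find(i)] = find(j)
        degree[i] += 1; degree[j] += 1
        adj[i].append(j); adj[j].append(i)
        added += 1
    # Walk the resulting Hamiltonian path; the return-to-start edge
    # implicitly closes it into a tour.
    start = next(v for v in range(n) if degree[v] == 1)
    tour, prev, cur = [start], -1, start
    while len(tour) < n:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        tour.append(nxt)
        prev, cur = cur, nxt
    return tour
```

Without the `find(i) == find(j)` test, the cheapest edges would happily stitch small closed loops, which is exactly the subtour infeasibility the surveyed elimination methodologies address.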
Details
Keywords
Tianyi Wu, Jian Hua Liu, Shaoli Liu, Peng Jin, Hao Huang and Wei Liu
Abstract
Purpose
This paper aims to solve the problem of machining errors in free-form tubes, which are caused by their complex geometries and material properties.
Design/methodology/approach
In this paper, the authors propose a multi-view vision-based method for measuring free-form tubes. The authors apply photogrammetry theory to construct the initial model and then optimize the model using an energy function based on features of the tube images. Solving the energy function allows the gray-level features of the images to be used to reconstruct centerline point clouds and thus obtain the pertinent geometric parameters.
Findings
According to the experiments, the measurement process takes less than 2 min and the precision of the proposed system is 0.2 mm. The authors used simple operations to carry out the measurements, and the process is fully automatic.
Originality/value
This paper proposes a method for measuring free-form tubes based on multi-view vision, which, to the best of the authors' knowledge, has not been attempted before. The method differs from traditional multi-view vision measurement methods because it does not rely on data from the tube's design model. The application of the energy function also avoids the problem of matching corresponding points, which simplifies the calculation and improves its stability.
Details
Keywords
Abstract
Multipath routing holds great potential for providing sufficient bandwidth to a plethora of applications in wireless sensor networks. In this paper, we consider the problem of interference, which can significantly affect the expected performance. We focus on the performance evaluation of the iterative path discovery approach as opposed to traditional concurrent multipath routing. Five different variants of multipath protocols are simulated and evaluated using different performance metrics. We mainly show that the iterative approach performs better when used jointly with an interference-aware metric or when an interference-zone marking strategy is employed. The latter exhibits the best performance in terms of success ratio, achieved throughput, control message overhead and energy consumption.
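Iterative discovery with interference-zone marking can be caricatured in a few lines: after each path is found, its intermediate nodes and their one-hop neighbours are excluded from subsequent searches. The marking rule below (blocking the one-hop neighbourhood of intermediate path nodes) is an assumption for illustration; the evaluated protocols differ in their exact strategies:

```python
from collections import deque

def iterative_multipath(adj, src, dst, k):
    """Discover up to k paths one at a time. After each discovered path,
    its intermediate nodes and their neighbours are marked as an
    interference zone and excluded from later searches.
    adj: dict node -> set of neighbours."""
    blocked, paths = set(), []
    for _ in range(k):
        # BFS over nodes outside the current interference zone.
        prev = {src: None}
        q = deque([src])
        while q:
            u = q.popleft()
            if u == dst:
                break
            for v in adj[u]:
                if v not in prev and v not in blocked:
                    prev[v] = u
                    q.append(v)
        if dst not in prev:
            break                       # no interference-free path remains
        path, node = [], dst
        while node is not None:
            path.append(node)
            node = prev[node]
        path.reverse()
        paths.append(path)
        # Interference-zone marking: block intermediate path nodes and
        # their one-hop neighbours, keeping the endpoints usable.
        for u in path[1:-1]:
            blocked.add(u)
            blocked |= adj[u]
        blocked -= {src, dst}
    return paths
```

The concurrent alternative would search for all k paths at once; the iterative version trades extra discovery rounds for paths that do not interfere with each other.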
Details
Keywords
Mohammad Khalid Pandit, Roohie Naaz Mir and Mohammad Ahsan Chishti
Abstract
Purpose
The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data it generates in an ultralow latency environment. The computational latency incurred by a cloud-only solution can be significantly brought down by the fog computing layer, which offers a computing infrastructure to minimize the latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that can achieve optimal resource utilization and minimum task execution time while significantly reducing communication costs during distributed execution.
Design/methodology/approach
To realize this, the authors proposed a two-level neural network (NN)-based task scheduling system, where the first-level NN (a feed-forward neural network or convolutional neural network [FFNN/CNN]) determines whether a data stream can be analyzed (executed) in the resource-constrained environment (edge/fog) or should be forwarded directly to the cloud. The second-level NN (the RL module) schedules all tasks sent by the level-1 NN to the fog layer among the available fog devices. This real-time task assignment policy is used to minimize the total computational latency (makespan) as well as communication costs.
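Stripped of the learning components, the two-level gate-then-assign structure looks roughly like the sketch below, where a capacity threshold stands in for the level-1 FFNN/CNN and an earliest-finish-time greedy rule stands in for the level-2 RL module. Both stand-ins are assumptions for illustration, not the paper's trained models:

```python
import heapq

def schedule(tasks, fog_capacity, device_speeds):
    """Two-level scheduling sketch.
    Level 1: tasks larger than fog_capacity go to the cloud
             (a stand-in for the paper's FFNN/CNN classifier).
    Level 2: remaining tasks go to the fog device with the earliest
             finish time (a greedy stand-in for the RL module).
    tasks: list of workloads; device_speeds: fog device speeds."""
    cloud, assignment = [], []
    # One (finish_time, device_id) entry per fog device.
    heap = [(0.0, d) for d in range(len(device_speeds))]
    heapq.heapify(heap)
    for i, load in enumerate(tasks):
        if load > fog_capacity:          # level 1: offload to the cloud
            cloud.append(i)
            continue
        t, d = heapq.heappop(heap)       # level 2: earliest-finish device
        t += load / device_speeds[d]
        assignment.append((i, d))
        heapq.heappush(heap, (t, d))
    makespan = max(t for t, _ in heap)
    return assignment, cloud, makespan
```

For example, with tasks `[4, 20, 2, 2]`, a fog capacity of 10 and two devices of speeds 1.0 and 2.0, task 1 is offloaded to the cloud and the fog makespan is 4.0.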
Findings
Experimental results indicated that the RL technique works better than the computationally infeasible greedy approach for task scheduling and the combination of RL and task clustering algorithm reduces the communication costs significantly.
Originality/value
The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT with the best resource utilization, minimum makespan and minimum inter-task communication cost.
Details
Keywords
Dimos C. Charmpis and Manolis Papadrakakis
Abstract
Balancing and dual domain decomposition methods (DDMs) comprise a family of efficient high performance solution approaches for a large number of problems in computational mechanics. Such DDMs are used in practice on parallel computing environments with the number of generated subdomains being generally larger than the number of available processors. This paper presents an effective heuristic technique for organizing the subdomains into subdomain clusters, in order to assign each cluster to a processor. This task is handled by the proposed approach as a graph partitioning optimization problem using the publicly available software METIS. The objective of the optimization process is to minimize the communication requirements of the DDMs under the constraint of producing balanced processor workloads. This constraint optimization procedure for treating the subdomain cluster generation task leads to increased computational efficiencies for balancing and dual DDMs.
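The paper delegates the partitioning itself to METIS; the shape of the objective (balanced processor workloads, minimal inter-cluster communication) can nonetheless be illustrated with a toy greedy bisection. Everything below is a simplified stand-in for exposition, not the METIS algorithm:

```python
def greedy_two_clusters(weights, comm):
    """Toy stand-in for the METIS-based step: split subdomains into two
    processor clusters, balancing total workload while preferring the
    placement that adds less communication volume to the cut.
    weights: per-subdomain workload; comm: dict {(i, j): volume}, i < j."""
    order = sorted(range(len(weights)), key=lambda i: -weights[i])
    clusters, loads = ([], []), [0.0, 0.0]
    for i in order:
        # Communication volume i would add to the cut if placed in cluster c.
        def cut_cost(c):
            other = clusters[1 - c]
            return sum(v for (a, b), v in comm.items()
                       if (a == i and b in other) or (b == i and a in other))
        # Prefer the lighter cluster; break load ties toward the smaller cut.
        c = min((0, 1), key=lambda c: (loads[c], cut_cost(c)))
        clusters[c].append(i)
        loads[c] += weights[i]
    return clusters, loads
```

With workloads `[4, 4, 2, 2]` and heavy communication on pairs (0, 2) and (1, 3), the sketch keeps each communicating pair on one processor while the two loads stay equal, which is precisely the constrained objective the paper optimizes at scale.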
Details
Keywords
Abstract
Purpose
The social media revolution has brought tremendous change to business strategies for marketing and promoting products and services. Online social networks have become the prime choice for promoting products because of the large size of online communities. Identifying seed nodes, i.e. the users who can maximize the spread of information over the network, is the key challenge faced by organizations; it has been proven to be a non-deterministic polynomial-time (NP)-hard problem. The purpose of this paper is to design an efficient algorithm for optimal seed selection that covers the online social network as much as possible to maximize influence. In this approach, agglomerative clustering is used to generate the initial population of seed nodes for a genetic algorithm (GA).
Design/methodology/approach
In this paper, an agglomerative clustering-based approach is proposed to generate the initial population of seed nodes for the GA. This approach helps create initial populations drawn from different parts of the network. The genetic algorithm then evolves this population and aids in generating the best seed nodes in the network.
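The seeding step can be sketched as follows: clusters are agglomeratively merged until the desired seed count remains, and each chromosome draws one node per cluster, biased toward high-degree nodes. The merge criterion (most connecting edges) and the degree bias are assumptions for illustration; the paper's clustering and GA operators are more elaborate:

```python
import random

def initial_population(adj, k, pop_size, rng=None):
    """Sketch of GA seeding via agglomerative clustering (assumed details):
    merge the two clusters sharing the most edges until k clusters remain,
    then build each chromosome by picking one high-degree node per cluster.
    adj: dict node -> set of neighbours."""
    rng = rng or random.Random(0)
    clusters = [{v} for v in adj]
    while len(clusters) > k:
        # Merge the pair of clusters with the most connecting edges.
        best, pair = -1, (0, 1)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                cross = sum(len(adj[v] & clusters[b]) for v in clusters[a])
                if cross > best:
                    best, pair = cross, (a, b)
        a, b = pair
        clusters[a] |= clusters.pop(b)
    population = []
    for _ in range(pop_size):
        # Highest-degree node wins; ties are broken randomly.
        chrom = [max(c, key=lambda v: (len(adj[v]), rng.random()))
                 for c in clusters]
        population.append(chrom)
    return clusters, population
```

Because every chromosome contains one node from each structural region of the network, the GA starts from spread-out seed sets instead of a random population concentrated in one community.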
Findings
The performance of the proposed approach is assessed against existing seed selection approaches such as k-medoid, k-means, general greedy, random, discounted degree and high degree. The algorithms are compared over network data sets with varying out-degree ratios. Experiments reveal that the proposed approach improves the spread of influence by 35% compared to contemporary techniques.
Originality/value
This paper is an original contribution. The agglomerative clustering-based GA for optimal seed selection is developed to improve the spread of influence in online social networks. The paper is of immense importance for viral marketing and for organizations willing to promote products or services online via influential personalities.
Details
Keywords
Byung-Won On, Gyu Sang Choi and Soo-Mok Jung
Abstract
Purpose
The purpose of this paper is to collect and understand the nature of real cases of author name variants that have often appeared in bibliographic digital libraries (DLs) as a case study of the name authority control problem in DLs.
Design/methodology/approach
To find a sample of name variants across DLs (e.g. DBLP and ACM) and in a single DL (e.g. ACM), the approach is based on two bipartite matching algorithms: Maximum Weighted Bipartite Matching and Maximum Cardinality Bipartite Matching.
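Of the two matching algorithms, Maximum Cardinality Bipartite Matching has a particularly compact formulation via augmenting paths (Kuhn's algorithm). The sketch below matches name records across two DLs given a compatibility predicate; the surname-based predicate in the test is purely illustrative, not the paper's similarity measure:

```python
def max_cardinality_matching(left, right, compatible):
    """Kuhn's augmenting-path algorithm for maximum cardinality bipartite
    matching. compatible(l, r) decides whether two name records may refer
    to the same author (the actual similarity test is problem-specific)."""
    match_r = {}                      # right record -> matched left record

    def try_augment(l, seen):
        for r in right:
            if compatible(l, r) and r not in seen:
                seen.add(r)
                # Take r if free, or re-route its current partner elsewhere.
                if r not in match_r or try_augment(match_r[r], seen):
                    match_r[r] = l
                    return True
        return False

    for l in left:
        try_augment(l, set())
    return {l: r for r, l in match_r.items()}
```

The Maximum Weighted variant additionally scores each compatible pair and maximizes total similarity rather than pair count.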
Findings
First, the authors validated the effectiveness and efficiency of the bipartite matching algorithms. The authors also studied the nature of real cases of author name variants that had been found across DLs (e.g. ACM, CiteSeer and DBLP) and in a single DL.
Originality/value
To the best of the authors' knowledge, little research effort has been devoted to understanding the nature of author name variants shown in DLs. A thorough analysis can help focus research effort on the real problems that arise when duplicate detection methods are applied.
Details
Keywords
Zhang-Hui Liu, Guo-Long Chen, Ning-Ning Wang and Biao Song
Abstract
Purpose
The purpose of this paper is to present a new immunization strategy for effectively controlling the spread of viruses.
Design/methodology/approach
Inspired by the idea of network partitioning, a new immunization strategy based on a greedy algorithm for scale-free networks is presented; it simultaneously takes two optimization targets into account: the scale of each sub-network and the sum of the strengths of the sub-network's nodes. After specifying the number of nodes to immunize, the network is partitioned so that both the scale of each sub-network and the sum of the strengths of its nodes are as small as possible.
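A generic greedy immunization loop in this spirit picks, at each step, the node whose removal most shrinks the total strength of the largest surviving sub-network. The single-objective scoring below is an assumption for illustration; the paper balances sub-network scale and node strength jointly:

```python
def greedy_immunize(adj, strength, budget):
    """Greedily pick nodes to immunize: at each step remove the node whose
    removal minimizes the largest surviving component's total strength.
    adj: dict node -> set of neighbours; strength: dict node -> weight.
    (A generic sketch of the greedy idea, not the paper's bi-objective rule.)"""
    removed = set()

    def worst_component(extra):
        # Total strength of the heaviest component once `extra` is removed.
        gone = removed | {extra}
        seen, worst = set(gone), 0.0
        for s in adj:
            if s in seen:
                continue
            stack, comp = [s], 0.0
            seen.add(s)
            while stack:
                u = stack.pop()
                comp += strength[u]
                for v in adj[u]:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            worst = max(worst, comp)
        return worst

    chosen = []
    for _ in range(budget):
        best = min((v for v in adj if v not in removed), key=worst_component)
        removed.add(best)
        chosen.append(best)
    return chosen
```

On a star-shaped (hub-dominated) network the loop immediately immunizes the hub, which mirrors why such strategies outperform naive ones on scale-free topologies.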
Findings
The experimental results show that the proposed algorithm performs better than targeted immunization, which is currently considered highly efficient.
Originality/value
This paper proposes a new immunization strategy based on a greedy algorithm for scale-free networks to effectively control the spread of viruses.
Details
Keywords
Abstract
Purpose
As knowledge hiding is prevalent and often leaves severe detrimental consequences in its wake, it is imperative to prioritize strategies for identifying its potential antecedents if any headway is to be made in curtailing this phenomenon in organizations. Therefore, this study aims to examine the relationship between dispositional greed and knowledge hiding, with the perceived loss of knowledge power as an underlying mechanism.
Design/methodology/approach
A multi-wave data collection strategy was used, with waves three weeks apart. A sample of 262 employees working full-time in various organizations operating across different industries in Nigeria participated in this study. Data were analyzed with partial least squares structural equation modeling.
Findings
The results showed that dispositional greed related positively to a perceived loss of knowledge power but insignificantly to any of the three dimensions of knowledge hiding (i.e. playing dumb, evasive hiding and rationalized hiding). On the other hand, the relationship between perceived loss of knowledge power and the three dimensions of knowledge hiding was positive. Finally, dispositional greed had an indirect positive relationship with the three dimensions of knowledge hiding through perceived loss of knowledge power.
Research limitations/implications
All the variables were self-reported, which may lead to common source bias.
Practical implications
Human resources managers can subject employees to cognitive restructuring training to help them identify thinking patterns that contribute to the perception of losing their power in the organization if they share knowledge and help reshape their perceptions regarding knowledge sharing. Management can use rewards to encourage employees to adopt knowledge sharing and refrain from knowledge hiding as a desired organizational norm.
Originality/value
This study offers novel insights that identify an underlying mechanism that encourages greedy employees to enact knowledge hiding.
Details
Keywords
Qingzheng Xu, Na Wang and Lei Wang
Abstract
Purpose
The purpose of this paper is to examine and compare the entire impact of various execution skills of oppositional biogeography-based optimization using the current optimum (COOBBO) algorithm.
Design/methodology/approach
The improvement measures tested in this paper include different initialization approaches, crossover approaches, local optimization approaches and greedy approaches. Eight well-known traveling salesman problem (TSP) instances are employed for performance verification. Four comparison criteria are recorded and compared to analyze the contribution of each modified method.
Findings
Experiment results illustrate that the combination model of “25 nearest-neighbor algorithm initialization+inver-over crossover+2-opt+all greedy” may be the best choice of all when considering both the overall algorithm performance and computation overhead.
Originality/value
When solving TSPs of varying scales, these modified methods can enhance the performance and efficiency of the COOBBO algorithm to different degrees, and an appropriate combination model may make the fullest possible contribution.
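Two ingredients of the winning combination, nearest-neighbor initialization and 2-opt local optimization, are standard enough to sketch; the inver-over crossover and the biogeography-based machinery are omitted:

```python
def nearest_neighbor_tour(dist, start=0):
    """Build an initial tour by always visiting the nearest unvisited city."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[cur][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def two_opt(tour, dist):
    """Repeatedly reverse tour segments while doing so shortens the tour."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n if i else n - 1):
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # Swapping edges (a,b),(c,d) for (a,c),(b,d) uncrosses them.
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Seeding a population with nearest-neighbor tours and polishing candidates with 2-opt is a common pattern in evolutionary TSP solvers, which is consistent with those components appearing in the best-performing combination.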
Details