Search results (1 – 10 of 88)

Petar Jackovich, Bruce Cox and Raymond R. Hill
Abstract
Purpose
This paper aims to divide the class of fragment constructive heuristics used to compute feasible solutions for the traveling salesman problem (TSP) into edge-greedy and vertex-greedy subclasses. As these subclasses of heuristics can create subtours, two known methodologies for subtour elimination on symmetric instances are reviewed and expanded to cover asymmetric problem instances. This paper introduces a third, novel subtour elimination methodology, the greedy tracker (GT), and compares it to both known methodologies.
Design/methodology/approach
Computational results for all three subtour elimination methodologies are generated across 17 symmetric instances ranging in size from 29 vertices to 5,934 vertices, as well as 9 asymmetric instances ranging in size from 17 to 443 vertices.
Findings
The results demonstrate that the GT is the fastest subtour elimination method for instances below 400 vertices. Additionally, distinguishing between a fragment constructive heuristic and the subtour elimination methodology used to ensure the feasibility of its solutions enables the introduction of a new vertex-greedy fragment heuristic called ordered greedy.
Originality/value
This research has two main contributions: first, it introduces a novel subtour elimination methodology; second, it introduces the concept of ordered lists, which remaps the TSP into a new space with promising initial computational results.
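The paper's greedy tracker is not described in enough detail here to reproduce, but the edge-greedy construction with subtour elimination that it accelerates can be sketched with a standard union-find approach: accept edges cheapest-first, rejecting any edge that would give a vertex degree three or close a cycle before all vertices are in one fragment. All names below are illustrative.

```python
# A minimal sketch (not the paper's greedy tracker) of an edge-greedy
# fragment heuristic for the symmetric TSP, using union-find to reject
# edges that would close a premature subtour.

def edge_greedy_tour(dist):
    """dist: symmetric n x n matrix; returns a tour as a list of vertices."""
    n = len(dist)
    parent = list(range(n))

    def find(v):                       # union-find root with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    degree = [0] * n
    adj = [[] for _ in range(n)]
    edges = sorted((dist[i][j], i, j) for i in range(n) for j in range(i + 1, n))
    chosen = 0
    for w, i, j in edges:
        if chosen == n:
            break
        if degree[i] == 2 or degree[j] == 2:
            continue                   # vertex already interior to a fragment
        ri, rj = find(i), find(j)
        if ri == rj and chosen < n - 1:
            continue                   # would close a subtour too early
        parent[ri] = rj                # merge the two fragments
        degree[i] += 1
        degree[j] += 1
        adj[i].append(j)
        adj[j].append(i)
        chosen += 1
    # walk the single remaining cycle to produce the tour order
    tour, prev, cur = [0], None, 0
    while True:
        nxt = adj[cur][0] if adj[cur][0] != prev else adj[cur][1]
        if nxt == 0:
            break
        tour.append(nxt)
        prev, cur = cur, nxt
    return tour
```

The final edge is allowed to close the cycle only once `n - 1` edges have merged all fragments into a single Hamiltonian path.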
Ruxia Ma, Xiaofeng Meng and Zhongyuan Wang
Abstract
Purpose
The Web is the largest repository of information. Personal information is usually scattered across various pages of different websites, and search engines have made it easier to find. An attacker may collect a user's scattered information via search engines and infer private information from it. The authors call this kind of privacy attack a “privacy inference attack via search engines”. The purpose of this paper is to provide a user‐side automatic detection service that detects privacy leakage before personal information is published.
Design/methodology/approach
In this paper, the authors propose a user‐side automatic detection service. The service constructs a user information correlation (UICA) graph to model the associations between pieces of user information returned by search engines. The privacy inference attack is mapped into the decision problem of searching for a privacy inferring path with maximal probability in the UICA graph, which is proved NP‐complete by a two‐step reduction. A Privacy Leakage Detection Probability (PLD‐Probability) algorithm is proposed to find the privacy inferring path: it combines two significant factors that influence the vertices' probabilities in the UICA graph and uses a greedy algorithm to find the path.
Findings
The authors reveal that privacy inference attacks via search engines are a serious problem in real life. A user‐side automatic detection service is proposed to detect the risk of privacy inference. Three kinds of experiments evaluate the seriousness of the privacy leakage problem and the performance of the proposed methods; the results show that the algorithm behind the service is reasonable and effective.
Originality/value
The paper introduces a new family of privacy attacks on the Web, privacy inference attacks via search engines, and presents a privacy inference model describing the process and principles of such attacks. A user‐side automatic detection service is proposed to detect privacy inference before personal information is published, built around the proposed Privacy Leakage Detection Probability (PLD‐Probability) algorithm. Extensive experiments show these methods are reasonable and effective.
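The abstract does not give the PLD-Probability algorithm itself, but its greedy core, extending a path through the neighbour that keeps the path probability highest until the target attribute is reached, can be sketched as follows. The graph shape and attribute names are illustrative assumptions, not the paper's data model.

```python
# A toy sketch of the greedy idea behind maximal-probability path search
# (not the paper's PLD-Probability algorithm): from a start vertex, keep
# extending the path through the unvisited neighbour whose edge keeps the
# product of probabilities highest, until the target attribute is reached.

def greedy_inference_path(graph, start, target):
    """graph: {u: {v: probability}}; returns (path, probability) or (None, 0.0)."""
    path, prob, cur = [start], 1.0, start
    visited = {start}
    while cur != target:
        candidates = {v: p for v, p in graph[cur].items() if v not in visited}
        if not candidates:
            return None, 0.0           # greedy walk got stuck
        cur = max(candidates, key=candidates.get)
        prob *= candidates[cur]
        path.append(cur)
        visited.add(cur)
    return path, prob
```

Because the underlying decision problem is NP-complete, a greedy walk like this trades optimality for speed: it may miss a higher-probability path that starts with a lower-probability edge.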
Dimos C. Charmpis and Manolis Papadrakakis
Abstract
Balancing and dual domain decomposition methods (DDMs) comprise a family of efficient high performance solution approaches for a large number of problems in computational mechanics. Such DDMs are used in practice on parallel computing environments with the number of generated subdomains being generally larger than the number of available processors. This paper presents an effective heuristic technique for organizing the subdomains into subdomain clusters, in order to assign each cluster to a processor. This task is handled by the proposed approach as a graph partitioning optimization problem using the publicly available software METIS. The objective of the optimization process is to minimize the communication requirements of the DDMs under the constraint of producing balanced processor workloads. This constraint optimization procedure for treating the subdomain cluster generation task leads to increased computational efficiencies for balancing and dual DDMs.
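The paper delegates the constrained partitioning to METIS; as a hedged stand-in for readers unfamiliar with the setup, the sketch below illustrates only the balance constraint with a naive greedy assignment of subdomains to processors (largest workload first, least-loaded processor next). It omits the communication-minimisation objective and METIS's multilevel machinery entirely; workloads and processor counts are illustrative.

```python
# A toy greedy stand-in for the subdomain-cluster generation task: assign
# each subdomain to the processor with the smallest accumulated workload,
# illustrating only the balanced-workload constraint (the communication
# objective handled by METIS in the paper is omitted).

def cluster_subdomains(workloads, n_procs):
    """workloads: {subdomain: cost}; returns {processor: [subdomains]}."""
    clusters = {p: [] for p in range(n_procs)}
    load = [0.0] * n_procs
    # largest subdomains first, each to the currently least-loaded processor
    for sub, cost in sorted(workloads.items(), key=lambda kv: -kv[1]):
        p = min(range(n_procs), key=load.__getitem__)
        clusters[p].append(sub)
        load[p] += cost
    return clusters
```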
H.R. Khataee, M.Y. Ibrahim, S. Sourchi, L. Eskandari and M.A. Teh Noranis
Abstract
Purpose
One of the significant underlying principles of nanorobotic systems deals with the understanding and conceptualization of their respective complex nanocomponents. This paper introduces a new methodology to compute a set of optimal electronic and mathematical properties of the Buckyball nanoparticle using graph algorithms based on dynamic programming and greedy algorithms.
Design/methodology/approach
Buckyball, C60, is composed of sixty equivalent carbon atoms arranged as a highly symmetric hollow spherical cage in the form of a soccer ball. First, the Wiener, hyper‐Wiener, Harary and reciprocal Wiener indices were computed using dynamic programming, giving W(Buckyball)=11870.4, WW(Buckyball)=52570.9, Ha(Buckyball)=102.2 and RW(Buckyball)=346.9. The Hosoya and hyper‐Hosoya polynomials of Buckyball, which are related to these indices, were also computed. The relationships between Buckyball's indices and polynomials were then evaluated and showed good agreement with their mathematical equations. In addition, a greedy graph algorithm was used to find optimal electronic aspects of Buckyball's structure by computing its minimum weight spanning tree (MWST).
Findings
The computed MWST indicated that connecting the sixty carbon atoms of Buckyball requires a minimum of 30 double bonds, 29 single bonds and 178 electrons. These results also agree with the principles of the greedy algorithm used.
Originality/value
This paper uses graph algorithms to compute optimal electronic and mathematical properties of Buckyball (BB). It focuses on mathematical properties of BB, including the Wiener, hyper‐Wiener, Harary and reciprocal Wiener indices as well as the Hosoya and hyper‐Hosoya polynomials, and computes them with dynamic-programming graph algorithms.
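The two graph computations named above can be sketched generically, on an arbitrary unweighted graph rather than Buckyball itself: the Wiener index as the sum of all-pairs shortest-path distances (here via repeated breadth-first search, a simple alternative to the paper's dynamic-programming formulation), and a minimum-weight spanning tree via Kruskal's greedy algorithm.

```python
# Wiener index and greedy (Kruskal) minimum spanning tree on small graphs.
from collections import deque

def wiener_index(adj):
    """adj: {v: iterable of neighbours}; sum of distances over unordered pairs."""
    total = 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:                      # breadth-first search from src
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2                     # each pair counted twice

def kruskal_mst(n, edges):
    """edges: [(weight, u, v)] on vertices 0..n-1; returns (weight, chosen edges)."""
    parent = list(range(n))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    weight, chosen = 0, []
    for w, u, v in sorted(edges):         # greedy: lightest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                      # skip edges that would close a cycle
            parent[ru] = rv
            weight += w
            chosen.append((u, v))
    return weight, chosen
```

On a weighted molecular graph, an MWST computed this way spans all sixty atoms with fifty-nine bonds of minimum total weight, which is the structure the abstract's bond counts refer to.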
Mohammad Khalid Pandit, Roohie Naaz Mir and Mohammad Ahsan Chishti
Abstract
Purpose
The intelligence in the Internet of Things (IoT) can be embedded by analyzing the huge volumes of data generated by it in an ultralow latency environment. The computational latency incurred by the cloud-only solution can be significantly brought down by the fog computing layer, which offers a computing infrastructure to minimize the latency in service delivery and execution. For this purpose, a task scheduling policy based on reinforcement learning (RL) is developed that can achieve the optimal resource utilization as well as minimum time to execute tasks and significantly reduce the communication costs during distributed execution.
Design/methodology/approach
To realize this, the authors propose a two-level neural network (NN)-based task scheduling system, where the first-level NN (a feed-forward or convolutional neural network, FFNN/CNN) determines whether a data stream can be analyzed (executed) in the resource-constrained environment (edge/fog) or should be forwarded directly to the cloud. The second-level NN (RL module) schedules all tasks sent to the fog layer by the level-1 NN among the available fog devices. This real-time task assignment policy is used to minimize the total computational latency (makespan) as well as communication costs.
Findings
Experimental results indicated that the RL technique works better than the computationally infeasible greedy approach for task scheduling and the combination of RL and task clustering algorithm reduces the communication costs significantly.
Originality/value
The proposed algorithm fundamentally solves the problem of task scheduling in real-time fog-based IoT with best resource utilization, minimum makespan and minimum communication cost between the tasks.
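The RL policy itself is beyond a short sketch, but the kind of greedy baseline such schedulers are compared against can be illustrated: each incoming task goes to the fog device that would finish it earliest, a classic list-scheduling heuristic for minimising makespan. Device speeds and task sizes below are illustrative assumptions, not values from the paper.

```python
# Greedy list-scheduling baseline for fog task assignment: send each task
# to the device with the earliest completion time for it (time = size/speed).

def greedy_schedule(task_sizes, device_speeds):
    """Returns (assignment list, makespan)."""
    finish = [0.0] * len(device_speeds)
    assignment = []
    for size in task_sizes:
        # pick the device that would complete this task soonest
        d = min(range(len(device_speeds)),
                key=lambda i: finish[i] + size / device_speeds[i])
        finish[d] += size / device_speeds[d]
        assignment.append(d)
    return assignment, max(finish)
```

An RL scheduler can improve on this by learning from the consequences of whole assignment sequences (and, per the Findings, by clustering communicating tasks together), whereas the greedy rule considers one task at a time.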
A. Kaveh and G.R. Roosta
Abstract
An improvement is presented for existing minimal cycle basis selection algorithms, increasing their efficiency. It consists of reducing the number of cycles considered as candidate elements of a minimal cycle basis, making practical use of the Greedy algorithm feasible. A modification is also included to form suboptimal‐minimal cycle bases in place of minimal bases. An efficient algorithm is developed to form suboptimal cycle bases of graphs, in which the Greedy algorithm is applied twice: first a suboptimal‐minimal cycle basis is formed, and then, ignoring the minimality, a basis whose elements have smaller overlaps is selected.
Petko Kitanov, Odile Marcotte, Wil H.A. Schilders and Suzanne M. Shontz
Abstract
Purpose
To simulate large parasitic resistive networks, one must reduce the size of the circuit models through methods that are accurate and preserve terminal connectivity and network sparsity. The purpose here is to present such a method, which exploits concepts from graph theory in a systematic fashion.
Design/methodology/approach
The model order reduction problem is formulated for parasitic resistive networks through graph theory concepts and algorithms are presented based on the notion of vertex cut in order to reduce the size of electronic circuit models. Four variants of the basic method are proposed and their respective merits discussed.
Findings
The algorithms proposed enable the production of networks that are significantly smaller than those produced by earlier methods, in particular the method described in the report by Lenaers entitled “Model order reduction for large resistive networks”. The reduction in the number of resistors achieved through the algorithms is even more pronounced in the case of large networks.
Originality/value
The paper seems to be the first to make a systematic use of vertex cuts in order to reduce a parasitic resistive network.
Yongqing Hai, Yufei Guo and Mo Dong
Abstract
Purpose
Integrity of the surface mesh is a prerequisite for computational engineering: nonwatertight meshes with holes are inconvenient for applications. Unlike simple modeling or visualization, downstream industrial application scenarios place higher demands on hole-filling, although many related algorithms have been developed. This study addresses the hole-filling issue in industrial application scenarios.
Design/methodology/approach
The algorithm overcomes some inherent weaknesses of general methods and generates a high-quality resulting mesh. Initially, the primitive hole boundary is filled with a more appropriate triangulation that introduces fewer geometric errors. To better approximate the shape of the background mesh, the algorithm then refines the initial triangulation with topology optimization. Once the background mesh defining the geometry and size field is obtained, spheres are packed on it to determine the vertex configuration, and the resulting high-quality mesh is generated.
Findings
By emphasizing geometry recovery and mesh quality, the proposed algorithm performs well at hole-filling in industrial application scenarios. Extensive experimental results demonstrate the reliability and performance of the algorithm, and the processed meshes can be used directly for industrial simulation computations.
Originality/value
This paper makes input meshes more adaptable for solving programs through local modifications on meshes and perfects the preprocessing technology of finite element analysis (FEA).
Nai‐Luen Lai, Chun‐Han Lin and Chung‐Ta King
Abstract
Purpose
A primary task of wireless sensor networks is to measure environmental conditions. In most applications, a sink node is responsible for collecting data from the sensors through multihop communications. The communication pattern is called convergecast. However, radio congestion around the sink can easily become a bottleneck for the convergecast. The purpose of this paper is to consider both scheduling algorithms and routing structures to improve the throughput of convergecast.
Design/methodology/approach
The paper addresses the issue from two perspectives: first, by considering transmission scheduling that reduces radio interference so that convergecast can be performed efficiently; second, by studying the effects of routing structures on convergecast. A routing algorithm, called disjoint‐strip routing, is proposed as an alternative to existing shortest‐path routing.
Findings
The paper shows that constructing a shortest‐length conflict‐free schedule is equivalent to finding a minimal vertex coloring. To solve the scheduling problem, a virtual‐node expansion is proposed to handle relay operations and then coloring algorithms are utilized. Regarding the routing structures, a disjoint‐strip algorithm is proposed to leverage possible parallel transmissions. Proposed algorithms are evaluated through simulations.
Originality/value
This paper separates the problem of optimizing data‐collection throughput into two stages: constructing a routing structure on a given deployment, and scheduling the activation time of each link. Determining routing topologies and communication schedules for optimal throughput is shown to be hard, so heuristics are applied in both stages. Virtual‐node expansion (VNE) is proposed, which makes traffic information visible to coloring algorithms; its advantage is verified through simulations, and it can be applied to any coloring algorithm and any deterministic traffic pattern. It is shown that routing structures set a limit on the performance of scheduling algorithms. Routing algorithms can improve convergecast throughput in two ways: by reducing the total number of transmissions during data collection, or by transferring data in parallel. Shortest‐path routing addresses the first point, while disjoint‐strip (DS) routing addresses the second. As expected, when deployments are even and balanced, minimizing the number of transmissions is more effective than parallelizing them; when deployments are unbalanced and conflicts are not strict, parallel transmissions can improve throughput.
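The scheduling stage reduces to vertex coloring of a conflict graph: interfering links must get different colors, and each color class is a time slot. A minimal greedy-coloring sketch (not the paper's specific algorithm, and without the virtual-node expansion) looks like this; the link names are illustrative.

```python
# Greedy vertex coloring of a link-conflict graph: color highest-degree
# links first, giving each the smallest time slot unused by its conflicts.

def greedy_coloring(conflicts):
    """conflicts: {link: set of conflicting links}; returns {link: slot}."""
    slot = {}
    for link in sorted(conflicts, key=lambda v: -len(conflicts[v])):
        used = {slot[n] for n in conflicts[link] if n in slot}
        # smallest time slot not used by any conflicting link
        slot[link] = next(s for s in range(len(conflicts)) if s not in used)
    return slot
```

The number of distinct slots in the result is the schedule length, which is why shortest conflict-free scheduling corresponds to minimal vertex coloring.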
G. Sisias, R. Phillips, C.A. Dobson, M.J. Fagan and C.M. Langton
Abstract
A set of algorithms has been developed and evaluated for 3D and 2½D rapid prototyping replication of 3D reconstructions of cancellous bone samples. The algorithms replicate a voxel map without any loss of fidelity, so as to increase the validity of comparing mechanical tests on the 3D reconstructed models with those predicted by finite element analyses. The evaluation covers both algorithmic complexity and the resultant data set size: the former determines the feasibility of the conversion process, whereas the latter determines the potential success of the manufacturing process. The algorithms and their implementation in PC software are presented.