Search results

1 – 10 of over 10000
Article
Publication date: 30 January 2009

Ali Ardalan and Roya Ardalan

Abstract

Purpose

Efficient operation of supply chain management (SCM) software depends heavily on the performance of the data structures used for data storage and retrieval. Each module in the software should use data structures appropriate for the types of operations it performs. The purpose of this paper is to develop and introduce an efficient data structure for the storage and retrieval of data on the capacity of resources.

Design/methodology/approach

A major aim of supply chain management systems is the timely production and delivery of products. This paper reviews data structures and designs an efficient data structure for the storage and retrieval of data used in the scheduling module of SCM software.

Findings

This paper introduces a new data structure, together with search and update algorithms for it. The data structure can be used in SCM software to record the availability of non-storable resources.

Originality/value

This is the first paper to discuss the role of data structures in SCM software and to develop a data structure for use in the scheduling routine of SCM systems. Scheduling is one of the most complex modules of SCM software. The new data structure exploits special characteristics of resource capacity so that it can be searched and updated efficiently as part of scheduling routines. It is a modified version of a threaded, height-balanced binary search tree: each node holds one more key than a node in an ordinary threaded height-balanced binary search tree. The search and update algorithms available in the literature for height-balanced binary search trees are modified to suit the proposed tree.
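The abstract does not spell out what the extra key per node is, but the idea can be sketched. In the illustrative (hypothetical) reading below, each node stores an interval of available capacity, so the extra key is the interval's end; a scheduling search can then test whole intervals at each node. Threads and AVL rotations are omitted for brevity, and all field names are assumptions, not the authors' implementation:

```python
class CapacityNode:
    """One node of a height-balanced BST extended with one extra key --
    here (an assumption for illustration) the end of an interval of
    available capacity. Threading and rebalancing are omitted."""
    def __init__(self, start, end):
        self.start, self.end = start, end   # interval of free capacity
        self.left = self.right = None
        self.height = 1

def find_slot(node, need_start, need_end):
    """Descend on the primary key (interval start); the extra 'end' key
    lets each visit decide containment without touching the children."""
    while node is not None:
        if node.start <= need_start and need_end <= node.end:
            return node
        node = node.left if need_start < node.start else node.right
    return None

# Tiny hand-built tree: free capacity at [0, 5], [10, 20] and [30, 40].
root = CapacityNode(10, 20)
root.left, root.right = CapacityNode(0, 5), CapacityNode(30, 40)
```

A request for the span [12, 18] would be answered at the root without descending, while an infeasible span such as [25, 26] falls off the tree.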

Details

Industrial Management & Data Systems, vol. 109 no. 1
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 6 March 2009

Ik Sang Shin, Sang‐Hyun Nam, Rodney Roberts and Seungbin Moon

Abstract

Purpose

The purpose of this paper is to provide a minimum-time algorithm for intercepting an object on a conveyor belt with a robotic manipulator. The goal is for the robot to intercept objects on a conveyor line moving at a given speed in minimum time.

Design/methodology/approach

To formulate the problem, robot travel-time and object-arrival-time functions are introduced, and it is shown that the optimal point occurs at the intersection of these two functions. A search algorithm for finding this intersection point in real time is also presented.
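The intersection idea can be sketched as follows. The object reaches a point x on the belt at a time that grows linearly with x, while the robot needs its own travel time to reach x; the earliest feasible interception is where the two curves cross. A bisection on their difference finds that crossing quickly (an illustrative sketch with toy numbers, not the paper's algorithm or time functions):

```python
def intercept_point(t_robot, t_obj, lo, hi, tol=1e-9):
    """Bisection on f(x) = t_robot(x) - t_obj(x), assuming the sign of f
    differs at lo and hi (robot 'wins' on one side, object on the other)."""
    f = lambda x: t_robot(x) - t_obj(x)
    a, b = lo, hi
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (f(m) > 0) == (fa > 0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

# Toy example: belt speed 0.5 m/s, object starts at x = 0,
# robot travel time grows with distance from its base at x = 1.
v = 0.5
t_obj = lambda x: x / v                  # object arrival time at x
t_robot = lambda x: 0.2 + abs(x - 1.0)   # hypothetical robot travel-time model
x_star = intercept_point(t_robot, t_obj, 0.0, 1.0)
```

With these toy functions the curves cross at x = 0.4; the real travel-time function would come from the robot's kinematics.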

Findings

Simulation results show that the presented algorithm performs well for various initial robot positions.

Practical implications

A trapezoidal velocity profile, as used in many industrial robots currently in service, was employed. It is therefore believed that the robot travel-time algorithm can be readily implemented on any commercially available robot.
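Travel time under a trapezoidal profile has a simple closed form, sketched below under standard assumptions (symmetric acceleration and deceleration at rate a, cruise speed v_max); this is the textbook formula, not necessarily the paper's exact model:

```python
import math

def travel_time(d, v_max, a):
    """Time to cover distance d with a trapezoidal velocity profile:
    accelerate at a up to v_max, cruise, decelerate at a. If d is too
    short to reach v_max, the profile degenerates to a triangle."""
    d_acc = v_max ** 2 / a          # distance spent accelerating + decelerating
    if d >= d_acc:
        return 2 * v_max / a + (d - d_acc) / v_max
    return 2 * math.sqrt(d / a)     # triangular profile: never reaches v_max
```

For example, with v_max = 2 m/s and a = 1 m/s², a 4 m move takes exactly the 4 s of accelerate-decelerate with no cruise, while a 1 m move follows the triangular branch.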

Originality/value

The paper exhaustively considers the cases in which the robot travel-time function depends on the initial position of the robotic end-effector. It also presents a fast-converging search algorithm that makes real-time application feasible in many cases.

Details

Industrial Robot: An International Journal, vol. 36 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 February 1987

Mark Stewart and Peter Willett

Abstract

This paper describes the simulation of a nearest neighbour searching algorithm for document retrieval using a pool of microprocessors. The documents in a database are organised in a multi‐dimensional binary search tree, and the algorithm identifies the nearest neighbour for a query by a backtracking search of this tree. Three techniques are described which allow parallel searching of the tree. A PASCAL‐based, general purpose simulation system is used to simulate these techniques, using a pool of Transputer‐like microprocessors with three standard document test collections. The degree of speed‐up and processor utilisation obtained is shown to be strongly dependent upon the characteristics of the documents and queries used. The results support the use of pooled microprocessor systems for searching applications in information retrieval.
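The sequential core being parallelised here is the classic backtracking nearest-neighbour search over a multi-dimensional binary search tree. A minimal single-processor sketch (points stand in for document vectors; this is the standard k-d tree procedure, not the paper's parallel variant):

```python
import math

def build_kdtree(points, depth=0):
    """Organise points (equal-length tuples) into a multi-dimensional
    binary search tree, cycling through coordinates as the split key."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Backtracking search: descend toward the query, then unwind and
    visit the far subtree only if the split plane could hide a closer point."""
    if node is None:
        return best
    point, axis = node["point"], node["axis"]
    d = math.dist(point, query)
    if best is None or d < best[1]:
        best = (point, d)
    diff = query[axis] - point[axis]
    near, far = ((node["left"], node["right"]) if diff < 0
                 else (node["right"], node["left"]))
    best = nearest(near, query, best)
    if abs(diff) < best[1]:          # far side may still contain the winner
        best = nearest(far, query, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```

The parallel techniques in the paper distribute exactly these backtracking visits across a pool of processors.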

Details

Journal of Documentation, vol. 43 no. 2
Type: Research Article
ISSN: 0022-0418

Book part
Publication date: 12 November 2015

Jeffrey Burke and Mario Torres

Abstract

This chapter examines the relationship between community educational attainment and Fourth Amendment legal principles being implemented in public schools. Using educational attainment data obtained from the U.S. Census, this study examined the influence of educational attainment on how searches of students were conducted and on the related legal and judicial outcomes. The results of this study offer insight on issues related to forms of discipline in public schools and contribute to knowledge bases in the fields of economics, law, social theory, and educational leadership and administration.

Prior studies regarding the Fourth Amendment in schools focused largely on administrative decisions, judgments, and practices, but the aspect of educational attainment has been minimally investigated. Findings suggest community educational attainment has little to no predictive influence on aspects related to student searches examined in the study, which include the intrusiveness level of the search and the number of searches occurring during a single search event. Implications for future research and leadership are discussed.

Details

Legal Frontiers in Education: Complex Law Issues for Leaders, Policymakers and Policy Implementers
Type: Book
ISBN: 978-1-78560-577-2

Article
Publication date: 1 February 1995

A. Munjiza, D.R.J. Owen and N. Bicanic

Abstract

This paper discusses the issues involved in the development of combined finite/discrete element methods, both from a fundamental theoretical viewpoint and in terms of the algorithmic considerations essential for the efficient numerical solution of large-scale industrial problems. The finite element representation of the solid region is combined with progressive fracturing, leading to the formation of discrete elements that may be composed of one or more deformable finite elements. The applicability of the approach is demonstrated by the solution of a range of examples relevant to various industrial sectors.

Details

Engineering Computations, vol. 12 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 August 1997

A. Macfarlane, S.E. Robertson and J.A. McCann

Abstract

The progress of parallel computing in Information Retrieval (IR) is reviewed. In particular, we stress the importance of the motivation for using parallel computing in text retrieval. We analyse parallel IR systems using a classification defined by Rasmussen and describe several such systems. We give a description of the retrieval models used in parallel information processing, and outline areas where we believe further research is needed.

Details

Journal of Documentation, vol. 53 no. 3
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 2 May 2017

Guillermo Gonzalo Schiava D'Albano, Tomas Lukas, Fang Su, Theodosios Korakianitis and Ante Munjiza

Abstract

Purpose

Contact interaction and contact detection (CD) remain key components of any discontinua simulation. Discontinua methods include the combined finite-discrete element method (FDEM), the discrete element method, molecular dynamics, etc. In recent years, a number of CD algorithms have been developed, such as Munjiza–Rougier (MR), Munjiza–Rougier–Schiava (MR-S), Munjiza No Binary Search (NBS), Balanced Binary Tree Schiava (BBTS), 3D Discontinuous Deformation Analysis and many others. This work conducts a numerical comparison of the MR, MR-S, NBS and BBTS algorithms, which are often used in FDEM for bodies of the same size.
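The common idea behind binning-based CD schemes such as NBS can be sketched for equal-sized bodies: hash each body's centre into a grid cell whose side equals the body size, then test only pairs in the same or neighbouring cells. This is an illustrative 2D sketch of the general cell-hashing technique, not any of the compared algorithms verbatim:

```python
from collections import defaultdict

def grid_contacts(centers, d):
    """Cell-based contact detection for bodies of equal size d: bucket
    centres into cells of side d, then test only same-cell and
    neighbouring-cell pairs. Cost stays linear in the number of bodies."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(centers):
        cells[(int(x // d), int(y // d))].append(i)
    contacts = set()
    for (cx, cy), members in cells.items():
        for nx in (cx - 1, cx, cx + 1):
            for ny in (cy - 1, cy, cy + 1):
                for i in members:
                    for j in cells.get((nx, ny), ()):
                        if i < j:
                            xi, yi = centers[i]
                            xj, yj = centers[j]
                            if (xi - xj) ** 2 + (yi - yj) ** 2 <= d * d:
                                contacts.add((i, j))
    return contacts
```

The algorithms compared in the paper differ mainly in how they store and traverse these buckets (linked lists, balanced binary trees, etc.), which is what drives the CD-time differences reported.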

Design/methodology/approach

Computational simulations were used in this work.

Findings

In discrete element simulations where particles are introduced randomly or in which the relative position between particles is constantly changing, the MR and MR-S algorithms present an advantage in terms of CD times.

Originality/value

This paper presents a detailed comparison between CD algorithms. The comparisons are performed for problem cases with different lattices and distributions of particles in discrete element simulations, and include algorithms that have not previously been compared against one another. In addition, two new algorithms, MR-S and BBTS, are presented in the paper.

Details

Engineering Computations, vol. 34 no. 3
Type: Research Article
ISSN: 0264-4401

Details

Database Management Systems
Type: Book
ISBN: 978-1-78756-695-8

Article
Publication date: 1 June 2006

A.C. Caputo, L. Fratocchi and P.M. Pelagagge

Abstract

Purpose

The purpose of this paper is to present a methodology for optimally planning long-haul road transport activities by aggregating customer orders into separate full-truckload or less-than-truckload shipments so as to minimize total transportation costs.

Design/methodology/approach

The model is applied to a specific Italian multi-plant firm operating in the plastic packaging film sector. Given the order quantities to be shipped and the locations of customers, the method first aggregates shipments into subgroups of compatible orders using a heuristic procedure, and then consolidates them into optimized full-truckload and less-than-truckload shipments using a genetic algorithm, minimizing total shipping costs while respecting delivery due dates and appropriate geographical and truck-capacity constraints.
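The first, heuristic stage can be sketched with a classic greedy bin-packing rule (an illustrative sketch only: the paper's actual compatibility checks, cost model and GA stage are not reproduced here, and quantities are simplified to scalar volumes):

```python
def aggregate_orders(quantities, capacity):
    """First-fit-decreasing sketch of the aggregation stage: greedily
    pack order quantities into truckload groups under a capacity cap.
    Groups that fill a truck become FTL candidates; the rest are LTL."""
    trucks = []
    for qty in sorted(quantities, reverse=True):
        for load in trucks:
            if sum(load) + qty <= capacity:   # fits in an open truck
                load.append(qty)
                break
        else:
            trucks.append([qty])              # open a new truck
    return trucks
```

In the two-stage methodology, a genetic algorithm would then refine such an initial grouping against the full cost, due-date and geographical constraints.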

Findings

The paper demonstrates that evolutionary computation techniques may be effective in the tactical planning of transportation activities. The model shows that substantial savings in overall transportation cost may be achieved by adopting the proposed methodology in a real-life scenario.

Research limitations/implications

The main limitation of this optimisation methodology is that a heuristic procedure, rather than an enumerative approach, is used to first aggregate shipments into compatible sets before the optimisation algorithm assigns customer orders to separate truckloads. Although this means the solution may be sub-optimal, the procedure has demonstrated very satisfactory performance and makes the problem manageable in real-life settings.

Practical implications

The proposed methodology enables logistics managers to rapidly choose whether a customer order should be shipped via FTL or LTL transport, and aggregates different orders into separate shipments so as to minimize total transportation costs. As a consequence, the task of logistics managers is greatly simplified, and consistently better performance than manual planning can be obtained.

Originality/value

The described methodology is original both in the approach adopted to solve the problem of optimising order shipping in long-haul direct-shipping distribution logistics, and in the solution technique, which integrates a heuristic algorithm with an original formulation of a GA optimisation problem. Moreover, the methodology solves both the truckload assignment problem and the LTL vs FTL shipment choice, making it a useful tool for logistics managers.

Details

Industrial Management & Data Systems, vol. 106 no. 5
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 18 September 2009

Wei Lu, Andrew MacFarlane and Fabio Venuti

Abstract

Purpose

Being an important data exchange and information storage standard, XML has generated a great deal of interest and particular attention has been paid to the issue of XML indexing. Clear use cases for structured search in XML have been established. However, most of the research in the area is either based on relational database systems or specialized semi‐structured data management systems. This paper aims to propose a method for XML indexing based on the information retrieval (IR) system Okapi.

Design/methodology/approach

First, the paper reviews the structure of inverted files and explains why this indexing mechanism cannot properly support XML retrieval, using the underlying data structures of Okapi as an example. Then the paper explores a revised method implemented on Okapi using path indexing structures. The paper evaluates these index structures through the metrics of indexing run time, path-search run time and space costs, using the INEX and Reuters RCV1 collections.
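The gist of a path-augmented inverted file can be sketched as follows: alongside the usual term-to-postings index, postings are tagged with (and a separate index keyed by) root-to-element paths, so structured queries can be answered inside the IR engine. This is an illustrative sketch of the general technique, not Okapi's actual data structures:

```python
from collections import defaultdict

def build_path_index(docs):
    """docs maps doc_id -> list of (path, text) element pairs.
    Returns a term index whose postings carry the element path, plus a
    path index for purely structural lookups."""
    term_index = defaultdict(set)
    path_index = defaultdict(set)
    for doc_id, elements in docs.items():
        for path, text in elements:
            path_index[path].add(doc_id)
            for term in text.lower().split():
                term_index[term].add((doc_id, path))
    return term_index, path_index

def search(term_index, term, path):
    """Match a term only within elements reached by the given path."""
    return {d for d, p in term_index.get(term.lower(), ()) if p == path}

docs = {
    1: [("/article/title", "XML indexing"), ("/article/body", "inverted files")],
    2: [("/article/title", "parallel retrieval")],
}
term_index, path_index = build_path_index(docs)
```

The space overhead the paper measures comes precisely from carrying path information in the postings, which is why collection size matters in the reported results.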

Findings

Initial results on the INEX collections show that there is a substantial space-cost overhead for the method, but this increase does not affect run time adversely. Indexing results on Reuters RCV1 sub-collections of differing sizes show that the increase in space costs as collection size grows is significant, but the increase in run time is linear. Path-search results show sub-millisecond run times, demonstrating minimal overhead for XML search.

Practical implications

Overall, the results show the method implemented to support XML search in a traditional IR system such as Okapi is viable.

Originality/value

The paper provides useful information on a method for XML indexing based on the IR system Okapi.

Details

Aslib Proceedings, vol. 61 no. 5
Type: Research Article
ISSN: 0001-253X
