Search results
1 – 10 of over 25,000
P.A.D. de Maine, K.D. Bradley and S.M. Jodis
Abstract
The General Information Management (GIM) system defined in this paper is designed to: (1) be information independent; (2) be logically data independent (it is therefore question-type independent); (3) honor requests for information in small and bounded search times; (4) provide a security system that is foolproof, virus-proof and easy to use; (5) be economical and efficient in the use of memory and data communication systems; and (6) be modular in design to function in distributed or standalone environments. The basis of the GIM system is a context-free language, or data structure, called JOBLIST, and a simulated communications network, called SOLID. Queries, converted to JOBLIST, directly describe the information paths in SOLID that terminate at the location(s) of the referenced information. There is no directory. A proof-of-principle prototype has established that the JOBLIST/SOLID system fully meets the above specifications.
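The directory-free lookup idea can be illustrated with a generic sketch (entirely hypothetical; the `PathStore` class and path tuples below are illustrative inventions, not the paper's actual JOBLIST or SOLID structures) in which a query compiles to a path whose segments directly address nested storage nodes, so no separate directory is consulted and lookup time is bounded by path length rather than collection size.

```python
# Hypothetical sketch: a query, compiled to a path, directly addresses
# nested storage nodes; there is no separate directory to consult.

class Node:
    def __init__(self):
        self.children = {}   # path segment -> child node
        self.records = []    # information stored at this location

class PathStore:
    def __init__(self):
        self.root = Node()

    def insert(self, path, record):
        node = self.root
        for segment in path:
            node = node.children.setdefault(segment, Node())
        node.records.append(record)

    def query(self, path):
        # Search time is bounded by len(path), independent of store size.
        node = self.root
        for segment in path:
            if segment not in node.children:
                return []
            node = node.children[segment]
        return node.records

store = PathStore()
store.insert(("chemistry", "kinetics", "rate-constants"), "dataset-17")
print(store.query(("chemistry", "kinetics", "rate-constants")))  # ['dataset-17']
```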
Abstract
Purpose
This paper aims to present an innovative research methodology that enables a company to realign its quality cost elements in order to improve implementation of its quality system.
Design/methodology/approach
The methodology combines the following methods: the house of quality costs (HOQC) method, which translates the desired improvement in failure costs (internal and external) into controllable efforts (prevention and appraisal costs) and ranks them by relative importance; the analysis of variance method, which supports selection of vital quality costs; and the enhanced control chart method, used to validate the strong causal linkages in HOQC.
Findings
Two case studies are presented to illustrate the application of the developed methodology. In the furniture firm, there are two vital sources of defects that could affect the overall cost of quality: raw materials and the production process. In the food firm, traditional quality control is not enough to eliminate quality problems from the production processes; hence, hazard analysis and critical control points (HACCP) is implemented.
Practical implications
The methodology applied in this paper proved itself capable of effectively handling realignment of quality cost elements. The methodology emphasizes adopting a systemic approach for selecting the vital controllable efforts in response to vital failure costs, as well as for detecting changes in the quality cost structure.
Originality/value
The American Society for Quality provides good coverage of quality cost types, but offers no mechanism for building and maintaining the relevance of these costs.
Seyed Ashkan Zarghami and Indra Gunawan
Abstract
Purpose
As a response to the growing operational and disruptive threats to water distribution networks (WDNs), researchers have developed a vast array of methods for the reliability analysis of WDNs. To keep pace with this growing number of methods, this paper reviews and documents in one place the historical developments in the reliability analysis of WDNs.
Design/methodology/approach
A systematic literature review (SLR) is carried out to summarize the state-of-the-art research on reliability analysis of WDNs. In conducting this systematic literature review, the authors adopted an iterative approach to define appropriate keywords, analyze and synthesize data, and finalize the classification results.
Findings
First, the hydraulic approach to reliability analysis is currently pervasive, and relatively little academic research has addressed the topological reliability analysis of WDNs. Second, to provide a comprehensive picture of network reliability, an approach that integrates topological and hydraulic attributes appears more effective. Third, conventional reliability analysis methods are only effective for demonstrating a snapshot of these networks at a given point in time; the availability of methods that enable researchers to evaluate reliability in response to changes in a network's variables remains a major challenge.
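The less-explored direction named above, purely topological reliability, can be sketched with a minimal Monte Carlo simulation (illustrative only; the small network, node names and failure probability below are assumptions, not taken from any reviewed study): estimate the probability that every demand node stays connected to the source when pipes fail independently.

```python
# Monte Carlo estimate of topological reliability: the fraction of trials
# in which all demand nodes remain connected to the source node after
# random, independent pipe failures.
import random

def connected(nodes, edges, source):
    """Return the set of nodes reachable from source over surviving edges."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, stack = {source}, [source]
    while stack:
        for nxt in adj[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

def topological_reliability(nodes, pipes, source, demand, p_fail, trials=10000):
    rng = random.Random(1)  # fixed seed for reproducibility
    ok = 0
    for _ in range(trials):
        surviving = [e for e in pipes if rng.random() > p_fail]
        if demand <= connected(nodes, surviving, source):
            ok += 1
    return ok / trials

nodes = ["S", "A", "B", "C"]
pipes = [("S", "A"), ("S", "B"), ("A", "C"), ("B", "C"), ("A", "B")]
print(topological_reliability(nodes, pipes, "S", {"A", "B", "C"}, p_fail=0.1))
```

A hydraulic analysis would replace the connectivity check with a flow/pressure simulation; integrating the two is the direction the review identifies.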
Originality/value
The present paper facilitates future research in the reliability analysis of WDNs by providing a source of references for researchers and water utilities. Further, this article makes a contribution to the literature by offering a roadmap for future reliability analysis of WDNs by reviewing the evolution of the current reliability analysis methods throughout history.
Chenfeng Xiong, Xiqun Chen and Lei Zhang
Abstract
Purpose
This chapter explores a descriptive theory of multidimensional travel behaviour, estimation of quantitative models, and demonstration in an agent-based microsimulation.
Theory
A descriptive theory on multidimensional travel behaviour is conceptualised. It theorizes multidimensional knowledge updating, search start/stopping criteria, and search/decision heuristics. These components are formulated or empirically modelled and integrated in a unified and coherent approach.
Findings
The theory is supported by empirical observations and the derived quantitative models are tested by an agent-based simulation on a demonstration network.
Originality and value
Based on artificially intelligent agents, learning and search theory, and bounded rationality, this chapter seeks to establish a sound theoretical foundation for the computational process approach and agent-based microsimulations. A pertinent new theory is proposed, with experimental observations and estimations demonstrating agents with systematic deviations from the rationality paradigm. Procedural and multidimensional decision-making are modelled. The numerical experiment highlights the capabilities of the proposed theory in estimating rich behavioural dynamics.
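Two of the components named above, knowledge updating and a bounded-rational decision heuristic, can be sketched as a minimal agent (the `Traveller` class and all parameter values below are assumptions for illustration, not the chapter's estimated models).

```python
# A minimal boundedly rational traveller: perceived travel times are
# updated from experience by exponential smoothing, and the agent switches
# routes only when the improvement exceeds a tolerance band.

class Traveller:
    def __init__(self, routes, alpha=0.3, tolerance=1.05):
        self.perceived = dict(routes)   # route -> perceived travel time
        self.alpha = alpha              # learning rate for knowledge updating
        self.tolerance = tolerance      # switch only for a >5% improvement
        self.current = min(routes, key=routes.get)

    def experience(self, route, observed_time):
        # Knowledge updating: exponential smoothing of perceived time.
        old = self.perceived[route]
        self.perceived[route] = (1 - self.alpha) * old + self.alpha * observed_time

    def choose(self):
        # Decision heuristic: stay with the habitual route unless another
        # is clearly better, a deviation from strict utility maximisation.
        best = min(self.perceived, key=self.perceived.get)
        if self.perceived[self.current] > self.tolerance * self.perceived[best]:
            self.current = best
        return self.current

t = Traveller({"A": 20.0, "B": 25.0})
t.experience("A", 50.0)   # heavy congestion on the habitual route A
print(t.choose())          # the agent switches to route B
```

The tolerance band is what produces the systematic deviations from the rationality paradigm: small advantages do not trigger a switch.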
Yazdan Mansourian and Nigel Ford
Abstract
Purpose
Our current knowledge of how searchers perceive and react to the possibility of missing potentially important information whilst searching the web is limited. The study reported here seeks to investigate such perceptions and reactions, and to explore the extent to which Simon's “bounded rationality” theory is useful in illuminating these issues.
Design/methodology/approach
In total, 37 academic staff, research staff and research students in three university departments were interviewed about their web searching. The open-ended, semi-structured interviews were inductively analysed. The emergence of the concept of “good enough” searching prompted a further analysis to explore the extent to which the data could be interpreted in terms of Simon's concepts of “bounded rationality” and “satisficing”.
Findings
The results indicate that the risk of missing potentially important information was a matter of concern to the interviewees. Their estimations of the likely extent and importance of missed information affected decisions by individuals as to when to stop searching, decisions based on very different criteria, which map well onto Simon's concepts. On the basis of the interview data, the authors propose tentative categorizations of perceptions of the risk of missing information, including “inconsequential”, “tolerable”, “damaging” and “disastrous”, and of search strategies, including “perfunctory”, “minimalist”, “nervous” and “extensive”. It is concluded that there is at least a prima facie case for bounded rationality and satisficing to be considered potentially useful concepts in our quest to better understand aspects of human information behaviour.
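The “good enough” stopping idea can be sketched as a minimal satisficing rule (the result list, quality scores and aspiration levels below are invented for illustration, not drawn from the study's data): the searcher examines results in order and stops at the first one that meets an aspiration level, rather than searching exhaustively for the best.

```python
# Satisficing search: stop at the first result whose quality meets the
# searcher's aspiration level; the level itself varies with the perceived
# risk of missing information.

def satisficing_search(results, aspiration):
    """Return (result, examined_count) for the first 'good enough' result."""
    for i, (name, quality) in enumerate(results, start=1):
        if quality >= aspiration:
            return name, i
    return None, len(results)  # nothing met the aspiration level

results = [("r1", 0.4), ("r2", 0.7), ("r3", 0.9), ("r4", 0.95)]
# A low aspiration level (a "perfunctory" style) stops early...
print(satisficing_search(results, aspiration=0.6))   # ('r2', 2)
# ...while a high one (an "extensive" style) examines more results.
print(satisficing_search(results, aspiration=0.92))  # ('r4', 4)
```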
Research limitations/implications
Although the findings are based on a relatively small sample and an exploratory qualitative analysis, it is argued that the study raises a number of interesting questions, and has implications for both the development of theory and practice in the areas of web searching and information literacy.
Originality/value
The paper focuses on an aspect of web searching which has not to date been well explored. Whilst research has done much to illuminate searchers' perceptions of what they find on the web, we know relatively little of their perceptions of, and reactions to information that they fail to find. The study reported here provides some tentative models, based on empirical evidence, of these phenomena.
Abstract
Purpose
This paper aims to provide a promising memetic algorithm (MA) for an unrelated parallel machine scheduling problem with grey processing times by using a simple dispatching rule in the local search phase of the proposed MA.
Design/methodology/approach
This paper proposes an MA for an unrelated parallel machine scheduling problem where the objective is to minimize the sum of weighted completion times of jobs with uncertain processing times. In the optimal schedule of the problem's single-machine version with deterministic processing times, jobs are sequenced in increasing order of their weighted processing times. The author adapts this property to some of the local search mechanisms that are required to ensure the local optimality of the solutions generated by the proposed MA. To show the efficiency of the proposed algorithm, this study also tests other local search methods within the MA in the experiments. The uncertainty of processing times is expressed with grey numbers.
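The single-machine optimality property described above, the weighted shortest processing time (WSPT) rule, can be sketched for the deterministic case (the job data below are made up for illustration): sequencing jobs by nondecreasing ratio of processing time to weight minimises the sum of weighted completion times on one machine.

```python
# WSPT (Smith's) rule on a single machine: order jobs by nondecreasing
# p_j / w_j to minimise the sum of weighted completion times.

def weighted_completion_sum(jobs):
    """jobs: list of (processing_time, weight). Returns sum of w_j * C_j."""
    t, total = 0, 0
    for p, w in jobs:
        t += p           # completion time C_j of this job
        total += w * t
    return total

def wspt(jobs):
    return sorted(jobs, key=lambda job: job[0] / job[1])

jobs = [(4, 1), (2, 2), (3, 3)]
print(weighted_completion_sum(wspt(jobs)))  # 28, the optimal value
print(weighted_completion_sum(jobs))        # 43 in the original order
```

In the paper's setting this ordering is used inside the local search of the MA; the grey-number extension replaces the deterministic processing times.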
Findings
Experimental study shows that the MA with the swap-based local search and the weighted shortest processing time (WSPT) dispatching rule outperforms other MA alternatives with swap-based and insertion-based local searches without that dispatching rule.
Originality/value
A promising and effective MA with the WSPT dispatching rule is designed and applied to unrelated parallel machine scheduling problems where the objective is to minimize the sum of the weighted completion times of jobs with grey processing times.
John R. King and Alexander S. Spachis
Scheduling is defined by Baker as, “the allocation of resources over time to perform a collection of tasks”. The term facilities is often used instead of resources and the tasks…
Abstract
Scheduling is defined by Baker as “the allocation of resources over time to perform a collection of tasks”. The term “facilities” is often used instead of “resources”, and the tasks to be performed may involve a variety of different operations.
Jinglin Qi, Zhengbiao Han and Preben Hansen
Abstract
Purpose
This study constructed an information search process model based on costs and benefits to reflect, from a behavioural economics perspective, the different information search processes that arise under different decisions.
Design/methodology/approach
This study used a deductive approach to conceptualise the costs, benefits and uncertainties of the information search process. Subsequently, we constructed an information search process model based on costs and benefits using graphical reasoning, loss aversion theory, bounded rationality theory, the satisficing theory of behavioural economics, and changes in uncertainty during the information search process.
Findings
The model revealed four types of user behaviours in the information search process: (1) avoiding search at the initiation of the search process; (2) exiting in the middle of a search; (3) stopping at the point of satisficing; and (4) continuing the search until experiencing physical discomfort.
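The four behaviours can be read as outcomes of a simple cost-benefit loop (a sketch under assumed numeric parameters and rules, not the authors' formal model): the user weighs expected benefit against cost before starting, then accumulates benefit and cost step by step with a satisficing threshold and a hard physical limit.

```python
# Illustrative cost-benefit reading of the four search behaviours:
# (1) avoid, (2) exit mid-search, (3) stop at satisficing, (4) continue
# until the physical limit. All thresholds and rules are assumptions.

def search_behaviour(expected_benefit, step_cost, gains,
                     satisficing_level, physical_limit):
    # (1) Avoid search: anticipated cost already outweighs expected benefit.
    if expected_benefit <= step_cost:
        return "avoid search"
    benefit = cost = 0.0
    for step, gain in enumerate(gains, start=1):
        cost += step_cost
        benefit += gain
        # (3) Stop at the point of satisficing: benefit is "good enough".
        if benefit >= satisficing_level:
            return "stop at satisficing"
        # (2) Exit mid-search: accumulated cost has overtaken benefit.
        if cost > benefit:
            return "exit mid-search"
        # (4) Keep going until physical discomfort forces a stop.
        if step >= physical_limit:
            return "continue until physical limit"
    return "exit mid-search"

print(search_behaviour(0.5, 1.0, [2, 2, 2], 5, 10))          # avoid search
print(search_behaviour(3.0, 1.0, [2, 2, 2], 5, 10))          # stop at satisficing
print(search_behaviour(3.0, 1.0, [1.5, 0.2, 0.1], 5, 10))    # exit mid-search
```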
Originality/value
The model constructed in this study treats the information search as a process based on costs and benefits with uncertainty. This model integrates information search avoidance and stopping into an information search process model. The model identifies users’ bounded rationality by evaluating ideal and real situations. Moreover, the model explains relative and absolute information overloads and the area beyond the user’s bounded rationality. These findings could help improve users’ information literacy and optimise information systems.