Search results
1 – 10 of over 7000
Abstract
Purpose
To describe consumers' heuristic and analytical searches in pre-purchase information acquisition, and to assess how the flexibility of the information task corresponds to the information found with a search.
Design/methodology/approach
Propositions are based on current research in web use and consumer studies. Tracked records of searches are used for descriptive analysis of transitional patterns in the data, and regression is used for statistical verification of the information provided by searches.
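The transitional-pattern analysis described above can be illustrated with a first-order transition matrix over logged search events; the event labels (`query`, `link`, `backtrack`) and the function name below are illustrative assumptions, not the paper's actual coding scheme:

```python
from collections import Counter, defaultdict

def transition_matrix(events):
    """Turn a logged event sequence into first-order transition probabilities."""
    counts = defaultdict(Counter)
    for a, b in zip(events, events[1:]):
        counts[a][b] += 1
    # Normalize each row of counts into probabilities.
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

log = ["query", "link", "link", "query", "link", "backtrack", "link"]
probs = transition_matrix(log)
```

Rows dominated by `link`-to-`link` transitions would correspond to the chaining behaviour the findings report.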
Findings
Consumer searches center on chaining events, indicating heavy reliance on hyperlink navigation between web sites. Formal searches are seldom used, although when employed they tend to have a high level of diagnosticity. The emphasis on heuristic behavior is logical, as the way consumer information is currently presented on the internet rewards this type of behavior. Use of heuristic search increases the likelihood of access to flexibly presented information.
Research limitations/implications
Consumers favor heuristic trial-and-error searches even in focused fact-finding search tasks, which are typically considered the domain of analytical seeking. Consumers seem to benefit most from apparently inefficient, reactive and heuristic searches, because these are more likely to provide information in a format that the consumer can adapt. The convenience sample limits the generalizability of the findings.
Originality/value
While there is an increasing body of knowledge concerning internet use for finding information, fewer studies have focused on how consumers use the web in search. This paper provides new information about online consumers, an increasingly important topic.
Abstract
Purpose
The purpose of this paper is to elaborate the picture of strategies and tactics for information seeking and searching by focusing on the heuristic elements of such strategies and tactics.
Design/methodology/approach
A conceptual analysis of a sample of 31 pertinent investigations was conducted to find out how researchers have approached heuristics in the above context since the 1970s. To achieve this, the study draws on the ideas produced within the research programmes on Heuristics and Biases, and Fast and Frugal Heuristics.
Findings
Researchers have approached the heuristic elements in three major ways. First, these elements are defined as general-level constituents of browsing strategies in particular. Second, heuristics are approached as search tips. Third, there are examples of conceptualizations of individual heuristics. The familiarity heuristic suggests that people tend to prefer sources that have worked well in similar situations in the past. The recognition heuristic draws on an all-or-none distinction between information objects, based on cues such as information scent. Finally, the representativeness heuristic is based on recalling similar instances of events or objects and judging their typicality in terms of genres, for example.
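As a rough illustration of the all-or-none character attributed to the recognition heuristic, here is a minimal sketch; the function and option names are hypothetical, and real recognition cues such as information scent are reduced to a simple membership test:

```python
def recognition_choice(options, recognized):
    """All-or-none recognition heuristic: if exactly one option is
    recognized, choose it; otherwise the heuristic does not apply."""
    hits = [o for o in options if o in recognized]
    return hits[0] if len(hits) == 1 else None

# When only one option is recognized, it wins outright.
choice = recognition_choice(["siteA", "siteB"], recognized={"siteB"})
```

When zero or both options are recognized, the sketch returns `None`, signalling a fallback to other cues.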
Research limitations/implications
As the study focuses on three heuristics only, the findings cannot be generalized to describe the use of all heuristic elements of strategies and tactics for information seeking and searching.
Originality/value
The study pioneers by providing an in-depth analysis of the ways in which the heuristic elements are conceptualized in the context of information seeking and searching. The findings contribute to the elaboration of the conceptual issues of information behavior research.
Önder Halis Bettemir and M. Talat Birgonul
Abstract
Purpose
Exact solutions of the time–cost trade-off problem (TCTP) can be obtained by state-of-the-art meta-heuristic algorithms for small- and medium-scale problems, while satisfactory results cannot be obtained for large construction projects. In this study, a hybrid heuristic meta-heuristic algorithm that adapts the search domain is developed to solve the large-scale discrete TCTP more efficiently.
Design/methodology/approach
A minimum cost slope-based heuristic network analysis algorithm (NAA), which eliminates the infeasible search domain, is embedded into a differential evolution meta-heuristic algorithm. The heuristic NAA narrows the search domain at the initial phase of the optimization. Moreover, activities with float durations higher than a predetermined threshold value are eliminated before the meta-heuristic algorithm starts and searches for the global optimum through the narrowed search space. However, narrowing the search space may increase the probability of obtaining a local optimum. Therefore, an adaptive search domain approach is employed so that eliminated activities can be reintroduced into the design variable set, which reduces the possibility of converging to local minima.
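The float-based elimination step can be sketched as follows; the data layout (`activities` as a dict with a `float` field) and the threshold value are illustrative assumptions, not the paper's implementation:

```python
def narrow_domain(activities, float_threshold):
    """Split activities into a kept set (low float, stays a design
    variable) and an eliminated set (high float, fixed for now)."""
    kept = {name for name, a in activities.items()
            if a["float"] <= float_threshold}
    return kept, set(activities) - kept

acts = {"A": {"float": 0}, "B": {"float": 12}, "C": {"float": 3}}
kept, eliminated = narrow_domain(acts, float_threshold=5)
```

The adaptive variant would later move members of `eliminated` back into `kept` to escape local optima.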
Findings
The developed algorithm is compared with plain meta-heuristic algorithm with two separate analyses. In the first analysis, both algorithms have the same computational demand, and in the latter analysis, the meta-heuristic algorithm has fivefold computational demand. The tests on case study problems reveal that the developed algorithm presents lower total project costs according to the dependent t-test for paired samples with α = 0.0005.
Research limitations/implications
In this study, TCTP is solved without considering quality or restrictions on the resources.
Originality/value
The proposed method enables the number of parameters, that is, the search domain, to be adapted, and offers the opportunity to obtain significant improvements with meta-heuristic algorithms on other engineering optimization problems, which is the theoretical contribution of this study. The proposed approach reduces the total construction cost of large-scale projects, which is the practical benefit of this study.
John H Drake, Matthew Hyde, Khaled Ibrahim and Ender Ozcan
Abstract
Purpose
Hyper-heuristics are a class of high-level search techniques which operate on a search space of heuristics rather than directly on a search space of solutions. The purpose of this paper is to investigate the suitability of using genetic programming as a hyper-heuristic methodology to generate constructive heuristics to solve the multidimensional 0-1 knapsack problem.
Design/methodology/approach
Early hyper-heuristics focused on selecting and applying a low-level heuristic at each stage of a search. Recent trends in hyper-heuristic research have led to a number of approaches being developed to automatically generate new heuristics from a set of heuristic components. A population of heuristics to rank knapsack items are trained on a subset of test problems and then applied to unseen instances.
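A constructive heuristic of the kind such a hyper-heuristic evolves can be sketched as a greedy packer driven by a scoring function; here the scorer is a hand-written stand-in for a GP-evolved expression, and the item data are invented for illustration:

```python
def build_solution(items, capacities, score):
    """Greedy constructive heuristic: rank items by a scoring function
    and pack each one whose weights fit in every capacity dimension."""
    used = [0] * len(capacities)
    packed = []
    for item in sorted(items, key=score, reverse=True):
        if all(u + w <= c for u, w, c in zip(used, item["weights"], capacities)):
            used = [u + w for u, w in zip(used, item["weights"])]
            packed.append(item["name"])
    return packed

# Hand-written stand-in for a GP-evolved scorer: profit per unit of weight.
def score(item):
    return item["profit"] / (1 + sum(item["weights"]))

items = [
    {"name": "x", "profit": 10, "weights": [3, 2]},
    {"name": "y", "profit": 6, "weights": [1, 1]},
    {"name": "z", "profit": 8, "weights": [4, 5]},
]
solution = build_solution(items, capacities=[5, 4], score=score)
```

In the paper's setting, genetic programming evolves the ranking expression itself rather than fixing it by hand.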
Findings
The results over a set of standard benchmarks show that genetic programming can be used to generate constructive heuristics which yield human-competitive results.
Originality/value
In this work the authors show that genetic programming is suitable as a method to generate reusable constructive heuristics for the multidimensional 0-1 knapsack problem. This is classified as a hyper-heuristic approach as it operates on a search space of heuristics rather than a search space of solutions. To the authors' knowledge, this is the first time in the literature that a GP hyper-heuristic has been used to solve the multidimensional 0-1 knapsack problem. The results suggest that using GP to evolve ranking mechanisms merits further research effort.
Abstract
Purpose
This paper studies a keyword search over graph-structured data used in various fields such as semantic web, linked open data and social networks. This study aims to propose an efficient keyword search algorithm on graph data to find top-k answers that are most relevant to the query and have diverse content nodes for the input keywords.
Design/methodology/approach
Based on an aggregative measure of diversity of an answer set, this study proposes an approach to searching the top-k diverse answers to a query on graph data, which finds a set of most relevant answer trees whose average dissimilarity should be no lower than a given threshold. This study defines a diversity constraint that must be satisfied for a subset of answer trees to be included in the solution. Then, an enumeration algorithm and a heuristic search algorithm are proposed to find an optimal solution efficiently based on the diversity constraint and an A* heuristic. This study also provides strategies for improving the performance of the heuristic search method.
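A greatly simplified, greedy version of selecting top-k answers under an average-dissimilarity constraint might look like this; the paper uses an enumeration algorithm and an A*-guided search, so the greedy loop, the Jaccard distance and the toy answers below are all illustrative assumptions:

```python
from itertools import combinations

def diverse_top_k(answers, dissim, k, threshold):
    """Greedily pick up to k answers in relevance order, skipping any
    answer that would drop the set's average pairwise dissimilarity
    below the diversity threshold."""
    chosen = []
    for ans in sorted(answers, key=lambda a: a["rel"], reverse=True):
        trial = chosen + [ans]
        if len(trial) >= 2:
            pairs = list(combinations(trial, 2))
            avg = sum(dissim(a, b) for a, b in pairs) / len(pairs)
            if avg < threshold:
                continue  # violates the diversity constraint
        chosen = trial
        if len(chosen) == k:
            break
    return [a["id"] for a in chosen]

answers = [
    {"id": 1, "rel": 0.9, "nodes": {"a", "b"}},
    {"id": 2, "rel": 0.8, "nodes": {"a", "b"}},
    {"id": 3, "rel": 0.7, "nodes": {"c", "d"}},
]

def jaccard_distance(a, b):
    union = a["nodes"] | b["nodes"]
    return 1 - len(a["nodes"] & b["nodes"]) / len(union)

result = diverse_top_k(answers, jaccard_distance, k=2, threshold=0.5)
```

Answer 2 is more relevant than answer 3 but duplicates answer 1's content nodes, so the diversity constraint skips it.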
Findings
The results of experiments using a real data set demonstrate that the proposed search algorithm can find top-k diverse and relevant answers to a query on large-scale graph data efficiently and outperforms the previous methods.
Originality/value
This study proposes a new keyword search method for graph data that finds an optimal solution with diverse and relevant answers to the query. It can provide users with query results that satisfy their various information needs on large graph data.
Derya Deliktaş and Dogan Aydin
Abstract
Purpose
Assembly lines are widely employed in manufacturing processes to produce final products in a flow efficiently. The simple assembly line balancing problem is a basic version of the general problem and still attracts the attention of researchers. Type-I simple assembly line balancing problems (SALBP-I) aim to minimise the number of workstations on an assembly line by keeping the cycle time constant.
Design/methodology/approach
This paper focuses on solving multi-objective SALBP-I problems by utilising an artificial bee colony-based hyper-heuristic (ABC-HH) algorithm. The algorithm optimises the efficiency and idleness percentage of the assembly line and concurrently minimises the number of workstations. The proposed ABC-HH algorithm is improved by adding new modifications to each phase of the artificial bee colony framework. Parameter control and calibration are also achieved using the irace method. The proposed model has undergone testing on benchmark problems, and the results obtained have been compared with state-of-the-art algorithms.
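The ABC-HH algorithm operates on heuristics rather than solutions, but the artificial bee colony framework it modifies can be illustrated with a bare-bones continuous minimisation loop; this is a generic sketch of the employed-bee phase only (onlooker and scout phases omitted), and every name and parameter is an illustrative assumption:

```python
import random

def abc_sketch(fitness, dim, n_food, iters, seed=0):
    """Bare-bones artificial bee colony loop (employed-bee phase only)
    minimising `fitness` over the unit box [0, 1]^dim."""
    rng = random.Random(seed)
    foods = [[rng.random() for _ in range(dim)] for _ in range(n_food)]
    best = min(foods, key=fitness)
    for _ in range(iters):
        for i, food in enumerate(foods):
            # Perturb one dimension toward/away from a random partner food.
            j = rng.randrange(dim)
            partner = foods[rng.randrange(n_food)]
            trial = list(food)
            trial[j] += rng.uniform(-1, 1) * (food[j] - partner[j])
            trial[j] = min(1.0, max(0.0, trial[j]))
            # Greedy selection: keep the trial only if it improves.
            if fitness(trial) < fitness(food):
                foods[i] = trial
        best = min(foods + [best], key=fitness)
    return best

best = abc_sketch(lambda x: sum(v * v for v in x), dim=2, n_food=5, iters=50)
```

Because replacement is strictly greedy, the best food source is monotonically non-worsening over iterations.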
Findings
The experimental results of the computational study on the benchmark dataset unequivocally establish the superior performance of the ABC-HH algorithm across 61 problem instances, outperforming the state-of-the-art approach.
Originality/value
This research proposes the ABC-HH algorithm with local search to solve the SALBP-I problems more efficiently.
Abstract
Purpose
The purpose of this study is to improve the egocentric search speed for important documents in neighbouring blogs.
Design/methodology/approach
This paper presents a rapid egocentric search scheme that narrows down the search space to more important blogs. To determine which blogs are more valuable among a user's neighbouring blogs, a heuristic function is developed that predicts the authority scores on the basis of the local information of the blog. The proposed approach improves the speed of the egocentric search process and the quality of retrieved documents.
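The idea of expanding the most promising neighbouring blog first can be sketched as a best-first traversal driven by a predicted-authority function; the toy graph and authority scores below are invented for illustration, not the paper's heuristic:

```python
import heapq

def egocentric_search(start, neighbours, authority, budget):
    """Best-first traversal: always expand the not-yet-visited blog
    with the highest predicted authority, up to a visit budget."""
    visited, seen = [], {start}
    frontier = [(-authority(start), start)]  # max-heap via negated scores
    while frontier and len(visited) < budget:
        _, blog = heapq.heappop(frontier)
        visited.append(blog)
        for nxt in neighbours(blog):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-authority(nxt), nxt))
    return visited

graph = {"me": ["b1", "b2"], "b1": ["b3"], "b2": [], "b3": []}
auth = {"me": 1.0, "b1": 0.2, "b2": 0.9, "b3": 0.8}
order = egocentric_search("me", lambda b: graph[b], lambda b: auth[b], budget=3)
```

The budget narrows the search space in the same spirit as the paper's scheme: low-authority neighbours may never be expanded at all.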
Findings
A blog is a new medium that is receiving considerable attention. Its links enable one to acquire information about social relations between bloggers in a blog space, and these relations reflect bloggers' interests. Therefore, the ability to search documents in linked blogs is significant for bloggers. An egocentric search method is proposed to search for documents in such neighbouring blogs. However, it takes considerable time to find the most valuable documents in a user's neighbouring blogs when many blogs are linked to that user's blog.
Originality/value
This study shows that the number of neighbouring blogs that are linked to a blog with trackbacks and comments is important for estimating the authority of a blog. In the experiments, the proposed method performs about five times faster than an egocentric search using a breadth-first search strategy when searching for the top 5 per cent of the most important documents in the neighbouring blogs.
Valentina Franzoni and Alfredo Milani
Abstract
Purpose
In this work, a new general framework is proposed to guide navigation over a collaborative concept network in order to discover paths between concepts. Finding semantic chains between concepts over a semantic network is an issue of great interest for many applications, such as explanation generation and query expansion. Collaborative concept networks on the web tend to have large dimensions, a high connectivity degree and dynamic evolution over time, features which pose special challenges for efficient graph search methods, since they result in huge memory requirements, high branching factors, unknown dimensions and a high cost for accessing nodes. The paper aims to discuss these issues.
Design/methodology/approach
The proposed framework is based on the novel notion of the heuristic semantic walk (HSW). In the HSW framework, a semantic proximity measure among concepts, reflecting the collective knowledge embedded in search engines or other statistical sources, is used as a heuristic to guide the search in the collaborative network. Different search strategies, information sources and proximity measures can be used to adapt HSW to the collaborative semantic network under consideration.
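A deterministic, greedy variant of such a walk can be sketched as follows; the paper also evaluates weighted randomized strategies, and the proximity table and concept names here are invented stand-ins for search engine-based measures:

```python
def heuristic_semantic_walk(graph, proximity, start, target, max_steps):
    """Greedy heuristic walk: from each node, move to the unvisited
    neighbour with the highest proximity to the target concept."""
    path, node, visited = [start], start, {start}
    for _ in range(max_steps):
        if node == target:
            return path
        candidates = [n for n in graph.get(node, []) if n not in visited]
        if not candidates:
            return None  # dead end: no unvisited neighbours
        node = max(candidates, key=lambda n: proximity(n, target))
        visited.add(node)
        path.append(node)
    return path if node == target else None

prox = {("animal", "dog"): 0.8, ("whiskers", "dog"): 0.1}
graph = {"cat": ["animal", "whiskers"], "animal": ["dog"]}
path = heuristic_semantic_walk(graph, lambda a, b: prox.get((a, b), 0.0),
                               "cat", "dog", max_steps=10)
```

Because nodes are fetched lazily through `graph.get`, the walk never needs the whole network in memory, which matches the motivation above.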
Findings
Experiments held on the Wikipedia network and Bing search engine on a range of different semantic measures show that the proposed HSW approach with weighted randomized walk strategy outperforms state-of-the-art search methods.
Originality/value
To the best of the authors' knowledge, the proposed HSW model is the first approach to use search engine-based proximity measures as a heuristic for semantic search.
Hong Ma, Ni Shen, Jing Zhu and Mingrong Deng
Abstract
Purpose
Motivated by a problem in the context of DiDi Travel, the biggest taxi hailing platform in China, the purpose of this paper is to propose a novel facility location problem, specifically, the single source capacitated facility location problem with regional demand and time constraints, to help improve overall transportation efficiency and cost.
Design/methodology/approach
This study develops a mathematical programming model, considering regional demand and time constraints. A novel two-stage neighborhood search heuristic algorithm is proposed and applied to solve instances based on data sets published by DiDi Travel.
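The neighbourhood-search component can be illustrated with a single-stage swap search over open facilities; the line-distance cost function and the toy instance are assumptions for illustration, not DiDi data or the paper's two-stage algorithm:

```python
def swap_search(facilities, open_set, cost):
    """First-improvement swap neighbourhood search: exchange one open
    facility for one closed facility whenever that lowers total cost."""
    current, best_cost = set(open_set), cost(open_set)
    improved = True
    while improved:
        improved = False
        for out in list(current):
            for inn in facilities - current:
                trial = (current - {out}) | {inn}
                if cost(trial) < best_cost:
                    current, best_cost, improved = trial, cost(trial), True
                    break  # restart the scan from the new incumbent
            if improved:
                break
    return current, best_cost

pos = {1: 0, 2: 5, 3: 10}   # facility locations on a line
demands = [0, 1, 9, 10]     # demand point locations

def cost(open_facilities):
    # Each demand is served by its nearest open facility.
    return sum(min(abs(d - pos[f]) for f in open_facilities) for d in demands)

best_set, best_cost = swap_search({1, 2, 3}, {1}, cost)
```

Swap moves keep the number of open facilities fixed, which is why capacitated location models often pair them with a separate stage for opening and closing sites.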
Findings
The results of this study show that the model is adequate, since new characteristics of demand can be deduced from large vehicle trajectory data sets. The proposed algorithm is effective and efficient on small, medium and large instances. The research also solves and presents a real instance in the urban area of Chengdu, China, with up to 30 facilities and demand deduced from 16 million taxi trajectory records covering around 16,000 drivers.
Research limitations/implications
This study examines an offline and single-period case of the problem. It does not consider multi-period or online cases with uncertainties, where decision makers need to dynamically remove out-of-service stations and add other stations to the selected group.
Originality/value
Prior studies have been quite limited: they have not yet considered demand in the form of vehicle trajectory data in facility location problems. This study takes into account new characteristics of demand, namely regional and time constraints, and proposes a new problem variant and its solution approach.
Abstract
Demonstrates the application of the recently proposed shifting mean heuristic to statistical quality control situations. Proposes the heuristic search procedure primarily as a tool for retrospective (or post mortem) data analysis, retrospectively examining a known data set to establish shifts of process mean, as illustrated in the traditional Manhattan diagram (i.e. a plot of observations with superimposed sub‐process means). However, because the procedure can be operated automatically, it is suggested that where sampling rates are relatively slow it could also be used for online quality monitoring by providing a dynamic Manhattan diagram which changes shape when a newly developed series of observations establishes a shift from a pre‐specified reference level or target value. Briefly reviews other approaches to establishing shifts in process mean, describes the concepts underlying the heuristic search procedure, and reviews the statistical considerations involved. Compares the procedure with other approaches, and provides examples of the heuristic search procedure operating as a tool for retrospective data analysis using three time series kindly provided by the editor which were analysed “blind” by the author. The editor’s comments on the results are appended.
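In the spirit of the shifting mean heuristic, a toy segmentation that produces the sub-means of a Manhattan diagram might look like this; the deviation rule and threshold are illustrative simplifications of the paper's search procedure, not its statistical machinery:

```python
def manhattan_segments(series, shift_size):
    """Toy shifting-mean segmentation: start a new segment whenever an
    observation deviates from the current segment mean by more than
    shift_size; return a (length, mean) pair per segment."""
    segments, current = [], [series[0]]
    for x in series[1:]:
        mean = sum(current) / len(current)
        if abs(x - mean) > shift_size:
            segments.append((len(current), mean))
            current = [x]   # a shift: open a new segment
        else:
            current.append(x)
    segments.append((len(current), sum(current) / len(current)))
    return segments

segs = manhattan_segments([10, 10, 10, 20, 20, 20], shift_size=5)
```

Plotting each segment's mean as a horizontal step over its observations yields the Manhattan diagram described above.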