Search results

1 – 10 of over 4000
Article
Publication date: 8 January 2018

Helena Lee and Natalie Pang

Abstract

Purpose

The purpose of this paper is to examine the role of task and user’s topic familiarity in the evaluation of information patch (websites).

Design/methodology/approach

An experimental study was conducted in a computer laboratory to examine users' information seeking and foraging behaviour. In total, 160 university students participated in the research. Two types of task instruction were administered: specifically defined and non-specifically defined (general). A mixed-methods approach involving both quantitative analysis and qualitative thematic coding was adopted, drawing on data from questionnaire surveys and post-experiment interviews.

Findings

In the context of task attributes, users who conducted an information seeking task with specifically defined instructions demonstrated stricter credibility evaluations than those given non-specifically defined instructions. The evidence demonstrated a link between topical knowledge and credibility perception: users with topical knowledge applied more critical credibility assessments than users without it. Furthermore, the results supported that the level of difficulty and knowledge of the topic or subject matter were associated with users' credibility evaluations. Users with little or no subject knowledge who experienced difficulty in the information search tended to be less diagnostic in their appraisal of the information patch (website or webpages). Users equipped with topical knowledge who encountered less difficulty in the search exhibited higher expectations and stricter evaluative criteria for the information patch.

Research limitations/implications

The constraints of time in the lab experiment, carried out in the presence of and under the observation of the researcher, may affect users’ information seeking behaviour. It would be beneficial to consider users’ information search gratifications and motivations in studying information evaluations and foraging patterns. There is scope to investigate users’ proficiency such as expert or novice, and individual learning styles in assessing information credibility.

Practical implications

Past studies on information evaluation, specifically credibility, have often associated it with users' characteristics, sources, or content. This study sheds light on how task type, task difficulty and topical knowledge affect users' information judgement.

Originality/value

This is one of the few studies relating task orientation, task difficulty and topical knowledge to information evaluation.

Details

Journal of Documentation, vol. 74 no. 1
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 4 November 2014

Stella D. Tomasi

Abstract

Purpose

The purpose of this paper was to study users’ behaviour when using different search engine results pages (SERPs) to identify what types of scents (cues) were the most useful to find relevant information to complete tasks on the Web based on information foraging theory.

Design/methodology/approach

This study designed three interface prototypes and conducted a qualitative study using the protocol analysis methodology. The subjects were recorded and videotaped to identify patterns of searching behaviour on visualization interfaces of SERPs.

Findings

The study found that titles of categories or websites, keywords of categories, orientation of results, and animation are strong scents that users follow to find information on SERPs. If certain scents are not followed on an interface, their strength will diminish. Furthermore, the study showed that simple scent trails are more important to users than complicated ones.

Originality/value

This study uses a qualitative approach to explore how users behave with different SERP formats, particularly a visualization format, and identifies which scents on the interface are important for users to follow to successfully complete tasks on the Web.

Details

Journal of Systems and Information Technology, vol. 16 no. 4
Type: Research Article
ISSN: 1328-7265

Keywords

Book part
Publication date: 10 July 2014

Abstract

Purpose

To examine how reading in electronic formats differs from traditional reading of print.

Design/methodology/approach

Concepts about digital print are discussed alongside research studies in fields related to multisensory technologies and electronic means of communication. A model of online reading is proposed that integrates aspects of information foraging theory. Pedagogical applications are suggested for integrating e-reading theory within classrooms.

Findings

With the varied text structures, directionality concerns, and interactive text features of digital reading, our attention must turn to the theoretical foundations that underpin digital literacy learning today. Online foraging schemes can explain how information is sought and retrieved when reading new information in digital media.

Practical implications

Teachers must address the current, digital literacy needs of their students, thus preparing them for challenges in the 21st century. Varying text structures within digital formats as well as providing as-needed facilitation are the scaffolds that students need today. Using technologies such as digital games, tools, and contexts advances the mission of resource-based teaching and learning.

Details

Theoretical Models of Learning and Literacy Development
Type: Book
ISBN: 978-1-78350-821-1

Keywords

Article
Publication date: 10 July 2017

Jiqun Liu

Abstract

Purpose

The purpose of this paper is to build a unified model of human information behavior (HIB) for integrating classical constructs and reformulating the structure of HIB theory.

Design/methodology/approach

This paper employs an equilibrium perspective drawn from partial equilibrium theory for conceptualization and deduction, starting from four basic assumptions.

Findings

This paper develops two models to incorporate previous HIB research approaches into an equilibrium-analysis-oriented information supply-demand (ISD) framework: first, the immediate-task/problem-based and everyday life information-seeking (ELIS)-sense-making approaches are incorporated into the short-term ISD model; second, the knowledge-construction-oriented and ability-based HIB research approaches are elaborated by the long-term ISD model. Relations among HIB theories are illustrated via the method of graphical reasoning. Moreover, these two models jointly reveal the connection between information seeking in immediate problematic situations and long-term ability improvement.

Originality/value

The equilibrium framework enables future research to explore HIB from three perspectives. Stages: group the classical concepts (e.g. anomalous state of knowledge, uncertainty) into different stages (i.e. start state, process, goal state) and see how they interact with each other within and across stages. Forces: explore information behaviours and information-related abilities as information supply and demand forces, and see how different forces influence each other and jointly motivate people to pursue equilibrium between the outside world and the mental model. Short term and long term: study the connections between short-term information seeking and long-term ability improvement at both theoretical and empirical levels.

Details

Journal of Documentation, vol. 73 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 1 April 1986

J.F. Stelzer

Abstract

The application of colour for an improved presentation of 3‐D structures with finite elements is reported. Also shown is how the hidden‐surface technique can be used for: (1) generating pictures in a hidden-line-like mode, (2) generating photo-like pictures by shading the surfaces according to Lambert's cosine law, (3) showing regions of different materials or properties by distinct colouring, and (4) presenting temperature and stress fields by colouring. This colouring is done with smooth colour transitions and delivers pictures similar to those obtained by thermography or stress optics. Furthermore, (5) it is possible to generate contour lines on the remaining visible surfaces. The problems arising with the attachment of a colour hardcopier are also considered.
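The shading rule in item (2) is simple enough to sketch. Below is a minimal, illustrative Python version of Lambert's cosine law (not the paper's code): intensity is proportional to the cosine of the angle between the surface normal and the light direction, clamped at zero, with a small assumed ambient term so back-facing surfaces are not fully black.

```python
import math

def lambert_intensity(normal, light_dir, ambient=0.1):
    """Shade a surface by Lambert's cosine law: intensity scales with
    the cosine of the angle between surface normal and light direction,
    clamped at zero for faces turned away from the light."""
    # Normalize both vectors before taking the dot product.
    n_len = math.sqrt(sum(c * c for c in normal))
    l_len = math.sqrt(sum(c * c for c in light_dir))
    n = [c / n_len for c in normal]
    l = [c / l_len for c in light_dir]
    cos_theta = sum(a * b for a, b in zip(n, l))
    # Surfaces facing away from the light receive only ambient light.
    return ambient + (1.0 - ambient) * max(0.0, cos_theta)

# A face looking straight at the light is fully lit...
print(lambert_intensity([0, 0, 1], [0, 0, 1]))  # 1.0
# ...while a face at 90 degrees receives only the ambient term.
print(lambert_intensity([1, 0, 0], [0, 0, 1]))  # 0.1
```

Applied per surface patch, this produces exactly the smooth photo-like gradation the abstract describes.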

Details

Engineering Computations, vol. 3 no. 4
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 February 2003

Eric Klopfer and Andrew Begel

Abstract

StarLogo is a computer modeling tool that empowers students to understand the world through the design and creation of complex systems models. StarLogo enables students to program software creatures to interact with one another and their environment, and study the emergent patterns from these interactions. Building an easy‐to‐understand, yet powerful tool for students required a great deal of thought about the design of the programming language, environment, and its implementation. The salient features are StarLogo's great degree of transparency (the capability to see how a simulation is built), its support to let students create their own models (not just use models built by others), its efficient implementation (supporting simulations with thousands of independently executing creatures on desktop computers), and its flexible and simple user interface (which enables students to interact dynamically with their simulation during model testing and validation). The resulting platform provides a uniquely accessible tool that enables students to become full‐fledged practitioners of modeling. In addition, we describe the powerful insights and deep scientific understanding that students have developed through the use of StarLogo.

Details

Kybernetes, vol. 32 no. 1/2
Type: Research Article
ISSN: 0368-492X

Keywords

Article
Publication date: 28 November 2018

Alexandra Pereira Nunes, Ana Rita Silva Gaspar, Andry M. Pinto and Aníbal Castilho Matos

Abstract

Purpose

This paper aims to present a mosaicking method for underwater robotic applications, whose result can be provided to other perceptual systems for scene understanding such as real-time object recognition.

Design/methodology/approach

This method is called robust and large-scale mosaicking (ROLAMOS) and presents an efficient frame-to-frame motion estimation with outlier removal and consistency checking that maps large visual areas in high resolution. The visual mosaic of the sea-floor is created on-the-fly by a robust registration procedure that composes monocular observations and manages the computational resources. Moreover, the registration process of ROLAMOS aligns the observation to the existing mosaic.
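The core registration idea, composing each frame-to-frame motion estimate into a global pose so new observations align with the existing mosaic, can be reduced to a toy sketch. The version below assumes pure translation between frames (real systems like ROLAMOS estimate full transforms with outlier removal); it is illustrative only:

```python
def compose_poses(frame_motions):
    """Accumulate frame-to-frame translations (dx, dy) into global
    mosaic positions: each new frame is placed relative to the
    previous one, so local motion estimates chain into a map."""
    x, y = 0.0, 0.0
    positions = [(x, y)]
    for dx, dy in frame_motions:
        x, y = x + dx, y + dy
        positions.append((x, y))
    return positions

# Three frames drifting right and slightly down:
print(compose_poses([(10, 0), (8, 2), (9, 1)]))
# [(0.0, 0.0), (10.0, 0.0), (18.0, 2.0), (27.0, 3.0)]
```

Because errors in each estimate also accumulate through this chain, the consistency checking the abstract mentions is what keeps a large mosaic from drifting.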

Findings

A comprehensive set of experiments compares the performance of ROLAMOS to other similar approaches, using both publicly available data sets and live data obtained by a ROV operating in real scenes. The results demonstrate that ROLAMOS is adequate for mapping sea-floor scenarios, as it provides accurate information from the seabed, which is of extreme importance for autonomous robots surveying the environment, and it does not rely on specialized computers.

Originality/value

ROLAMOS is suitable for robotic applications that require an online, robust and effective technique to reconstruct the underwater environment from visual information alone.

Details

Sensor Review, vol. 39 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 9 October 2017

Reijo Savolainen

Abstract

Purpose

The purpose of this paper is to elaborate the picture of strategies and tactics for information seeking and searching by focusing on the heuristic elements of such strategies and tactics.

Design/methodology/approach

A conceptual analysis of a sample of 31 pertinent investigations was conducted to find out how researchers have approached heuristics in the above context since the 1970s. To achieve this, the study draws on the ideas produced within the research programmes on Heuristics and Biases, and Fast and Frugal Heuristics.

Findings

Researchers have approached the heuristic elements in three major ways. First, these elements are defined as general-level constituents of browsing strategies in particular. Second, heuristics are approached as search tips. Third, there are conceptualizations of individual heuristics. The familiarity heuristic suggests that people tend to prefer sources that have worked well in similar situations in the past. The recognition heuristic draws on an all-or-none distinction between information objects, based on cues such as information scent. Finally, the representativeness heuristic is based on recalling similar instances of events or objects and judging their typicality in terms of, for example, genres.
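The familiarity and recognition heuristics described above lend themselves to a toy decision rule. The sketch below (hypothetical names and data, not from the paper) prefers recognized sources in all-or-none fashion and breaks ties by how often a source has worked before:

```python
def choose_source(candidates, past_successes, recognized):
    """Pick an information source the way the recognition and
    familiarity heuristics describe: recognition is an all-or-none
    cue that dominates, and familiarity (count of past successes
    in similar situations) breaks ties among recognized sources."""
    def score(source):
        return (1 if source in recognized else 0,
                past_successes.get(source, 0))
    return max(candidates, key=score)

picked = choose_source(
    ["wiki", "forum", "blog"],
    past_successes={"wiki": 5, "forum": 2},
    recognized={"wiki", "forum"},
)
print(picked)  # wiki
```

The ordering of the score tuple encodes the all-or-none character: an unrecognized source loses even with a long success history.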

Research limitations/implications

As the study focuses on three heuristics only, the findings cannot be generalized to describe the use of all heuristic elements of strategies and tactics for information seeking and searching.

Originality/value

The study pioneers an in-depth analysis of the ways in which heuristic elements are conceptualized in the context of information seeking and searching. The findings contribute to elaborating the conceptual issues of information behaviour research.

Details

Journal of Documentation, vol. 73 no. 6
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 9 November 2012

Octavio Andrés González‐Estrada, Juan José Ródenas, Stéphane Pierre Alain Bordas, Marc Duflot, Pierre Kerfriden and Eugenio Giner

Abstract

Purpose

The purpose of this paper is to assess the effect of the statical admissibility of the recovered solution, and of its ability to represent the singular solution, on the accuracy and the local and global effectivity of recovery‐based error estimators for enriched finite element methods (e.g. the extended finite element method, XFEM).

Design/methodology/approach

The authors study the performance of two recovery techniques. The first is a recently developed superconvergent patch recovery procedure with equilibration and enrichment (SPR‐CX). The second is known as the extended moving least squares recovery (XMLS), which enriches the recovered solutions but does not enforce equilibrium constraints. Both are extended recovery techniques as the polynomial basis used in the recovery process is enriched with singular terms for a better description of the singular nature of the solution.

Findings

Numerical results comparing the convergence and the effectivity index of both techniques with those obtained without the enrichment enhancement clearly show the need for the use of extended recovery techniques in Zienkiewicz‐Zhu type error estimators for this class of problems. The results also reveal significant improvements in the effectivities yielded by statically admissible recovered solutions.

Originality/value

The paper shows that both extended recovery procedures and statical admissibility are key to an accurate assessment of the quality of enriched finite element approximations.

Article
Publication date: 28 December 2023

Ankang Ji, Xiaolong Xue, Limao Zhang, Xiaowei Luo and Qingpeng Man

Abstract

Purpose

Crack detection of pavement is a critical task in periodic surveys. Efficient, effective and consistent tracking of road conditions by identifying and locating cracks helps managers establish an appropriate road maintenance and repair strategy, but it remains a significant challenge. This research proposes practical solutions for automatic crack detection from images with high productivity and cost-effectiveness, thereby improving pavement performance.

Design/methodology/approach

This research applies a novel deep learning method named TransUnet for crack detection. Its structure is based on a Transformer combined with convolutional neural networks as the encoder, leveraging a global self-attention mechanism to better extract features for automatic identification. Afterward, the detected cracks are used to quantify morphological features with five indicators: length, mean width, maximum width, area and ratio. These analyses can provide valuable information for engineers to assess pavement condition efficiently.

Findings

In the training process, TransUnet is fed a crack dataset generated by data augmentation, with a resolution of 224 × 224 pixels. Subsequently, a test set containing 80 new images is used for the crack detection task with the best-performing TransUnet (learning rate 0.01, batch size 1), achieving an accuracy of 0.8927, a precision of 0.8813, a recall of 0.8904, an F1-measure and Dice of 0.8813, and a mean Intersection over Union of 0.8082. Comparisons with several state-of-the-art methods indicate that the developed approach outperforms them with greater efficiency and higher reliability.
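The metrics reported above are standard pixel-level segmentation measures. A minimal illustrative helper (not the paper's code) computing them from true-positive, false-positive and false-negative pixel counts, with invented example counts:

```python
def segmentation_metrics(tp, fp, fn):
    """Compute pixel-level segmentation metrics from confusion counts.
    For binary segmentation the F1-measure equals the Dice coefficient,
    which is why the abstract reports one value for both."""
    precision = tp / (tp + fp)           # fraction of predicted crack pixels that are crack
    recall = tp / (tp + fn)              # fraction of true crack pixels found
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)            # Intersection over Union (Jaccard index)
    return precision, recall, f1, iou

p, r, f1, iou = segmentation_metrics(tp=80, fp=20, fn=20)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f} iou={iou:.3f}")
# precision=0.800 recall=0.800 f1=0.800 iou=0.667
```

Note that IoU is always lower than Dice for imperfect predictions, consistent with the reported 0.8082 versus 0.8813.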

Originality/value

The developed approach combines TransUnet with an integrated quantification algorithm for crack detection and quantification. It performs well in comparisons and on evaluation metrics, and can potentially serve as the basis for an automated, cost-effective pavement condition assessment scheme.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords
