Search results
1 – 10 of over 7000
Ziming Zeng, Shouqiang Sun, Tingting Li, Jie Yin and Yueyan Shen
Abstract
Purpose
The purpose of this paper is to build a mobile visual search service system for the protection of Dunhuang cultural heritage in the smart library. A novel mobile visual search model for Dunhuang murals is proposed to help users acquire rich knowledge and services conveniently.
Design/methodology/approach
First, local and global features of the images are extracted, and a visual dictionary is generated by k-means clustering. Second, a mobile visual search model based on the bag-of-words (BOW) model and multiple semantic associations is constructed. Third, the mobile visual search service system of the smart library is designed in a cloud environment. Finally, Dunhuang mural images are collected to verify the model.
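The dictionary-building step described above can be sketched as follows. This is a minimal illustration only, not the authors' code: random vectors stand in for real 128-dimensional SIFT descriptors, and the dictionary size k is an arbitrary assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for SIFT descriptors pooled from a training set of mural images
# (real SIFT descriptors are 128-dimensional; these are random placeholders).
descriptors = rng.random((500, 128))

# Build the visual dictionary: each k-means centroid is one "visual word".
k = 16  # assumed dictionary size for illustration
codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

def bow_histogram(image_descriptors: np.ndarray) -> np.ndarray:
    """Quantize an image's local descriptors against the dictionary and
    return an L1-normalized bag-of-words histogram."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

# A query image is then represented by its histogram over the dictionary.
query_hist = bow_histogram(rng.random((40, 128)))
print(query_hist.shape)  # → (16,)
```

Images can then be compared by the distance between their BOW histograms, with global HSV color features appended as an additional descriptor.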
Findings
The findings reveal that the BOW_SIFT_HSV_MSA model has better search performance for Dunhuang mural images when the scale-invariant feature transform (SIFT) and the hue, saturation and value (HSV) are used to extract local and global features of the images. Compared with different methods, this model is the most effective way to search images with the semantic association in the topic, time and space dimensions.
Research limitations/implications
The Dunhuang mural image set is only part of the vast resources stored in the smart library, and fine-grained semantic labels could be applied to meet more diverse search needs.
Originality/value
The mobile visual search service system is constructed to provide users with Dunhuang cultural services in the smart library. A novel mobile visual search model based on BOW and multiple semantic associations is proposed. This study can also provide references for the protection and utilization of other cultural heritages.
Ziming Zeng, Shouqiang Sun, Jingjing Sun, Jie Yin and Yueyan Shen
Abstract
Purpose
Dunhuang murals are rich in cultural and artistic value. The purpose of this paper is to construct a novel mobile visual search (MVS) framework for Dunhuang murals, enabling users to efficiently search for similar, relevant and diversified images.
Design/methodology/approach
The convolutional neural network (CNN) model is fine-tuned in the data set of Dunhuang murals. Image features are extracted through the fine-tuned CNN model, and the similarities between different candidate images and the query image are calculated by the dot product. Then, the candidate images are sorted by similarity, and semantic labels are extracted from the most similar image. Ontology semantic distance (OSD) is proposed to match relevant images using semantic labels. Furthermore, the improved DivScore is introduced to diversify search results.
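The similarity-ranking step described above can be sketched with random stand-in features. This is an assumption-laden illustration, not the authors' implementation: real features would come from a forward pass through the fine-tuned CNN (e.g. the 2048-dimensional pooled output of ResNet152), and the candidate count is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for CNN features; a real system would extract these from the
# fine-tuned model's penultimate layer for the query and candidate images.
query_feat = rng.random(2048)
candidate_feats = rng.random((100, 2048))

# L2-normalize so the dot product equals cosine similarity.
query_feat /= np.linalg.norm(query_feat)
candidate_feats /= np.linalg.norm(candidate_feats, axis=1, keepdims=True)

# Dot-product similarity of every candidate to the query, sorted descending.
scores = candidate_feats @ query_feat
ranking = np.argsort(-scores)
top10 = ranking[:10]
print(len(top10))  # → 10
```

Semantic labels would then be read from the top-ranked image and matched against other images' labels via the proposed ontology semantic distance, before DivScore re-ranking diversifies the final list.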
Findings
The results illustrate that the fine-tuned ResNet152 is the best choice for searching similar images at the visual feature level, and that OSD is an effective method for searching relevant images at the semantic level. After re-ranking based on DivScore, the diversification of search results is improved.
Originality/value
This study collects and builds the Dunhuang mural data set and proposes an effective MVS framework for Dunhuang murals to protect and inherit Dunhuang cultural heritage. Similar, relevant and diversified Dunhuang murals are searched to meet different demands.
Asad Ullah Khan, Zhiqiang Ma, Mingxing Li, Liangze Zhi, Weijun Hu and Xia Yang
Abstract
Purpose
This research study thematically and bibliometrically analyzes the evolution from emerging technologies to smart libraries, spanning 2013 through 2022. The goal of this research is to identify and analyze the significant changes, patterns and trends in the subject as represented in academic papers.
Design/methodology/approach
Using bibliometric methodologies, this study gathered and examined a large corpus of research papers, conference papers and related material from several academic databases.
Findings
Starting with Artificial Intelligence (AI), the Internet of Things (IoT), Big Data (BD), Augmented Reality/Virtual Reality (AR/VR) and Blockchain Technology (BT), the study discusses the advent of new technologies and their effects on libraries. Using bibliometric analysis, this study looks at the evolution of publications over time, the geographic distribution of research and the most active institutions and authors in the area. A thematic analysis is also carried out to pinpoint the critical areas of study and trends in emerging technologies and smart libraries. Some emerging themes are information retrieval, personalized recommendations, intelligent data analytics, connected library spaces, real-time information access, augmented reality/virtual reality applications in libraries, and strategies for digital literacy and inclusivity.
Originality/value
This study offers a thorough overview of the research environment by combining bibliometric and thematic analysis, illustrating the development of theories and concepts during the last ten years. The results of this study help in understanding the trends and future research directions in emerging technologies and smart libraries. This study is an excellent source of information for academics, practitioners and policymakers involved in developing and applying cutting-edge technology in library environments.
Abstract
Purpose
The authors aim to present a vision-based wide-area registration method for camera-phone-based mobile augmented reality applications.
Design/methodology/approach
The tracking system uses a drift-free 6-DOF tracker based on multiple prebuilt maps and can be initialized using the authors' compact key-frame recognition engine.
Findings
Given the current location and camera pose, the authors show how the corresponding virtual objects can be accurately superimposed even in the case of varying user positions.
Originality/value
The authors' system can be used in wide area scenarios and provides an accurate registration between real and virtual objects.
Abstract
Purpose
An increasing number of images are generated daily, and images are gradually becoming a search target. Content-based image retrieval (CBIR) is helpful for users to express their requirements using an image query. Nevertheless, determining whether the retrieval system can provide convenient operation and relevant retrieval results is challenging. A CBIR system based on deep learning features was proposed in this study to effectively search and navigate images in digital articles.
Design/methodology/approach
Convolutional neural networks (CNNs) were used as the feature extractors in the author's experiments. Using pretrained parameters, the training time and retrieval time were reduced. Different CNN features were extracted from the constructed image databases consisting of images taken from the National Palace Museum Journals Archive and were compared in the CBIR system.
Findings
DenseNet201 achieved the best performance, with a top-10 mAP of 89% and a query time of 0.14 s.
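The top-10 mAP reported here can be computed in the standard way: average precision over the top k ranked results per query, then the mean over all queries. The sketch below is a generic illustration of that metric with made-up relevance lists, not the study's evaluation code.

```python
import numpy as np

def average_precision_at_k(ranked_relevance, k=10):
    """AP@k for one query: ranked_relevance is a 0/1 list in rank order,
    where 1 means the result at that rank is relevant."""
    rel = np.asarray(ranked_relevance[:k], dtype=float)
    if rel.sum() == 0:
        return 0.0
    # Precision at each rank, counted only where a relevant hit occurs.
    precision_at_i = np.cumsum(rel) / (np.arange(len(rel)) + 1)
    return float((precision_at_i * rel).sum() / rel.sum())

# mAP@10 averages AP@10 over all queries (two toy queries here).
queries = [
    [1, 1, 0, 1, 0, 0, 0, 0, 0, 0],
    [0, 1, 0, 0, 1, 0, 0, 0, 0, 0],
]
m_ap = sum(average_precision_at_k(q) for q in queries) / len(queries)
print(round(m_ap, 3))  # → 0.683
```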
Practical implications
The CBIR homepage displayed image categories showing the content of the database and provided the default query images. After retrieval, the result showed the metadata of the retrieved images and links back to the original pages.
Originality/value
With the interface and retrieval demonstration, a novel image-based reading mode can be established via the CBIR and links to the original images and contextual descriptions.
Chia-Chen Chen, Patrick C.K. Hung, Erol Egrioglu, Dickson K.W. Chiu and Kevin K.W. Ho
Dan Wu and Shutian Zhang
Abstract
Purpose
Good abandonment behavior refers to users obtaining direct answers via search engine results pages (SERPs) without clicking any search result, which occurs commonly in mobile search. This study aims to better understand users' good abandonment behavior and perception, and then construct a good abandonment prediction model for mobile search with improved performance.
Design/methodology/approach
In this study, an in situ user mobile search experiment (N = 43) and a crowdsourcing survey (N = 1,379) were conducted. Good abandonment behavior was analyzed from a quantitative perspective, exploring users' search behavior characteristics from four aspects: session and query, SERPs, gestures and eye-tracking data.
Findings
Users show less engagement with SERPs in good abandonment, spending less time and using fewer gestures, and they pay more visual attention to answer-like results. Good abandonment behavior is also often related to users' perceived difficulty of the search tasks and their trust in the search engine. A good abandonment prediction model for mobile search was constructed with high accuracy (97.14%).
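A prediction model of this kind can be sketched as a binary classifier over behavioral features. The sketch below is purely illustrative and not the study's model: the features are random stand-ins for the four measured groups (session/query, SERP interaction, gesture, eye-tracking), the labeling rule is invented, and the classifier choice is an assumption.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
# Synthetic stand-ins for behavioral features, e.g. session length, query
# count, SERP dwell time, gesture count, and fixation time on answer-like
# results (columns are placeholders, not the study's feature set).
X = rng.random((300, 6))
# Invented toy rule: short dwell plus high visual attention on answer-like
# results is labeled as good abandonment (1), otherwise not (0).
y = ((X[:, 0] < 0.5) & (X[:, 5] > 0.4)).astype(int)

# Hold out a test split and fit a simple logistic-regression classifier.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

In practice the visual-attention features highlighted by the study would be added alongside the session, SERP and gesture features, which is what the paper reports as improving prediction.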
Originality/value
This study is the first to explore eye-tracking characteristics of users' good abandonment behavior in mobile search, and to explore users' perception of their good abandonment behavior. Visual attention features are introduced into good abandonment prediction in mobile search for the first time and proved to be important predictors in the proposed model.
Weimin Zhai, Zhongzhen Lin and Biwen Xu
Abstract
Purpose
With the rapid development of technology, 360° panorama on mobile as a very convenient way to present virtual reality has brought a new shopping experience to consumers. Usually, consumers get product information through virtual annotations in 360° panorama and then make a series of shopping behaviors. The visual design of virtual annotation significantly influences users' online visual search for product information. This study aims to investigate the influence of the visual design of virtual annotation on consumers' shopping experience in the online shopping interface of 360° panorama.
Design/methodology/approach
A 2 × 3 between-subjects design was planned to explore whether different annotation display modes (i.e. negative polarity and positive polarity) and different annotation background transparencies (i.e. 0% transparency, 25% transparency and 50% transparency) affect users' task performance and their subjective evaluations.
Findings
(1) Virtual annotations with different background transparency affect user performance; 25% transparency yielded better visual search performance. (2) The annotation background display mode may affect user operation performance; the positive-polarity virtual annotation makes the users' visual search for product information more convenient. (3) When the annotation background is opaque or semi-transparent, the negative-polarity display is more favorable to the users' visual search; this is reversed when the annotation background transparency is 25%. (4) Participants preferred the presentation of positive-polarity virtual annotations. (5) Regarding willingness to use and ease of understanding, participants preferred the negative-polarity display at 0% or 50% background transparency; the opposite result was obtained at 25% background transparency.
Originality/value
The findings generated from the research can be a good reference for the development of virtual annotation visual design for mobile shopping applications.
Highlights
Virtual annotation background transparency and background display mode are two essential attributes of 360° panoramas.
This study examined how virtual annotation background transparency and background display mode influence user performance and experience.
It is recommended to use a translucent or opaque annotation background with a negative polarity display.
Virtual annotation presentation with 25% background transparency facilitates consumer searching and comparison of product information.
Users prefer a positive polarity annotation display.