Search results
1 – 10 of 45
Abstract
Purpose
An increasing number of images are generated daily, and images are gradually becoming a search target. Content-based image retrieval (CBIR) is helpful for users to express their requirements using an image query. Nevertheless, determining whether the retrieval system can provide convenient operation and relevant retrieval results is challenging. A CBIR system based on deep learning features was proposed in this study to effectively search and navigate images in digital articles.
Design/methodology/approach
Convolutional neural networks (CNNs) were used as the feature extractors in the author's experiments. Using pretrained parameters, the training time and retrieval time were reduced. Different CNN features were extracted from the constructed image databases consisting of images taken from the National Palace Museum Journals Archive and were compared in the CBIR system.
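As a rough illustration of the retrieval step this abstract describes, the sketch below ranks database images against a query by cosine similarity over precomputed feature vectors. The toy three-dimensional vectors stand in for real CNN descriptors (e.g. a pretrained network's penultimate-layer embedding); cosine similarity is one common choice of metric, not necessarily the one the paper uses.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, database, top_k=10):
    """Rank database images by similarity to the query feature vector.

    `database` maps image ids to feature vectors, assumed to be the
    flattened output of a pretrained CNN feature extractor.
    """
    scored = [(img_id, cosine_similarity(query_vec, vec))
              for img_id, vec in database.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional "features" standing in for real CNN descriptors.
db = {"a": [1.0, 0.0, 0.0], "b": [0.9, 0.1, 0.0], "c": [0.0, 1.0, 0.0]}
print(retrieve([1.0, 0.0, 0.0], db, top_k=2))
```

Because the feature extractor is pretrained and frozen, only this similarity ranking runs at query time, which is what keeps retrieval fast.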
Findings
DenseNet201 achieved the best performance, with a top-10 mAP of 89% and a query time of 0.14 s.
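The top-10 mAP figure can be sketched as follows, using one common definition of truncated average precision; conventions vary, and this is not necessarily the paper's exact formula.

```python
def average_precision(ranked_ids, relevant_ids, k=10):
    """Average precision of a ranked result list, truncated at rank k."""
    hits, score = 0, 0.0
    for i, img_id in enumerate(ranked_ids[:k], start=1):
        if img_id in relevant_ids:
            hits += 1
            score += hits / i       # precision at this rank
    return score / min(len(relevant_ids), k) if relevant_ids else 0.0

def mean_average_precision(runs, k=10):
    """Mean of per-query average precision over all queries.

    `runs` is a list of (ranked_ids, relevant_ids) pairs, one per query.
    """
    return sum(average_precision(r, rel, k) for r, rel in runs) / len(runs)

# Two toy queries: the first is perfect, the second hits at rank 2.
runs = [(["a", "b", "c"], {"a"}), (["x", "y", "z"], {"y"})]
print(mean_average_precision(runs, k=10))  # 0.75
```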
Practical implications
The CBIR homepage displayed image categories showing the content of the database and provided the default query images. After retrieval, the result showed the metadata of the retrieved images and links back to the original pages.
Originality/value
With the interface and retrieval demonstration, a novel image-based reading mode can be established via the CBIR and links to the original images and contextual descriptions.
Details
Keywords
Nguyen Thi Dinh, Nguyen Thi Uyen Nhi, Thanh Manh Le and Thanh The Van
Abstract
Purpose
The problem of image retrieval and image description exists in various fields. In this paper, a model of content-based image retrieval and image content extraction based on the KD-Tree structure was proposed.
Design/methodology/approach
A Random Forest structure was built to classify the objects in each image on the basis of the balanced multibranch KD-Tree structure. For that purpose, a KD-Tree structure was generated by the Random Forest to retrieve a set of similar images for an input image. A KD-Tree structure was also applied to determine relationship words at the leaves, extracting the relationships between objects in an input image. The content of an input image is then described based on class names and the relationships between objects.
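The paper's structure is a balanced multibranch KD-Tree combined with a Random Forest; as background intuition only, the sketch below shows the classic binary KD-Tree with pruned nearest-neighbour search on which such variants build.

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a binary KD-Tree over k-dimensional points,
    splitting on the median along a cycling axis."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {
        "point": points[mid],
        "left": build_kdtree(points[:mid], depth + 1),
        "right": build_kdtree(points[mid + 1:], depth + 1),
    }

def nearest(node, target, depth=0, best=None):
    """Nearest-neighbour search with branch pruning."""
    if node is None:
        return best
    if best is None or math.dist(node["point"], target) < math.dist(best, target):
        best = node["point"]
    axis = depth % len(target)
    diff = target[axis] - node["point"][axis]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, target, depth + 1, best)
    if abs(diff) < math.dist(best, target):  # far side may still hold a closer point
        best = nearest(far, target, depth + 1, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
print(nearest(tree, (9, 2)))  # (8, 1)
```

In the paper's setting the tree is multibranch and its leaves carry class and relationship labels rather than raw points, but the build-then-descend pattern is the same.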
Findings
A model of image retrieval and image content extraction was built on the proposed theoretical basis; experiments were performed on multi-object image datasets, including Microsoft COCO and Flickr, with average image retrieval precisions of 0.9028 and 0.9163, respectively. The experimental results were compared with those of other works on the same datasets to demonstrate the effectiveness of the proposed method.
Originality/value
A balanced multibranch KD-Tree structure was built to apply to relationship classification on the basis of the original KD-Tree structure. Then, KD-Tree Random Forest was built to improve the classifier performance and retrieve a set of similar images for an input image. Concurrently, the image content was described in the process of combining class names and relationships between objects.
Yaolin Zhou, Zhaoyang Zhang, Xiaoyu Wang, Quanzheng Sheng and Rongying Zhao
Abstract
Purpose
The digitalization of archival management has rapidly developed with the maturation of digital technology. With data's exponential growth, archival resources have transitioned from single modalities, such as text, images, audio and video, to integrated multimodal forms. This paper identifies key trends, gaps and areas of focus in the field. Furthermore, it proposes a theoretical organizational framework based on deep learning to address the challenges of managing archives in the era of big data.
Design/methodology/approach
Via a comprehensive systematic literature review, the authors investigate the field of multimodal archive resource organization and the application of deep learning techniques in archive organization. A systematic search and filtering process is conducted to identify relevant articles, which are then summarized, discussed and analyzed to provide a comprehensive understanding of existing literature.
Findings
The authors' findings reveal that most research on multimodal archive resources predominantly focuses on aspects related to storage, management and retrieval. Furthermore, the utilization of deep learning techniques in image archive retrieval is increasing, highlighting their potential for enhancing image archive organization practices; however, practical research and implementation remain scarce. The review also underscores gaps in the literature, emphasizing the need for more practical case studies and the application of theoretical concepts in real-world scenarios. In response to these insights, the authors' study proposes an innovative deep learning-based organizational framework. This proposed framework is designed to navigate the complexities inherent in managing multimodal archive resources, representing a significant stride toward more efficient and effective archival practices.
Originality/value
This study comprehensively reviews the existing literature on multimodal archive resources organization. Additionally, a theoretical organizational framework based on deep learning is proposed, offering a novel perspective and solution for further advancements in the field. These insights contribute theoretically and practically, providing valuable knowledge for researchers, practitioners and archivists involved in organizing multimodal archive resources.
Na Jiang, Xiaohui Liu, Hefu Liu, Eric Tze Kuan Lim, Chee-Wee Tan and Jibao Gu
Abstract
Purpose
Artificial intelligence (AI) has gained significant momentum in recent years. Among AI-infused systems, one prominent application is context-aware systems. Although the fusion of AI and context awareness has given birth to personalized and timely AI-powered context-aware systems, several challenges still remain. Given the “black box” nature of AI, the authors propose that human–AI collaboration is essential for AI-powered context-aware services to eliminate uncertainty and evolve. To this end, this study aims to advance a research agenda for facilitators and outcomes of human–AI collaboration in AI-powered context-aware services.
Design/methodology/approach
Synthesizing the extant literature on AI and context awareness, the authors advance a theoretical framework that not only differentiates among the three phases of AI-powered context-aware services (i.e. context acquisition, context interpretation and context application) but also outlines plausible research directions for each stage.
Findings
The authors delve into the role of human–AI collaboration and derive future research questions from two directions, namely, the effects of AI-powered context-aware services design on human–AI collaboration and the impact of human–AI collaboration.
Originality/value
This study contributes to the extant literature by identifying knowledge gaps in human–AI collaboration for AI-powered context-aware services and putting forth research directions accordingly. In turn, their proposed framework yields actionable guidance for AI-powered context-aware service designers and practitioners.
Aya Khaled Youssef Sayed Mohamed, Dagmar Auer, Daniel Hofer and Josef Küng
Abstract
Purpose
Data protection requirements heavily increased due to the rising awareness of data security, legal requirements and technological developments. Today, NoSQL databases are increasingly used in security-critical domains. Current survey works on databases and data security only consider authorization and access control in a very general way and do not regard most of today’s sophisticated requirements. Accordingly, the purpose of this paper is to discuss authorization and access control for relational and NoSQL database models in detail with respect to requirements and current state of the art.
Design/methodology/approach
This paper follows a systematic literature review approach to study authorization and access control for different database models. Starting with research on existing survey works on authorization and access control in databases, the study continues with the identification and definition of advanced authorization and access control requirements that are generally applicable to any database model. The paper then discusses and compares current database models against these requirements.
Findings
As no survey works consider requirements for authorization and access control in different database models so far, the authors define their requirements. Furthermore, the authors discuss the current state of the art for the relational, key-value, column-oriented, document-based and graph database models in comparison to the defined requirements.
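To make the kind of requirement under discussion concrete, here is a deliberately minimal sketch of a role-based access check at the collection level of a document store; the policy, role names and collection names are invented for illustration, and real systems layer far more (row/field-level rules, attribute-based conditions, delegation) on top of this.

```python
# Role -> {collection: set of permitted actions}; purely illustrative policy.
POLICY = {
    "analyst": {"patients": {"read"}},
    "clinician": {"patients": {"read", "write"}},
}

def is_authorized(role, collection, action):
    """Check a (role, collection, action) request against the policy.

    Unknown roles and collections are denied by default.
    """
    return action in POLICY.get(role, {}).get(collection, set())

print(is_authorized("analyst", "patients", "write"))    # False
print(is_authorized("clinician", "patients", "write"))  # True
```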
Originality/value
This paper focuses on authorization and access control for various database models, not concrete products. This paper identifies today’s sophisticated – yet general – requirements from the literature and compares them with research results and access control features of current products for the relational and NoSQL database models.
Abstract
Purpose
In order to improve the estimation accuracy of soil organic matter, this paper aims to establish a modified model for hyperspectral estimation of soil organic matter content based on the positive and inverse grey relational degrees.
Design/methodology/approach
Based on 82 soil samples collected in Daiyue District, Tai'an City, Shandong Province, the spectral data of the soil samples are first transformed by the first-order differential, the logarithmic reciprocal first-order differential and other transformations; the correlation coefficients between the transformed spectral data and soil organic matter content are calculated, and the estimation factors are selected according to the principle of maximum correlation. Second, the positive and inverse grey relational degree model is used to identify the samples to be identified, yielding initial estimates of the organic matter content. Finally, based on the difference information between the samples to be identified and their corresponding known patterns, a modified model for the initial estimates of soil organic matter content is established, and the estimation accuracy of the model is evaluated using the mean relative error and the coefficient of determination.
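For readers unfamiliar with grey relational analysis, the sketch below computes the classical (Deng) grey relational degree between a reference sequence and a comparison sequence; the paper's positive and inverse relational degrees are a variant of this idea, and the numbers here are toy values, not the authors' data.

```python
def grey_relational_degree(reference, comparison, rho=0.5):
    """Classical (Deng) grey relational degree between two sequences.

    rho is the distinguishing coefficient, conventionally set to 0.5.
    """
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:            # identical sequences: perfect relation
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

# Toy spectral estimation factors vs. a reference sample.
print(grey_relational_degree([1.0, 2.0, 3.0], [1.1, 2.0, 2.7]))
```

The degree lies in (0, 1], with values near 1 indicating that the comparison sample behaves like the reference pattern, which is what makes it usable for identifying the closest known patterns to a sample.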
Findings
The results show that the logarithmic reciprocal first-order differential and the first-order differential of the square root are the most effective transformations of the original spectral data, significantly improving the correlation between soil organic matter content and spectral data. The modified model for hyperspectral estimation of soil organic matter has high estimation accuracy: the mean relative error (MRE) over 11 test samples is 4.091%, and the coefficient of determination (R2) is 0.936. This estimation precision is higher than that of the linear regression, BP neural network and support vector machine models. The application examples show that the modified model for hyperspectral estimation of soil organic matter content based on the positive and inverse grey relational degrees proposed in this article is feasible and effective.
Social implications
The model in this paper has clear mathematical and physical meaning, is computationally simple and is easy to program. It not only fully exploits the internal information of known pattern samples with "insufficient and incomplete information" but also effectively overcomes the randomness and grey uncertainty in the spectral estimation of soil organic matter. The research results not only enrich grey system theory and methods but also provide a new approach for hyperspectral estimation of soil properties such as organic matter content and water content.
Originality/value
The paper succeeds in realizing both a modified model for hyperspectral estimation of soil organic matter based on the positive and inverse grey relational degrees and effectively dealing with the randomness and grey uncertainty in spectral estimation.
Elaheh Hosseini, Kimiya Taghizadeh Milani and Mohammad Shaker Sabetnasab
Abstract
Purpose
This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.
Design/methodology/approach
This applied research employed a descriptive and analytical method, scientometric indicators, co-word techniques, and social network analysis. VOSviewer, SPSS, Python programming, and UCINet software were used for data analysis and network structure visualization.
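The core bookkeeping of a co-word analysis can be sketched simply: count how often each unordered pair of keywords appears together in one article's keyword list. The records below are invented; in the study the input would be WOS keyword fields, and the resulting counts feed the network and clustering tools listed above.

```python
from collections import Counter
from itertools import combinations

def coword_counts(records):
    """Count keyword co-occurrences across a list of keyword lists.

    Each record is one article's keyword list; every unordered keyword
    pair within a record counts as one co-occurrence.
    """
    pairs = Counter()
    for keywords in records:
        for a, b in combinations(sorted(set(keywords)), 2):
            pairs[(a, b)] += 1
    return pairs

records = [
    ["ontology", "semantic", "linked data"],
    ["ontology", "semantic"],
    ["linked data", "RDF"],
]
print(coword_counts(records).most_common(1))  # [(('ontology', 'semantic'), 2)]
```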
Findings
The top ranks of the Web of Science (WOS) subject categorization belonged to various fields of computer science, and the USA was the most prolific country. The keyword "ontology" had the highest co-occurrence frequency, and "ontology" and "semantic" formed the most frequent co-word pair. In terms of the network structure, nine major topic clusters were identified based on co-occurrence, and 29 thematic clusters were identified based on hierarchical clustering. Comparison of the two clustering techniques showed that three clusters, namely semantic bioinformatics, knowledge representation and semantic tools, were common to both. The most mature and mainstream thematic clusters were natural language processing techniques to boost modeling and visualization, context-aware knowledge discovery, probabilistic latent semantic analysis (PLSA), semantic tools, latent semantic indexing, web ontology language (OWL) syntax and ontology-based deep learning.
Originality/value
This study adopted various techniques, such as co-word analysis, social network analysis, network structure visualization and hierarchical clustering, to present a suitable, visual, methodical and comprehensive perspective on linked data.
Julián Monsalve-Pulido, Jose Aguilar, Edwin Montoya and Camilo Salazar
Abstract
This article proposes an architecture of an intelligent and autonomous recommendation system to be applied to any virtual learning environment, with the objective of efficiently recommending digital resources. The paper presents the architectural details of the intelligent and autonomous dimensions of the recommendation system. The paper describes a hybrid recommendation model that orchestrates and manages the available information and the specific recommendation needs, in order to determine the recommendation algorithms to be used. The hybrid model allows the integration of the approaches based on collaborative filter, content or knowledge. In the architecture, information is extracted from four sources: the context, the students, the course and the digital resources, identifying variables, such as individual learning styles, socioeconomic information, connection characteristics, location, etc. Tests were carried out for the creation of an academic course, in order to analyse the intelligent and autonomous capabilities of the architecture.
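The orchestration idea, choosing the recommendation algorithm from the information actually available, can be sketched as below. All names and scoring rules here are invented stand-ins: a real system would use proper collaborative filtering and richer learner profiles, but the dispatch pattern (fall back to content-based scoring for a cold-start user, otherwise blend signals) is the point.

```python
def collaborative_scores(user_id, ratings):
    """Score items from other users' ratings (a crude stand-in for a
    real collaborative filter)."""
    scores = {}
    for other, items in ratings.items():
        if other == user_id:
            continue
        for item, rating in items.items():
            scores[item] = max(scores.get(item, 0), rating)
    return scores

def content_scores(profile, catalog):
    """Score items by overlap between item tags and the learner profile."""
    return {item: len(profile & tags) for item, tags in catalog.items()}

def hybrid_recommend(user_id, ratings, profile, catalog, top_n=2):
    """Pick the algorithm from the available information: use only
    content-based scores for a cold-start user, otherwise blend both."""
    cf = collaborative_scores(user_id, ratings) if ratings.get(user_id) else {}
    cb = content_scores(profile, catalog)
    combined = {i: cf.get(i, 0) + cb.get(i, 0) for i in set(cf) | set(cb)}
    return sorted(combined, key=combined.get, reverse=True)[:top_n]

ratings = {"u1": {}, "u2": {"video1": 5}}
catalog = {"video1": {"algebra", "intro"}, "pdf1": {"history"}}
print(hybrid_recommend("u1", ratings, {"algebra"}, catalog))  # ['video1', 'pdf1']
```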
Chia-Ling Chang, Yen-Liang Chen and Jia-Shin Li
Abstract
Purpose
The purpose of this paper is to provide a cross-platform recommendation system that recommends the most suitable public Instagram accounts to Facebook users.
Design/methodology/approach
We collect data from both Facebook and Instagram and then propose a similarity matching mechanism for recommending the most appropriate Instagram accounts to Facebook users. By removing the data disparity between the two heterogeneous platforms and integrating them, the system is able to make more accurate recommendations.
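The abstract does not spell out the matching function, so purely as a sketch of the idea, the code below ranks Instagram accounts for a Facebook user by Jaccard similarity over interest tags that have been normalized to a shared vocabulary; the account names and tags are invented.

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of interest tags."""
    return len(a & b) / len(a | b) if a or b else 0.0

def match_accounts(fb_profile, ig_accounts, top_n=3):
    """Rank public Instagram accounts for a Facebook user by overlap of
    interest tags assumed to be normalized across both platforms."""
    ranked = sorted(ig_accounts.items(),
                    key=lambda kv: jaccard(fb_profile, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:top_n]]

fb_user = {"travel", "food", "photography"}
ig = {"@wanderlust": {"travel", "photography"},
      "@chef_daily": {"food", "cooking"},
      "@fintech": {"finance"}}
print(match_accounts(fb_user, ig, top_n=2))  # ['@wanderlust', '@chef_daily']
```

Bringing both platforms' tags into one vocabulary before matching is the "removing the data disparity" step the abstract mentions.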
Findings
The results show that the method proposed in this paper can recommend suitable public Instagram accounts to Facebook users with very high accuracy.
Originality/value
To the best of the authors’ knowledge, this is the first study to propose a recommender system that recommends public Instagram accounts to Facebook users. Second, our proposed method can integrate heterogeneous data from two different platforms to generate collaborative recommendations. Furthermore, our cross-platform system illustrates an innovative concept of how multiple platforms can promote one another in a unified, cooperative and collaborative manner.
Jiaxin Ye, Huixiang Xiong, Jinpeng Guo and Xuan Meng
Abstract
Purpose
The purpose of this study is to investigate how book group recommendations can be used as a meaningful way to suggest suitable books to users, given the increasing number of individuals engaging in sharing and discussing books on the web.
Design/methodology/approach
The authors propose reviews fine-grained classification (CFGC) and related models, such as CFGC1, for book group recommendation. These models categorize reviews successively by function and role. A BERT-BiLSTM model is constructed to classify the reviews by function; the frequency characteristics of the reviews are mined by word frequency analysis, and the relationship between reviews and the overall book score is mined by correlation analysis. The reviews are then classified into three roles: celebrity, general and passerby. Finally, the authors form user groups, mine group features and combine these features with fine-grained book ratings to make book group recommendations.
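The last step, grouping reviewers by role and aggregating fine-grained ratings per group, can be sketched as follows. The activity thresholds and the sample data are hypothetical; the paper derives roles from review characteristics, not from a simple count cut-off.

```python
def reviewer_role(review_count):
    """Assign a reviewer role from activity level (hypothetical thresholds)."""
    if review_count >= 100:
        return "celebrity"
    if review_count >= 10:
        return "general"
    return "passerby"

def group_book_scores(reviews):
    """Average ratings per (role, book) group.

    `reviews` is a list of (reviewer_review_count, book, rating) triples.
    """
    sums, counts = {}, {}
    for review_count, book, rating in reviews:
        key = (reviewer_role(review_count), book)
        sums[key] = sums.get(key, 0) + rating
        counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

reviews = [(150, "Dune", 5), (12, "Dune", 4), (2, "Dune", 3), (200, "Dune", 4)]
print(group_book_scores(reviews))
```

Per-group averages like these are what lets the recommender serve each group's tendency instead of one global ranking.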
Findings
Overall, the best recommendations are made by the Synopsis comments, with accuracy, recall, F-value and Hellinger distance of 52.9%, 60.0%, 56.3% and 0.163, respectively. The F1 values of recommendations based on the Author and Writing comments improve on the Synopsis comments by 2.5% and 0.4%, respectively.
Originality/value
Previous studies on book recommendation often recommend related books by mining the similarity between books, so the set of books recommended to users, and especially to groups, tends to concentrate on a few types. The proposed method effectively ensures the diversity of recommendations by mining users’ tendencies toward different review attributes of books and recommending books to groups accordingly. In addition, this study investigates which types of reviews should be used to make book recommendations when targeting groups with specific tendencies.