Search results

1 – 10 of 354
Article
Publication date: 9 July 2024

Zengrui Zheng, Kainan Su, Shifeng Lin, Zhiquan Fu and Chenguang Yang

Visual simultaneous localization and mapping (SLAM) has limitations such as sensitivity to lighting changes and lower measurement accuracy. The effective fusion of information…

Abstract

Purpose

Visual simultaneous localization and mapping (SLAM) has limitations such as sensitivity to lighting changes and lower measurement accuracy. The effective fusion of information from multiple modalities to address these limitations has emerged as a key research focus. This study aims to provide a comprehensive review of the development of vision-based SLAM (including visual SLAM) for navigation and pose estimation, with a specific focus on techniques for integrating multiple modalities.

Design/methodology/approach

This paper initially introduces the mathematical models and framework development of visual SLAM. Subsequently, this paper presents various methods for improving accuracy in visual SLAM by fusing different spatial and semantic features. This paper also examines the research advancements in vision-based SLAM with respect to multi-sensor fusion in both loosely coupled and tightly coupled approaches. Finally, this paper analyzes the limitations of current vision-based SLAM and provides predictions for future advancements.
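
As an illustration of the loosely coupled end of that design space (a generic sketch, not an algorithm taken from the surveyed literature; the function name and toy covariances are assumptions), the fragment below fuses two independently estimated positions, e.g. one from visual odometry and one from an IMU chain, by inverse-variance weighting:

    # Minimal sketch: loosely coupled fusion of two independent pose estimates.
    # Each sensor pipeline runs on its own; only the resulting positions and
    # their (diagonal) variances are combined, which is what "loosely coupled"
    # means in this context.
    import numpy as np

    def fuse_loosely(p_vo, var_vo, p_imu, var_imu):
        # Inverse-variance weighting of two independent 3-D position estimates.
        w_vo, w_imu = 1.0 / var_vo, 1.0 / var_imu
        fused = (w_vo * p_vo + w_imu * p_imu) / (w_vo + w_imu)
        return fused, 1.0 / (w_vo + w_imu)

    p_vo = np.array([1.02, 0.48, 0.00]);  var_vo = np.array([0.04, 0.04, 0.09])
    p_imu = np.array([0.98, 0.52, 0.00]); var_imu = np.array([0.01, 0.01, 0.02])
    print(fuse_loosely(p_vo, var_vo, p_imu, var_imu))

A tightly coupled system would instead feed raw measurements from all sensors into a single joint estimator, which is why the two families trade robustness against complexity differently.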

Findings

The combination of vision-based SLAM and deep learning has significant potential for development. Both loosely coupled and tightly coupled approaches to multi-sensor fusion have advantages and disadvantages, and the most suitable algorithm should be chosen based on the specific application scenario. Vision-based SLAM is evolving toward better addressing challenges such as resource-limited platforms and long-term mapping.

Originality/value

This review introduces the development of vision-based SLAM and focuses on the advancements in multimodal fusion. It allows readers to quickly understand the progress and current status of research in this field.

Details

Robotic Intelligence and Automation, vol. 44 no. 4
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 29 January 2024

Kai Wang

The identification of network user relationships in Fancircle contributes to quantifying the violence index of user text, mining the internal correlations of network behaviors among…

Abstract

Purpose

The identification of network user relationships in Fancircle contributes to quantifying the violence index of user text and mining the internal correlations of network behaviors among users, which provides necessary data support for the construction of knowledge graphs.

Design/methodology/approach

A correlation identification method based on sentiment analysis (CRDM-SA) is put forward by extracting user semantic information and introducing violent sentiment membership. Specifically, the topics used for topology mapping in the community are obtained from a self-built violent sentiment dictionary (VSD) by extracting user text information. Afterward, the violence index of the user text is calculated to quantify the fuzzy sentiment representation between the user and the topic. Finally, multi-granularity violence association rule mining of user text is realized by constructing a violence fuzzy concept lattice.
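
As a rough illustration only (the toy dictionary, weights and thresholds are assumptions, not the CRDM-SA implementation), the sketch below scores a text against a small violent sentiment dictionary and maps the score to a fuzzy membership value in [0, 1]:

    # Minimal sketch: violence index from a hand-made VSD plus a linear
    # fuzzy membership function. All terms and weights are invented.
    VSD = {"attack": 0.9, "smash": 0.7, "idiot": 0.6, "destroy": 0.8}

    def violence_index(text):
        tokens = text.lower().split()
        hits = [VSD[t] for t in tokens if t in VSD]
        return sum(hits) / len(tokens) if tokens else 0.0

    def fuzzy_membership(index, low=0.0, high=0.3):
        # 0 below `low`, 1 above `high`, linear in between.
        if index <= low:
            return 0.0
        if index >= high:
            return 1.0
        return (index - low) / (high - low)

    text = "they should attack and destroy that account"
    idx = violence_index(text)
    print(idx, fuzzy_membership(idx))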

Findings

The method helps reveal the internal relationships of online violence in complex network environments, allowing the sentiment dependence of users to be characterized from a granular perspective.

Originality/value

The membership degree of violent sentiment is introduced into user relationship recognition in the Fancircle community, and a text sentiment association recognition method based on the VSD is proposed. By calculating the violent sentiment value of the user text, the annotation of violent sentiment in the topic dimension of the text is achieved, and the partial order relation between fuzzy concepts of violence under an effective confidence threshold is used to obtain the association relations.

Details

Data Technologies and Applications, vol. 58 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 29 November 2022

Yung-Ting Chuang and Ching-Hsien Wang

The purpose of this paper is to propose a mobile and social-based question-and-answer (Q&A) system that analyzes users' social relationships and past answering behavior, considers…

Abstract

Purpose

The purpose of this paper is to propose a mobile and social-based question-and-answer (Q&A) system that analyzes users' social relationships and past answering behavior, considers users' interest similarity and answer quality to infer suitable respondents, and forwards the questions to users who are willing to give high-quality answers.

Design/methodology/approach

This research applies first-order logic (FOL) inference to generate a question/interest ID that combines a user's social information, interests and social network intimacy to choose the nodes that can provide high-quality answers. After receiving a question, a friend can answer it, forward it to their friends according to the number of TTL (time-to-live) hops, or send the answer directly to the server. This research collected data from the TripAdvisor.com website and used it for the experiment. The authors also collected previously answered questions from TripAdvisor.com, so that subsequent answers could be forwarded to a centralized server to improve overall performance.
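
A minimal sketch of the TTL-based forwarding idea (the graph, interests and function names below are hypothetical, not the authors' system): a question spreads through the friend graph for a bounded number of hops, and friends whose interests match the topic answer it rather than forwarding further:

    # Toy social graph and interest profiles (invented for illustration).
    friends = {
        "alice": ["bob", "carol"],
        "bob": ["dave"],
        "carol": ["dave", "erin"],
        "dave": [], "erin": [],
    }
    interests = {"alice": {"food"}, "bob": {"hotels"},
                 "carol": {"food", "hotels"}, "dave": {"flights"},
                 "erin": {"hotels"}}

    def forward_question(asker, topic, ttl):
        # Return the users who can answer within `ttl` hops of the asker.
        answerers, frontier, seen = set(), [asker], {asker}
        for _ in range(ttl):
            next_frontier = []
            for user in frontier:
                for friend in friends.get(user, []):
                    if friend in seen:
                        continue
                    seen.add(friend)
                    if topic in interests.get(friend, set()):
                        answerers.add(friend)        # friend answers directly
                    else:
                        next_frontier.append(friend)  # friend forwards onward
            frontier = next_frontier
        return answerers

    print(forward_question("alice", "hotels", ttl=2))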

Findings

The authors first noticed that even though the proposed system is decentralized, it can still accurately identify the appropriate respondents to provide high-quality answers. In addition, since the system can easily identify the best answerers, there is no need to implement broadcasting, thus reducing the overall execution time and network bandwidth required. Moreover, the system allows users to accurately and quickly obtain high-quality answers after comparing and calculating interest IDs, and it encourages frequent communication and interaction among users. Lastly, the experiments demonstrate that the system achieves high accuracy, high recall rate, low overhead, low forwarding cost and low response rate in all scenarios.

Originality/value

This paper proposes a mobile and social-based Q&A system that applies FOL inference to analyze users' social relationships and past answering behavior, considers users' interest similarity and answer quality to infer suitable respondents, and forwards the questions to users who are willing to give high-quality answers. The experiments demonstrate that the system achieves high accuracy, high recall rate, low overhead, low forwarding cost and low response rate in all scenarios.

Article
Publication date: 6 August 2024

Yingjie Yu, Shuai Chen, Xinpeng Yang, Changzhen Xu, Sen Zhang and Wendong Xiao

This paper proposes a self-supervised monocular depth estimation algorithm under multiple constraints, which can generate the corresponding depth map end-to-end based on RGB…

Abstract

Purpose

This paper proposes a self-supervised monocular depth estimation algorithm under multiple constraints, which can generate the corresponding depth map end-to-end from RGB images. Building on the traditional visual simultaneous localisation and mapping (VSLAM) framework, a deep learning-based dynamic object detection framework is then introduced, and dynamic objects in the scene are culled during mapping.

Design/methodology/approach

Typical SLAM algorithms and data sets assume a static environment and do not consider the potential consequences of accidentally adding dynamic objects to a 3D map. This shortcoming limits the applicability of VSLAM in many practical cases, such as long-term mapping. In light of these considerations, this paper presents a self-supervised monocular depth estimation algorithm based on deep learning and introduces the YOLOv5 dynamic detection framework into the traditional ORB-SLAM2 algorithm for the purpose of removing dynamic objects.
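
A minimal sketch of the dynamic-object culling step (an assumption about how such a pipeline is typically wired, not the paper's code): feature points that fall inside bounding boxes of detected dynamic classes are discarded before they enter the map:

    # Drop feature points inside bounding boxes of detected dynamic objects
    # (e.g. people or cars from a YOLOv5-style detector). Class names, boxes
    # and keypoints below are invented for illustration.
    DYNAMIC_CLASSES = {"person", "car", "bicycle"}

    def cull_dynamic_keypoints(keypoints, detections):
        # keypoints: list of (x, y); detections: list of (cls, x1, y1, x2, y2).
        boxes = [(x1, y1, x2, y2) for cls, x1, y1, x2, y2 in detections
                 if cls in DYNAMIC_CLASSES]
        def in_any_box(x, y):
            return any(x1 <= x <= x2 and y1 <= y <= y2
                       for x1, y1, x2, y2 in boxes)
        return [(x, y) for x, y in keypoints if not in_any_box(x, y)]

    kps = [(10, 10), (120, 80), (300, 200)]
    dets = [("person", 100, 50, 150, 120), ("tree", 280, 180, 340, 260)]
    print(cull_dynamic_keypoints(kps, dets))  # keeps (10, 10) and (300, 200)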

Findings

Compared with Dyna-SLAM, the algorithm proposed in this paper reduces the error by about 13%, and compared with ORB-SLAM2, by about 54.9%. In addition, the algorithm can process a single image frame at 15–20 FPS on GeForce RTX 2080s, far exceeding Dyna-SLAM in real-time performance.

Originality/value

This paper proposes a VSLAM algorithm that can be applied to dynamic environments. The algorithm consists of a self-supervised monocular depth estimation part under multiple constraints and the introduction of a dynamic object detection framework based on YOLOv5.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 3 November 2023

Salam Abdallah and Ashraf Khalil

To understand and lay a foundation of how analytics has been used in depression management, this study conducts a systematic literature review using two…

Abstract

Purpose

To understand and lay a foundation of how analytics has been used in depression management, this study conducts a systematic literature review using two techniques: text mining and manual review. The proposed methodology would aid researchers in identifying key concepts and research gaps, which, in turn, will help them establish the theoretical background supporting their empirical research objectives.

Design/methodology/approach

This paper explores a hybrid methodology for literature review (HMLR), using text mining prior to systematic manual review.
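
For illustration only (not the authors' HMLR pipeline; the example abstracts are invented), a text mining pass of this kind could start with TF-IDF term weighting to surface candidate concepts before the manual review, e.g. with scikit-learn:

    # Rank candidate key terms across a small corpus of abstracts by their
    # total TF-IDF weight. Requires scikit-learn and NumPy.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    abstracts = [
        "machine learning for depression screening from social media text",
        "sentiment analysis and depression detection in online forums",
        "predictive analytics for depression relapse using clinical records",
    ]
    vec = TfidfVectorizer(stop_words="english", max_features=8)
    tfidf = vec.fit_transform(abstracts)
    weights = np.asarray(tfidf.sum(axis=0)).ravel()
    terms = vec.get_feature_names_out()
    for term, w in sorted(zip(terms, weights), key=lambda p: -p[1]):
        print(f"{term:15s} {w:.2f}")

The ranked terms would then feed the manual systematic review rather than replace it.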

Findings

The proposed rapid methodology is an effective tool to automate and speed up the process required to identify key and emerging concepts and research gaps in any specific research domain while conducting a systematic literature review. It assists in populating a research knowledge graph that does not reach all semantic depths of the examined domain yet provides some science-specific structure.

Originality/value

This study presents a new methodology for conducting a literature review for empirical research articles. It explores an “HMLR” that combines text mining and manual systematic literature review. Depending on the purpose of the research, these two techniques can be used in tandem to undertake a comprehensive literature review, combining pieces of complex textual data and revealing areas where research might be lacking.

Details

Information Discovery and Delivery, vol. 52 no. 3
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 31 July 2024

Xuelai Li, Xincong Yang, Kailun Feng and Changyong Liu

Manual monitoring is a conventional method for monitoring and managing construction safety risks. However, construction sites involve risk coupling - a phenomenon in which…

Abstract

Purpose

Manual monitoring is a conventional method for monitoring and managing construction safety risks. However, construction sites involve risk coupling, a phenomenon in which multiple safety risk factors occur at the same time and amplify the probability of construction accidents. It is challenging to manually monitor safety risks that occur simultaneously at different times and locations, especially considering the limitations of risk managers' expertise and human capacity.

Design/methodology/approach

To address this challenge, an automatic approach that integrates point clouds, computer vision technologies and Bayesian networks for the simultaneous monitoring and evaluation of multiple on-site construction risks is proposed. This approach supports the identification of risk couplings and the decision-making process through a system that combines real-time monitoring of multiple safety risks with expert knowledge. The proposed approach was applied to a foundation project, from laboratory experiments to a real-world case application.
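
A toy sketch of the risk coupling idea (invented probabilities, not the paper's Bayesian network): the accident probability conditioned on two monitored risk factors rises sharply when both are present, and the monitoring system supplies the parent probabilities:

    # Tiny Bayesian-network style calculation with two coupled parent risks.
    # The conditional probability table below is illustrative only.
    P_ACCIDENT = {               # P(accident | overload, ground_water)
        (False, False): 0.01,
        (True,  False): 0.10,
        (False, True):  0.08,
        (True,  True):  0.45,    # coupling effect: far above either risk alone
    }

    def accident_probability(p_overload, p_groundwater):
        # Marginalise over the two (assumed independent) parent risks.
        total = 0.0
        for overload in (False, True):
            for water in (False, True):
                p_parents = ((p_overload if overload else 1 - p_overload) *
                             (p_groundwater if water else 1 - p_groundwater))
                total += p_parents * P_ACCIDENT[(overload, water)]
        return total

    # Example: monitoring reports a 30% chance of crane overload and a 60%
    # chance of abnormal ground water level at the same time.
    print(round(accident_probability(0.3, 0.6), 4))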

Findings

In the laboratory experiment, the proposed approach effectively monitored and assessed the interdependent risk couplings in foundation pit construction. In the real-world case, it showed good adaptability to the actual construction application.

Originality/value

The core contribution of this study lies in the combination of an automatic monitoring method with an expert knowledge system to quantitatively assess the impact of risk coupling. This approach offers a valuable tool for risk managers in foundation pit construction, promoting a proactive and informed risk coupling management strategy.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 19 December 2022

Farshid Danesh and Somayeh Ghavidel

The purpose of this study was to conduct a longitudinal study of the structure of the knowledge organization (KO) realm, its concept clusters and emerging KO events, based on co-occurrence analysis.

Abstract

Purpose

The purpose of this study was to conduct a longitudinal study of the structure of the knowledge organization (KO) realm, its concept clusters and emerging KO events, based on co-occurrence analysis.

Design/methodology/approach

This longitudinal study uses co-occurrence analysis. The research population includes the keywords of articles indexed in the Web of Science Core Collection for 1975–1999 and 2000–2018. Hierarchical clustering, multidimensional scaling and co-occurrence analysis were used to conduct the research, and SPSS, UCINET, VOSviewer and NetDraw were used to analyze and visualize the data.
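
A minimal sketch of the co-word step (toy records, not the study's Web of Science data; all names are invented): build a keyword co-occurrence matrix and cluster the keywords hierarchically, here with SciPy instead of the tools listed above:

    # Keyword co-occurrence matrix plus average-linkage hierarchical clustering.
    from itertools import combinations
    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    records = [                       # toy keyword lists, one per article
        ["information retrieval", "indexing", "thesauri"],
        ["information literacy", "information retrieval", "metadata"],
        ["knowledge management", "taxonomy", "metadata"],
        ["knowledge management", "information literacy"],
    ]
    terms = sorted({t for rec in records for t in rec})
    idx = {t: i for i, t in enumerate(terms)}
    cooc = np.zeros((len(terms), len(terms)))
    for rec in records:
        for a, b in combinations(sorted(set(rec)), 2):
            cooc[idx[a], idx[b]] += 1
            cooc[idx[b], idx[a]] += 1

    # More co-occurrence means a smaller distance; cut into three clusters.
    dist = 1.0 / (1.0 + cooc)
    condensed = dist[np.triu_indices(len(terms), k=1)]
    labels = fcluster(linkage(condensed, method="average"),
                      t=3, criterion="maxclust")
    for term, label in zip(terms, labels):
        print(label, term)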

Findings

“Information Technology” in 1975–1999 and “Information Literacy” in 2000–2018 were identified, with the highest frequencies, as the most widely used KO keywords worldwide. In the first period, the cluster “Knowledge Management” had the highest centrality; the cluster “Strategic Planning” had the highest density in 2000–2018; and the cluster “Information Retrieval” had the highest centrality and density. The two-dimensional thematic map of KO and the clustering of KO topics by cluster analysis indicate that, in the periods examined in this study, the thematic clusters overlapped considerably in concept and content.

Originality/value

The present article uses a longitudinal study to examine KO publications over the past half-century, employing hierarchical clustering and multidimensional scaling methods. Studying the concepts and thematic trends in KO can affect the organization of information as the core activity of libraries, museums and archives, and it can inform the planning of information organization and promote knowledge management. The results obtained from this article can help KO policymakers determine and design roadmaps, research planning, and micro- and macro-budgeting processes.

Details

Global Knowledge, Memory and Communication, vol. 73 no. 6/7
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 29 May 2024

Lino Gonzalez-Garcia, Gema González-Carreño, Ana María Rivas Machota and Juan Padilla Fernández-Vega

Knowledge graphs (KGs) are structured knowledge bases that represent real-world entities and are used in a variety of applications. Many of them are created and curated from a…

Abstract

Purpose

Knowledge graphs (KGs) are structured knowledge bases that represent real-world entities and are used in a variety of applications. Many of them are created and curated from a combination of automated and manual processes. Microdata embedded in Web pages for purposes of facilitating indexing and search engine optimization (SEO) are a potential source to augment KGs, under assumptions of complementarity and quality that have not been thoroughly explored to date. In that direction, this paper aims to report the results of a study that evaluates the potential of using microdata extracted from the Web to augment the large, open and manually curated Wikidata KG for the domain of touristic information. As large corpora of Web text are currently being leveraged via large language models (LLMs), these are also used to compare the effectiveness of the microdata enhancement method.

Design/methodology/approach

The Schema.org taxonomy was used as the source to determine the annotation types to be collected. The authors focused on tourism-related pages as a case study, selecting the relevant Schema.org concepts as a point of departure. The large CommonCrawl resource was used to select those annotations from a large recent sample of the World Wide Web. The extracted annotations were processed and matched with Wikidata to estimate the degree to which microdata produced for SEO might become a valuable resource to complement KGs, or vice versa. The Web pages themselves can also serve as context for producing additional metadata elements, using them as input to pipelines built on existing LLMs. In this way, both the annotations and the content itself can be used as sources.
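
As a simplified illustration (not the paper's CommonCrawl pipeline; the type list and function are assumptions), Schema.org annotations embedded as JSON-LD can be pulled from a page and filtered to tourism-related types before any matching against Wikidata entities by name:

    # Extract Schema.org JSON-LD blocks from raw HTML and keep tourism types.
    import json, re

    TOURISM_TYPES = {"Hotel", "TouristAttraction", "Restaurant", "Museum"}

    def extract_tourism_annotations(html):
        blocks = re.findall(
            r'<script[^>]*type="application/ld\+json"[^>]*>(.*?)</script>',
            html, flags=re.DOTALL | re.IGNORECASE)
        found = []
        for block in blocks:
            try:
                data = json.loads(block)
            except json.JSONDecodeError:
                continue        # SEO microdata in the wild is often malformed
            items = data if isinstance(data, list) else [data]
            for item in items:
                if item.get("@type") in TOURISM_TYPES:
                    found.append({"type": item["@type"],
                                  "name": item.get("name")})
        return found

    html = ('<script type="application/ld+json">'
            '{"@type": "Hotel", "name": "Hotel del Mar", "starRating": "4"}'
            '</script>')
    print(extract_tourism_annotations(html))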

Findings

The samples extracted revealed a concentration of metadata annotations in only a few of the relevant Schema.org attributes, as well as the possible influence of authoring tools in a significant fraction of the microdata produced. The analysis of the overlap between attributes in the sample and those in Wikidata showed the potential of the technique, limited by the imbalance in the presence of attributes. Combining this with the use of LLMs to produce additional annotations demonstrates the feasibility of the approach for populating existing Wikidata locations. However, in both cases, effectiveness appears to be lower where the KG has less content, which is arguably the most relevant situation when considering an automated population approach.

Originality/value

The research reports novel empirical findings on the way touristic annotations with an SEO orientation are being produced in the wild and provides an assessment of their potential to complement KGs, or to reuse information from those graphs. It also provides insights on the potential of using LLMs for the task.

Details

The Electronic Library, vol. 42 no. 3
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 29 August 2024

Yizhuo Zhang, Yunfei Zhang, Huiling Yu and Shen Shi

The anomaly detection task for oil and gas pipelines based on acoustic signals faces issues such as background noise coverage, lack of effective features, and small sample sizes…

Abstract

Purpose

The anomaly detection task for oil and gas pipelines based on acoustic signals faces issues such as background noise coverage, a lack of effective features and small sample sizes, resulting in low fault identification accuracy and low efficiency. The purpose of this paper is to study an accurate and efficient method of pipeline anomaly detection.

Design/methodology/approach

First, to address the impact of background noise on the accuracy of anomaly signals, the adaptive multi-threshold center frequency variational mode decomposition (AMTCF-VMD) method is used to eliminate strong noise in pipeline signals. Second, to address the strong data dependency and loss of local features in the Swin Transformer network, a Hybrid Pyramid ConvNet network with an Agent Attention mechanism is proposed. This compensates for the limitations of the CNN's receptive field and enhances the Swin Transformer's global contextual feature representation capabilities. Third, to address the sparsity and imbalance of anomaly samples, the SpecAugment and Scaper methods are integrated to enhance the model's generalization ability.
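
For the augmentation step, a generic SpecAugment-style masking sketch (toy mask sizes; not the paper's exact settings or its Scaper integration) looks like this:

    # Mask random frequency bands and time steps of a spectrogram to enlarge a
    # small anomaly-sound training set. Shapes and widths are illustrative.
    import numpy as np

    def spec_augment(spec, n_freq_masks=1, n_time_masks=1, max_width=8, rng=None):
        # spec: 2-D array (freq_bins x time_steps); returns a masked copy.
        rng = rng or np.random.default_rng()
        out = spec.copy()
        freq_bins, time_steps = out.shape
        for _ in range(n_freq_masks):
            w = rng.integers(1, max_width + 1)
            f0 = rng.integers(0, max(1, freq_bins - w))
            out[f0:f0 + w, :] = 0.0
        for _ in range(n_time_masks):
            w = rng.integers(1, max_width + 1)
            t0 = rng.integers(0, max(1, time_steps - w))
            out[:, t0:t0 + w] = 0.0
        return out

    spectrogram = np.random.rand(64, 128)       # stand-in for a pipeline clip
    augmented = spec_augment(spectrogram, rng=np.random.default_rng(0))
    print(augmented.shape, float(augmented.min()))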

Findings

On the pipeline anomaly audio dataset and environmental datasets such as ESC-50, the AMTCF-VMD method shows more significant denoising effects than wavelet packet decomposition and EMD methods. Additionally, the model achieved 98.7% accuracy on the preprocessed anomaly audio dataset and 99.0% on the ESC-50 dataset.

Originality/value

This paper innovatively proposes and combines the AMTCF-VMD preprocessing method with the Agent-SwinPyramidNet model, addressing noise interference and low accuracy issues in pipeline anomaly detection, and providing strong support for oil and gas pipeline anomaly recognition tasks in high-noise environments.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 20 September 2024

Javier Santiago Cortes Lopez, Guillermo Rodriguez Abitia, Juan Gomez Reynoso and Angel Eduardo Muñoz Zavala

This qualitative study aims to fill gaps in a widely studied and relevant organizational feature: the alignment between information technologies and business strategies.

Abstract

Purpose

This qualitative study aims to fill gaps in a widely studied and relevant organizational feature: the alignment between information technologies and business strategies.

Design/methodology/approach

This research is a qualitative study. The authors used focus groups, content analysis and semantic networks as research approaches to identify the main factors that prevent or foster such alignment.

Findings

Results reveal a leading role of innovation, organizational culture, access to information and financial factors that could promote or inhibit alignment and competitiveness.

Originality/value

This research was conducted only in small and medium-sized organizations in Mexico, a sector that represents about 52% of the Mexican Gross Domestic Product (Mexico being one of the leading trade partners of the USA).

Details

Measuring Business Excellence, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1368-3047
