Search results
1 – 10 of 13
Feng Zhang, Youliang Wei and Tao Feng
Abstract
Purpose
GraphQL is a new Open API specification that allows clients to send queries and obtain data flexibly according to their needs. However, a high-complexity GraphQL query may lead to an excessive data volume of the query result, which causes problems such as resource overload of the API server. Therefore, this paper aims to address this issue by predicting the response data volume of a GraphQL query statement.
Design/methodology/approach
This paper proposes a GraphQL response data volume prediction approach based on Code2Vec and AutoML. First, a GraphQL query statement is transformed into a path collection of its abstract syntax tree, following the idea of Code2Vec, and the query is then aggregated into a fixed-length vector. The response data volume is predicted from this vector by a fully connected neural network. To further improve prediction accuracy, the predictions from the embedded features are combined with the field features and summary features of the query statement, and an AutoML model predicts the final response data volume.
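As a toy illustration of the embedding step described above, the sketch below (our own simplification, not the paper's Code2Vec implementation) enumerates root-to-leaf field paths of a query's selection tree and hashes them into a fixed-length bag-of-paths vector; the query shape, bucket count and hashing scheme are illustrative assumptions:

```python
import hashlib

def leaf_paths(tree, prefix=()):
    """Enumerate root-to-leaf field paths in a query selection tree."""
    paths = []
    for field, sub in tree.items():
        if sub:                      # nested selection set: recurse
            paths.extend(leaf_paths(sub, prefix + (field,)))
        else:                        # leaf field: emit the full path
            paths.append(prefix + (field,))
    return paths

def embed(tree, dim=16):
    """Hash each path into a bucket of a fixed-length count vector."""
    vec = [0] * dim
    for path in leaf_paths(tree):
        h = int(hashlib.md5("/".join(path).encode()).hexdigest(), 16)
        vec[h % dim] += 1
    return vec

# Hypothetical GitHub-style query:
# repository { name issues { title author { login } } }
query = {"repository": {"name": {},
                        "issues": {"title": {}, "author": {"login": {}}}}}
paths = leaf_paths(query)
vec = embed(query)
```

In the paper, such fixed-length vectors feed a fully connected network; the hashing trick here merely stands in for a learned aggregation.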
Findings
Experiments on two public GraphQL API data sets, GitHub and Yelp, show that the accuracy of the proposed approach is 15.85% and 50.31% higher than existing GraphQL response volume prediction approaches based on machine learning techniques, respectively.
Originality/value
This paper proposes an approach that combines Code2Vec and AutoML for GraphQL query response data volume prediction with higher accuracy.
Miquel Centelles and Núria Ferran-Ferrer
Abstract
Purpose
This study develops a comprehensive framework for assessing knowledge organization systems (KOSs), including the taxonomy of Wikipedia and the ontologies of Wikidata, with a specific focus on enhancing management and retrieval from a gender non-binary perspective.
Design/methodology/approach
This study employs heuristic and inspection methods to assess Wikipedia’s KOS, ensuring compliance with international standards. It evaluates the efficiency of retrieving non-masculine gender-related articles using the Catalan Wikipedian category scheme, identifying limitations. Additionally, a novel assessment of Wikidata ontologies examines their structure and coverage of gender-related properties, comparing them to Wikipedia’s taxonomy for advantages and enhancements.
Findings
This study evaluates Wikipedia’s taxonomy and Wikidata’s ontologies, establishing evaluation criteria for gender-based categorization and exploring their structural effectiveness. The evaluation process suggests that Wikidata ontologies may offer a viable solution to address Wikipedia’s categorization challenges.
Originality/value
The assessment of Wikipedia categories (taxonomy) based on KOS standards leads to the conclusion that there is ample room for improvement, not only in matters concerning gender identity but also in the overall KOS to enhance search and retrieval for users. These findings bear relevance for the design of tools to support information retrieval on knowledge-rich websites, as they assist users in exploring topics and concepts.
Jing Chen, Hongli Chen and Yingyun Li
Abstract
Purpose
Cross-app interactive search has become the new normal, but the characteristics of tactic transitions in this setting are still unclear. This study investigated the transitions of daily search tactics during the cross-app interactive search process.
Design/methodology/approach
In total, 204 memorable cross-app search experiences of young participants in real daily situations were collected. The search tactics and tactic transition sequences in their search processes were obtained by open coding. Statistical analysis and sequence analysis were used to examine the most frequently applied tactics, the frequency and probability of tactic transitions, and the tactic transition sequences characterizing the beginning, middle and ending phases of the search.
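The transition-frequency analysis described above amounts to counting tactic bigrams and normalizing per source tactic. A minimal sketch, using the paper's tactic abbreviations but entirely invented session data:

```python
from collections import Counter

def transition_stats(sequences):
    """Count tactic-to-tactic transitions and derive per-source probabilities."""
    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))   # adjacent pairs = transitions
    totals = Counter()
    for (src, _), n in counts.items():
        totals[src] += n                   # outgoing transitions per tactic
    probs = {pair: n / totals[pair[0]] for pair, n in counts.items()}
    return counts, probs

# Hypothetical coded sessions (not the study's data)
sessions = [
    ["Creat", "EvalR", "EvalI", "Rec"],
    ["Creat", "EvalR", "Creat", "EvalR", "EvalI", "Rec"],
]
counts, probs = transition_stats(sessions)
```

A first-order transition matrix like this is the standard input for the kind of sequence analysis the study reports.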
Findings
Creating the search statement (Creat), evaluating search results (EvalR), evaluating an individual item (EvalI) and keeping a record (Rec) were the most frequently applied tactics. The frequency and probability of transitions differed significantly between tactic types. "Creat → EvalR → EvalI → Rec" was the typical path. Initiating the search in various ways and modifying the search statement were highlighted at the beginning phase; iteratively creating the search statement was highlighted in the middle phase; and utilization and feedback of information were highlighted at the ending phase.
Originality/value
The present study sheds new light on tactic transitions in the cross-app interactive environment as a lens on information search behaviour. The findings provide targeted suggestions for optimizing app query, browsing and monitoring systems.
Somayeh Tamjid, Fatemeh Nooshinfard, Molouk Sadat Hosseini Beheshti, Nadjla Hariri and Fahimeh Babalhavaeji
Abstract
Purpose
The purpose of this study is to develop a domain-independent, cost-effective, time-saving and semi-automated ontology generation framework that can extract taxonomic concepts from an unstructured text corpus. In the human disease domain, ontologies have proven extremely useful for managing the diversity of technical expressions in support of information retrieval objectives. The boundaries of these domains are expanding so fast that it is essential to continuously develop new ontologies or upgrade existing ones.
Design/methodology/approach
This paper proposes a semi-automated approach that extracts entities and relations via text mining of scientific publications. Code named text mining-based ontology (TmbOnt) is generated to assist a user in capturing, processing and establishing ontology elements. This code takes a collection of unstructured text files as input and projects them into high-valued entities and relations as output. As a semi-automated approach, a user supervises the process, filters meaningful predecessor/successor phrases and finalizes the desired ontology-taxonomy. To verify the practical capabilities of the scheme, a case study was performed to derive a glaucoma ontology-taxonomy. For this purpose, text files containing 10,000 records were collected from PubMed.
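Extracting candidate predecessor/successor phrases from text typically relies on lexico-syntactic patterns. The sketch below is an illustrative stand-in, not TmbOnt's actual code: a single "X is a/an Y" pattern over invented sample sentences, yielding (term, parent) pairs a user could then filter:

```python
import re

# One simple lexico-syntactic pattern: "X is a/an Y" suggests Y is a
# taxonomic predecessor (hypernym) of X. Pattern and data are illustrative.
PATTERN = re.compile(r"(\w[\w -]*?) is an? ([\w -]+)")

def extract_pairs(sentences):
    """Extract candidate (successor, predecessor) taxonomy pairs from text."""
    pairs = []
    for s in sentences:
        for m in PATTERN.finditer(s.lower()):
            term, parent = m.group(1).strip(), m.group(2).strip()
            pairs.append((term, parent))
    return pairs

docs = ["Glaucoma is a disease of the optic nerve.",
        "Open-angle glaucoma is a common form of glaucoma."]
pairs = extract_pairs(docs)
```

In a semi-automated pipeline like the paper's, such raw candidates would be reviewed by the user before entering the final ontology-taxonomy.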
Findings
The proposed approach processed over 3.8 million tokenized terms from those records and yielded the resultant glaucoma ontology-taxonomy. The TmbOnt-derived taxonomy demonstrated a 60%–100% coverage ratio against well-known medical thesauruses and ontology taxonomies, such as the Human Disease Ontology, Medical Subject Headings and the National Cancer Institute Thesaurus, with an average of 70% additional terms recommended for ontology development.
Originality/value
According to the literature, the proposed scheme demonstrated novel capability in expanding the ontology-taxonomy structure with a semi-automated text mining approach, aiming for future fully-automated approaches.
Manuel Pedro Rodríguez Bolívar and Laura Alcaide Muñoz
Abstract
Purpose
This study aims to conduct performance and clustering analyses with the help of the Digital Government Reference Library (DGRL) v16.6 database, examining the role of emerging technologies (ETs) in public services delivery.
Design/methodology/approach
VOSviewer and SciMAT techniques were used for clustering and mapping the use of ETs in public services delivery. Collecting documents from the DGRL v16.6 database, the paper uses text mining analysis to identify key terms and trends in e-Government research regarding ETs and public services.
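The clustering tools mentioned above operate on term co-occurrence data. A minimal sketch of how such co-word counts are built from a document set; the documents and seed vocabulary are invented, and real pipelines extract terms automatically rather than from a fixed list:

```python
from collections import Counter
from itertools import combinations

def coword_counts(docs, vocab):
    """Count pairwise co-occurrence of key terms across documents --
    the raw input that co-word mapping tools (e.g. VOSviewer) cluster."""
    pairs = Counter()
    for doc in docs:
        present = sorted(t for t in vocab if t in doc.lower())
        pairs.update(combinations(present, 2))  # each unordered term pair once
    return pairs

# Hypothetical e-Government document titles and a tiny seed vocabulary
docs = ["Blockchain for public services delivery",
        "Artificial intelligence and blockchain in e-government",
        "Artificial intelligence for smart public services"]
vocab = {"blockchain", "artificial intelligence", "public services"}
pairs = coword_counts(docs, vocab)
```

Terms that frequently co-occur end up close together on the resulting map, which is how thematic clusters such as the four stages reported below emerge.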
Findings
The analysis indicates that all ETs are strongly linked to each other, except for blockchain technologies (due to their disruptive nature), which suggests that ETs can be seen as cumulative knowledge. On the whole, the findings identify four stages in the evolution of ETs and their application to public services: the “electronic administration” stage, the “technological baseline” stage, the “managerial” stage and the “disruptive technological” stage.
Practical implications
The output of the present research will help to orient policymakers in the implementation and use of ETs, evaluating the influence of these technologies on public services.
Social implications
The research helps researchers track research trends and uncover new paths regarding ETs and their implementation in public services.
Originality/value
Recent research has focused on the need to implement ETs to improve public services, which could help cities improve citizens’ quality of life in urban areas. This paper contributes to expanding knowledge about ETs and their implementation in public services, identifying trends and networks in research on these issues.
Yaxi Liu, Chunxiu Qin, Yulong Wang and XuBu Ma
Abstract
Purpose
Exploratory search activities are ubiquitous in various information systems. Much potentially useful or even serendipitous information is discovered during the exploratory search process. Given its irreplaceable role in information systems, exploratory search has attracted growing attention from the information system community. Since few studies have methodically reviewed current publications, researchers and practitioners are unable to take full advantage of existing achievements, which, in turn, limits their progress in this field. Through a literature review, this study aims to recapitulate important research topics of exploratory search in information systems, providing a research landscape of exploratory search.
Design/methodology/approach
Automatic and manual searches were performed on seven reputable databases to collect relevant literature published between January 2005 and July 2023. The literature pool contains 146 primary studies on exploratory search in information system research.
Findings
This study recapitulated five important topics of exploratory search, namely, conceptual frameworks, theoretical frameworks, influencing factors, design features and evaluation metrics. Moreover, this review revealed research gaps in current studies and proposed a knowledge framework and a research agenda for future studies.
Originality/value
This study has important implications for beginners to quickly get a snapshot of exploratory search studies, for researchers to re-align current research or discover new interesting issues, and for practitioners to design information systems that support exploratory search.
Stephanie Q. Liu, Khadija Ali Vakeel, Nicholas A. Smith, Roya Sadat Alavipour, Chunhao (Victor) Wei and Jochen Wirtz
Abstract
Purpose
An AI concierge is a technologically advanced, intelligent and personalized assistant that is designated to an individual customer, proactively taking care of that customer’s needs throughout the service journey. This article envisions the idea of AI concierges and discusses how to leverage AI concierges in the customer journey.
Design/methodology/approach
This article takes a conceptual approach and draws insights from literature in service management, marketing, psychology, human-computer interaction and ethics.
Findings
This article delineates the fundamental forms of AI concierges: dialog interface (no embodiment), virtual avatar (embodiment in the virtual world), holographic projection (projection in the physical world) and tangible service robot (embodiment in the physical world). Key attributes of AI concierges are the ability to exhibit semantic understanding of auditory and visual inputs, maintain an emotional connection with the customer, demonstrate proactivity in refining the customer’s experience and ensure omnipresence through continuous availability in various forms to attend to service throughout the customer journey. Furthermore, the article explores the multifaceted roles that AI concierges can play across the pre-encounter, encounter and post-encounter stages of the customer journey and explores the opportunities and challenges associated with AI concierges.
Practical implications
This paper provides insights for professionals in hospitality, retail, travel, and healthcare on leveraging AI concierges to enhance the customer experience. By broadening AI concierge services, organizations can deliver personalized assistance and refined services across the entire customer journey.
Originality/value
This article is the first to introduce the concept of the AI concierge. It offers a novel perspective by defining AI concierges’ fundamental forms, key attributes and exploring their diverse roles in the customer journey. Additionally, it lays out a research agenda aimed at further advancing this domain.
Ali Ahmed Albinali, Russell Lock and Iain Phillips
Abstract
Purpose
This study aims to examine the challenges that hinder small- and medium-sized enterprises (SMEs) from using open data (OD). The research gaps identified are then used to propose a next-generation OD platform (ODP+).
Design/methodology/approach
This study proposes a more effective platform for SMEs called ODP+. A proof of concept was implemented by using modern techniques and technologies, with a pilot conducted among selected SMEs and government employees to test the approach’s viability.
Findings
The findings indicate that current OD platforms, both generally and in Gulf Cooperation Council (GCC) countries, encounter several difficulties, including data sets that are complex to understand and whose potential for reuse is hard to determine. The application of big data analytics in mitigating the identified challenges is demonstrated through the artefacts that have been developed.
Research limitations/implications
This paper discusses several challenges that must be addressed to ensure that OD is accessible, helpful and of high quality in the future when planning and implementing OD initiatives.
Practical implications
The proposed ODP+ integrates social network data, SME data sets and government databases. It will give SMEs a platform for combining data from government agencies, third parties and social networks to carry out complex analytical scenarios or build the needed applications using artificial intelligence.
Social implications
The findings promote the potential future utilisation of OD and suggest ways to give users access to knowledge and features.
Originality/value
To the best of the authors’ knowledge, no study provides extensive research on OD in Qatar or the GCC. Further, the proposed ODP+ is a new platform that allows SMEs to run natural language data analytics queries.
Zulma Valedon Westney, Inkyoung Hur, Ling Wang and Junping Sun
Abstract
Purpose
Disinformation on social media is a serious issue. This study examines the effects of disinformation on COVID-19 vaccination decision-making to understand how social media users make healthcare decisions when disinformation is presented in their social media feeds. It examines trust in post owners as a moderator on the relationship between information types (i.e. disinformation and factual information) and vaccination decision-making.
Design/methodology/approach
This study conducts a scenario-based web survey experiment to collect extensive survey data from social media users.
Findings
This study reveals that information types differently affect social media users' COVID-19 vaccination decision-making and finds a moderating effect of trust in post owners on the relationship between information types and vaccination decision-making. For those who have a high degree of trust in post owners, the effect of information types on vaccination decision-making becomes large. In contrast, information types do not affect the decision-making of those who have a very low degree of trust in post owners. In addition, identification and compliance are found to affect trust in post owners.
Originality/value
This study contributes to the literature on online disinformation and individual healthcare decision-making by demonstrating the effect of disinformation on vaccination decision-making and providing empirical evidence on how trust in post owners impacts the effects of information types on vaccination decision-making. This study focuses on trust in post owners, unlike prior studies that focus on trust in information or social media platforms.
Luís Jacques de Sousa, João Poças Martins, Luís Sanhudo and João Santos Baptista
Abstract
Purpose
This study aims to review recent advances towards the implementation of artificial neural network (ANN) and natural language processing (NLP) applications during the budgeting phase of the construction process. During this phase, construction companies must assess the scope of each task and map the client’s expectations to an internal database of tasks, resources and costs. Quantity surveyors carry out this assessment manually, with little to no computer aid and within very austere time constraints, even though the results determine the company’s bid quality and are contractually binding.
Design/methodology/approach
This paper seeks to compile applications of machine learning (ML) and natural language processing in the architecture, engineering and construction (AEC) sector to find which methodologies can assist this assessment. The paper carries out a systematic literature review, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, to survey the main scientific contributions within the topic of text classification (TC) for budgeting in construction.
Findings
This work concludes that it is necessary to develop data sets that represent the variety of tasks in construction, achieve higher accuracy algorithms, widen the scope of their application and reduce the need for expert validation of the results. Although full automation is not within reach in the short term, TC algorithms can provide helpful support tools.
Originality/value
Given the increasing interest in ML for construction and recent developments, the findings disclosed in this paper contribute to the body of knowledge, provide a more automated perspective on budgeting in construction and break ground for further implementation of text-based ML in budgeting for construction.