Search results
1 – 7 of 7
Abstract
Purpose
This paper aims to give an overview of the history and evolution of commercial search engines. It traces the development of search engines from their early days to their current form as complex technology-powered systems that offer a wide range of features and services.
Design/methodology/approach
In recent years, advancements in artificial intelligence (AI) technology have led to the development of AI-powered chat services. This study examines the official announcements and releases of AI-powered chat services by three major search engines: Google, Bing and Baidu.
Findings
Three major players in the search engine market, Google, Microsoft and Baidu, have started to integrate AI chat into their search results. Google has released Bard, later upgraded to Gemini, a conversational AI service originally powered by LaMDA. Microsoft has launched Bing Chat, later renamed Copilot, a search experience powered by OpenAI's GPT models. The largest search engine in China, Baidu, has released a similar service called Ernie. New AI-based search engines are also briefly described.
Originality/value
This paper discusses the strengths and weaknesses of traditional, algorithm-powered search engines and of modern search with generative AI support, and the possibilities of merging them into a single service. This study stresses the types of queries submitted to search engines, users’ habits of using search engines and the technological advantages of search engine infrastructure.
Abstract
Purpose
Integrating the Chat Generative Pre-Trained Transformer-type (ChatGPT-type) model with government services has great development prospects. Applying this model improves service efficiency but carries certain risks, and thus has a dual impact on the public. For a responsible and democratic government, it is necessary to fully understand the factors influencing public acceptance, and their causal relationships, to truly encourage the public to accept and use government ChatGPT-type services.
Design/methodology/approach
This study used the latent Dirichlet allocation (LDA) model to analyze comment texts and summarize 15 factors that affect public acceptance. Multiple related matrices were established using the grey decision-making trial and evaluation laboratory (grey-DEMATEL) method to reveal causal relationships among the factors. From the two opposite extraction rules of result priority and cause priority, the authors obtained an antagonistic topological model with comprehensive influence values using the total adversarial interpretive structure model (TAISM).
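As a rough illustration of the LDA step only (the grey-DEMATEL and TAISM computations are not reproduced), a minimal Python sketch with gensim might look as follows; the comment corpus, tokenization and parameters are placeholder assumptions rather than the authors' actual pipeline, with num_topics=15 simply mirroring the 15 factors reported in the abstract.

```python
# Minimal, illustrative LDA sketch: surface candidate acceptance factors
# as topics in user comments. Corpus and parameters are placeholders.
from gensim import corpora
from gensim.models import LdaModel

# Hypothetical pre-tokenized comments about ChatGPT-type government services.
comments = [
    ["privacy", "risk", "data", "government"],
    ["trust", "service", "efficiency", "helpful"],
    ["supervision", "accountability", "policy", "risk"],
]

dictionary = corpora.Dictionary(comments)
bow_corpus = [dictionary.doc2bow(doc) for doc in comments]

# num_topics=15 mirrors the 15 factors in the abstract; a real corpus
# would need far more documents to support that many topics.
lda = LdaModel(corpus=bow_corpus, id2word=dictionary,
               num_topics=15, random_state=42, passes=10)

for topic_id, terms in lda.print_topics(num_topics=5, num_words=4):
    print(topic_id, terms)
```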
Findings
Fifteen factors were categorized in terms of cause and effect, and the antagonistic topological model with comprehensive influence values was also analyzed. The analysis showed that perceived risk, trust and meeting demand were the three most critical factors of public acceptance. Meanwhile, perceived risk and trust directly affected public acceptance and were affected by other factors. Supervision and accountability had the highest driving power and acted as the causal factor to influence other factors.
Originality/value
This study identified the factors affecting public acceptance of integrating the ChatGPT-type model with government services. It analyzed the relationship between the factors to provide a reference for decision-makers. This study introduced TAISM to form the LDA-grey-DEMATEL-TAISM method to provide an analytical paradigm for studying similar influencing factors.
Hasnan Baber, Kiran Nair, Ruchi Gupta and Kuldeep Gurjar
Abstract
Purpose
This paper aims to present a systematic literature review and bibliometric analysis of research papers published on chat generative pre-trained transformer (ChatGPT), an OpenAI-developed large-scale generative language model. The study’s objective is to provide a comprehensive assessment of the present status of research on ChatGPT and identify current trends and themes in the literature.
Design/methodology/approach
Data on a total of 328 research articles were extracted from Scopus for bibliometric analysis, to investigate publishing trends, productive countries and keywords around the topic, and 34 relevant research publications were selected for an in-depth systematic literature review.
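As a loose sketch of the bibliometric tallying only (not the authors' actual workflow), the following Python snippet counts publications per year and top author keywords from a Scopus CSV export; the file name is hypothetical, and the "Year" and "Author Keywords" column names follow the usual Scopus export format but are assumptions here.

```python
# Illustrative bibliometric tally from a hypothetical Scopus CSV export.
import pandas as pd
from collections import Counter

df = pd.read_csv("scopus_export.csv")  # assumed export of the 328 records

# Publishing trend: number of articles per year.
print(df["Year"].value_counts().sort_index())

# Top author keywords; Scopus separates keywords with ";".
keywords = Counter()
for cell in df["Author Keywords"].dropna():
    for kw in cell.split(";"):
        keywords[kw.strip().lower()] += 1

print(keywords.most_common(10))
```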
Findings
The findings indicate that ChatGPT research is still in its early stages, with the current emphasis on applications such as natural language processing and understanding, dialogue systems, speech processing and recognition, learning systems, chatbots and response generation. The USA is at the forefront of publishing on this topic, and new keywords such as “patient care”, “medical” and “higher education” are emerging themes around the topic.
Research limitations/implications
These findings underscore the importance of ongoing research and development to address these limitations and ensure that ChatGPT is used responsibly and ethically. While systematic review research on ChatGPT heralds exciting opportunities, it also demands a careful understanding of its nuances to harness its potential effectively.
Originality/value
Overall, this study provides a valuable resource for researchers and practitioners interested in ChatGPT at this early stage and helps to identify the grey areas around this topic.
Shaodan Sun, Jun Deng and Xugong Qin
Abstract
Purpose
This paper aims to amplify the retrieval and utilization of historical newspapers through semantic organization from a fine-grained knowledge element perspective. This endeavor seeks to unlock the latent value embedded within newspaper contents while furnishing methodological guidance for research in the humanities domain.
Design/methodology/approach
According to the semantic organization process and the knowledge element concept, this study proposes a holistic framework comprising four pivotal stages: knowledge element description, extraction, association and application. Initially, a semantic description model dedicated to knowledge elements is devised. Subsequently, harnessing advanced deep learning techniques, the study delves into entity recognition and relationship extraction; these techniques are instrumental in identifying entities within the historical newspaper contents and capturing the interdependencies among them. Finally, an online platform based on Flask is developed to enable the recognition of entities and relationships within historical newspapers.
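As a loose sketch of only the entity-recognition service (the paper's BERT + BS and Bi-LSTM-Pro models are custom and not reproduced here), the following Python sample wires an off-the-shelf token-classification model into a small Flask endpoint; the model checkpoint (an English NER model standing in for one trained on Chinese historical newspapers), the route and the port are placeholder assumptions.

```python
# Illustrative sketch: serve entity recognition over HTTP with Flask.
# A generic pretrained checkpoint stands in for the paper's own model.
from flask import Flask, jsonify, request
from transformers import pipeline

app = Flask(__name__)

# Placeholder checkpoint; the paper's model is trained on newspaper text.
ner = pipeline("token-classification",
               model="dslim/bert-base-NER",
               aggregation_strategy="simple")

@app.route("/entities", methods=["POST"])
def extract_entities():
    text = request.get_json(force=True).get("text", "")
    return jsonify([
        {"entity": s["entity_group"], "word": s["word"],
         "score": float(s["score"])}
        for s in ner(text)
    ])

if __name__ == "__main__":
    app.run(port=5000)
```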
Findings
This article utilized the Shengjing Times·Changchun Compilation as the dataset for describing, extracting, associating and applying newspaper contents. Regarding knowledge element extraction, the BERT + BS model consistently outperforms Bi-LSTM, CRF++ and even BERT in terms of recall and F1 scores, making it a favorable choice for entity recognition in this context. Particularly noteworthy is the Bi-LSTM-Pro model, which stands out with the highest scores across all metrics, notably achieving an exceptional F1 score in knowledge element relationship recognition.
Originality/value
Historical newspapers transcend their status as mere artifacts, serving as invaluable reservoirs of societal and historical memory. Semantic organization from a fine-grained knowledge element perspective can facilitate semantic retrieval, semantic association, information visualization and knowledge discovery services for historical newspapers. In practice, it can empower researchers to unearth profound insights within the historical and cultural context, broadening the landscape of digital humanities research and practical applications.
Khameel B. Mustapha, Eng Hwa Yap and Yousif Abdalla Abakr
Abstract
Purpose
Following the recent rise in generative artificial intelligence (GenAI) tools, fundamental questions about their wider impacts have started to reverberate around various disciplines. This study aims to track the unfolding landscape of general issues surrounding GenAI tools and to elucidate the specific opportunities and limitations of these tools as part of the technology-assisted enhancement of mechanical engineering education and professional practices.
Design/methodology/approach
As part of the investigation, the authors conduct and present a brief scientometric analysis of recently published studies to unravel the emerging trend on the subject matter. Furthermore, they experimented with selected GenAI tools (Bard, ChatGPT, DALL.E and 3DGPT) on mechanical engineering-related tasks.
Findings
The study identified several pedagogical and professional opportunities and guidelines for deploying GenAI tools in mechanical engineering. It also highlights some pitfalls of GenAI tools for analytical reasoning tasks (e.g., subtle errors in computation involving unit conversions) and for sketching/image generation tasks (e.g., poor demonstration of symmetry).
Originality/value
To the best of the authors’ knowledge, this study presents the first thorough assessment of the potential of GenAI from the lens of the mechanical engineering field. Combining scientometric analysis, experimentation and pedagogical insights, the study provides a unique focus on the implications of GenAI tools for material selection/discovery in product design, manufacturing troubleshooting, technical documentation and product positioning, among others.
Abstract
Purpose
This study aims to construct a sentiment series generation method for danmu comments based on deep learning and to explore the features of the sentiment series after clustering.
Design/methodology/approach
This study consisted of two main parts: danmu comment sentiment series generation and clustering. In the first part, the authors proposed a sentiment classification model based on BERT fine-tuning to quantify danmu comment sentiment polarity; to smooth the sentiment series, they used methods such as comprehensive weights. In the second part, the shape-based distance (SBD) K-shape method was used to cluster the actual collected data.
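A minimal sketch of the clustering stage alone is given below, using tslearn's KShape, which implements K-shape with the shape-based distance; the synthetic random-walk series stand in for real smoothed sentiment curves, and n_clusters=4 simply mirrors the four categories reported in the findings.

```python
# Illustrative SBD-K-shape clustering of danmu sentiment series.
# Synthetic series stand in for real, smoothed sentiment curves.
import numpy as np
from tslearn.clustering import KShape
from tslearn.preprocessing import TimeSeriesScalerMeanVariance

rng = np.random.default_rng(42)
# 20 hypothetical sentiment series, each 100 time steps long.
series = rng.standard_normal((20, 100, 1)).cumsum(axis=1)

# K-shape expects z-normalized series.
series = TimeSeriesScalerMeanVariance().fit_transform(series)

ks = KShape(n_clusters=4, random_state=42)
labels = ks.fit_predict(series)
print(labels)  # cluster assignment of each sentiment curve
```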
Findings
The filtered sentiment series or curves of the microfilms on the Bilibili website could be divided into four major categories. There is an apparently stable time interval for the first three types of sentiment curves, while the fourth type of sentiment curve shows a clear trend of fluctuation in general. In addition, it was found that “disputed points” or “highlights” are likely to appear at the beginning and the climax of films, resulting in significant changes in the sentiment curves. The clustering results show a significant difference in user participation, with the second type prevailing over others.
Originality/value
The authors’ sentiment classification model based on BERT fine-tuning outperformed the traditional sentiment lexicon method, providing a reference for using deep learning and transfer learning for danmu comment sentiment analysis. The BERT fine-tuning SBD-K-shape algorithm can weaken the effect of non-regular noise and the temporal phase shift of danmu text.
Xiaobo Tang, Heshen Zhou and Shixuan Li
Abstract
Purpose
Predicting highly cited papers enables early evaluation of a paper’s potential and early detection and determination of academic achievement value. However, most highly cited paper prediction studies rely on early citation information, so predicting highly cited papers at the time of publication is challenging. Therefore, the authors propose a method for predicting early highly cited papers based on the papers’ own features.
Design/methodology/approach
This research analyzed academic papers published in the Journal of the Association for Computing Machinery (ACM) from 2000 to 2013. Five types of features were extracted: paper features, journal features, author features, reference features and semantic features. Subsequently, the authors applied a deep neural network (DNN), a support vector machine (SVM), a decision tree (DT) and logistic regression (LGR) to predict highly cited papers 1–3 years after publication.
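A minimal sketch of the model-comparison step is shown below, training the four classifier families named above on synthetic features and a binary "highly cited" label; the feature matrix, label rule and hyperparameters are placeholders rather than the paper's actual data or settings, and an MLPClassifier stands in for the DNN.

```python
# Illustrative comparison of DNN, SVM, DT and LGR on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 20))  # 20 placeholder paper features
y = (X[:, 0] + rng.standard_normal(500) > 0).astype(int)  # toy label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "DNN": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                         random_state=0),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LGR": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, round(f1_score(y_te, model.predict(X_te)), 3))
```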
Findings
Experimental results showed that early highly cited academic papers are predictable when they are first published. The authors’ prediction models showed considerable performance. This study further confirmed that the features of references and authors play an important role in predicting early highly cited papers. In addition, the proportion of high-quality journal references has a more significant impact on prediction.
Originality/value
Based on the available information at the time of publication, this study proposed an effective early highly cited paper prediction model. This study facilitates the early discovery and realization of the value of scientific and technological achievements.