Search results

1 – 10 of over 1000
Article
Publication date: 12 February 2024

Hamid Reza Saeidnia, Elaheh Hosseini, Shadi Abdoli and Marcel Ausloos

Abstract

Purpose

The study aims to analyze the synergy of artificial intelligence (AI), with scientometrics, webometrics and bibliometrics to unlock and to emphasize the potential of the applications and benefits of AI algorithms in these fields.

Design/methodology/approach

By conducting a systematic literature review, our aim is to explore the potential of AI in revolutionizing the methods used to measure and analyze scholarly communication, identify emerging research trends and evaluate the impact of scientific publications. To achieve this, we implemented a comprehensive search strategy across reputable databases such as ProQuest, IEEE Xplore, EBSCO, Web of Science and Scopus. Our search encompassed articles published from January 1, 2000, to September 2022, resulting in a thorough review of 61 relevant articles.

Findings

(1) Regarding scientometrics, the application of AI yields various distinct advantages, such as conducting analyses of publications, citations, research impact prediction, collaboration, research trend analysis and knowledge mapping, in a more objective and reliable framework. (2) In terms of webometrics, AI algorithms are able to enhance web crawling and data collection, web link analysis, web content analysis, social media analysis, web impact analysis and recommender systems. (3) Moreover, automation of data collection, analysis of citations, disambiguation of authors, analysis of co-authorship networks, assessment of research impact, text mining and recommender systems are considered the potential contributions of AI integration in the field of bibliometrics.

Originality/value

This study covers the particularly new benefits and potential of AI-enhanced scientometrics, webometrics and bibliometrics to highlight the significant prospects of the synergy of this integration through AI.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 6 February 2024

Lin Xue and Feng Zhang

Abstract

Purpose

With the increasing number of Web services, correct and efficient classification of Web services is crucial to improve the efficiency of service discovery. However, existing Web service classification approaches ignore the class overlap in Web services, resulting in poor accuracy of classification in practice. This paper aims to provide an approach to address this issue.

Design/methodology/approach

This paper proposes a label confusion and prior correction-based Web service classification approach. First, functional semantic representations of Web service descriptions are obtained based on BERT. Then, the model's ability to recognize and classify overlapping instances is enhanced using label confusion learning techniques. Finally, the predicted results are corrected based on the label prior distribution to further improve service classification effectiveness.
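The prior-correction step can be sketched as follows. This is a hypothetical illustration only: the abstract does not give the paper's exact formula, so the sketch assumes a simple Bayes-style reweighting of predicted class probabilities by the label prior, and all function names and numbers are invented for demonstration.

```python
# Hypothetical sketch: correct a classifier's predicted probabilities
# using the label prior distribution, then renormalize.

def correct_with_prior(pred_probs, prior, alpha=1.0):
    """Reweight predicted class probabilities by the label prior
    (raised to a tempering exponent alpha) and renormalize to sum to 1."""
    adjusted = [p * (q ** alpha) for p, q in zip(pred_probs, prior)]
    total = sum(adjusted)
    return [a / total for a in adjusted]

# A prediction torn between two overlapping classes is nudged toward
# the class that is more common a priori.
probs = correct_with_prior([0.48, 0.47, 0.05], prior=[0.2, 0.6, 0.2])
```

Under this sketch, the second class overtakes the first once the prior is taken into account, which is the intuition behind correcting predictions for overlapping categories.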

Findings

Experiments based on the ProgrammableWeb data set show that the proposed model achieves improvements of 4.3%, 3.2% and 1% in Macro-F1 value over ServeNet-BERT, BERT-DPCNN and CARL-NET, respectively.

Originality/value

This paper proposes a Web service classification approach for overlapping categories of Web services and improves the accuracy of Web service classification.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 18 March 2024

Raj Kumar Bhardwaj, Ritesh Kumar and Mohammad Nazim

Abstract

Purpose

This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage, to determine which metasearch engine exhibits the highest level of precision and to identify the metasearch engine that is most likely to return the most relevant search results.

Design/methodology/approach

The research is divided into two phases: the first involves four queries categorized into two segments (4-Q-2-S), while the second includes six queries divided into three segments (6-Q-3-S). The queries vary in complexity across three types: simple, phrase and complex. Precision, average precision and the presence of duplicates across all the evaluated metasearch engines are determined.
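Precision here can be read as the share of examined results judged relevant, and average precision as the mean of that score across a query set. A minimal sketch under that assumption (the function names and relevance judgments are hypothetical, not from the paper):

```python
def precision(relevance_judgments):
    """Precision = relevant results / results examined.
    `relevance_judgments` is a list of booleans, one per retrieved result."""
    if not relevance_judgments:
        return 0.0
    return sum(relevance_judgments) / len(relevance_judgments)

def average_precision_over_queries(per_query_judgments):
    """Mean of per-query precision scores across a query set."""
    scores = [precision(j) for j in per_query_judgments]
    return sum(scores) / len(scores)

# Two hypothetical queries: 9 of 10 and 10 of 10 results judged relevant.
ap = average_precision_over_queries([[True] * 9 + [False], [True] * 10])
```

A score near 0.98, as reported for Startpage below, would mean nearly every result examined across the query set was judged relevant.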

Findings

The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs, while DuckDuckGo exhibited the most consistent performance across both phases of the study.

Research limitations/implications

The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.

Practical implications

The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.

Originality/value

The study is the first of its kind to evaluate these four metasearch engines; no similar study has previously measured their performance.

Details

Performance Measurement and Metrics, vol. 25 no. 1
Type: Research Article
ISSN: 1467-8047

Keywords

Article
Publication date: 17 May 2023

Tong Yang, Jie Wu and Junming Zhang

Abstract

Purpose

This study aims to establish a comprehensive satisfaction analysis framework by mining online restaurant reviews, which can not only accurately reveal consumer satisfaction but also identify factors leading to dissatisfaction and further quantify improvement opportunity levels.

Design/methodology/approach

Adopting deep learning, a Cross-Bidirectional Encoder Representations from Transformers (Cross-BERT) model is developed to measure customer satisfaction. Furthermore, an opinion mining technique is used to extract consumers’ opinions and obtain dissatisfaction factors, and the opportunity algorithm is introduced to quantify attributes’ improvement opportunity levels. A total of 19,133 online reviews of 31 restaurants in Universal Beijing Resort are crawled to validate the framework.
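The opportunity algorithm is commonly formulated, following Ulwick's outcome-driven innovation work, as opportunity = importance + max(importance − satisfaction, 0). The paper's exact variant is not given in the abstract, so the sketch below assumes that standard form, and the attribute scores are invented for illustration:

```python
def opportunity_score(importance, satisfaction):
    """Ulwick-style opportunity score (assumed formulation):
    opportunity = importance + max(importance - satisfaction, 0).
    High importance combined with low satisfaction signals the
    largest improvement opportunity."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical (importance, satisfaction) scores for three attributes.
attributes = {
    "Dish taste": (9.1, 5.2),
    "Waiters' attitude": (8.4, 5.0),
    "Decoration": (7.8, 6.1),
}
ranked = sorted(attributes,
                key=lambda a: opportunity_score(*attributes[a]),
                reverse=True)
```

Ranking attributes by this score is what lets the framework point managers toward the attributes with the greatest improvement opportunities rather than merely the lowest satisfaction.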

Findings

Results demonstrate the superiority of the Cross-BERT model over existing models such as the sentiment lexicon-based model and Naïve Bayes. More importantly, after effectively unveiling customer dissatisfaction factors (e.g. long queuing time and salty taste), “Dish taste,” “Waiters’ attitude” and “Decoration” are identified as the three secondary attributes with the greatest improvement opportunities.

Practical implications

The proposed framework helps managers, especially in the restaurant industry, accurately understand customer satisfaction and the reasons behind dissatisfaction, thereby generating efficient countermeasures. In particular, the improvement opportunity levels help practitioners allocate limited business resources efficiently.

Originality/value

This work contributes to hospitality and tourism literature by developing a comprehensive customer satisfaction analysis framework in the big data era. Moreover, to the best of the authors’ knowledge, this work is among the first to introduce opportunity algorithm to quantify service improvement benefits. The proposed Cross-BERT model also advances the methodological literature on measuring customer satisfaction.

Details

International Journal of Contemporary Hospitality Management, vol. 36 no. 3
Type: Research Article
ISSN: 0959-6119

Keywords

Article
Publication date: 26 April 2024

Rajender Kumar and Dinesh K. Gupta

Abstract

Purpose

The purpose of this paper is to examine the restructuring of human resources development processes in Indian Institutes of Technology (IIT) libraries in North India, emphasizing the essential information and communications technology (ICT) skills for both recruits and existing staff.

Design/methodology/approach

The study used a survey research design, with two different sets of structured questionnaires used to collect data. The first set, which was distributed to all heads of seven IIT libraries in North India, received a 100% response rate. Simultaneously, the second set was distributed to library users, yielding a 92% response rate (680 responses out of 700 distributed). The collected data were analyzed and tabulated, with suitable interpretations.

Findings

The findings of the study reveal that all examined libraries have implemented skill development programs. Moreover, advanced ICT skills are considered essential for staff appointments, and specific institutes (IIT Kanpur, IIT Delhi, IIT Jodhpur and IIT Ropar) took the initiative to provide ICT training to their employees. Trained employees exhibited enhanced performance, attributed to advanced ICT knowledge. The study suggests restructuring selection criteria and introducing structured ICT training programs for library staff, ensuring a more adept workforce for current demands.

Research limitations/implications

The study can broaden its global impact on human resource development by incorporating soft skills, job satisfaction and leadership development, while cross-institutional comparisons and the integration of emerging technologies such as artificial intelligence and virtual reality offer further research opportunities.

Originality/value

This study collected primary data from IIT libraries in North India using self-designed questionnaires. The findings provide useful insights into how libraries might restructure human resource development in the digital age.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Keywords

Article
Publication date: 9 April 2024

Long Liu, Lifeng Wang and Ziwang Xiao

Abstract

Purpose

The combination of an Engineered Cementitious Composite (ECC) layer and a steel plate to reinforce RC beams (ESRB) is a new strengthening method. The ESRB builds on bonding a steel plate to the bottom of RC beams and aims to solve the problem of over-reinforced RC beams, improving their bearing capacity without compromising their ductility.

Design/methodology/approach

In this paper, a finite element model of ESRB was established in ABAQUS. The results were compared with the experimental results of ESRB in previous studies, and the reliability of the finite element model was verified. On this basis, parameters such as the width of the steel plate, the thickness of the ECC layer, the damage degree of the original beam and the cross-sectional area of the longitudinal tensile rebar were analyzed with the verified finite element model. Based on the load–deflection curve of ESRB, its ultimate bearing capacity and ductility were discussed.

Findings

The results demonstrate that when the width of the steel plate increases, the ultimate load of ESRB increases by 11.58% to 133.22 kN and the ductility index increases to 2.39. As the damage degree of the original beam increases, the ultimate load of ESRB decreases by 23.7% to 91.09 kN and the ductility index decreases to 1.90. As the cross-sectional area of the longitudinal tensile rebar increases, the ultimate bearing capacity of ESRB increases by 6.2% to 126.75 kN and the ductility index rises to 2.30. Finally, a calculation model for predicting the flexural capacity of ESRB is proposed; its calculated results are in line with the experimental results.

Originality/value

Based on the comparative analysis of the test results and numerical simulation results of 11 test beams, this investigation verified the accuracy and reliability of the finite element simulation from the aspects of load–deflection curve, characteristic load and failure mode. Furthermore, based on load–deflection curve, the effects of steel plate width, ECC layer thickness, damage degree of the original beam and cross-sectional area of longitudinal tensile rebar on the ultimate bearing capacity and ductility of ESRB were discussed. Finally, a simplified method was put forward to further verify the effectiveness of ESRB through analytical calculation.

Details

International Journal of Structural Integrity, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-9864

Keywords

Article
Publication date: 24 October 2022

Priyanka Chawla, Rutuja Hasurkar, Chaithanya Reddy Bogadi, Naga Sindhu Korlapati, Rajasree Rajendran, Sindu Ravichandran, Sai Chaitanya Tolem and Jerry Zeyu Gao

Abstract

Purpose

The study aims to propose an intelligent real-time traffic model to address the traffic congestion problem. The proposed model assists the urban population in their everyday lives by assessing the probability of road accidents, predicting accurate traffic information and improving overall transportation quality, while also helping to reduce overall carbon dioxide emissions in the environment.

Design/methodology/approach

This study offered a real-time traffic model based on the analysis of numerous sensor data. Real-time traffic prediction systems can identify and visualize current traffic conditions on a particular lane. The proposed model incorporated data from road sensors as well as a variety of other sources. Because capturing and processing large amounts of sensor data in real time is difficult, sensor data are consumed by streaming analytics platforms built on big data technologies and then processed using a range of deep learning and machine learning techniques.

Findings

The study fills a gap in the data analytics sector by delivering a more accurate and trustworthy model that draws on internet of things sensor data and other data sources. Organizations such as transit agencies and public safety departments can also incorporate the method into their platforms to support strategic decisions.

Research limitations/implications

The model's main limitation is that its predictions for the period following January 2020 are not particularly accurate. This, however, reflects the Covid-19 pandemic rather than the model itself: the global pandemic disrupted the traffic scenario, producing erratic data for the period after February 2020. Once circumstances return to normal, the authors are confident in the model's ability to produce accurate forecasts.

Practical implications

To help users choose when to travel, this study aimed to pinpoint the causes of traffic congestion on the highways in the Bay Area and to forecast real-time traffic speeds. To determine the attributes that most influence traffic speed, the authors obtained data from the Caltrans performance measurement system (PeMS), reviewed it and evaluated multiple models. The resulting model can forecast traffic speed while accounting for external variables such as weather and incident data, with decent accuracy and generalizability. A graphical user interface helps users determine traffic congestion at a certain location on a specific day, and it has been designed to be readily extended as the project's scope and usefulness grow. The Web-based traffic speed prediction platform is useful for both municipal planners and individual travellers. The authors obtained excellent results by using five years of data (2015–2019) to train the models and forecasting outcomes for 2020; the algorithm produced highly accurate predictions when tested on data from January 2020. The model delivers accurate traffic speed forecasts for California's four main freeways (Freeway 101, I-680, 880 and 280) for a specific place on a given date, and the scalable model performs better than the vast majority of earlier models in the field. Extending the programme across the entire state of California would help the government better plan and execute new transportation projects.
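Training on 2015–2019 and testing on 2020 amounts to a temporal holdout split, in which no record later than the cutoff leaks into training. A minimal sketch of that idea (the record fields and values are hypothetical, not the PeMS schema):

```python
# Hypothetical temporal holdout split: records before the cutoff year
# train the model, later records test it.

def temporal_split(records, test_year=2020):
    """Partition records into (train, test) by a cutoff year, so the
    model is always evaluated on data from after its training period."""
    train = [r for r in records if r["year"] < test_year]
    test = [r for r in records if r["year"] >= test_year]
    return train, test

records = [
    {"year": 2015, "speed_mph": 61.0},
    {"year": 2019, "speed_mph": 58.5},
    {"year": 2020, "speed_mph": 63.2},
]
train, test = temporal_split(records)
```

This kind of split is what exposes the Covid-19 limitation noted above: a model trained on pre-2020 patterns is evaluated against post-2020 data whose distribution has shifted.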

Social implications

To estimate traffic congestion, the proposed model takes into account a variety of data sources, including weather and incident data. According to traffic congestion statistics, “bottlenecks” account for 40% of traffic congestion, “traffic incidents” account for 25% and “work zones” account for 10% (Traffic Congestion Statistics). As a result, incident data must be considered for analysis. The study uses traffic, weather and event data from the previous five years to estimate traffic congestion in any given area. As a result, the results predicted by the proposed model would be more accurate, and commuters who need to schedule ahead of time for work would benefit greatly.

Originality/value

The proposed work allows users to choose the optimal time and mode of transportation. The underlying idea is that the longer a car spends on the road, the more it contributes to traffic congestion, so the proposed system encourages users to reach their destination in a short period of time. Congestion is an indicator that public transportation needs to be expanded. The methodology compares the optimal route against various kinds of public transit (Greenfield, 2014): if the commute time during peak hours is comparable to that of private car transportation, consumers should take public transportation.

Details

World Journal of Engineering, vol. 21 no. 1
Type: Research Article
ISSN: 1708-5284

Keywords

Article
Publication date: 17 June 2021

Ambica Ghai, Pradeep Kumar and Samrat Gupta

Abstract

Purpose

Web users rely heavily on online content to make decisions without assessing the veracity of the content. The online content comprising text, image, video or audio may be tampered with to influence public opinion. Since consumers of online information (misinformation) tend to trust the content when image(s) supplement the text, image manipulation software is increasingly being used to forge images. To address the crucial problem of image manipulation, this study focusses on developing a deep-learning-based image forgery detection framework.

Design/methodology/approach

The proposed deep-learning-based framework aims to detect images forged using copy-move and splicing techniques. The image transformation technique aids the identification of relevant features for the network to train effectively. After that, the pre-trained customized convolutional neural network is used to train on the public benchmark datasets, and the performance is evaluated on the test dataset using various parameters.

Findings

The comparative analysis of image transformation techniques and the experiments conducted on benchmark datasets from a variety of socio-cultural domains establish the effectiveness and viability of the proposed framework. These findings affirm the potential applicability of the proposed framework in real-time image forgery detection.

Research limitations/implications

This study bears implications for several important aspects of research on image forgery detection. First, this research adds to the recent discussion on feature extraction and learning for image forgery detection: while prior research hand-crafted the features, the proposed solution contributes to the stream of literature that automatically learns features and classifies images. Second, this research contributes to the ongoing effort to curtail the spread of misinformation using images. The extant literature on the spread of misinformation has prominently focussed on textual data shared over social media platforms; this study addresses the call for greater emphasis on the development of robust image transformation techniques.

Practical implications

This study carries important practical implications for various domains such as forensic sciences, media and journalism where image data is increasingly being used to make inferences. The integration of image forgery detection tools can be helpful in determining the credibility of the article or post before it is shared over the Internet. The content shared over the Internet by the users has become an important component of news reporting. The framework proposed in this paper can be further extended and trained on more annotated real-world data so as to function as a tool for fact-checkers.

Social implications

In the current scenario, wherein most image forgery detection studies assess whether an image is real or forged in an offline mode, it is crucial to identify any trending or potentially forged image as early as possible. By learning from historical data, the proposed framework can aid in the early identification of newly emerging forged images. In summary, the proposed framework has the potential to mitigate the physical spread and psychological impact of forged images on social media.

Originality/value

This study focusses on copy-move and splicing techniques while integrating transfer learning concepts to classify forged images with high accuracy. The synergistic use of hitherto little explored image transformation techniques and customized convolutional neural network helps design a robust image forgery detection framework. Experiments and findings establish that the proposed framework accurately classifies forged images, thus mitigating the negative socio-cultural spread of misinformation.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845

Keywords

Article
Publication date: 6 July 2023

Fayaz Ahmad Loan, Aasif Mohammad Khan, Syed Aasif Ahmad Andrabi, Sozia Rashid Sozia and Umer Yousuf Parray

Abstract

Purpose

The purpose of the present study is to identify the active and dead links of uniform resource locators (URLs) associated with web references and to compare the effectiveness of Chrome, Google and WayBack Machine in retrieving the dead URLs.

Design/methodology/approach

The web references of Library Hi Tech from 2004 to 2008 were selected for analysis to fulfill the set objectives. The URLs were extracted from the articles to verify their accessibility in terms of persistence and decay. The URLs were then executed directly in the internet browser (Chrome), the search engine (Google) and the Internet Archive (WayBack Machine). The collected data were recorded in an Excel file and presented in tables/diagrams for further analysis.
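The WayBack Machine lookup step can be sketched with the Internet Archive's availability API, which returns the closest archived snapshot for a URL (the endpoint is https://archive.org/wayback/available?url=<URL>). The sketch below parses a simplified sample response offline rather than making a live request, and the response shape shown is an assumption based on that API's documented format:

```python
import json

# Parse an Internet Archive availability-API response and return the
# closest archived snapshot URL, or None if no copy is held.

def nearest_snapshot(api_response_text):
    """Extract the closest available snapshot URL from a WayBack
    Machine availability-API JSON response."""
    data = json.loads(api_response_text)
    snapshot = data.get("archived_snapshots", {}).get("closest")
    if snapshot and snapshot.get("available"):
        return snapshot["url"]
    return None

# Simplified sample response for a dead URL that has an archived copy.
sample = json.dumps({
    "archived_snapshots": {
        "closest": {
            "available": True,
            "url": "http://web.archive.org/web/2008/http://example.com/",
        }
    }
})
snap = nearest_snapshot(sample)
```

A dead link that yields no snapshot here would count among the web references the WayBack Machine could not retrieve.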

Findings

Of the total of 1,083 web references, the WayBack Machine retrieved the largest number (786; 72.6 per cent), followed by Google (501; 46.3 per cent), with Chrome retrieving the fewest (402; 37.1 per cent). The study concludes that the WayBack Machine is the most efficient, retrieves the largest number of missing web citations and fulfills the mission of preserving web sources to a larger extent.

Originality/value

A good number of studies have been conducted to analyze the persistence and decay of web references; however, the present study is unique in that it compared the dead-URL retrieval effectiveness of an internet browser (Chrome), a search engine giant (Google) and the WayBack Machine of the Internet Archive.

Research limitations/implications

The web references of a single journal, namely, Library Hi Tech, were analyzed for five years only. A larger study across disciplines and sources may yield better results.

Practical implications

URL decay is becoming a major problem in the preservation and citation of web resources. The study has some healthy recommendations for authors, editors, publishers, librarians and web designers to improve the persistence of web references.

Details

Data Technologies and Applications, vol. 58 no. 2
Type: Research Article
ISSN: 2514-9288

Keywords

Article
Publication date: 25 January 2024

Yaolin Zhou, Zhaoyang Zhang, Xiaoyu Wang, Quanzheng Sheng and Rongying Zhao

Abstract

Purpose

The digitalization of archival management has rapidly developed with the maturation of digital technology. With data's exponential growth, archival resources have transitioned from single modalities, such as text, images, audio and video, to integrated multimodal forms. This paper identifies key trends, gaps and areas of focus in the field. Furthermore, it proposes a theoretical organizational framework based on deep learning to address the challenges of managing archives in the era of big data.

Design/methodology/approach

Via a comprehensive systematic literature review, the authors investigate the field of multimodal archive resource organization and the application of deep learning techniques in archive organization. A systematic search and filtering process is conducted to identify relevant articles, which are then summarized, discussed and analyzed to provide a comprehensive understanding of existing literature.

Findings

The authors' findings reveal that most research on multimodal archive resources predominantly focuses on aspects related to storage, management and retrieval. Furthermore, the utilization of deep learning techniques in image archive retrieval is increasing, highlighting their potential for enhancing image archive organization practices; however, practical research and implementation remain scarce. The review also underscores gaps in the literature, emphasizing the need for more practical case studies and the application of theoretical concepts in real-world scenarios. In response to these insights, the authors' study proposes an innovative deep learning-based organizational framework. This proposed framework is designed to navigate the complexities inherent in managing multimodal archive resources, representing a significant stride toward more efficient and effective archival practices.

Originality/value

This study comprehensively reviews the existing literature on multimodal archive resources organization. Additionally, a theoretical organizational framework based on deep learning is proposed, offering a novel perspective and solution for further advancements in the field. These insights contribute theoretically and practically, providing valuable knowledge for researchers, practitioners and archivists involved in organizing multimodal archive resources.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Keywords
