Search results

1 – 10 of 344
Article
Publication date: 16 February 2022

Pragati Agarwal, Sanjeev Swami and Sunita Kumari Malhotra

The purpose of this paper is to give an overview of artificial intelligence (AI) and other AI-enabled technologies and to describe how COVID-19 affects various industries such as…


Abstract

Purpose

The purpose of this paper is to give an overview of artificial intelligence (AI) and other AI-enabled technologies and to describe how COVID-19 affects various industries such as health care, manufacturing, retail, food services, education, media and entertainment, banking and insurance, and travel and tourism. Furthermore, the authors discuss the ways in which information technology is used to implement business strategies to transform businesses and to incentivise the implementation of these technologies in current or future emergency situations.

Design/methodology/approach

The paper reviews the rapidly growing literature on the use of smart technology during the current COVID-19 pandemic.

Findings

The 127 empirical articles the authors identified suggest that 39 forms of smart technology have been used, ranging from artificial intelligence to computer vision technology. Eight industries using these technologies have been identified, primarily food services and manufacturing. Further, the authors list 40 generalised types of activities involved, including providing health services, data analysis and communication. To prevent the spread of illness, robots with artificial intelligence are being used to examine patients and administer drugs to them. Online teaching practices and simulators have replaced classroom teaching due to the epidemic. The AI-based Blue-dot algorithm aids in the detection of early warning signs. An AI model detects a patient in respiratory distress based on face detection, face recognition, facial action unit detection, expression recognition, posture, extremity movement analysis, visitation frequency detection, sound pressure detection and light level detection. These and various other applications are described throughout the paper.

Research limitations/implications

The research is largely limited to COVID-19-related studies, and selection bias may be present. In the Indian context, advanced technology has yet to be harnessed to its full extent, and the educational system has yet to be upgraded to realise the potential benefits of these technologies on a wider basis.

Practical implications

First, insights can be leveraged across industry sectors to battle the global threat, with smart technology being one of the key takeaways in this field. Second, an integrated framework is recommended for policymaking in this area. Lastly, the authors recommend developing an internet-based repository that brings together ideas, databases, best practices, dashboards and real-time statistical data.

Originality/value

As COVID-19 is a relatively recent phenomenon, no such comprehensive review exists in the extant literature to the best of the authors’ knowledge. The review covers the rapidly emerging literature on smart technology use during the current COVID-19 pandemic.

Details

Journal of Science and Technology Policy Management, vol. 15 no. 3
Type: Research Article
ISSN: 2053-4620

Keywords


Article
Publication date: 2 April 2024

R.S. Vignesh and M. Monica Subashini

An abundance of techniques has been presented so far for waste classification, but they deliver inefficient results with low accuracy. Their achievement on various repositories…

Abstract

Purpose

An abundance of techniques has been presented so far for waste classification, but they deliver inefficient results with low accuracy. Their performance varies across repositories, and there is a shortage of large-scale databases for training. The purpose of the study is to provide high security.

Design/methodology/approach

In this research, optimization-assisted federated learning (FL) is introduced for thermoplastic waste segregation and classification. A deep learning (DL) network trained by Archimedes Henry gas solubility optimization (AHGSO) is used for the classification of plastic and resin types. A deep quantum neural network (DQNN) is used for first-level classification and a deep max-out network (DMN) is employed for second-level classification. The AHGSO is obtained by blending the features of the Archimedes optimization algorithm (AOA) and Henry gas solubility optimization (HGSO). The entities involved in this approach are nodes and a server. Local training is carried out on local data and updates are sent to the server, where the model is aggregated. Thereafter, each node downloads the global model and update training is executed based on the downloaded global model and the local model until the stopping condition is satisfied. Finally, the local updates are aggregated at the server using the averaging method. The Data tag suite (DATS_2022) dataset is used for multilevel thermoplastic waste segregation and classification.
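To make the federated workflow above concrete, the following is a minimal sketch of a federated-averaging loop under strong simplifying assumptions: a toy linear model and random per-node data stand in for the AHGSO-trained DQNN/DMN classifiers and the DATS_2022 dataset, which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, epochs=5):
    """One node: a few gradient steps on its local data (least-squares loss)."""
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

# Fake local datasets for three nodes, standing in for per-site waste features.
nodes = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]

global_w = np.zeros(4)
for _ in range(10):                          # communication rounds
    local_ws = [local_update(global_w.copy(), X, y) for X, y in nodes]
    global_w = np.mean(local_ws, axis=0)     # server-side aggregation by averaging

print("aggregated weights after 10 rounds:", global_w)
```

Each round mirrors the description above: local training on local data, an update sent to the server, aggregation by averaging and redistribution of the global model.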

Findings

Using the DQNN for first-level classification, the designed optimization-assisted FL achieves an accuracy of 0.930, a mean average precision (MAP) of 0.933, a false positive rate (FPR) of 0.213, a loss of 0.211, a mean square error (MSE) of 0.328 and a root mean square error (RMSE) of 0.572. For second-level classification with the DMN, the accuracy, MAP, FPR, loss, MSE and RMSE are 0.932, 0.935, 0.093, 0.068, 0.303 and 0.551, respectively.

Originality/value

The multilevel thermoplastic waste segregation and classification using the proposed model is accurate and improves the effectiveness of the classification.

Article
Publication date: 1 November 2023

Juan Yang, Zhenkun Li and Xu Du

Although numerous signal modalities are available for emotion recognition, audio and visual modalities are the most common and predominant forms for human beings to express their…

Abstract

Purpose

Although numerous signal modalities are available for emotion recognition, audio and visual modalities are the most common and predominant forms for human beings to express their emotional states in daily communication. Therefore, achieving automatic and accurate audiovisual emotion recognition is important for developing engaging and empathetic human–computer interaction environments. However, two major challenges exist in the field of audiovisual emotion recognition: (1) how to effectively capture representations of each single modality and eliminate redundant features and (2) how to efficiently integrate information from these two modalities to generate discriminative representations.

Design/methodology/approach

A novel key-frame extraction-based attention fusion network (KE-AFN) is proposed for audiovisual emotion recognition. KE-AFN integrates key-frame extraction with multimodal interaction and fusion to enhance audiovisual representations and reduce redundant computation, filling the research gaps of existing approaches. Specifically, a local-maximum-based content analysis is designed to extract key-frames from videos in order to eliminate data redundancy. Two modules, a “Multi-head Attention-based Intra-modality Interaction Module” and a “Multi-head Attention-based Cross-modality Interaction Module”, are proposed to mine and capture intra- and cross-modality interactions, further reducing data redundancy and producing more powerful multimodal representations.
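As an illustration of the key-frame extraction idea, the sketch below selects frames at local maxima of a simple inter-frame content-change curve. The random "frames" and the mean-absolute-difference content measure are illustrative assumptions; the paper's actual content analysis and the attention-based fusion modules are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
frames = rng.random((60, 32, 32))                # 60 toy grayscale frames

# Content-change curve: mean absolute difference between consecutive frames.
diff = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

# Key-frames sit at local maxima of the content-change curve.
key_idx = [i for i in range(1, len(diff) - 1)
           if diff[i] > diff[i - 1] and diff[i] > diff[i + 1]]

key_frames = frames[[i + 1 for i in key_idx]]    # +1 because diff[i] compares frames i and i+1
print(f"kept {len(key_frames)} of {len(frames)} frames as key-frames")
```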

Findings

Extensive experiments on two benchmark datasets (i.e. RAVDESS and CMU-MOSEI) demonstrate the effectiveness and rationality of KE-AFN. Specifically, (1) KE-AFN is superior to state-of-the-art baselines for audiovisual emotion recognition. (2) Exploring the supplementary and complementary information of different modalities can provide more emotional clues for better emotion recognition. (3) The proposed key-frame extraction strategy improves accuracy by more than 2.79 per cent. (4) Both exploring intra- and cross-modality interactions and employing attention-based audiovisual fusion lead to better prediction performance.

Originality/value

The proposed KE-AFN can support the development of engaging and empathetic human–computer interaction environments.

Article
Publication date: 25 April 2024

Mojtaba Rezaei, Marco Pironti and Roberto Quaglia

This study aims to identify and assess the key ethical challenges associated with integrating artificial intelligence (AI) in knowledge-sharing (KS) practices and their…

Abstract

Purpose

This study aims to identify and assess the key ethical challenges associated with integrating artificial intelligence (AI) in knowledge-sharing (KS) practices and their implications for decision-making (DM) processes within organisations.

Design/methodology/approach

The study employs a mixed-methods approach, beginning with a comprehensive literature review to extract background information on AI and KS and to identify potential ethical challenges. Subsequently, a confirmatory factor analysis (CFA) is conducted using data collected from individuals employed in business settings to validate the challenges identified in the literature and assess their impact on DM processes.
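For readers unfamiliar with CFA, the following is a minimal sketch of a one-factor confirmatory model fitted with the semopy library. The tool choice, the latent factor, the indicator names and the simulated responses are assumptions for illustration only and do not reflect the study's survey instrument or results.

```python
import numpy as np
import pandas as pd
from semopy import Model  # assumed tool choice; the study does not name its software

# Simulate three survey items loading on one latent factor (e.g. "privacy concern").
rng = np.random.default_rng(0)
latent = rng.normal(size=200)
data = pd.DataFrame({
    f"item{i}": 0.8 * latent + rng.normal(scale=0.5, size=200) for i in (1, 2, 3)
})

# lavaan-style description: one latent factor measured by three observed items.
desc = "privacy =~ item1 + item2 + item3"
model = Model(desc)
model.fit(data)
print(model.inspect())  # estimated loadings and variances
```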

Findings

The findings reveal that challenges related to privacy and data protection, bias and fairness, and transparency and explainability are particularly significant in DM. Moreover, challenges related to accountability and responsibility and the impact of AI on employment also show relatively high coefficients, highlighting their importance in the DM process. In contrast, challenges such as intellectual property and ownership, algorithmic manipulation, and global governance and regulation are found to be less central to the DM process.

Originality/value

This research contributes to the ongoing discourse on the ethical challenges of AI in knowledge management (KM) and DM within organisations. By providing insights and recommendations for researchers, managers and policymakers, the study emphasises the need for a holistic and collaborative approach to harness the benefits of AI technologies whilst mitigating their associated risks.

Details

Management Decision, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0025-1747

Keywords

Article
Publication date: 16 April 2024

Jinwei Zhao, Shuolei Feng, Xiaodong Cao and Haopei Zheng

This paper aims to concentrate on recent innovations in flexible wearable sensor technology tailored for monitoring vital signals within the contexts of wearable sensors and…

Abstract

Purpose

This paper concentrates on recent innovations in flexible wearable sensor technology tailored for monitoring vital signals, in the context of wearable sensors and systems developed specifically for monitoring health and fitness metrics.

Design/methodology/approach

In recent decades, wearable sensors for monitoring vital signals in sports and health have advanced greatly. Vital signals include the electrocardiogram, electroencephalogram, electromyogram, inertial data, body motion, heart rate and bodily fluids such as blood and sweat, making them well suited to monitoring with sensing devices.

Findings

This report reviewed reputable journal articles on wearable sensors for vital signal monitoring, focusing on multimode and integrated multi-dimensional capabilities such as the structure, accuracy and nature of the devices, which may offer a more versatile and comprehensive solution.

Originality/value

The paper provides essential information on the present obstacles and challenges in this domain and offers a glimpse into the future directions of wearable sensors for the detection of these crucial signals. Importantly, it is evident that the integration of modern fabrication techniques, stretchable electronic devices, the Internet of Things and artificial intelligence algorithms has significantly improved the capacity to efficiently monitor and leverage these signals for human health monitoring, including disease prediction.

Details

Sensor Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 8 April 2024

Hu Luo, Haobin Ruan and Dawei Tu

The purpose of this paper is to propose a whole set of methods for underwater target detection, because most underwater targets have small samples and underwater images suffer from quality problems…

Abstract

Purpose

The purpose of this paper is to propose a whole set of methods for underwater target detection, because most underwater targets have small samples and underwater images suffer from quality problems such as detail loss, low contrast and color distortion, and to verify the feasibility of the proposed methods through experiments.

Design/methodology/approach

An improved RGHS algorithm is proposed to enhance the original underwater target images. The YOLOv4 deep learning network is then improved for small-sample underwater target detection by combining a traditional data expansion method with the Mosaic algorithm, and the feature extraction capability is expanded with an SPP (Spatial Pyramid Pooling) module after each feature extraction layer to extract richer feature information.
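The SPP block mentioned above can be sketched in a few lines of PyTorch. The kernel sizes (5, 9, 13) follow the common YOLOv4 configuration, while the channel count and feature-map size are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class SPPBlock(nn.Module):
    """YOLOv4-style spatial pyramid pooling: concatenate the input with
    max-pooled versions of itself at several receptive-field sizes."""
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        # Stride-1 pooling with "same" padding keeps the spatial size,
        # so the pooled maps can be concatenated with the input features.
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
             for k in kernel_sizes]
        )

    def forward(self, x):
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

features = torch.randn(1, 256, 13, 13)   # toy backbone feature map (assumed size)
out = SPPBlock()(features)
print(out.shape)                          # torch.Size([1, 1024, 13, 13]), i.e. 256 * 4 channels
```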

Findings

The experimental results, using the official dataset, reveal a 3.5% increase in average detection accuracy for three types of underwater biological targets compared to the traditional YOLOv4 algorithm. In underwater robot application testing, the proposed method achieves an impressive 94.73% average detection accuracy for the three types of underwater biological targets.

Originality/value

Underwater target detection is an important task for underwater robot applications. However, most underwater targets have few samples, and the detection of small-sample targets is a compound problem because it is also affected by the quality of underwater images. This paper provides a whole set of methods to solve these problems, which is of great significance to the application of underwater robots.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 15 July 2021

Nehemia Sugianto, Dian Tjondronegoro, Rosemary Stockdale and Elizabeth Irenne Yuwono

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Abstract

Purpose

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Design/methodology/approach

The paper proposes a new Responsible Artificial Intelligence Implementation Framework to guide the proposed solution's design and development. It defines responsible artificial intelligence criteria that the solution needs to meet and provides checklists to enforce the criteria throughout the process. To preserve data privacy, the proposed system incorporates a federated learning approach that allows computation to be performed on edge devices, limiting the movement of sensitive and identifiable data and eliminating the dependency on cloud computing at a central server.
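A minimal sketch of the breach-checking step that such an edge pipeline might run after people detection is shown below: any pair of detected people closer than a distance threshold is flagged. The 1.5 m threshold, the ground-plane coordinates and the toy detections are assumptions; the paper's detection, tracking and federated-learning components are not reproduced.

```python
from itertools import combinations
import math

def distancing_breaches(positions, min_distance_m=1.5):
    """positions: list of (person_id, x, y) ground-plane coordinates in metres
    (assumed to come from an upstream people detector and tracker)."""
    breaches = []
    for (id_a, xa, ya), (id_b, xb, yb) in combinations(positions, 2):
        if math.hypot(xa - xb, ya - yb) < min_distance_m:
            breaches.append((id_a, id_b))
    return breaches

# Toy frame: three tracked people with ground-plane positions.
print(distancing_breaches([(1, 0.0, 0.0), (2, 1.0, 0.5), (3, 4.0, 4.0)]))
# -> [(1, 2)]: only persons 1 and 2 are closer than the 1.5 m threshold
```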

Findings

The proposed system is evaluated through a case study of monitoring social distancing at an airport. The results discuss how the system can fully address the case study's requirements in terms of its reliability, its usefulness when deployed to the airport's cameras, and its compliance with responsible artificial intelligence.

Originality/value

The paper makes three contributions. First, it proposes a real-time social distancing breach detection system on the edge that builds on a combination of cutting-edge people detection and tracking algorithms to achieve robust performance. Second, it proposes a design approach to develop responsible artificial intelligence in video surveillance contexts. Third, it presents results and discussion from a comprehensive evaluation in the context of a case study at an airport to demonstrate the proposed system's robust performance and practical usefulness.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845

Keywords

Article
Publication date: 17 June 2021

Ambica Ghai, Pradeep Kumar and Samrat Gupta

Web users rely heavily on online content to make decisions without assessing the veracity of the content. The online content comprising text, image, video or audio may be tampered…


Abstract

Purpose

Web users rely heavily on online content to make decisions without assessing its veracity. Online content comprising text, images, video or audio may be tampered with to influence public opinion. Since consumers of online information (misinformation) tend to trust the content when images supplement the text, image manipulation software is increasingly being used to forge images. To address the crucial problem of image manipulation, this study focusses on developing a deep-learning-based image forgery detection framework.

Design/methodology/approach

The proposed deep-learning-based framework aims to detect images forged using copy-move and splicing techniques. An image transformation technique aids the identification of relevant features so that the network trains effectively. A pre-trained, customized convolutional neural network is then trained on public benchmark datasets, and its performance is evaluated on the test dataset using various parameters.
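In the spirit of the pre-trained, customized CNN described above, the following is a minimal transfer-learning sketch for a two-class (authentic versus forged) classifier. The ResNet-18 backbone, the frozen-feature strategy and the random tensors standing in for transformed images are illustrative assumptions, not the paper's exact network or pre-processing.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pre-trained CNN and freeze its feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head with a new two-class layer: authentic vs. forged.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One toy training step on random tensors standing in for transformed images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"toy batch loss: {loss.item():.3f}")
```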

Findings

The comparative analysis of image transformation techniques and the experiments conducted on benchmark datasets from a variety of socio-cultural domains establish the effectiveness and viability of the proposed framework. These findings affirm the potential applicability of the proposed framework in real-time image forgery detection.

Research limitations/implications

This study bears implications for several important aspects of research on image forgery detection. First, this research adds to the recent discussion on feature extraction and learning for image forgery detection. While prior research on image forgery detection hand-crafted the features, the proposed solution contributes to the stream of literature that automatically learns the features and classifies the images. Second, this research contributes to the ongoing effort to curtail the spread of misinformation using images. The extant literature on the spread of misinformation has prominently focussed on textual data shared over social media platforms. The study addresses the call for greater emphasis on the development of robust image transformation techniques.

Practical implications

This study carries important practical implications for various domains such as forensic sciences, media and journalism where image data is increasingly being used to make inferences. The integration of image forgery detection tools can be helpful in determining the credibility of the article or post before it is shared over the Internet. The content shared over the Internet by the users has become an important component of news reporting. The framework proposed in this paper can be further extended and trained on more annotated real-world data so as to function as a tool for fact-checkers.

Social implications

In the current scenario, wherein most image forgery detection studies attempt to assess whether an image is real or forged in an offline mode, it is crucial to identify any trending or potentially forged image as early as possible. By learning from historical data, the proposed framework can aid in the early prediction and detection of newly emerging forged images. In summary, the proposed framework has the potential to mitigate the physical spread and psychological impact of forged images on social media.

Originality/value

This study focusses on copy-move and splicing techniques while integrating transfer learning concepts to classify forged images with high accuracy. The synergistic use of hitherto little explored image transformation techniques and customized convolutional neural network helps design a robust image forgery detection framework. Experiments and findings establish that the proposed framework accurately classifies forged images, thus mitigating the negative socio-cultural spread of misinformation.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845

Keywords

Article
Publication date: 24 March 2022

Elavaar Kuzhali S. and Pushpa M.K.

COVID-19 has occurred in more than 150 countries and has had a huge impact on the health of many people. The main purpose of this work is…

Abstract

Purpose

COVID-19 has occurred in more than 150 countries and has had a huge impact on the health of many people. COVID-19 needs to be diagnosed at an early stage so that infected patients can receive special attention. The fastest way to detect COVID-19-infected patients is through radiology and radiography images. A few early studies describe particular abnormalities of infected patients in chest radiograms. Even though challenges arise in identifying traces of viral infection in X-ray images, a convolutional neural network (CNN) can learn the patterns that distinguish normal from infected X-rays and thereby increase the detection rate. Therefore, this research focuses on developing a deep learning-based detection model.

Design/methodology/approach

The main intention of this proposal is to develop enhanced lung segmentation and classification for diagnosing COVID-19. The main processes of the proposed model are image pre-processing, lung segmentation and deep classification. Initially, image enhancement is performed using contrast enhancement and filtering approaches. Once the image is pre-processed, optimal lung segmentation is performed by the adaptive fuzzy-based region growing (AFRG) technique, in which the constant fusion function is optimized by the modified deer hunting optimization algorithm (M-DHOA). Further, a well-performing deep learning algorithm termed adaptive CNN (A-CNN) is adopted for classification, in which the hidden neurons are tuned by the proposed DHOA to enhance detection accuracy. The simulation results illustrate that the proposed model has strong potential to improve COVID-19 testing methods on publicly available data sets.
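The general idea behind region-growing segmentation, of which the AFRG step above is an adaptive fuzzy variant, can be sketched as follows. The seed point, intensity tolerance and toy image are assumptions; the fuzzy membership, the M-DHOA-optimized fusion function and the A-CNN classifier are not reproduced.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=0.15):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity lies within `tol` of the seed intensity."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = image[seed]
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(image[nr, nc] - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy "chest image": a darker lung-like patch inside a brighter background.
img = np.full((64, 64), 0.8)
img[20:45, 15:40] = 0.3
print("segmented pixels:", region_grow(img, seed=(30, 25)).sum())  # 625 = 25 x 25 patch
```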

Findings

From the experimental analysis, the accuracy of the proposed M-DHOA–CNN was 5.84%, 5.23%, 6.25% and 8.33% higher than that of the recurrent neural network, neural network, support vector machine and K-nearest neighbor methods, respectively. Thus, the segmentation and classification performance of the COVID-19 diagnosis approach developed with AFRG and A-CNN outperforms the existing techniques.

Originality/value

This paper adopts the latest optimization algorithm, M-DHOA, to improve the performance of lung segmentation and classification in COVID-19 diagnosis using adaptive K-means with region growing fusion and A-CNN. To the best of the authors’ knowledge, this is the first work that uses M-DHOA to improve the segmentation and classification steps and increase the convergence rate of diagnosis.

Details

Journal of Engineering, Design and Technology, vol. 22 no. 3
Type: Research Article
ISSN: 1726-0531

Keywords
