Search results

1 – 10 of 42
Open Access
Article
Publication date: 6 December 2022

Worapan Kusakunniran, Sarattha Karnjanapreechakorn, Pitipol Choopong, Thanongchai Siriapisith, Nattaporn Tesavibul, Nopasak Phasukkijwatana, Supalert Prakhunhungsit and Sutasinee Boonsopon

Abstract

Purpose

This paper aims to propose a solution for detecting and grading diabetic retinopathy (DR) in retinal images using a convolutional neural network (CNN)-based approach. It classifies input retinal images into a normal class or an abnormal class, which is automatically split further into four stages of abnormality.

Design/methodology/approach

The proposed solution is developed based on a newly proposed CNN architecture, namely, DeepRoot. It consists of one main branch connected to two side branches. The main branch serves as the primary feature extractor for both high-level and low-level features of retinal images. The side branches then extract more complex and detailed features from the features output by the main branch. They are designed to capture small traces of DR in retinal images, using modified zoom-in/zoom-out and attention layers.
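The abstract does not give layer-level details, but a minimal sketch of such a multi-branch layout (one trunk plus two attention-gated side branches, each emitting its own prediction) might look as follows. All layer sizes, the upsampling used as a stand-in for zoom-in/zoom-out and the class count are assumptions, not the authors' DeepRoot configuration.

# Hypothetical multi-branch CNN sketch in the spirit of DeepRoot (PyTorch).
import torch
import torch.nn as nn

class SideBranch(nn.Module):
    def __init__(self, in_ch, num_classes):
        super().__init__()
        self.zoom = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)  # crude "zoom-in"
        self.attn = nn.Sequential(nn.Conv2d(in_ch, in_ch, 1), nn.Sigmoid())            # simple attention gate
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, num_classes))

    def forward(self, x):
        x = self.zoom(x)
        return self.head(x * self.attn(x))

class MultiBranchCNN(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.trunk = nn.Sequential(                          # main branch: primary feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.branch_a = SideBranch(64, num_classes)          # side branch for coarse normal/abnormal cues
        self.branch_b = SideBranch(64, num_classes)          # side branch for finer DR-stage cues
        self.main_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes))

    def forward(self, x):
        feats = self.trunk(x)
        return self.main_head(feats), self.branch_a(feats), self.branch_b(feats)

logits_main, logits_a, logits_b = MultiBranchCNN()(torch.randn(1, 3, 224, 224))

Each head can be supervised separately, which is one way different classes could be output at different depths of the network, as the Originality/value section describes.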

Findings

The proposed method is trained, validated and tested on the Kaggle dataset. The generalization of the trained model is evaluated on unseen data samples, which were self-collected from a real hospital scenario. It achieves a promising performance with a sensitivity of 98.18% under the two-class scenario.
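For reference, the reported sensitivity is the standard true-positive rate over the two-class (normal/abnormal) split:

\[ \text{Sensitivity} = \frac{TP}{TP + FN} \]

where TP and FN are the counts of abnormal images classified correctly and incorrectly, respectively.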

Originality/value

The new CNN-based architecture (i.e. DeepRoot) is introduced with the concept of a multi-branch network. It can help address the problem of an unbalanced dataset, especially when there are common characteristics across different classes (i.e. the four stages of DR). Different classes can be output at different depths of the network.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 21 April 2022

Warot Moungsouy, Thanawat Tawanbunjerd, Nutcha Liamsomboon and Worapan Kusakunniran

Abstract

Purpose

This paper proposes a solution for recognizing human faces under mask-wearing. The lower part of the human face is occluded and cannot be used in the learning process of face recognition. The proposed solution is therefore developed to recognize human faces from whatever facial components are available, which vary depending on whether a mask is worn.

Design/methodology/approach

The proposed solution is developed based on the FaceNet framework, aiming to modify the existing facial recognition model to improve performance in both the mask-wearing and non-mask-wearing scenarios. Simulated masked-face images are then rendered on top of the original face images and used in the learning process of face recognition. In addition, feature heatmaps are drawn to visualize which parts of the facial images are most significant for recognizing faces under mask-wearing.
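The abstract does not describe the mask-simulation procedure; a minimal sketch, assuming a simple opaque polygon drawn over the lower part of an aligned face crop, is shown below. The function name, color and coordinates are hypothetical, not the paper's rendering method.

# Crude simulated-mask augmentation: paint an opaque polygon over the lower face.
from PIL import Image, ImageDraw

def add_simulated_mask(img: Image.Image, color=(70, 130, 180)) -> Image.Image:
    out = img.copy()
    w, h = out.size
    draw = ImageDraw.Draw(out)
    # cover roughly the lower 45% of an aligned face crop
    draw.polygon(
        [(int(0.1 * w), int(0.55 * h)), (int(0.9 * w), int(0.55 * h)),
         (int(0.95 * w), int(0.8 * h)), (int(0.5 * w), h), (int(0.05 * w), int(0.8 * h))],
        fill=color,
    )
    return out

masked = add_simulated_mask(Image.open("face_aligned.jpg"))  # hypothetical input path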

Findings

The proposed method is validated using several experimental scenarios. The result shows an outstanding accuracy of 99.2% in the mask-wearing scenario. The feature heatmaps also show that non-occluded components, including the eyes and nose, become more significant for recognizing human faces, compared with the lower part of the face, which may be occluded by a mask.

Originality/value

The convolutional neural network-based solution is tuned for recognizing human faces under mask-wearing. Simulated masks are augmented onto the original face images for training the face recognition model. Heatmaps are then computed to confirm that features generated from the top half of the face images are correctly chosen for face recognition.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 8 February 2022

Gabriela Santiago and Jose Aguilar

Abstract

Purpose

The Reflective Middleware for Acoustic Management (ReM-AM), based on the Middleware for Cloud Learning Environments (AmICL), aims to improve the interaction between users and agents in a Smart Environment (SE) using acoustic services, in order to account for unpredictable situations caused by sounds and vibrations. The middleware allows observing, analyzing, modifying and interacting with every state of an SE from the acoustics. This work details an extension of ReM-AM using the ontology-driven architecture (ODA) paradigm for acoustic management.

Design/methodology/approach

The extension of ReM-AM follows the ontology-driven architecture (ODA) paradigm for acoustic management. The paper defines the different domains of knowledge required for managing sound in SEs, which are modeled using ontologies.
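As a loose illustration of modeling such knowledge domains as ontologies, a few top-level classes could be declared with owlready2 as below; the class names, the property and the IRI are hypothetical and not taken from the paper.

# Hypothetical owlready2 sketch: top-level classes for the acoustics/sound,
# SOA and autonomic-computing knowledge domains named in the Findings.
from owlready2 import get_ontology, Thing, ObjectProperty

onto = get_ontology("http://example.org/rem-am.owl")  # placeholder IRI

with onto:
    class SoundEvent(Thing): pass
    class AcousticService(Thing): pass
    class AutonomicCycle(Thing): pass
    class analyzed_by(ObjectProperty):
        domain = [SoundEvent]
        range = [AcousticService]

onto.save(file="rem_am.owl", format="rdfxml")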

Findings

This work proposes an acoustics and sound ontology, a service-oriented architecture (SOA) ontology, and a data analytics and autonomic computing ontology, which work together. Finally, the paper presents three case studies in the context of smart workplace (SWP), ambient-assisted living (AAL) and Smart Cities (SC).

Research limitations/implications

Future work will develop algorithms for the classification and analysis of sound events, to help with emotion recognition not only from speech but also from random, separate sound events. Further work will also define the implementation requirements and the real-context modeling requirements needed to develop a working prototype.

Practical implications

The case studies show the flexibility of the ODA-based ReM-AM middleware: it is aware of different contexts, acquires information from each and uses this information to adapt itself to the environment and improve it through autonomic cycles. To achieve this, the middleware integrates the classes and relations of its ontologies naturally into the autonomic cycles.

Originality/value

The main contribution of this work is the description of the ontologies required for future work on acoustic management in SEs; other works have used ontologies for sound event recognition, but they have not been extended as a knowledge source within an SE middleware. Specifically, this paper presents the theoretical framework of this work, composed of the AmICL middleware, the ReM-AM middleware and the ODA paradigm.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 1 March 2024

Quoc Duy Nam Nguyen, Hoang Viet Anh Le, Tadashi Nakano and Thi Hong Tran

Abstract

Purpose

In the wine industry, maintaining superior quality standards is crucial to meet the expectations of both producers and consumers. Traditional approaches to assessing wine quality involve labor-intensive processes and rely on the expertise of connoisseurs proficient in identifying taste profiles and key quality factors. In this research, we introduce an innovative and efficient approach centered on the analysis of volatile organic compound (VOC) signals using an electronic nose, thereby empowering non-experts to accurately assess wine quality.

Design/methodology/approach

To devise an optimal algorithm for this purpose, we conducted four computational experiments, culminating in the development of a specialized deep learning network. This network integrates 1D-convolutional and long short-term memory (LSTM) layers tailored to the task. Rigorous validation followed, employing leave-one-out cross-validation to scrutinize the efficacy of the design.
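A hedged sketch of such a 1D-convolutional + LSTM classifier evaluated with leave-one-out cross-validation is given below (Keras). The sequence length, sensor count, layer sizes and the four-class labeling are placeholder assumptions rather than the authors' settings, and the data here are random stand-ins for e-nose recordings.

# Illustrative 1D-CNN + LSTM classifier for e-nose VOC time series, LOOCV evaluation.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from tensorflow import keras
from tensorflow.keras import layers

def build_model(timesteps, channels, n_classes):
    return keras.Sequential([
        keras.Input(shape=(timesteps, channels)),
        layers.Conv1D(32, 5, activation="relu"),   # local temporal patterns in sensor responses
        layers.MaxPooling1D(2),
        layers.LSTM(64),                           # longer-range temporal dependencies
        layers.Dense(n_classes, activation="softmax"),
    ])

X = np.random.rand(30, 200, 8).astype("float32")   # 30 samples, 200 timesteps, 8 sensors (dummy data)
y = np.random.randint(0, 4, size=30)               # 4 hypothetical wine-quality classes

scores = []
for train_idx, test_idx in LeaveOneOut().split(X):
    model = build_model(200, 8, 4)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X[train_idx], y[train_idx], epochs=5, verbose=0)
    scores.append(model.evaluate(X[test_idx], y[test_idx], verbose=0)[1])
print("LOOCV accuracy:", float(np.mean(scores)))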

Findings

The outcomes of these experiments were evaluated and analyzed, demonstrating that the proposed architecture consistently attains promising recognition accuracies, ranging from 87.8% to 99.41%, within a brief inference time of only 4 seconds. These findings promise to improve the assessment and tracking of wine quality, ultimately benefiting the wine industry and its stakeholders, with a particular focus on the critical aspect of VOC signal analysis.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 16 May 2024

Axel Buck and Christian Mundt

Abstract

Purpose

Reynolds-averaged Navier–Stokes (RANS) models often perform poorly in shock/turbulence interaction regions, resulting in excessive wall heat load and an incorrect representation of the separation length in shockwave/turbulent boundary layer interactions. The authors suggest that this can be traced back to inadequate numerical treatment of the inviscid fluxes. The purpose of this study is an extension of the well-known Harten, Lax, van Leer, Einfeldt (HLLE) Riemann solver to overcome this issue.

Design/methodology/approach

The extended solver explicitly takes into account the broadening of waves due to the averaging procedure, which adds numerical dissipation and reduces excessive turbulence production across shocks. The scheme is derived from the HLLE equations, and it is tested against three numerical experiments.
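For context, the baseline HLLE flux that the proposed scheme extends can be written in its standard form (the authors' wave-broadening modification is not reproduced here):

\[
F^{\mathrm{HLLE}} = \frac{b^{+} F_{L} - b^{-} F_{R}}{b^{+} - b^{-}} + \frac{b^{+} b^{-}}{b^{+} - b^{-}}\,\left( U_{R} - U_{L} \right),
\qquad b^{+} = \max(0, S_{R}), \quad b^{-} = \min(0, S_{L}),
\]

where \(U_{L}, U_{R}\) and \(F_{L}, F_{R}\) are the left/right states and fluxes and \(S_{L}, S_{R}\) are estimates of the slowest and fastest wave speeds.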

Findings

Sod’s shock tube case shows that the scheme succeeds in reducing turbulence amplification across shocks. A shock-free turbulent flat plate boundary layer indicates that smooth flow at moderate turbulence intensity is largely unaffected by the scheme. A shock/turbulent boundary layer interaction case with higher turbulence intensity shows that the added numerical dissipation can, however, impair the wall heat flux distribution.

Originality/value

The proposed scheme is motivated by implicit large eddy simulations that use numerical dissipation as subgrid-scale model. Introducing physical aspects of turbulence into the numerical treatment for RANS simulations is a novel approach.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Open Access
Article
Publication date: 21 December 2023

Oladosu Oyebisi Oladimeji and Ayodeji Olusegun J. Ibitoye

Abstract

Purpose

Diagnosing brain tumors is a process that demands a significant amount of time and is heavily dependent on the proficiency and accumulated knowledge of radiologists. Deep learning approaches have gained popularity over traditional methods for automating the diagnosis of brain tumors, offering the potential for more accurate and efficient results. Notably, attention-based models have emerged as an advanced approach, dynamically refining and amplifying model features to further elevate diagnostic capability. However, the specific impact of using the channel, spatial or combined attention methods of the convolutional block attention module (CBAM) for brain tumor classification has not been fully investigated.

Design/methodology/approach

To selectively emphasize relevant features while suppressing noise, ResNet50 coupled with the CBAM (ResNet50-CBAM) was used for the classification of brain tumors in this research.
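A minimal CBAM-style block of the kind attached to ResNet50 is sketched below (PyTorch); the reduction ratio and spatial kernel size follow common defaults and are assumptions here, not necessarily the authors' configuration.

# Minimal CBAM block: channel attention followed by spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        # channel attention: shared MLP over global avg- and max-pooled descriptors
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention: conv over channel-wise mean and max maps
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

out = CBAM(64)(torch.randn(2, 64, 56, 56))  # e.g. applied after an early ResNet50 stage

The channel branch weights what the feature maps encode, the spatial branch weights where; the combined variant applies both in sequence, which is the comparison the paper investigates.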

Findings

ResNet50-CBAM outperformed existing deep learning classification methods such as the plain convolutional neural network (CNN), achieving 99.43% accuracy, 99.01% recall, 98.7% precision and 99.25% AUC when compared with the existing classification methods on the same dataset.

Practical implications

Since the ResNet-CBAM fusion can capture spatial context while enhancing feature representation, it can be integrated into brain tumor classification software platforms to support physicians in clinical decision-making and improve brain tumor classification.

Originality/value

This research has not been published anywhere else.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 5 December 2022

Kittisak Chotikkakamthorn, Panrasee Ritthipravat, Worapan Kusakunniran, Pimchanok Tuakta and Paitoon Benjapornlert

Abstract

Purpose

Mouth segmentation is one of the challenging tasks in the development of lip-reading applications due to illumination, low chromatic contrast and complex mouth appearance. Recently, deep learning methods have effectively solved mouth segmentation problems with state-of-the-art performance. This study presents a modified Mobile DeepLabV3-based technique with a comprehensive evaluation on mouth datasets.

Design/methodology/approach

This paper presents a novel approach to mouth segmentation using a Mobile DeepLabV3 technique that integrates decode and auxiliary heads. Extensive data augmentation, online hard example mining (OHEM) and transfer learning are applied. CelebAMask-HQ and a mouth dataset from 15 healthy subjects in the Department of Rehabilitation Medicine, Ramathibodi Hospital, are used to validate mouth segmentation performance.
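A hedged sketch of an OHEM-style loss for segmentation is shown below (PyTorch): only the highest-loss pixels contribute to the averaged cross-entropy. The kept fraction, ignore index and two-class setup are illustrative assumptions, not values reported in the paper.

# Online hard example mining for segmentation: average only the hardest pixels.
import torch
import torch.nn.functional as F

def ohem_cross_entropy(logits, target, keep_ratio=0.25, ignore_index=255):
    # per-pixel loss, flattened over the whole batch
    loss = F.cross_entropy(logits, target, reduction="none", ignore_index=ignore_index).view(-1)
    n_keep = max(1, int(keep_ratio * loss.numel()))
    hard, _ = loss.topk(n_keep)   # keep only the hardest pixels
    return hard.mean()

logits = torch.randn(2, 2, 128, 128, requires_grad=True)   # background vs. mouth
target = torch.randint(0, 2, (2, 128, 128))
ohem_cross_entropy(logits, target).backward()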

Findings

Extensive data augmentation, OHEM and transfer learning were performed in this study. The proposed technique achieved better performance on CelebAMask-HQ than existing segmentation techniques, with a mean Jaccard similarity coefficient (JSC), mean classification accuracy and mean Dice similarity coefficient (DSC) of 0.8640, 93.34% and 0.9267, respectively. It also achieved better performance on the mouth dataset, with a mean JSC, mean classification accuracy and mean DSC of 0.8834, 94.87% and 0.9367, respectively. The proposed technique achieved an inference time of 48.12 ms per image.
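For reference, the reported overlap metrics are the standard Jaccard and Dice coefficients between a predicted mask \(P\) and a ground-truth mask \(G\):

\[ \mathrm{JSC} = \frac{|P \cap G|}{|P \cup G|}, \qquad \mathrm{DSC} = \frac{2\,|P \cap G|}{|P| + |G|}. \]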

Originality/value

The modified Mobile DeepLabV3 technique was developed with extensive data augmentation, OHEM and transfer learning. It achieved better mouth segmentation performance than existing techniques, which makes it suitable for implementation in further lip-reading applications.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 29 February 2024

Guanchen Liu, Dongdong Xu, Zifu Shen, Hongjie Xu and Liang Ding

Abstract

Purpose

As an advanced manufacturing method, additive manufacturing (AM) technology provides new possibilities for the efficient production and design of parts. However, as the application of AM materials continues to expand, subtractive processing has become a necessary step to improve the accuracy and performance of parts. This paper discusses the machining of AM materials in depth and the surface integrity problems it causes.

Design/methodology/approach

First, we list and analyze the characterization parameters of metal surface integrity and their influence on part performance, and then introduce the application of integrated additive/subtractive processing of metals and the influence of different processing forms on the surface integrity of parts. The surface of trial-cut material is inspected and analyzed, and surfaces produced by integrated additive/subtractive processing are compared with those produced by purely subtractive machining, from which the corresponding conclusions are drawn.

Findings

In this process, we also found surface integrity problems such as tool marks, residual stress and thermal effects, which may have a negative impact on the performance of the final parts. In processing, other integrated additive/subtractive technologies can be tried, several such technologies can be combined, or more efficient AM technologies can be explored to improve processing efficiency. Production process optimization measures can also be adopted to reduce the cost of integrated additive/subtractive processing.

Originality/value

With the gradually increasing requirements for part surface quality in production and the deepening implementation of sustainable manufacturing, demand for integrated additive/subtractive processing of metals is likely to continue to grow. By deeply understanding and studying the material removal and surface integrity problems of AM materials, we can better meet the challenges of the manufacturing process and improve the quality and performance of parts. This research is important for advancing manufacturing technology and achieving success in practical applications.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2633-6596

Open Access
Article
Publication date: 21 June 2022

Abhishek Das and Mihir Narayan Mohanty

Abstract

Purpose

Timely and accurate detection of cancer can save the life of the person affected. According to the World Health Organization (WHO), breast cancer has the highest incidence among all cancers, while ranking fifth in mortality. Among the many image processing techniques, several works have focused on convolutional neural networks (CNNs) for processing these images; however, deep learning models remain to be explored more thoroughly.

Design/methodology/approach

In this work, multivariate statistics-based kernel principal component analysis (KPCA) is used to extract essential features; KPCA is simultaneously helpful for denoising the data. These features are processed through a heterogeneous ensemble model that consists of three base models: a recurrent neural network (RNN), long short-term memory (LSTM) and a gated recurrent unit (GRU). The outcomes of these base learners are fed to a fuzzy adaptive resonance theory mapping (ARTMAP) model for decision-making; nodes are added to the F_2^a layer only if the winning criteria are fulfilled, which makes the ARTMAP model more robust.
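A rough sketch of the KPCA-plus-ensemble stage is given below (scikit-learn and Keras); the feature dimensions, dummy data and the majority-vote combiner standing in for the fuzzy ARTMAP stage are assumptions for illustration only, as no standard library layer provides fuzzy ARTMAP.

# KPCA feature extraction followed by RNN/LSTM/GRU base learners; the final
# fuzzy ARTMAP combiner is only stubbed here as a softmax-sum vote.
import numpy as np
from sklearn.decomposition import KernelPCA
from tensorflow import keras
from tensorflow.keras import layers

X = np.random.rand(100, 64)           # dummy image-derived feature vectors
y = np.random.randint(0, 2, 100)      # benign vs. malignant (dummy labels)

Z = KernelPCA(n_components=16, kernel="rbf").fit_transform(X)   # denoised KPCA features
Z = Z.reshape(-1, 16, 1)                                        # sequence shape for recurrent nets

def base_learner(rnn_layer):
    m = keras.Sequential([keras.Input(shape=(16, 1)), rnn_layer, layers.Dense(2, activation="softmax")])
    m.compile("adam", "sparse_categorical_crossentropy")
    m.fit(Z, y, epochs=3, verbose=0)
    return m.predict(Z, verbose=0)

preds = [base_learner(l) for l in (layers.SimpleRNN(32), layers.LSTM(32), layers.GRU(32))]
vote = np.argmax(sum(preds), axis=1)   # stand-in for the fuzzy ARTMAP decision stage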

Findings

The proposed model is verified using the breast histopathology image dataset publicly available on Kaggle. The model provides 99.36% training accuracy and 98.72% validation accuracy. The proposed model uses data processing at every stage: image denoising to reduce data redundancy and ensemble learning to provide better results than single models. Final classification by a fuzzy ARTMAP model, which controls the number of nodes depending upon performance, yields robust and accurate classification.

Research limitations/implications

Research in the field of medical applications is ongoing, and more advanced algorithms are continually being developed for better classification. There remains scope to design models with better performance, practicability and cost efficiency in the future, and the ensemble may be built from different combinations of base models with different characteristics. Signals, instead of images, may also be verified with the proposed model. Experimental analysis shows the improved performance of the proposed model; the method still needs to be verified in practical settings, and a practical implementation will be carried out to assess its real-time performance and cost efficiency.

Originality/value

In the proposed model, KPCA is used for denoising, reducing data redundancy and selecting features. Training and classification are performed using a heterogeneous ensemble model designed with RNN, LSTM and GRU as base classifiers, providing better results than single models. The use of the adaptive fuzzy ARTMAP model makes the final classification accurate. The effectiveness of combining these methods in a single model is analyzed in this work.

Details

Applied Computing and Informatics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2634-1964

Open Access
Article
Publication date: 12 December 2023

Christine T. Domegan, Tina Flaherty, John McNamara, David Murphy, Jonathan Derham, Mark McCorry, Suzanne Nally, Maurice Eakin, Dmitry Brychkov, Rebecca Doyle, Arthur Devine, Eva Greene, Joseph McKenna, Finola OMahony and Tadgh O'Mahony

Abstract

Purpose

To combat climate change, protect biodiversity, maintain water quality, facilitate a just transition for workers and engage citizens and communities, a diversity of stakeholders across multiple levels work together and collaborate to co-create mutually beneficial solutions. This paper aims to illustrate how a 7.5-year collaboration between local communities, researchers, academics, companies, state agencies and policymakers is contributing to the reframing of industrial harvested peatlands to regenerative ecosystems and carbon sinks with impacts on ecological, economic, social and cultural systems.

Design/methodology/approach

The European Union LIFE Integrated Project, Peatlands and People, responding to Ireland’s Climate Action Plan, represents Europe’s largest rehabilitation of industrially harvested peatlands. It makes extensive use of marketing research for reframing strategies and actions by partners, collaborators and communities in the evolving context of a just transition to a carbon-neutral future.

Findings

The results highlight the ecological, economic, social and cultural reframing of peatlands from fossil fuel and waste lands to regenerative ecosystems bursting with biodiversity and climate solution opportunities. Reframing impacts requires muddling through the ebbs and flows of planned, possible and unanticipated change that can deliver benefits for peatlands and people over time.

Research limitations/implications

Three years into the 7.5-year project, the authors are muddling through how ecological reframing impacts economic and social/cultural reframing. Further impacts, planned and unplanned, can be expected.

Practical implications

This paper shows how an impact planning canvas tool and an impact taxonomy can be applied for social and systems change. The tools can be used throughout a project to understand, respond to and manage unplanned events. There is constant learning: constantly returning to the impact planning canvas and checking where we are and what is needed. There is action and reaction to each other and to the diversity of stakeholders affected, and being affected, by the reframing work.

Originality/value

This paper considers how systemic change through ecological, economic, social and cultural reframing is a perfectly imperfect process of muddling through which holds the promise of environmental, economic, technological, political, social and educational impacts to benefit nature, individuals, communities, organisations and society.

Details

European Journal of Marketing, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0309-0566
