Article
Publication date: 12 July 2024

Zhiqiang Zhang, Xiaoming Li, Xinyi Xu, Chengjie Lu, Yihe Yang and Zhiyong Shi

Abstract

Purpose

The purpose of this study is to explore the potential of trainable activation functions to enhance the performance of deep neural networks, specifically ResNet architectures, in the task of image classification. By introducing activation functions that adapt during training, the authors aim to determine whether such flexibility can lead to improved learning outcomes and generalization capabilities compared to static activation functions like ReLU. This research seeks to provide insights into how dynamic nonlinearities might influence deep learning models' efficiency and accuracy in handling complex image data sets.

Design/methodology/approach

This research integrates three novel trainable activation functions – CosLU, DELU and ReLUN – into various ResNet-n architectures, where "n" denotes the number of convolutional layers. Using CIFAR-10 and CIFAR-100 data sets, the authors conducted a comparative study to assess the impact of these functions on image classification accuracy. The approach included modifying the traditional ResNet models by replacing their static activation functions with the trainable variants, allowing for dynamic adaptation during training. The performance was evaluated based on accuracy metrics and loss profiles across different network depths.
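The abstract does not include code, but a minimal PyTorch sketch of the substitution it describes might look like the following. The CosLU functional form used here, (x + a·cos(b·x))·sigmoid(x) with trainable scalars a and b, and the replace_relu helper are illustrative assumptions; the paper's exact parameterizations of CosLU, DELU and ReLUN may differ.

import torch
import torch.nn as nn
from torchvision.models import resnet18

class CosLU(nn.Module):
    """Trainable activation: (x + a*cos(b*x)) * sigmoid(x).
    This functional form is an assumption for illustration; the
    article's abstract does not specify it."""
    def __init__(self):
        super().__init__()
        self.a = nn.Parameter(torch.ones(1))  # trainable amplitude
        self.b = nn.Parameter(torch.ones(1))  # trainable frequency

    def forward(self, x):
        return (x + self.a * torch.cos(self.b * x)) * torch.sigmoid(x)

def replace_relu(module, act_factory):
    """Recursively swap every nn.ReLU in a model for a trainable variant."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, act_factory())
        else:
            replace_relu(child, act_factory)

model = resnet18(num_classes=10)  # 10 output classes for CIFAR-10
replace_relu(model, CosLU)

One caveat of this approach: torchvision's BasicBlock reuses a single nn.ReLU module at two points in its forward pass, so the swapped-in CosLU shares its trainable parameters across both call sites within a block; giving each site its own instance would require editing the block definition itself.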

Findings

The findings indicate that trainable activation functions, particularly CosLU, can significantly enhance the performance of deep learning models, outperforming the traditional ReLU in deeper network configurations on the CIFAR-10 data set. CosLU showed the largest accuracy improvement, whereas DELU and ReLUN offered varying degrees of performance enhancement. These functions also demonstrated potential for reducing overfitting and improving model generalization on more complex data sets such as CIFAR-100, suggesting that the adaptability of activation functions plays a crucial role in the training dynamics of deep neural networks.

Originality/value

This study contributes to the field of deep learning by introducing and evaluating the impact of three novel trainable activation functions within widely used ResNet architectures. Unlike previous works that primarily focused on static activation functions, this research demonstrates that incorporating trainable nonlinearities can lead to significant improvements in model performance and adaptability. The introduction of CosLU, DELU and ReLUN provides a new pathway for enhancing the flexibility and efficiency of neural networks, potentially setting a new standard for future deep learning applications in image classification and beyond.

Details

International Journal of Web Information Systems, vol. 20, no. 4
Type: Research Article
ISSN: 1744-0084
