Artistic Style Recognition: Combining Deep and Shallow Neural Networks for Painting Classification
Top 10% of 2023 papers
Abstract
This study aims to develop a practical software application for identifying and classifying fine art images in museums and art galleries. With the ongoing digitization of art collections, there is a growing need for tools that can quickly analyze and organize collections by artistic style. To improve style-classification accuracy, the proposed technique consists of two phases. In the first phase, the input image is split into five sub-patches, and each patch is classified individually by a DCNN trained specifically for this task. The second phase is a decision-making module built on a shallow neural network, trained on the probability vectors produced by the first-phase classifier; it combines the results from the five patches to infer the final style of the input image. A key advantage of this approach is that the second phase operates on probability vectors rather than images and is trained separately from the first phase, which compensates for errors made in the first phase and improves the final classification accuracy. To evaluate the proposed method, six pre-trained CNN models, namely AlexNet, VGG-16, VGG-19, GoogLeNet, ResNet-50, and InceptionV3, were employed as first-phase classifiers, while the second-phase classifier was implemented as a shallow neural network. Experiments were conducted on four representative art datasets: the Australian Native Art dataset, the WikiArt dataset, ILSVRC, and Pandora 18k. The results show that the proposed strategy substantially surpasses existing methods in style-classification accuracy and precision. Overall, the study contributes to efficient software systems for analyzing and categorizing fine art images, making them more accessible to the general public through digital platforms. Using pre-trained models, we attained an accuracy of 90.7%.
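The two-phase pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the exact patch layout is not specified in the abstract, so a four-corners-plus-centre split is assumed, and `cnn_logits` stands in for any of the pre-trained phase-1 CNNs.

```python
import numpy as np

def five_crop(img, crop):
    """Split an image (H, W, C) into five sub-patches: four corners
    plus the centre, each of size crop x crop. The layout is an
    assumption; the paper only states that five sub-patches are used."""
    h, w = img.shape[:2]
    tl = img[:crop, :crop]
    tr = img[:crop, w - crop:]
    bl = img[h - crop:, :crop]
    br = img[h - crop:, w - crop:]
    cy, cx = (h - crop) // 2, (w - crop) // 2
    ce = img[cy:cy + crop, cx:cx + crop]
    return [tl, tr, bl, br, ce]

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify_image(img, cnn_logits, combiner_weights, crop=224):
    """Phase 1: run the pre-trained CNN on each of the five patches to
    obtain per-patch style probability vectors. Phase 2: concatenate
    the five vectors and feed them to a shallow network (a single
    softmax layer here, for brevity) that outputs the final style."""
    probs = [softmax(cnn_logits(p)) for p in five_crop(img, crop)]
    features = np.concatenate(probs)            # 5 * n_styles features
    final = softmax(combiner_weights @ features)
    return int(np.argmax(final)), final
```

Because phase 2 sees only probability vectors, its input dimension is just five times the number of style classes, which is what makes a shallow network sufficient for the fusion step.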
Fine-tuning and transfer learning improved the model further, raising accuracy to 96.5%.
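Since the second phase is trained separately on phase-1 outputs, the fusion step reduces to fitting a small classifier on concatenated probability vectors. A minimal sketch of that training loop is shown below, using a single softmax layer with cross-entropy gradient descent; the paper's actual shallow-network architecture and optimizer are not given in the abstract, so these are assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def train_combiner(prob_vectors, labels, n_styles, epochs=200, lr=0.5):
    """Train the phase-2 decision module on concatenated per-patch
    probability vectors from phase 1. A single softmax layer stands in
    for the paper's shallow neural network."""
    X = np.asarray(prob_vectors)            # (N, 5 * n_styles)
    y = np.asarray(labels)                  # (N,) integer style labels
    W = np.zeros((n_styles, X.shape[1]))
    for _ in range(epochs):
        P = softmax(X @ W.T)                # (N, n_styles) predictions
        P[np.arange(len(y)), y] -= 1.0      # cross-entropy gradient
        W -= lr * (P.T @ X) / len(y)
    return W
```

At inference time the final style is `argmax(softmax(X @ W.T))` for a new image's concatenated probability vector, matching `classify_image`'s fusion step.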
Related Papers
- Analysis of Deep Networks with Residual Blocks and Different Activation Functions: Classification of Skin Diseases (2019), 64 cited
- Multiple Feature-Based Classifier and Its Application to Image Classification (2010), 12 cited
- Efficient Tumor Classification using GoogleNet Approach to Increase Accuracy in Comparison with ResNet (2022), 2 cited
- Satellite image classification using a classifier integration model (2011), 5 cited
- Evidential Reasoning Based Classifier Combination for an Optimal Remote Sensing Image Classification (2006)