
Numerical Simulation of Deep Convolutional Neural Network Based Flower Classification System

Prerna Shukala

Abstract


There are more than 250,000 recognized species of flowering plants, classified into about 350 families. Plant taxonomy, plant surveys, the gardening industry, live plant sales and botanical research all depend on successful flower classification, as do content-based image retrieval and a wide range of other applications involving flower imagery. Manual classification, however, is tedious and time consuming, particularly when the image background is cluttered, the number of images is large, and some flower categories are easily confused. Robust flower segmentation, detection and classification techniques are therefore of considerable value. In this study, we propose novel steps during the training stage to ensure robust, accurate and real-time classification. Our approach is evaluated on three well-known flower datasets, and the results surpass the previous best reported for this task, with accuracy exceeding 98 per cent. Specifically, a novel two-step deep learning classifier is proposed to distinguish flowers from a wide range of species. The flower region is first segmented so that a minimum bounding box can be located around it; this segmentation step is modelled as a binary classifier within a fully convolutional network framework. A robust convolutional neural network classifier is then built to recognize the different flower types.
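A minimal sketch of the two-step pipeline described above, assuming PyTorch: a small fully convolutional binary segmenter localises the flower and yields a bounding box, and a CNN classifier labels the cropped region. The class names (FlowerSegmenter, FlowerClassifier, classify_image), layer sizes and thresholds are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the two-step flower classification pipeline (illustrative only):
# (1) a fully convolutional binary segmenter (flower vs. background) whose
#     mask gives a minimum bounding box around the flower, and
# (2) a CNN classifier applied to the cropped, resized region.
import torch
import torch.nn as nn


class FlowerSegmenter(nn.Module):
    """Tiny fully convolutional binary segmenter (flower vs. background)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel flower logit
        )

    def forward(self, x):
        return self.net(x)  # (N, 1, H, W) logits


class FlowerClassifier(nn.Module):
    """Small CNN that labels the cropped flower region."""

    def __init__(self, num_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))


def bounding_box_from_mask(mask):
    """Return (top, left, bottom, right) of the positive region of a 2-D mask."""
    ys, xs = torch.nonzero(mask, as_tuple=True)
    if ys.numel() == 0:  # no flower pixels found: fall back to the full image
        return 0, 0, mask.shape[0], mask.shape[1]
    return ys.min().item(), xs.min().item(), ys.max().item() + 1, xs.max().item() + 1


def classify_image(image, segmenter, classifier):
    """Two-step inference: segment, crop the bounding box, then classify."""
    with torch.no_grad():
        mask = torch.sigmoid(segmenter(image.unsqueeze(0)))[0, 0] > 0.5
        top, left, bottom, right = bounding_box_from_mask(mask)
        crop = image[:, top:bottom, left:right].unsqueeze(0)
        crop = nn.functional.interpolate(
            crop, size=(224, 224), mode="bilinear", align_corners=False
        )
        return classifier(crop).argmax(dim=1).item()


if __name__ == "__main__":
    img = torch.rand(3, 224, 224)  # stand-in RGB image
    label = classify_image(img, FlowerSegmenter(), FlowerClassifier(num_classes=102))
    print("predicted class index:", label)
```

In practice both networks would be trained separately (the segmenter with a per-pixel binary loss, the classifier with cross-entropy on cropped flowers); the sketch only illustrates how the two stages fit together at inference time.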


Keywords


Deep Learning, CNN, Flower Classification, Numerical Simulation


DOI: https://doi.org/10.37591/jocta.v12i3.863



Copyright (c) 2021 Journal of Computer Technology & Applications