
Tree Species Identification from Bark Images Using Convolutional Neural Networks


Abstract:

Tree species identification using bark images is a challenging problem that could prove useful for many forestry-related tasks. However, while recent progress in deep learning has shown impressive results on standard vision problems, a lack of datasets has prevented its use for tree bark species classification. In this work, we present, and make publicly available, a novel dataset called BarkNet 1.0 containing more than 23,000 high-resolution bark images from 23 different tree species over a wide range of tree diameters. With it, we demonstrate the feasibility of species recognition from bark images using deep learning. More specifically, we obtain an accuracy of 93.88% on single crops, and an accuracy of 97.81% using a majority-voting approach over all the images of a tree. We also empirically demonstrate that, for a fixed number of images, it is better to maximize the number of tree individuals in the training database, thus directing future data collection efforts.
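To make the gap between the 93.88% single-crop accuracy and the 97.81% per-tree accuracy concrete, the sketch below illustrates how single-crop CNN predictions could be aggregated into a per-tree decision by majority voting, written in PyTorch (the framework referenced by the paper [30]). It is a minimal illustration, not the authors' implementation: the ResNet variant, crop size, normalization constants, checkpoint path, and helper functions are assumptions.

# Minimal sketch (not the authors' code): single-crop prediction with a CNN,
# then per-tree aggregation by majority vote as described in the abstract.
from collections import Counter

import torch
import torchvision.transforms as T
from torchvision import models
from PIL import Image

NUM_SPECIES = 23  # number of species in BarkNet 1.0

# Assumed backbone; the actual architecture and weights are not specified here.
model = models.resnet34(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, NUM_SPECIES)
# model.load_state_dict(torch.load("barknet_resnet34.pth"))  # hypothetical checkpoint
model.eval()

preprocess = T.Compose([
    T.CenterCrop(224),   # one crop per image ("single crop" setting); size is an assumption
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def predict_crop(image_path):
    """Return the predicted species index for a single image crop."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    return model(x).argmax(dim=1).item()

def predict_tree(image_paths):
    """Majority vote over the per-image predictions for one tree individual."""
    votes = Counter(predict_crop(p) for p in image_paths)
    return votes.most_common(1)[0][0]

Under this scheme, a tree is assigned the species predicted most often across its images, which is the kind of aggregation that lifts the per-image figure to the per-tree figure reported above.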
Date of Conference: 01-05 October 2018
Date Added to IEEE Xplore: 06 January 2019
Conference Location: Madrid, Spain

I. Introduction

The ability to automatically and reliably identify tree species from images of bark is an important problem, yet it has received limited attention in the vision and robotics communities. Early work in mobile robotics showed that the ability to distinguish trees from non-trees with combined LiDAR + camera sensing can improve localization robustness [1]. More recent work on data-efficient semantic localization and mapping algorithms [2], [3] has demonstrated the value of semantically meaningful landmarks; in our setting, trees and knowledge of their species would act as such landmarks. The robotics community is also increasingly interested in flying drones in forests [4]. In terms of forestry applications, visual species identification could be used to perform autonomous forest inventory. In the context of autonomous tree harvesting operations [5], the harvester or forwarder would be able to sort timber by species, improving the operator's margins. Similarly, sawmill processes such as debarking could be fine-tuned based on knowledge of the species of the log currently being processed.

References
[1] F. T. Ramos, J. Nieto and H. F. Durrant-Whyte, "Recognising and modelling landmarks to close loops in outdoor SLAM", Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pp. 2036-2041, April 2007.
[2] N. Atanasov, M. Zhu, K. Daniilidis and G. J. Pappas, "Localization from semantic observations via the matrix permanent", The International Journal of Robotics Research, vol. 35, no. 1-3, pp. 73-99, 2016.
[3] A. Ghasemi Toudeshki, F. Shamshirdar and R. Vaughan, "UAV visual teach and repeat using only semantic object features", arXiv e-prints, Jan. 2018.
[4] N. Smolyanskiy, A. Kamenev, J. Smith and S. Birchfield, "Toward low-flying autonomous MAV trail navigation using deep neural networks for environmental awareness", CoRR, 2017.
[5] T. Hellström, P. Lärkeryd, T. Nordfjell and O. Ringdahl, "Autonomous forest vehicles: Historic, envisioned, and state-of-the-art", International Journal of Forest Engineering, vol. 20, no. 1, 2009.
[6] S. Fiel and R. Sablatnig, "Automated identification of tree species from images of the bark, leaves and needles", Proceedings of the 16th Computer Vision Winter Workshop, pp. 67-74, 2011.
[7] K. He, X. Zhang, S. Ren and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification", Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 1026-1034, 2015.
[8] Z.-K. Huang, D.-S. Huang, J.-X. Du, Z.-H. Quan and S.-B. Guo, "Bark classification based on contourlet filter features", Intelligent Computing, pp. 1121-1126, 2006.
[9] Z. Chi, L. Houqiang and W. Chao, "Plant species recognition based on bark patterns using novel Gabor filter banks", Proceedings of the 2003 International Conference on Neural Networks and Signal Processing, vol. 2, pp. 1035-1038, Dec. 2003.
[10] S. Boudra, I. Yahiaoui and A. Behloul, "A comparison of multi-scale local binary pattern variants for bark image retrieval", Lecture Notes in Computer Science, vol. 9386, pp. 764-775, 2015.
[11] M. Sulc, Tree Identification from Images, 2014.
[12] Y. Zhang and Q. Yang, "A survey on multi-task learning", arXiv e-prints, July 2017.
[13] M. Sulc and J. Matas, "Kernel-mapped histograms of multi-scale LBPs for tree bark recognition", 2013 28th International Conference on Image and Vision Computing New Zealand (IVCNZ), pp. 82-87, 2013.
[14] A. Bressane, J. A. F. Roveda and A. C. G. Martins, "Statistical analysis of texture in trunk images for biometric identification of tree species", Environmental Monitoring and Assessment, vol. 187, no. 4, 2015.
[15] A. A. Othmani, C. Jiang, N. Lomenie, J. M. Favreau, A. Piboule and L. F. C. L. Y. Voon, "A novel computer-aided tree species identification method based on burst wind segmentation of 3D bark textures", Machine Vision and Applications, vol. 27, no. 5, pp. 751-766, 2016.
[16] A. Krizhevsky, I. Sutskever and G. E. Hinton, "ImageNet classification with deep convolutional neural networks", Advances in Neural Information Processing Systems 25, Curran Associates, Inc., pp. 1097-1105, 2012.
[17] J. Champ, T. Lorieul, M. Servajean and A. Joly, "A comparative study of fine-grained classification methods in the context of the LifeCLEF plant identification challenge 2015", CEUR Workshop Proceedings, vol. 1391, 2015.
[18] M. Sulc, D. Mishkin and J. Matas, "Very deep residual networks with maxout for plant identification in the wild", Working Notes of CLEF, 2016.
[19] N. Sünderhauf, C. McCool, B. Upcroft and T. Perez, "Fine-grained plant classification using convolutional neural networks for feature extraction", Working Notes of CLEF 2014, pp. 756-762, 2014.
[20] H. Goëau, P. Bonnet and A. Joly, "Plant identification based on noisy web data: the amazing performance of deep learning (LifeCLEF 2017)", CLEF Working Notes, 2017.
[21] S. H. Lee, C. S. Chan, S. J. Mayo and P. Remagnino, "How deep learning extracts and learns leaf features for plant classification", Pattern Recognition, vol. 71, pp. 1-13, 2017.
[22] T. Mizoguchi, A. Ishii, H. Nakamura, T. Inoue and H. Takamatsu, "Lidar-based individual tree species classification using convolutional neural network", Proc. SPIE, vol. 10332, 2017.
[23] M. Cimpoi, S. Maji and A. Vedaldi, "Deep filter banks for texture recognition and segmentation", IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015.
[24] L. Sharan, R. Rosenholtz and E. Adelson, "Material perception: What can you see in a brief glance?", Journal of Vision, vol. 9, no. 8, pp. 784-784, Aug. 2009.
[25] D. Marcos, M. Volpi and D. Tuia, "Learning rotation invariant convolutional filters for texture classification", 2016 23rd International Conference on Pattern Recognition (ICPR), pp. 2012-2017, Dec. 2016.
[26] T. Ojala, T. Mäenpää, M. Pietikäinen, J. Viertola, J. Kyllönen and S. Huovinen, "Outex - new framework for empirical evaluation of texture analysis algorithms", Proceedings of the 16th International Conference on Pattern Recognition, vol. 1, pp. 701-706, 2002.
[27] M. Svab, Computer-vision-based tree trunk recognition, 2014.
[28] L. J. Blanco, C. M. Travieso, J. M. Quinteiro, P. V. Hernandez, M. K. Dutta and A. Singh, "A bark recognition algorithm for plant classification using a least square support vector machine", 2016 Ninth International Conference on Contemporary Computing (IC3), pp. 1-5, Aug. 2016.
[29] K. He, X. Zhang, S. Ren and J. Sun, "Deep residual learning for image recognition", 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778, June 2016.
[30] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, et al., "Automatic differentiation in PyTorch", 2017.
