
A Classification Scheme Based on Directed Acyclic Graphs for Acoustic Farm Monitoring



Abstract:

Intelligent farming, as part of the green revolution, is advancing the world of agriculture in such a way that farms become evolving entities, the goal being the eco-friendly optimization of animal production. In this direction, we propose exploiting the acoustic modality for farm monitoring. Such information could be used in a stand-alone or complementary mode to constantly monitor animal population and behavior. To this end, we designed a scheme classifying the vocalizations produced by farm animals. More precisely, we propose a directed acyclic graph, where each node carries out a binary classification task using hidden Markov models. The topological ordering follows a criterion derived from the Kullback-Leibler divergence. During the experimental phase, we employed a publicly available dataset including vocalizations of seven animals typically encountered in farms, and we report promising recognition rates outperforming state-of-the-art classifiers.
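The decision structure described above (a DAG whose nodes each perform a binary classification, traversed until one class remains) can be sketched as follows. This is a minimal illustration only: it follows the standard one-vs-one decision-DAG elimination scheme, and the `score` function is a hypothetical stand-in for the paper's HMM log-likelihood comparison; the actual node ordering in the paper is derived from the Kullback-Leibler divergence, which is not reproduced here.

```python
def ddag_classify(classes, score, observation):
    """Traverse a decision DAG, eliminating one class per binary node.

    classes:     candidate labels in the DAG's topological order
    score:       score(a, b, x) -> float, positive if x favors class a
                 over class b (stand-in for an HMM log-likelihood ratio)
    observation: the feature sequence to classify
    """
    remaining = list(classes)
    while len(remaining) > 1:
        a, b = remaining[0], remaining[-1]  # each node tests first vs. last
        if score(a, b, observation) >= 0:
            remaining.pop()                 # b is eliminated
        else:
            remaining.pop(0)                # a is eliminated
    return remaining[0]                     # the surviving class is the label
```

For N classes this evaluates exactly N-1 binary decisions per observation, which is why the choice of topological ordering (here, KL-divergence based) matters: early errors cannot be revisited.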
Date of Conference: 13-16 November 2018
Date Added to IEEE Xplore: 27 December 2018
Print on Demand(PoD) ISSN: 2305-7254
Conference Location: Bologna, Italy

I. Introduction

The area of Computational Bioacoustic Scene Analysis has received increasing attention from the scientific community in recent decades [1], [2], [3], [4]. Such interest is motivated by the potential benefits towards addressing major environmental challenges, including invasive species, infectious diseases, and climate and land-use change. Accurate information regarding range, population size and trends is crucial for quantifying the conservation status of the species of interest. Such information can be obtained via classical observer-based survey techniques; however, these are becoming inadequate since they are a) expensive, b) subject to weather conditions, and c) limited in the time and space they can cover. To this end, autonomous recording units (ARUs) are extensively employed by biologists [5], [6]; an ARU suitable for the specific application is available at https://www.wildlifeacoustics.com/products/song-meter-sm4. Their adoption is also motivated by the cost of the involved acoustic sensors, which is constantly decreasing due to advancements in the field of electronics.

References

1. D. Stowell, Computational Bioacoustic Scene Analysis, Cham: Springer International Publishing, pp. 303-333, 2018, [online] Available: https://doi.org/10.1007/978-3-319-63450-0_11.
2. D. Blumstein, D. Mennill, P. Clemins, L. Girod, K. Yao, G. Patricelli, et al., "Acoustic monitoring in terrestrial environments using microphone arrays: Applications, technological considerations and prospectus", Journal of Applied Ecology, vol. 48, no. 3, pp. 758-767, 2011.
3. M. W. Towsey, A. M. Truskinger and P. Roe, "The navigation and visualisation of environmental audio using zooming spectrograms", 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pp. 788-797, Nov 2015.
4. X. Dong, M. Towsey, J. Zhang and P. Roe, "Compact features for birdcall retrieval from environmental acoustic recordings", 2015 IEEE International Conference on Data Mining Workshop (ICDMW), pp. 762-767, Nov 2015.
5. T. Grill and J. Schlüter, "Two convolutional neural networks for bird detection in audio signals", 2017 25th European Signal Processing Conference (EUSIPCO), pp. 1764-1768, Aug 2017.
6. S. Ntalampiras, "Bird species identification via transfer learning from music genres", Ecological Informatics, vol. 44, pp. 76-81, 2018, [online] Available: https://www.sciencedirect.com/science/article/pii/S1574954117302467.
7. D. Mitrovic, M. Zeppelzauer and C. Breiteneder, "Discrimination and retrieval of animal sounds", 2006 12th International Multi-Media Modelling Conference, p. 5, 2006.
8. N. C. Han, S. V. Muniandy and J. Dayou, "Acoustic classification of Australian anurans based on hybrid spectral-entropy approach", Applied Acoustics, vol. 72, no. 9, pp. 639-645, 2011, [online] Available: http://www.sciencedirect.com/science/article/pii/S0003682X11000314.
9. V. Exadaktylos, M. Silva, D. Berckmans and H. Glotin, "Automatic identification and interpretation of animal sounds: Application to livestock production optimisation", in Soundscape Semiotics - Localization and Categorization, Rijeka: InTech, 2014, [online] Available: http://dx.doi.org/10.5772/56040.
10. J. J. Noda, C. M. Travieso, D. Sánchez-Rodríguez, M. K. Dutta and A. Singh, "Using bioacoustic signals and support vector machine for automatic classification of insects", 2016 3rd International Conference on Signal Processing and Integrated Networks (SPIN), pp. 656-659, Feb 2016.
11. A. Kumar and G. P. Hancke, "A ZigBee-based animal health monitoring system", IEEE Sensors Journal, vol. 15, no. 1, pp. 610-617, Jan 2015.
12. S. K. Nagpal and P. Manojkumar, "Hardware implementation of intruder recognition in a farm through wireless sensor network", 2016 International Conference on Emerging Trends in Engineering Technology and Science (ICETETS), pp. 1-5, Feb 2016.
13. V. M. Anu, M. I. Deepika and L. M. Gladance, "Animal identification and data management using RFID technology", International Conference on Innovation Information in Computing Technologies, pp. 1-6, Feb 2015.
14. G. Ditzler, M. Roveri, C. Alippi and R. Polikar, "Learning in nonstationary environments: A survey", IEEE Computational Intelligence Magazine, vol. 10, no. 4, pp. 12-25, Nov 2015.
15. K. J. Piczak, "ESC: Dataset for environmental sound classification", Proceedings of the 23rd ACM International Conference on Multimedia, pp. 1015-1018, 2015, [online] Available: http://doi.acm.org/10.1145/2733373.2806390.
16. K. J. Piczak, "Environmental sound classification with convolutional neural networks", 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), pp. 1-6, Sept 2015.
17. P. Smaragdis, M. Shashanka and B. Raj, "A sparse non-parametric approach for single channel separation of known sounds", Advances in Neural Information Processing Systems 22, pp. 1705-1713, 2009.
18. S. Ntalampiras, "Directed acyclic graphs for content based sound, musical genre and speech emotion classification", Journal of New Music Research, vol. 43, no. 2, pp. 173-182, 2014, [online] Available: https://doi.org/10.1080/09298215.2013.859709.
19. T. J. VanderWeele and J. M. Robins, "Signed directed acyclic graphs for causal inference", Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol. 72, no. 1, pp. 111-127, 2010, [online] Available: http://dx.doi.org/10.1111/j.1467-9868.2009.00728.x.
20. P. Taylor, "The target cost formulation in unit selection speech synthesis", INTERSPEECH 2006 - ICSLP, Ninth International Conference on Spoken Language Processing, Pittsburgh, PA, USA, September 17-21, 2006, [online] Available: http://www.isca-speech.org/archive/_interspeech_2006/i06_1455.html.
21. Y. Zhao, C. Zhang, F. K. Soong, M. Chu and X. Xiao, "Measuring attribute dissimilarity with HMM KL-divergence for speech synthesis", 2007.
22. P. Liu, F. K. Soong and J. L. Zhou, "Divergence-based similarity measure for spoken document retrieval", 2007 IEEE International Conference on Acoustics, Speech and Signal Processing - ICASSP '07, vol. 4, pp. IV-89-IV-92, April 2007.
23. S. A. Cook, "A taxonomy of problems with fast parallel algorithms", Information and Control, vol. 64, no. 1, pp. 2-22, 1985, [online] Available: http://www.sciencedirect.com/science/article/pii/S0019995885800413.
24. F. Eyben, F. Weninger, F. Gross and B. Schuller, "Recent developments in openSMILE, the Munich open-source multimedia feature extractor", Proceedings of the 21st ACM International Conference on Multimedia, pp. 835-838, 2013, [online] Available: http://doi.acm.org/10.1145/2502081.2502224.
25. L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition", Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, Feb 1989.
26. D. A. Reynolds and R. C. Rose, "Robust text-independent speaker identification using Gaussian mixture speaker models", IEEE Transactions on Speech and Audio Processing, vol. 3, no. 1, pp. 72-83, Jan 1995.
27. H.-G. Kim and T. Sikora, "Comparison of MPEG-7 audio spectrum projection features and MFCC applied to speaker recognition, sound classification and audio segmentation", 2004 IEEE International Conference on Acoustics, Speech and Signal Processing, vol. 5, pp. V-925-8, May 2004.
28. S. Ntalampiras, "A novel holistic modeling approach for generalized sound recognition", IEEE Signal Processing Letters, vol. 20, no. 2, pp. 185-188, Feb 2013.
29. L. Chen, S. Gunduz and M. T. Özsu, "Mixed type audio classification with support vector machine", 2006 IEEE International Conference on Multimedia and Expo, pp. 781-784, July 2006.
30. M. M. Al-Maathidi and F. F. Li, "Audio content feature selection and classification: a random forests and decision tree approach", 2015 IEEE International Conference on Progress in Informatics and Computing (PIC), pp. 108-112, Dec 2015.
