1. INTRODUCTION
Over the last decade, Neural Networks have been the focal point of machine learning research, especially since the dawn of the Deep Learning era. Most architectures rely on the multiply-accumulate scheme of the linear perceptron, whose output feeds into a nonlinearity. An alternative approach lies in the use of morphological neurons, first introduced by Davidson and Hummer [1], which replace multiplication with addition and accumulation with a maximum (or minimum), i.e., they operate in the (max, +) semiring. This approach was extended by Ritter and Sussner, who proposed a simple network with a single hidden layer for binary classification tasks; its decision boundaries are restricted to hyperplanes parallel to the coordinate axes [2]. This limitation has been addressed in two major ways: either by extending the architecture with a second hidden layer, in which numerous such hyperplanes can be learned, allowing arbitrary (binary) classification tasks to be solved [3], or by adding the option of hyperplane rotation [4].
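To make the contrast concrete, the following is a minimal sketch of the two neuron types. This is an illustration rather than code from the cited works: it assumes NumPy, the (max, +) formulation associated with Ritter and Sussner [2], and a ReLU as a representative nonlinearity; the function names are ours.

```python
import numpy as np

def classical_neuron(x, w, b):
    """Linear perceptron: multiply-accumulate (dot product plus bias)
    followed by a nonlinearity (ReLU used here as an example)."""
    return np.maximum(0.0, w @ x + b)

def morphological_neuron(x, w):
    """Morphological (max-plus) neuron: addition replaces multiplication
    and a maximum replaces the summation."""
    return np.max(x + w)

# Toy comparison on a 3-dimensional input.
x = np.array([0.5, -1.0, 2.0])
w = np.array([1.0, 0.3, -0.7])
print(classical_neuron(x, w, b=0.1))  # weighted combination of all inputs
print(morphological_neuron(x, w))     # determined by a single winning coordinate
```

Because the morphological neuron's output is decided by a single coordinate at a time, the decision boundaries it induces are piecewise linear and axis-aligned, which is the geometric limitation of the single-hidden-layer network of [2] noted above.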