1. INTRODUCTION
Much attention has been paid to methods for identifying atomic actions of a vehicle (or other object) in a video scene, such as whether a right or left turn has been made. Previous methods of event labeling have used statistical shape theory and Autoregressive Moving Average (ARMA) models for activity recognition [1]. However, little attention has been paid to semantically describing a more complex vehicle track or path. Unmanned aerial vehicles (UAVs), Global Positioning System (GPS) tracking devices, and other sensors can track vehicles over long periods of time, recording many complex activities. Currently, human operators are responsible for describing an object's actions in a complex video track. We propose a framework for automatically segmenting an object's track into meaningful events and applying semantic labels to those events.
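To make the proposed pipeline concrete, the following is a minimal sketch, not the authors' implementation, of one plausible first step: segmenting a sampled (x, y) vehicle track into straight and turning events based on heading change, then attaching simple semantic labels. The function names and the turn threshold are illustrative assumptions.

```python
import math

def headings(track):
    """Heading (radians) of each consecutive segment in an (x, y) track."""
    return [math.atan2(y2 - y1, x2 - x1)
            for (x1, y1), (x2, y2) in zip(track, track[1:])]

def segment_and_label(track, turn_thresh=math.radians(20)):
    """Split a track into events wherever the heading change exceeds
    turn_thresh, labeling each boundary as a left or right turn.
    Thresholds and labels are illustrative, not from the paper."""
    hs = headings(track)
    events, start = [], 0
    for i in range(1, len(hs)):
        # Wrap the heading difference into (-pi, pi] before thresholding.
        delta = (hs[i] - hs[i - 1] + math.pi) % (2 * math.pi) - math.pi
        if abs(delta) > turn_thresh:
            events.append((start, i, "straight"))
            # Counterclockwise heading change corresponds to a left turn.
            events.append((i, i + 1, "left turn" if delta > 0 else "right turn"))
            start = i + 1
    events.append((start, len(hs), "straight"))
    return events

# Toy track: travel east, then turn left and travel north.
track = [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2), (3, 3)]
for begin, end, label in segment_and_label(track):
    print(f"segments {begin}-{end}: {label}")
```

On this toy track the sketch yields three events (straight, left turn, straight); a full system would of course replace the fixed threshold with learned event models and richer labels.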