
Quantifying the Complexity of Standard Benchmarking Datasets for Long-Term Human Trajectory Prediction



Abstract:

Methods to quantify the complexity of trajectory datasets are still a missing piece in benchmarking human trajectory prediction models. In order to gain a better understanding of the complexity of trajectory prediction tasks and following the intuition, that more complex datasets contain more information, an approach for quantifying the amount of information contained in a dataset from a prototype-based dataset representation is proposed. The dataset representation is obtained by first employing a non-trivial spatial sequence alignment, which enables a subsequent learning vector quantization (LVQ) stage. A large-scale complexity analysis is conducted on several human trajectory prediction benchmarking datasets, followed by a brief discussion on indications for human trajectory prediction and benchmarking.
Schematic of the process for obtaining a coarse dataset complexity ranking: each given dataset is preprocessed (spatial sequence alignment) and decomposed into a prototype-based representation, from which its pseudo-entropy is estimated.
Published in: IEEE Access (Volume: 9)
Page(s): 77693 - 77704
Date of Publication: 24 May 2021
Electronic ISSN: 2169-3536

CC BY - IEEE is not the copyright holder of this material. This work is licensed under a Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/).
SECTION I.

Introduction

With the emergence of autonomous vehicles and advances in the field of intelligent robots in general, the task of human trajectory prediction has gained significant research interest in recent years. Besides more classical, physics-based prediction approaches, e.g. building on the Kalman filter [1] or the social force model [2], a range of deep learning approaches have been proposed to tackle the problem. The most common deep learning models either build on long short-term memory networks (e.g. [3]), convolutional neural networks (e.g. [4]), generative adversarial networks (e.g. [5]) or transformers (e.g. [6]) and vary in the contextual cues considered for prediction. Commonly used contextual cues include social (e.g. [7], [8]) and environmental (e.g. [9], [10]) cues. For a comprehensive overview of existing human trajectory prediction approaches, the reader is referred to [11], [12], [13].

Inherent to prediction model development is the need for proper benchmarks aimed at measuring a model’s prediction performance. Due to the direct relation between dataset complexity and model capacity, creating a benchmark for human trajectory prediction that is neither too simple nor too hard to solve is still a difficult task. On large datasets, even simple models overfit, while in other cases the prediction performance is poor on individual samples, even for high-capacity models. Here, one of the difficulties is the open question of how the complexity of a given trajectory prediction dataset can be quantified. As a consequence, current attempts at standardized benchmarking rely on heuristics or experience-based criteria when assembling the data basis. Recent examples are the TrajNet challenge [14] and its extension TrajNet++ [11], [13].

Currently, in human trajectory prediction, the analysis of benchmarking data only plays a minor role and focuses on specific aspects of the data. The most common subject of analysis is the existence of social interaction resulting in non-linear motion behavior. For such analyses, social force models and collision avoidance methods [15] or deviations from regression fits [3], [16] are employed, for example. Such methods are also used in TrajNet++ for splitting the benchmark into interaction and non-interaction tasks. Besides that, basic analyses include dissecting velocity profiles or positional distributions [17]. Beyond this, there is still a lack of approaches that analyze datasets as a whole, with the goal of quantifying the overall complexity of a dataset.

When targeting a qualitative analysis of data complexity, a common approach is to use low-dimensional embeddings for data visualization, for example t-SNE [18] or variations of PCA [19]. While such approaches are viable for non-sequential, high-dimensional data, a prototype-based clustering approach seems more suitable for sequential data. This is especially true for trajectory datasets, where each dataset can be reduced to a small number of prototypical sub-sequences specifying distinct motion patterns, such that each sample can be assumed to be a variation of these prototypes. Additionally, in the context of statistical learning, the complexity of a dataset can be closely related to its entropy. Measuring the dataset entropy, i.e. the amount of information contained in a dataset, is still an open question in the context of trajectory datasets, though it has recently gained interest [20], [21].

To this end, an approach for estimating an entropy-inspired measure, the pseudo-entropy, is proposed, building on a dataset decomposition generated by an adequate pre-processing and clustering method. For decomposing a given dataset into clusters of distinct velocity-agnostic motion patterns, a spatial alignment1 step followed by vector quantization is applied. Given the dataset decomposition, the pseudo-entropy is estimated by analyzing the prediction performance of a simple trajectory prediction model when gradually enriching its training data with additional motion patterns. This paper is an extension of [20] and focuses mainly on dataset complexity. The main contributions are:

  1. A learning and heuristics-based approach for finding a velocity-agnostic, prototype-based representation of trajectory datasets.

  2. An approach for estimating trajectory dataset complexity in terms of an entropy-inspired measure.

  3. A coarse complexity-based ranking of standard benchmarking datasets for human trajectory prediction.

The paper is structured as follows. Sections II and III present a data pre-processing approach necessary for the actual dataset entropy estimation detailed in section IV. In addition to a coarse dataset ranking, the evaluation section V discusses the ranking, as well as the approach and methods used throughout this paper. Further, some interesting findings resulting from the analysis are discussed. Section VI summarizes the paper, lists potential implications for the state of benchmarking and gives a brief discussion on potential future research directions.

For convenience, several definitions and notations used throughout the paper are listed below:

  • A trajectory X is defined as an ordered set2 of points \{\mathbf {x}_{1}, \ldots, \mathbf {x}_{N}\} .

  • The length of a (sub-)trajectory X always refers to its cardinality |X| , rather than the spatial distance covered.

  • The distance between two trajectories X and Y of the same length |X| = |Y| is defined as d_{tr}(X, Y) = \sum _{j=1}^{|X|} \| \mathbf {x}_{j} - \mathbf {y}_{j} \|_{2} (see the code sketch after this list).

  • The number of samples, the trajectory length and the number of prototypes are denoted as N , M and K , respectively. For indexing, i , j and k are used.

  • The q -quantile, with q \in [{0, 1}] , of a set of numbers \{\cdot \} is denoted as Q_{q}(\{ \cdot \}) .
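To make the notation above concrete, the following minimal NumPy sketch (added here for illustration; not part of the original paper) implements the trajectory distance d_{tr} for fixed-length trajectories stored as (M, 2) arrays.

```python
import numpy as np


def d_tr(X, Y):
    """Trajectory distance d_tr(X, Y) = sum_j ||x_j - y_j||_2.

    X, Y: arrays of shape (M, 2) holding the M subsequent 2D points of two
    trajectories of equal length."""
    assert X.shape == Y.shape, "d_tr is only defined for equal-length trajectories"
    return float(np.linalg.norm(X - Y, axis=1).sum())
```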

SECTION II.

Spatial Sequence Alignment

With the goal of reaching a velocity-agnostic, prototype-based trajectory dataset representation in mind, the trajectory alignment approach proposed in this section fulfills two integral roles as a pre-processing step for the subsequent clustering stage. On the one hand, it aligns the data, such that similar patterns are pooled together. On the other hand, it removes variations in velocity among trajectories, therefore generating a dataset with normalized velocity. This, in turn, is essential for obtaining a velocity-agnostic dataset representation. In addition, velocity normalization ensures that similar motion patterns, which only vary in velocity, i.e. in scale, can be pooled together.

Given a set of trajectories (samples) \mathcal {X} = \{X_{1}, \ldots, X_{N}\} , as sequences of M subsequent points X_{i} = \{\mathbf {x}^{i}_{1}, \ldots, \mathbf {x}^{i}_{M}\} , each sample is first normalized by moving it into an arbitrary reference frame and scaling it to unit length: \begin{equation*} X^{norm}_{i} = \left \{{\frac {\mathbf {x}^{i}_{j} - \bar {\mathbf {x}}}{\| \mathbf {x}^{i}_{M} - \mathbf {x}^{i}_{1} \|_{2}} \mid j \in [1,M] }\right \}.\tag{1}\end{equation*}


Here, \bar {\mathbf {x}} is the centroid of a trajectory. It has to be noted that this normalization solely serves the purpose of moving all samples into a common value range and is therefore not a good normalization in terms of pooling similar samples. Then, all samples are aligned relative to a single learned prototype \hat {Y} = \left \{{\hat {\mathbf {y}}_{1}, \hat {\mathbf {y}}_{2}, \ldots, \hat {\mathbf {y}}_{M} }\right \} by using similarity transformations, which are retrieved from a regression model \phi: X \rightarrow \{\mathbf {t},\alpha,s\} with translation \mathbf {t} , rotation angle \alpha and scale s . \hat {Y} and \phi are learned by minimizing the mean squared error between each aligned sample X^\phi _{i} = \phi (X^{norm}_{i}) and the prototype \hat {Y} \begin{equation*} \mathcal {L}_{\mathrm {align}}(\phi (X^{norm}_{i}), \hat {Y}) = \frac {1}{M} \sum ^{M}_{j=1} \|\mathbf {x}^{i}_{j} - \hat {\mathbf {y}}_{j}\|^{2}_{2}\tag{2}\end{equation*}

using stochastic gradient descent. This is different from linear factor models, where the whole training set has to be considered. With respect to equation 2 and the similarity transformation, the trivial solution that maps all samples onto the zero-vector has to be avoided. A brute force approach to this problem is to enforce a minimum scale for the prototype. These steps result in a minimum variance alignment of all samples with respect to the learned prototype. Further, by learning the prototype and the transformation concurrently, the prototype adapts to the most dominant motion pattern and the normalized data is aligned accordingly.
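As a concrete illustration of this stage, the following PyTorch sketch implements the normalization of equation (1) and a joint prototype/regressor module in the spirit of equation (2). It is a minimal sketch under several assumptions not fixed by the text (2D trajectories of fixed length M, a small MLP as the regression model \phi, and a minimum transformation scale rather than a minimum prototype scale to avoid the trivial zero solution); it is not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def normalize(X):
    """Eq. (1): center a trajectory (M, 2) and scale it to unit length."""
    span = torch.linalg.norm(X[-1] - X[0]) + 1e-8
    return (X - X.mean(dim=0)) / span


class SpatialAligner(nn.Module):
    """Jointly learns the alignment prototype Y_hat and a regressor
    phi: X -> (t, alpha, s) that applies a per-sample similarity transform."""

    def __init__(self, M: int, min_scale: float = 0.1):
        super().__init__()
        self.prototype = nn.Parameter(0.1 * torch.randn(M, 2))        # Y_hat
        self.phi = nn.Sequential(nn.Linear(2 * M, 64), nn.ReLU(), nn.Linear(64, 4))
        self.min_scale = min_scale                                     # guards against collapse

    def forward(self, x_norm):                     # x_norm: (B, M, 2), already normalized
        p = self.phi(x_norm.flatten(1))            # (B, 4) -> t_x, t_y, alpha, s
        t, alpha = p[:, :2], p[:, 2]
        s = self.min_scale + F.softplus(p[:, 3])
        c, si = torch.cos(alpha), torch.sin(alpha)
        R = torch.stack([torch.stack([c, -si], dim=-1),
                         torch.stack([si, c], dim=-1)], dim=-2)        # (B, 2, 2)
        return s[:, None, None] * (x_norm @ R.transpose(1, 2)) + t[:, None, :]

    def loss(self, x_aligned):                     # eq. (2), averaged over the batch
        return ((x_aligned - self.prototype) ** 2).mean()
```

In practice, such a module would be trained with a standard optimizer over mini-batches of normalized samples by minimizing loss(forward(x_norm)).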

An exemplary result of this alignment approach is depicted in figure 1. By aligning all samples with a single prototype, aligned samples have a common orientation and form clusters of similar samples.

FIGURE 1. Example of a resulting alignment (right) of a given dataset (left; hyang scene taken from the Stanford Drone Dataset [15]).

SECTION III.

Learning Vector Quantization

Clustering approaches can be applied after the spatial alignment, as it distributes random errors homogeneously over the sequence and exposes clusters of motion patterns. There exists a wide range of clustering approaches to choose from, ranging from simple, established approaches (e.g. k-means [22], DBSCAN [23] or learning vector quantization [24]) to more sophisticated or specialized neural models (e.g. [25], [26], [27]). In the context of this paper, the choice of the clustering approach itself only plays a minor role, thus a learning vector quantization (LVQ) approach is employed, as it can be integrated directly into a deep learning framework. For a more comprehensive review of clustering approaches, the reader may be referred to recent surveys, for example [28] or [29].

The resulting pipeline corresponds to the encoder part of an auto-encoding architecture for representation learning inspired by [30], which is capable of learning meaningful representations. Here, aligned samples are mapped onto K prototypes3 \mathcal {Z} = \{Z_{1}, \ldots, Z_{K}\} , with Z_{k} = \{\mathbf {z}^{k}_{1}, \ldots, \mathbf {z}^{k}_{M}\} , in quantized space. This results in a concise set of prototypes representing the given dataset. The prototypes are learned by minimizing the mean squared error between the aligned samples \mathcal {X}^{\phi } = \{X^\phi _{1}, \ldots, X^\phi _{N}\} and the respective closest prototype Z_{z(i)} in quantized space: \begin{equation*} \mathcal {L} = \underbrace {\frac {1}{N} \sum ^{N}_{i=1} d_{tr}(X^\phi _{i}, Z_{z(i)})}_{\mathcal {L}_{LVQ}} + \gamma L_{reg},\tag{3}\end{equation*}

where L_{reg} is the regularization term, which is discussed in section III-B. The index of the closest prototype for a sample X^\phi _{i} is determined by \begin{equation*} z(i) = \mathop {\mathrm {arg\,min}} _{k} d_{tr}(X^\phi _{i}, Z_{k}).\tag{4}\end{equation*}

Note that due to the fixed trajectory length, the mean squared error is a suitable similarity measure. If the length varied, a more sophisticated measure would be necessary [31].
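For illustration, the sketch below computes the winner-takes-all assignment z(i) of equation (4), the quantization loss \mathcal{L}_{LVQ} of equation (3) and the per-prototype support of equation (5) in PyTorch; the regularizer L_{reg} (discussed in section III-B) is passed in externally. This is an assumed minimal realization, not the authors' code.

```python
import torch


def lvq_step(x_aligned, prototypes, gamma, l_reg):
    """x_aligned: (N, M, 2) aligned samples, prototypes: (K, M, 2) learnable.

    Returns the total loss of eq. (3), the assignments z(i) of eq. (4)
    and the support counts pi(Z) of eq. (5)."""
    # pairwise trajectory distances d_tr: (N, K)
    d = torch.linalg.norm(x_aligned[:, None] - prototypes[None], dim=-1).sum(dim=-1)
    z = d.argmin(dim=1)                                           # eq. (4)
    l_lvq = d.gather(1, z[:, None]).mean()                        # winner-takes-all term of eq. (3)
    support = torch.bincount(z, minlength=prototypes.shape[0])    # eq. (5)
    return l_lvq + gamma * l_reg, z, support
```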

Using \mathcal {L} for learning the LVQ parameters, two aspects have to be considered:

  1. As the value for K is unknown a priori, it should in general be chosen larger than expected.

  2. Due to the winner-takes-all strategy, \mathcal {L}_{LVQ} only updates prototypes that have supporting samples.

In order to achieve consistent training and quantization results under these conditions, the following sections present approaches for initialization (III-A), regularization (III-B) and refinement (III-C). While the initialization and regularization primarily address 2), the refinement, built upon the expected results of the proposed initialization and regularization approaches, addresses 1).

For describing these approaches, the support of a prototype plays an integral role. The support \pi (\mathcal {Z})_{k} of the aligned dataset \mathcal {X}^\phi for each prototype Z_{k} \in \mathcal {Z} is aggregated in \begin{align*} \pi (\mathcal {Z})=&\left \{{ \sum ^{N}_{i = 1} \mathbf {1}_{k}(i) \mid k \in [1, \lvert \mathcal {Z} \rvert] }\right \}, \\ \mathrm {with} \\ \mathbf {1}_{k}(i)=&\begin{cases} 1 & \mathrm {if}~z(i) = k \\ 0 & \mathrm {else} \end{cases}.\tag{5}\end{align*}


A resulting set of prototypes obtained with the approach described in this section and the following subsections, for the dataset shown in figure 1, is depicted in figure 2. It can be seen that the prototypes cover a range of motion patterns: constant velocity, curvilinear motion, acceleration and deceleration.

FIGURE 2. Possible set of prototypes (red) for a given aligned dataset (top row). The prototypes represent different motion patterns (from left to right): constant velocity, curvilinear motion, acceleration and deceleration.

A. Initialization

The main objective of the initialization step is two-fold. On the one hand, the number of out-of-distribution prototypes should be reduced in the initial set of prototypes \mathcal {Z}_{\mathrm {init}} . On the other hand, \mathcal {Z}_{\mathrm {init}} has to be spread across the data \mathcal {X}^\phi , in order to identify different motion patterns more consistently.

Taking this into account, the alignment prototype \hat {Y} is set as the first prototype Z_{1} , as it should resemble the most dominant motion pattern. Under the assumption that other relevant motion patterns are dissimilar to \hat {Y} , a Forgy initialization [32] is applied for initializing the remaining K-1 prototypes. Accordingly, the remaining prototypes are randomly selected from a subset \mathcal {X}' \subset \mathcal {X}^\phi , where \mathcal {X}' contains all samples X^\phi _{i} with \tau _{\mathrm {ilow}} < d_{tr}(X^\phi _{i}, \hat {Y}) < \tau _{\mathrm {ihigh}} . The thresholds \tau _{\mathrm {ix}} = Q_{q_{\mathrm {ix}}}(\{ d_{tr}(X^\phi _{i}, \hat {Y}) \mid X^\phi _{i} \in \mathcal {X}^\phi \}) are defined as the q_{\mathrm {ix}} -quantiles of all sample distances with respect to \hat {Y} . The upper bound \tau _{\mathrm {ihigh}} is employed to reduce the risk of initializing a prototype with out-of-distribution samples from \mathcal {X}^\phi . Depending on the choices for q_{\mathrm {ilow}} and q_{\mathrm {ihigh}} , there might not be enough samples to choose from (\lvert \mathcal {X}' \rvert < K-1 ). In this case, q_{\mathrm {ilow}} can be gradually reduced until \lvert \mathcal {X}' \rvert \geq K-1 . An example for \mathcal {X}^\phi and \mathcal {X}' with q_{\mathrm {ilow}} = 0.9 and q_{\mathrm {ihigh}} = 0.95 is given in figure 3.
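A possible NumPy sketch of this initialization scheme is given below. The relaxation step size for q_{ilow} and the array layout are assumptions made for illustration.

```python
import numpy as np


def init_prototypes(X_aligned, Y_hat, K, q_low=0.9, q_high=0.95, seed=None):
    """Quantile-banded Forgy initialization (section III-A).

    X_aligned: (N, M, 2) aligned samples, Y_hat: (M, 2) alignment prototype.
    Returns (K, M, 2) initial prototypes; the first one is Y_hat itself."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X_aligned - Y_hat, axis=-1).sum(axis=-1)   # d_tr to Y_hat, shape (N,)
    lo, hi = np.quantile(d, q_low), np.quantile(d, q_high)
    idx = np.where((d > lo) & (d < hi))[0]
    while len(idx) < K - 1 and q_low > 0.0:                       # gradually relax the lower bound
        q_low = max(q_low - 0.05, 0.0)
        lo = np.quantile(d, q_low)
        idx = np.where((d > lo) & (d < hi))[0]
    chosen = rng.choice(idx, size=K - 1, replace=len(idx) < K - 1)
    return np.concatenate([Y_hat[None], X_aligned[chosen]], axis=0)
```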

FIGURE 3. Left: Aligned dataset \mathcal {X}^\phi (green) and alignment prototype \hat {Y} (red). Right: Subset \mathcal {X}' for q_{\mathrm {ilow}} = 0.9 and q_{\mathrm {ihigh}} = 0.95 .

B. Regularization

While the initialization helps in increasing the average support for each prototype,4 some out-of-distribution samples5 might be assigned to individual prototypes, resulting in little support from other samples.

To ensure optimization of all prototypes, a regularization term L_{reg} is employed, which is set to move out-of-distribution prototypes closer to more relevant samples or clusters of samples. Following this, different definitions for L_{reg} can be used. On the one hand, L_{reg} could move all prototypes Z_{k} \in \mathcal {Z} \setminus Z_{*} towards the most supported prototype Z_{*} = Z_{k_{*}} , where k_{*} = \mathop {\mathrm {arg\,max}} _{k} \pi (\mathcal {Z})_{k} : \begin{equation*} L_{reg} = \frac {1}{K - 1} \sum _{Z_{k} \in \mathcal {Z} \setminus Z_{*}} d_{tr}(Z_{k}, Z_{*}).\tag{6}\end{equation*}

View SourceRight-click on figure for MathML and additional features.

Under ideal conditions, Z_{*} should be equal to the alignment prototype \hat {Y} , which roughly represents the overall mean of the dataset, and L_{reg} behaves accordingly. In practice, however, this assumption might not hold due to noise, increasing the unpredictability of the optimization. Hence, in the following, L_{reg} is defined to move all prototypes towards the global mean by minimizing the global error \begin{equation*} L_{reg} = \frac {1}{N \cdot K} \sum _{i=1}^{N}\sum _{k=1}^{K} d_{tr}(X^\phi _{i}, Z_{k}).\tag{7}\end{equation*}


Intuitively, by choosing an appropriate value for the regularization weight \gamma , this definition of L_{reg} moves low-support prototypes into more reasonable areas of the quantized space, while the winner-takes-all loss function \mathcal {L}_{LVQ} keeps them within range of relevant sample clusters. Additionally, when K is too large, superfluous prototypes end up very similar after optimization.

As a side-note, very imbalanced prototype sets, in terms of many low-support prototypes, can also be measured by the perplexity score \begin{align*} \mathcal {P}_{\mathcal {Z}}=&\exp \left \{{ -\sum _{k=1}^{K} \pi _{\mathrm {norm}}(\mathcal {Z})_{k} \odot \log \{ \pi _{\mathrm {norm}}(\mathcal {Z})_{k} \} }\right \}, \\ \mathrm {with} \\ \pi _{\mathrm {norm}}(\mathcal {Z})_{k}=&\frac {\pi (\mathcal {Z})_{k}}{\sum _{k=1}^{K} \pi (\mathcal {Z})_{k}}. \tag{8}\end{align*}


Due to \mathcal {P}_{\mathcal {Z}} not being directly derived from \mathcal {Z} , it is not a good term for optimization, and thus for L_{reg} . Nevertheless, \mathcal {P}_{\mathcal {Z}} can be used later on when assessing the complexity of a dataset.
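For completeness, the sketch below transcribes the global-mean regularizer of equation (7) and the perplexity score of equation (8), reusing the pairwise distance matrix d and the support counts from the LVQ sketch in section III; again an illustrative assumption, not the paper's code.

```python
import torch


def global_mean_reg(d):
    """Eq. (7): mean d_tr distance between all samples and all prototypes; d: (N, K)."""
    return d.mean()


def perplexity(support):
    """Eq. (8): perplexity of the normalized support distribution."""
    p = support.float() / support.sum()
    p = p[p > 0]                              # convention: 0 * log 0 = 0
    return torch.exp(-(p * torch.log(p)).sum())
```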

C. Refinement

Finally, a heuristic refinement scheme, building on the expected results when using the regularization approach presented in section III-B, is employed in order to remove unnecessary prototypes when K was chosen too large. The refinement step consists of two phases. In the first phase, low-support prototypes are removed from \mathcal {Z} by using a dataset-dependent threshold \tau _{\mathrm {phase1}} = \lceil \epsilon _{\mathrm {phase1}} \cdot \lvert \mathcal {X}^\phi \rvert \rceil : \begin{equation*} \mathcal {Z}' = \mathcal {Z} \setminus \left \{{ Z_{k} \mid \pi (\mathcal {Z})_{k} < \tau _{\mathrm {phase1}} }\right \}.\tag{9}\end{equation*}


The second phase revolves around removing prototypes similar to the most supported prototype Z_{*} . It is assumed that Z_{*} is close to the global mean of the dataset. This implies that superfluous prototypes are driven towards Z_{*} because of the global mean regularization, allowing these prototypes to be detected and removed. For assessing similarity, prototypes Z_{k} \in \mathcal {Z} \setminus \{Z_{*}\} are first aligned with Z_{*} in terms of their starting points \mathbf {z}^{k}_{1} = \mathbf {z}^{*}_{1} and initial orientations \mathbf {z}^{k}_{2} - \mathbf {z}^{k}_{1} = \mathbf {z}^{*}_{2} - \mathbf {z}^{*}_{1} . An aligned prototype Z'_{k} is then considered as similar to Z_{*} when at least \epsilon _{\mathrm {phase2}} \cdot 100~\% of its points are in close proximity to the respective points of Z_{*} : \begin{align*} \mathrm {sim}(Z'_{k}, Z_{*})=&\begin{cases} 1 & \mathrm {if}~|\mathcal {S}(Z'_{k}, Z_{*})| \geq \epsilon _{\mathrm {phase2}} \cdot \lvert Z_{*} \rvert \\ 0 & \mathrm {else} \end{cases}, \\ \mathrm {with} \\ \mathcal {S}(Z'_{k}, Z_{*})=&\left \{{ \mathbf {z}^{k}_{j} \mid \| \mathbf {z}^{k}_{j} - \mathbf {z}^{*}_{j} \|_{2} < \tau (Z_{*})_{j}, j \in [1, M] }\right \} \\ \tau (Z_{*})_{j}=&Q_{0.99}\left ({\left \{{ \| \mathbf {z}^{*}_{j} - \mathbf {x}^{i}_{j} \|_{2} \mid \mathbf {x}^{i}_{j} \in X_{i},~X_{i} \in \mathcal {X}^\phi,~z(i) = k_{*} }\right \} }\right). \\\tag{10}\end{align*}


\tau (Z_{*})_{j} is the per-point distance threshold of the j -th trajectory point, calculated from the supporting samples \{X_{i} \in \mathcal {X}^\phi \mid z(i) = k_{*} \} of Z_{*} . The 0.99-quantile is used instead of the maximum in order to exclude outliers in the data. A visual example for determining similarity is given in figure 4.
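The two refinement phases can be sketched as follows (NumPy; an assumed implementation in which the phase-1 threshold scales with the number of samples and the per-point thresholds are computed from the supporting samples of Z_*, as described above).

```python
import numpy as np


def refine_phase1(prototypes, support, eps1, n_samples):
    """Eq. (9): keep only prototypes with support >= ceil(eps1 * dataset size)."""
    keep = support >= np.ceil(eps1 * n_samples)
    return prototypes[keep], support[keep]


def per_point_thresholds(Z_star, supporters):
    """Eq. (10): 0.99-quantile of per-point distances over the supporting samples of Z_*."""
    return np.quantile(np.linalg.norm(supporters - Z_star, axis=-1), 0.99, axis=0)


def is_similar(Z_k, Z_star, tau, eps2=0.9):
    """Eq. (10): similarity test after aligning start point and initial orientation."""
    v_k, v_s = Z_k[1] - Z_k[0], Z_star[1] - Z_star[0]
    ang = np.arctan2(v_s[1], v_s[0]) - np.arctan2(v_k[1], v_k[0])
    R = np.array([[np.cos(ang), -np.sin(ang)], [np.sin(ang), np.cos(ang)]])
    Z_aligned = (Z_k - Z_k[0]) @ R.T + Z_star[0]
    close = np.linalg.norm(Z_aligned - Z_star, axis=1) < tau
    return close.sum() >= eps2 * len(Z_star)
```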

FIGURE 4. Example of assessing prototype similarity in the second phase of the refinement scheme. The first row shows the alignment of a prototype Z_{k} (blue) with the highest-support prototype Z_{*} (red). The second row shows how the maximum per-point distance is estimated for determining similarity. In this example, as seen on the right, Z_{k} is only within similarity range for 4 of 8 points, thus it is determined as dissimilar when choosing an overlap factor \epsilon _{\mathrm {phase2}} > 0.5 .

SECTION IV.

Estimating Trajectory Dataset Entropy

This section discusses an approach that moves towards enabling a thorough complexity analysis of human trajectory prediction benchmarking datasets. While previous work (e.g. [3], [15], [17]) focuses on statistics derived directly from the datasets, like histograms or deviations from linear prediction, the approach proposed in the following relies on a dataset decomposition \mathcal {X}^\phi _{decomp} = \{\mathcal {X}_{1}, \ldots, \mathcal {X}_{K}\} learned by the LVQ model introduced in section III, i.e. the clusters \mathcal {X}_{k} \subset \mathcal {X}^\phi assigned to each prototype Z_{k} \in \mathcal {Z} after refinement. This decomposition is used for estimating an entropy-inspired measure for a given trajectory dataset \mathcal {X} , declared as the pseudo-entropy \text {H}_{pseudo}(\mathcal {X}) . It is assumed that a high information content yields a high entropy value, which finally gives a direct quantification of complexity. Compared to the OpenTraj approach described in [21], which was developed independently at the same time and mainly focuses on analyzing raw trajectory data, the approach presented in this paper focuses on the analysis of more abstract trajectory data in alignment space and clusters of motion patterns extracted from aligned data.

As an initial proof of concept, \text {H}_{pseudo}(\mathcal {X}) is estimated using the change in prediction performance of a simple machine learning-based trajectory prediction model when gradually increasing the amount of information in the training data. The amount of information in the training data can be controlled to a certain extent by using the dataset decomposition in alignment space generated by the LVQ model. In general, most current trajectory datasets are strongly biased towards linear motion patterns, thus other, more complex patterns are less frequent, i.e. less relevant, but carry more information. Following this, by ordering the clusters in \mathcal {X}^\phi _{decomp} by decreasing relevance, a training set initially consisting only of \mathcal {X}_{1} \in \mathcal {X}^\phi _{decomp} can be gradually enriched with information by first adding \mathcal {X}_{2} , then \mathcal {X}_{3} and so on. Then, a simple prediction model \mathcal {M} is trained on each composed training dataset. In this paper, \mathcal {M} consists of a learned linear input transformation \begin{equation*} \mathcal {M}(X_{f}) = WX_{f} + b,\tag{11}\end{equation*}

where W and b are learned from data and X_{f} is a flattened vector given by concatenating all points of a trajectory X . This model is assumed to be able to model each relevant motion pattern in isolation. Due to its limited modeling capacity, it is expected that the prediction error increases with the amount of information present in the data. Finally, \text {H}_{pseudo}(\mathcal {X}^\phi) is calculated by counting significant prediction error increases, as depicted in algorithm 1.

SECTION Algorithm 1

Estimate Dataset Pseudo-Entropy

Require: Decomposed, sorted datasets \mathcal {X}^\phi _{decomp}
  \text {PseudoEntropy} \leftarrow 0
  for k \in \{1, \ldots, K\} do
    \mathcal {X}_{train} = \bigcup _{i=1}^{k} \mathcal {X}_{i},\,\,\mathcal {X}_{i} \in \mathcal {X}^\phi _{decomp}
    \mathcal {M}_{k} \leftarrow \text {estimateParameters}(\mathcal {X}_{train})
    if k > 1 then
      \text {errs}_{k-1} \leftarrow \text {calcPredictionErrorsPerSample}(\mathcal {M}_{k-1})
      \text {errs}_{k} \leftarrow \text {calcPredictionErrorsPerSample}(\mathcal {M}_{k})
      if \text {SignificantDifference}(\text {errs}_{k-1}, \text {errs}_{k}) \land \text {mean}(\text {errs}_{k}) > \text {mean}(\text {errs}_{k-1}) then
        PseudoEntropy++
      end if
    end if
  end for
  return PseudoEntropy
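A compact Python sketch of algorithm 1 is given below. It uses an ordinary least-squares linear map as \mathcal{M} (equation (11)) and, as simplifying assumptions not specified in the pseudo-code, evaluates each model on its own training data and uses Welch's t-test throughout instead of selecting the t-test variant via Levene's test.

```python
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression


def pseudo_entropy(clusters, obs_len, alpha=0.05):
    """Sketch of Algorithm 1. clusters: list of arrays (N_k, M, 2), sorted by
    decreasing relevance; obs_len: number of observed points per trajectory."""
    H, prev_errs = 0, None
    for k in range(1, len(clusters) + 1):
        data = np.concatenate(clusters[:k], axis=0)
        X = data[:, :obs_len].reshape(len(data), -1)        # flattened observations X_f
        Y = data[:, obs_len:].reshape(len(data), -1)        # flattened future points
        model = LinearRegression().fit(X, Y)                # M(X_f) = W X_f + b, eq. (11)
        pred = model.predict(X).reshape(len(data), -1, 2)
        errs = np.linalg.norm(pred - data[:, obs_len:], axis=-1).mean(axis=-1)  # per-sample ADE
        if prev_errs is not None:
            _, p = stats.ttest_ind(prev_errs, errs, equal_var=False)
            if p < alpha and errs.mean() > prev_errs.mean():
                H += 1
        prev_errs = errs
    return H
```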

SECTION V.

Evaluation

This section starts with a setup common to all experiments, followed by a qualitative evaluation of the simple prediction model described in section IV and the LVQ model for dataset decomposition. Next, a coarse ranking of standard benchmarking datasets for long-term human trajectory prediction based on pseudo-entropy is given. The section closes with a discussion on the approach and methodology itself, other possible factors contributing to dataset complexity and interesting findings.

Evaluations are conducted on scenes taken from the following frequently used benchmarking datasets: BIWI Walking Pedestrians ([33], abbrev.: biwi), Crowds by Example (also known as the UCY dataset, [34], abbrev.: crowds) and the Stanford Drone Dataset ([15], abbrev.: sdd). Besides being typically used for evaluating human trajectory prediction models in the literature, these datasets also formed the basis of the original TrajNet challenge. The scenes in the datasets are denoted as Dataset: Scene Recording, e.g. recording 01 of the zara scene in the crowds dataset is denoted as crowds:zara01. Note that for sdd, different recordings of the same scene do not necessarily capture the same campus area (but there might be some overlap). An overview of statistical details of the datasets is given in appendix A.

For trajectory prediction tasks targeted in this section, trajectories are split in half in order to obtain observation and target sequences. The prediction error is reported in terms of the average displacement error.

A. Setup

First, the datasets are brought to a common sample frequency. The biwi and crowds scenes already share a sample frequency of 2.5 samples per second, thus the sample frequency of all sdd scenes is adjusted accordingly.

Next, as the prototype-based representation only works with trajectories of the same length, an appropriate sequence length has to be chosen for each dataset in the evaluation. The most commonly used sequence length in recent benchmarks is M=20 (8 points for observation and 12 for prediction) per trajectory. Setting M=20 for all datasets might, however, lead to smaller datasets having only a few trajectories left for learning the LVQ, as the average trajectory length varies greatly between datasets. Because of this, the q -quantile of trajectory lengths per dataset is chosen, i.e. a common but dataset-dependent sequence length. After choosing M , the training datasets are assembled by collecting all possible (sub-)trajectories of length M from each respective dataset, in order to provide as much data as possible. Additionally, for achieving a more meaningful result, the average over multiple sequence lengths, i.e. time scales, is used for calculating the pseudo-entropies. Thus, q = 0.1 and q = 0.25 are used for evaluation, ensuring that a greater portion of the dataset remains while removing less interesting trajectories in terms of long-term trajectory prediction.
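A minimal sketch of the sliding-window extraction of all (sub-)trajectories of length M described above; the list-of-tracks input format is an assumption made here for illustration.

```python
import numpy as np


def all_subtrajectories(tracks, M):
    """Collect every (sub-)trajectory of length M from a list of (T_i, 2) tracks."""
    windows = [t[i:i + M] for t in tracks for i in range(len(t) - M + 1)]
    return np.stack(windows) if windows else np.empty((0, M, 2))
```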

Then, trajectories not exceeding a dataset-dependent minimum speed6 s_{\mathrm {min}} are filtered out. The reason for this is twofold. First, statistical models are worse at modeling trajectories of slow-moving persons, as their behavior becomes less predictable [35]. Second, and as a consequence, these trajectories generally do not contain viable motion patterns to extract. For this evaluation, the minimum speed is calculated heuristically for a given training dataset \mathcal {X} containing all possible (sub-)trajectories of length M : \begin{align*} s_{\mathrm {min}}=&\frac {\max _{i} \mathrm {m}_{\mathrm {speed}}(i) - \min _{i} \mathrm {m}_{\mathrm {speed}}(i)}{M} \\ \mathrm {with} \\ \mathrm {m}_{\mathrm {speed}}(i)=&\frac {1}{M - 1} \sum _{j=2}^{M} \| \mathbf {x}^{i}_{j} - \mathbf {x}^{i}_{j - 1} \|_{2}\tag{12}\end{align*}


Here, \mathrm {m}_{\mathrm {speed}}(i) denotes the average speed of the i -th trajectory X_{i} \in \mathcal {X} .
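A direct transcription of this filtering heuristic (equation (12)) into NumPy might look as follows; the (N, M, 2) array layout is an assumption.

```python
import numpy as np


def filter_slow_trajectories(trajs):
    """Eq. (12): remove trajectories whose average speed does not exceed s_min."""
    steps = np.linalg.norm(np.diff(trajs, axis=1), axis=-1)     # (N, M-1) per-step offsets
    m_speed = steps.mean(axis=1)                                 # average speed per trajectory
    s_min = (m_speed.max() - m_speed.min()) / trajs.shape[1]     # eq. (12)
    return trajs[m_speed > s_min]
```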

Finally, for each dataset and sequence length, the training of the alignment and LVQ networks are run 10 times. The number of initial prototypes is set to K = 10 for all datasets. If not stated otherwise, the refinement parameters are set to \epsilon _{\mathrm {phase1}} = 0.04 and \epsilon _{\mathrm {phase2}} = 0.9 .

B. Simple Prediction Model Capabilities

The pseudo-entropy estimation approach assumes that the learned linear transformation prediction model \mathcal {M} is capable of modeling basic motion patterns in isolation. In order to verify this, \mathcal {M} has been trained on samples of three clusters corresponding to constant, accelerated and curvilinear motion, taken from the decomposed sdd:hyang04 dataset. Then, \mathcal {M} was tasked to predict the remainder of each cluster prototype given its first half (9 trajectory points in this case), demonstrating its viability. The results are depicted in figure 5, with the ground truth in red and the prediction in blue.

FIGURE 5. Prediction results of a learned linear transformation (blue), given the first 9 points of a smooth prototypical trajectory (red). Each example resembles a basic motion pattern: constant (a.), accelerated (b.) and curvilinear (c.) motion.

C. LVQ Dataset Decomposition: Quality, Consistency and Sensitivity

In this section, the viability of the LVQ model given an aligned dataset is evaluated, as it builds the basis of the proposed approach for estimating the information content. This evaluation employs three exemplary datasets of varying assumed complexity: biwi:eth, crowds:zara01 and sdd:hyang04. The evaluation targets the quality and consistency of the resulting dataset decompositions, as well as the approach's sensitivity to its refinement parameters. Due to the similarities of the datasets used throughout the evaluation, it is assumed that the findings of this section carry over to the other datasets.

Starting with consistency, the 10 available training iterations are examined. The first row of figure 6 depicts the number of components identified by the LVQ model before (blue) and after (orange) refinement. Although there are some small fluctuations, there are no strong deviations that cannot be compensated for by averaging.

FIGURE 6. Number of resulting clusters after multiple training runs before (blue) and after (orange) refinement (1st row) and the influence of different values for the parameters \epsilon _{\mathrm {phase1}} and \epsilon _{\mathrm {phase2}} (2nd and 3rd row) of the LVQ refinement procedure for exemplary datasets.

The influence of the heuristic refinement when varying its parameters \epsilon _{\mathrm {phase1}} and \epsilon _{\mathrm {phase2}} is depicted in the second and third row of figure 6. Looking at the second row, the number of remaining components decreases gradually, as expected, when increasing \epsilon _{\mathrm {phase1}} from an initial 0% up to 10% of required support by the data basis. The third row of figure 6 indicates that \epsilon _{\mathrm {phase2}} , and phase 2 in general, have little impact. When decreasing the minimum fraction of overlapping trajectory points required for being filtered out from 100% to 75%, the final number of components only decreases when there are unnecessary components left or \epsilon _{\mathrm {phase2}} is too restrictive. In fact, considering all datasets and training iterations, the second refinement phase removed at least one component in only 185 of 1823 cases.

Finally, when analyzing the quality of the resulting decomposition, two things have to be considered: the motion patterns represented by each prototype and the significance of each cluster. As the first point is hard to verify quantitatively, a visual inspection is employed. Looking at, for example, figures 2 and 7, the learned prototypes and identified motion patterns appear reasonable for bird's-eye view datasets. This is also discussed briefly in section V-E. For evaluating the quality of the decomposition itself, an approach using a simple prediction model similar to the one described in algorithm 1 can be employed. Models are again trained on increasingly complex combinations of clusters, but the test set errors are now compared to the prediction errors on the remaining clusters. Ideally, a significant difference between these errors justifies the existence of the remaining clusters next to the ones combined in the training dataset. Table 1 lists all mean test set and prediction errors (including standard deviations) for clusters generated for sdd:hyang04. Here, all pairwise differences in each row are significant, thus verifying the learned decomposition. Significance is determined using a t-test for independent samples with a significance level of \alpha = 0.05. Variance homogeneity has been tested using Levene's test and considered when choosing the t-test variant (regular vs. Welch's).
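The significance test used here and in section V-D can be sketched as follows, assuming per-sample error arrays as input.

```python
from scipy import stats


def significant_difference(errs_a, errs_b, alpha=0.05):
    """Two-sample t-test; Levene's test decides between the regular and Welch variant."""
    _, p_levene = stats.levene(errs_a, errs_b)
    equal_var = p_levene >= alpha            # homogeneity not rejected -> regular t-test
    _, p = stats.ttest_ind(errs_a, errs_b, equal_var=equal_var)
    return p < alpha
```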

TABLE 1 Mean Test and Cluster Errors With Standard Deviations for a Simple Prediction Model Trained on Different Cluster Combinations Taken From the Decomposed sdd:hyang04. All Pairwise Differences Per Row are Significant (Using \alpha = 0.05 ). Note: Errors are Unit-Less, as the Data is in Alignment Space
FIGURE 7. Data and aligned data (a. & b.: biwi:eth, c. & d.: sdd:nexus01) with similar sequence lengths (12 & 15), as well as learned prototypes for biwi:eth (e. & f.) and sdd:nexus01 (g. – l.). Having a higher complexity score, sdd:nexus01 consists of a higher variety of motion patterns, including constant velocity (g.), curvilinear motion (h. & i.), acceleration (j.), deceleration (k.) and a mixed pattern (l.). The fractions of samples assigned to each prototype are 93% (a.) and 7% (b.) for biwi:eth and 43.5% (g.), 19.4% (h.), 13.7% (i.), 8.6% (j.), 5.6% (k.) and 9.2% (l.) for sdd:nexus01.

D. Coarse Dataset Ranking

Table 2 lists commonly used datasets, grouped by their average rounded pseudo-entropy values, calculated according to algorithm 1 for each trained pair of alignment and LVQ models. Again, a t-test for independent samples with a significance level of \alpha = 0.05 is used for testing significance, using Levene's test for testing variance homogeneity. Some scenes were filtered out after the setup, as their number of samples did not exceed 150, leading to less stable alignment and LVQ models. The evaluation resulted in a coarse ranking of datasets, with each dataset being assigned to one of four groups of similar pseudo-entropy. It can be seen that the biwi scenes are among the datasets containing the least information, while the most informative scenes are found in the sdd dataset. For demonstration purposes, the datasets and prototypes for biwi:eth and sdd:nexus01 are illustrated in figure 7 for similar sequence lengths (12 and 15). As opposed to biwi:eth, sdd:nexus01, being ranked as containing more information, consists of three times the number of motion patterns, including constant velocity, curvilinear motion, acceleration and deceleration, as well as a mixed motion pattern. This mixed pattern consists of constant velocity, decelerating and accelerating parts, and might occur due to the rather high sequence length with respect to the covered time span.

TABLE 2 Coarse Complexity Ranking of Standard Trajectory Benchmarking Datasets Based on Their Estimated Information Content (Pseudo-Entropy). Higher Pseudo-Entropy Implies Higher Dataset Complexity. The Datasets are Abbreviated in Terms of the Actual Dataset (e.g. SDD), the Name of the Scene Included in the Dataset (e.g. Hyang) and the Recording Number in Case There Exist Multiple Recordings for the Respective Scene

E. Discussion

To conclude, selected aspects of the proposed approach and the employed methodology are discussed first. Then, potential factors for creating a more fine-grained ranking and some insights gained from the analyses are addressed.

1) Approach and Methodology

In the context of dataset complexity assessment, normalizing the velocities using the alignment model should be discussed. Given a prototypical motion pattern, variations in velocity can be generated by scaling it accordingly, thus it is assumed that the pattern itself is the main contributor to a higher dataset entropy. In case the original motion patterns are required, recall that the clustering pipeline corresponds to the encoding part of a well-established auto-encoding architecture, thus the original velocities can be recovered from the dataset representation when employing the full auto-encoding architecture.

Further, a dataset-dependent trajectory length M was chosen instead of a common value for all datasets. This decision is motivated by the large differences in trajectory lengths occurring in existing datasets, as well as unknown ground resolutions. While differences in length may only lead to some datasets becoming very small if M is too high, it is not clear whether two trajectories of the same length from different datasets are comparable, due to unknown ground resolutions. Even if the average offset between subsequent trajectory points is equal, both trajectories might represent different real-world speeds, thus covering a longer or shorter distance in reality, which directly impacts occurring motion patterns and the influence of agent-agent interaction.

The pseudo-entropy aims to reveal the average amount of information contained in a given dataset. Looking at the coarse ranking in section V-D, the results appear reasonable from an experience point of view. For verifying this coarse ranking in an experimental setup using state-of-the-art trajectory prediction models, it would be necessary to put all datasets in a common reference frame and re-sample trajectories to achieve a matching ground resolution, i.e. the distance between subsequent trajectory points of objects moving at the same real-world speed must be equal, for all datasets. This can be an interesting experiment for future work on the topic of dataset complexity analysis.

Lastly, the presented approach only allows for a coarse complexity ranking of given datasets. For achieving a more fine-grained ranking, additional factors need to be considered. Possible factors are discussed next.

2) Potential Factors Affecting Complexity

Multiple factors contributing to dataset complexity could be derived from a learned dataset decomposition. First, the diversity between motion patterns, covered implicitly in the pseudo-entropy, could be considered explicitly. In case of distinguishable patterns, statistical models need to be capable of capturing multiple modes in the data, requiring a higher modeling capacity. Thus, a higher pattern diversity is expected to correspond to a higher dataset complexity. The second factor considers occurring variations of the same pattern. This factor looks promising, as a higher variation implies a higher uncertainty when modeling specific motion patterns, making them harder to capture with statistical models. Lastly, the relevance distribution of identified motion patterns could be considered. This mainly focuses on biases in the data, and thus answers the question of whether there is a prevalent motion pattern or the occurrence of all patterns is equally likely. Less biased datasets can then be considered more complex, as less biased data enables statistical models to capture different patterns in the first place.

Beyond that, agent-agent interaction, as well as environmental cues, can play an integral role in assessing trajectory dataset complexity. Looking at interaction, its influence on the shape of single trajectories can be significant, though this heavily depends on the chosen sample frequency as well as the ground resolution of a given dataset. More specifically, the influence of agent-agent interaction becomes less relevant the sparser a trajectory is sampled, as interactions mainly occur on short time scales. The same applies to the ground resolution, where interactions become less visible when the spatial distance between subsequent trajectory points increases. As a final note, sensor noise must be considered, as there is a risk of interactions being indistinguishable from noise. Regarding environmental cues, positional biases caused for example by junctions can heavily impact the occurrence of specific motion patterns, especially curvilinear motion. This leads to more diverse, and thus potentially more complex, datasets.

3) Interesting Findings

All datasets in this comparison are recorded from a bird's-eye view. Inherent to this perspective is the expectation that there are common motion patterns in all datasets, independent of the time horizon. With a few exceptions, this has been confirmed, in that almost all scenes contain slight variations of at least one basic motion pattern, including constant, accelerated, decelerated and curvilinear motion. Some datasets contain multiple variations of the same basic motion pattern or even mixed motion patterns, enabled by the partly high sequence length M . This can be seen in figure 7 (g. – l.).

Another aspect related to the motion patterns found in the data is that in all datasets, the constant velocity pattern is the dominant, i.e. most supported, motion pattern, covering a large fraction of the entire dataset (see figure 8 for exemplary fractions for low to high complexity scene datasets). This has multiple implications for the evaluation methodology common in current state-of-the-art publications. On the one hand, it goes a long way towards explaining the difficulty of beating a simple linear extrapolation model in the task of human trajectory prediction. This phenomenon could, for example, be observed during the TrajNet challenge [14], where several of the first submissions failed to beat the linear model. On the other hand, this fact indicates that an arbitrarily assembled benchmarking data basis poses the risk of rendering corner cases, i.e. motion patterns different from the constant velocity pattern, statistically irrelevant. This leads to statistical models that are incapable of modeling more complex motion and also struggle to beat a linear model.

FIGURE 8. Fraction of samples assigned to the constant velocity prototype (sample ratio) for a range of low to high complexity scene datasets. The fractions are calculated by taking the mean sample ratio over multiple iterations and sequence lengths. Even with a high number of distinct motion patterns in the data (e.g. sdd:nexus01), constant velocity samples make up 52% of all samples in the dataset.

4) Comparison With OpenTraj

Dataset complexity estimation in the context of human trajectory prediction is still a largely unexplored topic in the literature. Because of that, there is no quantitative approach for measuring and comparing the performance of different approaches to dataset complexity estimation. As a result, this section resorts to a qualitative comparison of the pseudo-entropy approach presented in this paper and the OpenTraj approach, and aims to serve as a verification of results. Using the results provided in [21], a superficial comparison can be made by comparing the coarse dataset ranking based on pseudo-entropy with the clustering and entropy analyses in OpenTraj (figure 3 in [21]). It has to be noted that in this paper every scene of each dataset has been treated independently (e.g. sdd:deathcircle - scene 1), while OpenTraj pooled the results for each individual dataset. Being common to both evaluation sections, the eth hotel, zara and sdd:deathcircle datasets can be used as a sample for the comparison. In the pseudo-entropy based coarse ranking, these datasets represent low, medium and high entropy, respectively. This relative ranking is consistent with the findings provided by OpenTraj, confirming the plausibility of both approaches.

SECTION VI.

Concluding Remarks

In the context of statistical learning, dataset complexity is closely related to the entropy of a given dataset. Thus, an approach for estimating the amount of information contained in trajectory datasets was proposed. The approach relies on a velocity-agnostic dataset representation generated by a spatial alignment followed by vector quantization. Using this approach, a coarse complexity ranking of commonly used benchmarking datasets has been generated. A subsequent discussion addressed the results and methods used, as well as interesting findings based on the analyses, stressing the importance of a well-rounded data basis.

5) Implications for the State of Benchmarking

The approach, methods and results presented in this paper can be valuable in the context of dataset and prediction model analysis, as well as benchmarking in general. First of all, the spatial sequence alignment combined with the LVQ approach can be used for analyzing datasets on different timescales, e.g. for extracting underlying motion patterns. This analysis especially benefits the selection of observation and prediction sequence lengths in benchmarking, as well as the selection of an appropriate prediction model in terms of model capacity. The latter is motivated by the fact that low-capacity models usually suffice for less complex data, which might in turn reduce cases of overfitting and unnecessary computational effort. Further, the resulting dataset decomposition can be used to enhance qualitative analyses of prediction model capabilities in cases where the model might struggle with specific subsets of the data. Lastly, a hierarchy of tasks with increasing difficulty within a benchmark could be built on the dataset decomposition in combination with the presented coarse dataset ranking.

6) Future Research Directions

This paper aims to constitute a step towards thorough dataset complexity analysis. The following paragraphs outline some open research directions that expand on the approach and findings of this paper.

Consider Model Uncertainty. Currently, model uncertainty is only considered implicitly by averaging multiple instances of the presented pipeline when estimating the dataset pseudo-entropy. However, the variance of the ensemble is disregarded in this proof of concept. Thus, it could be interesting to examine whether explicitly incorporating the model's uncertainty about its output could benefit the entropy estimation.

Bird's-eye View. So far, all compared datasets are bird's-eye view datasets. While this view is the common case7 for long-term human trajectory prediction datasets, trajectory complexity analysis is also relevant for other views (e.g. frontal view). Considering the structure of the presented approach, the entropy estimation should work as long as the spatial sequence alignment is applicable to the scenario of interest. In its current state, the alignment model expects complete tracklets as input and thus does not have to cope with missing observations, which arise for example through occlusions when using a frontal view. Following this, the alignment model should be extended accordingly and evaluated on a range of datasets with varying views and object types.

Appendix A

Dataset Details

In order to give more details on the datasets used throughout this paper, table 3 lists the number of samples included in each dataset, the recording conditions (location and acquisition), as well as the average trajectory length (with standard deviation). In accordance with the evaluation section V, the annotation rate has been aligned for all datasets to a fixed rate of 2.5 annotations per second.

TABLE 3 Details of Pedestrian Trajectory Datasets Used in This Paper. As the BIWI, Crowds and Stanford Drone Datasets Consist of Several Scenes and Potentially Multiple Recordings, the (Sub-)Datasets are Denoted as ‘DatasetName’: ‘SceneName’ ‘RecordingNumber’
