I. Introduction
The concept of Big Data refers mainly to data that exceeds the processing capacity of conventional database systems: the data is too big, moves too fast, and/or does not fit into classical database architectures [1]. Addressing these new challenges requires research and innovation on elastic, parallel, and scalable algorithms [2]. Computational modelling and simulation are central to numerous scientific and engineering domains, and they are a prime example of Big Data generation and analysis [3]. Basic simulation data is often 4D (three spatial dimensions plus time), but additional variable types, such as vector or tensor fields, multiple variables, multiple spatial scales, parameter studies, and uncertainty analysis, can further increase the dimensionality. Workflows and systems for interacting with, storing, managing, visualizing, and analysing this data are already at the breaking point [4]. As computations grow in complexity and fidelity and run on larger computers and clusters, the analysis of the data they generate will become more challenging still [5].