I. Introduction
EVOLUTIONARY algorithms (EAs) are bio-inspired stochastic optimization techniques [1], [2] characterized by a population of virtual agents or individuals. Their remarkable feature is that an implicitly parallel search engine, capable of tackling a variety of complex optimization problems, emerges from a collection of simple rules mimicking Darwinian evolution. Although significant research has been carried out over several years toward the advancement of EAs, we find that the majority of these works are limited to solving a single optimization problem (usually belonging to a specific domain) at a time. Despite the known power of implicit parallelism [3], little effort has been made to explore the implications of evolutionary multitasking, i.e., solving multiple optimization problems concurrently using a single population of evolving individuals [4].

It is contended that the potential for multitasking is in fact a feature exclusive to population-based search algorithms, one that undeniably sets them apart from their classical mathematical counterparts. Moreover, the benefits of appropriately harnessing this potential can be numerous. From a theoretical point of view, it may be possible to implicitly exploit the underlying synergies between the objective function landscapes of distinct optimization tasks, thereby accelerating convergence toward the global optima of multiple tasks at once. In fact, in the long run, an ideal evolutionary multitasking engine is envisioned to be a complex adaptive system whose performance is at least comparable to that of present-day serial evolutionary optimizers.
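To make the notion of a single population addressing several tasks concrete, the following is a minimal sketch, not the framework developed in this paper: one population evolves in a unified search space [0, 1]^D while each individual is evaluated on only one of two toy tasks, and offspring may blend parents assigned to different tasks. The task functions, parameter settings, skill assignment, and replacement rule are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
D = 10  # dimensionality of the unified search space

def sphere(x):       # toy task 1: shifted sphere, decoded from [0, 1]^D to [-5, 5]^D
    z = 5.0 * (2.0 * x - 1.0)
    return float(np.sum(z ** 2))

def rastrigin(x):    # toy task 2: Rastrigin, decoded from [0, 1]^D to [-5.12, 5.12]^D
    z = 5.12 * (2.0 * x - 1.0)
    return float(np.sum(z ** 2 - 10.0 * np.cos(2.0 * np.pi * z) + 10.0))

tasks = [sphere, rastrigin]
POP, GENS = 100, 2000

pop = rng.random((POP, D))                    # one unified population for both tasks
skill = rng.integers(0, len(tasks), POP)      # the single task each individual is evaluated on
fit = np.array([tasks[k](x) for x, k in zip(pop, skill)])

for _ in range(GENS):
    # Blend crossover between two random parents, possibly assigned to different
    # tasks, so genetic material flows implicitly across tasks.
    i, j = rng.integers(0, POP, 2)
    alpha = rng.random(D)
    child = np.clip(alpha * pop[i] + (1.0 - alpha) * pop[j]
                    + 0.02 * rng.standard_normal(D), 0.0, 1.0)
    k = skill[i] if rng.random() < 0.5 else skill[j]   # child inherits one parent's task
    f = tasks[k](child)
    # Replace the worst individual assigned to the same task if the child is better.
    idx = np.where(skill == k)[0]
    worst = idx[np.argmax(fit[idx])]
    if f < fit[worst]:
        pop[worst], fit[worst] = child, f

for k, name in enumerate(["sphere", "rastrigin"]):
    print(f"best {name} objective: {fit[skill == k].min():.4f}")

The point of the sketch is only to illustrate that a single gene pool can serve several objective functions at once; the design choices above (how tasks are assigned, how offspring are evaluated, and how knowledge transfer is controlled) are precisely the questions that a principled evolutionary multitasking framework must address.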