I. Introduction
Multimedia has been extensively researched over the past decades, yet all existing forms of multimedia, such as text, audio, images, and videos, can be regarded as external, since they essentially record what we see rather than what we think. Some elements of our thoughts, however, such as imaginations, aspirations, and emotional memories, could be visualized and reproduced as a new form of multimedia. For convenience of presentation, we coin the term “brain-media” for this new form of multimedia, which reflects the internal world inside human brains.

As a matter of fact, brain activities, especially those recorded via electroencephalograms (EEGs), have been studied across a number of areas, including neuroscience, brain science, psychology, and computer science [1]–[4]. Over the past decades, research on understanding brain activities through EEGs evoked by specifically designed stimuli has been active in brain-computer interfacing (BCI) [5]–[7], and studies in both psychology and neuroscience reveal that up to a dozen special categories can be recognized from event-related potentials (ERPs) recorded via EEGs [2], [8]. Further, a range of machine learning models [9]–[11] have been developed to address the problem of multimedia-evoked brain understanding through pattern recognition and classification, and many improved results have been reported in the literature.

In this paper, we push existing EEG-based brain research a step further towards a new concept of multimedia, i.e. brain-media, and hence explore the possibility of enabling people to see what we think rather than what we see. To turn this ambitious notion into a feasible research direction, we propose a GAN-based deep framework to visualize brain activities evoked by natural images.