I. Introduction
Human influence on the climate system is unequivocal, and recent anthropogenic emissions of greenhouse gases are the highest in history [1]. Global CO2 emissions from the energy sector amounted to approximately 33 Gt in 2019, dropping by around 2 Gt in 2020 due to the Covid-19 pandemic [2]. Decarbonization of the energy sector has become a consensus in recent years, especially after the return of the US to the climate negotiations at COP26. However, increasingly stringent carbon-mitigation objectives pose considerable techno-economic challenges: imbalances in technology development are becoming non-negligible. For instance, the cost of carbon capture, utilization and sequestration (CCUS) is still prohibitive in today's market, especially for natural gas carbon capture at around 80–90 USD/tCO2 [3]. The massive penetration of intermittent renewables is considered mandatory for the future energy system [4]; nevertheless, their growth is hindered by the asynchronous development of energy storage technologies, even though their costs are competitive with those of fossil-fueled power plants in certain regions and conditions. One example is the cancellation of planned but uncompleted wind power projects in China, owing to concerns about short-term energy waste caused by over-construction [5]. Another case has occurred in recent decades in US cities: as temperatures rise rapidly, the frequency of regional electric grid failures has also risen, resulting in a growing number of blackouts during periods of extreme heat [6]. These two examples can be regarded as direct consequences of inadequate energy planning, or of planning that is not flexible enough to be adjusted once energy pathways diverge from the original plan due to exogenous factors (such as oil price surges) or endogenous causes (e.g., increasing energy demand).