Introduction
The earliest industrial electrical power systems were deployed as direct current, and eventually alternating current, microgrids [1], [2]. Over time, advances in technology, economies of scale, and regulatory structures led to the modern interconnected bulk power system (BPS), where the majority of generation is in the form of large central units [3], [4]. However, over the past 30 years, there has been a resurging interest in microgrid deployments because of increasing deployment of distributed energy resources (DERs) and continuing advancements in technology [5].
While microgrids have been the standard of service in remote and islanded areas such as Bethel, Alaska and the Hawaiian island of Moloka‘i [6], modern BPSs provide service in far more regions. However, the role of microgrids in regions with a BPS is expanding. This is due in part to the fact that at the end of the 20th century microgrids began to be seen as a potential technical solution to the integration of DERs. Specifically, the microgrid can be used as a point of aggregation for collections of DERs and end-use loads across the microgrid point of common coupling (PCC) [7]. This also includes the use of the microgrid to support critical end-use loads during an outage of the BPS.
Following over a decade of work on how stand-alone microgrids can aggregate DERs and support critical end-use loads, research began to examine how microgrids can operate as a resiliency resource, supporting end-use loads outside of the PCC [8]. In these scenarios, individual microgrids could support BPS operations and restoration in addition to traditional distribution system operations. Work in this area has included providing ancillary services, voltage support, and microgrids serving as active agents in power system restoration. In all these operations, each of the microgrids was treated as an agent under the control of a central authority and there was no bilateral coordination among microgrids. Additionally, it was typically assumed that the operational characteristics of the microgrids were similar.
As the number of microgrids continues to increase, the opportunity exists to coordinate the operations of networks of microgrids [9], [10]. The challenge with the control architectures of previous work was that they assumed a uniform operational environment, often with strong centralized control and similar microgrid characteristics [9]. This works for a system where the operating utility owns all of the microgrids, but it would be difficult to implement in a mixed ownership environment. In addition to the challenges of a mixed ownership environment where operational goals can vary, there is an increasing diversity in the type of microgrids being deployed. While the majority of the early microgrids were supported by diesel and natural gas rotating machines, newer microgrids are moving increasingly towards power electronics interfaced generation. The mix of rotating machines, grid-forming inverters, and grid-following inverters presents challenges in supporting the switching operations necessary to support networked microgrid operations [11], [12].
While there are benefits to centralized operations, there are limitations with respect to scalability, maintainability, resiliency, and flexibility. As an alternative to centralized control, the work in [5] presented a distributed control architecture using the Open Field Message Bus (OpenFMB) [13]. The benefit of the presented distributed architecture is that it enables peer-to-peer communication at the application layer, which can be implemented on commercial-off-the-shelf (COTS) intelligent electronic devices (IEDs). While the presented architecture enables peer-to-peer communications at the application layer, it does not pre-define any specific control actions. For example, while the OpenFMB architecture is distributed, it can still be used for the implementation of a centralized control system. However, the real benefit is to implement distributed control algorithms using a distributed architecture.
In comparison with centralized control, the proposed distributed control framework requires an additional investment in a communication network, since the existing supervisory control and data acquisition (SCADA) network is not exploited. Also, the performance of centralized control is better than that of distributed control under normal conditions, since more information about the system is available to the centralized controller. However, in severe and adversarial conditions, when the centralized control is unavailable or under attack, the proposed distributed control can safeguard the system and its loads. The cost of failure of the system and loads in these severe conditions can be much higher than the investment cost of distributed control.
A related work using distributed control for networked microgrid interconnection/disconnection was presented in [14], in which an average consensus algorithm is used to allocate the power support from the normally-operating microgrids to the on-emergency microgrid. Also, the microgrids are interconnected to a common point, i.e., a fixed topology is used for interconnection. In this work, a cloture vote consensus algorithm is used to determine the best interconnection of microgrids through an iterative process [15]. Therefore, the purpose and algorithm for microgrid interconnection are different from those in [14], and the topology for microgrid interconnection is not fixed as in [14]. Instead, the topology for microgrid interconnection is determined through an iterative process.
This paper presents a framework for how distributed controls can be implemented using an OpenFMB architecture to support networked microgrid operations. In particular, consensus algorithms are implemented to enable distributed networks of microgrids to coordinate their operations and achieve global objectives, without the need for centralized control. While this framework can be used for a variety of operational goals, this paper focuses on microgrid self-assembly in support of critical end-use loads when the BPS is not available. The use of consensus algorithms addresses the complex operational environment of networked microgrids, where there is mixed ownership, mixed control systems, and mixed microgrid characteristics.
The rest of this paper is organized as follows. Section II discusses the use of consensus algorithms and Section III provides the operations of networked microgrids for self-assembly. Section IV shows the implementation of control architecture in hardware and software as being developed for deployment at the Electric Power Board of Chattanooga (EPB). Section V presents a worked example of how the consensus algorithms achieve global objectives and Section VI contains a summary and the concluding comments.
Consensus Algorithms for Distributed Decision Making
The majority of modern electric power systems rely on centralized control systems that enable utilities to provide safe, reliable, and cost effective electricity to their end-use customers [3], [4]. While centralized control has historically served utilities well, it faces a number of challenges in a changing operational environment, specifically the increasing number of extreme weather events and the increasing deployment of DERs.
A centralized control system supports optimized operations because of the centralization of data, but it also represents a single point of failure. During an extreme event the loss of the control system, or communications with it, can result in degraded operation or a loss of electric service. Additionally, there are practical limitations to the number of DERs that a centralized control system can actively manage. These are just two of the reasons that utilities are beginning to transition away from fully centralized systems, to systems that have some level of distributed control.
Distributed controls offer a range of capabilities for reducing single points of failure and managing large numbers of distributed devices. However, there are also challenges associated with the practical and effective implementation of distributed controls on electric power systems. Because many of the DERs being deployed today are not utility-owned assets, it is necessary to develop distributed controls that can coordinate the large number of non-utility assets with the utility assets. Specifically, this means moving beyond the simple ability of a centralized distributed energy resource management system (DERMS) to issue dispatch signals. One option for coordinating these assets is the use of a consensus algorithm, which is a process for achieving agreement on a single data value among distributed processes or systems.
While there are a wide range of consensus algorithms, two classes that have been used for power systems applications are: 1) agreement/consensus protocols and 2) decentralized optimization/consensus algorithms [16]–[21]. When applied to electric power systems, the benefits of consensus algorithms over centralized systems can include:
Removal of single point(s) of failure: while centralized systems such as a DERMS or a distribution management system (DMS) have significant operational benefits with respect to optimal operation of the system, they represent single points of failure.
Scalability: With the increasing deployment of distributed devices, the scalability of centralized control architectures becomes a challenge [22], [23]. By enabling a level of distributed coordination, increased numbers of devices can be deployed.
Reduced data concentration: Instead of needing to support the centralized collection of all data, it is possible for decisions to be made remotely and only the necessary derivative data sent to the central control, instead of all raw data.
Other prominent features of consensus algorithms include, but are not limited to, Byzantine fault tolerance, low computing requirements, and asynchronous and practical (time) convergence. Consensus algorithms leverage the communication interconnectedness of agents updating local values based on information from neighboring nodes, known as information fusion [24]. Algorithms such as averaging [16] and binary consensus [17] have been used for drone formation control and sensor anomaly detection and are starting to be applied to power systems. For example, in [18] average-based consensus protocols were used to determine droop control setpoints to achieve a global objective of balanced power sharing. Decentralized optimization algorithms also leverage local exchange and information fusion, but to the end that each agent updates its local solution, often a decentralized subproblem of a centralized parallel problem. Decentralized variants of common parallel algorithms, such as the alternating direction method of multipliers (ADMM) [25], have begun to see power systems applications in optimal power flow [19], economic dispatch [20], and load or electric vehicle charging scheduling [21].
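As a concrete illustration of the averaging class of algorithms, the following sketch shows three agents on a line communication graph converging to the average of their initial values. The names, the Laplacian-style update, and the step size are assumptions for illustration and are not taken from the cited works.

```python
# Illustrative sketch of averaging consensus: each agent nudges its value
# toward its neighbors' values (a Laplacian-style update). For an
# undirected graph and a small enough step, all agents converge to the
# average of the initial values. Names and step size are assumptions.
def run_average_consensus(values, neighbors, step=0.3, iterations=200):
    """values: {agent: float}; neighbors: {agent: list of agent ids}."""
    state = dict(values)
    for _ in range(iterations):
        state = {a: v + step * sum(state[n] - v for n in neighbors[a])
                 for a, v in state.items()}
    return state

# Three hypothetical droop setpoints on a line communication graph A-B-C.
values = {"A": 0.9, "B": 1.0, "C": 1.1}
neighbors = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}
result = run_average_consensus(values, neighbors)
# Every agent converges to the global average, 1.0.
```

Because the Laplacian update preserves the sum of the values on an undirected graph, each agent reaches the true network-wide average using only neighbor-to-neighbor exchanges.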
The complexity of the use-case of interest and the availability of information that can be exchanged between the microgrid controller agents drive which consensus algorithms are suitable. For self-assembly, it is assumed that the participating microgrids seek to achieve the same global objective: for this paper, increasing the energy supplied to critical end-use loads.
This scenario can be further complicated by considering the situation where there is still the community objective of increasing critical load run-time; however, some microgrid agents act as “greedy” actors, prioritizing their own local objectives instead. Similarly, microgrids may provide biased or inaccurate information unintentionally, but an appropriate collaborative algorithm should still be able to provide a solution in the presence of this potential unreliability. In either case, the determination of global actions involves the synthesis of the local calculations, the determination of global actions from the synthesized data, and communication of the validated actions.
Networked Microgrid Self-Assembly Operations
Despite the size and complexity of modern BPSs, outages still occur that affect millions of customers in extreme cases [26]–[29]. Historically, only the most critical end-use loads could justify the expense of providing dedicated backup generation to provide power during an outage. This typically includes loads such as hospitals, wastewater treatment facilities, and police stations, but is expanding with increased DER deployments.
When DERs are interconnected with a microgrid controller to form a microgrid, it has been well documented that they can support end-use loads when the BPS is not available [30], [31]. By combining multiple generating units in a microgrid it has been shown that there are increases in efficiencies when compared to stand-alone operation [32]. An extension of the single stand-alone microgrid is the networking of multiple microgrids to further increase operational flexibility. When implemented, networking microgrids have historically been envisioned with nested or adjacent microgrids with a strong central control [9], [11], [33].
With the increasing deployment of DERs, the associated deployment of microgrids is also increasing, to the point that there are now instances of multiple microgrids on the same, or adjacent distribution circuits. With the proper planning and optional coordination this presents the potential for networking individual microgrids across primary distribution systems to support critical end-use loads [34], [35].
A. Networked Microgrid Operations
While microgrids can be networked at lower voltages, e.g., 480V, for utility-scale deployments they typically are connected at primary distribution voltage levels, between 4.2 kV and 50.0 kV depending on the global region. When connected at primary distribution level voltages, networked microgrids provide the technical potential for increased operational flexibility. The benefits of networked microgrid operations can include, but are not limited to, increased efficiency, reduced emissions, increased run-time for critical end-use loads, and an increased amount of load that can be supported [32]. With regards to supporting end-use loads during outages, networked microgrid operations share some characteristics with existing self-healing systems, but there are also significant differences.
B. Traditional Self-Healing Systems
The deployment of distribution automation systems is increasing with fault location, isolation, and service restoration (FLISR) being one of the most common [36], [37]. The deployment of FLISR, also referred to as self-healing systems, allows for the automated reconfiguration of a distribution system to maximize the amount of load that can be supplied after an outage event. These systems can be stand-alone applications or integrated as part of a DMS and require at least one substation source. Operational systems typically only reconfigure the system without dispatching DERs. There are academic papers that have examined DERs and microgrids as active elements of a FLISR scheme [38], but they still require a centralized control and at least one energized substation.
C. Microgrid Self-Assembly
The self-assembly of microgrids is a combination of microgrid operations and distribution system reconfiguration, similar to FLISR, but without the need for centralized control. The goal of self-assembly is for microgrids to distributedly determine which microgrids should electrically interconnect, or potentially separate, to achieve operational objectives. While there is a wide range of possible operational objectives, two of the most common are increased run-time of critical end-use loads and/or increased redundancy. For this process to work, it is necessary for microgrids to have a minimum level of agreement on what the objectives are. It would not be practical for microgrids with conflicting operational objectives to coordinate because they would not have a common basis for agreement. However, it would be possible for groups of microgrids with different objectives to form sub-groups with common operating objectives. This would be a more complicated process, and the work in this paper focuses on the first case, where the microgrids have a single common objective.
It is the distributed operation of microgrid controllers to support end-use load that differentiates self-assembly from the traditional reconfiguration problem [38], [39]. In addition to supporting end-use loads within the microgrid PCCs, microgrids may support end-use loads between microgrid PCCs as a consequence of the switching operations to interconnect microgrids.
A six-step process, as shown in Figure 1, is presented for how microgrid self-assembly can be implemented without centralized control. The objective of the self-assembly process is to determine which microgrids should interconnect, or separate, to increase the value of a calculated objective function, such as maximizing the energy supplied to critical end-use loads.
1) Self-Assembly Step 1: Data Exchange I
To be independent of centralized control, it is necessary for microgrids to exchange information distributedly. While this could be accomplished using a SCADA system if it were properly configured, most SCADA systems use a centralized architecture. To avoid centralization, this work uses the distributed reference architecture provided by OpenFMB [13]. OpenFMB can implement a range of publish and subscribe (pub/sub) protocols such as the Data Distribution Service (DDS), NATS, and message queuing telemetry transport (MQTT) [5]. Containerized applications at each device support connections to device hardware using protocols such as distributed network protocol 3 (DNP-3), American National Standards Institute (ANSI) C12, and Modbus. With this reference architecture, it is possible for each device to exchange information peer-to-peer at the application layer [5].
The specific data to be exchanged can vary, but it must be sufficient for the local calculations to be conducted in Step 2. Common data to exchange includes voltage at the PCC, installed generating capacity (active and reactive), committed generating capacity, end-use load estimates, connectivity and/or planning model infrastructure information (i.e., the network model), and recloser/breaker status values. An important aspect of mixed ownership is that some microgrids may not share all available information because of business and/or information security concerns. As such, each microgrid controller can select what data it shares with the other microgrids [40]. Byzantine (“greedy”) actors may even act in opposition to the goals of other participants. Although restricted data sharing may lead to sub-optimal calculations, it is an essential element of operating in a mixed ownership environment.
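As an illustration of the kind of payload a microgrid might publish in Step 1, the sketch below defines a hypothetical message structure with optional fields, so a controller can simply omit data it chooses not to share. The field names are invented for illustration; the actual OpenFMB profiles define their own schemas.

```python
# Hypothetical sketch of a Step 1 data-exchange payload published on the
# pub/sub bus. Field names are invented for illustration; actual OpenFMB
# profiles define their own schemas.
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class MicrogridStatus:
    microgrid_id: str
    pcc_voltage_kv: float
    installed_kw: float
    installed_kvar: float
    committed_kw: Optional[float] = None      # may be withheld
    load_estimate_kw: Optional[float] = None  # may be withheld
    breaker_closed: bool = False

# A microgrid withholding its committed capacity simply publishes the
# message with that field left unset (None).
msg = MicrogridStatus("MG-A", 12.47, 500.0, 250.0, load_estimate_kw=320.0)
payload = json.dumps(asdict(msg))
decoded = json.loads(payload)
```

Making withheld fields explicit in the schema lets subscribing controllers fall back to conservative defaults instead of failing when a peer limits its data sharing.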
2) Self-Assembly Step 2: Local Calculations I
For the problem of self-assembly, the fundamental question is which microgrids should electrically interconnect, or separate, to achieve the operational objectives. Because self-assembly is accomplished in a distributed environment it is not possible to reduce this to a centralized optimization. Additionally, the mixed-ownership environment of networked microgrid operations means that each microgrid will have varying levels of measurement accuracy, as well as different methods/algorithms for calculating which microgrids should interconnect. In Step 2, each microgrid first calculates an objective function based on shared information from Step 1, as well as local calculations. The specific objective function can be as simple or complex as desired, as long as each microgrid is calculating the same value. For example, a simple objective function would be to maximize the amount of energy delivered to critical end-use loads. A generalized example of such an objective function is shown in (1).\begin{equation*} J\left ({x }\right) =\sum \limits _{i=1}^{j} \int _{0}^{T} { S\left ({{LD}_{i} }\right)\,dt}\tag{1}\end{equation*}
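A minimal discrete-time sketch of the energy-served objective in (1), assuming the integral is approximated as a Riemann sum over per-load served-power profiles; the function and variable names are illustrative.

```python
# Minimal discrete-time sketch of the objective in (1): total energy
# delivered to critical loads, approximated as a Riemann sum over
# per-load served-power profiles. Names are illustrative.
def energy_served(served_load_profiles, dt_hours):
    """served_load_profiles: list of per-load served-power series (kW)."""
    return sum(sum(profile) * dt_hours for profile in served_load_profiles)

# Two critical loads served over a 3-hour horizon at 1-hour resolution.
profiles = [[100.0, 100.0, 80.0], [50.0, 50.0, 50.0]]
total_kwh = energy_served(profiles, dt_hours=1.0)  # 430.0 kWh
```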
A key aspect of the presented method is that each microgrid can use its own method for determining the objective function. This is key to a mixed-ownership environment where microgrid controllers may have different levels of data available because of different levels of participation. Additionally, some microgrid controllers may use more complicated calculations while others use simple rank-ordered lists. By not predefining algorithms the architecture allows for a wider range of controllers to participate.
The objective function must be calculated for the microgrid in stand-alone operation, for the microgrid when interconnected with other microgrids, and for combinations of other microgrids that do not include itself. The objective function value for any two microgrids $n$ and $m$, as calculated by microgrid $i$, is defined in (2):\begin{align*} R_{n,m}^{i} =\begin{cases} h_{i}\left ({S_{i} }\right) & \text {if } i=n=m \\ f_{n,m}^{i}\left ({S_{i},D_{n},D_{m} }\right) & \text {if } n\in E_{i} \text { and } m\in E_{i}, \end{cases} \quad \forall i\in L\subseteq G\tag{2}\end{align*}
Using the notation in (2), and the appropriate objective function that calculates total energy served (1), each microgrid calculates the value of the objective function for stand-alone operation and for each pair of possible microgrid interconnections, including pair-wise connections that it is not included in; specifically,
The values in Table 1 are provided as representative for the consensus algorithm process and are provided without additional details. A more complicated objective function could account for variations over time, but this is not included here since the focus of this paper is the implementation of a consensus process.
3) Self-Assembly Step 3: Data Exchange II
Once the individual microgrid controllers have calculated the values of their objective functions they have an estimate of how much energy can be supplied to critical end-use loads when they operate independently and when they are connected to another. A second data exchange is conducted to exchange objective function values for independent stand-alone operation. Specifically,
4) Self-Assembly Step 4: Local Calculations II
Once the values of the local objective functions are calculated and the stand-alone values are exchanged, it is necessary to calculate the differential values between stand-alone operation and interconnected operation. For example, the value of interest is not just how much load can be supplied when two microgrids are interconnected; it is how much more load can be supplied compared to the two microgrids operating independently. This local calculation is shown in (3):\begin{equation*} {\Delta R}_{n,m}^{i} = R_{n,m}^{i} - R_{n,n}^{i} - R_{m,m}^{i},\quad \forall n,m\in E_{i},~\forall i\in L\subseteq G\tag{3}\end{equation*}
Because the calculations of (2) and (3) are performed locally, possibly using different equations for the objective function (1), it is possible, and even probable, that microgrid controllers will calculate different values of the objective function. This can be due to differences in measured values and/or differences in how the value of the objective function is calculated. Regardless, the differences will be addressed using the consensus algorithm in Step 5.
Using (3), microgrid $i$ sets its initial preferred value, $x_{i}(0)$, to the largest objective function differential, as shown in (4):\begin{equation*} x_{i}\left ({0 }\right) =\max \limits _{n,m\in \{i\}\cup E_{i}}\left \{{{\Delta R}_{n,m}^{i} }\right \},\quad \forall i\in L\subseteq G\tag{4}\end{equation*}
Using (4) each microgrid determines the pair-wise interconnection that would yield the highest value of the objective function differential. The calculation made at each individual microgrid controller is a local maximum. However, due to the local nature of the calculation, it is necessary to compare this value with the values developed by other microgrid controllers, and to have a method to address outliers that occur due to error or Byzantine behavior.
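The differential of (3) and the local maximum of (4) can be sketched as follows, using an illustrative run-time table (in hours) for three hypothetical microgrids; the table values are invented for the example.

```python
# Sketch of the local calculations: given the pairwise objective values
# R[(n, m)] computed by one microgrid (illustrative hours), form the
# differentials of (3) and select the initial preferred value of (4).
def local_preference(R):
    """R: {(n, m): hours}, with (n, n) the stand-alone value for n."""
    deltas = {(n, m): r - R[(n, n)] - R[(m, m)]      # eq. (3)
              for (n, m), r in R.items() if n != m}
    best_pair = max(deltas, key=deltas.get)          # eq. (4)
    return best_pair, deltas[best_pair]

# Stand-alone run-times on the diagonal, paired run-times off-diagonal.
R = {("A", "A"): 10.0, ("B", "B"): 8.0, ("C", "C"): 6.0,
     ("A", "B"): 22.0, ("A", "C"): 15.0, ("B", "C"): 12.0}
pair, gain = local_preference(R)  # ("A", "B"), differential of 4.0 hours
```

Note that pairing A and C would actually lose an hour relative to stand-alone operation (15 − 10 − 6 = −1), which is exactly why the differential, not the raw paired value, drives the preference.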
5) Self-Assembly Step 5: Global Maximum
At this point, each of the microgrids will have a list of objective function calculations, including the differentials from (3), and an estimate of the local maximum from (4). However, with only local calculations there is the possibility that outliers and/or Byzantine behavior will mask the optimum value. It is in Step 5 that the microgrids coordinate their local calculations from Step 4 into a single global value, i.e., the consensus as to which two microgrids should interconnect, or separate, at this iteration. While the global maximum can be found in a centralized manner by gathering the distributed values at one node and applying a maximization function, a distributed approach provides many benefits. One such simple distributed approach is a ring-reduce algorithm [41]. While the ring-reduce approach allows the local maxima to be compared to determine a global maximum, it does not provide for the identification and elimination of outliers. For example, a single error in any calculation of (3) or (4) could yield an incorrectly large local value of (4) for a microgrid controller, which in the ring-reduce algorithm would be determined as the global maximum.
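A ring-reduce maximum can be sketched as below: a token circulates the ring carrying the running (node, value) pair, so after one circulation it holds the global maximum. The node names are illustrative.

```python
# Sketch of a ring-reduce maximum: a token circulates the ring and keeps
# the larger (node_id, value) pair at each hop; after n-1 hops it holds
# the global maximum, which one more circulation can broadcast.
def ring_reduce_max(local_values):
    """local_values: list of (node_id, value), ordered around the ring."""
    best = local_values[0]
    for node_id, value in local_values[1:]:   # token passed hop by hop
        if value > best[1]:
            best = (node_id, value)
    return best

nodes = [("MG-A", 4.0), ("MG-B", 7.5), ("MG-C", -1.0)]
winner = ring_reduce_max(nodes)  # ("MG-B", 7.5)
```

The token keeps whatever value it is handed, so a single erroneous or malicious local maximum would survive every hop, which is precisely the weakness the cloture vote addresses.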
To implement a consensus that is able to reject outlier data, a Cloture Votes approach is used [15]. The cloture approach is based on a parliamentary process in which it is desirable to achieve consensus within a fixed number of rounds. Specifically, the cloture approach of [15] will achieve a consensus in
The implementation of the cloture vote, also referred to as the Phase King Algorithm, follows the process presented in [15]. The algorithm is executed in a two-round-per-phase approach with a maximum number of phases equal to
Round 1:

Step 1: Every processor, $p_{i}$, broadcasts its preferred value, $v_{i}$, to all other processors.

Step 2: Let $a$ be the most frequently received value within a processor, as defined by a simple majority, including $v_{i}$. Set $a$ to an arbitrary value in case of a tie.

Step 3: Set $v_{i}=a$.

Round 2:

Step 1: The King Processor, $p_{k}$, broadcasts its preferred value, $v_{k}$, to all other processors.

Step 2: For each processor $p_{i}$: if there was not a simple majority in Round 1, set $v_{i}=v_{k}$; otherwise, retain the preferred value $v_{i}$.

Step 3: If no processor updates its value in Step 2, a consensus has been reached. If any processor updates its value in Step 2, the process moves to the next phase, restarting with Round 1.
The two-round process continues until, at the end of the second round, no values of
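The two-round, phased cloture vote described above can be simulated as in the following sketch. This is a simplified synchronous illustration with invented names and a crude Byzantine-sender model, not the full protocol of [15].

```python
# Simplified synchronous simulation of the two-round cloture vote
# (Phase King) process. Processor indices double as the king rotation;
# processors in `byzantine` send receiver-dependent garbage. This is an
# illustrative sketch, not the full protocol of [15].
def phase_king(initial, byzantine=frozenset()):
    """initial: list of binary preferences; byzantine: faulty indices."""
    n = len(initial)
    f = len(byzantine)
    v = list(initial)
    for phase in range(f + 1):                  # f + 1 phases suffice
        snapshot = list(v)
        majority = [False] * n
        # Round 1: all-to-all broadcast; each processor adopts its most
        # frequently received value a (ties broken arbitrarily by max).
        for i in range(n):
            received = [(i + j) % 2 if j in byzantine else snapshot[j]
                        for j in range(n)]
            counts = {x: received.count(x) for x in set(received)}
            a = max(counts, key=counts.get)
            majority[i] = counts[a] > n // 2
            v[i] = a
        # Round 2: the phase's king broadcasts its value; processors
        # without a Round 1 simple majority adopt it.
        king = phase % n
        king_value = v[king] if king not in byzantine else phase % 2
        changed = False
        for i in range(n):
            if i in byzantine:
                continue
            if not majority[i] and v[i] != king_value:
                v[i] = king_value
                changed = True
        if not changed:
            break                               # no updates: consensus
    return v

# Five processors, one Byzantine (index 4); the four honest processors
# agree on the honest majority value.
final = phase_king([1, 1, 0, 1, 0], byzantine=frozenset({4}))
honest = final[:4]
```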
6) Self-Assembly Step 6: Execute Switching
Once a consensus has been reached in Step 5, the necessary switching operations are executed to synchronize, parallel, and interconnect the two microgrids. Since this involves operations on the primary distribution system, it would be expected that utility-owned microgrids, or other utility assets, would issue the switching commands, since non-utility microgrids typically do not have operational control of utility assets. While not considered in this example, in an industrial setting it would be possible for the customer to own, and operate, the system equipment at the primary distribution voltage level, allowing any microgrid to issue switching commands. Once the switching commands have been issued, the process repeats itself.
The above Steps 1–5 are used to determine the best pair of microgrids to interconnect at each iteration. This process is performed continually over time to determine the best interconnection of multiple microgrids through the following iterations. For iteration $k$:
Assume that a set of set(s), denoted as $L_{k}$, of $N$ microgrids was determined for interconnection in the previous iteration ($k-1$). That is, each element of $L_{k}$ is a set of two or more interconnected microgrids. Note that $L_{k}$ can be an empty set, e.g., the situation of $t=0$ in Figure 5. In addition, a set $\Upsilon_{k}=\zeta\left \{{L_{k}}\right \}$ is defined, where $\zeta\left \{{\cdot}\right \}$ denotes the operator that extracts the elements of the set(s) of $L_{k}$ to form a new set without duplications.

Objective function: each microgrid calculates $\Delta R_{n,m}^{i}$ with the understanding that one set of microgrids, denoted as $I_{k}$, is already interconnected. Note that $I_{k}\in L_{k}$. In particular, if $n,m\in I_{k}$, then $\Delta R_{n,m}^{i}$ is the global maximum calculated in iteration ($k-1$) corresponding with the set $I_{k}$. If $n\in I_{k}$ but $m\notin \Upsilon_{k}$, then $\Delta R_{n,m}^{i}=R_{n,m}^{i}-R_{n,n}^{i}-R_{m,m}^{i}$, where $R_{n,m}^{i}$ is the supporting time when the individual microgrid $m$ is added into the interconnection $I_{k}$, $R_{n,n}^{i}$ is the supporting time of interconnection $I_{k}$, and $R_{m,m}^{i}$ is calculated as normal. If $n\notin \Upsilon_{k}$ and $m\notin \Upsilon_{k}$, then $\Delta R_{n,m}^{i}$ is calculated as normal. If $n\in I_{k}$, $m\in \Upsilon_{k}$, but $m\notin I_{k}$ (e.g., $m\in J_{k}\in L_{k}$), then $\Delta R_{n,m}^{i}=R_{n,m}^{i}-R_{n,n}^{i}-R_{m,m}^{i}$, where $R_{n,m}^{i}$ is the supporting time when the interconnection $J_{k}$ is merged into the interconnection $I_{k}$, and $R_{n,n}^{i}$ and $R_{m,m}^{i}$ are the supporting times of interconnections $I_{k}$ and $J_{k}$, respectively.

Interconnection determination: Note that the set $I_{k}$ supports the load for the longest time compared to all other sets of $h$ microgrids with $h\le k$. As such, if the new global maximum is positive, then the new global maximum value obtained at iteration $k$ can only correspond to the interconnection of $I_{k}$ with another microgrid. Also, the interconnection of $I_{k}$ and this microgrid can support load for the longest time compared to any interconnection of $h$ microgrids with $h\le k+1$, and hence, the interconnection of $I_{k}$ and this microgrid should be formed. If the new global maximum is negative, then there is no interconnection of $I_{k}$ and another microgrid that can support load for a longer time than $I_{k}$, and hence, no new interconnection of ($k+1$) microgrids should be formed at this time.
[Figure: Architecture for communications and control to support the implementation of consensus algorithms.]

[Figure: Illustration of the cloture voting implementation in a single phase: (a) initial values and data exchange; (b) formation of the local list and selection of the most frequent value $a$; (c) King value sent to all processors; (d) comparison of the King value and local values of $a$, reaching consensus on the solution.]

[Figure: Conceptual sequence diagram of additional iterations of the process in Figure 1, resulting in various microgrids interconnecting and separating over time.]
The process in Figure 1 is iterative because a single pass-through only determines the first pair of microgrids to be interconnected or separated. It is necessary to continue the process to determine if there are other interconnections that should be made. Additionally, over time the resources and loads of each microgrid will change, which will change the results. As a result, over time different microgrids will interconnect, and separate, as dictated by the global objectives. In this way, the microgrids continually evaluate the options for networked microgrid operations; it is not just a single static decision.
7) Robustness to Outliers
In the presence of Byzantine (“greedy”) actors or significant noise, the method as described in [15] is robust to a known level. Specifically,
If additional capabilities to reject Byzantine actors are necessary, more robust data synthesis or fusion methods can be incorporated in Step 4, further leveraging the second data exchange of Step 3. Some of the most common methods are based on probabilistic identification and down-weighting of biased or outlier objective function value estimates when determining a local maximum or reaching a consensus. In the simpler case of unbiased, but noisy, estimates a weighted mean can be used. Greedy actors, however, may provide biased estimates drawn from an unknown distribution. Such a case requires a more complex approach, such as Bayesian Markov chain Monte Carlo mixture-model probabilistic outlier rejection [42]. All such methods require some means of associating an uncertainty with each microgrid controller estimate. This can be based on a controller's confidence level mapped to a noise distribution, or on a variance estimated empirically as the process in Figure 1 is repeated. The overall self-assembly method proposed is flexible enough to incorporate such techniques in the calculation and consensus steps to leverage additional information and provide more robust results. Future work will incorporate the additional complexity needed to address Byzantine actors and noisy raw data.
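As a concrete illustration of the simpler fusion option, the following sketch discards estimates that deviate strongly from the median using a median-absolute-deviation (MAD) rule. The function name, cutoff, and reported values are illustrative assumptions, not part of the cited mixture-model method.

```python
# Illustrative robust fusion of controller estimates: down-weight (here,
# discard) reports that deviate strongly from the median. This is a simpler
# stand-in for the Bayesian mixture-model approach cited in the text.
import statistics

def robust_fused_estimate(estimates, cutoff=3.0):
    """Fuse estimates after rejecting outliers via a MAD rule."""
    med = statistics.median(estimates)
    mad = statistics.median(abs(x - med) for x in estimates) or 1e-9
    kept = [x for x in estimates if abs(x - med) / mad <= cutoff]
    return sum(kept) / len(kept)

# Four honest controllers and one greedy actor reporting an inflated value.
reports = [10.2, 9.8, 10.0, 10.1, 42.0]
fused = robust_fused_estimate(reports)   # the 42.0 report is discarded
```

A weighted mean with per-controller variances could replace the hard cutoff when the estimates are merely noisy rather than adversarially biased.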
Architecture and Implementation Example
The implementation of the concepts in the previous two sections requires an architecture that allows for operations in an electrical power system. Specifically, it requires a system architecture that allows for the exchange of information, the determination of a consensus, and the execution of operational actions, as shown in Figure 1, on an industrial control system.
To maximize the resiliency of self-assembly, distributed communications and control architectures provide a range of benefits. While there have been many distributed control architectures proposed in the literature for power systems [43], few have progressed past the stage of simulation or laboratory-level evaluation. This section examines how the OpenFMB reference architecture has been used to implement hierarchical controls in preliminary field tests and deployments. It is via the OpenFMB architecture that individual microgrids exchange information in Steps 1 and 3 of Figure 1, as well as being the mechanism by which the consensus is achieved in Step 5. OpenFMB is a standards-based reference architecture that enables the coordination of grid edge devices through interoperability and distributed controls [5].
A. Layered Control Architecture
A key purpose of “grid architecture” is to help manage complexity and risk [44], [45]. To this end, a properly developed architecture is designed to illustrate a basic relationship between structures, and not to present a complete design. For the determination of a consensus, the relationships between elements may represent a range of interactions that can include, but are not limited to, the flow of power, control signals, and equipment data.
For the purposes of this work, the structure is created by showing the relationship between the key entities/elements that participate in the functions included in networked microgrid operations. Figure 2 illustrates the general architectural features for the implementation shown in Figure 1. In Figure 2, the individual elements are connected by three colors of lines, each indicating a different type of interaction. First, the red lines indicate the flow of electricity between entities or devices. Second, the green lines indicate data/information flow between entities and/or devices. And third, the blue lines indicate control signals.
One approach to managing the complexity of networked microgrid operations is to apply the principles of laminar decomposition. In this approach, an optimization problem and associated constraints are defined and then decomposed into one or more layers of sub-problems that can be solved simultaneously in each layer. To implement a consensus for networked microgrid operations, the optimization problem must be defined. For the goal of microgrid self-assembly, the problem is as follows: multiple individual microgrid controllers operate to achieve their local objectives, informed by data and operational information from other microgrids. Specifically, the controllers must determine which two-microgrid interconnection will result in the greatest increase in supported critical end-use loads. To achieve this goal, this paper presents an architecture with three layers. The layers are as follows:
Layer-1: Individual device to microgrid controller. Individual devices including IEDs, controllers, and sensors will communicate directly with the individual microgrid controllers. This includes the exchange of information as well as control signals. Interactions at this layer can be a traditional centralized approach or a peer-to-peer implementation; either is feasible given the limited size of individual microgrids. For this work, individual devices will only be able to communicate with their host microgrid controller, and no other microgrid controllers.
Layer-2: Microgrid controller to microgrid controller. At this layer, the individual microgrids exchange information and control signals. Interactions at this layer could be a traditional mapped SCADA system, but for scalability, a peer-to-peer approach is typically more appropriate; this work uses OpenFMB. For microgrid self-assembly, Layer 2 is where the information exchange that supports the consensus algorithms operates. This can be seen in the green data connections between the microgrid controllers.
Layer-3: Microgrid controller to centralized control. At this layer, the utility's centralized DMS communicates with the microgrid controllers. For non-utility assets, the centralized control is an aggregator or some other similar entity. At this layer, observability and supervisory functions can be included.
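The Layer-2 exchange can be pictured with a minimal in-memory publish/subscribe stand-in. A real deployment would use the OpenFMB adaptors and message bus, whose interfaces are not reproduced here; all class, topic, and payload names below are hypothetical.

```python
# Minimal in-memory stand-in for a Layer-2 publish/subscribe exchange between
# microgrid controllers. This is NOT the OpenFMB API; it only illustrates the
# topic-based data flow shown by the green connections in Figure 2.
from collections import defaultdict

class Bus:
    """Toy pub/sub broker: callbacks are invoked on matching topics."""
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, payload):
        for cb in self.subs[topic]:
            cb(payload)

bus = Bus()
received = []
# Controller 3 subscribes to controller 1's stand-alone estimate topic.
bus.subscribe("mg1/standalone_kvah", received.append)
# Controller 1 publishes its estimate; only subscribers to that topic see it.
bus.publish("mg1/standalone_kvah", {"mg": 1, "kvah": 120.0})
```

In the actual implementation, the containerized OpenFMB adaptors play the role of this broker, decoupling each controller from direct knowledge of its peers.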
B. Implementation of Layered Architecture
The architecture of the previous section shows the layers at which the control and communications of Figure 1 can be implemented. Implementing this architecture in hardware and software can be done in a number of ways, and this section shows one such possible implementation. This implementation is based on associated past work and currently ongoing work as shown in Figure 3.
The implementation of the architecture presented in this section is accomplished using COTS equipment and open-source software. Specifically, Schweitzer Engineering Laboratory (SEL) Compact Automation Controllers (SEL-3360) are used for the hardware, with containerized software installed [46]. On each automation controller, there are three software containers. The first is a containerized version of the open-source Complete System level Efficient and Interoperable Solution for Microgrid Integrated Controls (CSEISMIC) microgrid controller [47]. The second is a containerized version of the OpenFMB adaptors that connect the hardware to the pub/sub network of the OpenFMB Harness [13]. The third is a containerized version of the consensus algorithms.
In the implementation of the layered architecture, the controller for each microgrid, CSEISMIC for this work, directly manages the assets within the microgrid behind the PCC. This includes data collection as well as controls. The OpenFMB adaptors allow each of the microgrids to publish and subscribe selective data from other devices. The containerized consensus algorithms allow for the exchanged data to be compared and a global objective to be achieved.
All of these functions can be implemented on each of the local automation controllers, effectively implementing the architecture shown in Figure 2. Also reflected from the architecture in Figure 2 is a connection via the pub/sub network to the DMS, allowing for centralized supervision and/or control if desired, but it is not required [5].
While this implementation used SEL hardware, CSEISMIC microgrid controllers, and OpenFMB adaptors, there is nothing in the presented work that is specific to these selections. All of the presented work can be implemented using different hardware and software; it is vendor and device agnostic.
Five Microgrid Example
This section contains a worked example of the process presented in Section II, which can be implemented in the architecture of Section III. To demonstrate the proposed collaborative decision-making procedure, a scenario with 5 participating microgrid controllers is presented. While a full one-line diagram is not included, the following assumptions provide the necessary level of detail to highlight the collaborative process:
It is considered that 8 microgrids are present; however, only 5 of the 8 microgrids have electrically viable interconnection paths and/or have agreed to the same global objective (#1, #3, #4, #6, and #7).
All 5 of these microgrids have the capability to interconnect to any one of the other microgrids.
The list contains all possible combinations (i.e., there are no other electrically feasible combinations of microgrids).
The objective function to maximize is the Critical kVA-hr. supplied.
A. Step 1: Data Exchange I
In Step 1, each of the microgrid controllers exchanges information that informs the calculation of its list of objective function values in Step 2. This includes raw data as well as calculated data.
B. Step 2: Local Calculations I
Each of the microgrid controllers calculates the kVA-hr. of critical end-use load it can supply in stand-alone operation, and for all pair-wise combinations of other microgrids. As previously mentioned, there are numerous possible methods for determining these estimates, and different microgrid controllers may use different methods. Regardless of the method, the value calculated in (1) will be uniform with units of kVA-hr. For this example, Table 2 shows the individual values of
It should be noted that calculations for stand-alone operation are only performed when
C. Step 3: Data Exchange II
The primary piece of data that is exchanged in this step is the individual microgrids estimates for stand-alone operations. Specifically,
D. Step 4: Local Calculations II
Once the individual values of (2) are calculated and the values of
For each of the individual lists of differential energy values in Table 3, equation (3) is used to determine the pair-wise set of microgrids in each list that results in the largest increase in kVA-hr supported. Each microgrid then calculates its local maximum using (4). The highest value from each microgrid is shown in Table 4.
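A minimal sketch of this local-maximum selection follows, using hypothetical differential-energy values in place of Table 3 (the paper's actual table values are not reproduced here).

```python
# Hypothetical differential-energy list for one controller, standing in for a
# row of Table 3. Each controller picks the pair with the largest gain as its
# local maximum, mirroring equations (3) and (4) in the text.

def local_maximum(delta_list):
    """Return (pair, gain) with the largest differential kVA-hr gain."""
    return max(delta_list.items(), key=lambda kv: kv[1])

# Controller 1's pair-wise gains in kVA-hr (illustrative values only).
deltas_mg1 = {(1, 4): 310.0, (1, 3): 120.0, (3, 6): 95.0, (6, 7): 40.0}
pair, gain = local_maximum(deltas_mg1)
```

Each of the five controllers performs this selection independently over its own list, so the local maxima can disagree, which is what Step 5 resolves.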
From Table 4 it can be seen that three of the five microgrid controllers have determined that the 1–4 connection is the best choice. However, it can also be seen that microgrid controller 4 determined the best connection to be 1–7, and it also generated the highest value of
E. Step 5: Consensus Determination
In the literature, there are several methods for implementing a distributed calculation of a global maximum [41], [48]. From the previous section it can be seen that a simple consensus algorithm such as a ring-reduce would not be sufficient because of the existence of outliers. Specifically, microgrid controller 4 has the highest value of
From Table 4, it can be seen that the 5 microgrid controllers do not all arrive at the same local maxima. For this example, the implementation of the cloture vote is shown in the four panels of Figure 4. In Figure 4a microgrid controller 1 is selected as the King Processor,
In Round 2, the King Processor sends its value,
For this example, the consensus was achieved in a single phase because there were no Byzantine actors, only outlier values. However, the cloture method has been shown to be effective for Byzantine actors as well as outliers, as long as there is not an excessive number of Byzantine actors. For this system, the cloture algorithm is proven to reach a consensus in no more than 2 phases if there is no more than a single Byzantine actor. With 5 microgrid controllers, a consensus is guaranteed with a single Byzantine actor, but no more [15]. Once the consensus has been determined in Step 5, the switching operations are executed in Step 6.
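A heavily simplified, single-phase sketch of a King-style vote is given below. This assumes honest processors and a single phase; it is not the full Byzantine-tolerant protocol of [15], and all names and vote values are illustrative.

```python
# Simplified single-phase King-style vote: majority proposal is kept only if
# its multiplicity is strong enough; otherwise each processor adopts the
# King's value. Votes here mirror the example: four controllers propose the
# pair (1, 4) and one outlier proposes (1, 7).
from collections import Counter

def cloture_phase(values, king_index, f=1):
    """One phase of a King-style vote tolerating up to f faulty processors."""
    n = len(values)
    proposal, multiplicity = Counter(values).most_common(1)[0]
    king_value = values[king_index]
    # A processor keeps the majority proposal only if its multiplicity
    # exceeds n/2 + f; otherwise it adopts the King's value.
    decided = proposal if multiplicity > n / 2 + f else king_value
    return [decided] * n

votes = [(1, 4), (1, 4), (1, 4), (1, 7), (1, 4)]
result = cloture_phase(votes, king_index=0)
```

With four of five votes for (1, 4), the multiplicity clears the n/2 + f threshold and all processors decide on (1, 4) without needing the King to break a tie, matching the single-phase outcome described above.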
F. Step 6: Execute Switching
After the completion of Step 5, it has been determined that the objective function is maximized by interconnecting microgrids 1 and 4. In Step 6, the appropriate switching actions are taken to synchronize and parallel the two microgrids. From Figure 2, it can be seen that this is either done distributedly by a utility microgrid or centrally with the utility DMS if it is available. While the DMS can issue the switching commands, the key of the distributed process is that the DMS is not required. The process for paralleling two sources is well understood and as such will not be addressed in this paper.
In this case, the objective function was maximized by paralleling two microgrids. In practice, there will be scenarios where the objective function is maximized by separating a microgrid that was previously formed by combining two microgrids. Specifically, a microgrid that has two controllers can be separated into two individual microgrids. This could occur for a number of reasons, such as increased consumption of end-use loads between the two microgrid PCCs.
For this reason, the six-step process in Figure 1 is iterative and over time microgrids would interconnect and separate as necessary. A conceptual example diagram of this is shown in Figure 5, with the example from Section V shown as the transition between time t = 0 and t = 1.
For practical considerations, thresholds for time and/or minimum change in objective function would need to be established to prevent excessive operations of switching equipment at the primary distribution level. The value of these thresholds would vary by utility and could change over time, but initial values such as 15 minutes and a 5% change in objective function would be reasonable starting points.
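Such gating could be sketched as follows. The function and parameter names are hypothetical; the 15-minute and 5% figures are simply the illustrative starting values suggested above.

```python
# Illustrative gating of switching operations: a reconfiguration is allowed
# only if both the elapsed-time and minimum-gain thresholds are cleared,
# limiting wear on primary-level switchgear.

def should_switch(gain, current_objective, minutes_since_last_switch,
                  min_gain_frac=0.05, min_interval_min=15.0):
    """Return True only if the gain and elapsed time both clear thresholds."""
    if minutes_since_last_switch < min_interval_min:
        return False
    return gain >= min_gain_frac * current_objective

# A 6% objective gain after 20 minutes clears both thresholds.
ok = should_switch(gain=30.0, current_objective=500.0,
                   minutes_since_last_switch=20.0)
```

In practice both thresholds would be utility-configurable and could themselves adapt over time, as noted above.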
Concluding Comments
This paper has presented a framework for how consensus algorithms implemented on a distributed architecture, OpenFMB, can coordinate microgrid self-assembly. The use of consensus algorithms has been shown to be an option for addressing the control challenges associated with the mixed-ownership environment of networked microgrid operations. Specifically, they can equitably address the variations in microgrid ownership, operational objectives, control structures, and microgrid capabilities.
While the presentation of this framework has been applied to microgrid self-assembly using a cloture algorithm, it can be used for a variety of networked microgrid operational goals. Future work will extend the framework to include objective functions where networked microgrids support the bulk power system as well as examining consensus algorithms that can account for multi-objective optimizations.
ACKNOWLEDGMENT
Contributions to this project were achieved through the Grid Modernization Laboratory Consortium (GMLC), a strategic partnership between the U.S. Department of Energy and the National Laboratories. The GMLC was established as part of the U.S. Department of Energy Grid Modernization Initiative (GMI) to accelerate the modernization of the U.S. electricity infrastructure. Work at Lawrence Livermore National Laboratory was conducted under contract pursuant to release number LLNL-JRNL-828940. The views expressed in the article do not necessarily represent the views of the U.S. Department of Energy or the U.S. Government.