I. Introduction
Fog computing, first proposed in [1], brings cloud services to the edge of the network, in close proximity to the data sources (end-user terminals). This enables local data processing at reduced latency, without sending data back to the cloud core. The fog paradigm thus serves delay-sensitive IoT applications that demand rapid service instantiation with minimal control-plane delays. The motivation for fog computing is to reduce the aggregate delays incurred in cloud domains by the extended propagation distances over geographically dispersed datacenters; shorter links reduce latency and improve response times. Fog computing also enables data computation and storage in remote locations where access to the cloud core is limited. Further, local data processing reduces bandwidth costs and alleviates traffic at the cloud domain. However, a key limitation of fog computing is its constrained resources. Consequently, fog nodes are vulnerable to network congestion and rapid saturation during high traffic volumes, particularly when terminals demand intensive data computation for prolonged durations. This exhausts network resources, degrades QoS, and reduces capacity and revenue. Hence, efficient resource management is vital in fog computing to enhance network utilization.