I. Introduction
FIBER-OPTIC systems have become the backbone of the telecommunications industry. The introduction of the erbium-doped fiber amplifier (EDFA) and the subsequent growth of wavelength-division multiplexed (WDM) signals have enabled rapid growth of the high-bandwidth infrastructure based on fiber optics. Several key technological advancements, in particular the introduction of 10-Gb/s transmission optics and the more recent advent of reconfigurable optical add/drop multiplexers (ROADMs), have continued to strengthen the capabilities of the fiber-optic network. These technologies have improved the economics of optical transport systems.

However, one of the inherent shortcomings of optical management of the communication network has been the high cost of gaining access to the bits themselves. In response, the optical network has been designed to minimize the need for digital-infrastructure management of the signal, with the result that it has been burdened instead with analog management and forced to rely on proxies such as optical signal-to-noise ratio (OSNR) to infer the bit error rate (BER) of the system. This strategy has indeed improved the capacity and cost per gigabit of today's fiber-optic networks compared with those operating circa 1990, but at the cost of reduced network functionality and flexibility. Synchronous optical network (SONET)-like performance management is carried out less frequently throughout the network, and the result has been lower granularity in fault management. Issues connected with wavelength reuse have increased network complexity and led to inefficient bandwidth utilization. Switching and grooming, once applied at subwavelength granularity, have been restricted to management at the wavelength scale. In addition, the network is now encumbered with an infrastructure that is no longer scalable at a rate consistent with data growth (~2× per year) and is unable to yield the cost reductions necessary for continued economic well-being.