Terabit Switching and Routing
In the present network infrastructure, the world's communication service providers are laying fiber at very rapid rates, and most of the fiber connections are now being terminated using DWDM. The combination of fiber and DWDM has made raw bandwidth available in abundance. 64-channel OC-192 capacity fibers are not uncommon these days, and OC-768 speeds will be available soon. Terabit routing technologies are required to convert these massive amounts of raw bandwidth into usable bandwidth.
The present-day network infrastructure is shown in Fig 1. Currently, Add/Drop multiplexers are used to spread a high-speed optical interface across multiple lower-capacity interfaces of traditional routers. But carriers require high-speed router interfaces that can connect directly to the high-speed DWDM equipment to ensure optical interoperability. This also removes the overhead associated with the extra technologies, enabling more economical and efficient wide-area communications.
As the number of channels transmitted on a single fiber increases with DWDM, routers must also scale their port densities to handle all of those channels. With the increase in interface speeds as well as port density, the next thing routers need to improve is internal switching capacity: 64-channel OC-192 will require over a terabit of switching capacity.
Consider an example: a current state-of-the-art gigabit router with 40 Gbps of switch capacity can support only a 4-channel OC-48 DWDM connection. Four of these routers are required to support a 16-channel OC-48 DWDM connection, and sixteen are required to support a 16-channel OC-192 DWDM connection, with a layer of sixteen 4:1 SONET Add/Drop Multiplexers in between. In comparison, a single router with terabit switching capacity can support a 16-channel OC-192 DWDM connection on its own. With this introduction, we now proceed to understand what is required to build full routers with terabit capacities.
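Before that, the capacities quoted above can be checked with a quick back-of-envelope calculation. The Python sketch below uses the standard SONET line rates; the factor of two for counting both ingress and egress traffic is an assumption about how switch capacity is quoted, not a figure from any vendor.

# Rough arithmetic behind the capacities quoted above (rates in Gbps).
# Standard SONET line rates: OC-48 ~ 2.488 Gbps, OC-192 ~ 9.953 Gbps.
OC48, OC192 = 2.488, 9.953

configs = {
    "4 x OC-48 (one 40 Gbps gigabit router)": 4 * OC48,
    "16 x OC-48 (four such routers)": 16 * OC48,
    "16 x OC-192 (sixteen such routers, or one terabit router)": 16 * OC192,
    "64 x OC-192": 64 * OC192,
}
for name, line_rate in configs.items():
    # Assume switch capacity is quoted counting both ingress and egress,
    # so the fabric must carry roughly twice the aggregate line rate.
    print(f"{name}: {line_rate:.1f} Gbps aggregate, "
          f"needs ~{2 * line_rate:.0f} Gbps of switch capacity")

Running this shows 64-channel OC-192 carrying roughly 637 Gbps of line traffic, which works out to well over a terabit of switch capacity once both directions are counted.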
This section gives a general introduction to the architecture of routers and the functions of their various components. This is very important for understanding the bottlenecks in achieving high-speed routing and how these are handled in the design of the gigabit and even terabit capacity routers available in the market today.
The Forwarding Decision: A routing table search is done for each arriving datagram, and based on the destination address, the output port is determined. Also, the next-hop MAC address is appended to the front of the datagram, the time-to-live (TTL) field of the IP datagram header is decremented, and a new header checksum is calculated.
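A minimal sketch of these per-datagram steps is given below, assuming a toy forwarding table; the prefixes, port numbers and MAC addresses are purely illustrative.

import ipaddress

# Illustrative forwarding table: prefix -> (output port, next-hop MAC).
# The entries are made up for this sketch.
FIB = {
    ipaddress.ip_network("0.0.0.0/0"):   (0, "00:aa:bb:cc:dd:00"),
    ipaddress.ip_network("10.0.0.0/8"):  (1, "00:aa:bb:cc:dd:01"),
    ipaddress.ip_network("10.1.0.0/16"): (2, "00:aa:bb:cc:dd:02"),
}

def forward(dst_ip, ttl):
    """Return (output port, next-hop MAC, new TTL) for one datagram."""
    dst = ipaddress.ip_address(dst_ip)
    # Longest-prefix match on the destination address decides the output port.
    matches = [(net, entry) for net, entry in FIB.items() if dst in net]
    net, (port, next_hop_mac) = max(matches, key=lambda m: m[0].prefixlen)
    # Decrement the TTL; a real router drops the datagram (and sends an
    # ICMP Time Exceeded) once the TTL reaches zero.  After this change the
    # IP header checksum is recomputed, and the next-hop MAC goes into the
    # new link-layer header prepended to the datagram.
    return port, next_hop_mac, ttl - 1

print(forward("10.1.2.3", ttl=64))   # longest match is 10.1.0.0/16 -> port 2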
Forwarding through the Backplane: The backplane refers to the physical path between the input port and the output port. Once the forwarding decision is made, the datagram is queued before it can be transferred to the output port across the backplane. If there is not enough space in the queues, it may even be dropped.
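This queueing and drop behaviour can be sketched as a bounded queue in front of the switch fabric; the queue depth and function names below are arbitrary choices for illustration.

from collections import deque

QUEUE_DEPTH = 8         # illustrative per-port queue depth
to_backplane = deque()  # datagrams waiting to cross the fabric

def enqueue_for_backplane(datagram):
    """Queue a datagram for transfer to its output port, or drop it."""
    if len(to_backplane) >= QUEUE_DEPTH:
        return False                 # queue full: the datagram is dropped
    to_backplane.append(datagram)
    return True

def transfer_one():
    """Move one queued datagram across the backplane when the fabric is free."""
    return to_backplane.popleft() if to_backplane else None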
Output Link Scheduling: Once a datagram reaches the output port, it is again queued before it can be transmitted on the output link. In most traditional routers, a single FIFO queue is maintained. But more advanced routers maintain separate queues for different flows or priority classes and then carefully schedule the departure time of each datagram in order to meet various delay and throughput guarantees.
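As a sketch of the difference, the scheduler below keeps one queue per class and serves them in strict priority order; the class names and the strict-priority policy are just one possible choice, and many routers use weighted fair queueing or similar schemes instead.

from collections import deque

# One queue per priority class, listed from highest to lowest priority.
classes = ["voice", "video", "best_effort"]
queues = {cls: deque() for cls in classes}

def enqueue(datagram, cls="best_effort"):
    queues[cls].append(datagram)

def dequeue():
    """Strict priority: always transmit from the highest non-empty class."""
    for cls in classes:
        if queues[cls]:
            return queues[cls].popleft()
    return None        # every queue is empty; the output link goes idle

enqueue("pkt-A")                  # ordinary best-effort traffic
enqueue("pkt-B", cls="voice")     # delay-sensitive traffic
print(dequeue())                  # "pkt-B" departs first despite arriving later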