When switches are cascaded, performance can suffer if the backbone connection between them is not properly designed. This lesson addresses the concerns involved in cascading switches.

Repeating Hubs

With repeating hubs, all stations on the network occupy the same collision domain and obey the same rules for arbitrating access to the network. This is called shared Ethernet, since all stations share the same media, including the repeaters residing within the collision domain. No station has precedence over another. Even when repeating hubs are cascaded, there is no perceptible change in network performance, since the arbitration rules do not change.

Switching Technology

However, the introduction of switch technology changes everything. Switch ports terminate collision domains, allowing for greater distances than can be achieved with repeating hubs. Traffic can be restricted to certain ports once the switch learns the location of station addresses. Switches have what is called a "switch fabric" that allows for the rapid transfer of data frames from port to port within the switch. A switch is called "non-blocking" or "wire-speed" if its fabric is fast enough that throughput is the same whether the switch is present or absent. For example, suppose we have an eight-port switch and six connected stations, all operating at 100 Mbps. It should be possible for all stations to communicate with one another as if the switch were not there. If that is the case, the switch is said to be non-blocking.
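The fabric capacity this definition implies can be checked with simple arithmetic. The sketch below is a hypothetical illustration, not a vendor specification: the port counts and data rates are the example values above, and full-duplex operation is assumed. It computes the aggregate fabric bandwidth a switch needs in order to forward line-rate traffic on every port at once:

```python
# Back-of-the-envelope check of the fabric capacity a switch needs to be
# non-blocking. Illustrative assumption: full-duplex ports, so each port
# can send and receive at line rate simultaneously.

def nonblocking_fabric_gbps(ports: int, port_mbps: int,
                            full_duplex: bool = True) -> float:
    """Aggregate switch-fabric bandwidth (Gbps) needed so that every
    port can run at line rate at the same time."""
    duplex_factor = 2 if full_duplex else 1
    return ports * port_mbps * duplex_factor / 1000

print(nonblocking_fabric_gbps(8, 100))    # eight ports at 100 Mbps -> 1.6 Gbps
print(nonblocking_fabric_gbps(16, 100))   # sixteen ports at 100 Mbps -> 3.2 Gbps
```

The second figure being double the first is exactly why a non-blocking 16-port switch must carry a higher-performance fabric than a non-blocking eight-port switch.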

What happens if we want to add six more stations to the network? We would either need to replace the eight-port switch with a 16-port switch, or we could simply add another eight-port switch using a switch-to-switch connection. Is there a difference in performance between the two approaches? Assume that both the 16-port and eight-port switches are non-blocking. This means the 16-port switch fabric must have higher performance than the eight-port fabric in order to serve twice as many ports in the same time frame. To the user, however, there is no change in performance when all twelve stations are connected, each to its own port on the 16-port switch.

What happens when these same twelve stations are split into two groups of six, with six stations connected to one eight-port switch and the other six connected to the second eight-port switch? (Figure 1) A single cable connects the two switches together, for a net loss of two ports. Since both switches are non-blocking, there should be no change in performance. However, this is not the case. Assume port 8 on each switch is dedicated to the "backbone" connection linking the two switches together. Further, assume that stations 1 to 6 are on switch A and stations 7 to 12 are on switch B. For station 1 to send a message to station 12, the traffic must pass through port 8. The same is true for any message originating from a station on switch B and destined for a station on switch A. Assuming an equal distribution of messages, port 8 therefore handles roughly half of all traffic. Port 8 becomes the bottleneck as frames are queued for transmission, and throughput is constrained by the data rate of port 8.

Figure 1 — A possible bottleneck is created when two switches are cascaded.
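The size of this bottleneck can be estimated. The sketch below is an illustrative calculation, not a measurement: it assumes each station transmits at line rate to a destination chosen uniformly among the other eleven stations, using the 100 Mbps port speeds from the example above.

```python
# Estimate the load offered to the backbone port (port 8) when twelve
# stations are split six-and-six across two cascaded switches.
# Illustrative assumptions: uniform random destinations, 100 Mbps ports.

stations_per_switch = 6
total_stations = 2 * stations_per_switch
station_rate_mbps = 100
uplink_rate_mbps = 100            # the backbone runs at the same port speed

# A station on switch A sends to one of the 11 other stations; 6 of
# those sit on switch B, so slightly more than half of its traffic
# must cross the backbone.
remote_fraction = stations_per_switch / (total_stations - 1)

# Load offered to the uplink in one direction if every station on
# switch A transmits at line rate:
offered_mbps = stations_per_switch * station_rate_mbps * remote_fraction

print(f"fraction of traffic crossing the backbone: {remote_fraction:.2f}")
print(f"offered load on port 8, one direction: {offered_mbps:.0f} Mbps")
print(f"backbone oversubscription: {offered_mbps / uplink_rate_mbps:.1f}x")
```

Under these assumptions the backbone link sees roughly 327 Mbps of offered load against 100 Mbps of capacity in each direction, a better-than-3x oversubscription, which is why frames queue at port 8.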

This is not the case with a single non-blocking switch handling all the traffic, since no single port carries concentrated traffic. (Figure 2) The only exception would be if one port were especially busy because it connects to a centralized file server or a master controller. Assuming an equal distribution of traffic, a single-switch arrangement is superior to a cascaded-switch arrangement.

Figure 2 — Wire-speed can be achieved if all stations are connected to a single switch.

(No part of this article may be reproduced without the written consent of the Industrial Ethernet University.)