What is a Switch?

Switches occupy the same place in the network as hubs. Unlike hubs, switches examine each packet and process it accordingly rather than simply repeating the signal to all ports. Switches map the Ethernet addresses of the nodes residing on each network segment and then allow only the necessary traffic to pass through the switch. When a packet is received by the switch, the switch examines the destination and source hardware addresses and compares them to a table of network segments and addresses. If the segments are the same, the packet is dropped ("filtered"); if the segments are different, then the packet is "forwarded" to the proper segment. Additionally, switches prevent bad or misaligned packets from spreading by not forwarding them.

The filtering of packets and the regeneration of forwarded packets enable switching technology to split a network into separate collision domains. Regeneration of packets allows for greater distances and more nodes in the total network design, and dramatically lowers overall collision rates. In switched networks, each segment is an independent collision domain. In shared networks, all nodes reside in one large, shared collision domain.

Most switches are self-learning and easy to install: they determine the Ethernet addresses in use on each segment, building a table as packets pass through the switch. This "plug and play" element makes switches an attractive alternative to hubs.
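The filter, forward, and self-learning behavior described above can be sketched in a few lines of Python. This is a toy model, not a real switch implementation; the MAC addresses and port numbers are illustrative assumptions.

```python
class LearningSwitch:
    def __init__(self):
        self.mac_table = {}  # MAC address -> port it was last seen on

    def handle_frame(self, src_mac, dst_mac, in_port):
        # Self-learning: note which port the source address lives on.
        self.mac_table[src_mac] = in_port

        out_port = self.mac_table.get(dst_mac)
        if out_port is None:
            return "flood"            # unknown destination: send to all other ports
        if out_port == in_port:
            return "filter"           # source and destination share a segment: drop
        return f"forward:{out_port}"  # different segment: forward to the right port

sw = LearningSwitch()
print(sw.handle_frame("AA", "BB", in_port=1))  # BB still unknown -> flood
print(sw.handle_frame("BB", "AA", in_port=2))  # AA learned on port 1 -> forward:1
print(sw.handle_frame("CC", "AA", in_port=1))  # AA is on the arriving segment -> filter
```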

Switches can connect different network types (such as Ethernet and Fast Ethernet) or networks of the same type. Many switches today offer high-speed links, like Fast Ethernet or FDDI, that can be used to link the switches together or to give added bandwidth to important servers that get a lot of traffic. A network composed of a number of switches linked together via these fast uplinks is called a "collapsed backbone" network.

Dedicating ports on switches to individual nodes is another way to speed access for critical computers. Servers and power users can take advantage of a full segment for one node, so some networks connect high traffic nodes to a dedicated switch port.

Full duplex is another method of increasing bandwidth to dedicated workstations or servers. To use full duplex, both the network interface card in the server or workstation and the switch must support full-duplex operation. Full duplex doubles the potential bandwidth on that link, providing 20 Mbps for Ethernet and 200 Mbps for Fast Ethernet.

http://www.technick.net/public/code/cp_dpage.php?aiocp_dp=guide_networking_switching


Advanced Switching Technology Issues

There are some technology issues with switching that do not affect 95% of all networks. Major switch vendors and the trade publications are promoting new competitive technologies, so some of these concepts are discussed here.

Managed or Unmanaged

Management provides benefits in many networks. Large networks with mission-critical applications are managed with many sophisticated tools, using SNMP to monitor the health of devices on the network. Networks using SNMP or RMON (an extension to SNMP that provides much more data while using less network bandwidth to do so) will either manage every device or just the more critical areas. VLANs are another benefit of management in a switch. A VLAN allows the network to group nodes into logical LANs that behave as one network, regardless of physical connections. The main benefit is managing broadcast and multicast traffic: an unmanaged switch will pass broadcast and multicast packets through to all ports. If the network has logical groupings that differ from the physical groupings, then a VLAN-based switch may be the best bet for traffic optimization.
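A rough sketch of how a VLAN-capable switch contains broadcast traffic, assuming a hypothetical port-to-VLAN assignment (an unmanaged switch would return every other port instead):

```python
# Illustrative port-to-VLAN mapping: ports 1-2 in VLAN 10, ports 3-4 in VLAN 20.
vlan_of_port = {1: 10, 2: 10, 3: 20, 4: 20}

def broadcast_ports(in_port):
    """Ports that receive a broadcast arriving on in_port: same VLAN only."""
    vlan = vlan_of_port[in_port]
    return [p for p, v in vlan_of_port.items() if v == vlan and p != in_port]

print(broadcast_ports(1))  # only the other VLAN 10 port sees the broadcast
```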

Another benefit of management in switches is the Spanning Tree Algorithm. Spanning Tree allows the network manager to design in redundant links, with switches attached in loops. Left alone, loops would defeat the self-learning aspect of switches, since traffic from one node would appear to originate on different ports. Spanning Tree is a protocol that allows the switches to coordinate with each other so that traffic is carried on only one of the redundant links (unless there is a failure, in which case the backup link is automatically activated). Network managers with switches deployed in critical applications may want redundant links; in that case management is necessary. But for the rest of the networks an unmanaged switch would do quite well, and is much less expensive.
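The core idea, keeping one active path between switches and blocking the redundant loop links as backups, amounts to computing a spanning tree of the switch topology. The sketch below uses a generic Kruskal-style algorithm over toy link costs; real STP instead elects a root bridge and exchanges BPDUs, so this is only an illustration of the outcome, not the protocol.

```python
def active_links(links):
    """links: iterable of (cost, switch_a, switch_b). Returns the links kept active."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    keep = []
    for cost, a, b in sorted(links):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb        # link joins two separate parts: keep it active
            keep.append((a, b))
        # else: the link would close a loop -> it is blocked, held as backup
    return keep

# Three switches wired in a triangle: one link must be blocked.
links = [(1, "S1", "S2"), (1, "S2", "S3"), (2, "S1", "S3")]
print(active_links(links))  # S1-S3 is blocked as the backup link
```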

Store-and-Forward vs. Cut-Through

LAN switches come in two basic architectures: cut-through and store-and-forward. A cut-through switch examines only the destination address before forwarding the packet on to its destination segment. A store-and-forward switch, on the other hand, accepts and analyzes the entire packet before forwarding it to its destination. Examining the entire packet takes more time, but it allows the switch to catch certain packet errors and collisions and keep bad packets from propagating through the network.

Today, the speed of store-and-forward switches has caught up with cut-through switches to the point where the difference between the two is minimal. Also, there are a large number of hybrid switches available that mix both cut-through and store-and-forward architectures.
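The difference between the two architectures can be sketched as follows. The "frame" here is a toy (destination, payload, checksum) tuple rather than a real Ethernet frame, and a CRC-32 stands in for the hardware frame check sequence.

```python
import zlib

def make_frame(dst, payload):
    return (dst, payload, zlib.crc32(payload))

def cut_through(frame):
    dst, _, _ = frame            # only the destination address is examined
    return f"forward to {dst}"   # errors, if any, propagate downstream

def store_and_forward(frame):
    dst, payload, fcs = frame    # the whole frame is received first
    if zlib.crc32(payload) != fcs:
        return "drop: bad frame"
    return f"forward to {dst}"

good = make_frame("port3", b"hello")
bad = ("port3", b"hello", 0)     # corrupted checksum
print(cut_through(bad))          # forwarded anyway
print(store_and_forward(bad))    # caught and dropped
```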

Blocking vs. Non-Blocking Switches

Add up all of a switch's ports at their theoretical maximum speed and you have the theoretical total of the switch's throughput. If the switching bus or switching components cannot handle that total, the switch is considered a "blocking switch". There is debate over whether all switches should be designed non-blocking, but the added cost of doing so is only reasonable on switches designed to work in the largest network backbones. For almost all applications, a blocking switch with an acceptable and reasonable throughput level will work just fine.

Consider an eight-port 10/100 switch. Since each port can theoretically handle 200 Mbps (full duplex), there is a theoretical need for 1600 Mbps, or 1.6 Gbps. But in the real world, each port is unlikely to exceed 50% utilization, so an 800 Mbps switching bus is adequate. Weighing total throughput against the real-world demand of the total ports validates that the switch can handle the load of your network.
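The back-of-the-envelope check above, written out (the port count and 50% utilization figure come from the example):

```python
ports = 8
port_speed_mbps = 100    # a 10/100 port running at full speed
duplex_factor = 2        # full duplex doubles the figure

theoretical_mbps = ports * port_speed_mbps * duplex_factor
print(theoretical_mbps)  # 1600 Mbps, i.e. 1.6 Gbps

# Assume real-world utilization of at most 50% per port:
needed_mbps = theoretical_mbps * 0.5
print(needed_mbps)       # 800.0 -> an 800 Mbps switching bus is adequate
```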

Switch Buffer Limitations

As packets are processed in the switch, they are held in buffers. If the destination segment is congested, the switch holds on to the packet as it waits for bandwidth to become available on the crowded segment. Buffers that are full present a problem. So some analysis of the buffer sizes and strategies for handling overflows is of interest for the technically inclined network designer.

In real-world networks, crowded segments cause many problems, and networks should be designed to eliminate them; for most users, then, buffer behavior is not a major factor in switch selection. There are two strategies for handling full buffers. One is "backpressure flow control", which pushes traffic back upstream toward the source nodes of packets that find a full buffer. The alternative is simply to drop the packet and rely on the network's integrity features to retransmit automatically. One solution spreads the problem from one segment to others, propagating it; the other causes retransmissions, and the resulting increase in load is not optimal. Neither strategy solves the problem, so switch vendors use large buffers and advise network managers to design switched network topologies to eliminate the source of the problem: congested segments.
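The two full-buffer strategies can be contrasted with a toy queue; the buffer size and the return messages are illustrative, not taken from any real switch.

```python
from collections import deque

BUFFER_SIZE = 2  # illustrative; real switch buffers hold many packets

def enqueue(buffer, packet, strategy):
    """Try to queue a packet for a congested output segment."""
    if len(buffer) < BUFFER_SIZE:
        buffer.append(packet)
        return "queued"
    if strategy == "drop":
        # Rely on higher-layer integrity features to retransmit.
        return "dropped: sender must retransmit"
    if strategy == "backpressure":
        # Push the congestion back toward the source segment.
        return "pushed back: congestion spreads upstream"
    raise ValueError(strategy)

buf = deque()
for pkt in ["p1", "p2", "p3"]:
    print(pkt, enqueue(buf, pkt, "drop"))  # the third packet finds a full buffer
```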

Layer 3 Switching

A hybrid device is the latest improvement in internetworking technology. Combining the packet handling of routers and the speed of switching, these multilayer switches operate on both layer 2 and layer 3 of the OSI network model. The performance of this class of switch is aimed at the core of large enterprise networks. Sometimes called routing switches or IP switches, multilayer switches look for common traffic flows and switch these flows in hardware for speed. For traffic outside the normal flows, the multilayer switch uses routing functions. This keeps the higher-overhead routing functions only where they are needed, and strives for the best handling strategy for each network packet.

Many vendors are working on high-end multilayer switches, and the technology is definitely a "work in progress". As networking technology evolves, multilayer switches are likely to replace routers in most large networks.

 http://www.lantronix.com/resources/net-tutor-switching.html


Networking Basics

Computer networking has become an integral part of business today. Individuals, professionals and academics have also learned to rely on computer networks for capabilities such as electronic mail and access to remote databases for research and communication purposes. Networking has thus become an increasingly pervasive, worldwide reality because it is fast, efficient, reliable and effective. Just how all this information is transmitted, stored, categorized and accessed remains a mystery to the average computer user.

This tutorial will explain the basics of some of the most popular technologies used in networking, and will include the following:

- Types of Networks - including LANs, WANs and WLANs

- The Internet and Beyond - The Internet and its contributions to intranets and extranets

- Types of LAN Technology - including Ethernet, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, ATM, PoE and Token Ring

- Networking and Ethernet Basics - including standard code, media, topologies, collisions and CSMA/CD

- Ethernet Products - including transceivers, network interface cards, hubs and repeaters

Types of Networks

In describing the basics of networking technology, it will be helpful to explain the different types of networks in use.

Local Area Networks (LANs)

A network is any collection of independent computers that exchange information with each other over a shared communication medium. Local Area Networks or LANs are usually confined to a limited geographic area, such as a single building or a college campus. LANs can be small, linking as few as three computers, but can often link hundreds of computers used by thousands of people. The development of standard networking protocols and media has resulted in worldwide proliferation of LANs throughout business and educational organizations.

Wide Area Networks (WANs)

Often elements of a network are widely separated physically. Wide area networking combines multiple LANs that are geographically separate. This is accomplished by connecting the LANs with dedicated leased lines such as a T1 or a T3, by dial-up phone lines (both synchronous and asynchronous), by satellite links, and by data packet carrier services. WANs can be as simple as a modem and a remote access server for employees to dial into, or as complex as hundreds of branch offices globally linked. Special routing protocols and filters minimize the expense of sending data over vast distances.

Wireless Local Area Networks (WLANs)

Wireless LANs, or WLANs, use radio frequency (RF) technology to transmit and receive data over the air. This minimizes the need for wired connections. WLANs give users mobility as they allow connection to a local area network without having to be physically connected by a cable. This freedom means users can access shared resources without looking for a place to plug in cables, provided that their terminals are mobile and within the designated network coverage area. With mobility, WLANs give flexibility and increased productivity, appealing to both entrepreneurs and to home users. WLANs may also enable network administrators to connect devices that may be physically difficult to reach with a cable.

The Institute of Electrical and Electronics Engineers (IEEE) developed the 802.11 specification for wireless LAN technology. 802.11 specifies an over-the-air interface between a wireless client and a base station, or between two wireless clients. WLAN 802.11 standards also have security protocols that were developed to provide the same level of security as that of a wired LAN.
The first of these protocols is Wired Equivalent Privacy (WEP). WEP provides security by encrypting data sent over radio waves from end point to end point.

The second WLAN security protocol is Wi-Fi Protected Access (WPA). WPA was developed as an upgrade to the security features of WEP. It works with existing WEP-enabled products but provides two key improvements: improved data encryption through the Temporal Key Integrity Protocol (TKIP), which scrambles the keys using a hashing algorithm and adds an integrity check to ensure that keys have not been tampered with; and user authentication through the Extensible Authentication Protocol (EAP).

Wireless Protocols

Specification           | Data Rate                                                           | Modulation Scheme                               | Security
802.11                  | 1 or 2 Mbps in the 2.4 GHz band                                     | FHSS, DSSS                                      | WEP and WPA
802.11a                 | 54 Mbps in the 5 GHz band                                           | OFDM                                            | WEP and WPA
802.11b/High Rate/Wi-Fi | 11 Mbps (with a fallback to 5.5, 2, and 1 Mbps) in the 2.4 GHz band | DSSS with CCK                                   | WEP and WPA
802.11g/Wi-Fi           | 54 Mbps in the 2.4 GHz band                                         | OFDM above 20 Mbps, DSSS with CCK below 20 Mbps | WEP and WPA

The Internet and Beyond

More than just a technology, the Internet has become a way of life for many people, and it has spurred a revolution of sorts for both public and private sharing of information. The most popular source of information about almost anything, the Internet is used daily by technical and non-technical users alike.

The Internet:  The Largest Network of All

With the meteoric rise in demand for connectivity, the Internet has become a major communications highway for millions of users. It is a decentralized system of linked networks that are worldwide in scope. It facilitates data communication services such as remote log-in, file transfer, electronic mail, the World Wide Web and newsgroups. It consists of independent hosts of computers that can designate which Internet services to use and which of their local services to make available to the global community.

Initially restricted to military and academic institutions, the Internet now operates on a three-level hierarchy composed of backbone networks, mid-level networks and stub networks. It is a full-fledged conduit for any and all forms of information and commerce. Internet websites now provide personal, educational, political and economic resources to virtually any point on the planet.

Intranet:  A Secure Internet-like Network for Organizations

With advancements in browser-based software for the Internet, many private organizations have implemented intranets. An intranet is a private network utilizing Internet-type tools, but available only within that organization. For large organizations, an intranet provides easy access to corporate information for designated employees.

Extranet:  A Secure Means for Sharing Information with Partners

While an intranet is used to disseminate confidential information within a corporation, an extranet is commonly used by companies to share data in a secure fashion with their business partners. Internet-type tools are used by content providers to update the extranet. Encryption and user authentication are provided to protect the information and to ensure that only designated people with the proper access privileges can view it.

Types of LAN Technology

Ethernet

Ethernet is the most popular physical layer LAN technology in use today. It defines the number of conductors that are required for a connection, the performance thresholds that can be expected, and provides the framework for data transmission. A standard Ethernet network can transmit data at a rate up to 10 Megabits per second (10 Mbps). Other LAN types include Token Ring, Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet, Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM) and LocalTalk.

Ethernet is popular because it strikes a good balance between speed, cost and ease of installation. These benefits, combined with wide acceptance in the computer marketplace and the ability to support virtually all popular network protocols, make Ethernet an ideal networking technology for most computer users today.

The Institute of Electrical and Electronics Engineers developed an Ethernet standard known as IEEE Standard 802.3. This standard defines rules for configuring an Ethernet network and also specifies how the elements in an Ethernet network interact with one another. By adhering to the IEEE standard, network equipment and network protocols can communicate efficiently.

Fast Ethernet

The Fast Ethernet standard (IEEE 802.3u) has been established for Ethernet networks that need higher transmission speeds. This standard raises the Ethernet speed limit from 10 Mbps to 100 Mbps with only minimal changes to the existing cable structure. Fast Ethernet provides faster throughput for video, multimedia, graphics and Internet surfing, as well as stronger error detection and correction.

There are three types of Fast Ethernet: 100BASE-TX for use with level 5 UTP cable; 100BASE-FX for use with fiber-optic cable; and 100BASE-T4 which utilizes an extra two wires for use with level 3 UTP cable. The 100BASE-TX standard has become the most popular due to its close compatibility with the 10BASE-T Ethernet standard.

Network managers who want to incorporate Fast Ethernet into an existing configuration must make several decisions: how many users at each site need the higher throughput, which segments of the backbone need to be reconfigured specifically for 100BASE-T, and what hardware is necessary to connect the 100BASE-T segments with existing 10BASE-T segments. Gigabit Ethernet is a future technology that promises a migration path beyond Fast Ethernet, so the next generation of networks will support even higher data transfer speeds.

Gigabit Ethernet

Gigabit Ethernet was developed to meet the need for faster communication networks with applications such as multimedia and Voice over IP (VoIP). Also known as "gigabit-Ethernet-over-copper" or 1000Base-T, GigE is a version of Ethernet that runs at speeds 10 times faster than 100Base-T. It is defined in the IEEE 802.3 standard and is currently used as an enterprise backbone. Existing Ethernet LANs with 10 and 100 Mbps cards can feed into a Gigabit Ethernet backbone to interconnect high performance switches, routers and servers.

From the data link layer of the OSI model upward, the look and implementation of Gigabit Ethernet is identical to that of Ethernet. The most important differences between Gigabit Ethernet and Fast Ethernet include the additional support of full duplex operation in the MAC layer and the data rates.

10 Gigabit Ethernet

10 Gigabit Ethernet is the fastest and most recent of the Ethernet standards. IEEE 802.3ae defines a version of Ethernet with a nominal rate of 10 Gbit/s, making it 10 times faster than Gigabit Ethernet.

Unlike other Ethernet systems, 10 Gigabit Ethernet is based entirely on the use of optical fiber connections. This developing standard is moving away from a LAN design that broadcasts to all nodes, toward a system which includes some elements of wide area routing. As it is still very new, which of the standards will gain commercial acceptance has yet to be determined.

Asynchronous Transfer Mode (ATM)

ATM is a cell-based fast-packet communication technique that can support data-transfer rates from sub-T1 speeds to 10 Gbps. ATM achieves its high speeds in part by transmitting data in fixed-size cells and dispensing with error-correction protocols. It relies on the inherent integrity of digital lines to ensure data integrity.

ATM can be integrated into an existing network as needed without having to update the entire network. Its fixed-length cell-relay operation is the signaling technology of the future and offers more predictable performance than variable-length frames. ATM networks are extremely versatile: an ATM network can connect points in a building, or across the country, and still be treated as a single network.

Power over Ethernet (PoE)

PoE is a solution in which an electrical current is run to networking hardware over the Ethernet Category 5 cable or higher. This solution does not require an extra AC power cord at the product location. This minimizes the amount of cable needed as well as eliminates the difficulties and cost of installing extra outlets.

LAN Technology Specifications

Name                    | IEEE Standard | Data Rate | Media Type       | Maximum Distance
Ethernet                | 802.3         | 10 Mbps   | 10Base-T         | 100 meters
Fast Ethernet/100Base-T | 802.3u        | 100 Mbps  | 100Base-TX       | 100 meters
                        |               |           | 100Base-FX       | 2000 meters
Gigabit Ethernet/GigE   | 802.3z        | 1000 Mbps | 1000Base-T       | 100 meters
                        |               |           | 1000Base-SX      | 275/550 meters
                        |               |           | 1000Base-LX      | 550/5000 meters
10 Gigabit Ethernet     | 802.3ae       | 10 Gbps   | 10GBase-SR       | 300 meters
                        |               |           | 10GBase-LX4      | 300 m MMF / 10 km SMF
                        |               |           | 10GBase-LR/ER    | 10 km / 40 km
                        |               |           | 10GBase-SW/LW/EW | 300 m / 10 km / 40 km

Token Ring

Token Ring is another form of network configuration. It differs from Ethernet in that all messages are transferred in one direction along the ring at all times. Token Ring networks sequentially pass a “token” to each connected device. When the token arrives at a particular computer (or device), the recipient is allowed to transmit data onto the network. Since only one device may be transmitting at any given time, no data collisions occur. Access to the network is guaranteed, and time-sensitive applications can be supported. However, these benefits come at a price. Component costs are usually higher, and the networks themselves are considered to be more complex and difficult to implement. Various PC vendors have been proponents of Token Ring networks.

Networking and Ethernet Basics

Protocols

After a physical connection has been established, network protocols define the standards that allow computers to communicate. A protocol establishes the rules and encoding specifications for sending data. This defines how computers identify one another on a network, the form that the data should take in transit, and how this information is processed once it reaches its final destination. Protocols also define procedures for determining the type of error checking that will be used, the data compression method, if one is needed, how the sending device will indicate that it has finished sending a message, how the receiving device will indicate that it has received a message, and the handling of lost or damaged transmissions or "packets".

The main types of network protocols in use today are: TCP/IP (for UNIX, Windows NT, Windows 95 and other platforms); IPX (for Novell NetWare); DECnet (for networking Digital Equipment Corp. computers); AppleTalk (for Macintosh computers), and NetBIOS/NetBEUI (for LAN Manager and Windows NT networks).

Although each network protocol is different, they all share the same physical cabling. This common method of accessing the physical network allows multiple protocols to peacefully coexist over the network media, and allows the builder of a network to use common hardware for a variety of protocols. This concept is known as "protocol independence," which means that devices which are compatible at the physical and data link layers allow the user to run many different protocols over the same medium.

The Open System Interconnection Model

The Open System Interconnection (OSI) model specifies how dissimilar computing devices such as Network Interface Cards (NICs), bridges and routers exchange data over a network by offering a networking framework for implementing protocols in seven layers. Beginning at the application layer, control is passed from one layer to the next. The following describes the seven layers as defined by the OSI model, shown in the order they occur whenever a user transmits information.

Layer 7: Application

This layer supports application and end-user processes. Within this layer, communication partners, services and constraints are all identified, and user privacy is considered. File transfers, email, Telnet and FTP applications are all provided within this layer.

Layer 6: Presentation (Syntax)

Within this layer, information is translated back and forth between application and network formats.  This translation transforms the information into data the application layer and network recognize regardless of encryption and formatting.

Layer 5: Session

Within this layer, connections between applications are made, managed and terminated as needed to allow for data exchanges between applications at each end of a dialogue.

Layer 4: Transport

Complete data transfer is ensured as information is transferred transparently between systems in this layer. The transport layer also assures appropriate flow control and end-to-end error recovery.

Layer 3: Network

Using switching and routing technologies, this layer is responsible for creating virtual circuits to transmit information from node to node. Other functions include routing, forwarding, addressing, internetworking, error and congestion control, and packet sequencing.

Layer 2: Data Link

Information in data packets is encoded and decoded into bits within this layer. Flow control and frame synchronization are handled here, and errors originating in the physical layer are corrected using transmission protocol knowledge and management. This layer consists of two sublayers: the Media Access Control (MAC) layer, which controls the way networked computers gain access to data and permission to transmit it, and the Logical Link Control (LLC) layer, which controls frame synchronization, flow control and error checking.

Layer 1: Physical

This layer enables hardware to send and receive data over a carrier such as cabling, a card or other physical means. It conveys the bitstream through the network at the electrical and mechanical level. Fast Ethernet, RS232, and ATM are all protocols with physical layer components.

This order is then reversed as information is received, so that the physical layer is the first and application layer is the final layer that information passes through.
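The seven layers and the transmit/receive ordering described above, expressed as a small lookup table:

```python
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",
    2: "Data Link",
    1: "Physical",
}

def transmit_path():
    """Order the layers are traversed when a user transmits information."""
    return [OSI_LAYERS[n] for n in range(7, 0, -1)]

def receive_path():
    """The reverse order, traversed when information is received."""
    return list(reversed(transmit_path()))

print(transmit_path()[0])  # transmission starts at the Application layer
print(receive_path()[0])   # reception starts at the Physical layer
```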

Standard Ethernet Code

In order to understand standard Ethernet code, one must understand what each digit means. Following is a guide:

Guide to Ethernet Coding

10     | at the beginning means the network operates at 10 Mbps.
BASE   | means the type of signaling used is baseband.
2 or 5 | at the end indicates the maximum cable length, in hundreds of meters (rounded).
T      | at the end stands for twisted-pair cable.
X      | at the end stands for full-duplex-capable cable.
FL     | at the end stands for fiber-optic cable.

For example: 100BASE-TX indicates a Fast Ethernet connection (100 Mbps) that uses a twisted-pair cable capable of full-duplex transmissions.
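Following the guide above, a small decoder can be sketched. It only understands the handful of designators listed here, so it is an illustration, not a complete IEEE naming parser; the suffix descriptions are paraphrased from the guide.

```python
SUFFIX_MEDIUM = {
    "2": "thin coax (length code)",
    "5": "thick coax (length code)",
    "T": "twisted-pair cable",
    "TX": "twisted-pair cable, full-duplex capable",
    "FL": "fiber-optic cable",
}

def decode_ethernet_name(name):
    """Split e.g. '100BASE-TX' into speed, signaling, and medium."""
    speed, _, suffix = name.partition("BASE")
    suffix = suffix.lstrip("-")
    return {
        "speed": f"{speed} Mbps",
        "signaling": "baseband",
        "medium": SUFFIX_MEDIUM.get(suffix, "unknown"),
    }

print(decode_ethernet_name("100BASE-TX"))
```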

Media

An important part of designing and installing an Ethernet is selecting the appropriate Ethernet medium. There are four major types of media in use today: Thickwire for 10BASE5 networks; thin coax for 10BASE2 networks; unshielded twisted pair (UTP) for 10BASE-T networks; and fiber optic for 10BASE-FL or Fiber-Optic Inter-Repeater Link (FOIRL) networks. This wide variety of media reflects the evolution of Ethernet and also points to the technology's flexibility. Thickwire was one of the first cabling systems used in Ethernet, but it was expensive and difficult to use. This evolved to thin coax, which is easier to work with and less expensive. It is important to note that each type of Ethernet (Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet) has its own preferred media types.

The most popular wiring schemes are 10BASE-T and 100BASE-TX, which use unshielded twisted pair (UTP) cable. This is similar to telephone cable and comes in a variety of grades, with each higher grade offering better performance. Level 5 cable is the highest, most expensive grade, offering support for transmission rates of up to 100 Mbps. Level 4 and level 3 cable are less expensive, but cannot support the same data throughput speeds; level 4 cable can support speeds of up to 20 Mbps; level 3 up to 16 Mbps. The 100BASE-T4 standard allows for support of 100 Mbps Ethernet over level 3 cables, but at the expense of adding another pair of wires (4 pair instead of the 2 pair used for 10BASE-T). For most users, this is an awkward scheme and therefore 100BASE-T4 has seen little popularity. Level 2 and level 1 cables are not used in the design of 10BASE-T networks.

For specialized applications, fiber-optic, or 10BASE-FL, Ethernet segments are popular. Fiber-optic cable is more expensive, but it is invaluable in situations where electronic emissions and environmental hazards are a concern. Fiber-optic cable is often used in inter-building applications to insulate networking equipment from electrical damage caused by lightning. Because it does not conduct electricity, fiber-optic cable can also be useful in areas where heavy electromagnetic interference is present, such as on a factory floor. The Ethernet standard allows for fiber-optic cable segments up to two kilometers long, making fiber-optic Ethernet perfect for connecting nodes and buildings that are otherwise not reachable with copper media.

Cable Grade Capabilities

Cable Name | Makeup                                                        | Frequency Support | Data Rate       | Network Compatibility
Cat-5      | 4 twisted pairs of copper wire, terminated by RJ45 connectors | 100 MHz           | Up to 1000 Mbps | ATM, Token Ring, 1000Base-T, 100Base-TX, 10Base-T
Cat-5e     | 4 twisted pairs of copper wire, terminated by RJ45 connectors | 100 MHz           | Up to 1000 Mbps | 10Base-T, 100Base-TX, 1000Base-T
Cat-6      | 4 twisted pairs of copper wire, terminated by RJ45 connectors | 250 MHz           | 1000 Mbps       | 10Base-T, 100Base-TX, 1000Base-T

Topologies

Network topology is the geometric arrangement of nodes and cable links in a LAN. Two general configurations are used, bus and star. These two topologies define how nodes are connected to one another in a communication network. A node is an active device connected to the network, such as a computer or a printer. A node can also be a piece of networking equipment such as a hub, switch or a router.

A bus topology consists of nodes linked together in a series with each node connected to a long cable or bus. Many nodes can tap into the bus and begin communication with all other nodes on that cable segment. A break anywhere in the cable will usually cause the entire segment to be inoperable until the break is repaired. Examples of bus topology include 10BASE2 and 10BASE5.

[Figure: General Topology Configurations (topology examples)]

10BASE-T Ethernet and Fast Ethernet use a star topology, with all segments radiating from a central point. Generally a computer is located at one end of the segment, and the other end is terminated in a central location with a hub or a switch. Because UTP is often run in conjunction with telephone cabling, this central location can be a telephone closet or other area where it is convenient to connect the UTP segment to a backbone. The primary advantage of this type of network is reliability: if one of these 'point-to-point' segments has a break, it will only affect the two nodes on that link. Other computer users on the network continue to operate as if that segment were non-existent.

Collisions

Ethernet is a shared medium, so there are rules for sending packets of data to avoid conflicts and to protect data integrity. Nodes determine when the network is available for sending packets. It is possible that two or more nodes at different locations will attempt to send data at the same time. When this happens, a packet collision occurs.

Minimizing collisions is a crucial element in the design and operation of networks. Increased collisions are often the result of too many users on the network, which leads to competition for network bandwidth and can slow the performance of the network from the user's point of view. Segmenting the network, i.e., dividing it into different pieces logically joined together with a bridge or switch, is one way of relieving an overcrowded network.

CSMA/CD

In order to manage collisions, Ethernet uses a protocol called Carrier Sense Multiple Access/Collision Detection (CSMA/CD). CSMA/CD is a type of contention protocol that defines how to respond when a collision is detected, that is, when two devices attempt to transmit packets simultaneously. Ethernet allows each device to send messages at any time without having to wait for network permission; thus, there is a high probability that devices may try to send messages at the same time.

After detecting a collision, each device that was transmitting a packet delays a random amount of time before re-transmitting the packet. If another collision occurs, the device doubles its maximum waiting time before trying to re-transmit.
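This retry scheme, known as truncated binary exponential backoff, can be sketched in a few lines. This is a simplified sketch, not a driver implementation; the slot time and attempt limit shown are the standard 10 Mbps Ethernet values, and the function name is invented for illustration:

```python
import random

SLOT_TIME_US = 51.2   # 10 Mbps Ethernet slot time, in microseconds
MAX_ATTEMPTS = 16     # the frame is dropped after 16 failed attempts

def backoff_delay(attempt):
    """Return a random delay (in microseconds) before retransmission
    attempt number `attempt` (1-based). The device picks a delay of
    0..2^k - 1 slot times, where k doubles the range after each
    collision and is capped at 10."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    k = min(attempt, 10)                # cap the exponent at 10
    slots = random.randint(0, 2**k - 1)
    return slots * SLOT_TIME_US

print(backoff_delay(1))  # 0.0 or 51.2: one of two possible slots
```

Note that only the *range* of possible delays doubles after each collision; the actual delay is still drawn at random, which keeps repeat collisions between the same two nodes unlikely.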

Ethernet Products

The standards and technology just discussed will help define the specific products that network managers use to build Ethernet networks. The following presents the key products needed to build an Ethernet LAN.

Transceivers

Transceivers are also referred to as Medium Attachment Units (MAUs). They are used to connect nodes to the various Ethernet media. Most computers and network interface cards contain a built-in 10BASE-T or 10BASE2 transceiver which allows them to be connected directly to Ethernet without the need for an external transceiver.

Many Ethernet devices provide an attachment unit interface (AUI) connector to allow the user to connect to any type of medium via an external transceiver. The AUI connector consists of a 15-pin D-shell type connector, female on the computer side, male on the transceiver side.

For Fast Ethernet networks, a new interface called the MII (Media Independent Interface) was developed to offer a flexible way to support 100 Mbps connections. The MII is a popular way to connect 100BASE-FX links to copper-based Fast Ethernet devices.

Network Interface Cards

Network Interface Cards, commonly referred to as NICs, are used to connect a PC to a network. The NIC provides a physical connection between the networking cable and the computer's internal bus. Different computers have different bus architectures. PCI bus slots are most commonly found on 486/Pentium PCs and ISA expansion slots are commonly found on 386 and older PCs. NICs come in three basic varieties: 8-bit, 16-bit, and 32-bit. The larger the number of bits that can be transferred to the NIC, the faster the NIC can transfer data to the network cable. Most NICs are designed for a particular type of network, protocol, and medium, though some can serve multiple networks.

Many NIC adapters comply with plug-and-play specifications. On these systems, NICs are automatically configured without user intervention, while on non-plug-and-play systems, configuration is done manually through a set-up program and/or DIP switches.

Cards are available to support almost all networking standards. Fast Ethernet NICs are often 10/100 capable, and will automatically set themselves to the appropriate speed. Gigabit Ethernet NICs are often 10/100/1000 capable, using auto-negotiation to match the speed of the attached network. Full-duplex networking is another option, where a dedicated connection to a switch allows a NIC to send and receive simultaneously, effectively doubling bandwidth.

Hubs/Repeaters

Hubs/repeaters are used to connect together two or more Ethernet segments of any type of medium. In larger designs, signal quality begins to deteriorate as segments exceed their maximum length. Hubs provide the signal amplification required to allow a segment to be extended a greater distance. A hub repeats any incoming signal to all ports.

Ethernet hubs are necessary in star topologies such as 10BASE-T. A multi-port twisted pair hub allows several point-to-point segments to be joined into one network. One end of the point-to-point link is attached to the hub and the other is attached to the computer. If the hub is attached to a backbone, then all computers at the end of the twisted pair segments can communicate with all the hosts on the backbone. The number and type of hubs in any one collision domain is limited by the Ethernet rules. These repeater rules are discussed in more detail later.

A very important fact to note about hubs is that they only allow users to share Ethernet. A network of hubs/repeaters is termed a "shared Ethernet," meaning that all members of the network are contending for transmission of data onto a single network (collision domain). A hub/repeater propagates all electrical signals including the invalid ones. Therefore, if a collision or electrical interference occurs on one segment, repeaters make it appear on all others as well. This means that individual members of a shared network will only get a percentage of the available network bandwidth.

Basically, the number and type of hubs in any one collision domain for 10Mbps Ethernet is limited by the following rules:

Network Type    Max Nodes Per Segment    Max Distance Per Segment
10BASE-T        2                        100m
10BASE-FL       2                        2000m

Ethernet Tutorial - Part II: Adding Speed

The phrase “you can never get too much of a good thing” can certainly be applied to networking. Once the benefits of networking are demonstrated, there is a thirst for even faster, more reliable connections to support a growing number of users and highly-complex applications.

How to obtain that added bandwidth can be an issue. While repeaters allow LANs to extend beyond normal distance limitations, they still limit the number of nodes that can be supported.
Bridges and switches on the other hand allow LANs to grow significantly larger by virtue of their ability to support full Ethernet segments on each port. Additionally, bridges and switches selectively filter network traffic to only those packets needed on each segment, significantly increasing throughput on each segment and on the overall network.

Network managers continue to look for better performance and more flexibility for network topologies, bridges and switches. To provide a better understanding of these and related technologies, this tutorial will cover:

- Bridges
- Ethernet Switches
- Routers
- Network Design Criteria
- When and Why Ethernets Become Too Slow
- Increasing Performance with Fast and Gigabit Ethernet

Bridges

Bridges connect two LAN segments of similar or dissimilar types, such as Ethernet and Token Ring. This allows two Ethernet segments to behave like a single Ethernet, so any pair of computers on the extended Ethernet can communicate. Bridges are transparent: computers cannot tell whether a bridge separates them.

Bridges map the Ethernet addresses of the nodes residing on each network segment and allow only necessary traffic to pass through the bridge. When a packet is received by the bridge, the bridge determines the destination and source segments. If the segments are the same, the packet is dropped ("filtered"); if the segments are different, the packet is "forwarded" to the correct segment. Additionally, bridges do not forward bad or misaligned packets.
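The filter/forward decision amounts to a table lookup. Here is a minimal sketch assuming a hypothetical address table; the MAC addresses and segment names are invented for illustration:

```python
# Hypothetical address table mapping MAC address -> segment.
address_table = {
    "00:11:22:33:44:55": "segment A",
    "cc:dd:ee:ff:00:11": "segment A",
    "66:77:88:99:aa:bb": "segment B",
}

def bridge_decision(src, dst):
    """Return 'filter' if source and destination are on the same
    segment, otherwise 'forward'. An unknown destination must be
    forwarded (flooded), since the bridge cannot rule anything out."""
    src_seg = address_table.get(src)
    dst_seg = address_table.get(dst)
    if dst_seg is None:
        return "forward"    # unknown destination: flood to other segments
    return "filter" if src_seg == dst_seg else "forward"

print(bridge_decision("00:11:22:33:44:55", "66:77:88:99:aa:bb"))  # forward
```

Note the asymmetry: filtering requires positive knowledge that both nodes share a segment, while forwarding is the safe default.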

Bridges are also called "store-and-forward" devices because they look at the whole Ethernet packet before making filtering or forwarding decisions. Filtering packets and regenerating forwarded packets enables bridging technology to split a network into separate collision domains. Bridges are able to isolate network problems; if interference occurs on one of two segments, the bridge will receive and discard an invalid frame keeping the problem from affecting the other segment. This allows for greater distances and more repeaters to be used in the total network design.

Dealing with Loops

Most bridges are self-learning: they determine the Ethernet addresses in use on each segment by building a table as packets pass through the network. However, this self-learning capability dramatically raises the potential for network loops in networks that have many bridges. A loop presents conflicting information on which segment a specific address is located and forces the device to forward all traffic. The spanning tree algorithm (defined in the IEEE 802.1D specification) describes how switches and bridges can communicate to avoid network loops.
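Self-learning can be sketched as follows: record each packet's source address against the port it arrived on, then use that table to filter, forward, or flood. The class and the single-letter addresses here are hypothetical, not any product's implementation:

```python
class LearningBridge:
    def __init__(self):
        self.table = {}                 # MAC address -> port it was learned on

    def receive(self, src, dst, in_port):
        """Learn the source address, then decide where the packet goes:
        a specific port if the destination is known, "flood" (all other
        ports) if it is not, or "filter" if the destination sits on the
        same port the packet arrived on."""
        self.table[src] = in_port       # learn/refresh the source mapping
        out = self.table.get(dst)
        if out is None:
            return "flood"
        return "filter" if out == in_port else out

bridge = LearningBridge()
print(bridge.receive("A", "B", in_port=1))  # flood: B not yet learned
print(bridge.receive("B", "A", in_port=2))  # 1: A was learned on port 1
```

This is exactly why loops are dangerous: if the same source address is seen arriving on two ports, the table entry flaps and the device can no longer make a stable decision.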

Ethernet Switches

Ethernet switches are an expansion of the Ethernet bridging concept. The advantage of using a switched Ethernet is parallelism. Up to one-half of the computers connected to a switch can send data at the same time.

LAN switches link multiple networks together and have two basic architectures: cut-through and store-and-forward. In the past, cut-through switches were faster because they examined only the packet's destination address before forwarding it on to its destination segment. A store-and-forward switch works like a bridge in that it accepts and analyzes the entire packet before forwarding it to its destination.

Historically, store-and-forward took more time to examine the entire packet, although one benefit was that it allowed the switch to catch certain packet errors and keep them from propagating through the network. Today, the speed of store-and-forward switches has caught up with cut-through switches so the difference between the two is minimal. Also, there are a large number of hybrid switches available that mix both cut-through and store-and-forward architectures.
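The contrast between the two architectures can be shown schematically. This is a sketch, not any vendor's implementation: CRC-32 stands in for the real Ethernet frame check sequence, and the frame layout is simplified to destination address, payload, and trailing checksum:

```python
import zlib

def store_and_forward(frame):
    """Accept the whole frame, verify its checksum, and only then
    forward it; corrupted frames are dropped instead of propagated."""
    body, fcs = frame[:-4], frame[-4:]
    if zlib.crc32(body).to_bytes(4, "little") != fcs:
        return None           # bad frame: drop it here
    return body[0:6]          # forward toward the destination address

def cut_through(frame):
    """Begin forwarding as soon as the 6-byte destination address has
    arrived; any error later in the frame propagates downstream."""
    return frame[0:6]
```

The trade-off is visible in the code: cut-through never sees the checksum, so it cannot stop a damaged frame, while store-and-forward pays the latency of buffering the whole frame in exchange for error isolation.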

Both cut-through and store-and-forward switches separate a network into collision domains, allowing network design rules to be extended. Each of the segments attached to an Ethernet switch has a full 10 Mbps of bandwidth shared by fewer users, which results in better performance (as opposed to hubs that only allow bandwidth sharing from a single Ethernet). Newer switches today offer high-speed links, either Fast Ethernet, Gigabit Ethernet, 10 Gigabit Ethernet or ATM. These are used to link switches together or give added bandwidth to high-traffic servers. A network composed of a number of switches linked together via uplinks is termed a "collapsed backbone" network.

[Figure: Switches and Dedicated Ethernet Examples]

Routers

A router is a device that forwards data packets along networks, and determines which way to send each data packet based on its current understanding of the state of its connected networks. Routers are typically connected to at least two networks, commonly two LANs or WANs or a LAN and its Internet Service Provider’s (ISPs) network. Routers are located at gateways, the places where two or more networks connect.

Routers filter out network traffic by specific protocol rather than by packet address. Routers also divide networks logically instead of physically. An IP router can divide a network into various subnets so that only traffic destined for particular IP addresses can pass between segments. Network speed often decreases due to this type of intelligent forwarding. Such filtering takes more time than that exercised in a switch or bridge, which only looks at the Ethernet address. However, in more complex networks, overall efficiency is improved by using routers.
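The subnet test an IP router applies can be illustrated with Python's standard ipaddress module; the subnets and addresses below are hypothetical:

```python
import ipaddress

# Two hypothetical subnets hanging off one router.
subnet_a = ipaddress.ip_network("192.168.1.0/24")
subnet_b = ipaddress.ip_network("192.168.2.0/24")

def needs_routing(src, dst):
    """Traffic must cross the router only when the source and the
    destination do not fall within the same subnet."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    same = any(src in net and dst in net for net in (subnet_a, subnet_b))
    return not same

print(needs_routing("192.168.1.10", "192.168.1.20"))  # False: same subnet
print(needs_routing("192.168.1.10", "192.168.2.20"))  # True: must be routed
```

Comparing network prefixes like this is more work than the flat MAC-address lookup a switch or bridge performs, which is the source of the forwarding delay mentioned above.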

Network Design Criteria

Ethernets and Fast Ethernets have design rules that must be followed in order to function correctly. The maximum number of nodes, number of repeaters and maximum segment distances are defined by the electrical and mechanical design properties of each type of Ethernet media.

A network using repeaters, for instance, functions with the timing constraints of Ethernet. Although electrical signals on the Ethernet media travel near the speed of light, it still takes a finite amount of time for the signal to travel from one end of a large Ethernet to another. The Ethernet standard assumes it will take roughly 50 microseconds for a signal to reach its destination.

Ethernet is subject to the "5-4-3" rule of repeater placement: the network can only have five segments connected; it can only use four repeaters; and of the five segments, only three can have users attached to them; the other two must be inter-repeater links.

If the design of the network violates these repeater and placement rules, timing guidelines will not be met and the sending station will resend the packet. This can lead to lost packets and excessive resent packets, which can slow network performance and create trouble for applications. Newer Ethernet standards (Fast Ethernet, Gigabit Ethernet, and 10 Gigabit Ethernet) have modified repeater rules, since the minimum packet size takes less time to transmit than in 10 Mbps Ethernet, and the shorter maximum network diameter allows for fewer repeaters. In Fast Ethernet networks, there are two classes of repeaters. Class I repeaters have a latency of 0.7 microseconds or less and are limited to one repeater per network. Class II repeaters have a latency of 0.46 microseconds or less and are limited to two repeaters per network. The following are the distance (diameter) characteristics for these types of Fast Ethernet repeater combinations:

Fast Ethernet             Copper    Fiber
No Repeaters              100m      412m*
One Class I Repeater      200m      272m
One Class II Repeater     200m      272m
Two Class II Repeaters    205m      228m

* Full Duplex Mode 2 km

When conditions require greater distances or an increase in the number of nodes/repeaters, then a bridge, router or switch can be used to connect multiple networks together. These devices join two or more separate networks, allowing network design criteria to be restored. Switches allow network designers to build large networks that function well. The reduction in costs of bridges and switches reduces the impact of repeater rules on network design.

Each network connected via one of these devices is referred to as a separate collision domain in the overall network.

When and Why Ethernets Become Too Slow

As more users are added to a shared network or as applications requiring more data are added, performance deteriorates. This is because all users on a shared network are competitors for the Ethernet bus. On a moderately loaded 10Mbps Ethernet network that is shared by 30-50 users, that network will only sustain throughput in the neighborhood of 2.5Mbps after accounting for packet overhead, interpacket gaps and collisions.
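That figure implies a very small average share per user, as a back-of-the-envelope calculation shows (the function name is invented for illustration):

```python
def per_user_bandwidth_kbps(sustained_mbps, users):
    """Average per-user share of a shared Ethernet's sustained throughput."""
    return sustained_mbps * 1000 / users

# A 10 Mbps segment sustaining ~2.5 Mbps, shared by 50 users:
print(per_user_bandwidth_kbps(2.5, 50))  # 50.0 kbps per user, on average
```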

Increasing the number of users (and therefore packet transmissions) creates a higher collision potential. Collisions occur when two or more nodes attempt to send information at the same time. When they realize that a collision has occurred, each node backs off for a random time before attempting another transmission. With shared Ethernet, the likelihood of collision increases as more nodes are added to the shared collision domain of the shared Ethernet. One of the steps to alleviate this problem is to segment traffic with a bridge or switch. A switch can replace a hub and improve network performance. For example, an eight-port switch can support eight Ethernets, each running at a full 10 Mbps. Another option is to dedicate one or more of these switched ports to a high-traffic device such as a file server.

Greater throughput is required to support multimedia and video applications. When added to the network, Ethernet switches provide a number of enhancements over shared networks that can support these applications. Foremost is the ability to divide networks into smaller and faster segments. Ethernet switches examine each packet, determine where that packet is destined and then forward that packet to only those ports to which the packet needs to go. Modern switches are able to do all these tasks at "wirespeed," that is, without delay.

Aside from deciding when to forward or when to filter the packet, Ethernet switches also completely regenerate the Ethernet packet. This regeneration and re-timing allows each port on a switch to be treated as a complete Ethernet segment, capable of supporting the full cable length along with all of the repeater restrictions. (In half-duplex Gigabit Ethernet, the standard 512-bit slot time used by CSMA/CD is too short for a 100m copper run, so Carrier Extension is used to guarantee a 512-byte slot time.)

Additionally, bad packets are identified by Ethernet switches and immediately dropped from any future transmission. This "cleansing" activity keeps problems isolated to a single segment and keeps them from disrupting other network activity. This aspect of switching is extremely important in a network environment where hardware failures are to be anticipated. Full duplex doubles the bandwidth on a link, and is another method used to increase bandwidth to dedicated workstations or servers. Full duplex modes are available for standard Ethernet, Fast Ethernet, and Gigabit Ethernet. To use full duplex, special network interface cards are installed in the server or workstation, and the switch is programmed to support full duplex operation.

Increasing Performance with Fast and Gigabit Ethernet

Implementing Fast or Gigabit Ethernet to increase performance is the next logical step when Ethernet becomes too slow to meet user needs. Higher traffic devices can be connected to switches or each other via Fast Ethernet or Gigabit Ethernet, providing a great increase in bandwidth. Many switches are designed with this in mind, and have Fast Ethernet uplinks available for connection to a file server or other switches. Eventually, Fast Ethernet can be deployed to user desktops by equipping all computers with Fast Ethernet network interface cards and using Fast Ethernet switches and repeaters.

With an understanding of the underlying technologies and products in use in Ethernet networks, the next tutorial will advance to a discussion of some of the most popular real-world applications.

Part III: Sharing Devices

A Look at Device Server Technology

Device networking starts with a device server, which allows almost any device with serial connectivity to connect to Ethernet networks quickly and cost-effectively. These products include all of the elements needed for device networking and, because of their scalability, they do not require a server or gateway.

This tutorial provides an introduction to the functionality of a variety of device servers.  It will cover print servers, terminal servers and console servers, as well as embedded and external device servers.  For each of these categories, there will also be a review of specific Lantronix offerings.

An Introduction to Device Servers

A device server is characterized by a minimal operating architecture that requires no per-seat network operating system license, and client access that is independent of any operating system or proprietary protocol. In addition, the device server is a "closed box," delivering extreme ease of installation and minimal maintenance, and it can be managed by the client remotely via a web browser.

By virtue of their independent operating system, protocol independence, small size and flexibility, device servers are able to meet the demands of virtually any network-enabling application. The demand for device servers is rapidly increasing because organizations need to leverage their networking infrastructure investment across all of their resources. Many currently installed devices lack network ports or require dedicated serial connections for management -- device servers allow those devices to become connected to the network.

Device servers are currently used in a wide variety of environments in which machinery, instruments, sensors and other discrete devices generate data that was previously inaccessible through enterprise networks. They are also used for security systems, point-of-sale applications, network management and many other applications where network access to a device is required.
As device servers become more widely adopted and implemented into specialized applications, we can expect to see variations in size, mounting capabilities and enclosures. Device servers are also available as embedded devices, capable of providing instant networking support for developers of future products where connectivity will be required.

Print servers, terminal servers, remote access servers and network time servers are examples of device servers which are specialized for particular functions. Each of these types of servers has unique configuration attributes in hardware or software that help them to perform best in their particular arena.

External Device Servers

External device servers are stand-alone serial-to-wireless (802.11b) or serial-to-Ethernet device servers that can put just about any device with serial connectivity on the network in a matter of minutes so it can be managed remotely.

External Device Servers from Lantronix

Lantronix external device servers provide the ability to remotely control, monitor, diagnose and troubleshoot equipment over a network or the Internet.  By opting for a powerful external device with full network and web capabilities, companies are able to preserve their present equipment investments.   

Lantronix offers a full line of external device servers:  Ethernet or wireless, advanced encryption for maximum security, and device servers designed for commercial or heavy-duty industrial applications.

Wireless

Providing a whole new level of flexibility and mobility, these devices allow users to connect devices that are inaccessible via cabling.  Users can also add intelligence to their businesses by putting mobile devices, such as medical instruments or warehouse equipment, on networks.

Security

Ideal for protecting data such as business transactions, customer information, financial records, etc., these devices provide enhanced security for networked devices.

Commercial

These devices enable users to network-enable their existing equipment (such as POS devices, AV equipment, medical instruments, etc.) simply and cost-effectively, without the need for special software.

Industrial

For heavy-duty factory applications, Lantronix offers a full complement of industrial-strength external device servers designed for use with manufacturing, assembly and factory automation equipment. All models support Modbus industrial protocols.

Embedded Device Servers

Embedded device servers integrate all the required hardware and software into a single embedded device.  They use a device’s serial port to web-enable or network-enable products quickly and easily without the complexities of extensive hardware and software integration. Embedded device servers are typically plug-and-play solutions that operate independently of a PC and usually include a wireless or Ethernet connection, operating system, an embedded web server, a full TCP/IP protocol stack, and some sort of encryption for secure communications.

Embedded Device Servers from Lantronix

Lantronix recognizes that design engineers are looking for a simple, cost-effective and reliable way to seamlessly embed network connectivity into their products.  In a fraction of the time it would take to develop a custom solution, Lantronix embedded device servers provide a variety of proven, fully integrated products.  OEMs can add full Ethernet and/or wireless connectivity to their products so they can be managed over a network or the Internet.

Module

These devices allow users to network-enable just about any electronic device with Ethernet and/or wireless connectivity.

Board-Level

Users can integrate networking capabilities onto the circuit boards of equipment like factory machinery, security systems and medical devices.

Single-Chip Solutions

These powerful, system-on-chip solutions help users address networking issues early in the design cycle to support the most popular embedded networking technologies.

Terminal Servers

Terminal servers are used to enable terminals to transmit data to and from host computers across LANs, without requiring each terminal to have its own direct connection. And while the terminal server's existence is still justified by convenience and cost considerations, its inherent intelligence provides many more advantages. Among these is enhanced remote monitoring and control. Terminal servers that support protocols like SNMP make networks easier to manage.
Devices that are attached to a network through a server can be shared between terminals and hosts at both the local site and throughout the network. A single terminal may be connected to several hosts at the same time (in multiple concurrent sessions), and can switch between them. Terminal servers are also used to network devices that have only serial outputs. A connection between serial ports on different servers is opened, allowing data to move between the two devices.

Given its natural translation ability, a multi-protocol server can perform conversions between the protocols it knows such as LAT and TCP/IP. While server bandwidth is not adequate for large file transfers, it can easily handle host-to-host inquiry/response applications, electronic mailbox checking, etc. In addition, it is far more economical than the alternatives -- acquiring expensive host software and special-purpose converters. Multiport device and print servers give users greater flexibility in configuring and managing their networks.

Whether it is moving printers and other peripherals from one network to another, expanding the dimensions of interoperability or preparing for growth, terminal servers can fulfill these requirements without major rewiring. Today, terminal servers offer a full range of functionality, ranging from 8 to 32 ports, giving users the power to connect terminals, modems, servers and virtually any serial device for remote access over IP networks.

Print Servers

Print servers enable printers to be shared by other users on the network. Supporting parallel and/or serial interfaces, a print server accepts print jobs from any user on the network using supported protocols and manages those jobs on each appropriate printer.

The earliest print servers were external devices, which supported printing via parallel or serial ports on the device. Typically, only one or two protocols were supported. The latest generations of print servers support multiple protocols, have multiple parallel and serial connection options and, in some cases, are small enough to fit directly on the parallel port of the printer itself. Some printers have embedded or internal print servers. This design has an integral communication benefit between printer and print server, but lacks flexibility if the printer has physical problems.

Print servers generally do not contain a large amount of memory; they simply store incoming print jobs in a queue. When the desired printer becomes available, the server transmits the data to the appropriate printer port, printing each job in the order in which print requests were received, regardless of the protocol used or the size of the job.

[Figure: Terminal / Printer Server Example]

Device Server Technology in the Data Center

The IT/data center is considered the pulse of any modern business.  Remote management enables users to monitor and manage global networks, systems and IT equipment from anywhere and at any time.  Device servers play a major role in allowing for the remote capabilities and flexibility required for businesses to maximize personnel resources and technology ROI.

Console servers provide the flexibility of both standard and emergency remote access via attachment to the network or to a modem. Remote console management serves as a valuable tool to help maximize system uptime and minimize operating costs.

Secure console servers provide familiar tools to leverage the console or emergency management port built into most serial devices, including servers, switches, routers, telecom equipment - anything in a rack - even if the network is down. They also supply complete in-band and out-of-band local and remote management for the data center with tools such as telnet and SSH that help manage the performance and availability of critical business information systems.

Console Management Solutions from Lantronix

Lantronix provides complete in-band and out-of-band local and remote management solutions for the data center. SecureLinx™ secure console management products give IT managers unsurpassed ability to securely and remotely manage serial devices, including servers, switches, routers, telecom equipment - anything in a rack - even if the network is down.

The ability to manage virtually any electronic device over a network or the Internet is changing the way the world works and does business. With the ability to remotely manage, monitor, diagnose and control equipment, a new level of functionality is added to networking — providing business with increased intelligence and efficiency.  Lantronix leads the way in developing new network intelligence and has been a tireless pioneer in machine-to-machine (M2M) communication technology.

We hope this introduction to networking has been helpful and informative. This tutorial was meant to be an overview and not a comprehensive guide that explains everything there is to know about planning, installing, administering and troubleshooting a network. There are many Internet websites, books and magazines available that explain all aspects of computer networks, from LANs to WANs, network hardware to running cable. To learn about these subjects in greater detail, check your local bookstore, software retailer or newsstand for more information.

Fast Ethernet Tutorial

A Guide to Using Fast Ethernet and Gigabit Ethernet

Network managers today must contend with faster media and mounting bandwidth demands while playing "traffic cop" to an ever-growing network infrastructure. Now, more than ever, it's imperative for them to understand the basics of using various Ethernet technologies to manage their networks.

This tutorial will explain the basic principles of Fast Ethernet and Gigabit Ethernet technologies, describing how each improves on basic Ethernet technology. It will offer guidance on how to implement these technologies as well as some “rules of the road” for successful repeater selection and usage.

It is nearly impossible to discuss networking without the mention of Ethernet, Fast Ethernet and Gigabit Ethernet. But, in order to determine which form is needed for your application, it’s important to first understand what each provides and how they work together.

A good starting point is to explain what Ethernet is. Simply, Ethernet is a very common method of networking computers in a LAN using copper cabling. Capable of providing fast and constant connections, Ethernet can handle about 10,000,000 bits per second and can be used with almost any kind of computer.

While that may sound fast to those less familiar with networking, there is a very strong demand for even higher transmission speeds, which has been realized by the Fast Ethernet and Gigabit Ethernet specifications (IEEE 802.3u and IEEE 802.3z respectively). These LAN (local area network) standards have raised the Ethernet speed limit from 10 megabits per second (Mbps) to 100Mbps for Fast Ethernet and 1000Mbps for Gigabit Ethernet with only minimal changes made to the existing cable structure.

The building blocks of today's networks call for a mixture of legacy 10BASE-T Ethernet networks and the newer protocols. Typically, 10Mbps networks utilize Ethernet switches to improve the overall efficiency of the Ethernet network. Between Ethernet switches, Fast Ethernet repeaters are used to connect a group of switches together at the higher 100 Mbps rate.

However, with an increasing number of users running 100Mbps at the desktop, servers and aggregation points such as switch stacks may require even greater bandwidth. In this case, a Fast Ethernet backbone switch can be upgraded to a Gigabit Ethernet switch which supports multiple 100/1000 Mbps switches. High performance servers can be connected directly to the backbone once it has been upgraded.

Many client/server networks suffer from too many clients trying to access the same server, which creates a bottleneck where the server attaches to the LAN. Fast Ethernet, in combination with switched Ethernet, can create an optimal cost-effective solution for avoiding slow networks since most 10/100Mbps components cost about the same as 10Mbps-only devices.

When integrating 100BASE-T into a 10BASE-T network, the only change required from a wiring standpoint is that the corporate premises distributed wiring system must now include Category 5 (CAT5) rated twisted pair cable in the areas running 100BASE-T. Once rewiring is completed, gigabit speeds can also be deployed even more widely throughout the network using standard CAT5 cabling.

The Fast Ethernet specification calls for two types of transmission schemes over various wire media. The first is 100BASE-TX, which, from a cabling perspective, is very similar to 10BASE-T. It uses CAT5-rated twisted pair copper cable to connect various hubs, switches and end-nodes. It also uses an RJ45 jack just like 10BASE-T and the wiring at the connector is identical. These similarities make 100BASE-TX easier to install and therefore the most popular form of the Fast Ethernet specification.

The second variation is 100BASE-FX, which is used primarily to connect hubs and switches together, either between wiring closets or between buildings. 100BASE-FX uses multimode fiber-optic cable to transport Fast Ethernet traffic.

The Gigabit Ethernet specification calls for three types of transmission schemes over various wire media. Gigabit Ethernet was originally designed as a switched technology that used fiber for uplinks and connections between buildings. Accordingly, in June 1998 the IEEE approved the Gigabit Ethernet standards over fiber: 1000BASE-LX and 1000BASE-SX.

The next Gigabit Ethernet standardization to come was 1000BASE-T, which is Gigabit Ethernet over copper. This standard allows one gigabit per second (Gbps) speeds to be transmitted over CAT5 cable and has made Gigabit Ethernet migration easier and more cost-effective than ever before.

The basic building block for the Fast Ethernet LAN is the Fast Ethernet repeater. The two types of Fast Ethernet repeaters offered on the market today are:

Class I Repeater -- The Class I repeater operates by translating line signals on the incoming port to a digital signal. This allows translation between different types of Fast Ethernet, such as 100BASE-TX and 100BASE-FX. A Class I repeater introduces enough delay when performing this conversion that only one such repeater can be put in a single Fast Ethernet LAN segment.

Class II Repeater -- The Class II repeater immediately repeats the signal on an incoming port to all the ports on the repeater. Very little delay is introduced by this quick movement of data across the repeater; thus two Class II repeaters are allowed per Fast Ethernet segment.
Network managers understand the 100 meter distance limitation of 10BASE-T and 100BASE-T Ethernet and make allowances for working within these limitations. At the higher operating speeds, Fast Ethernet and 1000BASE-T are limited to 100 meters over CAT5-rated cable. The EIA/TIA cabling standard recommends using no more than 90 meters between the equipment in the wiring closet and the wall connector. This allows another 10 meters for patch cables between the wall and the desktop computer.

In contrast, a Fast Ethernet network using the 100BASE-FX standard is designed to allow LAN segments up to 412 meters in length. Even though fiber-optic cable can actually transmit data over greater distances (e.g., 2 kilometers in FDDI), the 412 meter limit for Fast Ethernet was set to allow for the round-trip times of packet transmission. Typical 100BASE-FX cable specifications call for multimode fiber-optic cable with a 62.5 micron core and a 125 micron cladding around the outside. This is the most popular fiber-optic cable type used by many of the LAN standards today. Connectors for 100BASE-FX Fast Ethernet are typically ST connectors (which look like Ethernet BNC connectors).

Many Fast Ethernet vendors are migrating to the newer SC connectors used for ATM over fiber. A rough implementation guideline to use when determining the maximum distances in a Fast Ethernet network is the equation: 400 - (r x 95) where r is the number of repeaters. Network managers need to take into account the distance between the repeaters and the distance between each node from the repeater. For example, in Figure 1 two repeaters are connected to two Fast Ethernet switches and a few servers.

Figure 1: Fast Ethernet Distance Calculations with Two Repeaters

Maximum Distance Between End nodes:
400-(rx95) where r = 2 (for 2 repeaters)
400-(2x95) = 400-190 = 210 meters, thus
A + B + C = 210 meters
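The guideline above can be wrapped in a small helper (a sketch only; the function name and the error check are our own additions):

```python
def max_end_to_end_distance(repeaters: int) -> int:
    """Rough Fast Ethernet guideline from the text: 400 - (r x 95)
    meters, where r is the number of repeaters between the end nodes."""
    if repeaters < 1:
        raise ValueError("the guideline assumes at least one repeater")
    return 400 - repeaters * 95

# Two repeaters, as in Figure 1: A + B + C must not exceed this total.
print(max_end_to_end_distance(2))  # 210
print(max_end_to_end_distance(1))  # 305
```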

There is yet another variation of Ethernet called full-duplex Ethernet. Full-duplex Ethernet doubles the effective bandwidth of a link by using separate transmit and receive wire pairs simultaneously and removing collision detection; full-duplex operation was introduced with the Fast Ethernet standard. Until then, all Ethernet worked in half-duplex mode, which meant that even if there were only two stations on a segment, both could not transmit simultaneously. With full-duplex operation, this became possible. In Fast Ethernet terms, 200Mbps is the theoretical maximum throughput per full-duplex Fast Ethernet connection (100Mbps in each direction). This type of connection is limited to node-to-node links and is typically used to connect two Ethernet switches together.

A Gigabit Ethernet network using the 1000BASE-LX long-wavelength option supports duplex links of up to 550 meters over 62.5 micron or 50 micron multimode fiber. 1000BASE-LX can also support up to 5 kilometers of 10 micron single-mode fiber. Its operating wavelengths range from 1270 to 1355 nanometers. 1000BASE-SX is the short-wavelength option; it supports duplex links of up to 275 meters using 62.5 micron multimode fiber or up to 550 meters using 50 micron multimode fiber. Typical wavelengths for this option are in the range of 770 to 860 nanometers.

The CAT5 cable specification is rated up to 100 megahertz (MHz) and meets the requirements of high-speed LAN technologies like Fast Ethernet and Gigabit Ethernet. The EIA/TIA (Electronic Industries Association/Telecommunications Industry Association) formed this cable standard, which describes the performance a LAN manager can expect from a strand of twisted pair copper cable. Along with this specification, the committee formed the EIA/TIA-568 standard, named the “Commercial Building Telecommunications Cabling Standard,” to help network managers install a cabling system that would operate with common LAN types (like Fast Ethernet). The specification defines Near End Crosstalk (NEXT) and attenuation limits between the connectors in a wall plate and the equipment in the closet. Cable analyzers can be used to verify compliance with this specification and thus help guarantee a functional Fast Ethernet or Gigabit Ethernet network.

The basic strategy in cabling Fast Ethernet systems is to minimize the retransmission of packets caused by high bit-error rates. The bit-error rate depends on the NEXT, ambient noise and attenuation of the cable.

Most network managers have already migrated from 10BASE-T or other 10Mbps Ethernet variations to higher bandwidth networks. Fast Ethernet ports on Ethernet switches are used to provide even greater bandwidth between workgroups at 100Mbps speeds. New backbone switches have been created to offer support for 1000Mbps Gigabit Ethernet uplinks to handle network traffic. Equipment like Fast Ethernet repeaters will be used in common areas to group Ethernet switches together with server farms into larger backbone networks.

Device Servers Tutorial

Device Server Technology -
Understanding and Imagining its Possibilities

For easy reference, please consult the glossary of terms at the end of this paper.

The ability to manage virtually any electronic device over a network or the Internet is changing our world. Companies want to remotely manage, monitor, diagnose and control their equipment because doing so adds an unprecedented level of intelligence and efficiency to their businesses. 

With this trend, and as we rely on applications like e-mail and database management for core business operations, the need for more fully-integrated devices and systems to monitor and manage the vast amount of data and information becomes increasingly more important. And, in a world where data and information is expected to be instantaneous, the ability to manage, monitor and even repair equipment from a distance is extremely valuable to organizations in every sector.

This need is further emphasized as companies with legacy non-networked equipment struggle to compete with organizations equipped with advanced networking capabilities such as machine-to-machine (M2M) communications. There’s no denying that advanced networking provides an edge to improving overall efficiencies.

This tutorial will provide an overview and give examples of how device servers make it easy to put just about any piece of electronic equipment on an Ethernet network. It will highlight the use of external device servers and their ability to provide serial connectivity for a variety of applications. It will touch on how device networking makes M2M communication possible and wireless technology even more advanced. Finally, as any examination of networking technologies requires consideration of data security, this paper will provide an overview of some of the latest encryption technologies available for connecting devices securely to the network.

For some devices, the only access available to a network manager or programmer is via a serial port. The reason for this is partly historical and partly evolutionary. Historically, Ethernet interfacing has usually been a lengthy development process involving multiple vendor protocols (some of which have been proprietary) and the interpretation of many RFCs. Some vendors believed Ethernet was not necessary for their product which was destined for a centralized computer center - others believed that the development time and expense required to have an Ethernet interface on the product was not justified.

From the evolutionary standpoint, the networking infrastructure of many sites has only recently been developed to the point that consistent and perceived stability has been obtained - as users and management have become comfortable with the performance of the network, they now focus on how they can maximize corporate productivity in non-IS capacities.

Device server technology solves this problem by providing an easy and economical way to connect the serial device to the network. 

Device server topology example

Let's use the Lantronix UDS100 Device Server as an example of how to network a RAID controller serial port. The user simply cables the UDS100's serial port to the RAID controller's serial port and attaches the UDS100's Ethernet interface to the network. Once it has been configured, the UDS100 makes that serial port a networked port with its own IP address. The user can now connect to the UDS100's serial port over the network, from a PC or terminal emulation device, and perform the same commands as if using a PC directly attached to the RAID controller. Having now become network-enabled, the RAID can be managed or controlled from anywhere on the network or via the Internet.
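Conceptually, once the device server has mapped the RAID controller's serial port to an IP address and TCP port, managing the controller is just a TCP exchange. The sketch below imitates that interaction entirely on the loopback interface: the thread stands in for the device server's networked serial channel, and the port selection and "OK" reply convention are invented for illustration, not taken from the UDS100's actual behavior.

```python
import socket
import threading

def fake_device_server(ports, ready):
    """Loopback stand-in for the device server's networked serial port:
    echoes each command back with an 'OK ' prefix, as a RAID CLI might."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))          # a real unit listens on its configured port
    srv.listen(1)
    ports.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"OK " + conn.recv(1024))
    srv.close()

def send_command(host, port, command):
    """What a management station does: open a TCP connection to the
    networked serial port and exchange one command and response."""
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(command)
        return sock.recv(1024)

ports, ready = [], threading.Event()
threading.Thread(target=fake_device_server, args=(ports, ready)).start()
ready.wait()
reply = send_command("127.0.0.1", ports[0], b"SHOW STATUS\r\n")
print(reply)  # b'OK SHOW STATUS\r\n'
```

The same `send_command` call would work against a real device server by substituting its IP address and configured TCP port.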

The key to network-enabling serial equipment is in a device server’s ability to handle two separate areas:

1.      the connection between the serial device and the device server

2.      the connection between the device server and the network (including other network devices)

Traditional terminal, print and serial servers were developed specifically for connecting terminals, printers and modems to the network and making those devices available as networked devices. Now, more modern demands require other devices be network-enabled, and therefore device servers have become more adaptable in their handling of attached devices. Additionally, they have become even more powerful and flexible in the manner in which they provide network connectivity.

A device server is “a specialized network-based hardware device designed to perform a single or specialized set of functions with client access independent of any operating system or proprietary protocol.” 

Device servers allow independence from proprietary protocols and the ability to meet a number of different functions. The RAID controller application discussed above is just one of many applications where device servers can be used to put any device or "machine" on the network. 

PCs have been used to network serial devices with some success. This approach, however, required the product with the serial port to have software able to run on the PC, and then required that application software to let the PC's networking software access the application. This task was as difficult as putting Ethernet on the serial device itself, so it was not a satisfactory solution.

To be successful, a device server must provide a simple solution for networking a device and allow access to that device as if it were locally available through its serial port. Additionally, the device server should provide for the multitude of connection possibilities that a device may require on both the serial and network sides of a connection. Should the device be connected all the time to a specific host or PC? Are there multiple hosts or network devices that may want or need to connect to the newly-networked serial device? Are there specific requirements for an application which requires the serial device to reject a connection from the network under certain circumstances? The bottom line is that a device server must have the flexibility to service a multitude of application requirements and be able to meet all the demands of those applications.

Lantronix is at the forefront of M2M communication technology.  The company is highly focused on enabling the networking of devices previously not on the network so they can be accessed and managed remotely.

Lantronix has built on its long history and vast experience as a terminal, print and serial server technology company to develop more functionality in its servers that “cross the boundary” of what many would call traditional terminal or print services. Our technology provides:

§                The ability to translate between different protocols to allow non-routable protocols to be routed

§                The ability to allow management connections to single-port servers while they are processing transactions between their serial port and the network

§                A wide variety of options for both serial and network connections, including serial tunneling and automatic host connection

Together, these capabilities make these servers some of the most sophisticated Ethernet-enabling devices available today.

As an independent device on the network, device servers are surprisingly easy to manage. Lantronix has spent years perfecting Ethernet protocol software and its engineers have provided a wide range of management tools for this device server technology. Serial ports are ideal vehicles for device management purposes - a simple command set allows easy configuration. The same command set that can be exercised on the serial port can be used when connecting via Telnet to a Lantronix device server.

An important feature of the Lantronix Telnet management interface is that it can be run as a second connection while data is being transferred through the server - this allows the user to monitor the data traffic on even a single-port server's serial port connection while it is active. Lantronix device servers also support SNMP, the recognized standard for IP management used by many large networks.

Finally, Lantronix has its own management software utilities which utilize a graphical user interface providing an easy way to manage Lantronix device servers. In addition, the servers all have Flash ROMs which can be reloaded in the field with the latest firmware.

This section will discuss how device servers are used to better facilitate varying applications such as:

§                Data Acquisition

§                M2M

§                Wireless Communication/Networking

§                Factory/Industrial Automation

§                Security Systems

§                Bar Code Readers and Point-of-sale Scanners

§                Medical Applications

Data Acquisition

Microprocessors have made their way into almost all aspects of human life, from automobiles to hockey pucks. With so much data available, organizations are challenged to gather and process the information effectively and efficiently. There is a wide variety of interfaces to support communication with devices. RS-485 is designed to allow multiple devices to be linked in a multidrop network of RS-485 serial devices. This standard also has the benefit of greater distance than the RS-232/RS-423 and RS-422 standards offer.

However, because of the factors previously outlined, these types of devices can further benefit from being put on an Ethernet network. First, Ethernet networks have a far greater overall reach than point-to-point serial technologies. Second, Ethernet protocols monitor packet traffic and will indicate when packets are being lost, whereas serial technologies by themselves do not guarantee data integrity.

The full Lantronix family of device server products provides the comprehensive support required for network-enabling different serial interfaces. Lantronix provides many device servers which support RS-485 and allow easy integration of these types of devices under the network umbrella. Device servers for RS-232 or RS-423 serial devices can likewise be used to connect equipment to the network over either Ethernet or Fast Ethernet.

An example of device server collaboration at work is Lantronix's partnership with Christie Digital Systems, a leading provider of visual solutions for business, entertainment and industry. Christie integrates the Lantronix SecureBox® secure device server with feature-rich firmware designed and programmed by Christie for its CCM products. The resulting product line, called the ChristieNET SecureCCM, provides the encryption security needed for use in the company's key markets, which include higher education and government. Demonstrating a convergence of AV and IT equipment to solve customer needs, ChristieNET SecureCCM was the first product of its kind to be certified by the National Institute of Standards and Technology (NIST).

M2M and Wireless Communications

Two extremely important and useful technologies for communication that depend heavily on device servers are M2M and wireless networking.

Made possible by device networking technology, M2M enables serial-based devices throughout a facility to communicate with each other and humans over a Local Area Network/Wide Area Network (LAN/WAN) or via the Internet. The prominent advantages to business include:

Serial Tunneling diagram

§                Maximized efficiency

§                More streamlined operations

§                Improved service

Lantronix Device Servers enable M2M communications either between the computer and serial device, or from one serial device to another over the Internet or Ethernet network using “serial tunneling.” Using this serial to Ethernet method, the “tunnel” can extend across a facility or to other facilities all over the globe.
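At its core, a serial tunnel is a bidirectional byte pump between two connections: whatever arrives from one serial device is copied verbatim toward the other. The sketch below is our own simplification of the idea, demonstrated with in-process socket pairs standing in for the TCP legs between two real device servers.

```python
import socket
import threading

def tunnel(a, b):
    """Minimal serial-tunnel pump: bytes arriving on either connection
    are copied verbatim to the other until one side closes."""
    def pump(src, dst):
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    for src, dst in ((a, b), (b, a)):
        threading.Thread(target=pump, args=(src, dst), daemon=True).start()

# In-process socket pairs stand in for the two TCP legs between the
# device servers at each end of the tunnel.
left_device, left_net = socket.socketpair()
right_net, right_device = socket.socketpair()
tunnel(left_net, right_net)

left_device.sendall(b"sensor reading 42\n")
print(right_device.recv(4096))  # b'sensor reading 42\n'
```

Because the pump is symmetric, either serial device can initiate traffic; neither end needs to know the tunnel exists.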

M2M technology opens a new world of business intelligence and opportunity for organizations in virtually every market sector. Made possible through device servers, M2M offers solutions for equipment manufacturers, for example, who need to control service costs. Network enabled equipment can be monitored at all times for predictive maintenance. Often when something is wrong, a simple setting or switch adjustment is all that is required. When an irregularity is noted, the system can essentially diagnose the problem and send the corrective instructions. This negates a time-consuming and potentially expensive service call for a trivial issue. If servicing is required, the technician leaves knowing exactly what is wrong and with the proper equipment and parts to correct the problem. Profitability is maximized through better operating efficiencies, minimized cost overruns and fewer wasted resources.

Traditional Service Model diagram

Remote Mgmt. Service Model diagram

M2M technology also greatly benefits any organization that cannot afford downtime, such as energy management facilities where power failures can be catastrophic, or hospitals who can’t afford interruptions with lives at stake. By proactively monitoring networked-enabled equipment to ensure it is functioning properly at all times, business can ensure uptime on critical systems, improve customer service and increase profitability.

Wireless Networking

Wireless networking allows devices to communicate over the airwaves, without wires, using standard networking protocols. There are currently several competing standards for achieving the benefits of a wireless network. Here is a brief description of each:

Bluetooth is a standard that provides short-range wireless connections between computers, Pocket PCs, and other equipment.

ZigBee is a proprietary set of communication protocols designed to use small, low-power digital radios based on the IEEE 802.15.4 standard for wireless personal area networking.

802.11 is an IEEE specification for a wireless LAN airlink.

802.11b is an industry standard for wireless LANs that supports more users and operates over longer distances than other standards; however, it requires more power and storage. 802.11b offers wireless transmission over short distances at up to 11 megabits per second. When used in handheld devices, 802.11b provides networking capabilities similar to devices enabled with Bluetooth.

802.11g is the most recently approved standard and offers wireless transmission over short distances at up to 54 megabits per second. Both 802.11b and 802.11g operate in the 2.4 GHz range, and 802.11g is therefore backward-compatible with 802.11b.

For more in-depth information, please consult the Lantronix wireless whitepaper which is available online.

Wireless technology is especially ideal in instances where cabling would be impractical or cost-prohibitive, or where a high level of mobility is required.

Wireless topology diagram

Wireless device networking has benefits for all types of organizations. For example, in the medical field, where reduced staffing, facility closures and cost containment pressures are just a few of the daily concerns, device networking can assist with process automation and data security. Routine activities such as collection and dissemination of data, remote patient monitoring, asset tracking and reducing service costs can be managed quickly and safely with the use of wireless networked devices. In this environment, Lantronix device servers can network and manage patient monitoring devices, mobile EKG units, glucose analyzers, blood analyzers, infusion pumps, ventilators and virtually any other diagnostic tool with serial capability over the Internet.

Forklift accidents in large warehouses cause millions of dollars in damaged product, health claims, lost work and equipment repairs each year. To minimize lost revenue and administrative overhead and protect its profit margin, one company has used wireless networking technology to solve the problem. Using a Lantronix serial-to-802.11 wireless device server, the company wirelessly network-enables a card reader tied to the ignition system of each forklift in the warehouse. Every warehouse employee has an identification card, and the forklift operator swipes his ID card before trying to start the forklift. The information from his card is sent back over the wireless network to a computer database, which checks that he holds a proper, current operator's license. If so, the forklift can start; if not, the starter is disabled.
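The authorization check at the heart of this example is simple to express in code. The sketch below is purely illustrative: the card IDs, expiry dates and function name are invented, and in the real system the lookup is made against a central database reached over the 802.11 wireless network.

```python
import datetime

# Hypothetical license database; in the deployed system this lookup is
# made against a central server over the wireless network.
LICENSES = {
    "EMP-1042": datetime.date(2026, 1, 31),  # card ID -> license expiry
    "EMP-2077": datetime.date(2021, 6, 30),
}

def starter_enabled(card_id: str, today: datetime.date) -> bool:
    """Enable the forklift starter only for a known card whose
    operator's license is still current."""
    expiry = LICENSES.get(card_id)
    return expiry is not None and today <= expiry

today = datetime.date(2024, 5, 1)
print(starter_enabled("EMP-1042", today))  # True: current license
print(starter_enabled("EMP-2077", today))  # False: license expired
print(starter_enabled("EMP-9999", today))  # False: unknown card
```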

Factory Floor Automation

For shops that are running automated assembly and manufacturing equipment, time is money. For every minute a machine is idle, productivity drops and the cost of ownership soars. Many automated factory floor machines have dedicated PCs to control them. In some cases, handheld PCs are used to reprogram equipment for different functions such as changing computer numerically controlled (CNC) programs or changing specifications on a bottling or packaging machine to comply with the needs of other products. These previously isolated pieces of industrial equipment could be networked to allow them to be controlled and reprogrammed over the network, saving time and increasing shop efficiency. For example, from a central location (or actually from anywhere in the world for that matter) with network connectivity, the machines can be accessed and monitored over the network. When necessary, new programs can be downloaded to the machine and software/firmware updates can be installed remotely.

One item of interest is how that input programming is formatted. Since many industrial and factory automation devices are legacy or proprietary, any number of different data protocols could be used. Device servers provide the ability to utilize the serial ports on the equipment for virtually any kind of data transaction.

Lantronix device servers support binary character transmissions. In these situations, managing the rate of information transfer is imperative to guard against data overflow. The ability to manage data flow between computers, devices or nodes in a network so that data can be handled efficiently is referred to as flow control. Without it, data overflow can result in information being lost or needing to be retransmitted.

Lantronix accounts for this need by supporting RTS/CTS flow control on its DB25 and RJ45 ports. Lantronix device servers handle everything from a simple ASCII command file to a complex binary program that needs to be transmitted to a device.
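The effect of flow control can be illustrated in software with a bounded buffer: when the buffer is full, the sender blocks instead of overrunning the receiver, which is the role RTS/CTS hardware handshaking plays on a serial link. This is an analogy only, not the electrical handshake itself:

```python
import queue
import threading

# Bounded receive buffer: four frames, like a small serial FIFO.
buffer = queue.Queue(maxsize=4)

def device(frames):
    """Producer: blocks on put() whenever the buffer is full -- the
    software analogue of the receiver de-asserting RTS/CTS."""
    for frame in frames:
        buffer.put(frame)
    buffer.put(None)  # end-of-stream marker

received = []

def server():
    """Consumer: drains the buffer at its own pace; nothing is lost."""
    while True:
        frame = buffer.get()
        if frame is None:
            break
        received.append(frame)

frames = [f"frame-{i}" for i in range(10)]
t1 = threading.Thread(target=device, args=(frames,))
t2 = threading.Thread(target=server)
t1.start(); t2.start()
t1.join(); t2.join()
print(received == frames)  # True
```

Without the `maxsize` bound (the analogue of running without flow control), a fast sender could overrun a slow receiver and data would have to be discarded or retransmitted.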

Security Systems

One area that every organization is concerned about is security. Card readers for access control are commonplace, and these devices are ideally suited to benefit from being connected to the network with device server technology. When networked, the cards can be checked against a centralized database on the system and there are records of all access within the organization. Newer technology includes badges that can be scanned from a distance of up to several feet and biometric scanning devices that can identify an individual by a thumbprint or handprint. Device servers enable these types of devices to be placed throughout an organization's network and allow them to be effectively managed by a minimum staff at a central location. They allow the computer controlling the access control to be located a great distance away from the actual door control mechanism.

An excellent example is how ISONAS Security Systems utilized the Lantronix WiPort® embedded device server to produce the world's first wireless IP door reader for the access control and security industry. With ISONAS reader software, network administrators can directly monitor and control an almost unlimited number of door readers across the enterprise. The new readers, incorporating Lantronix wireless technology, connect directly to an IP network and eliminate the need for traditional security control panels and expensive wiring. The new solutions are easy to install and configure, enabling businesses to more easily adopt access control, time and attendance, or emergency response technology. What was traditionally a complicated configuration and installation is now as simple as installing wireless access points on a network.

One more area of security systems that has made great strides is in the area of security cameras. In some cases, local municipalities are now requesting that they get visual proof of a security breach before they will send authorities. Device server technology provides the user with a host of options for how such data can be handled. One option is to have an open data pipe on a security camera - this allows all data to be viewed as it comes across from the camera. The device server can be configured so that immediately upon power-up the serial port attached to the camera will be connected to a dedicated host system.

Another option is to have the camera transmit only when it has data to send. By configuring the device server to automatically connect to a particular site when a character first hits the buffer, data will be transmitted only when it is available.

One last option is available when using the IP protocol - a device server can be configured to transmit data from one serial device to multiple IP addresses for various recording or archival concerns. Lantronix device server technology gives the user many options for tuning the device to meet the specific needs of their application.
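The one-to-many option amounts to a simple fan-out loop: the same record is delivered to each configured IP address, and an unreachable destination does not block the rest. The sketch below is illustrative only; the record contents and the loopback test listeners are invented stand-ins for real recording or archival hosts.

```python
import socket
import threading

def fan_out(data, destinations):
    """Send one record to every destination; an unreachable address is
    skipped rather than blocking delivery to the others."""
    delivered = []
    for host, port in destinations:
        try:
            with socket.create_connection((host, port), timeout=2) as s:
                s.sendall(data)
            delivered.append(port)
        except OSError:
            pass  # a real deployment would log the failure
    return delivered

def make_listener(results, threads):
    """Loopback stand-in for a recording/archival host."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    def run():
        conn, _ = srv.accept()
        with conn:
            results.append(conn.recv(1024))
        srv.close()
    t = threading.Thread(target=run)
    t.start()
    threads.append(t)
    return srv.getsockname()[1]

results, threads = [], []
ports = [make_listener(results, threads) for _ in range(2)]
delivered = fan_out(b"MOTION EVENT cam-3\n", [("127.0.0.1", p) for p in ports])
for t in threads:
    t.join()
print(delivered == ports, len(results))  # True 2
```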

Scanning Devices

Device server technology can be effectively applied to scanning devices such as bar code readers or point-of-sale debit card scanners. When a bar code reader is located in a remote corner of the warehouse at a receiving dock, a single-port server can link the reader to the network and provide up-to-the-minute inventory information. A debit card scanner system can be set up at any educational, commercial or industrial site with automatic debiting per employee for activities, meals and purchases. A popular amusement park in the United States utilizes such a system to deter theft or reselling of partially-used admission tickets.

Medical Applications

The medical field is an area where device server technology can provide great flexibility and convenience. Many medical organizations now run comprehensive applications developed specifically for their particular area of expertise. For instance, a group specializing in orthopedics may have x-ray and lab facilities onsite to save time and patient effort in obtaining test results. Connecting all the input terminals, lab devices, x-ray machines and developing equipment together allows for efficient and effective service. Many of these more technical devices previously relied upon serial communication or, worse yet, processing done locally on a PC. Utilizing device server technology, they can all be linked together into one seamless application. And an Internet connection gives physicians the added advantage of immediate access to information relevant to patient diagnosis and treatment.

Larger medical labs, where there are hundreds of different devices providing test data, can improve efficiency and lower equipment costs by using device server technology to replace the dedicated PC at each device. Device servers cost only a fraction of what PCs cost. And the cost calculation is not just the hardware alone, but also the man-hours required to write software converting a PC-serial-port-based application into a program linking that information to the PC's network port. Device server technology resolves this issue by allowing the original application software to run on a networked PC, using port redirector software to connect to the device via the network. This enables a medical facility to transition from a PC at each device, plus the software development required to network that data, to just a couple of networked PCs doing the processing for all of the devices.

Of course, with the ability to network devices comes the risk of outsiders obtaining access to important and confidential information. Security can be realized through various encryption methods. 

There are two main types of encryption: asymmetric encryption (also known as public-key encryption) and symmetric encryption. There are many algorithms for encrypting data based on these types.

AES (Advanced Encryption Standard) is a popular and powerful encryption standard that has not been broken. Select Lantronix device servers feature a NIST-certified implementation of AES as specified by Federal Information Processing Standard 197 (FIPS-197). This standard specifies Rijndael as a FIPS-approved symmetric encryption algorithm that may be used to protect sensitive information. A common consideration for device networking products is that they support AES and are validated against the standard to demonstrate that they properly implement the algorithm. It is important that a validation certificate is issued to the product's vendor stating that the implementation has been tested. Lantronix offers several AES-certified devices, including the AES Certified SecureBox SDS1100 and the AES Certified SecureBox SDS2100.

Secure Shell (SSH) is a program that provides strong authentication and secure communications over unsecured channels. It is used as a replacement for Telnet, rlogin, rsh, and rcp, to log into another computer over a network, to execute commands in a remote machine, and to move files from one machine to another. AES is one of the many encryption algorithms supported by SSH; once a session key is established, SSH uses AES to protect data in transit.
Both SSH and AES are extremely important to overall network security: strict authentication protects against intruders, while symmetric encryption protects the data being transmitted. A certified AES implementation can be trusted to handle the most demanding network security requirements.

Wired Equivalent Privacy (WEP) is a security protocol for wireless local area networks (WLANs), defined in the 802.11b standard. WEP is designed to provide the same level of security as a wired LAN; however, wired LANs gain security from their inherent physical structure, which can be protected from unauthorized access. WLANs, which transmit over radio waves, have no such physical boundary and are therefore more vulnerable to tampering. WEP provides security by encrypting data over the radio waves so that it is protected as it is transmitted from one end point to another. However, WEP has been found to be less secure than once believed. It operates at the data link and physical layers of the OSI model and does not offer end-to-end security.

Supported by many newer devices, Wi-Fi Protected Access (WPA) is a Wi-Fi standard that was designed to improve upon the security features of WEP. WPA technology works with existing Wi-Fi products that have been enabled with WEP, but WPA includes two improvements over WEP. The first is improved data encryption via the Temporal Key Integrity Protocol (TKIP), which scrambles keys using a hashing algorithm and adds an integrity-checking feature to ensure that keys haven't been tampered with. The second is user authentication through the Extensible Authentication Protocol (EAP). EAP is built on a secure public-key encryption system, ensuring that only authorized network users have access. EAP is absent from WEP, which regulates access to a wireless network based on the computer's hardware-specific MAC address. Since this information can be easily stolen, there is an inherent security risk in relying on WEP encryption alone.

In the simplest connection scheme where two device servers are set up as a serial tunnel, no encryption application programming is required since both device servers can perform the encryption automatically. However, in the case where a host-based application is interacting with the serial device through its own network connection, modification of the application is required to support data encryption.

While this paper provides a quick snapshot of device servers at work in a variety of applications, it should be noted that this is only a sampling of the many markets where these devices could be used. With the ever-increasing requirement to manage, monitor, diagnose and control many and different forms of equipment and as device server technology continues to evolve, the applications are literally only limited by the imagination.

 

Glossary

Modem server: traditionally, a unit used for connecting a modem to the network for shared access among users.

Terminal server: traditionally, a unit that connects asynchronous devices such as terminals, printers, hosts, and modems to a LAN or WAN.

Device server: a specialized network-based hardware device designed to perform a single or specialized set of functions with client access independent of any operating system or proprietary protocol.

Print server: a host device that connects and manages shared printers over a network.

Console management software: software that allows the user to connect consoles from various equipment into the serial ports of a single device and gain access to these consoles from anywhere on the network.

Console manager: a unit or program that allows the user to remotely manage serial devices, including servers, switches, routers and telecom equipment.

 http://www.fujitsu.com/downloads/TEL/fnc/pdfservices/ethernet-prerequisite.pdf

Ethernet

Ethernet, a physical layer local area network (LAN) technology, is nearly 30 years old. In the last three decades, it has become the most widely used LAN technology because of its speed, low cost, and relative ease of installation. This is combined with wide computer-market acceptance and the ability to support the majority of network protocols.

Ethernet History

Robert Metcalfe, an engineer at Xerox, first described the Ethernet network system he invented in 1973. The simple, yet innovative and, for its time, advanced system was used to interconnect computer workstations, sending data between workstations and printers. Metcalfe's Ethernet was modeled after the Aloha network developed in the 1960s at the University of Hawaii. However, his system detected collisions between simultaneously transmitted frames and included a listening process before frames were transmitted, thereby greatly reducing collisions. Although Metcalfe and his coworkers received patents for Ethernet and an Ethernet repeater, and Ethernet was wholly owned by Xerox, Ethernet was not designed nor destined to be a proprietary system. It would soon become a worldwide standard.

Ethernet Standards

The first Metcalfe system ran at 2.94 Mb/s, but by 1980 DEC, Intel, and Xerox (DIX) issued a DIX Ethernet standard for 10 Mb/s Ethernet systems. That same year, the Institute of Electrical and Electronics Engineers (IEEE) commissioned a committee to develop open network standards. In 1985, this committee published the portion of the standard pertaining to Ethernet (based on the DIX standard): IEEE 802.3 Carrier Sense Multiple Access with Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications. Even though the IEEE title does not mention Ethernet, Metcalfe's original term for his network system had caught on, and IEEE 802.3 was and is referred to as the Ethernet standard.

Note: The IEEE standard was called 802 because work on it started in February 1980.

As described in Table 1, many more Ethernet standards have been created since 1985. The IEEE standards have been adopted by the American National Standards Institute (ANSI) and by the International Organization for Standardization (ISO). ISO standardization means that companies and organizations around the world use these standards when manufacturing Ethernet products and installing Ethernet network systems.

Fast Ethernet

While 10 Mb/s seemed very fast in the mid-1980s, the need for speed resulted in a 1995 standard (IEEE 802.3u) for 100 Mb/s Ethernet over wire or fiber-optic cable. Although the 100Base-T standard was close to 10Base-T, network designers had to determine which customers needed the extra bandwidth. Because there was a choice of bandwidths, the standard also allowed for equipment that could autonegotiate the two speeds. In other words, if an Ethernet device was transmitting or receiving from a 10 Mb/s network, it could support that network. If the network operated at 100 Mb/s, the same device could switch automatically to the higher rate. Ethernet networks then could be 10 Mb/s or 100 Mb/s (Fast Ethernet) and connected with 10/100 Mb/s Ethernet devices that automatically switched network speeds.

Gigabit Ethernet

Gigabit Ethernet works much the same way as 10 Mb/s and 100 Mb/s Ethernet, only faster. It uses the same IEEE 802.3 frame format, full duplex, and flow control methods. Additionally, it takes advantage of CSMA/CD when in half-duplex mode, and it supports simple network management protocol (SNMP) tools.

Gigabit Ethernet takes advantage of jumbo frames to reduce the frame rate to the end host. Standard Ethernet frame sizes are between 64 and 1518 bytes. Jumbo frames are between 64 and 9215 bytes. Because larger frames translate to lower frame rates, using jumbo frames on Gigabit Ethernet links greatly reduces the number of packets (from more than 80,000 to less than 15,000 per second) that are received and processed by the end host.
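The frame-rate figures quoted above can be checked with a quick back-of-the-envelope calculation (a sketch; it ignores preamble and inter-frame gap, so the numbers are approximate):

```python
# Approximate frame rates on a fully loaded 1 Gb/s link, ignoring
# preamble and inter-frame gap overhead.

LINE_RATE_BPS = 1_000_000_000  # 1 Gb/s in bits per second

def frames_per_second(frame_bytes: int, line_rate: int = LINE_RATE_BPS) -> float:
    # Each frame occupies frame_bytes * 8 bit times on the wire.
    return line_rate / (frame_bytes * 8)

standard = frames_per_second(1518)  # maximum standard frame
jumbo = frames_per_second(9215)     # maximum jumbo frame

print(round(standard))  # more than 80,000 frames per second
print(round(jumbo))     # less than 15,000 frames per second
```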

Gigabit Ethernet can be transmitted over CAT 5 cable and optical fiber such as the following:

• 1000Base-CX—Short distance transport (copper)

• 1000Base-SX—850 nm wavelength (fiber optics)

• 1000Base-LX—1300 nm wavelength (fiber optics)

10 Gigabit Ethernet

The operation of 10 Gigabit Ethernet is similar to that of lower speed Ethernets. It maintains the IEEE 802.3 Ethernet frame size and format that preserves layer 3 and greater protocols. However, 10 Gigabit Ethernet only operates over point-to-point links in full-duplex mode. Additionally, it uses only multimode and single-mode optical fiber for transporting Ethernet frames.

Note: Operation in full-duplex mode eliminates the need for CSMA/CD.

The 10 Gigabit Ethernet standard (IEEE 802.3ae) defines two broad physical layer network applications:

• Local area network (LAN) PHY

• Wide area network (WAN) PHY

LAN PHY

The LAN PHY operates at close to the 10 Gigabit Ethernet rate to maximize throughput over short distances. Two versions of LAN PHY are standardized:

• Serial (10GBASE-R)

• 4-channel coarse wavelength division multiplexing (CWDM) (10GBASE-X)

The 10GBASE-R uses a 64B/66B encoding system that raises the 10 Gigabit Ethernet line rate from a nonencoded 9.58 Gb/s to 10.313 Gb/s. The 10GBASE-X still uses 8B/10B encoding because all of the 2.5 Gb/s CWDM channels it employs are parallel and run at 3.125 Gb/s after encoding.

The MAC to PHY data rate for both LAN PHY versions is 10 Gb/s. Encoding is used so that long runs of ones and zeros that could cause clock and data problems are greatly reduced.
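These line rates follow directly from the encoding ratios; the sketch below simply applies the 64B/66B and 8B/10B overheads to the rates quoted above:

```python
# Line-rate arithmetic for the two LAN PHY encodings.

MAC_RATE_GBPS = 10.0

# 64B/66B: every 64 payload bits are carried in a 66-bit code block.
serial_line_rate = MAC_RATE_GBPS * 66 / 64   # ~10.313 Gb/s (10GBASE-R)

# 8B/10B: every 8 payload bits become a 10-bit code group per lane,
# so each 2.5 Gb/s CWDM channel runs at 3.125 Gb/s after encoding.
cwdm_lane_rate = 2.5 * 10 / 8                # 3.125 Gb/s (10GBASE-X)

print(serial_line_rate, cwdm_lane_rate)  # 10.3125 3.125
```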

WAN PHY

The WAN PHY supports connections to circuit-switched SONET networks. Besides the sublayers added to the LAN PHY (discussed in the following two pages), the WAN PHY adds another element called the WAN interface sublayer (WIS). The WIS takes the data payload and puts it into a 9.58464 Gb/s frame that can be transported at a rate of 9.95328 Gb/s. The WIS does not support every SONET feature, but it carries out enough overhead functions (including timing and framing) to make the Ethernet frames recognizable and manageable by the SONET equipment they pass through.

10GBase Interfaces

Just as Fast Ethernet and Gigabit Ethernet have multiple interfaces, 10 Gigabit Ethernet has seven interfaces, referred to in Table 2.

http://www.rhyshaden.com/eth_vlan.htm

Introduction to Ethernet

 

 

1. Introduction

Ethernet was originally developed by Digital, Intel and Xerox (DIX) in the early 1970's and was designed as a 'broadcast' system, i.e. stations on the network can send messages whenever they want. All stations may receive the messages; however, only the specific station to which a message is directed will respond.

The original format for Ethernet was developed at Xerox Palo Alto Research Centre (PARC), California in 1972. Using Carrier Sense Multiple Access with Collision Detection (CSMA/CD), it had a transmission rate of 2.94Mb/s and could support 256 devices over cable stretching for 1km. The two inventors were Robert Metcalfe and David Boggs.

Ethernet versions 1.0 and 2.0 followed until the IEEE 802.3 committee re-jigged the Ethernet II packet to form the Ethernet 802.3 packet. (IEEE's Project 802 was named after the time it was set up, February 1980. It includes 12 committees 802.1 to 802.12, 802.2 is the LLC, 802.4 Token Bus, 802.11 Wireless, 802.12 100VG-AnyLAN etc.) Nowadays you will see either Ethernet II (DIX) (invented by Digital, Intel and Xerox) format or Ethernet 802.3 format being used.

The 'Ether' part of Ethernet denotes that the system is not meant to be restricted for use on only one medium type: copper cables, fibre cables and even radio waves can be used.

802.3 Ethernet uses Manchester Phase Encoding (MPE) for coding the data bits on the outgoing signal. The next few sections describe how Ethernet works and how Ethernet is structured.

2. CSMA/CD

As mentioned earlier, Ethernet uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD). When an Ethernet station is ready to transmit, it checks for the presence of a signal on the cable, i.e. a voltage indicating that another station is transmitting. If no signal is present then the station begins transmission; however, if a signal is already present then the station delays transmission until the cable is not in use. If two stations detect an idle cable and transmit data at the same time, a collision occurs. On a star-wired UTP network, if the transceiver of the sending station detects activity on both its receive and transmit pairs before it has completed transmitting, it decides that a collision has occurred. On a coaxial system, a collision is detected when the DC signal level on the cable is the same as or greater than the combined signal level of the two transmitters, i.e. significantly greater than +/- 0.85v. Line voltage drops dramatically if two stations transmit at the same time, and the first station to notice this sends a high-voltage jamming signal around the network. The two stations involved in the collision then back off from transmitting again for a randomly selected time interval, determined using Binary Exponential Backoff. If a collision occurs again then the time interval is doubled; if it happens more than 16 times then an error is reported.
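The backoff procedure just described can be sketched as follows. This is a simplified illustration (the function name is mine, and the cap of the exponent at 10 is from the standard algorithm rather than the text above):

```python
import random

SLOT_TIME_US = 51.2  # slot time for 10 Mb/s Ethernet, in microseconds

def backoff_delay(attempt: int) -> float:
    """Delay before retransmitting after the n-th successive collision."""
    if attempt > 16:
        # After 16 attempts the station gives up and reports an error.
        raise RuntimeError("excessive collisions: transmission aborted")
    # The waiting interval doubles with each collision: choose a random
    # number of slot times in 0 .. 2**attempt - 1 (exponent capped at 10).
    slots = random.randrange(2 ** min(attempt, 10))
    return slots * SLOT_TIME_US

# The range of possible delays doubles with each successive collision:
# attempt 1 -> 0..1 slots, attempt 2 -> 0..3 slots, attempt 3 -> 0..7 slots.
print([2 ** min(n, 10) for n in (1, 2, 3, 16)])  # [2, 4, 8, 1024]
```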

A Collision Domain is that part of the network where each station can 'see' other stations' traffic, both unicast and broadcast. The Collision Domain is made up of one segment of Ethernet coax (with or without repeaters) or a number of UTP shared hubs. A network is segmented with bridges (or microsegmented when using switches) that create two segments, or two Collision Domains, where a station on one segment cannot see traffic between stations on the other segment unless the packets are destined for itself. It can, however, still see all broadcasts, as a segmented network, no matter the number of segments, is still one Broadcast Domain. Separate Broadcast Domains are created by VLANs on switches, so that one physical network can behave as a number of entirely separate LANs. The only way to allow stations on different VLANs to communicate is at layer 3 using a router, just as if the networks were entirely physically separate.

3. Ethernet Frame

3.1 Frame Formats

The diagrams below describe the structure of the older DIX (Ethernet II) and the now standard 802.3 Ethernet frames. The numbers above each field represent the number of bytes.

DIX Frame Format

From the above we can deduce that the maximum 802.3 frame size is 1518 bytes and the minimum size is 64 bytes. Packets that have correct CRCs (or FCSs) but are smaller than 64 bytes are known as 'Runts'.
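As a small illustration of these limits, the sketch below classifies a frame by its length (the 'giant' label for oversized frames is standard usage, though only runts are named above):

```python
MIN_FRAME = 64    # minimum valid 802.3 frame size in bytes
MAX_FRAME = 1518  # maximum standard 802.3 frame size in bytes

def classify_frame(length_bytes: int) -> str:
    # Frames below the minimum (even with a correct FCS) are runts;
    # frames above the maximum are conventionally called giants.
    if length_bytes < MIN_FRAME:
        return "runt"
    if length_bytes > MAX_FRAME:
        return "giant"
    return "valid"

print(classify_frame(60), classify_frame(64), classify_frame(1518))
# runt valid valid
```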

The hardware address, or MAC address, is transmitted and stored in Ethernet network devices in Canonical format, i.e. Least Significant Bit (LSB) first. You may hear the expression Little-Endian to describe the LSB format in which Ethernet is transmitted. Token Ring and FDDI, on the other hand, transmit the MAC address with the Most Significant Bit (MSB) first, or Big-Endian; this is known as Non-Canonical format. Note that this applies on a byte-by-byte basis, i.e. the bytes are transmitted in the same order; it is just the bits in each of those bytes that are reversed! The storage of MAC addresses in Token Ring and FDDI devices, however, may sometimes still be in Canonical format, so this can sometimes cause confusion. References to MAC addresses, the distribution of MAC addresses and OUI designations are always carried out in Canonical format.
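The byte-by-byte bit reversal between Canonical and Non-Canonical forms can be demonstrated directly (a sketch; the helper names are mine):

```python
def reverse_bits(byte: int) -> int:
    # Reverse the bit order within a single byte.
    result = 0
    for _ in range(8):
        result = (result << 1) | (byte & 1)
        byte >>= 1
    return result

def canonical_to_noncanonical(mac: bytes) -> bytes:
    # Byte order is preserved; only the bits within each byte flip.
    return bytes(reverse_bits(b) for b in mac)

mac = bytes.fromhex("001122334455")
print(canonical_to_noncanonical(mac).hex())  # 008844cc22aa
```

Applying the conversion twice returns the original address, since bit reversal is its own inverse.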

Some discussion is warranted on the LLC field. The 802.2 committee developed the Logical Link Control (LLC) to operate with 802.3 Ethernet as seen in the above diagram. LLC is based on the HDLC format and more detail can be found by following the link. Whereas Ethernet II (2.0) combines the MAC and the Data link layers restricting itself to connectionless service in the process, IEEE 802.3 separates out the MAC and Data Link layers. 802.2 (LLC) is also required by Token Ring and FDDI but cannot be used with the Novell 'Raw' format. There are three types of LLC, Type 1 which is connectionless, Type 2 which is connection-oriented and Type 3 for Acknowledged Connections.

The Service Access Point (SAP) is used to distinguish between different data exchanges on the same end station and basically replaces the Type field for the older Ethernet II frame. The Source Service Access Point (SSAP) indicates the service from which the LLC data unit is sent, and the Destination Service Access Point (DSAP) indicates the service to which the LLC data unit is being sent. As examples, NetBIOS uses the SAP address of F0 whilst IP uses the SAP address of 06. The following lists common SAPs:

The Control Field identifies the type of LLC, of which there are three:

3.2 I/G and U/L within the MAC address

With an Ethernet MAC address, the first octet uses its least significant bit as the I/G bit (Individual/Group address) only and does not have such a thing as the U/L bit (Universally/Locally administered); the U/L bit is used in Token Ring. A destination Ethernet MAC address starting with the octet '05' is a group or multicast address, since the first bit (LSB) to be transmitted is on the right-hand side of the octet and is a binary '1'. Conversely, '04' as the first octet indicates that the destination address is an individual address. Of course, in Ethernet, all source addresses will have a binary '0' in this position since they are always individual.
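Checking the I/G bit is a one-line mask, since the bit transmitted first is the least significant bit of the first octet (a sketch; the function name is mine):

```python
def is_group_address(mac: bytes) -> bool:
    # The I/G bit is the LSB of the first octet:
    # 1 = group/multicast address, 0 = individual address.
    return bool(mac[0] & 0x01)

# '05' has a binary '1' as its LSB, so it is a group address;
# '04' has a binary '0', so it is an individual address.
print(is_group_address(bytes.fromhex("050000000000")))  # True
print(is_group_address(bytes.fromhex("040000000000")))  # False
```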

The first 3 octets of the MAC address form the Organisational Unique Identifier (OUI), assigned to organisations that require their own group of MAC addresses. A list of OUIs can be found at OUI Index.

3.3 Subnetwork Access Protocol (SNAP)

The SNAP protocol was introduced to allow vendors an easy transition to the new LLC frame format. SNAP allows older frames and protocols to be encapsulated in a Type 1 LLC header, making any protocol 'pseudo-IEEE compliant'. SNAP is described in RFC 1042. The following diagram shows how it looks:

SNAP

As you can see, it is an LLC data unit (sometimes called a Logical Protocol Data Unit (LPDU)) of Type 1 (indicated by 03). The DSAP and SSAP are set to AA to indicate that this is a SNAP header coming up. The SNAP header then indicates the vendor via the Organisational Unique Identifier (OUI) and the protocol type via the Ethertype field. In the example above we have the OUI as 00-00-00, which means that there is an Ethernet frame, and the Ethertype of 08-00, which indicates IP as the protocol. The official list of types can be found at Ethertypes. More and more vendors are moving to LLC1 on the LAN, but SNAP still remains and crops up time and time again.
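The SNAP layout just described (AA AA 03, then a 3-byte OUI, then a 2-byte Ethertype) is straightforward to parse; a minimal sketch:

```python
def parse_snap(llc: bytes):
    # A SNAP-encapsulated LLC header: DSAP = 0xAA, SSAP = 0xAA,
    # control = 0x03 (Type 1), then a 3-byte OUI and a 2-byte Ethertype.
    if llc[0] != 0xAA or llc[1] != 0xAA or llc[2] != 0x03:
        raise ValueError("not a SNAP header")
    oui = llc[3:6]
    ethertype = int.from_bytes(llc[6:8], "big")
    return oui, ethertype

# The example from the text: OUI 00-00-00 with Ethertype 08-00 (IP).
header = bytes.fromhex("aaaa03" "000000" "0800")
oui, ethertype = parse_snap(header)
print(oui.hex(), hex(ethertype))  # 000000 0x800
```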

Have a look at the document IPX for further discussion of 802.3 and 802.5 headers (SNAP etc.) in an IPX environment.

4. Media

4.1 10Base5

Traditionally, Ethernet is used over 'thick' coaxial cable (Normally yellow in colour) called 10Base5 (the '10' denotes 10Mbps, base means that the signal is baseband i.e. takes the whole bandwidth of the cable (so that only one device can transmit at one time on the same cable), and the '5' denotes 500m maximum length). The minimum length between stations is 2.5m.

The cable is run in one long length forming a 'Bus Topology'. Stations attach to it by way of inline N-type connections or a transceiver which is literally screwed into the cable (by way of a 'Vampire Tap') providing a 15-pin AUI (Attachment Unit Interface) connection (also known as a DIX connector or a DB-15 connector) for a drop lead connection (maximum of 50m length) to the station. The segments are terminated with 50 ohm resistors and the shield should be grounded at one end only.

5-4-3 Rule

The segment could be appended with up to a maximum of 4 repeaters, therefore 5 segments (total length of 2,460m) can be connected together. Of the 5 segments only 3 can have devices attached (100 per segment). A total of 300 devices can be attached on a Thicknet broadcast domain.

4.2 10Base2

It was common to see the Thick coax used in Risers to connect Repeaters which in turn provide 'Thin Ethernet' coaxial connections for runs around the floors to up to 30 workstations. Thin ethernet (Thinnet) uses RG-58 cable and is called 10Base2 (The '2' now denoting 200m maximum length, strictly speaking this is 185m). The minimum length between stations is 0.5m. Following is a table detailing various types of coaxial cable:

Each station connects to the thinnet by way of a Network Interface Card (NIC) which provides a BNC (Bayonet Neill-Concelman) connector. At each station the thinnet terminates at a T-piece, and at each end of the thinnet run (or 'Segment') a 50-ohm terminator is required to absorb stray signals, thereby preventing signal bounce. The shield should be grounded at one end only.

A segment can be appended with other segments using up to 4 repeaters, i.e. 5 segments in total. 2 of these segments, however, cannot be tapped; they can only be used for extending the length of the broadcast domain (to 925m). What this means is that 3 segments with a maximum of 30 stations each can give you 90 devices on a Thinnet broadcast domain.

(There is also a little used 10Broad36 standard where 10 Mbps Ethernet runs over broadband up to 3.6km. With broadband, a number of devices can transmit at the same time using multiple basebands e.g. multiple TV stations each with its own baseband signal frequency on one wire).

4.3 10BaseT

Nowadays, it is becoming increasingly important to use Ethernet across Unshielded Twisted Pair (UTP) or Shielded Twisted Pair (STP), this being called 10BaseT (the 'T' denoting twisted pair). For instance, Category 5 UTP is installed in a 'Star-wired' format, with runs recommended at no greater than 100m (including patch leads, cable run and flyleads) and Ethernet hubs with UTP ports (RJ45) centrally located. It has been found, though, that runs of up to 150m are feasible, the limitation being signal strength. Also, there should be no more than an 11.5dB signal loss, and the minimum distance between devices is 2.5m. The maximum delay for the signal in a 10Mbps network is 51.2 microseconds. This comes from the fact that the bit time (time to transmit one bit) is 0.1 microseconds and that the slot time for a frame is 512 bit times.

The wires used in the RJ45 are 1 and 2 for transmit, 3 and 6 for receive.

In order to connect to ethernet in this 'Star Topology', each station again has a NIC which, this time, contains an RJ45 socket which is used by a 4-pair RJ45 plug-ended droplead to connect to a nearby RJ45 floor or wall socket.

Each port on the hub sends a 'Link Beat Signal' which checks the integrity of the cable and devices attached, a flickering LED on the front of the port of the hub tells you that the link is running fine. The maximum number of hubs (or, more strictly speaking, repeater counts) that you can have in one segment is 4 and the maximum number of stations on one broadcast domain is 1024.

The advantages of the UTP/STP technology are gained from the flexibility of the system, with respect to moves, changes, fault finding, reliability and security.

The following table shows the RJ45 pinouts for 10BaseT:

If you wish to connect hub to hub, or a NIC directly to another NIC, then the following 10BaseT cross-over cable should be used:

Ethernet cross-over

The 4 repeater limit manifests itself in 10/100BaseT environments where the active hub/switch port is in fact a repeater, hence the name multi-port repeater. Generally, the hub would only have one station per port but you can cascade hubs from one another up to the 4 repeater limit. The danger here of course, is that you will have all the traffic from a particular hub being fed into one port so care would need to be taken on noting the applications being used by the stations involved, and the likely bandwidth that the applications will use.

There is a semi-standard called Lattisnet (developed by Synoptics) which runs 10MHz Ethernet over twisted pair, but instead of bit synchronisation occurring at the sending end (as in 10BaseT), the synchronisation occurs at the receiving end.

4.4 10BaseF

The 10BaseF standard developed by the IEEE 802.3 committee defines the use of fibre for ethernet. 10BaseFB allows up to 2km per segment (on multi-mode fibre) and is designed for backbone applications such as cascading repeaters. 10BaseFL describes the standards for the fibre optic links between stations and repeaters, again allowing up to 2km per segment on multi-mode fibre. In addition, there is the 10BaseFP (Passive components) standard and the FOIRL (Fibre Optic Inter-Repeater Link) which provides the specification for a fibre optic MAU (Media Attachment Unit) and other interconnecting components.

The 10BaseF standard allows for 1024 devices per network.

4.5 Fast Ethernet (802.3u) 100BaseTx

Fast Ethernet uses the same frame formats and CSMA/CD technology as normal 10Mbps Ethernet. The difference is that the maximum delay for the signal across the segment is now 5.12 microseconds instead of 51.2 microseconds. This comes from the fact that the bit time (time to transmit one bit) is 0.01 microseconds and that the slot time for a frame is 512 bit times. The Inter-Packet Gap (IPG) for 802.3u is 0.96 microseconds as opposed to 9.6 microseconds for 10Mbps Ethernet.
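The timing figures above and in the 10BaseT section follow directly from the 512-bit slot time; a quick check:

```python
SLOT_BITS = 512  # the slot time is 512 bit times at both speeds

def slot_time_us(rate_mbps: float) -> float:
    # At `rate_mbps` Mb/s, one bit takes 1/rate microseconds, so a
    # 512-bit slot takes 512/rate microseconds.
    return SLOT_BITS / rate_mbps

print(slot_time_us(10))   # 51.2 us for 10 Mb/s Ethernet
print(slot_time_us(100))  # 5.12 us for Fast Ethernet
```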

Fast Ethernet is the most popular of the newer standards and is an extension to 10BaseT, using CSMA/CD. The '100' denotes 100Mbps data speed and it uses the same two pairs as 10BaseT (1 and 2 for transmit, 3 and 6 for receive). It must only be used on Category 5 UTP cable installations, with provision for it to be used on Type 1 STP. The copper physical layer is based on the Twisted Pair-Physical Medium Dependent (TP-PMD) specification developed by the ANSI X3T9.5 committee. The actual data throughput increases by between 3 and 4 times that of 10BaseT.

Whereas 10BaseT uses Normal Link Pulses (NLP) for testing the integrity of the connection, 100BaseT uses Fast Link Pulses (FLP) which are backwardly compatible with NLPs but contain more information. FLPs are used to detect the speed of the network (e.g. in 10/100 switchable cards and ports).

The ten-fold increase in speed is achieved by reducing the time it takes to transmit a bit to a tenth of that of 10BaseT. The slot-time is the time it takes to transmit 512 bits on 10Mbps Ethernet (i.e. 51.2 microseconds) and listen for a collision (see earlier). This remains 512 bit times for 100BaseT, but the network distance between nodes, or span, is reduced. The encoding used is 4B/5B with MLT-3 wave shaping plus FSR. This wave-shaping takes the clock frequency of 125MHz and reduces it to 31.25MHz, which is the frequency of the carrier on the wire.
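The 125MHz and 31.25MHz figures can be reproduced from the encoding ratios (a sketch; the quarter-rate step reflects that one full MLT-3 waveform cycle spans four signal transitions, which is my gloss rather than the text's):

```python
DATA_RATE_MBPS = 100

# 4B/5B encoding carries 4 data bits in every 5 line symbols,
# so the symbol (clock) rate is 5/4 of the data rate.
symbol_rate_mhz = DATA_RATE_MBPS * 5 / 4  # 125.0 MHz

# MLT-3 cycles through four levels (0, +1, 0, -1) per full waveform,
# so the highest fundamental frequency is a quarter of the symbol rate.
carrier_mhz = symbol_rate_mhz / 4         # 31.25 MHz

print(symbol_rate_mhz, carrier_mhz)  # 125.0 31.25
```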

The round trip signal timing is the critical factor when it comes to the distance that the signal can run on copper UTP. The cable has to be Category 5 and the distance must not exceed 100m.

The IEEE use the term 100BaseX to refer to both 100BaseTx and 100BaseFx and the Media-Independent Interface (MII) allows a generic connector for transceivers to connect to 100BaseTx, 100BaseFx and 100BaseT4 LANs.

There is no such thing as the 5-4-3 rule in Fast Ethernet. All 10Base-T repeaters are considered to be functionally identical, whereas Fast Ethernet repeaters are divided into two classes, Class I and Class II. A Class I repeater has a repeater propagation delay value of 140 bit times, whilst a Class II repeater's is 92 bit times. The Class I repeater (or Translational Repeater) can support different signalling types such as 100BaseTx and 100BaseT4. A Class I repeater transmits or repeats the incoming line signals on one port to the other ports by first translating them to digital signals and then retranslating them to line signals. The translations are necessary when connecting different physical media (media conforming to more than one physical layer specification) to the same collision domain. Any repeater with an MII port would be a Class I device. Only one Class I repeater can exist within a single collision domain, so this type of repeater cannot be cascaded; only one Class I repeater hop is allowed in any one segment.

A Class II repeater immediately transmits or repeats the incoming line signals on one port to the other ports: it does not perform any translations. This repeater type connects identical media to the same collision domain (for example, TX to TX). At most, two Class II repeaters can exist within a single collision domain. The cable used to cascade the two devices is called an unpopulated segment or IRL (Inter-Repeater Link). The Class II repeater (or Transparent Repeater) can only support one type of physical signalling; however, you can have two Class II repeater hops in any one segment (Collision Domain).

4.6 100BaseT4

100BaseT4 uses all four pairs and is designed to be used on Category 3 cable installations. Transmit is on pairs 1 and 2, receive is on pairs 3 and 6, whilst data is bidirectional on 4 and 5 and on 7 and 8. The signaling is on three pairs at 25MHz each using 8B/6T encoding. The fourth pair is used for collision detection. Half-Duplex is supported on 100BaseT4.

4.7 100BaseFx

100BaseFx uses two cores of fibre (multi-mode 50/125um, 62.5/125um or single-mode) and 1300nm wavelength optics. The connectors are SC, Straight Tip (ST) or Media Independent Connector (MIC). The 100BaseT MAC mates with the ANSI X3T9.5 FDDI Physical Medium Dependent (PMD) specification. At half-duplex you can have distances up to 412m, whereas full-duplex will give 2km.

There is also a proposed 100BaseSx which uses 850nm wavelength optics giving 300m on multi-mode fibre.

The encoding used is 4B/5B with NRZ-I wave shaping with a clock frequency of 125MHz.

4.8 100BaseT2

This little known version of Fast Ethernet is for use over two pairs of Category 3 cable and uses PAM-5 for encoding. There is simultaneous transmission and reception of data in both pairs and the electronics uses DSP technology to handle alien signals in adjacent pairs.

100BaseT2 can run up to 100m on Category 3 UTP.

4.9 100VG-AnyLAN

Based on 802.12 (Hewlett Packard), 100VG-AnyLAN uses an access method called Demand Priority. The 'VG' stands for 'Voice Grade' as it is designed to be used with Category 3 cable. This is where the repeaters (hubs) carry out continuous searches round all of the nodes for those that wish to send data. If two devices cause a 'contention' by wanting to send at the same time, the highest priority request is dealt with first, unless the priorities are the same, in which case both requests are dealt with at the same time (by alternating frames). The hub only knows about connected devices and other repeaters so communication is only directed at them rather than broadcast to every device in the broadcast domain (which could mean 100's of devices!). This is a more efficient use of the bandwidth. This is the reason why a new standard was developed called 802.12 as it is not strictly Ethernet. In fact 802.12 is designed to better support both Ethernet and Token Ring.

The encoding techniques used are 5B/6B and NRZ.

All four pairs of UTP are used. On Cat3 the longest cable run is 100m but this increases to 200m on Cat5.

The clock rate on each wire is 30MHz, therefore 30Mbits per second are transmitted on each pair giving a total data rate of 120Mbits/sec. Since each 6-bits of data on the line represents 5 bits of real data due to the 5B/6B encoding, the rate of real data being transmitted is 25Mbits/sec on each pair, giving a total rate of real data of 100Mbits/sec. For 2-pair STP and fiber, the data rate is 120Mbits/sec on the transmitting pair, for a real data transmission rate of 100Mbits/sec.
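The arithmetic in the preceding paragraph can be verified directly; a sketch of the same calculation:

```python
CLOCK_MHZ = 30      # line rate per pair, in Mb/s
PAIRS = 4           # all four UTP pairs carry data
EFFICIENCY = 5 / 6  # 5B/6B coding: every 6 line bits carry 5 real data bits

line_rate = CLOCK_MHZ * PAIRS       # total bits on the wire per second
data_rate = line_rate * EFFICIENCY  # real data throughput

print(line_rate, data_rate)  # 120 100.0
```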

4.10 Gigabit Ethernet

Although the functional principles of Gigabit Ethernet are the same as Ethernet and Fast Ethernet i.e. CSMA/CD and the Framing format, the physical outworking is very different. One difference is the slot time. The standard Ethernet slot time required in CSMA/CD half-duplex mode is not long enough for running over 100m of copper, so Carrier Extension is used to guarantee a 512-bit slot time.

1000BaseX (802.3z)

802.3z is the committee responsible for formalising the standard for Gigabit Ethernet; the 1000 refers to the 1Gbit/s data rate. The existing Fibre Channel interface standard (ANSI X3T11), which allows speeds of up to 4.268Gbps, is used as the basis of the physical layer, along with Fibre Channel's 8B/10B encoding scheme.

Gigabit Ethernet can operate in half- or full-duplex mode, and there is also a standard, 802.3x, which provides XON/XOFF-style flow control in full-duplex mode. With 802.3x, a receiving station can send a PAUSE frame to a sending station to stop it sending data until a specified time interval has passed.
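As a rough sketch of how that interval is expressed: 802.3x PAUSE frames carry a 16-bit count of "pause quanta", each quantum being 512 bit times at the link speed. The helper below is illustrative, not a real API.

```python
def pause_seconds(pause_quanta: int, link_bps: int) -> float:
    """Convert an 802.3x pause_time field (in 512-bit-time quanta) to seconds."""
    return pause_quanta * 512 / link_bps

# At 1Gbit/s one quantum is 512ns, so the maximum 16-bit value
# (0xFFFF quanta) pauses the sender for roughly 33.6 milliseconds.
print(pause_seconds(1, 1_000_000_000))       # ~5.12e-07
print(pause_seconds(0xFFFF, 1_000_000_000))  # ~0.0336
```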

There are three media types for 1000BaseX: 1000BaseLX, 1000BaseSX and 1000BaseCX.

With 1000BaseSX, 'S' is for Short Haul, and this uses short-wavelength laser (850nm) over multi-mode fibre. 1000BaseSX can run up to 300m on 62.5/125um multimode fibre and up to 550m on 50/125um multimode fibre.

Using 1300nm wavelength, Gigabit Ethernet (1000BaseLX where the 'L' is for Long wavelength laser, or Long Haul) can run up to 550m on 62.5/125um multi-mode fibre or 50/125um multi-mode fibre. In addition, 1000BaseLX can run up to 5km (originally 3km) on single-mode fibre using 1310nm wavelength laser.

1000BaseCX is a standard for STP copper cable and allows Gigabit Ethernet to run up to 25m over STP cable.

There is currently an issue in that many existing multimode fibre installations use 62.5/125um fibre, so 220m is often the practical limit for the backbone when it should be 500m to satisfy ISO 11801 and EIA/TIA 568A.

1000BaseT (802.3ab)

Many cable manufacturers are enhancing their cable systems to 'enhanced Category 5' standards in order to allow Gigabit Ethernet to run over copper at up to 100m. The Category 6 standard has yet to be ratified and is not likely to be for a while.

In order to obtain the 1000Mbps data rate across UTP cable without breaking the FCC rules on emissions, all 4 pairs of the cable are used. Hybrid circuits at each end of each pair separate the transmitted signal from the received signal, allowing simultaneous transmission and reception of data (full-duplex) on every pair. Because some of the transmitted signal still couples into the receive side, an echo canceller is built in, along with NEXT cancellers to deal with crosstalk from the adjacent pairs. Spreading the data across four full-duplex pairs keeps the symbol rate down.

Encoding is carried out with PAM-5 (five-level Pulse Amplitude Modulation), in which each symbol on each pair carries two bits of data, with the fifth level supporting error correction.
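The bit-rate arithmetic behind 1000BaseT is worth making explicit. A sketch, assuming the usual description of the scheme (125Mbaud per pair, two data bits per PAM-5 symbol):

```python
SYMBOL_RATE_MBAUD = 125    # per pair; the hybrid/canceller scheme keeps this low
DATA_BITS_PER_SYMBOL = 2   # PAM-5: two data bits per symbol per pair
PAIRS = 4                  # all four pairs, full-duplex

data_rate_mbps = SYMBOL_RATE_MBAUD * DATA_BITS_PER_SYMBOL * PAIRS
print(data_rate_mbps)      # 1000
```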

1. Transparent Bridging

A Bridge connects distinct segments, allowing traffic to pass between them. The maximum size of the network can be extended in distance, repeater count and station count. A bridge can be used to split a large segment into two smaller ones. The benefit of this is that there is less chance of a collision on the smaller segments, leaving more bandwidth for real data. Some protocols, such as Local Area Transport (LAT), Maintenance Operation Protocol (MOP) and native NetBIOS, do not have any network layer addressing, so they cannot be routed across a large network; they have to be bridged. (There are implementations of NetBIOS that run over IP and IPX, thereby allowing NetBIOS to be routed in that way.)

A Transparent or Learning Bridge learns the MAC addresses of the stations on all the attached segments, since it receives and examines every frame transmitted on the attached networks (its ports operate in Promiscuous Mode). While the ports are in Learning Mode, the bridge builds this source address list, called the Forwarding Table. If a frame arrives at a bridge port destined for a station connected to that same port, the bridge does not forward the frame out of any port. If the frame's destination address is held in the forwarding table, the bridge knows which port the destination device is connected to and forwards the frame out of that port only. If the destination is unknown, the bridge Floods the frame out of all ports except the one on which it arrived. The whole process is known as Transparent Bridging because the bridges are 'transparent' to the stations: they just see one large segment, the bridge's MAC address never appears in the frames, and the frames are never altered in any way.
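The learn/filter/forward/flood behaviour described above can be sketched as a small simulation (port numbers and MAC strings are purely illustrative):

```python
# Minimal transparent-bridge logic: learn the source address, then
# filter, forward, or flood based on the destination address.
class LearningBridge:
    def __init__(self, ports):
        self.ports = set(ports)
        self.forwarding_table = {}        # MAC -> port it was learned on

    def receive(self, frame_src, frame_dst, in_port):
        """Return the set of ports the frame is sent out of."""
        self.forwarding_table[frame_src] = in_port   # learning
        out_port = self.forwarding_table.get(frame_dst)
        if out_port == in_port:
            return set()                              # filter: same segment
        if out_port is not None:
            return {out_port}                         # forward: known destination
        return self.ports - {in_port}                 # flood: unknown destination

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("aa", "bb", 1))   # bb unknown -> floods out ports 2 and 3
print(bridge.receive("bb", "aa", 2))   # aa was learned on port 1 -> forward there only
print(bridge.receive("cc", "aa", 1))   # aa is on the incoming port -> filtered
```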

These forwarding tables can grow very large, so manufacturers apply a process called Aging, whereby addresses that have not been seen within the aging time (typically 300 seconds) are removed to free up memory.

Transparent bridges connect LANs that use the same protocols at the physical and data link layer and they do all the work with regards to tracking which station sits on which network. There is no route discovery process or route selection process with transparent bridges.

A problem with the bridge is that it adds 20 to 30% latency to a network for acknowledgement-oriented protocols, or 10 to 20% for sliding-window protocols. In addition, it must redistribute broadcasts and multicasts from a particular segment to all other segments (the broadcast destination address never appears as a source address, so it is never in the forwarding table). These segments therefore see more broadcasts than they would if they were totally isolated, and there is a greater risk of broadcast storms occurring.

A Remote Bridge has a LAN interface on one side and a WAN interface on the other. Another remote bridge at the other end of the WAN completes the connection. This allows WAN connections without having to use layer 3 devices (routers).

2. Spanning Tree (802.1d)

Spanning Tree applies to both bridged networks and switched networks, so bear this in mind as a switched network is basically made up of multiple LAN segments (Collision Domains). Consider the following network:

2.1 Loop Problem

Bridge Loop

In the above setup (without Spanning Tree), there is a possibility that a bridging loop can occur as Sonny tries to talk to Jim for the first time. The following events occur in the communication process:

This sequence of events is based on the operation of Transparent Bridging i.e. broadcasts and unknown unicasts are forwarded out of all ports, bar the incoming port.

2.2 Loop Solution

To get around this problem, within Spanning Tree only one bridge/switch is allowed to be the Root Bridge. All traffic for the whole Spanning Tree network of bridged/switched LANs goes through the root bridge. In addition, only one link is allowed between devices, and on each LAN segment only one device becomes the Designated Bridge, through which ALL of that particular LAN's traffic must go. The other links remain dormant. The designated bridge has a port that leads towards the root bridge, and is chosen as the bridge with the smallest path cost to the root. The path cost is the sum of the port costs on the path to the root bridge; a 100Mbps port has a cost of 19, for example.

There are three main steps that Spanning Tree takes when building the loop free Spanning Tree network:

  1. Root bridge election for the whole network.
  2. Root port election on each bridge/switch.
  3. Designated port election on each LAN segment.
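Step 1 above comes down to a simple comparison: the Bridge ID is (priority, MAC address), compared numerically, lowest wins. A sketch with illustrative IDs:

```python
# Each bridge initially claims to be root; hearing a lower Bridge ID
# (compared as priority first, then MAC address) silences it.
bridges = [
    (32768, "0800.a300.df00"),
    (32768, "0800.a300.aa00"),
    (4096,  "0800.a300.ff00"),   # lowest priority wins regardless of MAC
]

root = min(bridges)              # Python tuple comparison mirrors the election
print(root)                      # (4096, '0800.a300.ff00')
```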

2.3 Spanning Tree Initialisation

On initial setup, all participating bridges declare themselves to be the root. They then exchange Hello (or Configuration) Bridge Protocol Data Units (BPDUs) containing specific bridge information, sent to the well-known multicast address 0x0180c2000000. There are two types of BPDU: Configuration BPDUs, used for bridge and port elections, and Topology Change Notification (TCN) BPDUs, which are sent back up towards the root when there is a topology change.

2.4 Bridge Protocol Data Unit (BPDU)

These BPDUs are in the following format:

BPDU

The timers are based on the assumption that the diameter of the network is no more than 7 bridges/switches away from the root (including the root bridge).

The following port costs are the current IEEE defaults:

Link Speed    Default Port Cost
10Mbps        100
100Mbps       19
1Gbps         4
10Gbps        2

2.5 Spanning Tree Operation

Whilst the BPDUs move between links on the network, the bridge ports are in Listening Mode: the bridges listen to the BPDUs and decide which is the root bridge and which bridges are the designated bridges. When a bridge sees a BPDU from a bridge with a lower Bridge ID, it stops sending its own BPDUs. Eventually one of the bridges is determined to be the Root Bridge: the one with the lowest ID value (the user-defined bridge priority is compared first, followed by the MAC address).

Once the root bridge has been determined, all the other bridges work out a least-cost path to it. The root bridge sends a BPDU with an initial Root Path Cost of zero. Each bridge receives the BPDU, adds the cost of the interface on which the BPDU was received to the Root Path Cost, and sends its own generated BPDU out of all its interfaces (Fast Ethernet ports default to a port cost of 19). The cumulative Root Path Cost determines the interface on which a particular bridge received the lowest-cost BPDU, and this interface becomes the Root Port.
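Root-port election is then a minimum over cumulative costs. A sketch, with illustrative port names and the default Fast Ethernet cost of 19:

```python
# Each port adds its own cost to the Root Path Cost advertised in the
# BPDU it received; the port with the lowest total becomes the Root Port.
received_bpdus = {
    # port: Root Path Cost advertised in the BPDU received on that port
    "fa0/1": 19,
    "fa0/2": 38,
}
port_cost = {"fa0/1": 19, "fa0/2": 19}   # Fast Ethernet default

total = {p: received_bpdus[p] + port_cost[p] for p in received_bpdus}
root_port = min(total, key=total.get)
print(total)        # {'fa0/1': 38, 'fa0/2': 57}
print(root_port)    # fa0/1
```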

Then the other bridges determine which of their remaining ports are designated ports for their segments and which are not. Non-designated ports are placed into Blocking State, and a port moving into service passes through the following states in order: Blocking, Listening, Learning, Forwarding.

After this process, if a port fails, then it is placed into a Disabled State.

From this sequence it can be seen that the worst case is a link that stays up but sees no more BPDUs: there is a 20 second wait before the information from the last BPDU ages out, at which point the port goes into listening mode for 15 seconds and then learning mode for 15 seconds before it starts forwarding data frames again. This adds up to 50 seconds with the default timers.
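The 50 second figure above is simply the default timers added together:

```python
MAX_AGE = 20         # seconds before the last BPDU's information ages out
FORWARD_DELAY = 15   # seconds spent in each of listening and learning

worst_case = MAX_AGE + FORWARD_DELAY + FORWARD_DELAY
print(worst_case)    # 50
```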

2.6 Using Spanning Tree

In the case of the above diagram, Bridge A has the lowest ID so it becomes the root bridge. The cost from Sonny to Jim is 4 via Bridge A and 5 via Bridges B and C. Bridge C is taken out of the tree leaving Bridges A and B to be the designated bridges. If Bridge A or B failed, then Bridge C would come back on line and the Spanning Tree re-shaped.

A good network design should ensure that the root bridge is as close to the centre of the network as possible in order to facilitate faster convergence and also to make sure that the network is not running sub-optimally where traffic is taking less direct routes thereby doubling LAN bandwidth usage. Rather than rely on Spanning Tree using the default settings, it is often a good idea to shape the tree by setting bridge IDs and port costs.

Default timers are advertised by the root bridge; the default for the Forward Delay Timer is usually 15 seconds, which gives enough time for the transition stages to complete before traffic is forwarded. On some vendors' equipment you may come across mechanisms called Port Fast and Uplink Fast (described below) that allow you to bypass the Forward Delay Timer and forward data immediately, rather than wait for a BPDU and sit out the delays between the modes mentioned earlier. This is safe only on ports where you know that you are in no danger of creating a loop.

2.7 Steady State Operation

Configuration BPDUs are sent regularly by the root bridge, along with the age of the message, and are resent by each designated bridge in the Spanning Tree. The receiving bridges store this information together with its age. If the information for a particular port times out, the bridge will try to become the designated bridge for that LAN. If the root bridge information times out, due to a better root port being found or a root bridge failure, the bridge tries to become the root bridge itself and the election process restarts.

If a designated bridge (a bridge with active ports pointing to the root bridge) fails or is removed from the network, or its root port fails, a directly connected bridge in the LAN segment detects that it is no longer receiving Configuration BPDUs from that bridge, because the information from the last BPDU times out according to the Maximum Age timer (default 20 seconds). It then sends a TCN BPDU to its designated bridge/switch, destined for the root bridge/switch. The designated bridge receiving this TCN BPDU sends back a Configuration BPDU containing an acknowledgement, as well as sending another TCN BPDU on towards the root bridge. The root bridge, on receipt of the TCN BPDU, sends a modified Configuration BPDU to all bridges in the network, indicating that a topology change has occurred by setting the Topology Change Flag. Any directly connected bridges on the same segment, on receiving the configuration change BPDU, create their own BPDUs and send those out. They also age out their forwarding tables according to the Forward Delay timer (15 seconds) rather than the default time of 300 seconds. This fast aging lasts until the root bridge resets the Topology Change Flag, once enough time has elapsed for the change notification to have propagated throughout the tree; this time is Max Age + Forward Delay, which equals 20 + 15 = 35 seconds by default. The speedy aging is important because after a topology change a particular device may now be reachable via a different port, and we want to avoid sending data traffic into a black hole while the normal aging time of 300 seconds runs out.

It is quite normal for TCNs to occur in a network, e.g. ports change state as users switch machines off. TCNs do NOT in themselves cause a recalculation of Spanning Tree, although they can be a symptom of one. Spanning Tree recalculation occurs when priorities are administratively changed, or when Configuration BPDUs fail to reach a Designated Bridge, i.e. if a network segment in the spanning tree fails and a redundant path exists. So it will happen if any bridge/switch is added or removed, or parameter changes occur.

2.8 Port Fast

A feature on some manufacturers' equipment called Port Fast allows certain ports, say those connected to key devices such as servers, to still play a part in Spanning Tree but jump straight from blocking mode to forwarding mode, rather than waiting to work through the Spanning Tree states described above. This is only of use for ports connecting to end devices, since you could end up creating loops if the technique were applied to trunk links to other switches. It is an important feature for preventing host machines and servers from timing out their network connections when they first connect.

2.9 Uplink Fast

There is another technique called Uplink Fast that can be used on access switches that have multiple paths to the root bridge/switch. An uplink group is created containing the root port and all alternate ports. On a failure of the root port link, frames are immediately forwarded on an alternate link and Spanning Tree converges in a second or two instead of 35 seconds. A problem with this is that the forwarding tables are temporarily incorrect, causing dropped frames; Cisco uses a proprietary multicast to cause the relevant switches to update their forwarding tables. Uplink Fast is all very well, but it applies to all VLANs rather than operating per VLAN, so it is no good in environments where you want to load-share traffic.

2.10 Issues with Spanning Tree

The trouble with Spanning Tree is that it requires some links to remain dormant, thus wasting network bandwidth (e.g. an expensive serial link between two remote bridges). In addition, no load sharing can take place using the standard 802.1q VLAN specification, although Cisco has a proprietary implementation of 802.1q which does allow Per VLAN Spanning Tree (PVST), plus its own Inter-Switch Link (ISL), which also allows PVST. Issues with PVST occur when connecting to non-Cisco devices running 802.1q trunking.

Spanning Tree allows for a maximum of 7 concatenated bridges (assuming a default Hold Time of 1 second, the time a bridge holds a frame before discarding it), so a frame should be delivered no more than 7 seconds after initial transmission. This is important as the bridge timers need to be kept in sync at the extremities of the network, and there needs to be a limit to the accumulated forwarding delay between stations.

There are three versions of Spanning Tree: originally there was a version from DEC and another from IBM, and the DEC version formed the basis of the current IEEE 802.1d standard. These different versions are not compatible with one another.

3. Translational Bridging

Where LANs using different protocols at the Physical and Data Link layers are to be connected, a Translational Bridge can be used; an example is a bridge connecting an Ethernet network to a Token Ring network. Bridges cannot handle messages of different lengths when converting between frame formats, so the end devices must be configured to use a common message length. Nowadays you are only likely to see this in an environment where FDDI is converted to Ethernet.

4. Encapsulated Bridge

It is common to see an environment where identical LAN environments are connected together via a dissimilar LAN environment, for example two Ethernet LANs separated by an FDDI LAN or a serial link. An encapsulated bridge does not change the frame headers as a Translational Bridge does; instead, the central LAN protocol (e.g. FDDI) encapsulates the transparently bridged frames of the connected LANs (e.g. Ethernet), carries them across its backbone and dismantles the encapsulation at the other end.

5. Concurrent Routing and Bridging (CRB)

Traditionally, if a protocol is configured on a router then if network layer addressing is available, the protocol is routed rather than bridged. If no network layer addressing is available then the protocol is bridged. In the past, at no time could a packet be both routed and bridged.

With Concurrent Routing and Bridging, one router can both route a protocol through one interface and bridge that same protocol through another interface at the same time, but not through the same interface. Routed protocols have to be routed out of routing interfaces and bridged protocols have to be bridged out of bridging interfaces.

6. Integrated Routing and Bridging (IRB)

This allows you to both route and bridge a given protocol through the same interface (this is an extension of Concurrent Routing and Bridging).

An example of its use is when you are migrating from a bridged network to a routed network where you want to connect bridged segments to routed networks. With IRB you can route between routed interfaces and bridge groups, or you can route between bridge groups. In addition, you can still bridge non-routable traffic between bridge interfaces within a bridge group.

You can conserve layer 3 logical addresses by assigning one layer 3 address to a bridge group and just bridge local traffic.

The implementation of IRB is made possible with the concept of the Bridge-Group Virtual Interface (BVI). The BVI acts as a routed interface that does not perform bridging but represents the bridge group that is used to send bridged traffic to a routed network. The BVI interface number is the bridge group number of the bridge group assigned to that interface. The BVI takes the MAC address of one of the bridged interfaces in the particular bridge group. The network layer address of the BVI must be in the same network as the routed hosts.

IRB

For host A to talk to host B, host A must have its default gateway set to the IP address of the BVI. The BVI's MAC address (that of one of the bridged interfaces) is learned by host A via ARP. At this point, the destination MAC address of the packet is the BVI's MAC address and the source MAC address is that of host A.

The bridging software on the bridged interface looks at the packet and decides whether it is to be routed or bridged. If the destination MAC address is that of one of the router's interfaces (in this case the BVI) and the layer 3 protocol is configured on that interface, the bridging software makes the packet look as if it has come from the BVI rather than the bridge group (10 in this case), and the packet is sent to the routing engine and routed out of the appropriate interface.

If the packet is destined to a host without any layer 3 addressing and within the same bridge group as the BVI, then the bridging software sees that the destination MAC address is not the BVI but a host device. The packet is then bridged to the bridge group if the MAC address of the destination is known, or flooded out all bridge group interfaces if the destination MAC address is unknown.
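The route-or-bridge decision described in the last two paragraphs can be sketched as follows (all names here are illustrative, not an actual router API):

```python
# IRB decision sketch: route when the frame is addressed to the BVI's MAC
# and layer 3 is configured; otherwise bridge (or flood) within the group.
def handle_frame(dst_mac, bvi_mac, l3_configured, forwarding_table,
                 bridge_ports, in_port):
    if dst_mac == bvi_mac and l3_configured:
        return "route"                                  # hand to the routing engine
    out_port = forwarding_table.get(dst_mac)
    if out_port is not None:
        return ("bridge", {out_port})                   # known host in the group
    return ("flood", set(bridge_ports) - {in_port})     # unknown: flood the group

table = {"host-b": 3}
print(handle_frame("bvi-mac", "bvi-mac", True, table, {1, 2, 3}, 1))  # routed
print(handle_frame("host-b", "bvi-mac", True, table, {1, 2, 3}, 1))   # bridged to port 3
print(handle_frame("host-x", "bvi-mac", True, table, {1, 2, 3}, 1))   # flooded to 2 and 3
```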


Ethernet Switching

 

 

1. Introduction

Bridging and Frame Switching are practically one and the same technology. Frame Switching is Bridging that has been speeded up. Bridging has always been software-based and normally a bridge would just have two ports used to connect the two LANs being bridged. Switching is hardware-based and has many ports but all the rules that apply to bridging also apply to switching and more besides. The MAC address is always left unchanged in bridging (barring the bit ordering change in Translational bridging!).

LAN Frame switches can include FDDI, Token Ring or Ethernet switches. Effectively, the switch provides single Collision Domains per switch port and each port acts as a bridge port to the rest of the network. Forwarding tables are kept per port, different media, different speeds etc. can be configured on a port by port basis. The speed enhancement to the network is achieved through the 'microsegmentation' of the large Collision Domain into many smaller ones. Each port on an Ethernet switch is effectively a very fast bridge port. The switch itself has its own MAC address e.g. 0800.a300.df00, and then each of its ports is given a MAC address, commonly in port order; port 1 has address 0800.a300.df01, port 2 has address 0800.a300.df02 etc. If this particular switch is the root bridge, then the MAC address 0800.a300.df00 is advertised as the root bridge, however the BPDUs originate from whatever MAC address is assigned to the port from which the BPDU emanates.

Some switches allow you to implement a Backpressure scheme whereby, on a particular port, jamming frames can be sent to reduce traffic coming into the switch. This stops one port hogging the switch's backplane and thereby affecting other users. Obviously you would not wish to implement this on a server port, since that would affect many people; you would rather keep as much of the switch's processing capability as possible for the attached servers. This is why so much play is made of the backplane capability of a particular manufacturer's switch.

2. Cut-through

A Cut-through switch reads the destination address of a frame and starts sending the frame towards the destination before the rest of the frame has arrived at the switch. The first 20 to 30 bytes of the frame need to be read to make sure that the frame is not a collision fragment. If the destination address is unknown, the switch temporarily stores the frame. Cut-through switching is fine for fixed-speed networks, such as all-10BaseT, and it is very fast; however, if the switch has mixed-speed ports, such as 10/100 autosensing ports, there is a bottleneck when packets move across the switch fabric from a 100BaseT segment to a 10BaseT segment. Some switches, although they forward the frame as soon as they have read the destination address, still read the frame up to the CRC and can be configured to change automatically to Store and Forward mode if a certain level of errors is seen.

3. Store and Forward

A Store and Forward switch, or 'buffered switch', stores each frame in a buffer before forwarding it on to the appropriate port. This gets around the underflow or overflow situation that could occur in a mixed-speed environment.

4. Fragment-free Switching

This is similar to Cut-through switching, but here the switch reads the first 64 bytes of the frame (the collision window) before forwarding it, in order to weed out collision fragments.

The latency of a network increases as the network gets busier. On a busy network, the backoffs (retransmits) that can occur with Cut-through switches increase, thereby increasing latency. A Store and Forward switch on a 10Mbps LAN delays a frame by one frame time, obviously increasing latency, but there are no backoffs.
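The trade-off between the three modes comes down to how much of the frame is read before forwarding begins. A rough sketch (the byte counts are illustrative simplifications of the descriptions above):

```python
def bytes_read_before_forwarding(mode: str, frame_len: int) -> int:
    if mode == "cut-through":
        return 6                   # just the 6-byte destination address
    if mode == "fragment-free":
        return min(64, frame_len)  # the collision window
    if mode == "store-and-forward":
        return frame_len           # buffer the whole frame
    raise ValueError(f"unknown mode: {mode}")

for mode in ("cut-through", "fragment-free", "store-and-forward"):
    print(mode, bytes_read_before_forwarding(mode, 1518))
```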

 

Ethernet Terminology and Errors


 

1. Terminology

 

1.1 Signal Quality Error (SQE)

 

The SQE test, or 'heartbeat', is a test signal generated on the cable after every transmission to assess the ability of the transceiver to detect collisions. The test is a very short signal, too short to look like a collision. Ethernet 1.0 did not include it in its standard, and 802.3 says that repeaters must not be connected to a transceiver that generates the SQE test, because the repeater can mistake the test signal for a collision and issue an unnecessary Jam signal. The option is normally available to turn off the SQE test for this reason.

 

1.2 InterPacket Gap (IPG)

 

The IPG is the fixed time gap between Ethernet frames. For 802.3 (10Mbps Ethernet) this is set at 9.6 microseconds. Sometimes it is called the Inter-Frame Gap (IFG).
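The 9.6 microsecond figure is just 96 bit times at 10Mbit/s:

```python
BIT_TIME_S = 1 / 10_000_000      # one bit time at 10 Mbit/s (0.1 microseconds)
ipg_us = 96 * BIT_TIME_S * 1e6   # the IPG is 96 bit times
print(ipg_us)                    # ~9.6 microseconds
```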

 

1.3 Promiscuous Mode

 

This mode is used by special network adapters in devices such as network analysers and transparent bridges: the network controller passes ALL frames up to the upper layers regardless of destination address. Normally a frame is only passed up if its destination address matches that particular device's address; otherwise the rest of the frame is ignored. Network analysers are interested in seeing all frames, regardless of destination address, so special adapters running in Promiscuous mode allow all frames to be sent to the buffer for capture and analysis.

 

1.4 Full-Duplex

 

Full-duplex Ethernet can exist only on point-to-point links (for example between a station and a switch port) and uses one pair of wires for transmit and one pair for receive. NICs for 10BaseT, 10BaseFL, 100BaseFX and 100BaseT have circuitry within them that allows full-duplex operation and bypasses the normal loopback and CSMA/CD circuitry. Collision detection is not required, as the signals only ever travel one way on each pair of wires. In addition, Congestion Control can be turned on, which 'jams' further data frames when the receive buffer fills up.

 

1.5 Half-Duplex

 

Half-Duplex allows data to travel in only one direction at a time. Both stations use CSMA/CD to contend for the right to send data. In a twisted pair environment, when a station is transmitting its transmit pair is active, and when it is not transmitting its receive pair is active, listening for collisions.

 

1.6 Propagation Delay

 

Propagation Delay, or Latency, is the time taken for a frame to traverse the media from the sending station to the receiving station. A 64 byte frame takes 51.2 microseconds to travel between stations, a 512 byte frame takes 409.6 microseconds and a 1518 byte frame takes 1214.4 microseconds, provided that there are no other devices between the stations. This marries with the fact that at 10Mbps, 10 bits traverse the network every microsecond. A bridge would typically add 300 microseconds of latency to the network.
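These frame times follow directly from the line rate (a frame of n bytes takes n x 8 / 10 microseconds at 10Mbit/s):

```python
def frame_time_us(frame_bytes: int, rate_mbps: int = 10) -> float:
    """Time to put a frame onto the wire, in microseconds."""
    return frame_bytes * 8 / rate_mbps

print(frame_time_us(64))     # 51.2
print(frame_time_us(512))    # 409.6
print(frame_time_us(1518))   # 1214.4
```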

 

The Path Delay Value is the time it takes an Ethernet frame to travel the furthest distance across the network. It is made up of the sum of the Link Segment Delay Values (LSDV) plus the repeater and DTE delays and maybe some safety margin.

 

2. Error Conditions

 

2.1 Runt

 

A Runt is a frame that is shorter than 64 bytes (512 bits), which is the smallest allowable frame, and may have a corrupted FCS. It can be caused by a collision, dodgy software or a faulty port/NIC.

 

2.2 Long

 

This is a frame that is between 1518 and 6000 bytes long. Normally it is due to faulty hardware or software on the sending station.

 

2.3 Giant

 

This is a frame that is more than 6000 bytes long. Normally it is due to faulty hardware or software on the sending station.
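The size-based categories above, together with the normal 64 to 1518 byte range, can be summarised as a small classifier. The exact handling of the 1518 and 6000 byte boundaries is an assumption, since the text only says 'between' and 'more than':

```python
def classify(frame_bytes: int) -> str:
    if frame_bytes < 64:
        return "runt"
    if frame_bytes <= 1518:
        return "normal"
    if frame_bytes <= 6000:
        return "long"
    return "giant"

print(classify(32))     # runt
print(classify(1518))   # normal
print(classify(3000))   # long
print(classify(9000))   # giant
```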

 

2.4 Dribble

 

A frame that is defined as a 'dribble' is one that is greater than 1518 bytes but can still be processed. This could point to a problem where the IPG is too small or non-existent such that two frames join together.

 

2.5 Jabber

 

This is when a device is having electrical problems. Ethernet relies on electrical signalling to determine whether or not to send data, so a faulty card could stop all traffic on a network by sending false signals that cause other devices to think the network is busy. Jabber shows itself as a long frame (longer than 1518 bytes) with an incorrect FCS or an alignment error. A NIC that is jabbering will send out a frame and then follow it with A's and 5's, i.e. 101010101010... or 0101010101..., which are preamble bits indicating a falsely busy network.

 

2.6 Frame Check Sequence (FCS) Error, or CRC error

 

This defines a frame which may or may not have the right number of bits, but whose bits have been corrupted between sender and receiver, perhaps due to interference on the cable. IEEE 802.3 says that there should be no more than 10^-8 errors, i.e. 1 in 82 x 10^6.

 

2.7 Alignment Error

 

Frames are made up of a whole number of octets. If a frame arrives with part of an octet missing, and it has a Frame Check Sequence (FCS) error, then it is deemed to be an Alignment Error. This points to a hardware problem, perhaps EMF on the cable run between sender and receiver.

 

2.8 Broadcast Storm

 

An incorrect packet broadcast onto a network causes multiple stations to respond all at once, typically with equally incorrect packets, which causes the storm to grow exponentially in severity. When this happens there are too many broadcast frames for any data to be processed, because broadcast frames have to be processed by a NIC ahead of any other frames: the NIC filters out unicast packets not destined for the host, but multicasts and broadcasts are passed to the processor. If broadcasts number 126 per second or above, this is deemed to be a broadcast storm. An acceptable level of broadcasts is often deemed to be less than 20% of received packets, although many networks survive well enough on higher levels than this. The performance of lower-specified workstations may be impacted by as little as 100 broadcasts/second, and some broadcast/multicast applications, such as video conferencing and stock market data feeds, can issue more than 1000 broadcasts/sec.

 

2.9 Collisions

 

Collisions are a normal occurrence on an Ethernet network. The more devices there are within a segment (Collision Domain) the more collisions are likely. A badly cabled infrastructure can cause unnecessary collisions due to a device being unable to sense a carrier and transmitting anyway.

 

If the collision rate is high then it may be worthwhile considering segmenting the network by way of a bridge or router. This reduces the chance of a collision occurring on each of the segments, thereby releasing more bandwidth for real traffic.

 

A good guide is that collisions should not total more than 1% of the frames transmitted. The following table gives recommended maximum collision levels with respect to the bandwidth utilisation of a segment:

 

%age Utilisation    Maximum %age Collisions
less than 20        1
20-49               5
over 50             15

 

A Late Collision occurs when two devices transmit at the same time without detecting the collision in time. This could be because the cabling is badly installed (e.g. too long) or there are too many repeaters: if the time taken for a signal to travel from one end of the network to the other is longer than the time taken to put a whole frame onto the network, neither device will see that the other is transmitting until it is too late. The transmitting station distinguishes between a normal and a late collision by the fact that a late collision is detected after the time it takes to transmit 64 bytes. This means a late collision can only be detected with frames larger than 64 bytes; for smaller frames they still occur but remain undetected, and still take up bandwidth. Frames lost through late collisions are not retransmitted.
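The late/normal distinction is purely a matter of when, within the transmission, the collision is seen. A sketch using bit times:

```python
SLOT_TIME_BITS = 512   # 64 bytes: the normal collision window

def collision_kind(bits_sent_when_detected: int) -> str:
    """A collision seen after the 512-bit slot time is a late collision."""
    return "late" if bits_sent_when_detected > SLOT_TIME_BITS else "normal"

print(collision_kind(100))    # normal
print(collision_kind(1000))   # late
```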

 

Excessive Collisions describes the situation where a station has tried 16 times to transmit without success and discards the frame. This indicates excessive traffic on the network, which must be reduced.
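The retry behaviour described here is Ethernet's truncated binary exponential backoff. A hedged sketch: the 51.2 microsecond slot time is the 10 Mbps figure, the random interval doubles per attempt up to a cap of 1023 slots after the tenth attempt, and the frame is discarded after 16 failed attempts.

```python
import random

MAX_ATTEMPTS = 16    # the frame is discarded after the 16th failed attempt
SLOT_TIME_US = 51.2  # slot time for 10 Mbps Ethernet, in microseconds

def backoff_delay(attempt):
    """Truncated binary exponential backoff: after the nth collision, wait a
    random number of slot times in [0, 2^min(n,10) - 1]."""
    if attempt > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame discarded")
    k = min(attempt, 10)
    return random.randint(0, 2**k - 1) * SLOT_TIME_US
```

After the first collision a station waits 0 or 1 slot times; by the tenth attempt and beyond, anywhere from 0 to 1023.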

 

For normal Ethernet traffic levels, a good guideline is that if deferred transmissions and retransmissions together make up less than 5% of network traffic, the network is considered healthy.

 

A transmitting station should see no more than two collisions before transmitting a frame.

 

2.10 Jam

 

On detecting a collision, the NIC sends out a jam signal to let the other stations know that a collision has occurred. A repeater, on seeing a collision on a particular port, sends a jam on all other ports, causing collisions there and making all the stations wait before transmitting. A station must see the jam signal before it finishes transmitting its frame; otherwise it will assume that another station is the cause of the collision.

 

Jamming is a term used to describe the collision reinforcement signal output by the hub/repeater to all ports. The jam signal consists of 96 bits of alternating 1s and 0s. Its purpose is to extend a collision sufficiently that all devices cease transmitting.

 

Jamming is also used when dealing with congestion. It is an attempt to eliminate frame loss within the switch by applying "back pressure" to those end stations or segments that are consuming the switch's buffer capacity. One way of accomplishing this is for the switch to issue an Ethernet jam signal when its buffers fill beyond a design threshold. Jam signals are normally the result of collision detection, so when the sending end systems on the segment receive the jam signal, they back off for a random time period before attempting a retransmission.

 

Each transmitting node monitors its own transmission, and if it observes a collision (i.e. excess current above what it is generating, i.e. > 24 mA) it stops transmitting immediately and instead transmits a 32-bit jam sequence. The purpose of this sequence is to ensure that any other node currently receiving the frame receives the jam signal in place of the correct 32-bit MAC CRC; this causes the other receivers to discard the frame due to a CRC error.

 


 

Ethernet Virtual LANs (VLANs)

 

 

1. Overview

Virtual LANs have been made possible as the switching infrastructure has replaced traditional shared-media LANs. An individual switch port can be assigned to one logical LAN while the next switch port is assigned to a completely different logical LAN. This is achieved by 'tagging' the frames entering a port so that they are identified as belonging to a particular logical LAN whilst they travel across the switch fabric of the box. Once these frames are sent out of their logical LAN ports, the tag is removed from the frame. In the past, proprietary frame tagging was implemented by Cisco (Inter-Switch Link, or ISL) and Bay Networks (Lattis Span), but the standard is now defined by 802.1q and may be the one that you go for in order to allow interoperability between different manufacturers' boxes.

More recently, frame tagging has been applied to trunk ports, since links to other switches and routers within the wider network carry multiple VLANs. This tagging is very important because it enables VLANs to spread enterprise-wide as the backbones carry the bulk of network traffic and VLAN information. This is why VLANs need to operate across high-bandwidth trunked FDDI (802.10), Fast Ethernet (ISL and 802.1q) and ATM links (ATM LANE).

One key difference between ISL and 802.1q is that ISL allows multiple instances of Spanning Tree (i.e. one per VLAN) to exist on a trunk link, whereas 802.1q only allows one instance of Spanning Tree on a trunked link. Which trunking method you use influences the network design, particularly if you wish to make the most of the network connections and not have parts of the network lying idle because Spanning Tree has blocked some ports. ISL allows you to load-share VLANs across parallel links so that no network ports need go unused. This does not tie you completely to Cisco equipment, since you can 'map' ISL VLANs to 802.1q VLANs before entering a load-sharing section of the network, and other manufacturers such as Lucent support ISL anyway.

The use of switches has meant that microsegmentation has increased the number of Collision Domains thereby minimising the number of collisions that occur across this 'flat' network. This in turn frees up more bandwidth for data but at the cost of increasing the amount of broadcast traffic. VLANs are the next step where multicast and broadcast traffic is restricted to each of the VLANs and reduces the possibility of broadcast storms.

On a large network one Spanning Tree would take a very long time to converge. At the best of times even a RIPv1 network converges more quickly than a Spanning Tree network. For this reason, due to the greater likelihood of changes occurring the bigger the network is, it is a good thing to have one instance of Spanning Tree for each VLAN which decreases the calculation time, improves scalability and accommodates changes more efficiently.

VLANs allow you to connect any user to any logical LAN. The benefit here is that the user could be anywhere in the building, or even in another building; a particular department does not have to have all its employees physically situated in the same place. In addition, security is easily maintained, since the only way to communicate between the virtual LANs is by routing between them, either by way of a router (slow) or via a layer 3 switch. VLANs also simplify moves, adds and changes, and give opportunities to load-share traffic. Ideally, you should design your logical networks such that at least 80% of LAN traffic stays within the LAN, and a maximum of 20% of traffic is routed between LANs. Note here that layer 3 switching makes this less of an imperative. Ultimately, layer 3 switches will completely replace routers as far as routing within the LAN is concerned; the routers will remain as edge devices, doing the job that they are best at.

In addition, multiple logical networks can be multiplexed through one physical connection, provided that the connection is between two boxes that use the same Multi-Link Trunking (MLT) standard.

Originally, VLANs were simply based on port ID, i.e. different ports being assigned to different VLANs. This is fine if the different groups are local, but not flexible enough to accommodate campus-wide VLANs. 'Port-centric' VLANs require no lookup table and are easy on the processor, especially if an ASIC is taking care of the switching, and there is a high level of security as packets are very unlikely to 'leak' into other VLANs. These types of VLANs are often referred to as Static VLANs as they are manually configured.

Membership of a VLAN can also be determined by the MAC address: the user can move departments, the MAC address is remembered by the switch, and the user remains part of the same logical LAN. VLANs can also be based on protocol or even application. These methods enable moves and changes to be implemented with little effort. These types of VLANs are called Dynamic VLANs, since the ports can automatically determine their VLAN assignments on a per-packet basis. The problem with Dynamic VLANs is that a lookup table has to be produced with all the known MAC addresses mapped to the relevant VLANs. Not only is this table likely to be very large, it is also going to change frequently as new devices are added to the network and old ones are removed. If an organisation has a lot of movement within the building environment, then the Dynamic VLAN may be the best design.
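A dynamic VLAN lookup table of the kind described can be sketched as a simple MAC-to-VLAN map with a default for unknown stations. The addresses, VLAN IDs and names below are hypothetical.

```python
# Hedged sketch of a dynamic-VLAN lookup table: known MAC addresses are
# mapped to VLAN IDs; unknown stations fall back to a default VLAN.
DEFAULT_VLAN = 1

mac_to_vlan = {
    "00:10:a4:7b:01:5f": 10,   # hypothetical addresses and VLAN IDs
    "00:10:a4:7b:02:6a": 10,
    "00:e0:1e:3c:44:9b": 20,
}

def vlan_for(mac):
    """Return the VLAN a frame belongs to, based on its source MAC."""
    return mac_to_vlan.get(mac.lower(), DEFAULT_VLAN)
```

Note how the table grows with the station count and must be kept current as devices come and go, which is exactly the maintenance burden described above.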

Frame filtering is often used by switches to help minimise LAN traffic. Tables are kept, and frames are compared to the table entries to varying depths within the frame. The deeper into the frame the switch has to look, the greater the switch latency. Likewise, the larger the table, the greater the latency, and these tables need to be synchronised with other switches.

2. Switched Network Design

A good general network design is shown below:

Switched network

All networks have the Core, Distribution and Access layer functions within them even if two or more of the functions are dealt with by one box. Generally, it is a good idea to have the servers connected to the Distribution switch and let the distribution switch deal with the VLANs. The routers could either be separate 'routers on a stick' or built in to the switch and these should deal with access policies and filtering. The Core switches need to be left purely to switch packets and not worry about policies, routing or server traffic.

One thing to be aware of when upgrading networks is the capability of the servers. It is all very well upgrading the client links and the backbone links, but the servers can easily become the bottleneck in a network. You need to make sure that the server, be it based on NT or Unix, is capable of fast network technology such as duplex operation and Gigabit Ethernet. Not only do you need to make sure that the server can cope with fibre-based cards for 1000BaseSX or 1000BaseLX, you also need to be sure that the server's internal components, such as the hard disk, processor and bus, can cope with a fast network. A duplex gigabit card can completely swamp the whole PCI bus in a standard PC, which can cause problems since this PCI bus is meant to be shared by the other cards in the machine.

Examples

The following diagram illustrates a typical flat network.

Flat network

The trouble with this network is that it is just one collision domain and one broadcast domain, so it is prone to high collision rates, and a lot of the bandwidth on the network is given over to broadcasts. The problem with broadcast traffic is that each station on the network, be it a server or a client, has to process the broadcast packets. This processing is carried out by the CPU of the computer and can have quite a large effect on its processing capability.

A chassis-based switch can replace the hubs and provide a far more efficient network:

Switch

Using the switch, not only is more bandwidth available to each client because each client has its own collision domain, but one can also configure VLANs to separate certain groups within the organisation and thereby reduce broadcast traffic, freeing up even more bandwidth. The router can either remain a separate device or become integrated into the electronics of the switch. You will notice that in this small network the core, distribution and access layers can be satisfied by one box, even though the functions are still distinct.

The following design extends the previous one for a larger network:

Larger network

Here, the clients are fed from 'access' or 'workgroup' switches, perhaps located in different closets elsewhere in the building. In this case, they are linked via switched Fast Ethernet links, which could be copper (100BaseT) or fibre (100BaseFX). Alternatively, they could be ATM or even FDDI links, depending on the capabilities of the main distribution and access switches, since ATM LAN Emulation or FDDI 802.10 can be used to maintain the VLANs across the trunk. The trunk to the local router could be ISL (which would allow load sharing of VLANs) or 802.1q; this local router could be contained within the switch, making the switch a layer 3 switch.

In the diagram below, the network has been extended further to an Enterprise level:

Enterprise network

The ATM switch takes on the role of the core switch and links to other sites which also have core switches. The backbone within each site could be Gigabit Ethernet, again depending on the capabilities of the access and distribution switches. The server farm and the routing are handled by the distribution switch; it is best to leave the core switches to high-speed switching and, as described before, let the distribution switches take on the burden of access policies. If 100BaseT is to be fed to the desktop, it becomes more important to upgrade the bandwidth between the access and distribution switches to either Gigabit Ethernet or a multiple link such as 4 x 100BaseT links forming a 400BaseT channel (when operating in duplex mode this can effectively allow a maximum throughput of 800Mb/s).

In a large campus environment, multiple switch/routers at the distribution level can provide the opportunity to form resilient links to the core via the use of HSRP or VRRP. These protocols can provide alternative paths for specified VLANs.

The Virtual Router Redundancy Protocol (VRRP) offers the possibility of load-sharing traffic across the routers by setting up separate groups, with the aim of roughly half the local traffic going through one router and half through another by way of two or more virtual default gateways.

3. Trunking Overview

The trunking protocol used can be important in deciding the structure of the network. Although 802.1q is the industry standard it has a limitation in that it only allows one instance of Spanning Tree in a trunk link even if there are a number of VLANs in the trunk. Cisco's ISL, however, allows an instance of Spanning Tree per VLAN and the advantage of this is that trunk ports can remain available for some VLANs if blocked for other VLANs and hardware need not remain dormant as it would if 802.1q was implemented.

Trunking protocols such as Cisco's ISL can also be used on the link to the servers. The advantage of this is that Intel manufacture ISL-capable NICs, which allow a number of different VLANs, and therefore completely different networks, to be served by one server as if it were made up of completely different devices. The NICs themselves take care of processing the frames, so the CPU of the server does not have to take the load! Another advantage of using ISL-aware NICs is that the traffic does not have to pass through a router, which helps keep the network fast.

4. Cisco's Inter-Switch Link (ISL)

Cisco use a proprietary tagging method called Inter-Switch Link (ISL) which takes a different approach to tagging the Ethernet frame. Instead of increasing the frame size by inserting fields, ISL encapsulates the Ethernet frame.

Cisco's Inter-Switch Link (ISL) allows Per VLAN Spanning Tree (PVST), so multiple VLANs can exist across a trunk link. Multiple Spanning Trees allow load sharing to occur at layer 2 by assigning different port priorities per VLAN. 802.1q only allows Mono Spanning Tree (MST), i.e. one instance of Spanning Tree per trunk.

ISL only runs on point-to-point links over Fast Ethernet (copper or fibre) and Token Ring (ISL+). Although ISL will operate over 10Mbps links, it is not recommended! ISL runs between switches, from switches to routers, and from switches to Intel and Xpoint Technologies NICs which understand ISL, thereby allowing servers to distinguish between VLANs.

With ISL the data frame itself is not touched; instead the whole frame is encapsulated, with an ISL header prepended and a trailing CRC appended.

The following diagram details the ISL frame tagging format:

ISL

5. Class of Service and VLANs (802.1p & 802.1q)

Quality of Service (QoS) is becoming more important as data networks begin to carry more time sensitive traffic such as real time voice and video. At layer 2 this is referred to as Class of Service (CoS).

The 802.1 group have been working on an extension to the MAC layer that takes CoS into account. 802.1p is a standard for traffic prioritisation where network frames are tagged with one of eight priority levels, where 7 is high and 0 is low. Switches and routers that are 802.1p compliant can give time-sensitive traffic, such as voice, preferential treatment if its priority tag has been set to a higher value than other traffic.

In order to accommodate tagging an Ethernet frame, a new field called the Tag Control Info (TCI) field has been introduced between the Source MAC Address and the Length field of the Ethernet frame. This is illustrated below:

TCI

Although the frame illustrated is an 802.3 frame, 802.1p/802.1q can also be applied to the Ethernet frame where the TCI is inserted just before the Type field and just after the Source MAC Address.

You will note the similarity between the 802.1p priority field and the Precedence field in the Diff Serv Code Point of the IP datagram. This makes mapping between IP layer 3 and MAC layer priorities much easier.

You will note that the Ethernet frame becomes 'oversized', i.e. grows from the standard maximum size of 1518 bytes to 1522 bytes. Such a frame is sometimes called a Baby Giant. Consequently these frames may be dropped by some network equipment, although most vendors now support 802.1p and 802.1q.

When applying layer 2 priority queueing within a trunk, commonly two priority levels (low and high) are implemented, although as we have seen there is scope to increase to eight. This is because each priority has to have its own queue, which is implemented in hardware and is therefore expensive, so most manufacturers currently build in two queues per port: a low-priority queue for priority levels 0 to 3 and a high-priority queue for priority levels 4 to 7. Prioritisation is applied to the outbound packets from a switch, so packets arrive already ordered on the inbound ports of the next switch, and prioritisation need not be implemented on the inbound ports unless those ports are using buffering. Low-priority frames, or frames without an 802.1p tag, are treated with 'best effort' delivery. As time goes on, more manufacturers will include separate queues for each priority level to give more granularity as applications begin to demand it.
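The two-queue scheme just described can be sketched as a simple mapping. Illustrative only: real switches implement this in hardware, and the function name is mine.

```python
def queue_for_priority(tag_priority):
    """Map an 802.1p priority (0-7, or None for an untagged frame) to one of
    the two common hardware queues: levels 0-3 go to the low-priority queue,
    levels 4-7 to the high-priority queue."""
    if tag_priority is None:
        return "low"            # untagged frames get best-effort delivery
    if not 0 <= tag_priority <= 7:
        raise ValueError("802.1p priority must be 0-7")
    return "high" if tag_priority >= 4 else "low"
```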

On an 802.1q trunk, one VLAN is NOT tagged. This VLAN is called the Native VLAN and must be configured the same on each side of the trunk. This way, we can deduce which VLAN a frame belongs to when we receive a frame with no tag; otherwise an untagged frame would simply be placed in whatever VLAN the receiving side assumed, even if it is the wrong one. When a switch trunks a frame, it inserts the tag and then recomputes the FCS.
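The tag-insertion step can be sketched as follows. This is an illustrative sketch only: the frame is assumed to be supplied without its FCS, zlib's CRC-32 (the same polynomial as the Ethernet FCS) stands in for the hardware FCS calculation, and the function name is mine, not from any vendor API.

```python
import struct
import zlib

def tag_frame(frame_no_fcs, vlan_id, priority=0):
    """Insert an 802.1q tag after the source MAC (i.e. after byte 12) of an
    untagged Ethernet frame, then recompute the FCS, as described above.
    The tag is the TPID 0x8100 followed by the 16-bit TCI: 3 bits of
    priority, 1 CFI bit (left at 0) and a 12-bit VLAN ID."""
    tci = (priority << 13) | (vlan_id & 0x0FFF)
    tag = struct.pack("!HH", 0x8100, tci)
    tagged = frame_no_fcs[:12] + tag + frame_no_fcs[12:]
    fcs = struct.pack("<I", zlib.crc32(tagged) & 0xFFFFFFFF)
    return tagged + fcs
```

Running this over a minimal 60-byte frame shows the frame growing by the 4-byte tag, exactly the 1518-to-1522-byte "baby giant" growth noted earlier.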

The 802.1q standard implements Spanning Tree on the Native VLAN, and this applies to all the trunked VLANs; this is called Mono Spanning Tree (MST). Cisco have adapted 802.1q and use a tunnelling mechanism to provide what is called Per VLAN Spanning Tree Plus (PVST+), with VLAN numbers up to 1005. This gives the same benefits as ISL.

The native VLAN configured on each end of an 802.1q trunk must be the same. A switch receiving a non-tagged frame assigns it to the native VLAN of the trunk. If one end is configured for native VLAN 1 and the other for native VLAN 2, a frame sent in VLAN 1 on one side will be received on VLAN 2 on the other, effectively merging VLANs 1 and 2.

 

 

http://www.cisco.com/en/US/docs/internetworking/technology/handbook/Intro-to-LAN.html

 

 

 

Introduction to LAN Protocols


This chapter introduces the various media-access methods, transmission methods, topologies, and devices used in a local-area network (LAN). Topics addressed focus on the methods and devices used in Ethernet/IEEE 802.3, Token Ring/IEEE 802.5, and Fiber Distributed Data Interface (FDDI). Subsequent chapters in Part II, "LAN Protocols," address specific protocols in more detail. Figure 2-1 illustrates the basic layout of these three implementations.

Figure 2-1 Three LAN Implementations Are Used Most Commonly

What Is a LAN?

A LAN is a high-speed data network that covers a relatively small geographic area. It typically connects workstations, personal computers, printers, servers, and other devices. LANs offer computer users many advantages, including shared access to devices and applications, file exchange between connected users, and communication between users via electronic mail and other applications.

LAN Protocols and the OSI Reference Model

LAN protocols function at the lowest two layers of the OSI reference model, as discussed in Chapter 1, "Internetworking Basics," between the physical layer and the data link layer. Figure 2-2 illustrates how several popular LAN protocols map to the OSI reference model.

Figure 2-2 Popular LAN Protocols Mapped to the OSI Reference Model

LAN Media-Access Methods

Media contention occurs when two or more network devices have data to send at the same time. Because multiple devices cannot talk on the network simultaneously, some type of method must be used to allow one device access to the network media at a time. This is done in two main ways: carrier sense multiple access collision detect (CSMA/CD) and token passing.

In networks using CSMA/CD technology such as Ethernet, network devices contend for the network media. When a device has data to send, it first listens to see if any other device is currently using the network. If not, it starts sending its data. After finishing its transmission, it listens again to see if a collision occurred. A collision occurs when two devices send data simultaneously. When a collision happens, each device waits a random length of time before resending its data. In most cases, a collision will not occur again between the two devices. Because of this type of network contention, the busier a network becomes, the more collisions occur. This is why performance of Ethernet degrades rapidly as the number of devices on a single network increases.

In token-passing networks such as Token Ring and FDDI, a special network frame called a token is passed around the network from device to device. When a device has data to send, it must wait until it has the token and then sends its data. When the data transmission is complete, the token is released so that other devices may use the network media. The main advantage of token-passing networks is that they are deterministic. In other words, it is easy to calculate the maximum time that will pass before a device has the opportunity to send data. This explains the popularity of token-passing networks in some real-time environments such as factories, where machinery must be capable of communicating at a determinable interval.

For CSMA/CD networks, switches segment the network into multiple collision domains. This reduces the number of devices per network segment that must contend for the media. By creating smaller collision domains, the performance of a network can be increased significantly without requiring addressing changes.

Normally, CSMA/CD networks are half-duplex, meaning that while a device is sending information, it cannot receive at the same time. While that device is talking, it is incapable of also listening for other traffic. This is much like a walkie-talkie. When one person wants to talk, he presses the transmit button and begins speaking. While he is talking, no one else on the same frequency can talk. When the sending person is finished, he releases the transmit button and the frequency is available to others.

When switches are introduced, full-duplex operation is possible. Full-duplex works much like a telephone—you can listen as well as talk at the same time. When a network device is attached directly to the port of a network switch, the two devices may be capable of operating in full-duplex mode. In full-duplex mode, performance can be increased, but
not quite as much as some like to claim. A 100-Mbps Ethernet segment is capable of transmitting 200 Mbps of data, but only 100 Mbps can travel in one direction at a time. Because most data connections are asymmetric (with more data traveling in one direction than the other), the gain is not as great as many claim. However, full-duplex operation does increase the throughput of most applications because the network media is no longer shared. Two devices on a full-duplex connection can send data as soon as it is ready.

Token-passing networks such as Token Ring can also benefit from network switches. In large networks, the delay between turns to transmit may be significant because the token is passed around the network.

LAN Transmission Methods

LAN data transmissions fall into three classifications: unicast, multicast, and broadcast.
In each type of transmission, a single packet is sent to one or more nodes.

In a unicast transmission, a single packet is sent from the source to a destination on a network. First, the source node addresses the packet by using the address of the destination node. The packet is then sent onto the network, and finally, the network passes the packet to its destination.

A multicast transmission consists of a single data packet that is copied and sent to a specific subset of nodes on the network. First, the source node addresses the packet by using a multicast address. The packet is then sent into the network, which makes copies of the packet and sends a copy to each node that is part of the multicast address.

A broadcast transmission consists of a single data packet that is copied and sent to all nodes on the network. In these types of transmissions, the source node addresses the packet by using the broadcast address. The packet is then sent on to the network, which makes copies of the packet and sends a copy to every node on the network.
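The three transmission types can be distinguished purely from the destination MAC address: broadcast is the all-ones address, multicast has the least significant bit of the first byte set, and anything else is unicast. A minimal sketch (the function name is illustrative):

```python
def transmission_type(dest_mac):
    """Classify a frame by its 6-byte destination MAC address."""
    if dest_mac == b"\xff\xff\xff\xff\xff\xff":
        return "broadcast"
    if dest_mac[0] & 0x01:      # group (I/G) bit set: multicast
        return "multicast"
    return "unicast"
```

For example, frames to 01:00:5e:xx:xx:xx (IP multicast) are multicast, while frames to an ordinary station address are unicast.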

LAN Topologies

LAN topologies define the manner in which network devices are organized. Four common LAN topologies exist: bus, ring, star, and tree. These topologies are logical architectures, but the actual devices need not be physically organized in these configurations. Logical bus and ring topologies, for example, are commonly organized physically as a star. A bus topology is a linear LAN architecture in which transmissions from network stations propagate the length of the medium and are received by all other stations. Of the three
most widely used LAN implementations, Ethernet/IEEE 802.3 networks—including 100BaseT—implement a bus topology, which is illustrated in Figure 2-3.

Figure 2-3 Some Networks Implement a Local Bus Topology

A ring topology is a LAN architecture that consists of a series of devices connected to one another by unidirectional transmission links to form a single closed loop. Both Token Ring/IEEE 802.5 and FDDI networks implement a ring topology. Figure 2-4 depicts a logical ring topology.

Figure 2-4 Some Networks Implement a Logical Ring Topology

A star topology is a LAN architecture in which the endpoints on a network are connected to a common central hub, or switch, by dedicated links. Logical bus and ring topologies are often implemented physically in a star topology, which is illustrated in Figure 2-5.

A tree topology is a LAN architecture that is identical to the bus topology, except that branches with multiple nodes are possible in this case. Figure 2-5 illustrates a logical tree topology.

Figure 2-5 A Logical Tree Topology Can Contain Multiple Nodes

LAN Devices

Devices commonly used in LANs include repeaters, hubs, LAN extenders, bridges, LAN switches, and routers.


Note Repeaters, hubs, and LAN extenders are discussed briefly in this section. The function and operation of bridges, switches, and routers are discussed generally in Chapter 4, "Bridging and Switching Basics," and Chapter 5, "Routing Basics."


A repeater is a physical layer device used to interconnect the media segments of an extended network. A repeater essentially enables a series of cable segments to be treated as a single cable. Repeaters receive signals from one network segment and amplify, retime, and retransmit those signals to another network segment. These actions prevent signal deterioration caused by long cable lengths and large numbers of connected devices. Repeaters are incapable of performing complex filtering and other traffic processing. In addition, all electrical signals, including electrical disturbances and other errors, are repeated and amplified. The total number of repeaters and network segments that can be connected is limited due to timing and other issues. Figure 2-6 illustrates a repeater connecting two network segments.

Figure 2-6 A Repeater Connects Two Network Segments

A hub is a physical layer device that connects multiple user stations, each via a dedicated cable. Electrical interconnections are established inside the hub. Hubs are used to create a physical star network while maintaining the logical bus or ring configuration of the LAN. In some respects, a hub functions as a multiport repeater.

A LAN extender is a remote-access multilayer switch that connects to a host router. LAN extenders forward traffic from all the standard network layer protocols (such as IP, IPX, and AppleTalk) and filter traffic based on the MAC address or network layer protocol type. LAN extenders scale well because the host router filters out unwanted broadcasts and multicasts. However, LAN extenders are not capable of segmenting traffic or creating security firewalls. Figure 2-7 illustrates multiple LAN extenders connected to the host router through a WAN.

Figure 2-7 Multiple LAN Extenders Can Connect to the Host Router Through a WAN

 

Internetworking Basics


This chapter works with the next six chapters to act as a foundation for the technology discussions that follow. In this chapter, some fundamental concepts and terms used in the evolving language of internetworking are addressed. In the same way that this book provides a foundation for understanding modern networking, this chapter summarizes some common themes presented throughout the remainder of this book. Topics include flow control, error checking, and multiplexing, but this chapter focuses mainly on mapping the Open System Interconnection (OSI) model to networking/internetworking functions, and also summarizing the general nature of addressing schemes within the context
of the OSI model. The OSI model represents the building blocks for internetworks. Understanding the conceptual model helps you understand the complex pieces that make up an internetwork.

What Is an Internetwork?

An internetwork is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. Internetworking refers to the industry, products, and procedures that meet the challenge of creating and administering internetworks. Figure 1-1 illustrates some different kinds of network technologies that can be interconnected by routers and other networking devices to create an internetwork.

Figure 1-1 Different Network Technologies Can Be Connected to Create an Internetwork

History of Internetworking

The first networks were time-sharing networks that used mainframes and attached terminals. Such environments were implemented by both IBM's Systems Network Architecture (SNA) and Digital's network architecture.

Local-area networks (LANs) evolved around the PC revolution. LANs enabled multiple users in a relatively small geographical area to exchange files and messages, as well as access shared resources such as file servers and printers.

Wide-area networks (WANs) interconnect LANs with geographically dispersed users to create connectivity. Some of the technologies used for connecting LANs include T1, T3, ATM, ISDN, ADSL, Frame Relay, radio links, and others. New methods of connecting dispersed LANs are appearing everyday.

Today, high-speed LANs and switched internetworks are becoming widely used, largely because they operate at very high speeds and support such high-bandwidth applications as multimedia and videoconferencing.

Internetworking evolved as a solution to three key problems: isolated LANs, duplication
of resources, and a lack of network management. Isolated LANs made electronic communication between different offices or departments impossible. Duplication of resources meant that the same hardware and software had to be supplied to each office or department, as did separate support staff. This lack of network management meant that no centralized method of managing and troubleshooting networks existed.

Internetworking Challenges

Implementing a functional internetwork is no simple task. Many challenges must be faced, especially in the areas of connectivity, reliability, network management, and flexibility. Each area is key in establishing an efficient and effective internetwork.

The challenge when connecting various systems is to support communication among disparate technologies. Different sites, for example, may use different types of media operating at varying speeds, or may even include different types of systems that need to communicate.

Because companies rely heavily on data communication, internetworks must provide a certain level of reliability. This is an unpredictable world, so many large internetworks include redundancy to allow for communication even when problems occur.

Furthermore, network management must provide centralized support and troubleshooting capabilities in an internetwork. Configuration, security, performance, and other issues must be adequately addressed for the internetwork to function smoothly. Security within an internetwork is essential. Many people think of network security from the perspective of protecting the private network from outside attacks. However, it is just as important to protect the network from internal attacks, especially because most security breaches come from inside. Networks must also be secured so that the internal network cannot be used as a tool to attack other external sites.

Early in the year 2000, many major web sites were the victims of distributed denial of service (DDoS) attacks. These attacks were possible because a great number of private networks connected to the Internet were not properly secured. These private networks were used as tools for the attackers.

Because nothing in this world is stagnant, internetworks must be flexible enough to change with new demands.

Open System Interconnection Reference Model

The Open System Interconnection (OSI) reference model describes how information from a software application in one computer moves through a network medium to a software application in another computer. The OSI reference model is a conceptual model composed of seven layers, each specifying particular network functions. The model was developed by the International Organization for Standardization (ISO) in 1984, and it is now considered the primary architectural model for intercomputer communications. The OSI model divides the tasks involved with moving information between networked computers into seven smaller, more manageable task groups. A task or group of tasks is then assigned to each of the seven OSI layers. Each layer is reasonably self-contained so that the tasks assigned to each layer can be implemented independently. This enables the solutions offered by one layer to be updated without adversely affecting the other layers. The following list details the seven layers of the Open System Interconnection (OSI) reference model:

Layer 7—Application

Layer 6—Presentation

Layer 5—Session

Layer 4—Transport

Layer 3—Network

Layer 2—Data link

Layer 1—Physical


Note A handy way to remember the seven layers is the sentence "All people seem to need data processing." The beginning letter of each word corresponds to a layer.


All—Application layer

People—Presentation layer

Seem—Session layer

To—Transport layer

Need—Network layer

Data—Data link layer

Processing—Physical layer
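The layer numbering and the mnemonic can be captured in a small table. The sketch below (plain Python, purely illustrative) encodes the seven layers and verifies that the mnemonic's initial letters line up with the layer names:

```python
# The seven OSI layers, keyed by layer number (Layer 7 is closest to the user).
OSI_LAYERS = {
    7: "Application",
    6: "Presentation",
    5: "Session",
    4: "Transport",
    3: "Network",
    2: "Data link",
    1: "Physical",
}

# "All people seem to need data processing" -- one word per layer, 7 down to 1.
MNEMONIC = "All People Seem To Need Data Processing".split()

# Each word's first letter matches the corresponding layer name's first letter.
for word, layer_num in zip(MNEMONIC, range(7, 0, -1)):
    assert word[0] == OSI_LAYERS[layer_num][0]

print("Mnemonic matches all seven layers")
```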

Figure 1-2 illustrates the seven-layer OSI reference model.

Figure 1-2 The OSI Reference Model Contains Seven Independent Layers

Characteristics of the OSI Layers

The seven layers of the OSI reference model can be divided into two categories: upper layers and lower layers.

The upper layers of the OSI model deal with application issues and generally are implemented only in software. The highest layer, the application layer, is closest to the end user. Both users and application layer processes interact with software applications that contain a communications component. The term upper layer is sometimes used to refer to any layer above another layer in the OSI model.

The lower layers of the OSI model handle data transport issues. The physical layer and the data link layer are implemented in hardware and software. The lowest layer, the physical layer, is closest to the physical network medium (the network cabling, for example) and is responsible for actually placing information on the medium.

Figure 1-3 illustrates the division between the upper and lower OSI layers.

Figure 1-3 Two Sets of Layers Make Up the OSI Layers

Protocols

The OSI model provides a conceptual framework for communication between computers, but the model itself is not a method of communication. Actual communication is made possible by using communication protocols. In the context of data networking, a protocol is a formal set of rules and conventions that governs how computers exchange information over a network medium. A protocol implements the functions of one or more of the OSI layers.

A wide variety of communication protocols exist. Some of these protocols include LAN protocols, WAN protocols, network protocols, and routing protocols. LAN protocols operate at the physical and data link layers of the OSI model and define communication over the various LAN media. WAN protocols operate at the lowest three layers of the OSI model and define communication over the various wide-area media. Routing protocols are network layer protocols that are responsible for exchanging information between routers so that the routers can select the proper path for network traffic. Finally, network protocols are the various upper-layer protocols that exist in a given protocol suite. Many protocols rely on others for operation. For example, many routing protocols use network protocols to exchange information between routers. This concept of building upon the layers already in existence is the foundation of the OSI model.

OSI Model and Communication Between Systems

Information being transferred from a software application in one computer system to a software application in another must pass through the OSI layers. For example, if a software application in System A has information to transmit to a software application in System B, the application program in System A will pass its information to the application layer (Layer 7) of System A. The application layer then passes the information to the presentation layer (Layer 6), which relays the data to the session layer (Layer 5), and so on down to the physical layer (Layer 1). At the physical layer, the information is placed on the physical network medium and is sent across the medium to System B. The physical layer of System B removes the information from the physical medium, and then its physical layer passes the information up to the data link layer (Layer 2), which passes it to the network layer (Layer 3), and so on, until it reaches the application layer (Layer 7) of System B. Finally, the application layer of System B passes the information to the recipient application program to complete the communication process.

Interaction Between OSI Model Layers

A given layer in the OSI model generally communicates with three other OSI layers: the layer directly above it, the layer directly below it, and its peer layer in other networked computer systems. The data link layer in System A, for example, communicates with the network layer of System A, the physical layer of System A, and the data link layer in System B. Figure 1-4 illustrates this example.

Figure 1-4 OSI Model Layers Communicate with Other Layers

OSI Layer Services

One OSI layer communicates with another layer to make use of the services provided by the second layer. The services provided by adjacent layers help a given OSI layer communicate with its peer layer in other computer systems. Three basic elements are involved in layer services: the service user, the service provider, and the service access point (SAP).

In this context, the service user is the OSI layer that requests services from an adjacent OSI layer. The service provider is the OSI layer that provides services to service users. OSI layers can provide services to multiple service users. The SAP is a conceptual location at which one OSI layer can request the services of another OSI layer.

Figure 1-5 illustrates how these three elements interact at the network and data link layers.

Figure 1-5 Service Users, Providers, and SAPs Interact at the Network and Data Link Layers

OSI Model Layers and Information Exchange

The seven OSI layers use various forms of control information to communicate with their peer layers in other computer systems. This control information consists of specific requests and instructions that are exchanged between peer OSI layers.

Control information typically takes one of two forms: headers and trailers. Headers are prepended to data that has been passed down from upper layers. Trailers are appended to data that has been passed down from upper layers. An OSI layer is not required to attach a header or a trailer to data from upper layers.

Headers, trailers, and data are relative concepts, depending on the layer that analyzes the information unit. At the network layer, for example, an information unit consists of a Layer 3 header and data. At the data link layer, however, all the information passed down by the network layer (the Layer 3 header and the data) is treated as data.

In other words, the data portion of an information unit at a given OSI layer potentially can contain headers, trailers, and data from all the higher layers. This is known as encapsulation. Figure 1-6 shows how the header and data from one layer are encapsulated into the header of the next lowest layer.

Figure 1-6 Headers and Data Can Be Encapsulated During Information Exchange
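The nesting can be sketched in a few lines of Python. The header and trailer strings here are placeholders for illustration, not real protocol formats:

```python
# A toy illustration of encapsulation: each layer wraps what it receives
# from the layer above in its own header (and, at the data link layer,
# a trailer as well).
def encapsulate(data: str) -> str:
    unit = data
    unit = "L4hdr|" + unit                # transport layer: segment
    unit = "L3hdr|" + unit                # network layer: packet
    unit = "L2hdr|" + unit + "|L2trl"     # data link layer: frame
    return unit

frame = encapsulate("user data")
print(frame)  # L2hdr|L3hdr|L4hdr|user data|L2trl
```

From the data link layer's point of view, everything between its header and trailer, including the Layer 3 and Layer 4 headers, is simply "data".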

Information Exchange Process

The information exchange process occurs between peer OSI layers. Each layer in the source system adds control information to data, and each layer in the destination system analyzes and removes the control information from that data.

If System A has data from a software application to send to System B, the data is passed to the application layer. The application layer in System A then communicates any control information required by the application layer in System B by prepending a header to the data. The resulting information unit (a header and the data) is passed to the presentation layer, which prepends its own header containing control information intended for the presentation layer in System B. The information unit grows in size as each layer prepends its own header (and, in some cases, a trailer) that contains control information to be used by its peer layer in System B. At the physical layer, the entire information unit is placed onto the network medium.

The physical layer in System B receives the information unit and passes it to the data link layer. The data link layer in System B then reads the control information contained in the header prepended by the data link layer in System A. The header is then removed, and the remainder of the information unit is passed to the network layer. Each layer performs the same actions: The layer reads the header from its peer layer, strips it off, and passes the remaining information unit to the next highest layer. After the application layer performs these actions, the data is passed to the recipient software application in System B, in exactly the form in which it was transmitted by the application in System A.

OSI Model Physical Layer

The physical layer defines the electrical, mechanical, procedural, and functional specifications for activating, maintaining, and deactivating the physical link between communicating network systems. Physical layer specifications define characteristics such as voltage levels, timing of voltage changes, physical data rates, maximum transmission distances, and physical connectors. Physical layer implementations can be categorized as either LAN or WAN specifications. Figure 1-7 illustrates some common LAN and WAN physical layer implementations.

Figure 1-7 Physical Layer Implementations Can Be LAN or WAN Specifications

OSI Model Data Link Layer

The data link layer provides reliable transit of data across a physical network link. Different data link layer specifications define different network and protocol characteristics, including physical addressing, network topology, error notification, sequencing of frames, and flow control. Physical addressing (as opposed to network addressing) defines how devices are addressed at the data link layer. Network topology consists of the data link layer specifications that often define how devices are to be physically connected, such as in a bus or a ring topology. Error notification alerts upper-layer protocols that a transmission error has occurred, and the sequencing of data frames reorders frames that are transmitted out of sequence. Finally, flow control moderates the transmission of data so that the receiving device is not overwhelmed with more traffic than it can handle at one time.

The Institute of Electrical and Electronics Engineers (IEEE) has subdivided the data link layer into two sublayers: Logical Link Control (LLC) and Media Access Control (MAC). Figure 1-8 illustrates the IEEE sublayers of the data link layer.

Figure 1-8 The Data Link Layer Contains Two Sublayers

The Logical Link Control (LLC) sublayer of the data link layer manages communications between devices over a single link of a network. LLC is defined in the IEEE 802.2 specification and supports both connectionless and connection-oriented services used by higher-layer protocols. IEEE 802.2 defines a number of fields in data link layer frames that enable multiple higher-layer protocols to share a single physical data link. The Media Access Control (MAC) sublayer of the data link layer manages protocol access to the physical network medium. The IEEE MAC specification defines MAC addresses, which enable multiple devices to uniquely identify one another at the data link layer.

OSI Model Network Layer

The network layer defines the network address, which differs from the MAC address. Some network layer implementations, such as the Internet Protocol (IP), define network addresses in such a way that route selection can be determined systematically by comparing the source network address with the destination network address and applying the subnet mask. Because this layer defines the logical network layout, routers can use this layer to determine how to forward packets. For this reason, much of the design and configuration work for internetworks happens at Layer 3, the network layer.
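The subnet-mask comparison can be sketched with Python's standard ipaddress module. The addresses and mask below are invented for illustration:

```python
import ipaddress

# Hypothetical addresses: is the destination on the sender's own network,
# or must the packet be handed to a router?
source = ipaddress.ip_interface("192.168.1.10/24")
local_dest = ipaddress.ip_address("192.168.1.42")
remote_dest = ipaddress.ip_address("10.0.0.5")

# Applying the subnet mask: an address is local if it falls inside the
# network derived from the source interface's address and mask.
print(local_dest in source.network)    # True  -> deliver directly
print(remote_dest in source.network)   # False -> forward via a router
```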

OSI Model Transport Layer

The transport layer accepts data from the session layer and segments the data for transport across the network. Generally, the transport layer is responsible for making sure that the data is delivered error-free and in the proper sequence. Flow control generally occurs at the transport layer.

Flow control manages data transmission between devices so that the transmitting device does not send more data than the receiving device can process. Multiplexing enables data from several applications to be transmitted onto a single physical link. Virtual circuits are established, maintained, and terminated by the transport layer. Error checking involves creating various mechanisms for detecting transmission errors, while error recovery involves taking action, such as requesting that data be retransmitted, to resolve any errors that occur.

The transport protocols used on the Internet are TCP and UDP.

OSI Model Session Layer

The session layer establishes, manages, and terminates communication sessions. Communication sessions consist of service requests and service responses that occur between applications located in different network devices. These requests and responses are coordinated by protocols implemented at the session layer. Some examples of session-layer implementations include Zone Information Protocol (ZIP), the AppleTalk protocol that coordinates the name binding process; and Session Control Protocol (SCP), the DECnet Phase IV session layer protocol.

OSI Model Presentation Layer

The presentation layer provides a variety of coding and conversion functions that are applied to application layer data. These functions ensure that information sent from the application layer of one system will be readable by the application layer of another system. Some examples of presentation layer coding and conversion schemes include common data representation formats, conversion of character representation formats, common data compression schemes, and common data encryption schemes.

Common data representation formats, or the use of standard image, sound, and video formats, enable the interchange of application data between different types of computer systems. Conversion schemes are used to exchange information between systems that use different text and data representations, such as EBCDIC and ASCII. Standard data compression schemes enable data that is compressed at the source device to be properly decompressed at the destination. Standard data encryption schemes enable data encrypted at the source device to be properly deciphered at the destination.

Presentation layer implementations are not typically associated with a particular protocol stack. Some well-known standards for video include QuickTime and Motion Picture Experts Group (MPEG). QuickTime is an Apple Computer specification for video and audio, and MPEG is a standard for video compression and coding.

Among the well-known graphic image formats are Graphics Interchange Format (GIF), Joint Photographic Experts Group (JPEG), and Tagged Image File Format (TIFF). GIF is a standard for compressing and coding graphic images. JPEG is another compression and coding standard for graphic images, and TIFF is a standard coding format for graphic images.

OSI Model Application Layer

The application layer is the OSI layer closest to the end user, which means that both the OSI application layer and the user interact directly with the software application.

This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication.

When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. When determining resource availability, the application layer must decide whether sufficient network resources for the requested communication exist. In synchronizing communication, all communication between applications requires cooperation that is managed by the application layer.

Some examples of application layer implementations include Telnet, File Transfer Protocol (FTP), and Simple Mail Transfer Protocol (SMTP).

Information Formats

The data and control information that is transmitted through internetworks takes a variety of forms. The terms used to refer to these information formats are not used consistently in the internetworking industry but sometimes are used interchangeably. Common information formats include frames, packets, datagrams, segments, messages, cells, and data units.

A frame is an information unit whose source and destination are data link layer entities. A frame is composed of the data link layer header (and possibly a trailer) and upper-layer data. The header and trailer contain control information intended for the data link layer entity in the destination system. Data from upper-layer entities is encapsulated in the data link layer header and trailer. Figure 1-9 illustrates the basic components of a data link layer frame.

Figure 1-9 Data from Upper-Layer Entities Makes Up the Data Link Layer Frame

A packet is an information unit whose source and destination are network layer entities. A packet is composed of the network layer header (and possibly a trailer) and upper-layer data. The header and trailer contain control information intended for the network layer entity in the destination system. Data from upper-layer entities is encapsulated in the network layer header and trailer. Figure 1-10 illustrates the basic components of a network layer packet.

Figure 1-10 Three Basic Components Make Up a Network Layer Packet

The term datagram usually refers to an information unit whose source and destination are network layer entities that use connectionless network service.

The term segment usually refers to an information unit whose source and destination are transport layer entities.

A message is an information unit whose source and destination entities exist above the network layer (often at the application layer).

A cell is an information unit of a fixed size whose source and destination are data link layer entities. Cells are used in switched environments, such as Asynchronous Transfer Mode (ATM) and Switched Multimegabit Data Service (SMDS) networks. A cell is composed of the header and payload. The header contains control information intended for the destination data link layer entity and is typically 5 bytes long. The payload contains upper-layer data that is encapsulated in the cell header and is typically 48 bytes long.

The length of the header and the payload fields is always the same for each cell. Figure 1-11 depicts the components of a typical cell.

Figure 1-11 Two Components Make Up a Typical Cell
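Taking ATM's typical sizes from the description above, the fixed cell size and the number of cells needed for a given amount of data are simple arithmetic. This is a sketch of the idea, not real ATM segmentation:

```python
HEADER_BYTES = 5    # control information for the destination data link entity
PAYLOAD_BYTES = 48  # encapsulated upper-layer data

# Because both fields are fixed, every cell is exactly the same size.
CELL_BYTES = HEADER_BYTES + PAYLOAD_BYTES
print(CELL_BYTES)  # 53

def cells_needed(data_len: int) -> int:
    # Data is sliced into 48-byte payloads; a partial final payload
    # would be padded out to the fixed size.
    return -(-data_len // PAYLOAD_BYTES)  # ceiling division

print(cells_needed(100))  # 3 cells (48 + 48 + 4 bytes padded to 48)
```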

Data unit is a generic term that refers to a variety of information units. Some common data units are service data units (SDUs), protocol data units (PDUs), and bridge protocol data units (BPDUs). SDUs are information units from upper-layer protocols that define a service request to a lower-layer protocol. PDU is OSI terminology for a packet. BPDUs are used by the spanning-tree algorithm as hello messages.

ISO Hierarchy of Networks

Large networks typically are organized as hierarchies. A hierarchical organization provides such advantages as ease of management, flexibility, and a reduction in unnecessary traffic. Thus, the International Organization for Standardization (ISO) has adopted a number of terminology conventions for addressing network entities. Key terms defined in this section include end system (ES), intermediate system (IS), area, and autonomous system (AS).

An ES is a network device that does not perform routing or other traffic forwarding functions. Typical ESs include such devices as terminals, personal computers, and printers. An IS is a network device that performs routing or other traffic-forwarding functions. Typical ISs include such devices as routers, switches, and bridges. Two types of IS networks exist: intradomain IS and interdomain IS. An intradomain IS communicates within a single autonomous system, whereas an interdomain IS communicates within and between autonomous systems. An area is a logical group of network segments and their attached devices. Areas are subdivisions of autonomous systems (ASs). An AS is a collection of networks under a common administration that share a common routing strategy. Autonomous systems are subdivided into areas, and an AS is sometimes called a domain. Figure 1-12 illustrates a hierarchical network and its components.

Figure 1-12 A Hierarchical Network Contains Numerous Components

Connection-Oriented and Connectionless Network Services

In general, transport protocols can be characterized as being either connection-oriented or connectionless. Connection-oriented services must first establish a connection with the desired service before passing any data. A connectionless service can send the data without any need to establish a connection first. In general, connection-oriented services provide some level of delivery guarantee, whereas connectionless services do not.

Connection-oriented service involves three phases: connection establishment, data transfer, and connection termination.

During connection establishment, the end nodes may reserve resources for the connection. The end nodes also may negotiate and establish certain criteria for the transfer, such as a window size used in TCP connections. This resource reservation is one of the things exploited in some denial of service (DoS) attacks. An attacking system sends many requests to establish a connection but never completes them. The attacked computer is then left with resources allocated for many never-completed connections, and when a legitimate end node tries to establish an actual connection, not enough resources remain for the valid connection.

The data transfer phase occurs when the actual data is transmitted over the connection. During data transfer, most connection-oriented services will monitor for lost packets and handle resending them. The protocol is generally also responsible for putting the packets in the right sequence before passing the data up the protocol stack.

When the transfer of data is complete, the end nodes terminate the connection and release resources reserved for the connection.

Connection-oriented network services have more overhead than connectionless ones. Connection-oriented services must negotiate a connection, transfer data, and tear down the connection, whereas a connectionless transfer can simply send the data without the added overhead of creating and tearing down a connection. Each has its place in internetworks.
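The three phases can be seen in miniature with Python's standard socket module. The loopback echo server below stands in for a remote end node, so the example runs without any real network:

```python
import socket
import threading

# A tiny loopback echo server standing in for "System B".
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve_once():
    conn, _ = server.accept()          # phase 1 completes: connection established
    conn.sendall(conn.recv(1024))      # phase 2: echo the data back
    conn.close()                       # phase 3: terminate, release resources

threading.Thread(target=serve_once).start()

# "System A": connect, transfer, terminate.
client = socket.create_connection(("127.0.0.1", port))  # connection establishment
client.sendall(b"hello")                                # data transfer
reply = client.recv(1024)
client.close()                                          # connection termination
server.close()

print(reply)  # b'hello'
```

A connectionless (UDP) exchange would skip the connect and close steps entirely, which is exactly the overhead difference described above.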

Internetwork Addressing

Internetwork addresses identify devices separately or as members of a group. Addressing schemes vary depending on the protocol family and the OSI layer. Three types of internetwork addresses are commonly used: data link layer addresses, Media Access Control (MAC) addresses, and network layer addresses.

Data Link Layer Addresses

A data link layer address uniquely identifies each physical network connection of a network device. Data-link addresses sometimes are referred to as physical or hardware addresses. Data-link addresses usually exist within a flat address space and have a pre-established and typically fixed relationship to a specific device.

End systems generally have only one physical network connection and thus have only one data-link address. Routers and other internetworking devices typically have multiple physical network connections and therefore have multiple data-link addresses. Figure 1-13 illustrates how each interface on a device is uniquely identified by a data-link address.

Figure 1-13 Each Interface on a Device Is Uniquely Identified by a Data-Link Address.

MAC Addresses

Media Access Control (MAC) addresses consist of a subset of data link layer addresses. MAC addresses identify network entities in LANs that implement the IEEE MAC sublayer of the data link layer. As with most data-link addresses, MAC addresses are unique for each LAN interface. Figure 1-14 illustrates the relationship between MAC addresses, data-link addresses, and the IEEE sublayers of the data link layer.

Figure 1-14 MAC Addresses, Data-Link Addresses, and the IEEE Sublayers of the Data Link Layer Are All Related

MAC addresses are 48 bits in length and are expressed as 12 hexadecimal digits. The first 6 hexadecimal digits, which are administered by the IEEE, identify the manufacturer or vendor and thus comprise the Organizationally Unique Identifier (OUI). The last 6 hexadecimal digits comprise the interface serial number, or another value administered by the specific vendor. MAC addresses sometimes are called burned-in addresses (BIAs) because they are burned into read-only memory (ROM) and are copied into random-access memory (RAM) when the interface card initializes. Figure 1-15 illustrates the MAC address format.

Figure 1-15 The MAC Address Contains a Unique Format of Hexadecimal Digits
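The split between the IEEE-administered and vendor-administered halves can be sketched with a few lines of string handling. The sample address uses 00:00:0C, a well-known Cisco OUI; the interface portion is invented:

```python
def parse_mac(mac):
    """Split a MAC address into its OUI and vendor-assigned halves."""
    digits = mac.replace(":", "").replace("-", "").upper()
    assert len(digits) == 12, "a MAC address is 12 hexadecimal digits (48 bits)"
    # First 6 hex digits: IEEE-administered OUI identifying the vendor.
    # Last 6 hex digits: vendor-administered (often an interface serial number).
    return digits[:6], digits[6:]

oui, serial = parse_mac("00:00:0c:12:34:56")
print(oui)     # 00000C -- the Organizationally Unique Identifier
print(serial)  # 123456 -- the vendor-administered portion
```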

Mapping Addresses

Because internetworks generally use network addresses to route traffic around the network, there is a need to map network addresses to MAC addresses. When the network layer has determined the destination station's network address, it must forward the information over a physical network using a MAC address. Different protocol suites use different methods to perform this mapping, but the most popular is Address Resolution Protocol (ARP).

Different protocol suites use different methods for determining the MAC address of a device. The following three methods are used most often. Address Resolution Protocol (ARP) maps network addresses to MAC addresses. The Hello protocol enables network devices to learn the MAC addresses of other network devices. MAC addresses either are embedded in the network layer address or are generated by an algorithm.

Address Resolution Protocol (ARP) is the method used in the TCP/IP suite. When a network device needs to send data to another device on the same network, it knows the source and destination network addresses for the data transfer. It must somehow map the destination address to a MAC address before forwarding the data. First, the sending station will check its ARP table to see if it has already discovered this destination station's MAC address. If it has not, it will send a broadcast on the network with the destination station's IP address contained in the broadcast. Every station on the network receives the broadcast and compares the embedded IP address to its own. Only the station with the matching IP address replies to the sending station with a packet containing the MAC address for the station. The first station then adds this information to its ARP table for future reference and proceeds to transfer the data.
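The cache-then-broadcast behavior can be sketched as a toy resolver. The table below stands in for the stations on the LAN, and all the addresses are invented for illustration; real ARP, of course, sends an actual link-layer broadcast:

```python
# Stations on the (simulated) LAN: IP address -> MAC address.
LAN_STATIONS = {
    "192.168.1.20": "00:11:22:33:44:55",
    "192.168.1.30": "66:77:88:99:AA:BB",
}

arp_cache = {}

def resolve(ip):
    if ip in arp_cache:                # already discovered: no broadcast needed
        return arp_cache[ip]
    mac = LAN_STATIONS.get(ip)         # "broadcast": only the owner replies
    if mac is not None:
        arp_cache[ip] = mac            # remember for future transfers
    return mac

print(resolve("192.168.1.20"))  # 00:11:22:33:44:55 (learned via broadcast)
print(resolve("192.168.1.20"))  # same answer, now served from the ARP cache
```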

When the destination device lies on a remote network, one beyond a router, the process is the same except that the sending station sends the ARP request for the MAC address of its default gateway. It then forwards the information to that device. The default gateway then forwards the information over whatever networks are necessary to deliver the packet to the network on which the destination device resides. The router on the destination device's network then uses ARP to obtain the MAC address of the actual destination device and delivers the packet.

The Hello protocol is a network layer protocol that enables network devices to identify one another and indicate that they are still functional. When a new end system powers up, for example, it broadcasts hello messages onto the network. Devices on the network then return hello replies, and hello messages are also sent at specific intervals to indicate that they are still functional. Network devices can learn the MAC addresses of other devices by examining Hello protocol packets.

Three protocols use predictable MAC addresses. In these protocol suites, MAC addresses are predictable because the network layer either embeds the MAC address in the network layer address or uses an algorithm to determine the MAC address. The three protocols are Xerox Network Systems (XNS), Novell Internetwork Packet Exchange (IPX), and DECnet Phase IV.

Network Layer Addresses

A network layer address identifies an entity at the network layer of the OSI layers. Network addresses usually exist within a hierarchical address space and sometimes are called virtual or logical addresses.

The relationship between a network address and a device is logical and unfixed; it typically is based either on physical network characteristics (the device is on a particular network segment) or on groupings that have no physical basis (the device is part of an AppleTalk zone). End systems require one network layer address for each network layer protocol that they support. (This assumes that the device has only one physical network connection.) Routers and other internetworking devices require one network layer address per physical network connection for each network layer protocol supported. For example, a router with three interfaces each running AppleTalk, TCP/IP, and OSI must have three network layer addresses for each interface. The router therefore has nine network layer addresses. Figure 1-16 illustrates how each network interface must be assigned a network address for each protocol supported.

Figure 1-16 Each Network Interface Must Be Assigned a Network Address for Each Protocol Supported

Hierarchical Versus Flat Address Space

Internetwork address space typically takes one of two forms: hierarchical address space or flat address space. A hierarchical address space is organized into numerous subgroups, each successively narrowing an address until it points to a single device (in a manner similar to street addresses). A flat address space is organized into a single group (in a manner similar to U.S. Social Security numbers).

Hierarchical addressing offers certain advantages over flat-addressing schemes. Address sorting and recall is simplified using comparison operations. For example, "Ireland" in a street address eliminates any other country as a possible location. Figure 1-17 illustrates the difference between hierarchical and flat address spaces.
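The "Ireland eliminates every other country" idea can be made concrete with a toy hierarchical address of the form (country, city, street, number). Each comparison narrows the candidate set one level at a time; the addresses below are invented for illustration.

```python
# A toy hierarchical address: (country, city, street, number).
# Each comparison eliminates every address that differs at that
# level, narrowing the search step by step, as with a street address.
addresses = [
    ("Ireland", "Dublin", "Main St", 4),
    ("Ireland", "Cork", "High St", 9),
    ("France", "Paris", "Rue A", 1),
]

def narrow(candidates, level, value):
    """Keep only the addresses matching `value` at position `level`."""
    return [a for a in candidates if a[level] == value]

# "Ireland" immediately eliminates France as a possible location:
step1 = narrow(addresses, 0, "Ireland")
assert ("France", "Paris", "Rue A", 1) not in step1
# A second comparison narrows the result to a single device:
step2 = narrow(step1, 1, "Dublin")
assert step2 == [("Ireland", "Dublin", "Main St", 4)]
```

A flat address space offers no such shortcut: every address in the single group must be examined until a match is found.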

Figure 1-17 Hierarchical and Flat Address Spaces Differ in Comparison Operations

Address Assignments

Addresses are assigned to devices as one of two types: static and dynamic. Static addresses are assigned by a network administrator according to a preconceived internetwork addressing plan; a static address does not change until the administrator manually changes it. Dynamic addresses are obtained by devices when they attach to a network, by means of some protocol-specific process. Some networks use a server to assign addresses, and server-assigned addresses are recycled for reuse as devices disconnect. A device using a dynamic address is therefore likely to have a different address each time that it connects to the network.
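The recycle-on-disconnect behavior can be sketched with a minimal address server. This is a hypothetical class for illustration, not a real DHCP implementation; the pool contents are made up.

```python
class AddressServer:
    """Sketch of server-assigned dynamic addressing: addresses are
    handed out on connect and recycled into the free pool on
    disconnect, so a device may receive a different address each
    time it attaches to the network."""
    def __init__(self, pool):
        self.free = list(pool)   # addresses available for assignment
        self.leases = {}         # device -> currently assigned address
    def connect(self, device):
        addr = self.free.pop(0)
        self.leases[device] = addr
        return addr
    def disconnect(self, device):
        self.free.append(self.leases.pop(device))  # recycle for reuse

server = AddressServer(["10.0.0.1", "10.0.0.2"])
first = server.connect("laptop")
server.disconnect("laptop")       # address returns to the pool
second = server.connect("laptop") # reconnecting yields a new address
assert first != second
```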

Addresses Versus Names

Internetwork devices usually have both a name and an address associated with them. Internetwork names typically are location-independent and remain associated with a device wherever that device moves (for example, from one building to another). Internetwork addresses usually are location-dependent and change when a device is moved (although MAC addresses are an exception to this rule). As with network addresses being mapped to MAC addresses, names are usually mapped to network addresses through some protocol. The Internet uses Domain Name System (DNS) to map the name of a device to its IP address. For example, it's easier for you to remember www.cisco.com instead of some IP address. Therefore, you type www.cisco.com into your browser when you want to access Cisco's web site. Your computer performs a DNS lookup of the IP address for Cisco's web server and then communicates with it using the network address.
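A name-to-address lookup of the kind DNS performs amounts to consulting a mapping table. The sketch below is a stand-in for a real DNS resolver; the table contents, including the IP address (drawn from a documentation-example range), are invented for illustration.

```python
# A minimal sketch of the name-to-address mapping service that DNS
# provides at Internet scale. The address here is a made-up
# documentation example, not Cisco's real address.
name_table = {"www.cisco.com": "198.51.100.7"}

def resolve(name):
    """Return the network address registered for a device name,
    as a DNS lookup would, so that users can type the memorable
    name rather than the numeric address."""
    return name_table[name]

assert resolve("www.cisco.com") == "198.51.100.7"
```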

Flow Control Basics

Flow control is a function that prevents network congestion by ensuring that transmitting devices do not overwhelm receiving devices with data. A high-speed computer, for example, may generate traffic faster than the network can transfer it, or faster than the destination device can receive and process it. The three commonly used methods for handling network congestion are buffering, transmitting source-quench messages, and windowing.

Buffering is used by network devices to temporarily store bursts of excess data in memory until they can be processed. Occasional data bursts are easily handled by buffering. Excess data bursts can exhaust memory, however, forcing the device to discard any additional datagrams that arrive.
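The buffer-until-exhausted behavior can be sketched as a bounded queue: occasional bursts are absorbed, but once capacity is exhausted, additional datagrams are discarded. Class and method names are hypothetical.

```python
from collections import deque

class BufferedPort:
    """Sketch of device buffering: bursts of excess data are queued
    in memory up to a fixed capacity; once the buffer is exhausted,
    any additional datagrams that arrive are discarded."""
    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0
    def receive(self, datagram):
        if len(self.queue) < self.capacity:
            self.queue.append(datagram)   # burst absorbed in memory
        else:
            self.dropped += 1             # memory exhausted: discard
    def process_one(self):
        return self.queue.popleft()

port = BufferedPort(capacity=3)
for n in range(5):            # a 5-datagram burst into a 3-slot buffer
    port.receive(n)
assert len(port.queue) == 3 and port.dropped == 2
```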

Source-quench messages are used by receiving devices to help prevent their buffers from overflowing. When its buffers overflow and it must discard received data, the receiving device sends source-quench messages to the transmitting device, at the rate of one message for each packet dropped, to request that the source reduce its current rate of data transmission. The source device receives the source-quench messages and lowers the data rate until it stops receiving the messages. Finally, the source device gradually increases the data rate as long as no further source-quench requests are received.
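The source side of this back-off-then-recover cycle can be sketched as follows. The rate units and step size are arbitrary, chosen only to make the behavior visible.

```python
def apply_source_quench(rate, quench_messages, step=10):
    """Sketch of the source's response to source-quench flow
    control: lower the transmit rate while quench messages arrive,
    then gradually raise it again once they stop. One list entry
    represents one time interval."""
    for quenched in quench_messages:
        if quenched:
            rate = max(step, rate - step)  # back off while quenched
        else:
            rate += step                   # gradually recover
    return rate

# Two intervals of quench messages followed by two quiet intervals
# return the source to its original rate:
assert apply_source_quench(100, [True, True, False, False]) == 100
```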

Windowing is a flow-control scheme in which the source device requires an acknowledgment from the destination after a certain number of packets have been transmitted. With a window size of 3, the source requires an acknowledgment after sending three packets, as follows. First, the source device sends three packets to the destination device. Then, after receiving the three packets, the destination device sends an acknowledgment to the source. The source receives the acknowledgment and sends three more packets. If the destination does not receive one or more of the packets for some reason, such as overflowing buffers, it does not receive enough packets to send an acknowledgment. The source then retransmits the packets at a reduced transmission rate.
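The window-of-three exchange above can be simulated as follows. This is a simplified sketch: real windowing protocols track sequence numbers and timers, and here a retransmitted window is simply assumed to succeed on the second attempt.

```python
def send_with_window(packets, window=3, lost=frozenset()):
    """Sketch of windowing flow control: the source requires an
    acknowledgment after every `window` packets. A window containing
    a lost packet is not acknowledged and is retransmitted (the
    retry is assumed to succeed). Returns (events, delivered)."""
    events, delivered = [], []
    i = 0
    while i < len(packets):
        group = packets[i:i + window]
        events.append(("send", tuple(group)))
        if any(p in lost for p in group):
            lost = set(lost) - set(group)      # retry succeeds
            events.append(("retransmit", tuple(group)))
        events.append(("ack", tuple(group)))   # destination acknowledges
        delivered.extend(group)
        i += window
    return events, delivered

events, delivered = send_with_window([1, 2, 3, 4, 5, 6], lost={2})
assert delivered == [1, 2, 3, 4, 5, 6]
assert ("retransmit", (1, 2, 3)) in events   # lost packet forced a resend
```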

Error-Checking Basics

Error-checking schemes determine whether transmitted data has become corrupt or otherwise damaged while traveling from the source to the destination. Error checking is implemented at several of the OSI layers.

One common error-checking scheme is the cyclic redundancy check (CRC), which detects and discards corrupted data. Error-correction functions (such as data retransmission) are left to higher-layer protocols. A CRC value is generated by a calculation that is performed at the source device. The destination device compares this value to its own calculation to determine whether errors occurred during transmission. First, the source device performs a predetermined set of calculations over the contents of the packet to be sent. Then, the source places the calculated value in the packet and sends the packet to the destination. The destination performs the same predetermined set of calculations over the contents of the packet and then compares its computed value with that contained in the packet. If the values are equal, the packet is considered valid. If the values are unequal, the packet contains errors and is discarded.
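The generate-then-compare procedure can be demonstrated with Python's standard-library CRC-32, which stands in here for whatever polynomial a given link layer actually specifies.

```python
import zlib

def frame_with_crc(payload: bytes) -> bytes:
    """Source side: perform the predetermined calculation over the
    packet contents and place the result in the frame."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_crc(frame: bytes) -> bool:
    """Destination side: perform the same calculation and compare
    the computed value with the one carried in the frame. Unequal
    values mean the frame was damaged in transit and is discarded."""
    payload, received = frame[:-4], frame[-4:]
    return zlib.crc32(payload).to_bytes(4, "big") == received

frame = frame_with_crc(b"hello, network")
assert check_crc(frame)                        # intact frame is valid
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
assert not check_crc(corrupted)                # a flipped bit is detected
```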

Multiplexing Basics

Multiplexing is a process in which multiple data channels are combined into a single data or physical channel at the source. Multiplexing can be implemented at any of the OSI layers. Conversely, demultiplexing is the process of separating multiplexed data channels at the destination. One example of multiplexing is when data from multiple applications is multiplexed into a single lower-layer data packet. Figure 1-18 illustrates this example.

Figure 1-18 Multiple Applications Can Be Multiplexed into a Single Lower-Layer Data Packet

Another example of multiplexing is when data from multiple devices is combined into a single physical channel (using a device called a multiplexer). Figure 1-19 illustrates this example.

Figure 1-19 Multiple Devices Can Be Multiplexed into a Single Physical Channel

A multiplexer is a physical layer device that combines multiple data streams into one or more output channels at the source. Multiplexers demultiplex the channels into multiple data streams at the remote end and thus maximize the use of the bandwidth of the physical medium by enabling it to be shared by multiple traffic sources.

Some methods used for multiplexing data are time-division multiplexing (TDM), asynchronous time-division multiplexing (ATDM), frequency-division multiplexing (FDM), and statistical multiplexing.

In TDM, information from each data channel is allocated bandwidth based on preassigned time slots, regardless of whether there is data to transmit. In ATDM, information from data channels is allocated bandwidth as needed by using dynamically assigned time slots. In FDM, information from each data channel is allocated bandwidth based on the signal frequency of the traffic. In statistical multiplexing, bandwidth is dynamically allocated to any data channels that have information to transmit.
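The contrast between preassigned and demand-driven slot allocation can be sketched by scheduling two toy channels, one of which is idle. Function names and the channel data are invented for illustration.

```python
def tdm_schedule(channels, rounds):
    """TDM sketch: each channel owns a preassigned slot every round,
    regardless of whether it has data, so idle channels waste slots."""
    slots = []
    for _ in range(rounds):
        for name, queue in channels.items():
            slots.append((name, queue.pop(0) if queue else None))
    return slots

def statistical_schedule(channels, rounds):
    """Statistical multiplexing sketch: bandwidth is dynamically
    allocated only to channels that have data to transmit."""
    slots = []
    for _ in range(rounds):
        for name, queue in channels.items():
            if queue:
                slots.append((name, queue.pop(0)))
    return slots

# Channel B is idle; TDM still burns one slot on it each round,
# while statistical multiplexing gives every slot to channel A.
tdm = tdm_schedule({"A": ["a1", "a2"], "B": []}, rounds=2)
assert tdm.count(("B", None)) == 2
stat = statistical_schedule({"A": ["a1", "a2"], "B": []}, rounds=2)
assert stat == [("A", "a1"), ("A", "a2")]
```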

Standards Organizations

A wide variety of organizations contribute to internetworking standards by providing forums for discussion, turning informal discussion into formal specifications, and proliferating specifications after they are standardized.

Most standards organizations create formal standards by using specific processes: organizing ideas, discussing the approach, developing draft standards, voting on all or certain aspects of the standards, and then formally releasing the completed standard to the public.

Some of the best-known standards organizations that contribute to internetworking standards include these:

International Organization for Standardization (ISO)—ISO is an international standards organization responsible for a wide range of standards, including many that are relevant to networking. Its best-known contribution is the development of the OSI reference model and the OSI protocol suite.

American National Standards Institute (ANSI)—ANSI, which is also a member of
the ISO, is the coordinating body for voluntary standards groups within the United States. ANSI developed the Fiber Distributed Data Interface (FDDI) and other communications standards.

Electronic Industries Association (EIA)—EIA specifies electrical transmission standards, including those used in networking. The EIA developed the widely used EIA/TIA-232 standard (formerly known as RS-232).

Institute of Electrical and Electronic Engineers (IEEE)—IEEE is a professional organization that defines networking and other standards. The IEEE developed the widely used LAN standards IEEE 802.3 and IEEE 802.5.

International Telecommunication Union Telecommunication Standardization Sector (ITU-T)—Formerly called the International Telegraph and Telephone Consultative Committee (CCITT), ITU-T is now an international organization that develops communication standards. The ITU-T developed X.25 and other communications standards.

Internet Activities Board (IAB)—IAB is a group of internetwork researchers who discuss issues pertinent to the Internet and set Internet policies through decisions and task forces. The IAB designates some Request For Comments (RFC) documents as Internet standards, including Transmission Control Protocol/Internet Protocol (TCP/IP) and the Simple Network Management Protocol (SNMP).

Bridging and Switching Basics


This chapter introduces the technologies employed in devices loosely referred to as bridges and switches. Topics summarized here include general link layer device operations, local and remote bridging, ATM switching, and LAN switching. Chapters in Part V, "Bridging and Switching," address specific technologies in more detail.

What Are Bridges and Switches?

Bridges and switches are data communications devices that operate principally at Layer 2 of the OSI reference model. As such, they are widely referred to as data link layer devices.

Bridges became commercially available in the early 1980s. At the time of their introduction, bridges connected and enabled packet forwarding between homogeneous networks. More recently, bridging between different networks has also been defined and standardized.

Several kinds of bridging have proven important as internetworking technologies. Transparent bridging is found primarily in Ethernet environments, while source-route bridging occurs primarily in Token Ring environments. Translational bridging provides translation between the formats and transit principles of different media types (usually Ethernet and Token Ring). Finally, source-route transparent bridging combines the algorithms of transparent bridging and source-route bridging to enable communication in mixed Ethernet/Token Ring environments.

Today, switching technology has emerged as the evolutionary heir to bridging-based internetworking solutions. Switching implementations now dominate applications in which bridging technologies were implemented in prior network designs. Superior throughput performance, higher port density, lower per-port cost, and greater flexibility have contributed to the emergence of switches as replacement technology for bridges and as complements to routing technology.

Link Layer Device Overview

Bridging and switching occur at the link layer, which controls data flow, handles transmission errors, provides physical (as opposed to logical) addressing, and manages access to the physical medium. Bridges provide these functions by using various link layer protocols that dictate specific flow control, error handling, addressing, and media-access algorithms. Examples of popular link layer protocols include Ethernet, Token Ring, and FDDI.

Bridges and switches are not complicated devices. They analyze incoming frames, make forwarding decisions based on information contained in the frames, and forward the frames toward the destination. In some cases, such as source-route bridging, the entire path to the destination is contained in each frame. In other cases, such as transparent bridging, frames are forwarded one hop at a time toward the destination.
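The hop-at-a-time decision a transparent bridge makes — learn the source, then filter, forward, or flood based on the destination — can be sketched as a small lookup table. Class and method names are hypothetical; real bridges also age out stale table entries.

```python
class TransparentBridge:
    """Sketch of a self-learning transparent bridge: record which
    port each source MAC address was seen on, then drop frames whose
    destination lies on the arrival port, flood frames to unknown
    destinations, and forward the rest toward the destination."""
    def __init__(self):
        self.table = {}                 # MAC address -> port
    def handle(self, src_mac, dst_mac, in_port):
        self.table[src_mac] = in_port   # learn the source's location
        out = self.table.get(dst_mac)
        if out == in_port:
            return "filter"             # same segment: drop the frame
        if out is None:
            return "flood"              # unknown: send to all ports
        return f"forward port {out}"

bridge = TransparentBridge()
assert bridge.handle("AA", "BB", in_port=1) == "flood"   # BB unknown
bridge.handle("BB", "AA", in_port=2)                     # learn BB: port 2
assert bridge.handle("AA", "BB", in_port=1) == "forward port 2"
assert bridge.handle("CC", "BB", in_port=2) == "filter"  # same segment
```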

Upper-layer protocol transparency is a primary advantage of both bridging and switching. Because both device types operate at the link layer, they are not required to examine upper-layer information. This means that they can rapidly forward traffic representing any network layer protocol. It is not uncommon for a bridge to move AppleTalk, DECnet, TCP/IP, XNS, and other traffic between two or more networks.

Bridges are capable of filtering frames based on any Layer 2 fields. For example, a bridge can be programmed to reject (not forward) all frames sourced from a particular network. Because link layer information often includes a reference to an upper-layer protocol, bridges usually can filter on this parameter. Furthermore, filters can be helpful in dealing with unnecessary broadcast and multicast packets.

By dividing large networks into self-contained units, bridges and switches provide several advantages. Because only a certain percentage of traffic is forwarded, a bridge or switch diminishes the traffic experienced by devices on all connected segments. The bridge or switch acts as a firewall against some potentially damaging network errors and accommodates communication among a larger number of devices than would be supported on any single LAN connected to the bridge. Bridges and switches also extend the effective length of a LAN, permitting the attachment of distant stations that previously would not have been allowed.

Although bridges and switches share most relevant attributes, several distinctions differentiate these technologies. Bridges are generally used to segment a LAN into a couple of smaller segments. Switches are generally used to segment a large LAN into many smaller segments. Bridges generally have only a few ports for LAN connectivity, whereas switches generally have many. Small switches such as the Cisco Catalyst 2924XL have 24 ports capable of creating 24 different network segments for a LAN. Larger switches such as the Cisco Catalyst 6500 can have hundreds of ports. Switches can also be used to connect LANs with different media—for example, a 10-Mbps Ethernet LAN and a 100-Mbps Ethernet LAN can be connected using a switch. Some switches support cut-through switching, which reduces latency and delays in the network, while bridges support only store-and-forward traffic switching. Finally, switches reduce collisions on network segments because they provide dedicated bandwidth to each network segment.

Types of Bridges

Bridges can be grouped into categories based on various product characteristics. Using one popular classification scheme, bridges are either local or remote. Local bridges provide a direct connection between multiple LAN segments in the same area. Remote bridges connect multiple LAN segments in different areas, usually over telecommunications lines. Figure 4-1 illustrates these two configurations.

Figure 4-1 Local and Remote Bridges Connect LAN Segments in Specific Areas

Remote bridging presents several unique internetworking challenges, one of which is the difference between LAN and WAN speeds. Although several fast WAN technologies now are establishing a presence in geographically dispersed internetworks, LAN speeds are often much faster than WAN speeds. Vast differences in LAN and WAN speeds can prevent users from running delay-sensitive LAN applications over the WAN.

Remote bridges cannot improve WAN speeds, but they can compensate for speed discrepancies through a sufficient buffering capability. If a LAN device capable of a 3-Mbps transmission rate wants to communicate with a device on a remote LAN, the local bridge must regulate the 3-Mbps data stream so that it does not overwhelm the 64-kbps serial link. This is done by storing the incoming data in onboard buffers and sending it over the serial link at a rate that the serial link can accommodate. This buffering can be achieved only for short bursts of data that do not overwhelm the bridge's buffering capability.
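The "short bursts only" caveat follows from simple arithmetic: the buffer fills at the difference between the LAN arrival rate and the WAN drain rate. A sketch, with an assumed 1-Mbit buffer for illustration:

```python
def max_burst_seconds(buffer_bits, lan_bps, wan_bps):
    """How long a burst the remote bridge can absorb: the buffer
    fills at the difference between the LAN arrival rate and the
    WAN drain rate, so it holds out only this long."""
    return buffer_bits / (lan_bps - wan_bps)

# A 1-Mbit buffer between a 3-Mbps LAN stream and a 64-kbps serial
# link absorbs only about a third of a second of sustained traffic:
burst = max_burst_seconds(1_000_000, 3_000_000, 64_000)
assert 0.34 < burst < 0.35
```

Any burst longer than this overwhelms the bridge's buffering capability, and data is lost.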

The Institute of Electrical and Electronic Engineers (IEEE) differentiates the OSI link layer into two separate sublayers: the Media Access Control (MAC) sublayer and the Logical Link Control (LLC) sublayer. The MAC sublayer permits and orchestrates media access, such as contention and token passing, while the LLC sublayer deals with framing, flow control, error control, and MAC sublayer addressing.

Some bridges are MAC-layer bridges, which bridge between homogeneous networks (for example, IEEE 802.3 and IEEE 802.3), while other bridges can translate between different link layer protocols (for example, IEEE 802.3 and IEEE 802.5). The basic mechanics of such a translation are shown in Figure 4-2.

Figure 4-2 A MAC-Layer Bridge Connects the IEEE 802.3 and IEEE 802.5 Networks

Figure 4-2 illustrates an IEEE 802.3 host (Host A) formulating a packet that contains application information and encapsulating the packet in an IEEE 802.3-compatible frame for transit over the IEEE 802.3 medium to the bridge. At the bridge, the frame is stripped of its IEEE 802.3 header at the MAC sublayer of the link layer and is subsequently passed up to the LLC sublayer for further processing. After this processing, the packet is passed back down to an IEEE 802.5 implementation, which encapsulates the packet in an IEEE 802.5 header for transmission on the IEEE 802.5 network to the IEEE 802.5 host (Host B).

A bridge's translation between networks of different types is never perfect because one network likely will support certain frame fields and protocol functions not supported by the other network.

Types of Switches

Switches are data link layer devices that, like bridges, enable multiple physical LAN segments to be interconnected into a single larger network. Similar to bridges, switches forward and flood traffic based on MAC addresses. Like any network device, a switch introduces some latency, and the forwarding technique it uses affects how much. Two of these techniques are store-and-forward switching and cut-through switching.

In store-and-forward switching, an entire frame must be received before it is forwarded. This means that the latency through the switch is relative to the frame size—the larger the frame size, the longer the delay through the switch. Cut-through switching allows the switch to begin forwarding the frame when enough of the frame is received to make a forwarding decision. This reduces the latency through the switch. Store-and-forward switching gives the switch the opportunity to evaluate the frame for errors before forwarding it. This capability to not forward frames containing errors is one of the advantages of switches over hubs. Cut-through switching does not offer this advantage, so the switch might forward frames containing errors. Many types of switches exist, including ATM switches, LAN switches, and various types of WAN switches.
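The latency difference is straightforward serialization arithmetic: store-and-forward must wait for the whole frame, while cut-through waits only for the bits needed to make a forwarding decision. A sketch, assuming (for illustration) that the cut-through decision is made after the first 112 bits, enough to cover the destination and source MAC addresses of an Ethernet frame:

```python
def store_and_forward_delay(frame_bits, link_bps):
    """Delay before forwarding begins: the entire frame must be
    received first, so the delay grows with frame size."""
    return frame_bits / link_bps

def cut_through_delay(lookahead_bits, link_bps):
    """Cut-through forwards as soon as enough of the frame has
    arrived to make a forwarding decision, so the delay is fixed
    regardless of frame size."""
    return lookahead_bits / link_bps

# 1518-byte vs. 64-byte Ethernet frames on a 10-Mbps link, with
# cut-through deciding after the first 112 bits:
big   = store_and_forward_delay(1518 * 8, 10_000_000)
small = store_and_forward_delay(64 * 8, 10_000_000)
ct    = cut_through_delay(112, 10_000_000)
assert big > small > ct   # larger frames mean longer store-and-forward delay
```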

ATM Switch

Asynchronous Transfer Mode (ATM) switches provide high-speed switching and scalable bandwidths in the workgroup, the enterprise network backbone, and the wide area. ATM switches support voice, video, and data applications, and are designed to switch fixed-size information units called cells, which are used in ATM communications. Figure 4-3 illustrates an enterprise network comprised of multiple LANs interconnected across an ATM backbone.

Figure 4-3 Multi-LAN Networks Can Use an ATM-Based Backbone When Switching Cells

LAN Switch

LAN switches are used to interconnect multiple LAN segments. LAN switching provides dedicated, collision-free communication between network devices, with support for multiple simultaneous conversations. LAN switches are designed to switch data frames at high speeds. Figure 4-4 illustrates a simple network in which a LAN switch interconnects a 10-Mbps and a 100-Mbps Ethernet LAN.

Figure 4-4 A LAN Switch Can Link 10-Mbps and 100-Mbps Ethernet Segments