Friday, February 13, 2009

Cisco Nexus 7K


Some very important things that I learnt today about the Nexus platform

1) When we say 80Gbps full duplex, it actually means 80Gbps in + 80Gbps out = 160Gbps of effective bandwidth

2) When we talk about the bandwidth of a module, it is the amount of data it can put on the switch fabric & the amount of data it can absorb from the fabric. For example, if I say a module has a bandwidth of 80Gbps, it means that module can send 80Gbps of data to the fabric & receive 80Gbps of data from the fabric.
The bandwidth of a module is measured by the bandwidth of the interconnections between the fabric interface & the fabric ASIC.

Nexus 7K System Bandwidth Calculation.
(230Gbps/slot) * (8 payload slots) = 1840Gbps
(115Gbps/slot) * (2 supervisor slots) = 230Gbps
(1840 + 230 = 2070Gbps) * (2 for full duplex operation) = 4140Gbps = 4.1Tbps system bandwidth
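The arithmetic above can be sketched as a quick Python check (the per-slot figures and slot counts are the ones quoted above; this is just an illustration, not any official Cisco tooling):

```python
# Nexus 7000 system bandwidth estimate, using the per-slot figures quoted above.
PAYLOAD_SLOTS = 8      # I/O module slots
SUP_SLOTS = 2          # supervisor slots
PAYLOAD_BW = 230       # Gbps per payload slot (one direction)
SUP_BW = 115           # Gbps per supervisor slot (one direction)

one_way = PAYLOAD_SLOTS * PAYLOAD_BW + SUP_SLOTS * SUP_BW  # 1840 + 230
full_duplex = one_way * 2                                  # count both directions

print(one_way)       # 2070 (Gbps, one way)
print(full_duplex)   # 4140 (Gbps, i.e. ~4.1 Tbps system bandwidth)
```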

Wednesday, February 11, 2009

DCE, CEE and DCB. What is the difference?

Thank God, I found this link, or else I would have been confused by all these acronyms.
 
In one word, NOTHING. They are all three letter acronyms used to describe the same thing. All three of these acronyms describe an architectural collection of Ethernet extensions (based on open standards) designed to improve Ethernet networking and management in the Data Center.
 
The Ethernet extensions are as follows:
 
- Priority-Based Flow Control = P802.1Qbb
- Enhanced Transmission Selection = P802.1Qaz
- Congestion Notification = P802.1Qau
- Data Center Bridging Exchange Protocol = This protocol is expected to leverage functionality provided by 802.1AB (LLDP)
 
Cisco has co-authored many of the standards referenced above and is focused on providing a standards-based solution for a Unified Fabric in the data center.
 
The IEEE has decided to use the term “DCB” (Data Center Bridging) to describe these extensions to the industry. You can find additional information here:
http://www.ieee802.org/1/pages/dcbridges.html
 
CEE on the other hand is a similar concept that IBM is following for its product family.
 
In summary, all three acronyms mean essentially the same thing “today”. Cisco’s DCE products and solutions are NOT proprietary and are based on open standards.
 
This article has been taken from the blog:

"non-blocking architecture"

 
It's a popular marketing term that refers broadly to the ability of a switch to handle independent packets simultaneously. For example, suppose a packet is traveling from port A to port B when a new packet arrives on port C. A "nonblocking" switch will accept and process the new packet before it completes the previous transfer. If the new packet is destined for port A or B (which are currently busy), the switch will queue the incoming packet until the destination port becomes available. Of course, the queue is finite; even a nonblocking switch must eventually reject packets (i.e., must eventually block).
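A toy model of the queueing behaviour described above (the port names, queue depth, and drop policy are illustrative assumptions, not real switch internals):

```python
from collections import deque

QUEUE_DEPTH = 4  # finite per-port queue; an assumed, illustrative limit


class NonBlockingSwitch:
    """Toy model: each output port has its own finite queue, so a new
    packet for a free port is never held up by traffic on other ports."""

    def __init__(self, ports):
        self.queues = {p: deque() for p in ports}
        self.dropped = 0

    def receive(self, packet, out_port):
        q = self.queues[out_port]
        if len(q) >= QUEUE_DEPTH:
            self.dropped += 1  # queue full: even a nonblocking switch drops
        else:
            q.append(packet)

    def transmit(self, out_port):
        q = self.queues[out_port]
        return q.popleft() if q else None


sw = NonBlockingSwitch(["A", "B", "C"])
for i in range(6):  # burst of 6 packets, all destined for port A
    sw.receive(f"pkt{i}", "A")
print(len(sw.queues["A"]), sw.dropped)  # 4 2  -> 4 queued, 2 dropped
```

Traffic for port A fills A's queue and eventually drops, but ports B and C remain free to accept packets the whole time, which is the "nonblocking" property.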

Tuesday, February 10, 2009

Technical Dictionary - Wire speed

 
Wire speed or wirespeed refers to the hypothetical maximum data transmission rate of a cable or other transmission medium. The wire speed is dependent on the physical and electrical properties of the cable, combined with the lowest level of the connection protocols.
 
When used as an adjective, wire speed describes any hardware device or function that processes data without reducing the overall transmission rate. It is common to refer to functions embedded in microchips as working at wire speed, especially when compared to those implemented in software. Network switches, routers, and similar devices are sometimes described as operating at wire speed. Data encryption and decryption and hardware emulation are software functions that might run at wire speed (or close to it) when embedded in a microchip.
 
The wire speed is rarely achieved in connections between computers due to CPU limitations, disk read/write overhead, or contention for resources. However, it is still a useful concept for estimating the theoretical best throughput, and how far the real-life performance falls short of the maximum.
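As a back-of-the-envelope illustration of that last point (the link speed and the measured figure here are made-up numbers, not a benchmark result):

```python
# How far a real transfer falls short of wire speed.
wire_speed = 1_000  # Mbit/s, e.g. a Gigabit Ethernet link
measured = 940      # Mbit/s, a hypothetical measured throughput

efficiency = measured / wire_speed
print(f"{efficiency:.1%} of wire speed")  # 94.0% of wire speed
```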

Technical Dictionary - Line rate

 
The line rate of a communications link is the data rate of its raw bitstream, including all framing bits and other physical layer overhead.
 
For example, the line rate of a T1 data link is 1.544 Mbit/s, of which 1.536 Mbit/s is available for data communications, and the remaining 8000 bit/s is framing overhead. ISDN Basic Rate Interface has a line rate of 160 kbit/s and a user data rate of 144 kbit/s.
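The T1 and ISDN figures above can be checked with a quick arithmetic sketch:

```python
# T1: line rate vs user data rate, figures from the paragraph above
t1_line = 1_544_000  # bit/s
t1_data = 1_536_000  # bit/s
print(t1_line - t1_data)           # 8000 bit/s of framing overhead
print(f"{t1_data / t1_line:.2%}")  # 99.48% of the line rate carries data

# ISDN Basic Rate Interface
bri_line = 160_000   # bit/s
bri_data = 144_000   # bit/s
print(f"{bri_data / bri_line:.0%}")  # 90% of the line rate is user data
```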
 
Additional factors that reduce the throughput of communications links below their line rate can include packetization overhead, burstiness, link contention, and inefficient use of link resources by higher-level protocols.

Data Center Ethernet / Converged Enhanced Ethernet

 
Wow, I found this today: data centers have their own Ethernet, formally known as DCE (Data Center Ethernet) or CEE (Converged Enhanced Ethernet). Both of them mean the same thing. It is actually Ethernet with some additions to optimize it for use in the data center.
 
More about DCE/CEE here: