Thursday, April 30, 2009

Windows NT 4 Domain Models

Nice article that I found on the Windows NT domain models.

Single Domain model:
there is one domain with accounts and resources. The advantages:

    * Works best for small organizations
    * Centralized management of users and resources
    * No trusts involved

The least complex structure. One security boundary with no internal divisions. The disadvantages are performance issues as the domain grows and the lack of internal security divisions (for units or departments) to reflect entities in a growing enterprise. The SAM can manage up to about 40,000 accounts. As the number of accounts grows, the power of the domain controllers needs to increase - but with modern inexpensive Pentium-based PCs, this is not particularly important. You will see some penalty in browsing as the number of members in the domain increases. The maximum size of the SAM is approximately 40MB, and this is a real limitation for this model. User accounts, group definitions, and PC accounts all add to the cumulative size.

Single Master Domain model : there is one account domain and multiple resource domains with each resource domain trusting the account (user) domain. The advantages of the single master domain are:

    * Good solution for moderately sized networks
    * Departmental control of resources based on resource domains (departmental, unit, ...)
    * Centralized user account management
    * Global groups are defined centrally in account domain

Basically, the accounts are centralized under one administrative unit and the resources are decentralized. This fits the departmental political model of resource ownership. For the model to work well, the account domain admins must create the appropriate global groups needed to manage the security of resources in the resource domains, and the resource admins should manage security by assigning permissions to groups, not individuals. Resource domain admins can assign permissions to global groups once, and that's the end of their permissions management task. It's set once and forget it. When permissions need to be added or removed, one does not search through many resources to add or remove that person's access; one simply adds or removes that person's account from the group (or groups) in the account domain. A single change in group membership results in permission changes across many resources. The single master domain model has a single account domain with the 40MB SAM and the approximately 40,000-account limitation.

The number of trusts:

 T  =  R

that is, the number of trusts is equal to the number of resource domains, one trust per resource domain where the resource domain trusts the account domain.

Multiple Master Domain model: an extension of the single master domain model. Most appropriate for divisions separated geographically and when one must scale beyond the number of accounts supported in a single account domain. You have multiple single master domains linked together by two way trusts. Each account domain trusts every other account domain. Each resource domain trusts each account domain. The advantages are:

    * Good solution for very large organizations
    * Scalable to accommodate any number of users - just add more account domains
    * Resources are locally and logically grouped
    * Departmental-focused management of resources
    * Any master (account) domain can administer all user accounts, or not, as desired

The disadvantage of the multiple master domain is complexity: there are multiple account domains, the number of global groups needed is multiplied by at least the number of account domains, and the number of trusts explodes.

The number of trusts :

 T  =  M * (M - 1) + R * M

where M is the number of account (master) domains and R is the number of resource domains. Actually, this is the maximum number of trusts. You generally cannot avoid the

 M * (M - 1)

trusts between account domains. One has the

 R * M

trusts only if all resource domains have users needing access in all account domains.
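
For example, with M = 3 account domains and R = 5 resource domains, the maximum is 3 * 2 + 5 * 3 = 21 one-way trusts.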

Complete Trust Domain model: a mesh model - a set of single domains with trusts between every pair of domains. Appropriate for the early phase of consolidation between small organizations with existing single domains, or for politically sensitive, departmentally organized enterprises with control issues over accounts and resources. The advantages are:

    * Useful for organizations with no MIS department
    * Scalable for any number of users
    * Each department (entity with a domain) has Full Control over its users and resources
    * Users and resources are located within the same domain

The disadvantages reflect the other side of the coin:

    * No centralized management
    * Many trust relationships to manage
    * Administrators must trust each other to properly manage users, groups, and resources

That is, there is a lot of trust required, in more than one sense. It is a decentralized, high-overhead environment.

The number of trusts :

 T  =  D * (D - 1)

where D is the number of domains.
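
As a quick sanity check on these trust-count formulas, here is a minimal Python sketch (the domain counts are made-up example values, not from the article):

  def single_master_trusts(resource_domains):
      # Each resource domain has one one-way trust toward the single account domain.
      return resource_domains

  def multiple_master_trusts(account_domains, resource_domains):
      # Every pair of account domains trusts each other (two one-way trusts per pair),
      # and every resource domain trusts every account domain.
      m, r = account_domains, resource_domains
      return m * (m - 1) + r * m

  def complete_trust_trusts(domains):
      # Every domain trusts every other domain in both directions.
      d = domains
      return d * (d - 1)

  print(single_master_trusts(4))        # 4
  print(multiple_master_trusts(2, 4))   # 2*1 + 4*2 = 10
  print(complete_trust_trusts(6))       # 6*5 = 30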

One sees the term two-way trust. There are no two-way trusts. When domainA trusts domainB

 domainA --> domainB

domainA is the trusting domain and domainB is the trusted domain. The relationship is that users in B may be permitted to access resources in A. The resources are in the trusting domain and the users are in the trusted domain. If one needs it to work both ways, you need to create another trust going the other way

 domainA <-- domainB

domainB is the trusting domain and domainA is the trusted domain. To create a "two-way" trust, you have to create the two one-way trusts. I use the memory aid that the accounts include an account for Ed and that resources are thINGs. Thus the domain with accounts (including Ed) is the trustED domain, and the domain with thINGs (resources) is the trustING domain. There is no transitivity in trust relationships: if domainA trusts domainB and domainB trusts domainC, this does not mean that domainA also trusts domainC.
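
To make the direction and the lack of transitivity concrete, here is a small Python sketch of my own (the domain names are just examples); it models each trust as a one-way edge from the trusting (resource) domain to the trusted (account) domain:

  # Each entry means: trusting domain -> set of domains it trusts (one-way).
  trusts = {
      "domainA": {"domainB"},   # domainA trusts domainB: users in B can be granted access in A
      "domainB": {"domainC"},   # domainB trusts domainC
  }

  def can_be_granted_access(user_domain, resource_domain):
      # True only if the resource (trusting) domain directly trusts the user's (trusted) domain.
      # No transitivity: an indirect chain of trusts does not count.
      return user_domain in trusts.get(resource_domain, set())

  print(can_be_granted_access("domainB", "domainA"))  # True  - domainA trusts domainB
  print(can_be_granted_access("domainA", "domainB"))  # False - no trust in that direction
  print(can_be_granted_access("domainC", "domainA"))  # False - trusts are not transitive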

Full article can be found at the following link:
http://www.windowsnetworking.com/kbase/WindowsTips/WindowsNT/AdminTips/Network/WindowsNT4DomainModels.html

Understanding Windows NT trust relationships

Nice article that I found while searching for trust relationships in Windows NT domains.

What's a trust relationship?
A trust relationship is nothing more than an agreement between two Windows NT domains. This agreement allows users from one domain to use resources in a different domain, as long as the Administrator allows them to do so. For example, a trust relationship might be used to allow users in domain B to use a printer or a mail server that's located in domain A.

Trusting a domain
The domain that has the resource that the other domain wants to use is called the trusting domain. This is the case because in the situation where someone in domain B wants to use a printer in domain A, the Administrator of domain A must agree to trust users from domain B. Therefore, domain A is doing the trusting.

The trusted domain
The domain containing users who need access to the resource in a foreign domain is called the trusted domain. This is the case because they are trusted by the Administrator of the resource domain. If you have trouble remembering the difference, just keep in mind that the trusted domain always contains users. One silly, but effective, way to remember this is that the word trusted ends in the letters ed. Ed could be a username within a trusted domain.

Two-way trusts
If you have a situation in which users in both domains need to access resources in both domains, you can establish a two-way trust. By doing so, users in either domain may access resources in either domain. For example, a user in domain A could access resources in domain A and domain B. Likewise, a user in domain B could access resources in domain B and domain A.

Transitive trusts
Transitive trusts—in which more than two domains are involved—are trust relationships passed between domains. An example of a transitive trust is a situation in which domain A trusts domain B. Domain B trusts domain C. Therefore, through transitive trusts, domain A trusts domain C.

In Windows NT 4, transitive trusts don't exist. It's still possible to create such an arrangement, but domain A would have to establish separate trust relationships with domain B and domain C. In Windows 2000, transitive trusts will finally be supported. Therefore, in Windows 2000 environments, be careful who you trust, because you never know who they trust.

What about security?
The thought of opening your domain up to another domain may sound scary at first, but remember that as an Administrator, you're always in control. Simply establishing a trust relationship doesn't give anyone rights to anything. For anyone from the foreign domain to access a resource on your system, you must grant them rights to do so, in the same way that you would grant rights to a user within your domain.

Conclusion
In this article, I've tried to simplify the concept of Windows NT trusts. I also explained the various types of trusts and how they work.


Here is the link to the original article on TechRepublic:
http://articles.techrepublic.com.com/5100-10878_11-5027007.html

Saturday, March 21, 2009

What is all this 'MPLS Fundamentals' Stuff!!!

Well, I have started reading the 'MPLS Fundamentals' book & was making some notes for revision later on, for when I am preparing for my lab. So I thought that instead of keeping them to myself, I would share them with all you guys out there preparing for CCIE SP.

I will keep posting on every chapter as I go through the entire book!!

Hope it helps!!

MPLS Fundamentals - Chapter 4 – Review Questions

1. What is the fundamental purpose of LDP?

- The fundamental purpose of LDP is to carry the label bindings for the Forwarding Equivalence Classes (FECs) in the MPLS network.

 

2. Name the four main functions that LDP takes care of.

- LDP has four major functions:

i) The discovery of LSRs that are running LDP

ii) Session establishment and maintenance

iii) Advertising of label mappings

iv) Housekeeping by means of notification

 

3. How can you reduce the number of label bindings on an LSR?

- Following are the two ways by which we can reduce the number of label bindings on an LSR:

You can configure LDP to advertise or not to advertise certain labels to certain LDP peers. You can then use the locally assigned labels that are advertised to the LDP peers as the outgoing labels on those LSRs. The syntax for this command is as follows: mpls ldp advertise-labels [vrf vpn-name] [interface interface | for prefix-access-list [to peer-access-list]]. The command mpls ldp request-labels is used instead of mpls ldp advertise-labels for LC-ATM interfaces. The Cisco IOS LDP implementation allows you to specify more than one mpls ldp advertise-labels for prefix-access-list to peer-access-list command. This brings greater flexibility when you are deciding which label bindings to send to which LDP peers.

 

You can filter out incoming label bindings from an LDP neighbor. In effect, this is the opposite of the feature that prevents the advertising of label bindings. You can use the inbound label binding filtering on the receiving LDP peer if you cannot apply the outbound filtering of label bindings, as described above. This feature can limit the number of label bindings stored in the LIB of the router. For instance, you can filter out all received label bindings from the LDP peers, except for the label bindings of the loopback interfaces of PE routers in an MPLS VPN network. Usually, these loopback interfaces have the BGP next-hop IP addresses, and the LSRs can use the label associated with that prefix to forward the labeled customer VPN traffic. Following is the command to enable the inbound label binding filtering: mpls ldp neighbor [vrf vpn-name] nbr-address labels accept acl

 

4. What problem does MPLS LDP-IGP synchronization solve?

- Following are the problems that MPLS LDP-IGP synchronization solves:

 

A common problem with MPLS networks that are running LDP is that when the LDP session is broken on a link, the IGP still has that link as outgoing; thus, packets are still forwarded out of that link. This happens because the IGP installs the best path in the routing table for any prefix. Therefore, traffic for prefixes with a next hop out of a link where LDP is broken becomes unlabeled.

 

A problem can also occur when LSRs restart. The IGP can be quicker in establishing its adjacencies than LDP can establish its sessions. This means that IGP forwarding is already happening before the LFIB has the necessary information to start correct label forwarding. The packets are incorrectly forwarded (unlabeled) or dropped until the LDP session is established.

 

The problem that LDP-IGP Synchronization solves cannot happen with BGP and label distribution. Because BGP takes care of the binding advertisement and the control plane for IP routing, the before-mentioned problem cannot happen. Although it is possible for the IGP adjacency to be up while LDP is down on a link, BGP is either up or down, meaning that the installation of the IP prefix in the routing table by BGP is linked to the advertisement of the label binding for that prefix by BGP.

 

5. How many LDP sessions are established between two LSRs that have six links between them, of which two links are LC-ATM links and four are frame links?

- Three LDP sessions would be needed between these two LSRs: one session for each LC-ATM link & one session shared by the four frame links.

You might think that one LDP session between a pair of LSRs is enough to do the job. You might be right in most cases! When the per-platform label space is the only label space used between a pair of LSRs, one LDP session suffices. This is so because only one set of label bindings is exchanged between the two LSRs, no matter how many links are between them. Basically, the interfaces can share the same set of labels when the per-platform label space is used. The reason for this is that all the label bindings are relevant to all the links between the two LSRs, because they all belong to the same label space. Interfaces belong to the per-platform label space when they are frame-mode interfaces. Interfaces that are not frame-mode interfaces, such as LC-ATM interfaces, have a per-interface label space. With per-interface label space, each label binding has relevance only to that interface. Therefore, for each interface that has a per-interface label space, one LDP session must exist between the pair of routers.
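
That counting rule can be sketched in a few lines of Python (purely illustrative, not how IOS implements it): all frame-mode links share one session, plus one session per LC-ATM link.

  def ldp_sessions(frame_links, lc_atm_links):
      # All frame-mode links share the per-platform label space -> one session covers them all.
      # Each LC-ATM link has its own per-interface label space -> one session per link.
      return (1 if frame_links > 0 else 0) + lc_atm_links

  print(ldp_sessions(frame_links=4, lc_atm_links=2))  # 3, as in the question above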

 

6. What do you need to configure to protect the LDP sessions against attacks?

- LDP sessions are TCP sessions. TCP sessions can be attacked by spoofed TCP segments. To protect LDP against such attacks, you can use Message Digest 5 (MD5) authentication. MD5 adds a signature—called the MD5 digest—to the TCP segments. The MD5 digest is calculated for the particular TCP segment using the configured password on both ends of the connection. The configured MD5 password is never transmitted. This would leave a potential hacker having to guess the TCP sequence numbers and the MD5 password.

 

7. What trick does MPLS LDP-IGP Synchronization employ to ensure that the link is not used to forward traffic while the LDP session is unsynchronized?

- When the MPLS LDP-IGP synchronization is active for an interface, the IGP announces that link with the maximum metric until the synchronization is achieved, or until the LDP session is running across that interface. The maximum link metric for OSPF is 65535 (hex 0xFFFF). No path through the interface where LDP is down is used unless it is the only path. (No other paths have a better metric.) After the LDP session is established and label bindings have been exchanged, the IGP advertises the link with its normal IGP metric. At that point, the traffic is label-switched across that interface. Basically, OSPF does not form an adjacency across a link if the LDP session is not established first across that link. (OSPF does not send out Hellos on the link.)

Until the LDP session is established or until the synchronization Holddown timer has expired, the OSPF adjacency is not established. Synchronized here means that the local label bindings have been sent over the LDP session to the LDP peer. However, when the synchronization is turned on at router A and that router has only one link to router B and no other IP connectivity to router B via another path (this means via other routers), the OSPF adjacency never comes up. OSPF waits for the LDP session to come up, but the LDP session cannot come up because router A cannot have the route for the LDP router ID of router B in its routing table. The OSPF and LDP adjacency can stay down forever in this situation! If router A has only router B as a neighbor, the LDP router ID of router B is not reachable; this means that no route exists for it in the routing table of router A. In that case, the LDP-IGP synchronization detects that the peer is not reachable and lets OSPF bring up the adjacency anyway. In this case, the link is advertised with maximum metric until the synchronization occurs. This makes the path through that link a path of last resort.

In some cases, the problem with the LDP session might be a persistent one; therefore, it might not be desirable to keep waiting for the IGP adjacency to be established. The solution for this is to configure a Holddown timer for the synchronization. If the timer expires before the LDP session is established, the OSPF adjacency is built anyway. If everything is fine with LDP across that link, LDP also forms a session across the link. While OSPF is waiting to bring up its adjacency until LDP synchronizes, the OSPF interface state is down and OSPF does not send Hellos onto that link.

 

8. What does LDP Session Protection use to protect an LDP session?

- When the LDP session between two directly connected LSRs is protected, a targeted LDP session is built between the two LSRs. When the directly connected link does go down between the two LSRs, the targeted LDP session is kept up as long as an alternative path exists between the two LSRs. The LDP link adjacency is removed when the link goes down, but the targeted adjacency keeps the LDP session up. When the link comes back up, the LSR does not need to re-establish the LDP session; therefore, the convergence is better.

MPLS Fundamentals - Chapter 4 – Useful Commands

  • Enable CEF with the global 'ip cef' command.
  • Enable LDP globally with the 'mpls ip' command.
  • To discover whether the LSR sends and receives LDP Hellos, the Hello interval, and the Hold time, use the 'show mpls ldp discovery [detail]' command.
  • The 'show mpls interfaces' command allows you to quickly see which interfaces are running LDP.
  • You can change the LDP router ID manually by using the command 'mpls ldp router-id interface [force]'.
  • If the LDP peers agree on the session parameters, they keep the TCP connection between them. If not, they retry to create the LDP session between them, but at a throttled rate. In Cisco IOS, the LDP backoff command controls this throttling rate: 'mpls ldp backoff initial-backoff maximum-backoff'
  • The command to change the LDP session keepalive timer is 'mpls ldp holdtime seconds'
  • Another command to have a look at the LIB on the LSR is 'show mpls ip binding'.
  • You can also see the discovery and session timers with the command 'show mpls ldp parameters'.
  • To change the transport address used to create the LDP session, configure the command 'mpls ldp discovery transport-address {interface | ip-address}' on the interface of the router and specify the interface or IP address to be used.
  • Command to check the LDP Neighbor Hold Time and KA Interval is 'show mpls ldp neighbor 10.200.254.5 detail'
  • Command 'show mpls ldp bindings' shows the LIB on an LSR. The advantage of the command show mpls ip binding is that it also shows which label from all possible remote bindings is used to forward traffic by indicating inuse. Inuse indicates the outgoing label in the LFIB for that prefix.
  • Command to see the label bindings for a specific prefix is 'show mpls ldp bindings <ip> <mask>'
  • In older Cisco IOS software (pre 12.0(21)ST), the default behavior was not to send a Label Withdraw message to withdraw the label before advertising the new label for the FEC. The new label advertisement was also an implicit label withdraw. If you want to keep the old behavior, you must configure the command 'mpls ldp neighbor neighbor implicit-withdraw'.
  • To debug received LDP messages, excluding the periodic keepalives - 'debug mpls ldp messages received'
  • To debug LDP Label Information Base (LIB) changes - 'debug mpls ldp bindings'
  • For LDP neighbors that are not directly connected, the LDP neighborship needs to be configured manually on both the routers with the 'mpls ldp neighbor targeted' command.
  • To change the LDP Hello interval and the Hold time for targeted LDP sessions, you can use the command 'mpls ldp discovery {hello {holdtime | interval} seconds | targeted-hello {holdtime | interval} seconds | accept [from acl]}'
  • 'mpls ldp discovery targeted-hello accept [from acl]' command can be used to configure the other router to accept targeted LDP sessions from specific LDP routers.
  • In Cisco IOS, you can configure MD5 for LDP by configuring a password for the LDP peer with the command 'mpls ldp neighbor [vrf vpn-name] ip-addr password [0-7] pswd-string'
  • LDP lets you control the advertisement of labels. You can configure LDP to advertise or not to advertise certain labels to certain LDP peers using the command 'mpls ldp advertise-labels [vrf vpn-name] [interface interface | for prefix-access-list [to peer-access-list]]'
  • Command to enable the inbound label binding filtering: 'mpls ldp neighbor [vrf vpn-name] nbr-address labels accept acl'
  • The OSPF router command to enable LDP Autoconfiguration is this: 'mpls ldp autoconfig [area area-id]'
  • The interface command to disable LDP Autoconfiguration on an interface is as follows: 'no mpls ldp igp autoconfig'
  • 'show mpls interfaces detail' & 'show mpls ldp discovery detail' commands can be used to see whether MPLS was configured using the interface command or the autoconfig command.
  • The command to enable MPLS LDP-IGP Synchronization for the IGP is 'mpls ldp sync', and it is configured under the router process.
  • Disable MPLS LDP-IGP Synchronization on one particular interface with the command 'no mpls ldp igp sync'.
  • By default, if synchronization is not achieved, the IGP waits indefinitely to bring up the adjacency. You can change this with the global command 'mpls ldp igp sync holddown msecs', which instructs the IGP to wait only for the configured time.
  • Commands 'show mpls ldp igp sync serial 4/0' & 'show ip ospf mpls ldp interface' can be used to check the status of an interface in regards to the IGP-LDP synchronization.
  • The command 'debug mpls ldp sync [interface <name>] [peer-acl <acl>]' provides debug information on the LDP synchronization.
  • The global command to enable LDP Session Protection is 'mpls ldp session protection [vrf vpn-name] [for acl] [duration seconds]'
  • For the protection to work, you need to enable it on both the LSRs. If this is not possible, you can enable it on one LSR, and the other LSR can accept the targeted LDP Hellos by configuring the command 'mpls ldp discovery targeted-hello accept'.

MPLS Fundamentals - Chapter 4 - Label Distribution Protocol

  • Even if you have MPLS enabled on all the routers/interfaces, they will not form an LDP neighborship if there is no IGP running among the routers. Even if the interfaces are directly connected & you are able to ping them, LDP will not form a neighbor relationship until an IGP is running.
  • On some routers, TDP is used as the default label distribution protocol, while on others LDP is the default. If there is a mismatch in the protocol between two routers trying to form an LDP neighborship, it will not work.
  • The LFIB - the table that forwards labeled packets - is fed by the label bindings found in the LIB. The LIB is fed by the label bindings received via LDP, Resource Reservation Protocol (RSVP), or MP-BGP, or by statically assigned label bindings.
  • LDP has four major functions:

1. The discovery of LSRs that are running LDP

2. Session establishment and maintenance

3. Advertising of label mappings

4. Housekeeping by means of notification

  • LDP Hello messages are UDP messages that are sent on the links to the 'all routers on this subnet' multicast IP address - in other words, to the 224.0.0.2 group IP multicast address. The UDP port used for LDP is 646.
  • The Hello message contains a Hold time. If no Hello message is received from that LSR before the Hold time expires, the LSR removes that LSR from the list of discovered LDP neighbors.
  • If the two LDP peers have different LDP Hold times configured, the smaller of the two values is used as the Hold time for that LDP discovery source. Cisco IOS might overwrite the configured LDP Hello interval. It will choose a smaller LDP Hello interval than configured so that it can send at least three LDP Hellos before the Hold time expires.
  • This LDP ID is a 6-byte field that consists of 4 bytes identifying the LSR uniquely and 2 bytes identifying the label space that the LSR is using. If the last two bytes are 0, the label space is the platform-wide or per-platform label space. If they are non-zero, a per-interface label space is used. If that is the case, multiple LDP IDs are used, where the first 4 bytes are the same value, but the last two bytes indicate a different label space. Per-interface label space is used for LC-ATM links. (A small decoding sketch follows at the end of this list.)
  • The first 4 bytes of the LDP ID are an IP address taken from an operational interface on the router. If loopback interfaces exist, the highest IP address of the loopback interfaces is taken for the LDP ID or LDP router ID. If no loopback interfaces exist, the highest IP address of an interface is taken.
  • In Cisco IOS, the MPLS LDP router ID needs to be present in the routing table of the LDP neighboring routers. If it is not, the LDP session is not formed.
  • If two LSRs have discovered each other by means of the LDP Hellos, they attempt to establish an LDP session between them. One LSR tries to open a TCP connection to TCP port 646 to the other LSR.
  • The command to change the LDP session keepalive timer is 'mpls ldp holdtime seconds'.You can configure the value of the Hold time to be between 15 and 2,147,483 seconds, with a default of 180 seconds.
  • When a router has multiple links toward another LDP router, the same transport address must be advertised on all the parallel links that use the same label space.
  • When a router has multiple links toward another LDP router and a different transport address is advertised on those links, the TCP session is still formed, but there is a missing link from the LDP "discovery sources" on the other router. In the previous example, the LDP session is formed, but Ethernet 0/1/3 or Ethernet 0/1/4 is missing from the LDP discovery sources in the output of router london. As such, the traffic from router london toward router new-york is not load-balanced but uses only one outgoing Ethernet link.
  • Interfaces belong to the per-platform label space when they are frame-mode interfaces. Interfaces that are not frame-mode interfaces such as LC-ATM interfaces have a per-interface label space.
  • With per-interface label space, each label binding has relevance only to that interface. Therefore, for each interface that has a per-interface label space, one LDP session must exist between the pair of routers.
  • One example in which the two LDP peers might disagree on the parameters and not form an LDP session is the case of LC-ATM, where the two peers are using different ranges of VPI/VCI values for the labels.
  • After the LDP session has been set up, it is maintained by either the receipt of LDP packets or a periodic keepalive message. Each time the LDP peer receives an LDP packet or a keepalive message, the keepalive timer is reset for that peer.
  • The downstream LSR is found by looking up the next hop for that prefix in the routing table. Only the remote binding associated with that next-hop LSR should be used to populate the LFIB. This means that only one label from all the advertised label bindings from all the LDP neighbors of this LSR should be used as outgoing label in the LFIB for that prefix. The problem is that the label bindings are advertised as (LDP Identifier, label) without the IP addresses of the interfaces. This means that to find the outgoing label for a particular prefix, you must map to the LDP Identifier the IP address of the interface—pointing back to this LSR—on the downstream LSR. You can only do this if each LDP peer advertises all its IP addresses. These IP addresses are advertised by the LDP peer with Address messages and withdrawn with Withdraw Address messages. You can find these addresses when you are looking at the LDP peer. They are called the bound addresses for the LDP peer.
  • The concept of split horizon does not exist; an LDP peer assigns its own local label to a prefix and advertises that back to the other LDP peer, even though that other LDP peer owns the prefix (it is a connected prefix) or that other LDP peer is the downstream LSR.
  • Examples in which the targeted LDP session is needed are AToM networks and TE tunnels in an MPLS VPN network.
  • If one LSR has MD5 configured for LDP and the other not, the following message is logged: %TCP-6-BADAUTH: No MD5 digest from 10.200.254.4(11092) to 10.200.254.3(646)
  • If both LDP peers have a password configured for MD5 but the passwords do not match, the following message is logged: %TCP-6-BADAUTH: Invalid MD5 digest from 10.200.254.4(11093) to 10.200.254.3(646)
  • You do not have to clear the LDP neighbor to which you apply the mpls ldp advertise-labels command for it to take effect.
  • "Interface config" indicates that LDP is enabled through the interface mpls ip command. "IGP config" indicates that LDP is enabled through the router mpls ldp autoconfig command.
  • With MPLS VPN, AToM, Virtual Private LAN Switching (VPLS), or IPv6 over MPLS, the packets must not become unlabeled in the MPLS network. If they do become unlabeled, the LSR does not have the intelligence to forward the packets anymore and drops them.
  • The solution is MPLS LDP-IGP Synchronization. This feature ensures that the link is not used to forward (unlabeled) traffic when the LDP session across the link is down. Rather, the traffic is forwarded out another link where the LDP session is still established.
  • At the time of writing this book, the only IGP that is supported with MPLS LDP-IGP Synchronization is OSPF.
  • The problem that LDP-IGP Synchronization solves cannot happen with BGP and label distribution. Because BGP takes care of the binding advertisement and the control plane for IP routing, the before-mentioned problem cannot happen. Although it is possible for the IGP adjacency to be up while LDP is down on a link, BGP is either up or down, meaning that the installation of the IP prefix in the routing table by BGP is linked to the advertisement of the label binding for that prefix by BGP.
  • OSPF does not form an adjacency across a link if the LDP session is not established first across that link. (OSPF does not send out Hellos on the link.)
  • By default, if synchronization is not achieved, the IGP waits indefinitely to bring up the adjacency. You can change this with the global command mpls ldp igp sync holddown msecs, which instructs the IGP to wait only for the configured time. After the synchronization Holddown timer expires, the IGP forms an adjacency across the link. As long as the IGP adjacency is up, while the LDP session is not synchronized, the IGP advertises the link with maximum metric.
  • When the LDP session between two directly connected LSRs is protected, a targeted LDP session is built between the two LSRs.
  • Finally, a useful LDP feature is LDP Graceful Restart. It specifies a mechanism for LDP peers to preserve the MPLS forwarding state when the LDP session goes down. As such, traffic can continue to be forwarded without interruption, even when the LDP session restarts.
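
As a small illustration of the LDP ID layout described in the notes above (this is not IOS code, just a toy decoder), here is a Python sketch that splits the 6-byte LDP identifier into the router ID and the label space:

  import struct

  def decode_ldp_id(ldp_id: bytes):
      # The LDP ID is 6 bytes: 4 bytes of router ID (an IP address) + 2 bytes of label space.
      # Label space 0 means the per-platform label space; non-zero means per-interface.
      router_id_bytes, label_space = struct.unpack("!4sH", ldp_id)
      router_id = ".".join(str(b) for b in router_id_bytes)
      space = "per-platform" if label_space == 0 else f"per-interface ({label_space})"
      return router_id, space

  # Example: the LDP ID usually written as 10.200.254.3:0
  print(decode_ldp_id(bytes([10, 200, 254, 3, 0, 0])))  # ('10.200.254.3', 'per-platform')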

MPLS Fundamentals - Chapter 3 – Review Questions

1. What does the push operation do on a labeled packet?

- The top label is replaced with a new label (swapped), and one or more labels are added (pushed) on top of the swapped label.

 

2. Which Cisco IOS command do you use to see what the swapped label is and which labels are pushed onto a received packet for a certain prefix?

- To see all the labels that change on an already labeled packet, you must use the 'show mpls forwarding-table [network {mask | length}] [detail]' command

 

3. What does the outgoing label entry of "Aggregate" in the LFIB of a Cisco IOS LSR mean?

- The outgoing label entry showing 'Aggregate' means that the aggregating LSR needs to remove the label of the incoming packet and must do an IP lookup to determine the more specific prefix to use for forwarding this IP packet.

 

5. What are the value and the function of the Router Alert label?

- The Router Alert label is the one with value 1. This label can be present anywhere in the label stack except at the bottom. When the Router Alert label is the top label, it alerts the LSR that the packet needs a closer look. Therefore, the packet is not forwarded in hardware, but it is looked at by a software process. When the packet is forwarded, the label 1 is removed. Then a lookup of the next label in the label stack is performed in the LFIB to decide where the packet needs to be switched to. Next, a label action (pop, swap, push) is performed, the label 1 is pushed back on top of the label stack, and the packet is forwarded.
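
Here is a toy Python sketch of those label operations (label stacks are plain Python lists with the top label first, and the LFIB entries and label values are made up); it is only meant to illustrate the Router Alert handling described above:

  ROUTER_ALERT = 1

  def apply_action(stack, action, out_labels=()):
      # pop: remove the top label.
      # swap/push: replace the top label with one or more labels
      # (a push is a swap of the top label plus extra labels on top of it).
      if action == "pop":
          return stack[1:]
      return list(out_labels) + stack[1:]

  def process_router_alert(stack, lfib):
      # Label 1 on top: remove it, look up the next label in the (toy) LFIB,
      # apply the action, then push label 1 back on top.
      assert stack[0] == ROUTER_ALERT
      inner = stack[1:]
      action, out_labels = lfib[inner[0]]
      return [ROUTER_ALERT] + apply_action(inner, action, out_labels)

  lfib = {20: ("swap", [30])}                      # made-up label values
  print(process_router_alert([1, 20, 42], lfib))   # [1, 30, 42]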

 

6. Why does an LSR forward the ICMP message "time exceeded" along the LSP of the original packet with the TTL expiring instead of returning it directly?

- The reason for this forwarding of the ICMP message along the LSP that the original packet with the expiring TTL was following is that in some cases the LSR that is generating the ICMP message has no knowledge of how to reach the originator of the original packet. Equally so, an intermediate LSR closer to the originator of the packet might not have that knowledge. One such case is a network with MPLS VPN. In this scenario, the P router does not have the knowledge to send back the ICMP messages to the originator of the VPN packet, because the P router does not have a route to directly return the ICMP message. (In general, the P routers do not hold the VPN routing tables.) Hence, the P router builds the ICMP message and forwards the packet along the LSP, in the hope that the ICMP message reaches a router at the end of the LSP that can return the packet to the originating router. In the case of MPLS VPN, the ICMP message is returned by the egress PE or the CE that is attached to that PE, because these routers certainly have the route to correctly return the packet.

 

7. Is using Path MTU Discovery a guarantee that there will be no MTU problems in the MPLS network?

- Path MTU Discovery is not guaranteed to work in all cases; sometimes the ICMP message does not make it back to the originator. Possible causes for the ICMP message not making it to the originator of the packet are firewalls, access lists, and routing problems.

 

8. Why is MTU or MRU such an important parameter in MPLS networks?

- The interface MTU command in Cisco IOS specifies how big a Layer 3 packet can be without having to fragment it when sending it on a data link. For the Ethernet encapsulation, for example, MTU is by default set to 1500. However, when n labels are added, n * 4 bytes are added to an already maximum sized IP packet of 1500 bytes. This would lead to the need to fragment the packet.

Cisco IOS has the 'mpls mtu' command that lets you specify how big a labeled packet can be on a data link. If, for example, you know that all packets that are sent on the link have a maximum of two labels and the MTU is 1500 bytes, you can set the MPLS MTU to 1508 (1500 + 2 * 4). Thus, all labeled packets of size 1508 bytes (labels included) can be sent on the link without fragmenting them. The default MPLS MTU value of a link equals the MTU value.

Take the example of Ethernet: The payload can be a maximum of 1500 bytes. However, if the packet is a maximum sized packet and labels are added, the packet becomes slightly too big to be sent on the Ethernet link. It is possible to close one eye and allow frames that are bigger (perhaps by just a few bytes) to be sent on the Ethernet link, even though it is not the correct thing according to the Ethernet specifications, which say that such frames should be dropped. This is, of course, possible only if the Ethernet hardware in the router and all switches in the Ethernet network support receiving and sending baby giant frames.

On Ethernet data links on LSRs, you can set the MPLS MTU to 1508 bytes to allow IP packets with a size of 1500 bytes with two labels to be received and forwarded. If, however, the hardware of the router does not support this, or if an Ethernet switch exists in between, dropping baby giant frames, you can lower the MPLS MTU parameter on the LSRs. When you set the MPLS MTU to 1500, all IP packets of up to 1492 bytes are still forwarded, because the size of the labeled packet then becomes at most 1500 (1492 plus 8) bytes at Layer 3. However, all IP packets sized from 1493 through 1500 bytes (or larger) are fragmented. Because of the performance impact of fragmentation, you should use methods to avoid it, such as path MTU discovery.
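
A small Python helper (illustrative only; the numbers are the ones from the example above) for this MTU arithmetic:

  LABEL_SIZE = 4  # each MPLS label stack entry is 4 bytes

  def required_mpls_mtu(ip_mtu, labels):
      # e.g. 1500-byte IP packets with 2 labels need an MPLS MTU of 1508
      return ip_mtu + labels * LABEL_SIZE

  def max_ip_packet(mpls_mtu, labels):
      # largest IP packet that still fits once the labels are added
      return mpls_mtu - labels * LABEL_SIZE

  print(required_mpls_mtu(1500, 2))  # 1508
  print(max_ip_packet(1500, 2))      # 1492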

MPLS Fundamentals - Chapter 3 – Useful Commands

  • You can see an extract from the LFIB, by issuing the command 'show mpls forwarding-table'.
  • To see all the labels that change on an already labeled packet, you must use the 'show mpls forwarding-table [network {mask | length}] [detail]' command
  • The command 'no mpls ip' on an interface disables LDP on that interface.
  • 'debug mpls packet' to check the labels on the packets
  • You can change the label range with the 'mpls label range min max' command.
  • Use the command 'show mpls label range' to see the range of labels being currently used by the Cisco IOS.
  • Cisco IOS has the 'mpls mtu' command that lets you specify how big a labeled packet can be on a data link.
  • 'show mpls interfaces fastEthernet 2/6 detail' command would show you the MPLS MTU for an interface
  • The command 'system jumbomtu' can be used to enable jumbo Ethernet frames on an Ethernet switch.

MPLS Fundamentals - Chapter 3 – Forwarding Labeled Packets

  • In Cisco IOS, CEF switching is the only IP switching mode that you can use to label packets. Other IP switching modes, such as fast switching, cannot be used, because the fast switching cache does not hold information on labels. Because CEF switching is the only IP switching mode that is supported in conjunction with MPLS, you must turn on CEF when you enable MPLS on the router.
  • If a prefix is reachable via a mix of labeled and unlabeled (IP) paths, Cisco IOS does not consider the unlabeled paths for load-balancing labeled packets. That is because in some cases, the traffic going over the unlabeled path does not reach its destination.
  • Label 0 is the explicit NULL label, whereas label 3 is the implicit NULL label. Label 1 is the router alert label, whereas label 14 is the OAM alert label. The other reserved labels between 0 and 15 have not been assigned yet.
  • The egress LSR signals the penultimate LSR to use implicit NULL by not sending a regular label, but by sending the special label with value 3. The use of implicit NULL at the end of an LSP is called penultimate hop popping (PHP)
  • PHP is the default mode in Cisco IOS. In the case of IPv4-over-MPLS, Cisco IOS only advertises the implicit NULL label for directly connected routes and summarized routes.
  • The use of the implicit NULL label does not mean that all labels of the label stack must be removed. Only one label is popped off. In any case, the use of the implicit NULL label prevents the egress LSR from having to perform two lookups. Although the label value 3 signals the use of the implicit NULL label, the label 3 will never be seen as a label in the label stack of an MPLS packet. That is why it is called the implicit NULL label.
  • In Cisco IOS, however, a safeguard guards against possible routing loops by not copying the MPLS TTL to the IP TTL if the MPLS TTL is greater than the IP TTL of the received labeled packet.
  • The default MPLS MTU value of a link equals the MTU value.
  • In some Cisco IOS releases, you cannot configure the MPLS MTU to be bigger than the interface MTU.
  • Maximum receive unit (MRU) is a parameter that Cisco IOS uses. It tells the LSR how big a received labeled packet of a certain FEC can be and still be forwarded out of this LSR without fragmenting it. This value is actually a value per FEC (or prefix) and not just per interface.
  • Path MTU Discovery is not guaranteed to work in all cases; sometimes the ICMP message does not make it back to the originator. Possible causes for the ICMP message not making it to the originator of the packet are firewalls, access lists, and routing problems.

MPLS Fundamentals - Chapter 2 – Review Questions

1. Name the four fields that are part of a label.

- Label (first 20 bits)

- EXP (next 3 bits)

- BoS (next 1 bit)

- TTL (next 8 bits)

In all, 32 bits.
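
For illustration, here is a small Python sketch (my own, not from the book) that packs and unpacks a 32-bit label stack entry using exactly those field widths:

  def encode_label_entry(label, exp, bos, ttl):
      # label: 20 bits, EXP: 3 bits, BoS: 1 bit, TTL: 8 bits -> 32 bits total
      return (label << 12) | (exp << 9) | (bos << 8) | ttl

  def decode_label_entry(entry):
      return {
          "label": entry >> 12,
          "exp": (entry >> 9) & 0x7,
          "bos": (entry >> 8) & 0x1,
          "ttl": entry & 0xFF,
      }

  entry = encode_label_entry(label=16, exp=0, bos=1, ttl=255)
  print(hex(entry))                 # 0x101ff
  print(decode_label_entry(entry))  # {'label': 16, 'exp': 0, 'bos': 1, 'ttl': 255}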

 

2. How many labels can reside in a label stack?

- No limit, but you seldom see more than four labels in a label stack.

 

3. In which layer does MPLS fit in the OSI reference model?

- The MPLS label sits between the frame header (Layer 2) & the transported protocol packet (Layer 3); hence it can be thought of as sitting at layer 2.5.

 

4. Which table does an LSR use to forward labeled packets?

- The Label Forwarding Information Base (LFIB). The Label Information Base (LIB) stores all the remote labels received from adjacent LSRs; the LFIB, on the other hand, contains only those labels from the LIB that are currently used for forwarding decisions, and it is the table used to forward labeled packets.

 

5. What type of interfaces in Cisco IOS uses the Downstream-on-Demand label distribution mode and the per-interface label space?

- In Cisco IOS, all interfaces except LC-ATM interfaces use the UD label distribution mode. All LC-ATM interfaces use the DoD label distribution mode.

 

6. Why does the MPLS label have a Time To Live (TTL) field?

- Bits 24 to 31 are the eight bits used for Time To Live (TTL). This TTL has the same function as the TTL found in the IP header. It is simply decreased by 1 at each hop, and its main function is to avoid a packet being stuck in a routing loop. If a routing loop occurs and no TTL is present, the packet loops forever. If the TTL of the label reaches 0, the packet is discarded.

MPLS Fundamentals - Chapter 2 - MPLS Architecture

  • An ordered sequence of LSRs is a label switched path (LSP)
  • LSP is unidirectional
  • A Forwarding Equivalence Class (FEC) is a group or flow of packets that receive the same forwarding treatment throughout the MPLS network.
  • All packets belonging to the same FEC have the same label. However, not all packets that have the same label belong to the same FEC, because their EXP values might differ; the forwarding treatment could be different, and they could belong to a different FEC.
  • Labels are local to each pair of adjacent routers. Labels have no global meaning across the network.
  • For every IGP IP prefix in its IP routing table, each LSR creates a local binding—that is, it binds a label to the IPv4 prefix. The LSR then distributes this binding to all its LDP neighbors. These received bindings become remote bindings. The neighbors then store these remote and local bindings in a special table, the label information base (LIB).
  • The LFIB is the table used to forward labeled packets. It is populated with the incoming and outgoing labels for the LSPs. The incoming label is the label from the local binding on the particular LSR. The outgoing label is the label from the remote binding chosen by the LSR from all possible remote bindings. All these remote bindings are found in the LIB. The LFIB chooses only one of the possible outgoing labels from all the possible remote bindings in the LIB and installs it in the LFIB (see the selection sketch after this list).
  • In the case of MPLS traffic engineering, the labels are distributed by RSVP. In the case of MPLS VPN, the VPN label is distributed by BGP.
  • If per-platform label space is used, the packet is forwarded solely based on the label, independently from the incoming interface.
  • If per-interface label space is used, the packet is not forwarded solely based on the label, but based on both the incoming interface and the label.
  • In Cisco IOS, all Label Switching Controlled-ATM (LC-ATM) interfaces have a per-interface label space, whereas all ATM frame-based and non-ATM interfaces have a per-platform label space.
  • In Cisco IOS, all interfaces except LC-ATM interfaces use the UD label distribution mode. All LC-ATM interfaces use the DoD label distribution mode.
  • In Cisco IOS, the retention mode for LC-ATM interfaces is the CLR mode. It is the LLR mode for all other types of interfaces.
  • Cisco IOS uses Independent LSP Control mode. ATM switches that are running Cisco IOS use Ordered LSP Control mode by default.
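
Here is a rough Python sketch (the prefixes, peers, and label values are invented) of the LIB-to-LFIB selection mentioned above: from all the remote bindings in the LIB, only the binding learned from the routing next hop is installed as the outgoing label.

  # LIB: for each prefix, the local binding plus the remote bindings per LDP neighbor.
  lib = {
      "10.1.1.0/24": {
          "local": 18,
          "remote": {"lsr-east": 25, "lsr-west": 31},  # LDP peer -> advertised label
      },
  }

  # Routing table: prefix -> next-hop LSR (already resolved to an LDP peer).
  routing_table = {"10.1.1.0/24": "lsr-west"}

  def build_lfib(lib, routing_table):
      lfib = {}
      for prefix, bindings in lib.items():
          next_hop = routing_table[prefix]
          # incoming label = our local binding; outgoing label = the next hop's remote binding
          lfib[bindings["local"]] = (bindings["remote"][next_hop], next_hop)
      return lfib

  print(build_lfib(lib, routing_table))  # {18: (31, 'lsr-west')}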

MPLS Fundamentals - Chapter 1 – Review Questions

1. What are the MPLS applications mentioned in this chapter?

- Traffic engineering

- MPLS VPNs

- AToM

 

2. Name three advantages of running MPLS in a service provider network.

- The use of one unified network infrastructure

- Better IP over ATM integration

- The core routers in the service provider network no longer need to run BGP.

- Adding one customer site means that on the PE router, only the peering with the CE router must be added. You do not have to hassle with creating many virtual circuits as with the overlay model or with configuring packet filters or route filters with the peer-to-peer VPN model over an IP network. This is the benefit of MPLS VPN for the service provider.

- As the operator of the MPLS-with-traffic-engineering-enabled network, you can steer the traffic from A to B over the bottom path, which is not the shortest path between A and B (four hops versus three hops on the top path).

- An extra advantage of running MPLS traffic engineering is the possibility of Fast ReRouting (FRR).

 

3. What are the advantages of the MPLS VPN solution for the service provider over all the other VPN solutions?

- The peer-to-peer VPN model demanded a lot from provisioning because adding one customer site demanded many configuration changes at many sites. MPLS VPN is one application of MPLS that made the peer-to-peer VPN model much easier to implement. Adding or removing a customer site is now easier to configure and thus demands much less time and effort. With MPLS VPN, one customer router, called the customer edge (CE) router, peers at the IP Layer with at least one service provider router, called the provider edge (PE) router. Adding one customer site means that on the PE router, only the peering with the CE router must be added. You do not have to hassle with creating many virtual circuits as with the overlay model or with configuring packet filters or route filters with the peer-to-peer VPN model over an IP network. This is the benefit of MPLS VPN for the service provider.

 

4. Name the four technologies that can be used to carry IP over ATM.

- RFC 1483,"Multiprotocol Encapsulation over ATM Adaptation Layer 5"

- LAN Emulation (LANE).

- Multiprotocol over ATM (MPOA)

- MPLS on ATM

 

5. Name two pre-MPLS protocols that use label switching.

- Frame relay

- ATM

 

6. What do the ATM switches need to run so that they can operate MPLS?

- The ATM switches had to run an IP routing protocol and implement a label distribution protocol

 

7. How do you ensure optimal traffic flow between all the customer sites in an ATM or Frame Relay overlay network?

- Because the ATM or Frame Relay switches are purely Layer 2 devices, the routers interconnect through them by means of virtual circuits created between them. For any router to send traffic directly to any other router at the edge, a virtual circuit must be created between them directly. Creating the virtual circuits manually is tedious. In any case, if the requirement is any-to-any connectivity between sites, it is necessary to have a full mesh of virtual circuits between the sites, which is cumbersome and costly. If the sites are only partially interconnected, traffic flow is not optimal: for example, traffic from CE1 to CE3 might first have to go through CE2.

Friday, February 13, 2009

Cisco Nexus 7K


Some very important things that I learnt today about the Nexus platform

1) When we say 80Gbps full duplex, it actually means 80In+80Out = 160Gbps of effective bandwidth

2) When we talk about the bandwidth of a module, it is the amount of data it can put on the switch fabric & the amount of data it can absorb from the fabric. For example, if I say a module has a bandwidth of 80Gbps, it means that the module can send 80Gbps of data to the fabric & obtain 80Gbps of data from the fabric.
The bandwidth of a module is measured by the bandwidth of the interconnections between the fabric interface & the fabric ASIC.

Nexus 7k System Bandwidth Calculation.
(230Gbps/slot) * (8 payload slots) = 1840Gbps
(115Gbps/slot) * (2 supervisor slots) = 230Gbps
(1840 + 230 = 2070Gbps) * (2 for full duplex operation) = 4140Gbps = 4.1Tbps system bandwidth
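
The same arithmetic as a tiny Python snippet (the per-slot figures are the ones quoted above):

  payload = 230 * 8      # Gbps toward the fabric from the 8 payload slots
  supervisors = 115 * 2  # Gbps from the 2 supervisor slots
  system_bw = (payload + supervisors) * 2  # x2 for full-duplex operation
  print(system_bw)       # 4140 Gbps ~= 4.1 Tbps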

Wednesday, February 11, 2009

DCE, CEE and DCB. What is the difference?

Thank God, I found this link, or else I would have been confused by all these acronyms.
 
In one word, NOTHING. They are all three letter acronyms used to describe the same thing. All three of these acronyms describe an architectural collection of Ethernet extensions (based on open standards) designed to improve Ethernet networking and management in the Data Center.
 
The Ethernet extensions are as follows:
 
- Priority-Based Flow Control = P802.1Qbb
- Enhanced Transmission Selection = P802.1Qaz
- Congestion Notification = P802.1Qau
- Data Center Bridging Exchange Protocol = This protocol is expected to leverage functionality provided by 802.1AB (LLDP)
 
Cisco has co-authored many of the standards referenced above and is focused on providing a standards-based solution for a Unified Fabric in the data center
 
The IEEE has decided to use the term “DCB” (Data Center Bridging) to describe these extensions to the industry. You can find additional information here:
http://www.ieee802.org/1/pages/dcbridges.html
 
CEE on the other hand is a similar concept that IBM is following for its product family.
 
In summary, all three acronyms mean essentially the same thing “today”. Cisco’s DCE products and solutions are NOT proprietary and are based on open standards.
 
This article has been taken from the blog:

"non-blocking architecture"

 
It's a popular marketing term that refers broadly to the ability of a switch to handle independent packets simultaneously. For example, suppose a packet is traveling from port A to port B when a new packet arrives on port C. A "nonblocking" switch will accept and process the new packet before it completes the previous transfer. If the new packet is destined for port A or B (which are currently busy), the switch will queue the incoming packet until the destination port becomes available. Of course, the queue is finite; even a nonblocking switch must eventually reject packets (i.e., must eventually block).

Tuesday, February 10, 2009

Technical Dictionary - Wire speed

 
Wire speed or wirespeed refers to the hypothetical maximum data transmission rate of a cable or other transmission medium. The wire speed is dependent on the physical and electrical properties of the cable, combined with the lowest level of the connection protocols.
 
When used as an adjective, wire speed describes any hardware device or function that processes data without reducing the overall transmission rate. It is common to refer to functions embedded in microchips as working at wire speed, especially when compared to those implemented in software. Network switches, routers, and similar devices are sometimes described as operating at wire speed. Data encryption and decryption and hardware emulation are software functions that might run at wire speed (or close to it) when embedded in a microchip.
 
The wire speed is rarely achieved in connections between computers due to CPU limitations, disk read/write overhead, or contention for resources. However, it is still a useful concept for estimating the theoretical best throughput, and how far the real-life performance falls short of the maximum.

Technical Dictionary - Line rate

 
The line rate of a communications link is the data rate of its raw bitstream, including all framing bits and other physical layer overhead.
 
For example, the line rate of a T1 data link is 1.544 Mbit/s, of which 1.536 Mbit/s is available for data communications, and the remaining 8000 bit/s is framing overhead. ISDN Basic Rate Interface has a line rate of 160 kbit/s, and a user data rate of 144 kbit/s.
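
In other words, framing consumes 8000 / 1,544,000, or roughly 0.5 percent, of the T1 line rate, while the ISDN BRI framing overhead is 160 - 144 = 16 kbit/s, or 10 percent of its line rate.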
 
Additional factors that reduce the throughput of communications links below their line rate can include packetization overhead, burstiness, link contention, and inefficient use of link resources by higher-level protocols.

Data Center Ethernet / Converged Enhanced Ethernet

 
Wow, I found this today: Data Centers have their own Ethernet, formally known as DCE (Data Center Ethernet) or CEE (Converged Enhanced Ethernet). Both of them mean the same thing. It is actually Ethernet with some additions that optimize it for use in the Data Center.
 
More about DCE/CEE here:

Monday, January 12, 2009

Contest#3 @ Packetlife.net

 
Nice contest posted by 'Jeremy' on his blog Packetlife.net. This was the 3rd contest that he had posted on his blog & it was very interesting in my opinion. The contest dealt with finding the IOS version of the router from within a packet capture that he had uploaded on his blog.
 
Well, I could not find the answer, but other people did & were rewarded with a Cisco Press book. Nice to know that Cisco Press is sponsoring the prizes for his contests.
 
This contest gives some good tips on how to use Wireshark as a tool to decode traffic streams & find useful information.

Saturday, January 10, 2009

Test

This is a test post. Ignore!

Extended ping & traceroute!

I know it's very basic - we use ping & traceroute all the time for troubleshooting - but I still found this one an interesting & informative read.
 
Using the Extended ping and Extended traceroute Commands:

David’s Top Cisco Tips of 2008

David Davis posted the top 5 Cisco tips of 2008 on his blog Happy Router; they were later published on the TechRepublic website. Following are the 5 topics he talks about.

1) The emulator Dynamips vs. any other simulator in the market - what it is & how it is better.
2) A basic article about the 5 things you should know when starting to configure a router from scratch.
3) Packet Trap 360 (pt360) tool suite as a very good tool that combines managing & troubleshooting in one.
4) An article on the 10 stupid things that you can do with your router & how to get out of the mess.
5) Using extended ping & extended traceroute to troubleshoot your network

Here is the link for the entire article.

Nice read.

Blocking traffic destined to unknown unicast/multicast MAC addresses

Nice post on CCIE TO BE about how to block traffic destined for unknown unicast/multicast addresses on a switchport. By default, a switch floods traffic destined for unknown unicast MAC addresses out all ports in the VLAN. This flooding can be blocked on a given port using the interface command "switchport block unicast".

Configuration guide link for this command:
http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/release/12.2_44_se/configuration/guide/swtrafc.html#wp1087814

Command reference link for this command:
http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/release/12.2_44_se/command/reference/cli3.html#wp1948063