ONJava.com -- The Independent Source for Enterprise Java



Java Patterns and Network Management

by Stephen B. Morris

Ever wonder why programming always seems to take longer than expected? Or why what appears to be a simple task often ends up being very difficult? I have a theory that many developers don't use programming patterns nearly as often as they should. Too much wheel re-invention! This article briefly introduces Multiprotocol Label Switching (MPLS) network management and shows how two Java patterns can be applied to this complex area. You'll see that the use of patterns is quite straightforward and that knowledge of them helps in rapidly solving common application problems.

An unexpected benefit of the use of patterns is that it moves programmers up the value chain and allows for more beneficial interaction with designers and strategists. If all parties understand patterns, then there is greater scope for mutual understanding and stronger software solutions.

Back to the Past

The networking industry often reminds me of the 1980s pre-IBM PC software sector -- you can have anything as long as it's a proprietary solution! Characterized by many competing vendors, the networking industry now labors under the burden of non-standard, multi-vendor architectures. This is seen in service-provider and enterprise networks in the form of a rich mix of devices (switches, routers, hubs, etc.) cobbled together to provide a growing range of services. Traditional service revenues are shrinking as demand for bandwidth and new realtime services is growing. This is a tough industry living in interesting times.

Cisco is emerging as the dominant vendor, but its products are still only de facto standards. It is this lack of a standard platform that is complicating the migration to converged IP-based networks. As in the 1980s software industry, the problem is the need for a convergence technology that provides a standard platform (just as the IBM PC with the DOS operating system did back then).

Today, MPLS is the best candidate for providing such a platform, and it is being deployed worldwide by hundreds of service providers. Why is MPLS so special, compared to its predecessors ATM and Frame Relay (FR)? In a nutshell, ATM and FR have scalability problems and they don't provide easy integration with IP. MPLS succeeds by leveraging proven IP protocols and separating control and forwarding into distinct components.


Componentizing control and forwarding means that the former can be made arbitrarily complex without compromising the packet-forwarding mechanism. The control component can be used to enact complex algorithms on incoming IP traffic, such as performing queue assignment and path selection while leaving the forwarding component untouched. This separation means that forwarding can be performed in hardware, if required. Let's now take the dime tour of MPLS.

MPLS Nuts and Bolts

MPLS provides the following major elements:

  • A virtual circuit-based model (rather than IP hop-by-hop forwarding): the circuits are called label switched paths (LSPs). One of the Java patterns I use illustrates virtual circuits.

  • Nodes that understand IP and MPLS are typically called label edge routers (LERs). LERs encapsulate traffic from the outer domain. This traffic can be either layer 2 (Ethernet, ATM, FR, etc.) or layer 3 (IP).

  • Core nodes inside the MPLS domain are called label switching routers (LSRs).

  • Traffic engineering (TE): Allows traffic to be explicitly directed through the core.

  • Quality of service (QoS): Allows resource reservation for different traffic types (e.g., bandwidth, queues, colors, etc.). IP offers just one QoS level: best effort.

  • Migration from legacy technologies, such as ATM and FR.

  • Differentiated services: Allows specific traffic to enjoy better service (e.g., real time voice packets versus email packets).

  • Deployment of IP-based services, such as layer 2 and layer 3 VPNs.
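
The LER behavior in the list above (encapsulating traffic at ingress and stripping MPLS information at egress) can be sketched as a label-stack operation. This is a toy illustration only; the class and method names are hypothetical and not drawn from any real MPLS library.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative sketch: an ingress LER pushes an MPLS label onto a packet's
// label stack; the egress LER pops it, leaving the packet to normal IP lookup.
public class LabelStackDemo {
    // LER at ingress: encapsulate by pushing an outer label
    static Deque<Integer> ingress(Deque<Integer> stack, int label) {
        stack.push(label);
        return stack;
    }

    // LER at egress: strip the MPLS information by popping the label
    static Deque<Integer> egress(Deque<Integer> stack) {
        stack.pop();
        return stack; // what remains is handled by ordinary IP forwarding
    }

    public static void main(String[] args) {
        Deque<Integer> stack = new ArrayDeque<>();
        ingress(stack, 1001); // label value 1001 is an arbitrary example
        System.out.println("in core, label stack: " + stack);
        egress(stack);
        System.out.println("at egress, stack empty: " + stack.isEmpty());
    }
}
```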

We'll see most of these in the following discussion. Figure 1 illustrates a corporate HQ with a remote branch office interconnected by a service provider network. The HQ site enterprise architecture supports a range of applications, including voice over IP (VoIP), video over IP, email, etc. Access to these applications is available over the MPLS-based service provider network.

Figure 1

Figure 1. Multisite Enterprise using IP/MPLS Service Provider

Figure 1 illustrates two LSPs (LSP 1 and LSP 2). Both LSPs have been configured with explicit route objects (EROs): LSP 1 follows the path made up of the interfaces (d, e, f, g, h, i) on nodes (LER A, LSR A, LSR B, LER B).

LSP 2 follows the path made up of the interfaces (c, j, k, l) on nodes (LER A, LSR C, LER B). Typically, the above interfaces would be recorded as IP addresses; I use symbols just for simplicity. Selecting paths that optimize network-resource utilization in advance of circuit creation is called traffic engineering. One of the Java patterns I'll use illustrates TE.

LSP 1 has also been configured to reserve 2Mbps (i.e., 2,000,000 bits/second) of bandwidth along its path, in a process called QoS provisioning. This means that the VoIP and video-over-IP traffic can be MPLS-encapsulated and pushed onto this path. LSP 1 terminates on LER B, where any MPLS information is stripped from the packets. At this point, a normal IP lookup occurs and the realtime traffic is forwarded to either the adjacent transit service provider or the branch office, via CE2.

LSP 2 has no bandwidth resources reserved -- it offers a best effort (or standard IP) QoS. This LSP is used to forward the SMTP (email) traffic across the core to LER B. Again, at LER B, the MPLS information is stripped off, and normal IP lookup occurs. The traffic is then forwarded to CE Router 2 in the direction of the branch office site.
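
The two LSPs just described can be modeled as simple virtual-circuit objects: an ordered list of hops plus a reserved bandwidth, where zero bandwidth means best effort. This is a minimal sketch under my own naming assumptions, not a pattern from the article's later pages.

```java
import java.util.List;

// Hedged sketch: an LSP as a virtual circuit with an explicit route and an
// optional bandwidth reservation. All names here are illustrative.
public class LabelSwitchedPath {
    private final String name;
    private final List<String> hops;  // explicit route, ingress LER to egress LER
    private final long bandwidthBps;  // 0 means best effort (standard IP QoS)

    public LabelSwitchedPath(String name, List<String> hops, long bandwidthBps) {
        this.name = name;
        this.hops = List.copyOf(hops);
        this.bandwidthBps = bandwidthBps;
    }

    public String getName() { return name; }
    public List<String> getHops() { return hops; }
    public boolean isBestEffort() { return bandwidthBps == 0; }

    public static void main(String[] args) {
        // LSP 1 from Figure 1: interfaces d..i, 2Mbps reserved for realtime traffic
        LabelSwitchedPath lsp1 = new LabelSwitchedPath(
            "LSP 1", List.of("d", "e", "f", "g", "h", "i"), 2_000_000L);
        // LSP 2 from Figure 1: interfaces c, j, k, l, no reservation (email traffic)
        LabelSwitchedPath lsp2 = new LabelSwitchedPath(
            "LSP 2", List.of("c", "j", "k", "l"), 0L);
        System.out.println(lsp1.getName() + " best effort: " + lsp1.isBestEffort());
        System.out.println(lsp2.getName() + " best effort: " + lsp2.isBestEffort());
    }
}
```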

CE, PE and P Routers

Figure 1 illustrates three different types of nodes: customer edge (CE), provider edge (PE), and provider core (P). CEs reside on the customer premises and can be basic IP routers. PEs reside at the edge or point of ingress of the provider network, and function as an "on ramp" to the MPLS core. Ps are found inside the core and may be basic ATM/FR switches that are running MPLS protocols.

A major strength of MPLS is that it uses proven IP protocols to replace existing legacy technologies, such as ATM and Frame Relay. Network management (NM) is a key element of this evolution.

MPLS Network Management: FCAPS

NM is traditionally divided into the five major functional areas called FCAPS, or "fault, configuration, accounting, performance, and security." A network management system (NMS) operates in conjunction with the managed network elements (NEs) to fulfill the FCAPS. This is typically done using a combination of NE command-line interfaces (CLI) and SNMP entities. The CLI is the NE user-level menu system, typically accessed using telnet or a serial interface on the device. SNMP is a message-oriented protocol from the TCP/IP suite. In Figure 1, the NMS can use either CLI or SNMP on the various NEs. One advantage of SNMP is that it is a standard protocol.

Faults (or informational events) can occur at any time on devices embedded deep within the network. SNMP provides a mechanism by which fault information can be communicated to an NMS. The NMS operator (or the NMS itself) can then take appropriate action. Configuration is the process by which the settings on NEs can be retrieved or updated. In many commercial NMS products, this is done using the CLI. Accounting (or billing) is the process by which managed network resources are financially analyzed; e.g., the generation of departmental or user bills. Performance information is crucial to network operators to determine if the network is fulfilling contractual service-level agreements or even just to determine if NEs are experiencing congestion or the onset of failure. Finally, security is required to ensure protection of the NEs and the data in transit over them. For a more thorough description of the FCAPS functional areas, see "Network Management and MPLS."
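
The five functional areas above lend themselves to a simple dispatch structure in NMS server code. The sketch below maps an incoming management task to an FCAPS area; the task names are hypothetical examples of my own, not a standard taxonomy.

```java
// Illustrative sketch only: the five FCAPS areas as an enum, with a toy
// classifier routing a management task to the responsible area.
public class Fcaps {
    enum Area { FAULT, CONFIGURATION, ACCOUNTING, PERFORMANCE, SECURITY }

    static Area classify(String task) {
        switch (task) {
            case "link-down-trap":      return Area.FAULT;          // SNMP fault event
            case "set-interface-speed": return Area.CONFIGURATION;  // NE settings update
            case "generate-user-bill":  return Area.ACCOUNTING;     // billing analysis
            case "poll-utilization":    return Area.PERFORMANCE;    // congestion check
            case "rotate-snmp-keys":    return Area.SECURITY;       // protect NE access
            default:
                throw new IllegalArgumentException("unknown task: " + task);
        }
    }

    public static void main(String[] args) {
        System.out.println("link-down-trap -> " + classify("link-down-trap"));
    }
}
```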


SNMP facilitates FCAPS support in the form of Management Information Bases (MIBs, described below) and a simple messaging protocol. Messages are provided that allow retrieval and configuration of NE data, as well as access to fault data. Fault (or informational) data is typically emitted autonomously by the NEs as and when problems occur. Accounting and performance data can also be derived from MIB objects. SNMPv3 provides strong security and authentication features. More information on SNMP can be found in SNMP, SNMPv2, SNMPv3, and RMON 1 and 2, 3rd Edition, by William Stallings, Pearson Education 1998 and Network Management, MIBs & MPLS: Principles, Design & Implementation, by Stephen B. Morris, Prentice Hall 2003.

Many commercial packages provide an SNMP API -- one that I've used is Sun Microsystems' JDMK.


A principal component of network management is the MIB. This is the schema used by the NMS to understand the data maintained by the managed NEs. Think of a MIB as a large collection of defined data objects that are of interest to network management. The MIB is shared by both the NMS and the NEs, but the NEs actually implement and maintain the values of the managed object instances. The NMS uses the MIB schema to understand the NE-resident MIB objects.
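
Conceptually, a MIB can be thought of as a map from object identifiers (OIDs) to managed values: the NE maintains the instance data, and the NMS retrieves or updates it with SNMP-style get and set operations. Here is a minimal sketch under that assumption; it is not JDMK's API, and only the sysName OID shown is a real MIB-II identifier.

```java
import java.util.Map;
import java.util.TreeMap;

// Hedged sketch: a MIB view modeled as OID -> value. The NE owns and
// implements the object instances; the NMS uses the shared schema to
// interpret them. Class and method names are illustrative only.
public class MibSketch {
    // TreeMap keeps OIDs in lexical order, loosely mirroring MIB tree walks
    private final Map<String, String> objects = new TreeMap<>();

    // NE-side: maintain the value of a managed object instance
    public void set(String oid, String value) { objects.put(oid, value); }

    // NMS-side: retrieve a value by OID (an SNMP GET analogue)
    public String get(String oid) { return objects.get(oid); }

    public static void main(String[] args) {
        MibSketch mib = new MibSketch();
        mib.set("1.3.6.1.2.1.1.5.0", "LER-A"); // sysName.0 from MIB-II
        System.out.println("sysName = " + mib.get("1.3.6.1.2.1.1.5.0"));
    }
}
```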


The cornerstone of network management is the NMS. Typically packaged as a client/server (or N-tier) application suite, the NMS features thin clients and simple topological GUIs. Clients interact with the topology (e.g., the network in Figure 1) using the various NMS server components (see Network Management, MIBs & MPLS: Principles, Design & Implementation by Stephen B. Morris, Prentice Hall 2003). The code examples that follow are of software elements that would typically fit into the NMS server code base. Let's now look at our Java patterns.
