Stack it in your favor
Designing and building a network that meets your organization's needs isn't a trivial exercise. Today's wide array of technology options means you have many decisions to make. Should you deploy wireless technology in an enterprise network? What firewalls can or should you install? What about Gigabit Ethernet? WAN options? IP addressing? Should you deploy hardware-based or software-based solutions? Obviously, trying to design anything but the smallest network can be daunting. Fortunately, many established best practices can help guide you through the process and help you determine the right mix of technologies and design that will meet your organization's IT goals. Let me start by laying the foundation for any good network-design discussion, then I'll discuss some network-design best practices.
The OSI 7-Layer Model
A solid understanding of network design starts with a model. The Open System Interconnection (OSI) 7-layer model is an industry-standard way to describe the network protocol stack and how it applies to practical aspects of networking. Figure 1 shows the OSI model and lists examples of technologies that correspond to each layer.
When you design a network, you're most often concerned with Layer 2, Layer 3, and Layer 4. Devices such as NICs, firewalls, routers, and switches work mainly with these three layers (and on rare occasions Layer 5). By understanding how the various technologies relate to one another at each level of the OSI model, you can better design and operate a network that meets your needs.
Here's how the OSI model stacks up. Layer 2 defines the type of topology you'll use to move traffic around your network (e.g., Ethernet, token ring, Asynchronous Transfer Mode—ATM, DSL). Layer 3 defines the protocol—typically IP—you'll use to route traffic from one location to another. Finally, Layer 4 defines the protocol that higher-layer applications (i.e., applications that work at layers 5-7 in the OSI stack) will use to communicate with one another across the network. If those applications require a guarantee that the data they send is received, TCP will guarantee delivery. When such a guarantee isn't necessary, UDP can provide quick, lightweight data transmission between applications. (For more information about the OSI model, see Resources.) So, armed with this basic understanding of the OSI model, let's move on to a discussion of how to build a typical network.
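The Layer 4 distinction between TCP and UDP shows up directly in code. The following is a minimal, hypothetical localhost sketch in Python: a tiny TCP echo exchange, where the connection itself confirms delivery, versus a UDP datagram that is simply handed to the network stack with no such guarantee. The ports and messages are illustrative only.

```python
import socket
import threading

def tcp_echo_server(sock):
    # Accept one connection and echo back whatever arrives.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 lets the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=tcp_echo_server, args=(server,), daemon=True).start()

# TCP: connection-oriented; the protocol itself acknowledges delivery.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)                    # b'hello', confirmed round trip
client.close()

# UDP: fire-and-forget; sendto() succeeds even if nobody is listening.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = udp.sendto(b"hello", ("127.0.0.1", 9))  # port 9 is the discard port
print(sent)                     # 5 bytes handed to the stack; delivery unknown
udp.close()
```

Note that the UDP sender gets no error and no acknowledgment; any reliability must be built by the application itself, which is exactly the trade-off Layer 4 presents.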
The Typical Network
Although defining a typical network is difficult, I'd wager that most organizations with multiple physical locations, a decent Web presence, and more than 1000 users will have a network similar to the one that Figure 2 shows. Let's pick apart this diagram, starting from the Internet and working our way in, and I'll give tips and advice on each aspect of the network design. You'll come out with a set of guidelines for designing your network.
The Internet and Firewalls
Organizations use a variety of methods to connect to the Internet. For small businesses, a DSL router to the local phone company might be adequate, but most medium-sized and large organizations typically require multiple redundant connections to and from the Internet for failover. Telecommunications companies (telcos) such as AT&T or Sprint provide various high-bandwidth options for connecting to the Internet—offering everything from a T1 (1.544Mbps) to OC-12 (622Mbps) connection. Typically, when you set up an Internet connection, you have a range of public IP addresses for devices that need direct Internet access, such as Web servers, VPN devices, proxies, and routers. The carrier that provides your Internet connection might also provide the necessary connection hardware, or you might need to provide your own routers or WAN access devices.
Regardless of your connection speed, you need a firewall to protect your internal network from the wild and woolly Internet. You have several options for designing and implementing a firewall. Figure 2 shows one common method, in which an external router connects directly to the Internet, and an internal router connects to the organization's internal network. In the OSI model, a router typically operates at Layer 3, although it can also inspect Layer 4 information. Hardware-based routers are available from vendors such as Cisco Systems, 3COM, and Nortel Networks. A router, as its name implies, routes Layer 3 IP traffic according to source and destination IP address. A router also can filter packets according to Layer 4 (TCP and UDP) information. For example, you can build access lists on most routers to prevent certain kinds of inbound traffic from reaching your internal network. You can also prevent internal users from accessing certain types of services (e.g., Network News Transfer Protocol—NNTP—newsgroups) on the Internet. Access lists typically take the following form:
<source IP network or host> <destination IP network or host> <permit or deny> <protocol> <port>
For example, the following access list prevents all HTTP traffic between an internal IP segment of 10.1.1.0 and the external host 22.214.171.124:
10.1.1.0 22.214.171.124 deny tcp 80
Port 80 is the well-known TCP port for Web or HTTP traffic.
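To make the rule-matching logic concrete, here is a minimal, hypothetical sketch of how a router might evaluate a rule like the one above against an arriving packet. The rule fields follow the simplified form used in this article; real router ACL syntax (e.g., Cisco IOS) differs, and the permit-by-default behavior here is an assumption for illustration (many routers deny by default).

```python
import ipaddress

RULES = [
    # (source network, destination host, action, protocol, port)
    ("10.1.1.0/24", "22.214.171.124", "deny", "tcp", 80),
]

def check_packet(src_ip, dst_ip, protocol, port):
    """Return the action of the first matching rule; permit otherwise."""
    for src_net, dst_host, action, proto, dst_port in RULES:
        if (protocol == proto
                and ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
                and dst_ip == dst_host
                and port == dst_port):
            return action
    return "permit"   # assumed default for this sketch

print(check_packet("10.1.1.55", "22.214.171.124", "tcp", 80))   # deny: rule matches
print(check_packet("10.1.1.55", "22.214.171.124", "tcp", 443))  # permit: different port
```

The first lookup is blocked because source network, destination, protocol, and port all match; the second sails through because port 443 matches no rule.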
A firewall thus becomes a set of access lists that permit or deny certain inbound and outbound traffic. The network in Figure 2 has both internal and external routers, with a network segment between them that hosts several servers, including Web servers, VPN devices, and application proxies. This intermediate network segment is often referred to as a demilitarized zone (DMZ). Servers or devices on this segment are considered unsafe because traffic from the Internet is allowed to terminate directly on them. Thus, the role of the internal router is to protect an organization's internal network from illicit traffic that might originate not from the Internet but from an intruder who might have compromised one of these DMZ servers and is attempting to get to the internal network.
Firewalls don't always need to be implemented in routers, nor must they be hardware-based. Several software routers exist that can play the same role as a hardware solution. For example, Check Point Software Technologies offers the well-known Firewall-1 product, and Microsoft provides RRAS for Windows Server 2003 and Windows 2000 Server. Many of these software-based solutions also provide an integrated VPN server that can host VPN connections between your external users and your internal network. In general, the decision to use a hardware- or software-based firewall or routing solution will depend on cost (software solutions are usually less expensive than hardware solutions for similar functionality) and performance (hardware-based solutions typically have greater throughput than software-based solutions).
Another feature often found in routers or other firewall devices is Network Address Translation (NAT). NAT is used when internal network hosts with private, non-Internet-routable IP addresses need to talk to Internet-based hosts with public IP addresses. I'll talk more about private IP addressing in the next section, but NAT servers provide a valuable service and are found in many network-perimeter devices on the market today. You'll likely need some device on your network perimeter to perform this NAT function. Often, an application proxy can provide this functionality.
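The bookkeeping a NAT device performs can be sketched in a few lines. The following is a simplified, hypothetical model of a NAT translation table: outbound connections from private (IP, port) pairs are mapped to ports on the device's single public address, and inbound replies are mapped back. The public address and port range are illustrative assumptions, not any vendor's implementation.

```python
PUBLIC_IP = "198.51.100.7"   # assumed public address of the NAT device
next_port = 40000            # assumed start of the translation port range
table = {}                   # (private_ip, private_port) -> public port

def translate_outbound(private_ip, private_port):
    """Map an internal source to the device's public address and a port."""
    global next_port
    key = (private_ip, private_port)
    if key not in table:
        table[key] = next_port
        next_port += 1
    return (PUBLIC_IP, table[key])

def translate_inbound(public_port):
    """Map a reply arriving on a public port back to the internal host."""
    for key, port in table.items():
        if port == public_port:
            return key
    return None  # no mapping exists, so the packet is dropped

print(translate_outbound("10.1.1.5", 51000))  # ('198.51.100.7', 40000)
print(translate_inbound(40000))               # ('10.1.1.5', 51000)
```

Unsolicited inbound traffic finds no entry in the table and is dropped, which is why NAT incidentally provides a degree of perimeter protection as well as address conservation.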
Application proxies—or, more simply, proxies—are common in enterprise networks. Proxies are generally software solutions (but can be hardware solutions) that provide a sort of bucket brigade of communication between hosts on the internal and external network. The most common type of application proxy is the HTTP proxy (aka Web proxy), but you can use proxies for many different types of application traffic, including FTP, Telnet, remote procedure call (RPC)-based applications, and even Internet Control Message Protocol (ICMP—Ping). Many of you are probably familiar with the Web proxy because you must enter that pesky proxy server address in Microsoft Internet Explorer (IE) whenever you want to browse the Internet from your work network. The Microsoft Internet Security and Acceleration (ISA) Server add-on to Windows Server is an example of a common software-based application proxy.
Proxies act as intermediaries between your internal and external networks: Requests from the internal network to the external network are shunted to the proxy. For example, if I browse to http://www.microsoft.com from my internal network, the page request actually goes to the proxy server. The proxy server terminates the request, then sends a new request on my behalf to the target Web site. Thus, no direct connection exists between my internal network and the Internet: The proxy server is the go-between. When the destination Web site responds, the proxy again takes that response and forwards it back to my Web browser on the original connection that I initiated. As well as providing additional security to a network, a proxy is a convenient place for logging what's going on between the internal network and the Internet, so if an employee is browsing an illicit Web site, you can easily go through the proxy's logs to determine who visited the site and when. Because application proxies require access to both the internal and external networks, they're usually located on the DMZ or equivalent segment within your network topology. Now let's move inside the network and talk about some best practices for deploying switches, routers, server farms, and workstation segments.
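The bucket-brigade behavior described above can be demonstrated end to end in a short Python sketch. Everything here runs on localhost purely for illustration: a stand-in "target Web site" and a toy proxy that terminates the client's request, issues a new request of its own, logs who asked for what, and relays the answer. A production proxy (ISA Server, Squid, and so on) does far more, but the request flow is the same.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Backend(BaseHTTPRequestHandler):
    """Stand-in for the destination Web site."""
    def do_GET(self):
        body = b"hello from the target site"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args): pass   # silence default request logging

class Proxy(BaseHTTPRequestHandler):
    backend_url = None                   # filled in below
    def do_GET(self):
        # The proxy terminates the client's request and makes a fresh
        # request of its own on the client's behalf...
        with urllib.request.urlopen(self.backend_url) as resp:
            body = resp.read()
        # ...logging the transaction, which is one of a proxy's big benefits...
        print(f"proxy log: {self.client_address[0]} fetched {self.backend_url}")
        # ...then relays the answer on the connection the client opened.
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args): pass

backend = HTTPServer(("127.0.0.1", 0), Backend)
Proxy.backend_url = f"http://127.0.0.1:{backend.server_port}/"
proxy = HTTPServer(("127.0.0.1", 0), Proxy)
for srv in (backend, proxy):
    threading.Thread(target=srv.serve_forever, daemon=True).start()

# The client talks only to the proxy; it never touches the backend directly.
with urllib.request.urlopen(f"http://127.0.0.1:{proxy.server_port}/") as resp:
    page = resp.read()
print(page)   # b'hello from the target site'
```

Note that no direct connection ever exists between the client and the backend; the proxy is the go-between on both legs, which is exactly why it's also the natural place to log and filter traffic.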
The Internal Network
The first thing you need to determine when building your internal network is which IP addressing scheme to use. Most organizations use a private IP address namespace rather than public IP addresses. This approach originated because public IP address space is limited. As the Internet grew and this limitation became problematic, private IP addressing mitigated the problem. Another reason private addressing has grown in popularity is because it gives organizations the flexibility to widen their IP network without fear of having to change or carve up their IP address space as they grow. Private IP addressing follows an organized Internet standard, defined in the Internet Engineering Task Force (IETF) Request for Comments 1918 (RFC 1918), which you can view at http://www.isi.edu/in-notes/rfc1918.txt. RFC 1918 defines the following three address blocks, one for each IP address class (A, B, and C), as private:
(IP address class A) 10.0.0.0 - 10.255.255.255
(IP address class B) 172.16.0.0 - 172.31.255.255
(IP address class C) 192.168.0.0 - 192.168.255.255
For more information about IP address classes, see "IP Addressing Basics," September 1999, InstantDoc ID 7035. You can use any of these IP address blocks to segment your internal network. Of course, using one or more of these address blocks internally means that you must have a NAT device at the perimeter of your network that can translate these nonpublic addresses into addresses that can be routed publicly; that's usually the job of a router, proxy, or multifunction edge device.
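If you want to check programmatically whether an address falls inside one of the three RFC 1918 blocks, Python's standard-library ipaddress module makes it a one-liner per block. (The module also offers an is_private property, but that covers more than RFC 1918, including loopback and link-local ranges, so this sketch tests the three blocks explicitly.)

```python
import ipaddress

# The three RFC 1918 private blocks in CIDR notation.
RFC1918_BLOCKS = [
    ipaddress.ip_network(n)
    for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")
]

def is_rfc1918(addr):
    """True if addr lies in any of the three private blocks."""
    return any(ipaddress.ip_address(addr) in net for net in RFC1918_BLOCKS)

print(is_rfc1918("10.1.1.5"))      # True: class A private block
print(is_rfc1918("172.31.200.1"))  # True: top of the class B block
print(is_rfc1918("172.32.0.1"))    # False: just outside it
print(is_rfc1918("192.168.1.20"))  # True: class C private block
```

The 172.16.0.0/12 case is worth noting: the private class B range runs only through 172.31.255.255, so 172.32.x.x addresses are public and routable.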
Your choice of which private address block to use is generally a function of your network's size. Large networks with many devices and many routed segments generally use the Class A 10.x address space, but a Class B address might be sufficient for smaller organizations. Much depends on how you segment your internal network. When routers first appeared on the scene in the 1980s, it was common to have many routed segments or broadcast domains within a corporate network. (A broadcast domain is so called because broadcast-based traffic—traffic intended for all devices—is bounded by a router interface, as Figure 3 shows. Routers typically don't forward broadcast traffic.) Back then, Ethernet was a common Layer 2 protocol, as it remains today, but the majority of Ethernet devices were connected by shared hubs. Because the performance of Ethernet degrades significantly when too many devices are on one shared broadcast domain, the typical network workaround was to create many small routed segments.
In the mid-1990s, network switches became prevalent, and networks began to flatten out (i.e., include more devices per broadcast domain). A switch differs from a shared hub in one significant way: A server or workstation connected to a switch port has available to it all bandwidth on that port. In other words, a 100Mbps Ethernet switch provides the full 100Mbps of bandwidth to each device connected to each port on the switch; media is no longer shared. Thus a switch allows for the flattening of a broadcast domain to include many more devices, which translates into fewer routers and more switches on a typical LAN.
Switches usually can move traffic around a network at much greater speeds than routers can because switches operate only at Layer 2—the data link layer of the OSI model. Because switches don't need to make higher-layer routing decisions or maintain complicated routing tables, they can move frames quickly.
If you implement a switched network in today's networking environment, you'll likely use a variety of bandwidths to accommodate your needs. Ethernet is the most common Layer 2 protocol, and it delivers several speeds, including 1Gbps, 100Mbps, and the venerable 10Mbps. (Some vendors are working on a 10Gbps Ethernet standard.) Each of these bandwidths is available on various physical media, including traditional copper cable and fiber optic cable. In general, fiber optic cable can carry higher bandwidths over greater distances than copper cable can, so that fact might drive some of your choices. You might ask, "Why wouldn't I implement 1Gbps Ethernet everywhere?" The most obvious answer is cost: The more bandwidth you deploy, the higher the cost. For that reason, my rule of thumb is to deploy only the bandwidth that I think I'll need today, while allowing for some growth for tomorrow. Most network hardware has a useful life of about 3 to 5 years, so you should plan for your needs for at least that long.
Servers typically need more bandwidth than individual workstations because servers must satisfy requests from hundreds, if not thousands, of workstations. Nowadays, it's not uncommon to find server segments using switched Gigabit Ethernet to each server, with switched 100Mbps probably the bare minimum you should consider.
You also need to consider how much bandwidth to provide to your desktop systems. Given that large organizations might have hundreds or thousands of desktops, providing Gigabit Ethernet to the desktop might be prohibitively expensive. A good idea is to keep an eye on network usage to determine who your biggest bandwidth consumers are. You might find that the graphics department needs 100Mbps for every desktop, whereas your call center users might be just fine with dedicated 10Mbps.
Whatever your choice of bandwidth, make sure that the hardware you choose—whether switch, router, or shared hub—provides you with opportunities for expansion without requiring you to throw out the device when you upgrade your bandwidth. Most medium- and high-end switches are organized with removable cards in an expandable chassis, so when you need to upgrade your server farm to Gigabit Ethernet, you can pop out that 100Mbps card and put in the faster version without a lot of expense and headaches.
Even though a broadcast domain on a typical switch can contain more than 500 devices, you might find it beneficial to segment switched traffic the same way you can segment routed traffic. Most intelligent switches support the concept of Virtual LANs (VLANs). A VLAN is simply a way of defining a routing boundary within a switch device. Typically, you specify a set of ports on one switch to be part of one VLAN and another set of ports on the same switch or on a different switch to be part of another VLAN. In effect, you're creating a routing boundary between these two groups of switch ports—a boundary that functions as if you had put a router between the two groups. In this case, however, the switch performs the routing between the two groups of devices and creates two separate broadcast domains. VLANs let you segment your network without having to deploy costly routers in addition to your switches.
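The forwarding decision a VLAN-capable switch makes can be reduced to a simple table lookup. The following hypothetical sketch models a switch's port-to-VLAN table: traffic between ports in the same VLAN stays in one broadcast domain and is switched at Layer 2, while traffic crossing VLANs must hit the routing boundary. The port and VLAN numbers are illustrative only.

```python
# Hypothetical port-to-VLAN assignments on one switch.
VLAN_OF_PORT = {1: 10, 2: 10, 3: 10, 4: 20, 5: 20}

def forwarding_decision(ingress_port, egress_port):
    """Decide whether a frame stays at Layer 2 or must cross the
    VLAN routing boundary the switch enforces."""
    if VLAN_OF_PORT[ingress_port] == VLAN_OF_PORT[egress_port]:
        return "switch at Layer 2 (same broadcast domain)"
    return "route at Layer 3 (crosses the VLAN boundary)"

print(forwarding_decision(1, 3))  # ports 1 and 3 share VLAN 10: switched
print(forwarding_decision(1, 4))  # VLAN 10 to VLAN 20: routed
```

A broadcast frame entering port 1 would reach only ports 2 and 3; ports 4 and 5 sit in a separate broadcast domain even though all five ports live in the same chassis.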
Let's look now at using WANs for internal networks. As Figure 2 shows, you can deploy an internal WAN to connect disparate locations. Some organizations have offices spread over the country, if not the world. For example, many large banks have vast branch-office networks, with thousands of locations that contain servers and workstations that are part of the organization's internal networks. You typically have two ways to build such an internal network.
The first and most common way is to build a private WAN by using your own or a third-party carrier network. Large telcos such as AT&T, MCI, and Sprint provide private frame relay networks that let you efficiently and cost-effectively extend your private IP network to many locations. Frame relay is a common Layer 2 WAN protocol that provides a network cloud that lets you serve many locations at once, as Figure 4 shows. Deploying a frame-relay network or similar private WAN is like extending your internal network to all your organization's locations. The private WAN typically has no contact with the Internet, so if users in your branch offices need to get to the Internet, they must come through the frame relay cloud to use the Internet Point of Presence (POP) at your headquarters.
A second kind of WAN deployment that's becoming more common is the use of VPNs over the Internet to build a corporate network. VPNs are advantageous because they use the Internet as their backbone and thus have little trouble reaching even the most far-flung offices. Also, because VPNs use the Internet, their expense is based only on local access costs at each location. The downside to building a VPN-based WAN is that you can't guarantee that the Internet connection (and thus the VPN) will always be available, or available at the speed you need. And, if you have many locations to manage, you'll need to deploy and manage VPN devices at each location. Additionally, because VPNs use the public Internet, a malicious user could potentially break into your network from the Internet and gain access to your internal corporate resources. Thus, choosing whether to deploy a private WAN or a VPN-based solution will depend on the complexity, cost requirements, and risk criteria inherent in your environment.
Finally, let's look at wireless (i.e., 802.11, commonly known as Wi-Fi) networks in the enterprise. Wireless networks are common in homes and small businesses, but deploying wireless capabilities in larger organizations has its share of challenges—not the least of which is security.
Of the three Wi-Fi standards in use today, 802.11b is the most popular and was first on the scene, providing 11Mbps bandwidth. 802.11a and 802.11g are two competing standards for high-speed Wi-Fi networks. Both standards provide 54Mbps of throughput but use different techniques to achieve that speed. Of the two, 802.11g, which is backward-compatible with 802.11b, seems to be more popular right now.
Deploying wireless networks in the home and enterprise requires you to implement wireless Access Points (APs) around your physical locations to support wireless users. Wireless APs designed for the enterprise differ from home versions in their built-in management features; nevertheless, they function in basically the same ways. A wireless AP designed for 802.11b or 802.11a will work only with adapter cards of the corresponding type, although some wireless APs now support all three wireless standards. For more information about wireless APs, see Buyer's Guide, "802.11g Access Points," May 2004, InstantDoc ID 42272.
The biggest challenge and concern you have in deploying Wi-Fi in your organization is security. 802.11b comes with an encryption protocol, Wired Equivalent Privacy (WEP); however, WEP has proven extremely vulnerable to intruders. You shouldn't consider deploying WEP in a commercial organization unless you don't care about the privacy of your data.
Several new schemes are available to protect wireless networks, and at least one—the 802.11i standard—is on its way to becoming an IEEE standard. Wi-Fi Protected Access (WPA), an interim subset of the draft 802.11i standard, is currently supported on Windows XP. Because WPA isn't officially a standard, make sure that the wireless AP you buy supports the Microsoft implementation of WPA.
A more standards-based alternative is to deploy a traditional VPN on top of your Wi-Fi network. Because VPNs are now common, they might provide you with a quicker path to secure Wi-Fi than trying to follow the emerging WPA standard will. You'll just need to ensure that your wireless users can connect to your internal network only through a VPN connection. You can accomplish this restriction by deploying a VPN server on your internal network between your wireless APs and the rest of the network, just as you would for mobile clients that connect from the external Internet.
When implementing an enterprise network, you need to consider many aspects of network architecture and design. Choosing a firewall and a switching standard, deciding whether to deploy Gigabit Ethernet or 100Mbps connectivity, and deciding whether to deploy Wi-Fi are all elements of the design process. Securing a solid understanding of networking basics is the best place to start. After you know how network devices route and filter traffic, you can move up the OSI model stack to provide more value-added services to your users.
WINDOWS & .NET MAGAZINE ARTICLES
"Network Troubleshooting Basics"
"Overview of the WPA Wireless Security Update in Windows XP" http://support.microsoft.com/?kbid=815485
"Router and Switch Design"
"FAQ for Networking Basics"