This computer network and network security site aims to provide book reviews and free ebooks on network security, TCP/IP protocols, internetworking, the OSI model, socket programming, Internet protocols, IPv6, voice over IP, port authority, port forwarding, wireless networking, home networking, computer networking, client/server computing, client/server software, etc.

Maximum Security: Hacker's Guide to Protecting Your Internet Site and Network

Hacking and cracking are activities that generate intense public interest. Stories of hacked servers and downed Internet providers appear regularly in national news. Consequently, publishers are in a race to deliver books on these subjects. To its credit, the publishing community has not failed in this resolve. Security books appear on shelves in ever-increasing numbers. However, the public remains wary. Consumers recognize driving commercialism when they see it, and are understandably suspicious of books such as this one. They need only browse the shelves of their local bookstore to accurately assess the situation.
Books about Internet security are common (firewall technology seems to dominate the subject list). In such books, the information is often sparse, confined to a narrow range of products. Authors typically include full-text reproductions of stale, dated documents that are readily available on the Net. This poses a problem, mainly because such texts are impractical. Experienced readers are already aware of these reference sources, and inexperienced ones are poorly served by them. Hence, consumers know that they might get little bang for their buck. Because of this trend, Internet security books have sold poorly at America's neighborhood bookstores.
Another reason that such books sell poorly is this: The public erroneously believes that to hack or crack, you must first be a genius or a UNIX guru. Neither is true, though admittedly, certain exploits require advanced knowledge of the target's operating system. However, these exploits can now be simplified through utilities that are available for a wide range of platforms. Despite the availability of such programs, the public remains mystified by hacking and cracking, and therefore reluctant to spend forty dollars on a hacking book.
So, at the outset, Sams.net embarked on a rather unusual journey in publishing this book. The Sams.net imprint occupies a place of authority within the field. Better than two thirds of all information professionals I know have purchased at least one Sams.net product. For that reason, this book represented to them a special situation.
Hacking, cracking, and Internet security are all explosive subjects. There is a sharp difference between publishing a primer about C++ and publishing a hacking guide. A book such as this one harbors certain dangers, including
  • The possibility that readers will use the information maliciously
  • The possibility of angering the often-secretive Internet-security community
  • The possibility of angering vendors that have yet to close security holes within their software

Wireless LAN Communications

This document presents an overview of two IBM wireless LAN products, IBM Wireless LAN Entry and IBM Wireless LAN, and the technology they use for wireless communications. The information provided includes product descriptions, features and functions. Some known product limitations as well as a cross-product comparison are included to assist the reader in deciding which product to use in given circumstances.
Also documented are examples of product setup, configuration and the development of various scenarios conducted by the authors. Our intended audience is customers, network planners, network administrators and system specialists who need to evaluate, implement and maintain wireless networks. A basic understanding of LAN communications terminology and familiarity with common IBM and industry network products and tools are assumed.

Netizens: On the History and Impact of the Net

By Michael Hauben and Ronda Hauben

Introduction By Thomas Truscott

Netizens: On the Impact and History of Usenet and the Internet is an ambitious look at the social aspects of computer networking. It examines the present and the turbulent future, and especially it explores the technical and social roots of the "Net". A well told history can be entertaining, and an accurately told history can provide us valuable lessons. Here follow three lessons for inventors and a fourth for social engineers. Please test them out when reading the book.
The first lesson is to keep projects simple at the beginning. Projects tend to fail so the more one can squeeze into a year the better the chance of stumbling onto a success. Big projects do happen, but there is not enough time in life for very many of them, so choose carefully.
The second lesson is to innovate by taking something old and something new and putting them together in a new way. In this book the "something new" is invariably the use of a computer network. For example, ancient timesharing computer systems had local "mail" services so their users could communicate. But the real power of E-mail emerged when mail could be distributed to distant computers and all the networked users could communicate. Similarly, Usenet is a distributed version of preexisting bulletin-board-like systems. The spectacularly successful World Wide Web is just a distributed version of a hypertext document system. It was remarkably simple, and seemingly obvious, yet it caught the world by complete surprise. Here is another way to state this lesson: If a feature is good, then a distributed version of the feature is good. And vice-versa.
The third lesson is to keep on the lookout for "something new", or for something improved enough to make a qualitative difference. For example, in the future we will have home computers that are always on and connected to the Net. That is a qualitative difference that will trigger numerous innovations.
The fourth lesson is that we learn valuable lessons by trying out new innovations. Neither the original ARPAnet nor Usenet would have been commercially viable. Today there are great forces battling to structure and control the information superhighway, and it is invaluable that the Internet and Usenet exist as working models. Without them it would be quite easy to argue that the information superhighway should have a top-down hierarchical command and control structure. After all there are numerous working models for that.
It seems inevitable that new innovations will continue to make the future so bright that it hurts. And it also seems inevitable that as innovations permeate society the rules for them will change. I am confident that Michael Hauben and Ronda Hauben will be there to chronicle the rapidly receding history and the new future, as "Netizens" increasingly becomes more than a title for a book.

Looking Over the Fence at Networks: A Neighbor's View of Networking Research (2001)

The Internet has been highly successful in meeting the original vision of providing ubiquitous computer-to-computer interaction in the face of heterogeneous underlying technologies. No longer a research plaything, the Internet is widely used for production systems and has a very large installed base. Commercial interests play a major role in shaping its ongoing development. Success, however, has been a double-edged sword, for with it has come the danger of ossification, or inability to change, in multiple dimensions:
  • Intellectual ossification—The pressure for compatibility with the current Internet risks stifling innovative intellectual thinking. For example, the frequently imposed requirement that new protocols not compete unfairly with TCP-based traffic constrains the development of alternatives for cooperative resource sharing. Would a paper on the NETBLT protocol that proposed an alternative approach to control called “rate-based” (in place of “window-based”) be accepted for publication today?
  • Infrastructure ossification—The ability of researchers to affect what is deployed in the core infrastructure (which is operated mainly by businesses) is extremely limited. For example, pervasive network-layer multicast remains unrealized, despite considerable research and efforts to transfer that research to products.
  • System ossification—Limitations in the current architecture have led to shoe-horn solutions that increase the fragility of the system. For example, network address translation violates architectural assumptions about the semantics of addresses. The problem is exacerbated because a research result is often judged by how hard it will be to deploy in the Internet, and the Internet service providers sometimes favor more easily deployed approaches that may not be desirable solutions for the long run.

At the same time, the demands of users and the realities of commercial interests present a new set of challenges that may very well require a fresh approach. The Internet vision of the last 20 years has been to have all computers communicate. The ability to hide the details of the heterogeneous underlying technologies is acknowledged to be a great strength of the design, but it also creates problems because the performance variability associated with underlying network capacity, time-varying loads, and the like means that applications work in some circumstances but not others. More generally, outsiders advocated a more user-centric view of networking research—a perspective that resonated with a number of the networking insiders as well. Drawing on their own experiences, insiders commented that users are likely to be less interested in advancing the frontiers of high communications bandwidth and more interested in consistency and quality of experience, broadly defined to include the “ilities”—reliability, manageability, configurability, predictability, and so forth—as well as non-performance-based concerns such as security and privacy. (Interest was also expressed in higher-performance, broadband last-mile access, but this is more of a deployment issue than a research problem.) Outsiders also observed that while as a group they may share some common requirements, users are very diverse—in experience, expertise, and what they wish the network could do. Also, commercial interests have given rise to more diverse roles and complex relationships that cannot be ignored when developing solutions to current and future networking problems. These considerations argue that a vision for the future Internet should be to provide users the quality of experience they seek and to accommodate a diversity of interests.


An Introduction to Socket Programming

By Reg Quinton
These course notes are directed at Unix application programmers who want to develop client/server applications in the TCP/IP domain (with some hints for those who want to write UDP/IP applications). Since the Berkeley socket interface has become something of a standard, these notes will also apply to programmers on other platforms.
Fundamental concepts are covered including network addressing, well known services, sockets and ports. Sample applications are examined with a view to developing similar applications that serve other contexts. Our goals are
  • to develop a function, tcpopen(server, service), to connect to a service (a sketch follows this list).
  • to develop a server that we can connect to.
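
As a rough sketch of the first goal, here is one shape tcpopen() might take. This is not Quinton's original code: it uses the modern getaddrinfo() resolver rather than the gethostbyname()/getservbyname() calls of the notes' era, and error handling is reduced to returning -1.

    /* tcpopen(server, service): resolve a host name and service name,
     * then connect a TCP socket. A sketch, not the course's own code. */
    #include <string.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netdb.h>

    int tcpopen(const char *server, const char *service)
    {
        struct addrinfo hints, *res, *rp;
        int fd = -1;

        memset(&hints, 0, sizeof hints);
        hints.ai_family = AF_UNSPEC;      /* IPv4 or IPv6 */
        hints.ai_socktype = SOCK_STREAM;  /* TCP, i.e. a stream connection */

        if (getaddrinfo(server, service, &hints, &res) != 0)
            return -1;                    /* unknown host or service */

        for (rp = res; rp != NULL; rp = rp->ai_next) {
            fd = socket(rp->ai_family, rp->ai_socktype, rp->ai_protocol);
            if (fd < 0)
                continue;
            if (connect(fd, rp->ai_addr, rp->ai_addrlen) == 0)
                break;                    /* connected */
            close(fd);
            fd = -1;
        }
        freeaddrinfo(res);
        return fd;                        /* connected descriptor, or -1 */
    }

A caller would then open, say, the SMTP service with fd = tcpopen("mailhost", "smtp") and use read() and write() on the returned descriptor like any other file descriptor.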

This course requires an understanding of the C programming language and an appreciation of the programming environment (i.e., compilers, loaders, libraries, Makefiles and the RCS revision control system).

Netstat Observations:
Inter Process Communication (or IPC) is between host.port pairs (or host.service if you like). A process pair uses the connection -- there are client and server applications on each end of the IPC connection.

Note the two protocols on IP -- TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). There's a third protocol, ICMP (Internet Control Message Protocol), which we'll not look at -- it's what makes IP work in the first place!

TCP services are connection-oriented (like a stream, a pipe, or a tty-like connection) while UDP services are connectionless (more like telegrams or letters).

We recognize many of the services -- SMTP (Simple Mail Transfer Protocol, as used for E-mail), NNTP (Network News Transfer Protocol, as used by Usenet News), NTP (Network Time Protocol, as used by xntpd(8)), and SYSLOG (the BSD logging service implemented by syslogd(1M)).

The netstat(1M) display shows many TCP services as ESTABLISHED (there is a connection between client.port and server.port) and others in a LISTEN state (a server application is listening at a port for client connections). You'll often see connections in a CLOSE_WAIT state -- they're waiting for the socket to be torn down.
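
To make the LISTEN and ESTABLISHED states concrete, here is a minimal TCP server sketch; the port number (8888) and single-connection behavior are arbitrary choices for illustration. While it runs, the netstat display shows the port in the LISTEN state, and an ESTABLISHED entry appears once a client connects.

    /* Minimal TCP listener: bind a port, accept one client, send a
     * greeting, and exit. Port 8888 is an arbitrary example. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>
    #include <sys/socket.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr;

        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port = htons(8888);
        addr.sin_addr.s_addr = htonl(INADDR_ANY);

        bind(fd, (struct sockaddr *)&addr, sizeof addr);
        listen(fd, 5);                        /* netstat now reports LISTEN */
        printf("listening on port 8888\n");

        int client = accept(fd, NULL, NULL);  /* ESTABLISHED on connect */
        if (client >= 0) {
            write(client, "hello\n", 6);
            close(client);                    /* connection torn down */
        }
        close(fd);
        return 0;
    }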


Introduction to Securing Data in Transit

The secure transmission of data in transit relies on both encryption and authentication - on concealing the data itself, and on ensuring that the computers at each end are the computers they say they are.
Authentication
Authentication is a difficult task - computers have no way of knowing that they are 'the computer that sits next to the printer on the third floor' or 'the computer that runs the sales for www.dotcom.com'. And those are the matters which are important to humans - humans don't care if the computer is '10.10.10.10', which is what the computers know.
However, if the computer can trust the human to tell it which computer address to look for - either in the numeric or the name form - the computers can then verify that each other is, in fact, the computer at that address. It's similar to using the post office - we want to know if 100 Somewhere Street is where our friend Sally is, but the post office just wants to know where to send the parcel.
The simplest form of authentication is to exchange secret information the first time the two computers communicate and check it on each subsequent connection. Most exchanges between computers take place over a long period of time, in computer terms, so they tend to do this in a small way for the duration of each connection - as if you were checking, each time you spoke in a phone call, that the person you were talking to was still that person. (Sally, is that you? Yeah. Good, now I was telling you about the kids .. is that still you?)
It may sound paranoid, but this sort of verification system can inhibit what is called a 'man in the middle' attack - where a third party tries to 'catch' the connection and insert their own information. Of course, this relies on the first communication not being intercepted.
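
The checking described above can be sketched in a few lines of code. In this toy challenge-response (all names and values invented), both ends hold a shared secret, the verifier issues a fresh nonce, and the peer proves it still holds the secret by returning a digest of secret plus nonce. The FNV-1a hash stands in for a real MAC such as HMAC purely to keep the example self-contained; it offers no actual security.

    /* Toy challenge-response over a shared secret. FNV-1a is a stand-in
     * for a real MAC (e.g. HMAC) -- do not use this for real security. */
    #include <stdio.h>
    #include <stdint.h>

    static uint32_t fnv1a(const char *s, uint32_t h)
    {
        while (*s) {
            h ^= (uint8_t)*s++;
            h *= 16777619u;
        }
        return h;
    }

    /* Digest of the shared secret followed by the challenge nonce. */
    static uint32_t digest(const char *secret, const char *nonce)
    {
        return fnv1a(nonce, fnv1a(secret, 2166136261u));
    }

    int main(void)
    {
        const char *secret = "shared-secret";  /* exchanged on first contact */
        const char *nonce  = "nonce-000042";   /* fresh for every check */

        uint32_t sally    = digest(secret, nonce);        /* knows the secret */
        uint32_t impostor = digest("wrong-guess", nonce); /* does not */
        uint32_t expected = digest(secret, nonce);        /* verifier's copy */

        printf("Sally:    %s\n", sally == expected ? "accepted" : "rejected");
        printf("impostor: %s\n", impostor == expected ? "accepted" : "rejected");
        return 0;
    }

The fresh nonce is what makes the repeated "Sally, is that you?" check cheap: the secret itself never crosses the wire after the first exchange.
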
Public key encryption (see below) is the other common means of authentication. It doesn't authenticate the sender, but it does authenticate the receiver - and if both parties exchange public keys, and verify by some independent means that the key they have is the key of the party they wish to send to, it authenticates both.

Introduction to Networking Technologies

There are many different computing and networking technologies -- some available today, some just now emerging, some well-proven, some quite experimental. Understanding the computing dilemma more completely involves recognizing these technologies, especially since a single technology by itself seldom suffices; instead, multiple technologies are usually necessary.
This document describes a sampling of technologies of various types, by using a tutorial approach. It compares the technologies available in the three major technology areas: application support, transport networks, and subnetworking. In addition, the applicability of these technologies within a particular situation is illustrated using a set of typical customer situations.
This document can be used by consultants and system designers to better understand, from a business and technical perspective, the options available to solve customers' networking problems.

Introduction to Intrusion Protection and Network Security

If your computer is not connected to any other computers and doesn't have a modem, the only way anyone can access your computer's information is by physically coming to the computer and sitting at it. So securing the room it's in will secure the computer. As soon as your computer is connected to another computer you add the possibility that someone using the other computer can access your computer's information.
If your network (your connected computers) consists only of other computers in the same building you can still secure the network by securing the rooms the computers are in. An example of this would be two computers sharing the same files and printer, but not having a modem and not being connected to any other computers.
However, it's wise to learn about other ways to secure a network of connected computers, in case you add something later. Networks have a tendency to grow. If you have a network, an intruder who gains access to one computer has at least some access to all of them.
Note: Once someone has physical access to your computer, there are a number of ways that they can access your information. Most systems have some sort of emergency feature that allows someone with physical access to get in and change the superuser password, or access the data. Even if your system doesn't have that, or it's disabled, they can always just pick up the computer or remove the hard drive and carry it out. More on this in the physical security article.

Introduction to the Internet Protocols

This is an introduction to the Internet networking protocols (TCP/IP). It includes a summary of the facilities available and brief descriptions of the major protocols in the family.
What is TCP/IP?
TCP/IP is a set of protocols developed to allow cooperating computers to share resources across a network. It was developed by a community of researchers centered around the ARPAnet. Certainly the ARPAnet is the best-known TCP/IP network. However, as of June 1987, at least 130 different vendors had products that support TCP/IP, and thousands of networks of all kinds use it.
First some basic definitions. The most accurate name for the set of protocols we are describing is the "Internet protocol suite". TCP and IP are two of the protocols in this suite. (They will be described below.) Because TCP and IP are the best known of the protocols, it has become common to use the term TCP/IP or IP/TCP to refer to the whole family. It is probably not worth fighting this habit. However, this can lead to some oddities. For example, I find myself talking about NFS as being based on TCP/IP, even though it doesn't use TCP at all. (It does use IP. But it uses an alternative protocol, UDP, instead of TCP. All of this alphabet soup will be unscrambled in the following pages.)
The Internet is a collection of networks, including the Arpanet, NSFnet, regional networks such as NYsernet, local networks at a number of university and research institutions, and a number of military networks. The term "Internet" applies to this entire set of networks. The subset of them that is managed by the Department of Defense is referred to as the "DDN" (Defense Data Network). This includes some research-oriented networks, such as the Arpanet, as well as more strictly military ones. (Because much of the funding for Internet protocol developments is done via the DDN organization, the terms Internet and DDN can sometimes seem equivalent.) All of these networks are connected to each other. Users can send messages from any of them to any other, except where there are security or other policy restrictions on access. Officially speaking, the Internet protocol documents are simply standards adopted by the Internet community for its own use. More recently, the Department of Defense issued a MILSPEC definition of TCP/IP. This was intended to be a more formal definition, appropriate for use in purchasing specifications. However, most of the TCP/IP community continues to use the Internet standards. The MILSPEC version is intended to be consistent with it.

Internetworking Troubleshooting Handbook

Because of the rapid and ongoing developments in the field of networking, accurate troubleshooting information is an ever-sought-after commodity, and the Cisco Press Internetworking Troubleshooting Handbook is therefore a valuable resource for networking professionals throughout the industry.
For the second edition of this book, we gathered together a team of troubleshooting experts who thoroughly revised the material in each of the technology areas to include the most current and relevant troubleshooting information and solutions available today. Their goal and ours was to provide networking professionals with a guide containing solutions to the problems encountered in the field in a format that is easy to apply. We hope that this publication meets that goal.
The Internetworking Troubleshooting Handbook was written as a resource for anyone working in the field of networking who needs troubleshooting reference information. We anticipate that the information in this publication will assist users in solving specific technology issues and problems that they encounter in their existing environments.

Internetworking over ATM: An Introduction

For the foreseeable future a significant percentage of devices using an ATM network will do so indirectly, and will continue to be directly attached to "legacy" media (such as Ethernet and token ring). In addition these devices will continue to utilize "legacy" internetwork layer protocols (for example, IP, IPX, APPN, etc.). This means that in order to effectively use ATM, there must be efficient methods available for operating multiple internetwork layer protocols over heterogeneous networks built from ATM switches, routers, and other switched devices. This challenge is commonly referred to as the operation of multiprotocol over ATM.
This book reviews the various options for the transport and support of multiprotocols over ATM.
This book was written for networking consultants, systems specialists, system planners, network designers and network administrators who need to learn about SVN (IBM's Switched Virtual Networking framework) and associated protocols in order to design and deploy networks that utilize components from this framework. It provides readers with the ability to differentiate between the different offerings. A working knowledge of ATM is assumed.
It is intended to be used with "High-Speed Networking Technology: An Introductory Survey", which describes methods for data transmission in high-speed networks, and "Asynchronous Transfer Mode (ATM) Technical Overview", which describes ATM, a link-level protocol that uses the methods described in "High-Speed Networking Technology: An Introductory Survey" to transmit various types of data together over the same physical links. This book describes the networking protocols that use ATM as the underlying link-level protocol.

High-Speed Networking Technology: An Introductory Survey

This publication presents a broad overview of the emerging technology of very-high-speed communication. It is written at the technical conceptual level with some areas of greater detail. It is intended to be read by computer professionals who have some understanding of communications (but who do not necessarily consider themselves experts).
The primary topics of the book are:
  • The Principles of High-Speed Networking
  • Fibre Optical Technology and Optical Networks
  • Local Area Networks (Token-Ring, FDDI, MetaRing, CRMA, Radio LANs)
  • Metropolitan Area Networks (DQDB, SMDS)
  • High-Speed Packet Switches (Frame Relay, Paris, plaNET)
  • High-Speed Cell Switching (ATM)


Computer Networks and Internets

Contains various network component specifications and photos with explanations. The following networking topics are covered:
  • Motivation and Tools
  • Network Programming And Applications
  • Transmission Media
  • Local Asynchronous Communication (RS-232)
  • Long-Distance Communication (Carriers, Modulation, And Modems)
  • Packets, Frames, And Error Detection
  • LAN Technologies And Network Topology
  • Hardware Addressing And Frame Type Identification
  • LAN Wiring, Physical Topology, And Interface Hardware
  • Extending LANs: Fiber Modems, Repeaters, Bridges, and Switches
  • Long-Distance And Local Loop Digital Technologies
  • WAN Technologies And Routing
  • Connection-Oriented Networking And ATM
  • Network Characteristics: Ownership, Service Paradigm, And Performance
  • Protocols And Layering
  • Internetworking: Concepts, Architecture, and Protocols
  • IP: Internet Protocol Addresses
  • Binding Protocol Addresses (ARP)
  • IP Datagrams And Datagram Forwarding
  • IP Encapsulation, Fragmentation, And Reassembly
  • The Future IP (IPv6)
  • An Error Reporting Mechanism (ICMP)
  • UDP: Datagram Transport Service
  • TCP: Reliable Transport Service
  • Network Address Translation
  • Internet Routing
  • Client-Server Interaction
  • The Socket Interface
  • Example Of A Client And A Server
  • Naming With The Domain Name System
  • Electronic Mail Representation And Transfer
  • IP Telephony (VoIP)
  • File Transfer And Remote File Access
  • World Wide Web Pages And Browsing
  • Dynamic Web Document Technologies (CGI, ASP, JSP, PHP, ColdFusion)
  • Active Web Document Technologies (Java, JavaScript)
  • RPC and Middleware
  • Network Management (SNMP)
  • Network Security
  • Initialization (Configuration)


Computer Networks

By Hans-Peter Bischof
An introduction to the organization and structuring of computer networks.
The following questions describe what will be covered in this course.
  • What is a computer network?
  • What can we do with a computer network?

Keywords: IP and Ethernet addresses, TCP/IP, UDP, router, bridge, socket, rpc, rpcgen, server, client, arp, rarp ...

Protocol Layering
Protocol layering is a common technique to simplify networking designs by dividing them into functional layers, and assigning protocols to perform each layer's task.

For example, it is common to separate the functions of data delivery and connection management into separate layers, and therefore separate protocols. Thus, one protocol is designed to perform data delivery, and another protocol, layered above the first, performs connection management. The data delivery protocol is fairly simple and knows nothing of connection management. The connection management protocol is also fairly simple, since it doesn't need to concern itself with data delivery.

Protocol layering produces simple protocols, each with a few well-defined tasks. These protocols can then be assembled into a useful whole. Individual protocols can also be removed or replaced.
The most important layered protocol designs are the Internet's original DoD model, and the OSI Seven Layer Model. The modern Internet represents a fusion of both models.
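
A toy fragment can make this division of labor concrete. In the sketch below (all protocol names and header fields are invented for illustration), a connection-management layer wraps application data with its own header and hands the result to a data-delivery layer, which frames those bytes without knowing anything about connections.

    /* Toy illustration of protocol layering: each layer adds its own
     * header and treats everything from the layer above as opaque data. */
    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    struct conn_hdr  { uint8_t  conn_id, flags; };  /* connection management */
    struct deliv_hdr { uint16_t length; };          /* data delivery */

    /* The delivery layer knows nothing of connections: it just frames bytes. */
    static size_t deliver(uint8_t *out, const uint8_t *payload, uint16_t len)
    {
        struct deliv_hdr h = { .length = len };
        memcpy(out, &h, sizeof h);
        memcpy(out + sizeof h, payload, len);
        return sizeof h + len;
    }

    int main(void)
    {
        const char *msg = "hello";
        uint8_t upper[32], frame[64];

        /* The connection layer prepends its header to the application data... */
        struct conn_hdr c = { .conn_id = 7, .flags = 1 };
        memcpy(upper, &c, sizeof c);
        memcpy(upper + sizeof c, msg, strlen(msg));

        /* ...and hands the result down as an opaque payload. */
        size_t n = deliver(frame, upper, (uint16_t)(sizeof c + strlen(msg)));

        printf("framed %zu bytes: [delivery][connection][data]\n", n);
        return 0;
    }

Because each layer sees only its own header, the delivery protocol here could be replaced without touching the connection-management code, which is exactly the substitutability the text describes.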


Complete WAP Security

from Certicom

The Wireless Application Protocol (WAP) is a leading technology for companies trying to unlock the value of the Mobile Internet.
Certicom products and services provide complete WAP security solutions today for all of those players involved in bringing the Internet to the mobile end-user — including content providers, equipment manufacturers, network operators, application service providers and enterprises.
WAP
WAP (the Wireless Application Protocol) is a suite of specifications that enable wireless Internet applications; these specifications can be found at http://www.wapforum.org. WAP provides the framework to enable targeted Web access, mobile e-commerce, corporate intranet access, and other advanced services on digital wireless devices, including mobile phones, PDAs, and two-way pagers. The suite of WAP specifications allows manufacturers, network operators, content providers and application developers to offer compatible products and services that work across varying types of digital devices and networks. Even for companies wary of WAP, individual elements of the WAP standards can prove useful by providing industry-standard wireless protocols and data formats.
The WAP architecture is based on the realization that for the near future, networks and client devices (e.g., mobile phones) will have limited capabilities. The networks will have bandwidth and latency limitations, and client devices will have limited processing, memory, power, display and user interaction capabilities. Therefore, Internet protocols cannot be processed as is; an adaptation for wireless environments is required. The entire suite of WAP specifications is derived from equivalent IETF specifications used on the Internet, modified for use within the limited capabilities of the wireless world.
Furthermore, the WAP model introduces a Gateway that translates between WAP and Internet protocols. This Gateway is typically located at the site of the mobile operator, although sometimes it may be run by an application service provider or enterprise.

BSD Sockets

This file contains examples of clients and servers using several protocols; it might be very useful.
Sockets are a generalized networking capability first introduced in 4.1cBSD and subsequently refined into their current form with 4.2BSD. The sockets feature is available with most current UNIX system releases. (Transport Layer Interface (TLI) is the System V alternative). Sockets allow communication between two different processes on the same or different machines. Internet protocols are used by default for communication between machines; other protocols such as DECnet can be used if they are available.
To a programmer, a socket looks and behaves much like a low-level file descriptor. This is because commands such as read() and write() work with sockets in the same way they do with files and pipes. The differences between sockets and normal file descriptors occur in the creation of a socket and through a variety of special operations to control a socket. These operations differ between sockets and normal file descriptors because of the additional complexity in establishing network connections when compared with normal disk access.
For most operations using sockets, the roles of client and server must be assigned. A server is a process which does some function on request from a client. As will be seen in this discussion, the roles are not symmetric and cannot be reversed without some effort.
This description of the use of sockets progresses in three stages:
1. The use of sockets in a connectionless or datagram mode between client and server processes on the same host. In this situation, the client does not explicitly establish a connection with the server. The client, of course, must know the server's address. The server, in turn, simply waits for a message to show up. The client's address is one of the parameters of the message receive request and is used by the server for its response.
2. The use of sockets in a connected mode between client and server on the same host. In this case, the roles of client and server are further reinforced by the way in which the socket is established and used. This model is often referred to as a connection-oriented client-server model.
3. The use of sockets in a connected mode between client and server on different hosts. This is the network extension of Stage 2, above.
The connectionless or datagram mode between client and server on different hosts is not explicitly discussed here. Its use can be inferred from the presentations made in Stages 1 and 3.
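
As a minimal sketch of Stage 1, the fragment below collapses a datagram client and server into a single process on the loopback interface; the port number is an arbitrary choice. The server simply binds and waits, the client sends without establishing any connection, and the sender's address arrives along with the message.

    /* Stage 1 in miniature: connectionless (datagram) sockets on one host.
     * Both endpoints live in one process here purely for brevity. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    int main(void)
    {
        int server = socket(AF_INET, SOCK_DGRAM, 0);
        int client = socket(AF_INET, SOCK_DGRAM, 0);

        struct sockaddr_in addr = { 0 };
        addr.sin_family = AF_INET;
        addr.sin_port = htons(9999);                   /* arbitrary port */
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

        /* The server binds a known address and simply waits for a message. */
        bind(server, (struct sockaddr *)&addr, sizeof addr);

        /* The client must know the server's address; no connection is made. */
        sendto(client, "ping", 4, 0, (struct sockaddr *)&addr, sizeof addr);

        /* The client's address comes back with the message, for the reply. */
        char buf[16];
        struct sockaddr_in peer;
        socklen_t plen = sizeof peer;
        ssize_t n = recvfrom(server, buf, sizeof buf, 0,
                             (struct sockaddr *)&peer, &plen);
        printf("server received %zd bytes: %.*s\n", n, (int)n, buf);

        close(client);
        close(server);
        return 0;
    }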

Asynchronous Transfer Mode (ATM) Technical Overview

This publication presents a broad overview of the emerging technology of very high-speed communications. It is written at the "technical conceptual" level with some areas of greater detail.
It was written for computer professionals who have some understanding of communications (but who do not necessarily consider themselves experts).
The primary topics of the book are:
  • Asynchronous Transfer Mode (ATM)
  • High-Speed Cell Switching
  • Broadband ISDN

This publication is published by Prentice Hall and will be sold in external bookstores.


A new TCP congestion control with empty queues and scalable stability

By Fernando Paganini, Steven H. Low, Zhikui Wang, Sanjeewa Athuraliya and John C. Doyle

We describe a new congestion avoidance system designed to maintain dynamic stability on networks of arbitrary delay, capacity, and topology. This is motivated by recent work showing the limited stability margins of TCP Reno/RED as delay or network capacity scale up. Based on earlier work establishing mathematical requirements for local stability, we develop new flow control laws that satisfy these conditions together with a certain degree of fairness in bandwidth allocation. When a congestion measure signal from links to sources is available, the system can also satisfy the key objectives of high utilization and of emptying the network queues in equilibrium.
We develop a packet-level implementation of this protocol, where the congestion measure is communicated back to sources via random exponential marking of an ECN bit. We discuss parameter choices for the marking and estimation system, and demonstrate using ns-2 simulations the stability of the protocol, and the near-empty equilibrium queues, for a wide range of delays. Comparisons with the behavior of Reno/RED are provided. We also explore the situation where ECN is not used, and instead queueing delay is used as a pricing signal. This alternative protocol is also stable, but will inevitably exhibit nontrivial queues.
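
The marking scheme the abstract mentions can be illustrated with a small computation. Under random exponential marking, a link with congestion price p marks each packet with probability 1 - phi^(-p), so a source observing a marked fraction m can recover p = -log(1 - m)/log(phi). The base phi and the price below are illustrative values, not parameters taken from the paper.

    /* Random exponential marking, numerically: encode a congestion price
     * into a marking probability and recover it at the source.
     * Compile with: cc rem.c -lm */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        const double phi   = 1.1;  /* marking base (> 1), illustrative */
        const double price = 5.0;  /* aggregate path price, illustrative */

        /* Link side: price determines the per-packet marking probability. */
        double mark_prob = 1.0 - pow(phi, -price);

        /* Source side: the observed marked fraction recovers the price. */
        double recovered = -log(1.0 - mark_prob) / log(phi);

        printf("marking probability: %.4f\n", mark_prob);  /* ~0.3791 */
        printf("recovered price:     %.4f\n", recovered);  /* ~5.0000 */
        return 0;
    }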

A Comprehensive Guide to Virtual Private Networks, Volume III: Cross-Platform Key and Policy Management

This redbook closely examines the functionality of the Internet Key Exchange protocol (IKE), which is derived from the Internet Security Association and Key Management Protocol (ISAKMP) and the Oakley protocol. IKE provides a framework and key exchange protocol for Virtual Private Networks (VPN) that are based on the IP Security Architecture (IPSec) protocols. An overview of VPN technologies based on the latest standards is provided in Part I.
This redbook also helps you understand, install and configure the most current VPN product implementations from IBM, in particular AIX, OS/400, Nways routers, OS/390, and several client and OEM platforms. After reading this redbook, you will be able to use those products to implement different VPN scenarios. An overview of the functions and configuration of the VPN components of those products is provided in Part II.
The main focus of this redbook is on how to implement complete VPN solutions using state-of-the-art VPN technologies, and to document IBM product interoperability. This redbook is therefore not meant to be an exhaustive VPN design guide. The authors would like to refer the reader to IBM security and network consulting services for that purpose.
This redbook is a follow-on to the VPN Vol. 1 (SG24-5201) and VPN Vol. 2 (SG24-5234) redbooks. A basic understanding of IP security and cryptographic concepts and network security policies is assumed.

A Comprehensive Guide to Virtual Private Networks, Volume II: IBM Nways Router Solutions

The Internet nowadays is not only a popular vehicle for retrieving and exchanging information in traditional ways, such as e-mail, file transfer and Web surfing. It is being used more and more by companies to replace their existing telecommunications infrastructure with virtual private networks by implementing secure IP tunnels across the Internet between corporate sites as well as to business partners and remote locations.
This updated redbook includes the IPSec enhancements provided by Version 3.3 of the IBM Nways Multiprotocol Routing Services (MRS), Nways Multiprotocol Access Services (MAS) and Access Integration Services (AIS) that implement the Internet Key Exchange (IKE) protocol. This redbook also includes other new features, such as the policy engine, digital certificate and LDAP support, and QoS. The VPN scenarios are enhanced to reflect the latest implementation of IPSec and L2-tunneling functionality.
In this redbook we delve further into these scenarios by showing you how to implement solutions that exploit Data Link Switching (DLSw), IP Bridging Tunnels, Enterprise Extender (HPR over IP), APPN DLUR, TN3270, and Tunneling on layer 2 (L2TP, L2F, PPTP) through an IPSec tunnel.
A working knowledge of the IPSec protocols is assumed.

Designing A Wireless Network

By Jeffrey Wheat, Randy Hiser, Jackie Tucker, Alicia Neely and Andy McCullough

Understand How Wireless Communication Works
  • Step-by-Step Instructions for Designing a Wireless Project from Inception to Completion
  • Everything You Need to Know about Bluetooth, LMDS, 802.11, and Other Popular Standards
  • Complete Coverage of Fixed Wireless, Mobile Wireless, and Optical Wireless Technology

Introduction

You’ve been on an extended business trip and have spent the long hours of the flight drafting follow-up notes from your trip while connected to the airline’s onboard server. After deplaning, you walk through the gate and continue into the designated public access area. Instantly, your personal area network (PAN) device, which is clipped to your belt, beeps twice announcing that it automatically has retrieved your e-mail, voicemail, and videomail. You stop to view the videomail—a finance meeting—and also excerpts from your children’s school play.

Meanwhile, when you first walked into the public access area, your personal area network device contacted home via the Web pad on your refrigerator and posted a message to alert the family of your arrival. Your spouse will know you’ll be home from the airport shortly.

You check the shuttlebus schedule from your PAN device and catch the next convenient ride to long-term parking. You also see an e-mail from your MP3 group showing the latest selections, so you download the latest MP3 play list to listen to on the way home.

As you pass through another public access area, an e-mail comes in from your spouse. The Web pad for the refrigerator inventory has noted that you’re out of milk, so could you pick some up on the way home? You write your spouse back and say you will stop at the store. When you get to the car, you plug your PAN device into the car stereo input port. With new music playing from your car stereo’s MP3 player, you drive home, with a slight detour to buy milk at the nearest store that the car’s navigation system can find.

The minute you arrive home, your PAN device is at work, downloading information to various devices. The data stored on your PAN device is sent to your personal computer (PC) and your voicemail is sent to the Bluetooth playback unit on the telephone-answering device. The PAN device sends all video to the television, stored as personal files for playback. As you place the milk in the refrigerator, the Web pad updates to show that milk is currently in inventory and is no longer needed. The kids bring you the television remote and you check out possible movies together to download later that night.



Networking with z/OS and Cisco Routers: An Interoperability Guide

The increased popularity of Cisco routers has led to their ubiquitous presence within the network infrastructure of many enterprises. In such large corporations, it is also common for many applications to execute on the z/OS (formerly OS/390) platform. As a result, the interoperation of z/OS-based systems and Cisco network infrastructures is a crucial aspect of many enterprise internetworks.
This IBM Redbook provides a survey of the components necessary to achieve full interoperation between your z/OS-based servers and your Cisco IP routing environment. It may be used as a network design guide for understanding the considerations of the many aspects of interoperation. We divide this discussion into four major components:
  • The options and configuration of channel-attached Cisco routers
  • The design considerations for combining OSPF-based z/OS systems with Cisco-based EIGRP networks
  • A methodology for deploying Quality of Service policies throughout the network
  • The implementation of load balancing and high availability using Sysplex Distributor and MNLB (including new z/OS V1R2 support)

We highlight our discussion with a realistic implementation scenario and real configurations that will aid you in the deployment of these solutions. In addition, we provide in-depth discussions, traces, and traffic visualizations to show the technology at work.


Networking Fundamentals, v4.0

A network is an interconnection of computers. These computers can be linked together using a wide variety of different cabling types, and for a wide variety of different purposes.
The basic reasons why computers are networked are:
  • to share resources (files, printers, modems, fax machines)
  • to share application software (MS Office)
  • to increase productivity (make it easier to share data amongst users)

Take for example a typical office scenario where a number of users in a small business require access to common information. As long as all user computers are connected via a network, they can share their files, exchange mail, schedule meetings, send faxes and print documents all from any point of the network.

It would not be necessary for users to transfer files via electronic mail or floppy disk; rather, each user could access all the information they require, thus leading to less wasted time and hence greater productivity.

Imagine the benefits of a user being able to directly fax the Word document they are working on, rather than print it out, then feed it into the fax machine, dial the number etc.

Small networks are often called Local Area Networks [LANs]. A LAN is a network allowing easy access to other computers or peripherals. The typical characteristics of a LAN are:

  • physically limited (less than 2 km)
  • high bandwidth (greater than 1 Mbps)
  • inexpensive cable media (coax or twisted pair)
  • data and hardware sharing between users
  • owned by the user


Wireless Network Security 802.11, Bluetooth and Handheld Devices

By Tom Karygiannis and Les Owens
Wireless communications offer organizations and users many benefits such as portability and flexibility, increased productivity, and lower installation costs. Wireless technologies cover a broad range of differing capabilities oriented toward different uses and needs. Wireless local area network (WLAN) devices, for instance, allow users to move their laptops from place to place within their offices without the need for wires and without losing network connectivity. Less wiring means greater flexibility, increased efficiency, and reduced wiring costs. Ad hoc networks, such as those enabled by Bluetooth, allow data synchronization with network systems and application sharing between devices. Bluetooth functionality also eliminates cables for printer and other peripheral device connections. Handheld devices such as personal digital assistants (PDA) and cell phones allow remote users to synchronize personal databases and provide access to network services such as wireless e-mail, Web browsing, and Internet access. Moreover, these technologies can offer dramatic cost savings and new capabilities to diverse applications ranging from retail settings to manufacturing shop floors to first responders.
However, risks are inherent in any wireless technology. Some of these risks are similar to those of wired networks; some are exacerbated by wireless connectivity; some are new. Perhaps the most significant source of risks in wireless networks is that the technology’s underlying communications medium, the airwave, is open to intruders, making it the logical equivalent of an Ethernet port in the parking lot.
The loss of confidentiality and integrity and the threat of denial of service (DoS) attacks are risks typically associated with wireless communications. Unauthorized users may gain access to agency systems and information, corrupt the agency’s data, consume network bandwidth, degrade network performance, launch attacks that prevent authorized users from accessing the network, or use agency resources to launch attacks on other networks.

A Beginner’s Guide to Network Security

An Introduction to the Key Security Issues for the E-Business Economy
With the explosion of the public Internet and e-commerce, private computers and computer networks, if not adequately secured, are increasingly vulnerable to damaging attacks. Hackers, viruses, vindictive employees and even human error all represent clear and present dangers to networks. And all computer users, from the most casual Internet surfers to large enterprises, could be affected by network security breaches. However, security breaches can often be easily prevented. How? This guide provides you with a general overview of the most common network security threats and the steps you and your organization can take to protect yourselves from threats and ensure that the data traveling across your networks is safe.
Importance of Security
The Internet has undoubtedly become the largest public data network, enabling and facilitating both personal and business communications worldwide. The volume of traffic moving over the Internet, as well as corporate networks, is expanding exponentially every day. More and more communication is taking place via e-mail; mobile workers, telecommuters, and branch offices are using the Internet to remotely connect to their corporate networks; and commercial transactions completed over the Internet, via the World Wide Web, now account for large portions of corporate revenue.

Local Area Network Concepts and Products: Routers and Gateways

Local Area Network Concepts and Products is a set of four reference books for those looking for conceptual and product-specific information in the LAN environment. They provide a technical introduction to the various types of IBM local area network architectures and product capabilities.
The four volumes are as follows:
SG24-4753-00 - LAN Architecture
SG24-4754-00 - LAN Adapters, Hubs and ATM
SG24-4755-00 - Routers and Gateways
SG24-4756-00 - LAN Operating Systems and Management
These redbooks complement the reference material available for the products discussed. Much of the information detailed in these books is available through current redbooks and IBM sales and reference manuals. It is therefore assumed that the reader will refer to these sources for more in-depth information if required.
These documents are intended for customers, IBM technical professionals, services specialists, marketing specialists, and marketing representatives working in networking and in particular the local area network environments.
Details on installation and performance of particular products will not be included in these books, as this information is available from other sources.
Some knowledge of local area networks, as well as an awareness of the rapidly changing intelligent workstation environment, is assumed.

Linux IPv6 HOWTO

By Peter Bieringer
IPv6 is a new layer 3 protocol (see linuxports/howto/intro_to_networking/ISO - OSI Model) which will supersede IPv4 (also known as IP). IPv4 was designed a long time ago (RFC 760 / Internet Protocol, from January 1980) and since its inception, there have been many requests for more addresses and enhanced capabilities. The latest RFC is RFC 2460 / Internet Protocol Version 6 Specification. The major changes in IPv6 are the redesign of the header, including the increase of address size from 32 bits to 128 bits. Because layer 3 is responsible for end-to-end packet transport using packet routing based on addresses, it must include the new IPv6 addresses (source and destination), like IPv4.
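
The jump from 32-bit to 128-bit addresses is easy to see in the socket API's address structures; the short check below (using the documentation addresses 192.0.2.1 and 2001:db8::1) prints the size of each.

    /* Compare IPv4 and IPv6 address sizes as the socket API stores them. */
    #include <stdio.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>

    int main(void)
    {
        struct in_addr  v4;
        struct in6_addr v6;

        inet_pton(AF_INET,  "192.0.2.1",   &v4);
        inet_pton(AF_INET6, "2001:db8::1", &v6);

        printf("IPv4 address: %zu bytes (32 bits)\n",  sizeof v4);  /* 4  */
        printf("IPv6 address: %zu bytes (128 bits)\n", sizeof v6);  /* 16 */
        return 0;
    }
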
Because of a lack of manpower, the IPv6 implementation in the kernel was unable to follow the discussed drafts or newly released RFCs. In October 2000, a project was started in Japan, called USAGI, whose aim was to implement all missing or outdated IPv6 support in Linux. It tracks the current IPv6 implementation in FreeBSD made by the KAME project. From time to time they create snapshots against the current vanilla Linux kernel sources.
USAGI is now making use of the new Linux kernel development series 2.5.x to insert all of their current extensions into this development release. Hopefully the 2.6.x kernel series will contain a true and up-to-date IPv6 implementation.

Internetworking Technology Handbook

What Is an Internetwork?
An internetwork is a collection of individual networks, connected by intermediate networking devices, that functions as a single large network. Internetworking refers to the industry, products, and procedures that meet the challenge of creating and administering internetworks.
History of Internetworking

The first networks were time-sharing networks that used mainframes and attached terminals. Such environments were implemented by both IBM's Systems Network Architecture (SNA) and Digital's network architecture.
Local-area networks (LANs) evolved around the PC revolution. LANs enabled multiple users in a relatively small geographical area to exchange files and messages, as well as access shared resources such as file servers and printers.
Wide-area networks (WANs) interconnect LANs with geographically dispersed users to create connectivity. Some of the technologies used for connecting LANs include T1, T3, ATM, ISDN, ADSL, Frame Relay, radio links, and others. New methods of connecting dispersed LANs are appearing every day.
Today, high-speed LANs and switched internetworks are becoming widely used, largely because they operate at very high speeds and support such high-bandwidth applications as multimedia and videoconferencing.
Internetworking evolved as a solution to three key problems: isolated LANs, duplication of resources, and a lack of network management. Isolated LANs made electronic communication between different offices or departments impossible. Duplication of resources meant that the same hardware and software had to be supplied to each office or department, as did separate support staff. This lack of network management meant that no centralized method of managing and troubleshooting networks existed.

Realizing the Information Future - The Internet and Beyond

The potential for realizing a national information networking marketplace that can enrich people's economic, social, and political lives has recently been unlocked through the convergence of three developments:
  • The federal government's promotion of the National Information Infrastructure through an administration initiative and supporting congressional actions;
  • The runaway growth of the Internet, an electronic network complex developed initially for and by the research community; and
  • The recognition by entertainment, telephone, and cable TV companies of the vast commercial potential in a national information infrastructure.

A national information infrastructure (NII) can provide a seamless web of interconnected, interoperable information networks, computers, databases, and consumer electronics that will eventually link homes, workplaces, and public institutions together. It can embrace virtually all modes of information generation, transport, and use. The potential benefits can be glimpsed in the experiences to date of the research and education communities, where access through the Internet to high-speed networks has begun to radically change the way researchers work, educators teach, and students learn.

To a large extent, the NII will be a transformation and extension of today's computing and communications infrastructure (including, for example, the Internet, telephone, cable, cellular, data, and broadcast networks). Trends in each of these component areas are already bringing about a next-generation information infrastructure. Yet the outcome of these trends is far from certain; the nature of the NII that will develop is malleable. Choices will be made in industry and government, beginning with investments in the underlying physical infrastructure. Those choices will affect and be affected by many institutions and segments of society. They will determine the extent and distribution of the commercial and societal rewards to this country for investments in infrastructure-related technology, in which the United States is still currently the world leader.

1994 is a critical juncture in our evolution to a national information infrastructure. Funding arrangements and management responsibilities are being defined (beginning with shifts in NSF funding for the Internet), commercial service providers are playing an increasingly significant role, and nonacademic use of the Internet is growing rapidly. Meeting the challenge of "wiring up" the nation will depend on our ability not only to define the purposes that the NII is intended to serve, but also to ensure that the critical technical issues are considered and that the appropriate enabling physical infrastructure is put in place.


PVM: Parallel Virtual Machine - A Users' Guide and Tutorial for Networked Parallel Computing

The PVM project began in the summer of 1989 at Oak Ridge National Laboratory. The prototype system, PVM 1.0, was constructed by Vaidy Sunderam and Al Geist; this version of the system was used internally at the Lab and was not released to the outside. Version 2 of PVM was written at the University of Tennessee and released in March 1991. During the following year, PVM began to be used in many scientific applications. After user feedback and a number of changes (PVM 2.1 - 2.4), a complete rewrite was undertaken, and version 3 was completed in February 1993. It is PVM version 3.3 that we describe in this book (and refer to simply as PVM). The PVM software has been distributed freely and is being used in computational applications around the world.
To successfully use this book, one should be experienced with common programming techniques and understand some basic parallel processing concepts. In particular, this guide assumes that the user knows how to write, execute, and debug Fortran or C programs and is familiar with Unix.

Teach Yourself THE INTERNET in 24 Hours

By Noel Estabrook
Contributing Author: Bill Vernon
Here is a quick preview of what you'll find in this book:
Part I, "The Basics," takes you through some of the things you'll need to know before you start. You'll get a clear explanation of what the Internet is really like, learn how you can actually use the Internet in real life, find tips on Internet Service Providers, and receive an introduction to the World Wide Web.
Part II, "E-Mail: The Great Communicator," teaches you all you'll need to know about e-mail. Learn basics like reading and sending e-mail, as well as more advanced functions such as attaching documents, creating aliases, and more. You'll also find out all about listservs and how to use them to your advantage.
Part III, "News and Real-Time Communication," shows you many of the things that make the Internet an outstanding tool for communication. You'll learn about newsgroups and how to communicate with thousands of people by clicking your mouse. You'll also learn how to carry on live, real-time conversations over the Internet, as well as get information on some of the hottest new technology such as Net Phones.
Part IV, "The World Wide Web," shows you what is now the most exciting part of the Internet. Learn which browser is best for you, get the basics of Web navigation, and find out how to help your browser with plug-ins. Finally, you'll discover the most powerful tool on the Web today--the search engine--and more importantly, how to use it.
Part V, "Finding Information on the Net," explains some of the other useful functions of the Net. You'll learn how to transfer files and use Gopher. You'll also learn how to access libraries and other resources by using Telnet. Finally, this section will show you how to use the Internet to locate people, places, and things that might not be available directly through the Web.
Part VI, "Getting the Most Out of the Internet," shows you practical ways to use the Internet. You can find resources and techniques on how to get information about entertainment, education, and business. Finally, learn how to use the Internet just to have fun.

Client/Server Computing Second Edition

In a competitive world it is necessary for organizations to take advantage of every opportunity to reduce cost, improve quality, and provide service. Most organizations today recognize the need to be market driven, to be competitive, and to demonstrate added value.
A strategy being adopted by many organizations is to flatten the management hierarchy. With the elimination of layers of middle management, the remaining individuals must be empowered to make the strategy successful. Information to support rational decision making must be made available to these individuals. Information technology (IT) is an effective vehicle to support the implementation of this strategy; frequently it is not used effectively. The client/server model provides power to the desktop, with information available to support the decision-making process and enable decision-making authority.
The Gartner Group, a team of computer industry analysts, noted a widening chasm between user expectations and the ability of information systems (IS) organizations to fulfill them. The gap has been fueled by dramatic increases in end-user comfort with technology (mainly because of prevalent PC literacy); continuous cost declines in pivotal hardware technologies; escalation in highly publicized vendor promises; increasing time delays between vendor-promised releases and product delivery (that is, "vaporware"); and emergence of the graphical user interface (GUI) as the perceived solution to all computing problems.
In this book you will see that client/server computing is the technology capable of bridging this chasm. This technology, particularly when integrated into the normal business process, can take advantage of this new literacy, cost-effective technology, and GUI friendliness. In conjunction with a well-architected systems development environment (SDE), it is possible for client/server computing to use the technology of today and be positioned to take advantage of vendor promises as they become real.
The amount of change in computer processing-related technology since the introduction of the IBM PC is equivalent to all the change that occurred during the previous history of computer technology. We expect the amount of change in the next few years to be even more geometrically inclined. The increasing rate of change is primarily attributable to the coincidence of four events: a dramatic reduction in the cost of processing hardware, a significant increase in installed and available processing power, the introduction of widely adopted software standards, and the use of object-oriented development techniques. The complexity inherent in the pervasiveness of these changes has prevented most business and government organizations from taking full advantage of the potential to be more competitive through improved quality, increased service, reduced costs, and higher profits. Corporate IS organizations, with an experience based on previous technologies, are often less successful than user groups in putting the new technologies to good use.
Taking advantage of computer technology innovation is one of the most effective ways to achieve a competitive advantage and demonstrate value in the marketplace. Technology can be used to improve service by quickly obtaining the information necessary to make decisions and to act to resolve problems. Technology can also be used to reduce costs of repetitive processes and to improve quality through consistent application of those processes. The use of workstation technology implemented as part of the business process and integrated with an organization's existing assets provides a practical means to achieve competitive advantage and to demonstrate value.
Computer hardware continues its historical trend toward smaller, faster, and lower-cost systems. Competitive pressures force organizations to reengineer their business processes for cost and service efficiencies. Computer technology trends prove to leading organizations that the application of technology is the key to successful reengineering of business processes.
Unfortunately, we are not seeing corresponding improvements in systems development. Applications developed by in-house computer professionals seem to get larger, run more slowly, and cost more to operate. Existing systems consume all available IS resources for maintenance and enhancements. As personal desktop environments lead users to greater familiarity with a GUI, corporate IS departments continue to ignore this technology. The ease of use and standard look and feel, provided by GUIs in personal productivity applications at the desktop, is creating an expectation in the user community. When this expectation is not met, IS departments are considered irrelevant by their users.
Beyond GUI, multimedia technologies are using workstation power to re-present information through the use of image, video, sound, and graphics. These representations relate directly to the human brain's ability to extract information from images far more effectively than from lists of facts.
Accessing information CAN be as easy as tapping an electrical power utility. What is required is the will among developers to build the skills to take advantage of the opportunity offered by client/server computing.
This book shows how organizations can continue to gain value from their existing technology investments while using the special capabilities that new technologies offer. The book demonstrates how to architect SDEs and create solutions that are solidly based on evolving technologies. New systems can be built to work effectively with today's capabilities and at the same time can be based on a technical architecture that will allow them to evolve and to take advantage of future technologies.
For the near future, client/server solutions will rely on existing minicomputer and mainframe technologies to support applications already in use, and also to provide shared access to enterprise data, connectivity, and security services. To use existing investments and new technologies effectively, we must understand how to integrate these into our new applications. Only the appropriate application of standards based technologies within a designed architecture will enable this to happen.
It will not happen by accident.
Patrick N. Smith with Steven L. Guengerich