For some months now, we've heard rumblings about peer-to-peer (P2P) networks in the Internet space. My initial reaction to these rumblings was, shall we say, less than enthusiastic. I managed a P2P network in the early 1990s, and I wouldn't wish the experience on my worst enemy. But thanks to a couple of industry friends who believe in the model and took the time to provide the information I needed, I found that P2P deserves a bit more attention than I originally gave it. Although I'm unsure how deeply or how soon the model will affect application service provision, some features could benefit desktop-centric computing and application service provision, for both in-house and outsourced use. Let's take a brief look at the model.

Strictly speaking, the current P2P model resembles that P2P file-sharing mess I had to contend with years ago. The basic design concepts behind the two are similar, but P2P's current application is significantly more powerful than just file and printer sharing. In the P2P model, peer computers can communicate directly without having to rely on a central server. Some possible uses for this model that are already under development include abstracted file storage (in which you don't have to know which computer a file is stored on—a welcome change from the old P2P networking days), distribution of updated virus signatures, collaborative activities such as shared Web browsing, and instant messaging.
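The abstracted-file-storage idea above can be sketched in a few lines: each peer holds some files locally and forwards requests to its neighbors, so the requester never needs to know which machine actually has the file. This is a minimal toy illustration in Python, not any particular product's design; the class and method names are my own invention.

```python
class Peer:
    """Toy peer node: holds some files locally and knows a few neighbors.
    There is no central server or index anywhere in this picture."""

    def __init__(self, name):
        self.name = name
        self.files = {}      # filename -> contents stored on this machine
        self.neighbors = []  # other Peer objects this one can talk to

    def fetch(self, filename, visited=None):
        """Locate a file by asking peers directly. The caller never learns
        (or cares) which computer the file actually lives on."""
        visited = visited or set()
        visited.add(self.name)            # avoid looping around the network
        if filename in self.files:
            return self.files[filename]
        for peer in self.neighbors:
            if peer.name not in visited:
                result = peer.fetch(filename, visited)
                if result is not None:
                    return result
        return None                       # nobody on the network has it
```

A quick usage example: if peer `a` knows peer `b`, and `b` holds `report.txt`, then `a.fetch("report.txt")` returns the contents even though `a` has no idea where the file lives. A real system would add network transport, caching, and replication on top of this skeleton.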

The idea of P2P computing is diametrically opposed to that of server-based computing. The latter model assumes that the client doesn't have (or shouldn't have) a lot of resources, whereas the former assumes that the client machine is very powerful and has resources to spare.

P2P has always had two problems: security and administration. P2P networks can distribute files automatically among network members. Virus authors would LOVE to exploit a model that can spread viruses without any cooperation on the user's part. And consider management. Each P2P network member's software MUST be simple and not require any maintenance—or this model could make all your configuration nightmares twice as bad.

These security and administration problems could have a similar solution. What if each client in the P2P network had a dumb agent (rather like an SNMP agent) that was capable of executing any instructions it received from outside? This dumb agent wouldn't require any management, and you could replace it any time you had to reinstall the OS. Its only job would be to follow outside orders—which it could get from a central management server located in-house or from an Application Service Provider (ASP). This dumb agent could have one piece of intelligence: a refusal to run any outside instructions that weren't on an "accept these" list and digitally signed. The system would run only the scripts it was prepared to accept. For example, this client could accept the instructions "spool this print job," but it could also accept the instructions "link the Web browser on another machine to the one running locally." In other words, this model is a golden opportunity for service providers whose clients don't want to depend on a network to get to their applications and data but do want the remote-management features application service provision offers.
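The agent's one piece of intelligence—check the allow-list, verify the signature, refuse everything else—can be sketched in a few lines of Python. For simplicity this sketch uses an HMAC with a shared key to stand in for a true public-key digital signature, and the command names and key are hypothetical placeholders:

```python
import hmac
import hashlib

# Hypothetical shared key. A real deployment would verify a public-key
# digital signature from the management server or ASP, not a shared-secret MAC.
MANAGEMENT_KEY = b"management-server-key"

# The "accept these" list: the only instructions the agent will ever run.
ALLOWED_COMMANDS = {"spool_print_job", "link_remote_browser"}

def sign(command: str, key: bytes = MANAGEMENT_KEY) -> str:
    """What the management server does: authenticate the instruction."""
    return hmac.new(key, command.encode(), hashlib.sha256).hexdigest()

def agent_accepts(command: str, signature: str) -> bool:
    """The dumb agent's entire decision procedure: run an instruction
    only if it is on the allow-list AND the signature checks out."""
    if command not in ALLOWED_COMMANDS:
        return False
    expected = sign(command)
    # Constant-time comparison so the check itself doesn't leak the MAC.
    return hmac.compare_digest(expected, signature)
```

So a properly signed "spool_print_job" instruction gets through, while anything off-list—or anything signed with the wrong key—is silently refused. That refusal is exactly what keeps the auto-distribution model from becoming the virus vector described above.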

There's a lot of heat behind P2P. Microsoft has HailStorm, IBM has its P2P working group, and—last Wednesday—Sun Microsystems announced and posted source code for JXTA, an open-source, file-sharing P2P model currently implemented in Java for OS portability. The model has potential—and industry resources behind it. That's not a guarantee of success, but it doesn't hurt, either.

So what do you think about the P2P model? Whether you think it's the wave of the future or an unworkable flash in the pan, I'd like to hear from you.