How about liability for faulty software?
Last week, the US government unveiled a newly drafted strategy to secure cyberspace. The strategy calls for home-based users to voluntarily learn more about security and for all computer users (home, government, business) to do more to secure systems. A 65-page document outlines the strategy.
According to the President's Critical Infrastructure Protection Board Web site, the plan was drafted after "town hall meetings were held around the country, and fifty-three clusters of key questions were published to spark public debate. Even more input is needed. The public has 60 days to offer further input."
I've received press releases from several technology companies that support the strategy. But based on news reports I've read, other businesses and individuals have complained about the plan. Their objections include that the plan isn't comprehensive enough, that it targets government and home users more closely than businesses, and that it might cost businesses too much to implement when profits are down in an ailing economy. I want to discuss what the plan emphasizes and, more important, what it doesn't emphasize.
According to "The Washington Post," Bruce Schneier, chief technology officer (CTO) of Counterpane Internet Security, said, "You really have to ask why CEOs would bother to follow any of these recommendations, particularly at a time when most companies' earnings are down 20 percent. The fact is, companies aren't rewarded for altruism; they're rewarded by the strength of their stock price."
One notable security industry figure, Allan Paller, research director of the SysAdmin, Audit, Network, and Security (SANS) Institute, seems to have forgotten that we live in a democratic society. According to "The Washington Post" story, "[Paller] believes the 60-day public comment period will help to show who has worked hardest to weaken the plan." Paller said, "The whiners will now have a spotlight shone on them."
So will most businesses respond to the plan, and are all its critics trying to weaken it? Many of us believe that the problem with security in cyberspace resides largely in faulty software. You've sent email messages to me stating that view, and I've written about my own concerns. In "eWEEK," Wyatt Starnes, CEO and cofounder of security vendor Tripwire, echoes that sentiment in his response to the draft strategy: "I'd like to see them make software companies take responsibility for the reliability of their products."
Perhaps if software companies were liable by law for their products' lack of security, we wouldn't need such a weighty plan to secure cyberspace. We know that regulation works reasonably well in other industries.
Consider that Microsoft currently controls 80 percent of the desktop market, not to mention much of the server market. Doesn't it make sense that if software vendors, including Microsoft, were legally obligated to roll out the most secure products possible—or face stiff consequences—more than 80 percent of the computers on the planet would be more secure (and less of a risk to any country's national security)? Why are companies in the computer industry still exempt from liability?
Although the government is taking an admirable path to better computer security, it doesn't seem to notice the more obvious problem: an unregulated software industry that bears no liability for its products. Why impose restrictions on home users, government, and general business users while neglecting the manufacturers of faulty software? Wouldn't it be equally effective to consider regulating software manufacturers, or am I missing some relevant points?
If you agree that we need to regulate software manufacturers, it's time to contact your government representatives and urge them to institute strong software regulation. (Here's contact information for your representatives.)