A scan challenge can hone your security skills!
Security UPDATE, Web exclusive, April 9, 2003
* TEST YOUR FORENSIC-ANALYSIS SKILLS
I've discussed the Honeynet Project in previous Security UPDATE commentaries. Last week, the project posted another "Scan of the Month," which makes information gathered from an attacked honeypot available to the public.
The Honeynet Project posts the scans to let people use their forensic-analysis skills to analyze the log files the honeypot gathered. The Azusa Pacific University (APU) Honeynet Project provided this month's scan challenge. [http://www.honeynet.org/scans/scan27] APU deployed a honeypot on an unpatched Windows 2000 system that had a blank administrator password. Attackers and worms compromised the system numerous times, and the honeypot became part of a large "botnet."
The Honeynet Project tailored the current challenge to beginner and intermediate skill levels. After analyzing the logs, you can answer several questions and submit your answers for review. Several tools can help you arrive at answers; those the Honeynet Project recommends include Snort, an intrusion detection system (IDS), and Ethereal, a packet-capture and analysis tool. You'll find links to those tools on the Scan of the Month page [http://www.honeynet.org/scans], where you can also read more about the rules of the challenge.
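Both tools operate on libpcap capture files like those the honeypot gathered. As a rough illustration of what such tools read (a minimal sketch, not part of the challenge materials; the `parse_pcap_header` helper is my own name, not an API from either tool), here is how the 24-byte pcap global header can be decoded in Python:

```python
import struct

# Magic number identifying a little-endian libpcap file with
# microsecond timestamps; big-endian files store the bytes reversed.
PCAP_MAGIC_LE = 0xA1B2C3D4

def parse_pcap_header(data: bytes) -> dict:
    """Parse the 24-byte libpcap global header from raw bytes."""
    if len(data) < 24:
        raise ValueError("not enough data for a pcap global header")
    # Read the magic number first to determine byte order.
    magic = struct.unpack("<I", data[:4])[0]
    endian = "<" if magic == PCAP_MAGIC_LE else ">"
    # Fields: magic, version major/minor, timezone offset,
    # timestamp accuracy, snapshot length, link-layer type.
    magic, major, minor, tz, sigfigs, snaplen, linktype = struct.unpack(
        endian + "IHHiIII", data[:24]
    )
    return {
        "version": f"{major}.{minor}",
        "snaplen": snaplen,    # max bytes captured per packet
        "linktype": linktype,  # 1 = Ethernet
    }

# Example: a synthetic header for pcap version 2.4, Ethernet link type.
sample = struct.pack("<IHHiIII", 0xA1B2C3D4, 2, 4, 0, 0, 65535, 1)
print(parse_pcap_header(sample))  # {'version': '2.4', 'snaplen': 65535, 'linktype': 1}
```

In practice you would simply open the honeypot's capture files in Ethereal or replay them through Snort rather than parse them by hand; the sketch only shows the file format those tools consume.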
Taking part in such challenges can help hone your forensic-analysis skills. If you're already proficient, further practice can help you keep abreast of current trends--the sorts of activities currently compromising systems. Because this month's challenge addresses a compromised Win2K system, many of you might want to consider meeting the challenge. Submissions to the challenge are due no later than April 25.
* PATCHING THE PATCH SYSTEM
In last week's Security UPDATE, I discussed a mishap in the disclosure of a vulnerability in Sendmail. A researcher posted various details of the vulnerability to the BugTraq mailing list, and Sendmail.org released a patched version of its application before its planned release date. I speculated and raised questions about what might have happened, and--as it turns out--I was wrong. I was missing a key fact about the situation. Reader Claus Assmann wrote to inform me about some of the missing details. At his suggestion, I also contacted Eric Allman at Sendmail.org to obtain a clearer perspective about what had transpired.
Allman took the time to offer what he knows about events--how and when they occurred. The following paragraphs present what he told me in detail.
"What we know is this: Late in the day on Tuesday, 18 March, Michal Zalewski reported a possible vulnerability to us. He included a sample case that demonstrated that there was a buffer overflow of some sort, but he had not created a 'proof of concept' exploit, nor did he speculate on the nature of the bug.
"We verified the bug that night and shortly thereafter had a first pass at a fix, which had not yet undergone code review. Code review was completed later that week.
"We then wanted to send the information to vendors so they could have a patch available. However, this was delayed due to the problems CERT was having with someone going by \[the name\] Hack4Life who seemed to have pretty direct access to security information going to vendors. It wasn't (and to the best of my knowledge, still isn't) clear where the leak actually was, but we had to consider at least the possibility that it was inside one of the vendors themselves. For this reason, we delayed release of the information to vendors in the hope that CERT could find and fix the problem. Our plan had been to go to vendors on Monday, 31 March ... whether or not they had succeeded.
"However, some time on the night of Friday, 28 March, someone by the name of 'nag' posted a message to vulndiscuss \[a mailing list\] and full-disclosure asking about a 'rumor spreading about new Sendmail vulnerability.' That message included a patch to the problem we had been working on. However, the patch that was given was quite different from the one we had come up with, so we don't believe that the patch was a leak from ourselves. At this point we have no idea where it did come from--it could even have been independently found by someone who never reported it to us.
"We decided to delay for a few hours so we could get some sleep, and we released on Saturday, 29 March. We knew that this was almost the worst possible time to release, but we felt that with the patch being distributed, it was only a matter of time before an exploit was created, and we had no idea if that would be hours, days, or even longer. As it turns out, I haven't seen an exploit in the wild today, almost a week later. Another security group \[Internet Security Systems--ISS\] has produced a proof-of-concept exploit, which we have not seen, but they did tell us that it was substantially harder to create than it would at first appear. Had we realized that an exploit was unlikely to have been released over the weekend, we might have delayed release until Monday, but we didn't know that at the time, and we felt that going out Saturday was as prudent as we could be. And that's what we know ..."
So there you have it: another case of an unknown source somehow gaining access to private communications and leaking details to the public prematurely. Two weeks ago, I discussed this problem as it pertains to CERT in my Security UPDATE commentary, "Security Research: A Double-Edged Sword." [http://www.secadministrator.com/articles/index.cfm?articleid=38448] I think most people aren't sure why someone is intercepting communications and leaking details about security vulnerabilities, but we can easily see that the practice places a lot of networks at risk unnecessarily. Sooner or later, if we can't plug the information leaks, one of them could cause serious repercussions. The situation is both ironic and challenging: The process of finding security vulnerabilities and patching them before they can be exploited has itself become compromised--and must now be patched.