
Detecting, tracking, and cleaning up a botnet infestation

The following story is true. The names have been changed to protect the innocent. Briefly, a client called with what they thought was a firewall problem, which turned out to be a whole-company botnet infection. This is a detailed walk-through of how I detected, analyzed, neutered, and then cleaned it up.

This is mostly what I sent to Windows IT Pro, which they picked up and turned into the "IT Pro Hero" article published in their October 2007 issue. What was published lacked most of the technical details that would help someone facing a similar problem. The original article is online and can be read here!

Hopefully my path through this problem will help others detect and clean up infections they might encounter.

I am available for hire should you require assistance. Without further delay, here is my story.



What started as a firewall problem

Monday… 

I received a call from a client mid-Monday about trouble getting to the internet and mail problems. I run a backup mail spooler for them, and sure enough, when I checked my logs I saw mail start spooling to my location at 10:37 AM, and I couldn't ping their world. A traceroute to their network bounced between two points at their ISP. So my phone diagnosis was to have the client's IT staff call the ISP and have them check their lines and equipment.

Curiously, I noticed brief internet outages at my own location and at another client location that seemed related, though I wasn't quite sure at the time. Since we all use various flavors of internet service from the same ISP, it seemed likely to be an ISP issue.

Later in the afternoon, with no resolution, I got onto a three-way call with the ISP and my client. The ISP indicated the routing loop was due to no Ethernet signal being seen at the Cisco router on the client's side. After the client did some re-wiring and the routes had a couple of minutes to establish, the routing loop went away, but I still couldn't connect to their network. The client re-wired a single workstation to connect directly to the router's Ethernet port on the public side of the firewall, and pings started working.

At that point (6:30 PM), I thanked the ISP, loaded up some equipment and cables, and jumped into the truck for some on-site network diagnosis. This wasn't the ISP's problem.

The outside layer of the network consisted of a Cisco router, Sonicwall TZ170 firewall, and a few miscellaneous hubs to allow for easy access to the data streams.

There was considerable confusion over which Cisco router had which function in life. There were three: one fractional T1 to another location, a full T1 to the internet, and a router that used to go to the other location but now served an unknown function. Some mislabeling left me scratching my head, and eventually I backed off and started from ground zero, tracing cables and identifying each router separately. Once this was done, it appeared that one of the small cheap hubs was intermittent and / or the Sonicwall's WAN port was not able to receive traffic. Whenever we plugged in the hub/firewall combination, we saw no DNS name resolution, no pings coming back from the internet… no traffic at all made it through the firewall. Removing the hub and connecting directly, we got a bit of data, then it went dead again.

 

Band-aids

By now it was late, and based on what the Sonicwall was doing, things weren't going to work that night the way they had in the past. We were now out of "fix what is wrong" mode, since the hardware was strongly suspect, and into "make something that mostly works" mode to keep the data flowing while new hardware was acquired. The options were to:

1) Try to use the OPT port by upgrading the Sonicwall to the enhanced firmware and using its fail-over feature. If it was a port problem on the Sonicwall, this would at least get them up and running the next day.

2) Try the Linksys BEFSX41 I had with me. While not a firewall with the beefy capabilities of the Sonicwall, it was better than no network at all.

3) Try the D-Link DI-624 WAP I had with me as a router, with the wireless turned off. Typically I use them for homes or home offices, but as boaters say, "any port in a storm"; with their internet and email down all day, this qualified as a storm.

Option 1 was tried and didn't seem to work at all. I couldn't see any reason why, and since I suspected bad hardware, I didn't beat on that configuration too long trying to make it work.

Option 2 was tried, but that firewall can't handle the subnet used at the client (10.x.x.x with a 255.0.0.0 mask); the largest subnet the Linksys supports is 255.255.255.0.

Option 3 worked like a charm. This pretty much indicated the Sonicwall had flaked out, either physically or mentally.

Swapping an expensive firewall for a low-end home wireless router and having most things work pretty much pointed to failing hardware. But something wasn't quite right, and by midnight the goal was to have something working better than they had for most of Monday. I was largely there, and with a few tweaks to let mail flow into the client's Exchange server, they would be hobbling along.

I left the site at 1:00 AM Tuesday, and when I got home I initiated a push of the backup mail that had spooled at my location back to the client, which worked nicely.

Crisis #1 over. Was the firewall good or bad?

Tuesday

Before RMA-ing the Sonicwall, I wanted to confirm it really was bad. I've seen cases with the older-model Sonicwalls (XPRS2 & Pro-100) where they scramble their configurations and require wiping and reconfiguring in the weeks before they go completely dead. My spidey-sense was tingling that this might be the case, and confirming a failure is always a good diagnostic step; it helps prevent a day or two of downtime waiting for new equipment to ship if the hardware really isn't the problem.

I set up the Sonicwall on the bench and reconfigured it as it was prior to Monday's festivities. The TZ170 ran the whole day without any issues. All ports worked, data flowed without issues, and I was able to VPN into the firewall with only the usual problems on my side (I run the older "Safenet" Sonicwall client instead of their new client; the Safenet client lets me interface with almost any firewall that has VPN functionality, and the new Sonicwall client doesn't).

One of the times I handled the Sonicwall on Monday, it was quite warm. So I covered all the openings and let it cook until it was almost too hot to handle, and it still passed the test with flying colors. With it reconfigured back to the way it was, I was very confident I had a known-good piece of equipment and the problem must be elsewhere.

I also had a prior commitment which couldn't be broken, so I couldn't be on-site Tuesday. The client had a mostly working network, so that gave some time for thoughtful action and less fire-fighting. The plan was to be there first thing Wednesday morning, reinstall the known-good firewall, and see what happened.

Reinstall firewall, and poof!

Wednesday

After some final checks following the overnight test, I showed up early with the same firewall, ready to plug it in and see things work now that it was re-flashed with known-good firmware and a known-good configuration.

Swapped out the cheap DLink, plugged the Sonicwall back in… and we didn't have a good internet connection. In fact, the Sonicwall couldn't even phone home to register itself, something it needed to do after having its brains wiped clean and completely reloaded. This led us down a "bad DNS server from the ISP" detour for a while. Not a totally blind alley, as the ISP had changed DNS servers and the client wasn't yet using the new ones.

I put the DLink back in, and the internet worked again. Sonicwall back in, internet dead.

Needing to narrow down the problem (I >know< the Sonicwall is good; I'd bet my paycheck on that statement), I cut the world into two parts: the Sonicwall / Cisco / T1 internet combination on one side, and the client's LAN on the other. With just my notebook, the Sonicwall, and the ISP… I can surf the net, register the Sonicwall, life is very good. Within seconds of plugging the LAN side into the client's network, I can't ping out on the web anymore. Something on the LAN side of this client's network is breaking the Sonicwall.

Badly.

A BIG clue!

Up until now, my focus had been on the WAN side, seeing traffic but not much intelligible in it.

All of a sudden the focus shifts: instead of the WAN side, I connect a hub to the LAN port of the Sonicwall along with my notebook and sniff the network connection with Ethereal. I capture two minutes of data, then hit the stop button. It takes five more minutes before Ethereal responds due to all the buffered data. Inside this sniff, systems are "gratuitous ARP-ing" random IP addresses and MAC addresses (unsolicited ARP announcements nobody asked for), essentially clogging the local area network with trash. They are also sending many SYN packets to various IP addresses out on the internet.

What is happening is immediately obvious to me; the why is still undiscovered. The Sonicwall, being a good little firewall, is buffering all these addresses into internal memory tables and becoming clogged trying to keep track of all the connections.

The DLink, being a cheap consumer model, drops all that excess data on the floor; what gets through gets through, and it doesn't care about the rest.

So by being smarter, the Sonicwall became stupid. And by being stupid, the DLink was still able to function, thus looked smarter. A little bit of irony for the day.
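For anyone who wants to repeat this kind of triage without clicking through packets by hand, here is a minimal sketch of the same two checks done against a saved capture file. This is my illustration for this article, not something used during the incident; it assumes libpcap (WinPcap/Npcap on Windows), and the capture filename is a placeholder. It counts gratuitous ARPs and bare TCP SYNs, the two symptoms the Ethereal session showed.

// triage.cpp -- count gratuitous ARPs and bare TCP SYNs in a saved capture.
// Build against libpcap (or WinPcap/Npcap on Windows).
#include <pcap.h>
#include <cstdio>
#include <cstring>

int main(int argc, char *argv[])
{
    const char *file = (argc > 1) ? argv[1] : "lan-sniff.pcap";   // placeholder name
    char errbuf[PCAP_ERRBUF_SIZE];

    pcap_t *cap = pcap_open_offline(file, errbuf);
    if (!cap)
    {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return 1;
    }

    long gratuitousArps = 0, bareSyns = 0;
    struct pcap_pkthdr hdr;
    const unsigned char *pkt;

    while ((pkt = pcap_next(cap, &hdr)) != NULL)
    {
        if (hdr.caplen < 14)                        // need a full Ethernet header
            continue;
        unsigned short etherType = (pkt[12] << 8) | pkt[13];

        if (etherType == 0x0806 && hdr.caplen >= 14 + 28)
        {
            // ARP over Ethernet/IPv4: sender IP at offset 14, target IP at offset 24
            // inside the ARP packet. A gratuitous ARP announces its own address,
            // i.e. sender IP == target IP.
            if (memcmp(pkt + 14 + 14, pkt + 14 + 24, 4) == 0)
                ++gratuitousArps;
        }
        else if (etherType == 0x0800 && hdr.caplen >= 14 + 20)
        {
            int ipHdrLen = (pkt[14] & 0x0F) * 4;    // IP header length in bytes
            if (pkt[14 + 9] == 6 && hdr.caplen >= (unsigned)(14 + ipHdrLen + 14))
            {
                unsigned char tcpFlags = pkt[14 + ipHdrLen + 13];
                if ((tcpFlags & 0x02) && !(tcpFlags & 0x10))   // SYN set, ACK clear
                    ++bareSyns;
            }
        }
    }

    printf("gratuitous ARPs: %ld   bare TCP SYNs: %ld\n", gratuitousArps, bareSyns);
    pcap_close(cap);
    return 0;
}

On a healthy LAN, both counters stay small over a two-minute capture; here they were anything but.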

Using various Ethereal sniffs and filtering down some of the traffic, eventually one machine on the rack, a Win2K server, is identified as doing some of this bad stuff. I'm sure other machines are also doing bad things, but I need to look at one in its own world first. This machine is completely disconnected from the LAN, the question now being: "What is running on that machine (and probably others on the network) that is causing this problem?" We'll deal with the "why?", "how?", and "what to do about it?" after the "what?" is identified.

I copy Mark Russinovich's TCPView and Process Explorer (PE) tools to a USB key and get those onto our suspect system. Hub into its network connection, set up the sniffer, fresh-boot the suspect system, run both TCPView and PE on it, start the sniffer, then very briefly connect it to the LAN, just long enough for it to get an IP address.

TCPView lights up like a Christmas tree. A couple of hundred connections instantly form to batches of addresses all over the internet. The program making all these connections is called "ifconfig.exe" in c:\windows\system32.

I view the ifconfig.exe process with Process Explorer and suspend it. Ifconfig.exe claims it is from Microsoft, but its digital signature won't verify; other programs verify just fine. We go to the c:\windows\system32 directory, and the file doesn't appear to be there: it is marked read-only, system, and hidden. I open it up with the attrib command and copy the file to the USB drive, sneaker-netting it over to my notebook.
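For completeness, the same un-hiding can be done through the Win32 API instead of the attrib command. This is a small sketch of that step (my illustration; the path is the one the worm used on this network), clearing the read-only / system / hidden attributes so the file can be copied off for analysis.

// unhide.cpp -- clear the attributes the worm used to hide itself.
#include <windows.h>
#include <cstdio>

int main()
{
    const char *path = "C:\\WINDOWS\\system32\\ifconfig.exe";   // the suspect file

    DWORD attrs = GetFileAttributesA(path);
    if (attrs == INVALID_FILE_ATTRIBUTES)
    {
        printf("Could not read attributes (file missing or access denied).\n");
        return 1;
    }

    // Strip read-only, system, and hidden so the file shows up and can be copied.
    attrs &= ~(FILE_ATTRIBUTE_READONLY | FILE_ATTRIBUTE_SYSTEM | FILE_ATTRIBUTE_HIDDEN);
    if (!SetFileAttributesA(path, attrs ? attrs : FILE_ATTRIBUTE_NORMAL))
    {
        printf("SetFileAttributes failed: error %lu\n", GetLastError());
        return 1;
    }

    printf("Attributes cleared; the file can now be copied for analysis.\n");
    return 0;
}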

We officially have a botnet infestation - the analysis

When I copy the file from the key onto my notebook and Trend AV instantly says "No! That is WORM_RBOT.CWU" and won't let me touch the file, I know we have at least one part of the puzzle. Unfortunately, the Trend website doesn't tell me much more than the generic worm / bot descriptive text. We specifically run a Symantec scan on that file, as the client has Symantec corporate AV deployed everywhere, and nothing bad is flagged. Looking around on a couple of other servers, we find it on both domain controllers, the Symantec AV server, the main file server… every server but one: the Exchange server. The Exchange server is protected by Trend's SMB product. We also find it on a couple of client computers.

So we now know what is happening: there is some network worm running on every server but one, plus a few workstations on the LAN. The worm isn't caught by Symantec. We don't know what else this worm does. How much remote access do other people have? Is this on every system because the domain administrative accounts have been hacked? Is private personal information being acquired from inside the company and flowing out? Is this internal sabotage? Are there other worms lying in wait and not yet noticed? How badly compromised is this network? There are a lot of questions that don't have answers.

Using Process Explorer, we suspend it on one of the other servers but don't kill it yet. What we don't know is how good a worm this is. Does it have buddies that watch each other's backs and respawn themselves when killed? Is it waiting to do something malicious if it detects someone trying to remove it?

To attempt to answer these questions, I take the worm and put it into an "isolation chamber" built from a Windows 2000 Server virtual machine (VM) running on my notebook. With Ethereal sniffing the network and the worm copied into the VM, I watch as the worm is started. It immediately disappears from the desktop, copies itself into the c:\windows\system32 directory, and starts phoning home via DNS queries to "kockanet.myip.hu" at 81.176.6.210. There are no answers from kockanet.myip.hu, probably because the network is still clogged up with client machines doing the other things this worm does. A later traceroute localized the target IP address to a dialup account out of Russia. Three weeks later, kockanet.myip.hu resolved to 200.251.187.2, an address in Brazil. So clearly, whoever the real bot-master is, he uses other hacked machines to control his bots.

Reference traceroutes:

 Trace taken 2/21/2007:

C:\temp>tracert kockanet.myip.hu
Tracing route to kockanet.myip.hu [81.176.6.210]
over a maximum of 30 hops:

[--snip--]

 18    40 ms    39 ms    39 ms  sl-bb27-nyc-8-0.sprintlink.net [144.232.7.38]
 19   107 ms   107 ms   107 ms  sl-bb20-lon-4-0.sprintlink.net [144.232.9.165]
 20   107 ms   108 ms   107 ms  sl-gw11-lon-14-0.sprintlink.net [213.206.128.57]
 21   101 ms   100 ms   100 ms  sle-rtcom-1-0.sprintlink.net [82.195.190.194]
 22   212 ms   211 ms   212 ms  195.161.156.14
 23   218 ms   210 ms   221 ms  217.106.16.18
 24   215 ms   211 ms     *     dialup.globalalania.ru [217.106.204.6]
 25   229 ms     *      230 ms  ip210.alania.info [81.176.6.210]

Trace taken 3/13/2007:

C:\temp>tracert kockanet.myip.hu
Tracing route to kockanet.myip.hu [200.251.187.2]
over a maximum of 30 hops:

 17    68 ms    68 ms    68 ms  as-1.r00.miamfl02.us.bb.gin.ntt.net [129.250.5.13]
 18    67 ms    68 ms    67 ms  as-0.r01.miamfl02.us.bb.gin.ntt.net [129.250.2.193]
 19    67 ms    67 ms    69 ms  p4-7.embratel.r01.miamfl02.us.ce.gin.ntt.net [157.238.179.6]
 20   178 ms   175 ms   179 ms  ebt-G7-0-intl03.rjo.embratel.net.br [200.244.111.249]
 21   174 ms   175 ms   180 ms  ebt-G3-2-core03.rjo.embratel.net.br [200.244.140.214]
 22   198 ms   199 ms   198 ms  ebt-A3-0-1-dist02.vta.embratel.net.br [200.244.40.85]
 23   193 ms   194 ms   194 ms  ebt-F8-0-0-acc03.vta.embratel.net.br [200.244.160.245]
 24   205 ms   208 ms   215 ms  prov-S1-0-1-4-acc03.vta.embratel.net.br [201.73.193.42]
 25   208 ms   231 ms   207 ms  200.251.187.2

Trace complete.

While the network worm is doing its thing in the virtual machine, we use TCPView, Process Explorer, and Autoruns to detect and suspend the worm on six other servers. Network traffic from the sniff indicates it phones home on TCP port 4212. After all this is done, another sniff tells us many client machines are also compromised and running the same worm. Meanwhile, the isolation-chambered worm still hasn't gotten to talk to its master: it keeps phoning home looking for direction and, finding none, sits and waits, calling out every few seconds and hoping for a reply with some kind of orders on how to proceed. Seeing this activity, my first idea is to pin this worm to the ground and plug up its ears and mouth.

Neutering the botnet inside the company

This company had recently migrated off NT4 and onto a Server 2003 / Active Directory world. That means they run their own internal DNS servers, which forward requests for names outside their domain to the ISP's DNS servers. To prevent the worm from phoning home, I added a zone on their DNS servers telling them they are authoritative for the "myip.hu" domain, so lookups for that domain never leave the building. This company doesn't do any business with companies in Hungary, so this is a safe move that won't impact the business.
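A quick way to confirm the blackhole is working from any machine on the LAN is to resolve the bot's host name and make sure the answer never points back out to the internet; with an empty authoritative zone, the lookup should simply fail. A minimal sketch (my illustration, not part of the original cleanup; Winsock, link with ws2_32.lib):

// checkdns.cpp -- verify the internal DNS blackhole for the myip.hu domain.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    addrinfo hints = {};
    hints.ai_family = AF_INET;                    // IPv4, like the bot itself

    addrinfo *result = NULL;
    int rc = getaddrinfo("kockanet.myip.hu", NULL, &hints, &result);
    if (rc != 0)
    {
        // No answer is exactly what we want: the blackhole zone is doing its job.
        printf("Name did not resolve (blackhole working), error %d\n", rc);
    }
    else
    {
        // If it resolves, it should point at an internal address we control,
        // never back out to the bot master.
        for (addrinfo *p = result; p != NULL; p = p->ai_next)
        {
            sockaddr_in *sa = (sockaddr_in *)p->ai_addr;
            printf("Resolved to %s\n", inet_ntoa(sa->sin_addr));
        }
        freeaddrinfo(result);
    }

    WSACleanup();
    return 0;
}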

Now we take the first step in cleaning by stopping the worm on one server and watching: it does not respawn, giving evidence the worm doesn't have another program watching its back. I also carefully watch the network traffic to see whether it tries to phone anywhere else; no such attempts are seen. Deleting the ifconfig.exe file and removing its run keys from the registry appear to be a valid fix, though I'm a bit cautious. On another machine, manually stopping ifconfig and rerunning it, the worm goes into a "Where is my mommy?" loop and appears lost. Not a very smart worm, not having a back-up plan. We could pin it to the ground and neutralize it by rebooting everything, but since this is the middle of the day, we opt instead to leave the worms on the less critical servers suspended and manually clean the critical servers.
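For reference, here is a small read-only sketch (Win32 API; my illustration, not a cleanup script) that lists what is set to start automatically under the machine-wide Run key, so a bogus entry such as the worm's ifconfig.exe would stand out. It checks only one of many autorun locations, which is exactly why Autoruns is the better tool in practice.

// listrun.cpp -- list values under HKLM\Software\Microsoft\Windows\CurrentVersion\Run.
#include <windows.h>
#include <cstdio>

int main()
{
    HKEY key;
    if (RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                      "SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Run",
                      0, KEY_READ, &key) != ERROR_SUCCESS)
        return 1;

    char name[256];
    char data[1024];
    for (DWORD i = 0; ; ++i)
    {
        DWORD nameLen = sizeof(name);
        DWORD dataLen = sizeof(data) - 1;
        DWORD type = 0;
        if (RegEnumValueA(key, i, name, &nameLen, NULL, &type,
                          (LPBYTE)data, &dataLen) != ERROR_SUCCESS)
            break;                                  // no more values

        if (type == REG_SZ || type == REG_EXPAND_SZ)
        {
            data[dataLen] = '\0';                   // REG_SZ data may lack a terminator
            printf("%s = %s\n", name, data);        // anything unexpected gets investigated
        }
    }

    RegCloseKey(key);
    return 0;
}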

I also block all access to port 4212 on the Sonicwall firewall, which has since been re-installed and is working as it should. There are still running copies of the worm on client machines, and looking over the traffic on port 4212 I come up with a list of infected machines by IP address. With the servers all either cleaned or in a "running a suspended worm" state and the client's IT staff ready to attack the workstations in the morning, I call it a night.

Once home, I send the worm via Yahoo mail to a colleague. Yahoo uses Symantec, and the worm passes without any issues. He also has Symantec on his site, and the worm is not detected there.

Unfortunately, I can't submit the sample to Symantec since I'm not a Symantec customer. The only way to submit viruses or malware to them is via the "submit virus sample" feature inside their product. At least, that's what I find on their website.

That is one stupid policy. The front page of a security company's website should say "Here is how you send us stuff you've caught in the wild!", and anyone should be able to submit samples.

That is just my opinion.

The Exchange server is the only system that faces the public, via OWA and SMTP; I turned both of those off at the firewall. While I was confident the worm hadn't landed on the Exchange server (Trend would have caught it), I didn't know whether a vulnerability on that server had allowed someone to control another server and gain access to the network that way. The fact that this worm existed in every server's c:\windows\system32 had my spidey-sense tingling with fear that the entire domain had been compromised. That, or an administrator had executed the worm, somehow causing it to spread as far as it had. So until the infection vector could be better identified, I wanted to close off the one public-facing item. Plus, that server was not updated with the latest patches, which were now almost two weeks old, so a vulnerability could exist that had been exploited.

I initiated some Microsoft patches, began a download of Trend SMB 3.5 (which the client had licensed but not yet deployed), and discussed options with the client on how to proceed. Why Symantec doesn't detect the malware is a mystery. Potential plans to get their world fully working again were discussed; options included mass-deploying Trend, manual cleaning, and submitting a sample to Symantec to get the pattern added to their database.

More laboratory bot analysis

Thursday

In an effort both to stop this from propagating over the internet and to get some more information, I submitted the malware to a few companies other than Symantec for their analysis.

In my own lab, I run the bot again in an isolation chamber, and this time it phones home just fine and immediately starts sending SYN packets to numerous IP addresses on port 2967. Those addresses fall into these ranges:

88.100.x.x
24.x.x.x
128.x.x.x
75.x.x.x
192.x.x.x

The initial conversation the bot had with its Internet Relay Chat (IRC) network of bot buddies is listed in the appendix at the end of this article. Seeing this, I contact the client and block outbound port 2967 at the firewall as well, just in case anything is still running internally.

Researching port 2967, I find some clues indicating this is a potential vulnerability in Symantec AV. Various links are forwarded to the client for their own research:

http://isc2.sans.org/diary.html?storyid=1893
http://isc2.sans.org/diary.html?storyid=1947
http://isc2.sans.org/diary.html?storyid=1966

Highly technical details on the Symantec AV vulnerability:

http://research.eeye.com/html/alerts/AL20061215.html

The client submits the malware via their Symantec installation and waits to hear back. With these last rules in the firewall, even the running worms can no longer reproduce. With the malware effectively neutered by the two-pronged attack of a dual port lockout at the firewall and the DNS entry cutting the bots off from finding their mommy, the client has some breathing room to plan the right countermeasures.

The initial infection point and the malware's propagation vectors are still unknown. Based on some of the reading via the links above, and the fact that the port the malware was using is one Symantec AV uses, I suspect Symantec itself is the propagation vector.

Friday

Concerned that Symantec AV might not be working correctly, I push the EICAR test file to one server that was infected, and it catches it immediately. So at least the AV engine is running on the infected machines. It is not uncommon for bots to disable AV functionality, open up ports in software firewalls, and do other nasty things.
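For readers who haven't used it, EICAR is the industry-standard, deliberately harmless test string that virtually every AV product detects by agreement; writing it to a file is enough to prove the on-access scanner is alive. A trivial sketch (the filename is illustrative):

// eicar.cpp -- drop the standard EICAR test string; a working on-access
// scanner should block or quarantine the file the moment it is written.
#include <cstdio>

int main()
{
    const char *eicar =
        "X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*";

    FILE *f = fopen("eicar_test.com", "wb");
    if (!f)
        return 1;

    fputs(eicar, f);
    fclose(f);
    return 0;
}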

The client hears back from Symantec:

filename: ifconfig.exe
result: This file is detected as W32.Spybot.Worm.
http://www.symantec.com/avcenter/venc/data/w32.spybot.worm.html

[--snip--]

We have created RapidRelease definitions that will detect this threat. Please follow the instruction at the end of this email message to download and install the latest RapidRelease definitions.

Virus definition detail:
Sequence Number:  65193
Defs Version:           90222an
Extended Version: 02/22/2007 rev.40

There were no instructions at the end of the message for installing the RapidRelease definitions. Symantec was clueless about this little bit of malware until this client submitted the sample.

The link from Symantec is actually some very good news: this bot spreads via a vulnerability in the Symantec AV product. The client is running SAV Corporate version 10.0.2.2000; if it were patched, it would read 10.0.2.2002.

I find it very ironic that a worm spread via a vulnerability in the anti-worm product that had administrative access to every system it was installed on. The patch is described here:

http://service1.symantec.com/SUPPORT/ent-security.nsf/docid/2006053016355448?Open&docid=2006052609181248&nsf=ent-security.nsf&view=docid

The patch has been available since mid-2006. I don't know exactly when Symantec AV was installed, but if it was installed after May 2006 then it wasn't brought up to date at install time, and if it was installed before May 2006 then it wasn't kept current.

Had the patch been installed, the worm would have existed only on the system that was the initial infection point and would not have propagated to the numerous systems on the network.

(No, I didn't sell them Symantec, I didn't install it, nor was I in charge of maintaining it for the client...)

What makes this good news: while a compromise of domain administrative access isn't ruled out, it is less likely given the nature of the malware's propagation scheme. That is, it didn't spread via a domain admin account it had access to, but via a program that was itself running with administrative access. Since it didn't appear to phone home in any other way, the malware is contained and clean-up efforts can continue. With luck, the special release of AV definitions, once pushed to the clients, will rid the LAN of this issue once and for all.

To flatten and rebuild every server and system, or not to rebuild - that is the question

I am not ruling out a domain compromise. But given the significant costs in time and labor involved in flattening and rebuilding the domain, the speed with which the network became unusable once infected, and the unsophisticated nature of the attack, IMHO the probability of a compromised domain is low, but still greater than zero. The decision to rebuild or not comes down to comparing (cost of a compromise) * (probability the domain really is compromised) against (cost of a rebuild).

Were I king, I wouldn't rebuild but would very carefully monitor traffic for a week or two looking for any suspicious activity.

I presented that analysis and recommendation. While a rebuild of every server and system would be wildly profitable for me, I always recommend what I feel is in the client's best interests. So in this case, weighing the data they had on their network and the likely cost should it leak against the cost of being certain they were safe from any other latent infections, they agreed, and we proceeded with cleaning rather than rebuilding.

The initial infection vector is still a mystery. But during cleanup and scanning by Trend, one system was found running more malware than the rest. It was a notebook that had recently been exposed to the internet at someone's house and brought back into the corporate network, and that person started the system close to the time the internal network melted down. It is the current "most likely suspect".

 

During the next week, the client's IT staff cleaned the machines. They found Symantec was damaged and wouldn't cleanly uninstall, so it was manually removed and Trend was deployed, which cleaned off any remaining bits of the infection. This was partially motivated by the worm disabling Symantec's auto-updates, to the point that Symantec recommended each system receive manual cleaning.

Since they already had to go desk to desk, they decided to switch AV products at the same time, which also cleaned the machines.

I assisted remotely: I wrote a quick program in Borland C++ Builder that would detect any attempted socket connection on port 4212 and report the IP address that attempted it. By pointing the internal DNS entry for the kockanet.myip.hu domain at the system running my new "botnet detector" program, any bot that reappeared on the network and tried to phone home would be instantly detected, and further cleanup could proceed.
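The original C++ Builder source wasn't published, but the idea is simple enough that a minimal console sketch conveys it (Winsock, link with ws2_32.lib): listen on TCP port 4212 and log the source address of every connection attempt. Combined with the DNS redirection described above, any machine that shows up in this log is still infected and trying to reach its master.

// botdetector.cpp -- log every host that tries to phone home on TCP 4212.
#include <winsock2.h>
#include <ws2tcpip.h>
#include <cstdio>
#include <ctime>

int main()
{
    WSADATA wsa;
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0)
        return 1;

    SOCKET listener = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    if (listener == INVALID_SOCKET)
        return 1;

    sockaddr_in local = {};
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = INADDR_ANY;           // all interfaces
    local.sin_port = htons(4212);                 // the bot's phone-home port

    if (bind(listener, (sockaddr *)&local, sizeof(local)) == SOCKET_ERROR ||
        listen(listener, SOMAXCONN) == SOCKET_ERROR)
        return 1;

    for (;;)
    {
        sockaddr_in remote = {};
        int len = sizeof(remote);
        SOCKET s = accept(listener, (sockaddr *)&remote, &len);
        if (s == INVALID_SOCKET)
            continue;

        time_t now = time(NULL);
        char stamp[32];
        strftime(stamp, sizeof(stamp), "%Y-%m-%d %H:%M:%S", localtime(&now));

        // Any connection here is a machine still trying to reach its bot master.
        printf("%s  connection attempt from %s\n", stamp, inet_ntoa(remote.sin_addr));

        closesocket(s);                           // we only care who knocked
    }

    // not reached
    WSACleanup();
    return 0;
}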

Stray systems that might have been taken off-site after the infection and before the cleaning were now detected, the IT staff alerted, and cleaning could proceed. Plus, the bots still couldn't phone home, so the internal network was protected from re-infection.

One month later, the initial infection point had never been definitively identified, though it also wasn't extensively looked for. Finding it would have involved intrusive actions on every server and workstation until enough evidence had been gathered, then lots of analysis time and money. Since the worm writers and bot masters are somewhere on planet Earth hiding their tracks, sometimes the best business decision is to lick your wounds, heal, and take something positive away from the experience.

 

What could we have done differently or better?

The following was my post-mortem analysis for the client:

This is a lesson in patch management and multi-layered security. Two layers broke down here: a user somehow brought some malware into the company, and the installation of the anti-malware product (Symantec) wasn't patched at install time or kept up to date. In today's environment, keeping everything patched for known vulnerabilities is required, not optional. Either layer working as it should have would have eliminated the damage (the person doesn't run the malware) or minimized it (Symantec AV properly patched, so only the person who ran the malware gets infected).

"What can we do better?"

Inventory what you have: hardware (computers, routers, switches, printers… anything with a network jack) and the software / firmware running on it all, and bring everything up to current releases or current patch levels. 95% of all hacks are against known, patchable vulnerabilities.

Security is not a one-time thing. Good security is a process, not a product. Products can help, but there is no substitute for a properly configured end-user and systems patched for known vulnerabilities.

Monitor. I'd installed a free tool called MRTG for [your previous systems administrator] to watch network traffic and graph it long-term. Had that still been running, it would have indicated early on that this was a network traffic issue, not a firewall or ISP failure. The traffic pattern also could have identified the initial infection point based on when the traffic spiked and where it came from.

Document. I've been asking about a building wiring map since day #1. In order to localize problems, you have to be able to cut off problem machines. Detection is only useful if you can stop the spread by cutting off the infected machines.

There are other products that can help with monitoring; they go by various names (IDS, or Intrusion Detection Systems; IPS, or Intrusion Prevention Systems; network-based virus detection; …).

Anything pattern-based is going to fail with outdated patterns, just like Symantec did until we found the root cause and sent it to them. Things that watch for typical activity and flag new activity will either be rich with false positives or fail to trigger when someone writes a worm that blends in with existing traffic patterns. What saddens me is that the bad guys are getting better at being badder faster than the good guys are getting better at being gooder. That is my own phrase and terrible English, but in one line that anyone can understand, it describes the computer industry right now.

Products like those aren't bad, and if you understand what they do, how they work, and what they are telling you, they can provide valuable information. But anyone who tells you "Buy this product, put it on your rack, and nothing like this will ever happen again" is selling snake oil; if it were true, they should be willing to back it up with a $100M guarantee.

Remediation. Now would be a really good time to look at various "What if?" scenarios and see how you would recover from them given the current processes and procedures. Each system's potential loss and how it is or isn't protected, the cost to recover (or not), and the potential revenue lost during the recovery time will drive the "we're doing too much", "we're doing just the right amount", or "we need to do more / faster / better" protection decisions.

Respectfully submitted,

David Soussan

 

-----

 

That is it! They've been problem-free ever since.

 

If you found this helpful (or not), please send me a brief email -- one line will more than do. If I see that people need, want, and / or use this kind of information, that will encourage me to keep creating this kind of content. If I never hear from anyone, then why bother?

I can be reached at:
das (at-sign) dascomputerconsultants (dot) com

Enjoy!
 

David Soussan

(C) 2009 DAS Computer Consultants, LTD.  All Rights Reserved.


Appendix: the bot's conversation while phoning home from my location inside the isolation chamber. It picked up my domain name and phoned that home, then started its massive attack. I did some line wrapping to keep the article looking better, but otherwise this is unmodified and unfiltered IRC chat between the bot and its mother ship. You can see the subnets and the command to go attack them towards the end:

PASS iamtheking•
NICK [00|USA|2K|SP4|66166•
USER jmxaa 0 0 :[00|USA|2K|SP4|66166•
:W1.myserv NOTICE AUTH :*** Found your hostname•
:W1.myserv NOTICE [00|USA|2K|SP4|66166 :*** If you are having problems connecting 
  due to ping timeouts, PING :4324DB62•
PONG :4324DB62•
:W1.myserv 001 [00|USA|2K|SP4|66166 :Welcome to the W1.myserv IRC Network 
  [00|USA|2K|SP4|66166!jmxaa@dascomputerconsultants.:W1.myserv 002 
  [00|USA|2K|SP4|66166 :Your host is W1.myserv, running version Unreal3.2.4•
:W1.myserv 003 [00|USA|2K|SP4|66166 :This server was created Sun Feb 5 18:58:21 2006•
:W1.myserv 004 [00|USA|2K|SP4|66166 W1.myserv Unreal3.2.4 
  iowghraAsORTVSxNCWqBzvdHtGp lvhopsmntikrRcaqOALQbSeIKVfMCuzNTGj•
:W1.myserv 005 [00|USA|2K|SP4|66166 SAFELIST HCN MAXCHANNELS=15 CHANLIMIT=#:15 
  MAXLIST=b:60,e:60,I:60 :W1.myserv 005 [00|USA|2K|SP4|66166 SILENCE=15 MODES=12 
  CHANTYPES=# PREFIX=(qaohv)~&@%+ CHANMODES=beI,:W1.myserv 251 [00|USA|2K|SP4|66166 
  :There are 59 users and 562 invisible on 1 servers•
JOIN ##mys3rv2## •
:W1.myserv 252 [00|USA|2K|SP4|66166 1 :operator(s) online•
:W1.myserv 253 [00|USA|2K|SP4|66166 2 :unknown connection(s)•
:W1.myserv 254 [00|USA|2K|SP4|66166 14 :channels formed•
:W1.myserv 255 [00|USA|2K|SP4|66166 :I have 621 clients and 0 servers•
:W1.myserv 265 [00|USA|2K|SP4|66166 :Current Local Users: 621 Max: 670•
:W1.myserv 266 [00|USA|2K|SP4|66166 :Current Global Users: 621 Max: 670•
:W1.myserv 422 [00|USA|2K|SP4|66166 :MOTD File is missing•
:[00|USA|2K|SP4|66166 MODE [00|USA|2K|SP4|66166 :+iw•
USERHOST [00|USA|2K|SP4|66166•
MODE [00|USA|2K|SP4|66166 •
JOIN ##mys3rv2## •
USERHOST [00|USA|2K|SP4|66166•
MODE [00|USA|2K|SP4|66166 •
JOIN ##mys3rv2## •
USERHOST [00|USA|2K|SP4|66166•
MODE [00|USA|2K|SP4|66166 •
JOIN ##mys3rv2## •
:[00|USA|2K|SP4|66166!jmxaa@dascomputerconsultants.com JOIN :##mys3rv2##•
:W1.myserv 332 [00|USA|2K|SP4|66166 ##mys3rv2## :.join #.,#..,#...,#....,#.....•
:W1.myserv 333 [00|USA|2K|SP4|66166 ##mys3rv2## W1Myserv 1172159610•
:W1.myserv 353 [00|USA|2K|SP4|66166 @ ##mys3rv2## :[00|USA|2K|SP4|66166 ~W1Myserv •
:W1.myserv 366 [00|USA|2K|SP4|66166 ##mys3rv2## :End of /NAMES list.•
JOIN #.,#..,#...,#....,#..... (null)•
:W1.myserv 302 [00|USA|2K|SP4|66166 :[00|USA|2K|SP4|66166=+jmxaa@dascomputerconsultants.com •
:W1.myserv 221 [00|USA|2K|SP4|66166 +iw•
:W1.myserv 302 [00|USA|2K|SP4|66166 :[00|USA|2K|SP4|66166=+jmxaa@dascomputerconsultants.com •
:W1.myserv 221 [00|USA|2K|SP4|66166 +iw•
:W1.myserv 302 [00|USA|2K|SP4|66166 :[00|USA|2K|SP4|66166=+jmxaa@dascomputerconsultants.com •
:W1.myserv 221 [00|USA|2K|SP4|66166 +iw•
:[00|USA|2K|SP4|66166!jmxaa@dascomputerconsultants.com JOIN :#.•
:W1.myserv 332 [00|USA|2K|SP4|66166 #. :.advscan sym 150 1 0 -a -r -s•
:W1.myserv 333 [00|USA|2K|SP4|66166 #. W1Myserv 1172159649•
:W1.myserv 353 [00|USA|2K|SP4|66166 @ #. :[00|USA|2K|SP4|66166 •
:W1.myserv 366 [00|USA|2K|SP4|66166 #. :End of /NAMES list.•
:[00|USA|2K|SP4|66166!jmxaa@dascomputerconsultants.com JOIN :#..•
:W1.myserv 332 [00|USA|2K|SP4|66166 #.. :.advscan sym 150 1 0 88.100.x.x -a -r -s•
:W1.myserv 333 [00|USA|2K|SP4|66166 #.. W1Myserv 1172159660•
:W1.myserv 353 [00|USA|2K|SP4|66166 @ #.. :[00|USA|2K|SP4|66166 •
:W1.myserv 366 [00|USA|2K|SP4|66166 #.. :End of /NAMES list.•
:[00|USA|2K|SP4|66166!jmxaa@dascomputerconsultants.com JOIN :#...•
:W1.myserv 332 [00|USA|2K|SP4|66166 #... :.advscan sym 150 1 0 24.x.x.x -a -r -s•
:W1.myserv 333 [00|USA|2K|SP4|66166 #... W1Myserv 1172159667•
:W1.myserv 353 [00|USA|2K|SP4|66166 @ #... :[00|USA|2K|SP4|66166 •
:W1.myserv 366 [00|USA|2K|SP4|66166 #... :End of /NAMES list.•
:[00|USA|2K|SP4|66166!jmxaa@dascomputerconsultants.com JOIN :#....•
:W1.myserv 332 [00|USA|2K|SP4|66166 #.... :.advscan sym 150 1 0 128.x.x.x -a -r -s•
:W1.myserv 333 [00|USA|2K|SP4|66166 #.... W1Myserv 1172159675•
:W1.myserv 353 [00|USA|2K|SP4|66166 @ #.... :[00|USA|2K|SP4|66166 •
:W1.myserv 366 [00|USA|2K|SP4|66166 #.... :End of /NAMES list.•
:[00|USA|2K|SP4|66166!jmxaa@dascomputerconsultants.com JOIN :#.....•
:W1.myserv 332 [00|USA|2K|SP4|66166 #..... :.advscan sym 150 1 0 75.x.x.x -a -r -s•
:W1.myserv 333 [00|USA|2K|SP4|66166 #..... W1Myserv 1172159682•
:W1.myserv 353 [00|USA|2K|SP4|66166 @ #..... :[00|USA|2K|SP4|66166 •
:W1.myserv 366 [00|USA|2K|SP4|66166 #..... :End of /NAMES list.•
PING :W1.myserv•
ERROR :Closing Link: [00|USA|2K|SP4|66166[dascomputerconsultants.com] (Ping timeout)•
 
 