Saturday, April 24, 2010

McAfee's 5958 DAT Fiasco


This week I got caught up in McAfee’s 5958 DAT mess back at the Salt Mines. Not only am I the local Network Nazi, but I also manage McAfee’s crappy AV for the entire enterprise. Luckily that day (Wednesday) I was telecommuting, so I was not in the thick of things.

I was “in the cloud”, as it were.

I was also wise enough never to have installed Service Pack 3 on my Salt Mine PC, so I was one of the lucky ones. For a variety of reasons, I never trusted it. I was almost ready to apply it once IE 7.0 came out, but then I heard there was no roll-back to IE 6 on machines with SP3, so I passed. I have it on all the XP machines here on DinkNet, but I use different AV on those boxes.

And that was an odd thing...

I have Microsoft Security Essentials (MSE) on my main box, and that fateful morning it died. Very mysteriously. The little green system tray icon was just plain gone, and when I went to restart the service from Control Panel > Services, the system told me it could not be found.
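
(For the curious: by "restart it" I mean poking the Windows Service Control Manager. Here's a rough sketch, in Python, of the kind of check I did by hand. "MsMpSvc" is the MSE service name on my machines; if yours is named something else, adjust accordingly.)

    # Rough sketch: ask the Service Control Manager about the MSE service.
    # "MsMpSvc" is the service name on my boxes -- adjust if yours differs.
    import subprocess

    def service_state(name):
        """Return 'missing', 'running', or 'stopped' for a Windows service."""
        result = subprocess.run(["sc", "query", name],
                                capture_output=True, text=True)
        if result.returncode != 0:
            # sc exits non-zero when the service doesn't exist at all
            return "missing"
        return "running" if "RUNNING" in result.stdout else "stopped"

    print("MSE service:", service_state("MsMpSvc"))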

This was before the news came out that the whole thing was due to a turd dropped on the world by McAfee, so I was quietly sweating bullets. Had some bug followed me home? Or crawled through my other covert tunnel, OpenVPN? I switched boxes while I re-installed MSE on that system. Then I rebooted it and performed a full scan. Nothing.

And “nothing” doesn’t mean shit these days, with fast-mutating bugz like Zeus floating around the Interwebs. The virus definitions you get today are for crap that has been around for months.

While all this is going on I get a call from my sprog, Inky Dink, and it turns out he’s having AV problems too! And I know damn well he doesn’t run McAfee because I personally installed MSE on his system!

What the motherfucking fuck was going on here?

But it turned out Inky had been victimized by one of those scareware AV programs. I pointed him to malwarebytes.org and he took care of it himself later that evening.

Again, all this time we, the corporate IT proles, had no idea it was a McAfee problem. What was I to think? AV software was dying everywhere as far as I could tell from my small corner of the Universe. Was it cyberwar? Was this the “Digital Pearl Harbor” the trade press has been crying about for the last four months? Was Google’s January hack the warning shot?

No. It was ludicrous. It had to be a series of coincidences, so I kept my mouth shut during the Salt Mine phone conference.

Other people were not so cautious. They started spreading all sorts of FUD. All it takes is one jerk reading one unsubstantiated claim on one Internet forum; the next thing you know he’s e-mailing everyone and his brother, and the whole shop is in full chickens-with-their-heads-cut-off mode.

Luckily, even though that particular jerk (our very own local security wannabe) made a complete idiot of himself that day, cooler heads prevailed. The only thing he damaged was his own credibility.

By about 10:30AM that morning the news finally came out and we went into Full Damage Control Mode. When the dust cleared, about 25% of our systems were down.

McAfee later stated it only affected one half of one percent of their customers. Do tell. Maybe they based that number on the phone calls they got that day (“All lines are busy, please hold!”). Maybe they thought it was just rubberneckers that took their site offline.

And WTF happened?

This event was curious in that the update that caused this mess arrived early that day. Normally, and I admit I haven’t checked in some time, we get that update between 11:30AM and 2:30PM EST. The timestamp on the files said they came in at 4:37AM. Why? Did their QA department in Bangalore (or Shanghai or whatever) take off early that day? What was the Big Rush?
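

(How did I know it was 4:37AM? I just looked at the file times on the DATs. Something like this little Python snippet does the same job; the path below is only an example, so point it at wherever your McAfee install or repository actually keeps its DAT files.)

    # Rough sketch: print the modification times on the DAT files so you can
    # see when an update actually landed. The directory is an example only --
    # point it at your own McAfee engine/repository folder.
    from datetime import datetime
    from pathlib import Path

    dat_dir = Path(r"C:\Program Files\Common Files\McAfee\Engine")  # example path

    for dat in sorted(dat_dir.glob("*.dat")):
        stamp = datetime.fromtimestamp(dat.stat().st_mtime)
        print(f"{dat.name}: {stamp:%Y-%m-%d %I:%M %p}")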

If McAfee’s Legal Department gets its way – and there is no doubt in my mind it will – we may never know what happened.
