Vulnerabilities, patches & exploits, oh my
“This is the real deal. If your organization runs an OWA server exposed to the internet, assume compromise between 02/26 – 03/03. Check for 8-character aspx files in C:\inetpub\wwwroot\aspnet_client\system_web\. If you get a hit on that search, you’re now in incident response mode.”
— A tweet from Chris Krebs, former director of the Cybersecurity & Infrastructure Security Agency.
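Krebs’s check can be scripted. A minimal sketch in Python, assuming the default IIS path from the tweet and treating any .aspx file with an eight-character base name as suspect:

```python
from pathlib import Path

# Path from the tweet; adjust for your IIS install (assumes the default C: drive)
WEBSHELL_DIR = Path(r"C:\inetpub\wwwroot\aspnet_client\system_web")

def find_suspect_aspx(root: Path):
    """Return .aspx files whose base name is exactly 8 characters long."""
    if not root.is_dir():
        return []
    return [p for p in root.glob("*.aspx") if len(p.stem) == 8]

if __name__ == "__main__":
    hits = find_suspect_aspx(WEBSHELL_DIR)
    for hit in hits:
        print(f"SUSPECT: {hit}")  # any hit means incident-response mode
    if not hits:
        print(f"No 8-character .aspx files found in {WEBSHELL_DIR}")
```

A hit is only a starting point, not proof of compromise by itself; confirm against the published indicators before declaring an incident.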
Here we are, hit with another hacking event, hot on the heels of the SolarWinds hack — just when many security thought leaders said it could not get any worse. I’m writing this column just days after the Microsoft Exchange vulnerabilities were disclosed, and I’m sure that by the time this is published, things will be much, much worse.
On March 5, it was reported that at least 30,000 U.S. organizations had been hacked through holes in Microsoft’s email software by a Chinese cyberespionage unit focused on stealing email. Cybersecurity experts who briefed national security advisers on the attack said this hacking group had seized control of “hundreds of thousands” of Microsoft Exchange servers worldwide.
The alerts started coming in fast and furious: patch your servers, or shut down internet access to your Exchange servers. But then we started hearing about failed patches or patches that weren’t effective. Microsoft released several cumulative updates to address issues with the initial patches, along with guidance on installing them.
Complicated? Shutting down email was not an option for most organizations, so there were a lot of temporary fixes or stopgaps just to lower the risk and get a handle on the patching process. We also heard it termed a zero-day exploit, but many were seeing intrusions as far back as Jan. 6, which was, coincidentally, the day of the U.S. Capitol riots. Maybe not a coincidence.
Many of our IT teams were very quick to share information in our collaboration portal and many were already in triage mode. This really speaks to their willingness to share information among their peers and to help each other out. They were competitors in the business world but allies on the cyber front.
After significant events like SolarWinds and Exchange, there is always a “what can we do better?” moment for everyone. Unfortunately, they’ve been happening so often we can’t seem to get out of response mode long enough to have a lessons-learned moment.
Historically, after these events, we point fingers at our patch management processes, or lack thereof. Think about this, though: we learned from the SolarWinds event that not patching the systems when recommended actually saved some organizations from the hack.
If you’re making patching decisions in your organization, it’s not just a simple “patch ’em when you get ’em” decision. Many will argue the point, but I think the hackers know our teams are busy with other business functions, and they want to catch us when our guard is down. This is especially true in financial services. These are definitely the most chaotic times I can remember.
But what can we do to learn from events like these? Earlier detection maybe? Better threat intel? Automation? More resources, more budget, more tools? I think you can make an argument for every one of these.
Information technology and cybersecurity professionals must proactively look for indicators of a pending, active or successful cyberattack. Signs can be developed through analysis and correlation of threat characteristics observed across the cyberattack lifecycle over time.
Then, once we have that part mastered, we can start building automation. No organization can prevent all security breaches, so we need to implement strategies that focus as much on detection and response as prevention.
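The automation step can start small. A toy sketch, assuming indicators of compromise arrive as a set of SHA-256 file hashes from a threat-intel feed (the hash below is the well-known empty-file hash, used here only as a placeholder):

```python
import hashlib
from pathlib import Path

# Hypothetical indicator set -- in practice, pulled from a threat-intel feed
# or a vendor/CISA advisory. This entry is the SHA-256 of an empty file.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def scan_for_iocs(root: Path):
    """Yield files under root whose SHA-256 matches a known indicator."""
    for p in root.rglob("*"):
        if p.is_file() and sha256_of(p) in KNOWN_BAD_SHA256:
            yield p
```

A scan like this only catches what the feed already knows about; the point is that once indicators are curated, matching them across your estate is the easy, automatable part, freeing analysts to focus on detection and response.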