I know what you’re thinking already. This guy is funny. Look at that title! HA!

Okay- maybe not, but bad jokes are sometimes my thing. Let’s move on…

So a few weeks ago a customer of mine received what I would consider a relatively obvious phishing attempt.

Subject line: “Outlook Upgrade”

The body: “We are Upgrading and Expanding all Outlook Mailbox immediately. You are to send your Username and Password to our employee helpdesk at employeees@outlook.com for immediate Validation. You will not be able to send or receive emails if you fail to do this. This message is from System Administrator.”

I also want to point out that the message came from a domain different from the customer's, and we had provided security awareness training that covered this sort of thing after some ransomware attacks last year! Training end users is a whole topic of its own, as is ransomware. What I want to talk about is the war I had to fight after a bunch of users responded to the aforementioned phishing attack.

We emailed the roughly 3,000 users to let them know this phishing attempt may have reached a bunch of them and that they should just ignore/delete it. Much to our surprise, a good number of people responded telling us it was too late; they had already sent their credentials off. We immediately changed those passwords while grumbling under our breath. My gut told me (knowing my user base) that there would be a number of people who wouldn't report that they responded, and that could be bad.

For 2 days there was nothing out of the norm. On the third day… it started. An email went around to everyone in the business. This time…

Subject line: Message From Administrator

The body:

“Dear User

There’ve been an automatic security update on your Email Account. Click here to login and complete update

Please note that you have withing 24 hours to complete this update. because you might lose access to your Email”

This time, however, it came from an internal email address. We quickly changed the password on this account and started to prepare for more to roll in over the next few days. We notified the users to be vigilant and report all suspicious email activity while we sorted things out. Things, however, went from bad to worse fast.

On the 5th day, the president of the customer's business reported 9 different bounce-backs from emails he was urgently trying to send. The reasons behind the bounces varied in wording, but all essentially pointed to the same thing.

“554 Your access to this mail system has been rejected due to the sending MTA’s poor reputation.”

“Delivery not authorized, connection refused, code=GL42”

“[TSS04] Messages from xx.xxx.xxx.xxx temporarily deferred due to user complaints”

…. and so on….

I googled some of these codes, and they all essentially said my domain was not quite blacklisted, but was on track to be blacklisted if we didn't clean up our act. The first place I checked was my customer's Sophos UTM, where their email security is hosted. I was looking through the outbound SMTP logs to see who the culprit was.

[Image: outbound SMTP traffic from a single user account over a 4-day period]

This was the outbound traffic from just one of our user accounts over a 4-day period. My thought was that we would have to monitor the Sophos pretty closely for the next few days and hopefully catch all the compromised accounts that way. In the back of my mind, knowing roughly 3,000 users potentially received the phishing email that originally started this mess, I had a feeling my current approach would only be putting out fires. We had no real count of how many accounts were compromised. I started mentioning the idea of a domain-wide password reset to the customer. The customer asked that, instead, we fight the fires for a few days and see how things went.
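If you would rather script this kind of triage than eyeball the UTM's log viewer, something like the rough sketch below can help. It assumes you have exported the SMTP log to a text file and that the lines contain a `from="user@domain"` field; the exact format varies by mail gateway and version, so treat the file name and the regex as placeholders.

```python
import re
from collections import Counter

# Hypothetical exported SMTP log; adjust the path and the regex
# to match whatever your gateway/MTA actually writes.
LOG_FILE = "smtp_outbound.log"
SENDER_PATTERN = re.compile(r'from="(?P<sender>[^"]+)"')

def top_senders(path, limit=20):
    """Count outbound messages per sender and return the busiest ones."""
    counts = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = SENDER_PATTERN.search(line)
            if match:
                counts[match.group("sender").lower()] += 1
    return counts.most_common(limit)

if __name__ == "__main__":
    for sender, count in top_senders(LOG_FILE):
        print(f"{count:6d}  {sender}")
```

A compromised account pushing out thousands of messages a day stands out immediately next to normal users sending a few dozen.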

Much to both the customers and my own disliking, things continued to get worse. It seemed like the harder we fought, the more power this spam attack pushed back at us. Blacklisting the client IP’s initiating these messages didn’t help. These people were most likely using proxies in other countries. One happened to be in Tanzania when I did a reverse lookup. The IP’s seemed to change every few hours anyway. Shutting down the accounts that were compromised didn’t help because a different one would just pop up the next morning. Things got so bad that aol/aim, yahoo, gmail, and other various domains started blocking our emails thinking we were a known spammer.
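For what it's worth, the reverse lookups were nothing fancy. A quick check like the sketch below (the IP is a placeholder) tells you whether the sending host even has a PTR record, and the hostname often hints at the hosting provider or country.

```python
import socket

# Placeholder address; substitute one of the client IPs from the gateway log.
suspect_ip = "203.0.113.45"

try:
    hostname, _aliases, _addrs = socket.gethostbyaddr(suspect_ip)
    print(f"{suspect_ip} resolves back to {hostname}")
except socket.herror:
    print(f"{suspect_ip} has no reverse DNS entry")
```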

Here is a chart showing how crazy things got over a month's time. Looking at this chart, can you guess what day we received the first phishing attack I mentioned?

[Chart: email volume over the month]

If you are thinking to yourself "probably around 3.26 or 3.27," you are dead on. Most users received the “Outlook Upgrade” phishing email on one of those two days.

Through all of this we never got fully blacklisted, which blew my mind. I would typically check our domain's blacklist status at https://mxtoolbox.com/blacklists.aspx. If you are not familiar with MXToolbox, I highly recommend checking it out. It has come in handy a number of times throughout my career. In addition to checking a domain's status against a number of public blacklists, MXToolbox also provides a few other useful tools (as the name implies).
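If you'd rather script the check than click through the site, the general idea behind most blacklist lookups is just a DNS query against a DNSBL zone: reverse the octets of your mail gateway's public IP and look it up under the blacklist's domain. The sketch below uses Spamhaus ZEN as an example and a placeholder IP; a successful resolution means the address is listed. Note that some DNSBLs rate-limit or refuse queries that come through large public resolvers.

```python
import socket

def is_listed(ip, dnsbl="zen.spamhaus.org"):
    """Return True if the IP appears on the given DNS blacklist."""
    reversed_octets = ".".join(reversed(ip.split(".")))
    query = f"{reversed_octets}.{dnsbl}"
    try:
        socket.gethostbyname(query)  # resolves (to 127.0.0.x) only if listed
        return True
    except socket.gaierror:
        return False

# Placeholder: substitute your mail gateway's public IP.
print(is_listed("203.0.113.45"))
```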

There was a breaking point (literally and figuratively). The literal break was when the UTM finally became so flooded that it essentially went into a DoS (denial-of-service)-like state, not allowing any inbound or outbound email to pass. We ended up having to use the "nuke" option on the built-up/spooled mail and reboot the Sophos UTM. A few users lost legitimate mail that was spooled in with the thousands and thousands of outbound spam messages waiting to go.

The figurative break was the customer finally giving in to our request to change all passwords domain-wide, and looking at the chart above, you may be able to guess when that happened too. If you are thinking probably around 4.13 or 4.14, you are close. We actually forced the password change domain-wide in Active Directory on 4.12. However, there were so many spooled messages built up from the constant outbound spam that it took two more days of manual cleanup before things were truly done. Even after we used the "nuke" option (delete all), tons more spam piled back into the outbound SMTP spool until every message submitted before the password reset had worked its way through and I could manually delete it. You can also see on the chart that things have basically returned to normal (thankfully!!!).
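For anyone curious what a domain-wide forced reset can look like: we handled it with the usual Active Directory tooling, but a rough, hypothetical sketch of the same idea using Python's ldap3 library is below. Setting pwdLastSet to 0 flags an account as "must change password at next logon." The server address, credentials, and search base are all placeholders, and you would obviously want to exclude service accounts and test on a small OU before doing anything like this for real.

```python
from ldap3 import Server, Connection, MODIFY_REPLACE, NTLM

# All of these connection details are placeholders for illustration.
server = Server("dc01.example.local", use_ssl=True)
conn = Connection(server, user="EXAMPLE\\admin", password="********",
                  authentication=NTLM, auto_bind=True)

base_dn = "OU=Staff,DC=example,DC=local"
conn.search(base_dn,
            "(&(objectCategory=person)(objectClass=user))",
            attributes=["distinguishedName"])

for entry in conn.entries:
    dn = entry.entry_dn
    # pwdLastSet = 0 forces "user must change password at next logon".
    conn.modify(dn, {"pwdLastSet": [(MODIFY_REPLACE, [0])]})

conn.unbind()
```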

In response to the root cause of this, I was asked to "turn up" the sensitivity of the inbound anti-spam settings. I warned that this can sometimes cause more legitimate mail to be blocked, but if they decided that was a better trade-off than worrying as much about phishing attacks, then I would implement it. We could always whitelist blocked senders or domains on an as-needed basis.

There was some form of security awareness training that took place about a year ago; I was just moving into this role as the last IT provider was moving out of it around that time. I have recommended that users be required to complete an updated version of the training annually, since humans will always be the weakest part of a network.

Lastly, to find some balance between too much being blocked or too little being blocked, we are rolling out a “quarantine” solution. This will allow users to see mail that was intended for them but stopped at the UTM. This will come in the form of a daily “quarantine report” emailed to them. They will then be able to “release” mail they know was a false-positive, or ask the IT department if they think something seems suspicious.

Have you ever had a similar experience? How did you deal with it?

-N0ur5
