I was going to title this blog, “The Money Is Gone.” You’ll see why as you read further down.
There are thousands of scary IT and security stories on the Internet. If you read them regularly, as I do, you begin to see a pattern: users fall for a password scam, hackers obtain legitimate user IDs and passwords, the organization suffers, and sometimes the event is large enough to become newsworthy. While I have suffered a variety of security cracks, incidents, and other user-error events throughout my career, none became newsworthy. That said, my scariest IT security day started with a phone call from the CFO, who immediately yelled, “The money is gone!”
Many hacking stories on the Internet cover the actual event, such as a breach of IDs and passwords, but they do not typically follow the event to the “so what,” the real damage. Losing your ID or password is no more eventful than losing your house keys these days, unless someone uses it to hurt you. If you lost your house key and an armed robber showed up the first night to steal your valuables and hurt your family, then the loss of the house key becomes a very big deal; it becomes a “so what.” The “so what” of losing your ID and password can be just as bad if thieves use that information to steal your paycheck, your expense check, or otherwise take money from your checking account. That is exactly what happened on the scariest day of my IT security career.
The precursor events to the scariest day started more than two months before anyone realized a hack had occurred. In the first 30 days, a couple of expense checks did not show up on payday, but employees did not think anything of it because expense checks sometimes require more than one pay cycle to post. By the second cycle, however, all of the money due to employees was gone. This was not a clerical delay or processing error; the checks had been posted, just not to the employees’ checking accounts. That is when I got the call and the message, “The money is gone.”
As my security team engaged, they quickly determined that checking account numbers had been changed in the payments processing system. This discovery raised many red flags and triggered a system lockdown as part of activating the security incident response plan. Once systems were locked, the next level of the investigation, which included log reviews, archive investigations, deep forensics using our internal resources, and other security incident actions, led us to discover unusual email activity and folders hidden inside the employees’ email accounts that the employees did not recognize. The third level of the investigation revealed email rules that moved alerts from the payments processing vendor into these mystery folders, hiding the fact that hackers were changing deposit information in the payment processor’s system.
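To illustrate that third-level finding, a mailbox audit can flag rules that quietly divert a vendor’s alerts out of sight. The sketch below is hypothetical Python, not the actual tooling from this incident; the rule structure and vendor domain are invented for the example:

```python
# Illustrative sketch: flag mailbox rules that divert vendor alerts.
# The rule format and vendor domain are assumptions, not the actual
# payment processor or mail platform from the incident.

VENDOR_DOMAINS = {"payments-vendor.example.com"}  # senders whose alerts matter


def suspicious_rules(rules):
    """Return rules that move mail from a monitored vendor out of the Inbox.

    Each rule is a dict like:
      {"owner": "user@corp.example", "from_domain": "...",
       "action": "move", "target_folder": "..."}
    """
    flagged = []
    for rule in rules:
        moves_mail = rule.get("action") == "move"
        from_vendor = rule.get("from_domain") in VENDOR_DOMAINS
        hides_it = rule.get("target_folder", "Inbox") != "Inbox"
        if moves_mail and from_vendor and hides_it:
            flagged.append(rule)
    return flagged


if __name__ == "__main__":
    rules = [
        {"owner": "alice@corp.example",
         "from_domain": "payments-vendor.example.com",
         "action": "move", "target_folder": "RSS Feeds/Old"},
        {"owner": "bob@corp.example",
         "from_domain": "newsletter.example.com",
         "action": "move", "target_folder": "Newsletters"},
    ]
    for rule in suspicious_rules(rules):
        print(f"ALERT: {rule['owner']} diverts vendor mail "
              f"to {rule['target_folder']}")
```

In a real environment, the same idea would run against rules exported from the mail platform’s admin API and alert the security team rather than print.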
As the investigation deepened, we activated a higher-level security incident response to bring in our external security partners. With our internal and external security teams investigating, we found that the employees had fallen prey to a password-harvesting email more than two months earlier, and that they were using the same ID and password for multiple work accounts, including the external payment processor. This unfortunate decision to ignore the most basic ID and password common sense, and the first rule of passwords we have all learned, cost employees thousands of dollars over a two-month period. All of it led to the CFO calling me to say, “The money is gone.”
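Detecting that kind of reuse after the fact is genuinely hard, because properly salted password hashes cannot be compared across services. Purely as a thought experiment, the sketch below assumes each service records a keyed fingerprint (HMAC) of a password at set time, used only for reuse auditing; every name here is invented, and storing any password-derived value carries real security trade-offs:

```python
# Thought-experiment sketch only: real systems store salted hashes, which
# cannot be compared across services. This assumes each service records a
# keyed HMAC fingerprint of the password when it is set, for audit only.
import hashlib
import hmac
from collections import defaultdict

AUDIT_KEY = b"rotate-me"  # hypothetical audit-only secret


def fingerprint(password: str) -> str:
    return hmac.new(AUDIT_KEY, password.encode(), hashlib.sha256).hexdigest()


def find_reuse(credentials):
    """credentials: iterable of (user, service, fingerprint) tuples.

    Returns {user: [services]} where one fingerprint spans multiple
    services. (Simplification: reports one reused fingerprint per user.)
    """
    seen = defaultdict(lambda: defaultdict(list))
    for user, service, fp in credentials:
        seen[user][fp].append(service)
    return {user: services
            for user, fps in seen.items()
            for fp, services in fps.items()
            if len(services) > 1}


creds = [
    ("alice", "corp-sso", fingerprint("Spring2024!")),
    ("alice", "payments-portal", fingerprint("Spring2024!")),  # reused!
    ("bob", "corp-sso", fingerprint("hunter2")),
]
print(find_reuse(creds))  # → {'alice': ['corp-sso', 'payments-portal']}
```

In practice, most organizations attack this problem from the other direction, with single sign-on, password managers, and checks against known-breached passwords, rather than comparing credentials across systems.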
Reflecting on this scariest day of my IT leadership career, I first feel sympathy for the employees who lost thousands of dollars. They were not ignorant, lazy, or silly. They were trusting people following rules and trying to make their lives just a little easier by not having to remember yet another infernal password. Beyond sympathy, though, I felt a sense of responsibility. No organizational rule or process caused the employees to make poor password choices; however, I did feel that proper training, testing, and password remediation might have helped those employees make better choices and prevented them from losing thousands of dollars.
That thought and internal questioning led me to ask how much security training is too much, a question that still does not have an answer today. As an experienced organization leader, I have discussed the “how much” question with many other leaders and heard of practices ranging from a daily phishing test with loss-of-job consequences to an annual slide show that checks the box for the auditors. I suspect the right answer lies somewhere between those extremes for most organizations.
Reflecting on the event during our postmortem, several questions emerged: How much training is too much? How much testing is too much? How do we protect people from themselves, as well as protect the organization from the people who work here? As we struggled with those questions, we also had to acknowledge that no matter how much you train, test, and protect, there will always be one person, one scam, or one missing software patch that causes you to step into the unknown and leads to another scary day. However, the unknown does not have to be the unplanned.
The Weakest Link
As a security professional, organization leader, or the person responsible for organization security outcomes, you know that end users are the weakest link. Unfortunately, they are also the biggest link, in that they represent the public contact surface where hackers are working every millisecond to penetrate your organization. Which leads to the question: if you have to have end users, how do you prepare for the next scary security event? During my time leading organizations and IT, I have collected and implemented a few guidelines to improve end user behavior and prepare for the next scary day. These are not hard and fast rules, and they cannot decrease your risk to zero. Nothing can.
However, they may just make your organization too annoying for hackers to keep trying, so they move on to the next organization. Each of the guidelines below creates a nudge or push toward better behaviors for end users and security teams:
1 – Train them, train them all.
Train your end users on how to set appropriate passwords, what a phishing email looks like, when to call the security team, and help them learn that they could lose thousands of dollars by ignoring the rules.
2 – Test them.
Test them on a regular and random basis, not as a way to check the box for an auditor, but as a way to help them learn about the ever-evolving hackers’ trick-or-treat bag, so that they can act based on knowledge, not fear.
3 – Reward them.
Reward good behavior by doing simple things like announcing the first person to report this week’s phishing test or the top ten scores on the quarterly security review as a way of encouraging everyone to pay attention to those events.
4 – Establish rules.
Build rules to support, identify, and redirect end users when they do not practice good behavior so that they learn and cooperate rather than feeling punished.
5 – Plan.
Build your plan for the next scary day when all of the preventions fail. Hopefully, you never use the plan, but it is orders of magnitude better to have one when the time comes. You do not want to build a boat while you are sinking.
6 – Expect the unexpected.
Leave a “what if we did not plan for this” part in your plan. No plan can cover 100% of the what-ifs. This is also where your plan should include the external security team that you already have under contract, so they can engage immediately.
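The regular-and-random cadence in guideline 2 can be sketched as a simple rotating sample of employees for each phishing simulation. This is illustrative Python only; the sample size, seed, and addresses are assumptions for the example, not recommendations from the incident:

```python
# Sketch: pick a random, rotating sample of employees for each phishing
# simulation so the tests stay unpredictable. Fraction and names are
# invented for illustration.
import random


def pick_test_group(employees, fraction=0.10, seed=None):
    """Select roughly `fraction` of employees (at least one) for this cycle.

    Passing a seed makes the draw reproducible for audit purposes;
    omit it in production so each cycle's sample is unpredictable.
    """
    rng = random.Random(seed)
    k = max(1, round(len(employees) * fraction))
    return rng.sample(employees, k)


staff = [f"user{i:03d}@corp.example" for i in range(250)]
this_week = pick_test_group(staff, fraction=0.10, seed=42)
print(len(this_week))  # 25 employees selected this cycle
```

Sampling a fraction each cycle, rather than blasting everyone at once, keeps individual tests unpredictable while still covering the whole organization over time.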
Scary days happen in security. They always have, and they always will. Prevention and preparation are the building blocks you can deploy to reduce the scary-day count and the magnitude of impacts to your organization. Your efforts as a security leader directly influence the outcomes of the next IT horror story, so that IDs or passwords harvested from a phishing attack do not lead to your CFO calling to say, “The money is gone!”
Cybersecurity Training Resources: