Now that audit season is finally over (over 65% of our assessments and audits happen in Q4), we have a chance to grab a cup of coffee and look back at a couple of trends in 2011 that we think separate the best security teams from the worst.
First, we need to discuss how we measure the quality of a security team. At Savid, it is pretty simple. Since we perform ethical hacking to assess security programs at organizations, if we got access to something we shouldn’t have, it counts as an intrusion in our books.
Most reviews of security controls look at what went wrong, because it’s harder to learn from the successes. So let’s get the major failures of 2011 out of the way and then talk about what our best clients did to prevent us from breaking in. Overall, most of the security programs we assessed had application security issues. In fact, 2011 was the worst year we have ever seen in terms of the depth and breadth of application security issues – even though the majority of the security programs we tested were in compliance with regulations such as HIPAA, PCI, and GLBA.
Ok, so with that out of the way, what did the best security teams do to prevent our ethical hackers from breaking in? One Thing: Defense In Depth. 2011 was the first year where we saw significant advancements in defense in depth deployments among our clients. For example, we saw a noticeable increase in proper system hardening (using standards such as CIS and NIST) and reduction of excessive permissions that stopped our attacks cold.
Properly deploying defense in depth can be the difference between a data breach requiring notification and a simple documented incident. For some organizations, the difference between the two could be millions of dollars. Oh, and it also has a side effect of rendering most malware non-functional by preventing it from creating temporary files, accessing DLLs, etc. Remember, an attacker can’t exfiltrate data if the exfiltration tools won’t run!
So, how did defense in depth stop our hacking? Most of the time we were able to get entry into a server or application, but because of defense in depth we weren’t able to leverage that entry for any gain (such as privilege escalation, intellectual property, or personally identifiable information). For example, when we got access to an application via SQL injection, we weren’t able to execute any commands on the server because the SQL server was hardened to prevent usage of xp_cmdshell and the SQL service account had no local permissions on the box to do anything other than access the database files and folders. Another example is when we got access to a Linux system running a custom PHP login system via an upload vulnerability and a PHP shell script. The hardening of Apache and the file system prevented our low-privileged web server service account from reading local files, creating files, etc. Essentially, the account we got control of was useless, and the attack vector wasted our time and effort.
Wasting an attacker’s time and effort is exactly what you as the defender want to do. Every minute an attacker is stalled or delayed is more time for your detective controls, such as IDS/IPS, logging, or even Tripwire-like defenses, to detect an attack. We recommend that every security program have a simple theme: If You Cannot Prevent It, Detect It. Leveraging defense in depth provides additional detection points along the attack path. Every time a low-privileged user attempts to access the Accounting share – detect it. Every time a server in your DMZ attempts to connect to a server in the internal network (which should be blocked by the firewall), detect it and respond to it. These are all indicators that the server is doing something it shouldn’t.
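The DMZ example above is easy to turn into a concrete detection rule. Here is a minimal sketch in Python: the subnets and log format are made up for illustration, and a real deployment would feed this from your firewall or SIEM rather than a hardcoded list.

```python
import ipaddress

# Hypothetical subnets -- substitute your own DMZ and internal ranges.
DMZ_NET = ipaddress.ip_network("203.0.113.0/24")
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def is_suspicious(src: str, dst: str) -> bool:
    """Flag any DMZ host attempting to reach the internal network."""
    return (ipaddress.ip_address(src) in DMZ_NET
            and ipaddress.ip_address(dst) in INTERNAL_NET)

# Illustrative (src, dst) pairs as they might come from a firewall deny log.
events = [
    ("203.0.113.10", "10.1.2.3"),      # DMZ web server probing the internal net
    ("198.51.100.7", "203.0.113.10"),  # normal inbound traffic to the DMZ
]
alerts = [e for e in events if is_suspicious(*e)]
for src, dst in alerts:
    print(f"ALERT: DMZ host {src} attempted a connection to internal host {dst}")
```

The point is that the rule is dead simple once defense in depth gives you a boundary that legitimate traffic never crosses: anything matching it is worth waking someone up for.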
Our number one recommendation when deploying defense in depth with proper detection controls is the use of fake records – commonly called “honeytokens”. For example, if you have a public web application that has access to an internal database server through a firewall, place a fake record in the database using a randomly generated 30-64 character value. This record has no business value and should never be accessed through normal web application use. If your firewall, web filter, or DLP system ever sees this value move across the network – something went wrong and you need to find out why.
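Generating and watching for such a token takes only a few lines. This is a minimal sketch; the record schema is invented for illustration, and in practice the string match would live in your DLP or web filter rules rather than in application code.

```python
import secrets
import string

def make_honeytoken(length: int = 48) -> str:
    """Generate a random value in the 30-64 character range suggested above."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Plant the token as a fake record (illustrative schema -- adjust to your own).
honeytoken = make_honeytoken()
fake_record = {"name": "John Nobody", "account_number": honeytoken}

def seen_in_traffic(payload: str, token: str) -> bool:
    """The match a DLP system or web filter would perform on outbound traffic."""
    return token in payload
```

Because the value is long and random, a match is effectively never a false positive: nobody types a 48-character random string by accident.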
Every year Verizon releases their Data Breach Investigations Report, and year after year they mention the same problem: the time between a breach occurring and detection of the breach is too long – sometimes it takes years! So this year, add some more defense in depth controls to your security program and watch how quickly they help reduce the impact of a vulnerability.
In case you didn’t already know, October is National Cyber Security Awareness Month. Since its inception in 2004 under the National Cyber Security Division, NCSAM has encouraged cybersecurity vigilance, education, and awareness for U.S. citizens and businesses.
This year, the White House issued a press release on October 1st with President Obama’s proclamation of CSAM. The release discusses how our nation’s growing dependence on cyber and information-related technologies, coupled with an increasing threat of malicious cyber attacks and loss of privacy, has given rise to the need for greater security of our digital networks and infrastructures. Therefore, during CSAM, we must “rededicate ourselves to promoting cyber security initiatives that ensure the confidentiality of sensitive information.”
Obama also reiterated how his administration is committed to treating our digital infrastructure as a strategic national asset and protecting this infrastructure is a national security priority.
The President followed up this proclamation in his weekly web address: “The lesson is clear. This cyber threat is one of the most serious economic and national security challenges we face as a nation,” he said, citing how millions of Americans are victimized by identity theft and cybercriminals cost U.S. companies billions of dollars.
Obama proposed a joint effort by the government and private sector to ensure cybersecurity but also reminded us of individual responsibility.
It’s no wonder the president is so gung ho about cybersecurity since his own campaign servers fell victim to hackers when he was running for office.
Other than reaffirming his stance on the importance of cybersecurity and providing some obvious, simple tips, the address did not contain much in the way of specific plans of action to enhance it. Still, it was the most the president has had to say about the topic since his 16-minute speech in May, when he declared he would create a new cyber security office at the White House.
This office still has no appointed coordinator. The cyber czar would coordinate with disconnected agencies that cannot pool their resources on this issue, including the CIA, the FBI, the NSA, and the Department of Defense. Maybe NCSAM is a good excuse to finally choose that cyber czar we have been hearing about for so long.
I just released a report for Dark Reading on how to build a multi-enterprise vulnerability management program. If you are dealing with outsourced vendors, or an outsourced supply chain, you should definitely give the article a read.
To summarize the article:
I offer many more details and tips within the article but step #1 is so critical that an entire article should be dedicated to just that!
According to the Wall Street Journal:
A 24-year-old living with his mother in France was arrested for ‘hacking’ into Obama’s Twitter account in April 2009. Apparently he guessed the answers to password-recovery questions in order to break into the accounts of famous people; he has no computer science training or financial motive. He posted screenshots to a few online forums, and Twitter found out within a few hours, either from a tip or from noticing that someone from France had logged onto Twitter as the President of the United States. (He did not actually tweet as POTUS; he just wanted to show he could break into the account.)
Now, this is news in and of itself, but the interesting part is that the following academic paper, released about three weeks ago, showed how easy this hack really is to pull off. In the paper, Joseph Bonneau of the University of Cambridge and two colleagues from the University of Edinburgh show how attackers stand a 1 in 80 chance of guessing common security questions, such as someone’s mother’s maiden name or their first school, within three attempts.
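The 1 in 80 figure falls out of simple arithmetic on answer frequencies: an attacker who always guesses the most common answers succeeds with probability equal to the combined frequency of the top three. A toy sketch, with made-up numbers (not taken from the paper):

```python
# Hypothetical answer frequencies for a question like "mother's maiden name" --
# illustrative values only, not from Bonneau et al.
freq = {"Smith": 0.006, "Johnson": 0.004, "Williams": 0.003, "Brown": 0.0025}

# An optimal attacker guesses the most common answers first; success within
# three attempts is the combined frequency of the top three answers.
top3 = sorted(freq.values(), reverse=True)[:3]
p_success = sum(top3)
print(f"Chance of success within three guesses: about 1 in {1 / p_success:.0f}")
```

With real-world name distributions, which are heavily skewed toward a handful of common surnames, that sum lands in the neighborhood the paper reports.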
According to the blog post announcing the paper’s release, Joseph Bonneau states:
There’s finally been a surge of academic research into the area in the last five years. It’s been shown, for example, that these questions are easy to look up online, often found in public records, and easy for friends and acquaintances to guess.
This is probably what happened to President Obama’s account. It would be interesting to know what the answer to Obama’s secret question was, but it is very difficult to find the screenshots referenced in the WSJ article. The academic paper continues:
It turns out the majority of personal knowledge questions ask for proper names of people, pets, and places, and the rest are trivially insecure (eg “What is my favourite day of the week?”).
Which is why your system should never ask for things like that. Companies are starting to try to solve this problem. At RSA there was a new company, RavenWhite, which seemed to have a unique new approach, which you can learn about at http://www.ravenwhite.com/iforgotmypassword.html
People really need to rethink the way they implement security for the end user. No automated technology could have prevented Obama’s account from being attacked, because the attacker was using the system in exactly the intended way. It is what the user did afterward that differentiated the attacker from an actual Twitter user.
Christian Moldes of Verizon Business has a great post about plane crashes and security breaches and how similar they are. He hits it right on the head! During our engagement wrap-up meetings, where we explain the various potential scenarios an attacker can use to break into a client’s network, we are always asked to put a specific ranking on a specific risk. I argue that that almost doesn’t matter, because the big breaches normally come not from a single vulnerability but from many chained together.
Christian quotes Malcolm Gladwell:
The typical [plane] accident involves seven consecutive human errors.
When we work with clients, we normally see that breaches are caused by a chain of at least three errors: a vulnerability is exploited, then a misconfiguration is used to find a privileged account’s user name and password, and then data is found somewhere on the network it wasn’t supposed to be, which the privileged account has access to.
Even with many controls in place, you cannot always prevent a security breach. This is exactly why we recommend that incident response policies and processes (which should be tested just like your disaster recovery processes!) be the FIRST THING you implement when building a security program at an organization, followed by detective controls such as logging to detect a breach as soon as possible.
It’s a fact that every company, no matter how big or small, deals with security issues. And each company accumulates its own vault of secret knowledge and best practices on how to protect its information. However, it is this fragmentation of knowledge and experience that gives attackers their biggest advantage.
Most major data loss events come as surprises to the organization, which signifies a lack of knowledge and awareness at that entity. But most likely the breach was also experienced by another company that, with hindsight, now knows how it could have been prevented. From a security perspective, it makes sense for companies to share this kind of information. But from a business perspective, there are obvious alarms:
Why give something away for free? Businesses may spend a huge chunk of their budgets building security defenses. Their current security practices may have been forged from a history of breaches, recoveries, and improvements. Why share it with a new company that has yet to earn the discipline for themselves? It is simply unfair to give this information away for free.
Why help competitors? The security disciplines earned by one business would be most applicable to other businesses with similar enterprise architectures – most likely competitors. It’s a dog eat dog world, and business entities by nature have no incentive to be kind to competitors.
Why voluntarily damage your reputation? In order to save other businesses from the same breach, a company would have to divulge the sordid details of their breach, including data loss and monetary loss. Why would any company want to advertise this embarrassment to their competitors, their customers, and the rest of the world?
The only solution is for businesses to let go of this “every man for himself” approach to security and instead adopt an “all for one and one for all” stance. An organized security knowledge sharing system must be supported to prevent unnecessary breaches and redundant, wasteful security spending. The question is, how can such a system be organized so that every business, no matter the size or the security budget, has an incentive to join?
Since Albert Gonzalez and his coconspirators committed the greatest data breach in history by swiping tens of millions of credit and debit card numbers, people are asking if there is enough encryption in the credit card process.
In January, it was discovered that a data breach left tens of millions of credit card numbers exposed to a gang of hackers over a two-year period. Heartland Payment Systems, the victim of the attack, has so far paid $32 million in forensic investigations, legal fees, and other charges related to the breach. Now Heartland CEO and chairman, Robert Carr, is calling for more encryption as a standard.
Credit card encryption is currently enforced by PCI DSS. But this industry guideline does not cover transactions end to end. At points during the process, credit card data is left exposed, including in transit between retailers, payment processors, and card issuers. Carr wants the industry to adopt an end-to-end encryption standard under which the credit card number is at no point accessible in a usable format in merchant or processor systems.
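Carr’s proposal is about keeping the card number (PAN) unreadable from terminal to processor. A closely related technique, tokenization, is easier to sketch in a few lines: the merchant stores only a keyed token, never the PAN. This is a minimal illustration using the standard-library `hmac` module, with a made-up key – not Heartland’s actual design, and in practice the key would live in an HSM on the processor side.

```python
import hashlib
import hmac

# Hypothetical processor-held secret -- in practice kept in an HSM and never
# present on merchant systems.
PROCESSOR_KEY = b"processor-secret-key"

def tokenize(pan: str) -> str:
    """Replace the card number (PAN) with a keyed token the merchant can store
    and match on, but cannot reverse without the processor's key."""
    return hmac.new(PROCESSOR_KEY, pan.encode(), hashlib.sha256).hexdigest()

token = tokenize("4111111111111111")         # a standard test card number
assert token != "4111111111111111"           # no raw PAN on merchant systems
assert token == tokenize("4111111111111111") # deterministic, so lookups still work
```

The design trade-off: a deterministic token lets the merchant match repeat customers without ever holding the PAN, but a breach of the merchant then leaks only tokens that are useless for making purchases.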
Carr told the Senate Homeland Security and Governmental Affairs Committee about his plan in order to save customers and other companies from the crisis endured by Heartland: “I believe it is critical to implement new technology, not just at Heartland, but industrywide.”
Heartland has learned from the breach and is now deploying tamper-resistant point-of-sale terminals at its member retailers. Carr wants all companies to be forced to take these measures.
But what do you think? Should we rethink PCI to add more requirements for credit card processing? Or will more security compliance only bog down an ineffective system? Perhaps Carr wishes to even the playing field by forcing competitors to shoulder the time and expense of employing such encryption.
I believe what makes Heartland’s breach so unusual is that the cybercriminals did not immediately use the stolen credit card information. Usually, these breaches are discovered only when fraudulent activity is detected, but Gonzalez and his crew were sitting on tens of millions of credit card numbers.
Bruce Schneier is discussing a great post at the Boston Review about the new age of cyber-warfare and how greatly exaggerated it is. I couldn’t agree more. Granted, the US government has a cyber-warfare problem; all governments do. However, the bigger and more immediate problem today is cyber-crime. I spoke at the Federal Reserve last week on this exact topic.
Small businesses are now being targeted because they have more money in their accounts and it is easier to transfer larger sums of money out of their accounts without fraud detection going off at banks.
A quote from the review sums it all up:
So why is there so much concern about “cyber-terrorism”? Answering a question with a question: who frames the debate? Much of the data are gathered by ultra-secretive government agencies—which need to justify their own existence—and cyber-security companies—which derive commercial benefits from popular anxiety. Journalists do not help. Gloomy scenarios and speculations about cyber-Armaggedon draw attention, even if they are relatively short on facts.
I try very hard not to do what they describe when I speak, but it can be difficult, especially with audiences that are not familiar with the problem. Cyber-crime is a death-by-a-thousand-cuts type of problem: $3,000 here, $5,000 there, but it all adds up pretty quickly. Cyber-warfare is much bigger and easier to point at than these small fraud issues.
If you have 10 minutes of time, read the Boston Review article and give me some feedback. Are we in a situation where we as citizens have to be concerned about cyber-warfare like we were concerned about nukes in years past?
The folks over at Securosis have the right idea. Their “Project Quant” intends to provide a framework for evaluating the costs of patch management, while providing information to help optimize the associated processes. And from the looks of their recent data survey, corporations are more in need of patch management maturity than we thought.
The results are scary indeed for security-minded individuals concerned with systematic vulnerability management. Out of 100 companies surveyed about their patch management processes:
• 70% don’t currently measure how well, or how efficiently, they roll out their software patch updates.
• Most companies were driven by compliance regulation, and usually more than one regulation applied.
• Process maturity was generally high for operating systems but low for other asset types such as applications and drivers (see chart).
• Companies tend to use multiple vendor and third-party tools in their patch management process.
• 40% of companies depend on user complaints as one factor for patch validation.
But what is most alarming about these results is that this data was collected through self-selective participation – meaning the participants were companies that already have active patch management efforts. The results would look even worse if a random sampling of companies were surveyed.
How has patch management fallen by the wayside? We’ve had decades to become accustomed to it, yet so few companies have developed an effective patch management system.
We can point the finger at conflicting priorities, a lack of industry standards, vendor inconsistencies, and a variance in maturity between technology platforms. There is plenty of blame to throw around, but in the end we must take personal responsibility.
In order to constrain patch management costs and enable us to develop more sophisticated systems of patch management, we need better tools to measure them. Project Quant is a step in the right direction.
Patch Tuesday is kind of like a monthly holiday for many businesses I work with. It gives employees a chance to kick back while their computers and systems do all the work of updating (Yes, I am joking). But is Patch Tuesday really a good idea? Many have expressed concerns about creating a consistent trend to patching that informs attackers about the update patterns of their targets.
Here are the three main disadvantages to the system of Patch Tuesday:
1. Patch Tuesday, by its very nature, makes vulnerabilities public. So while Patch Tuesday may make things easier for those who take the time to patch, it severely endangers those who do not. Not only are the vulnerabilities announced, but hackers can analyze the patch to figure out exactly how to take advantage of unpatched systems. For this reason, the existence of Patch Tuesday actually makes the need to patch that much greater.
2. By having so many patches downloaded at the same time by so many systems, there is a definite toll on the bandwidth. This could tie up the bandwidth on your corporate network. But it is a much greater problem on a vendor’s servers who must contend with downloads from everyone who uses their products.
3. If you wait until a set time before patching, then you allow for your software to remain vulnerable until then. It’s not a big problem when the vulnerability is not widely known, but there have been cases where the vulnerabilities were made publicly known for months before patches were available. Either way, hackers have a fair amount of time to take advantage of the exploit before it is corrected with the patch.
Ultimately, whether you participate in Patch Tuesday or not depends on the nature of your unique enterprise. Some organizations cannot afford the risks of waiting to patch and require more vigilant updating to protect their systems. Other organizations may value the fluidity of operations over security and prefer a monthly scheduled time for patching.