Now that audit season is over (over 65% of all our assessments and audits happen in Q4), we finally have a chance to grab a cup of coffee and look back at a couple of trends in 2011 that we think separate the best security teams from the worst.
First, we need to discuss how we measure the quality of a security team. At Savid, it is pretty simple. Since we perform ethical hacking to assess security programs at organizations, if we got access to something we shouldn’t have, it counts as an intrusion in our books.
Most reviews of security controls look at what went wrong because it’s harder to learn from the successes. So let’s get the major failures of 2011 out of the way and then talk about what our best clients did to prevent us from breaking in. Overall, most of the security programs we assessed had application security issues. In fact, 2011 was the worst year we have ever seen in terms of the depth and breadth of application security issues – even though the majority of the security programs we tested were in compliance with regulations such as HIPAA, PCI, and GLBA.
Ok, so with that out of the way, what did the best security teams do to prevent our ethical hackers from breaking in? One Thing: Defense In Depth. 2011 was the first year where we saw significant advancements in defense in depth deployments among our clients. For example, we saw a noticeable increase in proper system hardening (using standards such as CIS and NIST) and reduction of excessive permissions that stopped our attacks cold.
Properly deploying defense in depth can be the difference between a data breach requiring notification and a simple documented incident. For some organizations, that difference could be millions of dollars. Oh, and it also has the side effect of making most malware non-functional by preventing the malware from creating temporary files, accessing DLLs, etc. Remember, an attacker can’t exfiltrate data if the exfiltration tools won’t run!
So, how did defense in depth stop our hacking? Most of the time we were able to get entry into a server or application, but because of defense in depth we weren’t able to leverage that entry for any gain (such as privilege escalation, intellectual property, or personally identifiable information). For example, when we got access to an application via SQL injection, we weren’t able to execute any commands on the server because the SQL server was hardened to prevent usage of xp_cmdshell and the SQL service account had no local permissions on the box to do anything other than access the database files and folders. Another example is when we got access to a Linux system running a custom PHP login system via an upload vulnerability and a PHP shell script. The hardening of Apache and the file system prevented our low-privileged web server service account from reading local files, creating files, etc. Essentially, the account we got control of was useless, and the attack vector wasted our time and effort.
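The upload scenario above can be sketched in a few lines. This is a hypothetical handler (the extension list and storage path are assumptions, not from any client engagement) showing two layers working together: rejecting unexpected file types, and storing what does get through under a random name outside the web root so an uploaded shell script can never be executed through the web server.

```python
import os
import secrets

# Hypothetical upload handler illustrating two defense-in-depth layers:
# 1) allow only expected file extensions, and
# 2) store uploads under a randomized name outside the web root, so an
#    uploaded "shell.php" can never be requested through the web server.

ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg", ".pdf"}
UPLOAD_DIR = "/var/uploads"   # assumed path, deliberately NOT under the web root

def safe_upload_name(filename):
    """Return a randomized storage name, or None if the type is not allowed."""
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        return None                      # rejects shell.php, shell.phtml, etc.
    return secrets.token_hex(16) + ext   # attacker cannot guess the stored path
```

Even if an attacker smuggles a script past the extension check, the second layer (a non-executable, non-web-served upload directory) still leaves them with a useless file, which is the whole point of layering.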
Wasting an attacker’s time and effort is exactly what you, as the defender, want to do. Every minute an attacker is stalled or delayed is more time for your detective controls such as IDS/IPS, logging, or even Tripwire-like defenses to detect an attack. We recommend that every security program have a simple theme: If You Cannot Prevent It, Detect It. Leveraging defense in depth provides additional detection points along the attack path. Every time a low-privileged user attempts to access the Accounting share – detect it. Every time a server in your DMZ attempts to connect to a server on the internal network (which should be blocked by the firewall), detect it and respond to it. These are all indicators that the server is doing something it shouldn’t.
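The DMZ-to-internal rule above can be expressed as a tiny detection check. This is only an illustrative sketch (the network ranges and log format are made-up assumptions); in practice this logic lives in your firewall alerting or SIEM, not a script.

```python
import ipaddress

# Illustrative detection rule: flag any connection attempt from the DMZ
# toward the internal network. The firewall should already block this
# traffic, so even seeing the attempt is an indicator of compromise.
# The address ranges and "src dst" log format below are assumptions.

DMZ_NET      = ipaddress.ip_network("192.0.2.0/24")
INTERNAL_NET = ipaddress.ip_network("10.0.0.0/8")

def is_suspicious(src_ip, dst_ip):
    """True if a DMZ host is reaching for the internal network."""
    src = ipaddress.ip_address(src_ip)
    dst = ipaddress.ip_address(dst_ip)
    return src in DMZ_NET and dst in INTERNAL_NET

def scan_log(lines):
    """Yield an alert for each suspicious 'src dst' pair in a simple log."""
    for line in lines:
        src, dst = line.split()[:2]
        if is_suspicious(src, dst):
            yield f"ALERT: DMZ host {src} attempted connection to {dst}"
```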
Our number one recommendation when deploying defense in depth with proper detection controls is the use of fake records – commonly called “honeytokens”. For example, if you have a public web application that has access to an internal database server through a firewall, place a fake record in the database using a randomly generated 30-64 character value. This record has no business value and should never be touched by normal web application use. If your firewall, web filter, or DLP system ever sees this value move across the network – something went wrong and you need to find out why.
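A minimal sketch of the honeytoken idea, assuming the token gets planted as a fake database record and the string match is wired into a DLP or web-filter rule (the function names here are illustrative, not a product API):

```python
import secrets

# Honeytoken sketch: generate a random value in the 30-64 character
# range, plant it as a fake record, and alarm if it ever appears in
# outbound traffic. Normal application use never touches this record,
# so any sighting of the value on the wire means something went wrong.

def make_honeytoken(length=48):
    """Generate a random hex token of the given character length."""
    return secrets.token_hex(length // 2)   # each byte becomes two hex chars

def contains_honeytoken(payload, token):
    """True if outbound traffic carries the planted fake record."""
    return token in payload
```

In practice the match would run on your egress choke points (proxy, DLP, IDS signature), but the logic really is this simple: the token is random enough that a hit is never a false positive.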
Every year Verizon releases their Data Breach Investigations Report, and year after year they mention the same problem: the time between a breach occurring and detection of the breach is too long – sometimes it takes years! So this year, add some more defense-in-depth controls to your security program and watch how quickly they reduce the impact of a vulnerability.
I just released a report for Dark Reading on how to build a multi-enterprise vulnerability management program. If you are dealing with outsourced vendors, or an outsourced supply chain, you should definitely give the article a read.
To summarize the article:
I offer many more details and tips within the article but step #1 is so critical that an entire article should be dedicated to just that!
Christian Moldes of Verizon Business has a great post about plane crashes and security breaches and how the two are very similar. He hits it right on the head! During our engagement wrap-up meetings, where we explain the various potential scenarios an attacker can use to break into a client’s network, we are always asked to put a specific ranking on a specific risk. I argue that the ranking almost doesn’t matter, because the big breaches normally come not from a single vulnerability but from many chained together.
Christian quotes Malcolm Gladwell:
The typical [plane] accident involves seven consecutive human errors.
When we work with clients, we normally see that breaches are caused by a chain of at least three errors: a vulnerability is exploited, then a misconfiguration is used to find a privileged account’s username and password, and then data the privileged account has access to is found somewhere on the network it wasn’t supposed to be.
Even with many controls in place, you cannot always prevent a security breach. This is exactly why we recommend that incident response policies and processes (which should be tested just like you test your disaster recovery processes!) be the FIRST THING you implement when building a security program at an organization, followed by detective controls such as logging to detect a breach as soon as possible.
If you are annoyed by the constant updating, amending, and general tinkering with HIPAA compliance regulations, then you may have to get used to it. The proposed healthcare reform bill not only contains additional HIPAA provisions but also a proposal for periodic updates.
At this moment, the healthcare reform bill has just passed a key Senate committee. Within the 1,000 page document are proposals for regular HIPAA renewals that would allow for biannual reviews of existing HIPAA standards and operation rules, and the ability to make recommendations and updates.
The bill proposes four additional HIPAA transactions for healthcare industries where their data and information must comply with the most current standards and operating rules – health claims, enrollment/disenrollment in plans, health plan premium payments, and referral certification and authorization. The bill would give healthcare industries until 2015 to get compliant in these areas. There is also a list of proposed penalties for those who fail to comply with the HIPAA requirements.
The healthcare industry already had to adjust to HIPAA amendments that were caveats to accepting money in Obama’s economic stimulus bill earlier this year. Those amendments, manifested in the Health Information Technology for Economic and Clinical Health (HITECH) Act, extended HIPAA regulations to business associates and required notification to patients in the event of security breaches. While HITECH provided $31.2 billion for healthcare infrastructure and adoption of electronic health records, it also increased compliance obligations and strengthened enforcement penalties.
The bill essentially makes government regulation of healthcare IT an ongoing process, with biannual updates to HIPAA. I’m not sure if more government regulation and compliance is going to improve the quality of healthcare privacy for individuals, but I am sure that many will oppose these changes.
Of course there’s no guarantee the bill will not change drastically as it goes through the House of Representatives on the next leg of its journey. And, even then, it may or may not be passed by Congress and signed by Obama.
Bruce Schneier is talking about a great post at the Boston Review about the new age of cyber-warfare, and how cyber-warfare is greatly exaggerated. I couldn’t agree more. Granted, the US government has a cyber-warfare problem; all governments do. However, the bigger and more immediate problem today is cyber-crime. I spoke at the Federal Reserve last week on this exact topic.
Small businesses are now being targeted because they have more money in their accounts and it is easier to transfer larger sums of money out of their accounts without fraud detection going off at banks.
A quote from the review sums it all up:
So why is there so much concern about “cyber-terrorism”? Answering a question with a question: who frames the debate? Much of the data are gathered by ultra-secretive government agencies—which need to justify their own existence—and cyber-security companies—which derive commercial benefits from popular anxiety. Journalists do not help. Gloomy scenarios and speculations about cyber-Armaggedon draw attention, even if they are relatively short on facts.
I try very hard not to do what they describe when I speak, but it can be difficult, especially with audiences that are not familiar with the problem. Cyber-crime is a death-by-a-thousand-cuts type of problem: $3,000 here, $5,000 there, but it all adds up pretty quickly. Cyber-warfare is much bigger and easier to point at than these small little fraud issues.
If you have 10 minutes of time, read the Boston Review article and give me some feedback. Are we in a situation where we as citizens have to be concerned about cyber-warfare like we were concerned about nukes in years past?
It isn’t easy being a healthcare organization these days. The current healthcare reform progress is set to turn their organizations upside down one way or another. And, in spite of this, they must continue to function by providing healthcare while, at the same time, dangling over the maw of the Department of Health and Human Services Office for Civil Rights. A slight slip in security compliance and the OCR is ready to bite down on them hard with HIPAA violation penalties.
It’s not really fair when you consider that IT security is not and should not be a core competency of an organization whose primary function is healing the sick – leave the tech stuff to the techies. Fortunately, it looks like healthcare organizations will receive some help with the recently announced CSF Ready Program.
Simply put, the CSF Ready Program takes some of the guesswork out of choosing security products for healthcare organizations by providing criteria for evaluation. With everything from firewalls to anti-virus software on the market, healthcare organizations have little way of knowing which products are useful and support their security framework. The criteria developed in the CSF Ready Program are intended to aid in assessing an information security product’s capabilities, functionality, effectiveness, and support of security practices.
This program was developed by HITRUST (Health Information Trust Alliance) – an alliance of healthcare professionals and IT vendors – in response to healthcare organizations requiring more assistance and guidance in selecting information security products.
The CSF Ready Program also assists with vetting IT products to determine if they promote the Common Security Framework – the first IT security framework developed specifically for healthcare information, also created by HITRUST.
So do you think this is a good idea? Should healthcare organizations rely on a third party to influence their security framework and security products? I must admit the CSF Ready Program does have some heavy hitters behind it, including an advisory committee of security professionals representing healthcare organizations and a steering committee with chairs from ICSA Labs, McAfee, Cisco, NSS Labs, Symantec, and VeriSign.
Patch Tuesday is kind of like a monthly holiday for many businesses I work with. It gives employees a chance to kick back while their computers and systems do all the work of updating (Yes, I am joking). But is Patch Tuesday really a good idea? Many have expressed concerns about creating a consistent trend to patching that informs attackers about the update patterns of their targets.
Here are the three main disadvantages to the system of Patch Tuesday:
1. Patch Tuesday, by its very nature, makes vulnerabilities public. So while Patch Tuesday may make things easier for those who take the time to patch, it severely damages those who do not. Not only are the vulnerabilities announced, but hackers can analyze the patch to figure out exactly how to take advantage of unpatched systems. For this reason, the existence of Patch Tuesday actually makes the need to patch that much greater.
2. With so many patches downloaded at the same time by so many systems, there is a definite toll on bandwidth. This could tie up your corporate network, but it is a much greater problem for the vendor’s servers, which must contend with downloads from everyone who uses their products.
3. If you wait until a set time before patching, then you allow for your software to remain vulnerable until then. It’s not a big problem when the vulnerability is not widely known, but there have been cases where the vulnerabilities were made publicly known for months before patches were available. Either way, hackers have a fair amount of time to take advantage of the exploit before it is corrected with the patch.
Ultimately, whether you participate in Patch Tuesday or not depends on the nature of your unique enterprise. Some organizations cannot afford the risks of waiting to patch and require more vigilant updating to protect their systems. Other organizations may value the fluidity of operations over security and prefer a monthly scheduled time for patching.
As companies cope with further compliance regulations passed down from above, they also continue to use outsourcing to minimize costs and maintain a focus on core competencies. However, the very idea of outsourcing appears to conflict with the issue of security compliance. How can you keep PHI and financial data private when you are giving it to someone else?
At least this is one way in which the new HITECH regulations may help out the covered entities. Business associates are now subject to the same compliance regulations and penalties as their covered entity partners. But I don’t believe this means healthcare companies should disregard due diligence when outsourcing PHI to their business associates just because they can rely on the strict hand of the OCR or HHS.
However, the problem is far more complicated when you consider offshore outsourcing. Foreign nations are not subject to the same compliance regulations as the US (compliance regulations are actually stricter in Europe). If a foreign partner you outsource to is caught in violation of compliance, then I suppose you are accountable to the HHS for their mistake. Of course there is also the long-term damage to your reputation to worry about.
This doesn’t necessarily mean that offshore outsourcing should be avoided. Kaiser Permanente overcame the obstacle of working with non-compliant overseas companies with excellent due diligence. Kaiser conducted interviews with all IT providers in India and had its partners sign formal business agreements. Also, Kaiser maintains complete control of its information by having Indian vendors log on to Kaiser’s U.S. database to do their programming work.
Transparency is key. Establish regulatory risk management early and often. Your outsourcing partner should be willing and able to provide and explain their service delivery model, data flows, and third party resources. Additionally, you must be willing to commit enough resources to adequately monitor the partner’s policies and practices to ensure compliance is met on their end. Consistently review contracts and make sure duties are clear and understood when it comes to compliance.
Consider this: a hacker finds a security hole on your website that exposes hundreds of thousands of private customer records, including names, emails, and even passwords. The hacker does not steal this information. Instead, he quietly alerts you via email; but at the same time, he makes the security vulnerability public on his blog.
Do you: A) Thank the hacker for bringing the security vulnerability to your attention? Or, B) seek legal action against the hacker who damaged your company’s reputation by alerting the public about your sloppy security?
This is the controversy surrounding “HackersBlog.org” – a blog where anonymous hackers alert the public about security vulnerabilities. Each blog entry lists the site hacked, how the data was captured, and what private information is accessible.
The site made its first splash when a Romanian hacker named “Unu” hacked the databases of Kaspersky – ironically, one of the leading companies in the security and antivirus market. “Seems incredible but unfortunately, its true,” writes Unu, “Alter one of the parameters and you have access to EVERYTHING: users, activation codes, lists of bugs, admins, shop, etc.”
The next target, which occurred the very next day, was BitDefender – another antivirus software company. Unu used an SQL injection to show how data could be easily extracted.
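The parameter-altering trick Unu describes is classic SQL injection, and the defense is just as classic. Here is an illustrative sketch (not Unu's actual exploit, and the table and payload are made up) contrasting a string-built query with a parameterized one, using Python's built-in sqlite3 driver:

```python
import sqlite3

# Illustrative SQL injection demo: a string-built query versus a
# parameterized one. The table, data, and payload are assumptions
# for demonstration only.

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"   # classic injection string: "alter one of the parameters"
```

Feeding `payload` to `lookup_unsafe` rewrites the WHERE clause and dumps every row; the same input to `lookup_safe` simply matches no user.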
In an official statement, Kaspersky denied the attack was successful. BitDefender characterized the hack as an attack and portrayed it negatively, even though “the action did not intend to steal information but simply show a vulnerability.” Usually when sites are hacked, the companies are left scrambling to put out the public relations fires.
So, alerting the website via email about the found vulnerability? That sounds white hat enough. So why expose the flaw to everyone publicly on the Internet and wreck the reputation of that company? “If we just send an email, without making it public they would fix only that parameter that we announced,” says Unu, “and it is possible [for there] to be others too.”
It seems that HackersBlog owes its allegiance to the public and not to the companies who allow for these breaches in security. “I’m not a criminal, I [am] not a burglar,” says Unu, “You do the work of a [pentesting firm] that could test the security of the site or [sic] server at the request of the owner. The difference is that the firm makes this for a big sum of money, a very big sum of money, and we do it as a hobby, for pleasure, free, and most of the times we do that much better, but we don’t even get a simple ‘Thank you.’”
Leave me a comment and let me know what you think about this Hacker Blog site!
The reality of the situation is that there is no such thing as a 100% secure place on Earth. IT security professionals can only do what they can to make things as secure as possible. There is no computer security defense that will succeed every time, forever, or, as I say when presenting at conferences, “You cannot buy your security at the local Best Buy.” (NOTE: If you have an in-depth understanding of honeypots, you can skip this post.)
Because of my interaction and association with the Honeynet Project I am frequently asked what benefits honeynets can provide to the normal everyday IT security engineer. Simply put, honeypots provide us with early warning so we can be vigilant and prepare our defenses accordingly.
Additionally, honeypot data is a great way to loosen the purse strings of corporate managers who are hesitant to dip into the company budget. You can make a case for a larger IT security budget by showing them the attack data from the honeypot – who is attacking, how they are attacking, how often, and, most importantly, what damage they could potentially do to the enterprise if the proper defenses are not built. Actual data speaks louder than any verbal argument.
Here’s an analogy to help you understand the importance of honeypots.
Imagine you are tasked with defending your king’s castle from an impending enemy attack. But you don’t know who the enemy is, where they are coming from, how many there are, or what kind of attacks they will use. They may use spears, rifles, or just sharp rocks. They may attack on horseback, with catapults, or maybe with tanks.
So what kind of defenses should you build? A 30 foot tall wall surrounding the castle or a moat? Should you put archers in the towers or build turrets? Maybe you should just pile up a few sandbags and hope for the best. Maybe the real problem is the village idiot on the inside… =)
Without knowing anything about the impending attack, you do not know what an appropriate defense would be. You may dig a futile trench around your castle while the enemy attacks with stealth bombers. Or you may encapsulate your entire castle in an impenetrable crystalline dome while your five attackers sling rocks at it. The latter defense may work, but your king might not be too happy with you for wasting his whole treasury on an unnecessarily robust defense.
A Honeypot is perhaps like a decoy paper version of your castle set up a mile before your actual king’s castle. The paper castle has no value, but you can see what attacks your enemy uses when they attack it, and thus prepare accordingly.
Honeypots allow you to understand what kind of attacks you can expect. With this knowledge you can allocate resources to defenses appropriately, without under- or overspending. Now, with all that said, not everyone can run out, install a honeypot, and solve their problems. Honeypots require a lot of maintenance and watching, and if not properly installed they can actually decrease the security of your network.
If you don’t want to take the chance of hurting your own security posture, there are services that will configure and run honeypots for you and provide you with their data. Symantec and McAfee offer such services.