So, if malware presents such an obvious risk in data breaches, why haven't security organizations done something to prevent its use?
The fact is that traditional approaches to stopping malware, such as relying on signature-based anti-virus, no longer provide sufficient protection. It's too easy for malware authors to write code that avoids detection and operates successfully well below the anti-virus radar. Attackers can modify existing malware slightly, add new functionality where needed, and enhance their ability to avoid detection with little effort using readily available tools. This was certainly the case with the Zeus family of malware, estimated to have infected millions of computers in the U.S. and around the world. Zeus is potentially responsible for thefts measured in the tens of millions of dollars.
At the heart of the matter is that once malware is installed, it is incredibly difficult to spot. It can take weeks or even months to realize that malware is present. Often, by the time a security team realizes its systems are infected, it is already far too late to prevent a serious breach.
Adapting their defenses to deal with new, sophisticated attacks requires organizations to think in multiple dimensions. It's no longer sufficient to put in place tools to protect just the network infrastructure, an approach that defined the security industry for a long time. Instead, organizations must think carefully about the interactions of systems, information, and users in much more complex ways. They must watch for unusual activity, suspicious traffic flows, and changes to systems near critical data stores.
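Watching for unusual activity can start very simply. As an illustrative sketch (not a production monitoring tool), the function below flags a traffic reading that sits far outside its historical norm using a z-score; the threshold and the choice of metric are assumptions for the example:

```python
from statistics import mean, stdev

def is_anomalous(history, latest, threshold=3.0):
    """Flag a measurement far outside the historical norm.

    `history` is a list of past per-interval readings (bytes
    transferred, login attempts, queries, etc.); `latest` is the
    newest reading. A z-score beyond `threshold` standard
    deviations marks it as suspicious.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # No variation in history: anything different stands out.
        return latest != mu
    return abs(latest - mu) / sigma > threshold
```

Real deployments layer far more context on top (time of day, host role, protocol mix), but the principle is the same: establish a baseline, then alert on deviation.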
New security approaches to fighting malware rely on the traits that malware variants share. Individual viruses, for example, may differ from each other significantly, but they exhibit common behaviors, especially as they steal information.
Such additional layers of defense combine to help detect when a change has occurred that would indicate malware is active, and to bolster the defenses around the target of attack (which is often saleable data such as credit card information, healthcare data, or intellectual property). Layered anti-malware defenses now usually include technologies such as file-integrity and behavioral monitoring, anomaly detection and, of course, traditional security tools such as encryption.
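File-integrity monitoring is one of the simpler layers to picture. A minimal sketch, assuming SHA-256 hashing is an acceptable fingerprint (real products add scheduling, tamper-resistant baselines, and alerting):

```python
import hashlib
from pathlib import Path

def fingerprint(path):
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def build_baseline(paths):
    """Record a known-good hash for each monitored file."""
    return {p: fingerprint(p) for p in paths}

def changed_files(baseline):
    """Return files whose current hash no longer matches the baseline."""
    return [p for p, h in baseline.items() if fingerprint(p) != h]
```

Run `build_baseline` over critical system binaries and configuration files when they are in a known-good state; any entry later returned by `changed_files` is a change worth investigating.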
Of course, attackers haven't stood still either. Some have moved far beyond simple brute-force efforts to find weaknesses in your firewall. The April breach at Oak Ridge National Laboratory is a perfect example. There, attackers combined social engineering, phishing, zero-day vulnerabilities, and malware in a well-coordinated and effective barrage, resulting in the successful penetration of a national laboratory handling highly sensitive information.
However, it's important to remember that this type of attack is still not the norm. Studies show that most breaches occur as a result of far simpler and more easily addressed security deficiencies, including such well-understood attacks as SQL injection (in which Web-facing applications are compromised using specially crafted instructions entered into web forms).
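To see why SQL injection is so easily addressed, compare a vulnerable query with a parameterized one. This is a hedged, self-contained illustration using SQLite; the table and data are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # VULNERABLE: user input is spliced directly into the SQL string,
    # so a form entry like "x' OR '1'='1" rewrites the query itself.
    return conn.execute(
        "SELECT secret FROM users WHERE name = '%s'" % name).fetchall()

def lookup_safe(name):
    # SAFE: the ? placeholder keeps the input as data, never as SQL.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)).fetchall()
```

Fed the payload `x' OR '1'='1`, the unsafe version returns every row in the table, while the parameterized version correctly finds no matching user. Every mainstream database driver offers placeholders of this kind.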
Other weaknesses in IT management processes continue to leave organizations vulnerable to an opportunistic hacker wanting to gain access and plant malware. For example, many organizations fail to enforce such basic safeguards as changing default accounts and passwords on new systems as they are installed -- unchanged defaults are a common entry point for malicious outsiders. According to one report, a lack of timely patching may have been part of Sony's recent PlayStation Network hack.
As businesses struggle to implement additional layers of security to detect malware and protect information, it is hardly surprising that the prospect of moving such data out into the cloud appears even more daunting. The complexity of interaction within the network provides the perfect mask for malware to operate undetected. Virtualization and cloud computing models add complexities that may enable malware authors to target new technical and process vulnerabilities in complex environments with multiple tenants, managed by potentially several third-party organizations.
In the final analysis, malware is extremely difficult to detect once it is on your network; preventing infection in the first place is therefore the best option. Organizations should focus on hardening Web-facing applications, training users, and implementing good procedures for detecting anomalous behavior on their networks, as well as on data-centric security practices.
If they do, they at least have a fighting chance at stopping, or at least reducing, the impact of dangerous malware before it drags them, and their customers, into the headlines.
Geoff Webb is a senior product manager at endpoint data security firm Credant Technologies. Geoff has over 20 years of experience in the tech industry and has provided commentary on security and compliance trends, and written on a number of related topics for such journals and websites as CIO Update, Internetnews.com, e-Finance & Payments, Law & Policy, Dark Reading, and BankInfoSecurity.com, among others. Prior to Credant, Geoff held management positions at NetIQ, FutureSoft, SurfControl and JSB. He holds a combined bachelor of science degree in computer science and prehistoric archaeology from the University of Liverpool.