SOC Automation: Good or Evil?
Thu, 24 May 2018 02:26:00 -0500

Many security operations centers (SOCs) face the same recurring problem: too many alerts and too few people to handle them. Over time, the problem worsens because the number of devices generating alerts grows at a much faster rate than the number of people available to analyze them. Consequently, alerts that truly matter can get buried in the noise.

Most companies look at this problem and see only two solutions: decrease the number of alerts, or increase the number of staff. Luckily, there’s a third option: automation, which can greatly improve the efficiency of analysts’ time.

Traditionally, automation has been viewed as an all-or-nothing proposition. But, times change. Companies can implement automation at various points of the incident response process to free analysts from mundane, repetitive tasks, while maintaining human control over how they monitor and react to alerts. Ultimately, the goal should be to strike a balance between low-risk processes that can be automated with minimal impact and the higher-risk ones that need to be handled by analysts.

Before launching into any level of SOC automation, consider the following: 1) Is the organization winning or losing the cyber battle? 2) If it is winning, does it have the right tools to continue doing so? 3) If it is losing, what should it do?

Whether an organization is winning or losing, understanding the pros and cons of automation is critical to any project’s success.

Benefits of Automation

Automation has typically been favored in low-impact environments, but it has been frowned upon in high-impact environments such as utilities and healthcare because of the damage false positives can cause.

The main benefits of SOC automation include:

  • More consistent response to alerts and tickets
  • Higher volume of ticket closure and response to incidents
  • Better focus by analysts on higher priority items
  • Improved visibility into what is happening
  • Coverage of a larger area and a larger number of tickets

Downsides of Automation

Nothing is more taxing than dealing with a false positive, which happens when a system misinterprets legitimate activity and flags it as an attack. In some industries, a false positive can disrupt business processes, resulting in lost revenue, cause downtime for industrial organizations, and even put lives at risk in hospital settings.

Major downsides include:

  • Shutting down operations
  • Misclassifying an attack so the wrong action is taken
  • Automating tickets that should have been handled manually
  • Missing key information or data
  • Making the wrong or inappropriate decision

Best Practices for Automation

In the past, companies typically looked at automation’s potential downsides and then decided to avoid it because doing so seemed safer. However, today, more companies are realizing that if they do not implement some degree of automation, they increase their chances of missing an attack, which could cause more damage than the negative effects of automation.

Given this scenario, security practitioners should look at adopting the following best practices for automation.

Create a Thorough Strategy

The plan should address the following key questions:

  • What areas generate the most alerts?
  • What alerts take up most of the analysts’ time?
  • Which responses are highly structured, and which events do analysts handle in a predictable way?
  • Can an automated playbook be used to handle certain events?
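The playbook idea in the last question can be sketched in a few lines. This is a minimal illustration, not a real SOAR product: the alert fields, alert types, and severity threshold are all hypothetical.

```python
# Minimal sketch of an automated triage playbook. All alert fields, types,
# and thresholds are hypothetical, chosen only to illustrate the idea of
# auto-closing repetitive low-risk alerts while escalating risky ones.

LOW_RISK_AUTOMATABLE = {"av_signature_match", "blocked_spam", "port_scan_blocked"}

def triage(alert: dict) -> str:
    """Return 'auto-close', 'escalate', or 'auto-enrich' for an alert."""
    # Highly repeatable, low-impact alerts are safe to close automatically.
    if alert["type"] in LOW_RISK_AUTOMATABLE and alert["severity"] <= 3:
        return "auto-close"
    # Anything touching a critical asset always goes to a human analyst.
    if alert.get("asset_criticality") == "high":
        return "escalate"
    # Everything else gets automated enrichment before analyst review.
    return "auto-enrich"
```

The point of the sketch is the split it encodes: automation handles the predictable cases, and the routing rules keep humans in the loop for anything that touches critical assets.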

Take a Measured Approach

One of the key rules of security is to always avoid extremes. For example, automating everything can open a can of worms, leaving security executives to justify the approach by claiming analysts could not keep up with the tickets.

Finding a balance by automating tasks and tickets that are manually intensive, highly repeatable, and distract analysts from more important functions is a good starting point. Automation should allow the company to improve SOC efficiency while maintaining acceptable levels of risk, on both the operational side and the security side.

The trick is to manage and control false positives, not eliminate them.

Know, and Don’t Automate, Tasks that Require Human Analysis

These include alerts that affect:

  • Critical applications or systems
  • Business process, financial and operational systems
  • Systems that contain large amounts of sensitive data
  • Large-scale compromise indicators


The need for SOC automation is increasing in urgency, since adversaries are also harnessing software and hardware to develop and carry out attacks. Consequently, the velocity and sophistication of threats are rising. Keeping pace with programmatic attacks inevitably requires automating certain SOC functions and processes. Following the recommendations outlined above can help determine which should be automated, and which shouldn’t.

About the author: John Moran is Senior Product Manager for DFLabs and a security operations and incident response expert. He has served as a senior incident response analyst for NTT Security, computer forensic analyst for the Maine State Police Computer Crimes Unit and computer forensics task force officer for the US Department of Homeland Security. John currently holds GCFA, CFCE, EnCE, CEH, CHFI, CCLO, CCPA, A+, Net+, and Security+ certifications.

Copyright 2010 Respective Author at Infosec Island
Can Organisations Turn Back Time after a Cyber-Attack?
Wed, 23 May 2018 07:22:00 -0500

In the aftermath of a cyber breach, the costs of disruption, downtime and recovery can soon escalate. As we have seen from recent high-profile attacks, these costs can have a serious impact on an organisation’s bottom line. Last year, in the wake of the NotPetya attack, Maersk, Reckitt Benckiser and FedEx all had to issue warnings that the attacks had cost each company hundreds of millions of dollars. Whilst the full extent is not yet known, this underlines the financial impact that such breaches can have.

The severity of a breach is often linked to the costs associated with responding and remediating the damage. However, there are ways for organisations to minimise one particularly costly part of the process: new approaches to post breach remediation mean that organisations can, in effect, roll back time to a ‘pre-breach’ state.

The costs of a breach

Cyber attacks can cripple a business and take days to clear up. For larger organisations that are affected by an incident, the cost of remediation could include damage to the brand’s reputation, legal costs, setting up response mechanisms to contact breach victims, and more. For smaller organisations, even though the costs of remediation might be lower, they’ll take up a greater proportion of operating revenue: from lost data to damaged or inoperable equipment, as well as disruption to normal business. There is also the cost of any fines generated because of compliance failures. In fact, Ponemon now puts the average cost of a breach at $3.62 million.

This clean-up operation can represent a serious drain on an organisation’s time and resources. The process of repairing and recovering data from compromised IT assets is consistently reported as one of the most high-cost elements of a breach. Ransomware attacks, in particular, are likely to become more difficult to remediate as they target systems that are harder to back up, which means that the costs of cleaning up after a breach are set to get worse. Paying the ransom is no guarantee that files will be recovered: in fact, 20% of ransomware victims that paid never got their files back.

Part of the challenge is that cyber attacks are getting smarter and stealthier, and stopping every cyber attack in its tracks, before it reaches the network and can inflict any damage, is unrealistic. In all cases, organisations should aim to identify the virus as quickly as possible, halt the executable, and isolate the infected endpoint from the network. During execution, malware often creates, modifies or deletes system files and registry settings, as well as making changes to configuration settings. These changes – or remnants left behind – can cause system malfunction or instability.

For organisations that are dealing with hundreds of incidents every week, there can be a serious impact to the business from working to re-image or re-build systems, or reinstall files that have been affected. There’s not only the lost work to factor in, but also the downtime while systems are restored as employees are stymied if they can’t access the files and systems they need to.

There are approaches through which these costs can be minimised: a new generation of endpoint protection observes the malware’s behaviour, flags activities that are abnormal, and steps into the line of execution to deflect the attack completely. Moreover, this new generation of solutions has remediation capabilities to reverse any modifications made by malware.

This means that when files are modified or deleted, or where changes are made to configuration settings or systems, it can undo damage without teams having to re-image systems. This ability to automatically rollback compromised systems to their pre-attack state minimises any downtime and lost productivity.
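The rollback capability described above rests on one idea: journal every change an untrusted process makes, so each change can be undone in reverse order. Below is a toy sketch of that idea over an in-memory file map; real endpoint products journal actual filesystem and registry operations, which this does not attempt.

```python
# Toy sketch of change journaling for rollback. Files are modelled as a
# simple dict of path -> contents; real products intercept OS operations.

class ChangeJournal:
    def __init__(self, files: dict):
        self.files = files          # path -> contents
        self.undo_log = []          # (path, previous_contents_or_None)

    def record_write(self, path, new_contents):
        # Remember what was there before (None if the file is new).
        self.undo_log.append((path, self.files.get(path)))
        self.files[path] = new_contents

    def record_delete(self, path):
        if path in self.files:
            self.undo_log.append((path, self.files.pop(path)))

    def rollback(self):
        # Replay the journal in reverse to restore the pre-attack state.
        for path, previous in reversed(self.undo_log):
            if previous is None:
                self.files.pop(path, None)   # file did not exist before
            else:
                self.files[path] = previous
        self.undo_log.clear()
```

Reversing the log order matters: a file that was first encrypted and then deleted must have the delete undone before the encryption, so the final state is the original one.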

Assessing the Impact

The work isn’t done yet: an often-overlooked aspect of post-event evaluation is working out how to prevent a repetition of a similar incident. Clear and timely visibility of the kill chain and the affected endpoints across an organisation is essential for security staff to quickly identify the scope of the problem. In order to assess the impact and potential risk, organisations need assurance afterwards to confirm whether a particular threat was present on their estate – the ability to search for Indicators of Compromise (IoCs) is vital. Real-time forensic data allows organisations to track threats or investigate post-attack, providing insights into exactly which vulnerability the attacker targeted, and how. These can pinpoint the parts of the system that were directly affected and determine whether any further remediation actions are required.

With the costs of breaches escalating, it’s more important than ever to have the capability to learn from incidents to avoid history repeating itself. Even if it’s not possible to thwart every attack, a full security approach which includes prevention, detection, automatic mitigation and forensics will ensure that the impact of any incident is minimised and that normal operations can be resumed as quickly as possible.  

About the author: Patrice Puichaud is Senior Director for the EMEA region, at SentinelOne.

The AWS Bucket List for Security
Wed, 23 May 2018 06:22:39 -0500

With organizations having a seemingly insatiable appetite for the agility, scalability and flexibility offered by the cloud, it’s little surprise that one of the market’s largest providers, Amazon’s AWS, continues to go from strength to strength. In its latest earnings report, AWS reported 45% revenue growth during Q4 2017.

However, AWS has also been in the news recently for the wrong reasons, following a number of breaches of its S3 data object storage service. Over the past 18 months, companies including Uber, Verizon, and Dow Jones have had large volumes of data exposed via misconfigured S3 buckets. Between them, the firms inadvertently made public the digital identities of hundreds of millions of people.

Sub-par security practices

It’s important to note that these breaches were not caused by problems at Amazon itself. Instead, they were the result of users misconfiguring the Amazon S3 service, and failing to ensure proper controls were set up when uploading sensitive data to it. In effect, data was placed in S3 buckets and secured with a weak password – or in some cases, no password at all.

Amazon has made several tools available to make it easier for S3 customers to work out who can access their data, and to help secure it. However, organizations still need to use access controls for S3 that go beyond passwords, such as two-factor authentication, to control who can log in to their S3 administration console.
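The misconfigurations behind the breaches above are detectable: a bucket is publicly exposed when its ACL grants access to the AllUsers or AuthenticatedUsers groups. The sketch below audits an ACL of the shape returned by boto3’s `get_bucket_acl`; actually fetching the ACL from AWS is assumed and left out so the logic stands alone.

```python
# Sketch: flag public grants in an S3 bucket ACL. The dict shape mirrors
# the boto3 get_bucket_acl response; fetching it from AWS is assumed.

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl: dict) -> list:
    """Return (group_uri, permission) pairs that expose the bucket."""
    exposed = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            exposed.append((grantee["URI"], grant["Permission"]))
    return exposed
```

Run across every bucket in an account, a check like this turns the “who can access our data?” question into a routine audit rather than a post-breach discovery.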

But to understand why these basic mistakes are still being made by so many organizations, we need to look at the problem in the wider context of public cloud adoption in many enterprises. When speaking with IT managers that are putting data in the cloud, it is not uncommon to hear statements such as ‘there is no difference between on-premise and cloud servers.’ In other words, all servers are seen as being part of the enterprise IT infrastructure: and they will use whichever environment best suits their needs, operationally and financially.

Old habits die hard

However, that statement overlooks one critical point: cloud servers are much more exposed than physical, on-premise servers. For example, if you make a mistake when configuring the security for an on-premise server storing sensitive data, it is still protected by other security measures by default. The server’s IP address is likely to be protected by the corporate gateway, or other firewalls used to segment the network internally, and other security layers which stand in the way of potential attackers.

In contrast, when you provision a server in the public cloud, it is accessible to any computer in the world. By default, anybody can ping it, try to connect and send packets to it, or try to browse it. Beyond a password, it doesn’t have all those extra protections from its environment that an on-premise server has. And this means you must put controls in place to change that.

These are not issues that the organization’s IT teams, who have become comfortable with all those extra safeguards of the on-premise network, regularly have to think about when provisioning servers in the data centre. There is often an assumption that something or someone will secure the server – and this assumption carries over when putting servers in the cloud.

So when utilizing the cloud, security teams need to step in and establish a perimeter, define policies, implement controls, and put in governance to ensure their data and servers are secured and managed effectively – just as they do with their on-premise network.  

Security 101 for cloud data

This means you will still need to apply all the basics of on-premise network security when utilizing the public cloud: access controls defined by administration rights or access requirements and governed by passwords; filtering capabilities defined by which IP addresses need connectivity to and from one another.

You still need to consider if you should use data encryption, and whether you should segment the AWS environment into multiple virtual private clouds (VPC). Then you will need to define which VPCs can communicate with each other, and place VPC gateways accordingly with access controls in the form of security groups to manage and secure connectivity.
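Security groups are where the filtering decisions above are actually expressed, and the classic mistake is a rule open to the whole internet. The sketch below audits a security group of the shape returned by boto3’s `describe_security_groups`; the API call itself is assumed so the check is pure logic.

```python
# Sketch: find security group rules reachable from anywhere. The dict
# mirrors the IpPermissions shape in boto3's describe_security_groups
# response; retrieving it from AWS is assumed.

def world_open_ports(group: dict) -> list:
    """Return (from_port, to_port) ranges open to 0.0.0.0/0."""
    exposed = []
    for perm in group.get("IpPermissions", []):
        for ip_range in perm.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                exposed.append((perm.get("FromPort"), perm.get("ToPort")))
    return exposed
```

A result like `[(22, 22)]` means SSH is open to the world, exactly the kind of exposure an on-premise gateway would have masked but the public cloud does not.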

You will also need controls over how to connect your AWS and on-premise environments, for example using a VPN. This requires a logging infrastructure to record actions for forensics and audits, to get a trail of who did what. None of these techniques are new, but they all have to be applied correctly to the AWS deployment, to ensure it can function as expected.

Extending network security to the cloud

In addition to these security basics, IT teams also need to look at how they should extend network security to the cloud. While some security functionality is built into cloud infrastructures, it is less sophisticated than the security offerings from specialist vendors.

As such, organizations that want to use the cloud to store and process sensitive information are well advised to augment the security functionality offered by AWS with virtualized security solutions, which can be deployed within the AWS environment to bring the level of protection closer to what they are used to within on-premise environments.  

Many firewall vendors sell virtualized versions of their products customized for Amazon. While these come at a cost, if you want to be serious about security, you need more than the measures that come as part of the AWS service. Ultimately you need to deploy additional web application firewalls, network firewalls and implement encryption capabilities to mitigate your risks of being attacked and data being breached.

This has the potential to add overall complexity to the security management. However using a security policy management solution will greatly simplify this, enabling security teams to have visibility of their entire estate and enforce policies consistently across both AWS and the on-premise data centre while providing a full audit trail of every change.  

About the author: Professor Avishai Wool is co-founder and CTO at AlgoSec.

Achieving Effective Application Security in a Cloud Generation
Wed, 16 May 2018 02:04:05 -0500

Modern applications are designed for scale and performance. To achieve this performance, many deployments are hosted on public cloud platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) for their elasticity and speed of deployment. The challenge is that effectively securing cloud-hosted applications has, to date, been difficult. The media is full of high-profile security events involving successful attacks on cloud-hosted applications, and these are only the examples that were disclosed to the public.

In reality, traditional security deployment patterns do not work effectively with applications hosted on public cloud platforms. Organizations should not try to push their previous on-premises application security deployments into cloud environments for several reasons. 

Cloud application security requires new approaches, policies, configurations, and strategies that allow organizations to address business needs and security risks in unison. Not incorporating these will no doubt deliver an insufficient security posture and cost unnecessary time and money.

The balance of performance and security  

Whether your organization is a one-person startup, a global enterprise, or anything in between, you depend on applications to operate effectively. You cannot afford downtime with these applications, and for many the cloud is still a confusing space when it comes to who is responsible for security. Unfortunately, a single unpatched vulnerability in an application can let an attacker penetrate your network and steal or compromise your data along with that of your customers, causing significant disruption to your operations. According to a recent report, “Unlocking the Public Cloud,” 74 percent of respondents stated that security concerns restrict their organization’s migration to the public cloud. Public cloud adoption is growing rapidly, yet security is the largest area of resistance when moving to the cloud.

Many organizations still rank performance well above security, but the two should be given equal weight, considering the risks. For example, in a May 2018 report from the Ponemon Institute, 48 percent of the 1,400 IT professionals who responded said they value application performance and speed over security.

While deploying layer 7 protections is paramount to securing applications, it’s also essential that any security technology integrates deeply with existing cloud platforms and licensing models.

Security measures should be deeply coupled with the dynamic scalability of public cloud providers such as AWS, Azure and GCP, ensuring that performance handling requirements are addressed in real time without any manual intervention. Organizations should also have direct access to the native logging and reporting features available on cloud platforms.

Fixing application vulnerabilities in the cloud

You might not think it, but application vulnerabilities are pervasive and often left untouched until it is too late. Unfortunately, fixes and patches are a reactive process that leaves vulnerabilities exposed for far too long (months isn’t uncommon). The problem is clear: vulnerability remediation on an automated and continuous basis is paramount to ensuring application security, both on-premise and in the cloud.

In reference to the Ponemon research, 75 percent of organizations experienced a material cyber-attack or data breach within the last year due to a compromised application. Interestingly, only 25 percent of these IT professionals say their organization is making a significant investment in solutions to prevent application attacks despite the awareness of the negative impact of malicious activity.

Because of frightening statistics like these, it is essential to implement a set of policies that provide continued protection of applications with regular vulnerability management and remediation practices, which can even be automated to ensure that application changes don’t open up vulnerabilities.  
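The automated remediation check described above can be reduced to a simple comparison: which deployed components are running below the minimum patched version in an advisory feed? The sketch below assumes a hypothetical advisory dict; a real pipeline would populate it from a vulnerability scanner or CVE feed, and simple dotted versions are assumed for illustration.

```python
# Sketch of a continuous vulnerability check. The advisory feed here is a
# hypothetical dict of component -> minimum patched version; real pipelines
# pull this from a scanner or CVE feed. Simple dotted versions are assumed.

def parse(version: str):
    return tuple(int(part) for part in version.split("."))

def vulnerable_components(deployed: dict, advisories: dict) -> list:
    """Return components running a version below the patched minimum."""
    return [name for name, version in deployed.items()
            if name in advisories and parse(version) < parse(advisories[name])]
```

Run on every deploy or on a schedule, this turns “patching is reactive” into an automated gate: the application change that reintroduces a vulnerable component fails the check immediately.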

Security aligned with the cloud

Here are some best practices for effective application security in a cloud generation:

  1. Application security must satisfy the most demanding use cases specific to cloud-hosted applications, without carrying the management overhead of legacy on-premises architectures.
  2. A fully featured API should provide complete control via the orchestration tools already used by DevOps teams.
  3. Security needs to be deployable in high-availability clusters and auto-scaled with the use of cloud templates, and it should be managed and monitored from a single-pane-of-glass user interface.
  4. It is imperative that security integrates directly with native public cloud services, including Elastic Load Balancing, AWS CloudWatch, Azure ExpressRoute, Azure OMS and more.
  5. Security technologies must provide complete licensing flexibility, including pure consumption-based billing. This allows you to deploy as many instances as needed and pay only for the traffic secured through those applications.

Basically, securing applications effectively in the cloud means adopting new ways of thinking about security, and it is critical to look at the security technology stack you have deployed today. Assess what is lacking and adopt what is required for regular monitoring and vulnerability remediation on those applications. It is key to focus on protecting each application with the right level of security. This means deploying security that is aligned with your current cloud consumption and leveraging tools designed for those cloud environments that allow you to build security controls.

About the author: Jonathan Bregman has global responsibility for leading Barracuda's web application security product marketing strategy. He joins Barracuda from Seattle, WA where he worked with Microsoft, Amazon and their ISV partners to build innovative marketing programs focused on driving awareness and demand for emerging products in enterprise software, cloud services and cybersecurity.

Understanding the Role of Multi-Stage Detection in a Layered Defense
Tue, 08 May 2018 03:12:25 -0500

The cybersecurity landscape has changed dramatically during the past decade, with threat actors constantly changing tactics to breach businesses’ perimeter defenses, cause data breaches, or spread malware. New threats, new tools, and new techniques are regularly chained together to pull off advanced and sophisticated attacks that span multiple deployment stages, in an effort to be as stealthy, as pervasive, and as effective as possible without triggering any alarm bells from traditional security solutions.

Security solutions have also evolved, encompassing multi-stage and multi-layered defensive technologies aimed at covering all potential attack vectors and detecting threats at pre-execution, on-execution, or even throughout execution.

Multi-Stage Detection

All malware is basically code that’s stored (on disk or in memory) and executed, just like any other application. Because malware is delivered as a file or binary, security technologies refer to these states of malware detection as pre-execution and on-execution. Basically, it boils down to detecting malware before, or after, it gets executed on the victim’s endpoint.

Layered security solutions often cover these detection stages with multiple security technologies specifically designed to detect and prevent zero-day threats, APTs, fileless attacks and obfuscated malware from reaching or executing on the endpoint.

For example, pre-execution detection technologies often include signatures and file fingerprints matched against cloud lookups, local and cloud-based machine learning models aimed at ascertaining the likelihood that an unknown file is malicious based on its similarity to known malicious files, and hyper detection technologies, which are essentially machine learning algorithms on steroids.

It helps to think of hyper detection technologies as paranoid machine learning algorithms for detecting advanced and sophisticated threats at pre-execution, without taking any chances. This is particularly useful for organizations in detecting potentially advanced attacks, as they can inspect and detect malicious commands and scripts - including VB scripts, JavaScript, PowerShell scripts, and WMI scripts - that are usually associated with sophisticated fileless attacks.
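The simplest layer in the pre-execution stack, exact fingerprint matching, can be shown in a few lines: hash the file’s bytes before they ever run and compare against known-bad fingerprints. This is only the baseline layer; real engines add fuzzy similarity and the machine learning models described above on top.

```python
# Minimal sketch of pre-execution fingerprint matching: hash the file's
# bytes before execution and compare against known-bad hashes. Real
# engines layer fuzzy similarity and ML models over this exact match.
import hashlib

def fingerprint(file_bytes: bytes) -> str:
    return hashlib.sha256(file_bytes).hexdigest()

def verdict(file_bytes: bytes, known_bad: set) -> str:
    return "block" if fingerprint(file_bytes) in known_bad else "allow"
```

The weakness that motivates the ML layers is visible here too: flipping a single byte changes the hash entirely, which is why exact matching alone cannot catch variants.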

On-execution security technologies sometimes involve detonating the binary inside a sandboxed environment, letting it execute for a specific amount of time, then analyzing all system changes the binary made, the internet connections it attempted, and any other changes or behavior the binary caused on the system after it was executed. A sandbox analyzer is highly effective because there’s no risk of infecting a production endpoint, and the security tools used to analyze the binary can be set to a highly paranoid mode; running the same analysis on a production endpoint would typically cause performance penalties, and could even risk compromising the organization’s network should the threat actually breach containment.
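The post-detonation analysis step boils down to diffing system state snapshots taken before and after the sample runs. The sketch below models a snapshot as a plain dict of path to content hash, purely for illustration; real sandboxes capture files, registry keys, processes and network activity.

```python
# Sketch of sandbox post-detonation analysis: diff state snapshots taken
# before and after the sample runs. Snapshots are modelled as simple
# dicts (path -> content hash); real sandboxes capture far more.

def diff_snapshots(before: dict, after: dict) -> dict:
    return {
        "created":  sorted(set(after) - set(before)),
        "deleted":  sorted(set(before) - set(after)),
        "modified": sorted(path for path in set(before) & set(after)
                           if before[path] != after[path]),
    }
```

The resulting created/deleted/modified report is exactly the raw material the analyst (or an automated verdict engine) uses to decide whether the behavior was malicious.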

Of course, there are on-execution technologies that are deployed on endpoints to specifically detect and prevent exploits from occurring or for monitoring the behavior of running applications and processes throughout their entire lifetime. These technologies are designed to constantly assess the security status of all running applications, and prevent any malicious behavior from compromising the endpoint.

Layered Security Defenses

Multi-stage detection using layered security technologies gives security teams the unique ability to stop the attack kill chain at almost any stage of attack, regardless of the threat’s complexity. For instance, while a tampered document that contains a malicious Visual Basic script might bypass an email filtering solution, it will definitely be picked up by a sandbox analyzer technology as soon as the script starts to execute malicious instructions or commands, or starts to connect to and download additional components on the endpoint.

It’s important to understand that the increased sophistication of threats requires security technologies capable of covering multiple stages of attack, creating a security mesh that acts as a safety net to protect your infrastructure and data. However, it’s equally important that all these security layers be managed from a centralized console that offers a single pane of glass visibility into the overall security posture of the organization. This makes managing security aspects less cumbersome, while also helping security and IT teams focus on implementing prevention measures rather than fighting alert fatigue.

About the author: Liviu Arsene is a Senior E-Threat analyst for Bitdefender, with a strong background in security and technology. Reporting on global trends and developments in computer security, he writes about malware outbreaks and security incidents while coordinating with technical and research departments.

VirusTotal Browser Extension Now Firefox Quantum-Compatible
Sat, 05 May 2018 10:23:52 -0500

VirusTotal released an updated VTZilla browser extension this week to offer support for Firefox Quantum, the new and improved Web browser from Mozilla.

The browser extension was designed with a simple goal in mind: allow users to send files for scanning via an option in the Download window, and to submit URLs via an input box.

The VTZilla extension already proved highly popular among users, but version 1.0, which had not received an update since 2012, no longer worked with Mozilla’s browser after Firefox Quantum discontinued support for old extensions.

Starting toward the end of last year, Mozilla required all developers to update their browser extensions to WebExtensions APIs, a new standard in browser extensions, and VirusTotal is now complying with the requirement.

The newly released VTZilla version 2.0 builds on the success of the previous version and brings along increased ease-of-use, more customization options, and transparency.

Once the updated browser extension has been installed, the VirusTotal icon appears in the Firefox Quantum’s toolbar, allowing quick access to various configuration options.

Clicking on the icon enables users to customize how files and URLs are sent to VirusTotal, as well as to choose a level of contribution to the security community they want.

“Users can then navigate as usual. When the extension detects a download it will show a bubble where you can see the upload progress and the links to file or URL reports,” VirusTotal’s Camilo Benito explains.

“These reports will help users to determine if the file or URL in use is safe, allowing them to complement their risk assessment of the resource,” Benito continues.

Previously, only the pertinent URL tied to the file download was scanned, and access to the file report was available only via the URL report and only if VirusTotal servers had been able to download the pertinent file.

VTZilla also allows users to send any other URL or hash to VirusTotal, and other features are only a right-click away.
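Submitting a hash works because VirusTotal keys file reports on the file’s hash, so a locally computed digest is enough to look a report up. The sketch below shows the idea; the report URL format is an assumption for illustration, not an official API contract.

```python
# Sketch: VirusTotal identifies a file report by the file's hash, so a
# local SHA-256 is enough to reference one. The URL format below is an
# assumption for illustration only.
import hashlib

def report_url_for(file_bytes: bytes) -> str:
    sha256 = hashlib.sha256(file_bytes).hexdigest()
    return "https://www.virustotal.com/#/file/" + sha256
```

Note that nothing but the digest leaves the machine, which is why hash lookups are a privacy-friendly first step before uploading the file itself.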

VirusTotal is determined to improve the extension and add functionality to it, and is open to feedback and suggestions. The Google-owned service can now also make the extension compatible with other browsers that support the WebExtensions standard.

The extension revamp will soon be followed by VTZilla features that should allow users to further help the security industry fight malware. “Even non-techies will be able to contribute,” Benito says.

Related: VirusTotal Launches New Android Sandbox

Related: VirusTotal Launches Visualization Tool

PyRoMine Malware Sets Security Industry on Fire
Thu, 03 May 2018 09:50:58 -0500

It’s happened once again...

Recent headlines heralded the latest in cryptomining hacks to leverage stolen NSA exploits. This time in the form of PyRoMine, a Python-based malware which uses an NSA exploit to spread to Windows machines while also disabling security software and allowing the exfiltration of unencrypted data. By also configuring the Windows Remote Management Service, the machine becomes susceptible to future attacks.

Despite all the investments in cyber protection and prevention technology, it seems that the attacker’s best tool is nothing more than a variation on a previous exploit, because most security products can’t detect every variation of zero-day malware in time to prevent the ensuing damage.

Cryptomining Beats Out Ransomware

Ransomware was the threat that wreaked havoc across organizations for years and sent most IT security professionals into a panic at the mere mention of a new exploit hitting the headlines. Now, however, it seems that ransomware is taking a back seat to cryptominers. According to a recent article by Jon Martindale titled “Cryptojacking is the new ransomware. Is that a good thing?”:

“In our history of malware feature, we looked at how malware tends to come in waves. While the latest and most dangerous in recent memory has been ransomware, it’s been pushed far from the top spot of common attacks in recent months by the advent of cryptominers, which look to force infected systems to mine cryptocurrency directly.”

The article goes further with this quote from a Senior E-Threat analyst on the expected growth of this type of threat:

“Since cybercriminals are always financially motivated, cryptojacking is yet another method for them to generate revenue,” said Liviu Arsene, senior E-Threat analyst at BitDefender. “Currently, it’s outpacing ransomware reports by a factor of 1 to 100, and these numbers will continue to increase for as long as virtual currencies remain popular and the market demands it.”

Variations on Old Hacks

Everything old is new again, or so goes an old adage, and it seems to apply to cyber threats as well. Fortinet researchers spotted a malware dubbed ‘PyRoMine’ which uses the ETERNALROMANCE exploit to spread to vulnerable Windows machines, according to an April 24 blog post.

“This malware is a real threat as it not only uses the machine for cryptocurrency mining, but it also opens the machine for possible future attacks since it starts RDP services and disables security services," the blog said. "FortiGuardLabs is expecting that commodity malware will continue to use the NSA exploits to accelerate its ability to target vulnerable systems and to earn more profit.”

The malware isn't the first cryptocurrency miner to use previously leaked NSA exploits, but it is still a threat: it leaves machines vulnerable to future attacks because it starts RDP services and disables security services.

The odds are great that we will see other variations on this NSA exploit before the year is up. Now is clearly the time to start evaluating other technologies that take more preventative steps to protect your IT infrastructure.

About the author: Boris Vaynberg co-founded Solebit LABS Ltd. in 2014 and serves as its Chief Executive Officer. Mr. Vaynberg has more than a decade of experience in leading large-scale cyber- and network security projects in the civilian and military intelligence sectors.

Copyright 2010 Respective Author at Infosec Island]]>
GDPR Is Coming. Is Your Organization Ready? Tue, 01 May 2018 06:15:00 -0500 On May 25th of 2018, the General Data Protection Regulation (GDPR) goes into effect. This is a law passed in 2016 by the member states of the European Union that requires compliance with regard to how organizations store and process the personal data of individual residents of the EU. Now maybe you are thinking that this regulation does not apply to your organization because it is not based in the EU. Don’t stop reading just yet.

This regulation applies to any organization that offers goods or services to EU residents and/or processes the personal information of EU residents, regardless of whether the organization is based in the EU or not. And the law does not apply only to the huge multinational companies of the world. It applies to small businesses as well. For example, consider an e-commerce business that sells T-shirts online to people in the EU. Or perhaps an email marketing company that sends out periodic emails to EU citizens. Or even a message board website that allows users to create profiles and gathers personal information during the registration process. The GDPR would apply to all these businesses, no matter how big or small.

This regulation is the biggest change to the protection of individual personal data in over twenty years and is far reaching in its scope. It is important to understand if and how it applies to your organization.

What Type Of Data Is Protected?

The GDPR is meant to protect the personal data and fundamental rights and freedoms of natural persons in the EU. It does this by requiring organizations to implement strict policies, procedures and technical controls when processing the personal data of EU citizens. The regulation defines the term “personal data” very broadly. According to the regulation, personal data means “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.” Examples of personal data would include name, email address, IP address, physical address, photos, gender, health information and national identification number.

The term processing is also defined very broadly. According to the GDPR, processing means “any operation or set of operations which is performed on personal data or on sets of personal data, whether or not by automated means, such as collection, recording, organization, structuring, storage, adaptation or alteration, retrieval, consultation, use, disclosure by transmission, dissemination or otherwise making available, alignment or combination, restriction, erasure or destruction.” Examples of processing would include simple storage of the data, sending out marketing emails, collecting personal data when a visitor places an order, processing a credit card transaction, and any other type of storage, processing or manipulation of personal data that occurs during the normal course of business.

Finally, the regulation applies to both the automated processing of data as well as the processing of data by non-automated means. In short, the regulation applies to both digital and non-digital forms of data. Examples of non-digital forms of data would include hard copies of contracts, health records, marketing information and any other type of medium containing the personal data of EU citizens.

Which Organizations Are Affected?

According to Article 3 of the GDPR, the regulation “applies to the processing of personal data in the context of the activities of an establishment of a controller or a processor in the Union, regardless of whether the processing takes place in the Union or not.” Furthermore, it applies “to the processing of personal data of data subjects who are in the Union by a controller or processor not established in the Union, where the processing activities are related to: 1) the offering of goods or services, irrespective of whether a payment of the data subject is required, to such data subjects in the Union; or 2) the monitoring of their behaviour as far as their behaviour takes place within the Union.” Finally, the regulation states that it “applies to the processing of personal data by a controller not established in the Union, but in a place where Member State law applies by virtue of public international law.”

So what does all this mean? First, if your organization collects personal data or behavioral information from someone residing in an EU country at the time the data is collected, your company is subject to the requirements of the GDPR, regardless of whether or not your organization is based in the EU, or even has a presence in the EU. Second, the law does not require that a financial transaction take place for the law to apply. If an organization simply collects the personal data of EU persons, then the requirements of the GDPR apply to the organization, even if the organization is based outside the EU. In sum, if your organization sells or markets goods or services to EU countries, or if your organization collects the personal data of people living in the EU, then the GDPR applies to your organization regardless of whether the organization has a presence in the EU or not.

What Are the Requirements?

The overarching goal of the GDPR is the protection of the personal data of EU citizens. As such, the GDPR requires that organizations take measures to ensure that they are implementing policies and controls that will reduce the risk of potential data breaches and will also provide transparency to the data subjects. Below is a list of the most prominent provisions of the GDPR:

  • Lawful Basis for Processing – Before an organization can begin processing the personal data of EU citizens, it must first determine if it has a lawful basis to do so. The GDPR outlines six reasons for lawfully processing personal data such as legal obligations, contracts or vital interests. The most common lawful basis that most businesses will rely on is consent from the data subject. The manner for obtaining consent must be clear, concise and transparent. It also must require subjects to explicitly opt-in, not opt-in by default. It is extremely important for each organization to determine the basis on which it may lawfully process the personal data of the subjects.
  • Privacy and Security – Organizations that collect the personal data of EU citizens may only store and process data when it’s absolutely necessary. Data protection and privacy must be integrated into an organization’s data processing activities (privacy by design). Furthermore, organizations must provide protection against unauthorized or unlawful processing and against accidental loss, destruction or damage. The regulation requires that appropriate technical and/or organizational measures are used, including a method to anonymize data so that it cannot be tied back to a specific individual (e.g. data encryption). Organizations must also perform a data protection impact assessment (DPIA) for certain types of processing that are likely to result in a high risk to individuals’ interests. Finally, depending on the scale of personal information an organization processes, a data protection officer (DPO) must be assigned within the organization to ensure compliance with the GDPR.
  • Individual Rights – Data subjects have a number of individual rights according to the GDPR. Most importantly, individuals have the right to be informed about the collection and use of their personal data. This includes informing them of the reason for processing their data, the retention policy for storing the data, and who it will be shared with. Organizations must provide an individual residing in the EU with access to the personal data gathered about them upon request. Data subjects have the right to request that their data be erased (known as the “right to be forgotten”). Organizations have one month to respond to such requests. Finally, organizations must provide a way for individuals to transmit or move data collected on them from one data collector or data processor to another.
  • Breach Notification – The GDPR requires organizations to report data breaches to the relevant supervisory authority within 72 hours of becoming aware of the breach. If the breach is likely to result in a high risk of adversely affecting individuals’ rights and freedoms, the organization must also inform those individuals of the breach “without undue delay”. As a result of the requirement, organizations will need to ensure that they have a robust breach detection, investigation and internal reporting procedure in place. Finally, organizations must keep a record of all data breaches regardless of whether or not notification of any particular breach is required.
  • Minors – Children are provided additional protections under the GDPR, and organizations that collect the personal data of minors must take special care when doing so. When offering an online service directly to a child, only children aged 13 or over are able to provide their own consent. For children under age 13, an organization must also obtain the consent of the child’s parent or legal guardian. Children merit specific protection when an organization uses their personal data for marketing purposes or for creating personality or user profiles. Organizations must write clear privacy notices for children so that they are able to understand what will happen to their personal data, and what rights they have.
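The anonymization measure mentioned under Privacy and Security above (making data impossible to tie back to a specific individual) is often implemented as pseudonymization: replacing direct identifiers with a keyed hash so records remain linkable internally without storing the raw personal data. The following is a minimal Python sketch under assumed field names; a real implementation would also involve key management, a documented lawful basis, and legal review:

```python
import hashlib
import hmac

# Hypothetical example: pseudonymize direct identifiers with a keyed hash
# (HMAC-SHA256). The secret key must be stored separately from the data;
# without it, the pseudonyms cannot be re-created or linked to individuals.
SECRET_KEY = b"store-this-key-in-a-separate-secure-location"

def pseudonymize(value: str) -> str:
    """Return a stable, non-reversible pseudonym for a personal identifier."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative record layout (not from the regulation itself)
record = {"name": "Jane Doe", "email": "jane@example.com", "order_total": 49.95}
safe_record = {
    "name": pseudonymize(record["name"]),
    "email": pseudonymize(record["email"]),
    "order_total": record["order_total"],  # non-personal data kept as-is
}
```

Because the same input always yields the same pseudonym, analytics and deduplication still work on the pseudonymized records, while a breach of the data store alone does not expose the identifiers.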

What Are the Penalties for Noncompliance?

The fines associated with noncompliance with the GDPR can be quite substantial. The regulation has a two-tiered system for determining fines based on the severity of the infraction(s). Before assessing fines, the supervisory authority may take into account the nature, gravity and duration of the infringement. It may also determine whether an organization was willfully negligent. Cooperation with the supervisory authority may also be taken into account when assessing fines. Below are the guidelines stated in the GDPR with regard to the assessment of financial penalties for noncompliance:

  1. Infringements that may be subject to administrative fines of up to 10,000,000 EUR or 2% of the total worldwide annual turnover of the preceding financial year, whichever is higher:
    • Violations of the provisions regarding data security obligations and privacy-by-default measures that need to be taken to protect data from unauthorized access
    • Not having an assigned DPO or the DPO not fulfilling her obligations
    • Violations of the DPIA requirement
    • Violations of the requirement to conclude a processing agreement with all data processors that are engaged by an organization
    • Violations of the requirement to keep a record of the processing activities carried out
  2. Infringements that may be subject to administrative fines of up to 20,000,000 EUR or 4% of the total worldwide annual turnover of the preceding financial year, whichever is higher:
    • Violations of the basic principles for processing personal data (e.g. lawful basis for processing)
    • Violations of provisions regarding a data subject’s rights such as the right to erasure, access to personal data and the right to receive information regarding the processing of personal data
    • Violation of the provisions regarding the transfer of personal data to third countries
    • Noncompliance with an order by a supervisory authority

In addition to the fines outlined above, each EU member state shall also have the right to implement its own fines with regards to noncompliance. Moreover, they may also implement criminal penalties for violations.

How Is the GDPR Enforced?

For those organizations that are based in the EU or who have a legal presence in the EU (e.g. a multinational corporation with an office in an EU member state), the GDPR will be enforced directly by the EU member states’ authorities and their court systems. For organizations that are not based in the EU and also do not have a physical presence in the EU, the GDPR requires them to appoint a “representative” who is located in the EU if the organization is actively doing business in the EU. Presumably this representative will allow the EU to enforce the regulation on such entities.

Finally, the GDPR can be enforced through international law. Written into GDPR itself is a clause stating that any action against a company from outside the EU must be issued in accordance with international law. There has been long term and increasing enforcement cooperation between the United States and EU data protection authorities. For example, there is the EU-U.S. Privacy Shield data sharing agreement which puts systems in place for the EU to issue complaints and fines against U.S. companies. In sum, there are a variety of mechanisms in place for the EU to enforce the GDPR against organizations based outside the EU.

What to Do?

If you are an organization that falls under the scope of the GDPR, then it is in your best interest to comply with the regulation, even if you are not based in the EU and do not have a physical presence there. If you are already processing the data of EU citizens, or plan to in the future, making sure your organization is compliant is good business. Putting the fines aside, residents of the EU will want to make sure that any company they are doing business with is in compliance. Moreover, the privacy and security policies and controls required will help reduce the risk to your organization. There are also potential cost savings by reducing ROT data (redundant, outdated or trivial) in terms of storage and backup costs. Being compliant may also give you a business advantage over competitors who are not.

One of the things that will likely come out of this regulation is a GDPR certification. Businesses who obtain such a certification may be able to display a certification seal on their website and other marketing material which will provide confidence to potential customers. Finally, expect your business partners to start requiring GDPR compliance even if you are not directly impacted. GDPR compliance is here to stay. Given the current events around online privacy in the United States (e.g. Facebook data disclosure), it is not inconceivable that the U.S. could also pass a similar regulation to protect individual privacy. Embracing the GDPR will only help your organization in the long run.

About the Author: Mark Baldwin is the owner and principal consultant at Tectonic Security. He has nearly 20 years of experience in the information security field and holds numerous certifications including CISSP and CISM.

Copyright 2010 Respective Author at Infosec Island]]>
Non-Malware Attacks: What They Are and How to Protect Against Them? Thu, 26 Apr 2018 07:17:05 -0500 Non-malware attacks are on the rise. According to a study by the Ponemon Institute, 29 percent of the attacks organizations faced in 2017 were fileless. And in 2018, this number may increase up to 35 percent.

So, what are non-malware attacks, how do they differ from traditional threats, why are they so dangerous, and what can you do to prevent them? Keep reading and you’ll learn the answer to each of these questions.

Non-malware attacks: what are they?

A non-malware, or fileless, attack is a type of cyber attack in which the malicious code has no body in the file system. In contrast to attacks carried out with the help of traditional malicious software, non-malware attacks don’t require installing any software on a victim’s machine. Basically, hackers have found a way to turn Windows against itself and carry out fileless attacks using built-in Windows tools.

The idea behind non-malware attacks is pretty simple: instead of dropping custom tools that could be flagged as malware, hackers use the tools that already exist on a device, take over a legitimate system process and run the malicious code in its memory space. This approach is also called “living off the land.”

This is how a non-malware attack usually happens:

  1. A user opens an infected email or visits an infected website
  2. An exploit kit scans the computer for vulnerabilities and uses them for inserting malicious code into one of Windows system administration tools
  3. Fileless malware runs its payload in an available DLL and starts the attack in the memory, hiding within a legitimate Windows process

Fileless malware can be downloaded from an infected website or email, introduced as malicious code from an infected application, or even distributed within a zero-day vulnerability.

Why are non-malware attacks so dangerous?

One of the main problems posed by fileless malware is that it doesn’t use traditional malware files and, therefore, has no signatures that anti-malware software can use to detect it. This makes detecting fileless attacks extremely difficult.

To understand better why they pose so much danger, let’s take a look at some of the most recent examples of fileless attacks.

Among the first examples of fileless malware were terminate-and-stay-resident (TSR) viruses. TSR viruses had a body from which they started, but once the malicious code was loaded into memory, the executable file could be deleted.

Malware that exploits vulnerabilities in scripting engines such as JavaScript or PowerShell is also considered fileless. Even the much-discussed WannaCry and NotPetya ransomware attacks used fileless techniques as part of their kill chains.

Another example of a non-malware attack is the UIWIX threat. Just like WannaCry and Petya, UIWIX uses the EternalBlue exploit. It doesn’t drop any files on the disk but instead enables the installation of the DoublePulsar backdoor that lives in the kernel’s memory.

How do non-malware attacks work?

Since non-malware attacks use default Windows tools, they manage to hide their malicious activity behind the legitimate Windows processes. As a result, they become nearly undetectable for most anti-malware products.

Main non-malware attack targets

Attackers need to obtain as many resources as possible while keeping their malicious activity undetected. This is why the majority of fileless attacks focus on one of two targets:

  • Windows Management Instrumentation (WMI)
  • PowerShell

Depending on their targets, fileless attacks may either run in RAM or exploit vulnerabilities in software scripts.

Attackers choose WMI and PowerShell for several reasons. First, both tools are built into every modern version of Windows, making it easier for hackers to spread their malicious code. Second, turning off either tool is not a good idea, since doing so would significantly limit what network administrators can do. Some experts, however, suggest disabling WMI and PowerShell anyway as a preventive measure against fileless attacks.
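To make the monitoring side concrete: one common defensive check is to flag PowerShell invocations that hide their payload behind the -EncodedCommand flag, which wraps a script as Base64 of UTF-16LE text. The sketch below is hypothetical; the log line format and function names are illustrative and not tied to any specific product:

```python
import base64
import re

# Hypothetical sketch: scan exported process-creation log lines for
# PowerShell invocations using -EncodedCommand (or its -e / -enc
# abbreviations), a common way to hide script payloads from inspection.
ENCODED_FLAG = re.compile(r"-e(nc(odedcommand)?)?\s+([A-Za-z0-9+/=]{16,})",
                          re.IGNORECASE)

def flag_encoded_powershell(log_line: str):
    """Return the decoded hidden command if the line carries one, else None."""
    if "powershell" not in log_line.lower():
        return None
    match = ENCODED_FLAG.search(log_line)
    if not match:
        return None
    try:
        # PowerShell encodes the command as Base64 of UTF-16LE text
        return base64.b64decode(match.group(3)).decode("utf-16-le")
    except (ValueError, UnicodeDecodeError):
        return None

# Illustrative log line built for the demo
line = ("powershell.exe -NoProfile -EncodedCommand "
        + base64.b64encode("Get-Process".encode("utf-16-le")).decode("ascii"))
print(flag_encoded_powershell(line))  # Get-Process
```

A real deployment would feed such a check from Windows event logs (e.g. process-creation auditing) rather than plain strings, but the decoding logic is the same.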

4 common types of non-malware attacks

There are many types and variations of fileless malware. Below, we listed the four most common ones:

  • Fileless persistence methods ― the malicious code continues to run even after the system reboot. For instance, malicious scripts may be stored in the Windows Registry and re-start the infection after a reboot.
  • Memory-only threats ― the attack executes its payload in the memory by exploiting vulnerabilities in Windows services. After a reboot, the infection disappears.
  • Dual-use tools ― the existing Windows system tools are used for malicious purposes.
  • Non-Portable Executable (PE) file attacks ― a type of dual-use tool attack that uses legitimate Windows tools and applications as well as scripting engines such as PowerShell, CScript or WScript.

Non-malware attack techniques

In order to perform a non-malware attack, hackers use different techniques. Here are the four most frequently used ones:

  • WMI persistence ― WMI repository is used for storing malicious scripts that can be periodically invoked via WMI bindings.
  • Script-based techniques ― hackers may use script files for embedding encoded shellcodes or binaries without creating any files on the disk. These scripts can be decrypted on the fly and executed via .NET objects.
  • Memory exploits ― fileless malware may be run remotely using memory exploits on a victim’s machine.
  • Reflective DLL injection ― malicious DLLs are loaded into a process’s memory manually, without the need to save these DLLs on the disk. The malicious DLL can be either embedded in infected macros or scripts, or hosted on a remote machine and delivered through a staged network channel.
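Script-based techniques often leave one observable trace: a long, high-entropy Base64 run embedded in an otherwise ordinary script, concealing shellcode or a binary. Here is a minimal, hypothetical Python sketch of such a detector; the length and entropy thresholds are illustrative and would need tuning against real data:

```python
import math
import re
from collections import Counter

# Hypothetical sketch: flag suspiciously long, high-entropy Base64 runs
# inside a script body, a common indicator of an embedded shellcode or
# binary payload in script-based fileless attacks.
BASE64_RUN = re.compile(r"[A-Za-z0-9+/]{40,}={0,2}")

def shannon_entropy(text: str) -> float:
    """Bits of entropy per character, based on character frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def find_embedded_payloads(script_text: str, min_entropy: float = 4.0):
    """Return Base64-looking runs that are both long and high-entropy."""
    return [run for run in BASE64_RUN.findall(script_text)
            if shannon_entropy(run) >= min_entropy]

clean = "Write-Host 'Backup complete'"
print(find_embedded_payloads(clean))  # []
```

The entropy filter is what separates real payloads from benign long tokens such as repeated padding or fixed identifiers, which match the Base64 character class but have a narrow character distribution.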

Now, it’s time to talk about the ways you can protect your company against non-malware attacks.

5 ways of protection against non-malware attacks

Experts offer different ways of preventing and stopping fileless malware: from disabling the most vulnerable Windows tools to using next-generation anti-malware solutions. The following five suggestions may be helpful in protecting your company network against non-malware attacks.

  1. Restrict unnecessary management frameworks. The majority of non-malware threats are based on vulnerabilities found in management frameworks like PowerShell or WMI. Attackers use these frameworks to secretly execute commands on a victim’s machine while the infection lives in its memory. Thus, it is better to disable these tools wherever possible.
  2. Disable macros. Disabling macros altogether prevents unsecure and untrusted code from running on your system. If using macros is a requirement for your enterprise’s end users, you can digitally sign trusted macros and restrict the usage of any other types of macros.
  3. Monitor unauthorized traffic. By constantly monitoring the security appliance logs from different devices, you can detect unauthorized traffic in your company’s network. It would also be helpful to record a set of baselines to understand better the network operating flow and be able to detect any anomalies, such as devices communicating with unauthorized remote devices or transmitting inordinate amounts of data.
  4. Use next-generation endpoint security solutions. In contrast to traditional anti-malware software, some endpoint solutions have a heuristics component able to perform basic system behavior analysis. Since certain types of malware have a specific set of common behavioral characteristics, heuristics-based methods can halt activities that look like behavior-based threats, thus stopping a possible attack from delivering its full payload. In the case of a false positive, end users may manually authorize the process to continue.
  5. Keep all the devices updated. Patch management plays a significant role in securing your system and preventing possible breaches. By delivering the latest patches timely, you can effectively increase the level of your system’s protection against non-malware attacks.
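Suggestion 3 above, recording baselines and flagging anomalies, can be sketched very simply. The following hypothetical Python example baselines one host's daily outbound transfer volume and flags large deviations; real tooling would track many more dimensions (ports, peers, time of day) over a much longer window:

```python
import statistics

# Hypothetical sketch: record a baseline of per-host outbound transfer
# volumes, then flag a current volume that is an outlier (here, more than
# 3 standard deviations above the baseline mean).
def build_baseline(samples):
    """samples: list of daily outbound byte counts for one host."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(current_bytes, baseline, n_sigma=3.0):
    """True if the current volume exceeds mean + n_sigma * stdev."""
    mean, stdev = baseline
    return current_bytes > mean + n_sigma * stdev

# Illustrative history of daily outbound bytes for one device
history = [1_200_000, 1_350_000, 1_100_000, 1_280_000, 1_190_000]
baseline = build_baseline(history)
print(is_anomalous(1_300_000, baseline))   # a normal daily volume
print(is_anomalous(25_000_000, baseline))  # possible exfiltration
```

A device suddenly transmitting twenty times its usual volume, or talking to an unfamiliar remote host, is exactly the kind of deviation this baseline comparison is meant to surface.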

Fileless attacks are on the rise mostly because they are so difficult to detect by standard anti-malware solutions. And while effectively detecting non-malware threats remains a challenge, these tips may help you prevent possible attacks from happening.

About the author: Marcell Gogan is a specialist within digital security solution business design and development, virtualization and cloud computing R&D projects, establishment and management of software research direction. He also loves writing about data management and cyber security.

Copyright 2010 Respective Author at Infosec Island]]>
SAP Cyber Threat Intelligence Report – April 2018 Thu, 19 Apr 2018 09:44:00 -0500 The SAP threat landscape is always expanding, putting organizations of all sizes and industries at risk of cyber attacks. The goal of the monthly SAP Cyber Threat Intelligence report is to provide insight into the latest security vulnerabilities and threats.

Key takeaways

  • This set of SAP Security Notes consists of 16 patches with the majority of them rated medium.
  • Implementation Flaw is the most common vulnerability type.
  • A security vulnerability affecting SAP Business Client received the highest CVSS base score of the year: 9.8.

SAP Security Notes – April 2018

SAP has released the monthly critical patch update for April 2018. This patch update closes 16 SAP Security Notes (12 SAP Security Patch Day Notes and 4 Support Package Notes). Five of the patches are updates to previously released Security Notes.

Four of the notes were released after the second Tuesday of the previous month and before the second Tuesday of this month.

One of the released SAP Security Notes was rated Hot News, and four have a High priority rating.

The most common vulnerability type is Implementation Flaw.

SAP users are recommended to implement security patches as they are released as it helps protect the SAP landscape.

Critical issues closed by SAP Security Notes in April

The most dangerous vulnerabilities of this update can be patched with the help of the following SAP Security Notes:

  • 2622660: SAP Business Client has a security vulnerability (CVSS Base Score: 9.8). Attackers can exploit this memory corruption vulnerability to inject specially crafted code into working memory, which is then executed by the vulnerable application. This can lead to complete control of the application, denial of service, command execution and other attacks, with negative consequences for business processes and business reputation. Install this SAP Security Note to prevent these risks.
  • 2587985: SAP Business One has a Denial of Service (DoS) vulnerability (CVSS Base Score: 7.5, CVE-2017-7668). An attacker can use this vulnerability to terminate a process of the vulnerable component. While the process is down, the service is unavailable, which disrupts business processes, causes system downtime and damages business reputation. Install this SAP Security Note to prevent these risks.
  • 2552318: SAP Visual Composer has a Code Injection vulnerability (CVSS Base Score: 7.4). Update 1 to Security Note 2376081. Depending on the injected code, attackers can perform different actions: run their own code, obtain additional information that should be hidden, change or delete data, modify the output of the system, create new users with higher privileges, control the behavior of the system, escalate privileges, or perform a DoS attack. Install this SAP Security Note to prevent these risks.

Advisories for these SAP vulnerabilities, including technical details, will be available in three months. Exploits for the most critical vulnerabilities are already available in ERPScan Security Monitoring Suite.

Copyright 2010 Respective Author at Infosec Island]]>
Cloud Security Alert – Log Files Are Not the Answer Wed, 18 Apr 2018 05:52:00 -0500 Once production applications and workloads have been moved to the cloud, re-evaluating the company’s security posture and adjusting the processes used to secure data and applications from cyberattacks are critical next steps.

Cloud infrastructure is ideal for providing resources on demand and significantly reducing the cost of acquiring, deploying and maintaining internal resources.  

In addition, organizations can quickly scale cloud resources up or down, eliminating the need to over-provision just in case. But losing control over the physical infrastructure means not being able to use familiar tools to gain insight into what is happening in that infrastructure.

Anyone responsible for IT security needs a strategy for monitoring what is happening in their company’s cloud, so they can shut down any attacks that occur and limit the damage.  

The use of log files

While users do not have direct access to public cloud infrastructure, cloud providers do offer access to logs of events that have taken place in the user’s cloud—often for an additional cost. With logs, administrators can view, search, analyze, and even respond to specific events if they use APIs to integrate the event data with a security information and event management (SIEM) solution.  So why aren’t log files sufficient to maintain security?

First, all necessary data may not be collected through log files. While management events are automatically logged, data events are not. Some providers may support collection of custom logs, but users would need to specify and activate the logs ahead of time. This makes it difficult or sometimes impossible to go back and investigate areas that were not already being tracked.

Second, while event logs are useful for identifying when an alert was triggered, they do not provide enough information to determine what caused the alert. More detailed information is needed to perform root cause analysis and execute timely remediation. The rise of advanced persistent threats (APTs) as the most damaging type of breach cannot be stopped by merely analyzing log files. The most advanced network security solutions require detailed data in real-time to have a chance of detecting APTs. Log files are typically generated at specified intervals, depending on the level of service the user pays for. Users then need to set up a mechanism for storing log files for future analysis; this is not the default. So, while data useful in a breach investigation can be collected, it is not available in real-time and limits the speed of containment and recovery.

Third, sophisticated adversaries are increasingly adept at moving inside an organization without triggering any alerts. In many attacks, previously unseen malware enters an enterprise and lurks there undetected, exfiltrating data over a period of many months. Security today requires more rigorous oversight than log files provide.

And finally, in the long run, logs can be expensive to manage. Obtaining sufficient log data and sifting through it demands time, money, and a commitment to data integration. Existing security monitoring tools that use log data may not be sufficient to investigate new threats and investments may be required for additional tools. Security analysts could end up spending more time on complex data administration, rather than focusing on correlation analysis and incident response.  

What can packet data do?

Data packets are like nested Russian dolls with the content enclosed inside various headers that work to move the packet efficiently through the network. The headers can be very informative, but security today is dependent on what is called deep packet inspection (DPI) of the packet’s payload or content. DPI exposes the specific websites, users, applications, files, or hosts involved in an interaction—information that is not available by inspecting header data alone.
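The distinction between header inspection and DPI can be shown with a simplified, hand-built packet. This is illustrative only: the 20-byte IPv4-style header below carries no real checksum, and production DPI engines reassemble streams and decode protocols rather than searching raw bytes.

```python
import struct

def build_packet(src: str, dst: str, payload: bytes) -> bytes:
    """Build a minimal IPv4-style packet: 20-byte header + payload."""
    to_bytes = lambda ip: bytes(int(o) for o in ip.split("."))
    # version/IHL, TOS, total length, id, flags/frag, TTL,
    # protocol (6 = TCP), checksum (0 here), source, destination
    header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20 + len(payload),
                         0, 0, 64, 6, 0, to_bytes(src), to_bytes(dst))
    return header + payload

def inspect_header(packet: bytes) -> dict:
    """Header inspection: reveals only who is talking to whom."""
    src, dst = struct.unpack("!4s4s", packet[12:20])
    fmt = lambda b: ".".join(str(x) for x in b)
    return {"src": fmt(src), "dst": fmt(dst)}

def deep_inspect(packet: bytes, needle: bytes) -> bool:
    """DPI: looks inside the payload itself, past the header."""
    return needle in packet[20:]

pkt = build_packet("10.0.0.1", "10.0.0.2", b"GET /login HTTP/1.1")
```

Header inspection on `pkt` yields only the two addresses; only `deep_inspect` can tell that the payload contains a request to `/login`.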

Cloud environments have many potential vulnerabilities that attackers can exploit. And attacks are frequently conducted in multiple stages that may not be caught by intrusion detection systems or next-generation firewalls. To stay ahead of would-be attackers, security analysts increasingly use data correlation and multi-factor analysis to find patterns associated with illegitimate activity. These sophisticated solutions require granular data to work effectively. Most organizations have solutions like these deployed on-premises to evaluate packet data captured from physical infrastructure.  

How to gain access to packet level data in the cloud

Unlike physical infrastructure that can be tapped to produce copies of data packets, cloud architecture is not directly accessible. In the event of an ongoing attack or data breach, a user may be frustrated to learn that the data they need to isolate and resolve the issue is not included in the Service Level Agreement they have with their provider. Fortunately, there are new methods to access packet level data in clouds.

Container-based sensors have been developed that sit inside the cloud instances and generate copies of packet data. The sensors are automatically deployed inside every new cloud that is spun up, for unlimited scalability. Because the sensors are inside each cloud instance, they have access to every raw packet that comes or goes from that instance. This cloud-native approach to data access ensures no data is missed, for strong cloud security.  

What are the benefits of a cloud visibility platform?

Of course, having access to all the packet-level data from every cloud instance presents another problem—volumes of data that can overwhelm security solutions and even lead them to drop packets. A cloud visibility platform filters the raw packets according to user-defined rules and strips out unnecessary data, to deliver only the relevant data to each security solution. This enables security solutions to work more efficiently.  
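The rule-based filtering a visibility platform performs can be sketched as follows. The packet metadata fields and tool names here are illustrative assumptions, not any vendor's actual rule syntax; the point is that each security tool receives only the traffic matching its rule.

```python
def deliver(packets: list[dict], rules: dict[str, dict]) -> dict[str, list[dict]]:
    """Route each packet only to tools whose rule it matches."""
    out = {tool: [] for tool in rules}
    for pkt in packets:
        for tool, rule in rules.items():
            # A rule matches when every field it names equals the packet's value.
            if all(pkt.get(k) == v for k, v in rule.items()):
                out[tool].append(pkt)
    return out

# Hypothetical traffic and per-tool rules.
routed = deliver(
    [{"proto": "tcp", "dst_port": 443},
     {"proto": "udp", "dst_port": 53},
     {"proto": "tcp", "dst_port": 80}],
    {"web_ids": {"proto": "tcp"}, "dns_monitor": {"dst_port": 53}},
)
```

Here the hypothetical "web_ids" tool sees only TCP packets and "dns_monitor" sees only port-53 traffic, so neither is overwhelmed by the full raw stream.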

Today, there are two types of visibility platforms available for cloud workloads. One uses a lift-and-shift approach and takes the visibility engine developed for the data center and moves it to the cloud. The engine itself is a monolithic processor that aggregates and filters all the data in one location.  

The other approach distributes data aggregation and filtering to each of the cloud instances and communicates the results to a cloud-based management interface. Data can either be delivered directly from the cloud instances to cloud-based security monitoring solutions or backhauled to the data center. The distributed solution has the advantage of being highly scalable, since the data does not need to be transported to a central location for processing. And the distributed solution is more reliable, since there is no single point of failure.  

Whether responding to a security incident or data breach, or supporting litigation, an organization needs a highly effective cloud visibility platform for accessing and preserving the digital traffic that impacts its business. Log files are simply not able to fulfill that requirement.


Ultimately, log files are diagnostic tools. They are not security solutions and they cannot facilitate an effective response to a security threat or breach. With the rising use of advanced persistent threats and multi-stage attacks, effective security requires detailed packet-level data, from every interaction that happens in the cloud. The cost of capturing and filtering packet data will be offset by the increased ability of the security team to detect attacks and accelerate incident response.  

About the author: Lora is a Cloud Solution Marketing Manager for Ixia, a Keysight Business, where she uses her knowledge of network test, security, and visibility to communicate how Ixia solutions address a range of pressing IT challenges. Lora has more than 20 years of experience in technology management in a variety of domains including networking and network management, cloud and virtualization, servers, data mining, and enterprise resource software, as well as alliance partner development.

Copyright 2010 Respective Author at Infosec Island
Avoiding Holes in Your AWS Buckets
Thu, 12 Apr 2018 06:06:00 -0500

Enterprises are moving to the cloud at a breathtaking pace, and they’re taking valuable data with them. Hackers are right behind them, hot on the trail of as much data as they can steal. The cloud upends traditional notions of networks and hosts, and it topples security practices that use them as a proxy to protect data access. In public clouds, networks and hosts are no longer the most adequate control options available for resources and data.

Amazon Web Services (AWS) S3 buckets are the destination for much of the data moving to the cloud. Given how important this sensitive data is, one would expect enterprises to pay close attention to their S3 security posture. Unfortunately, many news stories highlight how many S3 buckets have been misconfigured and mistakenly left open to public access. It’s one of the most common security weaknesses in the great migration to the cloud, leaving gigabytes of data for hackers to grab.

When investigating why cloud teams were making what seemed to be an obvious configuration mistake, two primary reasons surfaced:

1. Too Much Flexibility (Too Many Options) Turns into Easy Mistakes

S3 is the oldest AWS service and was available before EC2 or Identity and Access Management (IAM). Some access control capabilities were built specifically for S3 before IAM existed. As it stands, there are five different ways to configure and manage access to S3 buckets.

  • S3 Bucket Policies
  • IAM Policies
  • Access Control Lists
  • Query string authentication/ static Web hosting
  • API access to change the S3 policies

More ways to configure implies more flexibility, but it also means a higher chance of making a mistake. The other challenge is that there are two separate policy scopes, one for buckets and one for the objects within them, which makes things more complex.
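The two scopes show up directly in bucket policy ARNs: bucket-level actions target `arn:aws:s3:::bucket-name`, while object-level actions target `arn:aws:s3:::bucket-name/*`, and granting one does not grant the other. The policy below is a hypothetical example (bucket name, account ID, and user are made up) expressed as a Python dict for illustration.

```python
import json

# Hypothetical policy granting a single IAM user both scopes.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Bucket-level: permission to list the bucket's contents.
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-bucket",
        },
        {   # Object-level: permission to read the objects themselves.
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:user/alice"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-bucket/*",
        },
    ],
}
print(json.dumps(policy, indent=2))
```

Omitting either statement leaves the user able to list objects but not read them, or read objects only if they already know the key, which is exactly the kind of subtlety that leads to misconfiguration.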

2. A “User” in AWS is Different from a “User” in your Traditional Datacenter

Amazon allows great flexibility in making sure data sharing is simple and users can easily access data across accounts or from the Internet. For traditional enterprises the concept of a “user” typically means a member of the enterprise. In AWS the definition of user is different. On an AWS account, the “Everyone” group includes all users (literally anyone on the internet) and “AWS Authenticated User” means any user with an AWS account. From a data protection perspective, that’s just as bad because anyone on the Internet can open an AWS account.

A customer moving from a traditional enterprise environment can, if not careful, easily misread the meaning of these access groups and open S3 buckets to “Everyone” or “AWS Authenticated User,” which means opening the buckets to the world.

S3 Security Checklist

If you are in AWS, and using S3, here is a checklist of things you should configure to ensure your critical data is secure.

Audit for Open Buckets Regularly: At regular intervals, check for buckets that are open to the world. Malicious users exploit open buckets to find objects with misconfigured ACL permissions and then access those objects.
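The core check in such an audit is whether any ACL grant goes to one of the two public grantee groups. In practice the grants would come from the S3 API (for example, boto3's `get_bucket_acl`); the sketch below works on the grant structure directly so the logic is self-contained, and the bucket names are made up.

```python
# The two grantee URIs that make a bucket public:
# AllUsers is "Everyone"; AuthenticatedUsers is any AWS account holder,
# which (as noted above) is effectively just as public.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_bucket_public(grants: list[dict]) -> bool:
    """True if any grant goes to a public grantee group."""
    return any(g.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
               for g in grants)

def audit(buckets: dict[str, list[dict]]) -> list[str]:
    """Return the names of buckets open to the world."""
    return [name for name, grants in buckets.items()
            if is_bucket_public(grants)]

# Illustrative ACL data for two hypothetical buckets.
open_buckets = audit({
    "backups": [{"Grantee": {"URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
                 "Permission": "READ"}],
    "internal": [{"Grantee": {"Type": "CanonicalUser", "ID": "abc"},
                  "Permission": "FULL_CONTROL"}],
})
```

A scheduled job would run this over every bucket in the account and raise a ticket or alert for each name returned.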

Encrypt the Data: Enable server-side encryption on AWS so that data is encrypted at rest: objects are encrypted when they are written and decrypted when they are read. Ideally, you should also enable client-side encryption.

Encrypt the Data in Transit: SSL/TLS helps secure data in transit when it is accessed from S3 buckets. Enforce secure transport in AWS to prevent man-in-the-middle attacks.

Enable Bucket Versioning: Ensure that your AWS S3 buckets have versioning enabled. Versioning preserves and lets you recover changed and deleted S3 objects, which helps with ransomware and accidental deletions.

Enable MFA Delete: By default, a user can delete an S3 bucket without having logged in with MFA. It is highly recommended that only users authenticated using MFA have the ability to delete buckets. Requiring MFA to delete objects in S3 buckets adds an extra layer of security against accidental or intentional deletion.

Enable Logging: If a bucket has the Server Access Logging feature enabled, you can track every request made to access it. This lets you monitor activity, detect anomalies, and protect against unauthorized access.

Monitor all S3 Policy Changes: AWS CloudTrail provides logs of all changes to S3 policies. Auditing policies and checking for public buckets helps, but instead of waiting for periodic audits, any change to an existing bucket's policy should be monitored in real time.
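A real-time monitor would watch CloudTrail for `PutBucketPolicy` events and flag any whose new policy grants access to everyone. The event structure below is a simplified stand-in for CloudTrail's actual record schema, and the bucket name is made up.

```python
import json

def policy_made_public(event: dict) -> bool:
    """Flag a policy-change event that opens the bucket to everyone."""
    if event.get("eventName") != "PutBucketPolicy":
        return False
    policy = json.loads(event["requestParameters"]["bucketPolicy"])
    # A wildcard principal on an Allow statement means world access.
    return any(s.get("Effect") == "Allow"
               and s.get("Principal") in ("*", {"AWS": "*"})
               for s in policy.get("Statement", []))

# Illustrative event: someone just attached a world-readable policy.
event = {
    "eventName": "PutBucketPolicy",
    "requestParameters": {"bucketPolicy": json.dumps({
        "Statement": [{"Effect": "Allow", "Principal": "*",
                       "Action": "s3:GetObject",
                       "Resource": "arn:aws:s3:::my-bucket/*"}]})},
}
```

Wired to a CloudTrail event stream, this predicate turns the periodic audit into an alert that fires within moments of the change.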

Track Applications Accessing S3: In one attack vector, hackers create an S3 bucket in their account and send data from your account to their bucket. This reveals a limitation of network-centric security in the cloud: traffic needs to be permitted to S3, which is classified as an essential service. To prevent that scenario, you should have IDS capabilities at the application layer and track all the applications in your environment accessing S3. The system should alert if a new application or user starts accessing your S3 buckets.

Limit Access to S3 Buckets: Ensure that your AWS S3 buckets are configured to allow access only to specific IP addresses and authorized accounts in order to protect against unauthorized access.

Close Buckets in Real Time: Even a few moments of public exposure of an S3 bucket can be risky, as it can result in leakage. S3 supports tags, which allow users to label buckets; administrators can apply a tag such as “Public” to buckets that are intended to be public. CloudTrail can then alert when a policy change makes a bucket public even though it does not carry that tag, and Lambda functions can correct the permissions in real time in response to anomalous or malicious activity.
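The decision logic at the heart of such a Lambda function is small: remediate only when a bucket became public and does not carry the tag that marks it as intentionally public. The `Public` tag convention here is the one the text describes; the tag value and surrounding handler are assumptions for illustration.

```python
def should_remediate(bucket_tags: dict[str, str], became_public: bool) -> bool:
    """Remediate a newly public bucket unless it is tagged as intentionally public."""
    intentionally_public = bucket_tags.get("Public") == "true"
    return became_public and not intentionally_public

# A Lambda handler would receive the CloudTrail event, look up the
# bucket's tags, and reset the bucket's ACL/policy when this is True.
untagged = should_remediate({"Team": "web"}, became_public=True)
tagged = should_remediate({"Public": "true"}, became_public=True)
```

In the first case the function would close the bucket immediately; in the second, the tag signals the exposure is deliberate and no action is taken.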

About the author: Sanjay Kalra is co-founder and CPO at Lacework, leading the company’s product strategy, drawing on more than 20 years of success and innovation in the cloud, networking, analytics, and security industries. Prior to Lacework, Sanjay was GM of the Application Services Group at Guavus, where he guided the company to market leadership and a successful exit.

The Three Great Threats to Modern Civilization
Thu, 12 Apr 2018 03:44:50 -0500

Throughout the history of mankind, civilizations have risen and fallen due to a variety of factors. For the most part, the collapse of a civilization wasn’t sudden, but a gradual decline brought on by multiple causes like changing culture, climate or even the introduction of a new culture (such as when Europeans came to the “new world”).

The interconnectivity and globalization of our modern society make it less likely for a civilization to collapse due to traditional factors. But the same factors that make a traditional decline less likely also mean a collapse is apt to look quite different. To start, it would be more sudden and less localized – going across multiple regions and perhaps the entire globe. What could cause such a collapse? There are three main threats to our modern civilization that could cause humanity to go the way of the ancient Mayans.

Climate Change

Human life requires a very specific set of environmental circumstances to survive. And while we can withstand some level of extreme temperatures, climate change has the potential to change, or perhaps even damage, civilization as we know it.

Whether you believe climate change is man-made or a natural part of the earth’s cycle, it is obvious our planet’s climate is changing rapidly. We have already seen an increase in the number and severity of storms across the planet – some with devastating effects. Climate change may also be responsible for a rash of wildfires. As climate change shifts the physical landscape of our planet, the societal impacts will ripple across the globe. Some areas will become uninhabitable as rising seas cause them to sink under the waves, while others will become too hot or too cold to live in. The increase in temperatures may also increase insect populations and, as a result, insect-borne diseases will skyrocket. This could force people to migrate from their current locations, increasing the population in the remaining habitable areas and creating a ripe environment for disease. The shifting weather patterns will also put our crops at risk, creating the potential for famine and starvation.

Environmentalist and author Bill McKibben told Business Insider earlier this year that without intervention, the world would be: "If not hell, then a place with a similar temperature."

Nuclear War

Ever since the bombs were dropped on Hiroshima and Nagasaki, the world has feared the possibility of nuclear war. The concept of mutual mass destruction caused anxiety and terrifying standoffs throughout the Cold War, but it also helped prevent the use of nuclear weapons (testing notwithstanding). Though the Cold War is over, the threat of nuclear war still looms, as more countries now have the ability to create these powerful weapons. The Doomsday clock – which signifies the potential of a man-made global catastrophe such as nuclear war – has not stood this close to midnight since 1953.

Nuclear war would obviously have a devastating impact on humanity, and this is one of the major factors preventing such a war. All nations know that to use a nuclear weapon means they will become the next target of a nuclear attack. Yet the potential and the possibility for such a war still exist, in part because of unstable governments possessing such weapons.


Cyberwar

In the past, attacks in the cyberworld only impacted our digital lives. Consequently, the threat of a cyberattack seems minimal compared to something as major as nuclear war or global climate change. However, our growing dependence on software means the consequences of a digital war could spill over into the physical world.

There is a long history of cyberwar dating back to the early 1980s; the main difference between the cyberwar of the past and that of today, or the future, is the world we live in. Back in the 1980s, when cyberwar became a growing concern for our government, we did not have the World Wide Web or mobile devices with the power of a supercomputer. Nor were our businesses, economy and even health devices tied to applications. It would only take another nation, or even a terrorist organization, targeting a vulnerability in the software running the power grid to throw civilization into chaos. We are seeing this on a small scale in Puerto Rico, where power has been out for more than a month. If this were to happen on a worldwide scale, there would be mass rioting, hoarding of food, and commerce would cease to exist.

There is evidence that cybercriminals are testing the fences for weaknesses already. And we know from research that our software is woefully insecure. Our civilization is dependent on software that is insecure, and all it would take is a coordinated attack to change the way we live. And although we would eventually get the electric grid or other infrastructure back up and running, it could take weeks or months – what would happen to society during this time?

The thoughts of climate change, nuclear war and cyberwar are all terrifying, and it is tempting not to think about them in an effort to sleep better at night. But we cannot keep our heads in the sand and hope nothing will happen. By ignoring the potential threat of any of these three catastrophes, we are forgoing the opportunity to prevent them – and prevent them we can. We can change the direction of climate change with smart environmental policies and behaviors. We can tone down the rhetoric and adhere to nuclear non-proliferation agreements to lessen the potential for nuclear war. And we can create secure development standards to ensure the software running our world doesn’t have exploitable vulnerabilities. All it takes to accomplish all these things is the desire and the will.

We have the power to ensure our civilization grows, flourishes and is even better than how we found it. The advantage we have over past civilizations is the knowledge to prevent collapse. But first we must recognize the threat so that we can neutralize the risk.

About the author: Jessica Lavery is Director of Corporate Communication and Content Marketing at CA Veracode. In this role Jessica is responsible for overseeing all activities associated with Public Relations, Analyst Relations, Internal Communications, Executive Communications, Content Marketing, Social Media, Visual Identity and Brand. Jessica has nearly 10 years of security experience.

2020 Vision: How to Prepare for the Future of Information Security Threats
Fri, 06 Apr 2018 12:25:36 -0500

Every day, the news is full of stories describing the weighty and often overwhelming effects new technology has on the way people live and work. Terms such as Artificial Intelligence (AI) and the Internet of Things (IoT) are fast becoming everyday jargon and plans for their deployment will land high on the agenda of business leaders over the next few years – whether they like it or not.

Headlines warning of cyber-attacks and data breaches are just as frequent. Assailants are everywhere: on the outside are hackers, organized criminal groups and nation states, whose capabilities and ruthlessness grow by the day; on the inside are employees and contractors, causing incidents either maliciously or by accident.

Business leaders are left feeling uncertain about the way forward. The dilemma is often stark: should they rush to adopt new technology and risk major fallout if things go wrong, or wait and potentially lose ground to competitors?

New attacks will impact both business reputation and shareholder value, and cyber risk exists in every aspect of the enterprise. At the Information Security Forum, we recently released Threat Horizon 2020, the latest in an annual series of reports that provide businesses a forward-looking view of emerging threats in today’s always-on, interconnected world. In Threat Horizon 2020, we drew from our research to highlight the top nine threats to information security over the next two years.

Let’s take a quick look at these threats and what they mean for your organization:

Cyber and Physical Attacks Combine to Shatter Business Resilience

Physical and cyber-attacks will be deployed simultaneously, creating unprecedented damage. Many nation states and terrorist groups (or both, working together) will have the capability to bring together the full force of their armaments – both traditional and digital – to perform a clustered ‘hybrid’ attack. The outcome, if successful, would be damage on a vast scale.

Telecommunication services and internet connections will be obvious first targets, leaving individuals and organizations cut off from the outside world. Assistance from emergency response services, as well as local and central governments, will be slow or non-existent as essential physical and digital infrastructure will have broken down.

These attacks will be designed to spread maximum chaos, fear and confusion. The stricken city, or cities, will be brought to a standstill, with both lives and businesses placed in jeopardy. Those at home will be unable and unwilling to go to work, or – without power or communications – unable to work from home. Those already in the office will be trapped with nowhere to escape to, as attacks hit them from every angle. Existing business continuity plans will be useless; they will not have been prepared to cater for an eventuality when every system is down while individuals are in physical danger. People will panic. Work will be off the agenda.

Satellites Cause Chaos on the Ground

Compromised satellite signals, whether spoofed by malicious adversaries or knocked out by collisions with other satellites or space debris, will cause widespread chaos down on Earth. As satellites become cheaper and easier for national space agencies and individual businesses to launch and maintain, they will become increasingly integral to modern life. Disabled or spoofed signals will interfere with critical transport, communications systems and even financial services.

Lives will be put at risk and supply chains hampered as spoofed GPS signals are sent to aircraft, ships and road vehicles. International financial systems – from stock exchanges to ATMs – that rely on exact timestamps on digital payments will be unable to record transactions accurately. Trading algorithms that rely on data from satellites on weather or location of specific assets (e.g. to instruct which crops to buy or sell) will be misled, potentially manipulating financial markets.

In the next few years, satellites will play an increasingly crucial role in connecting Earth-based infrastructure and systems. However, organizations will need to realise what the military has known for years – that no one will be spared if attacks against satellites succeed. The potential for crippling disruption is immense.

Weaponized Appliances Leave Organizations Powerless

Attackers will find ways to access a huge proportion of the millions of connected appliances – such as heating systems and ovens – and turn them into weapons. This mass of appliances could be commandeered and misused for a number of disruptive ends, similarly to the way botnets of poorly protected home computers have been used to initiate and sustain large scale DDoS attacks. However, one threat merits specific attention – the damage they can wreak collectively on power grids.

These appliances, forming part of the IoT – many in homes but also found in offices and factories – are always powered-on and always connected to the internet. Manipulated by attackers to switch on to full power simultaneously, appliances will create a demand for power so unexpectedly high that it overloads and brings down regional electricity grids. With the grid offline or severely degraded, organizations will be weakened and struggle to function.

The underlying foundations of many business continuity plans, such as instructing employees to work from home, will be rendered useless as they will have neither power nor a means to communicate. Dependent critical services such as water supplies, food production systems and health care will be unavailable. Power rationing will affect other utilities and services, such as heating, lighting and transport. To cap it all, organizations will lose out to competitors in non-affected areas who will be quick to take advantage of the increased demand for their services.

Quantum Arms Race Undermines the Digital Economy

The next generation of computer technology – quantum computing – will be able to crack encryption that would have taken traditional computers millions of years in mere hours or minutes. As a consequence, a security mechanism that forms the bedrock of today’s digital economy will require a complete overhaul, potentially exposing organizations to millions in transformation costs and lost trade. However, the practical problems start now. In particular, various parties will pre-empt this new technology by starting to harvest gigantic pools of encrypted information, using it later when the technology is available.

National intelligence organizations will lead the charge to be the first to get their hands on this technology.  The sensitive information, communications, services, transactions and critical infrastructure of adversaries will all become an open book. The desire to be first across the line is certain to drive a digital arms race.  Who will be the quantum winner? That remains unclear.

Some nation states will want to expand their horizons and use quantum computing as an offensive weapon to undermine the digital economies of their perceived enemies – as will others who can get early access to the technology. Organizations in both the public and private sectors will then be prime targets for a range of attackers. None will be safe, even those that believe their information is secure now.

Artificially Intelligent Malware Amplifies Attackers’ Capabilities

According to many futurists, AI will bring huge benefits to society, especially in areas such as research and healthcare. However, it will also be deployed in more damaging ways, one of which will be to build computer malware that can change both its form and purpose. Attackers will use this artificially intelligent malware to find new ways to access an organization’s network and disrupt its operations. Mission-critical information assets such as trade secrets, R&D plans and business strategies will be targets for compromise – all without detection.

As it is AI-based, this new form of malware will learn from its environment, analysing applications and systems to discover and exploit new vulnerabilities in real time. It will be hard to distinguish what is safe from unauthorised access and what isn’t. Even information previously believed to be well protected will be open to compromise.

Conventional techniques used to identify and remove malware will quickly become ineffective. Instead, AI-based solutions will be needed to fight this new malware – leading to a race for supremacy between offensive and defensive AI. The eventual winners will be hard to spot for some considerable time.

Attacks on Connected Vehicles Put the Brakes on Operations

Attackers will look to remotely hack a range of connected vehicles – cars, lorries, vessels and trains – taking advantage of vulnerabilities within on-board systems to take control of them, steal them, or disable vital safety features. All forms of vehicles will be exposed. The sheer scale of targets will be dramatic: for example, the number of connected cars manufactured globally is predicted by Gartner to grow from 12.4 million in 2016 to 61 million by 2020.

The effects will be felt by various people and organizations. Individuals who travel in connected vehicles, or are in the vicinity, will have their lives put at risk. Organizations with supply chains that rely on connected vehicles to transport goods or materials will face operational disruption. Vehicle manufacturers and their subcontractors will face reputational damage, and maintenance providers will come under pressure to perform immediate software and hardware updates.

Liability for incidents – including deliberate attacks – will be a particularly hot topic. Insurance companies will be forced to rethink their strategies to take into consideration claims over incidents involving connected vehicles; organizations will wish to consider themselves blameless but may be held liable; while vehicle manufacturers are likely to face complex class action legal battles should incidents begin to fall into recognisable patterns.

Biometrics Offer a False Sense of Security

Demands for convenience and usability will drive organizations to move to using biometric authentication methods as the default for all forms of computing and communication devices, replacing today’s multi-factor approach. However, any misplaced trust in the efficacy of one or more biometrics will leave sensitive information exposed. Attacks on biometrics will affect finances and damage reputations.

The problem will be compounded by the wide and confusing array of proprietary technologies produced by different vendors. As there are no common global security standards for biometrics, it is inevitable that some technologies will be vastly inferior to others. The question then becomes: which are secure today? And will that continue to hold true tomorrow… and the day after?

Existing security policies will fall well short of addressing the issues as new devices infiltrate organizations, from the boardroom down. Failure to plan and prepare for this major change will leave some organizations sleepwalking into a situation where critical or sensitive information is protected by a single biometric factor which proves vulnerable.

New Regulations Increase The Risk And Compliance Burden

By 2020, the number and complexity of new international and regional regulations to which organizations must adhere, combined with those already in place, will stretch compliance resources and mechanisms to breaking point. These new compliance demands will also result in an ever-swelling ‘attack surface’ which must be protected fully while attackers continually scan, probe and seek to penetrate it.

For some organizations, the new compliance requirements will increase the amount of sensitive information – including customer details and business plans – that must be stockpiled and protected. Other organizations will see regulatory demands for data transparency resulting in information being made available to third parties who will transmit, process and store it in multiple locations. Most organizations will see penalties for non-compliance reach material levels.

Balancing potentially conflicting demands, while coping with the sheer volume of regulatory obligations, may either divert essential staff away from critical risk mitigation activities or raise the impact of compliance failure to new levels. Business leaders will be faced with tough decisions. Those that make a wrong call may leave their organization facing extremely heavy fines and damaged reputations.

Trusted Professionals Divulge Organizational Weak Point

The relentless hunt for profits and never-ending change in the workforce will create a constant atmosphere of uncertainty and insecurity that has the effect of reducing loyalty to an organization. This lack of loyalty will be exploited: the temptations and significant rewards from ‘cashing-in’ corporate secrets will be amplified by the growing market worth of those secrets, which include organizational weak points such as security vulnerabilities. Even trusted professionals will face temptation.

Most organizations recognise that passwords or keys to their mission-critical information assets are handed out sparingly and only to those that have both a need for them and are considered trustworthy. However, employees who pass initial vetting and background checks may now – or in the future – face any number of circumstances that entice them to break that trust: duress through coercion; being passed over for promotion; extortion or blackmail; offers of large amounts of money; or simply a change in personal circumstances.

While the insider threat has always been important, it is not only the organizational crown jewels that are under threat. The establishment of bug bounty and ethical disclosure programmes, together with demand from cybercriminals, puts a very high value on the most secret of secrets – the penetration test results and vulnerability reports that comprise the ‘keys to the kingdom’. Organizations reliant on existing mechanisms to ensure the trustworthiness of employees and contracted parties with access to sensitive information will find those mechanisms inadequate.

Be Prepared

As dangers accelerate, organizations must fully commit to disciplined and practical approaches to managing the major changes ahead. Employees at every level of the organization will need to be involved, including board members and managers in non-technical roles.

The nine threats listed above expose the dangers that should be considered most prominent. They have the capacity to transmit their impact through cyberspace at alarming speeds, particularly as the use of the Internet spreads. Many organizations will struggle to cope as the pace of change intensifies. These threats should stay on the radar of every organization, both small and large.

About the author: Steve Durbin is Managing Director of the Information Security Forum (ISF). His main areas of focus include strategy, information technology, cyber security and the emerging security threat landscape across both the corporate and personal environments. Previously, he was senior vice president at Gartner.

Copyright 2010 Respective Author at Infosec Island
Why Data Loss Prevention Will Suffer the Same Fate as Anti-Virus Tue, 03 Apr 2018 04:45:56 -0500

For years, Data Loss Prevention (DLP) has been the first line of defense against data leaving an organization’s four walls. DLP solutions have been touted as having the ability to track and prevent the loss of data through unauthorized channels. However, there are challenges associated with DLP, such as solution stability, the time-consuming data classification process and ongoing maintenance, and disconnects between data owners and DLP administrators. Security teams are realizing DLP is not sufficient to keep an organization’s critical data safe.

DLP appears to be following in the footsteps of another once-ubiquitous but now outdated technology: anti-virus. The parallels between the two technologies may not be apparent at first, but when taking another look, it is clear that DLP may suffer the same fate as traditional anti-virus.

Since 1987, the anti-virus approach has been to tag data with signatures, continuously scan systems for these signatures, and then attempt to quarantine the known bad files. In theory, this method sounds great, but in the 21st century, malware can move and morph faster than anyone ever imagined. As anti-virus tools became ubiquitous, hackers learned how they operated and customized their malware specifically to evade the existing tool sets.

The dawn of DLP

Similarly, data loss prevention (DLP) tools require the classification and tagging of sensitive files, scan for the movement of those files, and attempt to prevent them from going places they shouldn’t. Since 2000, organizations have implemented these tools to adhere to regulatory compliance, monitor sensitive file movement, or prevent specific files from leaving through specific egress points.

However, a few major factors have seriously diminished the effectiveness of data loss prevention solutions. The primary challenge is the exponential growth of unstructured and semi-structured data within organizations. To be effective, DLP tools must keep up with the constant creation and modification of sensitive data. This places a heavy burden on data owners and those administrating the DLP technology to stay on the same page. It is almost inevitable that data growth will outpace the lines of communication within the organization.

DLP and the people problem

One of the most challenging elements of DLP isn’t within the software – it is the people. It’s no secret that people are the biggest challenge when it comes to implementing effective security controls. Not all users have malicious intent; they may simply be seeking a way to bypass existing controls to make their lives easier. People are unpredictable, and ensuring organizations have a rule for every action a person might take is difficult, if not impossible.

When it comes to malicious insiders operating within an organization, DLPs are notoriously ineffective at stopping data loss caused by these types of threats, since DLPs are often trivial for technical users to bypass. This means if someone on the inside really wants to exfiltrate data, they will probably find a way to do it.

DLPs are incomplete as they do not offer all-in-one detection, deterrence, and mitigation of data exfiltration and insider threats. While they may catch some instances of attempted data exfiltration, they are not designed to help security teams investigate or respond effectively, and they don’t have proactive user education built in to reduce accidental misuse.

Say goodbye to traditional DLP

Traditional DLP tools have been popular given the magnitude of the data loss problem and the compliance needs of some organizations. However, DLPs often fall short at preventing data loss, especially at providing visibility into user actions so that security teams can detect incidents in the moment and quickly investigate them.

Instead of relying on a traditional DLP focused exclusively on data, organizations should implement a holistic, people-focused strategy. Organizations should shift to an approach that gives the security organization full visibility into user actions, with alerts for out-of-policy actions serving as an early warning system that decreases time to detection. This should be coupled with strong processes to quickly remediate incidents involving data loss, and flexible prevention controls that align with business goals, to ensure a 360-degree view.

Now more than ever, organizations need to invest in solutions that provide full visibility into what users are doing coupled with flexible prevention policies. With this visibility, organizations are able to quickly identify risky behavior, streamline the investigation process and prevent data loss.

About the author: Mike McKee brings over 20 years of cross-functional, global experience in technology to ObserveIT. Previously, Mike led the award-winning Global Services and Customer Success organizations at Rapid7, and served as Senior Vice President CAD Operations and Strategy at PTC, and Chief Financial Officer at

Copyright 2010 Respective Author at Infosec Island
Unconventional Thinking — Four Practices to Help Mitigate Risk Mon, 02 Apr 2018 07:12:00 -0500

Taking a conventional approach to cybersecurity typically means “keeping the bad stuff out” of your network: blocking any number of malicious threats such as spam, viruses, malware, and DDoS attacks. The truth is, if you want your organization to be secure in today’s cyber landscape, you must proactively assess your security posture and focus on mitigating risk. This not only drastically reduces the probability of a successful attack, it also enables you to remediate and recover your business quickly in the event of exposure. How do you implement this approach?

1. Mitigate risks posed by targeted email attacks

Email is still the top threat vector used by attackers. More cunning methods such as spear phishing and business email compromise (BEC) are highly targeted and researched attempts where cybercriminals often seek to defraud individuals and lead unsuspecting employees to transfer money or willingly share credentials. The FBI estimates that upwards of $5 billion has been lost to BEC in recent years.

In these attacks, criminals engage in casual conversation with victims through email in an attempt to gain the users’ trust before actually doing anything malicious. In many cases, bad guys investigate and gather information about their targets via social media, which gives them ammunition in making their email threats more convincing. Unfortunately, traditional security solutions such as email security gateways and anti-virus solutions fail to detect these attempts, as there are no malicious attachments or links. An entirely new approach is critical, and currently the most effective technologies are artificial intelligence solutions for cyber fraud defense, domain fraud protection using DMARC authentication, and fraud simulation training for individuals of high risk within your organization.
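As one concrete piece of the DMARC authentication mentioned above: a domain publishes its policy as a DNS TXT record at `_dmarc.<domain>`, and receiving mail servers read its tag=value pairs to decide how to handle unauthenticated mail. The sketch below is a minimal parser for such a record, assuming the standard tag=value syntax and using a hypothetical example.com domain; it is illustrative, not a full RFC 7489 implementation.

```python
# Minimal sketch: extract the policy from a DMARC TXT record.
# The record string is what would be published at _dmarc.example.com
# (example.com is a placeholder domain).

def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into its tag=value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")  # split at first '='
            tags[key.strip()] = value.strip()
    return tags

record = "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # quarantine
```

Here `p=quarantine` tells receivers to treat mail that fails DMARC checks as suspicious, while `rua` names the address that receives aggregate reports — the reporting loop that makes domain fraud visible to the owner.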

2. Mitigate the risk posed by careless or untrained users

A significant part of mitigating the risk of targeted email attacks is the ability to provide security training to high-risk individuals. But what about mid- to lower-level employees who are either careless or simply clueless? They require training just as much as high-risk individuals, as attackers often begin their campaigns by targeting these employees. Regular security awareness training, with simulation testing of their knowledge, is key to reducing and mitigating organizational risk.

3. Mitigate the risk posed by rapid application development

Of course, risk is present in other areas beyond email and employees, including websites and applications. Identifying and remediating application vulnerabilities while maintaining development agility is a challenging balance. This is particularly true when adopting cloud platforms like AWS and Azure that enable rapid application deployments. In fact, studies have shown that as many as 86 percent of websites contain at least one serious vulnerability, and the average time critical vulnerabilities remain unfixed is 300 days. This is unacceptable, as vulnerabilities in websites and other public-facing applications can lead to costly data breaches and infiltration. Organizations must proactively check for vulnerabilities in their sites and applications on a regular, if not continuous, basis.
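As a small illustration of the kind of check that can run on a continuous basis, the sketch below flags HTTP responses that lack common security headers. The response dict is a stand-in for what a real HTTP client or scanner would return, and the header list is illustrative, not exhaustive; a production scanner would cover far more classes of vulnerability.

```python
# Minimal sketch: one automatable check in a continuous scanning routine.
# Flags responses missing common security headers (header names are
# standard; the sample response is a stand-in for a real HTTP reply).

EXPECTED = [
    "Strict-Transport-Security",   # enforce HTTPS
    "Content-Security-Policy",     # restrict script/resource origins
    "X-Content-Type-Options",      # block MIME-type sniffing
]

def missing_security_headers(headers: dict) -> list:
    """Return the expected headers absent from a response (case-insensitive)."""
    present = {h.lower() for h in headers}
    return [h for h in EXPECTED if h.lower() not in present]

response_headers = {"Content-Type": "text/html",
                    "X-Content-Type-Options": "nosniff"}
print(missing_security_headers(response_headers))
# ['Strict-Transport-Security', 'Content-Security-Policy']
```

Scheduled from a CI pipeline or cron job, even a simple check like this turns a one-off audit into the regular, continuous verification the paragraph above calls for.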

4. Mitigate the risk of data loss

Sometimes you can do everything right in your approach to security and still have something ugly happen—such as your data getting lost or held for ransom. That’s why there is one important step to take to mitigate the risk of data loss. Protect it.

Implement a data protection strategy that not only includes a backup plan, but one that allows for easy recovery as well. The ideal solution would automatically create updated backups as files are revised, and then have the ability to duplicate them to a secure cloud or to a private off-site location. That way, if criminals encrypt your files with ransomware, you will be able to eliminate the malware, then delete the encrypted files and restore them from a recent clean backup. The whole process can take as little as an hour with the right solution, helping you to get right back to business while leaving criminals empty handed.
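The backup-then-restore workflow described above can be sketched as follows. The file names and directories are hypothetical (temporary directories stand in for the working folder and the off-site backup location), and a real solution would version its backups and replicate them to a secure cloud rather than a local path.

```python
# Minimal sketch of back-up-on-revision, then restore after ransomware.
# Temp dirs stand in for the working folder and the off-site backup store.
import pathlib
import shutil
import tempfile

work = pathlib.Path(tempfile.mkdtemp())      # working folder (illustrative)
backups = pathlib.Path(tempfile.mkdtemp())   # backup location (illustrative)

doc = work / "report.txt"
doc.write_text("quarterly figures")

# 1. Back up the file as soon as it is revised.
shutil.copy2(doc, backups / doc.name)

# 2. Ransomware encrypts the working copy (simulated here).
doc.write_text("\x00ENCRYPTED\x00")

# 3. Eliminate the malware, delete the encrypted file,
#    and restore the clean copy from backup.
doc.unlink()
shutil.copy2(backups / doc.name, doc)
print(doc.read_text())  # quarterly figures
```

The key design point is that the backup copy lives outside the path the ransomware can reach; with that separation in place, recovery is a copy operation rather than a negotiation.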

By taking these proactive steps to mitigate the security risks in your organization, you will greatly reduce the probability of successful attacks and be able to remediate and recover quickly in the event of exposure. Being truly secure requires more than just keeping the bad stuff out; it requires learning how to mitigate potential risks before they ever come your way.

About the author: Sanjay is a 20-year veteran in technology with a passion for cutting-edge technology and a desire to innovate at the intersection of technology trends. He currently leads product management, marketing and strategy for Barracuda’s security business worldwide.

Copyright 2010 Respective Author at Infosec Island