Tuesday, January 31, 2017
Monday, January 30, 2017
Thursday, January 26, 2017
I published the following diary on isc.sans.org: “IOC’s: Risks of False Positive Alerts Flood Ahead“.
Yesterday, I wrote a blog post which explained how to interconnect a Cuckoo sandbox and the MISP sharing platform. MISP has a nice REST API that allows you to extract useful IOC’s in different formats. One of them is the Suricata / Snort format. Example… [Read more]
[The post [SANS ISC Diary] IOC’s: Risks of False Positive Alerts Flood Ahead has been first published on /dev/random]
Wednesday, January 25, 2017
With the number of attacks that we are facing today, defenders are looking for more and more IOC’s (“Indicators of Compromise”) to feed their security solutions (firewalls, IDS, …). It has become impossible to manage all those IOC’s manually, and automation is the key. There are two main problems with this amount of data:
- How to share them in a proper way (remember: sharing is key).
- How to collect and prepare them to be shared.
Note that in this post I’m considering only the “technical” issues with IOC’s. There are many more issues, such as their accuracy (which can differ between environments).
To search for IOC’s, I’m using the following environment: A bunch of honeypots capture samples that, if interesting, are analyzed by a Cuckoo sandbox. To share the results with peers, a MISP instance is used.
In this case, a proper integration between Cuckoo and MISP is key. It is implemented in both directions. The results of the Cuckoo analysis are enriched with IOC’s found in MISP: IOC’s found in the sample are correlated with MISP, and the event ID, description and level are displayed:
In the other direction, Cuckoo submits the results of its analyses to MISP:
Cuckoo 2.0 comes with ready-to-use modules to interact with the MISP REST API via the PyMISP Python module. There is one processing module (to search for existing IoC’s in MISP) and one reporting module (to create a new event in MISP). The configuration is very simple, just define your MISP URL and API key in the proper configuration files and you’re good to go:
# cd $CUCKOO_HOME/conf
# grep -A 2 -B 2 misp *.conf
processing.conf-enabled = yes
processing.conf-
processing.conf:[misp]
processing.conf-enabled = yes
processing.conf:url = https://misp.xxxxxxxxxx
processing.conf-apikey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
maxioc = 100
--
reporting.conf-logname = syslog.log
reporting.conf-
reporting.conf:[misp]
reporting.conf-enabled = yes
reporting.conf:url = https://misp.xxxxxxxxxx
reporting.conf-apikey = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
But in my environment, the default modules generated too many false positives in both directions. I patched them to support more configuration keywords to better control what is exchanged between the two tools.
In the processing module:
only_ids = [yes|no]
If defined, only attributes with the “IDS” flag set to “1” will be displayed in Cuckoo.
ioc_blacklist = 220.127.116.11,18.104.22.168,www.google.com
This parameter lets you define a comma-delimited list of IOC’s that you do NOT want in Cuckoo. Typically, you’ll add your DNS servers or specific URLs here.
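As a minimal sketch of how this filtering could work inside the processing module (my own illustration, not the actual patched code; it assumes MISP attributes are dictionaries with `to_ids` and `value` keys, and the option names mirror the configuration keywords above):

```python
# Hypothetical sketch of the extra filtering added to the MISP
# processing module: keep only attributes flagged for IDS export
# and drop anything present in a user-defined blacklist.
def filter_attributes(attributes, only_ids=True, ioc_blacklist=""):
    blacklist = {v.strip() for v in ioc_blacklist.split(",") if v.strip()}
    kept = []
    for attr in attributes:
        if only_ids and not attr.get("to_ids"):
            continue  # skip attributes not flagged for IDS export
        if attr.get("value") in blacklist:
            continue  # skip blacklisted IOC's (DNS servers, known URLs, ...)
        kept.append(attr)
    return kept
```

With `only_ids` enabled and a blacklist containing your DNS servers, only the remaining IDS-worthy attributes would be displayed in Cuckoo.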
In the reporting module:
tag = Cuckoo
Specifies the tag to be added to events created in MISP. Note that the tag must be created manually.
# Default distribution level:
# your_organization = 0
# this_community = 1
# connected_communities = 2
# all_communities = 3
distribution = 0
The default distribution level assigned to the created event (default: 0)
# Default threat level:
# high = 1
# medium = 2
# low = 3
# undefined = 4
threat_level_id = 4
The default threat level assigned to the created event (default: 4)
# Default analysis level:
# initial = 0
# ongoing = 1
# completed = 2
analysis = 0
The default analysis status assigned to the created event (default: 0)
ioc_blacklist = 22.214.171.124,126.96.36.199,www.google.com,crl.verisign.com,sc.symcb.com
Again, a comma-separated blacklist of IOC’s that you do NOT want to store in MISP.
Events are created in MISP with an “unpublished” status. This allows you to review them, manually add some IOC’s, merge different events, add tags or change default values.
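As a hedged illustration (not the exact patched reporting module), the event skeleton submitted through PyMISP could be assembled like this, with defaults mirroring the configuration keywords described above:

```python
# Illustrative sketch: build the event payload that the reporting
# module would hand to PyMISP. Defaults match the values above.
def build_event(info, tag="Cuckoo", distribution=0,
                threat_level_id=4, analysis=0):
    return {
        "info": info,                        # event description
        "distribution": distribution,        # 0 = your organization only
        "threat_level_id": threat_level_id,  # 4 = undefined
        "analysis": analysis,                # 0 = initial
        "published": False,                  # stays unpublished for review
        "Tag": [{"name": tag}],              # tag must already exist in MISP
    }
```

Keeping `published` set to `False` is what gives you the review window before the event is shared with peers.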
The patched files for Cuckoo are available in my GitHub repository.
I published the following diary on isc.sans.org: “Malicious SVG Files in the Wild“.
In November 2016, the Facebook Messenger application was used to deliver malicious SVG files to people. SVG files (or “Scalable Vector Graphics”) are vector images that can be displayed in most modern browsers (natively or via a specific plugin). More precisely, Internet Explorer 9 supports the basic SVG feature set and IE10 extended that support by adding SVG 1.1. In the Microsoft Windows operating system, SVG files are handled by Internet Explorer by default… [Read more]
Wednesday, January 18, 2017
The continuing advancements of the Internet and associated technologies have brought new opportunities to governments, businesses, and private citizens. At the same time, they have also exposed them to new risks. However, Internet adoption has not been even, and countries or economies have come online in different ways and at varied paces. As a result, awareness of cyber risk and approaches to managing it can differ greatly between jurisdictions. This is particularly true when thinking about emerging economies, which have typically had a very different online journey than developed markets in Europe or the United States. One way to ensure we can address this gap is through the use of confidence building measures (CBMs).
CBMs aim to instil good cybersecurity practices across the global online economy, focusing on the critical cybersecurity work that can be done in the early stages of a country’s emergence into cyberspace. Not only can CBMs help reduce vulnerability to cybercrime in general, by embedding best practices in the foundations of a country’s approach to the Internet, but they can also complement the objectives of cybersecurity norms. This is because CBMs seek to diminish the risk of a potential online inter-state escalation by enhancing transparency of government action and encouraging cooperation around areas of common interest. This, combined with their ability to act as vehicles for sharing best practices and delivering cyber-capacity building, makes CBMs worthy of more attention.
CBMs have a particular relevance for economies that have seen very recent but rapid growth of the Internet. Unlike developed economies, which saw it grow incrementally over the past twenty years, users from emerging economies have had little chance to gradually adjust their behaviors online. Typically, increased internet access and more mature technological development is correlated with improvements in cybersecurity. However, our research has suggested that some emerging countries may not be ready to secure their ICT infrastructure in a way that is commensurate with the increased use of computer systems by their citizens and businesses, as well as the government itself. The consequences of this cybersecurity gap for the countries concerned could be very serious. More than this, however, the interconnectedness of the Internet at the global level makes weaknesses in one part of it a potential threat to the rest. Since the majority of the 3+ billion people online today come from the Global South, the problems posed by such gaps represent a weakness for the globe’s overall cybersecurity and, in terms of cyber conflict risks, for its real world security too.
Governments are not oblivious to the challenges outlined above. A cursory glance at a map or a timeline of cybersecurity policies, guidelines, and regulation shows us that over sixty percent of the world is currently developing some sort of cybersecurity framework, hoping to secure their critical systems, or developing laws to help them catch cybercriminals. This is where collaboration on cybersecurity, as envisioned in CBMs, can be particularly beneficial. Moreover, the returns of CBMs are also real for the global online ecosystem itself. Despite government initiatives to limit online criminal activity within their borders, cyberspace continues to be a global endeavour. Improving not only cooperation, but the overall level and consistency of cybersecurity practices is therefore the best way of dealing with cybercriminals who show no respect for traditional borders.
There is considerable economic upside to be gained as well. The digital economy contributed $2.3 trillion to the G20’s GDP in 2010, an estimated $4 trillion in 2016, and is growing at 10% a year. For emerging markets, research suggests that the effect could be even greater. Certainly, the skills developed locally through CBMs and cybersecurity training correspond to the skills needed to enable local businesses to scale up and innovate, without having to rely on outside, more expensive talent.
For all these reasons the case for CBMs is compelling. They can equip countries to navigate the global online environment, as well as to respond operationally to international requests for assistance. They also help the public and private institutions in one country join a broader community of security experts, allowing everyone to engage in a full range of protection, detection, response and recovery activities. However, bringing them into effect is not always easy. We will all need to work together, government to government and business to government, to create and then promote an international corpus of effective and practical CBMs in order to deliver the confidence everyone needs to trust in the Internet and in the technology that is increasingly central to their lives.
from Paul Nicholas
Tuesday, January 17, 2017
This post is authored by Kristina Laidler, Security Principal, Cyber Security Services and Engineering
Each week seems to bring a new disclosure of a cybersecurity breach somewhere in the world. In 2016 alone, over 3 billion customer data records were breached in several high-profile attacks globally. As we look at current state of cybersecurity challenges today, we see the same types of attacks, but the sophistication and scope of each attack continues to grow and evolve. Cyber adversaries are now changing their tactics and targets based on the current security landscape. For example, as operating systems became more secure, hackers shifted back to credential compromise. As Microsoft Windows continually improves its security, hackers attack other systems and third-party applications.
Both the growth of the internet and the Internet of Things (IoT) are creating more connected devices, many of which are insecure, that can be used to carry out larger Distributed Denial-of-Service (DDoS) attacks. Due to the insecure implementation of internet-connected embedded devices, they are routinely hacked and used in cyberattacks. Smart TVs and even refrigerators have been used to send out millions of malicious spam emails. Printers and set-top boxes have been used to mine Bitcoin, and cybercriminals have targeted CCTV cameras (common IoT devices) to launch DDoS attacks.
Microsoft has unique visibility into an evolving threat landscape due to our hyper-scaled cloud footprint of more than 200 cloud services, over 100 datacenters, millions of devices, and over a billion customers around the globe and our investment in security professionals focused on secure development as well as protect, detect and respond functions. In an effort to mitigate attacks, Microsoft has developed an automated platform, as part of Microsoft Azure, that provides a rapid response to a DDoS attack. On our software-defined networks, the data plane can be upgraded to respond and stay ahead of network traffic, even while our service or corporate environment is under attack. Our DDoS protection platform analyzes traffic in real-time and has the capability to respond and mitigate an attack within 90 seconds of the detection.
Microsoft Cyber Defense Operations Center operates 24×7 to defend against cyberthreats
In November 2015, we opened the Cyber Defense Operations Center (CDOC) to bring together the company’s cybersecurity specialists and data scientists in a 24×7 facility to combat cyber adversaries.
In the year since opening, we have advanced the policies and practices that accelerate the detection, identification and resolution of cybersecurity threats, and have shared our key learnings with the thousands of enterprise customers who have visited the CDOC. Today, we are sharing a Cyber Defense Operations Center strategy brief that details some of our best practices for how we Protect, Detect and Respond to cyberthreats in real time.
Microsoft’s first commitment is to protect the computing environment used by our customers and employees to ensure the resiliency of our cloud infrastructure and services, products, devices, and the company’s internal corporate resources.
Microsoft’s protect tactics include:
- Extensive monitoring and controls over the physical environment of our global datacenters, including cameras, personnel screening, fences and barriers and multi-factor authentication for physical access.
- Software-defined networks that protect our cloud infrastructure from intrusions and distributed denial of service attacks.
- Multifactor authentication is employed across our infrastructure to control identity and access management.
- Non-persistent administration using just-in-time (JIT) and just-enough administrator (JEA) privileges for engineering staff managing infrastructure and services. This provides a unique set of credentials for elevated access that automatically expires after a pre-designated duration.
- Proper hygiene is rigorously maintained through up-to-date, anti-malware software and adherence to strict patching and configuration management.
- Microsoft Malware Protection Center’s team of researchers identify, reverse engineer and develop malware signatures and then deploy them across our infrastructure for advanced detection and defense. These signatures are available to millions of customers using Microsoft anti-malware solutions.
- Microsoft Security Development Lifecycle is used to harden all applications, online services and products, and to routinely validate its effectiveness through penetration testing and vulnerability scanning.
- Threat modeling and attack surface analysis ensures that potential threats are assessed, exposed aspects of the service are evaluated, and the attack surface is minimized by restricting services or eliminating unnecessary functions.
- Classifying data according to its sensitivity—high, medium or low business impact—and taking the appropriate measures to protect it, including encryption in transit and at rest, and enforcing the principle of least-privilege access provides additional protection.
- Awareness training that fosters a trust relationship between the user and the security team, to develop an environment where users will report incidents and anomalies without fear of repercussion.
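The just-in-time elevation idea mentioned above can be sketched as follows (an illustrative toy of the concept, not Microsoft’s actual implementation): issue a unique credential and reject it once its pre-designated duration has elapsed.

```python
# Toy sketch of JIT elevation: a per-request credential with a
# hard expiry, so elevated access cannot persist indefinitely.
import secrets
import time

def issue_jit_credential(user, duration_s=3600):
    return {
        "user": user,
        "token": secrets.token_hex(16),          # unique per elevation
        "expires_at": time.time() + duration_s,  # automatic expiry
    }

def is_valid(cred):
    # A credential is honored only before its expiry time.
    return time.time() < cred["expires_at"]
```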
Having a rich set of controls and a defense-in-depth strategy helps ensure that should any one area fail, there are compensating controls in other areas to help maintain the security and privacy of our customers, cloud services, and our own infrastructure environment.
Microsoft operates under an Assume Breach posture. This simply means that despite the confidence we have in the defensive protections in place, we assume adversaries can and will find a way to penetrate security perimeters. It is then critical to detect an adversary rapidly and evict them from the network.
Microsoft’s detect tactics include:
- Monitoring network and physical environments 24x7x365 for potential cybersecurity events, with behavior profiling based on usage patterns and an understanding of the unique threats to our services.
- Identity and behavioral analytics are developed to highlight abnormal activity.
- Machine learning software tools and techniques are routinely used to discover and flag irregularities.
- Advanced analytical tools and processes are deployed to further identify anomalous activity and innovative correlation capabilities. This enables highly-contextualized detections to be created from the enormous volumes of data in near real-time.
- Automated software-based processes that are continuously audited and evolved for increased effectiveness.
- Data scientists and security experts routinely work side-by-side to address escalated events that exhibit unusual characteristics requiring further analysis of targets. They can then determine potential response and remediation efforts.
When we detect something abnormal in our systems, it triggers our response teams to engage.
Microsoft’s respond tactics include:
- Automated response systems using risk-based algorithms to flag events requiring human intervention.
- Well-defined, documented and scalable incident response processes within a continuous improvement model help keep us ahead of adversaries by making these available to all responders.
- Subject matter expertise across our teams, in multiple security areas, including crisis management, forensics, and intrusion analysis, and deep understanding of the platforms, services and applications operating in our cloud datacenters provides a diverse skill set for addressing incidents.
- Wide enterprise searching across cloud, hybrid and on-premises data and systems to determine the scope of an incident.
- Deep forensic analysis of major threats is performed by specialists to understand incidents and to aid in their containment and eradication.
- Microsoft’s security software tools, automation and hyper-scale cloud infrastructure enable our security experts to reduce the time to detect, investigate, analyze, respond, and recover from cyberattacks.
There is a lot of data and tips in this strategy brief that I hope you will find useful. You can download the Cyber Defense Operations Center strategy brief to gain more insight into how we work to protect, detect and respond to cybersecurity threats. And I encourage you to visit the Microsoft Secure website to learn more about how we build security into Microsoft’s products and services to help you protect your endpoints, move faster to detect threats, and respond to security breaches.
from Microsoft Secure Blog Staff
Are the rules and regulations being put in place today, from the Chinese cybersecurity law to the EU’s General Data Protection Regulation (GDPR), going to be appropriate for the world 10 years from now? And if not, should this be of concern? To answer these questions, we need to learn from the past.
The technology concerns of 10 years ago are still with us in some ways, e.g. worries about data being accessed by the wrong people and important systems becoming vulnerable to cyberattacks, but much has changed as the technology has continued to develop and spread through our businesses, communities, governments, and private lives. As a result, the regulations in place in 2006 have had to be replaced, e.g. the US-EU Safe Harbour with Privacy Shield, or have been wholly supplanted, e.g. the emergence of new approaches to cybersecurity and critical infrastructure. Now that I look at it, the world of 10 years ago seems more distant than I expected. Technology was far from ubiquitous and the services offered more limited, the rules familiar but sometimes at a tangent to today’s.
2006 was an important year in technology development: Facebook emerged from university campuses and Google bought YouTube. The policy agendas of governments and regulators were driven by concerns about child online safety, e-skills and lifelong learning, access to broadband, e-commerce and online banking, and, yes, market dominance. This is not to discount the importance of these issues at the time, but cybersecurity then was more often viewed as avoiding exotically named viruses rather than combating the organized cybercrime we now face, whilst privacy was seen as protecting the vulnerable from online exploitation rather than through today’s post-Snowden lens.
Could 2006’s policy-makers have prepared better for the issues we now face? That seems unlikely. For one thing, policy-makers would have been hard-pressed to have predicted the direction of technology; self-driving cars were a near-fringe idea (Google’s first major steps were in 2005), smartphones had not yet taken off (the iPhone was launched on January 9, 2007) and 3D printing was an industrial process (the first commercial printer came out in 2009). For another thing, these policy-makers were not operating in a vacuum; the rules they were putting in place had to deal with immediate challenges and had to be built on structures and laws that dated to the turn of the millennium.
This shortfall may actually have been a good thing for technology in 2016. Regulations and laws define and fix things, disallowing certain behaviors or requiring others. This can be hard enough to do successfully with well-understood issues, but for nascent technologies or business models it must be exceptionally difficult. Without undue constraints, technologies were able to develop “naturally”. Companies found business models and technical solutions that worked, then built up momentum to emerge at the stage where, today, they are robust enough to be more closely scrutinized and, perhaps, regulated.
So, following a similar pattern, should our 2016 efforts at rule-making focus on our immediate issues and leave the future to, in some sense, sort itself out? Perhaps. The emergence of advanced machine learning or of the Internet of Things means those technologies can’t really be legislated for right now, because we don’t know what they will mean in practical terms for businesses and consumers, criminals and law enforcers, and so on. And yet, on the other hand, the technology of tomorrow is being shaped by the decisions of today. For example, rules currently being considered about data localization or cross-border data flows will shape the future of cloud computing, whilst concerns over privacy or intellectual property will shape big data and machine learning. The wrong choices now could undermine the potential of many technologies and tools.
The answer to whether or not today’s rules are going to be appropriate for 2026 is not, therefore, black and white. We need rules today that reflect technology today, because the old rules aren’t necessarily fit for purpose any more. Equally, we have to acknowledge that rules we create today aren’t always going to last long in the face of technological evolution. This could lead us to conclude we need to have a new way of regulating technology, one that might focus on outcomes for example (and that would be a separate blog), but it could also lead us to conclude that ingenuity and innovation can thrive in the gaps we leave and can even be encouraged by imperfect situations.
Whilst there can be no excuse for making rules that assume the world and technology won’t change over a decade, we also don’t have to constantly second guess our future at the price of having useful rules today. In 2026 we might look back at today with a similar feeling to that we currently experience on looking back at 2006: familiarity, perhaps nostalgia, combined with a sense that things really have moved. This won’t necessarily be a bad thing.
from Paul Nicholas
Monday, January 16, 2017
The unprecedented scale and sophistication of modern cyberthreats, combined with the rapidly disappearing IT perimeter, means that while preventing an attack from becoming a breach is ideal, it is no longer realistic.
Microsoft proactively monitors the threat landscape for emerging threats to help better protect our customers. This involves observing the activities of targeted activity groups across billions of machines; these groups are often the first to introduce new exploits and techniques that are later used by other attackers.
So how can organizations defend against these threats?
Organizations need an approach to security that looks holistically across all critical endpoints, at all stages of a breach—before, during, and after. This means having tools that can not only protect against compromise, but can also detect the early signs of a breach and respond rapidly before it can cause damage to your system.
Windows Defender Advanced Threat Protection is a new post-breach security layer, designed to reduce the time it takes to detect, investigate and respond to advanced attacks. This post-breach layer assumes breach and is designed to complement prevention technologies in the Windows 10 security stack, such as Windows Defender Antivirus, SmartScreen, and various other OS hardening features.
By leveraging deep behavioral sensors coupled with powerful cloud security analytics, Windows Defender ATP offers an unparalleled detection, investigation and response experience. It uses behavioral analytics, proven to detect unknown attacks, and security data from over 1B machines to establish what’s normal. This is then coupled with support from our own industry-leading hunters. Recordings of activity across all endpoints over the last 6 months allow users to go back in time to understand what happened.
Windows 10 has the protection you need, built-in
Windows Defender ATP is built into Windows 10, and provides a comprehensive post-breach solution to help security teams identify suspicious threats on your network that pre-breach solutions might miss.
Windows 10 and Windows Defender Advanced Threat Protection give you the future of cybersecurity NOW. Find out more at Microsoft Secure.
from Microsoft Secure Blog Staff
Saturday, January 14, 2017
I published the following diary on isc.sans.org: “Backup Files Are Good but Can Be Evil“.
Since we started working with computers, we have always heard the same advice: “Make backups!”. Every time you have to change something in a file or an application, first make a backup of the existing resources (code, configuration files, data). But, if not properly managed, backups can be evil and increase the attack surface of your web application… [Read more]
[The post [SANS ISC Diary] Backup Files Are Good but Can Be Evil has been first published on /dev/random]
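To illustrate the diary’s point (my own sketch, not taken from the diary itself): forgotten backup copies often sit next to the original file under predictable names, so an auditor, or an attacker, can simply enumerate the usual variants of a known resource and check whether the web server serves them.

```python
# Sketch: generate common backup-file variants of a known web
# resource (e.g. "index.php") so they can be checked in an audit.
# The suffix list is illustrative, not exhaustive.
BACKUP_SUFFIXES = [".bak", ".old", ".orig", ".save", ".copy", "~"]

def backup_candidates(path):
    """Return likely backup-file names derived from a known path."""
    return [path + suffix for suffix in BACKUP_SUFFIXES]
```

A backup of a script like `index.php.bak` is typically served as plain text instead of being executed, leaking source code and possibly credentials.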
Friday, January 13, 2017
I published the following diary on isc.sans.org: “Who’s Attacking Me?“.
I started to play with a nice reconnaissance tool that could be helpful in many cases – offensive as well as defensive. “IVRE” (“DRUNK” in French) is a tool developed by the CEA, the Alternative Energies and Atomic Energy Commission in France. It’s a network reconnaissance framework that includes… [Read more]
Thursday, January 12, 2017
This post is authored by Gene Burrus, Assistant General Counsel
The hack of the San Francisco transit system and the subsequent hack back by a third party make for a twenty-first century morality tale in some ways. The perpetrator of a ransomware blackmail is given a dose of his/her own medicine, undone by his/her own poor security practices. Painted at a larger scale, however, is the picture equally salutary? Recent accusations of state or state-sponsored hacking during the US Presidential campaign led to threats of retaliation between what are arguably the world’s two preeminent nuclear powers.
At the heart of most thinking about good behavior you are likely to find the concept of consequences for actions, and even the concept of preemptive deterrence of bad actions. Those concepts of consequence and deterrence have not become embedded in our online expectations and behaviors. This may be because cyberspace is still a new “public space” and people are still working out how to behave. It is also likely because cyberspace allows levels of anonymity and remote action unprecedented in the real world. People do things because they think there will be no consequences, no “pay back”. There is certainly an argument to be made, then, for hackers and cybercriminals being subject to payback in some form, if for no other reason than to begin to underpin a behavioral system in cyberspace of “do as you would be done unto”.
Is this, however, the way forward that we should collectively take? There are, after all, existing laws that apply to cybercriminals, and new laws are being brought into existence as both technology and criminality evolve. However, the reality of enforcement is that most cybercriminals will never be caught and operate with near impunity.
Is “retaliation” something individuals or even companies should be able to engage in, if there is a functional legal system and a police force to do it in their place? Vigilantism, mob-justice and corporate extra-judicial actions wouldn’t look any more attractive online than they do in the real world. After all, can the retaliator be certain that the right person has been targeted? And if so, what is a proportionate response? If you hack my social media profile, is it fair for me to erase your bank account?
Furthermore, could “attack back” policies open another potential cause of state-to-state conflict in cyberspace? Certainly that risk might exist if State-Owned Enterprises (SOEs) became involved, as retaliator or retaliated-against. Even carrying out seemingly simple actions against a hacker might inadvertently breach national laws in the target’s jurisdiction, thereby involving “real world” police and state institutions where previously they were not.
On the other hand, there may be ways to ‘hack back’ that fall short of the ‘tit for tat’ retaliation that is commonly thought of, and instead facilitate catching criminals, disrupting their operations, or depriving them of the fruits of their illegal conduct. The challenge is in making cyberspace a less consequence-free realm in which criminal predators can seek victims. A colleague of mine recently mentioned the digital equivalent of “dye packs”: the ability to trace criminals through what they steal might be helpful. Still, for every measure taken by the forces of law and order, a countermeasure can be developed by criminals and others who operate outside the law. This is not an argument for inaction but for the realization that there is unlikely to be a silver bullet to cybercrime through hacking back.
If genuine progress is to be made on these issues, the technology industry, law enforcers, lawyers and concerned society groups will have to consider at least three questions about hack-back technologies and actions. First, explore what is technically feasible. Second, consider what is legal and for whom: will law enforcement or private actors be legally allowed to use certain tools or tactics, and should some laws be changed to accommodate technical innovations that might be used to deter, track or punish criminal activity? And against the backdrop of both of these questions will be the question of which policies and tools would be wise to deploy and would not do more harm than good. The intersection of these three questions may show the way forward on making cyberspace a place where crime doesn’t pay.
from Microsoft Secure Blog Staff
Wednesday, January 11, 2017
Monday, January 9, 2017
This post is authored by Joe Faulhaber, Senior Consultant ECG
The Microsoft Enterprise Cybersecurity Group (ECG) consists of three pillars: Protect, Detect, and Respond. Protection in depth is always the best defense, and being able to respond to incidents and recover is key to business continuity. Solid protection and rapid response capability are tied together by detection and intelligence, and the Enterprise Threat Detection (ETD) service enables detection in depth with global intelligence.
The detection technologies and intelligence data of ETD are brought together by a dedicated global team of cybersecurity analysts compounded by machine analytics. The analyst team merges deep knowledge of Windows and cyber threats with specific understanding of customer environments, becoming a virtual cybersecurity team for the enterprise. They provide in-depth technical knowledge along with reach-back into the vast resources of Microsoft. The ETD analyst team is tightly integrated with all cybersecurity teams in Microsoft, including ECG Global Incident Response and Recovery, the Microsoft Malware Protection Center, Azure Security Center, and the Microsoft Cyber Defense Operations Center. This brings the enterprise unparalleled access to Microsoft’s entire cyber security organization, enabling best-in-class detection, analysis, and actionable intelligence to detect the latest APT and other attacks.
In addition to the analyst team, the ETD service leverages machine analytics built on native Windows features, enabling powerful detection that adversaries find very difficult to avoid. These unique detection capabilities are only part of the ETD story, however; customers also benefit from global ecosystem visibility from the largest malware telemetry system in the world, as well as recommended actions specific to each customer environment from Microsoft threat analysts.
The service includes immediate alerts in the case of detection of threats. If a determined human adversary is suspected, an ETD analyst contacts the customer to further discuss the identified threat details and response steps, including the Microsoft Global Incident Response and Recovery team if required. Regular summary reports are delivered in discussion meetings with ETD analysts that cover actionable intelligence and insights. Additional analysis support is also provided as needed.
Together, these capabilities, alerts and reports provide benefits to enterprises at all levels of cybersecurity sophistication, from those with no dedicated cyber security personnel to enterprises with world-class cybersecurity capabilities.
Components of Enterprise Threat Detection Service
Corporate Error Reporting
ETD leverages Windows Error Reporting to analyze system error reports to determine if malicious code has been run on the system. This powerful technology has been a core Windows operating system component since Windows XP. It has been used extensively by Microsoft and select customers to detect novel, known, and targeted attacks across the threat lifecycle.
ETD also extends error reporting with additional capabilities and attack detection fidelity, even for processes that never generate a Windows error event. And since the feature is built natively into Windows and runs by default, configuring endpoints for ETD is achieved through policy configuration alone.
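Since Corporate Error Reporting is driven by policy, onboarding endpoints amounts to pointing Windows Error Reporting at an internal collection server. As a hedged illustration only, the sketch below generates a `.reg` fragment using the documented `CorporateWerServer` and `CorporateWerUseSSL` WER settings; the server name is a placeholder, and the exact values required for ETD should come from the service’s onboarding guidance:

```python
# Sketch: build a .reg fragment that redirects Windows Error Reporting
# to a corporate collection server. Illustrative only; verify the exact
# policy values against the ETD onboarding documentation.

WER_KEY = r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting"

def corporate_wer_reg(server: str, use_ssl: bool = True) -> str:
    """Return a .reg file body pointing WER at an internal server."""
    lines = [
        "Windows Registry Editor Version 5.00",
        "",
        f"[{WER_KEY}]",
        f'"CorporateWerServer"="{server}"',
        f'"CorporateWerUseSSL"=dword:{1 if use_ssl else 0:08x}',
    ]
    return "\n".join(lines) + "\n"

print(corporate_wer_reg("wer.contoso.example"))  # placeholder server name
```

In practice the same settings would typically be pushed via Group Policy rather than a raw `.reg` import, which is what makes the “policy configuration alone” claim possible.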
When employed alongside the Enhanced Mitigation Experience Toolkit (EMET), ETD can detect attempted exploits at three times the normal detection rate.
Cyber Threat Intelligence (CTI)
Cyber Threat Intelligence is a key component of Microsoft’s commitment to defending Windows and Azure customers. With an ETD subscription, the CTI data is used to provide a view into an enterprise’s security posture and enables discovery and understanding of emerging threat events in the global ecosystem.
Microsoft’s threat intelligence includes information from all Microsoft antimalware products, resulting in a vast global data set from over a billion computers and 86 billion files. It also includes URL intelligence from SmartScreen and Bing, as well as network intelligence and indicators of compromise from the Microsoft Advanced Persistent Threat hunter teams.
Personalized information for enterprises from Microsoft’s Digital Crimes Unit’s (DCU) Cyber Threat Intelligence Program is also included in the ETD data set, which includes sinkhole data from DCU botnet takedown operations.
Coordinating Microsoft Products and Services
Advanced Threat Analytics (ATA)
ATA enables detection across identities in the enterprise; ETD enriches these detections with endpoint information to make them even more powerful and actionable.
Windows Defender Advanced Threat Protection (WD-ATP)
Microsoft has taken the approach used by ETD in previous versions of Windows and perfected it for Windows 10. WD-ATP enables full behavioral monitoring in an enterprise with built-in sensors. ETD analysts have deep understanding of the WD-ATP data stream, and can help manage the comprehensive data to separate commodity malware events from targeted events.
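Separating commodity malware events from targeted ones is essentially a triage decision over the detection stream. The toy sketch below is not the WD-ATP API; the event fields and family list are invented purely to illustrate the kind of filter an analyst applies:

```python
# Illustrative triage sketch (hypothetical fields, not WD-ATP's data model):
# route known commodity detections away from the analyst queue unless they
# show signs of hands-on-keyboard activity such as lateral movement.

COMMODITY_FAMILIES = {"adware", "worm", "trojan-downloader"}  # assumed list

def triage(event: dict) -> str:
    """Classify a detection event as 'commodity' or 'investigate'."""
    family = event.get("family", "").lower()
    if family in COMMODITY_FAMILIES and not event.get("lateral_movement"):
        return "commodity"
    return "investigate"

print(triage({"family": "Adware"}))                              # commodity
print(triage({"family": "unknown", "lateral_movement": True}))   # investigate
```

The value ETD analysts add is precisely in tuning rules like these to each customer environment, so the same detection stream yields far fewer but far more meaningful escalations.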
ETD provides world-class threat detection capabilities, leveraging proprietary technologies and cyber threat data sources that complement any enterprise’s cybersecurity strategy and deployment. Along with custom analysis, the service benefits enterprises at any stage of cybersecurity maturity.
from Microsoft Secure Blog Staff
Saturday, January 7, 2017
I published the following diary on isc.sans.org: “Using Security Tools to Compromize a Network“.
One of our daily tasks is to assess and improve the security of our customers or colleagues. To achieve this, we use security tools (linked to processes). Over time, we all build a personal toolbox with our favourite tools. Yesterday, I read an interesting blog article about extracting saved credentials from a compromised Nessus system. [Read more]
[The post [SANS ISC Diary] Using Security Tools to Compromize a Network has been first published on /dev/random]
Thursday, January 5, 2017
According to the most recent CRN Quarterly Ransomware Report, malicious infrastructure attacks increased 3,500% in 2016, and that figure is expected to grow in 2017. One important way that organizations can help protect against losses in a ransomware attack is to keep a backup of business-critical information in case other defenses fail. Since ransomware attackers have invested heavily in neutralizing backup applications and operating system features like volume shadow copy, it is critical to have backups that are inaccessible to a malicious attacker.
The start of a new year is the perfect time to reassess your current backup strategy and policies, and the impact to your business if your backup data is compromised. As security remains a high priority for our customers, Operations Management Suite (OMS) continues its commitment to offering holistic security capabilities. To demonstrate our continued investments in OMS, Azure Backup released a set of new features to protect your on-premises-to-cloud backups from ransomware.
Your backups need to be protected from sophisticated bot and malware attacks. Permanent loss of data can have significant cost and time implications to your business. To help protect against this, Azure Backup guards against malicious attacks through deeper security, faster notifications, and extended recoverability.
For deeper security, only users with valid Azure credentials receive a security PIN generated by the Azure portal, which is required to perform critical operations on backup data. If a critical backup operation such as “delete backup data” is authorized, a notification is sent immediately so you can engage and minimize the impact to your business. If a hacker does delete backup data, Azure Backup will store the deleted backup data for up to 14 days after deletion.
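The 14-day retention window is the safety net here: even after a successful malicious deletion, the recovery points remain restorable for two weeks. As a minimal sketch of that window (an illustration of the stated policy, not Azure Backup’s actual API):

```python
# Sketch: check whether a deleted recovery point is still within the
# 14-day retention window described in the post. Illustrative only.
from datetime import datetime, timedelta

RETENTION_DAYS = 14  # per the post: deleted backup data kept up to 14 days

def is_recoverable(deleted_at: datetime, now: datetime) -> bool:
    """True if the deleted backup data can still be restored."""
    return now - deleted_at <= timedelta(days=RETENTION_DAYS)

deleted = datetime(2017, 1, 5)
print(is_recoverable(deleted, datetime(2017, 1, 12)))  # day 7: True
print(is_recoverable(deleted, datetime(2017, 1, 25)))  # day 20: False
```

The practical takeaway is that an attack discovered within that window is survivable, which is why the alerting described above matters: the notification starts the clock for you.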
To ensure this year is your data’s most secure year yet, make revisiting your backup policy one of your new year’s resolutions.
If you are an IT professional, you can explore the new Azure Backup capabilities by creating a free Microsoft Operations Management Suite account.
Finally, to learn more about ransomware and strategies you can employ to protect against it, watch our webinar Protecting Against Ransomware Threats.
from Microsoft Secure Blog Staff